Doing Correct Calculations with Binary Star Systems
Question: I've been experimenting with binary star system calculations but came across some issues. I can correctly calculate the period from the semi-major axis of the Earth's orbit around the Sun, using data from Wikipedia: a = (1.521e11 + 1.47095e11) / 2 <- average radius of Earth's orbit (meters); μ = G * M = 6.67408e-11 * 1.98847e30 <- gravitational constant * mass of the Sun (kg). I got T ≈ 31557884 seconds, which is about one year. Now my issue is that I've been searching for a formula that calculates the period etc. of a binary system where two masses (such as stars) orbit around each other. So I tried essentially the same formula, using data for Procyon A and B (a binary system): a = semi-major axis = 4.3 AU; M1 = Procyon A mass = 1.499 solar masses; M2 = Procyon B mass = 0.602 solar masses. These give me a T (period) of 4731227. Wikipedia says 40.82 years. What am I doing wrong? Is it the data? Is it the formula? My comprehension of the question? Thank you for the help! Answer: The semi-major axis is 4.3 arcseconds, not 4.3 au. At a distance of 3.51 parsecs, this corresponds to 15.09 au. Using your formula I get 40.3 years, which is close enough given the 2-significant-figure precision of the semi-major axis.
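The formula in question is presumably Kepler's third law, T = 2π√(a³/μ) with μ = G(M₁ + M₂). A minimal sketch reproducing both numbers (the constants are standard values, close to those quoted in the question):

```python
import math

G = 6.67408e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.98847e30       # solar mass, kg
AU = 1.496e11            # astronomical unit, m
YEAR = 3.156e7           # seconds per year

def orbital_period(a_m, m_total_kg):
    """Kepler's third law: T = 2*pi*sqrt(a^3 / (G*M_total))."""
    return 2 * math.pi * math.sqrt(a_m**3 / (G * m_total_kg))

# Earth: semi-major axis from the aphelion/perihelion average in the question
a_earth = (1.521e11 + 1.47095e11) / 2
t_earth = orbital_period(a_earth, M_SUN)

# Procyon A+B: 4.3 arcsec at 3.51 pc -> 15.09 au; total mass 2.101 M_sun
a_procyon = 15.09 * AU
t_procyon = orbital_period(a_procyon, (1.499 + 0.602) * M_SUN)

print(t_earth / YEAR, t_procyon / YEAR)   # ~1.0 and ~40 years
```

With the corrected semi-major axis the same formula recovers the ~40-year period from Wikipedia.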
{ "domain": "astronomy.stackexchange", "id": 3296, "tags": "orbit, binary-star" }
How is the CMB anisotropy linked to the existence of cold dark matter and dark energy?
Question: After the data from the cosmic microwave background has been collected by WMAP or Planck, what type of analysis needs to be conducted in order to deduce the cold dark matter density and the distribution of matter in the universe? In other words, how do the CMB anisotropy measurements give us evidence for the existence of dark matter/energy? (Any further explanation of the analysis of the data is also welcome.) Answer: The physics of the CMB is remarkably simple. For a good introduction, I would recommend Wayne Hu's webpages, which talk about all of this and more. Planck, WMAP, and other CMB experiments measure the brightness of a radiation field that is predicted in an expanding Universe. The intensity of the radiation is that of a black body radiating at a certain temperature, so the signal is quantified as a temperature measurement. The introduction is here: http://background.uchicago.edu/~whu/beginners/introduction.html A summary: In the simple model of the CMB, there are essentially three things that matter early on: "dark" matter (DM), "regular" matter (i.e. matter that interacts with itself and with photons, aka baryons), and photons (light). The Universe, thanks to inflation, starts off very, but not perfectly, smooth. There are little density enhancements against this smooth background. There is more DM than baryons, so the dark matter in the overdense regions collapses, forming dense regions, and the baryons follow along for the ride. At some point, the regular matter starts to heat up and provides a pressure that supports it against the dark matter's gravitational collapse. In effect, this causes an oscillatory "bounce" in some of these overdense regions. At the same time, the Universe is expanding and cooling. At some point, the Universe cools to the point where protons and electrons combine to form hydrogen (recombination). 
At this point, the photons are no longer bound to the matter, and they stream directly across the Universe (more or less) until they enter a telescope. Since the CMB is from this instant of recombination, there is a sharp cut-off to the oscillations early on. The strength and size of the oscillations are basically what Planck measures. The ratio of dark matter to regular matter is set by the relative heights of the oscillations. (More regular matter means a bigger bounce; more dark matter means more compression.) The evidence for Dark Energy is a bit more complicated. However, at the most basic level, the simple model outlined above predicts a very specific size for the regions that have time to collapse between the Big Bang and recombination. By measuring the apparent size of these regions on the sky, you can estimate the curvature of space between the telescope and the CMB. If the Universe isn't curved, the energy density in the Universe is equal to the critical density (i.e. there's just enough stuff to cause the expanding Universe to coast to zero expansion at infinity). The CMB shows that matter accounts for 30% of the critical density and that the total is 100%. Therefore "something else" accounts for the remaining 70%, and Dark Energy is a convenient explanation (although not the only one). Planck's measurement is a little bit more complicated. As Planck has better resolution than WMAP, it's able to tell a little bit more. This is because at the large scales probed by all-sky surveys, the physics I mentioned above dominates; as you get to smaller scales (small patches on the sky), more "local" physics takes over. As an example, galaxies in between us and the CMB can gravitationally lens the CMB light, which adds information about the large-scale structure along the line of sight and so provides some insight into Dark Energy. Galaxy clusters are an example of this: their number density depends a lot on Dark Energy because they take such a long time to form. 
Hope this helps.
{ "domain": "physics.stackexchange", "id": 7310, "tags": "cosmology, dark-matter, dark-energy, cosmic-microwave-background" }
Why is the equivalence principle so important to general relativity?
Question: In its simplest form, the equivalence principle states that the inertial mass and the gravitational mass should be the same. This is easy to understand. But why is it so important to the formulation of General Relativity? To be more specific, I don't understand how the gravitational field equation: $$G_{\mu\nu}+\Lambda g_{\mu\nu}=\frac{8\pi G}{c^4}T_{\mu\nu}$$ can be derived from this principle. Answer: A derivation of Einstein's equation isn't why the equivalence principle is central to GR. The reason the equivalence principle is central to GR is that you can represent the gravitational field with a metric tensor at all--you can replace a force equation with a geodesic equation for a test mass precisely because the geodesic that the test mass follows (or the "acceleration" felt by a Newtonian mass) is independent of the mass of that test particle$^{1}$. The equivalence principle, however, only establishes that one can represent gravity with a metric tensor. There are a great many other so-called "metric theories of gravity" that obey the equivalence principle but are not general relativity--amongst other things, they differ in the field equation for the metric tensor, or have extra fields in addition to the metric. The most famous of these is the Brans-Dicke theory, which treats Newton's constant as a scalar field coupled to the metric tensor. Most alternative metric theories have either been experimentally ruled out, or have had their additional fields constrained to the point where their values are consistent with zero (for instance, Brans-Dicke theory has a parameter $\omega$, and tends to GR as $\frac{1}{\omega}\rightarrow 0$; current data says that $\omega > 4000$, or some similar number). 
$^{1}$Note that this is generally only true if the mass of the test particle is "small" compared to the local curvature of the spacetime, and if its motion is slow enough to not produce gravitational radiation comparable to its energy. Either of these effects will cause the test mass to perturb the background spacetime, and those effects will both be mass dependent and cause the test mass to not follow a geodesic of the background spacetime. Both of these approximations are true (to great precision, at least) of all of the planets, asteroids and comets orbiting the Sun, amongst many other things.
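The mass-independence the answer leans on can be made explicit. For a Newtonian test mass, $m_i \ddot{x} = F_{\rm grav} \propto m_g$, so when $m_i = m_g$ the mass cancels; in GR the same statement is that every test particle, regardless of mass, follows the geodesic equation

$$\frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} = 0,$$

in which no mass appears: the trajectory is fixed by the geometry (the Christoffel symbols $\Gamma^\mu_{\alpha\beta}$, built from the metric) alone.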
{ "domain": "physics.stackexchange", "id": 13685, "tags": "general-relativity, equivalence-principle" }
Normal force of loop-the-loop at the side of the circle
Question: In the loop-the-loop ride a car goes around a vertical, circular loop at a constant speed. The car has a mass of 230 kg and moves with a speed of 300 m/s. The loop-the-loop has a radius R = 20 m. What would then be the magnitude of the normal force on the car when it is at the side of the circle, moving upward? I tried to solve this problem by: gravitational force = $mg$ = 230*9.8 (downward); centripetal force = $mv^2/r$ = 230*300^2/20 (toward the centre of the circle, which is horizontal at this point); and by vector addition/subtraction, the magnitude of the normal force would be $\sqrt {(mg)^2+(mv^2/r)^2}$. But the answer I got is wrong, so this approach must be wrong... What did I do wrong here? Answer: If I'm understanding your problem correctly, then the normal force is the centripetal force: $ F_N = \frac{mv^2}{r} $ In other words, the normal force from the rail causes the centripetal acceleration towards the center of the circle. There are, as I understand it, no other forces acting in the normal direction. Remember that you are only supposed to consider forces in the normal direction: $ \sum F_N=ma_N $ The gravitational force is perpendicular to the normal force at this position and so has no effect in the normal direction.
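Plugging the numbers from the problem into the answer's formula gives a minimal check:

```python
m = 230.0      # kg
v = 300.0      # m/s (as stated in the problem)
r = 20.0       # m

# At the side of the loop the centre is horizontal, so the normal force
# alone supplies the centripetal force; gravity acts tangentially there
# and does not enter the normal-direction equation.
F_N = m * v**2 / r
print(F_N)     # 1035000.0 N, i.e. about 1.04e6 N
```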
{ "domain": "physics.stackexchange", "id": 17060, "tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics, centripetal-force" }
What would the electromagnetic field of a massless electron look like?
Question: The Standard Model gives non-zero mass to the electron via its coupling to the Higgs field. Issues of renormalizability aside, this is fundamentally unrelated to the fact that the electron couples to the EM field. However, if the Higgs mechanism did not operate - that is, if there were no vacuum symmetry breaking - the electron field would have no effective mass term. In QFT perturbation theory, this model offers no special difficulty. My question is, what is the classical limit of this theory, if it has one? Does the electron acquire a purely EM mass? If the low-energy renormalized mass is set to zero (is there an obstacle to doing that?), what do the classical field configurations look like? My puzzlement is related to the classical description of the EM field of a relativistic charged particle - namely, that it becomes "flattened" in the direction of motion, just like a Lorentz-contracted sphere. That is, the field is weaker than for a motionless particle in the longitudinal direction, and stronger in the transverse directions. The limiting case is that the field becomes concentrated on a 2-d surface transverse to the particle location, where it has infinite strength - obviously an unphysical situation. So what actually happens? Answer: This has some bearing on the question in your title: http://arXiv.org/abs/hep-th/9112020 There is an analogous problem in general relativity; the gravitational field generated by a massless particle is called the Aichelburg-Sexl metric. A couple more comments: Perturbation theory in the presence of massless charged fields is not unproblematic; there are infrared divergences one has to deal with, and these are much more problematic than soft-photon divergences. I think Weinberg's field theory book has a discussion of this. Massless electrons would have an extra chiral symmetry, which should forbid the appearance of a mass term. 
But, I don't think these quantum field theory issues are directly related to your question, which is essentially one in classical electrodynamics.
{ "domain": "physics.stackexchange", "id": 301, "tags": "electromagnetism, quantum-field-theory" }
How to implement a multi-threaded ROS node with callbacks that are not subscribers?
Question: I would like to implement a node which listens to a TCP socket that receives asynchronous data, converts it into messages, and publishes the latest of these messages at a fixed rate on a topic. The best solution in my eyes would be to have a dedicated thread (spinner) that always reads the latest TCP data and saves it to a variable; it stalls when there is no message. The main spinner would read this variable, convert it into a message, publish it, and sleep for the predefined loop-rate time. At the moment I can only find ways to add subscribers to a callback queue, but not generic functions such as the envisioned TCP listener. Therefore my questions: How can I assign a function (the TCP listener) to a callback queue if it is not a subscriber? How can I avoid concurrent modification of the data communicated between the two threads? I.e. how do I guarantee that the TCP thread is not concurrently writing to the same memory as the publisher thread is reading from? Would you see a more elegant solution to my problem? I want to make sure that the data on the TCP socket is read as soon as possible so as not to interrupt the program that is sending the data to the socket, AND I want to make sure that the topic publishes only the latest data, but at a fixed interval (i.e. data is dropped if it arrives faster than the publishing rate). Originally posted by John Waffle on ROS Answers with karma: 13 on 2015-08-05 Post score: 1 Original comments Comment by Erwan R. on 2015-08-05: Why can't you loop on your TCP listening and publish on the condition that there is new data or that enough time has elapsed? You just need a publisher that will be invoked potentially as fast as the TCP listener, or at a lower rate, depending on the condition set. Comment by John Waffle on 2015-08-05: This is my fallback plan, because it does not guarantee that the publishing rate is constant. If the TCP socket is still busy when the next topic message is due, it will be delayed. 
In the case of a separate thread this can be avoided by only publishing the latest full data block the node received. Answer: This is not something to be done with ROS internals only. Have a look at the Boost::Thread library, which provides excellent tools for doing this. (Obviously, there are many other tools available.) To your questions: 1) I guess this cannot be done the way you propose it here. Spawn a new thread (using e.g. Boost) and read the TCP socket therein. 2) This is what boost::mutex and boost::lock are for. 3) I guess the solution you proposed is fine. To achieve this, the thread reading the TCP socket just needs to be running faster than the one publishing the data. If you then write to a (well-protected, see question 2) member variable, you can use this variable to sync the data and only publish the latest one. As an example of how to do this (even though they have it basically the other way round, publishing data from a thread), see the old slam_karto node (again, there are probably many other examples out there or directly in Boost). (Throughout, I assumed you'd be using C++ as your language of choice. If you plan to do this using Python, see the Python threading module.) Originally posted by mgruhler with karma: 12390 on 2015-08-05 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by John Waffle on 2015-08-05: Thank you! Yes, I am using C++. Sorry for not mentioning.
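The pattern the answer describes (reader thread + mutex-protected "latest sample" variable + fixed-rate publisher) can be sketched without ROS. Since the answer notes the same design works with the Python threading module, here is a self-contained Python sketch: the blocking TCP read is stood in for by a counter loop, and the ROS publisher by a plain list.

```python
import threading
import time

class LatestValue:
    """Thread-safe buffer that keeps only the newest sample."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def put(self, value):        # called by the TCP-reader thread
        with self._lock:
            self._value = value

    def get(self):               # called by the fixed-rate publisher
        with self._lock:
            return self._value

buf = LatestValue()
published = []

def reader():
    # Stand-in for the blocking TCP read loop; in the real node this
    # would be socket.recv() followed by parsing into a message.
    for i in range(100):
        buf.put(i)
        time.sleep(0.001)

t = threading.Thread(target=reader)
t.start()
for _ in range(5):               # stand-in for the rate-limited publish loop
    time.sleep(0.02)
    published.append(buf.get())  # samples arriving faster than this are dropped
t.join()
print(buf.get())                 # the newest sample: 99
```

The lock plays the role of boost::mutex in the accepted answer; because only the newest value is stored, data arriving faster than the publishing rate is dropped, exactly as the question requires.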
{ "domain": "robotics.stackexchange", "id": 22379, "tags": "ros, node, multi-thread" }
Confusion regarding plane of the paper
Question: I find terms like "into the plane of the paper" and "out of the plane of the paper". How am I supposed to understand that? Answer: The paper can be approximated as a two-dimensional object (a plane), so the surface of the paper represents the plane. The area of that plane is a vector that points normal to the surface, i.e. straight out of the page toward you. Any vector that "points into the paper" is antiparallel to this area vector: it points from you down through the page, away from you. Any vector that "points out of the paper" is parallel to the area vector: it points from the page straight up toward you. Only one surface is considered at a time, namely the one facing you, so the area vector always points toward you.
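A concrete way to check the convention, assuming (as is standard) that the page lies in the x-y plane with x to the right and y up, so "out of the page" is the +z direction:

```python
def cross(a, b):
    """Right-handed cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

x_hat = (1, 0, 0)   # rightward, lying in the page
y_hat = (0, 1, 0)   # upward, lying in the page

print(cross(x_hat, y_hat))   # (0, 0, 1): +z, out of the page, toward you
print(cross(y_hat, x_hat))   # (0, 0, -1): -z, into the page, away from you
```

This is the same right-hand rule used for torques and magnetic forces: curl the fingers from the first vector to the second, and the thumb gives "into" or "out of" the page.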
{ "domain": "physics.stackexchange", "id": 36628, "tags": "homework-and-exercises, terminology, vectors" }
How to plot LIDAR data in RVIZ
Question: Right now what I have are some csv lidar scan files. I am also using Python to extract the csv files, and I am able to plot all the cloud data in Python. Can RViz read lidar data directly? Is there any tutorial that I can check? I really appreciate the help from anyone who can provide any information. Originally posted by chenbao on ROS Answers with karma: 1 on 2016-11-26 Post score: 0 Answer: Can RViz read lidar data directly? No. The 'only' thing RViz does is visualise dataflows that publishers are already publishing. If you want to visualise your lidar data, you'll have to write a node that reads in the CSVs, transforms the data into the appropriate ROS msg (typically a sensor_msgs/LaserScan or sensor_msgs/PointCloud2) and then publishes those. In RViz, add the corresponding display (for laser scans or pointclouds), select your topic and you should see the scans visualised. Another approach could be to convert the CSV files into rosbags (using the Python API, for instance) and use rosbag play .. to publish everything. That way you wouldn't have to write a node, but only a (small?) conversion script. You would still use the same messages, but only write them out to the bag file. Originally posted by gvdhoorn with karma: 86574 on 2016-11-27 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by chenbao on 2016-11-29: I really appreciate your help. I started the RViz online tutorial recently; they use C++ to create markers or graphs. If you want to display the marker or graph in RViz, you need to publish the message first, and I think you also need to set the frame_id and marker topic. But how to do that in Python? Comment by chenbao on 2016-11-29: For example, the way to define the frame_id is to use the C++ command points.header.frame_id = "/my_frame" and to set the marker topic using points.ns = "points_and_lines". In this case the frame id is called /my_frame and you can enter it in the RViz fixed frame field. 
Comment by gvdhoorn on 2016-11-30: I'm not sure I'd use Markers for this. The rosbag approach seems like the least amount of work, unless you are only interested in a static visualisation (of a single scan).
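A sketch of the node the answer describes, in Python. The CSV column layout (`angle_deg,range_m`, one row per beam), the topic name, and the frame id are all hypothetical; only the parsing half runs outside a ROS environment, so the rospy publishing half is shown as comments.

```python
import csv
import io

def load_scan_ranges(csv_text):
    """Parse one lidar scan from CSV text.

    Assumed (hypothetical) layout: one row per beam, 'angle_deg,range_m'.
    Returns the ranges in row order, ready to assign to
    sensor_msgs/LaserScan.ranges.
    """
    rows = list(csv.reader(io.StringIO(csv_text)))
    return [float(row[1]) for row in rows if row]

sample = "0.0,1.50\n0.5,1.52\n1.0,1.49\n"
ranges = load_scan_ranges(sample)
print(ranges)   # [1.5, 1.52, 1.49]

# Publishing side (sketch only; requires a ROS environment):
# import math, rospy
# from sensor_msgs.msg import LaserScan
# pub = rospy.Publisher('scan', LaserScan, queue_size=1)
# msg = LaserScan()
# msg.header.frame_id = 'laser'   # must match the frame selected in RViz
# msg.angle_min = 0.0
# msg.angle_increment = math.radians(0.5)
# msg.angle_max = msg.angle_increment * (len(ranges) - 1)
# msg.ranges = ranges
# pub.publish(msg)
```

With the LaserScan display added in RViz and its topic set to `scan`, each published message appears as one scan.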
{ "domain": "robotics.stackexchange", "id": 26338, "tags": "lidar, rviz" }
Represent bonds spanning a unit cell boundary plane
Question: Is there a chemical input format that is- Supported by Open Babel (and hence supports visualization in avogadro??) Can represent a bond connecting an atom to its partner in a neighbouring unit cell, for crystals/systems with periodic boundary conditions Follow up (added later): I could not understand how cif does it, or cml. But it would be nice if I could get rid of the symmetry folds in the markup given below. Here is the cif file. The cml output using babel is -- <?xml version="1.0"?> <molecule id="Calcium titanate" xmlns="http://www.xml-cml.org/schema"> <crystal> <scalar title="a" units="units:angstrom">5.380000</scalar> <scalar title="b" units="units:angstrom">5.440000</scalar> <scalar title="c" units="units:angstrom">7.639000</scalar> <scalar title="alpha" units="units:degree">90.000003</scalar> <scalar title="beta" units="units:degree">90.000003</scalar> <scalar title="gamma" units="units:degree">90.000003</scalar> <symmetry spaceGroup="-P 2c 2ab"> <transform3>1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1</transform3> <transform3>1 0 0 0.5 0 -1 0 0.5 0 0 -1 0 0 0 0 1</transform3> <transform3>-1 0 0 0 0 -1 0 0 0 0 1 0.5 0 0 0 1</transform3> <transform3>-1 0 0 0.5 0 1 0 0.5 0 0 -1 0.5 0 0 0 1</transform3> <transform3>-1 0 0 0 0 -1 0 0 0 0 -1 0 0 0 0 1</transform3> <transform3>-1 0 0 0.5 0 1 0 0.5 0 0 1 0 0 0 0 1</transform3> <transform3>1 0 0 0 0 1 0 0 0 0 -1 0.5 0 0 0 1</transform3> <transform3>1 0 0 0.5 0 -1 0 0.5 0 0 1 0.5 0 0 0 1</transform3> </symmetry> </crystal> <atomArray> <atom id="a1" elementType="Ti" formalCharge="4" xFract="0.000000" yFract="0.500000" zFract="0.000000"/> <atom id="a2" elementType="Ca" formalCharge="2" xFract="0.006480" yFract="0.035600" zFract="0.250000"/> <atom id="a3" elementType="O" formalCharge="-2" xFract="0.571100" yFract="-0.016100" zFract="0.250000"/> <atom id="a4" elementType="O" formalCharge="-2" xFract="0.289700" yFract="0.288800" zFract="0.037300"/> </atomArray> <bondArray> <bond atomRefs2="a1 a4" order="1"/> <bond 
atomRefs2="a4 a2" order="1"/> </bondArray> </molecule> Answer: As the developer of both Open Babel and Avogadro, I can say the answer is "not yet." The main issue isn't the file format; it's that Avogadro at the moment doesn't have support for showing bonds across a unit cell boundary. I believe there was a patch for that, but it was too slow for typical use: https://github.com/dlonie/avogadro/commits/ENH_intercell_bonds
{ "domain": "chemistry.stackexchange", "id": 1852, "tags": "crystal-structure, software" }
Diff_drive_controller not subscribing to /cmd_vel
Question: Hi, my team is working on a differential drive robot and trying to decide if the diff_drive_controller is a good fit for our project. We have been borrowing and adapting code from projects like My_ROS_mobile_robot, ros_control_boilerplate, and the hardware_interface tutorial. Our problem right now is that when we launch all the nodes that we believe we need, there is no subscription to /cmd_vel as we would expect, given diff_drive_controller's documentation. I'll leave some snippets of our code below. Here is a link to our full repository, with relevant code to this problem located in the hw_interface and launch directories. Here is our relevant code: diff_drive.launch: <launch> <!-- Hardware Interface --> <node name='hardware_interface' type='hw_interface_node' pkg='hw_inter'/> <!-- URDF --> <arg name="urdf_file" default="$(find xacro)/xacro --inorder '$(find rover_4_core)/description/urdf/rover.urdf'" /> <param name="robot_description" command="$(arg urdf_file)" /> <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" /> <!-- Load controller config --> <rosparam command="load" file="$(find rover_4_core)/hw_interface/config/common.yaml"/> <rosparam command="load" file="$(find rover_4_core)/hw_interface/config/control.yaml"/> <node name="controller_spawner" pkg="controller_manager" type="spawner" output="screen" ns="/" args="mobile_base_controller joint_state_controller leftWheel_effort_controller rightWheel_effort_controller"/> </launch> control.yaml: my_robot: motor_commands: pwm_update_frequency: 50.0 cmd_vel_topic: left: "/cmd_vel" right: "/cmd_vel" # Publish all joint states ----------------------------------- # Creates the /joint_states topic necessary in ROS joint_state_controller: type: joint_state_controller/JointStateController publish_rate: 50 # Effort Controllers --------------------------------------- leftWheel_effort_controller: type: effort_controllers/JointEffortController joint: lwheel_to_base pid: 
{p: 100.0, i: 0.1, d: 10.0} #pid: {p: 50.0, i: 0.1, d: 0.0} rightWheel_effort_controller: type: effort_controllers/JointEffortController joint: rwheel_to_base pid: {p: 100.0, i: 0.1, d: 10.0} #pid: {p: 50.0, i: 0.1, d: 0.0} common.yaml: mobile_base_controller: type : "diff_drive_controller/DiffDriveController" left_wheel : 'left_wheel' right_wheel : 'right_wheel' publish_rate: 50.0 # default: 50 pose_covariance_diagonal : [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 1000.0] twist_covariance_diagonal: [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 1000.0] # Wheel separation and diameter. These are both optional. # diff_drive_controller will attempt to read either one or both from the # URDF if not specified as a parameter wheel_separation : 0.026 # meters wheel_radius : 0.0045 # meters # Wheel separation and radius multipliers wheel_separation_multiplier: 1.0 # default: 1.0 wheel_radius_multiplier : 1.0 # default: 1.0 # Velocity commands timeout [s], default 0.5 cmd_vel_timeout: 0.25 # Base frame_id base_frame_id: base_link #default: base_link # Velocity and acceleration limits # Whenever a min_* is unspecified, default to -max_* linear: x: has_velocity_limits : true max_velocity : 1.15 #1.0 # m/s min_velocity : -1.15 #-0.5 # m/s has_acceleration_limits: true max_acceleration : 5.0 #0.8 # m/s^2 min_acceleration : -1.0 #-0.4 # m/s^2 has_jerk_limits : true max_jerk : 5.0 # m/s^3 angular: z: has_velocity_limits : true max_velocity : 50.0 #1.7 # rad/s has_acceleration_limits: true max_acceleration : 1.5 # rad/s^2 has_jerk_limits : true max_jerk : 2.5 # rad/s^3 #Publish to TF directly or not enable_odom_tf: true #Name of frame to publish odometry in odom_frame_id: odom # Publish the velocity command to be executed. # It is to monitor the effect of limiters on the controller input. publish_cmd: true Finally here's a link to our hardware interface node, because it's too long to include here. We appreciate your time and help! 
EDIT: console output of diff_drive.launch $ roslaunch rover_4_core diff_drive.launch ... logging to /home/nate/.ros/log/4f103afa-5e62-11ea-8022-b827eb2d9e86/roslaunch-natebook-9951.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. xacro: in-order processing became default in ROS Melodic. You can drop the option. started roslaunch server http://172.20.171.238:38857/ SUMMARY ======== PARAMETERS * /enable_odom_tf: True * /mobile_base_controller/angular/z/has_acceleration_limits: True * /mobile_base_controller/angular/z/has_jerk_limits: True * /mobile_base_controller/angular/z/has_velocity_limits: True * /mobile_base_controller/angular/z/max_acceleration: 1.5 * /mobile_base_controller/angular/z/max_jerk: 2.5 * /mobile_base_controller/angular/z/max_velocity: 50.0 * /mobile_base_controller/base_frame_id: base_link * /mobile_base_controller/cmd_vel_timeout: 0.25 * /mobile_base_controller/left_wheel: left_wheel * /mobile_base_controller/linear/x/has_acceleration_limits: True * /mobile_base_controller/linear/x/has_jerk_limits: True * /mobile_base_controller/linear/x/has_velocity_limits: True * /mobile_base_controller/linear/x/max_acceleration: 5.0 * /mobile_base_controller/linear/x/max_jerk: 5.0 * /mobile_base_controller/linear/x/max_velocity: 1.15 * /mobile_base_controller/linear/x/min_acceleration: -1.0 * /mobile_base_controller/linear/x/min_velocity: -1.15 * /mobile_base_controller/pose_covariance_diagonal: [0.001, 0.001, 10... * /mobile_base_controller/publish_rate: 50.0 * /mobile_base_controller/right_wheel: right_wheel * /mobile_base_controller/twist_covariance_diagonal: [0.001, 0.001, 10... * /mobile_base_controller/type: diff_drive_contro... 
* /mobile_base_controller/wheel_radius: 0.0045 * /mobile_base_controller/wheel_radius_multiplier: 1.0 * /mobile_base_controller/wheel_separation: 0.026 * /mobile_base_controller/wheel_separation_multiplier: 1.0 * /my_robot/joint_state_controller/publish_rate: 50 * /my_robot/joint_state_controller/type: joint_state_contr... * /my_robot/leftWheel_effort_controller/joint: lwheel_to_base * /my_robot/leftWheel_effort_controller/pid/d: 10.0 * /my_robot/leftWheel_effort_controller/pid/i: 0.1 * /my_robot/leftWheel_effort_controller/pid/p: 100.0 * /my_robot/leftWheel_effort_controller/type: effort_controller... * /my_robot/motor_commands/cmd_vel_topic/left: /cmd_vel * /my_robot/motor_commands/cmd_vel_topic/right: /cmd_vel * /my_robot/motor_commands/pwm_update_frequency: 50.0 * /my_robot/rightWheel_effort_controller/joint: rwheel_to_base * /my_robot/rightWheel_effort_controller/pid/d: 10.0 * /my_robot/rightWheel_effort_controller/pid/i: 0.1 * /my_robot/rightWheel_effort_controller/pid/p: 100.0 * /my_robot/rightWheel_effort_controller/type: effort_controller... * /odom_frame_id: odom * /publish_cmd: True * /robot_description: <?xml version="1.... 
* /rosdistro: melodic * /rosversion: 1.14.3 NODES / controller_spawner (controller_manager/spawner) hardware_interface (rover_4_core/hw_interface_node2) robot_state_publisher (robot_state_publisher/robot_state_publisher) ROS_MASTER_URI=http://robot.dyn.brandeis.edu:11311 process[hardware_interface-1]: started with pid [9965] process[robot_state_publisher-2]: started with pid [9966] process[controller_spawner-3]: started with pid [9967] [INFO] [1583358955.344833]: Controller Spawner: Waiting for service controller_manager/load_controller [INFO] [1583358955.386302]: Controller Spawner: Waiting for service controller_manager/switch_controller [INFO] [1583358955.419234]: Controller Spawner: Waiting for service controller_manager/unload_controller [INFO] [1583358955.452522]: Loading controller: mobile_base_controller EDIT 2: The output of rosnode info hardware_interface while everything is running is as follows: $ rosnode info hardware_interface -------------------------------------------------------------------------------- Node [/hardware_interface] Publications: * /left_wheel_vel [std_msgs/Float32] * /right_wheel_vel [std_msgs/Float32] * /rosout [rosgraph_msgs/Log] Subscriptions: * /encoder_left [std_msgs/Int64] * /encoder_right [std_msgs/Int64] Services: * /controller_manager/list_controller_types * /controller_manager/list_controllers * /controller_manager/load_controller * /controller_manager/reload_controller_libraries * /controller_manager/switch_controller * /controller_manager/unload_controller * /hardware_interface/get_loggers * /hardware_interface/set_logger_level contacting node http://172.20.171.238:38647/ ... 
Pid: 6911 Connections: * topic: /rosout * to: /rosout * direction: outbound * transport: TCPROS * topic: /left_wheel_vel * to: /serial_node * direction: outbound * transport: TCPROS * topic: /right_wheel_vel * to: /serial_node * direction: outbound * transport: TCPROS * topic: /encoder_left * to: /serial_node (http://robot:41185/) * direction: inbound * transport: TCPROS * topic: /encoder_right * to: /serial_node (http://robot:41185/) * direction: inbound * transport: TCPROS Originally posted by ndimick on ROS Answers with karma: 23 on 2020-03-04 Post score: 1 Original comments Comment by gvdhoorn on 2020-03-04: Your .launch file suggests you are attempting to start all of these controllers together: mobile_base_controller joint_state_controller leftWheel_effort_controller rightWheel_effort_controller. That cannot work, as the effort controllers will try to claim the same resources as your mobile_base_controller. Can you show the output on the console after you roslaunch [..] diff_drive.launch? Comment by ndimick on 2020-03-04: Hi, @gvdhoorn, thanks for your help! I've edited the original question to include the console output at the bottom of the question. Comment by gvdhoorn on 2020-03-05: If that's all there is then I'm not sure everything is being started. Can you try launching using roslaunch --screen rover_4_core diff_drive.launch Is this one machine btw? Because I see this in your output: ROS_MASTER_URI=http://robot.dyn.brandeis.edu:11311 If this is multi-machine then please make sure DNS is working properly (in both ways, so forward and reverse lookup). Comment by ndimick on 2020-03-05: launching using roslaunch --screen rover_4_core diff_drive.launch produced the same console output as before. No new or additional lines. Yes we are on a multi-machine setup, launching diff_drive.launch from a remote pc, running roscore on a raspberry pi attached to our hardware. We tested DNS lookups with dig and it is working properly both ways from both machines. 
ROS topics are also communicating between the two without issue. Comment by pitosalas on 2020-03-09: @gvdhoorn I am working with @ndimick. We're wondering if you (or anyone else) has further insight into the problem above and/or whether you need some more information? Thanks Comment by gvdhoorn on 2020-03-09: I find the console log you show suspicious. I'd expect a line mentioning something like "started controller .." there as well. Try listing the controllers loaded and ascertaining their status. Also: what is the output of rosnode info hardware_interface after you've started everything? Comment by ndimick on 2020-03-09: Should the controller start automatically after it is spawned or does it have to be manually started inside of a node? Comment by gvdhoorn on 2020-03-10: Please answer my other questions as well. Comment by ndimick on 2020-03-10: I've added the output of rosnode info hardware_interface to the original post. It is at the very bottom of the post. Additionally, rosservice call /controller_manager/list_controllers "{}" and rosrun controller_manager controller_manager list both produce no output. I hope that helps. Comment by gvdhoorn on 2020-03-10: According to your comment, no controllers are loaded into your hardware_interface. That would explain why you don't see the cmd_vel subscription. Comment by fjp on 2020-03-18: @ndimick, @pitosalas, @gvdhoorn why are the two JointEffortControllers required? Isn't DiffDriveController sufficient? Answer: This is the main(..) from the code you link in your question:

int main(int argc, char **argv) {
    /*
     * Main loop of the hardware interface.
     */
    ros::init(argc, argv, "hw_interface");
    MyRobot robot;
    controller_manager::ControllerManager cm(&robot);
    ros::Time time_now;
    ros::Duration period_now;
    ros::Rate sleep_rate(10);
    while (true) {
        time_now = robot.get_time();
        period_now = robot.get_period();
        robot.read(period_now);
        cm.update(time_now, period_now);
        robot.write();
        sleep_rate.sleep();
    }
}

There is one thing suspiciously missing here, and that would be somewhere for ROS to actually process events. The fact that the controller_manager services do not produce any output is indicative of this. Even a hardware_interface needs some time to process incoming and outgoing events, but you don't give it any. You'll either want to call ros::spinOnce() somewhere in your while loop, or instantiate a ros::AsyncSpinner and start it. The latter would actually be recommended. Something like this should do the trick:

ros::AsyncSpinner spinner(2);
spinner.start();

A single thread is not enough, as there is a high potential for deadlocks. Note that this is also done in the two examples that you link: PickNikRobotics/ros_control_boilerplate (here) and eborghi10/my_ROS_mobile_robot (here). Edit: just noticed you have this:

while (true) { [..] }

This is not a good idea in a ROS node -- which a hardware_interface essentially is. You'll want to check for ros::ok() instead. Originally posted by gvdhoorn with karma: 86574 on 2020-03-10 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by ndimick on 2020-03-11: Thank you! adding the ros::AsyncSpinner did the trick. Again, thanks for your time, help and additional insights.
{ "domain": "robotics.stackexchange", "id": 34543, "tags": "ros, ros-melodic, ros-control, diff-drive-controller, hardware-interface" }
Angular diameter of the Sun's reflection from the ocean, seen from Sun-Earth L1?
Question: I'm trying to understand how smooth the reflecting ocean surface would need to be to produce such a small bright spot as seen from the DSCOVR satellite at Sun-Earth L1. It appears to be only about 8E-05 rad, or about 0.3 arcminutes. My question is primarily about the geometrical optics involved in the reflection of light from the Sun off of the Earth. Interpretation of the apparent size and comparing to the image is secondary. At a distance of 151.3 million km from Earth, the 1.391 million km diameter Sun has an angular width of 0.009196 radians. The DSCOVR satellite is 1.586 million km from the Earth looking at the Sun's reflection from somewhere around the Sun-Earth L1 point. It's close to the Earth-Sun axis (the Sun-Earth-DSCOVR angle is about 7.6 degrees) so let's simplify and assume it sits on-axis. The Earth's radius is 6378 km; as a convex mirror its focal length is $f=-r/2$ or -3189 km. Question: What would be the angular size of the Sun seen from DSCOVR reflecting in a spherical mirror model for the Earth in this configuration? What I'm looking at (for context): This sequence of images taken by the EPIC camera on the DSCOVR satellite near the Sun-Earth L1 point is notable for showing the progress of the Moon's shadow across the Earth, but that's not the topic of my question. The first image shows the back-reflection of the Sun from the Earth. It is likely to be from a particularly calm spot on the Ocean, but that's not for sure, there are some images that show reflections from horizontally oriented ice crystals as well. See Are there measurements or calculations that suggest atmospheric ice plates would be horizontal to within 0.1 degrees? and also How could the recently explained “glints” seen by DSCOVR appear so compact considering the finite size of the sun? for some interesting images and more on that. But that doesn't matter for the purposes of my question. Width of Earth 6378 km x 2 / 1586000 km = 8.0428E-03 rad corresponds to ~1500 pixels. 
Bright spot is about 15 pixels wide, or about 8E-05 rad. Could such a small angular width be a geometrical reflection of the Sun from a very smooth ocean? Raw image at NASA seen here. Answer: $d_o$ and $h_o$ are the distance and diameter of the Sun and $d_i$ and $h_i$ of the respective images. The Earth's radius is $r$ and $f=\frac{r}{2}$ is the focal length. The mirror equation $$\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}$$ in the approximation $d_o \gg f$ gives $d_i \approx f$. Then, using the magnification equation $$h_i = \frac {d_i}{d_o}h_o \approx \frac {f}{d_o}h_o.$$ If $S$ is the distance to the satellite then the angular size is $$\frac{h_i}{S} \approx \frac{h_o}{d_o}\frac{f}{S} = \frac{h_o}{d_o}\frac{r}{2S} \approx \frac{1.4\times10^6}{150\times10^6}\frac{6.4\times10^3}{2\times 1.6\times 10^6}\approx 1.8 \times 10^{-5}$$ where distances are in km. (We have also ignored that the image is $\frac{r}{2}$ closer to the satellite than is the Earth's center.) On a more descriptive note, the reflection on perfectly smooth water would look like a very bright spot about 29 km in diameter. It is blurred out (due to sea waves) to about ten times that diameter on the image. It would be interesting to calculate the brightness (per area) of the (unblurred) spot compared to the brightness of the sunlit Earth's surface.
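The answer's estimate is easy to check numerically. Below is a small sketch (variable names are mine; the numbers are the ones quoted in the question, in km):

```python
# Convex-mirror estimate of the Sun's reflected angular size, following the
# answer's formulas. All distances in km; values copied from the question.
h_o = 1.391e6    # Sun diameter
d_o = 151.3e6    # Sun-Earth distance
r   = 6378.0     # Earth radius, acting as the mirror radius
S   = 1.586e6    # Earth-DSCOVR distance

f = r / 2                 # |focal length| of the spherical mirror
h_i = (f / d_o) * h_o     # image size; d_o >> f gives d_i ~ f
theta = h_i / S           # angular size seen from the satellite

print(h_i, theta)         # ~29 km image, ~1.8e-5 rad
```

This reproduces both numbers in the answer: a roughly 29 km unblurred spot subtending about 1.8e-5 rad.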
{ "domain": "astronomy.stackexchange", "id": 3562, "tags": "telescope, earth, optics" }
Computational indistinguishability for any distribution using a Chernoff bound
Question: I had a question about a general statement regarding finding a computationally indistinguishable distribution given any distribution, observed (in the third paragraph of Section 11, page 31) here. This is the statement: For any distribution $D$ over $\{0,1\}^{n}$, there exists a distribution $D'$ such that $D$ and $D'$ are $\epsilon$-indistinguishable with respect to any classical distinguisher of size $n^{k}$. Let me restate the proof. Let $D$ be any distribution over $\{0,1\}^{n}$. Then choose $w$ elements independently with replacement from $D$, and let $D'$ be the uniform distribution over the resulting multiset (so in particular, $H(D') \leq \log_{2} w$). Certainly $D'$ can be sampled by a circuit of size $\mathcal{O}(nw)$: just hardwire the elements. Now, by a Chernoff bound, for any fixed circuit $C$, clearly $D$ and $D'$ are $\epsilon$-indistinguishable with respect to $C$, with probability at least $1 - e^{-\epsilon^{2} w}$ over the choice of $D'$. But there are “only” $n^{\mathcal{O}(n^{k})}$ Boolean circuits of size at most $n^k$. So by the union bound, by simply choosing $w = \Omega\left(\frac{n^{k} \log n}{\epsilon^2}\right)$, we can ensure that $D$ and $D'$ are $\epsilon$-indistinguishable with respect to all circuits of size at most $n^{k}$, with high probability over $D'$. I do not understand how the Chernoff bound is applied. How do we know the action of the circuit $C$? I also don't understand why $w = \Omega\left(\frac{n^{k} \log n}{\epsilon^2}\right)$ in the union bound. Since we need to "protect against" $n^{\mathcal{O}(n^{k})}$ Boolean circuits, shouldn't $w$ be something like $w = \Omega\left(\frac{n^{\mathcal{O}(n^k)} \log n}{\epsilon^2}\right)$? Answer: How the Chernoff bound is applied What does it mean for $D$ and $D'$ to be indistinguishable by a circuit $C$? It means that $$ |\Pr_{x \in D}[C(x) = 1] - \Pr_{x \in D'}[C(x) = 1]| \leq \epsilon. $$ Let $x_1,\ldots,x_w$ be the sampled elements.
Then $$ \Pr_{x \in D'}[C(x) = 1] = \frac{X_1 + \cdots + X_w}{w}, $$ where $X_i$ is the indicator of $C(x_i) = 1$. If we sample $x_1,\ldots,x_w$ at random, then $X_1,\ldots,X_w$ are independent Bernoulli random variables whose expectation is $\Pr_{x \in D}[C(x) = 1]$. Chernoff's bound shows that their average is concentrated around their expectation. Applying the union bound The probability that a choice of $x_1,\ldots,x_w$ is bad for a specific circuit $C$ is $e^{-\epsilon^2 w}$. The probability that a choice of $x_1,\ldots,x_w$ is bad for one of $N$ different circuits is at most $Ne^{-\epsilon^2 w}$. Hence if $Ne^{-\epsilon^2 w} < 1$, the probability that a choice of $x_1,\ldots,x_w$ is good to all $N$ circuits is positive, hence there exists such a choice which is good to all $N$ circuits. Since $w$ is in the exponent, the minimal $w$ needed to satisfy $Ne^{-\epsilon^2 w} < 1$ scales only logarithmically in $N$.
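To make the union-bound bookkeeping concrete, here is a toy numerical sketch (the constants $n$, $k$, $\epsilon$ and the hidden constant in the exponent are my own choices, purely for illustration):

```python
# Each fixed circuit is fooled except with probability exp(-eps^2 * w), so for
# N circuits the union bound needs N * exp(-eps^2 * w) < 1, i.e. w > ln(N)/eps^2.
import math

eps = 0.1
n, k = 20, 2
ln_N = 10 * n**k * math.log(n)   # pretend N = n^(10 * n^k) circuits of size n^k

w = 2 * ln_N / eps**2            # comfortably above the ln(N)/eps^2 threshold
log_failure = ln_N - eps**2 * w  # log of the union bound N * exp(-eps^2 * w)
print(w, log_failure)            # failure bound exp(log_failure) is vanishingly small
```

The point of the arithmetic: even though $N$ is astronomically large, $w$ only needs to scale with $\ln N$, which is polynomial in $n$ — this is exactly why $w$ does not pick up a factor of $N$ itself.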
{ "domain": "cs.stackexchange", "id": 17809, "tags": "asymptotics, probability-theory, randomness, pseudo-random-generators, chernoff-bounds" }
Charging a battery
Question: So we have a charger outputting $V$ volts and $I$ amperes and two rechargeable batteries of the same capacity $Q$ coulombs but different emfs $V_1$ and $V_2$. Which battery would charge faster? Approach: So since the charge stored in both the batteries is $Q$ C, and the charger provides a current of $I$ (that is, $I$ C/s), the time taken should be the same in both the cases ($\frac{Q}{I}$ s). However the power provided by the adapter in both the cases is the same and the energy stored in the battery is different in both the cases, so this answer is not possible. Where did I go wrong? Answer: Where did I go wrong? The first 'wrong' is stipulating that the charging voltage $V_C$ across, and the current $I_C$ through, the battery are fixed by the charger. This is quite unrealistic. Why? Because you can't independently specify these when charging a battery. To see this, stipulate that the uncharged battery emf is $V_{B,uc}$ and that the internal resistance of the battery is $r_B$. It follows that a charger with voltage $V_C \gt V_{B,uc}$ across the battery must be supplying a charging current $I_C$ equal to $$I_C = \frac{V_C - V_{B,uc}}{r_B}$$ and so the charging current and voltage are not independent. Typically, a battery charger will limit the charging current to a safe value by controlling $V_C$ and so, in the example you give, the charge currents may be the same but the charging voltage will not be. Thus, you can't say that the power provided by the adapter to each battery during the charging process is the same. The second 'wrong' is assuming that all of the power from the charger goes to charge the battery. Some of the power, $I^2_C\cdot r_B$, is dissipated by the internal resistance (the battery warms up during charging). Finally, as others have pointed out, a battery (or cell) stores energy and not electric charge.
If two batteries have the same (energy) capacity (typically given in watt-hours), then for the same charging current, the battery with the largest emf will finish charging first. For example, and at the risk of simplifying too much, assume you have a 6V and a 12V battery each with the same capacity and 'small' internal resistance. If both (fully discharged) batteries are charged with a 1A charging current, the 12V battery will become fully charged in essentially half the time of the 6V battery.
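The coupling between charging voltage and current can be sketched in a few lines (the internal resistance and voltages below are hypothetical round numbers, not values from the question):

```python
# I_C = (V_C - V_B) / r_B: the charger cannot set V_C and I_C independently.
def charging_current(V_C, V_B, r_B):
    """Current pushed into a battery of emf V_B by a charger holding V_C."""
    return (V_C - V_B) / r_B

r_B = 0.1   # hypothetical internal resistance, ohms
for V_B in (6.0, 12.0):
    V_C = V_B + 1.0 * r_B   # charger voltage needed for a 1 A charging current
    print(V_B, V_C, charging_current(V_C, V_B, r_B))
```

Holding the same 1 A current into a 6 V and a 12 V battery forces the charger to sit at different voltages, so the delivered power differs between the two cases, which is the answer's first point.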
{ "domain": "physics.stackexchange", "id": 52787, "tags": "electricity, electric-circuits, batteries" }
Is gravity the Earth's centripetal acceleration? Should the gravitational acceleration be equal to the centripetal acceleration at the equator?
Question: I understand that different forces can act as centripetal forces (shear, tension of a string, etc.) but in the case of the rotating earth, is the gravitational force really the centripetal force that keeps the earth spinning? In that case, shouldn't the centripetal acceleration at the equator be equal to the gravitational acceleration of 9.81 m/s$^2$, instead of the 0.03 m/s$^2$ obtained using the linear velocity at the equator and the equatorial radius? What force actually acts as the centripetal force on earth or on any rotating sphere? I have also read elsewhere that for someone standing on earth, the centripetal force is the net force resulting from the difference between the weight and the normal force. But where does this difference come from (why is it different from 0)? As the gravitational attraction itself doesn't change, I believe this difference is due to a change in the normal force. But why does it change at all? Answer: First, no force is needed on a spinning object to keep it spinning. Second, force is needed to cause objects to move on non-straight paths. Third, don't try to classify a single force as a centripetal force. When you analyze the motion of an object, identify all the actual forces (as vectors) acting on the object, ideally using a free-body diagram. Then find the net force by adding those vectors. If the object is moving in a circular path, you then set the radial component of that net force equal to the mass, $m$, of the object times the radial, or centripetal, acceleration term: $$\left(\sum_i \vec{F}_i\right)\cdot\hat{r} = m a_c = m\frac{v^2}{r}= m\omega^2 r.$$ In the case of an object at the surface of the earth, there is no single force which is a "centripetal force." There are forces which contribute to the centripetal acceleration.
Those forces are the gravitational force the earth's mass exerts on the object mass and the normal component of the contact (electromagnetically-based) force between the object's bottom surface and the surface the object is resting on. Mathematically you have $$\frac{v^2}{R_e}=\frac{mg - F_{norm}}{m}$$ I wrote it that way to emphasize that the centripetal acceleration is the sum of forces in the radial direction divided by the mass. That's a much better way to think about these concepts rather than trying to find "the centripetal force." P.S.- Don't make the mistake of thinking that the normal component of contact force is equal to mg. It's not.
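A quick numerical sketch of the numbers in the question (the sidereal day and equatorial radius are standard values I've filled in):

```python
# Centripetal acceleration at the equator: a_c = omega^2 * R, tiny next to g.
import math

R = 6.378e6       # equatorial radius, m
T = 86164.0       # sidereal day, s
omega = 2 * math.pi / T

a_c = omega**2 * R
g = 9.81
print(a_c, (g - a_c) / g)   # ~0.034 m/s^2; normal force only ~0.35% below mg
```

This reproduces the question's 0.03 m/s$^2$ figure and shows why the normal force is only very slightly smaller than $mg$.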
{ "domain": "physics.stackexchange", "id": 65003, "tags": "newtonian-gravity, reference-frames, rotational-dynamics, centripetal-force, centrifugal-force" }
Is there a conflict, or is there not a conflict between the Pusey-Barrett-Rudolph (PBR) theorem and the information theory interpretation?
Question: In the Wikipedia article, it says that the PBR theorem sort of rules out the psi-epistemic interpretations. I want to know, is this the end of the information theory interpretation and relational interpretation? I am thinking that there is no conflict. The statement of this theorem says that one physical reality is not consistent with multiple pure states. But psi-epistemic models do not attribute different pure states to the same physical situation, do they? For example, in the Wigner's friend experiment, the information theory/relational interpretation says that the friend observes a collapsed state, say $|\text{spin up}\rangle$. But Wigner will describe the experiment using something like $|\text{spin up, friend measured up}\rangle +|\text{spin down, friend measured down}\rangle$. So it is true that Wigner and Wigner's friend are using different states, but they're not describing the same physical situation. Wigner's friend is only describing the state of the particle. But Wigner is describing the joint system of his friend and the particle. Is this correct? Is there a conflict or not, between the PBR theorem and the information theory/relational interpretation? Answer: As with all such results, understanding this requires a careful consideration of the underlying mathematical framework and a knowledge of how it relates to the intuitive concepts under discussion. In particular, while it makes statements like "$\psi$-epistemic views are inconsistent with the probabilities described by QM", the term "$\psi$-epistemic" does not mean a mere equation of the wave function with "knowledge" or "information" but rather a specific mathematical model of that intuitive idea, and that mathematical model is not closed to objection. Indeed, I'll outline an objection here, that I am not sure has been raised before (but if someone has, I'd gladly accept the citation!).
PBR basically starts from the following idea, which is a sort of extension of the setup used in Bell's Theorem. Suppose that a quantum system has associated to it some "objective" characteristics, or state, $\lambda$, drawn from a possibility set $\Lambda$. In Bell's theorem, the idea being tested is that $\lambda$ determines each measurement result - that is, for each (for simplicity) yes/no projector operator $\hat{\Pi}$ representing an elementary question to be asked about the system, $\lambda$ directly determines its answer, i.e. there is a function from the cross product of $\Lambda$ and the set of all such projectors to $\{ 0, 1 \}$, such that if we knew $\lambda$, we would know what every measurement would give in advance. PBR softens this a bit. It doesn't presume that $\lambda$ necessarily determines the measurement results by itself, but does nonetheless presume that each $\lambda$, together with a question $\hat{\Pi}$, will determine a probability for that question to be "yes", i.e. $P(\hat{\Pi}|\lambda)$, to use a notation evocative of conditional probability. It then assumes $\lambda$ is wholly set by the preparation procedure, and moreover that two different procedures set to prepare the same pure quantum state $|\psi\rangle$ prepare $\lambda$ in the same way, i.e. to each such $|\psi\rangle$, we can associate a different preparation probability measure $\mu_\psi$ on $\Lambda$, viz. the probability of getting a "yes" to some $\hat{\Pi}$ is $$P(\text{$\hat{\Pi}$ gives "yes"}) = \int_{\lambda \in \Lambda} P(\hat{\Pi}|\lambda)\ d\mu_\psi$$ where the integral is meant in the sense of Lebesgue. Note that if the inner probability is definitive for all questions (i.e. either $0$ or $1$), then the probability to get an answer is simply that the preparer prepared the system with that answer, i.e. the same as the setup of Bell's Theorem.
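A discrete toy version of this framework may help fix ideas (the ontic states, response probabilities, and preparation measure below are entirely made up for illustration):

```python
# Finite analogue of P(yes) = integral of P(Pi | lambda) d mu_psi(lambda):
# a response probability per ontic state, averaged over the preparation measure.
Lambda = ["l1", "l2", "l3"]
P_yes = {"l1": 1.0, "l2": 0.5, "l3": 0.0}    # P(Pi gives "yes" | lambda)
mu_psi = {"l1": 0.2, "l2": 0.5, "l3": 0.3}   # preparation measure for |psi>

p = sum(P_yes[lam] * mu_psi[lam] for lam in Lambda)
print(p)   # 0.45
```

In the Bell-style special case where every entry of P_yes is 0 or 1, this average reduces to the probability that the preparation landed on a $\lambda$ carrying that answer, exactly as described above.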
The argument then goes from there to consider a joint measurement on two independently prepared systems, and moreover shows how this measurement can be realized with real quantum systems, and shows that distinct quantum states must "partition"(*) the putative set $\Lambda$. The "$\psi$-epistemic" view is specifically defined here as the view that there could exist a pair of quantum states $|\psi_1\rangle$ and $|\psi_2\rangle$ such that the objective state $\lambda$ could belong to both, that is, knowing one or the other would be consistent with the same $\lambda$, by analogy with classical ignorance-interpreted probability distributions, where you can assign two different probability distributions despite only 1 given state of reality being the case at a given time. So does this rule out an informational interpretation? It depends: If you take P, B, and R's setup above as defining what you mean by "informational interpretation", then yes, it does. However, is that the only consistent informational interpretation? I would argue it is not, and will give you two reasons to think this below. For one thing, and as I pointed out in a comment here: there are other, and arguably better-behaved, mathematical quantum formalisms in which a distinction between pure and mixed states does not have to be drawn so explicitly as in the Hilbert formalism, but instead emerges naturally from a common description. In particular, the system of $C^{*}$-algebras reveals that pure states are essentially extreme points on a convex hull of mixed quantum states. This exists in Hilbertian QM as well, if you just stick entirely to density operators and pretend vectors don't exist. Here, the vectors go away and thus we just have one set of "states". As I said before, an "informational interpretation", most generally, simply means assigning the semantic value of "agent knowledge" to the quantum state. In this case, it then follows that both mixed and pure states represent knowledge.
It is just that, under the PBR setup, pure states represent enough knowledge to separate at least different classes of real states of the system. There is no reason an "information" referent cannot become coincident in some sense with actual reality. The second, more subtle, objection is the one I'm not so sure has been made before. It concerns a step that is easy to miss within the PBR argument, and that is that the move from the individual systems to the joint system assumes a passage from individual objective state spaces $\Lambda_1$ and $\Lambda_2$ to a total joint space of $\Lambda_1 \times \Lambda_2$. Besides providing another attack vector on the theorem by itself (though I believe some have followed at least direct approaches down that line and not found them very fruitful), the more interesting line of objection is what it says regarding the assumptions underlying the Bell-like framework I just described. In particular, to what extent can we consider the "state" $\lambda$, or even for that matter, $|\psi\rangle$, to truly belong solely to the system? Indeed, here's a simple thought experiment as to why we should think that frameworks of the kind P, B, and R use shouldn't be used, and it requires us to remember, as seems strangely often forgotten, that quantum mechanics, as the name suggests, is a theory of mechanics - in all this, let's not lose sight of basic physics principles! In particular, we can shift physical reference frames - for all this stuff about talking about "agents", "knowledge" and "observer effects" and what not, what seems often ignored is the question of a simple change of coordinates of the kind you learn about in your first course in Newtonian mechanics. Suppose we have a setup located far out in the depths of space, away from gravity and other perturbing forces.
There is a single electron, surrounded by some probing apparatus which, depending on how it's oriented, will measure a different component of the electron's spin angular momentum vector. We consider this apparatus to include enough capabilities it can be treated as a quantum agent, e.g. maybe it's running an AI program. The whole thing is prepared so that it regards the electron it has as being in a state like $\left|\uparrow\right\rangle$. Now, here's something. This agent has reaction wheels attached to it. It can rotate - change its orientation - without influencing the electron. Suppose it uses them to execute a 90-degree repositioning maneuver around it. Presumably, $\lambda$ for the electron should not change, right? Yet in order to calculate the correct probabilities for the measurement, it must now assign it the Hadamard state $$|+\rangle = \frac{\left|\uparrow\right\rangle + \left|\downarrow\right\rangle}{\sqrt{2}}$$ and will obtain $\uparrow$ or $\downarrow$ with probability $\frac{1}{2}$. Yet nothing changed about $\lambda$! This situation cannot easily be accounted for in the PBR framework! Either we have to assume that $\lambda$ is somehow strangely altered "spookily" by the rotation, or we have to assume the set of questions the agent is asking is changing. The trouble with the second option is that it violates fundamental physical symmetry laws: namely, it tells us there is a preferred orientation in space, against which those questions are defined, because no such transformation appears in the PBR framework itself - the operators all remain the same, only the quantum state $\left|\psi\right\rangle$ changes. Hence, the thesis I would suggest is that, in fact, our quantum pure state assignment $|\psi\rangle$ actually does not just capture the "state of the system" alone, but instead reflects also some stuff about the situation of our agent viz. that system. 
In particular, there is no reason that $P(\hat{\Pi}|\lambda)\ d\mu_\psi(\{ \lambda \})$ should be considered sufficient to generate the probability given by QM to begin with, because it also needs conditioning upon some further aspects of the world external to the system! And that is tacitly stamped out because the movement to the Cartesian product when considering the joint systems is in effect to limit the referents solely to the systems themselves. And indeed, Rovelli's relational interpretation is very natural for dealing with this scenario, and cannot be given an ontology in the PBR framework, because it does not assign objective states $\lambda$ to single systems alone, but instead to pairs of systems. In particular, relational ontology might look something like this: if we have three systems, $a$, $b$, and $c$ where $b$ and $c$ can act as agents observing $a$, then we would say the pairs $(b, a)$ and $(c, a)$ (where I've written it in the form $(\text{agent}, \text{system})$) have (different!) objective states $\lambda$, but not $a$ itself, nor presumably $b$ or $c$ by themselves (very interesting metaphysical implications). (Note that we could add a "self-state" to pairs like $(a, a)$, but it will not be the one observed by any external measurer.) Most particularly, in the PBR setup we have the systems $a$ and $b$, individual measurers $m_a$ and $m_b$, and joint measurer $m_j$. Then the pairs $(m_a, a)$, $(m_b, b)$, $(m_j, a \otimes b)$ could (and will!) all have different states, too. But also note that Rovelli doesn't say $\Lambda$ is distinct from the Hilbert space $H$ of any of the systems - they could, in fact, be one and the same. Going back to our example, the mechanical rotation process I described would then be understood as changing the state of the relational pair involving the measuring agent and measured system, but changes nothing about the state of that measured system individually.
{ "domain": "physics.stackexchange", "id": 91570, "tags": "quantum-mechanics, quantum-information, quantum-interpretations" }
Why can integrals be written as $I=\int \phi(x) \epsilon$?
Question: Carroll's book Spacetime and Geometry gives this new way to think about integrals: $$I=\int \phi (x) \epsilon\tag{2.98}$$ $\epsilon$ is the Levi-Civita tensor (2.96). I don't see how the RHS equals the integral of $\phi(x)$ though. If we removed the $\int$, then $\phi(x) \epsilon$ remains, which is just a tensor field. Why does it make sense to put a $\int$ before a tensor field? I don't think $\int$ is defined to operate on tensor fields. Answer: The Direct Answer. Here's the short and direct answer. On an oriented $n$-dimensional smooth manifold $M$, we can define the integrals of $n$-forms. So, if $\omega$ is an $n$-form on $M$, then we can define the number $\int_M\omega\in\Bbb{R}$, called the integral of the differential $n$-form over $M$ (ok I'm glossing over some integrability conditions for the sake of brevity). So, the 'things' we are supposed to integrate on $n$-dimensional oriented smooth manifolds are differential $n$-forms. You write "which is just a tensor field. Why does it make sense to put a $\int$ before a tensor field? I don't think $\int$ is defined to operate on tensor fields." and yes, $\phi\cdot\epsilon$ is a tensor field, but it's not some random tensor field; it is an alternating $(0,n)$ tensor field (i.e. an $n$-form), and this is what allows us to define its integral. Brief definition of Integrals of $n$-forms. The $n$-form $\epsilon$ is what a mathematician would call a volume form (and in this case it's the volume form induced by the Lorentzian metric, and it defines the orientation). So, if you have a function $\phi:M\to\Bbb{R}$, then $\phi\cdot\epsilon$ is again an $n$-form, and this is the 'correct' thing to be integrating on $M$ (because it is oriented). The topic of integration on manifolds is a basic part of differential geometry, and is covered in any good textbook (e.g. Spivak, Lee). The definition of the integral $\int_M\omega$ is slightly subtle, and the details are covered in these textbooks.
The key thing is that if you have a coordinate chart $(U,\alpha=(x^1,\dots, x^n))$ which is positively oriented, then you can write $\omega=f\,dx^1\wedge\dots\wedge dx^n$ for some unique function $f:U\to\Bbb{R}$, and we have \begin{align} \int_U\omega&:=\int_{\alpha[U]}(f\circ\alpha^{-1})\cdot d\lambda_n\equiv \int_{\alpha[U]}f\circ\alpha^{-1}, \end{align} where the symbols on the right are the standard multivariable calculus integrals of functions defined on subsets of $\Bbb{R}^n$ (i.e. the usual integral with respect to $n$-dimensional Lebesgue measure... or you can just use Riemann integrals throughout). In words, this is saying the most obvious thing you can do: to integrate an $n$-form in a coordinate patch, write the $n$-form as a function times a wedge of $dx^i$'s and then integrate that chart-dependent function over the corresponding subset of $\Bbb{R}^n$. To then define $\int_M\omega$ from these various $\int_U\omega$, one uses what's called a partition of unity, and the purpose of me emphasizing orientation is so that in this global step, we choose the correct signs... i.e. we have to put humpty dumpty back together correctly. More Context, and Reconciling Various Ideas. Perhaps the following answer of mine will be helpful in some details. In reality, the correct things to integrate on an arbitrary smooth manifold are scalar densities. However, if your manifold is oriented, then you can construct an isomorphism from the vector space of scalar densities to the vector space of $n$-forms. So, one can then, by 'transport of structure', define integrals for $n$-forms. I think what you're most confused by is that we're integrating complicated types of objects (scalar densities/$n$-forms) as opposed to smooth functions. But this shouldn't be surprising at all! When doing integrals, we don't just integrate functions $f$.
We integrate functions with respect to some measure $\mu$ (in $\Bbb{R}$, this is the integral with respect to $1$-dimensional Lebesgue measure, which for continuous functions is equal to the basic vanilla Riemann integral we all know and love). In $\Bbb{R}^n$, we integrate functions $f$ with respect to $n$-dimensional Lebesgue measure: $\int_{\Bbb{R}^n}f\,d\lambda_n$. Or more generally on an abstract measure space $(X,\mathfrak{M},\mu)$, we integrate functions $f:X\to\Bbb{R}$ with respect to the measure $\mu$, to get the number $\int_Xf\,d\mu$. The question now becomes: on an abstract manifold $M$, what is the measure? Is there a natural measure? In the general case the answer is no, but for GR, the answer is yes! The metric tensor $g$ gives rise to a unique measure $\mu_g$, also denoted $\lambda_g$ or $dV_g$, which a mathematician might call the Riemann-Lebesgue volume measure on $M$ (see here for details of the definition and construction). So, we can now talk about integrating functions on smooth manifolds: $\int_Mf\,dV_g$. So, the point is that there are many ways to develop the subject of integration on manifolds: If you're more analysis-minded, you might go the Lebesgue route and ask 'what measure should I use'? In the case of Riemannian/Lorentzian geometry, you'd end up using the measure $dV_g$. In classical mechanics (particularly the Hamiltonian formulation on phase space), you'd use the measure $dV_{\Omega}$ induced by the symplectic form. Once you have the measure, you can start integrating functions with respect to this measure. If you were more differential-geometry minded, you'd say 'hmmm...what are the objects I should be integrating? Well, a manifold comes with a bunch of local coordinate charts, so maybe I can use this local information to somehow put together a global definition?' Pursuing this line of thought leads you to the notion of integrating scalar densities. Going one step further, you'd say scalar densities aren't so nice to work with.
Also, from single variable calculus, $f(t)\,dt$ can be thought of as a $1$-form, and differential $n$-forms capture the notion of signed-volumes, and differential forms are very easy to compute with. Can I somehow make a definition for integrating $n$-forms? Pursuing this line of thought, you'd see that if you impose the orientedness condition on $M$, then several pesky minus signs disappear and you now have a notion of integration for $n$-forms (particularly you can now integrate the $n$-form $\phi\cdot\epsilon$). Finally, you can check that regardless of which route you take, all these approaches give you the same answer given correct hypotheses (oriented $n$-dimensional pseudo-Riemannian manifold). So, the fact that you have to end up considering more complicated objects ($n$-forms) is not really an issue, and is in fact a strength in disguise (they're very computationally flexible, and we have the famous Stokes' theorem).
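As a concrete instance of the chart formula, one can numerically integrate $\phi\cdot\epsilon$ on the unit 2-sphere, where in spherical coordinates $\sqrt{|\det g|}=\sin\theta$, so the manifold integral becomes an ordinary double integral over the chart. This is my own illustration (a naive midpoint rule), not part of the answer:

```python
# Integrate phi * epsilon over the unit 2-sphere in the (theta, phi) chart,
# where epsilon = sin(theta) d(theta) ^ d(phi). phi_fn = 1 recovers the area.
import math

def integrate_on_sphere(phi_fn, n=400):
    total = 0.0
    dth, dph = math.pi / n, 2 * math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        for j in range(n):
            ph = (j + 0.5) * dph
            total += phi_fn(th, ph) * math.sin(th) * dth * dph
    return total

area = integrate_on_sphere(lambda th, ph: 1.0)
print(area, 4 * math.pi)   # both ~12.566
```

The $\sin\theta$ factor is exactly the chart-dependent function $f$ from the formula above: the $n$-form is rewritten as a function times $d\theta\wedge d\varphi$ and then integrated as an ordinary multivariable-calculus integral.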
{ "domain": "physics.stackexchange", "id": 90230, "tags": "general-relativity, differential-geometry, tensor-calculus, integration" }
Are there any consequences to carbon capture and storage that also sequesters oxygen?
Question: A big argument for carbon capture and storage is true reversal; while switching to renewable energy and eliminating CO2 emissions is a must, it will not reverse the massive movement of carbon from underground hydrocarbons to the atmosphere and biosphere. However, prevailing strategies are not exact reversals of burning fuels, either. Rather than converting CO2 to hydrocarbons to be buried, CO2 may be directly injected deep into the ground or converted to carbonates first. Would there be consequences for the biosphere if we do manage to bury 2-3 oxygen atoms for every carbon atom that will also be buried, oxygen that so far would have been released into the atmosphere? EDIT: For the sake of argument, let's assume a hypothetical method of capture and storage that uses only renewable energy and that the buried matter does not leak back out. This question should not be construed as an endorsement of current carbon capture and storage technology. Answer: No, I think there are not. At least not at the scale of the proposed projects. I say this just because $\text{CO}_2$ makes up only 0.04% of the atmosphere, so even if you burn fossil fuels until you double the pre-industrial amount of CO$_2$ and then capture and bury it back (both the carbon and oxygen), you would only drop oxygen levels from the current 20.95% to 20.91% or something like that, which is not very significant. In a closed space like your bedroom, for instance, you would probably make a bigger change in the oxygen level by just breathing inside for a few minutes. Also, many chemical reactions that sequester oxygen and are sensitive to its concentration (oxidation and combustion, for example) would work towards keeping equilibrium: they will slow down if there is less oxygen, allowing the concentration to bounce back. Or they will accelerate if there is too much oxygen, dropping the level back down.
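The 20.95% → 20.91% estimate is simple arithmetic, sketched below (one reading of the answer's round numbers; since burning is C + O2 → CO2, burying a CO2 molecule also buries roughly one O2 molecule's worth of oxygen):

```python
o2_now = 20.95   # current atmospheric O2, percent by volume
co2_now = 0.04   # current atmospheric CO2, percent by volume (round figure)

# Burying all current CO2 (carbon and oxygen together) removes roughly
# one O2 molecule's worth of oxygen per CO2 molecule captured.
o2_after = o2_now - co2_now
print(round(o2_after, 2))  # 20.91
```

So the change is a fraction of a percent of the O2 reservoir, which is the answer's point.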
{ "domain": "earthscience.stackexchange", "id": 1646, "tags": "atmosphere, climate-change, carbon-cycle, carbon, oxygen" }
Qiskit - order of swap gates
Question: It seems I don't understand what swap gates actually do. The following code: import qiskit import numpy as np from qiskit.quantum_info.operators import Operator circuit = qiskit.QuantumCircuit(3) circuit.swap(0, 1) circuit.swap(1, 2) display(circuit.draw(output='mpl')) display(Operator(circuit).data) print({x: y for (x, y) in zip(*np.nonzero(Operator(circuit).data))}) outputs ... {0: 0, 1: 2, 2: 4, 3: 6, 4: 1, 5: 3, 6: 5, 7: 7}, meaning that the circuit maps 0 to 0, 1 to 2, 2 to 4, etc. Some of these outputs are not what I expect. For example, I expected 1=001 ----swap(0,1)---> 010 ---swap(1,2)---> 100 = 4, not 2. What do I misunderstand? Some of my experiments seem to indicate that the swaps happen in the reverse order. Another crazy explanation I have is that swap swaps not the values on the wires, but the wires themselves. But there is nothing like this in the documentation. Each gate independently (i.e. only swap(0, 1) or only swap(1, 2)) behaves as I expect, so there is no chance that I confused the order of bits (i.e. big-endian vs little-endian). Is my usage of Operator wrong? Again, I don't see anything about it in the documentation. I'm starting to question my sanity, so any help is appreciated. Answer: You're correct in your expectations: qiskit's little-endian convention will indeed send $|001\rangle$ to $|100\rangle$ with your circuit. In fact, this is shown when you display Operator(circuit).data, which gives the following matrix: $$\begin{pmatrix} 1&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&1&0\\ 0&1&0&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&1 \end{pmatrix}$$ The output you get when inputting $|001\rangle$ into this circuit is given by reading the second column of this matrix, which gives $|100\rangle$. However, np.nonzero will return (x, y) pairs such that the entry in the x-th row and y-th column is non-zero. Thus, you're essentially reading the transpose of your circuit by doing this. 
If you replace your last line by: print({y: x for (x, y) in zip(*np.nonzero(Operator(circuit).data))}) You obtain {0: 0, 2: 1, 4: 2, 6: 3, 1: 4, 3: 5, 5: 6, 7: 7}, which was what you expected to get. You can also do this if you want the output to be sorted according to the keys: print(dict(sorted({y: x for (x, y) in zip(*np.nonzero(Operator(circuit).data))}.items()))) Using which you will obtain {0: 0, 1: 4, 2: 1, 3: 5, 4: 2, 5: 6, 6: 3, 7: 7}.
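As a cross-check that needs no Qiskit install, the same permutation can be rebuilt in plain NumPy (a sketch assuming little-endian ordering, qubit 0 = least significant bit; `swap_perm` is a helper written here, not a Qiskit function):

```python
import numpy as np

def swap_perm(n_qubits, i, j):
    # Permutation matrix swapping qubits i and j, little-endian
    # (qubit 0 = least significant bit of the basis-state index).
    dim = 2 ** n_qubits
    m = np.zeros((dim, dim), dtype=int)
    for col in range(dim):
        bi, bj = (col >> i) & 1, (col >> j) & 1
        row = (col & ~(1 << i) & ~(1 << j)) | (bj << i) | (bi << j)
        m[row, col] = 1
    return m

# Gates apply left-to-right in the circuit, so the total unitary is
# U = SWAP(1,2) @ SWAP(0,1)  (swap(0,1) acts first).
u = swap_perm(3, 1, 2) @ swap_perm(3, 0, 1)

# Input basis state -> output basis state, read off column by column.
mapping = {col: int(np.argmax(u[:, col])) for col in range(2 ** 3)}
print(mapping)  # {0: 0, 1: 4, 2: 1, 3: 5, 4: 2, 5: 6, 6: 3, 7: 7}
```

Reading columns (inputs) rather than rows reproduces exactly the corrected mapping from the answer, with 1 → 4 as the asker expected.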
{ "domain": "quantumcomputing.stackexchange", "id": 4799, "tags": "qiskit, programming" }
Why can I see the glow from my TV remote's LED if it is supposed to emit in the infrared?
Question: The TV remote's light emissions, being >700 nm, are supposed to be invisible, right? The same goes for the proximity sensor on a phone and the laser focus on a phone's back camera. But how could me and 3 more people I asked all seem to be able to see them, describing them as a dim red glow? Can you see it? I want to know what percentage of people can see it. Note: I've used the following remote brands for the test ─ BPL, Sony, and Samsung. Answer: I've worked a lot at 785 nm (e.g., for Rb spectroscopy). I can assure you anyone can see this. It's a lovely deep shade of red. However, according to Wikipedia most TV remotes use a 940 nm diode. That is not visible to humans. Even my 800 nm diodes are invisible. You could always take it apart and see what diode is used.
{ "domain": "physics.stackexchange", "id": 48411, "tags": "photons, biophysics, biology, infrared-radiation" }
Makefile to compile an OS which also uses functions/macros
Question: So I've started writing an OS to know how they work, and I came up with the following makefile to compile it. The reason I used a makefile is because I wanted to see everything it was doing and have full control over it, to be as explicit as possible. Hence, please don't tell me about CMake (I don't care enough to switch to it, and I find the syntax quite... not ugly, but definitely too compacted at times), or autotools (except if you think it does exactly what I want. I want it to be as explicit as possible as well), or other Makefile-like generators. Furthermore, please don't tell me about implicit rules or their brother .c.o. I know. I don't want them. They're implicit (and also they generate the object next to the source, which I find annoying and unwanted). With that out of the way, here's what I want it to be able to do: I only use GNU Make and a Linux distribution (though it's supposed to be a standalone build, not much should be impacted by the source OS...). I'm currently on an x86_64 computer, and I don't think I'll ever change, but if there are easy fixes I could make, please do tell. Because I like to be up-to-date, I don't care about features only present in recent makes, I can use it if it's useful. My current tree (after a make distclean or a git clone) is: . ├── Makefile └── src ├── common │ └── include │ └── string.h ├── i686 │ └── linker.ld └── kernel ├── include │ ├── arch │ │ └── i686 │ │ └── [some headers] │ └── [some headers] └── source ├── arch │ └── i686 │ └── [some sources] └── [some sources] (If you need the exact tree, I can put it.) As you can see, I want to be able to support multiple destination architectures, as well as a potential libc (and maybe libk when I understand what it's used for; I only have C and memory management setup hopefully correctly right now in it, so it's far from being complete). 
The Makefile generates a parallel tree of src in makedeps for any .mk source dependencies and another in obj/[debug or release] for objects. Once all objects are compiled, a new directory $(SYSROOT) is created and populated with the root directory of the image file (as I understand how grub-mkrescue works), including all required headers and/or object/binary/... files (of which I have none for now). It also generates a GRUB configure file and then packages everything in a single iso. One feature my Makefile takes advantage of that I have not seen on this site is that I use evals and calls. I have made some "functions" to add a source file for the kernel (no wildcard, as explicit as possible), which could easily be adapted to add source files to a libc. This makefile is thus structured as such: all default target Variables definitions "Functions"/macros definitions Files declarations What I'd like to know is: What would you do differently (apart from "use $(YOUR_PREFERRED_MAKE_GENERATOR) instead")? Is there anything that seems wrong/hardly maintainable? Is this Makefile easily modifiable to fit for generic C/C++ compiling? Is this Makefile organized in a logical and readable way? Are there GNU Make features (or compiler options) that I am missing? (For instance, would VPATH be useful in this situation? As I understand it, it's useful for compilation out of the source tree, which seems... redundant.) What other things could I improve? More specific question: I use ANSI escape codes for colors. Should I assume such a computer also has mkdir -p support (which would remove some dependencies)? Also, if I need to add some comments, please tell me. 
all: .PHONY: all NAME?=kernel SYSROOT?=sysroot TARGET_MACHINE?=i686 OPTIM?=2 DEBUG?=1 # FORCE_COLOR: set to non-empty, non 0 to force colorized output # ECHO: set to non-empty, non 0 to echo commands out usage: @echo 'Targets:' @echo ' - usage (current target)' @echo ' - all (default target)' @echo ' - kernel.iso' @echo ' - obj/{target}/*.o' @echo ' - clean' @echo ' - distclean' @echo '' @echo 'Options:' @echo ' - NAME: rename the kernel in the GRUB menu' @echo ' - SYSROOT: system root directory' @echo ' - TARGET_MACHINE: target architecture' @echo ' - OPTIM: GCC optimization level (-O is prepended)' @echo ' - DEBUG: set to 0 for release build, set to non-0 for debug build (default)' @echo ' - FORCE_COLOR: set to non-0 to force colorized output' @echo ' - ECHO: set to non-0 to echo out commands executed' .PHONY: usage ifeq ($(ECHO:0=),) SILENCER:=@ else SILENCER:= endif ifeq ($(SYSROOT),) $(error SYSROOT cannot be empty!) endif ifeq ($(strip $(TARGET_MACHINE)),i686) AS:=i686-elf-as CC:=i686-elf-gcc CXX:=i686-elf-g++ else $(error Unknown target machine $(strip $(TARGET_MACHINE))) endif ifneq ($(strip $(DEBUG)),0) CFLAGS+= -g -DDEBUG CXXFLAGS+= -g -DDEBUG OBJDIR?=debug else CFLAGS+= -DRELEASE CXXFLAGS+= -DRELEASE OBJDIR?=release endif COMMON_WARNINGS:=-Wall -Wextra -Wfloat-equal -Wundef -Werror=shadow -Werror=implicit-function-declaration COMMON_WARNINGS+= -Werror=return-type -Werror=pointer-arith -Werror=strict-overflow COMMON_WARNINGS+= -Wwrite-strings -Waggregate-return -Wcast-qual -Werror=switch-enum -Wconversion -Wunreachable-code COMMON_WARNINGS+= -Werror=format=2 -Werror=format-overflow=2 -Werror=format-signedness -Wformat-truncation=2 COMMON_WARNINGS+= -Wnull-dereference -Wimplicit-fallthrough=3 -Wfatal-errors COMMON_WARNINGS+= -fanalyzer -Wmissing-include-dirs -Wshift-overflow=2 -Wunknown-pragmas -Wstringop-overflow=4 COMMON_WARNINGS+= -Wsuggest-attribute=pure -Wsuggest-attribute=const -Wsuggest-attribute=noreturn COMMON_WARNINGS+= -Wsuggest-attribute=malloc 
-Wsuggest-attribute=format -Wsuggest-attribute=cold -Wmissing-noreturn COMMON_WARNINGS+= -Wmissing-format-attribute -Walloc-zero -Werror=attribute-alias=2 -Wduplicated-branches COMMON_WARNINGS+= -Werror=duplicated-cond -Wsystem-headers -Wtrampolines -Wstack-usage=1024 -Wunsafe-loop-optimizations COMMON_WARNINGS+= -Wunused-macros -Wcast-align=strict -Wdate-time -Wlogical-op -Wredundant-decls -Winline COMMON_WARNINGS+= -Wdisabled-optimization CFLAGS_WARNINGS:= -Werror=jump-misses-init -Werror=strict-prototypes CXXFLAGS_WARNINGS:= -Werror=overloaded-virtual override ASFLAGS+= override CFLAGS+= -Isrc/common/include -std=gnu17 -ffreestanding -O$(OPTIM) $(COMMON_WARNINGS) $(CFLAGS_WARNINGS) override CXXFLAGS+= -Isrc/common/include -std=gnu++20 -ffreestanding -fdiagnostics-show-template-tree -O$(OPTIM) $(COMMON_WARNINGS) override CXXFLAGS+= $(CXXFLAGS_WARNINGS) # Machine specific ifeq ($(strip $(TARGET_MACHINE)),i686) # SSE3 is required, AVX/AVX512 is enabled if available override CFLAGS+= -mmmx -msse -msse2 -msse3 override CXXFLAGS+= -mmmx -msse -msse2 -msse3 override CFLAGS+= -DIS_I686 override CXXFLAGS+= -DIS_I686 #CFLAGS/CXXFLAGS+= -mcmodel=kernel -mno-red-zone in x86-64 endif # For the entry kernel file, if compiled in C++ # CXXFLAGS_ENTRYKER=-fno-exception -fno-rtti override LDFLAGS+= -ffreestanding -O$(OPTIM) -nostdlib override LDLIBS+= -lgcc ASKERFLAGS+= CKERFLAGS+= -D__kernel__ -Isrc/kernel/include CXXKERFLAGS+= -D__kernel__ -Isrc/kernel/include CRTBEGIN_OBJ:=$(shell $(CC) $(CFLAGS) -print-file-name=crtbegin.o) CRTEND_OBJ:=$(shell $(CC) $(CFLAGS) -print-file-name=crtend.o) OBJLIST=$(OBJLIST_KERNEL) OBJLIST_KERNEL:= SPECIAL_OBJS:=%/crti.o $(CRTBEGIN_OBJ) $(CRTEND_OBJ) %/crtn.o INSTALL_HEADERS:= # Until bug #101648 is fixed obj/$(OBJDIR)/kernel/arch/i686/mm.o: private CFLAGS+= -Wno-analyzer-malloc-leak .SUFFIXES: .SECONDEXPANSION: ifneq ($(MAKECMDGOALS),clean) .: ; # $(eval $(call reproduce_tree,<base>)) define reproduce_tree = $(1): ; $(SILENCER)mkdir $$@ 
$(1)/kernel: | $(1) ; $(SILENCER)mkdir $$@ $(1)/kernel/arch: | $(1)/kernel ; $(SILENCER)mkdir $$@ $(1)/kernel/arch/$(TARGET_MACHINE): | $(1)/kernel/arch ; $(SILENCER)mkdir $$@ endef obj: ; $(SILENCER)mkdir $@ obj/$(OBJDIR): | obj $(eval $(call reproduce_tree,obj/$(OBJDIR))) $(eval $(call reproduce_tree,makedir)) $(SYSROOT): ; $(SILENCER)mkdir $@ $(SYSROOT)/boot: | $(SYSROOT) ; $(SILENCER)mkdir $@ $(SYSROOT)/boot/grub: | $(SYSROOT)/boot ; $(SILENCER)mkdir $@ $(SYSROOT)/usr: | $(SYSROOT) ; $(SILENCER)mkdir $@ $(SYSROOT)/usr/include: | $(SYSROOT)/usr ; $(SILENCER)mkdir $@ endif # Colors: # ------- # +----------+-----------+ # | 3 | 9 | # +-+----------+-----------+ # |0| | | Black # |1| | RM | Red # |2| | [MSG] | Green # |3| Creating | ISO | Yellow # |4|Installing| CP | Blue # |5| --- |LD/Checking| Purple # |6| AS/C/C++ | | Cyan # |7| | | White/gray # +-+----------+-----------+ # $(call colorize,<br_color>,<br_text>,<text_color>,<text>) ifdef $(if $(FORCE_COLOR:0=),FORCE_COLOR,MAKE_TERMOUT) colorize=@echo "\033[$(1)m[$(2)]\033[m \033[$(3)m$(4)\033[m" else colorize=@echo "[$(2)] $(4)" endif define newline := endef # $(call remove,<list of file_names to remove>) define remove = $(call colorize,1;91,RM ,91,Removing $(1)) $(SILENCER)$(RM) -r $(1) endef # $(eval $(call install_header,<install_dir>,<source_dir>,<file_name>)) define install_header = INSTALL_HEADERS+=$(SYSROOT)/usr/include/$(1)$(3) $(SYSROOT)/usr/include/$(1)$(3): src/$(2)$(3) | $$$$(@D) $(call colorize,94,CP ,34,Installing $(3)) $(SILENCER)cp $$^ $$@ endef # $(eval $(call add_deptree,<output_filename_noext>,<input_filename_withoutsrc>)) ifeq ($(MAKECMDGOALS),clean) add_deptree= else define add_deptree = makedir/$(1).mk: | $$$$(@D) $(call colorize,95,DEP,33,Creating $(2) dependancies) $(SILENCER)set -e; $$(CC) $$(CFLAGS) $$(CKERFLAGS) -MM src/$(2) \ | sed 's,\($$(notdir $$(basename $(2)))\)\.o[ :]*,src/$$(dir $(2))\1.o $$@: ,g' >$$@ include makedir/$(1).mk endef endif # $(call 
kernel_o,<base_dir>,<source_filename>,<output_filename>) kernel_o=obj/$(OBJDIR)/kernel/$(1)$(3).o # $(eval $(call compile_kernel_s,<base_dir>,<source_filename>,<output_filename>)) define compile_kernel_s = OBJLIST_KERNEL+=$(call kernel_o,$(1),$(2),$(3)) $(call kernel_o,$(1),$(2),$(3)): src/kernel/source/$(1)$(2).s | $$$$(@D) $(call colorize,36,AS ,92,Compiling $$@) $(SILENCER)$$(AS) $$(ASFLAGS) $$(ASKERFLAGS) -c src/kernel/source/$(1)$(2).s -o $$@ endef # $(eval $(call compile_kernel_c,<base_dir>,<source_filename>,<output_filename>)) define compile_kernel_c = $$(eval $$(call add_deptree,kernel/$(1)$(3),kernel/source/$(1)$(2).c)) OBJLIST_KERNEL+=$(call kernel_o,$(1),$(2),$(3)) $(call kernel_o,$(1),$(2),$(3)): src/kernel/source/$(1)$(2).c | $$$$(@D) $(call colorize,36,C ,92,Compiling $$@) $(SILENCER)$$(CC) $$(CFLAGS) $$(CKERFLAGS) -c src/kernel/source/$(1)$(2).c -o $$@ endef # $(eval $(call compile_kernel_cxx,<base_dir>,<source_filename>,<output_filename>)) define compile_kernel_cxx = $$(eval $$(call add_deptree,kernel/$(1)$(3),kernel/source/$(1)$(2).cpp)) OBJLIST_KERNEL+=$(call kernel_o,$(1),$(2),$(3)) $(call kernel_o,$(1),$(2),$(3)): src/kernel/source/$(1)$(2).cpp | $$$$(@D) $(call colorize,36,C++,92,Compiling $$@) $(SILENCER)$$(CXX) $$(CXXFLAGS) $$(CXXKERFLAGS) -c src/kernel/source/$(1)$(2).cpp -o $$@ endef # $(eval $(call compile_arch_dependant,<arch list>,<group>,<lang>,<args...>)); arch will be added automatically compile_arch_dependant = $(foreach arch,$(1),$\ $(if $(TARGET_MACHINE:$(arch)),$\ OBJLIST+=$(call $(2)_o,$(4),$(5),$(6))$(newline),$\ $(call compile_$(2)_$(3),arch/$(arch)/$(4),$(5),$(6))$(newline))) kernel.iso: $(SYSROOT)/boot/kernel.kern | $(SYSROOT)/boot/grub $(call colorize,93,ISO,33,Creating iso) $(SILENCER)echo "menuentry \"$(NAME)\" {\n\tmultiboot /boot/kernel.kern\n}" >$(SYSROOT)/boot/grub/grub.cfg $(SILENCER)grub-mkrescue -o kernel.iso $(SYSROOT) -quiet $(SYSROOT)/boot/kernel.kern: $$(OBJLIST_KERNEL) 
obj/$(OBJDIR)/kernel/arch/$(TARGET_MACHINE)/crti.o \ obj/$(OBJDIR)/kernel/arch/$(TARGET_MACHINE)/crtn.o src/$(TARGET_MACHINE)/linker.ld | $$(@D) $(call colorize,95,LD ,92,Linking $@) $(SILENCER)$(CC) -T src/$(TARGET_MACHINE)/linker.ld -o $@ \ obj/$(OBJDIR)/kernel/arch/$(TARGET_MACHINE)/crti.o $(CRTBEGIN_OBJ) $(OBJLIST_KERNEL) $(LDFLAGS) $(LDLIBS) \ $(CRTEND_OBJ) obj/$(OBJDIR)/kernel/arch/$(TARGET_MACHINE)/crtn.o $(call colorize,35,---,95,Checking output kernel) $(SILENCER)grub-file --is-x86-multiboot $@ $(eval $(call compile_arch_dependant,i686,kernel,s,,crti,crti)) $(eval $(call compile_arch_dependant,i686,kernel,s,,boot,boot)) $(eval $(call compile_arch_dependant,i686,kernel,s,,tables,tables)) $(eval $(call compile_arch_dependant,i686,kernel,s,,interrupts,interrupts)) $(eval $(call compile_arch_dependant,i686,kernel,s,,apic,apic)) $(eval $(call compile_arch_dependant,i686,kernel,s,,atomic,atomic)) $(eval $(call compile_kernel_c,,entry,entry)) $(eval $(call compile_arch_dependant,i686,kernel,c,,kernel,kernel)) $(eval $(call compile_arch_dependant,i686,kernel,s,,tty,tty.s)) $(eval $(call compile_arch_dependant,i686,kernel,c,,tty,tty.c)) $(eval $(call compile_arch_dependant,i686,kernel,c,,mm,mm)) $(eval $(call compile_kernel_c,,multiboot,multiboot)) $(eval $(call compile_arch_dependant,i686,kernel,s,,crtn,crtn)) OBJLIST_KERNEL:=$(filter-out $(SPECIAL_OBJS),$(OBJLIST_KERNEL)) all: kernel.iso clean: $(call remove,kernel.iso) $(call remove,isodir) $(call remove,$(OBJLIST)) $(call remove,obj/$(OBJDIR)/kernel/arch/$(TARGET_MACHINE)/crti.o obj/$(OBJDIR)/kernel/arch/$(TARGET_MACHINE)/crtn.o) $(call remove,$(INSTALL_HEADERS)) .PHONY: clean distclean: clean $(call remove,makedir) .PHONY: distclean .DELETE_ON_ERROR: Answer: Here are some things that may help you improve your code. Understand your tools There are a huge number of redundant compiler warnings that serve little purpose except to clutter up the Makefile. 
I would recommend trimming that to the smallest possible non-redundant equivalent. The reason is that it will be very painful if you decide that you need to, for example, ignore a particular cast warning, but can't easily turn it off because -Wall enables it anyway. Also, alphabetizing the remaining flags will help in maintenance. The documentation for gcc, for example, shows the -Wall and -Wextra flags in alphabetical order, which makes it easier to scan the list. Don't override all user settings If I have set ASFLAGS, CFLAGS, and CXXFLAGS on my system, it seems rather presumptuous to override every single one of those in the Makefile. Better would be either to allow for using user environment strings or, at the very least, to let the user know you're ignoring all of them. Put user-adjustable variables at the top I have the equivalent of i686-elf-gcc on my machine, but that is not the name of the executable on my machine. User variables such as these should be at the top of the Makefile, if they're used at all (see previous note). Support out-of-tree build Right now, everything is rigidly nailed in place with no flexibility about where or how the project is built. The problem with that is that it means that, for example, trying two different versions to see how they perform is made more difficult because there is not any obvious way to specify the destination directory tree. That is something that autotools supports that is extremely useful. Rethink your use of macros The Makefile currently contains this: define compile_kernel_c = $$(eval $$(call add_deptree,kernel/$(1)$(3),kernel/source/$(1)$(2).c)) OBJLIST_KERNEL+=$(call kernel_o,$(1),$(2),$(3)) $(call kernel_o,$(1),$(2),$(3)): src/kernel/source/$(1)$(2).c | $$$$(@D) $(call colorize,36,C ,92,Compiling $$@) $(SILENCER)$$(CC) $$(CFLAGS) $$(CKERFLAGS) -c src/kernel/source/$(1)$(2).c -o $$@ endef This is obtuse in the extreme. 
If the purpose is to provide separate flags for certain subsets of source code, the way to do that is to create variables such as your existing OBJLIST_KERNEL for each of the different types and then create either an explicit or implicit rule for each. You can have 100% control with implicit rules if you think carefully about what you're doing. If you run GNU make with --trace --always-make you will see that the build is not reliable with this Makefile. If, for example, the obj directory already exists, it will fail. Consider supporting a "help" target Most Makefiles that I use or maintain support make help rather than make usage. At the least, I'd suggest supporting help as an alias for usage. Fix the bug(s) Your colorize macro does not seem to work on my 64-bit Linux machine. Instead of color, I get things like this: make: i686-elf-gcc: No such file or directory make: i686-elf-gcc: No such file or directory \033[95m[DEP]\033[m \033[33mCreating kernel/source/multiboot.c dependancies\033[m /bin/sh: i686-elf-gcc: command not found The reason for this is that you are using \033 to represent ESC, but that requires the use of echo -e.
{ "domain": "codereview.stackexchange", "id": 41886, "tags": "makefile, kernel" }
Filter Artifacts from Periodic Filtering
Question: I have a merged image containing 20 segments. When I concatenate them directly, I have obviously observable patches in the whole image. I want to get rid of them while distorting the original image as little as possible by using some sort of post-processing. What I did first was to apply Gaussian and median filters. However, they did not work properly, which was expected. And then I applied a low-pass Butterworth filter (using the skimage library), since the sudden changes at the merging points of the patches imply high-frequency components. Even though the resulting image looks decent enough, I have an issue that I would also like to solve. The main issue resides in the bottom part of the images. In the original patches, there are no high-valued components at the bottom of the picture. However, when I post-process it via the low-pass Butterworth filter, it seems that there is some sort of leakage from the top of the original patch to the bottom part. Is this due to the periodic nature of the frequency (FFT) domain? In addition, how can I solve this issue without having the residuals in the bottom part? Answer: What you see is indeed caused by filtering in the frequency domain, which implicitly assumes periodic boundary conditions. A better choice would be the replicate / nearest boundary condition. In Python you may use scipy.ndimage.convolve() with mode = 'nearest'. Yet this will force you to extract the kernel you want as an array. If you use SciKit Image, what you need is to pad the image with numpy.pad() using mode = 'edge' before applying the filter, then apply the filter and crop.
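The leakage mechanism is easy to reproduce in one dimension with plain NumPy (a toy sketch with a moving-average kernel standing in for the Butterworth filter; the signal values are made up for illustration):

```python
import numpy as np

# Toy 1-D "column" of the image: large values at the top, zeros at the bottom.
x = np.array([9.0, 9.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
k = np.ones(3) / 3.0  # simple moving-average kernel as a stand-in low-pass filter

# Periodic (FFT-style) boundary: wrap the signal around before filtering.
periodic = np.convolve(np.concatenate([x[-1:], x, x[:1]]), k, mode='valid')

# Replicate / nearest boundary: pad with the edge values instead.
replicate = np.convolve(np.pad(x, 1, mode='edge'), k, mode='valid')

print(periodic[-1], replicate[-1])  # ~3.0 vs 0.0: the top "leaks" to the bottom only with wrap-around
```

With the periodic boundary, the large top values contaminate the bottom sample, which is exactly the artifact described in the question; edge padding avoids it.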
{ "domain": "dsp.stackexchange", "id": 10944, "tags": "python, image-processing" }
How do we know that the cosmic background radiation comes from the early universe?
Question: How do we know that the source of the CMB comes from the early universe, and we don't simply observe the rare interstellar or intergalactic dust of 3K temperature? Answer: The Big Bang model is the current standard model for the evolution of the universe. It is built up by using General Relativity equations, particle data, and astrophysical observations. It is a successful model because there has been no unexplainable contradiction, i.e. the model developed to encompass and explain mathematically what looked like contradictions. That the universe is not in a steady (thermodynamic) state is an observational fact: it is expanding, and recently accelerated expansion has been observed with better measurements. The Cosmic Microwave Background is electromagnetic radiation, i.e. photons of very low frequency, not dust. There are experiments measuring and mapping the sky for this. (Figure caption: Graph of the cosmic microwave background spectrum measured by the FIRAS instrument aboard COBE, the most precisely measured black-body spectrum in nature. The error bars are too small to be seen even in an enlarged image, and it is impossible to distinguish the observed data from the theoretical curve.) The current history of the universe goes as follows: In the Big Bang model for the formation of the universe, Inflationary Cosmology predicts that after about 10^−37 seconds the nascent universe underwent exponential growth that smoothed out nearly all inhomogeneities. The remaining inhomogeneities were caused by quantum fluctuations in the inflaton field that caused the inflation event. After 10^−6 seconds, the early universe was made up of a hot, interacting plasma of photons, electrons, and baryons. As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. 
This recombination event happened when the temperature was around 3000 K, or when the universe was approximately 379,000 years old. At this point, the photons no longer interacted with the now electrically neutral atoms and began to travel freely through space, resulting in the decoupling of matter and radiation. The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to 2.7260±0.0013 K, it will continue to drop as the universe expands. The extreme uniformity of the map of the sky is also striking. (Figure caption: The detailed, all-sky picture of the infant universe created from nine years of WMAP data. The image reveals 13.77 billion year old temperature fluctuations (shown as color differences) that correspond to the seeds that grew to become the galaxies. The signal from our Galaxy was subtracted using the multi-frequency data. This image shows a temperature range of ±200 microkelvin.) Note that the scale of the color map is about 5 orders of magnitude smaller than the current black-body temperature of the CMB; the all-sky map is extremely uniform. It was the measurement of this uniformity that forced modeling the beginning of the universe quantum mechanically: at the time the photons decoupled, the universe could not have come into thermodynamic equilibrium, because due to the light cones of special relativity some parts could not thermodynamically interact with other parts so as to homogenize the radiation. We know it is radiation, we know it is uniform and has a black-body spectrum, and it fits in the whole current model of the universe. Dust certainly does not.
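The drop from ~3000 K at recombination to today's 2.726 K follows the standard redshift scaling T_today = T_rec / (1 + z). A quick sanity check (z ≈ 1100 is an assumed round value for the recombination redshift, not stated in the answer):

```python
t_rec = 3000.0   # K, plasma temperature at recombination (from the answer)
z_rec = 1100.0   # approximate redshift of recombination (assumed round value)

# The CMB temperature redshifts as the universe expands: T = T_rec / (1 + z).
t_today = t_rec / (1.0 + z_rec)
print(round(t_today, 2))  # 2.72
```

This lands within a few millikelvin of the measured 2.7260 K, which is part of why the early-universe origin is so compelling.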
{ "domain": "physics.stackexchange", "id": 27213, "tags": "cosmology, cosmic-microwave-background" }
Polynomial calculation in SymPy
Question: This program is written in SymPy 1.9 and should find a polynomial g of degree dim for a given polynomial f such that f o g = g o f, as described here, where I already posted a program written in Maxima. This is my first program in SymPy, so I am interested in your feedback on this program except on the algorithm itself. from sympy import symbols, pprint, expand, Poly, IndexedBase, Idx,Eq,solve from sympy.abc import x,f,g,i fg,gf,d=symbols('fg gf d') f=x**3+3*x # a polynomial g exists #f=x**3+4*x # a polynomial g does not exist dim=5 a=IndexedBase('a') i=Idx('i',dim) # create a polynomial of degree dim # with unknown coefficients, # but the coefficient of the highest power is 1 # and the lowest power is 1 g=a[0]+x**dim for i in range(1,dim): g+=x**i*a[i] # calculate fg = f o g # and gf = g o f fg=f gf=g fg=expand(f.subs(x,g)) gf=expand(g.subs(x,f)) # calculate the difference d=(f o g) - (g o f) # and check how to choose the coefficients of g # such that it will become 0 difference=(fg-gf).as_poly(x) candidates=solve(difference.all_coeffs()[:dim],[a[i] for i in range(dim)]) replacements=[(a[i],candidates[0][i]) for i in range(dim)] if (difference.subs(replacements)==0): pprint(g.subs(replacements)) else: print("no solution exists") Answer: Turning into a Function I would advise turning your program into a function. This allows people to use your function in other Python scripts. 
That is: def poly_commute(f, dim): a=IndexedBase('a') i=Idx('i',dim) g=a[0]+x**dim for i in range(1,dim): g+=x**i*a[i] # calculate fg = f o g # and gf = g o f fg=f gf=g fg=expand(f.subs(x,g)) gf=expand(g.subs(x,f)) # calculate the difference d=(f o g) - (g o f) # and check how to choose the coefficients of g # such that it will become 0 difference=(fg-gf).as_poly(x) candidates=solve(difference.all_coeffs()[:dim],[a[i] for i in range(dim)]) replacements=[(a[i],candidates[0][i]) for i in range(dim)] if (difference.subs(replacements)==0): return g.subs(replacements) else: return None Also, you should add some documentation so others can understand what your function does. Related: Removing Global Variables Managing state can be a pain. Keeping track of what can be in which state can be a headache. It isn't of course unmanageable in this case, but it would be useful to get rid of the state. To do so, I would add a main function. def main(): f = x**3 + 3*x dim = 5 g = poly_commute(f, dim) if g: pprint(g) else: print("No solution exists.") if __name__ == '__main__': main() PEP 8: There is some spacing of equations that seem to be off. This is explained in "Other recommendations" of PEP 8: If operators with different priorities are used, consider adding whitespace around the operators with the lowest priority(ies). Use your own judgment; however, never use more than one space, and always have the same amount of whitespace on both sides of a binary operator: # Correct: i = i + 1 submitted += 1 x = x*2 - 1 hypot2 = x*x + y*y c = (a+b) * (a-b) # Wrong: i=i+1 submitted +=1 x = x * 2 - 1 hypot2 = x * x + y * y c = (a + b) * (a - b)
{ "domain": "codereview.stackexchange", "id": 42658, "tags": "python, symbolic-math, sympy" }
Could this depth system for a game be improved?
Question: I am still new to C++ and don't have great insight into my coding yet, so I would be very grateful to anyone and everyone who gives advice. Also, this code is meant to: keep all of my objects in an ordered fashion based on depth. I have a couple of functions that allow for easy management, and I made object a friend of the depth manager because I only want the depthManager to have control over each object's idDepth and depth, which are two different things. The reason I need a depth system is that I need to have objects execute their code in an orderly fashion, and I also need control over which objects are drawn first to last. This class has been tested and works as expected. Stay up to date with this project on GitHub.com Abstract object class: class object{ // Placement Data unsigned int depth, idDepth, idObject, idMain; // Friends friend class depthManager; friend class objectManager; protected: unsigned int getDepthId(){ return idDepth; } public: virtual void update() = 0; virtual void draw() = 0; unsigned int getDepth() { return this->depth; } }; note: I have each object handle its own update and draw events for easier design. I'm mainly asking for approval of the depth system. 
depthManager class:

class depthManager{
private:
    std::map<unsigned, std::vector<object*>* > objectMap;

    void changeListPlacement( unsigned int depth, unsigned int position, int change);

public:
    void objectAdd( unsigned int depth, object* obj);
    void objectRemove( object* obj );
    void objectMove( unsigned int depth, object* obj );
};

depthManager's functions:

void depthManager::objectAdd(unsigned int depth, object *obj)
{
    obj->depth = depth;

    // Check if depth key existant
    if ( objectMap.find( depth ) != objectMap.end() )
    {
        std::vector< object* >* &refVec = objectMap[ depth ];
        refVec->push_back( obj );
        obj->idDepth = (unsigned)(int)refVec->size() - 1;
    }
    else // Add new Key
    {
        objectMap[ depth ] = new std::vector< object* >;
        std::vector< object* >* &refVec = objectMap[ depth ];
        refVec->push_back( obj );
        obj->idDepth = (unsigned)(int)refVec->size() - 1;
    }
}

void depthManager::changeListPlacement(unsigned int depth, unsigned int position, int change = -1)
{
    if ( objectMap.find( depth ) == objectMap.end() )
    {
        return;
    }

    std::vector<object*>* &refVec = objectMap[ depth ];
    for( unsigned int i = refVec->size() - 1; i > position; i -- )
    {
        object* &pObj = refVec->at( i );
        pObj->idDepth += change;
    }
}

void depthManager::objectRemove(object *obj)
{
    if ( objectMap.find( obj->depth ) == objectMap.end() )
    {
        std::cout << "ERROR DEPTH NOT FOUND \n" << std::flush ;
        return;
    }

    std::vector<object*>* &refVec = objectMap[ obj->depth ];
    changeListPlacement( obj->depth, obj->idDepth);
    refVec->erase( refVec->begin() + obj->idDepth );
}

void depthManager::objectMove(unsigned int depth, object *obj)
{
    this->objectRemove( obj );
    this->objectAdd( depth, obj );
}

previous versions: Depth Manager Source Code

Answer: Some remarks on your current map-based implementation:

The idiomatic way of accessing an item in a map and adding it if not present usually goes like this:

if not map.contains(key)
    map[key] = new value
value = map[key]
... modify value

So objectAdd could be shortened:

obj->depth = depth;

// make sure we have an object vector for the given depth
if ( objectMap.find( depth ) == objectMap.end() )
{
    objectMap[ depth ] = new std::vector< object* >;
}

std::vector< object* >* &refVec = objectMap[ depth ];
refVec->push_back( obj );
obj->idDepth = (unsigned)(int)refVec->size() - 1;

In objectRemove it is apparently illegal to pass an object with a non-existent depth. Printing an error message to stdout is not the best way to handle an error like that. You should throw an appropriate exception (fail early is a valuable debugging tool) or allow it by ignoring invalid depths.

It is not immediately clear what objectMove exactly does based on its name and the names of its parameters. It looks like it moves an object to a different depth. So a better name and signature might be:

void depthManager::changeDepth(unsigned int newDepth, object *obj)

In class object the methods getDepthId() and getDepth() do not modify the object state, so you should consider making them const (i.e. unsigned int getDepth() const { return this->depth; }).

I'm not 100% convinced of the idDepth property. It basically just reflects the current position of the object in the depth-list and you are writing a fair amount of boilerplate code to keep it that way. This increases the complexity of the class somewhat. I'd revisit the concept and check if I can't get by without it.
{ "domain": "codereview.stackexchange", "id": 5354, "tags": "c++, performance, object-oriented, beginner, memory-management" }
Recursive brute-force approach to maximum points you can obtain from cards
Question: I came across this question on Leetcode. The question description is as follows:

There are several cards arranged in a row, and each card has an associated number of points. The points are given in the integer array cardPoints. In one step, you can take one card from the beginning or from the end of the row. You have to take exactly k cards. Your score is the sum of the points of the cards you have taken. Return the maximum score you can obtain.

Sample testcases:

cardPoints = [1,2,3,4,5,6,1], k = 3 => Output is 12
cardPoints = [2,2,2], k = 2 => Output is 4
cardPoints = [9,7,7,9,7,7,9], k = 7 => Output is 55
cardPoints = [1,1000,1], k = 1 => Output is 1

After looking at the discussion forums, I realized that this could be converted into a sliding window problem where we have to find the smallest subarray sum of length len(cardPoints) - k. While I do understand this, the initial method I tried was brute-force recursion, then dynamic programming to cache intermediate results. Despite this, it still results in a timeout. Is there any other optimization I can make to make my code run faster using this approach?
class Solution {
public:
    int maxScoreUtil(int left, int right, vector<int>& cardPoints, int k, vector<vector<int>>& dp){
        if(k == 0 || left == cardPoints.size() || right < 0)
            return 0;
        if(dp[left][right] != -1)
            return dp[left][right];
        int val_1 = maxScoreUtil(left+1, right, cardPoints, k-1, dp) + cardPoints[left];
        int val_2 = maxScoreUtil(left, right-1, cardPoints, k-1, dp) + cardPoints[right];
        return dp[left][right] = max(val_1, val_2);
    }

    int maxScore(vector<int>& cardPoints, int k) {
        int n = cardPoints.size();
        vector<vector<int>> dp(n+1, vector<int>(n+1, -1));
        return maxScoreUtil(0, n-1, cardPoints, k, dp);
    }
};

Before using DP => 16/40 test cases passed followed by TLE
After using DP => 31/40 test cases passed followed by TLE

Answer: Your recursive solution takes numbers from [0..left] and [right..n-1] where (left+1)+(n-right) <= k. So even though there are only k ways to split the selection between the left and right ends, i.e. (0,k), (1,k-1), ..., (k,0), you look at a far bigger sample space; in the worst case, it would be \$O(n^2)\$. I don't think much can be done with this approach.
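For reference, the sliding-window reformulation mentioned in the question (maximize the total minus the minimum-sum contiguous subarray of length n - k) runs in O(n). This is a sketch of that alternative approach, not the poster's code:

```python
def max_score(card_points, k):
    n = len(card_points)
    window = n - k                      # the cards left behind are contiguous
    total = sum(card_points)
    window_sum = sum(card_points[:window])
    best = window_sum
    for i in range(window, n):          # slide the window one card at a time
        window_sum += card_points[i] - card_points[i - window]
        best = min(best, window_sum)
    return total - best                 # take everything except the cheapest window

print(max_score([1, 2, 3, 4, 5, 6, 1], 3))  # 12
```

The window sum is updated incrementally, so each of the sample testcases above is handled in a single linear pass, including the k == n edge case where the window is empty.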
{ "domain": "codereview.stackexchange", "id": 38512, "tags": "c++, programming-challenge, recursion, time-limit-exceeded, dynamic-programming" }
Reaction of 2,4-DNPH with aromatic aldehydes
Question: Does 2,4-dinitrophenylhydrazine react with aromatic aldehydes, e.g. benzaldehyde (phenylmethanal), in a condensation reaction similar to 2,4-DNPH's reaction with aldehydes and ketones?

Answer: 2,4-DNPH is a derivatization reagent for the identification of ketones and aldehydes. This was extensively used in organic analysis before instrumental methods were readily available. You determined the boiling point of the problem compound, which in the case of benzaldehyde is $179\ ^{\circ}\mathrm{C}$, then you made a solid derivative and determined its melting point. In the case of benzaldehyde and 2,4-DNPH that would be $237\ ^{\circ}\mathrm{C}$. Then you go to the tables, cross-check both values, and have benzaldehyde identified. For aldehydes and ketones there are different options: oxime, semicarbazone, phenylhydrazone, p-nitrophenylhydrazone and 2,4-dinitrophenylhydrazone. The latter was usually preferred because these derivatives have higher melting points, so it was easier to get a solid even if not totally pure.
{ "domain": "chemistry.stackexchange", "id": 10400, "tags": "organic-chemistry, carbonyl-compounds" }
(Altland-Simons) Question about a seemingly additional term in the functional field integral
Question: The following is part of the book by Altland-Simons. My question is about the additional $-\overline{\psi}^{n+1}\psi_n$ in $(4.27)$ of the book. I understand that the term $\overline{\psi}^{n}\psi_n$ arises due to the overcompleteness relation, i.e. $$ \exp\left({-\mbox{$\sum_i$} \bar\psi_i\psi_i}\right)\,, $$ in $(4.25)$. But why does $-\overline{\psi}^{n+1}\psi_n$ arise?

Answer: That's because $$ \langle \psi | \hat{H} |\psi'\rangle =\langle \psi | \psi'\rangle\,H(\bar{\psi},\psi')\,, $$ as written below $(4.28)$. In inserting the resolution of the identity many times you'll get products of $$ \langle \psi^{n+1} | \hat{H} |\psi^n\rangle = \langle \psi^{n+1} | \psi^n\rangle \,H(\bar{\psi}^{n+1},\psi^n)\,. $$ The $H(\bar{\psi}^{n+1},\psi^n)$ is there in the exponential, together with the $\bar{\psi}^n \psi^n$ which, as you say, is compensating for the overcompleteness. The remaining piece is $$ \langle \psi^{n+1} | \psi^n\rangle = \bar{\psi}^{n+1} \psi^n\,, $$ explaining the extra term at the exponent.
{ "domain": "physics.stackexchange", "id": 59639, "tags": "quantum-field-theory, path-integral" }
Why does a thermometer in wind not show a lower temperature than one shielded from it?
Question: I'm a little familiar with the physics and thermodynamics of the wind chill effect, but this question seems to come up from time to time: Why, given two temperature sensors or thermometers in the same environment, do both report the same temperature if one is exposed to wind when the other is shielded from it? People often ask because the temperature reported by, for example, their vehicle, doesn't seem to change as they drive at different speeds, etc. (Other than of course the changes from one actual temperature to another as they change geography.) My understanding is that inert devices aren't endothermic like we are, so the effects of wind chill don't apply. Can you explain this in layman's terms? Answer: It’s really pretty simple: The thermometer measures temperature, wind chill measures heat loss for a body warmer than the air. Wind makes more unheated air available to conduct heat away from a hot body, but with a body at air temperature no heat is being conducted away from the thermometer. Say you asked a secondary question, “When I put a thermometer outside, how long must I wait until it has reached air temperature so that its reading is meaningful?” In that case, the windy conditions will decrease the time it takes to equilibrate.
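The point about equilibration time can be illustrated with Newton's law of cooling. This is a rough numerical sketch; the rate constants below are made-up illustrative values, chosen only so that wind corresponds to a larger constant:

```python
import math

def reading(T0, T_air, k, t):
    """Newton's law of cooling: T(t) = T_air + (T0 - T_air) * exp(-k*t)."""
    return T_air + (T0 - T_air) * math.exp(-k * t)

T0, T_air = 20.0, -5.0        # thermometer taken from indoors into -5 C air
k_calm, k_windy = 0.05, 0.5   # illustrative rate constants (1/s); wind raises k

# Both thermometers end up reading the same air temperature...
calm_final = reading(T0, T_air, k_calm, 600)
windy_final = reading(T0, T_air, k_windy, 600)
print(round(calm_final, 3), round(windy_final, 3))  # both ~ -5.0

# ...but the windy one gets there sooner: time until within 0.1 C of T_air
t_calm = math.log((T0 - T_air) / 0.1) / k_calm
t_windy = math.log((T0 - T_air) / 0.1) / k_windy
print(t_calm > t_windy)  # True
```

Wind changes only how fast the exponential decays toward air temperature, never the value it decays to, which is exactly why the two sensors agree once they have settled.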
{ "domain": "physics.stackexchange", "id": 1164, "tags": "thermodynamics, temperature" }
How to represent relation between users as a feature?
Question: I'm developing a model for unsupervised anomaly detection. I have a dataset representing communications between users (each example represents a communication): there are many features (time, duration, ...) and the ids of sender and receiver. My question is: how to represent the link between those two users? I have several ideas, but each of them seems to have serious drawbacks:

Use the id as is. Drawback: even if ids are integers, they have no numerical sense (id 15 is not 3 times id 5) and I think this may mislead the system.

Use some sort of vectors: for example, with 3 users: user1 = (0 0 1), user2 = (0 1 0), user3 = (1 0 0). Drawback: the number of users may vary over time, thus the number of features would vary as well and I would have to re-train my model.

Graph theory: I've heard of that way of representing data, which could fit my data model perfectly. Drawback: I've absolutely no knowledge in graph analysis.

Assign each user an id which is a prime number. That way a communication could be represented in a unique way as the product of the 2 ids. Drawback: as for point 1, ids do not have a "numerical sense".

What do you think may be the best way to represent these relations?

Answer: There are a couple of approaches you can take depending on the nature of your data. It sounds like you're trying to detect social anomalies in your data, so you need to model the communication boundaries between users, which leads to some sort of graph representation. If you don't have too many users in the system (say $n$) then you can create an $n\times n$ matrix, $M$, over a time period that represents the communications between users. The component $M_{ij}$ could be $1$ or $0$ depending on whether users $i$ and $j$ communicated, or it could be the number of times that they communicated. If you have more data then you would want to represent the data in terms of nodes and edges. Nodes would be the users and edges would be the presence of a communication.
This can be done manually or by using a library such as NetworkX. Here's a Python tutorial on getting started with graph network analysis. If you're doing this at a large scale then you might want to use a graph database such as Neo4J.
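As a concrete starting point for the matrix representation described above, here is a pure-Python sketch without any graph library; the user ids and communication records are hypothetical:

```python
from collections import defaultdict

def build_comm_matrix(communications, n_users):
    """n x n matrix: M[i][j] = number of times user i contacted user j."""
    M = [[0] * n_users for _ in range(n_users)]
    for sender, receiver in communications:
        M[sender][receiver] += 1
    return M

# hypothetical communications between 4 users, as (sender, receiver) pairs
comms = [(0, 1), (0, 1), (1, 2), (3, 0)]
M = build_comm_matrix(comms, 4)
print(M[0][1])  # 2: user 0 contacted user 1 twice

# the same data as a sparse edge list (the node/edge view, graph style)
edges = defaultdict(int)
for s, r in comms:
    edges[(s, r)] += 1
```

The dense matrix suits a small, fixed user population; the edge-list form is what a library such as NetworkX would build for you and scales better when most user pairs never communicate.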
{ "domain": "datascience.stackexchange", "id": 3048, "tags": "feature-extraction, anomaly-detection, feature-construction" }
Generic wrapper for single value or array of values
Question: I'm writing a chart.js port for C# Blazor (you can find it here) using C# 8.0 with .NET Core 3 (latest preview). In chart.js there is this thing called indexable options. It lets you use either a single value or an array and handles it accordingly. Because I want the port to be as type-safe as possible, I implemented a wrapper which allows you to store a value which either represents a single value or an array of values.

Points that might need (the most) attention:

Equality: This is the main thing I'd like feedback on. I've implemented IEquatable and tried to write "smart" equality checks which use the most appropriate method to compare, but since I'm not an expert, I don't expect them to be perfect :)

Conversion: Because I have implemented two implicit and two explicit conversions, which are vital for the coding experience when using this struct, it's important to me that those are correct.

Summaries: This isn't as important as the other factors but I still think it's nice to write good summaries, especially considering this is a library. I tried to keep it consistent and mimic the summaries of built-in classes like string where it makes sense. Again, it's not as important but if you see something and think "you don't do that usually", I'd love to hear what you'd do instead.

Other: Improving coding style (and other "basic" stuff) is of course always good, so don't hold back on that if anything bugs you.

I'd also like to mention my thoughts on a few design choices because maybe I was wrong before I even started coding this class:

Use struct instead of class because it has value semantics.

Implement IEquatable because it's a struct and therefore avoid unnecessary boxing.

Keep the Value-property public simply because there's no reason to hide it.

Use an array instead of e.g. an IEnumerable because I want the values to already be present when the instance is serialized. This way there is definitely no downtime wasted on e.g.
database access when I try to serialize the instance.

/// <summary>
/// Represents an object that can be either a single value or an array of values. This is used for typesafe js-interop.
/// </summary>
/// <typeparam name="T">The type of data this <see cref="IndexableOption{T}"/> is supposed to hold.</typeparam>
public struct IndexableOption<T> : IEquatable<IndexableOption<T>>
{
    /// <summary>
    /// The compile-time name of the property which stores the wrapped value. This is used internally for serialization.
    /// </summary>
    internal const string PropertyName = nameof(Value);

    /// <summary>
    /// The actual value represented by this instance.
    /// </summary>
    public object Value { get; }

    /// <summary>
    /// Gets the value indicating whether the option wrapped in this <see cref="IndexableOption{T}"/> is indexed.
    /// <para>True if the wrapped value represents an array of <typeparamref name="T"/>, false if it represents a single value of <typeparamref name="T"/>.</para>
    /// </summary>
    public bool IsIndexed { get; }

    /// <summary>
    /// Creates a new instance of <see cref="IndexableOption{T}"/> which represents a single value.
    /// </summary>
    /// <param name="singleValue">The single value this <see cref="IndexableOption{T}"/> should represent.</param>
    public IndexableOption(T singleValue)
    {
        Value = singleValue ?? throw new ArgumentNullException(nameof(singleValue));
        IsIndexed = false;
    }

    /// <summary>
    /// Creates a new instance of <see cref="IndexableOption{T}"/> which represents an array of values.
    /// </summary>
    /// <param name="indexedValues">The array of values this <see cref="IndexableOption{T}"/> should represent.</param>
    public IndexableOption(T[] indexedValues)
    {
        Value = indexedValues ?? throw new ArgumentNullException(nameof(indexedValues));
        IsIndexed = true;
    }

    /// <summary>
    /// Implicitly wraps a single value of <typeparamref name="T"/> to a new instance of <see cref="IndexableOption{T}"/>.
    /// </summary>
    /// <param name="singleValue">The single value to wrap</param>
    public static implicit operator IndexableOption<T>(T singleValue) => new IndexableOption<T>(singleValue);

    /// <summary>
    /// Implicitly wraps an array of values of <typeparamref name="T"/> to a new instance of <see cref="IndexableOption{T}"/>.
    /// </summary>
    /// <param name="indexedValues">The array of values to wrap</param>
    public static implicit operator IndexableOption<T>(T[] indexedValues) => new IndexableOption<T>(indexedValues);

    /// <summary>
    /// Explicitly unwraps an <see cref="IndexableOption{T}"/> to a single value.
    /// <para>If this instance represents an array of values instead of a single value, an <see cref="InvalidCastException"/> will be thrown.</para>
    /// </summary>
    /// <param name="wrappedValue">The wrapped single value</param>
    public static explicit operator T(IndexableOption<T> wrappedValue)
    {
        if (wrappedValue.IsIndexed)
            throw new InvalidCastException("This instance represents an array of values and can't be converted to a single value.");

        return (T)wrappedValue.Value;
    }

    /// <summary>
    /// Explicitly unwraps an <see cref="IndexableOption{T}"/> to an array of values.
    /// <para>If this instance represents a single value instead of an array of values, an <see cref="InvalidCastException"/> will be thrown.</para>
    /// </summary>
    /// <param name="wrappedValue">The wrapped array of values</param>
    public static explicit operator T[](IndexableOption<T> wrappedValue)
    {
        if (!wrappedValue.IsIndexed)
            throw new InvalidCastException("This instance represents a single value and can't be converted to an array of values.");

        return (T[])wrappedValue.Value;
    }

    /// <summary>
    /// Determines whether the specified <see cref="IndexableOption{T}"/> instance is considered equal to the current instance.
    /// </summary>
    /// <param name="other">The <see cref="IndexableOption{T}"/> to compare with.</param>
    /// <returns>true if the objects are considered equal; otherwise, false.</returns>
    public bool Equals(IndexableOption<T> other)
    {
        if (IsIndexed != other.IsIndexed)
            return false;

        if (IsIndexed)
        {
            return EqualityComparer<T[]>.Default.Equals((T[])Value, (T[])other.Value);
        }
        else
        {
            return EqualityComparer<T>.Default.Equals((T)Value, (T)other.Value);
        }
    }

    /// <summary>
    /// Determines whether the specified object instance is considered equal to the current instance.
    /// </summary>
    /// <param name="obj">The object to compare with.</param>
    /// <returns>true if the objects are considered equal; otherwise, false.</returns>
    public override bool Equals(object obj)
    {
        // an indexable option cannot store null
        if (obj == null)
            return false;

        if (obj is IndexableOption<T> option)
        {
            return Equals(option);
        }
        else
        {
            return Value.Equals(obj);
        }
    }

    /// <summary>
    /// Returns the hash of the underlying object.
    /// </summary>
    /// <returns>The hash of the underlying object.</returns>
    public override int GetHashCode()
    {
        return -1937169414 + Value.GetHashCode();
    }

    /// <summary>
    /// Determines whether two specified <see cref="IndexableOption{T}"/> instances contain the same value.
    /// </summary>
    /// <param name="a">The first <see cref="IndexableOption{T}"/> to compare</param>
    /// <param name="b">The second <see cref="IndexableOption{T}"/> to compare</param>
    /// <returns>true if the value of a is the same as the value of b; otherwise, false.</returns>
    public static bool operator ==(IndexableOption<T> a, IndexableOption<T> b) => a.Equals(b);

    /// <summary>
    /// Determines whether two specified <see cref="IndexableOption{T}"/> instances contain different values.
    /// </summary>
    /// <param name="a">The first <see cref="IndexableOption{T}"/> to compare</param>
    /// <param name="b">The second <see cref="IndexableOption{T}"/> to compare</param>
    /// <returns>true if the value of a is different from the value of b; otherwise, false.</returns>
    public static bool operator !=(IndexableOption<T> a, IndexableOption<T> b) => !(a == b);
}

Answer: It's a bit counter-intuitive that you store Value as object if the class IndexableOption<T> is generic. I understand you are trying to find a common type to store both T and T[]. However, if T : struct, then what happens in case Value is T is a thing called boxing*.

From the reference source, under Performance:

In relation to simple assignments, boxing and unboxing are computationally expensive processes. When a value type is boxed, a new object must be allocated and constructed. To a lesser degree, the cast required for unboxing is also expensive computationally.

For instance, in Equals there is an expensive unboxing:

if (IsIndexed)
{
    return EqualityComparer<T[]>.Default.Equals((T[])Value, (T[])other.Value);
}
else
{
    //       unboxing#1                       unboxing#2
    return EqualityComparer<T>.Default.Equals((T)Value, (T)other.Value);
}

I don't think it's worth storing both T and T[] in the same property. Why don't you use 2 properties instead and use the one associated with the value of IsIndexed?

public T Value { get; }
public T[] IndexedValue { get; }

Equals refactored:

if (IsIndexed)
{
    return EqualityComparer<T[]>.Default.Equals(IndexedValue, other.IndexedValue);
}
else
{
    return EqualityComparer<T>.Default.Equals(Value, other.Value);
}

One other thing: you may not like that the consumer has to call IsIndexed in order to decide whether to use Value or IndexedValue. If you really must, you can shield both properties from public access and still decide to use some boxing. But at least, internally, no redundant boxing/unboxing takes place.

protected T Value { get; }
protected T[] IndexedValue { get; }

public object ValueRef => IsIndexed ? IndexedValue : Value;

Or you may want to turn the logic around without using boxing:

public T[] ValueRef => IsIndexed ? IndexedValue : new[] { Value };

Which makes me wonder: perhaps you should only store T[] and return either the array or its sole element. This should avoid most recurring if-statements as well. I also think you should only use EqualityComparer<T>, not EqualityComparer<T[]>, but that has to be verified.

Footnote: Boxing is the process of converting a value type to the type object or to any interface type implemented by this value type.
{ "domain": "codereview.stackexchange", "id": 35817, "tags": "c#, object-oriented, generics, wrapper" }
Is there a maximum number of neutrons of an isotope?
Question: I wondered whether there is a maximum number of neutrons in an isotope. Or is there no maximum number? So, can an H-75 atom exist?

Answer: Effectively there is a maximum number, or rather producing more and more neutron-rich isotopes requires energy. You can think of it this way. Identical particles are affected by the Pauli exclusion principle; this applies to neutrons in the nucleus. Therefore a stack of neutrons will fill up the lowest energy states, but will then have to occupy higher energy states. However, it could then be the case that it is more energetically favourable for a neutron in a higher energy state to beta decay into a proton and an electron (and anti-neutrino). The proton would occupy a lower energy level in the nucleus. But produce too many protons and they fill their possible low-lying energy states and then the possible decay proton has nowhere to go - the beta decay is blocked and instead inverse beta decay can become favourable. In this way there is a locus of stability for the ratio of neutrons to protons as a function of the total mass number of the nucleus. See also this question. If there are too many neutrons then they are unstable to beta decay; too few neutrons and they are unstable to electron capture (inverse beta decay). The plot below indicates this line of stability (taken from the Wikipedia page on beta stability isobars). However, this is a "low density" approximation. It is possible to produce extremely neutron-rich isotopes in the ultra-dense crust regions of neutron stars. Here the nuclei are surrounded by ultra-relativistic degenerate electrons and, at very high densities, degenerate neutrons. The requirement that beta decay electrons cannot be created below the Fermi energy of the electron gas shifts the beta equilibrium towards more neutron-rich nuclei. A crude calculation (the Harrison-Wheeler treatment) suggests that $n/p$ can reach 3 or more at densities above $10^{15}\ \mathrm{kg/m^3}$.

However, the most stable nuclei at these densities must have atomic masses of 200 or more, so your suggestion of Hydrogen-75 is not on the cards. At even higher densities, the individual nuclei lose their identity and form very neutron-rich structures (often called nuclear pasta) and eventually a nucleon fluid dominated by neutrons; but these could not be considered nuclei in the sense you mean.
{ "domain": "physics.stackexchange", "id": 19979, "tags": "nuclear-physics, neutrons, isotopes" }
Is $K(b|a) \geq 1$ if $a\neq b$?
Question: Since $K(a|a) = 0$, is $K(b|a) \geq 1$ when $a\neq b$, as we need at least one bit to distinguish between $K(a|a)$ and $K(b|a)$? If not true in general, is it true if $a$ and $b$ are elegant programs? Answer: Whether $K(a|a) = 0$ could depend on your universal computer. The only thing you can say in general is that $K(a|a) \leq C$ for some constant $C$ independent of $a$. You can arrange for a universal computer for which $K(a|a) = 0$ for all $a$, and then indeed $K(b|a) > 0$ for all $b \neq a$, since there is only one program whose length is zero, and given $a$ it generates $a$.
{ "domain": "cs.stackexchange", "id": 8832, "tags": "kolmogorov-complexity" }
Why does the surface (quantum error correction) code have such a high threshold for errors?
Question: Is there an intuitive explanation why the surface code fares so much better than older quantum error correction codes in terms of its high error threshold, with thresholds of up to a few percent rather than some ppm? If so, what is it? I am particularly interested in having it clarified if such a comparison is a fair (apples to apples) comparison. I understand that the results for older quantum error correction are usually analytic results whilst those for surface codes tend to be numeric. Could it be that the analytic solutions indeed take into account the worst possible (coherent) errors whilst numerical solutions perhaps do not capture the worst possible errors because they can only explore a subset of all possible errors? Answer: Thresholds are often calculated by treating $X$ and $Z$ errors as occurring with the same probability. Any code that corrects one more effectively than the other will then be at a disadvantage: whichever it corrects least effectively will be the bottleneck that determines the threshold. The surface code avoids this by treating these errors in an identical way. For fault-tolerance thresholds, the fidelity of stabilizer measurements is also taken into account. This includes errors that occur in the required entangling gates. Stabilizers that act on more qubits, and so require more entangling gates, will have less reliable measurements. But the surface code has essentially all stabilizers the same size: they all act on four qubits. And that size is relatively low. They are also quasilocal, so there is no error overhead in shunting qubits around to be measured. The combination of these effects means that it competes well in terms of threshold, and should be moderately easy to implement. It's true that analytic results are always worse, because they aim to establish lower bounds rather than exact values. 
They came more from the era when it was important to determine whether fault-tolerant quantum computation was actually possible, even in principle. Just finding a non-zero threshold was a big deal. Now we focus more on things that can be done on real devices, and how they might perform, and seek thresholds as high as we can for practical reasons. The surface code nevertheless wins in a typical apples-to-apples comparison of numerically calculated thresholds (I'll update with a reference when I find one). But it should be noted that the choice of noise model can be used to change the goal posts in favour of your favourite code.
{ "domain": "quantumcomputing.stackexchange", "id": 64, "tags": "error-correction" }
Problem with subscribing to robot_pose_ekf/odom_combined
Question: I am working with Turtlebot. I launched the following launch files containing openni, minimal.launch, depthimage_to_laserscan and robot_pose_ekf. On the workstation, when I run the command rostopic list, I can see both /odom and /robot_pose_ekf/odom_combined in the list of topics. Even when I run rostopic echo /robot_pose_ekf/odom_combined, I can see the content of that topic on the terminal.

Now, the problem is when I subscribe to this topic in my C++ file like this:

robot.OdomCombinedSubscriber = nh.subscribe<nav_msgs::Odometry>("/robot_pose_ekf/odom_combined", 10, &TurtlebotRobot::OdometryCombinedCallback, &robot);

There is nothing received in the callback. When this instruction is executed, there is an error message in red on the Turtlebot saying:

[ERROR] [1389688193.050254972]: Client [/testApp] wants topic /robot_pose_ekf/odom_combined to have datatype/md5sum [nav_msgs/Odometry/cd5e73d190d741a2f92e81eda573aca7], but our version has [geometry_msgs/PoseWithCovarianceStamped/953b798c0f514ff060a53a3498ce6246]. Dropping connection.

I got a similar error for the bumper message, but that was when I was using Groovy on the Turtlebot and Hydro on the workstation. Now I put both on Hydro but I still got the message above. Any help will be appreciated.

Originally posted by Anis on ROS Answers with karma: 253 on 2014-01-13

Post score: 0

Answer: I just found the error. It is in here:

robot.OdomCombinedSubscriber = nh.subscribe<nav_msgs::Odometry>("/robot_pose_ekf/odom_combined", 10, &TurtlebotRobot::OdometryCombinedCallback, &robot);

The message type should be geometry_msgs::PoseWithCovarianceStamped and not nav_msgs::Odometry. I changed the instruction to:

robot.OdomCombinedSubscriber = nh.subscribe<geometry_msgs::PoseWithCovarianceStamped>("/robot_pose_ekf/odom_combined", 10, &TurtlebotRobot::OdometryCombinedCallback, &robot);

and it worked!

Originally posted by Anis with karma: 253 on 2014-01-13

This answer was ACCEPTED on the original site

Post score: 2
{ "domain": "robotics.stackexchange", "id": 16644, "tags": "ros, navigation, turtlebot, odom-combined, robot-pose-ekf" }
Localization just from IMU data?
Question: I am looking for a solution to localize my robot just from IMU data. The problem is, I have some Velodyne 3D scans and IMU data saved to a Bag file and I need to make an Octomap or some other 3D reconstruction method to visualize my data. I tested the laser_scan_matcher and the pointcloud_to_laserscan package but the first throws an error Invalid number of rays: 62112 and the second one just outputs something like a straight line that doesn't change. So both are not capable of processing the huge amount of Velodyne data. Originally posted by madmax on ROS Answers with karma: 496 on 2013-04-29 Post score: 1 Original comments Comment by fiorano10 on 2017-10-09: I got both the laser_scan_matcher and the pointcloud_to_laserscan package working with the VLP-16 LiDAR, you have to change some parameters for both to get them to work. Answer: I don't know what a Velodyne is, but I know you can't hope to use just the IMU unless it's a VERY precise one. For a more indepth discussion of this problem see my answer here. You could try to augment the IMU by using a Kalman with both the IMU data and the odometry data. And you could also give the Velodyne stack a try. Originally posted by Claudio with karma: 859 on 2013-04-30 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by madmax on 2013-04-30: Thanks for your answer. Velodyne is a very precise 3D LIDAR Scanner with I think about 1million points per second. But there seems to be no package to use this point cloud for localization. The problem is I had a 2D Laser scanner mounted on my robot also, but the data was not recorded right... Comment by madmax on 2013-04-30: And I need a way to get a 3D reconstruction of the Velodyne data, and I only have a thermal cam image, Kinect RGB Image ( depth image also got lost somehow) and an omnidirectional image...
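The Kalman fusion suggested in the answer can be sketched in one dimension. This is purely illustrative: the positions and variances below are invented, and a real robot_pose_ekf setup fuses full multi-dimensional states rather than a scalar:

```python
def kalman_update(x, P, z, R):
    """Fuse state estimate (x, P) with a measurement z of variance R."""
    K = P / (P + R)            # Kalman gain: how much to trust the measurement
    x_new = x + K * (z - x)    # corrected estimate
    P_new = (1 - K) * P        # uncertainty shrinks after fusing
    return x_new, P_new

# hypothetical 1D position: odometry says 1.0 m (variance 0.04),
# IMU double-integration says 1.3 m (variance 0.25, much noisier)
x, P = kalman_update(1.0, 0.04, 1.3, 0.25)
print(x)  # pulled only slightly toward the noisy IMU estimate
```

The weighting makes the answer's point concrete: an IMU alone (large R from integration drift) contributes little, which is why it needs to be combined with odometry or another absolute reference.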
{ "domain": "robotics.stackexchange", "id": 13996, "tags": "localization, imu, navigation, octomap, velodyne" }
Outputting all possible words which fit a string of letters
Question: I got inspired by this C# question. It asks to write a program to output all possible words (included in a dictionary) which fit a string of letters obtained by swiping the finger over the keyboard, as is often done with mobile keyboards. Description Software like Swype and SwiftKey lets smartphone users enter text by dragging their finger over the on-screen keyboard, rather than tapping on each letter. You'll be given a string of characters representing the letters the user has dragged their finger over. For example, if the user wants "rest", the string of input characters might be "resdft" or "resert". Input Given the following input strings, find all possible output words 5 characters or longer. qwertyuytresdftyuioknn gijakjthoijerjidsdfnokg Output Your program should find all possible words (5+ characters) that can be derived from the strings supplied. Use http://norvig.com/ngrams/enable1.txt as your search dictionary. The order of the output words doesn't matter. queen question gaeing garring gathering gating geeing gieing going goring Notes/Hints Assumptions about the input strings: QWERTY keyboard Lowercase a-z only, no whitespace or punctuation The first and last characters of the input string will always match the first and last characters of the desired output word Don't assume users take the most efficient path between letters Every letter of the output word will appear in the input string The function read_vocabulary reads the file and saves the words in a nested dictionary structure, where the first key is the first letter of the word and the second key the last letter of the word (this improves running time by about a factor 2 with respect to a simple set). I saved the linked word list as dictionary.txt on my computer. The find_words function tries to find all characters of a word (with the right first and last letter) in the pattern string and yields it if it found all. 
The call to sorted in the last part is not necessary from the defined interface (any ordering is fine), but I like it better this way. All comments are welcome, especially about improving readability of the code.

import string
from collections import defaultdict

def read_vocabulary(file_name):
    vocabulary = {letter: defaultdict(set) for letter in string.lowercase}
    with open(file_name) as dict_file:
        for word in dict_file:
            word = word.strip().lower()
            vocabulary[word[0]][word[-1]].add(word)
    return vocabulary

def find_words(vocabulary, pattern, length=5):
    """
    Search `vocabulary` for words matching `pattern` generated by
    swiping the finger over the keyboard. Yields all matching words

    >>> vocabulary = {'q': {'n': {'queen'}, 'r': {'qualor'}}}
    >>> list(find_words(vocabulary, 'qwertyuytresdftyuioknn'))
    ['queen']
    """
    for word in vocabulary[pattern[0]][pattern[-1]]:
        if len(word) >= length:
            i = 1
            for character in word[1:-1]:
                try:
                    i = pattern.index(character, i)
                except ValueError:
                    break
            else:
                yield word

if __name__ == "__main__":
    words = ["qwertyuytresdftyuioknn",
             "gijakjthoijerjidsdfnokg",
             "cghhjkkllooiuytrrdfdftgyuiuytrfdsaazzseertyuioppoiuhgfcxxcfvghujiiuytrfddeews"]
    vocabulary = read_vocabulary("dictionary.txt")
    for word in words:
        # print word
        print " ".join(sorted(find_words(vocabulary, word), key=len, reverse=True))

Regarding run-time: With the 1 million random characters, as linked at the other question, this code runs in about 0.14 seconds (as determined by python -m cProfile script.py), which includes reading the 1M characters from a file (because they are too big to just paste them in...).

Answer: Some comments: You exclude words of less than 5 characters when searching your vocabulary. I think it would be better to exclude them when creating the vocabulary, which reduces the number of elements in your vocabulary and avoids you needlessly looping over words that are too short.
Considering there are only 26 letters, it may be more efficient to have a single dictionary with two-character keys rather than nested dictionaries. This would reduce the number of dictionary lookups. You would need to time it to see if it helps or hurts, but one possible initial check would be to create a collections.Counter of each word in your dictionary, and a collections.Counter of pattern at the beginning, and then use the subtraction operator to make sure all the letters in each word are present in the pattern. If not, you can skip doing a linear search on that word. But the act of creating each Counter at the beginning may offset the benefit of the check. Sorting will hurt your performance.
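The suggested Counter pre-check might look like this (a sketch added for illustration, written for Python 3 rather than the Python 2 of the original code):

```python
from collections import Counter

# Build one Counter for the pattern, then skip any dictionary word that
# needs a letter the pattern does not supply often enough.
def possible(word, pattern_counter):
    # Counter subtraction drops non-positive counts, so an empty result
    # means every letter of `word` is available in the pattern.
    return not (Counter(word) - pattern_counter)

pattern = "qwertyuytresdftyuioknn"
pattern_counter = Counter(pattern)

print(possible("queen", pattern_counter))   # True
print(possible("quiz", pattern_counter))    # False: no 'z' in the pattern
```

Note this is only a necessary condition; the letter order still has to be verified by the linear search, exactly as the answer says.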
{ "domain": "codereview.stackexchange", "id": 22298, "tags": "python, programming-challenge, python-2.x" }
Which complexity class does this problem belong to?
Question: Consider the following problem $\mathcal{P}$. Instance: A Boolean formula $F$ of $n$ Boolean variables ($x_1,...,x_n$) and $m$ Boolean parameters ($b_1,...,b_m$) where $0 \leq m \leq n$. Problem: Find an assignment $b_1^*,...,b_m^*$ to the parameters $b_1,...,b_m$ such that the number of satisfying assignments to the variables $x_1,...,x_n$ of $F(b_1/b_1^*,...,b_m/b_m^*)$ is minimum. For example, $F = \{((x_2 \lor x_3) \leftrightarrow x_1) \lor (x_1 \leftrightarrow b_1 \land (x_2 \lor x_3) \leftrightarrow \neg b_1)\} \land \{((x_1 \land \neg x_2) \leftrightarrow x_2) \lor (x_2 \leftrightarrow b_2 \land (x_1 \land \neg x_2) \leftrightarrow \neg b_2)\} \land \{x_1 \leftrightarrow x_3\}$ where $n = 3$ and $m = 2$. If $(b_1^*,b_2^*) = (0,0)$, then the number of satisfying assignments of $F(b_1/b_1^*,b_2/b_2^*)$ is 2. If $(b_1^*,b_2^*) = (0,1)$, then the number of satisfying assignments of $F(b_1/b_1^*,b_2/b_2^*)$ is 3. Here, I consider the constructive version $\mathcal{P}_C$ of $\mathcal{P}$ (i.e., the output of $\mathcal{P}_C$ includes the optimal assignment to $b_1, ..., b_m$ and the minimum number of assignments to $x_1, ..., x_n$). When $m = 0$, $\mathcal{P}_C$ is equivalent to #SAT, which is known as #P-complete. Thus, $\mathcal{P}_C$ is #P-hard. However, it is insufficient to conclude that $\mathcal{P}_C$ is #P-complete. Which complexity class does this problem belong to (#P or another one)? If it does not belong to #P, please give me a proof. Answer: We'll argue that the following formulation of OP's problem is complete for OPT#P under poly-time reductions: input: A Boolean formula $\phi\big(b=(b_1,b_2,\ldots,b_n), x=(x_1, x_2,\ldots, x_m)\big)$ output: The maximum, over all assignments to $b$, of the number of assignments to $x$ such that $\phi(b, x)$ is satisfied (evaluates to true). The problem differs from OP's problem in two minor ways. First, the output does not include an assignment to $b$.
Second, it chooses $b$ to maximize, rather than minimize, the number of satisfying assignments. However, OP's problem for a given $\phi$ is essentially equivalent to this problem for the complement of $\phi$. Lemma 1. The problem above is OPT#P-complete under polynomial-time reductions. Proof sketch. The proof is a simple variant of the standard proof that SAT is NP-complete. First, as I understand it, OPT#P is the class of functions of the form $$g(w) = \max_b \#M(w, b)$$ for some non-deterministic poly-time TM $M$, where $\#M(w, b)$ is the number of accepting computation paths for $M$ on input $(w, b)$. In the $\max$, $b$ ranges over all binary strings of length equal to some fixed polynomial $p(|w|)$. So fix any such TM $M$ and corresponding $g$. Given any $w$, the reduction will produce (in time poly$(|w|)$) an equivalent instance of the problem in question: a Boolean formula $f_w(B, X)$ with Boolean variables $(B, X)$ such that $$g(w) = \max_{b} \#f_w(b),$$ where $\# f_w(b)$ is the number of assignments $X=x$ such that $f_w(b, x)$ is true. Recall that the classical Cook-Levin reduction for $M$ on a given input $(w, b)$ first produces a formula $F(W,B,X)$ with boolean inputs $W$, $B$, and $X$, where $|W|=|w|$, $|B|=|b|$, and $|X|$ is some fixed polynomial in $|w|+|b|$. But then it adds constraints to force $W=w$ and $B=b$ (or makes these substitutions and simplifies the resulting formula), resulting in a formula $F_{wb}(X)$ such that there is exactly one assignment to $X$ that satisfies $F_{wb}(X)$ for each accepting computation of $M$ on input $(w, b)$. (The variables in $X$ encode the non-deterministic guesses of $M(w, b)$, and also auxiliary values that encode the rest of the computation. But the auxiliary values are determined by the non-deterministic guesses and $w$ and $b$.) In this way, $F_{wb}(X)$ is satisfiable if and only if $M(w, b)$ has an accepting computation.
Instead, given $w$, the reduction outputs the formula $f_w(B,X)$ obtained from $F(W,B,X)$ by adding only the constraints that force $W=w$. Then, for any given second argument $b$, the number of accepting computations of $M(w, b)$ is the number of assignments $X=x$ such that $f_w(b, x)$ is true. That is, in our previous notation, for all $b$, $$\#M(w, b) = \# f_w(b).$$ It follows that $g(w) = \max_b \# f_w(b)$ as desired.$~~~~~\Box$
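For very small instances, the quantity in the lemma can be computed directly by exhaustive search. The sketch below (added for illustration; the toy formula is mine, not the one from the question) enumerates every parameter assignment $b$ and counts the satisfying variable assignments $x$:

```python
from itertools import product

# Brute-force the optimisation problem: for each parameter tuple b,
# count satisfying variable tuples x, then take the min (or max).
def best_parameters(formula, n_params, n_vars, minimize=True):
    counts = {}
    for b in product([False, True], repeat=n_params):
        counts[b] = sum(formula(b, x)
                        for x in product([False, True], repeat=n_vars))
    pick = min if minimize else max
    return pick(counts.items(), key=lambda kv: kv[1])

# Toy formula: F = (x1 or b1) and (x2 != b2).
f = lambda b, x: (x[0] or b[0]) and (x[1] != b[1])
print(best_parameters(f, 2, 2))   # ((False, False), 1)
```

Exponential in both $n$ and $m$, of course, which is exactly why the OPT#P classification matters.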
{ "domain": "cstheory.stackexchange", "id": 4854, "tags": "cc.complexity-theory, complexity-classes" }
Why are the masses of $W^{\pm}$ and $Z^0$ different?
Question: We know that through the Higgs phenomenon, the weak bosons become massive. In our Lagrangian the $W^\pm$ boson is usually defined as $\frac{1}{\sqrt{2}}(W^1_\mu\mp iW^2_\mu)$ and $Z^0$ is usually defined as $(-B_\mu+W^3_\mu)$ ignoring prefactors and couplings. Because of these definitions the masses of $W^\pm$ and $Z^0$ are different. Is the reason for these definitions purely experimental? Or was there a reason for doing this purely on theoretical grounds? Answer: You could ignore overall factors, but definitely not couplings. The (not quite purely) theoretical mass term in the generic Weinberg-Salam model is, instead, proportional to $$ (W_\mu^1)^2+ (W_\mu^2)^2+\left(W_\mu^3-\frac{g'}{g} B_\mu\right)^2, $$ (not quite purely, as the form was all but suggested by a skein of experimental facts--a hugely long story involving the neutrality of neutrinos and the chirality of the charged currents, ultimately spelled out by Glashow in 1961). But, certainly, the magnitude of the weak mixing, $$ \frac{g'}{g} \equiv \tan \theta_W $$ is a purely experimental fact of life. (Theoretically arbitrary, unless you joined speculative explanatory schemes in GUTs, etc...) Theoretically, if nature chose $g'=0$, so $\theta_W=0$, the mass of the Z would be the same as that of the charged Ws, since the argument of the third parenthesis is $Z_\mu / \cos \theta_W$. But, experimentally, nature chose $\sin^2\theta_W$ = 0.2397 ± 0.0013 instead of 0. "Nobody really knows why"...
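The tree-level relation implied by that mass term, $M_W = M_Z \cos\theta_W$, can be checked numerically (a sketch added for illustration; the $Z$ mass is the familiar measured value, and the small gap to the measured $M_W \approx 80.4$ GeV comes from radiative corrections and the scheme used to define $\theta_W$):

```python
import math

# Tree-level check of M_W = M_Z * cos(theta_W).
sin2_theta_w = 0.2397                  # sin^2(theta_W) quoted in the answer
cos_theta_w = math.sqrt(1.0 - sin2_theta_w)

m_z = 91.19                            # GeV, measured Z mass
m_w_tree = m_z * cos_theta_w
print(round(m_w_tree, 1))              # ~79.5 GeV
```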
{ "domain": "physics.stackexchange", "id": 63872, "tags": "standard-model, higgs, symmetry-breaking, electroweak" }
Equilibrium of a rigid body
Question: Why is it necessary for a body to have the net torque acting on it be balanced along with the forces for it to be in equilibrium? Isn't torque just some special case of force like rotation is a special case of translation? Why doesn't it suffice to have only the forces balanced for the body to be in equilibrium? I know there are examples which prove otherwise, but I need a more theoretical explanation. The case where the net torque is zero even when the force is not is understandable, since torque only represents the rotary component of force. Answer: There would have been no need of introducing the concept of torque if we were only concerned with the motion of a point particle (as the name suggests, it is a hypothetical physical object having mass but no size). There is no concept of rotation of a particle and thus no need to introduce the concept of torque. The concept of torque arises when we are concerned with a system of particles. Here, there is a possibility that the net force on our system is zero but the body is still somehow moving. This possibility arises because the forces may act on different particles and there may exist internal interactions as well, causing the body to rotate if it is rigid. That is how the introduction of the concept of torque is justified. Torque is qualitatively a twisting force. Therefore, if the net torque is zero, then the body cannot rotate. And if the net force is zero the body will not translate. That is why, for mechanical equilibrium, both conditions must hold. I hope I made myself clear.
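A minimal numeric illustration of the answer (added; values are made up): a "couple", i.e. two equal and opposite forces applied at different points of a rigid body. The net force vanishes, yet the net torque does not, so force balance alone does not guarantee equilibrium.

```python
import numpy as np

# Two equal and opposite forces applied at different points of a body.
points = np.array([[ 1.0, 0.0, 0.0],
                   [-1.0, 0.0, 0.0]])    # application points (m)
forces = np.array([[ 0.0,  2.0, 0.0],
                   [ 0.0, -2.0, 0.0]])   # forces (N)

net_force = forces.sum(axis=0)
net_torque = np.cross(points, forces).sum(axis=0)  # torques about the origin

print(net_force)    # [0. 0. 0.]
print(net_torque)   # [0. 0. 4.] -> the body starts to spin
```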
{ "domain": "physics.stackexchange", "id": 45678, "tags": "newtonian-mechanics, rotational-dynamics, torque, equilibrium, statics" }
Is single-source single-destination shortest path problem easier than its single-source all-destination counterpart?
Question: Dijkstra's algorithm (wiki) and Bellman-Ford (wiki) algorithm are two typical algorithms for the single-source shortest path problem. Both of them compute distances for all nodes from source $s$. If both source $s$ and destination $t$ are fixed, can we compute the shortest path from $s$ to $t$ without computing distances for all other nodes from $s$? More fundamentally, is the single-source single-destination shortest path problem easier (e.g., in terms of worst-case time complexity) than its single-source all-destination counterpart? Answer: Nope. In order to find the distance from $s$ to $t$ it is necessary to determine the distances of all nodes that are at most as far from $s$ as $t$ is. If $t$ is at the "median" distance, half of the distances will be known that way. In certain application areas (with an exponentially sized "implicit" state space) it can still be worthwhile to start searching from both sides and look for nodes that are found in the middle.
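The answer's point can be seen by instrumenting Dijkstra's algorithm with an early exit at $t$ (a sketch added for illustration; the adjacency-list encoding and the toy graph are mine):

```python
import heapq

def dijkstra_to_target(graph, s, t):
    """Dijkstra with an early exit at t. `graph` maps each node to a
    list of (neighbour, weight) pairs. Every node strictly closer to s
    than t is settled before t itself is popped, so reaching t forces
    those distances to be known anyway."""
    pq = [(0, s)]                       # (distance, node) min-heap
    best = {s: 0}
    done, settled = set(), []
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        settled.append((u, d))          # order in which nodes are finalized
        if u == t:
            return d, settled
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < best.get(v, float("inf")):
                best[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf"), settled

# Toy graph, made up for illustration.
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 1), ("d", 5)], "c": [("d", 1)]}
print(dijkstra_to_target(g, "a", "d"))  # (3, [('a', 0), ('b', 1), ('c', 2), ('d', 3)])
```

Note that `b` and `c`, both closer to the source than `d`, get settled before the search can stop.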
{ "domain": "cs.stackexchange", "id": 3620, "tags": "algorithms, complexity-theory, graphs, reference-request, shortest-path" }
How to visualize multivariate regression results
Question: Are there commonly accepted ways to visualize the results of a multivariate regression for a non-quantitative audience? In particular, I'm asking how one should present data on coefficients and T statistics (or p-values) for a regression with around 5 independent variables. Answer: I personally like dotcharts of standardized regression coefficients, possibly with standard error bars to denote uncertainty. Make sure to standardize coefficients (and SEs!) appropriately so they "mean" something to your non-quantitative audience: "As you see, an increase of 1 unit in Z is associated with an increase of 0.3 units in X." In R (without standardization):

set.seed(1)
foo <- data.frame(X=rnorm(30), Y=rnorm(30), Z=rnorm(30))
model <- lm(X~Y+Z, foo)
coefs <- coefficients(model)
std.errs <- summary(model)$coefficients[,2]
dotchart(coefs, pch=19, xlim=range(c(coefs+std.errs, coefs-std.errs)))
lines(rbind(coefs+std.errs, coefs-std.errs, NA), rbind(1:3, 1:3, NA))
abline(v=0, lty=2)
{ "domain": "datascience.stackexchange", "id": 225, "tags": "visualization, regression, linear-regression" }
Tensor product decomposition of $SU(2)$ doublet representations
Question: I have a rather trivial question. I am looking for the decomposition of $1/2\otimes 1/2\otimes 1/2$. It should give $0$, $1/2$ and $3/2$. I thought the overall dimension of this space must be 8, but counting, I just get 7. Does one have 2 singlets? Answer: From where did you get the idea that one can get a spin zero representation? The product of an even/odd number of Fermion representations always gives a Boson/Fermion representation. In your particular case, repeated use of $$1/2 \otimes s = (s-1/2) \oplus (s+1/2)$$ gives $$1/2 \otimes 1/2 \otimes 1/2=(0\oplus 1) \otimes 1/2 = (0 \otimes 1/2) \oplus (1 \otimes 1/2) =1/2 \oplus (1/2 \oplus 3/2).$$ Thus one gets two spin 1/2 and one spin 3/2 representations.
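The repeated Clebsch-Gordan step used in the answer can be mechanized in a few lines (a sketch added for illustration) to confirm the $2+2+4=8$ dimension count:

```python
from fractions import Fraction

# Apply j (x) s = |j - s| (+) ... (+) (j + s) to each spin in a list.
def couple(spins, s):
    out = []
    for j in spins:
        jj = abs(j - s)
        while jj <= j + s:
            out.append(jj)
            jj += 1
    return sorted(out)

half = Fraction(1, 2)
result = [half]
for _ in range(2):          # 1/2 (x) 1/2 (x) 1/2
    result = couple(result, half)

print(result)               # [Fraction(1, 2), Fraction(1, 2), Fraction(3, 2)]
dims = [int(2 * j + 1) for j in result]
print(dims, sum(dims))      # [2, 2, 4] 8
```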
{ "domain": "physics.stackexchange", "id": 4457, "tags": "homework-and-exercises, angular-momentum, quantum-spin, representation-theory" }
Where can I find Single Cell Data with Location "Coordinates"?
Question: Does single cell data typically have the following meta-data: the "coordinates" (e.g. on a tissue, adjacent tissues) saying where each cell in the sample was located relative to other cells? If not, is it possible to reconstruct this with other meta-data on the cells? With the ultimate goal of working hands on with such location-tagged data, I am hoping for suggestions on the correct terms to search for, and even references to previous studies and/or easy to use public datasets. For visual reference of the idea I have in mind, consider this picture: I expect expression of any given gene in the skin cells in Condition 1 to be very different according to the location of the cell. For condition 2, I wouldn't expect the expression of a gene in the skin cells to be overly different, if at all. I want to see if I can formalize this idea using data mining techniques that were created for other purposes, but first I need the proper data. edit: Here is an example of the format I eventually hope to work with. Fake table 1: Each row represents measurements taken for a unique cell, while the last three columns have coordinates for its location. \begin{array}{r|lllllllll} \hline cell & phenotype1 & phenotype2 & \dots & gene1.expr & gene2.expr & \dots & x.loc.coord & y.loc.coord & z.loc.coord \\ \hline 1 & 0.69 & 1.34 & \dots & 1.91 & 0.21 & \dots & 1.12 & 0.05 & 1.09 \\ 2 & 0.34 & 0.92 & \dots & 1.74 & 2.03 & \dots & 0.57 & 0.46 & 0.24 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ n & 1.97 & 1.3 & 0.96 & 0.19 & 0.66 & \dots & 0.25 & 0.02 & 1.27 \\ \hline \end{array} Alternatively, if the (x,y,z) coordinates of Fake Table 1 are unavailable or not reconstructable, are there datasets which help us construct an adjacency list of cell pairs according to their location, such as Fake Table 2? Fake Table 2: An adjacency table with pairs of cells which were next to each other when the measurement was taken. 
\begin{array}{r|cc} \hline pair & cell1 & cell2 \\ \hline 1 & 1 & 2 \\ 2 & 1 & 3 \\ 3 & 1 & 4 \\ 4 & 2 & 3 \\ 5 & 2 & 4 \\ \vdots & \vdots & \vdots \\ m-1 & n & n-2 \\ m & n & n-1 \\ \hline \end{array} I come from a statistics background with a basic understanding of this type of data from talks I've attended, but not hands-on experience. I mostly just want to explore this type of data to inform my future work. I am open to any type of scRNA-seq data with cell location "coordinates," if that's the correct thing to ask. Answer: Does single cell data typically have the following meta-data: the "coordinates" (e.g. on a tissue, adjacent tissues) saying where each cell in the sample was located relative to other cells? No. Typical scRNA-seq is just capturing random cells in a tube with no additional information. The technology you are looking for is spatial transcriptomics where you are measuring RNA levels at a particular location, but that is still not on the single-cell level. If not, is it possible to reconstruct this with other meta-data on the cells? There have been efforts to reconstruct spatial relationships. For example: Satija et al. 2015. There should be more recent approaches that I am not aware of.
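If per-cell coordinates like those in Fake Table 1 were available, Fake Table 2 could be derived with a simple distance threshold (a sketch added for illustration; the coordinates and the radius are made up, not from any real dataset):

```python
import numpy as np

# Hypothetical (x, y, z) coordinates for n = 6 cells.
rng = np.random.default_rng(0)
coords = rng.random((6, 3))

# Pairwise Euclidean distance matrix via broadcasting.
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

# Call two cells "adjacent" when they lie within some radius of each other.
radius = 0.6
pairs = [(i, j)
         for i in range(len(coords))
         for j in range(i + 1, len(coords))
         if dist[i, j] <= radius]
print(pairs)   # rows (cell1, cell2) of the adjacency table
```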
{ "domain": "bioinformatics.stackexchange", "id": 2139, "tags": "rna-seq, single-cell, public-databases, differential-expression, gene-expression" }
Can a shock wave travel around the Earth's curvature?
Question: Can an explosion be felt in the ground on the opposite side of the Earth, like from an asteroid? Would planes in the air on that side of the Earth be able to survive? Does the shock wave follow the ground or will the curvature of the Earth limit the shock wave? Related: This is from a broad question that I am breaking up into 3 questions. Feel free to edit. Scenario 2. On or underground in a cave? Answer: The processes are a mixture of interesting relationships. An asteroid hitting the other side of the planet that could cause such a shock would be massive in scope and would cause significant issues. As far as we (geologists, etc.) know, it's not impossible, but very, very unlikely unless we are looking at an extinction-type event. I would imagine any planes that are either in the direct path of the asteroid or in the general vicinity of the impact would not fare well. We are talking about the impact equivalent of several nuclear bombs. However, with seismological equipment (seismographs) we can, with great clarity, monitor explosions, earthquakes, or impacts. Seismographs during WW2 and later were used to detect nuclear detonations. Due to the crustal composition, or Earth's composition, the shock wave would travel through the lithosphere (crust and upper mantle) and I would imagine would spread similarly to earthquakes: certain waves traveling through the crust are bent by transitions between the layers, and some will pass through the liquid layers of the planet. That whole thing is enough to give you a headache; fascinating nonetheless. If you want to learn more about asteroids, I would suggest starting with: https://en.wikipedia.org/wiki/Meteor_Crater Now it is specific to meteors and may not answer your specific question, but it is an interesting read. This information came from a geology student. This is my first post and I am just trying to follow the rules. Hope that gave some insight!
Oh, as for a cave, it would depend where, but you definitely could in the right locality, and it would be the last place I'd want to be, at least during the impact and the initial effects following it. If you're into mega nature processes, check out the late-1800s eruption of Krakatau; its explosion is among the largest on record and was heard as far away as Australia. Or the Columbia River flood basalts, or the ignimbrite storm.
{ "domain": "earthscience.stackexchange", "id": 1349, "tags": "geology, geophysics, plate-tectonics, crust" }
Flattening column data with split then merging df with Pandas
Question: Using names = df['Name and Location'].str.split(',', expand=True) I am able to split this dense data at delimiters like colons. I'm stuck on how to recombine the data into a flatter record. I've tried: pd.concat([df, names]) Records end at "complaint #", and begin at date: which is in another column. Date: 1999/12/29 **Last_Name , First_Name** City: City_Name County: OUT_OF_STATE Zip Code: 00000 License #: AA0000000 Complaint # AA00000000000 Date: 1999/03/01 **Company:** Company_Name,_INC City: City_Name County: County_Name Zip Code: 00000 Company: Company_Name LIC AA0000 City: City_Name County: County_Name Zip Code: 00000 License: string_or_int Complaint # AA00000000000 Date: 1999/05/04 **Last_Name**, First_Name Company: Company_Name City: City_Name County: County_Name Zip Code: 00000 License #: AA00000000000 Complaint # AA00000000000 Ideally, each "record" would ultimately be flat, like: First Name Last Name Company City County Zip Code License Complaint Date Last_name_1 First_name_1 Company_Name_1 City_1 County_1 00001 AA000000 1999/12/29 Answer: To split at a delimiter and combine the resulting new columns with the existing df, use: df = pd.concat((df, df['Column_to_Split'].str.split('String_to_Go:', expand=True)), axis=1, ignore_index=True) Any delimiter can be used, including an empty string. The key here is expand = True as it creates new columns, which was the goal.
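A minimal end-to-end version of that pattern (a sketch added for illustration; the column name and values are toy data, not the asker's records):

```python
import pandas as pd

# Toy frame standing in for the scraped records.
df = pd.DataFrame({"Name and Location": ["Doe,John,Springfield",
                                         "Roe,Jane,Shelbyville"]})

# expand=True turns the split pieces into separate columns...
names = df["Name and Location"].str.split(",", expand=True)
names.columns = ["Last Name", "First Name", "City"]

# ...which can then be glued back onto the frame column-wise.
flat = pd.concat([df.drop(columns="Name and Location"), names], axis=1)
print(flat)
```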
{ "domain": "datascience.stackexchange", "id": 10728, "tags": "pandas, data-cleaning" }
Gazebo (Fuerte on Ubuntu 12.04) returns "Invalid arguments"
Question: Dear all, I tried to run the KUKA youbot simulation in Gazebo, with code that worked just fine under Electric on Ubuntu 11.04. But now I get this very meaningful error 'invalid arguments' without further indication where to look. Gazebo is not coming up. The mentioned log files are empty or don't exist. The output shows also an attribute error, which Gazebo gives also when launching the PR2 or an empty world (which didn't pose a problem for those simulations). If I delete the remap statements out of the launch file, I get a new error (see below), although the remap statement looks fine. Anyone any idea? Did the syntax change? Thanks in advance! Nick The launch script: (if the launch script is not shown properly, you can find the full version here: https://github.com/youbot/youbot-ros-pkg/tree/master/youbot_common/youbot_description/launch/youbot_publisher.launch ) <code> <launch> <param name="/use_sim_time" value="true" /> <node name="gazebo" pkg="gazebo" type="gazebo" args="$(find gazebo_worlds)/worlds/empty.world" respawn="false" output="screen"> <env name="GAZEBO_RESOURCE_PATH" value="$(find youbot_description):$(find gazebo_worlds):$(find gazebo)/gazebo/share/gazebo" /> <remap from="base_controller/command" to="cmd_vel"/> <remap from="scan_front" to="base_scan"/> <remap from="/base_odometry/odom" to="/odom" /> </node> <!-- send youbot urdf to param server --> <param name="robot_description" command="$(find xacro)/xacro.py '$(find youbot_description)/robots/youbot.urdf.xacro'"/> <!-- push robot_description to factory and spawn robot in gazebo --> <node name="youbot_gazebo_model" pkg="gazebo" type="spawn_model" args="-urdf -param robot_description -model youBot -x 0.0 -y 0.0 -z 0.1" respawn="false" output="screen" /> <include file="$(find youbot_description)/launch/control/youbot_base_control.launch" /> <include file="$(find youbot_description)/launch/control/youbot_arm_control.launch" /> </launch> </code> the output ---------------- ... 
logging to /home/u0065688/.ros/log/3df9c36e-cc1e-11e1-a6e5-0024e8dfc1df/roslaunch-pma-09-053-14267.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://localhost:54986/ SUMMARY PARAMETERS * /arm_1/arm_controller/gains/arm_joint_1/d * /arm_1/arm_controller/gains/arm_joint_1/i * /arm_1/arm_controller/gains/arm_joint_1/p * /arm_1/arm_controller/gains/arm_joint_2/d * /arm_1/arm_controller/gains/arm_joint_2/i * /arm_1/arm_controller/gains/arm_joint_2/p * /arm_1/arm_controller/gains/arm_joint_3/d * /arm_1/arm_controller/gains/arm_joint_3/i * /arm_1/arm_controller/gains/arm_joint_3/p * /arm_1/arm_controller/gains/arm_joint_4/d * /arm_1/arm_controller/gains/arm_joint_4/i * /arm_1/arm_controller/gains/arm_joint_4/p * /arm_1/arm_controller/gains/arm_joint_5/d * /arm_1/arm_controller/gains/arm_joint_5/i * /arm_1/arm_controller/gains/arm_joint_5/p * /arm_1/arm_controller/joints * /arm_1/arm_controller/type * /base_controller/caster_joint_bl/position_controller/d * /base_controller/caster_joint_bl/position_controller/i * /base_controller/caster_joint_bl/position_controller/i_clamp * /base_controller/caster_joint_bl/position_controller/p * /base_controller/caster_joint_bl/velocity_controller/d * /base_controller/caster_joint_bl/velocity_controller/i * /base_controller/caster_joint_bl/velocity_controller/i_clamp * /base_controller/caster_joint_bl/velocity_controller/p * /base_controller/caster_joint_br/position_controller/d * /base_controller/caster_joint_br/position_controller/i * /base_controller/caster_joint_br/position_controller/i_clamp * /base_controller/caster_joint_br/position_controller/p * /base_controller/caster_joint_br/velocity_controller/d * /base_controller/caster_joint_br/velocity_controller/i * /base_controller/caster_joint_br/velocity_controller/i_clamp * /base_controller/caster_joint_br/velocity_controller/p * 
/base_controller/caster_joint_fl/position_controller/d * /base_controller/caster_joint_fl/position_controller/i * /base_controller/caster_joint_fl/position_controller/i_clamp * /base_controller/caster_joint_fl/position_controller/p * /base_controller/caster_joint_fl/velocity_controller/d * /base_controller/caster_joint_fl/velocity_controller/i * /base_controller/caster_joint_fl/velocity_controller/i_clamp * /base_controller/caster_joint_fl/velocity_controller/p * /base_controller/caster_joint_fr/position_controller/d * /base_controller/caster_joint_fr/position_controller/i * /base_controller/caster_joint_fr/position_controller/i_clamp * /base_controller/caster_joint_fr/position_controller/p * /base_controller/caster_joint_fr/velocity_controller/d * /base_controller/caster_joint_fr/velocity_controller/i * /base_controller/caster_joint_fr/velocity_controller/i_clamp * /base_controller/caster_joint_fr/velocity_controller/p * /base_controller/caster_names * /base_controller/caster_position_pid_gains/d * /base_controller/caster_position_pid_gains/i * /base_controller/caster_position_pid_gains/i_clamp * /base_controller/caster_position_pid_gains/p * /base_controller/caster_velocity_filter/name * /base_controller/caster_velocity_filter/params/a * /base_controller/caster_velocity_filter/params/b * /base_controller/caster_velocity_filter/type * /base_controller/caster_velocity_pid_gains/d * /base_controller/caster_velocity_pid_gains/i * /base_controller/caster_velocity_pid_gains/i_clamp * /base_controller/caster_velocity_pid_gains/p * /base_controller/max_rotational_acceleration * /base_controller/max_rotational_velocity * /base_controller/max_translational_acceleration/x * /base_controller/max_translational_acceleration/y * /base_controller/max_translational_velocity * /base_controller/publish_tf * /base_controller/state_publish_rate * /base_controller/timeout * /base_controller/type * /base_controller/wheel_joint_bl/d * /base_controller/wheel_joint_bl/i * 
/base_controller/wheel_joint_bl/i_clamp * /base_controller/wheel_joint_bl/p * /base_controller/wheel_joint_br/d * /base_controller/wheel_joint_br/i * /base_controller/wheel_joint_br/i_clamp * /base_controller/wheel_joint_br/p * /base_controller/wheel_joint_fl/d * /base_controller/wheel_joint_fl/i * /base_controller/wheel_joint_fl/i_clamp * /base_controller/wheel_joint_fl/p * /base_controller/wheel_joint_fr/d * /base_controller/wheel_joint_fr/i * /base_controller/wheel_joint_fr/i_clamp * /base_controller/wheel_joint_fr/p * /base_controller/wheel_pid_gains/d * /base_controller/wheel_pid_gains/i * /base_controller/wheel_pid_gains/i_clamp * /base_controller/wheel_pid_gains/p * /base_odometry/base_footprint_frame * /base_odometry/base_link_frame * /base_odometry/caster_calibration_multiplier * /base_odometry/caster_names * /base_odometry/cov_xrotation * /base_odometry/cov_xy * /base_odometry/cov_yrotation * /base_odometry/ils_max_iterations * /base_odometry/odom_frame * /base_odometry/odom_publish_rate * /base_odometry/odometer_publish_rate * /base_odometry/publish_tf * /base_odometry/rotation_stddev * /base_odometry/state_publish_rate * /base_odometry/type * /base_odometry/verbose * /base_odometry/wheel_radius_multiplier * /base_odometry/x_stddev * /base_odometry/y_stddev * /pr2_controller_manager/joint_state_publish_rate * /pr2_controller_manager/mechanism_statistics_publish_rate * /publish_frequency * /robot_description * /robot_state_publisher/publish_frequency * /robot_state_publisher/tf_prefix * /rosdistro * /rosversion * /use_sim_time NODES / arm_controller_spawner (pr2_controller_manager/spawner) base_controllers_spawner (pr2_controller_manager/spawner) gazebo (gazebo/gazebo) joint_translator (itasc_youbot/joint_translator) pr2_mechanism_diagnostics (pr2_mechanism_diagnostics/pr2_mechanism_diagnostics) robot_state_publisher (robot_state_publisher/state_publisher) velocity_translator (itasc_youbot/velocity_translator) youbot_gazebo_model (gazebo/spawn_model) 
auto-starting new master Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[master]: started with pid [14286] ROS_MASTER_URI=http://localhost:11311 setting /run_id to 3df9c36e-cc1e-11e1-a6e5-0024e8dfc1df Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[rosout-1]: started with pid [14299] started core service [/rosout] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[gazebo-2]: started with pid [14313] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[youbot_gazebo_model-3]: started with pid [14324] Gazebo multi-robot simulator, version 1.0.2 Copyright (C) 2011 Nate Koenig, John Hsu, Andrew Howard, and contributors. Released under the Apache 2 License. http://gazebosim.org **Error. Invalid arguments** Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored [gazebo-2] process has died [pid 14313, exit code 255, cmd /opt/ros/fuerte/stacks/simulator_gazebo/gazebo/scripts/gazebo /opt/ros/fuerte/stacks/simulator_gazebo/gazebo_worlds/worlds/empty.world base_controller/command:=cmd_vel scan_front:=base_scan /base_odometry/odom:=/odom __name:=gazebo __log:=/home/u0065688/.ros/log/3df9c36e-cc1e-11e1-a6e5-0024e8dfc1df/gazebo-2.log]. 
log file: /home/u0065688/.ros/log/3df9c36e-cc1e-11e1-a6e5-0024e8dfc1df/gazebo-2*.log process[robot_state_publisher-4]: started with pid [14325] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[pr2_mechanism_diagnostics-5]: started with pid [14346] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[base_controllers_spawner-6]: started with pid [14365] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[arm_controller_spawner-7]: started with pid [14366] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[joint_translator-8]: started with pid [14367] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[velocity_translator-9]: started with pid [14381] loading model xml from ros parameter error without remap statements ------------------------------ waiting for service spawn_urdf_model [ WARN] [1342106748.009969198]: multiple inconsistent <turnGravityOff> exists due to fixed joint reduction, overwriting previous value [true] with [false]. Warning [parser.cc:332] Gazebo SDF has no gazebo element Warning [parser.cc:271] parse as sdf version 1.0 failed, trying to parse as old deprecated format Warning [parser.cc:277] parse as old deprecated world file failed, trying old model format. 
[ INFO] [1342106749.288581927, 0.001000000]: Laser plugin missing <hokuyoMinIntensity>, defaults to 101 [ INFO] [1342106749.289155673, 0.001000000]: INFO: gazebo_ros_laser plugin should set minimum intensity to 101.000000 due to cutoff in hokuyo filters. Dbg plugin model name: youBot [ INFO] [1342106749.307460931, 0.001000000]: starting gazebo_ros_controller_manager plugin in ns: / [ INFO] [1342106749.309969773, 0.001000000]: Callback thread id=0x7fa3787a39e0 Dbg plugin model name: youBot spawn status: SpawnModel: successfully spawned model [ INFO] [1342106749.722135591, 0.049000000]: waitForService: Service [/gazebo/set_physics_properties] is now available. Unhandled exception in thread started by [ INFO] [1342106749.784878201, 0.110000000]: Starting to spin physics dynamic reconfigure node... sys.excepthook is missing lost sys.stderr [youbot_gazebo_model-2] process has finished cleanly Originally posted by NickVT on ROS Answers with karma: 51 on 2012-07-12 Post score: 1 Original comments Comment by NickVT on 2012-07-12: I'm able to spawn the youbot model on its own btw, using rosrun gazebo spawn_model -file /home/u0065688/src/svn/robotics-ros/packages/youbot-ros-pkg/youbot_common/youbot_description/robots/youbot.urdf -urdf -z 0.1 -model youBot Comment by NickVT on 2012-07-12: My question relates (I think) to the one posted on the ROS-users mailing list (unanswered): "Remap does not work on gazebo_ros_prosilica" on 24/6/2012 Answer: Ticketed. Yes, it's a problem with Fuerte that the gazebo node is no longer a ros node. The command line arguments need to be passed in to the underlying gazebo executable and passed through to the ros plugin that calls ros::init. I'm working on fixing this so namespace remapping works. 
As a side note, I notice you are setting GAZEBO_RESOURCE_PATH in your launch file for the gazebo node; you can avoid doing so by updating manifest.xml in the youbot_description package to make sure it includes the following items: <depend package="gazebo" /> <export> <gazebo gazebo_media_path="${prefix}/gazebo/share/gazebo-1.0.2" /> </export> Originally posted by hsu with karma: 5780 on 2012-07-18 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 10162, "tags": "gazebo, roslaunch, ros-fuerte" }
rospack find foobar
Question: Hi, I am trying to do the tutorial: http://wiki.ros.org/ROS/Tutorials/Creating%20a%20Package%20by%20Hand I already did: source /home/fschiano/catkin_ws/devel/setup.bash And I am in the catkin_ws folder. But when I give the command rospack find foobar I get the error: [rospack] Error: package 'foobar' not found How can I solve this? Thanks. Originally posted by fabbro on ROS Answers with karma: 115 on 2014-10-10 Post score: 0 Answer: Try a rospack profile (after you've sourced setup.bash). If that doesn't work, make sure foobar actually is a ROS package (has a manifest: package.xml). Also make sure there is no CATKIN_IGNORE file in the package directory. Edit: your last comment does not correspond to what you wrote in your OP. Anyway, these steps should work (in a fresh terminal): cd /home/fschiano/catkin_ws source /opt/ros/indigo/setup.bash catkin_make source devel/setup.bash # (this step is optional, only if the next step doesn't immediately work) rospack profile rospack find foobar If rospack cannot find foobar then something is wrong. Originally posted by gvdhoorn with karma: 86574 on 2014-10-10 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by fabbro on 2014-10-10: When I type rospack profile I get: Full tree crawl took 0.014759 seconds. Directories marked with (*) contain no manifest. You may want to delete these directories. To get just a list of directories without manifests, re-run the profile with --zombie-only ---------- Comment by fabbro on 2014-10-10: 0.014213 /opt/ros/indigo/share 0.000038 * /opt/ros/indigo/share/doc 0.000005 * /opt/ros/indigo/share/doc/liborocos-kdl I am in the ~/catkin_ws Comment by gvdhoorn on 2014-10-10: Great. Now again try rospack find foobar (in the same terminal). 
Comment by fabbro on 2014-10-10: I always get: [rospack] Error: package 'foobar' not found I am in this folder /home/fschiano/catkin_ws and I first do source /opt/ros/indigo/setup.bash, then rospack profile, and then rospack find foobar. Does the catkin_ws folder have to stay in the ros folder?
{ "domain": "robotics.stackexchange", "id": 19694, "tags": "rospack" }
Adiabatic Index ($C_{p}/C_{v}$) Through Phase Change
Question: Does the adiabatic index change when a phase change happens? For example, when we have a compressible liquid (say CO$_{2}$) inside a pipe that goes from high pressure (100 bar) to low pressure (1 bar) through an orifice, which causes the liquid phase to change into a solid/vapor mixture. In this case, does the adiabatic index ($C_{p}/C_{v}$) change, and why? Answer: The heat capacities are functions of temperature. Consider $C_p(T) = a + bT + cT^2 + \dots$ and $C_v(T) = A + BT + CT^2 + \dots$. If you divide the two, you find that the adiabatic index is not simply a constant. Each phase has its own set of heat-capacity coefficients, so the index will in general also differ between phases, as can be seen from Wikipedia: https://en.m.wikipedia.org/wiki/Heat_capacity_ratio
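The temperature dependence is easy to see numerically. In the sketch below the polynomial coefficients are invented purely for illustration (they are not real CO$_2$ data), but any fits of this form give a non-constant ratio:

```python
# Hypothetical linear heat-capacity fits, Cp(T) = a + b*T and
# Cv(T) = A + B*T. The coefficients are made up for illustration only.
def cp(T, a=30.0, b=0.02):
    return a + b * T

def cv(T, A=21.7, B=0.02):
    return A + B * T

def gamma(T):
    """Adiabatic index Cp/Cv as a function of temperature."""
    return cp(T) / cv(T)

# The ratio drifts with temperature, so it is not a single constant;
# each phase would additionally have its own coefficient set.
for T in (250.0, 300.0, 400.0):
    print(f"T = {T:.0f} K, gamma = {gamma(T):.4f}")
```

With a different coefficient set per phase, $\gamma$ would also jump discontinuously across the phase change.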
{ "domain": "physics.stackexchange", "id": 87212, "tags": "thermodynamics, fluid-dynamics, phase-transition, thermal-conductivity" }
What are the differences between an agent and a model?
Question: In the context of Artificial Intelligence, sometimes people use the word "agent" and sometimes use the word "model" to refer to the output of the whole "AI-process". For example: "RL agents" and "deep learning models". Are the two words interchangeable? If not, in what case should I use "agents" instead of "models" and vice versa? Answer: Agent The other answer defines an agent as a policy (as it's defined in reinforcement learning). Although this definition is fine for most current purposes, given that agents are currently mainly used to solve video games, in the real world an intelligent agent will also need to have a body, which Russell and Norvig call an architecture (section 2.4 of the 3rd edition of Artificial Intelligence: A Modern Approach, page 46). This should not be confused with the architecture of a model or neural network: it is the computing device that contains the physical sensors and actuators for the agent to sense and act on the environment, respectively. So, to be more general, the agent is defined as follows agent = body + policy (brain) where the policy is what Russell and Norvig call the agent program, which is an implementation of the agent function. Alternatively, it can be defined as follows An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. This is just another definition given by Russell and Norvig, which I also report in this answer, where I describe different types of agents. Note that these definitions are equivalent. However, in the first one, we just emphasize that we need some means to "think" (brain) and some means to "behave" (body). These definitions are quite general, so I think people should use them, although, as I said above, sometimes people refer to an agent as just the policy. Model In this answer, I describe what a model is or what I like to think a model is, and how it is different from a function. 
In AI, a model can refer to different but somehow related concepts. For example, in reinforcement learning, a model typically refers to $p(s', r \mid s, a)$, i.e. the joint probability distribution over the next state $s'$ and reward $r$, given the current state $s$ and action $a$ taken in $s$. In deep learning, a model typically refers to a neural network, which can be used to compute (or model) different functions. For example, a neural network can be used to compute/represent/model a policy, so, in this case, there would be no actual difference between a model and an agent (if defined as a policy, without a body). However, conceptually, at a higher-level, these would still be different (in the same way that biological neural networks are different from the brain). More generally, in machine learning, a model typically refers to a system that can be changed to compute some function. Examples of models are decision trees, neural networks, linear regression models, etc. So, as I also state in the other answer, I like to think of a model as a set of functions, so, in this sense, a model would be a hypothesis class in computational learning theory. This definition is roughly consistent with $p(s', r \mid s, a)$, which can also be thought of as a (possibly infinite) set of functions, but note that a probability distribution is not exactly a set of functions. In the context of knowledge bases, a model is an assignment to the variables, which represents a "possible world". See section 7.3, page 240, of the cited book. There are possible other uses of the word model (both in the context of AI, e.g. in the context of planning, there's often the idea of a conceptual model, which is similar to an MDP in RL, and in other areas), but the definitions given above should be more or less widely applicable in their contexts. What is the difference between an agent and a model? 
Given that there are different possible definitions of a model depending on the context, it's not easy to briefly state what the difference between the two is. So, here's the difference in the context of RL (and you can now find out the differences in other contexts by using the different definitions): an agent can have a model of the world, which allows it to predict e.g. the reward it will receive given its current state and some action that it decides to take. The model can allow the agent to plan. In this same context, a model could also refer to the specific system (e.g. a neural network) used to compute/represent the policy of the agent, but note that people usually refer to $p(s', r \mid s, a)$ when they use the word model in RL. See this post for more details.
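The RL sense of "model" versus "agent" can be made concrete with a toy table for $p(s', r \mid s, a)$. All states, actions, and numbers below are invented for illustration:

```python
# Toy illustration of "model" in the RL sense: p(s', r | s, a) stored
# as a table, versus an agent that merely holds a policy.
model = {
    # (s, a): list of (next_state, reward, probability)
    (0, 0): [(0, 0.0, 0.9), (1, 1.0, 0.1)],
    (0, 1): [(1, 1.0, 1.0)],
    (1, 0): [(0, 0.0, 1.0)],
    (1, 1): [(1, 0.5, 1.0)],
}

def expected_reward(s, a):
    """One-step lookahead using the model: E[r | s, a]."""
    return sum(p * r for (_, r, p) in model[(s, a)])

# A (deterministic) policy is just a mapping from states to actions;
# an agent = policy (+ a body, in Russell & Norvig's fuller definition).
policy = {0: 1, 1: 1}

s = 0
a = policy[s]
print(expected_reward(s, a))  # 1.0
```

The model lets the agent plan (predict rewards before acting), which is exactly the use described below; a model-free agent would carry only the `policy` mapping.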
{ "domain": "ai.stackexchange", "id": 2632, "tags": "terminology, definitions, models, intelligent-agent" }
Regularity of CFG and DCFL
Question: I read that it is undecidable whether, given a CFG $G$, $L(G)$ is regular. And there exists no algorithm that, given a CFG $G$ such that $L(G)$ is regular, outputs a DFA that accepts $L(G)$. My question is, why is regularity decidable for DCFLs and undecidable for CFLs? Answer: Your question unfortunately doesn't have a simple answer. The best I can do is go over the proofs and point out where they fail when trying to apply them to the other class. Regularity of the language generated by a CFG The proof that regularity of the language generated by a CFG is undecidable is very similar to the proof that universality of the language generated by a CFG is undecidable, so we'll start with the latter. The proof proceeds by reduction from the halting problem. Given a Turing machine $M$, we construct a context-free grammar $G$ with the following property: If $M$ doesn't halt on the empty input, then $G$ is universal, that is, $L(G) = \Sigma^*$. If $M$ does halt on the empty input, then $L(G) = \Sigma^* \setminus \{t\}$, where $t$ is a transcript of the execution of $M$ on the empty input. We encode transcripts as sequences of configurations: $c_1 \# c_2 \# \cdots \# c_n$. Here each $c_i$ is a configuration (the contents of the tape together with the location of the head and the current state), and $c_{i+1}$ is the configuration following $c_i$. The grammar $G$ describes all strings which are not transcripts of halting executions. A string fails to be such a transcript if one of the following cases holds: It is not of the form $c_1\#c_2\#\cdots\#c_n$, where $c_1,\ldots,c_n$ are valid configurations. The transcript is of the form $c_1\#c_2\cdots\#c_n$, where $c_1$ is not a valid initial configuration. The transcript is of the form $c_1\#c_2\cdots\#c_n$, where $c_n$ is not a valid halting configuration. The transcript is of the form $c_1\#c_2\cdots\#c_n$, and for some $i$, the configuration $c_{i+1}$ doesn't follow from the configuration $c_i$. 
The first three cases represent regular constraints, and so are easy to handle (and could be handled by a DPDA). The problematic case is the final one: while it is easy to describe bad stretches of the form $\#c_i\#c_{i+1}\#$, the grammar has to "guess" the value of $i$. Indeed, consider the special case $n=3$, and take the point of view of a PDA. The PDA has to guess whether it should be comparing $c_1$ to $c_2$, or whether it should skip $c_1$ and instead compare $c_2$ to $c_3$. This is something that a DPDA cannot do. From universality to regularity. In the construction outlined above, the language $L(G)$ is regular in both cases. However, given $G$, we can construct another grammar $G'$ that accepts the language $$ \{a^n b^m \# w : n,m \ge 0, w \in L(G)\} \cup \{a^n b^n \# w : n \ge 0, w \in \Sigma^*\} $$ By construction, if $M$ doesn't halt on the empty input then $L(G') = a^*b^*\#\Sigma^*$, which is regular. If $M$ does halt on the empty input then $L(G') = a^*b^*\#\Sigma^* \setminus \{a^nb^m\#t : n \neq m\}$, which isn't regular. 
We first check whether $L(P) \cap \overline{L(D)} = \emptyset$, that is, whether $L(P) \subseteq L(D)$. We can do this by combining three constructions: Given a DFA, we can construct a DFA for the complement language. Given a PDA and a DFA, we can construct a PDA for the intersection language. Given a PDA, we can solve the emptiness problem, that is, determine whether the language is empty. We then check whether $\overline{L(P)} \cap L(D) = \emptyset$, that is, whether $L(D) \subseteq L(P)$, using a similar approach, crucially using the fact that given a DPDA, we can construct a DPDA for the complement language. If both tests pass, then $L(P) = L(D)$. In contrast, CFLs are not closed under complementation, and in particular, given a PDA, we cannot in general construct a PDA for the complement language. Moreover, as shown above, we cannot decide if a PDA generates the same language as a given DFA; indeed, even the special case of universality is undecidable. Now let us go back to the first part of the proof, in which we give a bound on the number of states in an equivalent DFA. Unfortunately this part of the proof is somewhat complicated. Roughly speaking, we show that if at some point of the execution the stack is very deep, then the first few symbols on the stack are too deep to make a difference; otherwise, we can "pump" the input in some way that guarantees that the language accepted by the DPDA is not regular. This implies that in order to simulate the DPDA, it suffices to keep track of a constant number of symbols on the stack, which a DFA can do. The undecidability proof for CFGs shows that a similar statement is wrong for PDAs. More accurately, while there is still a bound on the number of states in an equivalent DFA, it cannot be computed from the complexity parameters of the PDA since it grows too fast. 
Indeed, for the grammar $G$ we construct in the undecidability proof, in the case where the machine $M$ halts, the size of the equivalent DFA is essentially the same as the size of the transcript $t$, and so it grows faster than the number of steps it takes $M$ to halt. The maximal number of states in an equivalent minimal DFA is thus some sort of busy beaver function, which grows too fast to be computable.
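The closure-based equivalence test at the heart of the decision procedure pairs a DPDA with a DFA. As an illustrative sketch only (not the real DPDA algorithm), the same complement + intersection + emptiness recipe is easy to run with two DFAs:

```python
from collections import deque

# Illustration only: the actual procedure pairs a DPDA with a DFA,
# using DPDA closure under complement and PDA emptiness. Here the same
# recipe is shown for two DFAs, each given as a tuple
# (states, alphabet, delta, start, accepting).

def complement(dfa):
    states, alpha, delta, start, accept = dfa
    return (states, alpha, delta, start, states - accept)

def intersect(d1, d2):
    s1, alpha, t1, q1, f1 = d1
    s2, _, t2, q2, f2 = d2
    states = {(a, b) for a in s1 for b in s2}
    delta = {((a, b), c): (t1[a, c], t2[b, c])
             for (a, b) in states for c in alpha}
    accept = {(a, b) for a in f1 for b in f2}
    return (states, alpha, delta, (q1, q2), accept)

def is_empty(dfa):
    states, alpha, delta, start, accept = dfa
    seen, queue = {start}, deque([start])
    while queue:                 # BFS for a reachable accepting state
        q = queue.popleft()
        if q in accept:
            return False
        for c in alpha:
            r = delta[q, c]
            if r not in seen:
                seen.add(r)
                queue.append(r)
    return True

def equivalent(d1, d2):
    # L(d1) subset of L(d2) and L(d2) subset of L(d1)
    return (is_empty(intersect(d1, complement(d2))) and
            is_empty(intersect(d2, complement(d1))))

# Two DFAs for "even number of a's", with different state names:
A = {"a", "b"}
D_even = ({0, 1}, A, {(0, "a"): 1, (0, "b"): 0,
                      (1, "a"): 0, (1, "b"): 1}, 0, {0})
D_even2 = ({"p", "q"}, A, {("p", "a"): "q", ("p", "b"): "p",
                           ("q", "a"): "p", ("q", "b"): "q"}, "p", {"p"})
print(equivalent(D_even, D_even2))  # True
```

In the actual proof, `complement` is applied to the DFA and to the DPDA (using DPDA closure under complementation), and `is_empty` is the PDA emptiness test; the overall structure of the check is the same.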
{ "domain": "cs.stackexchange", "id": 19119, "tags": "regular-languages, finite-automata, context-free, undecidability, decidability" }
Why is the time domain low-pass filter the "sinc" shape?
Question: Consider: I'm looking at low-pass filters, and I see that the time domain representation of an "ideal" filter resembles the shape above whereas the frequency domain is a box. I also get that as you lower the cutoff, the main lobe in the middle gets wider. So that implies that if you "slide over" this shape with a signal in a convolution, it cuts out the high frequencies. What I don't get is the intuition for why exactly it's this shape, or is this something that is better accepted as just "how it is"? Answer: Visualization is a good way to understand the lowpass behavior of the sinc function (as well as the convolution). I've made some modifications to this animated convolution project and here are the results showing sine waves of different frequencies filtered by the sinc function. In the low frequency case the waveform remains unchanged, while in the high frequency case the convolution result has a low magnitude, which means "lowpass". Note that when the sine function moves to a position that overlaps with the main lobe of the sinc function, if the width of the main lobe contains sine waves of more than one wavelength, the convolution will largely cancel out and give a very small value. Now, let's convolve the same sine wave with a thinner sinc function, which is more like a delta function (compared with the wavelength of the sine wave), and we find that: A wider sinc function in the time domain corresponds to a narrower frequency response in the frequency domain. A thinner sinc function in the time domain corresponds to a wider frequency response in the frequency domain.
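The same experiment can be sketched numerically rather than with an animation; the sample rate, cutoff, and kernel truncation below are arbitrary choices:

```python
import numpy as np

# Convolve sine waves of different frequencies with a truncated
# ideal-lowpass (sinc) kernel and compare output amplitudes.
fs = 1000.0                       # sample rate (Hz)
fc = 50.0                         # cutoff frequency (Hz)
t = np.arange(-0.1, 0.1, 1 / fs)  # 200-tap kernel support (s)
kernel = (2 * fc / fs) * np.sinc(2 * fc * t)  # discretized 2*fc*sinc(2*fc*t)

def output_amplitude(f):
    """Peak amplitude of a unit sine at frequency f after filtering."""
    x = np.sin(2 * np.pi * f * np.arange(0, 1, 1 / fs))
    y = np.convolve(x, kernel, mode="same")
    return np.abs(y[200:-200]).max()  # ignore edge transients

print(output_amplitude(10.0))   # below cutoff: passes nearly unchanged
print(output_amplitude(200.0))  # above cutoff: strongly attenuated
```

Widening the sinc (lowering `fc`) shrinks the range of frequencies that survive, matching the last two statements above.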
{ "domain": "dsp.stackexchange", "id": 12401, "tags": "filters, fourier-transform, fourier, sinc" }
Choose the right type for method GetById
Question: I designed my repository class and I need some advice from you. Now I have a method GetByID that returns a generic type: public T GetById(object id) { return this.Entities.Find(id); } I'm going to use OData to filter data, and for that (when I'm using $expand for single row) I need to do something like this: SingleResult.Create(repository.GetByID(id)); But of course I received an error because SingleResult needs a type IQueryable<T>. So, I decided to change my GetByID to: public IQueryable<T> GetById(object id) { return this.Entities.FirstOrDefault(p => p.ID == (int)id); } Could you please tell me if it is the right way? What type should be returned from GetByID? I prefer to use T GetById because it is more correct in my opinion, but I don't know. Answer: If you're only going to make the minimal required change Accommodate the IQueryable requirement in the layer where it is needed, not in your repository layer. That will probably be some object that takes in an entity, wraps it in an IEnumerable, and calls AsQueryable() on it. (Disclaimer: This may or may not work. I haven't tested it.) If you were to do otherwise and change your repo signature to use IQueryable, that would be coupling the repository layer to downstream requirements, which you are correct in wanting to avoid. A side pet peeve Please avoid using object in signatures if you're going to use generics in the same class and you don't have a reason to box (I believe DbSet has to use reflection to find the key so it has a boxing requirement, but your code does not require boxing). Use strong typing all the way until you absolutely must cast to object. You'll thank me later. And make the parameter names expressive: public TEntity GetById<TKey>(TKey id) { return this.Entities.Find((object)id); } Digging deeper It seems a bit wonky and counterintuitive to have to pass an IQueryable to SingleResult. But come! Join me on a journey in pursuit of greater understanding of this "IQueryable". 
An IQueryable is a bit more than just a collection. It functions as a sort of gateway to the data you're after, letting you query that data without having to use its unique language (SQL, or OData queries, what have you). This is accomplished through an associated query provider, which translates the lambda expressions you provide it to the language of your data store. More importantly, that query provider decides how to compose and manage queries, and that includes utilizing deferred evaluation. The fact that you can pass it a query from your EF context is its way of telling you "I got this bro. This IQueryable has all I need to pull the data from here when I need it." It will then pull the data it needs for the OData query it's building on demand. If you instead return it an entity, you have eagerly loaded that entity from the DB yourself, when you may not have needed to. Or at the very least, before you needed to. A Catch-22 Using the generic repository here directly pits the benefits of IQueryable (deferred evaluation, the need for less "intermediary" code) against the reusability of a library that abstracts data access away from queries (is this even being reused?). In order for these two to coexist, some compromises are going to have to be made, compromises that may result in an extremely mediocre "solution." A simpler appeal If you look at the way the OData provider functionality is designed, it's evident that it's meant to directly consume EF context queries including in your web app layer. If the DbContext was meant to be shut away inside a library, they would have designed things a lot differently. To sum up I would recommend reconsidering the purpose of your generic repository, why you need it, and what it's buying you.
{ "domain": "codereview.stackexchange", "id": 15901, "tags": "c#, asp.net-mvc, repository, asp.net-web-api" }
Should binding constants be unitless when deriving fractional occupancy equations from reactions?
Question: It is known that binding (aka association) constants are in fact unitless, as has been discussed here already. However, I'm not a chemist and am confused about when one should or should not use units when working with association constants. One source says: $K_\text{eq}$ for a reaction with unequal numbers of reactants and products is always given with units, even in published papers. But why is that? Why not always use unitless values? Is there something inherently wrong with never using units for binding constants? Consider this example. Let $\ce{A}$ bind to $\ce{X}$ as an n-mer (e.g. $\ce{A}$ can be a transcriptional activator binding to gene promoter). This results in active state, $\ce{X_{A}}$ (e.g. a state leading to gene transcription): $$\ce{X + nA ->[k_{\text{on}}][k_{\text{off}}] X_{A}}$$ Association constant: $K_\text{A} = \frac{k_\text{on}}{k_\text{off}} $ Assuming equilibrium and law of mass action: $$K_\text{A} \cdot \text{X} \cdot \mathrm{A^n} = \mathrm{X_A}$$ Now, the fractional occupancy (active states to all states ratio) is: $$y=\mathrm{\frac{X_A}{X+X_A}} = \frac{K_\text{A}\cdot \mathrm{A^n}}{1 + K_\text{A}\cdot \mathrm{A^n}}$$ As $y$ must be unitless (i.e. it can be interpreted as a probability), for this particular equation, $K_\text{A}$ must have units of $\mathrm{M^{-n}}$ (in general, $\text{concentration}^{-n}$) for the equation to work out, correct (assuming concentration of $\ce{A}$ has units $\mathrm{M}$)? So in this particular case, using unitless $K_\text{A}$ seems wrong, but is it really, or is there something I'm missing? Now let's extend this equation so that it includes a half-saturation constant $h$, i.e. concentration of $\ce{A}$ required for $y=0.5$ (50% activation). If I'm doing this correctly, we get: $$y=\frac{K_\text{A}\cdot \mathrm{A^n}}{K_\text{A}\cdot h^n + K_\text{A}\cdot \mathrm{A^n}} = \frac{\mathrm{A^n}}{h^n + \mathrm{A^n}}$$ Note that this is equivalent to the Hill equation for an activator. 
This generalized equation, unlike the previous one, works just fine regardless of whether or not $K_\text{A}$ is unitless. Is my understanding of this correct, and does the choice of unitless vs non-unitless binding constant indeed depend on the formulation of a specific equation? Answer: The equilibrium constant does not need to be unitless, because its units depend on how it is defined. See goldbook: Equilibrium Constant Quantity characterizing the equilibrium of a chemical reaction and defined by an expression of the type $$K_x = \Pi_B x_B^{\nu_B},$$ where $\nu_B$ is the stoichiometric number of a reactant (negative) or product (positive) for the reaction and $x$ stands for a quantity which can be the equilibrium value either of pressure, fugacity, amount concentration, amount fraction, molality, relative activity or reciprocal absolute activity defining the pressure based, fugacity based, concentration based, amount fraction based, molality based, relative activity based or standard equilibrium constant (then denoted $K^\circ$ ), respectively. The standard equilibrium constant is always unitless, as it is defined differently (goldbook) Standard Equilibrium Constant $K$, $K^\circ$ (Synonym: thermodynamic equilibrium constant) Quantity defined by $$K^\circ = \exp\left\{-\frac{\Delta_rG^\circ}{\mathcal{R}T}\right\}$$ where $\Delta_rG^\circ$ is the standard reaction Gibbs energy, $\mathcal{R}$ the gas constant and $T$ the thermodynamic temperature. Some chemists prefer the name thermodynamic equilibrium constant and the symbol $K$. 
In your example both would work if you make your definitions consistent, using either concentrations \begin{aligned} K_A \cdot c(\ce{X}) \cdot c^n(\ce{A}) &= c(\ce{X_{A}}),\\ y=\frac{c(\ce{X_A})}{c(\ce{X})+c(\ce{X_{A}})} &= \frac{K_A\cdot c^n(\ce{A})}{1 + K_A\cdot c^n(\ce{A})}, \end{aligned} or activities \begin{aligned} K^\circ_A \cdot a(\ce{X}) \cdot a^n(\ce{A}) &= a(\ce{X_{A}}),\\ y=\frac{a(\ce{X_A})}{a(\ce{X})+a(\ce{X_{A}})} &= \frac{K^\circ_A\cdot a^n(\ce{A})}{1 + K^\circ_A\cdot a^n(\ce{A})}. \end{aligned} In both cases $y$ remains unitless. In experimental chemistry it is much easier to observe concentrations than activities. Therefore the first-named constant is more often in use. A straightforward use of the thermodynamic equilibrium constant might sometimes prove a little tricky. For reasonable dilutions one can simply substitute activity with a unitless concentration as given through $$a(\ce{Y})=\gamma\frac{c(\ce{Y})}{c^\circ},$$ with $\gamma\approx1$ for $c(\ce{Y})\to0$ and $c^\circ=1\:\mathrm{mol/L}$.
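The algebra in the question can also be checked numerically: writing the occupancy with an explicit $K_\text{A}$ (units of concentration$^{-n}$) or in the unitless Hill form gives identical values of $y$, with $y = 0.5$ at $\ce{A} = h$. The values of $h$ and $n$ below are arbitrary:

```python
# Check of the fractional-occupancy algebra: with K_A carrying units of
# concentration^-n, y = K_A*A^n / (1 + K_A*A^n) equals the unitless Hill
# form A^n / (h^n + A^n) whenever K_A = h^-n.
def occupancy(A, h, n):
    """Hill form: y = A^n / (h^n + A^n)."""
    return A**n / (h**n + A**n)

def occupancy_via_K(A, K_A, n):
    """Equivalent form written with the association constant K_A."""
    return K_A * A**n / (1 + K_A * A**n)

h, n = 2.0, 4        # arbitrary half-saturation constant and Hill coefficient
K_A = 1 / h**n       # K_A in units of concentration^-n

assert abs(occupancy(h, h, n) - 0.5) < 1e-12     # 50% activation at A = h
for A in (0.5, 1.0, 2.0, 4.0):
    assert abs(occupancy(A, h, n) - occupancy_via_K(A, K_A, n)) < 1e-12
print("both formulations agree")
```

Either way $y$ comes out dimensionless, which is the point: the units of $K_\text{A}$ cancel once the equation is written consistently.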
{ "domain": "chemistry.stackexchange", "id": 1310, "tags": "equilibrium, kinetics, chemical-biology" }
Neutrinos from a Type 1a Supernova
Question: How much energy does a Type Ia Supernova produce in the form of neutrinos? I know that a Type II Supernova produces around $10^{45}$ joules worth of neutrinos, but not the amount produced by a Type 1a supernova. Also, how far away could we detect the neutrinos from the explosion? Answer: It looks like there is expected to be a $\sim 1$ second burst of neutrinos peaking at around $10^{43}$ W (Kunugise and Iwanamoto 2007). Wright et al. (2017) present simulations for gravitationally confined detonations and deflagration scenarios that only produce $10^{41}$–$2\times10^{42}$ joules. So about $10^4$ times less energy in neutrinos than a core collapse supernova. The neutrinos are released via electron captures onto free protons and neutron-rich iron-peak nuclei produced in the explosion. The neutrinos have energies of a few MeV, which is a bit lower than the $\sim 10$ MeV neutrinos expected from core collapse supernovae. Wright et al. (2017) suggest that the larger values they obtain would be detectable by the largest neutrino detectors, like Hyper-Kamiokande, if the supernova were closer than the Galactic centre (8 kpc), but possibly only out to about $\sim 1$ kpc, depending on the exact explosion mechanism. That is, certainly not if the supernova were as far away as the Large Magellanic Cloud, like the core collapse supernova SN1987a.
{ "domain": "physics.stackexchange", "id": 96586, "tags": "neutrinos, supernova" }
Does the state space of an MDP change in these two examples?
Question: In the classic Atari environments, like the one introduced in the original DQN paper, the state space is the set of all possible images that the Atari emulator can produce (or more generally just any RGB image, potentially stacked to better represent the environment). This makes sense as the CNN in the DQN is trained end-to-end with the RL signal, and so the Q-function looks directly at the image as input. Now, in methods such as CURL that look to pre-train the CNN and treat it as an encoder, does the state space change here? My thinking is that, if the pre-trained encoder is a function $\psi: \mathcal{I} \rightarrow \mathbb{R}^d$, where $\mathcal{I}$ is the space of images, then the state space is now $\mathbb{R}^d$. The rationale for my thinking is that now the agent directly observes the vector in $\mathbb{R}^d$ produced by the encoder rather than the image, and the state space should be what the agent observes (even though this vector is a representation of the image). Answer: Whilst engineering solutions in reinforcement learning, I think it is common to discuss the concept of state space loosely, in terms of what the search space looks like for the algorithm, and what compromises are OK even though they technically make the problem a POMDP. In terms of definitions relating to the MDP, the state space has a well-defined meaning. It is the set of all state values that can occur in the environment. That set/space can be mapped into different domains, but it remains the same size, as a set, under any bijective mapping. Once you start to implement a state representation in a real system, in a practical agent, you often need to compromise regarding this definition. Even in purely mathematical treatments, it may not be convenient to determine all the theoretically reachable states. Determining them can be more complex than the optimal control problem. So it is very common to over-specify the state space. 
Atari games don't reach states where they produce arbitrary images. Their output during gameplay is on a relatively small manifold embedded in image space. Despite this, the over-specification in image space is useful, because we have good toolkits for working with it, including CNNs for learning generalising function outputs when images are used as inputs. Another compromise seen in the original Atari DQN is missing state. Only using the image, even when stacked, can mean a certain amount of state is not being used. Depending on the game, this state could be important enough that a POMDP would make a better model, and the images would move from a state space representation to an observation space representation (as an aside, stacking images to include velocity information could be replaced by a sequence-aware model such as RNN, and this is similar to POMDP approach, building an internal state representation separate from observations). In both cases - over-specified state space, and missing state - the state space of the problem is not changed. When implementing the agent, you know the representation space you are using, and expect it to have good coverage of all possible states, but often do not know the precise underlying state space of the MDP. This further gets confounded by feature engineering. I would treat the embedding by pre-trained CNN as a form of feature engineering. In theory it could reduce the dimensionality of the optimisation problem significantly, speeding up learning, but there is always a risk that the pre-training misses key features due to differences that are important in RL context having a low weighting in unsupervised learning of the embeddings. So does converting an RL problem that works with images from vision-based observations, to work with embeddings of those images reduce the state space? I would say no, the problem definition is not changed, it has the same state space as before. 
However, the separation of concerns (vision processing vs policy or value prediction), and the lower-dimensional space for generalising, has still done something useful. It may help with generalisation, as similar states may be closer in the embedding space than they are in the larger image space. Loosely speaking, you could say that CURL "reduces state space" and most people would understand what you meant in practical terms. I would personally caveat that with e.g. "effectively reduces state space" or perhaps "makes it easier for the agent to generalise its experience across the state space".
{ "domain": "ai.stackexchange", "id": 3201, "tags": "reinforcement-learning, markov-decision-process, state-spaces" }
ROT47 function implementation
Question: According to Wikipedia, here below is the definition of the algorithm: ROT13 ("rotate by 13 places", sometimes hyphenated ROT-13) is a simple letter substitution cipher that replaces a letter with the letter 13 letters after it in the alphabet. ROT13 is a special case of the Caesar cipher, developed in ancient Rome. ROT47 is a derivative of ROT13 which, in addition to scrambling the basic letters, also treats numbers and common symbols. Instead of using the sequence A–Z as the alphabet, ROT47 uses a larger set of characters from the common character encoding known as ASCII. Specifically, the 7-bit printable characters, excluding space, from decimal 33 '!' through 126 '~', 94 in total, taken in the order of the numerical values of their ASCII codes, are rotated by 47 positions, without special consideration of case. For example, the character A is mapped to p, while a is mapped to 2. I already implemented it in C++ but this time, I have implemented it in SQL Server. Here below is the user-defined function I wrote: CREATE FUNCTION [dbo].[ROT47] ( @PLAIN_TEXT nvarchar(MAX) ) RETURNS nvarchar(MAX) AS BEGIN DECLARE @ENCRYPTED_TEXT nvarchar(MAX) = N'' DECLARE @LENGTH_TEXT int = 0 DECLARE @c nvarchar = N'' DECLARE @i int = 1 SET @LENGTH_TEXT = LEN(@PLAIN_TEXT) WHILE (@i <= @LENGTH_TEXT) BEGIN SET @c = SUBSTRING(@PLAIN_TEXT, @i, 1) IF (ASCII(@c) BETWEEN ASCII(N'!') AND ASCII(N'~')) BEGIN SET @c = char(ASCII(N'!') + (ASCII(@c) - ASCII(N'!') + 47) % 94) SET @ENCRYPTED_TEXT = @ENCRYPTED_TEXT + @c END SET @i = @i + 1 END RETURN @ENCRYPTED_TEXT END Below is the query I wrote to test my UDF: DECLARE @PLAIN_TEXT nvarchar(MAX) = N'HelloWorld' DECLARE @ENCRYPTED_TEXT nvarchar(MAX) DECLARE @DECRYPTED_TEXT nvarchar(MAX) SET @ENCRYPTED_TEXT = ( SELECT dbo.ROT47(@PLAIN_TEXT) ) SET @DECRYPTED_TEXT = ( SELECT dbo.ROT47(@ENCRYPTED_TEXT) ) SELECT @PLAIN_TEXT AS PLAIN_TEXT, @ENCRYPTED_TEXT AS ENCRYPTED_TEXT, @DECRYPTED_TEXT AS DECRYPTED_TEXT As expected, the above query gives the 
following result: +--------------+------------------+------------------+ | PLAIN_TEXT | ENCRYPTED_TEXT | DECRYPTED_TEXT | +--------------+------------------+------------------+ | HelloWorld | w6==@(@C=5 | HelloWorld | +--------------+------------------+------------------+ What do you think about my implementation? Is there a way to improve it? I know loops are things you're trying to avoid in SQL Server, but is there a way to avoid using a loop in my case? Answer: Most functions are ugly in SQL Server. When there is something that needs to be done that is ugly, it's what they are used for IMHO. With that being said, here is a non-looping way. You'd have to run it in your environment to see which is faster, your method or one similar to this. I'd suspect this one would start being a lot faster as the size of the string grows. In that case, you may want to use the table version below. Note, in your OP you stated "without special consideration to case". This method, since it uses ASCII conversions, is sensitive to case. However... this can be altered by applying UPPER or LOWER to the entire string to maintain consistency. DECLARE @PLAIN_TEXT nvarchar(MAX) = N'HelloWorld' --split your string into a column, and compute the decimal value (N) if object_id('tempdb..#staging') is not null drop table #staging select substring(a.b, v.number+1, 1) as Val ,ascii(substring(a.b, v.number+1, 1)) as N --,row_number() over (order by (select null)) as RN into #staging from (select @PLAIN_TEXT b) a inner join master..spt_values v on v.number < len(a.b) where v.type = 'P' --select * from #staging --create a fast tally table of numbers to be used to build the ROT-47 table. 
;WITH E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)), E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max cteTally(N) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4 ) ----uncomment this out to see the encrypted mapping... --select -- s.Val -- ,s.N -- ,e.ENCRYPTED_TEXT --from #staging s --left join( -- select -- N as DECIMAL_VALUE -- ,char(N) as ASCII_VALUE -- ,case -- when 47 + N <= 126 then char(47 + N) -- when 47 + N > 126 then char(N-47) -- end as ENCRYPTED_TEXT -- from cteTally -- where N between 33 and 126) e on e.DECIMAL_VALUE = s.N --Here we put it all together with stuff and FOR XML select PLAIN_TEXT = @PLAIN_TEXT ,ENCRYPTED_TEXT = stuff(( select --s.Val --,s.N e.ENCRYPTED_TEXT from #staging s left join( select N as DECIMAL_VALUE ,char(N) as ASCII_VALUE ,case when 47 + N <= 126 then char(47 + N) when 47 + N > 126 then char(N-47) end as ENCRYPTED_TEXT from cteTally where N between 33 and 126) e on e.DECIMAL_VALUE = s.N FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 0, '') drop table #staging USING A TABLE OF VALUES TO CIPHER declare @table table (ID int, PLAIN_TEXT nvarchar(4000)) insert into @table values (1,N'HelloWorld'), (2,N'AnotherWord'), (3,N'SomeNewWord') --split your string into a column, and compute the decimal value (N) if object_id('tempdb..#staging') is not null drop table #staging select substring(a.b, v.number+1, 1) as Val ,ascii(substring(a.b, v.number+1, 1)) as N --,dense_rank() over (order by b) as RN ,a.ID into #staging from (select PLAIN_TEXT b, ID FROM @table) a inner join master..spt_values v on v.number < len(a.b) where v.type = 'P' --select * from #staging --create a fast tally table of numbers to be used to build the ROT-47 table. 
;WITH E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)), E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max cteTally(N) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4 ) --Here we put it all together with stuff and FOR XML select PLAIN_TEXT ,ENCRYPTED_TEXT = stuff(( select --s.Val --,s.N e.ENCRYPTED_TEXT from #staging s left join( select N as DECIMAL_VALUE ,char(N) as ASCII_VALUE ,case when 47 + N <= 126 then char(47 + N) when 47 + N > 126 then char(N-47) end as ENCRYPTED_TEXT from cteTally where N between 33 and 126) e on e.DECIMAL_VALUE = s.N where s.ID = t.ID FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 0, '') from @table t
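For comparison, the core of the cipher is compact in a general-purpose language. This is an illustrative Python sketch (not part of the original post) of the same `char(33 + (ascii - 33 + 47) % 94)` mapping the UDF uses; note that, unlike the UDF, it passes characters outside `!`..`~` through unchanged instead of dropping them:

```python
def rot47(text: str) -> str:
    """Rotate the 94 printable ASCII characters '!' (33) .. '~' (126) by 47."""
    return "".join(
        chr(33 + (ord(c) - 33 + 47) % 94) if 33 <= ord(c) <= 126 else c
        for c in text
    )

# ROT47 is its own inverse, so the same function encrypts and decrypts.
print(rot47("HelloWorld"))          # w6==@(@C=5
print(rot47(rot47("HelloWorld")))   # HelloWorld
```

This reproduces the `w6==@(@C=5` output from the test query above, which is a convenient cross-check for any SQL rewrite.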
{ "domain": "codereview.stackexchange", "id": 25883, "tags": "sql, sql-server, caesar-cipher" }
Simpler boolean truth table?
Question: I'm doing a CodingBat exercise and would like to learn to write code in the most efficient way. On this exercise, I was just wondering if there's a shorter way to write this code. monkeyTrouble(true, true) → true monkeyTrouble(false, false) → true monkeyTrouble(true, false) → false public boolean monkeyTrouble(boolean aSmile, boolean bSmile) { if (aSmile && bSmile) { return true; } if (!aSmile && !bSmile) { return true; } return false; } Answer: Sometimes it is easy to forget that the simplest logical constructs like boolean are comparable with the == operator, and that, in Java, (false == false) is true. With this in mind, your code could become: public boolean monkeyTrouble(boolean aSmile, boolean bSmile) { return aSmile == bSmile; } It may be easier to see how to get there if you first transform your original code into public boolean monkeyTrouble(boolean aSmile, boolean bSmile) { if ((aSmile && bSmile) || (!aSmile && !bSmile)) { return true; } else { return false; } } … which could become public boolean monkeyTrouble(boolean aSmile, boolean bSmile) { return (aSmile && bSmile) || (!aSmile && !bSmile); } From there, you may come to the realization that "both true or both false" is equivalent to "both the same". Here is a verification of the output: public static boolean monkeyTrouble(boolean aSmile, boolean bSmile) { return aSmile == bSmile; } private static void testTruth(boolean a, boolean b) { System.out.printf("monkeyTrouble(%s, %s) = %s\n", a, b, monkeyTrouble(a, b)); } public static void main(String[] args) { testTruth(true, true); testTruth(true, false); testTruth(false, true); testTruth(false, false); } This produces: monkeyTrouble(true, true) = true monkeyTrouble(true, false) = false monkeyTrouble(false, true) = false monkeyTrouble(false, false) = true
{ "domain": "codereview.stackexchange", "id": 5553, "tags": "java" }
Where does $\tan 2a = \frac{B}{A - C}$ come from?
Question: I was reading about elliptical polarisation and stumbled across an equation involving the rotation angle of the ellipse. It has the form $$\tan(2a) = \frac{B}{A - C}$$ where $B$, $A$ and $C$ are the coefficients of the GENERAL equation $$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$$ What is the physical intuition behind this trigonometric formula? Does it involve coordinate transformation or a change in reference frame? Answer: In a coordinate system in which the ellipse is not rotated (the axes of the ellipse match the axes of the coordinate system), the $xy$ term does not exist. Now to find the angle of rotation we need to find the relation between $x$, $y$ and $x'$, $y'$, where the former are the coordinates of the rotated ellipse and the latter are in the fixed system: $$\left( \begin{array}{c} x \\ y\\ \end{array} \right) = \underbrace{\left( \begin{array}{cc} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta\\ \end{array} \right)}_{\text{Rotation matrix}}\left( \begin{array}{c} x' \\ y'\\ \end{array} \right) $$ Now you need to substitute $x$ and $y$ with $x'$ and $y'$ in the equation of the ellipse, then equate the coefficient of the $x'y'$ term to zero: $$2A\sin\theta\cos\theta-2C\sin\theta\cos\theta+B(\cos^2\theta-\sin^2\theta)=0\\ \Rightarrow (A-C)\sin2\theta+B\cos2\theta=0\\ \Rightarrow \tan2\theta=\Big|\frac{B}{A-C}\Big|$$ There is an ambiguity in the sign of $\theta$ which depends on your choice of rotation matrix, however, the absolute value remains the same.
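The identity is easy to verify numerically. Below is an illustrative Python sketch (not part of the original answer): start from an axis-aligned ellipse $ax^2 + cy^2 = 1$, rotate it by a known angle $\theta$, and check that the resulting coefficients satisfy $B/(A-C) = \tan 2\theta$.

```python
import math

def rotated_coefficients(a, c, theta):
    """Coefficients A, B, C of a*X^2 + c*Y^2 = 1 after rotating by theta.

    Substituting X = x*cos(theta) + y*sin(theta), Y = -x*sin(theta) + y*cos(theta)
    into a*X^2 + c*Y^2 = 1 gives A*x^2 + B*x*y + C*y^2 = 1 with:
    """
    A = a * math.cos(theta) ** 2 + c * math.sin(theta) ** 2
    B = 2 * (a - c) * math.sin(theta) * math.cos(theta)  # = (a - c) * sin(2*theta)
    C = a * math.sin(theta) ** 2 + c * math.cos(theta) ** 2
    return A, B, C

theta = math.radians(25)
A, B, C = rotated_coefficients(0.25, 1.0, theta)  # ellipse x^2/4 + y^2 = 1
# The rotation angle is recovered from the mixed-term coefficient:
assert math.isclose(B / (A - C), math.tan(2 * theta))
```

Since $B = (a-c)\sin 2\theta$ and $A - C = (a-c)\cos 2\theta$, the common factor $(a-c)$ cancels, which is exactly why the formula is independent of the ellipse's shape.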
{ "domain": "physics.stackexchange", "id": 27005, "tags": "waves, reference-frames, polarization" }
Electric discharge through my knuckle
Question: On some particular days, it seems that my car is somehow at a different potential from the ground. When I get out of my car and I wear my sneakers (which I think are better insulators than other kinds of shoes) and I touch the metallic gate with my hand or with a key I am holding, I can feel (and see, if it is night) the electric discharge. I noted that, when it happens, I can feel it not only on my skin but also somewhere inside my knuckle, approximately between finger bones, just before the last phalanx. My question is: could this be explained from a physical point of view? Could it be that there is a change in the resistance of my finger or something similar to a capacitor in my knuckle? If this is exclusively related to my sensation of the discharge, then maybe this question is more related to biology. Please excuse me if you consider this question off-topic. Thinking again about my question, I experienced the same effect (but through my elbow) when I touched the conducting grid of a mosquito racket (even when it is off, there is a certain voltage, I discovered). Therefore, I don't think it is just a matter of sensations. This can be related to Factors affecting pain of static electricity shock, even if it is a quite different question. Edit (23/01/2017): After Squid's comment, I have tried changing clothes, shoes and even driving another car; in every case, especially on dry and cold days, the electric discharge was produced. Therefore, I think that this static electricity is due to my movements while driving, but I still have no idea why I feel the discharge in my knuckles and not elsewhere. Answer: The elbow ("funny bone") is particularly sensitive to knocks because the ulnar nerve is close to the skin and unprotected at this point. The sensation when it is knocked is often described as being like an electric shock. 
Part of the explanation may be that the nerves are more exposed to the electrical discharge at the elbow and knuckle, so you are more aware of it here. But also, since the purpose of nerves is to transmit electrical signals through the body, the nerves and also the fluid in joints offer a path of low resistance. See : https://www.quora.com/Why-do-we-feel-a-shock-like-an-electric-current-when-our-elbow-is-hurt-on-a-specific-point.
{ "domain": "physics.stackexchange", "id": 36808, "tags": "electricity, electric-current, everyday-life, biophysics, biology" }
Command dispatcher in Python
Question: I have this function in Python: def executecommand(call): if call.data == "/getpair": return getpaircmd(call.message) elif call.data == "/menu": return menucmd(call.message) elif call.data == "/setalarm": return setalarmcmd(call.message) And I was hoping to replace it with a dict and .get(call.data), like this: def executecommand(call): return { "/getpair": getpaircmd(call.message), "/menu": menucmd(call.message), "/setalarm": setalarmcmd(call.message) }.get(call.data) But it seems like it calls every function in the dict. So, is there a way to simplify it? Answer: The parentheses act as a function-calling operator in Python. Therefore, your attempt to build the dispatch table would end up calling all the functions. What you can do instead is this: def executecommand(call): return { "/getpair": getpaircmd, "/menu": menucmd, "/setalarm": setalarmcmd, }[call.data](call.message) Note that the behavior with the lookup table is different from the original in the case where call.data is not one of the expected choices. In your code unknown messages will be ignored; in the example code above it would raise a KeyError because call.data is not a key in the dictionary. I also recommend putting a trailing comma after the last entry consistently, so that if you later add or remove a command, you'll get cleaner diffs in your source code version control. The fact that you see all the functions being called highlights a potential performance issue: the dispatch table is rebuilt every time executecommand() runs. The table should be built just once: _DISPATCH = { "/getpair": getpaircmd, "/menu": menucmd, "/setalarm": setalarmcmd, } def executecommand(call): return _DISPATCH[call.data](call.message)
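If the original behavior of silently ignoring unknown commands is wanted, .get with a default callable restores it. A small self-contained sketch (the stub handlers and the Call object are made up for illustration; in a real bot they would come from the messaging library):

```python
from types import SimpleNamespace

# Hypothetical stand-ins for the real handlers, just for demonstration.
def getpaircmd(message):
    return f"pair:{message}"

def menucmd(message):
    return f"menu:{message}"

def setalarmcmd(message):
    return f"alarm:{message}"

_DISPATCH = {
    "/getpair": getpaircmd,
    "/menu": menucmd,
    "/setalarm": setalarmcmd,
}

def executecommand(call):
    # Fall back to a no-op so unknown commands are ignored,
    # as in the original if/elif chain, instead of raising.
    return _DISPATCH.get(call.data, lambda message: None)(call.message)

call = SimpleNamespace(data="/menu", message="hello")
print(executecommand(call))                                        # menu:hello
print(executecommand(SimpleNamespace(data="/oops", message="x")))  # None
```

The key point is that .get returns the function object itself; the call with (call.message) happens only once, on whichever handler was selected.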
{ "domain": "codereview.stackexchange", "id": 43363, "tags": "python" }
Half-life and shelf-life of second-order reaction
Question: For integrated rate laws, I have attempted to find the shelf-life and half-life of a second-order reaction: However, the answers I have obtained (which are labelled with a red cross) are very different from the correct answers (bottom-right corner). I am wondering which parts were done incorrectly. Answer: Your mathematics is correct until you calculated the specific cases. Let's go back to your $n$th order version: $$\frac{[\ce{A}]^{1-n}}{1-n} = -k_nt + \frac{[\ce{A}]_\circ^{1-n}}{1-n} \ \text{where } n \ne 1 \tag1$$ At this point, since you are working on second-order kinetics, it is easy if you substitute $n = 2$ in equation $(1)$: $$\frac{[\ce{A}]^{-1}}{-1} = -k_2t + \frac{[\ce{A}]_\circ^{-1}}{-1} $$ Once simplified, it becomes: $$\frac{1}{[\ce{A}]} = k_2t + \frac{1}{[\ce{A}]_\circ} \tag2$$ For shelf life, substitute $t = t_{10\%}$ and ${[\ce{A}]} = 0.9{[\ce{A}]_\circ}$ in equation $(2)$: $$\frac{1}{0.9[\ce{A}]_\circ} = k_2t_{10\%} + \frac{1}{[\ce{A}]_\circ} $$ $$\therefore \; t_{10\%} = \frac{1}{k_2}\left(\frac{1}{0.9[\ce{A}]_\circ} - \frac{1}{[\ce{A}]_\circ} \right) = \frac{1}{k_2[\ce{A}]_\circ}\left(\frac{1}{0.9} - 1 \right) = \frac{1}{k_2[\ce{A}]_\circ}\left(\frac{1}{9} \right) = \bbox[yellow]{\frac{0.11}{k_2[\ce{A}]_\circ}}$$ Similarly, for half life, substitute $t = t_{50\%}$ and ${[\ce{A}]} = 0.5{[\ce{A}]_\circ}$ in equation $(2)$: $$\frac{1}{0.5[\ce{A}]_\circ} = k_2t_{50\%} + \frac{1}{[\ce{A}]_\circ} $$ $$\therefore \; t_{50\%} = \frac{1}{k_2}\left(\frac{1}{0.5[\ce{A}]_\circ} - \frac{1}{[\ce{A}]_\circ} \right) = \frac{1}{k_2[\ce{A}]_\circ}\left(\frac{1}{0.5} - 1 \right) = \frac{1}{k_2[\ce{A}]_\circ}\left(\frac{0.5}{0.5} \right) = \bbox[yellow]{\frac{1}{k_2[\ce{A}]_\circ}}$$
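These closed-form results can be sanity-checked numerically. The Python sketch below (an illustration, not part of the original answer) integrates the rate law $-\mathrm{d}[\ce{A}]/\mathrm{d}t = k[\ce{A}]^2$ with a crude Euler scheme, using made-up values of $k$ and $[\ce{A}]_\circ$, and compares the observed 10% and 50% decay times against the boxed expressions:

```python
# Illustrative numerical check: integrate d[A]/dt = -k*[A]^2 step by step
# and record the times at which [A] first drops to 0.9*[A]0 and 0.5*[A]0.
k = 0.5      # rate constant (arbitrary units, chosen for illustration)
A0 = 2.0     # initial concentration (arbitrary units)
dt = 1e-5    # Euler time step

A, t = A0, 0.0
t10 = t50 = None
while t50 is None:
    A -= k * A * A * dt
    t += dt
    if t10 is None and A <= 0.9 * A0:
        t10 = t
    if A <= 0.5 * A0:
        t50 = t

assert abs(t10 - (1 / 9) / (k * A0)) < 1e-3   # shelf life = 0.11 / (k*[A]0)
assert abs(t50 - 1 / (k * A0)) < 1e-3         # half life  = 1 / (k*[A]0)
```

Both simulated times agree with the derived expressions to within the step size, which is a quick way to catch a sign or substitution error in this kind of derivation.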
{ "domain": "chemistry.stackexchange", "id": 13990, "tags": "organic-chemistry, physical-chemistry, kinetics, decomposition" }
How could lithium burning take place in a quasi-star?
Question: According to Begelman et al. (2008), one of the most distinguishing features of the hypothetical quasi-star is that it's supported by radiation pressure from the accretion disk of the black hole in its core. The paper suggests that these stars form from Population III stars. What's more, while almost all of the star's luminosity is from the accretion disk (which is far more luminous than thermonuclear reactions), temperatures around the central black hole are hot enough for fusion: Although we do not model the non-hydrostatic region of quasi-stars in any detail, in our case high temperatures are attained in the immediate vicinity of the black hole. However, even in this region the neglect of nuclear reactions is justified, first because black hole accretion is energetically much more efficient than fusion, and secondly because on-going accretion limits the time-scale over which inflowing gas is exposed to high $T$. That's all fine, but what surprised me is the footnote on the third page: In the most massive quasi-stars, the central temperature may be high enough (a few million K) to initiate lithium burning. This is energetically negligible, and although the presence or absence of lithium does affect the opacity, the effect is small for the photospheric temperatures and densities of interest here. Even though this part dismisses lithium burning as an important factor, since its energy output doesn't compare to that of the accretion disk, it still mentions that lithium burning can take place. Considering that these form from Population III stars, how is it that much lithium burning can happen in the first place? Although lithium is a relatively light atom, it is considered a metal and lithium burning depletes the amount of lithium in a star. Since Population III stars are supposed to be nearly metal-free, shouldn't only hydrogen or helium fusion be a factor here, even if they only have a small effect? 
Answer: Lithium, along with hydrogen and helium, was one of the three elements created in the Big Bang. Thus, it should exist to some extent in any star that hasn't burnt all of it away, and as mentioned, burning it all away is not easy. Population III stars are expected to contain lithium, and beryllium as well. The amount, however, is not particularly high.
{ "domain": "astronomy.stackexchange", "id": 7207, "tags": "black-hole, temperature, early-universe, metallicity, hypothetical" }
My Javascript form validation
Question: Hello today after few weeks of learning I tried some form validation. I am wondering how I could improve my code and what things I missed. Here is a preview hosted on github + repository. And here is my code : HTML <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <link rel="stylesheet" href="main.css"> <link href="https://fonts.googleapis.com/css2?family=Rubik:wght@300;400;500;600;700&display=swap" rel="stylesheet"> <title>Form validation</title> </head> <body> <form action="#" class="form"> <h2>Contact with us</h2> <div class="wrapper"> <div> <input type="text" name="name" class='input-name' id='input' placeholder="First name *" required> <p id='input-name-p' class='p-hidden'></p> </div> <div> <input type="text" name="surname" class='input-surname' id='input' placeholder="Surname"> <p id='input-surname-p' class='p-hidden'></p> </div> <div> <input type="text" name="email" class='input-email' id='input' placeholder="E-mail *" required> <p id='input-email-p' class='p-hidden'></p> </div> <div> <input type="text" name="phone" class='input-phone' id='input' placeholder="Phone number"> <p id='input-phone-p' class='p-hidden'></p> </div> <textarea name="message" rows="8" cols="80" class='input-message' id='input' placeholder="Message *" required></textarea> <button href="#" class="btn">Submit</button> </div> </form> <script type="text/javascript" src="script.js"> </script> </body> </html> JS 'use strict'; const submit = document.querySelector('.btn'); const name = document.querySelector('.input-name'); const surname = document.querySelector('.input-surname'); const email = document.querySelector('.input-email'); const phone = document.querySelector('.input-phone'); const items = document.querySelectorAll('#input'); const isValid = function(item) { for (let i = 0; i < item.length; i++) { if (item[i].value){ // Check if contains value let error = document.getElementById(`${item[i].className}-p`); const letters = /^[A-Za-z]+$/; const numbers = /^\d+$/; 
let inputLength = item[i].value.length; switch (item[i].className){ case 'input-name': case 'input-surname': if(!letters.test(item[i].value)){ // Invalid input error.textContent = 'Invalid data'; error.classList.remove('p-hidden'); }else error.classList.add('p-hidden'); break; case 'input-email': if(!item[i].value.includes('.') || !item[i].value.includes('@') || !letters.test(item[i].value[inputLength-1]) ){ // check if includes (@ or .) and check if the last index is a letter error.textContent = 'Invalid data'; error.classList.remove('p-hidden'); }else error.classList.add('p-hidden'); break; case 'input-phone': if(!numbers.test(item[i].value) || inputLength<5){ // Invalid input error.textContent = 'Invalid data'; error.classList.remove('p-hidden'); }else error.classList.add('p-hidden'); break; } } } } for (let i = 0; i < items.length; i++) { items[i].addEventListener('click',function(){ // Check if the input is valid isValid(items); }); } Answer: Duplicate IDs are invalid HTML You have multiple elements of id='input', which is not permitted in HTML. If multiple elements need a particular attribute, use classes instead. IDs should be reserved for elements that are going to be absolutely unique on a page (or, you could also consider not using IDs at all, since they implicitly create global variables, which can lead to hard-to-understand bugs). Input iteration You put the input collection into a variable named items, which is good: const items = document.querySelectorAll('#input'); but then you pass the collection to a function whose parameter is named item, and you do: for (let i = 0; i < item.length; i++) { if (item[i].value) { // Check if contains value // a long block } A reader of the code wouldn't expect an individual item to have a length and a numeric index. How about calling the parameter items instead - or, even better, inputs? 
Rather than iterate over the indices of each element, since you don't actually care about the indices, but only about the underlying elements, it might be preferable to use for..of instead, so you never have to reference the indices. Nested indentation can be difficult to read. Instead of a long block inside an if statement, consider continuing the loop early instead: for (const input of inputs) { if (!input.value) { continue; } // put validation code here Or put it into a function: for (const input of inputs) { if (input.value) { validateInput(input); } Check validity on blur, not on click. Your current implementation will show errors only after the user has inputted something invalid, focused away, then clicked on the input box again. Better to inform them immediately, as soon as a box is de-focused. Check validity even if input is empty, since errors may be displayed - if the input is empty, you'll want to clear the error. You could also consider clearing errors when an input is focused (so that errors are only displayed when an input currently isn't active). Error text is always the same, so don't set it via the JS - put it into the HTML, and hide the error on pageload. 
Use the case-insensitive flag in regular expressions instead of repeating both the capital and lowercase versions, eg: /^[a-z]$/i Wording Contact with us would be better as Contact us DRY input navigation There are a few sources of repetitiveness in the code: Separate class names for each input, requiring iteration through each class name in the switch Separate error class names for each input Separate logic for each class name You can make this better by: Use an object indexed by the name attribute of each input, whose values are regular expressions to test the values against Navigate to the input's adjacent error element with nextElementSibling instead of having separate error classes For the email, you can use a regular expression so that the validation is of the same shape as for the other inputs. const validators = { name: /^[a-z]+$/i, surname: /^[a-z]+$/i, email: /^(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9]))\.){3}(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9])|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])$/, phone: /^\d{5,}$/, }; for (const input of document.querySelectorAll('#input')) { input.addEventListener('blur', () => { checkValidity(input); }); } const checkValidity = (input) => { const isValid = validators[input.name].test(input.value); input.nextElementSibling.classList.toggle('p-hidden', isValid); }; That's all you need. 
'use strict'; const validators = { name: /^[a-z]+$/i, surname: /^[a-z]+$/i, email: /^(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9]))\.){3}(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9])|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])$/, phone: /^\d{5,}$/, }; for (const input of document.querySelectorAll('.wrapper input')) { input.addEventListener('blur', () => { checkValidity(input); }); } const checkValidity = (input) => { const isValid = validators[input.name].test(input.value); input.nextElementSibling.classList.toggle('p-hidden', isValid); }; .p-hidden { display: none; } <form action="#" class="form"> <h2>Contact us</h2> <div class="wrapper"> <div> <input name="name" placeholder="First name *" required> <p class='p-hidden'>Invalid data</p> </div> <div> <input name="surname" placeholder="Surname"> <p class='p-hidden'>Invalid data</p> </div> <div> <input name="email" placeholder="E-mail *" required> <p class='p-hidden'>Invalid data</p> </div> <div> <input name="phone" placeholder="Phone number"> <p class='p-hidden'>Invalid data</p> </div> <textarea name="message" rows="8" cols="80" placeholder="Message *" required></textarea> <button href="#" class="btn">Submit</button> </div> </form> The email regex is probably more complicated than it needs to be; you could write a much more easy-to-read one with only a handful of characters that works just as well in 99% of situations, if you wanted. 
Another option would be to remove all the JavaScript, and use the pattern attribute and type="email" in the HTML instead, letting the browser inform the user of invalid inputs: <form action="#" class="form"> <h2>Contact us</h2> <div class="wrapper"> <div> <input name="name" placeholder="First name *" required pattern="[a-zA-Z]+"> </div> <div> <input name="surname" placeholder="Surname" pattern="[a-zA-Z]+"> </div> <div> <input name="email" placeholder="E-mail *" required type="email"> </div> <div> <input name="phone" placeholder="Phone number" pattern="\d{5,}"> </div> <textarea name="message" rows="8" cols="80" placeholder="Message *" required></textarea> <button href="#" class="btn">Submit</button> </div> </form>
{ "domain": "codereview.stackexchange", "id": 39862, "tags": "javascript, html" }
Muons - how are we even able to detect them?
Question: Muons have a very small half-life, comparable to 2.5 μs or so. But we know that they have to cover a very large distance from the upper atmospheric layers to reach the particle detectors installed at Earth's surface. How is this possible, as their journey would take a long time and they should spontaneously decay into other subatomic particles before even reaching the detectors? Please help! Answer: From our point of view, Lorentz time dilation causes the fast-moving muon to have a longer half-life before decaying than it does at rest. So it has enough time to reach the surface. From the muon's point of view, Lorentz length contraction causes the distance it has to travel through the atmosphere to be much less than what we think of as the thickness of the atmosphere. So, again, it can reach the surface before it decays. What amazing evidence for both time dilation and length contraction! Without them we would not see cosmic muons.
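The numbers make this concrete. The Python sketch below is illustrative only: the ~3 GeV muon energy and ~15 km production altitude are typical assumed values, not data from the question.

```python
import math

c = 2.998e8      # speed of light, m/s
tau0 = 2.2e-6    # muon mean lifetime at rest, s
m_mu = 105.7e6   # muon rest energy, eV
E = 3e9          # assumed muon energy, eV (typical cosmic-ray muon)
L = 15e3         # assumed production altitude, m

gamma = E / m_mu                  # Lorentz factor, ~28
beta = math.sqrt(1 - 1 / gamma**2)

naive_range = beta * c * tau0            # without time dilation: ~660 m
dilated_range = beta * c * gamma * tau0  # with time dilation: ~19 km

assert naive_range < 1e3    # far short of the atmosphere's depth
assert dilated_range > L    # comfortably reaches the ground
# Equivalently, in the muon's frame the atmosphere is length-contracted
# to L / gamma, roughly 500 m, which it crosses within its rest lifetime.
```

Both views, a stretched lifetime in our frame or a shrunken atmosphere in the muon's, give the same conclusion, as they must.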
{ "domain": "physics.stackexchange", "id": 56202, "tags": "special-relativity, particle-physics, particle-detectors, leptons" }
Solving for gas law partial derivative
Question: I don't know where to begin with this question. A scientist discovered that the state of the unknown gas can be well described by the equation of state below: $$P = \frac{RT}{\overline{V}} + \left(\frac{a + bT}{\overline{V}^2}\right)$$ Find the partial derivative $\left(\frac{\partial \overline{V}}{\partial T}\right)_P$ which is expressed in terms of $\overline{V}, R, b$, and $P$. (Hint: If $\overline{V}$ is a function of two variables $T$ and $P$ (i.e. $\bar{V} = f(T, P)$), then the total differential can be expressed as $$d\bar{V} = df(T, P) = \left(\frac{\partial \overline{V}}{\partial T}\right)_PdT + \left(\frac{\partial \overline{V}}{\partial P}\right)_TdP$$ Use a particular condition to derive the expression $\left(\frac{\partial \overline{V}}{\partial T}\right)_P$ in terms of $\left(\frac{\partial \overline{V}}{\partial P}\right)_T$ and $\left(\frac{\partial P}{\partial T}\right)_{\overline{V}}$.) This problem is related to the physical chemistry of the gas law. What is the correct approach? Answer: Think of it this way. $$dP = \left(\partial P\over\partial T\right)_VdT+ \left(\partial P\over\partial V\right)_TdV$$ Now, we want P to be constant (that is, $dP=0$). What does that entail? A certain relation between $dV$ and $dT$ (which, BTW, are now $\partial$ rather than $d$): $$\left(\partial V\over\partial T\right)_P=-\left(\partial P\over\partial T\right)_V\left/\left(\partial P\over\partial V\right)_T\right.$$
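A numerical cross-check of that triple-product relation for this particular equation of state (an illustrative Python sketch, not part of the original answer; the values of R, a, b, T and V are made up):

```python
# Verify (dV/dT)_P = -(dP/dT)_V / (dP/dV)_T for P = RT/V + (a + bT)/V^2.
R, a, b = 8.314, 0.2, 0.01   # a, b chosen arbitrarily for illustration

def P(V, T):
    return R * T / V + (a + b * T) / V**2

T0, V0 = 300.0, 1.0
P0 = P(V0, T0)
h = 1e-6

# Partial derivatives at constant V and constant T (central differences).
dPdT_V = (P(V0, T0 + h) - P(V0, T0 - h)) / (2 * h)
dPdV_T = (P(V0 + h, T0) - P(V0 - h, T0)) / (2 * h)
predicted = -dPdT_V / dPdV_T   # triple product rule

def V_of(T, P_target, lo=0.1, hi=10.0):
    """Invert the EOS for V by bisection (P decreases with V here)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if P(mid, T) > P_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Directly measure how V changes with T while holding P fixed at P0.
dT = 1e-3
measured = (V_of(T0 + dT, P0) - V_of(T0 - dT, P0)) / (2 * dT)
assert abs(predicted - measured) / abs(predicted) < 1e-4
```

The two routes agree, which is the content of the hint: holding $P$ constant in the total differential ties $(\partial \overline{V}/\partial T)_P$ to the two derivatives of $P$ that the explicit equation of state lets you compute directly.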
{ "domain": "chemistry.stackexchange", "id": 5245, "tags": "physical-chemistry, thermodynamics, gas-laws" }
Longest common substring approach using suffix arrays
Question: I am trying to speed up a function to return the longest common substring. The requirements: The function takes two strings of arbitrary length (although on average they will be less than 50 chars each) If two subsequences of the same length exist it can return either Speed is the primary concern This is what I have and it works according to my tests: from os.path import commonprefix class LongestCommonSubstr(object): def __init__(self, lstring, rstring): self.lstring = lstring+'0' self.rstring = rstring+'1' self._suffix_str_array = sorted(self._get_suffix_str(self.lstring) + self._get_suffix_str(self.rstring)) self.longest_common_substr = self._get_lcsubstr() @staticmethod def _get_suffix_str(string): return [string[i:] for i in range(len(string))] def _get_lcsubstr(self): try: substr_len =0 max_len = 0 lcs = None for i,n in enumerate(self._suffix_str_array): if n[-1] != self._suffix_str_array[i+1][-1]: substr = commonprefix([n,self._suffix_str_array[i+1]]) substr_len = len(substr) if substr_len > max_len: max_len = substr_len lcs = substr except IndexError: pass return lcs Can the code be made faster in pure Python? Is there a better option to differentiate the input strings when made into one sorted list than concatenating an ID to them? Answer: OK then: since your concern is speed, let's track our progress with actual timing data. The first step is to run the code through the Python profiler. 
With the addition of a bit of driver code that just calls LongestCommonSubstr().longest_common_substr 10000 times, I get the following results: $ python3.4 -m profile lcs_profile.py abcdeffghiklnopqr bdefghijklmnopmnop 1130008 function calls in 3.083 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 0.000 0.000 :0(__build_class__) 1 0.000 0.000 3.082 3.082 :0(exec) 1 0.000 0.000 0.000 0.000 :0(hasattr) 280000 0.257 0.000 0.257 0.000 :0(len) 260000 0.330 0.000 0.330 0.000 :0(max) 260000 0.350 0.000 0.350 0.000 :0(min) 1 0.000 0.000 0.000 0.000 :0(setprofile) 10000 0.042 0.000 0.042 0.000 :0(sorted) 1 0.000 0.000 0.000 0.000 <frozen importlib._bootstrap>:2264(_handle_fromlist) 260000 1.067 0.000 1.746 0.000 genericpath.py:69(commonprefix) 1 0.022 0.022 3.082 3.082 lcs_profile.py:1(<module>) 20000 0.069 0.000 0.157 0.000 lcs_profile.py:11(_get_suffix_str) 20000 0.071 0.000 0.071 0.000 lcs_profile.py:13(<listcomp>) 10000 0.807 0.000 2.794 0.000 lcs_profile.py:15(_get_lcsubstr) 1 0.000 0.000 0.000 0.000 lcs_profile.py:3(LongestCommonSubstr) 10000 0.067 0.000 3.060 0.000 lcs_profile.py:4(__init__) 1 0.000 0.000 3.083 3.083 profile:0(<code object <module> at 0x10996f030, file "lcs_profile.py", line 1>) 0 0.000 0.000 profile:0(profiler) The most important column is the second one, tottime, marking the total amount of time consumed by each function (but not the functions it calls). The largest entries in that column are for the commonprefix function and the _get_lc_substr function, so those are the spots you should focus on optimizing. 
To keep track of our progress, I coupled your class with the following driver program which prints the execution time for 10000 runs: def lcs_longest_match(a, b): return LongestCommonSubstr(a, b).longest_common_substr if __name__ == '__main__': import timeit, sys print('lcs ', timeit.timeit('lcs_longest_match(sys.argv[1], sys.argv[2])', setup='from __main__ import lcs_longest_match', number=10000)) (this is for Python 3). A first run on my computer gives $ python3.4 lcs.py abcdeffghiklnopqr bdefghijklmnopmnop lcs 0.49046793299203273 I'll start with _get_lc_substr, since you wrote that code and it'll be easier to work through. The algorithm you use is to go through each pair of consecutive strings in the sorted suffix array, find the common prefix, and save that prefix only if it's longer than any common prefix already found. I can suggest a few improvements: Iterating over pairs of consecutive elements is a common task that has a fairly standard recipe, pairwise(iterable), given in the documentation for the itertools module. You can use the implementation from the more-itertools package if you want. This also lets you get rid of the try/except block (which probably didn't affect runtime much, but it helps code clarity). Python has a built-in function, max, to find the maximum value of an iterable. If you use it, then the nuts and bolts of the loop as well as the compare-and-store-if-greater process get handled internally by the interpreter, which should be faster than doing them manually in pure Python. Repeatedly accessing an attribute of an object, namely the method self._suffix_str_array, is slower than accessing it once and storing it locally as a new variable. Let's check the effect of these changes on the runtime of the driver program: lcs 0.49046793299203273 optlcs1 0.4739605739887338 optlcs2 0.4895538759883493 optlcs3 0.4828952929965453 Not much of a change. Actually, using max is slower here, so let's discard that change. 
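For reference, the pairwise recipe discussed above (from the itertools documentation, and what more_itertools.pairwise provides) boils down to roughly this — a sketch, not the library source verbatim:

```python
from itertools import tee

def pairwise(iterable):
    """s -> (s0, s1), (s1, s2), (s2, s3), ..."""
    a, b = tee(iterable)
    next(b, None)  # advance the second iterator by one element
    return zip(a, b)

# This is what replaces the manual index juggling and the
# try/except IndexError guard in the original _get_lcsubstr loop:
# zip stops when the shorter (offset) iterator is exhausted.
print(list(pairwise('abcd')))  # [('a', 'b'), ('b', 'c'), ('c', 'd')]
```

Since Python 3.10 this also exists in the standard library as itertools.pairwise.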
Time now to take a closer look at commonprefix. If you look at the source code of this function, def commonprefix(m): "Given a list of pathnames, returns the longest common leading component" if not m: return '' s1 = min(m) s2 = max(m) for i, c in enumerate(s1): if c != s2[i]: return s1[:i] return s1 you see that it goes through some preliminary steps relating to the fact that it needs to handle a list of potentially several paths. You only ever have two strings to compare, so you can skip that - and in fact, if you look back at the profiling data, you'll see that these calls to max and min do take up a significant amount of time. Therefore, it makes sense to implement your own version of commonprefix without the max and min calls. This makes a big difference in the runtime, cutting it down by over 30%: lcs 0.49046793299203273 optlcs3 0.4828952929965453 optlcs4 0.3215277789859101 If you think about it, you don't even really need the common prefix itself, except for the one string you actually return from _get_lcsubstr. You only need its length, so you can decide which substring is the longest. So instead of using a function that finds the full common prefix, just write one that will give you its length. You store the length, along with one of the strings, and at the end of _get_lcsubstr, use the length to trim the stored string. def _get_lcsubstr(self): # initialize for s1, s2 in mt.pairwise(suffix_str_array): if s1[-1] != s2[-1]: substr_len = get_common_prefix_length(s1, s2) if substr_len > max_len: max_len = substr_len max_substr = s1 return max_substr[:max_len] This shaves another few percent off the runtime: lcs 0.49046793299203273 optlcs4 0.3215277789859101 optlcs5 0.30241094100347254 If it's faster to avoid computing a substring in your commonprefix replacement, you might think of doing the same thing when you're finding the suffixes in the first place. 
In other words, instead of calculating all the suffixes of a string in _get_suffix_str, just make a list of (index, which_string) tuples to represent the suffixes. Then whenever you need to actually compare two suffixes, instead of taking a substring of the original string, you just start comparing characters at the required indices. There are two problems with this in practice: first, Python isn't well suited to iterating from an arbitrary point in the middle of a string. In a language like C, where strings are character pointers, this would work out quite well, because you can jump into the middle of the string by advancing a pointer. But in Python, iterating from the middle of a string requires you to either start from the beginning and just skip the first several characters, or bypass the whole iteration mechanism and use a for loop with an integer index to access characters inside the string by their indices (which typically involves more Python code that is relatively inefficient). And besides, the other reason is you need to create the substrings anyway to sort them. If you try to do it so that you use the substrings as comparison keys without actually storing them, the program spends a lot of time converting between a substring and its index. All told, the changes you need to make to the code to use indices everywhere wind up hurting, not helping. Here's the timing result: lcs 0.49046793299203273 optlcs5 0.30241094100347254 optlcs6 0.3683815289987251 At this point, you can spend a lot of time making little tweaks to try to squeeze some extra performance out of the program, but I don't think there are any major performance gains left. It's already something like 40% faster than the original, which is not bad. Of course, as dawg wrote in a comment, you can accomplish this task using Python's standard module difflib. 
def difflib_longest_match(a, b): i, j, k = difflib.SequenceMatcher(a=a, b=b ).find_longest_match(0, len(a), 0, len(b)) return a[i:i+k] I have no idea what's in the difflib code (well, I could look, but I'll leave that as an exercise), but it's clearly heavily optimized for this kind of task. It's another 25% faster than my best version of your program: lcs 0.49046793299203273 optlcs5 0.30241094100347254 difflib 0.2154458940058248 So if you really want to do this not for educational purposes, just use difflib. Here is the content of the test script I used. import difflib import itertools as it import more_itertools as mt from os.path import commonprefix class LongestCommonSubstr(object): def __init__(self, lstring, rstring): self.lstring = lstring+'0' self.rstring = rstring+'1' self._suffix_str_array = sorted(self._get_suffix_str(self.lstring) + self._get_suffix_str(self.rstring)) self.longest_common_substr = self._get_lcsubstr() @staticmethod def _get_suffix_str(string): return [string[i:] for i in range(len(string))] def _get_lcsubstr(self): try: substr_len =0 max_len = 0 lcs = None for i,n in enumerate(self._suffix_str_array): if n[-1] != self._suffix_str_array[i+1][-1]: substr = commonprefix([n,self._suffix_str_array[i+1]]) substr_len = len(substr) if substr_len > max_len: max_len = substr_len lcs = substr except IndexError: pass return lcs class OptimizedLongestCommonSubstr1(object): def __init__(self, lstring, rstring): self.lstring = lstring+'0' self.rstring = rstring+'1' self._suffix_str_array = sorted(self._get_suffix_str(self.lstring) + self._get_suffix_str(self.rstring)) self.longest_common_substr = self._get_lcsubstr() @staticmethod def _get_suffix_str(string): return [string[i:] for i in range(len(string))] def _get_lcsubstr(self): substr_len =0 max_len = 0 lcs = None for s1, s2 in mt.pairwise(self._suffix_str_array): if s1[-1] != s2[-1]: substr = commonprefix([s1,s2]) substr_len = len(substr) if substr_len > max_len: max_len = substr_len lcs = substr 
return lcs class OptimizedLongestCommonSubstr2(object): def __init__(self, lstring, rstring): self.lstring = lstring+'0' self.rstring = rstring+'1' self._suffix_str_array = sorted(self._get_suffix_str(self.lstring) + self._get_suffix_str(self.rstring)) self.longest_common_substr = self._get_lcsubstr() @staticmethod def _get_suffix_str(string): return [string[i:] for i in range(len(string))] def _get_lcsubstr(self): return max((commonprefix([s1,s2]) for s1, s2 in mt.pairwise(self._suffix_str_array) if s1[-1] != s2[-1]), key=len) class OptimizedLongestCommonSubstr3(object): def __init__(self, lstring, rstring): self.lstring = lstring+'0' self.rstring = rstring+'1' self._suffix_str_array = sorted(self._get_suffix_str(self.lstring) + self._get_suffix_str(self.rstring)) self.longest_common_substr = self._get_lcsubstr() @staticmethod def _get_suffix_str(string): return [string[i:] for i in range(len(string))] def _get_lcsubstr(self): s_array = self._suffix_str_array return max((commonprefix([s1,s2]) for s1, s2 in mt.pairwise(s_array) if s1[-1] != s2[-1]), key=len) class OptimizedLongestCommonSubstr4(object): def __init__(self, lstring, rstring): self.lstring = lstring+'0' self.rstring = rstring+'1' self._suffix_str_array = sorted(self._get_suffix_str(self.lstring) + self._get_suffix_str(self.rstring)) self.longest_common_substr = self._get_lcsubstr() @staticmethod def _get_suffix_str(string): return [string[i:] for i in range(len(string))] @staticmethod def _get_common_prefix(s1, s2): for i, c in enumerate(s1): if c != s2[i]: return s1[:i] return s1 def _get_lcsubstr(self): s_array = self._suffix_str_array gcp = self._get_common_prefix substr_len =0 max_len = 0 lcs = None for s1, s2 in mt.pairwise(s_array): if s1[-1] != s2[-1]: substr = gcp(s1, s2) substr_len = len(substr) if substr_len > max_len: max_len = substr_len lcs = substr return lcs class OptimizedLongestCommonSubstr5(object): def __init__(self, lstring, rstring): self.lstring = lstring+'0' self.rstring = 
rstring+'1' self._suffix_str_array = sorted(self._get_suffix_str(self.lstring) + self._get_suffix_str(self.rstring)) self.longest_common_substr = self._get_lcsubstr() @staticmethod def _get_suffix_str(string): return [string[i:] for i in range(len(string))] @staticmethod def _get_common_prefix_length(s1, s2): for i, c in enumerate(s1): if c != s2[i]: return i return len(s1) def _get_lcsubstr(self): s_array = self._suffix_str_array gcpl = self._get_common_prefix_length max_len = 0 max_substr = '' for s1, s2 in mt.pairwise(s_array): if s1[-1] != s2[-1]: substr_len = gcpl(s1, s2) if substr_len > max_len: max_len = substr_len max_substr = s1 return max_substr[:max_len] class OptimizedLongestCommonSubstr6(object): def __init__(self, lstring, rstring): self.lstring = lstring self.rstring = rstring self._suffix_str_array = sorted( [(i, True) for i, _ in enumerate(lstring)] + [(i, False) for i, _ in enumerate(rstring)], key=lambda t: (lstring if t[1] else rstring)[t[0]:]) self.longest_common_substr = self._get_lcsubstr() @staticmethod def _get_common_prefix_length(s1, start1, s2, start2): for i, c in enumerate(it.islice(s1, start1, None)): if c != s2[i + start2]: return i return i def _get_lcsubstr(self): s_array = self._suffix_str_array gcpl = self._get_common_prefix_length max_len = 0 max_start = 0 max_substr = '' for (i1, b1), (i2, b2) in mt.pairwise(s_array): if b1 != b2: if b1: s1 = self.lstring s2 = self.rstring else: s1 = self.rstring s2 = self.lstring substr_len = gcpl(s1, i1, s2, i2) if substr_len > max_len: max_len = substr_len max_start = i1 max_substr = s1 return max_substr[max_start:max_start+max_len] def lcs_longest_match(a, b): return LongestCommonSubstr(a, b).longest_common_substr for n in it.count(1): if 'OptimizedLongestCommonSubstr{}'.format(n) in globals(): exec('''def optimized_lcs_longest_match_{}(a, b): return OptimizedLongestCommonSubstr{}(a, b).longest_common_substr '''.format(n, n)) else: break def optimized_lcs_longest_match(a, b): return 
OptimizedLongestCommonSubstr(a, b).longest_common_substr def difflib_longest_match(a, b): i, j, k = difflib.SequenceMatcher(a=a, b=b).find_longest_match(0, len(a), 0, len(b)) return a[i:i+k] if __name__ == '__main__': import timeit, sys n_runs = 10000 print('lcs ', timeit.timeit('lcs_longest_match(sys.argv[1], sys.argv[2])', setup='from __main__ import lcs_longest_match', number=n_runs)) for k in range(1, n): print('optlcs{}'.format(k), timeit.timeit( 'optimized_lcs_longest_match_{}(sys.argv[1], sys.argv[2])'.format(k), setup='from __main__ import optimized_lcs_longest_match_{}'.format(k), number=n_runs)) print('difflib', timeit.timeit('difflib_longest_match(sys.argv[1], sys.argv[2])', setup='from __main__ import difflib_longest_match', number=n_runs)) print('control results:', difflib_longest_match(sys.argv[1], sys.argv[2])) for k in range(1, n): print('test results {}: '.format(k), eval('optimized_lcs_longest_match_{}(sys.argv[1], sys.argv[2])'.format(k)))
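Pulling the pieces together, the winning approach (sentinel characters, sorted suffix array, length-only prefix comparison as in optlcs5) can be distilled into a short self-contained sketch — illustrative, not the benchmarked code verbatim:

```python
def common_prefix_length(s1, s2):
    """Length of the common prefix of s1 and s2 (no substring built)."""
    for i, c in enumerate(s1):
        if i >= len(s2) or c != s2[i]:
            return i
    return len(s1)

def longest_common_substring(a, b):
    """Longest common substring of a and b via a sorted suffix array."""
    # Distinct sentinels mark which original string each suffix came from,
    # playing the same role as the '0'/'1' suffixes in the original class.
    a, b = a + '\x00', b + '\x01'
    suffixes = sorted([a[i:] for i in range(len(a))] +
                      [b[i:] for i in range(len(b))])
    best_len, best = 0, ''
    for s1, s2 in zip(suffixes, suffixes[1:]):
        if s1[-1] != s2[-1]:  # suffixes from different strings
            n = common_prefix_length(s1, s2)
            if n > best_len:
                best_len, best = n, s1
    return best[:best_len]

print(longest_common_substring('abcdef', 'zcdeq'))  # cde
```

Only adjacent pairs in the sorted array need to be compared, because for any cross-string pair the longest common prefix is maximized between neighbours in sorted order.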
{ "domain": "codereview.stackexchange", "id": 16730, "tags": "python, performance" }
Potentially fatal near miss: Due diligence in tool selection
Question: I recently outfitted a crew to build scaffold in preparation for an upcoming turnaround at our plant. Among the tools requested were canvas buckets for manually hauling material and tools up and down scaffolds. In sourcing these I went through a reliable, well-established industrial supplier with a known brand. I ordered bags rated for 100 lbs, which is above the maximum load permissible for manual hoisting on our site. Several days after arrival on site, one of these bags failed with a 27 lb load in it. Nobody was hurt, but it could have easily turned out differently. We immediately pulled all bags from use. In investigating, it was discovered that when used outside in winter conditions, the plastic parts of the bag bottoms become brittle. No temperature restrictions were offered in any documentation related to these bags, and these tools were being used for the purpose they were designed for. I feel awful that a decision I made put people at risk. Is there some aspect of due diligence in tool selection that a reasonable engineer would have been expected to take that I had missed? Answer: As a busy engineer, I would say you did your due diligence. There are always more variables than we are given time to consider. The higher the risk of failure/injury, the more time we spend on it, but there are limits. I often try to have another engineer or manager look at my work so there is another set of eyes on it. Also, taking time to test new tools/equipment is good practice when possible. In hindsight this would have addressed the issue. Sometimes you can do "field testing" by just testing the smallest portion of the system at a time. This could mean first giving out one bag to a worker that you know will give you prompt and reliable feedback before you give out all of them. Our company just started requiring a hazard analysis for all capital projects and major process changes.
A small change like bags may have flown under the radar, but a procedure like that might be something worth implementing. On the other side of things, remember that the workers are not helpless. They consciously or subconsciously know to test their tools before they use them. They should also be wearing proper PPE like hardhats and steel toe boots to reduce risk in general. It also helps to let these guys know that they have a say in what tools they use. If they feel something is unsafe they need to make everyone aware. We recently implemented a take 5 safety program that is designed to assist with this. In addition to this, please let the manufacturer know (not just the vendor). You will save more people than just the people at your facility. If they are of any credibility they will refund or replace your bags too.
{ "domain": "engineering.stackexchange", "id": 1957, "tags": "tools, specifications" }
Kin selection Vs altruism (social biology)
Question: I know that this is a contentious topic, and I found conflicting explanations online, which is what prompted me to put this question up. In some papers, kin selection is described based on the concept of inclusive fitness (the sum of direct and indirect fitness). However, kin selection is said to be an altruistic characteristic. That being said, altruistic genes only propagate through indirect selection (since an actor gives up its own fitness and does not generally reproduce). In that case, isn't kin selection based on just indirect fitness and not inclusive fitness as a whole? On a webpage, it is mentioned that "Kin selection is the evolutionary mechanism that selects for those behaviors that increase the inclusive fitness of the donor." In that case, how can kin selection be altruistic? Can someone shed some light on what kin selection is exactly? Also, a proper definition for indirect selection would be good as well. Thank you. Answer: how can kin selection be altruistic? Part of your confusion is purely semantic. Kin selection cannot be altruistic. Kin selection is an evolutionary process. Altruism is a behaviour. Saying "kin selection is altruistic" is like saying "natural selection is flying" (when thinking of selection for flying ability in, say, flying squirrels). Altruism Altruism is a behaviour which increases another individual's fitness at the expense of one's own fitness. However, this definition leads to confusion over whether one is talking about lifetime fitness or only about a contribution to one's fitness that will eventually be returned. For this reason people talk about True Altruism and False Altruism. True Altruism True altruism is a behaviour in which the actor's (the one performing the altruistic behaviour) lifetime fitness is decreased while the recipient's (the one benefitting from the altruistic behaviour) is increased.
True Altruism can only evolve via kin selection (or group selection, for the few who still view these two processes as different). Indeed, it is the indirect component of inclusive fitness that selects for altruism. False Altruism False altruism refers to cases where the actor performs the behaviour because he is expecting (not necessarily consciously) a return later in life. False altruism can be seen as an investment. Such false altruism can be selected for by natural selection (no need to consider indirect fitness). "Return on investment" can be caused by direct reciprocity or indirect reciprocity. In the case of indirect reciprocity, some populations can use a system of reputation, where individuals are more likely to help individuals that they have seen being helpful to others before. Game theory All types of interaction between individuals can be modelled in game theory. One can also investigate a specific type of strategy in response to a specific game. For example, Tit-for-Tat is one type of strategy in a game of multiple encounters between two individuals.
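The Tit-for-Tat strategy mentioned above is easy to simulate. A toy iterated prisoner's dilemma sketch — the payoff values are the conventional textbook ones, assumed here for illustration:

```python
# Payoffs to (me, opponent): C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, opp_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opp_history[-1] if opp_history else 'C'

def always_defect(my_history, opp_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation every round
print(play(tit_for_tat, always_defect))  # exploited once, then retaliates
```

Against itself Tit-for-Tat sustains cooperation, and against a defector it is exploited only in the first round — the kind of "return on investment" logic behind false altruism.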
{ "domain": "biology.stackexchange", "id": 6938, "tags": "fitness, sociobiology" }
.txt word-counter
Question: I'm starting coding with Python (only able to use & create easy code!) and just created a word-counter. It reads a .txt file and enters the words in a dictionary with dict[word]="number of appearances in the text". As far as I can see, the code works well. Nevertheless, I'm not satisfied with the fact that I needed to create 3 "wordlist" since I wanted to delete special characters or empty words from the first wordlist. Afaik, modifying the list while looping over it with "for" is not possible (?). Moreover, I only succeeded printing out the most popular words by iterating over the maximum of the dict, printing and deleting it. Thus, the original dictionary will be altered within the printing process. Questions: Are there any (simple) improvements to make the points mentioned above more elegant? In general, what's your opinion about the code? book = open("bibel.txt", "r") dict = {} lines = book.readlines() wordlist=[] #adding every word into the wordlist for line in lines: words = line.split(" ") for word in words: wordlist.append(word) book.close() #wordlist2 = cleaned wordlist (delete non-alphabetical characters) def clean(word): for char in word: if char.lower() not in "abcdefghijklmnopqrstuvwxyzäüöß": word = word.replace(char,'') word=word.lower() return word wordlist2=[] for word in wordlist: wordlist2.append(clean(word)) #wordlist3 = delete empty words ("") wordlist3=[] for word in wordlist2: if len(word)>0: wordlist3.append(word) #count wordlist3 into dictionary: def count(word): if word in dict: dict[word]=dict[word]+1 else: dict[word]=1 for word in wordlist3: count(word) #print out the first 10 words and values from dictionary: for i in range(100): topword = max(dict, key=dict.get) print(topword, dict[topword]) del dict[topword] Answer: You are mixing function declarations and code execution. Do not. 
Your programme should look like: def fun1(): # do_stuff def fun2(): # do_stuff def main(): fun1() fun2() if __name__ == "__main__": main() Stating the obvious in comments is not good: #adding every word into the wordlist #wordlist2 = cleaned wordlist (delete non-alphabetical characters) #wordlist3 = delete empty words ("") #count wordlist3 into dictionary: #print out the first 10 words and values from dictionary: Comments should explain why, not what; if you need comments to describe the what, you need better names and maybe more functions. The following, for example, should be declared as a function and called later: book = open("bibel.txt", "r") dict = {} lines = book.readlines() wordlist=[] #adding every word into the wordlist for line in lines: words = line.split(" ") for word in words: wordlist.append(word) book.close() def get_wordlist(filename): with open(filename) as f: lines = f.readlines() # More explanation about the following line later wordlist = [word for line in lines for word in line.split(" ")] return wordlist I added in if __name__ == "__main__": because it allows you to import your file without actually running it. dict = {} is a global; do not use mutable globals (global constants are OK). Also it should be noted that dict is a built-in name; it is better to write word_dict or dict_. You should use with open(filename) as f: do_stuff(f.read()) when you open a file; it is simpler and handles closing automatically. You wrote wordlist=[] #adding every word into the wordlist for line in lines: words = line.split(" ") for word in words: wordlist.append(word) The following is more idiomatic than a for loop with append but may be a little harder to understand. Using it or not is up to personal style.
wordlist = [word for line in lines for word in line.split(" ")] When a list comprehension is easy to read you should prefer it over append: wordlist2=[] for word in wordlist: wordlist2.append(clean(word)) should become wordlist2 = [clean(word) for word in wordlist] Checking if a thing is empty in Python is done like if not thing: wordlist3 = [word for word in wordlist2 if word] Putting word = word.lower() at the start simplifies things a bit: def clean(word): word = word.lower() for char in word: if char not in "abcdefghijklmnopqrstuvwxyzäüöß": word = word.replace(char,'') return word Clean is not a clear name. You should call that function alphabet_chars_only. "abcdefghijklmnopqrstuvwxyzäüöß" is the alphabet, it is easy to guess it; anyway, a global constant at the start of your file would be better: ALPHABET = "abcdefghijklmnopqrstuvwxyzäüöß" def clean(word): word = word.lower() for char in word: if char not in ALPHABET: word = word.replace(char,'') return word Just to show the sheer power of list comprehension, you may or may not use the following, it is personal preference: def clean(word): word = word.lower() return ''.join([char for char in word if char in ALPHABET]) Down there, there must be a typo: #print out the first 10 words and values from dictionary: # --- TEN for i in range(100): # -- A HUNDRED topword = max(dict, key=dict.get) print(topword, dict[topword]) del dict[topword] You do not use i in the above loop; it is convention to mark an unused variable as _ or __. You may print the top words like the following: # Credit goes to http://stackoverflow.com/questions/613183/sort-a-python-dictionary-by-value import operator sorted_topword = sorted(dict.items(), key=operator.itemgetter(1), reverse=True) print(sorted_topword[:10]) ( lst[:x] means the first x elements of lst) Python has an official style guide that you should follow when writing good and readable code, it is called Pep8.
The best and fastest option is to write your code without thinking about it and then run Autopep8 on your script; it will make your script Pep8-compliant with no effort. Docstrings are triple-quoted strings put at the start of a function to give some info about it. It is up to you to decide when a function is so simple that it doesn't need one. Putting all my advice together: import operator FILENAME = "bibel.txt" ALPHABET = "abcdefghijklmnopqrstuvwxyzäüöß" def get_wordlist(filename): with open(filename) as f: lines = f.readlines() wordlist = [word for line in lines for word in line.split(" ")] return wordlist def alphabet_chars(word): word = word.lower() return ''.join([char for char in word if char in ALPHABET]) def count(wordlist): """ Returns a dict where the words are the keys and their frequencies are the values. """ dict_ = {} for word in wordlist: if word in dict_: dict_[word]=dict_[word]+1 else: dict_[word]=1 return dict_ def main(): wordlist = get_wordlist(FILENAME) new_wordlist = [word for word in map(alphabet_chars, wordlist) if word] dict_ = count(new_wordlist) sorted_dict_ = sorted(dict_.items(), key=operator.itemgetter(1), reverse=True) print(sorted_dict_[:10]) if __name__ == "__main__": main()
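Worth noting (not covered in the review above): the standard library's collections.Counter does both the counting and the top-N selection in one step, so count() and the manual sort could be replaced entirely. A sketch:

```python
from collections import Counter

ALPHABET = set("abcdefghijklmnopqrstuvwxyzäüöß")

def clean(word):
    """Lowercase the word and keep only alphabet characters."""
    return ''.join(char for char in word.lower() if char in ALPHABET)

def top_words(text, n=10):
    """Return the n most frequent cleaned words as (word, count) pairs."""
    cleaned = (clean(word) for word in text.split())
    counts = Counter(word for word in cleaned if word)  # drop empties
    return counts.most_common(n)  # already sorted, most frequent first

print(top_words("Im Anfang war das Wort und das Wort war bei Gott", 3))
```

Counter.most_common(n) handles the sort-descending-and-slice step that the manual version gets wrong without reverse=True.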
{ "domain": "codereview.stackexchange", "id": 11058, "tags": "python, python-3.x" }
What is meant by 'Gravitational Potential Energy of a System'?
Question: 'Gravitational potential energy' is defined as: 'energy an object possesses because of its position in a gravitational field'. Consider two planets of masses $M$ and $m$ at a distance $r$ from each other. (Please note that $r$ is the distance between the CoMs of the two planets, and this image does not show it properly.) In the gravitational field of $M$, $m$'s P.E. is $$-\frac{GMm}{r}$$ and in the field of $m$, $M$'s P.E. is $$-\frac{GMm}{r}$$ (same as before). But I am not getting the idea of the gravitational potential energy of the whole system that contains both planets, which is defined as $-\frac{GMm}{r}$. According to the definition mentioned at the beginning, how can I clarify this one? In other words, what is meant by the gravitational potential energy of a system? Answer: Unlike kinetic energy, which a single body can possess, potential energy is always a property of a system that has at least two bodies. Potential energy exists in a system when two (or more) objects comprising the system interact by means of a conservative force. Your first definition is actually incorrect. The potential energy belongs to the system of the object and the gravitational field. There are many misconceptions associated with a single object possessing a potential energy. For example, when you raise a ball from the Earth's surface to a particular height, it is incorrectly stated that the ball possesses a gravitational potential energy (given by $mgy$). The correct way of saying it is that the system of the ball and the Earth, or the system of the ball and the Earth's gravitational field, has a gravitational potential energy given by $mgy$. In this case, the system consists of the ball and the Earth, which interact by means of a conservative force: gravity. Your expression for two objects is synonymous with the ball and the Earth. The system that they comprise has the property of potential energy because they interact by means of a conservative force.
You may have also come across the expression for the gravitational potential energy of a system of three or more particles: $$U_g = \frac{1}{2} \sum_{i \neq j}\frac{-Gm_im_j}{r_{ij}}$$ In such a case, would it make sense to say that a single object out of all of them possesses this value of potential energy (as your definition suggests)? In response to your question (posted in the comments): (1) In general, the article you mentioned has a lot of mistakes. Once again, potential energy is a property of the mass-field system, so neither the gravitational field nor the mass possesses the potential energy (equal to $V \cdot m$; read my second point). In fact it is a property of the combined system, and hence this statement: "Also why the gravitational field is assigned potential energy when the work on the body is done by the field?" is physically meaningless. Note that some sources may claim that the gravitational potential energy is stored in the field. While not entirely accurate, this is somewhat justified, because the gravitational field would change with distance (just like the potential energy would). (2) You must not confuse gravitational potential energy with gravitational potential. The gravitational potential is defined as $$V = \frac{U_g}{m}$$ where $m$ is the mass of the object placed in the field (the test mass), not the source mass causing the field. It is only numerically equal to the potential energy when you substitute $m = 1\ \mathrm{kg}$. (3) It is incorrect to state that a single body possesses a potential energy. This notation, however, is too entrenched in our language, which is why you may see several references to it. However, a single isolated object cannot have a potential energy function (as described in my answer).
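The double-sum formula above, with its factor of 1/2 to undo double counting, can be checked numerically. A sketch for three point masses (the masses and positions are arbitrary illustrative values):

```python
import itertools
import math

G = 6.674e-11  # gravitational constant, SI units

def pair_energy(m1, p1, m2, p2):
    """Gravitational PE of one pair of point masses."""
    r = math.dist(p1, p2)
    return -G * m1 * m2 / r

# Three point masses: (mass in kg, position in m)
bodies = [(5.0, (0.0, 0.0)), (3.0, (4.0, 0.0)), (2.0, (0.0, 3.0))]

# Sum over ordered pairs i != j, halved to undo double counting ...
u_double = 0.5 * sum(pair_energy(mi, pi, mj, pj)
                     for (mi, pi), (mj, pj)
                     in itertools.permutations(bodies, 2))

# ... equals the plain sum over unordered pairs i < j.
u_pairs = sum(pair_energy(mi, pi, mj, pj)
              for (mi, pi), (mj, pj)
              in itertools.combinations(bodies, 2))

print(u_double, u_pairs)
```

Note that the energy comes out as a single number for the whole configuration; there is no way to split it among the individual bodies, which is the answer's point.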
{ "domain": "physics.stackexchange", "id": 80934, "tags": "newtonian-mechanics, newtonian-gravity, potential-energy, definition, conventions" }
Frequency of vibration of a spring
Question: Hi, actually I'm confused about the velocity formula (in the blue boundary): why is the velocity of that small element written in that way? Answer: Suppose that you have a spring of length $\ell$ fixed at one end and it is extended an amount $e_\ell$ at the other end. Half way down the spring from the fixed end, $\ell/2$, the extension of the spring is $e_\ell/2$. In fact the extension of the spring from the fixed end is proportional to the distance from the fixed end. Thus at a distance $s$ from the fixed end the extension is $e= \dfrac{e_\ell}{\ell} \cdot s$. If one differentiates this expression with respect to time one obtains an expression for the speed of the spring, $\dot e$, at various positions along the spring. So the speed of the spring a distance $s$ from the fixed end is $\dot e = \dfrac{\dot e_\ell}{\ell} \cdot s = \dfrac{v}{\ell} \cdot s$, where $v\,(=\dot e_\ell)$ is the speed of the spring at the moving end.
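This linear speed profile is also what gives a vibrating spring its effective mass: integrating the kinetic energy of each element along a uniform spring of mass $m$ yields $\frac{1}{2}(m/3)v^2$, which is why the standard approximation for the frequency of a mass $M$ on a spring of stiffness $k$ is $f = \frac{1}{2\pi}\sqrt{k/(M + m/3)}$. A numeric sketch (the values below are purely illustrative):

```python
# Numerically integrate the kinetic energy of a uniform spring of mass m
# and length L whose free end moves at speed v.  Each element ds at
# distance s from the fixed end moves at v*s/L and carries mass (m/L)*ds.
m, L, v = 0.30, 1.0, 2.0   # illustrative spring mass (kg), length (m), end speed (m/s)
N = 100_000
ds = L / N
ke = 0.0
for i in range(N):
    s = (i + 0.5) * ds     # midpoint of the i-th element
    dm = (m / L) * ds      # mass of the element
    ke += 0.5 * dm * (v * s / L) ** 2

# The integral evaluates to (1/2)*(m/3)*v^2: the spring contributes an
# effective point mass of m/3 at the moving end.
expected = 0.5 * (m / 3.0) * v ** 2
assert abs(ke - expected) < 1e-6
```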
{ "domain": "physics.stackexchange", "id": 89683, "tags": "classical-mechanics, waves, harmonic-oscillator, frequency" }
Menu Bar Animation Plugin
Question: I'm fairly new to JS/jQuery (a few months), and I think it's time to start getting involved in the community. So I wrote a little plugin. Nothing revolutionary. Really, the project is to write a clean, workable plugin. Any and all thoughts and suggestions on how I can make the code cleaner, or the animations smoother, or anything, are very much appreciated. Here's the plugin in action: http://jsfiddle.net/VA7P5/ Here is the plugin code: (function ($) { $.fn.menuBar = function (options) { var defaults = { width: 145, // Width of Sidebar left: true, // If true, sidebar is positioned left. If false, it's positioned right height: 80, // Height of footer barColor: '#000', // Color of three-bar menu before it's opened menuBackground: '#303030', // Background color of sidebar and footer closeColor: '#fff' // Color of close-button }; var options = $.extend(defaults, options); return this.each(function () { var i = $(this); var o = options; var width = $('nav.sidebar').css('width'); var height = $('footer.hidden').css('height'); var closeColor = $('.bar').css('background'); var barColor = $('.bar').css('background'); var barOne = $('.menu-bar-top'); var barTwo = $('.menu-bar-bottom'); var barThree = $('.menu-bar-mid'); var menuTrigger = $('nav.sidebar a'); var fadeWrapper = $('#fade-wrapper'); var nav = $('nav.sidebar'); var footerHidden = $('footer.hidden'); var bar = $('.bar'); bar.css('background', o.barColor); if (o.left) { nav.css({ 'width': o.width, 'left': o.width - (o.width * 2), 'background': o.menuBackground }); $('.menu-trigger').css({ 'left': 0 }); } else { nav.css({ 'width': o.width, 'right': o.width - (o.width * 2), 'background': o.menuBackground }); $('.menu-trigger').css({ 'right': 0 }); } footerHidden.css({ 'height': o.height, 'bottom': o.height - (o.height * 2), 'background': o.menuBackground }); i.click(function(){ if (i.hasClass('open')) { closeMenu(); i.removeClass('open'); // Allow scrolling again when menu is closed $('body').css('overflow', 
''); } else { openMenu(); i.addClass('open'); // No scrolling while menu is open $('body').css('overflow', 'hidden'); } }); $('#fade-wrapper').click(function(){ closeMenu(); i.removeClass('open'); $('body').css('overflow', ''); }); /*=========================================================================================================== Opening/Closing Functions ===========================================================================================================*/ function openMenu() { fadeWrapper.fadeIn(100, function(){ barOne.css({ 'top': '8px', 'transform': 'rotate(405deg)', '-webkit-transform': 'rotate(405deg)', '-moz-transform': 'rotate(405deg)', '-ms-transform': 'rotate(405deg)', '-o-transform': 'rotate(405deg)' }); barTwo.css({ 'top': '8px', 'transform': 'rotate(-405deg)', '-webkit-transform': 'rotate(-405deg)', '-moz-transform': 'rotate(-405deg)', '-ms-transform': 'rotate(-405deg)', '-o-transform': 'rotate(-405deg)' }); if (o.left) { nav.animate({'left': '+=' + o.width}, 200); } else { nav.animate({'right': '+=' + o.width}, 200); } footerHidden.animate({'bottom': '+=' + o.height}, 200); barThree.fadeOut(100); bar.css('background', o.closeColor); }); } function closeMenu() { setTimeout(function(){ barThree.fadeTo(100, 1); fadeWrapper.fadeOut(100); if (o.left) { nav.animate({'left': '-=' + o.width}, 200); } else { nav.animate({'right': '-=' + o.width}, 200); } footerHidden.animate({'bottom': '-=' + o.height}, 200); bar.css('background', o.barColor); barOne.css({ 'top': '3px', 'transform': 'rotate(360deg)', '-webkit-transform': 'rotate(360deg)', '-moz-transform': 'rotate(360deg)', 'ms-transform': 'rotate(360deg)', 'o-transform': 'rotate(360deg)' }); barTwo.css({ 'top': '13px', 'transform': 'rotate(-360deg)', '-webkit-transform': 'rotate(-360deg)', '-moz-transform': 'rotate(-360deg)', '-ms-transform': 'rotate(-360deg)', '-o-transform': 'rotate(-360deg)' });}, 1); } }); }; })(jQuery); The necessary HTML: <nav class="sidebar"> <a class="menu cursor" 
title="Menu"> <div class="menu-trigger"> <div class="bar-container"> <div class="bar menu-bar-top"></div> <div class="bar menu-bar-mid"></div> <div class="bar menu-bar-bottom"></div> </div> </div> </a> <!-- Sidebar content goes here --> </nav> <div id="fade-wrapper"></div> <footer class="hidden"> <!-- Footer content goes here --> </footer> The necessary CSS: a.cursor { cursor: pointer; } #fade-wrapper { display: none; position: fixed; top: 0; left: 0; height: 100%; width: 100%; background: rgba(0, 0, 0, 0.3); z-index: 5000; } nav.sidebar { position: fixed; top: 0; height: 100%; z-index: 9999; } .menu-trigger { position: fixed; top: 8px; width: 40px; height: 20px; line-height: 40px; } .bar-container { margin-top: 3px; height: 13px; } .bar { position: absolute; height: 3px; width: 90%; outline: 1px solid transparent; -webkit-transition: all .5s ease; -moz-transition: all .5s ease; -ms-transition: all .5s ease; -o-transition: all .5s ease; transition: all .5s ease; } .menu-bar-top { top: 3px; left: 2px; } .menu-bar-mid { top: 8px; left: 2px; } .menu-bar-bottom { top: 13px; left: 2px; } footer.hidden { position: fixed; left: 0; width: 100%; z-index: 9999; } Answer: I like your code, it is easy to follow, has comments, well named variables etc. The only thing I would point out is that you are repeating yourself here and there. So I will focus on that: This piece of code: if (o.left) { nav.css({ 'width': o.width, 'left': o.width - (o.width * 2), 'background': o.menuBackground }); $('.menu-trigger').css({ 'left': 0 }); } else { nav.css({ 'width': o.width, 'right': o.width - (o.width * 2), 'background': o.menuBackground }); $('.menu-trigger').css({ 'right': 0 }); } Is really repeating the same thing but with left being replaced with right, you could just assign left or right to a variable first. var key = o.left ? 
'left' : 'right'; nav.css({ 'width': o.width, 'background': o.menuBackground }).css( key, o.width - (o.width * 2) ); $('.menu-trigger').css( key , 0 ); This piece of code is repeated a few times as well; the only difference is the degrees and the value of 'top'. barTwo.css({ 'top': '13px', 'transform': 'rotate(-360deg)', '-webkit-transform': 'rotate(-360deg)', '-moz-transform': 'rotate(-360deg)', '-ms-transform': 'rotate(-360deg)', '-o-transform': 'rotate(-360deg)' });}, 1); You could consider a helper function that does this transformation for you: function generateTransformation( top , transformation ){ return { 'top': top, 'transform': transformation, '-webkit-transform': transformation, '-moz-transform': transformation, '-ms-transform': transformation, '-o-transform': transformation }; } Then, you can simply do barOne.css( generateTransformation( '3px' , 'rotate(360deg)' )); barTwo.css( generateTransformation( '13px' , 'rotate(-360deg)' )); These: if (o.left) { nav.animate({'left': '+=' + o.width}, 200); } else { nav.animate({'right': '+=' + o.width}, 200); } I will leave to you as an exercise for the reader. var bar = $('.bar'); <- This will select 3 bars, perhaps call it bars ? var closeColor = $('.bar').css('background'); <- This takes the background color of the first bar. I would put that in a comment or make it more obvious by calling $('.bar').eq(0).css('background'); or even better determine bars first and then go for bars.eq(0).css('background');
{ "domain": "codereview.stackexchange", "id": 7197, "tags": "javascript, jquery, beginner, css, plugin" }
String Calculator
Question: This is a calculator I made for fun and also to practice a bit. My goal was to make a calculator that can handle user input as well as a scientific calculator. I made it as a Singleton to keep things tidy and provide some additional calculator functionality. #include <iostream> #include <vector> #include <utility> #include <string> #include <cstring> #include <cmath> static const long double pi_num = 3.1415926535897932; template <typename T, typename U> static T factorial(U num) { T res = 1; while (num > 1) { res *= num; --num; } return res; } // singleton template <typename NUM_TYPE> class calculator { public: static calculator &get(); calculator(const calculator &) = delete; calculator &operator=(const calculator &) = delete; static NUM_TYPE calc(const std::string &expression); static NUM_TYPE calc(const char *expression); NUM_TYPE calc_substr(const std::string &, unsigned begin, unsigned end); static const std::string output(); static void printOutput(); static bool error(); static NUM_TYPE ans(); private: calculator() {} std::string error_msg; NUM_TYPE answer = 0; bool error_flag = false; bool paren_flag = false; // for preventing parentheses from overwriting answer static void applyFunction(std::string &, NUM_TYPE &); }; template <typename NUM_TYPE> calculator<NUM_TYPE> &calculator<NUM_TYPE>::get() { static calculator<NUM_TYPE> Calculator; return Calculator; } template <typename NUM_TYPE> NUM_TYPE calculator<NUM_TYPE>::calc(const std::string &expression) { return get().calc_substr(expression, 0, expression.length() - 1); } template <typename NUM_TYPE> NUM_TYPE calculator<NUM_TYPE>::calc(const char *expression) { return get().calc_substr(expression, 0, strlen(expression) - 1); } template <typename NUM_TYPE> NUM_TYPE calculator<NUM_TYPE>::calc_substr(const std::string &expression, unsigned begin, unsigned end) { // the calculator splits the input into segments (units) each containing an operation and a number // these segments (units) are stored in 
calc_units std::vector< std::pair<char, NUM_TYPE> > calc_units; std::string function; function.reserve(6); NUM_TYPE num = 0, res = 0; char operation = '+'; bool operation_flag = true; // setting the operation flag to true since // the first number's plus sign is usually omitted bool negative_flag = false; bool function_flag = false; error_flag = false; // parsing the string and calculating functions for (int i = begin; i <= end; ++i) { if (expression[i] == '+' || expression[i] == '-' || expression[i] == '*' || expression[i] == '/' || expression[i] == '%' || expression[i] == '^') { if (operation_flag) { if (expression[i] == '-') // negative number negative_flag = true; else if (operation == '*' && expression[i] == '*') // python notation for exponentiation operation = '^'; else { error_flag = true; error_msg = "Syntax Error"; return 0; } } else if (function_flag) { error_flag = true; error_msg = "Syntax Error"; return 0; } else { operation = expression[i]; operation_flag = true; negative_flag = false; } } else if (expression[i] == '!') calc_units[calc_units.size() - 1].second = factorial<NUM_TYPE>(calc_units[calc_units.size() - 1].second); else if (expression[i] >= 'a' && expression[i] <= 'z') { function.clear(); while ((expression[i] >= 'a' && expression[i] <= 'z') && i <= end) { function.push_back(expression[i]); ++i; } i--; if (function == "ans") { num = answer; if (negative_flag) num *= -1; if (operation_flag == false) // omitting the '*' in multiplication operation = '*'; calc_units.push_back(std::make_pair(operation, num)); num = 0; operation_flag = false; negative_flag = false; } else if (function == "pi") { num = pi_num; if (negative_flag) num *= -1; if (operation_flag == false) // omitting the '*' in multiplication operation = '*'; calc_units.push_back(std::make_pair(operation, num)); num = 0; operation_flag = false; negative_flag = false; } else function_flag = true; } // parsing numbers and applying functions // the user might use a decimal point without 
a zero before it to show a number smaller than one // example: 1337 * .42 where the zero in 0.42 is omitted else if ((expression[i] >= '0' && expression[i] <= '9') || expression[i] == '.') { while (expression[i] >= '0' && expression[i] <= '9' && i <= end) { num = 10 * num + (expression[i] - '0'); ++i; } if (expression[i] == '.') // decimal point { ++i; unsigned decimals_count = 0; NUM_TYPE decimals = 0; while (expression[i] >= '0' && expression[i] <= '9' && i <= end) { decimals = 10 * decimals + (expression[i] - '0'); decimals_count++; ++i; } num += decimals / pow(10, decimals_count); decimals = 0; decimals_count = 0; } if (negative_flag) // negative number num *= -1; // applying functions if (function_flag) { applyFunction(function, num); if (error_flag) { error_msg = "Unknown Function"; return 0; } function_flag = false; } if (operation_flag == false) // omitting the '*' in multiplication operation = '*'; calc_units.push_back(std::make_pair(operation, num)); num = 0; operation_flag = false; negative_flag = false; --i; } else if (expression[i] == '(') { unsigned open = ++i; // the user might open parentheses but not close them // in the case that several parentheses are opened but only some of them // are closed, we must pair the closest open and close parentheses together // parenthesis_count is used to check if a close parenthesis belongs to // the current open paranthesis int parenthesis_count = 1; while (parenthesis_count > 0 && i <= end) { if (expression[i] == '(') ++parenthesis_count; if (expression[i] == ')') --parenthesis_count; ++i; } i--; paren_flag = true; // preventing parentheses from overwriting answer num = get().calc_substr(expression, open, i); if (error_flag) return 0; if (negative_flag) num *= -1; // applying functions if (function_flag) { applyFunction(function, num); if (error_flag) { error_msg = "Unknown Function"; return 0; } function_flag = false; } if (operation_flag == false) // omitting the '*' in multiplication operation = '*'; 
calc_units.push_back(std::make_pair(operation, num)); num = 0; operation_flag = false; negative_flag = false; paren_flag = false; } } for (int i = 0; i < calc_units.size(); ++i) { if (calc_units[i].first == '+') { num = calc_units[i].second; } else if (calc_units[i].first == '-') { num = calc_units[i].second * -1; } // left-to-right associativity else if (calc_units[i].first == '*' || calc_units[i].first == '/') { res -= num; while (i < calc_units.size() && (calc_units[i].first == '*' || calc_units[i].first == '/')) { if (calc_units[i].first == '*') num *= calc_units[i].second; else if (calc_units[i].first == '/') { if (calc_units[i].second == 0) { error_flag = true; error_msg = "Math Error"; return 0; } else num /= calc_units[i].second; } ++i; } --i; } // right-to-left associativity else if (calc_units[i].first == '^' || calc_units[i].second == '%') { res -= num; NUM_TYPE temp; int count = 0; // finding where the operations with right-to-left associativity end while (i + count + 1 < calc_units.size() && (calc_units[i + count + 1].first == '^' || calc_units[i + count + 1].first == '%')) ++count; temp = calc_units[i + count].second; for (int j = count; j >= 0; --j) { if (calc_units[i + j].first == '^') temp = pow(calc_units[i + j - 1].second, temp); if (calc_units[i + j].first == '%') temp = (long long) calc_units[i + j - 1].second % (long long) temp; } if (calc_units[i - 1].first == '+') num = temp; else if (calc_units[i - 1].first == '-') num = temp * -1; else if (calc_units[i - 1].first == '*') { num /= calc_units[i - 1].second; num *= temp; } else if (calc_units[i - 1].first == '/') { num *= calc_units[i - 1].second; num /= temp; } i += count; } res += num; } if (paren_flag == false) // preventing parentheses from overwriting answer answer = res; return res; } template <typename NUM_TYPE> const std::string calculator<NUM_TYPE>::output() { if (get().error_flag) return get().error_msg; else { using std::to_string; // for compatibility with non-fundamental data 
types return to_string(get().answer); } } template <typename NUM_TYPE> void calculator<NUM_TYPE>::printOutput() { if (get().error_flag) std::cout << get().error_msg; else std::cout << get().answer; } template <typename NUM_TYPE> bool calculator<NUM_TYPE>::error() { return get().error_flag; } template <typename NUM_TYPE> NUM_TYPE calculator<NUM_TYPE>::ans() { return get().answer; } template <typename NUM_TYPE> void calculator<NUM_TYPE>::applyFunction(std::string &function, NUM_TYPE &num) { if (function == "abs") num = fabs(num); else if (function == "sqrt") num = sqrt(num); else if (function == "cbrt") num = cbrt(num); else if (function == "sin") num = sin(num); else if (function == "cos") num = cos(num); else if (function == "tan") num = tan(num); else if (function == "cot") num = 1 / tan(num); else if (function == "sec") num = 1 / cos(num); else if (function == "csc") num = 1 / sin(num); else if (function == "arctan") num = atan(num); else if (function == "arcsin") num = asin(num); else if (function == "arccos") num = acos(num); else if (function == "arccot") num = atan(1 / num); else if (function == "arcsec") num = acos(1 / num); else if (function == "arccsc") num = asin(1 / num); else if (function == "sinh") num = sinh(num); else if (function == "cosh") num = cosh(num); else if (function == "tanh") num = tanh(num); else if (function == "coth") num = 1 / tanh(num); else if (function == "sech") num = 1 / cosh(num); else if (function == "csch") num = 1 / sinh(num); else if (function == "arctanh") num = atanh(num); else if (function == "arcsinh") num = asinh(num); else if (function == "arccosh") num = acosh(num); else if (function == "arccoth") num = atanh(1 / num); else if (function == "arcsech") num = acosh(1 / num); else if (function == "arccsch") num = asinh(1 / num); else if (function == "log") num = log10(num); else if (function == "ln") num = log(num); else if (function == "exp") num = exp(num); else if (function == "gamma") num = tgamma(num); else if 
(function == "erf") num = erf(num); else get().error_flag = true; function.clear(); } Possible way of using the calculator: using Calculator = calculator<long double>; int main() { std::string expression; while (true) { std::getline(std::cin, expression); Calculator::calc(expression); if (Calculator::error()) std::cout << Calculator::output() << "\n\n"; else std::cout << "= " << std::setprecision(15) << Calculator::ans() << "\n\n"; } } Output example: 4400 * 1337 - 42 / 7 + 9000 = 5891794 2sin(pi/4)cos(pi/4) = 1 ans * 32 = 32 2 * 2 ^ 2 ^ 3 = 512 (2 + 3) * 4 = 20 5(8+9) = 85 2 * -4 = -8 tan(2)*log(5)/exp(6) = -0.00378574198801152 sin1sqrt2 = 1.19001967905877 1 / 0 Math Error sin*cos Syntax Error 2 */ 4 Syntax Error lol(1234) Unknown Function A few questions: Is my extensive use of flags causing code-spaghetti? Does my code need more comments? Was it a good idea to use the Singleton design pattern? Let me know what you think! Suggestions and ideas are very welcome :) Answer: Use the constant M_PI (and others) from <cmath> instead of defining your own. Singletons are bad, and there is no need for one here. I would recommend you avoid this pattern. There is no way to cleanly exit the program. Break out some functions; the body of the main calculation function is too long to be easy to understand. Use std::stringstream and its formatted input functions to read numbers etc. instead of writing your own code for this. You should use the correct algorithm for parsing mathematical expressions: the shunting-yard algorithm. Regarding more or fewer comments: your code should be structured such that comments are not necessary. Break out functions wherever you think you need a comment; making the function name say what the comment would have said is one way to think of it. Of course it's not always possible, but it's one way to think about it. E.g. instead of having: // Read in a number from string ... Lots of code... 
Do: auto number = read_number(input_string); If you apply this consistently you'll find that you get more readable and maintainable code with fewer comments. I'm missing unit tests; this is an obvious class to test with unit testing to make sure it works and produces the correct result. I'm going to stop here without going too deep into the technical issues with the code, such as using int instead of vector<>::size_type, because I believe that you have bigger things to address (e.g. use the right algorithm and test your code).
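As a hedged sketch of the shunting-yard algorithm the answer recommends (Python here purely to illustrate the idea, not a drop-in replacement for the C++ code; tokenization, unary minus, functions, and error recovery are all omitted):

```python
import operator

# Minimal shunting yard over a pre-tokenized infix expression:
# numbers and operator strings in, RPN (postfix) out, then a stack
# evaluation.  Only binary + - * / ^ and parentheses are handled.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
RIGHT_ASSOC = {'^'}
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul,
       '/': operator.truediv, '^': operator.pow}

def to_rpn(tokens):
    out, stack = [], []
    for tok in tokens:
        if isinstance(tok, (int, float)):
            out.append(tok)
        elif tok == '(':
            stack.append(tok)
        elif tok == ')':
            while stack[-1] != '(':
                out.append(stack.pop())
            stack.pop()                  # discard the matching '('
        else:                            # a binary operator
            while (stack and stack[-1] != '(' and
                   (PREC[stack[-1]] > PREC[tok] or
                    (PREC[stack[-1]] == PREC[tok] and tok not in RIGHT_ASSOC))):
                out.append(stack.pop())
            stack.append(tok)
    while stack:
        out.append(stack.pop())
    return out

def eval_rpn(rpn):
    stack = []
    for tok in rpn:
        if isinstance(tok, (int, float)):
            stack.append(tok)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok](a, b))
    return stack[0]

# (2 + 3) * 4 = 20, and '^' associates right-to-left: 2 ^ 2 ^ 3 = 256
assert eval_rpn(to_rpn(['(', 2, '+', 3, ')', '*', 4])) == 20
assert eval_rpn(to_rpn([2, '^', 2, '^', 3])) == 256
```

Precedence and associativity fall out of the two small tables instead of the flag bookkeeping in the original code, which is the main maintainability win of this approach.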
{ "domain": "codereview.stackexchange", "id": 37689, "tags": "c++, calculator" }
OpenMP parallelization of a for loop with function calls
Question: Using OpenMP, is it correct to parallelize a for loop inside a function "func" as follows? void func(REAL coeff, DATAMPOT *dmp, int a, int la, int b, int lb, REAL L) { int i,j,k; REAL dx,dy,dz; REAL dx2,dy2,dz2; REAL r; #pragma omp parallel for default(shared) private(k,i,j,dx,dy,dz,dx2,dy2,dz2,r) reduction(+:deltaE) for(k=0; k<la*lb; ++k){ j=k/la+b; i=k%la+a; dx=fabs(part[i].x-part[j].x); dy=fabs(part[i].y-part[j].y); dz=fabs(part[i].z-part[j].z); dx2=(dx<0.5?dx*dx:(1-dx)*(1-dx)); dy2=(dy<0.5?dy*dy:(1-dy)*(1-dy)); dz2=(dz<0.5?dz*dz:(1-dz)*(1-dz)); r=L*sqrt(dx2+dy2+dz2); deltaE += coeff*((dmp+NSPES*part[i].s+part[j].s)->npot>1? mpot(r,dmp+NSPES*part[i].s+part[j].s,((REAL)rand())/RAND_MAX): (dmp+NSPES*part[i].s+part[j].s)->pot[0](r,(dmp+NSPES*part[i].s+part[j].s)->dp ) ); } } Where: REAL is double (#define REAL double) DATAMPOT *dmp is a pointer to a struct containing (among others) some pointers to functions, such as pot[0] part is a global array of struct deltaE (variable for summation-reduction) is a REAL global variable I know that, for a correctness, a special treatment of function rand() is also required; but apart from that, are there some other important (conceptual) correction to do on the above parallelization? Which is limited at only one directive row? Answer: There is nothing wrong with your code, but it can be improved somehow. First, automatic variables, defined in a scope that is outer to the parallel region, are automatically shared. Therefore the default(shared) clause is redundant. Second, the loop counter k has predetermined sharing class of private - you can safely omit it. Also you should declare all variables in the scope where they are used. In your case all variables except k can be declared in the parallel region. Such variables have predetermined sharing class of private. 
If you follow both of the above points, your OpenMP directive will be greatly simplified: void func(REAL coeff, DATAMPOT *dmp, int a, int la, int b, int lb, REAL L) { int k; #pragma omp parallel for reduction(+:deltaE) for (k = 0; k < la*lb; ++k) { int j = k/la+b; int i = k%la+a; REAL dx = fabs(part[i].x-part[j].x); REAL dy = fabs(part[i].y-part[j].y); REAL dz = fabs(part[i].z-part[j].z); REAL dx2 = (dx<0.5?dx*dx:(1-dx)*(1-dx)); REAL dy2 = (dy<0.5?dy*dy:(1-dy)*(1-dy)); REAL dz2 = (dz<0.5?dz*dz:(1-dz)*(1-dz)); REAL r = L*sqrt(dx2+dy2+dz2); DATAMPOT *ptr = dmp + NSPES*part[i].s+part[j].s; deltaE += coeff*(ptr->npot>1 ? mpot(r,ptr,((REAL)rand())/RAND_MAX) : ptr->pot[0](r,ptr->dp)); } } If you can use C99 constructs in your code, then you can even move the declaration of k inside the for loop, i.e.: #pragma omp parallel for reduction(+:deltaE) for (int k = 0; k < la*lb; k++) { ... } Also make sure that none of the functions called inside the loop have visible side effects, i.e. they don't modify some shared global state in an unexpected and unsynchronised way.
{ "domain": "codereview.stackexchange", "id": 10604, "tags": "c, openmp" }
Basic approximation algorithms understanding
Question: Suppose we have 2 algorithms $Alg1$ and $Alg2$ for the same minimization problem. We know that $Alg1$ is a $2$-approximation algorithm and $Alg2$ a $4$-approximation algorithm. Is the following statement true? There must be an input $I$ such that $Alg2(I) \geq 2 \cdot Alg1(I)$. My answer: I think the statement is false: In order to prove that the statement is false it is sufficient to find one case where $Alg1$ is a $2$-approximation and $Alg2$ is a $4$-approximation and that there is no input which satisfies the inequality from the statement. Does my approach make sense? Is there some other approach or thought process which proves statements of this type true/false that I'm missing? Note: $Alg$ is a $\rho$-approximation algorithm for a minimization problem $\iff$ $\forall I.(OPT(I) \leq Alg(I) \leq \rho \cdot OPT(I))$ where $\rho > 1$ and $OPT(I)$ denotes the optimal solution to the minimization problem for the input $I$. Answer: According to your definition (which is the standard definition), an algorithm solving a certain minimization problem is an $\alpha$-approximation algorithm for any $\alpha \geq 1$. Consider a minimization problem solvable in polynomial time, for example minimum cut. Let Alg1 and Alg2 be algorithms solving this problem exactly. Alg1 is a 2-approximation algorithm and Alg2 is a 4-approximation algorithm, yet there is no input on which Alg2 performs worse than Alg1.
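A quick numeric illustration of this counterexample (a sketch with made-up optimal values; both "algorithms" simply return the optimum):

```python
# The statement claims some input must satisfy Alg2(I) >= 2*Alg1(I).
# Counterexample sketch: let both algorithms solve the problem exactly.
# An exact algorithm satisfies the rho-approximation bound for every
# rho >= 1, so Alg1 is a valid 2-approximation and Alg2 a valid
# 4-approximation, yet for OPT(I) > 0 the claimed inequality fails
# on every input.
instances_opt = [1, 3, 7, 10, 42]     # hypothetical optimal values OPT(I)

def alg1(opt): return opt             # exact solver
def alg2(opt): return opt             # exact solver

for opt in instances_opt:
    assert opt <= alg1(opt) <= 2 * opt      # 2-approximation bound holds
    assert opt <= alg2(opt) <= 4 * opt      # 4-approximation bound holds
    assert alg2(opt) < 2 * alg1(opt)        # the claimed input never exists
```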
{ "domain": "cs.stackexchange", "id": 9819, "tags": "algorithms, proof-techniques, approximation" }
When an acid is added to water, why does the hydroxide ion concentration decrease?
Question: At equilibrium in pure water, we have $$\ce{[H_3O+][OH-]} = 10^{-14}$$ Since $\ce{H3O+}$ and $\ce{OH-}$ ions are produced in pairs, we may conclude $$\ce{[H_3O+]}=\ce{[OH-]} = 10^{-7}$$ So far so good. But shouldn't things change when we introduce a new substance into water? I mean, why does the first equation above hold no matter what? Also, when I introduce $\ce{H2SO4}$ into the water, it doesn't just give a $\ce{H+}$ ion, it also gives a $\ce{HSO4-}$ ion. Shouldn't these new negative ions change the behavior of water? Why does my textbook never talk about these new negative ions? Help appreciated. Thanks! Answer: Your question title is a bit misleading, but I will try to answer all the small questions in your question text. The equation $[\ce{H3O^+}][\ce{OH^-}] = 10^{-14}$ holds true if other parameters (like $T$) are constant. Keep in mind that when acid is added, the hydroxide concentration decreases while the oxonium concentration increases, so pOH rises exactly as pH falls. Because pH and pOH are the negative logarithms of these concentrations, and the product of the concentrations is fixed, they will add up to 14 every time. Regarding the introduced $\ce{HSO4^-}$ ions, they don't contribute to pH by definition. On the other hand, they alter the behaviour of the water by increasing its conductivity. Your textbooks don't talk about the other negative ions in acidic or alkaline solutions because they don't directly contribute to the values of pH or pOH by definition. In cases of polyacids like sulfuric acid, ions like $\ce{HSO4^-}$ are accounted for by using a different formula to calculate the actual pH value, but the "not-hydrogen" part is largely irrelevant in the behaviour of the solution itself.
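The bookkeeping behind this can be sketched numerically (illustrative concentrations; $K_w = 10^{-14}$ at 25 °C):

```python
import math

KW = 1.0e-14   # ion product of water at 25 degrees C

# Adding acid raises [H3O+]; [OH-] must drop so the product stays at Kw,
# and pH + pOH therefore stays at 14.  Concentrations are illustrative:
# pure water first, then more and more acid.
for h3o in (1.0e-7, 1.0e-4, 1.0e-2):
    oh = KW / h3o
    ph = -math.log10(h3o)
    poh = -math.log10(oh)
    assert abs(ph + poh - 14.0) < 1e-9

assert KW / 1.0e-2 < KW / 1.0e-7   # more acid -> less hydroxide
```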
{ "domain": "chemistry.stackexchange", "id": 10724, "tags": "acid-base, ph" }
Resources on randomized algorithms for analysis and design of quantum algorithms
Question: Are there any good resources (online courses, books, websites, etc.) to study randomized algorithms that would help with a specific scope on the analysis and design of quantum algorithms? Answer: Probability And Randomness: Quantum Versus Classical by Andrei Yu Khrennikov is a great book on the foundations of quantum randomness. Introduction to Random Time and Quantum Randomness by Kai Lai Chung and Jean-Claude Zambrini is another work on quantum randomness. Both these books focus more on the theoretical foundations of quantum randomness. MIT has also published a recent work on quantum randomness by Zi-Wen Liu.
{ "domain": "quantumcomputing.stackexchange", "id": 2170, "tags": "resource-request, randomised-benchmarking" }
printf-style string formatter
Question: As part of a larger project I'm working on, I've made a function that aims to replicate, as best as I can, the placeholder part of the console.log() function usable in most browsers. It replaces %s, %d , %i , %f and ,%o. It works by having 'unlimited' parameters (the replacements), looping through the parameters, checking for any placeholders and replacing them by index with the parameter given. The switch is for checking whether the passed parameter (replacement) is the correct type - if not, it will add NaN, just like the real console would. My main questions are: Is my code readable? Are there any better ways to do any part of this? Can it be cleaned up anymore? function replacePlaceholders(string) { var args = Array.prototype.slice.call(arguments).splice(1); //get all the extra arguments (exclude the original string) var counter = 0; //set a counter for (var i = 0; i < args.length; i++) { //loop through the extra arguments (should match the number of placeholders in the string) var match = /%s|%d|%i|%f|%o/.exec(string); //regex if (match) { //if match found var index = match.index; //store the index switch (match[0]) { //check whether the placeholder has a real value with the correct type case '%s': //for strings string = (typeof args[counter] == 'string') ? string.substr(0, index) + args[counter] + string.substr(index + 2) : string.substr(0, index) + NaN + string.substr(index + 2); break; case '%d': //for numbers.... case '%i': case '%f': string = (typeof args[counter] == 'number') ? string.substr(0, index) + args[counter] + string.substr(index + 2) : string.substr(0, index) + NaN + string.substr(index + 2); break; case '%o': //for objects string = (typeof args[counter] == 'object') ? 
string.substr(0, index) + args[counter] + string.substr(index + 2) : string.substr(0, index) + NaN + string.substr(index + 2); break; } } counter++; } return string; } console.log(replacePlaceholders("string %s %d %i %f %o", 'string', 1, 4, 5.5, {one: 1})); //test case You can test this here - check the console for the returned string. Answer: Before I start: Thank you so much for making this code easily readable! Thank you from the bottom of my heart! The second thing that I noticed was the name. It's wrong! If it is a sprintf-like function, give it the right name: sprintf! Done! No overly complicated boring names. You have an argument called string. Please, don't do that. In fact, remove its name! You know it is the first element. You can do this: var text = arguments[0]; for (var i = 1; i < arguments.length; i++) { Done! But it won't be needed and you will see why. You still didn't learn: store the length in a local variable. Always! Like this: for (var i = 1, length = args.length; i < length; i++) { Accessing a property in an object (in this case, an array) is slower than accessing a local variable. By caching that slow property access in a local variable, we increase performance. It is really a great idea to have literal regular expressions, but the whole code is quite... shady... Especially the useless validations! You expect a string to be a real string so you can do something... But why? What about a number? What about an array? What about something else? Why can't I convert anything into a string? Every single object has the .toString() method. And why that NaN there? What is it for? Is that to say it is NaN? The %o is doing something somewhat somewhere that I don't get. Both the C++ documentation and the PHP documentation specify that this is the octal representation. And now, the questions: Why do you check if an argument is a string before converting it to string? Why do you check if an argument is of any type before converting? 
Why don't you do any kind of validation? Why are you using exec? I would never do it in this way. It's so frail and picky. It's so hard to change anything without breaking it. You have to deal with things you shouldn't, like breaking a string into 3 parts and replacing the middle part with something. That's a bad way to do a .replace(). I would do it like this: function sprintf(){ var toBase = function(value, base){ //converts to the required bases return (parseInt(value, 10) || 0).toString(base); }; var map = { s: function(value){ return value.toString(); }, d: function(value){ return toBase(value, 10); }, i: function(value){ return this.d(value); }, b: function(value){ //binary conversion return toBase(value, 2); }, o: function(value){ //octal conversion return toBase(value, 8); }, x: function(value){ //hexadecimal conversion return toBase(value, 16); }, X: function(value){ //HEXADECIMAL CONVERSION IN CAPITALS return toBase(value, 16).toUpperCase(); }, e: function(value){ //scientific notation return (parseFloat(value, 10) || 0).toExponential(); }, E: function(value){ return this.e(value, 10).toUpperCase(); }, f: function(value){ //floating point return (parseFloat(value, 10) || '0.0').toString(); } }; //crude way to extract the keys var keys = ''; for(var k in map) { keys += k; } var args = Array.prototype.slice.call(arguments).slice(); return args.shift().toString().replace(new RegExp('%([' + keys + '])','g'),function(_, type){ if(!args.length) { throw new Error('Too few elements');//appropriate type here? } return map[type](args.shift()); }); } Come on! It's so easy to understand! You can change its functionality easily! If you want, you can remove that loop to fetch the keys and use this literal regular expression: /%([diobsxXeEf])/g. Same thing, but faster and less flexible. And this is a "good enough" clone of the sprintf function. It doesn't support padding, float-number size specification and others. But, it works for the bare minimum. 
Outside the scope of this review, thanks to you, I finally have a function to write my polyglot questions! Also, you may want to look at http://phpjs.org/functions/sprintf/ if you want a perfect clone of the sprintf function. To reflect closely the desired implementation, one can change the map variable. I went a bit further and changed all unnecessary things. To be consistent with the MDN documentation and the Chrome console API documentation, you can do this: function sprintf(){ var map = { s: function(value){ return value.toString(); }, d: function(value){ return (parseInt(value, 10) || 0).toString(); }, i: function(value){ return this.d(value); }, f: function(value){ return (parseFloat(value, 10) || '0.0').toString(); } }; var args = Array.prototype.slice.call(arguments).slice(); return args.shift().toString().replace(/%([difs])/g,function(_, type){ if(!args.length) { throw new Error('Too few elements'); } return map[type](args.shift()); }); } The %o, %O and %c specifiers are implemented internally and aren't possible to implement using this method. %o and %O work on elements, by creating a list in the console or a link to the element to show in the Element Inspector. The %c uses CSS styles to style the output in the console itself.
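The mapping-plus-callback pattern in the answer translates to other languages too. Here is a hypothetical minimal Python analogue (the function name, the %s/%d/%i/%f subset, and the error message are my own choices, not part of the original answer):

```python
import re

def sprintf(fmt, *args):
    """Replace %s/%d/%i/%f in fmt with successive args (illustrative sketch)."""
    it = iter(args)
    handlers = {
        's': str,                      # anything can become a string
        'd': lambda v: str(int(v)),    # integer conversion
        'i': lambda v: str(int(v)),
        'f': lambda v: str(float(v)),  # floating point
    }
    def repl(match):
        try:
            value = next(it)
        except StopIteration:
            raise ValueError('too few arguments for format string')
        return handlers[match.group(1)](value)
    return re.sub(r'%([sdif])', repl, fmt)

print(sprintf('string %s %d %i %f', 'abc', 1, 4, 5.5))  # string abc 1 4 5.5
```

As in the JavaScript version, adding a new specifier only means adding one entry to the handler table; the regular expression stays in sync with the table's keys.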
{ "domain": "codereview.stackexchange", "id": 15304, "tags": "javascript, strings, console, formatting" }
Is entanglement transitive?
Question: Is entanglement transitive, in a mathematical sense? More concretely, my question is this: Consider 3 qubits $q_1, q_2$ and $q_3$. Assume that $q_1$ and $q_2$ are entangled, and that $q_2$ and $q_3$ are entangled. Then, are $q_1$ and $q_3$ entangled? If so, why? If not, is there a concrete counterexample? On my notion of entanglement: qubits $q_1$ and $q_2$ are entangled, if after tracing out $q_3$, the qubits $q_1$ and $q_2$ are entangled (tracing out $q_3$ corresponds to measuring $q_3$ and discarding the result). qubits $q_2$ and $q_3$ are entangled, if after tracing out $q_1$, the qubits $q_2$ and $q_3$ are entangled. qubits $q_1$ and $q_3$ are entangled, if after tracing out $q_2$, the qubits $q_1$ and $q_3$ are entangled. Feel free to use any other reasonable notion of entanglement (not necessarily the one above), as long as you clearly state that notion. Answer: TL;DR: It depends on how you choose to measure entanglement on a pair of qubits. If you trace out the extra qubits, then "No". If you measure the qubits (with the freedom to choose the optimal measurement basis), then "Yes". Let $|\Psi\rangle$ be a pure quantum state of 3 qubits, labelled A, B and C. We will say that A and B are entangled if $\rho_{AB}=\text{Tr}_C(|\Psi\rangle\langle\Psi|)$ is not positive under the action of the partial transpose map. This is a necessary and sufficient condition for detecting entanglement in a two-qubit system. The partial trace formalism is equivalent to measuring qubit C in an arbitrary basis and discarding the result. There's a class of counter-examples that show that entanglement is not transitive, of the form $$ |\Psi\rangle=\frac{1}{\sqrt{2}}(|000\rangle+|1\phi\phi\rangle), $$ provided $|\phi\rangle\neq |0\rangle,|1\rangle$. 
If you trace out qubit $B$ or qubit $C$, you'll get the same density matrix both times: $$ \rho_{AC}=\rho_{AB}=\frac12\left(|00\rangle\langle 00|+|1\phi\rangle\langle 1\phi|+|00\rangle\langle 1\phi|\langle\phi|0\rangle+|1\phi\rangle\langle 00|\langle0|\phi\rangle\right) $$ You can take the partial transpose of this (taking it on the first system is the cleanest): $$ \rho^{PT}=\frac12\left(|00\rangle\langle 00|+|1\phi\rangle\langle 1\phi|+|10\rangle\langle 0\phi|\langle\phi|0\rangle+|0\phi\rangle\langle 10|\langle0|\phi\rangle\right) $$ Now take the determinant (which is equal to the product of the eigenvalues). You get $$ \text{det}(\rho^{PT})=-\frac{1}{16}|\langle 0|\phi\rangle|^2(1-|\langle 0|\phi\rangle|^2)^2, $$ which is negative, so there must be a negative eigenvalue. Thus, $(AB)$ and $(AC)$ are entangled pairs. Meanwhile $$ \rho_{BC}=\frac12(|00\rangle\langle 00|+|\phi\phi\rangle\langle\phi\phi |). $$ Since this is a valid density matrix, it is non-negative. However, the partial transpose is just equal to itself. So, there are no negative eigenvalues and $(BC)$ is not entangled. Localizable Entanglement One might, instead, talk about the localizable entanglement. Before further clarification, this is what I thought the OP was referring to. In this case, instead of tracing out a qubit, one can measure it in a basis of your choice, and calculate the results separately for each measurement outcome. (There is later some averaging process, but that will be irrelevant to us here.) In this case, my response is specifically about pure states, not mixed states. The key here is that there are different classes of entangled state. 
For 3 qubits, there are 6 different types of pure state: a fully separable state; 3 types where there is an entangled state between two parties, and a separable state on the third; a W-state; a GHZ state. Any type of quantum state can be converted into one of the standard representatives of each class just by local measurements and classical communication between the parties. Note that the conditions of $(q_1,q_2)$ and $(q_2,q_3)$ being entangled remove the first 4 cases, so we only have to consider the last 2 cases, W-state and GHZ-state. Both representatives are symmetric under exchange of the particles: $$ |W\rangle=\frac{1}{\sqrt{3}}(|001\rangle+|010\rangle+|100\rangle)\qquad |GHZ\rangle=\frac{1}{\sqrt{2}}(|000\rangle+|111\rangle) $$ (i.e. if I swap qubits A and B, I still have the same state). So, these representatives must have the required transitivity properties: If A and B are entangled, then B and C are entangled, as are A and C. In particular, both of these representatives can be measured in the X basis in order to localize the entanglement. Thus, for any pure state that you're given, you can fold the measurement that converts it into the standard representative into the measurement that localizes the entanglement, and you're done!
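The counterexample family is easy to check numerically. Below is a small sketch (assuming numpy is available; I pick the concrete choice $|\phi\rangle=(|0\rangle+|1\rangle)/\sqrt2$, which satisfies $|\phi\rangle\neq|0\rangle,|1\rangle$) that builds $|\Psi\rangle$, traces out one qubit, and tests the partial-transpose criterion:

```python
import numpy as np

e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
phi = (e0 + e1) / np.sqrt(2)            # any |phi> different from |0>, |1>

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# |Psi> = (|000> + |1, phi, phi>) / sqrt(2)
psi = (kron3(e0, e0, e0) + kron3(e1, phi, phi)) / np.sqrt(2)
rho = np.outer(psi, psi).reshape(2, 2, 2, 2, 2, 2)    # indices a,b,c,a',b',c'

rho_AB = np.einsum('abkdek->abde', rho).reshape(4, 4)  # trace out qubit C
rho_BC = np.einsum('kbckde->bcde', rho).reshape(4, 4)  # trace out qubit A

def min_eig_pt(r):
    """Smallest eigenvalue after partial transpose on the first qubit."""
    m = r.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
    return np.linalg.eigvalsh(m).min()

print(min_eig_pt(rho_AB))  # negative: (A,B) is entangled
print(min_eig_pt(rho_BC))  # >= 0: (B,C) is PPT, hence separable for two qubits
```

A negative eigenvalue after the partial transpose certifies entanglement, matching the sign of the determinant computed above; the $(B,C)$ pair stays positive, so transitivity fails for the traced-out notion.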
{ "domain": "quantumcomputing.stackexchange", "id": 166, "tags": "entanglement" }
Software for calculating the trajectory of a body in a coordinate system
Question: I'm looking for a program that would simulate the path of motion of a body in a coordinate system, given the force acting upon the body. I'd type in the initial conditions such as the velocity, $(x,y)$ position and the formula of the force (e.g. $F=k \sqrt{x^2+y^2}$) and then I'd like the application to simulate the path of motion the body will follow. Does anyone know of such software? Answer: You are asking how to numerically solve a second order initial value problem. An initial value problem involves advancing some initial state over time given an ordinary differential equation (ODE) that describes the time evolution of the state. There are many books, journal articles, and college classes about this topic. There is no one perfect technique. Some techniques are blindingly simple, others are hideously complex. As Kevin Ye mentioned, Mathematica can do this quite nicely for many initial value problems. So can Matlab, but you have to know a bit more about numerical integration with Matlab. For example, you need to know whether your problem is stiff. Sometimes these commercial integrators fail miserably. There's a reason there are many books, articles, and classes on the subject. It's not easy, and there is no one perfect technique. Most ODE solvers address multidimensional first order ODEs. A higher order ODE can readily be converted to a first order ODE by creating a larger state vector. For example, Newton's second law is a second order ODE. Simply create a state vector that contains position and velocity. You asked about two dimensional position (x and y) so that means the composite state vector would have four elements, x, y, and their time derivatives. Voila! The second order ODE $\ddot {\vec x} (t) = \vec F/m$ has been transformed to a first order ODE. There's a price to pay for using a first order ODE solver on a second order ODE problem. While this transformation works conceptually, it loses something in practice. 
In particular, it loses "symplecticity". This may not matter if you just want an approximate solution, if the time interval is sufficiently short, or if the ODE solver is very, very good. An example of the latter is the Livermore Solver for Ordinary Differential Equations, or LSODE, developed at Lawrence Livermore. This is one of the best solvers out there for first order ODEs. It works very nicely on second order problems, too, at least over sufficiently short spans of time so that you don't see that things that should be conserved (energy, linear momentum, and angular momentum) aren't conserved. The easiest ODE solver is Euler's method. This is a very, very simple technique, and as a result it is typically very, very lousy. However, understanding Euler's method is essential to understanding any more advanced method. You can easily implement Euler's method in a spreadsheet or in an easy-to-learn programming language such as python. You simply advance state one step at a time until you reach the desired end time via $$\vec u(t+\Delta t) = \vec u(t) + \vec f(\vec u(t),t)\,\Delta t$$ where $\vec u(t)$ is the state at time $t$, $\vec f(\vec u(t),t)$ is the function that computes the time derivative of the state, $\Delta t$ is the time step, and $\vec u(t+\Delta t)$ is the estimated state at the next time step $t+\Delta t$. This technique can easily be adapted to Newton's second law via $$\begin{aligned} \vec r(t+\Delta t) &= \vec r(t) + \vec v(t)\, \Delta t \\ \vec v(t+\Delta t) &= \vec v(t) + \frac {\vec F(\vec r(t), \vec v(t), t)}{m}\, \Delta t \end{aligned}$$ Here $\vec F(\vec r, \vec v, t)$ calculates force as a function of position, velocity, and time. 
This basic implementation is not symplectic. A small change of ordering, known as the semi-implicit (or symplectic) Euler method, fixes that: $$\begin{aligned} \vec v(t+\Delta t) &= \vec v(t) + \frac {\vec F(\vec r(t), \vec v(t), t)}{m}\, \Delta t \\ \vec r(t+\Delta t) &= \vec r(t) + \vec v(t+\Delta t)\, \Delta t \end{aligned}$$ In other words, calculate the new velocity first, and use this new velocity to update the position.
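The difference is easy to see numerically. Here is a self-contained sketch (plain Python; the harmonic-oscillator force $F=-kx$, step size, and step count are my own choices): the plain Euler update lets the oscillator's energy grow exponentially, while the velocity-first (semi-implicit, symplectic) update keeps it bounded near its initial value.

```python
def euler_step(x, v, f, m, dt):
    # plain explicit Euler: both updates use the old state
    return x + v * dt, v + f(x) / m * dt

def semi_implicit_step(x, v, f, m, dt):
    # velocity first, then the position update uses the *new* velocity
    v_new = v + f(x) / m * dt
    return x + v_new * dt, v_new

# harmonic oscillator F = -k x; exact total energy is 0.5 for x0=1, v0=0
k = m = 1.0
f = lambda x: -k * x
dt, steps = 0.01, 50_000

energies = {}
for step in (euler_step, semi_implicit_step):
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = step(x, v, f, m, dt)
    energies[step.__name__] = 0.5 * m * v ** 2 + 0.5 * k * x ** 2

print(energies)  # plain Euler drifts far above 0.5; semi-implicit stays close
```

This is the practical meaning of "losing symplecticity": per step the two methods cost the same, but only one of them respects the conserved quantities over long integrations.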
{ "domain": "physics.stackexchange", "id": 17380, "tags": "newtonian-mechanics, resource-recommendations, computational-physics, simulations, software" }
Random number wrapper class
Question: I made this class to handle any min max integer value and return a random number in the entered parameter range. I tested it to work with all combinations (I think). But is this totally wrong or written with unnecessary amounts of code? Are there obvious Java convention violations or obvious redundancy in this? I was wondering if it was possible to do the same without instantiating Random, since I have read that object instantiation is more resource demanding than method invoking, like invoking Math.random(). I just couldn't make that work, unfortunately, as I didn't save the strange non-working code that it ended with. I tried the solution here. However, I don't understand what the rand() part is, which is too bad since it seemed really simple with just one line of code. abstract class RandomInteger { static int randomNumber; public static int returnRandomIntRange(int start, int end){ if(end < start){ throw new IllegalArgumentException("Start cannot exceed End."); } else if(end == 0 && start == 0){ throw new IllegalArgumentException("Start and End can't both be 0."); } else if(end == start){ throw new IllegalArgumentException("Start and End can't be the same."); } else if(end >= 0 && start <= 0){ Random random=new Random(); int range; range = end - start + 1; System.out.println("aEnd > 0 && aStart < 0. 1range is: " + range); randomNumber=(random.nextInt(range))+(start); System.out.println("aEnd > 0 && aStart >= 0. invoked. 1randomNumber is: " + randomNumber); } else if(end > 0 && start > 0){ Random randomGenerator=new Random(); int range; range = end - start + 1; System.out.println("aEnd > 0 && aStart >= 0. invoked. 2range is: " + range); randomNumber = randomGenerator.nextInt(range) + start; System.out.println("aEnd > 0 && aStart >= 0. invoked. 2randomNumber is: " + randomNumber); } else if(end < 0 && start < 0){ Random randomGenerator=new Random(); int range; range = (start - end -1) * -1; System.out.println("aEnd <= 0 && aStart < 0. invoked. 
3range is: " + range); randomNumber = ((randomGenerator.nextInt(range)+ start)); System.out.println("aEnd <= 0 && aStart < 0. invoked. 3randomNumber is: " + randomNumber); } return randomNumber; } } Answer: That does look a little confusing. The example you looked at is for C, not Java. I would do this: int random = (int) Math.floor(Math.random() * (max - min + 1) ) + min; It works like this: Math.random() returns a double between 0 and 1 (although never actually equal to 1). The full range of values you want is (max - min + 1) (the +1 is because you probably want max to min inclusive), so this will scale the number over the correct number of integer values. We floor it (round down) because we want an int, and finally shift it upwards by adding min to put the numbers in the correct range. We also cast to an int since we don't want it as a double. EDIT: On further inspection, this is better; no casting and rounding of doubles: Random rand = new Random(); int random = rand.nextInt(max - min + 1) + min;
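The floor-and-shift formula from the answer is easy to sanity-check in any language. A throwaway Python sketch (the function name is mine), drawing many samples from an inclusive range that straddles zero:

```python
import random

def random_int_range(start, end):
    """Uniform integer in [start, end], mirroring nextInt(max - min + 1) + min."""
    if end < start:
        raise ValueError('start cannot exceed end')
    # random.random() * span is always >= 0, so int() truncation equals floor here
    return int(random.random() * (end - start + 1)) + start

samples = {random_int_range(-3, 2) for _ in range(10_000)}
print(sorted(samples))  # every value in [-3, 2] shows up, endpoints included
```

Note why the cast is safe: the product being truncated is always non-negative, so truncation toward zero and flooring agree, which is the same reason the Java `(int)` cast in the answer behaves correctly even for negative ranges.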
{ "domain": "codereview.stackexchange", "id": 3444, "tags": "java, performance, random" }
Is pointing a telescope at a random place a viable astronomical strategy?
Question: Recently I happened to be on the MAST portal, looking at jwst data. I happened to come across 2 interesting targets, “random place” and “another random place” This got me thinking. It’s almost impossible to know what the scientists who pointed jwst at a random place were thinking. However, it’s interesting to think about if this is something that is beneficial, or at least not largely detrimental. It also occurred to me that at some point someone would have to point a telescope at a random point in order to discover things. So, is pointing telescopes at random places a viable method of doing astronomy, and if so what are examples? Answer: Narrow field telescopes such as HST or JWST are basically never randomly pointed - the field of view and density of interesting objects is not high enough that this would be viable and such a proposal would never make it out of the Time Allocation Committee as a valid use of precious and expensive telescope time. Looking at page 586 ("ASTRO-13") of the NASA FY23 Budget Request we can see that Hubble and JWST are \$98.3 and \$187 million a year to operate (I took the FY24 numbers as a "steady state" for JWST). From a report to the Space Telescope User Committee, we can find that Hubble executes about 75 orbits (each ~95 minutes) of observations per week in recent Cycles, making the cost per orbit about \$25k or about \$16k/hour to operate. I don't have similar numbers for JWST and being out at L2 makes operations a little easier, but it's also newer and operations won't be as streamlined as they are for Hubble, but given the budgets, we can assume ~2x Hubble so \$32k/hour. Oversubscription rates vary by cycle, science program category and instrument but approx. people ask for four times as much time on Hubble as is available and the selection rate for proposals is about 20%. So 80% of the people who apply with detailed science and observing plans for what they want to do get nothing and no observing time. 
Given this oversubscription rate, if you propose to randomly point the telescope and have no idea what you're going to see and whether it will produce anything to justify the time spent, your proposal will get a very low grade and zero time awarded. Blind/random discovery is not the job of precision instruments such as HST or JWST; that's what sky surveys are for. These will tile a substantial fraction of the sky with an instrument that has a large field of view in a single exposure, usually in only 1 or maybe 2 filters and often for specific science goals such as discovering Near Earth Objects (e.g. ATLAS) or supernovae/explosive transients (e.g. ZTF or ASAS-SN) but can also be a general purpose survey with a variety of science goals (e.g. Sloan Digital Sky Survey (SDSS)). These surveys will find large numbers of new examples of known objects which weren't their primary science goal (e.g. new eclipsing binaries found in the OGLE survey for microlensing events, Udalski et al. 1994). Occasionally, entirely new classes of object will be found of which the most famous example is Hanny's Voorwerp found in Galaxy Zoo images. These new objects or a subset e.g. "we'll pick the 10 best eclipsing binary candidates from our survey for further study" can then be studied in detail using larger 4-8m telescopes or space assets like HST and JWST. This model of "small widefield telescope finding interesting objects for big telescopes" dates back to the original 48"/1.2m Samuel Oschin Schmidt and the Palomar Observatory Sky Survey at Palomar Observatory which surveyed the Northern Sky using photographic plates in two "filters" (emulsions) in part to find and feed interesting targets to the 200"/5m Hale telescope. This was replicated in the Southern Hemisphere with the near-identical UK Schmidt and ESO Schmidt Telescopes and has continued through digital CCD surveys such as SDSS, PanSTARRS and SkyMapper as well as the ATLAS, ZTF and ASAS-SN previously mentioned. 
This model of "wide and shallow survey first, narrow and deep follow-up later" is more efficient in use of the telescope time and produces a better and higher science return per hour/dollar. (The closest HST comes to random pointings is the Pure Parallel mode where you can use a different instrument to observe an adjacent field close to where the main prime program and instrument is observing. The pure parallel mode gives you no control over where you look and for how long; that is totally set by the prime program and given the many pointing restrictions, the extremely small amount of sky covered by Hubble per year and the types of targets Hubble tends to observe, this is very far from "random" pointings)
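The per-orbit and per-hour cost figures quoted earlier follow from simple arithmetic. A quick sketch (assuming a 52-week observing year; the budget, orbit rate, and orbit length are the numbers quoted in the answer):

```python
budget_per_year = 98.3e6   # USD, quoted FY23 Hubble operations budget
orbits_per_week = 75       # executed science orbits in recent cycles
orbit_minutes = 95         # duration of one Hubble orbit

orbits_per_year = orbits_per_week * 52
cost_per_orbit = budget_per_year / orbits_per_year
cost_per_hour = cost_per_orbit / (orbit_minutes / 60)

print(round(cost_per_orbit))  # about 25k USD per orbit
print(round(cost_per_hour))   # about 16k USD per hour
```

This back-of-envelope number is the one a Time Allocation Committee implicitly weighs against the expected science return of a proposal.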
{ "domain": "astronomy.stackexchange", "id": 6656, "tags": "observational-astronomy, history, james-webb-space-telescope" }
How can I verify my engineering work experience without having worked under a PE?
Question: First a little background, I've been working in Michigan as an engineer since getting my BSME in 1998 (I graduated with my MSME in 2010). I've worked for a number of companies in a variety of industries as an engineer, but never under the direct supervision of a licensed PE since all of the work I've done has been considered under "industrial exemption". Now however, I'm becoming less interested in simply being an employee and more interested in working for myself and being a PE would be advantageous. I'm reasonably sure I can pass the exam (with copious amounts of studying, naturally), but my concern is getting my experience verified. The PE's that I know, I've never worked with, either as a subordinate or as a peer. Has anyone had difficulty finding enough PE's to verify their experience when getting licensed? If so, were you successful in getting your experience verified? How do you recommend finding PE's to verify engineering work experience? Answer: How can I verify my engineering work experience without having worked under a PE? Many states offer the ability to substitute sufficient / substantial work experience for directly supervised work experience. The key difference here is that the obligation is upon the applicant to demonstrate how the in lieu of work still meets or preferably exceeds the expectations of the professional board with regards to the quality of the work produced. In other words, because you can't produce a PE to vouch for direct supervision then you have to demonstrate why the work ought to qualify. From what I can tell of Michigan's work experience requirements, you ought to be able to substitute sufficient work experience. Work Experience Requirements All applicants must provide verification of at least 4 years of acceptable engineering work experience obtained after having received an acceptable bachelors degree. Work experience must be verified by five persons, three of whom must be licensed professional engineers. 
The devil is obviously in the details, and it would behoove you to contact that board for exact details. But at first glance, it would appear that you'll be fine. To be a little more prescriptive in what you specifically need to do: Contact the PE's that you've worked with before. Find as many that you can that are willing to vouch for the quality of your work. It's important to keep in mind that they are vouching more for your character and their perception of you following a particular process as opposed to direct supervision. Some states call these "character reference" recommendations. Michigan expects at least 3 PE's to vouch for your work, you would be better off getting more than that if you can. There's no harm in submitting more than 5 verifications as well, especially if they all hold PE's. Start documenting the work experience that you've already completed. Some states count a Masters of Engineering as an equivalent to one year of work experience. In that documentation, call out the processes that you followed and make sure to note the review portion of those processes. The state board is interested in work that demonstrates the full engineering process showing the beginning of the work through till the end and including the review to potentially improve the process itself. Finally, really start studying the law and regulations surrounding the particular field of engineering that you wish to advertise your services within. And let me be straight-up brutally honest for a moment: Sealing a document merely means that you're willing to accept liability for a project should something that was reasonably foreseen go wrong. That means you can personally be civilly or criminally held liable should there be a significant problem on a project. Make sure you're comfortable with reasonable standards of care for the projects you want to engage so you can demonstrate due diligence in your offerings.
{ "domain": "engineering.stackexchange", "id": 752, "tags": "licensure" }
Configuration space which is not a manifold
Question: I am currently reading the book Mathematics in physics by Michael Stone and Paul Goldbart. In chapter 11, page 421, the authors say that "Except in pathological cases, the configuration space M of a mechanical system is a manifold." Are the authors correct? What examples are there of these pathological cases? Are there any physically relevant cases where the configuration space in classical mechanics is not a manifold? And why does configuration space have to be a manifold - what about variables taking on discrete values? Answer: Consider a pair of points $P$ and $Q$, say of mass $m>0$, and suppose that they are constrained as follows. $Q$ stays on the $z$ axis and can freely move along it up to restrictions said below. $P$ can freely rotate around $z$ and is connected to $Q$ by means of an ideal rod (zero mass) of length $\ell$, and it is also connected to the origin $O$ by means of another ideal rod of length $\ell$. This is a quite standard idealized mechanical system (I could use it as a starting point for an exercise of my undergraduate course of Analytic Mechanics). A suitable coordinate system to describe the system is apparently given by the signed distance $Z\in (-2\ell, 2\ell)$ of $Q$ from $O$ along $z$ and an angular coordinate $\theta\in (-\pi,\pi]$ describing the position of $P$ around $z$ in the plane $x,y$. Well you see that, discarding the extreme points $Z= \pm 2\ell$ (they could be included with a more precise discussion, see the ADDENDUM) a disaster shows up when $Z=0$. Concerning the set $Z\in (-2\ell, 0)$ and $Z\in (0, +2\ell)$, the space of configurations is diffeomorphic to $\mathbb R \times \mathbb S^1$. When $Z=0$ the configuration space (at fixed $Z$) instead of being $\mathbb S^1$, it becomes $\mathbb S^2$ and another set of coordinates should be used. 
There are two possibilities: either declaring that the configuration space is diffeomorphic to $\mathbb R \times \mathbb S^1$, deliberately ignoring the problem at $Z=0$, or declaring that it is not a ($2$-dimensional) manifold (because each point in the subset at $Z=0$ has no neighborhood diffeomorphic to $\mathbb R^2$), but is made of the union of three manifolds respectively diffeomorphic to $\mathbb R \times \mathbb S^1$, $\mathbb S^2$, and $\mathbb R \times \mathbb S^1$. In practice, with an imprecise but pictorial description (see the ADDENDUM for a precise description), it is the union of a cylinder and a sphere inside the cylinder, tangent to the cylinder at the equator. This is not a manifold because it is not locally homeomorphic to $\mathbb R^n$ for some fixed $n$ ($2$ in our case). Generally speaking, the space of configurations is almost always a manifold because it is obtained by imposing constraints on a set of $N$ matter points initially described in $\mathbb R^{3N}$. Constraints are determined by a family of $c< 3N$ real-valued functions $f_k = f_k(t,\vec{x}_1,\cdots, \vec{x}_N)$ by imposing that every admissible configuration $\vec{x}_1,\cdots, \vec{x}_N$ at any time $t$ must satisfy $$f_k(t,\vec{x}_1,\cdots, \vec{x}_N) =0\:, \quad k=1,\ldots, c\:. \tag{1}$$ If the functions $f_k$ are smooth and functionally independent, the theorem of regular values proves that (1) defines an embedded submanifold of $\mathbb R \times \mathbb R^{3N}$ of dimension $1+ 3N-c$. Fixing time $t \in \mathbb R$, we have an embedded submanifold of $\mathbb R^{3N}$, of dimension $3N-c$, called the space of configurations. These two conditions on the constraints may fail at some points, and this sometimes happens in particular when dealing with constraints like rigidity in some geometrically involved way. ADDENDUM. 
If $X,Y, Z$ denote the coordinate set of $Q$ and $x,y,z$ the coordinate set of $P$, both in the whole $\mathbb R^3$ space, the four constraints, corresponding to the set of conditions 1 and 2 above, read $$f(x,y,z, X,Y,Z)=0\:, \quad g(x,y,z,X,Y, Z)=0\:, \quad h(x,y,z, X,Y,Z)=0, \quad i(x,y,z, X,Y,Z)=0\tag{2}$$ where $$f(x,y,z, X,Y, Z) := x^2+y^2+z^2 -\ell^2\:, \quad g(x,y,z, X,Y,Z) := x^2+y^2+(z-Z)^2 -\ell^2\:, \quad h(x,y,z, X,Y,Z)= X\:, \quad i(x,y,z, X,Y,Z)= Y\:.$$ These constraints are functionally independent if, by definition, their differentials are linearly independent on the set of points $(x,y,z,X,Y, Z)$ where all conditions (2) are valid. In this case the theorem of regular values implies that this set is an embedded submanifold of $\mathbb R^3 \times \mathbb R^3$ with dimension $6-4 =2$. It is clear by direct computation that the four differentials fail to be linearly independent exactly where $Z=0$ and $z=0$: there $df=dg$, since $df-dg = 2Z\,dz + 2(z-Z)\,dZ$ vanishes precisely at those points. This is the problem with the said system of constraints. A precise description of the configuration space when including also the points $Z= \pm 2\ell$ is the union of two $2$-spheres in $\mathbb R^4$ that have an equator in common.
{ "domain": "physics.stackexchange", "id": 45884, "tags": "classical-mechanics, differential-geometry, mathematical-physics, topology" }
Sort three numbers
Question: So, I wrote this function to sort three numbers in Julia. Any feedback on readability, performance, and Julian-ness would be appreciated. function sort_asc(a, b, c) # defined in function so as not to overwrite Base.minmax minmax(x, y) = ifelse(x < y, (x, y), (y, x)) l1, h1 = minmax(a, b) lo, h2 = minmax(l1, c) md, hi = minmax(h1, h2) return lo, md, hi end Answer: As it turns out function sort_asc(a, b, c) l1, h1 = minmax(a, b) lo, h2 = minmax(l1, c) md, hi = minmax(h1, h2) return lo, md, hi end results in the exact same native assembly instructions on my machine (compared to the branchless version). So, given its greater simplicity, I think this is the best version so far. Here are comparisons to other implementations function sort3(a, b, c) l1, h1 = minmax(a, b) lo, h2 = minmax(l1, c) md, hi = minmax(h1, h2) return lo, md, hi end @inline minmax_nb(x, y) = ifelse(isless(x, y), (x, y), (y, x)) function sort3_nb(a, b, c) l1, h1 = minmax_nb(a, b) lo, h2 = minmax_nb(l1, c) md, hi = minmax_nb(h1, h2) return lo, md, hi end function sort3_koen(a, b, c) lo, md = minmax(a, b) if md > c hi = md lo, md = minmax(lo, c) else hi = c end return lo, md, hi end Here is the benchmark code using BenchmarkTools function map_sort3!(f::F, out, data) where F @inbounds for i in eachindex(out, data) a, b, c = data[i] out[i] = f(a, b, c) end end for f in [sort3, sort3_nb, sort3_koen] print(rpad(f, 12), ":") @btime map_sort3!($f, out, data) setup=begin data = [ntuple(i -> rand(Int), 3) for _ in 1:10_000] out = similar(data) end evals=1 end And here are the results on my machine: sort3 : 17.286 μs (0 allocations: 0 bytes) sort3_nb : 17.624 μs (0 allocations: 0 bytes) sort3_koen : 21.776 μs (0 allocations: 0 bytes)
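The same three-comparison min/max network translates directly to other languages. A quick Python rendering (names are mine), checked against the library sort:

```python
import random

def sort3(a, b, c):
    """Sort three values with three compare-exchanges, as in the Julia version."""
    def minmax(x, y):
        return (x, y) if x < y else (y, x)
    l1, h1 = minmax(a, b)   # l1 <= h1
    lo, h2 = minmax(l1, c)  # lo is the overall minimum
    md, hi = minmax(h1, h2) # hi is the overall maximum, md is what's left
    return lo, md, hi

# spot-check the network against the built-in sort on random triples
for _ in range(1000):
    t = tuple(random.randint(-50, 50) for _ in range(3))
    assert sort3(*t) == tuple(sorted(t))
print(sort3(3, 1, 2))  # (1, 2, 3)
```

Three comparisons is optimal for three elements, which is why the network form and the branchy form end up with similar performance once the compiler is done with them.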
{ "domain": "codereview.stackexchange", "id": 41825, "tags": "algorithm, sorting, julia" }
Does the collision of a neutron and anti-neutron produce energy?
Question: Following up on this post: Anti-Particle of Neutron, one very important part of it is unanswered. If a neutron collides with an anti-neutron, will it violently explode in a flash of energy? The Wikipedia article on it also doesn't shed light on this. We know a proton will be attracted to its anti-particle and create energy, but I suppose there is nothing (apart from a very weak gravity) that will attract a neutron to its anti-neutron. So if I took a gas made up of neutrons and another made of anti-neutrons and mixed them up, would nothing happen? Would it depend on the density? Also, what about a neutron star and an anti-neutron star? I suppose they would revolve around each other due to gravity, but there would come a time when they collide. Would the collision be the same as or different from the collision of two neutron stars? Answer: Yes, they will annihilate. It will happen more slowly than in a proton-antiproton gas mix, because they have no charge, so nothing attracts the neutrons to the antineutrons. Annihilation does not convert matter to energy, it converts particle-antiparticle pairs to photons. Energy is not matter; it is a number that we assign to particles. A mix of neutron and antineutron gas will create photons, neutrinos and antineutrinos. Neutrinos and antineutrinos appear because, besides the annihilation, other processes also happen. Neutrons and antineutrons are not elementary particles; each is made of 3 quarks or antiquarks, and these annihilate. The remaining quark-antiquark pairs build pions; some of them decay to muons and antimuons before they annihilate to neutrinos and antineutrinos. The muons decay to electrons, positrons and (anti)neutrinos. The electrons and the positrons annihilate to photons. If the gases are really big or you have much time to watch them, then the neutrinos and antineutrinos will also annihilate, and the result will be only photons. But this would require sizes and times comparable to the visible Universe. 
Currently there is no experimental technology to create a stable antineutron gas. Even creating a stable neutron gas is hard, because neutrons have no charge, so there is no easy way to trap them (they have a small magnetic moment, so very slow neutrons can be trapped by very strong magnetic fields). The collision of a neutron star and an antineutron star would initiate a terribly strong annihilation. The result would be similar to a supernova with an extreme gamma-ray flash, though it is hard to say exactly what it would look like. In any case, no significant amount of antimatter is known to exist in the Universe.
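As a rough back-of-the-envelope illustration of the energy scale involved (my own sketch, not part of the original answer), the rest-mass energy released per fully annihilated neutron-antineutron pair is 2 m_n c^2, assuming the standard CODATA neutron mass of about 939.565 MeV/c^2:

```python
# Back-of-the-envelope sketch (not from the original answer): the rest-mass
# energy released when a neutron-antineutron pair fully annihilates.
M_NEUTRON_MEV = 939.565        # neutron rest mass in MeV/c^2 (CODATA value)
MEV_TO_JOULE = 1.602176634e-13

def annihilation_energy_mev(pairs=1):
    """Rest-mass energy (MeV) released by `pairs` n-nbar annihilations."""
    return 2 * M_NEUTRON_MEV * pairs

e = annihilation_energy_mev()
print(f"one pair: {e:.2f} MeV = {e * MEV_TO_JOULE:.3e} J")
```

Roughly 1879 MeV per pair, shared among the pions, photons and neutrinos described above.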
{ "domain": "physics.stackexchange", "id": 84315, "tags": "antimatter, neutrons, neutron-stars" }
cob_bringup unrecognized as a package
Question: I am running Electric on Ubuntu 10.04. I installed ros-electric-care-o-bot according to the instructions. Then, following the Care-O-bot instructions for simulation, I attempt to "rosdep install cob_bringup". I get the error below even though the cob_bringup stack is in my electric directory. rosdep install cob_bringup Warning: could not identify ['cob_bringup'] as a package Usage: rosdep [options] <command> <args> Originally posted by Paul0nc on ROS Answers with karma: 271 on 2011-09-14 Post score: 0 Answer: I just downloaded and installed the Care-O-bot stack to test this problem. I was able to get the rosdep to work correctly with the following commands: roscd cob_apps rosdep install cob_bringup Just a note, the rosdep for this stack only contains one dependency, and it appears that I already had it on my machine. If this turns out to be the case for you, don't panic if you see the "No packages to install" result. I hope this helps. Originally posted by DimitriProsser with karma: 11163 on 2011-09-14 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 6690, "tags": "ros, rosdep, care-o-bot" }
Number of words of a given length in a regular language
Question: Is there an algebraic characterization of the number of words of a given length in a regular language? Wikipedia states a result somewhat imprecisely: For any regular language $L$ there exist constants $\lambda_1,\,\ldots,\,\lambda_k$ and polynomials $p_1(x),\,\ldots,\,p_k(x)$ such that for every $n$ the number $s_L(n)$ of words of length $n$ in $L$ satisfies the equation $s_L(n)=p_1(n)\lambda_1^n+\dotsb+p_k(n)\lambda_k^n$. It's not stated what space the $\lambda$'s live in ($\mathbb{C}$, I presume) and whether the function is required to have nonnegative integer values over all of $\mathbb{N}$. I would like a precise statement, and a sketch or reference for the proof. Bonus question: is the converse true, i.e. given a function of this form, is there always a regular language whose number of words per length is equal to this function? This question generalizes Number of words in the regular language $(00)^*$ Answer: Given a regular language $L$, consider some DFA accepting $L$, let $A$ be its transfer matrix ($A_{ij}$ is the number of edges leading from state $i$ to state $j$), let $x$ be the characteristic vector of the initial state, and let $y$ be the characteristic vector of the accepting states. Then $$ s_L(n) = x^T A^n y. 
$$ Jordan's theorem states that over the complex numbers, $A$ is similar to a matrix with blocks of one of the forms $$ \begin{pmatrix} \lambda \end{pmatrix}, \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}, \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix}, \begin{pmatrix} \lambda & 1 & 0 & 0 \\ 0 & \lambda & 1 & 0 \\ 0 & 0 & \lambda & 1 \\ 0 & 0 & 0 & \lambda \end{pmatrix}, \ldots $$ If $\lambda \neq 0$, then the $n$th powers of these blocks are $$ \begin{pmatrix} \lambda^n \end{pmatrix}, \begin{pmatrix} \lambda^n & n\lambda^{n-1} \\ 0 & \lambda^n \end{pmatrix}, \begin{pmatrix} \lambda^n & n\lambda^{n-1} & \binom{n}{2} \lambda^{n-2} \\ 0 & \lambda^n & n\lambda^{n-1} \\ 0 & 0 & \lambda^n \end{pmatrix}, \begin{pmatrix} \lambda^n & n\lambda^{n-1} & \binom{n}{2}\lambda^{n-2} & \binom{n}{3}\lambda^{n-3} \\ 0 & \lambda^n & n\lambda^{n-1} & \binom{n}{2}\lambda^{n-2} \\ 0 & 0 & \lambda^n & n\lambda^{n-1} \\ 0 & 0 & 0 & \lambda^n \end{pmatrix}, \ldots $$ Here's how we got to these formulas: write the block as $B = \lambda + N$. Successive powers of $N$ are successive secondary diagonals of the matrix. Using the binomial theorem (using the fact that $\lambda$ commutes with $N$), $$ B^n = (\lambda + N)^n = \lambda^n + n \lambda^{n-1} N + \binom{n}{2} \lambda^{n-2} N^2 + \cdots. 
$$ When $\lambda = 0$, the block is nilpotent, and we get the following matrices (the notation $[n = k]$ is $1$ if $n=k$ and $0$ otherwise): $$ \begin{pmatrix} [n=0] \end{pmatrix}, \begin{pmatrix} [n=0] & [n=1] \\ 0 & [n=0] \end{pmatrix}, \begin{pmatrix} [n=0] & [n=1] & [n=2] \\ 0 & [n=0] & [n=1] \\ 0 & 0 & [n=0] \end{pmatrix}, \begin{pmatrix} [n=0] & [n=1] & [n=2] & [n=3] \\ 0 & [n=0] & [n=1] & [n=2] \\ 0 & 0 & [n=0] & [n=1] \\ 0 & 0 & 0 & [n=0] \end{pmatrix} $$ Summarizing, every entry in $A^n$ is either of the form $\binom{n}{k} \lambda^{n-k}$ or of the form $[n=k]$, and we deduce that $$ s_L(n) = \sum_i p_i(n) \lambda_i^n + \sum_j c_j [n=j], $$ for some complex $\lambda_i,c_j$ and complex polynomials $p_i$. In particular, for large enough $n$, $$ s_L(n) = \sum_i p_i(n) \lambda_i^n. $$ This is the precise statement of the result. We can go on and obtain asymptotic information about $s_L(n)$, but this is surprisingly non-trivial. If there is a unique $\lambda_i$ of largest magnitude, say $\lambda_1$, then $$ s_L(n) = p_1(n) \lambda_1^n (1 + o(1)). $$ Things get more complicated when there are several $\lambda$s of largest magnitude. It so happens that their angle must be rational (i.e. up to magnitude, they are roots of unity). If the LCM of the denominators is $d$, then the asymptotics of $s_L$ will vary according to the remainder of $n$ modulo $d$. For some of these remainders, all $\lambda$s of largest magnitude cancel, and then the asymptotics "drops", and we have to iterate this procedure. The interested reader can check the details in Flajolet and Sedgewick's Analytic Combinatorics, Theorem V.3. They prove that for some $d$, integers $p_0,\ldots,p_{d-1}$ and reals $\lambda_0,\ldots,\lambda_{d-1}$, $$ s_L(n) = n^{p_{n\pmod{d}}} \lambda_{n\pmod{d}}^n (1 + o(1)). $$
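To make the transfer-matrix formula $s_L(n) = x^T A^n y$ concrete, here is a small sketch (my own example, not from the answer) for the language of binary strings with no two consecutive 1s; its counts follow a shifted Fibonacci sequence:

```python
# Sketch of s_L(n) = x^T A^n y for the language of binary strings with no
# two consecutive 1s (an illustrative example DFA, not from the answer).

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(A, n):
    result = [[int(i == j) for j in range(len(A))] for i in range(len(A))]
    for _ in range(n):
        result = mat_mul(result, A)
    return result

# Transfer matrix: state 0 = "last symbol was not 1", state 1 = "last symbol was 1".
A = [[1, 1],   # from state 0: symbol 0 -> state 0, symbol 1 -> state 1
     [1, 0]]   # from state 1: symbol 0 -> state 0, symbol 1 -> dead state (dropped)
x = [1, 0]     # start in state 0
y = [1, 1]     # both live states are accepting

def s_L(n):
    An = mat_pow(A, n)
    return sum(x[i] * An[i][j] * y[j] for i in range(2) for j in range(2))

print([s_L(n) for n in range(6)])  # shifted Fibonacci: [1, 2, 3, 5, 8, 13]
```

The eigenvalues of this $A$ are $(1 \pm \sqrt{5})/2$, so here $s_L(n)$ grows like $\varphi^n$, matching the $p_1(n)\lambda_1^n$ asymptotics above.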
{ "domain": "cs.stackexchange", "id": 1780, "tags": "formal-languages, regular-languages, word-combinatorics" }
Why is a lack of oxygen fatal to cells?
Question: In animals, temporary anaerobic respiration leads to the breakdown of the pyruvate formed by glycolysis into lactate. The buildup of lactate in the bloodstream is accompanied by a large number of protons, causing lactic acidosis, which is detrimental to the health of the organism. This is one of the main suggestions I have come across for why a lack of oxygen is fatal to cells; however, the LD50 values for lactic acid as referenced by the COSHH MSDS seem awfully high (even if the route is by ingestion rather than directly into the bloodstream) for this to be a cause of cell death: Toxicological Data on Ingredients: ORAL (LD50): Acute: 3543 mg/kg [Rat (Lactic Acid (CAS no. 50-21-5))]. 4875 mg/kg [Mouse (Lactic Acid (CAS no. 50-21-5))]. I also wonder if this is a larger problem for an organism as a whole rather than on a cellular level. The alternative, I suppose, is that glycolysis alone does not provide sufficient ATP for vital cellular processes to occur. If this is the case, which ATP-requiring processes are most vital for the short-term survival of a cell? Answer: Here's an illustrated example in neurons: ATP, of course, is generated by aerobic respiration. The critical biochemical reaction in the brain that is halted due to lack of ATP (and therefore O2) is the glutamine synthetase reaction, which is very important for the metabolism and excretion of nitrogenous wastes: The body uses this reaction to dump excess ammonia (which is a metabolic waste product) on glutamate to make glutamine. The glutamine is then transported via the circulatory system to the kidney, where the terminal amino group is hydrolyzed by glutaminase, and the free ammonium ion is excreted in the urine. Therefore, as you'd expect, under hypoxic conditions in the brain, excess ammonia builds up, which is very toxic to the cells. Neurons are also highly metabolically active, which means they generate more waste products. 
A buildup of nitrogenous waste products in the cell (and bloodstream) can be potentially fatal due to its effects on pH (it screws up enzymes and a whole slew of biochemical reactions). In addition, the buildup of ammonia will cause glutamate dehydrogenase to convert ammonia + aKG to glutamate, which depletes the brain of alpha-ketoglutarate (a key intermediate in the TCA cycle). This basically creates a logjam in the central metabolic cycle which further depletes the cell of energy. This is just one example of many. Of course, there are many, many other critical metabolic processes that require ATP (e.g. the Na+/K+ ATPase pump that regulates neuronal firing and osmotic pressure), but nitrogen metabolism was the first that came to mind :)
{ "domain": "biology.stackexchange", "id": 114, "tags": "cellular-respiration, anaerobic-respiration" }
Summing some datum for requests in each domain that fall in a date range
Question: I have a LINQ query which works fine. I am however very interested to understand if this can be written in a more optimal way... var query = (from r in Results.All.AsEnumerable() where r.RequestType.Id == Id && r.DateFrom >= sDate && r.DateTo <= eDate group r by new {r.Data_1} into g select new {data = g.Key.Data_1, sum = g.Sum(s => int.Parse(s.Data_2))}).ToList(); I am trying to sum Data_2 (stored as a string) by a unique list of Data_1, all within a given date range. I have provided some sample data below: For example: facebook.com = 51, m.facebook.com = 94 etc. Answer: This is a case where a foreach loop might work better. Your LINQ query iterates over your data several times, whereas a foreach loop with a dictionary should be able to accomplish this in one pass. Something like this: Dictionary<string,int> query = new Dictionary<string, int>(); foreach(var r in Results) { if(r.RequestType.Id == Id && r.DateFrom >= sDate && r.DateTo <= eDate) { if(query.ContainsKey(r.Data_1)) { query[r.Data_1] += int.Parse(r.Data_2); } else { query.Add(r.Data_1, int.Parse(r.Data_2)); } } } While there is a Dictionary lookup for each item, it should be more than offset by calculating the sum on the fly rather than at the end.
{ "domain": "codereview.stackexchange", "id": 10202, "tags": "c#, linq, datetime" }
Generic Timing Class
Question: I have written a class which can time functions, and I'd like to have it reviewed. I'm interested in everything (better naming and commenting, accuracy of measurement, usability, structure, use of lambda, etc.). import java.util.ArrayList; import java.util.Collections; import java.util.List; import java.util.function.Consumer; import java.util.function.Function; import java.util.function.IntFunction; /** * A timing class. */ public class Timing { private List<TimingObject> functionsToTime; private int amount = 1_000_000; public Timing() { functionsToTime = new ArrayList<>(); } /** * adds a new function which will be timed. * * @param <R> return type of functionToTime (irrelevant) * @param <T> input type of functionToTime (same as return type of * inputConverter) * @param functionToTime a function expecting input of type T, returning * output of any type (R) * @param inputConverter converts the loop variable to type T and passes it * to functionToTime * @param name name of the function (used for output) */ public <R, T> void add(Function<R, T> functionToTime, IntFunction<T> inputConverter, String name) { functionsToTime.add(new TimingObject(functionToTime, inputConverter, name)); } /** * sets how often the given functions should be run when timed. * * @param amount amount */ public void setAmount(int amount) { this.amount = amount; } /** * performs the actual timing for all given functions. */ public void time() { for (TimingObject timingObject : functionsToTime) { time(timingObject); } } /** * passes the result of the timing to the given consumer. 
* * @param consumer consumer * @param sort how to sort the result */ public void output(Consumer<String> consumer, Sort sort) { switch (sort) { case ASC: Collections.sort(functionsToTime, (TimingObject t1, TimingObject t2) -> (int) (t1.timeTaken - t2.timeTaken)); break; case DESC: Collections.sort(functionsToTime, (TimingObject t1, TimingObject t2) -> (int) (t2.timeTaken - t1.timeTaken)); break; case NO: break; default: break; } consumer.accept(Formater.output(functionsToTime)); } /** * times the function in the given object. * * @param timingObject timingObject */ private void time(TimingObject timingObject) { runXTimes(timingObject, amount / 100); // warm up long startTime = System.nanoTime(); runXTimes(timingObject, amount); // actual timing timingObject.timeTaken = System.nanoTime() - startTime; } /** * runs the function in the given object x amount of times. * * @param timingObject timingObject * @param amount amount */ private void runXTimes(TimingObject timingObject, int amount) { for (int i = 0; i < amount; i++) { timingObject.function.apply(timingObject.inputConverter.apply(i)); } } protected class TimingObject { private Function function; private IntFunction inputConverter; protected String name; protected long timeTaken; public TimingObject(Function function, IntFunction inputConverter, String name) { this.function = function; this.inputConverter = inputConverter; this.name = name; } } public static enum Sort { ASC, DESC, NO } } The formater (not reusable or anything): import java.util.List; public class Formater { /** * returns name and timeTaken of the given TimingObjects. * * The times will be aligned properly. 
* * @param functionsToTime functionsToTime * @return string */ public static String output(List<Timing.TimingObject> functionsToTime) { int maxNameLength = getMaxNameLength(functionsToTime); String spaces = getSpaces(maxNameLength) + 1; StringBuilder out = new StringBuilder(); for (Timing.TimingObject timingObject : functionsToTime) { out.append(timingObject.name); out.append(spaces.substring(0, spaces.length() - timingObject.name.length())); // align out.append(timingObject.timeTaken); out.append("\n"); } return out.toString(); } /** * returns n spaces. * * @param n number of spaces * @return n spaces */ private static String getSpaces(int n) { StringBuilder spaces = new StringBuilder(n); for (int i = 0; i < n; i++) { spaces.append(" "); } return spaces.toString(); } /** * returns the length of the longest function name. * * @return max length */ private static int getMaxNameLength(List<Timing.TimingObject> functionsToTime) { int max = 0; for (Timing.TimingObject timingObject : functionsToTime) { int currentLength = timingObject.name.length(); if (timingObject.name.length() > max) { max = currentLength; } } return max; } } And how it can be used: import java.util.function.IntFunction; public class Main { public static void main(String[] args) { Timing t = new Timing(); IntFunction<String> intToString = (int i) -> String.valueOf(i) + "test"; IntFunction<Integer> intToInt = (int i) -> i; // time function string->string: t.add((String s) -> functionToTimeString(s), intToString, "s + s"); t.add((String s) -> functionToTimeString2(s), intToString, "s + s + s"); // [...] 
// we can also time int->int functions at the same time: t.add((Integer i) -> functionToTimeInt(i), intToInt, "i + i"); t.time(); // output to stdo t.output((String s) -> System.out.println(s), Timing.Sort.DESC); } private static String functionToTimeString(String s) { return s + s; } private static String functionToTimeString2(String s) { return s + s + s; } private static int functionToTimeInt(int i) { return i + i; } } Output: s + s + s 121048384 s + s 109057922 i + i 55232562 Answer: I really like the concept of this system, but it has some.... issues. The Code Style is pleasant. I am sure there are some nit-picks in there, but, it is certainly clean enough to make reading it easy. The issues I have are more with what you are timing, and some use cases that I think would break things. First though, what is good is the idea of having the two input functions, the one to time, and the other to generate the timing inputs. The problem is that your timing function also times the generation time. The other issue is that you reuse inputs for both the warmup, and the actual timing process. Timing the generation is... awkward because you don't know whether to blame the time on the actual function, or the generator. The instance reuse is also awkward because I know instances where rerunning certain values may cause failures, or unreasonably-fast execution. What I would recommend is that, instead of doing what you do with the warmup, you change your reporting system, and you time each function call individually, then throw out the 10% fastest runs, and the 10% slowest runs, then average the remainder..... (perhaps reporting the statistics like the 95th percentile as well, etc.). 
I would probably have a timing function like: private long[] timeRuns(T[] input) { long[] times = new long[input.length]; for (int i = 0; i < input.length; i++) { long start = System.nanoTime(); timingObject.function.apply(input[i]); times[i] = System.nanoTime() - start; } return times; } With the above function, I would probably split the input into chunks of, say, 1000 and call timeRuns for each chunk. That way you can populate a steady stream of data into chunks, time the chunks, and move on. You can collect all the individual times outside the timing loop, and then statistically analyze them separately. This way, you won't need warmup runs, as you will discard the slow ones. Slow runs can also be impacted by garbage collection and other factors, so discarding the slow runs makes sense anyway. You also remove the input-value generation and the duplicate processing by doing this. Even as it stands, I can see that this is a useful tool. I think the value can be improved by timing the right things, and being smarter about the reporting.
{ "domain": "codereview.stackexchange", "id": 9816, "tags": "java, generics, lambda" }
Validating username, password, and email in PHP
Question: Let's pretend the following: <?php // Functions to validate/sanitize user input function validateUsername() { // If accepted, return true, else return false } function validatePassword() { // If accepted, return true, else return false } function validateEmail() { // If accepted, return true, else return false } This is how I handle user input: // Getting all user input $values = $_POST['values']; $error = false; if (!validateUsername($username) && $error === false) { $errorMessage = "Username can't contain special characters"; $error = true; } if (!validatePassword($password) && $error === false) { $errorMessage = "Password isn't secure enough"; $error = true; } if (!validateEmail($email) && $error === false) { $errorMessage = "Email is not correctly formatted"; $error = true; } if ($error === true) { echo $errorMessage; } else { // Do something } ?> But I'm sure there is a better approach. What is the best (or a good) way to handle user input messages/errors? Also I've read a tutorial (which unfortunately I can't find anymore) where it was recommended to give the users hints and tips about their input, instead of giving them a big bold red warning. For example to accept not only 1234 AB (Dutch postcodes), but also 1234ab, 1234AB and 1234 ab and let the script convert it to the official notation 1234 AB. Answer: Maybe the class below will help you. Put this Validation class in a file named Validation.php. <?php /** * This class will provide server side validation for different rules with custom * provided message for respective rule. * * @author: Alankar More. 
*/ class Validation { /** * Posted values by the user * * @var array */ protected static $_values; /** * Rules set for validation * * @var array */ protected static $_rules; /** * Error messages * * @var array */ protected static $_messages; /** * To send response * * @var array */ protected static $_response = array(); /** * For storing HTMl objects * * @var array */ protected static $_elements; /** * Html object * * @var string */ protected static $_inputElement; /** * Value of Html object * * @var mixed (string|boolean|integer|double|float) */ protected static $_elementValue; /** * Name of validation rule * * @var string */ protected static $_validationRule; /** * Value of validation rule * * @var mixed (string|boolean|integer|double|float) */ protected static $_ruleValue; /** * Initializing class * * @param array $inputArray * @param array $values */ public static function _initialize(array $inputArray, array $values) { self::$_values = $values; self::$_response = array(); self::generateArrays($inputArray); return self::applyValidation(); } /** * Separating rules and values * * @param array $input */ public static function generateArrays(array $input) { self::$_messages = $input['messages']; self::$_rules = $input['rules']; } /** * Applying validation for the form values * */ public static function applyValidation() { foreach (self::$_rules as $rk => $rv) { $_element = self::$_rules[$rk]; if (is_array($_element)) { foreach ($_element as $key => $ruleValue) { if (!self::$_elements[$rk]['inValid']) { $method = "_" . $key; self::$_inputElement = $rk; self::$_elementValue = self::$_values[$rk]; self::$_validationRule = $key; self::$_ruleValue = $ruleValue; self::$method(); } } } } if (count(self::$_response) == 0) { self::$_response['valid'] = true; } return self::$_response; } /** * Method to check wheather the input element holds the value. * If not then assingn message which is set by the user. 
* */ protected static function _required() { if (self::$_ruleValue) { if (trim(self::$_elementValue) == NULL && strlen(self::$_elementValue) == 0) { self::setErrorMessage("Field Required"); self::setInvalidFlag(true); } else { self::setInvalidFlag(false); } } } /** * Maximum length of input * */ protected static function _maxLength() { if (self::$_ruleValue) { if (strlen(trim(self::$_elementValue)) > self::$_ruleValue) { self::setErrorMessage("Enter at most " . self::$_ruleValue . " characters only"); self::setInvalidFlag(true); } else { self::setInvalidFlag(false); } } } /** * Minimum length of input * */ protected static function _minLength() { if (self::$_ruleValue) { if (self::$_ruleValue > strlen(trim(self::$_elementValue))) { self::setErrorMessage("Enter at least " . self::$_ruleValue . " characters "); self::setInvalidFlag(true); } else { self::setInvalidFlag(false); } } } /** * Allow numbers only * */ protected static function _number() { if (self::$_ruleValue) { $str = filter_var(trim(self::$_elementValue), FILTER_SANITIZE_NUMBER_INT); if (!preg_match('/[0-9]/', $str)) { self:: setErrorMessage("Enter numbers only"); self::setInvalidFlag(true); } else { self::setInvalidFlag(false); } } } /** * Allow alphabets only * */ protected static function _alphabetsOnly() { if (self::$_ruleValue) { $str = filter_var(trim(self::$_elementValue), FILTER_SANITIZE_STRING); if (!preg_match('/[a-zA-Z]/', $str)) { self:: setErrorMessage("Enter alphabets only"); self::setInvalidFlag(true); } else { self::setInvalidFlag(false); } } } /** * Allow alphabets and numbers only * */ protected static function _alphaNumeric(){ if (self::$_ruleValue) { $str = trim(self::$_elementValue); if (!preg_match('/[a-zA-Z0-9]/', $str)) { self:: setErrorMessage("Alphanumeric only"); self::setInvalidFlag(true); } else { self::setInvalidFlag(false); } } } /** * To check whether the entered email is valid * */ protected static function _email(){ if (self::$_ruleValue) { $str = 
filter_var(trim(self::$_elementValue), FILTER_VALIDATE_EMAIL); if (!$str) { self:: setErrorMessage("Enter valid email"); self::setInvalidFlag(true); } else { self::setInvalidFlag(false); } } } /** * To check enter url is valid * */ protected static function _url(){ if (self::$_ruleValue) { $str = filter_var(trim(self::$_elementValue), FILTER_VALIDATE_URL); if (!$str) { self:: setErrorMessage("Enter valid URL"); self::setInvalidFlag(true); } else { self::setInvalidFlag(false); } } } /** * Setting invalid flag for every element * * @param boolean $flag */ private static function setInvalidFlag($flag) { self::$_elements[self::$_inputElement]['inValid'] = $flag; } /** * Setting error message for the input element * * @param string $message */ private static function setErrorMessage($message) { if (self::$_messages[self::$_inputElement][self::$_validationRule]) { $message = self::$_messages[self::$_inputElement][self::$_validationRule]; } array_push(self::$_response, ucfirst($message)); } } You can use this class in your application as below: <form name="frmTest" id="frmTest" action="" method="POST"> <input type="text" name="first_name" id="first_name" value = "" /> <button name="submit" value="Submit" type="submit" >Submit</button> </form> <?php require_once 'validation.php'; // Rules specification. $rules = array('method' => 'POST', 'rules' => array('first_name' => array('required' => true) ), 'messages' => array('first_name' => array('required' => 'Please enter first name') ) ); $userPostedData = $_POST; $response = Validation::_initialize($rules, $userPostedData); // if some error messages are present. if (!$response['valid']) { // it will give you the array with error messages. echo "<pre>"; print_r($response); } else { // all applied validations are passed. You can deal with your submitted information now. echo "<pre>"; print_r($_POST); } ?>
{ "domain": "codereview.stackexchange", "id": 19354, "tags": "php, validation, error-handling" }
How can a triangle have a sum exceeding 180 degrees in a curved space?
Question: I was reading a book to understand the limits of Euclidean space. I understand that lines that are parallel in 2D can meet in 3D space, as on a sphere, but it is hard to imagine or fathom why the deviation in the angles happens. What I deduced is that the 2D surface of the sphere is actually curved, so what appears to be straight in 2D is not straight in 3D, and so the angles won't be the same as in flat 2D space; they will differ relative to our system. Is that right? Answer: Here are three diagrams to illustrate that the angle sum of a triangle can differ from $180^\circ$. The Wikipedia article Spherical Geometry might be of interest; it has a nice illustration relating to measuring angles of a triangle on the Earth.
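A small numerical sketch of the answer's point (my own illustration, not from the original): on a unit sphere, the triangle with vertices at the north pole and two equatorial points 90 degrees apart has three right angles, so its angle sum is 270 degrees rather than 180:

```python
# Numerical check (illustrative sketch): angle sum of a spherical triangle
# with vertices at the north pole and two equatorial points 90 deg apart.
import math

def angle_at(vertex, p, q):
    """Angle at `vertex` between the great-circle arcs vertex->p and vertex->q."""
    def tangent(a, b):
        # component of b orthogonal to a: the arc's initial direction at a
        dot = sum(x * y for x, y in zip(a, b))
        t = [y - dot * x for x, y in zip(a, b)]
        norm = math.sqrt(sum(x * x for x in t))
        return [x / norm for x in t]
    u, v = tangent(vertex, p), tangent(vertex, q)
    cos_angle = sum(x * y for x, y in zip(u, v))
    return math.acos(max(-1.0, min(1.0, cos_angle)))

north = (0.0, 0.0, 1.0)
e1 = (1.0, 0.0, 0.0)
e2 = (0.0, 1.0, 0.0)

corners = [(north, e1, e2), (e1, north, e2), (e2, north, e1)]
total = sum(math.degrees(angle_at(v, p, q)) for v, p, q in corners)
print(round(total))  # 270 -- the excess over 180 measures the enclosed area
```

The excess over 180 degrees equals the triangle's area divided by the squared radius (the spherical excess), which is why curvature shows up directly in the angle sum.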
{ "domain": "physics.stackexchange", "id": 100026, "tags": "differential-geometry, curvature, mathematics, geometry, space" }
Combining moments of Inertia in gear chain
Question: I've got two objects connected by a rod along its axis of rotation (e.g. a sphere on top of a flat cylinder rotating around its symmetry axis). Assuming the effects of the rod are negligible, is it correct to simply sum the two moments of inertia for both objects together to calculate the total moment of inertia for the entire compound object? i.e. $$ I_{total} = \sum_{1}^{n} I_n $$ So in the case of a cylinder connected to a sphere: $$ I_{total} = \left ( \frac{1}{2}M_{cylinder} R_{cylinder}^2 \right ) + \left ( \frac{2}{5} M_{sphere} R_{sphere}^2\right ) $$ Or is that wrong? Answer: That's correct so long as the rotation axis passes through the centers of mass of both objects. In general, the moment of inertia about a fixed axis (the $z$-axis, say) will be something like $$ I = \int_\text{object} \rho (x^2 + y^2) \,dV $$ But if we can split this integral up into two disjoint volumes (a cylinder and a sphere, say), we will have
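To make the additivity concrete, here is a tiny numerical sketch of the question's formula (the masses and radii are hypothetical values chosen only for illustration):

```python
# Illustrative check of I_total = I_cylinder + I_sphere about a common axis
# through both centers of mass (hypothetical masses and radii, SI units).
def solid_cylinder_I(m, r):
    return 0.5 * m * r ** 2      # (1/2) M R^2 about the symmetry axis

def solid_sphere_I(m, r):
    return 0.4 * m * r ** 2      # (2/5) M R^2 about a diameter

m_cyl, r_cyl = 2.0, 0.1          # assumed: 2 kg cylinder, 10 cm radius
m_sph, r_sph = 1.0, 0.05         # assumed: 1 kg sphere, 5 cm radius

I_total = solid_cylinder_I(m_cyl, r_cyl) + solid_sphere_I(m_sph, r_sph)
print(round(I_total, 6))  # 0.011 kg m^2: 0.01 from the cylinder + 0.001 from the sphere
```

Because the defining integral is a sum over disjoint volumes, the moments simply add; the only caveat is that each standard formula applies about the axis actually used.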
{ "domain": "physics.stackexchange", "id": 22738, "tags": "homework-and-exercises, angular-momentum, rotational-kinematics, moment-of-inertia" }
Determine the Z-Transform for the following sequence: $ |n|(\frac{1}{2})^{|n|} $
Question: Determine the Z-Transform for the following sequence: $$ |n|(\frac{1}{2})^{|n|} $$ I have tried to solve the above problem. However, the answer that I got is the negative of what is given in the solution manual. What may I have done wrong? SOLUTION FROM SOLUTION MANUAL: Answer: It's always good to do a sanity check on such results. E.g., you could try to see that $X(1)$ equals the sum of all time domain samples: $$X(1)=\sum_{n=-\infty}^{\infty}x[n]\tag{1}$$ With $x[n]=|n|\left(\frac12\right)^{|n|}$ it is clear that the result of $(1)$ must be positive. However, for the solution from the manual you get $X(1)<0$, whereas for your solution you obtain $X(1)>0$. So I think you can be confident that your solution is the correct one.
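The answer's sanity check can also be run numerically (my own sketch): truncate the doubly infinite sum of $x[n]$ and confirm it is positive. In fact $\sum_{n\ge 1} n(1/2)^n = 2$, so by symmetry the full sum converges to 4:

```python
# Numerical version of the answer's sanity check: X(1) must equal the sum of
# all samples x[n] = |n| (1/2)^{|n|}, which is clearly positive (it equals 4).
def x(n):
    return abs(n) * 0.5 ** abs(n)

partial_sum = sum(x(n) for n in range(-200, 201))
print(partial_sum)  # converges to 4.0 -- positive, as the answer argues
```

Any candidate closed form for $X(z)$ should therefore evaluate to a positive number at $z=1$; the manual's negative value fails this check immediately.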
{ "domain": "dsp.stackexchange", "id": 8863, "tags": "z-transform, homework" }
Is the chitin in an insect's exoskeleton cross-linked?
Question: This answer to the question How to clean and preserve a cicada's molted exoskeleton (exuvia)? states: The exuvia is made of cross-linked chitin, and will not decay. You don't need any special preservatives at all. If you need to get the mud off, just rinse it as you said, in soapy water, let it dry, and you are done. Simple. Wikipedia's Chitin says only: Chitin is a modified polysaccharide that contains nitrogen; it is synthesized from units of N-acetyl-D-glucosamine (to be precise, 2-(acetylamino)-2-deoxy-D-glucose). These units form covalent β-(1→4)-linkages (like the linkages between glucose units forming cellulose). Therefore, chitin may be described as cellulose with one hydroxyl group on each monomer replaced with an acetyl amine group. This allows for increased hydrogen bonding between adjacent polymers, giving the chitin-polymer matrix increased strength. I'm not a chemist, but "increased hydrogen bonding between adjacent polymers" doesn't sound the same as cross-linked polymers. So I would like to ask for an answer based on sources other than Wikipedia: Question: Is the chitin in an insect's exoskeleton cross-linked? If it depends on the type of insect, then the focus should be on "a cicada's molted exoskeleton (exuvia)" as discussed in the linked answer. Answer: Much like cellulose, chitin strands are bonded to other strands by hydrogen bonds. Here is a SlideShare presentation with a breakdown of the structure. It is crosslinked in the sense that strands are linked to other strands in such a way that most enzymes cannot access it to break it down. This is the same property that lets untreated wood last. In a strict chemistry sense it is not a crosslinked polymer (which requires covalent or ionic bonding), but it still has crosslinking. You're hitting a difficulty in jargon. Source.
{ "domain": "biology.stackexchange", "id": 10008, "tags": "entomology, proteins, protein-structure" }
acceleration of the universe
Question: Moments after the Big Bang, the universe was expanding at an incredible rate, (I've heard) faster than the speed of light. Due to dark energy, scientists predict the rate of expansion will pick up again. Space itself will be expanding faster than light speed. Someday, we will not be able to see other galaxies because they'll be moving away so fast that the light they produce will never reach us. Nowadays, though, we can see other galaxies, which means the expansion of the universe slowed down. What caused the expansion of the universe to accelerate more slowly? If dark energy is causing the acceleration to increase, wouldn't the universe continue to expand faster after the big bang? Is there a minimum rate of acceleration? If so, what is it, and what determines it? Answer: There is not a minimum rate of acceleration. In fact, before the discovery of dark energy most people thought that the universe would decelerate. That is still a mathematical possibility - if dark energy is not a cosmological constant and its equation of state changes in the future. The vast difference in scale between the early inflation and present day expansion is explained by the fact that it is thought that the early inflation and current acceleration have different causes. If there were a common cause, then there would have to be some bizarre dynamics connecting things across many orders of magnitude, which is not at all likely according to theoretical prejudice. The expansion of the universe (in the approximation where you can ignore the fact that the universe isn't perfectly uniform) is governed by the Friedmann equation: $$ \left( \frac{\dot{a}}{a} \right)^2 = \frac{8\pi G}{3} \rho - \frac{k}{a^2} + \frac{\Lambda}{3} $$ (units where $c=1$) where the scale factor $a$ measures the size of the universe, $\rho$ is the energy density, $k$ measures the curvature of space and $\Lambda$ is the cosmological constant. $G$ is Newton's gravitational constant. 
For all practical purposes $k$ is zero in our universe. Now in order to find how the universe expands you need to know how the energy density changes with the scale factor. There are some common cases: Matter: $\rho \propto a^{-3}$ Radiation: $\rho \propto a^{-4}$ Vacuum energy (slow rolling scalar field): $\rho \propto a^0$ If you plug these in and work things out you will find that when matter and radiation dominate the expansion is slowing down, but when vacuum energy or $\Lambda$ dominates the expansion speeds up. (The critical point, i.e. steady non-accelerating expansion, is $\rho \propto a^{-2}$, which corresponds to a gas of cosmic strings, I think, but I would need to confirm this.) So you can get the proposed expansion history from the following standard scenario: The universe starts in a state dominated by a slow rolling scalar field (inflaton) with a large energy density $\rho\sim\text{constant}$. This drives a rapid and accelerating expansion. At some point the scalar field hits a phase transition and its energy is converted into ordinary matter and radiation. This is called reheating. While radiation and, subsequently, matter dominate the energy density of the universe the expansion continues but slows down. Eventually the radiation and matter dilute away to the point that the cosmological constant (or dark energy) dominates the energy density. At this point the expansion starts speeding up again. This happened a few billion years ago in our universe. The pattern is very similar to the inflation in step 1, but the scale of the energy density is many orders of magnitude smaller, which is why it took so long for the changeover to take place. If the expansion is really being driven by a cosmological constant then this acceleration will continue forever. If, on the other hand, it is being driven by some more complicated dark energy mechanism then there are many possibilities for the future...
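The decelerate-then-accelerate history follows directly from those scaling laws. Here is a minimal, self-contained Python sketch (the density fractions $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$ are assumed illustrative values, not a fit) that checks the sign of the deceleration parameter $q(a) = \tfrac{1}{2}\Omega_m(a) - \Omega_\Lambda(a)$ for a flat matter-plus-$\Lambda$ universe:

```python
# Toy flat universe with matter (rho ~ a^-3) and a cosmological constant (rho ~ a^0).
# Present-day density fractions are assumed illustrative values, not a fit:
Om0, OL0 = 0.3, 0.7

def q(a):
    """Deceleration parameter q(a) = Omega_m(a)/2 - Omega_Lambda(a) for a flat
    matter + Lambda universe; q > 0 means decelerating, q < 0 accelerating."""
    rho_m = Om0 * a**-3   # matter dilutes as the cube of the scale factor
    rho_L = OL0           # vacuum energy does not dilute
    rho = rho_m + rho_L
    return 0.5 * rho_m / rho - rho_L / rho

assert q(0.1) > 0   # small a (early times): matter dominates, expansion slows
assert q(1.0) < 0   # a = 1 (today): Lambda dominates, expansion speeds up

# The changeover happens where rho_m/2 = rho_L, i.e. a = (Om0 / (2*OL0))**(1/3)
a_acc = (Om0 / (2 * OL0)) ** (1 / 3)
assert abs(q(a_acc)) < 1e-9
```

With these numbers the sign flip happens around $a \approx 0.6$, i.e. when the universe was roughly 60% of its present size.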
{ "domain": "physics.stackexchange", "id": 6493, "tags": "cosmology, universe, space-expansion" }
How to derive the phase difference of a standing wave?
Question: We know a standing wave is defined by $D(x,t)=2a \sin kx\cos \omega t$. Intuitively, all particles within the same "loop" of a standing wave are vibrating in phase; all particles within 2 adjacent "loops" are vibrating in opposite phase. However, is there a mathematical proof of this? Below is my attempt: For a progressive wave $D(x,t)=A \sin (kx-\omega t+\Phi_0)$, the phase is $kx-\omega t+\Phi_0$, which makes the phase difference $\Delta\Phi = (kx_2-\omega t+\Phi_0) - (kx_1-\omega t+\Phi_0) = k\Delta x$. Then if $\Delta\Phi = 2\pi$, the two particles are vibrating in phase; if $\Delta\Phi = \pi$, the two particles are vibrating out of phase. But using the same logic for standing waves, it seems the phase would be $\omega t$, so the phase difference is $\Delta\Phi = \omega t - \omega t = 0$. This makes sense for particles in the same loop, but does not account for particles in adjacent loops. Answer: The phase difference you are trying to calculate is the phase difference between different points in space $x$ at the same time $t$. In other words you are choosing some constant time $t$ then calculating how the phase $\Phi$ varies with $x$. In your example of the travelling wave: $$D(x,t)=A \sin (kx-\omega t+\Phi_0) $$ your method works because you take two different values of $x_1$ and $x_2$ at the same time $t$ so when you calculate: $$\Delta\Phi = (kx_2-\omega t+\Phi_0) - (kx_1-\omega t+\Phi_0) $$ the $\omega t$ terms are constant and cancel out. This works in exactly the same way for the standing wave: $$ D(x,t)=2a \sin kx\cos \omega t $$ If we take constant $t$ then $\cos \omega t$ is constant and we can write our snapshot in time as: $$ D(x) = A\sin kx $$ where $A$ is a constant given by $A = 2a\cos\omega t$. And just as for the travelling wave we get: $$ \Delta\Phi = k(x_2 - x_1) $$
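The answer's point can also be checked numerically. A short Python sketch (units chosen so the wavelength is 1, purely for illustration) confirms that two points inside one loop always displace with the same sign, while points in adjacent loops displace with opposite signs:

```python
import math

k, a = 2 * math.pi, 1.0   # wavelength = 1, so each loop (antinoded segment) spans 0.5

def D(x, t, w=2 * math.pi):
    """Standing wave D(x, t) = 2a sin(kx) cos(wt)."""
    return 2 * a * math.sin(k * x) * math.cos(w * t)

# Two points inside the same loop (0 < x < 0.5): sin(kx) has the same sign for
# both, so they move up and down together -- in phase.  A point in the next
# loop (0.5 < x < 1): sin(kx) flips sign -- opposite phase.
for t in [0.1, 0.3, 0.7]:
    assert D(0.2, t) * D(0.3, t) > 0   # same loop: in phase
    assert D(0.2, t) * D(0.7, t) < 0   # adjacent loops: opposite phase
```

This matches the picture in the answer: within a snapshot $D(x) = A \sin kx$, all points in one loop share the sign of $\sin kx$, and crossing a node flips it.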
{ "domain": "physics.stackexchange", "id": 40935, "tags": "waves" }
Is the Abraham-Minkowski controversy resolved?
Question: A paper was published in 2010 claiming to resolve the Abraham-Minkowski controversy. Is this paper viewed as definitive by physicists? Paper: https://strathprints.strath.ac.uk/26871/5/AbMinPRL.pdf Answer: It is good to ask about these historical controversies, and it makes sense to review this one. In the medium, the density of momentum was (I set $c=\hbar=1$ everywhere) $$E\times H$$ according to Max Abraham - that's equal to the Poynting vector which determines the flow of energy according to everyone - and $$D\times B$$ according to Hermann Minkowski, who argued that in the medium, the stress-energy tensor is asymmetric. In both cases, one can start with the product of the vacuum fields $E\times B$ and replace one of the fields by its material "variation". The two men differed over whether they replaced the electric or the magnetic factor. Consequently, a photon according to the first, Abraham form will have momentum $$p=\omega/n$$ while the second, Minkowski form gives $$p=\omega n$$ Now, one may realize that the (complexified) electromagnetic wave depends on the position and the time as $$\exp(-i\omega t+i k x)$$ where $k=\omega n$. The phase velocity is simply $\omega/k$ and it should be $1/n$, smaller than one, so it's clear that $k=\omega n$. So the wave function shows clearly that the conserved momentum of a single photon agrees with Minkowski's template. By definition, the momentum is given by $k$. Barnett agrees that the Minkowski momentum is what generates normal translations - of the vector potential and other fields - by conjugation. If you try to find out what arguments (and interpretations) support the Abraham form, I think that all of them are incompatible with modern physics. As far as I can tell, they're all based on the idea that the momentum of any particle should be equal to "mass times velocity". This is an extremely shaky assumption that is not really true in this case. 
In modern physics, the momentum has to be defined by a solid definition - and it is the quantity that is conserved, as shown by Noether's theorem, because of the spatial translational symmetry. One can show that this is $m_{total}v$ for ordinary particles in the vacuum but one cannot show this for a photon in a material - especially because it is not true. It is very bizarre for Barnett to say that the "canonical momentum is more subtle than the $mv$ momentum". The "canonical momentum" is the only momentum that is acceptable in modern physics and that may be generalized to new contexts. It's the only Noether-derived momentum. In this case, it's Minkowski's momentum. The "kinetic momentum", as he calls $mv$, is just an attempt to return to the 17th century as much as he can. For charged particles, there could be issues about $i\partial$ versus $i\partial-eA$, the velocity operator, but this doesn't occur for $A_i$ in the dielectric because the wave function of the photon cannot be transformed away by any gauge symmetries. Under the U(1) gauge symmetry with an oscillating parameter, $A$ may shift by a constant but it won't change the speed of its oscillations. The other arguments supporting the Abraham form seem to be bogus, too. For example, the Abraham form is supported by the argument that the center-of-mass-energy continues in a uniform motion as the photon enters the material. However, this is an invalid assumption, too. The uniform motion of the center of mass only holds - again according to Noether's theorem - in systems which respect the equivalence of inertial frames, e.g. the Lorentz (or Galileo) symmetry. This symmetry is explicitly broken by the boundary between the vacuum and the dielectric material, so there is no reason why the corresponding conservation law - the conservation of the velocity of the center-of-mass motion - should hold. To summarize, Minkowski was simply right while Abraham was wrong. 
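To make the size of the disagreement concrete, here is a small numeric sketch (illustrative values only: $n = 1.5$ for typical glass and a 500 nm photon; these numbers are not from the paper) comparing the two momentum assignments:

```python
import math

# Compare the two proposed momenta for one photon of vacuum wavelength 500 nm
# inside glass with refractive index n = 1.5 (illustrative numbers).
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
n = 1.5
lam = 500e-9             # vacuum wavelength in meters

omega = 2 * math.pi * c / lam
p_vacuum = hbar * omega / c        # photon momentum in vacuum, h/lambda
p_minkowski = n * p_vacuum         # p = n*hbar*omega/c, follows from k = omega*n
p_abraham = p_vacuum / n           # p = hbar*omega/(n*c)

# The two assignments bracket the vacuum value and differ by a factor n^2:
assert p_abraham < p_vacuum < p_minkowski
assert math.isclose(p_minkowski / p_abraham, n ** 2)
```

The factor of $n^2 = 2.25$ between the two candidate momenta is what any experiment distinguishing them would have to resolve, which is why the bookkeeping of how momentum is shared with the medium matters so much.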
But it's probably OK to "redo" the budget in such a way that a part of the momentum of the photon is attributed to the dielectric material when the photon enters it, and then it is returned back to the photon. In this way, one may justify Abraham's form - and probably many other forms - but why should one really do it? Stephen Barnett who wrote the 2010 paper above is a respectable optician but this whole thing was much ado about nothing. If you care about the sociology, I am pretty confident that almost no people in optics will oppose Barnett, and the people from other disciplines may say that this was really a trivial dispute. Minkowski was really right, and by the way, he was a much better physicist than Abraham from all other angles, too. For example, Abraham's model of the electron and the arguments used to support this model were truly painful. That's a very different league from Minkowski, who was one of the first people to have understood special relativity. I don't think that this problem was being studied by top physicists throughout the century. To make things worse, the experiments never test these things "directly", especially because one may always choose different interpretations of where the momentum goes when the photon changes medium. Best wishes, Lubos P.S.: Let me mention one detail: Minkowski's stress-energy tensor is asymmetric, so if you define the density of the angular momentum purely in terms of this tensor, multiplied by $x$ in the usual way, the angular momentum won't be conserved. However, it's not a problem because this can be compensated by an extra contribution to the angular momentum that comes from the volume and surface density of the spins - including the internal angular momenta of the atoms of the materials. There are many ways to attribute the conserved quantities (energy, momentum, angular momentum) to regions - only the total has to be conserved. 
The diverse interpretations will differ about all the functions, but they should ultimately yield the same predictions for the experiments when done right.
{ "domain": "physics.stackexchange", "id": 56520, "tags": "electromagnetism" }
Why can distributed deep learning provide higher accuracy (lower error) than non-distributed deep learning in the following cases?
Question: Based on some papers which I have read, distributed deep learning can provide faster training time. In addition, it also provides better accuracy, i.e. lower prediction error. What are the reasons? Question edited: I am using TensorFlow to run distributed deep learning (DL) and compare the performance with non-distributed DL. I use a dataset of 1000 samples and 10000 training steps. The distributed DL uses 2 workers and 1 parameter server. Then, the following cases are considered when running the code: (1) Each worker and the non-distributed DL use 1000 samples for training, with the same mini-batch size of 200. (2) Each worker uses 500 samples for training (the first 500 samples for worker 1 and the remaining 500 samples for worker 2), the non-distributed DL uses 1000 samples, with the same mini-batch size of 200. (3) Each worker uses 500 samples for training (the first 500 samples for worker 1 and the remaining 500 samples for worker 2) with mini-batch size 100, the non-distributed DL uses 1000 samples with mini-batch size 200. Based on the simulation, for all cases, distributed DL has lower RMSE than non-distributed DL. In this case, the RMSEs are ordered as follows: Distributed DL in Case 2 < Distributed DL in Case 1 < Distributed DL in Case 3 < Non-distributed. In addition, I also extended the training time (i.e., the number of steps is 2 x 10000) for the non-distributed DL, and the results are still not as good as distributed DL. One reason can be the mini-batch size; however, I wonder what other reasons explain why the distributed DL has better performance in the aforementioned cases? Answer: About the accuracy: Going with the strongest reason: memory problems will diminish due to the distribution of the computation. That will allow you to increase your training batch size, which will reduce the gradient noise caused by small mini-batch sizes. The gradient steps will then move towards the minima more steadily, with less noise. 
You can refer to this video for a deeper understanding: https://www.youtube.com/watch?v=-_4Zi8fCZO4&list=PLkDaE6sCZn6Hn0vK8co82zjQtt3T2Nkqc&index=16 About the speed: this one is more obvious, I think. You distribute your gradient descent computations to multiple machines or CPUs/GPUs/TPUs, so you get faster training as a result.
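The batch-size/gradient-noise point can be illustrated without TensorFlow at all. This is a hypothetical toy 1-D least-squares problem (not the asker's model) showing that larger mini-batches give less noisy gradient estimates:

```python
import random
import statistics

random.seed(0)

# Toy 1-D least-squares problem: the loss for one sample is (w - y_i)^2, so the
# per-sample gradient at w is 2*(w - y_i).  The full-batch gradient averages
# over all samples; a mini-batch gradient is a noisy estimate of that average.
ys = [random.gauss(3.0, 1.0) for _ in range(1000)]
w = 0.0

def minibatch_grad(batch_size):
    """Gradient estimate from one randomly drawn mini-batch."""
    batch = random.sample(ys, batch_size)
    return statistics.mean(2 * (w - y) for y in batch)

def grad_noise(batch_size, trials=2000):
    """Standard deviation of the mini-batch gradient across many draws."""
    grads = [minibatch_grad(batch_size) for _ in range(trials)]
    return statistics.stdev(grads)

# Larger batches -> less noisy gradient estimates (roughly ~ 1/sqrt(batch)).
assert grad_noise(200) < grad_noise(20) < grad_noise(2)
```

This is consistent with the ordering the asker observed: the configurations that effectively enjoy larger or better-mixed batches take smoother gradient steps towards the minimum.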
{ "domain": "datascience.stackexchange", "id": 4061, "tags": "deep-learning, tensorflow, distributed" }