| anchor | positive | source |
|---|---|---|
PID position controller in Gazebo | Question:
Can someone point me to a simple example of how to use a PID-controlled actuator in Gazebo? I found an example here, but there is a command topic (in rostopic list). I just want to be able to send a joint to a position. Do I need an action server for this?
My URDF (sorry for the long file; the launch file is below):
<?xml version="1.0"?>
<gazebo>
<controller:gazebo_ros_controller_manager name="gazebo_ros_controller_manager" plugin="libgazebo_ros_controller_manager.so">
<alwaysOn>true</alwaysOn>
<updateRate>1000.0</updateRate>
<robotNamespace></robotNamespace>
<robotParam>robot_description</robotParam>
<interface:audio name="gazebo_ros_controller_manager_dummy_iface" />
</controller:gazebo_ros_controller_manager>
</gazebo>
<robot name="ar_arm">
<link name="world" />
<link name="arm_base">
<inertial>
<origin xyz="0.000000 0.000000 0.000990" rpy="0.000000 -0.000000 0.000000"/>
<mass value="101.000000" />
<inertia ixx="1.110000" ixy="0.000000" ixz="0.000000" iyy="100.110000" iyz="0.000000" izz="1.010000"/>
</inertial>
<collision name="arm_base_geom">
<origin xyz="0.000000 0.000000 0.050000" rpy="0.000000 -0.000000 0.000000"/>
<geometry>
<box size="1.000000 1.000000 0.100000"/>
</geometry>
</collision>
<visual>
<origin xyz="0.000000 0.000000 0.050000" rpy="0.000000 -0.000000 0.000000"/>
<geometry>
<box size="1.000000 1.000000 0.100000"/>
</geometry>
<material name="Cyan">
<color rgba="0 1 1 1.0"/>
</material>
</visual>
</link>
<link name="arm_cylinder">
<inertial>
<origin xyz="0.000000 0.000000 0.600000" rpy="0.000000 -0.000000 0.000000"/>
<mass value="1.000000" />
<inertia ixx="0.110000" ixy="0.000000" ixz="0.000000" iyy="100.110000" iyz="0.000000" izz="1.010000"/>
</inertial>
<collision name="arm_base_geom_arm_trunk">
<origin xyz="0.000000 0.000000 0.600000" rpy="0.000000 -0.000000 0.000000"/>
<geometry>
<cylinder radius="0.050000" length="1.00000"/>
</geometry>
</collision>
<visual name="arm_base_geom_arm_trunk_visual" cast_shadows="1">
<origin xyz="0.000000 0.000000 0.650000" rpy="0.000000 -0.000000 0.000000"/>
<geometry>
<cylinder radius="0.050000" length="1.100000"/>
</geometry>
<material name="Red">
<color rgba="255 0 0 1"/>
</material>
</visual>
</link>
<link name="arm_shoulder_pan">
<inertial>
<mass value="1.100000"/>
<origin xyz="0.045455 0.000000 0.000000" rpy="0.000000 -0.000000 0.000000"/>
<inertia ixx="0.011000" ixy="0.000000" ixz="0.000000" iyy="0.022500" iyz="0.000000" izz="0.013500"/>
</inertial>
<collision name="arm_shoulder_pan_geom_arm_shoulder">
<origin xyz="0.550000 0.000000 0.050000" rpy="0.000000 -0.000000 0.000000"/>
<geometry>
<box size="1.000000 0.050000 0.100000"/>
</geometry>
</collision>
<visual name="arm_shoulder_pan_geom_arm_shoulder_visual" cast_shadows="1">
<origin xyz="0.550000 0.000000 0.050000" rpy="0.000000 -0.000000 0.000000"/>
<geometry>
<box size="1.000000 0.050000 0.100000"/>
</geometry>
<material name="Cyan">
<color rgba="0 1 1 1.0"/>
</material>
</visual>
</link>
<link name="arm_shoulder_pivot">
<inertial>
<mass value="0.100001"/>
<origin xyz="0.045455 0.000000 0.000000" rpy="0.000000 -0.000000 0.000000"/>
<inertia ixx="0.011000" ixy="0.000000" ixz="0.000000" iyy="0.022500" iyz="0.000000" izz="0.013500"/>
</inertial>
<collision name="arm_shoulder_pan_geom">
<origin xyz="0.000000 0.000000 0.050000" rpy="0.000000 -0.000000 0.000000"/>
<geometry>
<cylinder radius="0.050000" length="0.100000"/>
</geometry>
</collision>
<visual name="arm_shoulder_pan_geom_visual" cast_shadows="1">
<origin xyz="0.050000 0.000000 0.050000" rpy="0.000000 0.000000 0.000000"/>
<geometry>
<cylinder radius="0.050000" length="0.200000"/>
</geometry>
<material name="Red">
<color rgba="255 0 0 1"/>
</material>
</visual>
</link>
<link name="arm_elbow_pan">
<inertial>
<mass value="1.200000"/>
<origin xyz="0.087500 0.000000 0.083333" rpy="0.000000 -0.000000 0.000000"/>
<inertia ixx="0.031000" ixy="0.000000" ixz="-0.005000" iyy="0.072750" iyz="0.000000" izz="0.044750"/>
</inertial>
<collision name="arm_elbow_pan_geom_arm_elbow">
<origin xyz="0.300000 0.000000 0.150000" rpy="0.000000 -0.000000 0.000000"/>
<geometry>
<box size="0.500000 0.030000 0.100000"/>
</geometry>
</collision>
<visual name="arm_elbow_pan_geom_arm_elbow_visual" cast_shadows="1">
<origin xyz="0.2500000 0.000000 0.000000" rpy="0.000000 0.000000 0.000000"/>
<geometry>
<box size="0.500000 0.030000 0.100000"/>
</geometry>
<material name="Cyan">
<color rgba="0 1 1 1.0"/>
</material>
</visual>
</link>
<joint name="arm_base_joint" type="fixed">
<parent link="world"/>
<child link="arm_base"/>
<axis xyz="0.000000 0.000000 1.00000">
<limit lower="0.000000" upper="0.000000"/>
<dynamics damping="10.0" friction="10.0"/>
</axis>
</joint>
<joint name="arm_base_cylinder" type="fixed">
<parent link="arm_base"/>
<child link="arm_cylinder"/>
<axis xyz="0.000000 0.000000 1.000000">
<limit lower="0.000000" upper="0.000000"/>
<dynamics damping="10.0" friction="10.0"/>
</axis>
</joint>
<joint name="arm_shoulder_pan_joint" type="continuous">
<parent link="arm_cylinder"/>
<child link="arm_shoulder_pan"/>
<origin xyz="0.000000 0.000000 1.100000"/>
<axis xyz="0.000000 0.000000 1.000000">
<dynamics damping="10.0" friction="10.0"/>
</axis>
</joint>
<joint name="arm_elbow_fix" type="fixed">
<parent link="arm_shoulder_pan"/>
<child link="arm_shoulder_pivot"/>
<origin xyz="1.100000 0.000000 0.100000"/>
<axis xyz="0.000000 0.000000 1.000000">
<dynamics damping="10.0" friction="10.0"/>
</axis>
</joint>
<joint name="arm_elbow_pan_joint" type="continuous">
<parent link="arm_shoulder_pivot"/>
<child link="arm_elbow_pan"/>
<origin xyz="0.00000 0.000000 0.0500000"/>
<axis xyz="0.000000 0.000000 1.000000">
<dynamics damping="10.0" friction="10.0"/>
</axis>
</joint>
<transmission type="pr2_mechanism_model/SimpleTransmission" name="kinect_trans">
<actuator name="kinect_motor" />
<joint name="arm_shoulder_pan_joint" />
<mechanicalReduction>1.0</mechanicalReduction>
<motorTorqueConstant>1.0</motorTorqueConstant>
</transmission>
</robot>
Launch file:
<launch>
<!-- start gazebo with an empty plane -->
<node name="gazebo" pkg="gazebo" type="gazebo" args="$(find ar_joint_controller)/model/empty_throttled.world" respawn="false" output="screen">
</node>
<!-- start gui -->
<node name="gazebo_gui" pkg="gazebo" type="gui" respawn="false" output="screen"/>
<!-- spawn model -->
<node name="spawn_arm" pkg="gazebo" type="spawn_model" args="-file $(find ar_joint_controller)/model/single_actuated_joint.urdf -urdf -model single_actuated_joint" respawn="false" output="screen" >
</node>
<rosparam file="$(find ar_joint_controller)/cfg/conf_file.yaml" command="load" />
<param name="pr2_controller_manager/joint_state_publish_rate" value="100.0" />
<!-- Spawn some controllers stopped/started -->
<node pkg="pr2_controller_manager" type="spawner" args="kinect_controller" name="spawn_dynamix" output="screen"/>
</launch>
Originally posted by davinci on ROS Answers with karma: 2573 on 2013-04-04
Post score: 0
Original comments
Comment by inflo on 2013-04-05:
http://gazebosim.org/wiki/Tutorials/1.3/intermediate/pid_control_joint
Comment by Mehdi. on 2018-11-20:
@inflo deadlink
Answer:
Hello, there is a tutorial that can help you with implementing a PID controller in Gazebo; please check it out here.
Originally posted by cagatay with karma: 1850 on 2013-04-04
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 13681,
"tags": "ros, ros-fuerte"
} |
What is the relation between the degrees of freedom of a molecule and the heat capacity? | Question: I'm confused. How can a change in the degrees of freedom of a molecule change the heat capacity?
Answer: The formal answer is known as the Equipartition Theorem, which states that, at thermal equilibrium, any degree of freedom that appears quadratically in the expression of the system's energy has an average energy of $\frac{1}{2}k_B T$. This means that any degree of freedom that appears quadratically in the energy gives the system an additional heat capacity of $\frac{\partial E}{\partial T}=\frac{1}{2}k_B$. Proofs of this can be found in many places, including Wikipedia and most textbooks.
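As a numerical sketch of this bookkeeping (the degree-of-freedom counts below are standard illustrative values, not taken from the question):

```python
# Per the equipartition theorem, each quadratic degree of freedom
# contributes (1/2) k_B to a molecule's heat capacity, so the molar
# heat capacity at constant volume is C_V = (f/2) * k_B * N_A.
k_B = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23     # Avogadro's number, 1/mol

def molar_cv(dof):
    """Molar heat capacity at constant volume, J/(mol K)."""
    return 0.5 * dof * k_B * N_A

print(molar_cv(3))   # monatomic ideal gas (3 translational DOF): 3/2 R
print(molar_cv(5))   # rigid diatomic near room temperature: 5/2 R
```

Activating more degrees of freedom (e.g. vibration at high temperature) raises the value accordingly.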
Intuitively, this makes sense. Degrees of freedom are essentially different ways to store energy (e.g. as translational kinetic energy, rotational kinetic energy, elastic potential energy, magnetic energy, etc.). At equilibrium, it would be reasonable to expect that the energy of the system is equally distributed among the possible ways to store it. As such, the more ways a system can store energy, the less its particles' kinetic energy will rise when you inject more energy into the system (because that energy is more spread out over the non-kinetic degrees of freedom). As such, more degrees of freedom means less temperature change for a given energy change, which means more heat capacity. | {
"domain": "physics.stackexchange",
"id": 50099,
"tags": "thermodynamics"
} |
ImportError: No module named genpy | Question:
I am very new to ROS as I just started installing it on my Ubuntu 16.04 (I am fully aware that there hasn't been a version of ROS released for Ubuntu 16.04.), but so far the installation went smoothly, except that when I try to run the 'rostopic' command it gives me:
Traceback (most recent call last):
  File "/usr/bin/rostopic", line 34, in <module>
    import rostopic
  File "/usr/lib/python2.7/dist-packages/rostopic/__init__.py", line 59, in <module>
    import genpy
ImportError: No module named genpy
Can anyone tell me how to fix it? Thanks in advance.
Originally posted by Ghadeer on ROS Answers with karma: 1 on 2016-06-04
Post score: 0
Answer:
I had the same problem, but only because I did not complete the installation instructions.
Setting up the environment correctly ($ source /opt/ros/kinetic/setup.bash) fixed this.
Originally posted by Max Kiva with karma: 16 on 2017-08-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 24812,
"tags": "ros"
} |
Subscriber::shutdown() required to be called manually on destruction? | Question:
TL;DR
Is calling Subscriber::shutdown() explicitly required to prevent callbacks from being called after object destruction?
TS;DU (too short, didn't understand)
I've recently been doing some work where objects containing subscribers are created and destroyed while the node is still running. At the same time I've been getting crashes in my code that imply that the callbacks connected to said subscribers are being called after the subscriber has been destroyed, causing segfaults and memory corruption when a callback accesses member data.
Upon noticing this I started explicitly calling Subscriber::shutdown() in my destructors, and the problematic crashes stopped. Previously I had assumed Subscriber::~Subscriber() would have taken care of this. I looked at the code, and Subscriber::shutdown() is not explicitly called.
So is this the expected behavior? Is this a bug?
Originally posted by Asomerville on ROS Answers with karma: 2743 on 2013-01-12
Post score: 2
Original comments
Comment by Asomerville on 2013-01-13:
If I'm able to, will do.
Answer:
This sounds like a bug. I suggest you file a ticket with a minimal code snippet that reproduces the issue, if possible, and post the link to the ticket as an answer to this question.
Originally posted by Eric Perko with karma: 8406 on 2013-01-13
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 12383,
"tags": "ros, c++, libroscpp, ros-electric"
} |
gzserver segfaults | Question:
New and up to date install of Kinetic and Ubuntu 16. I log in, open terminal and issue one command
roslaunch turtlebot_gazebo turtlebot_world.launch
and then gzserver segfaults.
I'm new to ROS but have been into UNIX software development professionally going back to the 1980s. It was always my opinion that if something I wrote segfaults, it is always a bug, no matter what devious trick the user did to crash my software. My bosses and I figure I should have caught the problem with some kind of data validation or assert statement.
Does the ROS development community have the same view on this? Should I report segfaults as bugs?
Next questions are: where and how to report it, and most importantly, this is the very first step in the online tutorial and it fails. How do I figure out why?
Here is a cut-and-paste from the terminal window at the point where things start going wrong. (To address the question "what is in those .log files?": the log files listed below do not exist. Apparently Gazebo crashed before it could write to the log file.)
process[laserscan_nodelet_manager-9]: started with pid [18771]
process[depthimage_to_laserscan-10]: started with pid [18775]
[ INFO] [1481175480.361448318]: Finished loading Gazebo ROS API Plugin.
[ INFO] [1481175480.362918368]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting...
Segmentation fault (core dumped)
[gazebo-2] process has died [pid 18708, exit code 139, cmd /opt/ros/kinetic/lib/gazebo_ros/gzserver -e ode /opt/ros/kinetic/share/turtlebot_gazebo/worlds/playground.world __name:=gazebo __log:=/home/chris/.ros/log/7977bfb0-bd08-11e6-a492-002100e558ce/gazebo-2.log].
log file: /home/chris/.ros/log/7977bfb0-bd08-11e6-a492-002100e558ce/gazebo-2*.log
[gazebo_gui-3] process has died [pid 18729, exit code 255, cmd /opt/ros/kinetic/lib/gazebo_ros/gzclient __name:=gazebo_gui __log:=/home/chris/.ros/log/7977bfb0-bd08-11e6-a492-002100e558ce/gazebo_gui-3.log].
log file: /home/chris/.ros/log/7977bfb0-bd08-11e6-a492-002100e558ce/gazebo_gui-3*.log
Originally posted by chrisalbertson on ROS Answers with karma: 136 on 2016-12-08
Post score: 0
Original comments
Comment by gvdhoorn on 2016-12-08:
In your previous question about this you were running things in VMWare. Is that still the case?
Answer:
There are some (probably) related questions over at answers.gazebosim.org (searching for gazebo exit code 139) and I also believe that ros-simulation/gazebo_ros_pkgs#387 is related.
Does the ROS development community have the same view on this?
As much as I appreciate humor, doing it this way might not be conducive to you getting answers quicker on this forum.
Yes, SEGFAULTs are obviously considered problematic (I don't think you and your bosses are alone in that), so they should be reported.
How to figure out why?
I would try and start gzserver directly, not from a launch file. If it still crashes, #387 is probably not related. If it doesn't, it probably is.
Next question are Where/how to report [..]
Gazebo is slightly special, in that it is a stand-alone tool, but has a tight ROS integration available. As the crash seems to point to gazebo_ros, it's probably a good idea to first check ros-simulation/gazebo_ros_pkgs/issues. If it turns out to be on the Gazebo side, then I would try the Gazebo answers site, the troubleshooting page and finally if it does turn out to be an unknown bug, the issue tracker.
See the Support guidelines on the wiki for more information.
Some related questions on answers.gazebosim.org:
gazebo crashes immediately using roslaunch after installing gazebo ros packages (second answer)
gazebo_gui-3 error when starting gazebo with roslaunch (second answer)
gazebo terminates with error code 139
Originally posted by gvdhoorn with karma: 86574 on 2016-12-08
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 26435,
"tags": "ros"
} |
Is it possible to ionise matter at a large distance from a spaceship so a magnetic field can reflect it? | Question: The ship can create a big magnetic field to reflect or catch the ionised particles around it. But non-ionised particles will not interact with this field and will still be dangerous for the ship's passengers.
Is it possible to ionise that matter so it is reflected too? Will this magnetic field be dangerous for the spaceship passengers?
Answer: Yes. If you are in a spaceship with a large Lorentz $\gamma$ with respect to the local hydrogen gas (protons + neutral H + neutral H2 molecules) of number density $N$, then the flux of these particles hitting the front of your ship will be $\gamma N c$ [number/ m^2 sec] because the column of particles in front of your ship is Lorentz contracted in your spaceship frame. For sufficiently large $\gamma$ this can be dangerous radiation for the ship's occupants and also a drag force on the ship.
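The flux formula in the paragraph above can be evaluated directly; the $\gamma$ and $N$ values here are arbitrary illustrative assumptions:

```python
# Flux of hydrogen on the ship's bow: gamma * N * c, because the
# column of gas ahead is Lorentz-contracted in the ship frame.
gamma = 10.0        # ship's Lorentz factor (assumed for illustration)
N = 1.0e6           # hydrogen number density, particles/m^3 (assumed)
c = 2.998e8         # speed of light, m/s
flux = gamma * N * c
print(f"{flux:.3e} particles per m^2 per second")
```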
As you suggest, you solve this problem by placing a magnet far in front of the ship, at the end of a long stick connected to the ship. The magnet should be a "septum magnet": a sheet of copper carrying a current in the ship's direction of travel. This produces an up B field to the left of the sheet and a down B field to the right of the sheet. The relativistic charged particles will be deflected to the left and right of the ship by this magnet. Unfortunately, some of the hydrogen is neutral and won't be deflected. You solve this by placing a thin sheet of material in front of the septum magnet. In passing through this sheet, the neutral particles become ionized and thus deflectable by the magnet, which is the answer to your question.
The septum magnet is far in front of the ship. The ship's occupants are not in the field. | {
"domain": "physics.stackexchange",
"id": 86391,
"tags": "electromagnetism, electromagnetic-radiation, space"
} |
How do physicists know when it is appropriate to use $\mathrm dx$ as if it is a number? | Question: I'm trying to teach myself calculus of variations when I came across a worked example about the shortest distance between two points in a plane. This is a question about the mathematics but I don't think it belongs in Mathematics stack exchange because it's about techniques utilized by physicists.
The worked example starts off by defining a distance function like this:
$$\mathrm ds = \sqrt{\mathrm dx^2 +\mathrm dy^2}$$
So here $\mathrm ds, \mathrm dx,$ and $\mathrm dy$ are being treated as if they are real (as in actual) numbers that have some real value, when they are, at best, infinitesimals. The way I understand them is that they either define an operation on a function, such as $\frac{\mathrm d}{\mathrm dx}$, or the derivative of a function that's been acted on by the differential operator, e.g. $\frac{\mathrm dy}{\mathrm dx}$. They're just notation for operations.
$\frac{\mathrm dy}{\mathrm dx}$ is not a ratio of two numbers. They're not numbers as far as I can understand, they represent operations.
The worked example then pulls out a $\mathrm dx$ from under the square root, as if it is a number, to get this
$$\mathrm ds = \sqrt{1 +\left(\frac{\mathrm dy}{\mathrm dx}\right)^2} \mathrm dx$$
Then it does this
$$\int \mathrm ds = \int \sqrt{1 +\left(\frac{\mathrm dy}{\mathrm dx}\right)^2} \mathrm dx$$
it's gone back to treating $\mathrm dx$ and $\frac{\mathrm dy}{\mathrm dx}$ as operators or functions resulting from operations respectively.
My question is, how do you justify when to switch between treating it as a number and an operation? I find this very strange but I see it often in physics literature. Are there circumstances where this doesn't work? Please direct me to any source where I can learn more about this.
Answer: You are a victim of a quite frequent way of teaching calculus which continues to show embarrassment with the differentials, even if nowadays we have all the conceptual tools to deal with them in a safe way.
Forget about operators (which are not the natural way of looking at things in the present context) and infinitesimals (which mean too many things, some of them obsolete and inconsistent).
Let me try to show a consistent way of thinking and using differentials.
I assume that we have the definition of derivative of a real function of one real variable. Let's indicate the derivative of a function $f$ in a point $x_0$ as
$f^{\prime}(x_0)$. If the function is smooth enough (for our purposes it is sufficient that its first derivative is a continuous function), the first derivative at $x_0$ tells us how the function behaves locally in a neighborhood of $x_0$:
$$
f(x) = f(x_0) + f^{\prime}(x_0) (x-x_0) + O((x-x_0)^2)
$$
from which we can say that the best linear approximation to the variation of $f$ around $x_0$ is represented by $f^{\prime}(x_0) (x-x_0)$.
We call such a best linear approximation of the variation of $f$ around $x_0$ the differential of $f$ at $x_0$ and we denote it by $df_{x_0}$. It is clear from its explicit expression that it is a (in general non-linear) function of $x_0$ and a (linear) function of $x$. It is also clear that such linear function of $x$ is defined for any $x$, but only in the neighborhood of $x_0$ we can write
$$
\Delta f = f(x)-f(x_0) \simeq df_{x_0}(x) = f^{\prime}(x_0) (x-x_0).
$$
At this point we can observe that our definitions and notations allow us to write unambiguously
$$
\Delta x = x-x_0 = dx_{x_0} = 1 \cdot (x-x_0)
$$
and therefore we can write
$$
\frac{\Delta f}{\Delta x} \simeq \frac{df_{x_0}}{d x_{x_0}} = f^{\prime}(x_0),
$$
where we know that the symbol $\simeq$ here means equality apart from corrections which vanish at least linearly as $(x-x_0)\rightarrow 0$.
You see that this way of dealing with differentials is fully consistent, differentials are real valued functions, and no strange infinitesimal quantity is around.
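This view is also easy to check numerically: the differential is a plain real number that approximates $\Delta f$, with an error vanishing quadratically. A sketch with the illustrative choice $f(x)=x^2$, $x_0=1$:

```python
# df_{x0}(x) = f'(x0) * (x - x0) approximates Delta f = f(x) - f(x0);
# for f(x) = x**2 the error is h**2 (up to floating-point rounding).
def f(x):
    return x * x

x0, fprime_x0 = 1.0, 2.0        # f'(1) = 2 for f(x) = x^2
for h in (1e-1, 1e-2, 1e-3):
    delta_f = f(x0 + h) - f(x0)
    df = fprime_x0 * h          # the differential: an ordinary real number
    err = abs(delta_f - df)
    print(h, err)
```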
Generalization to differentials of more than one variable or to vector-valued functions is of course possible and is a trivial extension of the previous treatment. For example, the length of the line element of a curve, $ds$, which you wrote in your question, can be written as
$$
ds = \sqrt{dx^2+dy^2}=\sqrt{x^{\prime 2}+y^{\prime 2}} d\tau
$$
if $\left(x(\tau),y(\tau)\right)$ is a parametrization of the curve. Each quantity appearing with a $d$ here is a differential and, at the end of the day, a real number.
I agree that, even if everything above is nothing but math, it is the community of physicists which seems to be more uneasy with the manipulations of differentials. | {
"domain": "physics.stackexchange",
"id": 63688,
"tags": "classical-mechanics, variational-principle, calculus, variational-calculus"
} |
Is time continuous or discrete? | Question: I was coding a physics simulation, and noticed that I was using discrete time. That is, there was an update mechanism advancing the simulation by a fixed amount of time repeatedly, emulating a changing system.
I thought that was interesting, and now believe the real world must behave just like my program does. Is it actually advancing forward in tiny but discrete time intervals?
Answer: As we cannot resolve arbitrarily small time intervals, what is "really" the case cannot be decided.
But in classical and quantum mechanics (i.e., in most of physics), time is treated as continuous.
Physics would become very awkward if expressed in terms of a discrete time:
The discrete case is essentially intractable, since analysis (the tool created by Newton, in a sense the father of modern physics) can no longer be applied.
Edit: If time appears discrete (or continuous) at some level, it could still be continuous (or discrete) at higher resolution. This is due to general reasons that have nothing to do with time per se. I explain it by analogy: For example, line spectra look discrete, but upon higher resolution one sees that they have a line width with a physical meaning.
Thus one cannot definitely resolve the question with finitely many observations of finite accuracy, no matter how contrived the experiment. | {
"domain": "physics.stackexchange",
"id": 89630,
"tags": "time, universe, simulations, discrete"
} |
Wavelength as an observable in quantum mechanics? | Question: Recently I was discussing a problem with one of my students in which she found that two states of the particle in a box were orthogonal and was then asked to give an example of an observable that would make these two states perfectly distinguishable. She thought of the wavelength. This took me by surprise, since I don't think a trained physicist would have ever come up with this answer, and yet it was hard for me to specify anything wrong with it.
The answer I came up with at the time was sort of a "meta" answer. I told her that usually when we talk about observables in quantum mechanics, we have in mind classical quantities like position, energy, momentum, or angular momentum, which can then be taken over into the microscopic context. Classically, an electron doesn't have a wavelength, so wavelength isn't this type of quantity.
I'm also wondering whether there is some purely mathematical answer. We want an observable to be representable by a linear operator that is hermitian (or maybe just normal). The wavelength operator would sort of be the inverse of the momentum operator, but it would be signed, whereas a sign isn't something we normally associate with a wavelength. In a finite-dimensional space, the inverse of a hermitian matrix is also hermitian. It's not clear to me whether there are new issues that arise in the infinite-dimensional case, or whether it matters that there is some kind of singular behavior as we approach zero momentum.
Is there any clear physical or mathematical justification for excluding wavelength from the full rights and privileges of being a quantum-mechanical observable?
Answer: Your student is correct, and there's no problem with the "wavelength" observable. The wavelength of a state $|p\rangle$ of definite momentum $p$ is just $h/p$. So we can define the wavelength operator by
$$\hat{\lambda} |p \rangle = \frac{h}{p} |p \rangle.$$
Mathematically, this is just as (il)legitimate as the momentum operator. In other words, it can't be the case that mathematical formalities prevent us from introducing it in quantum mechanics courses, because we already do plenty of things that are just as mathematically "wrong".
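A finite-dimensional caricature makes the point concrete (the momentum eigenvalues below are arbitrary assumptions): in the momentum eigenbasis both operators are diagonal, so the wavelength operator is just as Hermitian as the momentum operator, and the two commute.

```python
import numpy as np

h = 6.626e-34                                     # Planck constant, J s
p_vals = np.array([1.0e-24, 2.0e-24, -3.0e-24])   # assumed momentum eigenvalues, kg m/s
P = np.diag(p_vals)                               # momentum operator (diagonal)
Lam = np.diag(h / p_vals)                         # "wavelength" operator, h/p

assert np.allclose(P, P.conj().T)                 # Hermitian
assert np.allclose(Lam, Lam.conj().T)             # Hermitian
assert np.allclose(P @ Lam, Lam @ P)              # they commute
print(np.diag(Lam))                               # signed wavelengths
```

Note that the negative momentum eigenvalue yields a signed wavelength, matching the question's observation.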
The physical reason we don't care much about it is just what you said: our classical theories are built around momentum, not wavelength, so upon quantization it's the momentum that shows up everywhere. It's the momentum that is squared in the kinetic energy, and which is affected by force, and so on. | {
"domain": "physics.stackexchange",
"id": 67764,
"tags": "quantum-mechanics, wavefunction, observables"
} |
Test if file exist | Question: bool isFileExist(const std::string& fileName)
{
return !!std::ifstream(fileName.c_str());
}
This is a function to check if a file exist. Are there non-obvious problems?
Answer: You're not testing whether the file exists. You're actually testing whether you are able to open it for reading. On POSIX (and POSIX-like) systems, use stat(). For a cross-platform solution, see this Stack Overflow question. | {
"domain": "codereview.stackexchange",
"id": 21048,
"tags": "c++, file"
} |
Z-function and the minimum string period | Question: Let $s$ be a string of length $n$. One of the classical solutions to the problem of finding the smallest period $p$ of $s$ (that is, the smallest $p$ such that $s$ can be obtained as a concatenation of several copies of $p$) uses the so-called $Z$-function. Here $Z=Z[0..n-1]$ with $Z[0] = 0$, and $Z[i]$ is the length of the longest substring starting at position $i$ which coincides with some prefix of $s$. Given that, it is easy to solve the problem of the smallest period: we iterate through $i$, and if at some point we find $i$ such that $i + Z[i] = n$ and $n$ modulo $i$ is $0$, then $s[0..i-1]$ is our answer.
Now back to the question. Suppose $s$, as before, is a prefix (of length no more than, say, $10^6$) of an infinite string $ppppppppp...$ obtained by concatenating infinitely many copies of $p$. The goal is to recover the smallest possible $p$. Can we adapt the algorithm above to solve the problem?
Example:
$s = abcabca$, the answer is $p=abc$.
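The classical algorithm, adapted with the relaxed stopping condition $i + Z[i] \ge n$ (which handles the prefix-of-infinite-repetition case, where no divisibility check is needed), can be sketched as follows — a hedged sketch, not code from the original post:

```python
def z_function(s):
    # Z[i] = length of the longest substring starting at i that
    # matches a prefix of s (Z[0] is left as 0 by convention).
    n = len(s)
    z = [0] * n
    l = r = 0
    for i in range(1, n):
        if i < r:
            z[i] = min(r - i, z[i - l])
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1
        if i + z[i] > r:
            l, r = i, i + z[i]
    return z

def smallest_period_prefix(s):
    # Smallest p such that s is a prefix of ppp...:
    # the first i with i + Z[i] >= n.
    n = len(s)
    z = z_function(s)
    for i in range(1, n):
        if i + z[i] >= n:
            return s[:i]
    return s

print(smallest_period_prefix("abcabca"))  # abc
```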
Answer: If at some point we find $i$ such that $i + Z[i] \ge n$, then the smallest possible $p$ is $s[0..i-1]$. | {
"domain": "cs.stackexchange",
"id": 18591,
"tags": "algorithms, strings, substrings"
} |
Aren't the minimal gauge coupling and the related calculations in Prof. Ezawa's book on the quantum Hall effect incorrect? | Question: He is CORRECT. I use $\mathbf{B}=\left(0,0,B_{\perp}\right)$ and he uses $\mathbf{B}=\left(0,0,-B_{\perp}\right)$, with $B_{\perp}>0$.
Nov.28.2012
Basically I got mad with conventions.
1.Here is the link of the book (second edition):
http://books.google.fr/books/about/Quantum_Hall_Effects.html?id=p3JpcdbqBPoC
Here is another link for one of his review articles:
http://iopscience.iop.org/0034-4885/72/8/086502
2.I am not happy with the negative sign of the commutator $\left[X,Y\right]=-il_{B}^{2}$. Here is my calculation:
$\left[X,Y\right]=\left(\left[x,p_{x}\right]+\left[-p_{y},y\right]\right)/eB+\left[-P_{y},P_{x}\right]/e^{2}B^{2}=il_{B}^{2}$
3.In my calculation, I used some conventions different from Prof.Ezawa's book and article.
Here is my convention:
$X:=x-P_{y}/eB\;;\; Y:=y+P_{x}/eB$
However, Prof.Ezawa use this convention:
$X:=x+P_{y}/eB\;;\; Y:=y-P_{x}/eB$
To be prepared for being driven mad, please compare them carefully.
4.I think he must be wrong somewhere, for example, in his book (2nd.ed), (10.2.5) and in his article (2.15)
$\left[P_{x},P_{y}\right]=i\hbar^{2}/l_{B}^{2}$
but you know we use the minimal coupling $\mathbf{p}\rightarrow\mathbf{p}-\frac{q}{c}\mathbf{A}$ in this problem, with $q=-e$, $e>0$, i.e. $\mathbf{p}+\frac{e}{c}\mathbf{A}$ for electrons, as Prof. Ezawa suggested in his book (10.2.3) and his article (2.12). Under this convention, I calculated $\left[P_{x},P_{y}\right]$ as follows:
$\left[P_{x},P_{y}\right]=-i\hbar e\left(\left[\partial_{x},A_{y}\right]+\left[A_{x},\partial_{y}\right]\right)/c=-i\hbar eB/c=-i\hbar^{2}/l_{B}^{2}$
Oh my god here is a negative sign.
5.To summarize, I think if we take Prof.Ezawa's convention and apply his result for $\left[P_{x},P_{y}\right]=i\hbar^{2}/l_{B}^{2}$ during the calculation of $\left[X,Y\right]$, we will get his result. But his result for $\left[P_{x},P_{y}\right]=i\hbar^{2}/l_{B}^{2}$ seems not correct.
6.Someone save my day...
Answer: I have done this calculation some time ago. My convention was:
$$ X = x - \frac{P_y}{m \omega_c}\quad Y = y + \frac{P_x}{m \omega_c} $$
and
$$P_i = p_i +\frac{e}{c} A_i$$
And the magnetic field is $B = \nabla \wedge A = B_z \hat{z}$. Note that in particular: $X = x-\frac{1}{m\omega_c}(p_i + \frac{e}{c}A_i)$. My notes say that this gives:
$$ [X, Y] = i l_B^2\qquad \text{and}\qquad [P_x,P_y] = -\frac{i}{l_B^2}$$
If you now take his convention, you essentially flip the magnetic field, $\vec{B} \rightarrow -\vec{B}$. This replaces:
$$ X = x + \frac{P_y}{m \omega_c}\quad Y = y - \frac{P_x}{m \omega_c} $$
but you still have $P_i = p_i +\frac{e}{c} A_i$ -- that stays the same. Therefore we have: $X = x+\frac{1}{m\omega_c}(p_i + \frac{e}{c}A_i)$ (!!!! compare this to the other convention), and to compute the commutator we get:
$$\begin{align}[X,Y] &= \left[x+\frac{1}{m\omega_c}(p_y + \frac{e}{c}A_y),y-\frac{1}{m\omega_c}(p_x + \frac{e}{c}A_x)\right] \\
&= (-[x,p_x] + [p_y,y])/m\omega_c + \frac{e}{c (m\omega_c)^2}(-[p_y,A_x]-[A_y,p_x])
\end{align}$$
Now, $[x,p_x] = i$, as always. The other commutator depends on the orientation of the magnetic field:
$$-[p_y,A_x]-[A_y,p_x] = i ([\nabla_y, A_x] - [\nabla_x, A_y]) = -i(\nabla\wedge A)_z = iB_z$$ (since in the flipped convention $(\nabla\wedge A)_z = -B_z$), and so you get
$$[X,Y] = -\frac{2i}{m\omega_c} + \frac{e}{c (m\omega_c)^2} iB_z = -i l_B^2$$
Long story short: your derivation of $[X,Y]$ does not apply to his conventions.
Final note: If you switch conventions, you essentially replace $B_z \rightarrow -B_z$, so the magnetic length and cyclotron frequency also switch sign, $l_B^2\rightarrow -l_B^2$ and $\omega_c \rightarrow - \omega_c$. So you see that both commutators (involving $X$ and $Y$ and $P_x$ and $P_y$) pick up a minus sign, because they both involve $l_B^2$. | {
"domain": "physics.stackexchange",
"id": 5456,
"tags": "quantum-mechanics, quantum-hall-effect"
} |
Code Vectorization of gsub in R | Question: How can I vectorize this code in R?
data <- data.frame(A = rep(5, 5), B = rep(0, 5))
data$abstract <- c("no abstract available", "A", "A", "B", "no abstract available")
for (row in 1:nrow(data)) {
  data[row, "abstract"] <- gsub("no abstract available", " ", data[row, "abstract"])
}
Answer: You have plenty of alternatives for this problem:
Using sapply
data$abstract <- sapply(data$abstract,
function(x){gsub(pattern = "no abstract available",
replacement = " ", x)})
Using mapply
data$abstract <- mapply(gsub, pattern = "no abstract available",
replacement = " ", data$abstract)
Using the stringr package
library(stringr)
data$abstract <- str_replace(data$abstract, "no abstract available",
" ")
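It is also worth noting that base R's `gsub` is itself vectorized over its `x` argument (see `?gsub`), so no loop or `*apply` wrapper is needed at all; a single call does the same thing:

```r
data$abstract <- gsub("no abstract available", " ", data$abstract)
```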
Also, check this question on StackOverflow for more information, like solutions with match and the qdap package.
"domain": "datascience.stackexchange",
"id": 578,
"tags": "r, data-cleaning"
} |
Measuring polarization - problem with understanding | Question: Let's assume that we have 2 polarizing filters, the first with vertical (1) orientation and the second with horizontal (0). I want to measure the probability that a photon passes through those 2 filters.
I have: ${|\psi\rangle = \cos\theta|1\rangle + \sin\theta|0\rangle}$
The probability that the photon passes the first filter is $\cos^2\theta$, and after that it's polarized vertically, so there is probability 0 that the photon passes the second filter, which is polarized horizontally.
That's easy and understandable for me.
But the problem is when we add a third filter between the previous two. This filter is oriented at 45 degrees. It blocks and passes the photon with the same probability 1/2, and it's said that after passing this filter, the new polarization of the photon is: ${\frac{1}{\sqrt2}|1\rangle + \frac{1}{\sqrt2}|0\rangle}$. So I have probability 1/2 of passing the first filter and probability 1/2 that the photon will be polarized horizontally before the last filter. The total probability is 1/4.
But how can we say that the photon passes through the second filter if we know that after passing the first one it's polarized vertically? For me there is probability 0 of passing the second filter if that filter isn't polarized vertically.
Answer:
But how can we say that the photon passes through the second filter if we know that after passing the first one it's polarized vertically? For me there is probability 0 of passing the second filter if that filter isn't polarized vertically.
Polarizing filters don't just discard photons, they change the polarization of photons. This is simply true, by experiment.
If a photon just went through a vertical filter, the photon is now vertically polarized regardless of what the photon's polarization was beforehand. The beforehand polarization determines the probability of making it through the filter, but otherwise has no effect on the afterwards polarization. If that goes against your intuition... well, your intuition is wrong and you need to learn to ignore it in this case.
So what happens in the experiment is...
Initial setup with vertical photon heading rightward towards vertical then diagonal then horizontal polarizers.
||| /// ___
photon|V| ----> ||| /// ___
||| /// ___
Photon reaches first polarizer. 100% chance of making it through.
||| /// ___
--|V|->|| /// ___
||| /// ___
Photon passed through.
||| /// ___
||| --|V|-> /// ___
||| /// ___
Photon reaches diagonal polarizer. 50% chance of transmission.
||| /// ___
||| --|V|->// ___
||| /// ___
Phew, we won the coin flip! Photon made it through, but now it's diagonally polarized.
||| /// ___
||| /// --/D/-> ___
||| /// ___
Photon reaches horizontal polarizer. 50% chance of transmission.
||| /// ___
||| /// --/D/->__
||| /// ___
We won the coin flip again! Only had a 25% chance of making it this far. Photon exits rightward, but horizontally polarized.
||| /// ___
||| /// ___ --_H_->
||| /// ___
And if you think that's weird, read up on the quantum Zeno effect. As you use more and more polarizers, with finer and finer changes in angle going from vertical to horizontal, you end up with a thing that rotates vertical photons into horizontal photons with negligible loss!
$$\lim_{n \rightarrow \infty} \cos^{2n} \frac{90^\circ}{n} = 100\%$$
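A quick numerical illustration (Python, illustrative values only) of both the three-filter 1/4 probability and this small-angle limit:

```python
import math

# Vertical photon through vertical, 45-degree, horizontal filters: 1 * 1/2 * 1/2.
p_three_filters = 1.0 * 0.5 * 0.5  # 0.25

# n filters stepping from vertical to horizontal in equal increments of
# 90/n degrees; each step transmits with probability cos^2(90/n degrees).
for n in (2, 10, 100, 1000):
    p = math.cos(math.radians(90 / n)) ** (2 * n)
    print(n, p)  # approaches 1 as n grows
```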
In the small-angle limit, the turning effect beats the filtering effect! So maybe calling them "filters" wasn't ideal. | {
"domain": "physics.stackexchange",
"id": 32384,
"tags": "photons, quantum-information, polarization"
} |
Why there are three reactions at fixed beam in F.B.D? | Question: It is fixed, so I thought no reactions would be present; then how are there three reactions at the fixed beam in the F.B.D.?
In the F.B.D., we draw two force reactions and a moment reaction at the fixed beam:
Answer: Firstly, as referred to in the comments, do define all TLAs before they are used. SE doesn't exist only to provide insight to the acronymistically hip.
The three forces are:
Normal force (if its on the bench and not massless then there has to be a normal force)
Pivoting force (the weight being offset from where it is resting will act to rotate it)
Fudge force (as the object is fixed it cannot be rotating so there must exist a force that counteracts the rotation and keeps it fixed in place). | {
"domain": "physics.stackexchange",
"id": 59426,
"tags": "homework-and-exercises, newtonian-mechanics, free-body-diagram"
} |
Wavefront meaning | Question: A wavefront is usually defined as the locus of points in which an electromagnetic wave has the same phase.
But what do we mean by "phase"? Is it a phase in time, or in space, or both? According to Wikipedia, it seems to me that it is only a phase in time, but if the wave has a form like $\sin(2\pi f t - k x + a)$, how is it defined?
Answer: Wavefronts are defined during snapshots in time.
Take a three dimensional “photograph” of a wave at one instant of time.
Join points on the wave which are in phase with one another, eg “crests”, and you then have a wavefront.
Take another “photograph” of the wave a little later in time.
Join the points which have the same phase as that in the previous “photograph” to get the new position of the wavefront.
The direction of motion of the wavefront is at right angles to the wavefront. | {
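To make the snapshot picture concrete, here is a small sketch (Python, with made-up numbers) that solves the constant-phase condition $2\pi f t - kx + a = \text{const}$ for $x$ at two snapshot times; the locus of fixed phase moves at the phase speed $\omega/k$:

```python
import math

f, k, a = 2.0, 4.0, 0.0  # illustrative frequency, wavenumber, phase offset
omega = 2 * math.pi * f

def crest_position(t, phase_const=0.0):
    # Solve omega*t - k*x + a = phase_const for x: where one fixed-phase
    # point (a "crest" for phase_const = 0) sits in a snapshot at time t.
    return (omega * t + a - phase_const) / k

x1 = crest_position(0.0)
x2 = crest_position(0.1)
print((x2 - x1) / 0.1)  # equals the phase speed omega/k
```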
"domain": "physics.stackexchange",
"id": 59809,
"tags": "waves, electromagnetic-radiation"
} |
Delta function in Green's function | Question: I am working through Altland Simons 2nd edition. On page 225 we find:
$$G_p = [1 - G_{0, p} \; \Sigma_p]^{\, -1} \, G_{0, p} = [G_{0, p}^{\, -1} - \Sigma_{p}]^{-1}$$
Finally, using the fact that $[G_{0, p}^{-1}\, ]^{ab} \; = (p^2 + r) \, \delta^{ab} $, we arrive at the formal solution
$$G_p^{ab} = \left[(p^2 + r - \Sigma_p)^{-1} \, \right]^{ab} $$
My question: is $\delta^{ab}$ the Kronecker delta function? If so, should it be $$G_p^{ab} = \left[((p^2 + r) \, \delta^{ab} - \Sigma_p)^{-1} \, \right]^{ab} $$ instead?
In case it helps, the $\delta^{ab}\; $ comes up earlier in the text (page 223):
$$G_0 \equiv \langle \phi^a (x) \phi^b(y) \rangle_0 \propto \delta^{ab}$$
For reference, the action is given in equation (5.37)
$$ S[\phi] \equiv \int d^d x \left( \, \frac{1}{2} \partial \phi \cdot \partial \phi + \frac{r}{2} \phi \cdot \phi + \frac{g}{4N} (\phi \cdot \phi)^2 \right), $$
where $\phi$ is the N-component vector field $\phi = \{\phi^a\}, a = 1, \ldots, N$.
Answer: Yes, it looks like $\delta^{ab}$ is effectively the Kronecker delta. Your expression of $G^{ab}_p$ cannot be correct because it is of the form $[(A\delta^{ab}+B)^{-1}]^{ab}$. In fact the expression given for $G^{ab}_p$ is just $((p^2+r)\delta^{ab}-\Sigma_p \delta^{ab})^{-1}$. | {
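A toy numeric check of this point (Python, made-up numbers, a 2-component field): when the self-energy is proportional to the identity, $\Sigma_p^{ab} = \sigma\,\delta^{ab}$, the matrix $[G_{0,p}^{-1} - \Sigma_p]^{-1}$ is itself diagonal with entries $1/(p^2+r-\sigma)$, so writing $(p^2+r-\Sigma_p)^{-1}$ without an extra delta is consistent:

```python
def inv2(m):
    # Inverse of a 2x2 matrix [[a, b], [c, d]].
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

p2_plus_r = 5.0  # stand-in value for p^2 + r
sigma = 2.0      # stand-in scalar self-energy

G0_inv = [[p2_plus_r, 0.0], [0.0, p2_plus_r]]
Sigma = [[sigma, 0.0], [0.0, sigma]]

diff = [[G0_inv[i][j] - Sigma[i][j] for j in range(2)] for i in range(2)]
G = inv2(diff)
print(G)  # diagonal entries 1/(5 - 2) = 1/3; off-diagonal entries vanish
```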
"domain": "physics.stackexchange",
"id": 82860,
"tags": "condensed-matter, greens-functions"
} |
Maximising velocity at B when rolling down the curve between A and B | Question: I would like to build a curve between two points A and B. A ball would roll down the curve in a gravitational uniform field (i.e., I'm actually going to build the thing here on Earth).
My question is: how can I ensure that my curve maximises the velocity vector v at point B?
I would like to choose v's orientation in advance (for example at 45º), and maximise |v|. There may be an obvious connection to the Brachistochrone, but I don't see it right now. Any ideas appreciated.
Answer: The speed of a ball starting from $A$ at rest and going to $B$ without friction is fixed by the difference in height between $A$ and $B$. In particular, if the ball has mass $m$, and we take $A$ to be at zero height, while $B$ is at height $h$, then by conservation of energy:
$$0+mgh=\frac{1}{2}mv^2+0 \implies v=\sqrt{2gh}$$
If we take into account the fact that the ball has a size and is rolling, as long as friction is negligible (that is, the ball rolls without slipping), then the result is numerically slightly different, but still independent of the path taken (again, by conservation of energy).
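As a numerical illustration of both cases (Python; the drop height is made up, and a uniform solid sphere with $I = \frac{2}{5}mr^2$ is assumed, a detail not spelled out in the answer):

```python
import math

g = 9.81  # m/s^2
h = 2.0   # drop height in metres (illustrative)

# Frictionless sliding: v = sqrt(2 g h), independent of the path shape.
v_slide = math.sqrt(2 * g * h)

# Uniform solid sphere rolling without slipping: rotational energy
# (1/2) I w^2 with I = (2/5) m r^2 and v = w r gives
# v = sqrt(2 g h / (1 + 2/5)) = sqrt(10 g h / 7), also path-independent.
v_roll = math.sqrt(10 * g * h / 7)

print(round(v_slide, 3), round(v_roll, 3))  # rolling is slower, same h
```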
Notice that the velocity is parallel to the path, so the direction of the final velocity is determined only by the shape of the final part of the path.
If you are trying to maximise velocity, then you should place $A$ and $B$ as far away as possible in height. At fixed $A$ and $B$, if we ignore friction the final speed, is determined as explained above. Therefore what you should be doing in practice is trying to minimise friction - but that's an engineering issue on which I can offer little help. | {
"domain": "physics.stackexchange",
"id": 44936,
"tags": "newtonian-mechanics, lagrangian-formalism, optimization, brachistochrone-problem"
} |
catkin install space include directories for legacy code | Question:
From the documentation, the catkin implementation and previous answers, I conclude that it is intentional that in a catkin package you cannot export any subdirectories of <INSTALL_SPACE>/include in the install space. Any such relative directories (passed to catkin_package with the INCLUDE_DIRS argument) are ignored.
This works fine if you follow the convention to install files into <INSTALL_SPACE>/include/package/ and always include headers with #include <package/header.h>.
However, I have a bunch of legacy code for which I added a bunch of interdependent catkin package definitions that build those libraries. It works fine with the devel space. The legacy code is broken up into different modules, but all modules include headers directly with foo.h, not <MODULE>/foo.h or similar. It relies on the build system to set up include paths correctly according to module dependencies. Working in the devel space I achieve this by exporting the corresponding subdirectory containing the header files as INCLUDE_DIRS. In the install space, I would still want to install the header files into a subdirectory <INSTALL_SPACE>/include/<package>, however I cannot export that subdirectory in catkin_package.
The solutions I see are:
Place all headers in the toplevel <INSTALL_SPACE>/include/. This is undesirable due to possible clashes with other packages.
Add a cmake extras file to each package that amends the package_INCLUDE_DIRS variable (not sure if that works) or even directly calls include_directories(...) (which violates the apparent cmake convention that finding a package does not directly manipulate build configuration like include_directories). This seems a bit hacky, but might get the desired behaviour.
Install all header files into a common subdirectory <INSTALL_SPACE>/include/foo-project and set that up as an include directory in a common cmake macro that is called by all CMakeLists.txt (I already have such a macro anyway). Again, not that elegant and not exactly what I want (all packages having their own include sub-folder the install space).
Any suggestions on how to do this properly?
EDIT: In response to @dirk-thomas's answer:
The INCLUDE in the original question was a typo. I meant INCLUDE_DIRS. Just fixed it.
And it is possible to pass any custom value there, e.g. catkin_package(INCLUDE_DIRS foo) if you want <CMAKE_INSTALL_PREFIX>/foo to end up in the include dirs list. There is no need to use a custom CMake extras file for this.
This is exactly what I want to do, but it does not seem to work for the install space. It seems that catkin is removing all relative paths that would add custom subfolders <CMAKE_INSTALL_PREFIX>/include/foo and adds just the default <CMAKE_INSTALL_PREFIX>/include. Absolute paths are not changed / filtered. The responsible code section seems to be here, which is exactly the else branch you linked to also, but as far as I can see, it does not actually use the relative_dir in that else branch (maybe that is a bug in catkin?).
2. As you already stated, a CMake config file should not directly manipulate the build configuration.
This seems to do the right thing at least in my case, see the answer I added.
3. Setting the package-specific subfolder as an include directory defeats the separation (same as 2.). Header files of multiple packages with the same relative paths still collide.
In general yes. In my case I know that the header files of different modules do actually not clash, so this is ok. I still don't like it much though.
Thanks!
Originally posted by demmeln on ROS Answers with karma: 4306 on 2016-04-25
Post score: 0
Original comments
Comment by ahendrix on 2016-04-25:
Setting up package_INCLUDE_DIRS is the cmake way to do this. You should be able to do it in catkin but I'm not quite sure if you can do it with the existing macros or if you have to use cmake_extras
Answer:
Thanks @ahendrix for your comment. For the record: appending to package_INCLUDE_DIRS in the cmake extras works as expected (it also ends up in catkin_INCLUDE_DIRS). The solution I went with for now is installing all headers into one common sub-directory as in my option 3 above. I append this to the INCLUDE_DIRS in the cmake extras of my one "common" package, that all other packages depend on. This way I don't have to add cmake extras to every single package, but still stick with the cmake convention of having all include dirs on project_INCLUDE_DIRS (through the build-export-depends on the "common" package). Any additional comments and insights are still welcome.
Edit: For future reference, I added a file common_pkg-extras.cmake.installspace.in in the folder cmake of my common_pkg with content:
if(_COMMON_LIB_EXTRAS_INCLUDED_)
return()
endif()
set(_COMMON_LIB_EXTRAS_INCLUDED_ TRUE)
list(APPEND common_lib_INCLUDE_DIRS "${common_lib_DIR}/../../../@CATKIN_GLOBAL_INCLUDE_DESTINATION@/MyProjectSubfolder")
Edit2: As @dirk-thomas pointed out, the additions in cmake-extras will of course not be reflected in the generated .pc files. For me, this is not an issue currently.
Originally posted by demmeln with karma: 4306 on 2016-04-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 24455,
"tags": "ros, catkin, include"
} |
A polynomial reduction from any NP-complete problem to bounded PCP | Question: Textbooks everywhere assume that the Bounded Post Correspondence Problem is NP-complete (no more than $N$ indices allowed, with repetitions). However, nowhere is one shown a simple (as in, something that an undergrad can understand) polynomial-time reduction from another NP-complete problem.
Every reduction I can think of, however, is exponential (in $N$ or in the size of the series) in run-time. Perhaps it can be shown that it is reducible to SAT?
Answer: As is often the case with NP-reductions, it makes sense to look for similar problems. In particular, it is hard to encode global conditions such has "have seen some nodes" into PCP (with polynomially many tiles) which contraindicates graph problems, packing problems would require us to encode unary numbers in PCP (creating exponentially large instance), and so on. Therefore, a string problem with only local restrictions can be expected to work best.
Consider the decision version of the shortest common supersequence problem:
Given two strings $a,b \in \Sigma^+$ with $|a|=n$ and $|b|=m$ and $k \in \mathbb{N}$, decide whether there is a string $c \in \Sigma^+$ with $|c| \leq k$ such that $a$ and $b$ are subsequences of $c$.
The idea is to let PCP build supersequences of $a$ and $b$ from left to right, encoding in the tiles' overlaps at which position we are in $a$ and $b$, respectively. It will use one tile per symbol in $c$, so $k$ corresponds to the BPCP's bound: if we can solve this PCP with $\leq k$ tiles, you can read off the common supersequence of equal length, and vice versa.
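As a side note, membership in NP is the easy half here: a candidate index sequence is verified in polynomial time simply by concatenating tops and bottoms and comparing (with the extra length bound for BPCP). A minimal checker (Python; the small instance and its solution are a standard textbook example, not taken from this answer):

```python
def is_pcp_solution(tiles, indices, bound=None):
    # tiles: list of (top, bottom) string pairs; indices: proposed sequence.
    # For bounded PCP additionally require len(indices) <= bound.
    if not indices or (bound is not None and len(indices) > bound):
        return False
    top = "".join(tiles[i][0] for i in indices)
    bottom = "".join(tiles[i][1] for i in indices)
    return top == bottom

# Classic small instance; [2, 1, 2, 0] spells "bbaabbbaa" on both rows.
tiles = [("a", "baa"), ("ab", "aa"), ("bba", "bb")]
print(is_pcp_solution(tiles, [2, 1, 2, 0], bound=4))  # True
```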
The construction of the tiles is a bit tedious, but quite clear. Note that we will not create tiles that do not forward $a$ or $b$; such can never be part of a shortest common supersequence, so they are superfluous. They can easily be added without breaking the properties of the reduction.
The numbers in the overlaps are encoded in binary, but using symbols outside of $\Sigma$ and padding them to a common length $\log \max(m,n)$. Thus we ensure that the tiles are used as the graphics suggest (tetris), that is characters and index-encoding overlaps do not mix (PCP does not prevent this per se). We need:
Starting tiles: $c$ can start with $a_1$, $b_1$ or both if they are equal.
Intermediate tiles: $c$ can proceed with the next symbol in $a$, in $b$ or both if they are equal.
Terminating tiles: $c$ ends with the last symbol of $a$ (if the last one of $b$ has been seen already), similar for $b$, or with the last symbol of both.
These are the tile schematics. Note that the intermediate tiles have to be instantiated for all pairs $(i,j) \in [n]\times [m]$. As mentioned above, create the tiles without $*$ only if the respective characters in $a$ and $b$ match.
[figure: tile schematics]
The $*$ are symbolic for "don't care"; in the actual tiles, the other symbol will have to be copied there. Note that the number of tiles is in $\Theta(mn)$ and each tile has length $4\log \max(m,n) + 1$, so the constructed BPCP instance (over alphabet $\Sigma \cup \{0,1\}$ plus separation symbols) has polynomial size. Furthermore, the construction of every tile is clearly possible in polynomial time. Therefore, the proposed reduction is indeed a valid polynomial transformation which reduces the NP-complete shortest common supersequence problem to BPCP. | {
"domain": "cs.stackexchange",
"id": 370,
"tags": "complexity-theory, np-complete, reductions"
} |
Unattainable equilibrium | Question: I read a question on a textbook that asked:
For the reaction
$$ \ce{CaCO3(s) \rightleftharpoons CaO(s) + CO2(g)} \tag{1}$$
determine whether or not a mixture of $\ce{CaCO3(s)}$ and $\ce{CO2(g)}$
at a pressure greater than the value of $K_p$ can attain the above
equilibrium.
The answer it gives is no, because the partial pressure of $\ce{CO2}$ won't be able to decrease to $K_p$.
However, I'm confused about what happens to the mixture if equilibrium isn't attained. Will it go to completion? What if it is a mixture of $\ce{CaO}$ and $\ce{CO2}$ at a pressure less than $K_p$?
Answer: In chemical reaction problems, attaining equilibrium or completing a reaction means reaching a state at which the reaction ceases. Here it never started, so how can it cease? The system begins in a state of dynamic equilibrium (the posted problem is poorly worded) and the net amounts of the species will not change. The reaction can be regarded as having already gone to completion - in the reverse direction.
The high pressure of $\ce{CO2(g)} (> K_p)$ prevents net decomposition of $\ce{CaCO3(s)}$. The term "net" is important because there is a dynamic equilibrium: $\ce{CO2(g)}$ might not interfere with the forward reaction and some $\ce{CaCO3(s)}$ may decompose, but $\ce{CaO(s)}$ thus produced will react back to carbonate, the high pressure of the gas exhausting all $\ce{CaO(s)}$.
On nomenclature: thermodynamic changes ($\Delta G^\circ$, $\Delta H^\circ$ etc) refer to processes between equilibrium states, an initial equilibrium state and a final equilibrium state. This is why the field is called equilibrium thermodynamics. For instance, a starting state of unmixed reagents (in separate containers, say) might be regarded as an equilibrium system (under the imposed constraints it is stationary, will not change). That is analogous to the meaning of equilibrium here. No change in the amounts of the species (or in T, p, etc) will occur under the given conditions.
Another post with a similar question explains how to analyze such a problem mathematically in terms of sums of chemical potentials of the components. It can be used to demonstrate that dissociation of carbonate will increase the total free energy:
$$\begin{align} \qquad &\ce{CaCO3(s) &<=>& CaO(s) + &CO2(g)}\\ \qquad &\ce{A &<=>& B + &C} \\ \mathrm{(initial)} \qquad & n_A &\qquad& 0 \quad &n_C\\ \mathrm{(final)} \qquad & n_A -\alpha &\qquad& \alpha \quad &n_C+\alpha \\ \mathrm{(change)} \qquad & -\alpha &\qquad& \alpha \quad &+\alpha\end{align}$$
The free energy change is
$$\Delta G = - αμ^∘ _A + αμ^∘ _B + αμ^∘ _C + RT (n_C + α) \ln p_{fin} - RT n_C \ln p_{ini} \\ = α\Delta G^∘ + α RT \ln p_{fin} + RT n_C \ln \left(\frac{p_{fin}}{p_{ini}}\right) \\ = - αRT \ln K_p + α RT \ln p_{fin} + RT n_C \ln \left(\frac{p_{fin}}{p_{ini}}\right) \\ = α RT \ln \left(\frac{p_{fin}}{K_p}\right) + RT n_C \ln \left(\frac{p_{fin}}{p_{ini}}\right) $$
If carbonate decomposes, $\alpha>0$ and $p_{fin}> p_{ini}$, so that $\Delta G>0$.
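As a numerical sanity check of the sign (Python, with made-up values; fixed volume and temperature are assumed so that the pressure scales with the moles of gas, an assumption not spelled out above):

```python
import math

R, T = 8.314, 1000.0   # J/(mol K), K -- illustrative temperature
Kp = 1.0               # equilibrium constant (pressure in matching units)
p_ini = 2.0            # initial CO2 pressure, chosen > Kp
n_C, alpha = 1.0, 0.01 # initial moles of CO2, small extent of decomposition

# At fixed V and T the pressure grows in proportion to the moles of gas:
p_fin = p_ini * (n_C + alpha) / n_C

dG = (alpha * R * T * math.log(p_fin / Kp)
      + R * T * n_C * math.log(p_fin / p_ini))
print(dG > 0)  # True: any decomposition raises G, so none occurs
```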
"domain": "chemistry.stackexchange",
"id": 17932,
"tags": "equilibrium"
} |
Derivation of the AASHTO formula of interior girder moment | Question: The interior girder moment formula for one lane loaded for the AASHTO LRFD method is:
$$\begin{align}
mg^{SI}_{moment}&=\left(1.75+\frac{S}{3.6}\right)\left(\frac{1}{L}\right)^{0.35}\left(\frac{1}{N_c}\right)^{0.45} \\
&= \left(1.75+\frac{13}{3.6}\right)\left(\frac{1}{100}\right)^{0.35}\left(\frac{1}{3}\right)^{0.45} \\
&= 0.65\ \mathrm{lane/web}
\end{align}
$$
How is this formula derived? I have not been able to find the original research paper.
Answer: The live load distribution formulas in the AASHTO LRFD Bridge Design Specifications cannot be derived. As I understand it, they are based on calibration to extensive finite element modeling.
This is probably the NCHRP report you're looking for. | {
"domain": "engineering.stackexchange",
"id": 594,
"tags": "civil-engineering, bridges, aashto"
} |
Determine if a sentence is a pangram | Question: This definition is taken from Wikipedia:
A pangram (Greek: παν γράμμα, pan gramma, "every letter") or holoalphabetic sentence for a given alphabet is a sentence using every letter of the alphabet at least once. Pangrams have been used to display typefaces, test equipment, and develop skills in handwriting, calligraphy, and keyboarding.
I found a related code challenge here. It was in Python, so I tried to code this program in Java.
I took two steps:
Find if the String is a pangram
If a string is not a pangram, then find the missing letters
Please review the approach I have taken.
Pangram
import java.util.Set;
import java.util.TreeSet;
public class Pangram {
private static final int ASCII_VALUE_OF_SMALL_CASE_CHAR_A = 97;
private static final int ASCII_VALUE_OF_SMALL_CASE_CHAR_Z = 122;
private Set<Character> distinctCharsInInputStringSortedAlphabetically = new TreeSet<Character>();
public Pangram(final String inputString) {
addUniqueAlphabetsToSet(inputString);
}
public boolean isPangram() {
return distinctCharsInInputStringSortedAlphabetically.size() == 26;
}
private void addUniqueAlphabetsToSet(final String inputString) {
for (Character character : inputString.toLowerCase().toCharArray()) {
if ((int) character >= ASCII_VALUE_OF_SMALL_CASE_CHAR_A
&& (int) character <= ASCII_VALUE_OF_SMALL_CASE_CHAR_Z) {
distinctCharsInInputStringSortedAlphabetically.add(character);
}
}
}
public Set<Character> getMissingAlphabets() {
Set<Character> missingAlphabets = new TreeSet<Character>();
if (!isPangram()) {
char alphabet_a = 'a';
int asciiValue = (int) alphabet_a;
for (Character alphabetsInInput : distinctCharsInInputStringSortedAlphabetically) {
do {
if ((int) alphabetsInInput > asciiValue) {
missingAlphabets.add((char)asciiValue);
}
asciiValue++;
} while ((int) alphabetsInInput >= asciiValue);
}
if(asciiValue <=ASCII_VALUE_OF_SMALL_CASE_CHAR_Z){
do{
missingAlphabets.add((char)asciiValue);
asciiValue++;
}while(asciiValue <=ASCII_VALUE_OF_SMALL_CASE_CHAR_Z);
}
}
System.out.println("missingAlphabets" + missingAlphabets);
return missingAlphabets;
}
}
PangramTest
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;
import org.junit.Test;
public class PangramTest {
@Test
public void checkPangram_Test1(){
Pangram pangram = new Pangram("The quick brown fox jumps over a lazy dog.");
assertTrue(pangram.isPangram());
}
@Test
public void checkPangram_Test2(){
Pangram pangram = new Pangram("The quick red fox jumps over a lazy dog.");
assertFalse(pangram.isPangram());
}
@Test
public void checkPangram_WithReallyBigString(){
Pangram pangram = new Pangram("Forsaking monastic tradition, twelve jovial friars gave up their vocation for a questionable existence on the flying trapeze");
assertTrue(pangram.isPangram());
}
@Test
public void checkPangram_Test3(){
Pangram pangram = new Pangram("Crazy Fredericka bought many very exquisite opal jewels");
assertTrue(pangram.isPangram());
}
@Test
public void checkPangram_Test4(){
Pangram pangram = new Pangram("Honest Fredericka bought many very exquisite opal jewels");
assertFalse(pangram.isPangram());
}
@Test
public void forPangramStringShouldReturnEmptySet(){
Pangram pangram = new Pangram("The quick brown fox jumps over a lazy dog.");
assertTrue(pangram.getMissingAlphabets().isEmpty());
}
@Test
public void forNonPangramStringShouldReturnMissingAlphabets(){
Pangram pangram = new Pangram("The quick brown fox jumps over busy dog.");
Set<Character> actual = pangram.getMissingAlphabets();
Set <Character>expected = new TreeSet<Character>();
expected.add('a');
expected.add('l');
expected.add('z');
assertEquals(expected,actual);
}
@Test
public void forNonPangramStringShouldReturnMissingAlphabets_Test2(){
Pangram pangram = new Pangram(" b cd x rs ijk pno f vu");
Set<Character> actual = pangram.getMissingAlphabets();
Set <Character>expected = new HashSet<Character>();
expected.add('a');
expected.add('e');
expected.add('g');
expected.add('h');
expected.add('l');
expected.add('m');
expected.add('q');
expected.add('t');
expected.add('w');
expected.add('y');
expected.add('z');
assertEquals(expected,actual);
}
}
Answer: The public interface of the class is nice and small. The code is laid out well and follows Java coding conventions. The tests have reasonable coverage.
There is a lot of unnecessary looping and casting in your code.
Here's a more concise version that passes your tests:
public class Pangram {
private final Set<Character> lettersRemaining = new HashSet<>();
public Pangram(String s) {
for (char ch = 'a'; ch <= 'z'; ch++) {
lettersRemaining.add(ch);
}
s = s.toLowerCase();
for (int i = 0; i < s.length(); i++) {
lettersRemaining.remove(s.charAt(i));
}
}
public boolean isPangram() {
return lettersRemaining.isEmpty();
}
public Set<Character> getMissingAlphabets() {
return new HashSet<>(lettersRemaining);
}
}
Suggestions...
Call it PangramCandidate rather than Pangram because it is misleading to have a non-pangram typed as a Pangram. As an analogy, you wouldn't expect String to have an isString() method.
The very long variable name distinctCharsInInputStringSortedAlphabetically is referenced in several places which makes it tedious to read the code. I think you should find a briefer way of expressing what that variable represents.
I do like your long test method names. However, the test method names starting with checkPangram_Test1..4 aren't very explanatory. Can you explain what exactly they are testing?
The term MissingAlphabets seems awkward to me. I think you mean MissingLetters or MissingCharacters.
In the tests, you can extract convenience methods for asserting whether a string is a pangram or not. This would reduce the amount of repetition in the test code.
public static boolean isPangram(String s) {
return new Pangram(s).isPangram();
}
public static void assertIsAPangram(String s) {
assertTrue(isPangram(s));
}
public static void assertIsNotAPangram(String s) {
assertFalse(isPangram(s));
}
I would actually suggest adding the isPangram(String s) convenience method to the PangramCandidate class because it will save callers time if that's all they need.
Also, you might think about how to cater for foreign characters (e.g. é) or different alphabets, say Cyrillic or even Spanish. This checker works for English only.
One of the lines in your test is very long due to a long String. I would suggest splitting it into two lines joined with +.
If you use the Google Guava library you can create test sets more concisely, e.g.
Sets.newHashSet('a', 'e', 'g', 'h', 'l', 'm', 'q', 't', 'w', 'y', 'z')
In your code, the conditional:
if ((int) character >= ASCII_VALUE_OF_SMALL_CASE_CHAR_A
&& (int) character <= ASCII_VALUE_OF_SMALL_CASE_CHAR_Z) {
can be expressed more succinctly as:
if (character >= 'a' && character <= 'z') {
There is no need to cast to an int and there is little value in expressing these characters as constants. It's not like they are going to change, and 'a' is much quicker to read and comprehend than a wordy explanation. | {
"domain": "codereview.stackexchange",
"id": 10976,
"tags": "java"
} |
Formal definition of hash function | Question: I was reading through the classic CLRS with the intention of reviewing the hash table theory, more specifically the definition of a hash function; I just wanted a reference to quote.
I cannot find a formal definition given, but I think it's fair to say a hash function (not universal) $h$ is a surjective map from a set of keys $K$ to a subset of integers $U$; for each $k \in K$ we define $h(k)$ to be the hash value of $k$. From the explanation given in CLRS it seems, though, that this restriction on $U$ (to be integers) might be too restrictive; however, since I think the definition has to reflect some practical aspects, I think this might be correct.
Can you either give me:
1. A paper/book with a formal definition
2. Confirm if my definition is correct?
Thank you
Answer: A hash function is used to map a set of keys to a subrange of the integers (it is used as an index into an array, in the end). So it must be (assuming zero based arrays, as in C), $h \colon \mathcal{U} \to [0, m - 1]$ if $\mathcal{U}$ is the universe of keys. | {
"domain": "cs.stackexchange",
"id": 15874,
"tags": "hash, definitions"
} |
Beehive numbers - using goto in C++ | Question: I understand that using goto in C++ code is strongly discouraged, but sometimes it really reduces the number of lines of code, as in the following case.
This is my code for SPOJ. I know this does not reduce too many lines of code, but in a big project, it potentially could.
#include<iostream>
#include<cstring>
#include<cstdio>
#include<cmath>
using namespace std;
bool debug=false;
typedef long long int lld;
typedef unsigned long long int llu;
int main(int argc , char **argv)
{
if(argc>1 && strcmp(argv[1],"DEBUG")==0) debug=true;
lld n,val;
long double sq;
while(true){
scanf("%lld",&n);
if(n==-1)break;
n -= 1;
if(n%3!=0)goto exit;
n/=3;
sq=4*n+1;sq=sqrt(sq);
if(sq-(int)sq!=0)goto exit;
n=sq;
if(n%2!=1)goto exit;
printf("Y\n");
continue;
exit:
printf("N\n");
continue;
}
return 0;
}
What should I do in such situations? Is there a way to do this by making a function call?
Answer: Your use of goto is wholly unjustified, not just because goto is taboo, but because your code has flow-of-control that is hard to follow. Furthermore, the use of goto is not even an effective way to achieve your goal of compactness.
Before addressing the core concern about goto, I'd like to point out that there is a lot of junk in your code:
Your compiler should have warned you:
beehive.cpp:12:11: warning: unused variable 'val' [-Wunused-variable]
lld n,val;
^
1 warning generated.
You do compile with warnings enabled, right?
#include <iostream> and #include <cstring> are superfluous. (Put the remaining ones in alphabetical order.)
using namespace std; is superfluous, and even if you used anything in the std namespace, a blanket import like that is a harmful habit.
The debug flag is unused.
typedef unsigned long long int llu; is never used. Furthermore, the problem guarantees 1 ≤ n ≤ 10^9, so a long would be sufficient for n.
As you suspected, a function call would help. Your main() should look like this:
int main() {
long n;
while ((1 == scanf("%ld", &n)) && (-1 != n)) {
puts(is_beehive_number(n) ? "Y" : "N");
}
return 0;
}
That provides two very important improvements over your code:
You can see at a glance what the purpose and structure of your program are. The clutter within the loop is all gone, replaced by a function whose purpose is obvious. The loop is properly structured — the phony while (true) is replaced by a useful test.
There is now proper separation of concerns. main() loops, reads the input cases, and prints the results. Most importantly, the is_beehive_number() is a pure calculation function that accepts a long and returns a bool.
When the calculation code is in its own function rather than embedded in the loop, the flow of control can be expressed so much better!
bool is_beehive_number(long n) {
if (--n % 3 != 0) return false;
double sq = sqrt(4 * (n / 3) + 1);
if (sq != (long)sq) return false;
n = sq;
return (n & 1);
}
I don't understand the mathematics behind your code; I just transformed the code mechanically. | {
"domain": "codereview.stackexchange",
"id": 7748,
"tags": "c++, mathematics, programming-challenge"
} |
Angular momentum of a body about a point rotating about its own axes | Question: I want to calculate angular momentum of a sphere about point O. The sphere is rotating about its two axes with angular velocities $w_1$ and $w_2$.
I know that angular momentum = $m\vec{r}\times\vec{v} + Iw$, where v is velocity of centre of mass. Here, v=0, therefore angular momentum of COM = 0. But, the body is itself rotating. Now, which angular momentum should I take?
$Iw_1$ ,
$Iw_2$ ,
$Iw_1 + Iw_2$ ,
Components of $Iw_1$ and $Iw_2$ along $r$, or what?
Answer: A rigid body can only have one rotation axis. When the angular velocity vector has multiple non-zero components, like $$\vec{\omega} = \pmatrix{ \omega_1 & \omega_2 & 0}$$
then the magnitude of rotation is described by the length of the vector $$ \omega = \| \vec{\omega} \| = \sqrt{ \omega_1 ^2 + \omega_2 ^2 } $$
The rotation axis direction is the unit vector along $\vec{\omega}$
$$ \hat{\rm rot} = \frac{ \vec{\omega} }{ \| \vec{\omega} \|} = \pmatrix{ \frac{\omega_1}{\sqrt{ \omega_1 ^2 + \omega_2 ^2 }} \\ \frac{\omega_2}{\sqrt{ \omega_1 ^2 + \omega_2 ^2 }} \\ 0 } $$
In case of a sphere (where the mass moment of inertia is uniform with direction) the angular momentum magnitude is $$ L = {I}\, \omega ={I}\, \sqrt{ \omega_1 ^2 + \omega_2 ^2 } $$
or by component
$$ \begin{matrix} L_1 = I\, \omega_1 \\ L_2 = I\, \omega_2 \\ L_3 = 0 \end{matrix} $$
and $$ L = \sqrt{ L_1^2 + L_2^2 + L_3^2 } $$
In general, it is easier to consider the vector form of the above with a matrix/vector equation
$$ \vec{L} = \mathrm{I}\, \vec{\omega} $$
$$ \pmatrix{L_1 \\ L_2 \\ L_3 } = \begin{pmatrix} I_1 & 0 & 0 \\ 0 & I_2 & 0 \\ 0 & 0 & I_3 \end{pmatrix} \pmatrix{\omega_1 \\ \omega_2 \\ \omega_3} $$
For a sphere $I_1 = I_2 = I_3 = I$.
Even more useless information below:
Rotation of a rigid body happens along a line in space (the rotation axis). The location of this line relative to the COM is given by $$ \vec{r}_{\rm rot} = \frac { \vec{\omega} \times \vec{v} }{ \| \vec{ \omega} \|^2 } $$ where $\vec{v}$ is the velocity vector of the COM.
Corollary to this is the fact the momentum happens along a line in space (the axis of percussion), in such a way that a single impact along this line can instantaneously immobilize a rotating rigid body. This axis has direction along the linear momentum $\vec{p} = m \vec{v}$ and is located relative to the COM at $$ \vec{r}_{\rm imp} = \frac{ \vec{p} \times \vec{L} }{ \| \vec{p} \|^2}$$ where $\vec{L} = \mathrm{I}\, \vec{\omega}$ is the angular momentum vector at the COM.
Welcome to the introduction of screw theory in mechanics. | {
"domain": "physics.stackexchange",
"id": 57568,
"tags": "newtonian-mechanics, angular-momentum, rotational-dynamics"
} |
Using camera on Raspberry Pi 3 running ROS::Kinetic | Question:
Hi all.
Could somebody please point me in the right direction to where I can find instructions on how to get the Pi's camera to work with ROS Kinetic? I'm struggling to find out how to get this working.
Edit: I've just started using ROS and I want to start playing around with it using cameras and lidar sensors. I've seen a lot of interesting projects using ROS and cameras, hence I want to use ROS with my Raspberry Pi. Is there a way to tap into the camera feed using a separate program and send the feed to a ROS node for processing?
Originally posted by Nasher128 on ROS Answers with karma: 1 on 2016-12-23
Post score: 0
Original comments
Comment by gvdhoorn on 2016-12-23:
First things first: can you use the camera with any 'regular' (as far as ROS is non-regular) Linux programs? If not, putting ROS in the mix isn't going to be productive, so please figure that out first.
Comment by gvdhoorn on 2016-12-24:
I'm suggesting to first get the camera working without ROS involved at all. If you can't do that, it doesn't help to complicate things by introducing nodes and drivers.
If you can use the camera (say with guvcview), then we can suggest how you could proceed.
Comment by gvdhoorn on 2017-01-15:
Not sure you're still interested in this, but UbiquityRobotics/raspicam_node looks related.
Answer:
Well you could use the Raspberry Pi Camera node that is found here:
https://github.com/UbiquityRobotics/raspicam_node
Or if you want to make life really easy for yourself you could just download a raspberry pi image that has the camera node, and all the drivers, and ROS already set up. That's available from here:
https://downloads.ubiquityrobotics.com
As always, please report any bugs, issues or feature enhancement ideas at github.com/UbiquityRobotics. PRs with enhancements are particularly welcome, and a friendly email to contact@ubiquityrobotics.com is always nice if something that our little project produced is useful for you.
Originally posted by DrDave with karma: 26 on 2018-01-11
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 26565,
"tags": "ros"
} |
Accessing Turtlebot iRobot Create Sensors | Question:
I noticed that when navigating with the gmapping demo or the amcl demo, the turtlebot stops when the wheel drop sensors or bump sensors are active. I'm just wondering if anyone knows which node takes care of this (move_base?) and which source file it can be found in.
Originally posted by Rydel on ROS Answers with karma: 600 on 2012-07-02
Post score: 2
Answer:
The Turtlebot handles this in the Turtlebot Node, which forms the ROS interface to the iRobot Create hardware.
The most relevant bits are:
Line 398 and forward, where if there is a cmd_vel to be sent to the hardware, it first executes the self.check_bumpers function at Line 429.
def check_bumpers(self, s, cmd_vel):
    # Safety: disallow forward motion if bumpers or wheeldrops
    # are activated.
    # TODO: check bumps_wheeldrops flags more thoroughly, and disable
    # all motion (not just forward motion) when wheeldrops are activated
    forward = (cmd_vel[0] + cmd_vel[1]) > 0
    if self.stop_motors_on_bump and s.bumps_wheeldrops > 0 and forward:
        return (0,0)
    else:
        return cmd_vel
The actual hardware interaction is taking place in the roomba_sensor_handler.py file or the create_sensor_handler.py file, also in the turtlebot_node package.
Originally posted by mjcarroll with karma: 6414 on 2012-07-02
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Rydel on 2012-07-03:
thanks a lot | {
"domain": "robotics.stackexchange",
"id": 10021,
"tags": "navigation, turtlebot, move-base, ros-electric"
} |
How to Find the Frequency Response of a Communication Channel from Input and Output Symbols in MATLAB? | Question: I want to find the channel frequency response of a digital communications system. I have the functions of the input symbol (a triangle) and the output symbol - a distorted triangle: $$\dfrac{1}{1 + \left( \frac{2t}{T} \right)^2 }$$
I want to plot the frequency response of the channel as if it was a digital filter.
How can I do this in MATLAB?
I tried using symbolic variables, computing Fourier transforms of the input and the output signal using the fourier command, and then finding the numerical values using the subs and double commands, with no success.
Answer: I'm not sure about the symbols.
Yet if you have a function in time as the input of the channel and another function in time as its output, and assuming that neither the channel's spectrum nor the input's spectrum has zeros, the channel transfer function is given by:
$$ \frac{{X}_{Out}(f)}{{X}_{In}(f)} $$
In MATLAB, just apply the FFT to each, and do element-by-element division.
This will give you the DFT of the transfer function.
Apply IFFT to see the time response function. | {
"domain": "dsp.stackexchange",
"id": 1912,
"tags": "matlab, digital-communications, infinite-impulse-response, deconvolution"
} |
Accuracy of the original human DNA datasets sequenced by Human Genome Project? | Question: The Human Genome Project was the project of 'determining the sequence of nucleotide base pairs that make up human DNA, and of identifying and mapping all of the genes of the human genome'. It was declared complete in 2003, i.e. 99% of the euchromatic human genome completed with 99.99% accuracy.
Are the datasets provided by HGP still accurate, or as accurate as was claimed in 2003?
Given the technology in the past (such as using old techniques), or any other reason (newer research studies), is it possible that the datasets are not as accurate as originally expected?
Answer: The HGP developed the first "reference" human genome - a genome that other genomes could be compared to, and was actually a composite of multiple human genome sequences.
The standard human reference genome is actually continually updated with major and minor revisions, a bit like software. The latest major version is called GRCh38, was released in 2013, and has since had a number of minor updates.
Are the datasets provided by HGP still accurate?
Yes, in a sense, but we certainly have better information now. One way to measure the quality of the assembly is that the initial release from the HGP had hundreds of thousands of gaps - sequences that could not be resolved (this often occurs because of repetitive sequences). The newest reference genome has less than 500 gaps. | {
"domain": "bioinformatics.stackexchange",
"id": 0,
"tags": "hgp, phylogenetics"
} |
Observation of gauge in artificial magnetic fields | Question: In the ultracold atom community, an "artificial gauge field" or "artificial magnetic field" is a spatially varying hopping phase somehow engineered into the system, so that atoms hopping around an optical lattice gain a non-integrable phase factor in (seemingly) precise analogy with a gauge field.
However, it would appear that there is a major caveat to this comparison. In a recent paper, it is claimed:
A common belief is that all observables are gauge-independent. However, gauge-dependent observations can be made in time-of-flight images of ultracold atoms when the momentum distribution of the wavefunction is observed. The sudden switch-off of all laser beams transforms canonical momentum, which is gauge dependent, into mechanical momentum, which is readily observed [29].
And, indeed, they observe a change in lattice symmetry that reflects the non-translation-invariant gauge choice, rather than the translation-invariant artificial magnetic field that it creates. Clearly, when taken at face value this is a major difference from a true gauge field.
So my questions are then:
Is the above really a reasonable description of what happens in
these artificial gauge field experiments, or does it just somehow
signal a breakdown of the analogy between these systems and real
gauge fields?
Whatever the difference is between artificial and real gauge fields, does this imply any effects of real gauge fields that would not be observable in these artificial gauge experiments? Or do these artificial gauge fields somehow improve on real gauge fields, in that they reproduce all the phenomena of true gauge fields and on top of that have the additional property of being directly measurable?
Answer: Disclaimer: I do particle physics / cosmology, so this is definitely outside my field, apply grains of salt to this answer appropriately.
I think Reference [29] (Lin et al, arxiv reference: 1008.4864) honestly does a better job of explaining what is going on (which makes sense, the impression I get is that 1008.4864 is a foundational paper in this subfield).
The gist, as I understand it from Lin et al, is that the hamiltonian for the system they are interested in--neutral atoms in a BEC state coupled to two intersecting lasers--can be written like:
\begin{equation}
H_{BEC} = (\vec{p} - \vec{p}_{min})^2,
\end{equation}
where $\vec{p}_{min}$ is a quantity that can be controlled by the experimenters (adjusting the lasers).
Lin et al note that this is formally similar to the hamiltonian for a charged particle moving in a background electromagnetic field
\begin{equation}
H_{charged} = (\vec{p} - q \vec{A})^2 + q \phi,
\end{equation}
where $\vec{A}$ and $\phi$ are the vector and scalar potentials respectively. Of course, $H_{charged}$ is gauge invariant.
To implement this analogy, they introduce an analogue vector potential $\vec{A^*}$ and scalar potential $\phi^*$ (they don't really introduce $\phi^*$ but let's run with it for now). Then Lin et al identify
\begin{equation}
\vec{p}_{min} = q^* \vec{A^*},
\end{equation}
and also work in a gauge where
\begin{equation}
\phi^* = 0.
\end{equation}
Then in that gauge, the analogue electric field is given by $\vec{E^*} = -\dot{\vec{A^*}}$.
Thus, Lin et al point out that there are effects from setting up a time dependent $\vec{A}$ (ie a time dependent $\vec{p}_{min}$). From the perspective of the analogue gauge theory, this is due to the fact that the analogue electric field $\vec{E}^*$ is non-zero. The analogue electric field is a gauge invariant, observable quantity.
The passage you cite (from 1503.08243, Kennedy et al) suggests that the effect they measure comes from a time dependent $\vec{A^*}$. Again this would lead to a non-zero $\vec{E^*}$.
Of course, from the perspective of the analogue gauge theory, they are free to perform a gauge transformation, and they must get the same answer because physical observables must be gauge invariant (this point is ironclad--if the rest of this answer is wrong, this one point can't be wrong unless the analogy to gauge fields completely breaks down).
However, a gauge transformation will necessarily turn on $\phi^*$. In other words, $\phi^*$ will be nonzero in any other gauge. This requires one to change the original Hamiltonian to take this into account. What will still be true is that
\begin{equation}
H_{BEC}=(\vec{p}-\vec{p}_{min})^2 = (\vec{p} - q\vec{A^*})^2 + q \phi^*
\end{equation}
so obviously in this new gauge we can no longer identify $q\vec{A^*} = \vec{p}_{min}$. I think this is really what Kennedy et al are getting at, this relationship between $\vec{A^*}$ and $\vec{p}_{min}$ is not gauge invariant.
When the new hamiltonian is used correctly, the final answer will be the same in any gauge.
However I think that actually showing this works would be overkill--the bottom line I think is that $\vec{E^*}$ is non-zero, so everyone in the end is making measurements of a gauge invariant quantity.
Update 7/4
So I had a chance to look at this more. In the end I just have to disagree with the quote you cited--the observables do not depend on the gauge. However the gauge they choose is particularly nice, and finding a manifestly gauge invariant formulation of what they are doing might not be worth it. The bottom line is that once you gauge fix, every combination of operators you write down is gauge invariant (since there is no gauge freedom left), and therefore there is guaranteed to be a gauge invariant combination of operators that reduces to the combination you wrote down in the gauge you picked. In other words, one completely valid way to describe a gauge invariant quantity is to say what it looks like in a well defined gauge. What I think is going on is that the observable Kennedy et al are measuring (column density in momentum space) is very natural in one gauge fixed version of the problem, but finding the manifestly gauge invariant version would be unnecessarily complicated.
More details:
Powell et al (1009.1389) is a really good paper that discusses the theoretical aspects of what is going on in setups like the one used in Kennedy et al. The underlying formalism you need is gauge theory on a lattice. The basic idea is that there are fermion fields living on a given lattice site that create particles at that lattice site. In Kennedy et al these are referred to as $a_{m,n}$ (where $m,n$ are indices on a 2D lattice). There are also link fields, which are given by Wilson lines that connect the lattice site $(m,n)$ to the lattice site $(m',n')$
\begin{equation}
W_{(m,n),(m',n')} = \exp \left(i\int_{(x_m,y_n)}^{(x_{m'},y_{n'})} d\vec{x}\cdot\vec{A}\right) = e^{i\phi_{(m,n),(m',n')}}
\end{equation}
The last line works because this is a $U(1)$ gauge field, so the integrals are numbers.
Both the $a_{m,n}$ operators, and the phases $\phi_{m,n}$, transform under gauge transformations. The gauge transformations occur on each site independently, so we can write the parameters of the gauge transformation as $\lambda_{m,n}$. The $a$ operators transform as
\begin{equation}
a_{m,n} \rightarrow e^{i \lambda_{m,n}} a_{m,n}
\end{equation}
and the Wilson lines transform as
\begin{equation}
W_{(m,n),(m',n')} \rightarrow e^{i \lambda_{m,n}} W_{(m,n),(m',n')} e^{-i \lambda_{m',n'}}
\end{equation}
or equivalently
\begin{equation}
\phi_{(m,n),(m',n')} \rightarrow \phi_{(m,n),(m',n')} + \lambda_{m,n} - \lambda_{m',n'}
\end{equation}
As a particle theorist / cosmologist, I am more familiar with the above formulas in their continuum form, where I would call $a$ by the name $\psi$, so $\psi(x)\rightarrow e^{i\lambda(x)}\psi(x)$ and $W(x,y) \rightarrow e^{i \lambda(x)} W(x,y) e^{-i\lambda(y)}$.
One key observation is that the "hopping Hamiltonian" from Kennedy et al is gauge invariant
\begin{equation}
H = -t \sum_{m,n} a_{m+1,n}^\dagger e^{i\phi_{(m+1,n),(m,n)}} a_{m,n}
\end{equation}
which can be seen using the transformation rules above. Incidentally, my particle-y instincts are to think of the above as a discretized version of the fermionic part of the QED lagrangian, $\bar{\psi} i \gamma^\mu D_\mu \psi$, where $D$ is the gauge covariant derivative.
The fact that the hopping Hamiltonian is gauge invariant really means that nothing physical is going to end up depending on the choice of gauge. To the extent that this Hamiltonian describes the system Kennedy et al are measuring, nothing can end up depending on a choice of gauge because the underlying Hamiltonian describing all of the dynamics does not. (This could be broken, for example, if (1) the approximation of the full dynamics of the system by this gauge theory breaks down in a way that breaks gauge invariance, or (2) if the way that the experimental apparatus is coupled to the BEC breaks the gauge symmetry. I am assuming both of those don't happen--if they do that is more the fault on the experimental side than the theoretical side, ie it is a boring breaking of gauge invariance).
For example, the number of particles on each site
\begin{equation}
n_{m,n} = a_{m,n}^\dagger a_{m,n}
\end{equation}
The number of particles is gauge invariant (as you can check from the rules and physically has to be the case since the number of particles is observable).
Both Powell et al and Kennedy et al find it convenient to work in a gauge where the phases only depend on one lattice direction, so
\begin{equation}
\phi_{(m,n),(m',n')} \rightarrow \phi_{m,m'}
\end{equation}
This is a very nice gauge for what they want to do. In particular, the translations in the $y$ direction commute with the Wilson line operators, but translations in $x$ do not. Their basic point is that all gauge fixings will force translation invariance to be broken somehow, so full translation invariance is not a real symmetry of the system.
Now the measurements in Kennedy et al, as far as I can tell, are really done in momentum space (from a theoretical point of view momentum space is nice for this problem because this diagonalizes the Hamiltonian). The momentum space operators are
\begin{equation}
\tilde{a}_{p,q} = \sum_{m,n} e^{i 2\pi (p m+qn) / N} a_{m,n}
\end{equation}
where $N$ is the number of lattice sites.
Things now get complicated because the momentum space operators don't have obviously nice transformation properties under gauge transformations (the gauge transformation of the $\tilde{a}_{p,q}$ will end up being some convolution of the gauge parameters with the real space operators $a_{m,n}$). This is related to the fact that the commutator of the Hamiltonian with the translation operator will be complicated in a general gauge.
So, what I think is going on is that Kennedy et al construct a column density in momentum space, which I am guessing amounts to the probability which you can compute in a given state by $\langle \tilde{a}^\dagger_{p,q}\tilde{a}_{p,q} \rangle$, where the $\tilde{a}_{p,q}$ are defined in the gauge that they describe. One frustrating thing is that I am not 100% sure what specific combination corresponds to the plots they make, so I can't be more explicit about what I'm saying, but conceptually it doesn't matter what precise combination of $\tilde{a}$'s they are plotting.
This does not make the observable gauge dependent, however it does mean that showing the gauge invariance is tricky. There is guaranteed to be some gauge invariant combination of Wilson lines and fermion operators that reduces to the combination that Kennedy et al plot, in the gauge that they pick. One avenue to discover the precise gauge invariant combination is by guessing--if you find one gauge invariant combination that reduces to their observable, that is the correct one. Another more systematic approach is to take their observable, written in the gauge that they chose, and perform an arbitrary gauge transformation. The result will likely be messy (since the momentum space operators don't have nice transformation properties), but you are guaranteed to be able to write the result in terms of manifestly gauge invariant objects if you do everything correctly (you will probably have to add in gauge transformed combinations of operators that were zero in the original gauge, and the net goal is to cancel out all of the dependence on the gauge parameter).
In other words, once you gauge fix, you can write down any arbitrarily complicated combination you like of the operators you have and you are guaranteed to be talking about gauge invariant quantities, since there's no gauge freedom left. However, finding the manifestly gauge invariant form can be hard.
In the problem that Kennedy et al are considering, there is such a natural choice of gauge that I think they basically want to argue that there's no point in finding the gauge invariant form of what they are measuring--the main pragmatic reason to find a gauge invariant form would be if different groups were using different gauges and needed to compare their answers. The gauge invariant form could be interesting theoretically to get more insight into the system. Based on what Powell says in section II, I think the gauge invariant formulation involves studying the properties of the projective symmetry group of the system. But that would definitely be beyond the scope of an experimental paper. | {
"domain": "physics.stackexchange",
"id": 44091,
"tags": "atomic-physics, gauge-theory, synthetic-gauge-fields"
} |
Angle in a spacetime diagram | Question:
FIGURE 4.13 A Lorentz boost as a change of coordinates on a spacetime diagram. The figure shows the grid of $\left(c t^{\prime}, x^{\prime}\right)$ coordinates defined by $(4.18)$ plotted on a $(c t, x)$ spacetime diagram. The $\left(c t^{\prime}, x^{\prime}\right)$ coordinates are not orthogonal to each other in the Euclidean geometry of the printed page. But they are orthogonal in the geometry of spacetime. (Recall the analogies between spacetime diagrams and maps discussed in Example 4.1.) The $\left(c t^{\prime}, x^{\prime}\right)$ axes have to be as orthogonal as the $(c t, x)$ axes because there is no physical distinction between one inertial frame and another. The orthogonality is explicitly verified in Example 5.2. The hyperbolic angle $\theta$ is a measure of the velocity between the two frames.
The transformation is given as
$\begin{aligned} c t^{\prime} &=(\cosh \theta)(c t)-(\sinh \theta) x \\ x^{\prime} &=(-\sinh \theta)(c t)+(\cosh \theta) x \\ y^{\prime} &=y \\ z^{\prime} &=z \end{aligned}$
To find the angle the $x'$ axis makes with the $x$ axis, I use
$0=(\cosh \theta)(c t)-(\sinh \theta) x$ to get its slope as $\tanh{\theta}$, and hence the angle is ${\tan}^{-1}(\tanh{\theta})$; but the book says the angle is $\theta$. What did I do wrong?
Answer: As @benrg says, the "Minkowskian-angle" (called "rapidity") uses the
hyperbolic functions, and not the circular functions.
With velocity $(v/c)=\tanh\theta$, we have
time-dilation factor $\gamma=\frac{1}{\sqrt{1-(v/c)^2}}=\cosh\theta$ and
Doppler factor $k=\sqrt{\frac{1+(v/c)}{1-(v/c)}}=\exp\theta$.
Geometrically, an angle (a circular angle)
is the arc-length of a circular arc divided by its radius.
For Minkowski spacetime, the Minkowskian-angle
is the Minkowski-arc-length of a Minkowski-circular-arc (a "hyperbolic arc") divided by its Minkowski-radius.
Alternatively, we can characterize an angle
as twice the area of a circular sector, divided by the square-of-the-radius.
Similarly,
we can characterize a Minkowski-angle (the rapidity)
as twice the area of a hyperbolic sector, divided by the square-of-the-Minkowski-radius.
Because of these characterizations,
for relativity, one should really draw a hyperbolic-arc for the rapidity,
and not a circular-arc, as you have drawn. | {
"domain": "physics.stackexchange",
"id": 91030,
"tags": "special-relativity, spacetime, inertial-frames, geometry"
} |
Relationship between speed and power in wind/hurricanes | Question: I was looking at hurricanes today when a question crossed my mind.
Cat 1 Hurricanes are the ones going up to 95mph. Cat 5 goes above 157 mph.
That says a lot, but not all.
In motors or engines, RPM (analogy to MPH) is not the only important factor. Torque is equally as important. If one goes up, the other goes down unless you have more HP to sustain both at "desired" level.
How does this work with Hurricanes or wind for that matter?
I would assume that a slower wind gust with more torque could do more damage than a faster wind gust with less torque, if that makes sense.
Is there such a relationship? How are both reconciled in this particular case?
Answer: The airflow equivalent of torque is pressure and that of RPM is mass flow rate, at least in the sense that sustained pressure differences are what accelerate air masses to high velocities.
This sort of dynamic analogy is appropriate when dealing with air movement in ducts but it doesn't work so well when there are no ducts as such and where the scale lengths are of order ~hundreds of miles.
Instead, we look at the kinetic energy carried by parcels of high-speed air and the aerodynamic ("wind") loads imposed by those air parcels on things like trees, houses, cars and people.
Wind loads (expressed in terms of pounds per square foot) are proportional to the square of the wind velocity, which means doubling the wind velocity increases the loads by a factor of four. This effect is what makes hurricanes and tornadoes so destructive. | {
"domain": "physics.stackexchange",
"id": 51999,
"tags": "torque, power, speed"
} |
Are there any actual uses of isodiaphers? | Question: While studying atomic structure, I came across the terms isotopes, isobars, isoelectronic species, isotones and isodiaphers. While I can accept that the classification of isotopes, isobars, isoelectronic species and isotones may be useful, I do not understand what could be the use of species with the same difference between number of neutrons and protons (isodiaphers).
Is there any place in chemistry where they have a practical use (like in experiments, theories or laws), or is it just a useless classification term?
Answer: In practical radiochemistry, the term is rarely needed but it is not useless. In particular, isodiaphers are used in radiochemistry to describe chains of alpha decays.
I would expect, the relative frequencies of the practical use of the related terms is about
"isotopes" > "isobars" > "isodiaphers" > "isotones".
Isobars are used in radiochemistry to describe chains of beta decay, therefore their importance might be similar to isodiaphers for alpha decay. However, isobars are also very important to describe fission yields so that in total they might be used more often than isodiaphers. | {
"domain": "chemistry.stackexchange",
"id": 15438,
"tags": "nuclear-chemistry"
} |
`zip` operator to iterate on multiple containers in sync | Question: I worked out a zip operator similar to Python's, because I didn't find one in std. It allows using range-based for loops to iterate at once over several equal-length containers (arrays, counters... anything that has an iterator and a static length). It should be safe (never exceed the iterator's capacity), be able to modify the content of the container in-place when possible, and have no run-time overhead compared to manually incrementing the iterators.
Some things still look a bit fishy to me and I also wonder if all my naming/implementation choices adhere to the std look-and-feel. Examples of use:
std::array a = {1,2,3,4};
std::array b = {4,3,2,1};
for (auto [i, j, k] : zip(a, b, a)) {
std::cout << i << " " << j << " " << k << std::endl;
i = 42; // we can overwrite the values of a
}
//// This one doesn't work yet:
// for (auto [i, j] : zip(a, {4, 3, 2, 1})) {
// std::cout << i << " " << j << std::endl;
// }
With it comes a simple range class that allows to include counters in the iterations:
// x takes the value of array a, and i counts from 0 to 3
for (auto [x, i] : zip(a, range<4>())) {
std::cout << i << " " << x << std::endl;
}
Note here that the arguments to zip aren't necessarily l-values.
Here is my implementation:
// inductive case
template<typename T, typename... Ts>
struct zip : public zip<Ts...> {
static_assert(std::tuple_size<T>::value == std::tuple_size<zip<Ts...>>::value,
"Cannot zip over structures of different sizes");
using head_value_type = std::tuple<typename T::value_type&>;
using tail_value_type = typename zip<Ts...>::value_type;
using value_type = decltype(std::tuple_cat(std::declval<head_value_type>(),
std::declval<tail_value_type>()));
zip(T& t, Ts&... ts) : zip<Ts...>(ts...), t_(t) {}
zip(T& t, Ts&&... ts) : zip<Ts...>(ts...), t_(t) {}
zip(T&& t, Ts&... ts) : zip<Ts...>(ts...), t_(t) {}
zip(T&& t, Ts&&... ts) : zip<Ts...>(ts...), t_(t) {}
struct iterator {
using head_iterator = typename T::iterator;
using tail_iterator = typename zip<Ts...>::iterator;
head_iterator head;
tail_iterator tail;
bool operator!=(iterator& that) { return head != that.head; }
void operator++() { ++head; ++tail; }
value_type operator*() {
return std::tuple_cat<head_value_type, tail_value_type>(*head, *tail);
}
iterator(head_iterator h, tail_iterator t) : head(h), tail(t) {}
};
iterator begin() { return iterator(t_.begin(), zip<Ts...>::begin()); }
iterator end() { return iterator(t_.end(), zip<Ts...>::end()); }
T& t_;
};
// base case
template<typename T>
struct zip<T> {
using value_type = std::tuple<typename T::value_type&>;
using iterator = typename T::iterator;
zip(T&& t) : t_(t) {};
zip(T& t) : t_(t) {};
iterator begin() { return t_.begin(); }
iterator end() { return t_.end(); }
private:
T& t_;
};
// must implement tuple_size to check size equality
template<typename T, typename... Ts>
struct std::tuple_size<zip<T, Ts...>> {
static constexpr int value = std::tuple_size<T>::value;
};
What looks fishy/over-complicated:
the constructors to cover all kinds of arguments (l/r-value/references)
the mangling of tuple types
bonus: why doesn't my second example compile?
For completeness, here is my implementation of the range class:
template<class T, T BEG, T END, T STEP>
struct Range {
Range() {};
using iterator = Range;
using value_type = T;
bool operator!=(iterator that) { return this->val_ < that.val_; }
void operator++() { val_ += STEP; }
int& operator*() { return val_;}
iterator begin() { return *this; }
iterator end() { return Range(END); }
private:
Range(int val) : val_(val) {}
T val_ = BEG;
};
template<class T, T BEG, T END, T STEP>
struct std::tuple_size<Range<T, BEG, END, STEP>> {
static constexpr int value = (END - BEG) / STEP;
};
template<class T, T BEG, T END, T STEP>
static auto range() { return Range<T, BEG, END, STEP>(); };
template<int BEG, int END, int STEP=1>
static auto range() { return Range<int, BEG, END, STEP>(); };
template<int END>
static auto range() { return Range<int, 0, END, 1>(); };
Any feedback will be much appreciated! Thanks in advance.
Answer: zip
Right now, your zip uses the tuple protocol. It probably makes more sense to use the range protocol instead, to support cases like this:
std::vector a{1, 2, 3, 4};
std::vector b{5, 6, 7, 8};
for (auto [x, y] : zip(a, b)) {
std::cout << x << ' ' << y << '\n';
}
These constructors:
zip(T& t, Ts&... ts) : zip<Ts...>(ts...), t_(t) {}
zip(T& t, Ts&&... ts) : zip<Ts...>(ts...), t_(t) {}
zip(T&& t, Ts&... ts) : zip<Ts...>(ts...), t_(t) {}
zip(T&& t, Ts&&... ts) : zip<Ts...>(ts...), t_(t) {}
mandate that all arguments other than the first have the same value category. You also convert everything to lvalues, because an id-expression that refers to an rvalue reference is an lvalue (!). This is because the original purpose of rvalue references was to capture rvalues and treat them like normal objects, not to forward rvalues.
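To see this last point concretely, here is a minimal illustration (helper names are mine, not from the code under review) showing that a named rvalue-reference parameter is itself an lvalue inside the function body:

```cpp
#include <utility>

// Two overloads distinguished only by the value category of the argument.
inline bool is_lvalue(int&)  { return true;  }
inline bool is_lvalue(int&&) { return false; }

// `r` is declared as an rvalue reference, but the id-expression `r`
// names an object, so it is an lvalue: the call binds to is_lvalue(int&).
inline bool named_rvalue_ref_is_lvalue(int&& r) {
    return is_lvalue(r);
}

// Only std::move(r) restores the rvalue category.
inline bool moved_is_rvalue(int&& r) {
    return !is_lvalue(std::move(r));
}
```

This is why perfect forwarding needs `std::forward<T>(t)` rather than plain `t`.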
The iterator class is also missing some required operations: associated types (iterator_category, difference_type, etc.), ==, postfix ++, etc. Also consider supporting random access iterator functionality if the zipped ranges support it. We'll come back to this later.
I would also probably implement the zip without recursion, to reduce the compile-time overhead of nested template class instantiations. So the end result roughly looks like this: (not comprehensively tested, may have bugs; for simplicity, only random access ranges are supported)
#include <exception>
#include <iterator>
#include <tuple>
namespace detail {
using std::begin, std::end;
template <typename Range>
struct range_traits {
using iterator = decltype(begin(std::declval<Range>()));
using value_type = typename std::iterator_traits<iterator>::value_type;
using reference = typename std::iterator_traits<iterator>::reference;
};
template <typename... Its>
class zip_iterator {
public:
// technically lying
using iterator_category = std::common_type_t<
typename std::iterator_traits<Its>::iterator_category...
>;
using difference_type = std::common_type_t<
typename std::iterator_traits<Its>::difference_type...
>;
using value_type = std::tuple<
typename std::iterator_traits<Its>::value_type...
>;
using reference = std::tuple<
typename std::iterator_traits<Its>::reference...
>;
using pointer = std::tuple<
typename std::iterator_traits<Its>::pointer...
>;
constexpr zip_iterator() = default;
explicit constexpr zip_iterator(Its... its)
: base_its{its...}
{
}
constexpr reference operator*() const
{
return std::apply([](auto&... its) {
return reference(*its...);
}, base_its);
}
constexpr zip_iterator& operator++()
{
std::apply([](auto&... its) {
(++its, ...);
}, base_its);
return *this;
}
constexpr zip_iterator operator++(int)
{
return std::apply([](auto&... its) {
return zip_iterator(its++...);
}, base_its);
}
constexpr zip_iterator& operator--()
{
std::apply([](auto&... its) {
(--its, ...);
}, base_its);
return *this;
}
constexpr zip_iterator operator--(int)
{
return std::apply([](auto&... its) {
return zip_iterator(its--...);
}, base_its);
}
constexpr zip_iterator& operator+=(difference_type n)
{
std::apply([=](auto&... its) {
((its += n), ...);
}, base_its);
return *this;
}
constexpr zip_iterator& operator-=(difference_type n)
{
std::apply([=](auto&... its) {
((its -= n), ...);
}, base_its);
return *this;
}
friend constexpr zip_iterator operator+(const zip_iterator& it, difference_type n)
{
return std::apply([=](auto&... its) {
return zip_iterator(its + n...);
}, it.base_its);
}
friend constexpr zip_iterator operator+(difference_type n, const zip_iterator& it)
{
return std::apply([=](auto&... its) {
return zip_iterator(n + its...);
}, it.base_its);
}
friend constexpr zip_iterator operator-(const zip_iterator& it, difference_type n)
{
return std::apply([=](auto&... its) {
return zip_iterator(its - n...);
}, it.base_its);
}
constexpr reference operator[](difference_type n) const
{
return std::apply([=](auto&... its) {
return reference(its[n]...);
}, base_its);
}
// the following functions assume usual random access iterator semantics
friend constexpr bool operator==(const zip_iterator& lhs, const zip_iterator& rhs)
{
return std::get<0>(lhs.base_its) == std::get<0>(rhs.base_its);
}
friend constexpr bool operator!=(const zip_iterator& lhs, const zip_iterator& rhs)
{
return !(lhs == rhs);
}
friend constexpr bool operator<(const zip_iterator& lhs, const zip_iterator& rhs)
{
return std::get<0>(lhs.base_its) < std::get<0>(rhs.base_its);
}
friend constexpr bool operator>(const zip_iterator& lhs, const zip_iterator& rhs)
{
return rhs < lhs;
}
friend constexpr bool operator<=(const zip_iterator& lhs, const zip_iterator& rhs)
{
return !(rhs < lhs);
}
friend constexpr bool operator>=(const zip_iterator& lhs, const zip_iterator& rhs)
{
return !(lhs < rhs);
}
private:
std::tuple<Its...> base_its;
};
}
template <typename... Ranges>
class zip {
static_assert(sizeof...(Ranges) > 0, "Cannot zip zero ranges");
public:
using iterator = detail::zip_iterator<
typename detail::range_traits<Ranges>::iterator...
>;
using value_type = typename iterator::value_type;
using reference = typename iterator::reference;
explicit constexpr zip(Ranges&&... rs)
: ranges{std::forward<Ranges>(rs)...}
{
}
constexpr iterator begin()
{
return std::apply([](auto&... rs) {
return iterator(rs.begin()...);
}, ranges);
}
constexpr iterator end()
{
return std::apply([](auto&... rs) {
return iterator(rs.end()...);
}, ranges);
}
private:
std::tuple<Ranges...> ranges;
};
// by default, rvalue arguments are moved to prevent dangling references
template <typename... Ranges>
explicit zip(Ranges&&...) -> zip<Ranges...>;
Let's hope that P1858 Generalized pack declaration and usage gets accepted so that we can eliminate the tons of invocations of std::apply ...
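Until then, the std::apply + fold-expression idiom is worth internalizing on its own. Here is a stripped-down sketch (the helper name is mine) of the pattern zip_iterator uses to advance all base iterators in lock-step:

```cpp
#include <tuple>

// std::apply unpacks the tuple into a parameter pack, and a comma fold
// expression applies the operation to every element at once -- the same
// pattern zip_iterator::operator++ uses on its tuple of base iterators.
template <typename Tuple>
void increment_all(Tuple& t) {
    std::apply([](auto&... elems) { (++elems, ...); }, t);
}
```

Called on a `std::tuple` of iterators (or, for demonstration, a tuple of references from `std::tie`), every element is incremented in one statement.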
range
Similar to zip, range operates on a tuple basis — the parameters are passed as template arguments, and tuple_size is provided. This limits its usefulness, because runtime ranges (e.g., range(vector.size())) are not possible.
You chose to make range its own iterator type, which is not without precedent in the standard library. However, this will cause confusion once you add more functionality to range.
A more sophisticated comparison operator, one that treats sentinel (end) values specially and takes the sign of the step into account, allows for commutative comparison and negative steps.
So the end result may look like this: (concept verification, overflow checking, etc. are omitted for simplicity)
namespace detail {
template <typename T>
class range_iterator {
T value{0};
T step{1};
bool sentinel{false};
public:
// lying again
using iterator_category = std::forward_iterator_tag;
using difference_type = std::intmax_t;
using value_type = T;
using reference = T;
using pointer = T*;
constexpr range_iterator() = default;
// sentinel
explicit constexpr range_iterator(T v)
: value{v}, sentinel{true}
{
}
explicit constexpr range_iterator(T v, T s)
: value{v}, step{s}
{
}
constexpr reference operator*() const
{
return value;
}
constexpr range_iterator& operator++()
{
value += step;
return *this;
}
constexpr range_iterator operator++(int)
{
auto copy{*this};
++*this;
return copy;
}
friend constexpr bool operator==(const range_iterator& lhs, const range_iterator& rhs)
{
if (lhs.sentinel && rhs.sentinel) {
return true;
} else if (lhs.sentinel) {
return rhs == lhs;
} else if (lhs.step > 0) {
return lhs.value >= rhs.value;
} else if (lhs.step < 0) {
return lhs.value <= rhs.value;
} else {
return lhs.value == rhs.value;
}
// C++20: return (lhs.value <=> rhs.value) == (step <=> 0); from third branch
}
friend constexpr bool operator!=(const range_iterator& lhs, const range_iterator& rhs)
{
return !(lhs == rhs);
}
};
}
template <typename T>
class range {
T first{0};
T last{};
T step{1};
public:
using value_type = T;
using iterator = detail::range_iterator<T>;
explicit constexpr range(T e)
: last{e}
{
}
explicit constexpr range(T b, T e, T s = T{1})
: first{b}, last{e}, step{s}
{
}
constexpr iterator begin() const
{
return iterator{first, step};
}
constexpr iterator end() const
{
return iterator{last};
}
constexpr T size() const
{
return (last - first) / step;
}
};
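To make the sentinel semantics concrete, here is the same stopping rule written as a plain loop (a sketch, independent of the classes above): stop once the value passes the bound in the direction of the step. This is what lets a negative step like `range(10, 0, -3)` terminate instead of running past the bound as a plain `!=` test would.

```cpp
#include <vector>

// Collect the values range(first, last, step) would produce, using the
// "passed the bound in the direction of travel" rule that the sentinel
// comparison above encodes. Works for positive and negative steps.
std::vector<int> collect(int first, int last, int step) {
    std::vector<int> out;
    for (int v = first; (step > 0) ? (v < last) : (v > last); v += step)
        out.push_back(v);
    return out;
}
```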
You may also consider implementing an enumerate modeled on Python's, which comes in handy when accessing sequences by index:
// again, rvalue arguments are copied by default
template <typename Sequence>
auto enumerate(Sequence&& seq)
{
using std::begin, std::end;
return zip(range(end(seq) - begin(seq)), std::forward<Sequence>(seq));
} | {
"domain": "codereview.stackexchange",
"id": 37780,
"tags": "c++, template, iterator, c++17, variadic"
} |
I'm confused about publishing nav_msgs/Odometry message | Question:
I have some questions of the tutorial : Publishing Odometry Information over ROS to learn how to publish nav_msgs/Odometry message:
1. In this tutorial code, I'm confused about the transform part. architecture image http://wiki.ros.org/navigation/Tutorials/RobotSetup?action=AttachFile&do=get&target=overview_tf_small.png From the image of the navigation stack, it only requires "nav_msgs::Odometry". Why should we send "geometry_msgs::TransformStamped", since we set the same data in these two data structures?
2. Could I think of the "odom" frame as the estimated pose of the robot, and "base_link" as the origin (0, 0, 0) in the world?
3. In my case, I have a robot with a motor encoder. Could I treat it as the odometry source, just like in the image below?
image http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot?action=AttachFile&do=get&target=setupA2.jpg
4. Are the names of "frame_id" and "child_frame_id" changeable?
Originally posted by Josper on ROS Answers with karma: 73 on 2017-05-03
Post score: 3
Answer:
1.
Original: It is not clear what you are referring to; there is no geometry_msgs::TransformStamped in the image. Do you mean the tf messages?
Update: The two ways of sending the transformations (nav_msgs/Odometry on /odom and tfMessage on /tf) make the pose estimate of the robot available in slightly different ways. The /odom topic in general is only for Odometry messages, nothing else. These contain the pose and the velocity of the robot including the respective uncertainties (covariance matrices). The /tf topic on the other hand is only used for poses, and not only that of the odometry estimate, but for all transformations the application tracks, e.g. the odometry, the position of sensors on the robot, objects detected in said sensors, poses of robot arms and grippers etc. Its purpose is to be able to combine these transformations to answer questions like "where is the gripper with respect to the object I saw ten seconds ago". It is also incredibly useful in visualization, because everything can be displayed in a common coordinate frame.
2. Odom is the odometry estimate of the robot, coming from a sensor that accumulates drift. base_link is attached to the robot, i.e. some defined position on the robot (or below it, if projected to the floor for a wheeled robot). See REP105 for details.
3. Yes.
4. No, not in general. The transformations need to form a tree. If you interchange them, you may end up with a frame with two parents.
Originally posted by Felix Endres with karma: 6468 on 2017-05-03
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Josper on 2017-05-03:
Yes, I mean tf. The tutorial says we should send both geometry_msgs::TransformStamped and nav_msgs/Odometry. But from the image I posted, it only needs nav_msgs/Odometry. And in the tutorial, the geometry_msgs::TransformStamped contains all the same data as the nav_msgs/Odometry. So I'm confused about why we need to send it? | {
"domain": "robotics.stackexchange",
"id": 27799,
"tags": "ros"
} |
Difference between torque $\tau$ and angular velocity $\omega$ | Question: The title is a weird question, I know. But hear me out: take the equation that relates the tangential velocity $\vec v$ and the angular velocity $\vec \omega$ of an object moving on a circle with radius $r$.
$\vec v = \vec \omega × \vec r$
If we want to know the magnitude of the axial quantity $\omega$, we need to divide the tangential quantity $v$ by the radius $r$.
$v = \omega r \Leftrightarrow \omega = \frac{v}{r}$
Now let's take a look at the case of a torque.
$\vec \tau = \vec r × \vec F$
Here, we can just multiply the radius $r$ by the tangential quantity $F$ to get the axial quantity $\tau$.
$\tau = r F$
Why is it that we multiply in one case and divide in the other, if we want to know the magnitude of the axial quantity, even though the overall structure of these three vectors is the same in both cases? I mean, you can more or less make a naive one-to-one correspondence between $\vec \tau$ and $\vec \omega$, $\vec F$ and $\vec v$, and, well, $\vec r$ and $\vec r$, if you compare both scenarios. This naive comparison is depicted below. I mean, you can literally draw these two scenarios in the exact same diagram. But we still end up with two different equations. I find that a bit odd.
I reckon it has something to do with the fact that in the case of the velocity we have a differential relationship, which is not present in the case of the torque, i.e. the fact that $\vec v = \frac{d\vec r}{dt}$ but $\vec \tau \neq \frac{d \vec F}{dt}$. But I can't really pin down a complete explanation.
Answer: Here's one way to think about why they transform seemingly oppositely. Force and velocity form a pair in the sense that when multiplied you get power:
$$W = \int F \cdot v\,dt.$$
The same is true for torque and angular velocity:
$$W = \int \tau \cdot \omega\,dt.$$
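Putting the two magnitude relations from the question into either integrand shows the compensation explicitly:

$$\tau \, \omega = (r F)\left(\frac{v}{r}\right) = F v .$$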
So if, to go from force to torque, we multiply by $r$, we need to compensate by dividing the velocity by $r$. | {
"domain": "physics.stackexchange",
"id": 83841,
"tags": "newtonian-mechanics, rotational-dynamics, vectors, torque, angular-velocity"
} |
Does Heisenberg's energy-time uncertainty principle imply that quantum computing is no more efficient than classical computing? | Question: See http://arxiv.org/abs/quant-ph/0006080v1 "On Non Efficiency of Quantum Computer", by Robert Alicki. In this paper, the author argues using Heisenberg's energy-time uncertainty principle, that quantum computing is no more efficient than classical computing. This paper convinced me, but I'm just an amateur and also biased toward toward this view. I'm curious why the experts still believe that quantum computing is more efficient than classical computing, given the argument made in this paper in 2000.
Answer: The short answer is no.
Regarding the paper, I can't understand the logic. As far as I can tell, the author writes down some version of the time-energy uncertainty relation, then says "Hence, it is quite natural to investigate X" where X has little or nothing to do with the time-energy uncertainty relation.
The language and formulation of the paper are also not clear, for example the fundamental inequality Eq. 3 is not even precisely formulated. In my opinion, you should not take the paper seriously. You said in your question that the author convinced you of their premise, but I'm not even sure what the premise is. If you could state precisely what you're convinced of, I could try to address it more directly.
Here is a physical counter-example showing that quantum computation need not take excessive (exponential) physical resources. One proposal for quantum computation involves adiabatically moving topological excitations around each other in a two-dimensional piece of quantum matter. This is called topological quantum computation. We already have experimental examples, such as fractional quantum Hall fluids, which support these kinds of topological excitations. By slowly braiding these excitations around each other in a suitable piece of quantum matter we can produce any unitary transformation we want and hence do computation. However, this braiding does not take exponential time, nor does it require exponentially large energy (in fact, if the braiding is done adiabatically and interactions are short-ranged, the energy of the piece of quantum matter may not even change during the computation!)
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 4557,
"tags": "heisenberg-uncertainty-principle, quantum-computer"
} |
tf problems when combining robot_localization gps & navigation amcl stack | Question:
hi, Dear all,
I met tf problems when combining robot_localization + navsat + the navigation amcl stack. The tf transforms seem to be colliding with each other.
According to the r_l instructions, I set in ekf_template_local.yaml and ekf_template_global.yaml:
publish_tf: true
The gps node migration works well, but tf shows collisions when I run roswtf. The error messages are as follows:
**ERROR TF multiple authority contention:
* node [/ekf_localization_global] publishing transform [odom] with parent [map] already published by node [/amcl]
* node [/amcl] publishing transform [odom] with parent [map] already published by node [/ekf_localization_global]**
So I set in ekf_template_global.yaml:
publish_tf: false
so that only the amcl module publishes the tf (odom->map),
but then the warning info below often occurs (see the last lines):
process[ekf_localization_local-1]: started with pid [59323]
process[ekf_localization_global-2]: started with pid [59449]
process[navsat_transform_node-3]: started with pid [59596]
[ INFO] [1465373999.050320208, 1465356233.643751380]: Datum (latitude, longitude, altitude) is (30.587115, 103.987225, 452.160004)
[ INFO] [1465373999.050385885, 1465356233.643751380]: Datum UTM coordinate is (402897.868551, 3384282.066208)
[ INFO] [1465373999.050625052, 1465356233.654742509]: Initial odometry pose is Origin: (-6.2733690279887452945 -0.27920951194753762525 -0.029217864509362003606) Rotation (RPY): (-0.0066287586623609510289, -0.0080349835834440021948, -0.00022201279494916720432)
[ INFO] [1465373999.050853030, 1465356233.654742509]: Corrected for magnetic declination of 0.000000 and user-specified offset of 0.000000. Transform heading factor is now -0.000222
[ INFO] [1465373999.050906508, 1465356233.654742509]: Transform world frame pose is: Origin: (-6.2733690279887452945 -0.27920951194753762525 -0.029217864509362003606)
Rotation (RPY): (-0.0066287586623609510289, -0.0080349835834440021948, -0.00022201279494916720432)
[ INFO] [1465373999.050948531, 1465356233.654742509]: World frame->utm transform is Origin: (3384148.7263204208575 -402893.25039371155435 29408.271071974020742)
Rotation (RPY): (-0.0066305423673257484971, -0.0080335117269795773554, 2.2864374053448203049e-09)
**[ WARN] [1465373999.084253427, 1465356233.686713222]: Could not obtain transform from map to base_link. Error was Could not find a connection between 'base_link' and 'map' because they are not part of the same tree.Tf has two or more unconnected trees.
[ WARN] [1465373999.084397737, 1465356233.686713222]: Could not obtain map->base_link transform. Will not remove offset of navsat device from robot's origin.
**
With that, the tf works fine and the tf data is correct, although it drifts a little.
So, to avoid the warning, I changed to another method: I set in ekf_template_global.yaml:
publish_tf: true
and changed amcl.cfg to set:
<param name="tf_broadcast" value="false"/>
When I run roswtf:
WARNING The following nodes are unexpectedly connected:
* /ekf_localization_global->/view_frames_61426_1465374081884 (/tf)
The tf collision warning disappeared, but the tf data is wrong: the z-axis data of odom and base_link has big issues; the tf data is bad.
What should I do to make everything work well? Please help me. Thank you very much!
launch file and bag file is at here:
launch files:
https://www.dropbox.com/s/ahk7nw9kg1eysuc/launch.zip?dl=0
rosbag file:
https://www.dropbox.com/s/htth1w41c58bcvo/ugv_bag.zip?dl=0
In bag file, I used following 4 topics:
lms1_scan
imu_topic
imu_odom_topic
imu_nav_topic
Thank you for all your help, I really appreciate it! Thanks!
update1:
bad effect of video record at here:
https://youtu.be/S8ueiIQoQ_A
map file at here:
https://www.dropbox.com/s/btd1nso7dlqodcg/mymap.zip?dl=0
params file of r_l at here:
https://www.dropbox.com/s/zd81adtnx3g1bpq/params.zip?dl=0
I also found another question and did as @Tom Moore instructed there. The question is at: http://answers.ros.org/question/218137/using-robot_localization-with-amcl/ . I did as below:
turn off map->odom tf transform in amcl;
include amcl_pose topic and /odometry/gps topic (it is in map frame) as input to the second ekf_localization_global node;
remap move_base odom topic to /odometry/filtered topic;
node graph pic is at here:
https://www.dropbox.com/s/ygo9mqpd3zib67t/222.fw.png?dl=0
But I still get a very bad result.
Could you please instruct me?
Thank you all very much.
I really appreciate all your help!
Update 2:
I re-checked the original data of imu_topic and imu_odom_topic; the data looks good, and the original imu_odom_topic can draw the good trajectory I want.
I launched the r_l module together with the navsat module, and set
<param name="use_odometry_yaw" value="false"/>
The terminal shows the error info in this snapshot:
https://www.dropbox.com/s/rxe4n9fmsiaq8a1/33.png?dl=0
It shows NaN values on the odometry/gps topic.
I returned to rviz and found that in tf, odom drifts hugely and jumps randomly; maybe something is wrong with the odom->map tf transform in the system?
Please help me. Thanks!
video at here: https://youtu.be/r9PJH2DSbhs
Originally posted by asimay_y on ROS Answers with karma: 255 on 2016-06-08
Post score: 2
Original comments
Comment by Tom Moore on 2016-06-15:
This is quite a long question! I will take a look at it soon. In the meantime, I suggest you make sure all of your sensor data conforms to standards, and then add one sensor at a time to one EKF, verify the results, then add more. Don't just throw it all together at once.
Comment by asimay_y on 2016-06-16:
hi, dear @Tom Moore, I'm sorry for the long question. Yes, I checked that the data conforms and confirmed it is in the ENU frame, according to your wiki. And I tried many times, step by step, integrating one sensor and then another; I still fail: the odom drifts very badly at the end, at about 163 sec, especially when the UAV turns
Answer:
OK, I took a look at this, and you have a few things wrong:
Your IMU topic has an empty quaternion of all zeros. That's going to break navsat_transform_node, and is the reason you were seeing NaN values. Since your IMU odometry topic appears to have a working orientation, I used that instead. Note that you will need to add a value for magnetic_declination for the settings for navsat_transform_node for your location. I'm assuming the IMU has a magnetometer from which it derives its yaw, correct? If not, then navsat_transform_node won't work, as you need an earth-referenced yaw measurement.
Your base_link->nav_link transform has an orientation in it. Note that this is meaningless for a navsat device, as its mounting orientation doesn't affect its measurement. You only need the linear offset in the static transform. That is not documented, and I've updated navsat_transform_node to remove the orientation component of that transform before applying it.
Your IMU is clearly measuring acceleration due to gravity, but you didn't enable imu0_remove_gravitational_acceleration. You can read more about that parameter on the wiki. This means that you may experience drift, particularly in the Z axis.
I put together a launch and config file here. Note that it also automatically plays your bag file at 10x speed, though you will need to update the path to the file itself. You will need to download the latest r_l source for this to work. It worked just fine for me. Here's a screenshot of the data in the map frame:
...and in the odom frame:
If you see any remaining drift (like at the bottom of the odom frame image), then it's a result of your imu_odom topic, which is likely integrating accelerations onboard the IMU, in which case drift may be inevitable. If you feed the filter velocity data that is incorrect, it won't fix it for you. For example, right at the beginning of the bag file, here's what your imu_odom topic looks like:
header:
seq: 11903
stamp:
secs: 1465356225
nsecs: 286038297
frame_id: odom
child_frame_id: base_link
pose:
pose:
position:
x: -93.3040241217
y: 8.20038348484
z: 0.0
orientation:
x: -0.0
y: 0.0
z: 0.0303902439879
w: -0.999538109864
covariance: [0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001]
twist:
twist:
linear:
x: -2.91783475405
y: -0.091434393804
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0
covariance: [0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001]
You have a linear velocity of nearly -3 meters per second in the X axis, so the state estimate is going to move backwards.
Anyway, I'll let you look at the launch and config files and work through it yourself. The best advice I can offer with amcl in the loop is to make sure you turn off amcl's tf broadcasting, and make sure its state estimate agrees with the GPS. If they diverge over time, you're going to see it jump back and forth rapidly between them.
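Concretely, turning off amcl's tf broadcasting is a one-parameter change in the amcl launch fragment (node wrapper shown for context only), while `publish_tf: true` stays in the global EKF's YAML, so that exactly one node owns the map->odom transform:

```xml
<node pkg="amcl" type="amcl" name="amcl">
  <!-- let ekf_localization_global be the sole publisher of map->odom -->
  <param name="tf_broadcast" value="false"/>
</node>
```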
Originally posted by Tom Moore with karma: 13689 on 2016-06-14
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by asimay_y on 2016-06-14:
dear @Tom Moore, I want to set up the awesome robot_localization package together with amcl and the navigation stack to do an outdoor UAV study. I have made a map with gmapping. My case is the same as the question you gave instructions in:
http://answers.ros.org/question/218137/using-robot_localization-with-amcl/
Comment by asimay_y on 2016-06-14:
i update new info above, please help me, thank you!
Comment by asimay_y on 2016-06-17:
sorry. type mistake. For UGV study. can you help me dear @Tom Moore?
Comment by Tom Moore on 2016-06-17:
When I get some time, yes. As I said, this is quite a large question and is not something that will be trivial for me to look into. One thing that I notice is that you have something called "imu_odom_topic." Do you only have an IMU and a GPS? Also, turn off linear acceleration in your IMU config.
Comment by asimay_y on 2016-06-17:
dear @Tom Moore, Thank you very much for your answer. I really appreciate your help, really.
Yes, I have only 1 equipment: INS/GPS, it can output IMU data & GPS data & velocity data, so I feed velocity data into imu_odom_topic to make a odometry data. Could you please tell me why turn off accelerati
Comment by asimay_y on 2016-06-17:
I turn off linear acceleration in both local and global EKF cfg, but it still seems too drift..
please see:
https://www.dropbox.com/s/blixgcome5i66gm/55.png?dl=0
Comment by Tom Moore on 2016-06-17:
I'm guessing that's because your IMU odometry topic is using the accelerometers anyway. Have you tried just plotting the imu_odom topic?
Comment by Tom Moore on 2016-06-17:
Also, do a rostopic echo on the imu_odom topic, and watch the linear velocities.
Comment by asimay_y on 2016-06-19:
hi, dear @Tom Moore, I tried just integrate imu_odom topic into EKF node, without accelerometer, but get even worse effect.. and I plot the imu_odom velocity, please snapshot:
https://www.dropbox.com/s/8vywutd8x0nalm8/linear_image1.png?dl=0
Comment by asimay_y on 2016-06-19:
and accelerometer plot: https://www.dropbox.com/s/8fh75ovgri0nr0o/acc_image.png?dl=0
https://www.dropbox.com/s/8fh75ovgri0nr0o/acc_image.png?dl=0
effect in rviz: https://www.dropbox.com/s/a1xlwrqunzkgpcf/2222.png?dl=0
Comment by asimay_y on 2016-06-21:
dear @Tom Moore, Thank you very much for your help! I'm lucky, and god send you to help me. :) I read your comments again and again and carefully, and modify the related wrong config parameters, and finally get the result the same as you showed to me. The trajectory of base_link in map & odom
Comment by asimay_y on 2016-06-21:
The trajectory of base_link in map & odom are almost same and overlap(purple and green). But I have one more issue puzzled me: why odom origin of tf tree drift so much in map?(I removed acceleration integration of imu) Please see snapshot: https://www.dropbox.com/s/pmvqfvrsv0fxpqx/66.png?dl=0
Comment by asimay_y on 2016-06-21:
Thank you very much for your help! | {
"domain": "robotics.stackexchange",
"id": 24864,
"tags": "navigation, gps, navsat-transform, robot-localization, amcl"
} |
Using Method Chaining To Alter Last Item Added To Collection | Question: There is a Name class with properties that represent different components that make up a person's name. A Name object requires a FirstName and Surname. All other fields are optional.
class Name
{
public string FirstName { get; set; } = String.Empty;
public string Surname { get; set; } = String.Empty;
public string Rank { get; set; } = String.Empty;
public string Suffix { get; set; } = String.Empty;
public string NickName { get; set; } = String.Empty;
public string MiddleName { get; set; } = String.Empty;
public Name(string firstName, string surname)
{
this.FirstName = firstName;
this.Surname = surname;
}
}
Also, there is a NamesBuilder class that has a List<Name> collection. It has a GetListAsString method which iterates over the collection and builds a single string with the list of names:
class NamesBuilder
{
List<Name> Names;
public NamesBuilder()
{
Names = new List<Name>();
}
public NamesBuilder AddName(string firstName, string surname)
{
Names.Add(new Name(firstName, surname));
return this;
}
public string GetListAsString()
{
StringBuilder sb = new StringBuilder();
foreach (Name name in Names)
{
//add Rank if exists
if (name.Rank.Length > 0)
{
sb.Append(name.Rank);
sb.Append(" ");
}
//add Firstname
sb.Append(name.FirstName);
sb.Append(" ");
//add MiddleName if exists
if (name.MiddleName.Length > 0)
{
sb.Append(name.MiddleName);
sb.Append(" ");
}
//add NickName if exists
if (name.NickName.Length > 0)
{
sb.Append((char)34);
sb.Append(name.NickName);
sb.Append((char)34);
sb.Append(" ");
}
//add Surname
sb.Append(name.Surname);
//add Suffix if exists
if (name.Suffix.Length > 0)
{
sb.Append(" ");
sb.Append(name.Suffix);
}
//add new line
sb.AppendLine();
}
return sb.ToString();
}
}
This is called using method chaining:
static void Main(string[] args)
{
NamesBuilder nb = new NamesBuilder()
.AddName("James", "Kirk")
.AddName("Montgomery", "Scott")
.AddName("Nyota", "Uhura")
.AddName("Leonard", "McCoy")
.AddName("Christine", "Chapel");
Console.WriteLine(nb.GetListAsString());
}
And this outputs:
James Kirk
Montgomery Scott
Nyota Uhura
Leonard McCoy
Christine Chapel
So the missing functionality is the ability to add optional Rank, Suffix, NickName and MiddleName details to each name. My initial thought was to change the AddName method to multiple optional parameters:
public NamesBuilder AddName(string firstName, string surname, string rank = "", string nickName = "", string middleName = "", string suffix = "")
However, this seems very verbose and inelegant especially if only the suffix needs to be added and all previous optional parameters are not applicable to that particular name.
My approach is to create new methods in the NamesBuilder class that would append those details to the last item added to the collection.
Here is the amended code of the caller illustrating this
static void Main(string[] args)
{
NamesBuilder nb = new NamesBuilder()
.AddName("James", "Kirk").SetRank("Capt").SetMiddleName("Tiberius")
.AddName("Montgomery", "Scott").SetNickName("Scotty").SetRank("Lt Cdr")
.AddName("Nyota", "Uhura").SetRank("Lt")
.AddName("Leonard", "McCoy").SetSuffix("MD").SetNickName("Bones").SetRank("Lt Cdr")
.AddName("Christine", "Chapel");
Console.WriteLine(nb.GetListAsString());
}
And here is the updated NamesBuilder class:
class NamesBuilder
{
List<Name> Names;
public NamesBuilder()
{
Names = new List<Name>();
}
public NamesBuilder AddName(string firstName, string surname)
{
Names.Add(new Name(firstName, surname));
return this;
}
public NamesBuilder SetRank(string rank)
{
Names[Names.Count - 1].Rank = rank;
return this;
}
public NamesBuilder SetSuffix(string suffix)
{
Names[Names.Count - 1].Suffix = suffix;
return this;
}
public NamesBuilder SetMiddleName(string middleName)
{
Names[Names.Count - 1].MiddleName = middleName;
return this;
}
public NamesBuilder SetNickName(string nickName)
{
Names[Names.Count - 1].NickName = nickName;
return this;
}
public string GetListAsString()
{
StringBuilder sb = new StringBuilder();
foreach (Name name in Names)
{
//add Title if exists
if (name.Rank.Length > 0)
{
sb.Append(name.Rank);
sb.Append(" ");
}
//add Firstname
sb.Append(name.FirstName);
sb.Append(" ");
//add MiddleName if exists
if (name.MiddleName.Length > 0)
{
sb.Append(name.MiddleName);
sb.Append(" ");
}
//add NickName if exists
if (name.NickName.Length > 0)
{
sb.Append((char)34);
sb.Append(name.NickName);
sb.Append((char)34);
sb.Append(" ");
}
//add Surname
sb.Append(name.Surname);
//add Suffix if exists
if (name.Suffix.Length > 0)
{
sb.Append(" ");
sb.Append(name.Suffix);
}
//add new line
sb.AppendLine();
}
return sb.ToString();
}
}
The output is now:
Capt James Tiberius Kirk
Lt Cdr Montgomery "Scotty" Scott
Lt Nyota Uhura
Lt Cdr Leonard "Bones" McCoy MD
Christine Chapel
I have never before used methods like this to alter the data of the most recent item added to a collection. It works and I think it looks much better than multiple optional parameters but I'd appreciate feedback.
Answer: Aside from optional arguments that may or may not be used, a fluent API is very useful when it comes to open-ended argument lists, and it is also easy to extend and maintain.
Your approach is very good. You might need to add some restrictions, though, in order to protect your class's accessibility. Currently, a Name can be changed from outside the NamesBuilder, which leaves your design vulnerable to unwanted exceptions.
What you need is to enclose Name inside the builder and use it internally; it doesn't need to be exposed, so restrict its access so that it can only be used through the NamesBuilder class.
Your current API is fine if it won't gain much more functionality, but if you have other requirements (beyond adding names), I would suggest wrapping the current work inside an internal class (inside the NamesBuilder) that handles the required functionality. For instance, you might implement one class to handle adding new names and another to process actions such as formatting, all living under the main class, which acts as the container that holds them and navigates between them.
Why GetListAsString() rather than ToString()?
Since you've already defaulted your properties to string.Empty, you can override ToString() on the Name class like this:
public override string ToString()
{
// explicit spaces after FirstName and before Suffix are needed because only
// the optional fields receive a trailing space from the Add helper below
return $"{Rank}{FirstName} {MiddleName}{NickName}{Surname} {Suffix}".Trim();
}
then in your NamesBuilder class do this (here _current is a private Name field that AddName sets to the most recently added Name):
private Name _current;
private string Add(string text)
{
return $"{text} ";
}
public NamesBuilder SetRank(string rank)
{
_current.Rank = Add(rank);
return this;
}
public override string ToString()
{
return string.Join(Environment.NewLine, Names);
}
Now, just call ToString() to get the concatenated string.
The Add(string text) helper just appends a trailing space.
Lastly, there is no validation at all. You should validate each string and make sure it fits your requirements before assigning it. | {
"domain": "codereview.stackexchange",
"id": 39093,
"tags": "c#, object-oriented, design-patterns, collections"
} |
The 3 fictitious forces of the rotating frame in Quantum Mechanics | Question: I am wondering how the 3 typical fictitious forces (centrifugal, Coriolis, Euler) typical of a rotating frame manifest themselves in Quantum Mechanics.
Background on the classical point particle (see also this nice answer for a deeper geometrical discussion): We have an inertial frame centered in $O$ and a rotating one centered at $O'$, such that the axes are mutually oriented as $\hat{{\bf e}}_i = R \, \hat{{\bf e}}'_i$, where $R$ is a rotation matrix and $i=1,2,3$. Given a point $\bf x$ as seen by $O$, we have ${\bf{x}} = {\bf{r}}+R \, {\bf{x}}'$, where $\bf r$ is the position of $O'$ measured by $O$. Now, we can introduce the "angular velocity matrix" $W=R^{-1}\dot{R}$, namely $\dot{R} = R \,W$ and $\ddot{R} = R(W^2+\dot{W})$. In this way, for a given vector $\bf u$, we have that $W{\bf u} ={\bf w} \times {\bf u}$, where ${\bf w}$ is the usual "angular velocity vector" associated with the fact that $R$ may have a temporal dependence ($W$ and $\bf{w}$ are related by Hodge duality). Now, we just have to take temporal derivatives ($\bf{r}$ is constant): $$ \dot{{\bf x}} = R (\dot{\bf x}' + W{\bf x}') $$ $$ \ddot{{\bf x}} = R (\ddot{\bf x}' + 2 W\dot{\bf x}'+W^2{\bf x}'+\dot{W}{\bf x}') $$ where the last term $\dot{W}{\bf x}'$ is the so-called "Euler force" (it is less famous than the Coriolis and centrifugal forces because you need a non-constant angular velocity of the rotating frame). Setting $R=1$ at the given time, the above equations read $$ \dot{{\bf x}} = \dot{\bf x}' + {\bf w} \times {\bf x}' $$ $$ \ddot{{\bf x}} = \ddot{\bf x}'+ 2 {\bf w} \times \dot{\bf x}'+ {\bf w} \times ( {\bf w} \times {\bf x}')+\dot{ {\bf w} }\times{\bf x}' $$ namely, $$ \ddot{{\bf x}} = \ddot{\bf x}' +``Coriolis"+ ``centrifugal"+ ``Euler" $$
Question: How do ''Coriolis'', ''centrifugal'' and ''Euler'' manifest themselves in Quantum mechanics? Assuming, for simplicity, a spin-$0$ wave function, I expect the final answer to be consistent with the classical-field-theory result for a complex scalar field described in Section V here.
Consideration #1: The combined effect of the 3 fictitious forces should somehow be already present in the Schrodinger equation (not under the direct form of "forces"). We should find something resembling the classical equations above when the Ehrenfest theorem is applied, or when working in the Heisenberg picture (in particular, I am thinking about the time derivative of the momentum operator: in this case, some "fictitious force" operator should appear). A concrete example of a system subject to apparent forces in QM is the rotating oscillator, see this paper.
Consideration #2: the change of frame (to an inertial or a non-inertial one) should preserve the probability, and so it should be implemented by means of a unitary transformation $U_t$, which is basically a rotation parametrized by time. If the rotation axis is not constant, $U_t$ should be expressed in terms of a T-ordered exponential, otherwise a simpler
$$
U_t = e^{\frac{-i}{\hbar} L_z \int_0^t \Omega(t') dt'}
$$
could do the job (assuming that the non-inertial frame is rotating along the $z$-axis). Now we can start from the Schrodinger equation in the inertial frame,
$$
i \hbar \partial_t \psi( {\bf x} ,t) = H \psi( {\bf x} ,t)
$$
and obtain
$$
i \hbar (\partial_t \psi' + \psi' U_t \partial_t U_t^*)= H' \psi'
$$
where $ \psi ' = U_t \psi $ and $H' = U_t H U_t^* $. So, there is an extra term related to $ \partial_t U_t $, that we usually don't have when we perform a time-independent rotation (some sign may be wrong, this is just to give the idea). The Euler effect is probably encoded (also) into the term $U_t \partial_t U_t^* \propto L_z \Omega(t)$. Please correct me if my line of reasoning is wrong (I see the analogy with this answer).
Related: clearly, if $U$ is not a time-dependent rotation but a boost, then we're just moving from one inertial frame to another (see this, this and this or these notes).
Answer: Here is one approach:
The Lagrangian for a point particle in an accelerated reference frame is$^1$
$$ L ~=~\frac{1}{2}m\vec{v}^2+m\vec{v}\cdot (\vec{\Omega} \times \vec{r})-V(\vec{r}),$$
where
$$ V(\vec{r})~=~m\vec{A}\cdot \vec{r} -\frac{m}{2} (\vec{\Omega} \times \vec{r})^2,$$
cf. my Phys.SE answer here.
So the Hamiltonian becomes$^1$
$$H~=~ \frac{1}{2m}(\vec{p}- m\vec{\Omega} \times \vec{r})^2 + V(\vec{r}). $$
Next write down the TDSE.
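Spelling out that last step (a sketch, in the position representation $\vec{p}\to -i\hbar\nabla$), the TDSE reads
$$ i\hbar\,\partial_t\psi(\vec{r},t)~=~\left[\frac{1}{2m}\left(-i\hbar\nabla - m\vec{\Omega}\times\vec{r}\right)^2 + V(\vec{r})\right]\psi(\vec{r},t). $$
Expanding the square (the two factors in the cross term commute) and using $\vec{p}\cdot(\vec{\Omega}\times\vec{r})=\vec{\Omega}\cdot(\vec{r}\times\vec{p})=\vec{\Omega}\cdot\vec{L}$, the $(\vec{\Omega}\times\vec{r})^2$ pieces cancel against the one in $V$, leaving
$$ H~=~\frac{\vec{p}^2}{2m}-\vec{\Omega}\cdot\vec{L}+m\vec{A}\cdot\vec{r}, $$
so the Coriolis/Euler physics sits in the $-\vec{\Omega}\cdot\vec{L}$ coupling (with $\vec{\Omega}$ possibly time-dependent), consistent with the $L_z\,\Omega(t)$ generator anticipated in the question.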
--
$^1$ If the angular velocity $\vec{\Omega}$ of the reference frame depends explicitly on time, then the Lagrangian $L$ and the Hamiltonian $H$ depend explicitly on time, and there will be an Euler force. | {
"domain": "physics.stackexchange",
"id": 78613,
"tags": "reference-frames, schroedinger-equation, centrifugal-force, unitarity, coriolis-effect"
} |
rtabmap with stereovision | Question:
I am working on rtabmap with stereo vision. I am new to ROS and rtabmap, and I am studying the rtabmap tutorials. I installed the rtabmap_ros package (`$ sudo apt-get install ros-indigo-rtabmap-ros`), and when I run the
Hand-held Stereo Mapping example for ROS (processing a directory of stereo images), the stereo_20Hz.launch file fails to run with this error:
... logging to /home/zahra/.ros/log/709c28dc-a145-11e7-8f93-c8600022c399/roslaunch-zahra-9839.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
unused args [rgbd] for include of [/opt/ros/indigo/share/rtabmap_ros/launch/rtabmap.launch]
The traceback for the exception was written to the log file
please help me!
Originally posted by zahra.zahra on ROS Answers with karma: 1 on 2017-09-26
Post score: 0
Original comments
Comment by PeteBlackerThe3rd on 2017-09-26:
Can you post the log file it refers to. it should be located in the .ros/logs folder in your home directory
Comment by psammut on 2017-09-26:
Also, can you describe your setup more? What stereo camera are you using? Can you get the stereo camera to just publish images correctly? If you are using a rosbag, can you get that to play through the images correctly on its own?
Comment by zahra.zahra on 2017-09-26:
I am using the zip file from the rtabmap site that contains left and right images (stereo_20Hz.zip). I am following the rtabmap tutorials and using files from that site.
Comment by zahra.zahra on 2017-09-26:
I don't know what's happening; rtabmap_ros works correctly for RGB-D mapping but doesn't work for stereo mapping!
Comment by zahra.zahra on 2017-09-26:
How can I see the log file?
Comment by zahra.zahra on 2017-09-26:
I accessed the log file from the terminal with this command:
zahra@zahra:~$ cd ~/.ros/log
zahra@zahra:~/.ros/log$ ls
Is that correct?
Comment by matlabbe on 2017-09-26:
Is it this tutorial?
Comment by zahra.zahra on 2017-09-27:
hello matlabbe
yes that is!
Answer:
The rgbd argument error is a bug in the released launch files for Indigo (this is fixed in recent versions). I updated the launch file of the tutorial to be compatible with current Indigo release.
You may have better results with Kinetic/Lunar versions though, here is an example of results:
cheers,
Mathieu
Originally posted by matlabbe with karma: 6409 on 2017-09-27
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by zahra.zahra on 2017-09-28:
Thanks Mathieu, my problem is solved!
I also have one question: how can I shrink the 3D map so that the whole scene is shown? My 3D map is very large and doesn't show the whole scene.
Comment by matlabbe on 2017-09-28:
Do you mean the 3D map viewport? You can zoom in/out with the mouse wheel.
Comment by zahra.zahra on 2017-09-29:
thanks Mathieu. | {
"domain": "robotics.stackexchange",
"id": 28927,
"tags": "slam, navigation, rtabmap-ros"
} |
Can vectors in physics be represented by complex numbers and can they be divided? | Question: Below is attached for reference, but the question is simply about whether vectors used in physics in a vector space can be represented by complex numbers and whether they can be divided.
In abstract algebra, a field is an algebraic structure with notions of addition, subtraction, multiplication, and division, satisfying certain axioms. The most commonly used fields are the field of real numbers, the field of complex numbers, and the field of rational numbers, but there are also finite fields, fields of functions, various algebraic number fields, p-adic fields, and so forth.
Any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. The theory of field extensions (including Galois theory) involves the roots of polynomials with coefficients in a field; among other results, this theory leads to impossibility proofs for the classical problems of angle trisection and squaring the circle with a compass and straightedge, as well as a proof of the Abel–Ruffini theorem on the algebraic insolubility of quintic equations. In modern mathematics, the theory of fields (or field theory) plays an essential role in number theory and algebraic geometry.
In mathematics and physics, a scalar field associates a scalar value to every point in a space. The scalar may either be a mathematical number, or a physical quantity. Scalar fields are required to be coordinate-independent, meaning that any two observers using the same units will agree on the value of the scalar field at the same point in space (or spacetime). Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields, such as the Higgs field. These fields are the subject of scalar field theory.
In mathematics, an algebra over a field is a vector space equipped with a bilinear vector product. That is to say, it is an algebraic structure consisting of a vector space together with an operation, usually called multiplication, that combines any two vectors to form a third vector; to qualify as an algebra, this multiplication must satisfy certain compatibility axioms with the given vector space structure, such as distributivity. In other words, an algebra over a field is a set together with operations of multiplication, addition, and scalar multiplication by elements of the field.
A vector space is a mathematical structure formed by a collection of vectors: objects that may be added together and multiplied ("scaled") by numbers, called scalars in this context. Scalars are often taken to be real numbers, but one may also consider vector spaces with scalar multiplication by complex numbers, rational numbers, or even more general fields instead. The operations of vector addition and scalar multiplication have to satisfy certain requirements, called axioms,... An example of a vector space is that of Euclidean vectors which are often used to represent physical quantities such as forces: any two forces (of the same type) can be added to yield a third, and the multiplication of a force vector by a real factor is another force vector. In the same vein, but in more geometric parlance, vectors representing displacements in the plane or in three-dimensional space also form vector spaces.
In classical mechanics as in physics, the field is not real, but merely a model describing the effects of gravity. The field can be determined using Newton's law of universal gravitation. Determined in this way, the gravitational field around a single particle is a vector field consisting at every point of a vector pointing directly towards the particle. The magnitude of the field at every point is calculated applying the universal law, and represents the force per unit mass on any object at that point in space. The field around multiple particles is merely the vector sum of the fields around each individual particle. An object in such a field will experience a force that equals the vector sum of the forces it would feel in these individual fields.
Because the force field is conservative, there is a scalar potential energy per unit mass at each point in space associated with the force fields, this is called gravitational potential.
Gauss' law for gravity is mathematically equivalent to Newton's law of universal gravitation, but is stated directly as vector calculus properties of the gravitational field.
Answer: Can vectors in physics be represented by complex numbers?
Absolutely. There exists a direct isomorphism between the 2D Euclidean vector space and the Argand plane, for a start.
In fact, it is possible to talk of mathematical objects called quaternions and use quaternion algebra analogously to vector algebra. Historically quaternions were used to represent geometrical operations and transformations in 3D space - in the days before vector algebra. (They still are used, especially in areas such as computer graphics where they offer one or two advantages over the simpler world of vectors.) In any case, the relationship to vector algebra is a very close one.
Can vectors in physics be divided?
In general, no, vector-vector division is not a well-defined operation. At least, not within the bounds of linear algebra. i.e. There exist none or multiple solutions to the equation $\vec{y} = \mathbb{A} \vec{x}$. (See the Wolfram page.)
That said, the concept of vector division has an interesting relationship with your first question. If we map vectors to complex numbers (or to quaternions in more than two dimensions), we can use complex division or quaternion algebra, respectively, to define an analogous "vector division" operation. Note that quaternions can in fact be extended to higher dimensions, which allows for interesting possibilities; the study of this falls under the area of Clifford algebras.
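To make the 2D isomorphism concrete, here is a small Python sketch (values chosen purely for illustration) that treats plane vectors as complex numbers; in the mapped algebra, division is well defined and recovers the rotation-plus-scaling relating two vectors:

```python
import cmath
import math

# Represent the plane vectors (1, 0) and (0, 2) as complex numbers.
u = complex(1, 0)
v = complex(0, 2)      # u rotated by 90 degrees and scaled by 2

# Complex division is well defined (for u != 0) and recovers the
# rotation-plus-scaling that maps u onto v.
q = v / u
print(abs(q))          # scale factor: 2.0
print(cmath.phase(q))  # rotation angle: pi/2
print(math.isclose(cmath.phase(q), math.pi / 2))
```

This is the sense in which the "analogous vector division" works: it is division in the mapped algebra (complex numbers, or quaternions in 3D), not an operation intrinsic to the vector space itself.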
(Note: vector-scalar division is of course well-defined, as is point-wise division of equal-length vectors, but I presume you are not referring to that.) | {
"domain": "physics.stackexchange",
"id": 331,
"tags": "mathematics, vectors"
} |
Is it true that the 3 body problem can't be solved using the four basic functions, radicals, and integrals? | Question: The two-body problem can be completely solved via two one-body problems, which only uses the four basic binary functions. However, the three-body problem cannot be solved with these functions and first-order integrals. So I am wondering, is there any finite numerical solution to the three-body problem?
Answer: In a sense, even solving the two-body problem as a function of time is unsolvable in terms of the elementary functions. The problem is that the solution involves inverting Kepler's equation, $M = E - e \sin E$. This inverse function is transcendental and cannot be expressed in terms of the elementary functions. That said, it's fairly easy to solve for $E$ to an arbitrary degree of precision.
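To illustrate the "arbitrary degree of precision" remark, here is a minimal Newton-iteration sketch for inverting Kepler's equation (the function name, tolerances, and starting-guess heuristic are illustrative choices, not part of the original answer):

```python
import math

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    """Solve M = E - e*sin(E) for E by Newton's method."""
    E = M if e < 0.8 else math.pi   # common starting guess
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M
        fp = 1.0 - e * math.cos(E)  # derivative, never zero for e < 1
        dE = f / fp
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Round-trip check: plugging E back in recovers the mean anomaly M.
E = eccentric_anomaly(1.2, 0.3)
print(E - 0.3 * math.sin(E))  # ~1.2
```

No finite combination of elementary functions produces $E$ exactly, but a handful of Newton steps reaches machine precision.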
Regarding the three-body problem, there are two special cases that are trivially solvable. These are the triangular Lagrange points (L4 and L5) of the restricted circular three-body problem. The three collinear Lagrange points are solutions of fifth-order polynomials, and those solutions cannot be expressed using the elementary functions. But once again, it's fairly easy to solve for the locations of the three collinear Lagrange points using approximation techniques.
Those restricted three body problem Lagrange points are special cases. The generic case of the three body problem is notoriously unsolvable in terms of the elementary functions. There is an infinite series solution, but nobody uses it. Over a century ago there was a prize for solving the three body problem. Karl Frithiof Sundman was awarded that prize in 1912 for showing that a solution exists in the form of an infinite series in which the terms are ever increasing powers of the cube root of time. The reason no one uses this solution is that the number of required terms can be huge. Eight million terms is not near enough. It's more on the order of $10^{8000000}$ terms. | {
"domain": "astronomy.stackexchange",
"id": 6068,
"tags": "gravity, mathematics, n-body-simulations"
} |
Question about joinstate publisher and navigation? | Question:
hi ,
I have a real robot and I'm trying to work with the navigation stack. I don't have a URDF model and I'm wondering whether navigation and localization depend on it. I also want to know whether I have to use the joint_state_publisher and robot_state_publisher even though I don't have a 3D model; are they essential to the navigation stack?
thank you for answering
Originally posted by kesuke on ROS Answers with karma: 58 on 2018-08-02
Post score: 0
Answer:
Navigation only depends on having a TF frame for your robot and transforms for all the sensor data, but a full 3D model is not necessary.
Originally posted by David Lu with karma: 10932 on 2018-08-02
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by kesuke on 2018-08-02:
Thank you. I ask because I have some trouble with the localization (I'm using a Hokuyo laser, IMU, and encoders) and I was wondering if it helps to have a 3D model. | {
"domain": "robotics.stackexchange",
"id": 31450,
"tags": "navigation, joint-state-publisher, ros-indigo"
} |
A question from S. Weinberg's book (Sec. 2.7) | Question: S. Weinberg in his book "The quantum theory of fields" page 82 says: the elements $T,\bar{T}$, etc, of the symmetry group may be represented on the physical Hilbert space by unitary operators $U(T),U(\bar{T})$, etc. (by Wigner's theorem) which satisfy the composition rule $$U(T)U(\bar{T})=\exp \big( i\phi (T,\bar{T})\big) U(T\bar{T}),\quad (*)$$ with $\phi$ a real phase. When either $\bar{T}$ or $T$ is the identity, the phase $\phi$ must clearly vanish $$\phi (T,1)=\phi (1,\bar{T})=0.$$
But why?
My try: By the relation $(*)$, we have $U(T)U(1)=\exp \big( i\phi (T,1)\big) U(T)=U(T)\exp \big( i\phi (T,1)\big) $. Since $U(T)$ is unitary, it is invertible, so $U(1)=\exp \big( i\phi (T,1)\big)$. But why is $\phi (T,1)=0$? $U(1)$ represents the symmetry element $1$ (by Wigner's theorem) in the symmetry group $\{ f:PH\to PH\}$, in which $f$ is a bijective function that preserves the probability, i.e., $[\Psi]=1[\Psi]=[U(1)\Psi]$ for all $[\Psi] \in PH$, the projective space.
Answer: Clearly $\phi(T,1)$ is independent of $T$. Let $\phi(T,1)=\phi_0$. You can rescale your unitary operators by a constant phase, ${\tilde U}(T) = e^{-i\phi_0} U(T)$; then ${\tilde U}(T){\tilde U}(1)=e^{-2i\phi_0}U(T)U(1)=e^{-2i\phi_0}e^{i\phi_0}U(T)={\tilde U}(T)$, so the new set of unitary operators satisfies ${\tilde U}(1) = I$ and the phase associated with the identity vanishes: the statement is simply a phase convention. | {
"domain": "physics.stackexchange",
"id": 100054,
"tags": "quantum-mechanics, hilbert-space, symmetry, group-theory"
} |
An explanation of the Spontaneous Emission | Question: The scalar product of two quantum states gives the probability of transition between those two states. In particular, for two stationary (eigen) states, the orthogonality implies that the probabillity of transition is zero:
$$(\Psi,\Phi)=\int \Psi^* \Phi dv=0 \tag 1\\$$
That being said, I have a problem with the so-called "spontaneous emission" in atoms, which occurs when an electron undergoes a transition between two stationary states without any perturbation or external agent (that's why it's called "spontaneous").
How is that possible?! According to $(1)$, there would be no transition at all because its probability is zero. But we know it's not and yet this probability is given in terms of the Einstein coefficient $A$.
Question: Is this a contradiction between theory and experiment? Or is it a flaw in the framework of the Schrodinger picture of QM? If so, how do we fix it?!
I really need a satisfactory explanation. Thanks!
Answer: The flaw in your analysis is the assumption that spontaneous emission occurs between stationary states in the complete absence of perturbations. In the real world there are always perturbations. If these perturbations are weak and not time varying, then we have what we call spontaneous emission. In actuality the upper level state is a mixed state with a small admixture of the lower state and the always present perturbations lead to random decays. | {
"domain": "physics.stackexchange",
"id": 41972,
"tags": "quantum-mechanics, atomic-physics"
} |
Does conditional gate collapse controller's superposition? | Question: I've created a simple circuit in Q-Kit to understand conditional gates and outputted states on each step:
In the beginning there is clear 00 state, which is the input
The first qubit is passed through the Hadamard gate, it gets into superposition, 00 and 10 become equally possible
The first qubit CNOTs the second one, probability of 00 is unchanged, but 10 and 11 are swapped
The first qubit passes the Hadamard again, and the probability of 00 is split between 00 and 10, and that of 11 between 01 and 11, as if the first qubit had stepped into superposition from a fixed state
Shouldn't the result be equally distributed between 00 and 01? The first qubit passes the Hadamard twice, which should put it into superposition and then back to its initial 0. The CNOT gate does not affect the control qubit, so its presence shouldn't affect the first qubit at all, but in fact it makes the first qubit act as if it were no longer in superposition. Does using a qubit as a control collapse its superposition?
Answer: $$
\begin{eqnarray*}
\mid 0 0 \rangle &\to& \frac{1}{\sqrt{2}} \mid 0 0 \rangle + \frac{1}{\sqrt{2}} \mid 1 0 \rangle\\
&\to& \frac{1}{\sqrt{2}} \mid 0 0 \rangle + \frac{1}{\sqrt{2}} \mid 1 1 \rangle\\
&\to& \frac{1}{\sqrt{4}} \mid 0 0 \rangle - \frac{1}{\sqrt{4}} \mid 1 1 \rangle + \frac{1}{\sqrt{4}} \mid 1 0 \rangle + \frac{1}{\sqrt{4}} \mid 0 1 \rangle
\end{eqnarray*}
$$
If the second line was $(\frac{1}{\sqrt{2}} \mid 0 \rangle + \frac{1}{\sqrt{2}} \mid 1 \rangle) \otimes v$, then applying the $H$ again would take it to $\mid 0 \rangle \otimes v$, but it is not. They are entangled.
It seems like you're thinking that the first qubit is unaffected by the CNOT, so the last two gates should commute. But they do not:
$$
\begin{eqnarray*}
H_1 CNOT_{12} &=& \frac{1}{\sqrt{2}} \begin{pmatrix}
1 & 0 & 0 & 1\\
0 & 1 & 1 & 0\\
1 & 0 & 0 & -1\\
0 & 1 & -1 & 0
\end{pmatrix}\\
CNOT_{12} H_1 &=& \frac{1}{\sqrt{2}} \begin{pmatrix}
1 & 0 & 1 & 0\\
0 & 1 & 0 & 1\\
0 & 1 & 0 & -1\\
1 & 0 & -1 & 0
\end{pmatrix}\\
\end{eqnarray*}
$$
It is in a superposition the entire time; there was no collapse. It's a non-obvious non-commutation. If you had $Id \otimes U$, that would be something that literally does not affect the first qubit, and it would commute with $H_1$. But CNOT is not of that form.
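For readers who want to check this concretely, here is a quick NumPy sketch of the circuit (the first qubit is the leftmost tensor factor, matching the basis order $\mid 00\rangle, \mid 01\rangle, \mid 10\rangle, \mid 11\rangle$ used above):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H1 = np.kron(H, np.eye(2))                # Hadamard on the first qubit
CNOT = np.array([[1, 0, 0, 0],            # first qubit controls the second
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

psi = np.array([1.0, 0.0, 0.0, 0.0])      # start in |00>
for gate in (H1, CNOT, H1):               # circuit: H, then CNOT, then H
    psi = gate @ psi

print(psi)               # amplitudes 0.5, 0.5, 0.5, -0.5 for |00>,|01>,|10>,|11>
print(np.abs(psi) ** 2)  # all four outcomes have probability 0.25
```

Every amplitude is nonzero throughout, so nothing collapses; the uniform outcome probabilities, with a relative minus sign on $\mid 11\rangle$, are just what entanglement plus the second Hadamard produce.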
You can think of it this way: at the beginning you have 2 qubits. After applying the first $H$ you still have 2 separate qubits. Then, after the CNOT, they are entangled, so you effectively have 1 qudit with $d=4$ because they have been combined. The last $H$ leaves it with $d=4$. At each gate, you assume a worst-case scenario for the entanglement structure. | {
"domain": "quantumcomputing.stackexchange",
"id": 293,
"tags": "quantum-gate, qiskit, superposition"
} |
Finding a target vector through applications of a binary operator | Question: Let $\mathbb{X} = \mathbb{N} \cap [0, 255]$. I am given three vectors in $\mathbb{X}^3$ which I will denote by $v_i = [x_i, y_i, z_i]^T$ for $i \in \{1, 2, 3\}$. Now we have a binary operator $\oplus$ defined by $w = u \oplus v = \lfloor0.9 \cdot u\rfloor + \lfloor0.1 \cdot v\rfloor$ (floors taken componentwise), where $u$ and $v$ are in $\mathbb{X}^3$. Given a target vector $t \in \mathbb{X}^3$, the goal is to find a sequence of applications of $\oplus$, starting only with the vectors $v_1, v_2, v_3$, that yields $t$. We are also given a promise that such a sequence of applications exists. How quickly can I find this sequence? It is worth noting that $\oplus$ is non-commutative and closed on $\mathbb{X}^3$. In my case, I am given specific values for $v_1, v_2, v_3,$ and $t$:
$$\begin{align*}v_1 &=[150, 0, 255] \\ v_2 &= [255, 150, 0] \\ v_3 &= [0, 255, 150] \\ t &= [62, 63, 184]\end{align*}$$
The obvious thing to try was a memoization / search approach where we keep a data structure containing each vector seen so far (initially containing only $v_1, v_2, v_3$) and recursively apply the operator to each pair of vectors stored in the data structure until the target is found. However, this appears to be much too time consuming, as the number of possibilities grows very quickly. Is there any better approach one might try? It doesn't help that there appears to be little structure; for example, starting with our three vectors, there appear to be many combinations that we cannot even form.
I am also interested in knowing what happens if we are not given the promise that $t$ can be obtained from a series of applications of $\bigoplus$ on our three vectors. If we are not given such a promise, is there any easy way to at least determine if such a sequence exists (even though finding it might be difficult)?
Answer: Here is a simple solution that I suspect will be fast enough. Use breadth-first search on a graph where the vertex set is $X^3$. In other words, each possible vector $w \in X^3$ is a vertex. The vertex $w$ has 6 edges out of it going to $w \oplus v_1$, $w \oplus v_2$, $w \oplus v_3$, $v_1 \oplus w$, $v_2 \oplus w$, $v_3 \oplus w$. Now use BFS to find the shortest path from $v_1$ or $v_2$ or $v_3$ to $t$. This can be done by initially marking $v_1,v_2,v_3$ as visited and having distance 0, initializing the queue to contain $v_1,v_2,v_3$, and then running BFS until you reach the goal vertex $t$.
Why does this work? Define $d(v,w)$ to be the distance from $v$ to $w$, i.e., the length of the shortest path from $v$ to $w$ in this graph. Define $d(w) = \min(d(v_1,w),d(v_2,w),d(v_3,w))$. Note that any path of length $d$ corresponds to an expression that uses $d$ applications of the $\oplus$ operator, and vice versa. Therefore, $d(t)$ -- the distance to the goal vertex $t$ -- represents the minimum number of applications needed to reach $t$, and the corresponding path yields an expression that evaluates to $t$ using the minimal number of applications of $\oplus$.
How efficient is this? BFS runs in time linear in the number of vertices plus the number of edges. Here we have $2^{24}$ vertices and $6 \times 2^{24}$ edges. Therefore, in the worst case we perform something like $c \times 6 \times 2^{24}$ basic operations, where $c$ is a small constant. The performance in practice might be better than that, as there might not be any need to visit all possible vertices. The memory requirements are relatively limited. You can store the set of visited vertices in a 2MB bitmap; thus, checking whether a vertex has been visited before can be done by a single random-access lookup in this bitmap. You can store the queue in a single array of maximum size $3 \times 2^{24}$ bytes, i.e., 48MB; this is not very large, and locality of access will be excellent. So I expect this algorithm to complete within seconds.
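Here is a minimal Python sketch of this BFS (the parent-link bookkeeping and helper names are my own; for speed you would pack each vector into a 24-bit integer and use the bitmap described above, but plain tuples keep the idea clear; I have not verified whether the question's specific $t$ is reachable, so the demo targets a vector constructed to be reachable):

```python
from collections import deque

def op(u, v):
    # u (+) v = floor(0.9*u) + floor(0.1*v), componentwise; integer-exact
    return tuple((9 * a) // 10 + b // 10 for a, b in zip(u, v))

def find_sequence(bases, target):
    """BFS from the base vectors over the 6-out-edge graph; returns the
    build steps as (prev, base_index, side) triples, or None."""
    parent = {b: None for b in bases}
    queue = deque(bases)
    while queue:
        w = queue.popleft()
        if w == target:
            steps = []
            while parent[w] is not None:
                prev, i, side = parent[w]
                steps.append((prev, i, side))
                w = prev
            return steps[::-1]
        for i, b in enumerate(bases):
            for side, x in (("L", op(w, b)), ("R", op(b, w))):
                if x not in parent:          # first visit = shortest path
                    parent[x] = (w, i, side)
                    queue.append(x)
    return None

v1, v2, v3 = (150, 0, 255), (255, 150, 0), (0, 255, 150)
target = op(op(v1, v2), v3)                  # known-reachable demo target
steps = find_sequence([v1, v2, v3], target)
print(steps)
```

To attack the question's actual target, call `find_sequence([v1, v2, v3], (62, 63, 184))`; the same parent links then yield the full expression if it exists.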
If this is not fast enough, it is probably possible to optimize it further using various techniques. For example, here is one candidate. Define $x-v = \{w \in X^3 : v \oplus w = x\}$ and $P(x) = \{v \in X^3 : \exists w \in X^3 . v \oplus w = x\}$. Define $S_j = \{w \in X^3 : d(w)=j\}$ and $T_k = \{w \in X^3 : d(w,t)=k\}$. Now you could consider an optimized iterative deepening algorithm, as follows:
For $d=1,2,3,\dots$:
For $k=\lfloor d/2 \rfloor, \lfloor d/2 \rfloor + 1, \lfloor d/2 \rfloor -1,\lfloor d/2 \rfloor +2,\dots,d,1$:
Set $j=d-k$. For each $v \in S_j$:
If $v \in P(t)$ and $t-v \cap T_k \ne \emptyset$, then choose any $w \in t-v \cap T_k$ and output $v \oplus w$ and halt.
We can compute the $S_j,T_k$'s using the technique above (BFS). The trick here is to use a clever data structure to store $T_k$. If we store $T_k$ in an octree or k-d tree, then testing whether $t-v$ has any intersection with $T_k$ is a stabbing query: we are given a $10 \times 10 \times 10$ cube and want to know if there's any point of $T_k$ that falls within the cube. This lookup can be done efficiently when $T_k$ is stored in an octree or k-d tree or similar data structure (basically in $O(\log |T_k|)$ time or so).
This requires more complex data structures, so I'm not sure that it will be worth it to implement. It's not clear whether the resulting solution will be faster or slower than simple BFS, but if simple BFS is too slow, this is another approach you could try.
This finds the sequence that evaluates to $t$ and uses the minimal possible number of applications of $\oplus$. If you just want any sequence (not necessarily minimal), it might be possible to find it faster. For example, you could use best-first search, where the "goodness" of a vertex is measured by its $L_{\infty}$ distance to $t$.
An aside: your operator $\oplus$ is not associative, but it is "almost so". In particular, $(u \oplus v) \oplus w$ is in general not equal to $u \oplus (v \oplus w)$ (due to the rounding), but it will be "close". Ignoring rounding,
$$\begin{align*}(u \oplus v) \oplus w &= 0.81 u + 0.09v + 0.10w\\
u \oplus (v \oplus w) &=0.90 u + 0.09 v + 0.01 w,\end{align*}$$
so (again ignoring rounding) their difference is $0.09 u - 0.09 w$, which is in $[-23,23]^3$. Rounding might make it a bit larger, but not too much.
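The size of this discrepancy is easy to probe numerically (my addition, again under the assumed componentwise blend for $\oplus$, which this excerpt does not spell out):

```python
# Compare (u ⊕ v) ⊕ w with u ⊕ (v ⊕ w) under the assumed blend
# u ⊕ v = round(0.9*u + 0.1*v), applied componentwise.
import random

def oplus(u, v):
    return tuple(round(0.9 * a + 0.1 * b) for a, b in zip(u, v))

random.seed(0)
worst = 0
for _ in range(1000):
    u, v, w = [tuple(random.randrange(256) for _ in range(3)) for _ in range(3)]
    left = oplus(oplus(u, v), w)
    right = oplus(u, oplus(v, w))
    worst = max(worst, max(abs(a - b) for a, b in zip(left, right)))
print(worst)  # stays within the roughly ±23 window derived above
```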
It might be possible to use this property to gain some additional speedup -- though I have not identified a specific way yet. | {
"domain": "cs.stackexchange",
"id": 5470,
"tags": "algorithms, search-algorithms"
} |
roscd pointing to wrong location | Question:
Edit: Ah hah, this turned out to be a package problem, not a rospack problem. Sorry!
I have a package in an overlay. This same package exists in the /opt stacks. My ROS_PACKAGE_PATH has the overlay before the /opt stacks.
/home/vchwang/ros_workspace/sandbox:/opt/ros/electric/stacks:/opt/ros/electric/ros
However, roscd still points to the one in /opt. As a result, rosmake refuses to build the package in my overlay. How do I change this?
Some more info:
Even after running rospack profile, my rospack_cache claims the package sbpl is in /opt/ros/electric/stacks/arm_navigation/sbpl. If I go into ~/ros_workspace/sandbox/sbpl and rosmake, I get:
[ rosmake ] No package selected and the current directory is not the correct path for package 'sbpl'.
Originally posted by vhwanger on ROS Answers with karma: 52 on 2013-01-17
Post score: 0
Original comments
Comment by tfoote on 2013-01-17:
Please try to provide enough information for us to reproduce your error. http://www.ros.org/wiki/Support You're saying it's doing the wrong thing, and people are guessing in the dark.
Comment by KruseT on 2013-01-18:
if you solved the problem, answer yourself and close the question.
Comment by sam on 2013-01-19:
Can you post the whole rosmake message include the shell prompt?
Answer:
Turns out I was getting the wrong version of the package, which did not contain a manifest file.
Originally posted by vhwanger with karma: 52 on 2013-01-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 12467,
"tags": "rosmake"
} |
Electrons moving faster than light and backward in time? | Question: In Lawrence Krauss's book "A Universe From Nothing"; page 62 mentions that for a very short period of time, so small it cannot be measured, an electron due to the uncertainty principle can appear to be moving faster than the speed of light; another way to interpret this is that it's moving back in time.
It further proceeds to say that Feynman used this to suggest that what happens was that an electron and a positron were created, the positron will annihilate with the original electron while the new electron will continue on its merry way.
It finally says that this behavior is confirmed by the spectrum of the Hydrogen atom, among other things.
My question is, how does this not violate special relativity? How is it possible to get an electron moving faster than light? And how did we say it's okay to assume that and simply consider it moving back in time?
(Note, I do not have a degree in Physics so please take that into account while explaining the basic idea).
Answer: In particle physics, the mathematical framework that is currently used is known as quantum field theory. An example of such a theory is quantum electrodynamics, which deals with the interactions between photons and electrons.
When doing calculations in quantum field theory, you find yourself dealing with a lot of (sometimes very complex) formulas. Feynman found a way to nicely represent these formulas as diagrams. Two examples of Feynman diagrams are:
If you are studying a physical process and you want to make predictions about it there's a recipe to do your calculations:
Draw all the Feynman diagrams corresponding to the process in question. Each diagram has an associated mathematical formula. By adding all of them one gets the physical answer.
There are also some rules (known as Feynman rules) that are used to draw the diagrams. In general, what you have to do is:
For each incoming photon draw a wavy line on the left, for each incoming electron or positron a solid line (the arrow towards the left means a positron and towards the right an electron) and the same for outgoing particles. Then use the Feynman rules to construct an allowed diagram.
We could stop here. Feynman diagrams are just a nice way of representing some complicated formulas. We know how to use them to get experimentally testable answers to questions about the physics of electrons and photons. The results that can be obtained in this way describe nature impressively precisely (as is the case of the Lamb shift, some effect in the hydrogen atom).
However, because every incoming and outgoing line represents a particle, it is tempting to say that the internal lines of the diagram are also particles. Physicists sometimes call them virtual particles, but the concept of a virtual particle has very little to do with that of a particle. Notice that a Feynman diagram doesn't even describe a physical process. It's just a way of representing some mathematical formula.
Why do physicists use that name, then? The answer is that, when talking about calculations in quantum field theory, it can be used as a useful metaphor. You can talk about diagrams in a natural way as if they were physical processes, with time represented as flowing from left to right and with particles colliding, being created, destroyed, etc.
In this metaphor, virtual particles can travel faster than the speed of light, for example. Nevertheless, when you translate this metaphoric language into the actual formulas and add them for all the diagrams, the results agree with special relativity. The real particles never get to go faster than light.
In the first of the example diagrams above, the internal vertices can be moved so that one is on the left of the other, so in our metaphor one occurs before the other. Then we can move them again and make the one that was on the left be now on the right. The internal solid line representing an electron would then change from going forward in time to going backwards. The metaphor plays nicely with this and allows us to see the electron going backwards as a positron (as both are represented by a solid line with an arrow to the left). Again, none of this is real, it is just a nice way of talking about some calculations.
In the second diagram, you can see an example of virtual creation and annihilation of an electron-positron pair from and to a photon. This is just a fancy way of talking about one diagram in the set of the Feynman diagrams describing the propagation of a photon.
So, to summarize and be very clear:
In quantum field theory special relativity is not violated, and it is impossible for any particle (in particular, electrons) to go faster than the speed of light or backward in time. | {
"domain": "physics.stackexchange",
"id": 35612,
"tags": "quantum-mechanics, special-relativity, electrons, antimatter, arrow-of-time"
} |
what realtime patch should I use with ROS? | Question:
ROS and Orocos are both capable of running realtime priority threads. What realtime patch is the preferred method for realtime under Ubuntu?
Userspace support for RTAI and Xenomai are both available in repos, but there do not seem to be corresponding kernels available. I would prefer to use a prebuilt kernel if possible to simplify repeatability.
Originally posted by JonW on ROS Answers with karma: 586 on 2011-03-20
Post score: 7
Answer:
That depends on the real-time requirements of your system: both response times and ability to tolerate occasional lapses.
Vanilla Linux kernels provide good real-time performance these days. Unless your system requires very low response times or very hard guarantees, I recommend sticking with the stock kernel until measurements reveal a need for something more.
If hard real-time is needed, the simplest Ubuntu upgrade path is probably to install the linux-rt kernel, with Ingo Molnar's real time preemption patch.
In either case, you probably need to set some PAM variables in /etc/security/limits to [grant non-root users permission](https://help.ubuntu.com/community/UbuntuStudioPreparation#Real-Time Support) to create SCHED_FIFO threads.
Originally posted by joq with karma: 25443 on 2011-03-21
This answer was ACCEPTED on the original site
Post score: 5
Original comments
Comment by JonW on 2011-03-21:
Good point about decent real-time in vanilla. Thanks for the pointer to the limits file. | {
"domain": "robotics.stackexchange",
"id": 5153,
"tags": "ros, real-time, ubuntu"
} |
Using solutions of 1-D TISE to calculate constants A, B, C | Question: I'm tentatively putting this here as it's a quantum/Schrodinger problem however I'm unable to solve two equations in the way it's done in a text. At the minute I think that I can't do it because I've not understood something so I'm trying the wrong mathematical techniques. Here goes, it's on page 16 of K.S. Krane's Introductory Nuclear Physics...
Given the potential
$$V(x) = 0, \quad x < 0 \quad \text{(region 1)}$$
$$V(x) = V_0, \quad x > 0 \quad \text{(region 2)}$$
For region 1 you get the "typical" result for 1-D time-independent equation
$\psi_1 = Ae^{ik_1x} + Be^{-ik_1x}$ and $k_1=\sqrt{2mE/\hbar^2}$.
And in region 2 the corresponding solutions are $\psi_2 = Ce^{ik_2x} + De^{-ik_2x}$
and $k_2=\sqrt{2m(E-V_0)/\hbar^2}$.
Then matching at the boundary x = 0 gives $A + B = C + D$ and $k_1(A-B) = k_2(C-D)$ from some equations earlier in the book. Obviously the wave function is required to be continuous at the boundary for it to be a valid solution. The text goes on to say the D term must equal 0, which therefore, as I understand it, makes the previous two equations $A + B = C$ and $k_1(A - B) = k_2C$.
The next two lines say that when these equations are solved you end up with the following:
$$B = A\frac{1-\frac{k_2}{k_1}}{1+\frac{k_2}{k_1}}$$
and
$$C = A\frac{2}{1+\frac{k_2}{k_1}}$$
Which has me completely lost. I've been unable to algebraically rearrange the equations to reach these solutions, what am I missing? I can only assume that there's a mathematical method that I don't know about (or I can't remember - festive brain fog!) which you can plug these numbers into and the right hand side of the top equation is equal to $C - A$ as $B = C - A$ or that there's a physical reason that I'm missing so I don't understand the concept enough therefore I'm missing an obvious step. The values of A and B are then used to calculate the probability of the wave being reflected at the barrier and A and C used to calculate the probability of transmittance (tunnelling) so I think it best that I understand how to get these results myself.
Any help much appreciated, it's been eluding me since Christmas Eve and I've wasted far too much paper thus far.
Answer: Your starting equations are:
$$A + B = C\tag{1}$$
$$k_1(A - B) = k_2C\tag{2}$$
Substitute $(1)$ into $(2)$:
$$k_1(A - B) =k_2(A+B)$$
Multiply out:
$$k_1A-k_1B=k_2A+k_2B$$
Sort the $A$ from the $B$ terms:
$$(k_2+k_1)B=(k_1-k_2)A$$
$$B=A\frac{k_1-k_2}{k_2+k_1}$$
Divide both numerator and denominator by $k_1$:
$$\implies \boxed{B = A\frac{1-\frac{k_2}{k_1}}{1+\frac{k_2}{k_1}}}\tag{3}$$
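These boxed results are easy to cross-check numerically (my addition, with arbitrary sample values for $k_1$, $k_2$, $A$):

```python
# Solve the matching conditions A + B = C and k1*(A - B) = k2*C as a
# 2x2 linear system in (B, C) and compare with the closed-form results.
import numpy as np

k1, k2, A = 2.0, 0.5, 1.0          # arbitrary sample values
# Rewrite as:  B - C = -A   and   -k1*B - k2*C = -k1*A
M = np.array([[1.0, -1.0],
              [-k1, -k2]])
B, C = np.linalg.solve(M, np.array([-A, -k1 * A]))

r = k2 / k1
assert np.isclose(B, A * (1 - r) / (1 + r))   # B = A(1 - k2/k1)/(1 + k2/k1)
assert np.isclose(C, A * 2 / (1 + r))         # C = 2A/(1 + k2/k1)
print(B, C)  # 0.6 1.6 for these sample values
```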
To get the expression for $C$, use $C = A + B$ and substitute $B$ from $(3)$. | {
"domain": "physics.stackexchange",
"id": 36465,
"tags": "quantum-mechanics, homework-and-exercises, wavefunction, schroedinger-equation, scattering"
} |
Is it possible to save the rxloggerlevel level's? | Question:
I have a launch file launching several nodes and I'm interested on saving a certain configuration of warn/debug/info depending on the node. Is it possible to save a specific configuration or a way of launching it setting the levels accordingly other than changing them manually at every execution?
Originally posted by quimnuss on ROS Answers with karma: 169 on 2011-07-12
Post score: 2
Answer:
Yes. You can set the ROSCONSOLE_CONFIG_FILE environment variable and have that point to a custom log4cxx config that sets the logger levels how you want them.
I guess you want debug enabled for your package; then use this entry:
log4j.logger.ros.PACKAGE=DEBUG
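For a fuller picture, a complete config file could look like this (my example; `my_package` is a placeholder for your package name):

```
# Keep everything else at INFO, but enable DEBUG output for one package.
log4j.logger.ros=INFO
log4j.logger.ros.my_package=DEBUG
```

Then point ROSCONSOLE_CONFIG_FILE at this file before launching your nodes.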
Originally posted by dornhege with karma: 31395 on 2011-07-12
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by bhaskara on 2011-07-14:
There's a ros-specific example at http://www.ros.org/wiki/rosconsole#Configuration. Rospy uses a different system based on the Python logging module, described here: http://www.ros.org/wiki/rospy/Overview/Logging. At the bottom of the page it describes how to set up a config file for this.
Comment by Lorenz on 2011-07-13:
Actually, since roscpp is using log4cxx, its documentation on the config file format might be the right one: http://logging.apache.org/log4cxx/index.html. I'm not sure if this approach works for rospy though since it doesn't use log4cxx.
Comment by dornhege on 2011-07-13:
There is an example in my post. To check syntax and specifics about rosconsole, click the link in my post and then also look at rosconsole. It's in rosconsole, so I would guess it is loaded in node init.
Comment by quimnuss on 2011-07-13:
Looks like it! So I'd have to create a config file and point the ROSCONSOLE_CONFIG_FILE to it. Where can I find the syntax or an example of a config file? When is it loaded? when roscore is launched? when rxconsole is launched? (I'm sorry the documentation about the subject is too scarce...) | {
"domain": "robotics.stackexchange",
"id": 6122,
"tags": "ros"
} |
Helicities in electron-positron annihilation | Question: Consider the massless limit of a process in which an electron-positron pair annihilates into a virtual photon - the final state doesn't matter. If the electron is massless (or if the energy is high enough), helicity and chirality become the same, and they are conserved. My problem is that I'm getting contradictory results: the math says that the amplitude is nonzero only when the electron and the positron have the same helicity, while every book on the subject (and physical common sense) claims otherwise.
The amplitude is proportional to $\bar{v}\gamma^\mu u$, where $u$ is the electron's spinor and $v$ the positron's. Let's go to the center of mass frame, and take the electron's momentum to be $p^\mu = (p, 0, 0, p)$ and the positron's to be $p'^\mu = (p, 0, 0, -p)$. Using the Dirac basis, I have the following definite helicity spinors (following the Wikipedia article on spinors):
$$u_R = \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}\ \ \ u_L = \begin{pmatrix} 0 \\ 1 \\ 0 \\ -1 \end{pmatrix}$$
$$v_R = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 1 \end{pmatrix}\ \ \ v_L = \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}$$
Suppose the electron has positive helicity and the positron has negative helicity; in other words, both have spin up along the z axis. Books like Thomson's Modern Particle Physics or Halzen and Martin's Quarks and Leptons say that the annihilation should take place in this case, and it makes sense: the initial state has total spin 1, just what you need to create the virtual photon.
The problem is that I can calculate $\bar{v}_L \gamma^\mu u_R$ explicitly, and I get zero. I can even show it abstractly: Defining $P_R = \frac12 (1+\gamma^5)$ and $P_L = \frac12 (1-\gamma^5)$ and noting that $u_R = P_R u_R$ and so on, it can be shown quite generally that $\bar{v} \gamma^\mu u$ vanishes unless both spinors have the same helicity.
What is going on here? My best guess is that somehow the helicity assignments for antiparticles are reversed, but I don't see how that can be: I just followed the Wikipedia article and every book I could find, not to mention that I've checked that my spinors satisfy the Dirac equation with the proper momentum, and that the spins are right and that $P_L v_R = 0$ and $P_R v_L = 0$.
Answer: It is not true that "$\overline{v}_L\gamma^\mu u_R$ is zero".
While $u_R \equiv P_R u$, if you check carefully you will find that $\overline{v}_L \equiv \overline{v}P_L$. And if you "take $P_L$ to the other side of the $\gamma^\mu$" using the usual anticommutation relations, you get $P_R$. And, of course, $P_R^2 = P_R$.
The problem may have been originally caused by forgetting that $\overline{v}_L$ carries the opposite sign of $\gamma^5$ in its projection operator compared with $\overline{u}_L$. | {
"domain": "physics.stackexchange",
"id": 28738,
"tags": "particle-physics, spinors, helicity"
} |
Partial Sudoku Verifier | Question: To practice my JavaScript for future employment, I've decided to take up the challenge of writing a JavaScript sudoku verifier. This code only verifies one of the nine 3x3 blocks there are. I would like feedback on this portion of the code, so I can continue my development with more knowledge and more efficient code. Also, any feedback on my HTML/CSS is warmly welcome. I'm using a fullscreen Chrome browser for development, so the #middle centers the box on my screen, but it might not on yours.
You will have to click on full page -> to view the CSS correctly.
function checkAnswer() {
//reset each time button is clicked
document.getElementById('correct').style.display = 'none';
document.getElementById('incorrect').style.display = 'none';
//add each input into 2D array
let first_row = document.getElementsByClassName('row1');
let second_row = document.getElementsByClassName('row2');
let third_row = document.getElementsByClassName('row3');
let sudoku = [
[first_row[0].value, first_row[1].value, first_row[2].value],
[second_row[0].value, second_row[1].value, second_row[2].value],
[third_row[0].value, third_row[1].value, third_row[2].value]
]
//check if each number is unique in the 2D array
for (let i = 0; i < sudoku.length; i++) {
for (let j = 0; j < sudoku[i].length; j++) {
if(!isUnique(sudoku[i][j], sudoku)) {
document.getElementById('incorrect').style.display = 'block';
return;
}
}
}
document.getElementById('correct').style.display = 'block';
}
function isUnique(num, arr) {
let count = 0;
//check entire array
for(let i = 0; i < arr.length; i++) {
for(let j = 0; j < arr[i].length; j++) {
if(arr[i][j] == num) {
count++;
}
}
}
return count == 1;
}
body {
background-color: pink;
}
input {
width: 50px;
height: 50px;
font-size: 20px;
text-align: center;
}
button {
width: 100px;
height: 50px;
font-size: 15px;
margin-left: 40px;
}
#middle {
margin-left: 900px;
margin-top: 300px;
}
<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset='UTF-8'>
<title>Sudoku</title>
<link rel="stylesheet" type="text/css" href="style.css">
<script src="script.js"></script>
</head>
<body>
<div id="middle">
<table>
<tr>
<td><input type="text" class="row1" maxlength="1" value="3"></td>
<td><input type="text" class="row1" maxlength="1"></td>
<td><input type="text" class="row1" maxlength="1"></td>
</tr>
<tr>
<td><input type="text" class="row2" maxlength="1"></td>
<td><input type="text" class="row2" maxlength="1" value="2"></td>
<td><input type="text" class="row2" maxlength="1"></td>
</tr>
<tr>
<td><input type="text" class="row3" maxlength="1" value="7"></td>
<td><input type="text" class="row3" maxlength="1"></td>
<td><input type="text" class="row3" maxlength="1"></td>
</tr>
</table>
<br>
<button type="button" onclick="checkAnswer()">Check</button>
<div id="incorrect" style="display: none;">
<h1>Incorrect Entry!</h1>
</div>
<div id="correct" style="display: none;">
<h1>Correct Entry!</h1>
</div>
</div>
</body>
</html>
Answer: Styling
You can align centered elements relatively to their parent.
#middle {
margin-left: 900px;
margin-top: 300px;
}
#middle {
position: relative;
left: 50%;
top: 50%;
/* keep a single transform declaration: a later `transform` property
overrides an earlier one, so the translateX above was being discarded */
transform: translate(-50%, -50%);
}
Naming
In my opinion, for algorithms it is OK to use short variable names.
let first_row = document.getElementsByClassName('row1');
let second_row = document.getElementsByClassName('row2');
let third_row = document.getElementsByClassName('row3');
let sudoku = [
[first_row[0].value, first_row[1].value, first_row[2].value],
[second_row[0].value, second_row[1].value, second_row[2].value],
[third_row[0].value, third_row[1].value, third_row[2].value]
]
let r1 = document.getElementsByClassName('row1');
let r2 = document.getElementsByClassName('row2');
let r3 = document.getElementsByClassName('row3');
let sudoku = [
[r1[0].value, r1[1].value, r1[2].value],
[r2[0].value, r2[1].value, r2[2].value],
[r3[0].value, r3[1].value, r3[2].value]
] | {
"domain": "codereview.stackexchange",
"id": 34714,
"tags": "javascript, array, html, css, sudoku"
} |
Index gymnastics in weak gravitational field | Question: The metric in a weak gravitational field (TT gauge) is:
$$g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$$
with
$$\eta_{\mu\nu}=\begin{pmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix},\ h_{\mu\nu}=\begin{pmatrix}0&0&0&0\\0&h_+&h_\times&0\\0&h_\times&-h_+&0\\0&0&0&0\end{pmatrix}$$
Since $g^{\mu\nu}$ is the inverse of the matrix above we get :
$$g^{\mu\nu}=\begin{pmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix}+\begin{pmatrix}0&0&0&0\\0&-h_+&-h_\times&0\\0&-h_\times&h_+&0\\0&0&0&0\end{pmatrix}+\mathcal{O}(h^2)$$
This suggests that
$$\eta^{\mu\nu}=\begin{pmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix},h^{\mu\nu}=\begin{pmatrix}0&0&0&0\\0&-h_+&-h_\times&0\\0&-h_\times&h_+&0\\0&0&0&0\end{pmatrix}+\mathcal{O}(h^2)$$
However when I raise and lower the indices of $h$ as follows
$$h^{\mu\nu}=g^{\mu\alpha}g^{\nu\beta}h_{\alpha\beta}=\eta^{\mu\alpha}\eta^{\nu\beta}h_{\alpha\beta}+\mathcal{O}(h^2)=\begin{pmatrix}0&0&0&0\\0&h_+&h_\times&0\\0&h_\times&-h_+&0\\0&0&0&0\end{pmatrix}+\mathcal{O}(h^2)$$
I get a different result (up to a sign). Which approach is wrong? I am very sure that the first with the inverse is correct, but cannot see what is wrong with the second approach?
Answer: What is wrong is the sentence that starts with “This suggests that …”, namely the sign of $h^{\mu\nu}$.
Actually, $g^{μν}=\eta^{μν}-h^{μν}+\mathcal{O}(h^2)$.
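The minus sign is quick to verify numerically (my addition; the amplitudes below are arbitrary small values, and since $h_{\mu\nu}$ here is purely spatial, raising both indices with $\eta$ leaves its entries unchanged, so $\eta^{\mu\nu}-h^{\mu\nu}$ is just the matrix $\eta - h$):

```python
# With g_{mu nu} = eta + h for the TT-gauge h above, the exact matrix
# inverse agrees with eta - h up to O(h^2), not with eta + h.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
hp, hx = 1e-4, 2e-4              # arbitrary small amplitudes
h = np.zeros((4, 4))
h[1, 1], h[2, 2] = hp, -hp
h[1, 2] = h[2, 1] = hx

g_inv = np.linalg.inv(eta + h)
print(np.max(np.abs(g_inv - (eta - h))))  # O(h^2) -- tiny
print(np.max(np.abs(g_inv - (eta + h))))  # O(h)   -- much larger
```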
One can easily check that $(\eta^{μν}-h^{μν})(\eta_{να}+h_{να})=\delta^{\mu}_\alpha+\mathcal{O}(h^2)$. | {
"domain": "physics.stackexchange",
"id": 69040,
"tags": "general-relativity, metric-tensor, approximations, linearized-theory"
} |
What is an effective potential in classical mechanics? | Question: What is an effective potential in classical mechanics? I have read the wikipedia article and David Tong's lectures notes, but I didn't understand how an effective potential simplifies a situation or calculation, and why the ordinary potential won't suffice.
Answer: It isn't necessary to introduce the effective potential in orbital mechanics but it is really useful.
Let's say we have a particle moving in a central gravitational potential. Newton's laws give you a vector equation of motion
\begin{equation}
m \ddot{\vec{x}} = - \nabla U
\end{equation}
where $U = - G M m /r$. In a general coordinate system this is a complicated set of three coupled differential equations.
We want to simplify and decouple these equations as much as possible. So we work in spherical coordinates. I'll leave the derivation of the angular parts of the equation of motion to the textbook, let's focus on the radial part. The left hand side of Newton's law becomes
\begin{equation}
m \ddot{\vec{x}} \cdot \hat{e}_r = m \frac{d^2}{dt^2}\left(r \hat{e}_r\right)\cdot\hat{e}_r = m\ddot{r} - m \dot{\theta}^2 r = m \ddot{r} - \frac{L^2}{m r^3}
\end{equation}
In the last line, I've used the fact that we know that the angular momentum $L = m \dot{\theta} r^2$ is conserved.
Similarly,
\begin{equation}
-\nabla U \cdot \hat{e}_r =- \frac{G m M}{r^2}
\end{equation}
So, putting it together,
\begin{equation}
m \ddot{r} - \frac{L^2}{m r^3} = -\frac{G m M}{r^2}
\end{equation}
So now it's a matter of interpretation--we can now think of $r$ as the coordinate of a particle living in one dimension. We have effectively added a term to Newton's law for $r$ that has no derivatives on it. Why not call that a potential? In other words, why not rearrange the above equation so that it looks more like a simple 1D mechanics problem
\begin{equation}
m \ddot{r} = - \frac{G m M}{r^2}+ \frac{L^2}{m r^3} = - \frac{d}{dr}\left( - \frac{GmM}{r} + \frac{L^2}{2 m r^2} \right)
\end{equation}
This is an extremely useful picture, because we all know how a particle moves in a potential! Given an angular momentum, you can plot the potential and immediately see where the stable circular orbits are (the minima of the potential). You can also qualitatively see how there is a barrier to approaching the object too closely (which makes sense--if you have angular momentum you wouldn't expect a head on collision).
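As a quick illustration (my addition, in arbitrary units): minimizing the effective potential numerically reproduces the circular-orbit radius $r = L^2/(GMm^2)$ that follows from setting $dU_{\rm eff}/dr = 0$.

```python
# Locate the stable circular orbit as the minimum of the effective potential
# U_eff(r) = -G m M / r + L^2 / (2 m r^2), in arbitrary units.
import numpy as np

G, M, m, L = 1.0, 1.0, 1.0, 1.0   # arbitrary unit choices

def U_eff(r):
    return -G * m * M / r + L**2 / (2 * m * r**2)

r = np.linspace(0.1, 10.0, 100_000)
r_numeric = r[np.argmin(U_eff(r))]
r_analytic = L**2 / (G * M * m**2)
print(r_numeric, r_analytic)  # both close to 1.0 in these units
```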
This is pretty non-trivial: you have solved what is really a two-dimensional problem (finding a circular orbit, or even oscillations around that orbit) with one-dimensional methods. In other words, the problem was much simpler than it originally appeared (you didn't have to solve three arbitrary coupled differential equations, just one simple one with a potential), and we take advantage of this by using an effective potential. This kind of trick shows up all over the place in physics. | {
"domain": "physics.stackexchange",
"id": 23339,
"tags": "classical-mechanics, lagrangian-formalism, terminology, potential-energy, centrifugal-force"
} |
Check if string starts with vowel | Question: I am using a template for text inputs and feedback messages to simplify my HTML code. This passes in a variable string called label and populates the placeholder value for the input, and also uses the same string for the invalid-feedback message, but for the latter I want the correct a or an depending on whether the label starts with a vowel. I currently use the registerOnTouched() function in text-input-component.ts to correct the feedback messages, as it is called when the inputs are clicked/typed into, but I feel like there is a better solution/place for this check.
Maybe something like a ternary operator or a pipe in the template?
And theoretically I could omit all the vowels except E from my check but that isn't future proof.
Form:
<app-text-input [formControl]="$any(loginForm.controls['email'])" [label]="'Email'" [type]="'email'"></app-text-input>
<app-text-input [formControl]="$any(loginForm.controls['password'])" [label]="'Password'" [type]="'password'"></app-text-input>
Text-Input Template CSS:
<div class="form-group">
<input
[class.is-invalid]="control.touched && control.invalid"
type={{type}}
class="form-control"
[formControl]="control"
placeholder={{label}}
>
<div *ngIf="control.errors?.['required']" class="invalid-feedback">Please enter a{{labelAfterVowelCheck}}</div>
</div>
Text-Input Template TypeScript:
import { Component, Input, Self } from '@angular/core';
import { ControlValueAccessor, FormControl, NgControl, PatternValidator } from '@angular/forms';
@Component({
selector: 'app-text-input',
templateUrl: './text-input.component.html',
styleUrls: ['./text-input.component.css']
})
export class TextInputComponent implements ControlValueAccessor {
@Input() label?: string;
labelAfterVowelCheck = '';
@Input() type: string = 'text';
constructor(@Self() public ngControl: NgControl) {
this.ngControl.valueAccessor = this; }
writeValue(obj: any): void {
}
registerOnChange(fn: any): void {
}
registerOnTouched(fn: any): void {
this.labelAfterVowelCheck = this.label!;
if(["A","E","I","O","U","a","e","i","o","u"].some(vowel => this.label?.startsWith(vowel))){
this.labelAfterVowelCheck = 'n ' + this.label;
}
else{
this.labelAfterVowelCheck = ' ' + this.label;
}
}
get control(): FormControl {
return this.ngControl.control as FormControl;
}
}
Answer: Just a couple ideas:
I might separate out the "validation" into its own function, isValid here, but you can create a better name. ie.
registerOnTouched(fn: any): void {
this.labelAfterVowelCheck = this.label!;
if(isValid(this.label)){
this.labelAfterVowelCheck = 'n ' + this.label;
}
else{
this.labelAfterVowelCheck = ' ' + this.label;
}
}
With this change, it's easier to see that you don't need the first assignment (as it will be redone in the line below). And as you hint at, a ternary dries it up a little:
registerOnTouched(fn: any): void {
this.labelAfterVowelCheck = (isValid(this.label) ? 'n ' : ' ') + this.label;
}
The validation itself can be simplified with a regular expression (although I get the sense this code is temporary). ie. /^[aeiou]/i.exec(label)
Hope this helps! | {
"domain": "codereview.stackexchange",
"id": 44322,
"tags": "typescript, angular-2+"
} |
ros2 launch xml example | Question:
I see that a yaml and xml front-end was added to ros2. In the feature request I see that there was intent to add support for these to the ros2launch cli, but I am struggling to create a .launch.xml file and launch it with ros2launch
Are there any full examples of creating an xml launch file and then using the launch cli tool to run it? The design document and the test here show me the syntax, but I'm still unsuccessful.
Originally posted by johnconn on ROS Answers with karma: 553 on 2020-01-06
Post score: 2
Answer:
There is a tutorial about how to migrate launchfiles from ROS1 to ROS2: https://index.ros.org/doc/ros2/Tutorials/Launch-files-migration-guide/.
There are some XML examples in the demos, e.g. in this folder: https://github.com/ros2/demos/tree/master/demo_nodes_cpp/launch/topics.
It would be a great contribution to add more examples to the demos.
About your questions in the github issue:
If I have a launch file, how can I get it running?
ros2 launch /path/to/file_launch.xml
Do I need to create a package, do I need to add it in package.xml whatsoever...
Not necessarily, as commented above.
If you do want to install some launchfiles with a package, you can:
(cpp package): https://github.com/ros2/demos/blob/948b4f4869298f39cfe99d3ae517ad60a72a8909/demo_nodes_cpp/CMakeLists.txt#L207-L211
(python package): Install the launchfiles in the share folder using data_files argument of setuptools setup. Like here, but with a launchfile.
In those cases, you will be able to do:
ros2 launch package_name name_of_launch_file_launch.xml
In case you have problems running a specific launchfile, you can copy it here and I will try to help.
Best,
Ivan
Originally posted by ivanpauno with karma: 86 on 2020-03-20
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by urczf on 2020-03-24:
That worked, thanks! Nonetheless, I would add it somewhere on index.ros.org | {
"domain": "robotics.stackexchange",
"id": 34230,
"tags": "ros"
} |
LeetCode 1206: Design Skiplist | Question: I'm posting my code for a LeetCode problem. If you'd like to review, please do so. Thank you for your time!
Problem
Design a Skiplist without using any built-in libraries.
A Skiplist is a data structure that takes O(log(n)) time to add, erase
and search. Comparing with treap and red-black tree which has the same
function and performance, the code length of Skiplist can be
comparatively short and the idea behind Skiplists are just simple
linked lists.
For example: we have a Skiplist containing [30,40,50,60,70,90] and we
want to add 80 and 45 into it. The Skiplist works this way:
Artyom Kalinin [CC BY-SA 3.0], via Wikimedia Commons
You can see there are many layers in the Skiplist. Each layer is a
sorted linked list. With the help of the top layers, add, erase and
search can be faster than O(n). It can be proven that the average time
complexity for each operation is O(log(n)) and space complexity is
O(n).
To be specific, your design should include these functions:
bool search(int target): Return whether the target exists in the
Skiplist or not.
void add(int num): Insert a value into the SkipList.
bool erase(int num): Remove a value in the Skiplist. If num does not
exist in the Skiplist, do nothing and return false. If there exists
multiple num values, removing any one of them is fine.
See more about Skiplist: https://en.wikipedia.org/wiki/Skip_list
Note that duplicates may exist in the Skiplist, your code needs to handle this situation.
Example:
Skiplist skiplist = new Skiplist();
skiplist.add(1);
skiplist.add(2);
skiplist.add(3);
skiplist.search(0); // return false.
skiplist.add(4);
skiplist.search(1); // return true.
skiplist.erase(0); // return false, 0 is not in skiplist.
skiplist.erase(1); // return true.
skiplist.search(1); // return false, 1 has already been erased.
Constraints:
0 <= num, target <= 20000
At most 50000 calls will be made to search, add, and erase.
Code
// The following block might trivially improve the exec time;
// Can be removed;
static const auto __optimize__ = []() {
std::ios::sync_with_stdio(false);
std::cin.tie(nullptr);
return 0;
}();
// Most of headers are already included;
// Can be removed;
#include <cstdint>
#include <cstdlib>
#include <ctime>
#include <vector>
static const struct Skiplist {
using SizeType = std::int_fast16_t;
struct Node {
SizeType val;
Node* next{nullptr};
Node* prev{nullptr};
Node* down{nullptr};
Node(SizeType val = 0) {
this->val = val;
}
};
Node* heads{nullptr};
SizeType layers = 0;
Skiplist() {
std::srand(std::time(nullptr));
}
const bool search(const SizeType target) {
if (heads == nullptr) {
return false;
}
auto ptr = heads;
while (ptr) {
while (ptr->next && ptr->next->val < target) {
ptr = ptr->next;
}
if (ptr->next && ptr->next->val == target) {
return true;
}
ptr = ptr->down;
}
return false;
}
const void add(const SizeType num) {
Node* ptr = heads;
std::vector<Node*> path(layers, nullptr);
for (SizeType layer = layers - 1; layer >= 0; --layer) {
while (ptr->next && ptr->next->val < num) {
ptr = ptr->next;
}
path[layer] = ptr;
ptr = ptr->down;
}
for (SizeType layer = 0; layer <= std::size(path); ++layer) {
ptr = new Node(num);
if (layer == std::size(path)) {
Node* last = heads;
heads = new Node();
heads->down = last;
heads->next = ptr;
ptr->prev = heads;
++layers;
} else {
ptr->next = path[layer]->next;
ptr->prev = path[layer];
path[layer]->next = ptr;
if (ptr->next) {
ptr->next->prev = ptr;
}
}
if (layer) {
ptr->down = path[layer - 1]->next;
}
if (std::rand() & 1) {
break;
}
}
}
const bool erase(const SizeType num) {
auto ptr = heads;
for (SizeType layer = layers - 1; layer >= 0; --layer) {
while (ptr->next && ptr->next->val < num) {
ptr = ptr->next;
}
if (ptr->next && ptr->next->val == num) {
ptr = ptr->next;
while (ptr) {
ptr->prev->next = ptr->next;
if (ptr->next) {
ptr->next->prev = ptr->prev;
}
ptr = ptr->down;
}
while (heads && heads->next == nullptr) {
heads = heads->down;
--layers;
}
return true;
} else {
ptr = ptr->down;
if (ptr == nullptr) {
return false;
}
}
}
return false;
}
};
LeetCode 1206 - Skip List Problem
References
Problem
Discuss
Answer:
SizeType looks like a misnomer. It feels more like ValueType. As a side note, consider making it a template <typename ValueType> struct SkipList.
Testing for heads == nullptr in search is redundant. The loop will take care of it immediately.
For DRY I recommend a helper method, akin to std::lower_bound, to be used in all interface methods (i.e. search, add, and erase). Yes it requires a very careful design of an iterator.
add may benefit from Node::Node(val, next, down) constructor.
No naked loops, please.
The for (SizeType layer = 0; layer <= std::size(path); ++layer) loop particularly deserves to be a method on its own. Its intention is to promote a freshly inserted node, so promote_added_node looks like a good name. | {
"domain": "codereview.stackexchange",
"id": 39134,
"tags": "c++, beginner, algorithm, programming-challenge, c++17"
} |
Back EMF from a motor | Question: I have a large DC Motor I ripped out of a treadmill and I'm curious if the back emf generated from manually rotating the motor is a DC output or an AC output. Is it AC only when it's from the power grid and other sources treated as DC? How can I always tell?
For context, the motor is a 2.65Hp, 21.4A, P.M.D.C Motor
Thank you!
Answer: For a permanent-magnet DC motor, even if you rotate it at a constant rate, the back EMF you'll get externally from the motor connector pins ought to look more like rectified AC.
However, I wouldn't expect it to be exactly sinusoidal. | {
"domain": "engineering.stackexchange",
"id": 3879,
"tags": "motors, electrical"
} |
Why is Spacetime described as flat even though we live in 3 dimensions of space? | Question: I’ve always heard and seen diagrams that show spacetime as being “flat” or in 2 dimensions with curvature. How does this correspond to the 3 spacial dimensions that we perceive to exist in?
Answer: "Flat space" means that on large scales, Euclidean geometry holds. All the angles in any triangle drawn in space add up to 180°; the total distance between points separated by $\Delta x$, $\Delta y$, and $\Delta z$ is $d=\sqrt{\Delta x^2+\Delta y^2+\Delta z^2}$; et cetera.
Note that this is not the case on the 2D surface of the Earth, because it is curved. If you have a globe or a basketball to play with, you can easily see that it's possible to draw a triangle with more than 180°. You can even draw one with three 90° angles for a total of 270°. On a sphere, the distance between two nearby points is $ds = \sqrt{dr^2+r^2d\theta^2+r^2\sin^2\theta\, d\phi^2}$ where $\theta$ and $\phi$ are the polar and azimuthal angles. In general, Euclidean geometry does not apply.
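As a quick numerical aside (my own sketch, not part of the original answer): by Girard's theorem the area of a spherical triangle is $R^2$ times its angular excess, so the three-right-angle triangle above covers exactly one eighth of the sphere:

```python
import math

R = 1.0
angle_sum = 3 * (math.pi / 2)       # three 90-degree angles, 270 degrees in total
excess = angle_sum - math.pi        # excess over the flat-space 180 degrees
triangle_area = excess * R**2       # Girard's theorem
sphere_area = 4 * math.pi * R**2

print(triangle_area / sphere_area)  # ≈ 0.125, one eighth of the sphere
```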
The statement that in our universe, space is flat, means that on the largest scales (disregarding curvature "wrinkles" caused by stars, galaxies, black holes, etc) our 3D universe is measured to be flat such that Euclidean geometry applies. This also implies that it could be infinite in extent. If it were positively curved like a (3D) spherical surface, it might wrap in on itself and be finite in extent. The picture is somewhat analogous a smooth metallic surface. Although under a microscope you can see the surface roughness and all kinds of asperities, hills, and valleys, zooming out to a macroscopic view the overall structure is smooth and flat. | {
"domain": "physics.stackexchange",
"id": 97125,
"tags": "general-relativity, spacetime, curvature, spacetime-dimensions, visualization"
} |
Hyperbolic flow / vector field - irrotational and divergence-free? | Question: My text book on meteorology claims that a hyperbolic flow pattern is both divergence-free and irrotational:
(d) Hyperbolic flow that exhibits both diffluence
and stretching, but is nondivergent because the
two terms exactly cancel. Hyperbolic flow also
exhibits both shear and curvature, but is
irrotational (i.e., vorticity-free) because the
two terms exactly cancel.
-- Wallace & Hobbs, Atmospheric Science, 2nd Ed, p 273
In my understanding, that can not be true:
I can obtain a very similar flow pattern from $\nabla(xy)$.
Based on the uniqueness of Helmholtz decomposition, the only divergence-free, irrotational vector field should be $ \vec{f} = \vec{0} $.
Based on Helmholtz decomposition, any vector field $ \mathbf{u} $ can be represented as $ \mathbf{u} = \mathbf{v} + \mathbf{d} $ with $ \mathbf{v} = \nabla \phi $ and $ \mathbf{d} = \nabla \times \mathbf{A} $. As I understand it, the only divergence-free ( $ \mathbf{v} = 0 $ ) and irrotational ( $ \mathbf{d} = 0 $ ) vector field can be $ \mathbf{u} = 0 $.
Am I missing something or is the text handwaving the math a little too much here?
Answer: Based on the Hodge-Helmholtz decomposition, a vector field $\mathbf{u}$ can be expressed as the sum of an irrotational part, the gradient of a scalar potential $\phi$, and a divergence-free vector field $\mathbf{d}$.
$$\mathbf{u} = \nabla \phi + \mathbf{d}$$
For a flow to be irrotational it has to be derivable as the gradient of a scalar potential, $$\mathbf{u} = \nabla \phi.$$ It also follows that if a flow is divergence-free it can be written as the curl of a vector potential, $\mathbf{d} = \nabla \times \mathbf{A}$. The difference field $\mathbf{u} - \mathbf{d}$ is irrotational, so it can be resolved as the gradient of the scalar potential $\phi$; taking the divergence of the decomposition gives $$\nabla \cdot \mathbf{u} = \nabla^2\phi$$
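The questioner's $\nabla(xy)$ example fits this description and can be checked symbolically (a quick sketch using sympy): $\phi = xy$ is harmonic, and $\mathbf{u} = \nabla\phi = (y, x)$ has zero divergence and zero vorticity.

```python
import sympy as sp

x, y = sp.symbols('x y')
phi = x * y                                   # candidate scalar potential

# u = grad(phi) = (y, x), the hyperbolic flow pattern
u = (sp.diff(phi, x), sp.diff(phi, y))

div_u = sp.diff(u[0], x) + sp.diff(u[1], y)   # divergence
curl_u = sp.diff(u[1], x) - sp.diff(u[0], y)  # 2D (scalar) vorticity
lap_phi = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)

print(div_u, curl_u, lap_phi)                 # -> 0 0 0
```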
A vector field in which both the lhs and rhs of the above equation vanish and which also satisfies the irrotational condition is both irrotational and divergence-free. Specifically, $\nabla \cdot \mathbf{u} = 0$, $\nabla^2\phi = 0$, and $\mathbf{u} = \nabla \phi$. A possible function $\phi$ from which such a vector field $\mathbf{u}$ can be derived is a harmonic function. | {
"domain": "physics.stackexchange",
"id": 27299,
"tags": "fluid-dynamics, vector-fields, meteorology"
} |
Feynman lectures and apparent area of a nucleus | Question: In paragraph 5.7 of this lecture, Feynman explains how to calculate the apparent area of the nucleus, in a sheet of unspecified material.
In the note Feynman says:
"This equation is right only if the area covered by the nuclei is a small fraction of the total, i.e., if $\frac{n_1-n_2}{n_1}$ is much less than 1. Otherwise we must make a correction for the fact that some nuclei will be partly obscured by the nuclei in front of them"
Do you have any idea how to apply this correction factor to the previous formula?
Answer: That "apparent area" is called the cross section, usually denoted with a sigma, $\sigma$.
Suppose (as Feynman et al. do) that you're interested in the probability that scattering from a nucleus removes a particle from the beam. If the thickness $\ell$ of your target is small enough that the overlap between nuclei is negligible, and the number density of the target nuclei is $n$ nuclei per unit volume, then the probability that a particle from your beam makes it through undeflected is
$$
p_\text{thin} = 1 - n\sigma\ell.
$$
If your target has large thickness $L$ so that this approximation doesn't apply, you can divide it up into many thin targets; the probability of transmitting through all the layers is the product of the probabilities of making it through each layer. That is,
\begin{align}
p_\text{thick} &= \prod_\text{all layers} p_\text{layer}
= \left( 1 - \frac{n\sigma L}N \right)^N,
\end{align}
if you divide the target into $N$ thin layers.
The continuum result is
\begin{align}
\lim_{\text{smooth}} p_\text{thick} &= \lim_{N\to\infty} (p_\text{thin})^N
= e^{-n\sigma L}
\end{align}
The transmission through a thick target is exponential in the length of the target. | {
"domain": "physics.stackexchange",
"id": 48408,
"tags": "nuclear-physics, scattering-cross-section"
} |
Identify this South African spider | Question: A family member saw this spider in her garden in the Cape Town area of South Africa. It's very attractive and I wondered what species it was?
Approximate size: it would fit roughly within a 3 cm diameter circle.
Answer: Based on the location and image you provided, I suspect this spider to be Garden Orb-Weaver, belonging to the spider family Araneidae.
Just a side note: I used an app called iNaturalist to identify this spider. It is a helpful app to identify insects and plants.
Here is a similar image I found online: | {
"domain": "biology.stackexchange",
"id": 10295,
"tags": "species-identification, arachnology"
} |
Transfer function of a frequency shifting system | Question: There is a system which shifts the frequencies of the input by $-F_c$ such that:
$$Y(S) = X(S).H(S)$$
But $X(S)$ has value zero from $0$ to $F_c$.
I am confused about how the product of $X(S)$ and $H(S)$ can become nonzero in $Y(S)$ in that frequency range, for any $H(S)$.
What would the transfer function $H(S)$ of the system look like in the frequency domain?
Answer: The system you're looking for cannot be described by a transfer function because it is a time-varying system. Only linear time-invariant (LTI) system can be fully characterized by a transfer function. However, there is no LTI system that can shift frequencies. The output of a (stable) LTI system can only have frequency components that are already present in the input signal.
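A small numerical sketch (my own, with arbitrarily chosen frequencies): multiplying by a complex exponential moves a tone's spectral peak, which is exactly what no LTI system can do.

```python
import numpy as np

fs, N = 1000, 1000                           # sample rate and length: 1 Hz FFT bins
t = np.arange(N) / fs
x = np.exp(2j * np.pi * 100 * t)             # complex tone at 100 Hz
shifted = x * np.exp(-2j * np.pi * 30 * t)   # modulation: shift spectrum by -30 Hz

peak_before = np.argmax(np.abs(np.fft.fft(x)))
peak_after = np.argmax(np.abs(np.fft.fft(shifted)))
print(peak_before, peak_after)               # -> 100 70
```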
Frequency shifts are usually achieved by modulation, i.e., by multiplying the input signal with a sinusoid or with a complex exponential. Such a system is linear but not time-invariant. | {
"domain": "dsp.stackexchange",
"id": 5865,
"tags": "fourier-transform, transfer-function, laplace-transform"
} |
What is the probability of measurement in QM, dependent on time? | Question: Consider a QM system with an observable $A$ and orthonormal eigenbasis $\{|n\rangle,n=0,1,2,\ldots\}$. Then we know that if the system is in some state $|\psi\rangle$ and we measure $A$, the probability of finding an eigenstate $|n\rangle$ is $|\langle n|\psi\rangle|^2$ and that the probability of finding a non-eigenstate $|\phi\rangle$ is zero.
How does this translate to the time-dependent setting?
In particular, what is the probability of finding a state $|\phi,t\rangle$ at time $t$ if the system evolves according to some other state $|\psi,t\rangle$? Does $|\phi,0\rangle$ have to be an eigenstate of $A$ in order for this probability to be positive? Does the time evolution of $|\phi,t\rangle$ have to be given by the Schrödinger equation? I'm really confused by this and would very much appreciate help.
Answer: In quantum mechanics, each state of a system is always represented as a vector (a ket) in the Hilbert space of all possible states. This space has, by definition, a scalar product and thus a geometry associated with it, and consequently there is a notion of orthogonality. This allows the probability of measuring any state $|\phi\rangle$ when the system is in state $|\psi\rangle$ to be defined as the square of the orthogonal projection, $|\langle \phi | \psi \rangle|^2$, which is computed with the scalar product. This definition is universal and always applies. In particular, neither of the states has to be an eigenstate of any operator.
Now to answer your question explicitly, by what I just explained, the probability of finding a state $|\phi,t\rangle$ at time $t$ if the system is in the state $|\psi,t\rangle$ is
$$
p = |\langle \phi,t | \psi,t \rangle |^2~,
$$
and there are no special conditions which $|\phi,0\rangle$ must fulfill. Furthermore, it will be $p > 0$, if $|\psi,t\rangle$ and $|\phi,t\rangle$ are not orthogonal, by definition of orthogonality.
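As a concrete numerical sketch (the states and Hamiltonian here are arbitrary choices of mine, not from the question): the probability is just the squared overlap, and evolving both states with the same unitary $U = e^{-iHt}$ leaves it unchanged.

```python
import numpy as np

# two normalized states in a two-dimensional Hilbert space (chosen arbitrarily)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
phi = np.array([1, 0], dtype=complex)

p = abs(np.vdot(phi, psi)) ** 2        # |<phi|psi>|^2
print(p)                               # ≈ 0.5

# evolve both states with the same unitary U = exp(-iHt); the overlap is preserved
H = np.array([[1.0, 0.3], [0.3, 2.0]])           # an arbitrary Hermitian "Hamiltonian"
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * 0.7)) @ evecs.conj().T
p_t = abs(np.vdot(U @ phi, U @ psi)) ** 2
print(abs(p_t - p) < 1e-12)            # -> True
```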
Remark: For the definition of the measurement probability, there are no eigenstates needed, as stated above. If one wants to conduct an actual physical measurement, though, some observable will be measured, which is represented by a self-adjoint operator, which has, of course, an eigenbasis and the only possible results of the measurement are the eigenvalues. If the system evolves over time, the eigenstates and eigenvalues as well as the operator may be constant or change themselves, depending on the system. | {
"domain": "physics.stackexchange",
"id": 81145,
"tags": "quantum-mechanics, schroedinger-equation, measurements, time-evolution, quantum-measurements"
} |
Relativistic speed/energy relation. Is this correct? | Question: The relativistic energy-momentum equation is:
$$E^2 = (pc)^2 + (mc^2)^2.$$
Also, we have $pc = Ev/c$, so we get:
$$E = mc^2/(1-v^2/c^2)^{1/2}.$$
Now, accelerating a proton to near the speed of light, I get the following results for the energy of proton:
0.990000000000000 c => 0.0000000011 J = 0.01 TeV
0.999000000000000 c => 0.0000000034 J = 0.02 TeV
0.999900000000000 c => 0.0000000106 J = 0.07 TeV
0.999990000000000 c => 0.0000000336 J = 0.21 TeV
0.999999000000000 c => 0.0000001063 J = 0.66 TeV
0.999999900000000 c => 0.0000003361 J = 2.10 TeV
0.999999990000000 c => 0.0000010630 J = 6.64 TeV
0.999999999000000 c => 0.0000033614 J = 20.98 TeV
0.999999999900000 c => 0.0000106298 J = 66.35 TeV
0.999999999990000 c => 0.0000336143 J = 209.83 TeV
0.999999999999000 c => 0.0001062989 J = 663.54 TeV
0.999999999999900 c => 0.0003360908 J = 2,097.94 TeV
0.999999999999990 c => 0.0010634026 J = 6,637.97 TeV
0.999999999999999 c => 0.0033627744 J = 20,991.10 TeV
If the LHC is accelerating protons to $7\,\mathrm{TeV}$ it means they're traveling at a speed of $0.99999999c$.
Is everything above correct?
Answer: Yes you are correct.
If the rest mass of a particle is $m$ and the total energy is $E$, then
$$ E = \gamma mc^2 = \frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}}, $$
thus
$$ \frac vc = \sqrt{ 1 - \left( \frac{mc^2}E \right)^2 } \approx 1 - \frac12 \left( \frac{mc^2}E \right)^2 $$
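This relation is easy to evaluate numerically; a quick sketch with the proton rest energy and the 7 TeV beam energy:

```python
import math

m_c2 = 938e6   # proton rest energy in eV
E = 7e12       # total energy in eV

ratio = m_c2 / E
exact = 1 - math.sqrt(1 - ratio**2)   # exact 1 - v/c
approx = 0.5 * ratio**2               # first-order approximation

print(exact, approx)                  # both ~ 9.0e-9
```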
The proton rest mass is 938 MeV, so at 7 TeV, the proton's speed is
$$ 1 - \frac vc = \frac12 \left( \frac{938\times10^6}{7\times10^{12}} \right)^2 = 9 \times 10^{-9} $$
meaning v ~ 0.999 999 991 c | {
"domain": "physics.stackexchange",
"id": 40,
"tags": "particle-physics, special-relativity"
} |
Polarization vectors in Quantum Electric Field | Question: The quantum electric field is written as,
\begin{equation}
\mathbf{E}(\mathbf{r})=i\sum_{\mathbf{k},\lambda}\sqrt{\frac{\hbar \omega}{2 V \epsilon_0}}\left(\mathbf{e}^{(\lambda)}\hat{a}^{(\lambda)}(\mathbf{k})e^{i\mathbf{k}\cdot\mathbf{r}} - \mathbf{e}^{(-\lambda)}\hat{a}^{\dagger(\lambda)}(\mathbf{k})e^{i\mathbf{k}\cdot\mathbf{r}}\right).
\end{equation}
The $\mathbf{e}^{(\pm\lambda)}$ terms are the polarization vectors. Do these vectors represent any kind of polarization vector, or are they only circular polarization vectors? What if you want to measure something horizontally polarized? Would you just dot the $E$ field with a vector that yields the polarization vector you need?
Answer: So, I actually found this post because I've been trying to answer the same question. Here's what I understand from various sources I've found.
These vectors represent a polarization basis. For example, two orthogonal linear polarizations (which I've seen written before as $e_{\textbf{k}\lambda}$ for $\lambda=1,2$), which I think is the more traditional basis, or an alternate circularly polarized basis (which I've seen written as $e_{\textbf{k}\alpha}$ for $\alpha=1,-1$). These form a basis for the photon polarization, but don't necessarily describe the photon polarization itself. The photon polarization can be expressed in terms of these basis vectors, just like we might express motion in Cartesian or polar coordinates.
These notes here include a pretty good description, just search for "polarization". It's a little technical but it's got a more mathematical description of how the bases work.
This paper is definitely very technical, and I haven't actually read all of it, but the introduction talks some more about how these vectors are used.
I'm not sure how your measurement question fits into this though. | {
"domain": "physics.stackexchange",
"id": 31815,
"tags": "electric-fields, quantum-electrodynamics, fourier-transform, polarization"
} |
jQuery "on('click')" to toggle webpage content | Question: I have made a successful test page that satisfies my goals of using JavaScript ( jQuery specifically ) to add and remove content from a webpage.
<!doctype html>
<head>
<title>test</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
</head>
<body>
<div id="main"></div>
<script>
const foo = `<h1 id="title">Page-Foo-One</h1>
<p>Some text about foo foo foo.</p>
<h3 id="bar_link">Link to bar</h3>`;
const bar = `<h1 id="title">Page-Bar-Two</h1>
<p>Some text about bar bar bar.</p>
<h3 id="foo_link">Link to foo</h3>`;
$("#main").append(foo);
$("#main").on('click','h3#bar_link', function (){
console.log("Clicked bar_link");
$("#main").html("");
$("#main").append(bar);
}).on('click','h3#foo_link', function (){
console.log("Clicked foo_link");
$("#main").html("");
$("#main").append(foo);
}).on('click','h1#title', function (){
console.log("Clicked title");
$("#main").html("");
$("#main").append(foo);
})
</script>
</body>
</html>
For this simple example where only two possible outcomes exist the code isn't too bad to look at, however I know that if this were inflated to what a full website can entail (e.g. dozens of links per page, each one putting in different content) the .on(click) chain would get very complicated.
How else could I handle this situation?
I know that I could just make two different .html files ( foo and bar ) and setup simple anchor links (i.e. <a href="...">) between them and obtain the same effect but my goal is to have an external source generate the content that is displayed.
I am getting used to JavaScript programming in general so please do point out other ways I can improve what I am doing.
Answer: Well, if you use event delegation on the <h3> elements, you won't have to care how many pages or links there are: just bind it once on page load and it will take care of the rest (assuming the links are based on h3, following your given example). That solves your main problem.
$(document).on('click', 'h3', function() {
//do your thing
});
Since you have stated that you have an external source to generate the HTML, I would create a function pageLoader() to load the content, whether that is an AJAX call to fetch the content or a template file using mustache.js or any other templating library.
function pageLoader(){
$.ajax({
url:'path/to/content/',
//other params...
success:function(data){
//load content here
$("#main").html(data);
},
});
}
But if that content is to be used in the same scenario as you have provided, then you have the option of creating an object literal to keep all the pages in one place and call them from one place; when calling, use hasOwnProperty() to avoid inherited properties.
var pages={
pagename:function(){
//load the content of the page
},
pagename2:function(){
//load content of the page
}
}
In this case, what you need to consider is that you either keep the id of the h3 the same as the page name, or define a data-attribute on the h3 and put the page name there, because we will be passing that id/data-attribute as a parameter to our pageLoader() function, which holds all our pages and calls them: the properties of the object literal are the page names, so we can look a page up by name and load it.
Below is a demonstration. Although I can't create an AJAX example here to show what I talked about in the first section, I will use the object-literal approach for the demo, since your actual concern was the binding of the links and not how to load the content, and that is covered. You can update the pageLoader() function with whichever approach you want to load the content.
function pageLoader(pagename) {
"use strict";
var pages = {
"foo": function() {
$("#main").html(`<h1 id="title">Page-Foo</h1>
<p>Some text about foo foo foo.</p>
<h3 id="foo">Link to Foo</h3><h3 id="bar">Link to bar</h3><h3 id="contact">Link to Contact</h3><h3 id="about">Link to About</h3>`);
},
"bar": function() {
$("#main").html(`<h1 id="title">Page-Bar</h1>
<p>Some text about bar bar bar.</p>
<h3 id="foo">Link to Foo</h3><h3 id="bar">Link to bar</h3><h3 id="contact">Link to Contact</h3><h3 id="about">Link to About</h3>`);
},
"contact": function() {
$("#main").html(`<h1 id="title">Page-Contact</h1>
<p>Some text about contact bla bla bla.</p>
<h3 id="foo">Link to Foo</h3><h3 id="bar">Link to bar</h3><h3 id="contact">Link to Contact</h3><h3 id="about">Link to About</h3>`);
},
"about": function() {
$("#main").html(`<h1 id="title">Page-About</h1>
<p>Some text about about about about.</p>
<h3 id="foo">Link to Foo</h3><h3 id="bar">Link to bar</h3><h3 id="contact">Link to Contact</h3><h3 id="about">Link to About</h3>`);
}
};
//call the page
if (pages.hasOwnProperty(pagename)) {
pages[pagename].call(this);
}
}
//call on page load for the first time
$(document).ready(function() {
pageLoader("foo");
});
//use event delegation and bind the h3
$(document).on('click', 'h3', function() {
let pagename = $(this).prop("id");
pageLoader(pagename);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div id="main"></div>
Hope this helps | {
"domain": "codereview.stackexchange",
"id": 29385,
"tags": "javascript, beginner, jquery, html"
} |
Uncorrelated model error | Question: A dynamical system evolves according to $x_{k+1} = M_k x_k + w_k$
$w_k$ denotes the model error.
In a textbook, it is specified that $w_k$ is temporally uncorrelated and
$E[w_kw_j^T]$ = $Q_k$ if j=k and zero otherwise; $Q_k$ is n x n symmetric positive definite
Could someone explain why $w_k$ is temporally uncorrelated and how $Q_k$ is symmetric positive definite?
Answer: If $\mathbf{w_k}$ were temporally correlated, it would have memory and would be equivalent to an augmented state. In other words, it could be written as a state variable with its own independent noise term. You would be able to rewrite the equation in the same form you originally showed. It is the most basic form of the model.
The symmetry of $Q_k$ is because $E\{ ab\}=E\{ba\}$ for any terms $a$ and $b$ of the vector $\mathbf{w_k}$.
Edit to address comments.
Let $\mathbf{x}=\mid x_0,x_1, \dots x_{N-1} \mid^T$ be a random Gaussian distributed random vector.
$$
E\left\{ \mathbf{x} \mathbf{x}^T \right\}=\mathbf{R} \quad \text{Is typically Full rank and also typically} = \sigma^2 \mathbf{I}
$$
when the elements of $\mathbf{x}$ are independent.
There are some exceptions to full rank, like a linear dependence of some of the elements of $\mathbf{x}$ such as $x_i=x_j$ or, more generally, $x_i=\sum_{k=0, k\ne i}^{N-1} \alpha_k x_k$.
The easiest example is to consider $E\{ x_i x_i \} = E\{ x_i^2\}$ . A Gaussian random variable can have zero mean but a variance $\sigma^2$. The matrix $\mathbf{R}$ would have nonzero diagonal terms, and is Full Rank.
The Matrix $\mathbf{R}$ isn't required to be diagonal and has nonzero off diagonal terms when $E\{x_i x_j\} \ne 0$ for some $i$ and $j$. For your problem, your book will likely show the covariance of the evolution of the the state vector and that will most likely have off diagonal nonzero elements.
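An empirical sketch (numpy, with an arbitrary $Q$ of my choosing): sampling $\mathbf{w}_k \sim N(0, Q)$ and averaging $\mathbf{w}\mathbf{w}^T$ recovers a symmetric positive definite matrix, as the answer describes.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # symmetric positive definite by construction

# many independent draws of w ~ N(0, Q)
w = rng.multivariate_normal(mean=np.zeros(2), cov=Q, size=200_000)
Q_hat = w.T @ w / len(w)     # empirical E[w w^T]

print(np.allclose(Q_hat, Q_hat.T))            # symmetric, since E{ab} = E{ba}
print(np.all(np.linalg.eigvalsh(Q_hat) > 0))  # full rank: all eigenvalues positive
```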
$$
E\left\{ \mathbf{x}\right\} E\left\{ \mathbf{x}^T \right\} \quad \text{Is rank one}
$$ | {
"domain": "dsp.stackexchange",
"id": 6243,
"tags": "noise"
} |
Have imperative programs been defined like this? | Question: Possibly improper definition $\;$ An imperative program is a labeled directed graph, with every vertex labeled by a command and every edge labeled by a predicate.
Denote an edge labeled by predicate $p$ from $x$ to $y$ by $(x, p, y)$.
Two consecutive steps are expressed as $(x, \top, y)$. "Goto" is also expressed in this way.
A binary branch is expressed as $\{(x, p, y), (x, \neg p, z)\}$.
As can be easily noticed, "command" is not defined. I do not know what that should be, but it should be as weak as possible.
Have imperative programs been defined like that?
I also think a mathematical definition could help me go further understanding the space of imperative programs.
Answer: Short answer: yes.
Longer answer -- there probably isn't a unique reference, but control flow graphs (essentially what you describe) and their precursors, flowcharts, are old. Here are three references, from antiquity to something very recent, that use minor modifications of the model. The latter two define the model fairly precisely. You'll find a lot of lattices in the second paper.
Assigning Meaning to Programs, Robert Floyd, 1967.
Systematic Design of Program Transformation Frameworks by Abstract Interpretation, Patrick Cousot and Radhia Cousot, POPL 2002.
Assertion Checking Unified, Sumit Gulwani and Ashish Tiwari, VMCAI 2007. | {
"domain": "cstheory.stackexchange",
"id": 1017,
"tags": "reference-request, imperative-programming"
} |
Application of difference array for overlapping glass-tint problem | Question: For this problem, I heard that using a difference array can solve it, but I can't seem to figure out how. Could anyone give me some advice? Please keep it simple since I am only a high school student preparing for the national informatics olympiad.
Given an array of numbers, we can construct a new array by replacing each element by the difference between itself and the previous element, except for the first element, which we simply ignore. This is called the difference array, because it contains the first differences of the original array. We will denote the difference array of array A by D(A). For example, the difference array of A = [9, 2, 6, 3, 1, 5, 0, 7] is D(A) = [2-9, 6-2, 3-6, 1-3, 5-1, 0-5, 7-0], or [-7, 4, -3, -2, 4, -5, 7].
Source
I've found a solution in Python, but I can't understand it.
Canadian Computing Competition: 2014 Stage 1, Senior #4:
You are laying N rectangular pieces of grey-tinted glass to make a stained glass window. Each piece of glass adds an integer value "tint-factor". Where two pieces of glass overlap, the tint-factor is the sum of their tint-factors.
You know the desired position for each piece of glass and these pieces of glass are placed such that the sides of each rectangle are parallel to either the x-axis or the y-axis (that is, there are no "diagonal" pieces of glass).
You would like to know the total area of the finished stained glass window with a tint-factor of at least T.
Input Specification:
The first line of input is the integer $N$ ($1\leq N\leq 10^3$), the number of pieces of glass. The second line of input is the integer $T$ ($1\leq T\leq 10^9$), the threshold for the tint-factor. Each of the next $N$ lines contain five integers, representing the position of the top-left and bottom-right corners of the $i$-th piece of tinted glass followed by the tint-factor of that piece of glass. Specifically, the integers are placed in the order $x_1$ $y_1$ $x_2$ $y_2$ $t$, where the top-left corner is at $(x_1,y_1)$ and the bottom-right corner is at $(x_2,y_2)$, and tint-factor is $t$. You can assume that $1\leq t\leq 10^6$. The top-most, left-most coordinate where glass can be placed is $(0,0)$ and you may assume $0\leq x_1<x_2\leq K$ and $0<y_1<y_2\leq K$, and ...
Output Specification:
Output the total area of the finished stained glass window which has a tint-factor of at least $T$.
I have an implementation but I'm not quite sure how it works. Any explanation would be greatly appreciated.
Answer: First, I will look at this problem from a more (high-level) theoretical perspective and then go through the details required in an implementation, based on the solution you linked to.
Given all the rectangles, we can divide the area into portions such that every portion is rectangular and all points in the same portion intersect the same set of rectangles. A division of the example input into rectangular portions can be seen below:
Our task is to identify the rectangular portions of glass that have at least the required tint $T$. To do this, it is necessary that we iterate in some manner over all these rectangular portions to determine their tint. It is also sufficient: the portions are determined by considering all rectangles they intersect, at which point we can inspect their tint. So, we can create an algorithm that solves this problem in $O(R)$, where $R$ is the number rectangular portions and this is (asymptotically) the best we can do.
But how big can $R$ become? If we have some collection of rectangular portions formed by the intersections of $i$ rectangles and add a new rectangle, we create one new portion for each portion that is intersected by one of the four edges of the new rectangle. So how many portions can intersect the same horizontal line? At most $2i-1$, since the same horizontal line can be intersected by at most $2$ vertical lines for every rectangle added (apart from the first). So, after adding all rectangles, we get $R\leq \sum_{i=1}^N 4\cdot (2i-1) = O(N^2)$.
So, our algorithm will run in $O(N^2)$. Given that $N\leq 10^3$, it should be fast enough if we implement it efficiently.
But implementing this efficiently could be rather tricky. Sure, it's easy to say that you can just add rectangles and create new portions as a result of their intersection, but that would be pretty cumbersome to implement and possibly inefficient.
This is where the 'difference array' approach comes in. The principle is slightly easier to explain with a 1D-array, so we will restrict the problem to that for now. Suppose we have input
x1 x2 t
11 20 1
13 14 2
17 18 1
12 19 1
So, we can draw it like this:
The array $A$ contains the tint values over all contiguous 'rectangles' in 1D, which is what we need. The array $dA$ is the difference array of $A$. Observe that this difference array can be easily found: after sorting the endpoints of our 'rectangles', we add the tint-value of the rectangle on the left side (when 'entering' the rectangle) and subtract it on the right side (when we 'leave' the rectangle). After we have found our difference array $dA$, we can compute $A$ by noting that $A[i] = \sum_{j=1}^{i} dA[j]$. Then, we scan $A$ to find the areas that have tint $\geq T$, add their area to our final value, and we're done.
For 2D arrays, we do exactly the same thing, but now with an extra dimension to keep track of.
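To make this concrete, here is a minimal Python sketch of the 2D difference-array idea. This is not the linked implementation: it assumes small integer coordinates and counts unit grid cells, whereas a real solution would first compress the $O(N)$ distinct coordinates so the grid stays $O(N^2)$ in size. The function name and argument order are made up for illustration.

```python
def tinted_area(rects, threshold, K):
    # 2D difference array dA over a (K+1) x (K+1) grid of unit cells
    d = [[0] * (K + 1) for _ in range(K + 1)]
    for x1, y1, x2, y2, t in rects:
        # entering a rectangle adds t, leaving it subtracts t, in both dimensions
        d[y1][x1] += t
        d[y1][x2] -= t
        d[y2][x1] -= t
        d[y2][x2] += t
    # prefix sums recover A from dA: first along rows, then along columns
    for y in range(K):
        for x in range(1, K):
            d[y][x] += d[y][x - 1]
    area = 0
    for y in range(K):
        for x in range(K):
            if y > 0:
                d[y][x] += d[y - 1][x]
            if d[y][x] >= threshold:
                area += 1  # each grid cell is a unit square
    return area
```

For example, two 2x2 rectangles of tint 2 overlapping in a single unit cell give area 1 for $T=3$: only the overlap reaches the threshold.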
Let me make a final remark: it is in general very hard to 1. reconstruct an algorithm from an implementation and 2. understand an implementation of an algorithm without understanding the algorithm. Therefore, just inspecting source code to understand an algorithm is generally a hopeless task. In this case, I could only understand the implementation in this source after I understood the explanation. | {
"domain": "cs.stackexchange",
"id": 10633,
"tags": "algorithms, data-structures, computational-geometry, arrays"
} |
"...because there is no saturation." Elliott Lieb on the interacting 1d Bose gas | Question: In the this article by Lieb, Liniger:
"Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State"
a repulsive Bose gas is considered in 1d with Hamiltonian
$$
H = -\sum_\ell \partial_\ell^2 + 2c \sum_\ell \sum_{m<\ell} \delta(x_m-x_\ell)
$$
In one passage the solution space is considered for positive and negative $c$. It is argued:
we are interested in the repulsive case $c\geq0$. While the attractive case $c<0$ has solutions, the case is not physically meaningful, because there is no saturation. It can be shown that the energy of the $N$ particle ground state is proportional to $-N$ for $c\geq0$ but $-N^2$ for $c<0$.
I am interested in understanding the physical meaning of this particle-energy proportionality, and what is meant by this saturation.
Answer: I suspect that what was meant by that statement (lacking saturation) is that no stable N-body state exists for the attractive case. For fermion systems the attractive case can be stabilized by the Pauli principle, but Bose systems lack this stabilization mechanism. Lieb's work has for many decades specialized in the study of stability in many-body quantum systems. | {
"domain": "physics.stackexchange",
"id": 63343,
"tags": "statistical-mechanics, mathematical-physics"
} |
How can it be known that Venus does not have plate tectonics? | Question: This answer provides some insight into Venus' surface geology:
Water may be necessary as a lubricant for plate tectonics. Whether or not this is the case, Venus does not have plate tectonics. It instead has a stagnant lid geology, punctured occasionally by extreme vulcanism (Siberian traps level vulcanism, and then some).
What are the observations that lead to this conclusion? The existence of plate tectonics on Earth was first determined by putting a lot of pieces of the first-hand observational puzzle together. There's much less data available from Venus.
Answer:
There's much less data available from Venus.
Some data exists. As mentioned in HDE 226868's answer, maps of Venus's surface exist. Like Earth's atmosphere, Venus's atmosphere is transparent to some low frequency electromagnetic radiation such as those used by radar. These observations are consistent with a planet that has stagnant lid tectonics and inconsistent with a planet that has active plate tectonics; more on this below.
In addition to these remote observations, the Soviet Union successfully sent several spacecraft into Venus's atmosphere, some of which landed and briefly operated on the surface of Venus. The initial attempts failed because nobody thought the surface conditions on Venus would be as brutal as they are. To make landing successful, the Soviet Union had to significantly downsize the parachutes and they had to use materials and avionics that could withstand very high temperatures.
Every piece of evidence gathered to date regarding Venus is inconsistent with a planet with active plate tectonics:
Venus surface temperature and pressure are well above water's critical point. Venus cannot have any liquid water on its surface. Water is widely (but not universally) thought to be critical as a lubricant that enables plate tectonics to occur.
The radar observations show a planet with a nearly universal surface age, about half a billion years old. This is inconsistent with a planet with active plate tectonics but consistent with a planet with stagnant lid tectonics.
Venus's atmosphere is very thick, much thicker than the Earth's atmosphere, and is dominated by carbon dioxide. Plate tectonics recycles carbon dioxide at subduction zones. Stagnant lid tectonics does not. A planet with plate tectonics will see a gradual reduction in the amount of carbon dioxide in its atmosphere over geologically long periods of time. A planet with intermittently active stagnant lid tectonics will instead see a gradual increase in the amount of carbon dioxide in its atmosphere over geologically long periods of time.
Multiple physics-based models of a planet with very high surface temperatures suggest that such planets will have rather thin and rather ductile crusts that can readily repair themselves against damage caused by subsurface tensions.
Note very well: The stagnant lid tectonics of hot terrestrial planets such as Venus and possibly Titan (Titan is "hot" because its geology is ice-based rather than rock-based) is rather different from the stagnant lid tectonics of cold terrestrial planets such as the Moon and Mars. The surfaces of the Moon and Mars are very old. The surface of Venus is much younger in comparison. Venus has undergone at least one somewhat recent nearly global resurfacing event. This has not happened on the Moon or Mars. Plate tectonics appears to require a planet whose surface is neither too cold nor too hot, and that has a good amount of liquid water on the surface.
Some references:
David Bercovici and Yanick Ricard, "Plate tectonics, damage and inheritance," Nature 508.7497 (2014): 513.
DOI: 10.1038/nature13072.
A. Davaille, S. E. Smrekar, and S. Tomlinson, "Experimental and observational evidence for plume-induced subduction on Venus," Nature Geoscience 10.5 (2017): 349.
DOI: 10.1038/NGEO2928.
James F. Kasting and David Catling, "Evolution of a habitable planet," Annual Review of Astronomy and Astrophysics 41.1 (2003): 429-463.
DOI: 10.1146/annurev.astro.41.071601.170049.
Mikhail A. Kreslavsky, Mikhail A. Ivanov, and James W. Head, "The resurfacing history of Venus: Constraints from buffered crater densities," Icarus 250 (2015): 438-450.
DOI: 10.1016/j.icarus.2014.12.024
Ignasi Ribas et al., "Evolution of the solar activity over time and effects on planetary atmospheres. I. High-energy irradiances (1-1700 Å)," The Astrophysical Journal 622.1 (2005): 680.
DOI: 10.1017/S0074180900182427.
Accessible pdf: https://iopscience.iop.org/article/10.1086/427977/pdf. | {
"domain": "astronomy.stackexchange",
"id": 3575,
"tags": "venus, planetary-formation, planetary-science"
} |
what if there is a door for which our immune system has no key? | Question:
And as a B cell matures, it develops the ability to determine friend from foe, developing both
immunocompetence -- or how to recognize and bind to a particular antigen -- as well as
self-tolerance, or knowing how to NOT attack your body’s own cells.
Once it’s fully mature, a B lymphocyte displays at least 10,000 special protein receptors
on its surface -- these are its membrane-bound antibodies.
All B lymphocytes have them, but the cool thing is, every individual lymphocyte has
its own unique antibodies, each of which is ready to identify and bind to a particular kind of antigen.
That means that, with all of your B lymphocytes together, it’s like having 2 billion keys
on your immune system’s keychain, each of which can only open one door.
This is what Hank said in Crash Course. So each B cell has several unique antibodies, and I also saw that this is true for T cells too; the dendritic cells look for a helper T cell that can bind the parts of the intruders which the dendritic cell has presented on its membrane.
my question is
What if there is no antibody in any cell against an antigen?
Does the immune system have antibodies against all antigens in the world?
What happens if an antigen came into our body which has no antibody matching for it?
Answer: The same thing that happened to the Native Americans when they were first exposed to smallpox: extermination of 90%-95% of the population. https://en.wikipedia.org/wiki/History_of_smallpox
The immune system is unable to attack bacteria or viruses whose antigens it cannot recognize. As a result the bacteria/virus continues to grow unopposed, until the person dies.
The disease goes wild, spreads like wild fire, killing every single individual it infects. In the worst case scenario, the species may go extinct.
EDIT
Why do vaccines work.
Firstly, the adaptive immune system is adaptive. The immune system generates new antibodies (i.e. new keys) by splicing together different versions of the V(D)J genes in a big round of mixing and matching. Then it uses an enzyme called AID to induce hypermutation in that combination. The end result is a nearly limitless number of antibodies (keys).
https://en.wikipedia.org/wiki/Somatic_hypermutation
https://en.wikipedia.org/wiki/Antibody
However this process takes time. And during that time the bacteria/virus is growing and growing. Weakening the body. So it becomes a race. Can the body generate an antibody that recognizes the bacteria/virus, before the body dies.
A vaccine helps in this respect, because the vaccine can expose the immune system to the antigen of the deadly virus/bacteria without actually using the active/live virus.
ie Some vaccines only uses pieces and parts of the bacteria/virus. The antigen is there. But the virus is dead.
In other, older vaccines, the virus used in the vaccine has been weakened/mutated so much that it has a low probability of beating the body before the immune system does. https://en.wikipedia.org/wiki/Edward_Jenner
In the Edward Jenner cowpox vaccine... the cowpox virus so happened to have antigens similar enough to smallpox, that antibodies against cowpox works on smallpox. And cowpox is not deadly to humans.
Now, here is the interesting part. The arms race between host and disease has gone on for a very long time. So many viruses develop what is equivalent to chaff. These are proteins that stick out on the membrane surface and thus are easily recognized by the immune system, but are rapidly changed. So by the time the immune system launches an aggressive response, the protein changes again. Kind of like a disguise that a thief changes every time he robs a bank, so the police are always one step behind. The bacteria grow and grow, constantly switching their antigen coat, and eventually the person dies.
https://www.ncbi.nlm.nih.gov/books/NBK27176/
https://en.wikipedia.org/wiki/Relapsing_fever
A vaccine in this case, presents the immune system with the one part of the bacteria antigen coat that does not change. So an effective immune response can be made. However as experience with HIV antibodies has found, those constant parts tend to be protected by the parts that do rapidly change. So antibodies have trouble reaching it. And this is why we do not have vaccines for HIV. The constant regions are too well protected.
One response to this can be seen in the flu vaccine. Here the virus changes its antigen coat not every few days, but on average once a year. So we try to predict what the flu strain of the year will be, and raise a vaccine to that year's antigens. Sometimes the predictions are spot on, so the vaccine works perfectly. Sometimes the antigen predictions are a little off, so the vaccines don't work so well. And in a rare bad year, the predictions are completely off, and the flu vaccine does not work. | {
"domain": "biology.stackexchange",
"id": 8012,
"tags": "human-biology, molecular-biology, immunology, immune-system, immunity"
} |
Operations of Tensors with Different Orders | Question: I know that a 4th order tensor times a 2nd order tensor yields a 2nd order tensor; and a 2nd order tensor times a 2nd order tensor yields a 0th order tensor, or scalar. But from my linear algebra knowledge, the product of two 3 by 3 matrices is still a 3 by 3 matrix, which indicates that the product of two second-order tensors is still a second order tensor. How do I reconcile these things?
Answer: The rank of a tensor should not be confused with the dimension of a matrix. All matrices are second order tensors (rank 2), regardless of their dimension. In terms of tensor notation, a matrix $M$ is represented as $M_{ab}$, where the two indices $a$ and $b$ represent the rows and columns. These indices run over all the integer values that represent the different rows and columns.
Moreover, a "4th order tensor times a 2nd order tensor yields a 2nd order tensor," only if two of the indices are contracted. Compare
$$ A_{abcd}B_{cd} = C_{ab} , $$
and
$$ A_{abcd}B_{de} = C_{abce} . $$
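These two contractions are easy to check numerically with NumPy's einsum; the shapes below are arbitrary and only meant to illustrate the index bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3, 3, 3))  # 4th-order tensor A_abcd
B = rng.standard_normal((3, 3))        # 2nd-order tensor B

# contracting two indices (c and d): the result is a 2nd-order tensor C_ab
C2 = np.einsum('abcd,cd->ab', A, B)

# contracting only one index (d): the result is a 4th-order tensor C_abce
C4 = np.einsum('abcd,de->abce', A, B)

print(C2.shape, C4.shape)  # (3, 3) (3, 3, 3, 3)
```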
In the first case two indices are contracted leading to a second rank tensor. In the second case only one index is contracted leading to a 4th rank tensor. | {
"domain": "physics.stackexchange",
"id": 89189,
"tags": "tensor-calculus"
} |
Two DC motors and single output? | Question: I saw an old industrial robot (year 1988) whose end effector has 2 DC motors for the roll drive. After the roll drive, the yaw and pitch drives are connected, each with its own DC motor.
But the roll drive has two DC motors. Why are they used like this? Why not a single motor with higher torque?
All the roll, pitch and yaw motors are the same spec. 4 DC motors in total.
The two DC motors are connected to a single shaft using gears in the roll drive.
Answer: Reusing parts is a very important and common concept.
Using the same DC motor 4 times means you have to deal with only one motor type. If you choose a 2nd type of motor for the roll, you have to fiddle with the characteristics of two motors. Using a motor with more torque usually means that you need a different drive circuit. From here on the problems are proliferating.
What if the supplier of the bigger motor stops selling the motor?
What if the driver circuit of the bigger motor does not have the
precision of the smaller ones?
What's the price to buy 1 + 2 motors vs. buying 4 of the same kind?
If you are a company building industrial robots you buy DC motors in bulk. You use few standard motors that your engineers are familiar with. The driver circuits are proven and tested.
If you face a higher torque requirement, you simply add another familiar motor. It's not economical to pay somebody to find a better-suited motor, which has several possible drawbacks as pointed out above.
As a company your goal is to sell robots, not to overengineer them. | {
"domain": "robotics.stackexchange",
"id": 837,
"tags": "industrial-robot"
} |
what does mean quasi-static channel | Question: When I read papers about channels, I usually see something Quasi-static Channel,
Does "Quasi-static Channel" mean "time-variant channel" or "time-invariant channel"?
Answer: Quasi-static is almost-static. In other words, for a block (or window) period of time, you could assume that your channel is static. Below, I attach a figure that depicts this scenario. As you can see, the channel could be assumed static for around 100 ms. | {
"domain": "dsp.stackexchange",
"id": 6611,
"tags": "fading-channel, multipath, multi-channel"
} |
Logarithm in state space equations | Question: I want to linearize a system to this form
$$\begin{bmatrix}
\Delta\dot{x}_1\\
\Delta\dot{x}_2\\
\Delta\dot{x}_3
\end{bmatrix} = A\begin{bmatrix}\Delta x_1\\ \Delta x_2\\ \Delta x_3\end{bmatrix}+B\begin{bmatrix} \Delta u\end{bmatrix}$$
$$\begin{bmatrix}
\Delta y
\end{bmatrix} = C\begin{bmatrix}\Delta x_1\\ \Delta x_2 \\ \Delta x_3\end{bmatrix}+D\begin{bmatrix} \Delta u\end{bmatrix}$$
The A matrix
$$A = \begin{bmatrix} \frac{\partial \dot x_1}{\partial x_1} & \frac{\partial \dot x_1}{\partial x_2} & \frac{\partial \dot x_1}{\partial x_3}\\[6pt] \frac{\partial\dot x_2}{\partial x_1} & \frac{\partial \dot x_2}{\partial x_2} & \frac{\partial \dot x_2}{\partial x_3}\\[6pt] \frac{\partial\dot x_3}{\partial x_1} & \frac{\partial \dot x_3}{\partial x_2} & \frac{\partial \dot x_3}{\partial x_3}\end{bmatrix}_{|P}$$
$$=\begin{bmatrix} M_1L & \frac{d}{dt} \left[\frac{R_2}{M^2}\ln(x_2(t)+1)\right] & R_1\\[6pt] L_1L_2 & \frac{d}{dt} \left[\frac{L_1}{M^2}\ln(x_2(t)+1)\right]& R_2\\[6pt] 0 &\frac1C & 0\end{bmatrix}_{|P}$$
has logarithms in its second column. Pay attention only there. ($M$, $L$'s and $R$'s are some constants.) If I differentiate it, I get
$$=\begin{bmatrix} M_1L & \frac{R_2 \frac{dx_2(t)}{dt}}{M^2(x_2(t)+1)} & R_1\\[6pt] L_1L_2 & \frac{L_1 \frac{dx_2(t)}{dt}}{M^2(x_2(t)+1)}& R_2\\[6pt] 0 &\frac1C & 0\end{bmatrix}_{|P}$$
Am I right?
And I'm not sure what to do with this. Since my initial condition $P$ are in equilibrium and $\frac{dx_2(t)}{dt} = 0$, should I put there the $0$? The result would look like
$$=\begin{bmatrix} M_1L & 0 & R_1\\[6pt] L_1L_2 & 0 & R_2\\[6pt] 0 &\frac1C & 0\end{bmatrix}_{|P}$$
But few weeks ago I have been solving similar situation and I didn't put $\frac{dx_2(t)}{dt}$ in the matrix at all. Lecturer told me it was right. According to that, this matrix should look like
$$=\begin{bmatrix} M_1L & \frac{R_2}{M^2(x_2(t)+1)} & R_1\\[6pt] L_1L_2 & \frac{L_1 }{M^2(x_2(t)+1)}& R_2\\[6pt] 0 &\frac1C & 0\end{bmatrix}_{|P}$$
But I'm confused. Isn't the mode $x_i(t)$ considered a function of time?
How to solve this?
Answer: It's $\frac{d\dot x_2}{dx_2}$, not $\frac{d\dot x_2}{dt}$: you differentiate with respect to the state, not with respect to time. That's why the right result truly is $\frac{R_2}{M^2(x_2(t)+1)}$. | {
"domain": "dsp.stackexchange",
"id": 1616,
"tags": "system-identification, state-space"
} |
openni2 load oni file in Indigo | Question:
I am not able to load a oni file (OpenNI record file) with openni2 in Indigo. In Hydro I was able to load it simply with:
roslaunch openni2_launch openni2.launch device_id:=/path/to/your/file.oni
but in Indigo openni2 still tries to open the real device.
Any suggestions?
Originally posted by rastaxe on ROS Answers with karma: 620 on 2015-11-02
Post score: 0
Answer:
I was able to solve the issue by changing these lines in openni2_camera (line 682):
// check if device_id is a oni file
else if( device_id.size() - device_id.rfind(".oni") == 4 ) {
return device_id;
}
Originally posted by rastaxe with karma: 620 on 2016-09-12
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 22882,
"tags": "ros, openni2, ros-indigo"
} |
CNN: How do I handle Blurred images in the dataset? | Question: I have 30% blurred images in each classes. I have a total of 10 classes. I'm not allowed to drop these blurred images. How do I train the model to get better accuracy for both blurred and nonblurred training dataset ? Currently, I'm at 11% accuracy.
The images were blurred using a Gaussian blur.
I have used a Wiener filter, but not able to restore the image from blurred images.
Please can anyone suggest a good way to train the model.
Answer: I suggest using data augmentation approaches to even out your data distributions. It will make your blurred images more usable to the model.
The data distribution of 30% of your images deviates from the rest because they are blurred. Experiment with training using random blur with appropriate min-max ranges in the data augmentation pipeline (on the images that aren't blurred). This will help the model to smoothly generalize across blurred images. If you don't have labels of which images are blurred, use blur detection algorithms to determine a threshold after which you want to augment.
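As a sketch of that augmentation step (NumPy only; the sigma range, kernel radius, and function name are assumptions to tune against your own data, and in practice you would likely use your framework's built-in Gaussian blur transform):

```python
import numpy as np

def random_gaussian_blur(img, sigma_range=(0.0, 2.0), rng=None):
    """Augmentation sketch: blur a 2-D float image with a Gaussian of
    random sigma so the model also trains on blur levels it will see.
    The sigma range is an assumption; match it to your blurred data."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = rng.uniform(*sigma_range)
    if sigma < 1e-3:
        return img  # effectively no blur drawn this time
    # build a normalized 1-D Gaussian kernel and apply it separably
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out
```

Applied with a random sigma per sample inside the training loop, this lets the model generalize smoothly between sharp and blurred inputs.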
After doing this, it may be important that you do test-time data augmentation as well. | {
"domain": "datascience.stackexchange",
"id": 8305,
"tags": "deep-learning, cnn, image-classification, computer-vision, gaussian"
} |
Optimization of a Laravel controller that passes data and views | Question: I am trying to build a website that shows events. And I use the following controller.
Please note that the urls ($view and $course) etc. are renamed and are not the ones used on the real website!
Controller:
<?php
namespace App\Http\Controllers;
use Carbon\Carbon;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\DB;
use App\Models\Eventary;
class EventaryEventsController extends Controller
{
// function for the /kursangebote/{$course} pages
public function showEvents($course)
{
// filter the courses and set the views and id
if ($course == 'workshops') {
$view = 'pages.course.view-1';
$id = '1';
}
if ($course == 'events') {
$view = 'pages.course.view-2';
$id = '2';
}
if ($course == 'salsa') {
$view = 'pages.course.view-3';
$id = '3';
}
if ($course == 'dance4') {
$view = 'pages.course.view-4';
$id = '4';
}
if ($course == 'dance5') {
$view = 'pages.course.view-5';
$id = '5';
}
if ($course == 'dance6') {
$view = 'pages.course.view-6';
$id = '6';
}
if ($course == 'dance7') {
$view = 'pages.course.view-7';
$id = '7';
}
if ($course == 'dance8') {
$view = 'pages.course.view-8';
$id = '8';
}
// get the course data from the database
$events = Eventary::query()
->orderBy('title')
->orderBy('start')
->where('category', $id)
->where('start', '>', Carbon::now())
->get();
// pass through the data to the correct views
return view($view, [
"events" => $events
]);
}
// function for the /events page
public function workshopsList()
{
// get the course data from the database
$events = Eventary::query()
// show all events with the category 1 (events & workshops)
->where('category', '1')
->where('start', '>', Carbon::now())
->get();
// pass through the data to the correct views
return view('pages.course.event-list', [
"events" => $events
]);
}
// function for the /kalender page
public function calendarList()
{
// get the course data from the database
$events = Eventary::query()
// show all events with the category 1 (events & workshops)
->where('start', '>', Carbon::now())
->get();
// pass through the data to the correct views
return view('pages.calendar', [
"events" => $events
]);
}
// function for fullcalendar json generation
public function feed()
{
// get database
$events = DB::table('eventaries')->get();
return json_encode($events, JSON_PRETTY_PRINT);
}
}
There is a lot of code that gets repeated, so I think that this is something that I should improve. But are there any other suggestions?
Answer: Put your repeated or complicated queries into scopes on the model class. Here I also use the helper now to avoid importing Carbon:
public function scopeFuture($query)
{
$query->where('start', '>', now());
}
You can even use this scope in other scopes. If you find yourself using the same scope on most to all queries, you could use a global scope, but I'm not a fan since they're usually unexpected and confusing.
What does your route look like for /kursangebote/{$course}? If you're not constraining what $course can be, then this code will error out when someone inevitably tries to visit /kursangebote/dance (no number). If you do have constraints on the route, I'd suggest not doing that, and instead making your controller responsible for returning a sensible response, e.g. if you can't match it to a category, abort(404).
Also it looks like you're expecting some partially uppercase URLs which is a confusing user experience. Everything should be lowercase, with words separated by hyphens (unless they're separated with a slash), as that's better for SEO.
What does the first "show all events with the category 1 (events & workshops)" mean? Aren't "Events" category 2? What does the second instance of this comment mean? You're not filtering on category there.
Try to avoid the magic of arbitrary numbers like '1' if you can. If you can't, consider using enums (PHP 8.1) to use a clear name instead.
The name "pages.course.view-N" is suboptimal. You don't need the word "view" because you know it's a view: it's a .blade.php file, and it's located in your "views" directory. What you don't know from looking at the file name is what type of dance it's for.
Style guides recommend using snake_case for view files, not hyphens.
If your "views" directory only contains a "pages" directory, then I would remove the "pages" directory. Even if it doesn't, it sounds like it's a catchall and not very descriptive.
The chain of if statements seems to me like it's indicating a deeper problem, but it's hard to give a suggestion without seeing more code. While you can use some other language construct here (such as match), I think you need a bigger rewrite.
What's the purpose of having so many different views ("course.view-N")? Is there a way that you can rewrite your view code to not need so many different views?
And why not store categories in the database? Right now, if you wanted to add more categories, you have to write more code. A database would allow you to expand and have an admin interface to add more categories as needed — or remove some. | {
"domain": "codereview.stackexchange",
"id": 42643,
"tags": "php, laravel"
} |
How to change random number of random elements in a column by group in R? | Question: I have the following data frame:
>df
Name
1 Ed
2 Ed
3 Bill
4 Tom
5 Tom
6 Tom
7 Ed
8 Bill
9 Bill
10 Bill
My goal is that from each group by "Name" change the "Name" values 25-75% of random rows to "Name"+"_X" (the remaining rows don't change). So the expected output is similar to:
Name
1 Ed
2 Ed_X
3 Bill_X
4 Tom
5 Tom_X
6 Tom
7 Ed_X
8 Bill
9 Bill
10 Bill_X
I have tried a for loop like this (at the moment, for 50% of random rows):
for (n in unique(df$Name)){
df[sample(which(df$Name==n), nrow(df[df$Name==id,])/2), 1] <- paste(df$Name, "_X", sep="")
}
Unsuccessfully, however.
Answer: The idea is correct, even if you have a couple of mistakes in your code (like id is not defined, and inside paste you want to use just n, not df$Name). This is not super compressed code but it does the job:
Name = c('Ed','Ed','Bill','Tom','Tom','Tom','Ed','Bill','Bill','Bill')
df = data.frame(Name)
for (n in unique(df$Name)){
# get indices
indices = which(df$Name==n)
# sample size
samp_size = round(length(df[df$Name==n,])/2)
# get indices to replace
samp = sample(indices, samp_size)
# need to set column as character
df$Name = as.character(df$Name)
# set new values
df[samp,] = paste(n,'_X',sep='')
# set again column as factor
df$Name = as.factor(df$Name)
}
Out:
Name
1 Ed
2 Ed_X
3 Bill
4 Tom_X
5 Tom_X
6 Tom
7 Ed_X
8 Bill
9 Bill_X
10 Bill_X | {
"domain": "datascience.stackexchange",
"id": 7226,
"tags": "r, dataframe"
} |
What is the relation between kinetic energy and momentum? | Question: If kinetic energy is doubled, what happens to momentum? Is it also doubled?
I've tried working through the formulas for each but keep getting lost.
$$KE=\frac{mv^2}{2}$$
$$p=mv$$
so if $v=\frac{p}{m}$ then $KE= \frac{m}{2} \cdot (\frac{p}{m})^2$ so $KE=\frac{1}{2} \cdot \frac{p^2}{m}$
then $p=(2\,KE\,m)^{1/2}$ ... so if KE is doubled, what happens to $p$?
Answer: From your last equation you get $2KE=\frac{1}{2} \cdot \frac{p_{final}^2}{m}=2 \cdot \frac{1}{2} \cdot \frac{p_{initial}^2}{m}$
so you'll get: $p_{final}=\sqrt2 p_{initial}$ | {
"domain": "physics.stackexchange",
"id": 17488,
"tags": "homework-and-exercises, energy, kinematics, momentum"
} |
robot doesn t start moving | Question:
I have setup navigation stack on my robot with all the nodes up and running (move_base, amcl , costmap2d etc.). I am also able to move my robot using teleoperation (keyboard control) and see its motion in Rviz as well. Now, I have created a map using gmapping and loaded it with amcl and when I give 2D NavGoal command from Rviz, the robot does not move. It shows the planned path in Rviz but does not move. In the terminal, it prints 'Got new plan' many times and at the end prints 'Rotate Recovery behavior started'. And after that, nothing happens. Any ideas why the robot is not moving. Thanks
Originally posted by Momo on ROS Answers with karma: 1 on 2018-06-20
Post score: 0
Original comments
Comment by stevejp on 2018-06-20:
This type of problem is really difficult to help with as there are a ton of different things that could be happening, and we don't know your entire setup (simulated?). My general advice, however, is to start out with a minimal setup, and add complexity as you verify functionality.
Comment by Momo on 2018-06-21:
I am using an Arduino Mega as Peter Chau did in his work http://labone.tech/chefbot-buildlog/ but with different encoders (B83609). I am using the same IMU (MPU6050), Kinect, and L298N.
I suppose the problem is coming from the interrupts and timers (timer1 and timer3) of the encoders.
Comment by Arif Rahman on 2018-06-21:
1 thing to check: make sure the keyboard teleop is OFF when sending the goal. Kill the node. Otherwise their commands will conflict.
Comment by nunuwin on 2019-06-30:
I have the same error but I used gazebo simulation, so how can I solve this?
Comment by Xiyu Chen on 2020-09-02:
Hi did you solve the problem? I am currently getting stuck on the same issue!
Answer:
Take a look into using a multiplexer (for example the yocs_cmd_vel_mux package) as it seems that the messages that the teleoperation and move_base nodes are publishing may be conflicting with each other. What ends up happening (at least from what I've seen) is that multiple nodes publishing cmd_vel messages at the same time on the same topic will cause poor behavior such as jittering or, as you've seen, the robot not moving (probably due to one node publishing empty cmd_vel messages).
By using a multiplexer, you can prioritize the cmd_vel messages so that you can use both nodes at the same time. From what I recall, the way that this works is that each node publishes on its own topic and the multiplexer will republish from those topics onto the topic that is used to control your robot. If both topics have messages being published on them, then the topic that has a higher priority will only be sent to your robot's topic.
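To make the priority idea concrete, here is a tiny Python sketch of the multiplexing logic. It is an illustration of the concept only, not the actual yocs_cmd_vel_mux API; the class name, method names, and the timeout behavior are assumptions.

```python
class CmdVelMux:
    """Sketch of a cmd_vel priority multiplexer: each source has a
    priority, and the output follows the highest-priority source that
    has published within the timeout window."""

    def __init__(self, timeout=0.5):
        self.timeout = timeout
        self.inputs = {}  # source name -> (priority, last_stamp, last_msg)

    def publish(self, name, priority, msg, stamp):
        # record the latest message from each source
        self.inputs[name] = (priority, stamp, msg)

    def output(self, now):
        # only sources heard from within `timeout` compete; highest priority wins
        live = [(p, m) for (p, t, m) in self.inputs.values()
                if now - t <= self.timeout]
        if not live:
            return None
        return max(live, key=lambda pm: pm[0])[1]
```

A real multiplexer would also subscribe to and publish ROS topics and load its priorities from a YAML config; the core selection rule is the part shown here.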
Originally posted by jayess with karma: 6155 on 2018-12-18
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 31052,
"tags": "ros, arduino, navigation, ros-kinetic, base-odometry"
} |
What chemicals are going to be used to make the smoke black or white? | Question: For the papal election, white and black smoke will be used to signal success or failure in the voting process.
From what I understand, the ballots are burnt and chemicals are added to create the color.
But what chemicals are used to get black or white smoke?
Answer: I think Titanium Tetrachloride would be a good option for white smoke (I've heard it's used quite often in smoke bombs). But it releases quite a lot of $HCl$ gas and so it might not be very safe.
A quick Google search comes up with a lot of answers to your question though. This site even answers your question in the same context (The elections going on at the Vatican):
http://thehappyscientist.com/science-experiment/black-and-white-smoke
Apparently you can make white smoke using Magnesium powder as well. (The white colour is due to fine particles of Magnesium Oxide $MgO$ )
It seems the Vatican has experimented using many sources before (Military flares, Smoke bombs and so on) though apparently they didn't turn out too well...
EDIT
The Wikipedia article on Coloured smoke gives another possibility. You can add Titanium Dioxide ($TiO_2$) to a mixture of Potassium Chlorate ($KClO_3$) and Lactose for white smoke. $KClO_3$ is a powerful oxidizing agent which reacts with Lactose (which is a reducing sugar) to create an explosion. $TiO_2$ is safe and is used as a pigment in most white paints and as a food colouring as well.
EDIT2
The official recipes have been revealed by the Vatican press office. See this story on the New York Times website.
The previous edit was partially correct. They use a mixture of $KClO_3$, lactose and pine rosin for white smoke. For black smoke, they use Potassium perchlorate ($KClO_4$), anthracene (an aromatic compound made of three fused benzene rings) and sulfur.
The process (quoted from the site):
"The chemicals are electrically ignited in a special stove first used for the conclave of 2005, the statement said. The stove sits in the Sistine Chapel next to an older stove in which the ballots are burned; the colored smoke and the smoke from the ballots mix and travel up a long copper flue to the chapel roof, where the smoke is visible from St. Peter’s Square. A resistance wire is used to preheat the flue so it draws properly, and the flue has a fan as a backup to ensure that no smoke enters the chapel." | {
"domain": "chemistry.stackexchange",
"id": 367,
"tags": "organic-chemistry, everyday-chemistry, color"
} |
Algorithm for detection of overlapping between vectors | Question: I am currently developing a math-physics program and I need to calculate whether two vectors overlap one another; one is vertical and the other is horizontal. Is there any fast algorithm to do this? What I have come up with so far involves a lot of repetition. E.g., let's say we have vector V1((0,1),(2,1)) and V2((1,0),(1,2)), where the first parenthesis holds the starting coordinates and the second the coordinates that the vector reaches. I want as a result that they overlap at (1,1).
So far the only idea I came up with is to "expand" each vector into a list of points and then compare the lists, e.g. for V1 its list would be (0,1) (1,1) (2,1)
Answer: If you have two line segments and you want to know if they intersect, see https://en.wikipedia.org/wiki/Line_intersection and https://en.wikipedia.org/wiki/Line_segment_intersection.
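For the special axis-aligned case in the question, a direct bounds check is enough; a sketch (the function name is made up for illustration):

```python
def axis_aligned_intersection(h_start, h_end, v_start, v_end):
    """Intersection point of a horizontal segment (h_*) and a vertical
    segment (v_*), or None if they do not cross."""
    y = h_start[1]  # the horizontal segment's constant y
    x = v_start[0]  # the vertical segment's constant x
    if (min(h_start[0], h_end[0]) <= x <= max(h_start[0], h_end[0])
            and min(v_start[1], v_end[1]) <= y <= max(v_start[1], v_end[1])):
        return (x, y)
    return None
```

This runs in constant time, with no need to expand each segment into a list of points.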
For next time, I'd expect you to do more research on your own. See especially "computational geometry", e.g., a textbook that covers that area. | {
"domain": "cs.stackexchange",
"id": 2453,
"tags": "algorithms"
} |
Should all internal node keys in B+ tree also be in the leaves? | Question: I was reading about B+ tree insertion.
The algorithm takes following form:
Insert the new node as the leaf node.
If the leaf node overflows, split the node and copy the middle element to the parent index node.
If the index node overflows, split that node and move the middle element to the parent index node.
However, adding the new index value in the parent node may cause it, in turn, to split. In fact, all the nodes on the path from a leaf to the root may split when a new value is added to a leaf node. If the root node splits, the tree grows by one level.
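The copy-versus-move distinction in steps 2 and 3 above can be sketched with two hypothetical helpers (node bookkeeping and child pointers omitted):

```python
def split_leaf(keys):
    """Split an overflowing leaf: the middle key is COPIED up,
    so it also remains in the right-hand leaf (B+ tree behavior)."""
    mid = len(keys) // 2
    left, right = keys[:mid], keys[mid:]
    return left, right, right[0]  # separator is copied upward

def split_internal(keys):
    """Split an overflowing internal node: the middle key is MOVED up,
    so it leaves the node entirely."""
    mid = len(keys) // 2
    return keys[:mid], keys[mid + 1:], keys[mid]
```

So immediately after a leaf split, every separator does exist in a leaf; it is only later deletions (or a hand-constructed example) that can break that correspondence.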
Now the book asks to insert 33 in following tree of order 4:
I was wondering how [10,20,30] came to be the root node. Before the first split while forming the above tree, [10,20,30] should have been in some leaf, and in any case they should still be present in some leaf.
In other words, I feel that all internal node keys should also be present in the leaves. However, that's not the case with [10,20,30]. This is also in line with the fact that in a B+ tree all data is present in the leaves, so all keys should be present in the leaves.
Another example on YouTube also has 13 and 30 in the root node but not in any leaf.
Am I wrong with the fact that all internal node keys should also be in the leaves?
Answer: The internal keys come from values also stored in leaves, but if you allow deletions, the value could be deleted from the leaf after it is created and used in the internal node. Deleting the value from a leaf won't change any internal nodes, unless the leaf becomes underfilled. With the insertion you list, it would have the property you are thinking of if there were no deletions.
On the other hand, it is possible that the author wasn't thinking of either the property you name, nor the possibility of deletions, when making the question, and just used 10, 20, 30 as easy separators. For sure, I have written questions with mistakes before, and usually, even with a mistake in the question, it can still be answered properly to see if the student understands the material (insertion in this case). If a student then asked me what your post asks, I would be pleased with their understanding of the material for noticing this property.
"domain": "cs.stackexchange",
"id": 6967,
"tags": "graphs, trees, binary-trees, binary-search-trees"
} |
Changing the values of Model parameters | Question:
Hi, I have the following in my model plugin, shown below. I was wondering how I can change the <reset_frequency> parameter so that it takes input from a ROS node which has a slider bar in it.
<plugin name="particle_shooter_plugin" filename="libparticle_shooter_plugin.so">
<reset_frequency>90.0</reset_frequency>
<x_axis_force>-10000.0</x_axis_force>
<y_axis_force>0</y_axis_force>
<z_axis_force>0.0</z_axis_force>
<x_origin>1.0</x_origin>
<y_origin>0.0</y_origin>
<z_origin>0.5</z_origin>
<random_range>1</random_range>
</plugin>
Originally posted by lucien on Gazebo Answers with karma: 3 on 2020-09-30
Post score: 0
Answer:
Replicate the slider in your plugin. I.e. make the variable a dynamic_reconfigure parameter.
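Either way, the plugin needs a runtime hook. As a rough, untested sketch of the plain-subscriber variant (class name, topic name, and member names are all hypothetical), it might look like:

```cpp
// Hypothetical sketch (untested): let a ROS node with a slider publish a
// std_msgs/Float64 that updates reset_frequency while Gazebo is running.
#include <memory>

#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>
#include <ros/ros.h>
#include <std_msgs/Float64.h>

class ParticleShooterPlugin : public gazebo::ModelPlugin
{
public:
  void Load(gazebo::physics::ModelPtr model, sdf::ElementPtr sdf) override
  {
    if (sdf->HasElement("reset_frequency"))
      this->resetFrequency = sdf->Get<double>("reset_frequency");

    this->nh.reset(new ros::NodeHandle("particle_shooter"));
    this->sub = this->nh->subscribe(
        "reset_frequency", 1, &ParticleShooterPlugin::OnFrequency, this);
  }

private:
  void OnFrequency(const std_msgs::Float64::ConstPtr &msg)
  {
    this->resetFrequency = msg->data;  // read by the shooter's update loop
  }

  double resetFrequency = 90.0;  // default from the SDF snippet above
  std::unique_ptr<ros::NodeHandle> nh;
  ros::Subscriber sub;
};

GZ_REGISTER_MODEL_PLUGIN(ParticleShooterPlugin)
```

The dynamic_reconfigure route works the same way conceptually: the plugin owns the live variable, and an external node changes it at runtime.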
Originally posted by nlamprian with karma: 833 on 2020-09-30
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by lucien on 2020-09-30:
Thank You. I realize I have to do a subscriber in the plugins.
And am following this video:
https://www.youtube.com/watch?v=hcte5r1R7vI&t=326s | {
"domain": "robotics.stackexchange",
"id": 4550,
"tags": "ros, gazebo-model, gazebo-plugin, gazebo-9"
} |
Does the Summation of Rest Masses equal the Total Rest Mass of the System? | Question: I've been told things such as the sum of the rest masses times c^2, or the sum of the relativistic masses times c^2 (I forgot which), were actually equal to things such as the enthalpy of the system rather than the energy.
For a system of $n$ particles, having respective rest masses $m_k$, and the entire system having rest mass $M_0$, is it true that
$$\sum_{k=1}^n m_k = M_0$$
Answer: Short answer: no.
The mass1 of a system depends on the masses of the constituents, their internal motions, and the interactions between them.
Further, it is also not true that the mass of the system is the sum of the so-called "relativistic masses" of the constituents
$$ M \ne \sum_i \gamma_i m_i \;,$$
excepting cases where there is zero net potential energy in the system.
Explanation: The invariant mass of any object or system can be found from the square of the energy-momentum four-vector (times appropriate factors of $c$)
\begin{align}
m^2
&= \frac{1}{c^4}\mathbf{p}^2\\
&= \frac{1}{c^4}\left(E^2 - (\vec{p}c)^2\right) \;,
\end{align}
which is a computation that comes out to the same value for all observers, leading to the "invariant" appellation. The four-momentum of a system is equal to the sum of the four-momenta of the parts:
$$ \mathbf{P} = \sum_i \mathbf{p}_i \;.$$
So expanding $M^2c^4 = \mathbf{P}^2$ we get
\begin{align}
M^2
&= \frac{1}{c^4} \left( \sum_i \mathbf{p}_i \right)^2\\
&= \frac{1}{c^4} \left[ \left( \sum_i E_i\right)^2 - \left( \sum_i \vec{p}_i c\right)^2 \right] \\
&= \frac{1}{c^4} \left[
\left( \sum_i E_i^2 \right)
+ 2\left( \sum_i \sum_{j>i} E_i E_j \right)
- \left( \sum_i \vec{p}_i^2 c^2 \right)
- 2\left( \sum_i \sum_{j>i} \vec{p}_i \cdot \vec{p}_j c^2\right)
\right]\\
&= \frac{1}{c^4} \left[
\left( \sum_i E_i^2 - \vec{p}^2_i c^2 \right)
\right] +
\frac{1}{c^4}\left[
2\left(\sum_i \sum_{j>i} E_i E_j - \vec{p}_i \cdot \vec{p}_j c^2\right)\right]\\
&= \left( \sum_i m_i^2 \right)
+
\frac{1}{c^4}\left[
2\left(\sum_i \sum_{j>i} E_i E_j - \vec{p}_i \cdot \vec{p}_j c^2\right)\right]
\end{align}
Now that last line is equivalent to $\left( \sum_i m_i \right)^2$ only if the final term with the double sum is equal to $2 \sum_i \sum_{j>i} m_i m_j$, which is not generally the case.
Example cases: Because the factor $c^2$ is so absurdly large when expressed in any units suitable for day-to-day physics, the quantities of (kinetic or potential) energy that we encounter on a regular basis are too small to show up easily. But nuclear interactions involve large enough energies to make a measurable difference to the masses, if you have a good enough system for measuring mass (which we do, in the form of mass spectrometers).
The capture of a neutron by a proton to make a deuteron is an example of a case where the mass of the resulting system is smaller than the sum of the parts (because about $2.2\,\mathrm{MeV}$ escapes as photons).
\begin{align}
m_p &= 1.6726 \times 10^{-27}\,\mathrm{kg}\\
m_n &= 1.6749 \times 10^{-27}\,\mathrm{kg}\\
m_\mathrm{D} &= 3.3436 \times 10^{-27}\,\mathrm{kg}\\
&< m_p + m_n \;.
\end{align}
And the fission of large nuclei into two lighter nuclei and a small number of neutrons is a case where the mass of the system is larger than the sum of the masses of the bits. But I will leave running down the high-precision mass measurements for some particular case to you.
1 This answer uses the language and notation of invariant masses. The thing called "rest mass" in the older language is simply called "mass" herein, and there is no equivalent name for "relativistic mass" at all. | {
"domain": "physics.stackexchange",
"id": 47024,
"tags": "special-relativity, mass, conservation-laws"
} |
How to describe the inner curve of a crescent? | Question: What is the equation which describes the inner circle of the crescent that a celestial body displays when view at an angle from its light source, as a function of the crescent-cycle period? For instance, the cycle of Earth's moon?
I know that on a new moon the inner curve is coincident with the circumference, so the inner curve's radius is the moon's radius (let's call that 1 unit) and its displacement from the moon's center is 0 units. After one week (one fourth of the cycle period) the inner curve's radius is infinite, and its center is an infinite distance from the center of the moon. The in-between values are not so clear, though!
Does r=1/cos(2x·π) describe the inner curve's radius (x being the fraction of the period elapsed)? It at least fits the end values. How about determining the distance of the inner curve's center from the center of the moon?
I figure that this is a physics question and not a mathematics question as I am looking to describe a phenomenon that occurs in nature, which is only one of the infinite curves which could be described. If this question is better placed in mathematics.SE or astronomy.SE then it can be moved.
Thanks.
Answer: This is only barely physics, but I'll answer it any way.
First of all I will assume that you are at a great distance from the object so that we don't have to deal with parallax issues. Similarly, I will assume the light source is at a great distance and that the celestial body is a perfect sphere.
In this case, the terminator (the boundary between the lit and unlit portions of the sphere) will always be a circle with the radius of the sphere, centered at the center of the sphere.
You are thus viewing the terminator circle from an angle, and a circle viewed at an angle is an ellipse. The semi-major axis of the ellipse is the radius of the sphere, and the semi-minor axis will be the radius of the circle times the cosine of the angle of rotation of the object (with zero angle assumed at the "new" moon time).
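As a sketch (the function name is made up; this takes the rotation angle to be 2πx for a phase fraction x, matching the question's convention):

```python
import math

def terminator_semi_minor(radius, x):
    """Apparent semi-minor axis of the terminator ellipse.

    x is the fraction of the cycle elapsed (0 = new moon, 0.25 = quarter),
    so the rotation angle is 2*pi*x.
    """
    return radius * math.cos(2 * math.pi * x)
```

At x = 0 this equals the sphere's radius (terminator coincident with the limb), and at x = 0.25 it collapses to zero (the terminator seen edge-on as a straight line), consistent with the end cases in the question.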
I'll leave the writing of the equation up to you. Hope that is helpful. | {
"domain": "physics.stackexchange",
"id": 2319,
"tags": "mathematics, astronomy"
} |
ROS1 ROS2 porting help? | Question:
i have a std::queue defined like this
std::queue<sensor_msgs::PointCloud2ConstPtr> pointCloudBuf;
I wanted to understand how to define this in ROS 2, so I tried the following:
std::queue<sensor_msgs::msg::PointCloud2::ConstPtr> pointCloudBuf;
When trying to push to the queue
void velodyneHandler(const sensor_msgs::msg::PointCloud2::SharedPtr laserCloudMsg)
{
mutex_lock.lock();
pointCloudBuf.push(*laserCloudMsg);
mutex_lock.unlock();
}
I am getting the following error:
no matching function for call to ‘std::queue<std::shared_ptr<const sensor_msgs::msg::PointCloud2_<std::allocator<void> > > >::push(std::__shared_ptr_access<sensor_msgs::msg::PointCloud2_<std::allocator<void> >, __gnu_cxx::_S_atomic, false, false>::element_type&)’
61 | pointCloudBuf.push(*laserCloudMsg);
| ~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~
UPDATE
std::queue<sensor_msgs::msg::PointCloud2::SharedPtr> pointCloudBuf;
This seems to have solved my compilation issues; I don't know if it will break something else down the line.
Anyhow, thank you ROS community!
Originally posted by chrissunny94 on ROS Answers with karma: 142 on 2023-07-03
Post score: 0
Answer:
Try changing your declaration to the following. You've declared the queue with ConstPtr but are using it with SharedPtr.
std::queue<sensor_msgs::msg::PointCloud2::SharedPtr> pointCloudBuf;
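To see why the declaration and the callback must agree, here is a minimal stand-alone reproduction of the fix, using a dummy struct in place of the ROS message so it compiles without ROS:

```cpp
#include <memory>
#include <queue>

// Dummy stand-in for sensor_msgs::msg::PointCloud2, for illustration only.
struct PointCloud2 { int width = 0; };
using CloudSharedPtr = std::shared_ptr<PointCloud2>;

// Queue element type now matches the callback argument type.
std::queue<CloudSharedPtr> pointCloudBuf;

void velodyneHandler(const CloudSharedPtr laserCloudMsg)
{
    pointCloudBuf.push(laserCloudMsg);  // push the pointer, not *laserCloudMsg
}
```

Pushing the shared pointer itself also avoids copying the whole point cloud into the queue, which is usually what you want for large messages.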
Originally posted by Gaurav Gupta with karma: 276 on 2023-07-03
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Mike Scheutzow on 2023-07-03:
@chrissunny94: the problem is that your code has incompatible types for the queue and the callback argument.
Comment by chrissunny94 on 2023-07-04:
@Gaurav Gupta (small world!), thanks a lot. I saw your answer just now.
Comment by chrissunny94 on 2023-07-04:
@Mike Scheutzow, yup, looks like my code has a lot of incompatible types; sorting it out now. Will post my findings soon.
Comment by chrissunny94 on 2023-07-05:
@Gaurav Gupta, yup, making it a shared_ptr seems to have solved the problem. Someone at the office also suggested the same solution.
Comment by Gaurav Gupta on 2023-07-05:
Yup! Glad it is all sorted now :) | {
"domain": "robotics.stackexchange",
"id": 38441,
"tags": "ros"
} |
Finding How to Transform a Plane to Reflect a Trajectory through a Given Coordinate | Question: So, for context, I am trying to analyze how this basketball machine works:
https://www.youtube.com/watch?v=FycDx69px8U
I have a basic understanding of how to calculate the new velocity of the ball after colliding with the backboard (assuming an elastic collision), but how would I approach finding a transform for the plane representing the backboard if I have a coordinate that the resulting trajectory should pass through?
In other words, given an incoming trajectory and a desired resulting trajectory after a collision, how would I find the plane (or normal vector representing that plane) that would create that resulting desired trajectory?
If there isn't a direct mathematical way to solve this, would I just sequence the different planes the trajectory can collide with and find one that fits a given set of parameters?
Answer: If we assume that during the collision the plane only exerts a force on the ball that is perpendicular to the plane itself ($\vec{F} \propto \hat{n}$), then the impulse delivered $\int \vec{F} \, dt = \Delta \vec{p} = m \Delta \vec{v}$ will also be perpendicular to the plane. Thus, for a given $\vec{v}_f$ and $\vec{v}_i$, the normal to the plane $\hat{n}$ must point in the same direction as $\vec{v}_f - \vec{v}_i$.
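As a sketch (hypothetical helper; it assumes the desired outgoing velocity differs from the incoming one, so the difference vector is nonzero):

```python
import math

def required_plane_normal(v_in, v_out):
    """Unit normal of the plane needed so a frictionless bounce turns the
    incoming velocity v_in into the outgoing velocity v_out.

    Follows n-hat parallel to (v_out - v_in) from the impulse argument.
    """
    d = [f - i for f, i in zip(v_out, v_in)]
    length = math.sqrt(sum(c * c for c in d))
    return [c / length for c in d]
```

For instance, a ball arriving with velocity (1, -1, 0) that should leave with (1, 1, 0) requires a normal along +y, i.e. a horizontal plane struck from above.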
Note, however, that it is possible for a surface to exert a force on the ball that is not strictly perpendicular to its surface. This happens, for example, when we throw a ball with a fair amount of "spin". I suspect this can be precluded by assuming that the ball does not have significant angular momentum, but I don't see an easy proof of this. | {
"domain": "physics.stackexchange",
"id": 87881,
"tags": "kinematics, collision, projectile"
} |
Simulating a track driven robot | Question:
I'm trying to simulate the movement of a robot with a rubber track, similar to iRobot's LANdroid. Ideally I would like to represent the robot with URDF and use Gazebo as the simulation environment. I am particularly interested in tasks such as climbing over stairs; the way these robots climb over stairs depends on some approximation of a track being present.
Does anyone have any advice or pointers on how I can model a track driven robot, preferably with Gazebo & URDF?
Currently I am using ODE and some OpenGL visualizations I wrote myself. The tread is modeled as a series of overlapping non-colliding wheels. While this works, it is a very rough approximation and there are un-addressed issues such as determining the torque to apply on each wheel joint based upon the signal sent to the drive motor and the friction each wheel is encountering.
I have also considered modelling the track as a chain of rigid links being driven by the drive wheel. This would be a more accurate model of the tread but I'm concerned with the stability and performance of ODE if I take this approach.
Originally posted by jcleveland on ROS Answers with karma: 1 on 2011-06-27
Post score: 7
Answer:
Now there is official support for tracked robots in Gazebo 9 and 11 using ODE. Check out https://github.com/osrf/gazebo/blob/gazebo11/worlds/tracked_vehicle_simple.world for usage example.
Originally posted by peci1 with karma: 1366 on 2020-04-24
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 5975,
"tags": "gazebo, simulation, urdf, simulator-gazebo"
} |
Including a package dependency in your personnal packages | Question:
I am quite a new ROS user and created my ROS package, which takes a dependency (in my case Eigen).
It took me ages to find out how to build this, with the following lines in my CMakeLists.txt:
#add the eigen dependency
rosbuild_find_ros_package( eigen )
include_directories( ${eigen_PACKAGE_PATH}/include/src )
In the end it seems logical. I don't know if it is due to my limited knowledge of CMake, but I spent some time searching in the documentation without finding any clue about this.
First question: is it documented somewhere? If not, I would like to update the documentation, but I don't know the right place:
http://www.ros.org/wiki/rosbuild/CMakeLists/Examples : adding the use of another package for building, especially the "include" directory (not so easy to find this page from scratch)
http://www.ros.org/wiki/rosbuild/CMakeLists#rosbuild_find_ros_package : adding details about what you have to do after
http://www.ros.org/wiki/eigen/Tutorials : adding the details in each packages tutorial "how to use me ?"
Then, is there a roadmap in ROS for how a package exposes the information needed for it to be used?
In my example, is the include_directories( ${eigen_PACKAGE_PATH}/include/src ) package-specific, or is it a ROS habit?
Isn't it the job of the ROS dependency system to add the includes? For example in the <export> section of the manifest.xml?
Originally posted by Willy Lambert on ROS Answers with karma: 352 on 2011-02-26
Post score: 2
Answer:
Isn't it the job of the ROS dependency system to add the includes? For example in the <export> section of the manifest.xml?
It most certainly is! However, the manifest.xml file is actually where dependencies are specified, not in the CMakeLists.txt file. rosbuild only pulls in the <export> section of manifest.xml files for packages that you list a dependency on (or dependencies of your dependencies).
Your package should automatically have the correct includes for Eigen if you edit your manifest.xml file and add a line like <depend package="eigen" />. This manifest.xml line would make the stuff you did in your CMakeLists to find and include Eigen unnecessary.
Try adding the dependency to your manifest.xml file and removing anything related to finding the Eigen includes from your CMakeLists and see if that fixes things.
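A minimal manifest.xml along those lines might look like this (other tags such as the description are omitted for brevity):

```xml
<package>
  <!-- rosbuild reads this and pulls in Eigen's exported build flags -->
  <depend package="eigen" />
</package>
```

With that in place, the rosbuild_find_ros_package / include_directories lines for Eigen in CMakeLists.txt become unnecessary.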
Originally posted by Eric Perko with karma: 8406 on 2011-02-26
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Willy Lambert on 2011-02-26:
thanks for this.
Comment by Willy Lambert on 2011-02-26:
Thanks for this. At first that is what I did, but with eigen3; as my code was written for Eigen, it failed because the folder Eigen had been renamed to Eigen3. Anyway it helped; I didn't know the includes were added automatically.
"domain": "robotics.stackexchange",
"id": 4880,
"tags": "ros, manifest.xml, cmake, rosbuild"
} |
Assigning Clients a random number | Question: I have a program that creates 100,000 objects of class Client, puts them into an array, and then goes through that array 100 times, each time assigning each Client a different random number through the Rnd() function:
Main sub:
Sub start()
Dim i As Long
Dim j As Long
Dim clientsColl() As Client
ReDim clientsColl(1 To 100000) As Client
For j = 1 To 100000
Set clientsColl(j) = New Client
clientsColl(j).setClientName = "Client_" & j
Application.StatusBar = "Getting client " & j
DoEvents
Next
Dim tempCount As Long
Dim clientCopy As Variant
For i = 1 To 100
tempCount = 0
For Each clientCopy In clientsColl
tempCount = tempCount + 1
clientCopy.generateRandom
'Application.StatusBar = "Calculating " & i & ": " & tempCount & "/" & 100000 '(1)
'DoEvents
Next
Application.StatusBar = "Calculating " & i
DoEvents
Next
MsgBox ("done")
End Sub
Client class:
Option Explicit
Dim clientName As String
Dim randomNumber As Double
Public Sub generateRandom()
randomNumber = Rnd()
End Sub
Public Property Get getClientName()
getClientName = clientName
End Property
Public Property Let setClientName(value As String)
clientName = value
End Property
The problem is, the execution time depends on whether or not line (1) is commented out. If it's executed, the status bar gets renewed, but the execution time is very slow. If it's not executed, the program gets done really fast. Why does this happen?
Answer: Your question is why does it take a long time to update the Application.StatusBar 10,000,000 times? The answer is that you are updating the Application.StatusBar 10,000,000 times.
Using the Timer from TheSpreadSheetGuru, I calculated that it takes roughly 1 sec to do 10,000 updates. So it will take roughly 10,000,000/10,000/60 minutes just to do the updates. That is roughly 16.667 minutes.
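One common workaround (a sketch, not tested) is to keep line (1) but throttle it, so the status bar still shows progress while only being touched a few times per pass:

```vba
'Inside the inner For Each loop, instead of updating on every iteration:
If tempCount Mod 10000 = 0 Then
    Application.StatusBar = "Calculating " & i & ": " & tempCount & "/" & 100000
    DoEvents
End If
```

This reduces the 10,000,000 StatusBar updates to 1,000, which brings the overhead back down to roughly a tenth of a second by the timing above.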
Sub CalculateRunTime_Seconds()
'PURPOSE: Determine how many seconds it took for code to completely run
'SOURCE: www.TheSpreadsheetGuru.com/the-code-vault
Dim x As Long
Dim StartTime As Double
Dim SecondsElapsed As Double
'Remember time when macro starts
StartTime = Timer
'*****************************
'Insert Your Code Here...
For x = 1 To 1000
Application.StatusBar = x
Next
'*****************************
'Determine how many seconds code took to run
SecondsElapsed = Round(Timer - StartTime, 2)
'Notify user in seconds
MsgBox "This code ran successfully in " & SecondsElapsed & " seconds", vbInformation
End Sub | {
"domain": "codereview.stackexchange",
"id": 28787,
"tags": "performance, vba, excel"
} |
Can thermite be lit while mixed into butane? If not, is there a flammable liquid that would work? | Question: I want to create a thermite flamethrower to melt snow, since I live in New England, and since I am asthmatic I struggle to shovel (and it would be cool). So do you have any idea what liquid thermite could be lit in?
Answer: Thermite is a solid-solid reaction that I think would be greatly inhibited by any sort of intermediate. In either case, the smoke, the dust and the copious amounts of nitrogen oxides produced will not leave your asthma any better off than if you were to pick up a shovel.
And then there will likely be other safety-, legal-, environment- and relationship-with-neighbours -related issues that alone would make this a very bad idea. | {
"domain": "chemistry.stackexchange",
"id": 8411,
"tags": "heat"
} |
vision_bleeding not found | Question:
I tried to install "tod_stub", but it failed because it tries to install "vision_bleeding" from a repo which is not found under the specified address. (https://code.ros.org/svn/wg-ros-pkg/branches/trunk_diamondback/stacks/vision_bleeding)
Does anybody know if the repo was moved or deleted or how I could resolve the problem?
thanks
Originally posted by Cyrill on ROS Answers with karma: 1 on 2011-05-26
Post score: 0
Original comments
Comment by Julius on 2011-06-04:
http://svnbook.red-bean.com/en/1.5/svn.advanced.pegrevs.html might be an explanation. There's probably the so-called "Peg revision" that is missing.
Answer:
It's been renamed to vision_future and released into unstable:
https://code.ros.org/svn/ros-pkg/stacks/vision_future/trunk/
Thus, the easiest install path is to:
Switch to ros-unstable instead of ros-diamondback
sudo apt-get install ros-unstable-vision-future
Originally posted by kwc with karma: 12244 on 2011-05-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 5676,
"tags": "ros"
} |