| anchor | positive | source |
|---|---|---|
How did Pauli and Fermi deduce the existence of the neutrino? | Question: From Wikipedia:
The neutrino was postulated first by Wolfgang Pauli in 1930 to explain how beta decay could conserve energy, momentum, and angular momentum (spin). In contrast to Niels Bohr, who proposed a statistical version of the conservation laws to explain the event, Pauli hypothesized an undetected particle that he called a "neutron" in keeping with convention employed for naming both the proton and the electron, which in 1930 were known to be respective products for alpha and beta decay.[6][nb 2][nb 3]
n0 → p+ + e− + νe James Chadwick discovered a much more massive nuclear particle in 1932 and also named it a neutron, leaving two kinds of particles with the same name. Enrico Fermi, who developed the theory of beta decay, coined the term neutrino (the Italian equivalent of "little neutral one") in 1933 as a way to resolve the confusion.[7][nb 4] Fermi's paper, written in 1934, unified Pauli's neutrino with Paul Dirac's positron and Werner Heisenberg's neutron-proton model and gave a solid theoretical basis for future experimental work.
Can you explain why beta decay could not be explained by adding that tiny amount of energy (attributed to the neutrino) to the KE of the emitted electron?
Answer: An electron is a charged particle, so charge conservation would not work, as the neutron has zero charge. In addition, such a particle would have been detected through its interactions, since its energy would be similar to the energy of the other electron seen.
The neutrino was posited as a weakly interacting particle exactly because it was not caught by the detectors, and because energy and momentum conservation would not otherwise work for each event.
Edit after edit of question
Can you explain why beta decay could not be explained by adding that tiny amount of energy (attributed to the neutrino) to the KE of the emitted electron?
It is all about momentum and energy conservation. The neutron mass was known, the proton mass was known and its momentum measured, and the electron mass was known and its momentum measured. It is easy to go to the center-of-mass system, i.e. where the neutron is at rest, for the presumed two-body decay. In the center-of-mass system the proton and the electron should have equal and opposite momenta, a constraint that also fixes their energies to one unique value each: for a two-body decay the electron energy would be the single line $E_e = (m_n^2 + m_e^2 - m_p^2)c^2/(2m_n)$, not a spectrum. Instead the data showed that it was not a two-body decay but a three-body decay, since there was a distribution for the energies and momenta of the proton and the electron. A zero-mass, spin one-half particle balancing momentum and energy solved the problem. | {
"domain": "physics.stackexchange",
"id": 16402,
"tags": "particle-physics, neutrinos"
} |
Installing ROS Melodic in Ubuntu 18.04.2 LTS (Bionic Beaver) 32-bit | Question:
Hello, I have a Raspberry Pi 3B+ with an ARMv7 processor (64-bit). I have followed the ROS Melodic installation instructions from http://wiki.ros.org/melodic/Installation/Ubuntu.
Robotic_arm@Robotic_arm-desktop:~$ sudo sh -c 'echo "deb http://packages.ros.org/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
[sudo] password for Robotic_arm:
Robotic_arm@Robotic_arm-desktop:~$ sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
Executing: /tmp/apt-key-gpghome.HQv1zUHfW7/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
gpg: key F42ED6FBAB17C654: public key "Open Robotics <info@osrfoundation.org>" imported
gpg: Total number processed: 1
gpg: imported: 1
Robotic_arm@Robotic_arm-desktop:~$ sudo apt update
Hit:1 http://ports.ubuntu.com bionic InRelease
Hit:2 http://ppa.launchpad.net/ubuntu-pi-flavour-makers/ppa/ubuntu bionic InRelease
Ign:3 http://packages.ros.org/ubuntu bionic InRelease
Get:4 http://ports.ubuntu.com bionic-updates InRelease [88.7 kB]
Err:5 http://packages.ros.org/ubuntu bionic Release
404 Not Found [IP: 2600:3402:200:227::2 80]
Get:6 http://ports.ubuntu.com bionic-security InRelease [88.7 kB]
Hit:7 http://ports.ubuntu.com bionic-backports InRelease
Hit:8 http://ports.ubuntu.com/ubuntu-ports bionic InRelease
Get:9 http://ports.ubuntu.com bionic-updates/main armhf Packages [490 kB]
Get:10 http://ports.ubuntu.com bionic-updates/universe armhf Packages [800 kB]
Get:11 http://ports.ubuntu.com bionic-updates/universe Translation-en [279 kB]
Reading package lists... Done
E: The repository 'http://packages.ros.org/ubuntu bionic Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Robotic_arm@Robotic_arm-desktop:~$ sudo apt install ros-melodic-desktop-full
[sudo] password for Robotic_arm:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package ros-melodic-desktop-full
Robotic_arm@Robotic_arm-desktop:~$
Can you please let me know how I can overcome this issue?? Thank you very much!
Originally posted by Robotic_arm on ROS Answers with karma: 3 on 2019-06-11
Post score: 0
Original comments
Comment by gvdhoorn on 2019-06-11:
You first write that you're looking to install 32-bit software, then state that your processor is 64-bit. What is it that you actually want to do?
Comment by tfoote on 2019-06-11:
Please edit your question to include the full commands you ran and the complete error message. Also please make sure to use copy and paste not type out error messages as accurate reproduction is important for us to help you debug.
Comment by Robotic_arm on 2019-06-11:
@gvdhoorn: Sorry if I wasn't clear. I have Ubuntu 18.04.2 which is 32 bit. But my raspberry pi has arm processor which is 64 bit.
Comment by gvdhoorn on 2019-06-11:
@Robotic_arm: note @tfoote's answer: you have packages.ros.org/ubuntu, it's packages.ros.org/ros/ubuntu. Note the ros after packages.ros.org there.
Comment by Robotic_arm on 2019-06-11:
@gvdhoorn: yes thank you! I realized it as i was copy pasting the entire list of commands.
Answer:
You appear to have copy and pasted the wrong url from the tutorial.
The repository is at: http://packages.ros.org/ros/ubuntu not http://packages.ros.org/ubuntu
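For reference, the corrected repository setup would look like this (a sketch mirroring the commands from the question, with the `/ros/` path segment restored; run `sudo apt update` afterwards so the fixed repository is picked up):

```shell
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt update
sudo apt install ros-melodic-desktop-full
```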
Originally posted by tfoote with karma: 58457 on 2019-06-11
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Robotic_arm on 2019-06-11:
Thank you very much for your help!!
Comment by tfoote on 2019-06-11:
Please accept the answer using the check mark at the left so others know you have a solution. | {
"domain": "robotics.stackexchange",
"id": 33161,
"tags": "ros, ros-melodic"
} |
Raspberry camera module not working on Raspberry Pi 3 B | Question:
Hi,
I have a Raspberry Pi 3 B module with a Linux-based Raspbian image OS on it. But when I connect the Raspberry Pi camera module v2.1, it does not work. It does not show an image in rqt_image_view on the remote PC.
I am using rosrun raspicam_node raspicam_node command for running camera through ROS. Please guide me if there is some solution.
H/w and s/w specifications:-
Remote pc : Ubuntu 16.04, ROS Kinetic
Raspberry pi 3 B module with Linux based Raspbian Image OS.
Raspberry pi camera v2.1
Thanks
Originally posted by rutujaharidas on ROS Answers with karma: 1 on 2019-07-14
Post score: 0
Answer:
So, off the top of my head:
Have you tried using picamera (https://picamera.readthedocs.io/en/release-1.13/) on the Raspberry Pi itself with the same configuration? It's a good idea to test whether the hardware works with native libraries outside of ROS before you try ROS packages. If the camera isn't working outside of ROS, then there is no way it will work with raspicam_node.
Since you are using a remote PC, you should ensure your ROS multiple-machine setup is working properly: publish a simple float topic on the Raspberry Pi and see if you can echo it from the remote PC.
If steps one and two are working, then before using rqt_image_view on the remote PC it is a good idea to rostopic echo the raw camera feed first and check that the topic is not empty.
Hope this helps.
Originally posted by indraneel with karma: 61 on 2019-07-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by rutujaharidas on 2019-07-14:
Hi,
As you mentioned, I even tried to do it without ROS, using the command raspistill -o image.jpg to capture images, but it still did not start the camera; instead I got an error saying that no camera was detected. I even tried another Raspberry camera module I had, and the error remained the same. So I do not understand whether the camera is defective or whether the camera port on the Raspberry Pi is at fault.
Thanks
Comment by indraneel on 2019-07-14:
Alright, then I am afraid this is not a ROS-related problem; you could use other forums like Stack Exchange to get more accurate help on this issue.
It's hard to say whether the port has an issue or if there is a software problem. Just for the sake of it, you could try this thread for troubleshooting: https://raspberrypi.stackexchange.com/questions/37683/raspberry-pi-camera-rev-1-3-is-not-detected
Hope you solve this problem soon. Good luck.
Comment by billy on 2019-07-15:
Have you included the camera in the Pi Config? It doesn't work out of the box.
Comment by rutujaharidas on 2019-08-26:
I have enabled the camera with raspi-config. It detected the camera with vcgencmd get_camera (it returned supported=1 detected=1). But when I do raspistill -o image.jpg, it gives me an error that the camera is not properly installed and that it failed to create the camera component. Please guide me if there is some solution to this problem. | {
"domain": "robotics.stackexchange",
"id": 33419,
"tags": "ros, ros-kinetic, raspbian"
} |
Representation and gauge transformation of Yang-Mills theory | Question: Let us consider a classical field theory with gauge fields $A_{\mu}^{a}$ and a scalar $\phi^{a}$ such that the Lagrangian is gauge-invariant under the transformation of
the gauge fields $A_{\mu}^{a}$ in the adjoint representation, with dimension $D_{\bf R}$, of the gauge group $SU(N)$, and
the scalar $\phi^{a}$ in the fundamental representation, with dimension $N$, of the gauge group $SU(N).$
Why can we represent $\phi$ as a traceless Hermitian $N \times N$ matrix, so that $\phi = \phi^{a}T^{a}$ where the $T^a$ are the representation matrices in the fundamental representation?
Why can we write down the variation of $\phi$ under a gauge transformation with gauge parameters $\theta^{a}$ as
$$\delta\phi = ig[\theta^{a}T^{a},\phi]$$
and the gauge covariant derivative as
$$D_{\mu}\phi = \partial_{\mu}\phi - igA_{\mu}^{a}[T^{a},\phi]?$$
Answer: I am not a quantum field theorist, but I think you are mixing things up, so I am going to go through some elementary concepts (some of which you might already be familiar with).
The group $\text{SU}(N)$ is a Lie group, i.e. a group which is also a smooth manifold: it is a set of continuum cardinality that can be locally parametrized by $\dim_{\mathbb{R}}\text{SU}(N)$ real parameters, in such a way that two different parametrizations have smooth transition functions and the group operations are smooth.
Such groups always admit an associated Lie-algebra, denoted as $\mathfrak{su}(N)$.
Now, $\text{SU}(N)$ is such a Lie-group, that while it exists "abstractly", it can be seen most naturally as the set of all $N\times N$ sized complex matrices, which satisfy $\Lambda^\dagger\Lambda=1$ and $\det\Lambda=1$. For such matrix Lie-groups, the Lie algebra can be seen as a set of $N\times N$ matrices too. These matrices are tangent vectors at the identity, meaning that if $\gamma:\mathbb{R}\rightarrow\text{SU}(N)$ is a smooth curve that goes through the identity at 0 ($\gamma(0)=1$), then $d\gamma/dt|_{t=0}$ is an element of $\mathfrak{su}(N)$.
We have $\gamma(t)\gamma^\dagger(t)=1$ for all $t$, so $$0=\frac{d}{dt}1= \frac{d}{dt}\gamma\gamma^\dagger|_{t=0}=\frac{d\gamma}{dt}|_{t=0}\gamma^\dagger(0)+\gamma(0)\frac{d\gamma^\dagger}{dt}|_{t=0}=\frac{d\gamma}{dt}|_{t=0}+\frac{d\gamma^\dagger}{dt}|_{t=0}=0, $$ so the elements of the Lie algebra are antihermitian matrices.
This is not the only condition though, because we also have the unit determinant condition.
We can obtain (from say Jacobi's formula) that this condition also implies that $d\gamma/dt|_{t=0}$ has vanishing trace.
Conclusion: The Lie algebra $\mathfrak{su}(N)$ consists of traceless, antihermitian matrices. This set is closed under addition, scalar multiplication and also under commutators (so if $A,B\in\mathfrak{su}(N)$, then $[A,B]\in\mathfrak{su}(N)$).
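As a quick numerical sanity check of this conclusion (a sketch using NumPy with randomly generated matrices, not part of the original answer), one can verify that traceless anti-Hermitian matrices are indeed closed under the commutator:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su_n(n):
    # Build a random traceless anti-Hermitian n x n matrix
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    a = (m - m.conj().T) / 2                   # anti-Hermitian part
    return a - (np.trace(a) / n) * np.eye(n)   # subtract the trace part

A, B = random_su_n(3), random_su_n(3)
C = A @ B - B @ A                              # the commutator [A, B]

assert np.allclose(C, -C.conj().T)             # [A, B] is still anti-Hermitian
assert abs(np.trace(C)) < 1e-12                # [A, B] is still traceless
```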
Now, as convention dictates, we usually take the Lie algebra to consist of Hermitian matrices instead of anti-Hermitian ones. We can do this because we can always convert an anti-Hermitian matrix into a Hermitian one by multiplying by $i$ (and vice versa).
Now, to actually answer your question: that the scalar field $\phi^a$ is in the fundamental representation of $\text{SU}(N)$ means that the index $a$ ranges from 1 to $N$, and if we view the $\text{SU}(N)$ elements as $N\times N$ matrices, then they act on $\phi$ by $\phi^a\mapsto\Lambda^a_{\ b}\phi^b$.
Now let $A=(A^a_{\ b})$ be a matrix in $\mathfrak{su}(N)$, the Lie algebra. The adjoint representation of $\text{SU}(N)$ is when $\text{SU}(N)$ is represented on the Lie algebra itself by $A\mapsto\Lambda A\Lambda^{-1}\equiv \Lambda A\Lambda^\dagger$.
The dimension of $\text{SU}(N)$ is $N^2-1$, and the Lie algebra has the same dimension. Which means that the dimension of the representation space of the fundamental representation is $N$ but the dimension of the representation space of the adjoint representation is $N^2-1$, so you cannot use the same set of indices. Let $A,B,...$ take the values $1,...,N^2-1$.
The scalar field then looks like $\phi^a$ but the gauge field looks like $A^A_\mu$.
1) The scalar field cannot be written as you had written, since the indices $a$ and $A$ in general do not have the same range.
2) I don't know what your source is, but I have never seen a covariant derivative written like that. The point of the covariant derivative is that if you allow point-dependent gauge transformations on $\phi$, i.e. you allow $\phi^a(x)\mapsto\Lambda^a_{\ b}(x)\phi^b(x)$, then the partial derivative $\partial_\mu$ is not a good differential operator on the space of $\phi$-fields, because it is not gauge-covariant.
Instead, if $x^\mu(\lambda)$ is a curve in spacetime, let's assume that the rate of change of $\phi^a$ can be split into two terms, a physical rate of change and a gauge rate of change: $$ \frac{d}{d\lambda}\phi^a=\frac{D}{d\lambda}\phi^a+\frac{\delta}{d\lambda}\phi^a, $$ and since $d\phi^a/d\lambda=\partial_\mu\phi^a\cdot dx^\mu/d\lambda$ by the chain rule, we also have $$ \frac{d}{d\lambda}\phi^a=\frac{\partial}{\partial x^\mu}\phi^a\frac{dx^\mu}{d\lambda}=\frac{dx^\mu}{d\lambda}\left(D_\mu\phi^a+\delta_\mu\phi^a\right). $$ It can be shown that the difference of two differential operators that act on real functions the same way is a linear transformation, and since we want the physical rate of change $D_\mu$ to act as a differential operator, $\delta_\mu$ must act as a linear transformation, so we have $\delta_\mu\phi^a=-\mathcal{A}_{\mu\ \ b}^{\ a}\phi^b$ for some linear transformation valued vector field $\mathcal{A}_{\mu\ \ b}^{\ a}$.
So we have $D_\mu\phi^a=\partial_\mu\phi^a+\mathcal{A}_{\mu\ \ b}^{\ a}\phi^b$.
But $\mathcal{A}$ must be $\mathfrak{su}(N)$-valued. Why? We say that along a curve $x^\mu(\lambda)$ the field $\phi^a$ is parallel transported if $d\phi^a/d\lambda=-\mathcal{A}_{\mu\ \ b}^{\ a}\phi^b\frac{dx^\mu}{d\lambda}$, i.e. the physical rate of change is zero.
The field $\phi^a$ at different points are incomparable, because of point-dependent gauge transformations, but let us take the curve $x^\mu(\lambda)$ to be a closed loop, and let us take a one-parameter family of such loops, $x^\mu(\lambda,\epsilon)$, where $\epsilon$ is a smooth parameter and for $\epsilon=0$, the loop is the trivial loop that stays at a point $p$ and doesn't go anywhere (so $\epsilon$ is a smallness parameter of the loop).
If we parallel transport $\phi^a$ along this loop, then the change in $\phi^a$ must be a gauge transformation, since the field did not change physically, so the difference is $\Lambda(x(\epsilon,\lambda))^a_{\ b}\phi(x)^b-\phi^a(x)$ where $\lambda$ is the endpoint parameter of the loop. The gauge transformation for small $\epsilon$s is then $\Lambda(x(\epsilon,\lambda))=1+\epsilon B(\lambda,\epsilon)$ where $B$ is an element of the Lie algebra, since it is infinitesimally close to the identity.
Now calculating the rate of change of $\phi$ gives $$ \frac{d}{d\lambda}\phi^a=\frac{d}{d\lambda}(1+\epsilon B^a_{\ b}\phi^b)=\epsilon \frac{d}{d\lambda}B^a_{\ b}\phi^b=-\mathcal{A}_{\mu\ \ b}^{\ a}\phi^b \frac{dx^\mu}{d\lambda} $$ and from this we can see that $\mathcal{A}$ must be Lie algebra valued. Then, to switch to hermitean matrices instead of antihermitean ones, we define $igA_\mu=\mathcal{A}_\mu$, and if $\{T_A\}_{A=1}^{N^2-1}$ is a basis for the Lie algebra $\mathfrak{su}(N)$, we have $$ D_\mu\phi=\partial_\mu\phi+igA_\mu^AT_A\phi. $$ | {
"domain": "physics.stackexchange",
"id": 39476,
"tags": "gauge-theory, representation-theory, lie-algebra, yang-mills"
} |
What is the connection between Newton's Shell Theorem and Bertrand's Theorem? | Question: The most general force that can fulfil the first part of Newton's Shell theorem (any spherically symmetric body affects external bodies as if its mass were concentrated at its centre) is an inverse square force $F(r) \sim r^{-2}$, a harmonic oscillator force $F(r) \sim r$, or the sum of both types of forces $F(r) \sim A r + B r^{-2}$.
Interestingly these two types of forces (inverse square and harmonic oscillator) are also the only two types of forces which fulfil Bertrand's theorem:
Among central force potentials with bound orbits, there are only two
types of central force potentials with the property that all bound
orbits are also closed orbits, the inverse-square force potential and the
harmonic oscillator potential.
Both theorems seem to deal with completely different problems (closed bound orbits vs. the effect of spherical bodies on external bodies). But because the solutions to both problems are the same types of forces, I wonder if there might be a deeper connection between these two theorems. What could be the connection between Newton's Shell theorem and Bertrand's theorem?
Answer: TL;DR: It is a coincidence.
Firstly, the power laws don't match for $n\neq 3$ spatial dimensions:
On one hand, Bertrand's theorem is confined to a 2D orbit plane (due to angular momentum conservation), and doesn't depend on the ambient spatial dimension $n$.
On the other hand, Newton's shell theorem is tied to Gauss' law. Gauss surfaces are hypersurfaces of dimension $n-1$.
Secondly, even if we restrict to $n=3$ spatial dimensions, the solutions are different:
On one hand, Bertrand's theorem only works for a $1/r^2$ force law and Hooke's law separately but not for non-trivial linear combinations thereof.
On the other hand, the converse Newton's shell theorem also works for linear combinations thereof.
Thirdly, the known proofs of Bertrand's theorem are longer and the requirement of closed orbits leads to a rationality condition, which has no counterpart in the converse Newton's shell theorem. | {
"domain": "physics.stackexchange",
"id": 40615,
"tags": "classical-mechanics, newtonian-gravity, orbital-motion, symmetry, harmonic-oscillator"
} |
Region_tlr_ssd doesn't preview on rviz (ImageViewerPlugin]) and no signal | Question:
Hi guys,
I am trying to turn on traffic light recognition using region_tlr_ssd, but it doesn't preview any result in rviz. I followed the ssdcaffe installation from [https://autoware.readthedocs.io/en/feature-documentation_rtd/DevelopersGuide/PackagesAPI/detection/region_tlr_ssd.html#] and the installation and compilation completed without any error.
I used the pretrained model mentioned in the docs and saved it in: Autoware/ros/src/computing/perception/detection/trafficlight_recognizer
and the model has loaded but I get the following message:
Ignoring source layer data
Ignoring source layer data_data_0_split
Ignoring source layer mbox_loss
The steps as follow:
initialized localization by run the TF, vehicle model from setup
run point cloud , vector map, and TF
launched velodyne from sensing and turn on the camera mindvision, launched the camera calibration.
launched ndt_matching, feat_proj, and region_tlr_ssd
opened the rviz and added new panel and chose tlr_superimpose topic.
System Specification
Ubuntu 16.04
Autoware 1.7
Cuda 9
ROS Kinetic
Thank you
Originally posted by Eng-Mo on ROS Answers with karma: 16 on 2019-05-06
Post score: 0
Original comments
Comment by Thiago de Borba on 2020-11-30:
Hi,
Could you solve this problem?
Answer:
Hi guys,
I mostly know the problem now: the problem is in the vector map. The vector map has to be very accurate, otherwise in most cases it will never work. The one used in the example was made by a professional company, whereas I am using a real car in a real environment and don't have an accurate vector map that includes all the road components.
Cheers.
Originally posted by Eng-Mo with karma: 16 on 2019-05-28
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 32979,
"tags": "rviz, ros-kinetic"
} |
What percentage of habitable-zone planets are detectable by transit? | Question: I realize that the probability of detecting a planet by transit depends on the size of the star, the size of the planet's orbit, and the size of the planet; and ranges from ~10% to a small fraction of one percent.
However, the habitable zone of a star is a function of the star's brightness, and thus at least statistically connects the size of the star to the size of a planet's orbit if the planet is assumed to be in the habitable zone. There is also a known distribution of the relative number of stars of different sizes.
With all that put together, can we form a reasonable estimate of the percentage of habitable-zone planets that can be detected by transit methods? The overall percentage has to be between 10% and about 0.1%, which suggests that if we are able to detect, say, 5 habitable-zone planets within 100 light years, there are probably actually at least 50.
Answer: Back of the envelope time.
First, we have to assume perfect data, so the only factor at play here is whether there is a geometric eclipse or not. Of course if you have poorer data then you will miss some planets because they are too small. i.e. We are looking for the fraction that can be detected in principle.
Let's assume the planets are small enough that their size does not really influence the transit probability, which is given by $\sim R_*/a$, where $a$ is the semi-major axis.
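(As a one-line sketch of where the $\sim R_*/a$ probability comes from: a transit is seen when the orbital inclination $i$ to the line of sight satisfies $|\cos i| \lesssim R_*/a$, and for randomly oriented orbits $\cos i$ is uniformly distributed, so the geometric transit probability is $$P_{\rm transit} \simeq \frac{R_*}{a}\ .$$)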
Let's assume circular orbits.
Let's assume a bare planet with no albedo so that the equilibrium temperature is given by
$$ T_{\rm eq} = T_* \sqrt{\frac{R_*}{2a}}\ .$$
Let's assume that a typical star in the Galaxy is an M-dwarf with a temperature $T_* \simeq 3500$ K and a radius of $R_* = 0.5 R_{\odot}$, and let's assume that the likelihood of planet occurrence is independent of stellar mass, so that the properties of an M-dwarf can be assumed for statistical purposes (in practice the answer will depend on what kind of star you are considering).
Let's assume that a habitable planet needs the equilibrium temperatures to be between 273K and 350K (arbitrary I know, and ignores the issue of atmospheres). The range of $a$ for this temperature range, around our fiducial M-dwarf, is between $50R_{\odot}$ and $82R_{\odot}$, with a probability of detecting a transit of between 0.6-1.0%.
So that is my answer 0.6-1%
The probability for higher mass stars is smaller and there are fewer of them. That is because although they are larger, the planets have to be much further away to be in the habitable zone (e.g. the transit probability for Earth is 0.4%). The main uncertainty is the probability of close-in planet occurrence for very low-mass stars where the transit probability can be much higher even though there are fewer host objects. | {
"domain": "astronomy.stackexchange",
"id": 4942,
"tags": "planetary-transits, habitable-zone"
} |
how install robot-pose-ekf kinetic | Question:
Hi,
how do I install the robot-pose-ekf package in Kinetic?
An error occurs with the apt-get install command:
After this operation, 2.653 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Err:1 http://packages.ros.org/ros/ubuntu xenial/main amd64 ros-kinetic-bfl amd64 0.7.0-2xenial-20180809-134309-0800
404 Not Found [IP: 64.50.233.100 80]
Err:2 http://packages.ros.org/ros/ubuntu xenial/main amd64 ros-kinetic-robot-pose-ekf amd64 1.14.4-0xenial-20190320-162431-0800
404 Not Found [IP: 64.50.233.100 80]
E: Failed to fetch http://packages.ros.org/ros/ubuntu/pool/main/r/ros-kinetic-bfl/ros-kinetic-bfl_0.7.0-2xenial-20180809-134309-0800_amd64.deb 404 Not Found [IP: 64.50.233.100 80]
E: Failed to fetch http://packages.ros.org/ros/ubuntu/pool/main/r/ros-kinetic-robot-pose-ekf/ros-kinetic-robot-pose-ekf_1.14.4-0xenial-20190320-162431-0800_amd64.deb 404 Not Found [IP: 64.50.233.100 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
I've already run the apt-get update command...
Originally posted by mateusguilherme on ROS Answers with karma: 125 on 2019-06-26
Post score: 0
Answer:
It is very likely you're running into the effects of #q325039.
Please follow the instructions there and try again.
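In case the linked question is unavailable: the usual cause was a ROS GPG key change in 2019, and the fix was to refresh the key and the package indices before retrying (a sketch; the key ID is the same one shown in the Melodic install session elsewhere in this collection):

```shell
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
sudo apt update
sudo apt-get install ros-kinetic-robot-pose-ekf
```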
Originally posted by gvdhoorn with karma: 86574 on 2019-06-26
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2019-06-26:
PS: this is not about robot-pose-ekf, but a problem with keys not matching, leading to apt not being able to update the indices, leading to apt trying to download old versions of packages that are not there any more. | {
"domain": "robotics.stackexchange",
"id": 33267,
"tags": "navigation, ros-kinetic, robot-pose-ekf"
} |
Confusion in n-factor calculation | Question: How do we calculate the equivalent weight of a compound, in a case where certain fraction of an element of the compound is getting reduced, while the other fraction is unaffected (no change in oxidation state)? For example, consider the following reaction:
$$\ce{Zn + K4[Fe(CN)6] -> K2Zn3[Fe(CN)6]2 + K}$$
I would like to calculate the equivalent weight of potassium ferrocyanide, given its molecular weight $M$. Here some potassium gets reduced from oxidation state (I) to (0).
Answer: This should do your work:
$$\ce{3Zn + 2K4[Fe(CN)6] -> K2Zn3[Fe(CN)6]2 + 6K}$$
Now you can calculate the n factor.
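A worked sketch of the n-factor, assuming the balanced equation above: each Zn goes from 0 to +2 (loses two electrons) and each liberated K goes from +1 to 0 (gains one electron), so $$3\,\ce{Zn} \to 3\,\ce{Zn^2+} + 6e^-, \qquad 6\,\ce{K+} + 6e^- \to 6\,\ce{K}\ .$$ Six electrons are transferred per 2 mol of $\ce{K4[Fe(CN)6]}$, i.e. $n = 3$ per mole, so the equivalent weight is $M/3$.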
On a second thought Zn may be ionic with +2 Oxidation state releasing K+. | {
"domain": "chemistry.stackexchange",
"id": 9269,
"tags": "inorganic-chemistry, stoichiometry, coordination-compounds, mole"
} |
What quantum gate is XNOR equivalent to? | Question:
The standard way to implement a reversible XOR gate is by means of a controlled-NOT gate or CNOT; this is the "standard quantum XOR operation". Physics.Stackexchange
Is there a "standard quantum XNOR operation"?
The XNOR gate (sometimes ENOR, EXNOR or NXOR and pronounced as Exclusive NOR) is a digital logic gate whose function is the logical complement of the exclusive OR (XOR) gate. Wikipedia
Alternatively, what is the logical complement of the CNOT gate?
Answer: There is no "standard" method to implement XNOR, but it can be logically obtained by attaching a NOT gate (often called an X gate in quantum computing) to a logical XOR (which you know is implemented using CNOT). The X gate is applied to the target qubit of the CNOT.
To answer your question more directly, there is no standard "quantum gate" that is equivalent to XNOR. The best way to implement XNOR in a quantum circuit is with a CNOT and an X on the second qubit.
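A quick classical truth-table check (bit arithmetic only, not a full quantum simulation) that CNOT followed by X on the target reproduces XNOR on computational-basis states:

```python
def cnot(control, target):
    # CNOT flips the target iff the control is 1: target -> target XOR control
    return control, target ^ control

def x_gate(bit):
    # X (NOT) gate flips the bit
    return bit ^ 1

for c in (0, 1):
    for t in (0, 1):
        _, t_xor = cnot(c, t)      # target now holds c XOR t
        t_xnor = x_gate(t_xor)     # X turns XOR into XNOR
        assert t_xnor == (1 if c == t else 0)
```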
The reason why {CNOT,X} can give you a logical XNOR was explained in this answer to your own question 3.5 months ago. | {
"domain": "quantumcomputing.stackexchange",
"id": 387,
"tags": "gate-synthesis, universal-gates, quantum-gate, foundations"
} |
how to simulate multipath fading? | Question: I want to simulate a multipath fading channel; different amplitudes and delays are given. I want to implement the delay using padding. I read that the different values for delay, like [0 .6 3.1 ...], represent the delays between the first and the last signal arrivals? Given the number of paths, how do I simulate it?
regards
Answer: @MohammedFatehy If you put up -exactly- what you currently know and have, we can help you more.
Generally speaking, let us say you have a signal x[n], and let's say your sampling rate is 1 Hz, so you take one sample every second.
Now you want to construct a channel for multipath. Right off the bat, your multipath channel is going to be an FIR filter btw. Let us say someone tells you, "your first echo is going to be 5 seconds away, and attenuates the amplitude by 50%. The second echo is going to be 8 seconds away, and attenuates the amplitude by 70%".
Now, your channel is going to be simply a vector, (call it channel), such that: channel = [1 0 0 0 0 0.5 0 0 0.3]. (You have the 1 in the beginning, because that is the 'line of sight' co-efficient. That is, the signal without any echos).
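A minimal sketch of that construction (assuming NumPy; the delays and gains are the hypothetical values from the example above):

```python
import numpy as np

fs = 1.0                       # sampling rate: one sample per second
delays = [0, 5, 8]             # path delays in samples (line of sight, echo 1, echo 2)
gains = [1.0, 0.5, 0.3]        # corresponding path amplitudes

# Build the FIR channel impulse response: one tap at each path delay
channel = np.zeros(max(delays) + 1)
for d, g in zip(delays, gains):
    channel[d] = g
# channel is now [1, 0, 0, 0, 0, 0.5, 0, 0, 0.3]

x = np.ones(20)                # some example signal
y = np.convolve(x, channel)    # received signal = signal filtered by the channel
```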
Now, you simply filter your signal x[n] with channel, and you have a multipath response. Does that make sense? | {
"domain": "dsp.stackexchange",
"id": 1012,
"tags": "multipath"
} |
Confusions regarding the Lorentz force | Question:
Why isn't the magnetic force doing any work? I know it's a case similar to the centripetal force, but can someone give me the best possible answer, not the dot-product analogy but a more rigorous mathematical proof, together with a useful analogy?
Why doesn't the charge speed up or slow down because of the magnetic force? If the magnetic field acted like a potential field, then the charge's velocity should change; e.g. in projectile motion, when analyzed in a potential field, the independent axes of motion become dependent on each other.
Answer: I think the work is $\int (\int\rho(E+\frac{\partial r}{\partial t}\times B)\cdot dr)dV=\int( \int\rho E\cdot dr)dV$ since (if I understand correctly) dr points in the same direction as $\frac{\partial r}{\partial t}$ and the cross product makes the resulting vector normal to dr (and B too for that matter). | {
"domain": "physics.stackexchange",
"id": 33317,
"tags": "electromagnetism, magnetic-fields, lorentz-symmetry, magnetic-moment"
} |
Minesweeper in C++ | Question: I would like general ideas on improvement for this implementation of Minesweeper I wrote in C++. I am not really good in OOP yet, so I want ideas about how I can refactor the code by adding objects (or structs) without it getting too bulky.
/*Minesweeper C++
Written by:-Teodor D.*/
#include<iostream>
#include<time.h>
#include<stdlib.h>
#include<stdio.h>
#include<iomanip>
/*Rules:
The player enters 'o' , then enters value of i and j to open cell[i][j].
Enter 'f' ,then enter value of i and j to place a flag on cell[i][j] */
using namespace std;
void reveal(int, int); /// reveals a cell with given coordinates
void create_mine_positions();
void cell_number(int, int); //increases the number of a cell by 1
void create_table(); //creates the game table
void open_cell(); // opens a cell
void game();
void print_table(char arr[10][10]); // prints the game table
char table[10][10]; //the game table visible to the player
char table_mine_positions[10][10]; //table with the positions of the mines and the number of each cell
char symbol; //the input symbol, it can be 'o' or 'f'
int flag_counter=0;
int mines_flagged_counter=0;
bool end_game_lose=false;
time_t time_since_epoch = time(0);
time_t game_time;
void cell_number(int i,int j)
{
if(i>=0&&i<10&&j>=0&&j<10&&table_mine_positions[i][j]!='X')
table_mine_positions[i][j]++;
}
void create_mine_positions()
{
int counter=0;
srand(time(NULL));
for(int i=0;i<10;i++)
for(int j=0;j<10;j++)
table_mine_positions[i][j]='0';
int i=0;
int j=0;
while(counter<10)
{
int i=rand()%10;
int j=rand()%10;
if(table_mine_positions[i][j]=='0'){
table_mine_positions[i][j]='X';
cell_number(i-1,j);
cell_number(i+1,j);
cell_number(i,j-1);
cell_number(i,j+1);
cell_number(i-1,j-1);
cell_number(i-1,j+1);
cell_number(i+1,j-1);
cell_number(i+1,j+1);
counter++;
}
}
}
void create_table()
{
for(int i=0;i<10;i++)
for(int j=0;j<10;j++)
table[i][j]='*';
}
void print_table(char arr[10][10])
{
cout<<" ";
for(int i=0;i<10;i++)
cout<<setw(3)<<i;
cout<<endl<<" ";
for(int i=0;i<32;i++)
cout<<"_";
cout<<endl;
for(int i=0;i<10;i++){
cout<<setw(3)<<i<<"|";
for(int j=0;j<10;j++)
cout<<setw(3)<<arr[i][j];
cout<<endl;
}
}
void open_cell()
{
int i,j;
do
cin>>i>>j;
while(i<0||i>9||j<0||j>9);
if(table_mine_positions[i][j]=='X')
{
table[i][j]='X';
end_game_lose=true;
for(int i=0;i<10;i++)
for(int j=0;j<10;j++)
if(table_mine_positions[i][j]=='X')
table[i][j]='X';
}
else
reveal(i,j);
}
void place_or_remove_flag()
{
int i,j;
do
cin>>i>>j;
while(i<0||i>9||j<0||j>9);
if (table[i][j]=='*')
{
table[i][j]='F';
flag_counter++;
if(table_mine_positions[i][j]=='X')
mines_flagged_counter++;
}
else if (table[i][j]=='F')
{
table[i][j]='*';
flag_counter--;
if(table_mine_positions[i][j]=='X')
mines_flagged_counter--;
}
}
void input_symbol()
{
cin>>symbol;
switch (symbol){
case 'o' : open_cell(); break;
case 'f' : place_or_remove_flag(); break;
default : input_symbol();
}
}
void reveal(int i,int j)
{
if (i>=0&&i<10&&j>=0&&j<10&&table[i][j]=='*'&&table_mine_positions[i][j]!='X') //check bounds before indexing
{
table[i][j]=table_mine_positions[i][j];
if(table_mine_positions[i][j]=='0')
{
reveal(i,j-1);
reveal(i,j+1);
reveal(i-1,j-1);
reveal(i+1,j-1);
reveal(i+1,j+1);
reveal(i-1,j+1);
reveal(i-1,j);
reveal(i+1,j);
}
}
}
bool end_game_win_check()
{
if(flag_counter==10&&mines_flagged_counter==10)
return 1;
else
return 0;
}
void game()
{
create_table();
create_mine_positions();
while(!end_game_lose&&!end_game_win_check())
{
game_time=time(0);
print_table(table);
cout<<endl<<"Flags:"<<flag_counter<<endl;
cout<<"Time:"<<game_time-time_since_epoch<<endl;
input_symbol();
}
if(end_game_lose){
print_table(table);
cout<<endl<<"GAME OVER"<<endl;
}
if(end_game_win_check()){
cout<<"Time to complete:"<<game_time-time_since_epoch<<endl;
cout<<endl<<"YOU WIN!"<<endl;
}
}
int main()
{
cout
<<"Rules:"
<<endl<<"Enter 'o' , then enter value of i and j to open cell[i][j]."
<<endl<<"Enter 'f' ,then enter value of i and j to place "
<<"or remove flag on cell [i][j]."
<<endl<<endl;
game();
return 0;
}
Answer: Some initial thoughts...
Consider labelling your axes, or complying with a known standard. When I'm looking at coordinates, I'm thinking (x,y) for horizontal, then vertical. You're asking for i,j coordinates, for vertical, then horizontal. This is likely to be a bit confusing to the user the first time they play. You're also using i,j in your code; again, I'd tend to use x,y and have them apply in the expected way.
Your formatting is quite erratic, if it looks like that in your IDE, I'd want to fix it. Consistent bracing/indentation makes code a lot easier to read. If it's OK in your development environment, then usually you'd just copy and paste the whole of your code into the question, select it and click the code {} icon.
It's generally considered good practice to brace your statements. Consider this code:
do
cin>>i>>j;
while(i<0||i>9||j<0||j>9);
It's perfectly valid, however, with no indentation or braces at first glance it looks a bit like an infinite while loop.
Users are awfully unreliable, even those with the best intentions will eventually make a mistake. What happens if instead of putting 'o 3 4', the user enters 'o e 4' by mistake? The loop above is unable to resolve the problem, so loops forever. Doing something like this (untested) would help to resolve the bad input, although obviously you might want to prompt the user again...:
do {
cin >> i >> j;
if (cin.fail())
{
cin.clear();
cin.ignore();
}
}
while (i<0 || i>9 || j<0 || j>9);
end_game_win_check - At the moment, for the user to win, they have to have flagged all ten of the mines. I sometimes used to play minesweeper by not flagging any of the mines and just trying to clear all the squares that didn't have mines on them. It seems like if the user has cleared every cell that doesn't have a mine, they should also have won...
OO - As you've said, you've not really used any classes or structs. I would suggest that a good starting point would be to create a class and move your global variables into the class as members. Move your existing functions into the class as member functions. Then consider which functions actually need to be public and which can be private. The next step might be to try and extract the user interaction, particularly input out of the class, so that you're exposing operations to open_cell(x,y) where x+y have come from another class that is responsible for asking the user for moves. | {
"domain": "codereview.stackexchange",
"id": 22037,
"tags": "c++, minesweeper"
} |
Resource for scientific maps? | Question: I need an administrative division map of England for a motivation for a scientific workshop. I don't want to cite Wikipedia for this. Sadly I can't find any official resources and I don't really know which sites are generally trusted or scientific.
Any tips? I hope this is the correct forum for this question.
Answer: I usually refer to two free of charge sources for administrative division maps:
for Europe, as in your case (I hope they still hold the maps of UK even after Brexit...) you can use the official NUTS - Nomenclature of territorial units for statistics maps from EUROSTAT available here;
EUROSTAT also has a regularly updated dataset of the official boundaries for the whole world here (latest update 2020, but I don't remember how far down towards the finer statistical territorial units it goes);
DIVA GIS repository: administrative maps of every country in the world (not sure how up to date this is), available here.
I'm sure there will be more datasets like these, but these should fit your purposes. | {
"domain": "earthscience.stackexchange",
"id": 2119,
"tags": "geography, mapping"
} |
How can we physically determine the metric in SR? How can we physically determine a coordinate system where $g$ is diagonalized? | Question: Here I'll use theoretical units with $c=1$. Suppose we are given two coordinate systems $(t, x)$ and $(\tilde{t}, \tilde{x})$, and suppose they are related by $t = \tilde{t} - \tilde{x}/2$ and $x = \tilde{x}$. How can we tell which one has its time axis perpendicular to its spatial axes?
Equivalently, how can I tell that the metric $g$ in the $(t, x)$-coordinates is of the form $g = dt^{2}-dx^{2}$ and not $g = dt^{2} - (3/4)dx^{2} + dt\; dx$?
Is there any sequence of physical experiments I could do to select the "correct coordinate system" where the metric is diagonalized? Equivalently how can I physically deduce the metric in a given coordinate system? Is it even possible to fully determine the metric?
Answer: Assume we are working in special relativity with $D+1$ dimensional spacetime. Suppose we have an inertial coordinate system $(x^{\mu})$, so objects that don't interact with anything move in uniform motion according to these coordinates. If we can determine another coordinate system where the metric $g$ is known, and if we know how to transform between the old and new coordinate systems, then we can determine $g$ in the original coordinates.
Since the metric is symmetric (i.e. $g_{\mu\nu} = g_{\nu\mu}$), there are
$$ \binom{D+1}{2}+(D+1) = \frac{(D+1)(D+2)}{2} $$
many degrees of freedom for the metric. We can consider the following parts of $g$:
$g_{00}$
$g_{ii}$ and $g_{i0}$
$g_{ij}$ where $i < j$.
The first bullet describes only one component, the second bullet describes $2D$ many components, and the last bullet describes $\binom{D}{2} = D(D-1)/2$ many components. In total, we describe
$$ 1 + 2D + \frac{D(D-1)}{2} = \frac{(D+1)(D+2)}{2} $$
many components, which is exactly the number of degrees of freedom of $g$.
For each component, we will either transform coordinates so that the component is eliminated, or we will set up a system of equations that will allow us to determine the values of components.
We will proceed in three steps.
Part 1. Let $g_{S} = g_{ij}dx^{i}dx^{j}$ be the spatial part of $g$. We will find a new coordinate system in which $g_{S}$ is diagonalized. To do this we will exploit the Lorentz force law as follows. Send a stream of electrons in any direction. By applying an (approximately) uniform magnetic field to the electron beam in various ways, we are able to find/construct spatial axes that are orthogonal to one another according to $g_{S}$. This gives us a new system $(y^{i})$ of coordinates in which
$$ g_{S} = g'_{11}dy^{1}dy^{1} + \cdots + g'_{DD}dy^{D}dy^{D} $$ for some reals $g'_{11}, \ldots, g'_{DD}$.
By transforming from $(x^{\mu}) = (x^{0}, x^{i})$ to $(y^{\mu}) = (x^{0}, y^{i})$, we obtain $g = g'_{\mu\nu}dy^{\mu}dy^{\nu}$ where $g'_{ij} = 0$ for all $i\ne j$.
The components $g'_{ij}$ with $i < j$ are determined to be zero.
This leaves us with the remaining $2D+1$ components to be determined.
Part 2. To determine the rest of the coefficients, we proceed as follows. We assume that the two-way speed of light is always $c=1$, but for the sake of generality we won't assume a fixed one-way speed of light in coordinates $(y^{\mu})$.
The one-way speed of light in a direction can be found by sending a light signal from the origin of our coordinates to a clock at a distance $L$ from the origin (according to our coordinates). Then we take
$$ \frac{L}{(\text{receiver clock at reception}) - (\text{transmitter clock at transmission})}, $$
and this will be the one-way speed of light according to the $(y^{\mu})$-coordinates. Hence the one-way speed of light in any direction is a known quantity.
For each index $i$ where $1\le i\le D$, we let $c_{+}^{i} > 0$ be the one-way speed of light in direction $+y^{i}$ and let $c_{-}^{i} < 0$ be the (negative of the) one-way speed of light in direction $-y^{i}$. Let $1\le k\le D$ and consider shooting two light beams in the $+y^{k}$ and the $-y^{k}$ directions.
For light beams going to the $\pm y^{k}$ directions,
$$ dy^{k}/c_{\pm}^{k} = dy^{0} \qquad\text{ and }\qquad dy^{i} = 0 \;\text{ for }\; i\ne k. $$
If $c_{+}^{k}$ is infinite, then any fraction with an infinite denominator is interpreted to equal zero.
Likewise for $c_{-}^{k}$.
Knowing that $ds^{2} = 0$ for null geodesics, we find
\begin{align*}
0 &= g'_{\mu\nu}dy^{\mu}dy^{\nu} \\[1.0ex]
&= g'_{00}dy^{0}dy^{0} + g'_{k0}dy^{0}dy^{k} + g'_{0k}dy^{k}dy^{0} + g'_{kk}dy^{k}dy^{k} \\[1.0ex]
&= \left[ g'_{00}/(c_{\pm}^{k})^{2} + 2g'_{k0}/c_{\pm}^{k} + g'_{kk} \right] dy^{k}dy^{k}.
\end{align*}
Since $dy^{k}$ is nonzero for light beams moving in the $\pm y^{k}$ directions, we obtain
\begin{align*}
0 &= g'_{00}/(c_{+}^{k})^{2} + 2g'_{k0}/c_{+}^{k} + g'_{kk}, \\
0 &= g'_{00}/(c_{-}^{k})^{2} + 2g'_{k0}/c_{-}^{k} + g'_{kk}
\end{align*}
for $1\le k\le D$.
This provides $2D$ equations that are linear in the metric components.
Part 3. Lastly, consider the time interval from $y^{0} = 0$ to $y^{0} = 1$, and set $g'_{00}$ to be the negative square of the time $T$ elapsed there, so that $g'_{00} = -T^{2}$.
From parts 2 and 3, we obtain $2D+1$ many equations with $2D+1$ components to solve for. Without proof, I claim this is a solvable system of equations, and so all components $g'_{\mu\nu}$ are known in the $(y^{\mu})$-coordinates.
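As a sanity check of parts 2 and 3, here is a small numeric sketch for $D=1$ using the question's tilted coordinates $(\tilde t,\tilde x)$. The input values are hypothetical measurements: the light cones $x=\pm t$ become $\tilde x = \tfrac{2}{3}\tilde t$ and $\tilde x = -2\tilde t$ under the question's change of coordinates, so the one-way speeds are $c_+ = 2/3$ and $c_- = -2$, and a resting clock shows $T=1$ between $\tilde t = 0$ and $\tilde t = 1$. Solving the resulting linear system recovers $g = -d\tilde t^2 + d\tilde t\, d\tilde x + \tfrac{3}{4} d\tilde x^2$, which is the question's non-diagonal metric up to the overall sign convention used here:

```python
from fractions import Fraction as F

# Hypothetical measured inputs in the question's (t~, x~) coordinates, D = 1:
c_plus  = F(2, 3)   # one-way speed of light towards +x
c_minus = F(-2)     # (negative of the) one-way speed towards -x
T       = F(1)      # proper time a resting clock shows between coordinate times 0 and 1

# Part 3: g00 = -T^2.
g00 = -T**2

# Part 2: each light beam gives 0 = g00/c^2 + 2*g01/c + g11, i.e.
#   (2/c) * g01 + g11 = -g00/c^2.
# Two beams give a 2x2 linear system for (g01, g11); solve it by Cramer's rule.
a1, r1 = 2 / c_plus,  -g00 / c_plus**2
a2, r2 = 2 / c_minus, -g00 / c_minus**2
det = a1 - a2
g01 = (r1 - r2) / det
g11 = (a1 * r2 - a2 * r1) / det

print(g00, g01, g11)   # -1 1/2 3/4  ->  g = -dt^2 + dt dx + (3/4) dx^2
```

As claimed, the system is solvable and pins down all components of the metric.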
By transforming back to the $(x^{\mu})$-coordinates, we find $g_{\mu\nu}$'s in the original coordinate system, as desired. | {
"domain": "physics.stackexchange",
"id": 79551,
"tags": "special-relativity, spacetime, metric-tensor, symmetry, coordinate-systems"
} |
Using state transitions to filter C comments | Question: This is my second attempt at K&R 1-23,
Write a program to remove all comments from a C program. Don't forget
to handle quoted strings and character constants properly. C comments
don't nest.
I was previously packing characters into various io buffers, and someone suggested this was not a good choice. Thought I'd try something more along the lines of a state machine:
#include <stdio.h>
#define NORMAL 0
#define SINGLE_QUOTE 1
#define DOUBLE_QUOTE 2
#define SLASH 3
#define MULTI_COMMENT 4
#define INLINE_COMMENT 5
#define STAR 6
int state_from_normal(char prev_symbol, char symbol)
{
int state = NORMAL;
if (symbol == '\'' && prev_symbol != '\\') {
state = SINGLE_QUOTE;
} else if (symbol == '"') {
state = DOUBLE_QUOTE;
} else if (symbol == '/') {
state = SLASH;
}
return state;
}
int state_from_single_quote(char prev_symbol, char symbol)
{
int state = SINGLE_QUOTE;
if (symbol == '\'' && prev_symbol != '\\') {
state = NORMAL;
}
return state;
}
int state_from_double_quote(char prev_symbol, char symbol)
{
int state = DOUBLE_QUOTE;
if (symbol == '"' && prev_symbol != '\\') {
state = NORMAL;
}
return state;
}
int state_from_slash(char symbol)
{
int state = SLASH;
if (symbol == '*') {
state = MULTI_COMMENT;
} else if (symbol == '/') {
state = INLINE_COMMENT;
} else {
state = NORMAL;
}
return state;
}
int state_from_multi_comment(char symbol)
{
int state = MULTI_COMMENT;
if (symbol == '*') {
state = STAR;
}
return state;
}
int state_from_star(char symbol)
{
int state = STAR;
if (symbol == '/') {
state = NORMAL;
} else {
state = MULTI_COMMENT;
}
return state;
}
int state_from_inline_comment(char symbol)
{
int state = INLINE_COMMENT;
if (symbol == '\n') {
state = NORMAL;
}
return state;
}
int state_from(int prev_state, char prev_symbol, char symbol)
{
switch(prev_state) {
case NORMAL :
return state_from_normal(prev_symbol, symbol);
case SINGLE_QUOTE :
return state_from_single_quote(prev_symbol, symbol);
case DOUBLE_QUOTE :
return state_from_double_quote(prev_symbol, symbol);
case SLASH :
return state_from_slash(symbol);
case MULTI_COMMENT :
return state_from_multi_comment(symbol);
case INLINE_COMMENT :
return state_from_inline_comment(symbol);
case STAR :
return state_from_star(symbol);
default :
return -1;
}
}
int main(void)
{
int input; /* int, not char: getchar() returns an int, so comparison with EOF works */
char symbol = '\0';
char prev_symbol;
int state = NORMAL;
int prev_state;
while ((input = getchar()) != EOF) {
prev_symbol = symbol;
prev_state = state;
symbol = input;
state = state_from(prev_state, prev_symbol, symbol);
if (prev_state == SLASH && state == NORMAL) {
putchar(prev_symbol);
}
if (prev_state != STAR && state < SLASH) {
putchar(symbol);
}
}
}
Answer: Bugs
Here are three examples that will cause problems with your state machine:
'\\'
"\\"
/* comment **/
In the first two examples, your state machine doesn't recognize the end quotes because of the preceding backslash characters, even though the backslashes were already "consumed" by the other backslashes.
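One way to repair the escape handling is to track an "escaped" flag that is consumed after exactly one character, instead of peeking at the previous symbol; the same rewrite can also let the STAR state persist on a repeated '*'. A sketch of the corrected transition logic, written in Python for brevity (a hypothetical re-implementation, not the original C):

```python
def strip_comments(src):
    """Corrected state machine: escape flag instead of prev_symbol peeking."""
    out, i, state, escaped = [], 0, "normal", False
    while i < len(src):
        ch = src[i]
        if state == "normal":
            if ch == "'":
                state = "squote"
            elif ch == '"':
                state = "dquote"
            elif ch == "/":
                state = "slash"
                i += 1
                continue
            out.append(ch)
        elif state in ("squote", "dquote"):
            out.append(ch)
            if escaped:
                escaped = False            # this char was escaped: never a delimiter
            elif ch == "\\":
                escaped = True
            elif (ch == "'" and state == "squote") or (ch == '"' and state == "dquote"):
                state = "normal"
        elif state == "slash":
            if ch == "*":
                state = "mcomment"
            elif ch == "/":
                state = "icomment"
            else:
                out.append("/")            # the slash was not a comment opener after all
                state = "normal"
                continue                   # reprocess ch in the normal state
        elif state == "mcomment":
            if ch == "*":
                state = "star"
        elif state == "star":
            if ch == "/":
                state = "normal"
            elif ch != "*":                # the fix: '**' stays in the star state
                state = "mcomment"
        elif state == "icomment":
            if ch == "\n":
                out.append(ch)
                state = "normal"
        i += 1
    if state == "slash":
        out.append("/")
    return "".join(out)

print(strip_comments("a /* comment **/ b"))   # a  b
```

With these two changes, the inputs `'\\'`, `"\\"`, and `/* comment **/` are all handled correctly.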
In the third example, the state machine fails to recognize the end of comment. The problem is that the double star should cause the state to remain in the STAR state but it instead reverts to the MULTI_COMMENT state. | {
"domain": "codereview.stackexchange",
"id": 17613,
"tags": "c, parsing, state-machine"
} |
Given asymptotic bounds, what can we say about small n? | Question: I am trying to wrap my head around these asymptotic notations. Given $f(n)$ and $g(n)$, one can write $f(n) = \Omega(g(n))$ as shorthand for: there exist constants $c > 0$ and $n_0$ such that $f(n) \geq c\cdot g(n)$ for all $n\geq n_0$. But what happens when $n<n_0$? What can we then say about $f(n)$ and $g(n)$? Do we just assume that $f(n)$ is a constant and the relation still works? Or does the question not even make sense to ask?
Answer: Nothing. If all you know about two functions is their $O$/$\Omega$ relations, you know nothing about how they relate to each other on any finite prefix.
Yes, this makes asymptotics of, say, runtime functions absolutely useless in terms of questions like, "which algorithm should I use in scenario X?" On first sight, that is.
Luckily, we usually do analyse in more detail but throw the details away when formulating the theorem with coarse notation like Landau symbols. Many results about "practical" algorithms implicitly say, "and this behaviour can be observed for reasonable $n$".
Sometimes you get better theorems which use Landau notation for "error terms" only; authors like Knuth do not say "algorithm A runs in time $O(n \log n)$", but rather "algorithm A takes $2n\log n - 2n + O(1)$ comparisons". This does not completely do away with your issue, but mitigates it to some extent. If you know that the $O(1)$ stands for $\leq 1377$, you can make informed decisions based on that result.
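To see concretely that Landau statements say nothing about small $n$, here is a toy pair of cost functions (the constant 100 and the crossover point are invented purely for illustration): a $\Theta(n\log n)$ function with a large constant loses to a $\Theta(n^2)$ function on every small input, even though it wins permanently in the limit.

```python
import math

f = lambda n: n * n                   # a Theta(n^2) cost function
g = lambda n: 100 * n * math.log2(n)  # Theta(n log n), but with a big constant

# The asymptotically larger f is the *cheaper* one on small inputs...
assert g(512) > f(512)
# ...even though g wins eventually (and permanently):
assert g(4096) < f(4096)

# First power of two at which the asymptotically faster function actually wins:
n = 2
while g(n) >= f(n):
    n *= 2
print(n)   # 1024
```

So on the finite prefix below the crossover, the $O$/$\Omega$ relations tell you nothing about which function is smaller.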
Some of our reference material discusses facets of this issue. | {
"domain": "cs.stackexchange",
"id": 4824,
"tags": "asymptotics, landau-notation"
} |
How to convert this arduino code snippet to AVR GCC? | Question: I recently joined a university robotics project (based on ROS). My main processing controller is a Raspberry Pi and the system stability controller is an ATmega32 microcontroller (it is responsible for driving the motors and checking that the communication protocols, e.g. I2C and RS-232, are working properly). The motor controller of this robot is an I2C device, and it drives the motors according to the I2C signals coming from the I2C port of the ATmega32. The main controller communicates with the ATmega32 using the RS-232 protocol. So I found an Arduino code as below,
// This function is called once when the program starts.
void setup() {
// Choose a baud rate and configuration. 115200
// Default is 8-bit with No parity and 1 stop bit
Serial.begin(115200, SERIAL_8N1);
}
// This function will loop as quickly as possible, forever.
void loop() {
byte charIn;
if(Serial.available()){ // A byte has been received
charIn = Serial.read(); // Read the character in from the master microcontroller
Serial.write(charIn); // Send the character back to the master microcontroller
}
}
The communication between the processing board and the microcontroller is shown in the diagram below.
There is an already available Arduino library called rosserial for ROS. But I want this in AVR GCC. What I want is to convert this code to traditional AVR GCC code that works in Atmel Studio 6.
Answer: I have never worked with Arduino, but there is a lot going on behind the scenes in the Arduino Serial class. The examples below should get you started; they are not supposed to be compilable or functional as a whole. The example is for UART0.
Serial.begin() performs the low level initialization. If you do this manually it will look something like this:
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdbool.h>
#include <stdint.h>

void UART0_Init(uint32_t baud) {
uint16_t value = (((uint32_t)F_CPU / (baud * 16UL)) - 1);
/* Set baud rate */
UBRR0H = (uint8_t)(value >> 8);
UBRR0L = (uint8_t)value;
/* Enable USART receiver and transmitter */
UCSR0B = (1 << RXEN0) | (1 << TXEN0);
/* Set frame format: asynchronous, 8data, no parity, 1stop bit */
UCSR0C = (1 << UCSZ00) | (1 << UCSZ01);
// enable UART-Receive-Interrupt
UCSR0B |= (1 << RXCIE0);
}
Then there is the interrupt service routine for the receiver and you would need some state variables to communicate new data to the "main process".
volatile uint8_t lastByteReceived;
volatile bool newDataAvailable;
ISR (USART0_RX_vect) {
lastByteReceived = UDR0;
newDataAvailable = true;
}
The Arduino loop() represents more or less the classic while(true){} loop in main().
int main(void) {
while (1) {
if (newDataAvailable) {
newDataAvailable = false;
/* ...write lastByteReceived to a buffer or process it otherwise;
   e.g. echo the byte back, as the Arduino sketch does: */
while (!(UCSR0A & (1 << UDRE0)))
    ;   /* wait until the transmit buffer is empty */
UDR0 = lastByteReceived;
}
}
}
PS: Isn't there support for syntax highlighting on SE robotics? | {
"domain": "robotics.stackexchange",
"id": 1216,
"tags": "arduino, ros, avr, i2c, rs232"
} |
Algorithm Complexity Analysis on functional programming language implementations | Question: I've learned today that algorithm analysis differs based on computational model. It is something I've never thought about or heard of.
An example given to me, that illustrated it further, by User @chi was:
E.g. consider the task: given $(i, x_1, \ldots, x_n)$, return $x_i$. In RAM this can be solved in $O(1)$ since array access is constant-time. Using TMs, we need to scan the whole input, so it's $O(n)$.
This makes me wonder about functional languages; From my understanding, "Functional languages are intimately related to the lambda calculus" (from a comment by Yuval Filmus on here). So, if functional languages are based on lambda calculus, but they run on RAM based machines, what is the proper way to perform complexity analysis on algorithms implemented using purely functional data structures and languages?
I have not had the opportunity to read Purely Functional Data Structures but I have looked at the Wikipedia page for the subject, and it seems that some of the data structures do replace traditional arrays with:
"Arrays can be replaced by map or random access list, which admits purely functional implementation, but the access and update time is logarithmic."
In that case, the computational model would be different, correct?
Answer: It depends on the semantics of your functional language. You can't do algorithm analysis on programming languages in isolation, because you don't know what the statements actually mean. The specification for your language needs to provide sufficiently detailed semantics. If your language specifies everything in terms of lambda calculus you need some cost measure for reductions (are they O(1) or do they depend on the size of the term you reduce?).
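To make the question's array-vs-list contrast concrete: in a language whose data is built from pairs (cons cells), indexing costs one pointer hop per position, while a RAM array indexes in one step. A Python sketch, with Python standing in for the functional language's runtime representation (the helpers are invented for illustration):

```python
def cons(head, tail):
    return (head, tail)

def nth(lst, i):
    """Index into a cons-list, counting the pointer hops performed."""
    hops = 0
    while i > 0:
        lst = lst[1]   # follow the tail pointer
        i -= 1
        hops += 1
    return lst[0], hops

# Build the cons-list [0, 1, ..., 9].
xs = None
for v in range(9, -1, -1):
    xs = cons(v, xs)

value, hops = nth(xs, 7)
print(value, hops)   # 7 7  -- Theta(i) hops, versus O(1) for arr[7] on a RAM
```

Whether a language specification charges $O(1)$ or $\Theta(i)$ for this access is exactly the kind of cost-model decision the answer describes.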
I think that most functional languages don't do it that way and instead provide more useful statements like "function calls are O(1), appending to the head of a list is O(1)", things like that. | {
"domain": "cs.stackexchange",
"id": 7374,
"tags": "complexity-theory, algorithm-analysis, computability, computation-models, lambda-calculus"
} |
Unconvincing sentence in wikipedia | Question: I'm studying the electric field and referring to an article about the electric field on Wikipedia.
In it, there are some doubtful sentences:
The electric field is defined as a vector field that associates to each point in space the electrostatic (Coulomb) force per unit of charge exerted on an infinitesimal positive test charge at rest at that point
First, in this sentence, I'm doubtful about 'electrostatic force'. As far as I know, moving charges also make an electric field.
This implies there are two kinds of electric fields: electrostatic fields and fields arising from time-varying magnetic fields.
And I think there are electric fields not belonging to 'electrostatic fields and fields from time-varying magnetic fields'. For example, a steady current (not in a wire) makes an electric field but doesn't make a time-varying magnetic field.
Is there something I'm thinking about wrongly? I think the above sentences are incomplete.
Answer: Ignore the word "static" in the first quote, or at least don't interpret it to mean "this only holds in the static case" in which all charges are stationary. The units of the electric field are Newtons per Coulomb, so the field at each point is a measure of the force imparted per unit charge at that point. This is still correct even when charges are moving and hence the electric/magnetic fields may be time dependent.
The second sentence is worded a little confusingly, we don't typically distinguish the electric field arising from electrically charged particles:
$$\nabla\cdot \vec E=\frac{\rho}{\epsilon_0} \tag{1}$$
and those arising from time dependent magnetic fields
$$\nabla\times \vec E=-\frac{\partial \vec B}{\partial t} \tag{2}$$
in situations in which both exist. We just have one electric field which separately satisfies all of Maxwell's equations. So you are right in both cases, the wording is just a little imprecise in the Wikipedia article. | {
"domain": "physics.stackexchange",
"id": 98324,
"tags": "electromagnetism, electric-fields"
} |
Are the subordinate genes of a repressed operon really "turned off"? | Question: Operons are often described using all or nothing language. A repressor binding to the operator is usually presented as "turning off" the regulated genes. Case in point, Scitable at Nature.com says:
In addition to being physically close in the genome, these genes are regulated such that they are all turned on or off together. ... The ability to turn ... genes on or off as a group therefore provides an efficient way to quickly adapt to environmental changes. ... mutations affecting the promoter can prevent all of the operon's genes from being expressed, ...
The thing that confuses me is that I've also heard genetics aren't all or nothing, that genes can only be "down regulated" or "up regulated", not truly turned on and off.
Obviously, differing amounts of moderate levels of whatever effector is modulating the repressor could lead to different levels of downstream expression. But what about situations where nothing is inactivating the repressors? Will the regulated genes (for all intents and purposes) be turned off? Additionally (and relatedly), is it even a realistic scenario for an operator to always (or almost always) be bound by such a repressor?
Answer: There are several mechanisms by which the expression of a gene can be completely turned off. Certain network architectures can ensure foolproof repression (e.g. by using multiple repressors in parallel, or additional epigenetic silencing mechanisms). Bistable switches can also ensure robustness of expression in a way that small fluctuations in the transcription factor levels do not lead to expression of the gene. Sometimes the activator molecules need to activate the expression co-operatively; in such conditions the activation curve is sigmoidal with respect to activator concentration. All these mechanisms can prevent leaky expression.
Some genes do have a basal expression but it is not necessary for all cases. However, basal expression is absolutely necessary for proper functioning of the lac operon (see this post).
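To illustrate the bistable-switch point numerically, here is a minimal, entirely hypothetical model of a self-activating gene, $dx/dt = a + b\,x^n/(K^n + x^n) - x$, with cooperative (sigmoidal, $n=4$) activation. Started below the activation threshold, the expression level settles near the basal value; started above it, it settles at the high state, so small fluctuations around either state do not flip the switch. All parameter values are invented for illustration:

```python
def steady_state(x, steps=20000, dt=0.01):
    """Euler-integrate dx/dt = a + b*x^n/(K^n + x^n) - x until (near) steady state."""
    a, b, K, n = 0.1, 4.0, 2.0, 4   # basal rate, max rate, activation threshold, Hill coeff.
    for _ in range(steps):
        x += dt * (a + b * x**n / (K**n + x**n) - x)
    return x

low  = steady_state(0.0)   # start from a silent gene: settles near basal expression
high = steady_state(5.0)   # start induced: settles at the high state
print(low < 0.5 < 3.0 < high)   # True: two distinct stable expression levels
```

The low state here is the "basal expression" regime; the separation between the two stable levels is what makes the off state robust.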
Having said this, there is always some non-zero probability of expression of a gene when it is actually turned off. This probability may be very low but it would not be zero. This small accidental expression may or may not have any significant downstream effects. | {
"domain": "biology.stackexchange",
"id": 5719,
"tags": "molecular-biology, molecular-genetics, gene-expression, gene-regulation, operons"
} |
Is there a better type of sensor to measure the distance instead of Ultrasonic sensors | Question: We know that ultrasonic sensors can't accurately measure the distance to an object that
is more than 3 meters away (the distance may vary on brands)
has it's reflective surface at a shallow angle (the angle may vary on brands)
is too small to reflect enough sound
Also, environmental influences such as air temperature, air pressure, air currents, humidity, mist and types of gas cause inaccurate measurements. Which sensor should I use to get the best measurements?
Answer: Yes, there are IR sensors as well, which provide ranges up to 3-4 meters (e.g. the Pololu range finder). We can also find laser range finders that measure up to 30 m distance. But these laser range finders are expensive compared to ultrasonic sensors and are mostly used for mapping purposes.
"domain": "robotics.stackexchange",
"id": 1460,
"tags": "sensors, ultrasonic-sensors"
} |
Existence of a perturbed channel that achieves a perturbed output state | Question: Consider a $d$-dimensional maximally entangled state $\vert\phi\rangle = \frac{1}{\sqrt{d}}\sum_{i=1}^d\vert i\rangle_A\vert i\rangle_B$. Let $N_{A\rightarrow A'}$ be a quantum channel and consider $\rho_{A'B} = (N\otimes I_B)\vert\phi\rangle\langle\phi\vert$. I am interested in the set of nearby quantum states $S = \{\tilde{\rho}_{A'B}\ |\ \|\tilde{\rho} - \rho\|_1\leq \varepsilon, \tilde{\rho}_B = \rho_B\}$ for some $\varepsilon\in [0,1]$.
For any $\tilde{\rho}\in S$, does there exist a channel $\tilde{N}_{A\rightarrow A'}$ that outputs it given a maximally entangled input? That is $(\tilde{N}\otimes I_B)\vert\phi\rangle\langle\phi\vert = \tilde{\rho}_{A'B}$? If not, what is a good counterexample?
If such a $\tilde{N}$ exists, then is it close to $N$ in diamond distance as a function of $\varepsilon$?
Answer: Yes, the channel $\tilde{N}$ necessarily exists.
Notice first that the state $\rho_B$ is the completely mixed state $\mathbb{1}/d$. So, in order for $\tilde{\rho}_{A'B}$ to be contained in $S$, three things must be true:
$\tilde{\rho}_{A'B}$ must be positive semidefinite.
$\tilde{\rho}_{B} = \mathbb{1}/d$.
$\|\rho_{A'B} - \tilde{\rho}_{A'B}\|_1 \leq \varepsilon$.
In general, the state
$$
(M \otimes I_B) | \phi \rangle \langle \phi |
$$
uniquely determines the mapping $M$ (for any $M$): up to the normalization $1/d$, this is the Choi representation (or Choi-Jamiolkowski representation) of $M$.
Thus, there is a unique map $\tilde{N}$ such that $\tilde{\rho}_{A'B} = (\tilde{N}\otimes I_B)|\phi\rangle\langle \phi|$. The first condition listed above guarantees that $\tilde{N}$ is completely positive and the second condition guarantees that $\tilde{N}$ preserves trace, so $\tilde{N}$ must in fact be a channel.
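A small numeric sketch of this argument for $d=2$ (the channel, the parameter $p$, and all helper functions below are hypothetical choices for illustration): take the phase-damping channel $N(\rho)=p\rho+(1-p)Z\rho Z$, build $\rho_{A'B}=(N\otimes I_B)|\phi\rangle\langle\phi|$, and check that (a) $\rho_B=\mathbb{1}/2$ holds exactly because $N$ preserves trace, and (b) the blocks of $\rho_{A'B}$ read off $N(|i\rangle\langle j|)$, so the state determines the map uniquely.

```python
import math

p = 0.75  # hypothetical damping parameter

# Phase-damping channel N(rho) = p*rho + (1-p)*Z*rho*Z via its Kraus operators.
kraus = [
    [[math.sqrt(p), 0.0], [0.0, math.sqrt(p)]],           # sqrt(p) * I
    [[math.sqrt(1 - p), 0.0], [0.0, -math.sqrt(1 - p)]],  # sqrt(1-p) * Z
]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dag(A):  # conjugate transpose (entries are real here, so just transpose)
    return [[A[j][i] for j in range(2)] for i in range(2)]

def N(E):
    out = [[0.0, 0.0], [0.0, 0.0]]
    for K in kraus:
        M = mul(mul(K, E), dag(K))
        for i in range(2):
            for j in range(2):
                out[i][j] += M[i][j]
    return out

# rho_{A'B} = (N x I)|phi><phi|: its ((a,b),(a',b')) entry is N(|b><b'|)[a][a'] / 2.
rho = [[0.0] * 4 for _ in range(4)]
for b in range(2):
    for b2 in range(2):
        E = [[1.0 if (i, j) == (b, b2) else 0.0 for j in range(2)] for i in range(2)]
        NE = N(E)
        for a in range(2):
            for a2 in range(2):
                rho[2 * a + b][2 * a2 + b2] = NE[a][a2] / 2

# (a) tracing out A' gives the maximally mixed state, precisely because N is trace preserving
rhoB = [[sum(rho[2 * a + b][2 * a + b2] for a in range(2)) for b2 in range(2)] for b in range(2)]
print([[round(x, 12) for x in row] for row in rhoB])   # [[0.5, 0.0], [0.0, 0.5]]

# (b) the blocks of rho recover N itself, e.g. N(|0><1|) = (2p-1)|0><1|
print(round(2 * rho[0][3], 12))                        # 0.5 == 2p - 1
```

The two checks mirror the conditions in the answer: the second condition of $S$ forces trace preservation, and the Choi correspondence makes the recovery in (b) unique.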
Concerning the closeness of $\tilde{N}$ to $N$ in diamond distance, the best you can conclude without more information is that $\|\tilde{N} - N\|_{\diamond} \leq d \varepsilon$. See this answer on Theoretical Computer Science Stack Exchange for further details. | {
"domain": "quantumcomputing.stackexchange",
"id": 3677,
"tags": "quantum-state, quantum-operation, information-theory"
} |
Ideal gas in quasistatic process | Question: How can one prove that an ideal gas in a quasistatic process in which $x=x(V,T)$ is held constant satisfies the equation
$pV^f=const.$, where $f = (C_{x}-C_{p})/(C_{x}-C_{V})$?
How to find $C_{x}$ for $p/V=const.$?
Answer: Given $x =x(V, T)$:
\begin{align}
dx =& \left(\frac{\partial x}{\partial V}\right)_T dV + \left(\frac{\partial x}{\partial T}\right)_V dT.\\
0 =& \left(\frac{\partial x}{\partial V}\right)_T dV + \left(\frac{\partial x}{\partial T}\right)_V dT, \tag{1}
\end{align}
for constant $x$. From the first law of thermodynamics: $\Delta Q = P dV + n C_V dT$
\begin{align}
\Delta Q =& P dV + n C_V dT.\,\,\text{Using Eq.(1) to replace }\, dV \,\text{ by } dT\\
\Delta Q =& \left\{ - P\left(\frac{\partial x}{\partial T}\right)_V / \left(\frac{\partial x}{\partial V}\right)_T + n C_V \right\} dT\\
n C_x =& \frac{\Delta Q}{dT}\Big\vert_x =\left\{ - P\left(\frac{\partial x}{\partial T}\right)_V / \left(\frac{\partial x}{\partial V}\right)_T + n C_V \right\} \tag{2}
\end{align}
Now use the ideal gas equation $PV= nRT$ to replace $dT$ with $dP$ and $dV$ in Eq.(1), then carry out the integration.
$$
P dV + V dP = nR dT. \tag{3}
$$
Plug eq.(3) in Eq.(1):
\begin{align}
0 =& \left(\frac{\partial x}{\partial V}\right)_T dV + \left(\frac{\partial x}{\partial T}\right)_V \frac{1}{nR} \left\{ P dV + V dP \right\}.\\
0 =&\left\{ \left( \frac{\partial x}{\partial V}\right)_T + \left(\frac{\partial x}{\partial T}\right)_V\frac{P}{nR} \right\} dV +\left\{ \left(\frac{\partial x}{\partial T}\right)_V \frac{V}{nR}\right\}dP\\
0 =&\left\{ nR+ P\left(\frac{\partial x}{\partial T}\right)_V /\left(\frac{\partial x}{\partial V}\right)_T \right\} \frac{dV}{V} +\left\{ P\left(\frac{\partial x}{\partial T}\right)_V/\left(\frac{\partial x}{\partial V}\right)_T \right\}\frac{dP}{P}\\
0 =&\left\{ nR+ n C_V- n C_x \right\} \frac{dV}{V} +\left\{ n C_V- n C_x \right\}\frac{dP}{P}\\
\end{align}
In the last equation, we plug in the $C_x$ from Eq.(2). Recalling that $R = C_P - C_V$, we then have:
\begin{align}
0 =&\left\{ nC_P - n C_x \right\} \frac{dV}{V} +\left\{ n C_V- n C_x \right\}\frac{dP}{P}
\end{align}
Integrating this equation yields $pV^f = const.$ As for the second part: with $x = P/V$ held constant we have $P = aV$, so $PV = nRT$ gives $aV^2 = nRT$, i.e. $2P\,dV = nR\,dT$; then $\Delta Q = P\,dV + nC_V\,dT = n(C_V + R/2)\,dT$, and so $C_x = C_V + R/2$. | {
"domain": "physics.stackexchange",
"id": 78645,
"tags": "thermodynamics, statistical-mechanics"
} |
Non-coherent receiver for DBPSK: Implementation details | Question: I am implementing a non-coherent receiver for DBPSK.
I have a computer science background with little knowledge of DSP so my questions here may seem obvious.
The design of the receiver I am trying to implement is the following:
Design of DPSK receiver
And what I am doing right now is:
On the tx side
Differentially encoding my bit sequence
BPSK modulating it
Applying a high-pass filter
On the rx side
Applying a band-pass filter around the carrier frequency
Delaying (by one bit worth of data) and multiplying the data by itself
Applying a low-pass filter
Integrating over one bit worth of data (i.e. sum the sample values)
Making the decision depending on the sign of my integral
By 'data' I mean samples at 44.1kHz.
When I simulate this on Octave, it works (i.e. I can detect the bit sequence even with high AWGN).
The problem is I am trying to move to an infinitely long data sequence (i.e. I do not know the exact moment where the tx began).
So here are my questions:
How can I achieve time and frame synchronization? (I tried with a Barker code but I am not sure I am getting it right)
This is the thing that's most confusing to me. My guess is you'd have to get time synced before the integration block, and frame synced after the decision block.
How can I express the bandwidth as a function of the carrier frequency f and the bit rate?
Does the relation between bit rate and carrier frequency have any influence on the receiver ?
What are the optimal values for the filters (number of taps)?
How do I calculate Eb/N0 in my case? (I want to compare my BER curve with others to see how good my receiver is)
Answer: 1) A typical approach would be to have a frame alignment word (and by "word" I mean bit sequence) at the beginning of every frame. You could time synchronize by looking for that FAW through cross-correlations. Barker codes are a good choice because they have good autocorrelation properties.
The problem with the approach is that the data, being random, can look like the FAW sometimes. You can use the periodicity of the FAW to screen out those false positives. For instance, if your frame is 1000 bits long, you would look for multiple hits spaced 1000 bits apart. Once you have found them you have both time and frame synchronization.
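To illustrate point 1), here is a minimal sketch (pure Python; the names, offsets, and threshold are my own illustrative assumptions, not from the post) of locating the FAW by cross-correlating the hard-decision symbols against a 13-chip Barker code:

```python
# Sketch of FAW search by cross-correlation (pure Python; all names and
# values here are illustrative assumptions, not from the original post).
BARKER13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]  # 13-chip Barker code

def find_faw(symbols, faw=BARKER13):
    """Indices where the normalized correlation with the FAW is exactly 1."""
    n = len(faw)
    hits = []
    for k in range(len(symbols) - n + 1):
        corr = sum(s * f for s, f in zip(symbols[k:k + n], faw)) / n
        if corr == 1.0:   # exact match; use a lower threshold for noisy data
            hits.append(k)
    return hits

# Demo: hide the FAW at offset 50 inside random +/-1 decisions
import random
random.seed(0)
data = [random.choice([-1, 1]) for _ in range(200)]
data[50:63] = BARKER13
```

Screening the hits for the expected frame periodicity, as described above, then removes chance matches that occur in random payload data.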
2) There is not a direct relationship between carrier frequency and bandwidth. There is a direct relationship between bit rate and bandwidth, but there are other factors involved. The easiest way to determine your bandwidth is to take an FFT of your signal and measure it.
3) Your question is unclear.
4) You haven't provided enough information, and "optimal" can mean a lot of different things. If you want a good answer to this you should start a new thread with more details.
5) It sounds like you are adding noise via AWGN, so you know what your SNR is. With BPSK Eb/No is equal to the SNR. | {
"domain": "dsp.stackexchange",
"id": 719,
"tags": "modulation, bpsk, synchronization"
} |
Do pressurised cans still pose a risk to bursting in normal conditions, but after being exposed to heat? | Question: We're planning on shipping some items from the US to Australia as you cannot get them here, that being:
https://www.chewy.com/sentry-stop-that-noise-pheromone-dog/dp/56486
I have been in contact with a freight forwarder who can ship them to Australia, but obviously they will need to be shipped as a hazmat item and go inside a particular sized box and be shipped by FedEx or DHL.
Now, as I understand it, these are classified as hazmat because of the chance of bursting as well as being flammable.
As per the link above:
Contents under pressure. Do not puncture or incinerate container. Do not set on stove or radiator, expose to heat or store at temperatures above 120°F (49°C), as container may burst.
I have no idea, but I assume they wouldn't be exposed to high temperatures such as that during shipping, but my question is...
Assuming we got them ok and they didn't burst, do pressurised cans such as these still pose a risk to bursting in "normal" conditions AFTER they have been exposed to high heat? Such as if they had been exposed to high heat during shipping, but we got them ok, would they still pose a risk to us with the possibility of bursting?
The manufacturer provides a Material Safety Data Sheet on their website, which you have to email them or register to get it sent to you, but if it's of any interest you can download it here from MediaFire.
Alternatively you can find instructions to download it via their site - see under "PRODUCT SAFETY INFORMATION".
If you don't want to download anything then you can see screenshots of the document I uploaded to an Imgur gallery here: https://i.stack.imgur.com/44umr.jpg
Note though, the above data sheet was provided via their site that shows their old packaging, the link from Chewy above shows their new packaging, as far as I know the packaging is the only thing that has changed.
Answer: Assume that high temperature causes no chemical reactions in the contents or between the contents and the can.
High temperature will increase the pressure in the can. Assume the can is fully unaffected in its mechanical integrity up to the point that it fails.
With all of the above, when you get the can back at room temperature, it will have no change in its potential to fail or not.
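For a rough feel of the pressure rise involved, here is an ideal-gas, rigid-can estimate (the 6 bar starting pressure is an assumed illustrative value; real aerosol cans hold liquefied propellant, whose vapor pressure rises faster than this):

```python
# Rough ideal-gas, rigid-can estimate of the pressure rise with temperature.
# The 6 bar starting pressure at 20 C is an assumed illustrative value; real
# aerosol cans hold liquefied propellant whose vapor pressure rises faster.
def pressure_at(T_celsius, p0_bar=6.0, T0_celsius=20.0):
    """Absolute pressure (bar) at constant volume, scaling with absolute T."""
    return p0_bar * (T_celsius + 273.15) / (T0_celsius + 273.15)

p_at_limit = pressure_at(49.0)   # the labelled 120 F / 49 C limit
```

Even at the labelled limit this gas-only estimate gives only about a 10% rise, which is why the liquefied contents and seam fatigue dominate the real failure risk.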
One weak link will be the potential for increased fatigue at the seams on the can as the contents go to high pressure and back to lower pressure. Another weak link will be in the potential for over-stress on the plastic cap seal on the top of the container.
When the contents react at high temperature and that temperature is exceeded, all bets are off.
In summary, this system affords no absolute guarantee that it will have no increased probability to fail when temperature cycles to a high value and returns to room temperature. The biggest uncertainty is what exactly is meant by high temperature (e.g. 100 °C or 500 °C) and for how long (e.g. immediately, or for a few minutes, or for a few days). The practical issue is whether and to what extent your shipment will reach those bounds.
Based on what is stated on the website, the failure seems to be immediate at a point at or just above 50 °C. I would have to say that whatever company is ready to accept these cans knows that they have to keep them below that temperature during shipping. But what you might want to know is whether they were at a high temperature just below that for an extended time.
I might suggest marking the cans before shipment with a paint, dye, or wax that changes color irreversibly when a certain temperature is exceeded. Or include a temperature probe with the shipment that tracks temperature over time for the entire shipment. Then you can decide at the receiving end whether the cans were shipped to your satisfaction. | {
"domain": "engineering.stackexchange",
"id": 3027,
"tags": "pressure, chemical-engineering, gas, safety, compressed-gases"
} |
Create a List of any type of object based on List of HashMap from JSON | Question: I have this method that is working as I want:
public static <T> List<T> convertDados(Class<T> entity, List<HashMap<String, String>> dados) throws NoSuchMethodException,
SecurityException, IllegalAccessException, IllegalArgumentException, InvocationTargetException, InstantiationException {
Field[] fields = entity.getDeclaredFields();
Method[] allSetterMethods = entity.getMethods();
Map<Integer, Method> setters = new HashMap<>();
Class<?>[] paramTypes = new Class<?>[fields.length -1];
List<T> result = new ArrayList<>();
int cont = 0;
//AtomicInteger counter = new AtomicInteger(0);
T obj = null;
/*Arrays.stream(allSetterMethods).filter(method -> method.getName().startsWith("set")).forEach(m -> {
int c = 0;
if(counter.get() > c)
c = counter.get();
paramTypes[c] = m.getParameterTypes()[0];
setters.put(c, m);
counter.getAndIncrement();
});*/
//Pega todos os setter
for(Method method : allSetterMethods) {
if(method.getName().startsWith("set")) {
paramTypes[cont] = method.getParameterTypes()[0];
setters.put(cont, method);
cont++;
}
}
for(Map<String, String> map : dados) {
if(obj == null)
obj = entity.getConstructor().newInstance();
for (Field field : fields) {
if(field.getName().startsWith("serial")) continue;
for(Map.Entry<Integer, Method> set : setters.entrySet()) {
if(set.getValue().getName().substring(3).equalsIgnoreCase(field.getName())) {
Integer var = null;
if(paramTypes[set.getKey()].equals(Integer.class))
var = Integer.parseInt(map.get(field.getName()));
Method method = entity.getMethod(set.getValue().getName(), paramTypes[set.getKey()]);
method.invoke(obj, var == null ? map.get(field.getName()) : var);
}
}
}
result.add(obj);
obj = null;
}
return (List<T>) result;
}
It creates a List of any type of object based on a List of HashMap from JSON. I'm trying to make it functional and more concise. So I started to change it, and so far I have the following modifications:
public static <T> List<T> convertDados(Class<T> entity, List<HashMap<String, String>> dados) throws NoSuchMethodException,
SecurityException, IllegalAccessException, IllegalArgumentException, InvocationTargetException, InstantiationException {
Field[] fields = entity.getDeclaredFields();
Method[] allSetters = Arrays.stream(entity.getMethods()).filter(method -> method.getName().startsWith("set")).toArray(Method[]::new);
List<T> result = new ArrayList<>();
T obj = null;
for(Map<String, String> map : dados) {
if(obj == null)
obj = entity.getConstructor().newInstance();
for (Field field : fields) {
if(field.getName().startsWith("serial")) continue;
for(Method m : allSetters) {
if(m.getName().substring(3).equalsIgnoreCase(field.getName())) {
Integer var = null;
if(m.getParameterTypes()[0].equals(Integer.class))
var = Integer.parseInt(map.get(field.getName()));
Method method = entity.getMethod(m.getName(), m.getParameterTypes()[0]);
method.invoke(obj, var == null ? map.get(field.getName()) : var);
}
}
}
result.add(obj);
obj = null;
}
return (List<T>) result;
}
What is the best way to get rid off the fors taking a functional approach?
Answer: Here is one possible way:
public static <T> List<T> convertDados(Class<T> entity, List<HashMap<String, String>> dados) {
final Field[] fields = entity.getDeclaredFields();
final List<Method> allSetters = Arrays.stream(entity.getMethods())
.filter(method -> method.getName().startsWith("set"))
.collect(Collectors.toList());
return dados.stream()
.map(map -> {
try {
final T obj = entity.getConstructor().newInstance();
Arrays.stream(fields)
.map(Field::getName)
.filter(name -> !name.startsWith("serial"))
.forEach(fieldName -> {
allSetters.stream()
.filter(setter -> setter.getName().substring(3).equalsIgnoreCase(fieldName))
.forEach(setter -> {
final Class<?> paramType = setter.getParameterTypes()[0];
try {
entity.getMethod(setter.getName(), paramType).invoke(obj, parse(paramType, map.get(fieldName)));
} catch (IllegalAccessException | InvocationTargetException | NoSuchMethodException ignore) {
}
});
});
return Optional.of(obj);
} catch (InstantiationException | IllegalAccessException | InvocationTargetException | NoSuchMethodException ignore) {
return Optional.<T>empty();
}
})
.filter(Optional::isPresent)
.map(Optional::get)
.collect(Collectors.toList());
}
private static Object parse(Class<?> paramType, String value) {
return paramType.equals(Integer.class) ? Integer.parseInt(value) : value;
} | {
"domain": "codereview.stackexchange",
"id": 24659,
"tags": "java, functional-programming, reflection"
} |
Meaning of ft-values in nuclear physics | Question: What is the "physical" meaning of the ft-value for a decay channel? From what I understand, the ft-value is inversely proportional to the square of the matrix element, hence I would expect a larger ft-value would correspond to a less probable decay route. Is this correct?
Answer: It is close. The matrix element is only part of the game. It is $t_{1/2}$ (actually $1/\tau$) that is the definitive answer to the probability of the route. The other part is the kinematic factors (see Fermi's Golden Rule 1)
$\lambda_\beta = \frac{\log(2)}{t_{1/2}}= g^2 \frac{m_e^5 c^4 |M^L_{if}|^2}{2\pi^3\hbar^7} \Big[ \frac{1}{(m_ec)^5}\int_0^{p_{max}} dp S^L(p,q)F^{\pm}(Z^\prime,p)p^2q^2 \Big] $
The term in the big bracket is $f_L$, a spectrum-shape-dependent factor (influence of the Coulomb field and angular momenta), and a useful (famous) factorization can be done:
$f t_{1/2} \sim \frac{1}{|M^L_{if}|^2}$.
This formula says that from the measured characteristics and the half-life, one can deduce information about the matrix element itself. See the nomogram:
Different transition types have typical $\log(ft)$ values:
Superallowed - 2.9–3.7 (no parity change, $L_\beta=0$, spin change $\Delta J=0$)
Allowed - 4.4–6.0 (no parity change, $L_\beta=0$, spin change $\Delta J=0,1$)
First forbidden 6 – 10
etc.
A detected transition with a specific $\log ft$ should have the corresponding characteristics (if known; if not, the $\log ft$ may be used to deduce them).
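As a small helper, the typical ranges listed above can be turned into a lookup (boundaries are approximate and overlap in practice; the function name and cutoffs are my own, taken from the list above):

```python
# Small helper mapping a measured log(ft) to the typical ranges listed above
# (boundaries are approximate; real assignments need the full level scheme).
def transition_type(log_ft):
    if 2.9 <= log_ft <= 3.7:
        return "superallowed"
    if 4.4 <= log_ft <= 6.0:
        return "allowed"
    if 6.0 < log_ft <= 10.0:
        return "first forbidden"
    return "other/unclassified"
```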
1 http://www.umich.edu/~ners311/CourseLibrary/bookchapter15.pdf | {
"domain": "physics.stackexchange",
"id": 55089,
"tags": "nuclear-physics, terminology"
} |
Control System Analysis - Block Diagram Basics | Question:
Why this relation is not right?
C(s) = R(s) * [G(s).G(p)+D(s)] / [1+G(s).G(p)+D(s)]
Answer: First step is the summing junction:
$$
R(s) - C(s) \\
$$
Then the controller:
$$
G_c(s)\left(R(s) - C(s)\right) \\
$$
Then the plant:
$$
G_p(s)\left(G_c(s)\left(R(s) - C(s)\right)\right) \\
$$
Then the addition of the disturbance input:
$$
D(s) + G_p(s)\left(G_c(s)\left(R(s) - C(s)\right)\right) \\
$$
Finally, all of the last step above is equal to the output, $C(s)$:
$$
C(s) = D(s) + G_p(s)\left(G_c(s)\left(R(s) - C(s)\right)\right) \\
$$
Multiply out the business on the right:
$$
C(s) = D(s) + G_p(s)\left(G_c(s)\left(R(s) - C(s)\right)\right) \\
C(s) = D(s) + G_p(s)G_c(s)R(s) - G_p(s)G_c(s)C(s) \\
$$
Then rearrange to get the $C(s)$ terms all on one side:
$$
C(s) + G_p(s)G_c(s)C(s) = D(s) + G_p(s)G_c(s)R(s) \\
$$
Factor out the $C(s)$ term and divide to get the answer:
$$
C(s)\left(1 + G_p(s)G_c(s)\right) = D(s) + G_p(s)G_c(s)R(s) \\
C(s) = R(s)\left(\frac{G_p(s)G_c(s)}{1 + G_p(s)G_c(s)}\right) + \frac{D(s)}{1 + G_p(s)G_c(s)} \\
$$
So it looks like your answer isn't correct because you have what distributes to be an $R(s)D(s)$ term, when the disturbance only sums with the result of the plant (and doesn't multiply), and you also have the disturbance term in the denominator, which is also incorrect.
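A quick numeric spot-check of the derived closed form: the expression for $C(s)$ must satisfy the loop equation $C = D + G_pG_c(R - C)$ at any $s$. The transfer functions below are arbitrary illustrative choices, not from the question.

```python
# Spot-check of the derived closed form: C(s) must satisfy the loop equation
# C = D + Gp*Gc*(R - C) at any s.  Gc, Gp, R, D are arbitrary example choices.
def Gc(s): return 2.0                # proportional controller, gain 2
def Gp(s): return 1.0 / (s + 1.0)    # first-order plant
def R(s):  return 1.0 / s            # step reference
def D(s):  return 0.5 / s            # step disturbance

def C(s):
    L = Gp(s) * Gc(s)                # loop transfer function
    return R(s) * L / (1 + L) + D(s) / (1 + L)

s = 0.3 + 2.0j                       # arbitrary complex test point
lhs = C(s)
rhs = D(s) + Gp(s) * Gc(s) * (R(s) - C(s))
```

The two sides agree to floating-point precision, which is a cheap way to catch algebra slips in block diagram reduction.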
There are all kinds of "rules of thumb" for how you do block diagram reduction, but I always just multiply it out. It only takes a little bit longer, and I personally always get something wrong when I try the (N/(1+N)) shortcut or something like that. | {
"domain": "robotics.stackexchange",
"id": 1810,
"tags": "control"
} |
What is the identity of this spider from India? | Question:
I came across this strange guy today at my classroom in Tamil Nadu, India. It is a spider which is not very much bigger than an ant but with an artistic body. He has 8 legs, but the two on the front are raised above his head, just like a crab does. Can anyone identify this little guy?
Answer:
Anyone identify this little guy?
That is a Siler semiglaucus, which is a type of jumping spider. It has the nickname of "metallic jumper", and can be found all throughout India, Indonesia, Philippines, and Thailand.
He has 8 legs, but the two on the front is raised above his head, just like a crab does.
Here is a wonderful collection of more images, which also includes locations of discovery. | {
"domain": "biology.stackexchange",
"id": 7861,
"tags": "species-identification, arachnology"
} |
Do dishwashing detergent and stain remover powder/stick have the similar ability to remove collar rings? | Question: For yellow rings on shirts caused by skin grease or sweat:
Some recommended to use an "Oxyclean" stain remover or stain remover stick
for cleaning.
Some recommended to use dishwashing detergent to brush and wash the
stains, saying "dish washing liquid is designed to remove grease, and
why buy another product (that doesn't work very well -- I've tried)
when you already have one that does?"
From a chemistry perspective, do dishwashing detergent and stain remover powder/stick have similar chemical composition and a similar ability to remove collar yellow rings?
Original dirty collar
After dishwashing detergent and laundry detergent
Answer: Detergents seem to consist of both polar and non-polar compounds. Probably because they need the non-polar part of the mixture to dissolve the grease, and the polar compound part of the mixture to get washed off with the water so there is not much left over on your shirt. The ideal solution would probably depend on just how bad the stain is (how much grease needs to be dissolved). If your home solutions did not work the first time, try to look for signs if it did anything at all, and it might not be something that will remove all the grease the first go around.
I have no experience using detergents or removing stains, but I am going off a lot of speculation here as to how the process might work.
"Like dissolves like"
According to this article if there are lots of Mg, Ca, or Iron ions in your water (likely) then it could reduce the effectiveness of soaps. A good question to ask yourself is whether or not your water contains many ions such as these. | {
"domain": "chemistry.stackexchange",
"id": 163,
"tags": "everyday-chemistry, aqueous-solution, surfactants"
} |
Prosilica GC655C low frame rate | Question:
Hi,
My Prosilica GC655C is giving only 56 fps. (The same camera was giving me 89 fps on a different machine, which I don't have access)
My Launch file is:
<launch>
<group ns="prosilica">
<param name="exposure_auto" type="str" value="Auto"/>
<param name="gain_auto" type="str" value="Auto"/>
<param name="whitebal_auto" type="str" value="Auto"/>
<param name="frame_id" type="string" value="high_def_optical_frame" />
<param name="trigger_mode" type="str" value="streaming"/>
<param name="ip_address" type="str" value="192.168.0.101"/>
</group>
<node name="prosilica" pkg="prosilica_camera" type="prosilica_node" respawn="false" output="screen">
</node>
</launch>
And I have set my MTU with
sudo ifconfig eth0 mtu 8228
Is there anything else that I should check? (I don't remember doing anything else in the old setup)
FYI: I use CAT 6 cables and the router DLink DIR-825 supports jumbo frames. The current machine is an i7/8GB.
Originally posted by gajan on ROS Answers with karma: 13 on 2011-03-12
Post score: 0
Answer:
Hi Gajan,
I've experienced the same with our GC655C, GC660C and GC1290C; in our case, changing the cable type from CAT5E already made a big difference (our lab should be operational on CAT6A in the following days). Even with CAT5E cable I had to try 4 different suppliers before finding one with good performance.
I'm still not getting the maximum performance out of our GC660C cameras, however. I'm using a Cisco Catalyst switch (which doesn't seem to be the bottleneck).
Originally posted by KoenBuys with karma: 2314 on 2011-03-12
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 5048,
"tags": "ros"
} |
What happens to water under high pressures without possibility of escape? | Question: Knowing very little about the nature of water, I am wondering how it might behave at the centre of a planet or the centre of another massive gravitational body.
Could water take such pressures or might it break into separate hydrogen and oxygen to find something more accommodating for the pressure exerted, if such atmosphere would allow the volume even?
Finally, bonus points: could triple points in water play a role in keeping temperatures low under high pressures via fluid thermodynamics?
Answer: What happens to water under high pressures without the possibility of escape?
It depends on what you mean for high pressure and on the temperature, however, the water phase diagram can help you to understand what will happen. This is from Wikipedia User Cmglee:
You can see that at high pressure water assumes a solid form (ice): you will have ice X at 100 GPa, and what is labelled as hexagonal ice XI in the diagram but is actually superionic water, or ice XVIII, at 1 TPa (temperature range 0 to 650 K). These sorts of ices have different lattice and internal energies.
What would happen to water as pressures increase towards infinity without the possibility of the water escaping confinement?
Unfortunately, chemists don't deal with infinity. However, there will be a point where the water molecule no longer exists: all the bonds will break. After that you will probably have something called electron-degenerate matter (have a look here); this is all about quantum mechanics, the Pauli exclusion principle, black holes and so on, which is more Physics S.E. stuff.
Could triple points in water play a role in keeping temperatures low under high pressures via fluid thermodynamics?
You can also note that there are other triple points, like (100 K, 62 GPa). These, however, don't directly affect the properties of water; rather, it is the properties of water that determine where the triple points are. In a closed system, kinetic energy is constant, so I think fluid thermodynamics doesn't matter here; this case is more related to equilibria.
More info about water phase diagrams here. | {
"domain": "chemistry.stackexchange",
"id": 1092,
"tags": "thermodynamics, water"
} |
ROS_NOBUILD doesn't allow modification of code | Question:
I'm a beginner with ROS (using the Turtlebot SDK) and I was attempting to generate PNG images from the depth and RGB data coming out of my Kinect.
For this I thought the easiest way would be to modify the "image_view" code (which allows me to right-click to save the current frame as PNG) so that it simply saves for all frames.
However, when I try to modify the source code and make it using rosmake, it says there were no errors, but the executable is still the original one when I run image_view from terminal.
Am I missing out on something basic here? I couldn't spot anything on the wiki.
Any help is highly appreciated.
Originally posted by rps2689 on ROS Answers with karma: 1 on 2012-07-01
Post score: 0
Original comments
Comment by Eric Perko on 2012-07-01:
Are you modifying ROS packages in /opt/ros that were installed via apt-get or a copy of image_view checked out into your workspace?
Comment by joq on 2012-07-01:
Don't modify the code installed via Debian packages (via apt-get). Check out a source overlay, if you want to modify something.
Answer:
I'm guessing you built from source, as you say you changed the source of image_view directly. At the end of a successful build (and if instructed to do so), rosmake will create the ROS_NOBUILD file in the package directory as an indication to the build system that the package can (and should) be skipped in any subsequent rosmakes. Normally, packages installed via a package manager don't include the source and come with a ROS_NOBUILD to prevent make clean mishaps among other things.
So the behaviour you saw was actually 'by design': rosmake is meant to skip building image_view and consequently your changes do not take effect.
AFAIK the most accepted setup for working on ROS (core) packages is to create a ROS 'workspace' and manage it using rosws and .rosinstall files. See this page for a tutorial.
Originally posted by ipso with karma: 1416 on 2012-07-01
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 10009,
"tags": "kinect, turtlebot, rosmake"
} |
Looking for sample code except for PCL/Tutorial | Question:
I'm looking for some sample code, other than the code on the page ( http://wiki.ros.org/pcl/Tutorials ), to get more knowledge about the relationship between ROS and PCL. Could anybody point me to a web page that has sample code? My goal is to build a people detection system.
Originally posted by Ken_in_JAPAN on ROS Answers with karma: 894 on 2014-04-23
Post score: 0
Answer:
For example,
https://github.com/davetcoleman/point_cloud_lab
https://github.com/ankush-me/sandbox444/blob/master/display_pointCloud/src/display.cpp
I think that the code display.cpp in link 2 is interesting, because this program can show point clouds obtained via fromROSMsg in a visualizer.
Originally posted by Ken_in_JAPAN with karma: 894 on 2014-04-28
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 17759,
"tags": "pcl, ros-hydro"
} |
Remove the last occurrence of an item in a list | Question: The objective is given a list of items, remove the last occurrence of a specified item from the list using only user-defined functions except for very basic built-in ones like car, cdr, =, - etc. For example, if we've been given the list (A B A C), using our procedure to remove the last occurrence of A from the list should produce a list (A B C). Hope I'm being clear.
The program itself consists of five procedures:
Four helper procedures:
remove-1st
remove-all-occurrences
member?
size
And the one this fuss is all about:
remove-last-occurrence
By the way, you'll have noticed that all procedures are implemented using tail recursion.
If you're using MIT/GNU Scheme, you can use the built-in procedure "load" like this (load "demo.scm") to load the file into your interactive environment. Here's the code itself:
;;
;; Procedure: remove-1st
;; ---------------------
;; Takes in an item and a list and removes the first occurrence of the item in
;; the list.
;;
;; Usage example:
;; (remove-1st 'A '(B A C A)) => (B C A)
;;
(define remove-1st
(lambda (x ls)
(if (null? ls) ; If an empty list
'() ; Return an empty list
(if (equal? x (car ls)) ; Otherwise, if first item in list
(cdr ls) ; Return rest of list, done
(cons (car ls) (remove-1st x (cdr ls)))))))
; Otherwise, cons first item and
; rest of list with our item removed
;;
;; Procedure: remove-all-occurrences
;; ---------------------------------
;; Takes in an item and a list and removes all top-level occurrences of the
;; item in the list.
;;
;; Usage example:
;; (remove-all-occurrences 'A '(A B A C)) => (B C)
;;
(define remove-all-occurrences
(lambda (x ls)
(if (equal? (remove-1st x ls) ls) ; If list with item removed equals
ls ; itself, return list intact
(remove-all-occurrences x (remove-1st x ls)))))
; Otherwise, remove all occurrences
; of item from list with item
; removed as first occurrence
;;
;; Procedure: member?
;; ------------------
;; This predicate procedure checks whether an item is present in a list. If
;; there is at least one occurrence of the item in the list, a value of true is
;; returned. Otherwise, the procedure returns false.
;;
;; Usage examples:
;; (member? 'A '(A B C)) => #t
;; (member? 'D '(A B C)) => #f
;;
(define member?
(lambda (x ls)
(if (null? ls) ; If an empty list
#f ; Return false
(or (equal? x (car ls)) ; If x is first item in list, done
(member? x (cdr ls)))))) ; Otherwise, check the rest of items
;;
;; Procedure: size
;; ---------------
;; Takes in a list as argument and returns the number of elements it contains.
;;
;; Usage examples:
;; (size '(A B C D)) => 4
;; (size '()) => 0
;;
(define size
(lambda (ls)
(if (null? ls) ; If it's an empty list
0 ; Return zero as its size
(+ 1 (size (cdr ls)))))) ; Otherwise, add one to the size of the
; list minus the first element
;;
;; Procedure: remove-last-occurrence
;; ---------------------------------
;; This procedure removes only the last occurrence of an item in a list.
;;
;; Usage example:
;; (remove-last-occurrence 'A '(B A B A C)) => (B A B C)
;;
;; How it works:
;; First of all, if an empty list has been sent to the procedure, we likewise
;; are going to return an empty list too. Otherwise, we check whether the
;; specified item is in the list and if that comes out as true we're going to
;; check if it's the item's last occurrence in the list by removing all
;; occurrences of it from the list and making a comparison between the number
;; of elements when there are zero occurrences of the item in the list plus one
;; and when they're all there. The sizes being equal means that there is one
;; occurrence of the item in the list. So we now can remove it and return the
;; list. Otherwise, we're going to cons the list's first item and the list
;; produced as the result of removing the last occurrence of the item from the
;; list minus the first element.
;;
(define remove-last-occurrence
(lambda (x ls)
(if (null? ls)
'()
(if (and (member? x ls)
(= (+ (size (remove-all-occurrences x ls)) 1)
(size ls)))
(remove-1st x ls)
(cons (car ls) (remove-last-occurrence x (cdr ls)))))))
;; end of file
Answer: Your solution is very inefficient. The classical, efficient solution is simply to reverse the list, remove the first occurrence of the item, then reverse the list again. So, using only your remove-1st function and the primitive reverse function, this is a possible definition:
;;
;; Procedure: remove-last-occurrence
;; ---------------------------------
;; This procedure removes only the last occurrence of an item in a list.
;;
;; Usage example:
;; (remove-last-occurrence 'A '(B A B A C)) => (B A B C)
;;
;; How it works:
;; The list is first reversed, so the last occurrence is now the first one;
;; remove that occurrence and then reverse again the list
(define remove-last-occurrence
(lambda (x ls)
    (reverse (remove-1st x (reverse ls)))))
If you cannot use the predefined function, then you could define reverse as:
(define reverse
(lambda (ls)
(if (null? ls)
'()
(append (reverse (cdr ls)) (cons (car ls) '())))))
or, with a tail-recursive definition, like in:
(define reverse
(lambda (ls)
(define reverse1
(lambda (ls acc)
(if (null? ls)
acc
(reverse1 (cdr ls) (cons (car ls) acc)))))
(reverse1 ls '())))
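For comparison, here is the same reverse / remove-first / reverse idea sketched in Python (illustrative only, not part of the Scheme answer):

```python
# Illustrative Python rendering of the same idea (not part of the Scheme
# answer): reverse, drop the first occurrence, reverse back -- O(n) overall.
def remove_last_occurrence(x, ls):
    rev = ls[::-1]
    if x in rev:
        rev.remove(x)   # removes the first occurrence in the reversed copy
    return rev[::-1]
```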
The first reason for the inefficiency is the definition of remove-all-occurrences, since, instead of visiting the list only once, with a function like this one:
(define remove-all-occurrences
(lambda (x ls)
(if (null? ls)
'()
(if (equal? x (car ls))
(remove-all-occurrences x (cdr ls))
(cons (car ls) (remove-all-occurrences x (cdr ls)))))))
you visit it multiple times, calling remove-1st twice for each element that must be removed, producing a non-linear algorithm.
The second reason for the inefficiency is the definition of remove-last-occurrence, which again requires multiple recursive passes over the list through member? and size. | {
"domain": "codereview.stackexchange",
"id": 22494,
"tags": "scheme"
} |
If $L_1,L_2\in \mathrm{NP}$ and $w\in L_1$ or $w\in L_2$ then can $L_1\cup L_2=L$'s verifier use the same certificate $c$ for $w$? | Question: I read the following solution for Showing that $\mathrm{NP}$ is closed under union
and they used the same $c$ for both the verifiers $V_1$ and $V_2$. Why is it correct?
Let $L_1$ and $L_2$ be languages in $\mathrm{NP}$. Also, for $i = 1, 2$ let $V_i(x, c)$ be
an algorithm that, for a string $x$ and a possible certificate $c$,
verifies whether $c$ is actually a certificate for $x \in L_i$. Thus, $V_i(x,c) = 1$ if certificate $c$ verifies $x \in L_i$, and $V_i(x, c) = 0$ otherwise.
Since both $L_1$ and $L_2$ are both in $\mathrm{NP}$, we know that $V_i(x, c)$ terminates
in polynomial time $O(|x|^d)$ for some constant $d$. To show that $L_3 = L_1 \cup L_2$ is also in $NP$, we will construct a polynomial-time verifier $V_3$ for $L_3$. Since a certificate $c$ for $L_3$ will have the property that
either $V_1(x, c) = 1$ or $V_2(x, c) = 1$, we can easily construct a
verifier $V_3(x, c) = V_1(x, c) \lor V_2(x, c)$. Clearly then $x \in L_3$ if and
only if there is a certificate $c$ such that $V_3(x, c) = 1$. Notice also
that the new verifier $V_3$ will run in time $O(2(|x|^d))$, which is
polynomial. Therefore, the union $L_3$ of two languages in $\mathrm{NP}$ is also in $\mathrm{NP}$, so $\mathrm{NP}$ is closed under union.
taken from here.
TMs $M_1, M_2$ that accept $w$ may accept it for different reasons, so we can't claim that $c_1 = c_2$.
Questions:
Is it legal to use the same certificate for both $V_1,V_2$ in the answer? Why?
Is a verifier, by definition, a deterministic TM?
In the above answer, does $V_3$ run $x,c$ on both $V_1$ and $V_2$ in the worst case?
Answer: Assume that $c_1$ and $c_2$ are polynomial-size certificates for $x \in L_1$ and $x \in L_2$. Then define $c=c_1\#c_2$, a new certificate formed by concatenating $c_1$ and $c_2$, which is clearly of polynomial size. Then $V_1$ and $V_2$ can still use $c$ as a certificate to verify $x\in L_1$ and $x\in L_2$: $V_1$ will use the left part of $c$ and $V_2$ the right part.
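A sketch of this construction in Python, using the $c_1\#c_2$ certificate convention. The two example languages and their verifiers are hypothetical toys, just to exercise the combinator:

```python
def make_union_verifier(V1, V2):
    """Verifier for L1 ∪ L2, given verifiers V1 and V2.

    One concrete certificate convention: c = c1 + "#" + c2,
    so each verifier reads its own half of the certificate.
    """
    def V3(x, c):
        c1, _, c2 = c.partition("#")
        return V1(x, c1) or V2(x, c2)
    return V3

# Toy language L1 = strings containing 'a'; certificate = an index of an 'a'
def V1(x, c):
    return c.isdigit() and int(c) < len(x) and x[int(c)] == "a"

# Toy language L2 = even-length strings; certificate = half the length
def V2(x, c):
    return c.isdigit() and 2 * int(c) == len(x)

V3 = make_union_verifier(V1, V2)
print(V3("bab", "1#"), V3("bb", "#1"), V3("b", "0#0"))  # True True False
```

Each call to V3 costs at most the cost of running both underlying verifiers, which keeps it polynomial.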
Is a verifier, by definition, a deterministic TM?
Yes, the verifier is a deterministic TM, by definition.
In the above answer, does $V_3$ run $x,c$ on both $V_1$ and $V_2$ in the worst case?
You can treat $V_3$ as a TM which invokes (as subroutines) $V_1(x,c)$ and $V_2(x,c)$, so if $V_1$'s worst case is $O(f)$ and $V_2$'s worst case is $O(g)$ then $V_3$'s worst case is $O(f+g)$, which is still polynomial. | {
"domain": "cs.stackexchange",
"id": 9699,
"tags": "complexity-theory"
} |
golang rabbitmq message consumer | Question: I need to process rabbitmq messages with golang with worker style,
is this correct way to process rabbitmq messages with golang?
package main
import (
"log"
"github.com/streadway/amqp"
)
func main() {
conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
if err != nil {
panic(err)
}
defer conn.Close()
for _, q := range []string{"q1", "q2", "q3"} {
go RunFetcher(q, conn, 3, 50)
}
select {}
}
func RunFetcher(queueName string, conn *amqp.Connection, workers, qos int) {
ch, err := conn.Channel()
if err != nil {
log.Println(err.Error())
return
}
ch.Qos(qos, 0, false)
defer ch.Close()
msgs, err := ch.Consume(queueName, "", false, false, false, false, nil)
if err != nil {
log.Println(err.Error())
return
}
for index := 0; index < workers; index++ {
go func() {
for d := range msgs {
// process message
d.Ack(false)
}
}()
}
select {}
}
Answer: There's not really one "right" way of doing things, with all of the others being wrong. However, there are a number of things that are considered good practice, and WRT those, your code could do with a bit of TLC.
To that end, I'll just run through your code line by line, leaving comments on both style, and things I deem to be missing, hopefully ending up with something that is more "idiomatic" go.
package main
import (
"log"
"github.com/streadway/amqp"
)
Yup, I'm going to comment on your imports. The standard way to organise imports in Go is to have multiple groups (separated by a blank line). The order is: first the standard-library packages, then packages local to the project, then a third group with your external packages. In this case, the amqp package most certainly is an external one, so I'd write:
import (
"context"
"log"
"github.com/streadway/amqp"
)
As you may have noticed, I've also added the context package. It's a good idea to look into this one, especially once you start playing around with multiple goroutines. More on that later.
func main() {
conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
if err != nil {
panic(err)
}
defer conn.Close()
Ok, something you've probably heard before is "Don't panic". Maybe you know the phrase from the Hitchhiker's Guide to the Galaxy, but it's also one of the Go proverbs. Just do a log.Fatalf("failed to connect to AMQP: %+v", err) or something. log.Fatal basically logs the message and then calls os.Exit(1).
for _, q := range []string{"q1", "q2", "q3"} {
go RunFetcher(q, conn, 3, 50)
}
select {}
}
The remainder of your main function just looks really weird. You seem to know what queues you want to consume from, but you've not given yourself any way to control the execution of the routines. Something you really would want is a way to stop the routines dead in their tracks. For example: the worker receives a kill signal, what you really want to do is cleanly terminate the workers. That's why I added the context package, because this allows you to do just that:
ctx, cfunc := context.WithCancel(context.Background())
defer cfunc() // cancel execution context when the main function returns
for _, q := range queueNames {
go runFetcher(ctx, q, 3, 50) // no need to export this func
}
Now let's add the bit that listens for a kill/interrupt signal:
import (
"os"
"os/signal"
)
func main() {
// add this:
sch := make(chan os.Signal, 1)
// listen for interrupt and kill signals
signal.Notify(sch, os.Interrupt, os.Kill)
// after you've done everything you needed to do:
<-sch // this is blocking, so after the read, the context will be automatically cancelled thanks to the defer cfunc() we added
}
Now your worker can run indefinitely, but cleans up after itself once it receives an interrupt or kill signal. Also: we don't need the empty select {}, which looks really nasty to my eye...
func RunFetcher(queueName string, conn *amqp.Connection, workers, qos int) {
ch, err := conn.Channel()
if err != nil {
log.Println(err.Error())
return
}
ch.Qos(qos, 0, false)
defer ch.Close()
msgs, err := ch.Consume(queueName, "", false, false, false, false, nil)
if err != nil {
log.Println(err.Error())
return
}
for index := 0; index < workers; index++ {
go func() {
for d := range msgs {
// process message
d.Ack(false)
}
}()
}
select {}
}
Ok, this is a big one to unpack. First off, let's not export this function (there's no reason to), and add the context argument as first argument to the function. Something I can't quite wrap my head around is why this function is kicked off in a routine, and doesn't even have a method of communicating any of the errors that it may encounter (for example in conn.Channel()). An error here causes an early return. To me, that error looks like something that needs to be handled. What's more, within this routine, you're actually spawning 3 subroutines that actually read from the same channel. That's fine, but it's actually this part you want to have running asynchronously. Why not change the runFetcher function to a normal function that returns an error, so you can handle it in main, and have it spawn as many routines as needed? You could also simply call conn.Channel() in the main function, and pass the Channel to the fetcher.
I'd also consider adding a loop so your fetchers can run indefinitely:
func runFetcher(ctx context.Context, queueName string, conn *amqp.Connection, workers, qos int) error {
ch, err := conn.Channel()
if err != nil {
return err
}
// the usual stuff, but:
msgs, err := ch.Consume(queueName, uuid, false, false, false, false, nil)
I would use a UUID package to create a unique consumer name, so we can actually stop our consumers correctly:
// create your workers like so:
for i := 0; i < workers; i++ {
go func() {
select {
case <-ctx.Done():
return // the context was cancelled, stop working
case msg := <- msgs:
msg.Ack(false) // acknowledge (or not)
}
}()
}
// now let's add this to stop the consumer
go func() {
<-ctx.Done()
ch.Cancel(uuid, false) // stop consumer quickly
}()
}
Now we're able to stop the consumers from actually getting messages from the queue, we can stop all our routines cleanly, and errors when establishing a consumer channel are actually propagated to the main function, so we can decide whether or not we want to proceed consuming messages from other queues or not.
These are some initial thoughts after quickly reading through the code. I might revisit this later on with more comments. | {
"domain": "codereview.stackexchange",
"id": 31365,
"tags": "go, rabbitmq"
} |
Calculating refraction between numerous media | Question: Last year, our teacher gave us an exam to check our knowledge of light reflection and refraction. I don't perfectly remember it, but I know that one of the exercises included a ray with its angle of incidence and, let's say, 7 media with their respective indices of refraction. We had to find the last angle of refraction. Even though it's fairly easy to do, it took my classmates a lot of time. Meanwhile I didn't even have time to reach that question: we were very limited on time.
I was thinking if there was another way to find it, a quicker one. Then, while looking at a figure of refraction in my physics book, I came up with an idea: what if we calculate it using only $n_1$, $n_7$, $\Theta_1$ and $\Theta_7$? I'll explain.
Let's say the first medium is air, and I'm calling it $n_1$. The seventh medium is glass, and I'm calling it $n_7$. There are 5 other media between air and glass. The angle of incidence will be $\Theta_1$ and the last angle of refraction, the one inside $n_7$, will be $\Theta_7$. Is it possible to calculate $\Theta_7$ as
$$\Theta_7 = \frac{n_7 \cdot \Theta_1}{n_1},$$
or is it required to calculate each angle of refraction one by one until we reach $\Theta_7$?
Answer: Yes, you are right, except that it is not the division of $\theta$, but of $\sin\theta$. However, the real reason what you said is true can be seen with a bit of observation.
$$n_i\sin(\theta_i) = constant$$
Because:
$$\frac{n_2}{n_1} = \frac{\sin\theta_1}{\sin\theta_2}$$
And for the ray going to the third medium from the second,
$$\frac{n_3}{n_2}=\frac{\sin\theta_2}{\sin\theta_3}$$
What do you see in common (try rearranging them)?
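The pattern can also be checked numerically — a quick sketch with made-up indices for the 7 media, comparing the interface-by-interface computation against using only the first and last media:

```python
import math

# Hypothetical refractive indices for 7 media (n[0] is air)
n = [1.00, 1.33, 1.50, 1.40, 1.60, 1.45, 1.52]
theta1 = math.radians(30)  # angle of incidence

# Interface-by-interface Snell's law: n_in * sin(theta_in) = n_out * sin(theta_out)
theta = theta1
for n_in, n_out in zip(n, n[1:]):
    theta = math.asin(n_in * math.sin(theta) / n_out)

# Shortcut using only the endpoints: n1 * sin(theta1) = n7 * sin(theta7)
theta7 = math.asin(n[0] * math.sin(theta1) / n[-1])

print(math.degrees(theta), math.degrees(theta7))  # the same angle either way
```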
$$n_1\sin\theta_1=n_2\sin\theta_2=n_3\sin\theta_3 ...$$ | {
"domain": "physics.stackexchange",
"id": 57712,
"tags": "refraction, geometric-optics"
} |
Deriving the Canonical Energy Momentum Tensor | Question: In Mathematics for Physics by Stone and Goldbart the canonical energy-momentum tensor is derived from the action principle as follows.
To the action of the form
$$ S=\int \mathcal{L}(\varphi,\varphi_\mu) \, \mathrm{d}^{d+1}x ,$$
where $\mathcal{L}$ is the Lagrangian density and $\varphi_\mu = \frac{\partial \varphi}{\partial x^\mu}$, we make the variation of the form
$$ \varphi(x) \rightarrow \varphi(x^\mu + \varepsilon^\mu(x)) = \varphi (x^\mu) +
\varepsilon^\mu(x)\partial_\mu\varphi+O(|\varepsilon|^2) ,$$ where $x=(x^0,\dots,x^d)$.
Then the resulting variation is
$$\delta S= \int \left( \frac{\partial\mathcal{L}}{\partial \varphi} \varepsilon^\mu \partial_\mu \varphi +\frac{\partial \mathcal{L}}{\partial \varphi_\nu} \partial_\nu(\varepsilon^\mu \partial_\mu \varphi) \right)\mathrm{d}^{d+1}x$$
$$ = \int \varepsilon^\mu \frac{\partial}{\partial x^\nu}
\left( \mathcal{L} \delta^\nu_\mu -\frac{\partial \mathcal{L}}{\partial \varphi_\nu} \partial_\mu \varphi \right)\, \mathrm{d}^{d+1}x. $$
I understand that in going from line 1 to line 2 some sort of integration by parts is done. However, when I try to do that I run into trouble and don't get the second line. Can someone do it explicitly so that I can see where I made the mistake?
EDIT
My steps are
$$ \delta S=\int_\Omega \frac{\delta S}{\delta \varphi(x)} \mathrm{d}\Omega, $$ where $\Omega$ is the region of integration.
$$=\int_\Omega \delta\varphi\left( \frac{\partial \mathcal{L}}{\partial \varphi} - \partial_\nu \frac{\partial\mathcal{L}}{\partial \varphi_\nu} \right) \mathrm{d}\Omega$$
$$=\int_\Omega \varepsilon^\mu \frac{\partial \varphi}{\partial x^\mu} \left( \frac{\partial \mathcal{L}}{\partial \varphi} - \partial_\nu \frac{\partial\mathcal{L}}{\partial \varphi_\nu} \right) \mathrm{d}\Omega \qquad \because \delta\varphi= \varepsilon^\mu \partial_\mu \varphi $$
$$=\int_\Omega \varepsilon^\mu \left( \frac{\partial \mathcal{L}}{\partial x^\mu} - \partial_\nu \frac{\partial\mathcal{L}}{\partial \varphi_\nu}\cdot \frac{\partial \varphi}{\partial x^\mu} \right) \mathrm{d}\Omega$$
Now I can take out $\partial_\nu$ however $\partial_\nu$ acts only on $\frac{\partial\mathcal{L}}{\partial \varphi_\nu}$ and not on $\frac{\partial \varphi}{\partial x^\mu}$. I thought $\partial_\nu\frac{\partial \varphi}{\partial x^\mu}$ could be zero so that I can take the partial derivative out of the bracket but I don't see why this should be true. If I were to take it out I arrive at the equation in the second line above.
Answer: Note that the chain rule in this case, since $\mathcal{L} = \mathcal{L}(\varphi, \partial_{\mu} \varphi)$ reads
$$\partial_{\nu}\mathcal{L} = \frac{\partial \mathcal{L}}{\partial \varphi} \partial_{\nu} \varphi + \frac{\partial{\mathcal{L}}}{\partial(\partial_{\mu} \varphi)}\partial_{\nu} \partial_{\mu} \varphi.$$
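Spelling the step out explicitly (a sketch: the chain rule above is used to eliminate $\frac{\partial\mathcal{L}}{\partial \varphi}\,\partial_\mu\varphi$, and the boundary term from the integration by parts is dropped):

```latex
\begin{aligned}
\delta S
&= \int \Big( \frac{\partial\mathcal{L}}{\partial\varphi}\,\varepsilon^\mu \partial_\mu\varphi
   + \frac{\partial\mathcal{L}}{\partial\varphi_\nu}\,\partial_\nu\big(\varepsilon^\mu \partial_\mu\varphi\big) \Big)\,\mathrm{d}^{d+1}x \\
&= \int \Big( \varepsilon^\mu\Big[\partial_\mu\mathcal{L}
   - \frac{\partial\mathcal{L}}{\partial\varphi_\nu}\,\partial_\mu\varphi_\nu\Big]
   + \frac{\partial\mathcal{L}}{\partial\varphi_\nu}\big[(\partial_\nu\varepsilon^\mu)\,\partial_\mu\varphi
   + \varepsilon^\mu\,\partial_\nu\partial_\mu\varphi\big] \Big)\,\mathrm{d}^{d+1}x \\
&= \int \Big( \varepsilon^\mu\,\partial_\mu\mathcal{L}
   + \frac{\partial\mathcal{L}}{\partial\varphi_\nu}\,(\partial_\nu\varepsilon^\mu)\,\partial_\mu\varphi \Big)\,\mathrm{d}^{d+1}x
   \qquad (\partial_\mu\varphi_\nu = \partial_\nu\partial_\mu\varphi\ \text{cancels}) \\
&= \int \varepsilon^\mu\,\partial_\nu\Big( \mathcal{L}\,\delta^\nu_\mu
   - \frac{\partial\mathcal{L}}{\partial\varphi_\nu}\,\partial_\mu\varphi \Big)\,\mathrm{d}^{d+1}x .
\end{aligned}
```

In the last step, $\varepsilon^\mu\,\partial_\mu\mathcal{L} = \varepsilon^\mu\,\partial_\nu(\mathcal{L}\,\delta^\nu_\mu)$, and the remaining term is integrated by parts to move the derivative off $\varepsilon^\mu$.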
Forgetting the second term is the mistake I think you are making. | {
"domain": "physics.stackexchange",
"id": 18736,
"tags": "homework-and-exercises, lagrangian-formalism, variational-principle, noethers-theorem, stress-energy-momentum-tensor"
} |
Equation of Trajectory | Question: I was reading about Lorentz force law and motion of a charged particle under Electric field or magnetic field and I understood how they move in it.
But as I was moving further I came across the principle of a "velocity filter",
where they kept $\vec B,\vec v,\vec E$ mutually perpendicular, and then set $\vec F_e$ equal and opposite to $\vec F_m$. So that,
$qE=qvB$ $\implies$ $v=\dfrac{E}{B}$
And that's good too, not a problem.
But then something popped up in my mind, what would happen and how would a particle move if,
$|F_e|\neq|F_m|$, and maybe even if they are in the same direction.
So, I ran a thought experiment and came to a conclusion, or an image, of the path of the particle being something like this, well maybe? (Sorry for the bad drawing, it took time to draw anyhow.)
Now, I had the picture of the particle moving but I wanted to find an exact equation of trajectory for it.
Numerical part(my attempt):
I knew the magnetic field would try to put the charged particle in a circle and the electric field would try to distort the path toward the right as shown in the figure. Therefore, I resorted to using vector algebra to find the individual effect of each field and add them.
1) Magnetic field
For circle, the equation was simple,
$\vec r_m=r \cos(t)\hat i+r\sin(t)\hat j$ ;
where $r$ denotes the radius of the circle and $t$ is a parameter.
(Note that I assume the path is in the x,y plane, as indicated by the unit vectors, which are not shown in the figure.)
2)Electric field
It was hard to figure out, but I came up with this equation.
$\vec r_e=ct\hat i$,
where c is any constant related to motion of particle in electric field.
(And I really have no intuitive sense as to why it is not $c$ or maybe $ct^2$, it somehow agreed with the curve i thought and somehow it seemed the correct one.)
Hence
3)Net result
$\vec r=(r\cos(t)+ct)\hat i+r\sin(t)\hat j$
Converting into Cartesian form we get,
$x=r\cos(t)+ct$ and $y=r\sin t$
After shifting, tweaking the curve and graphing I got this,
$x=r-r\cos(t)+ct$ and $y=r\sin(t)$
Here's a graph of what it looks like, very similar to that of figure drawn.
https://www.desmos.com/calculator/3djbngtups
Questions based on above observation:
Is the equation i derived correct and what would be a proper derivation?
I tried to determine $r$ by assuming $F=Bqv$ and the initial condition that the magnetic field $\vec B$ alone should provide the necessary centripetal force to put the particle in a circle of radius $r$, giving
$Bqv=\dfrac{mv^2}{r}$ $\implies$ $r=\dfrac{mv}{Bq}$
However, for $c$, I have no idea. And I am also not sure whether $r$ is correct.
What would happen if $\vec B ,\vec v,\vec E$ are not
even perpendicular? E.g., $\vec v$ inclined at some angle to $\vec B$.
Answer: I've derived the equation you're looking for. You can skip down to the SUMMARY section if you don't want to see the math.
You need to start with the equation of motion:
$\vec{F} = m \vec{a} = m \frac{d\vec{v}}{dt}~~~$
(using the fact that the acceleration is the time derivative of the velocity)
Where the force $\vec{F}$ on the particle is given by the Lorentz force:
$\vec{F} = q(\vec{E} + \vec{v} \times \vec{B})$
Using the coordinate system in your picture,
$\vec{E} = E~ \hat{y}$
$\vec{B} = B~ \hat{x}$
$\vec{v}(t) = v_x ~ \hat{x} + v_y ~ \hat{y} + v_z ~ \hat{z}$
Putting that all together, we have:
$m \frac{d}{dt}(v_x ~ \hat{x} + v_y ~ \hat{y} + v_z ~ \hat{z}) = q[E~ \hat{y} + (v_x ~ \hat{x} + v_y ~ \hat{y} + v_z ~ \hat{z}) \times (B~ \hat{x})]$
Expanding and simplifying...
$m \frac{d v_x}{dt} ~ \hat{x} + m \frac{d v_y}{dt} ~ \hat{y} + m \frac{d v_z}{dt} ~ \hat{z} = q E~ \hat{y} + q B (v_z ~ \hat{y} - v_y \hat{z})$
We can separate this equation into three separate equations, one for each component of $\vec{v}$:
$\frac{d v_x}{dt} = 0$
$\frac{d v_y}{dt} = \frac{q}{m} E + \frac{q}{m} B v_z$
$\frac{d v_z}{dt} = -\frac{q}{m} B v_y$
The x-component equation above tells us that in this situation the x-component of the velocity (the one parallel to the $\vec{B}$) is constant:
$v_x(t) = v_{x0}$
And therefore the x-coordinate is a linear function of time. In your drawing, it looks like the x velocity is zero, so the x-coordinate is a constant, which we will take to be 0 for simplicity:
$x(t) = 0$
The second and third equations are unfortunately coupled, but they can be easily solved by taking the derivative of the entire y-component equation, and substituting in for $\frac{d v_z}{dt}$ using the z-component equation:
$\frac{d^2 v_y}{dt^2} = \frac{q}{m} B \frac{d v_z}{dt}$
$\frac{d^2 v_y}{dt^2} = \frac{q}{m} B (-\frac{q}{m} B v_y)$
$\frac{d^2 v_y}{dt^2} = - \frac{q^2}{m^2} B^2 v_y$
The solution to this 2nd order differential equation is
$v_y = v_0 \sin(\frac{q}{m} B t)$
Using this and the above equations we can find $v_z$:
$v_z = v_0 \cos(\frac{q}{m} B t) - \frac{E}{B}$
Integrating these to get the components of the particle's position:
$y(t) = - \frac{m}{q B} v_0 \cos(\frac{q}{m} B t)$
$z(t) = \frac{m}{q B} v_0 \sin(\frac{q}{m} B t) - \frac{E}{B} t$
SUMMARY:
A particle in perpendicular magnetic and electric fields with uniform strength B and E respectively moving with an initial velocity of $\vec{v} = v_0 ~ \hat{z}$ has the following position over time:
$x(t) = 0$
$y(t) = - \frac{m}{q B} v_0 \cos(\frac{q}{m} B t)$
$z(t) = \frac{m}{q B} v_0 \sin(\frac{q}{m} B t) - \frac{E}{B} t$
So you were correct about the value of "r" - it's $\frac{m}{q B} v_0$, and the value of "c" that you were looking for is $\frac{E}{B}$.
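The closed-form velocities can be checked against the coupled equations of motion numerically. A sketch with arbitrary field values; note that the initial conditions implied by the formulas are $v_y(0)=0$ and $v_z(0)=v_0 - E/B$:

```python
import math

# Arbitrary (hypothetical) values for the constants
q, m, B, E, v0 = 1.0, 1.0, 2.0, 0.5, 1.0
w = q * B / m  # cyclotron frequency

# The coupled equations of motion: dvy/dt and dvz/dt
def f(vy, vz):
    return (q / m) * E + (q / m) * B * vz, -(q / m) * B * vy

# RK4 integration from the initial conditions implied by the closed form
vy, vz = 0.0, v0 - E / B
steps, T = 10000, 1.0
dt = T / steps
for _ in range(steps):
    k1y, k1z = f(vy, vz)
    k2y, k2z = f(vy + dt / 2 * k1y, vz + dt / 2 * k1z)
    k3y, k3z = f(vy + dt / 2 * k2y, vz + dt / 2 * k2z)
    k4y, k4z = f(vy + dt * k3y, vz + dt * k3z)
    vy += dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
    vz += dt / 6 * (k1z + 2 * k2z + 2 * k3z + k4z)

# Closed-form velocities from the answer, evaluated at t = T
vy_exact = v0 * math.sin(w * T)
vz_exact = v0 * math.cos(w * T) - E / B
print(abs(vy - vy_exact), abs(vz - vz_exact))  # both tiny
```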
HOWEVER: Interestingly, the "drift" velocity which causes the sort of spiralling shape is NOT along the direction of the electric field, but instead counterintuitively perpendicular to both the electric and magnetic fields! In plasma physics, this is known as the "$E \times B$ drift" | {
"domain": "physics.stackexchange",
"id": 18558,
"tags": "homework-and-exercises, electromagnetism, newtonian-mechanics, magnetic-fields, electric-fields"
} |
Producing full list of Fibonacci numbers which fit in an Int | Question: I am progressing through the problems listed at projecteuler.com as I learn Scala (my blog where I am covering this experience). I have written a function to produce all the Fibonacci numbers possible in an Int (it's only 47). However, the resulting function feels imperative (not functional).
val fibsAll = {
//generate all 47 for an Int
var fibs = 0 :: List()
var current = fibs.head
var next = 1
var continue = true
while (continue) {
current = fibs.head
fibs = next :: fibs
continue = ((0.0 + next + current) <= Int.MaxValue)
if (continue)
next = next + current
}
fibs.reverse
}
I am looking for feedback on this code:
To what degree does the presence of even a single var (there are four here) indicate an erred approach from a functional standpoint?
Given I want to return a List[Int], what better way is there to do this recursively as opposed to my current (odd) "while loop" approach?
I am finding the transition from imperative to functional thinking quite challenging.
Answer: object Euler002 extends App{
// Infinite List (Stream) of Fibonacci numbers
def fib(a: Int = 0, b: Int = 1): Stream[Int] = Stream.cons(a, fib(b, a+b))
// Take how many numbers you want into a List
val fibsAll = fib() takeWhile {_>=0} toList
fibsAll reverse
}
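The same lazy idea sketched in Python for comparison — a generator plays the role of the Stream, and 2**31 - 1 stands in for Scala's Int.MaxValue:

```python
def fib():
    """Lazily yield Fibonacci numbers, mirroring the Scala Stream."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

INT_MAX = 2**31 - 1  # Scala's Int.MaxValue

fibs_all = []
for n in fib():
    if n > INT_MAX:
        break
    fibs_all.append(n)

print(len(fibs_all))  # 47
```

As in the Scala version, laziness lets the consumer decide when to stop.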
Take a look at this. | {
"domain": "codereview.stackexchange",
"id": 508,
"tags": "scala, fibonacci-sequence"
} |
How can I use contextual embeddings with BERT for sentiment analysis/classification | Question: I have a BERT model which I want to use for sentiment analysis/classification. E.g. I have some tweets that need to get a POSITIVE, NEGATIVE or NEUTRAL label. I can't understand how, practically, contextual embeddings would help build a better model.
I process the tweets and sentences to make them ready to be fed into the tokenizer. After I get every embedding as well as its mask, and feed it into a BERT model. From that BERT model I get some hidden states in return. As I understand it, now I have to also use a linear layer to take that 768 output from BERT and output a possibility for the 3 labels.
How can contextual embeddings help me here? I get that we can use a combination of those hidden states/layers that we get for every sentence by the BERT model, and that helps us create better embeddings, which technically mean better models. But, after I follow some approach, e.g. summing the last four hidden states, or taking a mean of every token to create a token for each word, how do I proceed now? Do I need another model to take that embedding and output the labels that way (e.g. a linear layer but after the contextual embeddings are created)? Am I thinking of this the right way? Any input would be appreciated.
Answer: For this kind of setup, you should use the output at the first position and train a linear classifier over your 3 labels.
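Concretely, the classification head is just a linear map from the 768-dimensional [CLS] vector to 3 logits, followed by a softmax. A minimal numpy sketch with random stand-ins — in practice cls_vec would come from BERT's last layer and W, b would be learned by training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: cls_vec imitates BERT's output at the [CLS] position,
# W and b imitate the trained weights of the linear classifier.
cls_vec = rng.standard_normal(768)
W = rng.standard_normal((3, 768)) * 0.02
b = np.zeros(3)

logits = W @ cls_vec + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()  # softmax over the 3 labels

labels = ["NEGATIVE", "NEUTRAL", "POSITIVE"]
prediction = labels[int(np.argmax(probs))]
```

Training fits W and b (and usually fine-tunes BERT itself) under a categorical cross-entropy loss on the labeled tweets.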
BERT was trained with inputs that were prepended a special token [CLS], and the output at that position (i.e. the first position) was used for a classification task. It is understood that, at that position, BERT outputs a representation for the whole input sentence. Therefore, this representation (i.e. the vector with dimensionality 768 outputted by the last layer in the first position) is what you should use as input for your sentiment classifier.
You should train a new model that takes the 768-dimensionality vector as input and generates the label. For this, a linear model with a categorical cross-entropy loss would be the standard. | {
"domain": "datascience.stackexchange",
"id": 12198,
"tags": "nlp, word-embeddings, transformer, bert, embeddings"
} |
Electrophoretic mobility of isozymes | Question: If isozymes are seperated using electrophoresis, which of the following will be the principle of separation ?
A.charge density
B.molecular weight
C.polarity of molecule
I think it should be B.
Answer: (B) is incorrect. I think the answer is (C) but to my mind those other two options are badly worded.
Here is an explanation.
The classic example of isozymes is the case of lactate dehydrogenase.
As you will see if you visit the Wikipedia page the enzyme is a tetramer, and there are two subunit types, the muscle or M type and the heart or H type. Consequently there are 5 isozymes, ranging from M4 to H4 with all possible combinations in between. Different tissues express these two subunit types to different degrees and so are characterised by a different pattern of isozymes.
The M type is encoded by the LDHA gene and the H type by the LDHB gene. I obtained the sequences of the two proteins and analysed them:
LDHA: 332 amino acids; 36.7 kD; pI=8.45
LDHB: 334 amino acids; 36.6 kD; pI=5.93
You can see from this that the ability to separate these isozymes depends upon charge and not molecular weight. That big difference in pI (isoelectric point) means that at neutral pH the LDHA protein will carry a net positive charge while the LDHB protein will be negatively charged. Consequently each tetramer will have a unique overall charge. You can find some details and an image of an electrophoresis result here.
In general, protein electrophoresis is much more sensitive to charge differences than it is to molecular mass except in the special case of SDS-PAGE when the large amount of negatively-charged detergent that binds to the proteins masks their intrinsic charge differences.
Another classic example of this is the ability to separate haemoglobin S in sickle cell patients from haemoglobin A in normal blood. In this case the only difference between the two molecules is a single Glu>Val substitution in two of the monomers in each tetramer. | {
"domain": "biology.stackexchange",
"id": 1538,
"tags": "molecular-biology, homework"
} |
Mechanism for controlling the clamping force | Question: I am working on upgrading a research project on wind energy. They are trying to measure the performance of wind blade designs in a small wind tunnel (i.e. their models are relatively small).
What they were doing was measure the torque generated by the wind by mounting on the axis of the wind blade a bicycle discbrake and engaging it. (They have gone through many mechanical iterations)
Currently they are using a servo motor to pull a bike cable that activates the clamps. What they do is: the blade is accelerated, and then the brakes are applied to bring the blade to a halt. While the blade is slowing down, they measure the torque and rpm. In order to maintain a constant force on the clamp they use a spring to maintain a constant tension in the bike cable.
I am upgrading the measurement software side, and two of the wishlist feature requests were :
to control the system to apply enough clamping force to maintain the number of rpms (apparently they had tried that in the past with a PID but it did not work out).
to maintain a constant torque on the design (i.e. clamping force on the discbrake).
When I saw the system I could see the following issues:
given the disc brake design, the available travel between engaging and free running is very small (a few hundred microns). That makes the system very non-linear and very difficult to control (in a few microns you go from 0 to max braking power - even with a lever).
there is a lot of play in the disc brake (it's wobbly) - due to the mounting of the shaft - however because the shaft is spinning at 3000 to 10000 rpm it tends to self-align.
Overall, I wasn't hopeful that I could achieve the level of control with their current system.
So my question (inspired while I was reading this question) is: what mechanisms can I look into to apply a controllable clamping force (or equivalently a clamping torque) on a 3[mm] spinning shaft which is rotating at 3000-10000 rpm?
Answer: The position of the calipers isn't what you are adjusting, it's the pressure you exert on them.
Assuming the system is relatively slow (brake force ramps up and down over ~1 second), a spring which is tensioned by a servo should work fine for this.
If the spring is stretched further, the braking force will be increased. Notice that even though the cable is barely moving, the force applied to it is changing.
You stated that PID control of this servo for constant torque or rpms "didn't work", but why didn't it work? Was the servo too slow to react? Was the control loop unstable? I think we need more specifics. | {
"domain": "engineering.stackexchange",
"id": 3566,
"tags": "control-engineering, design, torque, measurements, mechanisms"
} |
Dark Matter 'Stars' | Question: I'm aware that the Milky Way has a dark matter 'halo' around it, presumably a spherically symmetric distribution.
But I'm completely ignorant regarding the theories explaining dark matter... Is there any reason to not expect a star-sized object to also be made of dark matter?
I know they'll be extremely difficult to detect, but I'm wondering if it's even physically possible to exist.
Answer: This depends on what you think dark matter is composed of.
One possibility is you could think dark matter is composed of MACHOS, in which case they are viewed as being composed of ordinary baryonic matter. There is some thought that these objects could actually be very cold neutron stars or black holes, in which case they are already dead stars well past the point of fusion. However, they could also be simply smaller, colder objects such as asteroids, in which case it is entirely possible that over time, if their current seemingly stable orbits were sufficiently disturbed, they could collapse with other matter as part of normal star formation. This is similarly the case with RAMBOS. I think it is not controversial to think that a portion of dark matter is in fact composed of some MACHO- or RAMBO-like objects, since we have ample analogies of massive objects (like asteroids and planetoids) existing within the bounds of our own solar heliopause, even if such things are extremely rarefied and difficult to detect.
More plausible, and probably the leading candidate, is the idea that dark matter is composed of WIMPS. This term is sometimes misunderstood to include massive neutrinos, but even massive neutrinos are not massive enough to be good dark matter candidates, and are not WIMPS. WIMPS, while sharing similar interaction properties with neutrinos, are much more massive, and are most commonly associated with particles predicted by theories with supersymmetry. Since these objects only interact via the weak force, the characteristic scale of the interactions is significantly larger than that of the strong force, which is the force typically associated with star formation.
Hubble has actually managed to discern large dark matter structures, and NASA has been able to publish "photos" (which are actually false color images of detected distributions) showing dark matter. So while these objects are able to create structures on large scales, such as dark matter filaments, these are not structures we have full theories of. If the hypothesis is correct that these are composed of WIMPs, then these will not collapse to form stars in a regular sense (barring a general change in the definition of a star). However, it has been hypothesized that given enough time, complex structures could emerge, but at this point it is more speculation until we have proof of what dark matter is composed of. | {
"domain": "physics.stackexchange",
"id": 38553,
"tags": "astrophysics, dark-matter, stars"
} |
How exactly is "effective length" used in FPKM calculated? | Question: According to this famous blog post, the effective transcript length is:
$\tilde{l}_i = l_i - \mu$
where $l_i$ is the length of the transcript and $\mu$ is the average fragment length. However, the typical fragment length is about 300 bp. What happens when the transcript length $l_i$ is smaller than 300? How do you compute the effective length in this case?
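To make the question concrete: a fragment of length $f$ has $l - f + 1$ possible start positions on a transcript of length $l$, and one can average this over the fragment-length distribution conditioned on $f \le l$. A sketch (note the $+1$ convention varies between tools, and the distribution here is a made-up point mass):

```python
import numpy as np

def expected_effective_length(l, frag_probs):
    """Expected effective length of a transcript of length l.

    frag_probs[f] = P(fragment length == f). The distribution is
    conditioned on f <= l, which is how a transcript shorter than the
    mean fragment length can still get a positive effective length.
    Start positions are counted as l - f + 1.
    """
    fs = np.arange(len(frag_probs))
    p = np.where((fs >= 1) & (fs <= l), frag_probs, 0.0)
    total = p.sum()
    if total == 0:
        return 0.0  # no fragment can fit on this transcript at all
    p = p / total  # conditional distribution P(f | f <= l)
    return float(np.sum(p * (l - fs + 1)))

# Toy distribution: every fragment is exactly 300 bp long
probs = np.zeros(501)
probs[300] = 1.0
eff_long = expected_effective_length(1000, probs)   # 701.0 start positions
eff_short = expected_effective_length(200, probs)   # 0.0: no 300 bp fragment fits
```

With a realistic (spread-out) fragment-length distribution, a 200 bp transcript would get its effective length from the fragments shorter than 200 bp only.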
A related question: when computing the FPKM of a gene, how to choose a transcript? Do we choose a "canonical" transcript (how?) or combine the signals from all transcripts to a gene-level FPKM?
Answer: I have a blog post that describes the effective length (as well as these different relative abundance units). The short explanation is that what people refer to as the "effective length" is actually the expected effective length (i.e., the expectation, in a statistical sense, of the effective length). The notion of effective length is actually a property of a transcript, fragment pair, and is equal to the number of potential starting locations for a fragment of this length on the given transcript. If you take the average, over all fragments mapping to a transcript (potentially weighted by the conditional probability of this mapping), this quantity is the expected effective length of the transcript. This is often approximated as simply $l_i - \mu$, or $l_i - \mu_{l_i}$ --- where $\mu_{l_i}$ is the mean of the conditional fragment length distribution (conditioned on the fragment length being < $l_i$ to account for exactly the issue that you raise). | {
"domain": "bioinformatics.stackexchange",
"id": 110,
"tags": "rna-seq"
} |
roslaunch rostopic pub file find package | Question:
How do I publish a message from a file, found with rospack find, using rostopic pub -f from a launch file?
<node pkg="rostopic" type="rostopic" name="arbitrary_node_name" args="pub topic_name msg_type -l -f $(find package_name)/message_details.yaml" output="screen" />
doesn't publish anything.
Originally posted by noob_ros on ROS Answers with karma: 21 on 2020-08-29
Post score: 1
Original comments
Comment by noob_ros on 2020-08-31:
Correction: message is published but only partially (message_details specifies several ros messages separated by '---' but only first message is published)
Answer:
message is published but only partially (message_details specifies several ros messages separated by '---' but only first message is published)
it seems things are actually working as they should.
According to the wiki page (emphasis mine):
-f FILE
Read message fields from YAML file. YAML syntax is equivalent to output of rostopic echo. Messages are separated using YAML document separator ---. To use only the first message in a file, use the --latch option.
(this is the same info you'd get when running rostopic pub --help btw)
Note how it mentions the latch option and how that causes rostopic pub to only publish "the first message".
You have the -l command line option in your args attribute in the .launch file you show:
<node pkg="rostopic" type="rostopic" name="arbitrary_node_name" args="pub topic_name msg_type -l -f $(find package_name)/message_details.yaml" output="screen" />
That would be the short version of --latch.
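So if the goal is to publish every message in the YAML file, the fix would presumably be to drop the -l flag from the args attribute (untested sketch; all other names kept from the question):
<node pkg="rostopic" type="rostopic" name="arbitrary_node_name" args="pub topic_name msg_type -f $(find package_name)/message_details.yaml" output="screen" />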
Originally posted by gvdhoorn with karma: 86574 on 2020-09-01
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 35481,
"tags": "ros, ros-melodic, roslaunch, rostopic, package"
} |
Peptide at pH=1 | Question: In the following peptide:
I want to know the net charge of the peptide at pH=1.
According to me, as the solution is acidic, every nitrogen will get protonated.
Hence the number of charges = the number of nitrogen atoms.
Am I doing it the correct way?
Answer: You are on the right track, but not every nitrogen atom will be protonated - only those that are basic. None of the nitrogen atoms in amide groups are basic. Neither is the nitrogen atom in the indole side chain on tryptophan. Only one nitrogen is basic for each of the imidazole side chain of histidine and the guanidine side chain of arginine.
Thus, the positive charge in acid is more correctly determined by the number of basic side chains plus the N-terminal amine.
"domain": "chemistry.stackexchange",
"id": 7240,
"tags": "acid-base, biochemistry, ph"
} |
How can I run roscore from python? | Question:
Whenever I launch a node from python using the roslaunch script api :
#Start roslaunch
launch = roslaunch.scriptapi.ROSLaunch()
launch.start()
# start required nodes
empty_srv_node = roslaunch.core.Node('rostful_node', 'emptyService.py', name='empty_service')
empty_srv_process = launch.launch(empty_srv_node)
I get an error on rospy.init_node() because master is not started :
Unable to register with master node [http://localhost:11311]: master may not be running yet. Will keep trying.
Question :
How can I run roscore (not only rosmaster) from python if needed? I would expect roslaunch to do it but it seems it doesn't.
Thank you.
Originally posted by asmodehn on ROS Answers with karma: 131 on 2015-08-11
Post score: 5
Answer:
I don't know the ROS way to do it, and personally I use Python's subprocess module to run the roscore first:
import subprocess
import time

roscore = subprocess.Popen('roscore')
time.sleep(1)  # wait a bit to be sure the roscore is really launched
then your code:
unittest.main() # for instance in my case I need a roscore to run some node tests
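The fixed time.sleep(1) is fragile: on a slow machine the master may need longer to come up. One alternative (a sketch, not part of the original answer) is to poll until the master's port accepts connections; 11311 is the default master port seen in the question's error message:

```python
import socket
import time

def wait_for_port(port, host='localhost', timeout=10.0):
    """Poll until something is listening on host:port; raise after timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.1)
    raise RuntimeError('nothing listening on port %d after %.1fs' % (port, timeout))

# roscore = subprocess.Popen('roscore')
# wait_for_port(11311)  # block until the master actually accepts connections
```

This replaces the guessed one-second sleep with a positive confirmation that the master is reachable.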
Originally posted by lgeorge with karma: 123 on 2015-12-30
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 22421,
"tags": "python, roslaunch, roscore"
} |
Process Synchronization and Atomic Instructions | Question: According to the book " Operating Systems Concepts ", ninth edition , page 210, we have the following statement :
Many modern computer systems therefore provide special hardware
instructions that allow us either to test and modify the content of a
word or to swap the contents of two words atomically—that is, as one
uninterruptible unit.
This means that an atomic instruction is an instruction that must run and complete without any interrupts. In the same page, we have :
The test and set() instruction can be defined as shown in Figure 5.3.
The important characteristic of this instruction is that it is
executed atomically. Thus, if two test and set() instructions are
executed simultaneously (each on a different CPU), they will be
executed sequentially in some arbitrary order. If the machine supports
the test and set() instruction, then we can implement mutual exclusion
by declaring a boolean variable lock, initialized to false. The
structure of process Pi is shown in Figure 5.4.
According to the above statement, we can add another characteristic to atomic instructions: two atomic instructions cannot be executed simultaneously. I think this is valid only if the two atomic instructions share the same lock variable, and that makes sense: if two atomic operations (like test_and_set()) on the same lock variable could be executed at the same time by two processes (running in parallel on two cores) that both wish to enter the critical section at the same time, then a race condition could occur. So can we say that an atomic instruction is one that runs without any interruption and cannot run in parallel when it is called on the same lock variable by two or more processes on two or more different cores? Or is an atomic instruction simply one that cannot be interrupted, which, to achieve synchronization, must be supplemented by the guarantee that if two processes call it with the same lock variable, the instruction cannot run on two processors simultaneously?
Answer: This is just a matter of wording. “Uninterruptible” implies that the instruction cannot execute in parallel with an identical instruction - that would be the ultimate interruption.
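To make the wording concrete, here is a hedged Python sketch of the lock-variable pattern from the question. CPython exposes no real test-and-set instruction, so its atomicity is emulated with a lock (circular as an implementation, but it shows the usage pattern the book describes):

```python
import threading

class EmulatedTAS:
    """Emulated test_and_set(): atomically returns the old value of a boolean
    flag and sets it to True. Real hardware does this in one instruction; here
    a lock stands in for that atomicity."""
    def __init__(self):
        self._flag = False
        self._guard = threading.Lock()

    def test_and_set(self):
        with self._guard:
            old = self._flag
            self._flag = True
            return old

    def clear(self):
        with self._guard:
            self._flag = False

lock = EmulatedTAS()
counter = 0

def worker():
    global counter
    for _ in range(5000):
        while lock.test_and_set():  # spin until we observe False: acquired
            pass
        counter += 1                # critical section
        lock.clear()                # release

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000: no increments lost, so mutual exclusion held
```

If test_and_set() were not atomic, two workers could both observe False and enter the critical section together, and some increments would be lost.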
But that’s implementation details. The important thing is that these operations are atomic. That is indivisible. If the description contains multiple parts, then all these parts will happen simultaneously. | {
"domain": "cs.stackexchange",
"id": 21601,
"tags": "operating-systems, synchronization"
} |
Finite time involved In electron transitions | Question: From Wikipedia: "Atomic electron transition is a change of an electron from one quantum state to another within an atom. It appears discontinuous as the electron "jumps" from one energy level to another in a few nanoseconds or less."
Allowing for the above statement, is it correct (obvious?) to conclude that we can work backwards from (1) knowing the energy of the radiation emitted by some transition between any two separate energy levels of, say, the H atom, through (2) converting that energy into a frequency, and then (3) inverting this frequency to obtain a time for the transition?
Or is this a relatively easy and regularly performed measurement, with our current transition-time accuracy limited only by the precision of our energy-measuring instrumentation? E.g. is this the process used in an atomic clock?
The reason for this question is, although I could find out how an atomic clock works on Google, I want to go as far as I can on my own, without the easy (lazy?) answers. But also I don't want to waste anyone's time with questions that Google research will answer. If I could get this question confirmed, it's the best encouragement for more difficult problems. E.g. Smolin's "The Trouble with Physics" pop-sci book mentions the Unruh effect, but there is no explanation of any kind for it in his book. I'm happy thinking through explanations due to vacuum energy acceleration effects myself. This is what physics is all about, to me.
I ask this question just to get absolute confirmation that the idea of a quantum transition being irreversible and indivisible does not imply that these two properties also produce instantaneous transitions.
Answer: A partial answer is no, you can't just invert the frequency: based on the following example.
The Lyman alpha transition in hydrogen from the $n=2$ to $n=1$ results in photons with a frequency of $\simeq 2.47 \times 10^{15}$ Hz.
The radiative lifetime of the transition is about 2 nanoseconds - i.e. it occurs with a frequency of about $5\times 10^{8}$ Hz.
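The two frequencies quoted above can be checked with a line of arithmetic:

```python
# Check the two frequencies quoted above for the Lyman-alpha transition.
lifetime = 2e-9              # radiative lifetime, ~2 nanoseconds
rate = 1.0 / lifetime        # transition rate ~ natural linewidth, in Hz
nu_photon = 2.47e15          # photon frequency, in Hz

print('%.1e' % rate)                # 5.0e+08, the ~5e8 Hz quoted above
print('%.2e' % (nu_photon / rate))  # 4.94e+06: the emitted field oscillates
                                    # millions of times per radiative lifetime
```

The seven-orders-of-magnitude gap between the two numbers is the point: inverting the photon frequency does not give the transition time.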
The connection between the two is via the width of the spectral feature. i.e. In the absence of other broadening mechanisms, the Lyman Alpha line has a natural broadening of $\sim 5 \times 10^{8}$ Hz due to the finite transition rate and consequent energy uncertainty in the photon. The rate of the transition is determined by the strength of the electric dipole interaction between the initial and final states.
So what you would do is invert the frequency width of the transition to estimate the transition time (again, emphasizing that this is in the absence of other broadening mechanisms). | {
"domain": "physics.stackexchange",
"id": 20346,
"tags": "quantum-mechanics"
} |
Stokes' theorem in GR | Question: I read this formula in Sean Carroll's book of GR:
$$\int_{\Sigma}\nabla_{\mu}V^{\mu}\sqrt{g}d^nx~=~\int_{\partial\Sigma}n_{\mu}V^{\mu}\sqrt{\gamma}d^{n-1}x$$
where n is the 4-vector orthogonal to the hypersurface $\partial\Sigma$ and $\gamma$ the induced metric defined by
$$\gamma_{ij} = \partial_i X^{\mu}\partial_jX^{\nu}g_{\mu\nu}.$$
The only trouble I have is with the appearance of this induced metric, because I have the formula
$$\int_{\Sigma}\nabla_{\mu}V^{\mu}\sqrt{g}d^nx=\int_{\Sigma}\partial_{\mu}(V^{\mu}\sqrt{g})d^nx$$
which then should give me straightforwardly
$$\int_{\partial\Sigma}n_{\mu}V^{\mu}\sqrt{g}d^{n-1}x$$
by the classical Stokes' theorem. I was thinking that it may be something with the differential element: going from $d^nx$ to $d^{n-1}x$ forced the appearance of this induced metric, but I haven't managed to figure out how so far.
Answer: There are a couple of problems you're running into: first, $\sqrt{g}$ is part of the integration measure. When you're relating the integral over a manifold to the integral over its boundary, you need to use the appropriate integration measure for each region. And also, Stokes' theorem is defined in terms of a covariant derivative $\nabla_\mu$, not a coordinate derivative $\partial_\mu$.
$$\int_{\Sigma}\partial_{\mu}(\cdots)\mathrm{d}^nx \neq \int_{\partial\Sigma}(\cdots)\mathrm{d}^{n-1}x$$
Basically, the "classical" Stokes' theorem, as you're thinking about it, just doesn't work for arbitrary manifolds, at least not with arbitrary coordinate systems. (You can get away with it by using Gaussian normal coordinates, which are specially constructed so that the integration measure on the boundary is a simple "dimensional reduction" of the measure on the manifold, but in that case $\sqrt{\gamma} = \sqrt{g}$.)
What Stokes' theorem really does is relate the integral of an $n$-form over a boundary to the integral of its exterior derivative over the enclosed submanifold.
$$\int_{\partial\Sigma}\omega = \int_{\Sigma}\mathbf{d}\omega$$
When you go to apply this, if $\omega$ is the dual of a vector field you get
$$\begin{align}\omega &= \color{blue}{n_\mu V^\mu}\color{red}{\sqrt{\gamma}\mathrm{d}^{n-1} y} \\ \mathbf{d}\omega &= \color{blue}{\nabla_\nu V^\nu}\color{red}{\sqrt{g}\mathrm{d}^{n} x}\end{align}$$
The details of how to work this out are in appendix E of Sean Carroll's book, but basically, note that the exterior derivative does two things: it effectively replaces the normal vector with a covariant derivative, and it replaces the integration measure of the hypersurface (including the factor of $\sqrt{\gamma}$) with that of the manifold. | {
"domain": "physics.stackexchange",
"id": 2454,
"tags": "general-relativity, differential-geometry"
} |
Unit Testing in VBA | Question: Unit testing in VBA is... lacking. (What isn't lacking in VBA though?) Since I've become more interested in unit testing lately, I decided I needed something better than Debug.Assert(), so I started building this framework. Currently there is a ton of functionality missing, but since I'm new to unit testing and interfaces, I don't want to get too deep before realizing I've made a huge mistake. The code is simple, but works just fine.
I want to be able to run the output to either a file or the immediate window, so I created a simple IOutput interface that contains one subroutine.
IOutput.cls
Public Sub PrintLine(Optional ByVal object As Variant)
End Sub
And a Console class implementing it. Console uses VBPredeclaredId = True to create a default instance. The Logger class remains unimplemented for the moment.
Console.cls
Implements IOutput
Public Sub PrintLine(Optional ByVal object As Variant)
If IsMissing(object) Then
'newline
Debug.Print vbNullString
Else
Debug.Print object
End If
End Sub
Private Sub IOutput_PrintLine(Optional ByVal object As Variant)
PrintLine object
End Sub
The UnitTest class then takes in an IOutput object and stores it as a property. I need the Output stream to be available to the local project, but I don't want to expose it to external projects referencing it, so I declared it at Friend scope (more on that later).
UnitTest.cls
Private Type TUnitTest
Name As String
OutStream As IOutput
Assert As Assert
End Type
Private this As TUnitTest
Public Property Get Name() As String
Name = this.Name
End Property
Friend Property Get OutStream() As IOutput
Set OutStream = this.OutStream
End Property
Public Property Get Assert() As Assert
Set Assert = this.Assert
End Property
Friend Sub Initialize(Name As String, out As IOutput)
this.Name = Name
Set this.OutStream = out
Set this.Assert = New Assert
Set this.Assert.Parent = Me
End Sub
The UnitTest creates its own instance of the Assert object. I have a real concern here. I don't like that I have to pass in the test name along with the actual conditions I'm testing.
Assert.cls
Private Const PASS As String = "Pass"
Private Const FAIL As String = "Fail"
Private Type TAssert
Parent As UnitTest
End Type
Private this As TAssert
Public Static Property Get Parent() As UnitTest
Set Parent = this.Parent
End Property
Public Static Property Set Parent(ByVal Value As UnitTest)
Set this.Parent = Value
End Property
Public Sub IsTrue(testName As String, condition As Boolean, Optional message As String)
Dim output As String
output = IIf(condition, PASS, FAIL)
Report testName, output, message
End Sub
Public Sub IsFalse(testName As String, condition As Boolean, Optional message As String)
Dim output As String
output = IIf(condition, FAIL, PASS)
Report testName, output, message
End Sub
Private Sub Report(testName As String, output As String, message As String)
output = this.Parent.Name & "." & testName & ": " & output
If message <> vbNullString Then
output = output & ": " & message
End If
this.Parent.OutStream.PrintLine output
End Sub
Finally, I don't want to import all of these classes into each project I'm working on. It will be a nightmare to keep them all synced as I make changes to the VBAUnit project. So I changed their instancing to "PublicNotCreatable".
If the instancing property is PublicNotCreatable, the class behaves normally when used within the same project, but a variable can be declared of that class type in other projects. The other project cannot create a new instance of the class, but can have a variable of the class's type. To allow another project to use a new instance of the class, the project containing the class must provide a global-scope function that creates a new instance of a class and returns it to the caller. For example, suppose Project1 contains a class named Class1, whose Instancing property is PublicNotCreatable. Suppose also that Project2 references Project1.
Cpearson.com
So I have a regular *.bas module named Provider that contains this single function.
Provider.bas
Public Function New_UnitTest(Name As String, out As IOutput) As UnitTest
Set New_UnitTest = New UnitTest
New_UnitTest.Initialize Name, out
End Function
Then, from another project, I go just add VBAUnit to the references. (If you don't have it open, you have to click browse and navigate to the actual file.) I did just that and wrote some tests that essentially test themselves.
This is where the Friend scope comes into play. VBAUnit has access to the Initialize subroutine and to the OutputStream property, but they're not visible to any external projects.
AssertConditionTest
This code is boilerplate code. Each new test I create will need these lines. Also, once I have implemented a file logger, this is where you would need to decide where to output the results to. I don't like boilerplate, but I can't think of a way to get around it. I'm way open to suggestions on this.
Private test As VBAUnit.UnitTest
Private Sub Class_Initialize()
Set test = VBAUnit.New_UnitTest(TypeName(Me), VBAUnit.Console)
End Sub
Private Sub Class_Terminate()
Set test = Nothing
End Sub
Followed by the actual tests.
Public Sub RunAllTests()
IsTrueShouldPass
IsTrueShouldFail
IsFalseShouldPass
IsFalseShouldFail
End Sub
Public Sub IsTrueShouldPass()
test.Assert.IsTrue "IsTrueShouldPass", True
End Sub
Public Sub IsTrueShouldFail()
test.Assert.IsTrue "IsTrueShouldFail", False
End Sub
Public Sub IsFalseShouldPass()
test.Assert.IsFalse "IsFalseShouldPass", False, "with a message."
End Sub
Public Sub IsFalseShouldFail()
test.Assert.IsFalse "IsFalseShouldFail", True, "with a message."
End Sub
Finally, in this project, we have a regular *bas. This is just kind of throw away code that we use to run the tests we're interested in.
Public Sub TestTheTests()
Dim test As New AssertConditionTest
test.RunAllTests
test.IsFalseShouldPass
End Sub
To summarize:
Am I using interfaces in an intelligent way?
Is there anyway to ditch the boilerplate code in AssertConditionTest?
How can I avoid passing a "Test name" into each Assert statement and still get results like this? My method feels like a dirty hack at best.
AssertConditionTest.IsTrueShouldPass: Pass
AssertConditionTest.IsTrueShouldFail: Fail
AssertConditionTest.IsFalseShouldPass: Pass: with a message.
AssertConditionTest.IsFalseShouldFail: Fail: with a message.
Was it a stupid decision to make Assert its own class and keep a Parent UnitTest property?
Answer:
IOutput class module (Interface)
Looking at how the interface is being used:
this.Parent.OutStream.PrintLine output
Where output is clearly a String, which makes sense. But the interface's signature doesn't reflect that, and is confusing:
Public Sub PrintLine(Optional ByVal object As Variant)
Why is the parameter optional? and why is it a Variant? ...and why is it called object? I would have expected this:
Public Sub PrintLine(ByVal output As String)
Which leads me to the implementation:
Console class module
If the parameter is a String, and isn't optional, the PrintLine implementation gets... a little bit simpler:
Option Explicit 'always. even if you're not **yet** declaring anything.
Implements IOutput
Public Sub PrintLine(ByVal output As String)
Debug.Print output
End Sub
Private Sub IOutput_PrintLine(ByVal output As String)
PrintLine output
End Sub
It seems your Console class was intended to be used a bit like a .net System.Console, a static class.
In the context of the IOutput interface implementation, it doesn't make sense for that class to be static, nor to have an optional parameter to its PrintLine method. However, if you encapsulate a test result in its own class...
Option Explicit
Public Enum TestOutcome
Inconclusive
Failed
Succeeded
End Enum
Private Type TTestResult
outcome As TestOutcome
output As String
End Type
Private this As TTestResult
Public Property Get TestOutcome() As TestOutcome
TestOutcome = this.outcome
End Property
Friend Property Let TestOutcome(ByVal value As TestOutcome)
this.outcome = value
End Property
Public Property Get TestOutput() As String
TestOutput = this.output
End Property
Friend Property Let TestOutput(ByVal value As String)
this.output = value
End Property
Public Function Create(ByVal outcome As TestOutcome, ByVal output As String) As TestResult
Dim result As New TestResult
result.TestOutcome = outcome
result.TestOutput = output
Set Create = result
End Function
...then I'd renamed IOutput to ITestOutput, and change the signature like this:
Public Sub WriteResult(ByVal result As TestResult)
End Sub
and Console would look like this:
Option Explicit
Implements ITestOutput
Public Sub WriteResult(ByVal result As TestResult)
Debug.Print result.TestOutput
End Sub
Private Sub ITestOutput_WriteResult(ByVal result As TestResult)
WriteResult result
End Sub
That gives WriteResult a much clearer intent than PrintLine, and doesn't stop you from implementing a WriteLine(String) method and keeping Console as a static utility class, and as a bonus you have a concept of a test result that can be inconclusive, failed or successful.
UnitTest class module
Mucho kudos, this is the first time I'm seeing a warranted use of the Friend keyword in VBA. This is pretty clever, and enables several things that aren't otherwise possible in VBA:
Factories: a class can now be created and initialized with parameter values, as if created with a constructor.
Immutability: a class can only expose getters, and be immutable from the client code's perspective.
Impressive. I wish I had realized VBAProjects could reference each other, 10 years ago!
Provider code module
I don't like this. I would have made it a "static" class module (with a default instance), and called it UnitTestFactory.
I don't like the method name either - again, underscores in identifiers are confusing in VBA. If that code is in a class called UnitTestFactory, the method's name could simply be Create.
I don't like that you're assigning the result, and then calling a method on that reference - it looks very awkward and would be much clearer with a result variable, and I would make the IOutput implementation/reference a property of the factory class, removing it from the method's signature:
Option Explicit
Private Type TUnitTestFactory
TestOutput As IOutput
End Type
Private this As TUnitTestFactory
Public Property Get TestOutput() As IOutput
Set TestOutput = this.TestOutput
End Property
Public Property Set TestOutput(ByVal value As IOutput)
Set this.TestOutput = value
End Property
Public Function Create(ByVal testName As String) As UnitTest
Dim result As New UnitTest
result.Initialize testName, TestOutput
Set Create = result
End Function
Assert class module
I've never seen the Static keyword used like this (what's a static property in VBA anyway?), and the Parent property makes me think that class is doing more than it should. I believe the TestOutcome enum and the TestResult class I've suggested above, would be helpful here... but I don't think it's Assert's job to report the test's outcome - by keeping that responsibility at the test level, you remove the need to pass the test's name to the Assert methods.
Question is, how to do that?
I think I'd expose an event:
Public Event AssertCompleted(ByVal result As TestResult)
This would make a method like IsTrue look like this:
Public Sub IsTrue(ByVal condition As Boolean, Optional message As String)
Dim outcome As TestOutcome
outcome = IIf(condition, TestOutcome.Succeeded, TestOutcome.Failed)
Dim result As TestResult
Set result = TestResult.Create(outcome, message)
RaiseEvent AssertCompleted(result)
End Sub
For this to work, a UnitTest class only needs this - note that I'd want to call the variable assert, so I'd rename the class to TestAssert, and forfeit the default instance / static-ness of the type:
Private WithEvents assert As TestAssert
Private Sub assert_AssertCompleted(ByVal result As TestResult)
OutStream.WriteResult result
End Sub
And... Bingo!
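For comparison, and to the "how can I avoid passing a test name" question: in a language with stack introspection, the reporting callback can recover the calling test's name by itself. A minimal Python sketch of the same reporter-callback idea (all names here are invented for illustration; VBA offers no equivalent of this introspection):

```python
import inspect

class Assert:
    """Reports each outcome through a callback, mirroring the AssertCompleted
    event above. The calling test method's name is recovered from the call
    stack instead of being passed in."""
    def __init__(self, on_result):
        self._on_result = on_result  # callback(test_name, passed, message)

    def is_true(self, condition, message=''):
        self._report(bool(condition), message)

    def is_false(self, condition, message=''):
        self._report(not condition, message)

    def _report(self, passed, message):
        caller = inspect.stack()[2].function  # the test method two frames up
        self._on_result(caller, passed, message)

results = []
assert_ = Assert(lambda name, ok, msg: results.append((name, ok, msg)))

def is_true_should_pass():
    assert_.is_true(True)

is_true_should_pass()
print(results)  # [('is_true_should_pass', True, '')]
```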
This code is boilerplate code. Each new test I create will need these line.
Don't worry about that boilerplate code. It's needed because it's the only logical place for the client code to specify an implementation for the output interface - a test might want to output to a text file, another might want to output to the immediate pane, and another might want to send the output to a listbox on a modal form... and it's really the client code's job to specify that. | {
"domain": "codereview.stackexchange",
"id": 9554,
"tags": "object-oriented, unit-testing, vba, library, interface"
} |
Eigenvector of spin half particle in applied magnetic field at angle | Question: I am very new to this field of physics, so sorry if this is basic. I was recently trying to understand how you go about calculating the energy splittings of electrons in applied fields. I understand that when the magnetic field is along the z axis, the eigenvectors [0.5;0] and [0;0.5] still hold for spin up and spin down, and from that I can easily calculate the eigenvalues and hence the energy levels of the two states.
However, where I am stuck is when I introduce another component of the magnetic field, so that my Hamiltonian is then proportional to BzIz+BxIx. Previously, when the field was only applied along one axis, it was simply proportional to BzIz. The reason this confuses me is that now the states [0.5;0] and [0;0.5] are no longer eigenstates. Instead the eigenstates are of the form A[0.5;0]+B[0;0.5], where A and B are constants depending on Bz and Bx.
I believed a wave function like this to be a superposition of two states? This must be wrong as a wave function shouldn't collapse into a superposition... But then again I believed it to be the case that wave functions always collapse into an eigenstate of the operator being used... Safe to say I am confused, apologies if this is basic...
Answer: If you have an external magnetic field of strength $B$ in the direction $(n_x,n_y,n_z)$ then the Hamiltonian has a term proportional to $$B_x\hat\sigma_x+B_y\hat\sigma_y+B_z\hat\sigma_z.$$
And I put the hats on the operators so you know they are operators on a Hilbert Space rather than matrices. Indeed you could choose a basis for your Hilbert Space that happens to be eigenvectors of $\hat\sigma_z$ and then all vectors in the Hilbert Space are just column vectors of complex numbers.
And then you can use the matrix representation $$B_x\sigma_x+B_y\sigma_y+B_z\sigma_z.$$
And the energy eigenstates can now have parts that are eigen to these operators/matrices. But the basis you choose to write something is just as arbitrary as the coordinate axis.
So in general, the natural basis is in terms of the eigenvectors to the actual Hamiltonian, but you can use any basis. And if you choose the wrong basis, your states can look more complicated, but they aren't different.
It's like if someone moved in a line at constant speed. You could pick your x axis to point in that direction, and the motion would look simple, e.g. $\vec r(t)=vt\hat i+0\hat j$, but you could also pick a coordinate system where it looks like $\vec r(t)=\frac{\sqrt 2}{2}vt\,\hat i'+\frac{\sqrt 2}{2}vt\,\hat j'$. And I don't mean like as in an analogy. I mean literally you are choosing a basis.
What can be confusing is that there is a basis for physical space and a basis for your Hilbert Space and they are related but different. When you write something as a bunch of scalars in a particular order then you picked a basis. The basis might make your math looks cluttered or simple, but it doesn't change the physics.
So you really have a magnetic field, there really is an operator $$B_x\hat\sigma_x+B_y\hat\sigma_y+B_z\hat\sigma_z.$$ And it is a Hermitian operator and it has eigenvectors and writing it in terms of that basis (the basis of eigenvectors) might make your equations look the nicest just like $\vec r(t)=vt\hat i+0\hat j$ looked nicer than $\vec r(t)=\frac{\sqrt 2}{2}vt\,\hat i'+\frac{\sqrt 2}{2}vt\,\hat j'$. But no physics is different.
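That basis-independence can be checked numerically. A small pure-Python sketch (my own illustration, not from the original answer) builds the $2\times 2$ matrix $B_x\sigma_x+B_y\sigma_y+B_z\sigma_z$ and finds its eigenvalues:

```python
import cmath

def pauli_hamiltonian(Bx, By, Bz):
    """2x2 matrix Bx*sigma_x + By*sigma_y + Bz*sigma_z, as nested lists."""
    return [[complex(Bz), complex(Bx, -By)],
            [complex(Bx, By), complex(-Bz)]]

def eigenvalues_2x2(M):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Field tilted between x and z: [1;0] and [0;1] are no longer eigenvectors,
# but the eigenvalues are still +/- |B| = +/- sqrt(Bx^2 + By^2 + Bz^2).
lam1, lam2 = eigenvalues_2x2(pauli_hamiltonian(3.0, 0.0, 4.0))
print(lam1.real, lam2.real)  # 5.0 -5.0
```

The energy splitting depends only on $|\vec B|$; tilting the field only changes which superposition of the $\sigma_z$ basis vectors is eigen.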
I believed a wave function like this to be a superposition of two states?
The word superposition is just the word linear combination. Any vector is a superposition of other vectors. It isn't physically meaningful. It just means your basis didn't include the vector in question.
When someone makes a big deal about a superposition, they are actually trying to make a big deal about it being a superposition of a particular set of basis vectors. | {
"domain": "physics.stackexchange",
"id": 24896,
"tags": "quantum-mechanics, newtonian-mechanics, quantum-spin, eigenvalue"
} |
Keeping Time constant instead of speed of light in "Special Theory of Relativity" | Question: I was reading the special theory of relativity and figured out that basically this whole theory revolves around three main factors.
Motion(speed)
Time
Space (length)
The result of the Michelson–Morley experiment was that there is no change in the speed of light in any direction, relative to the "hypothetical ether".
So Einstein concluded that the speed of light always remains constant, and that what changes is time (dilation) and length (contraction) in order to keep the speed of light constant.
But what will happen if I assume that the speed of light and length change in order to keep time constant (one factor out of the three must remain constant)? Will it satisfy all the laws of physics?
Answer: I don't think that such a change would yield a theory that would "obey all the laws of physics", because the fact that lightspeed is constant is a law of physics, verified by many experiments in the past century, including the most famous negative result of all time, the Michelson–Morley interferometric investigation.
Time-dilation and length contraction are consequences of the constancy of the speed of light. The fact that both of these phenomena are experimentally verified is evidence that c is constant and finite. If you make time constant and allow c to vary, you are no longer changing frames of reference using Lorentz transforms, and you are not predicting the physics of our universe. | {
"domain": "physics.stackexchange",
"id": 58020,
"tags": "special-relativity, relativity"
} |
Pneumatic flow formula | Question: I'm looking for a pneumatic formula in order to obtain the flow. I found many formulas, but only for hydraulics:
$\Delta P = R_p Q$ where $R_p$ is the resistance, $Q$ the flow, and $\Delta P$ the pressure potential:
$$R_p = \frac{8\eta L}{\pi R^4}$$
with $\eta =$ dynamic viscosity of the liquid ; $L =$ Length of the pipe ; $R =$ radius of the pipe
I don't know if I can use them, since I'm in pneumatic ! After doing research on the internet, I found some other variables like : sonic conductance, critical pressure coefficient, but no formula...
I think that I have all the information needed in order to calculate the flow:
Pipe length = 1m ; Pipe diameter = 10mm ; $\Delta P =$ 2 bars
Thanks !
Answer: The Hagen–Poiseuille (HP) equation you found can also be used as an approximation for gases, as long as the pressure drop $\Delta P$ isn't too large.
Here we have:
$$\Delta P=P_1-P_0$$
where $P_1$ is the pressure at the entrance of the pipe and $P_0$ at the outlet.
Once $Q$ has been estimated with HP, we can still apply a correction using the Ideal Gas Law. Assume the flow to be isothermal, then:
$$Q_0 P_0=Q_1 P_1$$
from which the corrected volume throughput $(\mathrm{m^3 s^{-1}})$ $Q_1$ can be found. | {
"domain": "physics.stackexchange",
"id": 66347,
"tags": "homework-and-exercises, fluid-dynamics, pressure, flow, air"
} |
Difference between clonal and subclonal mutations | Question: I'm a physicist writing a proposal that has to do with cancer as a disease driven by evolutionary selection. As far as I understand, all tumors start with a single precursor (single cell or group of cells), and the other cells derive from this precursor by cycles of alterations and selection processes.
Reading recent articles, such as this, I learned that the derived cells include both clones and subclones. Since I'm not sure I understood things correctly, I have a few questions on the difference between the two words:
Is it OK to call clones the cells derived from the precursors?
When should I use subclones?
In the case of heterogeneity, is it fine to call clonal population a group of clones with the same characteristics?
Answer: "Clones" means cells that have the same DNA characteristics as their predecessor, so in your case you could say those cells are clones. The difference between a clone and a subclone is that a subclone is essentially a clone that has subsequently acquired a different characteristic (an upgrade, if you will). Since cancer cells are mutated cells, you could call them subclones, but only if you study their DNA in depth to find out whether distinct subclones are actually present. As for the third question, I didn't quite understand what you meant, because you used two words with opposite meanings: heterogeneity means a group of organisms with different characteristics, while a clonal population means organisms with the same DNA (much like a colony of bacteria). | {
"domain": "biology.stackexchange",
"id": 6569,
"tags": "cell-biology, cancer, cloning"
} |
Link to PR2/Startrobot is down | Question:
I'm following the tutorial on this page:
http://wiki.ros.org/2dnav_pr2_app/Tutorials/RunningNavigationStack
As it says, I need to follow the steps according to this link:
http://pr.willowgarage.com/wiki/PR2/StartRobot
The problem is that the link isn't working. Normally this wouldn't be a problem, but I don't know what's written on that page.
Does anyone know how I can access this page via another route? Or does someone have another solution?
Please let me know.
Kind regards,
Bert
Originally posted by Burcks on ROS Answers with karma: 1 on 2015-09-11
Post score: 0
Answer:
I'm not sure if it's exactly the same, but this tutorial should contain similar information: http://wiki.ros.org/pr2_robot/Tutorials/Starting%20up%20the%20PR2
Originally posted by ahendrix with karma: 47576 on 2015-09-11
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 22615,
"tags": "ros"
} |
Get Next and Previous "Load Day" | Question: I keep looking at this trying to figure if there's a better way. All three functions work, but I wanted to get some insights and ideas from others.
IsLoadDay determines if the date parameter is a "Load Day". Load Days are Tuesday through Saturday, but we skip holidays and designated No Load Days. There are two existing tables of Holidays (HOLDY_WA_STATE) and No Load Days (afrs_daily_load_no_load_date).
GetNextLoadDay and GetPreviousLoadDay are the functions intended to be called directly. They each use IsLoadDay to determine whether the date parameter is a Load Day, and then move forward or backward through time until a Load Day is found.
GetNextLoadDay will check the date parameter and determine if it is a Load Day whereas GetPreviousLoadDay will first go one day back and then start checking if the date is a Load Day.
IsLoadDay Function
CREATE FUNCTION [dbo].[IsLoadDay](@Dt DATE)
RETURNS BIT
AS
BEGIN
DECLARE @IsLoadDay BIT = 1
IF DATENAME(DW, @Dt) IN ('SUNDAY', 'MONDAY')
BEGIN
SET @IsLoadDay = 0
END
ELSE
IF EXISTS(SELECT 1 FROM [DBMGMT].[dbo].[afrs_daily_load_no_load_date] WHERE [load_date] = @Dt AND [load_ind] = 'N')
BEGIN
SET @IsLoadDay = 0
END
ELSE
IF EXISTS(SELECT 1 FROM [master].[dbo].[HOLDY_WA_STATE] WHERE [HOLDY_DATE] = @Dt)
BEGIN
SET @IsLoadDay = 0
END
RETURN @IsLoadDay
END
GO
GetNextLoadDay Function
CREATE FUNCTION [dbo].[GetNextLoadDay](@Dt DATE)
RETURNS DATE
AS
BEGIN
DECLARE @NxtDt DATE
SET @NxtDt = @Dt
WHILE [dbo].[IsLoadDay](@NxtDt) != 1 BEGIN
SET @NxtDt = DATEADD(D, 1, @NxtDt)
END
RETURN @NxtDt
END
GO
GetPreviousLoadDay Function
CREATE FUNCTION [dbo].[GetPreviousLoadDay](@Dt DATETIME)
RETURNS DATETIME
AS
BEGIN
DECLARE @PrvDt DATE
SET @PrvDt = DATEADD(D, -1, @Dt)
WHILE [dbo].[IsLoadDay](@PrvDt) != 1 BEGIN
SET @PrvDt = DATEADD(D, -1, @PrvDt)
END
RETURN @PrvDt
END
GO
Answer: Is Load Day?
Your IsLoadDay function seems a little overcomplicated.
Might I suggest trying something along the lines of this, which brings all the checks you want to do pretty nicely into a single coalesce:
CREATE FUNCTION [dbo].[IsLoadDay](@Dt DATE)
RETURNS BIT
AS
BEGIN
RETURN COALESCE
(
--If it is a monday or sunday, it's not a load day
CASE WHEN DATENAME(DW, @Dt) IN ('SUNDAY', 'MONDAY') THEN 0 END,
--If it is a designated not load day, it's not a load day
(
SELECT 0
FROM [DBMGMT].[dbo].[afrs_daily_load_no_load_date]
WHERE [load_date] = @Dt AND [load_ind] = 'N'
),
--If it is a holiday, it's not a load day
(
SELECT 0
FROM [master].[dbo].[HOLDY_WA_STATE]
WHERE [HOLDY_DATE] = @Dt
),
--Otherwise, it's a load day
1
)
END
GO | {
"domain": "codereview.stackexchange",
"id": 15207,
"tags": "sql, datetime, sql-server, t-sql"
} |
Read or write message from/to file | Question:
What is the easiest way to load/write a message from/to a file in C++?
I want something like this:
void loadMessage(const std::string& filename, std_msgs::Image& image_msg)
{
// open file and put it's content in img_msg
}
void saveMessage(std_msgs::Image& image_msg, const std::string& filename)
{
// put message content in file
}
The file has YAML format, for example created by
rostopic pub -1 /image std_msgs/Image > file
Originally posted by Stephan on ROS Answers with karma: 1924 on 2011-09-22
Post score: 3
Original comments
Comment by Mac on 2011-09-23:
Do you mean that you want the output file to be in YAML format?
Comment by joq on 2011-09-23:
Is there some reason not to use rosbag? That is the normal ROS technique; it can be done with standard commands.
Comment by lucasw on 2016-02-08:
This is similar to http://answers.ros.org/question/10330/whats-the-best-way-to-convert-a-ros-message-to-a-string-or-xml/
Answer:
The best way I see is to use rosbag. You can easily record messages from the command line by just executing
rosbag record /topic1 /topic2
Rosbag also provides a C++ and a Python API for writing and reading streams of messages from your programs. Have a look at this page for more information on the API.
Originally posted by Lorenz with karma: 22731 on 2011-09-23
This answer was ACCEPTED on the original site
Post score: 5 | {
"domain": "robotics.stackexchange",
"id": 6756,
"tags": "ros, c++, serialization, message"
} |
Aharonov-Bohm phase picked up when a magnetic dipole goes around a charge | Question: When a particle with charge $q$ traverses a loop that encloses a magnetic flux $\Phi$, it picks up a phase $e^{iq\Phi}$ (I have set $c$ and $\hbar=1$). This is the usual Aharonov-Bohm phase. Now, let us look at a different scenario - consider a particle with a magnetic dipole moment that gives rise to a flux $\Phi$ through a plane. Move this particle along a loop enclosing a charge $q$ in the same plane. What is the phase picked up in this case?
According to Section 2.1.3 of Intro. to Topological Quantum Computation by Pachos, this phase is also $e^{iq\Phi}$. I am not able to see why, though.
Answer: The geometric phase acquired by a neutral particle with a magnetic dipole moment encircling a point charge is an example of the Aharonov-Casher phase which is a variant of the Aharonov-Bohm phase. This phase can be explained as follows:
The interaction energy between the two particles is given by:
$$E_{int} = \vec{\mu} \cdot \vec{B}$$
Where $B$ is the magnetic field felt by the neutral particle in its rest frame. This magnetic field originates from a Lorentz transformation of the charge electric field:
$$\vec{B} = -\gamma \frac{\vec{v}}{c} \times \vec{E} \approx -\frac{\vec{v}}{c} \times \vec{E}$$
(In the non-relativistic approximation).
Thus the interaction energy becomes:
$$E_{int} = -\vec{\mu} \cdot \frac{\vec{v}}{c} \times \vec{E} = \frac{1}{c} \vec{\mu} \times \vec{E} \cdot \vec{v} = \frac{q}{c} \frac{\vec{\mu} \times \vec{r} }{|\mathbf{r}|^3}\cdot \vec{v}$$
This term has the form:
$$E_{int} = \frac{q}{c} \vec{A}(r)\cdot \vec{v}$$
Which is just the standard form of the Lorentz-force Lagrangian, giving rise to the Aharonov-Bohm phase, with:
$$\vec{A}(r) = \frac{\vec{\mu} \times \vec{r} }{|\mathbf{r}|^3}$$
Suppose, that the motion is circular on a plane and the magnetic moment is perpendicular to the plane, then the Aharonov-Casher phase is given by:
$$\phi_{AC} = \oint \frac{q}{c} \frac{\vec{\mu} \times \vec{r} }{|\mathbf{r}|^3}\cdot d\vec{r} = q \frac{2 \pi \mu}{rc}$$
As can be observed, the phase depends on the radius of the circular trajectory. This is an example of a geometric phase. If we replace the point charge by an infinite wire of charge per unit length $\lambda$, then the electric potential becomes:
$$V(\vec{r}) = -\lambda \ln(|\mathbf{r}|)$$
And the electric field
$$\vec{E} (\vec{r}) = \frac{\lambda \vec{r}}{|\mathbf{r}|^2} $$
In this case, the Aharonov-Casher phase becomes:
$$\phi_{AC} = \lambda \frac{2 \pi \mu}{c}$$
As we can see, this phase does not depend on the radius of the trajectory. In fact, we can prove that it does not depend on the shape of the trajectory at all, but only on the number of times the trajectory encircles the charge. This is an example of a topological phase. In contrast, the geometric phase in the first case depends on the trajectory.
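As a numerical sanity check of this path independence (a sketch, not part of the original answer; $q$, $\mu$, $\lambda$, and $c$ are all set to 1), we can integrate the effective vector potential of the line-charge case, $\vec A \propto (\hat z \times \vec r)/|\vec r|^2$, around two differently shaped loops that both enclose the wire:

```python
import numpy as np

def line_charge_phase(path):
    """Trapezoidal estimate of oint A . dr for A = (z_hat x r)/|r|^2
    = (-y, x)/(x^2 + y^2), over a closed 2D path given as an (N, 2) array."""
    x, y = path[:, 0], path[:, 1]
    a = np.stack([-y, x], axis=1) / (x**2 + y**2)[:, None]
    dr = np.roll(path, -1, axis=0) - path          # segment vectors, closing the loop
    a_mid = 0.5 * (a + np.roll(a, -1, axis=0))     # average A over each segment
    return np.sum(a_mid * dr)

t = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
ellipse = np.stack([3 * np.cos(t) + 0.5, 1.2 * np.sin(t)], axis=1)  # still encloses the wire

print(line_charge_phase(circle) / (2 * np.pi))   # ~1.0
print(line_charge_phase(ellipse) / (2 * np.pi))  # ~1.0
```

Both integrals come out to $2\pi$ (times the suppressed constants), independent of the loop's shape or size, as expected for a topological phase.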
Pachos had in mind the second type of phase. The anyons live in 2D and their electrostatic potentials are logarithmic, because this is the solution of the Poisson equation in 2D. | {
"domain": "physics.stackexchange",
"id": 45491,
"tags": "quantum-mechanics, electromagnetism, topology, berry-pancharatnam-phase"
} |
Does a man on the moon experience day? | Question: Does a man on the moon (or any other similar body) experience daytime like we do? By daytime I mean looking upward and seeing a bright "sky," not dark space.
If not, then what is the reason behind it?
Answer:
like even when light gets on the moon why does the space appears dark from the moon?
For the same reason it appears dark from the Earth (when flying at an altitude of 80,000 feet or so):
Image credit: View from the SR-71 Blackbird.
The fact is, we can't 'see space' from the Earth's surface during the day because the atmosphere is 'in the way'- the atmosphere scatters light from the Sun and other sources of light and we see that rather than the darkness of space.
If we fly high enough so that most of the atmosphere is below us, we can 'see space' through the little bit of atmosphere above us.
Since the Moon has no atmosphere, the light from the Sun is not scattered and the sky appears dark from the surface of the Moon. | {
"domain": "physics.stackexchange",
"id": 14576,
"tags": "homework-and-exercises, atmospheric-science, dispersion"
} |
Time reversal transformation of the complex scalar field | Question: consider a complex scalar field $\phi$
$$\phi(t,x)=\int\frac{d^3k}{\sqrt{2\omega_k(2\pi)^{3}}}
\big(a_ke^{i\vec{k}\cdot\vec{x}-i\omega_kt}
+b^\dagger_ke^{-i\vec{k}\cdot\vec{x}+i\omega_kt}\big)$$
By definition, time reversal operator $T$ is anti-unitary, so we have
$$T\phi(t,x)T^\dagger =\int\frac{d^3k}{\sqrt{2\omega_k(2\pi)^{3}}}
\big(Ta_k T^\dagger e^{-i\vec{k}\cdot\vec{x}+i\omega_kt}
+T b^\dagger_k T^\dagger e^{+i\vec{k}\cdot\vec{x}-i\omega_kt}\big) \\
=\int\frac{d^3k}{\sqrt{2\omega_k(2\pi)^{3}}}
\big(a_{-k} e^{-i\vec{k}\cdot\vec{x}+i\omega_kt}
+ b^\dagger_{-k} e^{+i\vec{k}\cdot\vec{x}-i\omega_kt}\big) \\
=\int\frac{d^3k}{\sqrt{2\omega_k(2\pi)^{3}}}
\big(a_{k} e^{i\vec{k}\cdot\vec{x}+i\omega_kt}
+ b^\dagger_{k} e^{-i\vec{k}\cdot\vec{x}-i\omega_kt}\big)=\phi(-t,x)
$$
But this seems to contradict the expectation that for a complex scalar field $\phi$ we should have
$$T\phi T^\dagger \sim \phi^*$$
Is anything wrong with the time-reversal calculation, or with the expectation for the field?
Answer: Short answer:
The paradox can be resolved by being clear about how we want to define time-reversal. In the simplest case of a non-interacting, single-component complex scalar field, a typical definition takes time-reversal to be the antilinear transformation that replaces the field operator $\phi(t,x)$ with $\phi(-t,x)$. This is consistent with the calculation shown in the question. There is no contradiction, because antilinearity doesn't imply that the result should be $\sim\phi^*$.
Whether or not a given transformation is a symmetry of the given model is a separate question, not addressed here; that would require specifying the whole model. The key message here is simply that if we're clear with definitions, then no contradictions arise.
Long answer:
Quantum (field) theory is formulated using operators on a Hilbert space. For the question being asked here, the Hilbert space is not important, so we can think of the operators as elements of an abstract algebra (a C*-algebra) instead of as operators on a Hilbert space, but to be concise, I'll still call them "operators."
Every operator $A$ has an adjoint, often denoted $A^\dagger$ by physicists and often denoted $A^*$ by mathematicians. I'll use the notation $A^\dagger$ here. The adjoint satisfies
$$
(zA)^\dagger=z^* A^\dagger
\tag{1}
$$
for all complex numbers $z$, where $z^*$ denotes the complex conjugate. It also satisfies
$$
(AB)^\dagger=B^\dagger A^\dagger.
\tag{2}
$$
We can think of the adjoint as an extension of complex conjugation from complex numbers to operators.
An antilinear transformation takes each operator $A$ and returns a new operator, which I'll denote $\sigma(A)$, subject to these rules:
\begin{gather}
\sigma(zA)=z^*\sigma(A)
\tag{3}
\\
\sigma(AB)=\sigma(A)\sigma(B).
\tag{4}
\end{gather}
Notice that (4) preserves the order of multiplication but (2) reverses it.
The quantum "complex" scalar field is a collection of operators $\phi(t,x)$, one per point in spacetime. (We can make this well-defined by treating spacetime as a very fine discrete lattice, but that level of detail won't be needed here.) Saying that the field operator is "complex" is a common but dangerous way of saying that it is not self-adjoint: $\phi^\dagger(t,x)\neq\phi(t,x)$. The field operator $\phi(t,x)$ is an operator, not a complex number.
With all of that in mind, suppose that we define time-reversal to be the antilinear transformation $\sigma_T$ whose effect on the field operator is
$$
\sigma_T\big(\phi(t,x)\big)\equiv\phi(-t,x).
\tag{5}
$$
For any other operator $A$ that can be expressed in terms of $\phi(t,x)$, such as the operators $a_k$ or $b_k$ used in the question, the effect of $\sigma_T$ on $A$ can be derived from this definition. In particular, the requirement that $\sigma_T$ be antilinear implies
$$
\sigma_T\big(z\,\phi(t,x)\big)=z^*\phi(-t,x)
\tag{6}
$$
for all complex numbers $z$. This definition of time-reversal is consistent with the calculation shown in the question. The key message here is that antilinearity does not imply that the result must be $\sim\phi^*$.
The structure of a given model might also motivate defining charge conjugation to be an ordinary linear transformation $\sigma_C$ that satisfies
$$
\sigma_C\big(\phi(t,x)\big)\equiv\phi^*(t,x).
\tag{7}
$$
(This is typically done in scalar QED, for example.) The requirement that $\sigma_C$ be linear implies
\begin{gather}
\sigma_C(zA)=z\,\sigma_C(A)
\tag{8}
\\
\sigma_C(AB)=\sigma_C(A)\sigma_C(B).
\tag{9}
\end{gather}
In particular,
$$
\sigma_C\big(z\,\phi(t,x)\big)=z\,\phi^*(t,x)
\tag{10}
$$
for all complex numbers $z$. Although this charge conjugation transformation takes the adjoint of the field operator, it does not take the adjoint of most other operators. It is a linear transformation, which preserves the order of multiplication as shown in (9), in contrast to (2).
The transformations $\sigma_T$ and $\sigma_C$ were defined above by specifying two things:
whether they are linear or antilinear
how they affect the field operator.
Their effect on all other operators may be deduced from these, assuming that all other operators may be expressed in terms of the field operators (as usual in QFT). More general definitions of such transformations can allow them to mix different components or different fields with each other; only simple representative possibilities were considered here.
P.S. - An antilinear transformation $\sigma_T$ can be written as
$$
\sigma_T(A)=TAT^{-1}
\tag{11}
$$
for an antilinear operator $T$. This can be a useful way to write it when working with vectors in the Hilbert space, but this fact does not affect the answer to the question. | {
"domain": "physics.stackexchange",
"id": 54058,
"tags": "quantum-field-theory, time-reversal-symmetry"
} |
Does rtabmap_ros accepts IMU fusion? | Question:
I want to fuse IMU data with vision while mapping with rtabmap_ros; I need to know whether the algorithm actually accepts IMU fusion.
Originally posted by saratoonsi on ROS Answers with karma: 1 on 2018-08-01
Post score: 0
Answer:
The algorithm accepts any kind of odometry, so yes if you generate an odometry from fusion of IMU and vision, you can feed it to rtabmap. Just make sure that the twist covariance matrix in the generated odometry message makes sense (maybe around 10^-4 order for diagonal values) so that you don't have problems with bad graph optimization afterward in rtabmap.
Originally posted by matlabbe with karma: 6409 on 2018-08-01
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by saratoonsi on 2018-08-09:
Thanks :) . | {
"domain": "robotics.stackexchange",
"id": 31435,
"tags": "slam, navigation, ros-kinetic, rtabmap"
} |
Pendulum: how to measure the value of the decay constant $\tau$ experimentally? | Question: I have been given the following function to model the behaviour of a simple physical pendulum:
$$\theta(t) = \theta_o e^{-\frac{t}{\tau}} \cos\left(2\pi \frac{t}{T} + \alpha\right)$$
Where $\alpha$ is a constant, $T$ is the pendulum's period, and $\tau$ is a time constant of decay. I only understand $\tau$ to be a friction constant of the pendulum, but I am unsure how to measure it experimentally. How would this variable be simply measured in a pendulum with known $T$, mass, length, initial angle etc...
Thanks!
Answer: This is similar to damped harmonic motion (where the damping is caused by air resistance and friction). The cosine term in your equation represents the oscillatory motion, and the exponential factor modulates the amplitude, i.e., determines its decay over time. Assuming you have access to one, a sonic motion detector can be used to create plots of position and velocity as functions of time. If you plot position versus time, this would be a step in the right direction. Using the data obtained from the detector, you should plot an exponential fit of the decay of this oscillating system. Using your equation, you can take the ratio of two successive peaks, which gives (ignoring your original phase $\alpha$, since we can measure from the start of the first oscillation)
$\large \frac{\theta_1}{\theta_2} =\frac{ \theta_o e^{\frac{-t}{\tau}} \cos(2\pi \frac{t}{T})}{ \theta_o e^{\frac{-(t + T)}{\tau}} \cos(2\pi \frac{(t+T)}{T})}$
Because the motion is periodic, the cosine terms at successive peaks are equal and the $\theta_o$ factors cancel;
using the rules for dividing exponents, we get
$\large \frac{\theta_1}{\theta_2}= \large e^{\frac{T}{\tau}}$
If we take the natural log of both sides and rearrange we get,
$\large \tau = \frac{T}{\ln \frac{\theta_1}{\theta_2}}$
where $T$ is the period between each oscillation. If you apply this equation to each pair of peaks in a given time
interval, and do this as many times as required for accuracy, you will get an average value of $\tau$. | {
"domain": "physics.stackexchange",
"id": 71924,
"tags": "classical-mechanics, friction, dissipation"
} |
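As an illustration of the peak-ratio method from the answer above, here is a short synthetic-data sketch (not part of the original answer; all parameter values are invented). It detects successive positive peaks and averages $T/\ln(\theta_1/\theta_2)$ over each pair:

```python
import numpy as np

# Synthetic pendulum data: theta(t) = theta0 * exp(-t/tau) * cos(2*pi*t/T)
tau_true, T, theta0 = 5.0, 2.0, 0.3
t = np.arange(0.0, 30.0, 0.001)
theta = theta0 * np.exp(-t / tau_true) * np.cos(2 * np.pi * t / T)

# Locate the positive peaks: samples larger than both neighbours
idx = np.where((theta[1:-1] > theta[:-2]) & (theta[1:-1] > theta[2:]))[0] + 1
peaks = theta[idx]

# tau = T / ln(theta_1 / theta_2), averaged over successive peak pairs
tau_est = np.mean(T / np.log(peaks[:-1] / peaks[1:]))
print(tau_est)  # should come out close to tau_true = 5.0
```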
Trying to build a circuit for quantum teleportation on IBMQ I get ERROR_RUNNING_JOB error | Question: I am trying to build a circuit for quantum teleportation. On the simulator, everything runs fine and according to expectations, however, I am not able to run the algorithm on the real quantum processor.
ERROR_RUNNING_JOB is returned without any other explanation. I tried to implement two different circuits: the first one with controlled X and Z gates, the second one with X and Z gates controlled by a value in the classical register. Please find both circuits below.
I appreciate any help.
Thanks.
Answer: The issue is that you are applying operations after measurement gates and this is currently not available on the real hardware. I think the hardware also does not support reset operations mid-way through a circuit at the moment.
The best way forward is to keep running this on the simulator or try to find a different way of expressing the circuit such that it avoids these features. | {
"domain": "quantumcomputing.stackexchange",
"id": 1050,
"tags": "circuit-construction, ibm-q-experience, teleportation"
} |
Drawing a Thue-Morse pattern recursively | Question: This is web exercise 3.1.63. from the book Computer Science An Interdisciplinary Approach by Sedgewick & Wayne:
Write a program that reads in a command line input N and plots the
N-by-N Thue-Morse pattern. Below are the Thue-Morse patterns for
N = 4, 8, and 16.
Here is my program:
public class ThueMorse
{
public static void drawThueMorse1(int n, double x, double y, double size)
{
if (n == 0) return;
double x1 = x - size/2, x2 = x + size/2;
double y1 = y - size/2, y2 = y + size/2;
StdDraw.setPenColor(StdDraw.BOOK_BLUE);
StdDraw.filledRectangle(x+(3*size/4),y,size/4,size/2);
//StdDraw.pause(300);
StdDraw.filledRectangle(x-(3*size/4),y,size/4,size/2);
//StdDraw.pause(300);
StdDraw.filledRectangle(x,y+(3*size/4),size/2,size/4);
//StdDraw.pause(300);
StdDraw.filledRectangle(x,y-(3*size/4),size/2,size/4);
//StdDraw.pause(300);
StdDraw.setPenColor(StdDraw.WHITE);
StdDraw.filledSquare(x,y,size/2);
//StdDraw.pause(300);
StdDraw.filledSquare(x+(3*size/4),y+(3*size/4),size/4);
//StdDraw.pause(300);
StdDraw.filledSquare(x+(3*size/4),y-(3*size/4),size/4);
//StdDraw.pause(300);
StdDraw.filledSquare(x-(3*size/4),y-(3*size/4),size/4);
//StdDraw.pause(300);
StdDraw.filledSquare(x-(3*size/4),y+(3*size/4),size/4);
//StdDraw.pause(300);
drawThueMorse1(n-1, x1, y1, size/2);
drawThueMorse2(n-1, x1, y2, size/2);
drawThueMorse2(n-1, x2, y1, size/2);
drawThueMorse1(n-1, x2, y2, size/2);
}
public static void drawThueMorse2(int n, double x, double y, double size)
{
if (n == 0) return;
double x1 = x - size/2, x2 = x + size/2;
double y1 = y - size/2, y2 = y + size/2;
StdDraw.setPenColor(StdDraw.WHITE);
StdDraw.filledRectangle(x+(3*size/4),y,size/4,size/2);
//StdDraw.pause(300);
StdDraw.filledRectangle(x-(3*size/4),y,size/4,size/2);
//StdDraw.pause(300);
StdDraw.filledRectangle(x,y+(3*size/4),size/2,size/4);
//StdDraw.pause(300);
StdDraw.filledRectangle(x,y-(3*size/4),size/2,size/4);
//StdDraw.pause(300);
StdDraw.setPenColor(StdDraw.BOOK_BLUE);
StdDraw.filledSquare(x,y,size/2);
//StdDraw.pause(300);
StdDraw.filledSquare(x+(3*size/4),y+(3*size/4),size/4);
//StdDraw.pause(300);
StdDraw.filledSquare(x+(3*size/4),y-(3*size/4),size/4);
//StdDraw.pause(300);
StdDraw.filledSquare(x-(3*size/4),y-(3*size/4),size/4);
//StdDraw.pause(300);
StdDraw.filledSquare(x-(3*size/4),y+(3*size/4),size/4);
//StdDraw.pause(300);
drawThueMorse1(n-1, x1, y1, size/2);
drawThueMorse2(n-1, x1, y2, size/2);
drawThueMorse2(n-1, x2, y1, size/2);
drawThueMorse1(n-1, x2, y2, size/2);
}
public static int log2(int x)
{
return (int) (Math.log(x)/Math.log(2));
}
public static void main(String[] args)
{
int n = Integer.parseInt(args[0]);
n = log2(n)-1;
drawThueMorse1(n, 0.5, 0.5, 0.5);
}
}
StdDraw is a simple API written by the authors of the book. I checked my program and it works. Here is one instance of it:
Input: N = 256
Output:
Is there any way that I can improve my program?
Thanks for your attention.
Answer:
Is there any way that I can improve my program?
Yes :)
A good cue that your program can be improved is when there is a lot of repetition in the code. There are usually ways to avoid repetition, although they can sometimes get pretty complex. However, avoiding repetition is useful for two very important reasons:
Copy/paste usually leads to "dumb" bugs
If you want to change a thing you need to change it in a billion places, making it more likely you'll forget one.
I'll be very honest, I'm reading your code as much as I can, but I don't understand what's happening in there, so it's going to be hard to provide a review of what you're doing, but it's also an indication that the code isn't clear enough to be understood by someone trying to understand it (me, in this case).
What can be done about this?
Use descriptive names (that goes for methods, variables, everything really)
Space the code properly (Have you ever read a complicated paper/book where all the text was cramped up? Have you ever read another one where there was good spacing? It makes a world of difference!)
Avoiding commented code (Why is it there? Is it a mistake? Should it be uncommented? How long has it been commented? Why?)
Finally, when none of those tips are enough, you should comment your code, but be cautious not to "overcomment". Comments should be just long enough so that someone can understand why you coded the way you did. There's no need to explain things such as "I'm changing the pen color" over StdDraw.setPenColor(StdDraw.BOOK_BLUE);
Now, there's one thing that I think would help tremendously with the readability of the code. Right now, your code both computes the "pattern" and prints it. What if you first generated a binary 2D array that fits the problem at hand, and had another function that printed this array?
Taking an example of the N=4 case you gave, the array could look like this :
0110
1001
1001
0110
Separating the algorithm from the printing will also let you see how you can improve your code.
For example, we can notice that there's symmetry between $row_i$ and $col_i$, meaning that we only need to find the values of half the grid in order to find all the values (that would be a valuable optimisation for a problem with a big N value).
Now, if I take a look at the link you gave us, I see that :
Amazingly, the Thue-Morse sequence can be generated from the substitution system :
0 -> 01 and 1 -> 10, recursively.
Let's look at this with the first row of the N=8 example you gave us. We'll need $\log_2(8)=3$ iterations of this substitution sequence: 0->01->0110->01101001. Let's fill our 2D array with this:
0 1 1 0 1 0 0 1
1 X X X X X X X
1 X X X X X X X
0 X X X X X X X
1 X X X X X X X
0 X X X X X X X
0 X X X X X X X
1 X X X X X X X
Now, let's fill the second row, knowing that the first value is 1 : 1 -> 10 -> 1001 -> 10010110
0 1 1 0 1 0 0 1
1 0 0 1 0 1 1 0
1 0 X X X X X X
0 1 X X X X X X
1 0 X X X X X X
0 1 X X X X X X
0 1 X X X X X X
1 0 X X X X X X
For completion, let's do the third row, and I'll leave the rest to you: 10 -> 1001 -> 10010110
0 1 1 0 1 0 0 1
1 0 0 1 0 1 1 0
1 0 0 1 0 1 1 0
0 1 1 X X X X X
1 0 0 X X X X X
0 1 1 X X X X X
0 1 1 X X X X X
1 0 0 X X X X X
I guess you see where this ends. This leads to a pretty performant approach with code that should be pretty simple to understand. Afterwards, you could have a function paintThueMorse, give it this 2D array and paint blue for 1, white for 0 or the inverse. | {
"domain": "codereview.stackexchange",
"id": 39364,
"tags": "java, performance, beginner, recursion"
} |
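Following up on the review above: the substitution sequence it describes has a well-known closed form, in which the $i$-th Thue-Morse digit is the parity of the number of 1-bits in $i$, and cell $(i, j)$ of the 2D array is the XOR of the row and column digits. A minimal sketch (in Python rather than the question's Java, purely for illustration):

```python
def thue_morse_grid(n):
    """n-by-n Thue-Morse pattern: cell (i, j) = parity(i) XOR parity(j),
    where parity(k) is the number of 1-bits in k, mod 2."""
    parity = [bin(k).count("1") % 2 for k in range(n)]
    return [[parity[i] ^ parity[j] for j in range(n)] for i in range(n)]

for row in thue_morse_grid(4):
    print("".join(map(str, row)))
# 0110
# 1001
# 1001
# 0110
```

This reproduces the N=4 pattern shown in the review, and a separate paint function can then map 1 to blue and 0 to white (or the inverse).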
Context-free grammar for $L=\{ a^nb^m | n \le m+3 \}$ | Question: I'm having problems determining the productions for a CFG describing the language $L=\{ a^nb^m | n \le m+3 \}$
where $n,m \ge 0$
I'm very new to this so this example might be a little harder, but everything I try I end up not finding the correct solutions. Some example strings are $\epsilon$, $a$, $aa$, $aaa$, $b$, $ab$, $aab$, $aaab$, $aaaab$, $bb$, $abb$, $aabb$, $aaabb$, $aaaabb$, $aaaaabb$ etc.
This is how I tried reasoning:
since there can be a loop of $a$ in the beginning, I thought that one production could be
$A \rightarrow aAb | \epsilon$
But this is as far as my reasoning goes. What's confusing is that for each value of $m$, I have increasing $n$.
Can anyone give a hint, or give general hints how to construct CFG from languages?
Thanks!
Answer:
$S \rightarrow aSb \mid A$
$A \rightarrow \epsilon \mid a \mid aa \mid aaa \mid B$
$B \rightarrow bB \mid \epsilon$ | {
"domain": "cs.stackexchange",
"id": 20070,
"tags": "formal-languages, automata, context-free, formal-grammars"
} |
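A brute-force check of the grammar above (a sketch, not part of the original answer): enumerate everything the grammar derives up to a small length bound and compare against a direct membership test for $L$.

```python
def in_language(s):
    """Direct membership test for L = { a^n b^m : n <= m + 3 }."""
    n = len(s) - len(s.lstrip("a"))
    m = len(s) - n
    return s == "a" * n + "b" * m and n <= m + 3

def generated(max_len):
    """Strings of length <= max_len derivable from
    S -> aSb | A,  A -> eps | a | aa | aaa | B,  B -> bB | eps."""
    mids = ["", "a", "aa", "aaa"] + ["b" * j for j in range(1, max_len + 1)]
    out = set()
    for lvl in range(max_len // 2 + 1):      # applications of S -> aSb
        for mid in mids:
            s = "a" * lvl + mid + "b" * lvl
            if len(s) <= max_len:
                out.add(s)
    return out

gen = generated(8)
ref = {"a" * n + "b" * m for n in range(9) for m in range(9)
       if n + m <= 8 and n <= m + 3}
print(gen == ref)  # True: sound and complete up to length 8
```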
How do we know which term to attach a phase factor to in a state equation? | Question: I need to find the state of a particle in a one-dimesional harmonic oscillator where a measurement of the energy yields the values $\hbar\omega\over 2$ or $3\omega\hbar\over 2$, each with a probability of one-half at time t. I would have thought that the state would be $\big|\psi(0)\big>= {1\over \sqrt2}\big|0\big>+{1\over \sqrt2}\big|1\big>$. However the right equation is $\big|\psi(0)\big>= {1\over \sqrt2}\big|0\big>+{1\over \sqrt2}e^{-i\phi}\big|1\big>$. I know that the $e^{-i\phi}$ is a relative phase factor, but I can't figure out where it came from.
Where did the phase factor come from and when do phase factors need to be applied to the terms in state equations?
Answer: Your answer is a correct choice, but so is any state obtained from it by attaching a phase factor to either ket, since the measurement probabilities depend only on the moduli of the coefficients. Any overall phase is physically irrelevant and can be factored out, which leaves a single relative phase between the two terms; that is why the general state is written with the relative phase factor $e^{-i\phi}$ attached to one of them, as in the last expression in your question. | {
"domain": "physics.stackexchange",
"id": 52700,
"tags": "quantum-mechanics, harmonic-oscillator, quantum-states"
} |
How would I spectrally rotate/invert an audio signal (Matlab or Python)? | Question: So I have a 4.5 minute wav audio clip. I want to spectrally rotate the frequencies such that the spatio-temporal characteristics of natural speech are retained, while having the new audio be unintelligible. Any tips? I found https://ccrma.stanford.edu/~jos/sasp/Spectral_Rotation_Real_Signals.html, but I'm not getting anywhere with it.
Answer: After reading the linked article by Julius Smith, flipping the spectrum entirely as the OP is requesting is NOT the intention of that author. That article is shifting the spectrum to the left by only half a DFT bin as an alternate approach to the complexity of creating an analytic signal with the Hilbert Transform for purposes of making an octave band filter on a real signal (per one post prior). It is eliminating the samples at DC and Nyquist and converting the real signal into a complex signal.
Specifically, if you multiply a discrete signal in the time domain by a complex phasor of the general form $e^{j\omega n}$ where n is the sample count and $\omega$ is a fractional radian frequency between $0$ and the sample rate given as $2\pi$, the spectrum will be shifted accordingly by $\omega$ as given by the frequency translation property of the FT (when $\omega$ is a positive number this shift will be to the right, and when $\omega$ is a negative number this shift will be to the left). Since the frequency spectrum for a discrete signal is periodic over the fractional radian frequency range from $0$ to $2\pi$, this shift will also be a spectral rotation, as is clearer from the spectrums I plotted further below (consider shifting these spectrums a small amount to the left and observe how within the range from $0$ to $2\pi$ the spectrum rotates meaning that the spectrum that goes over $f_s$ in the shift appears at $f=0$.)
Julius Smith is shifting the spectrum to the left by only half a DFT bin (a very small shift), specifically by doing the following:
$$y[n] = e^{-j\pi n/N} x[n]$$
Note the formula for the DFT:
$$X[k] = \sum_{n=0}^{N-1} x[n] e^{-j 2 \pi nk/N}$$
where $x[n]$ is rotated using integer phase steps given by $k$ in $e^{-j 2 \pi k n/N}$, so Julius's approach corresponds to using $k = 1/2$, a half-bin shift.
That said, if the OP indeed wants to flip the spectrum entirely (swap low and high frequency), this is done by shifting the spectrum over $N/2$ bins (half the entire DFT spectrum, not just half a bin). Using the same form, but with $k=N/2$ instead of the $k=1/2$ that Julius Smith had used, this ends up being
$$y[n] = e^{-j2\pi(N/2) n/N} x[n] = e^{-j\pi n} x[n] = (-1)^n x[n]$$
To further see how this works and how the spectrum in discrete systems is circular, see the plots demonstrated below. Given any spectrum in the unique span of $f \in [0, f_s)$, where $f_s$ represents the sampling rate, or equivalently (and more frequently) expressed in fractional radian frequency as $f \in [0, 2\pi)$, we can circularly shift in the frequency domain by multiplying the signal in the time domain by $e^{j\omega_s n}$, where $\omega_s$ represents the fractional radian frequency amount to shift.
This applies to all signal types, real and complex, so it is the more general case. For real signals the unique span is $f \in [0, f_s/2)$, or in fractional radian frequency $f \in [0, \pi)$. This is the OP's case, with an audio file representing a real signal. The OP wants to rotate the spectrum such that $f=0$ is translated to $f= \pi$ (equivalently half the sampling rate, or $f_s/2$).
However if the signal is over-sampled such that the spectrum of interest does not fully occupy the frequency range of $0$ to $f_s/2$ then this process will be unsatisfactory as demonstrated below.
The top spectrum shows the case where the audio spectrum occupies the full range from $0$ to $f_s/2$, showing the "unfolded" spectrum with the frequency axis extending to $\pm \infty$, which nicely reveals the frequency periodicity mentioned above.
The next line is the spectrum of the sequence +1,-1,+1,-1..., specifically a tone at exactly one half the sampling rate. When we multiply in time, we convolve in frequency, resulting in the third line: the spectrum shifted by $f_s/2$.
When the signal is oversampled, the same process would result in the following spectrums, which may not be what the OP is hoping to achieve. In this case a suggested approach is to decimate the signal so that $f_s/2$ is at the upper end of the desired spectrum and follow the process above. If for rate matching the higher sampling rate is needed, the signal can be again interpolated after the multiplication by +1,-1,+1,-1... | {
"domain": "dsp.stackexchange",
"id": 9214,
"tags": "fourier-transform, audio, frequency-spectrum"
} |
Machine learning algorithm for xml manipulation | Question: Given a virtual game map (picture) and a racing car at the map's starting point, I'm trying to build an algorithm that would help me generate a route that would get the car from the beginning to the endpoint.
route definition: a very complex .xml file that includes all the data the car needs in order to navigate the road successfully.
What I have:
I have thousands of different maps and I could get any number of maps I'd like.
For every map, I have a complex external algorithm using picture analysis that builds a route for me - although this route is not so accurate.
For some percentage of the maps, I also have a specific "good working car solution" - which is basically a manually built route file.
What I'm trying to achieve:
Basically, I'm trying to improve the complex route generator algorithm which I don't have access to.
I want to use the percentage of manual routes I have and compare them to the routes that are generated for the same maps. From the comparison and the differences between the manual and the automatically created route files, I want to build a second-step smart algorithm that "learns" what should be changed in the automatically generated routes and have that algorithm run as a second step and make it as close as possible to the handmade route.
My final goal is to be able to build an accurate route for maps I don't have the manual route for. I want to be able to produce an accurate route for every map - using both the external algorithm and the one I will build on top of it.
This entire subject is very new to me and I'd like to get your advice as to how should I approach this complex problem, and what could make this task easier for me.
Answer: Terminology
There are two uses of the word map in this discussion.
Road maps are construed below as images of road maps.
Mapping input to desired output is the skill the system must learn.
The set of examples used to teach the system from an existing mapping of input to output is called a labeled data set and the associated type of learning is called supervised.
Training Resources
Unreliable labeled road map examples, and access to more (also unreliable) labeled road map examples
A large number of unlabeled road map examples and access to more road maps
A smaller number of labeled road map examples
Inputs
Virtual game map as an image
Starting position
Target ending position
Output
The fastest route in sufficient detail to fully direct car movement — Assuming that the route should be optimized for soonest arrival time, not for fuel conservation, minimal tire wear, safety, or minimal distance, since the car was identified as a race car.
Accurate Valuation of the Resource Inventory
The low quality routes from the black box service provide neither examples of desired system behavior nor examples of undesirable system behavior. The former would be good training data. The latter would be good to use in an adversarial architecture based on the design of GANs and their variants. The uncertain quality of the labeled data from the black box service makes it ambiguous and, from an information science point of view, of zero value.
Comparing the low quality routes and the high quality routes may be interesting, but not particularly useful given the current state of machine learning. The objective of your initial project phase is to teach the machine to generate a high quality route from the low quality route, not produce a comparison report. To do that without processing the images, a substantially large overlap would be required between these two sets:
The smaller number of labeled road map examples for which the labels were created manually
Corresponding labeled road map examples for which the labels were created by the black box service
That approach would require you to correct bad routes to create a sufficient training set. Without overlap, you have no training data for artificial neural network training. It may be possible to create a GAN-style algorithm that learns how to correct the black box service output using a concept called cross-entropy, but it would require processing the images and would likely be more difficult than replacing the black box service altogether.
If your ultimate goal is to create a working system that generates routes from images and start and end positions, I suggest discounting the external service altogether, and discarding the idea of creating a route improver sub-system. That you wish to improve upon the black box route generator's algorithm is noble but ultimately a time-consuming distraction from reaching the ultimate goal.
Replace the unreliable thing with the reliable thing you design and build and to which you will have full and open access to improve. Your system will ultimately have to deal with the images in your code either way, and that's the most challenging aspect of the overall task. Just learn about CNNs and RNNs and get right to it. That's my advice anyway.
A Note on Feedback
For any non-trivial route, the car will not likely stay on the road if given only turn, acceleration, and deceleration instructions. Detection of the edges of the road would normally be needed as a feedback mechanism to accompany the route instructions. The only way around this is to make the visualization sufficiently accurate that cumulative errors never exceed the distance that would put the car off pavement.
A Note on Data Representation
JSON has a more flexible and terse structure when it comes to homogeneous arrays than XML, and it is easy to convert XML to JSON. Furthermore, a transportation route is often represented as a directed graph, and there are many algorithms already conceived in graph theory that have been implemented in graph libraries in most languages. For instance, shortest path, path equivalence, path concatenation, and detection of rings (driving in circles) are one line calls to these libraries. Because JSON, for the reason given, has overtaken XML in many domains, the number of graph libraries that can read and write JSON directly has overtaken the number that can read or write XML directly. The tooling for JSON analysis and visualization has surpassed that of XML at this point too. | {
"domain": "ai.stackexchange",
"id": 629,
"tags": "machine-learning, algorithm, learning-algorithms"
} |
Srednicki's book on QFT | Question: I am reading Srednicki's book on QFT and there's a thing I don't quite see in chapter 6 (Path integrals in QM)
equation (6.7) is
$\langle{}q^{''},t^{''}|q^{'},t^{'}\rangle=\int\prod_{k=1}^Ndq_k\prod_{j=0}^N\frac{dp_j}{2\pi}e^{ip_j(q_{j+1}-q_j)}e^{-iH(p_j,\bar{q_j})\delta t}$
where he says $\bar{q}_j=\frac{1}{2}(q_j+q_{j+1})$, $q_0=q'$, $q_{N+1}=q''$, and takes the limit $\delta t\to 0$
and he gets equation (6.8)
$\langle{}q^{''},t^{''}|q^{'},t^{'}\rangle=\int\mathcal{D}q\,\mathcal{D}p\; e^{i\int_{t'}^{t''}dt\,(p(t)\dot{q}(t)-H(p(t),q(t)))} $
My question is, where does the integral on the exponential come from?
Answer: A product of exponentials is equivalent to an exponential of a sum, e.g.
$$e^{A}e^{B}=e^{A+B}.$$
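Applied to the product in (6.7), keeping just the structure of the exponents (a sketch):

$$\prod_{j=0}^{N} e^{ip_j(q_{j+1}-q_j)}\,e^{-iH(p_j,\bar{q}_j)\delta t}=\exp\left[i\sum_{j=0}^{N}\left(p_j\,\frac{q_{j+1}-q_j}{\delta t}-H(p_j,\bar{q}_j)\right)\delta t\right]$$

and as $\delta t\to 0$, $(q_{j+1}-q_j)/\delta t\to\dot{q}(t)$ while $\sum_j(\cdots)\delta t\to\int_{t'}^{t''}(\cdots)\,dt$, which is exactly the exponent in (6.8).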
The limit of $\delta t$ going to zero turns the discrete sum into a continuous integral. | {
"domain": "physics.stackexchange",
"id": 10411,
"tags": "homework-and-exercises, quantum-field-theory, path-integral"
} |
Is this a good/safe way to convert enums? | Question: Due to the fact that Entity does not support Enums, I am converting them to strings, string lists, and back again.
The reason is two fold:
type-safety, and consistency in the database.
Now, I have a bunch of these, but I will use Credit Cards as an example:
public enum CreditCardName
{
VISA,
Master_Card,
Discover,
American_Express
}
Originally, I decided on writing the same method for them, over and over again:
public static List<string> GetCreditCardTypes()
{
var list = Enum.GetNames(typeof(CreditCardName));
return list.Select(n => n.Replace("_", " ")).ToList();
}
Then it occurred to me that I could make one private method to do this:
public static List<string> GetCreditCardTypes()
{
return EnumToList(typeof(CreditCardName));
}
private static List<string> EnumToList(Type type)
{
var list = Enum.GetNames(type);
return list.Select(n => n.Replace("_", " ")).ToList();
}
Now I'm also doing something similar in reverse, but will skip the code. The question I have is if this is a good way to go about this? I know there are other workarounds for Entity's lack of enum support, but they are a bit cumbersome for my simple needs. I have never used Types in this way, so I just wanted to check that this is how Type/typeof are intended to be used, and that this will not cause unforeseen runtime issues.
Answer: I solved a similar issue by inventing a DisplayName attribute, so I can define my enums like this:
public enum CreditCardName
{
[DisplayName("VISA")]
Visa,
[DisplayName("MasterCard")]
MasterCard,
[DisplayName("Discover")]
Discover,
[DisplayName("American Express")]
AmericanExpress
}
This has a number of advantages. For one, apps like Resharper will stop complaining about your naming scheme. Also, your string doesn't have to actually correspond to an English name, it could correspond to an entry in a translation strings file or something like that.
Source for DisplayName:
[AttributeUsage(AttributeTargets.Field)]
public class DisplayName : Attribute
{
public string Title { get; private set; }
public DisplayName(string title)
{
this.Title = title;
}
}
Source for the method to retrieve your DisplayName objects
IEnumerable<DisplayName> attributes = from field in typeof(CreditCardName).GetFields()
                                      from attribute in field.GetCustomAttributes(true)
                                      let displayName = attribute as DisplayName
                                      where displayName != null
                                      select displayName;
(Since DisplayName is restricted to AttributeTargets.Field, the attributes live on the enum's fields, so the query must enumerate the fields rather than the type's own attributes.)
You can roll this into a more complete generic method too:
public string[] GetNames<TEnum>()
{
    return (from field in typeof(TEnum).GetFields()
            from attribute in field.GetCustomAttributes(true)
            let displayName = attribute as DisplayName
            where displayName != null
            select displayName.Title).ToArray();
}
Unfortunately, there isn't a way to enforce that TEnum is an enum at compile time, but you could make it throw an exception instead.
"domain": "codereview.stackexchange",
"id": 471,
"tags": "c#, enum"
} |
Why UV light causes sunburn if exposed for too long, whereas visible light does not? | Question: UV light causes sunburn if exposed for too long, whereas visible light does not. Why?
Answer: To understand this effect, we have to think of light as a stream of photons. Each photon has a particular wavelength, and shorter wavelength photons (such as those of UV) have a higher energy than those of longer wavelength photons (such as those of visible light).
Many chemical processes require what is called an "activation energy" to happen. You can think of it like a hill between your starting point and your destination. You need enough energy to get up the hill before you can roll down to your destination. If you don't have enough energy, you end up rolling part way up the hill, and then rolling back down.
Sunburn is associated with damage caused by these chemical processes. When a UV photon hits some compounds in the body (such as DNA), it has enough energy to "break" it, causing a chemical reaction that results in changes that have to be repaired by the body later. Visible light photons lack the energy to get past this barrier, so they can't cause the same damage.
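To put rough numbers on this (a back-of-the-envelope sketch; the wavelengths are representative choices, not from the original answer):

```python
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron-volt

def photon_energy_eV(wavelength_m):
    """Photon energy E = h*c/wavelength, converted to electron-volts."""
    return h * c / wavelength_m / eV

uv = photon_energy_eV(300e-9)       # UVB-ish photon: ~4.1 eV
visible = photon_energy_eV(550e-9)  # green light:    ~2.3 eV
```

Typical covalent bond energies are a few eV, so the ~4 eV UV photon can clear activation barriers that the ~2.3 eV visible photon cannot.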
The reason UV and visible light are so different is that UV light is called "ionizing radiation." It has enough energy to actually kick electrons off of atoms. Visible light is "non-ionizing radiation," and generally doesn't have the energy to do this -- if it tries to kick an electron off of the atom, the atom has enough "pull" to hold onto it. The ability to knock electrons off of atoms is a pretty powerful tool for causing chemical reactions to occur, so the division between ionizing and non-ionizing is a pretty big deal. | {
"domain": "physics.stackexchange",
"id": 41174,
"tags": "thermal-radiation"
} |
RF-sputtering: where comes the target self-bias voltage from? | Question: While learning for an exam, I stumbled over the following Question:
According to Material Science of thin Films by Milton Ohring,
"RF sputtering essentially works
because the target self-biases to a negative potential. Once this happens, it
behaves like a DC target where positive ion-bombardment sputters away
atoms for subsequent deposition."
So far so good. What I don't understand is
why exactly a self-bias voltage is appearing?
why this bias voltage does not lead sputter bombarding of the substrate, that should be coated?
I understand that it relates to the high mobility of electrons compared to the ions, but I don't see why this implicates the self-bias yet. The above mentioned source tries to explain it as follows:
"Negative target bias is a consequence of
the fact that electrons are considerably more mobile than ions and have
little difficulty in following the periodic change in the electric field. The
disparity in electron and ion mobilities means that isolated positively
charged electrodes draw more electron current than comparably isolated
negatively charged electrodes draw positive ion current. For this reason the
discharge current-voltage characteristics are asymmetric and resemble
those of a leaky rectifier or diode [...]"
This is not clear to me, as I would think that this explanation would hold for the sputter substrate as well. I would be very glad if someone could make that clear by a somehow intuitive explanation. Thank you for all suggestions. :)
Answer: First of all, sputtering can be used either to etch the target or to deposit material sputtered from a target onto another item, depending on the arrangement inside the etching chamber. However there is a further complication; a target may be etched as a source of ions for film deposition on a manufactured item, or as part of a manufacturing process for the target item. All this can confuse the novice reader. The question asks about RF sputter etching and references aspects of both processes.
RF plasma etching fundamentals
The chamber is evacuated and a small amount of a suitable gas then introduced at low pressure. A high RF voltage is used to strike a plasma in the gas, and this provides the ambient working conditions.
The plasma comprises positive ions and negative electrons. The RF voltage induces oscillations in the particles, with the light electrons accelerating faster and moving further; they are said to have greater mobility.
Consequently more electrons bump into the target than do ions. The target picks up their negative charge and hence gains a negative DC bias.
The target now attracts the ions more strongly, limiting the charge buildup and DC bias. More importantly, the ions now impact the target more frequently and start etching into it.
In a real process, the RF power level and resultant DC bias are both among the parameters which must be carefully controlled to ensure reliable and uniform etching. The exact chemistry of the gas reactions and compounds is of course also crucial.
Source for deposition
The target is typically placed at the top of the vacuum chamber and the product to receive the film is laid on a second electrode beneath it. Target atoms are either ionised or combined into ionised molecules. The product may also be given a much greater negative bias to attract the ions and build up a deposition layer.
However the more gas you let in, the harder it is for target material to reach the other electrode, so other methods are often used to create or enhance the ion flow and the chamber can then be run at higher vacuum.
Manufacturing process
The electrode lies at the bottom and the target is laid flat on (effectively comprises) it and the RF is induced across it and the containing vacuum chamber. Typically the gas ions react with it to create heavier molecules with even lower mobility, which therefore float off carrying atoms of the target with them, thus etching into it.
These heavy molecules are purged by a steady flow of gas, which reduces any tendency to deposit material on the chamber walls. However cleaning of such a chamber can still sometimes be a routine maintenance chore. | {
"domain": "physics.stackexchange",
"id": 76041,
"tags": "experimental-physics, solid-state-physics, plasma-physics"
} |
Error while installing ROS Base Install from SVN on Ubuntu (BeagleBoard) | Question:
I'm trying to install ROS from SVN on my BeagleBoard (with Ubuntu 10.04 on it) using instructions from Wiki. I previously installed the ROS-only version without problems. Now, as I wanted to install the kinect-stack, I wanted to "upgrade" to ros-base. But after executing
rosinstall ~/ros "http://packages.ros.org/cgi-bin/gen_rosinstall.py?rosdistro=cturtle&variant=base&overlay=no"
I got:
ERROR: woah, unknown installation method hg not installed, cannot create a hg vcs client
Any clues?
Originally posted by tom on ROS Answers with karma: 1079 on 2011-02-18
Post score: 0
Answer:
Seems like you need to install mercurial.
Originally posted by Yogi with karma: 411 on 2011-02-18
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by tom on 2011-02-18:
Thanks, whatever it is, sudo apt-get install hgsvn seems to have solved my problem. | {
"domain": "robotics.stackexchange",
"id": 4795,
"tags": "ros, beagleboard"
} |
What is "definite parity" in quantum mechanics? | Question: I am studying for an exam on quantum mechanics, and have come across something which I don't understand. The problem is:
We have a symmetric potential, i.e. $V(x)=V(-x)$. If the energy eigenvalue is non-degenerate, show that the energy eigenfunction $\psi(x)$ has definite parity.
My interpretation of this:
I understand that a non-degenerate eigenvalue means that no two independent eigenstates have the same eigenvalue. I also know that if two different states correspond to the same physical state, then $\psi(x)=k\psi(-x)$, and so $k=\pm1$, which is what is meant by even or odd parity. I think that what this question means when it says "definite parity" is that we must prove that only one of these values of $k$ is possible.
My attempt:
Suppose $\psi(x)=k\psi(-x)$. Then $$\frac{-\hbar^2}{2m}\frac{d^2}{dx^2}\psi(x)+V(x)\psi(x)=E\psi(x)\\\frac{-\hbar^2}{2m}\frac{d^2}{dx^2}k\psi(-x)+V(-x)k\psi(-x)=Ek\psi(-x)$$
From here, using the transformation $x\to-x$, and cancelling $k$, we get back to the original equation, so I haven't really made any progress, and I don't know what to do next.
My question:
Is my interpretation of definite parity correct? It is probably quite useful to understand the question fully before attempting it. Also, are there any hints anyone can offer for me to solve this? I'd prefer if you didn't give the full solution away straight away so that I can solve it using a hint.
Thanks!
Answer: Yes, that is what 'definite parity' means - it says that $\psi$ is an eigenfunction of the parity operator, without committing to either eigenvalue. Perhaps some examples say it best:
$f(x) = x^2$ has definite parity
$f(x) = x^3$ has definite parity
$f(x) = x^2+x^3$ doesn't.
In terms of the question you've been set, it's important to note that the condition that the energy eigenvalue be non-degenerate is absolutely crucial, and if you take it away the result is in general no longer true. Again, as an example, consider
$$\psi(x) = A\cos(kx -\pi/4)$$
as an eigenfunction of a free particle in one dimension: the hamiltonian has a symmetric potential, and yet here sits a non-symmetric wavefunction. Of course, this is because the same eigenvalue, $\hbar^2k^2/2m$, sustains two separate orthogonal eigenfunctions of definite, and opposite, parity,
$$\psi_1(x) = A\sin(kx) \ \ \text{and} \ \ \psi_2(x) = A\cos(kx),$$
which takes the eigenspace out of the hypotheses of your theorem.
So, how do you use the non-degeneracy of the eigenvalue? Well, the non-degeneracy tells you that any two eigenfunctions are linearly dependent, and you're only given one wavefunction to start with, so somehow you need to flesh that set out to two, and then move on with the argument. | {
"domain": "physics.stackexchange",
"id": 40158,
"tags": "quantum-mechanics, schroedinger-equation, parity"
} |
Publish Float64MultiArray from command line - ROS2 | Question:
I can't seem to figure this out:
ros2 topic pub /gripper_controller/commands std_msgs/msg/Float64MultiArray lay [TAB COMPLETE]
Now the message is auto-populated like this. I just change the data field:
ros2 topic pub /gripper_controller/commands std_msgs/msg/Float64MultiArray layout:\
\ \ dim:\ []\
\ \ data_offset:\ 0\
data:\ [0.2]\
^ When I try that, the error is:
yaml.scanner.ScannerError: mapping values are not allowed here
  in "<unicode string>", line 1, column 13:
    layout: dim: [] data_offset: 0 data: [0.2]
Similar issues if I use the rqt message publisher.
Originally posted by AndyZe on ROS Answers with karma: 2331 on 2021-08-03
Post score: 0
Answer:
Ah, found an example in ros2_control_demos:
ros2 topic pub /gripper_controller/commands std_msgs/msg/Float64MultiArray "data:
- 0.1"
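As a side note, the original failure is purely a YAML-syntax issue: nested fields cannot be written on one line as plain scalars, though a flow-style mapping can express them. A quick check (a sketch assuming PyYAML is installed, which is also what the ros2 CLI likely uses to parse the message argument):

```python
import yaml  # PyYAML; assumed available

# The flattened one-line attempt: a nested mapping written as a plain
# scalar, which is exactly what triggers "mapping values are not allowed here".
bad = "layout: dim: [] data_offset: 0 data: [0.2]"
try:
    yaml.safe_load(bad)
    bad_parses = True
except yaml.YAMLError:
    bad_parses = False

# Flow-style YAML expresses the same nesting unambiguously on one line:
good = "{layout: {dim: [], data_offset: 0}, data: [0.2]}"
msg = yaml.safe_load(good)
```

So a flow-style argument such as "{data: [0.1]}" should also be accepted by ros2 topic pub.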
Originally posted by AndyZe with karma: 2331 on 2021-08-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 36769,
"tags": "ros"
} |
Why does gravity travel at the speed of light? | Question: In electromagnetism, Maxwell's equations predict that electromagnetic disturbances travel at the speed $$c = \frac{1}{\sqrt{\mu_0 \epsilon_0}}.$$ Does general relativity predict that gravitational waves travel at the same speed or is this "put in by hand", that is, is there a wave equation with constants that presumably include the gravitational constant $G$ and other constants?
Answer: I'd like to add to Cosmas Zachos's Answer; but first, one can find a great deal more detail about the derivation of the wave equation for weak field gravitational waves as given in Cosmas's summary in Chapter 9 "Gravitational Radiation" of the book "A First Course in General Relativity" by Bernard Schutz.
Cosmas's answer is correct, but I'd like to point out that there is a sense wherein GTR is constructed from the outset to have a gravitational wave propagation speed of $c$, and one can also think of Maxwell's equations in the same way too. That is, their wave propagation speeds both equally derive from thoughts and postulates crafted in the special theory of relativity (or at least one can make this theoretical postulate and, so far, witness experimental results consistent with it).
The reason that the weak field analysis in Cosmas's answer / Schutz chapter 9 yields a D'Alembertian with a wave speed of $c$ is that, from the outset, General Relativity is postulated to be encoded as field equations constraining the Ricci tensor part (i.e. the volume variation encoding part) of the curvature tensor in a manifold that is locally Lorentzian. The "locally Lorentzian" bit is the clincher, here: to every observer there is a momentarily co-moving inertial frame wherein the metric at the observer's spacetime position is precisely $\mathrm{d} \tau^2 = c^2\,\mathrm{d} t^2 - \mathrm{d}x^2-\mathrm{d}y^2-\mathrm{d}z^2$ when we use an orthonormal basis for the tangent space at the point in Riemann normal co-ordinates. Of course, not every theory that takes place on a locally Lorentzian manifold has to yield a wave equation, but the weak field vacuum Einstein equations have a structure that does yield this equation, as in Cosmas's answer. The fact that the constant in that equation is $c$ hearkens right back to the local Lorentzian character of the scene (the manifold) of our theoretical description.
One can think of Maxwell's equations in exactly the same way, and there are several ways one can go about this. But once you postulate that Maxwell's equations are Lorentz covariant, then the constant in the wave equation that arises from them has to be $c$. One can begin with the Lorentz force law and postulate that the linear map $v^\mu \mapsto q\,F^\mu_\nu\,v^\nu$ that yields the four force from the four velocity of a charged particle acted on by the electromagnetic field $F$ is a mixed tensor. Now take Gauss's law for magnetism $\nabla\cdot B=0$ and postulate that it is true for all inertial observers. From this postulate, one derives $\mathrm{d} F=0$ if $F$ does indeed transform as a tensor. Do the same for Gauss's law for electricity $\nabla\cdot E=0$ in freespace and derive $\mathrm{d} \star F = 0$. From these two equations and Poincaré's lemma we get $\mathrm{d}\star \mathrm{d} A=0$, which contains D'Alembert's equation (with a one-form $A$ existing such that $\mathrm{d} A=F$, which is where Poincaré's lemma is deployed). And, from the Lorentz covariance postulated at the outset, the wavespeed in this D'Alembert equation has to be $c$.
So, in summary, the fact that the wavespeed is exactly the same for both theories arises because they have a common "source" theory, namely, special relativity as the beginning point whence they come.
The OP has probably already realized this, but, for the sake of other readers, there is really only one electromagnetic constant just as there is only one gravitational constant $G$. The current SI units define exact values for the speed of light $c$, the Planck constant $\hbar$, and the unit electrical charge $e$; the electric and magnetic constants $\epsilon_0$ and $\mu_0$ are related to the experimentally-measured fine structure constant, $\alpha = \frac{e^2}{4\pi\epsilon_0}\frac{1}{\hbar c} = \frac{e^2}{4\pi} \frac{\mu_0 c}{\hbar}$.
As in Cosmas's answer, $\epsilon_0$ doesn't come into the wave equation just as $G$ doesn't come into the gravitational wave D'Alembert equation, and $\epsilon_0$ (or equivalently $\mu_0$ or $\alpha$) is only directly relevant when Maxwell's equations describe the coupling of the field vectors to charge. | {
"domain": "physics.stackexchange",
"id": 45432,
"tags": "general-relativity, speed-of-light, gravitational-waves, causality"
} |
Understanding Voss-McCartney pink noise generation algorithm | Question: I'm implementing the Voss-McCartney pink noise generation algorithm.
If you follow the link above, you can read:
from James McCartney 2 Sep 1999 21:00:30 -0600:
The top end of the spectrum wasn't as good. The cascade of sin(x)/x
shapes that I predicted in my other post was quite obvious.
Ripple was only about 2dB up to Fs/8 and 4dB up to Fs/5. The response
was about 5dB down at Fs/4 (one of the sin(x)/x nulls), and there was
a deep null at Fs/2.
(These figures are a bit rough. More averaging would have helped.)
You can improve the top octave somewhat by adding a white noise
generator at the same amplitude as the others. Which fills in the
diagram as follows:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
x x x x x x x x x x x x x x x x
x x x x x x x x
x x x x
x x
x
It'll still be bumpy up there, but the nulls won't be as deep.
If I understand it well, this algorithm generates pink noise by adding random (white?) noise sources at different frequencies1
However, I don't fully understand the explanation given in the quote above for the extra white noise generator on the "top row". Can someone clarify how/why it improves the algorithm? Does that make it a good algorithm for pink noise generation for audio applications? Especially, shouldn't I discard the first samples until all the "rows" were mixed into the signal (in the ASCII art quoted above, that would mean discarding 15 first samples)?
1 I'm not sure of the wording here. Do not hesitate to correct me if I'm wrong
Answer: So let's look at what the author of the article you linked to says further down;
Output samples are on the top row, and are the sum of all the other rows at that time.
Output /---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\
\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/
Row -1 /---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\/---\
\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/\___/
Row 0 /--------\/--------\/--------\/--------\/--------\/--------\/--------\/--------\/--------\
\________/\________/\________/\________/\________/\________/\________/\________/\________/
Row 1 --------------\/------------------\/------------------\/------------------\/--------------
______________/\__________________/\__________________/\__________________/\______________
Row 2 ------------------------\/--------------------------------------\/------------------------
________________________/\______________________________________/\________________________
Row 3 --------------------------------------------\/--------------------------------------------
____________________________________________/\____________________________________________
Row 4 ------------------------------------------------------------------------------------\/----
____________________________________________________________________________________/\____
This means that the above diagram has multiple different white sequences, which only change occasionally – let's formalize that. Start with only the two top rows:
Row -1 is simply white noise
Row 0 is white noise, interpolated by a factor of 2 with a 2-sample-boxcar-filter / sample-and-hold. That gives that noise an (aliased) sinc shape, which is essentially a low-pass shape
Row 1…N do the same, with the sincs becoming narrower by factors of 2.
Thinking about the discrete PSD of this:
Row -1 has a constant discrete PSD
Row 0 adds sinc(2f)²-shaped power to that
Row 1 adds sinc(4f)²-shaped power to that
and so on
All in all, I don't have a proof that this becomes perfectly pink at hand, it probably doesn't within finite observation, but it's kind of intuitive to think that close to 0 Hz, all the main lobes of these sinc²s add up, and with every doubling of frequency, you get closer to the zeros of more sinc²s.
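The staircase scheme above can be sketched in a few lines (an illustrative implementation, not the original poster's code; uniform amplitudes and the trailing-zeros update rule are the usual Voss–McCartney choices):

```python
import numpy as np

def voss_mccartney(n_samples, n_rows=16, seed=0):
    """Sum of held white rows: row k redraws every 2**k samples,
    plus one extra white generator per sample for the top octave."""
    rng = np.random.default_rng(seed)
    rows = rng.uniform(-1, 1, n_rows)             # currently held row values
    out = np.empty(n_samples)
    for i in range(n_samples):
        if i > 0:
            k = (i & -i).bit_length() - 1         # trailing zeros of i
            if k < n_rows:
                rows[k] = rng.uniform(-1, 1)      # redraw exactly one row
        out[i] = rows.sum() + rng.uniform(-1, 1)  # + top-row white generator
    return out / (n_rows + 1)                     # keep roughly within [-1, 1]
```

Adjacent samples share all but one of the held rows, so the output is strongly low-frequency dominated, as the sinc² argument above suggests.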
The proposed algorithm really doesn't seem so elegant – generating good (discrete) white (pseudorandom) noise is actually surprisingly hard for longer observational windows (which is what you need if you want to assess the quality of something), and hence, having a pseudorandom generator¹ run at asymptotically twice the sampling rate seems more effort than letting it run at the sampling rate and then using an appropriate low-pass filter that approximates the desired spectral shape (in this case, $\lvert H(f)\rvert \propto \frac1f$). At least on modern CPUs, which have excellent SIMD instructions (i.e. highly optimized for running filters, not so much for running pseudo-random noise generators), the only real difference between holding and adding up many noise values and running a FIR is that the FIR requires multiplying the held values by constants (the filter taps) – and that can typically be done in a fused multiply-accumulate operation.
Now, on an ASIC or FPGA, things might look different; if the amplitude distribution of the noise doesn't matter (i.e. there's no need to add up anything but uniformly drawn, uncorrelated samples), then you can actually save on complexity by doing the "simpler" thing, i.e. the logical operations needed to generate e.g. XOROSHIRO128** could very likely be clocked much higher than the multipliers needed for a nice FIR filter.
¹you don't need multiple generators – you just ask that one white one more often; white samples are uncorrelated in every subsampling! | {
"domain": "dsp.stackexchange",
"id": 8121,
"tags": "noise, algorithms"
} |
Parallel Algorithm Pseudocode: Helman-JaJa Listrank | Question: What would Helman-JaJa listrank pseudocode be like? I tried looking around but all I found were "prosecode" descriptions (eg pp. 18-19 here) which I find kinda hard to follow.
Answer: Helman-JáJá
Helman-JáJá's is a beautiful parallel algorithm, but the original paper (if I may say so) does not do a particularly great job of communicating some of that beauty to non-experts (this, incidentally, is also my only criticism of JáJá's wonderful text on parallel algorithms). Technically, it is a prefix computation (a.k.a. running totals) algorithm, and list ranking happens to be just one such prefix computation where it might be useful.
I'll quote the original paper (pp. 43-44) and give rough, very high-level pseudocode with comments, and a super brief summary in my own words of what the step aims to do. The paper assumes that it is operating on two types, viz. List and Sublist. If you're planning to implement this (a great concurrent programming exercise), these could be structs or classes, depending on the language you use (I recommend Cilk, but feel free to use what you like).
Compute Processor Sums
(1) Processor $P_i (0\leq i\leq p - 1)$ visits the list elements with array indices $\frac{in}{p}$ through $\frac{(i + 1)n}{p} - 1$ in order of increasing index and computes the sum of the successor indices. [...] [The] negative successor index [that] denotes the terminal list element [...] is replaced by the value ($-s$) for future convenience. Additionally, as each element of List is read, the value in the successor field is preserved by copying it to an identically indexed location in the array Succ. The resulting sum of processor indices is stored in location $i$ of the array $Z$.
Summary: Partition the list evenly among the processors and let each processor compute the sum of successors. The summation is basically a reduction; tons of scope for exploiting parallelism if you know the right algorithms. The following part (ignoring the negative successor index and replacing it with $-s$, the array $Z$) is something you may find useful as implementation detail.
Pseudocode:
parforeach processor i operating on its partition of elements j:
Z[i] <- Σj
copy the successor of j into Succ, aligned with List by index
Find the Head
(2) Processor $P_0$ computes the sum $T$ of the $p$ values in the array $Z$. The index of the head of the list is then $h = (\frac{1}{2}n(n - 1)) - T$
Summary: Reduce the processor-sums array to a single sum. The HJ paper treats this as a serial step executed only by the master thread/processor 0, but this needn't be. The formula finds the head, given the (+)reduction of the entire array (the paper covers why on p. 42, but the short version is that the head of the entire list is no one's successor, so it will be the only element 'missing' from the sum of all successors from $0$ to $n - 1$, which is given by $\frac{1}{2}n(n - 1)$).
Pseudocode:
processor 0:
T <- ΣZ[i]
h <- n(n - 1)/2 - T
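A quick way to convince yourself of the formula: the successor indices of all elements cover every index from $0$ to $n-1$ exactly once except the head, so subtracting their sum $T$ from $0+1+\dots+(n-1)=\frac{1}{2}n(n-1)$ leaves exactly the head's index. A tiny sketch (Python, names mine):

```python
import random

n = 8
order = list(range(n))
random.shuffle(order)                  # order[0] is the head of the list

# every index appears exactly once as a successor, except the head
T = sum(order[i + 1] for i in range(n - 1))
h = n * (n - 1) // 2 - T
assert h == order[0]                   # the "missing" index is the head
```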
Partition the List
(3) For $j=\frac{is}{p}$ up to $\frac{(i + 1)s}{p} - 1$, processor $P_i$ randomly chooses a location $x$ from the block of list elements with indices $(j - 1)\frac{n}{(s - 1)}$ through $j\frac{n}{(s - 1)} - 1$ as a splitter which defines the head of a sublist in List (processor $P_0$ chooses the head of the list as its first splitter). This is recorded by setting Sublists[j].head to $x$. Additionally, the value of List[x].successor is copied to Sublists[j].scratch, after which List[x].successor is replaced with the value ($-j$) to denote both the beginning of a new sublist and the index of the record in Sublists which corresponds to its sublist.
Summary: Pick random indices to split the list into $s$ sublists. Everything else in the paragraph is either mathematical formulations for the indices in each partition, or how you could encode the information you would need for the subsequent steps.
Pseudocode:
parforeach processor i:
if (i = 0):
x <- head -- x is the splitter
else:
x <- random index in the processor's partition
Record the head of the sublist and its immediate successor
Compute Local Prefixes
(4) For $j=\frac{is}{p}$ up to $\frac{(i + 1)s}{p} - 1$, processor $P_i$ traverses the elements in the sublist which begins with Sublists[j].head and ends at the next element which has been chosen as a splitter (as evidenced by a negative value in the successor field). For each element traversed with index $x$ and predecessor $pre$ (excluding the first element in the sublist), we set List[x].successor = -j to record the index of the record in Sublists which corresponds to that sublist. Additionally, we record the prefix value of that element within its sublist by setting List[x].prefix_data = List[x].prefix_data ⊗ List[pre].prefix_data. Finally, if $x$ is also the last element in the sublist (but not the last element in the list) and $k$ is the index of the record in Sublists which corresponds to the successor of $x$, then we also set Sublists[j].successor = k and Sublists[k].prefix_data = List[x].prefix_data. Finally, the prefix_data field of Sublists[0], which corresponds to the sublist at the head of the list is set to the prefix operator identity.
Summary: Compute the local running totals within each sublist. The rest of this long paragraph is bookkeeping to record the sublist prefix sums and track how the sublists are arranged (we're going to use it momentarily).
Pseudocode:
parforeach processor i:
Compute the local prefix sum within i's partition of the list
Store the local prefix at the next head
Mark the next sublist as the ith partition's successor
Order the Heads
(5) Beginning at the head, processor $P_0$ traverses the records in the array Sublists by following the successor pointers from the head at Sublists[0]. For each record traversed with index $j$ and predecessor $pre$, we compute the prefix value by setting Sublists[j].prefix_data = Sublists[j].prefix_data ⊗ Sublists[pre].prefix_data.
Summary: Traverse from head to head, computing the prefix sums of the stored prefix sums. In effect, this gives a relative prefix of the heads. The bookkeeping we'd done before helps us identify 'successor-heads' for this step.
Pseudocode:
processor 0:
j <- the head of the second sublist
pre <- head -- Sublists[0] is the head of the overall list
while(sublists remain):
Sublists[j].prefix_data <- Sublists[pre].prefix_data ⊗ Sublists[j].prefix_data
pre <- j
j <- next head after j
Compute All Prefix Sums
(6) Processor $P_i$ visits the list elements with array indices $\frac{in}{p}$ through $\frac{(i + 1)n}{p} - 1$ in order of increasing index and completes the prefix computation for each list element $x$ by setting List[x].prefix_data = List[x].prefix_data ⊗ Sublists[-(List[x].successor)].prefix_data. Additionally, as each element of List is read, the value in the successor field is replaced with the identically indexed element in the array Succ.
Summary: Revisit all the sublists, adding the relative prefix of the head to each element. To use the list ranking example, if something was ranked 2 in its sublist, and its head (0 locally) got a relative rank of 5, add 5 to every element of this sublist to get their 'global' ranks (in the big list). Any of our bookkeeping that didn't help us in step (5) become useful in step (6), with -(List[x].successor) serving as an indicator of sublist membership. The last line is about restoring the 'true' successors, since we mutated List[i].successor.
Pseudocode:
parforeach processor i:
foreach element x in its partition:
List[x].prefix_data <- List[x].prefix_data ⊗ Sublists[-(List[x].successor)].prefix_data -- Add the head's relative rank
List[x].successor <- Succ[x] -- restore the true successors
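To make the data flow of the six steps concrete for the list-ranking case, here is a serial Python sketch of the same two-level scheme (the names are mine, not the paper's; the parallel loops of steps (1), (4) and (6) are simulated as ordinary loops over splitters and elements):

```python
import random

def hj_listrank(succ, head, s=4):
    """Serial sketch of two-level Helman-JaJa list ranking.

    succ[i] is the index of i's successor (-1 for the terminal element);
    returns rank[i] = distance of element i from the head.
    """
    n = len(succ)
    # step (3): pick s random splitters; the true head is always one of them
    splitters = {head} | set(random.sample(range(n), min(s, n)))
    local_rank = [0] * n
    sub_of = [0] * n            # which splitter heads each element's sublist
    sub_next, sub_len = {}, {}
    # step (4): walk each sublist, recording local (within-sublist) ranks
    for sp in splitters:
        x, r = sp, 0
        while True:
            sub_of[x], local_rank[x] = sp, r
            nxt = succ[x]
            if nxt == -1 or nxt in splitters:
                sub_len[sp], sub_next[sp] = r + 1, nxt
                break
            x, r = nxt, r + 1
    # step (5): one serial pass over the (few) sublist heads
    head_rank, x, acc = {}, head, 0
    while x != -1:
        head_rank[x] = acc
        acc += sub_len[x]
        x = sub_next[x]
    # step (6): each element = its head's global rank + its local rank
    return [head_rank[sub_of[i]] + local_rank[i] for i in range(n)]

# usage: a random 12-element linked list stored as a successor array
n = 12
order = list(range(n))
random.shuffle(order)
succ = [-1] * n
for i in range(n - 1):
    succ[order[i]] = order[i + 1]
ranks = hj_listrank(succ, order[0])
assert all(ranks[order[i]] == i for i in range(n))
```

In a real implementation each iteration of the step (4) loop runs on its own processor, and step (6) is a fully independent per-element pass.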
If you're reading this carefully, and especially if you've done concurrent programming before, you might have spotted an excellent opportunity to execute the last step as a massively-parallel operation. | {
"domain": "cs.stackexchange",
"id": 21263,
"tags": "algorithms, randomized-algorithms, parallel-computing, linked-lists, pseudocode"
} |
AWGN : Recombining AWGN to obtain new AWGN | Question: I am a newbie to AWGN.
My professor has given me the task of taking 4 different AWGN (0 mean and SD of 1) channels (let's say each channel has 100 samples). I recombine these noise samples by adding them all together, then divide the result by the square root of the channel count. In this case I have 4 channels, so I divide the newly generated channel by 2. As I understand it, this new channel has the same mean and SD as each of the channels that were recombined to generate it. I then combine this new AWGN with my signal and study a decoder's performance using signal + newly created AWGN. I will be adding the noise to a BIAWGN channel, so my signal will be +1 or -1 and noise will be added on top of that.
My questions are as follows:
1. What could be the motivation of doing this. Will this help me in any way?
2. Does anyone know of any research papers that employ this method when studying a decoder's performance?
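As a quick numerical sanity check of the recombination described above (a sketch using only the standard library): dividing the sum of $k$ independent unit-variance channels by $\sqrt{k}$ restores unit variance, since variances of independent terms add.

```python
import math
import random

random.seed(0)
k, n = 4, 50_000                          # 4 AWGN channels, n samples each
channels = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

# recombine: sum the channels sample-wise, divide by sqrt(k) = 2
combined = [sum(col) / math.sqrt(k) for col in zip(*channels)]

mean = sum(combined) / n
var = sum((x - mean) ** 2 for x in combined) / n
assert abs(mean) < 0.05 and abs(var - 1.0) < 0.05   # still ~N(0, 1)
```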
Answer: My guess is your professor is trying to teach you about how noise in a dual-pol QAM system affects decoder performance. In such a system, there are 4 uncorrelated AWGN noise channels (XI,XQ,YI,YQ). It could also be that your professor is trying to teach you about linear impairments and their properties. | {
"domain": "dsp.stackexchange",
"id": 7313,
"tags": "noise"
} |
Quantum mechanical description of how bonding works? | Question: I am trying to conceptualize chemical bonding, and really be able to explain to someone with lots of questions how exactly it works. I am having a hard time conceptualizing what the bonding actually comes from, or how it results in what at a macroscopic scale appears as a "sticking together".
Wikipedia says:
All bonds can be explained by quantum theory... More sophisticated theories are valence bond theory... and molecular orbital theory which includes linear combination of atomic orbitals and ligand field theory.
I took physics in college but never quantum mechanics, so I am not sure if I covered (or just forgot) valence bond theory or molecular orbital theory and the like.
In molecular orbital theory, electrons in a molecule are not assigned to individual chemical bonds between atoms, but are treated as moving under the influence of the atomic nuclei in the whole molecule. Quantum mechanics describes the spatial and energetic properties of electrons as molecular orbitals that surround two or more atoms in a molecule and contain valence electrons between atoms.
A bonding orbital concentrates electron density in the region between a given pair of atoms, so that its electron density will tend to attract each of the two nuclei toward the other and hold the two atoms together.
Not seeing a clear explanation of why they stick together. Can you please elaborate with the technical details on why the electron/orbital stuff leads to a "sticking together"? Though this sort of is a start:
The release of energy (and hence stability of the bond) arises from the reduction in kinetic energy due to the electrons being in a more spatially distributed (i.e. longer de Broglie wavelength) orbital compared with each electron being confined closer to its respective nucleus....
These newly added electrons potentially occupy a lower energy-state (effectively closer to more nuclear charge) than they experience in a different atom.
But can you go deeper than that?
Initial answer/question: The electron orbitals combine, causing the sticking together. Why do electron orbitals combine? Because of some rules of quantum mechanics which I am unsure about (but there are clear rules somewhere in the textbooks). Why does the combining cause sticking together? I am unsure how to explain any further, and not sure a textbook answers that. What is the sticking together actually? I am not sure how to answer this without circular reasoning.
By concentrating the electron density in a certain way, and because electrons are negatively charged as opposed to the positive nucleus, it's like a sort of magnet is the best I can put it. The positive and negative charges of the nuclei/electrons are basically like magnets. But how does magnetism lead to a stickiness then? Arg, I am not sure of that either. I definitely didn't study quantum mechanics of magnetism, I know that.
Another way I can try to explain it is with knots. Somehow the mathematical patterns and symmetries that emerge at the quantum scale lead to a sort of interwoven knot sort of system, you can imagine. The interweaving of the symmetries is what bonding is. (But I know that is probably mostly inaccurate. But you can see that weaving leads to knots which leads to stickiness, so it is the type of explanation I would like to find, if possible)
Answer: This is a long comment, as answering your question properly needs a course in quantum mechanics.
The basic reason atoms form is that the 1/r Coulomb potential, which macroscopically gives the attraction between positive and negative charges, at the quantum level gives rise to the atomic orbitals. If there were no quantum mechanics, the electrons would fall onto the nucleus and neutralize it, and no atoms would exist. It is one of the reasons quantum mechanics had to be postulated.
Orbitals are probability loci in (x,y,z,t) where an electron can be found if measured.
Here are the possible orbitals for the hydrogen atom.
So the negatively charged electron will be in a specific location (x,y,z) at a specific time. This allows the positive charge of the nucleus to affect other charges within its range, and the orbitals of different molecules can fit, like Lego blocks, positive attracting negative; i.e. a new potential can be found – complicated because of the shielding by the electron orbitals – that can attract molecule to molecule, due to the topology of the orbitals, which allows long-range windows of positive electric field.
This new (not 1/r) potential would be too complicated to solve with the Schrödinger equation, so new mathematical models are used to fit and predict the bonding of atoms and molecules, as summarized in the link by Roger
"domain": "physics.stackexchange",
"id": 82435,
"tags": "atomic-physics, quantum-chemistry"
} |
C++ Random Phone Number Generator | Question: I was tinkering around with C++ after about 3 days of learning and decided to make a random phone number generator.
This is the code I came up with
#include <iostream>
#include <string>
#include <ctime>
#include <cstdlib>
using namespace std;
int getrandomdigit();
int main()
{
srand(time(NULL));
string input;
int digit1_1;
int digit1_2;
int digit1_3;
int digit2_1;
int digit2_2;
int digit2_3;
int digit2_4;
cout << "Enter three digits(area code): ";
getline(cin, input);
cout << "\n";
cout << "\tAvailable numbers in your area.";
cout << "\n\t******************************";
cout << "\n\n";
for(int i = 0; i < 10; ++i)
cout << "\nPhone number: " << input << "-" << getrandomdigit() << getrandomdigit()
<< getrandomdigit() << "-" << getrandomdigit()
<< getrandomdigit() << getrandomdigit() << getrandomdigit();
cout<<"\n";
return 0;
}
int getrandomdigit()
{
return rand() % 10;
}
So I believe I understand the logic of how the random number generation works. But before I added the function getrandomdigit(), I was using the manually defined "digit" vars in an attempt to generate pseudorandom numbers. So my question remains: why is it that when you use the integer variables, the numbers AREN'T randomly generated? For instance, if you were to enter an area code, it would just print the same 2 prefixes 10 times, but when you use the function, it gives you a completely random value every time the loop runs, until 10.
Here's a visual example of using the digit vars
Enter three digits(area code): 203
Available numbers in your area.
******************************
Phone number: 203-523-3023
Phone number: 203-523-3023
Phone number: 203-523-3023
Phone number: 203-523-3023
Phone number: 203-523-3023
Phone number: 203-523-3023
Phone number: 203-523-3023
Phone number: 203-523-3023
Phone number: 203-523-3023
Phone number: 203-523-3023
Then here is an example of using the getrandomdigit function
Enter three digits(area code): 203
Available numbers in your area.
******************************
Phone number: 203-696-3428
Phone number: 203-832-2940
Phone number: 203-293-9390
Phone number: 203-483-3935
Phone number: 203-616-6955
Phone number: 203-089-8674
Phone number: 203-953-6456
Phone number: 203-516-2123
Phone number: 203-800-7836
Phone number: 203-624-7130
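The behaviour in the two runs above comes down to when the random draw happens: an int variable stores the result of one call to rand(), so printing it ten times prints that one stored value ten times, whereas calling getrandomdigit() re-runs rand() on every use. A minimal sketch of that evaluation-order difference (in Python rather than C++, purely for brevity; the mechanism is the same):

```python
import random

random.seed(7)

digit = random.randrange(10)            # drawn once, then frozen in a variable
stored = [digit for _ in range(5)]      # the same stored value, five times
fresh = [random.randrange(10) for _ in range(5)]   # a new draw on every use

assert len(set(stored)) == 1            # variables don't re-roll themselves
```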
Answer: Here are some observations that may help you improve your code.
Don't abuse using namespace std
Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid.
Eliminate unused variables
Unused variables are a sign of poor code quality, so eliminating them should be a priority. In this code, none of the digit_ variables are ever actually used. My compiler also tells me that. Your compiler is probably also smart enough to tell you that, if you ask it to do so.
Consider using a better random number generator
Because you're using a compiler that supports at least C++11, consider using a better random number generator. In particular, instead of rand, you might want to look at std::uniform_int_distribution (digits are integers, so the integer distribution is the right fit) and friends in the <random> header.
Use string concatenation
The main function includes these lines:
std::cout << "\n";
std::cout << "\tAvailable numbers in your area.";
std::cout << "\n\t******************************";
std::cout << "\n\n";
Each of those is a separate call to operator<< but they don't need to be. Another way to write that would be like this:
std::cout << "\n"
"\tAvailable numbers in your area."
"\n\t******************************"
"\n\n";
This reduces the entire menu to a single call to operator<< because consecutive strings in C++ (and in C, for that matter) are automatically concatenated into a single string by the compiler.
Consider improving names
The variable input is not very descriptive. Perhaps areaCode would be a better name.
Add error checking
The program appears to assume that the user will enter a valid 3-digit area code, but no provision is made to assure this. A robust program always checks user input and provides error checking and handling.
Use object orientation
Because you're writing in C++, it would make sense to have a class such as PhoneNumber to encapsulate the details of your implementation. You may not yet have learned about objects or classes, but they're one of the main strengths of C++ and something you should learn soon if you haven't already. Use objects where they make sense.
Understand the problem domain
In the US and Canada (which, by the formatting of the phone number, seems to be where this is intended to be used), phone numbers are created according to the North American Numbering Plan. In that plan, the second grouping of numbers is called the Central office code. That three-digit number must start with a number in the range of 2 through 9 and cannot start with a 0 or 1. Your current program does not obey this scheme and generates invalid numbers.
"domain": "codereview.stackexchange",
"id": 29934,
"tags": "c++, c++11, random"
} |
How to make Energy from radioactive material | Question: I have seen that tritium hitting phosphorus emits light, and a solar cell collects it to form a "battery" of a sort. But are you able to extract (for example) americium from a smoke detector and use it to make light/power? And could you just use phosphorus from a match, or is it more complicated? I only have a basic understanding of radioactivity.
Answer: You could technically use the americium from smoke detectors, but you'd need quite a bit of it - the activity of americium is relatively low, and the amount is quite small in each detector, in the microgram range. Source: http://www.world-nuclear.org/information-library/non-power-nuclear-applications/radioisotopes-research/smoke-detectors-and-americium.aspx
So, with the right phosphor, you could generate (a very very tiny amount of) power from americium from smoke detectors.
Also, I believe you are confusing the element phosphorus with the general concept of a phosphor. A phosphor is simply a compound that glows in response to a certain electromagnetic stimulus. Phosphorus is the element that is found on old match-heads, but it is not used in any phosphors that I know of. Most phosphors are made of transition metal or rare-earth metal compounds. Source: https://en.wikipedia.org/wiki/Phosphor. | {
"domain": "physics.stackexchange",
"id": 36591,
"tags": "energy, atomic-physics, radioactivity, proton-decay"
} |
Manipulability Measures of Serial Robots? | Question: I need to create a dataset of dexterous workspaces for a significantly large number of serial robots with varying DoFs.
My question is if there is an efficient way of measuring a robot's manipulability.
Most solutions that I have found are based on inverse kinematics, which in my particular case would take too long. Are there any time-efficient alternatives?
Answer: The most common manipulability measure is by Yoshikawa [1] which is purely kinematic, ie. it ignores dynamics such as inertia and motor torques. It is simply
$$
\sqrt{\det( J J^T)}
$$
where $J$ is the manipulator Jacobian that gives spatial velocity in world coordinates. The measure says something about how spherical the 6D velocity ellipsoid is.
The measure is a bit problematic because it is a norm of mixed units (m/s and rad/s). It can be more insightful to take the translational 3x3 part (upper left sub matrix) or the rotational 3x3 part (lower right sub matrix). This is discussed in §8.2.2 of [2].
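For intuition about what the measure rewards, it can be evaluated in closed form for a planar two-link arm, where $\sqrt{\det(JJ^T)} = l_1 l_2\lvert\sin q_2\rvert$: zero when the arm is straight (singular) and maximal at a 90° elbow. A short sketch (the function name is mine; plain Python is used here only for brevity, independently of the toolbox below):

```python
import math

def manipulability_2r(q1, q2, l1=1.0, l2=1.0):
    """Yoshikawa measure sqrt(det(J J^T)) for a planar 2R arm."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    # Jacobian mapping joint rates to end-effector (x, y) velocity
    J = [[-l1 * s1 - l2 * s12, -l2 * s12],
         [ l1 * c1 + l2 * c12,  l2 * c12]]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return math.sqrt(det * det)   # for a square J this is just |det J|

# closed form: w = l1*l2*|sin(q2)|, independent of q1
assert abs(manipulability_2r(0.3, math.pi / 2) - 1.0) < 1e-9  # best elbow
assert manipulability_2r(0.5, 0.0) < 1e-9                     # arm straight
```

Since only forward kinematics and one determinant per configuration are needed, sweeping this over a sampled workspace is cheap and easy to automate, with no inverse kinematics involved.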
Using the Robotics Toolbox for MATLAB we can easily compute this measure
>> mdl_puma560 % define a robot model and some poses
>> p560.maniplty(qn)
Manipulability: translation 0.111181, rotation 2.44949
>> p560.maniplty(qz)
Manipulability: translation 0.0842946, rotation 0
>> p560.maniplty(qs)
Manipulability: translation 0.00756992, rotation 2.44949
>> p560.maniplty(qr)
Manipulability: translation 0.00017794, rotation 0
where qr = [0, 1.5708, -1.5708, 0, 0, 0], qs = [0, 0, -1.5708, 0, 0, 0] and
qn = [0, 0.7854, 3.1416, 0, 0.7854, 0].
We can see that pose qn (elbow up, wrist pointing down) is quite well conditioned whereas qr (arm straight up) is close to singular.
If you have Denavit-Hartenberg models for all your robots then this would be an easy task to automate.
Taking dynamics into account involves measures based on the force ellipsoid and is discussed in §9.2.7 of [2]. It requires that you know the inertial parameters of the robot which in general is not the case. Use the 'asada' option to the maniplty method to compute this.
[1] Manipulability of Robotic Mechanisms, T. Yoshikawa, IJRR 1985.
[2] Robotics, Vision & Control, P. Corke, Springer, 2017.
[3] A geometrical representation of manipulator dynamics and its application to arm design. H. Asada H, 1983, J.Dyn.Syst.-T ASME 105:131. | {
"domain": "robotics.stackexchange",
"id": 1901,
"tags": "serial"
} |
Compiling errors when using ros.h in Unreal | Question:
Hi,
I am totally new to ROS and I am trying to make this tutorial happen in Unreal Engine(4.14):
http://wiki.ros.org/rosserial_windows/Tutorials/Receiving%20Messages
However, I got a lot of errors when I tried to compile ros.h in Unreal. I am using VS2015.
Please let me know if you know what happened. Thanks in advance!
The error log follows:
Error C2575 'deserialize': only member functions and bases can be virtual TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 43
Error C2660 'memcpy': function does not take 2 arguments TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 37
Error C3646 'req_param_resp': unknown override specifier TrafficProject d:\unreal projects\trafficproject\source\ros_lib\ros\node_handle.h 490
Error C2448 'rosserial_msgs::msg': function-style initializer appears to be a function definition TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 26
Error C2270 'serialize': modifiers not allowed on nonmember functions TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 30
Error C2355 'this': can only be referenced inside non-static member functions or non-static data member initializers TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 32
Error C2355 'this': can only be referenced inside non-static member functions or non-static data member initializers TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 33
Error C2355 'this': can only be referenced inside non-static member functions or non-static data member initializers TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 34
Error C2355 'this': can only be referenced inside non-static member functions or non-static data member initializers TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 37
Error C2355 'this': can only be referenced inside non-static member functions or non-static data member initializers TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 45
Error C2355 'this': can only be referenced inside non-static member functions or non-static data member initializers TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 46
Error C2355 'this': can only be referenced inside non-static member functions or non-static data member initializers TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 54
Error C2447 '{': missing function header (old-style formal list?) TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\RequestParam.h 9
Error Failed to produce item: D:\Unreal Projects\TrafficProject\Binaries\Win64\UE4Editor-TrafficProject-4209.dll TrafficProject D:\Unreal Projects\TrafficProject\Intermediate\ProjectFiles\ERROR 1
Error C2227 left of '->level' must point to class/struct/union/generic type TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 32
Error C2227 left of '->level' must point to class/struct/union/generic type TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 33
Error C2227 left of '->level' must point to class/struct/union/generic type TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 45
Error C2227 left of '->level' must point to class/struct/union/generic type TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 46
Error C2227 left of '->msg' must point to class/struct/union/generic type TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 34
Error C2227 left of '->msg' must point to class/struct/union/generic type TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 37
Error C2227 left of '->msg' must point to class/struct/union/generic type TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 54
Error C4430 missing type specifier - int assumed. Note: C++ does not support default-int TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 24
Error C4430 missing type specifier - int assumed. Note: C++ does not support default-int TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 25
Error C4430 missing type specifier - int assumed. Note: C++ does not support default-int TrafficProject d:\unreal projects\trafficproject\source\ros_lib\ros\node_handle.h 490
Error C2059 syntax error: ')' TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 23
Error C2059 syntax error: 'constant' TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 20
Error C2059 syntax error: '}' TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 64
Error C2143 syntax error: missing ';' before '{' TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\RequestParam.h 9
Error C2143 syntax error: missing ';' before '}' TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 20
Error C2143 syntax error: missing ';' before '}' TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 64
Error MSB3073 The command ""D:\Unreal Engine\UE_4.14\Engine\Build\BatchFiles\Rebuild.bat" TrafficProjectEditor Win64 Development "D:\Unreal Projects\TrafficProject\TrafficProject.uproject" -waitmutex" exited with code -1. TrafficProject C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V140\Microsoft.MakeFile.Targets 46
Error C2238 unexpected token(s) preceding ';' TrafficProject D:\Unreal Projects\TrafficProject\Source\ros_lib\rosserial_msgs\Log.h 20
Originally posted by 116Exia on ROS Answers with karma: 16 on 2017-06-05
Post score: 0
Answer:
Ok I figured it out: It is the def conflict of "ERROR" between log.h and wingdi.h. I just undef the "ERROR" in log.h and it works fine.
Originally posted by 116Exia with karma: 16 on 2017-06-06
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28060,
"tags": "ros"
} |
Problem compiling node with Geany | Question:
Hello everyone, when I compile my Node.cpp file it points me towards an error in the Node.h file
And the error it shows is
Node.h : fatal error: ros/ros.h : No file or folder of this type
I have looked into previous questions asked on this topic, so my file "package.xml" and "CmakeLists.txt" for this node seems correct to me, though i am posting them
package.xml
<?xml version="1.0"?>
<package>
<name>p_sedimentation_estimator</name>
<version>0.0.0</version>
<description>Fusion of pointcloud sonar and scan laser</description>
<maintainer email="ro@rob.com">Rob</maintainer>
<license>TODO</license>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>roscpp</build_depend>
<build_depend>std_msgs</build_depend>
<build_depend>cana_msgs</build_depend>
<!-- todo -->
<!-- The export tag contains other, unspecified, tags -->
<export>
<!-- Other tools can request additional information be placed here -->
</export>
</package>
CMakeLists.txt
cmake_minimum_required(VERSION 2.8.12)
project(p_sedimentation_estimator)
add_definitions(-std=c++11)
## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED
roscpp
std_msgs
geometry_msgs
pcl_conversions
pcl_ros
tf2_ros
tf2_bullet
cana_msgs
)
catkin_package(
# INCLUDE_DIRS include
# LIBRARIES p_sedimentation_estimator
# CATKIN_DEPENDS other_catkin_pkg
# DEPENDS system_lib
)
find_package(OpenCV REQUIRED)
set(CMAKE_INCLUDE_CURRENT_DIR ON)
set(CMAKE_AUTOMOC ON)
find_package(Qt5Core)
find_package(Boost 1.54.0 REQUIRED COMPONENTS )
###########
## Build ##
###########
## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(
${catkin_INCLUDE_DIRS}
)
file(GLOB Node_SRC
"src/*.h"
"src/*.cpp"
)
## Declare a C++ executable
add_executable(p_sedimentation_estimator ${Node_SRC})
## Specify libraries to link a library or executable target against
target_link_libraries(p_sedimentation_estimator
${catkin_LIBRARIES}
${OpenCV_LIBS}
Qt5::Core
)
#############
## Install ##
#############
# all install targets should use catkin DESTINATION variables
# See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html
## Mark executables and/or libraries for installation
install(TARGETS p_sedimentation_estimator p_sedimentation_estimator
ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
What or where could be the problem?
1) catkin_make runs without error.
2) Compiling the Node.cpp program gives this error (Node.h: fatal error: ros/ros.h: No file or folder of this type) in the Node.h file.
Is it necessary to use a separate build step as well, other than sourcing, catkin_make and compiling?
Help needed :)
Thank you in advance :)
Originally posted by Yadav on ROS Answers with karma: 5 on 2020-03-02
Post score: 0
Original comments
Comment by gvdhoorn on 2020-03-02:
Could I ask you to please format your questions properly? I've done it for you now (and in your previous questions as well), but for next time: select any code/console text/etc and press ctrl+k or click the Preformatted Text button (the one with 101010 on it).
Comment by Yadav on 2020-03-02:
@gvdhoorn Thanku so much...i will keep it in mind next time :)
Comment by mgruhler on 2020-03-02:
@Yadav If I understand your question correctly, at the end you specify three points 1), 2), 3) that you step through, right?
Could you please explain how you "compile" your node? Because this is actually what catkin_make will do for you and thus I don't think you can have 1) successfull and 2) not if you are using the standard workflow...
Comment by Yadav on 2020-03-02:
@mgruhler
Yes,catkin_make runs without error
But when i compile the code in the Node.cpp file alone, using "Geany" it shows me the error(Node.h : fatal error: ros/ros.h : No file or folder of this type) in node.h file
So by catkin_make it gets compiled but it doen't work according to the code in Node.cpp file
Standard workflow ?
Comment by mgruhler on 2020-03-03:
@Yadav then you don't have a problem with compiling your code in general, but only using geany.
I've retagged and reworded your question title as such.
Answer:
As ROS and catkin build heavily on CMake as a build system, you at least need an IDE that supports it.
Even though it is stated that Geany can use CMake, I haven't found a way to properly configure it within a few minutes of searching. You might have to use some plugins.
This is even more of an issue since ROS usually requires a few more quirks to set up.
There is a large list of IDEs and how to use them with ROS on the wiki. I suggest you try one of those if you want to compile from source. If you want more fancy stuff like jumping to external (out of your source) function definitions, those instructions might come in handy.
On another note: you tagged your question with Indigo. Indigo has been EOL for almost a year now; I suggest updating to a newer release.
Originally posted by mgruhler with karma: 12390 on 2020-03-03
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Yadav on 2020-03-10:
I could suggest to not use GEANY also :) Thanks @mgruhler @gvdhoom for the help :) | {
"domain": "robotics.stackexchange",
"id": 34525,
"tags": "ros, ide, ros-indigo"
} |
Simple harmonic motion in a non-inertial frame | Question: For simple harmonic motion in a non-inertial frame, is the amplitude the same on both sides? In one direction the pseudo-force supports the acceleration due to the spring, and in the other direction it opposes it. So how will it affect the amplitude of this motion?
Answer: Let the pseudoforce be $f$. Then the equation of motion in the non inertial frame is
$$m \ddot{x} = -kx + f = -k\left(x - \frac{f}{k}\right)$$
Change variables to $X =x - \frac{f}{k}$, then the equation of motion is
$$m\ddot{X} = -kX$$
which is the equation for shm with solution
$$X = A \cos (\omega t + \phi)$$
where $\omega = \sqrt{k/m}$ and $A$ is the amplitude of the motion. Substituting back you get
$$x = A \cos(\omega t + \phi) + \frac{f}{k}$$
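A quick numerical sanity check of this (parameters are my own, purely illustrative): integrating $m\ddot{x} = -kx + f$ directly shows turning points symmetric about $f/k$, i.e. equal amplitude on both sides of the shifted equilibrium.

```python
# Illustrative parameters: m x'' = -k x + f, so the equilibrium shift f/k = 0.5
m, k, f = 1.0, 4.0, 2.0
x, v, dt = 1.0, 0.0, 1e-4      # released from rest at x = 1
xs = []
for _ in range(100_000):       # ~3 periods (T = 2*pi*sqrt(m/k) ~ 3.14 s)
    v += (-k * x + f) / m * dt # semi-implicit Euler keeps the amplitude stable
    x += v * dt
    xs.append(x)

shift = f / k                  # centre of oscillation, as derived above
```

The trajectory swings between x = 1.0 and x = 0.0, symmetric about 0.5, confirming the amplitude is the same on both sides of the shifted centre.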
The motion is still simple harmonic; the amplitude remains the same throughout the motion, but the centre of oscillation (equilibrium position) is shifted by an amount $\frac{f}{k}$ | {
"domain": "physics.stackexchange",
"id": 70216,
"tags": "newtonian-mechanics, reference-frames, harmonic-oscillator, spring, oscillators"
} |
Merge sort a list of integers | Question: This code is meant to sort a list of integers using merge sort.
I'm doing this to improve my style and to improve my knowledge of fundamental algorithms/data structures for an upcoming coding interview.
def merge_sort(l):
if len(l) == 0 or len(l) == 1:
return
mid = len(l) // 2
left = l[:mid]
right = l[mid:]
merge_sort(left)
merge_sort(right)
merge(l, left, right)
def merge(l, left, right):
l_ind, r_ind, ind = 0, 0, 0
left_len, right_len, l_len = len(left), len(right), len(l)
while l_ind < left_len and r_ind < right_len:
if left[l_ind] < right[r_ind]:
l[ind] = left[l_ind]
l_ind += 1
ind += 1
else:
l[ind] = right[r_ind]
r_ind += 1
ind += 1
if l_ind < left_len and r_ind >= right_len:
while ind < l_len:
l[ind] = left[l_ind]
l_ind += 1
ind += 1
elif l_ind >= left_len and r_ind < right_len:
while ind < l_len:
l[ind] = right[r_ind]
r_ind += 1
ind += 1
Answer: Get rid of some len
left_len, right_len, l_len = len(left), len(right), len(l)
Get rid of these; they don't add anything in terms of readability. They are also unnecessary: the len call is not O(n), it is simply O(1).
(Maybe) a library?
Also, although Rosetta Code isn't always the most stylistic source, if you look at their solution, they use a pre-built merge. As you say you are "reinventing the wheel", maybe you don't want to do that, but it is something to consider in the future.
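For reference, here is what leaning on the standard library's merge could look like. Note this version returns a new list instead of sorting in place, which is a different design choice from the original code:

```python
from heapq import merge

def merge_sort(lst):
    """Return a new sorted list, delegating the merge step to heapq.merge."""
    if len(lst) <= 1:
        return lst[:]
    mid = len(lst) // 2
    # heapq.merge lazily merges two already-sorted iterables
    return list(merge(merge_sort(lst[:mid]), merge_sort(lst[mid:])))
```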
Naming
l_ind, r_ind ...
I would write those out in full as left_ind and right_ind. | {
"domain": "codereview.stackexchange",
"id": 22967,
"tags": "python, algorithm, reinventing-the-wheel"
} |
Heat transfer through radiation and its relation with temperature | Question: My question is: if a body and the surroundings have the exact same temperature, will that body still lose heat through radiation? If so, then what exactly is heat?
Heat is defined as energy flow between a system and another or between a system and its surroundings by virtue of the temperature difference.
So no energy flow will occur if the temperature difference between a body and its surroundings is zero, and hence no radiation should occur. But the Stefan-Boltzmann law implies that the radiated energy depends only on the absolute temperature of the body. Hence my confusion.
Another question: I learned that heat is transferred from a higher temperature body to a lower temperature body. Is this universal to all modes of heat transfer? If 2 bodies are separated by a vacuum, will the hot body lose heat through radiation and the cold body absorb that heat? So when both bodies have the same temperature, no radiation should occur, as they are in thermal equilibrium? That being said, what is considered the temperature of the vacuum that separates the 2 bodies? Does it have a temperature, or is it undefined?
Answer: I'll answer your second question first, because then your first one is easier. In short, yes, the equilibration of temperature between two bodies is absolutely universal - it doesn't depend on how the heat is transferred, and in particular it does apply to radiative transfer. And, indeed, once two bodies have reached thermal equilibrium through radiative transfer, the radiation in between them has a temperature that is the same as the temperature of the two bodies.
Let us now imagine two bodies coming into radiative equilibrium. We'll say that one of them is a hollow sphere and the other one is inside it, because then we don't have to consider the surrounding environment (which I'll get to shortly). The inner body will be giving off heat at a rate proportional to its temperature to the fourth power, and this doesn't depend on the temperature of its surroundings at all. But it's also absorbing heat, at a rate that does depend on the temperature of its surroundings. (It's proportional to the fourth power of the outer body's temperature.) So when they come into equilibrium, the inner body is giving off and receiving heat at exactly the same rate. Radiation is still occurring, but heat flow is not, because the radiation coming in cancels the radiation going out, so there's no net flow of energy. This answers your first question.
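The cancellation just described is easy to see in the Stefan-Boltzmann expression for the net exchange; a minimal black-body sketch (geometry and emissivity factors omitted for illustration):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_flux(t_body, t_surroundings):
    """Net heat flow per unit area out of a black body (illustrative sketch).

    Emitted minus absorbed: both terms scale as T^4, so equal
    temperatures give exactly zero net flow, even though both
    bodies keep radiating.
    """
    return SIGMA * (t_body**4 - t_surroundings**4)
```

At equal temperatures the function returns zero: radiation continues, but heat (net energy flow) does not.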
Often we don't consider this because we're thinking about something that's a lot hotter than its surroundings. Because $T^4$ increases very rapidly with $T$, the radiative energy transfer from the environment is often small enough to be ignored in this case. In particular, for a body in space that's not exposed to sunlight, the relevant incoming radiation is the cosmic microwave background, which has a temperature of only 3 kelvin and can be ignored for most purposes. | {
"domain": "physics.stackexchange",
"id": 19852,
"tags": "thermodynamics"
} |
Bash Code optimization For ssh login on the remote device | Question: I have the code below, which works fine as it's written in Bash; I am just posting it here to see if it can be better optimised the Bash way. I have an intermediate understanding of Bash, so I am looking for any expert advice.
Thanks in advance for the review and comments.
BASH CODE:
#!/bin/bash
#######################################################################
# Section-1 , this script logs in to the Devices and pulls all the Data
#######################################################################
# timestamp to be attached to the log file
TIMESTAMP=$(date "+%m%d%Y-%H%M%S")
# logfile to collect all the Firmware Version of C7000 components
LOGFILE_1="/home/myuser/firmware_version-${TIMESTAMP}.log"
LOGFILE_2="/home/myuser/fmv_list-${TIMESTAMP}.log"
# read is a built-in command of the Bash shell. It reads a line of text from standard input.
# -r option used for the "raw input", -s option used for Print the string prompt,
# while option -s tells do not echo keystrokes when read is taking input from the terminal.
# So, altogether it reads password interactively and save it to the environment
read -rsp $'Please Enter password below:\n' SSHPASS
export SSHPASS
for host in $(cat enc_list);
do
echo "========= $host =========";
sshpass -e timeout -t 20 ssh -o "StrictHostKeyChecking no" $host -l tscad show firmware summary ;
done | tee -a "${LOGFILE_1}"
# at last clear the exported variable containing the password
unset SSHPASS
######################################################################################
# Section-2, This will just grep the desired data from the log file as produced from
# Section-1, log file
######################################################################################
data_req="`ls -l /home/myuser/firmware_version-* |awk '{print $NF}'| tail -1`"
cat "${data_req}" | egrep '=|1 BladeSystem|HP VC' | awk '{$1=$1};1' | tee -a "${LOGFILE_2}"
Data pulled by the script, as outlined in Section 1 of the code:
========= enc1001 =========
HPE BladeSystem Onboard Administrator
(C) Copyright 2006-2018 Hewlett Packard Enterprise Development LP
OA-9457A55F4C75 [SCRIPT MODE]> show firmware summary
Enclosure Firmware Summary
Blade Offline Firmware Discovery: Disabled
Onboard Administrator Firmware Information
Bay Model Firmware Version
--- -------------------------------------------------- ----------------
1 BladeSystem c7000 DDR2 Onboard Administrator with KVM 4.85
2 OA Absent
Enclosure Component Firmware Information
Device Name Location Version NewVersion
-----------------------------------------------------------------------------------
TRAY | BladeSystem c7000 Onboard Administrator Tray | - | 1.7 | 1.7
FAN | Active Cool 200 Fan | 1 | 2.9.4 | 2.9.4
FAN | Active Cool 200 Fan | 2 | 2.9.4 | 2.9.4
FAN | Active Cool 200 Fan | 3 | 2.9.4 | 2.9.4
FAN | Active Cool 200 Fan | 4 | 2.9.4 | 2.9.4
FAN | Active Cool 200 Fan | 5 | 2.9.4 | 2.9.4
FAN | Active Cool 200 Fan | 6 | 2.9.4 | 2.9.4
FAN | Active Cool 200 Fan | 7 | 2.9.4 | 2.9.4
FAN | Active Cool 200 Fan | 8 | 2.9.4 | 2.9.4
FAN | Active Cool 200 Fan | 9 | 2.9.4 | 2.9.4
FAN | Active Cool 200 Fan | 10 | 2.9.4 | 2.9.4
BLD | HP Location Discovery Services | - | 2.1.3 | 2.1.3
Device Firmware Information
Device Bay: 1
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 09/12/2016
iLO4 2.50 Sep 23 2016
Power Management Controller 1.0.9
Device Bay: 2
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 09/12/2016
iLO4 2.50 Sep 23 2016
Power Management Controller 1.0.9
Device Bay: 3
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 05/21/2018
iLO4 2.61 Jul 27 2018
Power Management Controller 1.0.9
Device Bay: 4
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 09/12/2016
iLO4 2.50 Sep 23 2016
Power Management Controller 1.0.9
Device Bay: 5
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 09/12/2016
iLO4 2.50 Sep 23 2016
Power Management Controller 1.0.9
Device Bay: 6
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 09/12/2016
iLO4 2.50 Sep 23 2016
Power Management Controller 1.0.9
Device Bay: 7
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 09/12/2016
iLO4 2.50 Sep 23 2016
Power Management Controller 1.0.9
Device Bay: 8
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 10/25/2017
iLO4 2.70 May 07 2019
Power Management Controller 1.0.9
Device Bay: 9
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 09/12/2016
iLO4 2.70 May 07 2019
Power Management Controller 1.0.9
Device Bay: 10
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 10/25/2017
iLO4 2.55 Aug 16 2017
Power Management Controller 1.0.9
Device Bay: 11
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 10/25/2017
iLO4 2.55 Aug 16 2017
Power Management Controller 1.0.9
Device Bay: 12
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 05/05/2016
iLO4 2.40 Dec 02 2015
Power Management Controller 1.0.9
Device Bay: 13
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 12/28/2015
iLO4 2.40 Dec 02 2015
Power Management Controller 1.0.9
Device Bay: 14
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 05/05/2016
iLO4 2.40 Dec 02 2015
Power Management Controller 1.0.9
Device Bay: 15
Discovered: No
Firmware Component Current Version Firmware ISO Version
------------------------------------ -------------------- ---------------------
System ROM I36 09/12/2016
iLO4 2.70 May 07 2019
Power Management Controller 1.0.9
Interconnect Firmware Information
Bay Device Model Firmware Version
--- -------------------------- ----------------
1 HP VC Flex-10/10D Module 4.50
2 HP VC Flex-10/10D Module 4.50
The Overall output of the code is as follows:
========= enc1001 =========
1 BladeSystem c7000 DDR2 Onboard Administrator with KVM 4.85
1 HP VC Flex-10/10D Module 4.50
2 HP VC Flex-10/10D Module 4.50
Thanks
Answer:
There is no need for the final semicolon ;
loop
for host in $(cat enc_list);
$(cat ) can be written as $(< ); the latter form is a builtin and will not fork a cat command.
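On the same note, for host in $(cat file) word-splits and glob-expands each line; a while read loop is the more robust pattern (the file name here is a stand-in for your enc_list):

```shell
# demo host list (stand-in for enc_list)
printf 'enc1001\nenc1002\n' > /tmp/enc_list_demo

# read each line verbatim: IFS= keeps leading/trailing blanks, -r keeps backslashes
while IFS= read -r host; do
    echo "========= $host ========="
done < /tmp/enc_list_demo
```

This also avoids loading the whole file into memory and behaves correctly if a line ever contains glob characters.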
data
data_req="`ls -l /home/myuser/firmware_version-* |awk '{print $NF}'| tail -1`"
no need for the quotes
backticks ( ` ) are deprecated; use the $( ) construct instead.
you use ls -l then awk to filter the filename ( $NF ); just use ls | tail -1
sorting by ls won't work when the year changes.
parsing ls output is frowned upon (well, if all the files are built without spaces or newlines in their names, it might be OK)
if you still want ls sorting, use either ls -t or ls -rt to sort by date (newest first, oldest first)
use \ls to skip any aliased ls (when piped, ls will put one file per line; this can be forced by ls -1, column display can be forced with ls -C )
you use ${LOGFILE_1} above, then use a parsed ls to retrieve the file; why not use ${LOGFILE_1} again?
parsing
cat "${data_req}" | egrep '=|1 BladeSystem|HP VC' | awk '{$1=$1};1' | tee -a "${LOGFILE_2}"
grep can read files directly; this is a useless use of cat.
awk '{$1=$1};1' does nothing more than collapse runs of whitespace and trim each line; if you don't need that normalisation,
the line can be written as
egrep '=|1[ ]+BladeSystem|HP VC' "${LOGFILE_1}" | tee -a "${LOGFILE_2}"
I am pretty sure you can use public/private SSH keys with HP enclosures.
Those enclosures might give you an XML answer; it might be worth the effort to analyse and parse it using XML tools (xmlstarlet/xsltproc/xmllint) rather than awk/sed/grep. | {
"domain": "codereview.stackexchange",
"id": 38771,
"tags": "bash, linux, shell"
} |
Bloch Hamiltonian of a Hexagonal Lattice | Question: As explained here (link in the first comment. couldn't post more than 2 links here!), the Bloch Hamiltonian of a lattice is obtained as
$$h_k = \sum_je^{i\mathbf{k.R_j}}h_j$$
where $h_j$ is the hopping matrix, and $j$ runs over all lattice vectors.
For a square lattice, again as said in the linked lectures, for $h_j$ and then $h_k$ we have:
My question is about the next example in the lectures which is a hexagonal lattice. According the formula above for $h_j$, with $j$ running over all lattice vectors, the hexagonal shouldn't be any different than the square lattice above. But this is what the lecture gives for $h_k$:
This shows that $h_k$ has an extra $e^{i\mathbf{k.(a_1-a_2)}}$ term! Where does this term come from? Shouldn't we just have the $a_1,a_2,-a_1,-a_2$ translation vectors, as in the square lattice?
Answer: If you consider only $a_1, a_2, -a_1, -a_2$, you miss 2 neighbours. You also need to consider $a_1 - a_2$ and its opposite, which gives you the top-most and down-most vertices on your last diagram, respectively (starting from the central vertex).
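Explicitly, with all six nearest-neighbour vectors included and a single hopping amplitude $t$ (my notation, for illustration), the sum becomes:

```latex
h_k = t \sum_{\mathbf{R}\,\in\,\{\pm\mathbf{a}_1,\,\pm\mathbf{a}_2,\,\pm(\mathbf{a}_1-\mathbf{a}_2)\}} e^{i\mathbf{k}\cdot\mathbf{R}}
    = 2t\left[\cos(\mathbf{k}\cdot\mathbf{a}_1)+\cos(\mathbf{k}\cdot\mathbf{a}_2)+\cos\big(\mathbf{k}\cdot(\mathbf{a}_1-\mathbf{a}_2)\big)\right]
```

The third cosine is exactly the "extra" term in the lecture notes, coming from the $\pm(\mathbf{a}_1-\mathbf{a}_2)$ neighbours.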
Physically, the idea is to simulate tunnelling between nearest neighbours and ignore it for sites farther away, on the grounds that the tunnelling amplitude decays with distance. Leaving out the two sites I have just described while keeping the four other vertices of the hexagon would not be very physical: they are all at the same distance from the centre of the hexagon, and are therefore all open to tunnelling to the same extent. | {
"domain": "physics.stackexchange",
"id": 44093,
"tags": "condensed-matter, solid-state-physics, electronic-band-theory, tight-binding"
} |
Can a Hermitian operator $A$ bring a wavefunction $|\psi \rangle$ out of Hilbert space? | Question: I recently learned about an example where an operator $O$ (not Hermitian) acted on a wavefunction $|\psi \rangle$ and carried the wavefunction out of Hilbert space, i.e. $O|\psi \rangle$ is not in Hilbert space.
Can a Hermitian operator $A$ carry a wavefunction out of Hilbert space?
Up till now, I always did calculations in QM assuming that $A|\psi \rangle$ is still in Hilbert space.
Answer: Formally self-adjoint, but unbounded, operators can easily take a normalizable state (i.e. a state in the Hilbert space) and make it unnormalizable, and therefore no longer in the space. This can lead to all sorts of apparent paradoxes. For example, consider the operator $p^4= \partial_x^4$ on the infinite square well $[0,1]$. Let it act on the wavefunction
$$
\psi(x)=\sqrt{30}\, x(1-x)= \sum_{n\ {\rm odd}} \frac{\sqrt{960}}{n^3\pi^3} \phi_n(x)
$$
where $\phi_n(x)= \sqrt{2}\sin n\pi x$ are the normalized eigenfunctions of $p^2=-\partial_x^2$.
This wavefunction satisfies the usual boundary conditions of a "particle in a box," and is normalized.
Clearly, differentiating $\psi(x)$ four times gives zero, but acting with $p^4$ term by term on the eigenfunction expansion gives
$$
p^4 \psi= \sum_{n\ {\rm odd}} \sqrt{960}\, n\pi\, \phi_n(x)
$$
which has $||p^4\psi||^2 \propto \sum_{n\ {\rm odd}} n^2 = \infty$, so the resultant state is no longer in the Hilbert space.
For this reason, and others, unbounded operators are not allowed to act on all states in the Hilbert space, but instead have a domain which is at best a dense linear subspace of the Hilbert space, and which always includes the restriction that the action of the operator on any state remains in the Hilbert space. In the example above, $\psi$ is in the domain of $p^2$, as $p^2\psi$ is normalizable, but it is not in the domain of $p^4$. Therefore $p^4\ne (p^2)^2$, because the output of $p^2$ is no longer in the domain of the second $p^2$ factor.
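A quick numerical illustration (coefficients taken from the expansion above): the partial sums for $\|\psi\|^2$ converge to 1, while those for $\|p^4\psi\|^2$ grow without bound.

```python
import math

odd = range(1, 4001, 2)                 # odd n only; even coefficients vanish
# ||psi||^2 from the coefficients sqrt(960)/(n pi)^3 -- converges to 1
norm_psi_sq = sum(960 / (n * math.pi) ** 6 for n in odd)

# after p^4, each coefficient becomes sqrt(960) n pi, so the norm diverges
def partial_norm_sq(N):
    return sum(960 * (n * math.pi) ** 2 for n in range(1, N, 2))
```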
The notion of a "Hermitian" operator, although popular in intro QM courses, is unsafe, as "Hermitian" only implies that $p$ and $p^\dagger$ are given by the same differential operator. The better notion is self-adjoint, because it requires that $p$ and $p^\dagger$ are not only given by the same formula, but also have the same domain. Any quantum observable has to be self-adjoint. There are many operators that are Hermitian but do not have a complete set of eigenfunctions, and therefore have no meaning in quantum mechanics. | {
"domain": "physics.stackexchange",
"id": 61063,
"tags": "quantum-mechanics, hilbert-space, operators, wavefunction"
} |
Mapping with Kinect sensor | Question:
I am working on a robot which should eventually move to a specified location in the map autonomously (for which I am planning to use the navigation stack).
For that, I want to map the environment.
I have a Kinect sensor available, so which mapping stack should I use to generate a map (2D or 3D) that can later be fed to the navigation stack?
I can fetch the odometry ticks from the microcontroller and provide them to the mapping stack.
As the Kinect is rigidly mounted on the robot, I can publish a static transform from the Kinect to the base link.
Are these sufficient to use the mapping stack? The main question is which mapping stack to go with; any examples showing the use of the mapping stack would be a great start.
Many thanks in advance.
Originally posted by sumanth on ROS Answers with karma: 86 on 2014-08-11
Post score: 0
Answer:
For the task of reaching a goal, a 2D map is sufficient.
First generate the map by driving the robot manually. Use gmapping with a 2D laser scan and odometry
(use depthimage_to_laserscan to generate a laser scan from the Kinect data).
When you have the map, use amcl for localisation. This tutorial on Autonomous Navigation of a Known Map with TurtleBot is helpful.
Originally posted by bvbdort with karma: 3034 on 2014-08-11
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by sumanth on 2014-08-11:
Thanks,
but then where the 3D map is used..?
Comment by bvbdort on 2014-08-11:
check the videos here http://wiki.ros.org/3d_navigation. Of course you can use 3d maps for navigation. try rgdb slam but i never get chance to use 3d navigation.
Comment by sumanth on 2014-08-12:
but does navigation stack takes the map from the gmapping.?
Comment by sumanth on 2014-08-12:
Are there any exampels/tutorial to generate the map by moving robot manually . Use gmapping with 2d laser and odometry..?
Comment by bvbdort on 2014-08-12:
first you need to build the map using gmapping, then use the map for navigation. Which robot are you using? And how are you driving it?
Comment by sumanth on 2014-08-12:
I am using my custom built robot, its a differential drive mobile robot, plan to drive the robot for mapping is using commands from laptop (similar to teleop on turtlebot).
Comment by bvbdort on 2014-08-12:
start with building map by driving robot
Comment by sumanth on 2014-08-13:
@bvbdort, I tried mappin with the gmapping but was not sucessful.
Steps followed:
open the kinect with openini using "roslaunch openni_launch openni.launch".
pointcloud to the laser scan using "rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/camera/depth/image_raw"
Comment by sumanth on 2014-08-13:
3. run the gmapping with "rosrun gmapping slam_gmapping scan:=/scan tf:=/odom"
I am running the node which publishes the odom (the odometry data from the real robot).
4. Then open rviz with "rosrun rviz rviz".
But nothing I can see in rviz.
What am I missing here..?
Comment by bvbdort on 2014-08-13:
can you share rqt_graph picture for more info. i think you should start gmapping like "rosrun gmapping slam_gmapping scan:=scan _odom_frame:=/odom"
Comment by sumanth on 2014-08-13:
I have created a separate question to keep the forum organised.
link for the question: http://answers.ros.org/question/189963/gmapping-problem/ | {
"domain": "robotics.stackexchange",
"id": 18982,
"tags": "slam, navigation, mapping, kinect, gmapping"
} |
tf broadcasting rate | Question:
Hello,
I'm trying to increase the rate at which a tf transform is being published, but it seems that no matter what I do the rate stays at 1 Hz.
Even if cout prints at 10 Hz, when I tf_echo the transform it still appears to be published at 1 Hz.
I didn't find any documentation about that on ROS Answers.
Thanks!
#include <ros/ros.h>
#include <iostream>
#include <tf/transform_broadcaster.h>
int main(int argc, char** argv){
  ros::init(argc, argv, "tfpub");
  ros::NodeHandle n;
  ros::Rate r(10.0);
  tf::TransformBroadcaster broadcaster;
  while(n.ok()){
    broadcaster.sendTransform(
        tf::StampedTransform(
            tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)),
            ros::Time::now(), "map", "odom"));
    std::cout << ros::Time::now() << std::endl;
    r.sleep();
    ros::spinOnce();
  }
}
Originally posted by alonsheader on ROS Answers with karma: 3 on 2017-07-02
Post score: 0
Original comments
Comment by ufr3c_tjc on 2017-07-02:
This probably won't fix the issue, but put the ros::spinOnce(); call before the r.sleep(); call. You should never have anything after a Rate.sleep() call as this messes with loop timing.
Answer:
The rate at which tf_echo prints to the screen is a command line argument and does not reflect the underlying message rates.
If you want to determine the rate of publishing you should use tf_monitor. There's a tutorial on debugging.
Originally posted by tfoote with karma: 58457 on 2017-07-02
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by DavidN on 2017-07-02:
Totally agree. The correct way to view the rate is actually using view_frames tool
Comment by alonsheader on 2017-07-03:
thank you! | {
"domain": "robotics.stackexchange",
"id": 28271,
"tags": "transform"
} |
Why is eclipse complaining about links? | Question:
I'm trying to open costmap_2d in eclipse, but I get the following error when trying to import:
Error processing changed links in project description file. Cannot create a link to '/home/tim/ros_source/electric/navigation/costmap_2d' because it overlaps the location of the project that contains the linked resource.
I have no idea what this means. I get similar errors for all other packages.
I have a fresh install of Ubuntu 11.10. I've successfully built ROS from source. I have eclipse indigo for c/c++ developers. Here's what I'm doing:
rosmake costmap_2d
make eclipse-project
open eclipse
Import... > General > Existing project
Browse to /home/tim/ros_source/electric/ros_comm/clients/cpp/roscpp
don't select copy into workspace, press finish
error
The project seems to get partially imported, but it's pretty mangled.
The mangled project has some issues. They might be related, or they may just be a side effect.
Invalid project path: Include path not found (/home/tim/ros_source/electric/driver_common/dynamic_reconfigure/msg/cpp).
Invalid project path: Include path not found (/home/tim/ros_source/electric/driver_common/dynamic_reconfigure/srv/cpp).
Invalid project path: Include path not found (/home/tim/ros_source/electric/ros/core/roslib/msg_gen/cpp/include).
The first two seem a little weird; shouldn't those be msg_gen/cpp/include and srv_gen/cpp/include?
I'm also confused on the third one, because roslib doesn't generate messages, right?
Originally posted by tperkins on ROS Answers with karma: 73 on 2011-12-14
Post score: 1
Original comments
Comment by tom on 2011-12-15:
Don't feel offended, please. You haven't written what solutions you already tried, which one normally should do if had tried any.
Comment by tperkins on 2011-12-15:
BTW, try posting a constructive comment next time. Do you honestly think I haven't already googled it?
Comment by tperkins on 2011-12-15:
Yeah I agree, it's an eclipse problem. But I posted it here because I image other developers are seeing the same thing, given I have a fresh install (and I haven't done anything weird, I promise). And if I figure it out myself, I can answer my own question.
Comment by tom on 2011-12-14:
This seems to be a problem of the directory tree and is rather Eclipse related, I'd look for solutions elsewhere, like on stackoverflow. http://lmgtfy.com/?q=because+it+overlaps+the+location+of+the+project+that+contains+the+linked+resource
Answer:
The warnings are a result of packages exporting non-existent paths in their manifests; see these previous discussions on ros-users:
http://ros-users.122217.n3.nabble.com/ROS-Eclipse-warnings-td2479833.html
http://ros-users.122217.n3.nabble.com/Please-remove-references-to-msg-cpp-and-srv-cpp-from-package-manifests-td2168967.html#a2170278
Packages still exporting these paths should be ticketed for a fix (they just need to remove the non-existent include path from the manifest). That being said, I don't think it's related to your first problem. I have seen plenty of these warnings using Eclipse, but never your first error.
Originally posted by AHornung with karma: 5904 on 2011-12-16
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by FranciscoD on 2013-09-03:
This doesn't seem to fix the issue. The first error still persists for me when trying to work on the slam_gmapping package.
Comment by FranciscoD on 2013-09-03:
http://stackoverflow.com/questions/7000562/eclipse-cannot-create-a-link-to-dir-generated-by-cmake says it's because of the build directory, but the ros wiki page says that the make command should take care of this itself http://wiki.ros.org/IDEs#Creating_the_Eclipse_project_files | {
"domain": "robotics.stackexchange",
"id": 7647,
"tags": "eclipse"
} |
Is this a known problem in graph theory? | Question: My basic problem includes a graph where each node $i$ is associated with a weight $c_i$, and the problem is to find a minimum (or maximum) weighted independent set with a fixed cardinality $p$. This is I believe a well-known problem in graph theory that is well-studied for different types of graphs.
Now, suppose I am dealing with a generalized form of the problem as following. The weight of each node can take $p$ different values, that is each node is associated with $p$ different weights. The aim is again to find a minimum (or maximum) weighted independent set with a fixed cardinality $p$, however, each type of weight can be selected only once. Precisely, if the weight type $j$ is selected for the node $i$, i.e., we select the weight $c_{ij}$, then the other selected nodes cannot take a weight of type $j$.
My question is: is this still a graph theory problem? Is it a known generalization among graph theory problems?
Any help and/or reference is appreciated.
Answer: If $G=(V,E)$, with $V=\{v_1,v_2,...,v_n\}$ and weights $\{c_{i,j}, i=1,2,...,n, j=1,2,...,p\}$ is the given graph, then we can construct the strong product (I finally found the name of the operation) $G\boxtimes K_p$ of $G$ and $K_p$, where $K_p$ is the complete graph with $p$ vertices. This is the graph with vertices $\{v_{i,j},i=1,2,...,n, j=1,2,...,p\}$ and edges $\{v_{a,b},v_{c,d}\}$ where either:
$a=c$,
$b=d$ or
$\{v_a,v_c\}\in E$. (The actual condition of the strong product reduces to this since in $K_p$ all vertices are adjacent).
We give the vertex $v_{i,j}$ the weight $c_{i,j}$, for $i=1,2,...,n$ and $j=1,2,...,p$.
The problem on $G$ is equivalent to the minimum (maximum) weighted independent set problem of cardinality $p$ on the weighted graph $G\boxtimes K_p$. Choosing a vertex $v_{i,j}$ of the new graph corresponds to choosing vertex $v_i$ of the original graph and using its $j$-th weight $c_{i,j}$.
The edges of $G\boxtimes K_p$ are exactly those that prevent the corresponding choices in $G$ from using adjacent vertices or reusing weights with the same index:
Condition $1$ defines edges in the strong product that prevent the equivalent of using two weights from the same original vertex.
Condition $2$ prevents using the weights with the same index from different vertices of the original graph.
Condition $3$ prevents that two vertices that were neighbors in the original graph are selected.
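To make the construction concrete, here is a small Python sketch (the function names, the brute-force search, and the sample instance are mine, not from the answer) that builds $G\boxtimes K_p$ exactly as described by the three conditions and then solves the cardinality-$p$ weighted independent set problem on it by enumeration:

```python
from itertools import combinations

def strong_product_with_kp(n, edges, p):
    """Build G x K_p. Vertices are pairs (i, j): vertex i of G, weight index j.
    Distinct pairs (a, b) and (c, d) are adjacent when a == c (condition 1),
    b == d (condition 2), or {a, c} is an edge of G (condition 3)."""
    nodes = [(i, j) for i in range(n) for j in range(p)]
    g_edges = {frozenset(e) for e in edges}
    prod_edges = set()
    for (a, b), (c, d) in combinations(nodes, 2):
        if a == c or b == d or frozenset((a, c)) in g_edges:
            prod_edges.add(frozenset(((a, b), (c, d))))
    return nodes, prod_edges

def best_independent_p_set(nodes, prod_edges, weight, p):
    """Brute force: maximum-weight independent set of size p in the product."""
    best = None
    for cand in combinations(nodes, p):
        if any(frozenset(pair) in prod_edges for pair in combinations(cand, 2)):
            continue  # not independent
        total = sum(weight[v] for v in cand)
        if best is None or total > best[0]:
            best = (total, cand)
    return best

# Made-up instance: the path 0 - 1 - 2 with p = 2 weight types per vertex.
edges = [(0, 1), (1, 2)]
weight = {(0, 0): 1, (0, 1): 5, (1, 0): 2, (1, 1): 2, (2, 0): 4, (2, 1): 3}
nodes, prod_edges = strong_product_with_kp(3, edges, p=2)
best = best_independent_p_set(nodes, prod_edges, weight, p=2)
# Only {0, 2} is independent in G; the optimum pairs vertex 0 with weight
# type 1 and vertex 2 with weight type 0, for a total of 5 + 4 = 9.
```

The brute force is only for illustration; the point of the reduction is that any solver for weighted independent set of fixed cardinality can be run unchanged on $G\boxtimes K_p$.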
Example:
If $G$ is the graph
and $p=2$, then $G\boxtimes K_2$ would be the graph
Images created with this tool. | {
"domain": "cs.stackexchange",
"id": 16756,
"tags": "graphs, weighted-graphs, set-cover"
} |
Does $\{0, 1\}^*\in \text{co-NP}$? | Question: Trivially, $\emptyset\in\text{NP}$. From the definition $\text{co-NP} = \{L : \overline{L} \in \text{NP}\}$, where $\overline{L} = \{0, 1\}^*-L$, it follows that $\{0, 1\}^*\in \text{co-NP}$. Is this true? If so, is it also true that $\{0,1\}^*\in \text{NP}$?
Answer: Both $\emptyset$ and $\{0,1\}^*$ are in $\text{P}$: a machine that rejects every input decides $\emptyset$, and one that accepts every input decides $\{0,1\}^*$, both in constant time. Therefore they are also in $\text{NP} \supseteq \text{P}$ and $\text{co-NP} \supseteq \text{P}$. | {
"domain": "cs.stackexchange",
"id": 17266,
"tags": "complexity-theory"
} |