Control robot using controller/follow_joint_trajectory/goal
Question: Hi, is it possible to control a robot by publishing to the topic controller/follow_joint_trajectory/goal? I could see that, when using MoveIt, move_group publishes joint goals to this topic. Similarly, can I control the robot without MoveIt? I tried publishing a joint goal to the goal topic; rostopic echo also shows the data I published, but the robot is not moving. Is there another service that has to be called after publishing the joint goal? Originally posted by Abhishekpg on ROS Answers with karma: 195 on 2019-12-20 Post score: 0 Original comments Comment by ct2034 on 2019-12-20: Can you please add the console output of the controller node, to find out why the robot is not moving? Answer: The topic you are seeing belongs to an action. And yes, you can definitely control the robot this way, without using MoveIt. If you look at your current setup, you probably start the joint_trajectory_controller somewhere. You can use it to control your robot directly in joint space. There is also an rqt plugin for this controller; you could use it for a first try. Or see here for a tutorial including source code. Just keep in mind that there is then no obstacle avoidance or anything: the robot just moves where you ask it to go. Originally posted by ct2034 with karma: 862 on 2019-12-20 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 34176, "tags": "ros, follow-joint-trajectory, ros-kinetic" }
Wave diffraction explanation
Question: I'm trying to understand wave diffraction and I found this Wikipedia article. It's in Czech, so I'll explain a bit. I'm interested in the 4 images I couldn't find on English Wikipedia. The first one shows diffraction at a large slit, the second at a large obstacle, the third at a tiny slit, and the last at a slit whose size is comparable with the wavelength - which is what happens with light at a diffraction grating. I have a few questions about these, however. I don't know why, but I think I heard that a wave can't pass through a slit whose size is smaller than the wavelength. For example, imagine a microwave oven. The door carries a kind of mesh with small holes. Microwaves have wavelengths from 1 to 0.001 metres, so those holes should be sufficient to block the wave - that's why they are there in the first place. But how is that third case, with the tiny slit, then possible? Also, I'm missing the cases of an obstacle whose size is comparable to the wavelength and - if I'm wrong in the above situation - of a tiny obstacle. (And if I'm right about the microwave oven, does the wave pass a tiny object as if there were no obstacle at all?) At a slit I can use the Huygens principle to construct an envelope, but what do I do at obstacles? And can someone tell me what range of sizes, compared to the wavelength, is considered a "comparable" obstacle? If it matters, I imagine waves as always coming from a source point, the way Kirchhoff described them as shown here, but I don't need to understand those equations and so on. I'd also appreciate a good website, images, documents... well, whatever tries to explain diffraction without too many formulas. I'm not going to compute anything in the end; I'm just trying to get the best possible picture of it all for now. Answer: I think I heard that a wave can't pass through a slit whose size is smaller than the wavelength. This is simply wrong.
The energy passing through a slit/hole smaller than the wavelength is less than what is transmitted through a comparable area in the free field, but it is not zero! The real world, being wave-like and particle-like at the same time, is not on/off, black/white, yes/no, true/untrue. The microwaves in your oven are damped by that grating, but not blocked totally. (By the way, my microwave oven operates at about 12 cm wavelength, not "1 to 0.001 metres"; blocking the waves would be almost impossible otherwise.) At a slit I can use the Huygens principle to construct an envelope, but what do I do at obstacles? You could use the Huygens principle on both edges of the obstacle, going outward for some length of, say, two to three obstacle diameters, or read something about the complementarity of slits and obstacles (Babinet's principle).
{ "domain": "physics.stackexchange", "id": 1077, "tags": "waves, interference, diffraction" }
How do I initialize stim's tableau simulator to a random tableau?
Question: I want to create a stim.TableauSimulator with a random tableau as its state. I know I can get a random tableau using stim.Tableau.random(num_qubits), but how do I do the same thing for the simulator? Answer: You can use stim.TableauSimulator.set_inverse_tableau to change the tableau simulator's state to a specific tableau, such as a random tableau from stim.Tableau.random. For example: import stim random_tableau = stim.Tableau.random(10) simulator = stim.TableauSimulator() simulator.set_inverse_tableau(random_tableau**-1) assert simulator.current_inverse_tableau() == random_tableau**-1 Computing the inverse using **-1 isn't technically necessary here, since the distribution of inverses of random tableaus is the same as the distribution of random tableaus. But "how do I invert a tableau" is a natural followup question when learning about a method called set_inverse_tableau.
{ "domain": "quantumcomputing.stackexchange", "id": 3396, "tags": "stim" }
Bose-Einstein condensation summation to integral
Question: I have a question about Bose-Einstein condensation. Namely, people say that if we go from the summation over the number of particles to an integral using the density of states, we make a flaw in the calculation if the temperature is below the critical temperature, and therefore we write: $N = N_0 + N_{ex}$, where $N$ is the total number of particles, $N_0$ is the number of particles in the ground state and $N_{ex}$ is the number of particles in the excited states (given by the integral that contains the density of states). Now my question is: why do some references, like Yoshioka's Statistical Physics, say that we miss the particles in the ground state since $D(\epsilon)=0$ for the density of states, while this expression is inside an integral? Could somebody give a more rigorous proof or reference for this? Answer: If for instance $D(\epsilon)\propto \sqrt{\epsilon}$ then $\int_0^{\Delta\epsilon} d\epsilon D(\epsilon)\approx0$ for any small $\Delta\epsilon$. But what you have to keep in mind is that the proper expression is \begin{equation} N= \sum_{i=1}^\infty n_i \neq \int_0^\infty d\epsilon D(\epsilon) \end{equation} Approximating the sum by the integral does not hold if $n_1$ is $O(N)$, because the density assigns no weight to the ground state (at $\epsilon=0$). If you count the occupations in the interval $[0,\Delta \epsilon]$ discretely, you always have $n_1=O(N)$ in the sum, no matter how small $\Delta \epsilon$ is. But using the integral on the right you get $\int_0^{\Delta\epsilon} d\epsilon D(\epsilon)\propto \Delta \epsilon^{3/2}\rightarrow0$ as $\Delta\epsilon\rightarrow0$.
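The vanishing weight of the integral near $\epsilon=0$ is easy to check numerically - a small sketch, taking $D(\epsilon)=\sqrt{\epsilon}$ with the proportionality constant set to 1 for illustration:

```python
# Weight that D(eps) = sqrt(eps) assigns to the interval [0, delta]:
# integral of sqrt(eps) d(eps) from 0 to delta = (2/3) * delta**1.5
def integral_weight(delta):
    return (2.0 / 3.0) * delta ** 1.5

# The integral's weight near the ground state vanishes as delta -> 0,
# so a macroscopic occupation n_1 = O(N) sitting at eps = 0 is simply missed.
for delta in (1e-2, 1e-4, 1e-6):
    print(delta, integral_weight(delta))
```

However small the interval, the discrete sum keeps its $O(N)$ ground-state term, while the integral contributes an amount that shrinks to zero with $\Delta\epsilon$.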
{ "domain": "physics.stackexchange", "id": 51908, "tags": "condensed-matter, bose-einstein-condensate" }
Maxwell Laws Summary Diagram - Suggestions that I am missing?
Question: I have been going through a summary book of Maxwell's equations and hope I have organised this correctly, but I think perhaps I am missing important points that I could add. Image below. Thanks for your help in advance. Answer: You can certainly add a column for magnetostatics or magnetoquasistatics if they are of any interest to you. Also, you might add charge conservation: $$ \nabla \cdot \mathbf{J}=-\frac{\partial \rho}{\partial t} $$ This one should fit your way of organizing the summary and slicing up the knowledge about Maxwell's equations.
{ "domain": "physics.stackexchange", "id": 59797, "tags": "electromagnetism, maxwell-equations, vector-fields" }
Is it true Mars once had life? (like Earth)
Question: NASA has shown that Mars once had water and still has some underground. So does that mean Mars once had life, or maybe still has? Answer: As yet unknown - there are some hints that life might exist, e.g. methane emissions, but no proof.
{ "domain": "physics.stackexchange", "id": 25984, "tags": "solar-system, biology, solar-system-exploration, nasa" }
What happens to the electron density in a metal during an electric discharge?
Question: Suppose we are able to see into a grain of metal at the boundary between the grain and air (perhaps along one of the faces of this cube): (Source: Wikimedia Commons.) This image does not show the electron density, but maybe we can imagine it being vaguely similar to what is shown in the image, only more diffuse. What happens when an electric arc discharges into the grain from outside? Specifically, what happens to the electron density? Do the electrons in the arc pass through the voids, do they pass through the lattice sites, or do they do something else? What is the name of the subfield that studies this kind of nonequilibrium phenomenon? Answer: Electrons in a metal behave 'very strongly' according to quantum mechanics. This means that unlike classical objects, their position and velocity cannot be defined precisely. With that said, you can expect a shift in the electron density - a shift which resembles a sort of electron fluid flowing out of the conductor and becoming part of the arc. But note that this electron fluid is more accurately a probability density, not a classical charge density. You can relate more to the problem by studying the way a single electron would move around an obstacle. For instance, quite astoundingly, in the hydrogen atom there is a non-zero probability of finding the electron inside the nucleus! In a nutshell, we can only study the movement of the probability current for the electrons, rather than a classical charge density. Nevertheless, approximately, this situation is equivalent to a moving charge density. The field that studies such dynamic phenomena is plasma physics. Many people think that a plasma is just a very hot gas, but technically even the electron gas in a metal at normal temperatures is a type of plasma.
{ "domain": "physics.stackexchange", "id": 27449, "tags": "solid-state-physics, metals" }
Translate inertia tensor & outer product of a vector with itself
Question: I'm trying to understand a mathematical expression that looks like this: $$ I' = I+ m \left( d^2 E - \mathbf{d}\otimes\mathbf{d}\right) $$ where $I'$ is the new inertia tensor, $I$ is the original inertia tensor, $m$ is the mass, $d$ is the distance from the center of mass to the new origin (as a vector), and $E$ is the identity matrix. What I understood was that, to translate an inertia tensor, the formula is: $$ I' = I+ m \left( d^2 E \right) $$ but then I saw the equation at the beginning, which uses the outer product of the vector with itself. I'm not able to understand why this outer product is there. Answer: If $E$ is the identity matrix, then you have, \begin{align} d^2E-\mathbf{d}\otimes\mathbf{d}&=\left(\begin{array}{ccc}d^2 & 0 & 0 \\ 0 & d^2 & 0 \\ 0 & 0 & d^2\end{array}\right)-\left(\begin{array}{ccc}d_x^2 & d_xd_y & d_xd_z \\ d_yd_x & d_y^2 & d_yd_z \\ d_zd_x & d_zd_y & d_z^2\end{array}\right) \\ &= \left(\begin{array}{ccc}d_x^2+d_y^2+d_z^2 & 0 & 0 \\ 0 & d_x^2+d_y^2+d_z^2 & 0 \\ 0 & 0 & d_x^2+d_y^2+d_z^2\end{array}\right)-\left(\begin{array}{ccc}d_x^2 & d_xd_y & d_xd_z \\ d_yd_x & d_y^2 & d_yd_z \\ d_zd_x & d_zd_y & d_z^2\end{array}\right) \\ &=\left(\begin{array}{ccc}d_y^2+d_z^2 & -d_xd_y & -d_xd_z \\ -d_yd_x & d_x^2+d_z^2 & -d_yd_z \\ -d_zd_x & -d_zd_y & d_x^2+d_y^2\end{array}\right) \end{align} where we used that $d^2=\mathbf{d}\cdot\mathbf{d}=d_x^2+d_y^2+d_z^2$ in the second step. This final form conforms to the definition given by Wikipedia. The moment of inertia is a rank-2 tensor, $I_{ij}$, and represents the moment of inertia (MOI) along the $i$ axis when rotated about the $j$ axis. So the diagonal element $I_{xx}$ is the MOI along the $x$ axis for an object rotated about the $x$ axis. The element $I_{xy}$ is the MOI along the $x$ axis for an object rotating about the $y$ axis. The usage of the outer product simplifies the expression required to write down the MOI tensor.
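The matrix identity above is easy to verify numerically - a small sketch with a hypothetical displacement vector d = (1, 2, 3), using plain Python lists:

```python
# correction term of the parallel-axis formula: d^2 * E - outer(d, d)
d = [1.0, 2.0, 3.0]                      # hypothetical displacement vector
d2 = sum(c * c for c in d)               # d^2 = dx^2 + dy^2 + dz^2 = 14
M = [[(d2 if i == j else 0.0) - d[i] * d[j] for j in range(3)]
     for i in range(3)]

print(M[0][0])  # dy^2 + dz^2 = 13.0
print(M[0][1])  # -dx*dy = -2.0
```

The diagonal entries come out as the squared distance from the corresponding axis, and the off-diagonal entries as the negated products, exactly as in the final matrix of the derivation.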
{ "domain": "physics.stackexchange", "id": 94707, "tags": "tensor-calculus, rigid-body-dynamics, moment-of-inertia" }
Are the operations applied in the teleportation circuit fixed?
Question: I implemented teleportation as described on this page: Teleportation. The circuit diagram is as follows: since we use entanglement for teleportation, and the entanglement is done before the 1st barrier, I just wanted to know, after the entanglement/1st barrier: is Alice free to perform any operation on the circuit, or should the operations be exactly what is shown in the above figure between the 1st and 2nd barriers? Also, can we apply the 2nd H gate prior to the 2nd CNOT gate between the 1st and 2nd barriers and still achieve teleportation? Answer: The circuit you've prepared will teleport an arbitrary $1$-qubit state from register $q_0$ to $q_2$. The operations between the first and second barrier prepare a Bell measurement, so those should not be modified - in particular, applying the H before the CNOT would change the measurement basis and break the protocol. Likewise, the operations before the first barrier prepare the shared entangled state between Alice and Bob. Otherwise you can apply whatever operations you like to $q_0$ anywhere before the first barrier, or anywhere after the first barrier but before the $CNOT$ and $(H\otimes I)$ that prepare the Bell measurement.
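That the Bell-measurement block really does move the state from $q_0$ to $q_2$ can be checked with a small statevector sketch. This uses the deferred-measurement form of the protocol (the classically controlled X and Z corrections become CNOT and CZ gates), and the amplitudes of the teleported state are arbitrary test values:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def single(U, q, n=3):
    # U acting on qubit q (qubit 0 = most significant index bit)
    ops = [np.eye(2)] * n
    ops[q] = U
    M = ops[0]
    for op in ops[1:]:
        M = np.kron(M, op)
    return M

def controlled(U, c, t, n=3):
    # controlled-U with control c and target t, built basis state by basis state
    dim = 2 ** n
    M = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[c] == 0:
            M[i, i] = 1.0
        else:
            for tv in (0, 1):
                jb = bits.copy()
                jb[t] = tv
                j = sum(b << (n - 1 - k) for k, b in enumerate(jb))
                M[j, i] = U[tv, bits[t]]
    return M

psi = np.array([0.6, 0.8j])                       # arbitrary state to teleport
state = np.kron(psi, np.kron([1, 0], [1, 0])).astype(complex)

for gate in (single(H, 1), controlled(X, 1, 2),        # Bell pair on q1, q2
             controlled(X, 0, 1), single(H, 0),        # Bell-measurement basis
             controlled(X, 1, 2), controlled(Z, 0, 2)):  # deferred corrections
    state = gate @ state

# reduced density matrix of q2 equals |psi><psi|: teleportation succeeded
rho = np.outer(state, state.conj()).reshape([2] * 6)
rho_q2 = np.einsum('ijkijl->kl', rho)
assert np.allclose(rho_q2, np.outer(psi, psi.conj()))
```

Swapping the order of the H and CNOT in the Bell-measurement block makes the final assertion fail, which is the concrete answer to the second question.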
{ "domain": "quantumcomputing.stackexchange", "id": 3847, "tags": "textbook-and-exercises, teleportation" }
Regular Grammar and Regular Language
Question: From Wikipedia, Regular language: "All finite languages are regular." Also, a regular grammar is a way to describe a regular language: right regular grammar (also called right linear grammar), left regular grammar (also called left linear grammar). From Wikipedia's example: a* b c* can be described by a regular grammar, but it generates an unbounded number of 'a's and an unbounded number of 'c's. Recall that a regular language can be described by a regular expression, and also that a finite language... Answer: This might be a language issue. In (mathematical) English, the statement "all finite languages are regular" means: if $L$ is a finite language then $L$ is regular. No implication in the other direction ("if a language is regular then it is finite") is implied.
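The distinction can be poked at with ordinary regular expressions: a* b c* is regular yet infinite, while a finite language - say the hypothetical {ab, ba} - is regular simply because its finitely many words can be listed as alternatives:

```python
import re

# a*bc* is a regular language with infinitely many words
infinite = re.compile(r'a*bc*')
assert infinite.fullmatch('b')
assert infinite.fullmatch('aaabcc')
assert not infinite.fullmatch('ac')

# a finite language {ab, ba} is regular too: just alternate its words
finite = re.compile(r'ab|ba')
assert finite.fullmatch('ab') and finite.fullmatch('ba')
assert not finite.fullmatch('abba')
print('both languages are regular')
```

Both directions of the implication would require both patterns to describe finite languages, and the first one clearly does not.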
{ "domain": "cs.stackexchange", "id": 3977, "tags": "automata, formal-grammars" }
Identification of species found in fly lab
Question: I just found this in my Drosophila lab (within a closed vial) - unfortunately it was dead, having been frozen for two weeks. Can anyone help identify what it is, and whether it poses a risk to my flies (like mites do)? My inclination is that it is some kind of (moth?) larva. Note the awesome corkscrew tail... Found in Uppsala, Sweden, May 2014, in a vial of Drosophila with food (a yeast, sugar, and agar mix). Length is about 3-4 mm excl. tail. Answer: I'd guess it's a larva of one of the carpet beetles (Dermestidae), some of which are synanthropic. They would probably be interested not in live flies, but in their food and cadavers. Its appearance seems to be very characteristic - someone would be able to identify it to the generic or even specific level.
{ "domain": "biology.stackexchange", "id": 2230, "tags": "species-identification" }
Pandas On AWS Sagemaker
Question: Hello guys, I have a question. We want to start working with AWS SageMaker. I understand that I can open a Jupyter notebook and work as if it were on my own computer, but I know pandas runs on a single node. When I work on my machine, for example, I have 64 GB of memory, and that is the limit for pandas because it is not parallel. But AWS is parallel, so how does pandas work with that? Answer: In my opinion, your question is composed of two parts: How to run a processing job on a big AWS SageMaker instance with more than 64 GB of memory? How to run pandas in parallel? How to run a job on a big AWS SageMaker instance? When you open SageMaker Studio, you are by default working on a ml.t3.medium instance, a very small (and cheap) instance. The reason is that this local machine is not designed to run big processing jobs, but to do some data exploration on small data, manage SageMaker, run notebooks, etc. To run a big job on SageMaker, you will need to set up a processing job that runs on a remote instance. You only pay for the running time on that remote instance. You can run pandas / sklearn / pyspark jobs. More about processing jobs on SageMaker: https://docs.aws.amazon.com/sagemaker/latest/dg/processing-job.html The list of instances available in SageMaker: https://aws.amazon.com/fr/sagemaker/pricing/ How to run pandas in parallel? Pandas can't run in parallel. That is not related to AWS SageMaker. If you want some parallel alternatives to pandas, you can refer to: Spark processing jobs: https://docs.aws.amazon.com/sagemaker/latest/dg/use-spark-processing-container.html Dask ("pandas in parallel") running on AWS SageMaker: https://aws.amazon.com/fr/blogs/machine-learning/machine-learning-on-distributed-dask-using-amazon-sagemaker-and-aws-fargate/
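As a single-node workaround (separate from true parallel engines like Dask or Spark), pandas can also stream a file in fixed-size chunks so that only one chunk is in memory at a time. A minimal sketch, using a small in-memory CSV as a stand-in for a file larger than RAM:

```python
import io

import pandas as pd

# hypothetical tiny CSV standing in for a file that would not fit in memory
csv = io.StringIO("x\n" + "\n".join(str(i) for i in range(10)))

total = 0
for chunk in pd.read_csv(csv, chunksize=4):   # at most 4 rows held at once
    total += int(chunk["x"].sum())

print(total)  # sum of 0..9 = 45
```

This keeps the aggregation out-of-core, but each chunk is still processed serially on one node, which is exactly the limitation the answer's Dask and Spark links address.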
{ "domain": "datascience.stackexchange", "id": 10706, "tags": "machine-learning, pandas, aws, sagemaker" }
Ways to cause membrane damage to microalgae and yeast?
Question: I am researching a way to monitor the membrane damage of cells. To do that I first have to have reference points, namely cells with damaged membranes. I am working with Dunaliella and Haematococcus (both microalgae), and common yeast. I have been using mostly ethanol to damage the membranes so far. What other ways are there to cause the most possible damage to microalgae (or yeast) membranes? The idea is not total cell obliteration, but rather membrane damage (death is expected). Answer: To get to the membrane of these species you first need to get past a formidable cell wall. The methods listed below are therefore aimed more at making cells permeable, but the membranes must sustain some damage in the process. At our lab we regularly use glass bead transformation for microalgae transformation. The microabrasion allows DNA to go in, so I imagine the membranes must be damaged somehow. I've also used hypotonic media (depending on the strain) to swell cells close to their bursting point. I imagine they become rather permeable at this point, since they let in several dyes that are otherwise excluded.
{ "domain": "biology.stackexchange", "id": 3697, "tags": "biochemistry, microbiology, membrane" }
ROS installation on raspberry 3
Question: Hi, I want to install ROS Indigo or Jade desktop-full on my Raspberry Pi 3, but I can't, although I followed the whole tutorial. Has anyone managed to install it with rviz? Originally posted by Emilien on ROS Answers with karma: 167 on 2016-06-09 Post score: 0 Original comments Comment by JohnDoe2991 on 2016-06-10: I've installed ROS indigo on a raspberry pi 1 b+ about a year ago. At one point I had a crash, but after a second run it surprisingly worked. Comment by JohnDoe2991 on 2016-06-10: I used this tutorial, but I only installed ROS-Comm, not desktop. Answer: Hi, I did succeed in installing the ROS Indigo desktop version on my Raspberry Pi by following the tutorial, but I didn't follow all the steps exactly because I ran into many problems. First of all, I am not sure that what I did is the perfect way to install it, but it works for the moment and I am not facing any major problem with ROS on my Raspberry Pi. Secondly, I did it on a Raspberry Pi 3 with Raspbian Jessie installed via NOOBS on a 16 GB microSD card. So first, increase the size of the swap space (you can normally revert this afterwards); I changed it from 100 MB to 3.0 GB. To do so, I opened dphys-swapfile with a text editor (leafpad, for example):

$ sudo leafpad /etc/dphys-swapfile

and replaced the line CONF_SWAPSIZE=100 with CONF_SWAPSIZE=3000. I saved, rebooted, and kept following the tutorial until the installation of the liburdfdom-dev library. After changing the name and pressing enter (as asked in the tutorial), it didn't install. I kept following the tutorial and tried to install libcollada-dev. After changing the name, it started to compile and threw an error (related to liburdfdom-dev). So I restarted the liburdfdom-dev part of the tutorial, and after changing the name (again) and pressing enter, it compiled (I didn't understand why). I restarted the libcollada-dev part and it finally compiled. I then had to patch collada-urdf, which I did by reading one of these answers.
So I downloaded it, saved it as fix.patch in ~/ros_catkin_ws/src/robot_model/collada_urdf/src and launched the following command lines:

$ cd ~/ros_catkin_ws/src/robot_model/collada_urdf/src
$ patch < fix.patch

I then followed the tutorial until the 3.3 Building the catkin workspace section. I also had a few mysterious errors. I don't know if it's because I didn't do the first steps well, but here is what I did. My first error concerned the urdfdom-config.cmake file, line 15: it threw an error about the urdf_sensor library. So I opened the urdfdom-config.cmake file:

$ sudo leafpad /usr/local/lib/urdfdom/cmake/urdfdom-config.cmake

and changed line 8 from foreach(lib urdfdom_model_state;urdfdom_sensor;urdf_model;urdf_world) to foreach(lib urdfdom_sensor;urdfdom_model_state;urdf_model;urdf_world) (I actually just swapped the order of the arguments model_state and sensor). Save and keep going (do not abandon). Again:

$ sudo ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/indigo

You will normally get an error with rviz. Based on this tutorial, you will have to open mesh_loader.cpp. To do so:

$ sudo leafpad ~/ros_catkin_ws/src/rviz/src/rviz/mesh_loader.cpp

and copy:

#ifdef __arm__ // fix for ARM build
#include <strings.h>
bool Assimp::IOSystem::ComparePaths(const char *p1, const char *p2) const
{
  return !::strcasecmp(p1, p2);
}
#endif

at the end of the includes section. Finally, again:

$ sudo ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/indigo

It will take some hours. If you face problems in the last part (adding release packages), it could be because some libraries are missing; for example, if usb.h is missing, type:

$ sudo apt-get install libusb-dev

PS: During the installation, the Raspberry Pi may seem to freeze, but it hasn't - it's the RAM and the swap memory which are being used.
You can check the memory used by keeping a terminal open and typing (whenever you want):

$ free -h

You will see that when it freezes, the system starts using the swap memory. At the end of the installation, you could change the swapfile back as before to avoid problems, but I don't know if ROS can work with the default RAM of the Raspberry Pi 3. I hope this will help someone. Do not hesitate to ask - I may have forgotten some details. Good luck. Originally posted by fab with karma: 36 on 2016-06-12 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Emilien on 2016-06-13: thank you, I installed it with rviz. But I can't install any package with apt-get afterwards; I receive "unable to locate package". Do you have any idea how to resolve this? Comment by fab on 2016-06-17: hi, it's maybe because it's the wrong package name. Can you put the name of the missing library?
{ "domain": "robotics.stackexchange", "id": 24891, "tags": "ros" }
Does stopping the same bike and rider at the same velocity with the front brake require less energy than the back brake?
Question: It's the same combined body of rider and bike moving at the same speed. So, even though braking at the front or back shifts the normal forces between the wheels, and with them the available friction at the ground, you're still converting the same amount of energy to heat, correct? In other words, brakes made from the same material on the same kind of wheel should experience equivalent wear if each converts X joules of kinetic energy to heat, no matter whether they are applied at the front or back wheel? Answer: The answer depends on whether the wheels skid. When you brake with just the rear wheel, it's quite possible to skid; if you apply the front brake, the increase in normal force on that wheel tends to prevent skidding (although in extreme cases it could make you fly over the handlebars). Applying the rear brake hard enough to lock the wheel would generate little wear on the brake system, and lots of wear on the rear tire. The same thing would not happen at the front. In practice, which brakes wear more really depends on how much you use them. For motorbikes, it is recommended that you use the front brake more heavily to prevent skidding - in fact, some bikes have a mechanism that pretty much ensures this. But from a pure physics perspective, the kinetic energy that needs to be dissipated is the same, regardless of which brake is applied.
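The "same energy either way" point is just the kinetic-energy formula, which depends only on mass and speed - a trivial sketch with hypothetical numbers:

```python
# kinetic energy dissipated in a full stop: KE = 1/2 * m * v^2,
# independent of which brake converts it to heat (barring skidding)
m = 90.0   # hypothetical rider + bike mass, kg
v = 10.0   # initial speed, m/s

ke = 0.5 * m * v ** 2
print(ke)  # 4500.0 joules either way
```

Which surface that energy ends up heating - front pads, rear pads, or a skidding tire - is the whole substance of the answer above.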
{ "domain": "physics.stackexchange", "id": 23292, "tags": "newtonian-mechanics, thermodynamics, energy, friction, work" }
Can I assume system is LTI when given by DTFT of impulse response
Question: I'm having a hard time grasping this, probably because I don't fully understand it. I understand that when a system is given by $h(t)$ (in general $h(t-\tau)$) I can assume that it is an LTI system. So I guess my gap in knowledge is that I'm not sure whether it is possible to apply the FT or DTFT to the impulse response of a non-LTI system, $h(t;\tau)$. I assume not, because the transform depends on the time variable, but here I have a function of two variables and I'm not really sure how the transforms behave under these conditions. So, as the title says: if a system is given by $H(e^{j\theta})$, can I assume that it is an LTI system, and why? Thanks in advance. Answer: A 1D LTI system is completely characterized by the function $h(t)=T\{\delta(t)\}$, which is denoted the impulse response of the system. Given an LTI system with impulse response $h(t)$, you can then find its frequency response as $H(j\omega)=\mathcal{F}\{h(t)\}$, where $\mathcal{F}$ stands for the continuous-time Fourier transform. When a system is not LTI but, for example, linear time-varying, then its characterization is possible with the 2D function $h(t,\tau)=T\{\delta(t-\tau)\}$, which is the response of the system to an impulse applied at time $t=\tau$. In that case, the Fourier transform can be applied in the following sense: treating $\tau$ as fixed (or as a parameter), take the Fourier transform $\mathcal{F}\{h_{\tau}(t)\} = H_{\tau}(j\omega)$. So in effect you have to consider a 1D CTFT for each $\tau$. Whether this is a useful representation, or whether there are other more useful representations for such systems, requires an in-depth analysis of time-varying systems.
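For intuition, here is a small sketch evaluating $H(e^{j\theta}) = \sum_n h[n]\, e^{-j\theta n}$ for a hypothetical 4-tap moving-average filter - an LTI system that is fully described by this single impulse response, exactly because no second variable $\tau$ is needed:

```python
import cmath
import math

h = [0.25, 0.25, 0.25, 0.25]   # hypothetical FIR impulse response (moving average)

def dtft(h, theta):
    # frequency response of an LTI system, straight from its impulse response
    return sum(hn * cmath.exp(-1j * theta * n) for n, hn in enumerate(h))

print(abs(dtft(h, 0.0)))          # DC gain: the sum of the taps, 1.0
print(abs(dtft(h, math.pi / 2)))  # a null of the 4-tap average: ~0.0
```

A time-varying system would instead need one such curve $H_{\tau}(e^{j\theta})$ for every $\tau$, as the answer describes.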
{ "domain": "dsp.stackexchange", "id": 5383, "tags": "discrete-signals, fourier-transform, linear-systems, impulse-response" }
If the atomic number is # of protons, why does emission of a beta (electron) particle increase the atomic number?
Question: Atomic number: the number of protons in the nucleus of an atom, which is characteristic of a chemical element and determines its place in the periodic table. Beta emission: $$\ce{^14_6C -> ^14_7N + ^0_{-1}\beta}$$ Why does emission of an electron increase the atomic number, when the atomic number is the number of protons, not electrons? Answer: During beta particle emission, one neutron inside the nucleus is converted into a proton, an electron, and an antineutrino. The proton stays in the nucleus, which is why the atomic number increases by one, while the electron cannot remain inside the nucleus and is emitted as the beta particle; overall charge is conserved.
{ "domain": "chemistry.stackexchange", "id": 7237, "tags": "electrons, protons" }
Linearizing Gravity to ${\cal O}(h^3)$
Question: I've seen the action of linearized gravity in many places. We basically have $${\cal L} ~\sim~ \frac{1}{G_N}\left( - \frac{1}{2}h^{\alpha\beta} \Box h_{\alpha\beta} + \frac{1}{4} h \Box h + {\cal O}(h^3)\right)$$ in the gauge where the trace-reversed field is divergenceless. I'm doing some field theory, on linearized gravity backgrounds by treating $h_{\mu\nu}$ as a massless spin-2 field. I can't seem to find the ${\cal O}(h^3)$ terms in the Lagrangian anywhere. I know how to evaluate it, but it looks nasty. Are there any known references that just lists the next to leading order terms in the above Lagrangian? Answer: I think the earliest papers where this was written down were by DeWitt. But for a reference easily available via the arXiv look at hep-th/9411092. Eq (2.17) has the expansion you want and eq. (2.18) even has it to fourth order in $h$.
{ "domain": "physics.stackexchange", "id": 4646, "tags": "general-relativity, resource-recommendations, perturbation-theory, linearized-theory" }
Is four velocity always given by $U^{\mu} = d x^{\mu}/d\tau$?
Question: I was taught that four-velocity is defined as $${\bf U} = \frac{d \bf x}{d\tau}$$ and that it has the components $$U^{\mu} = \frac{d x^{\mu}}{d\tau}$$ where $d\bf x$ is the four displacement and $\tau$ is proper time. My question is simple: is the latter equation (for the components) correct in all coordinate systems? I tried to figure it out the following way: $${\bf U} = \frac{d}{d\tau}(x^{\mu}{\bf e_{\mu}})$$ $$= \frac{d x^{\mu}}{d\tau}{\bf e_{\mu}} + x^{\mu}\frac{d{\bf e_{\mu}}}{d\tau}$$ $$= \frac{d x^{\mu}}{d\tau}{\bf e_{\mu}} + x^{\mu}\frac{d x^{\nu}}{d\tau}\frac{\partial{\bf e_{\mu}}}{\partial x^{\nu}}$$ $$= \frac{d x^{\mu}}{d\tau}{\bf e_{\mu}} + x^{\mu}\frac{d x^{\nu}}{d\tau}\Gamma_{\nu \mu}^{\alpha}\bf e_{\alpha}$$ $$= \left(\frac{d x^{\alpha}}{d\tau} + x^{\mu}\frac{d x^{\nu}}{d\tau}\Gamma_{\nu \mu}^{\alpha}\right){\bf e_{\alpha}}$$ and therefore: $$U^{\alpha} = \frac{d x^{\alpha}}{d\tau} + x^{\mu}\frac{d x^{\nu}}{d\tau}\Gamma_{\nu \mu}^{\alpha}.$$ But the only way this component equation agrees with the earlier one is if $$x^{\mu}\frac{d x^{\nu}}{d\tau}\Gamma_{\nu \mu}^{\alpha} = 0$$ for all $\alpha$. However, I can't seem to prove/disprove this. Obviously if the Christoffel symbols are zero, then it's trivial. But if there are non-zero Christoffel symbols, then is it still zero? Answer: The four-velocity is the tangent four-vector of a timelike world line, that is $U^\mu = dx^\mu / d\tau$, and this definition applies in any coordinate system. The coordinate differentials $dx^\mu$ transform as the components of a vector, and the proper time $\tau$ is an invariant by definition, so their quotient $U^\mu$ transforms as a vector as well. However, in your derivation you refer to an object $x^\mu \mathbf{e}_\mu$, whose differential in general is not a vector, because in an arbitrary coordinate system the coordinates $x^\mu$ do not transform as vector components. Your comparison between the two expressions is therefore not justified.
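The underlying point can be stated in one line. Under a coordinate change $x^{\mu}\to x^{\mu'}$, the coordinate differentials transform linearly:

```latex
dx^{\mu'} = \frac{\partial x^{\mu'}}{\partial x^{\nu}}\, dx^{\nu}
\quad\Longrightarrow\quad
U^{\mu'} \equiv \frac{dx^{\mu'}}{d\tau}
        = \frac{\partial x^{\mu'}}{\partial x^{\nu}}\, U^{\nu}
```

so $U^{\mu} = dx^{\mu}/d\tau$ are valid vector components in every coordinate system, whereas the coordinates $x^{\mu}$ themselves do not obey this transformation law when the coordinate change is nonlinear - which is why the expansion $\mathbf{x}=x^{\mu}\mathbf{e}_{\mu}$ assumed in the question's derivation is not available in general.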
{ "domain": "physics.stackexchange", "id": 59691, "tags": "general-relativity, vectors, velocity, coordinate-systems, definition" }
Why do nuclei decay so fast and slow?
Question: Why do nuclei like oganesson (also known as ununoctium; this is the 118th element in the periodic table) decay in about 5 milliseconds? It is weird that they decay. In comparison, why do elements like uranium take about 200,000 years to decay, or even more? Why do atoms decay at all? Why do elements like polonium (the 84th element) take only about 140 days to decay? Answer: In a nutshell, atoms decay because they're unstable and radioactive. Oganesson (or ununoctium) has an atomic number of 118. That means there are 118 protons in the nucleus of one atom of oganesson, not counting the neutrons in the nucleus. We'll look at the most stable isotope of oganesson, $\mathrm{{}^{294}Og}$. The 294 means that there are 294 nucleons, i.e. 294 protons and neutrons in total, in the nucleus. Now, the heaviest stable isotope of any element known is $\mathrm{{}^{208}Pb}$, or lead-208. Beyond that many nucleons, the strong nuclear force begins to have trouble holding all those nucleons together. See, normally we'd think the nucleus impossible, because the protons (all positively charged) should repel each other - like charges repel. That's the electromagnetic force. But scientists discovered another force, called the strong nuclear force. The strong nuclear force is many times stronger than the electromagnetic force (there's a reason it's called the strong force), but it only operates over very, very small distances. Beyond those distances, the nucleus starts to fall apart. Oganesson and uranium atoms are both large enough that the strong force can't hold them together anymore. So now we know why the atoms are unstable and decay (note that there are more complications to this, but this is the general overview). But why the difference in decay time? First, let me address one misconception.
Quantum mechanics says that we don't know exactly when an atom will decay, or if it will at all, but for a collection of atoms, we can measure the speed of decay in what's called an element's half-life: the time required for half of the atoms in a sample to decay. So, to go back to decay time, it's related (as you might expect) again to the size of the nucleus. Generally, isotopes with an atomic number above 101 have a half-life of under a day, and $\mathrm{{}^{294}Og}$ definitely fits that description. (The one exception here is dubnium-268.) No elements with atomic numbers above 82 have stable isotopes. Uranium's atomic number is 92, so it is radioactive, but decays much more slowly than Oganesson for the simple reason that it is smaller. Interestingly enough, because of reasons not yet completely understood, there may be a sort of "island" of increased stability around atomic numbers 110 to 114. Oganesson is somewhat close to this island, and its half-life is longer than some predicted values, lending some credibility to the concept. The idea is that elements with a number of nucleons such that they can be arranged into complete shells within the atomic nucleus have a higher average binding energy per nucleon and can therefore be more stable. You can read more about this here and here. Hope this helps!
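The difference in timescales is easy to appreciate numerically. Here is a quick sketch (pure Python; the half-lives are illustrative round figures, with Og-294's on the order of milliseconds and U-238's about 4.5 billion years) of the exponential-decay law $N(t)/N_0 = (1/2)^{t/T}$:

```python
def remaining_fraction(elapsed, half_life):
    """Fraction of a radioactive sample left after `elapsed` time,
    given its half-life (same time units for both)."""
    return 0.5 ** (elapsed / half_life)

# Oganesson-294: half-life on the order of milliseconds (illustrative value).
og_half_life = 0.005  # seconds
print(remaining_fraction(0.05, og_half_life))   # after 50 ms, ~0.1% remains

# Uranium-238 by comparison: half-life about 4.5 billion years.
u238_half_life = 4.5e9  # years
print(remaining_fraction(1e6, u238_half_life))  # after a million years, ~99.98% remains
```

In other words, a sample of Og-294 is essentially gone within a tenth of a second, while a sample of U-238 is practically untouched after a million years.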
{ "domain": "physics.stackexchange", "id": 37944, "tags": "nuclear-physics, radioactivity, binding-energy, isotopes, half-life" }
How to use included models with plugins (ROS Hydro and stand alone Gazebo)
Question: Hi! I am creating my own Gazebo-ROS package using SDF specifications for a pioneer2dx model including the camera and hokuyo models. For moving the pioneer2dx (with camera and hokuyo included but without their plugins) I used the gazebo_ros_diff_drive.cpp plugin from /gazebo_ros_pkgs/gazebo_plugins. This was tested with a publisher to deal with the /cmd_vel topic. Up to that point everything works correctly, and rostopic list in the terminal displays the debug information normally. Now, my problem is when I try to implement the camera and the hokuyo models included in the pioneer2dx model with their respective plugins of /gazebo_ros_pkgs/gazebo_plugins, gazebo_ros_camera.cpp and gazebo_ros_laser.cpp, which still need to be filled in because some of their sections are blank (my next step). This is mentioned in Tutorial ROS_Motor_and_Sensor_Plugins. So, I would like to be sure of including the plugins correctly before writing them. I got some ideas from this answer considering there the example of the dr_vehicle (model.config and model.sdf) and also following the latest SDF specification, but there is still an error in my model. In the Tutorial, there is a robot's model and description hierarchy explained without plugins implementation in a SDF model. I am following this hierarchy of MYROBOT_description and MYROBOT_gazebo and it works correctly while not including camera and hokuyo plugins. Should I create each model separately within the /pioneer_description file with their respective plugins? So: /pioneer_description/models/pioneer2dx /pioneer_description/models/camera /pioneer_description/models/hokuyo each one with a plugins file; this also means changes for my .world file located in /pioneer_gazebo. And also setting the paths as mentioned in this answer.
Does anyone have some recommendations, examples or information about using included models with plugins and maybe some experience with the camera and hokuyo plugins mentioned above for ROS Hydro and stand alone Gazebo? Thanks!, Juan-Diego Originally posted by JuDiNiSi on Gazebo Answers with karma: 3 on 2013-08-29 Post score: 0 Answer: Hi Juan, I think there is no "correct" way to do this. For myself I like to separate my robot parts into different files, which are included in a central master file. This way, I am able to enable/disable robot parts very quickly. Whether you choose to put the different parts into different subfolders or not depends on your own taste. Furthermore, I used the mesh files that come with Gazebo to visualize my sensor (e.g. model://hokuyo/meshes/hokuyo.dae), but since I wanted to use ROS messages, I needed to change the plugin used from "libRayPlugin.so" to "libgazebo_ros_laser.so" anyway, and therefore I wrote my own version of the hokuyo laser scan. Since I don't use Hydro and standalone Gazebo, but Groovy and Gazebo 1.5, I can't provide any SDF code, because I coded in URDF. Regards psei Originally posted by psei with karma: 166 on 2013-09-10 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by JuDiNiSi on 2013-09-13: Hi! Psei, thank you for your answer! I think so as well. In any case, I also had to do it separately. In the pioneer2dx model.sdf , I included the links for a camera and a hokuyo laser with their ROS plugins given by the gazebo_ros_pkgs for connecting ROS (in catkin_ws) with the new version of Gazebo. I have my own gazebo models file too which was referenced to the corresponding gazebo models path (GAZEBO_MODEL_PATH) as it is explained in some links that I have above in my question. Comment by JuDiNiSi on 2013-09-13: The plugins path are also referenced to the corresponding gazebo plugins path (GAZEBO_PLUGIN_PATH).
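To make the "central master file" idea concrete, here is a hypothetical sketch of what such a master model.sdf could look like (the file layout and model names are made up for illustration; only the include mechanism itself comes from the SDF specification):

```xml
<!-- hypothetical master model.sdf: each robot part lives in its own model directory -->
<sdf version="1.4">
  <model name="pioneer_with_sensors">
    <include>
      <uri>model://pioneer2dx</uri>
    </include>
    <include>
      <uri>model://hokuyo</uri>
      <pose>0.2 0 0.3 0 0 0</pose>
    </include>
  </model>
</sdf>
```

Inside the hokuyo model's own model.sdf, the stock plugin reference to libRayPlugin.so is then swapped for libgazebo_ros_laser.so so the scans are published as ROS messages, as the answer describes. A layout like this makes it easy to enable or disable a part by commenting out a single include.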
{ "domain": "robotics.stackexchange", "id": 3441, "tags": "gazebo" }
Is anyone attempting to disprove the existence of ToE? (Formerly: Is there necessarily a theory of everything?)
Question: Does the following claim have a proof? Theorem: There exists a theory of everything. [edit: Added the following to hopefully clarify what I'm driving at.] Is any physicist working on proving the following theorem? Theorem: There is no ToE because QM and GR are fundamentally irreconcilable. In other words, any theorem that addresses the question of a ToE – much as any theorem that addresses the question of perpetual motion machines – is necessarily a no-go theorem. Answer: Well, one of the most popular Theories of Everything is string theory, or M-theory. This essentially gives us a way to link general relativity to quantum mechanics by introducing the force of gravity as a particle in the standard model. This would require a certain "common factor" between gravity (the graviton) and the rest of the particles, so string theory was devised. It essentially tells us that there are "vibrations", or "strings", that make up particles, and a certain "vibration", or "combination", of these strings gives us various particles. However, this would require us to have 11 dimensions. While this is a possibility that cannot be dismissed, it's certainly difficult to prove. Then, there's the slightly less popular Loop Quantum Gravity (LQG). LQG postulates that the structure of space is composed of finite loops woven into an extremely fine fabric or network. These networks of loops are called spin networks. The evolution of a spin network, or spin foam, has a scale on the order of a Planck length, approximately $10^{-35}$ metres, and smaller scales are meaningless. Consequently, not just matter, but space itself, prefers an atomic structure. The whole point of trying to devise a theory that could act as a bridge between QM and GR is to find a possibility for a TOE. I don't think someone would be trying to disprove the need for a TOE when there's an obvious need for it - the inconsistencies between QM and GR.
For one, GR is deterministic and local: each cause results in a predictable, local effect, and quantities have definite values that can, in principle, be measured. QM, however, tells us that events in the subatomic realm have probabilistic, rather than definite, outcomes. This is just one of the many inconsistencies between QM and GR. I hope you now understand the need for a ToE.
{ "domain": "physics.stackexchange", "id": 74659, "tags": "theory-of-everything, epistemology" }
Is equivalence of unambiguous context-free languages decidable?
Question: It is well known that the equivalence problem is undecidable for general context-free languages. However, all proofs of this fact that I am aware of seem to involve some ambiguous context-free grammars. For this reason, I would like to ask if it is known whether the problem remains undecidable while restricting oneself to unambiguous context-free languages. That is, given two context-free grammars that are a priori granted to be unambiguous, is it decidable whether they are equivalent or not? I find this problem a little intriguing, since it is known that equivalence is decidable for deterministic context-free languages, though this result is far from trivial... On the other hand, there might be some simple reason for undecidability that I have been overlooking. Answer: This is currently an open problem. As correctly pointed out, if it is decidable, then one expects the proof to be hard since it generalises the famous DPDA equivalence problem. On the other hand, the classical arguments for undecidability of the CFL universality problem make use of inherently ambiguous languages, and thus one needs new ideas to show undecidability. Let me point out that the universality problem for UCFLs is decidable (in PSPACE), using generating functions [1]. REFERENCES [1] N. Chomsky and M. P. Schützenberger, The Algebraic Theory of Context-Free Languages, Computer Programming and Formal Systems, 1963.
{ "domain": "cstheory.stackexchange", "id": 4418, "tags": "fl.formal-languages, grammars, context-free, decidability, undecidability" }
Intuition on why quantum hall effect?
Question: I think I can understand the Hall effect using the right-hand rule: the electrons experience the Lorentz force in the presence of a magnetic field. But I don't get why it is quantum. I saw that the resistivity looks like a stepped graph; why do electrons behave in such a manner, like a current flowing through a multi-stage zener diode? (OK, I made up the multi-stage part.) Answer: The simplest explanation requires basic knowledge about quantum point contacts. QPCs are narrow openings between two electron gases, where transverse motion of electrons is quantized. The conductance through such openings is quantized in the units of the conductance quantum $$G_0 = \frac{2e^2}{h}.$$ This happens, because for a one-dimensional motion the group velocity exactly cancels out with the density of states (the former is proportional to $\partial_k\epsilon(k)$, whereas the latter is inversely proportional to this quantity). The conductance is therefore proportional to the number of conducting transverse energy levels, each level contributing $G_0$. More formally this physics is expressed by the Landauer formula. In quantum Hall effect the magnetic field is so strong that the electrons are confined to the walls of the conductor, moving in effectively one-dimensional channels, and thus exhibiting conductance quantization (which means that the resistance scales as $1/n$ with the number of conducting sub-bands.) What I described here is the Integer quantum Hall effect (IQHE), which is essentially a one-particle phenomenon. Fractional quantum Hall effect has to do with electron-electron interactions and is far more complex.
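To connect this to the stepped graph in the question, the plateau values follow directly from the conductance quantum. A quick numerical sketch using the standard values of $e$ and $h$:

```python
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J s

G0 = 2 * e**2 / h     # conductance quantum, siemens

# With n conducting sub-bands the conductance is n * G0, so the
# Hall resistance sits on plateaus at h / (2 * e**2 * n):
for n in range(1, 4):
    print(n, 1.0 / (n * G0))   # ~12906.4, ~6453.2, ~4302.1 ohms
```

Those values are the famous plateaus seen in IQHE measurements; each new filled sub-band adds one more quantum of conductance and drops the resistance to the next step.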
{ "domain": "physics.stackexchange", "id": 70130, "tags": "electrons, quantum-hall-effect" }
How to see/change learning rate in Keras LSTM?
Question: I see some questions/answers that ask to decrease the learning rate. But I don't know how I can see and change the learning rate of an LSTM model in the Keras library. Answer: In Keras, you can set the learning rate as a parameter for the optimization method; the piece of code below is an example from the Keras documentation:

from keras import optimizers

model = Sequential()
model.add(Dense(64, kernel_initializer='uniform', input_shape=(10,)))
model.add(Activation('softmax'))
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)

In the above code, lr on the line defining sgd is the learning rate, which in this code is set to 0.01. You can change it to whatever you want. For further detail, see this link. Please tell me if this helped to solve your problem.
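As a side note on the decay parameter in that snippet: Keras's SGD applies time-based decay, shrinking the effective rate on each update. A plain-Python sketch of that schedule (this mirrors the formula classic Keras 2.x SGD uses internally; treat it as an approximation of the internals):

```python
def effective_lr(base_lr, decay, iteration):
    """Learning rate used at a given update step under
    time-based decay: base_lr / (1 + decay * iteration)."""
    return base_lr * 1.0 / (1.0 + decay * iteration)

base_lr, decay = 0.01, 1e-6
print(effective_lr(base_lr, decay, 0))        # 0.01 at the first update
print(effective_lr(base_lr, decay, 1000000))  # roughly halved after 1e6 updates
```

To inspect or change the learning rate of an already-compiled model at runtime, keras.backend.get_value(model.optimizer.lr) and keras.backend.set_value(model.optimizer.lr, 0.001) are the usual tools in Keras 2.x.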
{ "domain": "datascience.stackexchange", "id": 4954, "tags": "keras, lstm, learning-rate" }
How is energy dissipated if force exists in action and reaction pairs?
Question: If force exists in action-reaction pairs, then how is energy not always conserved? To illustrate, I would like to give the example of an inelastic collision in an isolated system. All forces acting on the objects colliding are internal, and the KE of the COM remains constant. Now, when the objects collide, while one object exerts a given force, the other object also exerts an equal and opposite reaction force. If we think of kinetic energy as work done by a force acting on that object, how is mechanical energy ever dissipated in such a case? Answer: If force exists in action-reaction pairs, then how is energy not always conserved? I assume by this that you actually mean, how is mechanical energy not always conserved. Locally energy is always conserved, but it can be transformed from mechanical energy to other forms. I believe that your confusion comes from the idea that since internal forces always come in equal and opposite pairs that the mechanical work done by one should always be equal and opposite to the mechanical work done by the other. This neglects the fact that the rate of work is $P=\vec F \cdot \vec v$. Even though the $\vec F$ has an equal and opposite counterpart, their respective $\vec v$ may differ. In particular, in continuum mechanics the stress power density is given by $\mathbf T \cdot \mathbf D$ where $\mathbf T$ is the Cauchy stress tensor and $\mathbf D$ is the rate of deformation tensor. The rate of deformation is the key to understanding your question. The rate of deformation is essentially* the difference in velocity for neighboring parts of the material. The forces are equal and opposite, and if the velocity is the same then there is no mechanical work and no deformation. On the other hand, if the material is deforming then the velocities are not the same and so the mechanical work is not equal and mechanical energy is lost. I would like to give the example of an inelastic collision in an isolated system. 
So in an inelastic collision there is deformation of the colliding bodies. At some point the front stops while the rear continues moving forward. This difference in velocity is what drives the loss of mechanical energy despite all of the internal forces being equal and opposite. See https://csml.berkeley.edu/Notes/ME185.pdf#page150. *More specifically, $\mathbf D=(\mathbf L+\mathbf L^T)/2$, where $\mathbf L= \nabla \vec v$.
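A minimal numerical sketch of this (with made-up masses and velocities): in a perfectly inelastic collision momentum is conserved, yet kinetic energy drops; the drop is exactly the energy dissipated while the two sides of the deforming region move at different velocities.

```python
def inelastic_collision(m1, v1, m2, v2):
    """Perfectly inelastic collision: the bodies stick together.
    Returns (final velocity, mechanical energy lost)."""
    v_final = (m1 * v1 + m2 * v2) / (m1 + m2)   # momentum is conserved
    ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
    ke_after = 0.5 * (m1 + m2) * v_final**2
    # While the bodies deform, the two ends of the internal force pair
    # move at different velocities, so the powers F.v do not cancel;
    # the net (negative) work is where the lost KE goes.
    return v_final, ke_before - ke_after

v, lost = inelastic_collision(1.0, 2.0, 1.0, 0.0)
print(v, lost)   # final velocity 1.0 m/s; 1.0 J converted to heat/deformation
```

With equal masses and one body initially at rest, exactly half the initial kinetic energy is dissipated, even though every internal force had an equal and opposite partner.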
{ "domain": "physics.stackexchange", "id": 78778, "tags": "newtonian-mechanics, forces, energy, energy-conservation, conservation-laws" }
Can signals travel "backwards" in the sensory pathway?
Question: My understanding of the "sensory pathway" is that it's a linear, directional pipeline as follows:

1. Nerves (fire various signals depending on the type of sensors they are)
2. Fibers (transmit signals from nerves to spinal cord)
3. Spinal cord (transmits signals from fibers to brainstem)
4. Brainstem (transmits signals from spinal cord to brain)
5. Brain (the somatosensory cortex registers signals as pain/pressure/temperature/etc.)

So again, my understanding is that sensory nerves are one-way sensors, and that once they fire a signal, its one and only ultimate destination is the brain. So, as a precursor to my question, if anything I have stated is incorrect, please begin by correcting me! I read that pinched nerves in the back and neck can cause neuropathy ("pins and needles") all over the body: hands, feet, face, etc. But how can this be? If my understanding of the sensory pathway is correct, then a pinched nerve in one's neck should only send "pins and needles" signals directly to the brain; it should not at any point forward/relay signals on to any other areas of the body. To state that a pinched nerve in one's neck could possibly cause neuropathy in their arm insinuates there is some connection between the pinched "neck nerve" and the nerve(s) in the arm that are experiencing the pins and needles sensation. In fact, unless I completely misunderstand the entire sensory pathway and how pain signals transmit, this implies that a pinched nerve sends signals out to the spinal cord, down the spinal cord and into, say, an arm, and that the sensory receptors in the arm subsequently react to these signals. How is this possible?!? Is there some kind of feedback mechanism at play where nerves in the neck and back can relay signals on to other areas of the body, instead of just feeding directly into the "spinal cord => brainstem => cortex" pipeline? Answer: Reverse signals (dendrite -> axon) do occur in neurons, and are called back propagating action potentials (bAPs).
However, whatever role bAPs play in the nervous system at large is subtle/small enough that we don't really understand them at all. In any case, as @luigi points out, pinched nerves don't have anything to do with bAPs. The reason why a pinch in one place (the neck) feels like it has an effect in another place (the arm) is because some neurons trace long (on the order of several feet) pathways through the body. Ultimately, the sensory input of everything below your head passes through neurons in your neck at some point. If there is a pinch in the neuron in your neck that relays sense information from your arm, that neuron in your neck can activate spuriously, even in the absence of any stimulus acting on your arm. In that case your brain will interpret the spurious activation of that neck neuron as being the result of some kind of strange arm stimulus, and so you feel pins and needles. See here for more information on the sensory cortex in your brain and the "sense image" it maintains of your body. Edit: To clarify what I mean by a "pathway" (referring to a diagram of the sensory pathway, not reproduced here): the part marked "proprioceptors or mechanoreceptors" is what's buried in your skin. If you poke a mechanoreceptor, it evokes a signal in the neuron marked "first-order". In some cases, that first-order neuron will reach all the way to the brainstem (as in the picture above), and in some cases that first-order neuron will interface with an intermediate neuron in the spine, but the effect is the same in any case. A first-order neuron stimulates a second-order neuron, which in turn stimulates a third-order neuron, etc., all the way until the original signal arising from the mechanoreceptor reaches the somatosensory cortex. A pinch in any of those first, second, etc. order neurons will potentially send spurious signals to the somatosensory cortex. Second edit: Rereading your question, I think a lot of the trouble that you're having has to do with your understanding of how the brain assembles sense information.
In order to feel pain in your arm, it is not necessary for anything to be happening to the arm itself. Your arm could be perfectly healthy and normal, but as long as your brain is receiving a signal equivalent to the one produced by a damaged arm you will still feel the same sensation of pain.
{ "domain": "biology.stackexchange", "id": 3214, "tags": "human-biology, human-anatomy, brain, pain, central-nervous-system" }
Temporary family member 2020-CD3
Question: So, recently I came to know that this asteroid has been orbiting Earth for the past 3 years and is still orbiting. It is a kind of mini-Moon of the Earth, known to have come from the asteroid belt. Is there any threat of 2020 CD3 colliding with Earth, and how long will it take to escape the Earth's gravity? Answer: There is no chance of a collision in the short term. 2020 CD3 is in a rather chaotic orbit of the Earth that extends well beyond the moon and doesn't come very close to Earth. Each orbit is different, but no orbit brings it closer than the moon. The orbit is irregular because it is perturbed both by the moon and by the gravity of the sun. At its furthest point from the Earth, the gravity field that it moves in is not a simple "inverse square law" because of the combination of Earth, Moon and Sun. It is likely to escape Earth in the next few months, and it will go back to being a sun-orbiting asteroid. We know this because we understand how gravity works, so we can predict its future path, and its future path involves it leaving the neighbourhood of the Earth. However, as its orbit does bring it close to the Earth's orbit, there is a chance of it colliding with the Earth at some point in the future. It would make a bright fireball, but would not be dangerous, as it is only a couple of metres across. Objects like this collide with the Earth frequently. Its orbit is not yet well enough known to determine exactly how it will return to a solar orbit, and so we can't yet predict whether it will be temporarily captured or even collide with Earth on its next close approach. But the probability of a collision is low.
{ "domain": "astronomy.stackexchange", "id": 4320, "tags": "the-moon, asteroids, asteroid-belt" }
Is weighted XOR-SAT NP-hard?
Question: Given $n$ boolean variables $x_1,\ldots,x_n$ each of which is assigned a positive cost $c_1,\ldots,c_n\in\mathbb{Z}_{>0}$ and a boolean function $f$ on these variables given in the form $$f(x_1,\ldots,x_n)=\bigwedge_{i=1}^k\bigoplus_{j=1}^{l_i}x_{r_{ij}}$$ ($\oplus$ denoting XOR) with $k\in\mathbb{Z}_{>0}$, integers $1\leq l_i\leq n$ and $1\leq r_{i1}<\cdots<r_{il_i}\leq n$ for all $i=1,\ldots,k$, $j=1,\ldots,l_i$, the problem is to find an assignment of minimum cost for $x_1,\ldots,x_n$ that satisfies $f$, if such an assignment exists. The cost of an assignment is simply given by $$\sum_{\substack{i\in\{1,\ldots,n\}\\x_i\,\text{true}}}c_i.$$ Is this problem NP-hard, that is to say, is the accompanying decision problem "Is there a satisfying assignment of cost at most some value $K$" NP-hard? Now, the standard XOR-SAT problem is in P, for it maps directly to the question of solvability of a system of linear equations over $\mathbb{F}_2$ (see, e. g., https://en.wikipedia.org/wiki/Boolean_satisfiability_problem#XOR-satisfiability). The result of this solution (if it exists) is an affine subspace of $\mathbb{F}_2^n$. The problem is thus reduced to pick the element corresponding with minimal cost from that subspace. Alas, that subspace may be quite large, and indeed, rewriting $f$ in binary $k\times n$-matrix form, with a $1$ for each $x_{r_{ij}}$ at the $i$-th row and the $r_{ij}$-th column, and zero otherwise, we get a cost minimization problem subject to $$Ax=1,$$ where $A$ is said matrix, $x$ is the column vector consisting of the $x_1,\ldots,x_n$ and $1$ is the all-1-vector. This is an instance of a binary linear programming problem, which are known to be NP-hard in general. So the question is, is it NP-hard in this particular instance as well? 
Answer: A classical result of Berlekamp, McEliece, and van Tilborg shows that the following problem, maximum likelihood decoding, is NP-complete: given a matrix $A$ and a vector $b$ over $\mathbb{F}_2$, and an integer $w$, determine whether there is a solution to $Ax = b$ with Hamming weight at most $w$. You can reduce this problem to your problem. The system $Ax = b$ is equivalent to the conjunction of equations of the form $x_{i_1} \oplus \cdots \oplus x_{i_m} = \beta$. If $\beta = 1$, this equation is already of the correct form. If $\beta = 0$ then we XOR an extra variable $y$ into the equation, and then we force this variable to be $1$ by adding an extra equation $y = 1$. We define the weights as follows: $y$ has weight $0$, and the $x_i$ have weight $1$. We have now reached an equivalent formulation of maximum likelihood decoding which is an instance of your problem.
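For intuition (not part of the reduction), tiny instances of the minimization problem in the question can be solved by brute force (an illustrative sketch, exponential in $n$, so only viable for very small systems): enumerate assignments, keep those satisfying every XOR clause, and take the cheapest.

```python
from itertools import product

def min_cost_xor_sat(n, clauses, costs):
    """clauses: list of lists of variable indices; each clause requires
    the XOR of its variables to equal 1.  Returns (best_cost, assignment)
    or None if the system is unsatisfiable."""
    best = None
    for x in product((0, 1), repeat=n):
        if all(sum(x[i] for i in clause) % 2 == 1 for clause in clauses):
            cost = sum(c for xi, c in zip(x, costs) if xi)
            if best is None or cost < best[0]:
                best = (cost, x)
    return best

# x1 XOR x2 = 1 and x2 XOR x3 = 1, with costs (3, 1, 2):
print(min_cost_xor_sat(3, [[0, 1], [1, 2]], [3, 1, 2]))  # (1, (0, 1, 0))
```

The two satisfying assignments here are (0, 1, 0) with cost 1 and (1, 0, 1) with cost 5, so the minimizer picks the former; the NP-hardness result above says no algorithm is expected to do fundamentally better than exponential search in the worst case.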
{ "domain": "cs.stackexchange", "id": 4870, "tags": "optimization, np-hard, satisfiability, integer-programming, xor" }
Wordnet Lemmatization
Question: I tried finding out about exception lists in WordNet lemmatizers. "Morphy() uses inflectional ending rules and exception lists to handle different possibilities", which I read from http://www.nltk.org/howto/wordnet.html . Can you explain what an exception list is? Thank you. Answer: The exception list files are used to help the processor find base forms from 'irregular inflections' according to the man page. They mean that some words, when plural or in a different tense, can't be algorithmically processed to find the base/root word. More details can be found in the morphy man page. I'm not a language processing expert, but this is likely a result of English words that 'break the rules'. If you think about the code like a human trying to learn English: the student learns rules to use (algorithm) and then has to memorize exceptions for the rules (exception lists). An over-simplified analogy that does not involve endings/conjugation would be a spell checking program. An algorithm might check for 'i before e, except after c' but would first have to check the word against an exception list to make sure it isn't 'weird' or 'caffeine' - please don't start a linguistics fight about this rule, I am not commenting on the validity of it/that's not the point I'd like to make.
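To make the rules-plus-exceptions idea concrete, here is a toy sketch (the suffix rules and exception entries are made up for illustration; they are not WordNet's actual morphy rules or exc files): try the exception list first, and fall back to suffix-stripping rules only if the word is not an exception.

```python
# Toy version of the morphy strategy for nouns.
EXCEPTIONS = {"geese": "goose", "mice": "mouse", "teeth": "tooth"}
SUFFIX_RULES = [("ses", "s"), ("ies", "y"), ("s", "")]  # tried in order

def lemmatize_noun(word, lexicon):
    if word in EXCEPTIONS:               # irregular form: just look it up
        return EXCEPTIONS[word]
    for suffix, repl in SUFFIX_RULES:    # regular form: apply ending rules
        if word.endswith(suffix):
            candidate = word[: len(word) - len(suffix)] + repl
            if candidate in lexicon:     # keep only real base forms
                return candidate
    return word

lexicon = {"goose", "dog", "pony", "glass"}
print(lemmatize_noun("geese", lexicon))    # goose  (via the exception list)
print(lemmatize_noun("dogs", lexicon))     # dog    (rule: strip "s")
print(lemmatize_noun("ponies", lexicon))   # pony   (rule: "ies" -> "y")
```

No suffix rule could map "geese" to "goose", which is exactly why the exception list exists: irregular inflections have to be memorized, not computed.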
{ "domain": "datascience.stackexchange", "id": 310, "tags": "machine-learning, nlp, text-mining" }
American Checkers
Question: I am implementing the logic for a Checkers game in F#. I am writing my code in a library so it can be called from any UI provider, and am trying to do it in good FP style. I currently have the following code; each file is provided in the order of compilation:

Checkers.fs

This contains some unions and records that will be used throughout the entire library.

type Player = Black | White

type PieceType = Checker | King

type Coord =
    {
        Row :int
        Column :int
    }
    with
    static member (+) (coord1 :Coord, coord2 :Coord) =
        {Row = coord1.Row + coord2.Row; Column = coord1.Column + coord2.Column}

Piece.fs

This is an immutable class representing a Piece. It contains each option as a static member to ease building boards manually.

type Piece(player:Player, pieceType:PieceType) =
    member this.Player = player
    member this.PieceType = pieceType

    member this.Promote() = new Piece(player, PieceType.King)

    override this.Equals (obj) =
        let piece = obj :?> Piece
        piece.Player = this.Player && piece.PieceType = this.PieceType

    override this.GetHashCode() =
        this.Player.GetHashCode() ^^^ this.PieceType.GetHashCode()

    static member WhiteChecker() = Some <| new Piece(Player.White, PieceType.Checker)
    static member BlackChecker() = Some <| new Piece(Player.Black, PieceType.Checker)
    static member WhiteKing() = Some <| new Piece(Player.White, PieceType.King)
    static member BlackKing() = Some <| new Piece(Player.Black, PieceType.King)

Board.fs

This type represents a board, and the only stateful member of this class is immutable, like Piece. The second optional constructor is for ease of creating a board from a hard-coded list in C#; it may not remain in future versions.
type Board(board) =
    new () = Board(Board.DefaultBoard)

    new(board :IEnumerable<IEnumerable<Option<Piece>>>) =
        let boardArray = List.ofSeq(board.Select(fun r -> List.ofSeq(r)))
        Board(boardArray)

    member this.Board :Option<Piece> list list = board

    member this.Item with get(coord :Coord) = this.Board.[coord.Row].[coord.Column]

    member this.Item with get(row :int) = this.Board.[row]

    static member DefaultBoard :Option<Piece> list list =
        [
            [None; Piece.BlackChecker(); None; Piece.BlackChecker(); None; Piece.BlackChecker(); None; Piece.BlackChecker()];
            [Piece.BlackChecker(); None; Piece.BlackChecker(); None; Piece.BlackChecker(); None; Piece.BlackChecker(); None];
            [None; Piece.BlackChecker(); None; Piece.BlackChecker(); None; Piece.BlackChecker(); None; Piece.BlackChecker()];
            [None; None; None; None; None; None; None; None];
            [None; None; None; None; None; None; None; None];
            [Piece.WhiteChecker(); None; Piece.WhiteChecker(); None; Piece.WhiteChecker(); None; Piece.WhiteChecker(); None];
            [None; Piece.WhiteChecker(); None; Piece.WhiteChecker(); None; Piece.WhiteChecker(); None; Piece.WhiteChecker()];
            [Piece.WhiteChecker(); None; Piece.WhiteChecker(); None; Piece.WhiteChecker(); None; Piece.WhiteChecker(); None];
        ];

FSharpExtensions.fs

This module contains many functions for internal use only. Most of these functions are designed to be based on a certain type to make calling them easier; perhaps they should be included in the type rather than being extensions like this?
module FSharpExtensions =
    let internal getJumpedCoord(startCoord, endCoord) =
        {Row = startCoord.Row - Math.Sign(startCoord.Row - endCoord.Row);
         Column = startCoord.Column - Math.Sign(startCoord.Column - endCoord.Column)}

    let internal checkMoveDirection(piece :Piece, startCoord :Coord, endCoord :Coord) =
        match piece.PieceType with
        | PieceType.Checker ->
            match piece.Player with
            | Player.Black -> startCoord.Row < endCoord.Row
            | Player.White -> startCoord.Row > endCoord.Row
        | PieceType.King -> true

    let internal moveIsDiagonal(startCoord :Coord, endCoord :Coord) =
        startCoord <> endCoord &&
        System.Math.Abs(startCoord.Row - endCoord.Row) = System.Math.Abs(startCoord.Column - endCoord.Column)

    let internal kingRowIndex(player) =
        match player with
        | Player.Black -> 7
        | Player.White -> 0

    type Board with
        member internal board.CoordExists(coord :Coord) =
            coord.Row > 0 && coord.Row < board.Board.Length &&
            coord.Column > 0 && coord.Column < board.Board.[0].Length

        member internal board.IsValidCheckerHop(startCoord :Coord, endCoord :Coord) =
            let piece = board.[startCoord].Value
            checkMoveDirection(piece, startCoord, endCoord) && board.[endCoord].IsNone

        member internal board.IsValidKingHop(startCoord :Coord, endCoord :Coord) =
            board.[endCoord].IsNone

        member internal board.IsValidCheckerJump(startCoord :Coord, endCoord :Coord) =
            let piece = board.[startCoord].Value
            let jumpedCoord = getJumpedCoord(startCoord, endCoord)
            let jumpedPiece = board.[jumpedCoord]
            checkMoveDirection(piece, startCoord, endCoord) && board.[endCoord].IsNone &&
            jumpedPiece.IsSome && jumpedPiece.Value.Player <> piece.Player

        member internal board.IsValidKingJump(startCoord :Coord, endCoord :Coord) =
            let piece = board.[startCoord].Value
            let jumpedCoord = getJumpedCoord(startCoord, endCoord)
            let jumpedPiece = board.[jumpedCoord]
            board.[endCoord].IsNone && jumpedPiece.IsSome && jumpedPiece.Value.Player <> piece.Player

        member internal board.IsValidHop(startCoord :Coord, endCoord :Coord) =
            match board.[startCoord].Value.PieceType with
            | PieceType.Checker -> board.IsValidCheckerHop(startCoord, endCoord)
            | PieceType.King -> board.IsValidKingHop(startCoord, endCoord)

        member internal board.IsValidJump(startCoord :Coord, endCoord :Coord) =
            match board.[startCoord].Value.PieceType with
            | PieceType.Checker -> board.IsValidCheckerJump(startCoord, endCoord)
            | PieceType.King -> board.IsValidKingJump(startCoord, endCoord)

        member internal board.SetPieceAt(coord :Coord, piece :Option<Piece>) =
            let boardItems =
                List.init 8 (fun row ->
                    match row with
                    | i when i = coord.Row ->
                        List.init 8 (fun col ->
                            match col with
                            | j when j = coord.Column -> piece
                            | _ -> board.[row].[col])
                    | _ -> board.[row])
            new Board(boardItems)

        member internal board.Jump(startCoord :Coord, endCoord :Coord) =
            let kingRowIndex = kingRowIndex(board.[startCoord].Value.Player)
            let piece =
                match endCoord.Row with
                | row when row = kingRowIndex -> Some <| board.[startCoord].Value.Promote()
                | _ -> board.[startCoord]
            let jumpedCoord = getJumpedCoord(startCoord, endCoord)
            board.SetPieceAt(startCoord, None).SetPieceAt(endCoord, piece).SetPieceAt(jumpedCoord, None)

        member internal board.Hop(startCoord :Coord, endCoord :Coord) =
            let kingRowIndex = kingRowIndex(board.[startCoord].Value.Player)
            let piece =
                match endCoord.Row with
                | row when row = kingRowIndex -> Some <| board.[startCoord].Value.Promote()
                | _ -> board.[startCoord]
            board.SetPieceAt(startCoord, None).SetPieceAt(endCoord, piece)

ExtensionMethods.fs

This is really the public API of my library. I designed the methods like this so they could be called as boardInstance.Member(...) in both C# and F#.
[<Extension>]
type ExtensionMethods() =

    [<Extension>]
    static member IsValidMove(board :Board, startCoord :Coord, endCoord :Coord) =
        board.CoordExists(startCoord) &&
        board.CoordExists(endCoord) &&
        board.[startCoord].IsSome &&
        moveIsDiagonal(startCoord, endCoord) &&
        match Math.Abs(startCoord.Row - endCoord.Row) with
        | 1 -> board.IsValidHop(startCoord, endCoord)
        | 2 -> board.IsValidJump(startCoord, endCoord)
        | _ -> false

    [<Extension>]
    static member Move(board :Board, startCoord :Coord, endCoord :Coord) :Option<Board> =
        match ExtensionMethods.IsValidMove(board, startCoord, endCoord) with
        | false -> None
        | true ->
            match Math.Abs(startCoord.Row - endCoord.Row) with
            | 1 -> Some <| board.Hop (startCoord, endCoord)
            | 2 -> Some <| board.Jump (startCoord, endCoord)
            | _ -> None

    [<Extension>]
    static member Move(board :Board, coordinates :IEnumerable<Coord>) =
        let coords = List.ofSeq(coordinates)
        match coords.Length with
        | b when b >= 3 ->
            let newBoard = ExtensionMethods.Move (board, coords.Head, coords.[1])
            ExtensionMethods.Move (newBoard, coords.Tail)
        | _ -> ExtensionMethods.Move (board, coords.Head, coords.[1])

    static member internal Move(board :Option<Board>, coordinates :IEnumerable<Coord>) =
        match board.IsSome with
        | false -> None
        | true -> ExtensionMethods.Move(board.Value, coordinates)

Any and all comments welcome. If you see something that I can improve, or that I am not doing something the idiomatic F# way, please let me know.

Answer: One of the many benefits of F# is that it's a multiparadigmatic language; while it embraces a functional-first ideal, it clearly also enables you to write object-oriented code. This is useful if you're coming from a C# background, like I did some years ago. You can get started quickly writing F#, concentrating on learning the language syntax. Naturally, if you do that, you'd tend to start your F# journey writing F# in an object-oriented style. There's nothing wrong with that; it's part of the voyage.
It looks like you've already made an effort to make your F# code immutable, which is an important step in the right direction. In general, it's looking good. You also use the word idiomatic F#, and that opens another discussion. Most importantly, what's idiomatic has a subjective component. Still, when I write F# code these days, my code is mostly assembled from functional types (records, discriminated unions, lists, tuples) and let-bound functions. Most of the code I've seen from other seasoned F# programmers seems to follow that pattern as well. With that in mind, here are some suggestions that would, in my opinion, make the code more idiomatic.

Use functional types

You already start by declaring the two record types Player and PieceType, and that's a good start, but why make Coord a class? You can make it a record, too:

type Coord = { Row :int; Column :int }

You can still support addition:

let (+) coord1 coord2 =
    { Row = coord1.Row + coord2.Row; Column = coord1.Column + coord2.Column }

Notice how type annotations aren't required. Since this function reads from Row and Column labels, the compiler can infer that both coord1 and coord2 are Coord values. The + function still works, as this FSI session demonstrates:

> { Row = 1; Column = 4 } + { Row = 2; Column = 3 };;
val it : Coord = {Row = 3; Column = 7;}

In the same spirit, it's easy to refactor Piece to a record and associated functions:

type Piece = { Player : Player; PieceType : PieceType }

let promote p = { p with PieceType = King }

let whiteChecker = Some { Player = White; PieceType = Checker }
let blackChecker = Some { Player = Black; PieceType = Checker }
let whiteKing = Some { Player = White; PieceType = King }
let blackKing = Some { Player = Black; PieceType = King }

F#'s functional types all have structural equality, so you can see how this approach saves you from having to override Equals and GetHashCode. Here, I also converted the 'factory' methods whiteChecker, blackKing, and so on, to values.
The original functions always returned the same value, so I didn't see any reason to make them functions.

Use built-in types

The Board class is little but a wrapper around Piece option list list, so it can be eliminated. It can sometimes be useful to define a type alias:

type Board = Piece option list list

It can help you better communicate intent, but it doesn't preclude a user from providing a 'raw' Piece option list list value. This keeps things more flexible. The only behaviour defined by Board are the two Item properties and the default board, all of which can easily be defined as let-bound values:

let row = List.item
let square coord = List.item coord.Row >> List.item coord.Column

Notice how row is nothing but an alias for List.item. Likewise, square is a composition of List.item functions. These functions are actually much more generic than the ones defined in the Board class, but they work correctly with a board as well. You could define the default board like above, but perhaps you'll find the following more readable:

let defaultBoard = [
    List.replicate 4 [None; blackChecker] |> List.concat
    List.replicate 4 [blackChecker; None] |> List.concat
    List.replicate 4 [None; blackChecker] |> List.concat
    List.replicate 8 None
    List.replicate 8 None
    List.replicate 4 [whiteChecker; None] |> List.concat
    List.replicate 4 [None; whiteChecker] |> List.concat
    List.replicate 4 [whiteChecker; None] |> List.concat ]

Code readability is subjective, but this is both shorter, and explicitly highlights the repetitive nature of the default board. On the other hand, as a reader, you can't 'see' the whole board laid out in the code.
In any case, you can get a row from the default board:

> row 2 defaultBoard;;
val it : Piece option list =
  [null; Some {Player = Black; PieceType = Checker;}; null;
   Some {Player = Black; PieceType = Checker;}; null;
   Some {Player = Black; PieceType = Checker;}; null;
   Some {Player = Black; PieceType = Checker;}]

You can also get a single square:

> square { Row = 2; Column = 1 } defaultBoard;;
val it : Piece option = Some {Player = Black; PieceType = Checker;}

As far as I can tell, you could convert all the extension methods in FSharpExtensions to let-bound functions as well, but I think I'll conclude my review here.

C# interop

My general preference, if I need to expose an F# library to C#, is to write the F# as idiomatically as possible. This means that I'll make no concessions to C# when writing my implementation. Once I know what my F# implementation looks like, I can always slap an object-oriented Facade over the F# code, if necessary. Some functional types, like records and sequences, are easily consumable from C#, since they're simply immutable classes and IEnumerable<T> sequences, but other types, like discriminated unions, require more translation in order to look object-oriented.
{ "domain": "codereview.stackexchange", "id": 23426, "tags": "game, functional-programming, f#, checkers-draughts" }
Simple way to duct a PVC pipe through a stainless steel sheet cover
Question: I need to duct a PVC pipe (DN80) through a stainless steel sheet metal covering. The whole thing needs to be watertight against stormwater. There are tons of products for ducting pipes through thick walls, but I'm at a loss on how to do that here. The cover is horizontal, checkered on the upper side. Ideas I've considered:

- welding a short stainless steel pipe stub onto the sheet and using a ring seal
- cutting a hole with the ID of the pipe in the steel sheet and flanging the pipe through the sheet

Both of these appear overblown to me.

Answer: The way this would typically be achieved is with an off-the-shelf "Tank Connector". This essentially replaces the welding operation in your first idea. Simply cut the appropriately sized hole, and install the connector. There are face seals on both sides of the sheet metal, which are clamped tight. I've shown a brass connector, since I thought it was the clearest image - but needless to say, these are available in PVC, Stainless Steel, and with any number of diameters or Male/Female/Etc. configurations, according to your exact needs.
{ "domain": "engineering.stackexchange", "id": 2210, "tags": "civil-engineering, pipelines" }
Does classification of a balanced data-set lead to any problem?
Question: So I came across a bioinformatics paper, where I found a line which says:

"One potential problem with using a training set with equal numbers of positive and negative examples in cross-validation is that it can artificially inflate performance estimates because the number of false-positive classifications is proportional to the number of examples classified. So applying these methods to all proteins in an organism may result in a large number of false-positive identifications."

I am unable to understand how classification of a balanced dataset is a problem. Can someone please explain this to me?

Answer: Actually, I guess it highly depends on the real data-set and its distribution. What I think the paper is referring to is that when the class distribution varies, your model won't work well, because the distribution it was trained on has changed. In cases like disease prediction, where the prevalence of each class varies from place to place, a model that is trained in the U.S. won't work in African countries at all. The reason is that the distribution of classes has changed. So in cases where the negative and positive classes are usually not balanced in practice, balancing them introduces this problem of distribution change. On these occasions, people usually use the real, unbalanced data-set and use the F1 score for evaluation.
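The scaling argument in the quoted passage can be made concrete with a small back-of-the-envelope calculation. The numbers below are made up for illustration (they are not from the paper): even a classifier with a good false-positive *rate* produces many false positives, and hence poor precision, when applied to a genome-scale set where negatives vastly outnumber positives.

```python
# Hypothetical numbers: a classifier tuned on a balanced set, with a
# 5% false-positive rate and 80% recall, applied to two test sets.
fpr, recall = 0.05, 0.80

def confusion_counts(n_pos, n_neg):
    tp = recall * n_pos          # expected true positives
    fp = fpr * n_neg             # false positives grow with the negatives
    precision = tp / (tp + fp)
    return fp, precision

# Balanced cross-validation set: 500 positives, 500 negatives.
fp_bal, prec_bal = confusion_counts(500, 500)

# Whole-organism screen: 500 true positives among 50,000 proteins.
fp_org, prec_org = confusion_counts(500, 49_500)

print(f"balanced set  : {fp_bal:.0f} false positives, precision {prec_bal:.2f}")
print(f"whole organism: {fp_org:.0f} false positives, precision {prec_org:.2f}")
```

The same false-positive rate that looks harmless in balanced cross-validation (25 false positives, precision ~0.94) yields thousands of false positives and precision ~0.14 at organism scale, which is exactly the inflation the paper warns about.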
{ "domain": "datascience.stackexchange", "id": 2844, "tags": "machine-learning, deep-learning, class-imbalance, distribution, bayes-error" }
Classical and quantum correlations in bipartite system
Question: I would like to know how to answer the following questions: Are there classical/quantum correlations in a given bipartite pure/mixed state? I have gathered several definitions. Some of them (it seems) lead to counterintuitive conclusions. I have marked them from 1 to 4. I would be glad if someone could point out mistakes (if there are any) in my logic.

$\underline{\text{Classical correlations}}$ are correlations which can be created by LOCC (local operations and classical communication). Do local operators acting on a bipartite Hilbert space have the following form: $U_{A} \otimes I_{B} \text{ and } I_{A} \otimes U_{B} $? If so, then the locality of these operators has nothing to do with spatial locality. Is that correct?

$\underline{\text{Quantum correlations}}$ are correlations which cannot be created in the aforementioned way. Would that mean that quantum correlations will be created by nonlocal(?) operators of the form $U_{A} \otimes U_{B}$? Is there any other way to create quantum correlations? Can classical correlations be created by the action of this operator on a given state as a byproduct (at least in the case of a mixed state)?

I have heard that entanglement is not the only form of quantum correlations. Yet since at the moment I am not interested in exotic states but rather in general concepts, I will ignore this knowledge. If this information can't be neglected, please let me know.

Here are some statements which seem to be correct yet counterintuitive:

$\underline{\text{Pure states:}}$ No pure (bipartite) state has classical correlations between its subsystems. Proof: If we start from the direct product state $|A\rangle\otimes |B\rangle$, no local operator will be able to create any correlations between subsystems. Since classical communication will lead to the creation of a mixed state, we are out of options. The Schmidt decomposition tells us whether a given pure state is entangled (has quantum correlations) or separable.
$\underline{\text{Mixed states:}}$ Every mixed bipartite state has classical correlations between its subsystems. Proof: Any mixed state can be written as a mixture of pure states. As I understand it, this means that any density matrix can be diagonalized. The diagonalized matrix will have its eigenvalues on the diagonal. Due to the properties of a density matrix, the eigenvalues satisfy $\lambda_{i} \in [0,1]$ and their sum always equals 1. Hence these eigenvalues may be interpreted as probabilities corresponding to some pure states in a mixture. The presence of at least two nonzero eigenvalues would indicate the presence of classical correlations between subsystems. Separability tells us whether a given mixed state has entanglement (i.e. quantum correlations) or not. For a bipartite system one can use the so-called Peres-Horodecki criterion.

Answer: Let's go through your list.

$\underline{\text{Classical correlations}}$ are correlations which can be created by LOCC (local operations and classical communication). Yes, that is correct.

Do local operators acting on a bipartite Hilbert space have the following form: $U_{A} \otimes I_{B} \text{ and } I_{A} \otimes U_{B} $? Yes, that is correct, though this does not take into account classical communication, so the LOCC set is bigger than that. A full expression is rather clunky (since it needs to account for an arbitrary number of round trips for the classical communication, with a local unitary and projective measurement at each end), but there are good details on Wikipedia if you want them.

If so, then the locality of these operators has nothing to do with spatial locality. Is that correct? It doesn't need to have anything to do with spatial locality. You normally enforce this by stipulating that the A and B subsystems are at remote locations and that their measurements occur sufficiently fast that they're within spacelike-separated regions of spacetime. If you don't enforce that, then 'locality' just becomes a marker of the different tensor factors of the state space.
$\underline{\text{Quantum correlations}}$ are correlations which cannot be created in the aforementioned way. Generally speaking, yes.

Would that mean that quantum correlations will be created by nonlocal(?) operators of the form $U_{A} \otimes U_{B}$? Yes.

Is there any other way to create quantum correlations? It depends on what you count as "ways". There's a bunch of quantum channels that create quantum correlations which are not of the form $U_{A} \otimes U_{B}$; as an example, take a unitary channel of the form $U_{A} \otimes U_{B}$ and follow it up with some limited decoherence. Does that count? Generally speaking, if you have a given quantum channel and you know that it creates quantum correlations, you'll typically be able to decompose it in such a fashion, but that decomposition does not necessarily speak to what is "really going on" inside the system (and you will typically not have any access to that). I would therefore say that the answer to the question is morally yes, but that it's too ill-defined to say anything concrete.

Can classical correlations be created by the action of this operator on a given state as a byproduct (at least in the case of a mixed state)? Frankly, this is completely unclear to me.

I have heard that entanglement is not the only form of quantum correlations. This is correct. You probably want to look at quantum discord as the core example of such correlations.

Yet since at the moment I am not interested in exotic states but rather in general concepts, I will ignore this knowledge. If this information can't be neglected, please let me know. That depends on what you mean by "can't be neglected". There are many important conceptual settings in which the information can't be neglected. There are many important conceptual settings in which it can. Some examples of the former are places where quantum contextuality is an important consideration, with the Kochen-Specker theorem taking a role similar to that of Bell's theorem in the study of entanglement.
Some examples of the latter are the study of entanglement. Which side you want to listen to is a personal choice. $\underline{\text{Pure states:}}$ No pure (bipartite) state has classical correlations between its subsystems. Proof: If we start from direct product state $|A\rangle\otimes |B\rangle$ no local operator will be able to create any correlations between subsystems. Since classical communication will lead to creation of mixed state we are out of options. Yes, that is correct. You may be interested in purity as a quantum resource theory. $\quad \cdot$ Schmidt decomposition tells us whether given pure state is entangled (has quantum correlations) or separable. That is correct. $\underline{\text{Mixed states:}}$ Every mixed bipartite state has classical correlations between its subsystems. No, this is incorrect. To get a counter-example, simply take any two mixed density matrices $\rho_A$ and $\rho_B$, and set $\rho = \rho_A\otimes \rho_B$ as a mixed separable state with no classical correlations between its subsystems. Proof: Any mixed state can be written as a mixture of pure states. As I understand it this means that any density matrix can be diagonalized. Diagonalized matrix will have eigenvalues on its diagonal. Due to properties of density matrix, eigenvalues $\lambda_{i} \in [0,1]$ and their sum always equals 1. Hence these eigenvalues may be interpreted as probabilities corresponding to some pure states in a mixture. So far so true, but Presence of at least two nonzero eigenvalues would indicate presence of classical correlations between subsystems. This doesn't follow from anything. Study the counter-example above to see why this fails. $\quad \cdot$ Separability tells us whether given mixed state has entanglement (i.e. quantum correlations) or not. Entangled states are, by definition, those states that are not separable. So yes. But it tells you very little. For bipartite system one can use so called Peres-Horodecki criterion. 
That is one possible criterion, but it is not infallible at detecting entanglement. In other words, you get the implication $$ \rho\text{ is separable} \implies \text{its partial transpose is positive semidefinite} $$ and its contrapositive $$ \text{the partial transpose of }\rho\text{ is indefinite} \implies \rho\text{ is entangled}, $$ but you don't get the converse, which would be more useful, i.e. it's possible that $\rho$ is entangled even though its partial transpose is still positive semidefinite (such PPT-entangled states exist once the dimensions exceed $2\times 3$).
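As a small numerical illustration of the criterion (my own sketch, not part of the original answer), consider the two-qubit Werner state $\rho = p\,|\Psi^-\rangle\langle\Psi^-| + (1-p)\,I/4$, whose partial transpose acquires a negative eigenvalue exactly when $p > 1/3$:

```python
import numpy as np

def werner(p):
    """Two-qubit Werner state: p * |psi-><psi-| + (1 - p) * I/4."""
    psi = np.array([0, 1, -1, 0]) / np.sqrt(2)   # singlet (|01> - |10>)/sqrt(2)
    return p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4

def partial_transpose(rho):
    """Transpose subsystem B of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # r[a, b, a', b'] = rho[2a+b, 2a'+b']
    return r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap b <-> b'

min_eigs = {}
for p in (0.2, 0.5, 0.9):
    min_eigs[p] = np.linalg.eigvalsh(partial_transpose(werner(p))).min()
    verdict = "entangled" if min_eigs[p] < 0 else "PPT (criterion detects nothing)"
    print(f"p = {p}: smallest PT eigenvalue = {min_eigs[p]:+.4f} -> {verdict}")
```

The smallest partial-transpose eigenvalue is $(1-3p)/4$, so the criterion flags entanglement for $p = 0.5$ and $p = 0.9$ but not for $p = 0.2$ (where the state is in fact separable).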
{ "domain": "physics.stackexchange", "id": 52038, "tags": "quantum-mechanics, quantum-information, quantum-entanglement, density-operator" }
When did the first cold dark matter halos begin to originate?
Question: I know that these dark matter halos should have been created in the early universe, because during the formation of galaxies the baryonic matter was too hot to form gravitationally self-bound objects and should have required the CDM to form these structures. Did they form in an era dominated by non-relativistic matter or by radiation? Thanks for your response.

Answer: The first dark matter halos typically originated in the redshift range 30-70, at a time of 30-100 million years. This is based on assuming that the initial variations in the density of the universe at small scales, for which we have no observational data, are comparable to those at large scales, for which we have ample observational data (e.g. the cosmic microwave background and large-scale structure). The particular time is solely a function of how long it took density fluctuations to grow in amplitude from their minuscule initial values of about one part in $10^{4}$ up to the $\mathcal{O}(1)$ values required for regions of excess density to collapse into halos. More specifically, I did some quick calculations using some CLASS-generated power spectra I had lying around. Suppose the minimum halo mass is around an Earth mass (a typical assumption for cold dark matter, but the precise value doesn't affect the result much, as long as it's small). Then $1\sigma$ density excesses reach the "spherical collapse" threshold around redshift 15. That means that under the assumption of spherical symmetry, that is the time when they would collapse to form halos. Similarly, $2\sigma$ excesses collapse around redshift 33, $3\sigma$ excesses around redshift 52, and $4\sigma$ excesses around redshift 72. Note that the story is different for halos large enough to form galaxies inside them.
If I repeat the calculation for a mass scale of (let's say) $10^7$ solar masses, the result is that $1\sigma$ density excesses at this mass scale collapse around redshift 4, $2\sigma$ around redshift 10, $3\sigma$ around redshift 15, and $4\sigma$ around redshift 20. Also, different assumptions about the amplitudes of initial density variations at small scales can change the formation time of the smallest halos. These initial fluctuations are believed to have been seeded during inflation, and inflationary physics are pretty poorly constrained. If you make the initial variations more extreme, you can make your cosmology form dark matter halos as early as you want. Or can you? During the radiation epoch (before a redshift of about 3400 or a time of about 52000 years), there are no peculiar gravitational forces. The radiation's gravitational influence dominates, and its pressure keeps it homogeneous. You can't really make a gravitationally bound dark matter halo where radiation dominates.
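The "solely a function of how long it took density fluctuations to grow" statement can be turned into a toy estimate. In the matter era, linear fluctuations grow roughly as $D(z) \propto 1/(1+z)$, and a region collapses when its linearly extrapolated overdensity reaches the spherical-collapse threshold $\delta_c \approx 1.686$. The sketch below is my own illustration, not the answer's CLASS calculation: the value of $\sigma$ today at the tiny mass scale is an assumed number chosen to roughly reproduce the quoted redshifts, and the pure $1/(1+z)$ growth is an approximation that drifts at the highest redshifts.

```python
# Toy collapse-redshift estimate for n-sigma density peaks.
DELTA_C = 1.686        # spherical-collapse overdensity threshold
SIGMA_TODAY = 28.0     # ASSUMED rms fluctuation at z = 0 for the minimum halo mass

def collapse_redshift(n_sigma):
    # Collapse when n_sigma * SIGMA_TODAY * D(z) = DELTA_C, with D(z) = 1/(1+z)
    return n_sigma * SIGMA_TODAY / DELTA_C - 1

for n in (1, 2, 3, 4):
    print(f"{n}-sigma peaks collapse around z ~ {collapse_redshift(n):.0f}")
```

With this assumed normalization the toy model gives $z \sim 16, 32, 49, 65$ for 1-4$\sigma$ peaks, in the same ballpark as the 15/33/52/72 quoted above; the growing discrepancy at high $n$ reflects the crudeness of the $D(z) = 1/(1+z)$ approximation.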
{ "domain": "astronomy.stackexchange", "id": 6765, "tags": "cosmology, universe, dark-matter" }
$\mathcal N=2$ SUSY representation
Question: I want to understand why, in the $\mathcal N=2$ SUSY representation (from the Wess & Bagger book on SUSY and SUGRA, the second table on page 14): $$Q_\alpha^A Q^B_\beta |1\rangle=(1)^4 \oplus 0 \oplus 2,\quad \text{with } A,B=1,2 \text{ and $|1\rangle$ is the Clifford vacuum}.$$ My (wrong - it misses a spin-1) method is: First symmetrize the two spin indices and antisymmetrize the A and B indices (since the generators Q anticommute), which gives $(1)^{(AB)}=1$; then antisymmetrize the two spin indices and symmetrize A and B, which gives $(0)^{\{AB\}}=0^3$. Now I have $$(1\oplus 0^3)\otimes 1= 1^3\oplus 2\oplus 0.$$ (I am not sure about the $1^3$ part, which I think must instead be $1^4$, but I do not know why!) Answer: Not sure where you're missing the 1 from, but $$0 \otimes 1=1$$ and $$ 1\otimes 1 = 0 \oplus 1 \oplus 2$$ give the result in the book.
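The bookkeeping in the answer can also be checked mechanically: apply the Clebsch-Gordan rule $j_1 \otimes j_2 = |j_1-j_2| \oplus \dots \oplus (j_1+j_2)$ to each summand and count multiplicities. This little script (my own sketch, doing the counting in code rather than by hand) confirms that $(1 \oplus 0^3) \otimes 1 = 1^4 \oplus 2 \oplus 0$, with matching total dimension $6 \times 3 = 18$:

```python
from collections import Counter
from fractions import Fraction

def clebsch_gordan(j1, j2):
    """Spins appearing in j1 (x) j2: |j1 - j2|, ..., j1 + j2 in integer steps."""
    j1, j2 = Fraction(j1), Fraction(j2)
    j, out = abs(j1 - j2), []
    while j <= j1 + j2:
        out.append(j)
        j += 1
    return out

def tensor(reps, j2):
    """Tensor a multiset of spins (Counter) with spin j2, collecting multiplicities."""
    result = Counter()
    for j1, mult in reps.items():
        for j in clebsch_gordan(j1, j2):
            result[j] += mult
    return result

# (1) from the symmetric spin indices, plus three (0)'s, tensored with spin 1:
lhs = Counter({Fraction(1): 1, Fraction(0): 3})
decomposition = tensor(lhs, 1)
print(decomposition)   # spin -> multiplicity

# Dimension check: each spin-j rep contributes (2j + 1) states.
dim = sum((2 * j + 1) * m for j, m in decomposition.items())
print("total dimension:", dim)
```

The three $0$'s each give one spin-1 under $0 \otimes 1 = 1$, and $1 \otimes 1 = 0 \oplus 1 \oplus 2$ supplies the fourth, which is exactly the $1^4$ the questioner was missing.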
{ "domain": "physics.stackexchange", "id": 36308, "tags": "special-relativity, quantum-spin, supersymmetry, tensor-calculus, representation-theory" }
A couple of questions about making an electric hobby car from scratch
Question: So I'm a high school student doing some competition (a wallet-sized one, unfortunately) and I'm supposed to make an electric car. Think an RC car (about that size), without the remote control. I want to make one from scratch, so I decided on using an Arduino for a microcontroller. Unfortunately, I don't really have an adviser that's particularly knowledgeable about this stuff, so I decided to ask here. (Brace yourself for quite a few possibly dumb and vague questions. I do understand this question will likely be edited/closed for being too vague, but if, by any chance, someone has answers, I'd also like explanations -- mostly because I want to learn something.)

I've decided on using an aluminum chassis and soft wheels, since the competition wants a car that can go as fast as possible. I've chosen soft wheels because that should give me higher friction, and an aluminum chassis because I want to reduce weight. Now, here comes the first problem. The competition specifies a maximum of a 9V battery to power everything, including the Arduino and the motors. Now, of course, we only need to go about 10 meters (I just want sub-3 s times). My original design involved having 4 separate DC brushed/brushless motors, and then connecting each DC motor directly to the wheel. The DC motors could be held on with brackets, and then connected to an Arduino connected to a MOSFET, at which point I could then use a motor encoder to see how far to go.

I now have a couple of questions:

- Should I try having a 9V to 12V converter? Or a 9V to 24V converter? That might cut the current significantly, but I'd also be able to use a much more powerful motor. The problem I have with this is that it might be adding a lot of potential weight -- as I understand it, I can't convert directly from DC 9V to something else.
- I've decided on having a rear wheel drive rather than a 4WD -- but is it worth getting a gearbox and an axle and just having one motor? Should I use a gearbox at all?
- Or should I try to change the speed based on the size of the wheel I use? Would I try to maximize RPM or torque by getting a geared-down brushless?
- If I have a 9V battery, can I have 2 9V motors in parallel? Or is that unsafe because 9V is only nominal?

If anyone has tips on how to improve the question, I'd be glad to incorporate them into my edits.

Some info, by the way:

- looking for sub-3 second times
- using BaneBots 2-7/8 inch diameter wheels
- hard floor
- will need to back up after 9 m
- only going straight, doesn't need to be able to turn
- thinking of a PID loop with a physical rotary quadrature encoder if one axle. Thoughts?
- the car will be ~1 kg and I won't be carrying anything else

Answer: Answers to your specific questions:

No, voltage converters won't help; at best they will waste 10% of your power. Think about this: they involve transformers, two different windings on a magnetic core to translate one voltage to another. That might help you understand that any given motor (which is just a winding on a magnetic core to translate voltage into motion) can be tuned to run off any voltage by changing the winding. You'll see the same RC car motors with 13, 15, 17 turns to tune performance. Instead, learn the relationship between voltage and speed (Kv, rpm/volt; the relationship is simply Speed = Kv * voltage) and current and torque (Kt, Newton-metres/amp). A high speed motor (high Kv) gives little torque, and a low speed motor gives higher torque. So with direct driven wheels you want a LOW speed motor.

You have a mass (1 kg), a distance (10 m), and a time (3 s). If you've done basic high school physics you know how to get speed and acceleration from distance and time. Work out 2 cases: linear acceleration from 0 to twice the average speed, and covering the distance at average speed. I'll recommend you stick to metric; it makes some of the math easier. If your motor Kv is specified in rpm/volt, you have to convert that to radians/second/volt.
In metric units, the torque relationship is Kt = 1/Kv, so torque is Kt * current, or 1/Kv * current. If you think about electric power, P = V * A, and mechanical power = Nm/second = torque * rotation rate in radians/second, and the law of conservation of energy, you'll see why Kt = 1/Kv and understand how (high speed, low torque) and (low speed, high torque) can provide the same power.

Now acceleration and mass give you force, and force * wheel radius gives you the torque required for acceleration. Also multiply mass * gravity (9.8 N/kg) * coefficient of friction to get the force required to push the car at constant speed, and convert that to a torque. (You don't know the coefficient of friction: try 0.2 for a perfectly smooth floor and 0.6 for carpet.) Now you know the torque required just for acceleration, and for travelling across the floor. The actual torque you need is the sum of these. (Adding a safety factor like another 25% isn't a bad idea.)

Given the speeds you need, in m/s, and a specific wheel radius, divide speed by radius to get rotational rate in radians/second. (Convert that to RPM if you want.) Now that you know the speed (rad/sec or rpm) and torque (Nm or Newton metres) you can go back to selecting a motor. Can you find a motor with the right Kv, where 9V * Kv is anywhere close to your desired rotation speed? What current do you need to achieve the torque? If it's over 0.5A you'd better forget that little 9V battery, and that's being optimistic... I'm guessing not, and that's where you need a gearbox to translate from (high speed, low torque) to (low speed, high torque). So if your motor speeds are 10x too high, use a 10:1 gearbox, multiply your required speed by 10, and divide the required torque by 10. Can you meet your speed and current requirements now? All this may look daunting but each bit is relatively simple maths. If there are specific sticking points, think about them for a bit, and ask if you need to.
Having laid that groundwork, the rest of the questions should be easy... If you have 4 motors, each delivers 1/4 the torque and consumes 1/4 the current, so there's not much to choose between 1 motor and 4 in overall efficiency, but if one small motor can't supply the torque you need, 2 or 4 might work. The huge chunk of maths above will tell you if you need a gearbox. You probably do. You don't want to maximise RPM OR torque, you want just enough of both to cover the requirements and a bit (50% or so) to spare if possible. (I haven't mentioned inefficiencies like friction that eat away at performance little by little. This is long enough without that!). Brushless motors aren't a whole lot better at delivering torque, where they win is by running much faster, so you'd need to gear them down more. Probably not worth it here. You can run motors in parallel, but remember their currents add up, and your little battery can't supply much current. (Good batteries have datasheets which show how much current they can supply, for how long. In this case, 0.5A for much less than an hour, down to 4.8V)
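To make the recipe concrete, here is the arithmetic above carried out in code for the numbers in the question (1 kg car, 10 m in 3 s, 2-7/8 in BaneBots wheels). The 0.2 friction coefficient and the 25% margin are the assumed values suggested in the answer, not measurements:

```python
import math

m = 1.0                     # car mass, kg
d, t = 10.0, 3.0            # distance, m; target time, s
mu = 0.2                    # assumed coefficient of friction (smooth floor)
r = (2 + 7/8) * 0.0254 / 2  # wheel radius, m (2-7/8 in diameter)

# Case: constant acceleration from 0 up to twice the average speed
v_avg = d / t               # average speed, m/s
v_max = 2 * v_avg           # peak speed, m/s
a = v_max / t               # required acceleration, m/s^2

f_accel = m * a                          # force for acceleration, N
f_roll = m * 9.8 * mu                    # force to keep moving on the floor, N
torque = (f_accel + f_roll) * r * 1.25   # wheel torque incl. 25% margin, Nm

omega = v_max / r                        # peak wheel speed, rad/s
rpm = omega * 60 / (2 * math.pi)

print(f"wheel radius : {r * 1000:.1f} mm")
print(f"peak speed   : {v_max:.2f} m/s -> {rpm:.0f} rpm at the wheel")
print(f"wheel torque : {torque:.3f} Nm (with margin)")
```

This comes out around 0.19 Nm at roughly 1700 rpm at the wheel, which is the (speed, torque) pair to compare against candidate motors via their Kv and Kt, and which makes the case for a gearbox as described above.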
{ "domain": "engineering.stackexchange", "id": 644, "tags": "mechanical-engineering, motors, electric-vehicles" }
How to do the direct evaluation by hand in QAOA algorithm
Question: In the QAOA algorithm, for $p=1$ layer of gates and at most degree $d=3$, the expectation value can be calculated by hand. I think I can convince myself of the idea of commutativity. In this qiskit example, the author evaluated $$f_A(\gamma,\beta) = \frac{1}{2}\left(1 - \langle +^3|U_{21}(\gamma)U_{02}(\gamma)U_{01}(\gamma)X_{0}(\beta)X_{1}(\beta)\;Z_0Z_1\; X^\dagger_{1}(\beta)X^\dagger_{0}(\beta)U^\dagger_{01}(\gamma)U^\dagger_{02}(\gamma)U^\dagger_{12}(\gamma) | +^3 \rangle \right)$$ as $$f_A(\gamma,\beta) = \frac{1}{2}\left(\sin(4\gamma)\sin(4\beta) + \sin^2(2\beta)\sin^2(2\gamma)\right).$$ I believe I can write these unitaries explicitly as tensor products of $U1$ and $U3$ gates. However, when we evaluate this by hand, the states $|+^3\rangle$ and $|+^5\rangle$ are, as far as I can tell, an 8-dimensional and a 32-dimensional vector respectively. For the 8-dimensional case, these gates are two-qubit gates. All I can think of is that I need to expand these two-qubit gates into three-qubit gates by tensor product with an identity operator. Even so, I still don't know how to handle gates like $U_{02}$. How can I sandwich an identity operator into a two-qubit operator? My question is how to evaluate this explicitly by hand. I found a paper, Quantum Algorithms for Fixed Qubit Architectures, and it provided some operator identities which I think can help in understanding how this evaluation works. But I still have no idea why these identities hold. I copied the original text here:

5.2 Optimal trial state parameters

In this example we consider the case for $p = 1$, i.e. only one layer of gates. The expectation value $F_1(\gamma,\beta) = \langle \psi_1(\gamma,\beta)|H|\psi_1(\gamma,\beta) \rangle$ can be calculated analytically for this simple setting. Let us discuss the steps explicitly for the Hamiltonian $H = \sum_{(j,k) \in E} \frac{1}{2}\left(1 - Z_j Z_k\right)$.
Due to the linearity of the expectation value, we can compute the expectation value for the edges individually: $$f_{(i,k)}(\gamma,\beta) = \langle \psi_1(\gamma,\beta)|\;\frac{1}{2}\left(1 - Z_i Z_k\right)\;|\psi_1(\gamma,\beta)\rangle. $$ For the butterfly graph as plotted above, we observe that there are only two kinds of edges, $A = \{(0,1),(3,4)\}$ and $B = \{(0,2),(1,2),(2,3),(2,4)\}$. The edges in $A$ only have two neighboring edges, while the edges in $B$ have four. You can convince yourself that we only need to compute the expectation of a single edge in each set, since the other expectation values will be the same. This means that we can obtain $F_1(\gamma,\beta) = 2 f_A(\gamma,\beta) + 4f_B(\gamma,\beta)$ by computing only two expectation values. Note that, following the argument as outlined in section 4.2.2, all the gates that do not intersect with the Pauli operator $Z_0Z_1$ or $Z_0Z_2$ commute and cancel out, so that we only need to compute $$f_A(\gamma,\beta) = \frac{1}{2}\left(1 - \langle +^3|U_{21}(\gamma)U_{02}(\gamma)U_{01}(\gamma)X_{0}(\beta)X_{1}(\beta)\;Z_0Z_1\; X^\dagger_{1}(\beta)X^\dagger_{0}(\beta)U^\dagger_{01}(\gamma)U^\dagger_{02}(\gamma)U^\dagger_{12}(\gamma) | +^3 \rangle \right)$$ and $$f_B(\gamma,\beta) = \frac{1}{2}\left(1 - \langle +^5|U_{21}(\gamma)U_{24}(\gamma)U_{23}(\gamma)U_{01}(\gamma)U_{02}(\gamma)X_{0}(\beta)X_{2}(\beta)\;Z_0Z_2\; X^\dagger_{0}(\beta)X^\dagger_{2}(\beta)U^\dagger_{02}(\gamma)U^\dagger_{01}(\gamma)U^\dagger_{12}(\gamma)U^\dagger_{23}(\gamma)U^\dagger_{24}(\gamma) | +^5 \rangle \right)$$ How complex these expectation values become in general depends only on the degree of the graph we are considering, and is independent of the size of the full graph if the degree is bounded.
A direct evaluation of this expression with $U_{k,l}(\gamma) = \exp\frac{i\gamma}{2}\left(1 - Z_kZ_l\right)$ and $X_k(\beta) = \exp(i\beta X_k)$ yields $$f_A(\gamma,\beta) = \frac{1}{2}\left(\sin(4\gamma)\sin(4\beta) + \sin^2(2\beta)\sin^2(2\gamma)\right)$$ and $$f_B(\gamma,\beta) = \frac{1}{2}\left(1 - \sin^2(2\beta)\sin^2(2\gamma)\cos^2(4\gamma) - \frac{1}{4}\sin(4\beta)\sin(4\gamma)(1+\cos^2(4\gamma))\right) $$ These results can now be combined as described above, and the expectation value is therefore given by $$ F_1(\gamma,\beta) = 3 - \left(\sin^2(2\beta)\sin^2(2\gamma)- \frac{1}{2}\sin(4\beta)\sin(4\gamma)\right)\left(1 + \cos^2(4\gamma)\right).$$ We plot the function $F_1(\gamma,\beta)$ and use a simple grid search to find the parameters $(\gamma^*,\beta^*)$ that maximize the expectation value. Answer: After several days of thinking and researching, I decided to answer my own question. N.B. The tensor product symbol $\otimes$ is omitted when there is no risk of confusion, especially when the indices differ. In symbols, $$ A_iB_j = A_i \otimes B_j \quad (i \neq j). $$ First, for the case $$e^{i \beta B_e} Z_i Z_j e^{-i \beta B_e},$$ with $B_e=\sum_{k \in e}{X_k}$, we only need to consider the qubits $i$ and $j$ that the operator acts on, since the $X$ rotations on all other qubits commute with $Z_iZ_j$ and cancel. As $X_i$ and $X_j$ act on different qubits, $$ e^{i{(X_i+X_j)}}=e^{i((X_i\otimes I_j)+(I_i \otimes X_j))}=e^{iX_i}e^{iX_j}. $$ Now, $$ \begin{align} e^{-i \beta X} & = e^{-i \beta \begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix}} \\ & = e^{-i \beta \begin{bmatrix}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\end{bmatrix} \begin{bmatrix}1 & 0\\ 0 & -1\end{bmatrix} \begin{bmatrix}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\end{bmatrix}} \\ & = \begin{bmatrix}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\end{bmatrix} \begin{bmatrix}e^{-i \beta} & 0\\ 0 & e^{i \beta}\end{bmatrix} \begin{bmatrix}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\end{bmatrix} \\ & = \begin{bmatrix}\cos(\beta) & -i\sin(\beta)\\ -i\sin(\beta) & \cos(\beta)\end{bmatrix} \\ & = I\cos(\beta)-iX\sin(\beta). \end{align} $$ Thus, $$ \begin{align} e^{i \beta X} & = I\cos(\beta)+iX\sin(\beta). \end{align} $$ Hence, $$ \begin{align} e^{i \beta X} Z e^{-i \beta X} & = (I\cos(\beta)+iX\sin(\beta))Z(I\cos(\beta)-iX\sin(\beta)) \\ & = Z \cos(2\beta) + Y\sin(2\beta). \end{align} $$ For the transverse-field Ising Hamiltonian, we only consider $$G_{e=(i,k)}=Z_iZ_k.$$ For $e^{i \gamma Z_iZ_k}$, it can be evaluated as $$ \begin{align} e^{i \gamma Z_iZ_k} &= e^{i \gamma \begin{bmatrix}1&&&\\&-1&&\\&&-1&\\&&&1\end{bmatrix}} \\ & = \begin{bmatrix}e^{i\gamma}&&&\\&e^{-i\gamma}&&\\&&e^{-i\gamma}&\\&&&e^{i\gamma}\end{bmatrix} \\ & = \begin{bmatrix}\cos\gamma+i\sin\gamma &&&\\&\cos\gamma-i\sin\gamma&&\\&&\cos\gamma-i\sin\gamma&\\&&&\cos\gamma+i\sin\gamma\end{bmatrix} \\ & = \cos\gamma\begin{bmatrix}1&&&\\&1&&\\&&1&\\&&&1\end{bmatrix}+i\sin\gamma\begin{bmatrix}1&&&\\&-1&&\\&&-1&\\&&&1\end{bmatrix} \\ & = I_i I_k \cos\gamma + i Z_iZ_k \sin\gamma. \end{align} $$ We can calculate two-qubit operations factor by factor, such that $$ Y_i (Z_i Z_k) = (Y_i \otimes I_k)(Z_i \otimes Z_k)=(Y_i Z_i) \otimes (I_kZ_k) = i X_i \otimes Z_k = i X_i Z_k. $$ Numerically, if you cannot convince yourself, $$ Y_i \otimes I_k = \begin{bmatrix} &&-i&\\&&&-i\\i&&&\\&i&& \end{bmatrix} $$ and $$ Z_i \otimes Z_k = \begin{bmatrix} 1&&&\\&-1&&\\&&-1&\\&&& 1\end{bmatrix} $$ and $$ (Y_i \otimes I_k)(Z_i \otimes Z_k) = \begin{bmatrix} &&i&\\&&&-i\\i&&&\\&-i&&\end{bmatrix} = i(X_i \otimes Z_k). $$ Now, for a more specific example, $$ \begin{align} e^{-i \gamma Z_iZ_k} Y_i e^{i \gamma Z_iZ_k} & = (I_i I_k \cos\gamma - i Z_iZ_k \sin\gamma) (Y_i \otimes I_k) (I_i I_k \cos\gamma + i Z_iZ_k \sin\gamma) \\ & = Y_i \cos^2\gamma - Y_i\sin^2\gamma - X_i Z_k\sin\gamma\cos\gamma - X_i Z_k \sin\gamma\cos\gamma \\ & = Y_i \cos 2\gamma - X_iZ_k\sin 2\gamma. \end{align} $$ Q.E.D.
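Since the question also asks how to handle gates like $U_{02}$ numerically, here is a small NumPy sketch (my own illustration, not part of the qiskit tutorial) that builds $Z_0Z_2$ on a 3-qubit register by "sandwiching" an identity with Kronecker products, and checks the conjugation identities derived above. Qubit 0 is taken as the leftmost tensor factor.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expi(t, herm):
    """e^{i t H} for a Hermitian matrix H, via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(herm)
    return vecs @ np.diag(np.exp(1j * t * vals)) @ vecs.conj().T

# "Sandwiching" the identity: Z_0 Z_2 on three qubits is Z (x) I (x) Z.
Z0Z2 = np.kron(Z, np.kron(I2, Z))

beta, gamma = 0.3, 0.7

# Since (Z_0 Z_2)^2 = I:  e^{i g Z_0Z_2} = I cos(g) + i Z_0Z_2 sin(g)
assert np.allclose(expi(gamma, Z0Z2),
                   np.eye(8) * np.cos(gamma) + 1j * np.sin(gamma) * Z0Z2)

# e^{i b X} Z e^{-i b X} = Z cos(2b) + Y sin(2b)
assert np.allclose(expi(beta, X) @ Z @ expi(-beta, X),
                   Z * np.cos(2 * beta) + Y * np.sin(2 * beta))

# e^{-i g Z(x)Z} (Y(x)I) e^{i g Z(x)Z} = (Y(x)I) cos(2g) - (X(x)Z) sin(2g)
ZZ = np.kron(Z, Z)
assert np.allclose(expi(-gamma, ZZ) @ np.kron(Y, I2) @ expi(gamma, ZZ),
                   np.kron(Y, I2) * np.cos(2 * gamma) - np.kron(X, Z) * np.sin(2 * gamma))

print("identities verified")
```

The same `np.kron` pattern extends any two-qubit gate to the full register, which is all that is needed to evaluate the $f_A$ and $f_B$ expressions as plain 8x8 and 32x32 matrix products.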
{ "domain": "quantumcomputing.stackexchange", "id": 1780, "tags": "qaoa" }
What is the name of the baboons that live in Saudi Arabia
Question: In Saudi Arabia there are a lot of baboons in the mountain areas. I would like to know the name of this specific baboon species so I can read more about them. Answer: These look like Hamadryas Baboons (Papio hamadryas). They also fit the geographical location.
{ "domain": "biology.stackexchange", "id": 996, "tags": "species-identification" }
Why is RobotState not the same in different nodes?
Question: I am attaching a collision object to my robot. The object is displayed fine and reported correctly as the cause of a collision if I move it into a wall in Rviz. I also added some debug messages that make it clear that the RobotState in the move_group node knows about this object. However, I have two other nodes running, which have a move_group_interface and a MoveGroupCommander in C++/Python respectively, and neither of their RobotStates seems to be aware of the attached object. They are both running when the AttachedCollisionObject is published, but their state does not change. I check with this code in the C++ node: robot_state::RobotState state(*group_interface.getCurrentState()); std::vector<const robot_state::AttachedBody*> ab; state.getAttachedBodies(ab); ROS_INFO_STREAM("Current state has " << ab.size() << " attached bodies."); But the result stays 0, even though it should say 1 after attaching the object (a similar debug message does report the correct number in the move_group node). Normally it would not be a problem since the collisions are checked in the move_group node, but I need these attached objects to construct my planning goal. What am I missing? How can I update the RobotState? Thanks in advance. I built MoveIt from source on the kinetic-devel branch and use ROS Kinetic on Ubuntu 16.04. edit: I am using the PlanningSceneInterface to attach the collision object and not publishing a message manually, if that makes a difference. edit2: I may have found the reason for this behavior in the description of the PlanningSceneInterface applyAttachedCollisionObject function: Other PlanningSceneMonitors will NOT receive the update unless they subscribe to move_group's monitored scene I'm not sure how to do that or why that wouldn't be the default behavior of the move_group_interface, but any pointers are welcome.
Originally posted by fvd on ROS Answers with karma: 2180 on 2018-09-04 Post score: 2 Answer: Hi there, I can't say I have a plain answer to your problem because it depends on your specific node setup. Your second edit is right in that the PlanningSceneInterface.apply* functions only affect the move_group node. This is because the functions use the ros service provided by the move_group node. If you instantiate your own PlanningSceneMonitor in a different node, you have to connect it to move_group/monitored_planning_scene to receive the updates. But these are throttled (2 Hz by default I think) and of course updates there are asynchronous. Historically, the only way to update the scene was asynchronously via the add* functions (published messages). There, all nodes/monitors got the updates... or not, because messages were dropped/not connected/... The proper way to go for you might be to get a fresh planning scene from the move_group node right when you need it. You can use the GetPlanningScene service for that, or better, you utilize the PlanningSceneInterface and just add a method that exposes the full possibilities of the service there. Originally posted by v4hn with karma: 2950 on 2018-09-04 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by fvd on 2018-09-04: Thanks for the answer. I already made it a MoveGroupCapability. Is there a reason that move_group_interface does not include AttachedBody objects or even the whole scene? I see that it instantiates a CurrentStateMonitor, but that only tracks TF and JointStates.
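To illustrate the accepted answer's suggestion of fetching a fresh scene on demand, here is a hedged Python sketch using the GetPlanningScene service (the node name is illustrative, the service is usually resolved relative to the move_group namespace, and this of course requires a running move_group):

```python
import rospy
from moveit_msgs.srv import GetPlanningScene, GetPlanningSceneRequest
from moveit_msgs.msg import PlanningSceneComponents

rospy.init_node("attached_body_query")  # illustrative node name
rospy.wait_for_service("get_planning_scene")
get_scene = rospy.ServiceProxy("get_planning_scene", GetPlanningScene)

# Ask move_group only for the attached-body part of the scene.
req = GetPlanningSceneRequest()
req.components.components = PlanningSceneComponents.ROBOT_STATE_ATTACHED_OBJECTS
scene = get_scene(req).scene

# This reflects move_group's current state, not a stale local monitor.
for aco in scene.robot_state.attached_collision_objects:
    rospy.loginfo("attached object: %s", aco.object.id)
```

Querying like this right before constructing the planning goal sidesteps the throttled, asynchronous monitored_planning_scene updates entirely.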
{ "domain": "robotics.stackexchange", "id": 31701, "tags": "ros, moveit, ros-kinetic, move-group-interface, move-group" }
How many joules of energy does America use everyday?
Question: When considering all forms of energy the US uses to power itself, for example fossil fuels, solar, and nuclear energy, how much does America use in a day? I'm looking for an answer in joules and a reference to the data source from which the answer is derived. I'm not a student or a physicist. I'm just a layman who understands the basic concept of energy and would like to know if there is anyone here who can answer this question. I would be happy to clarify the question if it seems unclear, just leave a comment. Answer: Yes, it's an easy find on Google. About 25000 TWh of primary energy in 2013. See Energy in the US and I'm sure there's more data you can find online. $1~\textrm{joule} = 1~\textrm{Ws}$, i.e., one watt-second. You can do the conversion: $25000~\textrm{TWh} = 25000 \times 10^{12} \times 3.6 \times 10^3~\textrm{J} \approx 10^{20}~\textrm J.$ It was relatively constant for a few years according to the wiki article. If you need an exact number for a specific year, just go find it.
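To get the per-day figure the question actually asks for, the unit conversion can be scripted (a sketch; the 25000 TWh/year figure is the approximate value quoted in the answer above):

```python
twh_per_year = 25_000                    # approximate US primary energy use, TWh/year
j_per_year = twh_per_year * 1e12 * 3600  # 1 TWh = 1e12 Wh; 1 Wh = 3600 J
j_per_day = j_per_year / 365

print(f"per year: {j_per_year:.1e} J")   # 9.0e+19 J, i.e. about 1e20 J
print(f"per day:  {j_per_day:.1e} J")    # 2.5e+17 J
```

So the daily figure is on the order of a few times $10^{17}$ joules.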
{ "domain": "physics.stackexchange", "id": 36018, "tags": "energy" }
Is AI entirely a part of Computer Science?
Question: Both AI and Computer Science are sciences. As I understood from Wikipedia, Computer Science is everything that has any relation to computers, and AI is commonly defined as the study of machines that take over prerogatives of humans (creating musical pieces, etc.). But recently, when I was reading, I came across this sentence: "In Computer Science, AI is [...]" So my question really is: is there a part of AI studies that does not belong to Computer Science? Answer: AI is an amalgamation of many fields; Computer Science plays a major role in imparting "Intelligence" to the machine. Following is a quote from the best-selling AI book Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig. Artificial Intelligence (AI) is a big field, and this is a big book. We have tried to explore the full breadth of the field, which encompasses logic, probability, and continuous mathematics; perception, reasoning, learning, and action; and everything from microelectronic devices to robotic planetary explorers. So, the answer to your question is yes, there are other fields that AI depends on, including mathematics (to optimize AI algorithms), electronic components (sensors, microprocessors, etc.), and mechanical actuators (hydraulic, pneumatic, electric, etc.). I highly recommend the book if you are looking for a starting point.
{ "domain": "ai.stackexchange", "id": 284, "tags": "terminology" }
Cannot execute rostopic list -v correctly
Question: Hi, I am a beginner to ROS, please see below information about my environment: ROS version: melodic Platform: Ubuntu 18 Environment Variables (output of env | grep ROS); I source the setup.bash file for my current workspace every time I log in: ROS_ETC_DIR=/opt/ros/melodic/etc/ros ROS_ROOT=/opt/ros/melodic/share/ros ROS_MASTER_URI=http://localhost:11311 ROS_VERSION=1 ROS_PACKAGE_PATH=/opt/ros/melodic/share ROSLISP_PACKAGE_DIRECTORIES= ROS_DISTRO=melodic While following the beginner tutorials, I performed the step to list current topics in verbose mode via rostopic list -v and was faced with the following result: Published topics: * /turtle1/color_sensor [turtlesim/Color] 1 publisher * /turtle1/cmd_vel [geometry_msgs/Twist] 1 publisher * /rosout [rosgraph_msgs/Log] 5 publishers * /rosout_agg [rosgraph_msgs/Log] 1 publisher * /turtle1/pose [turtlesim/Pose] 1 publisher Subscribed topics: Traceback (most recent call last): File "/opt/ros/melodic/bin/rostopic", line 35, in <module> rostopic.rostopicmain() File "/opt/ros/melodic/lib/python2.7/dist-packages/rostopic/__init__.py", line 2119, in rostopicmain _rostopic_cmd_list(argv) File "/opt/ros/melodic/lib/python2.7/dist-packages/rostopic/__init__.py", line 2059, in _rostopic_cmd_list exitval = _rostopic_list(topic, verbose=options.verbose, subscribers_only=options.subscribers, publishers_only=options.publishers, group_by_host=options.hostname) or 0 File "/opt/ros/melodic/lib/python2.7/dist-packages/rostopic/__init__.py", line 1246, in _rostopic_list verbose) File "/opt/ros/melodic/lib/python2.7/dist-packages/rostopic/__init__.py", line 1140, in _sub_rostopic_list print(indent+" * %s [%s] %s subscribers"%(t, ttype, len(llist))) NameError: global name 'llist' is not defined According to the correct output shown in the tutorials, I get the Published topics correctly; however, the Subscribed topics output is not correct. Kindly help.
Originally posted by Eliptical0 on ROS Answers with karma: 1 on 2018-06-03 Post score: 0 Answer: This is a known issue, and already fixed in ros/ros_comm#1407. It could take a little while for this fix to reach you though. Originally posted by gvdhoorn with karma: 86574 on 2018-06-03 This answer was ACCEPTED on the original site Post score: 1
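For anyone curious why this crashes only in the Subscribed topics branch: NameError: global name 'llist' is not defined is the signature of a variable referenced under a misspelled name on a code path that only some invocations exercise. A minimal, hypothetical reproduction of the same failure mode (this is not the actual rostopic source):

```python
def list_subscribers(topics):
    lines = []
    for topic, subs in topics.items():
        sub_list = subs  # bound under this name ...
        # ... but referenced under a different, typo'd name:
        lines.append("%s has %d subscribers" % (topic, len(llist)))
    return lines

try:
    list_subscribers({"/rosout": ["node_a", "node_b"]})
except NameError as err:
    print("crashed:", err)  # mirrors the rostopic traceback above
```

Nothing fails until the bad line actually runs, which is why only the subscriber listing was affected.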
{ "domain": "robotics.stackexchange", "id": 30964, "tags": "ros-melodic, rostopic" }
Does every joint need an actuator and transmission?
Question: Hi all, I'm writing my own URDF-controller code for my robot. I have the URDF working well in rviz and warehouse but I can't get the controller to work. I am looking into this tutorial for a differential drive robot and I was wondering will I need actuators/transmissions for all of my motors in the joints of my robot's manipulator arms? Here is a copy of my urdf: <?xml version="1.0" ?> <robot name="H20_robot"> <link name="stand_link"> <visual> <geometry> <box size="0.20 0.10 1"/> </geometry> <origin rpy="0 0 0" xyz="0 0 -.5"/> <material name="orange"> <color rgba="1 0.5 0 1"/> </material> </visual> <collision> <geometry> <box size="0.20 0.10 1"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 0 -.5"/> </collision> <inertial> <mass value="20.0"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <link name="base_link"> <visual> <geometry> <cylinder length="0.38" radius="0.0325"/> </geometry> <origin rpy="0 1.57057 0" xyz="0 0 0"/> <material name="grey"> <color rgba="0.5 0.5 0.5 1"/> </material> </visual> <collision> <geometry> <cylinder length="0.38" radius="0.0325"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 1.57057 0" xyz="0 0 0"/> </collision> <inertial> <mass value="10.0"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <link name="camera_link"> <inertial> <mass value="1.0"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="camera_to_base" type="fixed"> <parent link="base_link"/> <child link="camera_link"/> <origin xyz="0.05 -.1 .3" rpy="0 0.5 -1.57" /> </joint> <joint name="base_to_stand" type="fixed"> <parent link="stand_link"/> <child link="base_link"/> <origin xyz="0 0 0" rpy="0 0 0" /> </joint> <link name="head_link"> <visual> <geometry> <sphere radius="0.15"/> </geometry> <origin rpy="0 0 0" xyz="0 0 
0.1825"/> <material name="orange"/> </visual> <collision> <geometry> <sphere radius="0.15"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 0 0.1825"/> </collision> <inertial> <mass value="2.0"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="head_to_base" type="fixed"> <parent link="base_link"/> <child link="head_link"/> <origin xyz="0 0 0" rpy="0 0 0" /> </joint> <link name="left_shoulder_link"> <visual> <geometry> <cylinder length="0.06" radius="0.0175"/> </geometry> <material name="grey"/> <origin rpy="0 1.57057 0" xyz="0.03 0 0"/> </visual> <collision> <geometry> <cylinder length="0.06" radius="0.0175"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 1.57057 0" xyz="0.03 0 0"/> </collision> <inertial> <mass value="2.5"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="base_to_left_shoulder" type="revolute"> <parent link="base_link"/> <child link="left_shoulder_link"/> <origin xyz="0.19 0 0" rpy="0 0 0" /> <axis xyz="1 0 0" /> <limit effort="1000.0" lower="-1.57" upper="1.57" velocity="0.5"/> </joint> <link name="right_shoulder_link"> <visual> <geometry> <cylinder length="0.06" radius="0.0175"/> </geometry> <material name="grey"/> <origin rpy="0 -1.57057 0" xyz="-0.03 0 0"/> </visual> <collision> <geometry> <cylinder length="0.06" radius="0.0175"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 -1.57057 0" xyz="-0.03 0 0"/> </collision> <inertial> <mass value="0.5"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="base_to_right_shoulder" type="revolute"> <parent link="base_link"/> <child link="right_shoulder_link"/> <origin xyz="-0.19 0 0" rpy="0 0 0" /> <axis xyz="-1 0 0" /> <limit effort="1000.0" lower="-1.57" upper="1.57" velocity="0.5"/> </joint> <link 
name="left_rotator_link"> <visual> <geometry> <cylinder length="0.0150" radius="0.01"/> </geometry> <material name="grey"/> <origin rpy="1.57 0 0" xyz="0.045 0 0"/> </visual> <collision> <geometry> <cylinder length="0.0150" radius="0.01"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="1.57 0 0" xyz="0.045 0 0"/> </collision> <inertial> <mass value="0.1"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="left_shoulder_to_rotator" type="fixed"> <parent link="left_shoulder_link"/> <child link="left_rotator_link"/> <origin xyz="0 0.025 0" rpy="0 0 0" /> </joint> <link name="right_rotator_link"> <visual> <geometry> <cylinder length="0.0150" radius="0.01"/> </geometry> <material name="grey"/> <origin rpy="-1.57 0 0" xyz="-0.045 0 0"/> </visual> <collision> <geometry> <cylinder length="0.0150" radius="0.01"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="-1.57 0 0" xyz="-0.045 0 0"/> </collision> <inertial> <mass value="0.1"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="right_shoulder_to_rotator" type="fixed"> <parent link="right_shoulder_link"/> <child link="right_rotator_link"/> <origin xyz="0 0.025 0" rpy="0 0 0" /> </joint> <link name="left_tricep_link"> <visual> <geometry> <cylinder length="0.29" radius="0.0325"/> </geometry> <material name="grey"/> <origin rpy="0 0 0" xyz="0 0.0325 -.105"/> </visual> <collision> <geometry> <cylinder length="0.29" radius="0.0325"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 0.0325 -.105"/> </collision> <inertial> <mass value="5.0"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="left_rotator_to_tricep" type="revolute"> <parent link="left_rotator_link"/> <child link="left_tricep_link"/> <origin xyz="0.045 0.0075 0" rpy="0 0 0" 
/> <axis xyz="0 -1 0" /> <limit effort="1000.0" lower="-1.57" upper="1.57" velocity="0.5"/> </joint> <link name="right_tricep_link"> <visual> <geometry> <cylinder length="0.29" radius="0.0325"/> </geometry> <material name="grey"/> <origin rpy="0 0 0" xyz="0 0.0325 -.105"/> </visual> <collision> <geometry> <cylinder length="0.29" radius="0.0325"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 0.0325 -.105"/> </collision> <inertial> <mass value="5.0"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="right_rotator_to_tricep" type="revolute"> <parent link="right_rotator_link"/> <child link="right_tricep_link"/> <origin xyz="-0.045 0.0075 0" rpy="0 0 0" /> <axis xyz="0 -1 0" /> <limit effort="1000.0" lower="-1.57" upper="1.57" velocity="0.5"/> </joint> <link name="left_upper_bicep_link"> <visual> <geometry> <cylinder length="0.145" radius="0.0325"/> </geometry> <material name="grey"/> <origin rpy="0 0 0" xyz="0 -0.0325 0"/> </visual> <collision> <geometry> <cylinder length="0.145" radius="0.0325"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 -0.0325 0"/> </collision> <inertial> <mass value="2.5"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="left_tricep_to_upper_bicep" type="fixed"> <parent link="left_tricep_link"/> <child link="left_upper_bicep_link"/> <origin xyz="0 0 -.145" rpy="0 0 0" /> </joint> <link name="right_upper_bicep_link"> <visual> <geometry> <cylinder length="0.145" radius="0.0325"/> </geometry> <material name="grey"/> <origin rpy="0 0 0" xyz="0 -0.0325 0"/> </visual> <collision> <geometry> <cylinder length="0.145" radius="0.0325"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 -0.0325 0"/> </collision> <inertial> <mass value="2.5"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" 
iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="right_tricep_to_upper_bicep" type="fixed"> <parent link="right_tricep_link"/> <child link="right_upper_bicep_link"/> <origin xyz="0 0 -.145" rpy="0 0 0" /> </joint> <link name="left_lower_bicep_link"> <visual> <geometry> <cylinder length="0.11" radius="0.0175"/> </geometry> <material name="grey"/> <origin rpy="0 0 0" xyz="0 0 -0.055"/> </visual> <collision> <geometry> <cylinder length="0.11" radius="0.0175"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 0 -0.055"/> </collision> <inertial> <mass value="2.5"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="left_upper_to_lower_bicep" type="revolute"> <parent link="left_upper_bicep_link"/> <child link="left_lower_bicep_link"/> <origin xyz="0 -.0325 -0.0725" rpy="0 0 0" /> <axis xyz="0 0 1" /> <limit effort="1000.0" lower="-2" upper="2" velocity="0.5"/> </joint> <link name="right_lower_bicep_link"> <visual> <geometry> <cylinder length="0.11" radius="0.0175"/> </geometry> <material name="grey"/> <origin rpy="0 0 0" xyz="0 0 -0.055"/> </visual> <collision> <geometry> <cylinder length="0.11" radius="0.0175"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 0 -0.055"/> </collision> <inertial> <mass value="2.5"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="right_upper_to_lower_bicep" type="revolute"> <parent link="right_upper_bicep_link"/> <child link="right_lower_bicep_link"/> <origin xyz="0 -.0325 -0.0725" rpy="0 0 0" /> <axis xyz="0 0 1" /> <limit effort="1000.0" lower="-2" upper="2" velocity="0.5"/> </joint> <link name="left_upper_forearm_link"> <visual> <geometry> <cylinder length="0.27" radius="0.0325"/> </geometry> <material name="orange"/> <origin rpy="0 0 0" xyz="-0.0325 0 -0.095"/> </visual> <collision> <geometry> <cylinder 
length="0.27" radius="0.0325"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="-0.0325 0 -0.095"/> </collision> <inertial> <mass value="2.5"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="left_lower_bicep_to_upper_forearm" type="revolute"> <parent link="left_lower_bicep_link"/> <child link="left_upper_forearm_link"/> <origin xyz="0 0 -.11" rpy="0 0 0" /> <axis xyz="1 0 0" /> <limit effort="1000.0" lower="-1.57" upper="1.57" velocity="0.5"/> </joint> <link name="right_upper_forearm_link"> <visual> <geometry> <cylinder length="0.27" radius="0.0325"/> </geometry> <material name="orange"/> <origin rpy="0 0 0" xyz="0.0325 0 -0.095"/> </visual> <collision> <geometry> <cylinder length="0.27" radius="0.0325"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0.0325 0 -0.095"/> </collision> <inertial> <mass value="2.5"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="right_lower_bicep_to_upper_forearm" type="revolute"> <parent link="right_lower_bicep_link"/> <child link="right_upper_forearm_link"/> <origin xyz="0 0 -.11" rpy="0 0 0" /> <axis xyz="-1 0 0" /> <limit effort="1000.0" lower="-1.57" upper="1.57" velocity="0.5"/> </joint> <link name="left_lower_forearm_link"> <visual> <geometry> <cylinder length="0.065" radius="0.0325"/> </geometry> <material name="orange"/> <origin rpy="0 0 0" xyz="0 0 -.1975"/> </visual> <collision> <geometry> <cylinder length="0.065" radius="0.0325"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 0 -.1975"/> </collision> <inertial> <mass value="2.5"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="left_upper_to_lower_forearm" type="fixed"> <parent link="left_upper_forearm_link"/> <child link="left_lower_forearm_link"/> 
<origin xyz="0 0 0" rpy="0 0 0" /> </joint> <link name="right_lower_forearm_link"> <visual> <geometry> <cylinder length="0.065" radius="0.0325"/> </geometry> <material name="orange"/> <origin rpy="0 0 0" xyz="0 0 -.1975"/> </visual> <collision> <geometry> <cylinder length="0.065" radius="0.0325"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 0 -.1975"/> </collision> <inertial> <mass value="2.5"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="right_upper_to_lower_forearm" type="fixed"> <parent link="right_upper_forearm_link"/> <child link="right_lower_forearm_link"/> <origin xyz="0 0 0" rpy="0 0 0" /> </joint> <link name="left_wrist_link"> <visual> <geometry> <cylinder length="0.055" radius="0.0175"/> </geometry> <material name="orange"/> <origin rpy="0 0 0" xyz="0 0 -0.0275"/> </visual> <collision> <geometry> <cylinder length="0.055" radius="0.0175"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 0 -0.0275"/> </collision> <inertial> <mass value="1.0"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="left_lower_forearm_to_wrist" type="revolute"> <parent link="left_lower_forearm_link"/> <child link="left_wrist_link"/> <origin xyz="0 0 -.23" rpy="0 0 0" /> <axis xyz="0 0 1" /> <limit effort="1000.0" lower="-3.14" upper="3.14" velocity="0.5"/> </joint> <link name="right_wrist_link"> <visual> <geometry> <cylinder length="0.055" radius="0.0175"/> </geometry> <material name="orange"/> <origin rpy="0 0 0" xyz="0 0 -0.0275"/> </visual> <collision> <geometry> <cylinder length="0.055" radius="0.0175"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 0 -0.0275"/> </collision> <inertial> <mass value="1.0"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint 
name="right_lower_forearm_to_wrist" type="revolute"> <parent link="right_lower_forearm_link"/> <child link="right_wrist_link"/> <origin xyz="0 0 -.23" rpy="0 0 0" /> <axis xyz="0 0 -1" /> <limit effort="1000.0" lower="-3.14" upper="3.14" velocity="0.5"/> </joint> <link name="left_hand_link"> <visual> <geometry> <box size="0.04 0.05 0.09"/> </geometry> <material name="orange"/> <origin rpy="0 0 0" xyz="0 0 -0.045"/> </visual> <collision> <geometry> <box size="0.04 0.05 0.09"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 0 -0.045"/> </collision> <inertial> <mass value="0.5"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="left_wrist_to_hand" type="revolute"> <parent link="left_wrist_link"/> <child link="left_hand_link"/> <origin xyz="0 0 -0.055" rpy="0 0 0" /> <axis xyz="0 1 0" /> <limit effort="1000.0" lower="-0.785" upper="0.785" velocity="0.5"/> </joint> <link name="right_hand_link"> <visual> <geometry> <box size="0.04 0.05 0.09"/> </geometry> <material name="orange"/> <origin rpy="0 0 0" xyz="0 0 -0.045"/> </visual> <collision> <geometry> <box size="0.04 0.05 0.09"/> </geometry> <contact_coefficients mu="0" kp="1000.0" kd="1.0"/> <origin rpy="0 0 0" xyz="0 0 -0.045"/> </collision> <inertial> <mass value="0.5"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> <origin/> </inertial> </link> <joint name="right_wrist_to_hand" type="revolute"> <parent link="right_wrist_link"/> <child link="right_hand_link"/> <origin xyz="0 0 -0.055" rpy="0 0 0" /> <axis xyz="0 -1 0" /> <limit effort="1000.0" lower="-0.785" upper="0.785" velocity="0.5"/> </joint> <turnGravityOff>true</turnGravityOff> </robot> Here is the code from the tutorial I am referring to: <!-- Transmission is important to link the joints and the controller --> <transmission name="joint_w1_trans" type="SimpleTransmission"> <actuator name="joint_w1_motor" /> <joint name="joint_w1" /> <mechanicalReduction>1</mechanicalReduction> <motorTorqueConstant>1</motorTorqueConstant> </transmission> <transmission name="joint_w2_trans" type="SimpleTransmission"> <actuator name="joint_w2_motor" /> <joint name="joint_w2" /> <mechanicalReduction>1</mechanicalReduction> <motorTorqueConstant>1</motorTorqueConstant> </transmission> Hope someone can help me out! Originally posted by MartinW on ROS Answers with karma: 464 on 2013-02-05 Post score: 0 Answer: If you are using PR2 controllers, you need to define a transmission for each joint you want to control. Originally posted by IgorZ with karma: 86 on 2013-02-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by MartinW on 2013-02-08: Ahh okay and should I use the PR2 controllers on a non-pr2 robot? or are there better controllers to use on these types of robots? Comment by Bharadwaj on 2013-02-09: Hi, I too am trying to implement the PR2 controllers on my own robot. The urdf example link u pasted above is also a non-pr2 model and they use the pr2 controllers on it. I believe it should work. Comment by IgorZ on 2013-02-09: It depends. I use PR2 controllers for my robotic hand simulation. You can always write your own controller as a Gazebo plugin. There are instructions on the Gazebo website on how to write ROS plugins.
Comment by MartinW on 2013-02-09: Yea Bharadwaj, I just got my controllers to work by synthesizing these two tutorials: http://www.ros.org/wiki/pr2_mechanism/Tutorials/SImple%20URDF-Controller%20Example and http://www.ros.org/wiki/pr2_mechanism/Tutorials/Writing%20a%20realtime%20joint%20controller Comment by MartinW on 2013-02-09: Just make sure you get the namespaces, lib, and plugin names right. I spent a good few hours trying to make my own but after doing those tutorials I finally got my controllers to spawn and run :) But I don't have them doing anything yet, just connected to my joints. Now I'm working on some PID control Comment by Bharadwaj on 2013-02-11: Hey MartinW, are you able to control your URDF model even without the <controller:gazebo_ros_controller_manager Comment by MartinW on 2013-02-12: Hey Bharadwaj, No I tried a lot of different ways to get around that tag in the urdf. That is the only way I can control, however for my manipulator I only get data at around 10Hz. So I am not going to use controllers and just implement the drivers into a joint trajectory action ..... Comment by MartinW on 2013-02-12: Check out this link here: http://answers.ros.org/question/12182/controllers-vs-drivers/ I read the answers and it seems to have cleared things up a little!
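To make the accepted answer concrete: in the pr2_mechanism-era syntax used by the tutorial above, you would add one <transmission> block per actuated (non-fixed) joint of the URDF. A hypothetical example for the base_to_left_shoulder joint (the actuator name is made up; repeat the pattern for each of the other revolute joints):

```xml
<transmission name="base_to_left_shoulder_trans" type="SimpleTransmission">
  <actuator name="base_to_left_shoulder_motor" />
  <joint name="base_to_left_shoulder" />
  <mechanicalReduction>1</mechanicalReduction>
  <motorTorqueConstant>1</motorTorqueConstant>
</transmission>
```

Fixed joints (like camera_to_base or base_to_stand) need no transmission, since nothing drives them.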
{ "domain": "robotics.stackexchange", "id": 12756, "tags": "ros" }
Manipulating Adjacency matrix
Question: I have created an adjacency matrix which looks something like this

  A B C D E F G H I
A 0 0 0 0 0 0 0 0 0
B 1 0 0 0 0 0 0 0 0
C 1 0 0 0 0 0 0 0 0
D 0 0 0 0 0 0 0 0 0
E 1 1 0 1 0 0 0 0 0
F 1 1 1 1 1 0 0 0 0
G 0 0 1 1 0 0 0 0 0
H 0 1 0 0 0 0 1 0 0
I 1 1 0 0 0 1 0 0 0

I am trying to manipulate this matrix so that an indirect link between two vertices replaces the direct link. E.g. in the graph, C is connected to A, F is connected to both A and C (among others), and I is connected to both A and F. How do I go about removing the direct connections such as A and F, or A and I, since there is an indirect connection between them (F to C and C to A; I to F, F to C and C to A)? I tried using loops, but it seems that the more indirect connections there are, the more loops are required. What type of algorithm should I use for this type of problem? Would an adjacency list be a better alternative? Answer: If I understand you correctly, one solution to your problem would be to compute a spanning forest. A spanning forest of a graph is a smallest possible (in terms of number of edges) graph such that there's a path between two vertices in the forest if, and only if, there's a path between them in the original graph.
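Concretely, the forest can be extracted from the adjacency matrix with union-find. A minimal sketch (the function name spanning_forest is made up for illustration, not from the answer):

```python
# Keep edge (u, v) only if u and v are not already connected by the
# edges kept so far; the kept edges form a spanning forest.
def spanning_forest(adj):
    n = len(adj)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    kept = []
    for u in range(n):
        for v in range(n):
            if adj[u][v]:
                ru, rv = find(u), find(v)
                if ru != rv:  # not yet connected, so keep this edge
                    parent[ru] = rv
                    kept.append((u, v))
    return kept
```

On the matrix above (one connected component of 9 vertices), this keeps exactly 8 edges, and every direct link that duplicates an indirect one is dropped.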
{ "domain": "cs.stackexchange", "id": 11413, "tags": "algorithms, adjacency-matrix" }
Can charging by conduction and charging by induction occur simultaneously?
Question: If I connect a neutral body to a charged body (suppose it is negatively charged) with a conducting wire, how will the neutral body get charged - by induction or by conduction? Answer: "Electrically charging an object" can have different meanings: adding or removing charge from an object, i.e. changing its state so that the total charge is different than before by adding/removing electrons/ions. To do that, you need some way of conducting the charges (usually a wire for electrons). In your example, a wire could lead the electrons from a negatively charged body to a neutral body (assuming that both bodies themselves conduct sufficiently). moving charges (i.e. changing the charge distribution) within the object without changing the total charge. In most cases, this includes performing work on the system and includes "charging a battery" = changing the electrochemical configuration. It can be done in any number of ways by creating an electrical or magnetic field, which leads to charge movement. This is governed by the Maxwell equations, which includes induction (applying force on electrons by changing the magnetic field). Those phenomena can happen simultaneously if the setup is chosen accordingly. In your example, assuming conducting bodies, the charging will mostly be the done by adding charge. Additionally, the arising currents change rapidly and also create changing magnetic and electrical fields that perform work on the bodies, thus moving charge within the bodies. I'm Austrian, so I can only say that the latter meaning is colloquial in German, maybe it's the same in English.
{ "domain": "physics.stackexchange", "id": 67497, "tags": "electrostatics, electricity, charge, electromagnetic-induction" }
Charge acquired by sols after their preparation
Question: This table from my textbook (p. 20) shows the nature of charge on particles of listed sols, in their original or natural form. What reasons can be attributed to oxides (sols) being positively charged? Any general conclusions that can be made for their preparation or occurrence? (Supplementary) Reasons I could find out by myself for some of these: Positively charged sols Hydrated metallic oxides are formed by hydrolysing metal halides; the oxides so formed preferentially adsorb metal ions present in the solution, making the sol positively charged, for example $$\ce{FeCl3 + 3H2O ->[Hydrolysis] Fe(OH)3(sol) + 3HCl}$$ Here, $\ce{Fe^3+}$ ions present in the solution are preferentially adsorbed by the colloidal particles. Basic dyes are cationic dyes, so their particles are positively charged. Oxides - ? Negatively charged sols Colloidal sols of metals such as gold, silver, platinum etc. can be prepared by Bredig's arc method. The electrons dispersed in the medium are captured by the sol particles, making the sol negatively charged. Metallic sulfides are prepared by treating metallic oxides with hydrogen sulfide, for example $$\ce{As2O3 + 3H2S ->[Double decomposition] As2S3(sol) + 3H2O}$$ The $\ce{S^2-}$ ions present in the solution are preferentially adsorbed by the sol particles, making it negatively charged. Acidic dyes are anionic dyes, so their particles are negatively charged. (Additionally) It will be appreciated if anyone can give an idea on the case of gum, clay and charcoal too. Answer: The original, and fairly limited question, is: "What reasons can be attributed to oxides (sols) being positively charged?" Disregarding the method of preparation, one answer is that these metal oxides are more basic than H2O, and therefore will adsorb H$^+$ ions from a neutral (pH =7) H2O medium. The pH will rise, but it doesn't take many protons to saturate a surface, so the rise may be only about 1 or possibly 2 pH units. 
The example given in the question: FeCl3 + 3H2O −→ Fe(OH)3(sol) + 3HCl could well result in adsorption of metal ions, but surely the production of hydrogen ions will result in a great excess of H$^+$ and adsorption of H$^+$also. FeCl3 solution is quite acidic. On the other hand, metals, metallic sulfides, clays, etc. could be more Lewis acidic, i.e., attractive toward the oxygen atom in water. Once adsorbed, the water molecule would be more likely to release a hydrogen ion than water would, resulting in a negative charge on the solid and an increase in acidity - again, perhaps by only 1 or 2 pH units. The bottom line is that the solvent (water) and its initial and final pH are a serious factor in determining the charge and its magnitude on suspended particles. This is not to minimize the effect of adsorption of metal ions on suspended particles when the metal ions are in significant concentrations, but these ions also have the much more plentiful H2O molecules to interact with.
{ "domain": "chemistry.stackexchange", "id": 16930, "tags": "physical-chemistry, colloids" }
The result of the extract of color varies depending on the camera and Image's topic
Question: I am using two cameras to extract the color of a piece of cloth: the first camera is a webcam and the second is an Xtion or Orbbec (RGBD) camera. My code can't extract the color at the same values (RGB) from both. When I watch the video, I think the webcam is better than the others (RGBD); it is close to reality (the color is similar to what my eyes see). The webcam is started using

rosrun usb_cam usb_cam_node _pixel_format:=yuyv

whereas the others use

roslaunch astra_camera astra.launch

which means a different format (see here). My issue is that the robot loses the target more often with the RGBD camera when a simple (not severe) illumination change happens in the environment. Meanwhile, I need the RGBD camera because of its depth data. This is my full code to compare them:

import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image, CameraInfo
from cv_bridge import CvBridge, CvBridgeError


class CVControl:
    def __init__(self):
        self.bridge = CvBridge()
        self.image_sub = rospy.Subscriber('/usb_cam/image_raw', Image, self.usb_cam)
        self.image_sub = rospy.Subscriber("/camera/rgb/image_raw", Image, self.img_callback)

    def usb_cam(self, data):
        try:
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
        except CvBridgeError as e:
            print e
        img = cv_image
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        red_lower = np.array([106, 48, 26], np.uint8)
        red_upper = np.array([126, 68, 106], np.uint8)
        red = cv2.inRange(hsv, red_lower, red_upper)
        # kernal = np.ones((5, 5), "uint8")
        # red = cv2.dilate(red, kernal)
        res_red = cv2.bitwise_and(img, img, mask=red)
        mask = cv2.morphologyEx(red, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2)))
        mask = cv2.Canny(mask, 50, 100)
        mask = cv2.GaussianBlur(mask, (13, 13), 0)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2)))
        (_, contours, hierarchy) = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        for pic, contour in enumerate(contours):
            area = cv2.contourArea(contour)
            if (area > 10000):
                x, y, w, h = cv2.boundingRect(contour)
                img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
                cv2.putText(img, "Red Colour", (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255))
        cv2.imshow("usb_cam", img)
        cv2.waitKey(3)

    def img_callback(self, data):
        try:
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
        except CvBridgeError as e:
            print e
        img = cv_image
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        red_lower = np.array([106, 48, 26], np.uint8)
        red_upper = np.array([126, 68, 106], np.uint8)
        red = cv2.inRange(hsv, red_lower, red_upper)
        # kernal = np.ones((5, 5), "uint8")
        # red = cv2.dilate(red, kernal)
        res_red = cv2.bitwise_and(img, img, mask=red)
        mask = cv2.morphologyEx(red, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2)))
        mask = cv2.Canny(mask, 50, 100)
        mask = cv2.GaussianBlur(mask, (13, 13), 0)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2)))
        (_, contours, hierarchy) = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        for pic, contour in enumerate(contours):
            area = cv2.contourArea(contour)
            if (area > 10000):
                x, y, w, h = cv2.boundingRect(contour)
                img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
                cv2.putText(img, "Red Colour", (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255))
        cv2.imshow("Image window", img)
        cv2.waitKey(3)


def main():
    rospy.init_node('image_converter')
    try:
        ctrl = CVControl()
        rospy.spin()
    except KeyboardInterrupt:
        print "Shutting down"
    cv2.destroyAllWindows()


if __name__ == '__main__':
    main()

Regardless of the color of the box, this is the output: Will I need to change the topic or something else? Please advise. Originally posted by Redhwan on ROS Answers with karma: 73 on 2019-12-05 Post score: 0 Original comments Comment by ct2034 on 2019-12-05: Thanks for your question. Could you please clarify, what your exact problem is: What behaviour do you expect, and what is happening instead? 
And could you please shorten your code to the relevant sections (if any) or use something like http://gist.github.com/ To make it more readable Comment by Redhwan on 2019-12-05: Thanks for your help. I didn't upload it in any websites, I still improve it until submitting a paper. I would that the robot follows me base on the color of clothes without loses me. Answer: This is a pretty common behavior with cameras. Different cameras respond to color slightly differently, and the automatic gain and white balance functions of the camera will affect the numeric and perceived color even more. There is an entire field of study devoted to this topic: Color Theory I suggest that you do some research into different color spaces (look at HSV for example) and maybe try to find and read a few papers about color tracking in computer vision. Most papers will discuss the techniques that they use to be robust to lighting and other changes in color. Originally posted by ahendrix with karma: 47576 on 2019-12-05 This answer was ACCEPTED on the original site Post score: 1
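A small illustration of the HSV point made in the answer (this sketch is an addition, not part of the original post, and uses only the Python standard library): hue stays fixed when only brightness changes, which is why HSV thresholds transfer better between cameras than raw RGB values.

```python
import colorsys

# The same red seen under two different exposures: one bright, one dim.
bright = colorsys.rgb_to_hsv(0.8, 0.1, 0.1)
dim = colorsys.rgb_to_hsv(0.4, 0.05, 0.05)

# Hue (first component) is identical; only value (third) changed.
assert abs(bright[0] - dim[0]) < 1e-9
assert bright[2] != dim[2]
```

Thresholding on hue (and loosening the value bound) is therefore one concrete way to make the tracker less sensitive to the gain and white-balance differences between the two cameras.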
{ "domain": "robotics.stackexchange", "id": 34093, "tags": "ros, ros-kinetic" }
Decomposition of a $4 \times 4$ unitary matrix
Question: I am currently studying the paper "Decomposition of unitary matrices and quantum gates (2012)" and referring to the textbook Quantum Computation and Quantum Information. Among the topics, I am particularly focused on understanding the decomposition of a $4 \times 4$ unitary matrix, but there are certain parts that I find challenging to grasp. In the process of decomposing an arbitrary unitary matrix $U$ into multiple unitary matrices, I am unsure why we start by setting the entry in the third row and first column of $U_1U$ to zero. Additionally, I don't understand the reason behind the specific form of the $U_1$ matrix that is used to make the entry in the third row and first column zero. To rephrase briefly, why do we start by setting the entry in the third row and first column of $U_1U$ to zero, and why is the form of $U_1$ chosen in such a way to achieve this? Answer: That paper appears to do its rotations in a very strange order. The method you're interested in is how to use Givens rotations to perform a QR decomposition (see, e.g. https://en.wikipedia.org/wiki/QR_decomposition#Using_Givens_rotations). The conventional ordering is that you start by setting the bottom-left element to zero, then work up the first column until all but the diagonal element are zero. Then you start from the bottom of the second column and work up, and so on. Why do you choose $U_i$ a particular way? Consider $$ U_i=\left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos\theta & \sin\theta \\ 0 & 0 & -\sin\theta & \cos\theta \end{array}\right). $$ When you calculate $U_iU$, what's its effect? Notice, first, that the first two rows of the output are the same as the first two rows of $U$. What does the fourth row look like? $-\sin\theta r_3+\cos\theta r_4$ where $r_3,r_4$ are the bottom rows of $U$. So, the trick is that there will always be a linear combination that zeros the element we want, you just have to pick the correct value of $\theta$. 
The side-effect is that, because $U_1$ has to be unitary, the third row gets messed up a bit. But that'll get sorted in the next iteration.
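The choice of $\theta$ can be checked numerically. A sketch (an added illustration using 0-indexed NumPy conventions, not code from the paper): for a real $4\times 4$ orthogonal $U$, picking $\theta = \operatorname{atan2}(U_{41}, U_{31})$ makes the fourth-row combination $-\sin\theta\, r_3 + \cos\theta\, r_4$ vanish in the first column.

```python
import numpy as np

rng = np.random.default_rng(0)
# A random real 4x4 "unitary" (orthogonal) matrix via QR.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))

a, b = U[2, 0], U[3, 0]
theta = np.arctan2(b, a)  # chosen so that -sin(t)*a + cos(t)*b = 0
c, s = np.cos(theta), np.sin(theta)
Ui = np.eye(4)
Ui[2:, 2:] = [[c, s], [-s, c]]  # same block form as the U_i above

out = Ui @ U
assert abs(out[3, 0]) < 1e-12          # target entry is zeroed
assert np.allclose(out[:2], U[:2])     # first two rows untouched
```

Note that the third row of `out` is the mixed combination $\cos\theta\, r_3 + \sin\theta\, r_4$, matching the remark that it "gets messed up" and is fixed in a later iteration.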
{ "domain": "quantumcomputing.stackexchange", "id": 5313, "tags": "quantum-gate, textbook-and-exercises, mathematics" }
Electrical engine calculations
Question: The ElectricalEngine class responds to the horsepower message. Because efficiency is calculated in percent, a programmer can mistakenly initialize it with an integer instead of a float.

class ElectricalEngine
  attr_reader :volts, :current, :efficiency

  def initialize(volts, current, efficiency)
    @volts, @current, @efficiency = volts, current, efficiency
  end

  HP_IN_WATTS = 746

  def horsepower
    (volts * current * efficiency) / HP_IN_WATTS
  end
end

puts ElectricalEngine.new(240, 90, .6).horsepower # correct
puts ElectricalEngine.new(240, 90, 60).horsepower # buggy

How would you handle this scenario?

1. Do nothing. It's the programmer's responsibility to know the right datatype.
2. Rename efficiency to efficiency_as_float to make it clearer.
3. Rename efficiency to efficiency_as_percent and adjust horsepower's calculation.
4. Write a custom efficiency method to check the datatype and convert it accordingly.
5. Check efficiency's type and raise an error if it's not a float.
6. Other

Solution four might look like this. Of course this conversion can happen in the initialize method too, but I think this is cleaner.

def horsepower
  (volts * current * efficiency_as_float) / HP_IN_WATTS
end

def efficiency_as_float
  if efficiency.is_a?(Integer) # what if 1 is passed in instead of 1.0?
    efficiency / 100.to_f
  else
    efficiency
  end
end

Solution 5 would look something like this:

def initialize(volts, current, efficiency)
  raise "Efficiency must be a float" unless efficiency.is_a?(Float)
  @volts, @current, @efficiency = volts, current, efficiency
end

Should ElectricalEngine own the responsibility of converting incorrect datatypes? Answer: I'd say pick #1: Do nothing (except, as m_x says, use to_f). But, if you really, really want to do something, pick #5: Raise an error. Specifically, I'd recommend raising a RangeError with a helpful message. 
raise RangeError, "efficiency must be between 0 and 1" unless (0..1).cover?(efficiency.to_f) As for this: Because efficiency is calculated in percent a programmer can mistakenly initialize it with an integer instead of a float. Yes, it can happen (I've made such mistakes myself), but using a 0..1 float is the more common approach (in my experience). It's usually only spreadsheets that deal with percentages as 0..100; in most programming (and math) contexts, "percent" means some 0..1 number. So calling it efficiency_as_percent could cause the opposite effect: People passing a 0..1 float where an int is required. Either way, the efficiency isn't specifically percentage (although you might render it as such); it's just a ratio. A factor. Hence floats make more sense, as they allow you set a more precise value than 0..100. Of course, you have to be a bit pragmatic about all this, so you don't end up implementing a strict type in a dynamically-typed language. For instance, you could also check volts and current - e.g. they probably shouldn't be negative. But then it quickly becomes a huge headache. You might also ask, "well, what if I want to calculate the horsepower of an over-unity engine?". Well, you can't, if efficiency can't go above 1.0. But in a sense, it's just algebra. You've got a formula, and you can plug whatever you want into it. Whether it makes sense to plug certain values in doesn't change the math. From a practical standpoint, a 200% efficiency engine is of course impossible, but you can still do the math just fine. Heck, even a perfect, 100% efficiency engine is impossible. Sooo should you only allow 0...1 (half-open range)? If you don't allow 1.0 itself, then how close a value do you allow? 0.9? 0.99999? Similarly, a zero-efficiency engine sounds like a mistake, so should you point that out too? And so on and so on... Anyway, I'd be ok with checking efficiency, but I wouldn't bother with it myself. 
Leave it to the programmer to do things right or suffer the consequences. GIGO: Garbage In, Garbage Out.
{ "domain": "codereview.stackexchange", "id": 10676, "tags": "ruby, validation" }
How can I demonstrate the internal energy of diatomic gas?
Question: I want to demonstrate the formula to find the internal energy of an ideal gas. The formula is $$U = \frac{5}{2}nRT.$$ I first tried to use the formula $U = E_c + E_p$ (Internal energy of an ideal gas is equal to the sum of the kinetic energy of all particles and the potential energy). Answer: You do that by applying the Equipartition Theorem. This theorem says that the average value of every quadratic term in the total energy of a molecule is $kT/2$. So write down the energy of a molecule, taking into account its kinetical (which shall include center of mass and rotational terms) and potential energy (which for your case must be zero, since you are considering a rigid molecule) and you shall get five quadratic terms. Hence every molecule has $5kT/2$ on avarege. Sum up over the molecules and use $Nk=nR$, where $N$ is the number of molecules.
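The counting in the answer can be written out explicitly (assuming a rigid diatomic molecule, so no vibrational terms and no rotation about the molecular axis):

```latex
% Energy of one rigid diatomic molecule: 3 translational + 2 rotational
% quadratic terms.
E = \underbrace{\tfrac{1}{2}mv_x^2 + \tfrac{1}{2}mv_y^2 + \tfrac{1}{2}mv_z^2}_{\text{translation}}
  + \underbrace{\tfrac{1}{2}I_1\omega_1^2 + \tfrac{1}{2}I_2\omega_2^2}_{\text{rotation}}
% Equipartition assigns kT/2 to each of the 5 quadratic terms:
\langle E \rangle = \tfrac{5}{2}kT,
\qquad
U = N\langle E \rangle = \tfrac{5}{2}NkT = \tfrac{5}{2}nRT .
```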
{ "domain": "physics.stackexchange", "id": 40992, "tags": "thermodynamics, energy, ideal-gas, kinetic-theory" }
How must elemental symbols be ordered in the formula for a compound?
Question: An online problem asked me to find the molecular formula of a certain substance given its mass composition. I found that it was $\ce{N_5P_5Cl_{10}}$. It was incorrect and told me the ordering of my symbols was wrong. I switched it to $\ce{N_5Cl_{10}P_5}$ because that's the order that they appear on the periodic table. It marked me wrong again and told me that the answer was $\ce{P_5N_5Cl_{10}}$, but didn't tell me why it must be ordered this way. Is there a convention that must be followed here? Answer: Yes there is. The most general rule is that elements are listed in increasing order of electronegativity, and in your case the Pauling ENs are 2.19 (P), 3.04 (N) and 3.16 (Cl), so that's why the application wanted it that way. Most of the time we approximate that rule by saying elements are listed in increasing order of column of the Periodic Table, so e.g. OF2, CO2, and BF3, and when elements appear in the same column, in decreasing order of period, so e.g. SiC, BrCl and SO3. The column rule takes precedence over the row rule, because EN varies more strongly left-to-right than up-and-down, generally. The place where this approximation fails most obviously is with hydrides, since H has an EN close to C but appears at the far left of the Table. Normally we skip the approximation and mostly use the underlying rule for hydrides, so e.g. BeH2 and GeH4 even though H is above and to the left of Be and Ge (Pauling EN is 1.57 for Be, 2.01 for Ge, 2.2 for H), but H2O and HF. However CH4 (Pauling En 2.55 for C) doesn't quite follow that rule and NH3 (Pauling EN 3.04 for N) is wildly contradictory. These may just be historical accidents.
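The rule can be sketched in a few lines of code (an added toy illustration: the PAULING_EN table holds only the three values quoted in the answer, and order_symbols is a made-up name, not a standard chemistry library):

```python
# Pauling electronegativities quoted in the answer above.
PAULING_EN = {"P": 2.19, "N": 3.04, "Cl": 3.16}

def order_symbols(symbols):
    # List elements in increasing order of electronegativity.
    return sorted(symbols, key=PAULING_EN.get)
```

For the compound in the question this reproduces the expected ordering: `order_symbols(["N", "Cl", "P"])` gives `["P", "N", "Cl"]`, i.e. $\ce{P5N5Cl10}$.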
{ "domain": "chemistry.stackexchange", "id": 10338, "tags": "notation" }
State vector construction method vs observable in "Quantum Mechanics The Theoretical Minimum"
Question: In L. Susskind's book "Quantum Mechanics: The Theoretical Minimum", spin is used as a vehicle to explain the effect of an observable in three orthogonal directions, leading to the creation of the Pauli spin matrices. My question is about the following specific aspect of the reasoning leading to the Pauli spin matrices. The notation used is: Along the $z$-axis, we measure the spin states $\lvert u\rangle$ 'up' and $\lvert d\rangle$ 'down'. Along the $x$-axis, we measure $\lvert l\rangle$ 'left' and $\lvert r\rangle$ 'right', and along the $y$-axis, we measure $\lvert i\rangle$ 'in' and $\lvert o\rangle$ 'out'. We have $$|r\rangle = \frac{1}{\sqrt{2}}|u\rangle+\frac{1}{\sqrt{2}}|d\rangle\tag{1}$$ $$ \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1\\ \end{pmatrix} $$ $$\sigma_{z}|r\rangle = \frac{1}{\sqrt{2}}|u\rangle-\frac{1}{\sqrt{2}}|d\rangle\tag{2}$$ Equation (1) is derived (see book page 41, equation 2.5, Section 2.3) by using the following setup: Apparatus A prepares the spin state to be $|r\rangle$. Apparatus A is then rotated to measure $\sigma_{z}$. However this "looks" to be doing the same as the left hand side of equation (2), but the right hand side of equation (2) [derived on page 82] has a minus sign that isn't there in equation (1). So why is there a difference? Answer: On page 41, we are only told that measuring $|r\rangle$ gives us $|u\rangle$ and $|d\rangle$ with equal probabilities. Here we are using the measurement of $\sigma_z$ of an electron in the $|r\rangle$ state to determine it in terms of $|u\rangle$ and $|d\rangle$. We are only looking at probabilities here. Not probability amplitudes. Thus when forming the state we have the freedom to call either $$|r\rangle=\frac{1}{\sqrt{2}}|u\rangle \pm \frac{1}{\sqrt{2}} |d\rangle$$ as the state. We chose to call the plus state as $|r\rangle$. However, on page 82, we are considering the action of $\sigma_z$ on the state $|r\rangle$ at the level of amplitudes. 
And since we have already decided on what the state $|r\rangle$ is, we can do this calculation which turns out to be $\frac{1}{\sqrt{2}}|u\rangle - \frac{1}{\sqrt{2}} |d\rangle$.
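This can be verified numerically. A quick check (an added sketch, assuming the standard column-vector convention $|u\rangle = (1,0)^T$, $|d\rangle = (0,1)^T$):

```python
import numpy as np

u = np.array([1.0, 0.0])
d = np.array([0.0, 1.0])
r = (u + d) / np.sqrt(2)                      # equation (1)
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

# sigma_z flips the sign of the |d> component, giving (|u> - |d>)/sqrt(2),
# i.e. equation (2). The minus sign comes from the operator, not from (1).
assert np.allclose(sigma_z @ r, (u - d) / np.sqrt(2))
```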
{ "domain": "physics.stackexchange", "id": 64228, "tags": "quantum-mechanics, quantum-spin" }
1D Hermitian Hamiltonians are topologically trivial?
Question: I am currently studying the following paper: https://arxiv.org/abs/1706.07435 The authors state: "The Z/2 classification we found for separable nonHermitian Hamiltonians in one dimension is in contrast with the case of gapped Hermitian Hamiltonians, all of which are topologically trivial." Yet, I remember that models like the Su-Schrieffer-Heeger (SSH) model, which is 1D, have a non-trivial winding number (equal to the 1D Chern number) with a Z classification. How do the two reconcile? Answer: The statement cited is misleading. If the system is bosonic, and no symmetry is considered, then indeed all gapped Hermitian Hamiltonians are trivial. With symmetry there can be distinct symmetry-protected topological (SPT) phases. For fermionic systems, even without symmetry there is a $Z_2$ classification for gapped Hermitian Hamiltonians. The SSH model is an example of an SPT in fermionic systems.
{ "domain": "physics.stackexchange", "id": 96510, "tags": "topological-insulators, topological-order" }
Dynamically indexing numpy array
Question: I want to create a function that takes a numpy array, an axis and an index of that axis and returns the array with the index on the specified axis fixed. What I thought of is to create a string that changes dynamically and is then evaluated as an index slicing into the array (as shown in this answer). I came up with the following function:

import numpy as np

def select_slc_ax(arr, slc, axs):
    dim = len(arr.shape) - 1
    slc = str(slc)
    slice_str = ":," * axs + slc + ",:" * (dim - axs)
    print(slice_str)
    slice_obj = eval(f'np.s_[{slice_str}]')
    return arr[slice_obj]

Example

>>> arr = np.array([[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
...                 [[0, 1, 0], [1, 1, 1], [0, 1, 0]],
...                 [[0, 0, 0], [0, 0, 0], [0, 0, 0]]], dtype='uint8')
>>> select_slc_ax(arr, 2, 1)
:,2,:
array([[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]], dtype=uint8)

I was wondering if there is a better method to do this. Answer:

Avoid eval at all costs
Use the slice built-in for your : components, or the numeric index at the selected axis
Do not abbreviate your variable names
Type-hint your function signature
Turn your example into something resembling a unit test with an assert
Modify the data in your example to add more non-zero entries, making it clearer what's happening
Prefer immutable tuples over mutable lists when passing initialization constants to Numpy
Prefer the symbolic constants for Numpy types rather than strings

Suggested

import numpy as np

def select_slice_axis(array: np.ndarray, index: int, axis: int) -> np.ndarray:
    slices = tuple(
        index if a == axis else slice(None)
        for a in range(len(array.shape))
    )
    return array[slices]

arr = np.array(
    (
        ((0, 0, 0), (0, 0, 0), (0, 0, 9)),
        ((0, 1, 0), (2, 3, 4), (0, 5, 0)),
        ((0, 0, 0), (0, 0, 0), (8, 0, 0)),
    ),
    dtype=np.uint8,
)
actual = select_slice_axis(arr, 2, 1)
expected = np.array(
    (
        (0, 0, 9),
        (0, 5, 0),
        (8, 0, 0),
    ),
    dtype=np.uint8
)
assert np.all(expected == actual)
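One further note (an addition beyond the answer above): NumPy already ships this exact operation as np.take, so no slice object needs to be built at all.

```python
import numpy as np

arr = np.arange(27).reshape(3, 3, 3)

# np.take with a scalar index selects one position along the given axis,
# which is exactly arr[:, 2, :] for axis=1, index=2.
assert np.array_equal(np.take(arr, 2, axis=1), arr[:, 2, :])
```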
{ "domain": "codereview.stackexchange", "id": 41870, "tags": "python, numpy" }
SDR : Signal divided between the edges of the receiving window
Question: I'm using a SDR (specifically the RTL-SDR) to analyse the spectrum and i noticed a phenomena that does not seem logical. The signal I want to capture is centred on 868.3 MHz and the central frequency of the SDR is centred on 869.3 MHz with a bandwidth of 2 MSamples. What I observed is that the 2 halves of the signal are displayed on both sides of the receiving window instead of just the half between 868.3MHz and 868.3MHz - 0.125MHZ ( as my signal bandwidth equals 125 KHz ) . What do we call this phenomena and what is the explanation behind it ? The first image is when the central frequency of the SDR is 868.3MHz The second image is when the central frequency ofthe SDR is 869.3MHz Answer: That's aliasing in its most classic form. The anti-alias filter in your receiver isn't perfect, so that frequencies slightly outside of what you want to sample appear on the other end.
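The folding can be reproduced with a few lines of NumPy (an illustrative sketch, not specific to the RTL-SDR): a real tone above the Nyquist frequency is indistinguishable, after sampling, from its mirror image folded back into the band.

```python
import numpy as np

fs = 100.0  # sample rate in Hz, so Nyquist is 50 Hz
n = np.arange(256)

tone_70 = np.cos(2 * np.pi * 70.0 * n / fs)  # above Nyquist
tone_30 = np.cos(2 * np.pi * 30.0 * n / fs)  # its alias, fs - 70 = 30 Hz

# The two sampled sequences are identical: 70 Hz has folded onto 30 Hz.
assert np.allclose(tone_70, tone_30)
```

This is why energy from just outside the receiver's passband, imperfectly rejected by the anti-alias filter, shows up mirrored at the edges of the displayed spectrum.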
{ "domain": "dsp.stackexchange", "id": 7555, "tags": "fft, frequency-spectrum, gnuradio, software-defined-radio" }
turtlesim_node won't start on Mac OS X 10.6
Question: I'm trying to start with ROS, so I'm following the tutorials. My problem comes when I want to launch turtlesim_node by using the command rosrun turtlesim turtlesim_node. It says:

[rosrun] Couldn't find executable named turtlesim_node below /Users/ME/ros/ros_tutorials/turtlesim

Then I try rosmake turtlesim and I get these errors:

"wxBitmap::wxBitmap(int, int, int)", referenced from: turtlesim::TurtleFrame::TurtleFrame(wxWindow*) in turtle_frame.o
"_wxDefaultPosition", referenced from: turtlesim::TurtleFrame::TurtleFrame(wxWindow*) in turtle_frame.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make[3]: *** [../bin/turtlesim_node] Error 1
make[2]: *** [CMakeFiles/turtlesim_node.dir/all] Error 2
make[1]: *** [all] Error 2

Any idea what could be wrong? Thanks in advance. Originally posted by Sergio Omar on ROS Answers with karma: 1 on 2011-05-22 Post score: 0 Answer: I just built the diamondback ros_tutorials stack from source on a OS X 10.6.7 machine, and had no trouble compiling, linking, or running turtlesim/turtlesim_node. It works fine for me. Looks like you have a problem with your wx installation. Suggestions: Make sure you have wx (and other supporting libraries) installed:

rosmake --rosdep-install ros_tutorials

Reinstall wx:

sudo port deactivate wxWidgets-python
sudo port uninstall wxWidgets-python
sudo port install wxWidgets-python

Originally posted by Brian Gerkey with karma: 2916 on 2011-06-12 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 5630, "tags": "ros, rosmake, turtlesim, turtlesim-node" }
Density matrix of a partially polarized beam of electrons
Question: The following problem is from Huang's Statistical Mechanics (2nd edition): 8.1 Find the density matrix for a partially polarized incident beam of electrons in a scattering experiment, in which a fraction $f$ of the electrons are polarized along the direction of the beam and a fraction $1 - f$ is polarized opposite to the direction of the beam. Earlier in the chapter, he gives $$\rho_{m n} \equiv (\Phi_n, \rho \Phi_m) \equiv \delta_{m n} |b_n|^2 \tag{8.10}$$ where $\Phi_n$ and $b_n$ have the same meaning as in $$\Psi = \sum_n b_n \Phi_n {,} \tag{8.7}$$ the wave function of a system (each $\Phi_n$ is a wave function for $N$ particles contained in a volume $V$ and is an eigenfunction of the Hamiltonian of the system; the $\Phi_n$ are chosen such that together they form a complete orthonormal set). He also defines the density operator, but I'll omit that here. I am not looking for a solution, just some guidance or a hint. I am having trouble coming up with an expression for the $\Phi_n$'s. For an individual electron, we can write $$\left[ A \begin{pmatrix} 1 \\ 0 \end{pmatrix} + B \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right] e^{i \mathbf{k} \cdot \mathbf{r}}$$ Can we just use this same wave function for this problem? If so, why? I'm thinking our coefficients $A$ and $B$ should satisfy $|A|^2 + |B|^2 = f$ for the electrons polarized in the direction of the beam and $|C|^2 + |D|^2 = 1 - f$ for the electrons polarized opposite to the direction of the beam. Another thought: $|b_1|^2 = f$ and $|b_2|^2 = 1 - f$ in $(8.7)$, so that I would expect the density matrix to be a $2 \times 2$ matrix. But I really am unsure. This is not homework, I am trying to work through problems in QSM in order to read some papers this summer. Any help would be greatly appreciated. Answer: Generally speaking: If you prepare a system in the pure state $|\psi\rangle$, then the density matrix of the system in that state will be $\hat \rho = |\psi\rangle\langle \psi|$. 
If you have $N$ preparation procedures which produce the system in states with density matrices $\hat\rho_1,\ldots,\hat \rho_N$ with probabilities $p_1,\ldots,p_N$, where those probabilities must obey $p_j\geq 0$ and $\sum_{j=1}^N p_j = 1,$ then the probabilistic procedure will be described by the density matrix $$\hat \rho = \sum_{j=1}^N p_j \hat \rho_j.$$ In your case you have two preparation procedures with probabilities $p_1=f$ and $p_2=1-f$, so your task is to produce appropriate descriptions of the $\hat \rho_j$ and to add them up correctly to get $\hat \rho$.
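The recipe can be sketched numerically (an added illustration, not from the textbook; it assumes the beam axis as the quantization axis, with $(1,0)^T$ polarized along the beam and $(0,1)^T$ opposite, and the function name partially_polarized is made up):

```python
import numpy as np

def partially_polarized(f):
    up = np.array([[1.0], [0.0]])    # polarized along the beam
    down = np.array([[0.0], [1.0]])  # polarized opposite to the beam
    # rho = f |up><up| + (1 - f) |down><down|
    return f * (up @ up.T) + (1 - f) * (down @ down.T)

rho = partially_polarized(0.7)
assert np.allclose(rho, np.diag([0.7, 0.3]))   # diagonal in this basis
assert np.isclose(np.trace(rho), 1.0)          # probabilities sum to one
```

Since both pure states are eigenstates of the same spin operator, the resulting $2\times 2$ matrix is diagonal with entries $f$ and $1-f$, matching the guess in the question.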
{ "domain": "physics.stackexchange", "id": 49950, "tags": "quantum-mechanics, statistical-mechanics, electrons, polarization, density-operator" }
Primitive console string calculator in C
Question: Recently I have finished this primitive console string calculator in C. I need your balanced criticism; I'd like to know what I have to correct, remake and learn. Permitted operations: + - * / ( ) ^ Sample input string: 11 * -10 + 15 - 35 / (35 - 11) + 2^2 =

main.c

#include "stack.h"
#include <stdio.h>
#include "calculating.h"
#include <errno.h>

#define EXPR_LENGTH 1000

main()
{
    while (1) {
        Stack* nums = createEmptyStack();
        ValType lastval, prevval, result;
        char* temp = NULL, temp_char;
        char num_buf[NUM_LENGTH], expr_buf[EXPR_LENGTH];
        CalcErrors err_flag = NO_ERR;

        system("cls"); // clear the screen
        printf("Enter the expression: ");

        /* Transform the infix notation to the reverse Polish notation
           and processing the error if necessary */
        if (transformToRPN(expr_buf) != NO_ERR) {
            err_flag = WRONGSYM_ERR;
            goto break_loop;
        }

        temp = expr_buf; // to don't change the pointer expr_buf

        /* Parsing the operands from expression in the reverse Polish notation */
        while (temp_char = getOperand(num_buf, temp, &temp)) {
            switch (temp_char) {
            /* Processing the operators */
            case SUB:
            case ADD:
            case MUL:
            case POW:
            case DIV:
                if (nums->size < 2) { // if operators are less than 2 (to avoid a pop() of empty stack)
                    err_flag = LACK_OPERAND;
                    goto break_loop; // break the nested loop
                }
                lastval = pop(nums); // last added operand
                prevval = pop(nums);
                result = useOperator(prevval, lastval, temp_char);
                if (errno || (!lastval && temp_char == DIV)) { // math errors (out of range, zero division etc)
                    err_flag = MATH_ERR;
                    goto break_loop; // break the nested loop
                }
                push(nums, result);
                break;
            /* Adding a value in the stack */
            case VAL:
                push(nums, atof(num_buf));
                break;
            /* End of input or other errors */
            case EQ:
                err_flag = END_CALC;
                goto break_loop;
            default:
                err_flag = WRONGSYM_ERR;
                goto break_loop;
            }
        }

break_loop: // if input was incorrect, loop will be broken
        printf("\n");

        /* Processing the errors or displaying an answer */
        switch (err_flag) {
        case WRONGSYM_ERR:
            printf(MSG_WRONGSYM);
            break;
        case MATH_ERR:
            printf(MSG_MATH);
            break;
        case END_CALC:
            nums->size == 1 ? printf("= %f\n", pop(nums)) : printf(MSG_LACKOP);
            break;
        case LACK_OPERAND:
            printf(MSG_LACKOP);
            break; // FIX THE TEXT
        }

        printf("\nDo you want to calculate something else? y/n > ");
        getchar(); // skip the '\n' symbol
        temp_char = getchar();
        if (temp_char != 'y' && temp_char != 'Y')
            break;
    }
}

stack.h

#define ELEMENTARY_SIZE_STACK 5 // initial the length of stack
#define RESIZE_ADDING 5 /* how much will be add when the stack size will be equal to ELEMENTARY_SIZE */

#include <stdlib.h>

#define ValType double
#define ValType_IOSPECIF "%lf"

typedef struct stack {
    ValType* data;
    size_t max; // such value when the stack size should be extend
    size_t size;
} Stack;

Stack* createEmptyStack(void);
void push(Stack* s, ValType val);
void deleteStack(Stack* st);
static void resize(Stack* st);
ValType pop(Stack* st);
ValType peek(Stack* st);
void printStack(Stack* st);

#define MSG_MALLOC "ERROR! MALLOC RETURNS NULL!\n"
#define MSG_REALLOC "ERROR! REALLOC RETURN NULL!\n "

calculating.h

#include <float.h>

/* Errors processing */
typedef enum calcerr {
    NO_ERR,
    MATH_ERR,
    WRONGSYM_ERR,
    LACK_OPERAND,
    END_CALC
} CalcErrors;

#define MSG_WRONGSYM "ERROR! You entered the wrong symbol(s).\n"
#define MSG_LACKOP "ERROR! The input format is not correct.\n"
#define MSG_MATH "ERROR! Math error!\n"

/* Operators, delimiters, other characters */
#define VAL 1
#define SUB '-'
#define ADD '+'
#define MUL '*'
#define DIV '/'
#define EQ '='
#define POW '^'
#define OP_BRACE '('
#define CL_BRACE ')'
#define DELIM_DOT '.'
#define DELIM_COMMA ','
#define IS_DELIM(x) ((x) == DELIM_DOT || (x) == DELIM_COMMA)
#define SPACE ' '

/* Priority of operators */
#define PRIOR_SUB 1
#define PRIOR_ADD 1
#define PRIOR_MUL 2
#define PRIOR_DIV 2
#define PRIOR_POW 3
#define PRIOR_OP_BR 0
#define PRIOR_CL_BR 0
#define PRIOR(x) getPriority(x)

/* Calculation and parsing expressions on the RPN (Reverse Polish Notation) */
int getOperand(char* num, char expression[], char** end);
double useOperator(double leftval, double rightval, char oper);

/* transform the infix notation to the reverse Polish notation */
int isOperator(char op);
static int getPriority(char op);
CalcErrors transformToRPN(char result[]);

#define NUM_LENGTH DBL_DIG // maximum length of number

calculating.c

#include "calculating.h"
#include "stack.h"
#include <stdio.h>
#include <math.h>

/* Function for working with an expression on the RPN */
int getOperand(char* num, char expression[], char** end)
{
    unsigned counter = 0;
    char* ptr = expression;

    /* NULL pointer checking */
    if (!ptr)
        return 0;

    /* Spaces skipping before */
    while (isspace(*ptr))
        ptr++;

    /* The unary minus checking */
    int negative_flag = 0;
    if (*ptr == SUB) {
        if (isdigit(*++ptr)) // if the next character is a digit
            negative_flag = 1;
        else
            ptr--;
    }

    /* The return of the any not digit character */
    if (!isdigit(*ptr)) {
        *end = ++ptr;
        return *(ptr - 1);
    }

    /* Making a float number and the delimiter processing */
    while (isdigit(*ptr) || IS_DELIM(*ptr)) {
        if (*ptr == DELIM_COMMA)
            *ptr = DELIM_DOT; // for atof()
        if (negative_flag) {
            num[counter++] = SUB;
            negative_flag = 0; // in order to add the minus symbol one time in the head of array
        }
        num[counter++] = *ptr;
        ptr++;
    }
    num[counter] = '\0';
    *end = ptr; // pointer to the next character (to continue reading an operand from a new location)
    return VAL;
}

double useOperator(double leftval, double rightval, char oper)
{
    double result;
    switch (oper) {
    case ADD: result = leftval + rightval; break;
    case SUB: result = leftval - rightval; break;
    case
MUL: result = leftval * rightval; break; case DIV: result = leftval / rightval; break; case POW: result = pow(leftval, rightval); break; default: exit(1); } return result; } /* Function for working with an expression on the infix notation */ int getPriority(char ch) { switch (ch) { case SUB: return PRIOR_SUB; case ADD: return PRIOR_ADD; case MUL: return PRIOR_MUL; case DIV: return PRIOR_DIV; case POW: return PRIOR_POW; case OP_BRACE: return PRIOR_OP_BR; // opening parethesis case CL_BRACE: return PRIOR_CL_BR; // closing parethesis default: return -1; } } int isOperator(char x) { return (x == SUB || x == ADD || x == MUL || x == DIV || x == POW); } CalcErrors transformToRPN(char result[]) { Stack* ops = createEmptyStack(); char temp_ch; unsigned counter = 0; while ((temp_ch = getchar())) { /* Spaces skipping before */ while (isspace(temp_ch)) temp_ch = getchar(); /* Ending the input, fully write the remaining contents of stack*/ if (temp_ch == EQ) { while (ops->size) { result[counter++] = (char)pop(ops); result[counter++] = SPACE; } result[counter++] = EQ; result[counter] = '\0'; return NO_ERR; } /* Cheking the unary minus, and if it's not unary, return character after minus used to check back and process a minus as an operator*/ if (temp_ch == SUB) { temp_ch = getchar(); // read next symbol and if the next character is a digit if (isdigit(temp_ch)) { if (temp_ch != '0') // to don't allow the '-0' result[counter++] = SUB; } else { ungetc(temp_ch, stdin); temp_ch = SUB; } } /* Making a number */ if (isdigit(temp_ch)) { while (isdigit(temp_ch) || IS_DELIM(temp_ch) ) { result[counter++] = temp_ch; temp_ch = getchar(); } ungetc(temp_ch, stdin); // return the extra character result[counter++] = SPACE; } /* Else check the operator and push it to the stack */ else { if (isOperator(temp_ch)) { if (!ops->size) // if stack is empty (to avoid an error after pop) push(ops, (double)temp_ch); else { if (PRIOR(temp_ch) <= PRIOR((char)peek(ops))) // > if priority of new operator is 
higher than operator { // in the top of stack, then the old operation will be result[counter++] = (char)pop(ops); // display and new operator push in the stack < result[counter++] = SPACE; } push(ops, (double)temp_ch); } } /* Operators inside parathesises processing */ else if (temp_ch == OP_BRACE) push(ops, (double)temp_ch); else if (temp_ch == CL_BRACE) // if it's a closing parethesis, then it write all of operators before the { // opening parethesis not including it char tmp; while ((tmp = (char)pop(ops)) != OP_BRACE) // until the opening parathesis { result[counter++] = tmp; result[counter++] = SPACE; } } /* Any other symbols */ else return WRONGSYM_ERR; } } } stack.c #include "stack.h" #include <string.h> Stack* createEmptyStack(void) { Stack* st = malloc(sizeof(Stack)); if (st == NULL) { printf(MSG_MALLOC); exit(EXIT_FAILURE); } st->data = malloc(sizeof(ValType)*ELEMENTARY_SIZE_STACK); if (st->data == NULL) { printf(MSG_MALLOC); exit(EXIT_FAILURE); } st->max = ELEMENTARY_SIZE_STACK; st->size = 0; return st; } void deleteStack(Stack* st) { free(st->data); free(st); st = NULL; } static void resize(Stack* st) { st->max += RESIZE_ADDING; st->data = realloc(st->data, st->max*sizeof(ValType)); if (st->data == NULL) { printf(MSG_REALLOC); exit(EXIT_FAILURE); } } void push(Stack* st, ValType val) { if (st->size == st->max) resize(st); st->data[st->size++] = val; } ValType pop(Stack* st) { if (!st->size) { // if there is a request to pop the value from empty stack printf("ERROR (FUNCTION POP): STACK SIZE < 0\n"); exit(EXIT_FAILURE); } st->size--; return st->data[st->size]; } /* In this function just look to the top of the stack */ ValType peek(Stack* st) { if (!st->size) { // if there is a request to pop the value from empty stack printf("ERROR (FUNCTION PEEK): STACK SIZE < 0\n"); exit(EXIT_FAILURE); } return st->data[st->size-1]; } void printStack(Stack* st) { int i; for (i = 0; i < st->size; i++) printf(ValType_IOSPECIF " ",st->data[i]); printf("\n"); } Answer: 
Generalities and trivialities: I recommend #includeing all wanted system headers before any private headers. This avoids any possibility of changing the meaning of any system header by defining a macro that happens to be meaningful to it. static functions and variables generally should not be declared in header files, unless you actually want every file that includes the header to provide (for functions) or have (for variables) its own copy. In your code, this applies to the resize() and getPriority() functions. If you want to prototype those then put each prototype near the top of the C file where the corresponding function definition appears. The string constant for MSG_REALLOC contains a trailing space character that you probably did not intend. There's no point in deleteStack() setting its argument to NULL, as the caller will not see any effect of it. On the other hand, you never call deleteStack() or printStack(). Inasmuch as this is a program, not a reusable library, it is wasteful to define functions that you never call. You do not declare the type of main(). It does default to int, which is the required type, but failing to declare that is poor style. It looks like you could replace much of function getOperand() with a call to strtod(). Regarding ValType: If you see value in defining the value data type as a macro -- and I think that may be a bit overkill -- then at least define it conditionally. The point of using a macro would be to make it easy to switch to a different type, and you've gone only half way on that. If you make the definition(s) conditional, then it would not be necessary to modify the header at all to change type. For example: #ifndef ValType #define ValType double #endif #ifndef ValType_IOSPECIF #define ValType_IOSPECIF "%f" #endif Note in the above example code that I have changed the default definition of ValType_IOSPECIF.
The result is the correct form, per the standard, for a printf() field descriptor for a double (there is none specific to float because float arguments to printf() are automatically promoted to double, as a consequence of those falling among printf()'s variable arguments). Note also that printf() and scanf() are asymmetric in this particular regard, so if you needed to support both then you would need separate macros for the two contexts. Regarding function transformToRPN(): Function transformToRPN() does not just transform an expression, it inputs one. At minimum, therefore, the function is poorly named, but it's questionable that these two behaviors are combined in a single function. In function transformToRPN(), you have an else block whose sole contents is an if/elseif/else tree, and moreover, that conditions in that inner tree test the same variable that those on the outer tree do. I recommend merging the inner tree into the outer tree for clarity and symmetry. Function transformToRPN() receives a pointer to a buffer into which to record an RPN transformation of the input expression, but it does not receive the size of that buffer, and it performs no bounds checking. It would be extremely easy for a user to intentionally cause a buffer overrun, which is a favorite cracking tactic. Function transformToRPN() assigns the return value of getchar(), an int, to a variable of type char. If char happens to be a signed type, then this produces implementation-defined behavior for some possible inputs. Whether char is signed or unsigned, you cannot distinguish one possible input char from an EOF. Moreover, the while() loop seems unlikely to terminate, because at the end of the stream getchar() returns EOF, which normally has value -1, and always has a value outside those that can result from converting a valid char value, including 0, to type unsigned char. The loop will terminate only if getchar() returns 0.
{ "domain": "codereview.stackexchange", "id": 22449, "tags": "c, calculator, math-expression-eval" }
Does quantum superposition allow to apply an algorithm to all bits possibly in one shot?
Question: From what I understood, quantum programming can solve some algorithms exponentially faster. Thanks to superposition, unlike a classical bit, which can be either 0 or 1, a qubit can be both 0 and 1. Does that mean we can apply an algorithm to all bits possibly in one shot? Example: We have 2 qubits, and we apply a Hadamard gate to each to put them in superposition. So all the possibilities will be [00, 01, 10, 11], i.e. [0, 1, 2, 3]. Can we apply an algorithm to all of these values? For example, apply *10 to the value. So we would get [0, 10, 20, 30]. Then sum it all to get 60. But, getting that in one shot? I tried to implement something similar in Qiskit. Creating 2 qubits, applying a Hadamard gate on them. But I fail to go further. If I measure my qubits, they will be one of the solutions, but not all of them. Am I missing something? Answer: No, you're not missing something. Talking about being able to apply an algorithm for all possible inputs in a single shot is misleading. As you've found, you cannot read out all those answers; you can only get one answer, just as you would classically. The trick of quantum computing is asking the right question. This is typically some sort of comparison between the different values you've evaluated. You access it by performing some sort of interference between your multiple parallel evaluations, usually using something like a Hadamard transform or Fourier transform. Working out what that might be for a specific calculation you might want is the major challenge of algorithm design. For the case that you're asking about, I believe there's a version of Grover's algorithm which you can use to calculate the average value of a function. It's far from single shot, but can provide a mild improvement in run time for large input sizes.
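The point about measurement can be seen in a minimal sketch. This is not Qiskit, just a toy plain-Python amplitude model of the 2-qubit example from the question, written for illustration:

```python
import random

# Toy amplitude model of a 2-qubit register after a Hadamard on each qubit:
# every basis state carries amplitude 1/2, i.e. an equal superposition.
amplitudes = {"00": 0.5, "01": 0.5, "10": 0.5, "11": 0.5}

# Measurement does NOT return all branches: it samples ONE basis state
# with probability |amplitude|^2, and the superposition is gone.
probabilities = {state: a * a for state, a in amplitudes.items()}
outcome = random.choices(list(probabilities), weights=probabilities.values())[0]
print(outcome)  # a single bitstring such as '10', different from shot to shot
```

Each run yields one of the four bitstrings with probability 1/4; nothing in the readout exposes the other three branches, which is exactly what the answer describes.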
{ "domain": "quantumcomputing.stackexchange", "id": 3752, "tags": "superposition, quantum-parallelism" }
Motion of an electron near a proton
Question: Statement of the problem: Consider an electron and a proton that are initially at rest separated $a$ meters. Do not take into account the movement of the proton, because its mass is much greater than the electron's. 1. What is the minimum kinetic energy at which the electron must be "launched" so that the electron gets to be $b$ meters away from the proton? 2. What is the corresponding minimum speed for that situation? 3. What distance away from the proton will the electron reach when it has double the initial kinetic energy? (The original problem says 2.00 nm instead of $a$ meters, and 12.0 nm instead of $b$ meters) My attempt: Question 1 and 2 We say $q$ represents the elementary charge, i.e. the charge of the proton (positive) and of the electron (negative). Then I can assume that when the electron is $b$ meters away, the KE is zero. Then the difference in potential energy would be the same as the lost KE. $$\Delta U=\frac{1}{4\pi \varepsilon_o} \left ( \frac{q^2}{a} - \frac{q^2}{b} \right ) =K= mc^2 \left ( \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}-1 \right ) \approx \frac{1}{2}m v^2$$ Doing some algebra we can get $v$, which is what they are asking for. My questions: 4. Is it possible to have an electron and a proton at rest? I imagine we would need another force that is "pulling" the electron away from the proton, and that that force cancels out with the electrostatic force from the proton. But, what would be a realistic example of a source of that force? And, how would that source alter the entire system? 5. How can we launch an electron that is near a proton? Question 3 suggests that the speed of the electron suddenly goes from zero to some positive value. But, doesn't that ("launching the electron") imply infinite acceleration? If that is indeed possible, how can we realistically do it? EDIT: The original problem was written in a confusing way. Thanks to the help of @user7777777 I was able to interpret it the right way and fix the wording. 
Answer: You are absolutely correct for Questions 1 and 2. For Question 3, we can use the result from Question 2, but replacing $\frac{1}{2} m {v_1}^2$ with $2 (\frac{1}{2} m {v_1}^2)$, and comparing the two results in terms of $b$.
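With the original numbers (a = 2.00 nm, b = 12.0 nm), the non-relativistic approximation from Questions 1 and 2 can be checked numerically. The constants below are standard rounded values:

```python
import math

K = 8.988e9          # Coulomb constant, N m^2 / C^2
E_CHARGE = 1.602e-19  # elementary charge, C
M_E = 9.109e-31      # electron mass, kg

a, b = 2.00e-9, 12.0e-9  # initial and final separations, m

# Increase in potential energy climbing from r = a to r = b:
# delta_U = k q^2 (1/a - 1/b)
delta_U = K * E_CHARGE**2 * (1.0 / a - 1.0 / b)

# Minimum launch speed from (1/2) m v^2 = delta_U
v = math.sqrt(2.0 * delta_U / M_E)
print(delta_U, v)  # roughly 9.6e-20 J and 4.6e5 m/s
```

The resulting speed is about 0.15% of c, which justifies dropping the relativistic correction in the kinetic-energy expression.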
{ "domain": "physics.stackexchange", "id": 52188, "tags": "homework-and-exercises, electrons, charge, potential-energy, classical-electrodynamics" }
Twist message working & ackermann type conversion
Question: I am familiar with ROS and the concept of the twist message. Recently, I used the teb_local_planner concept to convert the conventional twist messages into the ackermann type, and thus the designed steering-based bot worked well. But now, I want to understand the actual working of how the twist causes movement of the wheels, even in the case of conventional differential-based bots. When I use any twist message, e.g. rostopic pub /catvehicle/cmd_vel geometry_msgs/Twist -r 8 -- '[2.0, 0.0, 0.0]' '[0.0, 0.0, -1.8]', then what are the internal operations going on which result in movement of the bot? Is there any specific package used in the launch file of the bot which converts these twist-based velocity commands into encoder values? Also, does changing this twist message into the ackermann type involve any new set of internal operations to drive the wheels? Regards Originally posted by Ayush Sharma on ROS Answers with karma: 41 on 2017-05-03 Post score: 1 Answer: Every robot is going to have a unique way of handling the conversion from Twist message into motor commands. It depends on the geometry of the robot, the motor controllers, what feedback is available, what type of motor is involved, the existence of APIs, etc. So it is difficult to give a general answer for how to do this. For wheeled robots, likely the biggest thing that is needed is an understanding of the kinematics of the robot. For a differential drive robot (like a TurtleBot), if we have a desired forward and angular body velocity (the data contained in the Twist), there is a unique mapping that tells us what translational velocities each of our wheels needs (assuming we know how far apart the wheels are). Then if we also know the diameter of the wheels, we can uniquely calculate a desired angular velocity for the motor. This information can then be passed onto a motor controller in some fashion.
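The diff-drive mapping just described can be sketched in a few lines. This is a minimal illustration, not any particular robot's driver; the wheel separation and radius used below are made-up parameters:

```python
def twist_to_wheel_speeds(v, omega, wheel_separation, wheel_radius):
    """Map a Twist's body velocities (v in m/s, omega in rad/s) to
    (left, right) wheel angular velocities in rad/s for a diff-drive base."""
    v_left = v - omega * wheel_separation / 2.0   # translational speed, left wheel
    v_right = v + omega * wheel_separation / 2.0  # translational speed, right wheel
    return v_left / wheel_radius, v_right / wheel_radius

# Driving straight: both wheels spin at the same rate.
print(twist_to_wheel_speeds(1.0, 0.0, 0.5, 0.1))   # (10.0, 10.0)
# Turning in place: wheels spin in opposite directions.
print(twist_to_wheel_speeds(0.0, 2.0, 0.5, 0.1))   # (-5.0, 5.0)
```

A real driver would then convert these wheel angular velocities into whatever command the motor controller expects (serial packets, CAN frames, PWM, etc.).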
Let's look at the TurtleBot as an example: This callback is the callback that actually listens to the Twist message on the cmd_veltopic. They convert the data in the twist into a tuple representing the translational velocity of each wheel that gets stored as self.req_cmd_vel Then at the very end of the TurtlebotNode.spin method they call self.drive_cmd with this tuple of wheel velocities. Because we are in "twist control mode", the self.drive_cmd actually calls the Turtlebot.direct_drive method defined in the create_driver script. This method transmits a command to the robot base over a UART connection using PySerial. The protocol for this communication was originally defined by iRobot. If you are interested in studying the kinematics of mobile robots, I'd recommend looking at Siegwart's Introduction to Autonomous Mobile Robots. Section 3.2 of Correll's Introduction to Autonomous Robots (available as a pdf) has a good walkthrough of diff drive robots and car-like steered robots. Originally posted by jarvisschultz with karma: 9031 on 2017-05-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Ayush Sharma on 2017-05-04: Thank you for such a great and explanatory reply. The book you mentioned has a perfect summary of what the ackermann steering involves. Comment by Ayush Sharma on 2017-05-04: If i need to start learning CAN Interfacing and need to implement them with the steering model, do you suggest me to work on the controller packages as well,or will working only with kinematics of the steering model will do my job of linking with CAN? Comment by jarvisschultz on 2017-05-04: I'm not really sure what your application is, but fundamentally, CAN is just a communication protocol. If you are using CAN, my guess is that is because you have motor controllers that use CAN. You need to write a ROS node that converts Twists to motor velocities and then sends them over the CAN bus
{ "domain": "robotics.stackexchange", "id": 27794, "tags": "ros, teb-local-planner" }
Historical reason behind using $ν$ instead of $f$ to stand for frequency in the equation $E=hν$?
Question: Normally, we use the letter $f$ to stand for frequency in equations. $$T = 1/f$$ $$v = \lambda f$$ $$Φ + E_k = h f$$ So I'm curious as to why the letter $ν$ (nu) is used to represent frequency in the equation $$E=hν$$ when people who first saw it may think it's velocity due to its resemblance to $v$ and get confused? (And even the frequency formula for matter waves de Broglie deduced from it uses $f$.) Is there a historical reason behind this? Answer: I think it is because scientists in previous centuries had gone through a classical education in ancient Greek and Latin, and Greek letters, not being symbols used in normal writing, would stand out and not be confused. Mathematics had used up a lot of the first letters (for every delta there exists an epsilon) and they probably did not want a confusion with force (f). Lambda was taken by wavelength (possibly by association with the l of length). Mu, coming before nu in the alphabet, was also used to denote some constants (magnetic permeability). This wiki link gives the scientific definitions attached to the Greek alphabet, and the correspondence is one letter to many definitions!
{ "domain": "physics.stackexchange", "id": 9873, "tags": "soft-question, history, notation, conventions" }
Why does Gazebo crash while adding a 3D model? It is a robotic arm .DAE
Question: I'm trying to add a model of a robotic arm in "model editor" (custom shapes), but Gazebo crashes when I click import. Why could this happen? Is there another way of loading the robotic arm? I tried also with .OBJ Originally posted by Jezel on Gazebo Answers with karma: 5 on 2017-10-18 Post score: 0 Original comments Comment by chapulina on 2017-10-18: When you run Gazebo in verbose mode, do you see any errors? (gazebo --verbose) I've seen Gazebo crash before with malformed collada files. Comment by Jezel on 2017-10-18: Now I tried verbose mode and you are right, it is not a valid collada. "Invalid collada file. Must be version 1.4.0 or 1.4.1". Thanks, I didn't know about this verbose mode, at least now I know why it is crashing. Is there any way to fix these colladas? Answer: As described in the comments, the issue is that the collada file is invalid. One way around it would be to load it into a CAD program such as Blender and then re-export it as collada in the hope that the program will fix the issues. Originally posted by chapulina with karma: 7504 on 2017-10-19 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Jezel on 2017-10-20: Got it, thanks a lot for your help
{ "domain": "robotics.stackexchange", "id": 4196, "tags": "gazebo" }
Why doesn't sunlight ALWAYS get split (into its monochromatic components) when going through the clouds?
Question: It is my understanding that clouds are largely made up of water, which is known to split white light into its frequency components, and that's why we see rainbows sometimes. My question is, with sunlight being practically constant throughout the day, why aren't there rainbows all over, all the time? I mean, we can see monochromatic changes in the color of the "sky" throughout the day, but rainbows seem to only happen sometimes, why? In short: Rainbows that we can see come from water droplets, not collections of droplets. Answer: Light is multiply scattered in clouds, i.e. it is refracted by one water droplet or ice crystal, then refracted again by the next, then again by the next and so on. By the time the light reaches you eyes it will have been scattered many times. This means all the light hitting the cloud gets thoroughly mixed up and all the rainbows are jumbled up together and just appear a uniform white. To get a rainbow requires that the light scattering be fairly weak so the light scattered into the rainbow arc is not scattered again.
{ "domain": "physics.stackexchange", "id": 31502, "tags": "electromagnetism, visible-light, refraction" }
How much to turn to face the goal?
Question: Hi, I am implementing a simple algorithm to move the robot towards an assumed goal position in the willow-erratic world map in stage_ros package. I want to calculate the degree quantity that the robot should turn to face the goal. I am using the following equation but it seems that there is a problem with the code since when the robot is turning the quantity doesn't change. Also, when I manually place the robot in a location(1.5, 8) where it should turn 180 degrees to face the goal (3.5, 8), the equation below yields an output of -79.38 which is incorrect. Someone told me I can do this task using quaternions. How can I do that? robotPose.pose.orientation.z is equal to 1 now. How can I use that to find out the degree quantity that the robot need to turn to face the goal? angle = (math.atan2(goalY - robotPose.pose.position.y, goalX - robotPose.pose.position.x) - math.atan2(robotPose.pose.position.y, robotPose.pose.position.x)) * 180/math.pi What am I doing wrong? Thank you Originally posted by Warrior on ROS Answers with karma: 61 on 2014-04-17 Post score: 0 Answer: What you need is a fixed reference for both goal and robot pose. Here's an example with /map frame (sorry it is a C++ cause I don't master Python code): while(ros::ok()) { //declare some variables tf::StampedTransform poseRobot; geometry_msgs::PoseStamped robot_pose; //initialize robot angle double yaw_r(50); //try what's between {} try { ROS_DEBUG("Pose du robot ..."); // have the transform from /map to /base_link (robot frame) and put it in "poseRobot": this is the robot pose in /map! listener.lookupTransform("/map","/base_link",ros::Time(0), poseRobot); //get the orientation: it is a quaternion so don't worry about the z. It doesn't mean there is a 3D thing here! 
robot_pose.pose.orientation.x = poseRobot.getRotation().getX(); robot_pose.pose.orientation.y = poseRobot.getRotation().getY(); robot_pose.pose.orientation.z = poseRobot.getRotation().getZ(); robot_pose.pose.orientation.w = poseRobot.getRotation().getW(); // convert the quaternion to an angle (the yaw of the robot in /map frame!) yaw_r = tf::getYaw(robot_pose.pose.orientation); } catch(tf::TransformException &ex) //if the try doesn't succeed, you fall here! { // continue ROS_INFO("error. no robot pose!"); } //angleDes is the goal angle in /map frame you get it by subscribing to goal //The angleDes you should get it in a call back normally by converting the goals orientation to a yaw. Exactly as we have done to get robot's yaw //delta_yaw is what you want to get double delta_yaw = angleDes - yaw_r; //and here I just make sure my angle is between minus pi and pi! if (delta_yaw > M_PI) delta_yaw -= 2*M_PI; if (delta_yaw <= -M_PI) delta_yaw += 2*M_PI; You can replace /map frame with /odom frame. Suppose now that your goal is not in /map frame. This how you convert it with tf: //some bool to check if conversion is alright success =false; try { ROS_DEBUG("transform from goalFrame to mapFrame ..."); //prepare conversion by sync'ing times ros::Time current_transform = ros::Time::now(); listener2.getLatestCommonTime(currentGoal.header.frame_id, "map", current_transform, NULL); currentGoal.header.stamp = current_transform; //conversion geometry_msgs::PoseStamped goalInMap; listener2.transformPose("/map", currentGoal, goalInMap); angleDes = tf::getYaw(goalInMap.pose.orientation); success =true; } catch(tf::TransformException &ex) { // continue ROS_DEBUG("error. no transform from goal frame to map frame!"); } Originally posted by AbuIbra with karma: 118 on 2014-04-18 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Warrior on 2014-04-18: Your way seems to be complicated and I couldn't understand it properly. 
I may note that my robot is in a 2-D space, not 3D. Does that make sense? I think I figured out the correct way of calculating the degree quantity that the robot must move to face the goal as follow: ((math.atan2(self.goalY - self.robotY, self.goalX - self.robotX)) * 180/math.pi) - self.robotAngle What do you think? Comment by AbuIbra on 2014-04-18: I edited my answer to put more explanatory comments to make my code clearer. Believe me man, this is not complicated, this , I think, is the straight forward way to do things with ROS basic tools (tf, poseStamped, etc. ) Comment by AbuIbra on 2014-04-18: As for your change in the comment, I can't say much about it. You have to understand that there are frames. So your "self" thing means nothing unless you tell me in which frame it is. For example, imagine you have robotPose variable expressed in /base_link(robot frame), you always have x=0 y=0 ! Comment by Warrior on 2014-04-18: Sorry, I shouldn't have copied the code directly. The "self" thing is used because I am writing my codes in a class. Here's the more understandable code: ((math.atan2(goalY - robotY, goalX - robotX)) * 180/math.pi) - robotAngle. Initially, robotAngle is 180 (the robot is looking backward). So, the quantity of this equation will be the angle that the robot must turn to face the goal. However, the topics that I have subscribed to publish both pose.{x, y, z} along with orientation.{x, y, z, w} quantities. So, suppose my robot starts at the position (-8, -2) and its robot.pose.orientation.z quantity is 1. The goal is also located at (3.5, 8). How can I calculate what should robot.pose.orientation.z be when I am facing the robot? (Sorry I am still a little bit lost) Comment by Stefan Kohlbrecher on 2014-04-18: I think a lot of the confusion stems from the fact that you've been misunderstanding how the orientation quaternion works. 
Have a look here:http://answers.ros.org/question/30340/problem-in-geometry_msgspose/ Comment by AbuIbra on 2014-04-20: @Warrior: this :"((math.atan2(goalY - robotY, goalX - robotX)) * 180/math.pi) - robotAngle" can be always equal to "((math.atan2(goalY, goalX)) * 180/math.pi)" because in robot frame: robotX =0 robotY=0 and robotAngle=0! So without the frame, this is useless. So you really have to think FRAMES!
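Putting the pieces of this thread together, here is a short Python sketch under the assumption (stressed repeatedly above) that the robot pose and the goal are expressed in the same fixed frame, e.g. /map. The quaternion-to-yaw conversion mirrors what tf's getYaw does for a planar (2-D) pose:

```python
import math

def quat_to_yaw(x, y, z, w):
    """Yaw angle (rad) from an orientation quaternion, as tf's getYaw() does."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

def angle_to_turn(robot_x, robot_y, robot_yaw, goal_x, goal_y):
    """Heading error (rad) the robot must turn through to face the goal.
    All coordinates and the yaw must be in the same fixed frame."""
    bearing = math.atan2(goal_y - robot_y, goal_x - robot_x)
    delta = bearing - robot_yaw
    # normalize to (-pi, pi]
    while delta > math.pi:
        delta -= 2.0 * math.pi
    while delta <= -math.pi:
        delta += 2.0 * math.pi
    return delta

# orientation.z == 1 (w == 0) means the robot faces backwards (yaw = pi),
# so from (1.5, 8) it must turn 180 degrees to face the goal at (3.5, 8).
yaw = quat_to_yaw(0.0, 0.0, 1.0, 0.0)
print(math.degrees(angle_to_turn(1.5, 8.0, yaw, 3.5, 8.0)))  # 180.0
```

Note how this reproduces the asker's test case correctly: subtracting the robot's yaw (obtained from the quaternion) rather than atan2 of the robot's position is what the original formula got wrong.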
{ "domain": "robotics.stackexchange", "id": 17694, "tags": "navigation, rospy, ros-hydro" }
Depalma Free Energy fields
Question: A few years ago I read some papers about Free Energy, written by Bruce de Palma, a physicist who is said to be the inventor of the N-Machine, a device that supposedly works with free energy latent in the space around us. There are several informal references to him and his work. However, I tried several times to find his papers in Scielo, Science Direct, IEEE, Physical Letters, Nature and several other scientific databases and found no references to Dr. Bruce de Palma. Personally I find it quite strange that research of such importance does not have a formal paper in a journal of big impact. Does his work make sense? Answer: The claims are absurd--- you can't extract "zero point energy", because it just isn't there--- energy is extracted by moving something from a higher energy state to a lower energy state, and the vacuum is the lowest possible energy. Nevertheless, there are many fraudulent claims to the contrary, and this is one of them. In response to comment: It is easy to dismiss claims that there are new fields affecting material substances--- there aren't any such fields. The author here does not provide evidence for these fields beyond speculating, and if these fields could produce macroscopic forces in matter, they would generically generate friction in materials, so that you would notice missing energy which is radiated into these modes. This is too obvious to miss. Further, even if there were new fields, in their vacuum configuration, they would be useless for extracting energy. You would need to find an excited configuration and drain out the energy. This stuff is not worth the bother of reading it.
{ "domain": "physics.stackexchange", "id": 1810, "tags": "thermodynamics, energy, experimental-physics" }
Does the aluminization of a Thermos flask matter?
Question: Typically we have a double walled container with a vacuum between the walls which limits or stops conduction and convection heat transfer. They are also (usually) aluminized to minimize heat transfer by radiation. Does this have much effect? In other words, how much worse would a regular flask be without the reflective coating? Answer: Consider a black body at 100°C with an ambient temperature of 0°C. The energy loss per unit area due to radiation is $$P = \sigma \times (T_1^4-T_2^4)\text{ (Stefan-Boltzmann)}$$ which gives $782.6\:\mathrm{W/m^2}$. With a surface of $500\:\mathrm{cm^2}$ it turns out as $\approx 40\:\mathrm{W}$, so quite significant.
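The figure quoted in the answer is easy to reproduce. A quick numerical check, assuming an ideal black body (emissivity 1) and σ as the standard Stefan-Boltzmann constant:

```python
SIGMA = 5.67e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
T_HOT, T_COLD = 373.0, 273.0    # 100 C and 0 C in kelvin

# Net radiated power per unit area between the two temperatures
flux = SIGMA * (T_HOT**4 - T_COLD**4)

area = 500e-4                   # 500 cm^2 expressed in m^2
power = flux * area

print(round(flux, 1), round(power, 1))  # 782.6 W/m^2 and about 39.1 W
```

A real uncoated glass flask would radiate somewhat less than this black-body bound (emissivity below 1), but the estimate shows why the low-emissivity aluminized coating matters.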
{ "domain": "physics.stackexchange", "id": 26278, "tags": "thermodynamics" }
$\Gamma(J/\psi\rightarrow \text{Hadrons})\sim \alpha_s^3$ - why not $\alpha_s^6$?
Question: I have seen this in a number of places now (e.g. Ynduráin (2007), and here), that the decay of $J/\psi$ goes as $\alpha_s^3$. The decay of $J/\psi$ proceeds through three gluons; as such, each vertex of each of these gluons contributes a factor $\propto \sqrt{4\pi \alpha_s}$ to the matrix element, which thus goes like: $$M\propto \alpha_s^3$$ to account for the 6 vertices in the diagram. The decay width should go like $M^2$ and thus we should have: $$\Gamma(J/\psi\rightarrow \text{Hadrons})\sim \alpha_s^6$$ not $$\Gamma(J/\psi\rightarrow \text{Hadrons})\sim \alpha_s^3$$ as is commonly stated. My question is: why is the latter more commonly written? Does it actually go like $\alpha_s^3$ or is there another reason behind it? Answer: The computation of the decay width for a particular final state is actually complex, because this requires a model for the final-state non-perturbative hadronisation, and non-perturbative QCD is really difficult. But the observable you considered in your question, $\Gamma(J/\Psi \to \text{hadrons})$, where "hadrons" stands for any number and flavour of hadrons, is much easier to compute. Indeed we just need to consider all quark and gluon final states at a given order of perturbation in $\alpha_S$. Since those quark or gluon final states give hadrons for sure, we only need to compute the square of the matrix elements for $c\bar{c} \to ggg$, etc., and then sum them up. The result is that $\Gamma(J/\Psi \to \text{hadrons})/\Gamma(J/\Psi \to l^+l^-) \propto \alpha_S^3$ at leading order, with a next-to-leading-order correction proportional to $\alpha_S^4$. This is relatively well confirmed experimentally. Note: what is most reliably computed is this ratio, because it cancels out the non-perturbative aspects of the bound state $J/\Psi$. Note also that when I say "relatively well confirmed", there are actually issues.
The value of $\alpha_S$ extracted from the perturbative formula of this ratio does not agree with the value extracted from high-energy processes, for example. $J/\Psi$ decays are still an ongoing field of research, with lingering hypotheses of e.g. glueballs in the final state.
{ "domain": "physics.stackexchange", "id": 40917, "tags": "particle-physics, strong-force" }
QtCreator cannot show ROS packages?
Question: I use QtCreator to open the CMakeLists.txt (the root CMakeLists.txt) under the root src/ folder. Normally, QtCreator should show the directory tree of all ROS packages as below: However, sometimes it fails to show the ROS packages, and instead turns out to be: This is very ugly and makes it very inconvenient for me to find my desired source files and edit them. So does anyone know why this situation happens and how to solve this problem? Originally posted by Winston on ROS Answers with karma: 180 on 2017-07-10 Post score: 1 Answer: What I can really recommend is the ROS Qt Creator Plug-in. It's a really nice plugin, where you can easily import ROS workspaces via 1. File > New File or Project > Import Project > Import ROS Workspace > Choose... 2. Enter a Project Name and Browse... to the ROS project workspace (the workspace, not the src folder), then select Next > Originally posted by chwimmer with karma: 436 on 2017-07-11 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 28323, "tags": "ros, qtcreator" }
Measures and probability in formal language theory
Question: I am looking for references for the following problem: I have a very special class of regular languages and my aim is to express (and to justify my conjecture) that this class itself is very small in some way (as a subset of the regular languages) and that the languages contained in this class are rather "bloated". For the latter point, I could prove that all languages in the class have a large diameter with respect to many common metrics on strings. However, I want to consider the following: Given a language from the class, we know it has a large diameter, but does it also have a large "volume" (that is, measure), or put differently, if I choose randomly a finite word, is there anything meaningful to say about how "probable" it is that the word belongs to the language? Of course, we can lift the problem: Picking a random language, how probable is it to get a language in the class? Are there any references or standard approaches for looking at classes of (regular) languages from this point of view (or is this considered as generally uninteresting)? Answer: There is the concept of density of languages (see e.g. here). The density $\operatorname{den}_L : \mathbb{N} \to [0,1]$ of $L \subseteq \Sigma^*$ is defined by $\qquad \operatorname{den}_L(n) = \frac{|L \cap \Sigma^n|}{|\Sigma^n|}$. For any fixed length, the density corresponds to the probability of picking a word from the language, assuming we pick uniformly at random. Add a distribution over lengths and you may have what you need. You may be able to express your concept of "volume" in terms of this notion, maybe by investigating $\lim_{n \to \infty} \operatorname{den}_L(n)$. As for "randomly picking a language" -- how would you do that? There are uncountably many languages over any given alphabet so I'm not sure how you would define a (nice) probability distribution.
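To make the definition concrete, $\operatorname{den}_L(n)$ can be brute-forced for small $n$; a sketch (the example language — words over $\{a,b\}$ containing at least one $a$ — is my own illustration, not taken from the question):

```python
from itertools import product

def density(in_language, alphabet, n):
    """den_L(n) = |L intersect Sigma^n| / |Sigma^n|, by brute-force enumeration."""
    words = product(alphabet, repeat=n)
    hits = sum(1 for w in words if in_language("".join(w)))
    return hits / len(alphabet) ** n

# Toy regular language over {a, b}: words containing at least one 'a'.
contains_a = lambda w: "a" in w

for n in range(1, 6):
    print(n, density(contains_a, "ab", n))
# den_L(n) = (2^n - 1) / 2^n, which tends to 1: this language has "full volume".
```

Enumeration is of course exponential in $n$; for a regular language given as a DFA, $|L \cap \Sigma^n|$ can instead be counted in polynomial time by dynamic programming over states.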
{ "domain": "cs.stackexchange", "id": 1650, "tags": "formal-languages, reference-request, regular-languages, probability-theory" }
Cylinder Hollowness vs Speed Down Incline Plane
Question: We know that a hollow can rolls down an incline more slowly than a filled can of the same mass and dimensions. If the can is neither completely hollow nor completely filled (so it has a hollow core whose radius is greater than zero but less than the can's outer radius), one would expect the speed to be in between the two speeds mentioned above. Is the relationship here, between the hollowness of a cylinder and the speed, linear? If not, what is it? Answer: If you, say, have a partly hollow can of mass $m$, outer radius $R$ and moment of inertia $I$ around its axis of symmetry, rolling from the height $h$ with angular velocity $\omega$ and speed of the center $v$, then energy conservation gives you: \begin{equation} mgh=\frac{1}{2}mv^2+\frac{1}{2}I\omega^2. \end{equation} The speed of the center is $v=\omega R$, so the equation above can be rewritten as \begin{equation} mgh=\frac{1}{2}\left(mR^2+I\right)\omega^2. \end{equation} Now the moment of inertia for a can of radius $R$, hollow out to radius $r<R$, is $I=\frac{\pi\rho H}{2}\left(R^4-r^4\right)$*, where $\rho$ is its density and $H$ is its height (written $H$ here to avoid a clash with the drop height $h$). Taking into account that $\rho={m\over V}={m\over\pi(R^2-r^2)H}$ we get \begin{equation} mgh=\frac{m}{4}\left(3R^2+r^2\right)\omega^2=\frac{m}{4}\left(3+\frac{r^2}{R^2}\right)v^2. \end{equation} And for the velocity you have \begin{equation} v=\sqrt{\frac{4gh}{3+\frac{r^2}{R^2}}}. \end{equation} For $r=0$ (filled cylinder) you get $v^2={4\over 3}gh$; for $r=R$ (hollow tube) you get $v^2=gh$. Since the hollowness enters only through $(r/R)^2$, the relationship between hollowness and speed is not linear. * The moment of inertia $I$ for a partly hollow cylinder can be found by integrating $$ I=\int x^2\mathrm{d}m=\int x^2\rho(x)\mathrm{d}V=\int_0^R x^2 \rho(x) 2\pi x H\mathrm{d}x, $$ where $\rho(x)=\rho\Theta(x-r)$, where $\Theta$ is the Heaviside function.
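The closed-form result can be checked numerically; a sketch (the function simply evaluates the final formula $v=\sqrt{4gh/(3+(r/R)^2)}$ from the answer, with $g = 9.81\:\mathrm{m/s^2}$ assumed):

```python
import math

def rolling_speed(h, r_over_R, g=9.81):
    """Speed at the bottom of the incline for a cylinder hollowed out to
    inner radius r, rolling without slipping from height h, using
    v = sqrt(4 g h / (3 + (r/R)^2))."""
    return math.sqrt(4 * g * h / (3 + r_over_R**2))

h = 1.0
v_solid = rolling_speed(h, 0.0)  # filled cylinder: v^2 = (4/3) g h
v_tube = rolling_speed(h, 1.0)   # thin hollow tube: v^2 = g h
print(v_solid, v_tube)
# The hollowness enters only through (r/R)^2, so v is not linear in r.
```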
{ "domain": "physics.stackexchange", "id": 33259, "tags": "homework-and-exercises, rotational-dynamics" }
How well is Earth’s motion through the universe quantified?
Question: If I had a time travel machine that could drop me into any point in space and time, could I accurately predict Earth's "position"? I know position is an improper term. Would it be possible to know exactly where to land so that you are at the same spot on the surface of Earth as when you "departed"? Answer: Surprisingly, we will soon be able to do this well enough to go back several hundred million years and place ourselves, if not in the solar system, inside the Galaxy and roughly in the right part of the Galaxy. It turns out that we know, with high accuracy, the Earth's motion with respect to the microwave background radiation, which provides a well-defined frame of reference for motions. Therefore, we will be able to work out in which direction and about how far the Milky Way (MW) has moved with respect to this universal frame, and separately the Earth with respect to the center of the galaxy. It requires solving for the growth of what is called the 'microwave dipole velocity' over time. Some of this velocity develops from nearby objects like the MW and the nearby groups of galaxies. So, we need to know their masses and relative motions a bit better than we presently do. But, by far, most of the velocity is from objects quite far away, and this allows us to use the simpler and well-understood 'perturbation' techniques. The biggest uncertainty for the Earth's position might be the interactions with local stars back in time and what effects they had on perturbing the Sun's orbital motion about the center of the galaxy. Again we need to know the masses of local stars and their relative velocities better to calculate the Earth's trajectory back in time. As it happens, the space telescope GAIA, which is about to make its second release of data, will greatly improve our knowledge of the motions and, indirectly, the masses of all these stars and nearby galaxies. So, if you have got that time machine working, we are nearly ready to go!
{ "domain": "astronomy.stackexchange", "id": 2860, "tags": "space-time" }
What is the difference between i.i.d noise and white noise?
Question: I want to know the difference between independent and identically distributed (i.i.d.) noise and white noise. To my limited knowledge, i.i.d. means that there is no time dependency between samples, while white noise means that there is some relationship involving time dependency. Actually, I'm not sure whether this is correct or not. Also I want to know what an i.i.d. white noise is. Can you tell me where we find i.i.d. noise in nature? Answer: Independence is 'stronger' than whiteness. I believe that independence between the random variables implies whiteness, but whiteness does not imply independence. Whiteness means that the random variables are uncorrelated but not necessarily independent. The use of i.i.d. noise is seen very often when formulating probabilistic models because it makes inference much easier. For instance, if two random variables $X_1$ and $X_2$ are i.i.d. it means that the joint pdf $p(X_1,X_2)$ factors into the product of the individual pdfs $p(X_1)p(X_2)$. If the two random variables are merely uncorrelated, this factorization is not valid. In ML estimation typically the log of the product is considered, $\log(p(X_1)p(X_2)) = \sum \log p(X)$, because then differentiation with respect to the parameter of interest is much more straightforward. I'm not sure where to find i.i.d. noise in the real world, but I believe that the assumption of i.i.d. observation noise is made more out of convenience than because it is realistic.
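The "uncorrelated but not independent" distinction can be demonstrated with a classic construction (my own illustration, not from the answer): take a Gaussian sample $Z$ and flip its sign with an independent fair coin to get $W$. Then $W$ is uncorrelated with $Z$, yet completely dependent on it, since $|W| = |Z|$ always.

```python
import random

random.seed(0)
n = 100_000
z = [random.gauss(0, 1) for _ in range(n)]
s = [random.choice((-1.0, 1.0)) for _ in range(n)]  # signs independent of z
w = [si * zi for si, zi in zip(s, z)]  # uncorrelated with z, but NOT independent

def corr(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    sx = (sum((a - mx) ** 2 for a in x) / len(x)) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / len(y)) ** 0.5
    return cov / (sx * sy)

print(corr(z, w))                                     # ~0: "white" pair
print(corr([a * a for a in z], [b * b for b in w]))   # ~1: w**2 == z**2, dependent
```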
{ "domain": "dsp.stackexchange", "id": 2797, "tags": "noise" }
how to get states (position and velocity) from the simple box robot?
Question: Hi, I have a simple box robot model which is controlled by keyboard teleop in the Gazebo environment. The next step I want to take is adding a simple feedback controller. The first thing I need to do is to get the robot model states (position and velocity of the center of gravity of the simple box robot model). And the next step, I think, is that the keyboard teleop should subscribe to the states. Are these procedures right? If so, could you let me know how to get the robot model states? Should I put a robot state publisher into the robot URDF? If there is an appropriate link or tutorial, please let me know. And if there is another way for me to do feedback control for the simple box robot, please let me know. Originally posted by maruchi on ROS Answers with karma: 157 on 2011-12-09 Post score: 1 Original comments Comment by DimitriProsser on 2011-12-09: I've edited my answer to include this information. Comment by maruchi on 2011-12-09: Dimitri, the current goal is tracking a given trajectory. I have one more question about subscribing in the teleop. What should I do to subscribe to the states (position and velocity) in the teleop cpp file, which currently has just a publisher? I have just limited knowledge of C++ but keep learning. Comment by DimitriProsser on 2011-12-09: What is the goal of the feedback? Are you trying to give the robot a position and have it move there? Or are you trying to obtain a certain speed? Answer: If you want to get the position of the robot, the easiest way to do that is with the position_3d controller.
Add the following to your urdf file: <gazebo> <controller:gazebo_ros_p3d name="p3d_base_controller" plugin="libgazebo_ros_p3d.so"> <alwaysOn>true</alwaysOn> <updateRate>100.0</updateRate> <bodyName>base_link</bodyName> <topicName>base_pose_ground_truth</topicName> <gaussianNoise>0</gaussianNoise> <frameName>map</frameName> <xyzOffsets>0 0 0</xyzOffsets> <rpyOffsets>0 0 0</rpyOffsets> <interface:position name="p3d_base_position"/> </controller:gazebo_ros_p3d> </gazebo> Change "base_link" to the link you'd like to know the position of, and change "base_pose_ground_truth" to the topic that you wish to publish to. Then, if you subscribe to that topic in your teleop node, you can see the link's absolute position relative to the Gazebo world (0,0,0) coordinate. This message is nav_msgs/Odometry. It will give you position and velocity. To subscribe to this topic, you can use something like the following in teleop.cpp: ros::Subscriber base_pose_sub_ = node_.subscribe<nav_msgs::Odometry> ("base_pose_ground_truth", 100, &Teleop::GroundTruthCallback, this); You can find more on subscribers [here](http://www.ros.org/wiki/ROS/Tutorials/WritingPublisherSubscriber(c%2B%2B). You can find a description of the nav_msgs::Odometry message here. Originally posted by DimitriProsser with karma: 11163 on 2011-12-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by maruchi on 2011-12-11: Thanks Dimitri. I still have a difficulty in inserting a subscriber in the keyboard.cpp of teleop due to my limited knowledge of C++. Could you give me some comment on the below code ? http://answers.ros.org/question/3274/how-to-add-a-subscriber-in-keyboard-teleop Comment by DimitriProsser on 2011-12-09: If you want to track the position and velocity of your robot, you can use either geometry_msgs/Pose or nav_msgs/Odometry as the easiest. The reason that Odometry is good is because this Gazebo controller is already written for you. 
You cannot change the message type without re-writing the controller. Comment by maruchi on 2011-12-09: Dimitri, the current robot model is a simple sliding box, and forces are applied to the model through teleop. What I am trying to do is to subscribe to the robot states and to make a simple trajectory-following controller in the keyboard.cpp of the teleop. Do I still need to use nav_msgs/Odometry? Comment by maruchi on 2011-12-09: DimitriProsser, I really appreciate your help. I will get back as soon as I implement your directions. Comment by Neil Traft on 2015-08-16: I don't know if this is old syntax or what, but to get this to work I had to use this syntax: <plugin name="p3d_base_controller" filename="libgazebo_ros_p3d.so"> instead of <controller:gazebo_ros_p3d... Comment by ktiwari9 on 2015-08-25: Hi guys, I have a similar problem. I am also using a Husky A200 and in the urdf I added the controller like Dimitri said above. Now how should my GroundTruthCallback() look like so that I can read the current robot position and vel information from gazebo ? Please help. Comment by ktiwari9 on 2015-08-25: This syntax is deprecated and no longer supported in Gazebo 1.9.x and ROS Hydro. Does anybody know of a more recent version of doing this ? Comment by Neil Traft on 2015-08-25: The message is of type nav_msgs/Odometry so it should look like any ROS callback, except replace the message pointer type with nav_msgs::Odometry::ConstPtr. The URDF syntax worked for me in Hydro/Gazebo 1.9, but you might need to also comment out this line that begins: <interface:position...
{ "domain": "robotics.stackexchange", "id": 7588, "tags": "control, ros, subscribe, teleop" }
How is this Pytorch expression equivalent to the KL divergence?
Question: I found the following PyTorch code (from this link) -0.5 * torch.sum(1 + sigma - mu.pow(2) - sigma.exp()) where mu is the mean parameter that comes out of the model and sigma is the sigma parameter out of the encoder. This expression is apparently equivalent to the KL divergence. But I don't see how this calculates the KL divergence for the latent. Answer: The code is correct. Since OP asked for a proof, one follows. The usage in the code is straightforward if you observe that the authors are using the symbols unconventionally: sigma is the natural logarithm of the variance, where usually a normal distribution is characterized in terms of a mean $\mu$ and variance. Some of the functions in OP's link even have arguments named log_var.$^*$ If you're not sure how to derive the standard expression for KL Divergence in this case, you can start from the definition of KL divergence and crank through the arithmetic. In this case, $p$ is the normal distribution given by the encoder and $q$ is the standard normal distribution. $$\begin{align} D_\text{KL}(P \| Q) &= \int_{-\infty}^{\infty} p(x) \log\left(\frac{p(x)}{q(x)}\right) dx \\ &= \int_{-\infty}^{\infty} p(x) \log(p(x)) dx - \int_{-\infty}^{\infty} p(x) \log(q(x)) dx \end{align}$$ The first integral is recognizable as almost definition of entropy of a Gaussian (up to a change of sign). $$ \int_{-\infty}^{\infty} p(x) \log(p(x)) dx = -\frac{1}{2}\left(1 + \log(2\pi\sigma_1^2) \right) $$ The second one is more involved. 
$$ \begin{align} -\int_{-\infty}^{\infty} p(x) \log(q(x)) dx &= \frac{1}{2}\log(2\pi\sigma_2^2) - \int p(x) \left(-\frac{\left(x - \mu_2\right)^2}{2 \sigma_2^2}\right)dx \\ &= \frac{1}{2}\log(2\pi\sigma_2^2) + \frac{\mathbb{E}_{x\sim p}[x^2] - 2 \mathbb{E}_{x\sim p}[x]\mu_2 +\mu_2^2} {2\sigma_2^2} \\ &= \frac{1}{2}\log(2\pi\sigma_2^2) + \frac{\sigma_1^2 + \mu_1^2-2\mu_1\mu_2+\mu_2^2}{2\sigma_2^2} \\ &= \frac{1}{2}\log(2\pi\sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} \end{align} $$ The key is recognizing this gives us a sum of several integrals, and each can apply the law of the unconscious statistician. Then we use the fact that $\text{Var}(x)=\mathbb{E}[x^2]-\mathbb{E}[x]^2$. The rest is just rearranging. Putting it all together: $$ \begin{align} D_\text{KL}(P \| Q) &= -\frac{1}{2}\left(1 + \log(2\pi\sigma_1^2) \right) + \frac{1}{2}\log(2\pi\sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} \\ &= \log (\sigma_2) - \log(\sigma_1) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} - \frac{1}{2} \end{align} $$ In this special case, we know that $q$ is a standard normal, so $$ \begin{align} D_\text{KL}(P \| Q) &= -\log \sigma_1 + \frac{1}{2}\left(\sigma_1^2 + \mu_1^2 - 1 \right) \\ &= - \frac{1}{2}\left(1 + 2\log \sigma_1- \mu_1^2 -\sigma_1^2 \right) \end{align} $$ In the case that we have a $k$-variate normal with diagonal covariance for $p$, and a multivariate normal with covariance $I$, this is the sum of $k$ univariate normal distributions because in this case the distributions are independent. The code is a correct implementation of this expression because $\log(\sigma_1^2) = 2 \log(\sigma_1)$ and in the code, sigma is the logarithm of the variance. $^*$The reason that it's convenient to work on the scale of the log-variance is that the log-variance can be any real number, but the variance is constrained to be non-negative by definition. 
It's easier to perform optimization on the unconstrained scale than it is to work on the constrained scale in $\sigma^2$. Also, we want to avoid "round-tripping," where we compute $\exp(y)$ in one step and then $\log(\exp(y))$ in a later step, because this incurs a loss of precision. In any case, autograd takes care of all of the messy details with adjustments to gradients resulting from moving from one scale to another.
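The code's expression and the closed form derived above can be checked against each other numerically; a sketch in plain Python (torch's elementwise tensor sum is replaced by an explicit loop, and the mu/log_var values are arbitrary):

```python
import math

def kl_code_style(mu, log_var):
    """The expression from the post: -0.5 * sum(1 + sigma - mu^2 - exp(sigma)),
    where 'sigma' is actually log(variance)."""
    return -0.5 * sum(1 + lv - m**2 - math.exp(lv) for m, lv in zip(mu, log_var))

def kl_closed_form(mu, log_var):
    """Per-dimension KL(N(mu, sigma^2) || N(0, 1)) from the derivation above,
    summed over dimensions: -log(sigma) + (sigma^2 + mu^2 - 1) / 2."""
    total = 0.0
    for m, lv in zip(mu, log_var):
        sigma = math.exp(0.5 * lv)
        total += -math.log(sigma) + 0.5 * (sigma**2 + m**2 - 1)
    return total

mu = [0.3, -1.2, 0.0]
log_var = [0.1, -0.5, 0.4]
print(kl_code_style(mu, log_var), kl_closed_form(mu, log_var))  # identical
```

Both vanish when mu = 0 and log_var = 0, i.e. when the encoder's distribution already equals the standard normal prior.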
{ "domain": "ai.stackexchange", "id": 2644, "tags": "pytorch, proofs, implementation, variational-autoencoder, kl-divergence" }
how to map point cloud data coming from serial port
Question: I have a serial data frame; it contains some points defined as (X,Y,Z) and other information. I have received this frame in ROS and extracted the data related to the point cloud. Now how can I visualize these points in RViz? And if I want to implement SLAM later, what is the data type involved to encapsulate these points? Originally posted by MjdKassem on ROS Answers with karma: 13 on 2020-09-14 Post score: 0 Answer: Your best bet is to populate a sensor_msgs/PointCloud2 and publish it. A topic with this message type can natively be visualized in Rviz. If you're looking for an example of populating this type of message, check out any of the lidar drivers, such as the Velodyne driver for Melodic. Originally posted by Josh Whitley with karma: 1766 on 2020-09-15 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by MjdKassem on 2020-09-19: thank you, what is the purpose of converting from sensor_msgs/PointCloud2 to PCL datatypes Comment by Josh Whitley on 2020-09-19: Purely for convenience of editing the data. PCL is not a requirement (see the ros2 version for an example without PCL).
{ "domain": "robotics.stackexchange", "id": 35538, "tags": "ros, ros-melodic" }
Overlapping rectangles
Question: I received the following question in a technical interview today (for a devops/SRE position): Write a function which returns true if the two rectangles passed to it as arguments would overlap if drawn on a Cartesian plane. The rectangles are guaranteed to be aligned with the axes, not arbitrarily rotated. I pretty much blew the whole thought process for how to approach the question: I wasn't thinking of cases where, for example, one rectangle might be enclosed entirely within the other, or of cases where a short wide rectangle might cross wholly through a taller, narrower one, forming some sort of cross --- so I was off on a non-productive tangent thinking about whether any corner of either rectangle was within the bounding box of the other. Naturally, as soon as I got home, having let the problem settle more in my mind, I was able to write something which seems reasonably elegant and seems to work (for the test cases I've tried so far). This graphic (which took far longer to create than the code, and I don't even know how to make Inkscape show the gridlines in the saved image as it's showing in my working canvas) shows the simplest obvious cases: red/aqua overlapping by a corner, blue completely enclosing fuschia, and green/yellow overlapping but without any corner enclosed within the other. My test code uses red/blue, blue/green and similar combinations as the non-overlapping test cases, including those which would overlap in only the horizontal or vertical dimension but are clear in the other dimension. My thought process, when I got home and sat down with a keyboard, was as follows: We only care about edges, ultimately. So my Rect() class stores just the X scalar of the left and right edges, and the Y scalar of the top and bottom edges. For rectangles to overlap there must be some overlap in both the horizontal and vertical directions.
So it should be sufficient to just test whether either the left or right edge of the first rectangle argument is to the right or left (respectively) of the other rectangle's opposite edge ... and likewise for top and bottom. Here's the code for creating Points and Rectangles. Rectangles are only instantiated using corner points; points are actually trivial and could be named tuples if I were so inclined. #!/usr/bin/env python class Point(object): def __init__(self, x, y): self.x = x self.y = y class Rect(object): def __init__(self, p1, p2): '''Store the top, bottom, left and right values for points p1 and p2 are the (corners) in either order ''' self.left = min(p1.x, p2.x) self.right = max(p1.x, p2.x) self.bottom = min(p1.y, p2.y) self.top = max(p1.y, p2.y) Note that I'm setting left to the minimum of the two x co-ordinates, right to the max, and so on. I'm not doing any error checking here for zero-area rectangles; they would be perfectly valid for the class and probably, arguably, valid for the same collision "overlap" function. Points and lines would simply be infinitesimal "rectangles" for my code. (However, no such degenerate cases are in my test suite). Here's the overlap function: #!/usr/bin/env python def overlap(r1,r2): '''Overlapping rectangles overlap both horizontally & vertically ''' hoverlaps = True voverlaps = True if (r1.left > r2.right) or (r1.right < r2.left): hoverlaps = False if (r1.top < r2.bottom) or (r1.bottom > r2.top): voverlaps = False return hoverlaps and voverlaps This seems to work (for all my test cases) but it also looks wrong to me. My initial attempt was to start with hoverlaps and voverlaps as False, and selectively set them to True using conditions similar to those shown (but with the inequality operators reversed). So, what's a better way to render this code?
Oh, yeah: here's the test suite at the end of that file: #!/usr/bin/env python # if __name__ == '__main__': p1 = Point(1,1) p2 = Point(3,3) r1 = Rect(p1,p2) p3 = Point(2,2) p4 = Point(4,4) r2 = Rect(p3,p4) print "r1 (red),r2 (aqua): Overlap in either direction:" print overlap(r1,r2) print overlap(r2,r1) p5 = Point(3,6) # overlaps horizontally but not vertically p6 = Point(12,11) r3 = Rect(p5,p6) print "r1 (red),r3 (blue): Should not overlap, either way:" print overlap(r1,r3) print overlap(r3,r1) print "r2 (aqua),r3 (blue: Same as that" print overlap(r2,r3) print overlap(r3,r2) p7 = Point(7,7) p8 = Point(11,10) r4 = Rect(p7,p8) # completely inside r3 print "r4 (fuschia) is totally enclosed in r3 (blue)" print overlap(r3,r4) print overlap(r4,r3) print "r4 (fuschia) is nowhere near r1 (red) nor r2 (aqua)" print overlap(r1,r4) p09 = Point(13,11) p10 = Point(19,13) r5 = Rect(p09,p10) p11 = Point(13,9) p12 = Point(15,14) r6 = Rect(p11,p12) print "r5 (green) and r6 (yellow) cross without corner overlap" print overlap(r5,r6) print overlap(r6,r5) Answer: You have common code, which moreover has applications beyond this one, so should you not pull it out into a function? Then you can reduce overlap to def overlap(r1, r2): '''Overlapping rectangles overlap both horizontally & vertically ''' return range_overlap(r1.left, r1.right, r2.left, r2.right) and range_overlap(r1.bottom, r1.top, r2.bottom, r2.top) Now, the key condition encapsulated by range_overlap is that neither range is completely greater than the other. A direct refactor of the way you've expressed this is def range_overlap(a_min, a_max, b_min, b_max): '''Neither range is completely greater than the other ''' overlapping = True if (a_min > b_max) or (a_max < b_min): overlapping = False return overlapping For such a simple condition I would prefer to use not rather than if-else assignment. 
I would also reorder the second condition to exhibit the symmetry more clearly: def range_overlap(a_min, a_max, b_min, b_max): '''Neither range is completely greater than the other ''' return not ((a_min > b_max) or (b_min > a_max)) Of course, de Morgan's laws allow rewriting as def range_overlap(a_min, a_max, b_min, b_max): '''Neither range is completely greater than the other ''' return (a_min <= b_max) and (b_min <= a_max) I think that the last of these is the most transparent, but that's an issue of aesthetics and you may disagree. Note that I've assumed throughout, as you do, that the rectangles are closed (i.e. that they contain their edges). To make them open, change > to >= and <= to <.
{ "domain": "codereview.stackexchange", "id": 4572, "tags": "python, interview-questions, collision, computational-geometry" }
Why is chemistry unpredictable?
Question: Disclaimer: I am not a chemist by any means, and I only have knowledge limited to what I learned in my university's Chemistry III course. Basic understanding of everything up to valence electron orbitals. Why is there no set of rules to follow which can predict the product of chemical reactions? To me, it seems that every other STEM field has models to predict results (physics, thermodynamics, fluid mechanics, probability, etc.) but chemistry is the outlier. Refer to this previous question: How can I predict if a reaction will occur between any two (or more) substances? The answers given state that empirical tests are the best way we've got to predict reactions, because we can discern patterns or "families" of reactions to predict outcomes. Are we only limited to guessing at "family" reactions? In other words, why am I limited to knowing my reactants and products, then figuring out the process? Can I know the reactants, hypothesize the process, and predict the product? If the answer is "It's complicated", I would enjoy a push in the right direction - like whether valence orbitals actually do help us predict, or any laws of energy conservation, etc.; please give me something which I can go research. Answer: First of all, I'd ask: what do you admit as "chemistry"? You mentioned thermodynamics as being a field where you have "models to predict results". But thermodynamics is extremely important in chemistry; it wouldn't be right if we classified it as being solely physics. There is a large amount of chemistry that can be predicted very well from first principles, especially using quantum mechanics. As of the time of writing, I work in spectroscopy, which is a field that is pretty well described by QM. Although there is a certain degree of overlap with physics, we again can't dismiss these as not being chemistry. But, I guess, you are probably asking about chemical reactivity. 
There are several different answers to this depending on what angle you want to approach it from. All of these rely on the fact that the fundamental theory that underlies the behaviour of atoms and molecules is quantum mechanics, i.e. the Schrödinger equation.* Addendum: please also look at the other answers, as each of them bring up different excellent points and perspectives. (1) It's too difficult to do QM predictions on a large scale Now, the Schrödinger equation cannot be solved on real-life scales.† Recall that Avogadro's number, which relates molecular scales to real-life scales, is ~$10^{23}$. If you have a beaker full of molecules, it's quite literally impossible to quantum mechanically simulate all of them, as well as all the possible things that they could do. "Large"-ish systems (still nowhere near real-life scales, mind you — let's say ~$10^3$ to $10^5$) can be simulated using approximate laws, such as classical mechanics. But then you lose out on the quantum mechanical behaviour. So, fundamentally, it is not possible to predict chemistry from first principles simply because of the scale that would be needed. (2) Small-scale QM predictions are not accurate enough to be trusted on their own That is not entirely true: we are getting better and better at simulating things, and so often there's a reasonable chance that if you simulate a tiny bunch of molecules, their behaviour accurately matches real-life molecules. However, we are not at the stage where people would take this for granted. Therefore, the ultimate test of whether a prediction is correct or wrong is to do the experiment in the lab. If the computation matches experiment, great: if not, then the computation is wrong. (Obviously, in this hypothetical and idealised discussion, we exclude unimportant considerations such as "the experimentalist messed up the reaction"). 
In a way, that means that you "can't predict chemistry": even if you could, it "doesn't count", because you'd have to then verify it by doing it in the lab. (3) Whatever predictions we can make are too specific There's another problem that is a bit more philosophical, but perhaps the most important. Let's say that we design a superquantum computer which allowed you to QM-simulate a gigantic bunch of molecules to predict how they would react. This simulation would give you an equally gigantic bunch of numbers: positions, velocities, orbital energies, etc. How would you distil all of this into a "principle" that is intuitive to a human reader, but at the same time doesn't compromise on any of the theoretical purity? In fact, this is already pretty tough or even impossible for the things that we can simulate. There are plenty of papers out there that do QM calculations on very specific reactions, and they can tell you that so-and-so reacts with so-and-so because of this transition state and that orbital. But these are highly specialised analyses: they don't necessarily work for any of the billions of different molecules that may exist. Now, the best you can do is to find a bunch of trends that work for a bunch of related molecules. For example, you could study a bunch of ketones and a bunch of Grignards, and you might realise a pattern in that they are pretty likely to form alcohols. You could even come up with an explanation in terms of the frontier orbitals: the C=O π* and the Grignard C–Mg σ. But what we gain in simplicity, we lose in generality. That means that your heuristic cannot cover all of chemistry. What are we left with? A bunch of assorted rules for different use cases. And that's exactly what chemistry is. It just so happens that many of these things were discovered empirically before we could simulate them. 
As we find new theoretical tools, and as we expand our use of the tools we have, we continually find better and more solid explanations for these empirical observations. Conclusion Let me be clear: it is not true that chemistry is solely based on empirical data. There are plenty of well-founded theories (usually rooted in QM) that are capable of explaining a wide range of chemical reactivity: the Woodward–Hoffmann rules, for example. In fact, pretty much everything that you would learn in a chemistry degree can already be explained by some sort of theory, and indeed you would be taught these in a degree. But, there is no (human-understandable) master principle in the same way that Newton's laws exist for classical mechanics, or Maxwell's equations for electromagnetism. The master principle is the Schrödinger equation, and in theory, all chemical reactivity stems from it. But due to the various issues discussed above, it cannot be used in any realistic sense to "predict" all of chemistry. * Technically, this should be its relativistic cousins, such as the Dirac equation. But, let's keep it simple for now. † In theory it cannot be solved for anything harder than a hydrogen atom, but in the last few decades or so we have made a lot of progress in finding approximate solutions to it, and that is what "solving" it refers to in this text.
{ "domain": "chemistry.stackexchange", "id": 14741, "tags": "physical-chemistry, experimental-chemistry, theoretical-chemistry, process-chemistry" }
What is the objective that is optimized with Random Search?
Question: I have recently learned about Random Search (or sklearn.model_selection.RandomizedSearchCV in Python) and was thinking about the theory behind the optimization process. In particular my question is, given that one performs Random Search on a certain algorithm (let's say random forest), what are the best hyperparameters based on? More specifically, in what sense are they the "best" hyperparameters for the model? Do they maximize the accuracy of the model? If not, what is the (performance) criterion that is optimized? Or is it entropy/gini? Answer: According to the documentation, the function RandomizedSearchCV accepts a scoring string that can take any value from this table, and you can even implement your own custom scorer depending on what your goal is. The default parameter is None, in which case it uses the model's score function, which is defined to: Return the mean accuracy on the given test data and labels.
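As a sketch of what the optimization loop amounts to (this is an illustration of the idea, not sklearn's implementation; the toy scoring surface and parameter names are made up): sample candidate hyperparameters at random, score each one, keep the best.

```python
import random

def randomized_search(score_fn, param_distributions, n_iter=20, seed=0):
    """Sample hyperparameter candidates at random and keep the one with
    the highest score -- the same loop RandomizedSearchCV runs, with the
    cross-validated scoring replaced by a caller-supplied score_fn."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = {name: rng.choice(values)
                  for name, values in param_distributions.items()}
        score = score_fn(params)          # e.g. mean CV accuracy
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy "scoring" surface: pretend the score peaks at depth=8, trees=100.
def toy_score(params):
    return -abs(params["max_depth"] - 8) - abs(params["n_estimators"] - 100) / 50

space = {"max_depth": list(range(2, 16)),
         "n_estimators": [10, 50, 100, 200]}
best, score = randomized_search(toy_score, space, n_iter=500)
```

Whatever `score_fn` returns is the quantity being maximized, which is exactly why the choice of the `scoring` parameter (or the estimator's default `score`) determines what "best" means.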
{ "domain": "datascience.stackexchange", "id": 7407, "tags": "optimization, hyperparameter-tuning, randomized-algorithms" }
How to synchronize a 2d cellular automaton in $\Theta(\sqrt{n} \log n)$ steps
Question: Let's assume we are given a two-dimensional cellular automaton with an initial configuration where all cells are in a quiescent state, except for one square of cells. Let $n$ be the number of cells in that square. We want to synchronize all cells in the square, so we basically want to solve the firing squad synchronization problem. Synchronizing all cells in $\Theta(\sqrt{n})$ is rather easy: all cells at the left and right border of the square serve as generals, and we synchronize each row separately with any standard 1-dimensional FSSP algorithm. A row of $k$ cells with generals at both ends can be synchronized in $\Theta(k)$ steps (in our case: $k=\sqrt{n}$). What I want to do is synchronize the whole square in $\Theta(\sqrt{n} \log n)$, so I need to slow down the synchronization process by a factor of $\log n$. But I have no idea how to do so. Answer: A naive solution could be that each general first launches a binary counter in its row, counts the current time until $k\log(k)$ steps, and then runs the FSSP. To do this you just need to detect the value of $k$ and compute $k\log(k)$ in less than $k\log(k)$ steps: sending a signal back and forth toward the other end while incrementing a binary counter gives you $k$ written in binary in roughly $2k+\log(k)$ steps, so you also know $\log(k)$ (the length of the counter). Within the $\Theta(k\log(k))$ steps left you have much more time than necessary to compute the product $k\times \log(k)$. And you're done. I'm pretty sure that (1) there are more elegant solutions, and (2) there is a general acceleration/slowdown theorem that could also be used here.
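The step budget of the answer's schedule can be sketched arithmetically. The constants here are my assumptions (e.g. a 3k-step FSSP), and the sanity check only makes sense for roughly k ≥ 8, where the measuring phase finishes before the k·log₂(k) deadline:

```python
import math

def slowdown_schedule(k):
    """Step budget for one row of k cells, following the answer's plan
    with idealized constants: measure k with a bouncing signal, idle
    until step k*log2(k), then run a Theta(k) FSSP."""
    measure = 2 * k + math.ceil(math.log2(k))   # k written in binary
    wait_until = k * math.ceil(math.log2(k))    # counter deadline
    fssp = 3 * k                                # e.g. a 3k-step FSSP solution
    assert measure <= wait_until, "measuring must finish before the deadline"
    return wait_until + fssp

# For an n-cell square, k = sqrt(n), so the total is Theta(sqrt(n) log n):
n = 4096
k = int(math.isqrt(n))
total = slowdown_schedule(k)
```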
{ "domain": "cs.stackexchange", "id": 3043, "tags": "synchronization, cellular-automata, algorithm-design" }
Is it possible to hear the past?
Question: From this Stack Exchange Physics post, I am certain that it is possible to view the past. But then this interesting question came to me: is it possible to hear the past? OK, you might say, "Well, we are hearing the past, aren't we?" and go into immense detail about the speed of sound and so on. But what if I wanted to hear the sound of a place extremely close by, from days, weeks, months or even years back? Would that be possible? Answer: Luboš' comment really answers your question, but to more specifically address your comment: The big difference between sound and light is that sound requires a medium to travel in while light does not. In fact light travels best in a vacuum, where by definition there is no medium. The reason we can see back 13.5 billion years is because the light has been travelling through an almost perfect vacuum, so there's nothing to impede it. The problem with anything travelling through a medium is that no medium is perfectly elastic and there are always energy losses due to viscous damping. This is why when you strike a bell it may ring for a while but won't ring for anything like 13.5 billion years. You could ring the bell and wait for the sound wave to travel right round the Earth. You'd hear the sound about 120,000 seconds later, so this would allow you to hear a day and a bit into the past. However, unless it was an extraordinarily loud bell the sound would have decayed to below thermal noise in that time, so you wouldn't be able to hear it. This isn't really an answer to your question, but I mention it because it's cool: cosmic events like supernovae form shock waves when they hit areas of concentrated interstellar gas, and this is arguably a form of sound. In that case you can hear sound from many years ago, though you'd need an awfully big ear!
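The 120,000-second figure is easy to reproduce with rough constants (these values are approximate, not from the answer):

```python
# Rough numbers: equatorial circumference of the Earth and the speed
# of sound in dry air at ~20 degrees C.
earth_circumference_m = 40_075_000
speed_of_sound_m_s = 343

t_seconds = earth_circumference_m / speed_of_sound_m_s   # ~117,000 s
t_days = t_seconds / 86_400                              # ~1.35 days
```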
{ "domain": "physics.stackexchange", "id": 29317, "tags": "time, acoustics" }
Direction of Electromotive Force
Question: Can we determine the direction of emf from the direction of current? I know current flows from higher potential to lower potential, and through this we can guess which terminal is at higher potential and which is at lower potential, but the question remains the same: what is the direction of emf? A simple explanation would be appreciated. Answer: Definition of EMF (according to Wikipedia): Electromotive force, also called emf, is the voltage developed by any source of electrical energy such as a battery or dynamo. It is generally defined as the electrical potential for a source in a circuit. A device that supplies electrical energy is called electromotive force or emf. Emfs convert chemical, mechanical, and other forms of energy into electrical energy. The product of such a device is also known as emf. So, as you can see from the definition, EMF is a scalar quantity and has no direction. However, EMF is the cause of potential difference. And as you rightly said: "Current flows from higher potential to lower potential".
{ "domain": "physics.stackexchange", "id": 35341, "tags": "electricity" }
What is the amount of programs for which we can solve the halting problem?
Question: The halting problem is undecidable, of course. This implies that there is at least one program for which we cannot decide whether it halts or not, because theoretically, if all we know is that the halting problem is undecidable, it could still be that there is a program that can decide for every program except itself whether it halts or not. So I am wondering, for what percentage of programs can we solve the halting problem? That is, if we assign a Gödel number G to every program, what is the value of the limit, as G goes to infinity, of ${s}/n$, where s = the number of programs with Gödel number 1 to G for which we can decide whether it halts or not, and n = the number of programs with Gödel number 1 to G for which we cannot decide whether it halts or not. I am not looking for a specific answer like 75%. I am looking for an answer like: less than 0.01%, or more than 9.99%, or anything really that gives a hint at an answer. Answer: The halting problem is undecidable of course. This implies that there is at least one program for which we cannot decide whether it halts or not. The undecidability of the halting problem actually doesn't imply this. It means that there is no single algorithm that can tell whether any program halts. It does not mean that there is any single program for which no algorithm can correctly say whether or not it halts. Note that a program either halts or doesn't on each particular input. The general halting problem is a function from (program, input) pairs to booleans. You're talking about the halting problem in terms of "this program halts, that one doesn't" without talking about inputs. So either you're thinking of one of the formalisms of the halting problem where you just assume you run all the programs with an empty input (or zero if the inputs are numbers, or whatever; it's all basically the same), or you're thinking of "for a given program, does it halt for all inputs" style versions of the halting problem.
It doesn't matter though, for this discussion. Whatever definition of "program halts" you're using, a program either halts or it doesn't. HALT(P) is True or False for all P. You claim that there must therefore be some program P for which no other program correctly computes HALT(P), otherwise we could union up all the pointwise decision procedures for each individual P into one that works for all P. In fact though, I can concretely give you a set of decision procedures such that for all P, one of them gives the correct answer to HALT(P): regardless of input, return True regardless of input, return False In the "just delegate to a subprogram that works for this particular input" strategy, you're actually doing all the real work in the part that decides how to delegate. If you could actually do such a delegation (such as by enumerating the pointwise decision procedures for the halting problem alongside the programs that they work on), you would already have decided the halting problem by the time the delegation part of the program is finished; the answer is just encoded in another program that you have to run, instead of a more direct encoding of boolean values. To try to get an intuition for why this sort of delegation can't work, think about how you could actually attempt to implement it in (say) a Turing Machine. If you have a unique pointwise solver for each program, then you simply can't hardwire them all into the Turing Machine, because there are an infinite number of them and you only have a finite number of states to implement them. If you imagine that there are a finite number of sub-solvers to dispatch to (which is definitely possible, since "return True" and "return False" are a sufficient set), then you're assuming that your Turing Machine can analyse each input program to figure out which sub-solver would work on it. But how would you implement that analysis? It essentially is an obfuscated version of the halting problem. 
For any set of partial halting-problem solvers, you should be able to take any standard "halting problem is undecidable" proof and replace the part where the assumed solver outputs true or false with having it output an indication of which partial solver you should run; it'll be easy to use that to construct the same sort of contradictions. (i.e. "Which partial solver does the delegator indicate would correctly decide whether the delegator halts?") Your argument hasn't shown that there must be a P for which there is no program that computes HALT(P) correctly. It actually shows that there is no algorithm for carrying out the delegation strategy you outlined. Neither can you "enumerate a list of such decision procedures for each individual program" as mentioned in one of the comments. Well, technically you can, since my list of 2 programs above was such an enumeration, but you can't enumerate a list of decision procedures paired with the programs that they work on.
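The answer's two-element set of "pointwise solvers", and where the delegation strategy hides the real work, can be sketched as follows (hypothetical names; the programs are just placeholder strings):

```python
# The two trivial "pointwise solvers" from the answer: for any fixed
# program P, exactly one of them returns the correct value of HALT(P).
always_true = lambda program: True
always_false = lambda program: False
solvers = [always_true, always_false]

def delegating_decider(program, which_solver):
    """The delegation strategy: pick a sub-solver by index, then run it.
    All the difficulty is hidden inside which_solver -- writing it
    correctly would already amount to deciding the halting problem."""
    return solvers[which_solver(program)](program)

# Whatever the true answer h is for some program, one solver gets it right:
for h in (True, False):
    assert any(s("some program") == h for s in solvers)
```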
{ "domain": "cs.stackexchange", "id": 18821, "tags": "computability, halting-problem" }
What is a weak value really?
Question: There have been a lot of recent experiments performing weak measurements. Some of the conclusions seem to be quite surprising (e.g. this paper) and there is still debate if the weak measurement is actually a "measurement" or not. My question is what the weak value really represents. I know what it is mathematically (see e.g. wikipedia), but what is it physically? Is it an actual property of the system or an inferred quantity? And why is it useful to measure it (as in what does it tell us up the system)? Answer: Take a large ensemble of particles and preselect state $\left|\psi\right\rangle$ Then take a large sub-ensemble of this ensemble and postselect state $\left|\phi\right\rangle$. In other words, set up the experiment, with the ensemble starting in state $\left|\psi\right\rangle$. Make a weak measurement of the observable $\hat{A}$, then make a strong measurement (at the end of the experiment) and look at the data that ended in state $\left|\phi\right\rangle$. As mentioned in the comments of the question, this does potentially lead to issues about whether or not this is actually a weak measurement as it involves a strong measurement of the measurement device. The weak value, $A_W$, gives the average (mean) value of $\hat{A}$ that the particles in this sub-ensemble appear to have when measured via a weak interaction It is not relevant or useful when looking at a single particle, although measuring an ensemble of particles is equivalent to repeating an experiment with a single particle, many times. If, instead of a weak measurement, a strong measurement was performed, then the wavefunction collapses and we no longer know which particles would have ended in state $\left|\phi\right\rangle$ given that they started in state $\left|\psi\right\rangle$ as they have effectively now started in a different state. 
Postselecting $\left|\phi\right\rangle$ such that $\left\langle\phi|\psi\right\rangle$ is small allows for amplification of measurements that were previously too small to observe (e.g. Spin Hall Effect of Light - http://science.sciencemag.org/content/348/6242/1448) They also bring a new way to look at counterfactual statements - a set of statements that classically, cannot be simultaneously true as they appear to contradict each other. This is usually solved in quantum physics by saying that when making a (strong) measurement of such a statement, the wavefunction collapses and the other statements cannot be measured so do not have to be simultaneously true. As weak measurements do not collapse the wavefunction, they allow for these statements to be tested (e.g. Hardy's Paradox - http://www.tau.ac.il/~yakir/yahp/yh12)
{ "domain": "physics.stackexchange", "id": 30023, "tags": "quantum-mechanics, measurement-problem" }
Is $B=μ_{0} nI$ for the magnetic field magnitude at the center of the solenoid axis only?
Question: Is $B=μ_{0}nI$ for the magnetic field magnitude at the center of the solenoid axis only, or at any point inside the solenoid as long as it is far enough away from its edges as it is suggested by the figure down below? Answer: The conclusion that the field is uniform inside the solenoid is based on the assumption that all the field lines are parallel to the axis of the solenoid, both inside and outside, so the integral of $\mathbf B\cdot d\mathbf s$ is zero along both of the vertical sides of the loop. This is true if, but only if, the solenoid is infinitely long so the field will be uniform only for an infinitely long solenoid. However in practice the field is very close to uniform as long as the solenoid is long compared to its radius.
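As a quick numerical illustration of $B=\mu_0 n I$ (the example values are made up, and per the answer the formula only holds deep inside a solenoid that is long compared to its radius):

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
n = 1000                   # turns per metre
I = 1.5                    # current in amperes

B = mu0 * n * I            # field deep inside a long solenoid, in tesla
```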
{ "domain": "physics.stackexchange", "id": 94691, "tags": "electromagnetism, magnetic-fields, electric-current" }
Extension method that writes to a stream
Question:

public static void streamReport(this Report report, Stream stream)
{
    using (var streamWriter = new StreamWriter(stream))
    {
        //some logic that calls streamWriter.Write()
        streamWriter.Flush();
    }
    //Should "return stream;" here ?
}

I am writing an extension method for an object which I would like to transform to a CSV that is generated and returned to a client (web). My questions are:

Should I return stream (change void to Stream)?
Should I use the out keyword before the Stream parameter? Is this a common way to let the caller know the parameter will be changed?
Should I change the method to generate a new stream (and not accept one) and trust the user to Dispose it?
Should I pass StreamWriter instead of Stream?

Answer: Your questions imply that you might not be quite aware of how streams or references work in C#. You pass in a reference to a Stream. You create a StreamWriter which writes to that Stream. This will automatically make the changes visible to anyone holding a reference to the same Stream. Therefore there is no need to try and return the Stream in any way from the method to make the changes visible to the caller - you just need to write to it. This is at least how I interpret your questions. However, there is a catch in your implementation: disposing the StreamWriter will automatically close the underlying Stream, which I find hugely annoying at times. Only since .NET 4.5 is there a constructor for StreamWriter which allows you to leave the Stream open. So right now your extension method will close the Stream, which I guess is not the intention. Your only option to avoid that is to use .NET 4.5 or not wrap the StreamWriter in a using statement.
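The constructor the answer refers to can be sketched like this (a sketch assuming .NET 4.5+; the UTF-8 encoding and buffer size are arbitrary example values, spelled out only because the `leaveOpen` overload requires them):

```csharp
using System.IO;
using System.Text;

public static class ReportExtensions
{
    // The four-argument StreamWriter constructor takes a leaveOpen flag,
    // so disposing the writer no longer closes the caller's stream.
    public static void StreamReport(this Report report, Stream stream)
    {
        using (var streamWriter = new StreamWriter(
            stream, new UTF8Encoding(false), bufferSize: 1024, leaveOpen: true))
        {
            // ... write the CSV rows here ...
            streamWriter.Flush();
        }
        // stream is still open here; the caller remains responsible
        // for disposing it.
    }
}
```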
{ "domain": "codereview.stackexchange", "id": 6171, "tags": "c#, asp.net, stream" }
How do I know what variable to use for the chain rule?
Question: In my textbook the tangential acceleration is given like this: $$a_t=\frac{dv}{dt}=r\frac{dw}{dt}$$ $$a_t=rα$$ I understand that the chain rule is applied here like this: $$a_t=\frac{dv}{dt}=\frac{dv}{dw}\frac{dw}{dt}=rα$$ What I don't understand is why we have to apply the rule in this specific way. Say I write it like this: $$a_t=\frac{dv}{dθ}\frac{dθ}{dt}$$ This way, I end up with an entirely different result. How do I know how the chain rule must be applied? Answer: What I don't understand is why we have to apply the rule in this specific way? How do I know how the chain rule must be applied? We don't have to. You don't know. Somebody just found out that by using that specific method, the result ended up neat and simple. Nothing is wrong with another method. You get the same thing in another expression. Let's try to interpret the terms in your result: $$a_t=\frac{dv}{dθ}\frac{dθ}{dt}$$ $\frac{dθ}{dt}$ clearly equals $\omega$. $\frac{dv}{dθ}$ is a bit tougher - something like instantaneous speed change per angle change. If you have a way to measure this, then the formula: $$a_t=\frac{dv}{dθ}\omega$$ is just as useable. Just not as neat. I mean, it is kinda smart that $a_t=r\alpha$ has the same shape as $v=r\omega$ and $s=r\theta$. Makes the overview much better, when we can end with a similar and simple result. Note, if you already have the expression $v=r\omega$ at hand, then you don't even need chain rules to reach the simple expression: $$a_t=\frac{dv}{dt}=\frac{d(r\omega)}{dt}=r\frac{d\omega}{dt}=rα$$
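The point that both chain-rule routes give the same value can be checked numerically (a sketch with a made-up motion θ(t) = t² on a circle of radius r = 2; the finite-difference helper is mine, not from the answer):

```python
r = 2.0
theta = lambda t: t**2
t0, h = 1.5, 1e-5

def deriv(f, x):
    return (f(x + h) - f(x - h)) / (2 * h)   # central difference

omega = deriv(theta, t0)                       # dtheta/dt = 2*t0
alpha = deriv(lambda t: deriv(theta, t), t0)   # d^2 theta/dt^2 = 2

# Route 1: a_t = r * alpha
a_route1 = r * alpha

# Route 2: a_t = (dv/dtheta) * (dtheta/dt), with v written as a function
# of theta via t = sqrt(theta), so v = r * 2 * sqrt(theta)
v_of_theta = lambda th: r * 2 * th**0.5
a_route2 = deriv(v_of_theta, theta(t0)) * omega
```

Both routes land on the same number (here 4), which is the answer's claim: neither application of the chain rule is "the" correct one, they just package the same derivative differently.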
{ "domain": "physics.stackexchange", "id": 23906, "tags": "rotational-kinematics, differentiation" }
Can we teach machine to tie shoe lace?
Question: Is it possible with any of machine learning methods to train machine to tie shoe lace? If possible how data should be interpreted for the training? If we are using reinforcement learning, how will it learn to reach the best rewards? Answer: You would not believe how difficult this task is, assuming you want a humanoid robot to tie laces with humanoid hands. It’s possible of course but compared to what the current state of the art in machine learning is, this task is very very complex because it’s a physical system with unknown variables (Eg. coefficient of friction on laces), physical limitations (Eg. dexterity of robotic hands), and occluded vision (Eg. hand in front of laces) to name a couple of the issues one faces not to mention robots are expensive but I digress. Robots can perform amazing preprogrammed sequences but if you wanted to have a robot to be able to tie any shoe in any situation, machine learning is the right approach. Training data should include all of the things that humans use when they approach this problem: proprioception (the location of joints in space aka where are your fingers) a sense of touch, a robot would need a quite a few touch sensors vision, although not actually necessary (because blind humans are capable of learning to do this task), it may speed up learning Using reinforcement learning as a framework, one needs to be able to give the RL agent a signal that it is closer to tying the lace. This could be done in a few ways and it’s unclear what would be the best. One method would be to train a separate model using supervised learning and pictures of shoes laced up to look at the current state of the bow (or lack thereof) and report if it looks like a tight bow. This method requires a whole other network but it may be the most versatile in the end although the RL agent may just learn how to place laces so it looks like a tight bow. 
Although simple for humans, this is an interesting problem and thinking about how it could be implemented helps one understand how to apply ML in other areas.
{ "domain": "ai.stackexchange", "id": 395, "tags": "machine-learning, reinforcement-learning" }
Why don't particles in a heated gas have the same kinetic energy?
Question: When an ideal gas is left alone for a while, the atoms or molecules collide with each other, each time transferring a tiny amount of energy. So, essentially, the particles keep transferring energy and eventually all the atoms should have the same amount of energy. But this is not the case. The atoms resort to the Maxwell-Boltzmann distribution. But why is this so? Isn't the equilibrium state particles with equal energy? What causes the Maxwell-Boltzmann distribution to be the distribution particles resort to? Answer: It's Boltzmann's H-theorem. For an introduction, please read the Wikipedia page. The quantity $H$, a function of time $t$, is defined as: $$ H(t) = \int_0^\infty f(E,t)\left[\ln\left(\frac{f(E,t)}{\sqrt{E}}\right)-1\right]dE $$ where $f(E,t)\,dE$ is the number of molecules having a kinetic energy between $E$ and $E+dE$. One sees that this quantity is a measure of "information" in the gas. Any given collision between any two molecules of an ideal gas occurs with the molecules initially having uncorrelated velocities, angle of collision and starting points. Now, for the gas molecules to run truly randomly, you want to make sure that the "information" - i.e. any regular pattern generated by them - is minimal. To go back to your original question: if all molecules did eventually come to a uniform speed, then, starting from an initial fully random state, they would spontaneously start moving towards a particular direction. Order would rise spontaneously. This conflicts with the second law. At this point, please note that the H-theorem is not perfectly rigorous in that sense - there are ongoing attempts to improve it. Nevertheless, for the purpose of this question, Boltzmann's argument is that the quantity $H$ must be minimized. And that can only happen when the velocity distribution converges to the Maxwell-Boltzmann distribution.
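A toy kinetic-exchange model illustrates the point that an equal-energy state is not stationary (this is an illustration I am adding, not Boltzmann's actual H-theorem argument; the random-split collision rule is an assumption). Collisions spread the energies into a broad, Boltzmann-like distribution while conserving the total:

```python
import random

def relax(energies, n_collisions, seed=1):
    """Toy 'collision' model: repeatedly pick two particles, pool their
    energy, and split it at a random fraction. Starting from equal
    energies, this drives the gas toward a broad distribution."""
    rng = random.Random(seed)
    e = list(energies)
    for _ in range(n_collisions):
        i, j = rng.randrange(len(e)), rng.randrange(len(e))
        if i == j:
            continue
        total = e[i] + e[j]
        split = rng.random()
        e[i], e[j] = split * total, (1 - split) * total
    return e

N = 2000
start = [1.0] * N                       # every particle with the same energy
end = relax(start, n_collisions=50_000)

mean = sum(end) / N
var = sum((x - mean) ** 2 for x in end) / N   # grows from 0 toward ~1
```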
{ "domain": "physics.stackexchange", "id": 38915, "tags": "statistical-mechanics, ideal-gas" }
Definition of probablity current in dirac space not including spatial dimension?
Question: I'm currently reviewing (basic) relativistic quantum mechanics and stumbled upon the probability current in "dirac space", defined as $j^μ = (j^0,\vec j)^\mathrm T$ with $j^0 = c\,ρ = c\,ψ^+ψ$ and $\vec j = cψ^+\,\hat{\vec α}\,ψ$, where $\hat{\vec α}$ is the set of Dirac α-Matrices, composed of the pauli spin matrices like $\hat α_i = \begin{pmatrix} 0 & \hat σ_i\\ \hat σ_i & 0\end{pmatrix}$. Why is there no spatial derivative in the probability current, as there is for non-relativistic quantum mechanics? Or am I just failing to see it? Answer: Unlike its non-relativistic counterpart, the Dirac current doesn't need any derivatives in its definition because the Dirac equation is more "beautiful". It represents the charge density and obeys the continuity equation $\partial_\mu j^\mu=0$, anyway. There is no contradiction with the non-relativistic limit because the Dirac equation basically says that if the Dirac 4-spinor is divided to two 2-spinor parts, $C$ and $D$, then (in a certain basis optimized for non-relativistic physics) the Dirac equation reduces to $$(i\vec \sigma\cdot \nabla) C = (m+i\partial / \partial t) D $$ and a similar equation with $C,D$ basically reverted. You may reconstruct the precise signs and coefficients if you fix a convention. In the non-relativistic limit, the right hand side also basically reduces to $2m\cdot D$ plus small corrections that increase with the velocity. Consequently, the 2-spinor $D$ is equal to the spatial derivatives of $C$ which is why the bilinear expressions constructed both from $C$ and $D$ – from the whole 4-spinor – de facto depend on the spatial derivatives of $C$, anyway (if you eliminate $D$). This $C$ may then be identified with the 2-spinor that appears in the non-relativistic Pauli equation. The Dirac equation's ability to reduce the number of derivatives may surprise in other contexts, too. 
For example, it is a first-order equation – linear in the derivatives or $p_\mu$ – even though it replaces either the Klein-Gordon equation or the non-relativistic Schrödinger's equation, both of which are second-order differential equations (at least second-order in spatial derivatives). Again, this is no contradiction because the Dirac spinor has (at least) twice as many components and one-half of the components end up being approximately equal to the derivatives of the other half. In this way, a first-order equation may become equivalent to a second-order equation. The trick is similar as the trick to rewrite the second-order equations in mechanics, $m\ddot x = F(x)$, as a "Hamiltonian formalism" set of first-order equations for $x$ and $p$.
{ "domain": "physics.stackexchange", "id": 23984, "tags": "quantum-mechanics, special-relativity, dirac-equation, dirac-matrices" }
Calculating the numerical factor from Feynman diagram
Question: I kind of understood the symmetry factor quite well. However, I just do not understand how one can relate the Feynman diagram to the term (especially the numerical factor in front of it) in the expansion of the path integral. I am reading about the Feynman diagram from the book "Quantum Field Theory in a Nutshell" by Anthony Zee. I am now stuck at getting the numerical factor in the expansion of the transition amplitude using Feynman diagrams. Consider the integral $$Z(J)= \int_{-\infty}^\infty dq\, e^{-\frac{1}{2}m^2q^2-\frac{\lambda}{4!}q^4+Jq} \, .$$ Expanding the Taylor series in $\lambda$, we get $$Z(J)= \int_{-\infty}^\infty dq\, e^{-\frac{1}{2}m^2q^2+Jq}\left[1-\frac{\lambda}{4!}q^4+\frac{1}{2}\left(\frac{\lambda}{4!}\right)^2q^8+\ldots \right] \, .$$ We can write this in terms of derivatives with respect to $J$ as $$Z(J)=\left[1-\frac{\lambda}{4!}\left(\frac{d}{dJ}\right)^4 +\frac{1}{2}\left(\frac{\lambda}{4!}\right)^2\left(\frac{d}{dJ}\right)^8+\ldots\right] \int_{-\infty}^\infty dq\, e^{-\frac{1}{2}m^2q^2+Jq} \, .$$ Using the Gaussian integral, we finally obtain $$\tilde{Z}(J)\equiv \frac{Z(J)}{Z(J=0)} =e^{-\frac{\lambda}{4!}\left(\frac{d}{dJ}\right)^4}e^{\frac{1}{2m^2}J^2} \, .$$ We can get the term that contains a certain power in $\lambda$ and $J$ by expanding both exponentials in the above equation. For instance, if we want to have the term $\lambda \cdot J^4$, we get that term to be $$ \frac{8!(-\lambda)}{(4!)^3(2m^2)^4}J^4 = \frac{35}{192}\frac{(-\lambda) J^4}{(m^2)^4} \, .$$ This term can be represented by the diagrams. The author clearly stated: "I leave you to work out the rules carefully to get the numerical factor right." I am not sure how I can get the numerical factor of this term. However, I have tried counting the number of the possible contractions. In diagram (a), the number of possible contractions is clearly $4! = 24$. In diagram (b), we start at the vertex. We choose 2 legs to contract with the sources, which gives $4\times 3=12$.
In diagram (c), we consider only the loop diagram on the right. The symmetry factor there is $8$. Hence the total number of the possible contractions is $\frac{4!}{8} = 3$. Therefore, I conclude that the total number of ways of the contractions is $24+12+3=39$. But I cannot see any link to the numerical factor obtained from expanding the integral. This question is related to but different from Formula For Symmetry Factor. Answer: There are three points worth mentioning before answering your question: (1) You are calculating disconnected contributions, so you have to account for the factors required for the connected parts to exponentiate. (2) Since you have no propagator, the best you can hope for is to obtain the sum over all the symmetry factors for your chosen power of $J$ and $\lambda$. (3) In the path integral formulation the symmetry factors depend on whether you have differentiated the sources, $J$, and then set them to zero to produce correlation functions (such as in the related post), or not (such as here). Regarding point 3, ultimately, in your example you need four more derivatives with respect to $J$, after which you set $J=0$ to extract Greens functions, "$\langle qqqq\rangle$". More generally, this always produces a factor of $n!$ in the numerator for $n$-pt connected Greens functions. (E.g., the symmetry factor, $1/4$, of the balloon in the presence of sources below becomes $1/4\times 2!=1/2$, in agreement with the related post.) Having said that, here is how to think about $35/192$: $$ \frac{35}{192}=\frac{1}{4!}+\frac{1}{4}\times \Big(\frac{1}{2}\Big)+\frac{1}{8}\times\frac{1}{2}\Big(\frac{1}{2}\Big)^2 $$ where the correspondence between the path integral and the Feynman diagram factors is shown schematically in the figure, and the corresponding reasoning underlying the various factors (in addition to what you already know) is exhibited below.
The factors of $1/2$ inside the parentheses in the figure come from the free propagator pieces, the factor of $1/2$ in the coefficient of the $(1/2)^2$ term (which is the free propagator squared) comes from the fact that this is from a disconnected diagram and in particular the third term in the expansion of the free propagation piece: $$ e^{\frac{1}{2}GJ^2}=1+\frac{1}{2}G+\frac{1}{2}\Big(\frac{1}{2}GJ^2\Big)^2+\dots $$ which is in turn multiplied by $\frac{1}{8}\times$(bubble diagram), etc. All in all, and filling in the remaining steps, \begin{equation} \begin{aligned} &\exp\Bigg\{\Big(\frac{1}{2}GJ^2\Big)-\lambda\Big[\Big(\frac{1}{4!}{\rm cross}\,J^4\Big)+\Big(\frac{1}{4}{\rm balloon}\,J^2\Big)+\Big(\frac{1}{8}{\rm double \,\,bubble}\Big)\Big]+\mathcal{O}(\lambda^2)\Bigg\}\\ &=\dots -\lambda\Bigg\{\Big(\frac{1}{4!}{\rm cross}\,J^4\Big)+\Big(\frac{1}{4}{\rm balloon}\,J^2\Big)\Big(\frac{1}{2}GJ^2\Big) +\Big(\frac{1}{8}{\rm double \,\,bubble}\Big)\frac{1}{2}\Big(\frac{1}{2}GJ^2\Big)^2\Bigg\}\\ &\qquad +\dots \end{aligned} \end{equation} The exponent on the left-hand side of this equality is the generating function of connected Greens functions, $W(J)$, whereas the right-hand side is the generating function of disconnected Greens functions, $Z(J)$, whose coefficients' sums you computed (related via $Z(J)=e^{W(J)}$ in this convention). Generalisation A general and useful result for future reference is that a generating function of the form, where, $U(J)=\frac{1}{\hbar}(W(J)-W(0))$, after carrying out the functional derivatives and re-exponentiating takes the diagrammatic form (ref): The $\mathcal{O}(\hat{\lambda})$ term agrees with the above special case. To the best of my knowledge all coefficients in $U(J)$ are correct and all diagrams to the indicated order are included. Since it is purely combinatorial it also holds for arbitrary spacetime backgrounds and the couplings can in principle be arbitrary local functions at this stage. 
A comment on $\hbar$ In $U(J)$ (after appropriate field redefinitions) $\hbar$ enters via $\ell=\hbar^{d/2}$ (with $d$ the spacetime dimension). Then, the entire $\ell$ power counting in $U(J)$ comes solely from the source and bare couplings (see here for further detail). In particular, $J\sim \mathcal{O}(\ell^{-1})$ so that every external leg in a given diagram lowers the order of $\ell$ by one. The $\ell$-dependences of the couplings are then such that all tree diagrams are $\mathcal{O}(\ell^{-2})$, all one-loop diagrams are $\mathcal{O}(\ell^0)$, all two-loop diagrams are $\mathcal{O}(\ell^{2})$, etc., and a generic $L$-loop diagram in $U(J)$ is $\mathcal{O}(\ell^{2L-2})$. (This is why an $\hbar$ expansion is an expansion in loops rather than couplings, but this is somewhat implicit in $U(J)$ above. This is all made precise in here.)
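As a quick arithmetic check of the $35/192$ bookkeeping (my own sketch, not part of the original answer), the three diagram coefficients can be summed exactly with Python's fractions module:

```python
from fractions import Fraction

# The three terms read off the expansion of Z(J) = e^{W(J)} above:
cross = Fraction(1, 24)                       # 1/4! from the cross diagram
balloon = Fraction(1, 4) * Fraction(1, 2)     # balloon times one free propagator
bubbles = Fraction(1, 8) * Fraction(1, 2) * Fraction(1, 2) ** 2  # double bubble term

total = cross + balloon + bubbles
print(total)  # 35/192
```

Exact rational arithmetic avoids any doubt about whether the three contributions really add up to the coefficient obtained from the integral.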
{ "domain": "physics.stackexchange", "id": 60312, "tags": "quantum-field-theory, symmetry, feynman-diagrams, path-integral, perturbation-theory" }
Hangman program w/ Python and JSON
Question: What improvements can I make? Should I organize my functions based on A-Z? For example, find_letter function comes before guessing function.

import json
import random
import re

def find_letter(user_input, vocab, i=None):
    return [i.start() for i in re.finditer(user_input, vocab)]

def guessing(tries, vocab, blank_line):
    while tries > 0:
        user_input = input("Guess:")
        if user_input in vocab:
            if not(user_input in blank_line):
                index = find_letter(user_input=user_input, vocab=vocab)
                i = 0
                while i < len(index):
                    blank_line[index[i]] = user_input
                    i += 1
                if not('_' in blank_line):
                    return "you won!!!"
            else:
                print("YOU ALREADY TRIED THAT LETTER!")
        else:
            tries -= 1
        print(str(blank_line) + " Number of Tries : " + str(tries))
    return "you lost"

def hangman_game(vocab):
    'Preliminary Conditions'
    print(introductory(vocab=vocab))
    tries = num_of_tries(vocab=vocab)
    blank_line = produce_blank_lines(vocab=vocab)
    print(blank_line)
    print(guessing(tries=tries, vocab=vocab, blank_line=blank_line))

def introductory(vocab):
    return "HANGMAN GAME. So...what you gotta do is guess and infer what the word might be. " \
           "WORD COUNT : " + str(len(vocab))

def num_of_tries(vocab):
    if len(vocab) < 7:
        return 8
    elif 7 <= len(vocab) < 10:
        return 10
    else:
        return 14

def produce_blank_lines(vocab):
    blank_line = []
    for i in vocab:
        if i.isalpha():
            blank_line += "_"
        elif i == " ":
            blank_line += " "
        else:
            blank_line += i + " "
    return blank_line

if __name__ == '__main__':
    data = json.load(open("vocabulary.json"))
    vocab = random.choice(list(data.keys())).lower()
    print(vocab)
    print(hangman_game(vocab=vocab))

Snippet of the JSON File

{"abandoned industrial site": ["Site that cannot be used for any purpose, being contaminated by pollutants."], "abandoned vehicle": ["A vehicle that has been discarded in the environment, urban or otherwise, often found wrecked, destroyed, damaged or with a major component part stolen or missing."], "abiotic factor": ["Physical, chemical and other non-living environmental factor."], "access road": ["Any street or narrow stretch of paved surface that leads to a specific destination, such as a main highway."], "access to the sea": ["The ability to bring goods to and from a port that is able to harbor sea faring vessels."], "accident": ["An unexpected, unfortunate mishap, failure or loss with the potential for harming human life, property or the environment.", "An event that happens suddenly or by chance without an apparent cause."], "accumulator": ["A rechargeable device for storing electrical energy in the form of chemical energy, consisting of one or more separate secondary cells.\\n(Source: CED)"], "acidification": ["Addition of an acid to a solution until the pH falls below 7."], "acidity": ["The state of being acid that is of being capable of transferring a hydrogen ion in solution."], "acidity degree": ["The amount of acid present in a solution, often expressed in terms of pH."], "acid rain": ["Rain having a pH less than 5.6."]

Answer:

data = json.load(open("vocabulary.json"))

Unless Python 3 has changed this, I think you're leaking the open file handle here.
What you want is more like

with open("vocabulary.json") as f:
    data = json.load(f)

which IMHO is even a bit easier to read because it breaks up the mass of nested parentheses.

vocab = random.choice(list(data.keys())).lower()

Same deal with the nested parentheses here. We can save one pair by eliminating the unnecessary materialization into a list. [EDIT: Graipher points out that my suggestion works only in Python 2. Oops!] Also, vocab (short for vocabulary, a set of words) isn't really what this variable represents. data.keys() is our vocabulary; what we're computing on this particular line is a single vocabulary word. So:

word = random.choice(data.keys()).lower()
print(word)
print(hangman_game(word))

I can tell you're using Python 3 because of the ugly parentheses on these prints. ;) But what's this? Function hangman_game doesn't contain any return statements! So it basically returns None. Why would we want to print None to the screen? We shouldn't be printing the result of hangman_game; we should just call it and discard the None result.

word = random.choice(data.keys()).lower()
print(word)
hangman_game(word)

def hangman_game(vocab):

Same comment about vocab versus word here.

'Preliminary Conditions'

This looks like you maybe forgot a print? Or maybe you're trying to make a docstring? But single quotes don't make a docstring AFAIK, and even if they did, "Preliminary Conditions" is not a useful comment. I'd just delete this useless line.

print(introductory(vocab=vocab))

In all of these function calls, you don't need to write the argument's name twice. It's a very common convention that if you're calling a function with only one argument, the argument you pass it is precisely the argument it needs. ;) Also, even if the function takes multiple parameters, the reader will generally expect that you pass it the right number of arguments in the right order. You don't have to keyword-argument-ize each one of them unless you're doing something tricky that needs calling out. So:

print(introductory(word))

And then you can inline introductory since it's only ever used in this one place and it's a one-liner already:

print(
    "HANGMAN GAME. So...what you gotta do is guess and infer what the word might be. "
    "WORD COUNT : " + str(len(vocab))
)

Personally, I would avoid stringifying and string-concatenation in Python, and write simply

print(
    "HANGMAN GAME. So...what you gotta do is guess and infer what the word might be. "
    "WORD COUNT : %d" % len(vocab)
)

I know plenty of working programmers who would write

print(
    "HANGMAN GAME. So...what you gotta do is guess and infer what the word might be. "
    "WORD COUNT : {}".format(len(vocab))
)

instead. (I find that version needlessly verbose, but it's definitely a popular alternative.)

tries = num_of_tries(vocab=vocab)
blank_line = produce_blank_lines(vocab=vocab)
print(blank_line)
print(guessing(tries=tries, vocab=vocab, blank_line=blank_line))

It's odd that you abbreviated number_of_tries to num_..., but then left in the word of. I would expect either number_of_tries or num_tries (or ntries for the C programmers in the audience); num_of_tries is in a weird no-man's-land. It's weird that you have a function named produce_blank_lines that produces actually a single blank_line. It's even weirder that the value of blank_line is not a constant "\n"! This suggests that everything here is misnamed. You probably meant underscores = produce_underscores(word) or even blanks = shape_of(word). Finally, it is weird that blank_line is not even a string, but rather a list of characters. I would replace this entire dance with something like

blanks = ''.join('_' if ch.isalpha() else ch for ch in word)

or

blanks = re.sub('[A-Za-z]', '_', word)

This would lose your addition of a blank space after each non-space-non-alpha character, but IMHO that's actually an improvement over your version. (Your sample JSON doesn't show any examples of words containing non-space-non-alpha characters. My version certainly performs better for words like non-space or Route 66.)

if not(user_input in blank_line):

This would be idiomatically expressed (both in Python and in English) as

if user_input not in blank_line:

index = find_letter(user_input=user_input, vocab=vocab)
i = 0
while i < len(index):
    blank_line[index[i]] = user_input
    i += 1

You have another singular/plural problem here (just like with blank_line[s]). It doesn't make sense to get the len of a singular index. Since the variable actually represents a list of indices, we should name it [list_of_]indices. Also, this style of for-loop is very un-Pythonic. In Python we can say simply

for i in find_letter(user_input, word):
    blank_line[i] = user_input

And then we can inline the single-use find_letter:

for m in re.finditer(user_input, word):
    blank_line[m.start()] = user_input

(Using m as the conventional name for a match object.) And then we can eliminate the regex and just loop over the word directly:

for i, ch in enumerate(word):
    if ch == user_input:
        blank_line[i] = user_input

or

for i in range(len(word)):
    if word[i] == user_input:
        blank_line[i] = user_input

In fact, by not using regexes, we fix a bug in your code: if the chosen vocabulary word contains . as one of its non-space-non-alpha characters, a user who enters . as their guess will win the game immediately! :D Anyway, that's enough for one answer, I think.
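To see the re.sub suggestion from the review in action, here is a tiny self-contained sketch (the helper name and sample words are mine, not from the original post):

```python
import re

def blanks_for(word):
    # Replace every letter with an underscore; keep spaces and punctuation as-is.
    return re.sub('[A-Za-z]', '_', word)

print(blanks_for('Route 66'))   # _____ 66
print(blanks_for('non-space'))  # ___-_____
```

Note how the digits and the hyphen survive untouched, which is exactly the behaviour argued for in the review.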
{ "domain": "codereview.stackexchange", "id": 33047, "tags": "python, json, hangman" }
Was a black hole formed by only one star or many stars? And can a black hole be formed by other materials (non-stars).
Question: Although I know that a black hole can be formed by gravitational collapse of a massive dead star, I'm not sure whether a black hole can be formed by collapse of many stars. Answer: Recently with the LIGO experiment two black holes were seen to merge into one black hole. It seems that LIGO also may have detected two neutron stars merging into a black hole. The object resulting from the merger is about 2.7 times the mass of the Sun. As the object continues to be studied, evidence is mounting that what was created in the process was a low-mass black hole (if true, this would represent yet another new class of black holes discovered by LIGO). Continued observations over the next several years may confirm or refute this notion. Future observations will show. The black hole responsible was Sagittarius A* (pronounced "Sagittarius A-star"), the supermassive black hole at the center of our Milky Way galaxy. Astronomers think that most large galaxies like the Milky Way should have supermassive black holes in their centers, but it wasn't until the past couple decades that they had compelling evidence that Sgr A* is our supermassive black hole. There are black holes in the center of galaxies, but whether it is a black hole eating a star, or two stars simultaneously merging into a black hole, I think it is a matter of modelling and trying to find confirmation for the model.
{ "domain": "physics.stackexchange", "id": 53724, "tags": "black-holes, astrophysics, stars" }
Fast string formatting in for loop
Question: I have a function that loops through large objects of raw data, translating coded strings into readable words. The function is very basic, and I keep wondering if I should use an object instead containing the words to be detected and translated. This would be cleaner-looking and more flexible, but I would not like it to harm the performance of the script, since it needs to run through potentially thousands of items.

function formatTitle(input) {
    var input_lowercase = input.toLowerCase()
    if (input == "total") {
        return "Title"
    } else if (
        input_lowercase == "type" ||
        input_lowercase == "dokumenttype" ||
        input_lowercase == "km_type"
    ) {
        return label_documenttype
    } else if (
        input_lowercase.indexOf("tema_") >= 0 ||
        input_lowercase == "spesfikasjon"
    ) {
        return label_theme
    } else if ( input_lowercase.indexOf("prosess_") >= 0) {
        return label_process
    } else if (input.indexOf("SM_") >= 0) {
        return input.substr(3);
    } else {
        return input.charAt(0).toUpperCase() + input.slice(1)
    }
}

There are more exceptions in the list, I've shortened it down a bit for practical reasons. I use the function in the following context (when t_formatheader is switched on):

if ( skipping_mechanism == "lookup" ) { //42% faster than switch block in IE
    for (var key in arr_firstnode) {
        if (key in keysToIgnore) continue;
        if ( t_formatheader == "1" ) {
            arrayPushUnknown( columns, formatTitle( key ), "", true, key )
        } else {
            arrayPushUnknown( columns, key, "", true, key )
        }
    }
}

Answer: I think you need a mix of lookup object(s) and regular if statements; there are clearly direct matches, matches on the prefix and the rest.
Also, to write else if is pointless since you return anyway in previous if checks, so you can simply use

if(condition){
    return value
}
if(condition2){ // <- no else here
    return value2
}

All that together you could consider something like this:

var formatTitleDirectMapping = {
    'total' : 'title',
    'type' : label_documenttype,
    'dokumenttype' : label_documenttype,
    'km_type' : label_documenttype,
    'spesfikasjon' : label_theme,
}

var formatTitlePrefixMapping = {
    'tema_' : label_theme,
    'prosess_' : label_process,
    'dokumenttype' : label_documenttype,
    'km_type' : label_documenttype,
}

function formatTitle(input) {
    var lower = input.toLowerCase();

    //Is there a straight forward conversion? Then get out
    if( formatTitleDirectMapping[lower] )
        return formatTitleDirectMapping[lower];

    //Is there a straight forward conversion for the prefix? Then get out
    var prefix = lower.split("_")[0];
    if( formatTitlePrefixMapping[prefix] )
        return formatTitlePrefixMapping[prefix];

    //Put the conversions that dont lend themselves to lookup structures here
    if( prefix == "sm") {
        return input.substr(3);
    }

    //Catch all
    return input.charAt(0).toUpperCase() + input.slice(1)
}
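The lookup-table idea is language-independent; as a quickly testable sketch of my own, here is a hypothetical Python rendering (the label values are invented stand-ins for the label_* globals in the original script):

```python
# Hypothetical stand-ins for the label_* globals referenced in the question
LABEL_DOCUMENTTYPE = "Document type"
LABEL_THEME = "Theme"
LABEL_PROCESS = "Process"

DIRECT = {
    "total": "Title",
    "type": LABEL_DOCUMENTTYPE,
    "dokumenttype": LABEL_DOCUMENTTYPE,
    "km_type": LABEL_DOCUMENTTYPE,
    "spesfikasjon": LABEL_THEME,
}
PREFIX = {"tema": LABEL_THEME, "prosess": LABEL_PROCESS}

def format_title(key):
    lower = key.lower()
    if lower in DIRECT:                 # straightforward conversion
        return DIRECT[lower]
    prefix = lower.split("_")[0]
    if prefix in PREFIX:                # conversion keyed on the prefix
        return PREFIX[prefix]
    if key.startswith("SM_"):           # cases that don't fit a lookup
        return key[3:]
    return key[:1].upper() + key[1:]    # catch-all: capitalize

print(format_title("tema_x"))   # Theme
print(format_title("SM_Name"))  # Name
print(format_title("other"))    # Other
```

Dict membership tests are average O(1), so the lookups dominate neither the loop nor the fallback branches, which mirrors the performance argument in the answer.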
{ "domain": "codereview.stackexchange", "id": 9203, "tags": "javascript, optimization" }
Importance of Killing vector fields
Question: I am facing some problems in understanding what is the importance of a Killing vector field? I will be grateful if anybody provides an answer, or refer me to some review or books. Answer: In terms of classical general relativity: Einstein's equations $$ G_{ab} = 8\pi T_{ab} $$ can be formulated, in local coordinates, as a system of second order partial differential equations for the metric unknown $g_{ab}$. The matter field equations further generate some family of partial differential equations. Given a continuous symmetry (as guaranteed by a Killing vector field), one has tools and tricks one can use to help solve the PDEs. Noether's theorem tells us that for Einstein's equation (which admits a Lagrangian formulation), associated to each Killing vector field $X^a$ is a conservation law. One can simply see this by considering the current $$ J^{(X)}_a = T_{ab} X^b $$ Its divergence is $$ \nabla^a J^{(X)}_a = \nabla^a T_{ab} X^b + T_{ab} \nabla^a X^b $$ The first term vanishes since the energy momentum tensor is divergence free. Using that the energy-momentum tensor is symmetric, we write $$ \nabla^a J^{(X)}_a = \frac12 T_{ab} \left( \nabla^a X^b + \nabla^b X^a\right) $$ As a consequence of Killing's equations, if $X^a$ is a Killing field, the term inside the parenthesis evaluates to 0. So $J^{(X)}$ is divergence free. Applying Stokes' theorem we then see a conservation law. Symmetry reduction: given a continuous symmetry for a PDE, we can try to perform a symmetry reduction of the equations. This reduces the number of independent variables for the PDE, and often makes it easier to see an exact solution (or to examine features of symmetric solutions). For a survey of how symmetry can help, I recommend checking out Exact Solutions of Einstein's Field Equations by Stephani et al (Cambridge University Press). Chapters 8, 9, 10, and all of Part II of that book addresses the use of symmetry groups to help solve Einstein's equations.
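As a tiny concrete check of Killing's equation (my own illustration, not from the answer): in flat 2D Euclidean space the rotation generator $X = (-y, x)$ satisfies $\partial_a X_b + \partial_b X_a = 0$, which a plain finite-difference sketch confirms:

```python
# Numerical check that the rotation generator X = (-y, x) satisfies
# Killing's equation d_a X_b + d_b X_a = 0 for the flat Euclidean metric
# (indices can be lowered trivially in Cartesian coordinates).

def X(p):
    x, y = p
    return (-y, x)  # rotation Killing field

def dX(p, a, b, h=1e-6):
    # Central finite difference of component X_b with respect to coordinate a.
    up = list(p); dn = list(p)
    up[a] += h; dn[a] -= h
    return (X(up)[b] - X(dn)[b]) / (2 * h)

point = (0.7, -1.3)
killing = [[dX(point, a, b) + dX(point, b, a) for b in range(2)] for a in range(2)]
print(killing)  # all four entries are ~0
```

The same check applied to a non-Killing field, e.g. $X = (x, 0)$, gives a non-vanishing $(0,0)$ entry, which is why the conserved-current argument in the answer fails for generic vector fields.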
{ "domain": "physics.stackexchange", "id": 13838, "tags": "general-relativity, differential-geometry, lie-algebra, vector-fields" }
Overlap integrals in Hückel theory
Question: In Hückel theory we are only interested in π systems, where $\mathrm p_z$ orbitals overlap. One of the approximations in Hückel theory is that the overlapping $\mathrm p_z$ orbitals are orthonormal: $$ S_{ij} = \begin{cases} 1\quad i = j \\ 0\quad i \neq j \\ \end{cases} $$ If orbital overlap leads to bond formation, how can the overlap integrals, between $\mathrm p_z$ orbitals, be equal to 0? Maybe I am missing something obvious, but doesn't a π-bond demand non-zero overlap between p-orbitals? Answer: Hückel theory would still work if there were overlap. In fact, it is possible to create a completely new basis by applying a transformation to the basis set by which a new set of orthogonal basis functions is created. The advantage of setting $\hat{S}$ (the overlap matrix) to $\hat{I}$ (the identity matrix) is that the eigenvectors of the matrix equation can be directly interpreted in terms of the $\pi$ orbitals in the system. There is the opposite view in Extended Hückel Methods whereby the assumption is that the overlap entries depend upon the off-diagonal elements. In ordinary Hückel theory, the diagonal elements are represented as \begin{align} \alpha_{ii} &= \langle \psi_i | \hat{H} | \psi_i \rangle \\ \hat{H} &= -\frac{\hbar^2}{2}\nabla^2 + V \end{align} But in the extended theory, the off-diagonal elements are not denoted by $\beta_{ij} = \delta_{ij}$ but more as \begin{align} \beta_{ij} &= - \frac{C}{2}\Big \langle \psi_i \Big |-\frac{\hbar^2}{2}\nabla^2 + V \Big | \psi_j \Big \rangle\\ &= - \frac{C}{2} \left(\alpha_{ii} + \alpha_{jj} \right)S_{ij} \end{align} where $S_{ij}$ is the usual notation for the overlap between the corresponding atomic orbitals. Here $C$ is a dimensionless constant, usually taken equal to $7/4$.
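As a small numerical illustration of what the $S_{ij}=\delta_{ij}$ simplification buys (my own sketch, not from the answer): with an orthonormal basis the butadiene π system reduces to a 4×4 secular determinant in $x$, where $E=\alpha+x\beta$, and one can verify directly that $x=(1+\sqrt{5})/2$ and $x=(\sqrt{5}-1)/2$ make it vanish:

```python
import math

def det(m):
    # Laplace expansion along the first row (fine for a 4x4 matrix).
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def secular(x):
    # Huckel secular matrix for butadiene in units of beta, writing E = alpha + x*beta:
    # diagonal entries -x, off-diagonal 1 for bonded neighbours, 0 otherwise.
    return [[-x if i == j else (1.0 if abs(i - j) == 1 else 0.0)
             for j in range(4)] for i in range(4)]

golden = (1 + math.sqrt(5)) / 2        # x = 1.618...
print(abs(det(secular(golden))))       # ~0
print(abs(det(secular(golden - 1))))   # x = 0.618..., also ~0
```

With a non-trivial overlap matrix the problem would instead be a generalized eigenvalue problem $Hc = ESc$, and the roots could no longer be read off from the connectivity alone.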
{ "domain": "chemistry.stackexchange", "id": 13936, "tags": "physical-chemistry, quantum-chemistry, molecular-orbital-theory, physical-organic-chemistry" }
How does the human body deal with free radicals it creates itself?
Question: I know that free radicals are created all over your body all the time as a byproduct of its chemical processes, (or maybe I do not and that is false). However I am confused about the distinction between free radicals formed from UV photons that will affect DNA base pairings, and the ones that the human body creates on a consistent basis. How does the body deal with the free radicals it created by itself? And how are those free radicals different from the ones that cause oxidative damage (e.g., from UV rays)? Answer: Reactive oxygen species, primarily superoxide, may result from one-electron reduction of molecular oxygen. Given that aerobic metabolism involves loooooong chain of electron transfers, it isn't all that rare. Human body uses specialized enzymes to defang reactive oxygen species. The most well known are peroxidase and superoxide dismutase (see wiki for details). Aside from this, free radical and other oxygen and nitrogen active species are, apparently, involved in some forms of immune response. In this case they are not dealt with, but instead are expected to do maximum possible local damage.
{ "domain": "chemistry.stackexchange", "id": 10733, "tags": "biochemistry, radicals, organic-oxidation" }
What algorithm do computers use to compute the square root of a number?
Question: What algorithm do computers use to compute the square root of a number ? EDIT It seems there is a similar question here: Finding square root without division and initial guess But I like the answers provided here more. Plus the person asking the similar question had the question formed in a personal way. My question has an easier to find wording. The similar one was asked having an insight about the inner mechanics of such an algorithm and was being asked like if it was something that had not yet been solved-found. I don't know if there is a way for a moderator to merge the two posts. Answer: Have a look at the paper https://www.lirmm.fr/arith18/papers/burgessn-divider.pdf which describes the division / square root hardware implementation in some ARM processors. Roughly the idea: For a division unit, you can look up the initial bits of divisor and dividend in a table to get two bits of result. For double precision with a 53 bit mantissa, you repeat that 26 or 27 times and you have the result. For single precision with 24 bit mantissa, you need to repeat only 12 or 13 times. All the other bits, calculating the exponent etc., can be done in parallel. Now what does square root have to do with division? Simple: sqrt (x) = x / sqrt (x). You get the square root by dividing x by its square root. Now when you start this division, you don't know the square root yet. But as soon as you have say eight bits and you are very, very careful, you can produce two mantissa bits per cycle. And getting the first say eight bits is not hard, since unlike a division x / y, there is only one operand involved. The paper describes this in much more detail, and importantly describes why this is superior for example to using Newton-Raphson or similar. Note: It is superior for typical floating-point numbers with 24 or 53 bit mantissa. Since that is what is used 99.9999% of the time it is important, but for different situations things are different.
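For comparison with the digit-recurrence scheme described above, here is a rough sketch of my own (not from the linked paper) of the Newton-Raphson alternative it is measured against: iterate on the reciprocal square root, so the loop itself is division-free, then recover sqrt(x) = x * y with one multiply.

```python
import math

def nr_sqrt(x, iterations=10):
    # Newton-Raphson on y = 1/sqrt(x): y <- y * (3 - x*y*y) / 2.
    # No division inside the loop; sqrt(x) = x * y at the end.
    y = 1.0 / x if x > 1 else 1.0  # crude initial guess (fine for moderate x)
    for _ in range(iterations):
        y = y * (3.0 - x * y * y) / 2.0
    return x * y

print(nr_sqrt(2.0), math.sqrt(2.0))
```

The crude seed and fixed iteration count are illustrative only; a hardware implementation would instead derive the starting estimate from the exponent and leading mantissa bits, so only a couple of quadratically converging iterations are needed.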
{ "domain": "cs.stackexchange", "id": 17441, "tags": "algorithms, approximation, numerical-algorithms" }
Calculating maximum shear stress at certain points
Question: I have a structures problem I cannot solve, I have the answer and I have attempted the question. I just need to know where I am going off track. Any assistance would be highly appreciated. My work is as follows:

At point B: $T = Fd$, where $d = 300\text{ mm}$ and $F = 100\text{ N}$, therefore $T = 30,000\text{ Nmm}$. From torsion, shear stress $\tau = Tr/J$, with $$J = \dfrac{\pi}{2}\cdot(r_1^4 - r_2^4)$$ Therefore $\tau = \dfrac{30,000 \cdot 30 \cdot 2}{\pi(30^4 - 25^4)} = 1.37\text{ MPa}$. This is the first answer listed, which leads me to believe my lecturer listed the answers for (B,A) instead of (A,B)?

However then for point A: $T = Fd$, I believe $d$ now changes to 330 mm as you have to add the outer radius of the bar, so $T = 33,000\text{ Nmm}$. Following the same process used for B: $$\tau = \dfrac{33,000 \cdot 30 \cdot 2}{\pi(30^4 - 25^4)} = 1.50\text{ MPa}$$ This is clearly not the answer my lecturer gave. I am confused as to where I went wrong. I am aware that I did not use the value 250 mm but I am unsure where it applies. What makes points A and B different, aside from their distance from the force? I would really appreciate any assistance in the matter, thank you so much for reading.

Answer: The reason is that the pipe is also subject to transverse shear. This is due to the fact that this lever doesn't apply a "pure" torque moment, but an eccentric force, which results in torque and shear. Now you get two different shear stresses when you consider A and B. As this pipe is subject to the shear force $V_z=100N$, the transverse shear "flows" from B over A towards the opposite side of B. (Note: there would be a bending moment $M_y=100N\cdot L_{pipe}$ at the base of the pipe as well, but it doesn't result in transverse shear.) You can determine the transverse shear stress using the following formula $$ \tau=\frac{V_z\cdot Q_y}{I_y\cdot t} $$ Take a look at this question for further explanation of transverse shear.
further explanation [edited]

Take a look at this graphic, especially at the transformed coordinate systems. (By convention, a dot in a circle depicts a force/direction facing away from you, and a cross in a circle is a force/direction facing towards you.) In sketch $II$ you can see how the $100N$ force does create a bending moment; the orange line is a sketch of the deformation curve that said force would cause. On the other hand, in sketch $III$ you can see two opposing eccentric forces, resulting in the same torque $T=2\cdot(50N\cdot 0.3m)=30Nm$, which would not result in a bending moment, as they cancel each other out: $M_y=50N\cdot L_{pipe}-50N\cdot L_{pipe}=0$. Furthermore, there would be no transverse shear stress in the pipe either.
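Putting numbers to both contributions, here is a quick check of my own (not from the answer) using the torsion formula from the question and the $\tau = V_z Q_y/(I_y t)$ formula above, evaluated at the neutral axis of the hollow section; the section-property expressions for $Q$, $I$ and $t$ are my assumptions for a hollow circular cross-section cut at the neutral axis:

```python
import math

# Geometry and load from the question (mm, N)
r_out, r_in, F, d = 30.0, 25.0, 100.0, 300.0

# Torsion: tau = T*r/J with J = (pi/2)*(r_out^4 - r_in^4)
T = F * d
J = math.pi / 2 * (r_out**4 - r_in**4)
tau_torsion = T * r_out / J
print(round(tau_torsion, 2))  # 1.37 (MPa)

# Transverse shear at the neutral axis: tau = V*Q/(I*t)
# Q of the half annulus, I of the full annulus, t = total wall thickness cut
V = F
Q = 2.0 / 3.0 * (r_out**3 - r_in**3)
I = math.pi / 4 * (r_out**4 - r_in**4)
t = 2.0 * (r_out - r_in)
tau_transverse = V * Q / (I * t)
print(round(tau_transverse, 2))  # 0.23 (MPa)
```

The torsional part reproduces the 1.37 MPa value from the question; the transverse part is the additional contribution that distinguishes points A and B, since it must be superposed with the torsional shear according to the flow directions described in the answer.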
{ "domain": "engineering.stackexchange", "id": 2006, "tags": "mechanical-engineering, structural-engineering, structures, steel, stresses" }