Why can precomputed sets of lattice QFT field configurations be used to measure arbitrary observables?
Question: My knowledge of quantum mechanics is rusty and my understanding of (lattice) quantum field theory is on a very novice level at best, so it is likely my whole question is based on completely wrong assumptions and a lack of understanding. Most introductory texts about QFT give some sort of a translation table between quantities in QM and QFT, see e.g. top of page 16 here. A given particle path (over all of which you integrate in the path integral formulation of QM to get the amplitude of a given process) is translated into a given field configuration in QFT (over all of which, again, you integrate in the path integral formulation of QFT). As far as I understand, lattice quantum field theory calculations on big high-power computing clusters are effectively generating lots of field configurations (for a given lattice size, spacing, boundary conditions etc, but independent of any "starting conditions"). These field configurations are generated using Metropolis MC methods (or similar more advanced importance sampling schemes), based on the action calculated for a given field configuration. In the "list" of output field configurations, the occurrence of field configurations is then already weighted by their effective action, so that summing over them yields the most relevant results without near-infinite amounts of practically irrelevant field configurations. To extract an observable from such a set of field configurations, one simply sums the value of the observable over all field configurations. I wonder why it is possible and reasonable to extract any observable from a pre-computed set of field configurations that did not include any information about what observable one would like to obtain. In other words: how can the action of a field configuration be independent of the process I want to extract afterwards?
To maybe clarify a bit further, consider a simple double slit experiment: I want to calculate the QM amplitude of an electron at position A (on one side of the double slit) at time t0 to appear at position B (on the other side of the double slit) at time t1. For this I randomly generate a bunch of paths that satisfy the conditions of my observable (position A at t0, position B at t1) and evaluate their actions. If I want to be smart about it, I do some importance sampling of paths (Metropolis or whatever). However in this scenario, I only generated paths that were connected to the observable I knew I was looking for from the beginning (propagation A->B). I could not change the observable to a different transition amplitude afterwards and use the same paths. So how do I unify these two pictures in my head? The only thing I can think of would be to not restrict the generation of QM paths to starting point A and ending point B, instead generating QM paths for all possible starting and end points. Afterwards, to calculate the desired transition amplitude, I could only sum up over paths going from A to B, which would make the vast majority of my generated paths unnecessary. If that should be the case, why do LQFT calculations not restrict the generation of field configurations to those that give a meaningful contribution to a predefined observable? Answer: The path integral form of the Gell-Mann-Low theorem says that $\langle 0|T\{ \pi(x,t) \pi(x,t+T)\}|0\rangle$ is equal to the statistical average of $\pi(x,t) \pi(x,t+T)$ in the sum over all gauge field paths. Here $\pi(x,t)$ is some lattice operator that has a non-zero matrix element (proportional to $\sqrt Z$) between the vacuum and the pion state (just as in the continuum). Then for large $T$ we have $$ \langle 0|T\{ \pi(x,t) \pi(x,t+T)\}|0\rangle \sim Z e^{-m_\pi T} $$ where $m_\pi$ is the lightest mass (giving the slowest decay in imaginary time $T$) eigenstate in the pion channel.
The paths (gauge field configurations) in the path integral don't care about what is being averaged, so a precomputed table of configurations can be used.
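To make the reuse concrete, here is a minimal Metropolis sketch for a toy 1D lattice scalar field (a hypothetical stand-in for the gauge configurations described above; lattice size, mass and sweep counts are made-up illustrative values, not real lattice-QCD settings). The key point is in the last lines: the ensemble is generated once from the action alone, and different observables are just different averages over the same stored configurations.

```python
import numpy as np

rng = np.random.default_rng(0)

N, m2 = 32, 0.5  # lattice sites and mass-squared (assumed toy values)

def action(phi):
    # S = sum_i [ (phi_{i+1}-phi_i)^2 + m^2 phi_i^2 ] / 2, periodic lattice
    return 0.5 * np.sum((np.roll(phi, -1) - phi) ** 2) + 0.5 * m2 * np.sum(phi ** 2)

def sample_configs(n_cfg, n_sweeps=10, step=1.0):
    phi = np.zeros(N)
    cfgs = []
    for _ in range(n_cfg):
        for _ in range(n_sweeps):
            for i in range(N):
                old, s_old = phi[i], action(phi)
                phi[i] += step * rng.uniform(-1.0, 1.0)
                # Metropolis accept/reject with weight exp(-(S_new - S_old))
                if rng.uniform() >= np.exp(min(0.0, s_old - action(phi))):
                    phi[i] = old  # reject the update
        cfgs.append(phi.copy())
    return cfgs

cfgs = sample_configs(100)

# The stored ensemble knows nothing about observables; each observable is just
# a different average over the same configurations, chosen after the fact:
phi_sq = np.mean([np.mean(c ** 2) for c in cfgs])                 # <phi^2>
two_point = np.mean([np.mean(c * np.roll(c, -3)) for c in cfgs])  # <phi(x)phi(x+3)>
```

Nothing about `phi_sq` or `two_point` was known when `cfgs` was generated, which is exactly why precomputed ensembles can be archived and reused.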
{ "domain": "physics.stackexchange", "id": 39256, "tags": "quantum-mechanics, quantum-field-theory, lattice-model" }
What does heating in the atmosphere look like above 100 km?
Question: I have tried without luck to find a graph of temperature change through the atmosphere that goes further up than about 100 km. On this graph: (Source: http://www.windows2universe.org/kids_space/temp_profile.html) as on many others the temperature rises and rises and continues like that out of the scale. How will the temperature curve look further up when the exosphere gets closer (at about 700 km or so, according to Wikipedia)? What is the max temperature the thermosphere will reach? Answer: A team at the University of Western Ontario used a lidar to measure "temperature" by inferring it from the blackbody spectrum of atmospheric gases at various heights. They arrived at this: However, as noted in the comments to the OP, the atmosphere at very high altitudes isn't really in thermal equilibrium and doesn't have a well-defined concept of temperature.
{ "domain": "physics.stackexchange", "id": 19451, "tags": "temperature, atmospheric-science" }
Set relationship between Big-Oh and Theta notations
Question: I was reading "Introduction to Algorithms" by CLRS and it says that: We write $f(n) = O(g(n))$ to indicate that a function $f(n)$ is a member of the set $O(g(n))$. Note that $f(n) = \Theta(g(n))$ implies $f(n)=O(g(n))$, since $\Theta$-notation is a stronger notion than $O$-notation. Written set-theoretically, we have $\Theta(g(n)) \subseteq O(g(n))$. Q1: What do the authors mean by a stronger notion? What is a stronger notion, when do we use it, how does it help us (here) to know one implication is stronger than the other, and how does it affect the implications we can draw? Q2: It seems contradictory in a sense to say that $\Theta$ is a stronger notion than $O$ and then write $\Theta(g(n)) \subseteq O(g(n))$. How does one deduce and then interpret the second sentence from the first? I would appreciate your answers. Answer: In general, a proposition $P$ is said to be stronger than $Q$ if $P$ implies $Q$ (symbolically, $P \Rightarrow Q$) and not ($Q \Rightarrow P$). In the example at hand, $f \in \Theta(g) \Rightarrow f \in \mathcal{O}(g)$, so that $f \in \Theta(g)$ is a stronger assertion than $f \in \mathcal{O}(g)$.
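A quick numerical illustration (not a proof, and the functions below are made-up examples): if $f \in \Theta(g)$ then $f/g$ is eventually pinned between two positive constants, which in particular supplies the upper-bound constant needed for $f \in O(g)$. The converse fails, which is why $\Theta(g) \subseteq O(g)$ is a proper inclusion:

```python
# f is Theta(n^2): c1*g <= f <= c2*g eventually, so in particular f is O(g).
f = lambda n: 3 * n * n + n
# h is O(n^2) but NOT Theta(n^2): h/g has no positive lower bound.
h = lambda n: n
g = lambda n: n * n

ns = [10 ** k for k in range(1, 7)]
ratios_f = [f(n) / g(n) for n in ns]
ratios_h = [h(n) / g(n) for n in ns]

# f/g stays within fixed positive bounds -> f in Theta(g), hence in O(g)
assert all(3 <= r <= 4 for r in ratios_f)
# h/g tends to 0 -> no constant c1 > 0 works, so h not in Theta(g),
# even though h(n) <= g(n) for n >= 1 shows h in O(g)
assert ratios_h[-1] < 1e-5
```

The asymmetry of the two ratio behaviours is exactly the set inclusion $\Theta(g) \subseteq O(g)$ stated by CLRS.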
{ "domain": "cs.stackexchange", "id": 17117, "tags": "algorithms, asymptotics" }
How to recognize colors from the color chart?
Question: I am developing a vision application that uses the color chart below and a camera to extract the color of each patch in the chart. In order to do that, I have to first detect the chart area in an image and match the area with an existing template which contains the locations of the patches. My question: I need to detect the color patch area by recognizing the four corners of the chart. You can see the corners that the small inset rectangles indicate in the image below. I know one way to go about it is to let the user select those corners by clicking them. But is there any way to automatically detect the four corners from the image? Answer: You can use Hough lines to detect the color chart area: first detect all lines; the lines with the longest length will be the outermost ones. Pick the top horizontal line and do a template match for the squares. Once you find the squares, traverse downwards till you reach the other set of squares. Mark all points where the squares were found. You now have an enclosing area for the color chart.
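As a rough illustration of the line-based idea, here is a simplified numpy sketch that uses row/column projections instead of a full Hough transform, on a synthetic edge image of a chart border (in a real pipeline you would run cv2.HoughLinesP on an edge map; the image and border coordinates below are invented for the demo):

```python
import numpy as np

# Synthetic edge map: a 200x300 image containing only a rectangular chart border.
img = np.zeros((200, 300), dtype=np.uint8)
img[40, 50:250] = 1   # top border
img[160, 50:250] = 1  # bottom border
img[40:161, 50] = 1   # left border
img[40:161, 249] = 1  # right border

# Long horizontal lines produce strong rows; long vertical lines strong columns.
row_strength = img.sum(axis=1)
col_strength = img.sum(axis=0)

rows = np.flatnonzero(row_strength > 0.5 * row_strength.max())
cols = np.flatnonzero(col_strength > 0.5 * col_strength.max())

# The outermost strong lines bound the chart; corners are their intersections.
top, bottom = rows.min(), rows.max()
left, right = cols.min(), cols.max()
corners = [(top, left), (top, right), (bottom, left), (bottom, right)]
```

On real photos the chart can be rotated or in perspective, which is why proper Hough line detection (plus the template match for the inset squares) is the more robust route.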
{ "domain": "dsp.stackexchange", "id": 255, "tags": "image-processing" }
The selection of a particular wavelength in He-Ne laser
Question: As we know, the He-Ne laser emits many wavelengths, including red (632.8 nm), green, etc. How does one select a particular wavelength (e.g. red)? Please explain briefly. Answer: You have to introduce some sort of frequency selection inside the resonator. For HeNe lasers the most straightforward way is to change the dielectric coating on the resonator mirrors so that they only reflect 99.9% at the wavelength you need (e.g. green). You can also introduce a tunable element like a Lyot filter, but this could be challenging, as the HeNe laser has somewhat low gain and is therefore very sensitive to losses.
{ "domain": "physics.stackexchange", "id": 38641, "tags": "optics, laser, frequency" }
How do you construct composite messages in rosjava?
Question: Recently, rosjava changed so that you can no longer create a message structure like so: org.ros.message.std_msgs.String str = new org.ros.message.std_msgs.String(); str.data = "Hello world! " + sequenceNumber; Instead it looks like the only way to create a message is by calling "newMessage()" on a publisher and then using accessor methods like so: std_msgs.String str = publisher.newMessage(); str.setData("Hello world! " + sequenceNumber); This makes it really hard to construct things like geometry_msgs/PoseArray messages, where you need to create a bunch of other message types to populate the message. Listing the symbols defined in "rosjava_messages-0.0.0-SNAPSHOT.jar" I don't see any instance fields or ways to create messages without an unclear series of factories. This diff shows the API change in use in the pubsub tutorial. (though it has changed a bit since then) For the curious, the symbols defined for geometry_msgs.PoseArray are: rosjava_messages-0.0.0-SNAPSHOT.jar(geometry_msgs/PoseArray.class): 00000279 C geometry_msgs.PoseArray 00000008 D geometry_msgs.PoseArray._TYPE:Ljava/lang/String; 00000008 D geometry_msgs.PoseArray._DEFINITION:Ljava/lang/String; T geometry_msgs.PoseArray.getHeader:()Lstd_msgs/Header; T geometry_msgs.PoseArray.setHeader:(Lstd_msgs/Header;)V T geometry_msgs.PoseArray.getPoses:()Ljava/util/List; T geometry_msgs.PoseArray.setPoses:(Ljava/util/List;)V And in this case, it's unclear to me how to construct geometry_msgs/Pose messages to populate the list of Poses in geometry_msgs/PoseArray. Originally posted by jbohren on ROS Answers with karma: 5809 on 2012-04-06 Post score: 0 Answer: You can make geometry_msgs/Pose messages now using the MessageFactory that you can get from Node::getTopicMessageFactory(). 
The MessageFactory takes a type that you can get through the interface like so: geometry_msgs.Pose msg = mNode.getTopicMessageFactory().newFromType(geometry_msgs.Pose._TYPE); At least that is how I am doing it on 438c2ba9b5ad, and looking at the hg repository, it doesn't look like this method has changed since then. Originally posted by jamuraa with karma: 218 on 2012-04-09 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by jbohren on 2012-04-09: That's what I was looking for, thanks! Comment by damonkohler on 2012-04-10: I'm thinking about a way to make this a little easier for collections of messages. In general I'd like to avoid the necessity of using MessageFactory instances like that.
{ "domain": "robotics.stackexchange", "id": 8894, "tags": "rosjava, android" }
Is pyrite (FeS₂) an ionic or a covalent compound?
Question: I have searched all over the web and found a lot of diverse explanations, but none of them are concluding exactly whether $\ce{FeS2}$ (solid - pyrite) is a covalent or an ionic compound. From electronegativity, it should be covalent as the $\Delta\chi=0.7$ which is less than $1.5$ and thus said to make covalent bonds and therefore be a covalent compound. From the definition of ionic bonds, which are bonds between a metal and a non-metal element (whereas covalent bonds are bonds between non-metal elements), it should be an ionic compound. Does someone know which of those is 'true', or better, if there is another, more detailed explanation? Answer: You seem to have fallen into the trap of thinking that ionic and covalent bonds are fundamentally different. They are not - they are just two ends of a spectrum, which has an arbitrary division somewhere in the middle into an ionic and covalent regime. This is explained in the answers to this question. In the case of pyrite we have a relatively hard cation, with a small ionic radius and charge of +2, and a rather large anion, with a charge of -2. Therefore there will be a significant degree of covalent character in the Fe-S bonds due to the polarising effect of the cation on the anion. This is confirmed by experimental results and theoretical calculations which suggest that the charge on Fe is about +2/3 and the charge on S is about -1/3. This is significantly less than the expected charges of +2 and -1 from a purely ionic model and so indicates that there is significant sharing of electrons. This is supported by measurements and calculations of the electron density, which show significant electron density between the atoms. Reference: http://pubs.rsc.org/en/content/articlepdf/2014/sc/c3sc52977k
{ "domain": "chemistry.stackexchange", "id": 5634, "tags": "bond, ionic-compounds, transition-metals, covalent-compounds" }
Why is there a derivative of the potential with respect to psi in the Klein-Gordon equation?
Question: In the Wikipedia article on the Klein-Gordon equation there is a section titled Klein–Gordon equation in a potential that gives the equation for a field $\psi$ in potential $V$ as: $$ \Box\psi + \frac{dV}{d\psi} = 0 $$ Why is there a derivative of the potential in this equation? What does it mean? Answer: Because that is how the equation of motion is derived from the action principle. You have some action functional and look for its stationary points: configurations for which the first-order variation vanishes for any infinitesimal function $\delta \phi$ satisfying certain conditions, like differentiability up to a given order and vanishing on the boundaries of the integration domain. More precisely, the action for a Klein-Gordon theory with some potential is: $$ S = \int d^{D} x \left(\frac{1}{2}\partial_\mu \phi \partial^\mu \phi - V(\phi) \right) $$ We look for the stationary points of this functional, i.e. for $\phi + \delta \phi $ the variation vanishes at first order: $$ \delta S = \int d^{D} x \left(\partial_\mu \delta \phi \partial^\mu \phi - \frac{\partial V(\phi)}{\partial \phi} \delta \phi \right) $$ Note here that the leading-order term in $V(\phi + \delta \phi) - V(\phi)$ is proportional to the derivative of $V(\phi)$ with respect to the field $\phi$. Then after integration by parts, one has: $$ \delta S = -\int d^{D} x \left( \partial_\mu \partial^\mu \phi + \frac{\partial V(\phi)}{\partial \phi} \right) \delta \phi $$ And in order for this variation $\delta S$ to be zero for all admissible variations of $\phi$, the following equation has to hold: $$ \partial_\mu \partial^\mu \phi + \frac{\partial V(\phi)}{\partial \phi} = 0 $$ which is the Klein-Gordon equation with a potential.
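As a concrete check of this variational recipe, take the standard mass term as an illustrative choice of potential:

```latex
V(\phi) = \tfrac{1}{2} m^2 \phi^2
\quad\Rightarrow\quad
\frac{\partial V}{\partial \phi} = m^2 \phi
\quad\Rightarrow\quad
\partial_\mu \partial^\mu \phi + m^2 \phi = 0,
```

which is the familiar free Klein-Gordon equation $(\Box + m^2)\phi = 0$. An interaction such as $V = \tfrac{1}{2} m^2 \phi^2 + \tfrac{\lambda}{4!}\phi^4$ would instead contribute $m^2\phi + \tfrac{\lambda}{3!}\phi^3$, making the equation of motion nonlinear; the $dV/d\phi$ term is simply where the potential's force-like contribution enters.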
{ "domain": "physics.stackexchange", "id": 78732, "tags": "klein-gordon-equation" }
Why is the gravitational potential the same?
Question: Why is the gravitational potential at the surface of a hollow sphere equal to the gravitational potential inside the sphere, which is $-\frac {Gm}{r}$? Does this mean that the potential is the same at every place inside the sphere? N.B.: the hollow sphere generates the gravitational field. Answer: Yes, inside the hollow shell the potential is the same everywhere, thus the net gravitational force on a test mass inside the shell is zero. This is the famous shell theorem of classical gravitation theory, which is explained in many textbooks (see this link for a description). The reason for this is basically Gauss's flux theorem for gravity.
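A short way to see it, sketched with Gauss's law for gravity (taking $M$ as the shell's mass and $R$ its radius): by spherical symmetry, a concentric Gaussian sphere of radius $r < R$ encloses no mass, so

```latex
\oint \mathbf{g}\cdot d\mathbf{A} = -4\pi G M_{\text{enc}} = 0
\;\Rightarrow\; \mathbf{g} = 0 \text{ inside}
\;\Rightarrow\; V(r) = \mathrm{const} = V(R) = -\frac{GM}{R} \quad (r \le R).
```

Since $\mathbf{g} = -\nabla V$ vanishes everywhere inside, the potential cannot vary there, and continuity at the surface fixes the constant to the surface value $-GM/R$.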
{ "domain": "physics.stackexchange", "id": 47623, "tags": "newtonian-gravity, symmetry, potential-energy, gauss-law" }
How can I translate Whisper encodings to SBERT embeddings?
Question: I'm using the Whisper model to recognize speech, and then matching the output text against a list of known questions by generating SBERT embeddings from the text and ranking the known questions by cosine similarity with the SBERT embeddings of the text output from Whisper. It works pretty well, and I'm happy with the level of accuracy. I'd like to streamline the process a bit, and I understand I can get the encoder embeddings from the whisper model output rather than just the transcribed text. My question is: What's the best way to fuse these steps together? More generally, is there a good way to translate embeddings from one model vector space to another? What would that task even be called in terms of linear algebra? Answer: The problem here is that the SBERT embedding of a piece of text is a single vector, while the embeddings you get from Whisper are a sequence of vectors. Therefore, it's not a matter of just mapping two embedded spaces, but mapping a sequence of vectors in an embedded space to a single vector in a different space. Of course, you could train a small multi-head attention to mimic the equivalent SBERT but nothing guarantees that such an approach would give comparable results to computing the SBERT from Whisper's output text, or that it would be computationally worth it.
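One minimal version of such a "translation" is to mean-pool Whisper's sequence of encoder states into a single vector and then fit a linear map into SBERT space by least squares; in linear algebra terms this is a learned linear projection between the two vector spaces. The sketch below uses random arrays as stand-ins for real pooled Whisper encodings and SBERT targets, and the dimensions (512 and 384) are assumptions, not guaranteed to match your actual model variants:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: Whisper encoder states pooled to 512-d,
# SBERT sentence embeddings 384-d; 200 paired training utterances.
d_whisper, d_sbert, n_utts = 512, 384, 200

# Stand-in data: X = mean-pooled Whisper encoder outputs, Y = SBERT targets
# computed from the corresponding transcripts.
X = rng.normal(size=(n_utts, d_whisper))
Y = rng.normal(size=(n_utts, d_sbert))

# Least-squares linear map W minimizing ||X W - Y||_F (a linear probe).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

pred = X @ W  # predicted SBERT-space embeddings, one vector per utterance
```

As the answer notes, a single linear map on pooled vectors may lose information compared with attention over the full sequence, so whether this matches transcribe-then-SBERT quality is an empirical question.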
{ "domain": "datascience.stackexchange", "id": 11505, "tags": "transformer, bert" }
How did Earth's atmospheric layers get their names?
Question: How are atmospheric layers named once they have been discovered? Answer: Who gave the names troposphere and stratosphere? The stratosphere was first discovered by Léon-Philippe Teisserenc de Bort and Richard Aßmann (they did not cooperate) around 1900. Teisserenc de Bort denoted that layer as 'zone isotherme' in his publication on his discovery; however, he published in French. I remember having read somewhere - not sure where - that he gave the two names 'troposphere' and 'stratosphere' to the two lower layers after discovering the existence of a quite distinct second layer. Unfortunately, I cannot find the reference anymore. In the article Hoinka (1997) you will find some information about the discovery of the stratosphere. What the names mean:
Troposphere: tropos (Greek), turn(ing). Turbulent mixing is relevant in this layer of the atmosphere => turbulent mixing sphere => troposphere.
Stratosphere: stratum (Latin), something which covers something else (pavement, blanket). This layer of the atmosphere, which is not dominated by turbulent mixing, covers the lower turbulent layer => covering sphere => stratosphere.
Mesosphere: mesos (Greek), middle. This part of the atmosphere is the third of the five (human-defined) layers => layer in the middle (not by distance/height but by counting layers) => mesosphere.
Thermosphere: thermos (Greek), warm, heat. This layer can be heated to above 1000 °C by the Sun; however, it does not feel 'warm' because of the low 'air' density => heated/hot layer => thermosphere.
Exosphere: éxo (Greek), outside, external. This is the outermost layer of the atmosphere => located at the outer side => exosphere.
{ "domain": "earthscience.stackexchange", "id": 755, "tags": "atmosphere, history-of-science" }
Energy In Quantum Mechanics (Pythagorean theorem)
Question: In the Schrodinger Equation for a free electron in three dimensions, can the energy eigenvalue E always be broken up into x y and z components such that $E^2 = E_x^2 + E_y^2 + E_z^2$? What is the reasoning behind the answer? Answer: For this to work it must be possible to break up the Schrodinger equation into three independent equations: $$ -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x_i^2}\psi_i(x_i)+V(x_i)\psi_i(x_i)=E_i\psi_i(x_i). \tag{1} $$ with $x_1=x,x_2=y,x_3=z$, and this can happen only if the potential function $V(x_1,x_2,x_3)$ can be broken as a sum $V_1(x_1)+V_2(x_2)+V_3(x_3)$ of potentials each one independent from the other. The case of a free particle is the one where $V_i(x_i)=0$, meaning that the potential along $x_1$ (it is $0$ in this direction) is independent of the potential in $x_2$ (it is also $0$ in this direction). In particular, using separation of variables with $V=0$, we have $$ -\frac{\hbar^2}{2m}\sum_i\frac{\partial^2}{\partial x_i^2}\psi(x_1,x_2,x_3)=E\psi(x_1,x_2,x_3) $$ where $E=E_1+E_2+E_3$ and $E_i$ is the eigenvalue for Eq.(1) and $\psi(x_1,x_2,x_3)=\psi_1(x_1)\psi_2(x_2)\psi_3(x_3)$, with $\psi_i(x_i)$ the solution to (1) associated with the eigenvalue $E_i$.
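The free-particle plane wave makes the distinction explicit (a clarifying remark on the question's "Pythagorean" form: it is the momenta, not the energies, that combine in quadrature):

```latex
\psi = e^{i(k_x x + k_y y + k_z z)}, \qquad
E_i = \frac{\hbar^2 k_i^2}{2m}, \qquad
E = \frac{\hbar^2 k^2}{2m}
  = \frac{\hbar^2 \left(k_x^2 + k_y^2 + k_z^2\right)}{2m}
  = E_x + E_y + E_z .
```

So $p^2 = p_x^2 + p_y^2 + p_z^2$ holds, but the separated energies satisfy $E = E_x + E_y + E_z$ rather than $E^2 = E_x^2 + E_y^2 + E_z^2$.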
{ "domain": "physics.stackexchange", "id": 51220, "tags": "quantum-mechanics, energy" }
How to prove the equivalence of Wigner distribution function expressions?
Question: I'm currently going through Goodman's Introduction to Fourier Optics, Fourth Edition and I'm at the Wigner distribution function section, where he states the definition: $$W_{g}(x,y;f_{x},f_{y}) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x + \Delta x/2, y + \Delta y/2) g^{*}(x - \Delta x/2, y - \Delta y/2)\times e^{-2\pi j (f_{x}\Delta x + f_{y}\Delta y)} d\Delta x d\Delta y.$$ He then goes on to say that by replacing $g$ and $g^{*}$ with their Fourier integrals, using the shift theorem and the sifting property of delta functions, we can obtain the equivalent expression: $$W_{g}(x,y;f_{x},f_{y}) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} G(f_{x} + \Delta f_{x}/2, f_{y} + \Delta f_{y}/2) G^{*}(f_{x} - \Delta f_{x}/2, f_{y} - \Delta f_{y}/2)\times e^{2\pi j (x\Delta f_{x} + y\Delta f_{y})} d\Delta f_{x} d\Delta f_{y},$$ where $G$ is the Fourier transform of $g$. For a generic function the Fourier transform used is: $$G(f_{x}, f_{y}) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x,y) e^{-2\pi j (f_{x}x+f_{y}y)}dxdy$$ I've been trying to derive this result by attempting what he says to do, but have so far failed. I think I might be missing a substitution trick somewhere. I've also searched for other sources where this equivalence could be proven, but in the ones I have found, either both expressions are stated as definitions, or the equivalence result is just stated but never proven. Could someone please help me with this? Answer: Looks like a homework problem. In any case, working in one dimension (the second pair of variables goes through identically), the book's convention gives $$ G(k) = \int \!\! dx ~ e^{-2\pi j xk}~ g(x) \qquad \leadsto \qquad g(x) = \int \!\! dk ~ e^{2\pi j xk}~ G(k), $$ so that $$ g(x+u/2)= \int \!\! dk' ~ e^{2\pi j k' (x+u/2)}~ G(k'), \qquad g^*(x-u/2)= \int \!\! dk'' ~ e^{-2\pi j k'' (x-u/2)}~ G^*(k''). $$ Substituting in $W$ and doing the $u$ integral first (which produces a delta function), $$ W(x;k)= \int \!\! du \int \!\! dk' \int \!\! dk'' ~ e^{2\pi j( k'x+k'u/2-k''x +k''u/2- ku )} G(k') G^*(k'') \\ = \int \!\! dk' \int \!\!
dk'' ~\delta\left (k-\frac{k'+k''}{2}\right ) ~e^{2\pi jx( k'-k'')} G(k') G^*(k'') \\ = \int \!\! dw ~ e^{2\pi jxw } G(k+w/2) G^*(k-w/2)~, $$ where the last step substitutes $k'=k+w/2$, $k''=k-w/2$, so that $k'-k''=w$ and the delta function is saturated.
{ "domain": "physics.stackexchange", "id": 99376, "tags": "optics, fourier-transform" }
Number of stable molecules of artificial molecule XY4
Question: Suppose element X has $5$ stable isotopes, and element Y has $6$ stable isotopes. Find the number of natural molecules, knowing that X has a charge of $+4$ and Y has a charge of $-1$. What I tried was this: for element X we have $5$ options, and for each element Y we have $6$ options. Therefore the total number of options would be $5 × 6 × 6 × 6 × 6$. But this answer is clearly wrong, as it counts the same type of molecule multiple times: (AAAB) and (AABA) count twice, which we don't want to happen. (A is an isotope of Y and B is some other isotope of Y.) What can I do? Answer: There are 5 ways to choose the central atom. Choosing the possible combinations of the other 4 atoms from 6 isotopes is more complicated, as this article explains, but it provides a formula for combinations with repetition that gives the answer of 126. This implies that there are 630 distinct isotope combinations in the molecule (assuming that we can distinguish isotope combinations with the same nominal total mass, which is possible with high-resolution mass spectrometry). If only the integer total mass counts, the number of distinguishable combinations will be lower; the easiest way to work that out is a Monte-Carlo simulation rather than looking up probability formulae.
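The count quoted in the answer can be checked directly with the combinations-with-repetition (multiset coefficient) formula $\binom{n+k-1}{k}$:

```python
from math import comb

# Choose 4 Y isotopes from 6 types with repetition allowed; order is
# irrelevant because the four Y positions in XY4 are equivalent.
y_combinations = comb(6 + 4 - 1, 4)   # C(9, 4) = 126

# 5 independent choices for the central X isotope.
total = 5 * y_combinations            # 630 distinct isotopologues

print(y_combinations, total)
```

This reproduces the 126 combinations of Y isotopes and the 630 total from the answer.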
{ "domain": "chemistry.stackexchange", "id": 11710, "tags": "inorganic-chemistry, theoretical-chemistry, covalent-compounds, isotope" }
winros: can't find tf/tfMessage type?
Question: I have successfully built the WinROS sdk and samples using VS2010. I have created a simple program that writes messages to a bag file. My ultimate goal is to write tf/tfMessage type messages to the /tf topic in a bag, then to play that bag on Ubuntu for other existing nodes listening on that topic, but I don't see the tf/tfMessage type defined anywhere (thought it would be in "geometry_msgs"). I can write geometry_msgs/TransformStamped type messages, which seem to have the same data, but are not compatible with the /tf topic. Any suggestions? Originally posted by gershon on ROS Answers with karma: 73 on 2014-01-22 Post score: 0 Original comments Comment by gershon on 2014-01-23: Can someone confirm that tf is not ported to winros? I just want to know that I'm not missing something. Seems like a very important library so I thought it was, but maybe not. Comment by gershon on 2014-01-23: Also, upon further inspection it appears that the tf/tfMessage is just an array of geometry_msgs/TransformStamped type messages. Can someone confirm that for me? I'm thinking it might not be that hard to mock up one of these in that namespace, copy the MD5 from the real one and make it work? Answer: Indeed, tf has not yet been ported to windows. We were backing off from doing that because of the move from tf to tf2, and we also didn't have a use case for it ourselves. It would be great if someone stepped in and did it. It would certainly be useful for a lot of folk. Originally posted by Daniel Stonier with karma: 3170 on 2014-01-29 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gershon on 2014-01-29: I understand. Thanks. Comment by tav on 2014-08-26: Hi there, I am now looking for tf/transform_broadcaster.h in winros and I cannot find it in the new version (sdk-hydro-x86-vs10-0.3.1) either. Has it still not been ported into winros after almost 7 months?
{ "domain": "robotics.stackexchange", "id": 16735, "tags": "ros, winros, transform" }
At what temperature are the most elements of the periodic table liquid?
Question: For elements where 'liquid' is relatively easy to define, at which temperature are the most elements liquid, and which ones? Assume 1 atm. Answer: We take the natural elements (atomic numbers $1$ to $92$) to have well-defined melting and boiling points. Additionally, all figures are quoted at the standard $1\text{ atm}$ pressure. Here's some quick Python code that fetches the number of elements in the liquid state at various temperatures (note that it relies on the values of the melting and boiling points of elements defined in the mendeleev package; however, it is straightforward to instead use your own dataset).

from mendeleev import element

elements = [element(i) for i in range(1, 92 + 1)]  # Hydrogen to Uranium
n_liquid_list = [
    sum(elem.melting_point < temp < elem.boiling_point for elem in elements)
    for temp in range(0, 5000)
]
n_liquid_max = max(n_liquid_list)
temperature = n_liquid_list.index(n_liquid_max)
print(n_liquid_max, "elements are in the liquid state at", temperature, "K")

prints

38 elements are in the liquid state at 2161 K

For the full list, just add in

for elem in elements:
    if elem.melting_point < temperature < elem.boiling_point:
        print(elem.name, end=", ")

which gives you Beryllium, Aluminum, Silicon, Scandium, Titanium, Vanadium, Chromium, Manganese, Iron, Cobalt, Nickel, Copper, Gallium, Germanium, Yttrium, Zirconium, Palladium, Silver, Indium, Tin, Lanthanum, Cerium, Praseodymium, Neodymium, Promethium, Gadolinium, Terbium, Dysprosium, Holmium, Erbium, Thulium, Lutetium, Platinum, Gold, Actinium, Thorium, Protactinium, Uranium, which broadly fall under transition metals, lanthanides and actinides.
As noted by ChrisH, there is an additional such temperature range starting at $2584\text{ K}$, in which the liquid elements are: Beryllium, Boron, Aluminum, Silicon, Scandium, Titanium, Vanadium, Chromium, Iron, Cobalt, Nickel, Copper, Gallium, Germanium, Yttrium, Zirconium, Technetium, Ruthenium, Rhodium, Palladium, Lanthanum, Cerium, Praseodymium, Neodymium, Promethium, Gadolinium, Terbium, Dysprosium, Holmium, Erbium, Lutetium, Hafnium, Platinum, Gold, Actinium, Thorium, Protactinium, Uranium I also thought it might be interesting to plot the total number of elements in the liquid state at a given temperature as a function of temperature: The first peak corresponds to the temperature range $2161\text{ K} - 2219 \text{ K}$ while the second peak corresponds to $2584\text{ K} - 2627 \text{ K}$.
{ "domain": "physics.stackexchange", "id": 76438, "tags": "states-of-matter, elements" }
Does water need to be pumped up from the deep ocean?
Question: OTEC (Ocean Thermal Energy Conversion) utilizes the temperature gradient between cold deep ocean water and warmer surface water to do work. I understand the pressure in the depths may be as much as a couple of orders of magnitude greater than surface atmospheric pressure. I also remember, vaguely, that a fluid moves from an area of high pressure to low pressure; wouldn't a sealed pipe merely need valves at the top to control the flow? Does water need to be pumped up out of the deeps? Answer: Typically, yes, the water does need to be pumped up, because if it released energy by rising, it would already have risen to the surface. OTEC depends on a high enough temperature difference between the lower-depth water intake and the higher-depth one, for that temperature difference to do enough work to provide some surplus power in addition to the power needed to pump the water up. Tepco's OTEC plant on Nauru (1982-3) reportedly generated 120 kW of electricity gross, of which 90 kW was needed to operate the plant. The surplus 30 kW was fed into the grid. More context: OTEC is estimated to be viable with a ${\Delta}T$ of 20 Kelvin, so definitely the tropics, and predominantly the western Pacific. Unsurprisingly, Japan has been particularly active in OTEC. The global harnessable resource is estimated at $10^{13}\,W$, which is the same order of magnitude as total global energy consumption.
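A back-of-the-envelope check of why the pumping overhead dominates (the water temperatures below are assumed typical tropical values, not figures from the source): with only a 20 K difference, even the ideal Carnot efficiency is a few percent, so nearly all the gross output goes back into running the plant.

```python
# Carnot upper bound on OTEC efficiency with ~300 K surface water and
# ~280 K deep water (assumed illustrative temperatures).
t_hot, t_cold = 300.0, 280.0       # kelvin
eta_carnot = 1 - t_cold / t_hot    # about 6.7 %, before any real-world losses

# Tepco Nauru plant figures quoted in the answer:
gross, pumping = 120.0, 90.0       # kW gross output and kW of parasitic load
net = gross - pumping              # kW actually fed into the grid

print(f"Carnot bound: {eta_carnot:.1%}, net output: {net:.0f} kW")
```

Real heat engines fall well short of the Carnot bound, which is consistent with three quarters of the Nauru plant's gross output being consumed by its own pumps.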
{ "domain": "physics.stackexchange", "id": 4442, "tags": "fluid-dynamics, renewable-energy" }
Generic wrapper for equality and hash implementation
Question: I was writing an answer in another thread and came across this challenge. I'm trying to have a generic class so that I can delegate the routine (and tiring) Equals and GetHashCode implementation to it (which should handle all that). With the code, things will be clearer. Please note that I have no problem with performance as of now, but when writing a generic library for future use, I'm contemplating better designs. I'll include the bare minimum code required to drive the idea.

Approach 1

public class Equater<T> : IEqualityComparer<T>
{
    public IEnumerable<Func<T, object>> Keys { get; private set; }

    public Equater(params Func<T, object>[] keys) { Keys = keys; }

    public bool Equals(T x, T y) { ---- }
    public int GetHashCode(T obj) { ..... }
}

// an example usage
public class Dao : IEquatable<Dao>
{
    static Equater<Dao> equater = new Equater<Dao>(x => x.Id, x => x.Table);

    public bool Equals(Dao other) { return equater.Equals(this, other); }
    public override int GetHashCode() { return equater.GetHashCode(this); }
}

This is great considering it works for any number of properties, but performance sucks; boxing, I believe, is the culprit. Runs in around 260 ms for about 100000 calls to GetHashCode.

Approach 2

public static class Equater<T>
{
    public static Func<T, T, bool> equals;
    public static Func<T, int> getHashCode;

    public static void Set<TKey1, TKey2>(Func<T, TKey1> key1Selector, Func<T, TKey2> key2Selector)
    {
        equals = (x, y) => { ---- };
        getHashCode = t => { .... };
    }

    // other overloads of Set with varying type arguments
}

// an example usage
public class Dao : IEquatable<Dao>
{
    static Dao() { Equater<Dao>.Set(x => x.Id, x => x.Table); }

    public bool Equals(Dao other) { return Equater<Dao>.equals(this, other); }
    public override int GetHashCode() { return Equater<Dao>.getHashCode(this); }
}

Runs in around 100 - 110 ms this time.
Approach 3 public abstract class Equater<T> : IEqualityComparer<T> { public static Equater<T> Create<TKey>(Func<T, TKey> keySelector) { return new Impl<TKey>(keySelector); } public static Equater<T> Create<TKey1, TKey2>(Func<T, TKey1> key1Selector, Func<T, TKey2> key2Selector) { return new Impl<TKey1, TKey2>(key1Selector, key2Selector); } //etc. other overloads of Create with varying type arguments public abstract bool Equals(T x, T y); public abstract int GetHashCode(T obj); class Impl<TKey> : Equater<T> { readonly Func<T, TKey> keySelector; public Impl(Func<T, TKey> keySelector) { this.keySelector = keySelector; } public override bool Equals(T x, T y) { ---- } public override int GetHashCode(T obj) { .... } } class Impl<TKey1, TKey2> : Equater<T> { readonly Func<T, TKey1> key1Selector; readonly Func<T, TKey2> key2Selector; public Impl(Func<T, TKey1> key1Selector, Func<T, TKey2> key2Selector) { this.key1Selector = key1Selector; this.key2Selector = key2Selector; } public override bool Equals(T x, T y) { ---- } public override int GetHashCode(T obj) { .... } } } //an example usage public class Dao : IEquatable<Dao> { static Equater<Dao> equater = Equater<Dao>.Create(x => x.Id, x => x.Table); public bool Equals(Dao other) { return equater.Equals(this, other); } public override int GetHashCode() { return equater.GetHashCode(this); } } Approaches 80 - 90 ms. This is a lot more verbose since I have to write a class for every additional type argument, but it is the fastest. Can I implement it better? Please note that my question is not about performance as such (for that reason I've not included the critical GetHashCode part as my question is not about that). I'm trying to know if I can implement the above in a cooler and more efficient way. In other words it's more about efficiency than performance. Answer: Yes, boxing is likely the culprit. To fix that, you need to type your Funcs as in Example 3. Just make your Equater class better so it's not as verbose. 
Expression trees won't buy you anything here. public class Equater<T> { private readonly List<IEquaterFunc<T>> _equaterFuncs = new List<IEquaterFunc<T>>(); public bool Equals(T x, T y) { return _equaterFuncs.All(equaterFunc => equaterFunc.Equals(x, y)); } public int GetHashCode(T obj) { //do something } public void AddEquaterFunc<TProperty>(Func<T, TProperty> equaterFunc) { _equaterFuncs.Add(new EquaterFunc<T, TProperty>(equaterFunc)); } } public interface IEquaterFunc<T> { bool Equals(T x, T y); int GetHashCode(T obj); } public class EquaterFunc<T, TProperty> : IEquaterFunc<T> { private readonly Func<T, TProperty> _func; public EquaterFunc(Func<T, TProperty> func) { _func = func; } public bool Equals(T x, T y) { //use EqualityComparer<TProperty>.Default to avoid boxing return EqualityComparer<TProperty>.Default.Equals(_func(x), _func(y)); } public int GetHashCode(T obj) { TProperty value = _func(obj); return ReferenceEquals(value, null) ? 0 : value.GetHashCode(); } } //an example usage public class Dao : IEquatable<Dao> { private static readonly Equater<Dao> Equater; static Dao() { Equater = new Equater<Dao>(); Equater.AddEquaterFunc(x => x.Id); Equater.AddEquaterFunc(x => x.Table); } public bool Equals(Dao other) { return Equater.Equals(this, other); } public override int GetHashCode() { return Equater.GetHashCode(this); } public int Id { get; set; } public string Table { get; set; } }
{ "domain": "codereview.stackexchange", "id": 8529, "tags": "c#, performance, generics" }
What is the work per length if you want to move two infinite parallel current-carrying wires?
Question: I'm having trouble trying to get the correct units of this problem. The force per length is: $$\frac{\vec{F}}{l}=\frac{-I_1I_2\mu _0}{2\pi r}\hat{r}$$ Let's say that the currents are 1A and the initial distance between them is 1 meter. $$\frac{I_1I_2\mu _0}{2\pi}=\frac{A^2\times 4\pi \times 10^{-7}N/A^2}{2\pi}=2\times 10^{-7}N$$ Now, if we want to get the work per length I think (here is where I'm not sure) we can use the formula: $$W=\int \vec{F}\cdot d\vec{r}$$ In our case we want the work per length and we know the force per length: $$\frac{W}{l}=\int \frac{\vec{F}}{l}\cdot d\vec{r}$$ In our case: $$\frac{W}{l}=\frac{-2\times 10^{-7}N}{l}\int _{1m}^{2m}\frac{dr}{r}$$ But the integral gives $\ln(2)$, so we don't have units of work. Thanks for your time. Answer: Equation 5 doesn't follow from your previous equations. You've added an extra factor of 1/l by going from $rF/l = D$ (equation 2), where D is some constant with units of Newtons, to $rF/l = D/l$ in equation 5. If you get rid of the extra 1/l factor your dimensional analysis will check out. Edit with some additions. As a general piece of advice for all mathematical physics (and math in general): avoid numbers wherever possible until you're ready to calculate specific values. The 1/l error would likely not have been made if you had started by saying Given some constants $I_1$ and $I_2$ let $D = \frac {-I_1 I_2 \mu _0} {2 \pi}$ note that the units of D are $A*A*N/A^2 = N$ Some things to note, that are useful to make explicit either in words or with a diagram: By selecting some displacement vector $\vec r$ measuring the distance from one wire to the other, we have chosen the origin wire as a stationary reference frame and the source of the relevant fields. We have thus chosen the other wire as a moveable object with a magnetic charge which will be pushed or pulled by the relevant fields. 
We'll need this framing to be explicit so that we can use the right-hand rule to determine the direction of the force vector. Also by assigning $\vec r$ we have chosen that the geometric origin (r=0) is along the axis of the first wire, and the $\hat r$ direction is positive for any forces, velocities, etc. Thus the negative sign in D indicates the direction of movement (towards the origin wire). Diagramming everything is a really good idea, and will help you identify what are the physical meanings of your implicit mathematical assumptions, and what are the physical meanings of your calculated results. For instance, here you asked the question, "what is the work per length if you move the wire?" but the thing you have calculated is "what is the work done by the field on the free wire?" If we wanted to ask about you moving the wire, we would have to add a you to the system, applying a force equal and opposite to the field's force. Diagram. Identify the physical meaning of every mathematical representation. Make all assumptions explicit. Make sure you are answering the question you are asking.
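The dimensional bookkeeping in the answer is easy to verify numerically. A minimal sketch (not from the original post), using the question's values of 1 A in each wire and a separation going from 1 m to 2 m:

```python
import math

# Values from the worked example in the question (SI units).
MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, N/A^2
I1 = I2 = 1.0               # currents, A
R0, R1 = 1.0, 2.0           # initial and final separation, m

# Prefactor D = mu0*I1*I2/(2*pi); since F/l = -D/r, D carries units of N.
D = MU_0 * I1 * I2 / (2 * math.pi)          # exactly 2e-7 N

# Work per unit length done against the attraction as the separation
# goes from R0 to R1: W/l = D * ln(R1/R0).  The logarithm is
# dimensionless, so W/l keeps the units of D (N, i.e. J per metre of wire).
work_per_length = D * math.log(R1 / R0)

print(work_per_length)  # ~1.39e-7 J/m
```

The $\ln 2$ factor is dimensionless, exactly as the answer argues: once the spurious 1/l is removed, the units come entirely from the constant D.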
{ "domain": "physics.stackexchange", "id": 81523, "tags": "electromagnetism, work, units" }
Vue component over-complication
Question: I'm building a date range selector, and while this works, I feel like I'm making it more complicated than it needs to be. Is there a more elegant way of writing this? Possibly using computed values or watches? <template> <form> <select @change="rangeSelection"> <option v-for="(option, key) in rangeOptions" :key="key" :value="key"> {{option.display}} </option> </select> </form> </template> <script> import moment from 'moment'; export default { name: 'DateRangeChooser', data: () => { return { selectedRange: "last7Days", startDate: moment().subtract(8, 'days'), endDate: moment().subtract(1, 'days'), rangeOptions: { last7Days: { display: 'Last 7 Days', startDate: moment().subtract(8, 'days'), endDate: moment().subtract(1, 'days') }, lastWeek: { display: 'Last Week', startDate: moment().startOf('week').subtract(1, 'week'), endDate: moment().endOf('week').subtract(1, 'week') }, last30days: { display: 'Last 30 days', startDate: moment().subtract(31, 'days'), endDate: moment().subtract(1, 'days') } } } }, methods: { rangeSelection: function(e){ this.endDate = this.rangeOptions[e.currentTarget.value].endDate; this.startDate = this.rangeOptions[e.currentTarget.value].startDate; } } }; </script> Answer: Yes you can use computed properties: computed: { endDate: function() { //determine endDate based on value of selectedRange }, startDate: function() { //determine startDate based on value of selectedRange } } And v-model can be used to bind the value of the select list to that data property: <select v-model="selectedRange"> Then use that property in the computed properties: computed: { endDate: function() { return this.rangeOptions[this.selectedRange].endDate; }, startDate: function() { return this.rangeOptions[this.selectedRange].startDate; } } With this approach there is no need to define the rangeSelection method and bind it to the onchange property. 
rangeOptions could also be moved outside the component, and then the first key could be used to select the default value of selectedRange instead of hard coding it. See the snippet below. const rangeOptions = { last7Days: { display: 'Last 7 Days', startDate: moment().subtract(8, 'days'), endDate: moment().subtract(1, 'days') }, lastWeek: { display: 'Last Week', startDate: moment().startOf('week').subtract(1, 'week'), endDate: moment().endOf('week').subtract(1, 'week') }, last30days: { display: 'Last 30 days', startDate: moment().subtract(31, 'days'), endDate: moment().subtract(1, 'days') } }; const form = new Vue({ el: '#DateRangeChooser', data: () => { return { selectedRange: Object.keys(rangeOptions)[0], rangeOptions } }, computed: { endDate: function() { return rangeOptions[this.selectedRange].endDate; }, startDate: function() { return rangeOptions[this.selectedRange].startDate; } } }); <script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.22.2/moment.min.js"></script> <script src="https://cdn.jsdelivr.net/npm/vue/dist/vue.js"></script> <div id="DateRangeChooser"> <select v-model="selectedRange"> <option v-for="(option, key) in rangeOptions" :key="key" :value="key"> {{option.display}} </option> </select> <div>Start: {{startDate.format('LL')}}</div> <div>End: {{endDate.format('LL')}}</div> </div> ###Edit @blindman67 brought up an interesting point: Sorry my VUE knowledge is limited but I thought computed properties were cached. Could be a problem in some edge cases, eg time near midnight and user selects, then changes their mind waits till after midnight selects again?1 That is true: ... computed properties are cached based on their dependencies2 if caching is an issue, then you may need to look at using a watcher or just calculate the computed property each time instead of referencing values within this.rangeOptions. 
1https://codereview.stackexchange.com/questions/206192/vue-component-over-complication/206213?noredirect=1#comment397843_206213 2https://v2.vuejs.org/v2/guide/computed.html#Computed-Caching-vs-Methods
{ "domain": "codereview.stackexchange", "id": 32402, "tags": "javascript, ecmascript-6, vue.js" }
Dead weight pressure tester for very low positive pressures
Question: Does the dead weight tester technology provide capability for an accurate, low pressure calibration standard in the range of zero to 150 cm H2O, and if so is there a manufacturer that provides such an instrument? If not, what is the best primary standard for that range of pressure in terms of accuracy? I suspect for dead weight testers there might be a lower limitation due to the balance of gravitational force and frictional force. I am looking to measure gauge pressure. Answer: In this situation, you might do just as well to use a U-tube manometer. In some respects, this is a sort of dead-weight tester where the fluid itself is the dead weight. In this device pressure is determined from the difference in height between two connected volumes of fluid. Usually this is just a transparent tube bent into a 'U' shape, placed upright in front of a convenient means of measuring the relative heights of the free surfaces of the liquid. One end of the tube is connected to the pressurized volume, while the other is left open. The pressure difference may be calculated from the height difference. $$ P = \rho g h $$ where $\rho$ is the density of the liquid ($\sim 1000 \, kg/m^3$ for water) and $g$ is gravitational acceleration ($9.81 \, m/s^2$ in most places). Or, if you need your measurements in cm of water, simply use water as the working fluid and the height difference will be exactly the quantity you want. This is basically why "length units of liquid" can be used as a unit of pressure*. If a standard manometer is not sensitive enough, you can tilt it off of vertical by some known angle. As a result, the liquid will travel a larger, more easily measured, distance along the tubes to achieve the same difference in height. Despite the seemingly primitive technology involved, measurements of this kind can be quite accurate when performed correctly. There's a nice document from NIST that goes into detail about how to maximize the accuracy of these devices. 
Also a review by Ruthberg specifically for low pressure measurements. *I don't mean to endorse these units of pressure. The pascal ($Pa$ or $N/m^2$) is the correct SI unit of pressure derived from the newton and the meter.
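For concreteness, the two relations above ($P = \rho g h$ and the sensitivity gain from inclining the tube) can be sketched as follows. The 10-degree tilt is an illustrative value of my own, not a recommendation, and the angle here is measured from the horizontal:

```python
import math

RHO_WATER = 1000.0  # kg/m^3, as quoted in the answer
G = 9.81            # m/s^2

def manometer_pressure(height_m, rho=RHO_WATER, g=G):
    """Gauge pressure read off a U-tube manometer: P = rho * g * h (Pa)."""
    return rho * g * height_m

def inclined_travel(height_m, angle_deg):
    """Distance the liquid moves along a tube tilted angle_deg above the
    horizontal to produce the given vertical height difference:
    L = h / sin(theta)."""
    return height_m / math.sin(math.radians(angle_deg))

# Top of the range asked about: 150 cm H2O.
p_max = manometer_pressure(1.50)        # ~1.47e4 Pa

# Tilting gains sensitivity: 1 cm of head moves the liquid ~5.8 cm
# along a tube inclined 10 degrees from horizontal.
travel = inclined_travel(0.01, 10.0)
print(p_max, travel)
```

The second function is just the geometric point made in the answer: the smaller the inclination angle, the further the liquid travels per unit of head, which is why an inclined manometer resolves smaller pressures.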
{ "domain": "engineering.stackexchange", "id": 95, "tags": "measurements, pressure, metrology, standards" }
Gravity, falling bodies, and the equivalence principle
Question: Why is it that bodies in a box accelerating uniformly in space that is sufficiently removed from gravitational fields fall identically to bodies in a box located in a homogeneous gravitational field (e.g., on Earth)? Answer: We don't know. The equivalence principle is based on a fundamental assumption that inertial mass is the same as gravitational mass. We have verified this assumption in numerous experiments (e.g., the classic feather vs hammer experiment done on the Moon by David Scott). On this assumption, we have also built a more complete theory of gravitation (GR) that explained/predicted thitherto unexplained/unknown phenomena (e.g., black holes, gravitational waves, gravitational lensing etc.). It all works out and seems correct but we still do not know why the two masses should be equal. For all we know, there is a deeper connection lurking somewhere.
{ "domain": "physics.stackexchange", "id": 54616, "tags": "gravity, newtonian-gravity, reference-frames, acceleration, equivalence-principle" }
A wonky gravitational potential and its critical points
Question: I have a tough problem I am not sure how to solve: For this question, we are confined to a plane. Consider a gravitational field that is proportional to $\frac{1}{r^3}$ instead of $\frac{1}{r^2}$, and consider its potential which is then proportional to $\frac{1}{r^2}$. Suppose that I put $n$ identical point masses in this plane, all at different locations and let $U(x,y)$ be the potential function. What can we say about the critical points of $U(x,y)$? Specifically, can we show that the number of critical points is always $\leq n$? Remark: By critical point, I mean the usual definition in vector calculus, that is a place where both partial derivatives are zero. Answer: Take 4 points arranged in a square. The middle of the square is a critical point by symmetry, and the midpoint of the four sides of the square would be a critical point just for the two vertices it joins together, ignoring the other two. But the other two are a little more than twice as far away, so eight times weaker force, so if you bring the point closer to the center of the square by an amount about 1/8 of the way to the other side, you will cancel out the force from the far pair with the force from the near pair. So there are 5 critical points for four points on a square, and this holds for all sufficiently fast-falling-off forces. Using squares-of-squares, I believe it is easy to establish that the number of critical points is generically $n^2$. I believe it is a difficult and interesting mathematical problem to establish any sort of nontrivial bound on the number of critical points. The trivial bound is from the order of the polynomial equation you get, and it's absurdly large---- it grows like $2^n$. EDIT: The correct growth rate The answer for a general configuration is almost certainly N+C critical points (this should be an upper bound and a lower bound for two different C's --- I didn't prove it, but I have a, perhaps crappy, heuristic). 
For the case of polygons, there are N+1 critical points. For squares where the corners are expanded to squares, and so on fractally, the number of critical points is N+C where C is a small explicit constant. The same for polygons of polygons. I found a nifty way of analyzing the problem, and getting some good estimates, but I want to know how good the mathematicians are at this before telling the answer. Perhaps you can ask this question on MathOverflow?
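The square configuration from the start of the answer can be checked numerically. This sketch is my own construction, not the answerer's: it places unit masses at $(\pm 1, \pm 1)$ and takes $U = \sum_i 1/r_i^2$ (the overall sign and constant of proportionality do not move the critical points):

```python
# Numerical check of the square example: four unit masses at the corners
# of a square, with the 1/r^2 potential from the question.
corners = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]

def grad_U(x, y):
    """Gradient of U(x,y) = sum_i |r - r_i|^(-2); each source
    contributes -2*(r - r_i)/|r - r_i|^4."""
    gx = gy = 0.0
    for cx, cy in corners:
        dx, dy = x - cx, y - cy
        r2 = dx * dx + dy * dy
        gx -= 2.0 * dx / r2**2
        gy -= 2.0 * dy / r2**2
    return gx, gy

# The centre is a critical point by symmetry:
gx, gy = grad_U(0.0, 0.0)
assert abs(gx) < 1e-12 and abs(gy) < 1e-12

# On the line y = 0 through two opposite side midpoints, bisect for the
# off-centre critical point the answer describes (pulled in from x = 1):
lo, hi = 0.5, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if grad_U(mid, 0.0)[0] > 0.0:
        lo = mid
    else:
        hi = mid
print(lo)  # ~0.91: shifted toward the centre from the side midpoint, as argued
```

By the four-fold symmetry of the square the same critical point recurs on each side midline, giving the 5 critical points (centre plus four) claimed in the answer.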
{ "domain": "physics.stackexchange", "id": 1833, "tags": "homework-and-exercises, gravity, mathematical-physics, topology, vector-fields" }
Converting any CSS colour to RGB(a)
Question: For a plugin I'm writing in jQuery, I have two optional parameters. For each parameter I do a check for its value. However, I'm curious whether I can write it shorter. jsFiddle (function ($) { $.rgbGenerator = function (color, options) { var args = $.extend({ returnAsObj: false, addAlpha: false }, options); var d = $("<div>"); d.css("color", color).appendTo("body"); var rgb = d.css("color"); rgb = rgb.replace(/(^rgba?\(|\)$|\s)/g, "").split(",").map(Number); d.remove(); if (args.addAlpha === false) { if (rgb.length == 3) { if (!args.returnAsObj === true) { return "rgb(" + rgb[0] + "," + rgb[1] + "," + rgb[2] + ")"; } else { return { r: rgb[0], g: rgb[1], b: rgb[2] }; } } else if (rgb.length == 4) { if (!args.returnAsObj === true) { return "rgba(" + rgb[0] + "," + rgb[1] + "," + rgb[2] + "," + rgb[3] + ")"; } else { return { r: rgb[0], g: rgb[1], b: rgb[2], a: rgb[3] }; } } } else { if (!args.returnAsObj === true) { return "rgba(" + rgb[0] + "," + rgb[1] + "," + rgb[2] + "," + args.addAlpha + ")"; } else { return { r: rgb[0], g: rgb[1], b: rgb[2], a: args.addAlpha }; } } }; })(jQuery); Basically, what it does is: Check args.addAlpha whether it's false (default) Check the length of rgb (when it's 3 it's RGB, when it's 4 it's rgba) Check args.returnAsObj. If it's set to true, return an object rather than a string Especially the part where I check the length of rgb seems unnecessary, but I'm not sure how I could write it any other way. Something like this would be nice: return { r: rgb[0], g: rgb[1], b: rgb[2], a: function () { if (rgb.length == 4 || !args.addAlpha === false) { return rgb[3] || args.addAlpha; } } }; But I assume that's not possible. Answer: You are right that returning an object where a is a function would not work, but it would work to return the result of a call to a function! You can also restructure things a bit to reduce duplicated code. 
By having a starting string or a starting object, and optionally adding to that string or object if some conditions are fulfilled. This is untested code, but I think it should do the same as your original. function result(addAlpha, rgb, asObject) { if (asObject) { var obj = { r: rgb[0], g: rgb[1], b: rgb[2] }; if (addAlpha !== false) { obj.a = addAlpha; } else if (rgb.length == 4) { obj.a = rgb[3]; } return obj; } var a = (addAlpha !== false) || (rgb.length == 4) ? "a" : ""; var alpha = ""; if (addAlpha !== false) alpha = "," + addAlpha; else if (rgb.length == 4) alpha = "," + rgb[3]; return "rgb" + a + "(" + rgb[0] + "," + rgb[1] + "," + rgb[2] + alpha + ")"; }
{ "domain": "codereview.stackexchange", "id": 13870, "tags": "javascript, jquery, plugin" }
Electron Degeneracy Pressure and the Pauli Exclusion Principle
Question: I have read that what keeps white dwarfs from gravitational collapse is electron degeneracy pressure. How does this pressure prevent further collapse, and how is it related to the Pauli Exclusion Principle? Answer: Basically, the Pauli exclusion principle says that two fermions (in this case, electrons) can't be in the same quantum state. To expand: No two electrons in an atom can share the same numbers for their four quantum numbers, properties that help describe the state of a particle. What are quantum numbers? The important consequence here is that no two electrons can have the same spin and energy level. In a white dwarf or a neutron star, the fermions are packed very close together, and there's quite a lot of force due to gravity. However, the exclusion principle triumphs. Fermions near each other must have different energy levels; this leads to energy differences and degeneracy pressure, which counteracts the force of gravity. Above a certain mass limit (the Chandrasekhar limit, roughly $\sim1.40M_{\odot}$), electron degeneracy pressure is no longer sufficient; the white dwarf collapses to a neutron star. There appears to be a similar limit for neutron stars, where neutron degeneracy pressure cannot support the remnant against gravity, and it collapses into a black hole.
{ "domain": "astronomy.stackexchange", "id": 1954, "tags": "star, gravity, degenerate-matter" }
Is there any method to know the real age of the universe?
Question: Well, I was wondering about the real age of our universe, I found that it's estimated to be $13.8\times 10^9$ years. Is it an approximation, or are there laws behind this age? Answer: As I said, there are mathematical laws behind this approximation. We use the Friedmann equations and EFE : $$\begin{cases} 3\frac{\dot{a}^2}{a^2}+3\frac{kc^2}{a^2}-\Lambda c^2=\frac{8\pi G}{c^2}\rho \qquad(1) \\[2ex] -2\frac{\ddot{a}}{a}-\frac{\dot{a}^2}{a^2}-\frac{kc^2}{a^2}+\Lambda c^2=\frac{8\pi G}{c^2}p \qquad(2)\\[2ex] R_{ij}-\frac{1}{2}Rg_{ij}=\frac{8\pi G}{c^4} T_{ij} \qquad(3) \end{cases}$$ If we take $k=0; \Lambda\neq 0$ then our universe is flat and its expansion is accelerated; thus the EFE can be written : $$R_{ij}-\frac{1}{2}Rg_{ij}-\Lambda g_{ij}=\frac{8\pi G}{c^4}T_{ij}$$ Or it can also be written : $$R_{ij}-\frac{1}{2}Rg_{ij}=\frac{8\pi G}{c^4}T_{ij}+\Lambda g_{ij} \Leftrightarrow R_{ij}-\frac{1}{2}Rg_{ij}=\frac{8\pi G}{c^4}\Bigl(T_{ij}+\frac{c^4\Lambda }{8\pi G}g_{ij}\Bigr)$$ We express the stress-energy tensor for vacuum : $$T_{ij}^{\mathbf{Vacuum}}=\frac{c^4\Lambda g_{ij}}{8\pi G}$$ Comparing it with perfect fluid: $$T_{ij}^{\mathbf{Matter}}=-p.g_{ij}+\Bigl(\frac{p}{c^2}+\rho_0\Bigr)u_i u_j$$ We can simulate vacuum as a fluid $$\begin{cases} \mathbf{Pressure}\ :\ p=-\frac{\Lambda c^4}{8\pi G} \\ \mathbf{Energy\ density}\ :\ \rho=-p=\frac{\Lambda c^4}{8\pi G} \end{cases}$$ Adding cosmological parameters: $$\begin{cases} \Omega_v+\Omega_m=1 \\ 2q-1+3\Omega_v =0 \end{cases} \iff \begin{cases} \Omega_m =1-\Omega_v \\ \Omega_v =\frac{1-2q}{3} \end{cases}$$ Let $t=t_0$ and $\Lambda = \frac{3\Omega_{v_0}H_0^2}{c^2}$ $$\begin{cases} \Omega_{v_0}+\frac{8\pi G \rho_0}{3c^2 H_0^2}=1 \\[2ex] \Omega_{v_0}=\frac{\Lambda c^2}{3H_0^2}=\frac{1-2q_0}{3} \end{cases} \iff \begin{cases} 1-\Omega_{v_0}=\frac{8\pi G \rho_0}{3c^2 H_0^2} \iff (1-\Omega_{v_0})\frac{H_0^2}{c^2}=\frac{8\pi G \rho_0}{3c^4} \\[2ex] \Omega_{v_0}=\frac{1-2q_0}{3} \iff 1-\Omega_{v_0}=\frac{2}{3} (1+q_0) \end{cases}$$ 
Thus we obtain : $$\frac{8\pi G\rho_0}{3c^4}=\frac{2}{3} \frac{H_0^2}{c^2} (1+q_0)$$ In the first equation, we are having the following : Recall $\ k=0, \Lambda \neq 0\ $: $$3\frac{\dot{a}^2}{a^2} -\Lambda c^2=\frac{8 \pi G}{c^2}\rho$$ We know that : $\rho a^3=\rho_0 a_0^3$; thus : $\rho=\frac{\rho_0 a_0^3}{a^3} $ $$\Rightarrow 3\frac{\dot{a}^2}{a^2} -\Lambda c^2=\frac{8 \pi G}{c^2}\frac{\rho_0 a_0^3}{a^3} \iff \dot{a}^2=\frac{8 \pi G}{3c^2}\frac{\rho_0 a_0^3}{a}+\frac{\Lambda c^2 a^2}{3} \iff da=\sqrt{\underbrace{\frac{8 \pi G\rho_0 a_0^3}{3c^2}}_{K}\frac{1}{a}+\underbrace{\frac{\Lambda c^2 }{3}}_{B}a^2} dt$$ Let $ \ K=\frac{8 \pi G\rho_0 a_0^3}{3c^2}\ $ and $\ B=\frac{\Lambda c^2}{3}\ $ : $$da=\sqrt{\frac{K}{a}} \sqrt{1+\frac{B}{K} a^3} dt \iff dt=\frac{da.a^{1/2}}{\sqrt{K}\sqrt{1+\frac{B}{K} a^3}}$$ Integrating, we get the following : $$\int dt=\int \frac{da.a^{1/2}}{\sqrt{K}\sqrt{1+{\frac{B}{K} a^3}}}$$ Let $x^2=\frac{B}{K} a^3\ $ thus : $$\begin{cases} 3a^2da=2\frac{K}{B} x dx \\ a^2=\bigl(\frac{K}{B}\bigr)^{2/3} x^{4/3} \\ a^{1/2}=\bigl(\frac{K}{B}\bigr)^{1/6}x^{1/3} \end{cases}$$ So (I'm going to skip math here !) 
: $$\int \frac{\frac{2}{3} \bigl(\frac{K}{B}\bigr)^{1/2}dx}{\sqrt{K}\sqrt{1+\underbrace{\frac{B}{K}a^3}_{x^2}}}=\int dt \iff \frac{2}{3B^{1/2}} \text{arcsh}(x)=t \qquad(\text{I'm Skipping math !}) $$ $$\fbox{$a^3=\frac{8\pi G \rho_0 a_0^3}{c^4\Lambda}\text{sh}^2\Bigl(\frac{c}{2}\sqrt{3\Lambda}t\Bigr)$} $$ Recall : $\Lambda = \frac{3\Omega_{v_0}H_0^2}{c^2}$ and $\frac{8\pi G\rho_0}{3c^4}=\frac{2}{3} \frac{H_0^2}{c^2} (1+q_0)$ $$\require{cancel} a^3=\frac{2H_0^2(1+q_0)a_0^3}{c^2\Lambda }\text{sh}^2\Bigl(\frac{c}{2}\sqrt{3\Lambda}t\Bigr) \iff a^3=\frac{2\cancel{H_0^2}(1+q_0)a_0^3 \cancel{c^2}}{3\cancel{c^2}\Omega_{v_0}\cancel{H_0^2}}\text{sh}^2\Biggl(\frac{\cancel{c}}{2}\sqrt{\frac{9\Omega_{v_0}H_0^2}{\cancel{c^2}}}t\Biggr)$$ Therefore: $$ a^3=\frac{2a_0^3(1+q_0)}{3\Omega_{v_0}}\text{sh}^2 \Bigl(\frac{3H_0}{2}\sqrt{\Omega_{v_0}}t\Bigr)$$ and now, let us calculate this $t$, well we are going to assume that $t=t_0$ and $a=a_0$: $$\begin{aligned}\require{cancel}\cancel{a_0^3}=\frac{2\cancel{a_0^3}(1+q_0)}{3\Omega_{v_0}}\text{sh}^2 \Bigl(\frac{3H_0}{2}\sqrt{\Omega_{v_0}}t_0\Bigr) \iff \frac{3\Omega_{v_0}}{2(1+q_0)}=\text{sh}^2 \Bigl(\frac{3H_0}{2}\sqrt{\Omega_{v_0}}t_0\Bigr)\\ \iff \frac{3H_0}{2}\sqrt{\Omega_{v_0}}t_0=\text{arcsh}\Biggl(\sqrt{\frac{3\Omega_{v_0}}{2(1+q_0)}}\Biggr) \\ \iff t_0=\frac{2}{3} \frac{1}{H_0\sqrt{\Omega_{v_0}}}\text{arcsh}\Biggl(\sqrt{\frac{3\Omega_{v_0}}{2(1+q_0)}}\Biggr) \end{aligned}$$ And here you go, the formula of our universe's age : $$\fbox{$t_0=\frac{2}{3} \frac{1}{H_0\sqrt{\Omega_{v_0}}}\text{arcsh}\Biggl(\sqrt{\frac{3\Omega_{v_0}}{2(1+q_0)}}\Biggr)$}$$ The numerical substitution : $$\begin{cases} H_0^{-1}\approx 14.56\times 10^9 \\ q_0\approx -0.5245 \\ \Omega_{v_0}\approx 0.683 \end{cases}$$ Thus : $$t_0=\frac{2}{3}\times 14.56\times 10^9 \frac{1}{\sqrt{0.683}}\text{arcsh}\Biggl(\sqrt{\frac{3\times 0.683}{2(1-0.5245)}}\Biggr)\approx 13.8\times 10^9\text{y}$$ I hope now that you understood my comment, it depends on the numerical values 
of the cosmological parameters. And yeah, one more thing: sorry, I skipped a lot of steps in the proof because of my laziness and it is long. I hope you understand that there are laws behind this approximation and that this is a way to compute our universe's age. Good luck !
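As a quick check of the boxed formula, here is a sketch evaluating it with the numerical substitution quoted at the end of the answer (variable names are my own):

```python
import math

# Parameter values quoted in the numerical substitution above.
H0_inv = 14.56e9      # 1/H0, in years
q0 = -0.5245          # deceleration parameter
omega_v0 = 0.683      # vacuum density parameter

# t0 = (2/3) * (1/(H0*sqrt(Omega_v0))) * arcsh( sqrt( 3*Omega_v0 / (2*(1+q0)) ) )
t0 = (2.0 / 3.0) * H0_inv / math.sqrt(omega_v0) \
     * math.asinh(math.sqrt(3.0 * omega_v0 / (2.0 * (1.0 + q0))))

print(t0 / 1e9)  # ~13.8 (Gyr)
```

Python's `math.asinh` is the inverse hyperbolic sine written $\text{arcsh}$ in the derivation; plugging in the quoted parameters indeed reproduces $t_0 \approx 13.8\times 10^9$ years.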
{ "domain": "physics.stackexchange", "id": 66175, "tags": "cosmology, time, space-expansion, universe, big-bang" }
Using a wrapper on a primitive as a generic for an interface used for Java lambda
Question: I apologize for that title, lol. I have a Java method that I'm writing where I want to be able to pass in an array of Objects and two interfaces that will be used for lambda expressions that specify a particular value to use for calculation. It's part of a larger class that I want to use for all kinds of Statistics calculations, but I started with calculating correlation because it's most relevant to the specific problem I want to solve. import java.lang.IllegalArgumentException; import java.util.Arrays; import java.util.stream.Stream; import java.util.stream.Collectors; import java.util.List; public class Statistics { //1. This method is really unreadable public static <O> double getCorrelation(O[] a, Fetchable<O, Double> dataPointA, Fetchable<O, Double> dataPointB) { Double[] temp = Arrays.stream(a).map(x -> dataPointA.fetch(x)).collect(Collectors.toList()).toArray(new Double[0]); Double[] temp2 = Arrays.stream(a).map(x -> dataPointB.fetch(x)).collect(Collectors.toList()).toArray(new Double[0]); return getCorrelation(temp, temp2); } public static double getCorrelation(double[] a, double[] b) { if (a.length != b.length) { //2. Is this the best Exception for this case? throw new IllegalArgumentException(); } double sumA = 0.0; double sumB = 0.0; double sumSquareA = 0.0; double sumSquareB = 0.0; double sumAB = 0.0; for(int i = 0; i < a.length; i++) { sumA += a[i]; sumB += b[i]; sumSquareA += a[i] * a[i]; sumSquareB += b[i] * b[i]; sumAB += a[i] * b[i]; } int n = a.length; return (n * sumAB - sumA * sumB) / (Math.sqrt(n * sumSquareA - sumA * sumA) * Math.sqrt(n * sumSquareB - sumB * sumB)); } private static double getCorrelation(Double[] a, Double[] b) { if (a.length != b.length) { //2. Is this the best Exception for this case? throw new IllegalArgumentException(); } //3. Is there a better way to do this conversion from Double[] to double[]? 
double doubleArrA[] = new double[a.length]; double doubleArrB[] = new double[a.length]; for (int i = 0; i < a.length; i++) { doubleArrA[i] = (double)a[i]; doubleArrB[i] = (double)b[i]; } return getCorrelation(doubleArrA, doubleArrB); } interface Fetchable<T1, T2> { public T2 fetch(T1 a); } //main function for testing public static void main(String[] args) { //this class is defined in another file; it's a simple class with four public doubles I made just to test TestDataPoint a[] = new TestDataPoint[5]; a[0] = new TestDataPoint(); a[0].w = 3; a[0].x = 0; a[0].y = 4; a[0].z = 9; a[1] = new TestDataPoint(); a[1].w = 1; a[1].x = 8; a[1].y = 3; a[1].z = 2; a[2] = new TestDataPoint(); a[2].w = 7; a[2].x = 4; a[2].y = 4; a[2].z = 0; a[3] = new TestDataPoint(); a[3].w = 3; a[3].x = 1; a[3].y = 0; a[3].z = 1; a[4] = new TestDataPoint(); a[4].w = 6; a[4].x = 3; a[4].y = 9; a[4].z = 8; System.out.println(getCorrelation(a, p -> p.w, q -> q.z)); System.out.println(getCorrelation(a, p -> p.x, q -> q.z)); System.out.println(getCorrelation(a, p -> p.y, q -> q.z)); System.out.println(getCorrelation(a, p -> p.z, q -> q.z)); } } It works pretty well; I've tried it and it seems to do exactly what I want. There are a few things I want to look at, though: That crazy method is absurdly unreadable. Is IllegalArgumentException the best Exception for the case where the method cannot execute properly because arrays of differing lengths are provided? Is there any easily readable way to convert Double[] to double[] without the loop I used in getCorrelation(Double[] a, Double[] b)? Thanks in advance. Answer: To focus on your initial questions 1) That crazy method is absurdly unreadable. I don't think it's too bad. 
You can improve it by using method references rather than lambdas and the toArray method of Stream: Double[] temp = Arrays.stream(a).map(dataPointA::fetch).toArray(Double[]::new); Double[] temp2 = Arrays.stream(a).map(dataPointB::fetch).toArray(Double[]::new); return getCorrelation(temp, temp2); 2) Is IllegalArgumentException the best Exception for [this] case Yeah, it's fine. "Best" is subjective. Honestly, it doesn't really matter. I would, however, provide a more detailed message which explains why the arguments were invalid: throw new IllegalArgumentException("Input arrays must be the same length"); 3) Is there any easily readable way to convert Double[] to double[] without the loop? Yes, it's a bit better if you use streams: final double[] doubleArrA = Stream.of(a).mapToDouble(x -> x).toArray(); final double[] doubleArrB = Stream.of(b).mapToDouble(x -> x).toArray(); Some other thoughts Declare variables as final if you don't intend to change them. It reduces the chance of accidental errors on your part, and helps readers of your code forget about the possibility that the variable might change. Use meaningful identifiers. a, b, w, x, y, z, temp1, temp2: these could all be improved. Single character identifiers are only really okay in a few specific situations, such as when using them as loop counters, or generic type parameters. At the cost of iterating over the arrays multiple times (don't worry about the performance until you know it's a problem), your getCorrelation method could initialise most of the sums in the same line as they're declared, thus allowing them to be declared as final: final double sumA = DoubleStream.of(a).sum(); final double sumB = DoubleStream.of(b).sum(); final double sumSquareA = DoubleStream.of(a).map(x -> x * x).sum(); final double sumSquareB = DoubleStream.of(b).map(x -> x * x).sum(); double sumAB = 0.0; for(int i = 0; i < a.length; i++) { sumAB += a[i] * b[i]; } Use comments to document unusual formulas or ideas. 
(n * sumAB - sumA * sumB) ... is hard to understand at a glance. It might benefit from a comment identifying the mathematical formula (e.g. //Pearson correlation coefficient) or even a link to the Wikipedia page which explains it. Do not use public access as your default. Start by declaring everything as private and gradually increase the visibility as necessary. Define your own constructors. TestDataPoint should have a constructor which takes 4 arguments. You should not rely on initialising the properties one at a time because this makes it possible to leave yourself with half-initialised objects. class TestDataPoint { //... TestDataPoint(double w, double x, double y, double z) { this.w = w; this.x = x; this.y = y; this.z = z; } }
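To make the formula under review concrete, here is a minimal Python sketch of the same sum-based computation (an illustration added here; it mirrors, but is not taken from, the Java version). The expression n*sumAB - sumA*sumB is the numerator of the Pearson correlation coefficient:

```python
import math

def pearson(a, b):
    # Pearson correlation via the sum-based formula:
    # r = (n*sum(ab) - sum(a)*sum(b)) /
    #     sqrt((n*sum(a^2) - sum(a)^2) * (n*sum(b^2) - sum(b)^2))
    if len(a) != len(b):
        raise ValueError("Input arrays must be the same length")
    n = len(a)
    sum_a, sum_b = sum(a), sum(b)
    sum_sq_a = sum(x * x for x in a)
    sum_sq_b = sum(x * x for x in b)
    sum_ab = sum(x * y for x, y in zip(a, b))
    num = n * sum_ab - sum_a * sum_b
    den = math.sqrt((n * sum_sq_a - sum_a ** 2) * (n * sum_sq_b - sum_b ** 2))
    return num / den

# A variable correlates perfectly with itself; the sign flips under negation.
print(pearson([1, 2, 3, 4], [1, 2, 3, 4]))      # 1.0
print(pearson([1, 2, 3, 4], [-1, -2, -3, -4]))  # -1.0
```

Computing the sums in a single pass, as the Java version does, is the usual trade-off against the more readable stream-per-sum style suggested above.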
{ "domain": "codereview.stackexchange", "id": 31168, "tags": "java, generics, interface, lambda, exception" }
Is a hydrogen bond considered to be a van der Waals force?
Question: Is a hydrogen bond considered to be a van der Waals force? Answer: According to the IUPAC Gold Book a van der Waals force is: The attractive or repulsive forces between molecular entities (or between groups within the same molecular entity) other than those due to bond formation or to the electrostatic interaction of ions or of ionic groups with one another or with neutral molecules. The term includes: dipole–dipole, dipole-induced dipole and London (instantaneous induced dipole-induced dipole) forces. Hydrogen bonding is a type of dipole-dipole interaction, so it would fit the definition of a van der Waals force. The way I think of it is: van der Waals forces are anything that makes a gas non-ideal, since that's how they were originally discovered and defined.
{ "domain": "chemistry.stackexchange", "id": 17199, "tags": "hydrogen-bond" }
Where can you get a photon detector?
Question: Say I'm doing the double-slit experiment with photons as the particle and want to add a detector just behind each slit to eliminate the interference pattern. Where would I get a detector like that? Answer: They're based on photomultiplier tubes. An example is referred to in this lab assignment. Photon Detector: The detectors used in this experiment are two Hamamatsu R1527 photomultiplier tubes (PMTs)
{ "domain": "physics.stackexchange", "id": 5577, "tags": "experimental-physics, photons, double-slit-experiment, particle-detectors" }
If measurement fixes the state of a quantum system, how do we know that superposition exists?
Question: My very limited understanding of quantum mechanics cannot make sense of the superposition phenomenon. It would seem to me that if a measurement makes the wave function of a quantum system collapse, then nobody has ever experienced superposition? Edit, to be more precise: In order to observe a physical phenomenon, one has to undergo the process of "measuring" it. When a quantum system is in a superposition of states, the act of measuring it makes the wave function collapse before the measurement, fixing it to one of the possible states. Therefore, any particle we ever observe is fixed in one state. I know this must be wrong, but why? Answer: How do we know superposition exists? We can do some math to predict how certain "quantum" things behave. One of the most popular interpretations is that these things can exist in quantum "superpositions" in which the quantum thing is simultaneously in multiple states at once. It's not the only interpretation so it's not necessary to believe it. Now what exactly is a "superposition state"? Well, essentially the mathematical model assigns independent complex numbers to each possibility. So for example if I flip a coin, each possibility receives its own number, so heads receives a complex number and tails receives a complex number. Sometimes, when different quantum states interact, the outcome of the resultant output state is a sum of these assigned complex numbers - which we interpret as an "interference between the different possibilities." It is this interference between different possibilities that we interpret as "superposition." And often this effect looks like an interference between waves (which you can see in the double slit experiment, for example).
In order to observe a physical phenomenon, one has to undergo the process of "measuring" it. When a quantum system is in a superposition of states, the act of measuring it makes the wave function collapse before the measurement, fixing it to one of the possible states. Therefore, any particle we ever observe is fixed in one state. I know this must be wrong, but why? We only ever measure something in a fixed particular state (so you are correct here), but what is interesting is which particular state we observed. To predict the correct outcomes, we use a model that requires interference between the numbers assigned to each possibility. We then conclude that the model is correct and that when a state is not measured, all of these numbers assigned to each possibility actually exist and that is what superposition is.
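The "interference between complex numbers assigned to possibilities" can be made concrete in a few lines. This is an added two-path toy illustration (not part of the original answer): probabilities come from the squared magnitude of the summed amplitudes, so they are not simply the sum of the individual probabilities.

```python
import cmath
import math

# Two paths to the same outcome, each assigned a complex amplitude.
a1 = cmath.exp(1j * 0.0) / 2 ** 0.5      # path 1, phase 0
a2 = cmath.exp(1j * math.pi) / 2 ** 0.5  # path 2, phase pi

# "Classical" expectation: just add the probabilities of each path.
p_classical = abs(a1) ** 2 + abs(a2) ** 2   # 1.0

# Quantum rule: add the amplitudes first, then take the squared magnitude.
p_quantum = abs(a1 + a2) ** 2               # ~0.0, destructive interference

print(p_classical, p_quantum)
```

With the phases chosen a half-cycle apart the two possibilities cancel, which is exactly the dark-fringe behavior seen in the double slit experiment.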
{ "domain": "physics.stackexchange", "id": 79398, "tags": "quantum-mechanics, hilbert-space, superposition, quantum-measurements" }
Metric of an accelerated reference frame (without gravitation)
Question: If we look at a flat Minkowski space-time (without any gravitation) and choose an accelerated frame of reference, what happens to the metric tensor: is it still the Minkowski metric, or will it be affected by general relativity? Answer: A reference frame is equivalent to a choice of coordinates. So, choosing an accelerated frame in Minkowski space is equivalent to choosing a specific coordinate system on Minkowski space. Most importantly, this means that there is no genuine curvature in an accelerated frame, i.e. it is fundamentally different from gravity. The equivalence principle amounts to the statement that the Christoffel connection coefficients $\Gamma^{\mu}_{\rho\sigma}$ are not tensorial objects; that is, they have no intrinsic geometric meaning. In the same way that we can choose a local Lorentz frame in a general curved spacetime (i.e. we can choose coordinates so that locally the connection coefficients are zero), we can also make flat space "look" like it is curved by choosing coordinates where the connection coefficients do not vanish. However, we cannot use the equivalence principle to mask the presence of genuine spacetime curvature (i.e. gravity). We have the coordinate freedom to make the metric tensor appear flat, and we can even make the first derivatives of the metric tensor vanish. We cannot, however, guarantee that the second derivatives of the metric tensor vanish, and so tensors which depend on the second derivatives of the metric (principally the Riemann curvature tensor) will be able to detect whether or not spacetime is genuinely curved (whether gravity is present). The geodesic equation is \begin{equation} \frac{d^2 x^{\mu}}{ds^2}+\Gamma^{\mu}_{\rho\sigma}\frac{dx^{\rho}}{ds} \frac{dx^{\sigma}}{ds}=0. \end{equation} So, if we are in flat spacetime, we can choose a reference frame (coordinate system) in which some of the $\Gamma^{\mu}_{\rho\sigma}$ are non-zero.
Then Newton's law \begin{equation} \frac{d^2 x^{\mu}}{ds^2}=0, \end{equation} is modified by the presence of fictitious forces.
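As a deliberately Newtonian toy illustration of such fictitious forces (an analogy added here, not part of the original answer): a force-free particle has zero coordinate acceleration in an inertial frame, but picks up an apparent acceleration of $-a$ when described in the coordinates of a uniformly accelerating observer.

```python
# A free particle moves inertially: x(t) = v*t (no forces act on it).
# An observer accelerating at rate acc sits at x_obs(t) = 0.5*acc*t**2,
# so in the observer's coordinates the particle is at x(t) - x_obs(t).
acc = 2.0   # frame acceleration (arbitrary illustrative units)
v = 3.0     # particle velocity in the inertial frame

def x_inertial(t):
    return v * t

def x_in_accel_frame(t):
    return x_inertial(t) - 0.5 * acc * t ** 2

# Second derivative via central finite differences: zero in the inertial
# frame, -acc in the accelerated frame ("fictitious force" per unit mass).
h = 1e-3
def second_derivative(f, t):
    return (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2

print(second_derivative(x_inertial, 1.0))        # ~0
print(second_derivative(x_in_accel_frame, 1.0))  # ~-2.0
```

The relativistic statement is the same in spirit: the extra $\Gamma^{\mu}_{\rho\sigma}$ terms in the accelerated coordinates play the role of the $-a$ term here, without any genuine curvature being present.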
{ "domain": "physics.stackexchange", "id": 26367, "tags": "special-relativity, reference-frames, acceleration, metric-tensor, observers" }
Proca equation gauge conditions
Question: In the massive case without any gauge conditions the Proca equation can be written as $\partial_\nu(\partial^\nu A^\mu- \partial^\mu A^\nu)+\left(\frac{mc}{\hbar}\right)^2 A^\mu=0$ Since $A_\mu$ is an $n$-vector (has $n$ components in $n$-dimensional spacetime) it must have 4 degrees of freedom in 4d spacetime. In other words, any spin-1 particle that is described by the Proca equation must have 4 spin states. But the $Z$ and $W$ bosons (which are massive) have only 3, and the photon has only 2. One degree of freedom can be cut with the Lorenz gauge $$\partial_\mu A^\mu=0.$$ What are the other constraints that cut another spin state for massless particles? How do I derive those constraints (including the Lorenz gauge as well)? Answer: A gauge condition is imposed because the "symmetry" $A_\mu \to A_\mu + \partial_\mu \alpha$ describes a redundancy (for small $\alpha$) in our description of photons. We remove this redundancy by imposing a gauge condition. The Proca Lagrangian has no such symmetry (for $m^2 \neq 0$) so there is no concept of gauge condition for a massive spin-1 field. The usual d.o.f. counting goes as follows. A vector field has 4 components. One of these is removed by the EoM so we have 3 independent on-shell d.o.f., which is precisely the correct number for massive spin-1 fields. For massless fields, there is an additional gauge symmetry which we remove by imposing a separate gauge condition, which further reduces the number of physical on-shell d.o.f. to 2.
{ "domain": "physics.stackexchange", "id": 89701, "tags": "field-theory, quantum-spin, bosons, degrees-of-freedom" }
Fine-tuning an LLM or prompt engineering?
Question: For some types of chatbot, like a customized chatbot, might it be better to fine-tune the LM on the domain-specific data and the types of question, instead of relying on the prompt, since prompts have to be processed by the LM every time? Another reason: how can the system add the correct prompt for each question, given that there are many new questions which don't have any existing prompt? So I feel prompting is good, but not practical. Do you agree, or am I missing some key points? Answer: This is still very much an open question, but from how the research looks now it appears that a combination of prompting, output handling, and possibly fine-tuning, will be necessary to achieve consistent behavior from LLMs. As an example of why this is, the AMA paper found that within prompt ensembles, accuracy varied by up to 10%. This has wide-reaching implications, as 10% is a HUGE level of variation in behavior, and by their very nature any system is going to be processing prompt ensembles -- your prompt, varied by the input. Another issue is that it appears that failure modes of LLMs are... more robust than we'd like, meaning that there are some wrong answers that LLMs will reliably return and small variations in the prompt have no meaningful positive effect. Again, though, this is still an open area. Completely different, interesting approaches like sequential Monte Carlo steering of LLM output may offer better results overall, and there are definitely countless undiscovered techniques.
{ "domain": "datascience.stackexchange", "id": 11896, "tags": "nlp" }
How to show ExactOneSAT is NP-Complete?
Question: $\text{ExactOneSAT}= \{\phi\;|\;\phi\; \text{is a boolean formula}$ $\text{ such that it has a satisfying assignment with only one true literal per clause} \}$ I am trying to reduce 3SAT to this problem but can't find a way. I tried taking a small example $\phi=(x_1 \vee x_2) \wedge (\overline{x_1} \vee x_2 ) $. In this example the formula can only be satisfied when both $x_1,x_2$ are True. How do I reduce such a case of 3SAT to ExactOneSAT? This is an exercise from Computational Complexity: A Modern Approach by Sanjeev Arora and Boaz Barak, but not a homework exercise. Answer: We can reduce 3SAT to ExactOneSAT (3SAT $\leq_P$ ExactOneSAT) as follows. Replace each clause $C_m$ by $(z_{m,1} \lor z_{m,2} \lor z_{m,3})$ and ensure that if $C_m$ is, say, $(v_i \lor \overline{v_j} \lor v_k)$ then $(\neg v_i \Rightarrow \neg z_{m,1})$, $(\neg \overline{v_j} \Rightarrow \neg z_{m,2})$ and $(\neg v_k \Rightarrow \neg z_{m,3})$. Thus, for example, if $v_i$ is true then $z_{m,1}$ can be true or false, but if $v_i$ is false then $z_{m,1}$ must be false. Thus replace $(X_{m}^1 \lor X_{m}^2 \lor X_{m}^3)$ in clause $C_m$ by $$(z_{m,1} \lor z_{m,2} \lor z_{m,3}) \land (\neg X_{m}^1 \lor y_{m,1} \lor z_{m,1}) \land ( \neg X_{m}^2 \lor y_{m,2} \lor z_{m,2}) \land ( \neg X_{m}^3 \lor y_{m,3} \lor z_{m,3})$$ where $X^1_m, X^2_m, X^3_m$ are the three literals in the clause $C_m$. Note that the $X$'s are literals in the clauses and can have negations, whereas the $y$'s and $z$'s are variables. If $\phi'$ is the modified boolean CNF, then this will give us $\phi \in$ 3SAT iff $\phi' \in $ ExactOneSAT. This, along with the fact that the above transformation is polynomial-time, gives us a proof of NP-completeness. Let us see how $\phi \in 3SAT \Rightarrow \phi' \in$ ExactOneSAT. Assume we have a satisfying assignment for $\phi$. For example, say $X_m^1$ is true and $X_m^2$ and $X_m^3$ are false.
Then we can make the following assignments: $y_{m,1}$ = False, $z_{m,1}$ = True $y_{m,2}$ = False, $z_{m,2}$ = False $y_{m,3}$ = False, $z_{m,3}$ = False For example, say $X_m^1$ and $X_m^2$ are true and $X_m^3$ is false. Then we can make the following assignments: $y_{m,1}$ = False, $z_{m,1}$ = True $y_{m,2}$ = True, $z_{m,2}$ = False $y_{m,3}$ = False, $z_{m,3}$ = False For example, say all of $X_m^1$, $X_m^2$ and $X_m^3$ are true. Then we can make the following assignments: $y_{m,1}$ = False, $z_{m,1}$ = True $y_{m,2}$ = True, $z_{m,2}$ = False $y_{m,3}$ = True, $z_{m,3}$ = False So we can see that we can find a satisfying assignment for $\phi'$ where only one literal is true in every clause. Let us now see how $\phi' \in$ ExactOneSAT $\Rightarrow \phi \in $ 3SAT. Assume we have a satisfying assignment for $\phi'$ where exactly one literal is true in every clause. Let us say $z_{m,1}$ is true and $z_{m,2}$ and $z_{m,3}$ are false. Then definitely $X_m^1$ in the second clause must be true, implying that the same assignment satisfies $C_m$. Thus we have $\phi \in$ 3SAT iff $\phi' \in $ ExactOneSAT. We have actually proved that ExactOne3SAT is NP-complete by this, because every clause in $\phi'$ has 3 literals.
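The gadget in this reduction is small enough to verify exhaustively. The following Python check (added here, not part of the original answer) enumerates all assignments to the auxiliary $y$ and $z$ variables for a single clause and confirms that an exact-one assignment exists precisely when at least one of the clause's three literals is true:

```python
from itertools import product

def exactly_one(clause, assign):
    # clause: list of (variable, polarity) literals; a literal is true
    # when assign[variable] == polarity.
    return sum(assign[v] == pos for v, pos in clause) == 1

def gadget_satisfiable(x1, x2, x3):
    """Given the truth values of the clause's three literals X1, X2, X3,
    is there an assignment to y1..y3, z1..z3 making every gadget clause
    contain exactly one true literal?"""
    clauses = [
        [('z1', True), ('z2', True), ('z3', True)],
        [('x1', False), ('y1', True), ('z1', True)],   # (~X1 v y1 v z1)
        [('x2', False), ('y2', True), ('z2', True)],   # (~X2 v y2 v z2)
        [('x3', False), ('y3', True), ('z3', True)],   # (~X3 v y3 v z3)
    ]
    for ys in product([False, True], repeat=3):
        for zs in product([False, True], repeat=3):
            assign = {'x1': x1, 'x2': x2, 'x3': x3,
                      'y1': ys[0], 'y2': ys[1], 'y3': ys[2],
                      'z1': zs[0], 'z2': zs[1], 'z3': zs[2]}
            if all(exactly_one(c, assign) for c in clauses):
                return True
    return False

# Exact-one satisfiable iff the original clause (X1 v X2 v X3) is satisfied.
for x1, x2, x3 in product([False, True], repeat=3):
    assert gadget_satisfiable(x1, x2, x3) == (x1 or x2 or x3)
print("gadget verified for all 8 literal assignments")
```

This is exactly the per-clause equivalence used in both directions of the proof above.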
{ "domain": "cs.stackexchange", "id": 6068, "tags": "complexity-theory, np-complete, satisfiability, 3-sat" }
Why do alloys have more resistance?
Question: Is there any simple way to understand why alloys have more resistance than pure metals? My teacher asked this; I answered that there might be more free electrons in metals than in an alloy, but she said that is not accurate. So what is a better answer to this question? Answer: I tend to agree with Sanya in that I am not sure about the universality of this. There might of course be instances where this is the case. A pure metal has a periodic lattice of ions. There is then a conduction band of electrons that fills the space between the ions. These electrons have wave vectors in reciprocal space. In real space the ions sit at locations given by integer combinations of the elementary translation vectors $\vec t_i$, $$ \vec T~=~\sum_{i=1}^3n_i\vec t_i, $$ and the reciprocal basis is given by $$ \vec g_i~=~2\pi\,\frac{\vec t_j\times\vec t_k}{\vec t_i\cdot(\vec t_j\times\vec t_k)}, \quad (i,j,k)~\text{cyclic}, $$ so that a reciprocal lattice vector is $$ \vec G~=~\sum_i h_i\vec g_i, $$ where $\vec G_i~=~h_i\vec g_i$. The volume associated with these vectors is a Brillouin zone, with volume $$ \mathrm{Vol}~=~\vec g_1\cdot(\vec g_2\times\vec g_3). $$ An electron in the lattice with wave vector $\vec k$ passes between Brillouin zones with $\vec k'~=~\vec k~+~\vec G$. For a near-perfect lattice we then have lots of electrons passing between Brillouin zones accordingly, and as waves they constructively interfere. We can now see that if we have an imperfect lattice there may be more destructive interference of the waves. This may not be universally the case. For a crystalline semiconductor lattice the introduction of impurities increases its conductivity. This is because this doping permits more movement of electrons from the valence bands associated with the ions to the conduction band. So for semiconductors exactly the opposite can happen; the introduction of impurities with various properties can increase conductivity. It may be the case that very pure metals have in most cases more conductivity.
Of course pure copper and iron are very ductile and weak. As a result we introduce impurities: carbon in iron, or zinc in copper (brass). There is also the question of what happens with lattice dislocations. If you take a piece of copper tubing you find it is very flexible. However, if you bend it back and forth it becomes stiff. This is because lots of lattice imperfections result, which block further motion. Similarly with iron: dislocations, usually due to impurities, make it harder. How these play a role I am not certain of.
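The claim that an imperfect lattice produces more destructive interference can be illustrated numerically with a one-dimensional toy model (added here, not from the original answer): sum the scattering phases $e^{iGx_n}$ over ion positions at a reciprocal lattice vector $G$, once for a perfect chain and once with random displacements standing in for alloying disorder.

```python
import cmath
import math
import random

random.seed(0)
N, d = 200, 1.0          # 200 ions, lattice spacing d
G = 2 * math.pi / d      # a reciprocal lattice vector of the perfect chain

def structure_factor(positions):
    # |sum_n exp(i G x_n)|^2 / N : a measure of constructive interference
    s = sum(cmath.exp(1j * G * x) for x in positions)
    return abs(s) ** 2 / N

perfect = [n * d for n in range(N)]
# Alloy-like disorder: each ion displaced by up to 20% of the spacing.
disordered = [x + random.uniform(-0.2, 0.2) * d for x in perfect]

print(structure_factor(perfect))     # N = 200: all phases add coherently
print(structure_factor(disordered))  # noticeably smaller: partial cancellation
```

The perfect chain gives the fully coherent value $N$, while the displaced ions scatter with scrambled phases, which is the wave-interference picture of why disorder raises resistivity.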
{ "domain": "physics.stackexchange", "id": 32320, "tags": "condensed-matter, electric-current, electrical-resistance, material-science, conductors" }
Spin states in a finite potential well
Question: I have a question concerning an electron in an attractive potential well. Let's suppose the potential function is defined as $$V = \left\{ \begin{array}{cl}0, & \mbox{for } z < 0\\ -|V_0|, & \mbox{for } 0 < z < a\\ 0, & \mbox{for } z > a \end{array}\right. $$ Describing the electron with the Dirac equation I have the dispersion relations: $$ E = \left\{\begin{array}{lr} \sqrt{p_1^2 + m^2}, \qquad\qquad \qquad\qquad \mbox{ for } z < 0 \mbox{ and } z > a\\ \sqrt{p_2^2 + m^2} - e|V_0|, \qquad \qquad \,\,\,\,\mbox{ for } 0 < z < a\end{array}\right. $$ where I chose the positive energy since I'm considering an electron. Then let us suppose I want to determine the bound-state energy eigenvalues. In this situation $\ E^2 < m^2 $ such that $\ p_{1} $ is purely imaginary. To get the eigenvalue quantization I have to impose the continuity conditions at $ z=0, \, z=a $. My question concerns the spin part of the wave functions: can I assume it to be the same in all three zones, i.e., always spin up or always spin down? Since the energy is spin independent and I'm only interested in obtaining the possible eigenvalues it should not matter, is that correct? Thank you. Answer: Your square well does not have any term that interacts with the spin. So there is no reason to have different spins at different places; you can just change a wavefunction from $\psi$ to $\psi\chi$, where $\chi$ describes the spin part and remains the same everywhere. A more complex, 3-dimensional version of your problem may have a potential with terms like this: $V(r)=-V_R f(r) - i W_V f(r) + ... + V_{SO}\frac{1}{r} \frac{d}{dr}f(r) \sigma \cdot l$ The last term with a coefficient $V_{SO}$ includes a spin-orbit ($\sigma \cdot l$) interaction and from this you can have different energies for different spins. If $V_{SO}=0$, yes, you have different $l$'s (orbital momenta), but the energy does not depend on the spin of the particle - the levels are degenerate for spins.
This is where the Shell Model became so successful. $f(r)$ is usually taken to be a Woods-Saxon form factor, and the imaginary term $iW_V$ accounts for absorption into reaction channels.
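Because the spin part factors out of the wavefunction, the energy quantization comes entirely from the spatial matching conditions, and the same eigenvalues result for spin up and spin down. As an added illustration (a nonrelativistic analogue with $\hbar = m = 1$, not the Dirac problem from the question), the lowest even-parity bound state of a finite well solves the transcendental matching condition $z\tan z = \sqrt{z_0^2 - z^2}$:

```python
import math

# Finite square well of width a and depth V0 (nonrelativistic, hbar = m = 1).
# Even-parity bound states satisfy z*tan(z) = sqrt(z0**2 - z**2),
# with z = k*a/2 and z0 = (a/2)*sqrt(2*V0).
a, V0 = 2.0, 5.0
z0 = (a / 2) * math.sqrt(2 * V0)

def f(z):
    return z * math.tan(z) - math.sqrt(z0 ** 2 - z ** 2)

# f(0+) = -z0 < 0 and f -> +inf as z -> pi/2, so bisect in (0, pi/2).
lo, hi = 1e-9, math.pi / 2 - 1e-9
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

z = (lo + hi) / 2
E = (2 * z / a) ** 2 / 2   # kinetic energy above the well bottom; E < V0
print(z, E)
```

Note that the spin factor $\chi$ never enters $f(z)$, so spin-up and spin-down solutions share every eigenvalue, which is the degeneracy the answer describes.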
{ "domain": "physics.stackexchange", "id": 37645, "tags": "quantum-mechanics, potential, dirac-equation, spinors, dirac-matrices" }
Does DNA replication in 5' to 3' (leading strand) need RNA primase?
Question: https://www.youtube.com/watch?v=27TxKoFU2Nw The above video shows that during DNA replication, the lagging strand requires RNA primase to add a 3'-OH group for further addition of nucleotides. However, it is not shown that the upper strand (the leading strand) requires it. Besides, RNA is needed to initiate the polymerization because it has the 3'-OH. But when I look at the structure of a deoxynucleotide, it also has the 3'-OH; it just lacks the 2'-OH. So why can't DNA initiate the polymerization? Thanks for your answer! Answer: The DNA polymerase also needs an RNA primer on the leading strand to be able to start polymerization. Afterwards this is not needed anymore, since the replication goes on without a break. On the lagging strand, replication can only work between the replication fork and the next region of double-stranded DNA. See the figure (from here): The reason for the need for RNA primers lies in the function of the enzymes. While the DNA polymerase can only work on a double-stranded template (adding nucleotides to the 3'-OH end of a strand), the DNA primase (actually an RNA polymerase) can work on single-stranded templates and thus add the RNA primer there.
{ "domain": "biology.stackexchange", "id": 2930, "tags": "dna, replication, primer" }
Hokuyo Map Generation
Question: When using the Laser_Pipeline node along with the Hokuyo_node, how do you take in the tilting angle that the Hokuyo laser is at when creating a 3D map? Would it be solved by simply changing the frame the laser data is read in, or do you have to operate the servo to retrieve the tilting angle? Also, where is the tilt angle saved? Is it in the bag file or somewhere else? Originally posted by tyler258 on ROS Answers with karma: 93 on 2011-11-14 Post score: 0 Original comments Comment by tyler258 on 2011-11-16: It seems RViz will use the angle generated by the encoder and information read in from the laser scanner to generate the map, correct? Am I on the right track here? Comment by tyler258 on 2011-11-16: I've looked at several implementations that are very similar (Georgia Tech's healthcare robotics lab and Cal Poly). It seems that it is necessary to somehow generate the angle information from the servo (by use of an encoder?). Comment by tyler258 on 2011-11-16: Currently we are using a Parallax #900-00005, just a standard Futaba servo on a Parallax mount that is for this specific servo. We will however be upgrading it to a Dynamixel AX-12A with a custom made mount. Comment by Mac on 2011-11-14: We're going to need more details. Describe your system to us: it sounds like you've built a tilt-mount Hokuyo. (Or are you on a PR2?) Answer: If I understand your question correctly (from the comments above), you want to know how to correctly turn a tilting 2D laser into a "3D" laser. In short, you'll need the tilt angle, plus some geometry, so that you can describe the 3D rays the laser is casting as it tilts. Any servo with an encoder (like, say, the AX-12+ you mention) can provide this information directly. You'll still need to do the calibration by hand (for example, how far above the axle of your servo is your laser's center of projection?), but then you can describe the whole thing to ROS by way of a set of tf frames and it can figure the rest out for you.
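The geometry the answer describes can be sketched in a few lines. Below is an illustrative Python function with assumed conventions (tilt about the servo's y-axis, a mounting offset h along z; the function name and frame choices are mine). In practice a tf frame chain encodes exactly this transform, but the math is worth seeing once:

```python
import math

def scan_point_to_3d(r, bearing, tilt, h):
    """Convert one 2D scan return (range r at in-plane angle `bearing`),
    taken at a given servo `tilt`, into servo-frame 3D coordinates.
    h is the hand-measured height of the laser's center of projection
    above the servo axle (the calibration the answer mentions)."""
    # Point in the laser's own scan plane.
    x, y, z = r * math.cos(bearing), r * math.sin(bearing), 0.0
    # Apply the mounting offset, then rotate about the servo (y) axis.
    z += h
    xs = x * math.cos(tilt) + z * math.sin(tilt)
    zs = -x * math.sin(tilt) + z * math.cos(tilt)
    return (xs, y, zs)

# With zero tilt the scan stays in a plane at height h above the axle.
print(scan_point_to_3d(2.0, 0.0, 0.0, 0.1))   # (2.0, 0.0, 0.1)
```

Sweeping `tilt` over the servo's travel while collecting scans is what turns the stack of 2D slices into a 3D point cloud.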
Originally posted by Mac with karma: 4119 on 2011-11-16 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 7289, "tags": "ros, hokuyo-node, laser-pipeline" }
Why do the $L_z$ and $L^2$ operators share eigenfunctions, but the $L_x$ and $L_y$ operators don't?
Question: In my lecture notes the following was written: I would understand, in the case of an applied field, if there were some symmetry-breaking feature which would allow for a preferred axis or something which could explain why the $L_z$ operator and $L^2$ operator share eigenfunctions as mentioned in the above notes. I would've thought that in the case of no external field, there is no reason to assume the x, y and z axes have any distinguishing factor between them. Why does this happen? Answer: It is possible to have a simultaneous eigenfunction of ${\hat L}^2$ and one other component of the angular momentum. Typically this is taken to be ${\hat L}_z$, but it could just as well be the $x$ or $y$ component - there is nothing special about $z$. This is known as "choosing a quantization axis".
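The statement can be checked directly with matrices. Here is a small added Python check (using spin-1/2 for brevity with $\hbar = 1$; the commutation algebra is the same for orbital angular momentum): $S^2$ commutes with every single component, so it shares an eigenbasis with any one of them, while the components fail to commute with each other.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def comm(A, B):  # commutator [A, B] = AB - BA
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def is_zero(A):
    return all(abs(A[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Spin-1/2 operators (hbar = 1): S_i = sigma_i / 2
Sx = [[0, 0.5], [0.5, 0]]
Sy = [[0, -0.5j], [0.5j, 0]]
Sz = [[0.5, 0], [0, -0.5]]
S2 = add(add(mul(Sx, Sx), mul(Sy, Sy)), mul(Sz, Sz))  # = (3/4) * identity

print(is_zero(comm(S2, Sx)), is_zero(comm(S2, Sy)), is_zero(comm(S2, Sz)))
# S^2 commutes with ANY single component: a shared eigenbasis exists
print(is_zero(comm(Sx, Sy)))
# [Sx, Sy] = i*Sz != 0: Sx and Sy cannot be diagonalized simultaneously
```

No axis is distinguished by the algebra itself; picking which single component to diagonalize alongside $S^2$ is precisely the "choice of quantization axis".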
{ "domain": "physics.stackexchange", "id": 63369, "tags": "quantum-mechanics, angular-momentum, operators, commutator, observables" }
Prove the triangle inequality for the trace norm: $\|M+N\|_1\le \|M\|_1+\|N\|_1$
Question: I have been trying to show that $$||M+N|| \le ||M|| + ||N||$$ However, I seem to be missing some fundamental property of either how the trace or square root acts on these sums of matrices, or how the Hilbert-Schmidt I.N can be used. I can expand it easily, getting $$||M+N|| = Tr|M+N|=Tr[\sqrt{(M+N)(M+N)^\dagger}]$$which evaluates to $$Tr[\sqrt{MM^{\dagger}+MN^{\dagger}+NM^{\dagger}+NN^{\dagger}}]$$ Now obviously you can't just square both sides, as the square operation doesn't distribute over the trace function, nor does the square root over the sum, or this would be trivial using the Hilbert-Schmidt I.N. So what am I missing here? Answer: For square matrices, $$\Vert A \Vert_1 = \max_U \vert \text{Tr}(UA)\vert$$ where the maximum is over all unitaries $U$ acting on the relevant matrix space. $$\Vert A+B\Vert_1 = \max_U \vert \text{Tr}(U(A+B))\vert = \max_U \vert \text{Tr}(UA+UB)\vert = \max_U \vert \text{Tr}(UA) + \text{Tr}(UB)\vert$$ For the absolute value we have, for any complex numbers, the inequality $$\vert \text{Tr}(UA) + \text{Tr}(UB)\vert \leq \vert \text{Tr}(UA)\vert + \vert \text{Tr}(UB)\vert$$ Taking the maximum preserves this relation, and you get the wanted result.
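The inequality can also be sanity-checked numerically. For 2x2 matrices the singular values come from the eigenvalues of the Gram matrix $M^\dagger M$ via the quadratic formula, so no linear-algebra library is needed. An added sketch (assuming the trace norm equals the sum of singular values, as in the question):

```python
import math

def trace_norm_2x2(M):
    # ||M||_1 = sum of singular values = sum of sqrt(eigenvalues of M^dag M)
    a, b = M[0]
    c, d = M[1]
    # Gram matrix G = M^dag M has real trace and determinant.
    t = abs(a) ** 2 + abs(b) ** 2 + abs(c) ** 2 + abs(d) ** 2
    det = abs(a * d - b * c) ** 2          # det G = |det M|^2
    disc = max(t * t - 4 * det, 0.0)       # clamp rounding noise
    lam1 = (t + math.sqrt(disc)) / 2
    lam2 = (t - math.sqrt(disc)) / 2
    return math.sqrt(lam1) + math.sqrt(max(lam2, 0.0))

M = [[1, 2], [3, 4]]
N = [[0, 1j], [1, 0]]
S = [[M[i][j] + N[i][j] for j in range(2)] for i in range(2)]
lhs = trace_norm_2x2(S)
rhs = trace_norm_2x2(M) + trace_norm_2x2(N)
print(lhs <= rhs + 1e-12)   # triangle inequality holds for this pair
```

This is of course only a spot check of the general proof above, but it is a quick way to catch sign or normalization mistakes when working with trace distances.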
{ "domain": "quantumcomputing.stackexchange", "id": 4103, "tags": "mathematics, trace-distance, linear-algebra" }
Software floating-point multiplication
Question: I wrote a floating-point multiplication function as an excercise. The program compares its result to the usual hardware multiplication result and for this purpose I use unspecified behavior, but the function itself should be fine. I do get warnings about XOR-ing 1-bit ints but I have no idea why. I noticed there aren't that many comments in the multiplication function so I wonder what I could put there. #include <math.h> #include <assert.h> #include <memory.h> #include <limits.h> #include <fenv.h> #include <float.h> #include <thread> #include <mutex> #include <iostream> #include <random> int msb( uint64_t v ) { if(v == 0) return 0; return 63-__builtin_clzll(v); } void roundedShift(uint64_t* a, unsigned int n) { if(n >= sizeof(uint64_t)*CHAR_BIT) { // we don't care about rounding if n == 64 because mantissa product is always less than 2^46 *a = 0; } uint64_t roundoff = *a - ((*a >> n) << n); *a >>= n; if(roundoff*2 > (1ull<<n) || (roundoff*2 == (1ull<<n) && (*a)%2 == 1)) { ++(*a); } } struct SoftFloat { unsigned int mantissa : 23; unsigned int exponent : 8; unsigned int sign : 1; static const SoftFloat nan; SoftFloat operator *(SoftFloat right) const { if(exponent == 255 || right.exponent == 255) { return specialMultiplication(right); } uint64_t fullLeftMantissa = (mantissa)+(exponent!=0?(1<<23):0); uint64_t fullRightMantissa = (right.mantissa)+(right.exponent!=0?(1<<23):0); uint64_t fullNewMantissa = fullLeftMantissa * fullRightMantissa; // experiment have shown that operating on biased exponents is faster than // computing unbiased exponent and then adding bias short leftNormalBiasedExponent = exponent != 0 ? exponent: 1; short rightNormalBiasedExponent = right.exponent != 0 ? 
right.exponent : 1; short newNormalBiasedExponent = leftNormalBiasedExponent + rightNormalBiasedExponent - 127 - 23; int totalShift = 0; if(newNormalBiasedExponent < 1) { int diff = -newNormalBiasedExponent + 1; newNormalBiasedExponent = 1; totalShift += diff; } int implicitBit = msb(fullNewMantissa); int shift = implicitBit - totalShift - 23; if(shift >= 0) { newNormalBiasedExponent += shift; totalShift += shift; fullNewMantissa &= ~(1ll << implicitBit); } else { newNormalBiasedExponent = 0; } roundedShift(&fullNewMantissa, totalShift); if(fullNewMantissa == (1 << 23)) { ++newNormalBiasedExponent; fullNewMantissa = 0; } if(newNormalBiasedExponent >= 255) { newNormalBiasedExponent = 255; fullNewMantissa = 0; } return SoftFloat{(uint)fullNewMantissa, (uint)newNormalBiasedExponent, sign ^ right.sign}; // i don't really understan why this return expression gives warning // narrowing conversion of ‘(((int)((const SoftFloat*)this)->SoftFloat::sign) ^ ((int)right.SoftFloat::sign))’ from ‘int’ to ‘unsigned int’ } bool isNan() const { return exponent == 255 && mantissa != 0; // This is same as (*reinterpret_cast<const unsigned int*>(this) << 1) > 0b11111111000000000000000000000000u, // but the experiments have shown it doesn't make any difference for speed on my machine. 
// Current version does not invoke any UB } bool isZero() const { return exponent == 0 && mantissa == 0; } bool isRepresentationEqual(const SoftFloat& right) { return !memcmp(this, &right, sizeof(SoftFloat)); } // Correctness testing relies on specific layout of floats and of fields inside SoftFloat, but the multiplication function itself does not float toHardFloat() const { float res; memcpy(&res, this, sizeof(SoftFloat)); return res; } static SoftFloat fromOrdinalNumber(const unsigned int& a) { SoftFloat res; memcpy(&res, &a, sizeof(SoftFloat)); return res; } static SoftFloat fromHardFloat(const float& a) { SoftFloat res; memcpy(&res, &a, sizeof(SoftFloat)); return res; } private: SoftFloat specialMultiplication(SoftFloat right) const { // precondition - at least one of *this, right is either inf, -inf or nan if(isNan() || right.isNan()) { return nan; } if(isZero() || right.isZero()) { return nan; } return {0, 255, sign ^ right.sign}; } }; const SoftFloat SoftFloat::nan = {1, 255, 0}; std::mutex coutMutex; void checkOneMultiplication(SoftFloat left, SoftFloat right) { SoftFloat softProduct = left*right; SoftFloat hardProduct = SoftFloat::fromHardFloat( left.toHardFloat() * right.toHardFloat()); bool equalRepresentation = softProduct.isRepresentationEqual(hardProduct); bool bothNan = softProduct.isNan() && hardProduct.isNan(); if(!(equalRepresentation || bothNan)) { std::lock_guard<std::mutex> lock(coutMutex); std::cerr << "failed\n"; std::cerr << "left operand\t" << left.mantissa << " " << left.exponent << " " << left.sign << " " << left.toHardFloat() << std::endl; std::cerr << "right operand\t" << right.mantissa << " " << right.exponent << " " << right.sign << " " << right.toHardFloat() << std::endl; std::cerr << "soft product\t" << softProduct.mantissa << " " << softProduct.exponent << " " << softProduct.sign << " " << softProduct.toHardFloat() << std::endl; std::cerr << "hard product\t" << hardProduct.mantissa << " " << hardProduct.exponent << " " << 
hardProduct.sign << " " << hardProduct.toHardFloat() << std::endl; abort(); } } void checkRange(unsigned int start, unsigned int end, int threadNumber) { // this loop checks multiplication with every one of 2^28 possible floats in given range of representaions. Right operand is pseudorandom but deterministic std::uniform_int_distribution<unsigned int> distr; std::mt19937 gen(threadNumber); for(unsigned int representationNumber = start; representationNumber != end; representationNumber++) { // can't use < in condition because last block ends with 0 int rigntOperandRepresentationNumber = distr(gen); SoftFloat leftOperand = SoftFloat::fromOrdinalNumber(representationNumber); SoftFloat rightOperand = SoftFloat::fromOrdinalNumber(rigntOperandRepresentationNumber); checkOneMultiplication(leftOperand, rightOperand); if(representationNumber%0x10000000 == 0 && representationNumber > start) { std::lock_guard<std::mutex> lock(coutMutex); std::cout << "thread " << threadNumber << " checked 0x" << std::hex << representationNumber-start << " multiplications" << std::endl; } } } int main(int argc, char *argv[]) { static_assert(sizeof(SoftFloat) == sizeof(float)); static_assert(alignof(SoftFloat) == alignof(float), ""); static_assert(sizeof(SoftFloat) == sizeof(int), ""); static_assert(__BYTE_ORDER__ == LITTLE_ENDIAN, ""); // Unfortunately we can't statically check further details of layout of structures assert(SoftFloat::fromHardFloat(std::numeric_limits<float>::denorm_min()).isRepresentationEqual(SoftFloat{1, 0, 0})); assert(SoftFloat::fromHardFloat(FLT_MIN).isRepresentationEqual(SoftFloat{0, 1, 0})); float nonConstant = -1; assert(SoftFloat::fromHardFloat(nonConstant*0).isRepresentationEqual(SoftFloat{0, 0, 1})); // Test can't run if any of the above asserts fail const int threadCount = 4; fesetround(FE_TONEAREST); // assuming mantissa is rounded to even when there are two nearest std::thread threads[threadCount-1]; const unsigned int blockSize = UINT32_MAX/4+1; for(int i = 1; 
i < threadCount; i++) { threads[i-1] = std::thread(checkRange, blockSize*i, blockSize*(i+1), i); } checkRange(0, blockSize, 0); SoftFloat specialCases[] = {{0, 0, 0}, // 0 {0, 0, 1}, // -0 {0, 255, 0}, //inf {0, 255, 1}, //-inf SoftFloat::nan, SoftFloat::fromHardFloat(std::nanf("")), SoftFloat::fromHardFloat(FLT_MIN), SoftFloat::fromHardFloat(1), SoftFloat::fromHardFloat(FLT_MAX), SoftFloat::fromHardFloat(FLT_EPSILON), SoftFloat::fromHardFloat(std::numeric_limits<float>::denorm_min()), SoftFloat::fromHardFloat(-FLT_MAX), SoftFloat::fromHardFloat(FLT_MIN_EXP), SoftFloat::fromHardFloat(FLT_MIN_10_EXP), SoftFloat::fromHardFloat(FLT_MAX_EXP), SoftFloat::fromHardFloat(FLT_MAX_10_EXP)}; int numberOfSpecialCases = sizeof(specialCases)/sizeof(SoftFloat); for(int i = 0; i < numberOfSpecialCases; i++) { for(int j = 0; j < numberOfSpecialCases; j++) { checkOneMultiplication(specialCases[i], specialCases[j]); } } for(int i = 1; i < threadCount; i++) { threads[i-1].join(); } std::cout << "checked 0x" << UINT32_MAX + numberOfSpecialCases*numberOfSpecialCases << " multiplications"; return 0; } The program tests multiplication of every possible floating-point number with a random number. This takes a long time to run, so I made it threaded. I also used fancy C++11 RNG stuff to make it deterministic. The function uses __builtin_clzll which is available on gcc and clang but not on MSVC, I believe. Answer: Fairly good code overall. Mostly small stuff below. Narrowing I do get warnings about XOR-ing 1-bit ints but I have no idea why. Weak compiler. Perhaps use bool sign : 1; Naked magic numbers Rather than 23, 127, etc, consider the C-ish #define MANTISSA_BIT_WIDTH 23 or a C++ -ish const int mantissa_bit_width = 23; uint? uint appears non-standard. Perhaps unsigned? 
// return SoftFloat{(uint)fullNewMantissa, (uint)newNormalBiasedExponent, sign ^ right.sign}; return SoftFloat{(unsigned)fullNewMantissa, (unsigned)newNormalBiasedExponent, sign ^ right.sign}; Minor stuff Portability Although OP has "gcc, linux, x64", little changes would step toward portability without sacrificing efficient emitted code. // if(fullNewMantissa == (1 << 23)) { if(fullNewMantissa == (1ul << 23)) { // `int` could be 16-bit // fullNewMantissa &= ~(1ll << implicitBit); fullNewMantissa &= ~(1ull << implicitBit); // Why mess with signed shifts? int vs. short Rarely is short faster/better than int unless one has an array of the type. Consider // short leftNormalBiasedExponent, rightNormalBiasedExponent, newNormalBiasedExponent int leftNormalBiasedExponent, rightNormalBiasedExponent, newNormalBiasedExponent sizeof type vs sizeof object Consider the clearer, less maintenance // return !memcmp(this, &right, sizeof(SoftFloat)); return !memcmp(this, &right, sizeof *this); roundedShift() Unclear about roundedShift() correctness. Partly due to lack of comments, partly due to "it takes time" to analyze.
{ "domain": "codereview.stackexchange", "id": 36140, "tags": "c++, floating-point" }
Does a longer straw have an advantage in suction force over a shorter straw?
Question: After the pandemic, I've gotten chances to use various types of hand sanitizers with different bottle designs. Among them, I've found that the bottles with longer straws (I couldn't find the exact vocabulary for this, but I hope you get what I mean since we all use hand sanitizers these days) show better suction performance. How do I know that? When we use 500 ml hand sanitizers, we often observe that the gel left near the bottom of the bottle is hard to draw up the straw and onto our hands. However, when I use bottles with longer straws, I've found that I can use almost all of the gel left near the bottom of the bottle. I assume this shouldn't be a coincidence. Therefore, Assumption: Of two straws of different lengths, the longer straw has an advantage in suction force over the shorter straw, all other conditions being equal. Could anyone give some hint or advice that accounts for my assumption? Thanks. Answer: No, I don't think so. For the scenario you describe of getting the last bits out of the bottle, and all other conditions being equal, the longer straw has to battle a higher hydrostatic pressure difference and is thus at a disadvantage. Viscous forces adding up over a greater length of straw won't help either... But then there are so many different variables at play: straw length, diameter, power of the pump, fluid viscosity, bottle geometry (specifically the bottom and how the straw lands on it). What might be the case - and this is a wild guess, I think - is that the longer straw has a more powerful pump fitted, and when you get to the rest of the bottle and there is already some air in the straw (such that hydrostatic pressure will play no or a lesser role), the higher power of the pump might empty the rest of the bottle more easily. Not at all sure if that's the case.
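The hydrostatic argument in the answer can be made quantitative; a minimal sketch (illustrative numbers, assuming a water-like gel density — not data from the question):

```python
RHO = 1000.0   # fluid density in kg/m^3 (water-like; sanitizer gel assumed similar)
G = 9.81       # gravitational acceleration in m/s^2

def hydrostatic_head(straw_height_m):
    """Extra pressure (Pa) the pump must overcome to lift fluid up the straw."""
    return RHO * G * straw_height_m

short_straw = hydrostatic_head(0.10)   # 10 cm straw
long_straw = hydrostatic_head(0.18)    # 18 cm straw

# All else equal, the longer straw fights a larger pressure difference.
assert long_straw > short_straw
```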
{ "domain": "physics.stackexchange", "id": 87434, "tags": "forces, classical-mechanics, fluid-dynamics, vacuum" }
Commutativity and Associativity of Poincare Transformations
Question: Commutativity and Associativity of Poincare Transformations: For commutativity I showed that $2$ successive transformations do not commute with the same transformations reversed. $$(\Lambda_2 \Lambda_1, \Lambda_2 a_1 + a_2)$$ does not commute with $$(\Lambda_1 \Lambda_2, \Lambda_1 a_2 + a_1)$$ For associativity I tried showing $3$ transformations are associative, i.e. $$((\Lambda_1 a_1) (\Lambda_2 a_2))(\Lambda_3 a_3) = (\Lambda_1 a_1)((\Lambda_2 a_2)(\Lambda_3 a_3))$$ I got that this isn't associative, but I thought Poincare transformations formed a group and hence should be associative. Edit: I got $$(\Lambda_2 \Lambda_1 \Lambda_3 , \Lambda_2 \Lambda_1 a_3 + \Lambda_2 a_1+a_2)$$ which is not the same as $$(\Lambda_1 \Lambda_3 \Lambda_2 , \Lambda_3 \Lambda_2 a_1 + \Lambda_3 a_1+a_3)$$ Can someone please provide assistance with this question? Answer: The group action of the Poincaré group is given by $$(\Lambda_1;a_1)*(\Lambda_2;a_2)=(\Lambda_1\Lambda_2; a_1 + \Lambda_1 a_2).$$ Thus, we find that \begin{align}\left( (\Lambda_1;a_1)*(\Lambda_2;a_2)\right) * (\Lambda_3;a_3) &= (\Lambda_1\Lambda_2; a_1 + \Lambda_1 a_2) * (\Lambda_3;a_3) \\ &= (\Lambda_1\Lambda_2\Lambda_3; [a_1 + \Lambda_1 a_2] + [\Lambda_1\Lambda_2] a_3) \end{align} on the one hand and \begin{align}(\Lambda_1;a_1)*((\Lambda_2;a_2) * (\Lambda_3;a_3)) &= (\Lambda_1;a_1)*(\Lambda_2\Lambda_3; a_2 + \Lambda_2 a_3) \\ &= (\Lambda_1\Lambda_2\Lambda_3; a_1 + \Lambda_1 [a_2 + \Lambda_2 a_3]) \end{align} on the other. As you can see the multiplication $*$ is indeed associative. I hope this could help you. Cheers!
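The two expansions in the answer can also be checked numerically. A small pure-Python sketch (the matrices below are arbitrary integer matrices, since the group law's associativity does not depend on them being Lorentz transformations):

```python
# Group law: (L1, a1) * (L2, a2) = (L1 L2, a1 + L1 a2), checked with
# arbitrary 2x2 matrices for illustration.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(A, v):
    return [sum(row[k] * v[k] for k in range(len(v))) for row in A]

def vadd(u, v):
    return [a + b for a, b in zip(u, v)]

def compose(t1, t2):
    (L1, a1), (L2, a2) = t1, t2
    return (matmul(L1, L2), vadd(a1, matvec(L1, a2)))

t1 = ([[1, 2], [3, 4]], [1, 0])
t2 = ([[0, 1], [1, 1]], [2, -1])
t3 = ([[2, 0], [0, 3]], [0, 5])

assert compose(compose(t1, t2), t3) == compose(t1, compose(t2, t3))  # associative
assert compose(t1, t2) != compose(t2, t1)                            # not commutative
```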
{ "domain": "physics.stackexchange", "id": 67404, "tags": "inertial-frames, group-theory, poincare-symmetry" }
Phasor transformation to sine or cosine?
Question: In my EM waves lecture, our lecturer explained how we make the phasor transformation of a particular function such as $$A\cos(\omega t-( \alpha +\beta z))u_{y}$$ converted into the phasor form $$H_{S}=Ae^{-j( \alpha +\beta z)}u_{y}$$ but sometimes we convert it back into a sine function in some problems, and I didn't quite understand when to do which. From Euler's formula I assume we take the imaginary part for sine and the real part for cosine, but in phasor form everything looks imaginary, so how am I supposed to know which function to use? Answer: From elementary trigonometry we have $\sin(\theta+\pi/2)=\cos(\theta)$: sine and cosine differ only by a phase shift $\delta$. In EM we understand $\delta$ as the phase (time) lag between two waves. When you apply initial conditions to your problem (for example $E(t=0)=0$) you can calculate $\delta$; you can use either expression (sine or cosine) if you are careful defining $\delta$.
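In the cosine-reference convention implied by the question, the time-domain field is recovered as the real part of $H_{S}e^{j\omega t}$; taking the imaginary part instead would yield the sine convention. A quick numerical check (illustrative values):

```python
import cmath
import math

A, alpha, beta, z = 2.0, 0.3, 1.5, 0.7   # illustrative amplitude and phase terms
omega, t = 2 * math.pi * 50, 0.011       # illustrative frequency and time

phasor = A * cmath.exp(-1j * (alpha + beta * z))           # H_S (cosine reference)
instantaneous = (phasor * cmath.exp(1j * omega * t)).real  # Re{H_S e^{j w t}}

# Recovers the original cosine field; Im{...} would give the sine form instead.
assert math.isclose(instantaneous, A * math.cos(omega * t - (alpha + beta * z)))
```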
{ "domain": "physics.stackexchange", "id": 77079, "tags": "electromagnetism, waves, conventions, complex-numbers" }
Does freezing point depression depend on the charge of the solvated ions?
Question: Assuming an ideal solution, which of these 0.10 M solutions would have the least freezing point depression? $\ce{HCl}$ $\ce{NaBr}$ $\ce{KNO3}$ $\ce{MgSO4}$ They all produce solutions with two ions per formula unit, but the correct answer is somehow supposed to be $\ce{MgSO4}$. How is this? I don't think it can be due to the weak basicity of sulfate; that should in fact increase the ions present and have an even greater freezing point depression. Answer: First let's gather the data from the CRC Handbook of Chemistry and Physics, 92nd edition (mol/L is moles per liter). The data below is reported as molarity rather than molality since the problem is posed in molarity. However, in a solution as dilute as 0.1 M the difference shouldn't be large.

HCl     wt%   mol/L   Delta-F
        0     0        0.00
        0.5   0.137   -0.49
        1     0.275   -0.99
        2     0.553   -2.08
        3     0.833   -3.28
        4     1.117   -4.58
        5     1.403   -5.98
least squares fit: Delta-f = -3.4683*[HCl] - 0.5624*[HCl]^2
[HCl] = 0.1 M, Delta-f = -0.35

NaBr    wt%   mol/L   Delta-F
        0     0.00     0.00
        0.5   0.049   -0.17
        1     0.098   -0.34
        2     0.198   -0.69
        3     0.301   -1.04
        4     0.405   -1.39
        5     0.512   -1.76
least squares fit: Delta-f = -3.4823*[NaBr] + 0.0930*[NaBr]^2
[NaBr] = 0.1 M, Delta-f = -0.35

KNO3    wt%   mol/L   Delta-F
        0     0        0.00
        0.5   0.05    -0.17
        1     0.099   -0.33
        2     0.2     -0.64
        3     0.302   -0.94
        4     0.405   -1.22
        5     0.509   -1.50
least squares fit: Delta-f = -3.4264*[KNO3] + 0.9923*[KNO3]^2
[KNO3] = 0.1 M, Delta-f = -0.33

MgSO4   wt%   mol/L   Delta-F
        0     0        0.00
        0.5   0.042   -0.10
        1     0.084   -0.19
        2     0.169   -0.36
        3     0.257   -0.52
        4     0.346   -0.69
        5     0.437   -0.87
least squares fit: Delta-f = -2.3466*[MgSO4] + 0.9726*[MgSO4]^2
[MgSO4] = 0.1 M, Delta-f = -0.22

So there is a difference. 0.1 molar $\ce{MgSO4}$ has a freezing point depression of 0.22 degrees Celsius whereas the other 0.1 molar salts are about 0.35 degrees. As far as the "answer" goes the problem is very poorly worded.
First the problem states to "assume an ideal solution," then poses the problem in such a manner that some sort of non-ideal behavior must be assumed to obtain an answer. "Ideal behavior" for electrolytes only occurs in very dilute solutions. A 0.1 molar solution is much too concentrated for ideal electrolyte behavior. Without any "lookup" of data, the only solution seems to be some hand-waving about $\ce{MgSO4}$ being the only salt with divalent ions, so the effective dissociation is smaller because of ion pairs forming in solution.
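Evaluating the least-squares fits above at 0.1 mol/L reproduces the numbers quoted in the answer (the coefficients are copied from the fits; this is just arithmetic, not new data):

```python
# Coefficients from the least-squares fits in the answer:
# Delta-f = b1*[salt] + b2*[salt]^2, with [salt] in mol/L.
FITS = {
    "HCl":   (-3.4683, -0.5624),
    "NaBr":  (-3.4823,  0.0930),
    "KNO3":  (-3.4264,  0.9923),
    "MgSO4": (-2.3466,  0.9726),
}

c = 0.1  # mol/L
depression = {salt: round(b1 * c + b2 * c * c, 2) for salt, (b1, b2) in FITS.items()}

assert depression["MgSO4"] == -0.22   # the smallest depression in magnitude
assert all(abs(depression[s]) > abs(depression["MgSO4"])
           for s in ("HCl", "NaBr", "KNO3"))
```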
{ "domain": "chemistry.stackexchange", "id": 5478, "tags": "physical-chemistry, aqueous-solution, solubility" }
Time between collisions and collision time
Question: I'm having a hard time really getting how the relationship between pressure and mean square velocity of a gas is derived, for one single reason... it seems I don't really know what impulse is. From what I know, force is the rate of change of momentum, i.e. impulse/time of collision. But in the various explanations I see, when a gas particle collides with the wall of the container, the time period considered is that between collisions, and not the duration of the collision between that gas particle and the wall of the container. I understand that this might be true by the formula, since in this time between collisions the particle hits the wall once and thus changes momentum once in this time interval... but it just doesn't seem right to me. Could we then take any time interval during which the momentum of a particle changes once and calculate force as this change in momentum divided by the arbitrarily chosen time interval? I'm really confused... help please. Answer: The interaction time during the collision turns out not to matter. In most other "physics problems" calculations, we only care about the force during the interaction or collision. So the length of the collision is important. But here we want the average force over time (meaning across multiple collisions). This average means we don't care about the forces in any single collision, just the total impulse from each, and how often they occur. Let's imagine someone dribbling a soccer ball on their leg. The ball moves slightly, but overall is about the same height over time, suggesting that the average acceleration is zero and that the average net force over time on the ball is zero. Let's assume the ball has a weight of 5N, so the average force from the leg must be 5N. $F = \frac{\Delta p}{\Delta t}$ If the person is dribbling by hitting it once every second, we know the impulse delivered every second is sufficient to generate an average force of 5N over the entire second.
$$ \Delta p = 5\text{Ns}$$ The force during the interaction will be much higher, but isn't necessary to know. In the gas calculation, we do the same math, but instead start with impulse information (because we know the change in momentum of the molecule) and the time information (because we know the speed and the size of the box), so we can calculate the average force. Could we just then take any time interval during which the momentum of a particle changes once and then calculate force as this change in momentum divided by the arbitrarily chosen time interval? Yes, but for a continuous process (bouncing a soccer ball or molecules bouncing off walls), the time interval during which the momentum changes exactly once is not arbitrary. It's determined by the periodicity of the event. We can't use 1 second for the soccer ball if the person is dribbling the ball more slowly than that. The average value does depend on the chosen time interval... We do choose the time interval between two successive collisions, but why? Like, why is the average value over this time interval more legitimate than another one? You can use any interval you want, but you have to count the total impulse delivered during that time. If there are two collisions in that interval, then each will presumably deliver half the impulse. Setting the interval to one in which there is only one collision is just a convenience. Twice as long, twice as many collisions, twice as much impulse. But total impulse divided by time remains constant. Like when you say the average acceleration of the ball over time is zero... Why? Because the ball remains in front of the dribbler. If the ball had some consistent acceleration, it would move away. It was just an explicit statement so that we could compare the weight of the ball (pushing it down) to the impulse delivered by the leg (pushing it up). Over the course of the session, these two must be almost exactly equal.
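The bookkeeping in the answer — total impulse divided by total time, regardless of how long each collision lasts — can be sketched for a single molecule bouncing between opposite walls (illustrative numbers, not from the original answer):

```python
# Average force on a wall = total impulse / total time, independent of how
# long each individual collision lasts.
m = 6.6e-27     # molecule mass in kg (helium-ish, illustrative)
v = 500.0       # x-component of speed in m/s
L = 0.1         # box length in m

impulse_per_hit = 2 * m * v   # an elastic bounce reverses p_x
period = 2 * L / v            # round-trip time between hits on the same wall
f_avg = impulse_per_hit / period

# Same answer as the usual kinetic-theory expression m v^2 / L.
assert abs(f_avg - m * v * v / L) < 1e-25
```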
{ "domain": "physics.stackexchange", "id": 65244, "tags": "thermodynamics, statistical-mechanics, ideal-gas, gas" }
Boat Hull Drag in Shallow Water
Question: I paddle several different types of small craft in the ocean and bays near my home. One phenomenon I've observed is beyond my understanding of drag on a narrow displacement hull. When paddling in water less than maybe 6 feet in depth, there is a very noticeable increase in drag, which increases as the depth decreases. This is true for all the types of boats I paddle, from a large 6-person Hawaiian outrigger canoe (L=40', width=2', draft=0.67') to a one-person racing kayak (L=20', width=1.5', draft=0.33'). I should add that hull speeds are generally in the 6-8 mph range. I've read that rowing shells also encounter this same drag, and they are longer and narrower, but with drafts in more or less the same range. I've read one explanation that the boundary layer on the hull makes contact with the bottom and the bottom increases the drag on the outer part of the boundary layer, which in turn is transmitted to the hull. This seems highly unlikely to me, especially when the water is more than maybe a foot deep. I don't believe the boundary layer from a 20 foot kayak extends that deep. As I said, this is noticeable at water depths up to 6 feet or more. I've also heard that pressure waves from the hull bounce off the bottom and reflect back up to the hull and cause drag, but as an engineer that doesn't really sound very rigorous to me. It seems to me there might be some interaction between the bow wave of a boat and the bottom, since I know that the dynamics of surface waves do extend approximately as deep as their wavelength, and at these speeds the wavelength of a bow wave is certainly on par with the water depth. Can anyone give me a properly defensible answer to this? Answer: Your experience with more resistance in water of depth of ~6 ft is probably due to increased resistance of the boat against the bow wave, which itself is beginning to "feel bottom". There is an excellent post on the Earth Science Stack that explains what it means to "feel bottom" here.
If you can estimate the size of the bow wave (wavelength) then possibly you can use the mathematical relationships posted there to determine if the bow wave is considered a deep water wave, shallow water wave or otherwise in the region of transition. If the calculated regime is either shallow water or transition, that could explain your observation.
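The "feeling bottom" criterion mentioned in the answer can be sketched with the finite-depth dispersion relation $c = \sqrt{(g/k)\tanh(kd)}$ (the wavelength and depth below are rough illustrative guesses, not values from the question):

```python
import math

g = 9.81            # m/s^2
depth = 1.8         # ~6 ft of water, in metres
wavelength = 5.0    # rough bow-wave wavelength guess for hull speeds of 6-8 mph

k = 2 * math.pi / wavelength
c = math.sqrt((g / k) * math.tanh(k * depth))  # finite-depth phase speed
c_deep = math.sqrt(g / k)                      # deep-water limit
c_shallow = math.sqrt(g * depth)               # long-wave (shallow-water) limit

# tanh(k*depth) < 1, so the wave is already slower than its deep-water speed:
# at this wavelength and depth it "feels bottom".
assert c < c_deep
```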
{ "domain": "physics.stackexchange", "id": 40357, "tags": "fluid-dynamics, drag" }
Unable to implement control on MoveIt: Parameter '~moveit_controller_manager' not specified
Question: I am struggling to implement a joint_trajectory_controller on MoveIt. I have: 1. Set up my controllers.yaml file: controller_list: - name: /nymble_arm_controller action_ns: joint_trajectory_action type: FollowJointTrajectory default: true joints: - base_swivel - arm_joint - elbow_joint - wrist_pitch - wrist_yaw - wrist_roll 2. Implemented an action server node for the joint_trajectory_action interface. The action server I have implemented is from here. 3. Created a moveit_planning_execution launch file as described here. This launch file contains the required controller_manager.launch file described here. 4. Created a launch file to start the action server and included the above launch file. I have gone through all the similar questions asked before but none has helped me. I will be grateful for some help. Originally posted by rohin on ROS Answers with karma: 99 on 2016-06-16 Post score: 0 Answer: The mistake I was making was that the controller.launch file was not named according to the following convention: "robotname"_moveit_controller_manager.launch.xml Originally posted by rohin with karma: 99 on 2016-06-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 24952, "tags": "ros, moveit, joint-trajectory-controller" }
Comparing acidity of anilinium ion and para-fluoro anilinium ion
Question: Which of these compounds - the anilinium ion and the para-fluoro anilinium ion - is more acidic? I thought the answer would be the first one, since $\ce{-F}$ is an ortho-para activating group, so the electron density at the carbon to which $\ce{-NH3+}$ is attached would be higher. I know that $\ce{-F}$ has a strong inductive effect, but since this is para with respect to $\ce{-NH3+}$, I thought I could say +M dominates. However, the answer given was the second one. Why is this so, and when can I neglect the inductive effect of halogen substituents in benzene in favour of the mesomeric effect caused by them? Answer: The best way to compare acidities of organic compounds is by drawing out the conjugate bases and figuring out which one is more stable. Molecules with more stable conjugate bases are more acidic, so doing the same here: The conjugate bases are aniline and para-fluoro aniline respectively. The $\ce{-NH2}$ group shows +M (mesomeric effect), so on drawing out the resonance structures we find that the second resonance structure is more stable, as the resulting negative charge is inductively withdrawn by the fluorine group. For halogens, the inductive effect dominates over their positive mesomeric effect, especially in the case of fluorine (because it is small and the lone pairs experience a high effective nuclear charge; in the case of large atoms like chlorine, bromine and iodine, the outer electrons are held weakly).
{ "domain": "chemistry.stackexchange", "id": 8045, "tags": "organic-chemistry, acid-base" }
Is my ObjectCache wrapper sound?
Question: Wrapper: internal static class Cache { private static ObjectCache InternalCache { get { return MemoryCache.Default; } } private static T CacheOrGetExisting<T>(string key, Func<T> valueFactory) { T value; object uncastedValue = InternalCache.Get(key); if (uncastedValue == null) { value = valueFactory(); InternalCache.Set(key, value, DateTimeOffset.UtcNow.AddHours(1)); } else { value = (T)uncastedValue; } return value; } public static decimal Foo { get { return CacheOrGetExisting("foo", () => { return 5m; // data access here }); } } public static Bar Bar { get { return CacheOrGetExisting("bar", () => { return new Bar(); // data access here }); } } } Usage in a Controller: model.Foo = Cache.Foo; One thing I'm not sure of: is the readonly property necessary for InternalCache or can I write private static ObjectCache InternalCache = MemoryCache.Default;? I'm not worried about the tiny chance of multiple calls to the same uncached object causing multiple value = valueFactory(); and I'm also not worried about stale data. The main goal is to cache some informational data that is simply displayed on the screen, instead of retrieving it every request. I've tested this with an expiration of 1 minute and a class that gets DateTime.Now on creation, and it worked fine, so I believe the code is bug-free. Answer: As Magus pointed out in the comments, making everything static can present some issues. Using these static properties in other classes makes those classes tightly coupled to the Cache class. Why should you care? Consider this class: public class Bar { public int DoSomething() { return 5 + Cache.CachedNumber; } } You've lost your ability to test DoSomething() in isolation from the rest of your code. Any time you test DoSomething(), it relies on a specific implementation of Cache. It also makes it difficult to use the Bar class with different cache types (in memory, database, file system, etc.). 
Ideally, Bar would look something like this: public class Bar { private ICacheProvider Cache { get; set; } public Bar(ICacheProvider cache) { this.Cache = cache; } public int DoSomething() { return 5 + Cache.CachedNumber; } } Now we can: Test that DoSomething is doing its job, regardless of flaws in the ICacheProvider instance (by passing a mock ICacheProvider that always returns CachedNumber = 3, for example). Pass in different ICacheProvider implementations that can vary depending on the use case, and Bar will still work without caring about them. Also, I would suggest not using something as vague as Cache or ICacheProvider (used for illustration). Once you start caching a lot of things, you may want to cache different properties differently, at which point your super cache class will become too large to maintain. Consider breaking it up into meaningful repositories that can implement caching however they want, or something similar.
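The testing benefit described above — substituting a fixed test double for the cache — is the same in any language; a Python sketch for brevity (FixedCache is a hypothetical stand-in for the answer's ICacheProvider):

```python
# Injecting the cache (instead of reaching for a static Cache class) lets a
# test substitute a fixed double for it.
class FixedCache:
    """Test double standing in for a cache provider: always the same number."""
    def __init__(self, value):
        self.cached_number = value

class Bar:
    def __init__(self, cache):
        self.cache = cache

    def do_something(self):
        return 5 + self.cache.cached_number

# do_something's logic is tested in isolation from any real cache.
assert Bar(FixedCache(3)).do_something() == 8
```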
{ "domain": "codereview.stackexchange", "id": 8090, "tags": "c#, asp.net, cache, asp.net-mvc" }
How to simplify second-order derivative of Ket in Dirac notation?
Question: I am currently playing around with Dirac notation in the context of interband transitions and came across a second derivative of a Ket. Under what conditions will this second derivative be zero: $\langle m | \ddot{n} \rangle = 0$? Or, what other simplifications can I make? I am used to seeing people drop second-order terms in expressions, but I am not sure about that here. Maybe I can simplify by considering only linear $H$, and inserting the identity $\langle m | \dot{n} \rangle =\frac{\langle m | \dot{H} |\ n \rangle}{(E_n-E_m)}$? But it's not clear to me. For a 2-band model, $|m\rangle$ and $|n\rangle$ are eigenstates from $H|m\rangle=E_m|m\rangle$, and the second derivative is with respect to some other parameter (such as momentum) that is not time. Answer: From the Schroedinger equation $$\frac{\partial}{\partial t}|\psi\rangle=-\frac{i}{\hbar}H|\psi\rangle.$$ Therefore, $$\frac{\partial^2}{\partial t^2}|\psi\rangle=-\frac{i}{\hbar}\frac{\partial}{\partial t}(H|\psi\rangle)=-\frac{i}{\hbar}\dot{H}|\psi\rangle-\frac{i}{\hbar}H\left(\frac{\partial}{\partial t}|\psi\rangle\right)=-\frac{i}{\hbar}\dot{H}|\psi\rangle-\frac{1}{\hbar^2}H^2|\psi\rangle.$$ I'm not entirely sure what your $|m\rangle$ and $|n\rangle$ are (are they eigenstates of some time-independent part of the Hamiltonian?). Hopefully, this is enough to get you on the right track.
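For a time-independent $H$ (so $\dot H = 0$), the answer's result reduces to $\partial_t^2|\psi\rangle = -H^2|\psi\rangle/\hbar^2$. A finite-difference sanity check in the energy eigenbasis, where $H$ is diagonal (illustrative values, $\hbar = 1$):

```python
import cmath

HBAR = 1.0
ENERGIES = [1.3, 2.7]          # eigenvalues of a diagonal, time-independent H
C0 = [0.6, 0.8j]               # initial amplitudes in the energy eigenbasis

def psi(t):
    """|psi(t)> components: c_n * exp(-i E_n t / hbar)."""
    return [c * cmath.exp(-1j * e * t / HBAR) for c, e in zip(C0, ENERGIES)]

t, h = 0.4, 1e-4
second_derivative = [(a - 2 * b + c) / h ** 2
                     for a, b, c in zip(psi(t + h), psi(t), psi(t - h))]
expected = [-(e / HBAR) ** 2 * comp for e, comp in zip(ENERGIES, psi(t))]

# Central difference agrees with -H^2 |psi> / hbar^2 componentwise.
assert all(abs(x - y) < 1e-5 for x, y in zip(second_derivative, expected))
```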
{ "domain": "physics.stackexchange", "id": 70549, "tags": "quantum-mechanics, homework-and-exercises, hilbert-space, notation, perturbation-theory" }
Disposing the Context in the Repository pattern
Question: I have a question about the Repository pattern: public abstract class RepositoryBase<T> : IDisposable, IRepository<T> where T : class, IEntity { /// <summary> /// For iterations using LINQ extension methods on the IQueryable and IEnumerable interfaces /// it is required that the context which was used to make the query was not disposed before the iteration /// otherwise it will throw an exception /// </summary> private Context _context; protected Context Context { get { return _context ?? (_context = new Context()); } } public virtual void AddOrUpdate(T entity) { var dbSet = Context.Set<T>(); if (dbSet.Contains(entity)) entity.ChangeDateTime = DateTime.Now; else entity.AddeDateTime = DateTime.Now; Context.Set<T>().AddOrUpdate(entity); Context.SaveChanges(); } public virtual void Delete(T entity) { Context.Set<T>().Remove(entity); Context.SaveChanges(); } public void Drop() { var dbSet = Context.Set<T>(); foreach (var entity in dbSet) dbSet.Remove(entity); Context.SaveChanges(); } public virtual IQueryable<T> Find(Expression<Func<T, bool>> predicate) { return Context.Set<T>().Where(predicate); } public IEnumerable<T> GetAll() { return Context.Set<T>(); } public void Dispose() { if (_context != null) { _context.Dispose(); _context = null; } } } Why I am not embracing each method implementation with a using and disposing the Context as soon as the query is done? Some of my methods, as Find and GetAll return collections. I would want to be able to iterate through them using LINQ, only this is not possible if the Context is disposed, as it would be in the original Repository model. Is this the best approach to go around this issue? Answer: My question is about why I am not embracing each method implementation with a using and disposing the Context as soon as the query is done. Some of my methods, as Find and GetAll return collections. 
I would want to be able to iterate through them using LINQ, only this is not possible if the Context is disposed, as it would be in the original Repository model. A Linq enumerable isn't enumerated when it's created: it's enumerated when you try to work with it. If we want to dispose the context within each method, I think you can do that by realizing the data, i.e. by reading it all into a concrete object such as a List or EnumerableQuery, before you dispose the context ... something like this (untested code ahead): public IEnumerable<T> GetAll() { using (Context context = new Context()) { IEnumerable<T> enumerable = context.Set<T>(); // enumerate into a List before disposing the context List<T> list = new List<T>(enumerable); return list; } } public virtual IQueryable<T> Find(Expression<Func<T, bool>> predicate) { using (Context context = new Context()) { IEnumerable<T> enumerable = context.Set<T>().Where(predicate); // enumerate into an EnumerableQuery before disposing the context // see https://stackoverflow.com/a/6765404/49942 for further details EnumerableQuery<T> queryable = new EnumerableQuery<T>(enumerable); return queryable; } } Beware that this is expensive if the data set is huge and you don't actually want all the data you queried.
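The deferred-execution pitfall the answer works around is not specific to LINQ: the same thing happens with any lazy sequence whose underlying resource is disposed before enumeration. A Python sketch of the analogous situation:

```python
import os
import tempfile

def rows_lazy(path):
    with open(path) as f:
        return (line.strip() for line in f)   # lazy: file closes before iteration

def rows_eager(path):
    with open(path) as f:
        return [line.strip() for line in f]   # realized while the file is open

fd, path = tempfile.mkstemp()
os.write(fd, b"a\nb\n")
os.close(fd)

try:
    list(rows_lazy(path))       # iterating now touches a closed file
    failed = False
except ValueError:              # "I/O operation on closed file"
    failed = True

assert failed                           # lazy version blows up, like IQueryable
assert rows_eager(path) == ["a", "b"]   # eager version is safe
os.remove(path)
```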
{ "domain": "codereview.stackexchange", "id": 42930, "tags": "c#, .net, database, repository" }
How can we create an AI to develop mobile apps?
Question: There are AIs creating games, content, and more. I'm wondering how an AI could develop a mobile app by itself. Programming languages might be easy for an AI to learn, and an AI could learn a lot from good open-source projects on GitHub. Trend prediction could help an AI select the topic for a great app. Still, there are lots of details involved in having an AI create a great app. Answer: We don't know how to do that yet. The problem is one of scale: Despite many years of research into program synthesis via heuristic methods, it's still not possible to automatically create programs (e.g. via Genetic Programming (GP), Grammatical Evolution (GE) or Learning Classifier Systems (LCS)) that are thousands of lines long, whether that's for mobile or any other application area. Contrary to popular belief, alternative formal methods approaches can indeed be used to create sizeable programs, but the kind of interaction that a mobile app would typically require is not easily specified in this way. The scale at which heuristic approaches are currently viable is closer to the scale of expressions (e.g. single program statements) than entire programs. An intermediate approach is therefore to provide a program template and let GP etc. generate the missing parts of the template. This paper describes how to combine Machine Learning with the 'Template Method' Design Pattern in order to create larger programs than would otherwise be possible, giving the specific example of a 'hyper-quicksort'.
{ "domain": "ai.stackexchange", "id": 140, "tags": "computer-programming" }
Returning a list of objects
Question: Please review the following code. The methods getFirsts and getSeconds, both of which are private, return a list of objects which implement CommonInterface. Is this a good or bad design? @Override public final List<? extends CommonInterface> getObjects(final CommonEnum type) { if (type == null) { return new ArrayList<CommonInterface>(); } switch (type) { case FIRST: return getFirsts(); case SECOND: return getSeconds(); default: return null; } } Answer: It's hard to say much since this seems to be rather pseudo-code. Anyway, here are two notes which you might find useful: I guess you could replace the switch-case structure with polymorphism. Two useful readings: Refactoring: Improving the Design of Existing Code by Martin Fowler: Replacing the Conditional Logic on Price Code with Polymorphism Replace Conditional with Polymorphism Are you sure that returning null in the default case is fine? I'd consider returning an empty list (as the method does when type == null) or throwing an exception (IllegalStateException, for example) if it's a programming error. (See: The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas: Dead Programs Tell No Lies)
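The "Replace Conditional with Polymorphism" refactoring suggested above, sketched in Python for brevity (the class and method names are hypothetical stand-ins for the Java originals):

```python
# Each enum case becomes an object that knows how to produce its own list,
# so the switch statement disappears from getObjects.
class First:
    def get_objects(self):
        return ["first-1", "first-2"]

class Second:
    def get_objects(self):
        return ["second-1"]

HANDLERS = {"FIRST": First(), "SECOND": Second()}

def get_objects(type_name):
    handler = HANDLERS.get(type_name)
    if handler is None:
        return []   # empty list for unknown/None types, instead of None
    return handler.get_objects()

assert get_objects("FIRST") == ["first-1", "first-2"]
assert get_objects(None) == []
```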
{ "domain": "codereview.stackexchange", "id": 2567, "tags": "java, design-patterns" }
Salted hash generator
Question: I've had a go at creating a small class in C# that can generate salted hashes from text. I would like to know how this could be improved (in terms of code style) and whether or not this code is secure enough that it could be used in a professional environment (it won't be). Hasher.cs using System.Security.Cryptography; namespace HashAndSaltTest { public static class Hasher { private static readonly int MaxSaltLength = 32; public static byte[] GenerateSaltedHash(byte[] plainText) { HashAlgorithm algorithm = new SHA256Managed(); byte[] salt = GenerateSalt(); byte[] saltedText = new byte[plainText.Length + salt.Length]; for (int i = 0; i < plainText.Length; i++) saltedText[i] = plainText[i]; for (int i = 0; i < salt.Length; i++) saltedText[plainText.Length + i] = salt[i]; return algorithm.ComputeHash(saltedText); } private static byte[] GenerateSalt() { byte[] salt = new byte[MaxSaltLength]; using (RandomNumberGenerator random = new RNGCryptoServiceProvider()) random.GetNonZeroBytes(salt); return salt; } } } Answer: Because HashAlgorithm implements IDisposable you should enclose the usage of it in a using block as well. Omitting braces {}, although they might be optional, can lead to hidden and therefore hard-to-find bugs. I would like to encourage you to always use braces. Instead of using a for loop to copy plainText to saltedText and salt to saltedText you may take advantage of the Array.CopyTo() method. Public methods should validate passed parameters. Because you don't return the salt you can only generate a hash but you can't verify the hash.
Applying the mentioned points can lead to public static byte[] GenerateSaltedHash(byte[] plainText) { if (plainText == null) { throw new ArgumentNullException("plainText"); } if (plainText.Length == 0) { throw new ArgumentException("Length may not be zero", "plainText"); } using (HashAlgorithm algorithm = new SHA256Managed()) { byte[] salt = GenerateSalt(); byte[] saltedText = new byte[plainText.Length + salt.Length]; plainText.CopyTo(saltedText, 0); salt.CopyTo(saltedText, plainText.Length); return algorithm.ComputeHash(saltedText); } } private static byte[] GenerateSalt() { using (RandomNumberGenerator random = new RNGCryptoServiceProvider()) { byte[] salt = new byte[MaxSaltLength]; random.GetNonZeroBytes(salt); return salt; } }
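The reviewer's last point — without returning the salt you can hash but never verify — is easiest to see in a runnable sketch (Python's hashlib used for brevity; for real password storage, a key-derivation function such as PBKDF2 would be preferable to a single SHA-256 pass):

```python
import hashlib
import hmac
import os

def salted_hash(plaintext: bytes, salt_len: int = 32):
    """Return (salt, digest); keeping the salt is what makes verification possible."""
    salt = os.urandom(salt_len)
    digest = hashlib.sha256(plaintext + salt).digest()
    return salt, digest

def verify(plaintext: bytes, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.sha256(plaintext + salt).digest()
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = salted_hash(b"hunter2")
assert verify(b"hunter2", salt, digest)
assert not verify(b"wrong password", salt, digest)
```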
{ "domain": "codereview.stackexchange", "id": 30600, "tags": "c#, hashcode" }
Django Forms and handling
Question: I'm really new to Django and I would like to get your thoughts and advice in order to improve the first part of my project. So far, I have created a model form named Identity; users can fill in the form and get a preview, with the option to modify it, before submitting the data. As I said, I'm just beginning to use Django and I'm sure my script needs modification; maybe there is an easier way to do what I want. My forms.py file: #-*- coding: utf-8 -*- from django import forms from BirthCertificate.models import * class IdentityForm(forms.ModelForm) : class Meta : model = Identity fields = '__all__' My views.py file looks like: #-*- coding: utf-8 -*- from django.shortcuts import render, render_to_response from django.http import HttpResponseRedirect, HttpResponse from django.template import loader from .models import Identity, Country from .forms import IdentityForm def IdentityAccueil(request) : template = loader.get_template('accueil_Identity.html') return HttpResponse(template.render(request)) def IdentityFormulary(request) : form = IdentityForm(request.POST or None) template_name = 'form_Identity.html' if form.is_valid() : if '_preview' in request.POST : post = form.save(commit=False) template_name = 'preview.html' elif '_save' in request.POST : post = form.save() return HttpResponseRedirect('formulaire_traite') context = { "form" : form, } return render(request, template_name, context) def CompletedFormulary(request) : identity = Identity.objects.all().order_by("-id")[0] context = { "identity" : identity, } return render(request, 'recapitulatif_identity.html',context) def Consultation(request) : identity = Identity.objects.all().order_by("-id")[:10] # The 10 most recently created records identity_France = Identity.objects.filter(country='64').order_by("-id")[:10] # The 10 most recent records where the person lives in France query = request.GET.get('q') if query : toto = Identity.objects.filter(lastname__icontains=query) else : toto = [] context = { "identity" : identity,
"identity_France" : identity_France, "query" : query, "toto" : toto, } return render(request, 'resume.html', context) My accueil_identity.html looks like : <h2 align="center"> Gestion des fiches individuelles </align> </h2> <p> Veuillez cliquer sur l'opération à effectuer : </p> <p> </p> <p>* <a href="http://localhost:8000/Identity/formulaire">Créer une nouvelle fiche individuelle</a></p> <p> * <a href="http://localhost:8000/Identity/recherche">Consulter/Editer une fiche individuelle</a></p> <p> * Supprimer une fiche individuelle </p> My form_identity.html file looks like : <!--DOCTYPE html --> <html> <body> <h1 align="center"> Formulaire de fiche individuelle </h1> <form method='POST' action=''> {% csrf_token %} <h3> Partie contenant les informations de la fiche individuelle </h3> {{ form.as_ul }} {{ value|date:"%d/%m/%Y" }} <br></br> <input type ="submit" name="_save" value="Valider la fiche individuelle" /> <input type ="submit" name="_preview" value="Prévisualiser la fiche individuelle" /> </form> </body> </html> My preview.html file as following : <h2 align="center"> Prévisualisation de la fiche individuelle </align> </h2> <form method='POST' action='/Identity/accueil'> {% csrf_token %} {% block content %} <h3> Récapitulatif des données enregistrées : </h3> <li> Civilité : {{form.title}}</li> <li> Nom : {{form.lastname}}</li> <li> Prénom : {{form.firstname}}</li> <li> Sexe : {{form.sex}}</li> <li> Date de Naissance : {{form.birthday}}</li> <li> Ville de Naissance : {{form.birthcity}}</li> <li> Pays de Naissance : {{form.birthcountry}}</li> <li> Nationalité : {{form.nationality}}</li> <li> Profession : {{form.job}}</li> <li> Adresse : {{form.adress}}</li> <li> Ville : {{form.city}}</li> <li> Code Postal : {{form.zip}}</li> <li> Pays : {{form.country}}</li> <li> Email : {{form.mail}}</li> <li> Téléphone : {{form.phone}}</li> {% endblock %} <br></br> <input type ="submit" name="_save" value="Valider la fiche individuelle" /> </form> And my 
recapitulative_identity.html file : <h2 align="center"> Votre formulaire a été validé </align> </h2> {% block content %} Votre personne porte le numéro : {{ identity.id }} <h3> Récapitulatif des données enregistrées : </h3> <li> Civilité : {{identity.title}}</li> <li> Nom : {{identity.lastname}}</li> <li> Prénom : {{identity.firstname}}</li> <li> Sexe : {{identity.sex}}</li> <li> Date de Naissance : {{identity.birthday}}</li> <li> Ville de Naissance : {{identity.birthcity}}</li> <li> Pays de Naissance : {{identity.birthcountry}}</li> <li> Nationalité : {{identity.nationality}}</li> <li> Profession : {{identity.job}}</li> <li> Adresse : {{identity.adress}}</li> <li> Ville : {{identity.city}}</li> <li> Code Postal : {{identity.zip}}</li> <li> Pays : {{identity.country}}</li> <li> Email : {{identity.mail}}</li> <li> Téléphone : {{identity.phone}}</li> <br></br> {% endblock %} <br></br> <form method='POST' action='/Identity/accueil'> {% csrf_token %} <input type ="submit" value="Retour fiche identité" /> </form> <form method='POST' action='/BirthCertificate/accueil'> {% csrf_token %} <input type ="submit" value="Création d'un acte de naissance" /> </form> My resume.html file : <h2 align="center"> Affichage de toutes les fiches individuelles </align> </h2> <br></br> {% block content %} <h4> Récapitulatif des 10 dernières fiches individuelles créées: </h4> <ul> {% for item in identity %} <li>{{ item }}</li> {% endfor %} </ul> <h4> Récapitulatif des 10 dernières fiches individuelles créées habitant en France: </h4> <ul> {% for item in identity_France %} <li>{{ item }}</li> {% endfor %} </ul> <h4> Recherche par nom </h4> <form method="GET" action=""> <input type="text" name="q" placeholder="Rechercher un nom" value="{{ request.GET.q }}"> <input type="submit" value="Rechercher"> </form> <ul> {% for item in toto %} <li> {{ item }} </li> {% endfor %} </ul> {% endblock %} I need some advice, part of script in order to improve mine, ... 
:) Answer: Naming C’est vraiment bizarre to have some of the mots in French et d’autres in English. I don't talk about the templates, where it is normal to use the language of the end user, but rather your views.py where you mix English and French at will. You also use CamelCase for function names, which is against PEP 8 recommendations. Your function names may be: def identity_home(… def identity_form(… def identity_resume(… def identity_listing(… Simplifications Since you already import render from django.shortcuts, you don't need loader: def identity_home(request): return render(request, 'accueil_Identity.html') # Keeping the template name but you may want to change it. By the way, render_to_response is unused in your code and the documentation does not recommend using it anyway; so you can safely drop it. You can also use the shortcut redirect instead of an explicit HttpResponseRedirect, so you can feed it a view name and other arguments for the URL. More on that later. You also don't need to use the all() queryset if you are going to order_by(...) right after: Identity.objects.all().order_by(..) is equivalent to Identity.objects.order_by(..) Lastly, I would change toto = [] to toto = Identity.objects.none() just so toto always stores a queryset. Oh, and change that variable name, this is not serious: queryset or search_result should do. Finally, country='64' does not mean anything. Especially given the fact that you import Country from your models. I don't know what it contains, but I’ll make a wild guess that country=Country.FRANCE will be better. Race conditions Getting the last entry in the database to retrieve the last form saved is error prone. It may work in your case where you’re the only one testing your system, but as soon as several users can work on this at the same time, it will be possible that, if two of them save a form at roughly the same time, they will both see the summary of the last one. And this is an issue. 
Instead, you should not rely on the order of actions, but on concrete information. Here you can use the post you just created and use its id to uniquely identify it. You will need to modify your URL patterns so that the URL named formulaire_traite takes an id as its last parameter (adding something like /(?P<id>\d+) at the end of the URL should suffice) and use that id to retrieve the Identity to display. All in all, your views.py may look like: from django.shortcuts import render, redirect, get_object_or_404 from .models import Identity, Country from .forms import IdentityForm def identity_home(request): return render(request, 'accueil_Identity.html') def identity_form(request): form = IdentityForm(request.POST or None) template_name = 'form_Identity.html' if form.is_valid() : if '_save' in request.POST : post = form.save() return redirect('formulaire_traite', id=post.id) template_name = 'preview.html' return render(request, template_name, {"form" : form}) def identity_resume(request, id): identity = get_object_or_404(Identity, pk=id) context = {"identity": identity} return render(request, 'recapitulatif_identity.html', context) def identity_listing(request): identitys = Identity.objects.order_by("-id") identity = identitys[:10] # Les 10 dernières fiches créées identity_France = identitys.filter(country='64')[:10] # Les 10 dernières fiches où la personne habite en France (filter before slicing: Django raises an error if you filter a queryset once a slice has been taken) query = request.GET.get('q') if query: search_results = Identity.objects.filter(lastname__icontains=query) else: search_results = Identity.objects.none() context = { "identity": identity, "identity_France": identity_France, "query": query, "search": search_results, } return render(request, 'resume.html', context) Templates The <h3> and <li> tags are common to preview.html and recapitulative_identity.html. You can factor that out. 
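The URL-pattern change described above can be sketched as follows. The path fragment and view name here are assumptions chosen to match the names used in this answer, not code from the original project; the named capturing group itself can be checked with plain `re`:

```python
import re

# Hypothetical urls.py entry (Django 1.x style, names assumed):
#     url(r'^formulaire_traite/(?P<id>\d+)$', views.identity_resume,
#         name='formulaire_traite')
# The named group <id> is what redirect('formulaire_traite', id=post.id) fills in.
pattern = re.compile(r'^formulaire_traite/(?P<id>\d+)$')

match = pattern.match('formulaire_traite/42')
print(match.group('id'))  # -> '42'
```

With such a pattern in place, Django passes the captured `id` as a keyword argument to the view, which is exactly what `identity_resume(request, id)` expects.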
A base template like: {% block pre_certificate %}{% endblock %} <h3> Récapitulatif des données enregistrées : </h3> <li> Civilité : {{certificate.title}}</li> <li> Nom : {{certificate.lastname}}</li> <li> Prénom : {{certificate.firstname}}</li> <li> Sexe : {{certificate.sex}}</li> <li> Date de Naissance : {{certificate.birthday}}</li> <li> Ville de Naissance : {{certificate.birthcity}}</li> <li> Pays de Naissance : {{certificate.birthcountry}}</li> <li> Nationalité : {{certificate.nationality}}</li> <li> Profession : {{certificate.job}}</li> <li> Adresse : {{certificate.adress}}</li> <li> Ville : {{certificate.city}}</li> <li> Code Postal : {{certificate.zip}}</li> <li> Pays : {{certificate.country}}</li> <li> Email : {{certificate.mail}}</li> <li> Téléphone : {{certificate.phone}}</li> {% block post_certificate %}{% endblock %} And use it to define preview.html: {% block pre_certificate %} <h2 align="center"> Prévisualisation de la fiche individuelle </align> </h2> <form method='POST' action='/Identity/accueil'> {% csrf_token %} {% endblock %} {% block post_certificate %} <br></br> <input type ="submit" name="_save" value="Valider la fiche individuelle" /> </form> {% endblock %} and recapitulative_identity.html: {% block pre_certificate %} <h2 align="center"> Votre formulaire a été validé </align> </h2> Votre personne porte le numéro : {{ certificate.id }} {% endblock %} {% block post_certificate %} <br></br> <form method='POST' action='/Identity/accueil'> {% csrf_token %} <input type ="submit" value="Retour fiche identité" /> </form> <form method='POST' action='/BirthCertificate/accueil'> {% csrf_token %} <input type ="submit" value="Création d'un acte de naissance" /> </form> {% endblock %} For that, you will need to change a bit the context used when rendering in identity_form and identity_resume to use 'certificate' instead of respectively 'form' and 'identity'.
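One detail worth making explicit: Django template inheritance only takes effect when each child template declares its parent with an {% extends %} tag. Assuming the base template above were saved as certificate_base.html (a filename chosen here purely for illustration), preview.html and recapitulative_identity.html would each need to begin with:

```django
{% extends "certificate_base.html" %}
```

Without that first line, the {% block %} overrides are never applied and each child template renders only its own content.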
{ "domain": "codereview.stackexchange", "id": 23172, "tags": "python, beginner, comparative-review, django" }
LeetCode 212: Word Search II
Question: I'm posting my code for a LeetCode problem. If you'd like to review, please do so. Thank you for your time! Problem Given a 2D board and a list of words from the dictionary, find all words in the board. Each word must be constructed from letters of sequentially adjacent cell, where "adjacent" cells are those horizontally or vertically neighboring. The same letter cell may not be used more than once in a word. Example: Input: board = [ ['o','a','a','n'], ['e','t','a','e'], ['i','h','k','r'], ['i','f','l','v'] ] words = ["oath","pea","eat","rain"] Output: ["eat","oath"] Note: All inputs are consist of lowercase letters a-z. The values of words are distinct. Code #include <vector> #include <string> #include <set> class Solution { static constexpr unsigned int A_LOWERCASE = 'a'; static constexpr unsigned int ALPHABET_SIZE = 26; private: class TrieNode { public: std::vector<TrieNode *> children; bool is_word; TrieNode() { is_word = false; children = std::vector<TrieNode *>(ALPHABET_SIZE, NULL); } }; class Trie { public: TrieNode *get_parent() { return parent; } Trie(const std::vector<std::string> &words) { parent = new TrieNode(); for (unsigned int index = 0; index < words.size(); index++) { set_word(words[index]); } } void set_word(const std::string &word) { TrieNode *curr = parent; for (unsigned int length = 0; length < word.size(); length++) { unsigned int index = word[length] - A_LOWERCASE; if (curr->children[index] == NULL) { curr->children[index] = new TrieNode(); } curr = curr->children[index]; } curr->is_word = true; } private: TrieNode *parent; }; public: std::vector<std::string> findWords(std::vector<std::vector<char>> &board, std::vector<std::string> &words) { Trie *trie = new Trie(words); TrieNode *parent = trie->get_parent(); std::set<std::string> found_words; for (unsigned int row = 0; row < board.size(); row++) { for (unsigned int col = 0; col < board[0].size(); col++) { depth_first_search(board, row, col, parent, "", found_words); } } 
std::vector<std::string> structured_found_words; for (const auto found_word : found_words) { structured_found_words.push_back(found_word); } return structured_found_words; } private: static void depth_first_search(std::vector<std::vector<char>> &board, const unsigned int row, const unsigned int col, const TrieNode *parent, std::string word, std::set<std::string> &structured_found_words) { if (row < 0 || row >= board.size() || col < 0 || col >= board[0].size() || board[row][col] == ' ') { return; } if (parent->children[board[row][col] - A_LOWERCASE] != NULL) { word = word + board[row][col]; parent = parent->children[board[row][col] - A_LOWERCASE]; if (parent->is_word) { structured_found_words.insert(word); } char alphabet = board[row][col]; board[row][col] = ' '; depth_first_search(board, row + 1, col, parent, word, structured_found_words); depth_first_search(board, row - 1, col, parent, word, structured_found_words); depth_first_search(board, row, col + 1, parent, word, structured_found_words); depth_first_search(board, row, col - 1, parent, word, structured_found_words); board[row][col] = alphabet; } }; }; Reference LeetCode has a template for answering questions. There is usually a class named Solution with one or more public functions which we are not allowed to rename. For this question, the template is: class Solution { public: vector<string> findWords(vector<vector<char>>& board, vector<string>& words) { } }; Problem Solution Discuss Trie Answer: Avoid defining trivial constants While it is good practice to give some constants that are used throughout the code a nice descriptive name, A_LOWERCASE is a case of a trivial constant where it is actually detrimental to define it. The reason is that A_LOWERCASE describes exactly its value, instead of what the value stands for. If you would call it FIRST_LETTER_OF_THE_ALPHABET, it would be a better name (although obviously a bit too long for comfort). But, everyone already knows that 'a' is the first letter. 
It's as trivial as the numbers 0 and 1, and you wouldn't write: static constexpr int ZERO = 0; static constexpr int ONE = 1; for (int i = ZERO; i < ...; i += ONE) { ... } Another issue with such constants is that if you ever write constexpr unsigned int A_LOWERCASE = 'b', the compiler won't complain, the code won't work as expected, and someone reading the code will probably not think that A_LOWERCASE might be anything other than 'a' and thus have a hard time finding the issue. Move TrieNode inside Trie Since TrieNode is an implementation detail of Trie, it should be moved inside it: class Trie { public: class Node { ... }; Node *get_root() { return root; } ... private: Node *root; }; Also note that the parent of a tree is not a node. The correct term to use here is "root". Use std::array to store trie node children Instead of a fixed size vector, you should use std::array. It avoids a level of indirection: class Trie { class Node { std::array<Node *, 26> children; ... Prefer using default member initializers This avoids having to write a constructor in some cases, and is especially useful to avoid repeating yourself when a class has multiple constructors. In this case you can write: class Trie { class Node { std::array<Node *, 26> children{}; bool is_word{false}; }; ... }; Avoid new when you can just declare a value In the constructor of Trie you always allocate a new Trie::Node. This is unnecessary, you can just write: class Trie { public: class Node {...}; Node *get_root() { return &root; } ... private: Node root; }; Similarly, in findWords(), you can just write: std::vector<std::string> findWords(std::vector<std::vector<char>> &board, std::vector<std::string> &words) { Trie trie(words); ... } Use std::unique_ptr to manage memory Avoid calling new and delete manually. It is easy to make mistakes and cause a memory leak. I don't see any call to delete in your code! A std::unique_ptr will manage memory for you and delete it automatically once it goes out of scope. 
Here is how to use it: class Trie { class Node { std::array<std::unique_ptr<Node>, 26> children; ... }; ... void set_word(const std::string &word) { Node *curr = get_root(); for (...) { ... if (!curr->children[index]) { curr->children[index] = std::make_unique<Node>(); } curr = curr->children[index].get(); } curr->is_word = true; } ... }; Prefer using range-for When iterating over containers (this includes iterating over the characters in a std::string), and you don't need to manipulate the iterator itself but just want to see the values, use a range-for loop. For example in the constructor of Trie: Trie(const std::vector<std::string> &words) { for (auto &word: words) { set_word(word); } } And in set_word(): for (auto letter: word) { unsigned int index = letter - 'a'; ... } Simplify return value of findWords() Instead of creating a temporary std::vector and copying elements manually from found_words, you can make use of the fact that std::vector has a constructor that can copy elements from another container for you, and use brace-initialization in the return statement: std::vector<std::string> findWords(std::vector<std::vector<char>> &board, std::vector<std::string> &words) { std::set<std::string> found_words; ... return {found_words.begin(), found_words.end()}; }
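For comparison, and leaving the C++ idioms aside, the same trie-plus-backtracking idea fits in a short Python sketch. This is only an illustration of the algorithm (using a dict-based trie), not part of the LeetCode C++ template:

```python
def find_words(board, words):
    """Trie-guided DFS over the board, mirroring the approach reviewed above."""
    # Build the trie: each node is a dict of children; '$' marks a word end.
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True

    rows, cols = len(board), len(board[0])
    found = set()

    def dfs(r, c, node, path):
        if r < 0 or r >= rows or c < 0 or c >= cols:
            return
        ch = board[r][c]
        if ch not in node:          # also rejects visited cells (marked ' ')
            return
        nxt = node[ch]
        path += ch
        if '$' in nxt:
            found.add(path)
        board[r][c] = ' '           # mark visited
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            dfs(r + dr, c + dc, nxt, path)
        board[r][c] = ch            # restore on backtrack

    for r in range(rows):
        for c in range(cols):
            dfs(r, c, root, '')
    return sorted(found)

board = [list("oaan"), list("etae"), list("ihkr"), list("iflv")]
print(find_words(board, ["oath", "pea", "eat", "rain"]))  # -> ['eat', 'oath']
```

The temporary `' '` marker plays the same role as overwriting `board[row][col]` in the C++ version, and restoring the cell on backtrack keeps the board reusable.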
{ "domain": "codereview.stackexchange", "id": 39305, "tags": "c++, beginner, algorithm, programming-challenge, c++17" }
how to create a subscriber node that also publishes message?
Question: Hi everyone, I'm wondering if it's possible to create a node that serves both as a subscriber and a publisher at the same time? It will receive some messages and publish some messages to another topic. #include "ros/ros.h" #include "std_msgs/String.h" #include <sstream> #include <nav_msgs/OccupancyGrid.h> void mapConvert(const nav_msgs::OccupancyGrid::ConstPtr& msg) { ROS_INFO("I heard:1"); pub.publish(msg->info.width);//or just simply pub.publish(1); } int main(int argc, char **argv) { ros::init(argc, argv, "map_converter"); ros::NodeHandle n; ros::Subscriber sub = n.subscribe("map", 1000, mapConvert); ros::Publisher pub = n.advertise<int>("mapconverted", 1000); ros::Rate loop_rate(10); int count = 0; while (ros::ok()) { ros::spinOnce(); loop_rate.sleep(); } ros::spin(); return 0; } I have tried this structure but it did not work. Please help! Thank you in advance! Originally posted by edmond320 on ROS Answers with karma: 61 on 2016-02-24 Post score: 1 Answer: Looks like you need to implement a service. But if you really need to use a publisher, you should make the publisher accessible from the callback by making it a global pointer. Note also that advertise<T> needs a ROS message type such as std_msgs::UInt32, not a plain int: #include "ros/ros.h" #include "std_msgs/UInt32.h" #include <sstream> #include <nav_msgs/OccupancyGrid.h> ros::Publisher * pub; void mapConvert(const nav_msgs::OccupancyGrid::ConstPtr& msg) { ROS_INFO("I heard:1"); std_msgs::UInt32 width; width.data = msg->info.width; pub->publish(width); // publish the map width as a std_msgs/UInt32 message } int main(int argc, char **argv) { ros::init(argc, argv, "map_converter"); ros::NodeHandle n; ros::Subscriber sub = n.subscribe("map", 1000, mapConvert); pub = new ros::Publisher(n.advertise<std_msgs::UInt32>("mapconverted", 1000)); ros::Rate loop_rate(10); while (ros::ok()) { ros::spinOnce(); loop_rate.sleep(); } delete pub; return 0; } Originally posted by yasagitov with karma: 223 on 2016-02-25 This answer was ACCEPTED on the original site Post score: 0
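The underlying problem is one of scope: the callback needs a handle to the publisher object. Besides the global pointer shown above, binding the publisher into the callback (via a closure or a class) also works. Here is a language-neutral sketch of that idea in plain Python, using a stand-in publisher object rather than the real ROS API:

```python
class StubPublisher:
    """Stand-in for a real publisher; just records what was published."""
    def __init__(self):
        self.sent = []

    def publish(self, value):
        self.sent.append(value)

def make_map_convert(pub):
    # The callback closes over `pub`, so no global variable is needed.
    def map_convert(width):
        pub.publish(width)
    return map_convert

pub = StubPublisher()
callback = make_map_convert(pub)
callback(42)        # simulate an incoming map message of width 42
print(pub.sent)     # -> [42]
```

In roscpp the same effect is usually achieved with boost::bind/std::bind or a class-method callback, which avoids both the global pointer and the manual new/delete.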
{ "domain": "robotics.stackexchange", "id": 23899, "tags": "ros, node, publisher" }
Gell-Mann–Oakes–Renner relation for heavier pseudoscalar mesons?
Question: The Gell-Mann–Oakes–Renner relation between the pion mass and light-quark masses is the following, $$m_{\pi}^2=-\frac{2}{f_{\pi}^2}(m_u+m_d)\langle\bar \psi \psi\rangle,$$ where $f_\pi^2$ is the pion decay constant and $\langle \bar \psi \psi \rangle$ the chiral condensate. My question is: what's the corresponding formula for each of the other pseudoscalar mesons (Kaons and η mesons), if we assume a non-zero strange quark mass $m_s$ as well? I'm ignoring electroweak interactions which mix the eta mesons. I also know that the chiral anomaly affects the formula for the $\eta'$ meson (or $\eta_1$, since we're ignoring electroweak mixing). Answer: It's a long story, but you could do worse than review Cheng & Li's classic text, Gauge Theory of Elementary Particle Physics, (5.245–248). In their conventions, $$m_{\pi}^2 f_{\pi}^2 = \frac{m_u+m_d}{2}\langle\bar u u+\bar d d \rangle, \\ m_{K}^2 f_{K}^2 = \frac{m_u+m_s}{2}\langle\bar u u+\bar s s \rangle, \\ m_{ \eta}^2 f_{\eta}^2 = \frac{m_u+m_d}{6}\langle\bar u u+\bar d d \rangle +\frac{4m_s}{3}\langle\bar s s \rangle . $$ They are gotten from applications of Dashen's theorem, (GOR); and for perfect $SU(3)$ flavor symmetry of the QCD vacuum condensate, $$ \langle\bar u u \rangle= \langle\bar d d \rangle= \langle\bar s s \rangle , \\ f_{\pi}=f_{K}=f_{\eta}, $$ (and $m_u\sim m_d$), you get $$ 4m_K^2= 3m_\eta^2 + m_\pi^2, \\ \frac{m_u+m_d}{2m_s}= \frac{m_\pi^2}{2m_K^2- m_\pi^2 }\approx 1/25. $$ If you want detail, Scherer's review, (4.46–7), will provide more than you'd wish for. Not to mention S Weinberg's (1996) The Quantum Theory of Fields (v2.) (19.7.16).
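The quoted mass ratio can be checked numerically from the relation $\frac{m_u+m_d}{2m_s}= \frac{m_\pi^2}{2m_K^2- m_\pi^2}$. The meson masses below are approximate isospin-averaged PDG values in MeV, chosen here just for illustration:

```python
# (m_u + m_d) / (2 m_s) ≈ m_pi^2 / (2 m_K^2 - m_pi^2), from the relations above.
m_pi = 138.0  # isospin-averaged pion mass, MeV (approximate)
m_K = 496.0   # isospin-averaged kaon mass, MeV (approximate)

ratio = m_pi**2 / (2 * m_K**2 - m_pi**2)
print(f"(m_u + m_d)/(2 m_s) ≈ {ratio:.4f} ≈ 1/{1 / ratio:.0f}")
```

which reproduces the quoted value of roughly 1/25.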
{ "domain": "physics.stackexchange", "id": 81903, "tags": "particle-physics, nuclear-physics, quarks, mesons, pions" }
Binary star system - Revolution around Primary vs Companion
Question: How likely is it in a binary or multi-star system for a non-star celestial body to revolve around the primary star rather than the companion star? Answer: This is not my field, but the question is interesting so I'll give you my best answer. The sphere of influence of an isolated astronomical body (in this case the binary stars, treated as a unit) is not well-defined; therefore, some of the forthcoming argument requires more information about the context in which you are studying the binary system (like the mass and relative location of the galactic center blackhole). As a first-order approximation, let's assume that the orbiting matter will be co-planar with the stars, then matter orbiting the primary star will lie within its Roche Lobe (and likewise for the companion). All other material (even that which lies within the radius from the center of mass to $L_2$) will be considered to be circumbinary. Therefore, assuming a uniform probability distribution, the likelihood of a particular object orbiting the primary star is given by $$ P=\frac{A_1}{A_{SoI}} $$ And the relative probability that it will orbit the primary star as opposed to the companion is $$ P'=\frac{A_1}{A_1+A_2} $$ where $A_1, A_2, \text{and}\ A_{SoI}$ are the planar areas of the primary star's Roche lobe, the companion star's Roche lobe, and the system's sphere of influence, respectively. However, as previously noted, calculating these areas requires more information about the system (mass ratio, eccentricity, separation distance, etc.). Of course, this is all quite simplified and could probably be improved by determining the actual probability distribution of circumstellar matter and taking into account out-of-plane bound orbits. That being said, the actual probability distribution would be difficult to calculate since I am not aware of a well-established model for planetary mass distribution (relevant: Frost Line). 
Additionally, most significantly out-of-plane orbits near the stars would be highly unstable, so that adjustment might not have much of an effect. Another way of answering this question would be to address how often we actually observe S-type orbits (around one star) versus P-type orbits (around both stars) in binary systems. According to the following articles, it appears that circumbinary planets have certainly been observed, but as far as I can tell, there doesn't appear to be particularly strong observational evidence to determine if any of these exoplanets are either circumprimary or circumsecondary. However, some numerical models do bear out the possibility of S-type orbits by establishing their stability. Existence: http://arxiv.org/abs/1210.3055 http://arxiv.org/abs/1210.3612 http://arxiv.org/abs/1010.4048 http://www.mpia.de/homes/henning/Publications/daemgen.pdf Stability: http://adsabs.harvard.edu/full/2002ESASP.518..547P http://montgomerycollege.edu/Departments/planet/planet/Planetary%20Definition/doubleStars.pdf
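The area ratio $P'=A_1/(A_1+A_2)$ above can be estimated with Eggleton's (1983) fitting formula for the effective Roche-lobe radius, treating each lobe's planar area as proportional to $r_L^2$. This is a rough sketch under that simplification, not a substitute for computing the true lobe areas:

```python
import math

def eggleton_roche_radius(q):
    """Eggleton (1983) approximation for r_L / a, the Roche-lobe radius
    in units of the orbital separation, where q = M_star / M_other."""
    return 0.49 * q**(2 / 3) / (0.6 * q**(2 / 3) + math.log(1 + q**(1 / 3)))

def p_primary(m1, m2):
    """Relative probability P' = A_1 / (A_1 + A_2), taking each lobe's
    planar area to scale as r_L**2 (a crude stand-in for its true area)."""
    r1 = eggleton_roche_radius(m1 / m2)
    r2 = eggleton_roche_radius(m2 / m1)
    return r1**2 / (r1**2 + r2**2)

print(p_primary(1.0, 1.0))   # equal masses -> 0.5
print(p_primary(2.0, 1.0))   # heavier primary -> greater than 0.5
```

As expected, equal masses give even odds, and a more massive primary (larger Roche lobe) captures a larger share of the bound material.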
{ "domain": "physics.stackexchange", "id": 10862, "tags": "gravity, stars, orbital-motion, three-body-problem" }
Are electrons and positrons part of a family of 4 (8? 16?) similar particles?
Question: EDIT: Completely rewritten because of the 'needs clarity' tag and some useful related questions appearing in the side-bar. I hope this is clear now. This answer gives a long list of properties of particles whose value differs by a minus sign when comparing a particle to its antiparticle. We know that anti-particles exist, so apparently for every particle there is a particle where the values of all the properties in this list are 'flipped': i.e. the same magnitude but of opposite sign. My question is: given a particle, say an electron, does there exist a different particle where some of the properties in the list in the linked answer are flipped and some are not? If the answer is no, why is this not possible? If the answer is yes, what is an example of such a pair of particles? Answer: All charge-like quantum numbers flip upon a change of a particle to an anti-particle. In other words, charge-like quantum numbers are correlated. It is not the case that one has a distinct particle with Lepton number +1 and another one with Lepton number -1, each with its own distinct anti-particle. The particle with Lepton number -1 is the anti-particle of the particle with Lepton number +1. The same holds for particles with non-zero Baryon number and so on. EDIT Actually, there also exist particles which are their own anti-particle. The most prominent one is the photon. In the standard model there exists only one photon, with no partners or anti-partners. In a wider sense, triplets also exist, for instance $\pi^+$, $\pi^0$ and $\pi^{-}$. They form a triplet in the vector representation of SU(2). But in a strict sense the $\pi^0$ has nothing to do with $\pi^+$ and $\pi^{-}$, which are anti-particles of each other, because the $\pi^0$ does not have the same mass as a $\pi^{\pm}$. From the pure viewpoint of particle-antiparticle symmetry, the $\pi^0$ is not a partner of the $\pi^{\pm}$. The $\pi^0$ is its own anti-particle, like the photon. In particular there is nothing to flip. 
After all, the number one might want to flip is zero. Needless to say, this applies to the Standard Model, i.e. the currently valid description of elementary particles, which has been successfully checked in many experiments. Hypothetical theories are not considered here.
{ "domain": "physics.stackexchange", "id": 98547, "tags": "particle-physics, definition, antimatter" }
Quadcopter - is iPhone the ultimate flight controller?
Question: iPhone contains Gyroscope GPS Two photo and video cameras Self-sufficient battery that outlives the motor battery Wifi Backup connectivity (cellular, bluetooth) Programmable computer Real-time image processing capabilities and face detection General purpose IO (with something like this) and old models are available very cheap. What is the main benefit of having a separate dedicated flight controller and camera on hobbyist rotorcraft rather than a general purpose device like the iPhone? Answer: Speaking from experience, smartphones (android in my case) do work as flight controllers but they have significant drawbacks The operating system gets in the way. You don't have root access and can't kill off unnecessary processes. The cameras are decent, but somewhat narrow field-of-view. Dual cameras are not setup to enable stereo vision and even trying to use both at the same time overheats the phone quickly. Camera placement is fixed relative to the body (problem if you want to look forward rather than down) IMU sensors are really cheap which leads to noisy signals Everything is thermally connected and not well cooled, so sensors exhibit significant drift. You still need extra processors. Most motors talk over PWM or i2c so you need a board to receive commands from the phone and translate. WIFI range is smaller than most RC transmitter/receiver pairs. Like I said, you can make it work but it is not optimal.
{ "domain": "robotics.stackexchange", "id": 702, "tags": "quadcopter" }
What is the importance of the endocannabinoid system for cognitive function?
Question: The endocannabinoid system is a very important function of human biology. Unfortunately, due to the illegality of cannabis, it is a relatively new field of study. I have read a few articles about Google researching the role of dopamine in learning, and according to this article, anandamide (the neurotransmitter that closely resembles tetrahydrocannabinol): was found to do a lot more than produce a state of heightened happiness. It’s synthesized in areas of the brain that are important in memory, motivation, higher thought processes, and movement control. Have any neuroscientists (or any scientists) considered the importance of the endocannabinoid system for cognitive function? If not, is there any reason this information might or might not be relevant to artificial intelligence? Answer: The release of Adenosine, Dopamine, Endorphin, Endocannabinoids, GABA, Glutamate, Norepinephrine, Oxytocin, Serotonin, and many others into specific regions of the brain are very likely an essential part of both activation tuning of single neurons and neuroplasticity, two essential aspects of organic learning researchers have been and will continue to work to understand. Most of those I've met in that sector of research are curious about the larger questions of what intelligence and consciousness are, and all of them appear to be interested in discovering how learning systems may be valuable in software engineering contexts. These overarching questions are difficult to answer and the dive into the detail of learning has resulted in the expansion of ideas presented by Dr. Norbert Wiener in the mid 20th Century at MIT. How chemical feedback in regions of the brain are secreted, how they disseminate geometrically into organic structures, how they interact with receptors, and what that does to the cell metabolism to produce change in the cell is almost definitely part of the DNA driven design of higher animal learning. 
There does not appear to be anything pointless or arbitrary about it. Adaptation to improve survival is evident and, yes, study is well underway. Narcotics that interfere with the natural functioning of these organic signaling systems can lead to the inability to adapt in the individuals addicted to them. That fact is a strong form of evidence that learning depends on these systems. Oxytocin is another neurotransmitter of interest because its release is associated with what would fit into the higher levels of human thought and motivation on Abraham Maslow's famous hierarchy of needs. Oxytocin seems to be part of reward signaling for modes of thought like authenticity, compassion, intimacy, wisdom, spirituality, and other human mental capacities and patterns of thought that transcend mere rationalism. Why is that important? Because the ability to lay down selfish goals for the good of the community seem to depend largely on the oxytocin system's preemptive ability over mere survival mode neural activity in mammals. Regarding the cannabinoid receptors, there is a large enough body of media that show a correlation between a pool of successful artists and marijuana use to legitimately wonder whether there is any tie between creativity and the endocannabinoid system. However in science (and hopefully in technology too), we are careful not to draw conclusions rashly. It is also possible that this apparent correlation is simply a social phenomenon where the popularity of the artistic products or live performances is mainly because of potential audiences similarly stimulating their receptors with cannabinoids. For instance, those engaging in LSD trips on stage attracted those also engaging in LSD trips into their audiences two generations ago. Whether the work of those artists was more creative because of the impact on the serotonin receptors is largely subjective. How can researchers come to conclusions about what is good performance? 
One could analyze the audio to produce a table of notes in a performance and then develop a system to attach a numerical value to several positive quality metrics, study the price of tickets, or count downloads, but the decision of what to measure and how to aggregate it into a final judgment of excellence is itself necessarily subjective. Nonetheless, if we place the many misconceptions of popular psychology and the drug culture aside, there is much research into the endocannabinoid system as part of learning signaling, what machine learning researchers are currently calling reinforcement. The more general term is "a non-linear control system's feedback signal," already developed in detail in the 1940s (Behavior, Purpose and Teleology — A Rosenblueth, N Wiener, J Bigelow — Philosophy of Science, 1943 — U Chicago). Some new trendy name will probably appear in the 2020s. This article claims that the mammals under test "exhibited enhanced learning": Memory in Monoacylglycerol Lipase Knock-out Mice, by Bin Pan, Wei Wang, Peng Zhong, Jacqueline L. Blankman, Benjamin F. Cravatt and Qing-song Liu; Journal of Neuroscience 21 September 2011, 31 (38) 13420-13430; DOI: https://doi.org/10.1523/JNEUROSCI.2075-11.2011 To find many others: https://scholar.google.com/scholar?q=endocannabinoid+learning
{ "domain": "ai.stackexchange", "id": 575, "tags": "neural-networks, emotional-intelligence, biology, cognitive-science, brain" }
Migration from Fuerte to Hydro
Question: I am working on some code that I haven't made. The thing is that I have to work in Hydro and this code was made for Fuerte. I tried to catkinize the packages by changing the manifest.xml and CMakeLists.txt. I don't know if anything else needs to change, but when I try to make the package I get an error. CMakeFiles/LLtoUTMsNode.dir/src/LLtoUTMsNode.cpp.o: In function `ros::serialization::Stream::advance(unsigned int)': /home/summitxl/ros_catkin_ws/install_isolated/include/ros/serialization.h:721: undefined reference to `ros::serialization::throwStreamOverrun()' /home/summitxl/ros_catkin_ws/install_isolated/include/ros/serialization.h:721: undefined reference to `ros::serialization::throwStreamOverrun()' /home/summitxl/ros_catkin_ws/install_isolated/include/ros/serialization.h:721: undefined reference to `ros::serialization::throwStreamOverrun()' /home/summitxl/ros_catkin_ws/install_isolated/include/ros/serialization.h:721: undefined reference to `ros::serialization::throwStreamOverrun()' /home/summitxl/ros_catkin_ws/install_isolated/include/ros/serialization.h:721: undefined reference to `ros::serialization::throwStreamOverrun()' CMakeFiles/LLtoUTMsNode.dir/src/LLtoUTMsNode.cpp.o:/home/summitxl/ros_catkin_ws/install_isolated/include/ros/serialization.h:721: more undefined references to `ros::serialization::throwStreamOverrun()' follow collect2: error: ld returned 1 exit status make[2]: *** [/home/summitxl/catkin_ws/devel/lib/test_GPS/LLtoUTMsNode] Error 1 make[1]: *** [Ensayos/test_GPS/CMakeFiles/LLtoUTMsNode.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... Originally posted by arenillas on ROS Answers with karma: 223 on 2014-05-08 Post score: 1 Answer: You know that you don't have to catkinize your code to try it with Hydro? Maybe it is better to first get the packages building using rosbuild (adapting them to potential API changes of dependencies) and after that consider catkinizing them.
Originally posted by Dirk Thomas with karma: 16276 on 2014-05-08 This answer was ACCEPTED on the original site Post score: 4
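A hedged note on the linker error itself: undefined references to `ros::serialization::throwStreamOverrun()` at link time usually mean the executable is not linked against the catkin/roscpp libraries. A minimal sketch of the relevant catkin CMakeLists.txt lines follows; the target and source names are taken from the error output above, everything else is an assumption, not the asker's actual file:

```cmake
# Sketch only: find the catkin components and, crucially, link the
# executable against ${catkin_LIBRARIES} (this is what pulls in the
# roscpp serialization library that defines throwStreamOverrun).
find_package(catkin REQUIRED COMPONENTS roscpp)
include_directories(${catkin_INCLUDE_DIRS})

add_executable(LLtoUTMsNode src/LLtoUTMsNode.cpp)
target_link_libraries(LLtoUTMsNode ${catkin_LIBRARIES})
```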
{ "domain": "robotics.stackexchange", "id": 17879, "tags": "ros, ros-fuerte, ros-hydro, serialization" }
Is it possible to have a spacetime described by a piecewise metric?
Question: For example (in $1+1D$): $$ ds^2 = \begin{cases} -dt^2 + dx^2 & \text{if } x > 0\\ -dt^2 + Adx^2 & \text{if } x \leq 0 \text{ and } A \neq 1 \end{cases} $$ What criteria/junction conditions are necessary for the above metric to be a valid description of a spacetime? What does the inverse metric look like? Answer: You're definitely free to consider whatever you like. Here, you considered a discontinuous $(0,2)$ tensor field on the manifold $\Bbb{R}^2$, so there's probably nothing much you can say about how this relates to physics. Now, you could instead consider a smooth metric tensor $g=-dt^2+\zeta\,dx^2$, where $\zeta:\Bbb{R}\to\Bbb{R}$ is a smooth function such that $\zeta=1$ for $x>0$ and say $\zeta$ is some other positive constant $A$ for $x<-1$ (so $\zeta$ smoothly changes from a positive constant $A$ to $1$ in the region $\{x\,: -1\leq x\leq 0\}$). You can now define a symmetric tensor field $T_{ab}=\frac{1}{8\pi}(R_{ab}-\frac{1}{2}g_{ab}R)$ and say that for this specific 'stress energy tensor', the metric $g$ is a smooth solution of Einstein's equations. This is nice and smooth but the question still remains as to whether this describes anything physical, and most likely it doesn't. One of the ways to talk about 'good' spacetimes is through the initial value formulation (see Hawking and Ellis or Wald for more info about everything I'm about to say); and a key idea here is global hyperbolicity. Note that Einstein's field equations \begin{align} R_{ab}-\frac{1}{2}g_{ab}R &=8\pi T_{ab} \end{align} as written are pretty hard to interpret, and a-priori, it isn't even clear what a solution means. This is typical for any type of PDE; one has to carefully define what is meant by a solution. One should think of a solution as starting from some 'initial data', and 'evolving' them according to the field equations. In this manner, the theory is dynamical, and that is how we make sense of it. 
For example, in electrodynamics, we solve for the $E$ and $B$ fields, and we have the Maxwell equations \begin{align} \begin{cases} \frac{\partial \boldsymbol{B}}{\partial t} &= -\nabla\times\boldsymbol{E}\\ \frac{\partial \boldsymbol{E}}{\partial t} &= \frac{1}{\epsilon_0\mu_0}\nabla\times\boldsymbol{B}- \frac{1}{\epsilon_0}\boldsymbol{J}\\ \nabla\cdot \boldsymbol{E}&= \frac{\rho}{\epsilon_0}\\ \nabla\cdot \boldsymbol{B}&= 0 \end{cases} \end{align} I re-organized the equations to highlight that we have two equations which talk about time derivatives (i.e evolution), and two equations which are 'constraints'. Note that solutions to PDEs must talk about initial/boundary conditions. In other words, in $\Bbb{R}^4$ with coordinates $(t,x,y,z)$, we may consider the initial time hypersurface $\Sigma_0=\{(t,x,y,z)\,:\, t=0\}$, and on this initial time surface, we imagine that we have some initial electric and magnetic fields $\boldsymbol{E}_0,\boldsymbol{B}_0$, and initial charge and current densities $\rho_0,\boldsymbol{J}_0$ (all defined on $\Sigma_0$), such that the latter two constraints are satisfied on $\Sigma_0$. Depending on context, we may also have to provide boundary conditions. The idea is then that we start with this 'initial data', and then 'let time flow', and solve the equations in time to see how the fields evolve. That is what we mean by solutions to Maxwell's equations (granted there's a lot more to be said and I'm glossing over a ton of details, regarding the potential formulation, gauge freedom etc, but that's the gist). For Einstein's equations, we should think in similar terms. We should think of starting with some initial data, and then consider globally hyperbolic maximal development of the initial data. 
For Einstein's equations, the initial data consists of $(\Sigma,\gamma_{ab}, \kappa_{ab})$, where $\Sigma$ is a $3$-manifold, $\gamma_{ab}$ is a Riemannian metric on $\Sigma$, and $\kappa_{ab}$ is a certain symmetric tensor field (it will turn out to be the second fundamental form/ extrinsic curvature later); and of course any other 'reasonable' matter fields you wish to consider (and appropriate constraint equations have to be satisfied, just as in the case of electrodynamics). Of course one has to specify which function spaces (usually some Sobolev space) these tensor fields belong to, and we have to ensure that these initial data are 'physically reasonable' (whatever that means, must be made precise based on context). We then take this initial data, and 'evolve' it according to Einstein's equations. The result is a globally hyperbolic spacetime $(M,g)$, which satisfies Einstein's field equations, and it is usually only spacetimes which arise in this fashion to which we ascribe any significant physical meaning. For example, Minkowski spacetime, Schwarzschild, Kerr etc can all be thought of as arising in this manner. Obligatory Remark: I've barely begun to even scratch the surface here, you should read Hawking and Ellis/Wald for more detailed information (and there's a vast literature on this stuff).
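Returning briefly to the second part of the question (what the inverse metric looks like): for a diagonal metric of the form $g=-dt^2+\zeta\,dx^2$, away from any discontinuity, the inverse is obtained componentwise.

```latex
% Sketch of the inverse metric for the diagonal case g = -dt^2 + \zeta dx^2
% (valid wherever g is nondegenerate, i.e. away from the jump at x = 0):
\[
g_{ab} = \begin{pmatrix} -1 & 0 \\ 0 & \zeta \end{pmatrix},
\qquad
g^{ab} = \begin{pmatrix} -1 & 0 \\ 0 & 1/\zeta \end{pmatrix},
\qquad
g^{ac} g_{cb} = \delta^{a}{}_{b}.
\]
% In the original piecewise example the pointwise inverse simply jumps
% from diag(-1, 1) for x > 0 to diag(-1, 1/A) for x < 0; at x = 0 itself
% the discontinuity means there is no well-defined smooth inverse.
```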
{ "domain": "physics.stackexchange", "id": 90112, "tags": "general-relativity, spacetime, metric-tensor" }
Can the sun in fact explode?
Question: Fusion inside the sun pushes it apart, so why doesn't the sun explode? The pressure inside should make it explode, right? Answer: Because gravity holds the sun together. The core of the sun is in a stable balance between gravity pulling it together and the pressure generated by the heat, which pushes out. The balance is maintained. If gravity were to start to "win" and cause the core to contract, the core would be compressed, which would increase the rate of fusion, heating the core and raising the pressure, which would cause the core to expand. If you work out the maths, it turns out that this is nicely stable: you don't get pulsations of heating and cooling.
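The stabilizing argument in the answer can be caricatured with a toy scaling model (an illustration, not a stellar-structure calculation): for an adiabatically compressed ideal-gas sphere, pressure support grows faster under compression (roughly like $r^{-5}$) than the gravitational term (roughly $r^{-4}$), so the balance point is a stable equilibrium.

```python
# Toy model (assumed scalings, not a real stellar model): pressure support
# falls off faster with radius (~r^-5) than gravity (~r^-4), so the
# equilibrium where they balance is restoring in both directions.
def net_outward_force(r, a=1.0, b=1.0):
    """Positive = net push outward. Equilibrium at r = a/b."""
    return a / r**5 - b / r**4

r_eq = 1.0  # equilibrium radius for a = b = 1
print(net_outward_force(0.9) > 0)  # compressed core is pushed back out: True
print(net_outward_force(1.1) < 0)  # over-expanded core is pulled back in: True
```

Perturbing the radius in either direction produces a force back toward equilibrium, which is the "nicely stable" behavior the answer describes.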
{ "domain": "astronomy.stackexchange", "id": 6052, "tags": "the-sun" }
Find state space model from transfer function
Question: Let's suppose we have: G(s) = (s+1)/(s^2-2s+1) how can we find the state space representation of the transfer function: x1_dot = x2 x2_dot = 2*x2-x1+u where u is an arbitrary input. I am very new to this topic, so a detailed answer would be great! :) Answer: So, assuming that the transfer function is between the output $Y(s)$ and the input $U(s)$, namely $$ \dfrac{Y(s)}{U(s)} = G(s)\qquad\qquad(1) $$ multiplying (1) by $U(s)\cdot(s^2-2s+1)$ yields $$ s^2 Y(s) -2s Y(s) + Y(s) = sU(s) + U(s) $$ Going back in the time domain we obtain $$ \ddot y -2\dot y + y = \dot u + u\qquad (2) $$ Now, we look for a realization of the kind \begin{align*} \dot x &= Ax+Bu\qquad (3)\\ y &=Cx \end{align*} with $x=(x_1,x_2)$ and \begin{align*} A &= \begin{pmatrix}0 & 1\\a_1 & a_2\end{pmatrix} & B&=\begin{pmatrix}0\\1\end{pmatrix}, & C&=\begin{pmatrix}c_1&c_2\end{pmatrix}\end{align*} The next step is to find the values of $(a_1,a_2,c_1,c_2)$ for which (3) has the same input-output behaviour as (2). From (3) we have \begin{align*} y&=c_1x_1+c_2x_2\\ \dot y&= c_1\dot x_1 + c_2\dot x_2 = c_1x_2 + c_2a_1x_1 + c_2a_2 x_2 + c_2u\\ \ddot y&= c_1\ddot x_1 + c_2\ddot x_2 = c_1a_1x_1+c_1a_2x_2 +c_1u+ c_2a_1x_2+ c_2a_2a_1x_1+c_2a_2^2 x_2 + c_2a_2u + c_2\dot u \end{align*} substituting into (2) yields \begin{align*} 0&=(c_1+c_1a_1+c_2a_2a_1-2c_2a_1)x_1 + (c_2+c_1a_2+c_2a_1+c_2a_2^2-2c_1-2c_2a_2)x_2\\ &+(c_1+a_2c_2-2c_2-1)u + (c_2-1)\dot u \end{align*} since that equality must hold for all $(x_1,x_2,u,\dot u)$ that's equivalent to asking \begin{align*} c_1+c_1a_1+c_2a_2a_1-2c_2a_1&=0\\c_2+c_1a_2+c_2a_1+c_2a_2^2-2c_1-2c_2a_2&=0\\ c_1+a_2c_2-2c_2-1&=0\\c_2-1&=0 \end{align*} and a solution is $$ (a_1,a_2,c_1,c_2)=(-1,2,1,1) $$ Therefore \begin{align*} \begin{pmatrix}\dot x_1\\\dot x_2\end{pmatrix} &=\begin{pmatrix}0&1\\-1&2\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix} + \begin{pmatrix}0\\1\end{pmatrix}u\\y&=\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix} \end{align*}
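As a quick numerical cross-check of the realization derived in the answer (a plain-Python sketch, inverting the 2x2 matrix by hand): evaluating $C(sI-A)^{-1}B$ at a few test points should reproduce $G(s)=(s+1)/(s^2-2s+1)$.

```python
# Transfer function of the derived realization A = [[0,1],[-1,2]],
# B = [0,1]^T, C = [1,1], computed as C (sI - A)^{-1} B.
def G_realization(s):
    # sI - A = [[s, -1], [1, s-2]]; its determinant is s^2 - 2s + 1.
    det = s * (s - 2.0) + 1.0
    # (sI - A)^{-1} B for B = [0,1]^T is the second adjugate column / det:
    x1 = 1.0 / det
    x2 = s / det
    return x1 + x2  # y = C x = x1 + x2 = (s + 1) / det

def G_original(s):
    return (s + 1.0) / (s**2 - 2.0 * s + 1.0)

for s in (2.0, 3.5, -1.0, 10.0):  # avoid the double pole at s = 1
    assert abs(G_realization(s) - G_original(s)) < 1e-12
```

The agreement at arbitrary test frequencies confirms the hand-derived $(a_1,a_2,c_1,c_2)$ values.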
{ "domain": "dsp.stackexchange", "id": 4729, "tags": "fourier-transform, transfer-function, laplace-transform" }
Static electricity in PE foam rolling
Question: In a PE (polyethylene) foam factory there have been some incidents with static electricity that have created minor fires. Fire is a concern in such a factory, as PE is highly flammable, and even more as PE foam uses propane or butane gas in its manufacturing process (that's what creates the bubbles in the foam). The issues we've been having are mostly related to the thinnest of the foams we produce, which is 1 mm thick. It's formed by extrusion into a flat fabric of about 1m width and we form rolls of 350 linear meters. The machine's speed is about 78 meters / minute. The foam is rolled over cardboard cores in a torque machine (the shaft is chrome-coated steel). The plant is located in a very dry, cold place (humidity is often around 40-60% and temp ranges between 10-28°C). I've gathered the following observations while analysing this problem: While the roll is below 45cm in diameter, the static electricity is negligible (I don't have an instrument to measure it, I just get my arm near the roll and see if the hair is drawn towards the foam). Once the diameter gets beyond the 45cm diameter the static is noticeable, to the point of firing little sparks (when it reaches the max diameter). The static builds up first near the sides of the roll, even though it can be felt in the center too. All of the incidents result from a spark igniting the roll in any of the sides, never in the center. The foam goes through a series of rolls (puller, thickness measurement) before being formed into a roll. Most of those rolls are metallic (steel) and in direct contact with both the fabric and the chassis of the machine (grounded). I can't notice static in the fabric as it leaves this series of puller rolls. There's static buildup no matter if the foam is in contact with the shaft or not (sometimes it moves sideways, leaving the cardboard core and "touching" the shaft).
I suspect some slipping between each layer of foam while the roll is formed (due to the shaft rotating slightly faster than the foam is fed), but I can't confirm (the mechanism is synchronized by a PLC). My hypothesis is that the static buildup is being caused by friction of the foam with the surrounding air, at a rate that is faster than its ability to release it through the shaft (its only contact with ground). Questions: Shall I discard the idea that the static buildup is being caused by friction between layers of the foam in the roll (as they're the same material, theoretically they're not apart in the triboelectric series)? Other than spraying the area (or even the foam) with any watery solution to increase the conductivity of the air, what would you suggest to keep the sparks from jolting and causing fires? 2A. If a watery solution spray is the best solution there is, what makes a good mixture? 2B. Would a metallic side plate in contact with the shaft and the side of the roll be a good, safe solution? If so, what material would you use? How could I measure the static electricity built up into the roll to ascertain a "safe limit"? Answer: Since asking the question I went to study the subject a little bit more. I want to share my conclusions, hoping that they could be of help to someone else. Static buildup can't be caused by friction (or contact) between layers of the same material. They don't make a triboelectric pair. Most likely, the static is being generated by the foam slipping in the surrounding air. This brings us to the second point: try to humidify the air. That could be accomplished by spraying the area, but more technically by installing humidifiers. 
In addition to installing the side plate you mentioned (a good conductor, such as copper is the obvious pick, as it doesn't require any mechanical properties), it's a good idea to ensure that the chassis of the rolling machine is properly grounded (as well as those structures in which the material slips).
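For the "safe limit" question, one rough way to reason about it is the stored-energy model $E=\tfrac12 CV^2$ compared against the minimum ignition energy (MIE) of the gas-air mixture. The sketch below uses commonly quoted order-of-magnitude figures (an MIE around 0.25 mJ for propane in air, and a human-body-model capacitance of about 150 pF); both numbers are assumptions, not measurements from this plant.

```python
# Rough order-of-magnitude sketch (assumed figures, not measured data):
# energy stored on a charged body/roll modeled as a capacitor.
def spark_energy_joules(capacitance_farads, voltage_volts):
    return 0.5 * capacitance_farads * voltage_volts**2

MIE_PROPANE = 0.25e-3  # J, commonly quoted figure for propane-air (assumption)
C_BODY = 150e-12       # F, typical human-body-model capacitance (assumption)

# Voltage at which the stored energy reaches the ignition threshold:
v_threshold = (2 * MIE_PROPANE / C_BODY) ** 0.5
print(round(v_threshold), "V")  # on the order of 2 kV with these numbers
```

With these assumed values the threshold sits around 1.8 kV, which is well within the range of static that produces visible sparks, consistent with the incidents described; an electrostatic fieldmeter would let you replace the assumptions with measured roll voltages.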
{ "domain": "engineering.stackexchange", "id": 3154, "tags": "electrical, fire, polyethylene" }
Azimuthal Quantum Number
Question: In the radial equation of the hydrogen atom the differential equation is described by [equation image not reproduced]. But why is $l$ taken to be an integer? I know the principal quantum number $n$ corresponds to energy levels, so that's why it's taken as an integer. Why should the azimuthal quantum number be taken as an integer, though? Answer: There are two parts to the answer to your question. (a) When solving a differential equation the boundary conditions must be specified (this is not always emphasised enough in textbooks), and (b) these conditions are determined by the physics of the problem via the postulates of quantum mechanics, including the nature of the wavefunction. Quantum mechanics (QM) has no derivation; we accept its postulates and QM survives only because experiment has so far always confirmed it. As an example of quantisation consider the particle in a box. Here there are all sorts of solutions to the differential equation, i.e. Schroedinger's equation. However, as the walls are infinitely high the wavefunction must have zero amplitude there, and this means that the only solutions that remain and are physically realistic are those for which the quantum number $n$ is quantised. The requirement that the wavefunction be zero at each wall constitutes the boundary conditions. A similar situation occurs for a particle on a ring, where the angular momentum quantum number $l$ is quantised only because we insist, by applying the postulates of QM, that the wavefunction repeats itself exactly every $2\pi$ round the circle. This is in effect the boundary condition. In the H atom the equation is far more complicated but the general conditions imposed on the wavefunction remain the same; the imposition of boundary conditions, as determined by QM, results in the generation of integer quantum numbers so that theory matches experimental data.
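The particle-on-a-ring argument mentioned in the answer can be written out in two lines:

```latex
% Sketch of the single-valuedness argument for integer l.
% The angular wavefunction on a ring is
\[
\psi_l(\varphi) = e^{i l \varphi},
\]
% and requiring the wavefunction to repeat exactly every 2*pi,
% \psi_l(\varphi + 2\pi) = \psi_l(\varphi), gives
\[
e^{i l (\varphi + 2\pi)} = e^{i l \varphi}
\;\Longrightarrow\;
e^{2\pi i l} = 1
\;\Longrightarrow\;
l \in \mathbb{Z}.
\]
```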
{ "domain": "chemistry.stackexchange", "id": 14707, "tags": "quantum-chemistry, hydrogen" }
Iterative Radix-2 FFT in C
Question: I have a small and weak microcontroller. I also don't have access to the complex library. I wrote this iterative version of the FFT to hopefully get better performance than the recursive version, and also just to learn how the FFT works, and brush up on C. I'd like to know if the code itself is understandable, and if there are some performance improvements I can make. #include <stdio.h> #include <math.h> #define FALSE 0 #define TRUE 1 #if !defined(M_PI) # define M_PI 3.14159265358979323846 #endif struct complex { double real; double imaj; }; typedef struct complex complex_t; complex_t complex_init(const double real, const double imaj) { complex_t temp; temp.real = real; temp.imaj = imaj; return temp; } complex_t complex_add(const complex_t a, const complex_t b) { return complex_init(a.real + b.real, a.imaj + b.imaj); } complex_t complex_subtract(const complex_t a, const complex_t b) { return complex_init(a.real - b.real, a.imaj - b.imaj); } complex_t complex_multiply(const complex_t a, const complex_t b) { return complex_init(a.real * b.real - a.imaj * b.imaj, a.real * b.imaj + a.imaj * b.real); } _Bool is_power_of_2(const unsigned int x) { return x != 0 && (x & (x - 1)) == 0; } _Bool fft(complex_t* input, complex_t* output, const unsigned int size) { if (!is_power_of_2(size)) return FALSE; if (size == 1) { output[0] = input[0]; return TRUE; } const unsigned int half_size = size / 2; // Initial loop. Do the input shuffle and first butterfly at the same time. // shuffle is the bit reversed representation of i. If i is 11000, then shuffle is 00011. 
for (unsigned int skip = size, i = 0, shuffle = 0; i < half_size; ++i) { const complex_t even = input[shuffle]; const complex_t odd = input[shuffle + half_size]; output[i * 2] = complex_add(even, odd); output[i * 2 + 1] = complex_subtract(even, odd); if (i == 0 || is_power_of_2(i + 1)) { skip /= 2; shuffle = skip / 2; } else shuffle += skip; } // Do the rest of the butterfly operations for (unsigned int even_to_odd = 2; even_to_odd < size; even_to_odd *= 2) { const double angle = -M_PI / even_to_odd; const complex_t partial_rotation = complex_init(cos(angle), sin(angle)); complex_t current_rotation = complex_init(1, 0); for (unsigned int i = 0, to_even = 0; i < half_size; ++i, ++ to_even) { if (i % even_to_odd == 0) { to_even = 2 * i; current_rotation = complex_init(1, 0); } complex_t* even = output + to_even; complex_t* odd = even + even_to_odd; const complex_t odd_rotated = complex_multiply(*odd, current_rotation); *odd = complex_subtract(*even, odd_rotated); *even = complex_add(*even, odd_rotated); current_rotation = complex_multiply(current_rotation, partial_rotation); } } return TRUE; } Answer: Bit-reversal permutation bug Keeping both a normal counter and the "reversed counter" is the right idea, but this implementation isn't quite right. For example, it might result in a sequence such as 0, 4, 2, 6, 1, 3, 5, 7 while the correct one is 0, 4, 2, 6, 1, 5, 3, 7. A B 000 000 100 100 010 010 110 110 001 001 011 101 <<< 101 011 <<< 111 111 There are various ways to do this without that bug, but some of them rely on instructions that are unlikely to be efficient (or even exist) on microcontrollers, such as leading zero count and shift-by-variable. I don't know what you can make work, and make efficient, since I don't know your microcontroller. Perhaps a recursive implementation of the bit-reversal permutation is better, but maybe not, given the often high cost of recursion on microcontrollers. 
I can only recommend trying some options, I don't know how they will work out in your specific case. Also, see at the bottom for algorithmic changes. Periodic test in the inner loop By that I mean the condition of this if statement: if (i % even_to_odd == 0) { to_even = 2 * i; current_rotation = complex_init(1, 0); } While it is the case that a remainder operation with a power of two on the right hand side can be efficient, that only applies when the compiler knows it's dealing with a power of two. Hypothetically, a compiler could detect this case, but there is not much hope for that happening. GCC 10 compiling for x64 (I realize you're not targeting that, but that's not really the point, it's about the compiler's ability to reason about the set of values even_to_odd could have and how to use that information) makes this for example: mov eax, ecx xor edx, edx div ebx test edx, edx jne .L41 That's not good, and it would be worse on a microcontroller. Since you know that even_to_odd is a power of two, you could write the condition like this: (i & (even_to_odd - 1)) == 0. If GCC 10 cannot do this automatically, it is not likely that the various vendor-specific compilers for some microcontrollers can do it either, as they are usually weaker in that regard. Benchmarking note "Seems to be 20-40% faster on my desktop than the recursive version I wrote" By all means test it on your desktop, but when option B is faster than option A on a desktop, that is no guarantee at all that that will still be true on the microcontroller. Even between different desktops there is no such guarantee, if they have processors with a different micro-architecture, for example AMD Zen vs Intel Skylake. Benchmarking the code on your desktop can easily trick you into making decisions that are bad for the microcontroller. Algorithmic changes The Stockham FFT avoids an explicit bit-reversal permutation, removing the need to implement it efficiently.
On the other hand, it accesses memory in a different way and needs temporary work-space. Maybe it's good for your application, or maybe not, I'm not sure.
{ "domain": "codereview.stackexchange", "id": 41155, "tags": "performance, c" }
Creating a ROS msg and srv - Tutorial
Question: Hi! I have the same problem as mentioned here: Error using rosmsg in "Creating msg and srv" Tutorial In section 2.2 of this tutorial when typing the command $ rosmsg show beginner_tutorials/Num I don't get init64 num but Unable to load msg [beginner_tutorials/Num]: Cannot locate message [Num]: unknown package [beginner_tutorials] on search path ... In the other thread the solution is: I should simply run: $ cd ~/catkin_ws/ then > $ catkin_make But there I get the Error: CMake Error at /home/user/catkin_ws/build/beginner_tutorials/cmake/beginner_tutorials-genmsg.cmake:3 (message): Could not find messages which '/home/user/catkin_ws/src/beginner_tutorials/msg/Num.msg' depends on. Did you forget to specify generate_messages(DEPENDENCIES ...)? Cannot locate message [init64] in package [beginner_tutorials] with paths [['/home/user/catkin_ws/src/beginner_tutorials/msg']] Call Stack (most recent call first): /opt/ros/indigo/share/genmsg/cmake/genmsg-extras.cmake:304 (include) beginner_tutorials/CMakeLists.txt:68 (generate_messages) -- Configuring incomplete, errors occurred! See also "/home/user/catkin_ws/build/CMakeFiles/CMakeOutput.log". See also "/home/user/catkin_ws/build/CMakeFiles/CMakeError.log". make: *** [cmake_check_build_system] Error 1 Invoking "make cmake_check_build_system" failed What do I do wrong. I changed the CMakeList.txt exactly as said in the tutorials and checked it a lot of times. And I put in catkin_package(CATKIN_DEPENDS message_runtime) Can anybody help me out here? Thanks!! 
Edit: Here is the CMakeLists.txt (I removed the commented parts): cmake_minimum_required(VERSION 2.8.3) project(beginner_tutorials) find_package(catkin REQUIRED COMPONENTS roscpp rospy std_msgs message_generation ) add_message_files( FILES Num.msg ) generate_messages( DEPENDENCIES std_msgs ) include_directories( ${catkin_INCLUDE_DIRS} ) And here is the package.xml: <?xml version="1.0"?> <package> <name>beginner_tutorials</name> <version>0.0.0</version> <description>The package for beginners</description> <maintainer email="user@todo.todo">user</maintainer> <license>TODO</license> <build_depend>message_generation</build_depend> <run_depend>message_runtime</run_depend> <buildtool_depend>catkin</buildtool_depend> <build_depend>roscpp</build_depend> <build_depend>rospy</build_depend> <build_depend>std_msgs</build_depend> <run_depend>roscpp</run_depend> <run_depend>rospy</run_depend> <run_depend>std_msgs</run_depend> <export> </export> </package> Originally posted by bluefish on ROS Answers with karma: 236 on 2015-02-20 Post score: 1 Original comments Comment by Andromeda on 2015-02-20: Did you save the file as said in the tutorial? Could you please entirely post your CMakeLists.txt and package.xml ? Comment by bluefish on 2015-02-20: Thanks for your comment! Yes, I saved it here: ~/catkin_ws/src/beginner_tutorials Comment by bluefish on 2015-02-20: I put in both files in the edit above. Comment by jarvisschultz on 2015-02-20: Does your Num.msg file actually say init64 num? It should say int64 num (you have an extra 'i'). That typing error would explain why you are getting the "Cannot locate message [init64]" when running catkin_make Comment by Andromeda on 2015-02-20: And what happens if you type simply: $ rosmsg show Num ????? without the folder name Comment by bluefish on 2015-02-20: Thanks so much javisschultz and Andromeda!! I really put in that extra "i". And I tried to find that error for hours. So stupid.... Sorry for bothering you with that and thanks again!!
Comment by JarvisRobot on 2016-12-07: I face the same problem. I changed the CMakeList.txt exactly as said in the tutorials and checked it a lot of times. When I try ~/catkin_ws/src/beginner_tutorials$ rosmsg show Num, it returns Could not find msg 'Num'. Can anybody help me out here? Thanks!! Comment by jarvisschultz on 2016-12-08: The command should be rosmsg show beginner_tutorials/Num Answer: There is a typo: init64 instead of int64. Originally posted by dornhege with karma: 31395 on 2015-02-20 This answer was ACCEPTED on the original site Post score: 3
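For readers hitting the same error: the message definition is a one-line file, and the fix from the accepted answer amounts to the following (the comment is mine; the field itself is the one from the tutorial):

```
# beginner_tutorials/msg/Num.msg -- the ROS builtin type is "int64", not "init64"
int64 num
```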
{ "domain": "robotics.stackexchange", "id": 20941, "tags": "ros, tutorial, cmake, msg" }
The inertia matrix explained
Question: Hello. I'd like to find some documentation about the inertia matrix in Gazebo/ROS (this topic is located in between). Here is the ROS wiki page: http://wiki.ros.org/urdf/Tutorials/Adding%20Physical%20and%20Collision%20Properties%20to%20a%20URDF%20Model Basically, what I do not understand is how the inertia matrix could be independent of the mass. How could a discrepancy between a dummy inertia matrix and the real inertia matrix impact the simulation? Thanks in advance for any doc that could help me to understand this part of the robot models. Originally posted by Arn-O on Gazebo Answers with karma: 316 on 2013-09-17 Post score: 3 Answer: The inertia tensor encodes the mass distribution of a body, so it does depend on the mass, but also on where it is located. The URDF tutorial you point to states that "If unsure what to put, the identity matrix is a good default." I highly disagree with this statement, as there is no one-size-fits-all inertia. In fact, this value will probably be too large for most links used in human-sized robots (or smaller). In my experience, heavier mobile base links (in the 50kg-100kg range) have inertias that fall within this order of magnitude, but almost all other smaller links (belonging to arms, legs, heads) have inertias that are between two and five orders of magnitude smaller. That being said, this statement is only saying that for a given mass, you're assuming a fictitious mass distribution that yields the identity. Although it may seem unrealistic, it is possible to distribute mass so that a desired matrix results. Originally posted by Adolfo Rodríguez T with karma: 275 on 2013-09-18 This answer was ACCEPTED on the original site Post score: 6 Original comments Comment by hsu on 2013-09-18: Adolfo, can you point me to the tutorial (or feel free to update it)?
I think more appropriately it should say: "...the identity matrix is unrealistic but a good default for stable dynamics especially when using LCP solvers such as ODE or Bullet.". Thanks. Comment by Arn-O on 2013-09-18: Thanks for your answer. I've been confused by the way the tutorial was written. Comment by Adolfo Rodríguez T on 2013-09-19: Martin already updated it. The link is in the original question.
{ "domain": "robotics.stackexchange", "id": 3457, "tags": "physics" }
Problem in understanding DDFS (Direct Digital Frequency Synthesizer.)
Question: I need to generate sine and cosine signals using a DDFS (Direct Digital Frequency Synthesizer). I'm unable to get the idea of the phase increment applied as input, which changes the frequency of the quadrature signals. Answer: A Direct Digital Frequency Synthesizer (DDFS) or simply Direct Digital Synthesizer (DDS) usually refers to the combination of a Numerically Controlled Oscillator (NCO) combined with Digital to Analog (D/A) converters for complex quadrature output or a single D/A converter for a real output. An NCO is actually a fairly simple structure, consisting of a Frequency Control Word (FCW) input to a "phase accumulator" which is nothing more than a simple counter, usually of high precision, that counts in increments of the FCW. Note that an accumulator is a digital integrator (for example, in continuous time the integration of a constant is a ramp; in discrete time, an input of all ones into an accumulator [1 1 1 1 1 ...] would also be a ramp [1 2 3 4 5 ...]). For this reason, if we say the input, which is just a digital value, represents frequency, and knowing that the integration of frequency is phase ($freq = \frac{d\phi}{dt}$), then we can see that the accumulator output represents phase (so we call it the phase accumulator). Since the output of the counter is phase versus time, we then need a function to convert that to a sine wave (or sine and cosine waves for quadrature output). This is often done with a lookup table (LUT) that provides the sine and cosine values for the specific phase values at a given time in the phase accumulator. The output of the LUT will therefore be the (digital) sine and cosine functions versus time, with a rate that is set by the input FCW (a higher FCW will result in a higher count rate at the phase accumulator output, meaning more cycles in time of our sine and cosine waveforms at the LUT output).
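The accumulator-plus-LUT structure just described can be sketched in a few lines of code (all sizes and the clock rate here are example choices, not values from the answer):

```python
import math

ACC_BITS = 24                  # phase accumulator width (example choice)
LUT_BITS = 8                   # LUT address width after truncation (example)
F_CLK = 1_000_000              # accumulator update rate in Hz (assumed)

fcw = 2**14                    # frequency control word (example)
f_step = F_CLK / 2**ACC_BITS   # frequency resolution: F_clk / 2^acc
f_out = fcw * f_step           # synthesized frequency: FCW * F_step

# One full cycle of sine stored in the lookup table.
lut = [math.sin(2 * math.pi * i / 2**LUT_BITS) for i in range(2**LUT_BITS)]

phase = 0
samples = []
for _ in range(100):
    phase = (phase + fcw) % 2**ACC_BITS                   # the phase accumulator
    samples.append(lut[phase >> (ACC_BITS - LUT_BITS)])   # truncate LSBs, index LUT
```

The modulo in the accumulator update is the roll-over described below, and the right shift is the phase truncation discussed in the SFDR/SNR section.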
The LUT will contain exactly one cycle of the sine and cosine waves (in practice, memory-compression techniques are used so that less data needs to be stored), so as the phase accumulator rolls over properly ($Cnt_{max}+n = n$), the output sine and cosine waves continue where expected without interruption. Here are some slides I have that describe the NCO. Also included in the figure is the Phase Control Word (PCW), which is an offset control for the Phase Accumulator Output, providing direct phase modulation control if desired: NCO Implementation view: [figure not reproduced] NCO Mathematical view (continuous time equivalent): [figure not reproduced] In implementation, the phase accumulator is typically quite large in precision (24 to 48 bits), driven by the frequency step size desired. The frequency control word is of a similar size (usually 1 bit less, to synthesize all frequencies from DC to $F_{clk}/2$). Since the LUT contains one cycle of the output waveform, counting through every value in the accumulator means counting through every address in the LUT, and this would result in the lowest possible output frequency above DC. Counting by 2 would result in a frequency twice as fast, and so on. From this we easily see the relationship: $F_{step} = \frac{F_{clk}}{2^{acc}}$ and $F_{out} = FCW * F_{step}$ Where $F_{step}$ is the step size in Hz, $F_{clk}$ is the clock rate at which the accumulator updates in Hz, $acc$ is the accumulator size in bits, $F_{out}$ is the output frequency in Hz, and $FCW$ is the Frequency Control Word in digital counts [0 to $2^{acc-1}-1$]. Phase Truncation, SFDR and SNR If we passed all the phase accumulator bits to a LUT, the digital output for each frequency control word would be perfect, with the only error source being the precision of the sine wave stored in memory (defined by the LUT output bit width). However the memory requirements for such an implementation would be excessive, and given the resulting noise, unnecessary.
Therefore we truncate the LSBs of the phase, leading to truncation error in the frequency output. This results in spurs throughout the frequency spectrum, based on the repetition rates of the truncation patterns that are created. The relationship between the highest spur and the phase truncation comes out very nicely as 6.02 dB/bit, where bit is the number of MSB bits that are passed on after truncation. For example, in the figure shown, 14 bits are passed on after truncation, so the Spurious Free Dynamic Range (due to phase truncation errors) is 6.02*14 = 84.28 dB. This means that although there are lots of spurs all across the digital frequency spectrum (from DC to $F_s$), the strongest of all the spurs, in this case, is 84 dB below the output signal level (at least the strongest of the spurs due to the phase truncation that was added). I have also evaluated the SNR (signal to noise ratio, where in this case the noise is due to the combined power of all spurs from phase truncation), and this came out to be, for the SNR due to phase truncation: $SNR = 6.02\text{ dB/bit} - 5.17\text{ dB}$, where bit is the number of bits into the LUT. So this means for our example with 14 bits after truncation, the combined power of ALL spurs created from phase truncation would be (6.02*14 - 5.17) = 79.11 dB below the output signal. The strongest of the spurs was 84.28 dB down as calculated previously, so this means all the other truncation spurs increase the total spurious noise power by 5.17 dB. Note that there are values of FCW that will result in a spurious-free output (free of spurs due to phase truncation): all FCW values that keep the truncated bits always at 0. An example specific to the figure below, where there are 18 truncated bits, is an FCW of binary 1 followed by eighteen zeros = 262144 decimal. However, for all other values and sufficiently long data runs, these formulas will apply.
This relationship was derived by making use of the fact that the phase truncation error is a ramp or cyclically sampled ramp, which has a uniform distribution. This is a useful relationship that can be combined with the well-documented quantization noise of a full scale sine wave (referring to the digital output after the look-up table): SNR due to output quantization: $SNR = 6.02\text{ dB/bit} + 1.76\text{ dB}$, where bit is the number of bits out of the LUT (or the effective number of bits, ENOB, of the ADC if considering the ADC output). Quantization noise and phase truncation noise can reasonably be considered independent and uncorrelated, meaning the total noise for a composite SNR would sum in power. Therefore the above two SNR relationships can be very useful in establishing the overall noise performance, leading to the precision requirements on the NCO at both the input and output side of the LUT, while the accumulator size itself is driven by the frequency resolution desired.
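To make the phase-accumulator picture concrete, here is a minimal Python sketch of the NCO described above. The bit widths and the FCW value are example choices of mine, not taken from the answer's figures:

```python
import numpy as np

ACC_BITS = 24   # phase accumulator width in bits (example value)
LUT_BITS = 8    # MSBs kept after phase truncation = LUT address width
# One full cycle of sine stored in the lookup table:
LUT = np.sin(2 * np.pi * np.arange(2**LUT_BITS) / 2**LUT_BITS)

def nco(fcw, n_samples):
    """Phase accumulator counts by FCW each clock; the MSBs address the LUT."""
    phase = (fcw * np.arange(n_samples)) % 2**ACC_BITS   # accumulator rollover
    return LUT[phase >> (ACC_BITS - LUT_BITS)]           # truncate, then look up

# F_out = FCW * F_clk / 2**ACC_BITS, so FCW = 2**18 gives F_clk / 64:
samples = nco(2**18, 64)   # exactly one output cycle in 64 clocks
```

Doubling the FCW doubles the output frequency, exactly as the $F_{out} = FCW \cdot F_{step}$ relationship states.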
{ "domain": "dsp.stackexchange", "id": 3954, "tags": "matlab, digital-communications" }
Aggregating decision criteria of different scales
Question: Let's say I have a framework that performs a detection task on some dataset. In order to do so I use three different metrics (A, B, and C) as decision makers. A and B are probabilities, i.e., $ 0 \le A, B \le 1$ while C is the reconstruction (MSE) loss ($ \lvert y_{pred} - y_{true}\rvert^2_2 $). I have a way to make decisions based off of "only A", "only B", "only C" and "only A, B" with the following rules: Only A: return A < A_thresh Only B: return B < B_thresh Only C: return C > C_thresh Only A, B (under the mild assumption that they're independent): return (log(A) + log(B)) < AB_thresh What I'm looking for is a way to make a decision based off of all 3 of my metrics, but I do not want to resort to something as simple as majority voting. Is this possible for metrics of different scales? Answer: I'm not sure what you have against majority voting. Clearly you have an ensemble of weak classifiers. Of the many ways to combine them, voting is a nice one, easily explained to stakeholders. You didn't tell us how your cost of FN relates to your cost of FP. The cost function makes a difference when designing each classifier, and when designing an ensemble output for them. Only A, B: We return A × B < exp(AB_thresh). I don't know what your class imbalance or {A, B} probabilities typically look like, but I'm guessing they're not too far from 90%? If you tend to have low probabilities, say closer to 10%, I would be fine with instead comparing the sum A + B. It is like voting, like computing "only A" or "only B", but more powerful, because when the model shows strong confidence in the A measure or the B measure that can be enough to tip the scales. (Unlike the product, that sum is clearly not a "probability". But it can be a useful score, a figure of merit correlated with the target classification.) You explain that you want to put C on the same footing as A and B. That is, you want to convert it to a probability.
So you're looking for a technique like Platt scaling or perhaps isotonic regression. You have a hypothesis. Three of them, in fact. So you have some opinions about the structure of the underlying generative process that's producing your data, and you know there are cases where each detector has weak performance. I recommend you put together a simulated generative process, so you know the parameters and the ground-truth Y labels. And see how models that do voting or Platt scaling interact with the simulation. You have already done several training runs. Augment the training dataset with a categorical column indicating the "winner" model, {A, B, C}, for that example. If e.g. both A and B said "true", pick the one that reported higher confidence. Train a new "arbiter" model D which, given an input example, predicts one of {A, B, C} as the winner. Rather than "voting", use "arbitration" to decide how your ensemble of models classifies a novel example. In this way D learns the shape of your data together with the strengths / weaknesses of the models, and uses each model only in the part of the problem space where it shines. Or let decision nodes found by XGBoost do that work for you. Include some A, B, C features for XGBoost to look at, in addition to the raw input features. Include an A indicator variable (boolean) if you wish, but be sure to also include the continuous raw A number, as that is more informative.
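As an illustration of the calibration idea, here is a rough Python sketch of Platt scaling: fit a logistic model $p(y=1\mid c) = \sigma(ac+b)$ on held-out (C, label) pairs so the unbounded MSE score becomes a probability comparable to A and B. The toy data and the simple gradient-descent fit are invented for the example; in practice one would reach for an existing implementation such as scikit-learn's calibration tools:

```python
import numpy as np

def platt_fit(scores, labels, lr=0.5, n_iter=5000):
    """Fit p(y=1|c) = sigmoid(a*c + b) by gradient descent on the log-loss."""
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        a -= lr * np.mean((p - labels) * scores)   # dLoss/da
        b -= lr * np.mean(p - labels)              # dLoss/db
    return a, b

def platt_prob(c, a, b):
    return 1.0 / (1.0 + np.exp(-(a * c + b)))

# Invented toy data: larger reconstruction loss C tends to mean a detection.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(2.0, 0.5, 500),    # positives
                         rng.normal(0.5, 0.5, 500)])   # negatives
labels = np.concatenate([np.ones(500), np.zeros(500)])
a, b = platt_fit(scores, labels)
# platt_prob(C, a, b) is now on the same [0, 1] footing as A and B,
# so log-probabilities of all three metrics can be summed and thresholded.
```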
{ "domain": "datascience.stackexchange", "id": 12057, "tags": "machine-learning, model-selection" }
Are energy eigenfunctions of a particle in one dimensional box orthogonal to each other?
Question: For a particle in a one dimensional box, its state Ψ(t=0) is defined as: $Ψ= \frac{3}{5}Φ_1(x)+\frac{4}{5}Φ_3(x)$ I want to find out $|Ψ(0)|^2$. My question is that as the energy eigenfunctions $Φ_1(x)$ and $Φ_3(x)$ are orthogonal to each other, $|Ψ(0)|^2$ must be: $|Ψ(0)|^2=(\frac{3}{5}\langle\phi_1|+ \frac{4}{5}\langle\phi_3|)(\frac{3}{5}|\phi_1\rangle+ \frac{4}{5}|\phi_3\rangle)$ $=\frac{9}{25}\langle\phi_1|\phi_1\rangle+\frac{12}{25}\langle\phi_3|\phi_1\rangle +\frac{12}{25}\langle\phi_1|\phi_3\rangle+ \frac{16}{25}\langle\phi_3|\phi_3\rangle $ $ =\frac{9}{25}+\frac{16}{25}$ $= 1$ But in the book they have written \begin{align*} |Ψ(0)|^2 = \frac{9}{25} |Φ_1|^2 + \frac{16}{25} |Φ_3|^2 + 2\cdot\frac{12}{25} \operatorname{Re}(Φ_1^* Φ_3) \end{align*} Please help me understand what I am missing here. Answer: For a particle in a one dimensional box, its state Ψ(t=0) is defined as: $Ψ= \frac{3}{5}Φ_1(x)+\frac{4}{5}Φ_3(x)\tag{A}$ I want to find out $|Ψ(0)|^2$. My question is that as the energy eigenfunctions $Φ_1(x)$ and $Φ_3(x)$ are orthogonal... Yes, but the orthogonality comes from integrating over the length of the box. You cannot just multiply the functions and expect to get zero. $|Ψ(0)|^2=(\frac{3}{5}\langle\phi_1|+ \frac{4}{5}\langle\phi_3|)(\frac{3}{5}|\phi_1\rangle+ \frac{4}{5}|\phi_3\rangle)$ The above expression is not correct. There is no integration on the left-hand side, so it is not an inner product; it is just the square of the wavefunction at $t=0$ (and some point $x$, which you have not written explicitly, but which is implied by Eq. (A) above). But in the book they have written \begin{align*} |Ψ(0)|^2 = \frac{9}{25} |Φ_1|^2 + \frac{16}{25} |Φ_3|^2 + 2\cdot\frac{12}{25} \operatorname{Re}(Φ_1^* Φ_3) \end{align*} Please help me understand what I am missing here. You are missing the fact that evaluating the spatial wavefunction at a given point (e.g., $t=0$ and some $x$) is not the same as the inner product.
All the book has done is to multiply $\Psi(0,x)$ times $\Psi(0,x)^*$. The inner product requires integration over the length of the box, which has yet to be done.
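The distinction the answer draws can be checked symbolically. The sketch below assumes the standard infinite-well eigenfunctions $\phi_n(x)=\sqrt{2/L}\,\sin(n\pi x/L)$ on $[0,L]$ (the question never writes them out): the integrals over the box give orthonormality and $\langle\Psi|\Psi\rangle=1$, while the pointwise square keeps exactly the cross term the book shows.

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)

def phi(n):
    # standard infinite-well eigenfunctions (an assumption; not in the question)
    return sp.sqrt(2 / L) * sp.sin(n * sp.pi * x / L)

psi = sp.Rational(3, 5) * phi(1) + sp.Rational(4, 5) * phi(3)

# Orthonormality only holds under the integral over the box:
inner_13 = sp.integrate(phi(1) * phi(3), (x, 0, L))   # -> 0
norm_psi = sp.integrate(psi**2, (x, 0, L))            # -> 1

# Pointwise, |Psi(x)|^2 keeps the cross term the book writes:
book_form = (sp.Rational(9, 25) * phi(1)**2 + sp.Rational(16, 25) * phi(3)**2
             + 2 * sp.Rational(12, 25) * phi(1) * phi(3))
cross_check = sp.simplify(psi**2 - book_form)         # -> 0
```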
{ "domain": "physics.stackexchange", "id": 96184, "tags": "hilbert-space, wavefunction, superposition, observables" }
Calibrate Odometry and Gyro,What is the principle calibration?
Question: Hello. I am using a Turtlebot with the Gyro ADXRS613, and I got two params named turtlebot_node/gyro_scale_correction and turtlebot_node/odom_angular_scale_correction. But what is the meaning of these two params? And what is the principle of the calibration? Is there some relevant literature I can study? Thanks a lot. Originally posted by longzhixi123 on ROS Answers with karma: 78 on 2013-06-03 Post score: 1 Answer: I doubt there is relevant literature, this is a very basic calibration. The robot basically aligns to a wall, spins around several times at different speeds, and at the end tries to generate scale corrections for the gyro and odom such that the values would be correct for those rotations (where the ground truth is based off the laser-scanner detection of the wall to which it keeps aligning). Originally posted by fergs with karma: 13902 on 2013-06-12 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Zayin on 2013-06-12: I assume you have looked into this? http://www.ros.org/wiki/turtlebot_calibration/Tutorials/Calibrate%20Odometry%20and%20Gyro Note that calibration is very important for localization & navigation.
{ "domain": "robotics.stackexchange", "id": 14416, "tags": "ros, navigation, odometry, turtlebot, gyro" }
Is there a way to tell which chromosome a gene is on, by looking at the "Chromosome/scaffold name"
Question: I recently got a data set, from which I need to figure out which chromosome a gene is from, but the head of the data reads like: Gene ID Description Gene type Gene End (bp) Gene Start (bp) Strand Associated Gene Name Chromosome/scaffold name while one entry of the data is like: ENSG00000252303 RNA, U6 small nuclear 280, pseudogene [Source:HGNC Symbol;Acc:HGNC:47243] snRNA 67546754 67546651 1 RNU6-280P CHR_HG2128_PATCH It looks like the last column is the information about which chromosome the gene comes from, but how to interpret the "CHR_HG2128_PATCH"? Answer: What you are looking at is a patch, not an actual chromosome location. HG-2128 is an issue ID, something that the Genome Reference Consortium uses to track issue with references. Information regarding that specific patch is here. The issue is described as a "Possible misassembly or indel variation in GRCh38 within AL049860.8." During the alignment step before you received your data, someone used a top level assembly version of the reference genome. This includes extra, accessioned scaffolds tacked on to the end of the file that contain the fix. These scaffolds represent what will be found in the next major release of the genome. The fixes aren't applied directly to the reference as soon as they are found due to the inevitable disruption of coordinates. So reads mapped to this FIX patch rather than the primary original scaffold in the primary assembly, on which the RNU6-280P gene resides. The coordinates are slightly different. In the current GRCh38.p12 assembly the RNU6-280P gene is located at chr6:67546651-67546754 (source). You can see that the coordinates are very close to those indicated in the table you provided (Gene Start, Gene End) but are indeed different. Here is some more information on patches.
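As a rough illustration, one could flag such scaffold names programmatically before analysis. The patterns below are assumptions based on common Ensembl/GRC naming conventions (`CHR_..._PATCH` for fix/novel patches, `..._ALT` for alt loci), not an official API:

```python
def classify_scaffold(name: str) -> str:
    """Heuristic classification of an Ensembl 'Chromosome/scaffold name'."""
    primary = {str(i) for i in range(1, 23)} | {"X", "Y", "MT"}
    if name in primary:
        return "primary chromosome"
    if name.endswith("_PATCH"):
        # e.g. CHR_HG2128_PATCH -- HG2128 is the GRC issue ID for the fix
        return "fix/novel patch"
    if name.startswith("CHR_") and name.endswith("_ALT"):
        return "alt locus scaffold"
    return "unplaced/other scaffold"

examples = {n: classify_scaffold(n) for n in ["6", "MT", "CHR_HG2128_PATCH"]}
```

Genes reported on patch scaffolds, like the RNU6-280P example, can then be mapped back to their primary-assembly coordinates by name.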
{ "domain": "bioinformatics.stackexchange", "id": 627, "tags": "gene, ensembl" }
ROS MoveIt GripperCommand directed to position_controllers
Question: Hi there everyone, my question is this: how do I get MoveIt gripper_action or GripperCommand to control a ros_control position_controllers/JointPositionController type joint? My configuration was talked about here. I have configured a controllers.yaml file in the MoveIt robot config package folder, like this: ros_control_namespace: / #controller_manager_ns: controller_manager controller_list: - name: arm_controller action_ns: follow_joint_trajectory type: FollowJointTrajectory default: false joints: - jt1_joint - jt2_joint - jt3_joint - jt4_joint allowed_execution_duration_scaling: 1.2 allowed_goal_duration_margin: 0.5 - name: gripper_controller action_ns: gripper_action type: GripperCommand default: false parallel: false joints: - jt5_joint initial: # Define initial robot poses. - group: jaycar_arm pose: home The above doesn't work, it says [ WARN] [xxxx.yyyy]: Waiting for gripper_controller/gripper_action to come up [ERROR] [xxxx.yyyy]: Action client not connected: gripper_controller/gripper_action My jt5_joint is running as a ros_control controller of type position_controllers/JointPositionController, which I can control using rostopic pub /gripper_controller/command std_msgs/Float64 "data: 0.1" How do I convert to something that MoveIt can use, or configure MoveIt so it works with my joint control type? I tried using moveit_ros_control <param name="moveit_controller_manager" value="moveit_ros_control_interface::MoveItControllerManager"/> instead of <param name="moveit_controller_manager" value="moveit_simple_controller_manager/MoveItSimpleControllerManager"/> But that didn't help. I'm hoping someone would be able to steer me in the right direction. It seems that MoveIt only supports two types of controllers, FollowJointTrajectory and GripperCommand. I can't find a list of supported controllers that match the ros_control controllers, of which there are heaps of different types. Am I missing something? Thanks heaps in advance.
Originally posted by Zonared on ROS Answers with karma: 48 on 2020-09-20 Post score: 1 Answer: I don't know if I should answer my own question or not, so I'll post a comment until told otherwise. I solved my problem. What I did is configure the "trajectory_execution.launch.xml" file in the MoveIt package to use MoveItSimpleControllerManager <arg name="moveit_controller_manager" default="moveit_simple_controller_manager/MoveItSimpleControllerManager" /> Then I changed my ros_controller yaml file in my robot hardware package for joint 5 to be a JointTrajectoryController, like this: gripper_controller: type: position_controllers/JointTrajectoryController joints: - jt5_joint It seems MoveIt only supports JointTrajectoryController controllers; I'm probably wrong, but I couldn't get it to work any other way. The main thing is, it works now: I can control all four joints for arm movement and the gripper to grip things, all from RViz. The next thing is to control it via Python. Originally posted by Zonared with karma: 48 on 2020-09-22 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by fvd on 2020-09-22: Answering your own question is perfectly fine. Please feel free to post and accept it. Thanks for documenting your solution. Comment by Zonared on 2020-12-05: Answer is in my own comment.
{ "domain": "robotics.stackexchange", "id": 35556, "tags": "moveit, ros-melodic, ros-control" }
Quantum causal structure
Question: We take causal structure to be some relation defined over elements which are understood to be morphisms of some category. An example of such a relation is a domain; another is a directed acyclic graph. Yet another is a string diagram in a symmetric monoidal category. Panangaden and Martin showed that interval domains are categorically equivalent to (hyperbolic?) spacetimes. This makes the domain a perfect candidate for classical relativistic causality. The planar graphs of the diagrammatical calculus are an enticing candidate for quantum causal structure. Then there is Hardy's causaloid. The classical causal structure seems very realist, in that we might believe in the existence of the set of events for a universe. I have offered the Fischer impossibility result as a refutation of the naïve existence of this set. What are the best candidates for a quantum causal structure? Answer: It seems the FLP impossibility result may not hold for quantum systems (see this paper by Helm and section 5 of this paper for a criticism of the first), in which case you don't need any exotic causal structure for quantum mechanics to avoid it.
{ "domain": "physics.stackexchange", "id": 3360, "tags": "mathematical-physics, quantum-gravity, research-level, causality, category-theory" }
Continuous symmetry transformations Taylor expansion
Question: Continuous symmetry transformations form a Lie group. The product of two such transformations is also a symmetry transformation: $T(\theta_1^a)T(\theta_2^a) = T(\theta_3^a)$ where $\theta_3^a=f^a(\theta_1^a,\theta_2^a)$. I now would like to perform a Taylor expansion of $f$ around $\theta^a=0$: $$\theta_3^a=f^a(\theta_1,\theta_2)=f^a(0,0)+\frac{\partial f^a}{\partial \theta_1^b}\theta_1^b+\frac{\partial f^a}{\partial \theta_2^b}\theta_2^b+\frac{1}{2}\frac{\partial^2 f^a}{\partial \theta_1^b \partial \theta_1^c}\theta_1^b \theta_1^c + \frac{1}{2}\frac{\partial^2 f^a}{\partial \theta_2^b \partial \theta_2^c}\theta_2^b \theta_2^c + \frac{\partial^2 f^a}{\partial \theta_1^b \partial \theta_2^c}\theta_1^b \theta_2^c + ...$$ They then go on and write: $$f(\theta_1^a,\theta_2^a)=\theta_1^a+\theta_2^a+\sum_{b,c}f_{bc}^a \theta_1^b\theta_2^c+...$$ My question is now what exactly is meant by the different indices, and how do you get from the first expression to the second? I think that $\theta_1$ etc. are matrices in the Lie group and $f$ denotes the operation on that Lie group. But that's as far as I get. Answer: A Lie group can be parametrized by a set of continuous parameters. The $\theta^a$s are these group parameters, with $a=1,..,n$ where $n$ is the number of parameters needed to specify the group elements uniquely. The group transformations obey a composition law $T(\theta_1)T(\theta_2)=T(f(\theta_1,\theta_2))\equiv T(\theta_3)$, where I am denoting the set of $\theta_1^a$ etc. as $\theta$ when they appear inside the brackets. If $\theta_1^a=0$, then the first transformation is just the identity. $$T(0)T(\theta_2)=T(\theta_2)=T(f(\theta_1=0,\theta_2))$$ Therefore $f^a(\theta_1=0,\theta_2)=\theta_2^a$. Similarly $f^a(\theta_1,\theta_2=0)=\theta_1^a$ (and in particular $f^a(0,0)=0$). Therefore we can expand $\theta_3^a=f^a(\theta_1,\theta_2)$ around $\theta_1=0,\theta_2=0$ as $$\theta_3^a=f^a(\theta_1,\theta_2)=\theta_1^a+\theta_2^a+f^a_{bc}\theta_1^b\theta_2^c+...$$ up to second order.
Clearly there can not be any $\theta_2^b\theta_2^c$ term, because then we could not satisfy $f^a(\theta_1=0,\theta_2)=\theta_2^a$. Edit: Let us see why we can not have a $\theta_2^b\theta_2^c$ term. First of all, we are working up to second order in infinitesimals. In fact $T(\theta_1=0)T(\theta_2)=T(\theta_2)$ is true even for finite transformations. Therefore $f^a(\theta_1=0,\theta_2)=\theta_2^a$ should be satisfied to all orders. But suppose we have $$\theta_3^a=f^a(\theta_1,\theta_2)=\theta_1^a+\theta_2^a+f^a_{bc}\theta_1^b\theta_2^c+g^a_{bc}\theta_2^b\theta_2^c+h^a_{bc}\theta_1^b\theta_1^c+...$$ then $f^a(\theta_1=0,\theta_2)=\theta_2^a+g^a_{bc}\theta_2^b\theta_2^c$, which implies that $g^a_{bc}=0$. Therefore there can not be any $\theta_2^b\theta_2^c$ term. The same argument, with $\theta_2=0$, applies to the $\theta_1^b\theta_1^c$ term.
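The surviving cross term can be checked on a concrete Lie group. The sketch below uses SO(3) (my choice of example, not from the answer): composing two small rotations numerically reproduces $\theta_3^a=\theta_1^a+\theta_2^a+f^a_{bc}\theta_1^b\theta_2^c+...$, where for rotations the antisymmetric part of $f^a_{bc}$ gives the Baker-Campbell-Hausdorff cross-product term $\frac{1}{2}\,\theta_1\times\theta_2$.

```python
import numpy as np

def hat(w):
    """Map a rotation vector to its skew-symmetric (Lie algebra) matrix."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    """Rodrigues' formula: rotation vector -> rotation matrix."""
    t = np.linalg.norm(w)
    K = hat(w / t)
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

def log_so3(R):
    """Exact inverse for 0 < angle < pi: rotation matrix -> rotation vector."""
    t = np.arccos((np.trace(R) - 1) / 2)
    W = (R - R.T) * t / (2 * np.sin(t))
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

t1 = np.array([0.01, 0.0, 0.0])
t2 = np.array([0.0, 0.02, 0.0])
t3 = log_so3(exp_so3(t1) @ exp_so3(t2))          # exact composed parameters
second_order = t1 + t2 + 0.5 * np.cross(t1, t2)  # expansion up to f^a_bc term
```

The exact `t3` matches the second-order expansion to third-order accuracy, while `t1 + t2` alone misses the cross term, just as the expansion predicts.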
{ "domain": "physics.stackexchange", "id": 32696, "tags": "group-theory, lie-algebra" }
isType(obj) / getType(obj) - v0
Question: By "browser unknown", I mean I don't know how far back support goes for this. Also, I'm wondering when I can delegate to typeof. I've heard typeof is faster, but the method below is more widely supported and also mentioned in ES5. /*isType ** dependencies - none ** browser - unknown ** */ NS.isType = function (type, o) { return (Object.prototype.toString.call(o).slice(8, -1) === type); }; /*getType ** dependencies - none ** browser - unknown ** */ NS.getType = function (o) { return Object.prototype.toString.call(o).slice(8, -1); }; Clarification: Not interested in detecting array-like objects... just the language objects defined in ES5. Answer: How about: NS.CheckType = function (o,test) { // implements both return test ? o.constructor === test : o.constructor.name; }; // usage NS.CheckType(false); //=> 'Boolean' NS.CheckType(false,Array); //=> false NS.CheckType({},Object); //=> true NS.CheckType({},Array); //=> false NS.CheckType([],Object); //=> false NS.CheckType([],Array); //=> true NS.CheckType(/[a-z]/); //=> 'RegExp' NS.CheckType(0); //=> 'Number' // etc... Because most js-things 'inherit' from Object you can also use: Object.prototype.is = function (test) { return test ? this.constructor === test : this.constructor.name; }; // usage 'string'.is(); //=> 'String' 'string'.is(Object); //=> false (function(){}).is(); //=> 'Function' var f = function(){}; f.is(Function); //=> true // also function Animal(name){this.name = name || 'some animal';} var dog = new Animal('Bello'); dog.is(Animal); //=> true // etc... [Edit] tested this in IE7-10: Object.prototype.is = function (test) { return test ? this.constructor === test : (this.constructor.name || String(this.constructor) .match ( /^function\s*([^\s(]+)/im)[1] ); }; for completeness: if the constructor function is anonymous the method will fail. Here's a solution for that: Object.prototype.is = function (test) { return test ?
this.constructor === test : (this.constructor.name || ( String(this.constructor).match ( /^function\s*([^\s(]+)/im) || ['','ANONYMOUS_CONSTRUCTOR'] ) [1] ); }; // usage var Some = function(){ /* ... */} some = new Some; some.is(); //=> 'ANONYMOUS_CONSTRUCTOR' And as bonus: Object.prototype.is = function() { var test = arguments.length ? [].slice.call(arguments) : null ,self = this.constructor; return test ? !!(test.filter(function(a){return a === self}).length) : (this.constructor.name || (String(self).match ( /^function\s*([^\s(]+)/im) || [0,'ANONYMOUS_CONSTRUCTOR']) [1] ); } // usage var Some = function(){ /* ... */} ,Other = function(){ /* ... */} ,some = new Some; 2..is(String,Function,RegExp); //=> false 2..is(String,Function,Number,RegExp); //=> true some.is(); //=> 'ANONYMOUS_CONSTRUCTOR' some.is(Other); //=> false some.is(Some); //=> true // note: you can't use this for NaN (NaN === Number) (+'ab2').is(Number); //=> true
{ "domain": "codereview.stackexchange", "id": 3309, "tags": "javascript" }
Do primate RGCs have overlapping receptive fields?
Question: According to this link, http://hubel.med.harvard.edu/book/b10.htm retinal ganglion cells (RGCs) receive input from overlapping receptive fields (RFs). This is also an idea used in convolutional neural networks for deep learning of images. On the other hand, this one, for the primate retina, states that there is no such overlap: http://www.sciencedirect.com/science/article/pii/0042698995001670 So do primate RGCs have overlapping RFs? If not, how does the primate visual system deal with this sparse input? How does it fill in the blanks? Answer: Short answer Associated ON- and OFF-center retinal ganglion cells can show 100% overlap in their receptive fields. Background When looking at the elementary neurophysiology of the retina, we can see that a single cone generally synapses onto two bipolars. Because photoreceptor cells hyperpolarize when illuminated, glutamate release from their synapse is inhibited. In turn, OFF-center bipolar cells are hyperpolarized and ON-center bipolars are depolarized. The OFF-center bipolar synapses onto an OFF-center retinal ganglion cell (RGC), while the ON-center synapses onto an ON-center RGC (Fig. 1). Retinal circuitry linking cones to bipolar cells (left panel) and bipolar cells to retinal ganglion cells (right panel). source: Washington University. Then to your question - the basic circuitry shows that the ON- and OFF-center RGCs receive their input from a single cone and hence have 100% overlapping fields. In the foveal region, bipolars link 1:1 onto cones, as in Fig. 1, so in this example case they show 100% overlap. Note that there are an estimated 30 types of retinal ganglion cells, about half of them not even described yet, as per a 2015 review article (Sanes & Masland, 2015). Hence, this answer is far from exhaustive, but it does show there can be substantial overlap between RGCs. Reference - Sanes & Masland, Ann Rev Neurosci (2015); 38: 221-46
{ "domain": "biology.stackexchange", "id": 7033, "tags": "vision, neurophysiology, human-eye" }
Point charges symmetrically spreading out
Question: The Problem There are $3$ positively charged particles fixed in a frictionless horizontal plane, positioned in the vertices of a triangle. The $i$-th particle has mass $m_i$ and charge $Q_i$. When they are free to move, their positions always form a triangle that is similar to the first triangle, such as their corresponding sides are always parallel. [So they can't rotate, just spread out] Determine the largest angle of the triangle, if the charge/mass ratio of the particles is given by:$$\dfrac{Q_1}{m_1}:\dfrac{Q_2}{m_2}:\dfrac{Q_3}{m_3}=1:2:3$$ My Attempt I tried to approach it with vectors, centering a cartesian referential in the centroid of the triangle. If the position of each particle $i$ is $\vec{r_i}$, then: $$\begin{cases}\displaystyle\vec{r_{12}}+\vec{r_{23}}+\vec{r_{31}}=0\\\vec{a_1}=\frac{KQ_1}{M_1}\left(\frac{Q_2}{r^3_{21}}\vec{r}_{21}+\frac{Q_3}{r^3_{31}}\vec{r}_{31}\right)\\\vec{a_2}=\frac{2KQ_1}{M_1}\left(\frac{Q_1}{r^3_{12}}\vec{r}_{12}+\frac{Q_3}{r^3_{32}}\vec{r}_{32}\right)\\\vec{a_3}=\frac{3KQ_1}{M_1}\left(\frac{Q_1}{r^3_{13}}\vec{r}_{13}+\frac{Q_2}{r^3_{23}}\vec{r}_{23}\right)\\\vec{a_{1}}+\vec{a_{2}}+\vec{a_{3}}=0\end{cases}$$ There must be an elegant solution to this problem... What am I missing here? Is it possible to represent symmetry here without directly operating the position vectors? Answer: A simpler approach is to set the magnitudes of the mutual accelerations proportional to the separations. This is a necessary condition and seems to also be sufficient. i.e.
$\space a_{12}/r_{12} = a_{23}/r_{23} = a_{31}/r_{31}$ taking $a_{ij} = \frac{KQ_iQ_j/r_{ij}^2}{M_{ij}}$ where $M_{ij}=\frac{M_iM_j}{(M_i+M_j)}$ is the reduced mass giving $a_{ij}/r_{ij} = K(Q_i/M_i)(Q_j/M_j)(M_i+M_j)/r_{ij}^3$ Substituting the $1:2:3$ ratios for $Q_i/M_i$ gives $1\cdot2\cdot(M_1+M_2)/r_{12}^3 = 2\cdot3\cdot(M_2+M_3)/r_{23}^3 = 3\cdot1\cdot(M_3+M_1)/r_{31}^3$ This immediately gives you the ratio of the three sides and you can use the cosine formula to get the angles. But the answer is not independent of the masses! Assuming the masses are all equal, we get the sides of the triangle in the ratio $2^{(1/3)}:6^{(1/3)}:3^{(1/3)}$ which gives the same answer of $84.2270^{\circ}$ found by @secavara
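The final step (law of cosines on the side ratio, with the equal-mass assumption stated in the answer) can be checked numerically; this short sketch reproduces the $84.227^{\circ}$ value:

```python
import math

# Sides in the ratio 2^(1/3) : 6^(1/3) : 3^(1/3) from the answer;
# the largest angle sits opposite the largest side, b = 6^(1/3).
a, b, c = 2 ** (1 / 3), 6 ** (1 / 3), 3 ** (1 / 3)
cos_B = (a**2 + c**2 - b**2) / (2 * a * c)      # law of cosines
largest_angle = math.degrees(math.acos(cos_B))  # ~ 84.227 degrees
```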
{ "domain": "physics.stackexchange", "id": 86716, "tags": "homework-and-exercises, kinematics, charge, vectors, coulombs-law" }
Joint order in JointGroupVelocityController 'command' topic
Question: I'm working now with a UR5. When I echo the states of the joints I get this: galf@galf:~/Desktop/ur5/catkin_ws$ rostopic echo /ur5/joint_states -n1 header: seq: 9465 stamp: secs: 94 nsecs: 662000000 frame_id: '' name: [elbow_joint, shoulder_lift_joint, shoulder_pan_joint, wrist_1_joint, wrist_2_joint, wrist_3_joint] position: [-3.1415916305807485, 0.30983008014827096, 1.9488655889645834, 3.1415983918040062, 1.845209559572325, -3.141592812638949] velocity: [3.621569639258467e-06, -9.772236883207112e-06, 0.00011501897260136615, -2.0719156311055294e-07, 0.06444979561546602, -6.340667138213241e-08] effort: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0] --- Do the positions match the joints in the same order as they appear in the names? I'm asking this because when I retrieve the info for the command topic, I get galf@galf:~/Desktop/ur5/catkin_ws$ rostopic pub /ur5/joints_group_velocity_controller/command std_msgs/Float64MultiArray "layout: dim: - label: '' size: 0 stride: 0 data_offset: 0 data: - 0" So to command the joints, I need to fill out data as data: [0,0,0,0,0,0], but I don't know the order of the joints. How does this work in ROS? Does data[0] represent elbow_joint? The tree appears as shoulder_pan_joint > shoulder_lift_joint > elbow_joint > wrist_1_joint > wrist_2_joint > wrist_3_joint. I don't know why ROS puts them in alphabetical order. Originally posted by CroCo on ROS Answers with karma: 155 on 2022-06-01 Post score: 0 Original comments Comment by gvdhoorn on 2022-06-01: I've updated the title to better reflect your question. "topic command" is too ambiguous. Answer: From this: galf@galf:~/Desktop/ur5/catkin_ws$ rostopic pub /ur5/joints_group_velocity_controller/command std_msgs/Float64MultiArray ... it would appear you're using a JointGroupVelocityController with your driver (note: this cannot be ur_driver, as that's not a ros_control compatible driver).
That controller requires a configuration stanza similar to this: joint_group_vel_controller: type: velocity_controllers/JointGroupVelocityController joints: - shoulder_pan_joint - shoulder_lift_joint - elbow_joint - wrist_1_joint - wrist_2_joint - wrist_3_joint The order in which you should provide the data to the joints_group_velocity_controller/command would be the same order as specified in the configuration stanza I've included here. So to command the joints, I need fill out data as data: [0,0,0,0,0,0] but I don't know the order of joints. How this works in ROS? Does data[0] represent elbow_joint? this is not really "in ROS". It's just a consequence of how the ros_control developers implemented their infrastructure. The tree appears as shoulder_pan_joint > shoulder_lift_joint > elbow_joint > wrist_1_joint > wrist_2_joint > wrist_3_joint. I don't know why ROS puts them in alphabetical order. For this, see #q356347, #q282097, #q221560 and #q351105. Originally posted by gvdhoorn with karma: 86574 on 2022-06-01 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by CroCo on 2022-06-01: this is exactly my ymal file in the same order you've posted it. Comment by CroCo on 2022-06-01: Could you please confirm if the names match the positions' indices in the command topic? Comment by gvdhoorn on 2022-06-02: I'm not sure what is unclear about this: The order in which you should provide the data to the joints_group_velocity_controller/command would be the same order as specified in the configuration stanza I've included here. Comment by CroCo on 2022-06-02: @gvdhoorn, I'm talking about /joint_states. Comment by gvdhoorn on 2022-06-02: Your question is still a bit ambiguous (which "names" are you referring to in your comment), but: please see #q356347, #q282097, #q221560 and #q351105. Summarising: no, they don't have to, they probably don't, and you shouldn't assume they do or will.
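Since /joint_states gives no guaranteed order, a common pattern is to index the reported state by joint name and emit values in the controller's configured order. This is an illustrative sketch (the joint names are from the question, the helper names are mine, and the actual rospy publishing step is omitted):

```python
# Joint order as declared in the controller's YAML configuration stanza:
CONTROLLER_ORDER = ["shoulder_pan_joint", "shoulder_lift_joint", "elbow_joint",
                    "wrist_1_joint", "wrist_2_joint", "wrist_3_joint"]

def reorder(state_names, state_values, target_order=CONTROLLER_ORDER):
    """Map /joint_states values (arbitrary order) into controller order."""
    by_name = dict(zip(state_names, state_values))
    return [by_name[j] for j in target_order]

# Example with the alphabetical-looking order seen in the rostopic echo above:
names = ["elbow_joint", "shoulder_lift_joint", "shoulder_pan_joint",
         "wrist_1_joint", "wrist_2_joint", "wrist_3_joint"]
velocities = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
command_data = reorder(names, velocities)   # ready for Float64MultiArray.data
```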
{ "domain": "robotics.stackexchange", "id": 37736, "tags": "ros, ros-melodic" }
Comparing two objects having two nullable date fields
Question: ExpiryDates has two properties PSL_ExpiryDate and MNL_ExpiryDate of type nullable date. I am trying to compare two objects to see whether they have the same or different values. Is there a cleaner way to do the same as the code below? private bool SameValues(ExpiryDates ExpiryDates1, ExpiryDates ExpiryDates2) { //Assume they are the same value and the look for differences bool result = true; if (ExpiryDates1.PSL_ExpiryDate.HasValue != ExpiryDates2.PSL_ExpiryDate.HasValue) { result = false; } if (ExpiryDates1.MNL_ExpiryDate.HasValue != ExpiryDates2.MNL_ExpiryDate.HasValue) { result = false; } if ((ExpiryDates1.MNL_ExpiryDate != null) && (ExpiryDates2.MNL_ExpiryDate != null)) if (ExpiryDates1.MNL_ExpiryDate.Value != ExpiryDates2.MNL_ExpiryDate.Value) result = false; if ((ExpiryDates1.PSL_ExpiryDate != null) && (ExpiryDates2.PSL_ExpiryDate != null)) if (ExpiryDates1.PSL_ExpiryDate.Value != ExpiryDates2.PSL_ExpiryDate.Value) result = false; return result; } The ExpiryDate class looks like this: public class ExpiryDates { public DateTime? MNL_ExpiryDate { get; set; } public DateTime? PSL_ExpiryDate { get; set; } } Answer: Nullable<T> already does the hard work for you. Just override Equals on your ExpiryDates class (and GetHashCode): public override bool Equals(object obj) { var otherDates = obj as ExpiryDates; return otherDates != null && MNL_ExpiryDate.Equals(otherDates.MNL_ExpiryDate) && PSL_ExpiryDate.Equals(otherDates.PSL_ExpiryDate); } As pointed out by mjolka in a comment, it would be advisable to implement IEquatable<T> (IEquatable<ExpiryDates>) as well. See the documentation for why it's a useful thing to do. Further edit... As the documentation for IEquatable<T> says, you should additionally override == and !=, e.g. public static bool operator ==(ExpiryDates left, ExpiryDates right) { if (object.ReferenceEquals(left, null)) { return object.ReferenceEquals(right, null); } return left.Equals(right); }
{ "domain": "codereview.stackexchange", "id": 14434, "tags": "c#, null" }
Motion of block on wedge
Question: There is some confusion to me in the case of "motion of a block on a frictionless wedge". Below is a simple diagram. Let us consider a situation as above in which there is a block of mass $m$ moving with velocity $v$ in the positive $x$ direction on a frictionless wedge as given above. Since the block doesn't initially have any velocity in the $y$ direction, it moves horizontally as long as it is on the table. But at the instant it encounters the wedge it gains some velocity in the upward direction and starts to move upward (i.e. in the positive $y$ direction) with velocity decreasing with time; it reaches some height and after that comes back down the wedge. What gives the block that upward velocity (i.e. which force(s) change the velocity in the $y$ direction), because of which it goes to some height and then comes back? What is the velocity of the block (the net velocity, which is along the wedge) at the instant it just starts to slide on the wedge? Answer: When the block collides with the wedge, there will be a normal impulsive force from the wedge, in a direction perpendicular to the surface of the wedge. If you notice, this normal impulsive force has a component in the vertical direction. This is what provides the block with the vertical velocity. As for your second question, the magnitude of the velocity will not change. This is because work has not been done by any of the forces (normal force and gravity) during the time the block starts sliding on the wedge. The normal force doesn't do work because its direction is perpendicular to the displacement. Gravity won't do work because there is no significant displacement in the vertical direction, in the small time it takes to just start sliding on the wedge. So the speed of the block at the base of the wedge will be the same as it was on the horizontal plane. Of course, it will decrease as the block goes up on the wedge. Note: this is assuming the block is particle like.
If it has finite dimensions, there will be rotational motion when it reaches the wedge.
{ "domain": "physics.stackexchange", "id": 32790, "tags": "homework-and-exercises, newtonian-mechanics, forces" }
A heuristic for finding a maximum disjoint set
Question: Background I need to find a largest set of non-overlapping axis-parallel squares, out of a given collection of candidate squares. This problem is NP-complete. Many papers suggest approximation algorithms (see Maximum Disjoint Set in Wikipedia), but I need an exact algorithm. My current solution uses the following divide-and-conquer strategy: Calculate all horizontal and vertical lines that pass through corners of the candidate squares. Each such line separates the candidates into three groups: candidates that are entirely at one side of the line, candidates that are entirely at the other side of the line, and candidates that are intersected by the line. Now there are two cases: Easy Case: There is a separator line $L$ that does not intersect any candidate square. Then, recursively calculate the maximum-disjoint-set among the squares on one side of $L$, recursively calculate the maximum-disjoint-set among the squares on the other side of $L$, and return the union of these two sets. The separator line guarantees that the union is still a disjoint set. Hard Case: All separator lines intersect one or more candidate squares. Select one of the separator lines, $L$; suppose that $L$ intersects $k$ squares. Calculate all $2^k$ subsets of these intersected squares. For each subset $X$ that is in itself a disjoint set, calculate the maximum-disjoint-set recursively as in the Easy Case, under the assumption that $X$ is in the set. I.e., recursively calculate the maximum-disjoint-set among the squares on one side of $L$ that do not intersect $X$, recursively calculate the maximum-disjoint-set among the squares on the other side of $L$ that do not intersect $X$, and calculate the union of these two sets with $X$. Out of all $2^k$ unions, return the largest one. Question My question is: What is the best way to select the separator line $L$? 
There are two conflicting considerations: On one hand, we want $L$ to intersect as few squares as possible, so that the power set is not too large. On the other hand, we want $L$ to separate the candidate squares into subsets of balanced size, preferably equal size, so that the recursion ends as fast as possible. What is the best way to balance these conflicting considerations? EDIT: Additional details My current heuristic is to pick the separator line that intersects the least number of squares. This heuristic allows the algorithm to process input sets with up to $n=30$ candidates, in several seconds. The optimal solution in these cases has about 10 squares. In general, the number of squares in the optimal solution is near $2\cdot\sqrt{n}$. When the input grows beyond 30 candidates, the running time becomes much slower (several minutes and more). My goal is to find a heuristic that will allow me to process larger sets of candidates. Answer: Rather than your approach, I suggest you formulate this as an integer linear program and feed it to an off-the-shelf ILP solver. Alternatively, formulate it as a SAT problem and feed it to a SAT solver: you'll probably need to take the decision problem version, where you ask whether there exists a subset of $k$ non-overlapping squares, and then use binary search on $k$. Those would be the first approaches I would try, personally. If you definitely want to try your approach based upon a "separating line", then I think the best way to answer your question is going to be to pick a representative set of problem instances, and try some different heuristics on them to see which seems to work best.
My intuition suggests that the best way to select the separator line may be to pick the line $L$ that intersects as few squares as possible, without worrying about how balanced the division is (though if there is a tie among multiple lines that each intersect the same number of squares, you could always use "how balanced the division is" as a tie-breaker). The reason is that you are getting an exponential multiplicative increase in the running time each time you enumerate all subsets of the squares that intersect the line $L$. I think your prime consideration is going to be keeping that blowup down. But that's just my intuition, and my intuition might be wrong. I think you need to do the experiment to find out empirically what works best. If you do apply your separating line approach, you might consider using a branch-and-bound approach. Keep track of the best solution you've found so far (i.e., the largest set of non-overlapping squares you've been able to find so far); say that it is of size $s$ at any point in time. Now anytime your search tree enters a subtree where you can prove that all solutions below the subtree will have size $\le s$, there is no need to explore that subtree. For instance, if you have a line $L$ and a subset $X$ where you know that the size of $X$ plus the size of the largest subcollection of non-intersecting squares above $L$ plus the total number of squares below $L$ is $\le s$, then there is no need to recursively compute the largest subcollection of non-intersecting squares below $L$, which saves you one recursive call. But, if you use an off-the-shelf ILP solver, it will already implement this sort of branch-and-bound heuristic for you -- hence my advice to start by formulating this as an ILP problem and applying an off-the-shelf ILP solver. Finally, the following paper apparently describes an $O(2^{\sqrt{n}})$ time algorithm to compute the exact solution to your problem. 
This is an improvement over the obvious algorithm that enumerates all possible subsets of squares, which takes $O(2^n)$ time. An application of the planar separator theorem to counting problems. S.S. Ravi and H.B. Hunt III. Information Processing Letters, Volume 25, Issue 5, 10 July 1987, Pages 317–321. http://www.sciencedirect.com/science/article/pii/0020019087902067
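Before experimenting with separator-line heuristics, it can help to have a tiny brute-force reference implementation to validate results against on small instances. A minimal sketch (the `(x, y, side)` square representation is just an illustrative choice, not from the question):

```python
from itertools import combinations

def overlaps(s, t):
    # A square is (x, y, side) with its lower-left corner at (x, y).
    # Open-interval test: squares that merely touch edges do not overlap.
    (x1, y1, a), (x2, y2, b) = s, t
    return x1 < x2 + b and x2 < x1 + a and y1 < y2 + b and y2 < y1 + a

def max_disjoint(squares):
    # Exhaustive search from the largest subset size downwards; exponential,
    # but fine for sanity-checking heuristics on tiny inputs.
    for k in range(len(squares), 0, -1):
        for combo in combinations(squares, k):
            if all(not overlaps(s, t) for s, t in combinations(combo, 2)):
                return list(combo)
    return []
```

Any candidate heuristic or ILP formulation can then be checked against `max_disjoint` on random instances of 10-15 squares.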
{ "domain": "cs.stackexchange", "id": 2327, "tags": "algorithms, computational-geometry, efficiency, heuristics" }
Why can't Carbon Dioxide be liquid at 1 atm?
Question: Wikipedia says that carbon dioxide cannot be liquid under atmospheric pressure. I thought the phase of a substance was related to both pressure and temperature. If that is so, then should you not be able to set the temperature conditions such that you could have a cup of liquid carbon dioxide? Answer: The other answer already shows that this is a solid fact. Carbon dioxide is not (actually cannot stay) liquid at 100 kPa, period. Why that is the case is much less obvious. I cannot give you a clear, rigorous, quantitative explanation. But the general reasoning about this and similar observations is that once the crystal structure has broken up under increasing temperature, there is very little that can keep the molecules together. The linear CO2 molecules are nonpolar from a distance, but each bond is quite polar. They have to align carefully to have enough attraction, in the solid. Now if the crystal structure breaks up, all the outer oxygen atoms see are other oxygen atoms, with a negative partial charge. So the condensed phase just blows up. Only with a significant outside pressure can the molecules in the liquid jump from one attractive position to another without boiling away. Naphthalene is a similar case: The molecule is rigid and has a high aspect ratio. It needs a lot of energy to be able to move in the condensed phase at all, but once it is able to turn a bit, it loses 90% of its contact area (for vdW bonding) with its neighbours, and takes off. Liquid water is kept together by hydrogen bonds, which can break and reform extremely fast. Nitrogen molecules don't care how their neighbours are aligned, and neither do methane or ethane. Alkanes are additionally flexible. They all can move in a liquid phase and still keep up enough interaction.
{ "domain": "chemistry.stackexchange", "id": 13335, "tags": "phase" }
Numeric double integration
Question: I've made a simple program for numerically approximating a double integral, which accepts that the bounds of the inner integral are functions: import numpy as np import time def double_integral(func, limits, res=1000): t = time.clock() t1 = time.clock() t2 = time.clock() s = 0 a, b = limits[0], limits[1] outer_values = np.linspace(a, b, res) c_is_func = callable(limits[2]) d_is_func = callable(limits[3]) for y in outer_values: if c_is_func: c = limits[2](y) else: c = limits[2] if d_is_func: d = limits[3](y) else: d = limits[3] dA = ((b - a) / res) * ((d - c) / res) inner_values = np.linspace(c, d, res) for x in inner_values: t2 = time.clock() - t2 s += func(x, y) * dA t1 = time.clock() - t1 t = time.clock() - t return s, t, t1 / res, t2 / res**2 This is, however, terribly slow. When res=1000, such that the integral is a sum of a million parts, it takes about 5 seconds to run, but the answer is only correct to about the 3rd decimal in my experience. Is there any way to speed this up? The code I am running to check the integral is def f(x, y): if (4 - y**2 - x**2) < 0: return 0 # This is to avoid taking the root of negative numbers return np.sqrt(4 - y**2 - x**2) def c(y): return np.sqrt(2 * y - y**2) def d(y): return np.sqrt(4 - y**2) # b d # S S f(x,y) dx dy # a c a, b, = 0, 2 print(double_integral(f, [a, b, c, d])) The integral is equal to 16/9 Answer: If you want to use numpy, use numpy properly. Instead of for x in inner_values: s += func(x, y) * dA use the more idiomatic, and much faster s += dA * np.sum(func(inner_values, y)) Note: this requires rewriting f as return np.sqrt(np.maximum(0, 4 - y**2 - x**2)) so it can accept an array as an input. This does not reduce accuracy, but brings time down to a much more acceptable .04 seconds for a 100x improvement. The takeaway here is Numpy is not magic. It provides quick vectorization.
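Applying the answer's vectorization to the whole routine (and dropping the timing scaffolding) gives a compact version of the integrator; a sketch along those lines, with illustrative names:

```python
import numpy as np

def double_integral_vec(func, a, b, c, d, res=1000):
    # Vectorize the inner integral: one np.sum per outer sample point.
    # c and d may be constants or functions of y, as in the original.
    total = 0.0
    for y in np.linspace(a, b, res):
        lo = c(y) if callable(c) else c
        hi = d(y) if callable(d) else d
        dA = ((b - a) / res) * ((hi - lo) / res)
        total += dA * np.sum(func(np.linspace(lo, hi, res), y))
    return total

# The test integral from the question; exact value is 16/9.
f = lambda x, y: np.sqrt(np.maximum(0, 4 - y**2 - x**2))
approx = double_integral_vec(f, 0, 2,
                             lambda y: np.sqrt(2 * y - y**2),
                             lambda y: np.sqrt(4 - y**2))
```

With res=1000 this evaluates the same million sample points as the original, but with only 1000 Python-level iterations.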
{ "domain": "codereview.stackexchange", "id": 29583, "tags": "python, performance, numpy, numerical-methods" }
Censor the middle of words of length N from a text file
Question: LENGTHS_TO_CENSOR = {4, 5} CENSOR_CHAR = '*' CENSOR_EXT = "-censored" def replace_inner(word, char): if len(word) < 3: return word return word[0] + char * len(word[1:-1]) + word[-1] def create_censor_file(filename): output_file = open(filename + CENSOR_EXT, "w+") with open(filename) as source: for line in source: idx = 0 while idx < len(line): # If the character isn't a letter, write it to the output file. if not line[idx].isalpha(): output_file.write(line[idx]) idx += 1 else: word = "" while idx < len(line) and line[idx].isalpha(): word += line[idx] idx += 1 if len(word) in LENGTHS_TO_CENSOR: word = replace_inner(word, CENSOR_CHAR) output_file.write(word) output_file.close() def main(): filename = input("File to be censored: ") create_censor_file(filename) if __name__ == "__main__": main() I was assigned a task to censor words that are length n in a file. This file can potentially contain punctuation and numbers. I originally tackled the problem by splitting the line into a list of words (using .split(' ')) and checking the length to determine if the program should censor the word or not. This failed for inputs such as: does not work.for.this.input or.this The output file must be exactly like the input but with words of length in LENGTHS_TO_CENSOR censored with CENSOR_CHAR. I decided to abandon trying to make it Pythonic and ended up with this result. I want to know if there is a way to take this method and make it more Pythonic. Answer: create_censor_file should really be called create_censored_file. I'd rename source to source_file for consistency and clarity. You should use with for both files. Why not use just w instead of w+? This is probably one of the few things that regexes are actually useful for. 
You can just use re.sub(r'(?<=\b\w)\w{' + ','.join(map(lambda x: str(x-2), LENGTHS_TO_CENSOR)) + r'}(?=\w\b)', lambda match: CENSOR_CHAR * len(match.group(0)), source.read()) A couple other things: Good job with the main function and if __name__ == '__main__' check! I have not yet tested this code. Result: import re LENGTHS_TO_CENSOR = {4, 5} CENSOR_CHAR = '*' CENSOR_EXT = "-censored" def create_censor_file(filename): with open(filename + CENSOR_EXT, "w") as output_file, open(filename) as source_file: output_file.write( re.sub( r'(?<=\b\w)\w{' + ','.join(map(lambda x: str(x-2), LENGTHS_TO_CENSOR)) + r'}(?=\w\b)', lambda match: CENSOR_CHAR * len(match.group(0)), source_file.read())) def main(): filename = input("File to be censored: ") create_censor_file(filename) if __name__ == "__main__": main()
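One caveat with the regex above: a `{m,n}` quantifier expresses a range, not a set, so the joined lengths only behave correctly when LENGTHS_TO_CENSOR happens to be contiguous (as {4, 5} is). Checking the length inside a replacement callback sidesteps both that and the lookaround bookkeeping; a sketch:

```python
import re

LENGTHS_TO_CENSOR = {4, 5}
CENSOR_CHAR = '*'

def censor(text):
    # [A-Za-z]+ mirrors the isalpha() loop in the original question.
    def repl(match):
        word = match.group(0)
        if len(word) in LENGTHS_TO_CENSOR:
            return word[0] + CENSOR_CHAR * (len(word) - 2) + word[-1]
        return word
    return re.sub(r'[A-Za-z]+', repl, text)
```

This works for arbitrary (non-contiguous) length sets such as {4, 7}, since the set membership test is ordinary Python rather than regex syntax.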
{ "domain": "codereview.stackexchange", "id": 32000, "tags": "python, file, formatting" }
Two point light sources a certain distance apart: where should I put a convex lens so that the images of both sources are formed at the same place?
Question: The question is as follows: Two point light sources are 24 cm apart. Where should a convex lens of focal length 9 cm be put in between them from one source so that the images of both the sources are formed at the same place? The answer I found is How do we know that $$\frac{1}{f} = \frac{1}{-y} + \frac{1}{-x}$$? I know how the convex lens and the equation work but I do not know why the object and image distances are negative for the first source $S_1$. Answer: It is important that the lens formula be applied with the correct sign convention. One form of the lens formula is $$\frac1o+\frac1i =\frac1f$$ where $o$ is the object distance from the centre of the lens; $i$ is the image distance from the centre of the lens; $f$ is the focal length of the lens. Real images and objects have positive values of $i$ and $o$; virtual images and objects have negative values of $i$ and $o$; a converging lens has a positive focal length and a diverging lens has a negative focal length. All of these conventions must be applied correctly for the formula to work. It may be that the solution that troubles you uses a different formula and sign convention. I have quoted the Gaussian convention and formula; the other common one is the Cartesian convention and formula. I also believe that the expression $(-24-x)$ is a typo; it does not appear in the following expressions. Now, for this particular problem: You are told that two objects, 24 cm apart, each form an image at the same spot. If the objects were on the same side of the lens, their images could not be at the same spot. Therefore, the two objects are 24 cm apart, with the lens somewhere in between, and the image on the same side of the lens as its object must be a virtual image. The basic diagram as given in the solution reflects this. 
Note also that $x$, $y$ and $24-x$ are all positive numbers, and $f=9$ Consider the object $S_1$: It has a converging lens, a real object with a positive object distance and a virtual image with a negative image distance. Therefore the lens formula for $S_1$ becomes:$$\frac{1}{x} + \frac{1}{-y} = \frac19$$ Consider now Object $S_2$: It has the same converging lens, a real object with a positive object distance, and a real image, with a positive image distance. Therefore the lens formula for $S_2$ becomes:$$\frac{1}{24-x} + \frac{1}{y} = \frac19$$ If we add these two equations, the $y$-term disappears, and the resulting expression in $x$ can be re-arranged to give the same quadratic as found in the quoted solution.
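Adding the two equations eliminates $y$ and leaves $\frac{1}{x}+\frac{1}{24-x}=\frac{2}{9}$, i.e. $x^2-24x+108=0$. A quick numerical check of the roots:

```python
import math

# Roots of x^2 - 24x + 108 = 0, obtained by adding the two lens equations.
disc = 24**2 - 4 * 108
roots = sorted(((24 - math.sqrt(disc)) / 2, (24 + math.sqrt(disc)) / 2))

# Each root should satisfy the combined equation 1/x + 1/(24 - x) = 2/9.
residuals = [abs(1 / x + 1 / (24 - x) - 2 / 9) for x in roots]
```

So the lens can sit 6 cm from one source, which by symmetry is the same placement as 18 cm from the other.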
{ "domain": "physics.stackexchange", "id": 86782, "tags": "homework-and-exercises, optics" }
Why is $\Sigma^*$ concatenated with some language regular?
Question: Let $\Sigma=\{a,b\}$. Why is the concatenation of any language with $\Sigma^*$ always regular? I found a problem where $(a+b)^*$ concatenated with $a^nb^n$ was regular. Answer: The claim is false. For example, if you concatenate $\{ a^{n^2} b : n \geq 0\}$ and $\Sigma^*$ then you get a non-regular language (exercise). In contrast, the concatenation of $\{ a^n b^n : n \geq 0 \}$ and $(a+b)^*$ is regular, since it equals the latter; this is the case whenever the former language contains the empty word. Finally, the concatenation of $\{ a^n b^n : n \geq 1 \}$ and $(a+b)^*$ is again not regular, since intersecting it with $a^*b^*$ gives the non-regular language $ \{ a^n b^m : m \geq n \geq 1 \} $.
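The middle claim — that $\{a^nb^n : n \ge 0\}\cdot(a+b)^* = (a+b)^*$ precisely because the left language contains the empty word — is easy to spot-check exhaustively over short strings; a small sketch:

```python
from itertools import product

def in_L(w, n_min=0):
    # L = { a^n b^n : n >= n_min }
    n = len(w) // 2
    return len(w) % 2 == 0 and n >= n_min and w == 'a' * n + 'b' * n

def in_concat(w, n_min=0):
    # w is in L . Sigma*  iff  some prefix of w lies in L.
    return any(in_L(w[:i], n_min) for i in range(len(w) + 1))

# All strings over {a, b} of length < 8.
strings = [''.join(p) for k in range(8) for p in product('ab', repeat=k)]
```

With $n \ge 0$ every string has the empty prefix in $L$, so the concatenation is all of $\Sigma^*$; with $n \ge 1$ strings like "ba" have no prefix in $L$, matching the answer's final point.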
{ "domain": "cs.stackexchange", "id": 9600, "tags": "formal-languages, regular-languages" }
Python memoization decorator
Question: I have spent all night whipping up this recipe. It's my first Python decorator. I feel like I have a full understanding of how decorators work now and I think I came up with a good object-oriented algorithm to automatically provide memoization. Please let me know what you think. I made a few quick changes after pasting it in here so please let me know if my changes broke something (don't have an interpreter on hand). """This provides a way of automatically memoizing a function. Using this eliminates the need for extra code. Below is an example of how it would be used on a recursive Fibonacci sequence function: def fib(n): if n in (0, 1): return n return fib(n - 1) + fib(n - 2) fib = memoize(fib) That's all there is to it. That is nearly identical to the following: _memos = {} def fib(n): if n in _memos: return _memos[n] if n in (0, 1): _memos[n] = n return _memos[n] _memos[n] = fib(n - 1) + fib(n - 2) return _memos[n] The above is much more difficult to read than the first method. To make things even simpler, one can use the memoize function as a decorator like so: @memoize def fib(n): if n in (0, 1): return n return fib(n - 1) + fib(n - 2) Both the first and third solutions are completely identical. However, the latter is recommended due to its elegance. Also, note that functions using keyword arguments will purposely not work. This is because this memoization algorithm does not store keyword arguments with the memos as it HEAVILY increases the CPU load. If you still want this functionality, please implement it at your own risk.""" class memoize: """Gives the class its core functionality.""" def __call__(self, *args): if args not in self._memos: self._memos[args] = self._function(*args) return self._memos[args] def __init__(self, function): self._memos = {} self._function = function # Please don't ask me to implement a get_memo(*args) function. 
"""Indicates the existence of a particular memo given specific arguments.""" def has_memo(self, *args): return args in self._memos """Returns a dictionary of all the memos.""" @property def memos(self): return self._memos.copy() """Remove a particular memo given specific arguments. This is particularly useful if the particular memo is no longer correct.""" def remove_memo(self, *args): del self._memos[args] """Removes all memos. This is particularly useful if something that affects the output has changed.""" def remove_memos(self): self._memos.clear() """Set a particular memo. This is particularly useful to eliminate double-checking of base cases. Beware, think twice before using this.""" def set_memo(self, args, value): self._memos[args] = value """Set multiple memos. This is particularly useful to eliminate double-checking of base cases. Beware, think twice before using this.""" def set_memos(self, map_of_memos): self._memos.update(map_of_memos) Answer: The first thing that came to mind looking at your code was: style. Is there a particular reason for placing your doc-strings above the functions instead of below them? The way you're doing it will show None in the __doc__ attribute. IMHO some of those strings are not even doc-strings: def __call__(self, *args) """Gives the class its core functionality.""" # ... It doesn't really say much. Keep also in mind that comments are for the programmers, docstrings for the users. PEP8 tells you all about this style guide, and PEP257 is specifically about Docstring Conventions. Also, I didn't like very much that you put the __call__ method before __init__, but I don't know if that is just me, or whether there's some convention about that. That cleared, I'm failing to see the point of all the methods that you've written besides __init__ and __call__. What's their use? Where is your code using them? If you need something write the code for it, otherwise don't. 
Or you'll be writing something that you don't need and that will probably not match your hypothetical requirements of tomorrow. I get that you were probably doing an exercise to learn about decorators, but when implementing something, don't ever write code just for the heck of it. Let's take a deeper look at your non-doc strings, like this one: """Removes all memos. This is particularly useful if something that affects the output has changed.""" def remove_memos(self): self._memos.clear() That should probably just be: def remove_memos(self): """Removes all memos.""" self._memos.clear() And nothing more. What on earth does "This is particularly useful if something that affects the output has changed." even mean? "something that affects the output has changed"? It's all very strange and confusing. Also, how will your decorator know that "something that affects the output has changed"? There's nothing in here: def __call__(self, *args): if args not in self._memos: self._memos[args] = self._function(*args) return self._memos[args] that does that or that uses any of the other methods. Also, you don't seem to be able to provide a scenario where they might be used (and even if you could there's still no way to use them). My point is that all those additional methods are useless; writing them was perhaps worthwhile if you learned some Python along the way, but that is as far as their usefulness goes.
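To illustrate the review's point that only __init__ and __call__ carry weight, here is a stripped-down closure-based version keeping just that core (in modern Python one would usually reach for functools.lru_cache instead):

```python
import functools

def memoize(function):
    memos = {}
    @functools.wraps(function)   # preserves __name__ and __doc__ on the wrapper
    def wrapper(*args):
        if args not in memos:
            memos[args] = function(*args)
        return memos[args]
    return wrapper

@memoize
def fib(n):
    return n if n in (0, 1) else fib(n - 1) + fib(n - 2)
```

Unlike the class-based version, functools.wraps keeps the decorated function's metadata intact, which also addresses the __doc__ complaint above.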
{ "domain": "codereview.stackexchange", "id": 1414, "tags": "python, python-3.x, memoization" }
Words in computer's memory
Question: I don't know much about memory. Here are some lines from CLRS: The words in a computer memory are typically addressed by integers from 0 to $M - 1$, where $M$ is a suitably large integer. In many programming languages, an object occupies a contiguous set of locations in the computer memory. A pointer is simply the address of the first memory location of the object, and we can address other memory locations within the object by adding an offset to the pointer. Now, what does 'word' mean here? Does it mean a finite sequence of characters from the keyboard? If yes then I know that we can set $M = 256$ so that ASCII would cover all these characters but then the problem is that $M$ wouldn't be that large as it should be and also it should have been said 'letters in a computer memory', not 'words in a computer memory'. Answer: No, it refers to a word in a computing sense. Here, a word is just a unit of data, whatever is natural for a particular processor. For instance, an x86-64 processor, which I'm currently using, has a word size of 64 bits. So a single word in this case consists of 64 bits.
{ "domain": "cs.stackexchange", "id": 18878, "tags": "memory-access, memory-allocation" }
How is bromothymol blue synthesised?
Question: In a classical reaction, phenolphthalein is synthesised from phthalic anhydride and phenol with an acid catalyst. When one uses substituted phenols, one obtains similar compounds (e.g. cresolphthalein, thymolphthalein). However, some acid-base indicators contain a sulfo- group instead of a carboxy- group of phthaleins. Does that mean that they are synthesised from phenols and the anhydride of o-sulfobenzoic acid? Phthalic anhydride is fairly cheap, but how is o-sulfobenzoic anhydride prepared? Answer: Nowadays, sulfonephthaleins can be prepared by reaction of readily available saccharin and the desired phenol compound (Ref.1 & Ref.2). In this method, the active reagent, sulfobenzoic anhydride, is prepared in situ as depicted in the following diagram: This method describes the preparation of phenolsulfonephthalein (Phenol Red). If you use thymol instead of phenol, you will get thymolsulfonephthalein, which can subsequently be brominated by bromine/acetic acid to obtain dibromothymolsulfonephthalein (Bromothymol Blue). This procedure is described in Ref.2. References: V. H. Tillu, D. K. Dumbre, H. B. Borate, R. D. Wakharkar, V. R. Choudhary, "Solvent-Free One-Pot Synthesis of Sulfonephthaleins from Saccharin and Phenols," Synthetic Communications 2012, 42(8), 1101-1107 (https://doi.org/10.1080/00397911.2010.535946). B. S. Rao, J. B. Puschett, B. M. Karandikar, K. Matyjaszewski, "Synthesis of Functional Bromothymol Blue Dyes for Surface Attachment to Optical Fibers," Dyes and Pigments 1991, 16(1), 27-34 (https://doi.org/10.1016/0143-7208(91)87018-I).
{ "domain": "chemistry.stackexchange", "id": 13166, "tags": "organic-chemistry, synthesis" }
What is the maximum number of indices one can create on a table with N columns?
Question: Say, I have a database table with $N$ columns. What is the (theoretical) maximum number of indices I can create on that table? For $N = 1,2,3$ it's easy enough to calculate the answer $(1, 4, 15)$, but is there any formula? Also, is there a "name" for this number? Answer: I assume you mean the following: given $N$ columns, there are $N$ single columns, giving $N$ different indices $N(N-1)/2$ pairs of columns, and 2 ways to combine each pair, giving $N(N-1)$ different indices $\frac{N(N-1)(N-2)}{2 \cdot 3}$ triples of columns, and $3 \cdot 2$ ways to combine each triple, giving $N(N-1)(N-2)$ different indices and so on. This number doesn't have a (well-known) name, but the sequence does have its own OEIS entry: Eighteenth- and nineteenth-century combinatorialists call this the number of (nonnull) "variations" of n distinct objects, namely the number of permutations of nonempty subsets of {1,...,n}. Various formulas are given to compute it, including a recurrence relation $a_n = n(a_{n-1} + 1)$ and the rather surprising $a_n = \lfloor{e \cdot n! - 1}\rfloor$. One could argue that you can create more indices by taking into account that columns can be used in ascending and descending mode. For indices with more than one column, this even has an influence on which records are 'near' each other.
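The recurrence and the closed form quoted from OEIS can be cross-checked against each other, and against the values $1, 4, 15$ computed in the question, for small $N$:

```python
import math

def variations_recurrence(n):
    # a_n = n * (a_{n-1} + 1), with a_0 = 0
    a = 0
    for k in range(1, n + 1):
        a = k * (a + 1)
    return a

def variations_closed(n):
    # a_n = floor(e * n! - 1)
    return math.floor(math.e * math.factorial(n) - 1)
```

The floating-point closed form is only trustworthy while e * n! fits comfortably in a double; the recurrence is exact integer arithmetic for any n.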
{ "domain": "cs.stackexchange", "id": 16420, "tags": "databases, permutations" }
What do we mean by magnetic field energy? Does the magnetic field energy in a current-carrying wire come from the battery the wire is connected to?
Question: Let me take an example to elaborate my question. Suppose we have a simple circuit with a battery (E) and a resistance (R). Current will be flowing in the circuit, I = E/R. Now, we know that if there is a current in a wire, then due to this current, a magnetic field will be present around it, and this magnetic field will also carry some magnetic field energy. Now the question is: where does this magnetic field energy come from? Does it come from the battery? If that is the case then why do we say that the work done by the battery is equal to the heat dissipated in the resistance? Why do we not say that the work done by the battery is equal to the heat dissipated in the resistance + the magnetic field energy stored in the vicinity of the wire? Answer: A steady magnetic field represents a constant store of energy so the battery does not need to supply any energy to maintain the constant magnetic field. To estimate the energy required to set up the magnetic field assume that a loop of wire of diameter $30\,\rm cm$ represents an electrical circuit. Such a loop has a self inductance of approximately $10^{-6}\,\rm H$. If a current of one amp is passing through the loop then the energy stored is $\frac 12 LI^2\approx 5\times 10^{-7}\,\rm J$. This energy comes from the battery as the magnetic field increases from zero and then once the magnetic field is created the battery does not have to supply any more energy to maintain this magnetic field. However ohmic heating requires a continuous supply of energy from the battery and if the current is one amp with the resistance of the loop being one ohm then the battery is supplying energy at the rate of one joule every second. That stored energy in the magnetic field is returned to the circuit when the circuit is broken and the current drops to zero.
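The two energy scales in the answer are worth putting side by side numerically, using the answer's assumed values for the loop:

```python
L = 1e-6   # self-inductance of the ~30 cm loop, in henries
I = 1.0    # steady current, in amperes
R = 1.0    # loop resistance, in ohms

stored_energy = 0.5 * L * I**2        # one-off cost of establishing the field, in joules
ohmic_power = I**2 * R                # continuous dissipation, in joules per second
ratio = ohmic_power / stored_energy   # heating delivers the field's worth of energy ~2e6 times per second
```

The disparity is why the one-time field setup cost is normally ignored in the steady-state energy balance of the circuit.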
{ "domain": "physics.stackexchange", "id": 51551, "tags": "electromagnetism, magnetic-fields, classical-electrodynamics, electromagnetic-induction" }
rviz over remote dns
Question: I'm remote to my master_uri; i.e. http://somedns:11311 I can listen and publish to topics from the master, but I'm unable to list topics in Rviz or Gazebo. I can get it to work over my local network by exporting the local IP of the master to ROS_IP, but I can't resolve the DNS server. Any suggestions? Thanks Originally posted by cerebraldad on ROS Answers with karma: 45 on 2017-03-06 Post score: 0 Answer: You probably don't have bidirectional communications. See http://wiki.ros.org/ROS/NetworkSetup for some tips on making sure you do have bidirectional comms. Originally posted by William with karma: 17335 on 2017-03-06 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by cerebraldad on 2017-03-06: i read through all of the above documentation with no luck. I am outside my local network Comment by William on 2017-03-06: What does no luck mean? Did you try the tests with ncat? did they succeed or fail? Comment by cerebraldad on 2017-03-07: nowhere in the above page does it say anything about nmap or ncat. I dont know how to run it across the dns server. Im trying to do some testing while at work another state over. Comment by cerebraldad on 2017-03-07: Ncat: Connection timed out. when i enter the master uri with or without the port. Comment by William on 2017-03-07: Sorry, I meant netcat. The likely issue is that you're cross a NAT and the ports are not open for the various ROS nodes to connect to one another. Most people use a VPN to avoid this issue of port forwarding with ROS. Comment by cerebraldad on 2017-03-08: tes netcat or nc is not communicating. on the robot I typed nc -l 1234 and on the workstation i typed nc somedns 1234 and I was unable to send a message to either computer. I even tried a few ports using -p. Could i have further complicated things by using virtualbox running ubuntu? Comment by cerebraldad on 2017-03-08: I can see the topics listed just cant gather any of the info explicitly.
{ "domain": "robotics.stackexchange", "id": 27213, "tags": "ros, rviz, ros-master-uri" }
80-20 or 80-10-10 for training machine learning models?
Question: I have a very basic question. 1) When is it recommended to hold out part of the data for validation, and when is it unnecessary? For example, when is it better to have an 80% training, 10% validation and 10% testing split, and when is it enough to have a simple 80% training and 20% testing split? 2) Also, can K-fold cross-validation be combined with the simple (training-testing) split? Answer: 1) In the 80-10-10 scheme, 80% is for training, 10% for validation and 10% for testing. The validation set is needed to search for the optimal hyperparameters. For models with no hyperparameters, a validation set does little good (although it is still useful for deciding when to stop training via early stopping). In that situation, one might just keep 80% as the training set and 20% as the test set. 2) Yes, K-fold CV can be used with the simple split: run cross-validation on the training portion and use the held-out test set only for the final evaluation.
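A minimal sketch of both schemes, using only the standard library. The helper names are hypothetical; a real project would typically reach for `sklearn.model_selection.train_test_split` and `KFold` instead.

```python
import random

def split_80_10_10(data, seed=0):
    # Shuffle, then carve off 80% train, 10% validation, 10% test.
    data = list(data)
    random.Random(seed).shuffle(data)
    n = len(data)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]
    return train, val, test

def k_fold_indices(n, k):
    # Yield (train_idx, test_idx) pairs for k-fold cross-validation.
    # Each index lands in exactly one test fold; the last fold absorbs
    # any remainder when n is not divisible by k.
    idx = list(range(n))
    fold_size = n // k
    for i in range(k):
        if i < k - 1:
            test_idx = idx[i * fold_size:(i + 1) * fold_size]
        else:
            test_idx = idx[i * fold_size:]
        test_set = set(test_idx)
        train_idx = [j for j in idx if j not in test_set]
        yield train_idx, test_idx
```

Combining the two as the answer suggests: take the `train` portion of an 80-20 split, run `k_fold_indices` over it to pick hyperparameters, then evaluate once on `test`.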
{ "domain": "datascience.stackexchange", "id": 7176, "tags": "machine-learning, cross-validation, training" }
How to make a c++ node Listen and Publish
Question: I feel like this should be a simple problem. The listening callback function is working and it is printing in terminal but the publishing doesn't seem to be working. The node shows up in rostopic list but it does not echo anything. The script is getting an observation from an APPL node and gets an action back. I then want to pass the action to the action node. Ideally, I want to get something like this working as the published attribute:

void sendAction(int action)
{
    printf("SENDING ACTION\n");
    ros::NodeHandle nh;
    ros::Publisher vel_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);
    geometry_msgs::Twist vel;
    vel.angular.z = 0;
    vel.linear.x = 1;
    vel_pub.publish(vel);
}

This is the whole script at the moment

using namespace std;

// Messege recieved callback
void chatterCallback(const std_msgs::String::ConstPtr& msg)
{
    // Variables
    std_msgs::String actionstr;
    std::stringstream ss;

    // Set up
    ros::NodeHandle n;

    // Publish and advertise
    ros::Publisher chatter_pubs = n.advertise<std_msgs::String>("actionstr", 1000);
    ros::Rate loop_rate(10);

    // APPL NODE SHIZ
    appl::appl_request srv;
    ros::ServiceClient client = n.serviceClient<appl::appl_request>("appl_request");

    //Send to POMDP Node
    srv.request.cmd=2;
    srv.request.obs=msg->data.c_str();
    client.call(srv);
    int action=srv.response.action;

    // Get action string
    if (action == 0) {ss << "Listen";}
    if (action == 1) {ss << "OpenRight";}
    if (action == 2) {ss << "OpenLeft";}

    // Publish String
    actionstr.data = ss.str();
    chatter_pubs.publish(actionstr);
    printf("Sent\n");
    printf("Obs: %s Action: %d - %s\n", msg->data.c_str(), action, actionstr.data.c_str());
}

/* ------------------------------------------------ */
/* ---------------------******--------------------- */
/* ---------------------*MAIN*--------------------- */
/* ---------------------******--------------------- */
/* ------------------------------------------------ */
int main(int argc, char **argv)
{
    ros::init(argc, argv, "listener");
    ros::NodeHandle n;

    // Set to Appl client
    ros::ServiceClient client = n.serviceClient<appl::appl_request>("appl_request");

    srand((unsigned)time(0));
    int tiger; //0=left 1=right
    tiger=rand()%2;

    // Reset Appl Controller.
    appl::appl_request srv;
    srv.request.cmd=1; //reset the controller first
    client.call(srv);
    int action=srv.response.action;

    ROS_INFO("I Listen");
    ros::Subscriber sub = n.subscribe("chatter", 1000, chatterCallback);
    ros::spin();

    return 0;
}

Originally posted by Alkaros on ROS Answers with karma: 103 on 2013-10-10 Post score: 0

Answer: I didn't pay much attention to your code, not really sure of what you are trying to accomplish, but I don't think it is a good idea to declare the publishers inside a subscriber callback. As you will be creating the publisher every time you get data on the subscriber, I don't know how this would behave. Try something like this:

using namespace std;

//REMEMBER! EVERY TIME YOU USE A GLOBAL VARIABLE... GOD KILLS A KITTEN!!!
ros::Publisher chatter_pubs;
ros::ServiceClient client;

// Messege recieved callback
void chatterCallback(const std_msgs::String::ConstPtr& msg)
{
    // Variables
    std_msgs::String actionstr;
    std::stringstream ss;

    // APPL NODE SHIZ
    appl::appl_request srv;

    //Send to POMDP Node
    srv.request.cmd=2;
    srv.request.obs=msg->data.c_str();
    client.call(srv);
    int action=srv.response.action;

    // Get action string
    if (action == 0) {ss << "Listen";}
    if (action == 1) {ss << "OpenRight";}
    if (action == 2) {ss << "OpenLeft";}

    // Publish String
    actionstr.data = ss.str();
    chatter_pubs.publish(actionstr);
    printf("Sent\n");
    printf("Obs: %s Action: %d - %s\n", msg->data.c_str(), action, actionstr.data.c_str());
}

/* ------------------------------------------------ */
/* ---------------------******--------------------- */
/* ---------------------*MAIN*--------------------- */
/* ---------------------******--------------------- */
/* ------------------------------------------------ */
int main(int argc, char **argv)
{
    ros::init(argc, argv, "listener");
    ros::NodeHandle n;
    chatter_pubs = n.advertise<std_msgs::String>("actionstr", 1000);

    // Set to Appl client
    client = n.serviceClient<appl::appl_request>("appl_request");

    srand((unsigned)time(0));
    int tiger; //0=left 1=right
    tiger=rand()%2;

    // Reset Appl Controller.
    appl::appl_request srv;
    srv.request.cmd=1; //reset the controller first
    client.call(srv);
    int action=srv.response.action;

    ROS_INFO("I Listen");
    ros::Subscriber sub = n.subscribe("chatter", 1000, chatterCallback);
    ros::spin();

    return 0;
}

Originally posted by Martin Peris with karma: 5625 on 2013-10-10 This answer was ACCEPTED on the original site Post score: 3

Original comments
Comment by dornhege on 2013-10-10: In short: It will behave badly. The publisher will very probably not yet be connected to another node when the publish call comes and then goes out of scope. So, no data is ever sent.
Comment by Alkaros on 2013-10-11: Thanks guys, This worked a treat. Sorry about the kittens :(
Comment by bit-pirate on 2013-10-13: Please mark your question as answered (click the check sign to the left of the answer you like).
{ "domain": "robotics.stackexchange", "id": 15828, "tags": "ros, turtlebot, teleop, node, publish" }