Confusion of measuring two quantities on a quantum system
Question: Let's say there are two observables corresponding to two operators A and B, and let's say my system is in a state Phi where, with probability 1, if I measure A I get 3 (say 3 Joules), and if I measure B I get 4 (say 4 m/s). If I measure A and then B I would get 3 Joules for the energy measurement and 4 m/s for the speed measurement; however, mathematically, I would write: BA Phi = 12 Phi. So the measurements kind of got mixed up, and I don't understand this. This question arose from problem 3.5 of Zettili's book. Answer: It seems the problem is in distinguishing measuring $A$, measuring $B$ and measuring $BA$. If you have an apparatus that measures $BA$, then you'd get $12 J m/s$, and there is no way you can "separate out" the $A$ and $B$ parts: presumably the apparatus to measure $BA$ would yield a single pulse of some height (or whatever other signal) from which you'd deduce the outcome is $12 J m/s$, and you'd have no way of knowing if this were $3\times 4$ or $2\times 6$. You could separate this out if you measured $A$, recorded the outcome, then measured $B$ and recorded the outcome. That's not quite the same as measuring $BA$, which is technically a different operator and thus would require a different setup than measuring $A$ or measuring $B$ alone. Measuring $A$ then $B$ is two measurements, whereas measuring $BA$ is a single measurement.
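A small numerical sketch (hypothetical 2x2 observables, not from Zettili) of the distinction the answer draws: on a shared eigenstate, the two separate measurements return 3 and 4 individually, while the single product operator only carries their product.

```python
import numpy as np

# Hypothetical commuting observables sharing the eigenstate phi = (1, 0):
A = np.diag([3.0, 1.0])        # measuring A on phi gives 3 with certainty
B = np.diag([4.0, 2.0])        # measuring B on phi gives 4 with certainty
phi = np.array([1.0, 0.0])

a = phi @ A @ phi              # outcome of measuring A alone: 3
b = phi @ B @ phi              # outcome of measuring B alone: 4
ba = phi @ B @ A @ phi         # eigenvalue of the single operator BA: 12

print(a, b, ba)                # 3.0 4.0 12.0 -- the 12 cannot be split back into 3 and 4
```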
{ "domain": "physics.stackexchange", "id": 55675, "tags": "quantum-mechanics, homework-and-exercises, operators, eigenvalue, quantum-measurements" }
Energy equation for an open system
Question: I teach undergraduate thermodynamics and I was quite ashamed that I couldn't explain the following to a student. I thought I'd bring it to physics.SE in the hope of providing my student a good explanation. The question was concerning the energy equation for an open system: $$ \underbrace{\frac{\mathrm{d} E}{\mathrm{d}t}}_{\begin{array}{c}\text{Rate of change}\\ \text{of total energy}\\ \text{in the system}\end{array}} = \underbrace{\delta \dot{Q}}_{\begin{array}{c}\text{Rate of}\\ \text{heat transfer}\end{array}} - \underbrace{\delta \dot{W}}_{\begin{array}{c}\text{Work}\\ \text{extracted/input}\end{array}} + \underbrace{\dot{m} \left(h_1 + \frac{V_1^2}{2} + g z_1 \right)}_\text{energy of inlet stream} - \underbrace{\dot{m} \left(h_2 + \frac{V_2^2}{2} + g z_2 \right)}_\text{energy of outlet stream}$$ The young lady pointed out that for steady-state operations, the rate of change of total energy, $dE/dt$, is zero. So why are $\delta \dot{Q}$ and $\delta \dot{W}$ not zero, as they are also rates of heat exchange and work generation/input? It is just obvious to me but I don't know how to explain this to a 17 year old. I'd appreciate it if someone could help me out on this. Answer: The key to explaining this lies in two pieces that you have correctly written in the equation but have to make the student appreciate: 1) The difference between a $d$ and a $\delta$: $d$ means a small differential change, while $\delta$ just means a small quantity ($\delta$ is used to denote a change in variational calculus, but that is different). Therefore in your equation $\delta Q/dt$ is just a small quantity per unit time (or per small change in time). 2) Steady state is when a system's properties don't change with time. Heat is not a system's property; it is a transfer of energy. This is precisely why one never writes $dQ$. The same goes for work, or matter. Now matter is a funny one: it can be both a change and a transfer.
So technically, if one is doing an atom balance, one writes \begin{align*} dN=\delta N \end{align*} or, if your system leaks a particular molecule while also producing it by a chemical reaction, one would write \begin{align*} dN_i = \delta N_{i, leak}+\delta N_{i, reaction} \end{align*} Also, energy transfers are always path dependent, since they are not a system's property but depend on the process path. This is why $\delta$ also reminds us of that path dependence. Essentially, in steady state no changes to system properties occur, but the quantity of heat or work is not zero; it is still what it is.
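A hedged numerical illustration (all numbers assumed, kinetic and potential terms neglected): a steady-state turbine where $dE/dt = 0$ even though the heat and work rates are decidedly nonzero, because the extracted power balances the enthalpy difference of the streams.

```python
# Assumed steady-state turbine figures, purely for illustration:
m_dot = 2.0                      # kg/s, mass flow rate
h_in, h_out = 3200e3, 2500e3     # J/kg, inlet/outlet specific enthalpy
Q_dot = -50e3                    # W, heat leaking out of the casing

# Steady state: dE/dt = Q_dot - W_dot + m_dot*h_in - m_dot*h_out = 0, hence
W_dot = Q_dot + m_dot * (h_in - h_out)          # power extracted
dE_dt = Q_dot - W_dot + m_dot * h_in - m_dot * h_out

print(W_dot, dE_dt)              # 1350000.0 W extracted, while dE/dt = 0.0
```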
{ "domain": "physics.stackexchange", "id": 9710, "tags": "thermodynamics, work" }
Determining Fourier Coefficients by inspection
Question: I'm beginning to learn about Fourier series/transforms. My teacher hopes that by now we should be able to examine a simple potential function and decompose it without having to actually do the transform. The potential function we are given is : $$U(\vec{r}) = -4 U_0 \mathrm{cos}(\frac{2\pi x}{a})\mathrm{cos}(\frac{2\pi y}{a})$$ with which it is possible to find another function that satisfies $$U_q = \int_V \frac{d\vec{r}}{V}e^{-i\vec{q}\cdot\vec{r}} U(\vec{r})$$ where $V$ is the volume of a unit cell (or the area, or length, depending on dimensions). It is easy to solve for $U_q$ with a table and/or trig identities. The problem is that this integration is done over the space of one unit cell and I have trouble interpreting it, as usually a Fourier Transform would be over all space. My questions are: 1) How do you determine the coefficients by inspection? Practice, or is there a trick? 2) Why is $U_q$ defined over the space of a unit cell and not over all space? This is very new to me and my questions might be confusing. For the interested, this is taken from exercise 8.1 in Marder's Condensed Matter Physics 2nd Ed Answer: 1) How do you determine the coefficients by inspection? Practice, or is there a trick? For a contrived example like $U(\vec{r}) = -4 U_0 \mathrm{cos}(\frac{2\pi x}{a})\mathrm{cos}(\frac{2\pi y}{a})$ it's fairly simple to do, since you can just use $\mbox{cos}(2\pi x/a)=\frac{e^{2\pi i x/a}+e^{-2\pi i x/a}}{2}$ and similar for $\mbox{cos}(2\pi y/a)$, multiply them out, and use the fact that the basis for Fourier space is tensor products of plane waves, $e^{2\pi i k_1 x/a}e^{2\pi i k_2 y/a}$. Hopefully you're aware of this fact, but Fourier transforms are just a particular type of change of basis, quite literally in exactly the same manner as the changes of basis you've probably done in linear algebra for matrices and vectors. 2) Why is $U_q$ defined over the space of a unit cell and not over all space? 
The potential you are dealing with is an approximation to the potential experienced by electrons in the environment of a periodic crystalline lattice of nuclei, such as that encountered in metals. As a result, $U(x,y)$ is periodic in both directions, with the periodicity being that of the internuclear separation $a$. Since $U$ is periodic over the cell, you only need to integrate over a single cell and then normalize the result by dividing by $V$. You could also integrate over space, but it would be pointless because it would give the same answer. I've neatly skipped over a few delicate mathematical details, but that's intuitively the reason.
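The inspection result can be checked numerically. The sketch below (assumed units $a = U_0 = 1$) samples one unit cell and takes a 2-D FFT, which plays the role of the cell integral: only the four tones $q = (\pm 2\pi/a, \pm 2\pi/a)$ survive, each with coefficient $-U_0$.

```python
import numpy as np

# U(x, y) = -4*U0*cos(2*pi*x/a)*cos(2*pi*y/a) on an N x N grid over one cell.
U0, a, N = 1.0, 1.0, 16
x = np.arange(N) * a / N
X, Y = np.meshgrid(x, x, indexing="ij")
U = -4 * U0 * np.cos(2 * np.pi * X / a) * np.cos(2 * np.pi * Y / a)

Uq = np.fft.fft2(U) / N**2                   # discrete version of (1/V) * cell integral
nonzero = np.argwhere(np.abs(Uq) > 1e-12)    # indices of the surviving coefficients
print(nonzero)                               # the four +-q tones: (1,1), (1,15), (15,1), (15,15)
print(Uq[1, 1].real)                         # approx -1.0, i.e. -U0
```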
{ "domain": "physics.stackexchange", "id": 9988, "tags": "solid-state-physics, fourier-transform" }
Calculating moments of inertia for partially filled cylinders
Question: I'm planning on doing an experiment with cylinders of equal length and constant density but different solid fills. For example, a filled cylinder, $75\%$ filled, $50\%$ filled, $25\%$ filled, and a hollow cylinder. I will need the moment of inertia for each of these objects. Is it possible to calculate or find a source for moment of inertia based on partial fills? Some example values: $density = 1.2g/cm^3$, $length = 10cm$, $radius = 3cm$ Answer: Here I am assuming by partially filled you mean the mass density is uniform between an inner and outer radius, i.e. the cylinders are obtained by "carving out" a narrower coaxial cylinder from a uniform, filled cylinder. The moment of inertia with respect to the axis of symmetry can be found by evaluating $$\iiint\limits_V{\rho r^2 dV},$$ where $\rho$ is the mass density and $r$ is the distance from the axis. For the cylinders in question, this integral gives $$I_{CM} = \frac{1}{2}MR^2\left[1 + (1-f)^2\right]$$ where $M$ is the total mass, $R$ is the outer radius and $f$ is the fraction of the cylinder filled. In terms of mass density, this is equal to $$I_{CM} = \frac{1}{2}\pi\rho hR^4\left[1 - (1 - f)^4\right]$$ where $h$ is the height. However, you indicated that the cylinders are rolling down a ramp. Assuming no slipping, the instantaneous axis of rotation of such a cylinder is the one in contact with the ramp. If you wish to calculate the moment of inertia with respect to this axis, according to the parallel axis theorem, $$I = I_{CM} + MR^2.$$ Edit: What you may have meant by "partial fill" could simply be an effective decrease of $\rho$. If this is the case, just set $f=1$ in the above equations, and replace $\rho$ by $f\rho$.
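A short sketch of these formulas with the question's example numbers (assuming, as above, that "partially filled" means uniform density between inner radius $(1-f)R$ and $R$):

```python
import numpy as np

def inertia_partial_cylinder(rho, h, R, f):
    """Moment of inertia about the symmetry axis: 0.5*pi*rho*h*R^4*(1 - (1-f)^4)."""
    return 0.5 * np.pi * rho * h * R**4 * (1 - (1 - f)**4)

rho, h, R = 1.2, 10.0, 3.0                            # g/cm^3, cm, cm -> I in g*cm^2
for f in (1.0, 0.75, 0.5, 0.25):
    M = np.pi * rho * h * (R**2 - ((1 - f) * R)**2)   # total mass in g
    I = inertia_partial_cylinder(rho, h, R, f)
    # Cross-check against the mass form: I = 0.5*M*R^2*(1 + (1-f)^2)
    assert abs(I - 0.5 * M * R**2 * (1 + (1 - f)**2)) < 1e-6
    print(f"f={f}: I = {I:.1f} g*cm^2, about the contact axis: {I + M * R**2:.1f}")
```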
{ "domain": "physics.stackexchange", "id": 59884, "tags": "newtonian-mechanics, rotational-dynamics, moment-of-inertia" }
How to make a wire turn with a turbine head
Question: I have an outer pipe and an inner pipe a lot smaller than the inside of the outer pipe, and the head of the (mini) turbine will be connected to a metal rod. There will be wires connected to the head, and it will be turning in the wind. How should I set up the wires? Answer: A slip ring connector is the usual way to solve this problem. The electrical contact is made by fixed brushes which make sliding contact with a rotating ring. The key point is to ensure that you maintain a good and consistent electrical contact. It is common to use brass slip rings and graphite contactors held in place with springs and designed to wear and be replaced periodically. Graphite has the advantage that it combines good electrical conductivity with self lubrication and is soft enough to wear to conform precisely to the shape of the surface of the slip ring. It is also important that the enclosure is designed to prevent contamination of the contact surface by dust, oil, moisture etc.
{ "domain": "engineering.stackexchange", "id": 749, "tags": "design" }
Meaning of Yang-Baxter equation for classical $r$-matrix
Question: I'm reading this [math/9802054] paper on the structure of the phase space of Chern-Simons TQFT. I'm stuck at the definition of the classical $r$-matrix, which goes as follows: This might sound dumb, but I don't understand what $r_{12}$, $r_{23}$, $r_{13}$, $r_{21}$ is. I could greatly benefit from an illustrating example. Equation (15) suggests that $r_{12}$ and $r_{21}$ from equation (14) can be defined as $r$ and its transpose. But I suspect that $r_{12}$ from equation (13) is different, and probably has something to do with higher tensor powers of $\mathfrak{g}$, but I simply couldn't come up with a meaningful definition. This is embarrassingly confusing. Answer: A classical $R$-matrix $r\in \mathfrak{g}\otimes \mathfrak{g}$ is an element of the 2nd tensor power of an algebra $\mathfrak{g}$ (formally extending the algebra with a unit element ${\bf 1}$). The notation $r_{k\ell}\in \mathfrak{g}\otimes \mathfrak{g}\otimes \mathfrak{g}$ for an element of the 3rd tensor power means that $r$ belongs to the $k$'th and $\ell$'th copy of the algebra $\mathfrak{g}$, and one should plug ${\bf 1}$ into the remaining copy. (If $k>\ell$ this involves a transposition.) The notation for the quantum $R$-matrix $$R~=~{\bf 1}\otimes{\bf 1} +\hbar r +{\cal O}(\hbar^2)$$ and the quantum Yang-Baxter equation is similar. See also the related Sweedler notation.
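A concrete toy illustration (finite-dimensional matrices standing in for $\mathfrak{g}$, with $r = A \otimes B$ a simple tensor, all of it an assumed example rather than anything from the paper) of how $r_{12}$, $r_{23}$, $r_{13}$ and $r_{21}$ embed into the triple tensor power:

```python
import numpy as np

d = 2
rng = np.random.default_rng(0)
A = rng.standard_normal((d, d))
B = rng.standard_normal((d, d))
r = np.kron(A, B)            # stand-in for r = A (x) B in g (x) g
I = np.eye(d)

r12 = np.kron(r, I)          # = A (x) B (x) 1: r in copies 1 and 2, identity in copy 3
r23 = np.kron(I, r)          # = 1 (x) A (x) B: r in copies 2 and 3

# r13 = A (x) 1 (x) B: conjugate r12 by the operator swapping tensor factors 2 and 3.
P = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        P[i * d + j, j * d + i] = 1.0     # swap operator on V (x) V
P23 = np.kron(I, P)
r13 = P23 @ r12 @ P23
assert np.allclose(r13, np.kron(np.kron(A, I), B))

# r21 is r with its two factors transposed: B (x) A.
r21 = P @ r @ P
assert np.allclose(r21, np.kron(B, A))
```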
{ "domain": "physics.stackexchange", "id": 48668, "tags": "quantum-field-theory, topological-field-theory, chern-simons-theory, integrable-systems" }
Can vectorization lead to mixed states?
Question: Given an operator $L = \sum_{ij}L_{ij}\vert i\rangle\langle j\vert$, in some basis, the definition of vectorization is $vec(L) = \sum_{ij}L_{ij}\vert i\rangle\vert j\rangle$. The operation is invertible and heavily used in quantum information. This definition always leads to a pure bipartite state but my notes, Wikipedia, etc. have no indication about whether one can also map operators (of some sort) to a mixed bipartite state. Can anyone shed some light on if this makes sense and there exist extensions of the definition of vectorization? Answer: You have given $L$ as a general linear operator between Hilbert spaces $H_1$ and $H_2$ (the ones indexed by $j$ and $i$ respectively). That is, all possible linear operators, the entirety of $Mor(H_1,H_2)$. Mor here is short for morphism, a word for whatever sorts of functions/operators/etc. are allowed in the context at hand. In the present context of vector spaces, this just means the linear operators $H_1 \to H_2$. So we have all of the linear operators, and for each of them $\mathbf{vec}$ produces a pure state in $H_2 \otimes H_1$. There is nothing missing. There are no other linear operators. There is no way to get states $\rho$ on $H_2 \otimes H_1$ here that aren't pure; only pure ones arise. You already noticed this is invertible. That is, we have $Mor(H_1,H_2) \simeq H_2 \otimes H_1$. The LHS is where $L$ is an element and the RHS is where $\mathbf{vec}(L)$ is an element. There are extensions of this concept. For example, suppose we have 3 Hilbert spaces $H_1$, $H_2$ and $H_3$. Giving a linear map from $H_1$ to the space of linear maps from $H_2$ to $H_3$, which would be denoted $Mor(H_1, Mor(H_2,H_3))$, is the same as giving a linear operator $H_1 \otimes H_2 \to H_3$. See how this implies the above: take $H_3=\mathbb{C}$. Then a way of thinking about a bra for $H_2$ is as a linear map $H_2 \to H_3=\mathbb{C}$.
I'll leave you to ponder this fact some more to fully flesh out how it implies the above.
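A minimal numpy sketch of the point: vec is a bijection between operators and vectors, and a normalized vec(L) always has a rank-1 (pure) density matrix.

```python
import numpy as np

d1, d2 = 3, 2
rng = np.random.default_rng(1)
L = rng.standard_normal((d2, d1))      # a generic L: H1 -> H2 with entries L_ij

vec_L = L.reshape(-1)                  # row-major flattening: sum_ij L_ij |i>|j>
L_back = vec_L.reshape(d2, d1)         # the inverse map: vec is a bijection
assert np.allclose(L, L_back)

# A normalized vec(L) is always a *pure* state: its density matrix has rank 1.
psi = vec_L / np.linalg.norm(vec_L)
rho = np.outer(psi, psi)
assert np.linalg.matrix_rank(rho) == 1
```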
{ "domain": "quantumcomputing.stackexchange", "id": 490, "tags": "quantum-state, mathematics, textbook-and-exercises" }
java error in "rosmake roboearth"
Question: Hi guys! I'm having a problem trying to compile the roboearth package. I've followed the instructions posted on http://www.ros.org/wiki/roboearth, using rosinstall to download the package. When I try to compile the package with "rosmake roboearth" I get a lot of errors, which I think are related to some Java problem. Here are the logs: /home/prisma/fuerte_workspace/stacks/knowrob/json_prolog/src/java/edu/tum/cs/ias/knowrob/json_prolog/PrologBindings.java:80: cannot find symbol symbol : variable JSONUtils location: class edu.tum.cs.ias.knowrob.json_prolog.PrologBindings if(JSONUtils.isArray(bdg)) { ^ /home/prisma/fuerte_workspace/stacks/knowrob/json_prolog/src/java/edu/tum/cs/ias/knowrob/json_prolog/PrologBindings.java:82: cannot find symbol symbol : variable JSONArray location: class edu.tum.cs.ias.knowrob.json_prolog.PrologBindings Term[] objs = JSONQuery.decodeJSONArray(JSONArray.fromObject(bdg)); ^ /home/prisma/fuerte_workspace/stacks/knowrob/json_prolog/src/java/edu/tum/cs/ias/knowrob/json_prolog/PrologBindings.java:90: cannot find symbol symbol : variable JSONUtils location: class edu.tum.cs.ias.knowrob.json_prolog.PrologBindings } else if(JSONUtils.isObject(bdg)) { ^ /home/prisma/fuerte_workspace/stacks/knowrob/json_prolog/src/java/edu/tum/cs/ias/knowrob/json_prolog/PrologBindings.java:91: cannot find symbol symbol : variable JSONObject location: class edu.tum.cs.ias.knowrob.json_prolog.PrologBindings Term obj = JSONQuery.decodeJSONValue(JSONObject.fromObject(bdg)); ^ /home/prisma/fuerte_workspace/stacks/knowrob/json_prolog/src/java/edu/tum/cs/ias/knowrob/json_prolog/PrologBindings.java:94: cannot find symbol symbol : variable JSONUtils location: class edu.tum.cs.ias.knowrob.json_prolog.PrologBindings } else if(JSONUtils.isString(bdg)) { ^ /home/prisma/fuerte_workspace/stacks/knowrob/json_prolog/src/java/edu/tum/cs/ias/knowrob/json_prolog/PrologBindings.java:97: cannot find symbol symbol : variable JSONUtils location: class 
edu.tum.cs.ias.knowrob.json_prolog.PrologBindings } else if(JSONUtils.isNumber(bdg)) { ^ I'm using Ubuntu 12.04 and the ROS Fuerte release. Any suggestions? Originally posted by pacifica on ROS Answers with karma: 136 on 2013-02-12 Post score: 1 Answer: Do you have all rosdeps installed? It looks like libjson-java is missing. You can check with 'rosdep check knowrob', given that rosdep is set up correctly. Originally posted by moritz with karma: 2673 on 2013-02-12 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by pacifica on 2013-02-12: Thanks a lot!! rosdep resolved my problems, I was missing libjson!
{ "domain": "robotics.stackexchange", "id": 12852, "tags": "ros, roboearth, compilation, java" }
What is the impact of the number of features on the prediction power of a neural network?
Question: What is the impact of the number of features on the prediction power of an ANN model (in general)? Does an increase in the number of features mean a more powerful prediction model (for approximation purposes)? I'm asking these questions because I am wondering if there is any benefit in using two variables (rather than one) to predict one output. If there is a scientific paper that answers my question, I would thank you. Answer: Based on the clarifications given in the comments on the original question, I will try my best to give an answer. If the number of features increases, does the approximation capability of an ANN improve, theoretically speaking? It depends on whether this additional feature adds 'usefulness' to the data that was already supported. If you add a feature that is already in the dataset, it obviously will not increase the approximation capabilities. If you add a feature that is not yet in the current set of features and does influence the thing you are trying to approximate, then yes! There are some exceptions of course, such as when features are multicorrelated, etc. This is not super important or easily measurable in ANNs, but statistics has a bunch of theory on this stuff. Simple statistical multiple regression has 4 assumptions and also checks for multicollinearity to see whether it is possible to get 'useful' results from the data. As ANNs are super general, this is not directly applicable, but you could argue that similar practices could be looked into when seeking the best possible 'theoretical' validity of the features that you are using. Does adding more features make your ANNs produce more accurate results (in practice)? Again, it depends. If you indeed added a useful feature, then the possibility exists that your ANN will in practice be able to produce more accurate results. But if your ANN is not able to converge to a solution, then your results will be gibberish. 
It is more likely that a network is unable to converge if you just throw data at it. More data requires bigger networks etc. So it is highly dependent on your training method. AKA, if your added feature is useful and the method you use is sufficient, then I'd argue that, yes, adding a feature will most likely make your ANNs produce more accurate results.
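A toy scikit-learn experiment (entirely synthetic, just to illustrate the "useful feature" point): a small MLP predicting a target that genuinely depends on two features scores markedly better when given both of them than when given only one.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic target that truly depends on both features (assumed toy setup).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = X[:, 0] + 2 * X[:, 1] + 0.05 * rng.standard_normal(2000)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

one = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
two = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
r2_one = one.fit(Xtr[:, :1], ytr).score(Xte[:, :1], yte)   # first feature only
r2_two = two.fit(Xtr, ytr).score(Xte, yte)                 # both features

print(f"R^2 with one feature: {r2_one:.2f}, with two: {r2_two:.2f}")
```

Here the second feature is informative by construction; a redundant copy of the first feature would have left the score essentially unchanged.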
{ "domain": "ai.stackexchange", "id": 2477, "tags": "neural-networks, deep-learning, reference-request, feature-selection, features" }
What Is the Transfer Function of a Moving Average (FIR Filter)?
Question: To make post-processing easier, I export scope measurements as CSV files, which are then post-processed (mostly in Microsoft Excel, which is not the best tool for the job, but it is all I have at my disposal). One of the post-processing steps is a simple moving average, and I was wondering how it transforms the source measurement. I expect it to behave similarly to a low-pass filter, but I guess it is not the same. Hence, my question. I am specifically looking for a Bode plot. Thanks in advance. Answer: The frequency response of the moving average is called the asinc or psinc (the aliased or periodic sinc, sinc for cardinal sine), or the Dirichlet function. Since the sum of the moving average filter coefficients is equal to one, it preserves a constant signal, hence is somewhat lowpass. When the length $N=2$, the frequency-domain magnitude is shaped like a cosine, hence decreasing. Otherwise, it ripples in the frequency domain. When the length is even, the amplification at Nyquist is $0$; when odd, it is $1/N$. So it's globally lowpass, not the best, but one of the fastest to compute, allowing fast recursive computation.
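The DC and Nyquist gains stated above can be checked with `scipy.signal.freqz`; a full Bode plot just evaluates the same response on a dense frequency grid instead of two points.

```python
import numpy as np
from scipy.signal import freqz

# Length-N moving average: FIR coefficients all equal to 1/N.
for N in (4, 5):
    b = np.ones(N) / N
    w, h = freqz(b, worN=[0.0, np.pi])        # response at DC and at Nyquist
    print(N, abs(h[0]), round(abs(h[1]), 6))  # DC gain 1; Nyquist gain 0 (even N) or 1/N (odd N)
```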
{ "domain": "dsp.stackexchange", "id": 5903, "tags": "linear-systems, finite-impulse-response, transfer-function, digital-filters, moving-average" }
Acceleration in inclined and horizontal plane
Question: If I apply the formula $v^2=u^2+2as$ on the inclined plane, I get $a=0.87 ~\rm m/s^2$... If I do the same thing in a horizontal plane (not inclined), the same result pops out... I want to know what the difference between these two results is. If acceleration in the inclined plane is effective acceleration, then why is it equal to the acceleration in the horizontal plane (not inclined)? Answer: If I understand the question right, then you need to go from a specific start speed to a specific end speed in a specific distance. This requires a specific acceleration. It doesn't matter which way this motion happens. It doesn't matter which things are pulling or pushing or grabbing or holding back. They just must result in that specific acceleration. In other words: On a horizontal surface, you need that specific acceleration in order to reach that end speed over that distance. The car can exert a force to cause this acceleration, because nothing else holds back. On the inclined surface, you still need that specific acceleration in order to reach that end speed over that distance. Now gravity pulls backwards, so the car must exert a larger force in order for the car to reach that same acceleration. Forces are different in different situations, because what holds back and what helps along in the motion may differ. But the motion we are aiming to reach, in this case acceleration, is the same regardless of how it is reached. If you are not convinced, then consider this simpler example: Let's say that I am driving with a speed of $1\;\mathrm{m/s}$ and I want to speed up to $3\;\mathrm{m/s}$ in 1 second. What is the acceleration I need to have? Simple: I need to accelerate with $2\;\mathrm{m/s}$ per second. $a=2\;\mathrm{m/s^2}$. This is regardless of the cause of this acceleration. In other words: This is regardless of the forces. The forces acting - whichever they may be - just must result in this acceleration. 
Otherwise I will not reach the $3\;\mathrm{m/s}$ in one second. If I am driving upwards, then I must therefore exert a larger force in order to overcome gravity, so the total force is still enough to cause that acceleration of $a=2\;\mathrm{m/s^2}$. Bottom line: Kinematics (the motion) can be considered separately from dynamics (the forces). Dynamics is the cause of changes in the kinematics. You can easily consider how something moves (its position, speed, acceleration etc.) without thinking about why it has that acceleration.
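The same point in two lines of arithmetic, with hypothetical numbers chosen to reproduce the question's $0.87~\rm m/s^2$ (the original values were not given):

```python
# Hypothetical kinematic data: start speed, end speed (m/s), distance (m).
u, v, s = 0.0, 2.0, 2.3
a = (v**2 - u**2) / (2 * s)      # rearranged from v^2 = u^2 + 2*a*s
print(round(a, 2))               # 0.87 -- identical whether the plane is inclined or horizontal
```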
{ "domain": "physics.stackexchange", "id": 46374, "tags": "newtonian-mechanics, acceleration" }
Why is it that the input to an IFFT is first reordered in an OFDM Modulator
Question: I am back with my questions! I recently came across a paper titled "Low Latency IFFT Design for OFDM Systems Supporting Full-Duplex FDD" where they claim a new method of IFFT input mapping to reduce the output latency. However, they have mentioned how in a conventional OFDM, the null subcarriers are mapped in the middle and the data subcarriers to the side. I have attached a picture of this. But my question is why is it done in the first place? Is it something to do with the efficient use of butterfly operation? Also, please mention a reference where it is cited that conventional OFDM systems use this method of reordering before performing an IFFT operation. Thank you Answer: Since Discrete Fourier Basis Vectors are $2\pi$-Periodic, hence, negative frequencies $-\frac{2\pi k}{N}$ are conventionally represented as $(2\pi - \frac{2\pi k}{N} = \frac{2\pi}{N} (N-k))$. It has nothing to do with Butterfly Structure of FFT Algorithm. The above statement basically means that $-k^{th}$ tone is represented as $(N-k)^{th}$ tone in DFT/FFT. That is why you are seeing that DC and positive tones are placed as it is, but negative tone $-k$ is placed at $N-k$ before taking IFFT. It is precisely the consequence of the convention that DFT coefficients are $X[k]$ where $k = 0,1,2,...,(N-1)$. $k$ goes from $0$ to $(N-1)$ and not from $(-\frac{N}{2})$ to $(\frac{N}{2} - 1)$. So, DC + Positive $\frac{N_d}{2}$ tones are placed as first $\frac{N_d}{2}+1$ values, and then Negative $\frac{N_d}{2}$ tones are placed in reverse order from the end, i.e. $-1^{th}$ tone becomes $(N-1)^{th}$ value, $-2^{nd}$ tone becomes $(N-2)^{nd}$ value and so on to $-\frac{N_d}{2}^{th}$ tone becomes $(N-\frac{N_d}{2})^{th}$ value. The rest of the $(N - (N_d +1))$ tones are zero padded. This array of values becomes the IFFT input.
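A small numpy sketch of this mapping (assumed toy sizes, N = 16 with Nd = 4 data tones plus DC): placing tone $-k$ at bin $N-k$ is exactly `k mod N` indexing, and the IFFT of the reordered array equals the direct sum over signed-frequency exponentials.

```python
import numpy as np

N, Nd = 16, 4
rng = np.random.default_rng(0)
tones = {k: rng.standard_normal() + 1j * rng.standard_normal()
         for k in range(-Nd // 2, Nd // 2 + 1)}     # signed subcarrier indices, DC included

X = np.zeros(N, dtype=complex)
for k, v in tones.items():
    X[k % N] = v                                    # -1 -> bin 15, -2 -> bin 14, ...
# Remaining middle bins stay zero: the null subcarriers.

x = np.fft.ifft(X)                                  # OFDM time-domain symbol

# Direct evaluation with signed frequencies gives the same waveform, showing the
# reordering is just the DFT's 0..N-1 indexing convention at work.
n = np.arange(N)
x_direct = sum(v * np.exp(2j * np.pi * k * n / N) for k, v in tones.items()) / N
assert np.allclose(x, x_direct)
```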
{ "domain": "dsp.stackexchange", "id": 9067, "tags": "fft, digital-communications, ofdm, dsp-core" }
What common device uses 7 pin mini DIN plugs, that could be used for the Roomba serial port?
Question: I have a Roomba that I'm planning to hack on the cheap, using junk cables and other parts. The excellent book Hacking Roomba suggests using Macintosh serial cables as a cheap surrogate. However, as a hopeless Macintosh aficionado, I am somewhat loath to start chopping up my precious Mac cables. Plus the 8 pin Mac cables aren't an exact match anyway. So, I was wondering what other, hopefully commonly available, devices use a 7 pin mini DIN plug, that could be used instead? Answer: Core Electronics sell the cables: https://core-electronics.com.au/mini-din-connector-cable-for-irobot-create-2-7-pins-6-feet.html You can also find the plugs on eBay; just google "ebay roomba din plug".
{ "domain": "robotics.stackexchange", "id": 2503, "tags": "roomba" }
How did life survive in the Snowball Earth theory?
Question: There is this theory that says that the whole Earth was once covered with snow and ice. How was it possible for life to survive this extreme state? Answer: First, the "snowball earth", though favoured by many scientists, is not a theory in the scientific meaning of the term, but a hypothesis, because it is not widely accepted: https://en.wikipedia.org/wiki/Snowball_Earth#Scientific_dispute. Some models show sea ice at the equator, while some more sophisticated climatic models failed to extend sea ice to the equator. But let's talk about the most famous "snowball earth" event, the Huronian Glaciation. It happened just after the "Great Oxygenation Event": the massive amounts of oxygen (O2) released in the atmosphere decreased methane concentration, and temperatures plummeted. It's supposed that Earth's surface became entirely or nearly entirely frozen. How was it possible for life to survive this extreme state? It's worth mentioning that, by that time (two and a half billion years ago), there were only unicellular organisms (and probably no eukaryotes, just bacteria). There were no pluricellular organisms nor life on land (outside water). Thus, having in mind that there was still a lot of liquid water (under the ice), it's easy to see those bacteria surviving. Sure, many species have become extinct, but not all life forms were threatened. About the Huronian glaciation: http://link.springer.com/referenceworkentry/10.1007%2F978-3-642-11274-4_742 http://www.sciencedirect.com/science/article/pii/S1674987113000297
{ "domain": "biology.stackexchange", "id": 5770, "tags": "life" }
Is the amount of cubic inches in a cubic foot exact up to infinite significant figures?
Question: I know we are supposed to use SI units and I know they are defined very well. But does that mean that Imperial units of length/volume aren't exact? Don't they have infinite significant figures in the equations? I couldn't find it in the SI brochure. I'm asking because I need to convert from cubic feet to liters and trying to decide how many significant figures to use. My first thought is that the conversion factors are exact and I should use all the numbers my calculator comes up with. But the conversion factor wasn't in the back of my book so maybe cubic inches in a cubic foot aren't exact? Answer: Dealing with imperial measures in science is problematic. The main reason is that you have (at least) two different definitions (fixed values) for a lot of them, one based on the laws of the United States and one based on the laws of the United Kingdom. While the differences are quite insignificant for everyday use (as in baking a cake), in the science world these small deviations can cause further inaccuracies. On Wikipedia you can find more, but for the argument, let's look at the values for the yard:

United States: 0.914401829 m
United Kingdom: 0.9143993 m
International: 0.9144 m

In principle the definition of cubic inch to cubic foot is exact, while the conversion to the SI system may carry an error (SI equivalents rounded to 4 significant figures):

Cubic inch (cu in or in³): 16.39 cm³
Cubic foot (cu ft or ft³): 1728 cu in ≈ 0.02832 m³
Cubic yard (cu yd or yd³): 27 cu ft ≈ 0.7646 m³

Most of the time this shouldn't affect your measurements more than the use of significant figures, which states an error implicitly. That means it is in itself inaccurate. In any scientific capacity you would state the error explicitly. When a value such as $1.00$ is stated to three significant figures, it actually means that the measurement falls within the interval $0.99$ to $1.01$, or $1.00\pm0.01$. Let's have a quick example. 
You measure the volume of water in a beaker to be $0.35~\pu{in^3}$. The beaker doesn't specify how accurate the measurement is, so we simply assume $\pm0.01~\pu{in^3}$. You would state that value as $0.35\pm0.01~\pu{in^3}$, or $0.34-0.36~\pu{in^3}$, or simply $0.35~\pu{in^3}$ with implied error. On the other hand, when we know significant figures are used, we would interpret a value of $0.35~\pu{in^3}$ as $0.35\pm0.01~\pu{in^3}$. When you convert this to the SI system, your value doesn't become more or less accurate. You cannot use significant figures to express the error in one system in the same way you would do it in the other, because the implicit error is different. The exact conversion (of the international yard) is $$\frac{0.9144~\pu{m}}{1~\pu{yd}}=\frac{0.9144~\pu{m}}{3~\pu{ft}}=\frac{0.9144~\pu{m}}{36~\pu{in}}\implies1~\pu{in^3}=\left(\frac{0.9144}{36}\right)^3~\pu{m^3}.$$ The actual trust of the value is $\pm0.01~\pu{in^3}=\pm0.16387064~\pu{cm^3}$. When you convert your value $0.35~\pu{in^3}=5.74~\pu{cm^3}$ (keeping the same number of significant figures since the conversion is exact), you would imply an error of only $0.01~\pu{cm^3}$, which is far less than the actual trust. Instead your measured value falls somewhere between $5.58-5.90~\pu{cm^3}$. So the correct way of implicitly stating the error would be $6~\pu{cm^3}$. TL;DR The conversions are exact, the system of significant figures is not. In doubt, state error margins explicitly.
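The arithmetic above can be reproduced with exact rationals, the international inch being exactly 2.54 cm by definition:

```python
from fractions import Fraction

# The conversion factor is an exact rational; only the measurement carries error.
inch_cm = Fraction(254, 100)                 # 1 in = 2.54 cm, exact by definition
in3_cm3 = inch_cm ** 3
print(float(in3_cm3))                        # 16.387064 cm^3 per cubic inch

v = Fraction(35, 100)                        # measured 0.35 in^3
err = Fraction(1, 100)                       # implied trust of +-0.01 in^3
lo, hi = (v - err) * in3_cm3, (v + err) * in3_cm3
# Converted value approx 5.735 cm^3, with the true interval approx 5.57 to 5.90 cm^3,
# so an implied error of +-0.01 cm^3 would badly overstate the precision.
print(float(v * in3_cm3), float(lo), float(hi))
```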
{ "domain": "chemistry.stackexchange", "id": 7253, "tags": "physical-chemistry, units" }
Could the Halting Problem be "resolved" by escaping to a higher-level description of computation?
Question: I've recently heard an interesting analogy which states that Turing's proof of the undecidability of the halting problem is very similar to Russell's barber paradox. So I got to wonder: mathematicians did eventually manage to make set theory consistent by transitioning from Cantor's naive formulation of the field to a more complex system of axioms (ZFC set theory), making important exclusions (restrictions) and additions along the way. So would it perhaps be possible to try and come up with an abstract description of general computation that is more powerful and more expressive than Turing machines, and with which one could obtain either an existential proof or maybe even an algorithm for solving the halting problem for an arbitrary Turing machine? Answer: You cannot really compare. Naive set theory had paradoxes that were eliminated by ZFC set theory. The theory had to be improved for consistency, as a basic assumption of scientific work is that consistency is achievable (else reasoning becomes a chancy business). I suppose mathematicians expected it had to be possible, and worked to resolve the issue. There is no such situation with computation theory and the halting problem. There is no paradox, no inconsistency. It just so happens that there is no Turing machine that can solve the TM halting problem. It is simply a theorem, not a paradox. So it may be that some breakthrough in our understanding of the universe will lead to computation models beyond what we can envision now. The only such event, in a very weak form, that remains within the TM realm, was possibly quantum computing. Other than this very weak example that touches complexity (how long does it take?) rather than computability (is it feasible?), I doubt anyone on this planet has a clue that computability beyond TM is to be expected. Furthermore, the halting problem is a direct consequence of the fact that Turing machines are describable by a finite piece of text, a sequence of symbols. 
This is actually true of all our knowledge (as far as we know), and that is why speech and books are so important. This is true of all our techniques to describe proofs and computations. So even if we were to find a way to extend the way we compute, say with the T+ machines: either it would mean that we have found a way of expressing knowledge beyond writing finite documents, in which case the whole thing falls out of my jurisdiction (I claim absolute incompetence) and probably anyone else's; or it would still be expressible in finite documents, in which case it would have its own halting problem for T+ machines. And you would be asking the question again. Actually that situation does exist in reverse. Some types of machines are weaker than Turing machines, such as Linear Bounded Automata (LBA). They are quite powerful, but it can be shown, exactly as it is done for TM, that LBA cannot solve the halting problem for LBA. But TM can solve it for LBA. Finally, you can imagine more powerful computational models by introducing oracles, which are devices that can give answers to specific problems and can be called by a TM for answers, but unfortunately do not exist physically. Such an oracle-extended TM is an example of the T+ machines I considered above. Some of them can solve the TM halting problem (abstractly, not for real), but cannot solve their own halting problem, even abstractly.
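The diagonal argument behind this theorem can be made concrete on a toy scale. A sketch (the finite list of machines is a hypothetical stand-in for an enumeration of all Turing machines, which is of course infinite):

```python
# A finite stand-in for an enumeration of machines: each "machine" maps
# an input index to 0 or 1 (think: halts / does not halt on that input).
machines = [
    lambda n: 0,                   # machine 0: rejects everything
    lambda n: 1,                   # machine 1: accepts everything
    lambda n: n % 2,               # machine 2: accepts odd inputs
    lambda n: 1 if n < 2 else 0,   # machine 3: accepts small inputs
]

def diagonal(n):
    """Differ from machine n on input n (Cantor/Turing diagonalization)."""
    return 1 - machines[n](n)

# The diagonal function disagrees with every listed machine on at least
# one input, so it cannot itself appear anywhere in the list.
for i, m in enumerate(machines):
    assert diagonal(i) != m(i)
```

The same construction, applied to the (infinite, finitely describable) enumeration of all Turing machines, is what rules out a halting decider.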
{ "domain": "cs.stackexchange", "id": 4991, "tags": "computability, turing-machines, computation-models, halting-problem" }
Voting Regression models, other approaches than averaging the results from each estimators
Question: In a regression problem that I'm currently working on, it seems that my model is doing well on higher values but significantly worse on lower values (e.g. values from 100,000,000 to 105,000,000 are predicted accurately, with lower error scores, while values from 1,000,000 to 5,000,000 are not). One approach that I am planning to test out is using multiple regression models, with one trained on the lower values and one on the higher values. I've seen scikit-learn's VotingRegressor, but if I understand correctly it seems that in predicting the value it'll only average the result from the estimators. Other than using average values from the estimators, are there any other approaches to do the voting from multiple regression models? Since classification problems might use soft/hard voting, I'm wondering if there are alternative approaches in regression problems as well. Answer: You may try a stacking or blending approach (such as a StackingRegressor() in recent sklearn versions), featuring a simple meta-model that takes your initial models' predictions as features.
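As a sketch of why a learned meta-model can beat plain averaging, here is a dependency-free toy version of stacking; the data and the two base models are made up for illustration, and `sklearn.ensemble.StackingRegressor` does the same thing with cross-validated base predictions:

```python
# Toy data: the true relation is y = 3x (illustrative only).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.0 * x for x in xs]

# Two hypothetical base models with different systematic errors.
def model_a(x):  # underestimates the slope
    return 2.0 * x

def model_b(x):  # wrong shape entirely
    return x + 5.0

# Plain averaging, as VotingRegressor would do:
def voting(x):
    return (model_a(x) + model_b(x)) / 2.0

# Stacking: fit a linear meta-model y ~ w1*model_a(x) + w2*model_b(x)
# by ordinary least squares (2x2 normal equations, solved by Cramer's rule).
p1 = [model_a(x) for x in xs]
p2 = [model_b(x) for x in xs]
s11 = sum(a * a for a in p1)
s22 = sum(b * b for b in p2)
s12 = sum(a * b for a, b in zip(p1, p2))
t1 = sum(a * y for a, y in zip(p1, ys))
t2 = sum(b * y for b, y in zip(p2, ys))
det = s11 * s22 - s12 * s12
w1 = (t1 * s22 - s12 * t2) / det
w2 = (s11 * t2 - s12 * t1) / det

def stacked(x):
    return w1 * model_a(x) + w2 * model_b(x)
```

Here the meta-model learns to rely entirely on `model_a` (it finds w1 = 1.5, w2 = 0) and recovers y = 3x exactly, while plain averaging stays biased.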
{ "domain": "datascience.stackexchange", "id": 10968, "tags": "python, scikit-learn, regression, ensemble-modeling" }
How to avoid FFT Bit Growth?
Question: I am trying to implement the radix-2 DIT FFT algorithm on an FPGA. However, I don't understand how to handle bit growth in the multiply and add operations on the twiddle factors and input data. For instance, my input data is a 13 bit signed number with a fraction of 0 bits (sfix13_0) and my twiddle factor is a 16 bit signed number with a 15 bit fraction (sfix16_15). So the first step is to multiply these two numbers. After that, I add the first sample of the input to the output of the multiply operation. I get a 32 bit signed number with a fraction of 15 bits. The problem is that if I do 10 stages of the FFT algorithm, my output has a really big bit depth. How should the bit depth be set after the multiply and add operations at every stage? What kind of operation should be done without spoiling the FFT result, and how can we explain it theoretically? For example, I simulated bit growth in MATLAB for a 16 point FFT with 4 stages, and I reached a 128 bit signed number with a 60 bit fraction. If I apply this approach for 10 stages, the output would reach a very high bit depth. I add the MATLAB output: the image shows the inputs on the left hand side, the outputs on the right hand side, and the multiply and add operations (called the butterfly) in the middle. How can I handle this bit growth, especially the integer part, while doing a 1024 point FFT? Answer: Fixed point FFT is tricky. Unfortunately this depends a lot on the class of signals that you are dealing with. how can we explain it theoretically? Let's start with the energy conserving definition of the FFT $$X[k] = \frac{1}{\sqrt{N}}\sum_{n=0}^{N-1}x[n] \cdot e^{-j2\pi\frac{kn}{N}}$$ This preserves the energy in both domains, i.e. $\sum x^2[n] = \sum |X[k]|^2$. If you apply this to "noise like" signals, the output will generally have the same range of amplitudes as the input, so just adding a few guard bits will do the trick.
However, if you apply it to a sine wave, the peak amplitude will be about $\sqrt{N}$ larger, which means you need at least $\log_2(N)/2$ guard bits to accommodate the very large crest factor of the frequency domain signal. How to set bit depth after multiply and add operations for every stage? That depends a lot on the specific requirements of your implementation: what is the required SNR, what are the types of input signals, how sensitive are you against occasional clipping, etc. This is always a trade off between signal to noise ratio, risk of clipping (which is just a different type of noise) and how many bits you want to spend. I would start with using $\sqrt{N}$ scaling, i.e. right shift the results of every second stage by 1. All the twiddle factors are equal to or smaller than 1. You can represent them as Q0.15. Your data is Q13.0, so a multiplication would result in Q13.15. For a 1024 point FFT you should add at least 5 guard bits, so all intermediate results and the output should probably be in Q18.15 or Q19.15. Perform all butterflies with the data in Q19.15 and the twiddle factors in Q0.15. Truncate the result down to Q19.15 and repeat. Scale every second stage by 0.5 (right shift) or scale every stage by sqrt(0.5) if it's cheap to do so. Your output will be in Q19.15, so you can truncate/round this back to whatever format or bit depth you need. You just need to make sure you can accommodate the crest factor of the output. If you are limited to 32 bits, then I would go with Q18.13 and the twiddles in Q0.13. You'll take a 12 dB hit in signal to noise ratio, but that may be good enough.
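The sqrt(N) peak growth for a sinusoidal input is easy to check numerically with the energy-conserving DFT definition above (a pure-Python sketch; N and the bin index are arbitrary choices, and a complex tone is used so the whole peak lands in one bin):

```python
import cmath
import math

N = 64    # transform length (arbitrary choice for illustration)
k0 = 5    # put the tone exactly on a bin

# A full-scale complex tone: peak amplitude 1.0 in the time domain
x = [cmath.exp(2j * math.pi * k0 * n / N) for n in range(N)]

# Energy-conserving DFT: X[k] = (1/sqrt(N)) * sum_n x[n] e^{-j 2 pi k n / N}
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / math.sqrt(N)
     for k in range(N)]

# Energy matches in both domains...
assert abs(sum(abs(v) ** 2 for v in x) - sum(abs(v) ** 2 for v in X)) < 1e-6

# ...but the peak grows from 1.0 to sqrt(N), i.e. log2(N)/2 extra integer bits
peak = max(abs(v) for v in X)
guard_bits = math.log2(peak)
```

For N = 64 this gives a peak of 8.0 and 3 guard bits, matching the log2(N)/2 rule of thumb; a real cosine splits its energy over two bins and needs one bit less.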
{ "domain": "dsp.stackexchange", "id": 10669, "tags": "fft, bitdepth" }
Analyzing parallel performance question
Question: I was reviewing for my CS class and came across this question and answer combo that didn't have any explanation why it was correct. I'm confused on how they got the answer: We have a system to which we can instantaneously add and remove cores -- adding more cores never leads to slowdown from things like false sharing, thread overhead, context switching, etc When the program foo() is executed to completion with a single core in the system, it completes in 20 minutes. When foo() is run with a total of three cores in the system, it completes in 10 minutes. If 100% of foo() is parallelizable, with 3 cores it would take 20/3=6.66 minutes. Since it instead takes 10 minutes, what fraction of foo() is parallelizable? ANSWER GIVEN: 0.75 How many minutes would it take to execute foo on this magical system as the number of cores approaches infinity? ANSWER GIVEN: 5 Could someone explain how the staff got these answers? Answer: Suppose that the fraction of foo() that is parallelizable is $\alpha$. Then the total execution time with $n$ cores is: the non-parallelizable execution, done by only one core: $(1- \alpha)\times 20$ the parallelizable execution: $\alpha \times 20 \times \frac{1}{n}$ Then, for the first question, we have to consider that the number of cores is $3$, and the total time is $10$. The equation becomes: $(1-\alpha)\times 20 + \alpha\times \frac{20}{3} = 10\Leftrightarrow \alpha = 0.75$. For the second question, we suppose known the answer to the first question: $\alpha = 0.75$, and we suppose then that $n = \infty$. The total execution time becomes: $0.25 \times 20 + 0.75\times \frac{20}{\infty} = 0.25 \times 20 = 5$.
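The two computations can be transcribed directly (a minimal sketch of the equations above):

```python
T1 = 20.0  # minutes with a single core
Tn = 10.0  # minutes with n cores
n = 3

# Solve (1 - a)*T1 + a*T1/n = Tn for the parallelizable fraction a
alpha = (T1 - Tn) / (T1 * (1.0 - 1.0 / n))

# With infinitely many cores only the serial part (1 - a)*T1 remains
t_limit = (1.0 - alpha) * T1
```

This reproduces the given answers: alpha = 0.75 and a 5-minute floor no matter how many cores are added (Amdahl's law).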
{ "domain": "cs.stackexchange", "id": 18230, "tags": "time-complexity, parallel-computing, performance, threads" }
What happens when a compass is taken to the site of magnetic pole of earth?
Question: We know that a compass needle always points towards the magnetic north pole. If we take the compass to the site of the magnetic north pole, which direction will the compass needle point? Answer: If the Earth's magnetic field were a perfect dipole, the compass needle would float aimlessly. Or it might point in the last direction it was facing before you stepped over the magnetic north pole. If you have magnetized metal in your pockets, it might point there. If you tilt the compass from horizontal, it will point toward the low side. If you started it spinning, it would continue spinning until friction slowed and stopped it. Earth's magnetic field lines would be vertical at the magnetic north pole if the magnetic north pole coincided exactly with the geomagnetic north pole. So a compass held horizontally there would have no preferred direction. But the Earth's magnetic field is not a perfect dipole. The north pole of the Earth's magnetic field does not fall exactly at the geomagnetic north pole (the geomagnetic pole is where the axis of a theoretical perfect dipole through the center of the Earth meets the surface), so the field lines are not exactly vertical. A horizontal compass will be erratic and unreliable at the geomagnetic north pole, and may exhibit any of the behaviors described above. The magnetic pole and the geomagnetic pole are constantly in motion due to fluid movements in the Earth's core. They wander many miles each year, but on average they hover around the Earth's rotation axis. Here is a link which explains the movement of the geomagnetic pole: https://en.wikipedia.org/wiki/Geomagnetic_pole.
{ "domain": "physics.stackexchange", "id": 24092, "tags": "magnetic-fields, earth, geomagnetism" }
Hangman game in C
Question: I am a beginner who wrote this simple Hangman game in C for fun and to practice programming. I am looking for advice on optimizing this code and making it adhere to best practices. Are there too many variables, conditionals, and loops? #include <stdio.h> #include <stdlib.h> #include <string.h> #include <time.h> #include <stdbool.h> int main(void) { //get a word char words[12][10] = {"depend", "rich", "twig", "godly", "fang", "increase", "breakable", "stitch", "pumped", "pine", "shrill", "cable"}; srand(time(NULL)); int r = rand() % 12; char* word = words[r]; int length = strlen(word); int maxIncorrect = 5; char correct[26] = {'\0'}; char incorrect[26] = {'\0'}; int amountCorrect = 0; int amountIncorrect = 0; //repeat until maxIncorrect wrong guesses while (amountIncorrect < maxIncorrect) { char guess = '\0'; bool inWord = false; bool allCorrect = true; printf("\n________________________________________\n\n"); //print blanks and correctly guessed letters for (int i = 0; i < length; i++) { inWord = false; for (int j = 0; j < 26; j++) { if (word[i] == correct[j]) { inWord = true; } } if (inWord) { printf("%c ", word[i]); } else { printf("_ "); allCorrect = false; } } //stop if all letters have been correctly guessed if (allCorrect) { printf("\n\nCorrect!\n"); return 0; } //print incorrect guesses printf("\n\nIncorrect guesses: "); for (int i = 0; i < amountIncorrect; i++) { printf("%c", incorrect[i]); } printf("\n"); inWord = false; bool valid = true; printf("\n%i incorrect guesses remaining\n", maxIncorrect - amountIncorrect); printf("Enter a single lowercase letter: "); //get user's guess and check if it is valid (a lowercase letter that has not been guessed before) scanf("%c%*c", &guess); for (int i = 0; i < amountCorrect; i++) { if (guess == correct[i]) { valid = false; } } for (int i = 0; i < amountIncorrect; i++) { if (guess == incorrect[i]) { valid = false; } } if (guess < 97 || guess > 122) { valid = false; } //go back to top of loop if guess is invalid
if (!valid) { printf("\nInvalid\n"); } //check if guess is part of the word or not else { for (int i = 0; i < length; i++) { if (guess == word[i]) { inWord = true; } } if (inWord == true) { correct[amountCorrect] = guess; amountCorrect++; } else { incorrect[amountIncorrect] = guess; amountIncorrect++; } } } printf("\nThe word was %s\n", word); return 0; } Answer: General Overview This is pretty good for a beginner. I see some best practices are already being followed such as placing single statements in an if statement within { and } as well as putting each variable on a new line and initializing it. The variable names are quite understandable. The code could be more flexible and more modular. When developing code it is a good idea to compile with switches that report possible errors such as the GCC -pedantic flag. When I compile this code I get the following warning messages: main.c(11,11): warning: 'function': conversion from 'time_t' to 'unsigned int', possible loss of data main.c(15,18): warning: 'initializing': conversion from 'size_t' to 'int', possible loss of data Complexity A general best practice in almost every programming language is that no function should be larger than a single screen in the editor or IDE. This generally means no function should be larger than 55 to 60 lines of code, in the current implementation the main function is 118 lines of code. The reason for this particular best practice is that anything more than one screen makes it very difficult to follow the logic of the function. The function main() is too complex (does too much). As programs grow in size the use of main() should be limited to calling functions that parse the command line, calling functions that set up for processing, calling functions that execute the desired function of the program, and calling functions to clean up after the main portion of the program. 
In the current code main is the entire program. There is also a programming principle called the Single Responsibility Principle that applies here. The Single Responsibility Principle states: that every module, class, or function should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by that module, class or function. An example of code that could be a function: //print blanks and correctly guessed letters for (int i = 0; i < length; i++) { inWord = false; for (int j = 0; j < 26; j++) { if (word[i] == correct[j]) { inWord = true; } } if (inWord) { printf("%c ", word[i]); } else { printf("_ "); allCorrect = false; } } Magic Numbers There are Magic Numbers in the code (12, 10, 5, 26); it might be better to create symbolic constants for them to make the code more readable and easier to maintain. These numbers may be used in many places and being able to change them by editing only one line makes maintenance easier. Numeric constants in code are sometimes referred to as Magic Numbers, because there is no obvious meaning for them. There is a discussion of this on stackoverflow. Flexibility This code at the very beginning of main does not allow for additional words to be added to the list: char words[12][10] = { "depend", "rich", "twig", "godly", "fang", "increase", "breakable", "stitch", "pumped", "pine", "shrill", "cable" }; srand(time(NULL)); int r = rand() % 12; char* word = words[r]; A flexible alternative would be: //get a word const char *words[] = { "depend", "rich", "twig", "godly", "fang", "increase", "breakable", "stitch", "pumped", "pine", "shrill", "cable" }; size_t wordCount = sizeof(words) / sizeof(*words); size_t r = rand() % wordCount; const char* word = words[r]; This will allow you to add all the words you want and then calculate the size of the array of words.
Note: The above code could be a function, but seeding the random number generator should probably be one of the first lines in the main function.
{ "domain": "codereview.stackexchange", "id": 45066, "tags": "beginner, c, hangman" }
Mechanism of deprotection of enol thioether
Question: I was reading some enolate chemistry from Carruthers textbook and came across the selective alkylation of unsymmetrical ketones. The reaction involves blocking one alpha-position of the ketone using an enol thioether, as shown below. I cannot understand the mechanism of the final step and while searching, I ended up with this from Sciencedirect (Ref.1) Removal of the thioenol ether protecting group gives the aldehyde function, and under the acidic conditions the β-hydroxy group eliminates to give the mainly (E)-unsaturated enal. This reaction also works if the ketone is reduced to a secondary alcohol with LiAlH4 (69% yield). My questions are: Are enol thioethers the same as thioenol ethers? If not, please draw an example. If they are the same, does the deprotection take place through an aldehyde intermediate as stated above? Also, please explain the mechanism of this deprotection. Ref. 1: Warren J. Ebenezer, Paul Wight, in Comprehensive Organic Functional Group Transformations, 1995 Ref. 2: Modern Methods of Organic Synthesis 4th edition, page no. 10 Answer: Mechanism: 1- Hydroxide anion adds to the =CHSBu carbon in a Michael addition, giving the enolate anion of the carbonyl and -CHOH(SBu). 2- The enolate anion deprotonates the newly arrived -OH group to reform the carbonyl group. 3- The alkoxide anion displaces BuS- to form the aldehyde. 4- A retro-aldol gives the unsubstituted carbonyl.
{ "domain": "chemistry.stackexchange", "id": 17732, "tags": "organic-chemistry, reaction-mechanism, organosulfur-compounds, protecting-groups, enolate-chemistry" }
What happens to the neighboring star of a type Ia supernova?
Question: Supernovae of type "Ia" are those without hydrogen present, but with evidence of silicon present in the spectrum. The most accepted theory is that this type of supernova is the result of mass accretion on a carbon-oxygen white dwarf from a companion star, usually a red giant. This can happen in very close binary star systems. Both stars have the same age and models indicate that they almost always have a similar mass. But usually one of the stars is more massive than the other, and the more massive star evolves faster (leaves the main sequence) before the lower mass star does. A star with less than 8-9 solar masses evolves at the end of its life into a white dwarf, so such binary systems consist of a white dwarf and a red giant which has greatly expanded its outer layers. During the explosion an amount of carbon undergoes fusion that a normal star would take centuries to use up. This enormous release of energy creates a powerful shockwave that destroys the star, ejecting all its mass at speeds of around 10,000 km/s. The energy released in the explosion also causes an extreme increase in brightness, so these supernovae become the brightest of all, emitting around 10^44 J (1 foe). Normally there are no traces of the star that caused the cataclysm, but only traces of superheated gas and dust that is rapidly expanding. What happens to the neighboring star? Answer: In answer to your question of "What happens to the neighboring star?", according to the Johns Hopkins folks, it gets blown away: (Credit Johns Hopkins) I would be a little skeptical of the certainty of this claim only because we have not been able to observe any of these Type Ia explosions up close while it is happening. That's why the Type Ia SN 2011fe is so important to us. It is merely 21 million light years away, instead of a billion.
{ "domain": "physics.stackexchange", "id": 3029, "tags": "astrophysics, supernova" }
How are setup and hold and timing constraints handled when reading an address from memory?
Question: When an operand encodes an address, and that address changes the "memory address register" and the word in memory being addressed, it seems like timing issues could be a problem. Example: the LDA instruction (load address from memory, load word from that address), illustrated in the timing diagram below. How is this usually solved? (It could be different across different CPUs, but there are probably trends.) Is a temporary register used (the "memory address register" could load and output in two separate steps), or is the timing issue just not a problem and tends to work out regardless? Example LDA with a two step "memory address register": when LDA_1 => -- Load accumulator from operand addr_in <= dr; state := LDA_2; when LDA_2 => addr_out <= addr_in; state := LDA_3; when LDA_3 => accu <= dr; state := load_opcode; Timing with a temporary "address in" register illustrated below. Answer: There are many different approaches that can be used, and the optimal solution depends on the specific requirements and constraints of the system. When using a temporary register, the address is typically loaded into the MAR in a single clock cycle, and then used to access the word in memory in the next clock cycle. This ensures that there is sufficient time for the address to be stable before it is used to access memory. Another way is to use a dedicated memory controller, which is responsible for managing the timing of read and write operations to memory. The memory controller can be designed to ensure that the timing constraints of the memory are met, regardless of the speed of the CPU. These approaches can help ensure that the read operation is successful and the timing constraints of the memory are met. However, as I already said, the specific solution will depend on the design of the CPU and the memory system.
{ "domain": "cs.stackexchange", "id": 20723, "tags": "cpu" }
Purpose of backpropagation in neural networks
Question: I've just finished conceptually studying linear and logistic regression functions and their optimization as preparation for neural networks. For example, say we are performing binary classification with logistic regression; let's define variables: $x$ - vector containing all inputs. $y$ - vector containing all outputs. $w_{0}$ - bias weight variable. $W=(w_1,...,w_{n})$ - vector containing all weight variables. $f(x_i)=w_{0}+\sum_{j=1}^{n}x_{ij}w_{j}=w_{0}+x_i^{T}W$ - weighted sum of the inputs. $p(x_{i})=\frac{1}{1+e^{-f(x_i)}}$ - logistic activation function (sigmoid), representing the conditional probability that $y_i$ will be 1 given the observed values in $x_i$. $L=-\frac{1}{N} \sum^{N}_{i=1} y_i\ln(p(x_i))+(1-y_i)\ln(1-p(x_i))$ - binary cross entropy loss function (Kullback-Leibler divergence of Bernoulli random variables plus entropy of the activation function representing probability). $L$ is a multi-dimensional function, so it must be differentiated with partial derivatives: $$\frac{\partial{L}}{\partial{w}}$$ Then, the chain rule gives: $$\frac{\partial{L}}{\partial{w_j}}=\frac{\partial{L}}{\partial{p_i}} \frac{\partial{p_i}}{\partial{w_j}}$$ After doing a few calculations, the derivative of the loss function with respect to a weight is: $$(p_i-y_i)x_i$$ So we got the derivative of the loss function, and all weights are trained separately with gradient descent. What does backpropagation have to do with this? To be more precise, what's the point of automatic differentiation when we could simply plug in variables and calculate the gradient on every step, correct? In short: we already have the derivative calculated, so what's the point of calculating it on every step when we can just plug in the variables? Is backpropagation just a fancy term for weights being optimized on every iteration? Answer: Is backpropagation just a fancy term for weights being optimized on every iteration? Almost. Backpropagation is a fancy term for using the chain rule.
It becomes more useful to think of it as a separate thing when you have multiple layers, as unlike your example where you apply the chain rule once, you do need to apply it multiple times, and it is most convenient to apply it layer-by-layer in reverse order to the feed forward steps. For instance, if you have two layers, $l$ and $l-1$ with weight matrix $W^{(l)}$ linking them, non-activated sum for a neuron in each layer $z_i^{(l)}$ and activation function $f()$, then you can link the gradients at the sums (often called logits as they may be passed to logistic activation function) between layers with a general equation: $$ \frac{\partial L}{\partial z^{(l-1)}_j} = f'(z^{(l-1)}_j) \sum_{i=1}^{N^{(l)}} W_{ij}^{(l)} \frac{\partial L}{\partial z^{(l)}_i}$$ This is just two steps of the chain rule applied to generic equations of the feed-forward network. It does not provide the gradients of the weights, which is what you eventually need - there is a separate step for that - but it does link together layers, and is a necessary step to eventually obtain the weights. This equation can be turned into an algorithm that progressively works back through layers - that is back propagation. To be more precise, what's the point of automatic differentiation when we could simply plug in variables and calculate gradient on every step, correct? That is exactly what automatic differentiation is doing. Essentially "automatic differentiation" = "the chain rule", applied to function labels in a directed graph of functions.
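The hand-derived gradient can be checked against "plugging in variables" numerically. A small sketch (single sample, single weight; with the loss defined with a leading minus sign, the chain rule gives dL/dw = (p - y)x, the negative of the log-likelihood gradient):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, y):
    """Binary cross-entropy for a single sample with a single weight."""
    p = sigmoid(w * x)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

x, y, w = 2.0, 1.0, 0.3  # arbitrary sample and weight for illustration

# Hand-derived gradient of the loss (chain rule applied once)
p = sigmoid(w * x)
analytic = (p - y) * x

# Central finite-difference estimate: literally "plugging in variables"
eps = 1e-6
numeric = (loss(w + eps, x, y) - loss(w - eps, x, y)) / (2.0 * eps)
```

Automatic differentiation produces the `analytic` value by composing such chain-rule steps through the computation graph, which is exactly what backpropagation does layer by layer.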
{ "domain": "datascience.stackexchange", "id": 3719, "tags": "machine-learning, neural-network, logistic-regression, backpropagation, loss-function" }
What is an effective and efficient way to read research papers?
Question: I will be a grad student in condensed matter theory starting this fall. As an undergrad, I did the basic physics and math courses as well as a few grad classes (QFT, analysis, solid state physics, etc.) When I start reading research papers, I often feel overwhelmed because there is so much that I don't know and I find it hard to decide which points to gloss over and which points to spend time on and understand more thoroughly (which in my case, would probably require supplementary reading of textbooks or related papers) What are some things to keep in mind while reading a paper so that: I get a general overview of the paper and more useful insights into its parts I can do the above reasonably fast (say, finish reading at least 1 paper a week for a start) You don't have to be specific to condensed matter theory papers when you answer. Answer: Comment: A Prof once said to me you should read the abstract, look at the pictures, then read the conclusion at the end, and then start reading the paper. It's only an overhead of minutes and you're slightly less lost and get an idea what the author thinks the value of the paper is. What I also like to do when taking notes is keeping in mind the search for what are appropriate lists for data. I.e. make it a task to find out what sort of collections of factoids would be useful w.r.t. the task you set out to do. It helps form an appropriate hierarchy of things in your head, which is different for every subject.
{ "domain": "physics.stackexchange", "id": 12357, "tags": "soft-question, education" }
Integrating the Schwarzschild geodesics equations
Question: I am trying to make a graph similar to this one from the Wikipedia article about Schwarzschild geodesics: There is an equation like this: $$ \varphi = \int\frac{dr}{r^2\sqrt{\frac{1}{b^2}-(1-\frac{r_s}{r})\frac{1}{r^2}}}.$$ I don't understand how they managed to draw this picture from that equation. Particular questions: If it is a polar plot, usually it is the opposite, like $r(\varphi)$, not $\varphi(r)$. How can I draw the polar plot if I have a $\varphi(r)$ function? What is the integration interval here? $ \varphi = \int_{?}^{?}...$ Answer: To your first question: Using a parametric plot in $r$ or $\phi$ and plotting $\left\{r\cos(\phi),r\sin(\phi)\right\}$. To your second question: that is where things become tricky. Depending on what kind of geodesic comes out, the radius shrinks or increases, and the angle is also kind of problematic for closed orbits. A formulation with proper time as the parameter is better suited for implementation. This Wolfram Demonstration Geodesics in Schwarzschild Space of Niels Walet has a simple implementation of an integration over proper time. That being said, with a bit of tweaking one can of course implement the differential equation for $\phi$. J. B. Hartle provides a Mathematica Notebook here as a supplement to his book Gravity: An Introduction to Einstein's General Relativity.
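Both points (the parametric trick and the choice of integration interval) can be sketched numerically in pure Python. The values of r_s, b, the grid, and the cutoffs below are arbitrary illustrative choices; the lower limit is the turning point where the radicand vanishes, and the crude trapezoid rule undercounts the contribution near it:

```python
import math

r_s = 1.0   # Schwarzschild radius (sets the length unit)
b = 20.0    # impact parameter of the light ray

def radicand(r):
    # the expression under the square root in the phi(r) integral
    return 1.0 / b**2 - (1.0 - r_s / r) / r**2

# Turning point r0 (outermost root of radicand(r) = 0), found by bisection
lo, hi = 1.5 * r_s, b
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if radicand(mid) < 0.0:
        lo = mid
    else:
        hi = mid
r0 = hi

# Accumulate phi(r) = integral of dr / (r^2 sqrt(radicand)) outward from
# just above r0, trapezoid rule on a log-spaced grid.
def integrand(r):
    return 1.0 / (r**2 * math.sqrt(radicand(r)))

n_steps = 4000
r_min, r_max = 1.01 * r0, 100.0 * b
grid = [r_min * (r_max / r_min) ** (i / n_steps) for i in range(n_steps + 1)]

phi = 0.0
points = [(grid[0], 0.0)]
for r_a, r_b in zip(grid, grid[1:]):
    phi += 0.5 * (integrand(r_a) + integrand(r_b)) * (r_b - r_a)
    points.append((r_b, phi))

# The polar curve phi(r) becomes a parametric plot {r cos(phi), r sin(phi)}
xy = [(r * math.cos(p), r * math.sin(p)) for r, p in points]
```

The accumulated angle over the half-trajectory comes out near pi/2 plus half the light deflection, and the `xy` pairs are exactly the points one would hand to a plotting routine; the other half of the orbit is obtained by mirror symmetry about the turning point.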
{ "domain": "physics.stackexchange", "id": 38212, "tags": "general-relativity, black-holes, metric-tensor, geodesics" }
Laser Retro value not returned by the RaySensor sensor
Question: Hi there, I am setting the laser_retro value in the link of my model as following: <collision name="basic_square_garden_collision"> <laser_retro>2000.0</laser_retro> .... And then reading it out as following: this->parent_ray_sensor_->GetLaserShape()->GetRetro(ja) this->parent_ray_sensor_ is cast to RaySensor. Now the problem is that the laser_retro value does not seem to be set and is then also not read. The latter I verified with the following command: gztopic echo /gazebo/default/lawnmower/base_footprint/lawnsensor/scan Does anyone have an example for setting and reading out this value? Originally posted by dejanpan on Gazebo Answers with karma: 60 on 2013-04-12 Post score: 0 Answer: Setting up the return as follows worked for me but not sure if this is the right way: diff -r 5ebae74ef4ea -r b1e29966c852 gazebo/physics/Collision.cc --- a/gazebo/physics/Collision.cc Thu Apr 04 16:55:58 2013 -0700 +++ b/gazebo/physics/Collision.cc Fri Apr 12 21:27:45 2013 -0700 @@ -83,6 +83,13 @@ { Entity::Load(_sdf); + float retro; + if (this->sdf->HasElement("laser_retro")) + { + retro = this->sdf->GetElement("laser_retro")->GetValueDouble(); + this->SetLaserRetro(retro); + } + Originally posted by dejanpan with karma: 60 on 2013-04-12 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by nkoenig on 2013-04-24: I believe this has been merged into Gazebo default, and will come out in Gazebo 1.8.0
{ "domain": "robotics.stackexchange", "id": 3209, "tags": "gazebo" }
How to use data of subscriber to publish and command the robot?
Question: My main aim is to turn the robot to a specific angle. What I am trying to do is this: I have defined a subscriber and a publisher in a single node. I subscribe to the odometry data of the Gazebo model, from which I calculate the yaw of the robot, and until the robot reaches the specified yaw I command it a specific angular velocity. But the problem is that when I define yaw in odometryCb it is a local variable, so I can't use it in the publisher part. So I tried defining the variables in the publisher itself, but it reports that sub has no subscribe attribute. Can anyone help me figure out how to deal with this?

    #!/usr/bin/env python
    import rospy
    import roslib; roslib.load_manifest('firstrobot')
    from std_msgs.msg import String
    from nav_msgs.msg import Odometry
    from std_msgs.msg import Header
    from geometry_msgs.msg import Twist
    import time
    import math

    TEXTFILE = open("data_pub.csv", "w")
    TEXTFILE.truncate()

    def odometryCb(msg):
        import csv
        #csvData=[['pose_x','pose_y','twist_x','twist_z', 'Orientation' ,'time']]
        x = msg.pose.pose.position.x
        y = msg.pose.pose.position.y
        v_x = msg.twist.twist.linear.x
        w = msg.twist.twist.angular.z
        q0 = msg.pose.pose.orientation.w
        q1 = msg.pose.pose.orientation.x
        q2 = msg.pose.pose.orientation.y
        q3 = msg.pose.pose.orientation.z
        yaw = math.degrees(math.atan2(2.0*(q0*q3 + q1*q2), (1.0 - 2.0*(q2*q2 + q3*q3))))
        t = 1.0*msg.header.stamp.secs + 1.0*(msg.header.stamp.nsecs)/1000000000
        row = [v_x, w, yaw, t]
        with open('data.csv', 'a') as csvFile:
            writer = csv.writer(csvFile)
            writer.writerow(row)
        csvFile.close()

    def publisher():
        pub = rospy.Publisher('robot_diff_drive_controller/cmd_vel', Twist, queue_size=10)
        sub = rospy.Subscriber("robot_diff_drive_controller/odom", Odometry, odometryCb)
        rospy.init_node('publisher', anonymous=True)
        rate = rospy.Rate(10)  # 10hz
        cmd_vel = Twist()
        odom = Odometry()
        cmd_vel.linear.x = 0.0
        cmd_vel.angular.z = 0.3
        now = time.time()
        du = time.time()
        x = sub.subscribe(odom.pose.pose.position.x)
        y = sub.subscribe(odom.pose.pose.position.y)
        v_x = sub.subscribe(odom.twist.twist.linear.x)
        w = sub.subscribe(odom.twist.twist.angular.z)
        q0 = sub.subscribe(odom.pose.pose.orientation.w)
        q1 = sub.subscribe(odom.pose.pose.orientation.x)
        q2 = sub.subscribe(odom.pose.pose.orientation.y)
        q3 = sub.subscribe(odom.pose.pose.orientation.z)
        yaw = math.degrees(math.atan2(2.0*(q0*q3 + q1*q2), (1.0 - 2.0*(q2*q2 + q3*q3))))
        while(yaw < 90):
            pub.publish(cmd_vel)
            rate.sleep()

    # In ROS, nodes are uniquely named. If two nodes with the same
    # name are launched, the previous one is kicked off. The
    # anonymous=True flag means that rospy will choose a unique
    # name for our 'listener' node so that multiple listeners can
    # run simultaneously.
    # spin() simply keeps python from exiting until this node is stopped
    if __name__ == '__main__':
        publisher()

Originally posted by Maulik_Bhatt on ROS Answers with karma: 18 on 2018-08-26
Post score: 0

Original comments

Comment by gvdhoorn on 2018-08-26: Just a note:

    x = sub.subscribe(odom.pose.pose.position.x)

You cannot subscribe to single fields. Only to topics.

Comment by Maulik_Bhatt on 2018-08-26: Then, what should the code be instead?

Comment by gvdhoorn on 2018-08-26: I hate to give these sorts of answers, but do really try to complete the tutorials. They should show you how to do this. Note: ROS != Simulink. A topic subscription is not a signal (or it sort of is, but then a complex one, not a single float or integer).

Comment by Maulik_Bhatt on 2018-08-26: Actually, I found no tutorial which explains how to use the data of the subscriber in the publisher of the same node. Can you provide me the link to those tutorials?

Comment by gvdhoorn on 2018-08-26: I was referring to subscribing to topics instead of trying to subscribe to individual fields. As to publishers and subscribers in the same node: please use Google; search for: publisher subscriber same node site:answers.ros.org.
Comment by Choco93 on 2018-08-27: You can do the calculation and everything within the subscriber; just define the publisher outside and put pub.publish in the subscriber.

Comment by Maulik_Bhatt on 2018-08-27: I tried doing that by changing the code to the following, but nothing happens after I run the code:

    #!/usr/bin/env python
    import rospy
    import roslib; roslib.load_manifest('firstrobot')
    from std_msgs.msg import String
    from nav_msgs.msg import Odometry
    from std_msgs.msg import Header
    from geometry_msgs

Comment by Maulik_Bhatt on 2018-08-27: I tried that by putting this code in the subscriber; still the code is not working:

    rospy.init_node('publisher', anonymous=True)
    rate = rospy.Rate(10)
    cmd_vel = Twist()
    cmd_vel.linear.x = 0.0
    cmd_vel.angular.z = 0.3
    now = time.time()
    while(yaw < 90):
        pub.publish(cmd_vel)

Answer: I finally got my code working with the code below.

    #!/usr/bin/env python
    import rospy
    import roslib; roslib.load_manifest('firstrobot')
    from std_msgs.msg import String
    from nav_msgs.msg import Odometry
    from std_msgs.msg import Header
    from geometry_msgs.msg import Twist
    import time
    import math

    TEXTFILE = open("data_publisher.csv", "w")
    TEXTFILE.truncate()

    pub = rospy.Publisher('robot_diff_drive_controller/cmd_vel', Twist, queue_size=10)
    cmd_vel = Twist()
    cmd_vel.linear.x = 0.0
    cmd_vel.angular.z = 0.3

    def odometryCb(msg):
        import csv
        x = msg.pose.pose.position.x
        y = msg.pose.pose.position.y
        v_x = msg.twist.twist.linear.x
        w = msg.twist.twist.angular.z
        q0 = msg.pose.pose.orientation.w
        q1 = msg.pose.pose.orientation.x
        q2 = msg.pose.pose.orientation.y
        q3 = msg.pose.pose.orientation.z
        yaw = math.degrees(math.atan2(2.0*(q0*q3 + q1*q2), (1.0 - 2.0*(q2*q2 + q3*q3))))
        t = 1.0*msg.header.stamp.secs + 1.0*(msg.header.stamp.nsecs)/1000000000
        row = [v_x, w, yaw, t]
        with open('data_publisher.csv', 'a') as csvFile:
            writer = csv.writer(csvFile)
            writer.writerow(row)
        csvFile.close()
        if yaw <= 90:
            pub.publish(cmd_vel)

    def listener():
        rospy.init_node('oodometry', anonymous=True)
        rospy.Subscriber("robot_diff_drive_controller/odom", Odometry, odometryCb)
        rospy.spin()

    if __name__ == '__main__':
        listener()

Originally posted by Maulik_Bhatt with karma: 18 on 2018-08-30
This answer was ACCEPTED on the original site
Post score: 0
{ "domain": "robotics.stackexchange", "id": 31643, "tags": "gazebo, navigation, odometry, ros-kinetic" }
What does "Active Weather Patterns" mean?
Question: I just read that Cassini discovered that Titan has "active weather patterns". What does that mean? Answer: It means that Titan has weather (driven by methane rather than water) and that its weather changes with the seasons. Cassini has been observing Titan for almost half of a Titan year, which is 29.457 times longer than our year. Titan's weather patterns (where it's cloudy, where it rains) have visibly changed over this time, just as weather patterns here change from summer to winter and winter to summer.
{ "domain": "astronomy.stackexchange", "id": 2270, "tags": "saturn, titan, weather, climate, cassini" }
How can quantum entanglement not be non-local?
Question: I know this kind of question has been brought up many times. I have read many posts here regarding this, but I still have a problem with a certain aspect of it, so please bear with me. Let's consider the standard case where one of two singlet-state electrons is sent to Alice and the other is sent to spacelike-separated Bob. This is repeated many times. I know that it is impossible for Alice to transmit information by measuring her electrons. This is because the density operator for the state of Bob is invariant under a rotation of Alice's axes. To show that, we note that the state prior to Alice's measurement is also rotation invariant (using the well-known rotation matrices for spin-1/2 objects): $$\frac{|\uparrow_z\downarrow_z\rangle-|\downarrow_z\uparrow_z\rangle}{\sqrt{2}}=\frac{|\uparrow_n\downarrow_n\rangle-|\downarrow_n\uparrow_n\rangle}{\sqrt{2}} $$ The state has the same components no matter in which basis it is expressed. That means that no matter which axis Alice chooses, she will always get half of her results spin up and the other half spin down. Now if we use the "model" that Alice's measurement collapses the composite-state wavefunction, it follows that the particles Bob receives will be a statistical mixture of one half spin up, one half spin down. This state is also independent of the basis, as can be shown using the same rotation matrices. $$\rho=\tfrac{1}{2}|\uparrow_z\rangle\langle\uparrow_z|+\tfrac{1}{2}|\downarrow_z\rangle\langle\downarrow_z|=\tfrac{1}{2}|\uparrow_m\rangle\langle\uparrow_m|+\tfrac{1}{2}|\downarrow_m\rangle\langle\downarrow_m| $$ This means that the statistical distribution of ups and downs that Bob measures is completely independent of Alice's choice of axis, or even of her choice to measure her electron at all. Bob will statistically always get a fifty-fifty result. But nevertheless, and this is my question, Alice's measurement makes Bob's wavefunction collapse to a pure state, and it seems to me this will leave an imprint on Bob's measurements in a nonlocal fashion.
Let's say Alice only measures along the z axis, and Bob chooses a random axis for every measurement. If Alice tells Bob the outcomes of her measurements, Bob will be able to divide his outcomes into two groups: one where the corresponding spin of Alice was measured up, and the other for the case where Alice got down. And these two groups will show a perfect correlation between Bob's axis and Alice's z-axis. Every time Alice found spin up and Bob also chose the z-axis, he will find that his measurement was down, and the probability distribution will depend on the angle of his axis relative to Alice's z-axis by the well-known formula $$|\langle \chi_z|\chi_n\rangle|^2=\cos^2\left(\frac{\theta}{2}\right)$$ where $\theta$ is the angle along which Bob measured the spin, relative to the z-axis. And Alice can change the axis that Bob's measurements are biased toward. If she chooses another axis, then Bob's outcomes will show the correlation with respect to this new axis. This correlation will be "burned" into Bob's list of outcomes one by one, every time Alice measures an electron. Of course, Bob will only see this correlation when he receives Alice's results, which can only be transmitted via a classical channel. But I mean, "something" has to change in the very moment Alice makes her measurement, because the correlation between the spins is already there, even before Alice's information has reached Bob. How can we rescue locality here?
(I guess this is deeply connected to the question of whether the wavefunction actually collapses or something else is going on. But how can we say that there is not actually a collapse if the model of the collapse gives us the right probability distribution for the angles? Maybe you can address this.) Edit: I think these answers don't answer my question, because I explained that I know Alice can't send information to Bob this way. Nevertheless, it seems to me there is some kind of nonlocality involved: a nonlocal event that is simply useless for sending information. I wonder whether there is a way of looking at it that doesn't involve any nonlocal event. Answer: You have reached the realm of interpretation of the quantum formalism. No one doubts that the formalism is correct (at least at the level of the entanglement effects you are referencing). But what is actually happening is a matter of interpretation. Most physicists would agree with you that the results of the various Bell test experiments doom local interpretations of QM. But if you dig, you may find some which preserve locality at the expense of something else. My personal favorite is the transactional interpretation of quantum mechanics, which (in a sense) preserves local interactions but avoids conflict with the predictions of the apparently nonlocal QM formalism by postulating time-symmetric (including backward-in-time) signals. You see, locality can indeed be saved. But at what cost?
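The $\cos^2(\theta/2)$ statistics in the grouped data can be checked directly from the singlet state itself. Here is a small numerical sketch (my own illustration, independent of any interpretation question): build the singlet, project on "Alice up along z, Bob down along an axis tilted by theta", and condition on Alice's outcome.

```python
import numpy as np

def spin_up(theta):
    """Spin-up eigenvector for an axis tilted by theta from z (axis in the x-z plane)."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def spin_down(theta):
    """Spin-down eigenvector for the same tilted axis."""
    return np.array([-np.sin(theta / 2), np.cos(theta / 2)])

# Singlet (|ud> - |du>)/sqrt(2) in the {|uu>, |ud>, |du>, |dd>} product basis
singlet = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

theta = 0.73  # Bob's axis, measured from Alice's z axis (arbitrary value)
bra = np.kron(spin_up(0.0), spin_down(theta))  # Alice up-z, Bob down-theta
p_joint = abs(bra @ singlet) ** 2   # P(Alice up AND Bob down along theta)
p_cond = p_joint / 0.5              # condition on Alice's up outcome, P = 1/2

assert np.isclose(p_cond, np.cos(theta / 2) ** 2)
```

Bob's unconditional statistics stay fifty-fifty whatever Alice does; only the groups conditioned on Alice's records show the angle dependence.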
{ "domain": "physics.stackexchange", "id": 43985, "tags": "quantum-mechanics, quantum-entanglement, faster-than-light, bells-inequality, non-locality" }
Assigning parameters in perpendicular axes: D-H is a must
Question: So I was given a course assignment to assign frames and write D-H parameters for this robot using only 5+1 frames (with Frame $\{5\}$ at $P$ and Frame $\{0\}$ at $O$). And I assigned them like this: My question is: from Frame $\{1\}$ to Frame $\{2\}$, what are the joint distances $a$ and $d$? The best answer I could get was 0. But obviously it should be zero for one axis and $a_1$ for the other. What's wrong? I have read a similar question here, but the answer points me to another method which is impossible for me. Edit: No matter whether I put $a_1$ in $a$ $$(\alpha,a,d,\theta)=(-90^\circ,a_1,0,\theta_2-90^\circ)$$ or in $d$ $$(\alpha,a,d,\theta)=(-90^\circ,0,a_1,\theta_2-90^\circ)$$ the joint distance $a_1$ does not appear in $z$. What it gives out is $$\left( \begin{array}{cccc} \sin{\theta_2} & \cos{\theta_2} & 0 & a_1 \\ 0 & 0 & 1 & 0 \\ \cos{\theta_2} & -\sin{\theta_2} & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right) \text{ or } \left( \begin{array}{cccc} \sin{\theta_2} & \cos{\theta_2} & 0 & 0 \\ 0 & 0 & 1 & a_1\\ \cos{\theta_2} & -\sin{\theta_2} & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)$$ Obviously, $a_1$ should appear in the $Z$-translation instead! Answer: I found this video helpful when learning about the DH method. DH is all about describing the differences between coordinate systems using rotation and translation about/along the X and Z axes of the coordinate system. EDIT: It seems that your frames {0} and {1} are not in the correct locations if you want to follow the DH convention. When you are on a rotating (pivoting) joint (such as the first joint), you should move the coordinates up to the next joint. That should make joint distances a and d be 0, allowing the DH convention to sufficiently describe the differences between frames.
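To see where $a$ and $d$ end up in the homogeneous transform, here is a minimal sketch of the classic (distal) DH matrix. The convention used here is an assumption on my part (Craig's modified DH orders the elementary transforms differently, which changes where the offsets land):

```python
import numpy as np

def dh_transform(alpha, a, d, theta):
    """Classic (distal) DH transform: Rot_z(theta) Trans_z(d) Trans_x(a) Rot_x(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

a1 = 2.0                       # example link offset (assumed value)
theta2 = np.deg2rad(30.0)      # example joint angle (assumed value)

# With the offset placed in d, the translation lies on the local z axis
T = dh_transform(np.deg2rad(-90.0), 0.0, a1, theta2 - np.pi / 2)
assert np.allclose(T[:3, 3], [0.0, 0.0, a1])

# With the offset placed in a, nothing lands on the z axis
T2 = dh_transform(np.deg2rad(-90.0), a1, 0.0, theta2 - np.pi / 2)
assert np.isclose(T2[2, 3], 0.0)
```

With this convention the translation column is $(a\cos\theta,\ a\sin\theta,\ d)$, so putting the offset in $d$ is what places it on the local z axis for any $\theta$, which is the behaviour the question expected.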
{ "domain": "robotics.stackexchange", "id": 1194, "tags": "dh-parameters, frame" }
Block Sliding Down Hemisphere
Question: A block of mass m slides down a hemisphere of mass M. What are the accelerations of each mass? Assume friction is negligible. $a_M$ = acceleration of the hemisphere; $N_m$ = normal force of M onto m; $N_M$ = normal force of the ground onto M. So from the FBDs, I come up with $$\sum \text{F}_{xm}= mg\sin \theta = m(a_t - a_M \cos \theta)$$ $$\sum \text{F}_{ym} = N_m - mg \cos \theta = -m(a_r + a_M \sin \theta)$$ $$\sum \text{F}_{xM} = -N_m \sin \theta = Ma_M$$ I need another equation, so I tried going the route of work-energy to find the tangential speed of the block sliding on the hemisphere, but can I look at the energy of the block by itself? I figure I cannot, as it is atop an accelerating body. However, if I can consider the energy of the block by itself to find the tangential speed, then I can solve for $a_M$, $$ a_M = gm\sin \theta \frac{2-3\cos \theta}{M-m\sin ^2 \theta} $$ which goes to 0 when M >> m, and so then $$a_t = g\sin \theta$$ in that case, which checks out; however, I'm still a little wary of this. I'm rather stuck here, so any help would be appreciated. Answer: I did not check your free body diagrams thoroughly, but they look correct. One comment, though: I do not think that using a polar separation of the accelerations is particularly useful for this problem, since the obvious origin for such a system is accelerating, as you well indicate. I think that your missing equations are just your geometrical constraints. You have not used them. I noticed that we are using different conventions for the coordinates. Mine is just a static Cartesian coordinate system with the origin at the center of the hemisphere at the beginning, but it does not move with the hemisphere. I am assuming we can treat this problem in two dimensions.
Assuming that the block has not lost contact with the hemisphere, you have $$(x_m-x_M)^2 + y_m^2=R^2,$$ which is an equation that is valid at all times; therefore it relates the three accelerations and velocities in a complicated manner that appears explicitly upon differentiating twice. And the fifth equation is just $$\tan\theta=\frac{x_m-x_M}{y_m},$$ with $\theta$ measured from the vertical, as in your figure. You therefore have the normal, the three dynamical variables, and a fourth dependent dynamical variable, $\theta$, introduced just to ease the notation. The FBD equations in my case are \begin{align} m \ddot{y}_m &= N\cos\theta -mg\\ m \ddot{x}_m &= N \sin\theta\\ M \ddot{x}_M &= -N \sin\theta \end{align} Five equations for five unknowns, and a very ugly solution that I do not think exists in closed form. You asked about energy. Indeed, you can use this approach since there is no friction. Actually, the energy approach is useful even when the objects are no longer in contact. Let's assume that the block starts moving in the position you depict in your picture, that is, $y_m(t=0)=h$ and $x_m(t=0)=\sqrt{R^2-h^2}$, and $x_M(t=0)=0$, with all velocities equal to zero at $t=0$. Therefore, conservation of energy dictates $$\frac{1}{2}M\dot{x}_M^2+\frac{1}{2}m(\dot{x}_m^2+\dot{y}_m^2)+mgy_m=mgh.$$
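As a numerical sketch (my own illustration, not part of the answer above): taking $\theta$ measured from the vertical as in the question's figure, the free-body equations plus the twice-differentiated constraint form a linear system for the initial accelerations. At $t=0$ all velocities vanish, so the velocity-squared terms of the differentiated constraint drop out and it reduces to $\sin\theta\,(\ddot{x}_m-\ddot{x}_M)+\cos\theta\,\ddot{y}_m=0$.

```python
import numpy as np

def initial_accelerations(m, M, theta, g=9.81):
    """Solve for (N, x''_m, y''_m, x''_M) at t = 0, starting from rest."""
    s, c = np.sin(theta), np.cos(theta)
    # Unknown vector: [N, ax_m, ay_m, ax_M]
    A = np.array([
        [c, 0.0,  -m, 0.0],  # m*ay_m = N*cos(theta) - m*g
        [s,  -m, 0.0, 0.0],  # m*ax_m = N*sin(theta)
        [s, 0.0, 0.0,   M],  # M*ax_M = -N*sin(theta)
        [0.0,  s,   c,  -s], # constraint: s*(ax_m - ax_M) + c*ay_m = 0
    ])
    b = np.array([m * g, 0.0, 0.0, 0.0])
    return np.linalg.solve(A, b)

# Sanity check: for M >> m the hemisphere stays put and the block's
# acceleration magnitude reduces to the asker's limit g*sin(theta).
N, ax, ay, aM = initial_accelerations(m=1.0, M=1e9, theta=0.4)
assert abs(np.hypot(ax, ay) - 9.81 * np.sin(0.4)) < 1e-6
assert abs(aM) < 1e-6
```

Solving the same linear system symbolically gives $N = mMg\cos\theta/(M+m\sin^2\theta)$ at the start, consistent with the $M\gg m$ check above.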
{ "domain": "physics.stackexchange", "id": 15438, "tags": "homework-and-exercises, newtonian-mechanics" }
Why is an excess number of eosinophils consistent with a violent allergic reaction?
Question: What roles do eosinophils play in allergic reactions? Answer: The primary function of eosinophils is to release digestive enzymes and destroy extracellular parasites, but they also contain pro-inflammatory molecules and cytokines in their granules. Eosinophils play an important role in late-response inflammation. The primary inflammation is caused by degranulation upon improper hyperactivation of mast cells when they encounter an antigen/allergen (they have IgE bound to the receptors on their membrane). They release cytokines such as IL-4 and IL-13, which recruit Th2 lymphocytes. Th2 lymphocytes release, e.g., IL-5, which attracts eosinophils. Eosinophils release the contents of their granules, which can degrade the tissue of an organism. Their cytokines attract more eosinophils and other immune cells (macrophages, Th2 lymphocytes and mast cells), and since this process is not controlled, they accumulate in tissues, which can lead to an allergic reaction. Good book contemplating this topic
{ "domain": "biology.stackexchange", "id": 5211, "tags": "biochemistry, cell-biology, immunology, medicine" }
Conserved current from the three $SU(2)$ transformations
Question: We are asked to show that the following Lagrangian is invariant under the three $SU(2)$ transformations $\Phi \rightarrow \exp\left(\frac{i}{2}\alpha_j\sigma^j\right) \Phi$, where $\Phi$ is a doublet complex scalar field $$\Phi =( \phi_1\ \phi_2)^{T}.$$ The given Lagrangian is $$ \mathcal{L} = \partial_\mu\Phi^{\dagger}\partial^\mu\Phi-m^2\Phi^{\dagger}\Phi $$ I have re-written the Lagrangian as $$ \mathcal{L} = g^{\mu\nu}\partial_\mu{\phi_i}^\dagger\partial_\nu{\phi_i}-m^2\phi^{\dagger}_i\phi_i $$ where $i = 1,2$. I derived $\delta\mathcal{L}=0$ using the fact that $\vec{\sigma}_j = \vec{\sigma}_j^\dagger$. My problem is finding $\delta\phi_i$ and $\delta\phi_i^\dagger$. How can I expand the exponential? And also, will I get 3 conserved currents for each complex field? How would I calculate $$\alpha_j(j^\mu)_i = \frac{\partial\mathcal{L}}{\partial({\partial_\mu\phi_i})}\delta\phi_i+\frac{\partial\mathcal{L}}{\partial({\partial_\mu\phi^\dagger_i})}\delta\phi^\dagger_i\,?$$ Edit 1: Attempting to calculate $\delta\phi_i$, I expand the exponential to linear order since the parameters $\alpha_j \ll 1$. Hence $\delta\phi_i= I + i\alpha_j\sigma^j$. Assuming this is true, what does that summation look like? Would each $j$ have a different charge? Answer: You might be misunderstanding the notation. $$ \delta \Phi = \exp\left({\frac{i}{2}{\alpha_j\sigma^j}}\right) \Phi -\Phi +O(\alpha^2)= \frac{i}{2}\alpha_j\sigma^j \Phi, $$ so that $$ \delta \begin{pmatrix} \phi_1\\ \phi_2 \end{pmatrix} = \frac{i}{2} \begin{pmatrix} \alpha_3&\alpha_1-i\alpha_2\\ \alpha_1+i\alpha_2&-\alpha_3 \end{pmatrix} \begin{pmatrix} \phi_1\\ \phi_2 \end{pmatrix} , $$ which you may expand to get $\delta\phi_j$. Now $$ \frac{i}{2} \alpha_j ~ (\partial^\mu \Phi^\dagger \sigma^j \Phi - \Phi^\dagger \sigma^j \partial^\mu \Phi) $$ represents three currents, an $SU(2)$ adjoint triplet, the coefficients of the three parameters $\alpha_j$, normally omitted. Did you compute them?
Indeed, integrating their zero components produces three isocharges, closing into an su(2) Lie algebra.
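As a quick numerical sanity check (a sketch of mine, not part of the original answer), one can expand the exponential to first order with explicit Pauli matrices and compare against the 2x2 matrix displayed above:

```python
import numpy as np

# Pauli matrices sigma^1, sigma^2, sigma^3
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

alpha = np.array([1e-4, 2e-4, -3e-4])  # arbitrary small parameters
gen = (1j / 2) * sum(a * s for a, s in zip(alpha, sigma))

# The matrix displayed in the answer, written out entry by entry
a1, a2, a3 = alpha
expected = (1j / 2) * np.array([[a3, a1 - 1j * a2],
                                [a1 + 1j * a2, -a3]])
assert np.allclose(gen, expected)

# exp((i/2) alpha_j sigma^j) agrees with 1 + gen up to O(alpha^2)
U_first_order = np.eye(2) + gen
U_series = np.eye(2) + gen + gen @ gen / 2 + gen @ gen @ gen / 6
assert np.allclose(U_first_order, U_series, atol=1e-7)
```

Each $\alpha_j$ multiplies its own Hermitian generator, which is why the variation yields three independent conserved currents rather than one per field component.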
{ "domain": "physics.stackexchange", "id": 61930, "tags": "homework-and-exercises, lagrangian-formalism, field-theory, noethers-theorem" }
Positive charge, current and electron flow in a simple circuit: confusion
Question: My question is that in a simple circuit, with one wire attached to a battery cell, electrons start flowing from lower potential to higher, and as we know, in a metal wire the electron is the only thing that carries charge. Then why do we say there is a current, and why does it flow opposite to the electrons, even though there is no other charge carrier? Please don't go into other stuff; yes, I know positive charges also move in some situations, like alpha particles in some cases, but here, in a simple circuit, why don't we just say "electron flow"? Why "current", and why is it opposite? Answer: It is a historical convention to denote the current as if it is carried by positive charges. It is absolutely true that the electron flow (the physical current) flows in the opposite direction to the conventional flow we assume in theory. Have a look here. In wires it is only electrons that carry the current. This contradiction doesn't affect the correctness of our results at all. That is why it was kept the way it is.
{ "domain": "physics.stackexchange", "id": 9836, "tags": "electrons, charge, electric-current" }
Modal logic axiom S4, transitive and reflexive frame, tableaux solver
Question: I have a difficult problem to solve which, as mentioned in the title, is related to the modal logic axiom S4. Here is some background knowledge that can be useful:

The S4 axiom characterizes the class of transitive and reflexive frames.
The S4 satisfiability problem is PSPACE-complete.
Lastly, a solver for S4 does not terminate without a trick which allows it to return a finite model.

Knowing all this, I have implemented a solver for propositional modal logic S4, and it also terminates with a finite model. In general the solver uses the tableaux approach and generates a graph in which the input formula is true. To outline what the algorithm does, we have the following:

The algorithm starts by creating a graph with a single node containing the input formula.
Then we solve alpha, beta, gamma and delta formulas until each sub-formula is marked as visited.
For the termination part of the algorithm, every time a new node is about to be created (which is caused by delta formulas), we check whether the same node already exists in the graph. If yes, then we do not create a new node, but add an edge to the existing node with that value.

This is enough to see what the algorithm does. What I am trying to do next is to prove that the algorithm is correct and will always work. For this purpose I will need to construct a formal proof. So, is anyone familiar enough with the topic to suggest what technique can be used for the proof? What should be used, bisimulation or filtration? Furthermore, it would be great if you could sketch out the proof if possible. Any help would be very much appreciated. Answer: To find a descriptive answer to this post, please see my thesis, which presents background information on various modal logics including S4. It also contains detailed descriptions of the algorithms used, along with implementations in Python. I have also proved that NPSpace problems such as S4 or K4 terminate in finite time without infinite looping.
Here is the link to the full report -> link to the PDF file located on my website. I would recommend reading it fully and following the research papers which were used, to make sure full knowledge is transferred onto you :). Please note that this was a research project which I have now finished. Nevertheless, if anyone wants to expand on my work or would like me to get involved in a similar topic, please get in touch in any way possible.
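For illustration, the node-reuse trick described in the question can be sketched as follows (names are hypothetical; the linked thesis has the full algorithm). Before a delta (diamond) formula spawns a successor world, we look for an existing node carrying exactly the same formula set; if found, we only add an edge. Since there are finitely many formula sets over the input's sub-formulas, node creation must eventually stop, which is what makes the tableau terminate.

```python
class Tableau:
    """Minimal sketch of tableau node reuse (not the full S4 procedure)."""

    def __init__(self):
        self.nodes = {}   # frozenset of formulas -> node id
        self.edges = []   # (from_id, to_id) accessibility edges

    def get_or_create(self, formulas):
        key = frozenset(formulas)
        if key not in self.nodes:
            self.nodes[key] = len(self.nodes)  # fresh node id
        return self.nodes[key]

    def expand_diamond(self, from_id, body, invariants):
        # In S4 (transitive, reflexive) box-formulas persist to successors,
        # so they travel along as `invariants`.
        succ = self.get_or_create({body} | set(invariants))
        self.edges.append((from_id, succ))
        return succ

t = Tableau()
root = t.get_or_create({"<>p", "[]q"})
a = t.expand_diamond(root, "p", ["[]q", "q"])
b = t.expand_diamond(root, "p", ["[]q", "q"])  # same content, so same node
assert a == b and len(t.nodes) == 2
```

A correctness proof would then argue that edges back into existing nodes preserve satisfiability of the represented model, which is where filtration-style arguments typically enter.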
{ "domain": "cs.stackexchange", "id": 7465, "tags": "logic, proof-techniques, satisfiability, correctness-proof, modal-logic" }
Time for a wind-battered door to slam
Question: Foolishly leaving the window too widely open, earlier today my bathroom door slammed shut in the wind. How could I theoretically work out the time it takes to close? Would it be impossible to reach an answer based on conservation of energy, or does the nature of the wind's force ruin that method? That was my question; what follows are just hunches. I'm not sure how to approach the problem: the angular velocity is $\omega= \frac{d \theta}{dt}$, so the force on a point on the door is $\propto (u-\omega r\sin\theta)^2$ (if you measure the angle so that it is 0 when the door is parallel to the wind's motion, $u$ is the wind's speed and $r$ the distance from the door's pivot). I assume a stationary thing hit by the wind is analogous to a moving thing hitting stationary wind. Am I right in thinking you would then integrate to get the torque on the entire door, and from that, knowing the moment of inertia of the door, find the time for $\theta=90^\circ$? I am unfamiliar with most of angular mechanics, if that helps formulate a good answer.
{ "domain": "physics.stackexchange", "id": 5124, "tags": "angular-velocity" }
Are halcyon days an actual phenomenon?
Question: According to Greek mythology, halcyon days are the seven days in winter when storms never occur. [Wikipedia] I assume that the Ancient Greeks noticed a period in winter during which the weather was especially calm, so they came up with a myth to explain it. Is there such a concept in meteorology? According to the article "Halcyon Days: When Spring Appears in the Middle of Winter in Greece" from Greek Reporter, the halcyon days supposedly take place some time between 15 December and 15 February, most often between 16 and 31 January.
{ "domain": "earthscience.stackexchange", "id": 2421, "tags": "meteorology, rain, storms, winter, geomythology" }
Assigning names to unnamed labels
Question: I'm working on a compiler. During the final stage I will get a collection of Lines. This is the method that takes the Lines and gives names to Labels that don't have one yet. Lines can have Labels. When a Line doesn't have a Label, its Label property is set to null. Apart from this, Labels can be unnamed (Label.Name is null). This is the method. It looks ugly and I'm using a nested method. Please review it and try to improve it to be elegant. Performance isn't a requirement (at least for now):

    private static void GiveNameToUnnamedLabels(IEnumerable<Line> finalCode)
    {
        int count = 0;

        void GiveName(Model.Label label)
        {
            label.Name = $"dyn_label{++count}";
        }

        var unnamed = finalCode.Where(x => x.Label != null && x.Label.Name == null).ToList();
        unnamed.ForEach(x => GiveName(x.Label));
    }

Answer: A few notes about your code:

Both the inner and outer method are fairly small, and the name of the outer method is already aptly descriptive, so I see no need for a named inner method here. An anonymous method would suffice.

List<T>.ForEach is a bit of an odd duck, in my opinion. It looks like functional programming (similar to Linq), but it's actually used to produce side-effects. Combining the two is giving off conflicting signals. A for or foreach loop, on the other hand, is clearly not functional-style, so side-effects would not be surprising.

I agree with keeping anonymous method parameter names short, but x isn't very descriptive. line is a little longer but much more descriptive.

Also, I find it more readable to put chained Linq calls on a line of their own:

    var unnamed = finalCode
        .Where(x => x.Label != null && x.Label.Name == null)
        .ToList();

Having said that, I would go for a plain old foreach loop. It may not be as fancy, but it's simple and to the point, which should make it easier to understand:

    int count = 0;
    foreach (var line in finalCode)
    {
        if (line.Label != null && line.Label.Name == null)
        {
            line.Label.Name = $"dyn_label{++count}";
        }
    }
{ "domain": "codereview.stackexchange", "id": 29922, "tags": "c#, compiler" }
Counting trees (order matters)
Question: As a follow-up to this question (the number of rooted binary trees of size $n$), how many possible binary trees can you have if the nodes are now labeled, so that $abc$ is different from $bac$, $cab$, etc.? In other words, order matters. Certainly it will be much more than the Catalan number. What would the problem be if you have $n$-ary trees instead of binary trees? Are these known problems? Reference? Answer: As partially answered above by Syzygy, for labelled binary trees it is $n!C_n$, with $C_n$ being the Catalan number. Generalizing this to labelled $k$-ary trees, it is $n!C^k_n$, where $C^k_n$ is the Fuss-Catalan number $ \begin{equation} C^k_n= \binom{kn}{n}\frac{1}{(k-1)n+1} \end{equation} $
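A brute-force cross-check of this closed form (a sketch of mine): count ordered $k$-ary tree shapes directly from the defining recurrence and compare with the binomial formula; multiplying by $n!$ then gives the labelled count.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def shapes(k, n):
    """Number of ordered, unlabelled k-ary tree shapes with n nodes."""
    if n == 0:
        return 1  # the empty tree

    def split(remaining, slots):
        # distribute `remaining` nodes over `slots` ordered subtrees
        if slots == 1:
            return shapes(k, remaining)
        return sum(shapes(k, i) * split(remaining - i, slots - 1)
                   for i in range(remaining + 1))

    return split(n - 1, k)  # one node is the root

def fuss_catalan(k, n):
    return math.comb(k * n, n) // ((k - 1) * n + 1)

for n in range(8):
    assert shapes(2, n) == fuss_catalan(2, n)  # Catalan numbers
    assert shapes(3, n) == fuss_catalan(3, n)  # ternary trees

# Labelled binary trees on 4 nodes: 4! * C_4 = 24 * 14 = 336
assert math.factorial(4) * shapes(2, 4) == 336
```

The labelled count is simply $n!$ times the shape count, since every assignment of the $n$ distinct labels to the nodes of a fixed shape gives a different labelled tree.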
{ "domain": "cs.stackexchange", "id": 224, "tags": "binary-trees, combinatorics, trees" }
Acceleration in $F=ma$
Question: I tried to ask this question on Electrical Engineering Stack Exchange but was told I was better off asking here. Newton's Second Law of Motion states that the vector sum of the forces $\mathbf{F}$ on an object is equal to the mass $m$ of that object multiplied by the acceleration vector $\mathbf{a}$ of the object: $$\mathbf{F}=m\mathbf{a}.$$ By this equation we may associate, among others, the following three properties with any object in Newtonian mechanics:

1. A mass $m$.
2. The vector sum of forces $\mathbf{F}$ acting on it.
3. An acceleration vector $\mathbf{a}$.

These properties are related by Newton's second law, as stated earlier. However, they seem to have a different status. We can change the mass of an object (by simply removing parts of the object) independently of the other two properties. We can also change the sum of forces on the object independently of the other two (e.g. by adding a new force). However, we are unable to change the acceleration vector of the object directly; we would need to change either $\mathbf{F}$ or $m$ to do that. This difference seems strange to me, because after all the equation treats all of these characteristics the same (we can choose to solve for any of them by substituting the other two). For example, in the model $V=RI$ of a resistor this doesn't happen. We can choose to alter directly any of the three properties of the resistor (the voltage across it, the current flowing through it or its resistance). Why is $\mathbf{a}$ in $\mathbf{F}=m\mathbf{a}$ different? I know it's pretty common sense to us that we can't change acceleration directly (at least for me it is), but still the mathematical equation wouldn't suggest that. So is there a physical (theoretical) explanation as to why this third property (acceleration) can only be changed indirectly? Why does it seem to have a different status than the other two (in the sense that we humans can't alter it directly, only indirectly)? Answer: Your observations are spot on.
I usually write Newton's second law this way: $\vec{a} = \vec{F}/m$. This form makes it clear that the law is a relationship between the dynamic variables, force and mass, and the kinematic variable, acceleration. $F$ and $m$ describe the situation; $a$ is the result. Cause and effect, if you will. In fact, that's one reason why it's a law that can't be derived from some other principle. It is based on observation: with $\vec{F}$ and $m$, I always observe $\vec{a}$. That's why it seems like it has a different status: because it does have a different status. As @brightmagus points out, you've made a very astute observation. Update: My phrase "cause and effect" was poorly chosen. But still, force and mass are the setup, and acceleration is what is observed. I'm given a property of the system (mass), and its interaction with the environment is specified (force). What results is a change in the position (via a constant acceleration). I can't cause an acceleration without first creating a force. But mass exists regardless of force or acceleration, and forces, well, actually gradients of potentials, exist independent of the mass of the test particle, and even of the existence of a test particle. Note also that the equation of motion of a particle, the fundamental question, is found starting with $\vec{a} = \vec{F}/m$, which leads to $\vec{v} = \left(\vec{F}/m\right)t + \vec{v}_0$, etc. (for constant force in this case). Concerning Ohm's Law, the situation seems similar. I do not think it's true that any of the three parameters can be changed externally and independently. We have a property of the system (resistance) and an interaction with the outside world (a potential or EMF), and the result is the motion of charge carriers. I can create an EMF chemically with a battery, or electromechanically via a generator. But I can't think of a way to generate a current without first creating an EMF.
One can model a lumped circuit element as a current source, but under the hood, I think it has to be an EMF source and a resistor. I can't think of any other way of generating a current, but I'm ready to be corrected! Current is what is observed given a voltage and a resistance.
{ "domain": "physics.stackexchange", "id": 14953, "tags": "newtonian-mechanics, forces, mass, acceleration" }
Controller for an Administrator User, can this be improved? (codeigniter)
Question: I want to write better code. This is a simple controller class for an administrator login. Are there conventions or tips in PHP to rewrite this code and improve it? <?php class Administrators extends CI_Controller { public function __construct() { parent::__construct(); } public function index() { if(!$this->session->userdata('logged_in')) { redirect('/office/administrators/login'); } else { redirect('/office/dashboard/'); } } public function login() { if($this->session->userdata('logged_in')) { redirect('/office/dashboard/'); } if($_SERVER['REQUEST_METHOD'] == 'POST') { $a = new Administrator(); if($a->login($this->input->post('email'),$this->input->post('password'))) { redirect('/office/dashboard/'); } else { $this->messages->add('Unable to authenticate you', 'error'); } } $data['error_messages'] = $this->messages->get('error'); $this->load->view('/office/administrators/login', $data); } public function logout() { $this->simpleloginsecure->logout(); redirect('office/administrators/login'); } } ?> Answer: First Joseph's post. I agree with it mostly, but the first comment I'd like to clarify. A PHP opening tag should definitely be the first item on the page to avoid headers being sent prematurely, but the opening tag <?php or <? does not make a difference and depends upon your server. The traditional, long, PHP opener is usually the accepted norm because not all servers accept the later and it can cause compatibility issues. Such as if you were to include XML in your PHP then doing <?xml might cause issues. However, most all servers have a setting that allows you to enable it "short_open_tag". People still use the shorter form, but it does not have any performance differences. I'm sure this is what he meant, but originally reading it I thought he was saying that switching to the short form did that. So there's clarification. Only other thing I want to point out is the following comment: "you can use the input class's post() to check for the post request". 
I'm not sure about CI, but unless it does the same REQUEST_METHOD check in its interior, then what you have there currently is fine. I just had a similar discussion on SO about checking the $_POST vs checking the REQUEST_METHOD and it is much better to check the request method. Checking the post array will work, but only because it is a hack. From the looks of this CI statement it is a similar thing. But I'd suggest looking into it for clarification. Now, here are my suggestions: Don't override a parent method if you aren't going to extend it. The child class inherits it automagically. The only reason to call the constructor again, or any inherited method for that matter, is if you were going to change something, extend it, before or after the parent method. Example, say you wanted to change a property $newProperty before it was used in the parent constructor (constructor is a bad example think a normal method). Then you can set that property just before calling it so that the parent method can use that new value. Or say that your parent method has a local variable $newVar and you want to use it in your new class. Then you can save that variable as a property or do something with it immediately in local scope. In short, just delete your constructor it is unnecessary as is. public function __construct() { $this->newProperty = 'jkl';//can be used in parent method parent::__construct(); echo $newVar;//came from parent method $this->newVar = $newVar; } Give your program some defaults instead of using if/else statements and/or calling methods multiple times. Doing this makes it easier to change functionality should you desire it. Say you didn't want to use redirect anymore, but maybe view instead, you'd have to change each occurrence of it. There's only two here, but if you had more it could be a pain. 
public function index() { $redirect = '/office/dashboard/'; if(!$this->session->userdata('logged_in')) { $redirect = '/office/administrators/login'; } redirect( $redirect ); }
{ "domain": "codereview.stackexchange", "id": 1900, "tags": "php, performance, object-oriented, mvc, codeigniter" }
Does air have to be oversaturated in order to make resublimation possible?
Question: I am writing a pre-scientific work about the chemtrail conspiracy theory. A part of that is the exact process of contrail formation. There is one more puzzle piece missing. Hopefully the answer to this question doesn't make the puzzle bigger. But here's the question: According to chemtrail conspiracy theorists, the difference between contrails and chemtrails are their persistence. "Normal" contrails dissolve after a few seconds, but "chemtrails" persist longer, minutes or even hours. Source And now to their argument: They say that for a contrail to be persistent, the relative humidity of the ambient air has to be over 100%, hence oversaturated, because then the water vapor resublimates into ice and the contrail grows and stays persistent. And then they come to their argument: The air in altitudes that planes fly at (~FL340, ~250 hPa pressure altitude (standard atmosphere)) is very rarely oversaturated, but persistent contrails can be seen pretty often. Now I need to know, is it true that resublimation is only possible in oversaturated ambient air? If yes, is resublimation even needed for a contrail to stay persistent? Because I found a source that stated that only ~70% relative humidity is needed for a contrail to stay persistent. Or maybe I've just been unfortunate with all the sounding data I've viewed so far, and the air up there is pretty often oversaturated, although I don't think so. Please help me understand the last puzzle piece I need in order to finish my pre-scientific work and prove the conspiracy theorists wrong. Thanks a lot! Answer: They're wrong about the relative humidity not reaching high enough levels. Here's a map of the continental US from instantweathermaps.com, showing today's relative humidity at a pressure altitude of 300 hPa (300 mb): So, across most of the US, the relative humidity is greater than 90%, which is enough for persistent contrails to form. 
The intuition here is that the air in the upper troposphere is quite cold, so it can't hold much water vapor in the first place. Even though there isn't much water vapor up there, the presence of even a tiny amount quickly pushes air toward saturation.
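The point that cold air can hold very little water vapor can be made quantitative. As an illustrative sketch (not part of the original answer), the Magnus/Bolton approximation for saturation vapor pressure over liquid water shows how sharply the capacity drops between surface temperatures and cruise-altitude temperatures:

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus/Bolton approximation for saturation vapor pressure over
    liquid water, in hPa. Strictly an approximation (and over ice the
    values differ slightly); used here only to illustrate the trend."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# Near the surface (+20 C) vs. a typical cruise-altitude temperature (-50 C):
print(round(saturation_vapor_pressure_hpa(20), 2))   # roughly 23 hPa
print(round(saturation_vapor_pressure_hpa(-50), 3))  # well under 0.1 hPa
```

Several hundred times less vapor saturates air at -50 °C than at +20 °C, which is why the small amount of exhaust moisture from a jet can push the ambient air to saturation so easily at altitude.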
{ "domain": "physics.stackexchange", "id": 46912, "tags": "water, freezing, humidity" }
Formatting milliseconds as days, hours, minutes, and seconds
Question: This is a function which aims to convert an amount of milliseconds to a more human-interpretable Day(s) Hour(s) Minute(s) Second(s) format: function dhms(t) { d = Math.floor(t / (1000 * 60 * 60 * 24)), h = Math.floor((t % (1000 * 60 * 60 * 24)) / (1000 * 60 * 60)), m = Math.floor((t % (1000 * 60 * 60)) / (1000 * 60)), s = Math.floor((t % (1000 * 60)) / 1000); return d + 'Day(s) ' + h + 'Hour(s) ' + m + 'Minute(s) ' + s + 'Second(s)' } So for getting the variables values I have for now something quite verbose: d = Math.floor(t / (1000 * 60 * 60 * 24)), h = Math.floor((t % (1000 * 60 * 60 * 24)) / (1000 * 60 * 60)), m = Math.floor((t % (1000 * 60 * 60)) / (1000 * 60)), s = Math.floor((t % (1000 * 60)) / 1000); Is it the way to go? Or should I go with: d = Math.floor(t / 86400000), h = Math.floor(t % 86400000 / 3600000), m = Math.floor(t % 3600000 / 60000), s = Math.floor(t % 60000 / 1000); Or even with: d = Math.floor(t / 864e5), h = Math.floor(t % 864e5 / 36e5), m = Math.floor(t % 36e5 / 6e4), s = Math.floor(t % 6e4 / 1e3); Or another way? How is it recommended to assign time values? Answer: The Decimal Point Exponential notation is by far the better for large numbers. However when writing numbers in this format the convention is to have the decimal point after the first digit, as the exponent represents the magnitude of the value which is obscured if you have to locate the decimal point. d = Math.floor(t / 8.64e7), h = Math.floor(t % 8.64e7 / 3.6e6), m = Math.floor(t % 3.6e6 / 6e4), s = Math.floor(t % 6e4 / 1e3); As constants But.. These are magic numbers, and you repeat some of them several times, which is prone to error. Also assuming that this function would be part of a set of such functions declaring these constants as named variables would be much better. 
DAY_MS = 8.64e7; HOUR_MS = 3.6e6; MIN_MS = 6e4; SEC_MS = 1e3; Derived values As they are derived from each other you can then write it as SEC_MS = 1e3; MIN_MS = SEC_MS * 60; HOUR_MS = MIN_MS * 60; DAY_MS = HOUR_MS * 24; Encapsulate If you then encapsulate them you can drop the _MS and your function would look like. "use strict"; const dhms = (()=>{ const SEC = 1e3; const MIN = SEC * 60; const HOUR = MIN * 60; const DAY = HOUR * 24; return time => { const ms = Math.abs(time); const d = ms / DAY | 0; const h = ms % DAY / HOUR | 0; const m = ms % HOUR / MIN | 0; const s = ms % MIN / SEC | 0; return `${time < 0 ? "-" : ""}${d}Day(s) ${h}Hour(s) ${m}Minute(s) ${s}Second(s)`; }; })(); Notes That t is made positive ms to avoid a negative sign on each number. This also lets you use the shorter and quicker | 0 (bit-wise OR 0) to floor the values, which you should only use for positive integers. The use of a template string to format the output. You could also define it as a module and thus avoid the need to encapsulate the constants as a module has its own local scope.
{ "domain": "codereview.stackexchange", "id": 34121, "tags": "javascript, datetime, comparative-review, formatting" }
First program with scraping, lists, string manipulation
Question: I wanted to find out which states and cities the USA hockey team was from, but I didn't want to manually count from the roster site here. I'm really interested to see if someone has a more elegant way to do what I've done (which feels like glue and duct tape) for future purposes. I read about 12 different Stack Overflow questions to get here. from bs4 import BeautifulSoup from collections import Counter import urllib2 url='http://olympics.usahockey.com/page/show/1067902-roster' page=urllib2.urlopen(url) soup = BeautifulSoup(page.read()) locations = [] city = [] state = [] counter = 0 tables = soup.findAll("table", { "class" : "dataTable" }) for table in tables: rows = table.findAll("tr") for row in rows: entries = row.findAll("td") for entry in entries: counter = counter + 1 if counter == 7: locations.append(entry.get_text().encode('ascii')) counter = 0 for i in locations: splitter = i.split(", ") city.append(splitter[0]) state.append(splitter[1]) print Counter(state) print Counter(city) I essentially did a three tier loop for table->tr->td, and then used a counter to grab the 7th column and added it to a list. Then I iterated through the list splitting the first word to one list, and the second word to a second list. Then ran it through Counter to print the cities and states. I get a hunch this could be done a lot simpler, curious for opinions. Answer: It looks like you're just trying to get the n-th column from a bunch of tables on the page, in that case, there's no need to iterate through all the find_all() results, just use list indexing: for row in soup.find_all('tr'): myList.append(row.find_all('td')[n]) This is also a good use case for generators because you are iterating over the same set of data several times. 
Here is an example: from bs4 import BeautifulSoup from collections import Counter from itertools import chain import urllib2 url = 'http://olympics.usahockey.com/page/show/1067902-roster' soup = BeautifulSoup(urllib2.urlopen(url).read()) def get_table_column(table, n): rows = (row.find_all("td") for row in table.find_all("tr")) return (cells[n-1] for cells in rows) tables = soup.find_all("table", class_="dataTable") column = chain.from_iterable(get_table_column(table, 7) for table in tables) city, state = zip(*(cell.get_text().encode('ascii').split(', ') for cell in column)) print Counter(state) print Counter(city) While this works, it also might be a good idea to anticipate possible errors and validate your data: To anticipate cases where not all rows have n>=7 td elements, we would change the last line in get_table_column to: return (cells[n-1] for cells in rows if len(cells) >= n) We should also anticipate cases where the cell contents does not contain a comma. Let's expand the line where we split on a comma: splits = (cell.get_text().encode('ascii').split(',') for cell in column) city, state = zip(*(split for split in splits if len(split) == 2))
{ "domain": "codereview.stackexchange", "id": 6295, "tags": "python, html, parsing" }
How can I calculate the speed of an object knowing its horizontal and vertical velocity components?
Question: Let's say a ball is thrown and it experiences typical projectile motion (moves in a parabolic arc etc.) and the only information we know are the equations for the horizontal and vertical components of its velocity for its entire path. From the given information, how does one calculate the total/actual speed of the ball relative to the direction it is travelling in at any given point (ignoring drag)? As an example (horizontal and vertical components of velocity respectively): $$V_x = 30$$ $$V_y = 20 - 9.81t$$ Is it simply a matter of using Pythagoras' theorem and taking the magnitude? $$ V=\sqrt{(V_x)^2 + (V_y)^2} $$ Answer: The formula you have written is correct, but the components are functions of time. By inserting a particular instant, say $t$, into the functions, you get the instantaneous components of the velocity; then, using Pythagoras' theorem, you get the total instantaneous speed. Taking your example, at time $T$ s the x-component is $30$ units and the y-component is $(20 - 9.81T)$ units, so the speed at time $T$ is $\sqrt{(30)^2 + (20 - 9.81T)^2}$ units.
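As a minimal sketch (Python, SI units assumed) of the recipe above, the example components can be combined at any instant with `math.hypot`, which computes exactly the Pythagorean magnitude $\sqrt{V_x^2 + V_y^2}$:

```python
import math

def speed(t):
    """Instantaneous speed for the example components V_x = 30, V_y = 20 - 9.81 t."""
    vx = 30.0
    vy = 20.0 - 9.81 * t
    return math.hypot(vx, vy)  # sqrt(vx**2 + vy**2)

print(round(speed(0.0), 3))          # sqrt(30^2 + 20^2) ~ 36.056
print(round(speed(20.0 / 9.81), 3))  # at the apex V_y = 0, so speed = 30.0
```

At the apex the vertical component vanishes and the speed reduces to the horizontal component alone, a quick sanity check on the formula.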
{ "domain": "physics.stackexchange", "id": 45066, "tags": "velocity, vectors, speed" }
Is my understanding of canonical transformations flawed?
Question: Consider a system described by Hamilton's equations $$\dot{q}_i=\frac{\partial H}{\partial p_i}=\{q_i,H\}, \quad \dot{p}_i=-\frac{\partial H}{\partial q_i}=\{p_i,H\}.\tag{1}$$ I want to prove that a time-independent transformation of the form $$q_i\to Q_i(q,p), \quad p_i\to P_i(q,p)\tag{2}$$ preserves Hamilton's equations provided $$\{Q_i,Q_j\}=0,\qquad \{Q_i,P_j\}=\delta_{ij},\qquad\{P_i,P_j\}=0, \tag{3}$$ i.e. if the transformation is canonical. However, I find that $$\dot{Q}_i=\sum_k\left[\frac{\partial Q_i}{\partial q_k}\dot{q}_k+\frac{\partial Q_i}{\partial p_k}\dot{p}_k\right]=\{Q_i,H\},\tag{4}$$ and similarly, $$\dot{P}_i=\sum_k\left[\frac{\partial P_i}{\partial q_k}\dot{q}_k+\frac{\partial P_i}{\partial p_k}\dot{p}_k\right]=\{P_i,H\}\tag{5}$$ without using (3)! Does it mean that any transformation of the form (2) is canonical? But as far as I knew, only a subset of transformations of type (1) which satisfies (3) preserve the form of Hamilton's equations i.e. are canonical. But it turns out that I am able to reproduce Hamilton's equations in the transformed variables without using the restriction that (3), rather trivially. Did I make any mistakes? Answer: Hint: In symplectic notation $$z^I~=~(q^i,p_i)\qquad\text{and}\qquad Z^I~=~(Q^i,P_i),$$ and assuming no explicit time dependence in the transformation $Z^I=f^I(z)$, OP has shown that $$\begin{align}\dot{Z}^I~\stackrel{\text{chain rule}}{=}~&~ \frac{\partial Z^I}{\partial z^J}\dot{z}^J\cr ~\stackrel{\text{old Ham. eqs.}}{=}&~ \frac{\partial Z^I}{\partial z^J}J^{JK}\frac{\partial H}{\partial z^K}\cr ~=~~~~~~&\{Z^I,H\}_z,\end{align}\tag{4/5}$$ where $J^{JK}$ is the symplectic unit. However the task is instead to show the new Hamilton's equations $$\dot{Z}^I~=~ J^{IJ}\frac{\partial H}{\partial Z^J}~=~\{Z^I,H\}_{\color{red}{Z}}\tag{new Ham. 
eqs}$$ with the help of the symplectic condition $$ J^{IL}~=~\{Z^I,Z^L\}_z~=~\frac{\partial Z^I}{\partial z^J}J^{JK}\frac{\partial Z^L}{\partial z^K}.\tag{3}$$ References: H. Goldstein, Classical Mechanics, 2nd ed.; Section 9.3. H. Goldstein, Classical Mechanics, 3rd ed.; Section 9.4.
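To see the role of condition (3) concretely, here is a small numerical sketch (not from the answer): the Poisson bracket is evaluated by finite differences, and the canonical exchange map $Q=p$, $P=-q$ gives $\{Q,P\}_z=1$, while the non-canonical rescaling $Q=q$, $P=2p$ gives $2$. Equations (4/5) hold for both maps, so they alone cannot single out the canonical ones.

```python
def poisson(f, g, q, p, h=1e-6):
    """Numerical Poisson bracket {f,g} = df/dq dg/dp - df/dp dg/dq at (q,p),
    via central finite differences (exact up to rounding for linear maps)."""
    df_dq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    df_dp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    dg_dq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    dg_dp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return df_dq * dg_dp - df_dp * dg_dq

# Canonical: the exchange transformation Q = p, P = -q
Q = lambda q, p: p
P = lambda q, p: -q
print(round(poisson(Q, P, 0.3, 0.7), 6))   # -> 1.0

# Not canonical: rescaling the momentum only, Q = q, P = 2p
Q2 = lambda q, p: q
P2 = lambda q, p: 2 * p
print(round(poisson(Q2, P2, 0.3, 0.7), 6))  # -> 2.0
```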
{ "domain": "physics.stackexchange", "id": 95669, "tags": "classical-mechanics, coordinate-systems, hamiltonian-formalism, phase-space, poisson-brackets" }
Mass eigenstates and weak eigenstates of neutrinos
Question: I am aware that similar questions have been answered earlier, but I am still not able to convince myself on the following question: if mass(/energy) eigenstates are eigenstates of the Hamiltonian operator, which operator is connected to weak eigenstates, and what is its eigenvalue? Answer: "which operator is connected to weak eigenstates" AFAIK interactions are not connected to operators in a one-to-one relation. There are four interactions (the electromagnetic, the weak, the strong and the gravitational), characterized by the corresponding (em, weak, strong) interaction coupling constants. These interactions refer to the elementary particles in the standard model of particle physics. All particles carry quantum numbers which may be conserved or not depending on the interaction. "and what is its eigenvalue" So the term "weak eigenstate" does not have a meaning, imo. For neutrinos there are the number operators, operating on the specific neutrino field, and the creation and annihilation operators. The quantum field theory formalism has to be used, and a search brings up a number of papers on how oscillations can be modeled in quantum field theory; for example: Flavor oscillation of traveling neutrinos is treated by solving the one-dimensional Dirac equation for massive fermions. The solutions are given in terms of squeezed coherent state as mutual eigenfunctions of parity operator and the corresponding Hamiltonian, both represented in bosonic creation and annihilation operators. It was shown that a mono-energetic state is non-normalizable, and a normalizable Gaussian wave packet, when of pure parity, cannot propagate. A physical state for a traveling neutrino beam would be represented as a normalizable Gaussian wave packet of equally-weighted mixing of two parities, which has the largest energy-dependent velocity. Based on this wave-packet representation, flavor oscillation of traveling neutrinos can be treated in a strict sense. 
These results allow the accurate interpretation of experimental data for neutrino oscillation, which is critical in judging whether neutrino oscillation violates CP symmetry. It ain't simple.
{ "domain": "physics.stackexchange", "id": 44798, "tags": "particle-physics, mass, energy-conservation, neutrinos" }
Using lock with value type (multithread context)
Question: I have to use value type to lock code sections in correspondence to objects identifiers (i.e. Guid, int, long, etc.). So I've written the following generic class/method, which uses a Dictionary to handle object that is a reference in correspondence to the value type (so this objects can be used with the lock instruction). I think the code and comments are self-explanatory enough. public static class Funcall<TDataType, T> { // Object used for lock instruction and handling a lock counter class LockObject { public int Counter = 0; } // Correspondance table between the (maybe) value type and lock objects // The TDatatype above allows to create an independant static lockTable for each given TDatatype static Dictionary<T, LockObject> lockTable = new Dictionary<T, LockObject>(); static LockObject Lock( T valueToLockOn ) { lock ( lockTable ) // Globally locks the table { // Create the lock object if not allready there and increment the counter if ( lockTable.TryGetValue( valueToLockOn, out var lockObject ) == false ) { lockObject = new LockObject(); lockTable.Add( valueToLockOn, lockObject ); } lockObject.Counter++; return lockObject; } } static void Unlock( T valueToLockOn, LockObject lockObject ) { lock ( lockTable ) // Globally locks the table { // Decrement the counter and free the object if down to zero lockObject.Counter--; if ( lockObject.Counter == 0 ) lockTable.Remove( valueToLockOn ); } } public static void Locked( T valueToLockOn, Action action ) { var lockObject = default( LockObject ); try { // Obtain a lock object lockObject = Lock( valueToLockOn ); lock ( lockObject ) // Lock on the object (and so on the corresponding value) { // Call the action action?.Invoke(); } } finally { // Release the lock Unlock( valueToLockOn, lockObject ); } } // Same as above except this is returning a value (Func instead of Action) public static TResult Locked<TResult>( T valueToLockOn, Func<TResult> action ) { var lockObject = default( LockObject ); try { lockObject = 
Lock( valueToLockOn ); lock ( lockObject ) { return action == null ? default( TResult ) : action(); } } finally { Unlock( valueToLockOn, lockObject ); } } } Which can be used like this: Guid anObjectId = ObtainTheId(); return Funcall<AnObjectClass, Guid>.Locked( anObjectId, () => { // do something return something; } ); Are there better ways to do that? Any advice is welcome. Edit/note: as asked in the comments, the use case is the following This "lock" system is to be used by a module for which the entry points are dedicated to modify stored objects of a particular type (this object type will be modified by this module only). Externally the module is used by several concurrent workers. More specifically, I use MongoDB 3.4 for objects storage and it does not provides transactions which are new in version 4.0 (I mean here session.Start, session.Commit). Among that, I don't really need a full/complete transactional system, but simply ensure each step of the workers demands are each consistent at the time each demand occurs. I'm aware that this simple "lock" system can be considered as weak, but it is simple and meet my need in the context I am working on. Answer: Thread safety I've not tested this, but it looks solid enough, since the only non-trivial logic is held in lock statements. Encapsulation is accordingly good. Depending on the load, and if performance is critical, some reader/writer locking might help to ease contention (e.g. if the lock object already exists, you don't need to write-lock the dictionary to retrieve it, and if the count is non-zero when you release the lock, you don't need to write-lock to remove it), but it will dramatically increase the complexity. Naming API Lock isn't a great name: it doesn't lock anything, it just gives you a handle to a LockObject, which you happen to have to release. Unlock similarly. Personally I avoid split prepositions Locked isn't a great method name either: Locked suggests a state, not an operation. 
I'd rather call these RunLocked, or RunExclusive or something. I don't like this: return action == null ? default( TResult ) : action(); It means that if action == null, then the system goes to all that effort just to return a meaningful-but-made-up value. What use case could this have? This may end up obscuring a bug later on, when something which shouldn't be null ends up being null, and this code covers up the fact by returning default(TResult). At the very least this behaviour should be clearly documented, but I'd much rather it just throw an ArgumentException with a helpful error message. The same goes for action?.Invoke(); in the other overload. As always, I'll suggest inline-documentation (///) on the type and public members: it is little (well spent) effort; makes it so much easier to exploit the API without having to leave the IDE; and improves maintainability, as anyone working on the code can see inline what it is meant to be doing and avoid breaking that contract. Why is this static? I'd much sooner make this non-static, and provide a static instance if that is a meaningful concept. That way, if you need to lock on the same type for different purposes then you can. Making this static needlessly restricts the applicable use-cases. Dodgy try...finally The try...finally constructs in the Locked methods are a bit dodgy... Unlock will throw if lockObject is null, which means you should be entering the try knowing that it is not null. The quick-and-easy solution is to move the call to Lock out of the try. If Lock can crash (it's not immediately clear how under normal circumstances it would, though), then you need to consider that specifically. 
Many people object to the if without {} construct: you only have one usage, so I'd consider putting the braces in. Personally I like to make the accessibility of everything explicit (i.e. mark private members and classes private); it just saves the reader having to remember anything, and avoids any confusion when coming from languages with different defaults.
{ "domain": "codereview.stackexchange", "id": 32588, "tags": "c#, multithreading, lambda, locking" }
Is it possible to pet/groom birds?
Question: I wonder if it's possible to pet a domestic crow, or an owl. Do birds respond to grooming as cats or dogs, for example? If so, then how does one pet a bird? Answer: Birds do groom, but not like mammals do. People tend to call their grooming behavior preening. Preening removes dirt and parasites, arranges the feathers nicely, and distributes oils over the feather (very important for waterfowl.) To answer your question better, we need to look at the specific species. Lots of birds preen each other socially, so if you wanted to pet a bird (in a way it likes) it's good to know how the species behaves. You mention crows and owls. Crows are social so they may preen each other; they're also sophisticated (cultural) so it's hard to generalize. A 'bird nerd' on YouTube shows convincing footage of such behavior. I think family groups of crows preen each other, but I don't have a source. Owls tend to be solitary and would not understand preening behavior from a human. A social species of owl might preen; I don't know. Of course, if you just go ahead and pet your owl like a dog, it may appear to like it. It's hard to say if it does like it...it's easy to anthropomorphize a bird with binocular vision!
{ "domain": "biology.stackexchange", "id": 4424, "tags": "zoology, ornithology" }
What is the difference between strong-AI and weak-AI?
Question: I've heard the terms strong-AI and weak-AI used. Are these well defined terms or subjective ones? How are they generally defined? Answer: The terms strong and weak don't actually refer to processing, or optimization power, or any interpretation leading to "strong AI" being stronger than "weak AI". It holds conveniently in practice, but the terms come from elsewhere. In 1980, John Searle coined the following statements: AI hypothesis, strong form: an AI system can think and have a mind (in the philosophical definition of the term); AI hypothesis, weak form: an AI system can only act like it thinks and has a mind. So strong AI is a shortcut for an AI systems that verifies the strong AI hypothesis. Similarly, for the weak form. The terms have then evolved: strong AI refers to AI that performs as well as humans (who have minds), weak AI refers to AI that doesn't. The problem with these definitions is that they're fuzzy. For example, AlphaGo is an example of weak AI, but is "strong" by Go-playing standards. A hypothetical AI replicating a human baby would be a strong AI, while being "weak" at most tasks. Other terms exist: Artificial General Intelligence (AGI), which has cross-domain capability (like humans), can learn from a wide range of experiences (like humans), among other features. Artificial Narrow Intelligence refers to systems bound to a certain range of tasks (where they may nevertheless have superhuman ability), lacking capacity to significantly improve themselves. Beyond AGI, we find Artificial Superintelligence (ASI), based on the idea that a system with the capabilities of an AGI, without the physical limitations of humans would learn and improve far beyond human level.
{ "domain": "ai.stackexchange", "id": 124, "tags": "terminology, definitions, agi, comparison, narrow-ai" }
remote roslaunch failed to launch
Question: Hi all, I am trying to launch nodes on a remote master machine through a launch file. My ssh keys are setup for passwordless connection. The husky.machine file to configure the launch ssh connections to the remote computer is: <launch> <arg name="robot_0_ip" default="$(env ROS_MASTER_IP)" /> <machine name="robot_0" address="$(arg robot_0_ip)" user="ros" env-loader="/home/ros/catkin_ws/husky_launcher.sh" ssh-port="22" timeout="20" /> </launch> I stripped down my main launch file, system.launch to this: <launch> #### 1: ARBITER - ROBOT COMPUTER CONFIGURATIONS ########### <include file="$(find husky_pursuit)/launch/game/husky.machine" /> #### 2: ARBITER - ARENA SETUP ########### <include file="$(find husky_pursuit)/launch/game/arbiter.launch" /> # local #### 3/4: ROBOT - ROBOT LAUNCHES ########### <node machine="robot_0" pkg="tf" type="static_transform_publisher" name="base_link_to_laser" args="0.25 0 0.25 0 0 0 /robot_0/base_link /robot_0/laser 40" /> </launch> It just launches one node remotely on robot_0 (the master), a static_transform_publisher. Before running this launch file, I ssh into robot_0 and run roscore. The error from running system.launch is: ... logging to /home/ajay/.ros/log/81f340de-1106-11e4-a670-801f02abdf53/roslaunch-i72ubuntu-12392.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. 
started roslaunch server http:// 192.168.1.119:35719/ remote[192.168.1.125-0] starting roslaunch remote[192.168.1.125-0]: creating ssh connection to 192.168.1.125:22, user[ros] launching remote roslaunch child with command: [env ROS_MASTER_URI=http:// 192.168.1.125:11311 /home/ros/catkin_ws/husky_launcher.sh roslaunch -c 192.168.1.125-0 -u http:// 192.168.1.119:35719/ --run_id 81f340de-1106-11e4-a670-801f02abdf53] remote[192.168.1.125-0]: ssh connection created [192.168.1.125-0] killing on exit remote roslaunch failed to launch: robot_0 Based on the log file, SSH authentication was successful, but ProcessMonitor shuts down. The log file is at pastebin.com/Q2t3hEpp Why does the remote roslaunch fail to launch? Thanks! EDIT: /home/ros/catkin_ws/husky_launcher.sh env-loader file: #!/bin/bash export ROS_WS=/home/ros/catkin_ws source $ROS_WS/devel/setup.bash export PATH=$ROS_ROOT/bin:$PATH export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:$ROS_WS export ROS_MASTER_URI=http:// 192.168.1.125:11311 export ROS_HOSTNAME=192.168.1.125 (I added a space between http:// and 192.168.125:11311 because I can't publish links yet. It's not really there on the robot computer). Originally posted by Ajay Jain on ROS Answers with karma: 165 on 2014-07-21 Post score: 3 Original comments Comment by ahendrix on 2014-07-21: I've usually seen issues like this when there was an error in the environment loader script. Can you edit your question to include your environment loader? Comment by Ajay Jain on 2014-07-21: I added the script. It sources the workspace devel/setup.bash file - does it also need to source /opt/ros/hydro/env.sh? Answer: Your environment loader looks reasonable, except that the last line should be exec "$@" Originally posted by ahendrix with karma: 47576 on 2014-07-21 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Ajay Jain on 2014-07-22: Thanks - this worked! Now I just need to resolve my TF extrapolation into the past errors...
{ "domain": "robotics.stackexchange", "id": 18700, "tags": "ros, roslaunch, ros-hydro, husky, remote-launch" }
Time complexity $O(m+n)$ Vs $O(n)$
Question: Consider this algorithm iterating over $2$ arrays ($A$ and $B$), with size of $A = n$ and size of $B = m$. Please note that $m \leq n$. The algorithm is as follows:

for every value in A:
    // code
for every value in B:
    // code

The time complexity of this algorithm is $O(n+m)$. But given that $m$ is less than or equal to $n$, can this be considered as $O(n)$? Answer: Yes: $n+m \le n+n=2n$, which is $O(n)$, and thus $O(n+m)=O(n)$. For clarity, this is true only under the assumption that $m\le n$. Without this assumption, $O(n)$ and $O(n+m)$ are two different things - so it would be important to write $O(n+m)$ instead of $O(n)$.
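Since the two loops run one after the other rather than nested, the total work is one pass over each array; a small sketch (hypothetical `visit_both` helper, counting iterations instead of doing real work) makes the $n + m$ count and the $2n$ bound concrete:

```python
def visit_both(A, B):
    """Sequential (not nested) loops over A and then B: Theta(n + m) work."""
    ops = 0
    for _ in A:   # n iterations
        ops += 1
    for _ in B:   # m iterations
        ops += 1
    return ops

# With m <= n we have n + m <= 2n, so the count never exceeds 2 * len(A).
```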
{ "domain": "cs.stackexchange", "id": 18238, "tags": "time-complexity, arrays, big-o-notation, iteration, linear-complexity" }
Comparison/Difference between two distinct plots
Question: I am making a project where I can identify leaves by taking a picture of them and comparing it against a database. I will use their widths to determine their shape. So, I measured the width at points along the stem of the leaf and plotted the widths onto the following graphs. The data of my graphs are located at: https://docs.google.com/spreadsheets/d/1HnYYA9keX7jjkhp4tocd2gy-i8VtHzYaS28fax-Nofc/edit?usp=sharing To compare two graphs, I take the minimum of two functions:

1. The sum of the absolute difference between the two graphs for every X-axis point
2. Reverse one graph, and then run another sum of absolute difference between the reversed and the other graph. This is in case one of the two leaves is measured in the opposite direction from the other one.

i.e. min( sum(abs(a-b)), sum(abs(a'-b)) ), where x' is the reversed plot of x (in case a plot is reversed). So, the problem is, the difference between two very similar leaves is actually greater than that between two very distinct leaves. (On top) I have three graphs, two of which are similar-pos,pos2 (which should return a smaller difference), both of which are the acer ginnala leaf and another is different-neg, the betula alleghaniensis (which should return a greater difference than the "similar"): However, this is not the case. On the 2nd layer, the two graphs are very distinct (one is acer, the other is betula). However, running the above algorithm, the sums are 70648 and 27362, returning a result of 27362. On the last layer, when two very similar graphs are compared (both acer), they return a much higher difference, with 40084 (normal) and 85664 (reversed), and return the minimum of 40084, which is a higher difference than the first comparison. I've tried standard deviation of the differences, squaring differences, Intersection Correlation, but had no luck. So, since the current measurement metric doesn't work on this, what is a concise, straightforward, and better way to measure the difference between two graphs?
Answer: One way to compare similarly-shaped data that is translated in time is to look at the correlation between them. If I do this in scilab:

ddata = diff(data',1,'c');
XC = xcorr(ddata(1,:),ddata(2,:));
CC1 = xcorr(ddata(1,:),ddata(1,:));
CC2 = xcorr(ddata(2,:),ddata(2,:));
den = sqrt(max(CC1)*max(CC2))
clf
plot(XC/den);
plot(CC1/max(CC1),'r')
plot(CC2/max(CC2),'m')
disp(max(XC)/den)

then I get this: where the blue plot is the (normalized) cross-correlation of the time-differenced data and the red and magenta plots are the (normalized) auto-correlation of each individual time series. The result is that the "correlation" between them is 0.8416782.
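The same normalized-correlation idea can be sketched in plain Python (hypothetical helper names; this is a sketch of the approach above, not the original scilab):

```python
def diff(x):
    """First difference of a sequence, like scilab's diff along a row."""
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

def xcorr(a, b):
    """Full cross-correlation of two sequences over all overlapping lags."""
    n, m = len(a), len(b)
    out = []
    for lag in range(-(m - 1), n):
        out.append(sum(a[lag + i] * b[i]
                       for i in range(m) if 0 <= lag + i < n))
    return out

def similarity(w1, w2):
    """Peak cross-correlation of the differenced width profiles,
    normalized so a profile compared against a scaled copy of itself
    scores 1, and anything else scores strictly less."""
    d1, d2 = diff(w1), diff(w2)
    num = max(xcorr(d1, d2))
    den = (max(xcorr(d1, d1)) * max(xcorr(d2, d2))) ** 0.5
    return num / den
```

Because the score is shift- and scale-invariant, two leaves of the same shape measured at different overall sizes still compare as similar.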
{ "domain": "dsp.stackexchange", "id": 3135, "tags": "waveform-similarity" }
Is it possible for a physical object to have an irrational length?
Question: Suppose I have a caliper that is infinitely precise. Also suppose that this caliper returns not a number, but rather whether the precise length is rational or irrational. If I were to use this caliper to measure any small object, would the caliper ever return an irrational number, or would the true dimensions of physical objects be constrained to rational numbers? Answer: The set of irrational numbers densely fills the number line. Even assuming that quantum mechanics doesn't disable the premise of your question, the probability that you will randomly pick an irrational number out of a hat of all numbers is roughly $1 - \frac{1}{\infty} \approx 1$. So the question should be "is it possible to have an object with rational length?"
{ "domain": "physics.stackexchange", "id": 77916, "tags": "mathematics, measurements, geometry" }
$\sqrt{-1}$ coefficient in a function
Question: In a simple harmonic oscillator with $\ddot{x} = -\omega^2x$, it can be shown through differentiation that one solution can be given by $\dot{x} = i \omega Ae^{i \omega t}$. What does the factor of $i$ do here? What effect does it have on velocity? Answer: Note that $i$ can be written as follows \begin{align} i = e^{i \pi/2}. \end{align} Therefore, we have the velocity $\dot{x}$ written as \begin{align} \dot{x} \; &= \; i \omega A e^{i \omega t} \\ \; &= \; \omega A \; e^{i \pi/2} \, e^{i \omega t} \\ \; &= \; \omega A \; e^{i \big(\omega t + \pi/2\big)}. \end{align} If you compare the velocity $\dot{x}$ with the displacement $x = A e^{i \omega t}$, $x$ lags behind $\dot{x}$ by a phase of $\pi/2$. So the complex number $i$ you mentioned encodes the phase difference information.
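This phase lead is easy to verify numerically (illustrative values for $\omega$, $A$, $t$; `cmath.phase` returns the argument of a complex number):

```python
import cmath

omega, A, t = 2.0, 1.5, 0.1                      # illustrative values
x = A * cmath.exp(1j * omega * t)                # displacement phasor
v = 1j * omega * A * cmath.exp(1j * omega * t)   # velocity, i * omega * x

# Multiplying by i rotates the phasor by pi/2, so v leads x by a quarter
# cycle; the magnitude is scaled by omega.
phase_lead = cmath.phase(v) - cmath.phase(x)
```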
{ "domain": "physics.stackexchange", "id": 87624, "tags": "waves, harmonic-oscillator, complex-numbers" }
Are my protolysis equations right?
Question: We had the task to dissolve several salts in water and measure the pH value of the solution. After that we should "create" the protolysis equations and discuss the chemical equilibria. I got some possible result equations, but I'm really unsure about them. The [] brackets were used to make the ionic charges more visible. solving equations: $\ce{KNO3 <=> [NO3]- + [K]+}$ $\ce{FeCl3 <=> [Fe]^{3+} + [Cl]-}$ $\ce{NH4Cl <=> [NH4]+ + [Cl]-}$ $\ce{NaHSO4 <=> [HSO4]- + [Na]+}$ my protolysis equations: $\ce{[NO3]- + H2O <=> HNO3 + [OH]-}$ $\ce{[Fe(H2O)6]^{3+} + H2O <=> [Fe(H2O)5OH]^{2+} + [H3O]+}$ $\ce{[NH4]+ + H2O <=> NH3 + [H3O]+}$ $\ce{[HSO4]- + H2O <=> [SO4]^{2-} + [H3O]+}$ I would say that all equations run bi-directionally, because the result of the solving equations can vary depending on temperature, for example, and the protolysis equations should be bi-directional because we're dealing with a solution, which isn't a pure acid. Additionally, I got the question whether I can draw conclusions about the pH change from those protolysis reactions. I would say yes, but not very well, depending on the sum of all hydrogen-containing ions like $\ce{H3O+}$ Answer: Yes, they are; all of them are bidirectional. It is not possible to draw conclusions about the pH value, because of the different abilities to dissolve in water and because those reactions may cause an increased or a decreased temperature. (Source: my teacher)
{ "domain": "chemistry.stackexchange", "id": 3716, "tags": "acid-base, solutions" }
Palindrome Checker with Stack
Question: In my answer to this question, I wrote a new solution. Personally, I would not use a stack, but given that a stack was required, how good is this solution? In this solution, I add half of the word, not including the middle character if applicable, to the stack and remove it from the string. If there is a middle character in the total string, the sizes of the string and the stack are not equal, and the middle character is at the beginning of the string, making it easy to delete (we don't need to compare it to itself, do we?). Then, I iterate through the stack, comparing the top of the stack to the beginning of the string; if they are equal, I remove the character from the stack and the string, and continue; otherwise, I return false;. If there are no characters left in the stack, I return true;.

bool isPalindrome(string word) {
    stack<char> s;
    for (int i = 0; i <= word.size() / 2; i++) {
        s.push(word[0]);
        word.erase(0, 1);
    }
    if (word.size() > s.size()) {
        word.erase(0, 1);
    }
    if (s.size() == 0) {
        return true;
    }
    while (s.size() > 0) {
        if (s.top() == word[0]) {
            s.pop();
            word.erase(0, 1);
            // return true only when our stack is empty
            if (s.size() == 0 || word.size() == 0) {
                return true;
            }
        } else {
            return false;
        }
    }
}

Answer: Your logic is incorrect in a few places. The first problem is that you are modifying word inside the loop so you probably want to cache the size before you use it in the loop indices. This is one of those insidious bugs.

int size = word.size();
for (int i = 0; i <= size / 2; i++) {
    s.push(word[0]);
    word.erase(0, 1);
}
if (word.size() > s.size()) {
    word.erase(0, 1);
}

I'm going to assume from here on that the cached size code above is what you intended the behavior of this program to be. Moving on to the next problem. Consider when word.size() is even, say 8. i will iterate from 0 to 4 inclusive. So s will have 5 characters and word will have 3. If word.size() was 9 then i would iterate from 0 to 4 as well, leaving word with 4 characters.
Hence, after the loop, word.size() will never be greater than s.size(). In trying to handle the odd case you produced a bug in the even case. The loop bound you are looking for is:

for (int i = 0; i < size / 2; i++)

Here is the updated code for the first half:

int size = word.size();
for (int i = 0; i < size / 2; i++) {
    s.push(word[0]);
    word.erase(0, 1);
}
if (word.size() > s.size()) {
    word.erase(0, 1);
}

Now you can verify that word.size() and s.size() will always be the same at this point. This means that inside the while loop you can remove the following check since s.size() and word.size() will both reach 0 at the same time:

if (s.size() == 0 || word.size() == 0) {
    return true;
}

Now the only way you can exit the loop is if all the letters on the stack and in the second half of the word were indeed the same. Which of course means that word is a palindrome. So just put a return true at the end. You can now remove the following check before the while loop too since in this case the loop won't run and you will jump down to return true:

if (s.size() == 0) {
    return true;
}

With all of that said I think the code is a bit too complex. There is no need to modify word. With some integer arithmetic you can use two loops to iterate through word without deleting anything. I will print out the loop indices for three looping methods you can use.

Method 1:

int i;
for (i = 0; i < n/2; i++) s.push(word[i]);
if (n % 2) i++; // skip the middle index if n is odd
for (; i < n; i++) compare_stuff();

Method 2:

for (int i = 0; i < n/2; i++) s.push(word[i]);
for (int i = (n+1)/2; i < n; i++) compare_stuff(); // skip the middle index if n is odd

Method 3:

for (int i = 0; i < (n+1)/2; i++) s.push(word[i]);
for (int i = n/2; i < n; i++) compare_stuff(); // check the middle index against itself if n is odd

Method 2 is probably the best for this problem since it allows for local loop indices and doesn't check the middle value unnecessarily.
My code would be:

bool isPalindrome(const std::string& word) {
    std::size_t n = word.size();
    std::stack<char> s;
    for (std::size_t i = 0; i < n/2; i++) {
        s.push(word[i]);
    }
    for (std::size_t i = (n+1)/2; i < n; i++) {
        if (s.top() != word[i]) {
            return false;
        }
        s.pop();
    }
    return true;
}
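For comparison, the same two-loop structure (Method 2) ports directly to Python, using a list as the stack; a quick sketch:

```python
def is_palindrome(word):
    """Stack-based check mirroring Method 2: push the first half,
    then pop while scanning the second half (middle char skipped
    when the length is odd)."""
    n = len(word)
    stack = []
    for i in range(n // 2):
        stack.append(word[i])
    for i in range((n + 1) // 2, n):  # skip the middle index if n is odd
        if stack.pop() != word[i]:
            return False
    return True
```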
{ "domain": "codereview.stackexchange", "id": 12324, "tags": "c++, palindrome, rags-to-riches" }
How to interpret Pauli spinors?
Question: Recently in my QM course we derived the Pauli equation for an electron in a magnetic field. From what I understand, since we now have a spin-dependent term in our Hamiltonian, the spatial and spin degrees of freedom of the electron are now coupled. Hence, the wavefunction should in principle be different depending on whether the electron is spin-up or down, and therefore it's useful to write our Schrodinger equation using a 2-component wavefunction, represented by a mathematical object known as a spinor: $$ \begin{pmatrix} \psi_+ \\ \psi_- \\ \end{pmatrix} $$ where $\psi_+$ is the wavefunction for when the electron is spin-up, and $\psi_-$ for when it's spin-down. While I understand the need for two different wavefunctions depending on the spin, there are still some doubts regarding the interpretation of this spinor which I'd like to clear up: Can you interpret the spinor as a sum of 2 tensor direct products of vectors in the position degree of freedom space and in the spin degree of freedom space? Really what I'm asking is is the spinor equivalent to saying the state of our system is $$|\psi\rangle = \frac{1}{\sqrt[]{2}} (|\psi_+\rangle \otimes |\uparrow\rangle + |\psi_-\rangle \otimes |\downarrow\rangle).$$ I would assume this is right as to describe both position and spin simultaneously you'd need a tensor product space of an infinite-dimensional position eigenket space and a 2-dimensional spin eigenket space, but I want to make sure this is actually what the spinor means. I'm guessing that if your system is initially in a mixture of spin-up and spin-down states (as in the expression in question 1), and you measure the spin in the z-direction, the outcome of your measurement will also collapse your spatial wavefunction into either $\psi_+$ or $\psi_-$ as the position and spin degrees of freedom are coupled. But what if you start with a mixture but measure position before spin?
I understand that the squared moduli of $\psi_+$ and $\psi_-$ give you the position probability distributions of the electron in its spin-up and spin-down states respectively, but what is the position probability distribution when it's still in a mixture of spin states? If either of these doubts could be cleared up it would be greatly appreciated. Answer: ...a 2-component wavefunction, represented by a mathematical object known as a spinor: $$ \begin{pmatrix} \psi_+ \\ \psi_- \\ \end{pmatrix}\tag{A} $$ ...there are still some doubts regarding the interpretation of this spinor which I'd like to clear up: Can you interpret the spinor as a sum of 2 tensor direct products of vectors in the position degree of freedom space and in the spin degree of freedom space? Really what I'm asking is is the spinor equivalent to saying the state of our system is $$|\psi\rangle = \frac{1}{\sqrt[]{2}} (|\psi_+\rangle \otimes |\uparrow\rangle + |\psi_-\rangle \otimes |\downarrow\rangle).$$ I would write the direct product notation as: $$ |\psi_+\rangle\otimes|\uparrow\rangle + |\psi_-\rangle\otimes|\downarrow\rangle\;, $$ whereas you seem to have inserted a $\frac{1}{\sqrt{2}}$ by hand, which is inconsistent with your Eq. (A) above. If we use a notation that is consistent with your Eq. (A) above, we write: $$ 1 = \langle\psi_+|\psi_+\rangle + \langle\psi_-|\psi_-\rangle =\int d^3r\left(|\psi_+(\vec r)|^2 + |\psi_-(\vec r)|^2\right)\;. $$ The probability to measure spin up is: $$ \langle\psi_+|\psi_+\rangle = \int d^3r |\psi_+(\vec r)|^2\;. $$ And the probability to measure spin down is: $$ \langle\psi_-|\psi_-\rangle = \int d^3r |\psi_-(\vec r)|^2\;. $$ The above expressions for probability can be obtained from the usual partial measurement rules.
I'm guessing that if your system is initially in a mixture of spin-up and spin-down states (as in the expression in question 1), and you measure the spin in the z-direction, the outcome of your measurement will also collapse your spatial wavefunction into either $\psi_+$ or $\psi_-$ as the position and spin degrees of freedom are coupled. Yes, the state will collapse, but the collapsed state after the measurement must be normalized ("by hand"). For example, measuring spin-up will collapse the state to: $$ \frac{|\psi_+\rangle}{\sqrt{\langle\psi_+|\psi_+\rangle}} $$ But what if you start with a mixture but measure position before spin? I understand that the squared moduli of $\psi_+$ and $\psi_-$ give you the position probability distributions of the electron in its spin-up and spin-down states respectively, but what is the position probability distribution when it's still in a mixture of spin states? Just like we sum (or rather, integrate) over the spatial position to get the overall probability to measure some given spin, similarly, we sum over spin to get the overall probability density to measure some given position. $$ p(\vec r) =\sum_{i=\uparrow, \downarrow} |\psi_i(\vec r)|^2 \;, $$ where technically $p(\vec r)$ is a probability density rather than a probability. Update (to address additional questions in the comments): In general, if a measurement is "projective" (which is a common kind of measurement, but not necessarily the most general), then we can use a projection operator to understand the collapse of the wavefunction after the measurement as well as the probability to measure a specific value. Measurement of the position is a little more complicated since position is a continuous variable, and so we need to at least integrate over a small region $\delta V$ to interpret our results in terms of a probability rather than probability density. For example, suppose we measure the position and we find the particle within $\delta V$ of $\vec r_0$. 
The relevant projection operator is: $$ \hat \Pi_{\vec r_0} = \int_{\delta V}d^3r'|\vec r_0+\vec r'\rangle\langle \vec r_0 + \vec r'|\;, $$ where $|r\rangle$ is an eigenstate of the position operator. This projection operator's action on $\psi$ is: $$ \hat \Pi|\psi\rangle = \int_{\delta V} d^3r' \left( \psi_+(\vec r'+\vec r_0)|\vec r' + \vec r_0\rangle|\uparrow\rangle + \psi_-(\vec r'+\vec r_0)|\vec r' + \vec r_0\rangle|\downarrow\rangle \right)\;. $$ The probability to measure $\vec r_0$ within $\delta V$ is: $$ P(\vec r_0; \delta V) = \langle \psi|\Pi^\dagger \Pi|\psi \rangle = \langle \psi|\Pi|\psi \rangle $$ $$ =\int_{\delta V}d^3r' \left( |\psi_+(\vec r'+\vec r_0)|^2 + |\psi_-(\vec r'+\vec r_0)|^2 \right)\;, $$ which is consistent with our expression for probability density above. The state of the system after the measurement collapse is described by the normalized ket: $$ |\chi_{\vec r_0;\delta V}\rangle = \frac{\hat \Pi|\psi\rangle}{\sqrt{\langle\psi|\hat\Pi^\dagger\hat \Pi|\psi\rangle}} $$ $$ =\frac{\int_{\delta V}d^3r\left( \psi_+(\vec r_0+\vec r)|\vec r_0 +\vec r\rangle|\uparrow\rangle + \psi_-(\vec r_0+\vec r)|\vec r_0 +\vec r\rangle|\downarrow\rangle \right)} {\sqrt{\int_{\delta V}d^3r \left(|\psi_+(\vec r_0+\vec r)|^2+|\psi_-(\vec r_0+\vec r)|^2 \right)}} $$ If we take $\delta V$ to be very small in comparison with any length scale over which $\psi_i(\vec r)$ changes, e.g., take the limit $\delta V \to 0$, we find that collapsed state becomes: $$ |\chi_{\vec r_0;\delta V}\rangle \underbrace{\to}_{\delta V\to 0} |\vec r_0\rangle\sqrt{\delta V}\frac{\psi_+(\vec r_0)|\uparrow\rangle + \psi_-(\vec r_0)|\downarrow\rangle}{\sqrt{|\psi_+(\vec r_0)|^2+|\psi_-(\vec r_0)|^2}}\;, $$ however, as usual, the position eigenkets are eigenkets of a continuous operator, and are non-normalizable (they are delta normalized), so they can not represent the true physical state of the system. 
But the state $|\chi_{\vec r_0;\delta V}\rangle$, prior to taking the limit, can represent the state of the system after a measurement.
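The probability rules above are easy to check on a discretized spinor. A sketch with two made-up Gaussian components (any normalizable pair would do; none of the values are from the original post):

```python
import math

# Grid and two made-up (unnormalized) spatial components psi_+ and psi_-.
dx = 0.01
xs = [i * dx for i in range(-500, 501)]
psi_up = [math.exp(-(x - 0.5) ** 2) for x in xs]
psi_dn = [0.5 * math.exp(-(x + 0.5) ** 2) for x in xs]

# Normalize the full spinor: integral of (|psi_+|^2 + |psi_-|^2) dx = 1.
norm = math.sqrt(sum(u * u + d * d for u, d in zip(psi_up, psi_dn)) * dx)
psi_up = [u / norm for u in psi_up]
psi_dn = [d / norm for d in psi_dn]

# Probability of each spin outcome: integrate that component alone.
p_up = sum(u * u for u in psi_up) * dx
p_dn = sum(d * d for d in psi_dn) * dx

# Position probability density: sum over spin at each point.
density = [u * u + d * d for u, d in zip(psi_up, psi_dn)]
```

Only the full spinor is normalized; each component alone integrates to the probability of the corresponding spin outcome, and the spatial density sums over spin.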
{ "domain": "physics.stackexchange", "id": 94391, "tags": "quantum-mechanics, electromagnetism, schroedinger-equation, spinors" }
Is there a simple way to express this DTFT in polar form?
Question: Consider the discrete-time system $$ H(z) = a_0 + a_1 z^{-1} + a_2 z^{-2} $$ To compute the DTFT, let $z = e^{j\omega}$ such that $$ H(e^{j\omega}) = e^{-j\omega} \left(a_0 e^{j\omega} + a_1 + a_2e^{-j\omega}\right) \label{eq:H_w} \tag{1} $$ If $a_0 = a_2$, then $H(e^{j\omega})$ can be put into polar form as $$ H(e^{j\omega}) = \left\lvert a_1 + 2 a_0 \cos \omega\right\rvert e^{-j\omega} $$ However, if $a_0 \neq a_2$, then it is not clear how $H(e^{j\omega})$ can be put into polar form. More precisely, substituting $e^{j\omega} = \cos \omega + j \sin \omega$ into \eqref{eq:H_w} yields \begin{align} H(e^{j\omega}) &= e^{-j\omega} \big(a_0 (\cos \omega + j \sin \omega) + a_1 + a_2(\cos \omega - j \sin \omega)\big) \\ &= e^{-j\omega} \big( (a_0 + a_2)\cos \omega + j(a_0 - a_2) \sin \omega + a_1\big) \end{align} Since $(a_0 + a_2)\cos \omega + j(a_0 - a_2) \sin \omega$ is a complex number, then it can be expressed in polar form as $C(\omega)e^{j\theta(\omega)}$, where \begin{align} C(\omega) &= \sqrt{((a_0 + a_2)\cos \omega)^2 + ((a_0 - a_2) \sin \omega)^2} \\ \theta(\omega) &= \operatorname{atan2}\big( (a_0 - a_2) \sin \omega,\ (a_0 + a_2)\cos \omega \big) \end{align} Therefore, \begin{align} H(e^{j\omega}) &= e^{-j\omega} \left( C(\omega)e^{j\theta(\omega)} + a_1\right) \end{align} At this point, I can factorize $e^{j\frac{\theta(\omega)}{2}}$ out of the parentheses and then simplify further, but I feel that this method is cumbersome. Is there an easier approach? Answer: Here is one way to do it. Let's call the coefficients "$b$" instead of "$a$" since "$a$" is usually used for the denominator and "$b$" for the numerator.
Let's start with $$H(e^{j\omega}) = b_0 + b_1 e^{-j\omega} + b_2 e^{-j2\omega} $$ The phase is the inverse tangent of the imaginary part divided by the real part so we get: $$\phi(\omega) = \operatorname{tan}^{-1} \left( \frac{- b_1 \sin(\omega) - b_2 \sin(2\omega)}{b_0 + b_1 \cos(\omega) + b_2 \cos(2\omega)} \right)$$ provided the quadrant is chosen properly. I'm not sure if there is a way to simplify this further. You could try to pull out $e^{-j\omega}$ to make it more symmetrical, but I don't think this will make it any easier. For the magnitude squared we just multiply with the complex conjugate. $$\big|H(e^{j\omega})\big|^2 = H(e^{j\omega}) \cdot H^*(e^{j\omega}) = (b_0 + b_1 e^{-j\omega} + b_2 e^{-j2\omega})(b_0 + b_1 e^{j\omega} + b_2 e^{j2\omega})$$ We just have to multiply this out and sort the terms. We get something like this. $$ \big|H(e^{j\omega})\big|^2 = b_0^2 + b_1^2 + b_2^2 + 2(b_0b_1 + b_1b_2)\cos(\omega) + 2b_0b_2\cos(2\omega)$$ None of these expressions is particularly pretty or intuitive, but that's just the way it is. CAVEAT: I haven't double checked the math, so it's possible that there is an arithmetic error or typo in there.
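The magnitude-squared expression can be sanity-checked against a direct evaluation of $H(e^{j\omega})$ (arbitrary test coefficients):

```python
import cmath
import math

b0, b1, b2 = 0.7, -1.2, 0.4   # arbitrary test coefficients

def H(w):
    """Direct evaluation of H(e^{jw}) = b0 + b1 e^{-jw} + b2 e^{-j2w}."""
    return b0 + b1 * cmath.exp(-1j * w) + b2 * cmath.exp(-2j * w)

def mag2_closed(w):
    """Closed-form |H|^2 from the expansion above."""
    return (b0**2 + b1**2 + b2**2
            + 2 * (b0 * b1 + b1 * b2) * math.cos(w)
            + 2 * b0 * b2 * math.cos(2 * w))
```

The two agree at every frequency, which confirms the expansion term by term.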
{ "domain": "dsp.stackexchange", "id": 10742, "tags": "transfer-function, dtft" }
The acceleration of a particle varies from $2m/s^2$ to $4m/s^2$ as its position changes from 40 mm to 120 mm. Can I get the velocity at 120 mm if v = 0.4 m/s at 40 mm?
Question: I realize this might be an absurd question to ask directly here, but I cannot seem to wrap my head around it. There is no information about the time, and I have not studied any equations where the acceleration varies linearly. This question was given by our professor without any explanation. I want to know if it is possible to solve this. If so, how? It seems to be a simple problem and yet I can't see a simple solution. I have tried drawing acceleration-time and A-V graphs but don't seem to have any results. I also tried integrating but couldn't find a solution. Any method to proceed or a solution would be very helpful. Thanks! Answer: You just need to find a suitable function that relates position and acceleration. Then using Newton's second law, we know: $$\frac{dp}{dt}=m\frac{dv}{dt}=m\frac{dv}{dx}\frac{dx}{dt}=mv\frac{dv}{dx}=ma$$ Thus, $$vdv=adx=a(x)dx$$ If you know $a$ as a function of $x$, which you should be able to derive, then you can integrate from $v_i$ to $v_f$ on the LHS, and from the initial position to the final position on the RHS. As you can see, we don't need an idea of time anymore. We just need to find acceleration as a function of position that satisfies $a = 2$ at $40\,\mathrm{mm}$ and $a = 4$ at $120\,\mathrm{mm}$.
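For instance, if one assumes the acceleration varies linearly with position (an assumption; the problem only fixes the two endpoint values), the integral closes in a few lines:

```python
import math

x1, x2 = 0.040, 0.120   # positions in metres (40 mm and 120 mm)
a1, a2 = 2.0, 4.0       # accelerations in m/s^2
v1 = 0.4                # speed at x1 in m/s

# Linear model a(x) = a1 + k*(x - x1), slope fixed by the two endpoints.
k = (a2 - a1) / (x2 - x1)

# v dv = a dx  =>  v2^2 = v1^2 + 2 * integral_{x1}^{x2} a(x) dx
integral = a1 * (x2 - x1) + 0.5 * k * (x2 - x1) ** 2
v2 = math.sqrt(v1 ** 2 + 2 * integral)   # 0.8 m/s under this linear model
```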
{ "domain": "physics.stackexchange", "id": 83773, "tags": "homework-and-exercises, newtonian-mechanics, kinematics, acceleration, velocity" }
Singularity and Illumina's Nirvana
Question: I am trying to use Illumina's Nirvana, however I can't seem to get it to work despite having the right Singularity path. I pulled Nirvana down using

singularity pull docker://annotation/nirvana:3.14

I found the path to the .dll using the following command:

singularity exec nirvana_3.14.sif /bin/bash -c "cd /opt/nirvana/ && ls"

Tried to run the downloader using:

## Define path to singularity image:
sif_container="/path/to/nirvana_3.14.sif"

## Path in the singularity file:
sif_path="/opt/nirvana/Downloader.dll"

## Data to be placed in
Data="path/to/Nirvana/GRCh38/"
#_____________________________________________________________________________________________________________________________________

singularity exec "$sif_container" dotnet "$sif_path" --ga GRCh38 --out "$Data"

yet it keeps saying ERROR: The top-level output directory directory was not specified. Please use the --out parameter.. I have checked the paths and they are correct and I have specified the data path, yet it does not seem to recognize it? I have even moved around the container and the path variables to no avail. Answer: From your previous questions, it is apparent that your HPC has an odd singularity setup. You could ask your sysadmin about how to navigate it. EDIT After working with OP on chat (and based on previous experience with them), it looks like their sysadmin has mount hostfs enabled in the singularity.conf file, which messes with every container every time. I guess it's easier than letting people bind their directories of choice but anyway, the way to circumvent that is to use --no-mount hostfs @Indira, always run singularity [exec|shell|run] --no-mount hostfs instead of plain singularity [exec|shell|run]. All containers will then run well and if they error out, it's probably because either the container internally encountered an error or it was unable to access an external resource.
What you're running into now is a mix that obscures the real problems and creates a no-win scenario for you.
{ "domain": "bioinformatics.stackexchange", "id": 2498, "tags": "illumina, singularity" }
How to check state of plan execution in MoveIt! during async execution in python?
Question: I'm using MoveIt! to generate motion plans for a robot arm. Have it working just fine using the defined move_group for my arm. My question is, when executing a plan with group.go(wait=False), what is the correct way to check on the status of the execution of the path? The return of group.go(wait=False) is a boolean so that doesn't do me any good, and the group, and robot objects do not seem to have anything indicating a plan is being executed. Originally posted by oars on ROS Answers with karma: 48 on 2016-12-13 Post score: 2 Original comments Comment by JoshMarino on 2016-12-14: Should be able to check the status of the controller or from joint states. Answer: Hi Oars, There is no way to check for the status of group.go(wait=False) using the standard moveit_commande API calls. However, under the hood, the moveit_commander makes a call to the moveit_msgs/MoveGroup Action. The difference between async and non-async is that when you tell moveit_commander to wait, it monitors the output of the MoveGroup Action state and only returns control to the caller when the action has either succeeded or failed. To see how that works: the code that does so can be found here. To get the relevant information from the action server you can subscribe to the /move_group/status/ topic, which publishes a actionlib_msgs/GoalStatusArray that "stores the statuses for goals that are currently being tracked". An alternative would be the /move_group/feedback/ topic, which also contains information on what the action server is doing. I'd advise you to rostopic echo both topics to see what messages are being published so you can decide which is easier to use. Originally posted by rbbg with karma: 1823 on 2016-12-14 This answer was ACCEPTED on the original site Post score: 11 Original comments Comment by oars on 2016-12-14: Awesome! Thanks for the detailed answer. Really appreciate it. This gives me all I need moving forward.
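The integer codes published in those status messages come from actionlib_msgs/GoalStatus; a small standalone helper (hypothetical, not part of the moveit_commander API) shows how one might classify them:

```python
# Status codes from actionlib_msgs/GoalStatus.
PENDING, ACTIVE, PREEMPTED, SUCCEEDED, ABORTED, REJECTED = 0, 1, 2, 3, 4, 5
RECALLED = 8
TERMINAL = {PREEMPTED, SUCCEEDED, ABORTED, REJECTED, RECALLED}

def execution_done(status_codes):
    """True once the most recently tracked goal reached a terminal state.

    status_codes mimics the integer statuses in a GoalStatusArray's
    status_list, most recent goal last.
    """
    if not status_codes:
        return False  # nothing tracked yet
    return status_codes[-1] in TERMINAL
```

In a real node this helper would be fed from a subscriber on the /move_group/status/ topic.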
{ "domain": "robotics.stackexchange", "id": 26477, "tags": "ros, python, moveit, move-group" }
Reference paper to support the information--energy relation $kT \ln 2~\mathrm{\frac{J}{bit}}$
Question: In the answer to Maxwell's Demon Constant (Information-Energy equivalence) it is stated that one bit of information allows one to perform $kT \cdot \ln2$ Joules of work. Which paper supports this thesis? (There are many publications on Maxwell's demon, the Szilard engine, and Landauer's principle.) Answer: See e.g. page 3 of http://arxiv.org/abs/0707.3400 It's nonsensical to attribute this simple particular insight to a "discoverer"; all these considerations should be associated with Ludwig Boltzmann who knew the answer even though the information in physics was considered continuous at that time. One may easily derive the result. For example, put one molecule of an ideal gas in a vessel, learn in which half of the vessel the molecule is (one bit of information), and put a barrier in the middle. You will be able to allow the molecule to do the work and expand from $V/2$ to the original volume $V$. The molecule will do the work $$ W = \int_{V/2}^V p\,dV = \int_{V/2}^V \frac{kT}{V}dV = kT \ln \frac{V}{V/2} = kT\ln 2 $$ where I used $pV = NkT$ for $N=1$ molecule of an ideal gas. More generally, you don't have to consider an ideal gas. Just recall how work is related to the free energy, $E-TS$. To reduce the entropy of a subsystem by one bit, i.e. by $k\ln 2$ (look at Boltzmann's tombstone formula to know why it's this value), we have to change $E-TS$ by $kT\ln 2$.
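The single-molecule work integral checks out numerically (SI Boltzmann constant, midpoint quadrature; the volume units cancel):

```python
import math

kB = 1.380649e-23   # J/K
T = 300.0           # K
V = 1.0             # arbitrary units; only the ratio of the limits matters

# W = integral from V/2 to V of (kB*T / v) dv, by the midpoint rule.
N = 100000
dv = (V - V / 2) / N
W = sum(kB * T / (V / 2 + (i + 0.5) * dv) for i in range(N)) * dv

# This is the maximum work extractable from one bit of information.
W_exact = kB * T * math.log(2)
```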
{ "domain": "physics.stackexchange", "id": 2814, "tags": "thermodynamics, specific-reference, entropy, information" }
Inertial and a gravitational component of two observers
Question: Started reading the book "How Einstein found his field equations" by Janssen and Renn and I am already stuck on chapter 1. What do the authors mean by the "split" below?: "What Einstein made relative [in GR] is not motion but gravity. Two observers, one moving on a geodesic, the other moving on a non-geodesic, will disagree about how to split what is now often called the inertio-gravitational field, a guiding field affecting all matter the same way, into an inertial and a gravitational component." Answer: Well, I am generally familiar with the works of Michel Janssen and the works of Jürgen Renn, and I endorse the interpretation of GR that they present in their works. What Janssen and Renn are describing is about attribution. In physics the current standard approach is to think of interactions as mediated by fields. It is for example assumed that the Coulomb force and magnetism are manifestations of the electromagnetic field. That is, rather than assuming that the dynamic interaction of two charged particles is a direct particle-to-particle interaction, it is assumed that a field is acting as the mediator of that interaction. Janssen and Renn are representative of a school of thought in which the following proposition is made: Allow for the possibility that the phenomenon of Inertia is mediated by a field. This inertia field is then assumed to have the property that it opposes change of velocity. The measure for coupling to the inertia field is the familiar term for inertial mass: '$m$'. The next step is to grant the following set of suppositions: In the absence of a source of gravitational interaction the inertia field is uniform. Another way of stating that is: by default inertia is isotropic (the same in all directions). A source of gravitational interaction induces a bias in the inertia field. The standard name for this bias is: 'Curvature of Spacetime'. This biased state of the inertia field is then acting as mediator of gravitational interaction.
So in terms of this interpretation there is a fundamental assumption that there is no separate gravitational field. Rather, according to this interpretation the field that gives rise to the phenomenon of inertia and the field that is acting as the mediator of gravitational interaction are one and the same field. This assumption of a single field then accounts for the equivalence of inertial and gravitational mass. So we take the standard thought demonstration of a space station that is spinning. Let the space station be ring-shaped. In that rotating ring: the centripetal acceleration has the effect of pulling G's. Do you attribute the G-load to inertia? Or do you attribute the G-load to the presence of gravity? The point is: a local experiment cannot tell the difference. About the word 'split' that Janssen and Renn are using: they are referring to an analogy with electromagnetism. In terms of electromagnetism: take the case of a wire with a current running through it. Now consider the following two cases: a charged particle that is stationary with respect to the wire, and a charged particle that has a velocity relative to the wire. The stationary-wrt-the-wire particle does not experience a Lorentz force; the velocity-wrt-the-wire particle does experience a Lorentz force. Generally, depending on the velocity of an observer relative to some system the composition of Coulomb force and Lorentz force comes out differently. There is a single field, the electromagnetic field, but depending on the velocity of the observer relative to the system the decomposition comes out differently. Janssen and Renn refer to that decomposition as 'split'. Janssen and Renn point out that there is an analogous split if one grants that GR is a theory of the inertio-gravitational field. For the inertio-gravitational field: how that split comes out depends on the state of acceleration of the observer relative to the system.
{ "domain": "physics.stackexchange", "id": 100200, "tags": "general-relativity, observers, geodesics" }
What is the identity of the unknown compound using GCMS and NMR?
Question: I've been asked to identify an unknown compound by performing several analytical techniques on it. The compound was a white powder and had a boiling point of 65 degrees Celsius. I have the GC mass spectrum as well as both Carbon-13 and Proton NMR spectra (note that the compound was dissolved in chloroform for NMR analysis). So far I think it contains an aromatic ring because of the chemical shifts for the NMR spectra but that's all I have so far. These are the images of the spectra (Fig 1. GC and Mass spectrum, Fig 2. Proton NMR spectrum, Fig 3. Zoomed in Proton NMR spectrum, Fig 4. Carbon NMR spectrum, Fig. 5 Zoomed in Carbon NMR spectrum): Answer: Hints: If you counted carbon signals in $\ce{^{13}C}\mathrm{~NMR}$, you'd find the compound has at least 10 $\ce{C}$ atoms. Thus, its indicated boiling point is incorrect, unless it's determined at reduced pressure. If so, you need to give that pressure. You are correct in saying your compound should be aromatic. Did you count how many aromatic hydrogens there are? At least, by doing that, you may be able to propose a framework for your compound as a starting point. Now, if you are careful, you see no aliphatic carbon signals in your $\ce{^{13}C}\mathrm{~NMR}$. Then, what is the signal at $\mathrm{\delta~1.6}$ in your $\ce{^{1}H}\mathrm{~NMR}$ doing there? Now, go to $\mathrm{GC/MS}$. In there, your parent peak almost looks like a doublet. Do you know why? Can you recall which element has two stable isotopes of roughly 50:50 natural abundance separated by $2~m/z$ units? If you don't remember, then subtract each of your parent peaks from your base peak in your $\mathrm{MS}$. Now, you almost have the empirical formula of your compound. Now it's a matter of analyzing peak patterns in $\ce{^{1}H}\mathrm{~NMR}$ and the fragmentation pattern in $\mathrm{MS}$.
{ "domain": "chemistry.stackexchange", "id": 10078, "tags": "nmr-spectroscopy, mass-spectrometry" }
How to run python request for list of url's with multiple page numbers?
Question: Hi I am trying to get the cancer ontologies (obo_id and label) from EBI-OLS. Earlier I used the below code to get the obo_id terms and labels descendants = requests.get( "https://www.ebi.ac.uk/ols/api/ontologies/efo/terms/http%253A%252F%252Fpurl.obolibrary.org%252Fobo%252FMONDO_0004992/descendants" ) As there is more than one page, I created total_pages total_pages = descendants.json()["page"]["totalElements"] // 1000 + 1 cancer_id = [] for i in range(total_pages): ## Check the number of id's present in 'totalElements' des = {"page": i, "size": descendants.json()["page"]["totalElements"]} ## Add the number of 'totalElements' and 'pages' in the new python request for descendants cancer_efo = requests.get( "https://www.ebi.ac.uk/ols/api/ontologies/efo/terms/http%253A%252F%252Fpurl.obolibrary.org%252Fobo%252FMONDO_0004992/descendants", params=des, ) ## Get the number of obo_id's present in the cancer_efo.json obo = [obj["obo_id"] for obj in cancer_efo.json()["_embedded"]["terms"]] ## Get the number of cancer labels present in the cancer_efo.json label = [obj["label"] for obj in cancer_efo.json()["_embedded"]["terms"]] data = zip(obo, label) cancer_id.extend(data) The above code worked pretty well with one link. Now the issue is that I want to get ontologies for 37 descendant links, and most links have more than one page. I want to get one file (cancer_id) containing all obo_id's and labels from all these 37 links. 
I made a separate file called list.txt (Right now I am copying only 3 here) list.txt https://www.ebi.ac.uk/ols/api/ontologies/ncit/terms/http%253A%252F%252Fpurl.obolibrary.org%252Fobo%252FNCIT_C4861/descendants https://www.ebi.ac.uk/ols/api/ontologies/ncit/terms/http%253A%252F%252Fpurl.obolibrary.org%252Fobo%252FNCIT_C3171/descendants https://www.ebi.ac.uk/ols/api/ontologies/ncit/terms/http%253A%252F%252Fpurl.obolibrary.org%252Fobo%252FNCIT_C3457/descendants I tried the below code but it is not working import requests with open ('list.txt', 'r') as fin: links = fin.readlines() def requestIt(links): cancer_all = requests.get(links) return cancer_all.json() data = list(map(requestIt, links)) obo = obj["obo_id"] for obj in data The error is obo = obj["obo_id"] for obj in data ^ SyntaxError: invalid syntax Answer: Okay, its done. What you are looking for is as follows: import requests from pathlib import Path # Test code def get_all_keys(d): for key, value in d.items(): yield key if isinstance(value, dict): yield from get_all_keys(value) def requestIt(links): cancer_all = requests.get(links) return cancer_all.json() if __name__ == '__main__': path = Path('/PathToDir/test', 'link.html') with open (path, 'r') as fin: links = fin.readlines() data = list(map(requestIt, links)) # Test code for x in get_all_keys(data[0]['_embedded']['terms'][0]): print(x) # Answer code for i,d in enumerate(data): print (f'Link {i + 1} ') for obo_id in d['_embedded']['terms']: print (obo_id['obo_id']) Output Link 1 NCIT:C8263 NCIT:C9163 NCIT:C5630 Link 2 NCIT:C7175 NCIT:C68700 NCIT:C68696 NCIT:C68697 NCIT:C68698 NCIT:C68699 NCIT:C9155 NCIT:C9288 NCIT:C9287 NCIT:C9019 NCIT:C9018 NCIT:C36055 NCIT:C36058 NCIT:C36057 NCIT:C36056 NCIT:C38377 NCIT:C122688 NCIT:C156718 NCIT:C156719 NCIT:C169107 Link 3 NCIT:C7056 NCIT:C131911 NCIT:C165799 NCIT:C68684 NCIT:C4998 NCIT:C5089 NCIT:C5095 NCIT:C8154 NCIT:C8604 NCIT:C127840 NCIT:C8489 NCIT:C8852 NCIT:C69144 NCIT:C115349 NCIT:C115367 NCIT:C165240 NCIT:C8926 
NCIT:C160230 NCIT:C160231 NCIT:C182035 Where Link 1 is ... https://www.ebi.ac.uk/ols/api/ontologies/ncit/terms/http%253A%252F%252Fpurl.obolibrary.org%252Fobo%252FNCIT_C4861/descendants ... and so on. The code marked # Test code is what is used to interrogate the data structure. This doesn't contribute to your answer. However, given the way I've done it, it's important to understand the data structure. Thus the output of # Test code is at the base of this post. If you need another part of the data structure you simply switch obo_id to whatever else you are looking for (at the base of this post). There will be an easy-to-use OOP method based on the data object. I didn't want to spend time investigating the object ... the question is specifically about obo_id so it was easier to go for that. Test code output iri lang description synonyms annotation Contributing_Source Display_Name Legacy Concept Name Neoplastic_Status Preferred_Name Semantic_Type UMLS_CUI code label ontology_name ontology_prefix ontology_iri is_obsolete term_replaced_by is_defining_ontology has_children is_root short_form obo_id # <- it's here in_subset obo_definition_citation obo_xref obo_synonym is_preferred_root _links self href parents href ancestors href hierarchicalParents href hierarchicalAncestors href jstree href graph href
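One thing the accepted code does not do is handle the pagination from the original question, so a link whose descendants span several pages would still be truncated. The pagination loop can be folded into a helper that works for every link. A sketch — the helper names and the page size are my own, and `fetch` is any callable returning decoded JSON, e.g. `lambda u, p: requests.get(u, params=p).json()`:

```python
def fetch_all_terms(url, fetch, page_size=500):
    """Collect (obo_id, label) pairs across every page of one
    OLS descendants endpoint.  `fetch(url, params)` must return
    the decoded JSON response as a dict."""
    first = fetch(url, {"page": 0, "size": page_size})
    total = first["page"]["totalElements"]
    pages = (total + page_size - 1) // page_size  # ceiling division
    results = []
    for i in range(pages):
        data = first if i == 0 else fetch(url, {"page": i, "size": page_size})
        for term in data["_embedded"]["terms"]:
            results.append((term["obo_id"], term["label"]))
    return results

def fetch_links(links, fetch):
    # One flat list for all 37 descendant links.
    cancer_id = []
    for url in links:
        cancer_id.extend(fetch_all_terms(url, fetch))
    return cancer_id
```

With requests installed you would call `fetch_links(links, lambda u, p: requests.get(u, params=p).json())`; keeping the HTTP call behind a callable also makes the paging logic testable without any network access.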
{ "domain": "bioinformatics.stackexchange", "id": 2436, "tags": "python, api, pandas, ontology" }
Will there any modification happen?
Question: Initially capacitor 2 is uncharged. Later it is connected with pre-charged capacitor 1 in an open circuit, so in this arrangement capacitor 1 cannot charge capacitor 2. My question is: as no current is flowing, the top plate and bottom plate of capacitor 2 and the bottom plate of capacitor 1 are at the same potential. This is always true for this arrangement regardless of how much charge there is on capacitor 1. So, why do the potentials of the bottom plate of capacitor 2 and the bottom plate of capacitor 1 always have the same magnitude? Will any modification happen on the plates at the moment of connecting these two capacitors in an open circuit? Answer: Potential is always measured relative to something. When you consider a capacitor in isolation (not connected to a circuit), then the difference between the two plates gives an obvious reference ("the other plate"). When you connect the two plates in a circuit, they now share a common reference (the wire that connects them) and it makes sense to express all potentials relative to that point (which you might ground). The potential difference across each of the two capacitors was not affected by connecting just one side; only if you close the switch (so current will flow to equalize the potential difference across the two capacitors) will you effect a change.
{ "domain": "physics.stackexchange", "id": 43529, "tags": "electric-circuits, capacitance" }
Parsing an XML tree of categories
Question: I have parse function which is parsing tree of categories. I've written it in simplest way possible and now struggling with refactoring it. Every nested loop is doing the same stuff but appending object to object childs initialized at the top. I think it's possible to refactor it with recursion but I'm struggling with it. How to wrap it in recursion function to prevent code duplication? Final result should be a list of objects or just yield top level object with nested childs. for container in category_containers: root_category_a = container.xpath("./a") root_category_title = root_category_a.xpath("./*[1]/text()").get() root_category_url = self._host + root_category_a.xpath("./@href").get() root = { "title": root_category_title, "url": root_category_url, "childs": [], } subcategory_rows1 = container.xpath("./div/div") for subcat_row1 in subcategory_rows1: subcategory_a = subcat_row1.xpath("./a") subcategory_title = subcategory_a.xpath("./*[1]/text()").get() subcategory_url = self._host + subcategory_a.xpath("./@href").get() subcat1 = { "title": subcategory_title, "url": subcategory_url, "childs": [], } subcategory_rows2 = subcat_row1.xpath("./div/div") for subcat_row2 in subcategory_rows2: subcategory2_a = subcat_row2.xpath("./a") subcategory2_title = subcategory2_a.xpath("./*[1]/text()").get() subcategory2_url = self._host + subcategory2_a.xpath("./@href").get() subcat2 = { "title": subcategory2_title, "url": subcategory2_url, "childs": [], } subcategory_rows3 = subcat_row2.xpath("./div/div") for subcat_row3 in subcategory_rows3: subcategory3_a = subcat_row3.xpath("./a") subcategory3_title = subcategory3_a.xpath("./*[1]/text()").get() subcategory3_url = self._host + subcategory3_a.xpath("./@href").get() subcat3 = { "title": subcategory3_title, "url": subcategory3_url, "childs": [], } subcat2['childs'].append(subcat3) subcat1['childs'].append(subcat2) root['childs'].append(subcat1) yield root Answer: You can start by just extracting the bit that clearly 
is repeated into a standalone function: def get_category(category) -> dict[str, Any]: category_a = category.xpath("./a") category_title = category_a.xpath("./*[1]/text()").get() category_url = self._host + category_a.xpath("./@href").get() return { "title": category_title, "url": category_url, "childs": [], } and then turn your code into something like: for container in category_containers: root = get_category(container) for subcontainer in container.xpath("./div/div"): subcategory = get_category(subcontainer) for subsubcontainer in subcontainer.xpath("./div/div"): subsubcategory = get_category(subsubcontainer) for subsubsubcontainer in subsubcontainer.xpath("./div/div"): subsubsubcategory = get_category(subsubsubcontainer) subsubcategory["childs"].append(subsubsubcategory) subcategory["childs"].append(subsubcategory) root["childs"].append(subcategory) yield root For recursion to work with this problem you're going to need to define the maximum depth somehow - something like this should work, and I think you'll get the gist of it: def recurse_categories(container, depth=0): category = get_category(container) if depth < 3: for child in container.xpath("./div/div"): category["childs"].append(recurse_categories(child, depth + 1)) return category
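Since the scrapy selector interface boils down to asking a node for its title, URL, and child nodes, the recursive shape can be exercised without scrapy at all. A self-contained sketch — `FakeNode` and its attributes are hypothetical stand-ins for the real selectors and `xpath("./div/div")` calls:

```python
class FakeNode:
    """Stand-in for a scrapy selector node: a title, a url, and children."""
    def __init__(self, title, url, children=()):
        self.title, self.url, self.children = title, url, list(children)

def get_category(node):
    # In the real spider this would read ./a text and @href via xpath.
    return {"title": node.title, "url": node.url, "childs": []}

def recurse_categories(node, depth=0, max_depth=3):
    """Build the nested category dict; stop descending at max_depth."""
    category = get_category(node)
    if depth < max_depth:
        for child in node.children:  # real code: node.xpath("./div/div")
            category["childs"].append(recurse_categories(child, depth + 1, max_depth))
    return category
```

The key points are that the recursive call goes over the node's children (not the node itself) and that each call returns its finished dict, so the caller can append it.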
{ "domain": "codereview.stackexchange", "id": 43810, "tags": "python, parsing, recursion, xml, scrapy" }
Client for many similar kinds of REST requests
Question: I have a class that makes calls to a Web API (Using RestSharp), which works fine but the code is super ugly. What would be the best way to refactor it? I thought of just doing a Facade pattern so all the calls that goes to cart will be in a separate class and order will be in an order class and so on. All the methods looks very similar maybe I can extract something. public class ApiRestClient : IApiRestClient { private readonly RestClient _client; private readonly string _url = ConfigurationManager.AppSettings["webapibaseurl"]; public ApiRestClient() { _client = new RestClient(_url); } public TokenDto Get(Guid id) { var request = new RestRequest("/cart/{id}", Method.GET) {RequestFormat = DataFormat.Json}; request.AddParameter("id", id, ParameterType.UrlSegment); var response = _client.Execute(request); return JsonConvert.DeserializeObject<TokenDto>(response.Content); } public void SaveOrder(OrderDto order) { var request = new RestRequest("/order/", Method.POST) {RequestFormat = DataFormat.Json}; request.AddObject(order); _client.Execute<TokenDto>(request); } public string GetLayout(int? 
id) { var request = new RestRequest("/customer/{id}", Method.GET) {RequestFormat = DataFormat.Json}; request.AddParameter("id", id, ParameterType.UrlSegment); var response = _client.Execute(request); return JsonConvert.DeserializeObject<CustomerDto>(response.Content).Layout; } public void UpdateUser(UserDto userMap) { var request = new RestRequest("/user/", Method.POST) {RequestFormat = DataFormat.Json}; request.AddObject(userMap); _client.Execute<UserDto>(request); } public IEnumerable<TokenDto> GetInvoices(Guid id) { var request = new RestRequest("/receipt/{id}", Method.GET) {RequestFormat = DataFormat.Json}; request.AddParameter("id", id, ParameterType.UrlSegment); var response = _client.Execute(request); return JsonConvert.DeserializeObject<IEnumerable<TokenDto>>(response.Content); } public TokenDto GetInvoice(Guid id) { var request = new RestRequest("/receipt/invoice/{id}", Method.GET) {RequestFormat = DataFormat.Json}; request.AddParameter("id", id, ParameterType.UrlSegment); var response = _client.Execute(request); return JsonConvert.DeserializeObject<TokenDto>(response.Content); } } Answer: I don't see a reason for having a readonly _url in this class. In addition I don't see a reason to have a class-level _url variable either. I would like to suggest having two constructors, one having a string url argument and one being argumentless. In this way it is easy without changing the AppSettings to have some flexibility. Applying this would look like so: private readonly RestClient _client; public ApiRestClient() : this(ConfigurationManager.AppSettings["webapibaseurl"]) { } public ApiRestClient(string url) { _client = new RestClient(url); }
The GetLayout() method should be split into a GetCustomer() method (which you may need anyway) to only take the Layout of that returned CustomerDto. The default Method for RestRequest is Method.Get so you could use the constructor which only takes string resource as a parameter. The creation of these requests could be extracted to a method which takes a string and an object as parameters. Something along these lines: private RestRequest GetGetRequest(string resource, object value, string name) { var request = new RestRequest(resource) {RequestFormat = DataFormat.Json}; request.AddParameter(name, value, ParameterType.UrlSegment); return request; } If you by any chance are using C# 6 you could use the nameof operator like so private RestRequest GetGetRequest(string resource, object id) { var request = new RestRequest(resource) {RequestFormat = DataFormat.Json}; request.AddParameter(nameof(id), id, ParameterType.UrlSegment); return request; } In the same way you could implement a GetPostRequest() method like so private RestRequest GetPostRequest(string resource, object value) { var request = new RestRequest(resource, Method.POST) { RequestFormat = DataFormat.Json }; request.AddObject(value); return request; } You could also add a generic method ProcessRequest<T> like so private T ProcessRequest<T>(RestRequest request) { var response = _client.Execute(request); return JsonConvert.DeserializeObject<T>(response.Content); } (I don't really know if I like this). 
Applying the mentioned points will lead to public class ApiRestClient : IApiRestClient { private readonly RestClient _client; public ApiRestClient() : this(ConfigurationManager.AppSettings["webapibaseurl"]) { } public ApiRestClient(string url) { _client = new RestClient(url); } private RestRequest GetGetRequest(string resource, object value, string name) { var request = new RestRequest(resource) { RequestFormat = DataFormat.Json }; request.AddParameter(name, value, ParameterType.UrlSegment); return request; } private T ProcessRequest<T>(RestRequest request) { var response = _client.Execute(request); return JsonConvert.DeserializeObject<T>(response.Content); } public TokenDto GetCart(Guid id) { var request = GetGetRequest("/cart/{id}", id, "id"); return ProcessRequest<TokenDto>(request); } public string GetLayout(int? id) { var customer = GetCustomer(id); return customer.Layout; } public CustomerDto GetCustomer(int? id) { var request = GetGetRequest("/customer/{id}", id, "id"); return ProcessRequest<CustomerDto>(request); } public IEnumerable<TokenDto> GetInvoices(Guid id) { var request = GetGetRequest("/receipt/{id}", id, "id"); return ProcessRequest<IEnumerable<TokenDto>>(request); } public TokenDto GetInvoice(Guid id) { var request = GetGetRequest("/receipt/invoice/{id}", id, "id"); return ProcessRequest<TokenDto>(request); } private RestRequest GetPostRequest(string resource, object value) { var request = new RestRequest(resource, Method.POST) { RequestFormat = DataFormat.Json }; request.AddObject(value); return request; } public void SaveOrder(OrderDto order) { var request = GetPostRequest("/order/", order); _client.Execute<TokenDto>(request); } public void UpdateUser(UserDto userMap) { var request = GetPostRequest("/user/", userMap); _client.Execute<UserDto>(request); } }
{ "domain": "codereview.stackexchange", "id": 16719, "tags": "c#, rest, client" }
find the minimum cost conversion between currencies A and B, given a matrix of currency conversions
Question: Given a matrix of currency conversions, find the minimum cost conversion between currencies A and B (ex. maybe min cost conversion is A->C->D->B) I was thinking of this as some sort of max flow problem such that A and B are the source and sink respectively, but I can't figure out exactly how to transform the weights of the problem. Answer: You can use a pathfinding algorithm for this case: Create a node for each currency type. Add an edge between two currency types if you can directly convert between them (in the case of your problem, you always can). For now, let's put the currency conversion rate as the weight of the edge - but this will change later on. Say you want to convert between currency $A$ and $B$, then you will want to follow the path $p:=v_0,v_1,\dots,v_k$ such that $\prod_{i=0}^{k-1} w(v_i,v_{i+1})$ is minimized. Notice that by applying $\log$ on the formula, and using the fact that $\log$ is monotonic, we get that this is exactly the path with $\sum_{i=0}^{k-1} \log(w(v_i,v_{i+1}))$ minimized. Therefore, by changing the weights to their logarithm (the weights will now be $\log(conversion\_rate(v_i,v_{i+1}))$ instead of just $conversion\_rate(v_i,v_{i+1})$), we get that a shortest path is exactly what you need. You can use Dijkstra here (note that Dijkstra requires nonnegative weights, which holds as long as every rate is at least $1$; with rates below $1$ the logarithms turn negative and Bellman–Ford is the safer choice), or any other pathfinding algorithm you prefer.
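The log-weight trick can be sketched in a few lines. The adjacency-dict representation and the rate values below are made up for illustration, and Dijkstra's nonnegative-weight requirement is assumed to hold (all rates at least 1):

```python
import heapq
import math

def cheapest_conversion(rates, src, dst):
    """Shortest path under weight log(rate), so the product of rates
    along the returned path is minimal.  `rates[a][b]` is the direct
    conversion rate from a to b (assumed >= 1 so that every log-weight
    is nonnegative, as Dijkstra requires)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, rate in rates.get(u, {}).items():
            nd = d + math.log(rate)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path; exp undoes the log to recover the product.
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    path.reverse()
    return path, math.exp(dist[dst])
```

For example, with a direct A->B rate of 10 but A->C and C->B rates of 2 and 3, the detour's product 6 wins, and the function returns the path A, C, B.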
{ "domain": "cs.stackexchange", "id": 18934, "tags": "algorithms" }
rxbag timeline and plots incorrect
Question: I tried rxbag with the bag at: pr.willowgarage.com/data/gmapping/basic_localization_stage.bag. The result is in screenshot: alunos.deec.uc.pt/~a2004110783/rxbag.png The base_scan topic only shows messages up to 0:08, while there are more, just like tf. On the plot the translation.y is not getting plotted and instead it is a straight line at y=0. This is in Fuerte with all the apt-get updates. Originally posted by nunojpg on ROS Answers with karma: 9 on 2012-12-25 Post score: 0 Answer: Besides rxbag, the new replacement rqt_bag also shows the same effect. The bag file contains one message (at about 0:08) whose timestamp is out of order. You can fix the bag file by calling bagsort YOUR-BAG.bag YOUR-FIXED-BAG.bag. Unfortunately that bagsort script is currently broken. I already committed a patch - if you don't want to wait for the new release you can just apply the simple fix from https://github.com/ros/ros_comm/issues/42 manually. Originally posted by Dirk Thomas with karma: 16276 on 2012-12-29 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 12203, "tags": "ros, rxbag, rqt-bag" }
What worms devour the body?
Question: I was reading this site which broached comedones, an esoteric word to me; so I thought to look up its etymology which I find exceptionally singular and peculiar (I would have never guessed that the word originated from 'to eat up'!) : Etymonline: "blackhead," etc., 1866, from Latin comedo "glutton," from comedere "to eat up" (see comestible). A name formerly given to worms that devour the body; transferred in medical use to secretions that resemble them. Footnote: ODO also offers an etymology. Answer: A maggot typically feeds on carrion. A maggot is the larva of a fly; the term usually refers to the larvae of Brachyceran flies, such as houseflies, cheese flies, and blowflies. Interestingly, decomposition by maggots has a lot of use in forensic science, as the presence or development of maggots on a corpse can be useful to forensic entomologists to determine the approximate time of death. Depending on the species and the conditions, maggots may be observed on a body within 24 hours. The eggs are laid on the body and when the eggs hatch, the maggots move towards their preferred conditions and begin to feed. Insects are usually useful 25–80 hours post mortem, and this is contingent on temperature, humidity and oxygen availability. After 80 hours, this method becomes less reliable.
{ "domain": "biology.stackexchange", "id": 3563, "tags": "entomology, etymology" }
Confused about volume, density and mass, help!
Question: I got into an argument with my friend, which cast confusion on my understanding of density and its relationship to volume. I'm hoping to get some clarity. The argument involved describing density in terms of volume. Let's say you define a sphere in empty space. You choose a point, apply the formula for a sphere, and now you have a sphere. Not a sphere OF anything other than space, just a spatial, theoretical sphere. No particles, massless or otherwise (this is a thought experiment). What is the density of that sphere? Is it zero, or is it undefined? I know density can be defined as ρ = m/V. But in a theoretical sphere, which HAS volume, should we call mass zero, because there is none? Or is it undefined because a theoretical sphere really isn't related to mass at all? If it IS undefined, does that mean it makes no sense to relate density to volume because density is only a property of mass? The answer, to me, seems to be that in fact it makes no sense to talk about the density of a massless object. Sorry if I answered my own question, but I would still like clarity. If someone could help guide me through the assumptions I'm making about reality and math and how they relate, I'd really appreciate it. Answer: Theoretically speaking, in order for something (usually particles) to be massless, it has to be travelling at the speed of light. In quantum theory, the uncertainty principle states that the position and momentum of such a particle cannot be accurately determined, therefore it is not possible to measure the volume of the particle. So it is generally assumed that the volume is too small and/or insignificant. Regarding your question, I think that the simple answer would be: a massless object would have no volume or density, or they would be immeasurable. Take your pick.
{ "domain": "physics.stackexchange", "id": 28882, "tags": "mass, density, volume" }
Horizontal symmetry plane of cis-dichloroosmiumtetracarbonyl
Question: Consider the molecule $\ce{Os(CO)4Cl2}$ with a drawing given below via Chemtube3d: I thought in this molecule, if I drew a horizontal plane, then a $σ_\mathrm{h}$ would be present i.e. A ‘horizontal’ mirror plane is perpendicular to the principal axis seeing as the mirror would, in my view, cut the atoms in half so the top would be the reflection of the bottom. However, my professor said in a lecture that this molecule shown here does not have $σ_\mathrm{h}$ and I'm slightly puzzled as to why. Why does this molecule not have a $σ_\mathrm{h}$ plane? Answer: Your compound has only one proper axis of rotation and it is $C_2$ as shown in the diagram. Therefore, it is the principal axis of rotation. By definition, $\sigma_\mathrm{h}$ is a plane of symmetry perpendicular to the principal axis of rotation. However, the plane you were talking about is parallel to the principal axis of rotation. Thus, it is not a $\sigma_\mathrm{h}$ plane. Nonetheless, it is a $\sigma_\mathrm{v}$ plane, by definition.
{ "domain": "chemistry.stackexchange", "id": 12181, "tags": "inorganic-chemistry, coordination-compounds, symmetry" }
How to determine interaction of photon with a hydrocarbon Bixin
Question: Bixin is an apocarotenoid found in annatto with 25 carbons with carboxylic acids at each end of the chain. It is an insoluble orange red pigment. I am interested in studying photon interaction with this molecule. Please give me a head start on what to read. It is a red pigment, therefore it absorbs photons below 670 nm or so. How can I determine if this molecule allows luminescence in any degree? Electronic transitions? or just any refraction or scattering. Answer: In a first step I would widen the available search criteria. The English Wikipedia entry presents the chemical name, CAS number, SMILES code (copy/paste), and PubChem and ChemSpider IDs. These identifiers may become helpful in the subsequent searches in other databases. Nor would I hesitate (as long as my web browser allows proper display and there are at least some words barely understood in reading) to access the other versions (Spanish, Portuguese, ...), too. In the public libraries of the US NIH, just the keyword "bixin" yields 150 (pmc) or 116 (pubmed) hits, which after some narrowing yield entries like this abstract in Spectrochimica Acta or this about triplet states of bixin, published in J. Agric. Food Chem. While the British ChemSpider does not yield spectra directly, the literature references listed by this database include even surprising finds like doi 10.1039/C4CS00309H (2015ChemSocRev3244) about Vegetable-based dye-sensitized solar cells. Do not omit collections like ResearchGate, as here, or Figshare. If your institute has subscriber access, chemical databases like Reaxys (Elsevier) or Scifinder Scholar (ACS) may indicate primary literature including spectroscopic data, too. No longer chemistry/physics centred, a search in Thomson Reuters' Web of Science, and the more engineering-oriented Scopus (by Elsevier) may round out the picture.
{ "domain": "chemistry.stackexchange", "id": 8420, "tags": "organic-chemistry, spectroscopy, photochemistry" }
How does water evaporate if it doesn't boil?
Question: When the sun is out after a rain, I can see what appears to be steam rising off a wooden bridge nearby. I'm pretty sure this is water turning into a gas. However, I thought water had to reach 100 degrees C to be able to turn into a gas. Is there an edge case, for small amounts of water perhaps, that allows it to evaporate? Answer: Evaporation is a different process to boiling. The first is a surface effect that can happen at any temperature, while the latter is a bulk transformation that only happens when the conditions are correct. Technically the water is not turning into a gas, but random movement of the surface molecules allows some of them enough energy to escape from the surface into the air. The rate at which they leave the surface depends on a number of factors - for instance the temperature of both air and water, the humidity of the air, and the size of the surface exposed. When the bridge is 'steaming': the wood is marginally warmer than the air (due to the sun shine), the air is very humid (it has just been raining) and the water is spread out to expose a very large surface area. In fact, since the air is cooler and almost saturated with water, the molecules of water are almost immediately condensing into micro-droplets in the air - which is why you can see them. BTW - As water vapour is a gas, it is completely transparent. If you can see it then it is steam, which consists of tiny water droplets (basically water vapour that has condensed). Consider a kettle boiling - the white plume only occurs a short distance above the spout. Below that it is water vapour, above it has cooled into steam. Steam disappears after a while, as it has evaporated once again.
{ "domain": "physics.stackexchange", "id": 8962, "tags": "temperature, everyday-life, water, evaporation" }
Inertial frames
Question: I'm just starting my study of relativity, and I have a rough understanding of the connection between inertial frames, newton's laws, and galilean transformations, but I'd probably benefit more if someone could spell out clearly what is taken as an assumption/axiom in classical mechanics (newtonian vs special relativity), and what is implied. I have a lot of loose information, and it would really help if someone could tie it all together. I've heard that inertial frames are frames within which Newton's Laws hold. Now my textbook (classical mechanics, taylor), says that Newton's first law is implied by the second, and this first law is just used to determine which frames are inertial. So if an object doesn't suddenly accelerate without the influence of a force, you're in an inertial frame. So suppose the first law holds in a particular frame. How does it follow that the second and third laws also hold in that frame? Wikipedia says that both newtonian mechanics and special relativity assume equivalence of inertial frames. But what does "equivalent" mean in this context? Any frame moving with constant velocity with respect to an inertial frame is also an inertial frame. I know that if frame S is inertial and observes a force F, and F' is the respective force when viewed from S' (which moves at constant velocity with respect to S), then F'=F. This is stated as "newton's second law is conserved under a galilean transformation", but I'm not sure why. When demonstrating F=F', we assume F=ma in S and F'=ma' in S', so it seems like we assume the second law is true in both frames and simply show that F=F'. Like I said, I know it's a lot of loose info, but I'd really appreciate it if someone could clarify/tie together everything. Answer: 1) Definition: An inertial frame of reference is a frame of reference where Newton's first law applies (uniform motion in the absence of external forces). 
Now if we have other frames of reference that are moving relative to this inertial frame with uniform relative velocities, then all the others are also called inertial frames of reference. 2) Transformation between inertial reference frames: In Newtonian mechanics, the laws of physics are invariant under Galilean transformations, while in special relativity they are invariant under Lorentz transformations. The latter reduces to the former in the classical limit.
Are multi-qubit control RX gates scaling exponentially?
Question: In https://arxiv.org/pdf/quant-ph/0303063.pdf a method is shown for implementing a multi-qubit controlled phase shift gate that scales exponentially with n. Are there new methods to implement these gates in polynomial time? And does anybody know if there is a paper describing the method for implementing a multi-qubit controlled gate that Qiskit uses for its MCMT gate? https://qiskit.org/documentation/stubs/qiskit.circuit.library.MCMT.html Answer: An $n$-qubit controlled phase gate with error $\epsilon$ takes $O(n + \lg \frac{1}{\epsilon})$ gates to achieve. The $O(n)$ dependence is easiest to understand in the case where you have $n$ ancillae: The $O(\lg \frac{1}{\epsilon})$ dependence is from the need to decompose the single-qubit phase rotation into the gate set that is actually supported, e.g. using repeat-until-success circuits. Only a single ancilla is actually required. And if you're willing to increase the cost to $O(n \cdot \lg \frac{1}{\epsilon})$ and use $n$ arbitrary single-qubit phase rotations instead of one, then no ancillae are needed at all.
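Independent of how it is decomposed, the gate being discussed is easy to state: an $n$-qubit controlled phase gate multiplies by $e^{i\phi}$ exactly those amplitudes whose selected qubits are all 1. A small statevector sketch of that action (the function and variable names are my own illustration, not from the paper or from Qiskit):

```python
import cmath

def mc_phase(state, qubits, phi):
    """Multiply by e^(i*phi) every amplitude whose basis-state index
    has a 1 bit at all of the given qubit positions."""
    mask = 0
    for q in qubits:
        mask |= 1 << q
    return [amp * cmath.exp(1j * phi) if (i & mask) == mask else amp
            for i, amp in enumerate(state)]

# 3 qubits in a uniform superposition: only |111> (index 7) is affected.
n = 3
state = [1 / 2 ** (n / 2)] * 2 ** n
out = mc_phase(state, [0, 1, 2], cmath.pi)  # phi = pi: a doubly-controlled Z
```

For $\phi=\pi$ this is the multi-controlled $Z$; the decompositions described in the answer reproduce exactly this action using one- and two-qubit gates.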
{ "domain": "quantumcomputing.stackexchange", "id": 2510, "tags": "quantum-gate, resource-request" }
Explaining frame of reference
Question: I have trouble understanding the expressions "frame of reference", "inertial frame" and "rest frame". So if we have two particles $A$ and $B$ flying with different speeds, the rest frame for $A$ and $B$ is not the same. But is the inertial frame the same? Or would the inertial frame be the same if we had a third fixed point $C$ and we observed them both from $C$? For me it seems logical to have $3$ frames of reference: from the point of view of $A$, from the point of view of $B$, and from an independent point of view $C$. Is this correct? Answer: You write about the rest frame and the inertial frame, but there are infinitely many such frames. The more dimensions we have, the more possibilities. Even with only 1 dimension, frames can differ in the position of their origin, in their speed, their acceleration, and higher rates of change of position. A "rest frame" is one in which a particular object is at rest. Unless we are restricted to motion in 1 dimension, its axes can point in any direction, so there is not one "rest frame" for each object. An "inertial reference frame" is one in which Newton's Laws of motion are true: in particular, an object not acted on by any forces remains at rest or moves in a straight line with constant speed. Any reference frame which is moving with constant velocity relative to an inertial reference frame is also an inertial reference frame. A "rest frame" will not be an "inertial frame" if the object is accelerating, which includes rotating in a circle or spinning on its axis. It is not clear what you are asking in your last paragraph. Even with 3 points or objects, there are infinitely many possible frames of reference.
{ "domain": "physics.stackexchange", "id": 40722, "tags": "special-relativity, classical-mechanics" }
Publish LaserScan from Arduino
Question: Hello, I am using an arduino Mega to publish LaserScan message. But I have a weird result, when i run this code: #include <ros.h> #include <sensor_msgs/LaserScan.h> ros::NodeHandle nh; // Laser Scan sensor_msgs::LaserScan lidar_msg; ros::Publisher lidar_pub("/laser_scan", &lidar_msg); float ranges[10] = {0}; float intensities[10] = {0}; void setup() { // Initialize ROS node handle, advertise and subscribe the topics nh.initNode(); nh.getHardware()->setBaud(57600); nh.advertise(lidar_pub); // Set LaserScan Definition lidar_msg.header.frame_id = "lidar"; lidar_msg.angle_min = 0.0; // start angle of the scan [rad] lidar_msg.angle_max = 3.14*2; // end angle of the scan [rad] lidar_msg.angle_increment = 3.14*2/360; // angular distance between measurements [rad] lidar_msg.range_min = 0.3; // minimum range value [m] lidar_msg.range_max = 50.0; // maximum range value [m] } void loop(){ // simple loop to generate values for (int i=0 ; i<10 ; ++i){ ranges[i] = 1.0*i; } lidar_msg.ranges = ranges; lidar_msg.header.stamp = nh.now(); lidar_pub.publish(&lidar_msg); nh.spinOnce(); } Then, I launch rosserial node to start serial communication: $ rosrun rosserial_python serial_node.py /dev/ttyUSB2 [INFO] [1519508591.451838]: ROS Serial Python Node [INFO] [1519508591.468653]: Connecting to /dev/ttyUSB2 at 57600 baud [INFO] [1519508593.624826]: Note: publish buffer size is 512 bytes [INFO] [1519508593.625218]: Setup publisher on /laser_scan [sensor_msgs/LaserScan] But when, i echo the topic, I didn't get any values in ranges: --- header: seq: 112 stamp: secs: 1519508602 nsecs: 77765016 frame_id: "lidar" angle_min: 0.0 angle_max: 6.28000020981 angle_increment: 0.0174444448203 time_increment: 0.0 scan_time: 0.0 range_min: 0.300000011921 range_max: 50.0 ranges: [] intensities: [] --- Any idea ? 
matt Originally posted by mattMGN on ROS Answers with karma: 78 on 2018-02-24 Post score: 1 Answer: Did you perhaps forget to set the ranges_length and intensities_length fields to the nr of elements in the arrays? That is required with rosserial. See #q265037 and #q229621 for related questions. According to [wiki/rosserial/Overview/Messages](http://wiki.ros.org/rosserial/Overview - Messages): Also be aware that the arrays in the messages are not defined as vector objects. Thus you have to predefine the array and then pass it as pointer to the message. To determine the end of the array each one has an auto-generated integer companion with the same name and the suffix _length. And also wiki/rosserial/Overview - Limitations - Arrays. Originally posted by gvdhoorn with karma: 86574 on 2018-02-25 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by mattMGN on 2018-02-25: ranges_length and intensities_length were the point ! Thanks you Comment by kartiksoni on 2022-09-02: hey can u show me where and how exactly did u add the ranges_length and intensities_length? Thank you.
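Concretely, for the 10-element arrays in the question's sketch, the fix is to set the auto-generated `_length` companion fields before publishing. This is a fragment of the original Arduino `loop()`, not a complete sketch:

```cpp
// in loop(), before publishing:
lidar_msg.ranges = ranges;
lidar_msg.ranges_length = 10;            // rosserial needs the element count
lidar_msg.intensities = intensities;
lidar_msg.intensities_length = 10;
lidar_msg.header.stamp = nh.now();
lidar_pub.publish(&lidar_msg);
```

With the length fields set, the serialized message carries all 10 range values instead of an empty array.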
{ "domain": "robotics.stackexchange", "id": 30144, "tags": "arduino, laser, ros-kinetic, rosserial-arduino" }
Why are geons unstable? Are there other problems with geons?
Question: I read in various places geons are "generally considered unstable." Why? How solid is this reasoning? Is the reason geons are not studied much anymore because we can't make more progress without better GR solutions or a better theory of quantum gravity, or is it because it really is a failed theory with fundamental problems (other than the unproven stability question)? Answer: The stability argument is as follows--- the Geon system will have some mass, and it is made out of massless fields orbiting in closed orbits, so if you make the geon a little smaller with the same total energy, you expect the gravity to win and the massless fields to collapse into a black hole, and if you make the geon a little bigger, you expect the massless stuff to disperse to infinity. This argument is hard to make rigorous, because you need to find a way to rescale the nonlinear gravitational theory. So Wheeler studied this situation extensively, with the hope of finding a stable Geon. He didn't find one, and even if there were one, we already have a good model of elementary particles in the black hole solutions and their quantum counterparts, so it is not clear that such a solution would be useful. But it is a strangely neglected field. Perhaps there is an easy argument that establishes instability of all geons, but it is going to be tough, because the Geons can make arbitrarily complicated links of light going through each other, pulling each other into stable orbits.
{ "domain": "physics.stackexchange", "id": 2233, "tags": "general-relativity, black-holes, quantum-gravity, quantum-electrodynamics" }
Regular of language of all words of length 3
Question: Consider the language $$L = \{ x \in \{0,1\}^* \mid |x| = 3 \}.$$ I think the above language is regular. A DFA can be used to determine the above language. Am I correct? Is the above language regular? If this language $L$ is regular, then it should satisfy the pumping lemma. Then there exist $w = xyz$, where $xy^nz \in L$ for all $n \ge 0$. But on the other hand, if we pump more letters then the resulting string will not be in the language. The language $L$ only contains words of length 3. The pumping lemma states that for every regular language there exists an integer $p$, such that string $w$ of length at least $p$ can be written as $w = xyz$ and $y$ can be pumped. Here are my doubts. Is this language $L$ regular? If so, does it satisfy the pumping lemma? The pumping lemma states that every regular language has a pumping length $p \ge 1$. Does this language not have one? Answer: Every finite language is regular. If $L$ is a finite language and $p$ is larger than the length of all words in $L$, then $L$ satisfies the pumping lemma with the constant $p$. Indeed, every word in $L$ of length at least $p$ can be pumped (vacuously).
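The DFA the asker has in mind is tiny — a quick sketch of my own (not from the original exchange), with states 0–3 counting the symbols read so far and a fifth dead state for anything longer:

```python
def in_L(w):
    """DFA for L = { x in {0,1}* : |x| = 3 }.
    States 0..3 count the symbols read; state 4 is a dead (trap) state."""
    state = 0
    for _ in w:  # every input symbol, 0 or 1, takes the same transition
        state = min(state + 1, 4)
    return state == 3  # state 3 is the only accepting state
```

Since the automaton has finitely many states, $L$ is regular; and with pumping constant $p = 4$, larger than the length of every word in $L$, the lemma holds vacuously, exactly as the answer says.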
{ "domain": "cs.stackexchange", "id": 16495, "tags": "regular-languages, pumping-lemma" }
Robot Localization ekf_node: frame ID "camera_imu_optical_frame" does not exist
Question: Hello, I am new to ROS, and I was trying to use ROS2 to implement SLAM with RealSense Camera L515. I tried to search for others with similar issues, but haven't had any success. Operating System & Version: Linux (Ubuntu 20.04) on Jetpack 5.01 ROS2 Distro: Foxy Kernel Version: 5.10.65-tegra Platform: NVIDIA Jetson Xavier NX I'm working to set up odometry for my robot that utilizes the L515 and its internal imu. I don't have my wheel odometry set up yet, but I wanted to see if I could run the ekf_node from the robot localization package with just the imu first, but I encountered a barrage of warnings when I launched the node in conjunction with the realsense camera node. This message was repeatedly printed in the terminal: [ekf_node-1] Warning: Invalid frame ID "camera_imu_optical_frame" passed to canTransform argument source_frame - frame does not exist [ekf_node-1] at line 133 in /tmp/binarydeb/ros-foxy-tf2-0.13.13/src/buffer_core.cpp My ekf.yaml config file is as follows: ### ekf config file ekf_filter_node: ros__parameters: frequency: 30.0 sensor_timeout: 0.1 two_d_mode: false transform_time_offset: 0.0 transform_timeout: 0.0 print_diagnostics: true debug: false debug_out_file: /path/to/debug/file.txt publish_tf: true publish_acceleration: false map_frame: map # Defaults to "map" if unspecified odom_frame: odom # Defaults to "odom" if unspecified base_link_frame: camera_link # Defaults to "base_link" if unspecified world_frame: odom # Defaults to the value of odom_frame if unspecified odom0: example/odom odom0_config: [true, true, false, false, false, false, false, false, false, false, false, true, false, false, false] odom0_queue_size: 2 odom0_nodelay: false odom0_differential: false odom0_relative: false odom0_pose_rejection_threshold: 5.0 odom0_twist_rejection_threshold: 1.0 odom1: example/odom2 odom1_config: [false, false, true, false, false, false, false, false, false, false, false, true, false, false, false] odom1_differential: false 
odom1_relative: true odom1_queue_size: 2 odom1_pose_rejection_threshold: 2.0 odom1_twist_rejection_threshold: 0.2 odom1_nodelay: false pose0: example/pose pose0_config: [true, true, false, false, false, false, false, false, false, false, false, false, false, false, false] pose0_differential: true pose0_relative: false pose0_queue_size: 5 pose0_rejection_threshold: 2.0 # Note the difference in parameter name pose0_nodelay: false twist0: example/twist twist0_config: [false, false, false, false, false, false, true, true, true, false, false, false, false, false, false] twist0_queue_size: 3 twist0_rejection_threshold: 2.0 twist0_nodelay: false imu0: camera/imu imu0_config: [false, false, false, false, false, false, false, false, false, false, false, true, true, false, false] imu0_nodelay: false imu0_differential: false imu0_relative: true imu0_queue_size: 5 imu0_pose_rejection_threshold: 0.8 # Note the difference in parameter names imu0_twist_rejection_threshold: 0.8 # imu0_linear_acceleration_rejection_threshold: 0.8 # imu0_remove_gravitational_acceleration: true use_control: true stamped_control: false control_timeout: 0.2 control_config: [true, false, false, false, false, true] acceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 3.4] deceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 4.5] acceleration_gains: [0.8, 0.0, 0.0, 0.0, 0.0, 0.9] deceleration_gains: [1.0, 0.0, 0.0, 0.0, 0.0, 1.0] process_noise_covariance: [0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 
0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.015] initial_estimate_covariance: [1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-9] ros2 run tf2_tools view_frames.py reveals that indeed, there is no camera_imu_optical_frame: My realsense camera node is run with the following parameters: configurable_parameters = [{'name': 'camera_name', 'default': 'camera', 'description': 'camera 
unique name'}, {'name': 'serial_no', 'default': "''", 'description': 'choose device by serial number'}, {'name': 'usb_port_id', 'default': "''", 'description': 'choose device by usb port id'}, {'name': 'device_type', 'default': "''", 'description': 'choose device by type'}, {'name': 'config_file', 'default': "''", 'description': 'yaml config file'}, {'name': 'unite_imu_method', 'default': "2", 'description': '[0-None, 1-copy, 2-linear_interpolation]'}, {'name': 'json_file_path', 'default': "''", 'description': 'allows advanced configuration'}, {'name': 'log_level', 'default': 'info', 'description': 'debug log level [DEBUG|INFO|WARN|ERROR|FATAL]'}, {'name': 'output', 'default': 'screen', 'description': 'pipe node output [screen|log]'}, {'name': 'depth_module.profile', 'default': '0,0,0', 'description': 'depth module profile'}, {'name': 'enable_depth', 'default': 'true', 'description': 'enable depth stream'}, {'name': 'rgb_camera.profile', 'default': '0,0,0', 'description': 'color image width'}, {'name': 'enable_color', 'default': 'true', 'description': 'enable color stream'}, {'name': 'enable_infra1', 'default': 'false', 'description': 'enable infra1 stream'}, {'name': 'enable_infra2', 'default': 'false', 'description': 'enable infra2 stream'}, {'name': 'infra_rgb', 'default': 'false', 'description': 'enable infra2 stream'}, {'name': 'tracking_module.profile', 'default': '0,0,0', 'description': 'fisheye width'}, {'name': 'enable_fisheye1', 'default': 'true', 'description': 'enable fisheye1 stream'}, {'name': 'enable_fisheye2', 'default': 'true', 'description': 'enable fisheye2 stream'}, {'name': 'enable_confidence', 'default': 'true', 'description': 'enable depth stream'}, {'name': 'gyro_fps', 'default': '0', 'description': "''"}, {'name': 'accel_fps', 'default': '0', 'description': "''"}, {'name': 'enable_gyro', 'default': 'true', 'description': "''"}, {'name': 'enable_accel', 'default': 'true', 'description': "''"}, {'name': 'enable_pose', 'default': 'true', 
'description': "''"}, {'name': 'pose_fps', 'default': '200', 'description': "''"}, {'name': 'pointcloud.enable', 'default': 'false', 'description': ''}, {'name': 'pointcloud.stream_filter', 'default': '2', 'description': 'texture stream for pointcloud'}, {'name': 'pointcloud.stream_index_filter','default': '0', 'description': 'texture stream index for pointcloud'}, {'name': 'enable_sync', 'default': 'true', 'description': "''"}, {'name': 'align_depth.enable', 'default': 'false', 'description': "''"}, {'name': 'colorizer.enable', 'default': 'false', 'description': "''"}, {'name': 'clip_distance', 'default': '-2.', 'description': "''"}, {'name': 'linear_accel_cov', 'default': '0.01', 'description': "''"}, {'name': 'initial_reset', 'default': 'true', 'description': "''"}, {'name': 'allow_no_texture_points', 'default': 'false', 'description': "''"}, {'name': 'ordered_pc', 'default': 'false', 'description': ''}, {'name': 'calib_odom_file', 'default': "''", 'description': "''"}, {'name': 'topic_odom_in', 'default': "''", 'description': 'topic for T265 wheel odometry'}, {'name': 'tf_publish_rate', 'default': '0.0', 'description': 'Rate of publishing static_tf'}, {'name': 'diagnostics_period', 'default': '0.0', 'description': 'Rate of publishing diagnostics. 
0=Disabled'}, {'name': 'decimation_filter.enable', 'default': 'false', 'description': 'Rate of publishing static_tf'}, {'name': 'rosbag_filename', 'default': "''", 'description': 'A realsense bagfile to run from as a device'}, {'name': 'depth_module.exposure.1', 'default': '7500', 'description': 'Initial value for hdr_merge filter'}, {'name': 'depth_module.gain.1', 'default': '16', 'description': 'Initial value for hdr_merge filter'}, {'name': 'depth_module.exposure.2', 'default': '1', 'description': 'Initial value for hdr_merge filter'}, {'name': 'depth_module.gain.2', 'default': '16', 'description': 'Initial value for hdr_merge filter'}, {'name': 'wait_for_device_timeout', 'default': '-1.', 'description': 'Timeout for waiting for device to connect (Seconds)'}, {'name': 'reconnect_timeout', 'default': '6.', 'description': 'Timeout(seconds) between consequtive reconnection attempts'}, and when I run ros2 topic echo /camera/imu I get the following output: header: stamp: sec: 1657205548 nanosec: 714173696 frame_id: camera_imu_optical_frame orientation: x: 0.0 y: 0.0 z: 0.0 w: 0.0 orientation_covariance: - -1.0 - 0.0 - 0.0 - 0.0 - 0.0 - 0.0 - 0.0 - 0.0 - 0.0 angular_velocity: x: -0.001745329238474369 y: 0.001745329238474369 z: -0.001745329238474369 angular_velocity_covariance: - 0.01 - 0.0 - 0.0 - 0.0 - 0.01 - 0.0 - 0.0 - 0.0 - 0.01 linear_acceleration: x: -0.019613299518823624 y: -9.639936447143555 z: 0.16671304404735565 linear_acceleration_covariance: - 0.01 - 0.0 - 0.0 - 0.0 - 0.01 - 0.0 - 0.0 - 0.0 - 0.01 Notice the frame_id is notated as camera_imu_optical_frame. So my question is how can I help the ekf_node see my imu data when the frames are currently split into camera_accel_optical_frame and camera_gyro_optical_frame? Thank you! 
Originally posted by LiquidTurtle1 on ROS Answers with karma: 15 on 2022-07-11 Post score: 0 Answer: The EKF finds the imu data that you want, looks at the transform frame of that imu data, tries to look up where it is located in the world, fails, and warns you. You need to tell the system where your sensors are located. This is done via transforms. I think the method of choice is to provide a URDF for your robot, but I'm not sure. There are tutorials available that you could use. Originally posted by Per Edwardsson with karma: 501 on 2022-07-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by LiquidTurtle1 on 2022-07-12: I think you are right. Within the ekf config file it says "# make sure something else is generating the odom->base_link transform." When I publish a static transform it works for 1 frame and the stops, so I should expect a dynamic transform to do the trick. Thank you!
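As a quick single-robot workaround before writing a proper URDF, a static transform can provide the missing link in the tf tree. The command below assumes (incorrectly in general — the offsets and rotation must be measured for the real camera) that the IMU frame coincides with `camera_link`, the `base_link_frame` in the question's config:

```shell
# Foxy argument order: x y z yaw pitch roll parent_frame child_frame
ros2 run tf2_ros static_transform_publisher 0 0 0 0 0 0 camera_link camera_imu_optical_frame
```

Once the EKF can resolve `camera_imu_optical_frame` through the tree, the warning stops.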
{ "domain": "robotics.stackexchange", "id": 37839, "tags": "navigation, robot-localization" }
What is the relationship between the discrete time and continuous time variables?
Question: While going through Proakis's Digital Signal Processing (page 21), he stated that if a continuous-time signal $~x(t)~$ has been sampled every $~T~$ seconds to produce a discrete-time signal $~x(n)~$, then the relationship between the variables $t$ and $n$ is: $$ t=nT $$ Question: in the LHS we have a continuous variable, whereas in the RHS we have a variable that can only take step sizes of $T$, so clearly $t$ and $nT$ do not span the same range; how then is the formula above justified? Answer: It's about the relationship between the discrete time signal $x_d[n]$ and the continuous-time signal $x_c(t)$: $$x_d[n]=x_c(nT)\tag{1}$$ So you formally replace the variable $t$ by $nT$, but this just means that you sample the continuous-time signal at sample instants $t_n=nT$. So $t=nT$ is only true for the values of $t$ that we're interested in, and these are the discrete values $t_n=nT$.
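A small numerical sketch of the same point (the signal and rates are arbitrary choices of mine): the discrete-time signal stores only the values of $x_c(t)$ at the instants $t_n = nT$, and nothing in between:

```python
import math

fs = 8.0          # sampling rate in Hz (arbitrary example)
T = 1.0 / fs      # sampling period in seconds

def x_c(t):
    """Continuous-time signal: a 1 Hz sine."""
    return math.sin(2 * math.pi * 1.0 * t)

# x_d[n] = x_c(nT): one stored value per integer index n
x_d = [x_c(n * T) for n in range(8)]
```

Here `x_d[2]` is the value of the sine at $t = 2T = 0.25$ s; values of $x_c$ between the sample instants are simply never represented.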
{ "domain": "dsp.stackexchange", "id": 7003, "tags": "time-domain" }
How can I convert the pseudo-code that solves maximum subarray problem to Python code?
Question: I'm reading Algorithm Design and Application, by Michael T. Goodrich and Robert Tamassia, and sometimes the pseudo-code in this book doesn't make sense at all. At chapter one they show us 3 solutions for the famous maximum subarray problem. I understood completely the first and second solutions, but the third one (the fastest) seems impossible to understand. The pseudo-code for this solution is this: Algorithm MaxsubFastest(A): Input: An n-element array A of numbers, indexed from 1 to n. Output: The maximum subarray sum of array A. M₀ ← 0 // the initial prefix maximum for t ← 1 to n do Mₜ ← max{0, Mₜ₋₁ + A[t]} m ← 0 // the maximum found so far for t ← 1 to n do m ← max{m, Mₜ } return m What I don't get about this pseudo-code is this: M₀ is a variable that receives zero at the beginning, right? But it is never called again... so what is happening here? How the Mₜ ← max{0, Mₜ₋₁ + A[t]} part works at all? Is it creating a lot of Mₜ variables, one for every t value? The "max" part is something like a function? If that is so, doesn't it interfere with our algorithm Big-Oh notation? There are two loops that seems to "talk" to each other, otherwise they run separately. I think that a good way to end my questioning is to see this code in a programming language I know (Javascript or Python, preferably). So, my question: how can I implement this pseudo-code in Python? Answer: How the $M_t ← \max\{0, M_{t-1} + A[t]\}$ part works at all? Is it creating a lot of $M_t$ variables, one for every t value? Yes, it is creating variable $M_t$ for every $t$. Together with $M_0$, there are $n+1$ of them. When $t=1$, the statement is $M_1 ← \max\{0, M_0 + A[1]\}$. Note that $M_0$ is used. The "max" part is something like a function? If that is so, doesn't it interfere with our algorithm Big-Oh notation? Yes, $\max$ is a function of two numbers that returns the bigger one between the two numbers or anyone of them if they are equal. 
It is assumed that calling this function takes constant time. A constant factor does not affect the big-$O$ notation. There are two loops that seems to "talk" to each other, otherwise they run separately. Yes. The first loop creates/computes a list of $M_t$. The second loop finds the maximum of them. Here is the corresponding code in Python. Note that the code uses $A[t-1]$ where the pseudocode uses $A[t]$, since indexing starts from $0$ in Python instead of $1$ in the pseudocode.

def MaxsubFastest(A):
    """Return the maximum sum of a subarray of A """
    n = len(A)
    M = [-1] * (n + 1)  # -1 stands for not set.
    M[0] = 0
    for t in range(1, n + 1):
        M[t] = max(0, M[t - 1] + A[t - 1])
    m = 0
    for t in range(1, n + 1):
        m = max(m, M[t])
    return m

As you might have suspected, we can reduce the usage of the $n+1$ $M_t$'s to a single variable $c$, which will hold the larger of 0 and the maximum sum of a subarray that ends at the current index of $A$. To enable that, we will merge the two loops into one, keeping $m$ as the maximum sum of any subarray considered so far.

def MaxsubFastestWithLeastSpace(A):
    """Return the maximum sum of a subarray of A """
    c = 0
    m = 0
    for t in range(1, len(A) + 1):
        # c is the maximum sum of a subarray that ends at index t-1
        # if that sum is greater than 0; otherwise c is 0.
        c = max(0, c + A[t - 1])
        # the maximum sum of a subarray that ends before index t
        m = max(m, c)
    return m
{ "domain": "cs.stackexchange", "id": 20449, "tags": "algorithms, python, pseudocode, maximum-subarray" }
Energy balance in a thermodynamic system
Question: I think I've missed this point in class, but I was wondering how to determine, for example if I have a turbine, what the energy balance would look like. Something like this: with $W$ as work, $h_1$ as the intake and $h_2$ as the exhaust enthalpy of the turbine; assuming an adiabatic turbine and no change in internal energy, we get $m(h_1-h_2)-W=0$ or $\frac Wm=h_1-h_2$. After all this, my question is: how do you determine the position of $h_1$ and $h_2$ in the equation? I know that for a pump it would be the opposite, meaning $h_2-h_1$ instead of $h_1-h_2$. So say I have a condenser and a cooler... how would I determine the energy balance equation for those (assuming a steady-state system)? I hope I've been clear enough in my question! Thanks in advance! Answer: You simply need to keep your signs straight. By convention, work done by the system (e.g., turbine work) is considered positive and work done on the system (e.g., pump work) is considered negative. In the case of heat, heat into the system is positive and heat out is negative. With these conventions, the first law is $$\Delta U=Q-W$$ The enthalpy of steam encompasses both internal energy and the product of pressure and volume. Fig 1 below shows the energy balance for turbines, pumps and compressors. For turbines $h_{1}>h_{2}$. For pumps and compressors $h_{2}>h_{1}$. Fig 2 shows it for boilers, condensers and evaporators. For boilers and evaporators, $h_{2}>h_{1}$. For condensers $h_{1}>h_{2}$. Hope this helps.
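A worked number may help (the values are invented for illustration). For a turbine with inlet enthalpy $h_1 = 3200\ \text{kJ/kg}$ and outlet enthalpy $h_2 = 2700\ \text{kJ/kg}$, $$\frac{W}{m} = h_1 - h_2 = 3200 - 2700 = 500\ \text{kJ/kg} > 0,$$ i.e. positive work done by the system, as expected for a turbine. For a pump taking water from $h_1 = 190\ \text{kJ/kg}$ to $h_2 = 200\ \text{kJ/kg}$, the same formula gives $W/m = h_1 - h_2 = -10\ \text{kJ/kg} < 0$: work is done on the system, consistent with the sign convention above.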
{ "domain": "physics.stackexchange", "id": 56439, "tags": "thermodynamics, energy, flow" }
Karnaugh map - assign variables to the inputs?
Question: I drew the map on the right, but what I drew doesn't work for what the question is asking me. I think I did something very wrong, and I don't really understand what this question is asking me. Am I supposed to rearrange the binary inputs somehow? Answer: I assume $w,x,y,z$ are inputs to the circuit given. What we need is $$\sum_{w,x,y,z}(2,3,8,9)$$ Put this in a Karnaugh map and you will get the simplified equation $w^\prime x^\prime y + wx^\prime y^\prime$. Now, let $a,b,c,d$ be the inputs to the circuit that you have given. The output of the first OR gate will be $a+b$; let it be called $f$. The output of the second OR gate will be $c+d$; let it be called $s$. The output of the XOR gate is $fs^\prime+sf^\prime$. Substitute the values of $f$ and $s$ in the equation and use the equality $(\alpha+\beta)^\prime=\alpha^\prime \beta^\prime$. The output will be $$a^\prime b^\prime\cdot(c+d)+c^\prime d^\prime\cdot(a+b)$$ Compare this with the equation that we require and we find that $$a=w$$$$b=x$$$$c=x$$$$d=y$$ This is the order in which the inputs should be given. The Karnaugh map that you have drawn is correct, assuming that $w,x,y,z$ are the inputs to the circuit.
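The simplification can be sanity-checked by brute force over all 16 input rows (a quick check of my own, not part of the original answer):

```python
# Verify that w'x'y + wx'y' covers exactly minterms {2, 3, 8, 9},
# with the usual bit weighting: minterm number = 8w + 4x + 2y + z.
def f(w, x, y, z):
    return ((not w) and (not x) and y) or (w and (not x) and (not y))

minterms = {8 * w + 4 * x + 2 * y + z
            for w in (0, 1) for x in (0, 1)
            for y in (0, 1) for z in (0, 1)
            if f(w, x, y, z)}
# minterms contains exactly 2, 3, 8 and 9
```

The first product term contributes minterms 2 and 3 (w=0, x=0, y=1, z free) and the second contributes 8 and 9 (w=1, x=0, y=0, z free), matching the sum-of-minterms specification.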
{ "domain": "cs.stackexchange", "id": 2448, "tags": "logic" }
Computing the item price and vat for a given pricing, order, and exchange rate
Question: How can I improve the readability of the below function using a list comprehension? Also, is there a way to improve items() performance? pricing = {'prices': [{'product_id': 1, 'price': 599, 'vat_band': 'standard'}, {'product_id': 2, 'price': 250, 'vat_band': 'zero'}, {'product_id': 3, 'price': 250, 'vat_band': 'zero'}], 'vat_bands': {'standard': 0.2, 'zero': 0}} order = {'order': {'id': 12, 'items': [{'product_id': 1, 'quantity': 1}, {'product_id': 2,'quantity': 5}]}} exchange_rate = 1.1 def items(): """ computes the item price and vat for a given pricing, order, and exchange rate. returns list of items dictionaries """ return [{'product_id': item['product_id'], 'quantity': item['quantity'], 'price': round(product['price'] * exchange_rate, 2), 'vat': round(pricing['vat_bands']['standard'] * product['price'] * exchange_rate, 2)} if product['vat_band'] == 'standard' else {'product_id': item['product_id'], 'quantity': item['quantity'], 'price': round(product['price'] * exchange_rate, 2), 'vat': 0} for item in order['order']['items'] for product in pricing['prices'] if item['product_id'] == product['product_id']] print(items()) Output: [{'product_id': 1, 'quantity': 1, 'price': 658.9, 'vat': 131.78}, {'product_id': 2, 'quantity': 5, 'price': 275.0, 'vat': 0}] Answer: As a general rule it is best not to nest comprehensions. It makes code hard to read. You are better off just writing a for loop and appending the results to a list or using a generator. Here are a couple rules with comprehensions that will make your code less brittle and easy for others to work with: Don't nest comprehensions. If a comprehension is too long for one line of code, don't use it. If you need an else, don't use a comprehension. Of course there are exceptions to these rules, but they are a good place to start. One of the reasons nested list comprehension is an issue is it often results in a exponential increase in computation needed. 
For each item in the order you have to loop through every product. This is not efficient. You want to go from O(n x m) to O(n + m). You should loop through products once and through order items once. You can see in the updated code below that I loop through the list of products and create a dictionary with the key as the product ID. This makes it so that, while looping through the order items, I can simply get the product by looking up the key. It is much more performant and readable.

pricing = {
    "prices": [
        {"product_id": 1, "price": 599, "vat_band": "standard"},
        {"product_id": 2, "price": 250, "vat_band": "zero"},
        {"product_id": 3, "price": 250, "vat_band": "zero"},
    ],
    "vat_bands": {"standard": 0.2, "zero": 0},
}
order = {
    "order": {
        "id": 12,
        "items": [{"product_id": 1, "quantity": 1}, {"product_id": 2, "quantity": 5}],
    }
}
exchange_rate = 1.1

def calculate_exchange_rate(price, rate=None):
    if rate is None:
        rate = exchange_rate
    return round(price * rate, 2)

def items():
    """
    computes the item price and vat for a given pricing, order, and exchange rate.
    returns list of items dictionaries
    """
    item_list = []
    products = {p["product_id"]: p for p in pricing["prices"]}
    for item in order["order"]["items"]:
        product = products.get(item["product_id"])
        vat = 0
        if product["vat_band"] == "standard":
            vat = pricing["vat_bands"]["standard"] * product["price"]
        item_list.append(
            {
                "product_id": item["product_id"],
                "quantity": item["quantity"],
                "price": calculate_exchange_rate(product["price"]),
                "vat": calculate_exchange_rate(vat),
            }
        )
    return item_list

print(items())
{ "domain": "codereview.stackexchange", "id": 38780, "tags": "python, performance, comparative-review, complexity" }
how to solve the frame ID setting warning?
Question: Do you have any idea how to correct the following warnings? [WARN] [1340092854.777455509]: Message from [/hokuyo] has a non-fully-qualified frame_id [laser]. Resolved locally to [/laser]. This is will likely not work in multi-robot systems. This message will only print once. [WARN] [1340092854.968655384]: Message from [/hector_mapping] has a non-fully-qualified frame_id [laser]. Resolved locally to [/laser]. This is will likely not work in multi-robot systems. This message will only print once. [ERROR][1340123076.274228430]: Trajectory Server: Transform from /map to /base_link failed: Unable to lookup transform, cache is empty, when looking up transform from frame [/base_link] to frame [/map] [ERROR] [1340123076.523439543]: Trajectory Server: Transform from /map to /base_link failed: Unable to lookup transform, cache is empty, when looking up transform from frame [/base_link] to frame [/map] Originally posted by jas on ROS Answers with karma: 33 on 2012-06-19 Post score: 0 Answer: Set the frame_id to /laser and the warnings will go away. Originally posted by tfoote with karma: 58457 on 2012-07-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by kedarm on 2013-10-03: Although I agree that this solution will work for single robot settings, I think a better and more robust solution is given here: https://code.ros.org/trac/ros-pkg/ticket/5511
{ "domain": "robotics.stackexchange", "id": 9860, "tags": "slam, navigation, joint-trajectory-controller, hector, base-laser" }
What is the difference between $\vert-\rangle$ and $\vert+\rangle$?
Question: I understand that a qubit can be represented in the form $$\vert\psi\rangle=\alpha \vert0\rangle+\beta\vert1\rangle$$ where $\alpha$ and $\beta$ are complex numbers, $\vert\alpha\vert^2$ and $\vert\beta\vert^2$ are the probabilities of measuring the qubit in the states $\vert0\rangle$ and $\vert1\rangle$, and $$\vert\alpha\vert^2+\vert\beta\vert^2=1$$ If $\alpha$ and $\beta$ were both equal to $\frac{1}{\sqrt2}$, the state would be an equal superposition, and a measurement would give 0 or 1 each with probability 1/2: $$\vert\psi\rangle=\frac{1}{\sqrt{2}}\vert0\rangle+\frac{1}{\sqrt{2}}\vert1\rangle$$ I can't find an explanation for the difference between $\vert-\rangle$ and $\vert+\rangle$ and what's the use of putting a negative sign for the $\vert1\rangle$ state. Answer: In quantum computing, when dealing with a single spin-1/2 particle, the conventional definitions for $|+\rangle$ and $|-\rangle$ are $$|+\rangle = \frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$$ $$|-\rangle = \frac{1}{\sqrt{2}}|0\rangle - \frac{1}{\sqrt{2}}|1\rangle$$ where $|0\rangle$ and $|1\rangle$ are the spin-up and spin-down configurations for a measurement along the $z$-axis. If you measure spin in the $z$-direction, the $|0\rangle$ state gives you a positive number, and the $|1\rangle$ state gives you a negative number. In this implementation, $|+\rangle$ and $|-\rangle$ are the spin-up and spin-down states along the $x$-axis. If you measure spin along the $x$-axis, the $|+\rangle$ state gives you a positive number, and the $|-\rangle$ state gives you a negative number. The relation between the two axes and the two sets of states is given by the way that the Pauli matrices interact, and the two observables (spin along the $z$-axis and spin along the $x$-axis) are related by the uncertainty principle, since the spin operators for those observables (the Pauli matrices $\sigma_z$ and $\sigma_x$) do not commute with each other.
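A quick numerical check of the last point (an editorial sketch using numpy, not part of the original answer): $|+\rangle$ and $|-\rangle$ are the $\pm1$ eigenvectors of $\sigma_x$, just as $|0\rangle$ and $|1\rangle$ are the $\pm1$ eigenvectors of $\sigma_z$.

```python
import numpy as np

# Computational (z) basis states |0> and |1>
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# Hadamard (x) basis states |+> and |->
plus = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)

# Pauli matrices for spin along z and x (up to a factor of hbar/2)
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

print(np.allclose(sigma_z @ zero, zero))     # |0> has eigenvalue +1 under sigma_z
print(np.allclose(sigma_x @ plus, plus))     # |+> has eigenvalue +1 under sigma_x
print(np.allclose(sigma_x @ minus, -minus))  # |-> has eigenvalue -1 under sigma_x
```

The minus sign is therefore not decorative: it is exactly what distinguishes the two orthogonal $x$-axis eigenstates.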
{ "domain": "physics.stackexchange", "id": 52780, "tags": "quantum-mechanics, quantum-information, conventions, notation, quantum-states" }
Is the $S$-matrix a scalar operator?
Question: The scattering operator $S$, which is defined to be the operator corresponding to the $S$-matrix, should be rotationally invariant. Does this imply that the $S$ operator is a scalar operator? Answer: In a typical collider experiment, two particles, generally in approximate momentum eigenstates at $t = -\infty$, are collided with each other, and the probability of finding particular outgoing momentum eigenstates at $t = +\infty$ is measured. In the Heisenberg picture the probability amplitude for the initial states $\vert i \rangle$ to evolve to the final states $\vert f \rangle$ is defined as $\langle f \vert S \vert i \rangle$, where the time evolution is put in the scattering or S-matrix. The S-matrix element $\langle f \vert S \vert i \rangle$ for $n$ asymptotic momentum eigenstates is given by the LSZ (Lehmann-Symanzik-Zimmermann) reduction formula, which for scalar quantum fields $\phi (x)$ states $$\langle f|S| i \rangle = \left[i \int d^4 x_1 \exp(-i p_1 x_1) (\Box_1 + m^2)\right] \cdots \left[i \int d^4 x_n \exp(i p_n x_n) (\Box_n + m^2)\right] \langle \Omega | T \{ \phi (x_1) \cdots \phi (x_n)\} | \Omega \rangle$$ where $-i$ in the exponent applies to initial states and $+i$ to final states. The LSZ formula is constructed from Lorentz covariant fields, hence the S-matrix element is an invariant.
{ "domain": "physics.stackexchange", "id": 69984, "tags": "quantum-field-theory, hilbert-space, operators, scattering, s-matrix-theory" }
Which biphenyl is optically active?
Question: Which biphenyl is optically active? I know that it can never be 1. I don't think it will be 4 either, as I read that it should be an ortho-substituted biphenyl. So, when I look at 2 and 3, I can't make out which one should be the correct answer. Should there always be two ortho substituents on each phenyl group for it to be optically active? Also, what if the two groups on the same phenyl are different: will it always be optically active in that case? Answer: Biphenyl 2 is the only optically active compound here. These stereoisomers are due to the hindered rotation about the 1,1'-single bond of the compound (Ref. 1). Biphenyl 3 is not optically active, because rotation about its 1,1'-single bond is only partially restricted. To illustrate this phenomenon, I depicted the following diagram: Note that compound 3 can rotate past the $\ce{I}$ and $\ce{H}$ atoms simultaneously, allowing the last $180^\circ$ of rotation, which is well illustrated in the diagram posted by Karsten Theis. References: Paul Newman, Philip Rutkin, Kurt Mislow, "The Configurational Correlation of Optically Active Biphenyls with Centrally Asymmetric Compounds. The Absolute Configuration of 6,6'-Dinitro-2,2'-diphenic Acid," J. Am. Chem. Soc. 1958, 80(2), 465-473 (https://doi.org/10.1021/ja01535a054).
{ "domain": "chemistry.stackexchange", "id": 13401, "tags": "organic-chemistry, aromatic-compounds, stereochemistry, molecular-structure, molecules" }
Is it required to input full matrix when using Qiskit HHL algorithm for sparse matrices?
Question: I am working with a very sparse matrix, and it seems inefficient to load the full matrix as input into the Qiskit HHL algorithm. Is it possible to input only the non-zero elements instead? I am working with a "clean" banded, tridiagonal matrix. Answer: At the moment this is not possible; however, in the near future it will be possible for the case of tridiagonal symmetric Toeplitz matrices. The reason for this is that Qiskit does not support QRAM or "black-box" access to the matrix elements, so to be able to partly specify the matrix one would need to develop specific methods that work with this limited information, which is only possible if the matrix has some structure.
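For context on why that structure helps (an editorial sketch with scipy, not Qiskit functionality): a symmetric tridiagonal Toeplitz matrix is fully specified by just two scalars, the diagonal value $a$ and the off-diagonal value $b$, and its eigenvalues are even known in closed form, $\lambda_k = a + 2b\cos\frac{k\pi}{n+1}$. That is exactly the kind of structure a special-cased method can exploit instead of loading all $n^2$ entries.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

a, b = 2.0, -1.0   # hypothetical diagonal / off-diagonal values
n = 8

# The whole n x n matrix is reconstructed from just (a, b, n):
A = diags([b, a, b], offsets=[-1, 0, 1], shape=(n, n), format='csc')

# Classical sparse solve, usable as a reference for any quantum solver:
x = spsolve(A, np.ones(n))

# Closed-form eigenvalues of a symmetric tridiagonal Toeplitz matrix:
k = np.arange(1, n + 1)
eigs = a + 2 * b * np.cos(k * np.pi / (n + 1))
print(np.allclose(np.sort(eigs), np.linalg.eigvalsh(A.toarray())))
```

Knowing the spectrum analytically is what makes dedicated Hamiltonian-simulation circuits for this matrix family feasible without QRAM.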
{ "domain": "quantumcomputing.stackexchange", "id": 1924, "tags": "qiskit, hhl-algorithm" }
Mapping enum to function - request / response library encapsulation
Question: I'm writing code for an embedded device - specifically, a camera. The idea is different camera manufacturers can implement the code so it properly works with their camera. Here, I'm encapsulating the request/response library we're using, because it's complicated to deal with, and we want to use ease of implementation as a selling point. Some of the requests require a response or parameters. Others, however, act like your TV would: you press a button, that sends a command code, and that triggers an action. No return code needed, no parameters. I'm gonna have a lot of functions like that. Relevant types provided by the library we're using: typedef unsigned char bool; typedef unsigned char uint8_t; My enum definition... typedef enum { //auth - 1xx //move - 2xx //snapshot - 3xx //stream - 4xx //zoom - 5xx //infrared - 6xx ZOOM_CAPABLE = 500, ZOOM_IN = 501, ZOOM_OUT = 502, ZOOM_STOP = 503, INFRARED_CAPABLE = 600, INFRARED_ON = 601, INFRARED_OFF = 602, WRITE_DEMO = 1000, READ_DEMO = 1001 } application_requestID; These values are shared by some javascript client that communicates with my embedded device. I have the authority to change the values and the javascript client. I'm developing a new system, so I don't need to adhere to any previously defined values. Currently it's ZOOM_IN, ZOOM_OUT, ZOOM_STOP, INFRARED_ON, INFRARED_OFF that are functions that require no input arguments and no output arguments. As you can see in the comments, I got more stuff planned. 
The functions look like this: bool canZoom();//ZOOM_CAPABLE void startZoomIn();//ZOOM_IN void startZoomOut();//ZOOM_OUT void stopZoom();//ZOOM_STOP And (as a demo), I have implemented them like this: static bool zoomingIn = false; static bool zoomingOut = false; static const uint8_t ZOOM_CAP = 200; static uint8_t currentZoom = 100; bool canZoom(){ return true; } void startZoomIn(){ zoomingIn = true; zoomingOut = false; } void startZoomOut(){ zoomingOut = true; zoomingIn = false; } void stopZoom(){ zoomingIn = false; zoomingOut = false; } void tick_cameraInterface(){ //do stuff here, like moving a camera? //this simulates the idea that between requests, a camera is constantly moving (until it is asked to stop) if (zoomingIn && currentZoom < ZOOM_CAP){ currentZoom++; } if (zoomingOut && currentZoom > 0){ currentZoom--; } } Now, the last section of code, the encapsulation of the responses and requests: application_event_result application_event(application_request* applicationRequest, buffer_read_t* readBuffer, buffer_write_t* writeBuffer){ if (applicationRequest->queryId == WRITE_DEMO){ if (!unabto_query_write_uint8(writeBuffer, getDemoValue())){ return AER_REQ_RSP_TOO_LARGE; } return AER_REQ_RESPONSE_READY; } else if (applicationRequest->queryId == READ_DEMO){ uint8_t x; if (!unabto_query_read_uint8(readBuffer, &x)){ return AER_REQ_TOO_SMALL; } setDemoValue(x); return AER_REQ_RESPONSE_READY; } else if (applicationRequest->queryId == ZOOM_CAPABLE){ if (!unabto_query_write_uint8(readBuffer, canZoom())){ return AER_REQ_RSP_TOO_LARGE; } return AER_REQ_RESPONSE_READY; } else if (applicationRequest->queryId == ZOOM_IN){ startZoomIn(); return AER_REQ_RESPONSE_READY; } else if (applicationRequest->queryId == ZOOM_OUT){ startZoomOut(); return AER_REQ_RESPONSE_READY; } else if (applicationRequest->queryId == ZOOM_STOP){ stopZoom(); return AER_REQ_RESPONSE_READY; } return AER_REQ_NO_QUERY_ID; } I'm seeing a massive instance of repetition here. ZOOM_IN? 
Call startZoomIn() function. ZOOM_OUT? Call startZoomOut() function. These things sound like they could be mapped to one another. But here's the catch... I'd like to keep my enum values. Altering them to be sequential makes it trivial to create an array of function pointers, but I lose out on maintainability; if I have to add ZOOM_RESET later, it's gonna have a weird number. I don't want to have a large memory (ROM and RAM) footprint. The cameras already use a certain amount of memory. The more memory I use, the higher the chance I force the camera manufacturer to use a larger ROM or RAM chip. If integration of our interface were to force the manufacturer to upgrade, I'm sure we'll get those costs billed somehow (that, or camera manufacturers will say "hmm... nope, that won't fit"). Ideally I'd like to be ANSI C compliant, as I will be altering my code to be ANSI C compliant later (I have to test on Win32 for now, so I can't compile as ANSI C since Windows.h is filled with C++ code). Being ANSI C compliant would be another selling point, since the code would compile on a large number of compilers (and thus easily be adaptable to a broad range of embedded devices). How would I refactor this to use an enum-to-function mapping? Is it possible to do so without having to traverse the array? Answer: ANSI C Don't use // comments but /**/. Design away invalid states It probably will never happen and is only in your mockup code, but your design allows for the state zoomingIn = true; zoomingOut = true; These two are mutually exclusive, so you might consider using a tristate (enum) like: typedef enum { ZOOMING_IN, ZOOMING_STOPPED, ZOOMING_OUT, } zooming_state; And having the state managed in one variable zooming (or something with a better name). Switch to the rescue Why don't you use a switch for the command mapping?
switch(applicationRequest->queryId) { case WRITE_DEMO: if (!unabto_query_write_uint8(writeBuffer, getDemoValue())){ return AER_REQ_RSP_TOO_LARGE; } return AER_REQ_RESPONSE_READY; case READ_DEMO: { uint8_t x; if (!unabto_query_read_uint8(readBuffer, &x)){ return AER_REQ_TOO_SMALL; } setDemoValue(x); return AER_REQ_RESPONSE_READY; } case ZOOM_CAPABLE: if (!unabto_query_write_uint8(readBuffer, canZoom())){ return AER_REQ_RSP_TOO_LARGE; } return AER_REQ_RESPONSE_READY; case ZOOM_IN: startZoomIn(); return AER_REQ_RESPONSE_READY; case ZOOM_OUT: startZoomOut(); return AER_REQ_RESPONSE_READY; case ZOOM_STOP: stopZoom(); return AER_REQ_RESPONSE_READY; default: return AER_REQ_NO_QUERY_ID; } That looks much cleaner to me and it can handle the sparse indices as well (and with automatic compiler optimization!).
{ "domain": "codereview.stackexchange", "id": 8777, "tags": "c" }
Interpolating irregularly missing data points of regularly spaced data
Question: If I have a set of regularly spaced sample data (spacing $\delta x$) and some of my data is missing (zero) but not at regular intervals, i.e. $[a_0, (missing), (missing), (missing), a_4, a_5, (missing), a_7, a_8, a_9, a_{10}, ...]$ Can digital signal processing techniques be used to interpolate the missing data? I've only read of interpolating by a FIR or IIR when the missing data is every nth element. There is a lot more data present than missing. The interpolation can be done offline. The first or last data point(s) might be missing. Answer: If you want to have just a working example, you can consider the functionality of scipy.interpolate: import numpy as np import matplotlib.pyplot as plt N = 128 n = np.arange(N) x = np.sin(2*np.pi*n/32) plt.plot(n, x) pos = np.random.randint(N, size=(50,)) x_received = np.delete(x, pos) n_received = np.delete(n, pos) plt.stem(n_received, x_received) import scipy.interpolate interpolated = scipy.interpolate.interp1d(n_received, x_received, bounds_error=False, kind='cubic') plt.plot(n, interpolated(n), 's') This code generates a signal x for n=0,...,127. Then, it deletes some values out of it and interpolates the missing values by cubic spline interpolation. See that the blue vertical lines denote the received samples, the green squares denote the interpolated values. The approximation is quite good.
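To quantify "quite good" without a plot (an editorial check, not part of the original answer; the endpoints are kept so the cubic spline never has to extrapolate, and the random seed is fixed for reproducibility):

```python
import numpy as np
from scipy.interpolate import interp1d

rng = np.random.default_rng(0)
N = 128
n = np.arange(N)
x = np.sin(2 * np.pi * n / 32)

# Drop 40 interior samples at irregular positions.
missing = rng.choice(np.arange(1, N - 1), size=40, replace=False)
keep = np.setdiff1d(n, missing)

# Cubic-spline interpolation over the surviving, irregularly spaced samples.
f = interp1d(keep, x[keep], kind='cubic')
max_err = np.max(np.abs(f(missing) - x[missing]))
print(f"max interpolation error: {max_err:.2e}")
```

For a signal this smooth relative to the sample spacing, the reconstruction error at the missing positions stays small even with roughly a third of the samples dropped.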
{ "domain": "dsp.stackexchange", "id": 5255, "tags": "interpolation" }
If Mars orbited the Earth how distant would it have to be to cause the same tides?
Question: If it were possible to replace the Moon with Mars, how distant would it have to be to essentially create the same oceanic tides as the Moon currently does? Mars seems to be roughly 3 times the mass of the Moon, so does that mean it should be 300% the distance? Furthermore, would the increased distance cause any issues with Earth's orbit around the Sun or potentially cause some sort of situation like Charon and Pluto where the centre of mass is between them? Answer: The tidal acceleration between 2 bodies is calculated with this formula: $$ a_T = \frac{2GM_MR_1}{D_m^3} $$ where $M_M$ is Mars's mass, $R_1$ the Earth's radius and $D_m$ the distance to the Moon. If you set this equal to the Moon's tidal acceleration, you get $D_M$, the distance to Mars that gives the same tidal acceleration; with $\frac{M_M}{M_m}=8.73328184501$: $$ D_M = \sqrt[3]{8.73}D_m\simeq2.06D_m $$
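Plugging in numbers (an editorial sketch; the masses and mean lunar distance are standard reference values, which give a ratio slightly different from the 8.733 quoted in the answer):

```python
M_mars = 6.417e23   # kg
M_moon = 7.342e22   # kg
D_moon = 384_400.0  # km, mean Earth-Moon distance

# Equal tidal accelerations:
#   2 G M_mars R_E / D_mars**3 = 2 G M_moon R_E / D_moon**3
ratio = M_mars / M_moon             # ~8.74, not ~3 as guessed in the question
D_mars = ratio ** (1 / 3) * D_moon  # ~2.06 * D_moon

print(f"mass ratio: {ratio:.2f}")
print(f"required distance: {D_mars:,.0f} km")
```

Because tidal acceleration falls off with the cube of distance, the required distance grows only with the cube root of the mass ratio, hence roughly doubling rather than tripling.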
{ "domain": "astronomy.stackexchange", "id": 396, "tags": "gravity, mars" }
Drawing conclusions about 3d truss from 2d model
Question: I am on the task of analysing the load capacity of a warren-ish truss bridge similar to this one. I do not, however, have the software or skill to model and analyse a 3D structure. I was wondering whether or not I could analyse a 2D simplification and still come to a somewhat credible conclusion. What considerations should be made when approximating a 3D truss as a 2D truss? What would be the limitations of such an approach? Thank you. Edit: I do not have to know the exact maximum load. My only goal is to figure out where the truss will fail first. Answer: The answer to your question "whether or not I could analyse a 2D simplification and still come to a somewhat credible conclusion" is obviously yes. After all, how do you think engineers calculated complex structures before the advent of the computer? The real question, of course, is how to do this well. The first question is how to determine the loading applied to the truss. Each of the trusses must obviously support its own self-weight. The division of the remaining loads among the trusses, however, is a bit more circumstantial. A reasonable first hypothesis would be to get the total load and divide it evenly between both trusses. If the structure is transversally symmetrical (that is, there is nothing that'd draw more loads to one truss than the other), then that is a perfectly valid assumption for all the dead loads (paving, etc). For the live loads, however, a bit more care may be required if discrete moving loads are considered. If a car is driving along your bridge, will it be perfectly centered (or close enough)? Then sure, divide the live load evenly between both trusses. Otherwise, if (for example) the bridge is wide enough that the car may be closer to one truss than the other, then more of the live load will go to the near truss.
Assuming a transversally symmetric bridge, if the live load will be at a distance $a$ from one truss and $b$ from the other, and $a>b$, then the closer truss will have to withstand a fraction $\dfrac{a}{a+b}$ of the total live load. Obviously, if you're just considering uniformly distributed live loads, then the worst case scenario is to load the entire bridge and divide it evenly between both sides (assuming a transversally symmetric bridge). Once you know the loads, you can then move on to checking the trusses themselves. Trusses are composed of members that are either in tension or compression. For those in tension, the 3D reality of the structure is irrelevant. For those in compression, it is very relevant. That is because compression can lead to buckling, and so we need to know how the members are braced. If we were dealing with a real 2D truss, the upper chord (under compression) would have no out-of-plane bracing, meaning its effective length would be equal to almost the entire span of the bridge, dramatically reducing its buckling strength. However, we aren't dealing with a 2D truss, but a 3D bridge. And the 3D reality of the truss is that the transversal bracing is restraining the out-of-plane displacement of all the nodes. This has absolutely no effect on the structural analysis calculation of the 2D truss (calculating the internal forces of each member), but it is essential when calculating the truss' design strength. There are other issues such as wind and other non-vertical loads, but I have a feeling that these are out-of-scope for your question, so I'll leave them out of this answer. Let me know in a comment if I'm mistaken.
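The transverse load-sharing rule above is just a lever (moment) balance; a small editorial sketch with hypothetical distances, not from the original answer:

```python
def live_load_fractions(a, b):
    """Fractions of a transverse point load carried by two trusses,
    where a is the distance from the load to truss A and b the distance
    to truss B. Simple moment balance about each truss line: the nearer
    truss carries the larger share."""
    return b / (a + b), a / (a + b)  # (share of truss A, share of truss B)

# Hypothetical: a car 1 m from truss A, with the trusses 4 m apart.
f_A, f_B = live_load_fractions(1.0, 3.0)
print(f_A, f_B)  # nearer truss A carries 3/4 of the load

# Centered load: split evenly, matching the symmetric case in the answer.
print(live_load_fractions(2.0, 2.0))
```

This matches the answer's statement that, with $a > b$, the closer truss withstands the fraction $a/(a+b)$ of the load.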
{ "domain": "engineering.stackexchange", "id": 1368, "tags": "structural-engineering, statics" }
Why is Fe3O4 a non-stoichiometric compound?
Question: Fe combines with O in a whole number ratio 3:2. Even the ions Fe2+ and Fe3+ are in the ratio 1:2. Then, why is Fe3O4 a non-stoichiometric compound? Answer: Stoichiometric compounds have a composition with a fixed and exact atomic ratio in small integer numbers. Non-stoichiometric compounds have their real composition expressed by atomic ratios in rational numbers. There is also a gray zone, where the compound is not exactly stoichiometric, but the deviation is small in the considered context and we use for it a stoichiometric formula that fits it "good enough". Many metal oxides fall in this category. The fact that a compound has mixed oxidation numbers like $\ce{Fe3O4}=\ce{Fe^{II}Fe^{III}2O4}$ or $\ce{Pb3O4}=\ce{Pb2^{II}[Pb^{IV}O4]}$, so it cannot be expressed by a single integer oxidation number, does not alone make the compound non-stoichiometric. But deviations in composition, leading to rational atom ratios, do make such a compound non-stoichiometric. Iron oxides are famous for that, even if $\ce{Fe3O4}$ is generally closer to stoichiometry than $\ce{FeO}$ or $\ce{Fe2O3}$. That is a general phenomenon for transition metals with multiple close oxidation states; frequent lattice defects leave vacancies or extra atoms, with compensating changes in oxidation states. E.g. the formal FeO(s) is usually a non-stoichiometric oxide with real composition $\ce{Fe_{0.84−0.95}O(s)}$. There are ways to produce its stoichiometric variant too: Wikipedia: Stoichiometric $\ce{FeO}$ can be prepared by heating $\ce{Fe_{0.95}O}$ with metallic iron at $\pu{770^{\circ}C}$ and $\pu{36 kbar}$. Other typical cases of non-stoichiometric compounds are: manganese oxides; mixed oxide ceramics, e.g. high-temperature superconductors; Li-ion cell cathode materials; "hydroxides"/"oxidohydroxides"/"hydrated oxides" of transition metals; hydrides of transition metals.
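The charge-compensation arithmetic behind $\ce{Fe_{0.84-0.95}O}$ can be made explicit (an editorial sketch, not in the original answer): in $\ce{Fe_xO}$, each missing Fe(2+) must be balanced by converting some remaining iron to Fe(3+).

```python
def fe3_fraction(x):
    """Fraction of the iron that must be Fe(3+) for charge balance
    in Fe_x O. Balance: x * (2*(1-f) + 3*f) = 2  =>  f = 2/x - 2."""
    return 2 / x - 2

print(f"{fe3_fraction(0.95):.3f}")  # about 10.5 % of the Fe is Fe(3+)
print(f"{fe3_fraction(1.00):.3f}")  # stoichiometric FeO: all Fe(2+)
```

So even a small metal deficiency forces mixed oxidation states, which is why these defect-rich oxides end up with rational, composition-dependent atom ratios.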
{ "domain": "chemistry.stackexchange", "id": 17412, "tags": "stoichiometry, ionic-compounds" }