Numerical Python code to generate artificial data from a time series process
Question: I'm writing code to generate artificial data from a bivariate time series process, i.e. a vector autoregression. This is my first foray into numerical Python, and it seemed like a good place to start. The specification is of this form: \begin{align} \begin{bmatrix} y_{1,t} \\ y_{2,t} \end{bmatrix} &= \begin{bmatrix} 0.02 \\ 0.03 \end{bmatrix} + \begin{bmatrix} 0.5 & 0.1 \\ 0.4 & 0.5 \end{bmatrix} \begin{bmatrix} y_{1,t-1} \\ y_{2,t-1} \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0.25 & 0 \end{bmatrix} \begin{bmatrix} y_{1,t-2} \\ y_{2,t-2} \end{bmatrix} + \begin{bmatrix} u_{1,t} \\ u_{2,t} \end{bmatrix} \end{align} where \$u\$ has a multivariate normal distribution, i.e. \begin{align} \begin{bmatrix} u_{1,t} \\ u_{2,t} \end{bmatrix} \sim N \left( \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix}, \Sigma \right) \end{align} My basic algorithm generates enough observations to get past the effects of the (arbitrary) initial conditions, and only returns the number of observations asked for. In this case, it generates ten times the number of observations asked for and discards the first 90%. 
import scipy


def generate_data(y0, A0, A1, A2, mu, sigma, T):
    """
    Generate sample data from VAR(2) process with multivariate normal errors

    :param y0: Vector of initial conditions
    :param A0: Vector of constant terms
    :param A1: Array of coefficients on first lags
    :param A2: Array of coefficients on second lags
    :param mu: Vector of means for error terms
    :param sigma: Covariance matrix for error terms
    :param T: Number of observations to generate
    """
    if y0.ndim != 1:
        raise ValueError("Vector of initial conditions must be 1 dimensional")
    K = y0.size
    if A0.ndim != 1 or A0.shape != (K,):
        raise ValueError("Vector of constant coefficients must be 1 dimensional and conformable")
    if A1.shape != (K, K) or A2.shape != (K, K):
        raise ValueError("Coefficient matrices must be conformable")
    if mu.shape != (K,):
        raise ValueError("Means of error distribution must be conformable")
    if sigma.shape != (K, K):
        raise ValueError("Covariance matrix of error distribution must be conformable")
    if T < 3:
        raise ValueError("Cannot generate less than 3 observations")
    N = 10 * T
    errors = scipy.random.multivariate_normal(mu, sigma, size=N - 1)
    data = scipy.zeros((N, 2))
    data[0, :] = y0
    data[1, :] = y0
    for i in range(2, N):
        data[i, :] = A0 + scipy.dot(A1, data[i - 1, :]) + scipy.dot(A2, data[i - 2, :]) + errors[i - 1, :]
    return data[-T:, :]


def main():
    y0 = scipy.array([0, 0])
    A0 = scipy.array([0.2, 0.3])
    A1 = scipy.array([[0.5, 0.1], [0.4, 0.5]])
    A2 = scipy.array([[0, 0], [0.25, 0]])
    mu = scipy.array([0, 0])
    sigma = scipy.array([[0.09, 0], [0, 0.04]])
    T = 30
    data = generate_data(y0, A0, A1, A2, mu, sigma, T)
    print(data)


if __name__ == "__main__":
    main()

As I said, this code is very simple, but since it's my first attempt at a numerical Python project, I'm looking for any feedback before I slowly start building a larger code project that incorporates this. Does any of my code (base Python, SciPy, etc.) need improvement? This specific model comes from: Brüggemann, Ralf, and Helmut Lütkepohl. 
"Lag Selection in Subset VAR Models with an Application to a US Monetary System." (2000). Answer: If you are going to generate 10 times the amount of data needed, there's no need to keep it around. You can allocate the array you intend to return, and overwrite it several times. Similarly, you don't need to precompute and store all the error terms; you can generate them on the fly as you need them:

data = scipy.zeros((T, 2))
rmvn = scipy.random.multivariate_normal
prev_1 = prev_2 = y0
for samples in range(10 * T):
    idx = samples % T
    data[idx, :] = A0 + A1.dot(prev_1) + A2.dot(prev_2) + rmvn(mu, sigma)
    prev_1, prev_2 = data[idx, :], prev_1
return data

The above approach, holding the previous entries in auxiliary variables to avoid indexing complications, also makes it fairly obvious that your requirement of T >= 3 is easy to work around. I have also used the .dot method, rather than the dot function, as I believe it makes the long expression a little more readable.
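The reviewer's loop can be packaged into a self-contained, runnable sketch. This version is my own rewrite under stated assumptions: it uses numpy.random.default_rng instead of the deprecated scipy.random namespace from the post, and the names generate_var2 and burn_factor are mine, not from the original code.

```python
import numpy as np

def generate_var2(y0, A0, A1, A2, mu, sigma, T, burn_factor=10, seed=0):
    """Simulate a VAR(2) process, discarding a burn-in period by
    overwriting a length-T buffer burn_factor times."""
    rng = np.random.default_rng(seed)
    K = y0.size
    data = np.zeros((T, K))
    prev_1 = prev_2 = np.asarray(y0, dtype=float)
    for step in range(burn_factor * T):
        idx = step % T
        shock = rng.multivariate_normal(mu, sigma)
        data[idx] = A0 + A1 @ prev_1 + A2 @ prev_2 + shock
        prev_1, prev_2 = data[idx].copy(), prev_1
    return data

A0 = np.array([0.2, 0.3])
A1 = np.array([[0.5, 0.1], [0.4, 0.5]])
A2 = np.array([[0.0, 0.0], [0.25, 0.0]])
mu = np.zeros(2)
sigma = np.array([[0.09, 0.0], [0.0, 0.04]])
sample = generate_var2(np.zeros(2), A0, A1, A2, mu, sigma, T=30)
print(sample.shape)  # (30, 2)
```

Because burn_factor * T is a multiple of T, the buffer's final contents are the last T consecutive observations in chronological order, matching the reviewer's suggestion.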
{ "domain": "codereview.stackexchange", "id": 15250, "tags": "python, numpy, statistics" }
How is the age of the universe or a galaxy estimated?
Question: How did scientists find the age of the universe to be 13.8 billion years and the age of the Milky Way to be 13.51 billion years? Is there any proof? Answer: It's impossible to mathematically "prove" those things in the way that mathematicians prove theorems. Instead, physicists compare the predictions of a model of the universe with experimental observations made of it in order to verify the correctness of that model, those observations having certain statistical limits on their stated accuracy. They also use that data to refine their models so as to include more physics in them and thereby increase their predictive power and accuracy. Finally, physicists make comparisons between different models to determine whether or not their own model is consistent with those others, within the accuracy limits mentioned above. This process is used to rule in or rule out the validity of the model in question. None of these processes relies on mathematical proof as such.
{ "domain": "physics.stackexchange", "id": 52289, "tags": "cosmology, space-expansion, big-bang" }
Qiskit - How to get statevector for each shot?
Question: I have a simple circuit which I run for 10 shots:

from qiskit import Aer, execute, QuantumCircuit
from qiskit.quantum_info import Statevector

backend = Aer.get_backend("statevector_simulator")
qc2 = QuantumCircuit(2, 1)
qc2.h(0)
qc2.measure([0], [0])
print(qc2)
result = execute(qc2, backend=backend, shots=10).result()

Now, when I print the state vector with:

print('State after measurement:', result.get_statevector())

The output is:

State after measurement: Statevector([1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], dims=(2, 2))

The question is: to which of the 10 shots does this state vector belong? Depending on the measurement outcome, the state vector should be different: one state when q0 is measured as 0 and another when q0 is 1. And how do I get the statevector for each shot? Answer: You can use the save_statevector simulator instruction. To save the statevector for each shot, set the pershot argument to True:

from qiskit import Aer, execute, QuantumCircuit
from qiskit.providers.aer.library import save_statevector

backend = Aer.get_backend("statevector_simulator")
qc2 = QuantumCircuit(2, 1)
qc2.h(0)
qc2.measure([0], [0])
qc2.save_statevector(label='test', pershot=True)
result = execute(qc2, backend=backend, shots=10).result()
print(result.data(0)['test'])
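To build intuition for why each shot can end in a different statevector, here is a plain-NumPy sketch (deliberately not using Qiskit) that simulates the measurement of qubit 0 of the state produced by H on qubit 0 of |00>. The function name and the least-significant-bit ordering convention are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# (|00> + |01>)/sqrt(2): H applied to qubit 0 of |00>
# (qubit 0 is the least-significant bit of the basis index)
state = np.array([1, 1, 0, 0], dtype=complex) / np.sqrt(2)

def measure_qubit0(state):
    """Sample an outcome for qubit 0 and return (outcome, collapsed state)."""
    p1 = np.sum(np.abs(state[1::2])**2)   # total weight where bit 0 is 1
    outcome = int(rng.random() < p1)
    post = state.copy()
    post[(1 - outcome)::2] = 0            # zero out the other branch
    return outcome, post / np.linalg.norm(post)

shots = [measure_qubit0(state) for _ in range(10)]
for outcome, post in shots:
    print(outcome, np.round(post, 3))
```

Each of the 10 "shots" produces its own post-measurement vector, which is exactly the per-shot information that save_statevector with pershot=True exposes.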
{ "domain": "quantumcomputing.stackexchange", "id": 3925, "tags": "qiskit, measurement" }
Problem controlling pan and tilt with GUI
Question: Hi, I am a beginner with ROS and have built a URDF model for a TurtleBot. The TurtleBot is equipped with a pan-and-tilt unit consisting of 2 servos, which I can manipulate and display with the provided GUI interface in RViz. I also have a launch file to start the servos and manipulate them with the keyboard. But this is not linked to my robot model. How do I bring these two things together, such that I can control the robot in RViz as well as the corresponding hardware? I know this is not much information. Here is the RQT graph after starting all the nodes. http://www.fotos-hochladen.net/uploads/rosgraphnodes762v4fsyrj.png Thanks! Originally posted by mike.sru on ROS Answers with karma: 16 on 2013-09-19 Post score: 0 Answer: I found a solution for my problem here: http://gazebosim.org/wiki/Tutorials/1.9/ROS_Control_with_Gazebo Originally posted by mike.sru with karma: 16 on 2013-09-20 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 15582, "tags": "urdf, transform" }
Why is representation theory important in physics?
Question: Given a certain group we can find many representations of it, and if I'm not wrong a representation is a group itself. For example, given the group $SU(2)$ of unitary 2x2 matrices with determinant 1, its three-dimensional representation is a subset of the 3x3 matrices that is a group itself. Since a representation is a homomorphism, I expect that the group and its representation can be different groups. So it looks like there exist many groups that are related to a certain group because they are its representations. Why is it important to know the representations of a group? Is there a physically important property that the representation group inherits from the group it represents? If I'm wrong feel free to correct me (I'm pretty new to the topic). I recently wrote a related post, but there were too many questions inside; maybe it's useful somehow: Some clarifications about the ideas of representation of a group Answer: One can give many examples of where specific aspects of representation theory are useful in physics (see the current other answers to this question), but the fact of the matter is simply that you cannot do physics without having representations, whether you call them that or not: Don't think about representations as "a group and a different group". Even faithful (but different) representations are relevant. A representation is a pair - it consists of both a vector space $V_\rho$ and a representation map $\rho : G \to \mathrm{GL}(V_\rho)$ that preserves the group structure, i.e. is a group homomorphism. Without a representation, the group $G$ remains abstract and acts on nothing. Whenever we ask a question like "How does X transform under rotations?" (or with "rotations" replaced by any other transformation), this is - if X lives in a vector space, as it often does, e.g. when it is any sort of number or array of numbers - the same as asking "In which representation of $\mathrm{SO}(3)$ (the rotation group) does $X$ transform?". 
You cannot have transformations that form a group acting on vectors without having representations. Most of physics is literally impossible to do without having a representation somewhere, since the ideas of transformations and symmetries are fundamental to all fields of physics. And questions like "If I multiply X and Y, how does their product transform?" are so natural that it is mostly unavoidable to have more than one representation. You might as well ask "Why are groups important?", because without their representations, groups aren't very interesting from a physical perspective at all (this, coincidentally, is why you'll often hear physicists say "group theory" to what mathematicians would consider "representation theory")!
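As a concrete illustration of the answer's point, here is a small NumPy sketch (my own example, not from the post) using the defining representation of rotations about the z-axis: the representation map turns group composition into matrix multiplication, and "how does X transform?" is answered by applying the representative matrix to X.

```python
import numpy as np

def Rz(theta):
    """3-D rotation about the z-axis: the defining representation
    of z-rotations acting on the vector space R^3."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Homomorphism property: the representative of a product of rotations
# is the product of the representatives.
a, b = 0.7, 1.3
assert np.allclose(Rz(a) @ Rz(b), Rz(a + b))

# "How does X transform under rotations?" -> apply the representative to X.
X = np.array([1.0, 0.0, 0.0])
print(Rz(np.pi / 2) @ X)  # ~ [0, 1, 0]
```

The same vector space carries the action; without picking such a space and map, the abstract rotation group would have nothing to act on.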
{ "domain": "physics.stackexchange", "id": 66981, "tags": "quantum-mechanics, group-theory, representation-theory" }
Counting the number of configurations of particles in a harmonic trap
Question: For a collection of $N$ particles in a harmonic trap, the Hamiltonian is: \begin{equation} H\left(\vec{p},\vec{x}\right) = \sum_{i=1}^N \left(\frac{\vec{p_i}^2}{2m} + \frac{1}{2}k\vec{x_i}^2\right) \end{equation} Assume the number of configurations is given by \begin{equation} \Omega\left[U,N\right] = \frac{1}{N!}\int \frac{\text{d}^Dp_1\text{d}^Dx_1...\text{d}^Dp_N\text{d}^Dx_N} {\left(2\pi \hbar\right)^{ND}}\delta \left(H-U\right) \Delta U \end{equation} How can I show that the integral above evaluates to \begin{equation} \Omega[U,N]= \frac{1}{N!}\frac{1}{\Gamma[ND]}\frac{U^{ND-1}\Delta U}{\left(\hbar\omega\right)^{ND}} \end{equation} I don't know how to even get started. Can someone point me in the right direction? Answer: On a conceptual basis, I think it all boils down to what we mean when writing an integral like the one shown by the OP, i.e. one including a Dirac distribution. You already understood that we want to "measure" the number of states with a given total energy $E$, where every possible state is a point in phase space. The Hamiltonian is a function on this phase space. As usual in a lot of systems, we identify the Hamiltonian as the Noether charge resulting from time-translation symmetry, which we call energy. So the states with constant energy $E$ are the ones satisfying $$ H(\vec p_1,\dots,\vec p_N,\vec x_1,\dots,\vec x_N) = E\ .\qquad (1) $$ If $H$ is a "well-behaved" function on phase space, equation $(1)$ implicitly defines a sub-manifold of our phase space (which in this case we could think of as $\mathbb R ^{6N}$). So the Hamiltonian suggested by the OP reads $$ H(\vec p_1,\dots,\vec p_N,\vec x_1,\dots,\vec x_N) = \sum_{i=1}^N \frac{\vec p_i^2}{2m} + \frac k 2\vec x_i^2\ . $$ The question now is what the sub-manifold consisting of all points which satisfy eq. $(1)$ "looks like". This is probably best answered by a simple example. 
Let us look at the points $(x, y) \in \mathbb R^2$ which satisfy $$ x^2 + y^2 = R^2, $$ which is just a circle of radius $R$. And the equivalent integral one would solve would be given by $$ S = \int_{\mathbb R ^2} \delta(x^2 + y^2 - R^2)\ \text{d}x\text{d}y, $$ which one would solve using polar coordinates because all there is to integrate is the "volume" of a circle with radius $R$. I think from here you should be able to perform your calculations in analogy.
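The circle example can also be checked numerically. Doing the polar-coordinate integral by hand gives $S = \pi$, independent of $R$ (substituting $u = r^2$ turns $r\,\text{d}r$ into $\text{d}u/2$). The sketch below approximates the delta function by a thin box of width eps; the grid size and eps are my own illustrative choices.

```python
import numpy as np

R, eps = 1.0, 0.05
L, n = 1.5, 1500
xs = np.linspace(-L, L, n)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)

# Approximate delta(f) by a box of width eps around f = 0, so the
# integral becomes (area of the region |f| < eps/2) / eps.
# That region is a thin annulus of area exactly pi*eps, so S -> pi.
f = X**2 + Y**2 - R**2
S = np.sum(np.abs(f) < eps / 2) * dx**2 / eps
print(S)  # ~ pi, independent of R
```

The same box-function trick applied to $\delta(H - U)$ is what turns the phase-space integral into the volume of a thin energy shell.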
{ "domain": "physics.stackexchange", "id": 86010, "tags": "statistical-mechanics, harmonic-oscillator" }
In Ada95, how many bytes does an integer occupy in memory?
Question: Also asking for float, character, etc. Answer: Ada does not have primitives in the same way as most languages. Instead, when you define an Integer or Floating Point type you provide a range. The compiler then ensures the type has that range in whatever underlying type it chooses on that system. The standard library does, however, provide some predefined types. See http://en.wikibooks.org/wiki/Ada_Programming/Type_System http://www.adahome.com/rm95/rm9x-03-05-02.html
{ "domain": "cs.stackexchange", "id": 3281, "tags": "memory-allocation" }
The gravitational inverse square law and general relativity
Question: I know this has been discussed previously, but I am wondering if the equations of general relativity independently reduce to an inverse square law at low speeds, or if the formulas were created in a way that ensured they would (a chicken-and-egg question). One answer mentioned how Einstein's field equations can prove Birkhoff's theorem, which probably answers my question, but could someone give a clarifying comment? As many times as it has been explained, it still seems to me that general relativity says that gravity is not a force in the "true" sense of the word. On the other hand, an inverse square law suggests a "force" carried by particles (gravitons) rather than an effect due strictly to geometric considerations. Answer: If general relativity didn't reproduce Newton's model of gravity (the inverse square law) in the low-speed weak-gravity approximation, then general relativity would have been ruled out, because we already know that Newton's model is an excellent approximation under those conditions. In this sense, the inverse square law played an important role in testing general relativity. However, in hindsight, general relativity now stands on its own as our fundamental model of gravity, and the inverse square law is better regarded as one of the many correct predictions of general relativity. So there is no chicken-and-egg problem. This is a common theme in physics: the principles that were once regarded as fundamental are later regarded as mere approximations to something deeper. A new foundation is adopted, and the original foundation becomes a prediction instead. The structure of general relativity can actually be deduced from a few simple principles that don't explicitly refer to any kind of inverse square law (this is supposed to be a non-technical rendition of Lovelock's theorem): The action principle. Loosely translated, this says that if thing A influences thing B, then thing B must also influence thing A in a related way. 
The list of "things" in the universe should include the metric field. Diffeomorphism invariance. Loosely translated, this means that if we take all the "stuff" in the universe and distort it all together in some smooth but otherwise arbitrary way, then we haven't actually changed anything at all (because we distorted all the stuff the same way, and the only things that matter are stuff compared to other stuff). The "action" in the action principle shouldn't involve any more than second-derivatives. (No third-derivatives, for example.) This, in turn, can be inferred using the idea that general relativity itself is probably just an approximation to something deeper that current experiments are unable to resolve. This is discussed in "Introduction to the Effective Field Theory Description of Gravity", https://arxiv.org/abs/gr-qc/9512024. ...it still seems to me that general relativity says that gravity is not a force in the "true" sense of the word. On the other hand, an inverse square law suggests a "force" carried by particles (gravitons) due strictly to geometric considerations. The way we translate mathematically-formulated principles into words is, at least to some degree, a matter of taste. In classical general relativity, gravity is mediated by a field (the metric field) that both influences and is influenced by everything else, in accordance with the action principle. When we say that gravity is not a force in the "true" sense of the word, we are alluding to the fact that this same metric field is what we use to define geometry and proper time, which are things that we usually think of as prerequisites for doing any kind of physics at all. But we can also think of things the other way around: geometry and proper time are useful concepts because of the characteristics of this particular field and the way it interacts with everything else. 
Whether we use the "geometry-first" language or the "geometry-second" language, the important principles underlying classical general relativity are principles like those listed above. Here's a concise review of how Newton's law of gravity is derived from general relativity: "Normalization conventions for Newton’s constant and the Planck scale in arbitrary spacetime dimension" (https://arxiv.org/abs/gr-qc/0609060). This paper highlights the fact that the inverse-square law result is specific to four-dimensional spacetime. Regarding gravitons, a proper understanding of gravity in those terms requires a theory that reconciles general relativity with quantum theory (beyond the context of the low-resolution approximation that is used in the "Effective Field Theory" paper cited above). Exactly what such a theory has to say about the "right" way to think about gravity is an interesting question that I'm not qualified to answer, but here is a related (technical) post: String theory and one idea of “quantum structure of spacetime”
{ "domain": "physics.stackexchange", "id": 53877, "tags": "general-relativity, gravity" }
A bead on the wire loop
Question: In my physics book there is a question which is stated as follows: A thin circular loop of radius $R$ rotates about its vertical diameter with an angular frequency $\omega$. Show that a small bead on the wire loop remains at its lowermost point for $\omega \leq \sqrt{g/R}\,$. What is the angle made by the radius vector joining the centre to the bead with the vertically downward direction for $\omega=\sqrt {2g/R}$? Neglect friction. When I started doing this question, I ran into the problem that I cannot see intuitively why the bead should move up the loop if $\omega \geq \sqrt {g/R}$. What other forces might be acting on it (other than the normal force from the loop and the gravitational force)? The more important of these two is the former, because when I imagine this I cannot see why the bead should move upward; it looks like it should remain at the lowest point whatever the angular velocity (because if it is like a particle at the lowermost point, it lies on the axis of rotation and hence should have zero acceleration). Please correct me if I'm going wrong anywhere. Answer: Absolutely. For your problem, if the bead is at rest at the bottom initially, it will always stay there. The possible forces on the bead in the rotating reference frame are:

- Normal reaction from the wire
- Weight
- Centrifugal force
- Coriolis force (irrelevant for motion on the wire, as it is perpendicular to the wire and taken care of by the azimuthal component of the normal reaction)

To get the equilibrium positions, you just need to find the angles $\theta$ for which the tangential force on the stationary bead (i.e. along the wire) is zero. More Perspective This is a mechanical example of what is known as Landau's theory of phase transitions in condensed matter. When $\Omega$ is small, there is only one equilibrium position of the bead in the rotating frame of reference, and that's the lowermost position, at the bottom. 
As one increases $\Omega$, at some point the centrifugal force becomes strong enough to balance the tangential component of gravity on the bead at a non-trivial angle (on either side), and two new equilibrium positions occur. If you now do a stability analysis of the problem, it turns out that as long as there is only one equilibrium position, it is stable. However, as soon as the new equilibrium positions occur, the lowermost position becomes unstable (while the new ones are stable). You may now ask what happens if the bead is at rest at the bottom initially. The answer is that the bead should stay there forever. In practice, however, this doesn't happen, because there is no perfectly isolated and frictionless/dissipationless system. The system will pick up some stray disturbance and spontaneously sway to one side. Once that happens, the bead will move towards the more stable position and, in the presence of infinitesimal dissipation, eventually settle in that position. This phenomenon in more exotic condensed-matter/particle-physics setups is called spontaneous symmetry breaking and is responsible for solids, ferromagnetism, superconductivity, superfluidity, the Higgs mechanism, etc.
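The second part of the quoted problem follows from the tangential balance described above: besides $\theta = 0$, setting the tangential component of gravity equal to that of the centrifugal force, $mg\sin\theta = m\omega^2 R \sin\theta\cos\theta$, gives $\cos\theta = g/(\omega^2 R)$. A quick numerical check, with illustrative values of g and R:

```python
import numpy as np

g, R = 9.81, 0.5
omega = np.sqrt(2 * g / R)

# Off-axis equilibrium: cos(theta) = g / (omega^2 * R).
cos_theta = g / (omega**2 * R)
theta = np.degrees(np.arccos(cos_theta))
print(theta)  # 60 degrees for omega = sqrt(2 g / R)

# These equilibria only exist once g/(omega^2 R) <= 1,
# i.e. omega >= sqrt(g/R), matching the threshold in the problem.
omega_c = np.sqrt(g / R)
print(omega >= omega_c)  # True
```

Below the critical frequency the only equilibrium is the bottom of the loop, which is why the bead stays there for $\omega \leq \sqrt{g/R}$.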
{ "domain": "physics.stackexchange", "id": 62926, "tags": "homework-and-exercises, newtonian-mechanics, forces, angular-velocity" }
How can a stack-based VM be Turing-complete?
Question: I was trying to implement a stack-based virtual machine (e.g. like the JVM or some early version of Lua) for some benchmarks, in order to test its performance against a register-based VM. I originally implemented it as having a global stack for all values and an array of instructions, where, for the sake of simplicity, no functions are implemented, only goto statements that occasionally jump within the array of instructions in order to operate on the stack. All instructions are decoded in a linear fashion and loaded into an array, which in turn gets fed into a giant switch that loops, dispatching instructions to different operations. The values contained in the instructions are decoded separately within the switch and stored on the global stack when needed. Later on, I heard that in order to be Turing-equivalent, a stack machine must have at least two stacks. So the questions I'm asking here are: 1) Is my original design Turing-equivalent? 2) If not, how can the design be revised so that it is? The reason for wanting the stack VM to be Turing-equivalent is that its speed and efficiency can then be compared to another Turing-equivalent custom-made register machine. Please correct me on this if it does not work that way :) Thanks! Answer: You are confusing stack-oriented programming languages (resp. their interpretation model) with pushdown automata. The former can be Turing-complete because they can access an (infinite) random-access memory; the latter has no memory but the (one) stack. The description of your machine (model) is not very clear. I think you can decide its power on your own given the above criterion.
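For context on the "two stacks" remark: the standard construction is that a pair of stacks can simulate a Turing machine's tape (one stack holds everything to the left of the head, the other holds the head cell and everything to its right), which is why two stacks suffice for Turing-equivalence. A minimal sketch of that tape, with class and method names of my own choosing:

```python
class TwoStackTape:
    """A Turing-machine tape simulated with two stacks:
    `left` holds cells to the left of the head (top = nearest),
    `right` holds the current cell and everything to its right."""

    def __init__(self, contents, blank="_"):
        self.blank = blank
        self.left = []
        self.right = list(reversed(contents)) or [blank]

    def read(self):
        return self.right[-1]

    def write(self, sym):
        self.right[-1] = sym

    def move_right(self):
        self.left.append(self.right.pop())
        if not self.right:                  # extend the tape with blanks
            self.right.append(self.blank)

    def move_left(self):
        if self.left:
            self.right.append(self.left.pop())

tape = TwoStackTape("abc")
tape.write("X")
tape.move_right()
tape.move_right()
print(tape.read())  # 'c'
```

Every tape operation is a constant number of pushes and pops, so a VM whose instruction set gives it two unbounded stacks (or, equivalently, any unbounded random-access memory) clears the bar the question asks about.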
{ "domain": "cs.stackexchange", "id": 11765, "tags": "turing-completeness, interpreters" }
Why is the capstan equation exponential?
Question: Does anybody know an intuitive explanation of why the capstan equation is exponential and not, as one might expect, linear? Answer: At any point, the friction force is proportional to the tension in the rope, and the friction force is the rate of change of the tension along the length of the rope. That is the basic form of the equation for every exponential growth and decay situation:$$\frac{dy}{dx} = Ay$$ where $A$ is a constant.
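Writing the tension as $T(\theta)$ and the friction coefficient as $\mu$, the answer's statement reads $dT/d\theta = \mu T$, whose solution is $T(\theta) = T(0)\,e^{\mu\theta}$. A quick numerical check with an assumed value of $\mu$, Euler-integrating the ODE around half a turn and comparing with the exponential:

```python
import numpy as np

mu = 0.3                      # friction coefficient (assumed value)
theta = np.linspace(0, np.pi, 10_000)
dtheta = theta[1] - theta[0]

# Euler-integrate dT/dtheta = mu * T around the capstan,
# starting from a hold-side tension of 1.
T = np.empty_like(theta)
T[0] = 1.0
for i in range(1, len(theta)):
    T[i] = T[i - 1] + mu * T[i - 1] * dtheta

print(T[-1], np.exp(mu * np.pi))  # numerical vs exact: both ~ 2.57
```

The compounding in the loop is the intuition: each small arc multiplies the tension by the same factor (1 + mu*dtheta), and repeated multiplication is exponential growth.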
{ "domain": "physics.stackexchange", "id": 56882, "tags": "newtonian-mechanics, friction, string" }
How to achieve blast results according to the intuitive interpretation of `-max_target_seqs`?
Question: Very recently the BLAST parameter -max_target_seqs n got a lot of attention. Instead of the intuitive interpretation (return the best n sequences), the parameter asks BLAST to return the first n sequences that pass the e-value threshold. This is affecting thousands of workflows and analyses that assumed the intuitive interpretation. I also found that there is another parameter called -num_alignments. What is the difference? A related question on BioStars seems to misinterpret the max_target_seqs parameter. Is num_alignments also misunderstood? Now the main question: how do I actually run BLAST so that it gives me the best n hits in the database? Until now I thought that this example would do the job:

blastp -query $QUERY -db $PROTEINS -out $BLASTOUT -evalue 1e-10 -outfmt 6 -num_alignments 5

-- edit -- Now I understand that I cannot achieve the desired output using BLAST's parameters alone, but I still don't see an answer to how I can get the best n hits. So I wanted to check how to get all the results, and that's not intuitive either. The parameter -max_target_seqs is set by default to 500. Does it mean that if there are more than 500 significant hits, I have no guarantee of getting the best one? Do I have to set max_target_seqs to some crazy high number to be sure that I got them all? Answer: The -max_target_seqs is also related to -evalue, which does not work as one would think. Looking at the BLAST news, one finds that since version BLAST 2.2.27+ was released, one should use -max_target_seqs: 4.) The output for reports without separate descriptions and alignments sections (all –outfmt greater than 4) should now use –max_target_seqs to control the output rather than –num_descriptions and –num_alignments. So it seems that num_alignments is also misunderstood. I came to the conclusion that one cannot expect to get the best n alignments unless you get all of them and filter the best ones yourself.
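Since BLAST's own flags cannot be relied on to return the best n hits, one practical approach is to run with a large -max_target_seqs and then rank the tabular output yourself. A sketch of that post-filtering step, assuming the standard 12-column -outfmt 6 layout (qseqid, sseqid, ..., evalue, bitscore); the helper name best_hits is my own:

```python
import csv
from collections import defaultdict

def best_hits(tabular_path, n=5):
    """Return the n best hits per query from BLAST -outfmt 6 output,
    ranked by bitscore (column 12) with e-value (column 11) as tiebreaker."""
    hits = defaultdict(list)
    with open(tabular_path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            qseqid, evalue, bitscore = row[0], float(row[10]), float(row[11])
            hits[qseqid].append((bitscore, -evalue, row))
    return {q: [r for _, _, r in sorted(rows, reverse=True)[:n]]
            for q, rows in hits.items()}
```

This only guarantees the best n among the hits BLAST actually reported, which is why -max_target_seqs still needs to be set generously in the first place.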
{ "domain": "bioinformatics.stackexchange", "id": 726, "tags": "blast" }
What is the rationale behind the "same" mode of discrete convolution?
Question: Having read up on discrete convolution and how it is implemented, I've started to think about which "mode" is applicable to which situation. For two signals x1 and x2 of length M and N, respectively, NumPy and SciPy convolution functions (example doc) support modes:

"full": output of length M + N - 1
"same": output of length max(M, N)
"valid": output of length max(M, N) - min(M, N) + 1

Only the "valid" mode is free of boundary effects because the output samples are all calculated from fully-immersed input samples. But when you look at the "same" mode and its output length, you start to realise that it's a strange half-way point between the other two modes. It must be fully-immersed on one side of the output but not at the other. What they are doing can easily be demonstrated:

import numpy as np
import scipy.signal as sig

x1 = np.array([1, 1, 1, 1, 1], dtype=np.float64)
x2 = np.array([1, 10], dtype=np.float64)

print(np.convolve(x1, x2, mode="same"))
# [ 1. 11. 11. 11. 11.]
print(sig.convolve(x1, x2, mode="same", method="direct"))
# [ 1. 11. 11. 11. 11.]
print(sig.convolve(x1, x2, mode="same", method="fft"))
# [ 1. 11. 11. 11. 11.]

It is clear that the convolution begins on the left in the same manner as the "full" mode (without full immersion), but stops abruptly at the right-hand side (with full immersion). It's not clear to me whether this is just a convention, or if full immersion on the right is advantageous over the opposite. Perhaps it was just easier to implement it as the "full" mode with an abrupt end? Could someone please shed light on why this is so, and what the mode would be useful for? Answer: I think that the only reason why the same mode exists is that it is sometimes convenient for the output to have the same length as the input (assuming that the input is longer than the impulse response). The exact implementation of that mode is usually not very important, and there are of course several possibilities:

- use zero padding at the end. 
- use zero padding at the beginning.
- use the central part of the full convolution, resulting in zero padding at both ends. There are two possibilities if an odd number of samples needs to be removed from the result of the full convolution (see below).

I believe that most DSP simulation platforms (such as Matlab, Octave, Scipy etc.) use method $3$. Example: For this example I used Octave (Matlab uses the same convention). If $M$ and $N$ are the lengths of the two sequences, and if $M\ge N$, the full convolution has length $M+N-1$, and for the same mode we need to remove $N-1$ samples in order for the result to be of length $M$. If $N-1$ is even, the same number of samples is removed from both ends of the full convolution result. If $N-1$ is odd, we need to remove one more sample at one of the two ends. Octave/Matlab chooses to remove one more sample at the beginning. From your example it appears that Scipy removes one more sample at the end. However, I believe that both implementations use the middle part of the full convolution with samples removed at both ends (i.e., method $3$ above). Of course, if $N=2$ (as in your example), there is only one sample that needs to be removed.

N odd $\Rightarrow$ the same number of samples are removed at both ends:

u = [1,2,3,2,1];
v = [1,0,-1];
conv(u,v) = 1 2 2 0 -2 -2 -1
conv(u,v,'same') = 2 2 0 -2 -2

N even $\Rightarrow$ one more sample is removed at the beginning (Scipy seems to do the opposite):

u = [1,2,3,2,1];
v = [1,0,-1,2];
conv(u,v) = 1 2 2 2 2 4 3 2
conv(u,v,'same') = 2 2 2 4 3

I would expect that the Scipy result is 2 2 2 2 4
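The "centre slice" description of the same mode can be checked directly. This sketch reruns the even-N example with scipy.signal.convolve and shows that the same output is the slice of the full result starting at (len(full) - len(u)) // 2 = (N - 1) // 2, i.e. the extra sample is dropped from the end, opposite to Octave:

```python
import numpy as np
from scipy.signal import convolve

u = np.array([1, 2, 3, 2, 1], dtype=float)
v = np.array([1, 0, -1, 2], dtype=float)   # even N: one extra sample trimmed

full = convolve(u, v, mode="full")          # length M + N - 1 = 8
same = convolve(u, v, mode="same")          # length M = 5

# SciPy's "same" is the centred slice of "full"; with an odd number
# of samples to remove, the extra one comes off the end.
start = (len(full) - len(u)) // 2
print(same)                      # [2. 2. 2. 2. 4.]
print(full[start:start + len(u)])
```

This matches the prediction at the end of the answer: SciPy returns 2 2 2 2 4 for this input pair.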
{ "domain": "dsp.stackexchange", "id": 12360, "tags": "discrete-signals, convolution" }
Get relation from definition of stress-energy tensor and the conservation of energy
Question: Starting from the following definition of stress-energy tensor for a perfect fluid in special relativity : $${\displaystyle T^{\mu \nu }=\left(\rho+{\frac {p}{c^{2}}}\right)\,v^{\mu }v^{\nu }-p\,\eta ^{\mu \nu }\,}\quad(1)$$ with $$v^{\nu}=\dfrac{\text{d}x^{\nu}}{\text{d}\tau}$$ and $$V^{\nu}=\dfrac{\text{d}x^{\nu}}{\text{d}t}$$ (we have $v^{\nu}=\gamma\,V^{\nu}$) So, finally, I have to get the following relation : $$\dfrac{\partial \vec{V}}{\partial t} + (\vec{V}.\vec{grad})\vec{V} = -\dfrac{1}{\gamma^2(\rho+\dfrac{p}{c^2})} \bigg(\vec{grad}\,p+\dfrac{\vec{V}}{c^2}\dfrac{\partial p}{\partial t}\bigg)\quad(2)$$ To get this relation, I must use the conservation of energy for $\nu=i$ and $\nu=0$ with : $$\partial_{\mu}T^{\mu\nu}=0\quad(3)$$ If someone could help me to find the equation $(2)$ from $(1)$ and $(3)$, this would be nice to indicate the tricks to apply. EDIT 1 : For the moment, below where I am : I recognize in the left member of wanted relation $(2)$ the Lagrangian derivative : $$\dfrac{\text{D}\,\vec{V}}{\text{d}t}=\dfrac{\partial \vec{V}}{\partial t} + (\vec{V}.\vec{\nabla})\vec{V}\quad(4)$$ and I can rewrite $(1)$ with the $V^{\mu}$ components like : $$T^{\mu\nu}=\left(\rho+\dfrac{p}{c^{2}}\right)\,\gamma^2\,V^{\mu}V^{\nu }-p\,\eta^{\mu\nu}\,\quad(5)$$ But from this point, I don't know how to make the link between $(4)$, $(5)$, $(3)$ (the divergence of stress-energy equal to zero), and $(1)$ ... Any help is welcome Answer: Here are the main steps. I use $c = 1$ as usual in relativity, and $a, b = 0, 1, 2, 3$ are flat spacetime indices : \begin{gather} T^{ab} = (\rho + p) \, u^a \, u^b - \eta^{ab} p,\tag{1} \\[12pt] \partial_a T^{ab} = u^b u^a \, \partial_a \, \rho + u^b u^a \, \partial_a \, p + (\rho + p)\big( (\partial_a \, u^a) \, u^b + u^a \, \partial_a \, u^b \,\big) - \partial^b p = 0. \tag{2} \end{gather} Contract (2) on $u_b$ and use properties $u_b \, u^b = 1$ and $u_b \, \partial_a \, u^b \equiv 0$. 
You should get the continuity equation : \begin{equation}\tag{3} (\rho + p) \, \partial_a \, u^a = -\, u^a \, \partial_a \, \rho. \end{equation} Substitute this constraint into equation (2). You should get this : \begin{equation}\tag{4} (\rho + p) \, u^a \, \partial_a \, u^b = -\, u^b u^a \, \partial_a \, p + \partial^b \, p. \end{equation} Write these for simplicity (total proper-time derivative) : \begin{align}\tag{5} u^a \, \partial_a \, u^b &\equiv \frac{d u^b}{d\tau}, & u^a \, \partial_a \, p &\equiv \frac{d p}{d\tau}. \end{align} Then you get this, for index $b = i = 1, 2, 3$ : \begin{align}\tag{6} (\rho + p) \frac{d u^i}{d\tau} = -\, u^i \, \frac{dp}{d\tau} + \partial^i \, p. \end{align} Use $u^i = \gamma \, v^i$ and $\partial^i p = -\, \partial_i \, p$ and $d\tau = dt / \gamma$, so (6) becomes a vectorial equation : \begin{align}\tag{7} \gamma \, (\rho + p) \frac{d \gamma \, \vec{v}}{dt} = -\, \gamma^2 \, \vec{v} \, \frac{dp}{dt} - \vec{\nabla} \, p. \end{align} Now scalar-contract this with vector $\vec{v}$ and use $\dot{\gamma} = \gamma^3 \, \vec{v} \cdot \dot{\vec{v}}$ and the identity $1 + \gamma^2 \, v^2 \equiv \gamma^2$ : \begin{gather} \gamma^2 (\rho + p) \frac{d \vec{v}}{dt} + \gamma \, (\rho + p) \, \gamma^3 (\vec{v} \cdot \dot{\vec{v}}) \, \vec{v} = -\, \vec{\nabla} \, p - \gamma^2 \, \vec{v} \, \frac{dp}{dt} \tag{8} \\[12pt] \gamma^4 (\rho + p)(\vec{v} \cdot \dot{\vec{v}}) = -\, \vec{v} \cdot \vec{\nabla} \, p - \gamma^2 \, v^2 \, \frac{d p}{dt}. \tag{9} \end{gather} Substitute (9) into the second term of the left part of equation (8). After some algebraic simplification and using \begin{equation}\tag{10} \frac{d p}{dt} = \frac{\partial p}{\partial t} + \vec{v} \cdot \vec{\nabla} \, p, \end{equation} you should get your equation : \begin{equation}\tag{11} \gamma^2 (\rho + p) \frac{d \vec{v}}{dt} = -\, \vec{\nabla} \, p - \vec{v} \, \frac{\partial p}{\partial t}. \end{equation} That's the relativistic Euler equation for a perfect fluid. 
Usually : $p \ll \rho$ and $\gamma \approx 1$ for a slowly moving fluid. The last term should be negligible.
{ "domain": "physics.stackexchange", "id": 54239, "tags": "homework-and-exercises, energy-conservation, momentum, tensor-calculus, stress-energy-momentum-tensor" }
The Double Slit Experiment and the changing of electron behaviour
Question: As you will all know, when one tries to detect which slit an electron has gone through with close-up observation, it changes from behaving like a wave and producing an interference pattern to behaving like a particle and producing two lines on the screen. Is there a full explanation as to why the electron changes its behaviour when observed? Answer: The (never ending!) confusion about the double slit experiment exists because people don't understand what an electron is. It isn't a particle, and it isn't a wave - it's an excitation in a quantum field. The electron can interact with its environment in ways that make it look like a particle, and it can also interact in ways that look like a wave, and whenever you interact with an electron you change its state. So if you interact with an electron in a "particle like" way you change it so that future interactions are also going to return "particle like" results. In the double slit experiment it doesn't make sense to ask which slit the electron went through, because it isn't a particle and didn't go through one slit. Arguably it doesn't even make sense to say it went through both slits, because that statement is still influenced by "particle like" thinking, but possibly this is getting excessively philosophical. Anyhow, you specifically asked "Is there a full explanation as to why the electron changes its behaviour when observed?" and the answer is "yes". In this context "observing" means "interacting with", and when you interact with an electron you change its state, and therefore you change the way it can interact with the slits. If you interact with the electron in such a way as to localise it, i.e. pinning it down to one position, then this change will alter its subsequent interaction with the slits and prevent the interference pattern forming. In principle you can choose a specific interaction, e.g. having it pass through a gas and ionise the gas molecules, and you can describe its interactions with a gas molecule and then the slits in a mathematically rigorous way. Well, you might be able to - sadly this level of detail is beyond my skills :-)
{ "domain": "physics.stackexchange", "id": 5847, "tags": "quantum-mechanics, double-slit-experiment" }
Can atomic partial charges be measured in molecules experimentally?
Question: Can we measure atomic partial charges in molecules experimentally? The charge of isolated ions can be measured, but when atoms are part of a molecule, the case is much more difficult. We do not really know where one atom starts and where another one stops (the atomic radii are diffuse). So I am inclined to say that partial charges must be based on theoretical concepts, such as the Mulliken population analysis (or a range of other methods), or be derived (estimated) from measurables such as dipole moment. However, is it possible to probe molecules at the atomic level, using STM, TEM, or some other electron probing technique, and get the partial charges that way, perhaps based on electron-electron repulsions? Still, it seems to me that the border between the atoms needs to be properly defined in order to get the partial charges. According to the Wikipedia page "Partial charge", several experimental techniques can be used to estimate the partial charge, e.g. XPS, NMR, EPR, UV/vis, and more. But do these techniques measure something which is a direct result of the effect of partial charge, or is some other physical observable measured, from which the partial charges are "guessed"? For example, using XPS to get the kinetic energy of the photoelectron seems like a "direct" measurement (although I know that what the detector sees is not really the kinetic energy, but something with which the kinetic energy correlates: calibration is needed). However, the idea of partial charge seems to be somewhat vague. Numerical values of partial charges depend on the definition of where one atom starts and where another ends, which is an abstract concept: this border cannot be observed or measured. Another issue that comes to mind is that the partial charge is not necessarily uniformly distributed around each atom, but will depend on bonds, lone pairs, interactions with other molecules, etc. Answer: Let me put it this way. There are no atoms in molecules. 
There is just a continuous cloud of electron density, and as you correctly pointed out, we have no way to pinpoint where one atom ends and another starts. Or rather, we have many ways to do that, all slightly different and all inherently arbitrary. (My personal favorite is the concept of Bader charges, but that's a matter of taste.) So yes, the very idea of partial atomic charges is unavoidably vague.
{ "domain": "chemistry.stackexchange", "id": 4534, "tags": "experimental-chemistry" }
Electromagnetic field in the Casimir effect
Question: So, I read, that the Casimir effect arises from the ground state of the electromagnetic field. But I don't understand where the electromagnetic field in the Casimir effect comes from, since we are considering neutral metallic plates. Answer: The Casimir force has a mystique that it doesn't deserve. It's just an ordinary electromagnetic force between charged particles. If the plates were made of uncharged particles, there would be no Casimir force. But metals are made of charged electrons and nuclei. At large distances the electromagnetic field of the positive and negative charges cancels almost perfectly, but at small distances the separation between the charges is relatively large and there are detectable electromagnetic effects.
{ "domain": "physics.stackexchange", "id": 71881, "tags": "condensed-matter, casimir-effect" }
1's and 2's complement of 0
Question: In Morris Mano's book on Digital Circuits, Edition 3, it is defined that the r's complement for N=0 is 0, and the (r-1)'s complement is not even defined. But let's say r = base = 2. Then the 2's complement of 0, based on his formula $r^n - N$ where $n$ is the number of digits, is $2^1 - 0 = 2$. This also makes sense because $(0)'' = (0)' + 1 \to 1 + 1 \to 10$ in binary, which is $2$ in decimal. Now coming to 1's complement, the formula is $r^n - r^{-m} - N$ where $n$ is the same as in 2's complement and $m$ is the number of digits after the decimal point. So $2^1 - 2^0 - 0$ gives $2 - 1 - 0 \to 1$, which is also correct. Does that mean that the first line in the book is incorrect? For your information, I am reading the edition of the book addressed to Indian students. Answer: Zero in 1's complement is the same as zero in normal binary, 0000 0000, but there's a weird quirk of 1's complement. In 1's complement you can get the negative version of a number by flipping all of its bits, so that means you can flip all the bits in zero to get 1111 1111, which has a value of -0. 2's complement avoids this problem by changing how you convert a number to negative. In 2's complement you make a number negative by flipping all the bits and then adding 1. Here's what happens when you try to convert zero to a negative number with 2's complement: you start with 0000 0000, you flip all the bits to get 1111 1111, then you add 1, which overflows it back to 0000 0000. So to answer your question: the 1's complement of 0 is both 0000 0000 and 1111 1111; the 2's complement of 0 is 0000 0000.
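The ±0 quirk the answer describes can be sketched in a few lines of Python (a minimal illustration for 8-bit words; the bit width and function names are my own choices, not from the book):

```python
def ones_complement(n, bits=8):
    # (r-1)'s complement for r = 2: flip every bit of an n-bit word
    return n ^ ((1 << bits) - 1)

def twos_complement(n, bits=8):
    # r's complement for r = 2: flip the bits, add 1, discard the overflow
    return (ones_complement(n, bits) + 1) % (1 << bits)

print(format(ones_complement(0), "08b"))  # 11111111, the "negative zero"
print(format(twos_complement(0), "08b"))  # 00000000, zero maps back to itself
```

This matches the answer: one's complement has two representations of zero, while two's complement has only one.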
{ "domain": "cs.stackexchange", "id": 21542, "tags": "binary" }
Predicting orbital trajectory of an object
Question: I'm new here, I'm not a physics student, I'm a programmer and a soon to be computer science student (I come in peace). I'm developing a space simulation thing and I need your help. I know the velocity vector of a body in a 3d space and the distance between that body and a significantly bigger one. The first body is supposed to orbit the second one. I want to know the equation of the ellipse, the Euler Angle of the ellipse with respect to the reference frame and the argument of peri-apsis of the object. I thought about just adding a gravitational force to the body, but since it's a simulation, the frame duration being greater than zero causes error accumulation. What I tried to do is to find as many data as possible from the stuff I know, like I found an expression of the semi-major axis and the speed at the apo-apsis using the vis-viva equation, but not much more really. Answer: Here is an algorithm you can implement. Let the small orbiting body have mass $m$. You know its initial position and velocity vectors, $\mathbf{r}_0$ and $\mathbf{v}_0$, in an arbitrary Cartesian coordinate system. I’ll assume that you want to treat the “significantly heavier” body with mass $M$ as being stationary and consider it to be the origin of your coordinate system. So in the formulas below I have assumed that $M\gg m$. From the initial position and velocity you know the (constant) angular momentum, $$\mathbf{L}=m\mathbf{r}_0\times\mathbf{v}_0\tag1.$$ This vector is perpendicular to the plane of the orbit, so now you know the orbital plane. From the vis-viva equation $$v^2=GM\left(\frac2r-\frac1a\right)\tag2$$ you can use $\mathbf{r}_0$ and $\mathbf{v}_0$ to find the semimajor axis $a$ of the ellipse in this plane. 
From the semimajor axis you can find the (constant) energy using $$E=-\frac{GMm}{2a}\tag3.$$ From the energy and the angular momentum you can find the orbital eccentricity $$e=\sqrt{1+\frac{2EL^2}{G^2M^2m^3}}\tag4.$$ At this point you know the plane of the ellipse, the size of the ellipse, and the eccentricity of the ellipse. The remaining unknown is the orientation of the ellipse in the plane. To find this, use the orbital equation $$r=a\frac{1-e^2}{1+e\cos\theta}\tag5$$ where $\theta$ is an angular coordinate around the axis defined by $\mathbf L$, and the expression for the angular momentum in polar coordinates in the orbital plane, $$L=mr^2\dot\theta\tag6.$$ These equations give the velocity components in terms of the angle around the ellipse as $$v_r=\dot{r}=\frac{L}{ma}\frac{e\sin\theta}{1-e^2}\tag7$$ and $$v_\theta=r\dot\theta=\frac{L}{ma}\frac{1+e\cos\theta}{1-e^2}\tag8.$$ You know one velocity, $\mathbf{v}_0$, and can find its $r$ and $\theta$ components. The value of $\theta$ that satisfies both (7) and (8) -- call it $\theta_0$ -- tells you where along the ellipse you’re starting. The major axis of the ellipse is in the direction where $\theta=0$. ADDENDUM: Here is a complete numerical example! 
Use units in which $GM=m=1$, and let the initial position be $$\mathbf{r}_0=(1,2,3)$$ and the initial velocity be $$\mathbf{v}_0=\left(\frac12,\frac13,\frac14\right).$$ One finds $$\mathbf{L}=\left(-\frac12,\frac54,-\frac23\right),$$ $$E=\frac{427-144\sqrt{14}}{2016}\approx -0.0554557,$$ and $$e=\sqrt{1-\frac{325(144\sqrt{14}-427)}{145152}}\approx 0.86584.$$ The initial radial velocity is $$v_{r,0}=\mathbf{v}_0\cdot\frac{\mathbf{r}_0}{|\mathbf{r}_0|}=\frac{23}{12\sqrt{14}}\approx 0.512251$$ and the initial velocity along the $\hat\theta$ direction is $$v_{\theta,0}=\sqrt{\mathbf{v}_0^2-v_{r,0}^2}=\frac{5}{12}\sqrt{\frac{13}{14}}\approx 0.40151.$$ Solving for $\theta_0$, one finds $$\theta_0=\pi-\tan^{-1}\frac{115\sqrt{13\sqrt{395929+93600\sqrt{14}}}}{184679}\approx 117.277\text{ degrees}.$$ (I used Mathematica.) To represent the orbit, we introduce some useful unit vectors. A unit vector perpendicular to the orbit is $$\hat{\mathbf{z}}=\frac{\mathbf{L}}{|\mathbf{L}|}=\frac{1}{\sqrt{13}}\left(-\frac65,3,-\frac85\right)\approx (-0.33282,0.83205,-0.44376).$$ A perpendicular unit vector pointing toward the initial position is $$\hat{\mathbf{x}}=\frac{\mathbf{r}_0}{|\mathbf{r}_0|}=\frac{1}{\sqrt{14}}(1,2,3)\approx (0.267261,0.534522,0.801784).$$ A third perpendicular unit vector is $$\hat{\mathbf{y}}=\hat{\mathbf{z}}\times\hat{\mathbf{x}}=\frac{1}{\sqrt{182}}\left(\frac{61}{5},2,-\frac{27}{5}\right)\approx (0.904324,0.14825,-0.400275).$$ We rotate around $\hat{\mathbf{z}}$ by $\theta_0$ to make new unit vectors where $\hat{\mathbf{x}}'$ points along the major axis: $$\hat{\mathbf{x}}'=\hat{\mathbf{x}}\cos{\theta_0}-\hat{\mathbf{y}}\sin{\theta_0}\approx (-0.926249,-0.376731,-0.0116847),$$ $$\hat{\mathbf{y}}'=\hat{\mathbf{x}}\sin{\theta_0}+\hat{\mathbf{y}}\cos{\theta_0}\approx (-0.176901,0.407143,0.896019).$$ The orbit is then $$\begin{align} \mathbf{r}&=r(\hat{\mathbf{x}}'\cos\theta+\hat{\mathbf{y}}'\sin\theta)\\ 
&=a\frac{1-e^2}{1+e\cos\theta}(\hat{\mathbf{x}}'\cos\theta+\hat{\mathbf{y}}'\sin\theta)\\ &=\left(\frac{-2.09049\cos\theta-0.399255\sin\theta}{1+0.86584\cos\theta},\frac{-0.850262\cos\theta+0.9189\sin\theta}{1+0.86584\cos\theta},\frac{-0.0263717\cos\theta+2.02238\sin\theta}{1+0.86584\cos\theta}\right) \end{align}.$$ I leave it to you to verify, as I did, that this satisfies the initial conditions when $\theta=\theta_0$. As further sanity checks, one finds that when $\theta=0$ the position is $(-1.1204,-0.455699,-0.0141339)$ and when $\theta=\pi$ it is $(15.5821,6.33768,0.196569)$. The former is periapsis, at a distance of $1.20961$, and the latter is apoapsis, at a distance of $16.8228$. These sum to twice the semimajor axis, $2a\approx 18.0324$, and are individually $a(1-e)$ and $a(1+e)$ with $e\approx 0.86584$. Note: All of this gets you the correct ellipse in 3D space, parameterized by the angle-around-the-orbit $\theta$. It doesn’t tell you where the object is at a given time, which adds more complications because $\theta$ isn’t proportional to $t$. For information about the time-dependence, see the Wikipedia article “Kepler’s equation”.
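Since the question comes from a programmer, here is a short numeric sketch of the first steps of the algorithm (angular momentum, energy, semi-major axis, eccentricity) in Python. The function name and the unit convention $GM = m = 1$ are my own choices; it reproduces the worked numbers for $\mathbf{r}_0=(1,2,3)$, $\mathbf{v}_0=(1/2,1/3,1/4)$:

```python
import math

def orbit_elements(r0, v0, GM=1.0):
    # Specific angular momentum h = r0 x v0 (eq. 1 per unit mass)
    h = (r0[1] * v0[2] - r0[2] * v0[1],
         r0[2] * v0[0] - r0[0] * v0[2],
         r0[0] * v0[1] - r0[1] * v0[0])
    h2 = sum(c * c for c in h)
    r = math.sqrt(sum(c * c for c in r0))
    v2 = sum(c * c for c in v0)
    E = v2 / 2 - GM / r                     # specific orbital energy (eqs. 2-3)
    a = -GM / (2 * E)                       # semi-major axis from vis-viva
    e = math.sqrt(1 + 2 * E * h2 / GM**2)   # eccentricity (eq. 4 with m = 1)
    return a, e

a, e = orbit_elements((1, 2, 3), (0.5, 1/3, 0.25))
print(2 * a, e)   # ~18.032 and ~0.8658, matching the worked example's 2a and e
```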
{ "domain": "physics.stackexchange", "id": 65597, "tags": "newtonian-mechanics, newtonian-gravity, kinematics, orbital-motion, simulations" }
Iodate ion and Octet rule
Question: I'm trying to figure out why the oxidation state of iodine in the iodate ion is +5. The iodine atom has 7 electrons in its outermost shell (comprised of s and p sub-shells). Two oxygen atoms receive 2 electrons from iodine to obtain full valence shells (s2 p6) and only one electron goes to the other oxygen atom. This leaves iodine with a full sub-shell s2. Is this a deviation from the Octet rule? And how is the charge on the iodate ion negative when the third oxygen requires an electron? Answer: First of all, I strongly advise you to go through the following link, as it will make my answer more clear to you. Iodate ion "Two oxygen atoms receive 2 electrons from iodine to obtain full valence shells (s2 p6) and only one electron goes to the other oxygen atom." The bond between the iodine atom and oxygen atom is a covalent bond. The electrons from the iodine atom are not given but shared. "This leaves iodine with a full sub-shell s2. Is this a deviation from the Octet rule?" The two double-bonded oxygens (refer to the structure in my link) share 4 electrons in all, and the third oxygen shares a single electron. Iodine hence has more electrons than the 8 specified by the octet rule. This is why it violates the rule. "And how is the charge on the iodate ion negative when the third oxygen requires an electron?" The third oxygen shares a single electron and hence has a negative charge on it, which makes the overall molecule an anion with charge -1.
{ "domain": "chemistry.stackexchange", "id": 2801, "tags": "oxidation-state" }
Question is about a paper "A Block-sorting Lossless Data Compression Algorithm" by M. Burrows and D.J. Wheeler
Question: In the paper A Block-sorting Lossless Data Compression Algorithm by M. Burrows and D.J. Wheeler Link. On page 5, please explain this line: If the original string $S$ is of the form $Z^p$ for some substring $Z$ and some $p > 1$, then the sequence $T^i[I]$ for $i = 0, \dots, N - 1$ will also be of the form $Z'^p$ for some subsequence $Z'$. Answer: For example, the string "abc" becomes "bca" after the BWT. This means that "abcabc" will transform into "bbccaa", i.e. an "external repetition" of the entire string leads to an "internal repetition" of each character in the transformed string.
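A minimal Python sketch of the transform (my own illustration, using the plain sorted-rotations convention with no end-of-string sentinel; the exact output strings depend on the convention, but the character-repetition property from the paper shows up either way):

```python
def bwt(s):
    # Sort all cyclic rotations of s and read off the last column
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

print(bwt("abc"), bwt("abcabc"))
# With this convention "abc" -> "cab" and "abcabc" -> "ccaabb":
# each character of the transform of Z appears p times in the transform of Z^p.
```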
{ "domain": "cs.stackexchange", "id": 13766, "tags": "algorithms, data-compression, research" }
Why aren't coordinates induced vector fields always Killing fields?
Question: We have that $$ L_K g_{\mu\nu}=\nabla_\mu K_\nu + \nabla_\nu K_\mu$$ A vector field $K$ is a Killing field if $ L_K g_{\mu\nu}=0$, but consider the coordinate induced vector field $\partial_\alpha$, we have $$(\partial_\alpha)_\nu=g_{\lambda\nu}(\partial_\alpha)^\lambda= g_{\lambda\nu}\delta^\lambda_{\,\,\alpha}=g_{\alpha\nu}$$ Thus by the compatibility condition of the Levi Civita connection, i.e. $g_{ij;k}=0$ for all $i,j,k$ $$L_{\partial_\alpha}g_{\mu\nu}=0 $$ for all $\alpha$. This is of course nonsensical, because it would imply that velocities are always conserved in every direction, hence there's never acceleration... where is the huge fault in my reasoning? EDIT: it gets worse: any non vanishing vector field on a smooth manifold can be expressed as $\partial_1$ in a suitable chart Answer: A vector field K is a Killing field if the Lie derivative with respect to K of the metric g vanishes. In your demonstration you assume as vector the partial derivative and then in the R.H.S. of the equation you show up with a tensor, i.e. the metric. It is inconsistent. What your demonstration defines are the covariant components of the partial derivative as vector, not the components of the metric tensor. So, the compatibility condition is not applicable. Note: Given a metric $g_{\mu \nu}$, a Killing field $K = \partial_\lambda$ exists if all of the components of the metric are independent of the coordinate $x^\lambda$. However there may be hidden symmetries not so manifest. Further: The metric compatibility is not applicable because the covariant derivative of a vector (one index) differs from the covariant derivative of a two index tensor. $\nabla_\mu V^\nu = \partial_\mu V^\nu + \Gamma^\nu_{\mu \sigma} V^\sigma$ Eq. (1) covariant derivative of a vector $\nabla_\mu T^{\nu \lambda} = \partial_\mu T^{\nu \lambda} + \Gamma^\nu_{\mu \sigma} T^{\sigma \lambda} + \Gamma^\lambda_{\mu \sigma} T^{\nu \sigma}$ Eq. 
(2) covariant derivative of a two index tensor The structure of the covariant derivative is different. Even if the components of the partial derivative vector are formally the same as the components of the metric tensor, they are worked out according to Eq. (1) and not to Eq. (2), which would assure the compatibility. (Just note that I assumed both vector and tensor as described in contravariant components. If you have in covariant components there is a $-$ sign in front of the $\Gamma's$ and the indices change up/down).
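A compact way to state where the question's reasoning slips, consistent with the Note above (my own summary in LaTeX): for a coordinate-induced vector field the Lie derivative acts on the metric components by a plain partial derivative,

$$\mathcal{L}_{\partial_\lambda}\, g_{\mu\nu} = \delta^\sigma_{\ \lambda}\,\partial_\sigma g_{\mu\nu} + g_{\sigma\nu}\,\partial_\mu \delta^\sigma_{\ \lambda} + g_{\mu\sigma}\,\partial_\nu \delta^\sigma_{\ \lambda} = \partial_\lambda\, g_{\mu\nu},$$

which vanishes only when the components are independent of $x^\lambda$. Metric compatibility, $\nabla_\lambda\, g_{\mu\nu}=0$, is instead a statement about the covariant derivative of the tensor $g$ and holds identically for the Levi-Civita connection; conflating the two is the fault in the original argument.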
{ "domain": "physics.stackexchange", "id": 53965, "tags": "general-relativity, conservation-laws, symmetry, vector-fields" }
Why do the Andromeda Galaxy images from NASA have some sort of color shifting in it
Question: I was checking the images from this website of the Andromeda Galaxy. However, I noticed that low res images look good, however, the 1.7Gb images has some sort of color shifting in it: Is there any simple explanation for this? Or is it just normal and the compression to lower res removes it? Update: Here is the zoomed version on one of the red region (rotated) Without annotations: Answer: I think they are probably processing artefacts, because there are other artefacts there, and because the lines are parallel. I sent it through RGB saturation and found other artefacts of linear star zones at the seams of some of the images. The visible color version is only a small element of the PHAT study and perhaps it is not scientifically accurate.
{ "domain": "astronomy.stackexchange", "id": 6441, "tags": "galaxy, image-processing" }
Directed graph with bounded in-deg can be partitioned in a balanced way
Question: I want to prove that for all $n$, there exists a constant $c(n)$ such that if $G=(V,E)$ is a directed graph with in-degree bounded by $n$, it is possible to partition the set of vertices $V$ into two sets $V_1, V_2$ in a way that for each vertex $v\in V_1$ (and the same for every vertex in $V_2$) , if the vertex $v$ has out-degree $d(v) \geq c(n)$ in $G$, then its out degree in the corresponding subgraph will be between $\frac{1}{3}d(v)$ and $\frac{2}{3}d(v)$. I suspect that it is the kind of statements which can be proved by the probabilistic method, using the local lemma (since there are some similar theorems proven in this way). However I couldn't figure out how to do that, will be glad for help. (I have also asked this question in math.stackexchange a while ago, but since this is a research question, I believe this forum is more appropriate). Answer: This is a special version of the Beck-Fiala theorem. Define a set system on the vertices whose sets are the out-neighborhoods of the vertices. The in-degree condition will give that every element is in at most $n$ sets. The theorem states that in this case the elements can be colored with red and blue such that the difference of the red and blue elements in any set is at most $2n-1$. This means that in your digraph the out-degree of each vertex $v$ in the corresponding subgraph is between $d(v)/2-n$ and $d(v)/2+n$, so for your specific question $c(n)\ge 6n$ is sufficient.
{ "domain": "cstheory.stackexchange", "id": 4110, "tags": "graph-theory, co.combinatorics, pr.probability, randomized-algorithms, combinatorics" }
How can you tell between gas, liquid and crystal in a microscopic way?
Question: I know that molecules in an ideal gas can move freely, and molecules in a crystal are bonded to some specific location. But can I describe this in a more quantitative way? Do gas molecules have more degrees of freedom? Answer: Yes, by examining the statistical distribution of distances between molecules and of the angles separating two nearby molecules about a third. In general, correlations of 2nd and higher order of the positions of molecules relative to each other. For a gas, there are few molecules close together, but some due to molecules colliding and almost colliding. At far distances, it'll be more or less uniform. There'd be nothing of interest in angular correlations. For a liquid, there'd be none closer than about the size of a molecule, but at that distance many. There'd be mushy peaks in the distribution of distances, and just uniform mush beyond a few molecule-sizes away. There'd be strong angular correlations as nearby molecules try to pack tightly, with fleeting gatherings of several molecules in an approximate crystal, but always jiggling, making the angular distribution mushy. For a crystalline solid, every molecule near or far is at a precise distance, and at precise angles with respect to any reference directions. The distance and angular distributions would look like bunches of Dirac delta functions, slightly smoothed out due to thermal motion, phonons, impurities and so on. Radial distributions explained by professors at Oxford, with plots. Comparison of radial distribution functions, with plots.
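The pair-distance criterion can be sketched numerically (a toy Python illustration of my own construction: a perfect cubic lattice stands in for the crystal and uniform random points for the gas, with no thermal smearing):

```python
import itertools, math, random

def pair_distances(points):
    # All pairwise distances: the raw input of a radial distribution function
    return [math.dist(p, q) for p, q in itertools.combinations(points, 2)]

# "Crystal": a 4x4x4 simple cubic lattice with unit spacing
crystal = [(i, j, k) for i in range(4) for j in range(4) for k in range(4)]

# "Gas": the same number of points placed uniformly at random in a box
random.seed(0)
gas = [tuple(3 * random.random() for _ in range(3)) for _ in range(64)]

# In the lattice every squared distance is an integer: sharp, discrete peaks.
crystal_peaks = {round(d * d) for d in pair_distances(crystal)}
# In the gas essentially every pair distance differs: a smooth continuum.
gas_values = {round(d, 6) for d in pair_distances(gas)}
print(len(crystal_peaks), len(gas_values))  # few discrete peaks vs ~2016 values
```

The handful of Dirac-like peaks versus the near-continuum of distances is exactly the distinction the answer describes.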
{ "domain": "physics.stackexchange", "id": 10095, "tags": "thermodynamics" }
Activating items in some randomized arrays
Question: I've got a list with about 120 items in IE and I have some items I want to activate. I compare, build and randomize some Arrays on the fly. The for loop in the run() function is really slow, and i tried to add a setTimeout(), like i've read somewhere on stackoverflow. It does not seem to improve the performance, and if the requested list of items to be activated is too large, IE starts to give the slow script error. Can i optimize this to something that'll work? for(index = 0; index < 120; index++){ wrapper.tempSlot.push(index); } Array.prototype.shuffle = function shuffle(){ var tempSlot; var randomNumber; for(var i =0; i != this.length; i++){ randomNumber = Math.floor(Math.random() * this.length); tempSlot = this[i]; this[i] = this[randomNumber]; this[randomNumber] = tempSlot; } } Array.prototype.compare = function(testArr) { if (this.length != testArr.length) return false; for (var i = 0; i < testArr.length; i++) { if (this[i].compare) { if (!this[i].compare(testArr[i])) return false; } if (this[i] !== testArr[i]) return false; } return true; } function checkAgainstEl(val){ val = $($item.select).eq(val).data('personid') return val in oc([array, with, to, be, activated, numbers]); } var index = 0; var length = wall.globals.highlight.selection.length; var run = function(){ for(;index < length; index++){ wrapper.tempSlot.shuffle(); var pop = wrapper.tempSlot.pop(); var value = wall.globals.highlight.selection[index]; var newItem = $('#wallItem' + value); wrapper.objectsStr = wrapper.objectsStr + '#wallItem' + value + ', '; while(checkAgainstEl(pop)){ wrapper.tempSlot.shuffle() pop = wrapper.tempSlot.pop(); } beforeItem = $($item.select).eq(pop); newItem.insertBefore(beforeItem); $($container.select).append(beforeItem); if (index + 1 < length && index % 5 == 0) { setTimeout(run, 25); } } } run(); Answer: Interacting with the DOM is one of the slowest operations in JavaScript. 
I'd suggest selecting all your elements before the loop, then referring to the cached elements inside. By the same token, do not perform any DOM manipulation in a loop if you can help it. Since it doesn't look like you can, another approach is to detach all DOM elements you're going to rearrange before the loop, perform the manipulation inside the loop, then re-attach them to the DOM at the end. A reverse while loop would be faster than the for. If you're using a version of jQuery prior to 1.6, the .data() method carries a lot of overhead. In checkAgainstEl(), use $.data($($item.select).get(val), 'personId') instead. It looks like you're re-wrapping jQuery objects in new jQuery objects unnecessarily, and in multiple places. Like in #4. I'm assuming variables prefixed with a $ are jQuery objects; if they are, no need to wrap them in $() again. Your biggest problem is the DOM interaction inside a loop. Cut that out any way you can and you'll be golden.
{ "domain": "codereview.stackexchange", "id": 325, "tags": "javascript, performance, jquery, shuffle" }
Finding Angular Acceleration of rod given radius and angle
Question: A uniform rod is 2.0 m long. The rod is pivoted about a horizontal, frictionless pin through one end. The rod is released from rest at an angle of 30° above the horizontal. What is the angular acceleration of the rod at the instant it is released? I just used $$sin(30) = \frac{9.8}{a_{centripetal}}$$ Then I related $$a_{rad} = a_{centripetal}$$ Is this right? I am looking at University Physics: $$a_{rad} = v^2 / r$$ Where 9.8 is gravity. But I got none of the answers in the multiple choice ... so I must be doing wrong. Also I haven't used the radius. Any suggestions would be helpful... Answer: Always start with a nice clear diagram/sketch of the problem. It all follows from there. Here is a Free Body Diagram I made for you. Then you have (the long detailed way): Sum of the forces on body equals mass times acceleration at the center of gravity. $\sum_i \vec{F}_i = m \vec{a}_C $ $$ A_x = m a_x \\ A_y - m g = m a_y $$ Sum of torques about center of gravity equals moment of inertia times angular acceleration. $\sum_i \left(\vec{M}_i + (\vec{r}_i-\vec{r}_C)\times\vec{F}_i\right) = I_C \vec{\alpha} $ $$ A_x \frac{L}{2} \sin(\theta) - A_y \frac{L}{2} \cos(\theta) = I_C \ddot \theta $$ Acceleration of point A must be zero. $\vec{a}_A = \vec{a}_C + \vec{\alpha}\times(\vec{r}_A-\vec{r}_C) + \vec{\omega}\times(\vec{v}_A-\vec{v}_C) $ $$ a_x + \frac{L}{2} \sin(\theta) \ddot\theta + \frac{L}{2} {\dot\theta}^2 \cos(\theta) =0 \\ a_y - \frac{L}{2} \cos(\theta) \ddot\theta + \frac{L}{2} {\dot\theta}^2 \sin(\theta) =0 $$ Now you can solve for $a_x$, $a_y$ from 3. and use those in 1. to get $A_x$,$A_y$. Finally use 2. to solve for $\ddot\theta$ Or do the shortcut of finding the applied torque on A and applying it to the effective moment of inertia about the pivot $I_A = I_C + m \left(\frac{L}{2}\right)^2 $ to get $$ \ddot\theta = \frac{m g \frac{L}{2} \cos\theta }{ I_C + m \left(\frac{L}{2}\right)^2 } $$
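The shortcut at the end is easy to check numerically (a Python sketch using the standard $I_C = mL^2/12$ for a uniform rod; for this problem's values, $L = 2.0$ m and $\theta = 30°$, it gives roughly $6.4\ \mathrm{rad/s^2}$):

```python
import math

def rod_release_alpha(L, theta_deg, g=9.8):
    # Uniform rod pivoted at one end, released from rest at angle theta
    # above the horizontal; the mass cancels, so any value works.
    m = 1.0
    theta = math.radians(theta_deg)
    I_center = m * L**2 / 12                     # moment of inertia about the CoG
    I_pivot = I_center + m * (L / 2)**2          # parallel-axis shift to the pin
    torque = m * g * (L / 2) * math.cos(theta)   # gravity torque about the pivot
    return torque / I_pivot

print(rod_release_alpha(2.0, 30))  # about 6.37 rad/s^2
```

Algebraically this reduces to $\ddot\theta = 3g\cos\theta/(2L)$, which also shows why the attempted $\sin(30) = 9.8/a_\text{centripetal}$ route cannot work: the radius does matter.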
{ "domain": "physics.stackexchange", "id": 5137, "tags": "homework-and-exercises, rotational-dynamics, rotational-kinematics, angular-velocity" }
Enumeration given a product of primes
Question: Given a list of (small) primes $ (p_0, p_1, \dots, p_{n-1})$, is there an (efficient) algorithm to enumerate, in order, all numbers that can be expressed as $ \prod_{k=0}^{n-1} p_k^{e_k} $, where $e_k \in \mathbb{Z}, e_k \ge 0 $? What about in a certain interval, potentially at an exponential starting point? For example, if we had the set $(2,3,5)$, the first few numbers would be $(2, 3, 2^2, 5, 2 \cdot 3, 2^3, 3^2, 2 \cdot 5, \dots )$. Is there an algorithm to efficiently enumerate all the numbers not expressible as a product of powers of primes from the set? How about in an interval? Note: I just saw the Polymath paper on deterministic prime finding in an interval ( Deterministic methods to find primes ) and that's what inspired this question. I don't know if it's important that the set be a list of primes, but I'll keep it in there just in case. EDIT: I was unclear about what I meant by 'efficient'. Let me try making it more precise: Given a list of $n$ primes $(p_0, p_1, \dots p_{n-1})$ and a bound, $B$, is it possible to find, in polynomial time with respect to $\lg(B)$ and $n$, the next integer $x$ such that $x > B$ and $x$ is expressible as a product of powers of primes from the list? Answer: You can generalize the standard algorithm for enumerating Hamming numbers in increasing order by merging with a min-heap. The Hamming numbers are the numbers expressible with primes 2, 3 and 5. If you can enumerate the expressible numbers in order, the non-expressible numbers are easily found in the successive gaps. To solve the problem for an interval [i, j], find the greatest expressible number less than i and the least expressible number greater than j, and use that to initialize and terminate the algorithm. Edit: I forgot to mention how you might efficiently find the bounds for the interval problem. Modified bisection should work. You have an exponent sequence as your current guess. 
For each individual exponent split the difference as in bisection, yielding n variants, and then find the numerically smallest of the variants (or the largest, depending on whether you're searching for lower or upper bounds), and use that as your next guess. I don't know if it's important that the set be a list of primes, but I'll keep it in there just in case. No, it doesn't matter. If you use the algorithm I suggested, the effect of allowing composites is that you generate duplicates when unique factorization fails; this happens when two or more numbers in the generating set aren't relatively prime. For generators 2 and 4, the sequence goes 1 = 2^0 4^0, 2 = 2^1 4^0, 4 = 2^2 4^0, 4 = 2^0 4^1, ... Any duplicates will occur in contiguous runs, so they are easily filtered out without keeping a black list. You could also kill them upon insertion in the min-heap.
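To make the heap-merge idea concrete, here is a sketch in Python (the function names are mine; `next_expressible` is a brute-force version, not the polynomial-time bisection described above):

```python
import heapq

def smooth_numbers(primes):
    """Yield, in increasing order, every product of powers of the given
    primes (starting from 1, the empty product), merging with a min-heap."""
    heap = [1]
    seen = {1}
    while True:
        n = heapq.heappop(heap)
        yield n
        for p in primes:
            if n * p not in seen:  # skip duplicates like 12 = 2*6 = 3*4
                seen.add(n * p)
                heapq.heappush(heap, n * p)

def next_expressible(primes, bound):
    """Least expressible number strictly greater than `bound` (brute force)."""
    for n in smooth_numbers(primes):
        if n > bound:
            return n
```

With primes (2, 3, 5) this generates the Hamming numbers 1, 2, 3, 4, 5, 6, 8, 9, 10, ...; the non-expressible numbers are read off from the gaps (7, 11, 13, ...).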
{ "domain": "cstheory.stackexchange", "id": 269, "tags": "ds.algorithms, co.combinatorics" }
$\frac{1}{\sqrt{2}}$ (|Independent particle Model⟩+ |Strong Interaction Model⟩)?
Question: What is an adequate way to understand these two models simultaneously? One has the underlying assumption that matter is saturated and has the merit of being able to come up with an accurate formula for the Binding Energy (the SEMF/Bethe-Weizsäcker formula) and the other can explain the magic numbers by building up energy levels in a shell structure similar to atomic orbitals. However, the underlying assumptions appear to be completely contradictory. I am looking for a better perspective on how to understand these two simultaneously. Answer: Here comes an experimentalist point of view. These models are approximations of the real theoretical description of matter in aggregate, which at the moment is not a simple one. To truly describe nuclear forces in the language of QCD, our current knowledge of the theory of strong interactions, is a formidable many-body task. In the calculational methods that exist now you would need thousands of Feynman diagrams calculated for each specific question. The models, fitted to real data, allow one to describe the behavior of nuclei and have some predictive power over it. Qualitatively one can think of the drop model as an approximation to the continuity of force interchanges with gluon exchanges between the quarks of the nuclei within the nuclear bag. At high energies one talks of quark-gluon plasma. This is the liquid phase analogy. The shell model works as a total approximation to a potential well, similar to the way the Bohr model worked for describing the atom. There is a collective potential well there, created by all the gluon exchanges, and it is approximated in this simplified form using data to be able to predict further data. It works.
{ "domain": "physics.stackexchange", "id": 909, "tags": "nuclear-physics" }
Is it possible to write an HTML compiler with no mutable state?
Question: That's probably a vague question but allow me to try and give an example: My compiler does transformations on HTML (from HTML to HTML). It scans a flattened DOM tree, and relies on lookbehinds (on elements pushed onto a stack) to decide what transformation to apply. I can give more detail if necessary but I don't want to lose the reader. Can such logic be alternatively implemented with no mutable state? I have become a big believer in functional programming and have made it a point to make my code as functional-style as possible. I don't like loops that perform actions based on the content of a previous iteration, by saving the information from a previous iteration in stacks or booleans. I need to augment the functionality of this routine and will probably tip the code from being "just about understandable" to "only the author will understand this, everyone else don't touch". But I have little background in compiler theory & development so am wondering if the problem domain necessitates mutable state in practice. Answer: In theory, any program can be treated as functional by treating every operation as "replace the world with a world in which this piece of state is replaced by a different value". In practice ... this actually works pretty well. Your code will look something like: root_node = parse_html() for each operation: root_node = root_node.replace_using_operation(operation) For a leaf or node that you know won't be affected by this operation, replace_using_operation looks like: fn replace_using_operation(self, op): return self For the typical non-leaf case, replace_using_operation will look like: fn replace_using_operation(self, op): children = new List<Node>(); # pedantically, this is not "functional", but realistically it is. # if your language has generators or an iterator map function you can # make it purely functional, but most people find this easier to read.
for each ch in self.children: children.append(ch.replace_using_operation(op)) new_self = new Node(children) delete self return new_self Of course, depending on what knowledge you have of the operation, you might want to perform different operations on some children. Also note that you're perfectly free to pass additional arguments (including snapshots of other branches of the tree - just remember you don't own them) or to have entirely separate replace_* functions instead of passing operation as an argument. This code is only a very rough skeleton, after all.
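As a minimal illustration of this skeleton in Python — the node and operation types are invented for the example, and immutability replaces the `delete self` bookkeeping:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Node:
    tag: str
    children: Tuple["Node", ...] = ()

def replace_using_operation(node, op):
    """Return a new tree with `op` applied bottom-up; the input is never mutated."""
    new_children = tuple(replace_using_operation(ch, op) for ch in node.children)
    return op(Node(node.tag, new_children))

# Example operation: rewrite every <b> element to <strong>.
def b_to_strong(node):
    return Node("strong", node.children) if node.tag == "b" else node
```

Because nodes are frozen, an operation returns a brand-new tree and the original remains valid, which is exactly the "replace the world" view described above.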
{ "domain": "cs.stackexchange", "id": 3617, "tags": "compilers, pushdown-automata, functional-programming" }
ValueError:Input 0 of layer sequential is incompatible with the layer
Question: I represented each row in my dataset as 15552 cells holding a colored spectrogram image (72*72*3), which represents the audio features: 72*72 is the size of the spectrogram image and 3 refers to the 3 RGB channels. Each colored pixel is represented in 3 cells side by side, the next 3 cells hold the next colored pixel, and so on. When I create my model I set the input shape as `input_shape=(72, 72, 3)` to convert the flattened input back to an image so I can use it in a CNN model. Here is the code: from google.colab import files uploaded = files.upload() #Let’s treat the Readername column as the output (Y). y= dset.readear #Simultaneously we will have to drop the column from dataset to form the input vector. x=dset.drop('readear',axis=1) # define one hot encoding encoder = OneHotEncoder(sparse=False) # transform data y= encoder.fit_transform(y.values.reshape(-1,1)) #the split ratio of 80:20. The 20% testing data set is represented by the 0.2 at the end.
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2) class_names=['A','G','D','u','E','h','k','t','b','m','i','s','fa','n'] x_train = np.asarray(x_train).astype('float32') y_train = np.asarray(y_train).astype('float32') #the model model = models.Sequential() model.add(layers.Conv2D(72, (3, 3), activation='relu', input_shape=(72, 72, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(144, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(144, (3, 3), activation='relu')) model.add(layers.Flatten()) model.add(layers.Dense(144, activation='relu')) model.add(layers.Dense(14)) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) history = model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test)) But the fit function always gives me the following error: ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: (None, 15552) Can anyone tell me how I can fix it? Answer: Each row in your dataset is of shape (15552), whereas you are telling your model that the expected input has a shape of (72, 72, 3). Reshape the data before passing it to your model to make sure that the actual input shape and the input shape defined using the input_shape argument are the same. You can reshape the input using numpy.reshape: import numpy as np np.reshape(x, (-1, 72, 72, 3))
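To see the fix on stand-in data (shapes only, since the real dataset isn't shown):

```python
import numpy as np

# 100 flattened rows, each holding 72*72*3 = 15552 values, as in the question.
x_flat = np.zeros((100, 15552), dtype="float32")

# Restore the image structure the Conv2D layer expects.
x_images = x_flat.reshape(-1, 72, 72, 3)

print(x_images.shape)  # (100, 72, 72, 3)
```

The same reshape has to be applied to both x_train and x_test before calling model.fit.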
{ "domain": "datascience.stackexchange", "id": 10250, "tags": "deep-learning, dataset, cnn, data-science-model" }
An application that compares two lists of byte strings
Question: I am a beginner in Rust, and I am trying to write a program that compares common bits between two lists of bits. The program below does that (with the variable names indicating the domain of application). I would appreciate a code review by anyone who might point out idiomatic and concise ways of accomplishing the same. extern crate getopts; extern crate num_bigint; extern crate num_traits; extern crate rand; use num_bigint::BigUint; use rand::distributions::{IndependentSample, Range}; use num_traits::FromPrimitive; use std::fs; use std::fs::File; use std::io::Write; use std::iter::repeat; use getopts::Options; use std::env; use std::collections::HashMap; #[derive(Debug)] struct MyOptions { mutantlen: u64, nmutants: u64, ntests: u64, nfaults: u64, nchecks: u64, nequivalents: u64, } impl ToString for MyOptions { fn to_string(&self) -> String { return format!("data/mutantlen={:?}/nequivalents={:?}/nmutants={:?}/nfaults={:?}/ntests={:?}/nchecks={:?}/", self.mutantlen, self.nequivalents, self.nmutants, self.nfaults, self.ntests, self.nchecks); } } fn genbits(bitlen: u64, nflipped: u64) -> BigUint { let mut rng = rand::thread_rng(); let faulty_bits: u64 = Range::new(1, nflipped + 1).ind_sample(&mut rng); let mut m: BigUint = FromPrimitive::from_usize(0).unwrap(); for _ in 0..faulty_bits { let pos: usize = Range::new(0, bitlen).ind_sample(&mut rng) as usize; let one: BigUint = FromPrimitive::from_usize(1).unwrap(); let fault = one << pos; m |= fault; } return m; } fn gen_lst(num: u64, len: u64, nflipped: u64) -> Vec<BigUint> { return (0..num).map(|_| genbits(len, nflipped)).collect(); //::<Vec<_>> } fn gen_mutants(nmutants: u64, mutantlen: u64, nfaults: u64) -> Vec<BigUint> { return gen_lst(nmutants, mutantlen, nfaults); } fn gen_tests(ntests: u64, mutantlen: u64, nchecks: u64) -> Vec<BigUint> { return gen_lst(ntests, mutantlen, nchecks); } fn kills(test: &BigUint, mutant: &BigUint) -> bool { return (test & mutant) > FromPrimitive::from_usize(0).unwrap(); } fn 
zeros(size: usize) -> Vec<BigUint> { repeat(FromPrimitive::from_usize(0).unwrap()) .take(size) .collect() } fn mutant_killed_by(m: &BigUint, tests: &Vec<BigUint>) -> usize { return tests.iter().filter(|t| kills(&t, m)).count(); } fn mutant_killscore( _opts: &MyOptions, mutants: &Vec<BigUint>, equivalents: &Vec<BigUint>, my_tests: &Vec<BigUint>, ) -> HashMap<usize, usize> { return mutants.iter().chain(equivalents.iter()) .map(|m| mutant_killed_by(m, my_tests)) .enumerate().collect(); } fn do_statistics(opts: &MyOptions, mutant_kills: &HashMap<usize, usize>) -> () { let mut ntests = Vec::new(); for i in 0..1001 { let mut e = 0; let mut a = 0; let mut s = 0; for (_m, k) in mutant_kills { if *k == i { e += 1; } if *k >= i { a += 1; } if *k <= i { s += 1; } } ntests.push((i, a, s, e)) } let fname = format!("{:}kills.csv", opts.to_string()); let mut f = File::create(&fname).expect(&format!("Unable to create file: {}", &fname)); f.write_all("ntests, atleast, atmost, exactly\n".as_bytes()) .expect("Unable to write data"); for &(i, a, s, e) in &ntests { let data = format!("{}, {}, {}, {}\n", i, a, s, e); f.write_all(data.as_bytes()).expect("Unable to write data"); } } fn main() { let args: Vec<String> = env::args().map(|x| x.to_string()).collect(); let ref _program = args[0]; let mut opts = Options::new(); opts.optopt("l", "mutantlen", "length of a mutant", "mutantlen"); opts.optopt("m", "nmutants", "number of mutants", "nmutants"); opts.optopt("t", "ntests", "number of tests", "ntests"); opts.optopt("f", "nfaults", "maximum number of faults per mutant", "nfaults"); opts.optopt("c", "nchecks", "maximum number of checks per test", "nchecks"); opts.optopt("e", "nequivalents", "number of equivalents", "nequivalents"); let matches = match opts.parse(&args[1..]) { Ok(m) => m, Err(f) => panic!(f.to_string()), }; let mutantlen = match matches.opt_str("l") { Some(s) => s.parse().unwrap(), None => 10000, }; let nmutants = match matches.opt_str("m") { Some(s) => s.parse().unwrap(), 
None => 10000, }; let ntests = match matches.opt_str("t") { Some(s) => s.parse().unwrap(), None => 10000, }; let nfaults = match matches.opt_str("f") { Some(s) => s.parse().unwrap(), None => 10, }; let nchecks = match matches.opt_str("c") { Some(s) => s.parse().unwrap(), None => 10, }; let nequivalents = match matches.opt_str("e") { Some(s) => s.parse().unwrap(), None => 0, }; let opts: MyOptions = MyOptions { nmutants, mutantlen, nfaults, ntests, nchecks, nequivalents, }; eprintln!("{:?}", opts); fs::create_dir_all(opts.to_string()).unwrap_or_else(|why| { println!("! {:?}", why.kind()); }); // first generate our tests let my_tests = gen_tests(ntests, mutantlen, nchecks); // Now generate n mutants let mutants = gen_mutants(nmutants, mutantlen, nfaults); let equivalents = zeros(nequivalents as usize); // how many tests killed this mutant? let mutant_kills = mutant_killscore(&opts, &mutants, &equivalents, &my_tests); do_statistics(&opts, &mutant_kills); } Answer: Run clippy. It will automatically tell you of a great number of improvements, such as: Don't use return on the last statement. You shouldn't accept a &Vec<T>. You should use HashMap::values let ref x = y is not idiomatic, prefer let x = &y Many of your function names are very short and ndlsly_abrvd. Writing out those extra characters won't hurt. Think a bit more about your code organization. It's a little surprising that you haven't created any methods and everything is currently just loose functions. For example, gen_tests, gen_mutants, and do_statistics all heavily make use of MyOptions; perhaps they should actually be methods on MyOptions instead? MyOptions It's not common to directly implement ToString. Instead, implement Display as it is more flexible (you can write it to a stream without allocating memory) and you get to_string for free (impl<T> ToString for T where T: Display + ?Sized). On the flip side, it feels wrong to implement either Display or ToString for MyOptions to create a path. 
A similar concept exists in the standard library where a helper type is used to provide a more-obvious Display implementation. genbits BigUint implements the Zero and One traits; using them is much simpler than calling unwrap. Rng::gen_range is an easier way of constructing a random range. However, Range is preferred for multiple generations, but should be hoisted out of the loop. Move the numeric cast into the Range. Prefer fold to avoid mutable variables and for loops. mutant_killscore Why does this take _opts? If you don't need it, don't pass it. Iterator::chain will convert the argument to an iterator, you don't have to call .iter() on the argument. do_statistics Don't say a function returns -> (); just omit it entirely. Instead of pushing items into a Vec, collect them using Iterator::collect Are the edits / additions / subtractions really supposed to overlap? They will all be triggerred when *k == i. Don't use the {:} formatter; it's the same as {}. Using format! in an unwrap or expect means the string will be allocated even in success cases. Instead, use unwrap_or_else with panic. In most cases, you want to use the write! / writeln! macros instead of direct calls to write_all. This also allows you to avoid allocating a string just to write it out, as you can format directly to the file. There's no reason to iterate over the reference to ntests, we aren't using the vector anymore after that. In fact, there's no reason to collect ntests into a Vec at all, you can iterate over it directly. parse_arguments Extract argument parsing to a new function, it outweighs the rest of the main function. Don't specify the item type of a collection, use Vec<_> and let inference handle it. env::args already provides Strings; there's no need to convert. Don't take the reference to args[0] if you don't need it. There's boilerplate in the matching of arguments, use Option::map_or instead of a match. Use a closure to further reduce boilerplate between different arguments. 
Don't redundantly declare the type of a variable (let x: Foo = Foo...). Avoid having multiple authoritative sources for your values (individual variables and the MyOptions struct). main When creating the directory fails, you print a message and continue execution. That seems very suspicious. extern crate getopts; extern crate num_bigint; extern crate num_traits; extern crate rand; use getopts::Options; use num_bigint::BigUint; use num_traits::{One, Zero}; use rand::Rng; use rand::distributions::{IndependentSample, Range}; use std::collections::HashMap; use std::env; use std::fmt; use std::fs::{self, File}; use std::io::Write; use std::iter::repeat; #[derive(Debug)] struct MyOptions { mutantlen: u64, nmutants: u64, ntests: u64, nfaults: u64, nchecks: u64, nequivalents: u64, } impl fmt::Display for MyOptions { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!( f, "data/mutantlen={:?}/nequivalents={:?}/nmutants={:?}/nfaults={:?}/ntests={:?}/nchecks={:?}/", self.mutantlen, self.nequivalents, self.nmutants, self.nfaults, self.ntests, self.nchecks, ) } } fn genbits(bitlen: u64, nflipped: u64) -> BigUint { let mut rng = rand::thread_rng(); let faulty_bits = rng.gen_range(1, nflipped + 1); let rr = Range::new(0, bitlen as usize); (0..faulty_bits).fold(BigUint::zero(), |m, _| { let pos = rr.ind_sample(&mut rng); let fault = BigUint::one() << pos; m | fault }) } fn gen_lst(num: u64, len: u64, nflipped: u64) -> Vec<BigUint> { (0..num).map(|_| genbits(len, nflipped)).collect() } fn gen_mutants(nmutants: u64, mutantlen: u64, nfaults: u64) -> Vec<BigUint> { gen_lst(nmutants, mutantlen, nfaults) } fn gen_tests(ntests: u64, mutantlen: u64, nchecks: u64) -> Vec<BigUint> { gen_lst(ntests, mutantlen, nchecks) } fn kills(test: &BigUint, mutant: &BigUint) -> bool { (test & mutant) > BigUint::zero() } fn zeros(size: usize) -> Vec<BigUint> { repeat(BigUint::zero()).take(size).collect() } fn mutant_killed_by(m: &BigUint, tests: &[BigUint]) -> usize { tests.iter().filter(|t| 
kills(t, m)).count() } fn mutant_killscore( mutants: &[BigUint], equivalents: &[BigUint], my_tests: &[BigUint], ) -> HashMap<usize, usize> { mutants .iter() .chain(equivalents) .map(|m| mutant_killed_by(m, my_tests)) .enumerate() .collect() } fn do_statistics(opts: &MyOptions, mutant_kills: &HashMap<usize, usize>) { let ntests = (0..1001).map(|i| { let mut e = 0; let mut a = 0; let mut s = 0; for k in mutant_kills.values() { if *k == i { e += 1; } if *k >= i { a += 1; } if *k <= i { s += 1; } } (i, a, s, e) }); let fname = format!("{}kills.csv", opts); let mut f = File::create(&fname).unwrap_or_else(|e| { panic!("Unable to create file {}: {}", &fname, e); }); writeln!(f, "ntests, atleast, atmost, exactly").expect("Unable to write data"); for (i, a, s, e) in ntests { writeln!(f, "{}, {}, {}, {}", i, a, s, e).expect("Unable to write data"); } } fn parse_arguments() -> MyOptions { let args: Vec<_> = env::args().collect(); let mut opts = Options::new(); opts.optopt("l", "mutantlen", "length of a mutant", "mutantlen"); opts.optopt("m", "nmutants", "number of mutants", "nmutants"); opts.optopt("t", "ntests", "number of tests", "ntests"); opts.optopt( "f", "nfaults", "maximum number of faults per mutant", "nfaults", ); opts.optopt( "c", "nchecks", "maximum number of checks per test", "nchecks", ); opts.optopt("e", "nequivalents", "number of equivalents", "nequivalents"); let matches = match opts.parse(&args[1..]) { Ok(m) => m, Err(f) => panic!(f.to_string()), }; let numeric_arg = |name, def| matches.opt_str(name).map_or(def, |s| s.parse().unwrap()); MyOptions { nmutants: numeric_arg("m", 10_000), mutantlen: numeric_arg("l", 10_000), nfaults: numeric_arg("f", 10), ntests: numeric_arg("t", 10_000), nchecks: numeric_arg("c", 10), nequivalents: numeric_arg("e", 0), } } fn main() { let opts = parse_arguments(); eprintln!("{:?}", opts); fs::create_dir_all(opts.to_string()).unwrap_or_else(|why| { println!("!
{:?}", why.kind()); }); // first generate our tests let my_tests = gen_tests(opts.ntests, opts.mutantlen, opts.nchecks); // Now generate n mutants let mutants = gen_mutants(opts.nmutants, opts.mutantlen, opts.nfaults); let equivalents = zeros(opts.nequivalents as usize); // how many tests killed this mutant? let mutant_kills = mutant_killscore(&mutants, &equivalents, &my_tests); do_statistics(&opts, &mutant_kills); }
{ "domain": "codereview.stackexchange", "id": 28854, "tags": "bitwise, rust" }
Project Euler - Problem No.4 - Largest palindrome product
Question: according to the problem: A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99. Find the largest palindrome made from the product of two 3-digit numbers. Here is my code: def largest_palindrome_product(n:int) -> int: ''' Returns largest palindrome whose a product of two n digit(base 10) integer :param n: the number of digits in the numbers we compute the product of :return: largest palindrome whose a product of two n digit(base 10) integer or -1 if non were found ''' # Dealing with edge cases if n == 1: return 9 elif n < 1: raise ValueError("Expecting n to be >= 1") mul_max = -1 upper_boundary = (10**n) - 1 lower_boundary = 10**(n-1) # Searching for the largest palindrome between the upper boundary and the lower one. for i in range(upper_boundary, lower_boundary, -1): for j in range(i, lower_boundary, -1): str_prod = str(i*j) if i*j > mul_max and str_prod[::-1] == str_prod: mul_max = i*j return mul_max Here is a small test case for this code: from ProjectEuler.problem4 import largest_palindrome_product if __name__ == "__main__": # largest prime product is of 91*99 -> returns 9009 print(largest_palindrome_product(2)) # Checking edge cases -> returns 9 print(largest_palindrome_product(1)) # largest prime product is of 993*913 -> returns 906609 print(largest_palindrome_product(3)) Let me know your thoughts on this solution :) Answer: Errors range(start, end) goes from the start value, inclusive, to the end value, exclusive. So for i in range(upper_boundary, lower_boundary, -1): will not include lower_boundary in the values which will be tested, so you will be ignoring products where i would be 10 (two digit case) and 100 (three digit case). Similarly, for j in range(i, lower_boundary, -1) will ignore products where j would be 10 and 100. The solution is to use range(..., lower_boundary - 1, -1). Special Case Why is n == 1 special cased, to return 9? 
Why don’t you trust the algorithm to return the correct value? Oh, right, 9*1 wouldn’t be tested, because lower_boundary = 1, and got excluded due to the bug above. Perhaps you should have examined this special case closer. Optimizations You compute i*j up to 3 times each loop. You should compute it once, and store it in a variable, such as prod. prod = i * j str_prod = str(prod) if prod > mul_max and str_prod[::-1] == str_prod: mul_max = prod You are searching in decreasing ranges for the outer and inner loops. Why? True: You’ll find the target value faster. But you still search all product values where j <= i. Is there any way of determining there won’t be any larger mul_max value, either from the inner loop, or from the outer loop, or both? For instance, if i*j > mul_max is not true, would it be true for any smaller value of j? Turning an integer into a string is an \$O(\log n)\$ operation. Can you skip doing it for every product? for j in range(i, lower_boundary - 1, -1): prod = i * j if prod <= mul_max: break str_prod = str(prod) if str_prod[::-1] == str_prod: mul_max = prod Can something similar be done with the for i in range(...) loop, to speed things up even further?
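Folding the review's suggestions back into the function — one hedged sketch of what the reviewer is hinting at, including an early exit for the outer loop as well:

```python
def largest_palindrome_product(n: int) -> int:
    """Largest palindrome that is a product of two n-digit numbers, or -1."""
    if n < 1:
        raise ValueError("Expecting n to be >= 1")
    upper = 10 ** n - 1
    lower = 10 ** (n - 1)
    best = -1
    for i in range(upper, lower - 1, -1):
        # Once i*i can't beat the best, no smaller i can either.
        if i * i <= best:
            break
        for j in range(i, lower - 1, -1):
            prod = i * j  # computed once per iteration
            if prod <= best:
                break  # every later j only makes the product smaller
            s = str(prod)
            if s == s[::-1]:
                best = prod
    return best
```

Note that `range(..., lower - 1, -1)` also fixes the off-by-one bug, so the n == 1 special case is no longer needed.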
{ "domain": "codereview.stackexchange", "id": 35481, "tags": "python, beginner, python-3.x, programming-challenge, palindrome" }
Ampere's law and external currents
Question: In Ampere's law, the current outside the curve taken is not included in the expression. Does it mean that the magnetic field calculated by using the law gives the contribution of only the currents crossing the area bounded by the curve? Answer: The value of the line integral $\oint_C \vec B \cdot d\vec l$ really does only depend upon the current bounded by the closed path $C$. That's a consequence of Ampere's law. However, the value of the magnetic field $\vec B$ at any point along the path $C$ depends on every current, even those outside. Knowing the value of the line integral is not the same thing as knowing the value of the magnetic field. Here's an example that illustrates the difference: Imagine just a single wire carrying current $I_o$. Take the Amperian loop to be a circle that encloses the wire but is not centered on it. By Ampere's law, $\oint_C \vec B \cdot d \vec l=\mu_o I_o$. Done and done. However, you can't use Ampere's law here to "factor out" $\vec B$ from the integral, so you can't use it to solve for $\vec B$. Why? Because the magnetic field isn't constant over the path, and since it's not constant, it can't be pulled out of the integral in the manner you normally would. In order to rewrite $\oint \vec B \cdot d\vec l$ as $B \oint dl$, one requirement is that the magnitude of the magnetic field is constant over the path. One more example. Imagine two wires that each carry current $I_o$, but the Amperian loop is centered around only one of them; the other wire is not bounded by the loop. Alright, here again, $\oint \vec B \cdot d\vec l = \mu_o I_o$. Done. But the magnetic field $\vec B$ in the integral is really the net magnetic field $\vec B_\text{net}$. As mentioned earlier, all currents contribute to the magnetic field $\vec B$ in Ampere's law. But since the net magnetic field isn't constant, you can't factor it out of the integral and solve for it.
So Ampere's law as you know it is mainly useful when you can factor out the magnetic field and solve for it. In these cases, you typically have highly symmetric situations that allow you to do this. Even in non-symmetric cases, Ampere's law is still true, but it doesn't allow you to solve for the magnetic field.
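As a concrete instance of the symmetric case: for a single infinite straight wire with the Amperian loop a circle of radius $r$ centered on the wire, symmetry makes $\vec B$ tangent to the loop with constant magnitude, so the field can be factored out: $\oint_C \vec B \cdot d\vec l = B \oint_C dl = B(2\pi r) = \mu_o I_o$, giving $B = \frac{\mu_o I_o}{2\pi r}$.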
{ "domain": "physics.stackexchange", "id": 15663, "tags": "electromagnetism, magnetic-fields, electric-current" }
Do gases have a general upper limit of density?
Question: Is there some limit for the density of gases, at which no change in condition could make it more dense without making it fluid, or solid - or something 'in between'? Answer: Yes - you can have a state where increasing the pressure would create a supercritical fluid. See: Phase Diagram
{ "domain": "physics.stackexchange", "id": 13084, "tags": "pressure, temperature, ideal-gas, states-of-matter" }
Why isn't the answer to this Atwood Machine problem correct?
Question: Given: $$m_1 = 2*m_2$$ $$m_{disc} = 3*m_1 = 6*m_2$$ $$h = 3R$$ (R is the radius of the cylinder at the top) Find $v^2$ in terms of $g$ and $r$, where $v$ is the velocity of the blocks when $m_1$ is just about to hit the floor. Assume that the string does not slip over the cylinder at the top, and that the cylinder does have a moment of inertia. This was a problem that I found a while back, and I've been trying to solving it ever since. The correct answer is apparently $\frac{4}{3}gr$, but I keep on getting $gr$ as my answer. Here is my work: We have the Conservation of Energy Equation $$m_1gh = \frac{1}{2}m_1{v_1}^2 + \frac{1}{2}m_2{v_2}^2+m_2gh+\frac{1}{2}I{\omega}^2$$ First, we substitute in the first given equation to get $$2m_2gh = m_2{v_1}^2 + \frac{1}{2}m_2{v_2}^2+m_2gh+\frac{1}{2}I{\omega}^2$$ Now, we calculate I to be $$I_{disk} = \frac{1}{2}mr^2 = 0.5(3*m_1)r^2 = 1.5m_1r^2$$. Substituting back in and combining like terms: $$m_2gh = m_2{v_1}^2 + \frac{1}{2}m_2{v_2}^2+\frac{1}{2}(1.5m_1r^2){\omega}^2$$ Now, we realize that $$\omega = \frac{v}{r}$$, and also again use the first given equation: $$m_2gh = m_2{v_1}^2 + \frac{1}{2}m_2{v_2}^2+\frac{1}{2}(1.5(2m_2)r^2)(\frac{v^2}{r^2})$$ Now, since $$h = 3R$$, we get $$3m_2gr = m_2{v_1}^2 + \frac{1}{2}m_2{v_2}^2+1.5m_2v^2$$ Note that in this problem, v_1 = v_2, so $$3m_2gr = m_2{v}^2 + \frac{1}{2}m_2{v}^2+1.5m_2v^2$$ $$3m_2gr = 3m_2v^2$$ Thus, we get $$v^2 = \frac{3m_2gr}{3m_2}$$ So, $$v^2 = gr$$. I've been stuck on this problem for a long time, and I don't know where my mistake is. Can somebody please point me in the right direction? Answer: I believe your answer is correct. I assume there is a mistake in the textbook, and they meant to write that $h=4R$. Then the textbook answer would have been right.
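Indeed, redoing the last steps with $h = 4R$ instead of $3R$ gives $4m_2gr = 3m_2v^2$, i.e. $v^2 = \frac{4}{3}gr$, which reproduces the textbook's answer.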
{ "domain": "physics.stackexchange", "id": 73375, "tags": "homework-and-exercises, newtonian-mechanics, rotational-dynamics, energy-conservation, string" }
Pinging a URL with libcurl
Question: I've asked a question about pinging a URL before, but I wanted something that was a bit shorter and still just as efficient. libcurl seemed to be the perfect answer. Here is my method: /** * @fn int testConnection(void) * @brief Pings "https://www.google.com/" to test if there is an internet connection. Timeout is set to three seconds. * @return Success value if connected to the internet. */ int testConnection(void) { CURL *curl; CURLcode res = 0; curl_global_init(CURL_GLOBAL_ALL); curl = curl_easy_init(); if(curl) { curl_easy_setopt(curl, CURLOPT_URL, "https://www.google.com/"); curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 3); curl_easy_setopt(curl, CURLOPT_NOBODY, 1); res = curl_easy_perform(curl); curl_easy_cleanup(curl); } curl_global_cleanup(); return res; } Is there anything "wrong" with this pinging method? Are there any ways that I could make this method more efficient? Answer: Testing a URL to see if it is accessible is not a great way to test for an internet connection. There are a number of things I can see that can go wrong (or things you may be testing other than 'the internet'): you are testing your DNS server (if it is down, you have 'no internet', even though you do....) you are testing that caching-web-proxy on your company firewall that has a cache of the google page..... you are establishing a socket connection (slow in comparison) when there is no need for that on that socket you are establishing a protocol communication (HTTPS which is unnecessary) you are testing against one hard-coded destination that could change (I would guess that http will be discontinued on google 'soon' and they will swap to https.... - oh, you are using HTTPS already, which requires multiple packets!) you are testing against just one site, which may be a victim of a denial-of-service, or downtime. The right tool for the job is an ICMP packet (or few) ... a 'ping'. An ICMP packet is designed to be a lightweight system for identifying hosts that are up. 
By using a known IP address, you do not need to worry about DNS issues. By using multiple IPs, you are covered better for system downtime. By using a configurable system, you can change the IPs if needed. Consider the IP 8.8.8.8 <-- google's DNS server and, for more, I found this link... (I used 8.8.8.8 from before).
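A sketch of the suggested approach in Python rather than C — the host list and the injectable `ping` helper are illustrative assumptions, and `-W 1` (one-second timeout) is the Linux `ping` spelling:

```python
import subprocess

def internet_up(hosts=("8.8.8.8", "1.1.1.1"), ping=None):
    """Return True if any of several known hosts answers a single ping.

    `ping` is injectable so the logic can be tested without a network;
    the default shells out to the system ping(8).
    """
    if ping is None:
        def ping(host):
            return subprocess.run(
                ["ping", "-c", "1", "-W", "1", host],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            ).returncode == 0
    return any(ping(h) for h in hosts)
```

Probing several independent IPs covers the single-site downtime that the answer warns about.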
{ "domain": "codereview.stackexchange", "id": 7952, "tags": "performance, c, curl" }
Could an artificial neural network algorithm be expressed in terms of map-reduce operations?
Question: Could an artificial neural network algorithm be expressed in terms of map-reduce operations? I am also interested more generally in methods of parallelization as applied to ANNs and their application to cloud computing. I would think one approach would involve running a full ANN on each node and somehow integrating the results in order to treat the grid like a single entity (in terms of input/output and machine learning characteristics.) I would be curious even in this case what such an integrating strategy might look like. Answer: Yes it can, and has been. In the paper Map-Reduce for Machine Learning on Multicore they discuss using the Map-Reduce paradigm for several common ML algorithms including ANNs.
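The core pattern in that paper is that many learners reduce to "sum a per-example statistic over the data": map computes per-shard gradient sums, reduce adds them. A toy sketch for a single linear neuron (all names are illustrative):

```python
from functools import reduce
import numpy as np

def map_gradients(shard, w):
    """Map step: one node computes the squared-error gradient on its shard."""
    X, y = shard
    return X.T @ (X @ w - y)

def train(shards, w, lr=0.5, steps=100):
    """Reduce step: sum per-shard gradients, then apply one global update."""
    n = sum(len(X) for X, _ in shards)
    for _ in range(steps):
        grad = reduce(np.add, (map_gradients(s, w) for s in shards)) / n
        w = w - lr * grad
    return w
```

Each map call is independent of the others, so the shards can live on different machines; only the small gradient vectors travel to the reducer.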
{ "domain": "cs.stackexchange", "id": 310, "tags": "parallel-computing, artificial-intelligence, neural-networks" }
Does Hubble's constant apply to galaxies that are blue-shifted/ moving towards us? Another question
Question: Does Hubble's constant apply to galaxies that are moving towards Earth, and is Hubble's law also applicable to different systems such as binary stars, stars in other galaxies, or black holes? Also, why is there such a large uncertainty in Hubble's constant? Is it because: for galaxies further away, the expansion rate is not constant due to dark matter, and so galaxies' velocities vary? Also, when measuring much further away galaxies, the parallax angle becomes so small that the absolute uncertainty in the parallax angle must be large, causing the distance measured from the galaxy to Earth to have a large uncertainty? Answer: The only galaxies that we observe to be blue-shifted are some galaxies in the Local Group, such as Andromeda. These galaxies are relatively close neighbours of the Milky Way, so although they are still subject to the Hubble flow, they also have an individual radial speed towards us that is greater than the expansion of space. Galaxies which are further away also have their own individual speeds, but the Hubble speed at their distance is greater than these individual speeds, so all distant galaxies appear red-shifted. Nevertheless, their individual speeds will cause a distribution of red-shifts around the theoretical Hubble red-shift. This is one factor which makes it difficult to determine the Hubble constant with high precision. Other sources of uncertainty are the difficulty in precisely measuring the actual distance to a far galaxy, and the fact that the Hubble "constant" is not actually constant at all, but has changed over the lifetime of the universe.
{ "domain": "physics.stackexchange", "id": 72178, "tags": "cosmology, astrophysics, velocity" }
Is the electron spin $g$-factor value implying the particle is a composite one?
Question: As I understand it, the highest possible magnetic moment for a point charge having the same amount of charge as an electron, rotating with the same velocity, and confined to the same area around a pivot point, is half of the electron's magnetic moment. Does this imply that the electron could possess that kind of magnetic moment due to a superposition of two different and opposite charges in it, which combined give the net charge of the electron? In that case the negative one is possibly of higher value and responsible for the magnetic moment, as it is the rotating component, while the positive one is central and does not act as a source of magnetic moment. If my conjecture is wrong, please give me a hint. Answer: The main thing to say is that magnetic dipole moment comes in two forms: one form owing to charges moving from one place to another (such as going around in a loop), and the other form intrinsic to certain kinds of entity and not related to motion. Many of the entities, such as electrons, which appear in the Standard Model of physics have the latter kind of dipole moment. Your question suggests that the magnetic dipole moment of the electron might be of the first (motional) form. But the physical theory here says you would be wrong: the magnetic dipole moment of the electron is of the second (intrinsic) form. That is, it is simply part of the nature of what electrons are and it is not associated with any motion, whether displacement or rotation. On the other hand it is a sort of partner to the intrinsic angular momentum of the electron, and that property (also called spin) sounds as if motion is involved too, but in fact this is not so. When we say that an electron has a dipole moment, what are we saying exactly? Basically it is a statement about how the electron interacts with other things, such as magnetic fields.
Ultimately this property is deeply connected to the fundamental theory of what an electron is, and it is not possible to say much more about it in simple terms. You would have to learn about vector-like quantities called Dirac spinors and things like that. These spinors are simply the mathematical language of physical things such as electrons. A note on classical physics: magnetic dipole moment and angular momentum Finally, let's add a note concerning the kind of scenario raised in your question. It does not concern electrons, but the more general question of the relationship between magnetic dipole moment $\bf \mu$ and angular momentum $\bf L$. Can we have non-zero ${\bf L}$ with zero $\bf \mu$? Yes: consider a neutral spinning object. Can we have non-zero $\bf \mu$ with zero $\bf L$? Yes: consider two rings or discs spinning about the same axis with equal and opposite angular momentum. There is then no angular momentum in total. Now make one of the rings charged and the other not. Then you have a magnetic dipole moment. In conclusion, in classical physics, for a composite object, you can get any value of $g$. But for a point charge moving around a loop of any shape, the current around the loop and the area of the loop combine to make the angular momentum and magnetic dipole moment related to one another with $|g| = 1$.
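That last claim can be checked with the textbook calculation (a sketch in standard notation, not taken from the answer): for a point charge $q$ of mass $m$ circling at angular frequency $\omega$ and radius $r$,

```latex
\begin{align}
\mu &= I A = \frac{q\omega}{2\pi}\,\pi r^2 = \tfrac{1}{2}\,q\,\omega r^2, \\
L &= m\,\omega r^2,
\qquad\Longrightarrow\qquad
\mu = \frac{q}{2m}\,L ,
\end{align}
```

i.e. $|g| = 1$ in $\mu = g\,\frac{q}{2m}L$. The measured electron spin $g$-factor is instead close to $2$, which is one more reason the intrinsic moment cannot be modeled as a classical circulating charge.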
{ "domain": "physics.stackexchange", "id": 68176, "tags": "magnetic-fields, electrons, charge, quantum-spin, magnetic-moment" }
How to run a ROS service call in another thread with boost
Question: Hi, my node has a class which calls a service. During that service call it needs to get notified by a topic. The callback is set up accordingly. The problem is that during the service call no messages are received. I suspect that the node is blocked by the service call and cannot handle any callback calls. They just pop up after the service call has terminated. Below is a similar example of that node. class MyClass { void callService(){ my_srv.call(request); // blocking here! } void callback(){ // do something <-- is never called during service request } }; int main.. { ros::ServiceClient my_srv = n.serviceClient<..>(".."); MyClass my_class(my_srv); n.subscribe(my_topic, 100, &MyClass::callback, &my_class); my_class.callService(); } When trying to put the service call in another boost thread, compilation fails with the error below. my_srv.call(request); // blocking here! Changed to: boost::thread myThread = boost::thread(boost::bind(&ros::ServiceClient::call, &my_srv, request)); Error: »bind(<unresolved overloaded function type>, ros::ServiceClient*, request&)« Any hints? Thanks Originally posted by Sebastian Rockel on ROS Answers with karma: 23 on 2015-01-09 Post score: 0 Answer: Hi Sebastian, a couple of points: Service calls are blocking. This is a deliberate design decision. IMO, services should in general only be used for things which return quickly. If you want to query a long-running service asynchronously, that's a sign that that service should better be implemented as an actionlib action. (... assuming you control the implementation of the service, that is.) Your topic callbacks aren't triggered because they are only triggered when your node enters ros::spin() or ros::spinOnce(). In roscpp, there are threads which handle topics asynchronously, but they only fill the subscriber queues. When you call ros::spin(), your thread goes through the messages in the subscriber queues one by one and calls the corresponding callbacks.
This means that if you're only using ros::spin(), your code won't be multi-threaded. A good solution for your problem could be the ros::AsyncSpinner class. It starts a user-defined number of threads which all call ros::spin(), so you don't have to call it yourself any more. The drawback (as with any multi-threaded solution) is of course that now you need to make sure that all resources are properly locked to prevent data races. Hope this helps! Originally posted by Martin Günther with karma: 11816 on 2015-01-09 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Sebastian Rockel on 2015-01-09: Hi Martin, Thanks! I understand the second point and regarding the first point I completely agree. Unfortunately it's not my package and it was designed by uos like this on purpose;-) ros::AsyncSpinner looks interesting. I'll have a look if I can solve the problem pragmatically with it.
{ "domain": "robotics.stackexchange", "id": 20528, "tags": "ros, boost, thread, service" }
Multiple explicit cast operations
Question: This sample code works fine, but it looks awful. How would you improve this? data.Add(((Adress)(((OwnerIDList)owner.Adresses.Value)[0].Adress.Value)).FirstName.Value.ToString()); data.Add(((Adress)(((OwnerIDList)owner.Adresses.Value)[0].Adress.Value)).LastName.Value.ToString()); Why do we use .Value in FirstName.Value.ToString()? FirstName is a DTString object (it implements a basic interface for all data types to be stored in the database). Answer: At the least, I would extract a variable: var address = (Adress)((OwnerIDList)owner.Adresses.Value)[0].Adress.Value; data.Add(address.FirstName.Value.ToString()); data.Add(address.LastName.Value.ToString()); All these cast operations make me believe that your code is not that strongly typed. If that is so, then there is not much more you can do for readability in this code. P.S.: Address in English has two d's.
{ "domain": "codereview.stackexchange", "id": 59, "tags": "c#, casting" }
Derivation of Torricelli's equation
Question: I am reading "Fundamentals of Physics" by Shankar and he is deriving Torricelli's equation. First, he says $\displaystyle a = \frac{dv}{dt}$. Next, he multiplies both sides by velocity to get $\displaystyle v\frac{dv}{dt} = a\frac{dx}{dt}$. Next, he does something I don't understand: he cancels the $dt$'s to get $v~dv = a~dx$. My first question is why this is justified, and what the resulting equation "means". The next step he takes I also do not understand. He does $\displaystyle \int_{v_1}^{v_2}vdv = \int_{x_1}^{x_2}adx$. Why is this mathematically justified? I thought that $\displaystyle \int ~dx$ was thought of as a single entity, where the $dx$ solely acts as the "closing bracket" of the integral. In addition, why are there two different upper and lower bounds for the same integral? Doesn't this break the equality? In school I learned that you integrate by going from something like $\displaystyle \frac{dx}{dt} = v$ to $\displaystyle \int \frac{dx}{dt}dt = \int vdt$. This is not the same as what Shankar is doing. Answer: Let's approach this slightly differently; I am sure you would agree that: $$v a = a v$$ This is the equivalent of your equation after being multiplied by $v$, where $v=\frac{dx}{dt}$ and $a=\frac{dv}{dt}$. Now integrate this with respect to time: $$\int_{t_1}^{t_2} v a \,dt = \int_{t_1}^{t_2} a v \,dt $$ From the definition of the derivatives we know that $a\,dt=dv$ and $v\,dt=dx$, i.e. the infinitesimal change in velocity|distance ($dv|dx$) is simply the acceleration|velocity ($a|v$) multiplied by an infinitesimal change in time $dt$. This then transforms the equation to: $$\int_{v(x_1,t_1)}^{v(x_2,t_2)} v \,dv = \int_{x_1}^{x_2} a \,dx $$ This is equivalent to just cancelling the $dt$'s in your equation when you realize a derivative is not a single quantity but a ratio of infinitesimal quantities, as mentioned by Gert in the comments. The bounds are changed because we have changed the variable with respect to which we are integrating.
However, they correspond to the same bounds as $v(x_1,t_1)=v_1$ and $v(x_2,t_2)=v_2$.
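Completing the derivation: for constant acceleration $a$, evaluating both integrals gives Torricelli's equation itself (this final step is left implicit in the text above):

```latex
\int_{v_1}^{v_2} v\,dv = a \int_{x_1}^{x_2} dx
\;\;\Longrightarrow\;\;
\frac{v_2^2 - v_1^2}{2} = a\,(x_2 - x_1)
\;\;\Longrightarrow\;\;
v_2^2 = v_1^2 + 2a\,(x_2 - x_1).
```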
{ "domain": "physics.stackexchange", "id": 24995, "tags": "kinematics, calculus" }
buildfarm: Melodic build with qt_gui_cpp dependency doesn't install qtbase5-dev
Question: Hi, I'm trying to release a package in Melodic (find_object_2d) that depends on qt_gui_cpp (in its package.xml). Normally, in previous distributions (Lunar, Kinetic, Indigo...), qt_gui_cpp would install qtbase5-dev indirectly (qtbase5-dev is in the package.xml of qt_gui_cpp). Then the find_object_2d build is able to find Qt5. But in Melodic, the builds fail: http://build.ros.org/job/Mbin_uB64__find_object_2d__ubuntu_bionic_amd64__binary/ http://build.ros.org/job/Mbin_uA64__find_object_2d__ubuntu_artful_amd64__binary/ http://build.ros.org/job/Mbin_ubv8_uBv8__find_object_2d__ubuntu_bionic_arm64__binary/ http://build.ros.org/job/Mbin_ubhf_uBhf__find_object_2d__ubuntu_bionic_armhf__binary/ with this error: 12:21:57 CMake Error at /usr/share/cmake-3.10/Modules/FindQt4.cmake:1320 (message): 12:21:57 Found unsuitable Qt version "" from NOTFOUND, this code requires Qt 4.x 12:21:57 Call Stack (most recent call first): 12:21:57 CMakeLists.txt:47 (FIND_PACKAGE) The error is about Qt4 because in find_object_2d's CMakeLists.txt, it searches for Qt5 first (optional), then, if it is not found, it searches for Qt4 (required): # look for Qt5 before Qt4 FIND_PACKAGE(Qt5 COMPONENTS Widgets Core Gui Network QUIET) IF(NOT Qt5_FOUND) FIND_PACKAGE(Qt4 COMPONENTS QtCore QtGui QtNetwork REQUIRED) ENDIF(NOT Qt5_FOUND) I can reproduce the error with the melodic desktop Docker image (not full): $ docker run -it osrf/ros:melodic-desktop-bionic $ cd $ mkdir catkin_ws $ cd catkin_ws $ mkdir src $ cd src $ catkin_init_workspace $ git clone https://github.com/introlab/find-object.git $ cd .. $ catkin_make The error above appears. To solve the problem, I have to explicitly install Qt5: $ apt-get install qtbase5-dev $ catkin_make or $ rosdep install qt_gui_cpp $ catkin_make The question is: why doesn't qt_gui_cpp install qtbase5-dev?
https://github.com/ros-visualization/qt_gui_core/blob/kinetic-devel/qt_gui_cpp/package.xml#L24 Original issue link: https://github.com/introlab/find-object/issues/52 Originally posted by matlabbe on ROS Answers with karma: 6409 on 2018-05-22 Post score: 0 Answer: qt_gui_cpp only declares a build dependency on qtbase5-dev. That means the dependency is only being installed when that package is being built. When you are using it to build your package only its run dependencies are installed (otherwise users of rqt would need all development packages on their system). As far as I can see nothing in that package has changed in the past - actually Kinetic is even using the exact same branch. So I assume that you were lucky before that some other dependency brought in qtbase5-dev transitively and your package was relying on that behavior. If your package needs qtbase5-dev I recommend that you state that explicitly as a build dependency in the manifest. The "downside" will be that you can't easily switch between Qt 5 and Qt 4. That is the reason why qt_gui_cpp uses different branches for Indigo (targeting Qt 4) and Kinetic (targeting Qt 5). Originally posted by Dirk Thomas with karma: 16276 on 2018-05-22 This answer was ACCEPTED on the original site Post score: 2
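Concretely, the fix the answer recommends is a one-line addition to find_object_2d's manifest (a sketch; exact placement within the existing package.xml is assumed):

```xml
<!-- Explicit build dependency on the Qt 5 development headers, instead of
     relying on qt_gui_cpp to pull them in transitively (it does not). -->
<build_depend>qtbase5-dev</build_depend>
```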
{ "domain": "robotics.stackexchange", "id": 30876, "tags": "ros, ros-melodic, jenkins, qt5, buildfarm" }
Computing confidence interval for average from individual predictions
Question: In the attached image, I am plotting actual vs predicted along with confidence intervals for each of the predicted points. The black star represents the average of all the predicted points. What formula can I use to combine the confidence intervals of all the predicted data points to get a confidence interval for the average? Also, please assume that the average could be a weighted average. I am using python, numpy. Answer: OK, we have two dozen predictions-with-uncertainty, and actual point values. I'm still not sure we know enough to answer, so I'm going to just outline some points and hope someone can improve upon this. A quick search found three similar StackExchange questions: Combining 13 independent regressions seems the most similar -- note the comment warning this might be meaningless. The best answer suggests a sampling approach because the result has no closed form. Using a Normal approximation would have a closed form, but I think your plot suggests the errors are not Normal, or your 95% CI is much too narrow. Combining 2 -- see esp. the reply discussing meta-analysis and whether you have a fixed effects or a random effects model. This may be what you are doing, in which case re-formulating the question for that community may be best. Delta Method -- seems to me the same question, but takes a direct maximization approach, assuming approximations hold. A couple of observations suggesting any simple average is not going to give you what you want, and that you need to be very clear what is being combined and what question the result is going to answer: If these are 95% confidence intervals, then your error model is much too optimistic (the intervals are too narrow). Nearly all of them should include the true value (gray dashed diagonal), but only about 5 do. Therefore any parametric combination will be at least as unreliable. But suppose you have a black box with uncalibrated estimates, and you just want to know the "average" from that black box, however uncalibrated.
If these are two dozen unrelated estimates from a crowdsourcing platform, the "average" is not meaningful. On the other hand, if they are performance measures of the crowd, then the average is something like the average performance of the crowd across all question types. In that case: A 1st approximation would just average the two dozen points and use the standard deviation of that. (Each point can be weighted.) A 2nd approximation would treat each estimate and CI as Normal, and use one of the formulas for combining Normal CIs. Note however that the error bars seem wider towards the edges; that may require special handling. A 3rd approximation: sample, say, 100 forecasts from each of the two dozen distributions and then combine those ~2400 samples. (Weighting can be done by varying the sample size for each forecast.) This works even if the distributions are not assumed to be Normal. If instead you have two dozen forecasts from a single statistical model on different input data, and you want to know the average forecast across all input combinations, don't combine them. Just run the model with no factors specified, and use that result. Hopefully a bona fide statistician can supply a more definitive answer.
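The 3rd approximation can be sketched with the standard library alone, assuming each forecast's CI is summarized by a mean and a standard deviation (the names and numbers below are illustrative, not from the question):

```python
import random
import statistics

def combine_by_sampling(forecasts, n_per_forecast=100, seed=0):
    """Pool samples drawn from each (mean, sd) forecast, then summarize.

    Works for non-Normal forecasts too if random.gauss is swapped for a
    draw from the appropriate distribution; weights can be expressed by
    varying n_per_forecast per entry.
    """
    rng = random.Random(seed)
    pooled = [rng.gauss(m, sd)
              for m, sd in forecasts
              for _ in range(n_per_forecast)]
    mean = statistics.fmean(pooled)
    sd = statistics.stdev(pooled)
    # Rough 95% interval of the pooled predictive distribution.
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

forecasts = [(1.0, 0.2), (1.4, 0.3), (0.8, 0.25)]  # illustrative values
mean, (lo, hi) = combine_by_sampling(forecasts)
```

Note the pooled spread includes both the within-forecast uncertainty and the between-forecast disagreement, which is exactly why a naive average of interval widths would understate it.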
{ "domain": "datascience.stackexchange", "id": 1674, "tags": "python, statistics" }
Leakage of X-ray radiation
Question: Suppose a sample of strontium-90 is stored in a container with lead walls. It is known that X-ray radiation may be detected outside the lead container. After some discussion with my peers, it seems that we have differing theories on how the X-ray radiation is formed. Beta particles emitted by the decay of strontium-90 collide with the walls of the container, and in the process emit photons. These photons, when energetic enough, have high penetration power and can penetrate the walls of the container. Similarly, the beta emission by the decay of strontium-90 produces photons, but the photons instead tunnel through the walls. The first theory seems to show a picture of the emitted X-rays actually passing through the empty space between lead particles in the wall of the container. Under this theory, the term "penetration" is not well defined enough. Under the second theory, it seems that quantum tunnelling is not directly applicable because there is no potential barrier involved. Which would be the correct explanation, or is there a better explanation? Answer: I don't know what "penetration power" is or why quantum tunneling needs to be invoked. Sr-90 decays entirely via beta emission with up to $0.546\ \mathrm{MeV}$ given to the electron, and its daughter isotope similarly decays with up to $2.28\ \mathrm{MeV}$ given to the electron. These energy ranges are right around the $1.71\ \mathrm{MeV}$ of P-32, whose beta emissions are known to induce significant bremsstrahlung in lead. Bremsstrahlung can easily produce photons of energies similar to the incident energy of the charged particle. Any photon will have some attenuation length dependent on the frequency and the material it is passing through. Here is the NIST chart of photon attenuation coefficients for lead. As you can see, the value of $\mu/\rho$ is about $0.5\ \mathrm{cm^2/g}$ for photon energies of $2\ \mathrm{MeV}$.
At a density of $11\ \mathrm{g/cm^3}$, this means the attenuation length is about $5.7\ \mathrm{cm}$. Even a $10\ \mathrm{cm}$ thick lead wall will only stop about 80% of such photons. Please do follow Jon Custer's answer and properly line the storage container.
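The attenuation arithmetic above can be checked directly with the Beer-Lambert law, $I/I_0 = e^{-x/\lambda}$ (a sketch; the attenuation length is the value quoted in the answer, not re-derived from the NIST chart):

```python
import math

def transmitted_fraction(thickness_cm, attenuation_length_cm):
    """Beer-Lambert attenuation: fraction of photons passing through."""
    return math.exp(-thickness_cm / attenuation_length_cm)

lam = 5.7   # cm, attenuation length quoted in the answer for ~2 MeV photons in lead
stopped = 1.0 - transmitted_fraction(10.0, lam)  # roughly "about 80%"
```

With these numbers, a 10 cm wall stops about 83% of such photons, matching the answer's rough figure.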
{ "domain": "physics.stackexchange", "id": 16654, "tags": "quantum-mechanics, nuclear-physics, radiation" }
Useful metrics to compare network-output image to true image?
Question: I'm designing a supervised network that needs to output an image. I'm wondering what the best metrics are for measuring similarity between the output and the actual target image. So far, my best idea is to compute the distance between RGB values, giving less weight to the background. Are there better ways? Answer: A widely adopted method for measuring similarity between two images is the structural similarity index (SSIM): Wikipedia Useful links Implementation However, if you aim to train a network that should produce images with desired properties, I would consider the GAN framework. In this case, the similarity between the generated and target image is defined by a special discriminator network. Depending on what your task is, there are many GAN sub-types.
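To make the SSIM formula concrete, here is a minimal single-window version in plain Python (a sketch; real implementations, e.g. scikit-image's, compute it over sliding local windows and average the result):

```python
def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM between two equal-length grayscale pixel lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1 = (0.01 * data_range) ** 2   # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0; the score decreases as luminance, contrast, or structure diverge, which is what makes SSIM more perceptually meaningful than a raw per-pixel RGB distance.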
{ "domain": "datascience.stackexchange", "id": 4477, "tags": "cnn, image-recognition" }
What is the difference between technical-grade and food-grade tripotassium phosphate?
Question: There is an article about tripotassium phosphate that states the following: Consumers may have health concerns about why this cleaning agent can be used in food, but that is the technical grade, not the food grade. When used as a food additive, it almost has no side effects and its safety has been approved by the FDA... There is an answer here that nicely addresses the difference between lab-grade and food-grade purities, but does that answer my question? My question specifically is: what is the difference between "technical-grade" and "food-grade" tripotassium phosphate? Is it just a matter of impurities? At the atomic level, K3PO4 is K3PO4, whether it's being used as a commercial degreaser, a stain remover, or a food additive, no? Answer: It is just a question of purity. Usually the manufacturer of chemicals prints the composition on the label. For example, the purest form of tripotassium phosphate produced by VWR is: Minimum purity : > $98$ %; Free alkali : < $1.0$ %; Dipotassium hydrogen phosphate : < $1.0$ %; Sodium : < $0.5$ %; Chloride : < $0.003$ %; Iron : < $0.001$ %; Lead : < $0.002$ %; Total nitrogen : < $0.001$ % I don't have the results of the analysis of the same product in technical-grade quality. But I am sure that the corresponding values are significantly higher. Technical products are cheaper, but they contain more impurities.
{ "domain": "chemistry.stackexchange", "id": 17002, "tags": "terminology, food-chemistry" }
Why do electrons stretch when they flow through a wire?
Question: In the question: Special relativity and electromagnetism, the question was in reference to a Veritasium video where he describes how magnetic fields are caused by special relativity. He describes how if a current is flowing through a wire, and a positively charged cat is moving at the same speed as the negative charge, then from its reference frame, the positive charge contracts in an otherwise neutral wire and causes a net positive charge, therefore repelling the cat. The questioner asks why the same effects relating to the length contraction don't occur when electrons are flowing and everything else is stationary. Here is the part that confused me: The top answer replied: If each pair of neighboring electrons had been separated by a little rigid rod, the electrons would have to come closer together when the current starts flowing. But there are no such rods, and the row of electrons is free to stretch when the current begins to flow, and this stretching is exactly canceled out by the length contraction, such that in the lab it looks like the distance between the moving electrons is the same as the distance between the stationary protons. What? Why would electrons have to come closer together when they start flowing (when not in reference to the relativistic length contraction)? That would make one side of the wire charged? It's not possible for all of the electrons to compress in a closed loop without bunching up in packs! Why do the electrons stretch when they start flowing? Would that not make one side of the wire charged? Once again, it's not possible for all the electrons in a closed loop to stretch without bunching up at some point. 
What I think I understand from this picture of relativistic charge distribution is that while the positive charges on the squirrel (or cat's) side of the wire are now contracted since the electrons are stationary relative to the squirrel, on the opposing side, the electrons are now moving at twice the speed of the positive charge in the opposing direction, resulting in ... a net negative charge, right? But if the positive charge is contracting due to moving at the speed of 1x, and the electrons are contracting due to moving at the speed of 2x, (with x being the speed of squirrel), then are we not back to where we started with a neutral wire if we subtract 1x from each??? (Am I assuming that length contraction is linear? Did I just change reference frames? Is it both?) And lastly, assuming that the electrons do stretch for some reason when current flows through them, isn't it incredibly convenient that "this stretching is exactly canceled out by the length contraction"? What is the basis of this claim? I might be misunderstanding everything here. I really appreciate any answers. Answer: Why do the electrons stretch when they start flowing? Would that not make one side of the wire charged? Once again, it's not possible for all the electrons in a closed loop to stretch without bunching up at some point. I do see that the previous answer encourages you to think this way, but I would not think this way. Basically, in order to apply relativistic concepts you have to know the situation in one frame. Once you know the situation in one frame then you can transform to another frame to find the situation in that frame. So in the case of a wire we know through circuit theory how things are in the rest frame of the battery. We know that the wire is carrying current and that it is uncharged. That is our given information. There is no mystical reason that it has to be that way, we have set it up that way by design. 
If we wanted to have the wires not carry current then we could remove the battery, and if we wanted the wires to be charged then we could have attached the whole thing to a Van de Graff generator. So by our setup we have chosen the scenario in the rest frame of the wire to be that the wire carries current and is uncharged. With that given information in the circuit’s rest frame we can then calculate the scenario in any other frame using relativity. The charge density and the current density form a four-vector, meaning that charge density has the same relationship to current density as time has to space. So in the circuit frame, assuming the current is going counter clockwise, then the current density four vector in the top wire is $(\rho,j_x,j_y,j_z)=(0,-j,0,0)$, in the left wire is $(0,0,-j,0)$, in the bottom wire is $(0,j,0,0)$ and in the right wire is $(0,0,j,0)$. Now, with that information we can easily transform to any other frame. For example, in a frame moving at $v$ with respect to the circuit we get the current density four vector in the top wire is $$\left( \frac{jv}{\sqrt{1-v^2}},-\frac{j}{\sqrt{1-v^2}},0,0\right)$$ the bottom wire has opposite sign from the top wire and the left and right wires have the same current density four vector as in the circuit frame. So, with this we notice a couple of things. First, in the moving frame there is a sort of “current density” contraction similar to length contraction. We also notice that there is a non-zero charge density. As you mentioned: it's not possible for all the electrons in a closed loop to stretch without bunching up at some point. That is correct. We see the opposite charge density on the opposite wire. So we do get the bunching and spreading necessary to conserve charge. 
This approach helps us understand your final question: assuming that the electrons do stretch for some reason when current flows through them, isn't it incredibly convenient that "this stretching is exactly canceled out by the length contraction"? Basically, we see that this is a little backwards. The fact that the charge density is zero in the circuit frame is the given. It is provided by our experimental setup. There is no "convenience" about it, we simply chose it by our setup. What is a result of our setup in the circuit frame is that in the moving frame there is a nonzero charge density. That is a direct result of transforming the defined circuit into the moving frame. It is not merely a coincidence that the charge in the moving frame cancels out the length contraction, but instead the experimental setup mandates there be no charge in the circuit frame and then the charge in the moving frame is a result.
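The transformed components quoted above are just the Lorentz boost of the four-current (a sketch, in the answer's units with $c = 1$ and boost speed $v$ along $x$):

```latex
\rho' = \gamma\,(\rho - v\,j_x), \qquad
j_x' = \gamma\,(j_x - v\,\rho), \qquad
\gamma = \frac{1}{\sqrt{1-v^2}} .
```

Applying this to the top wire's $(\rho, j_x) = (0, -j)$ gives $\rho' = \gamma v j$ and $j_x' = -\gamma j$, exactly the four-vector stated; the bottom wire's $(0, j)$ acquires the opposite charge density, which is the "bunching and spreading" that conserves charge.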
{ "domain": "physics.stackexchange", "id": 65745, "tags": "electromagnetism, special-relativity, electric-current, charge" }
Could wavefunction values be quantized?
Question: According to standard quantum mechanics, Hilbert space is defined over the complex numbers, and amplitudes in a superposition can take on values with arbitrarily small magnitude. This probably does not trouble non-realist interpretations of the wavefunction, or explicit wave function collapse interpretations, but does come into play in serious considerations of realist interpretations that reject explicit collapse (e.g. many worlds, and the quantum suicide paradox). Are there, or could there be, models of "quantum mechanics" on Hilbert-like spaces where amplitudes belong to a discretized subset of the complex numbers like the Gaussian integers -- in effect, a module generalization of a Hilbert space -- and where quantum mechanics emerges as a continuum limit? (Clearly ray equivalence would still be an important consideration for this space, since almost all states with Gaussian-integer-valued amplitudes will have $\left< \psi | \psi \right> > 1$.) Granted, a lot of machinery would need to change, with the normal formalism emerging as the continuous limit. An immediate project with discretized amplitudes could be the preferred basis problem, as long as there are allowed bases that themselves differ by arbitrarily small rotations. Note: the original wording of the question was a bit misleading and a few responders thought I was requiring all amplitudes to have a magnitude strictly greater than zero; I've reworded much of the question. Answer: One problem with this idea is normalization: $$\int_{\mathbb R} \psi^* (x) \psi(x)~ dx = 1$$ You are integrating over infinite space. If $\psi$ has a minimum non-zero value, $\psi$ must be $0$ everywhere except in a finite volume. Now switch to the momentum basis. Because $\psi$ has bounded support, its Fourier transform cannot have bounded support. To be normalizable, the tails would have to have infinitesimal values. So you cannot have discrete values in momentum space. Does this fit your theory?
Another problem is that wave functions are continuous. If there were only a discrete set of allowed values, you would have discontinuous functions. Unless you are talking about a space with holes in it? Constant values in distinct regions? Given $$\hat p \psi(x) = -i\hbar \frac{\partial\psi}{\partial x}$$ a $\psi$ that was constant, except where interrupted by discontinuities, would correspond to $\hat p \psi = 0$ except where it is undefined or perhaps has infinite spikes. Likewise $$-\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2} = E \psi$$ would lead to $E = 0$ except perhaps at the discontinuities.
{ "domain": "physics.stackexchange", "id": 81838, "tags": "quantum-mechanics, hilbert-space, wavefunction, quantum-interpretations, discrete" }
Why does the alkyl group anti to the hydroxyl migrate in Beckmann rearrangement?
Question: Background: Many organic reactions involve the migration of an alkyl group from one position to the adjacent one. The migration of the alkyl group is decided by its migratory aptitude, i.e. electron-richness. It generally follows the priority order of hydride > phenyl > higher alkyl > methyl. Main question: The Beckmann rearrangement also involves an alkyl migration. However, this migration is not governed by migratory aptitude. In fact, the $\ce{-R}$ which is in the anti-position to the hydroxyl group in the oxime migrates, irrespective of its migratory aptitude! My question is why is this so? And are there other organic reactions which have such a rule for alkyl migration? MasterOrganicChemistry and Wikipedia don't even mention the word "trans" or "anti" anywhere. My textbook obviously doesn't mention anything either, hence I am driven to ask this question. Answer: Oximes can undergo the Beckmann rearrangement. Oximes also exist as stable syn and anti isomers. In the figure below, $\ce{R_1}$ is anti to the hydroxyl group; another isomer exists where it is syn to the hydroxyl group. In the first step of the Beckmann rearrangement the protonated hydroxyl group makes for a good leaving group. As the hydroxyl group begins to depart, the $\ce{R}$ group which is anti to the leaving group begins to migrate and form a bond with nitrogen. (Wikipedia) This step is said to be concerted (rather than stepwise) with the $\ce{N-O^{+}H2}$ bond being broken and the new $\ce{R-N}$ bond being formed at (more or less) the same time. Whether you think of the $\ce{R}$ group as participating in something similar to an SN2 reaction or whether you think of the $\ce{R}$ group as starting to bond to the available $\sigma^{*}$ orbital of the elongating $\ce{N-O}$ bond (equivalent descriptions), you see that having the $\ce{R}$ group approach the nitrogen from a 180° angle from the breaking $\ce{N-O}$ bond would be preferred. 
Hence, the $\ce{R}$ group anti to the $\ce{N-O}$ bond migrates preferentially. There are many reactions showing this geometric preference, the Baeyer-Villiger oxidation being an example.
{ "domain": "chemistry.stackexchange", "id": 9855, "tags": "organic-chemistry, reaction-mechanism" }
Proving Kelvin’s Circulation theorem
Question: I’m following Anderson’s Fundamentals Of Aerodynamics. However, the proof for the above theorem is not provided. Every time I try it, things get really lengthy and cumbersome. Is there some nice, short way to prove that the time rate of change of circulation around a chosen closed loop is zero in an inviscid, incompressible flow? Answer: The circulation is given by $$\Gamma = \oint_{C(t)}\boldsymbol{u}\cdot d\boldsymbol{r}.$$ Now, take the material derivative of this expression (remember to use the product rule, because our contour is time-dependent): $$\dfrac{D \Gamma}{Dt} =\oint_{C(t)}\dfrac{D \boldsymbol{u}}{Dt}\cdot d\boldsymbol{r}+\oint_{C(t)}\boldsymbol{u}\cdot d\left[\dfrac{D\boldsymbol{r}}{Dt}\right].$$ Now we assume that the flow is inviscid and all volume forces are conservative (have a potential $\Psi$), so that we can use Euler's equation for the first integral: $$\dfrac{D \Gamma}{Dt} =\oint_{C(t)}\left[-\dfrac{\nabla p}{\rho}-\nabla \Psi \right]\cdot d\boldsymbol{r}+\oint_{C(t)}\boldsymbol{u}\cdot d\boldsymbol{u}.$$ The term on the right-hand side is integrable, with anti-derivative $\tfrac{1}{2}\boldsymbol{u}^2$. As we are integrating around a closed contour (starting point = end point), it must be zero. If we assume that $\rho = \rho(p)$, which is called barotropic, then we can conclude that the closed contour integral on the left-hand side is also zero. Note that the closed contour integral of a conservative field $-\nabla \Psi$ is zero, which is also called path independence. Hence, we conclude $$\dfrac{D \Gamma}{Dt} =0.$$ This means that for inviscid, barotropic flows with conservative volume forces, the circulation $\Gamma$ is conserved.
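The step that turns the second integral into a closed-loop integral of an exact differential deserves a line of justification: the material derivative of a line element of the material contour is the velocity differential. Spelled out in the answer's notation:

```latex
% Material derivative of a line element between two neighbouring
% material points \boldsymbol{r} and \boldsymbol{r} + d\boldsymbol{r}:
\frac{D}{Dt}\,d\boldsymbol{r}
  = \frac{D(\boldsymbol{r}+d\boldsymbol{r})}{Dt}
    - \frac{D\boldsymbol{r}}{Dt}
  = \boldsymbol{u}(\boldsymbol{r}+d\boldsymbol{r})
    - \boldsymbol{u}(\boldsymbol{r})
  = d\boldsymbol{u},
% so the second integral is an exact differential around a closed loop:
\oint_{C(t)} \boldsymbol{u}\cdot d\boldsymbol{u}
  = \oint_{C(t)} d\!\left(\tfrac{1}{2}\,\boldsymbol{u}^{2}\right)
  = 0 .
```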
{ "domain": "engineering.stackexchange", "id": 1737, "tags": "aerodynamics" }
Problem with Costmap2DROS in Hydro
Question: Hi, I recently moved my code from Fuerte to Hydro and now have some issues with the costmap_2d, which I use as a local map in my obstacle avoidance module. Occasionally cells that should have higher costs due to the inflation step seem to have zero costs. When moving to Hydro I had to remove the call to Costmap2DROS::getCostmapCopy as it is not present anymore. Is it safe to just use the Costmap2D pointer returned by the new method Costmap2DROS::getCostmap()? Or do I have to pause the Costmap2DROS before using the underlying Costmap2D? For some reason the issue disappears when I set the output level of the Costmap2D to DEBUG during runtime. So it really might be some timing issue between updating and querying the Costmap2D. Any help is appreciated! Originally posted by Sebastian Kasperski on ROS Answers with karma: 1658 on 2014-02-26 Post score: 1 Answer: I found a solution to this. It seems to be necessary to lock the costmap before accessing its data by something like: boost::unique_lock< boost::shared_mutex > lock(*(myCostmap2DROS->getCostmap()->getLock())); Edit: Ok, to actually use the costmap, for example in a function, you would do it like this: void myFunc() { Costmap2D* pCostmap = myCostmap2DROS->getCostmap(); boost::unique_lock< boost::shared_mutex > lock(*(pCostmap->getLock())); // Now you can use the costmap. [...] } Originally posted by Sebastian Kasperski with karma: 1658 on 2014-03-04 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by BlitherPants on 2014-03-05: I think I might be having a problem similar to yours. Do you think you could elaborate on the solution a little, please? For instance, how are you storing the pointer returned by getCostmap(), and do you have to do anything to unlock the 2DROS map afterwards? Still a newbie. Thanks! 
Comment by Sebastian Kasperski on 2014-03-05: You don't need to permanently store the Costmap2D-Pointer, as you can just get from your Costmap2DROS whenever you need it. The costmap will be unlocked again, once the created boost::unique_lock goes out of scope, that is in the example above when myFunc() returns. Comment by BlitherPants on 2014-03-13: That is very helpful, thanks!
{ "domain": "robotics.stackexchange", "id": 17108, "tags": "navigation, ros-hydro, costmap-2d" }
What does the ordering of creation/annihilation operators mean?
Question: When a system is expressed in terms of creation and annihilation operators for bosonic/fermionic modes, what exactly is the physical meaning of the order in which the operators act? For example, for a fermionic system with states $i$ and $j$, $c_i c_j^\dagger$ is different from $c_j^\dagger c_i$ by a sign change, due to anticommutativity. I understand the mathematics of this, but what does it mean intuitively? The former would be described as destroying a particle in state $j$ "before" creating one in state $i$, but what does "before" actually mean in this context, since there's no notion of time? As another (bosonic) example, $a_i^\dagger a_i$ is clearly different from $a_ia_i^\dagger$, since acting the former on a vacuum state $|0\rangle$ gives zero while for the latter, $|0\rangle$ is an eigenstate, but again, what is the physical interpretation? My normal interpretation of commutativity as a statement regarding the effect of a measurement on a state fails here since creation/annihilation are obviously not observables. I hope the question makes sense and isn't too abstract! Answer: OP is basically asking for an intuitive understanding of operator ordering. Well, the quantum world is something that us Earthlings notoriously do not understand well. Often we start with a classical model with commuting quantities. When we next want to quantize the model, we at first do not know which way we should order the corresponding non-commuting quantum operators. Say for simplicity that the classical Hamiltonian $H=AB$ is a product of two classical quantities $A$ and $B$. And say that the corresponding two quantum operators $\hat{A}$ and $\hat{B}$ have a c-number commutator $[\hat{A},\hat{B}]~\propto~ \hbar{\bf 1}$. There are initially many ways to choose an operator ordering and to choose a representation (ket-space), which the quantum operators act on. 
Say that we have chosen a specific notion of ordering that we call normal ordering $:\hat{A}\hat{B}:$, and say that we have chosen a notion of Fock space vacuum. To parametrize our ignorance we now introduce a c-number parameter $c$, and define the quantum Hamiltonian as $$\hat{H}~=~:\hat{A}\hat{B}:~+~\hbar c{\bf 1}.$$ In this way, if we have made a wrong choice by normal ordering the operators, we can always absorb the error into the definition of the c-number parameter $c$. One can often limit the possible choices of $c$ further by demanding Hermiticity of $\hat{H}$ and imposing other physical requirements. For instance, in (Bosonic) string theory, a similar so-called intercept parameter $a$ is completely fixed by consistency requirements (Lorentz symmetry in the light-cone formulation; nilpotency of the BRST charge in the covariant formulation), see chapter 2 and 3 in Green, Schwarz and Witten, "Superstring theory", vol. 1. A similar story holds for Fermionic operators.
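The difference between the two orderings is exactly the c-number being absorbed into the constant $c$ above. A small NumPy sketch on a truncated Fock space (an illustrative toy, not part of the answer) makes this concrete for the bosonic ladder operators from the question:

```python
import numpy as np

# Bosonic ladder operators truncated to the first N number states |0>,...,|N-1>.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation: a|n> = sqrt(n)|n-1>
adag = a.conj().T                             # creation

# The two orderings differ by the c-number commutator [a, a^dagger] = 1
# (exact away from the truncation edge; the last diagonal entry is a
# cutoff artifact).
comm = a @ adag - adag @ a

vacuum = np.zeros(N)
vacuum[0] = 1.0
normal = adag @ a    # normal ordering: annihilates the vacuum
anti = a @ adag      # anti-normal ordering: vacuum is an eigenstate, eigenvalue 1
```

The question's observation is visible directly: `normal` sends $|0\rangle$ to zero while `anti` leaves it fixed, and their difference is the identity (times $\hbar$ in physical units), the ordering ambiguity the answer parametrizes by $c$.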
{ "domain": "physics.stackexchange", "id": 1355, "tags": "quantum-mechanics, quantum-field-theory, operators, vacuum, second-quantization" }
Units of vector differential operator del ($\nabla$)
Question: My book says that $\left[\nabla \cdot (\vec E \times \vec H)\right] = \mathrm{W/m^3}$. I see that $\vec E$ is in $\mathrm{V/m}$ and $\vec H$ is in $\mathrm{A/m}$, so these multiplied give $\mathrm{W/m^2}$, but how does dotting with $\nabla$ give another $\mathrm{m^{-1}}$? Answer: Note on notation: I use $[\cdot]$ to denote the units of the quantity in brackets. Derivatives always have units of $1/[\text{differentiation variable}]$. This can be clearly seen from the definition of the derivative in terms of difference quotients: $$ \partial_x f(x) := \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}.$$ So if $x$ has some unit $[x]$ then $\partial_x f$ will have units $[f]/[x]$. (The limit obviously does not change the units.) As $$\nabla = \begin{pmatrix} \partial_x \\ \partial_y \\ \partial_z \end{pmatrix}$$ and the coordinates in space carry the unit $\mathrm{m}$, you have that $$[\nabla \cdot \vec V] = [\partial_x V_x + \partial_y V_y + \partial_z V_z] = [\partial_x V_x] = [V_x]/[x] = [V_x]/\mathrm{m}.$$ (Where I arbitrarily chose the $x$-component of the vector, as all components have the same units.)
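The same point can be seen numerically: a finite-difference derivative divides differences of the samples by differences of the coordinate, so the result picks up exactly one factor of $1/[x]$. A short sketch (the quadratic field profile is made up purely for illustration):

```python
import numpy as np

# If S carries W/m^2 and x carries m, then np.gradient(S, x) divides
# sample differences by coordinate differences: units (W/m^2)/m = W/m^3.
x = np.linspace(0.0, 2.0, 201)    # positions, in metres
S = 3.0 * x**2                     # field samples, in W/m^2 (made-up profile)

dSdx = np.gradient(S, x)           # numerically carries [S]/[x] = W/m^3

# Rescaling the coordinate (m -> mm) rescales the derivative by 1/1000,
# which is precisely the extra 1/[x] factor described in the answer.
dSdx_mm = np.gradient(S, 1000.0 * x)
```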
{ "domain": "physics.stackexchange", "id": 22512, "tags": "electromagnetism, units" }
LSTM classification with different sizes
Question: I'm relatively new to the world of recurrent neural nets and I'm trying to build a classifier using an LSTM model to predict HIV activity from a given molecule (the original dataset can be found here ). I have sequences of different lengths (from few dozen to almost 400 characters) but I'm not sure how to proceed. Let's say that I have a dataset structured like so: import pandas as pd import random import string random.seed(42) seqs = [[random.choice(string.ascii_letters) for i in range(random.randint(1,10))] for i in range(5)] classes = [random.randint(0,1) for i in range(5)] df = pd.DataFrame({ "seq": seqs, "class": classes }) print(df) seq class 0 [b, V] 0 1 [p, o, i, V, g] 0 2 [f, L, B, c, b, f, n, o, G] 1 3 [b, J, m, T, P, S, I, A, o, C] 0 4 [r, Z, a, W, Z, k, S, B, v, r] 0 I know I should: one hot encode the elements, perform masking and padding But I don't know how to perform it in Keras/TF2 and I can't find any resources online that explain how to code something similar. Answer: masking and padding are indeed popular preprocessing choices when working with LSTM. The easiest way is to pad all the short sequences to the same length as the longest sequence. Here is an example of masking and padding in Keras https://www.tensorflow.org/guide/keras/masking_and_padding Is that what you're looking for? raw_inputs = [ [711, 632, 71], [73, 8, 3215, 55, 927], [83, 91, 1, 645, 1253, 927], ] # By default, this will pad using 0s; it is configurable via the # "value" parameter. # Note that you could "pre" padding (at the beginning) or # "post" padding (at the end). # We recommend using "post" padding when working with RNN layers # (in order to be able to use the # CuDNN implementation of the layers). 
padded_inputs = tf.keras.preprocessing.sequence.pad_sequences( raw_inputs, padding="post" ) print(padded_inputs) Output: [[ 711 632 71 0 0 0] [ 73 8 3215 55 927 0] [ 83 91 1 645 1253 927]] Besides, you can also add start and end tokens to your sequences, like this example (https://www.tensorflow.org/tutorials/text/nmt_with_attention) def preprocess_sentence(w): w = unicode_to_ascii(w.lower().strip()) # creating a space between a word and the punctuation following it # eg: "he is a boy." => "he is a boy ." # Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation w = re.sub(r"([?.!,¿])", r" \1 ", w) w = re.sub(r'[" "]+', " ", w) # replacing everything with space except (a-z, A-Z, ".", "?", "!", ",") w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w) w = w.strip() # adding a start and an end token to the sentence # so that the model know when to start and stop predicting. w = '<start> ' + w + ' <end>' return w
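For the question's character sequences, the same preprocessing can be written without any library (a library-free sketch with illustrative names, mirroring what `pad_sequences` plus a `Masking` layer would hand to an LSTM): integer-encode the characters, post-pad with 0, and one-hot encode with an all-zero row reserved for padding.

```python
import string

# Three of the question's example sequences, of different lengths.
seqs = [["b", "V"],
        ["p", "o", "i", "V", "g"],
        ["f", "L", "B", "c", "b", "f", "n", "o", "G"]]

vocab = {ch: i + 1 for i, ch in enumerate(string.ascii_letters)}  # 0 = padding

encoded = [[vocab[ch] for ch in s] for s in seqs]
max_len = max(len(s) for s in encoded)
padded = [s + [0] * (max_len - len(s)) for s in encoded]          # "post" padding

def one_hot(idx, depth):
    row = [0.0] * depth
    if idx > 0:              # padding stays an all-zero row, so it is maskable
        row[idx - 1] = 1.0
    return row

batch = [[one_hot(i, len(vocab)) for i in row] for row in padded]
```

The resulting `batch` has shape (3, max_len, 52): every sequence padded to the longest one, with timesteps that are all-zero rows exactly where the mask should suppress the LSTM's loss contribution.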
{ "domain": "datascience.stackexchange", "id": 9576, "tags": "python, deep-learning, keras, tensorflow, lstm" }
Phishing Project Webbug Implementation
Question: Here comes the next round. I've implemented some of the suggestions from the previous review. That being said, though, there are a few things that I have on the horizons but haven't yet put in that were suggested. I am currently working on implementing dependency injection so that the database object is not redundantly instantiated. I am also working on implementing query iteration instead of returning the result set all at once. As with previous reviews, I'm primarily looking to make sure that my implementation stays secure and efficient. I've added in a class that contains 2 static methods for generating logs on the server when something goes wrong. These functions generate the log and send an email. If the email fails, the log is still created. DBManager /** * DBManager constructor. */ public function __construct() { if($this->db == null) { try { $this->db = new PDO(getenv('DB_TYPE').':dbname='.getenv('DB_DATABASE').'1;host='.getenv('DB_HOST'), getenv('DB_USERNAME'), getenv('DB_PASSWORD')); } catch(\PDOException $pdoe) { ErrorLogging::logConnectError(__CLASS__,__FUNCTION__,$pdoe->getMessage(),$pdoe->getTrace()); } } } /** * query * Public facing method for executing queries. It will return the result set back. * * @param string $sql The query to be prepared and executed * @param array $bindings An array of query parameters * @return array Array of results from query * @throws \PDOException */ public function query($sql,$bindings) { if(is_null($this->db)) { throw new \PDOException(); } $result = $this->executePreparedStatement($sql,$bindings); return $result; } /** * executePreparedStatement * Prepares the query($sql), binds the parameters, executes the query, then returns the result set. 
* * @param string $sql The query to be prepared and executed * @param array $bindings An array of query parameters * @return array Array of results from the prepared statement * @throws QueryException Checks if prepared statement was successfully created and executed */ private function executePreparedStatement($sql,$bindings) { $stmt = $this->db->prepare($sql); if($stmt === false) { $message = "Failed to generate prepared statement.\nError Code: " . $this->db->errorCode() . "\nError Info: " . array_values($this->db->errorInfo()); throw new QueryException($message,$sql); } $result = $stmt->execute($bindings); if($result === false) { $message = "Failed to execute prepared statement.\nError Code: " . $this->db->errorCode() . "\nError Info: " . array_values($this->db->errorInfo()); throw new QueryException($message,$sql); } //fetchAll() is used for now. Based off a previous suggestion, iteration will be implemented at a later date. return $stmt->fetchAll(); } public function getErrorCode() { if(is_null($this->db)) { throw new \PDOException(); } return $this->db->errorCode(); } public function getErrorInfo() { if(is_null($this->db)) { throw new \PDOException(); } return $this->db->errorInfo(); } PhishingController /** * webbugRedirect * Handles when webbugs get called. 
If request URI contains the word 'email', executes email webbug otherwise executes website webbug * * @param string $id Contains UniqueURLId that references specific user and specific project ID */ public function webbugRedirect($id) { $urlId = substr($id,0,15); $projectId = substr($id,15,16); $db = new DBManager(); $sql = "SELECT USR_Username FROM gaig_users.users WHERE USR_UniqueURLId=?;"; $bindings = array($urlId); try { $result = $db->query($sql,$bindings); $username = $result[0]['USR_Username']; if(strpos($_SERVER['REQUEST_URI'],'email') !== false) { $this->webbugExecutionEmail($username,$urlId,$projectId); } else { $this->webbugExecutionWebsite($username); } } catch(QueryException $pdoe) { ErrorLogging::logQueryError(__CLASS__,__FUNCTION__,$pdoe,$db,array($urlId,$projectId)); } catch(\PDOException $pdoe) { ErrorLogging::logConnectError(__CLASS__,__FUNCTION__,$pdoe->getMessage(),$pdoe->getTrace()); } } /** * webbugRootExecution * Common values for webbug execution. Returns array of values to calling method. 
* * @param int $strLocation Index of UniqueURLId in REQUEST_URI * @return array|null Returns null if IP is hidden or not given, otherwise gives needed input */ private function webbugRootExecution($urlId,$projectId) { if(!empty($_SERVER['REMOTE_ADDR'])) { $db = new DBManager(); $ip = $_SERVER['REMOTE_ADDR']; $host = gethostbyaddr($_SERVER['REMOTE_ADDR']); $sql = "SELECT PRJ_ProjectName FROM gaig_users.projects WHERE PRJ_ProjectId=?;"; $bindings = array($projectId); try { $result = $db->query($sql,$bindings); $projectName = $result[0]['PRJ_ProjectName']; $date = date("Y-m-d"); $time = date("H:i:s"); return array($ip,$host,$projectName,$date,$time); } catch(QueryException $pdoe) { ErrorLogging::logQueryError(__CLASS__,__FUNCTION__,$pdoe,$db,array($urlId,$projectId)); } catch(\PDOException $pdoe) { ErrorLogging::logConnectError(__CLASS__,__FUNCTION__,$pdoe->getMessage(),$pdoe->getTrace()); } } return null; } /** * webbugExecutionEmail * Email specific execution of the webbug tracker. * * @param string $username Username of user passed from webbugRedirect */ private function webbugExecutionEmail($username,$urlId,$projectId) { $db = new DBManager(); $data = $this->webbugRootExecution($urlId,$projectId); if(!is_null($data)) { $sql = "INSERT INTO gaig_users.email_tracking (EML_Id,EML_Ip,EML_Host,EML_Username,EML_ProjectName, EML_AccessDate,EML_AccessTime) VALUES (null,?,?,?,?,?,?);"; $bindings = array($data[0],$data[1],$username,$data[2],$data[3],$data[4]); try { $db->query($sql,$bindings); } catch(QueryException $pdoe) { ErrorLogging::logQueryError(__CLASS__,__FUNCTION__,$pdoe,$db,array($urlId,$projectId)); } catch(\PDOException $pdoe) { ErrorLogging::logConnectError(__CLASS__,__FUNCTION__,$pdoe->getMessage(),$pdoe->getTrace()); } } } ErrorLogging - Mailing commented out for testing purposes /** * logConnectError * Emails devs that a connection error has occurred and then generates .log file * * @param string $class Class Name * @param string $method Function Name * 
@param string $message Error Message * @param array $trace Exception Trace */ public static function logConnectError($class, $method, $message, $trace) { $path = self::createFile($class); $headers = array('server'=>$path[1],'method'=>$method,'path'=>$path[2]); $mail = false;/*Mail::send(['html' => 'emails.errors.pdoconnectexception'],$headers, function($m) { $m->from(''); $m->to('')->subject('CRITICAL: PDOConnectException'); });*/ date_default_timezone_set('America/New_York'); $message = "Error in $class $method \n Error logged at: " . date('m/d/Y h:i:s a') . "\n Email sent to Devs: $mail \n Error logged on IP: $path[0] \n Error Message: $message" . "\n Error Trace: \n" . json_encode($trace) . "\n ------------------------------------- \n"; error_log($message,3,$path[2]); } /** * logQueryError * Emails devs that a connection error has occurred and then generates .log file * * @param string $class Class Name * @param string $method Function Name * @param QueryException $exception Exception object to retrieve trace, message, and SQL * @param DBManager $db Database object to retrieve errors * @param array $params urlId and projectId */ public static function logQueryError($class, $method, $exception, $db, $params) { $path = self::createFile($class); $trace = $exception->getTrace(); $errorcode = $db->getErrorCode(); $errorinfo = $db->getErrorInfo(); $ip = $_SERVER['REMOTE_ADDR']; $message = $exception->getMessage(); $sql = $exception->getQuery(); $headers = array('trace'=>$trace,'errorcode'=>$errorcode,'erorrinfo'=>$errorinfo,'ip'=>$ip,'message'=>$message,'sql'=>$sql,'params'=>$params); $mail = false;/*Mail::send(['html' => 'emails.errors.pdoqueryexception'],$headers, function($m) { $m->from(''); $m->to('')->subject('PDOQueryException WebbugRedirect'); });*/ date_default_timezone_set('America/New_York'); $message = "Error in $class $method \n Error logged at: " . date('m/d/Y h:i:s a') . 
"\n Email sent to Devs: $mail \n Error logged on IP: $path[0] \n Error Message: $message" . "\n Error Trace: \n" . json_encode($trace) . "\n ------------------------------------- \n"; error_log($message,3,$path[2]); } /** * createFile * Executes common processes for logQueryError and logConnectionError. * * @param string $class Class Name * @return array */ private static function createFile($class) { $remote_addr = $_SERVER['REMOTE_ADDR']; $server_addr = getenv('SERVER_ADDR'); $index = strpos($class,'\\'); $path = '../storage/logs/' . basename($class) . '_' . date('m-d-Y') . '.log'; if(!file_exists($path)) { $file = fopen($path,'w'); fclose($file); } return array($remote_addr,$server_addr,$path); } As always, thanks for the feedback! Looking forward to some responses! Answer: So having participated in previous reviews, I can say I like some of the evolution of this code, but have a few concerns as well. 1) I still am not understanding the reason for the DBManager class to really exist. My expounded a lot on this in last review, so won't go into that here. Even with this latest review, I don't see what value this class is adding. 2) You use this sort of validation pattern throughout your DMManager class: if(is_null($this->db)) { throw new \PDOException(); } I think it is good to fail loudly here, however I do not think you should not be throwing a PDOException and you should not be throwing an exception without any message at all. The PDO classes should be throwing PDO* exceptions, not your custom class. After all, who is to say PDO is the underlying cause to why you don't have a valid PDO instance to work with. 
Also, if you truly want to test that you have a valid PDO object, the validation might look like: if(empty($this->db) || $this->db instanceof PDO === false) { throw new \Exception('You need a message here'); } Ultimately, if you go with a dependency injection approach, you could actually remove this validation from each method, as you could then assure that a valid PDO object was passed in the constructor and set on the object such that all other methods can reliably utilize it. 3) I think doing away with custom PDO Exception classes is the right approach, but the way you have now split up business and logging logic seems a little strange. You are at the end of the day now utilizing PHP's error_log() functionality vs. email as previously done. This seems to be an improvement. I think however that you moved too much of the business logic into these logging methods, as now the logging methods have a dependency on the DBManager object and are very limited in terms of use beyond DBManager use cases. In the previous review, I suggested a central logging mechanism (of which error_log() is certainly one example) vs. having the DBManager class directly email errors out. My stance really hasn't changed in that the DBManager class should know nothing about the specific implementation of the logging mechanism, only what logging mechanism to use. For this reason, I would suggest that these two methods might actually be methods on the DBManager class, because what they are doing is providing the business logic for forming the error message and then handing the logging of the message to the central logging mechanism. They could actually help bring meaning to the existence of that DBManager class if you need to provide some error handling behavior above and beyond what PDO gives you. 4) I don't understand the createFile() method at all. 
If you are relying on a central logging mechanism like error_log(), you should not have to deal with the creation of new log files; this should be handled by the logging mechanism. I also don't understand the return value for this method. What do the remote and server address variables have to do with anything? I can see that you are trying to log to individual log files by class and date. I question the strategy of the by-class approach in that you might find it more difficult to debug your application if you can't see a series of interconnected errors in a single log file vs. trying to go through a bunch of different files comparing timestamps and such. I would think the by-date functionality would typically be handled by whatever log rotation logic you have on the server; I don't know that I would want to enforce this in some class somewhere. 5) If you take a look at the PhishingController class, you will note a lot of repeated code. It seems like there are really maybe 3-4 lines of code that are truly specific to each method (to generate SQL and parameters). Outside of that, the code commonly instantiates the DB, queries the DB, and handles DB errors. This could likely be refactored into a single method that these other methods utilize to do the common work. I like that you are trying to catch multiple exception types and trigger different behaviors, but I wonder if this class is where that logic should be. If you take the comments above around possibly moving the error logging logic into the DBManager class, then you could similarly have all this logic about when to log a query error vs. a connect error in the DBManager class and not in this class. Think about it for a second: what happens if you get one of your two defined error modes that you are concerned with? Should the DBManager class own reporting and logging these errors, or the PhishingController class which is using DBManager? From the PhishingController class perspective, why should it log DB errors? 
All it should care about is whether the query executed successfully or not, not whether the error gets logged successfully. The only question you should ask yourself for the PhishingController class is what behavior it needs to provide to its caller in the cases where the underlying query fails. As shown now, the class just eats the underlying exceptions and returns null. This may be totally appropriate behavior, or you may decide that this class should in turn throw its own exception or rethrow the underlying exception if this method call is going to be critical and the caller needs to be aware of problems with underlying dependencies.
{ "domain": "codereview.stackexchange", "id": 20941, "tags": "php, laravel" }
Lynxmotion AL5D arm interfacing with MoveIt
Question: I have a Lynxmotion AL5D arm and I want to use it with ROS, mainly with MoveIt. Can anyone help me with how I can do this? I even have my own URDF file that I can visualize in RViz. If anyone has any links or knowledge, please share them with me. Originally posted by lagankapoor on ROS Answers with karma: 216 on 2018-01-18 Post score: 0 Original comments Comment by mgruhler on 2018-02-01: https://answers.ros.org/question/211557/connect-lynxmotion-al5d-arm-to-ros/ is still relevant, I guess... Answer: Now I have a GitHub repo. Thanks all, here is the GitHub repo: https://github.com/ibtisamtauhidi/Lynx-AL5X-Controller Originally posted by lagankapoor with karma: 216 on 2018-02-01 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by mgruhler on 2018-02-01: Well, I don't think that this is an answer :-) But happy you figured something out. Please share what this is so others might benefit... Comment by lagankapoor on 2018-02-01: yeah done sir Comment by jayess on 2018-02-01: If you put quotes around a URL then a link doesn't get rendered. If you just put the URL in with nothing around it a link does get rendered. Comment by lagankapoor on 2018-02-02: @jayess someone from ROS community say that to me so that link not disappear after changing URL Comment by jayess on 2018-02-02: I don't understand your comment, but if you look at every instance in which you put quotes around a link that link renders as text not as a link. I removed the ' from this one as a demonstration. Comment by lagankapoor on 2018-02-02: @jayess thanks alot Comment by jayess on 2018-02-02: No problem Comment by lagankapoor on 2018-02-02: @jayess may i get your Email for more information ?
{ "domain": "robotics.stackexchange", "id": 29780, "tags": "robotic-arm, ros, arduino, rviz, moveit" }
Cannot import custom rosmsg (actionlib)
Question: Hi, I have a very strange problem related to my custom action msg. I compiled my msg, but I can't import it. When I try to run my py action server I am getting the following error: Traceback (most recent call last): File "/home/ivan/infocom_robotics_ws/src/dock_station_goal/src/turn_for_specific_angle_server.py", line 5, in from dock_station_goal.msg import DoTurnAction, DoTurnFeedback, DoTurnResult ImportError: No module named msg But at the same time I can find my msg with: rosmsg list | grep DoTurn ivan@ivan-desktop:~$ rosmsg list | grep DoTurn dock_station_goal/DoTurnAction dock_station_goal/DoTurnActionFeedback dock_station_goal/DoTurnActionGoal dock_station_goal/DoTurnActionResult dock_station_goal/DoTurnFeedback dock_station_goal/DoTurnGoal dock_station_goal/DoTurnResult ivan@ivan-desktop:~$ Another strange thing is that when I am checking: rosmsg packages, the pkgs with my custom msgs are doubled: ivan@ivan-desktop:~$ rosmsg packages actionlib actionlib_msgs actionlib_tutorials base_local_planner beginner_tutorials beginner_tutorials bond control_msgs costmap_2d diagnostic_msgs dock_station_goal dock_station_goal dynamic_reconfigure geometry_msgs map_msgs move_base_msgs nav_msgs realsense2_camera robot_drive roscpp rosgraph_msgs rospy_tutorials rosserial_msgs sensor_msgs shape_msgs smach_msgs std_msgs stereo_msgs tf tf2_msgs trajectory_msgs turtle_actionlib turtlebot3_msgs turtlebot_msgs turtlesim visualization_msgs You can see that beginner_tutorials and dock_station_goal are doubled. I am stuck and going crazy((( Can somebody help me? Thank you! 
EDIT I am using catkin_make, the output of $ROS_PACKAGE_PATH is: /home/ivan/infocom_robotics_ws/src:/opt/ros/melodic/share And my Cmake list is: cmake_minimum_required(VERSION 2.8.3) project(dock_station_goal) find_package(catkin REQUIRED COMPONENTS message_generation actionlib actionlib_msgs move_base_msgs roscpp rospy std_msgs genmsg ) add_action_files( DIRECTORY action FILES DoDocking.action DoTurn.action ) generate_messages( DEPENDENCIES std_msgs actionlib_msgs ) catkin_package( LIBRARIES dock_station_goal CATKIN_DEPENDS move_base_msgs roscpp rospy std_msgs actionlib_msgs DEPENDS system_lib DEPENDS message_runtime ) include_directories( ${catkin_INCLUDE_DIRS} ) > > > ${catkin_EXPORTED_TARGETS}) ${catkin_EXPORTED_TARGETS}) Originally posted by Yehor on ROS Answers with karma: 166 on 2019-12-17 Post score: 0 Original comments Comment by Yehor on 2019-12-17: Another weird thing is that I can import my msg from /home/usr/my_ws by running python! But I cant import from my_pkg/src Comment by mgruhler on 2019-12-17: Could you please edit your post and provide the following information: the CMakeLists.txt of your package (remove all comments, though), the build tool you use (catkin_make or catkin_tools / catkin build) the output of echo $ROS_PACKAGE_PATH if possible, a link to your package source code (GitHub or similar) Comment by Yehor on 2019-12-17: @mgruhler I edited my post Unfortunately I don't have this project on a github Answer: Sorry, it was my fail. I had the file with the same name as pkg in that folder and didn't notice that. Thank you for the response. I will delete the question Originally posted by Yehor with karma: 166 on 2019-12-17 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2019-12-17:\ I will delete the question I've undeleted your question, as it is a valid question. The fact that it has a "trivial" answer does not change this. I've also accepted your comment as the answer.
{ "domain": "robotics.stackexchange", "id": 34161, "tags": "ros, ros-melodic, genmsg, msg" }
Why does taking advantage of locality matter in multithreaded systems?
Question: As we all know, when a given thread/process reaches a memory address it does not have cached, the execution will (for the most part) freeze up until said data is fetched from memory. What I don't understand, is why in multithreaded systems, we can't save ourselves the headache of data-oriented design. Why can't the processor/OS simply do work elsewhere on a different thread until the data is received? I couldn't find a good post on this exact question, and this may just be obvious to others. I only know so much about the pipeline and such so there could be a very obvious reason for this, I simply don't know why. Answer: Ask yourself: How long does it take to load data from memory, and how long does it take to stop one thread from running and start another thread? Starting another thread would usually take a lot longer, plus a new thread would have the exact same problem, having to wait for data to be ready. An exception is hyperthreading cores. A hyperthreading core can officially run two threads simultaneously, but doesn’t actually have twice the processing power of a normal core, so it cannot actually perform twice as many instructions or anywhere near that. Such a core can however process operations of the second thread immediately when the first thread cannot proceed for any reason. Now it’s debatable if hyperthreading is actually a win, especially since it gives hackers ways that operations in one thread can very subtly affect operations in another thread, leading to massive security problems. As a result, many high-performance ARM processors don’t implement hyperthreading at all.
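The first question in the answer can be made concrete with a rough, machine-dependent measurement. This is only an illustrative sketch (the DRAM figure is a typical order-of-magnitude assumption, not measured here): it times starting and joining a do-nothing thread, the kind of cost a "just run something else" scheme would have to pay, and compares it against the ~100 ns scale of a main-memory access.

```python
import threading
import time

def noop():
    pass

# Time 200 start/join cycles of a do-nothing thread.
t0 = time.perf_counter()
for _ in range(200):
    th = threading.Thread(target=noop)
    th.start()
    th.join()
per_thread_us = (time.perf_counter() - t0) / 200 * 1e6

# A main-memory access is on the order of 0.1 us (~100 ns); creating,
# scheduling and tearing down even a trivial thread costs far more,
# typically tens to hundreds of microseconds.
DRAM_ACCESS_US = 0.1
print(f"thread start/join: {per_thread_us:.1f} us vs DRAM access ~{DRAM_ACCESS_US} us")
assert per_thread_us > DRAM_ACCESS_US
```

The exact numbers vary by machine and runtime, but the gap of several orders of magnitude is the point: by the time another thread is running, the stalled load would long since have completed.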
{ "domain": "cs.stackexchange", "id": 18147, "tags": "concurrency, process-scheduling, cpu-pipelines" }
Is deterministic pseudorandomness possibly stronger than randomness in parallel?
Question: Let the class BPNC (the combination of $\mathsf{BPP}$ and $\mathsf{NC}$) be log depth parallel algorithms with bounded error probability and access to a random source (I'm not sure if this has a different name). Define the class DBPNC similarly, except that all processes have random access into a random stream of bits fixed at algorithm startup. In other words, each process in BPNC has access to a distinct random source, while DBPNC algorithms have a shared perfectly random counter mode generator. Do we know whether BPNC = DBPNC? Answer: They are the same: BPNC = DBPNC. Say a BPNC machine is given as input a DBPNC program to simulate. Execute the program in lock step. First assume that the indices between different steps are distinct, so that we do not need to remember old random bits. At each step, each processor asks for a random bit at a specific index into the shared stream. Compute and distribute the random bits as follows: Sort the indices among the processors and remember the origin of each bit. Coordinate among adjacent processors to compute the ranges of identical indices. Compute each random bit on the first processor that owns it after sorting. Scatter throughout the identical ranges. Send back to the origin process (if necessary by reversing the sorting algorithm). To allow processors to ask for old indices, have each processor remember the (results) of all previous sorting epochs. To check whether newly requested indices occurred in a given previous epoch, do Sort the new indices. Merge the lists of old and new indices (e.g., with Cole 1988). Scatter appropriately.
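A sequential sketch of the routing idea in the answer (function name and data layout are my own, not from the answer): each processor asks for a bit at some index into the shared stream; after sorting the (index, processor) pairs, each distinct index is drawn exactly once and the bit is routed back to every requester, so processors asking for the same index always agree, as if they were reading one fixed stream.

```python
import random

def serve_shared_stream(requested_indices, seed=0):
    """requested_indices[p] = stream index that processor p asks for.
    Returns the list of bits handed back to each processor."""
    rng = random.Random(seed)
    # Tag each request with its origin processor, then sort by index
    # (the parallel version would use a parallel sort of the same pairs).
    tagged = sorted((idx, p) for p, idx in enumerate(requested_indices))
    bits = {}
    results = [None] * len(requested_indices)
    for idx, p in tagged:
        if idx not in bits:
            # the first processor in sorted order owning this index draws the bit
            bits[idx] = rng.getrandbits(1)
        # scatter along the run of identical indices / send back to the origin
        results[p] = bits[idx]
    return results

out = serve_shared_stream([5, 2, 5, 7, 2])
# Processors 0 and 2 asked for index 5, processors 1 and 4 for index 2:
# each pair must see the same bit, as if reading one shared stream.
assert out[0] == out[2] and out[1] == out[4]
```

The sort, duplicate detection, and scatter steps here are exactly the parts the answer implements with parallel sorting and adjacent-processor coordination to stay within log depth.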
{ "domain": "cstheory.stackexchange", "id": 1876, "tags": "cc.complexity-theory, complexity-classes, dc.parallel-comp, randomness, pseudorandomness" }
What will go wrong if a recursive record type has a negative eta rule?
Question: In the context of Agda-like dependent type theory: This short paper https://jesper.sikanda.be/files/vectors-are-records-too.pdf says some inductive types can be seen as records, for example Vector, the fixed-length list, can be seen as an inductively-defined family of non-recursive types. But they argue that, for example, the natural number type should not have an eta rule because it is a recursive type (the original paper says N = Unit \/ N is non-terminating.) So what will go wrong if we have this type: data out where cons : out => out in : out => out in (cons a) = a and give it a negative eta-rule: (a: out) then a = cons (in a) judgementally Can it prove False? Or is this just a bad idea...? edit: It seems Agda has eta-rules for recursive records? but not for the one previously defined, see this issue https://github.com/agda/agda/issues/402 . but the previously defined one is ruled out, I think, by implementation issues, not a theoretical one? Answer: Having a recursive record type with eta-equality wouldn't destroy consistency of the theory, but it would destroy decidability of typechecking. For example, let's define your out type as a record type in Agda: record Out : Set where inductive constructor cons field in : Out Agda doesn't use eta-equality for this type. Suppose it did; then Agda's typechecker would loop whenever it tries to solve an equation of the form x = y : Out (where x and y are two variables or neutral terms): x = y iff in x = in y iff in (in x) = in (in y) iff in (in (in x)) = in (in (in y)) ...
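The loop at the end can be simulated directly. Below is a toy (and entirely hypothetical) equality checker for neutral terms of Out: with the eta rule, deciding x = y keeps reducing to deciding in x = in y, so the recursion never bottoms out on two distinct variables; a fuel counter stands in for the typechecker's divergence.

```python
def equal(x, y, fuel=50):
    """Naive definitional equality for neutral terms of Out, with the
    eta rule  a = cons (in a), so:  x = y  iff  in x = in y.
    Terms are tuples: ("var", name) or ("in", term)."""
    if x == y:
        return True
    if fuel == 0:
        return None  # a real typechecker would loop forever here
    # eta: compare the (single) field projections instead
    return equal(("in", x), ("in", y), fuel - 1)

assert equal(("var", "x"), ("var", "x")) is True
# Distinct variables: the eta-expansion chain never terminates on its own.
assert equal(("var", "x"), ("var", "y")) is None
```

Identical terms succeed immediately, but for distinct neutrals every eta step produces another pair of distinct neutrals, which is exactly the non-terminating iff chain above.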
{ "domain": "cstheory.stackexchange", "id": 4670, "tags": "type-theory, dependent-type" }
Video Game AI State Change
Question: I have here a snippet of code from a simple video game AI enemy. The basic idea behind the enemy is that he can be in one of a few states. By default he is in a patrolling state, where he moves around at a slow speed to random points within a circle. While in this state, he is rolling every X seconds to switch to one of his more interesting states. Currently, he may either transition into a state where he spins rapidly while patrolling as stated above, but at a much faster move speed, or he can instead go into a stationary defensive spin. Here is the current method which handles this state change: IEnumerator RollForStateChange() { while (true) { yield return new WaitForSeconds(RollTickTime); if (CanRollForStateChange) { float spinAttackRoll = Random.Range(0, 100); if (spinAttackRoll <= SpinAttackChancePerTick) { StartCoroutine(SpinAttackTimer()); continue; } float spinDefenseRoll = Random.Range(0, 100); if(spinDefenseRoll <= SpinDefenceChancePerTick) { StartCoroutine(DefenseSpinTimer()); continue; } } } } While this sort of dice-rolling method works just fine for now, I can't help but feel that it is more likely to choose the spin attack, since that roll goes first. I can also see this becoming more inefficient if I wanted to add more states for the enemy to transition into. However, I am struggling to come up with a more elegant way to decide which state the enemy should go into. Answer: You are right that this roll is biased. Instead of using a fixed range and less-than for your rolls, you should just roll once and use specific ranges to choose which action is taken. Something like: int spinAttackChancePerTickMin = 0; int spinAttackChancePerTickMax = 50; int spinDefenceChancePerTickMin = 50; int spinDefenceChancePerTickMax = 100; int roll = Random.Range(0, 100); if (roll >= spinAttackChancePerTickMin && roll < spinAttackChancePerTickMax) { StartCoroutine(SpinAttackTimer()); continue; } StartCoroutine(DefenseSpinTimer());
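The "specific ranges" idea also scales to any number of states if the chances live in a table and a single roll is compared against cumulative bounds. A language-neutral sketch of that generalization (in Python rather than C#, with made-up state names and percentages):

```python
import random

def pick_state(transition_chances, default_state, rng=random):
    """transition_chances: ordered (state, percent-chance-per-tick) pairs.
    One roll in [0, 100); each state owns a non-overlapping sub-range,
    so adding more states never biases the earlier ones."""
    roll = rng.uniform(0, 100)
    cumulative = 0.0
    for state, chance in transition_chances:
        cumulative += chance
        if roll < cumulative:
            return state
    return default_state  # remaining probability mass: no transition

chances = [("SpinAttack", 20), ("DefenseSpin", 30)]
rng = random.Random(1)
counts = {"SpinAttack": 0, "DefenseSpin": 0, "Patrol": 0}
for _ in range(10_000):
    counts[pick_state(chances, "Patrol", rng)] += 1
# Roughly 20% / 30% / 50% of the rolls land in each range.
assert counts["SpinAttack"] < counts["DefenseSpin"] < counts["Patrol"]
```

Adding a new state is then a one-line change to the table instead of another roll-and-compare block in the coroutine.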
{ "domain": "codereview.stackexchange", "id": 14555, "tags": "c#, ai, unity3d" }
How would the Internet work between planets?
Question: Say that in the future there are people on other planets, e.g., Mars. The one-way communication delay to Mars is between 3 and 21 minutes. Say we want to connect the people on Mars to the Internet. How would we deal with the communication delay? For example, we don't want to get request timeouts as fast as we have them normally (on Earth). Would network protocols need to be adjusted or modified to deal with the increased latency? Answer: There would be special protocols for interplanetary communication. Right now there are already different routing protocols for connecting different cities, different countries and different continents. BGP (Border Gateway Protocol) is a routing protocol used mainly between continents. It has different parameters and behavior than OSPF or RIP or other protocols used between different LANs or WANs. Read about BGP. Some other protocols would probably be developed for even larger distances and for connecting different planets.
{ "domain": "cs.stackexchange", "id": 7027, "tags": "computer-networks" }
Energy Conservation Dilemma
Question: Assume that a man is travelling in a space ship at a certain relativistic speed with respect to a man at rest at some point in space, such that 3 minutes in the ship is equal to 5 minutes for the person at rest. Also assume that the man in the ship has a lighter which contains gas of a certain amount such that the lighter can be lit for 5 minutes . Now if the man in the space ship lights the lighter for 3 minutes, then he would have 2 minutes' worth of gas left, but the stationary observer would have seen light emitted for about 5 minutes (since 3 in that space ship = 5 minutes for the stationary observer) How is it possible for the stationary observer to see light for 5 minutes? And in this case, how is energy conserved? Answer: From the perspective of the stationary observer, the light is dimmer. Why? Because the chemical reaction is happening more slowly, so the fire emits fewer photons per second. The flame burns for longer, but it emits less energy per second. Both observers will agree on the total energy emitted by the flame (once they have accounted for possible red-shift) and thus total energy is conserved.
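The photon-count bookkeeping in the answer can be written out explicitly, using the question's own numbers so that the dilation factor is

\begin{align}
\gamma = \frac{\Delta t_{\text{rest}}}{\Delta t_{\text{ship}}} = \frac{5\ \text{min}}{3\ \text{min}} = \frac{5}{3}
\end{align}

If the flame emits photons at rate \$r\$ in the ship frame, the rest observer sees the slowed rate \$r/\gamma\$, but for the dilated duration \$\gamma \cdot 3\ \text{min} = 5\ \text{min}\$, so the total count agrees:

\begin{align}
N = \underbrace{\frac{r}{\gamma}}_{\text{slower emission}} \times \underbrace{\gamma \cdot 3\ \text{min}}_{\text{longer burn}} = r \cdot 3\ \text{min}
\end{align}

Matching the total energy, not just the photon count, then requires the red-shift/Doppler accounting the answer mentions, since the per-photon energy differs between frames.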
{ "domain": "physics.stackexchange", "id": 42589, "tags": "special-relativity, energy-conservation, inertial-frames, time-dilation, observers" }
what is needed in order to add support for building archlinux packages from ROS packages automatically?
Question: ROS supports the generation of debian packages via the buildfarm. Currently ArchLinux supports ROS via the AUR, which means users need to compile (via archlinux's "makepkg" tool) packages themselves, instead of simply downloading binary packages as it happens for Ubuntu. What would be needed in order to allow either officially or unofficially the generation of ArchLinux binary packages? It would be nice to at least be able to privately run a server with which binary archlinux packages could be created for ROS, using standard ROS tools. At the moment, Python scripts generating ArchLinux PKGBUILDs for ROS packages already exist, so maybe this could be added to existing tools, although I don't know where to start. Originally posted by Matias on ROS Answers with karma: 122 on 2016-02-09 Post score: 1 Original comments Comment by gvdhoorn on 2016-02-10: Bloom is the tool used in the release process that generates the required debian directories and spec files. Adding another platform would require support at least there. In addition, the buildfarm would need to be updated to run the needed build tools. Comment by gvdhoorn on 2016-02-10: @William and @tfoote can probably provide a much more in-depth answer on Bloom and the buildfarm respectively. Answer: Support for automated building of binary packages for a different packaging system requires three things: making all third-party dependencies which we use from Debian / Ubuntu available (e.g. PCL, gazebo, etc.)
update bloom to generate the necessary information for the packaging system (currently it supports Debians as well as RPMs) update the ros_buildfarm to run the actual packaging jobs to produce the binary packages of your choice I would not recommend doing the packaging by hand, simply because it is a huge continuous effort and will likely always suffer from being out-of-date, since packages get updated frequently and there are just a lot of them Originally posted by Dirk Thomas with karma: 16276 on 2016-02-10 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Matias on 2016-02-10: For the first point, dependencies are already met but packages need to be built from AUR (Arch User Repository). If the changes made to ros_buildfarm include retrieving and building these packages, would that be ok? Comment by Matias on 2016-02-10: Also, if ros_buildfarm is extended for Arch support, would this mean that eventually an arch package repository would be available from which users would simply download packages? Or would this require a separate server? Comment by Dirk Thomas on 2016-02-10: The third-party packages are not built by the ROS build farm. They are already available in the official Debian / Ubuntu repositories. So there is no infrastructure to help building them for Arch. That should happen externally. Comment by Dirk Thomas on 2016-02-10: Yes, ros_buildfarm would drop the binary packages into an Arch repository (however that looks). I guess the same server as for the Debian/Ubuntu apt repo can be used. The buildfarm_deployment repo is used to provision the server. Comment by tfoote on 2017-12-05: BTW There's a new tool for automating metadata generation for rosdistros, superflore: https://github.com/ros-infrastructure/superflore It's being actively used for Gentoo, and Open Embedded support is under development. It could relatively easily be extended to support Arch as well.
{ "domain": "robotics.stackexchange", "id": 23697, "tags": "build, bloom-release, buildfarm, release, archlinux" }
Xacro include breaks including xacro file
Question: I have a xacro file named "diff_wheeled_robot.urdf.xacro" to describe a robot and it works just fine. It is including other xacro files and their xacro macros. Now I want to include another xacro file describing a laser. The following code is the laser file ("rplidar.urdf.xacro"): <?xml version="1.0"?> <robot name="rplidar" xmlns:xacro="http://www.ros.org/wiki/xacro"> <xacro:include filename="$(find testbot_description)/urdf/inertias.urdf.xacro" /> <xacro:include filename="$(find testbot_description)/urdf/materials.urdf.xacro" /> <xacro:property name="rplidar_mass" value="0.2" /> <xacro:property name="rplidar_radius" value="0.0757" /> <xacro:property name="rplidar_height" value="0.04080" /> <xacro:macro name="rplidar" params="parent translateX translateY"> <link name="rplidar_link"> <inertial> <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/> <mass value="${rplidar_mass}"/> <cylinder_inertia m="${rplidar_mass}" r="${rplidar_radius}" h="${rplidar_height}"/> </inertial> <visual> <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/> <geometry> <cylinder length="${rplidar_height}" radius="${rplidar_radius}" /> </geometry> <material name="Blue" /> </visual> <collision> <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/> <geometry> <cylinder length="${rplidar_height}" radius="${rplidar_radius}" /> </geometry> </collision> </link> <joint name="rplidar_joint" type="fixed"> <parent link="${parent}"/> <child link="rplidar_link"/> </joint> <gazebo reference="rplidar_link"> <material>Gazebo/Blue</material> <turnGravityOff>false</turnGravityOff> <sensor type="ray" name="rplidar_sensor"> <pose>${rplidar_radius/2} 0 0 0 0 0</pose> <visualize>true</visualize> <update_rate>10</update_rate> <ray> <scan> <horizontal> <samples>8000</samples> <min_angle>0</min_angle> <max_angle>3.14159265359</max_angle> </horizontal> </scan> <range> <min>0.15</min> <max>8.0</max> <resolution>0.001</resolution> </range> </ray> <plugin name="gazebo_ros_head_rplidar_controller" filename="libgazebo_ros_laser.so"> 
<topicName>/scan</topicName> <frameName>rplidar_link</frameName> </plugin> </sensor> </gazebo> </xacro:macro> </robot> But when I try to include it via <xacro:include filename="${find testbot_description)/urdf/rplidar.urdf.xacro" /> and I run rosrun xacro xacro --inorder diff_wheeled_robot.urdf.xacro I get an error: No such file or directory: None None when processing file: diff_wheeled_robot.urdf.xacro What am I doing wrong? ROS Kinetic Ubuntu 16.04 Originally posted by Julian98 on ROS Answers with karma: 69 on 2018-09-29 Post score: 0 Answer: <xacro:include filename="${find testbot_description)/urdf/rplidar.urdf.xacro" /> It should be $(find ..), not ${find ..). Originally posted by gvdhoorn with karma: 86574 on 2018-09-29 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Julian98 on 2018-09-30: Thank you ;)
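For reference, the include line from the question with that one-character fix applied:

```xml
<!-- "$(find ...)" with parentheses, not "${find ...)" -->
<xacro:include filename="$(find testbot_description)/urdf/rplidar.urdf.xacro" />
```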
{ "domain": "robotics.stackexchange", "id": 31838, "tags": "ros, include, ros-kinetic, xacro" }
Can the header line contain arbitrary text in FASTA format?
Question: The following is a valid .fasta file content: >HSBGPG Human gene for bone gla protein (BGP) GGCAGATTCCCCCTAGACCCGCCCGCACCATGGTCAGGCATGCCCCTCCTCATCGCTGGGCACAGCCCAGAGGGT ATAAACAGTGCTGGAGGCTGGCGGGGCAGGCCAGCTGAGTCCTGAGCAGCAGCCCAGCGCAGCCACCGAGACACC ATGAG. Is this also? >Arbirary_Name_iJustCameUP_with_and_other_local_identifiers GGCAGATTCCCCCTAGACCCGCCCGCACCATGGTCAGGCATGCCCCTCCTCATCGCTGGGCACAGCCCAGAGGGT ATAAACAGTGCTGGAGGCTGGCGGGGCAGGCCAGCTGAGTCCTGAGCAGCAGCCCAGCGCAGCCACCGAGACACC ATGAG. Answer: NCBI's FASTA format description: https://www.ncbi.nlm.nih.gov/genbank/fastaformat/ NCBI's BLAST page describing valid FASTA input: https://blast.ncbi.nlm.nih.gov/Blast.cgi?CMD=Web&PAGE_TYPE=BlastDocs&DOC_TYPE=BlastHelp NCBI's SNP page describing FASTA format: https://www.ncbi.nlm.nih.gov/projects/SNP/snp_legend.cgi?legend=fasta UniProt's FASTA-header description: https://www.uniprot.org/help/fasta-headers Wikipedia reference (indicating format origin): https://en.wikipedia.org/wiki/FASTA_format FASTA program (origin of format, per Wikipedia): https://fasta.bioch.virginia.edu/wrp_fasta/fasta_guide.pdf Harvard PolyPhen page describing FASTA format: http://genetics.bwh.harvard.edu/pph/FASTA.html FASTA file format (from Pacific Biosciences): https://pacbiofileformats.readthedocs.io/en/3.0/FASTA.html
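As the linked format descriptions note, the common convention is that everything after the '>' up to the first whitespace is treated as the sequence identifier, and the rest of the line is a free-text description. A small sketch of that split (the helper name is mine), applied to the question's own headers:

```python
def split_fasta_header(line):
    """Split a FASTA header line into (identifier, description),
    following the usual '>' + id + whitespace + description convention."""
    if not line.startswith(">"):
        raise ValueError("FASTA header lines must start with '>'")
    identifier, _, description = line[1:].rstrip("\n").partition(" ")
    return identifier, description

assert split_fasta_header(
    ">HSBGPG Human gene for bone gla protein (BGP)"
) == ("HSBGPG", "Human gene for bone gla protein (BGP)")

# The second header from the question is all identifier, no description.
assert split_fasta_header(
    ">Arbirary_Name_iJustCameUP_with_and_other_local_identifiers"
) == ("Arbirary_Name_iJustCameUP_with_and_other_local_identifiers", "")
```

Under this convention both headers in the question are valid; tools only differ in what structure (if any) they expect inside the identifier token.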
{ "domain": "bioinformatics.stackexchange", "id": 1898, "tags": "fasta" }
If photons don't "experience" time, how do they account for their gradual change in wavelength?
Question: It is often said that photons do not experience time. From what I've read, this is because, when travelling at the speed of light, space is contracted to infinity, so while there is no time to cover any distance, there isn't any distance to cover. But the fact remains that as the universe expands, the photon's wavelength stretches as well. So from everyone else's perspective, that photon's wavelength is gradually changing. But since photons don't experience time, how do they account for that change in their own wavelength? I mean, the photon should exist for at least one Planck time, right? Otherwise it wouldn't really exist, and we couldn't detect it. (I'm assuming things here. Please correct me if I'm wrong). So if it was "born" with a certain wavelength, and then immediately absorbed with a different wavelength, then couldn't it be said that the photon experienced time? Also, 2 photons (from the same source) might get absorbed at different times (from our perspective), but from the photon's perspective they should experience the same amount of time (zero). Is there something going on here with different-sized infinities? How is that phenomenon explained? Thanks! Answer: We don't really have a good perspective on what a photon "feels" or, indeed, anything about what its universe would look like. We're massive objects; even the idea of "we must travel at the speed of light because we're massless" makes little sense to us. But we can talk, if you like, about what the world looks like as you travel faster and faster: it's just that obviously that doesn't tell us truly what happens "at that point" of masslessness. One thing that happens, as you go faster and faster, is that everyone else sees your clocks ticking slower and slower. This is the basis for the statement that photons don't "experience time."
It's a little more complicated than that: suppose you are emitting light, say, as periodic "flashes": there is a standard Doppler shift which has to be corrected for before you see this "time dilation". In fact, as you get faster and faster, the flashes undergo "relativistic beaming", the intensity of the pulses will point more and more in the direction that you're going, as seen from the stationary observer. The same effect in reverse happens for you: as you go faster and faster, the stars of the universe all "tilt" further and further into the direction you're going. By these extrapolations, in some sense a photon experiences no time as seen from the outside world. But in another sense: if the photon had any way to communicate to the rest of the world, it could only communicate to the thing that it's going to hit anyway, and no faster than it itself can travel there. So in some sense it simply "can't" communicate its own state at all. So a key lesson, I guess, is that we have to think of the particle's frequency as interactive: in some sense the photon's energy that gives it a frequency $f = E / h$ where $h$ is Planck's constant, but in another sense it is changeless, it's not "oscillating." Quantum electrodynamics actually reifies this notion (makes the idea "solid" in the mathematics) pretty well: the photon's frequency lives in its complex phase, but a quantum system's overall phase factor is not internally observable and can only be observed by its interaction with an outside system with a different phase. In turn, you only observe their phase difference; there is a remaining overall phase for the interacting system which becomes unobservable, and so forth.
{ "domain": "physics.stackexchange", "id": 23411, "tags": "photons, spacetime, space-expansion, wavelength" }
Does the current light pollution set a fundamental limit for the range of Earth-based telescopes?
Question: As far as I know, all deep-sky pictures are captured with the Hubble Space Telescope. If there were no atmospheric distortion, could we make deeper pictures in the optical spectrum with terrestrial telescopes, say, due to bigger mirrors? Or does the terrestrial light pollution limit the range due to the signal-to-noise ratio? Answer: I think the current answers are (somewhat) missing the OP. The issue regarding extended luminous objects, such as nebulae, is contrast with the surrounding "darkness". Light pollution raises the brightness more or less uniformly across the entire image. One could try to estimate it, and subtract that amount of brightness from every pixel, but signal-to-noise effects come into play (i.e. we cannot eliminate all the stray light via image processing without introducing noise and/or processing artifacts). An obvious extreme example is that we don't try to do astronomical observing during the day, even though bright objects, such as the moon, planets and bright stars, can actually be seen. Obviously, resolution also comes into play as well, as some of the background noise is due to the large numbers of very dim stars, which, if they can't be resolved and corrected for, add star noise to the background intensity level.
{ "domain": "physics.stackexchange", "id": 3070, "tags": "astronomy, telescopes, light-pollution" }
How to extract data from an API every hour in Python?
Question: I am new to Python and I tried writing a script that extracts air quality json data from an API every hour and logs it into the same Excel file. My code doesn't return anything. Is my code correct? And how can I make it log into the Excel file every hour, please? Thank you very much. Here is the script: def write_to_excel(): request = requests.get("https://api.waqi.info/feed/paris/?token=?") request_text = request.text JSON = json.loads(request_text) filterJSON = { 'time': str(JSON['data']['time']['s']), 'co': str(JSON['data']['iaqi']['co']['v']), 'h': str(JSON['data']['iaqi']['h']['v']), 'no2': str(JSON['data']['iaqi']['no2']['v']), 'o3': str(JSON['data']['iaqi']['o3']['v']), 'p': str(JSON['data']['iaqi']['p']['v']), 'pm10': str(JSON['data']['iaqi']['pm10']['v']), 'pm25': str(JSON['data']['iaqi']['pm25']['v']), 'so2': str(JSON['data']['iaqi']['so2']['v']), 't': str(JSON['data']['iaqi']['t']['v']), 'w': str(JSON['data']['iaqi']['w']['v']), } liste.append(filterJSON) try: os.remove("airquality.xlsx") except: pass pd.DataFrame(liste).to_excel('airquality.xlsx') print(liste) if __name__ == "__main__": schedule.every(3).seconds.do(write_to_excel) while True: schedule.run_pending() Answer: Assuming that example is working for you, trying to write the data every 3 seconds, you need to just change the scheduling to be schedule.every(1).hour.do(write_to_excel) You are currently writing the data at each interval to the same file, so you will overwrite the file every time. You could do a few things here: Open the excel file (e.g. into a pandas DataFrame), append the new data and save it all back to disk. This is pretty inefficient though and will have problems once you have a lot of data. Write the data to a database, extending it every hour. This is the most professional solution. Write a new file to disk each hour, including e.g. the timestamp of the hour in the filename to make each file unique.
This is simple and just means you iterate over the files one-by-one when reading them later to do analysis or plotting etc. You could change your function to be like this, implementing the first option above: def request_data_and_save(excel_file: str = "air_quality.xlsx"): request = requests.get("https://api.waqi.info/feed/paris/?token=?") request_text = request.text JSON = json.loads(request_text) filterJSON = { 'time': str(JSON['data']['time']['s']), 'co': str(JSON['data']['iaqi']['co']['v']), 'h': str(JSON['data']['iaqi']['h']['v']), 'no2': str(JSON['data']['iaqi']['no2']['v']), 'o3': str(JSON['data']['iaqi']['o3']['v']), 'p': str(JSON['data']['iaqi']['p']['v']), 'pm10': str(JSON['data']['iaqi']['pm10']['v']), 'pm25': str(JSON['data']['iaqi']['pm25']['v']), 'so2': str(JSON['data']['iaqi']['so2']['v']), 't': str(JSON['data']['iaqi']['t']['v']), 'w': str(JSON['data']['iaqi']['w']['v']), } # Get a writer and append data to the file with pd.ExcelWriter(excel_file, mode="a") as xl: df = pd.DataFrame([filterJSON]) df.to_excel(xl) if __name__ == "__main__": schedule.every().hour.do(request_data_and_save) while True: schedule.run_pending() NOTE: this code is not tested
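For the third option (one file per hour), a timestamped filename is enough to keep each hour's data separate. A small helper sketch (the function name is mine, not from the answer):

```python
import datetime

def hourly_filename(prefix="air_quality", ext="xlsx", now=None):
    """One output file per hour: the hour is baked into the filename,
    so each scheduled run writes to its own file."""
    now = now or datetime.datetime.now()
    return f"{prefix}_{now:%Y%m%d_%H}.{ext}"

# e.g. pd.DataFrame([filterJSON]).to_excel(hourly_filename())
assert hourly_filename(now=datetime.datetime(2020, 5, 17, 14, 30)) == \
    "air_quality_20200517_14.xlsx"
```

Running the scheduled job more than once within the same hour overwrites that hour's file, which is usually what you want for an hourly logger.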
{ "domain": "datascience.stackexchange", "id": 7316, "tags": "python" }
How to implement the controlled square root of NOT gate on the IBM Q composer?
Question: I already know how to do that for Z, Y, and H gates. How can I make a controlled sqrt-of-NOT gate? I mean the controlled version of the gate described here. Answer: Here's one decomposition: [circuit diagram not preserved] It was made by decomposing a controlled S (which is easier to think about because it only phases; it's diagonal) and then converting the basis of the target by conjugating with Hadamards. It generalizes in-place to any $\text{CNOT}^{2t}$: [circuit diagram not preserved]
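The decomposition the answer describes (a controlled S conjugated by Hadamards on the target, with the controlled S itself built from T gates and CNOTs, a standard construction) can be checked numerically. Gate placement and qubit ordering below are my own choices, not taken from the lost circuit diagram:

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])   # T = sqrt(S)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])            # control = qubit 0 (MSB)

on_control = lambda U: np.kron(U, I2)
on_target = lambda U: np.kron(I2, U)

# Controlled-S from T gates and CNOTs; circuit time order:
#   T(control), T(target), CNOT, T-dagger(target), CNOT
CS = CNOT @ on_target(T.conj().T) @ CNOT @ on_target(T) @ on_control(T)

# sqrt(X) = H S H, so controlled-sqrt(X) = (H on target) CS (H on target)
C_sqrtX = on_target(H) @ CS @ on_target(H)

sqrtX = 0.5 * np.array([[1 + 1j, 1 - 1j],
                        [1 - 1j, 1 + 1j]])
expected = np.block([[np.eye(2), np.zeros((2, 2))],
                     [np.zeros((2, 2)), sqrtX]])
assert np.allclose(C_sqrtX, expected)
# Sanity check: applying it twice gives a plain CNOT.
assert np.allclose(C_sqrtX @ C_sqrtX, CNOT)
```

All of the gates used (H, T, T-dagger, CNOT) are available directly in the IBM Q composer, so the same sequence can be laid out there.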
{ "domain": "quantumcomputing.stackexchange", "id": 1801, "tags": "quantum-gate, circuit-construction" }
Class template for the encapsulation of datasheet specifications using optionals
Question: A couple of years ago I wrote a pair of class templates to encapsulate specifications which have one or more of a minimum, typical, and/or maximum value, e.g. on a datasheet like this one for the 741 op amp: That code required me to re-invent the wheel a bit since I was using an old compiler, and I had not discovered boost::optional/std::optional yet. I've re-written that code to use a single class template called Spec, C++11 and C++14 features, and boost::optional or std::optional. I've also included some additional utilities (inserting a Spec into a std::ostream, clamping Spec values, computing spec limits based on a value and a % tolerance, etc.). I have not added yet support for guardbands as in the previous implementation, but I plan to do so later. Here is the implementation using boost::optional. My compiler is Visual Studio 2017 (which can use C++17's std::optional), but I'm only using C++14 features in the project this will be used in. I've provided commented-out code below to use std::optional instead: #ifndef SPEC_H #define SPEC_H #include <ostream> #include <stdexcept> // To use std::optional, if supported by compiler //#include <optional> // To use boost::optional #include <boost/none.hpp> // boost::none_t #include <boost/optional.hpp> // If using std::optional /*typedef std::nullopt_t nullspec_type; inline constexpr nullspec_type nullspec = std::nullopt;*/ // Using boost::optional /** \brief Type definition for an empty class type which indicates that a Spec value is undefined. */ typedef boost::none_t nullspec_type; /** \brief A constant which can be used as an argument to a Spec constructor with a nullspec_type parameter. */ const nullspec_type nullspec = boost::none; /** \brief Exception class for attempts to access non-existent Spec values. 
*/ class bad_spec_access : public std::logic_error { public: /** \brief Constructs a Spec bad access error */ bad_spec_access(const std::string& s) : std::logic_error(s) {} }; /** \brief Encapsulates the specified minimum, typical, and maximum value of a quantity. The minimum, typical value, and/or maximum value may all be left undefined (but at least one must be defined). This permits the construction of specifications which are unbounded (e.g. have no minimum value) or which do not have a typical value. The maximum is allowed to be less than the minimum, and the typical value need not be between the minimum and maximum. \tparam T an arithmetic type or a type which behaves like an arithmetic type (has arithmetic operators like + and - defined). */ template<typename T> class Spec { typedef boost::optional<T> optional_type; //typedef std::optional<T> optional_type; // std::optional alternative optional_type minimum; optional_type typ; optional_type maximum; public: typedef T value_type; Spec(value_type typ) : minimum(), typ(typ), maximum() {} Spec(value_type minimum, value_type maximum) : minimum(minimum), typ(), maximum(maximum) {} Spec(value_type minimum, value_type typ, value_type maximum) : minimum(minimum), typ(typ), maximum(maximum) {} Spec(nullspec_type minimum, value_type maximum) : minimum(), typ(), maximum(maximum) {} Spec(value_type minimum, nullspec_type maximum) : minimum(minimum), typ(), maximum(maximum) {} Spec(nullspec_type minimum, value_type typ, value_type maximum) : minimum(minimum), typ(typ), maximum(maximum) {} Spec(value_type minimum, value_type typ, nullspec_type maximum) : minimum(minimum), typ(typ), maximum(maximum) {} /** \brief A Spec must be constructed with at least one value (minimum, typical or maximum).
*/ Spec() = delete; constexpr bool has_min() const noexcept { return bool(minimum); } constexpr bool has_typical() const noexcept { return bool(typ); } constexpr bool has_max() const noexcept { return bool(maximum); } constexpr value_type min() const { if (minimum) return *minimum; else throw bad_spec_access("attempted to access a non-existent minimum spec"); } constexpr value_type typical() const { if (typ) return *typ; else throw bad_spec_access("attempted to access a non-existent typical spec"); } constexpr value_type max() const { if (maximum) return *maximum; else throw bad_spec_access("attempted to access a non-existent maximum spec"); } /** \brief Returns the minimum value if it exists, or the supplied value if not. \tparam V a type which can be cast to \ref value_type. \param[in] default_value the value to return if the Spec does not have a defined minimum. */ template<typename V> constexpr value_type min_or(const V& default_value) const { return minimum.value_or(default_value); } /** \brief Returns the typical value if it exists, or the supplied value if not. \tparam V a type which can be cast to \ref value_type. \param[in] default_value the value to return if the Spec does not have a defined typical value. */ template<typename V> constexpr value_type typical_or(const V& default_value) const { return typ.value_or(default_value); } /** \brief Returns the maximum value if it exists, or the supplied value if not. \tparam V a type which can be cast to \ref value_type. \param[in] default_value the value to return if the Spec does not have a defined maximum. */ template<typename V> constexpr value_type max_or(const V& default_value) const { return maximum.value_or(default_value); } /** \brief Returns the minimum value if it exists and is greater than the supplied clamp value, otherwise the supplied clamp value. Similar to min_or(), except that the return value is guaranteed to be greater than or equal to the clamp value. 
The argument to min_or() may be used to clamp its return value to a specified value if the Spec minimum exists, but not if the Spec has a minimum which is less than the argument to min_or(). */ template<typename V> constexpr value_type clamped_min(const V& clamp_minimum) const { // Acquire the minimum value, or use clamp_minimum if the minimum has no value // Clamp that value (if minimum has a value it might be less than clamp_minimum) return clamp_min(static_cast<value_type>(minimum.value_or(clamp_minimum)), static_cast<value_type>(clamp_minimum)); } /** \brief Returns the minimum value if it exists and is within the range specified by the clamp values. If the minimum value exists but is outside the range specified by the clamp values then the clamp value closest to the minimum is returned. If the minimum value doesn't exist then the supplied clamp minimum is returned. Similar to min_or(), except that the return value is guaranteed to be within the range of the clamp values. The argument to min_or() may be used to clamp its return value to a specified value if the Spec minimum exists, but not if the Spec has a minimum which is less than the argument to min_or(). */ template<typename V> constexpr value_type clamped_min(const V& clamp_minimum, const V& clamp_maximum) const { // Acquire the minimum value, or use clamp_minimum if the minimum has no value // Clamp that value (if minimum has a value it might not be within the range of clamp_minimum to clamp_maximum) return clamp(static_cast<value_type>(minimum.value_or(clamp_minimum)), static_cast<value_type>(clamp_minimum), static_cast<value_type>(clamp_maximum)); } /** \brief Clamps the typical value if it exists to within the range specified by the supplied clamp values. Similar to typical_or(), except that the return value is guaranteed to be within the range specified by the supplied clamp values. 
The argument to typical_or() may be used to clamp its return value to a specified value if the Spec typical exists, but not to a range of values and not if the Spec has a typical which is outside a clamp range. */ template<typename V> constexpr value_type clamped_typical(const V& clamp_minimum, const V& clamp_maximum) const { if (typ) return clamp(*typ, static_cast<value_type>(clamp_minimum), static_cast<value_type>(clamp_maximum)); else throw bad_spec_access("cannot clamp a non-existent typical value"); } /** \brief Clamps the typical value if it exists to within the range specified by the supplied clamp values, or returns the specified default value. Similar to clamped_typical(const V&, const V&), except that a default value is returned in the case where clamped_typical(const V&, const V&) throws an exception. Also similar to typical_or(), except that the return value is guaranteed to be within the range specified by the supplied clamp values if the Spec typical is defined. */ template<typename V> constexpr value_type clamped_typical(const V& clamp_minimum, const V& clamp_maximum, const V& default_value) const { if (typ) return clamp(*typ, static_cast<value_type>(clamp_minimum), static_cast<value_type>(clamp_maximum)); else return default_value; } /** \brief Returns the maximum value if it exists and is less than the supplied clamp value, otherwise the supplied clamp value. Similar to max_or(), except that the return value is guaranteed to be less than or equal to the clamp value. The argument to max_or() may be used to clamp its return value to a specified value if the Spec maximum exists, but not if the Spec has a maximum which is greater than the argument to max_or(). 
*/ template<typename V> constexpr value_type clamped_max(const V& clamp_maximum) const { // Acquire the maximum value, or use clamp_maximum if the maximum has no value // Clamp that value (if maximum has a value it might be greater than clamp_maximum) return clamp_max(static_cast<value_type>(maximum.value_or(clamp_maximum)), static_cast<value_type>(clamp_maximum)); } /** \brief Returns the maximum value if it exists and is within the range specified by the clamp values. If the maximum value exists but is outside the range specified by the clamp values then the clamp value closest to the maximum is returned. If the maximum value doesn't exist then the supplied clamp maximum is returned. Similar to max_or(), except that the return value is guaranteed to be within the range of the clamp values. The argument to max_or() may be used to clamp its return value to a specified value if the Spec maximum exists, but not if the Spec has a maximum which is outside the clamp range. */ template<typename V> constexpr value_type clamped_max(const V& clamp_minimum, const V& clamp_maximum) const { // Acquire the maximum value, or use clamp_maximum if the maximum has no value // Clamp that value (if maximum has a value it might not be within the range of clamp_minimum to clamp_maximum) return clamp(static_cast<value_type>(maximum.value_or(clamp_maximum)), static_cast<value_type>(clamp_minimum), static_cast<value_type>(clamp_maximum)); } /** \brief Determines if two Specs are equal. Two Specs are considered equal if all the following are true: 1. Both Specs have equal minimum values or both Specs have an undefined minimum. 2. Both Specs have equal typical values or both Specs have an undefined typical value. 3. Both Specs have equal maximum values or both Specs have an undefined maximum. \tparam U a value_type which can be cast to type `T`. 
*/ template<typename T, typename U> friend inline bool operator==(const Spec<T>& lhs, const Spec<U>& rhs); template<typename T, typename U> friend inline bool operator!=(const Spec<T>& lhs, const Spec<U>& rhs); /** \brief Inserts a Spec into a `std::ostream`. Inserts a string of the form '[min, typical, max]', where 'min', 'typical', and 'max' are the minimum, typical, and maximum values of the Spec, respectively. If a minimum value is not defined then the string "-Inf" is used (to symbolize negative infinity). If a maximum value is not defined then the string "Inf" is used (to symbolize positive infinity). If a typical value is not defined then the substring 'typical, ' is omitted -- the inserted string has the form '[min, max]'. Square brackets are used to denote closed intervals if `spec` has a defined minimum and/or maximum, and parentheses are used to denote open intervals if `spec` has an undefined minimum and/or maximum. For example, the string for a `spec` with a minimum of 0 and an undefined maximum is '[0, Inf)`. \tparam T a type for which operator<<(std::ostream, T) is defined. */ template<typename T> friend inline std::ostream& operator<<(std::ostream& os, const Spec<T>& spec); }; template<typename T, typename U> inline bool operator==(const Spec<T>& lhs, const Spec<U>& rhs) { // Two optionals are equal if: // 1. they both contain a value and those values are equal // 2. 
neither contains a value return (lhs.minimum == rhs.minimum) && (lhs.typ == rhs.typ) && (lhs.maximum == rhs.maximum); } template<typename T, typename U> inline bool operator!=(const Spec<T>& lhs, const Spec<U>& rhs) { return !(lhs == rhs); } template<typename T> inline std::ostream& operator<<(std::ostream& os, const Spec<T>& spec) { if (spec.has_min()) os << '[' << spec.min(); else os << '(' << "-Inf"; os << ", "; if (spec.has_typical()) os << spec.typical() << ", "; if (spec.has_max()) os << spec.max() << ']'; else os << "Inf" << ')'; return os; } /** \brief Clamps a value to within a specified range. Based on std::clamp() in C++17: http://en.cppreference.com/w/cpp/algorithm/clamp. \throws std::logic_error if `min > max` */ template<typename T> constexpr const T& clamp(const T& value, const T& min, const T& max) { if (min > max) throw std::logic_error("cannot clamp between a min and max value when min > max"); return value < min ? min : value > max ? max : value; } /** \brief Clamps a value to greater than or equal to a specified minimum. */ template<typename T> constexpr const T& clamp_min(const T& value, const T& min) { return value < min ? min : value; } /** \brief Clamps a value to less than or equal to a specified maximum. */ template<typename T> constexpr const T& clamp_max(const T& value, const T& max) { return value > max ? max : value; } /** \brief Clamps the values held by a Spec to within a specified range. \return a new `Spec<T>`, `s`, with its minimum, typical, and maximum values clamped to the range specified by `min` and `max`. If `spec` does not have a defined minimum then `s` has its minimum set to `min`. If `spec` does not have a defined maximum then `s` has its maximum set to `max`. If `spec` does not have a defined typical then `s` does not have a defined typical, either. 
If `spec` has a defined typical then the defined typical for `s` is either equal to that of `spec` (if the typical is within the clamp limits) or set to the closest clamp value (e.g. if `spec.typical() > max` then `s.typical() = max`). \throws std::logic_error if `min > max` */ template<typename T> constexpr Spec<T> clamp(const Spec<T>& spec, const T& min, const T& max) { if (min > max) throw std::logic_error("cannot clamp between a min and max value when min > max"); auto clamped_min = spec.clamped_min(min); auto clamped_max = spec.clamped_max(max); if (spec.has_typical()) { auto clamped_typical = clamp(spec.typical(), min, max); return Spec<T>(clamped_min, clamped_typical, clamped_max); } else return Spec<T>(clamped_min, clamped_max); } /** \brief Determines if a value is between the limits of a Spec. \return `true` if `value` is greater than or equal to the Spec argument's minimum and less than or equal to the Spec argument's maximum, `false` otherwise. If the Spec argument does not have a minimum then `value` is considered greater than the Spec argument's minimum. If the Spec argument does not have a maximum then `value` is considered less than the Spec argument's maximum. For example, if `spec` has a minimum but no maximum then `true` is returned if `value >= spec.min()`, `false` otherwise. If `spec` has neither a minimum nor maximum then `true` is returned for any `value`. */ template<typename V, typename T> inline bool pass(const V& value, const Spec<T>& spec) { if (spec.has_min() && value < spec.min()) return false; if (spec.has_max() && value > spec.max()) return false; return true; } template<typename V, typename T> inline bool fail(const V& value, const Spec<T>& spec) { return !pass(value, spec); } /** \brief Calculates Spec limits based on an expected value and a % tolerance. 
\return a Spec with `value_type` set to the same type as `value`, a minimum equal to `(1 - tolerance / 100) * value`, a typical equal to `value`, and a maximum equal to `(1 + tolerance / 100) * value`. For example, for a 1% tolerance the minimum is equal to 99% of `value` and the maximum is equal to 101% of `value`. */ template<typename V, typename T> inline Spec<V> tolerance_spec(const V& value, const T& tolerance) { return Spec<V>((1 - tolerance / 100.0) * value, value, (1 + tolerance / 100.0) * value); } /** \brief Calculates Spec limits based on an expected value and a % tolerance and a clamping range. \return a Spec with `value_type` set to the same type as `value`, a typical equal to `value`, a minimum set to `(1 - tolerance / 100) * value` or `clamp_min` (whichever is greater), and a maximum set to `(1 + tolerance / 100) * value` or `clamp_max` (whichever is less). For example, for a 1% tolerance the minimum is equal to 99% of `value` and the maximum is equal to 101% of `value` unless the minimum and/or maximum exceed the clamp range (in which case the minimum and/or maximum are set to the clamp values). 
*/ template<typename V, typename T> inline Spec<V> tolerance_spec(const V& value, const T& tolerance, const V& clamp_minimum, const V& clamp_maximum) { // Min/max based on tolerance calculation V tol_min = (1 - tolerance / 100.0) * value; V tol_max = (1 + tolerance / 100.0) * value; // Clamp the min/max based on the tolerance calculation to the minimum/maximum clamp(tol_min, clamp_minimum, clamp_maximum); clamp(tol_max, clamp_minimum, clamp_maximum); return Spec<V>(tol_min, value, tol_max); } #endif Here's a demo program of Spec and the related utilities: #include <iostream> #include <chrono> #include <stdexcept> #include <vector> #include <fstream> #include "Spec.h" using namespace std::chrono_literals; template<typename T> void print_unchecked_accesses(std::ostream& os, Spec<T> spec) { try { auto min = spec.min(); } catch (const bad_spec_access& err) { os << err.what() << '\n'; } try { auto typ = spec.typical(); } catch (const bad_spec_access& err) { os << err.what() << '\n'; } try { auto max = spec.max(); } catch (const bad_spec_access& err) { os << err.what() << '\n'; } os << '\n'; } template<typename T> void print_checked_accesses(std::ostream& os, Spec<T> spec) { if (spec.has_min()) auto min = spec.min(); else os << "non-existent min\n"; if (spec.has_typical()) auto typ = spec.typical(); else os << "non-existent typical\n"; if (spec.has_max()) auto max = spec.max(); else os << "non-existent max\n"; os << '\n'; } template<typename T, typename V> void print_access_or(std::ostream& os, Spec<T> spec, V default_value) { os << "Accesses with default value " << default_value << " provided: "; os << spec.min_or(default_value) << ", "; os << spec.typical_or(default_value) << ", "; os << spec.max_or(default_value) << "\n\n"; } template<typename T, typename V> void test_clamps(std::ostream& os, Spec<T> spec, V clamp_min, V clamp_max) { os << spec << " limits clamped to " << clamp(spec, clamp_min, clamp_max) << " with clamps " << clamp_min << " and " << clamp_max << 
'\n'; V default_value = 0; os << "Clamping typical with default value " << default_value << ": " << spec.clamped_typical(clamp_min, clamp_max, default_value) << '\n'; os << "Clamping typical without a default value: "; try { os << spec.clamped_typical(clamp_min, clamp_max); } catch (const bad_spec_access& err) { os << err.what(); } os << '\n'; } int main() { std::ofstream ofs("demo.txt"); double clamp_min = 0; double clamp_max = 15; std::vector<Spec<double> > specs = { Spec<double>(0, 5), // double-sided Spec<double>(-5, nullspec), // min only Spec<double>(nullspec, 298), // max only Spec<double>(90), // typical only Spec<double>(-79, 235, 89235), // min, typical, and max Spec<double>(tolerance_spec<double, int>(5, 10)), Spec<double>(tolerance_spec<double, int>(15, 10, clamp_min, clamp_max)) }; for (auto spec : specs) { ofs << "Spec: " << spec << '\n'; ofs << "Exceptions caught due to unchecked accesses:\n"; print_unchecked_accesses(ofs, spec); ofs << "Non-existent values determined from checked accesses:\n"; print_checked_accesses(ofs, spec); print_access_or(ofs, spec, -1); test_clamps(ofs, spec, clamp_min, clamp_max); ofs << "--------------------------------------------------------------------------------\n"; } ofs << "Testing equality:\n"; using TimeSpec = Spec<std::chrono::microseconds>; TimeSpec::value_type max = 5us; TimeSpec t1(-15us, 0us, max); TimeSpec t2 = t1; TimeSpec t3(-10us, 0us, max); // unequal min TimeSpec t4(-15us, 1us, max); // unequal typical TimeSpec t5(-15us, 0us, 6us); // unequal max std::vector<TimeSpec> timespecs{t1, t2, t3, t4, t5}; ofs << std::boolalpha; for (std::size_t t = 1; t < timespecs.size(); t++) { ofs << "t1 == t" << t + 1 << "?: " << (t1 == timespecs[t]) << '\n'; } ofs << '\n'; Spec<double> spec(-15, 0, 5); ofs << "Testing pass/fail with Spec " << spec << "\n"; for (auto value : {-16, -15, 0, 5, 10}) { ofs << value << " passes?: " << pass(value, spec) << '\n'; } return 0; } The demo program outputs: Spec: [0, 5] Exceptions 
caught due to unchecked accesses: attempted to access a non-existent typical spec Non-existent values determined from checked accesses: non-existent typical Accesses with default value -1 provided: 0, -1, 5 [0, 5] limits clamped to [0, 5] with clamps 0 and 15 Clamping typical with default value 0: 0 Clamping typical without a default value: cannot clamp a non-existent typical value -------------------------------------------------------------------------------- Spec: [-5, Inf) Exceptions caught due to unchecked accesses: attempted to access a non-existent typical spec attempted to access a non-existent maximum spec Non-existent values determined from checked accesses: non-existent typical non-existent max Accesses with default value -1 provided: -5, -1, -1 [-5, Inf) limits clamped to [0, 15] with clamps 0 and 15 Clamping typical with default value 0: 0 Clamping typical without a default value: cannot clamp a non-existent typical value -------------------------------------------------------------------------------- Spec: (-Inf, 298] Exceptions caught due to unchecked accesses: attempted to access a non-existent minimum spec attempted to access a non-existent typical spec Non-existent values determined from checked accesses: non-existent min non-existent typical Accesses with default value -1 provided: -1, -1, 298 (-Inf, 298] limits clamped to [0, 15] with clamps 0 and 15 Clamping typical with default value 0: 0 Clamping typical without a default value: cannot clamp a non-existent typical value -------------------------------------------------------------------------------- Spec: (-Inf, 90, Inf) Exceptions caught due to unchecked accesses: attempted to access a non-existent minimum spec attempted to access a non-existent maximum spec Non-existent values determined from checked accesses: non-existent min non-existent max Accesses with default value -1 provided: -1, 90, -1 (-Inf, 90, Inf) limits clamped to [0, 15, 15] with clamps 0 and 15 Clamping typical with default 
value 0: 15 Clamping typical without a default value: 15 -------------------------------------------------------------------------------- Spec: [-79, 235, 89235] Exceptions caught due to unchecked accesses: Non-existent values determined from checked accesses: Accesses with default value -1 provided: -79, 235, 89235 [-79, 235, 89235] limits clamped to [0, 15, 15] with clamps 0 and 15 Clamping typical with default value 0: 15 Clamping typical without a default value: 15 -------------------------------------------------------------------------------- Spec: [4.5, 5, 5.5] Exceptions caught due to unchecked accesses: Non-existent values determined from checked accesses: Accesses with default value -1 provided: 4.5, 5, 5.5 [4.5, 5, 5.5] limits clamped to [4.5, 5, 5.5] with clamps 0 and 15 Clamping typical with default value 0: 5 Clamping typical without a default value: 5 -------------------------------------------------------------------------------- Spec: [13.5, 15, 16.5] Exceptions caught due to unchecked accesses: Non-existent values determined from checked accesses: Accesses with default value -1 provided: 13.5, 15, 16.5 [13.5, 15, 16.5] limits clamped to [13.5, 15, 15] with clamps 0 and 15 Clamping typical with default value 0: 15 Clamping typical without a default value: 15 -------------------------------------------------------------------------------- Testing equality: t1 == t2?: true t1 == t3?: false t1 == t4?: false t1 == t5?: false Testing pass/fail with Spec [-15, 0, 5] -16 passes?: false -15 passes?: true 0 passes?: true 5 passes?: true 10 passes?: false I'm looking for suggestions regarding the class design, naming, etc. Also, a few specific concerns that I'd appreciate reviewers to comment on: I'm still learning how to use the full power of C++11 and C++14 since I was using a pre-C++11 compiler for a long time, so are there any C++11 or C++14 features I forgot to use which would improve the code? I'm not very comfortable with move semantics yet. 
Are there some optimizations I've missed because of that? I've provided no functions to modify the internal data of the Spec class (it is immutable). This simplifies the class design a bit (e.g. I don't have to check that at least one of the minimum/typical/maximum values are defined when modifying the instance -- I just have to delete the default constructor). However, in my usage it has occasionally been inconvenient to not be able to modify a Spec instance after it's constructed (e.g. to acquire a Spec instance with limits taken from another instance but clamped -- I have to copy it instead). Was my decision to make the class immutable a good one, or does it limit its usefulness? It would be nice to detect if the code is being compiled by a compiler that supports std::optional and use std::optional if so, or try to fall back on boost::optional if not. Is there a way to do this without too much difficulty? Answer: Naming: Be consistent, and preferably avoid abbreviations. The members are minimum, typ and maximum, but then there's has_min(), has_typical() and has_max(), etc. It might be neater to just spell the whole thing out every time. Constructors: Uh... that's a lot of constructors. IMHO, it would be much neater to make the optional_type typedef public, and have just one constructor taking the optional type for all three parameters: Spec(optional_type const& minimum, optional_type const& typical, optional_type const& maximum): minimum(minimum), typ(typical), maximum(maximum) { } You can still pass value_type or nullspec directly (at least with std::optional, I haven't checked boost::optional), but now it's instantly obvious what's going on, because you have to define all three parameters every time: Spec<double>(0.0, nullspec, 5.0); There's no need to explicitly delete the default constructor, since you've defined non-default constructors. 
Other: There's no need to static cast the results of minimum.value_or() in clamped_min(), because std::optional::value_or() already returns a value_type. Same with the maximum version. You might like to check for a negative tolerance in tolerance_spec() (although the docs say max can be lower than min, so maybe not...). (BUG?:) In tolerance_spec, you don't seem to be using the result of the clamp operations! Your Questions: There are no heap-allocated members, so there's nothing that would be faster to move than copy. The compiler will generate a move constructor for you anyway. If set_foo() member functions would be useful, there's no reason not to provide them. If an instance mustn't be altered, that's what const and const& are for. Similar to the constructor, these can take an optional_type, rather than the value_type.
{ "domain": "codereview.stackexchange", "id": 29433, "tags": "c++, c++11, boost, optional" }
What is the advantage of using exponential function over trigonometric function in analyzing waves?
Question: A. P. French in his book Vibrations and Waves writes: . . . Why should the exponential function be such an important contribution to the analysis of vibrations? The prime reason is the special property of the exponential function. . .its reappearance after every operation of differentiation or integration. Now, what is the advantage of using the exponential function over trigonometric functions? They are directly linked by De Moivre's theorem. Answer: I'm taking a bit of a gamble here from your age on your user page: we do have a 15 year old string theorist on this site (who is also from your homeland), so at the risk of seeming belittling, here is something I found really satisfying in relation to your question when I was about your age. Differentiation of trigonometric functions is fiddly. When you differentiate a $\sin$ it becomes a $\cos$, when you differentiate a $\cos$ it becomes a $-\sin$. So you have to differentiate twice to get a trigonometric function back to its original form. The second derivative $\mathrm{d}_t^2 f(\omega\,t)$ is equivalent to multiplying by $-\omega^2$ (where $f$ is any linear combination of $\sin\,\omega\,t$ and $\cos\,\omega\,t$), but the first derivative in general changes the function to something linearly independent from it. In contrast, differentiation of $\exp(i\,\omega t)$ is equivalent to a simple scaling, which means it can greatly simplify operations containing derivatives of all orders, not just even ones, as is the case with $\sin$ or $\cos$. This is all equivalent to saying that $\sin$ and $\cos$ fulfill second but not first order differential equations, whereas $e^{\pm i\,t}$, which are special linear combinations of $\sin$ and $\cos$, fulfill first order DEs. Here is a wonderful way of thinking that for me unified $e^{i\,t}$, $\sin t$, $\cos t$ and justified to me the complex number field when I was 17. We begin by thinking of the kinematics of something moving on a path around the unit circle. 
A position vector on this circle is $\vec{r} = \left(\begin{array}{c}x\\y\end{array}\right)$ such that $\left<\vec{r},\,\vec{r}\right>=x^2+y^2=1$. We now find that the equation of motion of a point $\vec{r}(t) = \left(\begin{array}{c}x(t)\\y(t)\end{array}\right)$ constrained to the circle is defined by ${\rm d}_t \left<\vec{r},\,\vec{r}\right> = 0$, whence $\left<{\rm d}_t \vec{r}(t),\,\vec{r}(t)\right> = 0$, whence (with a little fiddling): $${\rm d}_t \left(\begin{array}{c}x(t)\\y(t)\end{array}\right) = v(t) \left(\begin{array}{cc}0&-1\\1&0\end{array}\right) \left(\begin{array}{c}x(t)\\y(t)\end{array}\right)$$ where $v(t) = \sqrt{\dot{x}^2+\dot{y}^2}$. We take, for simplicity, $v(t) = 1$ so that we immediately have, by the universal convergence of the matrix exponential, when supposing that the path begins at the point $x=1,y=0$ at time $t=0$: $$\vec{r}(t) = \exp\left[\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)\,t\right]\left(\begin{array}{c}1\\0\end{array}\right)=\left(\mathrm{id} + i\,t + \frac{i^2\,t^2}{2!} + \frac{i^3\,t^3}{3!} + \cdots\right)\left(\begin{array}{c}1\\0\end{array}\right)$$ where I have defined $$i= \left(\begin{array}{cc}0&-1\\1&0\end{array}\right)$$ You can play around with this one and check that this $i$ has all the properties that the "everyday" $i$ has; in particular $i^2=-1$. Indeed, you can go a little further and prove that "numbers" of the form: $$a \,\mathrm{id} + b\,i = \left(\begin{array}{cc}a&-b\\b&a\end{array}\right)$$ add, subtract, multiply and divide exactly like the everyday complex numbers. The field of matrices of the form above is isomorphic to the complex number field. Mathematically it is therefore indistinguishable from the complex number field. 
Then we separate real parts (multipliers of the identity matrix) and imaginary parts (multipliers of the $i$ matrix) to define, in this field: $$e^{i\,t} = \cos(t) + \sin(t)\,i$$ and this entity has the useful property that its derivative is a simple scale factor times the original function and we get: $$\cos(t) = \mathrm{id} - \frac{t^2}{2!}\mathrm{id}+ \frac{t^4}{4!}\mathrm{id} + \cdots$$ $$\sin(t) = \mathrm{id}\, t - \frac{t^3}{3!}\mathrm{id}+ \frac{t^5}{5!}\mathrm{id} + \cdots$$ How do these match up with everyday $\sin$ and $\cos$? Well, work out the co-ordinates of the point beginning at position $(1,0)$ and you will see that they are indeed given by the same $\sin$ and $\cos$ as defined above. You may also enjoy the lecture by Richard Feynman: "Algebra"; Lecture 22, Volume 1, The Feynman Lectures on Physics
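A quick numerical sketch (Python with NumPy/SciPy, not part of the original answer) checks the claim above: the matrix exponential of the 2×2 "$i$" matrix times $t$ reproduces $\cos(t)\,\mathrm{id} + \sin(t)\,i$, and carries the point $(1,0)$ to $(\cos t, \sin t)$ on the unit circle.

```python
import numpy as np
from scipy.linalg import expm

# The 2x2 matrix playing the role of "i" in the answer: J @ J = -identity
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

assert np.allclose(J @ J, -np.eye(2))  # i^2 = -1

t = 0.7
# expm(J*t) should equal cos(t)*id + sin(t)*J -- the matrix form of
# e^{i t} = cos(t) + i sin(t)
R = expm(J * t)
expected = np.cos(t) * np.eye(2) + np.sin(t) * J
assert np.allclose(R, expected)

# The point starting at (1, 0) moves to (cos t, sin t) on the unit circle
r = R @ np.array([1.0, 0.0])
print(r)  # approximately [cos(0.7), sin(0.7)]
```

The same check works for any $t$, since `expm` sums exactly the series written above.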
{ "domain": "physics.stackexchange", "id": 19337, "tags": "waves, fourier-transform, complex-numbers" }
Regular expression at least 2 out of 3 consecutive characters should be 1
Question: How can I build a regular expression that, using only the concatenation, union and star operations, over the alphabet {0,1}, describes the language "Every three consecutive characters contain at least two 1, and the input has length at least 3"? For instance 110011, 0101 and 11 should be refused. I was thinking of using the logic from this (incomplete) DFA, but I can't figure out how to get a regular expression that follows the rule. Thanks! Answer: Assume that string $s$ is in $L$. We will look at the last two characters of the string: 00, impossible, because this string could not be in $L$. 01, next character must be 1, the new last two characters are 11. 10, next character must be 1, the new last two characters are 01. 11, next character may be 1 or 0, the new last two characters are 11 or 10. Note that no matter your current state, you will always pass through the 11 state within two steps. Assuming that $s$ ended with 11, we get the following loop: (1|(011))* This will be the middle section of any string in $L$. All we need to do now is handle possible prefixes, making sure not to allow any with size < 3: (111|011|1011|11011) And finally, possible suffixes (note the empty union to express that the suffix is optional): (|0|01) Now all that's left is to concatenate them: (111|011|1011|11011)(1|(011))*(|0|01)
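As a sanity check (a Python harness, not part of the original answer), the proposed expression can be tested against the examples from the question; for a construction like this it is also worth brute-forcing all short strings against the definition itself, since edge cases near the minimum length are easy to miss.

```python
import re

# The regex proposed above: prefix, loop on a state ending in 11, optional suffix
PATTERN = re.compile(r"(111|011|1011|11011)(1|(011))*(|0|01)")

def matches(s):
    return PATTERN.fullmatch(s) is not None

def in_language(s):
    # Definition from the question: length >= 3 and every window of three
    # consecutive characters contains at least two 1s
    return len(s) >= 3 and all(s[i:i+3].count("1") >= 2
                               for i in range(len(s) - 2))

# Examples the question says must be refused
for s in ("110011", "0101", "11"):
    assert not matches(s) and not in_language(s)

# A few strings that satisfy the definition and are accepted
for s in ("111", "011", "1110", "11011"):
    assert matches(s) and in_language(s)
```

Cross-checking `matches` against `in_language` over all strings up to some length is a cheap way to gain confidence in (or find counterexamples to) any hand-built regex of this kind.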
{ "domain": "cs.stackexchange", "id": 7367, "tags": "regular-languages, automata, finite-automata, regular-expressions" }
Is there a legendre polynomial series expansion for the gravitational potential in General Relativity (GR)?
Question: In Newtonian mechanics we perform a multipole expansion on the gravitational potential $V(r)=-GM/r$ by a series expansion of Legendre polynomials. Then the Hamiltonian is given by \begin{equation} H = T + \tilde{V}, \end{equation} where $\tilde{V}$ is the expanded gravitational potential. What about in GR? If we describe the spacetime produced by the Earth with the Schwarzschild metric then the associated Hamiltonian of a test particle in free fall around the Earth is given by \begin{equation} H = \frac{1}{2} \left(A(r)^{-1} p_t^2 - B(r)^{-1}p_r^2-\frac{p_{\theta}^2}{r^2} - \frac{p_\phi^2}{r^2 \sin^2 \theta} \right) + V(r), \end{equation} where $V(r) = -GM/r.$ To account for the non-spherical geometry of the Earth in GR, is it simply a case of doing the same expansion? I have my doubts because things like this are never as easy as they first appear. Answer: It is possible to expand the Einstein field equations around their spherically symmetric solutions using so-called tensor harmonics. Pioneering work on the topic has been done by T. Regge and J. A. Wheeler, Phys. Rev. 108, 1063 (1957), F. J. Zerilli, Phys. Rev. Lett. 24, 737 (1970) and many others. In general the key idea is to expand the metric around the spherically symmetric one using spherical harmonics generalized to tensors and then solving the resulting field equations. In general this is not trivial and I do not know much about the general case. I am however rather familiar with a special case or application of this approach: the so-called Hartle-Thorne formalism: In a series of papers J. B. Hartle, K. S. Thorne and collaborators applied this approach to slowly rotating stars: expanding the metric in the angular velocity around the spherically symmetric one. This is a rather simple application and in its original form (up to second order in the angular velocity) it describes mono- and quadrupole perturbations away from spherical symmetry as well as an $l=1$ frame dragging term. 
The line element reads: $$ds^2=-e^{\nu(r)}\left[1+2(h_0(r)+h_2(r)P_2(\cos(\theta)))\right]dt^2+e^{\lambda(r)}\left[1+2(m_0(r)+m_2(r)P_2(\cos(\theta)))/(r-2m(r))\right]dr^2+r^2(1+2k_2(r)P_2(\cos(\theta)))\left[d\theta^2+\sin(\theta)^2(d\phi-\omega(r)dt)^2\right].$$ Treating $h_0$, $h_2$, $m_0$, $m_2$ and $k_2$ as small perturbations and $\omega$ as first order frame dragging term one can solve the Einstein field equations up to second order in the angular velocity. This approach can be used to describe not only deformations due to rotation but also to describe small deformations due to magnetic fields (e.g. K. Ioka and M. Sasaki, Astrophys. J. 600:296-316 (2004)). To conclude, a few words on the specific question raised by the OP: It is not a "simple" expansion but rather a complicated one, and in general the geodesic equations cannot be reduced back to a simple Hamiltonian with an effective potential in a $V(r)$ form. There are analytic solutions for the metric perturbation (in case of the Hartle-Thorne ansatz) but the geodesic equations become rather lengthy. They include effects like frame dragging as well as the perturbations away from spherical symmetry. The exterior solutions depend only on a few global parameters like, mass and radius of the unperturbed star and mass shift, angular momentum, angular velocity and mass quadrupole moment of the perturbed configuration. In that regard the geodesic equations are not that complicated in this case but that is just the case in this very special ansatz, which has very high symmetry and describes only the lowest order perturbation away from spherical symmetry and only for small perturbations. For arbitrary large perturbations and higher "multipolarity" things become much more complicated and ultimately a more general treatment in the framework of Numerical Relativity (NR) becomes necessary.
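On the Newtonian side of the question, the Legendre-polynomial expansion mentioned at the start can be verified numerically. A small sketch (Python/SciPy, my addition, not part of the original answer) checks the classic identity $1/|\vec r - \vec r\,'| = \sum_l \frac{r'^l}{r^{l+1}} P_l(\cos\gamma)$ for $r > r'$, which underlies the multipole expansion of $V$.

```python
import numpy as np
from scipy.special import eval_legendre

# Check the Legendre (multipole) expansion of 1/|r - r'| for r > r':
# 1/|r - r'| = sum_l  r'^l / r^(l+1) * P_l(cos gamma)
r, rp, gamma = 3.0, 1.0, 0.6   # field radius, source radius, angle between them
cosg = np.cos(gamma)

# Exact value via the law of cosines
exact = 1.0 / np.sqrt(r**2 + rp**2 - 2.0 * r * rp * cosg)

# Partial sum of the Legendre series; converges geometrically since rp/r < 1
series = sum(rp**l / r**(l + 1) * eval_legendre(l, cosg) for l in range(40))

print(exact, series)  # the two agree to high precision
```

The values of `r`, `rp` and `gamma` here are arbitrary illustrative choices; any pair with `rp < r` works.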
{ "domain": "physics.stackexchange", "id": 38514, "tags": "newtonian-mechanics, general-relativity, potential, hamiltonian-formalism, multipole-expansion" }
Why do the following Network Transformations give different answers?
Question: I did a Star to Delta Network Transformation and a Delta to Star Network Transformation on different parts of the original circuit as shown in the image below. It gave me two new circuits. On solving those circuits, I get different answers for the equivalent resistance. Why is this so? Answer: You made an error in the star-delta transformed circuit (1). You overlooked that the delta branch of 4 Ohm at terminal (a) is short-circuited.
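For readers who want to check such transformations mechanically, here is a minimal sketch of the standard delta-to-star (Δ→Y) and star-to-delta (Y→Δ) resistance formulas in Python. The resistor values are illustrative placeholders, not the ones from the original circuit image.

```python
# Delta-to-star and star-to-delta resistance transformations.
# Node labels a, b, c follow the usual convention: r_ab is the delta
# resistor between nodes a and b; r_a is the star resistor at node a.

def delta_to_star(r_ab, r_bc, r_ca):
    """Return (r_a, r_b, r_c), the equivalent star resistances."""
    total = r_ab + r_bc + r_ca
    r_a = r_ab * r_ca / total   # product of the two resistors touching node a
    r_b = r_ab * r_bc / total
    r_c = r_bc * r_ca / total
    return r_a, r_b, r_c

def star_to_delta(r_a, r_b, r_c):
    """Inverse transformation: return (r_ab, r_bc, r_ca)."""
    num = r_a * r_b + r_b * r_c + r_c * r_a
    return num / r_c, num / r_a, num / r_b

# Round-tripping should recover the original delta values:
delta = (4.0, 6.0, 12.0)        # illustrative values only
star = delta_to_star(*delta)
print(star_to_delta(*star))     # (4.0, 6.0, 12.0) up to rounding
```

If both transformations are applied correctly (and short-circuited branches are accounted for, as the answer points out), the equivalent resistance of the network must come out the same either way.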
{ "domain": "physics.stackexchange", "id": 47059, "tags": "homework-and-exercises, electrostatics, electric-circuits, electrical-resistance, electronics" }
Displacement for rotation along some axis
Question: If I rotate something along some axis which has direction $\vec \Omega$, at a small angle $\epsilon$, and if the position of the body is $\vec R$, then according to my book the displacement will be $$\delta R=\epsilon\vec\Omega\times\vec R$$ I am unable to understand how this happened. It will be helpful if someone illustrates it to me. Answer: The rotation direction is perpendicular to $\vec{\Omega}$ and $\vec{R}$ and is given by $\vec{v} = \frac{\vec{\Omega}\times\vec{R}}{||\vec{R}||}$. Note that $\vec{v}$ is a unit vector (this takes $\vec{\Omega}$ to be a unit vector and $\vec{R}$ to lie in the plane perpendicular to it). The vector $\vec{R}$ rotates in this direction and thus $\vec{v} \parallel \delta R$. If you draw a triangle with bases $\vec{R}$ and $\delta R$, you find $\tan(\epsilon) = \frac{||\delta R||}{||R||}$. This means that the amplitude of the displacement is given by $||\delta R||= \tan(\epsilon)||\vec{R}||$. So, now we know the amplitude and direction of the displacement, which gives $$\delta R = ||\delta R||\vec{v} = \tan(\epsilon)||\vec{R}||\frac{\vec{\Omega}\times\vec{R}}{||\vec{R}||} = \tan(\epsilon)(\vec{\Omega}\times\vec{R}) \,.$$ For small angles, the tangent can be approximated by $\tan(\epsilon) \approx \epsilon$, which gives the result $$\delta R =\epsilon(\vec{\Omega}\times\vec{R}) \,.$$
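A quick numerical sanity check of the small-angle result: compare the exact displacement from Rodrigues' rotation formula with the approximation $\epsilon\,\vec\Omega\times\vec R$. The axis and position vectors below are arbitrary illustrative choices.

```python
import numpy as np

def rodrigues(v, axis, angle):
    """Exact rotation of vector v about the unit vector `axis` by `angle`."""
    k = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(k, v) * np.sin(angle)
            + k * np.dot(k, v) * (1 - np.cos(angle)))

omega = np.array([0.0, 0.0, 1.0])    # rotation axis (unit vector)
r = np.array([1.0, 2.0, 0.5])        # arbitrary position vector
eps = 1e-4                           # small rotation angle

exact = rodrigues(r, omega, eps) - r     # exact displacement
approx = eps * np.cross(omega, r)        # small-angle formula

# The discrepancy is O(eps^2), i.e. around 1e-8 here:
print(np.linalg.norm(exact - approx))
```

The difference shrinks quadratically in $\epsilon$, which is exactly what dropping the $\tan(\epsilon)\approx\epsilon$ higher-order terms predicts.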
{ "domain": "physics.stackexchange", "id": 70081, "tags": "newtonian-mechanics, classical-mechanics" }
Pendulum on a train
Question: I've seen multiple questions about a pendulum on a train, and most say to use $T = 2 \pi (L/F)^{1/2}$. I have done this to compare the pendulum's periods before being on a train and then once it's on the train. I am aware the period on the train should be shorter; however, I am trying to prove this. My problem is resolving the forces acting on the pendulum on the train, the two forces being centripetal force and acceleration due to gravity. I've had a couple of ideas of how to do this, such as changing the view of the pendulum so that the equilibrium point has shifted, such that the centripetal force is now acting vertically and gravity is causing the oscillations, but I can't get my head round resolving these forces. Can someone show me how to set up these forces? Answer: I infer from the formula and your question that you are talking about a train in a turn, which experiences centripetal force. In this case, the "effective $g$" is greater than just the earth's attraction. In the simplest case you assume that the new force is perpendicular to gravitation and that you know the parameters of the train's motion: then the centripetal acceleration is $a = v^2/R$, and the net effective acceleration on the pendulum is $\sqrt{g^2+a^2}$. Put this into your formula as $F$ (it would be clearer to call this not $F$ but, for example, $g'$ ... the normal formula for a pendulum has a $\sqrt{l/g}$ in it), and there you have the solution.
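The answer's recipe can be sketched numerically. The train speed, curve radius, and pendulum length below are made-up illustrative values, not taken from the original question.

```python
import math

g = 9.81      # m/s^2, gravitational acceleration
L = 1.0       # m, pendulum length (illustrative)
v = 30.0      # m/s, train speed (illustrative)
R = 200.0     # m, curve radius (illustrative)

a = v**2 / R                       # centripetal acceleration
g_eff = math.sqrt(g**2 + a**2)     # effective acceleration on the train

T_rest = 2 * math.pi * math.sqrt(L / g)
T_train = 2 * math.pi * math.sqrt(L / g_eff)

print(T_rest, T_train)             # T_train < T_rest, as expected
```

Since $g_\text{eff} > g$ whenever the train is turning, the period on the train is always shorter than at rest, which is the result the question set out to prove.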
{ "domain": "physics.stackexchange", "id": 29805, "tags": "harmonic-oscillator, curvature, oscillators" }
Is this ladybird larva secreting a protective fluid?
Question: I know that adult ladybird beetles can secrete a fluid to keep predators from eating them. Can the larvae do that too? Or are these yellow droplets something else? Answer: In the larval stage, ladybird beetles secrete yellow, gooey alkaloids from the abdomen. When threatened, a ladybird beetle oozes hemolymph (in other words, its blood), which is toxic and very smelly. It's a mixture of various alkaloids and repulses predators. Good job; there might be no picture clearer than the one you provided. source: [1]: Ladybird defence alkaloids: Structural, chemotaxonomic and biosynthetic aspects (Col.: Coccinellidae), Désiré Daloze, Jean-Claude Braekman, Jacques M. Pasteels [2]: 10 Fascinating Facts About Ladybugs
{ "domain": "biology.stackexchange", "id": 2596, "tags": "entomology" }
Is this an abuse of big O notation as a power of a number?
Question: Theorem 7.11 in Introduction to the Theory of Computation, 3rd edition, says Let $t(n)$ be a function where $t(n)>n$. Then every $t(n)$ time nondeterministic single-tape Turing machine has an equivalent $2^{O(t(n))}$ time deterministic single tape Turing machine. I can understand that in this proof, when we have a nondeterministic TM in which every node of the computation tree can go to at most $b$ other nodes, the time complexity of a deterministic Turing machine simulating it would be $O(t(n)b^{t(n)})$. But I'm not sure if I understand the $2^{O(t(n))}$ correctly. Does it mean that if $f(n) \in 2^{O(t(n))}$ then $\exists b\in \mathbb{N} \; f(n)\in O(b^{t(n)})?$ On the other hand, claim 1.5 in the Computational Complexity book says For every $f : \{0, 1\}^∗ \rightarrow \{0, 1\}$ and time-constructible $T : N\rightarrow N$, if $f$ is computable in time $T(n)$ by a TM $M$ using alphabet $\Gamma$, then it is computable in time $4 \log |\Gamma| T(n)$ by a TM $M$ using the alphabet $\{0,1,\square,\rhd\}$. So every nondeterministic TM $M$ running in $t(n)$ time can have an equivalent nondeterministic TM using the $\{0,1\}$ alphabet running in at most $|\Gamma|^2t(n)$. Thus this equivalent TM's computation tree has at most 2 different branches in each step. Then we can have a deterministic simulator of that TM running in time $O(2^{|\Gamma|^2t(n)})$. The intersection of the 2 above statements suggests that carelessly we can say $O(2^{t(n)})=2^{O(t(n))}$, but we know $\forall b>2,\; b^{t(n)}\notin O(2^{t(n)})$. Unfortunately, I didn't find any definition of $2^{O(t(n))}$. I wish to know: is there any formal definition for $2^{O(t(n))}$? - Edit: There is a similar question here. But I'm doubting its answer is the same, since the big O notation in the power is not just $O(1)$. Suppose we have a nondeterministic TM in which each node goes to 3 other nodes. Then there is a deterministic Turing machine that simulates it in time $t(n)3^{t(n)}$.
So we can't say that there is a constant $k$ that $t(n)3^{t(n)} < 2^{kt(n)}.$ Meanwhile, when I think more, this definition holds if we ignore the Sipser proof. I mean if we begin the proof by the statement that every nondeterministic TM has a version $\{0,1\}$ alphabet running in $c.t(n)$ then the answer's definition is correct. I wonder if there is bad use of big O notation in the Sipser's book or $2^{O(t(n))}$ has a different definition. Answer: This is perfectly standard use of big O notation, despite some purists disliking it. Here is what the passage you quoted means: Let $t(n)$ be a function where $t(n) > n$. Then every $t(n)$ time nondeterministic single-tape Turing machine has an equivalent $2^{f(n)}$ time deterministic single tape Turing machine, for some function $f(n)$ satisfying $f(n) = O(t(n))$. This is still somewhat ambiguous, since it’s perhaps not clear what variables the big O depends on. In this case, big O hides a universal constant: There exists a constant $C>0$ such that the following holds. Let $t(n)$ be a function where $t(n) > n$. Then every $t(n)$ time nondeterministic single-tape Turing machine has an equivalent $2^{f(n)}$ time deterministic single tape Turing machine, for some function $f(n)$ satisfying $f(n) \leq Ct(n)$ for all $n$. This allows us to rephrase the statement in yet another way: There exists a constant $C>0$ such that the following holds. Let $t(n)$ be a function where $t(n) > n$. Then every $t(n)$ time nondeterministic single-tape Turing machine has an equivalent $2^{Ct(n)}$ time deterministic single tape Turing machine. (This assumes that by “$f(n)$ time machine” we mean a machine running in time at most $f(n)$.)
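The answer's point can be checked numerically: $3^{t} = 2^{t\log_2 3}$ exactly, so $t(n)\,3^{t(n)}$ is bounded by $2^{Ct(n)}$ with, e.g., $C=2$, even though $3^t$ is not $O(2^t)$. A small Python check:

```python
import math

log2_3 = math.log2(3)   # ~1.585, the constant hidden inside the big O

for t in range(1, 40):
    # exact identity: 3**t == 2**(t * log2(3))
    assert math.isclose(3**t, 2**(t * log2_3), rel_tol=1e-9)
    # the bound t * 3**t <= 2**(2*t) holds once t >= 7 here,
    # since t * 3**t <= 4**t is equivalent to t <= (4/3)**t
    if t >= 7:
        assert t * 3**t <= 2**(2 * t)

# ...but 3**t is NOT O(2**t): the ratio (3/2)**t grows without bound.
print(3**30 / 2**30)   # ~1.9e5 already at t = 30
```

This is exactly the resolution in the answer: the constant moves into the exponent, so $2^{O(t(n))}$ is a strictly larger class than $O(2^{t(n)})$.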
{ "domain": "cs.stackexchange", "id": 12597, "tags": "time-complexity" }
Help with thermodynamic relation $n= \dfrac{N}{V} = \left( \dfrac{\partial N}{\partial V} \right)_{T, \mu}$
Question: In my statistical mechanics book I ran into this relation, and I'm not sure how I can prove it. For context, it is a chapter on the grand canonical partition function, so I tried the grand potential differential $dJ = -SdT - PdV - Nd\mu$, but to no avail... Answer: The particle number $N$ is an extensive variable. If you are looking at it as a function $N(T, V, \mu)$, the only extensive variable it depends on is the volume $V$, since the temperature and chemical potential are intensive. To get extensive behaviour, the particle number must therefore be linear in $V$: \begin{equation} N(T, V, \mu) = n(T, \mu) V \end{equation} The equality with the derivative trivially follows.
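A tiny symbolic check of this argument, with the density $n(T,\mu)$ left as an abstract function (sympy is used purely for illustration):

```python
import sympy as sp

# If N(T, V, mu) = n(T, mu) * V, i.e. N is linear in its only extensive
# argument V, then (dN/dV) at fixed T, mu equals n, which is N / V.

T, V, mu = sp.symbols('T V mu', positive=True)
n = sp.Function('n')(T, mu)   # intensive density, independent of V

N = n * V
assert sp.simplify(sp.diff(N, V) - N / V) == 0
print(sp.diff(N, V))          # the density n(T, mu)
```

The partial derivative at fixed $T$ and $\mu$ simply strips off the factor of $V$, which is the whole content of the relation.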
{ "domain": "physics.stackexchange", "id": 95861, "tags": "homework-and-exercises, thermodynamics" }
How to pre-process the name String of a customer?
Question: I implement logistic regression to predict whether a customer is a business or a non-business customer with the help of TensorFlow in Python. I have several feature candidates like name, street, zip, longitude and latitude. At the moment I am thinking of how to use the name field. The name often has repeating parts like “GmbH” (e.g. “Mustermann GmbH”), which in this context has a similar meaning to Corp. and is an indicator that the customer is a business customer. This information is useless in combination with the other parts of the name, because then the name will be unique. So my question is: how should I pre-process this field so that only the repeating parts will be used to predict the classification? Answer: You may want to tokenize the strings, e.g., “Mustermann GmbH” tokenizes into "Mustermann" and "GmbH". Allow for spaces and commas certainly, perhaps also hyphens and other punctuation. You may want to look into Natural Language Processing (NLP) if you're classifying text, but whatever method you choose should have better luck sniffing out business vs. non-business using tokens of the strings.
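A minimal sketch of that tokenization idea in Python. The list of legal-form tokens is a hypothetical, hand-made example (not an exhaustive or authoritative list), so the unique name parts ("Mustermann") drop out and only the reusable indicators ("gmbh") remain as features.

```python
import re

# Hypothetical set of company legal-form indicators (German and English).
LEGAL_FORMS = {'gmbh', 'ag', 'kg', 'ohg', 'ug', 'corp', 'inc', 'ltd', 'llc'}

def legal_form_tokens(name):
    """Split a name on whitespace and common punctuation, keep only
    tokens that appear in the legal-form list."""
    tokens = re.split(r'[\s,.\-&]+', name.lower())
    return [t for t in tokens if t in LEGAL_FORMS]

print(legal_form_tokens('Mustermann GmbH'))       # ['gmbh']
print(legal_form_tokens('Erika Musterfrau'))      # []
print(legal_form_tokens('Miller & Sons, Inc.'))   # ['inc']
```

The resulting tokens (or a one-hot "contains a legal form" flag built from them) can then be fed to the logistic regression instead of the raw, mostly unique, name string.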
{ "domain": "datascience.stackexchange", "id": 7529, "tags": "machine-learning, classification, feature-engineering, categorical-data" }
Solving for Bananas
Question: This code creates a problem for the user to solve (very simple problem) it will then check if the user has got it right and will say so. import random import time def mathsquestion(): attempts=2 bananas1 = random.randrange(1,40) bananas2 = random.randrange(30,70) print('lucy has', bananas1,' bananas and tracy has', bananas2,' if both of them put their bananas together and split them evenly') question = float(input('how many bananas do each of them get?')) if question == (bananas1+bananas2)/2: print ('correct') restart=input('want to try again') if restart == ('yes'): mathsquestion() else: print('wrong') attempts-=1 print('you have',attempts,'attempts remaining') question2 = float(input('how many bananas do each of them get?')) if question2 == (bananas1+bananas2)/2: print('yay finally') restart=input('want to try again') if restart == ('yes'): mathsquestion() else: print('wrong') attempts-=1 print('you have',attempts,'attempts remaining') print('shutting down') if attempts == 0: time.sleep(3) quit() mathsquestion() Answer: General Feedback The good thing is that it is quite clear what your code does. You have for the most part clear variable names and not too much clutter. However I would change bananas1 to lucys_bananas and bananas2 to tracys_bananas. It seems you are familiar with another programming language. Python has a style guide PEP 8 which explains in excruciating detail how to structure your code. I whole heartily recommend skimming through it and follow it. You write attempts = 3 however according to PEP 8 it is commended to use CAPITALIZED_WITH_UNDERSCORES for constants, UpperCamelCase for class names, and lowercase_separated_by_underscores for other names. Error handling Logic The first thing that came to mind was question = float(input('how many bananas do each of them get?')) I guess you thought that it was a good idea to make question into a float since (bananas1+bananas2)/2 creates a float. 
The idea is good, but what happens if I guess something that is not a float? Your whole program crashes! A good way to avoid this is to catch the error before it happens: question = input('how many bananas do each of them get?') try: question = float(question) except: print('Error, number of bananas must be an integer') continue With how your code is currently set up this does not quite work. However, we can make a few more changes. Getting half the number of bananas, and having to work with floats when bananas are integers, is in my opinion a tad strange. However, it is possible to make sure that the number of bananas is always evenly divisible. If Lucy has an odd number, then Tracy has to have an odd number of bananas. Similarly, if Lucy has an even number, then Tracy has to have an even number of bananas. This works since \$(2L+1)+(2T+1)=2(T+L+1)\$ and \$2L+2T=2(T+L)\$. This can be coded as follows. lucy_bananas = get_bananas(1, 40) even_lucy_bananas = lucy_bananas % 2 tracy_bananas = get_bananas(30+even_lucy_bananas, 70+even_lucy_bananas, 2) To switch behaviour we can put the creation of the random variables into a new function. See the revised code below. The next part is strange and not very pythonic. if attempts == 0: time.sleep(3) quit() What is the purpose of time.sleep(3)? Also, quit() is something which you should never have to use. The standard is return; however, a better solution is to wrap the entire function in a while loop. That way it can terminate once attempts reaches zero. Further tips and changes What your function does is clear; however, it would be much clearer if you split it into smaller parts. A good practice is that each function has a single purpose; as it stands, your main function does too many things. Unnecessary imports: time is not necessary. You do not need to import the entire random library; you are only using a single function. You should use the if __name__ == "__main__": idiom in your answer.
Duplication of code: you write question2 = float(input('how many bananas do each of them get?')) more than once. The above is not really a question, is it? A better name would be answer. Error handling: you convert answer to a float; what if this causes an error? You have no way to detect this. mathsquestion() should not call itself; you should leave this to a main function. The creation of random variables should be put into a separate function. Clear variable names are good, but you should also have included a short explanation of what your code does. This is done by using docstrings. The printing could be improved by using Python's awesome formatting options. I have improved the first few points in the code below. I will leave the rest to you. from random import randrange MAX_ATTEMPTS = 3 LUCY_MIN_BANANA = 1 LUCY_MAX_BANANA = 40 STACY_MIN_BANANA = 30 STACY_MAX_BANANA = 70 EVEN_BANANAS = True def get_bananas(): lucy_bananas = randrange(LUCY_MIN_BANANA, LUCY_MAX_BANANA) if EVEN_BANANAS: offset = lucy_bananas % 2 step = 2 else: offset = 0 step = 1 tracy_bananas = randrange(STACY_MIN_BANANA+offset, STACY_MAX_BANANA-offset, step) return lucy_bananas, tracy_bananas def mathsquestion(): lucy_bananas, tracy_bananas = get_bananas() bananas_evenly_divided = float(lucy_bananas+tracy_bananas)/2 print('lucy has', lucy_bananas, ' bananas and tracy has', tracy_bananas, ' if both of them put their bananas together and split them evenly') print('how many bananas do each of them get?') attempts = MAX_ATTEMPTS while attempts > 0: answer = input('answer: ') try: answer = float(answer) except: print('Error, the number of bananas must be an integer') continue if answer == bananas_evenly_divided: return True print('attempts remaining: ', attempts-1) attempts -= 1 return False def main(): want_to_play = True while want_to_play: guess_correct = mathsquestion() if guess_correct == True: print('Congratulations!') else: print('Better luck next time!') restart = input('want to try again?
[y/n]: ').lower() if restart not in ['yes', 'y']: want_to_play = False if __name__ == '__main__': main()
{ "domain": "codereview.stackexchange", "id": 20675, "tags": "python, random, quiz" }
tum simulator cloning problem
Question: I am working on the proxy network of my college, and I did all the proxy settings in my system that I know of. But when I execute the following command, it returns the following error: root@chhonkar-pc:~# git clone https://github.com/tum-vision/tum_simulator.git Cloning into 'tum_simulator'... fatal: unable to access 'https://github.com/tum-vision/tum_simulator.git/': Failed to connect to github.com port 443: Connection refused system : ubuntu 14.04 ros : indigo Originally posted by ASHISH CHHONKAR on ROS Answers with karma: 41 on 2014-11-14 Post score: 0 Answer: If you're having trouble accessing that URL via a proxy, I think you still need to work on your proxy settings. I have confirmed that your command works fine for me. Originally posted by tfoote with karma: 58457 on 2014-11-15 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ASHISH CHHONKAR on 2014-11-16: I have done the cloning, but the next problem arises: I am unable to launch any .launch file as mentioned in section 3.2 in the following link: http://wiki.ros.org/tum_simulator Comment by Rabe on 2014-11-16: Open a new question with a detailed description of what doesn't work. Also, try going through the basic ROS tutorials: http://wiki.ros.org/ROS/Tutorials
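For reference, git reads its own proxy configuration in addition to environment variables, so "Connection refused" on port 443 behind a campus proxy often means git was never told about the proxy. A sketch of the usual setup; proxy.example.edu:3128 is a placeholder for your college's actual proxy host and port (add a user:password@ prefix if your proxy requires authentication):

```shell
# Tell git to tunnel HTTP(S) traffic through the proxy.
git config --global http.proxy http://proxy.example.edu:3128

# Many tools (including git) also honour these environment variables:
export http_proxy=http://proxy.example.edu:3128
export https_proxy=http://proxy.example.edu:3128

# Verify the setting, then retry the clone:
git config --global --get http.proxy
git clone https://github.com/tum-vision/tum_simulator.git
```

If the clone then works, the proxy was the issue; if not, the proxy itself may be blocking port 443 and the network administrators would need to be asked.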
{ "domain": "robotics.stackexchange", "id": 20053, "tags": "ros, simulation, git" }
Nuclear Energy is measured in MeV, why not in Joules?
Question: Why is it preferred to measure nuclear energy in MeV instead of joules? Answer: The joule is a very large unit of energy when it comes to particle physics. The rest energy of an electron is approximately $10^{-13}$ J. Dealing with any masses and energies in joules would require consistently dealing with tiny numbers. That might bring up the next question: then why don't we measure energies in femtojoules and picojoules? This is because the electron volt is a very practical unit as well. If you are performing an accelerator experiment, for example, and take an electron or proton across a given potential difference, you already know the energy it has acquired in eV. Since it is very natural to work with multiples of electronic charges and potential differences of volts (and kilovolts and megavolts...), eV becomes a very handy unit. This is very similar to the scenario in astronomy. We often specify distances in astronomical units and parsecs because they are the right size and directly related to various measurement techniques.
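The scale mismatch is easy to see with the exact SI conversion factor (1 eV = 1.602176634e-19 J):

```python
# Conversion from electron volts to joules, using the exact elementary
# charge fixed by the 2019 SI redefinition.
EV_IN_JOULES = 1.602176634e-19

def ev_to_joule(e_ev):
    return e_ev * EV_IN_JOULES

# Electron rest energy: a tidy 0.511 MeV, an awkward ~8.2e-14 J.
print(ev_to_joule(0.511e6))

# Typical nuclear binding energy per nucleon, ~8 MeV, is ~1.3e-12 J.
print(ev_to_joule(8e6))
```

The MeV numbers are of order one; the joule numbers carry thirteen or so leading zeros, which is the whole point of the answer.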
{ "domain": "physics.stackexchange", "id": 37001, "tags": "energy, nuclear-physics, units, conventions, si-units" }
How does 4-vector notation work?
Question: In particle physics we are going over 4-vector notation. However, my background on this is a little shaky, and I'm having difficulty differentiating the notation and visualizing what it actually means. What is the difference between $X^\mu$, $X^\nu$ and $X^\sigma$? I'm not sure what the different superscripts mean, and I don't know how to visualize this. I know they are 4-vectors, but I don't know what they look like written out. Are each of them different? Or are the superscripts just placeholders to let you know that they have different components? Another issue I have is the difference between $\partial^\mu$ and $\partial^\nu$. $\partial^\mu=\big({\partial\over \partial t},-\vec \nabla\big) $ but what is $\partial ^\nu$? Answer: $X^\mu$, $X^\nu$ and $X^\sigma$ are all the same four-vector. The letter used for the superscript or subscript doesn't matter. If an index isn't being contracted with another index to form a scalar, as in $X^\mu X_\mu$, then it is just a placeholder which can take the values 0, 1, 2, and 3 (or sometimes people use t, x, y, and z). For example, either $$p^\mu=m u^\mu$$ or $$p^\nu=m u^\nu$$ is just shorthand for four equations, $$p^0=m u^0 \\ p^1=m u^1 \\ p^2=m u^2 \\ p^3=m u^3 \\ $$ When an index appears twice on the same side of the equation, once up and once down, this is called a contraction and you have to sum over all four values of the index: $$X^\mu X_\mu=X^0 X_0+X^1 X_1+X^2 X_2+X^3 X_3.$$ $\partial^\mu$ and $\partial^\nu$ both mean $\big({\partial\over \partial t},-\vec \nabla\big)$ (if you take the metric to be +---). But $\partial_\mu$ and $\partial_\nu$ both mean $\big({\partial\over \partial t},\vec \nabla\big)$.
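The contraction in the answer can be written out explicitly with the (+,-,-,-) metric; the four-vector components below are an arbitrary sample:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature +---
X_up = np.array([5.0, 1.0, 2.0, 3.0])    # contravariant components X^mu

X_down = eta @ X_up                       # lower the index: X_mu = eta_{mu nu} X^nu
scalar = X_up @ X_down                    # the contraction X^mu X_mu

# The same thing summed component by component, as in the answer:
by_hand = sum(X_up[mu] * X_down[mu] for mu in range(4))

print(scalar, by_hand)                    # both 11.0, i.e. 25 - 1 - 4 - 9
```

Renaming the loop variable `mu` to `nu` obviously changes nothing, which is the answer's point about dummy index letters.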
{ "domain": "physics.stackexchange", "id": 55589, "tags": "special-relativity, metric-tensor, vectors, tensor-calculus, notation" }
Is that a sundew?
Question: I have bought the seeds for a sundew/drosera (growbro.nl) but I have pretty serious doubt if that is really the plant I wanted :-) If not, what kind of plant is that I have grown? Answer: This is not a sundew. A simple Google search can confirm that.
{ "domain": "biology.stackexchange", "id": 9875, "tags": "species-identification, botany" }
Improvements on tree index class
Question: I developed a QuickIndex class which serves to index arrays of stringifiable objects using a tree structure. The purpose is then primarily to allow for fast index or include? method calls directed to the quick_index object as a proxy for the original array, as the index tree can be traversed much more efficiently than a linear array. There is a gist containing the whole class, but the main functionality comes from the two methods included below. class QuickIndex def initialize ary, stop_char = "!" @stop_char = stop_char @size = ary.size @index = Hash.new ary.each_with_index do |num, i| num = num.to_s.split("") << @stop_char index = @index until num.empty? n = num.shift index = index[n] ||= (n == @stop_char ? [] : {}) end index << i end end def index item index = @index (item.to_s.split("") << @stop_char).each { |n| next unless index = (index[n] or nil rescue nil) } index end end The stop_char is a single character which serves to indicate the end of a string in the index. The class is specifically intended for locating specific values in very large (NArray) arrays of ints or floats, but it's nice for it to work with other objects too. It works reasonably well, but I'd like to know of any optimisations or alternative strategies to this problem which would make the class quicker at either building or querying the index and/or reduce the memory footprint. Or if there's some standard library alternative I've overlooked... Answer: You can build it faster and use it faster if you use a flat hash instead of cascading hashes.
Here's a simple implementation using a "flat" hash: class QuickerIndex def initialize(array) @index = {} array.each_with_index do |item, i| (@index[item.to_s] ||= []) << i end end def index(item) @index[item.to_s] end end and a benchmark to compare the two: require 'benchmark' floats = 10_000.times.map{rand} Benchmark.benchmark('', 20) do |x| index = nil x.report('cascading create:') {index = QuickIndex.new(floats)} x.report('cascading access:') do floats.each do |float| index.index(float) end end x.report('flat create:') {index = QuickerIndex.new(floats)} x.report('flat access:') do floats.each do |float| index.index(float) end end end CPU No duplicate items Without duplicate items, the CPU time needed to create and access various sizes of indices were: ------CREATE-------- -----ACCESS------ NUMBER OF FLOATS CASCADING FLAT CASCADING FLAT 10,000 0.27 0.05 0.20 0.03 100,000 3.98 0.45 2.42 0.43 1,000,000 140.75 14.13 69.72 4.84 With no duplicate items, flat hash scales a little better than cascading hash in create, but much better in access. Flat hash is faster for create and access. Duplicate items Now, what about repeated items? You indicate that there is some likelihood that two items will share the same key (that is, have the same #to_s). Let's assume that each number is duplicated 10 times. That is, in a list of 10,000 floats, there are really 1,000 unique floats; in a list of 100,000 floats, there are 10,000 unique floats: ------CREATE-------- -----ACCESS------ NUMBER OF FLOATS CASCADING FLAT CASCADING FLAT 10,000 0.23 0.02 0.20 0.02 100,000 2.20 0.26 2.12 0.21 1,000,000 27.70 3.56 26.29 2.62 10,000,000 364.14 25.16 341.66 23.44 Both flat and cascading hash perform better with duplicate items than without. Flat hash still scales better, and is always faster. Memory Unless compiled with the right switch, Ruby lacks an effective way to tell how much memory is in use by its objects. As a proxy, we can use "ps", which reports the total process size in kilobytes. 
It's an imperfect proxy, but it's the best we have. Here are the numbers for various numbers of floats. The number of unique floats was always kept at 1/10th the total number of floats. The memory size reported is the amount that memory usage went up after creating the instance of the index class: NUMBER OF FLOATS CASCADING FLAT 10,000 2,048k 52k 100,000 32,952k 1,664k 1,000,000 232,868k 28,596k 10,000,000 2,232,568K 229,512k
{ "domain": "codereview.stackexchange", "id": 5604, "tags": "optimization, ruby" }
cupola-based hydrocarbons
Question: My favorite groups of shapes are cupolas. Are there hydrocarbons in the shape of these similar to how cubane is to a cube? If so how stable are they, and could you please give some general information about each one. Answer: They would be highly unstable due to the inverted tetrahedral geometry of the carbons in the top face. Generally, a polyhedron can form the basis of a stable hydrocarbon if it has at most 3 edges for each corner. A big caveat is that the hydrocarbon will distort the shape if the angle strain is too great. I found that out in my computational research on prismanes.
{ "domain": "chemistry.stackexchange", "id": 3349, "tags": "organic-chemistry, structural-formula" }
Why do we automatically assume that the velocity vector $\vec{v}$ and location vector $\vec{r}$ are independent?
Question: I'm not sure if it's relevant, but I'm talking about a situation where a particle is moving in an electro-magnetic field. As I understand, if we see the term $\nabla \cdot \vec{v}$ or $\nabla \times \vec{v}$ we can say it is equal to 0, because the velocity vector is independent of the location vector. My question is this: Is this always true, or only in specific settings (particle in an electro-magnetic field for example)? If it is always true: What about a state where an electron's velocity changes with respect to $x$? Then they can't be independent, can they? In other words, are the following always true: $\partial v_{x}/\partial x = 0$ $\partial v_{y}/\partial x = 0$ $\partial v_{z}/\partial x = 0$ $\partial v_{x}/\partial y = 0$ $\partial v_{y}/\partial y = 0$ $\partial v_{z}/\partial y = 0$ $\partial v_{x}/\partial z = 0$ $\partial v_{y}/\partial z = 0$ $\partial v_{z}/\partial z = 0$ Answer: Despite your notation, I'm guessing that you're asking about partial derivatives. To tex a partial derivative use "\partial x" as in "$\partial x$". This sort of confusion arises in Lagrangian calculations and I'll bet that's where you came upon it. The effect arises from our mathematics. It appears even with only a single dimension $x$, so I'll illustrate the effect that way. If we left everything in terms of position (with velocity defined as $dx/dt$), then the gradient of the velocity wouldn't be zero. That's because the velocity does, in fact, depend on position. Instead, when we do classical mechanics with Lagrange's equation, we think of the Lagrangian as being a function of position, velocity, and perhaps time. So we write it as $L(x,\dot{x},t)$. The effect of this way of looking at the problem is that we double the number of variables (from $x$ to $x,\dot{x}$), but we eliminate the need for second derivatives. This makes the problem actually easier to solve, but to be consistent, we have to make our partial derivatives (i.e.
partials with respect to position or velocity) apply only to the position or velocity. You can quickly verify that the Lagrangian $L(x,y,\dot{x},\dot{y}) = 0.5m\dot{x}^2 +0.5m\dot{y}^2 - mgy$ leads to the equations of motion that you expect for a body of mass m moving in a gravitational field: $m\ddot{x} = 0,\;\;m\ddot{y} = -mg$, and that this happens only if you follow the rules you've been given for calculating partial derivatives. Maybe this will ease your mind. Also see: Why does calculus of variations work? By the way, the place where partial derivatives used to bother me the most was in thermodynamics. So in short, we don't automatically assume it. It happens when we use math in certain ways.
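The suggested verification can be done symbolically; here is a sketch using sympy, treating $\dot x$ and $\dot y$ as independent differentiation variables exactly as the answer describes:

```python
import sympy as sp

t = sp.symbols('t')
m, g = sp.symbols('m g', positive=True)
x, y = sp.Function('x')(t), sp.Function('y')(t)
xd, yd = sp.diff(x, t), sp.diff(y, t)

# The Lagrangian from the answer: L = m/2 (xdot^2 + ydot^2) - m g y
L = sp.Rational(1, 2) * m * (xd**2 + yd**2) - m * g * y

# Euler-Lagrange: d/dt (dL/d(qdot)) - dL/dq = 0, where the partial with
# respect to qdot ignores q and vice versa.
eq_x = sp.diff(sp.diff(L, xd), t) - sp.diff(L, x)   # m * x''(t)
eq_y = sp.diff(sp.diff(L, yd), t) - sp.diff(L, y)   # m * y''(t) + m*g

print(sp.simplify(eq_x))   # setting this to 0 gives m x'' = 0
print(sp.simplify(eq_y))   # setting this to 0 gives m y'' = -m g
```

Differentiating $L$ with respect to $\dot x$ while holding $x$ fixed is precisely the "rule" that the question found mysterious, and sympy implements it directly by differentiating with respect to the `Derivative` object.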
{ "domain": "physics.stackexchange", "id": 487, "tags": "classical-mechanics, variational-calculus" }
Time complexity of finding the shortest path in DAG
Question: I apologize for the seemingly basic question: I saw numerous times that the time complexity of finding the shortest path in a directed acyclic graph is O(|V| + |E|). Why is there this |V|? Isn't it always the case that for a connected graph |E|+2 > |V|, hiding the |V| under |E|? I mean: O(|V| + |E|) = O(|E| + |E| + 2) = O(|E| * 2) = O(|E|)? I also think the algorithm for solving the SP only follows the edges, no? Answer: A DAG (directed acyclic graph) doesn't have to be a connected graph, so the assumption $|E|+2 > |V|$ doesn't hold for some inputs.
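To make the role of the $|V|$ term concrete, here is a sketch of the shortest-path DP over a DAG whose vertices are assumed to already be labelled in topological order (0..n-1). With 6 vertices and only 2 edges, the vertex loop still does $|V|$ work:

```python
from collections import defaultdict

def dag_shortest_paths(n, edges, source):
    """Shortest paths in a DAG. edges: list of (u, v, w) with the
    vertices 0..n-1 assumed to be given in topological order."""
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
    dist = [float('inf')] * n
    dist[source] = 0
    for u in range(n):                 # O(|V|): every vertex, edges or not
        if dist[u] == float('inf'):
            continue                   # unreachable, nothing to relax
        for v, w in adj[u]:            # O(|E|) total across the whole loop
            dist[v] = min(dist[v], dist[u] + w)
    return dist

# A disconnected DAG: 6 vertices but only 2 edges, so |E| + 2 = 4 < |V| = 6.
print(dag_shortest_paths(6, [(0, 1, 5), (1, 2, 1)], 0))
# [0, 5, 6, inf, inf, inf]
```

Even if the edge relaxations were free, the pass over the vertices (here, and in the topological sort that would normally precede it) already costs $\Theta(|V|)$, which is why the bound is stated as $O(|V| + |E|)$.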
{ "domain": "cstheory.stackexchange", "id": 2494, "tags": "graph-theory, graph-algorithms" }
Active Directory Query Application
Question: This application is designed to query an active directory, and at the moment, performs only two tasks: Save a list of all users to a file. Save a list of all groups that all users are in to a file. I tried to implement a print all groups method, but ended up removing it. I believe I removed all references to it from this code, but if you see one, please ignore (or fuss at me for not scrutinizing enough). The method that saves all user groups uses a background worker due to how much longer it takes to run. I understand I should probably add a background worker for the save users method too. Users are able to change both the domain and the organizational units using text boxes. ActiveDirectoryTool.cs: public partial class ActiveDirectoryTool : Form { private Backend backend; public ActiveDirectoryTool() { InitializeComponent(); backend = new Backend(); UpdateDisplay(); } private void getAllUsers_Click(object sender, EventArgs e) { this.Enabled = false; if (backend.PrintAllUsersToFile()) { this.Enabled = true; } } private void printAllUserGroups_Click(object sender, EventArgs e) { if (!backgroundWorker1.IsBusy) { this.Enabled = false; backgroundWorker1.RunWorkerAsync(); } } private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e) { backend.PrintAllUserGroupsToFile(); } private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { this.Enabled = true; } private void UpdateBackend() { backend.Domain = domainBox.Text; backend.OrganizationalUnits = organizationalUnitBox.Text; backend.WriteSettings(); } private void UpdateDisplay() { domainBox.Text = backend.Domain; organizationalUnitBox.Text = backend.OrganizationalUnits; scopeDisplay.Text = backend.Scope; } private void organizationalUnitBox_TextChanged(object sender, EventArgs e) { UpdateBackend(); UpdateDisplay(); } private void domainBox_TextChanged(object sender, EventArgs e) { UpdateBackend(); UpdateDisplay(); } } Backend.cs: internal class Backend { 
private const string DefaultDomain = "dnet.domtar"; private const string CommaSeparatedValuesExtension = ".csv"; private const string PlainTextExtension = ".txt"; private const string TabSeparatedValuesExtension = ".tsv"; private const string DefaultExtension = PlainTextExtension; private const string DateTimeFormat = "yyyyMMddTHHmmss"; private const char Hyphen = '-'; private const char Tab = '\t'; private const char Comma = ','; private const string UserListHeader = "Last\tFirst\tDisplay Name\tID\tActive\tLocked\tDescription\tHome Drive\tHome Folder\tLogin Script\tEmail\tStreet\tCity\tState\tPhone\tDistinguished Name"; private const string GroupListHeader = "Group Name\tGroup ID\tManaged By\tDescription\tDistinguished Name"; private const string UserGroupListHeader = "User ID\tGroup\tUser Full Name\tUser Distinguished Name"; private string path = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), "ActiveDirectoryTool"); public Backend() { if (!Directory.Exists(path)) { Directory.CreateDirectory(path); } ReadSettings(); } private string domain; private string domainContext; private string organizationalUnits; public string Domain { get { return domain; } set { domain = value; } } public string OrganizationalUnits { get { return organizationalUnits.Replace("OU=", "").Replace(",", " "); } set { organizationalUnits = "OU=" + value.Replace(" ", ",OU="); } } private DirectoryEntry DefaultRootDirectory { get { return new DirectoryEntry("LDAP://RootDSE"); } } private DirectoryEntry RootDirectory { get { if (!String.IsNullOrWhiteSpace(Domain) && !Domain.Equals(DefaultDomain)) { return new DirectoryEntry("LDAP:://" + Domain); } else { return DefaultRootDirectory; } } } public string DomainContext { get { if (String.IsNullOrWhiteSpace(domainContext)) { domainContext = GetDefaultDomainContext(); } return domainContext; } } public string Scope { get { return organizationalUnits + Comma + DomainContext; } } private string GetDefaultDomainContext() { 
return RootDirectory.Properties["defaultNamingContext"].Value as string; } private PrincipalContext GetPrincipalContext() { return new PrincipalContext(ContextType.Domain, Domain, Scope); } public void WriteSettings() { ConfigurationManager.AppSettings["domain"] = domain; ConfigurationManager.AppSettings["organizationalUnits"] = organizationalUnits; } private void ReadSettings() { domain = ConfigurationManager.AppSettings["domain"]; organizationalUnits = ConfigurationManager.AppSettings["organizationalUnits"]; } private void ShowException(Exception e) { MessageBox.Show("Exception: " + e.Message + "\n" + e.StackTrace); } private string GenerateFilename(string fileType) { return fileType + Hyphen + DateTime.Now.ToString(DateTimeFormat) + DefaultExtension; } private bool IsActive(DirectoryEntry de) { if (de.NativeGuid == null) return false; int flags = (int)de.Properties["userAccountControl"].Value; return !Convert.ToBoolean(flags & 0x0002); } private string ConcatenateWithTabs(params string[] strings) { string concatenated = ""; foreach (string s in strings) { concatenated += s + Tab; } return concatenated; } private string UserAsString(UserPrincipal user) { DirectoryEntry entry = user.GetUnderlyingObject() as DirectoryEntry; return ConcatenateWithTabs( user.Surname, user.GivenName, user.DisplayName, user.SamAccountName, IsActive(entry).ToString(), user.IsAccountLockedOut().ToString(), user.Description, user.HomeDrive, user.HomeDirectory, user.ScriptPath, user.EmailAddress, (string)entry.Properties["streetAddress"].Value, (string)entry.Properties["l"].Value, (string)entry.Properties["st"].Value, user.VoiceTelephoneNumber, user.DistinguishedName); } private string UserGroupAsString(UserPrincipal user, GroupPrincipal group) { return ConcatenateWithTabs( user.SamAccountName, group.SamAccountName, user.Name, user.DistinguishedName); } public bool PrintAllUsersToFile() { try { int count = 0; string filename = Path.Combine(path, GenerateFilename("AllUsers")); using (var 
searcher = new PrincipalSearcher(new UserPrincipal(GetPrincipalContext()))) { using (StreamWriter sr = new StreamWriter(filename)) { sr.WriteLine(UserListHeader); foreach (var result in searcher.FindAll()) { sr.WriteLine(UserAsString((UserPrincipal)result)); count++; } } } MessageBox.Show("All (" + count + ") users saved to file " + filename); } catch (Exception e) { ShowException(e); } return true; } public bool PrintAllUserGroupsToFile() { try { int count = 0; string filename = Path.Combine(path, GenerateFilename("UserGroups")); using (var searcher = new PrincipalSearcher(new UserPrincipal(GetPrincipalContext()))) { using (StreamWriter sr = new StreamWriter(filename)) { sr.WriteLine(UserGroupListHeader); foreach (UserPrincipal user in searcher.FindAll()) { foreach (GroupPrincipal group in user.GetGroups()) { sr.WriteLine(UserGroupAsString(user, group)); } count++; } } } MessageBox.Show("All (" + count + ") users with groups saved to file " + filename); } catch (Exception e) { ShowException(e); } return true; } } Answer: "Backend" is a compound word and thus the correct capitalization should be "BackEnd". However, that would still be a fairly meaningless name for a class, even for a namespace. Your Backend class starts out with a dozen const; I'd put those in a separate (static) class. Strings like "domain" and "organizationalUnits" and "OU=" get used multiple times, so they too should be const strings, and of course then moved to that static class mentioned earlier. Why is there a method ShowException in your back-end code? Separate your UI from your logic. The contents of that method are also up for debate: showing the stack trace? You do a lot of string concatenation where perhaps string.Format() would be more appropriate, e.g. return fileType + Hyphen + DateTime.Now.ToString(DateTimeFormat) + DefaultExtension;. ConcatenateWithTabs seems to reinvent string.Join() for some reason? 
MessageBox.Show is also present in PrintAllUserGroupsToFile and PrintAllUsersToFile, and it shouldn't be: decouple your UI from your back-end.
{ "domain": "codereview.stackexchange", "id": 20118, "tags": "c#, winforms, active-directory" }
In this “hybrid” ceramic 608 bearing is there likely a plastic piece holding the bearings?
Question: I have some “hybrid” bearings that I purchased a long time ago on Amazon. They have silicon carbide balls and steel inner and outer races, as you can see in the photos. What I am wondering is: what are the other pieces that seem to be holding and covering the balls? Are they likely made of plastic? I realize that this is hard to answer without any traceability or part number, but clearly these are mass-produced bearings common to many people, so maybe someone knows. The bearings seem to spin very fast without much resistance but also have a lot of play in their non-preloaded condition. A second aspect to my question is based on this observed play and the fact that there looks to be a plastic part in there - could these be some sort of gimmick for use in fidget spinners and not for any actual constrained or load-bearing application? My fault for buying them in the first place, but I am curious if anyone knows more about these mysterious bearings from the cave of Alibaba, and the design choices that led them to be what they are. Edit: I think this is called a bearing retainer, based on the answer below. I found a high-end fancy bearing from Campagnolo bicycles that seems to have a retainer that looks similar... so just because it’s plastic doesn’t mean it’s cheap or not good quality. Answer: All ball bearings have what is called a ball retainer or ball cage that holds the balls in the raceways. It can be metal or plastic. You can just Google images "ball bearing anatomy" for exploded pictures of a variety of constructions.
{ "domain": "engineering.stackexchange", "id": 5215, "tags": "bearings, ceramics" }
Is there any way for a planet orbiting a red dwarf in the habitable zone to not be tidally locked?
Question: Is there any way to avoid the tidal locking of a planet orbiting a red dwarf in the habitable zone? For example, could a planet with a 90° obliquity and large moon avoid such a situation? Answer: Yes: It has a companion planet or an excessively large moon, with the two bodies orbiting their common center of mass (much like the Earth and the Moon). They could be tidally-locked to each other, but they cannot be tidally-locked to their star.
{ "domain": "astronomy.stackexchange", "id": 6682, "tags": "orbit, tidal-forces, celestial-mechanics, red-dwarf" }
How can a Regression based Neural Network learn class thresholds?
Question: I understand that to solve multi-class classification problems, we can use the softmax activation function in the output layer of the neural network. The softmax function outputs probabilities for each label, and the label with the highest probability is then predicted as the target label. However, I just saw in a research paper that the authors used a regression function instead of the softmax function in the output layer. The paper says: Because regression classification can automatically adjust classification thresholds based on data distribution to maximize classification performance I do not understand how the model can learn classification thresholds by itself. Are these thresholds part of the neural network architecture? Are these thresholds trained like the weights of layers? This is the link to the paper: https://www.sciencedirect.com/science/article/abs/pii/S016816991931556X Answer: The first thing to notice is that the assumptions on the target don't match those of multi-class classification: in particular, in multi-class classification it's generally assumed that any class other than the target one is equally bad. Here it's clear that this is not true: given an input with target "Healthy Apple", predicting "General Apple Scab" is not as bad as predicting "Serious Cedar Apple Rust"... in other words, the class order counts. In order to capture this property, they decide to use classification regression. About the automatic threshold, they don't say anything about it in the paper, so in my opinion what they do is adjust the thresholds to improve performance. Off the top of my head, one way is to predict the regression score for the training set, then fit 6 Gaussian distributions (like a naive Bayes model) and adjust the thresholds by moving them so that they best fit the result... or you can just plot the scores with different colors and check where the colors lie.
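The paper does not publish its exact procedure, so the Gaussian-fitting suggestion above can only be sketched. Below is a minimal, hypothetical Python version: treat the per-class mean of the regression scores as each class's location and place a decision threshold midway between adjacent locations (a fuller version would fit a Gaussian per class and cut where the densities intersect). The data, labels, and the midpoint rule are all illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: deriving class thresholds from 1-D regression scores.

def class_thresholds(scores, labels):
    """Per-class mean of the regression score, then one cut midway
    between each pair of adjacent class means (classes assumed ordinal)."""
    classes = sorted(set(labels))
    means = []
    for c in classes:
        vals = [s for s, l in zip(scores, labels) if l == c]
        means.append(sum(vals) / len(vals))
    return [(a + b) / 2 for a, b in zip(means, means[1:])]

scores = [0.1, 0.2, 0.9, 1.1, 2.0, 2.2]   # network's regression outputs
labels = [0, 0, 1, 1, 2, 2]               # ordinal class labels
print(class_thresholds(scores, labels))   # thresholds near 0.575 and 1.55
```

A score below the first threshold would be classified as class 0, between the two as class 1, and above the second as class 2; the cuts move automatically as the score distribution shifts, which is one plausible reading of the paper's claim.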
{ "domain": "ai.stackexchange", "id": 3906, "tags": "neural-networks, regression, sigmoid, multiclass-classification" }
Chemical Reaction for making Acetic Acid from Glycerol inside Fermenter
Question: I have been trying to find a chemical reaction for the formation of acetic acid from glycerol. I have been searching different literature, but apparently the reaction can not be found. I was hoping maybe anyone here would know the reaction. $$\ce{C3H8O3 + X -> C2H4O2 + X}$$ Some Background I am working on producing Succinic Acid (SA) from Glycerol (GLA) inside a fermenter. I know that inside the fermenter, the GLA is converted into succinic acid, water and acetic acid. I have been able to find a reaction for the formation of succinic acid, which is given by the following, $$\ce{C3H8O3 + CO2 -> C4H6O4 + H2O}$$ But I can not seem to find a reaction for the acetic acid. I need an equation because I want to know the stoichiometric relation between the GLA and AA. Additional Question Another thing is the buffer solution reaction required to keep the $\mathrm{pH}$ constant inside the fermenter. I have been studying that either ammonia or $\ce{Ca(OH)2}$ can be used. I know the chemical reaction, Calcium Hydroxide + Succinate -> Calcium Succinate + Water. The calcium succinate can be reacted with conc. sulfuric acid in order to crystallize the SA crystals, since sulfuric acid can release the free succinic acids. I understand the theory behind these reactions, but I can not seem to find chemical reactions, from where I can see the stoichiometric coefficients. Answer: Short answer: under anaerobic conditions, it is impossible to produce acetate as the sole product from glycerol with wild-type E. coli. Explanation: Under anaerobic conditions (and assuming no substrates for anaerobic respiration have been added), a significant concern is redox balancing. Since electrons are not transferred to oxygen (as they are under aerobic conditions), the number of electrons in the substrates must equal the number of electrons in the products. Glycerol as a substrate presents problems in this regard, because it is slightly more reduced than typical hexose substrates (eg glucose). 
Furthermore, the desired product acetic acid is more oxidized than glycerol (or hexoses, for that matter). Looking at the core metabolic pathways of E. coli, we immediately see that 1 molecule of glycerol is converted to 1 molecule of acetate. E. coli is unable to utilize all three carbon atoms of glycerol, i.e. it cannot make 3 molecules of acetate from 2 molecules of glycerol. The third carbon atom of glycerol is lost as either formic acid or $\ce{CO2}$ (with concomitant production of hydrogen gas). So we can write an initial stoichiometry of glycerol ($\ce{C3H8O3}$) + $\ce{H2O ->}$ acetic acid ($\ce{C2H4O2}$) + formic acid ($\ce{CH2O2}$) + 4 electrons + 4 $\ce{H+}$. [Note that the formic acid can be converted to $\ce{H2}$ and $\ce{CO2}$. The exact ratio of hydrogen to formate will depend on conditions.] Unfortunately, there is no pathway for disposing of the excess electrons. You would need to add a respiration substrate such as fumarate (converted to succinate) or nitrate or nitrite (both can be reduced to ammonia). One alternative approach would be to ferment the glycerol to a mixture of ethanol and formic acid or hydrogen gas and then chemically oxidize the ethanol to acetic acid. The balanced stoichiometry of that fermentation is glycerol ($\ce{C3H8O3}$) $\ce{->}$ ethanol ($\ce{C2H6O}$) + formic acid ($\ce{CH2O2}$), or glycerol ($\ce{C3H8O3}$) $\ce{->}$ ethanol ($\ce{C2H6O}$) + $\ce{H2 + CO2}$.
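Since the question is really about stoichiometric coefficients, the atom bookkeeping in these equations can be checked mechanically. A small Python sketch (handles only flat formulas like C3H8O3 — no parentheses or hydrates — and leaves the electron/proton terms out):

```python
import re
from collections import Counter

def atoms(formula):
    """Count atoms in a simple formula like 'C3H8O3' (no parentheses)."""
    counts = Counter()
    for elem, num in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        counts[elem] += int(num) if num else 1
    return counts

def balanced(lhs, rhs):
    """True if both sides of the reaction contain the same atoms."""
    left, right = Counter(), Counter()
    for f in lhs:
        left.update(atoms(f))
    for f in rhs:
        right.update(atoms(f))
    return left == right

# glycerol + CO2 -> succinic acid + water (from the question)
print(balanced(['C3H8O3', 'CO2'], ['C4H6O4', 'H2O']))   # True
# glycerol -> ethanol + formic acid (from this answer)
print(balanced(['C3H8O3'], ['C2H6O', 'CH2O2']))         # True
```

The same check confirms why a direct glycerol -> acetic acid equation cannot be written with coefficient 1:1 alone: C3H8O3 versus C2H4O2 leaves carbon, hydrogen, and oxygen all unbalanced.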
{ "domain": "chemistry.stackexchange", "id": 13271, "tags": "biochemistry" }
In what sense is the Kalman filter optimal?
Question: The Kalman filter is a minimum mean-square error estimator. The MSE is defined as $E\left(||\hat{x}_k-x_k||^2\right)$ where $x$ is the state and $\hat{x}$ is the estimate. When $x$ is a vector, for example one that contains distance and velocity, is the MSE equal to the distance MSE plus the velocity MSE? If so, the base units of distance and velocity are different. Does the MSE have any physical meaning? Answer: In the academic sense, where dimensions are not allowed into the room, the Kalman filter minimizes the expected MSE of the state vector, as you stated. You mentioned dimensions, and I thought "uh oh, this is a conundrum". But for a properly-constructed Kalman filter* the state estimation errors are uncorrelated, i.e. $\mathrm E \left\lbrace (\hat{x}_k - x_k)(\hat{x}_n - x_n) \right\rbrace = 0 \ \forall \ n \ne k$. This means that for any weighting vector $\mathbf w$, the Kalman minimizes the expected squared error of $\mathbf w^T \hat{x}$. So you can choose any values for the elements of $\mathbf w$ that make the dimensions work out, and the resulting error will be minimized. To answer your direct question: the Kalman is optimal in the sense that it minimizes the expected error of each state. It just happens that in the process (because it also decorrelates the errors) it minimizes any global weighted sum of the states, regardless of the weighting you choose. * "Properly constructed" in this case means that the model the Kalman was designed around actually matches the system whose states you're estimating**. ** Which really never happens in practice. It takes a lot of work to get close enough that you can ignore the difference. In actual practice very few folks do that; more often they either design a Kalman using informed guesses about the system dynamics and the process and measurement noises and then iterate on a solution, or they design some robust variant of the Kalman, such as an H-infinity filter.
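The minimum-MSE property is easiest to see in one dimension, where units pose no problem. Below is a sketch of a single scalar measurement update (just the update step, not a full filter): the posterior error variance comes out below both the prior variance and the measurement-noise variance, which is exactly the sense in which the estimate is optimal.

```python
# Scalar Kalman measurement update (a sketch, not a full filter):
# fuse a prior estimate with one noisy measurement.

def kalman_update(x_prior, p_prior, z, r):
    """x_prior/p_prior: prior mean and variance; z/r: measurement and
    its noise variance.  Returns the posterior mean and variance."""
    k = p_prior / (p_prior + r)           # Kalman gain
    x_post = x_prior + k * (z - x_prior)  # minimum-MSE blend of the two
    p_post = p_prior * r / (p_prior + r)  # posterior variance, = (1-k)*p_prior
    return x_post, p_post

x, p = kalman_update(x_prior=0.0, p_prior=4.0, z=1.0, r=1.0)
print(x, p)  # 0.8 0.8 -- posterior variance below both 4.0 and 1.0
```

With an uncertain prior (variance 4) and a tighter measurement (variance 1), the update leans toward the measurement, and no other linear blend of the two would give a smaller expected squared error.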
{ "domain": "dsp.stackexchange", "id": 10576, "tags": "kalman-filters" }
Decide the existence of a string homomorphism
Question: Consider the following problem: Given two strings x,y, decide whether there exists a string homomorphism f such that f(x)=y. It is easy to show that this problem is in $NP$. Are there other things we can say about this problem? e.g. Is it in $coNP$, or even $P$? This problem seems very natural, so I am not surprised if it has been studied thoroughly. However I could not find this problem in literature. Answer: It's discussed in one of the very first papers about strings and complexity, namely, Dana Angluin, Finding patterns common to a set of strings, J. Comput. System Sci. 21 (1980), 46-62. Look at Theorem 3.6. The problem is NP-complete. It's also in A. Ehrenfeucht, G. Rozenberg, Finding a homomorphism between two words is NP-complete, Inform. Process. Lett. 9 (1979) 86–88.
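For intuition about the problem, a brute-force checker is straightforward to write. This sketch restricts to non-erasing homomorphisms (every symbol maps to a nonempty string); the search is exponential in the worst case, consistent with the NP-completeness result above.

```python
def hom_exists(x, y):
    """Brute-force test for a non-erasing homomorphism f with f(x) = y."""
    def match(i, j, f):
        # f maps each symbol of x seen so far to its image in y;
        # i indexes x, j indexes y.
        if i == len(x):
            return j == len(y)
        c = x[i]
        if c in f:  # image already fixed: it must occur here in y
            w = f[c]
            return y.startswith(w, j) and match(i + 1, j + len(w), f)
        # otherwise try every nonempty candidate image for c
        for k in range(j + 1, len(y) + 1):
            f[c] = y[j:k]
            if match(i + 1, k, f):
                return True
            del f[c]  # backtrack
        return False
    return match(0, 0, {})

print(hom_exists("aba", "xyzxy"))  # True: f(a)='xy', f(b)='z'
print(hom_exists("aa", "xy"))      # False: f(a) cannot repeat
```

In Angluin's terminology, x plays the role of a pattern and y of the string it must generate; her Theorem 3.6 shows no polynomial-time algorithm is expected for this membership test unless P = NP.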
{ "domain": "cstheory.stackexchange", "id": 2958, "tags": "cc.complexity-theory, reference-request, fl.formal-languages" }
Speed to run a loop
Question: So this guy was the first to run a loop, and in this (German) article (and also in the video) a certain speed (13.8 km/h) is mentioned. Why must he run at this speed and not just "as fast as possible"? My intuition says it has to do with too much centrifugal force and him not being used to it $\rightarrow$ trip hazard. Answer: When they mention that speed they are speaking of the minimum speed he will need to run in order to keep contact with the loop. I believe they calculated that number simply as a reference point so that 1) they knew looping the loop was theoretically possible, and 2) he had an estimate of about how fast he needed to run. Imagine trying this stunt all day only to find later that in order to run around the loop you would have needed to achieve a speed of $50\ \mathrm{km/h}$ for the stunt to be even theoretically possible. In fact, if you watch the video you can tell that he does run faster than what he theoretically needed to in order to run safely around the loop. Theoretically, the minimum velocity needed at the apex of the loop is the velocity which will result in a normal force of zero at that point. This means at the very top of the loop his feet would not be exerting any force upon the loop at all (meaning he would just barely make it through the apex without falling). You can see from the video that this is not the case. If you watch the slow-motion clip his foot clearly presses against the track very near the apex of the loop, meaning he must have been traveling faster than the theoretical minimum velocity. So, in answer to your question, yes, running as fast as he could would have worked just fine, since that would clearly have been faster than the minimum velocity. Of course, the faster he runs the more leg strength he would need at the apex to resist the normal force of the track pushing down on him.
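The quoted figure is consistent with a simple back-of-the-envelope check: at the apex, gravity alone must supply the centripetal force, so $mg = mv^2/r$ gives $v_{\min} = \sqrt{gr}$. The article does not state the loop radius, so the sketch below infers it from the quoted speed rather than the other way around.

```python
import math

g = 9.81                       # gravitational acceleration, m/s^2
v_quoted = 13.8 / 3.6          # quoted minimum speed, km/h -> m/s

# At the apex with zero normal force: m*g = m*v^2/r, so v_min = sqrt(g*r).
# Invert to get the loop radius implied by the quoted speed:
r = v_quoted**2 / g
print(round(r, 2))             # ~1.5 m, a plausible human-sized loop

# Sanity check: plugging that radius back in recovers the quoted speed.
v_min = math.sqrt(g * r)
print(round(v_min * 3.6, 1))   # 13.8 km/h
```

A roughly 1.5 m radius (3 m diameter loop) looks about right against the video, which supports reading 13.8 km/h as the theoretical minimum rather than a target he had to hold exactly.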
{ "domain": "physics.stackexchange", "id": 12137, "tags": "newtonian-mechanics, centrifugal-force" }