| anchor | positive | source |
|---|---|---|
How high will a party balloon filled with helium go? | Question: How high can a party balloon filled with helium go before bursting? If you filled 2 balloons to different amounts and tied them together, would they go higher?
I'm asking because I'm considering buying a 200g weather balloon to take a phone into the upper stratosphere, but I suddenly wondered if a bunch of cheaper party balloons and string would do the trick.
Answer: There's a lot more than just physics to this answer.
First - you may not be aware of the fact that the "helium" you buy in a party store is a helium/air mixture that contains enough helium to lift a party balloon - but it doesn't have a lot of "lift" beyond that.
Second - you are in essence asking about the pressure at high altitudes; a balloon that has little elasticity (like a proper high altitude balloon) but a lot of "give" will expand in volume as the surrounding pressure drops, according to $\frac{PV}{T}=\rm{const}$. The pressure of the atmosphere follows a roughly exponential shape, with the value being about 0.1 atm at an altitude of 20 km (which is "somewhere in the stratosphere", depending on your definition).
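These two ingredients (the roughly exponential pressure profile and $\frac{PV}{T}=\rm{const}$) can be sketched numerically. The scale height of ~8 km and the temperatures below are rough standard-atmosphere values, and the balloon's elastic tension is ignored, so treat this as an order-of-magnitude estimate only:

```python
import math

def pressure_atm(h_km, scale_height_km=8.0):
    """Approximate atmospheric pressure (in atm) at altitude h_km,
    using the simple exponential (isothermal) barometric model."""
    return math.exp(-h_km / scale_height_km)

# Pressure at 20 km is roughly a tenth of an atmosphere:
p20 = pressure_atm(20.0)

# For a slack balloon, P*V/T = const. Taking T to drop from ~288 K at
# the ground to ~217 K at 20 km, the volume grows by roughly:
T0, T20 = 288.0, 217.0
expansion = (1.0 / p20) * (T20 / T0)  # V(20 km) / V(ground)

print(f"P(20 km) ≈ {p20:.2f} atm, volume expansion ≈ {expansion:.0f}x")
```

So a balloon reaching 20 km must accommodate roughly an order of magnitude of volume growth, which is exactly what a partially filled party balloon cannot do.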
Now there are two problems. The first is that your party balloons can only be partially filled at 1 atm if you don't want them to burst at 0.1 atm; this means you need a lot more balloons to get the same lift, and that means a lot of additional surface area / mass (including the strings holding the balloons together).
The second problem is the elasticity of the balloon. As the balloon fills, some internal pressure builds up, which compresses the gas inside, making it denser and less buoyant. For a meteorological balloon, this effect is small.
This is the reason proper high altitude balloons are slack - they can increase in size as they go higher. Having a single large surface rather than lots of smaller surfaces also significantly reduces the total mass of balloon you would need.
Finally - when toy balloons burst, they end up in the environment, and they kill wildlife. Please don't contribute further to that very real problem. | {
"domain": "physics.stackexchange",
"id": 35795,
"tags": "pressure, atmospheric-science, density, buoyancy, gas"
} |
Why doesn't charge accumulate in a loop? | Question: When learning about electromagnetism at my university, electricity flow is generally shown as a conductor with a high potential at one end and a low potential at the other and thus charges flowing down that potential gradient.
The charges are said to accumulate at one end until their own potential flattens this gradient.
We are then taught that this accumulation does not occur in a loop but I don't understand why it wouldn't.
At some point in the loop won't the charges necessarily have to go up a potential gradient to travel around the whole loop? If this is done by some mechanism in a battery does that mean it is incorrect to state that all you need for current flow is a closed loop of conductor and a pd?
Also current flow isn't the movement of charge but the movement of signal or fields if I understand correctly. So how is current flow in a non loop stopped so quickly?
I am a physics student with a pretty poor understanding of circuit components etc so if this can be explained with electromagnetic principles more than circuits I would really appreciate it, though no worries if not!
Thanks so much
Answer: It depends on how this potential difference is applied to the conductor.
The first case seems to correspond to a conductor placed in an external electric field. Then the charges would move to the ends of the conductor and build up a counteracting electric field, so that the field, the potential difference, and thus the current in the conductor become zero. This is a consequence of the charge flow being blocked at the ends. The same would happen to a conducting loop placed in an external field: opposite charges would build up at the ends of the loop along the direction of the applied external field.
The situation is different when the potential difference is due to an applied battery, which maintains a potential difference between the ends of a conductor. Then a constant current flows, because the charge entering the conductor from the battery at one electrode flows through the conductor back into the battery at the other electrode, where it continues on to the first electrode. The charge current flow thus forms a circuit, and nowhere is there an accumulation of charge due to this current.
Similarly, a closed conductor loop in a changing magnetic field, which produces an electric field by induction along the loop, carries a current without an accumulation of charge. This is a consequence of the law of charge conservation, i.e. the current continuity equation in the stationary case. | {
"domain": "physics.stackexchange",
"id": 100578,
"tags": "electromagnetism, electric-circuits, electricity, electric-current, potential"
} |
Experimental exactness of Schrödinger equation for more than 100 particles | Question: A question from a mathematician far from physics :)
I have heard that Schrödinger equation for $n$ particles is hard in the following sense:
If $n$ is big enough then there is no computer which can give a numerical approximation of the solution to the corresponding Schrödinger equation.
Since the notion of "proof" in physics is intimately related to the experimental observations, I was wondering about the following statement "quantum mechanics is the most exact physical theory".
My question: how can we (objectively) claim such statements if there is no way to verify the numerical exactness of the Schrödinger equation for 100 particles?
Edit: let me be clear about what I am looking for: is there a concrete numerical simulation of the Schrödinger equation with more than 100 particles interacting in a non-negligible way, and a real comparison with the experimental observations?
Edit2: everyone is free to downvote, but it will be nice if you provide the reason. But of course it is not obligatory:)
Answer: Despite it being impossible to exactly simulate a large number of particles on a computer, physicists have found many tricks to approximately simulate it. From Quantum Monte Carlo to Density functional theory to the whole field of Tensor networks and many others, there is a plethora of methods to obtain accurate predictions for the physics of many particles that don't involve attacking the problem directly (which would be impossible).
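To make "impossible to attack directly" concrete, here is a toy parameter count comparing a brute-force state vector with a matrix-product-state (tensor-network) ansatz; the bond dimension chosen below is purely illustrative:

```python
# Full many-body state: d^N complex amplitudes.
d = 2      # local dimension (e.g. a spin-1/2 particle)
N = 100    # number of particles
full_params = d ** N   # ~1.27e30 amplitudes -- hopeless to store

# Matrix product state: N tensors of shape (chi, d, chi), where the
# bond dimension chi controls how much entanglement is captured.
chi = 100  # illustrative bond dimension
mps_params = N * d * chi ** 2   # 2,000,000 parameters -- easily stored

print(f"full: {full_params:.3e}, MPS: {mps_params:.3e}")
```

The exponential-versus-polynomial gap is the whole point: low-entanglement states admit the compressed description while generic states do not.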
As an example, a single particle is described by a $d$ dimensional vector space, so an $N$ particle state is described by an unmanageable $d^N$ dimensional vector. It turns out though that physically relevant states of $N$ particles (i.e. ground states of local Hamiltonians) have some nice properties that allow them to be tractable. Specifically, they have relatively low entanglement (i.e. since the physics is described by interactions between nearby particles, far away particles will not be very correlated). This allows one to get an accurate description of such a state with a number of parameters that scales polynomially instead of exponentially in the system size. This is only possible because the physically relevant case is not the general case. This observation is where the techniques associated with tensor networks and matrix product states stem from. | {
"domain": "physics.stackexchange",
"id": 74355,
"tags": "quantum-mechanics, schroedinger-equation, epistemology"
} |
Drifting issue in ROS2 using SLAM-Toolbox | Question: I'm slightly new to robotics and I'm trying to prototype using ROS2-Galactic, I am using built-in packages such as Slam Toolbox and others.
Now that I'm trying to run it, I'm having some issues with the created map, and perhaps in tf as well. I don't know how to pin down where the problem is coming from; I am a complete noob at debugging this stuff. The scenario is: I'm trying to do SLAM and I put the robot in place without doing anything, but it seems that the robot is rotating clockwise on its own, as shown in rviz.
I hope someone is interested to help me, you can check clip here,
https://youtu.be/UVhMtrMc2ZU
Answer: Beyond the fact that the robot's not moving, so I can't know if there are other TF issues, the most obvious problem I see is that you're not using a laser scanner. This is a 2D laser scanner SLAM method. It needs laser-scanner-like angular data coverage. You're not going to get away with RGBD sensors unless you integrate 2-3 of them to get a reasonable angular band to scan match against.
If you have an RGBD sensor, there are specialized RGB-D SLAMs or visual SLAMs that you should use instead. We're not in 2010 anymore where there aren't many or good options for RGBD cameras. | {
"domain": "robotics.stackexchange",
"id": 2684,
"tags": "slam, raspberry-pi, ros2"
} |
Pulse signal to sine signal with the same frequency | Question: I have a pulse signal with frequency f(t), and I want to generate from it a sine signal with frequency f(t) (f(t) doesn't vary a lot in time).
Is there a method to do it directly with a function generator ?
If not, how can I achieve this?
Answer: Sounds like a very classical application of a PLL to me, aiming for the positive-going zero crossings of the sine wave emitted by a controllable oscillator to align with the positions of these pulses.
Assuming you ask this in an electronics, continuous-time context, implementation would look something like this:
Have a VCO running at roughly the right frequency¹.
Have a transmission gate (or analog mux, whatever) that gets activated for a controlled short time by every second input pulse (divide-by-2 with a flipflop), and which only then lets through the instantaneous value output by your VCO.
Have an integrator / low pass filter to convert these pulses to the control voltage of your VCO
Alternative approaches that can be more power-efficient and faster to build in CMOS technology² would instead
Have your VCO
convert your pulse train to a binary square wave (e.g., only 0 V and +1 V) by means of a toggle flip-flop
convert your VCO's sine to a similar square wave by comparing it to its average voltage (0 V)
combine the two with an XOR gate: if they are both at exactly the same frequency and phase-aligned, it will always output 0
when the XOR output is
low / 0: output zero
high/ not 0: either output a positive or negative voltage, depending on the state of the flip-flop output
use an analog integrator on that
use the integrator's output as VCO control voltage
If you know the rough frequency range sufficiently well, it's not too high and you don't care too much about harmonics in your produced sine wave:
The Fourier transform of an impulse train is, surprisingly at first, an impulse train of the inverse spacing (so, a DC component, the oscillation at the same frequency, and every multiple of it). A simple band-pass filter can select the fundamental you want from that mixture. But that bandpass filter will not be perfect – you'll see your sine, but with harmonics, depending on how well your filter suppresses all unwanted harmonics.
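The "select the fundamental" idea can be sketched numerically; below, an idealized brick-wall selection in the frequency domain stands in for the analog band-pass filter (a real filter would of course leak some harmonics), with an illustrative 50 Hz pulse train:

```python
import numpy as np

fs = 1000          # sample rate, Hz
f0 = 50            # pulse repetition frequency, Hz
n = 1000           # one second of samples

# Impulse train: one unit pulse every fs/f0 samples.
x = np.zeros(n)
x[:: fs // f0] = 1.0

# Its spectrum has lines at 0, f0, 2*f0, ...; keep only the +/- f0 bins
# (an idealized band-pass filter) and transform back.
X = np.fft.fft(x)
Y = np.zeros_like(X)
Y[f0] = X[f0]
Y[n - f0] = X[n - f0]
y = np.fft.ifft(Y).real   # a pure sinusoid at f0

# The dominant component of y is indeed at f0:
peak_bin = np.argmax(np.abs(np.fft.rfft(y))[1:]) + 1
print(peak_bin)  # 50
```

Keeping more bins around each harmonic is what a less selective physical filter effectively does, which is where the residual harmonic content mentioned above comes from.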
¹ you can get roughly the right frequency by initially just counting the number of pulses in a time window, e.g. with a microcontroller, FPGA, or actual high-speed counter, depending on your application's needs.
² but don't trust me on this too much: I'm very much not a silicon designer. This is what I read in much older lectures. | {
"domain": "dsp.stackexchange",
"id": 12542,
"tags": "signal-synthesis, electrical-signal"
} |
Cannizzaro reaction of chloral | Question: When chloral undergoes reaction with a concentrated alkali, will it undergo the Cannizzaro reaction to form the corresponding alcohol and acidic salt, or should it form chloroform instead by departure of the trichlorocarbanion instead of the hydride ion?
I understand that the trichlorocarbanion is stabilized by the $-I$ effect of chlorine, so why not form that instead?
Answer: In the Cannizzaro reaction an aldehyde with no alpha-hydrogens is treated with $\ce{OH^-}$. Through a series of equilibria the aldehyde goes on to disproportionate into the corresponding carboxylate and alcohol as shown in the following mechanism.
Again, the only requirement for the Cannizzaro reaction to occur is that the aldehyde not have any alpha-hydrogens. Formaldehyde and benzaldehyde are two very common aldehydes that undergo the reaction when treated with $\ce{OH^-}$. Chloral ($\ce{CCl3CHO}$), having no alpha-hydrogens, also undergoes the Cannizzaro reaction (reference).
Edit: Response to OP's comment
A key part of the Cannizzaro mechanism is shown in the second line of the above mechanism. Here a hydrogen is transferred in a concerted step to the second molecule of aldehyde. Free $\ce{H^-}$ is not generated in the process, rather it is a smooth, concerted transfer of a hydrogen atom.
[Note: This has been proven experimentally: when the reaction is run in $\ce{D2O}$, no deuterium winds up attached to the alcohol carbon. This shows that free $\ce{H^-}$ was never involved, but rather a concerted transfer of the aldehydic hydrogen takes place.]
Thanks to Mithoron for noting that, with chloral, the haloform reaction can apparently compete with the Cannizzaro reaction. Presumably, the pathway followed depends upon the reaction conditions. | {
"domain": "chemistry.stackexchange",
"id": 8457,
"tags": "carbonyl-compounds"
} |
Vacuum Wavefunctional | Question: I am having this problem in understanding the vacuum wavefunctional in QFT. Hence this naive question.
I mean, if someone says vacuum wavefunctional, I can think of an element like the wavefunction in quantum mechanics, but now a function of fields. It's easier to solve for it, I think, in the Schrödinger representation, as e.g. Hatfield does in chapter 10 of his book QFT of point particles and strings. I seemingly understood that. Step by step, easy. But now I see that in the path integral formalism, say in Euclidean FT, the vacuum state is defined as a path integral over half of the total spacetime with some fixed boundary condition (B.C.) $\psi(\tau=0,x,z)=\psi_0(x,z)$ obeyed on the boundary of the half space, and is given by $\mathbf{\Psi}[\psi_0]=\int_{\text{fixed B.C.}}\mathcal{D}\psi\, e^{-W[\psi]}$ with $W$ the action. Isn't it reminiscent of the generating or partition function of a Euclidean field theory?
Anyway, I don't understand (may be also because I didn't find a reference, could you suggest one?) why this is and how to see it. An intuitive explanation will be very helpful. Thanks.
Answer: The path integral over a "thick layer" of spacetime always produces the transition amplitudes
$$ \langle {\rm final}| U | {\rm initial}\rangle $$
where $U$ is the appropriate unitary evolution operator. This is already true in non-relativistic quantum mechanics where the equivalence between Feynman's path integral approach and the operator formalism is being shown most explicitly. The only difference in quantum field theory is that there are infinitely many degrees of freedom. It's like having infinitely many components of $x_i$ where the discrete index $i$ becomes continuous and is renamed as a point in space, $(x,y,z)$, and $x$ is replaced and renamed by fields $\phi$, so $x_i$ is replaced by $\phi(x)$.
It means that if we have a "thick layer" of spacetime given by time coordinate $t$ satisfying
$$ t_0 < t < t_1 $$
then the path integral with boundary conditions $\phi_0$ and $\phi_1$ at the initial and final slice calculates the matrix element
$$ \langle \phi_1| U | \phi_0 \rangle$$
in a full analogy with (just an infinite-dimensional extension of) non-relativistic quantum mechanics. Just to be sure, the initial and final states above are really meant to be given by the wave functional
$$ \Psi[\phi(x,y,z)] = \Delta [\phi(x,y,z) - \phi_0(x,y,z)] $$
which holds for one of them (initial/final) while the other is obtained by replacing $0$ by $1$. So far, everything is completely isomorphic to the case of non-relativistic quantum mechanics except that the number of independent degrees of freedom $\hat x$, now called $\hat \phi$, is higher.
The only new thing in quantum field theory is that we also often need path integrals where the initial or final state is replaced by a semi-infinite line. When it's done so, we no longer specify a particular configuration on this initial slice or final slice because there's really no initial slice or final slice on this side (or on both sides, if the path integral integrates over configurations in the whole spacetime).
We don't specify the boundary conditions; in fact, when we follow the correct rules, the path integral immediately and automatically replaces the initial or final state (replaced by the semi-infinite line or half-spacetime) by the ground state $|0\rangle$ or $\langle 0|$, whichever is appropriate. Why is it so? It's because in the operator formalism, such a path integral still contains the evolution operator
$$ \exp(\hat H \cdot \Delta t / i\hbar) $$
over an infinite period $\Delta t$. In fact, the $i\epsilon$ and related rules in the path integral – the way how we treat the poles in the complex energy/momentum plane – really guarantee that
$$ \Delta t = \infty (1-i \epsilon) $$
where $\epsilon$ is an infinitesimal constant which is however greater than $1/\infty$ where $\infty$ is the positive real prefactor above. Consequently, the exponential (evolution operator) above contains the factor of
$$ \exp(-\infty \epsilon \hat H ) $$
which is suppressing states in the relevant initial and/or final state according to their energy. Because $\infty\epsilon$ is still infinite, all the excited states are suppressed much more than the ground state and only the ground state survives.
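Spelling out the suppression in an energy eigenbasis makes this a one-line check (writing $T$ for the infinite real prefactor):

```latex
e^{-T\epsilon \hat H}\,|\phi_0\rangle
  = \sum_n e^{-T\epsilon E_n}\,|n\rangle\langle n|\phi_0\rangle
  \;\xrightarrow{\;T\epsilon\to\infty\;}\;
  e^{-T\epsilon E_0}\,|0\rangle\langle 0|\phi_0\rangle
```

so, up to an overall state-independent factor, only the projection onto the ground state survives, assuming $\langle 0|\phi_0\rangle \neq 0$.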
That means that if we integrate in Feynman's path integral over all configurations in the whole spacetime, we automatically get the matrix elements in the vacuum
$$ \langle 0 | (\dots ) |0 \rangle .$$
Similarly, if we integrate over configurations in a semi-infinite spacetime and specify the boundary condition for the fields (resembling a classical field configuration) at the boundary, we obtain matrix elements like
$$ \langle 0 | (\dots) | \phi_0 \rangle $$
or the Hermitian conjugates of them where $\phi_0$ is the boundary condition. If the inserted operators $(\dots)$ are empty, just an identity operator, the expression above clearly reduces to
$$ \langle 0 | \phi_0 \rangle \equiv \Psi^*[\phi_0]$$
where the last identity is nothing else than an infinite-dimensional generalization of
$$ \langle \psi| x \rangle = \psi^*(x) $$
in non-relativistic quantum mechanics. We just have infinitely many $x$-like degrees of freedom in quantum field theory which is why wave functions are replaced by wave functionals. | {
"domain": "physics.stackexchange",
"id": 2272,
"tags": "quantum-field-theory, path-integral"
} |
Electric field due to a uniformly charged FINITE rectangular plate | Question: I was teaching kids about how to find electric field using the superposition
principle for continuous charge distributions. I thought maybe I should derive
the formula for the electric field due to a finite rectangular sheet of charge on the surface $S$,
where
$$
S = \left\{(x,y,z)\in \mathbb{R}^3 \mid -a/2< x < +a/2; -b/2< y < +b/2 ; z = 0 \right\}
.$$
However, I got stuck at the following integration.
$$
E(0,0,r) = \frac{\sigma r}{4\pi\epsilon_o}
\int_{x=-a/2}^{x=+a/2}\int_{y=-b/2}^{y=+b/2} \frac{dx dy}{(x^2+y^2+r^2)^{3/2}},
$$
where $\sigma$ is the surface charge density.
Note: This integration can be done if $a$ or $b$ or both are very large i.e.
$\infty$ in which case we get usual result of $E=\frac{\sigma}{2\epsilon_o}$
So my question is: can this integral be calculated? If not, what method should I use to find the electric field in this case? It would also be great if anyone could comment on how to find the electric field by directly solving the Poisson equation.
Consequently, if we take the case of a finite disk, the resulting integral is
$$
E = \frac{\sigma r}{2\epsilon_o}
\int_{\xi=0}^{\xi=R} \frac{\xi d\xi}{(\xi^2+r^2)^{3/2}}
$$
which can be solved as
$$
E = \frac{\sigma}{2\epsilon_o} \left(1- \frac{r}{\sqrt{r^2+R^2}}\right)
$$
Now by taking the limit $R \rightarrow \infty$ we can show that
$E \rightarrow \frac{\sigma}{2\epsilon_o}$.
Answer: The integrals are difficult but not impossible, unless I've made a mistake with WolframAlpha. The result is:
$$E = \frac{\sigma}{\pi \epsilon_0} \arctan\left( \frac{ab}{4r\sqrt{(a/2)^2+(b/2)^2+r^2}} \right)$$
When $a,b \to \infty$ the whole arctangent goes to $\pi/2$ and we recover $E=\frac{\sigma}{2\epsilon_0}$, which is definitely encouraging.
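As a further check, the arctangent result can be compared numerically against the original double integral (units chosen so $\sigma/\epsilon_0 = 1$; plain midpoint-rule quadrature, with illustrative values of $a$, $b$, $r$):

```python
import math

def E_formula(a, b, r):
    """Closed-form field on the axis of an a-by-b charged rectangle,
    in units where sigma/epsilon_0 = 1."""
    return (1.0 / math.pi) * math.atan(
        a * b / (4.0 * r * math.sqrt((a / 2) ** 2 + (b / 2) ** 2 + r ** 2)))

def E_numeric(a, b, r, n=400):
    """Midpoint-rule evaluation of (r/4pi) * iint dx dy / (x^2+y^2+r^2)^(3/2)."""
    hx, hy = a / n, b / n
    total = 0.0
    for i in range(n):
        x = -a / 2 + (i + 0.5) * hx
        for j in range(n):
            y = -b / 2 + (j + 0.5) * hy
            total += hx * hy / (x * x + y * y + r * r) ** 1.5
    return r * total / (4.0 * math.pi)

a = b = 1.0
r = 0.5
print(E_formula(a, b, r), E_numeric(a, b, r))
# both should come out close to 1/6 for this particular geometry
```

Agreement between the two to quadrature accuracy is a good sign the closed form was transcribed correctly.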
And I don't know what you mean by "directly solving Poisson's equation". As far as I know, the usual way to do that is to use Green's functions, i.e., this integral. | {
"domain": "physics.stackexchange",
"id": 51779,
"tags": "homework-and-exercises, electrostatics, electric-fields"
} |
Resistance in A.C. circuits | Question: Why do electromagnets offer maximum resistance in A.C. circuits?
Answer: An electromagnet has a lot of turns of wire, which gives it a property called inductance: the property of resisting a change in the current. You may be familiar with Lenz's Law
$$V=-L\frac{dI}{dt}$$
which says that an inductor with inductance $L$ will generate a back e.m.f. $V$ when the current (and hence the flux $\Phi=LI$) through it changes. Now if you drive a current through an inductor, the act of sending the current will change the flux, and will therefore generate a "back e.m.f.". Which looks a lot like the voltage that is generated across a resistor when you drive a current through it.
Now if the current is of the form $I=I_0 \sin \omega t$, then the rate of change of flux will be proportional to the rate of change of the current, $\frac{d\Phi}{dt}\propto I_0 \omega \cos\omega t$, and will therefore be proportional to the frequency $\omega$.
This means that the back e.m.f. will increase with frequency, and the resistance felt will also increase. In fact, if you know how to use complex numbers, an inductor with a DC resistance $R$ and inductance $L$ will have a complex impedance
$$Z = j\omega L + R$$
We use complex numbers because the back e.m.f. is out of phase with the current (when you differentiate $\sin$ you get $\cos$). The magnitude of this impedance is
$$|Z| = \sqrt{\omega^2L^2 + R^2}$$
At large frequencies, the series resistance will become almost irrelevant, and the impedance will be proportional to the frequency.
UPDATE
It was pointed out by CuriousOne that there is an additional complication. An inductor typically has some (stray, parasitic) parallel capacitance. This is caused, for example, by the fact that adjacent windings act as a very small capacitor. With capacitance in parallel with inductance, the complex impedance becomes
$$Z = \frac{(R + j\omega L)\frac{1}{j\omega C}}{R+j\omega L + \frac{1}{j\omega C}}\\
=\frac{R + j\omega L}{j\omega RC - \omega^2 LC+1}$$
If $R$ is small, then this will reach a maximum when $\omega^2 LC = 1$ - this is the condition for resonance of the coil. At even higher frequencies, the impedance will increase again, as the parallel capacitance becomes a better and better path for the electrical current to bypass the inductor.
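This resonance is easy to verify numerically; the component values below are arbitrary illustrations, not typical electromagnet values:

```python
import math

R, L, C = 1.0, 1e-3, 1e-9           # ohms, henries, farads (illustrative)
omega_res = 1.0 / math.sqrt(L * C)  # expected resonance: 1e6 rad/s

def impedance(omega):
    """|Z| of the series R-L branch in parallel with the stray capacitance C."""
    z = complex(R, omega * L) / complex(1.0 - omega ** 2 * L * C, omega * R * C)
    return abs(z)

# Sweep frequency logarithmically and locate the peak of |Z|.
omegas = [10 ** (4 + 4 * k / 2000) for k in range(2001)]   # 1e4 .. 1e8 rad/s
peak = max(omegas, key=impedance)

print(f"|Z| peaks near {peak:.3g} rad/s (1/sqrt(LC) = {omega_res:.3g})")
```

With small $R$, the peak of the sweep lands at $\omega=1/\sqrt{LC}$, as the $\omega^2LC=1$ condition predicts.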
While this effect is real, it will only show up at very high frequencies - typically beyond the range where you would want to use an electromagnet. | {
"domain": "physics.stackexchange",
"id": 28183,
"tags": "electromagnetism, electric-circuits, electrical-resistance"
} |
Does the cosmological constant represent anti-gravity? | Question: Does the cosmological constant represent anti-gravity?
According to the current Lambda-CDM cosmological model, there must be a fair amount of the dark energy in the universe responsible for the acceleration of the expansion and other things. This idea is represented by the cosmological constant, introduced by Einstein.
If the cosmological constant (dark energy) is responsible for repulsion, can it be considered a gravitational repulsion? And consequently, can gravity be considered both attractive and repulsive? I feel the answer is no, but why specifically is this a wrong way of thinking?
Answer: First a point that is often mis-stated: the cosmological constant is not the same as dark energy. We observe that the universe is accelerating (growing at an ever-increasing rate). Since we do not know what is causing this accelerated expansion, we dub the "stuff" that sources this accelerated expansion "dark energy". A cosmological constant (a type of fluid that never dilutes) is one possible explanation for dark energy, but there exist other possible fluids (such as scalar field quintessence) that can also lead to an accelerated expansion.
In gravity (general relativity), go back to the Einstein Field Equations: $G_{\mu\nu}=8\pi G T_{\mu\nu}$. The (energy-density and pressure of the) stuff that fills the universe ($T_{\mu\nu}$) will affect the way the background curves (in other words, $T_{\mu\nu}$ sources gravity).
As the universe expands, the energy density of normal matter will dilute (planets, dust, etc.) proportional to $1/\text{Volume}$. The energy density of relativistic matter will dilute proportional to $1/\text{Volume}^{4/3}$. A cosmological constant will never dilute: the energy density is constant at all times.
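These three dilution laws can be written in terms of the linear scale factor $a$ (with $\text{Volume}\propto a^3$); the normalization below, with all densities equal at $a=1$, is purely illustrative:

```python
# Energy densities as the universe expands by a factor `a` in linear size,
# normalized so all three are equal at a = 1 (illustrative only).
def rho_matter(a):      # dilutes as 1/Volume = a^-3
    return a ** -3

def rho_radiation(a):   # dilutes as 1/Volume^(4/3) = a^-4
    return a ** -4

def rho_lambda(a):      # cosmological constant: never dilutes
    return 1.0

a = 1000.0
print(rho_matter(a), rho_radiation(a), rho_lambda(a))
# at late times the cosmological-constant term dominates
```

This is the arithmetic behind why any universe containing a cosmological constant is eventually dominated by it.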
The behavior of the background on which all other particles live (e.g. the curvature of the background, or $G_{\mu\nu}$) will depend on the stuff that lives on the background, $T_{\mu\nu}$. It is this behavior of the background that is "gravity". A universe filled with a $T_{\mu\nu}$ described by a cosmological constant will grow at an ever-increasing rate. A universe that is homogeneous and filled with a $T_{\mu\nu}$ that dilutes in a manner similar to normal everyday matter will tend to eventually collapse on itself.
Basically, you may say that the manner in which the energy-density of "stuff" dilutes determines how that "stuff" gravitates. | {
"domain": "physics.stackexchange",
"id": 43281,
"tags": "gravity, cosmology, space-expansion, dark-energy, cosmological-constant"
} |
Bayesian Network - Inference | Question: I have the following Bayesian Network and need help with answering the following query.
EDITED:
Here are my solutions to questions a and b:
a)
P(A,B,C,D,E) = P(A) * P(B) * P(C | A, B) * P(D | E) * P(E | C)
b)
P(a, ¬b, c, ¬d, e) = P(a) * P(¬b) * P(c | a, ¬b) * P(¬d | ¬b) * P(e | c)
= 0.02 * 0.99 * 0.5 * 0.99 * 0.88 = 0.0086
c)
P(e | a, c, ¬b)
This is my attempt:
α × ∑_d P(a, ¬b, c, D = d, e)
= α × [ P(a) * P(¬b) * P(c | a, ¬b) * P(d) * P(e | c) + P(a) * P(¬b) * P(c | a, ¬b) * P(¬d) * P(e | c) ]
Note that α is the normalization constant and that α = 1 / P(a, ¬b, c).
The problem I have is that I don't know how to compute the constant α that the sum is multiplied by. I would appreciate help because I'm preparing for an exam and have no solutions available for this old exam question.
Answer: You're on the right path. Here's my suggestion. First, apply the definition of conditional probability:
$$ \Pr[e|a,c,\neg b] = {\Pr[e,a,c,\neg b] \over \Pr[a,c,\neg b]}. $$
So, your job is to compute both $\Pr[e,a,c,\neg b]$ and $\Pr[a,c,\neg b]$. I suggest that you do each of them separately.
To compute $\Pr[a,\neg b,c,e]$, it is helpful to notice that
$$ \Pr[a,\neg b,c,e] = \Pr[a,\neg b,c,d,e] + \Pr[a,\neg b,c,\neg d,e]. $$
So, if you can compute terms on the right-hand side, then just add them up and you've got $\Pr[a,\neg b,c,e]$. You've already computed $\Pr[a,\neg b,c,\neg d,e]$ in part (b). So, just use the same method to compute $\Pr[a,\neg b,c,d,e]$, and you're golden.
Another way to express the last relation above is to write
$$ \Pr[a,\neg b,c,e] = \sum_d \Pr[a,\neg b,c,D=d,e]. $$
If you think about it, that's exactly the same equation as what I wrote, just using $\sum$ instead of $+$. You can think about whichever one is easier for you to think about.
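The same recipe (joint over marginal, summing out the hidden variable) can be written in a few lines of code; the three-variable network and the CPT numbers below are hypothetical, not the ones from your exam problem:

```python
from itertools import product

# Hypothetical network X -> Z <- Y with made-up CPTs.
p_x = {True: 0.3, False: 0.7}          # P(X)
p_y = {True: 0.6, False: 0.4}          # P(Y)
p_z = {(False, False): 0.1, (False, True): 0.5,
       (True, False): 0.7, (True, True): 0.9}   # P(Z=true | x, y)

def joint(x, y, z):
    """P(x, y, z) from the factorization P(x) P(y) P(z | x, y)."""
    pz = p_z[(x, y)] if z else 1.0 - p_z[(x, y)]
    return p_x[x] * p_y[y] * pz

# P(z | x) = sum_y P(x, y, z) / sum_{y, z} P(x, y, z)
num = sum(joint(True, y, True) for y in (True, False))
den = sum(joint(True, y, z) for y, z in product((True, False), repeat=2))
print(num / den)  # 0.82 with these numbers
```

The denominator plays exactly the role of your 1/α: it is the marginal of the evidence, obtained by the same summing-out trick.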
Anyway, now you've got $\Pr[e,a,c,\neg b]$. All that remains is to compute $\Pr[a,c,\neg b]$. You can do that using exactly the same methods. I'll let you fill in the details: it is a good exercise. Finally, plug into the first equation at the top of my answer, and you're done. | {
"domain": "cs.stackexchange",
"id": 1683,
"tags": "probability-theory"
} |
Conventions in wiki.ros | Question:
For example in: http://wiki.ros.org/stdr_simulator/Tutorials/Create%20a%20map%20with%20gmapping there are several sections numbered 0.1
Is the idea that they are strict independent alternatives to each other? It's not totally clear, and sometimes it seems like there's a dependency between identically numbered sections.
Or is that a typo?
Originally posted by pitosalas on ROS Answers with karma: 628 on 2018-01-26
Post score: 0
Answer:
That was using level 3 headings instead of level 2 headings so they didn't enumerate properly. I've fixed it and they're enumerated correctly now.
Originally posted by tfoote with karma: 58457 on 2018-01-26
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 29872,
"tags": "ros-kinetic"
} |
JavaScript function for logging the members of an object (with horizontal alignment) | Question: When iterating over the members of an object to log its keys and values, one gets a "staircase" effect.
Therefore I wrote myself this function, which takes care of left-aligning the values.
Any hints concerning flaws and improvement-recommendations welcome.
// #### START TEST #######################
var person = {
yourMobilPhoneNumber : 01234171819,
firstName : 'theFirstName',
lastName : 'theLastName',
mail : 'myEmail@abc.com',
zip : '12345',
street : 'theNameOfMyStreet',
city : 'someCitySomewhere',
yourVeryPersonalWebpage : 'http://that-is-me.com',
id : 12345,
calculate: function() { return 3 + 4; }
};
displayMembers(person);
// #### END TEST #######################
// Displays the members of an assigned
// object on the console.
// -- Parameter -------------
// Object - An object which members
// shall be displayed.
function displayMembers(obj) {
var i;
var max = (function() {
var ret = 0;
var keys = Object.keys(obj);
for (i = 0; i < keys.length; i++) {
if (keys[i].length > ret)
ret = keys[i].length;
}
return ret;
})();
var getSpacer = function(len, state) {
if (state.length < len) {
return getSpacer(len, state += ' ');
} else {
return state;
}
}
for (i in obj) {
console.log('%s: %s%s',
i,
getSpacer(max - i.length, ''),
obj[i]);
}
}
Answer: Simplification with for/in loops
Your loop through Object.keys() in the function you have for the max variable is reinventing JavaScript's for/in loop, which loops over an object's keys. Here's how you could simplify that code:
for(var key in obj) {
if(!obj.hasOwnProperty(key)) {
continue;
}
if(key.length > ret) {
ret = key.length;
}
}
Unnecessary recursion
Your getSpacer function is using recursion when it really does not need to be; the function would be a lot simpler and a lot faster if you used a neat JavaScript trick for repeating characters:
function getSpacer(len) {
return Array(len + 1).join(" ");
}
Now, there's no need for recursion - that means there is less interaction with the stack - and, rather than switching to a loop, this nice solution can be used. | {
"domain": "codereview.stackexchange",
"id": 19105,
"tags": "javascript, functional-programming"
} |
How do I implement a local planner that doesn't rotate in place? | Question:
I'm new to using the navigation stack, and I have a robot with Ackermann steering. When I simulate the robot, any 2D Nav Goal set behind it will cause the robot to freeze up, publishing angular cmd_vel values which it cannot realize. I'm looking to tweak the local planner so that I can match the capabilities of my robot. The docs say that dwa_local_planner is more modular, so I plan to use it (unless there is a reason I should use the regular base_local_planner). I know this is probably really simple, but I can't find anything in the documents that goes through this process.
Thanks in advance.
Originally posted by nc61 on ROS Answers with karma: 41 on 2014-01-17
Post score: 0
Answer:
Hello!
Just to be sure we're on the same page first: move_base is what actually sends cmd_vel commands to the robot, based on the plan from your local planner, so the parameters I'll refer to would be set for that node.
From my experience, the robot will only spin in place during clearing behavior, not during navigation (ie if my robot is pointed along some positive x axis and I set the goal somewhere on the negative x axis, it will follow an arc to get there, not rotate in place). It's possible I've just never noticed this, but looking back at videos of it running there's never any in-place rotation.
With that in mind, does it stop giving the impossible commands if you disable rotation in place for clearing behavior by setting ~clearing_rotation_allowed to false? There's more details on this at this link under section "1.1.6 Parameters".
Let me know how that works for you.
-Tim
Originally posted by Tim Sweet with karma: 267 on 2014-01-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 16682,
"tags": "ros, navigation, base-local-planner, dwa-local-planner"
} |
Is It Meaningful to Talk About Pure vs. Mixed States for (Continuous) Position? | Question: I see a lot written about pure and mixed states regarding state vectors and density matrices/operators that contain a finite number of states/elements.
Consider something like the continuous state vector of position $|\psi\rangle = \int_{-\infty}^{\infty} \psi(x) |x\rangle dx$, where $\psi(x)$ is the position wave function and $\int_{a}^{b} \rho(x) dx=\int_{a}^{b} \psi^*(x)\psi(x) dx$ yields the probability of measuring the object within $[a,b]$. I think that we could also speak of a continuous density matrix/operator $\hat\rho = |\psi\rangle \langle\psi|$ such that $\rho(x) = \langle x|\hat\rho|x \rangle$, and when $\hat\rho$ is inserted into the resolution of the identity integrated over all $x$, the result is $1$, confirming the $100\%$ probability that the object is somewhere in space.
For continuous position, is it meaningful to speak of pure vs. mixed states? I would think that a mixed state is where we have $\rho(x)$ but not $\psi(x)$? That might happen after entanglement? Consider the double slit experiment, using photons, where we put different polarizing filters over each slit and then have a radial $\psi(x)$ emerging from each slit, but not adding together. In that case, we would have
$$
\rho(x) = \frac{1}{2}(\psi^*_1(x)\psi_1(x) + \psi^*_2(x)\psi_2(x)).
$$
Is it not possible to find some $\psi_3(x)$ such that
$$
\frac{1}{2}(\psi^*_1(x)\psi_1(x) + \psi^*_2(x)\psi_2(x)) = \psi^*_3(x)\psi_3(x)?
$$
I believe that for QM, we demand that any $\psi(x)$ be a "test function", such as a Gaussian or a smooth function of compact support. Because of the restriction on $\psi(x)$, maybe there are situations where there is no such $\psi_3(x)$...
Answer: Yes, it's perfectly meaningful to talk about pure vs mixed states regardless of the dimension of the Hilbert space.
On the other hand, your understanding of what the density matrix is suggests that you're only seeing a small part of the true core of the concept: by "density matrix" we don't just mean density, we also mean that it must be seen as a matrix i.e. as an operator, which can be queried (in the position basis) with different states on either side: thus, for a pure state, you can happily get
$$
\rho(x,x') = \langle x| \hat \rho|x'\rangle = \langle x| \psi\rangle \langle \psi|x'\rangle = \psi(x) \psi(x')^*,
$$
with different positions on the two factors. The diagonal of the density matrix, $\rho(x,x) = \rho(x)$, encodes the population (density) over the basis states in the chosen representation, much like it does in finite dimension, and (again like it does in finite dimension) the off-diagonal components $\rho(x,x')$ for $x\neq x'$ encode the coherence between the population present at different sites.
This then informs the answer to your subsidiary question: is it possible to find a state $\psi_3(x)$ such that
$$
\frac{1}{2}(\psi^*_1(x)\psi_1(x) + \psi^*_2(x)\psi_2(x)) = \psi^*_3(x)\psi_3(x)
$$
given states $\psi_1(x)$ and $\psi_2(x)$? Absolutely, just try $\psi_3(x) = \sqrt{\frac{1}{2}(\psi^*_1(x)\psi_1(x) + \psi^*_2(x)\psi_2(x))}$. But that's the wrong question: what you need to be asking is whether, given two linearly-independent states $\psi_1(x)$ and $\psi_2(x)$, is there a state $\psi_3(x)$ such that
$$
\frac{1}{2}(\psi^*_1(x)\psi_1(x') + \psi^*_2(x)\psi_2(x')) = \psi^*_3(x)\psi_3(x')
$$
for all (independent) real $x$ and $x'$? Then the answer is simple: no. The operator on the left has rank $2$ and the operator on the right has rank $1$, so they can never be equal.
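The rank argument can be illustrated with a finite-dimensional toy analogue (an illustration of the linear-algebra point only, not of the continuous case): a 50/50 mixture of two orthogonal states has rank 2, while any pure-state projector has rank 1, as a simple determinant check shows.

```python
# Toy 2x2 analogue of the rank argument: an equal mixture of two
# orthogonal states has rank 2 (nonzero determinant), while any
# pure-state projector |psi><psi| has rank 1 (zero determinant).

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# psi_1 = (1, 0) and psi_2 = (0, 1): the 50/50 mixture is rho = I/2.
rho_mixed = [[0.5, 0.0], [0.0, 0.5]]

def projector(a, b):
    """Rank-1 projector |psi><psi| for a real normalized state (a, b)."""
    return [[a * a, a * b], [b * a, b * b]]

print(det2(rho_mixed))            # 0.25 != 0, so rank 2
print(det2(projector(0.6, 0.8)))  # zero up to rounding, so rank 1
```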
That's probably still insufficient to clear away all of the misconceptions your post hints at, but it's hopefully enough to point you in the right direction. | {
"domain": "physics.stackexchange",
"id": 48454,
"tags": "quantum-mechanics, hilbert-space, density-operator"
} |
What affects friction more in this situation : the material of the object, the surface area of the object, or some other factor? | Question: I did an experiment the other day in my Grade 11 Physics class. We were measuring the force required to move a few different objects. One object was this block; one side was two rubber hockey pucks, and the other side, was a piece of lumber. We used a dynamometer and we recorded the force just before the object moved. These are the results/dimensions of the objects:
(in this case, the force reading is right before movement, so the force exerted by me = force of friction)
rubber side : area = 88,36cm^2, mass = 890,2g, friction = 7,22N
wood side : area = 91,14cm^2, mass = 890,2g, friction = 3,32N
I am trying to figure out what caused the rubber side's friction to be so much higher. I'm thinking either the rubber is better at gripping to the surface (laminated table) or the rubber side is smaller, so there is more mass per square cm, causing more friction, or both, or neither of those hypotheses.
The problem with that second hypothesis is that we measured the friction for a block of wood, about the same size as the other chunk of lumber, but with a smaller mass. Yet, the lighter block had about the same amount of friction. Here are the results/dimensions for the lighter block:
small wood: area = 85,68cm^2, mass = 197,6g, friction = 3,10N
I can't, for the life of me, figure out what is going on with the experiment. I saw a few suggestions like the wood is more lubricated than rubber, the rubber is more malleable (at a microscopic level, anyway) that it "fits" into the other surface better, etc. I don't know which is right, or if some kind of human error is involved.
P.S. Sorry, not all my units are SI, I figured that having "cleaner" numbers makes it a bit easier to visualize. These are all significant digits.
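For reference, converting these measurements into friction coefficients via $\mu = f/(mg)$ makes the comparison explicit (a quick script of mine, with $g = 9.81\ \mathrm{m/s^2}$ assumed; note the two wood samples give very different coefficients):

```python
# Friction coefficients mu = f / (m * g) from the measurements above.
g = 9.81  # m/s^2 (assumed value)

samples = {
    "rubber side": (0.8902, 7.22),  # (mass in kg, friction force in N)
    "wood side":   (0.8902, 3.32),
    "small wood":  (0.1976, 3.10),
}

mu = {name: f / (m * g) for name, (m, f) in samples.items()}
for name, value in mu.items():
    print(f"{name}: mu = {value:.2f}")
```

If the two wood blocks really obeyed $f_{fr}=\mu N$ with the same $\mu$, their coefficients would agree; here they come out around 0.38 and 1.60, which is the puzzle in the question.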
Answer: Friction is a complicated matter that is still not clear in many aspects. The degree of friction crucially depends, among other things, on the surface details of the contacting materials. This suggests that your first scenario is more likely (rubber is "grippier" than wood). The second hypothesis is less likely in my opinion. Moreover, there is "Amontons' law" of (dry) friction, which states that the only factors determining the friction force $f_{fr}$ are the type of materials in contact, quantified by the "coefficient of friction" $\mu$, and the force with which the one is pressed against the other, $N$ (typically the weight of the sliding body): $f_{fr}=\mu N$. You surely remember this expression from your textbook, but if you think it over, it is almost mysterious: it tells you that contact area ${\it does\, not}$ matter. So if you take, say, a steel block with various facets, it will not matter for the friction force which side it is lying on. Weird as it is, it is an empirical law valid for what is called "dry" friction. If we assume your hockey puck does not stick to the table, nor do other parts of your sample make dents in the table top, Amontons' law should be valid for your experiment too; hence the contact area should not matter, and it should be the difference in $\mu$ (different "grippiness") that matters, which as a matter of fact can be very different for different sorts of wood. It can also strongly depend on the condition of the wood (e.g. how dry or polished it is), which can explain your results with different blocks of wood that seemingly violate Amontons' law. | {
"domain": "physics.stackexchange",
"id": 91255,
"tags": "homework-and-exercises, friction"
} |
Why are there $(2kn+1)^{kn}$ Turing Machines with $k$ symbols and $n$ states? | Question: I've seen a few references [1], [2], [3] that say that for a Turing Machine with transition function defined by:
$\delta: Q \times \Gamma \rightarrow Q \times \Gamma \times \{L, R\}$
the number of turing machines with $n$ states and $k$ tape symbols is $(2kn+1)^{kn}$.
However, looking at the definition, shouldn't it be $(2kn)^{kn}$ instead?
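For small $n$ and $k$ the two candidate counts can be compared by brute-force enumeration. The extra $+1$ appears if each $(q, \gamma)$ pair may also map to an explicit halt action - an assumption on my part, since the cited references aren't quoted here:

```python
# Compare the counts by enumerating all transition functions
# delta: Q x Gamma -> (Q x Gamma x {L, R}) + {halt}.
from itertools import product

def count_machines(n, k):
    # 2kn "ordinary" outcomes per (state, symbol) cell, plus one halt.
    cell_choices = [("halt",)] + list(product(range(n), range(k), "LR"))
    cells = n * k  # number of (state, symbol) pairs delta is defined on
    return len(list(product(cell_choices, repeat=cells)))

for n, k in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    assert count_machines(n, k) == (2 * k * n + 1) ** (k * n)
print("enumeration matches (2kn+1)^(kn)")
```

Without the halt option, the same enumeration gives $(2kn)^{kn}$, which is the count suggested in the question.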
Answer: Perhaps they are counting the number of Turing machines with at most $n$ states. In any case, it's not a big difference. Usually one is only interested in an upper bound, and from that perspective there is no difference between the two expressions. | {
"domain": "cs.stackexchange",
"id": 4407,
"tags": "turing-machines, combinatorics"
} |
What is "adjustable constant"? | Question: This is quoted from A.P.French's Vibrations & Waves.
Explicit differential form of linear harmonic oscillator is:
$$ m\dfrac{d^2x}{dt^2} + kx = 0 \quad \& \quad \dfrac{1}{2} m(\dfrac{dx}{dt})^2 + \dfrac{1}{2} kx^2 = E$$. Whenever one sees an equation analogous to either of the above, one can conclude that the displacement $x$ as a function of time is of the form $$x = A\cos(\omega t + \alpha)$$ where $\omega^2$ is the ratio of spring constant $k$ to the inertia constant $m$. [. . .]It is to be noted that the constant $\omega$ is defined for all circumstances by the given values of $m \quad \& \quad k$. The equation contains two other constants - the amplitude $A$ & the initial phase $\alpha$ - which between them provide a complete specification of the state of motion of the system at $t = 0$. The initial statement of Newton's law contains no adjustable constants. However, the second one, often referred to as the first integral of the former, contains one adjustable constant $E$ , the total energy which is equal to $\dfrac{kA^2}{2}$.
Now, what is meant by an adjustable constant? Are not $m$, $k$, $A$, $\omega$, $\alpha$ all constants? Why are they not adjustable constants, while only $E$ is?
Answer: The "adjustable constant" in that statement is the total energy $E$, and they mean it's "adjustable" in that the behavior of the system is completely independent of $E$ - this is known in physics as a symmetry, in that the system doesn't change if it has a different total amount of energy.
In this case, the way to "adjust" the amount of energy would be to shift to a different inertial reference frame. If you were moving relative to the harmonic oscillator (imagine you and the harmonic oscillator are floating past each other in space), the harmonic oscillator would have more kinetic energy in your reference frame (and therefore a greater total energy $E$) than if you were at rest relative to it. Thus, the energy of the oscillator has been "adjusted", but clearly the harmonic oscillator behaves the same whether you're moving or not.
You can in some sense physically "adjust" the mass $m$ or restoring constant $k$, but not without affecting the behavior of the system.
A clearer way for them to say this would be that "this system is symmetric over changes in $E$, meaning that $E$ can be freely changed without affecting the behavior of the system."
EDIT
Now that I've seen your updated post with more text, I think they may have meant something slightly different. I still stand by my previous answer, but they may have been referring to the fact that the total energy is dependent on the initial amplitude of the oscillator, $A$. Thus, without affecting the properties of the oscillator, by giving it a greater initial amplitude, you can "adjust" the total energy $E$. | {
"domain": "physics.stackexchange",
"id": 21362,
"tags": "newtonian-mechanics, soft-question, terminology"
} |
Where do the color indices come back in $SU(3)$ Yang-Mills Quantization? | Question: Can the partition function of $SU(3)$ (the Generic Partition function for a yang-mills theory found on the linked wiki page below), be split into a sum of 8 functional integrals for each gauge field?
https://en.wikipedia.org/wiki/Yang-Mills_theory#Quantization
Answer: $F_{\mu\nu}$ is shorthand for $F^a_{\mu\nu}T^a$ where $T^a$ are eight SU(3) generator matrices satisfying $[T^a,T^b]=i\,f^{abc}T^c$ and $\text{tr}(T^a T^b)=\frac{1}{2}\delta^{ab}$. So the first term in $Z$ contains the expression
$$\text{Tr}(F^{\mu\nu}F_{\mu\nu})=\text{Tr}(F^{a\mu\nu}T^a F^b_{\mu\nu}T^b)=F^{a\mu\nu}F^b_{\mu\nu}\text{Tr}(T^a T^b)=\frac{1}{2}F^{a\mu\nu}F^a_{\mu\nu}$$
Expressed in terms of the field strengths, this is eight terms, each one containing a different color index. But the third term in
$$F^a_{\mu\nu}=\partial_\mu A^a_\nu-\partial_\nu A^a_\mu+gf^{abc}A^b_\mu A^c_\nu,$$
due to the fact that SU(3) is nonabelian, means that each of these terms contains potentials with other color indices. So there is no clean split into a $Z$ for each color index when expressed in terms of potentials. This is expressing the fact that gluons interact with gluons. | {
"domain": "physics.stackexchange",
"id": 53599,
"tags": "yang-mills, partition-function, strong-force, color-charge"
} |
[ERROR] [1346057546.695789920]: Exception thrown when deserializing message of length [91] from [/nxt_lejos_proxy_mod]: Buffer Overrun | Question:
[ERROR] [1346057546.695789920]: Exception thrown when deserializing message of length [91] from [/nxt_lejos_proxy_mod]: Buffer Overrun
In your opinion, which kind of error is this? It appears when I launch the nav stack, but everything works fine.
Originally posted by camilla on ROS Answers with karma: 255 on 2012-08-26
Post score: 1
Answer:
The error indicates that something went wrong when serializing the message or that the serialized message was corrupted during transmission. In your case, it looks like more bytes were received than required for representing some message sent by nxt_lejos_proxy_mod. Without knowing more about that node, it's hard to say what exactly went wrong.
It seems like connections stay valid if such an error happens. The subscriber that was throwing the error might work with corrupted data though.
Originally posted by Lorenz with karma: 22731 on 2012-08-27
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 10773,
"tags": "ros"
} |
Confusion about energy transport in a circuit | Question: I have some confusions about how to imagine energy flow in a circuit. Imagine for example just a simple circuit with a battery, two wires and a light bulb.
According to classical Electrodynamics the energy is transported by the electromagnetic field outside the wires to the light bulb and we can describe the direction of energy flux by the Poynting vector.
So far, so good…
But I have a few questions at this point:
1.) A few sources emphasize that it is NOT the electrons that transport the energy to the light bulb but the EM waves. But when the EM waves hit the light bulb, the light and thermal energy is created by fast moving electrons, or am I wrong? So it is not right to say that electrons transport energy, but it should be right to say that EM waves give the energy to the electrons, which vibrate, and this kinetic energy is used?
2.) In addition to the first question: I am always wondering how does the field know where the light bulb is? The obvious answer would be: it doesn’t. But with that logic, the energy from the EM field should be at every point of the wire. Is that correct?
I am confused because pictures like this (https://i.stack.imgur.com/f9K80.png) only show the energy transport directly to the light bulb but shouldn’t the energy be transported everywhere?
I hope someone can clear up my confusion :)
Thanks in advance!
Answer: The wave "knows" where the bulb is because the EM field is directed by the wires to the bulb: The H-field surrounds the wire, while the E-field is between the wires.
Experimentally, there is no way to prove that the "field" lights up the bulb and not the motion of the electrons. The reason behind this explanation is not this simple example of a bulb connected to a battery but that this example serves as a didactic introduction to energy transfer between antennas that are not connected by anything "material", as in a radio or cell phone communicating with a tower.
In those cases there is no other rational explanation but that EM wave energy travels via the field that Faraday and Maxwell envisioned. Between this battery-to-bulb example and that of the radios there is the intermediate case of a transmission line formed by pairs of long wires, long relative to the wavelength (Lecher wires), where the bulb lights up before any individual electron could travel the distance; but even in that case one could claim that the current lights up the bulb, not the field. This is the case because where there is no current there is no H-field to surround the wire, and where there is no charge accumulation there is no E-field either. In other words, as long as we have the wires we have both currents/charges and H/E fields, and it is your preference to say which lights up the bulb. | {
"domain": "physics.stackexchange",
"id": 97750,
"tags": "electromagnetism, electromagnetic-radiation, electric-circuits, electric-fields"
} |
Relation between relativistic momentum in center of mass and another particle's rest frames | Question: Consider a system of 2 particles A and B. I am trying to prove the formula $$|\vec{P}_{CM}|=\frac{m_B}{E_{CM}}|\vec{P}_{A, LAB}|$$ where $|\vec{P}_{CM}|$ and $E_{CM}$ are the momentum of one each one of the particles and the total energy/mass in the center of mass frame, and $|\vec{P}_{A, LAB}|$ is the momentum of particle A in particle B's rest frame. Notice I am using natural units.
I have tried expressing the scalar products of $P^\mu_A$, $P^{\mu}_B$ and the total momentum in the different frames to get the relation using Lorentz invariance, but I cannot reach the final formula. My main approach was:
$$E^2_{CM}=P_\mu P^\mu = (P_A)_\mu (P_A)^\mu+(P_B)_\mu (P_B)^\mu+2(P_A)_\mu (P_B)^\mu$$
Now the only invariant term that I can think of that gives me the $|\vec{P}_{A, LAB}|$ factor is $(P_A)_\mu (P_A)^\mu$ expanded in the LAB frame. However this gives extra terms that I cannot cancel and does not seem to end up in a proportionality factor. I have tried calculating the other products in both reference frames and getting relations between them but I must be missing something.
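As a numerical sanity check that the target formula is at least consistent (my own sketch, with arbitrary masses and momentum in natural units; it checks the relation, not the derivation, using the boost with $\cosh\theta = E_B/m_B$, $\sinh\theta = -|\vec p_{CM}|/m_B$ that brings B to rest):

```python
# Numerical check of |P_CM| = (m_B / E_CM) * |P_A,LAB| in natural units.
import math

m_A, m_B = 1.0, 2.0   # arbitrary masses (assumed values)
p_cm = 3.0            # momentum magnitude of each particle in the CM frame

E_A = math.sqrt(m_A**2 + p_cm**2)
E_B = math.sqrt(m_B**2 + p_cm**2)
E_cm = E_A + E_B

# Boost along z that brings particle B (momentum -p_cm) to rest.
ch = E_B / m_B        # cosh(theta)
sh = -p_cm / m_B      # sinh(theta)

# Apply the boost to p_A = (E_A, 0, 0, +p_cm); the primed frame is B's rest (LAB) frame.
p_A_lab = -sh * E_A + ch * p_cm
p_B_lab = -sh * E_B + ch * (-p_cm)

assert abs(p_B_lab) < 1e-12                        # B is indeed at rest
assert abs(p_cm - (m_B / E_cm) * p_A_lab) < 1e-12  # the target relation
print("p_A in B's rest frame:", p_A_lab)
```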
Answer: For such calculations it is usually easiest to use the rotational invariance and just do the calculation explicitly in a frame where it is especially simple. For instance
if we choose the $z-$axis along $\vec{p}_{CM}$, then
$$p_A=\begin{pmatrix}E_A\\0\\0\\|\vec{p}_{CM}|\end{pmatrix}, \quad p_B=\begin{pmatrix}E_B\\0\\0\\-|\vec{p}_{CM}|\end{pmatrix}$$
with $E_A+E_B = E_{CM}$.
A Lorentz boost in the $z-$direction can be written as
$$\Lambda_z=\begin{pmatrix}\cosh\theta & 0 & 0 & -\sinh\theta \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
-\sinh\theta & 0 & 0 & \cosh\theta\end{pmatrix}$$
since we want this to be a transformation into the rest frame of $B$, we must have
$$
\begin{align}
E'_B &= E_B\cosh\theta + |\vec{p}_{CM}|\sinh\theta
\overset{!}{=} m_B \\
p'_{B,z} &= -E_B\sinh\theta - |\vec{p}_{CM}|\cosh\theta
\overset{!}{=} 0
\end{align}
$$
from which we get $\cosh\theta=\frac{E_B}{m_B} ~~$ and $~\sinh\theta =-\frac{|\vec{p}_{CM}|}{m_B}$. You can now apply the Lorentz transformation to
$p_A$ and should find the momentum relation above :) | {
"domain": "physics.stackexchange",
"id": 80388,
"tags": "special-relativity"
} |
Frequency analysis based anagram checker | Question: I uploaded code solutions for some problems of the book Cracking the Coding Interview, 6th Edition to GitHub, I would like to know your rating and potential improvement of the code I wrote.
Here is the first problem of chapter 1 (anagrams):
#include <stdio.h>
#include <string.h>
/*check if one string is an anagram of another, it uses an int array
* called alphabet to store frequencies of chars in both strings, add
* 1 for s1 and subtract 1 for s2*/
int are_anagrams(const char *s1, const char *s2) {
int alphabet[26] = { 0 };
int index1, index2;
size_t l1 = strlen(s1), l2 = strlen(s2), i;
/* if the strings have different lengths they are not anagrams */
if (l1 != l2) return 0;
/* count the frequencies of characters */
for (i = 0; i < l1; ++i) {
index1 = s1[i] - 'a';
index2 = s2[i] - 'a';
++alphabet[index1];
--alphabet[index2];
}
/* all the alphabet letters should be 0, otherwise the strings are not
* anagrams */
for (i = 0; i < 26; ++i)
if (alphabet[i] != 0) return 0;
return 1;
}
int main() {
char s1[] = "aaabbbccc";
char s2[] = "aabbaccbc";
printf("%d\n", are_anagrams(s1, s2));
return 0;
}
Any suggestion or advice is welcome, thanks for your time.
Answer:
Avoid magic numbers, especially if they, like 26 in your code, are used repeatedly. Along the same line, keep in mind that the code only works in the "C" locale. Other locales may have alphabets of different size.
Prefer to declare variables as close to use as possible, e.g.
for (size_t i = 0; i < l1; ++i) {
int index1 = s1[i] - 'a';
int index2 = s2[i] - 'a';
....
If the string contains non-lowercase characters, the code would do an out-of-bounds access. You should ensure islower(s[i]) prior to computing indices, and ask the interviewer what the guarantees are.
An opportunistic if (l1 != l2) return 0; is not an optimization. It still requires linear time to compute the lengths.
Along the same line, length computation is not necessary. An idiomatic C approach is to use pointers:
while ((ch = *s++) != 0)
The bullets above suggest splitting the loop into two:
while ((ch = *s1++) != 0) {
if (islower(ch)) {
alphabet[ch - 'a']++;
}
}
while ((ch = *s2++) != 0) {
if (islower(ch)) {
alphabet[ch - 'a']--;
}
}
Now, the DRY principle requires factoring these loops into a function:
static void count_frequencies(char * s, int * alphabet, int addend) {
    int ch;
    while ((ch = *s++) != 0) {
if (islower(ch)) {
alphabet[ch - 'a'] += addend;
}
}
}
....
count_frequencies(s1, alphabet, 1);
count_frequencies(s2, alphabet, -1);
As an added benefit, see how the /* count the frequencies of characters */ comment disappears.
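Pulling the bullets together, a full refactor along these lines might look like the following sketch (my own assembly of the review points, not code from the post; the `ch` declaration and an `unsigned char` cast are added, since `islower` has undefined behavior for negative `char` values):

```c
#include <assert.h>
#include <ctype.h>

#define ALPHABET_SIZE 26

/* Add `addend` to the frequency slot of each lowercase letter in s. */
static void count_frequencies(const char *s, int *alphabet, int addend) {
    int ch;
    while ((ch = (unsigned char)*s++) != 0) {
        if (islower(ch)) {
            alphabet[ch - 'a'] += addend;
        }
    }
}

/* Returns 1 if s1 and s2 are anagrams; only lowercase letters are
 * counted, matching the guarantees the review says to confirm. */
int are_anagrams(const char *s1, const char *s2) {
    int alphabet[ALPHABET_SIZE] = { 0 };
    int i;
    count_frequencies(s1, alphabet, 1);
    count_frequencies(s2, alphabet, -1);
    for (i = 0; i < ALPHABET_SIZE; ++i) {
        if (alphabet[i] != 0) {
            return 0;
        }
    }
    return 1;
}
```

Note that the explicit length check is gone: for all-lowercase inputs, strings of different lengths automatically leave some counter nonzero.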
For a very long string the integer in alphabet may overflow. Could be a nitpick, but could also be a failure, depending on the interviewer. | {
"domain": "codereview.stackexchange",
"id": 35076,
"tags": "c, interview-questions"
} |
Periodic Zone Scheme - Bloch Theorem in Lattices | Question: I am quite confused about the different representations of the dispersion relation in a lattice.
This image makes a lot of sense to me, since it only represents one dispersion curve and transforms it back to the 1st-Brillouin-Zone due to the periodicity of the lattice in the Bloch theorem.
But sometimes, the dispersion relation is also showed in the so called "periodic zone scheme" as follows:
In this one, there are a lot of dispersion curves, which would make me think there are infinitely many wave-functions with the same energy but different reciprocal wave vectors $k' = k + n \cdot \frac{2\pi}{a}$, which would lead to an infinite number of possible states in each band. What am I getting wrong?
Answer: The periodic zone scheme is redundant, as it shows the same states multiple times. All of the same band structure information is contained within the reduced zone scheme (fig.(a) in your first image).
Piggybacking off of the comment by march on your post:
The periodic zone scheme could be useful for Umklapp processes, where scattering between different Brillouin zones occurs. | {
"domain": "physics.stackexchange",
"id": 99933,
"tags": "wavefunction, solid-state-physics, electronic-band-theory, dispersion, lattice-model"
} |
Are linear and angular kinetic energies separate from each other? | Question: Suppose an object was rolling (moving both rotationally and translationally):
Would the object's total kinetic energy be the sum of both linear and angular kinetic energies? i.e. $K_{net}=\frac{1}{2}mv^2+\frac{1}{2}I\omega^2$?
OR should linear and angular kinetic energies be treated as separate entities, similar to how linear and angular momentum are completely separate?
Thank you so much!
Answer: Short answer. These contributions can be identified in the kinetic energy of a rigid system, whose material points move under the rigid-body constraint
$\mathbf{v}_P - \mathbf{v}_Q = \omega \times (\mathbf{r}_P - \mathbf{r}_Q)$,
for every pair of material points $P$, $Q$ of the system.
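Before the general derivation, here is a quick numerical check of the decomposition on a minimal discrete example (my own sketch: a planar "dumbbell" of two equal point masses rotating about a translating center of mass, with arbitrary numbers):

```python
# Verify K = sum(1/2 m_i |v_i|^2) equals 1/2 m |v_G|^2 + 1/2 I_G w^2
# for a planar dumbbell: two equal masses at +/- r from the CM.
m1 = m2 = 1.5          # point masses (arbitrary)
r = 0.8                # distance of each mass from the CM
w = 2.0                # angular velocity about the z-axis
vG = (3.0, -1.0)       # CM velocity (arbitrary)

# Positions relative to CM: (+r, 0) and (-r, 0).
# v_i = v_G + w x (r_i - r_G) adds (0, +w*r) and (0, -w*r) respectively.
v1 = (vG[0], vG[1] + w * r)
v2 = (vG[0], vG[1] - w * r)

K_direct = 0.5 * m1 * (v1[0]**2 + v1[1]**2) + 0.5 * m2 * (v2[0]**2 + v2[1]**2)

m_tot = m1 + m2
I_G = m1 * r**2 + m2 * r**2   # moment of inertia about the CM
K_split = 0.5 * m_tot * (vG[0]**2 + vG[1]**2) + 0.5 * I_G * w**2

assert abs(K_direct - K_split) < 1e-12
print(K_direct, K_split)
```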
Kinetic energy as an additive physical quantity. For every physical system, the kinetic energy is an additive quantity, i.e. the kinetic energy of the system is equal to the sum of the kinetic energies of its parts: you take the parts of the system, you evaluate the kinetic energy of each part, sum these terms, and you get the kinetic energy of the overall system.
For a system with a discrete point mass distribution, we can write it as
$K = \sum_i K_i = \sum_i \dfrac{1}{2} m_i |\mathbf{v}_i|^2 = \sum_i \dfrac{1}{2} m_i \mathbf{v}_i \cdot \mathbf{v}_i $,
or for a system with continuous mass distribution, with density $\rho(\mathbf{x})$, we can write it as
$K = \dfrac{1}{2} \displaystyle \int_{\Omega} \rho(\mathbf{x})\, |\mathbf{v}(\mathbf{x})|^2 \, dV$
Kinetic energy for a rigid system. (here, only for discrete systems; as an exercise try to retrieve the same expressions for continuous systems). Using the rigid-body constraint, it's possible to write the velocity $\mathbf{v}_i$ of each point mass of the system w.r.t the center of mass of the system, $\mathbf{r}_G$,
$\mathbf{v}_i = \mathbf{v}_G + \omega \times (\mathbf{r}_i - \mathbf{r}_G)$,
where the position and the velocity of the center of mass are
$\mathbf{r}_G = \dfrac{\sum_i m_i \mathbf{r}_i}{\sum_i m_i}$,
$\mathbf{v}_G = \dfrac{\sum_i m_i \mathbf{v}_i}{\sum_i m_i}$.
Introducing the expression for $\mathbf{v}_i$ in the expression for the kinetic energy, we get
$K = \sum_i \dfrac{1}{2} m_i \mathbf{v}_i \cdot \mathbf{v}_i
= \sum_i \dfrac{1}{2} m_i (\mathbf{v}_G + \omega \times (\mathbf{r}_i - \mathbf{r}_G)) \cdot (\mathbf{v}_G + \omega \times (\mathbf{r}_i - \mathbf{r}_G)) $,
and rearranging the terms
$K = \dfrac{1}{2} \sum_i m_i |\mathbf{v}_G|^2 + \mathbf{v}_G \cdot \omega \times \underbrace{\sum_i m_i (\mathbf{r}_i - \mathbf{r}_G)}_{=\mathbf{0} \text{ (def of G)}} + \dfrac{1}{2} \sum_i \underbrace{ m_i(\omega \times (\mathbf{r}_i - \mathbf{r}_G)) \cdot (\omega \times (\mathbf{r}_i - \mathbf{r}_G))}_{= - m_i \omega \cdot (\mathbf{r}_i - \mathbf{r}_G) \times ((\mathbf{r}_i - \mathbf{r}_G) \times \omega)} $.
Now, we can sum over $i$, to recognize:
the total mass of the system $m = \sum_i m_i$
the inertia tensor of the system w.r.t. its center of mass $G$, $\mathbb{I}_G = - \sum_i m_i (\mathbf{r}_i - \mathbf{r}_G) \times (\mathbf{r}_i - \mathbf{r}_G) \times$
and eventually write the kinetic energy for a rigid system as the sum of the contribution of the translation of its center of mass and the rotation around it,
$K = \dfrac{1}{2} m |\mathbf{v}_G|^2 + \dfrac{1}{2} \omega \cdot \mathbb{I}_G \cdot \omega$. | {
"domain": "physics.stackexchange",
"id": 90934,
"tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics"
} |
What is the Noether charge associated with screw symmetry? | Question: I put my desk fan on top of a book, and noticed it was Neuenschwander's book about Noether's theorem. That got me thinking about the symmetries of the fan: while there is a periodic symmetry for each $1/3$rd turn of the fan, there is a simple continuous screw symmetry along the time axis affecting the $x$-$y$ plane. So shouldn't there be a conserved Noether charge in this system associated with this symmetry?
Answer: Any screw transformation can be decomposed into a translation and a rotation (either along the screw axis or by decomposing it into a translation and a couple). Thus any conserved quantity related to such a symmetry would be decomposable into a combination of momentum and angular momentum.
In your particular case, with the translation being only along the time axis, I do believe the conserved quantity would be entirely angular momentum. | {
"domain": "physics.stackexchange",
"id": 89828,
"tags": "symmetry, noethers-theorem"
} |
What happens if we change the limits of integral in Fourier transform? | Question: By definition of Fourier transform
$$X(\omega)=\int_{-\infty}^\infty x(t) e^{-j\omega t} dt $$
Now what will happen to the answer of the transform, for example in the case of $x(t)= \cos(\omega_0 t)$, if the limits are $0$ to $A$ instead of $-\infty$ to $\infty$?
For $x(t)=\cos(\omega_0 t)$ its Fourier transform is given by $ X(\omega)= \pi[\delta(\omega-\omega_0) + \delta(\omega+\omega_0)]$
So if the limits are changed, will it affect the answer?
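One way to explore this numerically (my own sketch; $\omega_0 = 2\pi$ and $A = 10$ are chosen so the window spans exactly ten periods, and the integral is approximated by a midpoint Riemann sum):

```python
# Midpoint-rule approximation of X(w) = integral_0^A cos(w0 t) e^{-jwt} dt.
import cmath
import math

def truncated_ft(w, w0=2 * math.pi, A=10.0, steps=20000):
    dt = A / steps
    total = 0j
    for n in range(steps):
        t = (n + 0.5) * dt
        total += math.cos(w0 * t) * cmath.exp(-1j * w * t) * dt
    return total

w0 = 2 * math.pi
print(abs(truncated_ft(w0)))                     # ~A/2 = 5.0: a finite peak, not a delta
print(abs(truncated_ft(w0 + 0.3)))               # clearly nonzero: energy appears away from w0
print(abs(truncated_ft(w0 + 2 * math.pi / 10)))  # nearly 0: first null at dw = 2*pi/A
```

Instead of an impulse at $\omega_0$, the truncated transform is a broadened peak with side structure set by the window length $A$.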
Answer: Yes, it will affect the answer. What you're suggesting is known as the short-time Fourier transform. In the sinusoidal case that you proposed, you will observe spectral leakage, as the truncation of the integral limits is equivalent to multiplication of the sinusoid by a rectangular window function. This multiplication in the time domain maps to convolution in the frequency domain. The Fourier transform of a rectangular window is a sinc function, so the convolution will yield two sinc functions centered at the locations of the impulses in your original answer. | {
"domain": "dsp.stackexchange",
"id": 603,
"tags": "fourier-transform"
} |
Sound from a rocket | Question: I have been watching programmes commemorating the Apollo 11 mission. One obvious feature of the launch is the sound. It was mentioned that water was sprayed into the pit below the rocket not (primarily) for cooling as you may expect but to prevent the sound energy bouncing back from the concrete and damaging the engines.
I expect that there are very many inefficiencies in a real rocket and far more fuel is required than a naive application of the Tsiolkovsky rocket equation (Wikipedia) would suggest. One of these inefficiencies, probably not the largest, is the energy lost as sound.
Has this been estimated? Is it significant? Clearly, a Saturn V launch generates a lot of sound energy but it could still be a small fraction of the energy in its fuel.
Answer: Yes, these topics are widely studied in the fields of aeroacoustics, thermoacoustics, or generally compressible fluid dynamics. Combustion is literally a hot topic. This series might be interesting for you, and there is also a book by T. C. Lieuwen approaching this topic (and it is certainly not the only one).
Usually the energy loss due to sound emission represents a relatively small part of the overall energy (for cold subsonic flows it is proportional at best to the third power of the Mach number). Issues might arise when some sort of feedback loop is triggered, resulting (for example) in strong standing waves in the combustion chamber or some sort of enclosure. | {
"domain": "physics.stackexchange",
"id": 59547,
"tags": "thermodynamics, fluid-dynamics, acoustics, rocket-science, efficient-energy-use"
} |
Algorithm for evaluating polynomials | Question: I'm reading The Algorithm Design Manual and I stumbled upon this problem.
I can't really get my head around this. I don't even know how the number of multiplications could differ; what I mean is that there seems to be no polynomial that would make this algorithm perform poorly.
I also have no idea how this could get improved, every operation seems necessary to me.
Any help would be much appreciated.
Answer: The number of multiplications in the algorithm is $2n - 1$, but the number of multiplications in Horner's method is $n$, which means this algorithm is not optimal.
As mentioned in the Wiki page, Horner's method of evaluating a polynomial is optimal, therefore we can change this algorithm so that it uses Horner's method.
p = a[n]
for i = n-1 to 0:
p = p*x + a[i] | {
"domain": "cs.stackexchange",
"id": 17759,
"tags": "time-complexity, algorithm-analysis"
} |
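To make the comparison in the answer above concrete, here is a small Python sketch (mine, not from the original post; the function names are illustrative) that evaluates a polynomial both naively and with Horner's method, counting multiplications:

```python
def eval_naive(coeffs, x):
    """Evaluate sum(a[i] * x**i) by building each power explicitly.
    Returns (value, number_of_multiplications); uses 2n - 1 multiplications."""
    value, muls = coeffs[0], 0
    power = 1
    for i, a in enumerate(coeffs[1:], start=1):
        if i == 1:
            power = x            # x**1 needs no multiplication
        else:
            power *= x           # one multiplication to extend the power
            muls += 1
        value += a * power       # one multiplication by the coefficient
        muls += 1
    return value, muls

def eval_horner(coeffs, x):
    """Evaluate the same polynomial as p = p*x + a[i], highest degree first.
    Uses only n multiplications."""
    value, muls = coeffs[-1], 0
    for a in reversed(coeffs[:-1]):
        value = value * x + a    # one multiplication per remaining coefficient
        muls += 1
    return value, muls

# p(x) = 1 + 2x + 3x^2 at x = 2  ->  1 + 4 + 12 = 17
print(eval_naive([1, 2, 3], 2))   # (17, 3)
print(eval_horner([1, 2, 3], 2))  # (17, 2)
```

Both variants return the same value; only the multiplication count differs.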
Multiple turtlebots "odom" topic problem | Question:
Hi Everyone,
I spawned multiple turtlebots for my gazebo simulation using the kobuki model from the turtlebot simulator.
I can input velocity commands for each individual turtlebot using different namespaces like /robot1/vel_cmd, /robot2/vel_cmd, etc., but I couldn't read my odometry topic in the same manner. In other words, I just got one single topic “/odom”, but I need to push down the “/odom” topic into its own namespace like /robot1/odom, /robot2/odom, ..., /robot4/odom.
I installed ROS Hydro, gazebo/gazebo_ros_pkgs and the turtlebot simulator from pre-built debians, and I read some posts where people recommend installing gazebo/gazebo_ros_pkgs from source to have access to the source code and then solving this problem by changing the source. For this particular problem, I read that I need to change the gazebo_ros_create.cpp file, where the odom topic is declared as “/odom” (a global name), and change it to “odom” (a relative name) to be able to push down the topic.
I haven't tried it yet, but I have my doubts about this option. Any help regarding this matter would be appreciated. Thanks a lot.
Originally posted by Robert1 on ROS Answers with karma: 63 on 2014-04-24
Post score: 1
Answer:
Hi...
I had the same problem, and I found a solution (at least for my case). I will try to explain what I did, in case someone is interested in trying it out (this will be a long answer!).
First, I will show you the launch files that I use, and then some of the nodes I wrote in order to make it work...
The launch files are:
<?xml version="1.0" ?>
<launch>
<arg name="r1_x" default="1" />
<arg name="r1_y" default="1" />
<arg name="r2_x" default="-1" />
<arg name="r2_y" default="1" />
<!-- start world -->
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<!-- -->
<arg name="use_sim_time" value="true"/>
<arg name="debug" value="true"/>
<arg name="gui" value="true"/> <!-- graphic interface -->
<arg name="headless" value="false"/>
<arg name="world_name" value="$(find tb_tables)/worlds/tres_mesas.world"/>
</include>
<!-- include robot description -->
<param name="robot_description"
command="$(find xacro)/xacro.py '$(find turtlebot_description)/robots/kobuki_hexagons_kinect.urdf.xacro'" />
<!-- BEGIN ROBOT 0-->
<group ns="robot0">
<param name="tf_prefix" value="robot0_tf" />
<include file="$(find tb_tables)/launch/simulation/includes/turtlebot1.launch" >
<arg name="init_pose" value="-x $(arg r2_x) -y $(arg r2_y) -z 0" />
<arg name="robot_name" value="robot0" />
</include>
</group>
<!-- BEGIN ROBOT 1-->
<group ns="robot1">
<param name="tf_prefix" value="robot1_tf" />
<include file="$(find tb_tables)/launch/simulation/includes/turtlebot1.launch" >
<arg name="init_pose" value="-x $(arg r1_x) -y $(arg r1_y) -z 0" />
<arg name="robot_name" value="robot1" />
</include>
</group>
</launch>
This is the auxiliary launch file (called above, inside a namespace).
<?xml version="1.0" ?>
<launch>
<arg name="robot_name"/>
<arg name="init_pose"/>
<!-- odom publisher -->
<param name="robot_name" value= "$(arg robot_name)" />
<node pkg="tb_tables" type="odom_sim_2" name="odom_sim"/>
<!-- Gazebo model spawner -->
<node name="spawn_turtlebot_model" pkg="gazebo_ros" type="spawn_model"
args="$(arg init_pose) -unpause -urdf -param /robot_description -model $(arg robot_name) -robotNamespace $(arg robot_name) -namespace $(arg robot_name)"/>
<!-- robot state publisher (at 1 Hz) -->
<node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher">
<param name="publish_frequency" type="double" value="1.0" />
</node>
<!-- Publish tf info -->
<node pkg="tb_tables" name="tb_tf_broadcaster" type="tb_tf_can2">
</node>
</launch>
And this launch file is for running the navigation stack for both robots:
<?xml version="1.0" ?>
<launch>
<arg name="map_file" default="$(find tb_tables)/maps/blank_map.yaml"/>
<arg name="rviz_robot" default="robot1"/>
<node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" >
</node>
<!-- BEGIN ROBOT 0-->
<group ns="robot0">
<param name="tf_prefix" value="robot0_tf" />
<node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" />
<node pkg="tb_tables" name="fake_localization" type="fake_localiza2" output="screen"/>
<include file="$(find tb_tables)/launch/simulation/includes/move_base_sim.launch.xml" />
</group>
<!-- BEGIN ROBOT 1-->
<group ns="robot1">
<param name="tf_prefix" value="robot1_tf" />
<node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" />
<node pkg="tb_tables" name="fake_localization" type="fake_localiza2" output="screen"/>
<include file="$(find tb_tables)/launch/simulation/includes/move_base_sim.launch.xml" />
</group>
</launch>
You can see in these launch files that I use a node named "odom_sim". As some people have noticed, namespaces in the kobuki gazebo plugin don't work for many topics (odometry being one of them), so I programmed this node (odom_sim) to publish the odometry information on a namespaced topic ("robot1/odom" & "robot2/odom"). To obtain the robot's position, I use the "/gazebo/get_model_state" service. The code I use is this:
#include <iostream>
#include <ros/ros.h>
#include <nav_msgs/Odometry.h>
#include <geometry_msgs/TransformStamped.h>
#include <geometry_msgs/Quaternion.h>
#include <geometry_msgs/Pose.h>
#include <geometry_msgs/Twist.h>
#include <control_msgs/JointControllerState.h>
#include <gazebo_msgs/GetModelState.h>
#include <gazebo_msgs/ModelState.h>
#include <ros/param.h>
#include <boost/assign/list_of.hpp>
int main(int argc, char** argv){
ros::init(argc, argv, "odometry_publisher");
ros::NodeHandle n;
ros::Publisher odom_pub = n.advertise<nav_msgs::Odometry>("odom", 10); // advertise once, outside the loop
ros::Rate r(25.0);
while(n.ok()){
ros::spinOnce(); // check for incoming messages
std::string name;
ros::param::get( "robot_name", name);
//std::cout << name << std::endl;
// take pose info from gazebo
ros::ServiceClient client = n.serviceClient<gazebo_msgs::GetModelState>("/gazebo/get_model_state");
gazebo_msgs::GetModelState getmodelstate;
client.waitForExistence();
getmodelstate.request.model_name = name;
client.call(getmodelstate);
//odometry message over ROS
nav_msgs::Odometry odom;
odom.header.stamp = ros::Time::now();
odom.header.frame_id = "odom";
//set the position
odom.pose.pose.position.x = getmodelstate.response.pose.position.x;
odom.pose.pose.position.y = getmodelstate.response.pose.position.y;
odom.pose.pose.orientation.z = getmodelstate.response.pose.orientation.z;
odom.pose.pose.orientation.w = getmodelstate.response.pose.orientation.w;
odom.pose.covariance = boost::assign::list_of(1e-1) (0) (0) (0) (0) (0)
(0) (1e-1) (0) (0) (0) (0)
(0) (0) (1e6) (0) (0) (0)
(0) (0) (0) (1e6) (0) (0)
(0) (0) (0) (0) (1e6) (0)
(0) (0) (0) (0) (0) (5e-2) ;
//set the velocity
odom.child_frame_id = "base_footprint"; //base_footprint base_link
odom.twist.twist.linear.x = getmodelstate.response.twist.linear.x;
odom.twist.twist.angular.z = getmodelstate.response.twist.angular.z;
//publish the message
odom_pub.publish(odom);
r.sleep();
}
}
Now that we have odometry information for each robot, we can write a node that publishes the "odom" TF transform (I called this node "tb_tf_broadcaster" in the launch file). The code is:
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>
#include <nav_msgs/Odometry.h>
#include <tf/transform_datatypes.h>
#include <ros/param.h>
#include <tf/tf.h>
void odom_callback(const nav_msgs::OdometryConstPtr& msg){
// Publish /odom to base_footprint transform
static tf::TransformBroadcaster br;
tf::Transform BaseFootprintTransf;
BaseFootprintTransf.setOrigin(tf::Vector3(msg->pose.pose.position.x,msg->pose.pose.position.y, 0.0));
tf::Quaternion q;
tf::quaternionMsgToTF(msg->pose.pose.orientation, q);
BaseFootprintTransf.setRotation(q);
std::string tf_name;
ros::param::get( "tf_prefix", tf_name);
std::string odom_ = std::string(tf_name) + "/odom";
std::string base_footprint_ = std::string(tf_name) + "/base_footprint";
br.sendTransform(tf::StampedTransform(BaseFootprintTransf, ros::Time::now(),odom_, base_footprint_));
// Publish base_footprint transform to odom
//static tf::TransformBroadcaster br2;
//tf::Transform OdomTransf;
//br2.sendTransform(tf::StampedTransform(OdomTransf, ros::Time::now(),"/origin", odom_));
}
int main(int argc, char** argv){
ros::init(argc, argv, "turtlebot_tf_broadcaster");
ros::NodeHandle n;
ros::Subscriber sub = n.subscribe("odom", 100, &odom_callback);
ros::spin();
return 0;
};
Finally, if you want to simulate the AMCL localization, you can use the "fake_localization" node instead. This node is computationally less expensive than the AMCL node, but it also works perfectly with the rest of the navigation stack modules (move_base). Like AMCL, this node publishes the TF transformation between each robot's map and odom frames.
In my case I had to modify the original code in order to make it work with two robots:
#include <ros/ros.h>
#include <ros/time.h>
#include <nav_msgs/Odometry.h>
#include <geometry_msgs/PoseArray.h>
#include <geometry_msgs/PoseWithCovarianceStamped.h>
#include <angles/angles.h>
#include "ros/console.h"
#include "tf/transform_broadcaster.h"
#include "tf/transform_listener.h"
#include "tf/message_filter.h"
#include "message_filters/subscriber.h"
#include <iostream>
class FakeOdomNode
{
public:
FakeOdomNode(void)
{
m_posePub = m_nh.advertise<geometry_msgs::PoseWithCovarianceStamped>("amcl_pose",1,true);
m_particlecloudPub = m_nh.advertise<geometry_msgs::PoseArray>("particlecloud",1,true);
m_tfServer = new tf::TransformBroadcaster();
m_tfListener = new tf::TransformListener();
m_base_pos_received = false;
ros::NodeHandle private_nh("~");
private_nh.param("odom_frame_id", odom_frame_id_, std::string("odom"));
private_nh.param("base_frame_id", base_frame_id_, std::string("base_link"));
private_nh.param("global_frame_id", global_frame_id_, std::string("/map")); // "/map"
private_nh.param("delta_x", delta_x_, 0.0);
private_nh.param("delta_y", delta_y_, 0.0);
private_nh.param("delta_yaw", delta_yaw_, 0.0);
private_nh.param("transform_tolerance", transform_tolerance_, 0.1);
// get our tf prefix
std::string tf_prefix = tf::getPrefixParam(private_nh);
global_frame_id_ = tf::resolve(tf_prefix, global_frame_id_);
base_frame_id_ = tf::resolve(tf_prefix, base_frame_id_);
odom_frame_id_ = tf::resolve(tf_prefix, odom_frame_id_);
m_particleCloud.header.stamp = ros::Time::now();
m_particleCloud.header.frame_id = global_frame_id_;
m_particleCloud.poses.resize(1);
ros::NodeHandle nh;
// messages (erase)
ROS_INFO("odom_frame_id: %s", odom_frame_id_.c_str());
ROS_INFO("base_frame_id: %s", base_frame_id_.c_str());
ROS_INFO("global_frame_id: %s", global_frame_id_.c_str());
m_offsetTf = tf::Transform(tf::createQuaternionFromRPY(0, 0, -delta_yaw_ ), tf::Point(-delta_x_, -delta_y_, 0.0));
stuff_sub_ = nh.subscribe("odom", 100, &FakeOdomNode::stuffFilter, this); //base_pose_ground_truth
filter_sub_ = new message_filters::Subscriber<nav_msgs::Odometry>(nh, "", 100); //""
filter_ = new tf::MessageFilter<nav_msgs::Odometry>(*filter_sub_, *m_tfListener, base_frame_id_, 100);
filter_->registerCallback(boost::bind(&FakeOdomNode::update, this, _1));
// subscription to "2D Pose Estimate" from RViz:
m_initPoseSub = new message_filters::Subscriber<geometry_msgs::PoseWithCovarianceStamped>(nh, "initialpose", 1);
m_initPoseFilter = new tf::MessageFilter<geometry_msgs::PoseWithCovarianceStamped>(*m_initPoseSub, *m_tfListener, global_frame_id_, 1);
m_initPoseFilter->registerCallback(boost::bind(&FakeOdomNode::initPoseReceived, this, _1));
}
~FakeOdomNode(void)
{
if (m_tfServer)
delete m_tfServer;
}
private:
ros::NodeHandle m_nh;
ros::Publisher m_posePub;
ros::Publisher m_particlecloudPub;
message_filters::Subscriber<geometry_msgs::PoseWithCovarianceStamped>* m_initPoseSub;
tf::TransformBroadcaster *m_tfServer;
tf::TransformListener *m_tfListener;
tf::MessageFilter<geometry_msgs::PoseWithCovarianceStamped>* m_initPoseFilter;
tf::MessageFilter<nav_msgs::Odometry>* filter_;
ros::Subscriber stuff_sub_;
message_filters::Subscriber<nav_msgs::Odometry>* filter_sub_;
double delta_x_, delta_y_, delta_yaw_;
bool m_base_pos_received;
double transform_tolerance_;
nav_msgs::Odometry m_basePosMsg;
geometry_msgs::PoseArray m_particleCloud;
geometry_msgs::PoseWithCovarianceStamped m_currentPos;
tf::Transform m_offsetTf;
//parameter for what odom to use
std::string odom_frame_id_;
std::string base_frame_id_;
std::string global_frame_id_;
public:
void stuffFilter(const nav_msgs::OdometryConstPtr& odom_msg){
//we have to do this to force the message filter to wait for transforms
//from odom_frame_id_ to base_frame_id_ to be available at time odom_msg.header.stamp
//really, the base_pose_ground_truth should come in with no frame_id b/c it doesn't make sense
boost::shared_ptr<nav_msgs::Odometry> stuff_msg(new nav_msgs::Odometry);
*stuff_msg = *odom_msg;
stuff_msg->header.frame_id = odom_frame_id_;
filter_->add(stuff_msg);
}
void update(const nav_msgs::OdometryConstPtr& message){
// Callback to register with tf::MessageFilter to be called when transforms are available
tf::Pose txi;
tf::poseMsgToTF(message->pose.pose, txi);
txi = m_offsetTf * txi;
tf::Stamped<tf::Pose> odom_to_map;
try
{
m_tfListener->transformPose(odom_frame_id_, tf::Stamped<tf::Pose>(txi.inverse(), message->header.stamp, base_frame_id_), odom_to_map);
}
catch(tf::TransformException &e)
{
ROS_ERROR("Failed to transform to %s from %s: %s\n", odom_frame_id_.c_str(), base_frame_id_.c_str(), e.what());
return;
}
m_tfServer->sendTransform(tf::StampedTransform(odom_to_map.inverse(),
message->header.stamp + ros::Duration(transform_tolerance_),
global_frame_id_, odom_frame_id_));
tf::Pose current;
tf::poseMsgToTF(message->pose.pose, current);
//also apply the offset to the pose
current = m_offsetTf * current;
geometry_msgs::Pose current_msg;
tf::poseTFToMsg(current, current_msg);
// Publish localized pose
m_currentPos.header = message->header;
m_currentPos.header.frame_id = global_frame_id_;
m_currentPos.pose.pose = current_msg;
m_posePub.publish(m_currentPos);
// The particle cloud is the current position. Quite convenient.
m_particleCloud.header = m_currentPos.header;
m_particleCloud.poses[0] = m_currentPos.pose.pose;
m_particlecloudPub.publish(m_particleCloud);
}
void initPoseReceived(const geometry_msgs::PoseWithCovarianceStampedConstPtr& msg){
tf::Pose pose;
tf::poseMsgToTF(msg->pose.pose, pose);
if (msg->header.frame_id != global_frame_id_){
ROS_WARN("Frame ID of \"initialpose\" (%s) is different from the global frame %s", msg->header.frame_id.c_str(), global_frame_id_.c_str());
}
// set offset so that current pose is set to "initialpose"
tf::StampedTransform baseInMap;
try{
m_tfListener->lookupTransform(base_frame_id_, global_frame_id_, msg->header.stamp, baseInMap);
} catch(tf::TransformException){
ROS_WARN("Failed to lookup transform!");
return;
}
tf::Transform delta = pose * baseInMap;
m_offsetTf = delta * m_offsetTf;
}
};
int main(int argc, char** argv)
{
ros::init(argc, argv, "fake_localization");
FakeOdomNode odom;
ros::spin();
return 0;
}
I hope that someone finds this useful...
Greetings!
Originally posted by Carlos Hdz with karma: 103 on 2014-05-20
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 17767,
"tags": "ros, multiple, turtlebot"
} |
My Power Law (Gamma) Transformation Code Isn't Compatible With the Gamma Value | Question: I'm trying to apply the Power Law (Gamma) Transformation. The formula is simple: s = c*r^γ.
(c=1 , r= intensity , s= outputintensity). Here is detailed information: https://theailearner.com/2019/01/26/power-law-gamma-transformations/.
I tried the code (with C#) below, but it doesn't work properly.
public void gammatransform(Bitmap CikisResmi)
{
int ResimGenisligi = Img.Width;
int ResimYuksekligi = Img.Height;
int Greyscale;
double y =0.4;
for (int i = 0; i < ResimGenisligi; i++)
{
for (int j = 0; j < ResimYuksekligi; j++)
{
OkunanRenk = Img.GetPixel(i, j);
Greyscale = Convert.ToInt16(OkunanRenk.R * 0.3 + OkunanRenk.G * 0.58 + OkunanRenk.B * 0.12);
Greyscale = Convert.ToInt32(Math.Pow(Greyscale, y));
if (Greyscale > 255)
{
Greyscale = 255;
}
CikisResmi.SetPixel(i, j, Color.FromArgb(Greyscale, Greyscale, Greyscale));
}
}
return;
}
Answer: Gamma correction can be applied to values ranging between 0 and 1. Based on your code, it looks like there is no conversion from the 0..255 range to the 0..1 range and back. And even if the range really were 0..1, the intermediate result is stored in an integer, so it would be an issue either way. | {
"domain": "dsp.stackexchange",
"id": 7742,
"tags": "c#, power-law"
} |
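A minimal Python sketch of the fix the answer describes (mine, not the asker's C#; the function name is illustrative): normalize to 0..1, apply the power in floating point, then scale back to 0..255 only at the very end.

```python
def gamma_transform(intensity, gamma=0.4, c=1.0):
    """s = c * r**gamma, with r normalized to 0..1 and the result
    mapped back to the 0..255 range only after the power law."""
    r = intensity / 255.0                     # 0..255 -> 0..1
    s = c * (r ** gamma)                      # power law, kept in floating point
    return min(255, max(0, round(s * 255)))   # 0..1 -> 0..255, clamped

print(gamma_transform(0))    # 0
print(gamma_transform(255))  # 255
print(gamma_transform(64))   # mid-tones are brightened for gamma < 1
```

Applied per pixel to the grayscale value, this replaces the bare `Math.Pow(Greyscale, y)` call in the question's code.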
Time dilation at zero velocity (and zero gravity) | Question: From what I've learned, the more an object travels closer and closer to the speed of light, the more time will slow down for that object.. at least from an outside perspective..
It was shown that atomic clocks run slower in high speed orbit than clocks on earth.. I assume that the rate of radioactive decay (for example) is also slowed down at high speeds (correct me at any time, please).
We are moving through space right now at 760 miles per second (0.40771% of the speed of light), which I can only assume is our current "cosmic clock", which also regulates how fast radioactive decay happens on Earth (if we continue with that example).
When an astronaut is traveling at high velocity, his/her velocity is being added to the overall velocity of our galaxy moving through space, right?
So my question is this:
What will happen if an object were to stay completely stationary in space-time? Far away from any galaxy.. Will time go infinitely fast for that object? Will it instantly decay?
Since space is expanding, I realize you can't really stay "stationary".. but I mean: not having velocity of moving through space.
Thanks :)
Answer: You're mixing things up with simple time dilation. Time intervals are relative quantities: two observers may not agree on the measured time interval of an event. You see a moving observer's time as dilated. You also see another observer's time as dilated if she is deeper in a gravity well than you are. Meaning, you find the other observer's measured time interval to be longer than your own measurement of the same event. That's it.
Now, coming to radioactive decay: you measured the half-life of a substance on Earth. Another observer who is independent of the motion of the Earth etc. and far from any gravity well (the notion of "stationary" is irrelevant) would find your measured half-life to be longer than hers. | {
"domain": "physics.stackexchange",
"id": 26254,
"tags": "speed-of-light, spacetime, velocity, radioactivity, time-dilation"
} |
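For a sense of scale, here is a hedged Python sketch (mine, not the answerer's) of the special-relativistic factor $\gamma = 1/\sqrt{1-v^2/c^2}$, using the question's figure of about 0.4% of the speed of light:

```python
import math

def lorentz_gamma(beta):
    """Time-dilation factor for a speed v = beta * c (beta dimensionless)."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

print(lorentz_gamma(0.0))        # 1.0 -- no dilation at zero relative velocity
print(lorentz_gamma(0.0040771))  # barely above 1: galactic-scale speeds hardly matter
print(lorentz_gamma(0.9))        # ~2.29 -- dilation only becomes large near c
```

At the quoted 760 miles per second the factor differs from 1 by less than one part in 100,000, which is why "subtracting" the galaxy's motion would not make clocks run noticeably differently.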
Why is the ionic product of water also the equilibrium constant of dissociation of water? | Question: This answer presents a derivation of the value of ionic product of water at $25^{\circ}\text{C}$.
The relation $K_\text{eq} = \operatorname{e}^{-\frac{\Delta_\text{r}G^{\circ}}{RT}}$ is used for the derivation.
But why is the equilibrium constant $\mathrm{K_w=[H^+][OH^-]}$ and not $\mathrm{K_d=\frac{[H^+][OH^-]}{[H_2O]}}$ or $\mathrm{K_i=\frac{[H_3O^+][OH^-]}{[H_2O]^2}}$?
After all, the reaction $\ce{H2O <=> H+ + OH-}$ is used for calculating $\Delta_\text{r}G^{\circ}$, so intuitively it appears that $\mathrm{K_d}$ should be used.
How can using any of the equations be equivalent when we have specified which reaction we are using while calculating $\Delta_\text{r}G^{\circ}$? What am I missing?
Answer: The thermodynamic constant of water dissociation can be written as$$K_\mathrm{D}=\dfrac{a(\ce{H3O+(aq)}) \cdot a(\ce{OH-(aq)})}{\ce{(a(H2O(l)}))^2} \approx \\
\approx \dfrac{a(\ce{H3O+(aq)}) \cdot a(\ce{OH-(aq)})}{1^2} \approx \dfrac{[\ce{H3O+(aq)}][\ce{OH-(aq)}]}{1^2}=[\ce{H+}][\ce{OH-}]= K_\text{w} $$
$a$ as the thermodynamic activity is defined as
$$\mu = \mu^{\circ} + RT \ln a$$
where $\mu$ resp. $\mu^{\circ}$ is chemical potential defined as
$$\mu = \left(\frac{\partial G}{\partial n_i}\right)_{T,p,n_j, j \ne i}$$
The standard chemical potential is defined for pure water, therefore activity of the pure water $a=1$.
OTOH the standard chemical potential of ions is defined as extrapolation from very diluted solutions to concentration $\pu{1 mol L-1}$, putting
$$\lim_{c \to 0}{(a)} = c$$ | {
"domain": "chemistry.stackexchange",
"id": 17964,
"tags": "physical-chemistry, acid-base, equilibrium, water"
} |
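As a numerical sanity check on the relation $K_\text{eq} = \operatorname{e}^{-\Delta_\text{r}G^{\circ}/RT}$ cited in the question, a small Python sketch. The value $\Delta_\text{r}G^{\circ} \approx 79.9\ \text{kJ/mol}$ for $\ce{H2O <=> H+ + OH-}$ at 25 °C is an assumed textbook figure, not taken from the post:

```python
import math

R = 8.314     # gas constant, J/(mol K)
T = 298.15    # 25 deg C in kelvin
dG = 79.9e3   # assumed standard Gibbs energy of water autoionization, J/mol

Kw = math.exp(-dG / (R * T))
print(Kw)     # on the order of 1e-14, matching pKw ~ 14 at 25 C
```

The result lands on the familiar $K_\text{w} \approx 10^{-14}$, consistent with treating the activity of liquid water as 1 in the expression for $K_\text{w}$.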
Is this kids experiment a legitimate way to show that air has mass? | Question: Consider the experiment in this link.
The experiment includes using a ruler as a lever, with an inflated balloon on one side and a balloon which is not inflated on the other.
The aim of the experiment is to show that air has mass.
I have seen many kids performing similar experiments.
But, if the air pressure inside the balloon is equal to that outside, then the buoyant force will cancel out the weight of the air inside the balloon, won't it?
Answer: I can think of at least four things going on in this experiment that need pointing out:
When you inflate a balloon by mouth, the air is warm: this makes the air inside the inflated balloon slightly lighter than the air it displaced
The air inside the balloon has 100% relative humidity at 37C, and condensation will quickly form on the inside of the balloon as the air inside cools down.
The air inside the balloon contains carbon dioxide, which has higher density than room air (molecular mass of 12+16+16 = 44 amu, vs oxygen at 32 amu and nitrogen at 28 amu - ignoring small isotopic effects, and ignoring Argon).
The pressure inside the balloon is larger than outside - this increases the density
So how large are each of these effects?
Warm air: 37C vs 20C results in drop in density of 0.945x (293 / 310) or -5.5%
Moisture: partial pressure of water at 37C is 47.1 mm Hg source which is about 0.061 atmospheres. Assuming that pressure is constant, this water (mass 18 amu) displaces air (mean mass 29 amu), so the density of the air decreases by 0.061 * (29 - 18) / 29 = 2.3%. If we allow the air outside the balloon to have 60% relative humidity (with saturated vapor pressure of 10.5 mm Hg), it would be slightly less dense than dry air (10.5*0.6/760*(29-18)/29 = 0.3%) making the net difference -2.0%. Note that much of this moisture will condense when the balloon cools down - little droplets will form on the inside of the balloon. With the air inside still saturated, its density will be 0.1% lower than on the outside; the net result amounts to 2.9% of the mass of the air in the balloon.
Carbon dioxide: the exhaled air has 4 - 5 % carbon dioxide source: wikipedia, with an equivalent drop in oxygen. The density of exhaled air is therefore higher than that of inhaled air by 0.045 * (44 - 32) / 29 = +1.9%
Pressure in the balloon: from this youtube video - time point 3:43 I estimate the pressure increase in the balloon at 23 mm Hg, resulting in an increase in density of 2.9%
Summarizing in a table:
factor effect at room T
temperature -5.5% 0.0%
moisture -2.0% 2.9%
CO2 1.9% 1.9%
pressure 2.9% 2.9%
net -2.7% 7.7%
A freshly inflated balloon will thus have only a slightly lower density than the air it displaced, because the temperature + moisture effect is greater than the other two. After you wait a little while, the temperature will equalize and the density of the air inside the balloon will be greater - by 7.7%, with more than half of that not caused by the pressure in the balloon...
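The table's arithmetic can be reproduced with a few lines of Python (my sketch, using the input figures quoted above; the moisture line here omits the 60%-humidity correction for the outside air, and the pressure line rounds slightly differently than the table's 2.9%):

```python
# Relative density changes of the air inside a mouth-inflated balloon.
temperature = 1 - 293 / 310            # 20 C outside vs 37 C inside
moisture    = 0.061 * (29 - 18) / 29   # 47.1 mmHg water vapour displacing air
co2         = 0.045 * (44 - 32) / 29   # ~4.5% CO2 replacing O2 in exhaled air
pressure    = 23 / 760                 # ~23 mmHg overpressure in the balloon

for name, x in [("temperature", -temperature), ("moisture", -moisture),
                ("CO2", co2), ("pressure", pressure)]:
    print(f"{name:12s} {100 * x:+.1f}%")
```

The printed percentages match the "effect" column of the table to within rounding.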
In summary: the experiment described in your link measures the difference in density between air in a balloon, and ambient air. Since the density of the air inside the balloon is higher than the density outside the balloon, one may conclude that the air inside the balloon has finite density. One may NOT conclude that the medium outside the balloon (which we believe to be "dry air") has any density at all - since nothing in this measurement tells us about the air outside the balloon.
If you did the experiment carefully with a balloon initially filled with warm air, and you allowed the air to cool down, you might be able to tell that the balance shifts - in other words, that there must be a change in the buoyancy experienced by the balloon as it cools down. THAT would be an experiment to demonstrate "air has mass" (volume of balloon decreases, and it experiences less buoyancy). From the experiment as described (popping the balloon), we learn that "exhaled air has mass". That is not the same thing.
If you used an air pump (balloon pump) to inflate the balloon, the first three components would go away and you are left with the difference due to the pressure only - 2.9% of the mass of the air in the balloon. | {
"domain": "physics.stackexchange",
"id": 18617,
"tags": "experimental-physics, home-experiment"
} |
Removing missing categories from geom_bar | Question: I have a simple dataframe that includes points where the x values are 0, 1, 2, 3, 4, and 7. When I go to plot the data this is what ggplot2 gives me:
I only want to include the points mentioned above. How do I remove the instances of 5 and 6 from the bar plot?
(I have tried googling the answer and nothing is coming up with my actual need so I figured I'd ask the community which might give an answer sooner - please be kind I know this is bound to have a simple solution)
Answer: I was able to get it to work by converting the column in the dataframe to a character using this command:
df$day <- as.character(df$day)
and it worked like a charm! Thanks for your advice @haci
You can also try converting the column of numbers to a factor variable.
"domain": "bioinformatics.stackexchange",
"id": 2506,
"tags": "r, ggplot2"
} |
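What's happening under the hood, sketched in Python rather than R (illustrative only): with numeric values the plotting library lays out a continuous axis covering every integer in the range, while string/factor values produce one slot per observed level, which is exactly what `as.character()` or `factor()` achieves in the answer.

```python
days = [0, 1, 2, 3, 4, 7]

# Numeric (continuous) axis: every value between min and max gets a position,
# so 5 and 6 appear even though no bar is drawn there.
continuous_axis = list(range(min(days), max(days) + 1))
print(continuous_axis)  # [0, 1, 2, 3, 4, 5, 6, 7]

# Categorical (discrete) axis: only the levels actually present are shown.
discrete_axis = sorted(set(str(d) for d in days))
print(discrete_axis)    # ['0', '1', '2', '3', '4', '7']
```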
Boundary condition for partial reflection | Question: I want to solve a wave equation for the wave $\psi(x,t)$.
One boundary is moving, therefore I impose the velocity
$$v(x=0)=v_a\cos(\omega t)$$
the other boundary is fixed, but reflecting. If the reflection is total the proper boundary condition is
$$ \frac{\partial \psi}{\partial x}\bigg|_{x=L}=0 $$
Which is the right boundary condition if the reflection is partial? Therefore, my boundary has a reflection coefficient $R$ and an absorbing coefficient $A$ (no transmission), so that $R+A=1$.
Answer: Consider a boundary condition of the form
$$
\alpha \psi + \beta \frac{\partial \psi}{\partial x} + \gamma \frac{\partial \psi}{\partial t} =0
$$
on the boundary, where $\alpha$, $\beta$, and $\gamma$ are real coefficients. Standard Dirichlet boundary conditions correspond to $\beta = \gamma = 0, \alpha \neq 0$, while Neumann boundary conditions correspond to $\alpha = \gamma = 0, \beta \neq 0$.
If we impose this condition at $x = 0$ for an incoming wave of the form $\psi(x,t) = e^{i(kx - \omega t)}$, then the wave solution for $\psi$ will be
$$
\psi(x,t) = e^{i(kx - \omega t)} + A e^{i(-kx - \omega t)}
$$
where $A$ is the (complex) amplitude of the reflected wave. Some algebra then reveals that we must have
$$
A = \frac{i (k \beta - \omega \gamma) + \alpha}{i (k \beta + \omega \gamma) - \alpha},
$$
and so
$$
R = |A|^2 = \frac{(k \beta - \omega \gamma)^2 + \alpha^2}{(k \beta + \omega \gamma)^2 + \alpha^2}.
$$
By appropriate choices of $\alpha$, $\beta$, and $\gamma$, one can "tune" the reflection coefficient to be anywhere between 0 and 1.
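A quick numerical check of the formula for $R$ (my sketch, with arbitrarily chosen coefficients and $k = \omega = 1$):

```python
def reflection_coefficient(alpha, beta, gamma, k=1.0, omega=1.0):
    """R = |A|^2 with A = (i(k*beta - omega*gamma) + alpha) / (i(k*beta + omega*gamma) - alpha)."""
    A = (1j * (k * beta - omega * gamma) + alpha) / (1j * (k * beta + omega * gamma) - alpha)
    return abs(A) ** 2

print(reflection_coefficient(1.0, 2.0, 0.0))  # gamma = 0 -> R = 1 (total reflection)
print(reflection_coefficient(0.0, 1.0, 1.0))  # k*beta = omega*gamma, alpha = 0 -> R = 0
print(reflection_coefficient(1.0, 1.0, 0.5))  # an intermediate partial reflection
```

This confirms the note below that $\gamma = 0$ forces $R = 1$, and shows that suitable choices of the three coefficients sweep $R$ anywhere between 0 and 1.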
A few notes on this expression:
Even if the medium is non-dispersive (i.e., the speed of propagation is independent of frequency), the reflection coefficient will only be frequency-independent if either (a) $\beta = \gamma = 0$, or (b) $\alpha = 0$. And if the reflection coefficient is frequency-dependent, this means that a pulse sent towards the wall will not be reflected with the same shape. If you want pulses to retain their shape on reflection, you need one of the above conditions.
For a given value of $R$, the coefficients $\alpha$, $\beta$, and $\gamma$ are not uniquely determined. This is for two reasons. First, we can always divide or multiply all three coefficients by the same number to get the same boundary condition; you can always use this freedom to set one (non-zero) coefficient equal to 1 if you like. Second, the reflection coefficient alone doesn't tell you everything about the reflected wave; the reflected wave can also be phase-shifted.
If $\gamma = 0$, we have $R = 1$ identically. This makes some sense; in such a case, the equations have time-reversal symmetry, whereas absorption involves an inherent "arrow of time". | {
"domain": "physics.stackexchange",
"id": 60367,
"tags": "waves, reflection, boundary-conditions, differential-equations"
} |
Why are there three $p$-orbitals? | Question: This question is specifically about Schrödinger quantum mechanics, but if an answer in some other mode would illuminate it could be acceptable, as demonstrating a physical or mathematical reason for added axioms.
In short - since the p-orbital has rotational symmetry about only one axis, but the potential of a point charge has spherical symmetry, a specific solution corresponding to a p-orbital should also be a solution when arbitrarily rotated. That means there is an infinite number of p-orbital solutions in this context. However, the dimension of the solution space for the given energy, that is, the eigenspace for the given eigenvalue is presumably exactly three. One can use three axial p-orbitals to span the whole eigenspace.
Thus the exclusion principle for fermions seems to be that there are at most the dimension of the eigenspace number of particles in an eigenspace rather than 1 particle in an orbital, if an orbital is taken as a solution to the Schrödinger equation.
Can anyone confirm or deny this line of reasoning? And provide a reference to explicit statement in the literature?
Answer:
Thus the exclusion principle for fermions seems to be that there are at most the dimension of the eigenspace number of particles in an eigenspace rather than 1 particle in an orbital, if an orbital is taken as a solution to the Schrödinger equation.
Your reasoning is correct. We don't normally frame things in this way because it is clunkier than the strictly-equivalent language of one particle per orbital in a linearly-independent set, but your description is somewhat more accurate.
The true underpinnings of this structure is the fact that multi-electron states must be antisymmetric with particle exchange; if you want to produce such a state given a set of orbitals, then you apply a procedure called antisymmetrization, and you end up with a state called a Slater determinant. If you start with more electrons than the dimension of the space spanned by your orbitals, then the Slater determinant will vanish.
Furthermore, as you correctly note, what really matters in a multi-electron state is strictly the subspace spanned by the constituent orbitals, and not the specific choice of orbitals as a basis for that subspace. For more on that, see Are orbitals observable physical quantities in a many-electron setting?.
More generally, the passage from the naive Pauli exclusion principle to the fully-grown version in terms of antisymmetrized Slater determinants is treated at length in any atomic physics textbook; if you want something specific I'll recommend Haken and Wolf's The physics of atoms and quanta, but any textbook should do. | {
"domain": "physics.stackexchange",
"id": 50709,
"tags": "angular-momentum, hilbert-space, schroedinger-equation, orbitals, spherical-harmonics"
} |
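The vanishing of the Slater determinant for linearly dependent orbitals can be illustrated numerically. Below is a hedged Python sketch (mine, not from the answer): three "orbitals" are sampled at three points, a toy stand-in for the matrix $\phi_i(x_j)$. When the third orbital lies in the span of the first two, the determinant, and hence the antisymmetrized state, is identically zero.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Columns = orbitals sampled at three points.
p_x, p_y, p_z = [1, 0, 0], [0, 1, 0], [0, 0, 1]
p_mix = [1, 1, 0]  # a "rotated" p-orbital inside the span of p_x and p_y

independent = [[p_x[j], p_y[j], p_z[j]] for j in range(3)]
dependent   = [[p_x[j], p_y[j], p_mix[j]] for j in range(3)]

print(det3(independent))  # nonzero: a valid 3-electron Slater determinant
print(det3(dependent))    # 0: more electrons than the spanned subspace can hold
```

This is exactly the statement that at most dim(eigenspace) fermions fit: a fourth p-type orbital is necessarily a combination of the three axial ones, so any Slater determinant containing it vanishes.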
A certain potential shape reoccuring in physics | Question:
I have seen this shape in many different phenomena in physics. For example, when two neutral atoms are brought together, the potential between them as a function of separation takes this type of shape; I have also seen this type of potential in Kepler's laws. Today I was studying QM, and I saw this shape of potential again: $$V= \frac{l(l+1)}{2mr^2} + \frac{-e^2}{r}$$ where $\dfrac{-e^2}{r}$ is the Coulomb potential.
What is so special about this type of shape of potential, that it occurs in many different phenomena?
Answer: The general shape of the potential you've displayed describes any system that has a single stable bound state with a finite binding energy.
The fact that the potential has a minimum at finite $r$ gives it a stable bound state. The fact that it has only one extremum (i.e. minimum or maximum) means that it only has a single bound state. And the fact that the potential asymptotically approaches a finite value at large $r$ means that the system can be separated using a finite amount of energy (and therefore has a finite binding energy).
This potential is common because many bound systems in nature can be approximated, under certain conditions, as having a single stable bound state with a finite binding energy. The bound systems that we know how to study well have two general properties:
They last long enough for us to measure, and
We can take them apart and put them together.
The first property requires the existence of at least one stable bound state, and the second property requires a finite binding energy. Systems that violate one of these two principles are much harder to understand. Transitional states in chemical reactions violate the first property, and as such their structure is hard to determine; the proton, as a bound state of quarks, violates the second property, and so our understanding of the strong force is very limited. | {
"domain": "physics.stackexchange",
"id": 68731,
"tags": "potential"
} |
Port to connect to rostest rosmaster after Indigo | Question:
Connecting to rosmaster with a specific port number used to be possible. I see this pull request against Indigo that randomizes the rosmaster port for rostest.
Does this mean that there is now no way to connect to rosmaster during rostest?
(I'm seeing an issue that only occurs to me during rostest. I want to see values set in the Parameter Server for debugging purposes.)
$ dpkg -p ros-indigo-ros-comm | grep Ver
Version: 1.11.13-0trusty-20150429-190011-0700
Originally posted by 130s on ROS Answers with karma: 10937 on 2015-05-02
Post score: 2
Answer:
https://github.com/ros/ros_comm/pull/637 replaced the earlier PR and was merged; now rostest --reuse-master pkg foo.test will use the standard ROS_MASTER_URI port.
Originally posted by lucasw with karma: 8729 on 2016-03-04
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by lucasw on 2016-03-19:
But is there a way to catkin_make run_tests and reuse-master? | {
"domain": "robotics.stackexchange",
"id": 21597,
"tags": "ros, ros-comm, rostest, ros-master-uri, ros-indigo"
} |
What are the differences between HPRD and BIOGRID databases? | Question: What are the differences between HPRD and BIOGRID protein-protein interactions databases?
What are their purposes? Why do we need two different databases?
How is data collected into each one?
How different are they in the interactions they contain?
Is one considered more reliable than the other?
Which one is more commonly used when benchmarking models?
Are there any other commonly used PPI databases?
Answer: Short Answer
In a nutshell, there are two major differences:
Species range: BioGRID integrates multiple species' protein-protein interaction (PPI) data, whereas HPRD focuses mainly on Human data.
Functionality: HPRD has some GUI tools that can interact directly with its database (e.g. BLAST - for searching proteins and their binding partners via sequence alignment) whereas BioGRID is mostly just a database.
Long Answer
In bold are the unique features of each database.
About HPRD:
The Human Protein Reference Database represents a
centralized platform to visually depict and integrate information
pertaining to domain architecture, post-translational modifications,
interaction networks and disease association for each protein in the
human proteome. All the information in HPRD has been manually
extracted from the literature by expert biologists who read, interpret
and analyze the published data. HPRD has been created using an object
oriented database in Zope, an open source web application server, that
provides versatility in query functions and allows data to be
displayed dynamically.
FAQ HPRD: Why did you decide to develop yet another database instead of integrating other existing databases?"
We believe that biological databases are still in their early stages
and no protein database can be considered as an established standard.
We feel that a variety of databases trying to solve problems in
diverse ways provide the biologists the possibility of choosing their
favorite. Our approach is radically different from existing databases
and we want to offer biologists the possibility of choosing instead of
imposing one database by default. Besides, most of the databases are
automated and ours is manually curated to avoid errors. We are also
trying to provide information that few other databases provide.
About Biogrid:
The Biological General Repository for Interaction Datasets (BioGRID)
is a public database that archives and disseminates genetic and
protein interaction data from model organisms and humans
(thebiogrid.org). BioGRID currently holds over 720,000 interactions
curated from both high-throughput datasets and individual focused
studies, as derived from over 41,000 publications in the primary
literature. Complete coverage of the entire literature is maintained
for budding yeast (S. cerevisiae), fission yeast (S. pombe) and thale
cress (A. thaliana), and efforts to expand curation across multiple
metazoan species are underway. Current curation drives are focused on
particular areas of biology to enable insights into conserved networks
and pathways that are relevant to human health. The BioGRID 3.2 web
interface contains new search and display features that enable rapid
queries across multiple data types and sources. BioGRID provides
interaction data to several model organism databases, resources such
as Entrez-Gene, SGD, TAIR, FlyBase and other interaction
meta-databases. The entire BioGRID 3.2 data collection may be
downloaded in multiple file formats, including IMEx compatible PSI MI
XML. For developers, BioGRID interactions are also available via a
REST based Web Service and Cytoscape plugin. All BioGRID documentation
is available online in the BioGRID Wiki. | {
"domain": "biology.stackexchange",
"id": 1587,
"tags": "bioinformatics, proteins, software, database, protein-interaction"
} |
Splitting Hashmap by converting to TreeMap | Question: I have a project requirement where I have to split a HashMap of around 60-70 entries into multiple HashMaps. The critical number of the split is 30 due to performance reasons because eventually this HashMap is going to be wrapped in a customer Request object and sent in an API call. The receiving end during performance testing found that 30 is the only number they can take for good performance.
So, I have to split the HashMap of 70 subscribers into sub-HashMaps of - 30, 30, 10.
Since HashMaps do not have methods like - subMap() or tailMap() - I am thinking of converting the HashMap into TreeMap. Here's my implementation:
checkAndSplitSubscribers(Map<String, Type> originalMap) throws Exception {
List<Map<String, Type>> listOfSplitMaps = new ArrayList<Map<String, Type>>();
int criticalNumber = 30; (Although this will be configurable)
try {
TreepMap<String, Type> treeMap = new TreeMap<String, Type> (originalMap);
List<String> keys = new ArrayList<String> (originalMap.keySet());
final int originalMapSize = treeMap.size();
for (int i = 0; i < originalMapSize; i += criticalNumber) {
if (i + criticalNumber < originalMapSize) {
listOfSplitMaps.add(treeMap.subMap(keys.get(i), keys.get(i + criticalNumber));
} else {
listOfSplitMaps.add(treeMap.tailMap(keys.get(i));
}
}
} catch (Exception e) {
throw e;
}
return listOfSplitMaps;
}
I want to ask if there is anything wrong with this implementation? Is converting HashMap to TreeMap mid way of the code really a bad programming practice? Or if there's any better way to achieve the above, please suggest (I have searched and gone through almost all splitting hashmap questions on SO but I did not find any answer to my satisfaction).
Answer: Interfaces over implementations
TreepMap<String, Type> treeMap = new TreeMap<String, Type> (originalMap);
This could be
SortedMap<String, Type> splittable = new TreeMap<>(original);
That gets rid of the typo TreepMap.
It saves specifying the types twice. You don't need to do that unless you are building against an old version of Java.
Specifying the interface rather than the implementation makes it easier to change implementations in the future. You could also use NavigableMap, but it's not necessary for these operations.
I prefer descriptive names like splittable and original to type names. Of course, your coding standards could be different.
Exceptions
} catch (Exception e) {
throw e;
}
If you're just going to throw the same exception, there's no reason to catch it. If you want to catch it, then handle it. If you want it to percolate up to the caller, then just let it go without catching it. As is, the try/catch doesn't do anything. You could simplify the code by taking it out.
Alternatives
I'm not convinced that this does enough to be worthwhile. It's probably going to be less efficient than the more manual solution. And it's not less code. Consider
Map<String, Type> current = new HashMap<>(MAXIMUM_CAPACITY);
for (Map.Entry<String, Type> entry : original.entrySet()) {
current.put(entry.getKey(), entry.getValue());
if (current.size() >= MAXIMUM_SIZE) {
splitMaps.add(current);
current = new HashMap<>(MAXIMUM_CAPACITY);
}
}
if (!current.isEmpty()) {
splitMaps.add(current);
}
This is about the same amount of code, but it saves turning the keySet into a List.
Both versions have to iterate over the original map. This one does so manually while the original code did so implicitly by converting to a TreeMap.
I would find the code more compelling if the data was moved from the HashMap to a TreeMap as the base source. I.e. if there never was a HashMap and the map came as a TreeMap.
This assumes the creation of two new constants. You might be able to find better names with more context. I don't like criticalNumber as a name because it doesn't tell me what makes the number critical. MAXIMUM_SIZE is a bit better in my opinion, as it tells me that the number is a size and that we don't want to go beyond it. | {
"domain": "codereview.stackexchange",
"id": 26531,
"tags": "java, tree, hash-map"
} |
Is there a term to describe a room only used to stop sound getting in? | Question: an acoustic anechoic chamber has mainly 3 features
stopping sound getting in;
stopping sound getting out;
reduce echoey.
Is there a term to describe a room only used to stop sound getting in?
Answer: Acoustic test booth
However, acoustic tests require special spaces. They should be conducted in a place where the sounds can be isolated and any unwanted noises are kept outside. This is where acoustic test booths come in.
https://www.enoisecontrol.com/acoustic-test-booth-chamber/
Specifically, a hemi-anechoic chamber has some sound-reflective inside surfaces, typically the floor, in order to simulate operating conditions. | {
"domain": "engineering.stackexchange",
"id": 3146,
"tags": "terminology"
} |
Potential energy of phonons as harmonic oscillators | Question: I'm trying to derive the phonon hamiltonian which is anologous to a collection
of independent harmonic oscillators. I've already have kinetic energy but I'm kinda stuck on the potential energy. The initial expression I have is $$V=\sum_{\boldsymbol{R},\boldsymbol{R'}}\sum_{a,b}\sum_{\mu,\nu}D_{a\mu,b\nu}(\boldsymbol{R},\boldsymbol{R'})u_{a\mu}(\boldsymbol{R})u_{b\nu}(\boldsymbol{R'})$$
where $$D_{a\mu,b\nu}(\boldsymbol{R},\boldsymbol{R'})$$ is dynamical matrix and $$u_{a\mu}(\boldsymbol{R})=\sum\limits_{\boldsymbol{q}\lambda}\frac{1}{\sqrt N}\varepsilon_{a\mu}^\lambda(\boldsymbol{q})e^{i\boldsymbol{q}\cdot(\boldsymbol{R}+\boldsymbol{r}_a)}Q_\lambda(\boldsymbol{q})$$ is the displacement.
I think I should use the following identities (possibly in this order ?) to get to the final expression
$$\omega^2\varepsilon_{a\mu}^\lambda(\boldsymbol{q})=\sum_{b,\nu}D_{a\mu,b\nu}(\boldsymbol{q})\,\varepsilon_{b\nu}^\lambda(\boldsymbol{q})$$
$$\sum_\boldsymbol{R}e^{i\boldsymbol{q}\cdot\boldsymbol{R}}=N\delta_{\boldsymbol{q},0}$$
$$\varepsilon^{\lambda}_{a\mu}(\boldsymbol{q})=\left[\varepsilon^{\lambda}_{a\mu}(-\boldsymbol{q})\right]^*$$
$$\sum_{a,\mu}\left[\varepsilon^{\lambda}_{a\mu}(\boldsymbol{q})\right]^*\varepsilon^{\lambda'}_{a\mu}(\boldsymbol{q})=\delta_{\lambda\lambda'}$$
$$Q_{\lambda}(\boldsymbol{q})=\left[Q_{\lambda}(\boldsymbol{-q})\right]^*$$
The result should be $$V=\frac{1}{2}\sum_{\boldsymbol{q},\lambda}\omega^2(\boldsymbol{q})\,|Q_\lambda(\boldsymbol{q})|^2$$
I'm just not getting anywhere with those sums. I'm not able to arrive at anything meaningful. Can anyone help me with this derivation?
Answer: Let $R,a,\mu$ denote, respectively, the coordinates of the $R$th unit cell, the location of the $a$th basis atom, and the direction along the $\mu$th primitive vector. The potential energy within the harmonic approximation is
$
V=\frac12\sum_{Ra\mu}\sum_{R'b\nu}\Phi_{a\mu,b\nu}(R,R')\,u_{a\mu}(R)\,u_{b\nu}(R')
$
where $u_{a\mu}(R)$ are the thermal displacements away from equilibrium (with the time dependence suppressed) and $\Phi_{a\mu,b\nu}(R,R')$ are elements of the force constant matrix. In general, these are not the same as the elements of the dynamical matrix. The $u_{a\mu}(R)$ can be expanded as
$
u_{a\mu}(R)=\frac{1}{\sqrt{m_aN}}\sum_{q\lambda}\varepsilon_{a\mu}^{\lambda}(q)\,Q_\lambda(q)\,\exp(iq\cdot R)
$
where $q$ is the phonon wavevector, $m_a$ is the mass of atom $a$, $\lambda$ are branches of the dispersion relation, $\varepsilon_{a\mu}^{\lambda}(q)$ is a component of the eigenvector indexed by $q\lambda$, and $Q_\lambda(q)$ are the 'normal coordinates'. Note that including the term $\exp(iq\cdot a)$ in the expansion is tantamount to redefining the dynamical matrix.
Substituting the expansion into the potential gives
$
\begin{align}
V
&=\frac{1}{2N}\sum_{Ra\mu}\sum_{R'b\nu}\frac{\Phi_{a\mu,b\nu}(R,R')}{\sqrt{m_am_{b}}}\sum_{q\lambda}\varepsilon_{a\mu}^{\lambda}(q)\,Q_\lambda(q)\,e^{iq\cdot R}\sum_{q'\lambda'}\varepsilon_{b\nu}^{\lambda'}(q')\,Q_{\lambda'}(q')\,e^{iq'\cdot R'} \\
&=\frac{1}{2N}\sum_{Ra\mu}\sum_{R'b\nu}\frac{\Phi_{a\mu,b\nu}(0,R'-R)}{\sqrt{m_am_{b}}}\sum_{q\lambda}\varepsilon_{a\mu}^{\lambda}(q)\,Q_\lambda(q)\,e^{iq\cdot R}\sum_{q'\lambda'}\varepsilon_{b\nu}^{\lambda'}(q')\,Q_{\lambda'}(q')\,e^{iq'\cdot R'} \\
&=\frac{1}{2N}\sum_{Ra\mu}\sum_{R'b\nu}\frac{\Phi_{a\mu,b\nu}(0,R')}{\sqrt{m_am_{b}}}\sum_{q\lambda}\varepsilon_{a\mu}^{\lambda}(q)\,Q_\lambda(q)\,e^{iq\cdot R}\sum_{q'\lambda'}\varepsilon_{b\nu}^{\lambda'}(q')\,Q_{\lambda'}(q')\,e^{iq'\cdot (R'+R)} \\
&=\frac{1}{2N}\sum_{a\mu}\sum_{R'b\nu}\frac{\Phi_{a\mu,b\nu}(0,R')}{\sqrt{m_am_{b}}}\sum_{q\lambda}\varepsilon_{a\mu}^{\lambda}(q)\,Q_\lambda(q)\sum_{q'\lambda'}\varepsilon_{b\nu}^{\lambda'}(q')\,Q_{\lambda'}(q')\,e^{iq'\cdot R'}\underbrace{\left(\sum_R e^{i(q+q')\cdot R}\right)}_{N\delta_{q,-q'}} \\
&=\frac{1}{2}\sum_{a\mu}\sum_{R'b\nu}\frac{\Phi_{a\mu,b\nu}(0,R')}{\sqrt{m_am_{b}}}\sum_{q\lambda}\varepsilon_{a\mu}^{\lambda}(q)\,Q_\lambda(q)\sum_{\lambda'}\varepsilon_{b\nu}^{\lambda'}(-q)\,Q_{\lambda'}(-q)\,e^{-iq\cdot R'} \\
&=\frac{1}{2}\sum_{a\mu}\sum_{b\nu}\sum_{q\lambda}\varepsilon_{a\mu}^{\lambda}(q)\,Q_\lambda(q)\sum_{\lambda'}\varepsilon_{b\nu}^{\lambda'}(-q)\,Q_{\lambda'}(-q)\,\underbrace{\sum_{R'}\frac{\Phi_{a\mu,b\nu}(0,R')}{\sqrt{m_am_{b}}}e^{-iq\cdot R'}}_{D_{a\mu,b\nu}(-q)} \\
&=\frac{1}{2}\sum_{a\mu}\sum_{b\nu}\sum_{q\lambda}\varepsilon_{a\mu}^{\lambda}(-q)\,Q_\lambda(-q)\sum_{\lambda'}\varepsilon_{b\nu}^{\lambda'}(q)\,Q_{\lambda'}(q)\,D_{a\mu,b\nu}(q) \\
&=\frac{1}{2}\sum_{a\mu}\sum_{q\lambda}\varepsilon_{a\mu}^{\lambda}(-q)\,Q_\lambda(-q)\sum_{\lambda'}Q_{\lambda'}(q)\underbrace{\sum_{b\nu}D_{a\mu,b\nu}(q)\,\varepsilon_{b\nu}^{\lambda'}(q)}_{\omega^2(q\lambda')\,\varepsilon_{a\mu}^{\lambda'}(q)} \\
&=\frac{1}{2}\sum_{a\mu}\sum_{q\lambda}\varepsilon_{a\mu}^{\lambda}(q)^*\,Q_\lambda(q)^*\sum_{\lambda'}Q_{\lambda'}(q)\,\omega^2(q\lambda')\,\varepsilon_{a\mu}^{\lambda'}(q) \\
&=\frac{1}{2}\sum_{q\lambda\lambda'}\omega^2(q\lambda')\,Q_\lambda(q)^*\,Q_{\lambda'}(q)\underbrace{\sum_{a\mu}\varepsilon_{a\mu}^{\lambda}(q)^*\varepsilon_{a\mu}^{\lambda'}(q)}_{\delta_{\lambda\lambda'}} \\
&=\frac{1}{2}\sum_{q\lambda}\omega^2(q\lambda)\,|Q_\lambda(q)|^2 \\
\end{align}
$ | {
"domain": "physics.stackexchange",
"id": 96746,
"tags": "condensed-matter, solid-state-physics, potential-energy, harmonic-oscillator, phonons"
} |
What is the purpose of the equation | Question: In my chemistry class we solved the equation wavelength = h/mv for finding the mass of a baseball and were given all of the other variables. I know that this equation has to do with wavelength of objects that contain mass but since objects with mass such as a baseball don't move constantly up and down in a wave pattern as they move towards something, they just arc down then what is the purpose of this? My teacher said it had something to do with waves emitted by the baseball or something like that but then why would it need mass?
Answer: deBroglie wavelength of a particle: λ=h/mv
Yup - it seems to be for getting the wavelength of an electron or a proton.
https://www.thoughtco.com/definition-of-de-broglie-equation-604418
http://www.softschools.com/formulas/physics/de_broglie_wavelength_formula/150/
It's an entertaining equation to use with a baseball, though, even if it might not be a good application of the equation.
The baseball has a 'huge' mass and velocity. What sort of a wavelength will you get if you divide h by a 'huge' number?
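To make that concrete, here is a rough order-of-magnitude comparison (the baseball mass and both speeds are illustrative values of my choosing):

```python
# de Broglie wavelength: lambda = h / (m v)
h = 6.626e-34  # Planck constant, J*s

def de_broglie(m_kg, v_ms):
    return h / (m_kg * v_ms)

ball = de_broglie(0.145, 40.0)          # ~145 g baseball pitched at ~40 m/s
electron = de_broglie(9.11e-31, 1.0e6)  # electron at ~10^6 m/s

print(f"baseball: {ball:.2e} m")        # ~1e-34 m, far smaller than any measurable scale
print(f"electron: {electron:.2e} m")    # ~7e-10 m, comparable to atomic spacings
```

The baseball's wavelength is dozens of orders of magnitude below anything observable, which is why its wave nature never shows up, while the electron's wavelength is on the atomic scale, where wave effects matter.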
Light can be either a wave or a [very very tiny] particle.
https://en.wikipedia.org/wiki/Wave–particle_duality - this wikipedia article talks about how, according to quantum mechanics, every particle also has a wave nature.
I hope that helps! | {
"domain": "chemistry.stackexchange",
"id": 9948,
"tags": "quantum-chemistry"
} |
Why does job appear to be stuck in queue on IBMQ backend? | Question: I have submitted a batch of circuits to ibmq_vigo using the IBMQJobManager, and the batch is correctly split into multiple jobs (as viewed on the dashboard); however, the job at the front of the queue appears stuck there for multiple hours. The backend does not appear to be in reserve mode, and I did not have this issue when I successfully executed the same batch of circuits on ibmq_rochester.
Does anyone know what might be the issue?
Also this is my second unsuccessful attempt at executing on ibmq_vigo.
Answer: Sometimes that happens. The control electronics might have gotten a reboot while your jobs were in the queue, or something of that sort... You can try cancelling your jobs and resubmitting them to see if that fixes it. Note that if you are running jobs through Aqua, like performing QAOA or VQE, you can cancel the current jobs and a replacement job will be created automatically.
"domain": "quantumcomputing.stackexchange",
"id": 1947,
"tags": "programming, ibm-q-experience"
} |
Role of thermal fluctuations in restoring the symmetry in finite systems | Question: A symmetry is spontaneously broken in a system with infinite number of degrees of freedom (DOF), when the system finds itself in the ground state that breaks the symmetry of the Hamiltonian. For example, the $SO(3)$ symmetry of the Hamiltonian in Heisenberg model, is spontaneously broken in paramagnetic to ferromagnetic transition.
For systems with infinite number of degrees of freedom (DOF), thermal fluctuations cannot restore the symmetry of the ground state i.e., the ground state does not share the symmetry of the Hamiltonian. However, with finite DOF, symmetry can be restored via thermal fluctuations.
How does this happen? In other words, how and why is it that thermal fluctuations can restore the invariance of the ground state under the symmetry (of the Hamiltonian) when the system has finite DOF but fails to do so when the system has infinite DOF?
Answer: There seems to be some confusion in your question between thermal fluctuations and quantum fluctuations, so I will try to address both of them in my answer.
Spontaneous symmetry breaking occurs when the ground state of the Hamiltonian does not exhibit the full symmetries of the Hamiltonian. In other words, there are multiple degenerate configurations with the lowest energy. Whether the symmetry can be restored by fluctuations depends on the "size" of the manifold of minimum energy states.
The way the restoration occurs is that fluctuations can carry the system from one minimum-energy configuration to another. If the fluctuations are strong enough, and the "number" of minimum-energy states is small enough, the full symmetry of the system is restored. For example, in the classical system of a particle in a potential with two equally deep wells, if the temperature is high enough (bigger than the barrier between the wells, roughly), the symmetry is restored, because the particle can move freely back and forth between the wells, thanks to the thermal energy it possesses.
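That double-well picture can be sketched with a short Metropolis simulation (my own illustration; the quartic potential, temperatures, and step size are arbitrary choices): at temperatures well above the barrier the particle samples both wells symmetrically, while at low temperature it stays trapped in the well where it started.

```python
import math
import random

def mean_position(T, steps=200_000, x0=1.0, seed=1):
    """Metropolis sampling of the double well V(x) = (x^2 - 1)^2 (barrier height 1)."""
    V = lambda x: (x * x - 1.0) ** 2
    rng = random.Random(seed)
    x, total = x0, 0.0
    for _ in range(steps):
        xp = x + rng.uniform(-0.5, 0.5)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if V(xp) <= V(x) or rng.random() < math.exp(-(V(xp) - V(x)) / T):
            x = xp
        total += x
    return total / steps

hot = mean_position(T=5.0)    # k_B T >> barrier: hops freely, <x> ~ 0 (symmetry restored)
cold = mean_position(T=0.05)  # k_B T << barrier: trapped near x = +1 (symmetry broken)
print(hot, cold)
```

The low-temperature run illustrates effective symmetry breaking in a finite simulation: the symmetric average would be zero, but the crossing time grows exponentially with (barrier)/T, so the sampled average stays near one well.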
Quantum fluctuations can work the same way. For the same particle in a double well potential, the quantum ground state is actually equally distributed between the two wells. The quantum fluctuations, which lead to tunneling through the barrier, restore the symmetry. As the "size" of the manifold of ground states increases, it becomes harder to restore the symmetry. In a field theory system with one spatial dimension, the quantum fluctuations always restore the symmetry; the thermal fluctuations do so as well, provided $T>0$.
As the dimension of the field theory increases, it becomes harder to restore the symmetry. The restoration also depends on the nature of the states. The two-dimensional Ising model, with a discrete state space, has broken symmetry below a threshold temperature. However, a system with a continuous state space, such as a Heisenberg model, does not exhibit symmetry breaking in two dimensions.
The key difference between quantum and thermal fluctuations is that at high enough temperature, thermal fluctuations can always restore a symmetry. If the thermal energy is vast compared with the energy barriers, the system is free to move between the ground states, and the dynamics are symmetric. Contrary to what you suggest in the question, the rotation symmetry of a ferromagnet will be restored at high temperature. | {
"domain": "physics.stackexchange",
"id": 36349,
"tags": "quantum-mechanics, quantum-field-theory, statistical-mechanics, symmetry-breaking, thermal-field-theory"
} |
librealsense not building | Question:
$ cmake .. -DBUILD_EXAMPLES:BOOL=true
-- Building in a ROS environment
-- Using CATKIN_DEVEL_PREFIX: /home/jackson/github/librealsense/build/devel
-- Using CMAKE_PREFIX_PATH: /home/jackson/catkin_ws/devel;/opt/ros/indigo
-- This workspace overlays: /home/jackson/catkin_ws/devel;/opt/ros/indigo
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/jackson/github/librealsense/build/test_results
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.6.18
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
GLFW_INCLUDE_DIR
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
used as include directory in directory /home/jackson/github/librealsense/examples
GLFW_LIBRARIES
linked by target "c-tutorial-1-depth" in directory /home/jackson/github/librealsense/examples
linked by target "c-tutorial-2-streams" in directory /home/jackson/github/librealsense/examples
linked by target "c-tutorial-3-pointcloud" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-alignimages" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-callback" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-callback-2" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-capture" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-config-ui" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-enumerate-devices" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-headless" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-motion-module" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-multicam" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-pointcloud" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-restart" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-stride" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-tutorial-1-depth" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-tutorial-2-streams" in directory /home/jackson/github/librealsense/examples
linked by target "cpp-tutorial-3-pointcloud" in directory /home/jackson/github/librealsense/examples
-- Configuring incomplete, errors occurred!
See also "/home/jackson/github/librealsense/build/CMakeFiles/CMakeOutput.log".
See also "/home/jackson/github/librealsense/build/CMakeFiles/CMakeError.log".
Originally posted by jacksonkr_ on ROS Answers with karma: 396 on 2016-10-25
Post score: 1
Answer:
This happens when you don't have the GLFW libraries installed. Follow the librealsense installation instructions and run:
scripts/install_glfw3.sh
Originally posted by jacksonkr_ with karma: 396 on 2016-10-25
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 26050,
"tags": "ros"
} |
How do we handle waves in a system with a high degree of inhomogeneity? | Question: When waves move into another medium, such as from air to water, we can calculate how much they transmit, reflect, and diffract as we know the angle of incidence they make with the interface, and we know the refractive index of the materials. And this is the theory behind basic microscopy.
But what happens when we don't have a clear, well-defined interface, such as waves entering water from air, and instead there are innumerable extremely small interfaces? For example, when acoustic waves are travelling through the human body? Maybe there are some interfaces that are pretty straightforward, such as a section of bone, but most of the time the body will be extremely inhomogeneous, with a spatially dependent index of refraction $n(x)$.
How can we handle this situation mathematically/physically when it is not possible to do simple calculations anymore like we did for the air-water case?
Answer: It depends on what is your objective. Sometimes you are interested in a net effect, while in others you may be curious about a specific region.
As an example, let's say you're studying how a certain type of electromagnetic radiation interacts with the human body. There are several possible questions regarding this subject:
Will the radiation pass through the body?
How does that specific kind of radiation interact with a certain type of complicated tissue or organ?
Will it give you cancer?
and so on. I can give you some ideas about how to model some of those problems:
To see if the radiation will be able to exit your body, you might want to model it as a thin wall of the most impenetrable tissue in the body with respect to the kind of radiation you're using (in general those are bones). The thickness of the wall can be used to approximate experimental data.
If you're worried about the effects in a specific organ, you might want to model that organ first, which means obtaining its approximated geometrical structure and using interpolation techniques. You can then use numerical methods to solve some approximation of Maxwell's equations inside a cavity to give you some results about bouncing, or maybe you have a greater interest in modelling the wall of that organ as a uniform tissue with a mean density. Perhaps your interest lies exactly in the fact that your problem is inhomogeneous, and in that case you can take an infinitesimal section of inhomogeneous tissue, do some Monte Carlo or any other method to solve some numerical equations, and then numerically integrate.
In this case you're probably going to model things at the chemical limit, which means a sort of semiclassical limit where sometimes you need Quantum Mechanics and sometimes Classical Mechanics is enough.
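For the one-dimensional version of the many-small-interfaces problem, a standard numerical tool is the characteristic (transfer) matrix method; the sketch below (normal incidence, lossless layers, made-up refractive indices and thicknesses) propagates a wave through a disordered stack of 200 thin layers and checks that reflected plus transmitted power still sums to one.

```python
import numpy as np

def stack_RT(n_layers, d_layers, wavelength, n_in=1.0, n_out=1.5):
    """Characteristic-matrix method: normal incidence, lossless layers."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / wavelength  # phase thickness of this layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    (m11, m12), (m21, m22) = M
    denom = n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22
    r = (n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22) / denom
    t = 2.0 * n_in / denom
    return abs(r) ** 2, (n_out / n_in) * abs(t) ** 2

# A disordered stack of 200 thin layers with random indices (made-up values,
# thicknesses in units of the wavelength) - a crude stand-in for tissue.
rng = np.random.default_rng(0)
n_layers = rng.uniform(1.3, 1.6, size=200)
d_layers = rng.uniform(0.05, 0.15, size=200)
R, T = stack_RT(n_layers, d_layers, wavelength=1.0)

print(np.isclose(R + T, 1.0))   # True: lossless stack conserves energy
```

Each layer contributes one 2×2 matrix, so adding more interfaces costs only one matrix multiplication each; absorption or a continuous $n(x)$ can be handled by complex indices or by slicing the profile into many thin layers.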
I don't even know if the things I told you make complete sense or if they answer your question, but I said them just to show you that modelling nature requires simplification and, almost always, numerical techniques. The satisfying part is that for a lot of experiments modelling the human body as a ball of water gives you extremely good results. | {
"domain": "physics.stackexchange",
"id": 33717,
"tags": "optics, waves, acoustics, diffraction"
} |
Printing the days of the week for every day this year | Question: @chux pointed out that my attempted solution contained several problems. Since I screwed that up so badly, I figure that I should put my revised solution up for review.
#include <stdio.h>
#include <time.h>
/**
* The current year (number of years since 1900)
*/
static int this_year(void) {
time_t now = time(NULL);
struct tm thetime;
localtime_r(&now, &thetime);
return thetime.tm_year;
}
static struct tm jan1(int yy) {
struct tm t = { 0 };
t.tm_year = yy;
t.tm_mday = 1;
t.tm_isdst = -1;
return t;
}
int main(void) {
int yy = this_year();
for (struct tm t = jan1(yy); mktime(&t), t.tm_year == yy; t.tm_mday++) {
char buf[80];
if (!strftime(buf, sizeof buf, "%m/%d/%Y is a %A", &t)) {
return 1; // Unexpected failure
}
puts(buf);
}
}
Concerns include standards compliance, portability, and correctness in all possible time zones. If possible, a more succinct way to obtain the current year and January 1 of the current year would be appreciated.
Answer: Functionality
Code needs to set t.tm_isdst = -1; before calling mktime(&t) each day to avoid advancing only 23 hours across a daylight-saving transition.
setenv("TZ", "America/Sao_Paulo", 1);
tzset();
...
for (struct tm t = jan1(yy); mktime(&t), t.tm_year == yy; t.tm_mday++) {
char buf[80];
if (!strftime(buf, sizeof buf, "%m/%d/%Y%z is a %A", &t)) {
return 1; // Unexpected failure
}
puts(buf);
}
02/20/2015-0200 is a Friday
02/21/2015-0200 is a Saturday
02/21/2015-0300 is a Saturday
02/22/2015-0300 is a Sunday
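A minimal sketch of the corrected loop (a hypothetical helper, not code from the post): resetting tm_isdst to -1 before every mktime() call makes the duplicated 23-hour day above disappear.

```c
#include <time.h>

// Count the calendar days mktime() walks through in a given year,
// resetting tm_isdst to -1 before each call so mktime() re-derives
// DST itself. Without the reset, a stale tm_isdst can shorten one
// day to 23 hours at a spring-forward transition.
int count_days(int year) {
    struct tm t = { 0 };
    t.tm_year = year - 1900;
    t.tm_mday = 1;
    int days = 0;
    for (t.tm_isdst = -1; mktime(&t), t.tm_year == year - 1900; t.tm_mday++) {
        days++;
        t.tm_isdst = -1;   // the fix: reset before the next mktime() call
    }
    return days;
}
```

With the reset in place, a common year counts 365 distinct days and a leap year 366, regardless of time zone.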
Data abstraction
Even though struct tm references the year 1900, a function called this_year() should either use a more neutral epoch or be renamed. Since this is a static function used only locally, there is no strong need for this.
int this_year(void) // Return 2015 for this year
int this_year_since_1900(void) // Return 115 for this year
Generality
If this were not a static function but one intended for general use, jan1() should return a struct tm with all its fields in a consistent state.
static struct tm jan1(int yy) {
struct tm t = { 0 };
t.tm_year = yy;
t.tm_mday = 1;
t.tm_isdst = -1;
mktime(&t); // Added to update tm_isdst, tm_yday, tm_wday and other fields.
return t;
} | {
"domain": "codereview.stackexchange",
"id": 15664,
"tags": "c, datetime, portability"
} |
Infinitesimal Poincare transformations, Taylor expansion | Question: Let $(\Lambda,a)\in\text{ ISO}_o(3,1)$ be a finite (proper) Poincare transformation and let $U(\Lambda,a)$ be the corresponding unitary operator implementing this transformation on the Hilbert space of (free) one particle states.
Consider now an infinitesimal Poincare transformation, $(\Lambda,a)=(1+\omega,\varepsilon)$ where 1 is the identity operator and $\varepsilon^a$, $\omega_{ab}=-\omega_{ba}$ are small parameters.
In most textbooks I have found, the authors simply claim that the corresponding unitary operator for such an infinitesimal transformation is (see Weinberg V1 pg 59 or Srednicki pg 17 for example)
$$U(1+\omega,\varepsilon)=1+\frac{1}{2}i\omega_{ab}\mathcal{J}^{ab}-i\varepsilon_a\mathcal{P}^a+\cdots$$
where $\mathcal{J}^{ab}$ and $\mathcal{P}^a$ are the generators of Lorentz transformations and translations respectively. I assume that this is some sort of taylor expansion in the infinitesimal parameters $\omega$ and $\varepsilon$, where we only keep linear terms in the infinitesimal parameters.
There are a couple of facets of this expansion which I am unsure about.
Where does the factor of $\frac{1}{2}$ and $i$ come from?
It is my impression that the signs in front of the second and third terms are based on your metric convention. If this is true, why is it so?
Regarding (1), I get the feeling that we have some freedom of choice. We choose the $1/2$ because it makes the exponential expression for a finite transformation look nicer. The factor of $i$ ensures that the generators are Hermitian. If this is true, $\textit{why}$ do we have this freedom of choice?
Answer: For (1), the idea is to expand in all independent quantities, which are $\omega_{01}, \omega_{02},\omega_{03}, \omega_{12},\omega_{13}, \omega_{23}$, and $\epsilon_{0}$, $\epsilon_{1}$,$\epsilon_{2}$,$\epsilon_{3}$. These are just numbers however, and hence need matrix "coefficients". So when you do the expansion you would get something like this
$$
U=1 + \omega_{01}\tilde{J}^{01} + \omega_{02}\tilde{J}^{02}+\omega_{03}\tilde{J}^{03}+\omega_{12}\tilde{J}^{12}+\omega_{13}\tilde{J}^{13}+\omega_{23}\tilde{J}^{23} + \epsilon_{0}\tilde{P}^0 +\epsilon_{1}\tilde{P}^1+\epsilon_{2}\tilde{P}^2+\epsilon_{3}\tilde{P}^3+\mathcal{O}(\omega^2,\epsilon^2)$$
Now $U$ is supposed to be unitary, $U^\dagger=U^{-1}$. Note that we have $U^{-1}(1+\omega,\epsilon)=U(1-\omega,-\epsilon)$. When you expand both sides, this means you must have,
$$ \tilde{J}^{12\dagger}=-\tilde{J}^{12}$$ and the same for all the other generators, which means that they are anti-hermitian. It is then a common convention in physics to pull out a factor of $i$, to make them Hermitian, i.e. you would define generators $J^{12}$ such that
$$\tilde{J}^{12}=iJ^{12}.$$ Then
$$\tilde{J}^{12\dagger}=i^\ast J^{12\dagger}=-iJ^{12\dagger}$$
but we want $\tilde{J}^{12\dagger}=-\tilde{J}^{12}=-i J^{12}$
and hence the new generators $J$ must be hermitian, i.e.
$$J^{12\dagger}=J^{12}$$
Hermitian operators in quantum mechanics are associated with observables, so this is just a convenient convention to make them manifest. This answers your second question.
As for the first part, note that what you wrote is really what I wrote.
$$\omega_{\mu\nu}J^{\mu\nu}= \omega_{00}J^{00}+\omega_{01}J^{01}+\omega_{01}J^{10}+\dots$$
The terms with $\omega_{\mu\mu}$ (no sum on $\mu$) vanish by the antisymmetry of $\omega$, and similarly by the antisymmetry, you get $\omega_{10}J^{10}=(-1)^2\omega_{01}J^{01}=\omega_{01}J^{01}$ and hence you actually get $2\omega_{01}J^{01}$, instead of $\omega_{01}J^{01}$ as I had in the expansion at the top. The factor of $1/2$ in your expansion is there to compensate for this.
The minus sign before the momentum generators is basically just convention. I vaguely recall a reason why it's convenient, so I'll add it here when I get more time later.
Update: I didn't state clearly but because the $\omega$ are antisymmetric it follows that the $J$ must be too because
$$\omega_{\mu\nu}J^{\mu\nu}=\omega_{\nu\mu}J^{\nu\mu}=-\omega_{\mu\nu}J^{\nu\mu}$$ where for the first equality I rename $\mu\rightarrow \nu$ and vice-versa and then in the second equality use the antisymmetry of $\omega$. Equating the first and last expression then tells you that $J^{\mu\nu}=-J^{\nu\mu}$. | {
"domain": "physics.stackexchange",
"id": 41652,
"tags": "special-relativity, group-theory, representation-theory, lie-algebra"
} |
Draw a centered circle | Question: I've written this program that renders a circle in the middle of the screen. It's been written in a very elementary way.
Any suggestions on principles or techniques? Or any more optimal or efficient ways of drawing circles in C?
I'm using SDL2 (Simple DirectMedia Layer) and GCC to compile.
#include "SDL.h"
#include <math.h>
#include <stdio.h>
#define PI 3.14159
#define SCR_WDT 1280
#define SCR_HGT 960
const int SCR_CEN_X = SCR_WDT / 2;
const int SCR_CEN_Y = SCR_HGT / 2;
struct Circle
{
int radius;
int h; // The X axis of the center of the circle.
int k; // The Y axis of the center of the circle.
int new_x;
int new_y;
int old_x;
int old_y;
float step;
};
int main ( int argc, char *argv[] )
{
SDL_Init ( SDL_INIT_VIDEO );
SDL_Window *window = SDL_CreateWindow ( "Drawing a Circle", SDL_WINDOWPOS_UNDEFINED,
SDL_WINDOWPOS_UNDEFINED, SCR_WDT, SCR_HGT, 0 );
SDL_Renderer *renderer = SDL_CreateRenderer ( window, -1, SDL_RENDERER_SOFTWARE );
Circle circle;
circle.radius = 200;
circle.h = SCR_CEN_X;
circle.k = SCR_CEN_Y;
circle.new_x = 0;
circle.new_y = 0;
circle.old_x = 0;
circle.old_y = 0;
circle.step = (PI * 2) / 50;
bool is_running = true;
while ( is_running )
{
SDL_Event event;
if ( SDL_PollEvent( &event ))
{
if ( event.type == SDL_QUIT )
{
is_running = false;
}
}
SDL_RenderClear ( renderer );
SDL_SetRenderDrawColor ( renderer, 255, 255, 255, 255 );
for ( float theta = 0; theta < (PI * 2); theta += circle.step )
{
circle.new_x = circle.h + (circle.radius * cos ( theta ));
circle.new_y = circle.k - (circle.radius * sin ( theta ));
SDL_RenderDrawLine ( renderer, circle.old_x, circle.old_y,
circle.new_x, circle.new_y );
circle.old_x = circle.new_x;
circle.old_y = circle.new_y;
}
SDL_SetRenderDrawColor ( renderer, 0, 0, 0, 255 );
SDL_RenderPresent ( renderer );
}
SDL_Quit ();
return 0;
}
Answer: Doesn't SDL have a function for drawing a circle, or are you just doing that as practice?
#define PI 3.14159
math.h should have M_PI.
struct Circle {
int radius;
int h; // The X axis of the center of the circle.
int k; // The Y axis of the center of the circle.
Just call them x and y, or center_x and center_y.
int new_x; int new_y; int old_x; int old_y; float step;
These should be in the function that draws the circle, as they're part of the logic of drawing, not the circle as an object.
int main (int argc, char *argv[])
Circle circle;
circle.radius = 200;
circle.h = SCR_CEN_X;
Make a function to initialize the Circle struct, esp. as you have fields that are initialized to as uninteresting a value as zero.
if ( event.type == SDL_QUIT ) {
is_running = false;
I would break out of the loop here, since you're going to exit soon anyway.
for ( float theta = 0; theta < (PI * 2); theta += circle.step ) {
circle.new_x = circle.h + (circle.radius * cos ( theta ));
circle.new_y = circle.k - (circle.radius * sin ( theta ));
SDL_RenderDrawLine ( renderer, circle.old_x, circle.old_y,
circle.new_x, circle.new_y );
Put the circle drawing in a function, and make new_x, new_y etc. locals to it. As for step, you might want to scale it based on the circle's radius. Or make it a parameter to the function.
I think you have a sort of an off-by-one error here, as (1) on the first iteration of the loop, you draw starting from (old_x, old_y), but old_x and old_y aren't set yet. And (2) when the loop ends, you probably want to draw a line from the last point (at angle 49/50*2*pi) to the very first point (at angle 0) to complete the circle.
Also, decide if you're writing C or C++. You can't define a struct Circle as just Circle foo in C, and in C++, you should make the class have a proper constructor. | {
"domain": "codereview.stackexchange",
"id": 22575,
"tags": "c, graphics, sdl"
} |
Is there such a thing as a "minimal soap" molecule? | Question: Wikipedia's Soap gives sodium stearate as an example of soap, and apparently I've been eating it:
Sodium stearate is the sodium salt of stearic acid. This white solid is the most common soap. It is found in many types of solid deodorants, rubbers, latex paints, and inks. It is also a component of some food additives and food flavorings.
What would be the smallest or simplest molecule that we could reasonably call a soap? Perhaps a functional definition would be that it could perform some of the functions of soap in the same way that soap does.
In Why does bleach feel slippery? and its follow-up Is it known for sure that bases feel slippery because of the production of soap/surfactant? the saponification of other existing molecules is discussed, and I'm not looking for that here.
Answer: It boils down to the definition of soap. Wikipedia defines a soap as the salt of a fatty acid.
IUPAC claims the smallest fatty acid can be considered to have 4 carbons.
Therefore the simplest soap molecule would be a (generally sodium) salt of butyric (butanoic) acid, i.e. sodium butyrate.
Now apart from the chemical definition, a soap must adhere to its function in order to be defined as such. Therefore, before calling something a soap, we would have to know if the substance does in fact act as a surfactant at an oil-water interface (reducing the interfacial tension) - but that will depend on the nature of the oil in question and the purity of the water. Only for a well-defined system can we then conclude that such a molecular salt is acting as a soap in it.
"domain": "chemistry.stackexchange",
"id": 10960,
"tags": "everyday-chemistry, surfactants"
} |
If the universe is infinite, would QM allow the existence of "weird" zones? | Question: If we suppose that the universe is spatially infinite and extends more or less homogenously in all directions without end (having similar galaxies, stars etc.), then we can assume that there is a more than zero probability for a macroscopic quantum tunneling event to occur somewhere (let's say a pebble tunneling through a barrier/rock wall or something similar). But if the universe is infinite then there could be infinite such tunneling events going on at any given time in different locations. In such a case if we consider the set of all those events where such tunneling occurred, we can ask further whether if another tunneling event occurred just after the first one. Surely the probability for that (two macroscopic tunneling events occurring sequentially or side by side) would be even smaller but that still would be more than zero (think of the pebble tunneling back to the original side just after tunneling to the other side). Hence such set of two nearby or sequential tunneling events would also be occurring somewhere in this infinite universe.
Taking this logic further and increasing the number of simultaneous or sequential nearby tunneling events, we can think of an entire "zone" of the universe where this tunneling occurs all the time macroscopically (just because it can, due to still having a minuscule but nonzero probability). Does this seem to imply that there would be zones in an infinite universe where the normal natural laws appear to be violated, just from the implications of quantum mechanics (which is itself a set of normal natural laws)?
What would observers in such "weird" zones conclude about the nature of the universe and its laws, and would they be able to develop similar theories of QM (an experimenter inside this zone might conclude that macroscopic tunneling is actually the 'norm' in nature and might derive a very different set of laws from that)? What about the possibility that we ourselves might be living in such a weird zone, where laws which we consider to be the norm from our perspective are actually quite rare and exceptional phenomena when viewed from the perspective of the universe as a whole?
Does this scenario seem plausible and is the existence of such zones very likely in the case of an infinite universe governed by QM?
Answer: Statistics can be tricky. This is especially so when you involve infinities. So the first thing we will need to do is break down what it means for an observer to develop a scientific theory in their "weird" area.
Probably the best tool in your pocket is Bayesian Inference. You are probably more familiar with frequentist inference, which is typically what we are taught in school. Both Bayesian and frequentist approaches to statistics use the same fundamental math of probability; they interpret the results differently. Frequentists infer the frequency of events occurring. Bayesian inference can be thought of more as dealing with the "belief" that something is true. Consider a 6 sided die. You roll it a few times, and observe that it comes up {6, 6, 6, 6, 6, 6, 6, 6}. Do you believe this is a fair die? That is Bayesian thinking.
In Bayesian inference, we talk about priors. These are your pre-existing beliefs. Then, with every observation, we update them using Bayes' rule to get posterior beliefs. You may initially think the die is fair, but after observing a few 6's, you would start questioning that belief. Bayes' rule can be used to quantify a rational level of disbelief based on the observations.
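As a concrete sketch of that die example (the "loaded die shows 6 half the time" model is an assumption made up purely for illustration), Bayes' rule can be iterated once per observed six:

```c
// Posterior belief that a die is loaded, updated with Bayes' rule once
// per observed six. Assumed (hypothetical) model: a loaded die shows 6
// with probability 1/2, a fair die with probability 1/6.
double posterior_loaded(double prior, int sixes_observed) {
    double p = prior;
    for (int i = 0; i < sixes_observed; i++) {
        double evidence = 0.5 * p + (1.0 / 6.0) * (1.0 - p);
        p = 0.5 * p / evidence;   // P(loaded | six) by Bayes' rule
    }
    return p;
}
```

Starting from even odds, eight sixes in a row multiply the odds by $3^8 = 6561$, pushing the posterior above 0.999 — a quantified, rational level of disbelief in the fair-die hypothesis.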
This is convenient because it takes a lot of the what-ifs out of your question. Could this happen? Absolutely, without a shred of doubt. Why am I so certain? Because it happens all the time. Ignoring the QM aspects, we are constantly coming up with beliefs about the world around us which are disproved eventually. It's part of life. Go take a journey through quantitative finance and the stock market, and you can see countless examples of theories which held true for a rather long time before being finally disproved.
This leaves only QM, which is now a relatively minor player in the game. It becomes relatively easy to explore the probability that some particular set of extraordinary events can lead some from some reasonable priors to some faulty view of the universe. This is true for any system which invokes statistics. QM, weather, stock market. Theories on all of these things point to the potential for unusual events.
However, it is hard to work with such things. While such a region as you describe could occur, what do you do with that information? Would you be able to develop better rational opinions?
There's one mainstream example of this thinking: quantum suicide. This is a thought experiment akin to Schrödinger's cat, except instead of putting an innocent cat in the box, you put yourself in there. You have a 50% chance of surviving the encounter. But if you use the Many Worlds interpretation of QM, one of you survives. You can repeat this many times, each time generating more worlds where you die, but at least one where you survive. This is sometimes used as a test case for philosophies built around the Many Worlds Interpretation.
But what can you do with this information? One quasi-scientific opinion is that, on the timeline where you survive, you are "exceptionally lucky," and that counts for something. But it turns out to be very tricky to make this logic work out. | {
"domain": "physics.stackexchange",
"id": 83354,
"tags": "quantum-mechanics, cosmology, spacetime, universe, quantum-tunneling"
} |
What is difference between intramolecular redox and disproportionation redox | Question: I can not understand difference between the intramolecular redox reaction and the Disproportionation redox reaction. Both involve the same molecule as substrate, but what is difference?
Answer: Disproportionation is a redox reaction where one element works both ways, with part of it being oxidized and another part reduced. It does not have to be intramolecular in the strict sense, since the oxidizing and reducing agents do not necessarily come from the same molecule (though they come from similar molecules, of course). Think of the well-known reaction:
$$\ce{3KClO -> KClO3 + 2KCl}$$
It is certainly a disproportionation, but one may hardly call it intramolecular; there is not much going on within one molecule (or rather within one ion, for all of these are ionic compounds). Chlorine oxidizes chlorine; now, one ion contains one atom of chlorine, so to oxidize anything it needs to find another ion in the first place.
Intramolecular redox reaction occurs when we have the oxidizing and reducing agents within one molecule. They may or may not be of the same element. In the former case it is going to be a disproportionation, but not in the latter. Think of TNT:
$$\ce{2 C6H2(NO2)3CH3 -> 3 N2 + 5 H2O + 7 CO + 7 C}$$
See what's going on? Basically, N oxidizes C. They are indeed from the same molecule, so this may be intramolecular, but they are not the same element, hence this is not a disproportionation. | {
"domain": "chemistry.stackexchange",
"id": 17163,
"tags": "redox, molecules, terminology"
} |
Is torque energy? Does the Poynting vector have anything to do with torque? | Question: My physics teacher told us that Torque is nothing but energy. I was very skeptical about this as torque is a vector qty. and energy is scalar, and over that torque is literally a force which makes bodies rotate how would it be energy?
Answer: Torque is best defined as the work that can be done per unit angle of rotation (as in Joules/radian) by a force acting in a manner that tends to cause a rotation. (This immediately gives the formula: Work = Torque x Angle). This also helps one remember that in doing work, you want the component of force in the direction of motion, and there is a distance involved (along an arc which is proportional to the radius). As with most rotational quantities, the vector representing torque is defined as being along the axis of rotation (in the same direction as the associated angular displacement vector).
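A quick worked example of this definition (numbers chosen purely for illustration): a constant torque of $10\ \mathrm{N\,m}$ turning a shaft through one full revolution does

$$W = \tau\,\theta = \left(10\ \mathrm{J/rad}\right)\left(2\pi\ \mathrm{rad}\right) \approx 62.8\ \mathrm{J}$$

of work, which is why reading the newton-metre of torque as a joule per radian is a handy mnemonic.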
"domain": "physics.stackexchange",
"id": 69635,
"tags": "classical-mechanics, energy, torque"
} |
Additional prefix and suffix required when windowing an OFDM symbol? | Question: To reduce out of band energy, it is common to apply a Raised Cosine filter to the front and back of an OFDM symbol. It is also common to overlap the windowed portions of adjacent symbols to eliminate amplitude dips.
For an OFDM symbol consisting of a Cyclic Prefix followed by NFFT points from an IFFT, is it common to insert an additional prefix and suffix for the windowed portion? If that isn't done then the CP and the IFFT data are shaped by the window, distorting the constellation. A pretty good description of what I am talking about is here:
http://zone.ni.com/reference/en-XX/help/373725A-01/wlangen/windowing/
Answer: Yes, it is necessary to add an additional pre/suffix to the OFDM symbol corresponding to the window length when windowing is applied. As the length of the already existing cyclic prefix is usually chosen as the maximum delay spread of the channel, no further inter-symbol interference (ISI) may be introduced, or orthogonality is lost. The windowing method described in the link you referenced introduces additional ISI by overlapping consecutive symbols.
Furthermore, weighting some samples of the OFDM time-domain signal with different factors is a distortion that also distorts the constellation in frequency domain, as you say.
Note that windowing is not filtering. Windowing is the multiplication of the OFDM signal $x_k$ with some window function $w_k$. In contrast, a filter has a memory. | {
"domain": "dsp.stackexchange",
"id": 1932,
"tags": "ofdm, window, cosine"
} |
Limited memory priority queue | Question: I'm trying to implement a highly efficient limited memory priority queue. The interface is the same as a std::priority_queue.
Can anyone suggest any performance improvements on my attempt?
limited_queue.h
#include <array>
#include <algorithm>
template<typename T, std::size_t N>
class LimitedPriorityQueue
{
public:
LimitedPriorityQueue() {}
void push(T item)
{
if (next != items.end()) {
*next = item;
++next;
std::push_heap(items.begin(), next);
} else {
std::sort_heap(items.begin(), items.end());
if (items.front() < item) {
items[0] = item;
}
std::make_heap(items.begin(), items.end());
}
}
T top()
{
return items.front();
}
void pop()
{
std::pop_heap(items.begin(), next);
--next;
}
bool empty()
{
return size() == 0;
}
std::size_t size()
{
return next - items.begin();
}
private:
std::array<T, N> items;
T* next = items.begin();
};
main.cpp
#include "limited_queue.h"
#include <iostream>
int main()
{
LimitedPriorityQueue<int, 10> queue;
for (int i = 20; i >= 0; --i) {
queue.push(i);
}
while (!queue.empty()) {
std::cout << queue.top() << std::endl;
queue.pop();
}
}
Also, I wasn't quite telling the truth when I said the interface is the same as a std::priority_queue; I'm missing an emplace. I'm not sure if this is possible using an underlying std::array?
Answer: Overall, it looks great. The code is clear and easy to follow, you leveraged the standard library well, and performance should be decent. There are, however, a few things that could be improved.
It might be worth templating the comparison. All that you would have to change is passing an extra argument to make_heap, pop_heap and push_heap. There's really not much of a reason not to template the comparator unless I'm missing something.
top() should return a const reference. No reason to make the user copy it unless they want.
You've allowed for move semantics by taking a value rather than const reference in push, but you haven't taken advantage of it the whole way through. Your assignments should be move assignments since you know you're done with the values. That lets you avoid a potentially expensive copy assignment.
Speaking of movement.... You are correct that you can't meaningfully implement an emplace with an std::array. Since the elements are default constructed, there's no blank slot to construct in place into. The best you could do would be a move assignment into an already constructed element, which is the same performance as using push with move semantics.
What you could do, however, is use std::aligned_storage and emplace into that. It's basically a tool for getting uninitialized memory that's ripe for having something constructed into it. From there, you can implement emplace just like you would in a non-stack-allocated situation.
Unfortunately your other logic would get a bit more complicated, but not much. Only construction and destruction (i.e. push and pop) would change, and they would only need placement new and manual-destruction.
There's a pretty good example of what I'm talking about on cppreference's std::aligned_storage page, though it unfortunately does not have an emplace example.
One last, highly subjective thing: I would probably name it BoundedPriorityQueue instead of LimitedPriorityQueue. Limited sounds like limited functionality to me, whereas bounded sounds like having a maximum size (like bounded buffer).
Also, it might be worth throwing in a bit of documentation. It's pretty obvious from the name and the template params, but someone could wonder what happens when it's full (exception thrown, or chucking out the lowest element).
"domain": "codereview.stackexchange",
"id": 10371,
"tags": "c++, optimization"
} |
Effect of ambient fluid on a pinhole camera's intrinsic parameters | Question: Suppose I am performing calibration of a pinhole camera under the Brown–Conrady distortion model and find the focal length $(f_x, f_y)$, principal point $(c_x, c_y)$, radial distortion coefficients $K_i$, and tangential distortion coefficients $P_i$.
I repeat the calibration process in air and in fresh water, in identical camera housings.
Would any of the intrinsic camera parameters change between the two calibrations? For example, would the focal length or radial distortion be affected by the refractive index of the water?
Answer: In "Perspective and non-perspective camera models in underwater imaging — overview and error analysis" (2011; DOI), Anne Sedlazeck and Reinhard Koch explain that,
When using the perspective model on underwater images captured through a glass port, a calibration based on above-water images is invalid underwater. Furthermore, the perspective model itself is invalid for underwater images due to the non-single view point.
That said, the pinhole model can approximate the effects of refraction, as discussed in "Camera Calibration Techniques for Accurate Measurement Underwater" (2019; PDF, DOI; CC BY 4.0) by Mark Shortis:
2.3.2 Absorption of Refractive Effects
In the underwater environment the effects of refraction must be corrected or modelled to obtain an accurate calibration. The entire light path, including the camera lens, housing port and water medium, must be considered. By far the most common approach is to correct the refraction effects using absorption by the physical camera calibration parameters.
Assuming that the camera optical axis is approximately perpendicular to a plane or dome camera port, the primary effect of refraction through the air-port and port-water interfaces will be radially symmetric around the principal point (Li et al. 1996). This primary effect can be absorbed by the radial lens distortion component of the calibration parameters.
There will also be some small, asymmetric effects caused by, for example, alignment errors between the optical axis and the housing port, and perhaps non-uniformities in the thickness or material of the housing. These secondary effects can be absorbed by calibration parameters such as the decentring lens distortion and the affinity term.
(Emphasis and line breaks added for readability.)
Shortis provides figures comparing radial and decentering distortion from in-air and in-water calibrations of a GoPro HERO4, reproduced at the end of this answer.
While "the absorption approach has the distinct advantage that it can be used with any type of underwater housing," he notes that fundamentally, the camera model is flawed for in-water distortion:
The disadvantage of the absorption approach for the refractive effects is that there will always be some systematic errors which are not incorporated into the model. The effect of refraction invalidates the assumption of a single projection centre for the camera (Sedlazeck and Koch 2012), which is the basis for the physical parameter model. The errors are most often manifest as scale changes when measurements are taken outside of the range used for the calibration process.
The subsequent section "2.3.3 Geometric Correction of Refraction Effects" describes alternative models.
For more on conversion between in-air and approximate in-water calibration, see "On the Calibration of Underwater Cameras" by J. G. Fryer and C. S. Fraser (1986; DOI) or "Underwater Camera Calibration" by J.M. Lavest, G. Rives, and J.T. Lapresté (2000; PDF, DOI).
| Parameter | GoPro HERO4 #1, In-air | GoPro HERO4 #1, In-water | Ratio | GoPro HERO4 #2, In-air | GoPro HERO4 #2, In-water | Ratio |
|---|---|---|---|---|---|---|
| $c_x$ (mm) | 0.080 | 0.071 | 0.88 | -0.032 | -0.059 | 1.82 |
| $c_y$ (mm) | -0.066 | -0.085 | 1.27 | -0.143 | -0.171 | 1.20 |
| f (mm) | 3.676 | 4.922 | 1.34 | 3.658 | 4.898 | 1.34 |
| Affinity | -6.74e-03 | -6.71e-03 | 1.00 | -6.74e-03 | -6.84e-03 | 1.01 |
"domain": "physics.stackexchange",
"id": 95026,
"tags": "optics, camera"
} |
Constructive proof to show the quotient of two regular languages is regular | Question: I have a question regarding the quotient of two regular languages, $R$ and $L$.
I saw the answers to this question: are regular languages closed under division
and the proof sketch is not constructive, because $L$ can be any language.
I'm thinking about a constructive proof when $L$ is regular: how can we construct that set $F_2$ in the case when $L$ is regular?
We can't go over every word $x$ in $L$ and check if $\delta(q,x)\in F_1$ in the case when $L$ is infinite...
Answer: In the question you link to, the automaton for the operation "division" (usually known as "quotient") $L_1/L_2$ is obtained from a FSA $M_1 = (Q,\Sigma,\delta,q_0,F_1)$ for $L_1$ by changing the final states. The new states are given as $F_2=\{q\in Q \mid \delta(q,x)\in F_1 \text{ for some } x\in L_2\}$.
The quotient is regular for regular $L_1$ even when $L_2$ is arbitrarily complex.
We can't go over every word $x$ in $L_2$ and check if $\delta(q,x)\in F_1$
in the case when $L$ is infinite...
You are right that in general the construction is not effective. We just know how the final states are chosen, but we cannot definitely say which states are final. But in case $L_2$ is regular, we can determine which states are final, as regular languages are closed under intersection.
For any state $q$ change $M_1$ simply by setting $q$ as its new initial state (instead of $q_0$): $M_q = (Q,\Sigma,\delta,q,F_1)$. Now $q$ is final (in $F_2$) iff the intersection $L(M_q) \cap L_2$ is nonempty, because any $x$ in the intersection is precisely an $x\in L_2$ for which $\delta(q,x)\in F_1$, as required.
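As a toy sketch of this effective construction (the automata and encoding below are made-up examples, not from the question): a state $q$ of $M_1$ is declared final for the quotient iff the product of $M_q$ and an automaton $M_2$ for $L_2$ can reach a pair of accepting states, i.e. $L(M_q)\cap L_2\neq\emptyset$.

```c
#include <stdbool.h>
#include <string.h>

#define N1 3      /* states of M1 */
#define N2 3      /* states of M2 */
#define SIGMA 2   /* alphabet {a, b} encoded as {0, 1} */

// Decide whether state q of M1 is final in the quotient automaton:
// run a fixed-point reachability pass over the product automaton,
// started at (q, initial state 0 of M2), and look for a co-accepting pair.
bool quotient_final(int q, int d1[N1][SIGMA], bool f1[N1],
                    int d2[N2][SIGMA], bool f2[N2]) {
    bool seen[N1][N2];
    memset(seen, 0, sizeof seen);
    seen[q][0] = true;
    for (bool changed = true; changed;) {
        changed = false;
        for (int i = 0; i < N1; i++)
            for (int j = 0; j < N2; j++)
                if (seen[i][j])
                    for (int a = 0; a < SIGMA; a++) {
                        int ni = d1[i][a], nj = d2[j][a];
                        if (!seen[ni][nj]) { seen[ni][nj] = true; changed = true; }
                    }
    }
    for (int i = 0; i < N1; i++)
        for (int j = 0; j < N2; j++)
            if (seen[i][j] && f1[i] && f2[j]) return true;
    return false;
}
```

For example, if $M_1$ counts occurrences of $a$ (capped at two, accepting at the cap) and $L_2=\{b\}$, only the already-accepting state of $M_1$ ends up final, since appending a single $b$ adds no $a$.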
"domain": "cs.stackexchange",
"id": 12897,
"tags": "automata, regular-languages, finite-automata"
} |
Magnetic field a distance z away from the center of a current-carrying (counter-clockwise) loop? | Question: I know the answer to be $$\vec{B}=\frac{\mu_0 I R^2}{2\left(R^2+z^2\right)^{3/2}}\,\hat{z}.$$
But I'm not exactly sure how to construct the vector $\mathrm{d}L$ to in turn utilize Biot-Savart's law to solve the problem.
Intuitively it seems if we take a point along the circle $(R\cos\theta, R\sin\theta, 0)$ and consider another point $(R\cos(\theta+\mathrm d\theta), R\sin(\theta + \mathrm d\theta), 0)$ as $\mathrm d\theta \to 0$, then subtracting the two should give us $\mathrm dL$. But since that's just the definition of the derivative, we arrive at
$$\mathrm dL = (-R\sin\theta, R\cos\theta, 0).$$
But this expression seems nonsensical: we have an infinitesimal vector = a vector with norm $R$?
Answer: Your expression for $d\vec L$ left out a factor of $d\theta$. For example, the correct Taylor expansions are
$$\cos{(\theta+d\theta)}\approx\cos\theta-d\theta\sin\theta$$
and
$$\sin{(\theta+d\theta)}\approx\sin\theta+d\theta\cos\theta.$$
The result is thus
$$d\vec L=R\,d\theta\,(-\sin\theta,\cos\theta,0).$$
By the way, this result can be written in the form
$$d\vec L=R\,d\theta\,\hat\theta.$$
This should look intuitive: Its magnitude is $R\,d\theta$, the length of an infinitesimal arc along the circle, and its direction is $\hat\theta$, along (tangent to) the circle. | {
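For completeness, feeding this $d\vec L$ into the Biot–Savart law recovers the standard on-axis field. With the field point at $(0,0,z)$, the vector from a source point to the field point is $\vec r=(-R\cos\theta,-R\sin\theta,z)$ with $|\vec r|=\sqrt{R^2+z^2}$, so

$$d\vec L\times\vec r = R\,d\theta\,\left(z\cos\theta,\ z\sin\theta,\ R\right),$$

and integrating $\theta$ from $0$ to $2\pi$ makes the transverse components vanish:

$$\vec B = \frac{\mu_0 I}{4\pi}\oint\frac{d\vec L\times\vec r}{|\vec r|^3} = \frac{\mu_0 I}{4\pi}\,\frac{2\pi R^2}{(R^2+z^2)^{3/2}}\,\hat z = \frac{\mu_0 I R^2}{2\,(R^2+z^2)^{3/2}}\,\hat z.$$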
"domain": "physics.stackexchange",
"id": 62341,
"tags": "electromagnetism, vectors, coordinate-systems"
} |
Enthalpy definitions. What are their main differences? | Question: The definition of enthalpy is properly described here.
The principal objective of this question is mainly academic: so that any student with an internet connection can find a definition and an example of the applications of the enthalpy concept.
What are the main differences of:
Standard enthalpy
Enthalpy of atomization
Enthalpy of formation
Enthalpy of hydration
Enthalpy of reaction
Enthalpy of combustion
Enthalpy of solution
Enthalpy of lattice dissociation
Enthalpy of lattice formation
Enthalpy of mixture
Enthalpy of excess mixture
Enthalpy of (any phase transition)
Law of Hess
Law of Kirchhoff
Any other definition not listed here
Answer: Explanation of notation:
$H$ is the enthalpy of the system.
$\Delta$ means change of, so $\Delta H$ means change of the enthalpy
The Plimsoll symbol denotes standard conditions, defined as a pressure of $100\ \mathrm{kPa}$ with reactants and products in their standard states, or solution concentrations of $1\ \mathrm{M}$. However, since LaTeX doesn't have this symbol, I'll substitute $^\ominus$ for it.
Enthalpy changes:
Enthalpy of dilution – The enthalpy change when a solution containing one mole of a solute is diluted from one concentration to another.
Enthalpy of ($n\text{th}$) electron affinity – The enthalpy change when $n$ electrons are added to one mole of gaseous atoms.
$$\ce{Li(g) + e-(g) -> Li-(g) +60\ \text{kJ}}$$
$$\ce{F(g) + e-(g) -> F-(g) +328\ \text{kJ}}$$
Enthalpy of ($n\text{th}$) ionization – The enthalpy change when $n$ electrons are removed from one mole of gaseous atoms. It is always positive.
$$\ce{Li(g) +520\ \text{kJ} -> Li+(g) +e-(g)}$$
$$\ce{He(g) +2372\ \text{kJ} -> He+(g) +e-(g)}$$
Enthalpy of lattice dissociation – The enthalpy change when one mole of an ionic lattice dissociates into isolated gaseous ions.
An example for sodium chloride, which has an enthalpy of lattice dissociation, also known as lattice energy, of $787\ \mathrm{kJ/mol}$:
$$\ce{NaCl(s) +787\ \text{kJ} -> Na+(g) + Cl-(g)}\qquad\Delta H=787\ \text{kJ/mol}$$
Enthalpy of lattice formation – The enthalpy change when one mole of solid crystal is formed from its scattered gaseous ions.
$$\ce{Na+(g) + Cl-(g) -> NaCl(s)} +787\ \text{kJ}\qquad\Delta H=-787\ \text{kJ/mol}$$
Which means Enthalpy of lattice dissociation$=-$Enthalpy of lattice formation
Enthalpy of hydration($\Delta_\text{hyd}H^\ominus$) – The enthalpy change when one mole of ions undergoes hydration.
Enthalpy of mixing – The enthalpy change when two or more substances are mixed.
Enthalpy of neutralisation – The enthalpy change when an acid is completely neutralised by a base.
For a strong acid, like $\ce{HCl}$ and strong base, like $\ce{NaOH}$, they disassociate almost completely $\ce{Cl-}$ and $\ce{Na+}$ are spectator ions so what is actually happening is $$\ce{H+(aq) + OH-(aq) -> H2O(l) + 58\ \text{kJ/mol}}$$
However, using a weak acid/base will have a lower enthalpy of neutralisation as normally most of the acid/base does not disassociate.
For example, mixing ethanoic acid and potassium hydroxide only has an enthalpy of neutralisation of $-11.7\ \text{kJ/mol}$.
Enthalpy of precipitation – The enthalpy change when one mole of a sparingly soluble substance precipitates by mixing dilute solutions of suitable electrolytes.
Enthalpy of solution – Enthalpy change when 1 mole of an ionic substance dissolves in water to give a solution of infinite dilution.
Enthalpy of solution can be positive or negative, because when an ionic substance dissolves, the dissolution can be broken into three steps:
Breaking of solute-solute attraction (endothermic)
Breaking solvent-solvent attraction (endothermic), eg. hydrogen bonds, LDF
Forming solvent-solute attraction (exothermic)
An example of a positive enthalpy of solution is potassium chlorate which has an enthalpy of solution of $41.38\ \text{kJ/mol}$
Enthalpy of (Solid$\rightarrow$Liquid: $\Delta_\text{fus}H^\ominus$, Liquid$\rightarrow$Solid:$\Delta_\text{freezing}H^\ominus$, Liquid$\rightarrow$Gas: $\Delta_\text{vap}H^\ominus$, Gas$\rightarrow$Liquid: $\Delta_\text{cond}H^\ominus$, Solid$\rightarrow$Gas: $\Delta_\text{sub}H^\ominus$, Gas$\rightarrow$Solid: $\Delta_\text{deposition}H^\ominus$) – The enthalpy change from providing energy, to a specific quantity of the substance to change its state.
$$\Delta_\text{fus}H^\ominus=-\Delta_\text{freezing}H^\ominus,\Delta_\text{vap}H^\ominus=-\Delta_\text{cond}H^\ominus,\Delta_\text{sub}H^\ominus=-\Delta_\text{deposition}H^\ominus$$
Standard enthalpy of atomization($\Delta_\text{at}H_T^\ominus$) – The enthalpy change when a compound's bonds are broken and the component atoms are reduced to individual gaseous atoms at temperature $T$.
$$\ce{S_8 -> 8S}\qquad\Delta_\text{at}H^\ominus=278.7\ \text{kJ/mol}$$
Standard enthalpy of combustion($\Delta_\text{c}H_T^\ominus$) – The enthalpy change which occurs when one mole of the compound is burned completely in oxygen at temperature $T$ and $100\ \mathrm{kPa}$.
$$\ce{H2(g) +\frac{1}{2}O2(g) -> H2O(l)}+286\ \text{kJ}\qquad\Delta_\text{c}H^\ominus = -286\ \text{kJ/mol}$$
Standard enthalpy of formation($\Delta_\text{f}H_T^\ominus$) – Change in enthalpy during the formation of one mole of the compound from its constituent elements, with all substances in their standard states, at a pressure of $100\ \mathrm{kPa}$ and temperature $T$.
It can be calculated using Hess's law if the reaction is hypothetical. An example is methane, $\ce{C}$ and $\ce{H2}$ will not normally react but the standard enthalpy of formation of methane is determined by Hess's law to be $-74.8\ \text{kJ/mol}$
$$\ce{\frac{1}{2}N2(g) +\frac{1}{2}O2(g) -> NO(g)}\qquad\Delta_\text{f}H^\ominus=90.25\ \text{kJ/mol}$$
Standard enthalpy of reaction($\Delta_\text{r}H^\ominus_T$) – The enthalpy change when matter is transformed by a chemical reaction at temperature $T$ and $100\ \mathrm{kPa}$.
$$\ce{2H2(g) + O2(g) -> 2H2O(l)} +572\ \text{kJ}\qquad\Delta_\text{r}H^\ominus = -572\ \text{kJ/mol}$$
$$\Delta_\text{r}H^\ominus=\sum H^\ominus_\text{products}-\sum H^\ominus_\text{reactants}$$
$$\Delta_\text{r}H_\text{forward}=-\Delta_\text{r}H_\text{backwards}$$
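The summation formula above lends itself to a small worked example. The sketch below is my own illustration; the helper name is hypothetical and the $\Delta_\text{f}H^\ominus$ values are approximate textbook numbers, used here only for demonstration:

```python
def reaction_enthalpy(products, reactants, dHf):
    """Delta_r H = sum over products minus sum over reactants
    of (stoichiometric coefficient) * (standard enthalpy of formation)."""
    total = lambda side: sum(n * dHf[species] for species, n in side.items())
    return total(products) - total(reactants)

# Approximate standard enthalpies of formation, in kJ/mol
dHf = {"CH4(g)": -74.8, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(l)": -285.8}

# CH4 + 2 O2 -> CO2 + 2 H2O(l): combustion of methane
dH = reaction_enthalpy({"CO2(g)": 1, "H2O(l)": 2},
                       {"CH4(g)": 1, "O2(g)": 2}, dHf)
# dH is about -890 kJ/mol: strongly exothermic, as expected
```

This is Hess's law in action: the value depends only on the initial and final states, not on the path taken from reactants to products.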
Laws:
Le Chatelier's Principle – When an external change is made to a system in dynamic equilibrium, the system responds to minimise the effect of the change.
Yellow $\ce{Fe^3+}$ reacting with colorless thiocyanate ions $\ce{SCN-}$ to form deep red $\ce{[Fe(SCN)]^2+}$ ions:
$$\ce{Fe^3+(aq) + SCN-(aq) -> [Fe(SCN)]^2+(aq)}$$
When $\ce{NH4Cl}$ is added, $\ce{Cl-}$ reacts with $\ce{Fe^3+}$ ions to form $\ce{[FeCl4]-}$ ions. By Le Chatelier's Principle, when $\ce{Cl-}$ is added into a solution of deep red $\ce{[Fe(SCN)]^2+(aq)}$ ions, the equilibrium will shift towards $\ce{Fe^3+(aq) + SCN-(aq)}$, turning the solution pale red.
Kirchhoff's Law – The variation of a reaction enthalpy with temperature is given by the difference in heat capacities, $\mathrm{d}(\Delta H)/\mathrm{d}T=\Delta C_p$; the enthalpy of any substance increases with temperature.
Hess's Law – Total enthalpy change of a chemical reaction is independent of the number of steps the reaction takes.
Henry's Law – Amount of dissolved gas is proportional to its partial pressure in the gas phase.
Processes:
Constant Entropy – Isentropic
Constant Pressure – Isobaric
Constant Volume – Isovolumetric
Side note: This is probably not a complete list as I may have missed some | {
"domain": "chemistry.stackexchange",
"id": 8026,
"tags": "thermodynamics, enthalpy"
} |
What type of Organic Substance is used in LCD | Question: I know that the liquid crystal displays contain some class of organic compound, which is somehow used to polarise light to make contrast variations(In calculator displays).
My Question is What type of Organic Compound is used in this case and How does it polarise light only when stimulated by external agents such as current in LCD's
Answer: Liquid crystals are usually rigid rod-like molecules that tend to show some degree of intermolecular order even when notionally liquids (there is a good summary on Wikipedia). This happens because the rigid polarisable parts of the molecules can have relatively strong directional interactions even in the liquid phase.
Some of the structures they form can rotate the plane of polarised light. You can get some intuition about why by noting that polarising lenses consist of highly ordered molecules whose interaction with light is strongly direction-dependent. In normal liquids the molecular orientations are effectively random, so even if each molecule interacts with light depending on its orientation, the effect washes out due to the randomness of the overall structure. Liquid crystals form structures where there is the possibility of large-scale order, allowing macroscopic effects on polarisation (just like the rigid structure in a polaroid film).
The other thing that makes liquid crystals useful is that the details of the ordering in the liquid can be affected by external electric fields, especially in thin films of the compounds. In principle this is possible whenever a molecule is polar (that is, it has a net electrical dipole). Of course, making an effective display requires a lot of fine tuning of the electrical, mechanical and chemical structure, but the basic details are simply that the molecules are affected by electric fields, and this allows some control of the macro-structure which, in turn, alters the way the liquid crystal interacts with light. The effect is not "turned on" by the field, just altered, and that is enough to make a viable display technology.
Liquid crystals have a long history. The first compound known to show liquid crystal behaviour is cholesteryl benzoate, the first molecule shown below, which was discovered in 1888. The other structures show some actual molecules, from the early days of LCD displays to modern developments. For more examples of the history there is a great summary from Merck (one of the major manufacturers) in this slideshow. | {
"domain": "chemistry.stackexchange",
"id": 2465,
"tags": "organic-chemistry, liquid-crystals"
} |
Reusing too much code, but I cannot figure out how to refactor this correctly | Question: I have some code that runs a series of functions in order and then outputs the results of each. When I write this code, I end up reusing a ton of it, where only the function being called changes. MethodThatChanges in the below example is a method that returns void.
Task[] task1 = new Task[10];
for (int i = 0; i < 10; i++)
{
task1[i] = Task.Factory.StartNew(() =>
{
MethodThatChanges1(i);
});
}
Task.WaitAll(task1);
Task[] task2 = new Task[10];
for (int i = 0; i < 10; i++)
{
task2[i] = Task.Factory.StartNew(() =>
{
MethodThatChanges2(i);
});
}
Task.WaitAll(task2);
etc........
I would like to write a function instead that contains all of this code and passes in as a parameter, the method that each section calls. Something like this, obviously this doesn't work as is...
void ExecuteTask(void method)
{
Task[] task = new Task[10];
for (int i = 0; i < 10; i++)
{
task[i] = Task.Factory.StartNew(() =>
{
method(i);
});
}
Task.WaitAll(task);
}
Then I could replace all of the copied code with:
ExecuteTask(method1);
ExecuteTask(method2);
etc...
I've looked into Action and Func delegates, but I do not understand exactly how they work. Any help would be appreciated.
Answer: You were on the right track with Action delegates. There is another problem however: the loop variable i is captured inside the lambda expression. Since it is the same variable in each Task, when you do i++ in the for-loop, other Tasks will use the new value as well.
(This is a very weird problem, more on this on Eric Lippert's blog: Closing over the loop variable considered harmful) It can be solved by simply creating a copy of i.
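As an aside (my own illustration, not part of the original answer), the same late-binding capture behaviour is easy to demonstrate in Python:

```python
# All three lambdas close over the same variable `i`; by the time they
# are called, the loop has finished and i == 2.
late = [lambda: i for i in range(3)]
print([f() for f in late])   # [2, 2, 2]

# Binding a default argument copies the current value of i into each
# lambda -- the same idea as `int copy = i;` in the C# fix below.
bound = [lambda i=i: i for i in range(3)]
print([f() for f in bound])  # [0, 1, 2]
```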
void ExecuteTask(Action<int> method) {
Task[] task = new Task[10];
for (int i = 0; i < 10; i++)
{
int copy = i;
task[i] = Task.Factory.StartNew(() => method(copy));
}
Task.WaitAll(task);
}
ExecuteTask(method1);
ExecuteTask(method2); | {
"domain": "codereview.stackexchange",
"id": 8717,
"tags": "c#"
} |
syntax error near unexpected token `'joint_states listener'' | Question:
In this tutorial - http://wiki.ros.org/pr2_controllers/Tutorials/Getting%20the%20current%20joint%20angles
I was trying to run the service node for getting the current joint angles but the screen shows the message stated below.-
~/groovy_workspace/sandbox/joint_states_listener$ rosrun joint_states_listener joint_states_listener.py
/home/roger/groovy_workspace/sandbox/joint_states_listener/nodes/joint_states_listener.py: line 6: syntax error near unexpected token `'joint_states_listener''
I am following the tutorial step by step and haven't missed any. I don't know what the problem is. Please help.
I am using groovy and Ubuntu 12.04.
Originally posted by Ros_newbie on ROS Answers with karma: 17 on 2014-04-10
Post score: 0
Answer:
This sounds like you have a syntax error on line 6 in your python program.
Without seeing the source code, I can't say anything more than that.
Originally posted by ahendrix with karma: 47576 on 2014-04-11
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 17616,
"tags": "ros, nodes, joint-states"
} |
What are the characteristics of the magnetic field surrounding a human brain? | Question: The human brain is said to produce a magnetic field resulting from the action potentials released inside the brain. What's the nature of such a field in terms of size and strength, and what is the potential for manipulation of brain functions by interfering with it by means of electromagnetic radiation?
Answer: The human body produces a wide range of bioelectromagnetic signals from various electrical impulses in the brain. The origin of the magnetic field is the charge exchange in the muscular and neural tissues (i.e no magnetic material is usually present in the body with very rare exceptions).
The brain's magnetic field varies from tens of fT to hundreds of fT [1]. The frequency varies from 0.1 Hz to <100 Hz. Measurement of brain magnetism is limited by the complexity of the signal due to the overlap of signals from various parts of the brain. In magnetoencephalography this is done using SQUIDs [2].
The second part of your question is about magnetic stimulation of brain functions. This is in fact possible, and some testing/use is reported with limited clinical success. The strength of the applied magnetic fields is 1-5 T for trans-cranial stimulation [3].
[1] http://www.bem.fi/book/12/12.htm#02
[2] http://www.scholarpedia.org/article/Magnetoencephalogram
[3] http://en.wikipedia.org/wiki/Transcranial_magnetic_stimulation
[4] http://www.bem.fi/book/22/22.htm
An excellent source for this topic is [1] | {
"domain": "physics.stackexchange",
"id": 1200,
"tags": "electromagnetism, biophysics"
} |
Bayesian monophyly testing from MrBayes and BEAST posterior tree samples 2 | Question: Continuation of this question:
I have tried to refactor code according to PEP 8.
I tried to raise some errors instead of just print and sys.exit, as was suggested. (Should I define many more exceptions, one for each function and so on, or are general ones enough?)
The code is not complete; there are a few issues I need to look into, and the "main" could be divided into several logical functions for cleaner and more logical code (I needed results quickly).
As you may see, I tried to pick a nice docstring format; I chose numpy. However, it is quite a lot of work for every function, even very simple ones. Is it necessary to comment everything?
Do I need to document the whole function? It is basically described in the arguments of argparse.
Is it really necessary to keep limit 80 chars per line? I tried to keep it, but sometimes I let one or two chars go.
As last time, thank you for any suggestion on improving my coding practices. I really need to be bashed.
"""This script will perform bayesian monophyly test on output from MrBayes or BEAST."""
from __future__ import division
import sys
import re
import math
import argparse as arg
def parse_args():
parser = arg.ArgumentParser(prog="BayesMonophyly", \
description="Perform bayesian monophyly test on output from MrBayes" + \
" or BEAST. Test is simple comparison of number of" + \
" trees that have monophyly versus number of trees" + \
" without monophyly, corrected by prior chance of" + \
" obtaining ratio of monophyletic/nonmonophyletic" + \
" trees at random (i.e., from data that contain no" + \
" information regarding monophyly).")
parser.add_argument("-s", "--species", required=True, nargs="*",
help="Species that should be monophyletic")
parser.add_argument("-i", "--input", required=True, nargs="+",
help="One or more MrBayes or BEAST input files." + \
"More files should be used only from the same analysis" + \
"(i.e., where it actually make sense, such as standard" + \
" two runs from MrBayes analysis).")
parser.add_argument("-b", "--burnin", required=False, default=0.2,
type=int, help="Number of trees ignored as burnin phase")
parser.add_argument("-r", "--rooted", required=False, default=False,
action="store_true",
help="Will treat input trees as rooted" + \
"(e.g. BEAST always assume some molecular clock).")
args = parser.parse_args()
return(args)
class ParsingError(Exception):
pass
def parse_tree_file(treefile):
"""Parse posterior tree sample file from BEAST or MrBayes.
Parse posterior tree sample file generated by MrBayes or BEAST software.
These files are of NEXUS format and contain several block. Of these blocks,
only the Translate and Tree blocks are of interest. Translate block contain
list of species in trees and their translation, as names are translated
into numbers. Tree block contains posterior sample of trees.
This parser will search this file and returns Translate block and trees.
There are several checks employed to ensure, that parsing is correct.
Parameters
----------
treefile : string
path to file that is to be parsed
Returns
-------
translated_taxa : dictionary
original names of species in file and their numeric translation
"""
tree_file_text = []
try:
tree_file = open(treefile,"r")
except IOError:
raise ParsingError("Couldn't open file, does file exists?")
else:
with tree_file:
tree_file_text = tree_file.readlines()
#Various checks:
#first line must be #NEXUS
if(tree_file_text[0].strip("\n\t ").lower() != "#nexus"):
raise ParsingError("No NEXUS header. Is file NEXUS?")
#find begin trees
begin_block_start = 0
for num,line in enumerate(tree_file_text):
if line.strip("\n\t ").lower() == "begin trees;":
begin_block_start=num
break
if begin_block_start == 0:
raise ParsingError("Begin trees block not found!")
#check if following one is translate:
if tree_file_text[begin_block_start+1].strip("\n\t ").lower() != "translate":
raise ParsingError("ERROR: Misformed Begin trees block," + \
" \"translate\" not found.")
#translate block, numbers from 1 to ntaxa
#but because taxa block is not required
#number of taxa is not known and must be estimated from translate
translated_taxa = dict()
begin_block_end = 0
for num,line in enumerate(tree_file_text[begin_block_start+2 : ]):
pair=line.strip("\n\t, ").split()
if len(pair) != 2:
begin_block_end = num + begin_block_start + 2
break
else:
translated_taxa[int(pair[0]) ] = pair[1]
#check if begin_block_end has changed:
if begin_block_end == 0:
raise ParsingError("ERROR: end of translation block not found.")
#now, every tree should start with "tree", so find a first tree, if not next:
trees_start = 0
for num,line in enumerate(tree_file_text[begin_block_end + 1 : ]):
if line.strip("\n\t ")[0:4].lower() == "tree":
trees_start = num + begin_block_end + 1
break
#test if trees_start changed:
if trees_start == 0:
raise ParsingError("ERROR: no tree was found!")
trees = []
#read all trees and put them into list
for line in tree_file_text[trees_start:]:
#get tree
line=line.strip("\n\t ;")
if line.lower() == "end":
#end of tree block
break
tree=line.split(" = ")[1].strip()
if(tree[0:5] == "[&U] "): #remove "[&U] ", if present
tree = tree[5:]
#delete [&something=number] tags from BEAST
#TODO better matching is required
#in my file, I am currently matching only [&rate=number]
if "[" in tree:
tree = re.sub("\[&rate=[0-9]*\.?[0-9]*([eE]-[0-9]+)?\]", "", tree)
#get cladogram
tree = re.sub(":[0-9]+\.?[0-9]*([eE]-[0-9]+)?", "", tree)
trees.append(tree)
return(translated_taxa, trees)
def check_species_in_taxa(species,translated_taxa):
"""Check if specified species are in dictionary.
Simple check if species required for monophyly are in species contained
contained in tree.
Parameters
----------
species : list of strings
list of species for monophyly
translated_taxa : dictionary
dictionary of species and their number from taxa block of nexus file
Returns
-------
"""
#test if all species are in translated_taxa
taxa_in_tree = translated_taxa.values()
for specie in species:
if not specie in taxa_in_tree:
raise RuntimeError("Species \"{0}\" is not in tree taxa!".format(specie))
def check_species_equivalency(list_of_dicts):
"""Check if taxa in multiple tree files are exactly the same.
Parameters
----------
list_of_dicts : list of dictionaries
list of dictionaries of several translated_taxa from taxa block of
multiple nexus files
Returns
-------
"""
if len(list_of_dicts) == 0:
raise RuntimeError(" For some reason, no dictionary of translated_taxa" +
" was passed.")
elif len(list_of_dicts) == 1:
#with only one dict, no checking for equivalence is necessary
pass
else:
#test every other dict against first dict:
template_dict = list_of_dicts[0]
template_dict_items=list_of_dicts[0].items()
template_dict_items.sort()
for num,matched_dict in enumerate(list_of_dicts[1:]):
if len(template_dict) != len(matched_dict):
raise RuntimeError(("Number of taxa in file 1 and " +
"file {0} is different!").format(num + 2))
matched_dict_items = matched_dict.items()
matched_dict_items.sort()
for template,matched in zip(template_dict_items, matched_dict_items):
if template != matched:
raise RuntimeError(("Number or taxa name differs in" +
" file 1 {1} and file {0} {2}!" +
" They are probably not equivalent!")
.format(num + 2, str(template), str(matched))
)
def ete2solution(trees, translated_species):
"""Return number of monophyletic trees for input species.
Converts trees from string to ete2.Tree and check if specific species
are monophyletic.
Parameters
----------
trees : list of strings
trees, cladograms in text form
translated_species : list of strings
list of species for monophyly, translated into numeric form
Returns
-------
monophyletic_counter : int
number of monophyletic trees
"""
monophyletic_counter = 0
import ete2
for tree in trees:
try:
ete2_tree=ete2.Tree(tree + ";")
except ete2.parser.newick.NewickError:
print tree
raise RuntimeError("Problem with turning text into tree with ete2!")
try:
if ete2_tree.check_monophyly(values=translated_species,
target_attr="name")[0]:
monophyletic_counter += 1
except ValueError:
print translated_species
print tree
raise RuntimeError("Species are not in tree. Error in translating?")
return(monophyletic_counter)
def translate_species(translate_dict, species):
"""Translate input species to numbers as they appear in tree file."""
reversed_dict = {value : key for key,value in translate_dict.iteritems()}
translated_species = [str(reversed_dict[item]) for item in species]
return(translated_species)
def non_ete2solution(tree, species):
""" I think that solution would be with re.sub"""
pass #TODO
def n_unrooted_trees(n):
"""Returns number of unrooted trees for n taxa."""
return math.factorial(2*n-5) / (2**(n-3) * math.factorial(n-3))
def n_rooted_trees(n):
"""Returns number of rooted trees for n taxa."""
return math.factorial(2*n-3) / (2**(n-2) * math.factorial(n-2))
def bayes_factor(prior, posterior):
"""Compute standard Bayes factor from prior and posterior."""
if prior in [0,1]:
#raise some error
pass
elif posterior == 1:
#raise warning?
pass
bayes_factor = (posterior/(1-posterior)) / (prior/(1-prior))
return(bayes_factor)
def compute_prior(num_taxa, num_species, rooted):
"""Compute prior probability for trees, either rooted or unrooted."""
if rooted:
prior = (n_rooted_trees(num_taxa-num_species+1)* \
n_rooted_trees(num_species)) / n_rooted_trees(num_taxa)
else:
prior = (n_unrooted_trees(num_taxa-num_species+1)* \
n_rooted_trees(num_species)) / n_unrooted_trees(num_taxa)
#Can't actually happen, this equation does not work for some special cases:
#TODO
#Question is, how to treat those values?
#
if prior == 1:
raise ValueError("Prior is one.")
if prior == 0:
raise ValueError("Prior is zero.")
return(prior)
def compute_posterior(num_monophyletic, num_total):
"""Compute posterior probability of monophyletic trees."""
posterior = num_monophyletic / num_total
#posterior 0 can happen and is legitimate
#posterior 1 however destroys bayes factor
if posterior == 1:
pass #TODO: raise warning?
return(posterior)
if __name__ == "__main__":
args=parse_args()
#analysis makes sense only for 2 and more species:
if len(args.species) < 2:
print "ERROR: Must specify at least 2 species."
sys.exit()
#also, the species must be different:
if len(set(args.species)) < len(args.species):
print "ERROR: Please, make sure that species are unique."
sys.exit()
#read all input files
all_translated_taxa = []
all_trees = []
for input_file in args.input:
(translated_taxa,trees) = parse_tree_file(input_file)
check_species_in_taxa(args.species,translated_taxa)
all_translated_taxa.append(translated_taxa)
all_trees.append(trees)
check_species_equivalency(all_translated_taxa)
#if equivalent, every file has same species, can use the first one
#apply burnin
all_trees_burned = [trees[int(len(trees)*args.burnin):] for trees in all_trees]
all_trees_burned = [inner for outer in all_trees_burned for inner in outer]
num_total = len(all_trees_burned)
translated_species = translate_species(all_translated_taxa[0], args.species)
num_monophyletic = ete2solution(all_trees_burned, translated_species)
prior = compute_prior(len(all_translated_taxa[0]),
len(translated_species), args.rooted)
posterior = compute_posterior(num_monophyletic, num_total)
bayes_factor=bayes_factor(prior, posterior)
#output:
all_trees_burned_num = len(all_trees_burned)
all_trees_num = sum([len(item) for item in all_trees])
expected_monophyletic = int(round(prior*all_trees_burned_num))
output=("Total trees read: {0}\n" +
"Trees after burnin: {1}\n" +
"Monophyletic trees found: {5}\n" +
"Monophyletic trees expected: {3}\n" +
"(in the case of noninformative data)\n\n" +
"Prior: {2:.4f}\n" +
"Posterior: {4:.4f}\n" +
"Bayes factor: {6:.4f}\n").format(all_trees_num,
all_trees_burned_num,
float(prior),
expected_monophyletic,
float(posterior),
num_monophyletic,
bayes_factor)
print output
if bayes_factor==0:
print("Probability of this by chance alone given prior: {0:.2e}"
.format((1-prior)**all_trees_burned_num)
)
Answer:
Should I define much more exceptions for every function and so, or are general ones enough?
I normally only raise errors if it is imperative for the function to work.
Say you pass a str rather than an int: you should raise an error. I think that the ones that you have done are good. There may be more errors, and you should raise them.
However, it is quite a lot of work for every, even very simple function. Is it necessary to comment everything?
I think that Google's style guide for comments is really good.
never describe the code. Assume the person reading the code knows Python (though not what you're trying to do) better than you do.
Most of your comments help to know what your functions do.
However there are some comments that do what Google say not to do.
You only have to follow Google's style guide if you are working for them.
#test every other dict against first dict:
Is it really necessary to keep limit 80 chars per line? I tried to keep it, but sometimes I let one or two chars go.
It's a guide. There are some platforms that can't display more than 80 characters.
Also the code in StackExchange's 'code' blocks for me is 80 characters.
And then I start having to scroll.
Also it helps to make code easier to read, and breaks up dense code.
First, Python has a nice feature: implicit string literal concatenation.
description="Perform bayesian monophyly test on output from MrBayes" + \
" or BEAST. Test is simple comparison of number of" + \
...
" information regarding monophyly)."
description=("Perform bayesian monophyly test on output from MrBayes"
" or BEAST. Test is simple comparison of number of"
...
" information regarding monophyly).")
Things inside () allow you to go to a new line without the use of \.
Python also allows you to concatenate strings without the + operator, as long as both are string literals (adjacent literals are joined at compile time).
You should not do this:
parser.add_argument("-b", "--burnin", required=False, default=0.2,
type=int, help="Number of trees ignored as burnin phase")
This breaks two PEP8 rules.
Arguments on first line forbidden when not using vertical alignment
Further indentation required as indentation is not distinguishable.
It should look like this:
parser.add_argument(
"-b", "--burnin", required=False, default=0.2,
type=int, help="Number of trees ignored as burnin phase")
Any amount of indentation that is distinguishable from the following block (i.e., not 4 spaces) is good.
Or:
parser.add_argument("-b", "--burnin", required=False, default=0.2,
type=int, help="Number of trees ignored as burnin phase")
return(args) is unconventional, just leave it bare: return args.
for num,line in enumerate(tree_file_text):
You should have a space between , and line.
You can make some lines shorter than 79 for sure.
if tree_file_text[begin_block_start+1].strip("\n\t ").lower() != "translate":
this can use the () rule so there is no \.
if (tree_file_text[begin_block_start+1].strip("\n\t ").lower()
!= "translate"):
Change the apostrophes for easier strings.
raise ParsingError("ERROR: Misformed Begin trees block," + \
" \"translate\" not found.")
The second string can use '' to simplify it.
raise ParsingError("ERROR: Misformed Begin trees block,"
' "translate" not found.')
pair=line.strip("\n\t, ").split()
Always have a single space around the = operator when doing assignment.
translated_taxa[int(pair[0]) ]
The white-space is against PEP8.
Change it to:
translated_taxa[int(pair[0])]
if len(list_of_dicts) == 0:
This doesn't follow PEP8.
For sequences, (strings, lists, tuples), use the fact that empty sequences are false.
if not list_of_dicts:
import ete2
Do not import in the middle of the code.
When using sys.exit give an exit status. sys.exit(-1) means an error happened.
all_trees_burned = [trees[int(len(trees)*args.burnin):] for trees in all_trees]
Split this on to more lines.
all_trees_burned = [
trees[int(len(trees)*args.burnin):]
for trees in all_trees
]
This adds readability and doesn't go past 79 characters.
I personally would say that readability is your main problem.
For me it was quite hard to read your code.
It was very dense and overwhelming in some places.
If you read PEP 8 (it's not that long), this problem will be solved.
This is because it was made to increase readability. | {
"domain": "codereview.stackexchange",
"id": 14569,
"tags": "python"
} |
How long can E. coli stocks be stored at -20°C? | Question: I'm volunteering for a biohacker lab - biocurious in Sunnyvale. The have a pretty good set of equipment - gel boxes, incubators, but they don't have a -80°C freezer yet.
I'd like to set up some glycerol stab stocks of some E. coli strains, but I'm told even untransformed (i.e. no plasmid) bacteria will get funny after a while at -20°C. It would also be good to hear how long you can keep a plasmid-transformed strain at -20°C.
Can anyone be specific about what happens and how long it takes when you store them at -20°C for long periods of time?
I have done a little reading for competent cells and I guess they stay active for about 4 days without a -80°C freezer.
Answer: Gergana covered the "why" part of your question. +1
If all you have at the moment is a -20°C and mostly what you want to store is E. coli harboring plasmids, I'd recommend preparing plasmid DNA and storing that at -20°C. The DNA will stay stable in the medium term, and you can re-transform into E. coli once you've got your -80°C freezer up and running. And ideally you want both DNA stocks and bacterial stocks on hand - you never know when a freezer will go down.
For your untransformed strains, you could make agar stabs or plates to store at 4°C. I would re-streak them every 1-2 weeks, grow overnight at 37°C, and then put them back in the fridge. | {
"domain": "biology.stackexchange",
"id": 195,
"tags": "molecular-biology, ecoli, bacteriology"
} |
Turtlebot moves backwards | Question:
Hello everyone,
I'm following this tutorial on sending simple goals to the Turtlebot navigation stack;
SendingSimpleGoals
When I run the program, the Turtlebot moves in a backwards direction in the sense that, it moves opposite to the direction the Kinect sensor is facing. We've calibrated the gyro and odometry as described in the wiki page but the Turtlebot never seems to move forward (in the direction the Kinect is facing). Whenever we run the program, the Turtlebot rotates a couple of times (creating a map) and then moves in the opposite direction.
Could anyone please provide some assistance as to why this is happening?
EDIT: Thanks everyone - the Kinect was mounted the wrong way around by the people who used it previously, which is why the Turtlebot was moving backwards.
Originally posted by aviprobo on ROS Answers with karma: 61 on 2013-12-17
Post score: 0
Original comments
Comment by tfoote on 2013-12-17:
What happens if you send a direct command velocity of forward only?
Answer:
I had the same problem: the Kinect was mounted the wrong way. I just changed it and all was fine ;-)
Originally posted by MAKL with karma: 36 on 2014-01-15
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 16484,
"tags": "turtlebot"
} |
Simple wiki with PHP, part two | Question: Following up from part one, I've attempted to convert the largest part of my project into an MVC pattern as per suggestions from the accepted answer. I've managed to split the original edit.php file into a template and the business logic part, but there's still a lot of repetition going on. I feel like there has to be a better way to handle cutting down on the repetition without introducing tons of new variables and if statements all over the place.
edit.php
<?php
require_once "bootstrap.php";
$message = $_SESSION["message"] ?? "";
unset($_SESSION["message"]);
const MAX_LENGTH_TITLE = 32;
const MAX_LENGTH_BODY = 10000;
$errors = [];
$title = htmlspecialchars($_GET["title"] ?? "");
$slug = "";
if ($title !== slugify($title)) {
$slug = slugify($title);
} else {
$slug = $title;
}
$stmt = $pdo->prepare("SELECT id, title, slug, body FROM articles WHERE slug = ?");
$stmt->execute([$slug]);
$article = $stmt->fetch();
if ($_SERVER["REQUEST_METHOD"] === "POST") {
if (!empty($_POST["edit-article"])) {
$title = $_POST["title"];
$body = $_POST["body"];
$slug = slugify($title);
if (empty(trim($title))) {
$errors[] = "No title. Please enter a title.";
} elseif (strlen($title) > MAX_LENGTH_TITLE) {
$errors[] = "Title too long. Please enter a title less than or equal to " . MAX_LENGTH_TITLE . " characters.";
} elseif (slugify($title) !== $article["slug"]) {
$errors[] = "Title may only change in capitalization or by having additional symbols added.";
}
if (strlen($body) > MAX_LENGTH_BODY) {
$errors[] = "Body too long. Please enter a body less than or equal to " . MAX_LENGTH_BODY . " characters.";
}
if (empty($errors)) {
$stmt = $pdo->prepare("UPDATE articles SET title = ?, body = ? WHERE id = ?");
$stmt->execute([$title, $body, $article["id"]]);
$_SESSION["message"] = "Article successfully updated.";
header("Location: /wiki.php?title=" . $article["slug"]);
exit();
}
} elseif (!empty($_POST["create-article"])) {
$title = $_POST["title"];
$body = $_POST["body"];
$slug = slugify($title);
if (empty(trim($title))) {
$errors[] = "No title. Please enter a title.";
} elseif (strlen($title) > MAX_LENGTH_TITLE) {
$errors[] = "Title too long. Please enter a title less than or equal to " . MAX_LENGTH_TITLE . " characters.";
}
$stmt = $pdo->prepare("SELECT title, slug FROM articles WHERE title = ? OR slug = ?");
$stmt->execute([$title, $slug]);
$article_exists = $stmt->fetch();
if ($article_exists) {
$errors[] = "An article by that title already exists. Please choose a different title.";
}
if (strlen($body) > MAX_LENGTH_BODY) {
$errors[] = "Body too long. Please enter a body less than or equal to " . MAX_LENGTH_BODY . " characters.";
}
if (empty($errors)) {
$stmt = $pdo->prepare("INSERT INTO articles (title, slug, body) VALUES (?, ?, ?)");
$stmt->execute([$title, $slug, $body]);
$_SESSION["message"] = "Article successfully created.";
header("Location: /wiki.php?title=" . $slug);
exit();
}
}
}
$title = $article["title"] ?? $title;
$template = "edit.php";
require_once "templates/layout.php";
templates/edit.php
<?php if (!empty($errors)): ?>
<ul>
<?php foreach ($errors as $error): ?>
<li><?= $error; ?></li>
<?php endforeach; ?>
</ul>
<?php endif; ?>
<?php if ($article): ?>
<form action="/edit.php?title=<?= $article["title"]; ?>" method="post" name="form-edit-article">
<div><label for="title">Title</label></div>
<div><input type="text" name="title" id="title" size="40" value="<?= htmlspecialchars($article["title"]); ?>" required></div>
<div><label for="body">Body</label></div>
<div><textarea name="body" id="body" rows="30" cols="120" maxlength="10000"><?= htmlspecialchars($article["body"]); ?></textarea></div>
<div><span id="character-counter"></span></div>
<div><input type="submit" name="edit-article" value="Edit Article"></div>
</form>
<?php else: ?>
<form action="/edit.php" method="post" name="form-create-article">
<div><label for="title">Title</label></div>
<div><input type="text" name="title" id="title" size="40" value="<?= htmlspecialchars($title); ?>" required></div>
<div><label for="body">Body</label></div>
<div><textarea name="body" id="body" rows="30" cols="120" maxlength="10000"><?= htmlspecialchars($_POST["body"] ?? ""); ?></textarea></div>
<div><span id="character-counter"></span></div>
<div><input type="submit" name="create-article" value="Create Article"></div>
</form>
<?php endif; ?>
Answer:
I recommend that you not store htmlspecialchars() values as a general practice. Of course you can if you wish, but I prefer to keep raw values in the db as a base representation of the data and then only manipulate the value when needed and in the manner required for the given task.
You are (conditionally) calling slugify() twice on the $_GET["title"] value; this is in breach of DRY coding practices. You should call it once at most. If this were my project, I don't think I would allow the call of slugify() (or the trip to the database, for that matter) at all if the title is empty. I don't know if the identical title-slug check is necessary -- it looks like extra work. If you want to validate the title and check whether it is a slug, then rather than running the multiple preg_replace() calls in that custom function and comparing the return value to the input, just make a single preg_match() call to make sure it contains only valid letters and complies with your length requirements.
Because I believe you can rely on the existence of $_POST["title"], empty(trim($title)) can be written as !trim($title) for the same effect with one less function call.
Again, to espouse DRY coding practices, don't call strlen($title) twice; cache the value and check the variable.
All string validations should pass before making any trips to the db, for best performance. You are checking the body length too late; move that check before the SELECT query. | {
"domain": "codereview.stackexchange",
"id": 34581,
"tags": "performance, php, mvc"
} |
How is the scalar product between two vectors defined in general relativity? | Question: Since vectors in general relativity are represented as directional derivative operators, it seems to me that the product of two vectors should be a second order derivative operator. However, the product of two vectors is not an operator, but a scalar number just like a scalar product in linear algebra. Why?
Answer: Vectors are represented as differential operators like
$$
v = v^\mu \partial_\mu.
$$
These are spanned by a basis of $\partial_\mu$.
In addition to vectors, there is also something called a "1-form."
$$
\omega = \omega_\mu dx^\mu.
$$
These are spanned by a basis of $d x^\mu$.
In particular, these 1-forms can be thought of as a map
$$
\omega : \text{tangent vectors} \to \text{scalars}
$$
For instance,
$$
\omega (v) = \omega_\mu v^\mu.
$$
In particular,
$$
d x^\mu ( \partial_\nu) = \delta^\mu_\nu.
$$
and
$$
dx^\mu (v) = v^\mu.
$$
Now, the metric is made up of a sum of symmetric products of these one-forms. (It is not a "2-form" because it is made of symmetric products, not anti-symmetric products.)
$$
g = g_{\mu \nu} dx^\mu \otimes dx^\nu.
$$
Usually the symbol $\otimes$ is dropped. Anyway, the metric is a map
$$
g : \text{two tangent vectors} \to \text{scalars}
$$
and acts as
\begin{align}
g(v, u) &= g_{\mu \nu} dx^\mu \otimes dx^\nu (v, u) \\
&= \frac{1}{2} g_{\mu \nu}(( dx^\mu (v) )( dx^\nu (u) )+( dx^\mu (u) )( dx^\nu (v) )) \\
&= \frac{1}{2} g_{\mu \nu}( v^\mu u^\nu + u^\mu v^\nu) \\
&= g_{\mu \nu} v^\mu u^\nu
\end{align}
so that is really just how everything is defined. | {
"domain": "physics.stackexchange",
"id": 83091,
"tags": "general-relativity, differential-geometry, metric-tensor, vectors, tensor-calculus"
} |
Proper CMakeLists | Question:
It looks like I do not understand something about CMakeLists. In my package, I have two C++ files, one called uart.cpp and another called joint.cpp.
joint.cpp uses uart functions and variables, but only joint.cpp has the ROS node and the main function, so how should I write my CMakeLists? I get undefined references for the uart functions. My CMakeLists:
cmake_minimum_required(VERSION 2.8.3)
project(genius_control)
add_compile_options(-std=c++11)
find_package(catkin REQUIRED COMPONENTS
roscpp
serial
)
catkin_package(
INCLUDE_DIRS
include
CATKIN_DEPENDS
roscpp
LIBRARIES
genius_control
)
include_directories(
include
${catkin_INCLUDE_DIRS}
)
install(DIRECTORY include/${PROJECT_NAME}/
DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
FILES_MATCHING PATTERN "*.h"
PATTERN ".svn" EXCLUDE)
add_library(genius_control
include/${PROJECT_NAME}/uart.h
include/${PROJECT_NAME}/joint.h
src/uart.cpp
src/joint.cpp
)
add_executable(joint src/joint.cpp)
target_link_libraries(genius_control ${catkin_LIBRARIES})
Originally posted by EdwardNur on ROS Answers with karma: 115 on 2019-04-04
Post score: 0
Original comments
Comment by gvdhoorn on 2019-04-04:
Please note: this is not a Catkin or ROS related question. It's purely CMake.
Answer:
You should change the add_executable to
add_executable(joint src/joint.cpp src/uart.cpp); otherwise it cannot locate the implementations of the functions that uart.cpp provides.
As for the add_library, it can include only the header files and not the *.cpp.
As for the target_link_libraries(genius_control ${catkin_LIBRARIES}), you should change it to target_link_libraries(joint ${catkin_LIBRARIES}).
edit: The previous is a working, but not really proper way to solve this, as far as I know. A better way would be the following:
[..previous are ok]
catkin_package(
INCLUDE_DIRS
include
CATKIN_DEPENDS
roscpp
LIBRARIES
genius_control
uart
)
include_directories(
include
${catkin_INCLUDE_DIRS}
)
install(DIRECTORY include/${PROJECT_NAME}/
DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
FILES_MATCHING PATTERN "*.h"
PATTERN ".svn" EXCLUDE)
add_library(uart
src/uart.cpp
)
target_link_libraries(uart
${catkin_LIBRARIES}
)
add_executable(joint
src/joint.cpp
)
target_link_libraries(joint
${catkin_LIBRARIES}
uart
)
Originally posted by kosmastsk with karma: 210 on 2019-04-04
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by EdwardNur on 2019-04-04:
I am getting this error
CMake Error: CMake can not determine linker language for target: genius_control
CMake Error: Cannot determine link language for target "genius_control".
Comment by kosmastsk on 2019-04-04:
I edited my answer, check if this works | {
"domain": "robotics.stackexchange",
"id": 32815,
"tags": "ros-melodic, cmake"
} |
Is there a language with non-isomorphic minimum-state UFAs? | Question: For all regular languages L, by the Myhill-Nerode classes, all state-minimal DFAs for L are isomorphic. On the other hand, "a regular language may have many non-isomorphic state-minimal nfas". What about Unambiguous Finite Automata? Every DFA is also a UFA, so every regular language has at least one UFA. The natural numbers are well-ordered, so in turn every regular language has at least one state-minimal UFA.
Is there a regular language whose state-minimal Unambiguous Finite Automata are not all isomorphic to each other?
Answer: In their paper "A note about minimal non-deterministic automata" Arnold, Dicky and Nivat observe that the problem of finding non-isomorphic minimal automata was solved in a report by Christian Carrez (Lille, 1970).
They give the following example; both automata happen to be unambiguous, so it is applicable in your case too. | {
"domain": "cs.stackexchange",
"id": 7525,
"tags": "finite-automata, nondeterminism"
} |
Different type of padding in image? | Question: I want to implement a gradient operator on an image in Matlab. Do I have to pad the image before applying it? How do I decide which padding to use, e.g. zero-padding or symmetric padding? How do they differ from each other? I am new to the field of image processing; can you provide some references to learn this material?
Answer: Gradients enhance differences. Hence, highly non-smooth padding (like zero-padding) yields higher gradients, since it does not ensure continuity. Symmetric extensions preserve continuity, and anti-symmetric ones are likely to better preserve derivatives. There is a lot of literature on this, for instance in the filter-bank domain. Suppose that the edge sample is $x[0]$; you can have whole-sample anti-symmetry or half-sample anti-symmetry:
$x[-1] = -x[1]+2x[0] $
$x[-1] = x[0] $ and $x[-2] = -x[1]+2x[0] $
This is illustrated in the picture, from the first paper mentioned below. One can find whole-sample (WS) and half-sample (HS) symmetries and anti-symmetries. A lot of image processing operators combine symmetry or anti-symmetry, so those can be useful.
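To see the boundary effect concretely, here is a small NumPy sketch (NumPy's pad modes are used as stand-ins for the extensions discussed above, and the ramp signal is an illustrative example): zero padding creates a jump at the border, so a first-difference gradient spikes there, while the mirrored extensions do not.

```python
import numpy as np

# One row of an image: a smooth ramp ending at the border.
x = np.array([1.0, 2.0, 3.0, 4.0])

# Zero padding breaks continuity at the boundary...
zero_pad = np.pad(x, 2, mode="constant", constant_values=0.0)
# ...symmetric extension mirrors the signal including the edge sample...
sym_pad = np.pad(x, 2, mode="symmetric")
# ...and reflect extension mirrors about the edge sample itself.
refl_pad = np.pad(x, 2, mode="reflect")

for name, padded in [("zero", zero_pad), ("symmetric", sym_pad), ("reflect", refl_pad)]:
    grad = np.diff(padded)  # simple first-difference "gradient"
    print(f"{name:9s} max |gradient| = {np.abs(grad).max():.1f}")
# zero padding yields max |gradient| = 4.0; the mirrored modes stay at 1.0
```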
A couple of pointers:
Lifted linear phase filter banks and the polyphase-with-advance representation (JPEG 2000 image coding applications)
Classification of Nonexpansive Symmetric Extension Transforms for Multirate Filter Banks
Unitary FIR filter banks and symmetry
Recently, unitary FIR filter banks with linear-phase have been
completely parameterized by exploiting the eigenstructure of the
exchange matrix. This correspondence gives new characterizations of
polyphase matrices of filter banks with various types of symmetry on
the filters. Using matrix extensions of the well-known hyperbolic and
orthogonal lattices, we give an alternative proof for the
parameterization of linear-phase unitary filter banks. A complete
parameterization of FIR unitary filter banks with each of the
different types of symmetries considered (not just linear-phase) is
also given. These results can also be used to generate non-unitary
filter banks with symmetries, though no completeness results can be
obtained. In some cases implicit, and in others explicit
parameterization of wavelet tight frames associated with these filter
banks are also given. This paper only considers filter banks with an
even number of channels. A similar theory can be developed if the
number of channels is an odd integer.
Aside, there are many other possible extensions, using constant continuation (or pixel replication), or higher-order (eg linear, polynomial) extrapolations:
Image padding, REFLECTION padding | {
"domain": "dsp.stackexchange",
"id": 9111,
"tags": "image-processing, computer-vision, zero-padding, finite-differences"
} |
How do neutrinos pass through the sun so quickly? | Question: If it takes thousands of years for light to escape our sun then how can neutrinos generated in the very center of the sun reach earth in just 8 minutes? Why is their speed so great that they can easily escape the pull of the sun at near light speed?
Solar neutrinos originate from the nuclear fusion powering the Sun and other stars. The details of the operation of the Sun are explained by the Standard Solar Model. In short: when four protons fuse to become one helium nucleus, two of them have to convert into neutrons, and each such conversion releases one electron neutrino.
The Sun sends enormous numbers of neutrinos in all directions. Each second, about 65 billion ($6.5\times 10^{10}$) solar neutrinos pass through every square centimeter on the part of the Earth that faces the Sun.[8] Since neutrinos are insignificantly absorbed by the mass of the Earth, the surface area on the side of the Earth opposite the Sun receives about the same number of neutrinos as the side facing the Sun.
-Wikipedia
Answer: Just to supplement Sean's answer: neutrinos interact through the weak force. This means a neutrino has to get very close to another particle before it can interact with it, as the weak force has a very short range. The mass of a neutrino is estimated to be
$0.320 \pm 0.081\ \mathrm{eV}/c^2$ (sum of the 3 flavors), which explains their speed, approaching that of light, and also the fact that gravity has very little effect on such a tiny mass.
From Weak Interaction
The weak interaction has a very short range (around $10^{−17}$ to $10^{−16}$ m). At distances around $10^{−18}$ meters, the weak interaction has a strength of a similar magnitude to the electromagnetic force, but this starts to decrease exponentially with increasing distance. At distances of around $3×10^{−17}$ m, the weak interaction is 10,000 times weaker than the electromagnetic.
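As a quick sanity check of the "8 minutes" figure in the question: a particle moving at essentially the speed of light covers the mean Sun-Earth distance in about 500 seconds. A minimal calculation:

```python
AU_M = 1.495978707e11   # mean Sun-Earth distance in meters (one astronomical unit)
C_M_S = 2.99792458e8    # speed of light in m/s

travel_time_s = AU_M / C_M_S
print(f"{travel_time_s:.0f} s = {travel_time_s / 60:.1f} min")  # about 499 s = 8.3 min
```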
Solar Neutrino Detection | {
"domain": "physics.stackexchange",
"id": 33393,
"tags": "photons, standard-model, radiation, sun, neutrinos"
} |
Multiple Turtlebots in Gazebo - how to solve conflicts in multiple cameras? | Question:
Hello all!
I'm trying to introduce two turtlebots in gazebo for some of our experiments, and I followed instructions given in the answer to
answers.ros.org/question/41433/multiple-robots-simulation-and-navigation/
...and successfully spawned two turtlebots with the lasers working correctly (and independently) for both. Now, I want to get the cameras for both the turtlebots working, but have had no luck in that regard yet. I suspect the problem is that the separate tf_prefixes for the two robots are not getting appended to the camera topics of the two robots while publishing, and consequently the robots try to publish to the same topic and fail.
I have not been able to find a solution to this yet, and any help in this regard would be much appreciated!
My Current configuration is:
Ubuntu 11.10 + ROS Electric (yes, I am stuck with Electric and cannot upgrade at the moment),
Gazebo (simulator_gazebo) - version 1.4.15
Also, I get the following ERRORs when i try to roslaunch my launch file for gazebo node:
"Tried to advertise a service that is already advertised in this node [/camera/image_raw/compressed/set_parameters]"
"Tried to advertise a service that is already advertised in this node [/camera/image_raw/theora/set_parameters]"
Here are my launch files:
** create_multi_robot_world.launch **
<?xml version="1.0" ?>
<launch>
<param name="/use_sim_time" value="true" />
<!-- start world -->
<node name="gazebo" pkg="gazebo" type="gazebo" args="-u $(find turtlebot_exp)/worlds/Boyd_5th_floor_brown_doors.world" respawn="false" output="screen" />
<!-- include our robots -->
<include file="$(find turtlebot_exp)/launch/robots.launch"/>
</launch>
** robots.launch **
<?xml version="1.0" ?>
<launch>
<!-- No namespace here as we will share this description.
Access with slash at the beginning -->
<param name="robot_description"
command="$(find xacro)/xacro.py $(find turtlebot_description)/urdf/turtlebot.urdf.xacro" />
<!-- BEGIN ROBOT 1-->
<group ns="robot1">
<param name="tf_prefix" value="robot1_tf" />
<include file="$(find turtlebot_exp)/launch/one_robot.launch" >
<arg name="init_pose" value="-x 2.5 -y 0 -z 0" />
<arg name="robot_name" value="Robot1" />
</include>
</group>
<!-- BEGIN ROBOT 2-->
<group ns="robot2">
<param name="tf_prefix" value="robot2_tf" />
<include file="$(find turtlebot_exp)/launch/one_robot.launch" >
<arg name="init_pose" value="-x -1.5 -y 0 -z 0" />
<arg name="robot_name" value="Robot2" />
</include>
</group>
</launch>
** one_robot.launch **
<?xml version="1.0" ?>
<launch>
<arg name="robot_name"/>
<arg name="init_pose"/>
<node name="spawn_minibot_model" pkg="gazebo" type="spawn_model" args="$(arg init_pose) -urdf -param /robot_description -model $(arg robot_name)" respawn="false" output="screen"/> <!-- launch-prefix="xterm -e ddd" /> -->
<node pkg="diagnostic_aggregator" type="aggregator_node" name="diagnostic_aggregator" >
<rosparam command="load" file="$(find turtlebot_bringup)/config/diagnostics.yaml" />
</node>
<node pkg="robot_state_publisher" type="state_publisher" name="robot_state_publisher" output="screen">
<param name="publish_frequency" type="double" value="30.0" />
</node>
<!-- The odometry estimator, throttling, fake laser etc. go here -->
<!-- All the stuff as from usual robot launch file -->
<!-- The odometry estimator -->
<node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf">
<param name="freq" value="30.0"/>
<param name="sensor_timeout" value="1.0"/>
<param name="publish_tf" value="true"/>
<param name="odom_used" value="true"/>
<param name="imu_used" value="false"/>
<param name="vo_used" value="false"/>
<param name="output_frame" value="odom"/> <!-- This line is added by @KPM to enable running of amcl in simulator -->
</node>
<!-- throttling -->
<node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager" respawn="true">
<param name="max_rate" value="20.0"/>
<remap from="cloud_in" to="camera/depth/points"/>
<remap from="cloud_out" to="cloud_throttled"/>
</node>
<!-- Fake Laser -->
<node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager" respawn="true">
<param name="output_frame_id" value="camera_depth_frame"/>
<!-- heights are in the (optical?) frame of the kinect -->
<param name="min_height" value="-0.15"/>
<param name="max_height" value="0.15"/>
<remap from="cloud" to="cloud_throttled"/>
</node>
<!-- Fake Laser (narrow one, for localization -->
<node pkg="nodelet" type="nodelet" name="kinect_laser_narrow" args="load pointcloud_to_laserscan/CloudToScan openni_manager" respawn="true">
<param name="output_frame_id" value="camera_depth_frame"/>
<!-- heights are in the (optical?) frame of the kinect -->
<param name="min_height" value="-0.025"/>
<param name="max_height" value="0.025"/>
<remap from="cloud" to="cloud_throttled"/>
<remap from="scan" to="narrow_scan"/>
</node>
</launch>
Originally posted by kedarm on ROS Answers with karma: 86 on 2013-06-22
Post score: 1
Original comments
Comment by Martin Günther on 2013-06-23:
The "tried to advertise a service" warnings can be ignored, they always pop up (I think). Don't confuse tf_prefix with remapping, you need to get both right. If remapping works, rostopic list should show sth like /robot1/camera/... and not just /camera/....
Comment by kedarm on 2013-06-30:
I thought so too initially, but I still don't see any camera feed from either of the two turtlebots. Also, these are not merely warnings, they pop up as "ERRORS". I tried to see if /robot_1/camera/.. gets published, but it's not. All I see is /camera/...
Answer:
OK, I have finally been able to find a solution to this problem. The problem here was two-fold.
1. The /camera/image_raw topic does not get resolved to the correct TF prefix when being published.
2. The correct frame ID does not get published with the image topic when there are multiple robots with the same camera.
Solution to problem 1:
In the turtlebot_description package, inside the "urdf" folder, there is a file named "gazebo.urdf.xacro". Modify the "turtlebot_sim_kinect" macro in that file and remove the leading "/" from the imageTopicName element. (I also removed it from other topic names since having absolute topic names here is a bad practice to start with).
Now, the final "turtlebot_sim_kinect" xacro should look like:
<xacro:macro name="turtlebot_sim_kinect">
<gazebo reference="camera_link">
<sensor:camera name="camera">
<imageFormat>R8G8B8</imageFormat>
<imageSize>640 480</imageSize>
<hfov>60</hfov>
<nearClip>0.05</nearClip>
<farClip>3</farClip>
<updateRate>20</updateRate>
<baseline>0.1</baseline>
<controller:gazebo_ros_openni_kinect name="kinect_camera_controller" plugin="libgazebo_ros_openni_kinect.so">
<alwaysOn>true</alwaysOn>
<updateRate>20</updateRate>
<imageTopicName>camera/image_raw</imageTopicName>
<pointCloudTopicName>camera/depth/points</pointCloudTopicName>
<cameraInfoTopicName>camera/camera_info</cameraInfoTopicName>
<frameName>camera_depth_optical_frame</frameName>
<distortion_k1>0.0</distortion_k1>
<distortion_k2>0.0</distortion_k2>
<distortion_k3>0.0</distortion_k3>
<distortion_t1>0.0</distortion_t1>
<distortion_t2>0.0</distortion_t2>
</controller:gazebo_ros_openni_kinect>
</sensor:camera>
</gazebo>
</xacro:macro>
...save the file and exit. Now, the /camera/image_raw topic should resolve correctly to the correct robot prefix.
Namely... "/robot1/camera/image_raw" and "/robot2/camera/image_raw" in my case.
Note: I believe recompilation is unnecessary since we only modified the xacro, although I did recompile without realizing this at first. Even if you DO recompile, it shouldn't do any harm.
Solution to problem 2:
The camera topics don't carry the correct frame ID for the images they publish, which means that although the camera topics are separate, the frame IDs do not get resolved correctly to the appropriate TF prefix. Hence, someone who wants to use the two independent camera images cannot do so.
This is related to a similar bug in the laser sensor that was fixed here:
code.ros.org/trac/ros-pkg/ticket/5511
Using the same principle that was applied in the laser sensor bug fix, I applied the solution to the camera sensor as follows...
Go to the "gazebo_ros_openni_kinect.cpp" file in /src of the "gazebo_plugins" package. (This is the source for the camera controller specified in "gazebo.urdf.xacro" file earlier)
Add the following to the code:
1. Add this at the top with the other includes:
#include "tf/tf.h"
2. Add the following lines inside the "GazeboRosOpenniKinect::LoadChild" function:
std::string prefix;
this->rosnode_->getParam(std::string("tf_prefix"), prefix);
this->frameName = tf::resolve(prefix, this->frameName);
NOTE: add the above lines before any of the topics is advertised! Ideally, just before the line-
this->image_pub_ = this->itnode_->advertise(
(This makes sure the TF prefix is resolved before any topic is advertised)
Finally... save, exit, and recompile the gazebo_plugins package.
Voila!!! The multiple cameras should be up and running!
Originally posted by kedarm with karma: 86 on 2013-10-03
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by jorge on 2013-10-03:
Which version of turtlebot_description are you using? The leading "/" problem is fixed on hydro, but your fix must be applied to previous versions.
Comment by bit-pirate on 2013-10-03:
Please also fill tickets on GitHub for bugs you find. Also code.ros.org is not used any more and most packages have been migrated, but not all tickets.
Comment by kedarm on 2013-10-03:
@jorge: Yeah... I am stuck with Electric, hence the leading "/" problem. It did occur to me that the problem might be fixed in newer versions. Anyway, I'll try bit-pirate's suggestion and file a ticket (and a fix if I can) on GitHub.
@bit-pirate: I'll definitely file a ticket on GitHub. I gave the link to that code.ros.org ticket because that was where I found it. Thanks a bunch for the suggestion though! :-) | {
"domain": "robotics.stackexchange",
"id": 14666,
"tags": "ros, gazebo, turtlebot, camera, multiple"
} |
Subnetting calculator in Go | Question: Whenever I try to learn a new language, I write a subnetting calculator. Here's my attempt with Go.
In particular, I think I'm not writing idiomatic Go, as well as abusing float64(), uint64(), etc.
My math should be correct, although I might have goofed when trying to translate JavaScript's number types into Go's C-esque versions.
Also, I know I should do some error-checking on the inputs (e.g. seeing if I get a logical IP instead of a string like "cat", but that'll come once I learn how to write correct Go ;))
package main
import (
"flag"
"fmt"
"math"
"os"
"strconv"
"strings"
)
const SIXTY_FOUR_BITS uint64 = 18446744073709551615
const MAX_BIT_VALUE int = 32
const MAX_BIT_BIN uint64 = 255
func main() {
var pf uint64
var unpack_pf uint64
flag.Parse()
if len(flag.Args()) == 0 {
flag.Usage()
os.Exit(1)
}
var ip_address string = flag.Args()[0]
var netmask = flag.Args()[1]
var ip_arr []string = strings.Split(ip_address, ".")
var sm_arr []string = strings.Split(netmask, ".")
pf, _ = strconv.ParseUint(netmask, 10, 32)
if len(sm_arr) == 1 {
if pf <= 32 {
unpack_pf = unpack_int(pf)
sm_arr = strings.Split(int_qdot(unpack_pf), ".")
netmask = strings.Join(sm_arr, ".")
} else if pf > 32 {
pf = host_pf(pf)
unpack_pf = unpack_int(pf)
sm_arr = strings.Split(int_qdot(unpack_pf), ".")
netmask = strings.Join(sm_arr, ".")
}
if pf == 0 {
panic("Cannot have netmask < 0 or 0.0.0.0")
}
}
var ip_dec uint64 = qdot_int(ip_arr)
var nm_dec uint64 = qdot_int(sm_arr)
var network string = network_address(ip_dec, nm_dec)
var broadcast string = broadcast_address(ip_dec, nm_dec)
var wildcard string = int_qdot(^nm_dec)
var prefix uint64 = qdot_pf(nm_dec)
var ip_class string = class(ip_arr[0])
var num_hosts float64 = hosts(prefix)
var num_subnets float64 = subnets(nm_dec)
var net_min string = int_qdot(qdot_int(strings.Split(network, ".")) + 1)
var net_max string = int_qdot(qdot_int(strings.Split(broadcast, ".")) - 1)
var ip_range string = net_min + " - " + net_max
var addr_type string = "IPv4"
write_results(addr_type, ip_address, ip_dec, netmask, nm_dec, wildcard, ip_class, network, prefix, net_min, net_max, broadcast, ip_range, num_subnets, num_hosts)
}
func write_results(addr_type, ip_address string, ip_dec uint64, netmask string, nm_dec uint64, wildcard string, ip_class string, network string, prefix uint64, net_min string, net_max string, broadcast string, ip_range string, num_subnets float64, num_hosts float64) int {
fmt.Printf("====================\n %s RESULTS \n====================\n\nAddress: - %s\n (hex): - %#X\n (dec): - %d\n (bin): - %b\nNetmask: - %s = %d\n (hex): - %#X\n (dec): - %d\n (bin): - %b\nWildcard: - %s\nClass: - %s\nNetwork: - %s\nNet Min: - %s\nNet Max: - %s\nBroadcast: - %s\nRange: - %s\nSubnets: - %.0f\nHosts/net: - %.0f\n\n-\n", addr_type, ip_address, ip_dec, ip_dec, ip_dec, netmask, prefix, nm_dec, nm_dec, nm_dec, wildcard, ip_class, network, net_min, net_max, broadcast, ip_range, num_subnets, num_hosts)
return 0
}
func unpack_int(sm_int uint64) uint64 {
return SIXTY_FOUR_BITS << (uint64(MAX_BIT_VALUE) - sm_int)
}
func qdot_int(qdot []string) uint64 {
var a uint64 = 0
var w uint64
var x uint64
var y uint64
var z uint64
w, _ = strconv.ParseUint(qdot[0], 10, MAX_BIT_VALUE)
x, _ = strconv.ParseUint(qdot[1], 10, MAX_BIT_VALUE)
y, _ = strconv.ParseUint(qdot[2], 10, MAX_BIT_VALUE)
z, _ = strconv.ParseUint(qdot[3], 10, MAX_BIT_VALUE)
a += w << 24
a += x << 16
a += y << 8
a += z << 0
return a
}
func int_qdot(integer uint64) string {
var w string = strconv.FormatUint(integer>>24&MAX_BIT_BIN, 10)
var x string = strconv.FormatUint(integer>>16&MAX_BIT_BIN, 10)
var y string = strconv.FormatUint(integer>>8&MAX_BIT_BIN, 10)
var z string = strconv.FormatUint(integer&MAX_BIT_BIN, 10)
var a []string = []string{w, x, y, z}
return strings.Join(a, ".")
}
func network_address(ip uint64, sm uint64) string {
return int_qdot(ip & sm)
}
func broadcast_address(ip uint64, sm uint64) string {
return int_qdot(ip | ^sm)
}
func host_pf(hn uint64) uint64 {
var x float64
if 0 < hn {
x = float64(MAX_BIT_VALUE) - math.Ceil(math.Log2(float64(hn)))
} else {
x = 0
}
return uint64(x)
}
func qdot_pf(qdot uint64) uint64 {
var x uint64 = qdot
var mask []uint64 = []uint64{
0x55555555,
0x33333333,
0x0F0F0F0F,
0x00FF00FF,
0x0000FFFF,
}
var shift uint64
var i uint64
for i, shift = 0, 1; i < 5; i, shift = i+1, shift*2 {
x = x&mask[i] + x>>shift&mask[i]
}
return x
}
func class(bit string) string {
var ip uint64
var x string
ip, _ = strconv.ParseUint(bit, 10, 32)
if ip < 128 {
x = "Class A"
} else if ip < 192 {
x = "Class B"
} else if ip < 224 {
x = "Class C"
} else if ip < 240 {
x = "Class D"
} else if ip < 256 {
x = "Class E"
} else if ip < 0 {
panic("Invalid IP")
} else {
panic("Super invalid IP")
}
return x
}
func subnets(pf uint64) float64 {
var mod_base float64 = math.Mod(float64(pf), 8)
var x float64
if mod_base > 0 {
x = math.Pow(2, mod_base)
} else {
x = 256
}
return x
}
func hosts(bits uint64) float64 {
var x float64 = float64(bits)
if x >= 2 {
x = math.Pow(2, (float64(MAX_BIT_VALUE)-x)) - 2
}
return x
}
Answer: You have correctly noticed that this isn't extremely idiomatic. However, this code also has some issues that are completely language independent:
You have a large amount of magic numbers. Either explain them with a short comment, or (preferably) calculate them. Let the compiler do constant folding; don't do it yourself.
Your code does not have a single comment. Not having comments does not mean that your code would be self-documenting.
You have the tendency to declare your variables up front. Try to put the declaration as close to its first usage as possible, and try to restrict variables to the narrowest possible scope. Except in dated languages such as Pascal or C89, declaration at the top of a scope is generally not required. In fact, it is often beneficial never to declare variables without also defining them in the same statement.
Your variable names are quite arbitrary. Names such as w, x, y, z are unacceptable. Instead, byte1, byte2, … might be preferable, but that means you actually want a loop. Let the compiler unroll loops if it sees fit, don't do so yourself (at least without benchmarking).
Your main function does both general startup such as parsing arguments, and contains the main algorithm. Separate these two concerns. When you split out the algorithm in its own function, consider returning a struct instead of a large tuple of return values.
If you have some kind of template in your code, try to make this template look similar to the result. In the case of your write_results format string, this means splitting it up into several lines, and concatenating the string pieces to the final format. Try to limit your line length to avoid line wrapping or horizontal scrolling in your code.
A few Go-specific remarks beyond those issues mentioned by Yuushi:
If the exact size of a numeric type is relevant in your program (and this is the case here), don't use int or uint, but int32 or something like that. The size of int is implementation-specific and may either be 32 bit or 64 bit. (See the language spec)
If a built-in type has a special semantic meaning in your program, declare a type alias, e.g. type Ip4Address uint32.
Once you have a type for your IP addresses, you can use Go's OOP features for a more elegant interface. Instead of func hosts(bits uint64) float64 consider func (this Ip4Address) Hosts() int.
If we're already discussing that hosts function: Why on earth are you working with floats when you are actually calculating a discrete value? Note that \$2^n\$ can be calculated as 1 << n for integers, so you don't have to use math.Pow. | {
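To illustrate that last point in a language-neutral way (Python here; the expression 1 << n works identically in Go), a tiny sketch:

```python
# 2**n for a non-negative integer n is an integer shift; no need for math.Pow
host_bits = 32 - 24              # host bits in a /24 network
usable_hosts = (1 << host_bits) - 2
print(usable_hosts)              # 254
```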
"domain": "codereview.stackexchange",
"id": 9912,
"tags": "networking, go"
} |
Frictional wheels without slipping (gear system) | Question: In the figure below it is stated that the friction force should always be greater than the tangential force in order to prevent slipping between two frictional wheels. My question is: if we apply Newton's second law to that contact point, won't the resultant of the tangential force and the friction point upwards, in the direction of the friction, since the friction force is larger, and thus won't the wheels rotate in the opposite direction?
Answer: The concept behind
the friction force should always be greater than the tangential force
is that the maximum friction force should be greater than the force that causes the rotation of the wheel. If you try to apply more torque, then there will be slipping.
This is very similar to the following concept:
If a small force $F_A$ is applied, then the friction force will only be equal to that force $F_A$ and the box will not move. If $F_A$ exceeds the maximum friction force then the box will slide.
Important Sidenote:
That sliding motion is what we usually consider as motion, which sometimes causes confusion in the case of the gears the OP presented, i.e. the rotation is again considered motion. However, in the case of the gears there is no sliding -- i.e. at the contact point of gear A and gear B the relative velocity is zero; put another way, they move in unison.
"domain": "engineering.stackexchange",
"id": 4560,
"tags": "mechanical-engineering, gears, dynamics, kinematics, machine-design"
} |
Spawning a Robot in Gazebo using Python Launch File | Question:
This is something I struggled with for a few hours today and I wanted to post this for posterity. If you have a URDF file and want to launch Gazebo with the robot spawned then this will get you there:
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node


def generate_launch_description():
    launch_gazebo_arg = IncludeLaunchDescription(
        PythonLaunchDescriptionSource([get_package_share_directory('ros_ign_gazebo'), '/launch', '/ign_gazebo.launch.py']),
        launch_arguments={
            'ign_args': ''  # add a path to a world file here if so desired
        }.items(),
    )
    spawn_robot_arg = Node(
        package='ros_ign_gazebo',
        executable='create',
        output='screen',
        arguments=["-file", "<path_to_urdf>/robot.urdf"]
    )
    return LaunchDescription([
        launch_gazebo_arg,
        spawn_robot_arg
    ])
Originally posted by ROS_Engineer on ROS Answers with karma: 16 on 2023-03-07
Post score: 0
Answer:
Solution to spawning a robot from a URDF file into a Gazebo instance provided above. Hope it helped.
Originally posted by ROS_Engineer with karma: 16 on 2023-03-07
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 38307,
"tags": "ros, sdf"
} |
Rigid body - a bucket in a well | Question: Imagine that we want to get a bucket filled with water out of a well whose mechanism looks something like what I have drawn. Then, we want to calculate the acceleration the bucket would have if we stopped applying torque. The force that would cause the bucket to accelerate would be the tension of the rope.
My question is, can we assume that the force of gravity acting on the bucket is very close to the tension of the rope and use its value in our calculations instead?
Answer: I'm not sure I understood the question, but if the pulley is massless then $T$ is zero when the torque is zero. If it has mass, then we need its radius $R$ and its moment of inertia $I$.
So we can write the kinematic relation between the bucket and the angular acceleration and velocity of the pulley $v=\omega R$, $a=\dot\omega R$.
Then we need the energy relation $W={dK\over dt}$ where $W=mgv$ is the power of the weight and $K=\frac12mv^2+\frac12I\omega^2$ is the kinetic energy of the system:
$mgv=mva+I\omega\dot\omega\Rightarrow mgv=mva+\frac{I}{R^2}va\Rightarrow a=\frac{mg}{m+I/R^2}$
Now you can find $T$ using Newton's law $mg-T=ma\Rightarrow T=mg\frac{I/R^2}{m+I/R^2}$
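As a quick sanity check (my addition; the uniform-disc assumption is illustrative): if the pulley is a uniform disc of mass $M$, then $I=\tfrac{1}{2}MR^2$, so $I/R^2=M/2$ and
$$a=\frac{mg}{m+M/2},\qquad T=\frac{mgM}{2m+M},$$
which reduces to $a=g$ and $T=0$ in the massless limit $M\to 0$, consistent with the massless-pulley remark at the start of the answer.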
"domain": "physics.stackexchange",
"id": 40452,
"tags": "newtonian-mechanics, rigid-body-dynamics"
} |
Why is FORTRAN so commonly used in computational chemistry? | Question: I've been using Ruby to write scripts for research, but I want to get into some heavier stuff that Ruby is just too slow for. I noticed there are a few things written in C and C++, but there is an oddly large proportion of software used in computational chemistry that is written in FORTRAN (in which I have zero experience.)
Why is FORTRAN used in computational chemistry? From what I understand, FORTRAN is kind of on the ancient side (“punchcard” old.) I was a bit shocked to find fairly recently written tutorials for FORTRAN.
Is it a sort of, "this is how we've always done it," thing or is there an efficiency aspect to FORTRAN I'm overlooking?
Note: I may have FORTRAN confused with later programming languages with similar names.
Answer: I don't think that's really true anymore.
Some Fortran use is historical (i.e., early codes were developed in FORTRAN because that was the best programming language for number crunching in the 70s and 80s). Heck, the name stands for "formula translation."
Some Fortran use is because of performance. The language was designed to be:
especially suited to numeric computation and scientific computing.
Many times, I find chemistry coders sticking to Fortran because they know it and have existing highly optimized numeric code-bases.
I think the performance side isn't necessarily true anymore when using modern, highly optimizing C and C++ compilers.
I write a lot of code in C and C++ for performance and glue a lot of things with Python. I know some quantum programs are written exclusively or primarily in C++. Here are a few open source examples:
Psi4 - Written in C++ and Python
MPQC - Written in C++
LibInt - Written in C++ for efficient quantum integrals.
LibXC - Written in C with Fortran "bindings" for DFT exchange-correlation functionals
This is my opinion, but my recommendation for faster performance in chemistry would be Python with some C or C++ mixed in.
I find I'm more efficient coding in Python, partly because of the language, partly because of the many packages, partly since I don't have to compile, and that's all important.
Also, you can run Python scripts and functions in parallel, on the GPU, and even compile them, e.g. with Numba. As I said, if I think performance is crucial, I'll write pieces in C or usually C++ and link to Python as needed. | {
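To make the "write pieces in C or C++ and link to Python" workflow concrete, here is a hypothetical pure-Python hot loop of the kind one would port to C/C++ or hand to Numba (the function name and the Lennard-Jones example are my own, not from any particular package):

```python
import math

def lj_energy(coords, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy over all unique particle pairs --
    the kind of O(N^2) inner loop worth moving to compiled code."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            sr6 = (sigma * sigma / r2) ** 3           # (sigma/r)**6
            e += 4.0 * epsilon * (sr6 * sr6 - sr6)    # 4eps[(s/r)^12 - (s/r)^6]
    return e

# two particles at the potential minimum r = 2**(1/6)*sigma -> energy = -epsilon
print(lj_energy([(0.0, 0.0, 0.0), (2.0 ** (1.0 / 6.0), 0.0, 0.0)]))  # approximately -1.0
```

Profiling a loop like this, then rewriting only the pair loop in C++ (or decorating it with Numba's @njit), typically captures most of the available speedup while keeping the rest of the program in Python.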
"domain": "chemistry.stackexchange",
"id": 8293,
"tags": "computational-chemistry, theoretical-chemistry"
} |
Is there a known closed-form expression for the susceptibility of the 2-D Ising model at $B = 0$? | Question: The Onsager solution for the 2-D Ising model allows us to find (among other things) complicated expressions for the internal energy of the system (in the thermodynamic limit and in zero magnetic field):
$$
u \equiv \frac{U}{JN} = - \coth \frac{2}{t} \left\{ 1 + \frac{2}{\pi} \left[ 2 \tanh^2 \left( \frac{2}{t} \right) - 1 \right] K\!\left[4 \, \text{sech}^2 \left( \frac{2}{t} \right) \tanh^2 \left( \frac{2}{t} \right) \right] \right\}
$$
where $t \equiv kT/J$ is the dimensionless temperature and $K(x)$ is a complete elliptic integral of the first kind. We can then (in principle) find a closed-form expression for the heat capacity, $C = \partial U/\partial T$.
Further, the net mean magnetization is known to be
$$
m = \begin{cases} \left[ 1 - \text{csch}^4 (2/t) \right]^{1/8} & t < 2/\ln(1 + \sqrt{2}) \\ 0 & t > 2/\ln(1 + \sqrt{2})
\end{cases}
$$
The question is then:
Is there a known closed-form expression for the magnetic susceptibility $\chi$ of the 2-D Ising model at zero field?
My (limited) intuition tells me that there should be, because energy and heat capacity are related to the first and second derivatives of the partition function with respect to $\beta$, and we have closed-form expressions for both of those quantities. Similarly, the magnetization and susceptibility are related to the first and second derivatives of the partition function with respect to the external field—but I have not been able to find a source that discusses a closed-form expression for $\chi$, only for $m$. Am I just looking at the wrong sources, or is there not actually a known expression for $\chi$ at zero field?
Answer: There are no explicit expressions, as far as I know, only expressions in the form of (complicated) infinite series, originating from expressing the magnetic susceptibility as a sum over 2-point correlation functions and using the exact expressions known for the latter. These have been used to analyze the remarkable analytic properties of the magnetic susceptibility.
The resulting expressions being very complicated, it seems pointless to reproduce them here. You can find them (together with links to the relevant literature) in McCoy's 2009 book; see Section 10.1.9 therein. You may also have a look at his article on scholarpedia.
In addition, a 2010 review of the history of this problem by some of its main investigators can be found here.
My (limited) intuition tells me that there should be, because energy and heat capacity are related to the first and second derivatives of the partition function with respect to β, and we have closed-form expressions for both of those quantities. Similarly, the magnetization and susceptibility are related to the first and second derivatives of the partition function with respect to the external field—but I have not been able to find a source that discusses a closed-form expression for χ, only for m.
Note that there are no known expressions for the free energy as a function of the magnetic field. This prevents the computation of the susceptibility by differentiating the free energy, which is the reason the available computations rely instead on correlation functions. (This is actually also the typical way the spontaneous magnetization is computed: $m^2=\lim_{n\to\infty} \langle \sigma_{(0,0)}\sigma_{(n,0)}\rangle$.) | {
"domain": "physics.stackexchange",
"id": 91506,
"tags": "statistical-mechanics, magnetic-fields, ising-model, specific-reference"
} |
How do we argue that there are only integer-spaced $m$ values for angular momentum? | Question: I am following the usual development using ladder operators (in Ballentine Chapter 7) for the angular momentum spectrum (for the joint eigenbasis $|j,m \rangle$ of $J^2$ and $J_z$) and I am satisfied with the logic up to the point where we have demonstrated the existence of a maximum and minimum value for $m$ (given fixed $j$); namely, $m_{max} = j$ and $m_{min} = -j$.
I cannot convince myself of the next step though. Ballentine states without proof that
We have thus shown the existence of a set of eigenvectors corresponding to integer spaced $m$ values in the range $−j \leq m \leq j$.
I am not sure I agree though. Up to this point in the argument, we have merely shown that $m$ obeys $−j \leq m \leq j$ and, further, that it is possible to add or subtract one from $m$ and still obtain a valid eigenstate as long as we remain within those bounds. But why should only integer-spaced $m$ be admissible? I recall hearing hazy arguments to the effect that, if not, then one could use the raising operator over and over and so obtain some $m$ violating $−j \leq m \leq j$. But why must this be so? We simply need to require (for example) that if $j-1 < m \leq j$, then $J_+|j,m\rangle = 0$ (i.e. just as with $|j,j\rangle$). Can someone explain the general argument leading to only integer-spaced values between $-j$ and $j$ being allowed?
This seems to me a requirement in order to assert that $j - (-j) \in \mathbb{N}$ so that one obtains the usual $j = k/2$ for $k \in \mathbb{N}$ condition on allowed values of the angular momentum.
Answer: Look carefully at the formulas (7.8 - 7.10). If you omit the words "maximum value of $m$", then these formulas simply demonstrate that for any state $|\beta, j\rangle$ such that
$$
J_{+}|\beta, j\rangle = 0 \quad (7.8)
$$
the following condition must be met
$$
\beta = j(j+1).\quad (7.10)
$$
If you have two different states satisfying (7.8), $|\beta, j_1\rangle$ and $|\beta, j_2\rangle$, it follows from (7.10) that the equality must hold
$$
j_1(j_1+1) = j_2(j_2+1).
$$
This can be rewritten as $(j_1-j_2)(j_1+j_2+1)=0$, so the last equality implies $j_1 = j_2$ or $j_1 + j_2 + 1 = 0$. I suppose there must be a way to show that $j_1 = j_2$. For example, we can similarly consider the situation with the minimum value of $m$.
"domain": "physics.stackexchange",
"id": 95612,
"tags": "quantum-mechanics, angular-momentum"
} |
How do I prove that delta - sinc function is the same as an (-1)^n times the sinc | Question: $$\delta(n) - \frac{1}{2} \mbox{sinc} \left(\frac{n}{2}\right) = (-1)^n \frac{1}{2} \mbox{sinc} \left( \frac{n}{2} \right)$$
The picture shows what I've tried
Answer: Hint : Use the fact that, $\sin(-\frac{n\pi}{2}) = (-1)^n \sin(\frac{n\pi}{2}), \forall n \in \mathbb Z$ | {
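To expand the hint into a full verification (my sketch, assuming the normalized convention $\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)$, which is what makes the identity work): the identity to prove is equivalent to
$$\delta(n)=\tfrac{1}{2}\,\mathrm{sinc}\left(\tfrac{n}{2}\right)\left(1+(-1)^n\right),$$
which can be checked case by case. For odd $n$, $1+(-1)^n=0$ and $\delta(n)=0$. For even $n\neq 0$, $\mathrm{sinc}(n/2)=\sin(\pi n/2)/(\pi n/2)=0$ because $n/2$ is a nonzero integer. For $n=0$, $\mathrm{sinc}(0)=1$, so the right-hand side is $\tfrac{1}{2}\cdot 1\cdot 2=1=\delta(0)$.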
"domain": "dsp.stackexchange",
"id": 8781,
"tags": "sinc"
} |
Modulation and allocation in GSM900 | Question: I want to do one project and I just want to make sure that I've understood correctly.
The fundamental frequency of the human voice is around 85-255 Hz.
When we talk on the phone, the carrier is modulated with the voice signal in order to shift the frequency spectrum higher for transmission. Take for example GSM900, which is 876 - 915 MHz paired with 921 - 960 MHz. Does that mean that our speech will be in this GSM900 range after modulation?
Moreover, how much bandwidth does a user get in the 876 - 915 MHz paired with 921 - 960 MHz range? Any articles about that?
Answer: For the classical analog wired telephony lines, the human speech bandwidth is taken to be in [300 , 3000] Hz. This range can be extended by different similar applications such as wireless handsets, voice over IP, streaming apps, and GSM.
Most probably GSM also takes the human speech range to be in a similar bandwidth, although later mobile phone standards (4G/4G+ etc.) also support higher-quality voice services and hence consider a wider speech bandwidth.
Your understanding of modulation is basically correct for classical analog continuous wave amplitude modulation (AM, DSB-SC, SSB etc) in which a carrier cosine wave amplitude is continuously controlled by the instantaneous amplitude of the message signal which is voice in this case.
However, GSM for cellular phones employs a digital modulation scheme (GMSK, Gaussian minimum-shift keying). Hence it sends bits instead of continuous amplitudes. Note that it does not send the bits as 5 V or 0 V states (as on a wire inside a digital computer); rather, it still uses bursts of sine waves whose various attributes convey the digital information. Many other digital modulation techniques also exist.
Furthermore and most importantly, GSM is not sending the waveform shape of the input speech; rather it transforms the input speech into a compressed form (using LPC type of voice coder) and sends those bits instead. | {
"domain": "dsp.stackexchange",
"id": 6622,
"tags": "modulation, bandwidth"
} |
Overall accuracy +/- E (with 90% C.I.) | Question: I am assessing the accuracy of my classification model. I performed 4-fold cross-validation and I obtained the following Overall Accuracy: OA = (0.910, 0.920, 0.880, 0.910). So, the average OA is 0.905. My dataset contains 120 samples, therefore in each fold of cross-validation I used 90 samples for training (3/4) and 30 samples for validation (1/4).
Now I want to calculate the 95% confidence interval around the mean. I am thinking of using the following formula to calculate the symmetrical interval with respect to the average (confidence interval of the binomial proportion):
interval = z * sqrt ((accuracy * (1 - accuracy)) / n)
where z is the critical value (number of standard deviations) of the Gaussian distribution and n is the number of samples. For a 95% C.I., z = 1.96. But,
What I should use n value? 120, 30, 4?
Is there a better way to calculate it?
Answer: Here $n=120$ will be the right answer, as you are calculating the interval over the total number of trials performed across all folds.
The confidence interval here is over a series of binary trials, which here is the per-data point classifications.
So the relevant trial count here is the total number of trials across the k-fold validation, which is
$$30 \times 4 = 120$$
from Wikipedia
Using the normal approximation, the success probability p is estimated as
$$\hat{p} \pm z\sqrt{\frac{\hat{p}\left(1-\hat{p}\right)}{n}},$$
or the equivalent
$$\frac{n_S}{n} \pm \frac{z}{n\sqrt{n}}\sqrt{n_S n_F},$$
... measured with $n$ trials
In this case, each trial is a classification, since each prediction on validation data point is a Bernoulli trial (binary classifications) with some $n_s$ successes (correct classifications) and $n_f$ failures (incorrect classifications). | {
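Plugging the question's numbers into the formula (a minimal sketch; wald_interval is an illustrative name, not a library function):

```python
import math

def wald_interval(accuracy, n, z=1.96):
    """Half-width of the normal-approximation (Wald) confidence
    interval for a binomial proportion."""
    return z * math.sqrt(accuracy * (1.0 - accuracy) / n)

half = wald_interval(0.905, 120)   # n = 30 validation points * 4 folds
print(f"0.905 +/- {half:.3f}")     # 0.905 +/- 0.052
```

With $n=120$ this gives roughly $0.905 \pm 0.052$ at 95% confidence; using $n=30$ or $n=4$ instead would inflate the half-width by factors of $2$ and $\sqrt{30}\approx 5.5$ respectively.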
"domain": "datascience.stackexchange",
"id": 11431,
"tags": "accuracy, performance, confidence"
} |
Is escape velocity really a velocity (rather than a speed)? | Question: The term escape velocity is quite common to us. But we know velocity is a quantity that depends on direction, while escape velocity has the same value irrespective of the direction in which something is thrown from the earth. So it's just the magnitude that matters, not the direction. In that sense it should be given the name escape speed. But it is not, so can anyone explain?
Answer: As others have said, escape velocity is a speed, not a velocity. As to why, see the etymology of the word velocity:
early 15c., from Latin velocitatem (nominative velocitas) "swiftness, speed," from velox (genitive velocis) "swift, speedy, rapid, quick," of uncertain origin.
Velocity used to mean speed, and we still say things like "high velocity bullet". The vector-quantity meaning came later. Interestingly, when you look at translations of Einstein's papers such as here, you see the word velocity used instead of the word speed. Then when you backtrack to the original German, you find the original word was Geschwindigkeit. Native German speakers have advised me that it means both speed and velocity. | {
"domain": "physics.stackexchange",
"id": 23373,
"tags": "soft-question, terminology, velocity, speed, escape-velocity"
} |
motoman stack on ros industrial | Question:
I am trying to install the motoman stack from ros industrial on a fuerte installation. When I compile (with a rosmake), I receive the error "Cannot find required resource: warehouse_ros". I have downloaded and installed warehouse_ros from the git repository located at https://github.com/ros-planning/warehouse_ros.git, but the problem did not go away. Any ideas?
BTW, the reason that I am doing this is to experiment with the Fast_IK solution found at http://www.ros.org/wiki/Industrial/Tutorials/Create_a_Fast_IK_Solution. However, when I installed this package, I was missing the sia10d_mesh_arm_navigation and started down a path of installing from git. Any help with the Fast_IK would also be appreciated!
Originally posted by Steveb on ROS Answers with karma: 26 on 2013-04-02
Post score: 0
Answer:
It appears as though you are using a groovy version of ROS-Industrial. The dependency on warehouse_ros was added for groovy. Assuming this is the case, I would recommend one of the following:
Install the binary ROS-Industrial version for motoman: sudo apt-get install ros-fuerte-motoman.
Install from source to get the latest functionality (i.e. close to groovy), use the fuerte dev branch: here
Originally posted by sedwards with karma: 1601 on 2013-04-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13643,
"tags": "ros, ikfast, motoman, ros-industrial"
} |
What are the differences between network analysis and geometric deep learning on graphs? | Question: Both of them deal with data of graph structure like a network community. Is there a big difference there?
Answer: Network analysis does not necessarily use deep learning techniques, while geometric deep learning (GDL) on graphs uses only deep learning techniques (that is, you train a neural network using gradient descent or other optimization methods). You can do some network analysis using GDL. | {
"domain": "ai.stackexchange",
"id": 1362,
"tags": "convolutional-neural-networks, comparison, graphs, geometric-deep-learning"
} |
Is in-place run length encoding possible in O(1) space given that the output is shorter than the input? | Question: This is inspired by a problem from here. This is the approximate form of the problem:
Given a string like "aaaa777cbb" (10 symbols long), run length encode it in-place to a string like "a473c1b2" (8 symbols long). You are guaranteed that the input will always be longer than the output.
The precise form of the problem is:
You are given an ordered list $L$ of symbols from a set $S$. Any symbol from $S$ may appear in the list.
$S$ contains all the positive integers up to and including $|L|$ (the length of $L$) and also some other symbols.
Rules of manipulating the input in-place
You can replace one symbol in the list with another
You can trim the list to a length of your choice by removing symbols from the end
You cannot insert symbols
You must overwrite the list of symbols with its run-length-encoding representation and trim it to length so that it includes only the run-length-encoding representation.
The run-length-encoding representation replaces each series of 1 or more of the same symbol in the input with that symbol followed by the symbol representing the number of occurrences of the previous symbol.
For example: $[a, a, a, a, a, a, a, a, a, a, 7]$ becomes $[a, 10, 7, 1]$ meaning "$a$ ten times followed by $7$ one time"
Note that the length of the output list is always even
You are guaranteed that the length of the input list is always larger than the length of the output list
You must do this with $O(1)$ additional working memory
Each "word" of working memory contains $log_2 |S|$ bits (put another way, words may be constructed which store constant amounts of information, the position of any element in the input, or any symbol from the input)
Intuitively I don't think this is possible. The solutions provided on the original site seem to break on strings like "abccccc" (length 7) where the output should be "a1b1c5" (length 6), since they start by overwriting "b" with the "1" from "a1" before they have even checked which symbol is in the 2nd position.
I have thought about trying to start by finding the compressible runs of letters (2 or more of the same letter), but I don't know how to tell which symbols are already processed and which are from the original input without using some sort of memory that would grow with the size of the input (like a bitmap of processed areas) and therefore put me in violation of the $O(1)$ space requirement.
I consider acceptable answers to be proofs that this problem either is or is not solvable in $O(1)$ space.
Answer: An $O(1)$ space algorithm that uses one extra symbol not found in $L$, which I will call $B$ for blank space.
I define an operation, a "shift right" at position $k$. It finds the next blank symbol $B$ after position $k$ , shifts all symbols one to the right, and sets position $k$ to $B$. For example a right shift at the third symbol:
abcdeBfjgB becomes abBcdefjgB
^ ^
Similarly a "shift left" at position $k$ assumes there is a $B$ symbol there, and moves it all the way to the end of the string, shifting all other symbols left.
abBdeBfjgB becomes abdeBfjgBB
^ ^
Note that you can perform both shifts in $O(1)$ memory.
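A sketch of the two shift primitives in Python (using a list as the mutable buffer; the function names are my own):

```python
def shift_right(buf, k, blank='B'):
    """Find the next blank at or after position k, shift buf[k:blank]
    one step right, and leave a blank at position k.  O(1) extra space."""
    j = k
    while buf[j] != blank:      # locate the next blank
        j += 1
    while j > k:                # move the gap back to position k
        buf[j] = buf[j - 1]
        j -= 1
    buf[k] = blank

def shift_left(buf, k, blank='B'):
    """Assuming buf[k] is blank, move that blank to the end of the
    buffer, shifting everything after it one step left."""
    for j in range(k, len(buf) - 1):
        buf[j] = buf[j + 1]
    buf[-1] = blank

# the two examples from the text above (the marked position is index 2)
buf = list("abcdeBfjgB")
shift_right(buf, 2)
print("".join(buf))   # abBcdefjgB

buf = list("abBdeBfjgB")
shift_left(buf, 2)
print("".join(buf))   # abdeBfjgBB
```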
Now, first we replace all runs of any symbol $x$ with length $l \geq 3$ with $xlB^{l-2}$. This can be done in-place, and leaves such runs identifiable. Also note that these are exactly the runs that shorten the output compared to the input.
Then, move a single pointer $p$ from left to right:
If the string at the pointer starts with $B$, shift left.
If the string at the pointer starts with $xlB^+$, this is the start of a run with length at least 3. Increment $p$ by $2$.
If the string at the pointer starts with $xx$ replace it with $x 2$ and increment $p$ by 2. Note that $xx$ can never be the start of a run of length 3 or higher, since we already replaced those.
If the string at the pointer has form $xy$ with $x \neq y$, increment $p$ by 1, shift right, and replace the resulting $B$ at $p$ with $1$. Increment $p$ by 1 again. Note that the shift right must succeed due to the guarantee that the output is shorter than the input, and we already created all the space possible when replacing the runs of length $\geq 3$.
If any space is left over at the end, the algorithm will eventually get stuck performing step 1. Detect this, strip the remaining space, and you are done. | {
"domain": "cs.stackexchange",
"id": 16971,
"tags": "space-complexity"
} |
export ROS_HOSTNAME, ROS_MASTER_URI configuration problem | Question:
Hello all,
I was trying all day to figure out why my cmd won't accept my arrow keys as input for the turtlesim example.
At first I thought there was some configuration problem with the shell I use (zsh).
After experimenting a little I noticed that I need to export ROS_HOSTNAME and ROS_MASTER_URI in all 3 tabs I need to run the turtlesim example. So in each tab I go like:
export ROS_HOSTNAME=localhost --- export ROS_MASTER_URI=http://localhost:11311 --- roscore
export ROS_HOSTNAME=localhost --- export ROS_MASTER_URI=http://localhost:11311 --- rosrun turtlesim turtlesim_node
export ROS_HOSTNAME=localhost --- export ROS_MASTER_URI=http://localhost:11311 --- rosrun turtlesim turtle_teleop_key
Only then my cmd will accept my arrow keys as input and I will see the turtle moving. Any idea what I am doing wrong? Is there a way I can avoid this? Like run it once, even if I need to restart my PC?
Thank you
Originally posted by mzouros on ROS Answers with karma: 3 on 2020-12-23
Post score: 0
Answer:
I'm not sure why you need to set ROS_HOSTNAME to localhost, but to avoid having to set these environment variables manually, add them to whichever file your shell loads/parses/sources when you start/log into a new shell.
Originally posted by gvdhoorn with karma: 86574 on 2020-12-23
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by mzouros on 2020-12-23:
Tutorial says so.
I just forced edited the rc file as you proposed. Seems to work.
Thanks!
Comment by gvdhoorn on 2020-12-24:
Which tutorial?
Comment by mzouros on 2020-12-24:
ROS/NetworkSetup - 2.2. Single Machine Configuration
http://wiki.ros.org/ROS/NetworkSetup
Comment by gvdhoorn on 2020-12-24:
I would not expect that to be necessary for a single machine setup.
I would recommend you try running things without setting those variables, and then see whether things work. If things don't work without those variables, that would point to a potential issue with your network configuration. | {
"domain": "robotics.stackexchange",
"id": 35902,
"tags": "ros, turtlesim, tutorials"
} |
Navigation and mapping using gmapping on drone | Question:
Hi
I am using the Structure Sensor (Depth sensor) on the Parrot AR.Drone using gmapping slam
How do I run the navigation stack and how will the maps be generated
Originally posted by Francis Dom on ROS Answers with karma: 21 on 2014-10-28
Post score: 0
Answer:
Gmapping is for creating a 2-D occupancy grid map. You need a 3-D representation of the environment. Check this page on mapping using a quadrocopter.
Originally posted by bvbdort with karma: 3034 on 2014-10-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Francis Dom on 2014-10-28:
Hi
The sensor that I am using gives only depth output (not color, unlike the MS Kinect), so I am using gmapping
Also I am using the PX4 autopilot on the Parrot Drone
Comment by polde on 2018-02-12:
hey! did you ever come to a solution about this problem? | {
"domain": "robotics.stackexchange",
"id": 19870,
"tags": "ros, navigation, drone, stack"
} |
Why does Banded Needleman-Wunsch give alignments with no more than d base pairs of indels? | Question: A common modification to the Needleman-Wunsch to reduce running time is to only fill in the cells along a diagonal band of the matrix (slides 27/28). Let 2d + 1 represent the width of the band. Then based on the provided text, this ensures that the total number of inserts and deletes does not exceed d.
However, I can't understand the intuition behind the algorithm. If we let L represent a list of the size of each mutation in the alignment in-order (where negative sizes represent deletions), doesn't the band restriction merely say that the sum of values in any prefix slice of L must stay between -d and d? But the total number of mutations could easily exceed d.
E.g. If d = 4, throughout a long alignment, there could be 5 distinct mutations, where L refers to a series of [insertion, deletion, insertion, insertion, deletion] with sizes: [3,-2,2,1,-4]. Visually, such an alignment would represent a wiggly line through the DP matrix that results in plenty of small insertions and deletions, alternating to ensure that the alignment stays within the band.
In other words, the band does not actually restrict the total number of insertions and deletions as claimed? If I'm misunderstanding something, I'd appreciate any clarification. Thanks!
P.S. Was unsure if this post should go under the bioinformatics stackexchange beta.
Answer: You are correct, the number of insertions/deletions $d$ is not constrained (only) by the bandwidth.
However, the algorithm only uses the fact that if we know $d$, then the path must stay within the $2d+1$ band. This band does not have to be tight: a zigzag path would indeed fit within a much narrower band.
You may wonder why the smallest bandwidth (which is indeed equal to the maximum sum over prefixes of your list $L$) isn't used as a parameter instead of $d$. Since $d$ is related to the similarity of the strings, it isn't unreasonable that an upper bound of $d$ is known in some applications. The smallest bandwidth doesn't seem to correspond to a 'natural' parameter in the context of similarity as $d$ does, so it doesn't seem reasonable that it is known and can be used in the algorithm. | {
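To make the band concrete, here is a minimal sketch of the banded recurrence in Python (the unit scores, match +1 and mismatch/gap -1, are illustrative). Note that the band only forbids cells with |i - j| > d, so a zigzag path with many small, compensating indels is still allowed, exactly as discussed above:

```python
# Banded Needleman-Wunsch: only cells with |i - j| <= d are filled in,
# so every prefix of the alignment keeps (inserts - deletes) in [-d, d].
def banded_nw(a, b, d, match=1, mismatch=-1, gap=-1):
    NEG = float("-inf")
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j], within the band
    score = [[NEG] * (m + 1) for _ in range(n + 1)]
    score[0][0] = 0
    for i in range(n + 1):
        for j in range(max(0, i - d), min(m, i + d) + 1):
            if i == 0 and j == 0:
                continue
            best = NEG
            if i > 0 and j > 0 and score[i - 1][j - 1] > NEG:
                s = match if a[i - 1] == b[j - 1] else mismatch
                best = max(best, score[i - 1][j - 1] + s)
            if i > 0 and score[i - 1][j] > NEG:   # deletion from a
                best = max(best, score[i - 1][j] + gap)
            if j > 0 and score[i][j - 1] > NEG:   # insertion into a
                best = max(best, score[i][j - 1] + gap)
            score[i][j] = best
    return score[n][m]
```

With d = 0 only the diagonal survives (no indels at all), while larger d admits any path whose running imbalance stays inside the band.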
"domain": "cs.stackexchange",
"id": 10075,
"tags": "algorithms, dynamic-programming, subsequences, bioinformatics"
} |
What is the importance of linear languages? | Question: What is the point of linear languages? They appear to be an intermediate set of languages in between regular and context-free languages, but do they have any useful or nice properties that either have been studied, or make them worthwhile to study?
Answer: The first obvious reason why linear languages were introduced is that mathematicians, when facing a lateralized notion, can hardly resist considering the symmetrical version as well. For instance, in algebra, you deal with left or right inverses, but also with two-sided inverses. In ring theory, you consider left or right ideals, but also two-sided ideals, etc.
Since left linear and right linear grammars define regular languages, studying linear grammars just came as a natural question to study in the early days of formal language theory. Furthermore, as you mentioned in your question, linear languages appeared to be a natural intermediate class between regular and context-free languages.
A nice characterization of linear languages was given by Arnold L. Rosenberg: A machine realization of the linear context-free languages, Information and Control 10 (1967) 175–188. The formulation below is Proposition V.6.5 in J. Berstel's book, Transduction and Context-free Languages, Teubner 1979.
A language $L$ of $A^*$ is linear if and only if there exists a rational subset $R$ of $A^* \times A^*$ such that
$$
L = \{ u\tilde v \mid (u, v) \in R \}
$$
where $\tilde v$ denotes the reversal of $v$. This result shows that, in a very loose sense, linear languages are a generalization of the language of palindromes. | {
"domain": "cstheory.stackexchange",
"id": 3058,
"tags": "fl.formal-languages, automata-theory, grammars"
} |
How do I deal with this kind of gene expression comparison? | Question: I have RNA-seq data from two sequencing batches; the lab technician says that he ran the RNA expression quantification twice, in batches 1 and 2: for example, tumor 1 in batch 1 and tumor 1 in batch 2, normal 2 in batch 1 and normal 2 in batch 2. However, PCA shows that the two experimental runs are very close to each other and can be merged.
Now my question is: can I compare only a tumour sample with its own matched normal sample in one patient to get differentially expressed genes for this patient, by using the experimental runs (technical replicates)? The design would be Tumour1_batch1 and Tumour1_batch2 versus Normal1_batch1 and Normal1_batch2.
> head(mycols)
Condition
Normal1_batch1 N
Normal1_batch2 N
Tumour1_batch1 T
Tumour1_batch2 T
Answer: No. If the only difference between Tumour_batch1 and Tumour_batch2 was the library prep/sequencing run, then all this will tell you is whether the particular sample of RNA taken from the tumour is different from the particular sample of RNA taken from normal. It almost certainly is.
Indeed if you did technical replicates of two RNA samples, both taken from normal, you would almost certainly find differences.
What you want to find is the differences in RNA level between tumour in general and normal in general. To do that you need to make an estimate of the mean and variance of RNA levels in the tumour and in the normal. To do that you need multiple samples from each.
If you need differences within a single patient, it would be more honest just to compute the fold changes without any implication of statistical certainty. You would then need to think about alternate schemes for measuring the confidence in inferences drawn from the data when you consider the actual biological questions. | {
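In that spirit, here is a minimal sketch of the fold-change-only summary (the gene names and counts below are invented for illustration; a real analysis would read them from the count matrix):

```python
# With only technical replicates, the honest per-patient summary is a
# per-gene fold change, not a p-value. Counts here are hypothetical.
import math

counts = {
    # gene:  (Tumour1_batch1, Tumour1_batch2, Normal1_batch1, Normal1_batch2)
    "GENE_A": (250, 240, 60, 64),
    "GENE_B": (30, 28, 31, 29),
}

def log2_fold_change(t1, t2, n1, n2, pseudocount=1):
    # average the technical replicates; pseudocount avoids log(0)
    t = (t1 + t2) / 2 + pseudocount
    n = (n1 + n2) / 2 + pseudocount
    return math.log2(t / n)

lfc = {gene: log2_fold_change(*c) for gene, c in counts.items()}
```

This gives a ranking of genes for this one patient, but, as the answer stresses, it carries no statistical guarantee that the differences reflect tumour biology rather than sampling noise.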
"domain": "bioinformatics.stackexchange",
"id": 1110,
"tags": "r, rna-seq, differential-expression, deseq2, design"
} |
Set operations on pcl indices | Question:
I'm not sure if PCL-specific questions are welcome here, but:
I'm looking at doing standard set operations on indices of PCL point clouds. I can certainly implement such a thing myself, but would prefer to do it some standard/canonical way. Unfortunately after a bit of searching I'm seeing no evidence that such functionality exists natively.
Can someone verify one way or another: Is there a cannoical way to do set operations on PCL point cloud indices?
Originally posted by Asomerville on ROS Answers with karma: 2743 on 2014-06-18
Post score: 0
Answer:
Would the pcl::PointIndices class be what you are looking for? Throughout a lot of PCL code, they use these indices to select which points in a cloud to perform a particular operation on. Most of the core filters have "setInputCloud" and "setIndices" functions, and then a "filter", "segment", "extract", or similar function that acts on that subset of the cloud (if no indices are set, it is assumed to use the entire cloud).
Originally posted by fergs with karma: 13902 on 2014-06-19
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Asomerville on 2014-06-24:
I ended up creating a collection of functions for std::sort and std::set_union/set_difference, etc., reaching into the guts of PointIndices. But I'm surprised the functionality doesn't exist, as reaching into the guts of a custom data type and performing such operations is a development no-no / code smell.
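The approach the comment describes can be sketched as follows (Python for illustration; plain lists stand in for the std::vector<int> held in pcl::PointIndices::indices, and a real implementation would wrap these in C++):

```python
# Treat the raw index vectors as sets, apply standard set algebra,
# and return sorted, de-duplicated vectors ready to feed back into
# an indices-aware filter.
def indices_union(a, b):
    return sorted(set(a) | set(b))

def indices_intersection(a, b):
    return sorted(set(a) & set(b))

def indices_difference(a, b):
    # points selected by `a` but not by `b`
    return sorted(set(a) - set(b))

plane = [0, 2, 4, 6, 8]     # e.g. inliers from a segmentation step
cluster = [4, 5, 6, 7]      # e.g. one Euclidean cluster
```

Sorting the result mirrors the precondition of std::set_union and friends, so the same data could round-trip between the two worlds.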
"domain": "robotics.stackexchange",
"id": 18311,
"tags": "pcl"
} |
Symmetry between inertial reference frames | Question: So my textbook says the following - roughly translated - in the context of Special Relativity:
"Assume we have two observers, A and B, moving relative to each other. Observer A measures a velocity v for observer B. Because of the symmetry of the situation, observer B measures the same velocity for observer A. If this isn't clear, note that A and B can be replaced by one another. If A and B measured a different velocity, then either of the two would be in a 'special reference frame'. This is in contradiction with the postulates of special relativity."
Alright, so I get this intuitively, of course. However, I can't follow their line of reasoning. How would we know that the laws of physics are different for A and B, if they don't measure the same velocity? Velocity is already relative, so how do we know for sure there would have to be a different set of laws for, say, A to get this result?
I'm familiar with the following argument: we've already deduced that a light clock yields different time intervals between its ticks for different observers. Assume there were another 'invariant' clock that would tick the same for any observer (a supposed 'universal time' clock). Then any observer would be able to measure their absolute speed by comparing the time interval measured by the universal clock and a light clock (assumed not to be moving relative to one another). This is a contradiction, for velocity can't be absolute by our postulates.
Is it perhaps possible to give the same kind of argument for the symmetry problem? Would it somehow be possible for either of the observers to deduce an 'absolute velocity' if they don't measure the same velocity (which would yield the desired contradiction)?
I'm hoping someone could help me out with either of the arguments!
Answer: Alright, I got a more intuitive/physical answer (instead of just plugging in values in Galileo/Lorentz transformations!).
We are going to assume two things: object A and object B are moving relative to each other (acceleration=0). Now, we can fix object A and make A face in the direction of B. Assume A tells us that B moves with speed v. Now, we maintain a completely identical situation if we fix B instead of A. In that case we have an object (B, in this case) that is facing another object that moves relative to it [in the same way as above]. This is literally the same situation as above, so if A gives us a value v for B, then B must give us the same value for A.
In short:
Situation I: An object (A) is moving relative to another (B).
Situation II: An object (B) is moving relative to another (A), in the same way.
Conclusion: Those situations are identical, so any results concocted in situation I must also apply to situation II.
I know this is an overly cumbersome explanation, but this is the only way I believe I truly get it. :) | {
"domain": "physics.stackexchange",
"id": 36717,
"tags": "special-relativity, inertial-frames, observers, lorentz-symmetry, relative-motion"
} |
Numerically Controlled Oscillator (NCO) for phasor implementation? | Question: If I want to simulate a digital oscillator (or phasor) to modulate an arbitrarily length signal:
$$y(t) = \cos(2\pi f_ct)\quad\text{where}\quad t = \frac{n}{f_s}\quad\text{for sample}\quad n$$
What is the advantage of using a numerically controlled oscillator (Answered here) versus simply incrementing $n$ over the length of the input?
# pseudocode made concrete as Python; the phase is derived from the
# global sample counter n, so each block continues where the last ended
import numpy as np

def make_phasor(x, fc, fs):
    n = 0                       # total samples processed
    N = len(x)                  # num samples in input
    blksz = N // 16             # don't create phasor all in one call
    y = np.empty(N, dtype=complex)
    # calculate phasor over multiple passes
    while n < N:
        n = update_phasor(y, n, blksz, fc, fs)
    return y

# calculate one block of the phasor; the last block may be short
def update_phasor(y, n, blksz, fc, fs):
    for i in range(min(blksz, len(y) - n)):
        y[n + i] = np.exp(1j * 2 * np.pi * fc * (n + i) / fs)
    return min(n + blksz, len(y))
The NCO referenced above and the pseudocode implementations give different results and I am trying to understand which makes more sense.
Also, what is the best way to synthesize the phasor if the desired frequency needs to change on the fly?
Answer: The NCO is a cyclical counter that can go on indefinitely, but it is otherwise similar to what you suggest in that you increment n to set the output rate. It is basically a look-up table of all the values in one complete cycle, and it "wraps" on overflow so that it will output continuous cycles with no discontinuity.
I think the NCO is ideal for what you are trying to do, given its simplicity, its ability to run indefinitely while keeping track of its own position in time, its quantified noise levels that you can set to what you need, its fixed-point implementation (with no multipliers), and its ability to change the frequency of your phasor "on the fly" as you require. I think this sums up the difference with the alternate approach you describe, which would not be as efficient (given the multiplications required, and a possible implementation in floating point without careful mapping and scaling, in which case you may as well go down the NCO path).
A little more theory may help you to see all the advantages (and simplicity) of an NCO.
First, referring to the diagrams below for the basic NCO architecture: a digital Frequency Control Word (FCW) sets the count rate of an extended-precision accumulator (counter); optionally a Phase Control Word (PCW) for phase modulation can then be added to the output of the accumulator. The most significant bits of this summation are then used as the address pointer to a Look-Up Table (LUT) which holds the values for one complete cycle of a sine wave (you could also imagine having two pointers for sine and cosine to enable a complex I and Q output).
Now see the same block diagram with a mathematical view below, which gives further technical insight into the operation of the NCO. An accumulator (counter) is the digital counterpart to an integrator (if you don't see that right away, imagine sending all 1's into a counter: the output would be a ramp, 1, 2, 3, 4, ..., just as you would expect of an integrator output with a constant level at its input). The input FCW is just a digital signal, which can change with time (as you were looking for); it is a waveform that represents frequency vs. time. What value of frequency each digital word corresponds to I will elaborate on later, but for now know that its value at any given time is directly proportional to the output frequency. The integral of frequency is phase (and if you are less familiar with that, it may be easy to see that frequency is a change in phase versus a change in time, therefore $f=d\phi/dt$; frequency is the derivative of phase, and therefore phase is the integral of frequency). Since our FCW input to an accumulator is the digital representation of a frequency quantity, and the accumulator is a digital integrator, the value at the output of the accumulator represents phase versus time (which is why we can add a phase offset with the PCW at this point if desired), and the accumulator counts from 0 to $2\pi$, rolling over upon overflow.
Since the accumulator output represents phase that is changing with time, and we want to generate a sinusoidal output ($\sin(\theta)$), we can simply use a LUT to perform the trigonometric function. (Note: if you have plenty of extra cycles but no memory, other techniques to calculate the sine of an angle can be used, notably the CORDIC algorithm.) Beautiful, right? So now how do we decide on the specifics of our NCO design, and what happens when we lose all the least significant bits in our phase word? Read on!
First, the accumulator sets the frequency resolution, and usually an extended-precision accumulator is used, with 24, 32 or 48 bits typical depending on the application. This is easy to see: imagine first FCW = 1. The accumulator will step through every value, meaning the address pointer to the LUT will also step through every value in the stored sine wave, so the sine wave output will be at the slowest rate, and that rate is given by the "step size" in the formula below. Why step size? Because then imagine setting FCW = 2: the counter will now count by 2's and therefore go twice as fast before rolling over (and upon rollover the counter continues to count, which is why the NCO will continue to output the desired sine wave indefinitely); put in FCW = 3 and it will count 3 times as fast, etc. Therefore,
$$F_{out} = FCW\,\frac{f_{clock}}{2^{\text{accum\_size}}}$$
So regardless of how many bits we decide to use for the LUT, the output frequency is strictly set by this formula and nothing else.
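As a concrete illustration of the accumulator-plus-LUT architecture and the formula above, here is a small Python sketch (the 32-bit accumulator, 10-bit LUT address and 12-bit signed amplitude are illustrative choices, not prescriptions):

```python
# Minimal NCO: fixed-point phase accumulator whose top bits index a
# one-cycle sine LUT; the accumulator wraps on overflow.
import math

ACC_BITS = 32
LUT_BITS = 10
LUT = [round(2047 * math.sin(2 * math.pi * k / 2**LUT_BITS))
       for k in range(2**LUT_BITS)]   # one full cycle, ~12-bit amplitude

def fcw_for(f_out, f_clock):
    # F_out = FCW * f_clock / 2**ACC_BITS  =>  FCW = F_out * 2**ACC_BITS / f_clock
    return round(f_out * 2**ACC_BITS / f_clock)

def nco(fcw, num_samples, pcw=0):
    acc = 0
    out = []
    for _ in range(num_samples):
        # phase truncation: keep only the top LUT_BITS of the phase word
        addr = ((acc + pcw) >> (ACC_BITS - LUT_BITS)) % 2**LUT_BITS
        out.append(LUT[addr])
        acc = (acc + fcw) % 2**ACC_BITS   # wrap on overflow
    return out

samples = nco(fcw_for(1_000_000, 100_000_000), 200)  # 1 MHz at 100 MHz clock
```

Generating a 1 MHz tone from a 100 MHz clock gives 100 samples per cycle, so samples 0, 25, 50 and 75 land on the zero crossings and peaks; changing the frequency mid-stream is just a matter of feeding a new FCW into the loop.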
Now to briefly explain phase truncation and the primary considerations: phase truncation is when we decide to send only the most significant bits of the accumulator output to the LUT, and in doing this we are truncating the phase word (rounding down). To understand the implications of this, first consider the diagram below of what would happen if we did not have any phase truncation (meaning a very, very large look-up table, or a very coarse frequency step size if the accumulator is small). What this picture shows is that an impossible implementation, containing a perfectly sinusoidal analog source at the specific frequency shown (with no phase noise), sampled with a perfect 100 MHz clock and a perfect 12-bit A/D converter, would produce results IDENTICAL to the NCO output clocked at 100 MHz. In fact, for the NCO with no phase truncation, all of the output frequencies in multiples of $f_{step}$ as provided by the formula above will be this precise, with quantization noise at the output being the only noise source (which you can control by setting the output word width). You can see, if you imagine different cases, that without phase truncation the look-up table will provide the exact output required at any given point in time (limited to the quantized frequency choices with FCW, but that step size can be very small with large accumulators). The waveform will be very smooth, without any skips or hiccups; in other words, pure.
So this is great; consider the example with a 32-bit accumulator and a 12-bit LUT output, providing very fine frequency resolution with great spectral purity (and 6 dB better for every additional bit you add to the output width)... until you get to the memory requirements! And herein lies the motivation to consider phase truncation.
Phase Truncation
With phase truncation, memory requirements are significantly reduced at the expense of an additional noise source (phase noise). What we will see is that the noise is well understood and can be planned to be well below any given requirement (as a trade against the memory required).
Also worth mentioning for memory optimization: only a quarter cycle needs to be stored, since the remaining portions of the cycle can be derived from the first quarter cycle. There are many other memory optimizations as well, such as interpolation between values (most common) and, to mention without explanation, the Hutchison and Sunderland algorithms, as well as the CORDIC rotator previously mentioned.
The phase noise pattern itself from phase truncation will be a sawtooth function of phase versus time, representing the truncated values that are missing. From this, the useful relationship of SNR to phase truncation is given as in the picture below. Here SNR is the power of the desired sine wave output relative to the power of all spurious outputs due to the phase truncation. This formula applies when the small-angle criterion holds (when $\sin(\theta) \approx \theta$) and comes from the rms value of a sawtooth function (or equivalently the standard deviation of a uniform distribution), which is $\frac{D}{\sqrt{12}}$ where D is the peak-to-peak height of the ramp, or the width of the distribution. This formula is combined with the quantization noise contribution from the LUT for a digitized signal (using a similar formula, 6.02 dB/bit + 1.76 dB, also derived from our $\sqrt{12}$ factor, since quantization noise can be modeled as a uniform distribution!) to account for all noise sources in the NCO. To use this formula, the number of bits is the number of phase bits sent to the LUT (the number of phase bits not truncated).
Finally, we may be interested in the spurious-free dynamic range (SFDR), which is the power level of the strongest spur relative to our output signal (as opposed to the summed power of all spurs used in SNR). The level of the strongest spur due to phase truncation is simply 6.02 dB/bit below the output signal, where again "bit" is the number of bits sent to the LUT. (This can be derived by taking the Fourier transform of the ramp pattern that represents our phase error, again under the small-angle approximation.) All the spurs are integer harmonics of the fundamental output frequency, many of which will have been digitally folded into the first Nyquist zone of our implementation, as suggested in the diagram below. Unlike in the diagram, the 2nd harmonic is not necessarily the strongest spur, but it helps to give context to the idea of spurs and SFDR.
Dithering
Dithering is the process of adding a small amount of noise (for example, using an LFSR generator as the PCW input), which will improve the SFDR at the expense of SNR. The overall noise power increases (due to the additional additive noise); however, the spurious levels can be substantially reduced in the process.
"domain": "dsp.stackexchange",
"id": 4825,
"tags": "modulation, oscillator"
} |
If a CNF contains only Horn and Xor clauses, then what is the complexity of determining Satisfiability? | Question: If a CNF contains only Horn and Xor clauses, and does not contain clauses of other types, then can its Satisfiability be determined in polynomial time?
Answer: Hint: Using XOR clauses you can express "$x = \lnot y$", and this allows you to simulate general clauses by Horn clauses. | {
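To spell the hint out with a tiny brute-force check (the three-literal clause and variable names are just an example): an XOR clause pins $x' = \lnot x$, so the general clause $(a \lor b \lor c)$ can be replaced by the Horn clause $(\lnot a' \lor \lnot b' \lor c)$, which has at most one positive literal. Since this simulates arbitrary clauses, the Horn-plus-XOR satisfiability problem is NP-hard, so a polynomial-time algorithm is unlikely.

```python
# Verify over all assignments that (a OR b OR c) is equivalent to the
# Horn clause (NOT a2 OR NOT b2 OR c) plus the XOR clauses a != a2, b != b2.
from itertools import product

def general(a, b, c):
    return a or b or c

def horn_with_xor(a, b, c, a2, b2):
    xor_clauses = (a != a2) and (b != b2)     # the two XOR clauses
    horn_clause = (not a2) or (not b2) or c   # at most one positive literal
    return xor_clauses and horn_clause

equivalent = all(
    general(a, b, c) == any(
        horn_with_xor(a, b, c, a2, b2)
        for a2, b2 in product([False, True], repeat=2))
    for a, b, c in product([False, True], repeat=3)
)
```

The XOR clauses have exactly one satisfying choice of the auxiliary variables, so the `any` over (a2, b2) reproduces the original clause's truth value exactly.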
"domain": "cs.stackexchange",
"id": 10195,
"tags": "complexity-theory, time-complexity, satisfiability, sat-solvers"
} |
Organic solvents distillation for purification | Question: I've realised that the majority of the solvents that we use in my lab are contaminated with grease (peaks at 1.26 and 0.86 ppm in $^1\mathrm{H}$ NMR in $\mathrm{CDCl_3}$), since they come in plastic containers (I guess that's the reason).
One solution would be to buy better grade solvents, but we have a large amount of these solvents in the plastic containers. I was thinking about distilling them to purify them and keeping them in glass bottles, but I thought of some issues that could appear.
Some organic solvents contain stabilizers. For example, amylene is often added to dichloromethane (as in my case), so I'm afraid of losing the stabilizer when distilling. Amylene has a boiling point of 38.5 °C, so it should distill with the DCM. However, I was thinking of using the rotavap for practical reasons, and I suspect the amylene might not condense at the conditions used for the DCM. Also, I don't know if more stabilizers are added, since sometimes I get peaks that resemble BHT but I don't use any solvent that has BHT in it.
Could I use the rotavap for distilling the DCM or should I perform a simple distillation? Would the simple distillation work for keeping the stabilizer and preserving the DCM for long periods? Also, in the bottles of ethyl acetate and n-hexanes it doesn't say anything about stabilizers, is it alright to assume that they don't need any and I can distill them without problem?
Answer: Using the rotovap to distill the DCM proved useful, as it left a yellow oil as a residue. After that, the solvent was clean by NMR, only with some peaks that seem to be from amylene (the stabilizer present in DCM).
As far as I know, n-hexane and ethyl acetate don't need stabilizers. The ones we have in our lab are clean, although they also come in a plastic container.
I don't know if the contaminated DCM that we have is due to a bad batch, the plastic being dissolved or the actual quality of the DCM. It could be easily tested, but I haven't done it. | {
"domain": "chemistry.stackexchange",
"id": 17606,
"tags": "organic-chemistry, stability, solvents, purification, distillation"
} |
port install of log4cxx fails on Mac OS X 10.6 | Question:
[ rosmake ] rosdep install failed: Rosdep install failed
Traceback (most recent call last):
File "/usr/local/bin/rosinstall", line 5, in
pkg_resources.run_script('rosinstall==0.5.20', 'rosinstall')
File "build/bdist.linux-i686/egg/pkg_resources.py", line 489, in run_script
File "build/bdist.linux-i686/egg/pkg_resources.py", line 1214, in run_script
self.egg_info = os.path.join(path, 'EGG-INFO')
File "/Library/Python/2.6/site-packages/rosinstall-0.5.20-py2.6.egg/EGG-INFO/scripts/rosinstall", line 599, in
File "/Library/Python/2.6/site-packages/rosinstall-0.5.20-py2.6.egg/EGG-INFO/scripts/rosinstall", line 590, in rosinstall_main
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 462, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'source /Users/brianriviere/ros/setup.sh && rosmake ros ros_comm --rosdep-install' returned non-zero exit status 1
Originally posted by brian on ROS Answers with karma: 1 on 2011-09-04
Post score: 0
Answer:
Could you maybe give more info on the context in which you get this?
Originally posted by makokal with karma: 1295 on 2011-09-04
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Mac on 2011-09-07:
If Macports can't build log4cxx, that's a MacPorts problem (and they provide a suggestion right there in the error message you posted).
Comment by brian on 2011-09-05:
Right now it is failing to install log4cxx using:
sudo port install log4cxx
To report a bug, see http://guide.macports.org/#project.tickets
Failed to install log4cxx!
[ rosmake ] rosdep install failed: Rosdep install failed
Traceback (most recent call last):
Fi | {
"domain": "robotics.stackexchange",
"id": 6603,
"tags": "ros, installation, log4cxx, macos-snowleopard, osx"
} |