anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Why is the ionic radius of Al(3+) smaller than that of Li+? | Question: I was examining the ionic radii of some ions from this site for a school assignment. I noticed a weird anomaly in the ionic radius of $\ce{Li+}$ as compared to that of $\ce{Al}^{3+}$.
The ionic radius of $\ce{Al^{3+}}$ is only about 2/3 that of $\ce{Li+}$. How is this possible, since the electronic configuration of $\ce{Al}^{3+}$ is $[1s^2, 2s^2 2p^6]$ whereas that of $\ce{Li+}$ is $[1s^2]$? It appears that despite the presence of an extra shell in $\ce{Al}^{3+}$, it has a smaller ionic radius than $\ce{Li+}$.
The answers in this post suggest an exactly opposite outcome. All the answers say that $\ce{Mg^{2+}}$ is above $\ce{Ca^{2+}}$ and thus has a smaller radius. By applying exactly the same logic I could say $\ce{Li+}$ is above $\ce{Al^{3+}}$, so it must have the smaller ionic radius, but that is not the case.
As the picture below (taken from the same post) suggests:
I think it is because of the diagonal relationship. $\ce{Mg^{2+}} \approx \ce{Li+}$ in size because of the diagonal relationship, and $\ce{Mg^{2+}} \gt \ce{Al^{3+}}$ because ionic radius decreases for cations going left to right along a period.
Am I correct?
Answer: The size of the 1s orbital in $\ce{Li+}$ and $\ce{Al^3+}$ is not necessarily the same. In fact, they are quite different because of the much larger effective nuclear charge in $\ce{Al^3+}$. One can easily look up the wavefunction for the 1s orbital and see the radial dependence on $Z_\mathrm{eff}$.
Therefore, merely looking at the electronic configuration cannot tell you anything about the size. If that were the case, one would expect $\ce{Cl-}$ and $\ce{K+}$ to have exactly the same size, which is clearly not true.
All the answers says that $\ce{Mg^{2+}}$ is above $\ce{Ca^{2+}}$, thus it has a smaller radius. By applying the exactly same logic i can say $\ce{Li+}$ is above $\ce{Al^{3+}}$ so it must have less ionic radius but that is not the case
You are missing the point here entirely. Mg and Ca are in the same group which is what makes them directly comparable. Li is not "above" Al any more than F is above Na.
I think it is because of diagonal relationship. As $\ce{Mg^{2+}} = \ce{Li+}$ because of diagonal relationship and $\ce{Mg^{2+}} \gt \ce{Al^{3+}}$ because ionic radius decreases for cations along the period by going left to right.
The diagonal relationship is something that is observed, not a fundamental principle of chemistry. It still has to be rationalised. Saying that it is "because of the diagonal relationship" is akin to saying "electrons move very fast because they have a high velocity".
The diagonal relationship holds because 1) you add an extra shell of electrons going vertically from $\ce{Li+}$ to $\ce{Na+}$, which decreases $Z_\mathrm{eff}$ and increases ionic radius; and 2) you go horizontally from $\ce{Na+}$ to $\ce{Mg^2+}$, which increases $Z_\mathrm{eff}$ (same number of electrons but one more proton) and decreases ionic radius.
Going one more step horizontally to $\ce{Al^3+}$, one can see that $Z_\mathrm{eff}$ should increase again. So your thoughts on the matter are correct, but they just do not get to the crux of the matter.
The origin of the much smaller ionic radius of $\ce{Al^3+}$ is therefore directly attributable to the much larger $Z_\mathrm{eff}$ in $\ce{Al^3+}$ on the 2p valence electrons, compared to that on the 1s valence electrons in $\ce{Li+}$.
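To make the $Z_\mathrm{eff}$ point concrete: for a hydrogen-like orbital, $\langle r\rangle_{nl} = \frac{a_0}{2Z}\bigl(3n^2 - l(l+1)\bigr)$, so orbital size shrinks as $1/Z$. The sketch below is a toy illustration of that scaling only; the unscreened nuclear charges used are illustrative assumptions, not measured ionic radii:

```python
# Hydrogen-like expectation value <r>_{nl} = a0/(2Z) * (3n^2 - l(l+1)).
# Toy illustration of how orbital size scales with (effective) nuclear
# charge; these are NOT measured ionic radii.

A0 = 0.529  # Bohr radius in angstroms

def mean_radius(n, l, z_eff):
    """<r> for a hydrogen-like orbital with effective charge z_eff."""
    return A0 / (2 * z_eff) * (3 * n**2 - l * (l + 1))

# The 1s orbital with unscreened nuclear charges (upper bounds on Z_eff):
r_1s_li = mean_radius(1, 0, 3)    # Li+ core, Z = 3
r_1s_al = mean_radius(1, 0, 13)   # Al3+ core, Z = 13

print(f"<r>_1s at Z=3:  {r_1s_li:.3f} A")
print(f"<r>_1s at Z=13: {r_1s_al:.3f} A")
# The same orbital label (1s) is over 4x smaller at the higher charge,
# which is the point of the answer: the configuration alone fixes nothing.
```

The same orbital label therefore says nothing about size until $Z_\mathrm{eff}$ is specified.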
Be careful that the ionic radius is not always particularly well-defined and the values can vary from one source to another depending on how they are experimentally obtained. | {
"domain": "chemistry.stackexchange",
"id": 6353,
"tags": "ions, periodic-trends, atomic-radius"
} |
What unit is "SetForce" on a revolute joint? | Question:
I have not been able to determine the unit used for applying force to a joint with SetForce(). I assume that its argument is torque [Nm], but I didn't find it in the documentation, and I must admit that the name of the method confuses me a bit.
Is there a reason why there is no SetTorque() method on joints?
Thanks in advance
Originally posted by JakobWelner on Gazebo Answers with karma: 13 on 2018-08-06
Post score: 1
Answer:
From experiments, I can confirm that, for rotational joints, SetForce()'s effort argument is in Nm - at least with the ODE backend.
As to why it's SetForce() instead of SetTorque(), the devs wanted to cover both rotational and prismatic joints.
Originally posted by josephcoombe with karma: 609 on 2018-08-07
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by alextoind on 2021-02-16:
I believe that N/m is a typo for Nm, isn't it?
Comment by josephcoombe on 2021-03-30:
Yep, good catch. Corrected it. | {
"domain": "robotics.stackexchange",
"id": 4308,
"tags": "gazebo"
} |
Temporal Loudness/Sharpness of sound signals | Question: I have been doing some research on objective sound metrics used to classify the quality of a sound signal. According to my research, the most commonly used are loudness and sharpness ( definitions can be found here ).
I have encountered a fair amount of literature on the matter, and I am familiar with the way these metrics are calculated ( loudness is strictly defined by national and international standards, and sharpness according to DIN 45692 ).
This analysis, though, takes place in the frequency domain, and all of the plots I see use the so-called critical band rates as the x-axis ( info on critical bands here ).
My question: I also encounter some plots that show the evolution of these phenomena with respect to time. My guess is that researchers break the original time signal into parts and analyse those parts to create such plots. As the definition of these metrics does not explicitly depend on time, I cannot be sure how they derive that. Am I missing something here? Is there another way to deal with this in the time domain?
Answer: It looks like you are referring to the Fourier transform between time and frequency domains. In math, the Fourier transform is usually done on the entire space. An example of this could be a Fourier spectrum of a photo image. In music this is not very useful, because it corresponds to a Fourier transform of the entire composition. The musical equivalent of such a transform would be a list of all musical notes used in the composition without their sequential order. To make this practically useful we must preserve the time sequence of the notes, such as bars. This is practically done exactly as you are suggesting. The time sequence is broken into short periods, which are analyzed by a Fourier transform of each period separately and then the results are put back in a time sequence. This is equivalent to a Fourier transform of a movie by transforming every frame separately and then putting the results back in a time sequence frame after frame. The difference in this example from music or audio is that a movie is already broken into frames while the audio sequence is not, and so breaking it into time periods is arbitrary and is a part of the conversion process.
An example of this is the spectrum of a song commonly used in audio processing. The spectrum is plotted along a horizontal time axis. This seems a contradiction in terms, as a spectrum is in the frequency domain rather than the time domain. However, all this means is a time sequence of the spectra of small time periods joined together. The attached image shows a stereo spectrum of the same song, uncompressed on the right and MP3-compressed on the left, where you can see the high-frequency information removed (black areas) to reduce the storage space needed.
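The frame-by-frame procedure described above is the short-time Fourier transform (STFT). A minimal NumPy sketch follows; the frame length, hop size, and Hann window are arbitrary illustrative choices, not part of any loudness standard:

```python
import numpy as np

def stft(x, frame_len=1024, hop=512, fs=8000):
    """Break x into overlapping frames, window each frame, and FFT it.
    Returns (frame start times, frequency bins, |spectrum| per frame)."""
    window = np.hanning(frame_len)
    starts = range(0, len(x) - frame_len + 1, hop)
    frames = [np.abs(np.fft.rfft(x[s:s + frame_len] * window)) for s in starts]
    times = np.array([s / fs for s in starts])
    freqs = np.fft.rfftfreq(frame_len, d=1 / fs)
    return times, freqs, np.array(frames)

# A 1-second 440 Hz tone: every frame's spectrum should peak near 440 Hz.
fs = 8000
t = np.arange(fs) / fs
times, freqs, S = stft(np.sin(2 * np.pi * 440 * t), fs=fs)
peak_hz = freqs[S[0].argmax()]
print(f"{S.shape[0]} frames, first frame peaks at {peak_hz:.1f} Hz")
```

Joining the per-frame spectra along the time axis gives exactly the kind of time–frequency plot described in the answer; loudness/sharpness curves over time are built the same way, one frame at a time.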
| {
"domain": "physics.stackexchange",
"id": 42360,
"tags": "acoustics"
} |
Energy-optimal beer cooling | Question: Suppose that I am going to host a big party one month from now, and I have various liters of cold beverages that will need to be refrigerated by the time of the party. I have a fridge at home (which is on and being used under a normal everyday home fridge usage pattern); however, there is enough free space in it for these beverages, and I don't need that extra space.
Does my fridge consume more energy if
(a) I put the beers in the fridge right now, one month in advance, or
(b) I put them in the fridge only right before the party, so that they have just enough time to get cold (for instance, one day in advance)?
Answer: There is probably a slight energy advantage to putting the beer in the fridge.
As explained by Joe Iddon, it will take a fixed amount of energy to cool your beers, and it always takes a certain amount of power to maintain the temperature of your fridge when it is on and closed. Considering only those two factors, it makes no difference when you put the beers in your fridge.
However, because you say that your fridge is in use in the intervening time, you should also consider how the fridge behaves when you open it. When you open the door, it trades some air with the outside. When you close the door, it cools down the "new" air. If you put the beers in the fridge, there is less air in it and consequently less new air after opening and closing the door. (It may be more important that the beer blocks air currents, but the effect is the same.) So your beer actually makes the fridge slightly more resilient when the door is opened and closed.
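To see why the beer itself matters so much more than the exchanged air, compare volumetric heat capacities: re-cooling a litre of warmed beer costs thousands of times more energy than cooling a litre of fresh room-temperature air. A rough sketch, treating beer as water and using standard textbook property values (assumptions):

```python
# Energy to cool one litre of each substance by 1 K.
# Property values are standard textbook figures (assumed).
RHO_WATER = 1000.0   # kg/m^3
CP_WATER = 4186.0    # J/(kg K)
RHO_AIR = 1.2        # kg/m^3 (room temperature)
CP_AIR = 1005.0      # J/(kg K)

litre = 1e-3  # m^3

q_beer = RHO_WATER * litre * CP_WATER  # J per K per litre of beer
q_air = RHO_AIR * litre * CP_AIR       # J per K per litre of air

print(f"Beer: {q_beer:.0f} J/K per litre, air: {q_air:.2f} J/K per litre")
print(f"Ratio: {q_beer / q_air:.0f}x")
```

So swapping a litre of cold air for a litre of cold beer barely changes the door-opening cost, while letting the beer itself warm up is roughly three orders of magnitude more expensive to undo.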
For completeness, though, if you open the door long enough to significantly warm up the beer, then you could do yourself some harm because the beer will take more energy to cool down than the same volume of room temperature air, but I estimate that you would have to leave the door open a lot for that to happen: As long as the air heating up the beer stays in the fridge, it is that much less heat energy that the fridge needs to pump back out when the door is closed. | {
"domain": "physics.stackexchange",
"id": 69688,
"tags": "thermodynamics, everyday-life, cooling"
} |
Better explanation of the common general relativity illustration (stretched sheet of fabric) | Question: I've seen many science popularisation documentaries and read a few books (obviously not being a scientist myself). I am able to process and understand the basic ideas behind most of them. However, for general relativity there is one illustration that is used over and over (image from Wikipedia):
I always thought that general relativity gives another way to describe gravity. However, for this illustration to work, there needs to be another force pulling the object down (referring to the direction in the attached image). If I put two non-moving objects in the image, what force will pull them together?
So where is my understanding incorrect? Or is general relativity not about explaining gravity, and does it just describe how heavy objects bend spacetime (in which case the analogy is being used incorrectly, in my opinion)?
UPDATE: Thank you for the answers and comments. Notably, the XKCD comic is spot on. I understand that the analogy with a bent sheet of fabric is pretty bad, but it seems it can be fixed if you don't bend the fabric and instead just distort the drawn grid.
Would you be so kind as to also answer the second part of the question: does general relativity explain the gravitational force? To me it seems that it does not (bending of spacetime simply cannot affect two non-moving objects), yet most of the time it is presented as though it does.
Answer: You're quite correct that the metaphor is misleading, and indeed you'll find professional relativists tend to be rather scornful of it. There are a number of problems with it, of which the problem you mention is just one. For example the diagram implies only space is bent, while the bending is of spacetime so time is bent as well. The diagram also implies there is a third dimension out of the plane in which the bending occurs. Applied to our 4D spacetime this would mean there would have to be a fifth dimension for spacetime to bend in, but this isn't the case and the type of bending that occurs is called intrinsic curvature and needs no extra dimensions.
The problem is that GR is really, really unintuitive. If you want to know more than the hints suggested by the rubber sheet metaphor the only course is to roll up your sleeves and start learning the maths.
It would be nice if there were some intermediate course between the misleading but simple rubber sheet metaphor and the maths, but I don't know of anything. I think the problem is that you won't get anywhere without first understanding coordinate invariance and this is a really tough idea to understand. If you really want to learn more I'd start with special relativity as this contains the seeds of the ideas you'll need.
Response to comment:
In your edit you say bending of spacetime simply can not affect two non-moving objects. I'm guessing that you're thinking about objects rolling around on a curved surface as shown in the common metaphors for GR. The question is then why objects that aren't rolling around should experience a force.
The reason for this is that an apparently stationary object is moving because it's moving in time. For the usual 3D velocities we see around us, we describe velocity as a 3-vector $\vec{v} = (v_x, v_y, v_z)$. But remember that spacetime is four-dimensional, and the velocity of objects in relativity is a 4-vector called the 4-velocity, which includes the change in the time coordinate. The reason a stationary object experiences a force is that the time coordinate is curved just like the space coordinates. This brings me back to one of my criticisms of the rubber sheet analogy, i.e. that it cannot show that the time coordinate is curved just like the spatial coordinates.
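For a concrete number, the "weight equation" referenced below gives the proper acceleration needed to hover at radius $r$ outside a Schwarzschild mass: $a = \frac{GM}{r^2}\frac{1}{\sqrt{1 - r_s/r}}$ with $r_s = 2GM/c^2$. A small numerical sketch (SI constants assumed):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def hover_acceleration(mass, r):
    """Proper acceleration of a stationary observer at Schwarzschild
    radial coordinate r; diverges as r approaches r_s."""
    rs = 2 * G * mass / C**2
    return G * mass / (r**2 * math.sqrt(1 - rs / r))

M_EARTH = 5.972e24  # kg
R_EARTH = 6.371e6   # m

a_gr = hover_acceleration(M_EARTH, R_EARTH)
a_newton = G * M_EARTH / R_EARTH**2

print(f"GR: {a_gr:.9f} m/s^2, Newton: {a_newton:.9f} m/s^2")
# At Earth's surface the relativistic correction is ~1 part in 10^9,
# but near r_s the GR value blows up while the Newtonian one stays finite.
```

The point of the sketch is that a *stationary* observer needs a nonzero proper acceleration to stay put, which is exactly the "force" the question asks about.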
At the risk of getting repetitive, it's hard to explain why curvature in time causes the force without getting into the maths. The simplest explanation I've seen is in twistor59's answer to What is the weight equation through general relativity?. This shows, with the bare minimum of algebra, why a stationary object in a gravitational field experiences a force. | {
"domain": "physics.stackexchange",
"id": 99891,
"tags": "general-relativity, gravity, spacetime, curvature, popular-science"
} |
2D aerofoil fluid analysis - find velocity ratio | Question: I just wondered if you could help me out with this fluid mechanics question.
I have worked through it myself but am very unsure of my answer (I have not yet found the pressure difference).
the question is in two parts, and states a wake velocity profile equation at the end:
My workings used the equation given for the wake profile, but I am unsure whether it needs to be used to find a solution.
Is the method below wrong?
The way I approached it was to look at overall mass flow, assume density is constant throughout, then simply rearrange to get the ratio.
I also assumed the wind tunnel is cylindrical; then, for downstream, I said that the mass flow rate is the mass flow rate due to velocity U1 (across the whole length 2H) minus the mass flow rate caused by the wake velocity profile (across 2b). I assume this to be incorrect, but am really confused about how it would be done.
If you need visuals of how I did it to make things clear, I'll be more than happy to write it out for you.
Answer: If you want to find the exit pressure of the control volume you have considered, you can use the Bernoulli equation, since you know the exit velocity profile. Since the boundary layer is thin and there is no flow separation, the flow is more or less irrotational, and I'm assuming it is steady.
$$ p_1 +\rho \frac{v_1^2}{2} =p_2 +\rho \frac{v_2^2}{2}$$
Here
$p_1$ and $v_1$ are the inlet pressure and velocity, respectively; $p_2$ and $v_2$ are the exit pressure and velocity.
Then the exit pressure equation is
$$ p_2 =p_1 -\rho \frac{U_1^2}{2}\left(1+\frac{y}{b}\right)^2+\rho \frac{U_0^2}{2} $$
Please note that $p_2 \leq p_1$.
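For illustration, here is the Bernoulli bookkeeping carried out numerically, assuming a wake profile of the form $u(y) = U_1(1+y/b)$ with placeholder values $U_1 = U_0 = 30$ m/s (these numbers are assumptions, not the ones from the assignment):

```python
import numpy as np

def exit_pressure(p1, v1, v2, rho=1.225):
    """Bernoulli along a streamline: p + rho*v**2/2 is conserved."""
    return p1 + 0.5 * rho * (v1**2 - v2**2)

U0 = 30.0          # inlet speed [m/s] (placeholder)
U1, b = 30.0, 0.1  # wake profile parameters (placeholders)
p1 = 101325.0      # inlet static pressure [Pa]

y = np.linspace(0.0, b, 5)
v2 = U1 * (1 + y / b)           # assumed exit velocity profile
p2 = exit_pressure(p1, U0, v2)  # exit static pressure across the profile

for yi, pi in zip(y, p2):
    print(f"y = {yi:.3f} m -> p2 = {pi:.1f} Pa")
```

With these placeholder values, $v_2 \geq v_1$ everywhere in $0 \leq y \leq b$, so the computed $p_2 \leq p_1$, consistent with the note above.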
If you want the exact velocity profile, then you should do a small CFD simulation. The velocity profile you have considered may be valid up to $y^+ = 5$ (approx.). For that, you can refer to the fifth page of the following link.
http://www.mit.edu/course/1/1.061/www/dream/SEVEN/SEVENTHEORY.PDF | {
"domain": "physics.stackexchange",
"id": 23424,
"tags": "homework-and-exercises, fluid-dynamics, velocity, flow"
} |
Is there a vaccine against the plague (Yersinia pestis)? | Question: There seem to have been recurrent outbreaks of plague (Yersinia pestis), from the well-known Justinian plague to the Black Death and on to recent years. In fact, two cases were reported in China in November 2019. However, it seems there is no effective vaccine yet; at least until 2013 there was no vaccine approved by the US FDA, and research on plague vaccines is still ongoing.
Is there then no effective vaccine against the plague? If so, does that mean humanity is still at risk of yet another plague pandemic?
Answer: There is little motivation right now for vaccination against plague because:
1. Human infections with plague are fairly rare. A vaccine administered to the general populace would have to be very cheap and extremely safe to be cost-effective and have a net benefit, given that the risks of plague are so low.
2. Antibiotics are effective against plague, which makes the likelihood of a widespread outbreak fairly low. Antibiotic resistance could be a concern, but plague would not be one of the most concerning pathogens from that perspective: there isn't much antibiotic exposure in the reservoir species that the bacterium lives in, so there is less selective pressure towards resistance compared to other bacterial pathogens.
Additionally, plague lives in reservoir species, so extensive human vaccination is not a plausible way to eradicate the bacteria entirely, unlike pathogens that have humans as a primary host.
That said, vaccines are available and can be used for certain individuals at high risk. However, they take a long time to show immune protection (>1 month), making them not that useful during outbreaks. From the WHO plague manual (linked below):
Worldwide, live attenuated and formalin-killed Y. pestis vaccines are variously available for human use. The vaccines are variably immunogenic and moderately to highly reactogenic. They do not protect against primary pneumonic plague. In general, vaccinating communities against epizootic and enzootic exposures is not feasible; further, vaccination is of little use during human plague outbreaks, since a month or more is required to develop a protective immune response. The vaccine is indicated for persons whose work routinely brings them into close contact with Y. pestis, such as laboratory technicians in plague reference and research laboratories and persons studying infected rodent colonies (23).
WHO plague manual:
https://www.who.int/csr/resources/publications/plague/WHO_CDS_CSR_EDC_99_2_EN/en/ | {
"domain": "biology.stackexchange",
"id": 10533,
"tags": "bacteriology, epidemiology, vaccination"
} |
Are all pseudoscalars secretly Goldstone bosons? | Question: A pseudoscalar Goldstone boson, $\pi(x)$, is protected by a shift symmetry: it shows up with a derivative in its interaction terms in a Lagrangian. As a pseudoscalar, we may also write it with the usual $i\gamma^5$ interaction. There are thus two ways to encode the interaction:
Shift symmetry manifest: $$\mathcal L = \left(\frac{\partial_\mu \pi}{f}\right)\bar\Psi\gamma^\mu \gamma^5\Psi$$
Pseudoscalar manifest: $$\mathcal L = g \pi \bar\Psi i\gamma^5 \Psi$$
These two are related by the equation of motion, so that $g = 2qm/f$, where $m$ is the fermion mass, $f$ is the order parameter of symmetry breaking, and $q$ is the charge with respect to the broken axial symmetry.
My question is: In the shift-symmetry manifest form, we know that fermion loops do not generate a pseudoscalar mass. However, pseudoscalar manifest form of the interaction looks like a generic pseudoscalar interaction with no symmetry protecting $\pi$ from receiving mass corrections from $\Psi$ loops. So:
How do we see that $\pi$ is protected by a symmetry when we write the interaction in the manifestly pseudoscalar format?
Conversely, I could write any pseudoscalar interaction as $g \pi \bar\Psi i\gamma^5 \Psi$. Does this mean that I can use the fermion equation of motion to convert any pseudoscalar interaction into one that has a shift symmetry?
Details follow, but the main question is stated above.
Example
Set up: Goldstone interaction with fermions
We show how to convert between the shift-symmetric and pseudoscalar forms of the interaction. For simplicity, assume the case of a global, internal, compact U(1) symmetry that is spontaneously broken by a field $H$ that obtains a vev $\langle H \rangle = f$. Let the theory contain a left-handed fermion $\psi_L$ and a right-handed fermion $\psi_R$. Assume axial U(1) charges such that
$$Q[H] = 2q\\
Q[\psi_L] = q\\
Q[\psi_R] = -q$$
Then we may write out the theory with a Yukawa interaction:
$$\mathcal L_\text{Yuk} = y H\bar\psi_L \psi_R + \text{h.c.}$$
We now "pull out the Goldstone fields" from the fields. In order to do this, we transform each field $\Phi \in \{H,\psi_L,\psi_R\}$ by the spontaneously broken symmetry:
$$ \Phi = e^{iq_\Phi \epsilon} \Phi'$$
On the right-hand side, $\Phi'$ is understood to be the field with no Goldstone component. The Goldstone lives in the exponential. For the U(1) case, $\epsilon$ is the transformation parameter, and $q_\Phi$ is the U(1) charge of the $\Phi$.
Then we simply promote the transformation parameter to the Goldstone field, $\epsilon \to \pi(x)/f$. This is a nonlinear transformation to help identify the Goldstone interaction (Sec 19.6 of Weinberg Vol II, or CCWZ II). This gives
$$ \Phi(x) = \exp\left(iq_\Phi \frac{\pi(x)}{f}\right) \Phi'$$
When we do this,
$$\mathcal L_\text{int} = y H' e^{2iq\pi/f} \bar \psi_L' e^{-iq\pi/f} e^{-iq\pi/f} \psi_R' + \text{h.c.} =
y H'\bar \psi_L' \psi_R' + \text{h.c.}
$$
The Goldstone has been completely removed from the Yukawa term and doesn't show up there. This is a consequence of U(1) conservation of the Lagrangian term. Where did the interaction go? We know that the Goldstone must have a derivative interaction, so the natural place to look is the fermion kinetic term.
Writing the kinetic terms with implicit projection operators (alternatively, you can replace $\gamma^\mu$ with $\sigma^\mu$ or $\bar\sigma^\mu$ as appropriate):
$$\mathcal L_\text{kin}
= i \bar \psi_L \gamma^\mu \partial_\mu \psi_L
+ i \bar \psi_R \gamma^\mu \partial_\mu \psi_R
$$
Replacing $\psi_{L,R}$ by the fields with the Goldstone pulled out:
$$
\mathcal L_\text{kin}
= i \bar \psi_L' e^{-iq \pi(x)/f} \gamma^\mu \partial_\mu \left( e^{iq \pi(x)/f}\psi_L' \right)
+ i \bar \psi_R' e^{iq \pi(x)/f} \gamma^\mu \partial_\mu \left( e^{-iq \pi(x)/f}\psi_R' \right)$$
In addition to the usual kinetic terms, these give terms where the derivative acts on the Goldstone, $\pi(x)$. These are the interaction terms that are our primary focus. For simplicity, let us combine the left- and right-handed chiral spinors $\psi_{L,R}'$ into a Dirac spinor, $\Psi = (\psi_L',\psi_R')^T$, and use the projection operators $\frac{1}{2}(1\pm \gamma^5)$:
$$\mathcal L_\text{int}
= i \left(i\frac{q}{f}\partial_\mu \pi \right) \bar\Psi \gamma^\mu \frac{1}{2}\left(1-\gamma^5\right) \Psi
+ i \left(-i\frac{q}{f}\partial_\mu \pi \right) \bar\Psi \gamma^\mu \frac{1}{2}\left(1+\gamma^5\right) \Psi
$$
These terms combine simply into:
$$\mathcal L_\text{int}
=
\frac{q}{f}\left(\partial_\mu \pi\right)\bar\Psi \gamma^\mu \gamma^5 \Psi \ .
$$
We thus arrive at the Goldstone–fermion interaction term in the manifestly shift-symmetric form: clearly this is invariant under $\pi(x) \to \pi(x) + c$, and so $\pi$ is protected from quantum corrections that might generate a mass term $m_\pi^2 \pi^2$.
Using the fermion equation of motion
Now we can use the fermion equation of motion to convert this shift-symmetric form of $\mathcal L_\text{int}$ into one that is manifestly pseudoscalar. Recall that the equation of motion in Dirac notation is:
$$i\gamma^\mu\partial_\mu \Psi = m\Psi$$
Armed with this, we may now integrate $\mathcal L_\text{int}$ by parts to shift the derivative from the $\pi$ to the fermion bilinear. We assume that there's no surface term so that integration by parts in the action amounts to a minus sign and moving the derivative in the Lagrangian:
\begin{align}
\mathcal L_\text{int} &= \frac{q}{f} \pi \partial_\mu \left(\bar \Psi \gamma^\mu \gamma^5 \Psi \right) \\
& = \frac{q}{f} \pi\left[
(\partial_\mu\bar\Psi)\gamma^\mu\gamma^5 \Psi + \bar\Psi \gamma^\mu \gamma^5 \partial_\mu \Psi
\right]
\\
& = \frac{q}{f} \pi\left[
(\partial_\mu\Psi)^\dagger \left(\gamma^0\gamma^\mu \gamma^0\right) \gamma^0\gamma^5 \Psi - \bar\Psi \gamma^5 \gamma^\mu \partial_\mu \Psi
\right]
\\
& = \frac{q}{f} \pi\left[
(\gamma^\mu\partial_\mu\Psi)^\dagger \gamma^0\gamma^5 \Psi - \bar\Psi \gamma^5 \gamma^\mu \partial_\mu \Psi
\right]
\\
& = \frac{q}{f} \pi\left[
(-im\Psi)^\dagger \gamma^0\gamma^5 \Psi - \bar\Psi \gamma^5 \left(-im\Psi\right)
\right]
\\
& = \frac{2iqm}{f} \pi \bar\Psi\gamma^5 \Psi \ .
\end{align}
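The manipulations above rely on a few gamma-matrix identities, e.g. $\{\gamma^\mu,\gamma^5\}=0$ and $\gamma^0\gamma^{\mu\dagger}\gamma^0=\gamma^\mu$. As a sanity check (in the Dirac representation; representation-dependent details aside, the identities themselves are basis-independent), they can be verified numerically:

```python
import numpy as np

I2 = np.eye(2)
ZERO = np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Gamma matrices in the Dirac representation
g0 = np.block([[I2, ZERO], [ZERO, -I2]]).astype(complex)
gi = [np.block([[ZERO, s], [-s, ZERO]]) for s in sigma]
gammas = [g0] + gi
g5 = 1j * g0 @ gi[0] @ gi[1] @ gi[2]   # gamma^5 = i g0 g1 g2 g3

for g in gammas:
    # {gamma^mu, gamma^5} = 0, used to move gamma^5 past gamma^mu
    assert np.allclose(g @ g5 + g5 @ g, 0)
    # gamma^0 gamma^mu† gamma^0 = gamma^mu, used in the Dirac-adjoint step
    assert np.allclose(g0 @ g.conj().T @ g0, g)

assert np.allclose(g5 @ g5, np.eye(4))  # (gamma^5)^2 = 1
print("all identities hold")
```

This only checks the algebraic identities used in the integration-by-parts derivation; it does not by itself address the equation-of-motion subtlety discussed in the answer.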
This now gives us our manifestly pseudoscalar interaction between the Goldstone $\pi$ and the fermions.
Reiteration of the puzzle
So the puzzle is that:
The shift-symmetric and manifestly pseudoscalar forms of the interaction seem perfectly equivalent.
However, the pseudoscalar form of the interaction seems perfectly general. One could tune the fermion mass $m$ to be whatever you want by tuning the Yukawa coupling $y$. This, in turn, tunes $g = 2qm/f$ to be any pseudoscalar coupling. Does this mean that any pseudoscalar interaction between massive fermions can be written as a Goldstone interaction?
In the manifestly pseudoscalar version of the theory, are loop contributions to the $\pi$ mass manifestly zero? This does not seem to generically be the case. (See, e.g. this discussion based on a problem in Peskin & Schroeder)
So: in the case where there really is a spontaneously broken symmetry, there should be a shift symmetry protecting the $\pi$, but how can we see the effect of that shift symmetry when we calculate loops in the pseudoscalar theory?
Alternatively, if we took a generic pseudoscalar theory with no shift symmetry (i.e. the pseudoscalar is not a Goldstone), then what prevents me from using the equation of motion to write the interaction in a manifestly shift-symmetric form and waving my hands that there ought to be a shift symmetry?
Answer: The two theories, namely the ``gradient model'' $\partial_\mu\pi \bar{\Psi}\gamma^5\gamma^\mu\Psi$ and the Yukawa model $g\pi\bar{\Psi}\gamma^5\Psi$ (both with a massive $\Psi$), are definitely not equivalent. They have different symmetries, spectrum and scattering amplitudes, hence are physically distinct theories.
The main mistake that you (the OP) are making is using the free equations of motion for the fermions; that's fine only for the external legs, not for the virtual ones that enter e.g. the one-loop calculation of the $\pi$ mass or, as I'll show below, a scattering amplitude with an intermediate virtual $\Psi$ exchanged. (The mistake that Cosmas Zachos was making in his earlier answer, and in part is still making in the marginally improved answer, is explained in my comments to his answer; I will not repeat it here.)
The gradient model is indeed invariant under $\pi\rightarrow \pi+const$, which clearly forbids a mass term for $\pi$. This isn't the case for the Yukawa model, where a bare mass counterterm is needed to remove the quadratically divergent mass generated by the fermion loops. The physical pole mass is therefore generically non-zero, barring fine-tuning.
More importantly, Goldstone bosons (GBs) aren't just massless particles; they have various special features. For example, soft GBs (that is, in the limit of vanishing $\pi$-momentum) give vanishing scattering amplitudes (the so-called Adler zero condition). This is realized in the gradient theory but not in the Yukawa theory.
Let's see this in more detail looking at a physical scattering amplitude $\pi\Psi\rightarrow \pi\Psi$.
For the Yukawa theory one has
$$
M^{Yukawa}_{\pi\Psi\rightarrow\pi\Psi}=-g^2 \left[\bar{u}(p_2^\prime)\frac{i(\gamma_\alpha p_1^\alpha+\gamma_\alpha p_2^\alpha-m)}{s-m^2+i\epsilon}u(p_2)+\mbox{crossed diag.}\right]
$$
for $\pi(p_1)\Psi(p_2)\rightarrow\pi(p_1^\prime)\Psi(p_2^\prime)$. The $\gamma^5$'s have been moved around and simplified with the numerator of the fermion propagator, i.e. $\gamma^5i(\gamma_\alpha p_1^\alpha+\gamma_\alpha p_2^\alpha+m)\gamma^5=-i(\gamma_\alpha p_1^\alpha+\gamma_\alpha p_2^\alpha-m)$. We could have simplified the numerator using $\gamma_\alpha p_2^\alpha u(p_2)=m u(p_2)$, where $m$ is the fermion mass, but this form is more convenient in the following. There is an s-channel contribution, explicitly displayed, along with a crossed diagram that we do not display explicitly.
(Disclaimer: I am doing this calculation by hand on an iPad; I hope it is not grossly incorrect, although factors of 2 and minus signs are quite possibly off.)
This $M^{Yukawa}_{\pi\Psi\rightarrow\pi\Psi}$ doesn't vanish for $p_1\rightarrow 0$ because, although the numerator goes to zero (namely $(\gamma_\alpha p_1^\alpha+\gamma_\alpha p_2^\alpha-m)u(p_2)=\gamma_\alpha p_1^\alpha u(p_2)\rightarrow 0$), the denominator does too, at the same rate ($s-m^2=2p_{1\alpha} p_2^\alpha\rightarrow0$; here I am assuming that we have tuned the spectrum to be the same, that is, the $\pi$ mass in the Yukawa model has been tuned to zero by hand, otherwise the numerator wouldn't even vanish and the comparison between the two models would make no sense).
On the other hand, for the gradient theory we get
$$
M^{gradient}_{\pi\Psi\rightarrow\pi\Psi}=\frac{1}{f^2}\left[\bar{u}(p_2^\prime)\frac{i(\gamma_\alpha p_1^\alpha+\gamma_\alpha p_2^\alpha-m)^3}{s-m^2+i\epsilon}u(p_2)+\mbox{crossed diag.}\right]\rightarrow 0
$$
which is not only different (hence the two theories are physically distinct, period) but also gives a vanishing amplitude in the GB soft limit $p_1\rightarrow 0$, since the numerator can be written as $i\bar{u}(p_2^\prime)(\gamma_\alpha p_1^\alpha+\gamma_\alpha p_2^\alpha-m)^3u(p_2)=i\bar{u}(p_2^\prime)\gamma_\alpha p_1^{\prime\alpha}(\gamma_\beta p_1^\beta+\gamma_\beta p_2^\beta-m)(\gamma_\gamma p_1^\gamma)u(p_2)$, and by momentum conservation $p_1^\prime\rightarrow 0$ too.
The takeaway message is: the two models are physically and mathematically distinct. The gradient theory describes a GB, whereas the Yukawa theory describes a scalar with a mass tuned to be zero.
Extra edits
I have finally found some time to add a last remark that I mentioned in the comments but is actually worth reporting in the full answer. It is also related to the answer by @Cosmas Zachos.
Having established that the two theories are different, one may wonder how different they are and what the relation between them is, given that the OP's simple use of the equations of motion was flawed.
The answer is very simple: the two theories differ at the non-linear $\pi$-level, starting from the quadratic order. In particular, the claim is that the theory given by
$$
\mathcal{L}_{new}=\bar{\Psi}(i\gamma^\alpha \partial_\alpha-m e^{-2i\gamma^5\pi/f})\Psi+\frac{1}{2}(\partial_\mu\pi)^2\qquad f\equiv 2m/g\,,
$$
which differs from $\mathcal{L}_{Yukawa}=\bar{\Psi}(i\gamma^\alpha \partial_\alpha-m)\Psi+ig\pi \bar{\Psi}\gamma^5\Psi+\frac{1}{2}(\partial_\mu\pi)^2$ starting from $o(\pi^2)$, is in fact equivalent to the gradient theory
$$
\mathcal{L}_{gradient}=\frac{1}{2}(\partial_\mu\pi)^2+\bar{\Psi}(i\gamma^\alpha \partial_\alpha-m)\Psi+\frac{1}{f}\partial_\mu\pi \bar{\Psi}\gamma^5\gamma^\mu\Psi\,.
$$
Indeed, it's enough to perform the field redefinition $\Psi\rightarrow e^{i\gamma^5 \pi/f}\Psi$ to move the $\pi$ from the non-derivative term to the gradient coming from the $\Psi$-kinetic term.
As an ultimate check, let's see the behavior under the soft limit $p_1\rightarrow 0$. The contributions from two linear-$\pi$ vertex insertions are as in the Yukawa theory, but now there is also a contact term coming from expanding the exponential, $\frac{2m}{f^2}\pi^2\bar{\Psi}\Psi$, i.e.
$$
M^{new}_{\pi\Psi\rightarrow\pi\Psi}=M^{Yukawa}_{\pi\Psi\rightarrow\pi\Psi}+i\frac{4m}{f^2}\bar{u}(p_2^\prime)u(p_2)\,.
$$
Now, in the soft limit $p_1\rightarrow 0$, we have $p_2^\prime \rightarrow p_2$ hence the Yukawa terms give
$$
M^{Yukawa}_{p_1\rightarrow 0}=-g^2 \left[\bar{u}(p_2)\frac{i\gamma_\alpha p_1^\alpha}{2 p_1 p_2}u(p_2)+\mbox{crossed diag.}\right]=-2ig^2
$$
where I have used that $\bar{u}(p)\gamma^\alpha u(p)=2p^\alpha$. On the other hand, the new contact term from $\mathcal{L}_{new}$ gives
$$
i\frac{4m}{f^2}\bar{u}(p_2)u(p_2)=i\frac{8m^2}{f^2}=2i g^2\,,
$$
from $\bar{u}(p)u(p)=2m$ (no sum over the two spin orientations; we are considering definite polarizations). Summing the two contributions we see that they cancel each other out, in agreement with the Adler-zero condition.
But again, the equivalence with the gradient theory is achieved only after modifying the theory at the $o(\pi^2)$ level in the way shown above, which corresponds to rendering the $\pi$ a GB. The Yukawa coupling alone is instead for non-GB particles.
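As a quick numeric sanity check of the cancellation above (arbitrary illustrative values for $m$ and $g$, with the overall $i$ factors dropped), one can verify that $8m^2/f^2 = 2g^2$ exactly offsets the soft-limit Yukawa piece:

```python
from fractions import Fraction

# Hypothetical numeric values for the mass and coupling; any choice works
m, g = Fraction(3), Fraction(2)
f = 2 * m / g                      # the decay constant f = 2m/g from L_new

yukawa_soft = -2 * g**2            # soft-limit Yukawa contribution, -2i g^2
contact_soft = 8 * m**2 / f**2     # contact-term contribution, +8i m^2/f^2

print(yukawa_soft + contact_soft)  # → 0: the Adler zero
```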
"domain": "physics.stackexchange",
"id": 42766,
"tags": "quantum-field-theory, particle-physics, symmetry-breaking, chirality, sigma-models"
} |
Position, Velocity and Time | Question:
Is there a standard ROS message that can represent an object's x, y position along with its velocity at each position and time passed at the position?
Originally posted by Divya Thomas on ROS Answers with karma: 1 on 2017-10-17
Post score: 0
Original comments
Comment by gvdhoorn on 2017-10-18:
It would be good if you could provide a little more details. Are you looking for something that can report current state, or something that is used to encode desired state?
Comment by Divya Thomas on 2017-10-18:
This is for desired state. I'm trying to output points along a trajectory where each point represents x, y, desired velocity at that point, and the time to get to the point.
Comment by gvdhoorn on 2017-10-18:
To encode desired state, a trajectory would seem to be the logical choice. That would be what @tfoote suggested. However, perhaps a nav_msgs/Path would be better suited than the trajectory_msgs he links to.
Answer:
That sounds a lot like a Trajectory, specifically the MultiDOFJointTrajectory: http://wiki.ros.org/trajectory_msgs
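For reference, the fields of a trajectory_msgs/MultiDOFJointTrajectoryPoint (as documented on the wiki page linked above) map directly onto the x/y position, desired velocity, and time requirements in the question:

```
# trajectory_msgs/MultiDOFJointTrajectoryPoint
geometry_msgs/Transform[] transforms      # position/orientation at this point
geometry_msgs/Twist[]     velocities      # desired velocity at this point
geometry_msgs/Twist[]     accelerations   # optional
duration                  time_from_start # time to reach this point
```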
Originally posted by tfoote with karma: 58457 on 2017-10-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 29114,
"tags": "ros, messages"
} |
How does the ReLU function make it possible to let the CNN learn more complex features in input data? | Question: In many descriptions of a CNN I often read that at the end of the convolutional layer a ReLU function is needed, for two reasons: first, it solves many problems related to the vanishing gradient; second, it enables the network to learn more complex features in the data. What I cannot figure out is how it is possible to understand such an improvement by just looking at the form of the ReLU function. It removes all negative values, and this translates into getting rid of things such as smooth transitions of grey in an image. But starting from this I do not see how it should lead to the capability of detecting more complex features in an input image.
Answer: Before ReLU became popular, the usual activation functions were tanh and sigmoid. These activation functions suffer from a problem called "vanishing gradient", which causes the gradient to be very small in the initial layers because it is "squeezed" by the regions of those functions that are far from the origin. It mainly happened when you stacked a lot of layers, so the gradient was squeezed again and again in backpropagation. The effect of this problem is that the network learned very, very slowly or even failed to learn at all.
ReLU does not suffer from the vanishing gradient problem, as it does not squeeze the gradient. Therefore, you can stack more layers which, in the end, allows you to recognize more complex stuff.
Nevertheless, ReLUs have their own problems, like the "dying ReLU" problem, in which the optimization process gets "trapped" in the region where the ReLU is zero. That's why variants like Leaky ReLU were proposed; they don't output zero but a small slope, preventing the dying-ReLU issue.
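A minimal sketch of the two activations discussed (plain Python, scalar inputs for clarity):

```python
def relu(x):
    # Passes positive inputs through unchanged; the gradient there is exactly 1,
    # so backpropagation does not shrink it layer after layer.
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Keeps a small slope for negative inputs so the unit can never
    # get permanently stuck at zero ("dying ReLU").
    return x if x > 0 else alpha * x

print(relu(-2.0), relu(3.0))  # → 0.0 3.0
print(leaky_relu(-2.0))       # → -0.02
```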
"domain": "datascience.stackexchange",
"id": 12109,
"tags": "cnn, image-recognition, activation-function"
} |
gazebo_ros_turtlebot3.cpp with real robot (waffle) | Question:
Hi,
I would like to know why gazebo_ros_turtlebot3.cpp, used to make the turtlebot3 autonomously navigate in the Gazebo 3D world, does not work with the real robot. I am using a turtlebot3 waffle. What changes do I have to make in the cpp code, or in the launch file, in order to make it work with the real robot?
What I mean is: if I run roslaunch turtlebot3_gazebo turtlebot3_world.launch and after that I run roslaunch turtlebot3_gazebo turtlebot3_simulation.launch, the turtlebot3 navigates in the simulator. But if I run roscore on the pc, then run roslaunch turtlebot3_bringup turtlebot3_robot.launch on the turtlebot3 waffle, and then run roslaunch turtlebot3_gazebo turtlebot3_simulation.launch, the turtlebot3 does not navigate, it just spins around itself. But if I run roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch, I can teleoperate the robot normally using the keyboard.
Thanks.
Originally posted by RubensQRZ on ROS Answers with karma: 1 on 2018-01-28
Post score: 0
Answer:
Gazebo is the simulator. The launch file turtlebot3_simulation.launch is designed to work within the simulation, not on the real robot. turtlebot3_simulation.launch is a demo that will help you see a fully functioning system in the simulation.
For instructions on how to bring up the TB3 please see: http://emanual.robotis.com/docs/en/platform/turtlebot3/bringup/#turtlebot3-waffle
There are other sections with instructions on how to setup navigation: http://emanual.robotis.com/docs/en/platform/turtlebot3/navigation/#perform-navigation
Originally posted by tfoote with karma: 58457 on 2018-01-31
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 29892,
"tags": "ros-kinetic, turtlebot3"
} |
Is this a good encryption algorithm? | Question: I've never had a serious attempt at writing an encryption algorithm before, and haven't read much on the subject. I have tried to write an encryption algorithm (of sorts), I would just like to see if anyone thinks it's any good. (Obviously, it's not going to be comparable to those actually used, given that it's a first attempt)
Without further ado:
def encode(string, giveUserKey = False, doubleEncode = True):
encoded = [] #Numbers representing encoded characters
keys = [] #Key numbers to unlock code
fakes = [] #Which items in "encoded" are fake
#All of these are stored in a file
if giveUserKey:
userkey = random.randrange(2 ** 10, 2 ** 16)
# Key given to the user, entered during decryption for confirmation
# Could change range for greater security
else:
userkey = 1
for char in string:
ordChar = ord(char)
key = random.randrange(sys.maxsize)
encoded.append(hex((ordChar + key) * userkey))
keys.append(hex(key))
if random.randrange(fakeRate) == 0:
fake = hex(random.randrange(sys.maxsize) * userkey)
encoded.append(fake)
fakes.append(fake)
if doubleEncode:
encoded = [string.encode() for string in encoded]
keys = [string.encode() for string in keys]
fakes = [string.encode() for string in fakes]
hashValue = hex(hash("".join([str(obj) for obj in encoded + keys + fakes]))).encode()
return encoded, keys, hashValue, fakes, userkey
def decode(encoded, keys, hashValue, fakes, userkey = 1):
if hash("".join([str(obj) for obj in encoded + keys + fakes])) != eval(hashValue):
#File has been tampered with, possibly attempted reverse-engineering
return "ERROR"
for fake in fakes:
encoded.remove(fake)
decoded = ""
for i in range(len(keys)):
j = eval(encoded[i]) / userkey
decoded += chr(int(j - eval(keys[i])))
return decoded
Answer: Firstly, your encryption is useless. Just run this one line of python code:
print reduce(fractions.gcd, map(eval, encoded))
And it will tell you what your user key is (with a pretty high probability).
Your algorithm is really obfuscation, not encryption. Encryption is designed so that even if you know the algorithm, but not the key, you can't decrypt the message. But your algorithm is basically designed to make it less obvious what the parts mean, not to make it hard to decrypt without the key. As demonstrated, retrieving the key is trivial anyway.
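To see why, here is a self-contained demonstration of the attack on a simplified version of the scheme (random per-character pads multiplied by a shared user key; the fakes and hex/byte wrapping are omitted since they don't change the math):

```python
import math
import random
from functools import reduce

random.seed(42)  # fixed seed so the demo is reproducible

userkey = 31337  # the secret multiplier every ciphertext value shares
plaintext = "attack at dawn"

# Mimic the encoder: each value is (ord(char) + random pad) * userkey
encoded = [(ord(c) + random.randrange(2**60)) * userkey for c in plaintext]

# Every encoded value is a multiple of userkey, and the remaining factors
# are random, so their gcd collapses to userkey with high probability.
recovered = reduce(math.gcd, encoded)
print(recovered % userkey == 0)  # → True (recovered == userkey almost surely)
```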
def encode(string, giveUserKey = False, doubleEncode = True):
Python convention is lowercase_with_underscores for local names
encoded = [] #Numbers representing encoded characters
keys = [] #Key numbers to unlonk code
fakes = [] #Which items in "encoded" are fake
#All of these are stored in a file
if giveUserKey:
userkey = random.randrange(2 ** 10, 2 ** 16)
# Key given to the user, entered during decryption for confirmation
# Could change range for greater security
Why isn't the key a parameter?
else:
userkey = 1
for char in string:
ordChar = ord(char)
key = random.randrange(sys.maxsize)
encoded.append(hex((ordChar + key) * userkey))
keys.append(hex(key))
These just aren't keys. Don't call them that.
if random.randrange(fakeRate) == 0:
fake = hex(random.randrange(sys.maxsize) * userkey)
encoded.append(fake)
fakes.append(fake)
if doubleEncode:
encoded = [string.encode() for string in encoded]
keys = [string.encode() for string in keys]
fakes = [string.encode() for string in fakes]
Since these are all hex strings, why in the world are you encoding them?
hashValue = hex(hash("".join([str(obj) for obj in encoded + keys + fakes]))).encode()
return encoded, keys, hashValue, fakes, userkey
def decode(encoded, keys, hashValue, fakes, userkey = 1):
if hash("".join([str(obj) for obj in encoded + keys + fakes])) != eval(hashValue):
#File has been tampered with, possibly attempted reverse-engineering
return "ERROR"
Do not report errors by return value, use exceptions.
for fake in fakes:
encoded.remove(fake)
What if coincidentally one of the fakes looks just like one of the data segments?
decoded = ""
for i in range(len(keys)):
Use zip to iterate over lists in parallel
j = eval(encoded[i]) / userkey
Using eval is considered a bad idea. It'll execute any python code passed to it and that could be dangerous.
decoded += chr(int(j - eval(keys[i])))
It's better to append strings into a list and join them.
return decoded | {
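Pulling those points together, a hypothetical revision of decode might look like the sketch below (still insecure, per the gcd attack, and with the hash check omitted for brevity; `int(s, 16)` replaces eval, and zip pairs the lists):

```python
def decode(encoded, keys, fakes, userkey=1):
    # Drop the fakes, then pair each remaining value with its pad via zip
    real = [e for e in encoded if e not in fakes]
    chars = []
    for enc, key in zip(real, keys):
        j = int(enc, 16) // userkey        # int(s, 16) instead of eval
        chars.append(chr(j - int(key, 16)))
    return "".join(chars)                  # join a list, don't += strings

# Tiny round-trip check with userkey = 1 and no fakes
keys = [hex(5), hex(7)]
encoded = [hex(ord("h") + 5), hex(ord("i") + 7)]
print(decode(encoded, keys, fakes=[]))  # → hi
```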
"domain": "codereview.stackexchange",
"id": 3339,
"tags": "python, algorithm"
} |
Unable to understand how the while loop extracts the least number of coins with given sum | Question: I was reading this classical Coin Problem (7.1): Given a set
of coin values coins = {c1, c2,..., ck} and a target sum of money n, our task is to
form the sum n using as few coins as possible.
I fully understood the discussion until I encountered the last code snippet of the section Constructing A Solution:
int value[N], first[N]; // N is sufficiently large
value[0] = 0;
for (int x = 1; x <= n; x++) {
value[x] = INF;
for (auto c : coins) {
if (x-c >= 0 && value[x-c]+1 < value[x]) {
value[x] = value[x-c]+1;
first[x] = c;
}
}
}
while (n > 0) {
cout << first[n] << "\n";
n -= first[n];
}
My doubt pertains to the way the array first stores the coin denominations and how the while loop at the end is able to extract the required coins.
For example, consider the case where I have coins of denominations {1, 4, 5, 10, 20}, with each coin available in as many copies as needed. So to form a sum of 22, optimally I need one 20 unit and two 1 units. This is what the code gives as output, when properly coded.
But I am not able to understand how this first array is storing the denomination values. So to understand this I printed out this first array and got this:
0 1 1 1 4 5 1 1 4 4 10 1 1 4 4 5 1 1 4 4 20 1 1
And I was more confused.
In short: please explain how the first array stores values and how the while loop is able to print out only the required values.
Answer:
Let us take a look at line 3 and 4 of the following part of the code.
1 for (auto c : coins) {
2 if (x-c >= 0 && value[x-c]+1 < value[x]) {
3 value[x] = value[x-c]+1;
4 first[x] = c;
5 }
6 }
value[x] should store the least number of coins needed to form x.
value[x-c] stores the least number of coins needed to form x-c.
Line 3 means, if we take coin c first and use value[x-c] coins to form the rest, x-c, we will use value[x-c]+1 coins to form c+(x-c)=x. So we can set value[x] to be value[x-c]+1.
Since first[x] is set to c at line 4, we can replace c by first[x].
Line 3 can be read again as, in order to use the least number of coins to form x, we can take coin first[x] first and then use the least number of coins to form the rest, x-first[x].
Let us read the while loop.
Initially, the target is n. Treating n as x above, we see that in order to use the least number of coins to form n, we can take coin first[n] first (so, we will print first[n] by cout << first[n] << "\n";) and then will use the least number of coins to form n-first[n].
Now the target is n-first[n]. This is the same situation as the initial situation. So we will set n to n-first[n], and repeat.
For example, when n=22.
Since first[22]=1, we will take coin 1 first.
The target to form next is 22-1=21.
Since first[21]=1, we will take coin 1 first. (In fact, this is the second coin.)
The target to form next is 21-1=20.
Since first[20]=20, we will take coin 20 first. (In fact, this is the third coin).
The target to form next is 20-20=0. No more action is needed to form 0. | {
"domain": "cs.stackexchange",
"id": 20252,
"tags": "data-structures, dynamic-programming, arrays, coin-change"
} |
How to calculate spatial distance in space-time? | Question: Pinning two test particles at two different points in space, how can I calculate their spatial distance, when the geometry is given by the Schwarzschild metric?
Let's say particle 1 is pinned at $r=R$, $\theta=\frac \pi 2$, $\varphi = 0$ where $R$ is a positive radius bigger than the Schwarzschild radius and particle 2 is pinned at $r=R+L$, $\theta=\frac \pi 2$, $\varphi = 0$ where $L$ is also a positive constant.
What is now the spatial distance between the two particles?
Do I have to calculate the length of a geodesic from one particle to the other? Is this equal to the distance?
Answer: I'm guessing that when you say:
What is now the spatial distance between the two particles?
You mean the proper distance. The coordinate distance is of course just $L$. The proper distance is the distance you would measure if you sat at radius $R+L$ and let out a tape measure until it reached radius $R$.
To calculate the proper distance start with the Schwarzschild metric:
$$ ds^2 = -\left(1-\frac{r_s}{r}\right)dt^2 + \frac{dr^2}{\left(1-\frac{r_s}{r}\right)} + r^2 d\Omega^2 $$
Presumably you want the proper distance along the radial path from $r = R$ to $r = R+L$, in which case $dt = d\theta = d\phi = 0$ so the metric simplifies considerably to:
$$ ds = \frac{dr}{\sqrt{1-\frac{r_s}{r}}} $$
and to get the proper distance just integrate:
$$ s = \int_R^{R+L} \frac{1}{\sqrt{1-\frac{r_s}{r}}} dr $$
According to Wolfram this integrates to:
$$ s = \left[ r\sqrt{1-\frac{r_s}{r}} + \frac{r_s}{2} \log \left( 2r \left( \sqrt{1-\frac{r_s}{r}} + 1 \right) - r_s \right) \right]_R^{R+L} $$
I'll leave you to finish the algebra because it's straightforward but messy and unrewarding. | {
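As a quick numerical cross-check of the closed form (illustrative values $r_s = 1$, $R = 2$, $L = 3$ chosen arbitrarily; plain trapezoidal integration):

```python
import math

def antiderivative(r, rs):
    # The bracketed expression above, evaluated at radius r
    root = math.sqrt(1 - rs / r)
    return r * root + (rs / 2) * math.log(2 * r * (root + 1) - rs)

def proper_distance_numeric(R, L, rs, n=200000):
    # Trapezoidal rule for the integral of dr / sqrt(1 - rs/r)
    f = lambda r: 1.0 / math.sqrt(1 - rs / r)
    h = L / n
    total = 0.5 * (f(R) + f(R + L))
    for i in range(1, n):
        total += f(R + i * h)
    return h * total

rs, R, L = 1.0, 2.0, 3.0
closed = antiderivative(R + L, rs) - antiderivative(R, rs)
print(abs(closed - proper_distance_numeric(R, L, rs)) < 1e-8)  # → True
```

Note the proper distance comes out slightly larger than the coordinate distance $L$, as expected from the metric factor.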
"domain": "physics.stackexchange",
"id": 46537,
"tags": "general-relativity, differential-geometry, geometry"
} |
Change of basis vs. change of coordinate system | Question: I'm trying to understand how the translation of coordinate system works in physics, (for example in the Galilean transformations).
When I talk about vectors, I usually mean quantities that can be added or multiplied by a scalar in a manner that the axioms of vector space are verified. What makes me feel confused are the illustrations drawn in physics textbooks, like this one:
For example, if I have to model a situation where there are two different observers, each one using a different coordinate system, I think of each observer as an (orthonormal) basis (of $\mathbb{R}^3$), and then I express the position vector with respect to the more convenient one for the description of the motion, according to the change of basis matrix. In the picture above, the vectors of the "new" coordinate system $O'x'y'z'$ are given, by definition, by the transformation: $$(1)\quad \tau : r \mapsto r - r_0$$
But where can I find, in this example, the concept of change of basis? I'm confused when I find things like this in physics: $$(2)\quad \overbrace{r'}^{\text{"new" coordinate system}} = \overbrace{r-r_0}^{\text{"original" coordinate system}}$$: here we are saying that $\hat\imath' x' + \hat\jmath' y' + \hat k' z' = \hat\imath (x-x_0)+\hat\jmath (y-y_0)+\hat k (z-z_0)$, but in this case what are $\hat\imath'$, $\hat\jmath'$ and $\hat k'$? Are they determined by our transformation law $\tau$?
I hope someone could clarify me the general concept of position vectors and change of frame of reference (or coordinate system), or link me a good resource where this stuff is treated from the mathematician's viewpoint.
Edit: The thing that I don't get is how to build the transformation matrix: if (I'm talking about translations for abbreviating notation, but it can be surely generalized to include rotations) we know the law $T(\mathbf{v}) = \mathbf{v}' = \mathbf{R}\mathbf{v}+\mathbf{t}$, then we can, calling $\mathbf{e}^i$ the $i$-th vector of the original basis, write $T(\mathbf{e}^i) = \mathbf{e}^i + \mathbf{t} = \sum_{i=1}^n{\alpha_{i}' {\mathbf{e}^{i}}'}$, where $\{{\mathbf{e}^i}'\}_{i=1,\dots,n}$ are the vectors of our "translated" basis.
If $\mathbf{t} = {^t({\alpha_j}_0)}_{\{\mathbf{e}^i\}}$ if I take $T(\mathbf{e}^i)$ I get $\mathbf{e}^i + \mathbf{t} = {^t({\alpha_j}_0 + \delta_{i,j})}_{\{\mathbf{e}^i\}} = {^t({\alpha_1}_0,\dots,{\alpha_i}_0+1, \dots, {\alpha_n}_0)}_{\{\mathbf{e}^i\}}$, for $i=1,\dots,n$. But how do I interpret this result in term of space transformation? It is still related to the original frame of reference, and doesn't tell me nothing about the $\{{\mathbf{e}^i}'\}$.
Answer:
I hope someone could clarify me the general concept of position vectors and change of frame of reference (or coordinate system), or link me a good resource where this stuff is treated from the mathematician's viewpoint.
First off, a better name for a position vector is a displacement vector. Displacement vectors are not free vectors. They aren't quite vectors, period, in the sense of the mathematical concept of a vector space. They instead are members of an affine space, with the transformation between two affine spaces given by an affine transformation $\boldsymbol x' = \boldsymbol {\mathrm M} \boldsymbol x + \boldsymbol b$, where $\boldsymbol {\mathrm M}$ is an invertible matrix (or a proper orthogonal matrix if you want to keep things simple) and $\boldsymbol b$ is the displacement vector from one origin to another.
You asked for a good resource where this stuff is treated from a mathematician's point of view. The sister site, Mathematics Stack Exchange, has a good number of questions and answers on the topics of affine spaces and affine transformations.
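A minimal sketch of the affine change of frame $\boldsymbol x' = \boldsymbol{\mathrm M}\boldsymbol x + \boldsymbol b$ in plain Python (2-D for brevity, with $\boldsymbol{\mathrm M}$ a 90° rotation and $\boldsymbol b$ a translation, both chosen arbitrarily):

```python
import math

def affine(M, b, x):
    # x' = M x + b, computed row by row
    return [sum(M[i][j] * x[j] for j in range(len(x))) + b[i]
            for i in range(len(b))]

theta = math.pi / 2                      # rotate the frame by 90 degrees
M = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
b = [1.0, 2.0]                           # displacement between the two origins

x_new = affine(M, b, [1.0, 0.0])
print(x_new)  # → approximately [1.0, 3.0]
```

The Galilean translation in the question is the special case where M is the identity and b = -r0.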
"domain": "physics.stackexchange",
"id": 51029,
"tags": "reference-frames, vectors, coordinate-systems"
} |
Algebraic (spectral) algorithms for the minimum spanning tree problem | Question: Are there any algorithms that use the spectral properties of a graph to solve the minimum spanning tree problem?
To clarify further what I have in mind, starting with the Laplacian matrix I want to algebraically arrive at the required set of edges. This is opposed to Prim's or Kruskal's algorithm where the result is obtained by operating on the edges/vertices sets directly.
Answer: In short: no, not that I know of and I would be surprised if they exist.
For something longer, read on:
I don't think the spectrum of a graph can be of much use in finding an MST. The main reason is this: weights of a graph aren't part of the spectrum of a graph! An MST only makes sense on a weighted graph.
The only thing I see we could try, if all weights are integral, is to 'unfold' the graph so that all weights are unit weights (so an edge of weight $32$ becomes $32$ edges with $31$ nodes between them), mark the original nodes, and find a min-cost tree that spans the original nodes. After this transformation, the spectrum might contain all relevant info, but this transformation step makes our new graph simply horribly large, so any algorithm based on it will have an awful running time. This is simply a bad idea.
I've never heard of a useful way to 'attach' weights to edges in the adjacency matrix and connect them to the spectral properties, either. It doesn't seem likely to work, although proving a vague method won't work is of course impossible.
Of course, the fact that spectral properties likely aren't useful doesn't mean algebraic methods in general won't work. What is important is that you don't leave the edge weights behind. However, most other algebraic methods are used for exponential-time algorithms, and I'm not aware that anyone has constructed something algebraic for MST.
"domain": "cs.stackexchange",
"id": 11124,
"tags": "algorithms, graphs, minimum-spanning-tree"
} |
problem with common_msgs | Question:
when I try to build a package, I get this error:
dede@ubuntu:~/electric_workspace/sandbox/labrob_ros_quadrotor$ rosmake labrob_quadrotor_sensors
.
.
.
.
In file included from /home/dede/electric_workspace/sandbox/labrob_ros_quadrotor/labrob_quadrotor_sensors/include/labrob_quadrotor_sensors/labrob_gazebo_camera.h:74:0,
from /home/dede/electric_workspace/sandbox/labrob_ros_quadrotor/labrob_quadrotor_sensors/src/labrob_gazebo_camera.cpp:31:
/opt/ros/electric/stacks/diagnostics/diagnostic_updater/include/diagnostic_updater/diagnostic_updater.h:47:41: fatal error: common_msgs/DiagnosticArray.h: No such file or directory
compilation terminated.
.
.
but
dede@ubuntu:~/electric_workspace/sandbox/labrob_ros_quadrotor$ sudo apt-get install ros-electric-common-msgs
Reading package lists... Done
Building dependency tree
Reading state information... Done
ros-electric-common-msgs is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 406 not upgraded.
what can I do to solve this problem?
Originally posted by schizzz8 on ROS Answers with karma: 183 on 2012-11-01
Post score: 0
Answer:
Very weird error. Normally, the include line in diagnostic_updater should not include common_msgs/DiagnosticArray.h but diagnostic_msgs/DiagnosticArray.h. Try reinstalling the stack diagnostics:
sudo apt-get install --reinstall ros-electric-diagnostics
Originally posted by Lorenz with karma: 22731 on 2012-11-01
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 11589,
"tags": "ros"
} |
Why is the Bohr's idea of defined circular orbits overruled? | Question:
Consider a thought experiment for determining the position of an electron by using photons of light. According to the principles of optics, if we use light of wavelength $\lambda$, then the position of the electron cannot be located more accurately than $\pm\lambda$. The shorter the wavelength, the greater the accuracy. Therefore, to observe the position of the electron accurately, light of a suitably small wavelength should be used. But photons of radiation of smaller wavelength will have higher momentum. When even a single photon of this light strikes the electron, a large amount of momentum will be transferred to it at the time of collision. This will result in greater uncertainty in velocity or momentum.
On the other hand, in order to minimize the change in momentum we have to use light whose photons have small momentum. This requires radiation of larger wavelength (low momentum); the velocity or momentum will not change appreciably, but we will not be able to measure the position accurately with a larger wavelength. Therefore, uncertainty in position will increase. Thus, we cannot simultaneously measure the position and momentum of a small moving object like an electron accurately. However, in the case of macroscopic objects, the position and velocity can be determined accurately because, during the interaction between the object and the measuring device, the changes in position and velocity are negligible.
What I explained above is actually the Heisenberg's Uncertainty principle which states that
it is not possible to measure simultaneously both the position and momentum (or velocity) of a microscopic particle, with absolute accuracy.
According to Bohr, the electrons revolve around the nucleus in certain well-defined circular orbits. But the idea of uncertainty in position and velocity is said to overrule Bohr's picture of fixed circular orbits.
We may not be able to design an experiment (until now) to measure simultaneously both the position and momentum of the electron. But we cannot overrule Bohr's idea of fixed orbits for this reason alone, because we cannot know whether electrons are revolving in fixed orbits unless we are able to locate electrons without using photons. Thus, this could not be the exact reason for overruling the idea of fixed orbits.
So, is there any reason for overruling the idea of fixed orbits? Or is there anything wrong in my opinion about the concept? If so please explain, so that I do not proceed with that wrong thinking.
EXTERNAL LINKS
Is uncertainty principle a technical difficulty in measurement? (This is a very good question by gotaquestion, which discusses inability of knowing electron location without disturbing it, this link strengthens my last claim on the opposition of rejecting fixed orbit)
Answer:
So, is there any reason for overruling the idea of fixed orbits? Or is there anything wrong in my opinion about the concept? If so please explain, so that I do not proceed with that wrong thinking.
An insurmountable problem with the Bohr atom is that one has two charged particles orbiting around each other. Electromagnetism was an exact science at that time. A charged particle moving in an electric field (each moving in the field of the other) would have to radiate energy away, finally falling into the nucleus, unless there were laws forbidding it from radiating. Thus within the laws of classical physics fixed orbits were an impossibility.
Postulating that X electrons orbiting around Y protons can orbit without radiating was necessary, but not general enough to be called a theory.
The theory that emerged from the plethora of experimental data studying the small dimensions of the atoms was quantum mechanics. It is now understood by physicists that the underlying stratum of nature behaves according to the rules of quantum mechanics, and that classical mechanics and classical electrodynamics are emergent theories from this basis.
Generally, quantum mechanics does not assign definite values. Instead, it makes a prediction using a probability distribution; that is, it describes the probability of obtaining the possible outcomes from measuring an observable. This is an improvement over the Bohr model: the electron's location is described by an approximate probability cloud derived from the wave function, such that the probability is the squared modulus of the complex amplitude. Naturally, these probabilities will depend on the quantum state at the "instant" of the measurement. Hence, uncertainty is involved in the value. There are, however, certain states that are associated with a definite value of a particular observable. These are known as eigenstates of the observable ("eigen" can be translated from German as meaning "inherent" or "characteristic").
The simplest mathematical formulation for solving for an electron around a nucleus is the Schrodinger equation with the appropriate potential. The solutions reproduce the experimental observations and allow for predictions for other potentials and situations. What is calculated instead of an orbit is an orbital: a locus in space and time where the electron can be found, if measured, with a probability given by the square of the wave function.
"domain": "physics.stackexchange",
"id": 10743,
"tags": "quantum-mechanics, heisenberg-uncertainty-principle, atoms, atomic-physics"
} |
Nature of materials | Question: Many a time I become confused when materials (and substances) are described with the following words: granular, crystalline, pellet.
Worse still, the same material can be described with a combination of the words (e.g. sodium chloride granules vs sodium chloride crystals).
My question then is the exact difference between these three words.
Answer: A crystal is the generic term for solids having a regular structure at the atomic/molecular level, which macroscopically manifests in the formation of regular geometric 3D crystalline objects. The size and shape can be natural, formed by crystallization from solutions or melted solids, or mechanically produced, e.g. by breaking in production mills.
A pellet or a granule refers to a particular shape rather than to the nature of the material or the way it was formed. They are not particular crystal forms; they relate instead to the technology that produces them. They do not follow the natural geometric patterns of crystals and are usually polycrystalline. The solids are forced by the technological process to take a specific shape, and the material of pellets/granules may not be crystalline at all.
Pellets, as somewhat flattened shapes, are often the result of pressure binding and cutting of solid microcrystalline or amorphous matter. They can also be prepared from originally liquid, or liquid+solid, forms as below.
Granules are more spherical shapes than pellets. Granules of homogeneous inorganic or organic substances are often produced by fast cooling of a melted solid or by fast evaporation of hot saturated/oversaturated solutions. An alternative option is using a solidifying binder, if the granules do not have a homogeneous composition - e.g. pet feed.
"domain": "chemistry.stackexchange",
"id": 14756,
"tags": "materials"
} |
How to shutdown gazebo from a plugin? | Question:
Hi,
how would one safely terminate gazebo from a plugin?
Calling gazebo::shutdown() kills it, but not in a nice fashion (errors appear in the terminal)
Thanks,
Andrei
Originally posted by AndreiHaidu on Gazebo Answers with karma: 2108 on 2014-09-25
Post score: 0
Original comments
Comment by Georg Bartels on 2015-01-07:
Excellent question! I called gazebo::shutdown() from the UpdateCallback(...) of a ModelPlugin and it left the simulation in a frozen state without terminating gazebo. I can still press the buttons in the menu. Is this the intended behavior? (Ubuntu 12.04, Gazebo 4.1.0)
Answer:
Try sending a ServerControl message to the /gazebo/server/control topic where the message has "stop" set to true.
Originally posted by nkoenig with karma: 7676 on 2015-01-07
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Georg Bartels on 2015-01-09:
This worked nicely for me, i.e. when starting the simulation with gzserver it terminates cleanly after the message was send. When started with gazebo, the server terminates cleanly and the gui remains. Thanks a lot! EDIT: I'd like to up-vote your answer but I do not have enough karma.. | {
"domain": "robotics.stackexchange",
"id": 3648,
"tags": "gazebo"
} |
Why the continued obsession with measuring the year as a multiple of days? | Question: Pick any notion of year and any type of day measurement. Outside of history and the common cycle of day and night, why are these viewed as integrally or even fractionally commensurate?
Why all the stunts with leap days and leap seconds, etc... when we now have atomic clocks? Why not decouple the two?
Answer: The year is not defined as a multiple of days.
The year is the time for the Earth to orbit the sun once. This is not a constant amount of time. So it is not defined in seconds or days or any other length of time. However it only varies a little.
The (synodic) day is the amount of time for the Earth to rotate once relative to the Sun. This is also not a fixed amount of time, but, like the year, it varies only a little.
Time is measured in seconds. One second is 9,192,631,770 cycles of the radiation produced by the transition between two levels of the caesium-133 atom. This is among the most stable clocks we know. The second can be redefined if a more stable clock is found.
Now for the practical aspect of calendar forming it is convenient to have a simple rule that anyone can apply and can decide, without the need to make careful observations, which day is in which year. To this end the Gregorian Calendar gives a fairly simple but accurate approximation to the true astronomical year.
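The Gregorian rule ("divisible by 4, except centuries, except every fourth century") is indeed simple enough to state in one line; as a quick sketch of how close its approximation comes:

```python
def is_gregorian_leap(year):
    # divisible by 4, except centuries, except every fourth century
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# over one full 400-year cycle the rule gives 97 leap years, so the
# average calendar year is 365.2425 days -- close to the tropical
# year of about 365.2422 days
days = sum(366 if is_gregorian_leap(y) else 365 for y in range(2000, 2400))
print(days / 400)  # 365.2425
```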
The disadvantages of using astronomical observations to decide the first day of the year are significant. Politically, who makes the observations? Russia? China? Japan? Botswana? In theory, it would mean that I could not know which day a year was in until after that day had been reached and the observations done. In practice there would be no difference from the Gregorian calendar for many centuries; so why bother?
To summarise: The astronomical year is approximated by the Gregorian year. This is simple to use. It is accurate for most calendar applications. It allows for the calendar to be extended into the future. The Gregorian calendar has wide acceptance among many countries. If precise timekeeping is needed then one doesn't use "years" instead one uses the SI unit of time, the second. | {
"domain": "astronomy.stackexchange",
"id": 2263,
"tags": "history, time"
} |
Do generative models produce varying outputs for the same input? | Question: I am new to data science. I believe a generative model generates responses on-the-fly for a valid user input. Is it correct to assume that such models would generate different responses for the same question?
For example: if we trained the model on, say, medical data, and user 1 asked "what is fever" and user 2 asked the same question, could it be that users 1 and 2 will receive different answers? If so, how do we circumvent this problem?
thanks in advance
Answer: This depends entirely on the specific model. There are generative models, like most Generative Adversarial Networks (GANs), that receive a random input and generate data. There are other generative models that generate a probability distribution over the output space (e.g. text generation models), and for these, whether the model generates data deterministically depends on the inference procedure (e.g. greedy, sampling, beam search).
If you want your model to generate outputs deterministically, you just select a model and inference method that assures that.
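The contrast can be sketched with a toy model (the vocabulary and the fake "model" below are invented for illustration only; they are not a real trained network). Greedy decoding of a fixed distribution is reproducible; sampling from the same distribution generally is not:

```python
import random

vocab = ["low-grade", "fever", "is", "a", "raised", "temperature", "<eos>"]

def next_token_probs(prefix):
    # toy stand-in for a trained model: any fixed function of the prefix
    r = random.Random(hash(tuple(prefix)))
    w = [r.random() for _ in vocab]
    s = sum(w)
    return [x / s for x in w]

def decode(prompt, greedy=True, rng=None):
    out = list(prompt)
    for _ in range(5):
        p = next_token_probs(out)
        if greedy:                       # deterministic: always take the argmax
            tok = vocab[p.index(max(p))]
        else:                            # stochastic: sample from the distribution
            tok = rng.choices(vocab, weights=p)[0]
        out.append(tok)
    return out

# greedy decoding: the same question always yields the same answer
assert decode(["what", "is", "fever"]) == decode(["what", "is", "fever"])
# sampling: two users' RNGs can pick different tokens from the same distribution
u1 = decode(["what", "is", "fever"], greedy=False, rng=random.Random(1))
u2 = decode(["what", "is", "fever"], greedy=False, rng=random.Random(2))
```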
In your example, you may have a normal seq2seq model (e.g. Transformer) and use beam search for decoding, and the outputs will be the same given the same input. | {
"domain": "datascience.stackexchange",
"id": 9572,
"tags": "deep-learning, nlp"
} |
3D object in OpenGL | Question: I made a 3D object in OpenGL. But I think my code is extremely bad, and now I want to make it better.
Here is my code:
#include <GL/glut.h>
void display()
{
glClear(GL_COLOR_BUFFER_BIT);
glBegin(GL_POLYGON);
glColor3f(0.5, 0.5, 0.5);
glVertex2f(0.0, 0.0);
glVertex2f(0.5, 0.0);
glVertex2f(0.5, 0.5);
glVertex2f(0.0, 0.5);
glVertex2f(0.75, 0.75);
glVertex2f(0.25, 0.75);
glVertex2f(0.0, 0.5);
glVertex2f(0.5, 0.0);
glVertex2f(0.75, 0.25);
glVertex2f(0.75, 0.75);
glEnd();
glBegin(GL_LINES);
glColor3f(1.0, 1.0, 1.0);
glVertex2f(0.0, 0.0);
glVertex2f(0.5, 0.0);
glVertex2f(0.5, 0.0);
glVertex2f(0.5, 0.5);
glVertex2f(0.5, 0.5);
glVertex2f(0.0, 0.5);
glVertex2f(0.0, 0.5);
glVertex2f(0.0, 0.0);
glVertex2f(0.5, 0.0);
glVertex2f(0.75, 0.25);
glVertex2f(0.75, 0.25);
glVertex2f(0.75, 0.75);
glVertex2f(0.75, 0.75);
glVertex2f(0.5, 0.5);
glVertex2f(0.75, 0.75);
glVertex2f(0.25, 0.75);
glVertex2f(0.25, 0.75);
glVertex2f(0.0, 0.5);
glEnd();
glFlush();
}
int main(int argc, char *argv[])
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE);
glutCreateWindow("OpenAdventrue");
glutDisplayFunc(display);
glutFullScreen();
glutMainLoop();
}
(Compile with gcc file.c -o file -lGL -lglut)
Answer: There is not much code to review.
GLUT is deprecated. The original GLUT ceased support more than 20 years ago. FreeGLUT is an open source alternative, but I'm not sure what the status is, since the last release was 18 months ago. Use something modern like GLFW or SDL2.
glBegin() and glEnd() are deprecated. The immediate-mode API was deprecated in version 3.0 (we are now at version 4.6). You should use modern OpenGL (vertex arrays, buffer objects). Any performant OpenGL code should not use the immediate-mode API. See this SO answer for why.
glVertex2f essentially sets the z-coordinate of the vertex to 0.0, so the function draws a 2D point, not a 3D point. Thus this is not a 3D object. | {
"domain": "codereview.stackexchange",
"id": 40942,
"tags": "c, opengl"
} |
Is there any stereospecific enzyme in PDB that catalyzes an anabolic reaction and has an entry showing both reactant ligands? | Question: I am desperately searching for an anabolic enzymatic reaction, ideally with a metal ion involved in the reaction complex, and for which -- unlike in the case of lactose synthase -- we have snapshots of the critical step of the reaction in the PDB, showing the alignment of the two ligands that get bonded into a single stereoisomer. It's not easy to find by just fishing in UniProt, because most PDB entries do not have ligands, and when they do, as with lactose synthase, they don't show both ligands. I know it is very difficult to capture these precise moments in a reaction, which is why I am asking this question: there may be only a single such PDB entry, or a handful, among millions -- a literal needle in a haystack. I would also take a best-guess predicted model; I don't care whether it was actually seen in X-ray imaging, as long as it has a PDB entry.
Answer: When an enzymatic reaction is studied using X-ray crystallography, there are multiple structures that provide information:
The apo-enzyme (empty active site)
Enzyme with inhibitor in the active site
Product complex
Substrate complex
Of those four types, (4) is often the most difficult to achieve. The first enzyme studied by crystallography, lysozyme, still lacks a substrate-complex structure as far as I know. The reason this is difficult is that crystallographic data collection takes time, and during that time the substrate will react. Strategies to still get information about the substrate complex are to use an active-site mutant (inactivated enzyme), to leave out substrates in multi-substrate reactions, or to work with caged substrates (e.g. caged ATP) and time-resolved crystallography.
To search for a stereospecific enzyme, you could go after reactions that have two possible ways of connecting two substrates (e.g. synthesis of lactose vs. maltose; search for retention vs. inversion of configuration) or substrates that have a prochiral center (e.g. in the reaction of pyruvate to lactate). Often, textbooks will choose a system that is structurally well characterized to illustrate the stereospecificity of enzymes, so looking in the table of contents of textbooks on enzyme mechanisms might also be a search strategy.
Lactate dehydrogenase comes close to your list of requirements. In the reverse direction, it transforms pyruvate with its prochiral center stereospecifically to lactate.
The mechanism is described here and depicted here. Multiple structures with pyruvate exist, as listed here. If this is not a good example for the OP's needs, use the same kind of search strategies to find other examples.
It is likely that for a complete view of an enzymatic mechanism (which is just a hypothesis in any case) you would have to combine information from multiple structures. | {
"domain": "chemistry.stackexchange",
"id": 16081,
"tags": "reaction-mechanism, biochemistry, cheminformatics, enzymes"
} |
Newton's globe experiment with linear acceleration? | Question: Newton's globe experiment: two globes that share all their features are connected with a rope in an otherwise empty universe.
Newton introduced this experiment to show that even though the cases where the globes don't move and where they rotate are not distinguishable relationally (i.e. considering their distance to each other), we can nevertheless distinguish them due to inertial forces that only occur in the rotation case (tension on the rope between the two globes).
My question: could Newton also have chosen an example using linear accelerated motion instead of angular accelerated motion? Inertial forces should occur there as well. If not, why does he need circular motion to make his point?
Answer: To ensure that there is no relative motion in the case of linearly accelerated motion, the same force per unit mass (i.e. the same acceleration) must act at every point of the moving body. If this is the case, there is no noticeable inertial force (just as freely falling in a homogeneous gravitational field doesn't produce any measurable forces, since $m_{g}\propto m_{i}$).
In the case of circular motion, however, there is no relative motion (all distances remain the same), but inertial forces nevertheless occur, because the force acting at the outside of the circle the globes move around is greater than the force acting at the inside of the circle. | {
"domain": "physics.stackexchange",
"id": 98489,
"tags": "newtonian-mechanics, acceleration, inertial-frames, string, machs-principle"
} |
How can this problem be solved in $O(n^{3/2}\log n)$ time? | Question: I can solve the following problem by Jeff Erickson in $O(n^3)$ (and maybe in $O(n^2\log n)$) time, but how is the $O(n^{3/2}\log n)$ time solution possible?
Let $D[1 .. n]$ be an array of digits, each an integer between $0$ and $9$. A digital subsequence of $D$ is a sequence of positive integers composed in the usual way from disjoint substrings of $D$. For example, $3, 4, 5, 6, 8, 9, 32, 38, 46, 64, 83, 279$ is a digital subsequence of the first several digits of $\pi$:$$ \underline 3 , 1, \underline 4 , 1, \underline 5 , 9, 2, \underline 6 , 5, 3, 5, \underline 8 , \underline 9 , 7, 9, \underline{3, 2} , 3, 8, \underline{4, 6} , 2, \underline{6, 4} , 3, 3, \underline{8, 3 }, \underline {2, 7, 9}$$ The length of a digital subsequence is the number of integers it contains, not the number of digits; the preceding example has length $12$. As usual, a digital subsequence is increasing if each number is larger than its predecessor.
Describe and analyze an efficient algorithm to compute the longest increasing digital subsequence of $D$. [Hint: Be careful about your computational assumptions. How long does it take to compare two k-digit numbers?]
For full credit, your algorithm should run in $O(n^4)$ time; faster algorithms are worth extra credit. The fastest algorithm I know for this problem runs in $O(n^{3/2}\log n)$ time; achieving this bound requires several tricks, both in the design of the algorithm and in its analysis, but nothing outside the scope of this class.
Answer: I will assume you know how to solve the problem by keeping a table $T[i, j]$
which stores the length of the best solution whose last integer is $D[i\cdots j] = D[i]D[i+1] \cdots D[j]$. To compute $T[i, j]$ you need to find the best $T[a, b]$ such that $D[a \cdots b] <_{\text{lex}} D[i \cdots j]$ and $b < i$. The computation of $T[i, j]$ (provided the previous entries of $T$ have been filled) can take $O(n)$ if the $<_{\text{lex}}$ is implemented in $O(1)$ time with a good data structure like a suffix array + LCP; after all, you're always comparing substrings of $D$. Naively, this would yield a runtime of $O(n^3)$ which is the state at which you seem to be.
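As a concrete baseline (not the $O(n^{3/2}\log n)$ version), the table $T$ can be filled naively. The sketch below uses direct integer comparison in place of the suffix-array machinery and skips none of the cells, purely for illustration:

```python
def longest_inc_digital_subseq(D):
    # T[i][j] = length of the best increasing digital subsequence whose
    # last number is the substring D[i..j] (0-indexed, inclusive)
    n = len(D)
    T = [[0] * n for _ in range(n)]
    ans = 0
    for i in range(n):
        if D[i] == '0':            # positive integers only: no leading zeros
            continue
        for j in range(i, n):
            cur = int(D[i:j + 1])  # naive; a suffix array makes comparisons O(1)
            best = 0
            for a in range(i):
                for b in range(a, i):   # previous number must end before i
                    if T[a][b] and int(D[a:b + 1]) < cur:
                        best = max(best, T[a][b])
            T[i][j] = best + 1
            ans = max(ans, T[i][j])
    return ans

print(longest_inc_digital_subseq("314159"))  # 4, e.g. 3, 4, 5, 9
```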
Now consider the following observations.
Observation 1: The $i$-th term of the optimal sequence should never be longer than the $(i-1)$-th term by more than 1 digit.
Observation 2: Taking $D[1]$, $D[2\cdots3]$, $D[4 \cdots 6], \ldots D[n-\sqrt{n} \cdots n]$ is always a solution, implying that the answer is always at least $\sqrt{n}$, and by Observation 1, we get that only the cells of $T$ with $j-i+1 \leq \sqrt{n}$ should be computed (one can even limit it to $j-i+1 \leq \sqrt{j}$, but it doesn't help asymptotically...). There are now $O(n^{3/2})$ cells.
Now let's try to speed-up the computation of
\begin{equation}
T[i, j] = 1 + \max_{a \leq b < i} \Big\{ T[a, b] \; \vert \; D[a\cdots b] <_{\text{lex}} D[i\cdots j] \Big\}.\tag{1}
\end{equation}
As you were already doing in your $O(n^3)$ approach, we should only consider $b - a = j - i$ or $b-a = j - i -1$ (this is just Observation 1).
If we keep, for each substring length $\ell \in \{1, \ldots, \sqrt{n}\}$, a table where $B[\ell][i] := \max_{b < i} \Big\{ T[b-\ell+1, b]\Big\}$, then $B[j-i][i]$ is one candidate for the $\max$ in Equation (1) (the strictly shorter previous term is automatically smaller), while the other candidate will be
$$
\max_{b < i} \Big\{ T[b-j+i, b] \; \vert \; D[b-j+i\cdots b] <_{\text{lex}} D[i\cdots j] \Big\}.
$$
How could we compute this last amount more efficiently? We can assume we're constructing $T[i, j]$ in increasing order of $i$, and let the length be denoted $\ell = j-i+1$. We can keep in memory a set of pairs $$S^{i}_\ell = \Big\{ (T[b-\ell+1, b], D[b-\ell+1 \cdots b]) \; \vert \; b < i\Big\}.$$
If we let $w = D[i\cdots j]$, then we're looking for
$$
\max v \; \text{s.t. } \; (v, y) \in S^i_{\ell} \text{ and } y <_{\text{lex}} w.
$$
We're now facing a data structures problem; how can we keep a set of pairs $(\text{integer}, \text{string of length } \leq \sqrt{n})$ in a way that quickly allows us to query with a word $w$ for the maximum first coordinate whose second coordinate is smaller than $w$? It is enough to maintain $S$ as a balanced binary search tree, where the ordering of the tree is based solely on the words $y$ inserted in it, but every node keeps track of the best value $v$ present in its subtree. Notice that because comparisons between strings take $O(1)$, searching on the tree will take $O(\log n)$. Therefore our entire algorithm runs in $O(n^{3/2} \log n)$. | {
"domain": "cs.stackexchange",
"id": 21335,
"tags": "algorithms, time-complexity, dynamic-programming"
} |
Why is $\| M|\psi\rangle \| \leq 1$ for POVM $M$? | Question: In this question's answer it is mentioned that $\| M|\psi\rangle \| \leq 1$ for POVM element $M$. I don't get why this is.
My thoughts so far: for the set of POVM elements $\{M_a\}$ we know that all $M_a$ are positive operators satisfying $\langle \psi | M_a | \psi \rangle \leq 1$ and that $\sum_a M_a = I$. We choose an arbitrary $M$ out of this (arbitrary) POVM set. So I tried:
using the sum rule $\sum_a \langle \psi | M_a | \psi \rangle = 1$ can we somehow infer $\sum_a \langle \psi | M_a^2 | \psi \rangle \leq 1$?
we can use the Cauchy-Schwarz inequality to show that $\langle \psi | M_a | \psi \rangle \leq \| M_a |\psi\rangle \|$; however, this does not help me since the inequality points "the wrong way"
I tried throwing the Cauchy-Schwarz inequality at it in other ways but it didn't get me anywhere
Those were my ideas so far.
(Note: here $M$ is the actual POVM element, not the measurement operator. It is the definition Nielsen and Chuang uses in Box 4.1, which is where the linked question comes from)
Answer: Here's the simplest proof I could come up with. First note that by definition we have $M \leq I$ where $I$ is the identity operator. Now use $A \leq B \implies X^\dagger A X \leq X^\dagger B X$ with $A = M$, $B = I$ and $X = M^{1/2}$ to get that
$$
M^2 \leq M.
$$
(Another way to see this inequality is via the spectral theorem).
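A quick numerical spot-check of $M^2 \leq M$ and the resulting bound (the $2\times 2$ matrix and state below are arbitrary hand-picked values, for illustration only):

```python
import math

# a hand-picked 2x2 POVM element: symmetric, eigenvalues ~0.96 and ~0.24,
# so 0 <= M <= I holds
M = [[0.9, 0.2],
     [0.2, 0.3]]
psi = [0.6, 0.8]               # normalized state: 0.36 + 0.64 = 1

def matvec(A, v):
    return [sum(A[r][k] * v[k] for k in range(2)) for r in range(2)]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

Mpsi = matvec(M, psi)
# <psi|M^2|psi> <= <psi|M|psi> <= 1, hence ||M psi|| <= 1
assert inner(Mpsi, Mpsi) <= inner(psi, Mpsi) <= 1.0
print(math.sqrt(inner(Mpsi, Mpsi)))   # about 0.79
```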
Then
$$
\| M |\psi\rangle\|^2 = \langle \psi|M^2 |\psi \rangle \leq \langle \psi| M | \psi \rangle \leq \langle \psi| I |\psi\rangle = 1\,.
$$ | {
"domain": "quantumcomputing.stackexchange",
"id": 3871,
"tags": "linear-algebra, povm"
} |
FCC and BCC lattice | Question: Some crystals can be described by an FCC lattice or a BCC lattice. For example, diamond can be described as an FCC lattice with a two-atom basis. Is it also possible to describe it using an ordinary cubic lattice with a different basis than the FCC basis?
Answer: Yes, it is possible. However, let me add some clarification:
A crystal is described by the combination of a lattice together with a motif. The lattice is independent of the physical crystal; it is a mathematical construct. Therefore, a priori, you could describe your crystal with several lattices, although some are more convenient than others.
The motif is the set of coordinates of the physical atoms of the crystal in a given vector basis, so for the same crystal, depending on the chosen lattice, the corresponding motif coordinates will be different.
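As a quick sanity check of this for diamond (a sketch; coordinates are fractions of the conventional cubic lattice constant): the FCC lattice with a 2-atom motif and the simple cubic lattice with an 8-atom motif generate exactly the same set of atomic positions.

```python
from itertools import product

# FCC description: primitive vectors + 2-atom motif
fcc_vecs = [(0, .5, .5), (.5, 0, .5), (.5, .5, 0)]
fcc_motif = [(0, 0, 0), (.25, .25, .25)]

# simple-cubic description: cube edges + 8-atom motif
sc_motif = [(0, 0, 0), (0, .5, .5), (.5, 0, .5), (.5, .5, 0),
            (.25, .25, .25), (.25, .75, .75), (.75, .25, .75), (.75, .75, .25)]

def frac(p):  # wrap a point back into the conventional unit cell
    return tuple(round(x % 1.0, 6) for x in p)

fcc_points = {frac(tuple(m[k] + i * fcc_vecs[0][k]
                              + j * fcc_vecs[1][k]
                              + l * fcc_vecs[2][k] for k in range(3)))
              for m in fcc_motif
              for i, j, l in product(range(-2, 3), repeat=3)}

# both descriptions give the same crystal
assert fcc_points == {frac(p) for p in sc_motif}
```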
Finally, it is important to remark that no matter the lattice $\bigoplus$ motif combination which is chosen, one retrieves the same physical quantities (e.g. diffraction pattern) since the crystal is the same. | {
"domain": "physics.stackexchange",
"id": 55349,
"tags": "solid-state-physics, crystals"
} |
Has anyone tried to use llama.cpp with NVLink? | Question: Apparently it's possible to pool the memory of two 3090s using NVLink (although not with the 4090). This would make it possible to run large LLMs on consumer hardware.
https://huggingface.co/transformers/v4.9.2/performance.html
Although before I invest into a new GPU, I would like to verify that it actually works, since conventional wisdom used to be that SLI only doubled performance, not memory.
So has anyone tried yet? What's the token rate?
Answer: Memory pooling is not really much of a thing these days: the interface does not suddenly give you a single address space. You still have individual GPUs; you just enable peer transfers between them. This makes sense from a design point of view, because in order to write efficient software, the software has to be aware of which data is on which physical device so that efforts can be made to optimize and reduce (unnecessary) transfers.
Therefore I'd say the premise of your question is flawed. Perhaps it should be edited.
Anyway, I'm running llama.cpp with dual 3090 with NVLink enabled. llama.cpp does have implemented peer transfers and they can significantly speed up inference. For example 10 tok/s -> 17 tok/s for a 70B model. | {
"domain": "ai.stackexchange",
"id": 3983,
"tags": "gpu"
} |
Determine minimum sample rate for continuous sinusoid | Question: Consider a signal $$ x(t) = \cos(175\pi t) $$ which is sampled to produce discrete time signal $$ x[n] = x(nT_s) $$ The fundamental period of $x[n]$ is $$ N_0 = 7 $$
Given this, what is the smallest possible sampling period $T_s$? (Ans: 1.6327 ms).
I would assume that this is related to finding the Nyquist frequency. I was thinking:
Since, $$N_0 = 7 \implies f_0 = \frac17 \implies f_{\mathrm{Nyquist}} = 2 \frac17\implies T_s = \frac72 $$ However, this is obviously incorrect. I am not even using any information of the original signal. Any suggestions on what I could be doing wrong here?
Answer: Remember that the cosine function repeats itself every $2\pi$. The discrete signal can be written as $$x(n)=\cos(175\pi n T_s)$$
So when the argument reaches $2\pi$, we'll have a period. As we know that the fundamental period is $7$, then
$$175\pi \cdot7T_s=2\pi \implies T_s =\frac{2}{7\cdot 175}=1.6327\cdot 10^{-3}$$ | {
"domain": "dsp.stackexchange",
"id": 6027,
"tags": "discrete-signals, sampling, nyquist, aliasing"
} |
What would be the identification reactions of selenate(IV) ion? | Question: I have not been able to find any $\ce{SeO3^2-}$ identification reactions anywhere. Are there any?
Answer: You can use reducing agents to reduce the $\ce{SeO3^2-}$ salt to elemental selenium. This paper [1] discusses the use of iron(II) salts in an acidic medium (phosphoric-hydrochloric acid) as the reducing agent. The reaction proceeds at room temperature.
You can use other reducing agents such as hydrochloric acid, sulfur dioxide, hydroxylamine hydrochloride, or hydrazine hydrochloride. The reduced selenium has a red color which turns greyish-black on warming, but be careful not to overheat (boiling or evaporating leads to serious losses of selenium by forming volatile $\ce{SeCl4}$).
References
Rajkumar Kalaparthi , Srija Korapu , Padmarao Chekuri, 2020, New Spectrophotometric Determination of Sodium Selenate [Selenium (IV)] and Sodium Selenite[Selenium(VI)] with Iron(II), and Their Quantitative Analysis of Selenium (IV) – Selenium (VI) Present in A Binary Mixtures, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT) Volume 09, Issue 05 (May 2020), DOI: 10.17577/IJERTV9IS050094
Vogel, A. I., & Svehla, G. (1996). Vogel's qualitative inorganic analysis | {
"domain": "chemistry.stackexchange",
"id": 14947,
"tags": "inorganic-chemistry, analytical-chemistry, identification"
} |
Why MADDPG rather than taking all cooperating agents as a single meta-agent? | Question: Since MADDPG uses a centralized critic for training, why not simply treat all cooperating agents as a single meta-agent with a concatenated observation space and a concatenated action space? In my opinion, MADDPG is centralized enough, so it won't hurt to go one step further.
Answer: MADDPG can be used to model agents that have limited observation and communication capabilities after training, which is an interesting and useful real world scenario.
why not simply treat all cooperating agents as a single meta-agent with a concatenated observation space and a concatenated action space?
Any real world implementation will then require resources to provide and manage that overview. This may not be practical or desirable in all cases.
There is no single fix for this, it is an open area of research. Whether to invest in better communication and central processing, or better autonomy for multiple agents is likely to have different answers depending on the problem and current technology limits for either approach.
MADDPG reduces the role of central processing to assessment of global reward signals during training. That means:
Each agent works with local signals only, and simpler observation and action spaces as a result. Only the reward signal processing is handled externally.
Trained agents can theoretically be used in environments where a central processor is not available.
So, for example, agents can be trained in simulation with all the oversight that allows, or with a carefully instrumented environment including high bandwidth connections between agents and central processing. They can then be deployed into matching environments where the central oversight is not available, or too costly. | {
"domain": "datascience.stackexchange",
"id": 8718,
"tags": "reinforcement-learning, openai-gym"
} |
Around what apparent magnitude can the naked eye observe an object during full moon | Question: For a very rough guideline using healthy/corrected eyes adjusted to the dark, around how bright should an object be to expect it to be visible?
Answer: Wikipedia's page on the Bortle scale claims the full moon at a dark site is roughly equivalent to the light pollution at the urban/suburban transition, which means you could see stars down to a naked-eye limiting magnitude (NELM) of 4.6-5.0. | {
"domain": "astronomy.stackexchange",
"id": 6551,
"tags": "observational-astronomy, the-moon, apparent-magnitude, atmospheric-effects"
} |
Would it have been possible to send a radio signal towards ʻOumuamua? | Question: As we could not get any radio signals from asteroid ʻOumuamua, couldn't we have sent a powerful radio signal to it and then check if we can get any radio signal in response?
This way we can at least check if it is just a rock or an alien spaceship. If it were an alien spacecraft, then I believe they would surely respond with a radio signal.
Answer:
Would it have been possible to send a radio signal towards ʻOumuamua?
Yes. It is certainly true that at any time, a large radio telescope or even a small transmitter can send a signal toward something. New Horizons can receive our signals now even though it's in the Kuiper belt, because it knows when and where to listen and at what frequency.
As we could not get any radio signals from asteroid ʻOumuamua, couldn't we have sent a powerful radio signal to it and then check if we can get any radio signal in response?
The asteroid was quite close to us in the 2nd half of 2017, but now it's much farther than New Horizons. However if it had advanced technology it would know we were very active in the electromagnetic radio spectrum and it might be listening. If so, we certainly could have sent it a signal that it could receive.
Checking for a response is also hard because we'd need to know what frequency and when. We'd have to allocate resources, etc.
For the frequency, see also
https://scifi.stackexchange.com/q/197905/51174
https://scifi.stackexchange.com/q/16348/51174
Below: Plots of the distance of ʻOumuamua (bumpy, blue) and the New Horizons spacecraft (straight, green) from Earth as a function of years since 2017-01-01. | {
"domain": "astronomy.stackexchange",
"id": 4853,
"tags": "radio-astronomy, extra-terrestrial, trojan-asteroids, fast-radio-bursts"
} |
How to calculate the Hamiltonian from the Lagrangian for a non-relativistic charged point particle in an EM field? | Question: I was given the equation of the Lagrangian:
\begin{equation}
L~=~\frac{1}{2}m \dot{x}^2+\frac{e}{c}\vec{\dot{x}}\cdot \vec{A}(\vec{x},t)-e\phi (\vec{x},t).
\end{equation}
I proceeded to use the equation:
\begin{equation}
H~=~\sum_{i} \dot{q_i} \frac{\partial L}{\partial \dot{q_i}} -L
\end{equation}
to get the Hamiltonian as:
\begin{equation}
H~=~\frac{1}{2}m \dot{x}^2+e\phi (\vec{x},t),
\end{equation}
but, in the text, the Hamiltonian is given as:
\begin{equation}
\hat{H}~=~\frac{1}{2m}(\frac{\hbar}{i}\nabla - \frac{e}{c}\vec{A})\cdot(\frac{\hbar}{i}\nabla - \frac{e}{c}\vec{A})+e\phi
\end{equation}
So,why and where did I go wrong?
Answer: To make the correct answer clearer, allow me to introduce the canonical momentum $\vec{p}$, given by:
$$\vec{p}=\dfrac{\partial L}{\partial\dot{x}}$$
This way we can rewrite the Hamiltonian as:
$$H=\vec{p}\cdot\vec{\dot{x}}-L$$
Let's start by computing $\vec{p}$:
$$\vec{p}=\dfrac{\partial L}{\partial\dot{x}}=m\vec{\dot{x}}+\dfrac{e}{c}\vec{A}(\vec{x},t)$$
And you get:
\begin{align}H&=m\dot{x}^2+\dfrac{e}{c}\vec{\dot{x}}\cdot\vec{A}-\dfrac{1}{2}m\dot{x}^2-\dfrac{e}{c}\vec{\dot{x}}\cdot\vec{A}+e\phi\\&=\dfrac{1}{2}m\dot{x}^2+e\phi\end{align}
But from the expression of the canonical momentum we found earlier, you can rewrite $\vec{\dot{x}}$ as:
$$\vec{\dot{x}}=\dfrac{1}{m}\left(\vec{p}-\dfrac{e}{c}\vec{A}\right)$$
Such that:
$$\dot{x}^2=\dfrac{1}{m^2}\left|\vec{p}-\dfrac{e}{c}\vec{A}\right|^2$$
Plugging this result into $H$:
$$H=\dfrac{1}{2m}\left|\vec{p}-\dfrac{e}{c}\vec{A}\right|^2+e\phi$$
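As a quick numerical spot-check (1D, with arbitrary made-up values for the potentials and velocity, for illustration only), the energy form and the canonical form of $H$ agree:

```python
# 1D spot-check with arbitrary values (units ignored)
m, e, c = 2.0, 1.5, 3.0
A, phi = 0.7, 0.4              # vector and scalar potential at the particle
v = 1.1                        # velocity, x-dot
p = m * v + e * A / c          # canonical momentum p = dL/d(x-dot)

H_energy = 0.5 * m * v**2 + e * phi                   # (1/2) m v^2 + e*phi
H_canonical = (p - e * A / c)**2 / (2 * m) + e * phi  # |p - eA/c|^2/2m + e*phi
assert abs(H_energy - H_canonical) < 1e-12
```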
Make the transition to quantum mechanics by promoting the classical momentum $\vec{p}$ to the operator $\hat{p}=-i\hbar\nabla$ and you're done. | {
"domain": "physics.stackexchange",
"id": 20713,
"tags": "homework-and-exercises, electromagnetism, lagrangian-formalism, hamiltonian-formalism"
} |
I Do Not Understand a Textbook Example: Calculating Voltage Across a Resistor | Question: I am reading Agarwal and Lang's Foundations of Analog and Digital Circuits for self study, and I do not understand one of their illustrated examples (specifically, example 2.14).
They present this circuit and claim $v = 0.5$ V. If I were to use a different method, namely the fact that $v = iR$, why am I incorrect that $v = 2$ mA $\times$ $1$ k$\Omega = 2$ V?
The book uses an "energy conservation" method which yields $v = 0.5$ V.
Answer: The book is indeed wrong and the quoted answer $V=0.5\ \text{volts}$ does not appear to make any sense.
Using conservation of energy, one can compare the power going into the source and the power going into the resistor. Obviously for the source this is $$P=VI=(0.002\,\text{A})\cdot V$$ And that for the resistor is $$P=\frac{V^2}{R}=\frac{V^2}{10^3\,\Omega}=(0.001\,\Omega^{-1})\cdot V^2$$
since $$V=IR$$ so $$I=\frac{V}{R}$$
By conservation of energy, these two expressions must be equal meaning $$(0.002\,\text{A})\cdot V=(0.001\,\Omega^{-1})\cdot V^2$$ meaning $$V=2\,\text{A}\,\Omega = 2\ \text{volts}$$
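A one-line numerical check of the power-balance solution (variable names are just for illustration):

```python
I, R = 2e-3, 1e3              # 2 mA current source, 1 kOhm resistor
V = I * R                     # nonzero solution of I*V == V**2/R
assert abs(I * V - V**2 / R) < 1e-15   # both powers are 4 mW
print(V)  # 2.0
```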
If you calculated the voltage from the resistance and current using Ohm's law then you would get $$V=I\ R=2\times 10^{-3}\,\text{A}\times 1000\,\Omega=2\ \text{volts}$$ which is also consistent with the conservation of energy analysis above. | {
"domain": "physics.stackexchange",
"id": 83589,
"tags": "homework-and-exercises, electric-circuits, electrical-resistance, batteries, textbook-erratum"
} |
What does this symbol $\odot$ mean? | Question: While reading through a physics textbook, I came across the use of sub-scripted ☉s.
Here's the context:
Stars between 0.5M☉ and 10M☉ will evolve into red giants...
I'm assuming it's to do with the life-span of a star; however, I don't know exactly how.
I searched Google/Wikipedia; however, it simply stated that it represented the Sun.
Answer: The symbol in question, $\odot$, usually denotes the Sun. The solar mass, $M_\odot$, is often used as a unit of mass in astronomical/astrophysical texts. Another example is the solar luminosity, $L_\odot$. | {
"domain": "physics.stackexchange",
"id": 35528,
"tags": "astronomy, notation, sun"
} |
Avoiding globals in gumball machine class | Question: I'm planning to transition from years of forced procedural programming to OOP. I decided I'd start off small with a little gumball machine object to get my bearings. Everything seems to run ok, but I feel strange having all these globals in my class.
Code in action here.
Two questions:
Is there a way to avoid globals in my code?
Are there any glaring issues with the way I've used OOP to put this gumball machine together?
<?php
/* Gumball Machine */
class gumballMachine {
/* Class Variables */
public $pricePerTurn = .25;
private $maxGumballs = 200;
private $numColors = 5;
private $estimateError = .25;
public function __construct() {
global $redGumballs;
global $blueGumballs;
global $whiteGumballs;
global $greenGumballs;
global $yellowGumballs;
global $totalGumballs;
global $iTotalGumballs;
$redGumballs = rand(1,($this->maxGumballs / $this->numColors));
$blueGumballs = rand(1,$this->maxGumballs/$this->numColors);
$whiteGumballs = rand(1,$this->maxGumballs/$this->numColors);
$greenGumballs = rand(1,$this->maxGumballs/$this->numColors);
$yellowGumballs = rand(1,$this->maxGumballs/$this->numColors);
$totalGumballs = $redGumballs+$blueGumballs+$whiteGumballs+$greenGumballs+$yellowGumballs;
$iTotalGumballs = $totalGumballs;
echo 'You walk up to a gumball machine.<br><br>';
}
public function estimateTotalGumballs() {
global $totalGumballs;
echo 'You glance at the gumball machine and guess it is about '. round(($totalGumballs/$this->maxGumballs *100+5/2)/5)*5 .'% full. You know the gumball machine can hold about '.$this->maxGumballs.' gumballs total.<br><br>';
}
public function estimateTotalColorGumballs($color) {
$color = strtolower($color);
switch ($color) {
case 'red':
global $redGumballs;
$colorGumballs = $redGumballs;
break;
case 'blue':
global $blueGumballs;
$colorGumballs = $blueGumballs;
break;
case 'white':
global $whiteGumballs;
$colorGumballs = $whiteGumballs;
break;
case 'green':
global $greenGumballs;
$colorGumballs = $greenGumballs;
break;
case 'yellow':
global $yellowGumballs;
$colorGumballs = $yellowGumballs;
break;
}
global $totalGumballs;
if ($colorGumballs == 0) {
echo "You glance at the gumball machine and don't see any ".$color." gumballs remaining. <br>";
return 0;
} elseif ($colorGumballs == 1) {
echo "You glance at the gumball machine and only see a single ".$color." gumball remaining. <br>";
return 1;
} else {
$remaining = $colorGumballs + round((rand(-1,1)) * $totalGumballs * (($colorGumballs/$totalGumballs)* $this->estimateError),0);
echo "You glance at the gumball machine and estimate there are ". $remaining ." ".$color." gumballs remaining. <br>";
return $remaining;
}
}
public function insertQuarter() {
echo '<br>You insert a quarter into the machine<br>';
}
public function twistHandle() {
echo 'You twist the gumball machine handle and a gumball is chosen.<br>';
}
public function openDoor() {
global $redGumballs;
global $blueGumballs;
global $whiteGumballs;
global $greenGumballs;
global $yellowGumballs;
global $totalGumballs;
$redSelector = $redGumballs;
$blueSelector = $redSelector+$blueGumballs;
$whiteSelector = $blueSelector+$whiteGumballs;
$greenSelector = $whiteSelector+$greenGumballs;
$yellowSelector = $greenSelector+$yellowGumballs;
$gumballSelector = rand(1,$totalGumballs);
if ($totalGumballs > 0) {
if ($gumballSelector <= $redSelector) {
echo "You open the gumball machine door and out pops a <font color=red><b>red</font></b> gumball!<br><br>";
$redGumballs--;
$totalGumballs--;
return 'red';
} elseif ($gumballSelector <= $blueSelector) {
echo "You open the gumball machine door and out pops a <font color=blue><b>blue</font></b> gumball!<br><br>";
$blueGumballs--;
$totalGumballs--;
return 'blue';
} elseif ($gumballSelector <= $whiteSelector) {
echo "You open the gumball machine door and out pops a <font color=black><b>white</font></b> gumball!<br><br>";
$whiteGumballs--;
$totalGumballs--;
return 'white';
} elseif ($gumballSelector <= $greenSelector) {
echo "You open the gumball machine door and out pops a <font color=green><b>green</font></b> gumball!<br><br>";
$greenGumballs--;
$totalGumballs--;
return 'green';
} elseif ($gumballSelector <= $yellowSelector) {
echo "You open the gumball machine door and out pops a <font color=orange><b>yellow</font></b> gumball!<br><br>";
$yellowGumballs--;
$totalGumballs--;
return 'yellow';
}
} else {
echo "You already took the last gumball!<br><br>";
return 'none';
}
}
}
$gumballMachine = new gumballMachine;
$gumballMachine->estimateTotalGumballs();
$eRed = $gumballMachine->estimateTotalColorGumballs('Red');
$eBlue = $gumballMachine->estimateTotalColorGumballs('Blue');
$eWhite = $gumballMachine->estimateTotalColorGumballs('White');
$eGreen = $gumballMachine->estimateTotalColorGumballs('Green');
$eYellow = $gumballMachine->estimateTotalColorGumballs('Yellow');
$reds = 0;
$blues = 0;
$whites = 0;
$greens = 0;
$yellows = 0;
for ($i=0;$i<$iTotalGumballs; $i++) {
$gumballMachine->insertQuarter();
$gumballMachine->twistHandle();
$returned = $gumballMachine->openDoor();
switch ($returned) {
case 'red':
$reds++;
break;
case 'blue':
$blues++;
break;
case 'white':
$whites++;
break;
case 'green':
$greens++;
break;
case 'yellow':
$yellows++;
break;
}
}
echo "After $i quarters (\$".number_format($i * .25,2).")...<br>";
echo "You have <font color=red><b>$reds</font></b> red gumballs after estimating there were <font color=red><b>$eRed</font></b> reds.<br>";
echo "You have <font color=blue><b>$blues</font></b> blue gumballs after estimating there were <font color=blue><b>$eBlue</font></b> blues.<br>";
echo "You have <b>$whites</b> white gumballs after estimating there were <b>$eWhite</b> whites.<br>";
echo "You have <font color=green><b>$greens</font></b> green gumballs after estimating there were <font color=green><b>$eGreen</font></b> greens.<br>";
echo "You have <font color=orange><b>$yellows</font></b> yellow gumballs after estimating there were <font color=orange><b>$eYellow</font></b> yellows.<br><br>";
?>
Answer: Is there a way to avoid globals in my code?
Yes, of course there is! (more on that in my actual answer).
Are there any glaring issues with the way I've used OOP to put this gumball machine together?
Yes. If I'm honest, there are loads of issues.
I don't want to be rude, but it is my opinion that CR has to be blunt to be good. This is something I've explained at length some time ago.
Anyway, onwards:
Avoid the need for globals by using properties. Classes (and thus objects) allow you to couple state to functionality. If you need a given value, and your methods will be using that value (altering it, using it for computation) throughout, then that data needs to be tucked away inside the class, where no other code can touch it.
class MyText
{
public $string = 'my text';//a terrible example
public function scream()
{
return strtoupper($this->string);
}
}
Ok, but how do we get a value in that class, and use it as a property? Enter the __construct method:
class MyText
{
public $string = null;//no value (yet)
public function __construct($value)
{
$this->string = $value;
}
public function scream()
{
return strtoupper($this->string);
}
}
You then create an instance like so:
$instance = new MyText('This is the value');
echo $instance->scream();//THIS IS THE VALUE
$another = new MyText('And now for something completely different');
echo $another->scream();//AND NOW FOR SOMETHING...
Each instance has its own property, and can do the same thing without this affecting the other instances of the same class. They are all self-contained units.
Now, the property here isn't neatly tucked away, of course, it's public, so other code can simply change the value of the string property:
$instance->string = 'foobar';
echo $instance->scream();//FOOBAR
That's not ideal, especially considering that this could happen:
$instance->string = array('foobar');
//strtoupper on an array shouldn't be allowed
Enter access modifiers. By defining a property as being protected or private you can prevent other code (external, as in: not contained within the class) from accessing and altering the properties you need directly:
class MyText
{
protected $string = null;
//all other methods are identical
}
$instance = new MyText('I am protected');
echo $instance->scream();
$instance->string = 'not allowed';//error!
Of course, sometimes you do find yourself wanting to change the value of a property of an instance that already exists. Rather than creating a new instance, a simple setter method can be used to do just that:
class MyText
{
protected $string = null;
public function __construct($value = null)
{
$this->string = $value;
}
public function setText($value)
{
$this->string = (string) $value;//ensure $value is a string!
return $this;
}
}
Now you can do something like this:
$instance = new MyText();//<-- don't set a value here, it's been made optional
echo $instance->setText('Hello')
->scream();//because setText returns $this, we can chain method calls
And indeed, this'll echo HELLO. Setters have the added advantage of giving you the chance to validate the data that the user (the one calling the method) is trying to assign to a given property. A method like this:
public function setData(array $data)
{//note the type-hint!
$this->data = $data;
return $this;
}
Will result in a fatal error when this method is called with anything other than an array. Your class expects the $data property to be an array, so your method only accepts an array argument. This makes your API a lot less error-prone, reveals bugs in the calling code more quickly and produces something that is beginning to resemble self-documenting code.
That's what you need for your gumball counters, too. Pass all those variables through to your constructor, and assign them to properties. Then you no longer rely on those global variables even existing:
$blueGumballs = 123;
$foo = new Gumball($blueGumballs, 123, 123, 123);//<-- assign all colours
$foo->estimateTotalColorGumballs('blue');
$blueGumballs = 456;//<-- has no effect on the instance!
$foo->estimateTotalColorGumballs('blue');
Ideally, you don't use a separate argument for each color/gumball counter value, but you'd pass them all as an array (which allows for some type-hinting).
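To make the shape of that refactor concrete, here is a minimal sketch, transliterated to Python for brevity (all names are hypothetical; the same structure carries straight over to a PHP class): the colour counts are passed in once, kept in a private member, and only mutated by the object's own methods.

```python
import random

class GumballMachine:
    """No globals: all state lives in the instance."""

    def __init__(self, counts):
        # Validate up front, the way a type-hinted setter would.
        if not all(isinstance(n, int) and n >= 0 for n in counts.values()):
            raise ValueError("counts must map colours to non-negative ints")
        self._counts = dict(counts)   # private copy, so callers can't alias it

    @property
    def total(self):
        return sum(self._counts.values())

    def dispense(self):
        """Remove and return a random colour, weighted by what's left."""
        if self.total == 0:
            return None
        colours = list(self._counts)
        colour = random.choices(colours,
                                weights=[self._counts[c] for c in colours])[0]
        self._counts[colour] -= 1
        return colour

machine = GumballMachine({"red": 2, "blue": 1})
drawn = [machine.dispense() for _ in range(4)]   # the fourth draw yields None
```

Nothing outside the class can corrupt the counts, and two machines never interfere with each other, which is exactly what the global variables prevented.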
As your objects become more complex, you soon find yourself writing classes for data, too. These class names can be used for type-hints, too.
For example, I have a db table full of user data. An SQL table is a rigid data structure that can, for example, contain: a user name, nickname, email, status, age and some timestamps.
Each of these fields implies a specific data type: strings for the name and nickname; an email is a special string that should be validated using filter_var($email, FILTER_VALIDATE_EMAIL) and thus only accepted if it's a valid email address.
Age is an integer, the timestamps could be instances of the DateTime object and the status (active or not) could be a bool (true/false).
That class, representing a user could be filled by a query result or a form submission (a new user registered). Once the instance is created and the data set, you pass that instance around, to ensure the data stays intact (setters that validate data), and nothing is lost along the way (an object is a single unit)...
Perhaps think about going down that route for your gumballs: they all have a count (number of gumballs left), and a distinct colour. Furthermore, type-hinting makes life just so much easier when debugging, or maintaining code someone else wrote.
Basically, an object should be self-contained: what happens inside the object only affects the object, and what happens outside of the object doesn't concern the object. A global variable is an object's worst enemy.
If you want to write OO code, pour your data into objects, and think about your data as being instances, instead of variables.
I'll leave you with this sneak-preview of rants to follow: SOLID is easy!
Perhaps you can check out a couple of my other answers on this site where I explain, at length, why a method should never ever echo, for example this answer of mine. I know, I tend to write long answers, but at least then I'm sure people are likely to understand why I say what I say, and where my critiques stem from.
To be continued | {
"domain": "codereview.stackexchange",
"id": 7092,
"tags": "php, object-oriented"
} |
How to verify the accuracy of a pH meter | Question: One suggestion I received is that I prepare a solution of a known activity and probe the solution with the pH meter. The problem I see with this is that any error in solution preparation can lead us to conclude that the pH meter is faulty.
Is there any other way to test the accuracy of a pH meter? We need to make extremely accurate measurements (we are studying the activities of ionic solutes).
Also, what solution has an activity of exactly 1.0?
Answer: Your school should have purchased buffer solutions that have been calibrated against a NIST standard. If they don't, have them purchase several in order to span the entire pH range; here's one from Sigma-Aldrich. For your calibration, pick a standard solution that is close to the range you plan to work in. Here are a couple of links with some more "tips" on how best to actually perform the calibration.
tips #1
tips #2 | {
"domain": "chemistry.stackexchange",
"id": 9509,
"tags": "aqueous-solution, ph"
} |
Torque/moment of force fundamentally | Question: I've very much gotten used to calculating the torque about some axis of an object by multiplying the force and the distance from the axis at which the force acts on.
We also know that if two opposite, equal moments (equal torque) act about the same axis, the total moment is zero and therefore the angular acceleration of the object is zero. This means that if we apply a clockwise force at some distance from an axis, and a counter-clockwise force with half the magnitude but twice the distance, the total moment is zero and the object is not accelerating angularly. That is, if the object was initially at rest it stays so with these moments applied.
But why is this true? What is the physical reason for this? Obviously we can define moments mathematically however we choose, but how does nature know that even though the counter-clockwise force is only a half of the clockwise force, it is twice as far from the axis and therefore the object should stay at rest? Is this something that we have just observed to be true about nature or is there a reasoning for this from other physical principles/laws? This has never really been explained to me and I never have been able find a good explanation for it. Obviously we can measure torques and conclude that this principle seems hold always but why do we believe it to be true in general?
I can believe there could be a simple reasoning behind this but I would appreciate if somebody could clarify this for me.
Answer: Let's assume the object actually would start to accelerate angularly. This means the torque must have done some amount of work, since the kinetic energy of the object would change. Calculating the work yields:
$$\Delta E_{kinetic}=W=\textrm{force}\cdot\textrm{distance}=F_1\cdot r_1\cdot \theta+F_2\cdot r_2\cdot \theta,$$
where $F_2$ is the force at some distance $r_2$ from the axis and $F_1=-2F_2$ is the force twice as big as $F_2$, but in the opposite direction and at a distance $r_1=\frac{1}{2}r_2$. $\theta$ is the angle by which the object has rotated, measured in radians. Substituting this in the work equation above gives:
$$W=\theta\left(-2F_2\cdot \frac{1}{2}r_2+F_2\cdot r_2\right)=0$$
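As a quick numerical sanity check (my own sketch, not part of the original answer), the cancellation holds for any force magnitude, radius and rotation angle:

```python
def net_work(F2, r2, theta):
    """Work done by the pair of forces during a rotation by theta,
    with F1 = -2*F2 acting at r1 = r2/2, as defined above."""
    F1, r1 = -2.0 * F2, 0.5 * r2
    return F1 * r1 * theta + F2 * r2 * theta

# Whatever values we pick, the two contributions cancel:
for F2, r2, theta in [(1.0, 2.0, 0.5), (3.5, 0.1, 7.0), (-4.0, 10.0, 0.01)]:
    assert abs(net_work(F2, r2, theta)) < 1e-12
```

Either way, W vanishes for any angle of rotation.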
This means that the change in kinetic energy is zero, and thus the rotational speed must be unchanged. Said in another way, there is no angular acceleration. | {
"domain": "physics.stackexchange",
"id": 46170,
"tags": "torque, moment"
} |
Downsampling in Matlab simple example | Question: I'm trying to visualise downsampling in the frequency domain in Matlab.
I take a simple sinusoid, perform an fft and plot a two sided spectrum. Then I downsample the time domain signal (downsamplefactor D=2) and perform the same fft and two sided spectrum plot.
What I would expect to see (this source uses M where I use D as the downsampling factor):
(from:
Frequency Representation of Downsampled Signal)
So I'm expecting my frequencies to get stretched by a factor D and the amplitude to be scaled by 1/D. The stretching works as expected, but the amplitude doesn't scale. What am I missing/not seeing? Matlab code:
% normal signal -> fft -> plot
Fs = 100; %sampling frequency
T = 1/Fs; %sampling time
L = 500; %signal length
f = 5; %sine frequency
dF = Fs/L; %frequency bin step
t = (0:L-1)*T; %time vector
y = 2*sin(f*2*pi*t); %sine
Y = fft(y); %fft
P = abs(Y/L); %extract magnitude plot
P2 = fftshift(P); %rearranges P to 2sided spectrum:negfreq,0component,posfreq
F2 = dF*(-L/2:1:L/2-1); %for when L is even
%F2 = dF*((-L+1)/2:1:(L+1)/2-1); %for when L is odd
F2rad = F2*pi/(Fs/2); % *pi /Nyquist to convert to [-pi pi]
figure, hold on
%plot(F2,P2) %frequency plot
plot(F2rad,P2) %radian plot
% downsampled signal -> fft -> plot
D = 2;
yd = y(1:D:L); %can also use downsample command
yd = downsample(y,D); %equal to above
Ld = length(yd); %downsampled length
Yd = fft(yd);
Pd = abs(Yd/Ld); %0component,posfreq,nyquist,negfreq
Pd2 = fftshift(Pd); %rearranges P to nyquist,negfreq,0component,posfreq
dFd = (Fs/D)/Ld; %frequency bin step
Fd2 = dFd*(-Ld/2:1:Ld/2-1); %for when L is even
%Fd2 = dF*((-L+1)/2:1:(L+1)/2-1); %for when L is odd
Fd2rad = Fd2*pi/(Fs/(2*D)); % *pi /Nyquist to convert to [-pi pi]
%plot(Fd2,Pd2) %frequency plot
plot(Fd2rad,Pd2) %radian plot
Answer: You are removing the scaling effect yourself when you perform the following operations:
Pd = abs(Yd/Ld)
P = abs(Y/L)
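The same cancellation is easy to reproduce outside Matlab. A NumPy sketch with the question's parameters (my own, for illustration): the raw FFT peak does halve after downsampling by D = 2, but dividing by the (also halved) signal length removes the 1/D factor again.

```python
import numpy as np

Fs, L, f = 100, 500, 5                  # sampling rate, length, sine frequency, as in the question
t = np.arange(L) / Fs
y = 2 * np.sin(2 * np.pi * f * t)       # amplitude-2 sine

yd = y[::2]                             # downsample by D = 2
Ld = len(yd)

raw_peak    = np.abs(np.fft.fft(y)).max()       # un-normalized spectrum
raw_peak_d  = np.abs(np.fft.fft(yd)).max()
norm_peak   = np.abs(np.fft.fft(y) / L).max()   # normalized, like P = abs(Y/L)
norm_peak_d = np.abs(np.fft.fft(yd) / Ld).max()

# The raw magnitude does halve (the expected 1/D scaling) ...
assert np.isclose(raw_peak_d, raw_peak / 2)
# ... but dividing by the (also halved) length cancels it again.
assert np.isclose(norm_peak_d, norm_peak)
```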
If you don't divide by the length, then you'll see that one has half the amplitude of the other. | {
"domain": "dsp.stackexchange",
"id": 5084,
"tags": "matlab, frequency-spectrum, downsampling"
} |
Turtlebot Rviz "Status: Warning" | Question:
When I am trying to visualize data coming from my turtlebot kinect with a point cloud, there is a bar underneath it that says "Status: Warning" in yellow. What does this mean?
Originally posted by ParkerGibbons on ROS Answers with karma: 61 on 2011-09-11
Post score: 0
Answer:
Click the little "+" button next to it to get more details.
Originally posted by tfoote with karma: 58457 on 2011-09-11
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by osuairt on 2011-12-05:
OK, I would think that would be the case yes. But, would you have a suggestion why a kinect wouldn't be publishing data?
Comment by mmwise on 2011-09-11:
No data is being published by the kinect
Comment by ParkerGibbons on 2011-09-11:
What does it mean if the topic is "No messages received"? | {
"domain": "robotics.stackexchange",
"id": 6653,
"tags": "rviz, turtlebot"
} |
Why do particles of a real gas have intrinsic random motion even before they collide with each other when the gas is heated? | Question: Why do particles of a real gas have intrinsic random motion (before they collide with each other when the gas is heated)?
Answer: There's a bunch of great answers already. Let me try another picture. This time focusing on the cosmological scale you seem to be interested in based on some comments.
Let's imagine there are stationary particles of hydrogen that just got created in the middle of a perfect vacuum at random positions. This is our starting situation.
These particles interact with each other - let's start with just gravity, and let's keep it to Newtonian gravity at that (in fact, it's a scenario that's great for intuitively deriving general relativity, but that'd take a while :)). Each particle feels a slight gravitational attraction from every other particle, which will impart momentum to each of them. Of course, the average momentum is zero - in every pair of interactions, both particles gain the same amount of momentum in opposite directions. It's clear enough that the net result is that the gas will start collapsing - the particles will be moving at increasing velocities towards the common mass centre of the whole "cloud". The mere fact that there are particles with separation in space is enough for this.
Of course, if you take a snapshot of the whole system at any time, you can run the same equations backwards, and you'll get back to the starting condition. There is no randomness here. As the particles get closer, the force acting upon them will grow stronger; but since there's more than two particles, they will not hit each other - the deflection from the other particles will be enough to instead allow extremely close (and fast!) approaches that will eject them essentially at random and in random directions.
This is not true randomness - you can still trace the paths backwards to the original magical stationary condition. But it's clearly becoming harder to see that there even was such an original position. Absent any other interactions, you'll get a churning cloud of particles that will behave in all ways like a similar cloud that was created with random motion in the first place - except for this one thing, where if you run all those equations perfectly backwards, you'll get back to a stationary configuration - for an instant. For bonus fun points, if you keep going backwards, you will see the exact same evolution as in the forward direction.
Of course, in reality, gravity (Newtonian or otherwise) isn't good enough for a decent model of the behaviour of interstellar gases. Electromagnetism is rather important as well. Even if interstellar gas was neutral (most of it is actually a plasma), as the particles approach each other, the mutual electrostatic repulsion will grow stronger - the charge cancellation is not perfect, and the slight difference in the distribution of charge is enough to manifest as a very real force. Things get further complicated when you account for electromagnetic radiation. And then you realize that you're modelling a chaotic system, and tiny variations or imprecisions will give you vastly different results.
At every point, you could run the equations backwards and get back to the start. But only very rarely would you be in a conversation where there would be any distinction between "random motion" and "just following the equations of motion to get this particular configuration of particle momenta". Appearing random is the common thing - it's the order that's interesting.
There are many sources of both apparent and "true" randomness. Each dominates at different scales, in different situations. But when we talk about particles in a gas being in random motion, we're not talking about "true" randomness. Just "it's impractical and useless to consider any particular configuration of momenta, so we might as well consider particles in random motion following a particular distribution of momenta". Having more than two particles is more than enough to introduce enough apparent randomness to be virtually indistinguishable from any sources of "true" randomness (like, say, nuclear decay).
Oh well. Things are complicated? :D This is still just a tiny section of everything that happens in reality, or even when trying to simulate a naïve kinetic theory of gases model. But I hope it's enough to show that the end result of having a gas made up of particles with essentially random momenta is not weird - and it's a good starting point for considering any particular collection of gas particles. This goes double when you're talking about gas in a thermal equilibrium - that must have gone through this "randomization" anyway.
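The opening thought experiment is small enough to simulate. This toy sketch (mine, not the answer author's) drops three stationary point masses into empty space under Newtonian gravity: after a few hundred Euler steps every particle is moving, yet the total momentum remains zero, because each pairwise kick has an equal and opposite partner.

```python
import numpy as np

# Three point masses, created at rest, interacting only via Newtonian gravity.
G = 1.0                                    # toy units
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vel = np.zeros_like(pos)
mass = np.ones(3)

dt = 1e-3
for _ in range(300):                       # crude Euler integration
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    vel += acc * dt
    pos += vel * dt

# Every particle has picked up speed from the mutual attraction ...
assert np.all(np.linalg.norm(vel, axis=1) > 0)
# ... yet every kick was matched by an opposite one, so the total
# momentum is still (numerically) zero.
assert np.allclose((mass[:, None] * vel).sum(axis=0), 0.0)
```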
To sum it up - gas particles interact with each other too, not just with the walls of a container. If they start stationary with respect to one another (let's ignore GR again), they will not stay that way. | {
"domain": "physics.stackexchange",
"id": 92116,
"tags": "quantum-mechanics, thermodynamics, statistical-mechanics"
} |
Drawing a cloud of points | Question: I decided that it would be nice to write some code to draw a cloud of points.
Of course my first thought was to draw some random points but the result is not cloud-ish, it is well... random.
In the end I decided to use the Mitchell random points algorithm, generating pretty good results:
Blue = Random
Red = Cloud (Mitchell)
from __future__ import division
import matplotlib.pyplot as plt
import random
# The more the nearest the points
RANDOM_POINTS_PER_GENERATION = 6
def random_point():
return [random.random(), random.random()]
def average(lst):
return sum(lst) / len(lst)
def distance(a,b):
return ((a[0]-b[0])**2 + (a[1]-b[1])**2)
def nearest(random_points, seen_points):
def distance_to_points(point):
return average([distance(point, seen_point)
for seen_point in seen_points])
return min(random_points, key=distance_to_points)
def next_point(points):
random_points = [random_point() for _ in range(RANDOM_POINTS_PER_GENERATION)]
return nearest(random_points, points)
def mitchell(number_of_points):
"""
Generates almost random points, but at each iteration
`k` points are generated and the nearest is taken, in
order to have a better looking points-cloud.
"""
points = [random_point()]
while len(points) < number_of_points:
points.append(next_point(points))
return points
def plot_points(points, color):
plt.plot(*zip(*points), color=color, linestyle='none', marker='.',
markerfacecolor='blue', markersize=4)
if __name__ == "__main__":
plot_points(mitchell(250), "red")
plot_points([random_point() for _ in range(250)], "blue")
plt.show()
Answer:
mitchell is the only function with a docstring. What do all the other functions do? How do I call them? What do they return?
A function's name should describe what it does, or what it returns, rather than the name of algorithm it uses (which is typically an implementation detail). So I would use a name like point_cloud instead of mitchell.
Your implementation picks the random sample that is closest to all the other points, but the algorithm described in Mitchell's 1991 paper "Spectrally optimal sampling for distribution ray tracing" picks the random sample that is furthest away from any other point (see section 4, "Sequential Poisson-Disk Sampling"). This is so different that I think it would be better not to use the name "Mitchell" to describe the algorithm.
It would make the code easier to use if RANDOM_POINTS_PER_GENERATION were a keyword argument to mitchell, rather than a global variable. Consider a program that tries to determine a good value for this parameter, or a program with two pieces of code that need different values for the parameter.
mitchell(0) returns a list of one point, but surely this should return a list of no points?
Instead of:
while len(points) < number_of_points:
write:
for _ in range(number_of_points - 1):
since you know that each iteration adds exactly one point.
The function average is built into Python under the name statistics.mean.
The function distance is misleadingly named: it actually calculates the squared distance between points. Similarly, distance_to_points returns the mean square distance to the points.
The function nearest doesn't need to take the average: since seen_points has the same length in every case, the sum of the squared distances would be just as good.
Instead of creating a temporary list that gets thrown away immediately:
average([distance(point, seen_point) for seen_point in seen_points])
use a generator expression:
sum(distance(point, seen_point) for seen_point in seen_points)
At each stage nearest finds the average squared distance from a candidate point to each point in the cloud, making the overall runtime \$ O(kn^2) \$. However, it's possible to do better than that. For each candidate point \$ q \$, the value that needs to be computed is the sum of squared distances, $$ \sum_j \left|p_j - q\right|^2. $$ Using the dot product we can expand the square, getting $$ \sum_j (p_j - q)·(p_j - q). $$ Multiplying out, we get $$ \sum_j p_j·p_j - 2q·\sum_j p_j + q·q \sum_j 1. $$ Since we are taking the minimum of this value over all the candidate points \$ q \$, we don't need the first term \$ \sum_j p_j·p_j \$ as this is the same for all \$ q \$ and so cannot affect the result.
What this means is that if we keep a running sum \$ \sum_j p_j \$ then we can find the candidate point with the minimum sum of squared distances to all the points in the cloud in \$ O(k) \$, bringing the runtime down to \$ O(kn) \$.
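Before implementing it, the identity above can be sanity-checked numerically (my own sketch, not part of the original answer): the reduced score differs from the brute-force sum of squared distances only by the candidate-independent constant \$ \sum_j p_j·p_j \$, so both pick the same candidate.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random(size=(10, 2))   # the cloud generated so far
q = rng.random(size=(6, 2))         # k = 6 candidate points
i = len(points)
sum_p = points.sum(axis=0)          # the running sum we'd maintain

# Brute force: sum of squared distances from each candidate to every point.
brute = ((q[:, None, :] - points[None, :, :]) ** 2).sum(axis=(1, 2))

# Reduced form from the expansion above, minus the dropped constant.
reduced = i * (q * q).sum(axis=1) - 2 * (q @ sum_p)

# Adding the constant back recovers the brute-force values exactly ...
assert np.allclose(brute, reduced + (points * points).sum())
# ... and dropping it leaves the argmin (the chosen candidate) unchanged.
assert np.argmin(brute) == np.argmin(reduced)
```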
To implement this, we need a representation of vectors that supports sums and dot products. Here I'm using NumPy arrays:
import numpy as np
def point_cloud(n, k=6):
"""Generate a point cloud with n points. At each step k (default 6)
candidate points are generated and the one whose sum of squared
distances to the other points is smallest is used.
"""
if n == 0: return
p = np.random.random(size=2)
yield p
sum_p = p.copy()
for i in range(1, n):
# Generate k candidate points.
q = np.random.random(size=(k, 2))
# Sum of squared distances from each candidate point to all
# the points generated so far, less the sum of squares of the
# points generated so far (as this doesn't affect the minimum,
# we don't need it here).
s = i * (q * q).sum(axis=1) - 2 * (q * sum_p).sum(axis=1)
# Pick the candidate that's closest to the points generated so
# far.
p = q[np.argmin(s)]
yield p
sum_p += p | {
"domain": "codereview.stackexchange",
"id": 13271,
"tags": "python, python-2.x, matplotlib"
} |
P=NP, isn't it? | Question: Cook and Levin showed in 1971 how to construct, deterministically and in polynomial time, from every non-deterministic Turing machine M that halts in a polynomial number of moves/steps and every string w, a boolean formula $\Phi_{M,w}$ in conjunctive normal form that is true for each accepting path of M on w and false for each rejecting path of M on w; therefore $\Phi_{M,w}$ is satisfiable if and only if M accepts w.
Suppose $NTM_{CNFFAL}$ is a non-deterministic Turing machine that decides in polynomial time the language CNFFAL, that is, the language of all strings that are encodings of falsifiable boolean formulas in conjunctive normal form, by simply guessing a truth assignment that evaluates the input boolean formula to false.
Let f be any boolean formula in conjunctive normal form.
Therefore $\Phi_{NTM_{CNFFAL},f}\equiv\lnot f$ since Cook and Levin reduction is parsimonious and $\Phi_{NTM_{CNFFAL},f}$ is also in conjunctive normal form.
Therefore f is falsifiable if and only if $\Phi_{NTM_{CNFFAL},f}$ is satisfiable.
Therefore f is satisfiable if and only if $\Phi_{NTM_{CNFFAL},f}$ is falsifiable.
This is supposed to show that $CNFSAT\le_PCNFFAL$, where CNFSAT is the language of all strings that are encodings of satisfiable boolean formulas in conjunctive normal form, and also that CNFFAL is NP-Complete, since CNFSAT is already NP-Complete and we all know that $CNFFAL\in\Bbb{P}$.
Therefore $\Bbb{P=NP}$, isn't it?
Answer: In the comments of Yuval Filmus' answer, the OP suggested that while, for any two CNF formulas $f$ and $\tilde{f}$,
($f \text{ falsifiable} \iff \tilde{f} \text{ satisfiable}) \iff (f \text{ satisfiable} \iff \tilde{f} \text{ falsifiable}$)
wasn't usually true, it was implied in that specific case by the parsimonious reduction of the CNFFAL instance $f$ to the CNFSAT instance $\tilde{f}$, thus proving that
$f \text{ satisfiable} \iff \Phi_{NTM_{CNFFAL},f} \text{ falsifiable}$.
This answer is supposed to show evidence that it isn't the case (assuming it isn't clear to the reader), as a complement of the previously mentioned answer (per request of the OP in the comments).
I'll try to exhibit an example making clear that it isn't the case. Now, building the corresponding $\Phi_{NTM_{CNFFAL},f}$ of some $f$ would be a bit tedious, so I'll build a parsimonious reduction from CNFFAL to SAT instead (contrast with the parsimonious reduction from CNFFAL to CNFSAT used in the question), but such that the falsifiability of the SAT instance doesn't imply the satisfiability of the CNFFAL instance.
I claim that, for any CNF formula $f$,
any assignment $(a_1, \dots, a_n)$ falsifying $f(a_1, \dots, a_n)$ allows me to build a unique satisfying assignment of $\tilde{f}(a_1, \dots, a_n, c) = \lnot f(a_1, \dots, a_n) \land c$ (and vice-versa). This (admittedly dumb) reduction from CNFFAL to SAT preserves the number of solutions so it's parsimonious.
Now let's assume $f(a, b) = (a \lor b) \land (\lnot a \lor b) \land (a \lor \lnot b) \land (\lnot a \lor \lnot b)$.
There indeed are as many ways to falsify $f$ as to satisfy $\tilde{f}$ (4, per the previous parsimonious reduction). Therefore:
$f \text{ falsifiable} \iff \tilde{f} \text{ satisfiable}$
However there also are 4 ways to falsify $\tilde{f}$ (just set $c$ to 0) whereas $f$ is clearly unsatisfiable. Therefore:
$\lnot(f \text{ satisfiable} \iff \tilde{f} \text{ falsifiable})$
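These counts are small enough to verify by brute-force enumeration; a quick sketch in plain Python (my own, not tied to any SAT tooling):

```python
from itertools import product

def f(a, b):
    # (a ∨ b) ∧ (¬a ∨ b) ∧ (a ∨ ¬b) ∧ (¬a ∨ ¬b): unsatisfiable
    return (a or b) and (not a or b) and (a or not b) and (not a or not b)

def f_tilde(a, b, c):
    # The reduction above: f~(a, b, c) = ¬f(a, b) ∧ c
    return (not f(a, b)) and c

f_assignments = list(product([False, True], repeat=2))
ft_assignments = list(product([False, True], repeat=3))

assert sum(not f(a, b) for a, b in f_assignments) == 4          # f falsifiable, 4 ways
assert sum(f(a, b) for a, b in f_assignments) == 0              # f unsatisfiable
assert sum(f_tilde(a, b, c) for a, b, c in ft_assignments) == 4       # f~ satisfiable, 4 ways
assert sum(not f_tilde(a, b, c) for a, b, c in ft_assignments) == 4   # f~ also falsifiable, 4 ways
```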
Contrast this with that excerpt of the reasoning in the question:
Therefore $\Phi_{NTM_{CNFFAL},f}\equiv\lnot f$ since Cook and Levin reduction is parsimonious and $\Phi_{NTM_{CNFFAL},f}$ is also in conjunctive normal form.
Therefore f is falsifiable if and only if $\Phi_{NTM_{CNFFAL},f}$ is satisfiable.
Therefore f is satisfiable if and only if $\Phi_{NTM_{CNFFAL},f}$ is falsifiable.
The previous statement stems from the fact that $\tilde{f}$ uses more variables than $f$, hence the number of possible assignments of $\tilde{f}$ is greater than the number of possible assignments of $f$. Indeed, if those two numbers of possibilities were equal, the reduction being parsimonious would indeed lead to the equivalence of $f$ being satisfiable and $\tilde{f}$ being falsifiable (by a simple counting argument, as the set of falsifying assignments is the complement of the set of satisfying assignments).
In the question, $\Phi_{NTM_{CNFFAL},f}$ has been obtained thanks to Cook-Levin's reduction from an arbitrary NP problem to CNFSAT. That formula doesn't necessarily require the same number of variables as $f$ (and is actually very likely to require far more), so the equivalence doesn't necessarily hold. | {
"domain": "cs.stackexchange",
"id": 10602,
"tags": "complexity-theory, turing-machines, reductions, satisfiability, p-vs-np"
} |
Homogeneity and Isotropy of space | Question: In school it is given that the law of conservation of momentum is a result of the homogeneity of space and the law of conservation of angular momentum is a result of the isotropy of space, but what isotropy and homogeneity actually are is not explained, nor how they are related to this. Please explain.
Answer: It is a little different in General Relativity. Let's start with Special Relativity and all the 3 forces of the Standard Model in physics. Then we will talk about gravity and the universe.
In the Standard Model spacetime is Minkowski, meaning flat in all 4 dimensions. If that is the case, clearly any direction and position are equivalent. That's called rotational symmetry and translational symmetry. No position or direction is favored; there is no reason for one to be. Yes, from that you can deduce that linear and angular momentum are conserved. Time symmetry tells you that energy is conserved. All 3 of the Standard Model forces involve interactions which satisfy these symmetries. And some others.
But the universe is a very large system that also includes gravity. With gravity, spacetime can be bent, or generally have curvature. Isotropy and homogeneity now depend on how that gravity, caused by matter and energy, evolved. Sure, microscopically the symmetries of special relativity hold (rotation = isotropy, translation = homogeneity), and General Relativity reduces to Special Relativity in local regions near the observer. But when we go to the big, such as the universe, it is General Relativity with a more statistical or thermodynamic representation of the matter and energy.
Thus, whether the universe is homogeneous or isotropic is no longer a given. You must observe the universe. We have been observing the stars and galaxies, and in the large (i.e., for distances on the order of a couple of megaparsecs or more) the distribution of matter we see in the universe seems to be the same anywhere we look (at those large aggregations of objects). They look the same in all directions circularly, and at all distances linearly. Rotationally and translationally invariant, in the large.
We have more evidence of that from the cosmic microwave background, microwave radiation which is the redshifted remnant of the light and radiation produced about 380,000 years after the Big Bang. We have detected it and it is extremely homogeneous and isotropic. We think it had to have been produced that symmetrically back then.
In General Relativity that leads to only one possible family of solutions to the equations: a Friedmann-Robertson-Walker universe. We have all its equations, and they agree with all our measurements of how fast the universe expands, and everything else (to a high degree of precision, but not 100%).
In General Relativity we go the other way: spacetime is homogeneous and isotropic, so we conclude that momentum and angular momentum are conserved. So interactions in the universe, in the large, conserve these quantities.
The next question you asked was why spacetime is isotropic and homogeneous. Well, this was also due to the way the universe evolved. It is known that if it started that way, with the only asymmetries being quantum fluctuations, then during the epochs when matter and energy were in contact they had to be in thermal equilibrium, and they thermalized each other to a similarly symmetric distribution, plus or minus the fluctuations that grew (density fluctuations tended to grow because of gravity's attraction, forming galaxies, stars, and planets). But in the large the symmetries remained. The story is more complex, with another important part contributed by so-called cosmic inflation, a huge expansion in a very short time, which allowed even regions now separated by greater distances than light could have traveled to have once been in equilibrium, so that the symmetries are seen at the very largest distances. The story of galaxies and stars evolving is also pretty complex, but the principle is simple: gravity brought matter to coalesce (often in very violent collisions, etc.).
So the symmetries and conservation laws: yes, but it is more complex, and the universe's homogeneity and isotropy came from the Big Bang and thermalization, along with inflation.
And in General Relativity, since the universe is not time symmetric, energy as understood in it is not conserved. Some people say that gravity and matter/energy somehow balance out that energy between them. It is not too important an issue, since it is known how the interactions happen and how to account for them. | {
"domain": "physics.stackexchange",
"id": 31611,
"tags": "spacetime, conservation-laws, symmetry, relativity"
} |
rosdep install error | Question:
Hi,
I had following error when install rosdep to my oneiric-amd64 machine. Does anyone know what is the problem?
sudo easy_install -U rosdep
Searching for rosdep
Reading http://pypi.python.org/simple/rosdep/
Reading http://www.ros.org/wiki/rosdep
Reading http://pr.willowgarage.com/downloads/rosdep/
Best match: rosdep 0.8.0
Downloading http://pypi.python.org/packages/source/r/rosdep/rosdep-0.8.0.tar.gz#md5=09e0e4f52ba813ab540aac5715524e86
Processing rosdep-0.8.0.tar.gz
Running rosdep-0.8.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-C_3LU7/rosdep-0.8.0/egg-dist-tmp-rb_MJH
Traceback (most recent call last):
File "/usr/bin/easy_install", line 9, in <module>
load_entry_point('distribute==0.6.16dev-r0', 'console_scripts', 'easy_install')()
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 1912, in main
with_ei_usage(lambda:
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 1893, in with_ei_usage
return f()
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 1916, in <lambda>
distclass=DistributionWithoutHelpCommands, **kw
File "/usr/lib/python2.7/distutils/core.py", line 152, in setup
dist.run_commands()
File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 364, in run
self.easy_install(spec, not self.no_deps)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 604, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 634, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 824, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 1101, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 1090, in run_setup
run_setup(setup_script, args)
File "/usr/lib/python2.7/dist-packages/setuptools/sandbox.py", line 29, in run_setup
lambda: execfile(
File "/usr/lib/python2.7/dist-packages/setuptools/sandbox.py", line 70, in run
return func()
File "/usr/lib/python2.7/dist-packages/setuptools/sandbox.py", line 31, in <lambda>
{'__file__':setup_script, '__name__':'__main__'}
File "setup.py", line 6, in <module>
File "src/rosdep2/__init__.py", line 38, in <module>
File "src/rosdep2/installers.py", line 39, in <module>
ImportError: No module named rospkg.os_detect
Originally posted by Kei Okada on ROS Answers with karma: 1186 on 2012-02-21
Post score: 0
Answer:
You need to "pip install -U rospkg"
I'm fixing an issue right now that will make this problem go away with the next update.
Originally posted by kwc with karma: 12244 on 2012-02-21
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Enrique on 2012-05-31:
It worked for me, but had to use 'rosmake --pre-clean'. | {
"domain": "robotics.stackexchange",
"id": 8316,
"tags": "rosdep"
} |
Does the Hack computer from "The Elements of Computing Systems" use Von Neumann architecture? | Question: I'm reading "The Elements of Computing Systems" (subtitled "Building a Modern Computer from First Principles - Nand to Tetris Companion) by Noam Nisan and Shimon Schocken.
Chapter 4 is about machine language, and more specifically the machine language used on their computer platform called Hack. Section 4.2.1 says this about Hack:
The Hack computer is a von Neumann platform. It is a 16-bit machine, consisting of a CPU, two separate memory modules serving as instruction memory and data memory, and two memory-mapped I/O devices: a screen and a keyboard.
…
The Hack programmer is aware of two distinct address spaces: an instruction memory and a data memory. … The CPU can only execute programs that reside in the instruction memory. The instruction memory is a read-only device, and programs are loaded into it using some exogenous means.
With that distinction between instruction memory and data memory, is it really a von Neumann architecture? According to my understanding of the difference between von Neumann and Harvard, that description sounds much more like a Harvard architecture.
Answer: I had the same suspicion going through the Nand to Tetris course.
Indeed, if you look at John K. Bennett's annotated version of "The Elements of Computing Systems", specifically Chapter 5, he calls this out:
The Hack architecture would more properly be called a “Harvard Architecture,” since the data and instruction memories are separate.
We can also do some digging for ourselves.
If we consult Wikipedia, it describes the distinction between the two architectures as follows:
The term "von Neumann architecture" has evolved to mean any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus.
The design of a von Neumann architecture machine is simpler than a Harvard architecture machine—which is also a stored-program system but has one dedicated set of address and data buses for reading and writing to memory, and another set of address and data buses to fetch instructions.
The latter is indeed the architecture of the Hack machine, which has two separate memories and can do an instruction lookup, data lookup, and computation all in one step* of the CPU. You can see this in the book's diagram:
Last, this is also explicitly stated at the end of Chapter 5 of the original book:
Unlike Hack, most general-purpose computers use a single address space for storing both data and instructions. In such architectures, the instruction address as well as the optional data address specified by the instruction must be fed into the same destination: the single address input of the shared address space. Clearly, this cannot be done at the same time.... In contrast, the Hack architecture is unique in that it partitions the address space into two separate parts, allowing a single-cycle fetch-execute logic.
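The partitioned address space described above can be illustrated with a toy model (a sketch of mine, using a made-up mini-ISA, not the real Hack instruction encoding): because instructions and data live in separate memories, the instruction fetch and the data read can happen in the same step.

```python
# Toy Harvard-style machine: ROM (instructions) and RAM (data) are separate,
# so a single step can read from both. Hypothetical mini-ISA for illustration.
ROM = [("A", 3),          # @3    : load address 3 into A
       ("C", "D=M+1")]    # D=M+1 : read RAM[A], add 1, store in D
RAM = [0, 0, 0, 41]

def step(pc, A, D):
    op = ROM[pc]                 # instruction fetch (instruction memory)
    if op[0] == "A":
        A = op[1]
    else:                        # "C" instruction: data read in the SAME step
        D = RAM[A] + 1
    return pc + 1, A, D

pc, A, D = 0, 0, 0
for _ in ROM:
    pc, A, D = step(pc, A, D)
print(D)  # 42
```

With a single shared memory, the fetch and the `RAM[A]` read would contend for the same address input and could not both happen in one cycle.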
*Note: In the Hack machine language, it actually takes two steps (e.g. @dataaddr then D = M + 1) to do an operation on data in memory. However, during the second instruction, both the instruction and the data in memory can be read by the CPU simultaneously. | {
"domain": "cs.stackexchange",
"id": 16450,
"tags": "terminology, computer-architecture"
} |
Understanding the talks in Conferences and Workshops | Question: I am a graduate student from India. I am very much interested in attending workshops, conferences, and invited lectures given by prominent professors.
At the end of the talk, as usual, some people ask questions and the speaker answers them. But my problem is that I do not understand most of the questions and answers. Even if I ask a question, I am unable to understand the answer given by the speaker.
Can somebody share their experience and suggestions for my problem?
Answer: This may seem strange at first, but one way I've found to understand an area better is to write in it.
Every area of any discipline has its own mini-language. It consists most obviously of specialized terminology---but there is also a certain structure and order to how people in that area prefer to give and receive information. There are also particular ways of understanding certain difficult concepts, especially abstract or nonobvious concepts.
Reading is obviously key, and it will help you understand the terminology. But to really climb into the heads of those who present in your area, you have to write. You'll then be forced to learn good ways to structure information and ways to understand difficult concepts---and discover how and why the other researchers in your area think and talk the way they do. | {
"domain": "cstheory.stackexchange",
"id": 1979,
"tags": "co.combinatorics, soft-question"
} |
I'm reading all lines of another sheet and I paste the value that I want into the active sheet (VBA) | Question: I have a program that reads thousands of lines and returns the value that I need. The problem is that the request takes a lot of time, probably around 1 min, just to search for and paste the value; even saving the code takes a long time...
I think it is around this line that the code is slow :
If Sheets("MT950").Cells(line, 1) Like "-}{5:*" Then
Here is my code :
Function mysolde62(mycurrency As String, swift As String) As Double
Dim SearchString As String
Dim LastLine As Long, line As Long, k As Long
Dim mybegin As Long, myend As Long, test As Long, count As Long
Dim sign As String
Dim myvalue As Double
LastLine = Sheets("MT950").Range("A1").End(xlDown).Row
count = 0
myend = 1
For line = 1 To LastLine
If Sheets("MT950").Cells(line, 1) Like "-}{5:*" Then
SearchString = Sheets("MT950").Range("A" & line).Value
mypos = InStr(1, SearchString, swift, 1)
If mypos <> 0 Then
count = count + 1
End If
End If
Next line
For k = 1 To count
For line = myend To LastLine
If Sheets("MT950").Cells(line, 1) Like "-}{5:*" Then
SearchString = Sheets("MT950").Range("A" & line).Value
mypos = InStr(1, SearchString, swift, 1)
If mypos <> 0 Then
mybegin = line
For linebis = mybegin To LastLine
If Sheets("MT950").Cells(linebis, 1) Like ":62F:*" Then
SearchString = Sheets("MT950").Range("A" & linebis).Value
mypos = InStr(1, SearchString, mycurrency, 1)
If mypos <> 0 Then
myend = linebis
test = 1
End If
Exit For
End If
Next linebis
If test = 1 Then Exit For
End If
End If
Next line
If test = 1 Then Exit For
Next k
sign = Mid(Sheets("MT950").Cells(myend, 1).Value, 5, 1)
myvalue = Mid(Sheets("MT950").Cells(myend, 1).Value, 15)
If sign = "D" Then
mysolde62 = -myvalue
Else
mysolde62 = myvalue
End If
End Function
Answer: Important notes:
Proper indenting helps make code easier to read and maintain.
Always use Option Explicit
The first area of performance improvement is to use arrays, not keep referring to Excel objects. As an example - your first loop is:
LastLine = Sheets("MT950").Range("A1").End(xlDown).Row
count = 0
myend = 1
For line = 1 To LastLine
If Sheets("MT950").Cells(line, 1) Like "-}{5:*" Then
SearchString = Sheets("MT950").Range("A" & line).Value
mypos = InStr(1, SearchString, swift, 1)
If mypos <> 0 Then
count = count + 1
End If
End If
Next line
Where as, it could be:
LastLine = Sheets("MT950").Range("A1").End(xlDown).Row
Dim tempArray as Variant '<-- this is where we hold the values.
tempArray = Sheets("MT950").range("A1:A" & CStr(LastLine)).Value
count = 0
myend = 1
For line = LBound (tempArray, 1) to UBound(tempArray,1) ' Cycle through array
If tempArray(line,1) Like "-}{5:*" Then
mypos = InStr(1, tempArray(line,1), swift, 1)
If mypos <> 0 Then
count = count + 1
End If
End If
Next line
At first glance, your k loop does not appear to achieve anything except repeat the same work without variation (on deeper inspection, myend varies between iterations). The performance of the inner portion of that loop can also be improved by the use of arrays.
Looking at the code, this uses exactly the same array we used last time - so no need to even re-assign it!
For k = 1 To count
For line = myend To LastLine
If Sheets("MT950").Cells(line, 1) Like "-}{5:*" Then
SearchString = Sheets("MT950").Range("A" & line).Value
mypos = InStr(1, SearchString, swift, 1)
If mypos <> 0 Then
mybegin = line
For linebis = mybegin To LastLine
If Sheets("MT950").Cells(linebis, 1) Like ":62F:*" Then
SearchString = Sheets("MT950").Range("A" & linebis).Value
mypos = InStr(1, SearchString, mycurrency, 1)
If mypos <> 0 Then
myend = linebis
test = 1
End If
Exit For
End If
Next linebis
If test = 1 Then Exit For
End If
End If
Next line
If test = 1 Then Exit For
Next k
Can be like:
For k = 1 To count
For line = myend To LastLine
If tempArray(line, 1) Like "-}{5:*" Then
mypos = InStr(1, tempArray(line, 1), swift, 1)
If mypos <> 0 Then
mybegin = line
For linebis = mybegin To LastLine
If tempArray(linebis , 1) Like ":62F:*" Then
mypos = InStr(1, tempArray(linebis , 1), mycurrency, 1)
If mypos <> 0 Then
myend = linebis
test = 1
End If
Exit For
End If
Next linebis
If test = 1 Then Exit For
End If
End If
Next line
If test = 1 Then Exit For
Next k
Notice that with these changes, we touch the Excel model only once - to extract the array of values - and this will greatly improve performance. With some re-thinking, you can take the required Range as a parameter, thus making the code reusable and more flexible (perhaps even as a UDF, depending on where you are getting your "swift" and "currency" values from!).
With these simple changes in place, you can look at your code logic and determine if other optimisations can be done. | {
"domain": "codereview.stackexchange",
"id": 35093,
"tags": "performance, vba, excel"
} |
Small web application | Question: I'm building a small web application and it's starting to get a bit complex. I have reached a point where I have to run some tests and load some libraries.
I made it so I can use it in this way:
this.loadDependencies([
{
test : tests.JSON,
polyfill : self.polyfills.json
},
{
test : tests.Storage,
polyfill : self.polyfills.storage
}], self.libraries);
The first argument, the array, is just a series of tests and polyfill loading; the second argument is where I load the libraries. This is how self.libraries looks:
libraries: {
underscore: {
enabled: true,
library: [directory + 'libraries/underscore/underscore-1.4.2.js']
},
handlebars: {
enabled: true,
library: [directory + 'libraries/handlebars/handlebars-1.0.rc.1.js']
},
hammer: {
enabled: true,
library: [directory + 'libraries/hammer/hammer.js', directory + 'libraries/hammer/jquery.specialevent.hammer.js']
},
bootstrap: {
transition: {
enabled: true,
library: [directory + 'libraries/bootstrap/bootstrap-transition.js']
},
alert: {
enabled: false,
library: [directory + 'libraries/bootstrap/bootstrap-alert.js']
},
modal: {
enabled: false,
library: [directory + 'libraries/bootstrap/bootstrap-modal.js']
},
dropdown: {
enabled: false,
library: [directory + 'libraries/bootstrap/bootstrap-dropdown.js']
},
scrollspy: {
enabled: false,
library: [directory + 'libraries/bootstrap/bootstrap-scrollspy.js']
},
tab: {
enabled: false,
library: [directory + 'libraries/bootstrap/bootstrap-tab.js']
},
tooltip: {
enabled: false,
library: [directory + 'libraries/bootstrap/bootstrap-tooltip.js']
},
popover: {
enabled: false,
library: [directory + 'libraries/bootstrap/bootstrap-popover.js']
},
button: {
enabled: false,
library: [directory + 'libraries/bootstrap/bootstrap-button.js']
},
collapse: {
enabled: true,
library: [directory + 'libraries/bootstrap/bootstrap-collapse.js']
},
carousel: {
enabled: false,
library: [directory + 'libraries/bootstrap/bootstrap-carousel.js']
},
typeahead: {
enabled: false,
library: [directory + 'libraries/bootstrap/bootstrap-typeahead.js']
},
affix: {
enabled: false,
library: [directory + 'libraries/bootstrap/bootstrap-affix.js']
}
}
}
The loadDependencies function takes that object, walks each level, checks whether enabled exists and, if it is true, loads the library defined at the same level.
The function that does the above is what I want to improve a bit, so I'm looking for any advice or tips on how I could improve that code:
Application.Dependencies.prototype.loadLibraries = function(polyfills, libraries, callback) {
var self = this,
libs = self.helpers.objectSize(libraries),
iterations = [],
loaded = [];
self.helpers.each(libraries, function(object) {
if(object.hasOwnProperty('enabled') && object.enabled === true) {
for(var i = object.library.length - 1; i >= 0; i--) {
iterations.push(object.library[i]);
};
} else if(self.helpers.objectSize(object) >= 0) {
self.helpers.each(object, function(obj) {
if(obj.hasOwnProperty('enabled') && obj.enabled === true) {
for(var i = obj.library.length - 1; i >= 0; i--) {
iterations.push(obj.library[i]);
};
};
});
};
});
return self.helpers.each(libraries, function(object) {
if(object.hasOwnProperty('enabled') && object.enabled === true) {
self.loadEngine(false, {
tests: (typeof object.enabled === 'array') ? (object.enabled.length === 1 ? false : true) : true ? object.enabled : self.helpers.reduceArray(object.enabled, function(initial, current) {
return initial && current;
}, 1),
libraries: object.library,
callback: function(url, result, key) {
loaded.push(url);
if(loaded.length === self.helpers.objectSize(iterations)) {
self.console(function() {
console.log('Libraries : ', loaded);
self.polyfillize(polyfills, function() {
return(typeof callback === 'function' && callback !== undefined) ? callback.apply(this, [this]) : console.log('Argument : Invalid [ Function Required ]');
});
});
};
}
});
} else if(self.helpers.objectSize(object) >= 0) {
self.helpers.each(object, function(obj) {
if(obj.hasOwnProperty('enabled') && obj.enabled === true) {
return self.loadEngine(false, {
tests: (typeof obj.enabled === 'array') ? (obj.enabled.length === 1 ? false : true) : true ? obj.enabled : self.helpers.reduceArray(obj.enabled, function(initial, current) {
return initial && current;
}, 1),
libraries: obj.library,
callback: function(url, result, key) {
loaded.push(url);
if(loaded.length === self.helpers.objectSize(iterations)) {
self.console(function() {
console.log('Libraries : ', loaded);
self.polyfillize(polyfills, function() {
return(typeof callback === 'function' && callback !== undefined) ? callback.apply(this, [this]) : console.log('Argument : Invalid [ Function Required ]');
});
});
};
}
});
};
});
};
});
};
Answer: One of the javascript code conventions I like to follow is to start each function with declaring all variables used in the function. Even the ones used inside For loops. The scope of these is the function body, declaring variables like this just clarify scope.
Your code is fairly readable and clean. But you do have code duplication where you call loadEngine(), which you can eliminate. | {
"domain": "codereview.stackexchange",
"id": 3099,
"tags": "javascript, object-oriented, jquery"
} |
Manganate and Permanganate | Question: In my chemistry course, the addition of the “per” prefix generally means that the oxoanion of relevance will have one more oxygen than the “base” ion.
e.g. chlorate = $\ce{ClO3-}$, perchlorate = $\ce{ClO4-}$
However, permanganate is $\ce{MnO4-}$ and manganate is $\ce{MnO4^2-}$. The charges are different but there are still four oxygen atoms in each, so it seems to violate the convention. Any thoughts?
Answer: They ran out of names, maybe?
Permanganate does have more oxygen in the sense that it could formally be considered a possible product of adding oxygen to manganate:
$\ce{4MnO4^{2-} + O2 + 2H2O -> 4 MnO4^- + 4 OH^-}$
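The oxidation-state bookkeeping behind this is simple arithmetic (a sketch of mine, counting each oxygen as -2):

```python
def mn_oxidation_state(ion_charge, n_oxygen=4):
    # Mn + n_oxygen * (-2) = ion_charge  =>  Mn = ion_charge + 2 * n_oxygen
    return ion_charge + 2 * n_oxygen

print(mn_oxidation_state(-1))  # permanganate MnO4^-   -> 7
print(mn_oxidation_state(-2))  # manganate    MnO4^2-  -> 6
```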
This theoretical reaction goes along with manganese having oxidation state +7 in permanganate versus only +6 in manganate. We may say that "per-" simply means higher oxidation state. | {
"domain": "chemistry.stackexchange",
"id": 16757,
"tags": "nomenclature"
} |
Can one produce a single photon? | Question: Yesterday it was stated on this site that a nucleus oscillating in a crystal lattice (may) produce a single photon. I thought that conservation of (angular) momentum requires the production of at least 2 photons. Can someone please explain whether that is true and how one can produce a single photon, especially considering the thermal radiation produced by the oscillation of a nucleus?
Edit:
I realize that referring to crystal lattices hugely complicates matters, so I'll ask the general question: suppose we have an individual free charge and we make it oscillate at frequency k Hz on the z axis. How many photons can be emitted? If one is possible, what happens to conservation of momentum? What determines the exact direction of the MF oscillation and propagation?
Answer: Let's take the concrete example of a heteronuclear diatomic molecule. This is probably as close as we can come to an ideal harmonic oscillator. To a good approximation the selection rule for the vibrational transition is:
$$ \Delta v = \pm 1 $$
(the anharmonicity in the internuclear potential means that other transitions do occur, but these have a low probability and can usually be ignored).
But in most cases a pure vibrational transition cannot occur precisely because of the conservation of angular momentum. There is a further selection rule:
$$ \Delta J = \pm 1 $$
That is, the transition is rovibrational, so both the vibrational and rotational quantum numbers must change at the same time. This is because the angular momentum of the diatomic molecule must change by the opposite of the photon spin to conserve angular momentum.
In combined electronic/rovibrational transitions the rotational quantum number can remain unchanged, i.e. $\Delta J = 0$ leading to the Q-branch in the spectrum, but only if the angular momentum of the excited state differs from the ground state by $\pm \hbar$. In this case the angular momentum of the electron state changes by the opposite of the photon spin.
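The selection rules above can be written down as a toy predicate (a rough sketch of mine, not a spectroscopy tool; harmonic, rigid-rotor approximation assumed):

```python
def allowed_rovibrational(dv, dJ, electronic_change=False):
    """Rough selection-rule check for a heteronuclear diatomic:
    dv = +/-1 and dJ = +/-1; dJ = 0 (Q branch) is allowed only when a
    simultaneous electronic transition supplies the angular momentum change."""
    if abs(dv) != 1:
        return False
    if dJ in (-1, 1):
        return True
    return dJ == 0 and electronic_change

print(allowed_rovibrational(+1, 0))        # False: pure vibrational jump forbidden
print(allowed_rovibrational(+1, +1))       # True: R branch
print(allowed_rovibrational(+1, 0, True))  # True: Q branch with electronic transition
```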
I've chosen a specific example because your question:
Suppose we have an individual free charge and we make it oscillate at frequency k Hz on the z axis
is too vague to be answered. You need to consider what creates the potential within which the charge is oscillating, and whatever creates that potential will be involved in the conservation of angular momentum. In the case I describe it is the diatomic molecule that changes state to conserve angular momentum, and in the case of the lattice it is the lattice that conserves angular momentum. | {
"domain": "physics.stackexchange",
"id": 40641,
"tags": "thermal-radiation"
} |
Infinite square well discrete energies meaning | Question: In the problem of the infinite square well we arrive at quantized energies that an electron can have. And each energy level has its own wave function. The general solution is a linear combination of these wave functions. My question is what the physical meaning of this quantization is, and why the general solution that describes this electron is a combination of the quantized solutions.
I'm thinking that when we measure the electron several times, it may be in different energy levels in each measurement. Is this true? And if so, how can it change its energy level?
Answer:
My question is what is the physical meaning of this quantization...
The physical meaning is that if you were to measure the energy of the electron, you would only measure certain values. You would not get a continuum of energy measurements were you to measure the energy of similarly prepared systems. How often you measure a certain energy value of similarly prepared quantum systems depends on the state of the system before measurement of the energy.
... and why the general solution that describes this electron is a combination of the quantized solutions
Well that just comes from the Schrodinger equation being a linear differential equation. If two equations solve the Schrodinger equation, then any linear combination of these equations also solves it. You can then extend this reasoning to any number solutions. The solutions we can use in this linear combination is determined by initial/boundary conditions of the system in question.
I'm thinking that when we measure the electron several times, it may be in different energy levels in each measurement. Is this true?
You have to be more specific. You have to say what you are measuring. You can't just "measure the electron".
If you mean measure the energy, then what you say is false. For the particle in a box the energy eigenstates correspond to stationary states. So once you measure the electron to be in an energy state, then any subsequent measurement of the energy will yield the same result.
If you mean measurement of the position and then the energy, then you are correct. The position operator and the infinite well Hamiltonian do not commute, so there are not simultaneous eigenstates of both position and energy. This means that when you make a position measurement, the state of the particle becomes a superposition of energy states (and a sharp peak in position space that then evolves according to the SE). Therefore, the outcome of a subsequent measurement of the energy cannot be determined. The probability of an energy measurement will be determined by the superposition of energy states that now describes the state of the electron.
And if so, how can it change its energy level?
Let's say we measured the energy and found the electron to be in the energy state $|n_1\rangle$. Then let's say we measured the position of the electron, and then we measured the energy again and found the electron to be in energy state $|n_2\rangle$. Then it would have "changed energy levels" if $n_1\neq n_2$. Although keep in mind that right after the position measurement but before the second energy measurement the electron is not in any energy state. | {
"domain": "physics.stackexchange",
"id": 61250,
"tags": "quantum-mechanics, energy, discrete"
} |
Huygens' Principle Integral Dimensionality | Question: Huygens' principle and its subsequent corrected versions, when describing plane waves, are based on integrating radial waves (usually with some factors multiplying them inside the integral) over a wavefront. This is generally presented in the context of the wave in a space containing an opaque screen with a narrow open aperture.
Assume the space is the two-dimensional (2D) $x$-$y$ plane, with a plane wave propagating along the $y$-axis. Also, assume a linear opaque screen, with a narrow aperture, is parallel to the $x$-axis. Then the above integral is over the wavefront at the screen, and is an integral over $x$.
I would like to consider Huygens' principle for the same plane wave in the same space, but without an opaque screen, or any other object (a “free space” case). Of course, we can think of there being a wavefront for each value $y$. Therefore, if we wish to decompose the plane wave into radial waves, should there also be integration of the radial waves, not just over a single wavefront, but also integration over all wavefronts, so that there is a double integral of radial waves, over both $x$ and $y$?
I have been researching Huygens' principle and its subsequent versions, but have not seen this free space case addressed. The implication is that the 2D plane wave, in free space, would be decomposed into a radial wave originating at every point in the 2D space, not just at every point along a single wavefront line, since the choice of that single wavefront would be arbitrary.
Answer: If you integrate over both $x$ and $y$ you need to take phases into account, otherwise you should get 0 because of destructive interference of circular waves originating half a wavelength away from each other in $y$-direction. If you do that, I assume you should be able to correctly calculate the oscillation of some arbitrary point in the plane, but you do not gain any information, because that oscillation is already given by the plane wave itself, so I do not see why anybody would want to do that.
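A tiny numerical illustration of that first point (a hypothetical setup of mine: two in-phase secondary circular-wave sources half a wavelength apart along $y$, observed at a far point on the axis):

```python
import cmath
import math

lam = 1.0                  # wavelength
k = 2 * math.pi / lam
D = 1000.0                 # far observation point at (0, -D)

# Sources at y = 0 and y = lam/2; distances to the observation point:
r1, r2 = D, D + lam / 2

# Without the plane wave's own phase exp(i k y): the contributions cancel.
no_phase = cmath.exp(1j * k * r1) + cmath.exp(1j * k * r2)

# With that phase restored, the half-wavelength path difference is
# compensated and the contributions add.
with_phase = cmath.exp(1j * k * (r1 + 0.0)) + cmath.exp(1j * k * (r2 + lam / 2))

print(abs(no_phase))       # ~0: destructive
print(abs(with_phase))     # ~2: constructive
```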
Another problem is that, when representing the waves by complex exponential functions, if I'm not mistaken, the integral with the phases for the oscillation at the origin would look something like
$$
\int dx \int dy ~ \exp \left( i \left( k \left( \sqrt{x^2+y^2} + \underbrace{y}_{\text{phase}} \right) - \omega t \right) \right)~,
$$
which seems to be hard to solve. | {
"domain": "physics.stackexchange",
"id": 81555,
"tags": "huygens-principle"
} |
Checking hash and passwords with a wordlist, more efficient | Question: I have done a small code in which with a wordlist (out68.lst) I get the passwords from the hashes in the file 'shadow3'.
import crypt
import string
import itertools
import datetime
dir = "shadow3" #File that contains hashes and users
file = open(dir, 'r').readlines()
username = []
hashed = []
k=0
for x in file:
usr, hshd, wtf, iss, this, thing, here, doing, example = x.split(':')
username.append(usr)
hashed.append(hshd)
#Loop in order to split the data in the file and store it in username and hashed
grupo1=open('out68.lst','r').readlines()
long=len(grupo1)
print(long)
for y in grupo1: #Loop in order to go through all the possible words available
c = 0
y=y.rstrip('\n')
y=y.capitalize()
k = k+1
if k==(long//100):
print('1%')
if k==(long//10):
print('10%')
if k==(long//5):
print('20%')
if k==(3*long//10):
print('30%')
if k==(4*long//10): #Just to check the progress
print('40%')
if k==(5*long//10):
print('50%')
if k==(6*long//10):
print('60%')
if k==(7*long//10):
print('70%')
if k==(8*long//10):
print('80%')
if k==(9*long//10):
print('90%')
for x in hashed:
rehashed = crypt.crypt(y, x) #Hash verification: f(password, hash) == hash?
if rehashed == x:
print('La contraseña del usuario ' + username[c] + ' es ' + y)
c = c + 1
It does work, but depending on the size of the files it can currently take from 30 minutes to 6 hours. So I am asking if there is any way to improve the performance, by parallelization or GPU processing (but I have no idea about the latter).
Answer: Some suggestions:
Run the code through at least one linter such as flake8 or pycodestyle to produce more idiomatic code.
Don't read all the lines into a variable before starting processing - this will slow things down and use much more memory than necessary for large files. Instead you can iterate over the file object directly: for line in file.
You are doing ten calculations in order to run a single print statement. Either get rid of them or do something simpler like print("{}/{} complete".format(k, long)).
If you know y has exactly one newline at the end you can do y[:-1] instead of y.rstrip('\n').
Capitalizing each word is expensive. Avoid it if at all possible.
If you don't need a bunch of the fields in an input file add a limit to your split() and mark the last stuff as discarded by using the _ variable. For example: usr, hshd, _ = x.split(':', 2)
Rather than keeping track of k manually you can just do for k, y in enumerate(grupo1).
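Putting the streaming, limited-split, and enumerate suggestions together, the main loop might be sketched like this. Note that hashlib.sha256 stands in for crypt.crypt purely so the sketch is self-contained; with real shadow files you would keep crypt.crypt(word, hash):

```python
import hashlib

# Illustrative refactor of the cracking loop along the lines above.
# hashlib.sha256 stands in for crypt.crypt so the sketch is self-contained.
def crack(shadow_lines, wordlist):
    users = {}
    for line in shadow_lines:               # stream: no .readlines() copy
        usr, hshd, _ = line.split(':', 2)   # keep only the first two fields
        users[usr] = hshd
    found = {}
    for k, word in enumerate(wordlist):     # k is free for progress reporting
        word = word.rstrip('\n').capitalize()
        digest = hashlib.sha256(word.encode()).hexdigest()
        for usr, hshd in users.items():
            if digest == hshd:
                found[usr] = word
    return found

# tiny self-check with a fabricated shadow entry
_h = hashlib.sha256(b"Secret").hexdigest()
demo = crack(["alice:%s:x:y" % _h, "bob:none:x:y"], ["wrong\n", "secret\n"])
```

The dict of username to hash also demonstrates the point below about using a mapping instead of two parallel lists.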
Rather than having a list of usernames and a list of their hashed passwords, a Dict[str, str] of username to hash should be easier to keep track of. | {
"domain": "codereview.stackexchange",
"id": 32723,
"tags": "python, performance, cryptography, hashcode"
} |
ROS_INFO can't print a ROS message? | Question:
I just spent around almost 2 hrs trying to fix a problem, that didn't even exist in the first place. All because ROS_INFO was printing the wrong thing (essentially printing garbage) when I tried to do something like
std::string s = "..."
ROS_INFO("result: %s", s)
I had to print an std::string because I was trying to print a ROS message, which I had to parse myself, because I couldn't JUST PRINT THE MESSAGE:
void soemclass::jointPositionsCallback(const some_msgs::JointsPositions::ConstPtr& msg){
ROS_INFO(msg); // <- COMPILE ERROR
}
Is there really no way to JUST PRINT THE MESSAGE? Why can't I just print the std::string as well?
I feel this should be so much easier.
Originally posted by db on ROS Answers with karma: 138 on 2020-08-14
Post score: 0
Answer:
The non-stream versions take C-style strings (a null terminated character array), so they work with something like this:
ROS_INFO("a c-style string");
If you want to print a std::string you can do it one of two ways:
ROS_INFO(s.c_str()); // this prints a C-style string
ROS_INFO_STREAM(s);
For your message, this should work:
ROS_INFO_STREAM(msg);
Originally posted by fergs with karma: 13902 on 2020-08-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2020-08-14:
Obligatory link to the wiki page: roscpp/Overview/Logging.
Comment by db on 2020-08-16:
Ok, I didn't know about the ROS_INFO_STREAM, thank you. Do you know if the ROS message objects have a built-in "to_string()" method?
Comment by db on 2020-08-16:
@gvdhoorn I saw this documentation. But, as you can see, I just wanted to do some simple logging of an std::string and a ROS message. This is very complicated documentation for such a basic function, and it doesn't even mention the supposedly correct method to do this, which is ROS_INFO_STREAM(). The purpose of this question is also to bring awareness to this issue; these little things make ROS unnecessarily hard to use.
Comment by fergs on 2020-08-17:
Yes, the ROS messages have a "<<" operator, which is how streams work. | {
"domain": "robotics.stackexchange",
"id": 35412,
"tags": "ros-melodic"
} |
Robot Model does not exist, | Question:
I am using the ROS2 humble SPOT github repo, while installing the package in the workspace and running my first command, I came across this problem.
I tried to check the launch file for any wrong file paths but found everything in its place.
I did not encounter any problems during the installation and everything went well. How can I fix this error and include the world frame?
Answer: It looks like from the launch file that you might have to declare a static transform frame called "world", and connect that to the link "body". You can try this out using the following bit of code in the launch file:
import os
import typing
import launch
import launch_ros
from launch.substitutions import (
Command,
FindExecutable,
LaunchConfiguration,
PathJoinSubstitution,
)
pkg_share = launch_ros.substitutions.FindPackageShare(package="spot_description").find("spot_description")
default_model_path = os.path.join(pkg_share, "urdf/spot.urdf.xacro")
default_rviz2_path = os.path.join(pkg_share, "rviz/viz_spot.rviz")
def launch_setup(context: launch.LaunchContext) -> typing.List[launch_ros.actions.Node]:
namespace = LaunchConfiguration("namespace").perform(context)
robot_description = Command(
[
PathJoinSubstitution([FindExecutable(name="xacro")]),
" ",
PathJoinSubstitution([pkg_share, "urdf", "spot.urdf.xacro"]),
" ",
"arm:=",
LaunchConfiguration("arm"),
" ",
"tf_prefix:=",
LaunchConfiguration("tf_prefix"),
" ",
]
)
robot_state_publisher_node = launch_ros.actions.Node(
package="robot_state_publisher",
executable="robot_state_publisher",
parameters=[{"robot_description": robot_description}],
namespace=namespace,
)
joint_state_publisher_node = launch_ros.actions.Node(
package="joint_state_publisher",
executable="joint_state_publisher",
name="joint_state_publisher",
namespace=namespace,
condition=launch.conditions.UnlessCondition(LaunchConfiguration("gui")),
)
joint_state_publisher_gui_node = launch_ros.actions.Node(
package="joint_state_publisher_gui",
executable="joint_state_publisher_gui",
name="joint_state_publisher_gui",
condition=launch.conditions.IfCondition(LaunchConfiguration("gui")),
namespace=namespace,
)
rviz_node = launch_ros.actions.Node(
package="rviz2",
executable="rviz2",
name="rviz2",
output="screen",
arguments=["-d" + default_rviz2_path],
)
static_tf_node = launch_ros.actions.Node(
package="tf2_ros",
executable="static_transform_publisher",
output="screen",
arguments=["0", "0", "0", "0", "0", "0", "world", "body"]
)
return [
joint_state_publisher_node,
joint_state_publisher_gui_node,
robot_state_publisher_node,
rviz_node,
static_tf_node
]
def generate_launch_description() -> launch.LaunchDescription:
launch_arguments = [
launch.actions.DeclareLaunchArgument(
name="gui", default_value="True", description="Flag to enable joint_state_publisher_gui"
),
launch.actions.DeclareLaunchArgument(
name="model", default_value=default_model_path, description="Absolute path to robot urdf file"
),
launch.actions.DeclareLaunchArgument(
name="rvizconfig", default_value=default_rviz2_path, description="Absolute path to rviz config file"
),
launch.actions.DeclareLaunchArgument("arm", default_value="false", description="include arm in robot model"),
launch.actions.DeclareLaunchArgument(
"tf_prefix", default_value='""', description="apply namespace prefix to robot links and joints"
),
launch.actions.DeclareLaunchArgument("namespace", default_value="", description="Namespace for robot tf topic"),
]
return launch.LaunchDescription(launch_arguments + [launch.actions.OpaqueFunction(function=launch_setup)])
EDIT Added the whole launch file to show the changes a bit better | {
"domain": "robotics.stackexchange",
"id": 38673,
"tags": "gazebo, ros2, rviz, world"
} |
Prime factors and perfect square | Question: I want to convert an integer into a perfect square by multiplying it by some number. That number is the product of all the prime factors of the number which do not appear an even number of times. Example: 12 = 2 x 2 x 3; 2 appears twice (an even number of times) but 3 just once (an odd number of times), so the number I need to multiply 12 by to get a perfect square is 3. And in fact 12 x 3 = 36 = 6 x 6.
I converted my code to Haskell and would like to know what suggestions you have.
import Data.List (group)
toPerfectSquare :: Int -> Int
toPerfectSquare n = product . map (\(x:_) -> x) . filter (not . even . length) . group $ primefactors n
primefactors :: Int -> [Int]
primefactors n = prmfctrs' n 2 [3,5..]
where
prmfctrs' m d ds | m < 2 = [1]
| m < d^2 = [m]
| r == 0 = d : prmfctrs' q d ds
| otherwise = prmfctrs' m (head ds) (tail ds)
where (q, r) = quotRem m d
Sorry about the naming, I'm bad at giving names.
One particular doubt I have is the use of $ in toPerfectSquare: I first used . but it didn't work and I needed to use parentheses. Why? And is it usual to have that many compositions in one line?
Answer: We can replace some custom functions or constructs by standard library ones:
\(x:_) -> x is called head
not . even is called odd
Next, 1 is not a prime, and 1 does not have a prime factorization. Since product [] yields 1, we can use [] instead in prmfctrs'.
The worker prmfctrs' is a mouthful. Workers are usually called the same as their context (but with an apostrophe, so primefactors') or short names like go.
And last but not least, we can use @ bindings to pattern match on the head, tail and the whole list at once.
If we apply those suggestions, we get
import Data.List (group)
toPerfectSquare :: Int -> Int
toPerfectSquare n = product . map head . filter (odd . length) . group $ primefactors n
primefactors :: Int -> [Int]
primefactors n = go n 2 [3,5..]
where
go m d ds@(p:ps) | m < 2 = []
| m < d^2 = [m]
| r == 0 = d : go q d ds
| otherwise = go m p ps
where (q, r) = quotRem m d
In theory, we can even get rid of a parameter in go, namely the d, so that we always just look at the list of the divisors:
import Data.List (group)
toPerfectSquare :: Int -> Int
toPerfectSquare n = product . map head . filter (odd . length) . group $ primefactors n
primefactors :: Int -> [Int]
primefactors n = go n $ 2 : [3,5..]
where
go m dss@(d:ds) | m < 2 = []
| m < d^2 = [m]
| r == 0 = d : go q dss
| otherwise = go m ds
where (q, r) = m `quotRem` d
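As a cross-check of the algorithm (not part of the review itself), here is a direct Python port of that final version; the function names are mine:

```python
from itertools import groupby
from math import prod

# Port of the Haskell primefactors/toPerfectSquare pair, for cross-checking.
def prime_factors(n):
    factors, d = [], 2
    while n >= 2:
        if d * d > n:               # matches the  m < d^2  guard
            factors.append(n)
            break
        if n % d == 0:
            factors.append(d)
            n //= d
        else:
            d += 1 if d == 2 else 2  # next candidate: 2, then odds, like 2:[3,5..]
    return factors

def to_perfect_square(n):
    # product of the primes that occur an odd number of times
    return prod(p for p, g in groupby(prime_factors(n))
                if len(list(g)) % 2 == 1)
```

For example, to_perfect_square(12) gives 3, matching the worked example in the question (12 x 3 = 36).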
We could also introduce another function $f$, so that for any $a,b \in \mathbb N$ we get a pair $(n,y) \in \mathbb N^2$ such that
$$
a^n y = b
$$
If we had that function, we could use it to check easily whether the power of a given factor is even or odd. However, that function and its use in toPerfectSquare are left as an exercise. | {
"domain": "codereview.stackexchange",
"id": 33911,
"tags": "haskell, primes, integer, factors"
} |
Is it in NP to check if the convex hull contains the unit ball? | Question: Given a set of $n$ points in $d$ dimensional Euclidean space, the problem is to determine if the convex hull contains the unit ball centered at the origin.
Is this problem in NP?
It is in co-NP as one can give a point in the ball outside the convex hull as a witness and verify this fact using linear programming.
My focus here is not on computer precision relating to square roots, although that may also be interesting.
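To make the co-NP verification concrete: a witness point q of the ball can be checked against a separating hyperplane (the standard LP dual certificate). A toy 2D sketch, with made-up coordinates:

```python
# Sketch of checking a co-NP witness: a point q of the unit ball together
# with a separating direction d (the LP certificate) proves the hull misses
# part of the ball. All coordinates are made up for illustration.
points = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]   # hull misses the lower half-disk
q = (0.0, -0.5)    # witness: a point inside the unit ball
d = (0.0, -1.0)    # separating direction, supplied alongside the witness

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

inside_ball = dot(q, q) <= 1.0                          # q is in the ball
separated = dot(d, q) > max(dot(d, x) for x in points)  # q is outside the hull
```

If both conditions hold, the convex hull cannot contain the unit ball; finding such a (q, d) in general is what the linear program does.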
(Related to https://mathoverflow.net/questions/141782/efficiently-determine-if-convex-hull-contains-the-unit-ball .)
Answer: The problem is NP-hard; see my answer at mathoverflow. Thus there is no polynomial-size certificate that the unit ball is contained in the convex hull of given points unless $\text{NP} = \text{co-NP}$ (if $\text{NP} = \text{co-NP}$ then the polynomial hierarchy collapses). | {
"domain": "cstheory.stackexchange",
"id": 2253,
"tags": "cc.complexity-theory, cg.comp-geom"
} |
What is the use of finding minimum number of straight lines to cover a set of points? | Question: There is a popular problem [1] [2] in computer science: finding the minimum number of straight lines that cover a given set of points in 2D.
Even though I have scanned many papers, none of them has a clear motivation for the problem.
What is the use of solving this problem? Is there a paper that explains this?
Answer: Although many papers in theoretical computer science claim practical applications for their work, this is unfortunately often simply not the case. Usually, either the problems are too far from being something useful (too simplified), or the algorithms are too far from being practical (e.g. hiding big constants in the O-notation).
However, you can look at the papers
Approximation algorithms for hitting objects with straight lines (Hassin & Megiddo) and
On the complexity of locating linear facilities in the plane (Megiddo & Tamir).
They claim, e.g.
The problem of hitting objects in the plane with a minimum number of straight lines has a military application. In many cases when a bomber attempts to destroy targets on the ground, protected by anti-aircraft missiles, it has to spend as little time as possible close to the targets. Thus, careful planning of an air raid on a multi-target site (for example, a cluster of fuel tanks) calls for a minimum number of times a bomber has to fly across the site. Moreover, each pass has to be carried out as fast as possible, hence for each dive into the site there exists a straight line (a "stick") along which targets are destroyed. Another application (in three dimensions) is in medicine where radiotherapy is administered by inserting a minimum number of radioactive needles into a certain area of the body so as to achieve a required level of radiation.
And also:
For example, we may view the problems faced by a planner who has to locate r (linear) segments of a new railroad system so as to minimize the average cost to the users who have to reach the tracks from a number of different small communities. Thus, a straight line or a line segment is of natural importance in this context. Sometimes such problems are easier than those with point facilities. For example, it is much easier to find a line, so as to minimize the sum of distances to it from a set of given points, than to find a single point with the same objective. | {
"domain": "cs.stackexchange",
"id": 4581,
"tags": "reference-request, computational-geometry, applied-theory"
} |
Why does $\nabla \to ik$ when you Fourier transform? | Question: I am reading a text that describes the scattering of light by a particle with dielectric constant $\epsilon$
After a bit of maths starting from Maxwell's equations they obtain:
$$\nabla (\nabla \cdot E(r)) - \nabla^2E(r)=\mu_0\omega^2\epsilon(r)\cdot E(r)$$
then say "Fourier transforming with respect to $r$ gives (replace $\nabla$ by $ik$):"
$$[k^2 \hat{I}-kk]\cdot E(k) = \mu_0 \omega^2 \int \epsilon(r) \cdot E(r) \exp(-ik \cdot r) \,\mathrm{d}r$$
I don't understand why all the $\nabla$ have turned into $ik$. Is there any way to visualize why this can be done?
Answer: Perhaps the best way to understand this is to start simply:
Consider a function $f(x)$. Now, let's try to take the Fourier transform of its derivative $f'(x)$. Just use the definition of the Fourier transform:
$$\mathscr{F}(f'(x))(k)=\frac{1}{\sqrt{2\pi}}\int dx\ e^{-ikx}f'(x) $$
and now use integration by parts (assuming $f(x) \to 0$ as $|x|\to \infty$, a typical assumption in this context). The boundary term then vanishes, leaving
$$\mathscr{F}(f'(x))(k)=\frac{ik}{\sqrt{2\pi}}\int dx\ e^{-ikx}f(x)=ik\,\mathscr{F}(f)(k),$$
so each derivative becomes multiplication by $ik$; applied componentwise, $\nabla$ becomes $i\mathbf{k}$ under the Fourier transform. | {
"domain": "physics.stackexchange",
"id": 17303,
"tags": "homework-and-exercises, electromagnetic-radiation, fourier-transform"
} |
Calculating Reactive Gyroscopic Couple and its Relationship with the Attitude of a Rigid Body | Question: Consider some rigid disk, (as an approximation for a propeller on an aircraft for instance), spinning about an axis through the center of mass of said disk with angular velocity ω. In terms of a cylindrical coordinate system, assume the COM of the disk is at a distance r from the Z-axis, and assume further that angular velocity vector ω of the disk points in the +θ direction initially. In this setup, if one were to impart an additional angular velocity Ω on the disk about the Z-axis, a reactive gyroscopic couple C will manifest along the R-axis causing the orientation of the disk as well as the angular velocity vector ω to pitch up or down (depending on the directions of ω and Ω). Thus, a single propeller aircraft when making a left or right turn will tend to go nose-up or nose-down.
I have a few queries about this motion.
How does one obtain the reactive gyro couple vector? The literature suggests that its magnitude should be C=IωΩ, but, as we have two separate rotations about two separate axes, should there not be a separate MOI for the contributions due to ω and Ω? If not, why?
Where does the angular momentum/energy causing this gyroscopic couple and rotation about the r-axis come from? (Is the new reactive angular momentum/motion about the r-axis taken from lessening the momentum associated with ω (θ direction), Ω (z-direction), or some combination of both?)
How might one calculate the change in orientation of this disk over time due to the couple? (Like the change in the Euler angles of a vector normal to the disk as the couple is applied, for instance).
Thanks in advance!
Answer: About your questions:
The usual approximation is to take the case where the angular velocity of the spinning disk/propeller is far larger than the angular velocity of the yaw, so the contribution of the angular velocity of the yawing motion is negligible in comparison. But yeah, an exhaustive treatment takes both into account.
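In that approximation the couple magnitude C = IωΩ falls out of |dL/dt| for a spin angular momentum being yawed. A quick numerical check with made-up numbers (not from the question):

```python
import math

# Numerical check that |dL/dt| = I*omega*Omega for a spin angular momentum
# L = I*omega (along x) being yawed about z at rate Omega.
# All numbers are made up for illustration.
I_disk, omega, Omega = 0.5, 200.0, 1.5   # kg*m^2, rad/s spin, rad/s yaw
dt = 1e-7                                 # small time step
L = I_disk * omega
theta = Omega * dt                        # yaw angle accumulated in dt
# |L(t+dt) - L(t)| / dt, with L rotated by theta about the z-axis:
dLdt = math.hypot(L * math.cos(theta) - L, L * math.sin(theta)) / dt
```

Here dLdt agrees with I_disk * omega * Omega up to O(theta), which is the reactive couple magnitude from the literature.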
The momentum exchange is discussed by Feynman in section 20-3 of chapter 20 of book I, that section is titled: the gyroscope
The following source, by the looks of it, contains an exhaustive treatment
The overall series:
MITOpenCourseWare aeronautics
Specifically the case of gyroscopes:
3D rigid body dynamics: tops and gyroscopes
Additional remarks:
During WW I many aircraft types had a rotary engine.
As you can imagine, the gyroscopic effects were very strong, dangerously so. However, I don't think I have ever read mention of noticeable gyroscopic effects from the propeller of today's propeller-driven aircraft.
It is important to be aware that the onset of gyroscopic precession is about motion. Specifically: to a pilot flying an aircraft with a rotary engine it may well have seemed as if giving rudder resulted in pitching instead. However, precise measurement would show that in order for the pitching to occur a bit of yaw must happen. That is: the rotary engine aircraft responds to yawing motion. It's just that if the pitching response is strong the smaller yawing motion may well go unnoticed. Conversely, initiating a slight pitching motion would result in yaw. So if the pilot wanted to turn he had to push for pitching. Flying those aircraft must have been very tricky. | {
"domain": "physics.stackexchange",
"id": 59821,
"tags": "rotational-dynamics, gyroscopes"
} |
Boolean expression parser | Question: I was trying to write some of the Haskell list functions into Java, and I realized that one of the key strengths for many of the functions was the ability to pass in a boolean expression. I decided to write a parser to facilitate this. It has worked perfectly as far as I have tested it, but I want to know just how [in]efficiently I've done this, and if there's anything key that I missed.
A few rules for using this parser:
You cannot use the ! operator in an expression [yet].
For comparison of strings, use ==, !=, etc...
For exponentiation, use ^.
PEMDAS ("order of operations") is followed for mathematical expressions.
Otherwise, expressions can be written almost exactly like a normal if statement. For example:
Halo.takeWhile("[x]>=2", someArray);
As a direct call to the parser:
ExpressionParser.evaluate("2+[x] == [y] && (1==1 && Joe==Joe && 3^2>10)", "x", 5, "y", 7);
class ExpressionParser
{
private static final String[] operators = { "!=", "==", ">=", "<=", ">", "<", "||", "&&", "*", "/", "+", "-", "^" };
private static boolean parseAndEvaluateExpression(String ex)
{
for (char c : ex.toCharArray())
{
if (!Character.isSpaceChar(c))
return parseWithStrings(ex);
}
System.err.println("ERROR: Expression cannot be empty!");
return false;
}
@SafeVarargs
static <T> boolean evaluate(String or, T... rep)
{
String[] temp = new String[rep.length];
for (int i = 0; i < rep.length; i++)
temp[i] = "" + rep[i];
return evaluate(or, temp);
}
static boolean evaluate(String or, String... vars)
{
if ((vars.length % 2 == 1 || vars.length < 2) && vars.length != 0)
{
System.err.println("ERROR: Invalid arguments!");
return false;
}
for (int i = 0; i < vars.length; i += 2)
or = or.replace("[" + vars[i] + "]", "" + vars[i + 1]);
return parseAndEvaluateExpression(or);
}
private static boolean parseWithStrings(String s)
{
int[] op = determineOperatorPrecedenceAndLocation(s);
int start = op[0];
String left = s.substring(0, start).trim();
String right = s.substring(op[1]).trim();
String oper = s.substring(start, op[1]).trim();
int logType = logicalOperatorType(oper);
System.out.println("PARSE: Left: \"" + left + "\" Right: \"" + right + "\" Operator: \"" + oper + "\"");
if (logType == 0) // encounters OR- recurse
return parseWithStrings(left) || parseWithStrings(right);
else if (logType == 1) // encounters AND- recurse
return parseWithStrings(left) && parseWithStrings(right);
if (containsMathematicalOperator(left)) // evaluate mathematical expression
left = "" + parseMathematicalExpression(left);
if (containsMathematicalOperator(right))// see above
right = "" + parseMathematicalExpression(right);
String leftSansParen = removeParens(left);
String rightSansParen = removeParens(right);
if (isInt(leftSansParen) && isInt(rightSansParen))
return evaluate(Double.parseDouble(leftSansParen), oper, Double.parseDouble(rightSansParen));
else
return evaluate(leftSansParen, oper, rightSansParen); // assume they are strings
}
private static int[] determineOperatorPrecedenceAndLocation(String s)
{
s = s.trim();
int minParens = Integer.MAX_VALUE;
int[] currentMin = null;
for (int sampSize = 1; sampSize <= 2; sampSize++)
{
for (int locInStr = 0; locInStr < (s.length() + 1) - sampSize; locInStr++)
{
int endIndex = locInStr + sampSize;
String sub;
if ((endIndex < s.length()) && s.charAt(endIndex) == '=')
sub = s.substring(locInStr, ++endIndex).trim();
else
sub = s.substring(locInStr, endIndex).trim();
if (isOperator(sub))
{
// Idea here is to weight logical operators so that they will still be selected over other operators
// when no parens are present
int parens = (logicalOperatorType(sub) > -1) ? parens(s, locInStr) - 1 : parens(s, locInStr);
if (containsMathematicalOperator(sub))
{
// Order of operations weighting
switch (sub)
{
case "^":
case "/":
case "*":
parens++;
break;
case "+":
case "-":
break;
}
}
if (parens <= minParens)
{
minParens = parens;
currentMin = new int[] { locInStr, endIndex, parens };
}
}
}
}
return currentMin;
}
private static int logicalOperatorType(String op)
{
switch (op.trim())
{
case "||":
return 0;
case "&&":
return 1;
default:
return -1;
}
}
private static boolean containsMathematicalOperator(String s)
{
s = s.trim();
for (char c : s.toCharArray())
if (c == '/' || c == '+' || c == '*' || c == '-' || c == '^')
return true;
return false;
}
private static int parens(String s, int loc)
{
int parens = 0;
for (int i = 0; i < s.length(); i++)
{
if (s.charAt(i) == '(' && i < loc)
parens++;
if (s.charAt(i) == ')' && i >= loc)
parens++;
}
return parens;
}
private static String removeParens(String s)
{
s = s.trim();
String keep = "";
for (char c : s.toCharArray())
{
if (!(c == '(') && !(c == ')'))
keep += c;
}
return keep.trim();
}
private static boolean isOperator(String op)
{
op = op.trim();
for (String s : operators)
{
if (s.equals(op))
return true;
}
return false;
}
private static boolean isInt(String s)
{
for (char c : s.toCharArray())
if (!Character.isDigit(c) && c != '.')
return false;
return true;
}
private static boolean evaluate(double left, String op, double right)
{
switch (op)
{
case "==":
return left == right;
case ">":
return left > right;
case "<":
return left < right;
case "<=":
return left <= right;
case ">=":
return left >= right;
case "!=":
return left != right;
default:
System.err.println("ERROR: Operator type not recognized.");
return false;
}
}
private static double parseMathematicalExpression(String s)
{
int[] op = determineOperatorPrecedenceAndLocation(s);
int start = op[0];
String left = s.substring(0, start).trim();
String right = s.substring(op[1]).trim();
String oper = s.substring(start, op[1]).trim();
System.out.println("MATH: Left: \"" + left + "\" Right: \"" + right + "\" Operator: \"" + oper + "\"");
if (containsMathematicalOperator(left))
left = "" + parseMathematicalExpression(left);
if (containsMathematicalOperator(right))
right = "" + parseMathematicalExpression(right);
return evaluateSingleMathematicalExpression(Double.parseDouble(removeParens(left)), oper,
Double.parseDouble(removeParens(right)));
}
private static double evaluateSingleMathematicalExpression(double result1, String oper, double result2)
{
switch (oper)
{
case "*":
return result1 * result2;
case "/":
return result1 / result2;
case "-":
return result1 - result2;
case "+":
return result1 + result2;
case "^":
return Math.pow(result1, result2);
default:
System.err.println("MATH ERROR: Mismatched Input.");
return 0;
}
}
private static boolean evaluate(String left, String op, String right)
{
switch (op)
{
case "==":
return left.equals(right);
case "!=":
return !left.equals(right);
default:
System.err.println("ERROR: Operator type not recognized.");
return false;
}
}
}
Answer: You've written something quite impressive there, but it is an unorthodox
approach and some of the code is rather impenetrable. I've laid out a few
points of issue and in some cases possible solutions and comments below.
Method documentation
Your methods could stand to have a brief JavaDoc block at the top. Here's how
you might annotate what I believe to be your simplest method, isInt:
/**
* Determines whether a given string consists only of digits.
*
* @param s The string to test.
* @return True if the string consists only of digits; false otherwise.
*/
private static boolean isInt(String s)
{
for (char c : s.toCharArray())
if (!Character.isDigit(c) && c != '.')
return false;
return true;
}
(By the way, despite its name, isInt accepts decimal points precisely because of the && c != '.' clause; if you really do want integers only, drop that clause, since Character.isDigit('.') == false on its own already rejects the dot.)
Proper typing of arguments
I notice you provide arguments with a sort of “key, value, key, value” style
and explicitly check to make sure that format is followed. I suppose that
works, but you could be more semantic and get more out of the type system by
using a Map from variable names to values.
Non-usage of exceptions
You've got a lot of error cases:
System.err.println("ERROR: Expression cannot be empty!");
System.err.println("ERROR: Invalid arguments!");
System.out.println("PARSE: Left: \"" + left + "\" Right: \"" + right + "\" Operator: \"" + oper + "\"");
System.err.println("ERROR: Operator type not recognized.");
System.out.println("MATH: Left: \"" + left + "\" Right: \"" + right + "\" Operator: \"" + oper + "\"");
System.err.println("MATH ERROR: Mismatched Input.");
System.err.println("ERROR: Operator type not recognized.");
Printing them out to standard error (or standard output, without rhyme or
reason) may be suitable for debugging, but it is not appropriate beyond that.
Java has a method of dealing with errors: exceptions. Use them; they won't
bite.
Inefficient concatenation
In removeParens (and possibly elsewhere), you're repeatedly concatenating
strings using +. This is fine for one-offs, or where the number of operands
are constant, as Java can optimize it. However, doing it repeatedly in a loop
is inefficient. You'd be better off using a StringBuilder:
private static String removeParens(String s)
{
s = s.trim();
StringBuilder keep = new StringBuilder();
for (char c : s.toCharArray())
{
if (!(c == '(') && !(c == ')'))
keep.append(c);
}
return keep.toString().trim();
}
Also realize you can change that if condition to c != '(' && c != ')'.
You may also want to consider initializing the StringBuilder to s and
deleting the parentheses. Also keep in mind that
the first s = s.trim() is not necessary given the trim at the end.
parens
This could be clarified with more documentation as recommended at the top, but
it seems to me like it's supposed to determine (twice?) the nesting level of
parentheses at a particular location in the string. I'm worried there's a bug
with closing parentheses before the location or opening parentheses after.
Consider the following situation, where ^ is the location in question.
( ( ) ( ) ( ) )
^
At that ^ point, there are two levels of nesting of parentheses. parens
will return 6, counting these:
( ( ) ( ) ( ) )
^ ^ ^ ^ ^ ^
I think you probably want a result of 4 instead, ignoring the paired
parentheses.
( ( ) ( ) ( ) )
~~~ ~~~ <- irrelevant to middle pair, but counted anyway
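For contrast, a depth-at-position function that ignores already-closed pairs might look like this (a Python sketch with my own naming, not the reviewed Java):

```python
# Sketch of what parens probably wants: the nesting depth at a position,
# counting opens minus closes strictly before that position.
def nesting_depth(s, loc):
    depth = 0
    for c in s[:loc]:
        if c == '(':
            depth += 1
        elif c == ')':
            depth -= 1
    return depth
```

On the example string, the position marked with ^ sits at depth 2, and positions outside all parentheses sit at depth 0, which matches the intended reading above.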
determineOperatorPrecedenceAndLocation
This method is quite impenetrable to me, and I really suspect there are latent
bugs lurking within it, particularly given the behavior of parens above. I
think you'll want to comment more heavily what you're doing here or take a
different approach.
Overall approach
Unorthodox method of parsing
Your overall approach to parsing expressions is unorthodox. Now, this isn't
necessarily a bad thing, but it's more difficult for me to examine, and a
more traditional approach may end up being more extensible and easier to
maintain. Normally you'd split this up into a few stages: first, you lex the
string into a series of tokens, so the input 5 + 6 == 10 + [x] might yield
the tokens
INT 5
PLUS
INT 6
EQUALS
INT 10
PLUS
VAR x
From there, you parse it out. If you're just planning on using the result, you
can evaluate it as you parse. You can also build an
abstract syntax tree (AST) and evaluate from there, but evaluating
immediately should be fine.
For the actual parsing, you've got a bunch of possible approaches. If you
wanted to go down this route, I might read up on
recursive-descent parsers and then Pratt parsers. There
are also a variety of parser-generators like ANTLR, but I'd try to
first write one manually.
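To make the shape of that approach concrete, here is a minimal precedence-climbing (Pratt-style) evaluator for the arithmetic subset. It is illustrative only, in Python for brevity, with no unary minus or error handling; it is not a drop-in for the Java code:

```python
import operator
import re

# Minimal precedence-climbing evaluator for +, -, *, /, ^ with parentheses.
PREC = {'+': 10, '-': 10, '*': 20, '/': 20, '^': 30}
RIGHT_ASSOC = {'^'}
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul,
       '/': operator.truediv, '^': operator.pow}

def tokenize(s):
    # numbers and single-character operator/paren tokens; whitespace skipped
    return re.findall(r"\d+(?:\.\d+)?|[-+*/^()]", s)

def parse(tokens, min_prec=0):
    tok = tokens.pop(0)
    if tok == '(':
        left = parse(tokens, 0)
        tokens.pop(0)                      # consume the ')'
    else:
        left = float(tok)
    while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        # left-associative operators bind their right side one level tighter
        next_min = PREC[op] if op in RIGHT_ASSOC else PREC[op] + 1
        left = OPS[op](left, parse(tokens, next_min))
    return left

def evaluate(expr):
    return parse(tokenize(expr))
```

With this, evaluate("2+3*4") gives 14.0 and evaluate("2^3^2") gives 512.0 (right-associative exponentiation), so PEMDAS falls out of the precedence table rather than the parenthesis-weighting trick.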
Extensibility
You may want to consider making it easier to extend your expression parser to
add more operators and such. For example, say I wanted to use the modulo
operator. A good API for this might be something like this:
ExpressionParser parser = new ExpressionParser();
parser.addOperator(new InfixOperator() {
@Override
public String getSymbol() {
return "%";
}
@Override
public int getPrecedence() {
return 20; /* or whatever's appropriate within your system */
}
@Override
public Associativity getAssociativity() {
return Associativity.LEFT;
}
@Override
public double evaluate(double left, double right) {
return left % right;
}
});
System.out.println(parser.evaluate("7 % 3 == 2")); // => false
I don't know how I'd fit this in with your approach, but it is trivial with a
Pratt parser.
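To make the extensibility point concrete, here is a minimal Pratt-style parser sketch in Python (an illustration written for this review, not the reviewed Java code): every infix operator is one entry in a table, so adding `%` is a single line.

```python
import re

# Operator table: symbol -> (precedence, evaluation function). Adding a new
# left-associative infix operator such as "%" is one entry here.
OPS = {
    "==": (5,  lambda a, b: float(a == b)),
    "+":  (10, lambda a, b: a + b),
    "-":  (10, lambda a, b: a - b),
    "*":  (20, lambda a, b: a * b),
    "%":  (20, lambda a, b: a % b),
}

def tokenize(s):
    return re.findall(r"\d+|==|[-+*%()]", s)

def parse(tokens, min_prec=0):
    tok = tokens.pop(0)
    if tok == "(":
        left = parse(tokens)
        tokens.pop(0)               # consume the closing ")"
    else:
        left = float(tok)
    # Pratt loop: keep consuming operators that bind at least as tightly.
    while tokens and tokens[0] in OPS and OPS[tokens[0]][0] >= min_prec:
        prec, fn = OPS[tokens.pop(0)]
        right = parse(tokens, prec + 1)   # +1 makes the operator left-associative
        left = fn(left, right)
    return left

def evaluate(expression):
    return parse(tokenize(expression))
```

With this shape, `evaluate("7 % 3 == 2")` evaluates the modulo before the comparison and returns a falsy result, mirroring the `getPrecedence`/`evaluate` pieces of the proposed `InfixOperator` interface.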
Appropriateness
I admire your effort in this, and while I imagine this would be useful in many
cases, I'm not so sure your list library needs this. There is a rather
straightforward translation of predicates into Java: that is, an interface and
anonymous classes, as we did with the extensibility example. For example:
public interface Predicate<T> {
public boolean matches(T item);
}
Then you could filter a list of integers like so:
int[] integers = new int[] { 1, 2, 3, 4, 5 };
integers = Halo.filter(integers, new Predicate<Integer>() {
@Override
public boolean matches(Integer item) {
return item % 2 == 0; // even items only
}
});
Now, I'm not suggesting you completely abandon your approach. I just think
you might want to support that use and then provide an ExpressionPredicate or
something so I could do this:
integers = Halo.filter(integers, new ExpressionPredicate<Integer>("[item] % 2 == 0"));
Then I can pick and choose whether I want to use Java or your mini-language to
write my predicate. I can even use ExpressionPredicate from within my
Predicate that's otherwise written in Java: I get the best of both worlds. | {
"domain": "codereview.stackexchange",
"id": 5517,
"tags": "java, strings, array, haskell, parsing"
} |
Turtlebot: Unstable | Question:
We recently started working with a turtlebot using the ros tutorials for the same.
We do not know when it is switched on in full mode and when in passive or safe mode. Immediately after we switch on the iRobot Create, the red light flashes, there is a small beep, and the green light comes on. But within a few seconds the green light goes off with another beep. This doesn't happen all the time, but very often. Also, the moment we switch on the Create, the Kinect cable's LED is on, but only while the red light flashes; after the beep, once the green light comes on, this LED goes off. And sometimes the output from minimal.launch says, "check if the Create is plugged in and powered on", even when the connections are perfect and after I have turned the Create on.
Will minimal.launch start the turtlebot service by default? Because if I start the service and then launch minimal.launch, it says "shutdown request: New node registered with the same name."
The dashboard doesn't always work from the workstation. Sometimes it works, sometimes not. It works fine on the turtlebot itself. On the workstation it says POWER SYSTEM ERROR, though we do not know what that means. This also happens only sometimes.
Sometimes starting up minimal.launch powers the Kinect (maybe the bot is pushed to full mode). Sometimes the same thing happens with teleop. We are not always able to push it to full mode from the dashboard, which should work.
We still managed to calibrate the bot and build a map. The map was nowhere near the real room. Later we found out that when the actual bot turned 90 degrees, the bot on the map turned much less (about 45 degrees). We manipulated the parameters odom_angular_scale_correction and gyro_scale_correction, but with no luck.
What we know for sure.
The network connection is perfect; teleop and ssh work great.
The env variables, ROS_MASTER_URI and ROS_HOSTNAME are correctly set.
All mechanical connections on the turtlebot are correct.
Please help us out.
Originally posted by prasanna on ROS Answers with karma: 55 on 2012-03-14
Post score: 3
Answer:
okay...
The Create doesn't always display its power state accurately using the LED. Typically the green LED will be on and then go off when the ROS driver connects. Decide whether you're going to use minimal.launch or the turtlebot service, but do not use both. They do essentially the same thing and will conflict with each other, causing nodes to start up and shut down repeatedly.
to start the service type:
sudo service turtlebot start
to stop the service type:
sudo service turtlebot stop
If the service isn't working or not starting up properly look at the dashboard and it should display the problem. Or make sure that you have the correct interface setup for the service. Check out this tutorial for details http://ros.org/wiki/turtlebot_bringup/Tutorials/TurtleBot%20Bringup
most likely the reason the dashboard isn't working is because you don't have the ROS_HOSTNAME set on the workstation machine in the terminal that you ran the dashboard from. Make sure that the terminal you are running from has the correct ROS_HOSTNAME and ROS_MASTER_URI.
minimal.launch will never power up the Kinect on its own; there is no method for doing it. Most likely what is happening is that the robot received a cmd_vel msg: the TurtleBot node will put the TurtleBot in full mode when it receives a cmd_vel msg, when the breaker service is called, or when the TurtleBot is explicitly set into full mode.
Are you sure that you calibrated the gyro properly? If you run the calibration repeatedly, it should get approximately the same number every time. Also check whether the angular rate of the gyro is 150 deg/sec or 300 deg/sec. TurtleBots from Clearpath now have 300 deg/sec gyros, and you must set the ~gyro_measurement_range parameter for the gyro type. You can use dynamic reconfigure to do this as well. Also make sure that after you calibrate the TurtleBot you update your launch file, either minimal.launch or turtlebot.launch, in /etc/ros/electric (or your current distro). Then relaunch minimal.launch or restart the TurtleBot service.
Originally posted by mmwise with karma: 8372 on 2012-03-14
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 8581,
"tags": "ros, irobot-create, navigation, turtlebot, gmapping"
} |
What is the amount of information carried by an $X$-bit system? | Question: Please consider me a beginner and not an expert by any means. My question below will thus be very simple, like that of an uninitiated person. Let me consider four simple physical systems with ${\rm N}_S$ states. With those, I'll try to explain my current state of knowledge/understanding at this point and ask the question.
First, one coin. It has two possible states- a 'head' and a 'tail' i.e., ${\rm N}_S=2$. Each state can be denoted by $0$ and $1$. This is an example of a $1$-bit system.
Second, consider $N$ identical coins. Clearly, the number of states (or configurations) is now $N_S=2^N$. Each state of the full system can again be represented or encoded by a distinct string of '$p$' zeros and '$q$' ones such that $p+q={\rm N}$. This is an example of a classical ${\rm N}$-bit system.
Third, consider a die, where the number of states is $N_S=6$. By definition, this is a $\log_2 6\approx 2.585$-bit system.
Fourth, consider $N$ dice, so that the number of states is $N_S=6^N$. It is, by definition, a $\log_2(6^N)\approx 2.585N$-bit system.
Therefore, irrespective of whether each 'microscopic' constituent is a $1$-bit system (e.g., a coin) or not (e.g., a die), the quantity $X=\log_2 N_S$ is used to define an $X$-bit system. I have tried to illustrate that with my four examples above.
Given the above set up, my question is, if we have a $X$-bit system, what is the amount of information carried by that system?
Answer: The amount of information carried by a system is given by the entropy function $H(X)$, which is defined as: $$ H(X) =-\sum_x p(x) \log_{2}(p(x)) $$ where $p(x)$ is the probability distribution of the random variable $X$ and the sum is over the support of $p(x)$.
For example, if you have a biased coin with $p({\rm heads}) = 0.8$ and $p({\rm tails}) = 0.2$, you can calculate $H(X)$, which will be less than $1$ bit. Hence it requires less information than a fair coin to describe.
The amount of information carried by a system is a measure of the amount of uncertainty present in the system.
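A short numeric check of the entropy formula (plain Python, standard library only), reproducing the numbers from both the question and the answer: a fair coin carries exactly 1 bit, the biased coin less than 1 bit, and a fair die $\log_2 6 \approx 2.585$ bits.

```python
import math

def entropy(probs):
    """Shannon entropy in bits; terms with p = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair_coin = entropy([0.5, 0.5])     # 1.0 bit
biased_coin = entropy([0.8, 0.2])   # about 0.722 bits, less than the fair coin
fair_die = entropy([1 / 6] * 6)     # log2(6), about 2.585 bits
```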
"domain": "physics.stackexchange",
"id": 67886,
"tags": "quantum-information, entropy, information"
} |
Why don't modern SAT solvers use the notion of a "watched clause", in the same way they use the notion of a "watched literal"? | Question: Modern SAT solvers use the notion of "watched literals": when a value is chosen for a literal $l$, the solver only checks whether that falsifies clauses with $l$ in them if $l$ is one of the watched literals in the clause. This prevents checking every clause for every newly assigned literal.
The correctness of the scheme comes from the fact that clauses are disjunctions and will only be false if all literals in it are false. So we know it is not false until at least the watched literal is false, and we only need to check when that happens.
I am wondering why the dual idea of a "watched clause" is not used as well. This is how it would work: the overall formula is a CNF, that is, a conjunction of clauses. The watched literal scheme already checks very fast whether the CNF has become false. However, it would also be useful to detect when it becomes true: no further search would be needed then. A conjunction is only true if all clauses in it are true. So we choose one "watched clause" $C$ and, if a literal in $C$ is assigned a value, we check to see if $C$ has turned true. If so, we mark that clause as now irrelevant, and pick another clause not yet satisfied. If we cannot find a not-yet-satisfied clause, all clauses are satisfied and the problem is satisfiable.
I realize this would require keeping track of more information; assigning to literals would now not only require checking if they are watched literals, but also if they are in the watched clause. More importantly, looking for a new watched clause requires checking if other clauses are not yet satisfied, which means checking all their literals. Depending on the structure of the problem instance, this could be much slower. However, for some instances (for example, problems with small clauses) this would be much faster because satisfaction would be detected much sooner.
So my question is whether this is not used simply for empirical reasons (say, it would be slower for most benchmarks), or if there is a deeper flaw about it that I am missing.
A secondary question is: if this is a merely empirical issue, then is there some simple argument predicting this from the description of the technique alone, or is it something that would need to be observed in practice? Or, in other words, if this is a bad idea, is it so in principle, or just empirically?
Addendum:
It has been suggested by an answer that the notion of watched literals implies the use of DPLL which implies tracking which clauses have been satisfied, and that therefore the question does not make sense. It is true that watched literals are used in the context of DPLL, but it is not true that this means tracking satisfied clauses.
The paper on Chaff, which defined watched literals, describes DPLL in pseudo-code as
while (true) {
if (!decide()) // if no unassigned vars
return(satisfiable);
while (!bcp()) {
if (!resolveConflict())
return(not satisfiable);
}
}
This shows that satisfiability is decided as positive only when all variables are assigned, not when all clauses are satisfied (which could happen much earlier). Chaff and zChaff have been the basis for many of modern SAT solvers, including the very prominent MiniSAT.
The same idea is described in detail in Filip Maric's paper "Formalization and Implementation of Modern SAT Solvers", which describes this approach as the basis for many of the most prominent SAT solvers today.
Answer: A typical SAT solver notices that a satisfying assignment has been found when there are no more variables to assign. So the only time that a SAT solver would save by early notification is the time it would take to assign values to any remaining unassigned variables. This is a one-time cost linear in the number of variables in the formula. It's linear because there can be no backtracking once the formula is solved.
Meanwhile, to maintain your watched clause pointer you must do a search to find a new unsatisfied clause each time your watched clause is satisfied. That search is linear in the number of clauses in the formula. These searches will be triggered during the exponential part of the search and so you will probably do many of them on your way to finding a satisfying assignment.
The net result is that you will waste more time than you will save trying to immediately notice when all clauses are satisfied.
Note that if you're adapting a traditional SAT solver to count solutions, then your idea has more merit. Noticing that all clauses are satisfied while there are still unassigned variables means you have found $2^n$ solutions to the formula, where $n$ is the number of unassigned variables. This provides an exponential speedup over finding these solutions one by one, so the lazy clause tracking could potentially pay for itself. However, an alternative would be to notice a satisfying assignment in the usual way and then undo assignments in reverse order until a clause is left unsatisfied. You would have the same $2^n$ speedup without the clause tracking overhead. | {
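As a toy illustration of that last paragraph (not a real solver; written for clarity, not speed): a brute-force model counter that stops as soon as every clause is satisfied and credits $2^n$ models for the $n$ still-unassigned variables, rather than enumerating them one by one.

```python
# clauses use DIMACS-style signed ints: [1, -2] means (x1 OR NOT x2).
def count_models(clauses, n_vars, assignment=()):
    def value(lit):
        var = abs(lit)
        if var <= len(assignment):
            return assignment[var - 1] == (lit > 0)
        return None  # variable not assigned yet

    # Early termination: every clause already satisfied, so the n remaining
    # unassigned variables are free and this branch stands for 2**n models.
    if all(any(value(lit) for lit in clause) for clause in clauses):
        return 2 ** (n_vars - len(assignment))
    # Conflict: some clause has every literal falsified.
    if any(all(value(lit) is False for lit in clause) for clause in clauses):
        return 0
    return (count_models(clauses, n_vars, assignment + (False,))
            + count_models(clauses, n_vars, assignment + (True,)))
```

For instance, `count_models([[1], [2]], 3)` finds the two models of $x_1 \wedge x_2$ in one step once both clauses are satisfied, because the free $x_3$ contributes a factor of $2^1$.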
"domain": "cs.stackexchange",
"id": 4674,
"tags": "satisfiability, sat-solvers, constraint-satisfaction"
} |
Sampling frequency of voice and audio | Question: I read somewhere that the speech signal has a bandwidth from 300 to 3400 Hz (why?), and audio files have a bandwidth from 50 Hz to 50000 Hz (why?).
Could someone help me understand why those are the standard sampling frequencies?
Answer: It's human hearing which defines the associated bandwidths. The sampling frequency is then chosen to be larger than twice the bandwidth, to satisfy the sampling theorem, while still being as small as possible to reduce processing costs.
Human hearing for general sound signals is accepted to be between 20 Hz and 20 kHz, the limits depending on the individual, age, gender, etc.
Note that various mechanical devices, acoustic instruments, and the like can produce sound waves well in excess of the 20 kHz upper limit, including ultrasound; however, an ordinary human will not be able to hear them, which is why the upper limit is taken to be 20 kHz.
For speech used in the old analog communication (telephony) network, the intention was to find the minimum intelligible bandwidth, to reduce the technical complexity and economic cost of building that network. This bandwidth was found to be within 300-3000 Hz. However, as technology improved and other means of speech communication emerged, higher-bandwidth audio was also used for better-quality transmission. Digital codecs, radio broadcast, etc. can use 8 kHz, 16 kHz, etc. bandwidths too.
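The Nyquist reasoning in the first paragraph can be made concrete (a sketch; the function name is mine): the minimum sampling rate is twice the highest frequency of interest, which is why telephony samples at 8 kHz for roughly 3.4 kHz speech and CD audio at 44.1 kHz for 20 kHz hearing.

```python
def min_sampling_rate(max_freq_hz):
    """Nyquist minimum: the sampling rate must exceed twice the bandwidth."""
    return 2 * max_freq_hz

telephone = min_sampling_rate(3400)    # 6800 Hz; telephony rounds up to 8000 Hz
hifi_audio = min_sampling_rate(20000)  # 40000 Hz; CD audio uses 44100 Hz
```

Real systems sample somewhat above the Nyquist minimum to leave room for the anti-aliasing filter's roll-off.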
"domain": "dsp.stackexchange",
"id": 7901,
"tags": "matlab"
} |
Euler's equation for rigid body rotation applied to an inertial frame? | Question: According to many rigid body dynamics books, Euler's equation is expressed in the local body frame (principal axes). That is to say, the external moment, inertia tensor, and angular velocity are all expressed in the local body frame:
$$\tau_b=I_b\dot{\omega_b}+\omega_b \times I_b \omega_b$$
The subscript $b$ indicates the local body frame. My question is: can Euler's equation be expressed in the inertial frame? I found that in some game physics engines the equation is expressed in the inertial frame rather than the body frame.
$$\tau=I\dot{\omega}+\omega \times I \omega$$
In that situation the inertia tensor $I$ is not constant but varies with the body's orientation, so it is updated with the orientation of the rigid body. The moment $\tau$ and angular velocity $\omega$ are expressed in the world frame (inertial frame). Is the formula above valid?
Answer:
I found that in some game physics engines the equation is expressed in the inertial frame rather than the body frame.
$$\tau=I\dot{\omega}+\omega \times I \omega$$
In that situation the inertia tensor $I$ is not constant but varies with the body's orientation, so it is updated with the orientation of the rigid body. The moment $\tau$ and angular velocity $\omega$ are expressed in the world frame (inertial frame). Is the formula above valid?
tl;dr: Yes, this is valid. Getting there is nontrivial.
The $\boldsymbol \omega_B \times (\mathrm I_B \, \boldsymbol \omega_B)$ term in $\boldsymbol \tau_B = \mathrm I_B \, \dot{\boldsymbol \omega}_B + \boldsymbol \omega_B \times (\mathrm I_B \, \boldsymbol \omega_B)$ results from working in a rotating frame. It is a fictitious torque, very much akin to the fictitious forces that arise with Newton's second law when computed in a rotating frame. Just as those fictitious forces vanish in an inertial frame, so does the fictitious torque. The inertial frame rotational equation of motion is
$$\boldsymbol\tau_I = \frac d{dt} (\mathrm I_I\, \boldsymbol \omega_I) =
\mathrm I_I \, \dot {\boldsymbol \omega}_I
+ \dot {\mathrm I}_I \,\boldsymbol \omega_I$$
This raises the question, what is the time derivative of the inertia tensor from the perspective of an inertial frame? To determine this, the relationship between the inertia tensor as expressed in the body frame and as expressed inertial frame is needed. This is
$$\mathrm I_I = \mathrm T_{R\to I}\,\mathrm I_B\,\mathrm T_{R\to I}^{\,T}$$
where $\mathrm I_I$ is the inertia tensor expressed in the inertial frame, $\mathrm T_{R\to I}$ is the transformation matrix from the body frame to the inertial frame, $\mathrm I_B$ the inertia tensor expressed in the body frame, and $\mathrm T_{R\to I}^{\,T}$ is the transpose of (and also the inverse of) $\mathrm T_{R\to I}$. Taking the derivative with respect to time yields
$$\dot {\mathrm I}_I =
\dot {\mathrm T}_{R\to I} \, \mathrm I_B\,\mathrm T_{R\to I}^{\,T}
+ \mathrm T_{R\to I} \, \mathrm I_B \, \dot {\mathrm T}_{R\to I}^{\,T}$$
Without derivation, the time derivative of the transformation matrix is
$$\dot {\mathrm T}_{R\to I} = \mathrm T_{R\to I} \operatorname{Sk}(\boldsymbol \omega_B) = \operatorname{Sk}(\boldsymbol \omega_I)\mathrm T_{R\to I}$$
where $\operatorname{Sk}(\boldsymbol a)$ is the skew-symmetric cross-product matrix generated by the vector $\boldsymbol a$:
$$\operatorname{Sk}(\boldsymbol a) \equiv \begin{pmatrix}
\phantom{-}0 & -a_z & \phantom{-}a_y \\
\phantom{-}a_z & \phantom{-}0 & -a_x \\
-a_y & \phantom{-}a_x & \phantom{-}0
\end{pmatrix}$$
Note that $\operatorname{Sk}(\boldsymbol a) \, \boldsymbol b = \boldsymbol a \times \boldsymbol b$. With the above, the time derivative of the inertia tensor as expressed in the inertial frame is
$$\dot {\mathrm I}_I =
\operatorname{Sk}(\boldsymbol \omega_I) \mathrm T_{R\to I} \, \mathrm I_B\,\mathrm T_{R\to I}^{\,T}
+ \mathrm T_{R\to I} \, \mathrm I_B \, \mathrm T_{R\to I}^{\,T} \operatorname{Sk}^T(\boldsymbol \omega_I)
= \operatorname{Sk}(\boldsymbol \omega_I)\,\mathrm I_I - \mathrm I_I \, \operatorname{Sk}(\boldsymbol \omega_I)$$
This can be nonzero due to the non-commutative nature of matrix multiplication. We're not interested so much in the form of $\dot {\mathrm I}_I$ in general so much as the product $\dot {\mathrm I}_I\,\boldsymbol \omega_I$. This is
$$\dot {\mathrm I}_I\,\boldsymbol \omega_I =
\operatorname{Sk}(\boldsymbol \omega_I)\,\mathrm I_I \,\boldsymbol \omega_I
- \mathrm I_I \, \operatorname{Sk}(\boldsymbol \omega_I) \,\boldsymbol \omega_I
= \boldsymbol \omega_I \times (\mathrm I_I \,\boldsymbol \omega_I)
- \mathrm I_I \, (\boldsymbol \omega_I \times \boldsymbol \omega_I)
= \boldsymbol \omega_I \times (\mathrm I_I \,\boldsymbol \omega_I)$$
and thus
$$\boldsymbol\tau_I =
\mathrm I_I \, \dot {\boldsymbol \omega}_I
+ \boldsymbol \omega_I \times (\mathrm I_I \,\boldsymbol \omega_I)$$ | {
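A numeric sanity check of that final result (numpy assumed; the rotation angle and the inertia values are arbitrary choices of mine): the world-frame fictitious torque $\boldsymbol\omega_I \times (\mathrm I_I \,\boldsymbol\omega_I)$ equals the body-frame one rotated into the inertial frame, since $\mathrm I_I \boldsymbol\omega_I = \mathrm T(\mathrm I_B \boldsymbol\omega_B)$ and rotations commute with the cross product.

```python
import numpy as np

# Rotation from body to inertial frame: an arbitrary rotation about z.
theta = 0.7
T = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

I_B = np.diag([1.0, 2.0, 3.0])      # principal-axis inertia tensor (arbitrary)
w_B = np.array([0.3, -0.2, 0.5])    # body-frame angular velocity (arbitrary)

I_I = T @ I_B @ T.T                 # inertia tensor expressed in the inertial frame
w_I = T @ w_B                       # angular velocity in the inertial frame

# World-frame fictitious torque vs. the body-frame one rotated into the world:
lhs = np.cross(w_I, I_I @ w_I)
rhs = T @ np.cross(w_B, I_B @ w_B)
# lhs and rhs agree to rounding error, as the derivation predicts
```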
"domain": "physics.stackexchange",
"id": 49887,
"tags": "newtonian-mechanics, rotational-dynamics, rigid-body-dynamics"
} |
Training Encoder-Decoder using Decoder Outputs | Question: I am trying to build an encoder-decoder model for a text style transfer problem. The problem is I don't have parallel data between the two styles so I need to train the model in an unsupervised setting.
Some papers I have seen use an auto-encoder to train the encoder and decoder components separately. By setting the problem as an auto-encoder, they can train the decoder by passing the target sequence (equal to the input sequence) into the decoder. (Here are some examples, https://arxiv.org/pdf/1711.06861.pdf, https://arxiv.org/pdf/1804.04003.pdf)
Instead of an auto-encoder, I would like to know if it's possible to train a decoder by feeding its predictions at time $t-1$ into the input at time step $t$. I would pass the generated output into a classifier to check the style and to obtain a training signal. Is this sensible, and what are the pros/cons of doing so? Thanks.
Answer:
I would like to know if it's possible to train a decoder by feeding
its predictions at time, t-1, into the input at time-step t.
Yes, it is possible to do it.
But I don't see why you would do it.
You will have accumulated error propagated and amplified in every new prediction, causing your predictions to diverge from the ground truth sooner or later.
"domain": "datascience.stackexchange",
"id": 3399,
"tags": "training, autoencoder, sequence-to-sequence"
} |
The relation of Gödel's Incompleteness Theorems to the Church-Turing Thesis | Question: This may be a naive question, but here goes. (Edit -- it is not getting upvotes, but nobody has offered a response either; perhaps the question is more difficult, obscure, or unclear than I thought?)
Gödel's First Incompleteness Theorem can be proven as a corollary of the undecidability of the halting problem (e.g. Sipser Ch. 6; blog post by Scott Aaronson).
From what I understand (confirmed by the comments), this proof does not depend on the Church-Turing thesis. We derive a contradiction by showing that, in a complete and consistent formal system, a Turing Machine could solve the halting problem. (If on the other hand we had just shown that some effective procedure could decide the halting problem, we would need to also assume the Church-Turing thesis to get a contradiction.)
So, we might say that this result provides a bit of intuitive support for the Church-Turing thesis, because it shows that a limitation of Turing Machines implies a universal limitation. (Aaronson's blog post certainly supports this view.)
My question is whether we can gain something more concrete by going in reverse: What formal implications do Gödel's theorems have for the Church-Turing thesis? For instance, it seems intuitively possible that the First Incompleteness theorem implies that no effective procedure can determine if an arbitrary Turing Machine halts; the reasoning might go that the existence of such a procedure implies ability to construct a complete $\omega$-consistent theory. Is this correct? Are there any results along these lines?
(I'm asking out of curiosity -- I don't study logic myself -- so I apologize if this is well-known or not research-level. In that case, consider this a reference request! Thanks for any comments or responses!)
Question that sounds related, but isn't: Church's Theorem and Gödel's Incompleteness Theorems
EDIT: I'll try to make the question more clear! First -- my naive intuition is that Gödel's Incompleteness should imply at least some limitations on what is or is not computable. These limitations would be unconditional, i.e., they should apply to all models of computation rather than just Turing Machines.
So I am wondering if this is the case (there must be some implication, right?). Assuming it is, I'm most curious about how it impacts the Church-Turing Thesis -- the notion that anything effectively calculable can be computed by a Turing Machine. For example, it seems possible that the existence of an effective procedure for deciding whether a Turing Machine halts would contradict the First Incompleteness Theorem. This result would demonstrate that no possible method of computation can be "much" more powerful than Turing Machines; but is this result true? I have a couple similar questions in the comments. I would be very interested to hear an answer to one of these questions, a pointer to an answer in the literature, an explanation of why my entire reasoning is off-base, or any other comments!
Answer: Here is a philosophical answer that may entertain you.
Gödel's incompleteness theorems are about the formal system of Peano arithmetic. As such they say nothing about models of computation, at least not without some amount of interpretation.
Peano arithmetic easily shows existence of non-computable functions. For example, being a classical theory expressive enough to talk about Turing machines, it shows the particular instance of excluded middle which says that every Turing machine halts or runs forever. Nevertheless, from the work of Gödel an important notion of computability arose, namely that of a (primitive) recursive function. So it is not the theorems themselves that connect to computability, but rather the method of proof which establishes them.
The gist of the incompleteness theorems can be expressed in an abstract form using provability logic, which is a kind of modal logic. This gives the incompleteness theorems a wide range of applicability well beyond Peano arithmetic and computability. As soon as certain fixed-point principles are satisfied, incompleteness kicks in. These fixed-point principles are satisfied by traditional computability theory, which therefore falls victim to incompleteness, by which I mean existence of inseparable c.e. sets. Because the provable and refutable sentences of Peano arithmetic form inseparable c.e. sets, the traditional Gödel's incompleteness theorems can be seen as a corollary to incompleteness phenomena in computability. (I am being philosophically vague and your head will hurt if you try to understand me as a mathematician.)
I suppose we can take two stands on how all this relates to the informal notion of effectivity ("stuff that can actually be computed"):
For all we know, we are just a rather large finite automaton, capable of contemplating fictional super-heroes called "Turing machines" that are able to calculate with unbounded numbers (gasp!). If this is the case, Gödel was just a very good story-teller. How his stories translate to effectivity is then a matter of some (necessarily inaccurate) application of imagination to reality.
Because incompleteness phenomena arise naturally in many contexts, and certainly in all reasonable notions of computability, we conclude that the same has to be the case for effectivity. For example, suppose we could send Turing machines into black holes to compute à la Joel David Hamkins's infinite-time Turing machines. This gives us immense computational power in which the halting oracle is a kindergarten toy. But still, the model satisfies the basic conditions that allow us to show the existence of inseparable sets. And therefore once again, computation is not all-powerful and incompleteness is a fact of life.
"domain": "cstheory.stackexchange",
"id": 1819,
"tags": "lo.logic, computability, church-turing-thesis"
} |
Why are points on the axis of a circuit with axial symmetry equipotential? | Question: Given the following electrical circuit:
How to explain that L and K are equipotential?
More generic question: why given any electrical circuit with axial symmetry, points that belong to the axis are equipotential:
P.S. It's derived from my more complex homework, but I am trying to understand how things work, not just trying to get it solved.
Update. There is a resistance between K and L instead of ideal wire.
Answer: Assuming the wire is ideal, and since there is no resistance between the points L and K, the potential drop between L and K is zero, i.e., the potential does not change. So the potential is the same anywhere between those two points, and therefore they are equipotential.
UPDATE:
Mathematical Proof: Assume potential at A, L, K and B are A, x, y and B. Using Kirchhoff's law we get:
$$\frac{y-B}{R}+\frac{y-x}{4R}+\frac{y-0}{R}=0$$
$$\frac{x-0}{2R}+\frac{x-y}{4R}+\frac{x-B}{2R}=0$$
Solving the above pair of equations simultaneously, we get $x=\frac{B}{2}$ and $y=\frac{B}{2}$. This means the potentials at L and K are the same, i.e., they are equipotential, and the existence of the 4R resistor does not matter.
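The algebra can be checked numerically (numpy assumed). Multiplying the first node equation through by $4R$ gives $-x + 9y = 4B$, and the second gives $5x - y = 2B$; $R$ cancels, and solving confirms both potentials equal $B/2$.

```python
import numpy as np

# Node equations multiplied through by 4R (R cancels):
#   -x + 9y = 4B   and   5x - y = 2B
B = 1.0
A = np.array([[-1.0,  9.0],
              [ 5.0, -1.0]])
rhs = np.array([4.0 * B, 2.0 * B])
x, y = np.linalg.solve(A, rhs)   # both potentials come out to B/2
```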
However, it is very difficult to use Kirchhoff's laws every time, especially when you have something as convoluted as a circuit like this:
For these types of circuits, we must look for symmetry. However, note that if there is no symmetry, then Kirchhoff's laws are the only way to solve it. Luckily there is symmetry here, and I can guide you through the thought process. Let us take the first diagram as an example:
Let us say current $i$ flows in through A and splits into $i_1$ in wire AK and $i_2$ in wire AL. Now, since the wires connected to B have the same resistances as the wires connected to A, the same amount of current must flow in wire LB as in AL and exit through B, and the same amount of current must flow in wire KB as in AK and exit through B. So now we can notice that the $i_1$ current in AK does not enter the wire LK, and the $i_2$ current in AL does not enter the wire LK. This means there is no flow of charge, or current, through the wire LK.
"domain": "physics.stackexchange",
"id": 47246,
"tags": "electric-circuits, potential, symmetry, electrical-resistance, voltage"
} |
Will moveit accept a list of goals? | Question:
I am currently using a custom node to accept a list of waypoints and send them one at a time to move_base when the current waypoint is successfully reached. So I can send it a path that the robot will travel.
Does moveit already have this functionality?
I saw the MotionGroup Action message that has the MotionPlanRequest member which in turn has the Constraints[] member. So I know it can accept a list of constraints but nothing I can find specifies the behaviour of that list.
EDIT
To clarify, I want to be able to send an ordered list of waypoints in some frame to MoveIt. It will compute the necessary trajectory for the first waypoint. Once it reaches that waypoint, MoveIt will compute the trajectory for the next one, and so on.
Example (I am assuming a local frame and units in meters):
(frame: "base_link", [(1,1), (3, -3), (0, 0)])
The robot will travel 1 meter forward and 1 meter right, then continue on to 3 meters forward and 3 meters left (from the original location). I am not concerned about frames right now, just the iteration through a list of waypoints.
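The feed-one-at-a-time behaviour described here is a small loop regardless of the planner underneath. The sketch below is generic Python: `plan_and_execute` stands in for whatever planner call you use (e.g. sending a goal to an action server and waiting for the result); it is not a real MoveIt API.

```python
def follow_waypoints(waypoints, plan_and_execute):
    """Send waypoints one at a time; stop early if any goal fails.

    plan_and_execute(goal) -> bool is a placeholder for the real planner
    call; it is not an actual MoveIt function.
    """
    for goal in waypoints:
        if not plan_and_execute(goal):
            return False  # abort the remaining waypoints on failure
    return True
```

With the example from the question, `follow_waypoints([(1, 1), (3, -3), (0, 0)], send_goal)` visits the goals in order and only advances when the previous one succeeded.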
Originally posted by virgil93 on ROS Answers with karma: 7 on 2013-02-19
Post score: 0
Answer:
Do you mean the ability to follow a trajectory? That would be performed by your own robot's controllers that listen to a "control_msgs::FollowJointTrajectoryAction" actionlib message. This action is sent from MoveIt.
Or are you looking at doing time parameterization of a list of waypoints, to generate velocities, accelerations, and wait times for each waypoint? This can be accomplished by MoveIt.
I don't understand how that relates to the second half of your question about constraints though, I'm afraid.
Originally posted by Dave Coleman with karma: 1396 on 2013-02-19
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Dave Coleman on 2013-02-19:
Got it. I haven't seen this functionality but your could easily just feed them one at a time to MoveIt as it completes each waypoint. You are desiring to have it plan all of it at once I presume?
Comment by virgil93 on 2013-02-20:
No, it doesn't have to plan all the waypoints at once. Just the current one in the list. Thanks for your answer though.
Comment by clark_txh on 2017-03-08:
Hi all,
I have learned how the time, velocities, and accelerations are generated; they look like averaged values for a smooth curve.
But I have a problem: I don't know how to use the time, velocities, and accelerations to control a motor, and I can't use them all.
"domain": "robotics.stackexchange",
"id": 12959,
"tags": "moveit"
} |
Can't get correct position of ARMarkers | Question:
Hi everyone!
I'm using ar_pose to identify some markers. It identifies the markers correctly (pattern 1, 2, etc.), but their positions are not very good. The distance is not being correctly assigned. I guess this is not ar_pose's problem.
See the image below:
Could it be a depth_registration issue?
Also, I tried calibrating my camera (kinect) following these tutorials:
rgb calibration
IR calibration
If I couldn't make myself clear, please tell me, so that I can explain it better.
I would really appreciate if someone could help me!
Thanks!
Originally posted by Mendelson on ROS Answers with karma: 73 on 2013-04-11
Post score: 2
Answer:
Check the file that specifies the patterns you will use. It contains the name, the marker to detect, its width and center position. If you have the wrong size there, the algorithm will think the markers are closer/farther than they really are.
This file will be loaded by the ar_pose package, for example in your launch file:
<param name="marker_pattern_list" type="string" value="$(find yourpackage)/yourpatterns"/>
yourpatterns file will look like this:
#the number of patterns to be recognized
3
#pattern 1
#Sample1
#data/patt.sample1
#250.0
#0.0 0.0
#pattern 2
Sample2
data/patt.sample2
250.0
0.0 0.0
#pattern 3
Hiro
data/patt.hiro
250.0
0.0 0.0
#pattern 4
Kanji
data/patt.kanji
250.0
0.0 0.0
You have to measure the size of your printed marker and write it in mm in the third field of each pattern block (where you read 250.0 here).
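Why a wrong width skews the estimated distance can be seen from a simple pinhole-camera model (the focal length and pixel width below are made-up illustration values, not ar_pose internals): the distance estimate scales linearly with the marker width you declare.

```python
def estimated_distance(focal_px, declared_width_mm, observed_width_px):
    # Simple pinhole model: z = f * W / w
    return focal_px * declared_width_mm / observed_width_px

f, w_px = 525.0, 100.0           # hypothetical focal length and observed marker size
true_w, wrong_w = 200.0, 250.0   # real marker is 200 mm, but the file says 250 mm

z_true = estimated_distance(f, true_w, w_px)    # 1050.0 mm
z_wrong = estimated_distance(f, wrong_w, w_px)  # 1312.5 mm -> marker "looks" farther
```

So a 25 % overstated width in the patterns file inflates every distance estimate by the same 25 %.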
Originally posted by Procópio with karma: 4402 on 2013-06-03
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jep31 on 2013-06-04:
What is the influence of the marker_width ? is it finally same thing ?
And if for instance I use directly a pattern file like patt.hiro ?
I'm trying to increase pattern size but I haven't very good results for the detection.
EDIT: One is for ar_single(marker_width), other is for ar_multi(file)
Comment by Procópio on 2013-06-04:
Hi. I updated the answer, hope it can give you more info. Please post the patterns file you are using for the experiment in your question (it is not patt.hiro).
Comment by jep31 on 2013-06-04:
My question is different so I have created a other topic:
http://answers.ros.org/question/64290/marker-long-distance-tracking-with-kinect/
I 'm posting a part of my launch file there
Comment by Procópio on 2013-06-05:
did you managed to get the correct distance of your markers?
Comment by jep31 on 2013-06-05:
Yes I did, thanks. I printed a 200 mm marker and position/orientation are good. It's only a detection issue with some luminosity. My paper is bright with room light.
"domain": "robotics.stackexchange",
"id": 13774,
"tags": "ros, ar-pose, position"
} |
How to know if a vehicle is moving without any external source of information? | Question: The situation is the following:
I'm inside a vehicle (plane or a car, it doesn't matter) and I need to know if the vehicle is moving at a constant speed BUT I cannot perceive any external change like visual changes, vibration, etc.
How can I know if the vehicle is moving? Can I really know?
Additional question Can I know my speed?
Answer: You cannot tell moving with constant speed apart from standing still. This is the principle of Galilean relativity. | {
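In symbols: under a Galilean boost by a constant velocity $v$, positions shift but accelerations, and hence Newton's second law, are untouched, so no mechanical experiment inside the vehicle can detect the motion:

```latex
x' = x - vt, \qquad t' = t
\;\Longrightarrow\;
\frac{\mathrm{d}^2 x'}{\mathrm{d}t'^2} = \frac{\mathrm{d}^2 x}{\mathrm{d}t^2}
\;\Longrightarrow\;
F' = ma' = ma = F
```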
"domain": "physics.stackexchange",
"id": 17540,
"tags": "reference-frames, speed"
} |
What is this wood-infesting insect? | Question: I have these insects in a very old wooden floor. I would like to know whether they are possibly termites, and if not, what they are.
They are about 4 mm in length (and I've had difficulty taking a picture with a standard camera, I may be able to get a better one in a couple of days). This one is dead and may have been crushed a little. Another sign of their presence is little heaps of wood dust appearing with a periodicity of 24-48 hrs, which are about 1 cm in diameter.
Following the indications of @usr137, I think it's rather an ant, indeed it has a slender mid-portion of the body, corresponding to detail (4) in this termite/ant comparative:
But they are much smaller than the ranges I find for carpenter ants (6—19 mm).
Answer: To close this, we're now reasonably sure that they were carpenter ants or a close relative. One more clue is that they infested the wood close to a window receiving sunshine and were much more active when it was warm there. Thanks to @user137.
"domain": "biology.stackexchange",
"id": 5874,
"tags": "species-identification, entomology"
} |
What are the engineering principles for a train to get electricity from the railway | Question: How many general methods are there for transferring electricity from the railway to a train? I could see that some trains are connected by a pantograph and some have a third rail.
Are there any other methods? What is the general engineering principle behind each? What are the basic differences (pros and cons) of each method? I looked for answers to these questions but could not find a good review. If anyone has a reference or can answer them, kindly do.
Also, is it possible to build a small model of such a system and scale it up? I am not talking about a commercial model train. I am aiming at self-designing the same engineering principles and demonstrating them on a scalable small model. If so, I would be happy to have a nice tutorial reference.
I am focusing on Electric multiple units (EMU).
Answer: Pantograph and third rail are pretty much it.
Engineering principles: both have a conducting surface on the train (moving) in contact with the stationary rail/wire, in both cases you need a material that's resilient and conductive. The contact strip is a wear material.
Differences:
overhead catenary wires are flexible and will move around when a train drives underneath them. At high speeds, the pantograph can set up waves in the catenary wire, if these waves aren't damped sufficiently, the pantograph will start bouncing.
third rail is limited to low voltages to reduce the risk of arcing between the rails. The third rail systems I know of (UK), use 750 V DC which limits the amount of power you can draw without excessive losses, so no heavy or fast trains. Overhead wiring can use 25 kV, which has much lower resistive losses. Several power systems are in use for overhead catenary systems, from 1500V DC to 25 kV AC 50 Hz. This patchwork of systems makes it difficult to run international trains in Europe, although with the introduction of high-power electronics it's become easier to build locomotives that can run on 2-4 different systems.
This Wiki page does a decent job of discussing the advantages and disadvantages of third rail systems.
Overhead catenary is more susceptible to damage (from e.g. falling trees, vehicles that ignore the height limit), but has less risk of electrocution on e.g. level crossings. Electrocution risk can be mitigated by interrupting the third rail on crossings, but that has problems of its own (interruptions in the power supply).
Scaling up/down:
you can demonstrate the basic principles. In fact, third rail and overhead catenary are both available in standard scale model trains. The pros and cons are more difficult to demonstrate in a model:
scale model catenary is much less flexible than full size: the wire thickness of the scale model is not in proportion, so the wire is much stiffer.
scale models use 12V, which has no electrocution risk and arcing is much less of an issue than in the full-size system.
Due to the much lower speeds and shorter running times, wear in a scale model is much less of an issue than in the full-size system.
due to the much lower weight, power interruptions are more of an issue in the scale model than in the full-size system. | {
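A back-of-the-envelope sketch (all numbers here are made up for illustration, not real railway data) of why the third rail's low voltage limits power: for the same drawn power P, the line current is P/V, so the resistive loss (P/V)²·R falls with the square of the supply voltage.

```python
def line_loss_watts(power_w, volts, resistance_ohms):
    # Current drawn is I = P / V; resistive loss in the feed is I^2 * R.
    current = power_w / volts
    return current ** 2 * resistance_ohms

P, R = 4e6, 0.05  # hypothetical 4 MW train, 0.05 ohm feed resistance

loss_750 = line_loss_watts(P, 750.0, R)    # ~1.4 MW lost: hopeless
loss_25k = line_loss_watts(P, 25000.0, R)  # ~1.3 kW lost: negligible
ratio = loss_750 / loss_25k                # (25000/750)^2, roughly 1100x
```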
"domain": "engineering.stackexchange",
"id": 3075,
"tags": "rail"
} |
How do I increase the speed of this loop or pixel color finding process? | Question: I've been working on a program that should find the color red on the screen and then send out an alert. The issue is that the red color moves around a lot and the color finder moves too slowly on the screen to find the color accurately and in time before the color disappears to another location on the screen.
Is there a way to speed up this color finding process so that it is practically instant?
Here is the code:
ColorFinder.java
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.KeyEvent;
import java.awt.Color;
import java.util.Scanner;
import java.awt.geom.*;
import java.awt.event.*;
import java.awt.*;
public class ColorFinder
{
public static void main(String[] args) throws AWTException
{
//Create memories
Scanner scanner = new Scanner(System.in);
Robot robot = new Robot();
String ans;
int red;
int green;
int blue;
int x=535;
int y=168;
//Prints instructions
System.out.println("Type 'p' to start.");
System.out.println("");
//Waits for start input
ans = scanner.nextLine();
if(ans.equals("p"))
{
//Starts loop
for(int i=0; i<100; i++)
{
i=0;
System.out.println("");
//Color searcher
x=x+1;
if(x>1378 && y>468) //reset on highest values x & y
{
x=535; y=168;
}
if(x>1378) //step down by one pixel with y
{
y=y+1;
x=535;
}
//Get RGB value on screen coords
System.out.println("x:"+x+" y:"+y);//this part is not needed, it's only for visualising if it's working or at what speed it moves
Color color = robot.getPixelColor(x, y);
red=color.getRed();
green=color.getGreen();
blue=color.getBlue();
/*
System.out.println("Red = " + red);
System.out.println("Green = " + green);
System.out.println("Blue = " + blue);
*/
//Remove the note marks below to see how fast it moves
/*
robot.mouseMove(x, y);
if(x==1000)
{break;}
*/
//if it finds the specific red color it stops the loop and prints out "Found red color!"
if(red==255 && green==20 && blue==25)
{
System.out.println("Found red color!");
break;
}
}
}
}
}
I will gladly answer any questions.
Any kind of solution that does the same thing but way faster is good, many thanks in advance!
Answer: This question brings back memories from 2010 when I asked how to draw constantly changing graphics
https://stackoverflow.com/questions/3742731/java-how-to-draw-constantly-changing-graphics
The idea is that you prepare a location rectangle of the area you want to capture and ask the robot to fill it:
Rectangle rect = new Rectangle(p.x - 4, p.y - 4, 8, 8);
final BufferedImage capture = robot.createScreenCapture(rect);
then you loop over the area checking for the match
for (int x = 0; x < 8; x++) {
for (int y = 0; y < 8; y++) {
final Color pixelColor = new Color(capture.getRGB(x, y));
if (!pixelColor.equals(view.getColorAt(x, y))) {
// do something
}
}
}
Typically my computer vision AIs do not use a 64-pixel area but 64*64 = 4096 pixels. You can also capture the whole screen: a 4K screenshot has 3840x2160 = 8,294,400 pixels, and a minimum of 8 million comparisons will actually take some time unless you parallelize the work.
I write c#, so this is written from the head as I remember Java +10 years ago.
Color red = new Color(255, 20, 25);
Rectangle rect = new Rectangle(535, 168, 843, 300);
final BufferedImage capture = robot.createScreenCapture(rect);
Pair<Integer, Integer> location = FindColor(capture, red);
if (location != null) {
    System.out.printf("Found red color at (x: %s, y: %s)%n",
        location.getValue0(),
        location.getValue1());
}
where FindColor is something in the lines of:
public Pair<Integer, Integer> FindColor(BufferedImage capture, Color color){
if (capture == null || color == null)
return null;
for (int x = 0; x < capture.getWidth(); x++)
for (int y = 0; y < capture.getHeight(); y++)
if (new Color(capture.getRGB(x, y)).equals(color))
return Pair.with(x, y);
return null;
}
instead of tuple-like Pair<>, you can use point2d or a similar struct just likely you do not need the null checks. | {
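The capture-then-scan idea is language-agnostic: grab the whole region once, then search it in memory instead of issuing one slow per-pixel query to the OS. A minimal Python sketch on a synthetic "screenshot" (a nested list of RGB tuples standing in for the BufferedImage):

```python
def find_color(pixels, target):
    """Return (x, y) of the first pixel matching target, or None."""
    for y, row in enumerate(pixels):
        for x, rgb in enumerate(row):
            if rgb == target:
                return (x, y)
    return None

# Synthetic 4x3 capture with one red pixel at (x=2, y=1):
RED = (255, 20, 25)
capture = [[(0, 0, 0)] * 4 for _ in range(3)]
capture[1][2] = RED

hit = find_color(capture, RED)  # (2, 1)
```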
"domain": "codereview.stackexchange",
"id": 42327,
"tags": "java, performance, beginner"
} |
2,4-dinitrophenyl hydrazine test | Question: I found this reaction over here:
which illustrates the carbonyl test but looks fuzzy.
Assuming that the R in the product is a typo for H, I couldn't conserve the number of Nitrogen atoms.
What is the correct reaction?
Is this correct?
Is there a possibility of a methyl substitution on the ring in a variant of this test?
Answer: Somebody messed up the scheme completely!
As you have already imagined, there is no magical $\ce{NO2}$ to $\ce{CH3}$ conversion in this classical reaction ;-)
In the times before modern spectroscopy, these reactions were valuable analytical tools: the melting points of the dinitrophenylhydrazones and/or other derivatives were used to identify the carbonyl compounds. | {
"domain": "chemistry.stackexchange",
"id": 3415,
"tags": "organic-chemistry, reaction-mechanism, analytical-chemistry, erratum"
} |
Getting predicted values for a student report form | Question: I have a situation where I need to predict the input values for a form used to create a new model. The model is a "class report" that a teacher submits for a student after their class finishes and the values of the report I need to predict include the time the class started and finished. These are to be predicted based on previously submitted reports and other model data. When a teacher opens the form to submit a class report, they should see the "class start time" and other values filled in for them automatically with the predictions made.
Currently, this is being handled in the model as one single method and I am looking for a way to refactor it into its own class:
class ClassReport
...
# Get predicted values for report form for a given student, teacher and weekday.
# e.g. if a student and teacher had a class on the same day as the current day
# last week, use the same start and end times as the defaults.
#
# Values predicted are:
# - class start time
# - class finish time
# - date of next class
# - start time of next class
def self.predict_fields(student, teacher, day)
last_report = ClassReport.most_recent(student, teacher)
last_report_for_today = ClassReport.most_recent_for_wday(student, teacher, day.wday)
if Matching.has_class_today?(student, teacher)
# Get schedule from calendar
class_start, class_finish = Matching.get_class_time(student, teacher)
elsif last_report.present?
if last_report_for_today.present?
# Use start time from last report for this weekday
class_start = last_report_for_today.class_start
else
# Use start time from last report
class_start = last_report.class_start
end
if last_report_for_today.present?
# Use duration from last report for current weekday
class_finish = class_start + last_report_for_today.class_minutes.minutes
else
# Default duration of 1 hour
class_finish = class_start + 1.hour
end
else
# Set time to 1 hour before now
# Default duration of 1 hour
class_start = Time.now-1.hour
class_finish = Time.now
end
# Get next class day and try and get last report for that weekday
next_class_wday = Matching.next_class_wday(student, teacher)
if next_class_wday.present?
next_report = ClassReport.most_recent_for_wday(student, teacher, next_class_wday)
end
if next_report.present?
# Use next class day and time from previous report for this weekday
next_report_day = next_report.class_day
next_report_time = next_report.class_start
else
if next_class_wday.present?
# Use next matching day as date
date_diff = (day.wday - next_class_wday).abs
next_report_day = day + date_diff.days
else
# One week from now
next_report_day = day + 1.week
end
if last_report.present?
# Set time to same as previous report
next_report_time = last_report.class_start
else
# Set time to now-1h
next_report_time = Time.now-1.hour
end
end
[class_start, class_finish, next_report_day, next_report_time]
end
You can see that this uses multiple models and is doing far too much for a single method. Ideally I would want this as a class that takes student, teacher and day in the constructor, does some initialisation, and has methods for ReportPrediction#class_start, ReportPrediction#class_finish, etc.
I'm not sure if this is the best approach though, or where such a class should be placed. I had a read through these refactoring patterns but couldn't see any that I thought fitted this situation. The closest seemed to be a "policy object" but that appears to be more commonly used for returning boolean values based on model data. Service objects also seem slightly different as the way the author describes them implies they are for write operations.
How would you refactor this in a Rails-like manner? Or in more general terms, where would you have logic that looks at the attributes of multiple models and gives you back some values based on those?
Answer: For any complex piece of logic that involves multiple models, I'd rely on a service. Here's how I'd refactor your method into services:
#result= PredictFieldService.build.call(student, teacher, day)
class PredictFieldService
def self.build
new(ClassStartFinishService.build, .....)
end
def initialize(class_start_finish_service, .......)
@class_start_finish_service = class_start_finish_service
........
end
def call(student, teacher, day)
last_report = get_last_report
last_report_for_today = get_last_report_for_today
class_start, class_finish = @class_start_finish_service.call(student, teacher, last_report, last_report_for_today)
next_report_day = anyservice.call
next_report_time = whateverservice.call
[class_start, class_finish, next_report_day, next_report_time]
end
private
def get_last_report
ClassReport.most_recent(student, teacher)
end
def get_last_report_for_today
ClassReport.most_recent_for_wday(student, teacher, day.wday)
end
end
class PredictFieldService::ClassStartFinishService
def self.build
new
end
def call(student, teacher, last_report, last_report_for_today)
if Matching.has_class_today?(student, teacher)
class_start, class_finish = get_matching_class
else
class_start, class_finish = get_start_finish(last_report, last_report_for_today)
end
class_start, class_finish
end
private
def get_matching_class
Matching.get_class_time(student, teacher)
end
def get_start_finish(last_report, last_report_for_today)
if last_report.present?
if last_report_for_today.present?
class_start = last_report_for_today.class_start
class_finish = class_start + last_report_for_today.class_minutes.minutes
else
class_start = last_report.class_start
class_finish = class_start + 1.hour
end
else
class_start = Time.now-1.hour
class_finish = Time.now
end
[class_start, class_finish]
end
end
This way you can test each piece of logic encapsulated in a service in isolation, and expand the functionality through dependency injection. | {
"domain": "codereview.stackexchange",
"id": 20575,
"tags": "ruby, ruby-on-rails"
} |
Do telescopes exist that reflect the incoming light more than three times along their length? | Question: Refractors only use the length of the telescope once, reflectors twice, catadioptric telescopes like those of the Schmidt-Cassegrain design three times. Have telescopes been built that reflect the incoming light at least once more from one end of the telescope to another?
Say, by adding a tertiary mirror which reflects the light forward again, towards a small quaternary mirror in front of the secondary, which ultimately directs the light through a small hole in the tertiary (and primary) mirror towards the eyepiece?
Note: The above description makes this question different from the similar-sounding Do triple or more times reflecting telescopes exist?
Answer: If you're willing to accept more than two separate mirrors, the Three Mirror Anastigmat has 4 passes along some/most of the overall tube length. An early working prototype example (which I've actually seen in person many years ago) was built at the University of Cambridge by Dr. Roderick Willstrop. The Institute of Astronomy has a page on the Three Mirror Telescope which includes the following optical diagram:
The 0.5 meter 3 Mirror Telescope (3MT) has a focal length of 0.8 meters, in an overall tube length of 1.2 meters producing a very compact telescope with a large field of view (5 degrees in diameter) with good image quality (<0.33" across the whole field of view). The quality of the site at Cambridge was not ideal for building on the progress of the prototype and a proposal for building a larger version at a better site was declined by the UK science funding agency of the time (SERC, which became PPARC and then STFC) in favor of buying into the Gemini collaboration of two 8 meter telescopes.
Some constructed or under-construction examples of the three mirror anastigmat include:
James Webb Space Telescope (see Figure 2 of this page on the JWST Telescope)
The Simonyi Survey Telescope of the Vera C. Rubin Observatory which will carry out the Legacy Survey of Space and Time (LSST) has M1 and M3 ground into the same piece of glass (see Figure 7 of the LSST overview paper) with a separate secondary M2 mirror and prime focus camera
The planned ESO Extremely Large Telescope will use 5 mirrors in a folded three mirror anastigmat design along with 2 flat mirrors which will have 5 bounces along all or part of the overall "tube" structure before exiting to a Nasmyth focus; see the not very clear diagram below: from the ESO ELT optical diagram page, somewhat better diagram in Figure 4 of Hippler 2018. | {
"domain": "astronomy.stackexchange",
"id": 5569,
"tags": "telescope, optics"
} |
Multiple machines - nodes somewhat communicating | Question:
I am trying to get a navigation stack running on an Lego NXT-based system. However, it is different from the normal nxt_apps assisted_teleop in that the NXT is carrying a camera and a laptop, which is doing some of the navigation and SLAM processing (so I don't have to send camera images over the network), and another computer, which is running rviz and the rest of the vslam and navigation stacks. I am having trouble seeing messages from one computer on another. I have read the ROS troubleshooting page for network configuration, and I have done everything it says. Here is my roswtf output:
Beginning tests of your ROS graph. These may take awhile...
analyzing graph...
... done analyzing graph
running graph rules...
... done running graph rules
Online checks summary:
Found 2 warning(s).
Warnings are things that may be just fine, but are sometimes at fault
WARNING The following node subscriptions are unconnected:
* /base_ctrl:
* /joint_states
WARNING The following nodes are unexpectedly connected:
* /joy_node->/rosout (/rosout)
Found 7 error(s).
ERROR Communication with [/rosout] raised an error:
ERROR Communication with [/camera_tf] raised an error:
ERROR Communication with [/nxt_teleop] raised an error:
ERROR Communication with [/base_ctrl] raised an error:
ERROR Communication with [/camera] raised an error:
ERROR Communication with [/nxt_ros] raised an error:
ERROR The following nodes should be connected but aren't:
* /base_ctrl->/nxt_ros (/joint_command)
* /nxt_teleop->/base_ctrl (/cmd_vel)
* /joy_node->/nxt_teleop (/joy)
Here is my 'rostopic list' output:
/camera_info
/cmd_vel
/diagnostics
/image_raw
/joint_command
/joint_states
/joy
/rosout
/rosout_agg
/tf
These commands were run on the second computer, the one which is not the laptop. If I run these commands on the laptop, they report completely normal results.
If it's of any interest, roscore is running on the computer which is not the laptop. Also, no firewalls are running on either computer.
What is happening here?
Originally posted by Skinkworks on ROS Answers with karma: 11 on 2011-07-07
Post score: 0
Answer:
This is somewhat related to this question.
I guess you don't have a correct DNS configuration in your network. ROS uses domain names instead of IP addresses per default. When a node registers at the master, it uses the hostname of the computer it is running on. If another node, maybe on a different computer, wants to communicate with that node, it uses this information to establish the connection. Now if a DNS is not present and the hostnames are not mapped to IP addresses in /etc/hosts, the connection cannot be established. The fix is to set ROS_IP on both computers to their own IP address.
You can find more information on fixing your network setup here and here.
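A minimal sketch of that setup (the hostnames and addresses below are placeholders for your own machines, not values from the question):

```shell
# Option 1: map every machine's hostname in /etc/hosts on BOTH computers, e.g.
#   192.168.1.10  desktop
#   192.168.1.11  laptop

# Option 2: bypass DNS entirely. On each machine, export its OWN address
# before starting roscore or any nodes:
export ROS_IP=192.168.1.10                       # this machine's address
export ROS_MASTER_URI=http://192.168.1.10:11311  # master's address, same on both
```

Remember that these environment variables must be set in every shell (or in ~/.bashrc) on both computers before any node is launched.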
Originally posted by Lorenz with karma: 22731 on 2011-07-07
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by tfoote on 2011-07-09:
Sometimes rxgraph will determine nodes are not connected if there's a lossy link between rxgraph and the nodes.
Comment by Skinkworks on 2011-07-08:
I now have each computer's /etc/hosts file correctly set up, and I can get data from one node to another, so it works, but rxgraph sometimes thinks that nodes are not connected. Is this rxgraph, or is there actually a problem?
Comment by Lucas2903 on 2021-02-19:
If you have your IP addresses set correctly (as explained in this answer), and the error is still present, this can be caused by a firewall (e.g. ufw). Try disabling it and it should solve the issue. | {
"domain": "robotics.stackexchange",
"id": 6072,
"tags": "ros, multiple, multi-machine, fedora"
} |
Is the superposition principle in quantum mechanics always valid? | Question: Is the superposition principle in quantum mechanics always valid, or might there be a reason for it to break down at some high energy scales? What would be the effect of this, and how would one measure it?
Answer: Our current theory of (orthodox) quantum mechanics maintains that the superposition principle is valid at all energy scales.
It may be possible that the superposition principle breaks down at high energies. I'm not sure of the highest energies at which we've tested the superposition principle.
Some candidates that come to mind:
traditional atomic physics experiments are typically performed around room temperature or colder. Maybe a little hotter but not an order of magnitude hotter.
We have some facilities which generate x-rays that are used for various imaging processes. I'm not sure if single photon experiments have been performed using these beams or not. For example a two-slit experiment. Such an experiment would be a higher energy test of the superposition principle.
Experiments which utilize high energy plasmas may rely in some way on the quantum superposition principle. These get quite hot and could be a high energy test of the superposition principle.
High energy particle experiments rely in some way on the superposition principle in that they make predictions based on particles being in superpositions of states. For example nuetrino oscillation experiments rely on neutrinos being in superpositions of flavors. Sometimes these neutrinos come from the sun so they may be high energy.
I've heard it hypothesized that neutron stars might be Bose-Einstein Condensates. The physics of neutron stars may then be a test of the superposition principle at high energy.
The most straightforward way to directly test for this would be to perform the double slit experiment with higher and higher energy particles. orthodox quantum mechanics would predict you always see an interference pattern form. If you ever saw that a pattern didn't form, (and there's not some mistake with your experiment you can rule out) then that may be evidence for a breakdown of the superposition principle at high energy.
There are currently theories known as spontaneous collapse theories which postulate that, under certain physical circumstances, the superposition principle breaks down and the wavefunction "spontaneously" or stochastically collapses. The theories vary on what the exact physical circumstances are that induce the collapse. It's possible some of these theories may consider an energy dependency of the collapse.
"domain": "physics.stackexchange",
"id": 81527,
"tags": "quantum-mechanics, hilbert-space, superposition"
} |
Orbital period of two planets | Question: There is the question #80 from GRE1777 https://www.ets.org/s/gre/pdf/practice_book_physics.pdf which goes as follows:
I tried to replace M from Kepler's law with the reduced mass of the system, but it didn't work.
Answer: At each instant, the gravitational force from the star and the gravitational force from the planet on the opposite side are in the same direction, so you can simply add them together. The total force is
$$ F = m\left(\frac{GM}{r^2} + \frac{Gm}{(2r)^2}\right)$$
If you put this in the form
$$ F = m\frac{GM'}{r^2}$$
then you'll have your answer. | {
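Spelling out the comparison (for a circular orbit of radius $r$ about the system's centre, Kepler's third law then applies with the effective mass $M'$):

```latex
\frac{GM}{r^2} + \frac{Gm}{4r^2} = \frac{GM'}{r^2}
\;\Longrightarrow\;
M' = M + \frac{m}{4},
\qquad
T = 2\pi\sqrt{\frac{r^3}{G\,(M + m/4)}}
```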
"domain": "physics.stackexchange",
"id": 43296,
"tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, orbital-motion"
} |
Quantum Mechanical Interpretation of Water Waves? | Question: So I have been exploring the idea of wave-particle duality and came across an interesting idea.
Could water waves be interpreted as particles in some context? If so, how would you observe their particle-like properties?
Answer: If you interpret "particle" as a stable pattern in fluid motion, then the analogy between particles and solitons is quite reasonable; for example, the equations describing the motion of fluids admit soliton solutions in some situations (for a visual demonstration, see for example 1 and 2).
Nevertheless, I wouldn't consider it a quantum mechanical interpretation of waves in water; if anything it's the opposite, as quantum mechanics is physically far more complex (in all senses) than fluid motion (mathematically, both are hard).
"domain": "physics.stackexchange",
"id": 7960,
"tags": "quantum-mechanics, wave-particle-duality"
} |
Cannot identify C[Mg+] | Question: I am not able to identify the molecule C[Mg+]. I came across it a number of times in the USPTO-50k dataset, in reactions such as the one shown below:
CON(C)C(=O)c1cn(-c2ccc(Cl)cc2)c(-c2ccc(Cl)cc2Cl)n1.C[Mg+]>>CC(=O)c1cn(-c2ccc(Cl)cc2)c(-c2ccc(Cl)cc2Cl)n1
I searched in PubChem and MolView and couldn't identify the compound. The closest thing I found was magnesium carbide, with SMILES [CH3-].[CH3-].[Mg+2]. Is it the same compound?
Any ideas what C[Mg+] is or where I can look next? Also RDKit accidentally generated a similar molecule once, C[MgH]. Does this one exist too?
Answer: One explanation for C[Mg+] is the methylmagnesium cation, as in the Grignard reagent methylmagnesium chloride (C[Mg+].[Cl-]), or the softer methylmagnesium bromide (C[Mg+].[Br-]). Obviously, organometallic compounds, whose bonds sometimes differ both from «purely covalent» (as e.g. in pyridine, c1ccncc1) and from «clearly ionic» (as in the example of sodium acetate, [O-]C(=O)C.[Na+]), can represent a limitation of this reduced representation.
Maurice' comment shows that SMILES are better known for the representation of chemical compounds, than for chemical reactions (indicated by >>). So I copy-pasted the string into ChemDraw's sample page as illustration:
Edit: For comparison, the reading by Marvin (after an additional click to disable the Thiele rings) | {
"domain": "chemistry.stackexchange",
"id": 17443,
"tags": "ionic-compounds"
} |
A directive allowing to make a whole DOM subtree readonly | Question: What do you think about this directive? For each input, it traverses the DOM towards the root and if it finds an element with the class readonly, it makes the input readonly.
An ng-readonly directive on the input itself get honored: The input becomes readonly whenever ng-readonly evaluates to true or any enclosing element has the class readonly.
.directive("input", function($parse) {
return {
restrict: "E",
link: function($scope, element, attr) {
if (attr.type === "radio" || attr.type === "checkbox") {
return; // for simplicity, let's ignore them
}
var org = attr.ngReadonly;
if (!org && attr.readonly) {
return; // readonly seems to be set manually, so let's not touch it
}
$scope.$watch(function() {
var readonly = $parse(org)($scope);
for (var e = element; e.length && !readonly; e=e.parent()) {
readonly = e.hasClass("readonly");
}
console.log(element.attr("ng-model"), readonly)
attr.$set("readonly", !!readonly);
});
},
};
})
The purpose is to allow to make a whole subtree readonly, no matter what's inside. It works, but I can imagine there can be performance problems and/or interferences with ng-readonly setting the value independently of this directive.
An explanation why I'm not simply binding to a scope variable
While I really appreciate Thomas' answer, I disagree with this part:
Again, I highly recommend that you simply bind to a scope variable instead of checking for the presence of a class further up the tree.
I really find it both complicated and error-prone: I'm having a big form in which many fields already have their readonly logic (e.g., bank name is read only when a known BIC is entered). Now, I'd have to add is_readonly to all existing ng-readonly directives and add the directive to every input field missing it.
Moreover, the form includes some partials, which have no idea about the outer scope. Again, I'd have to change the partials as above and make them get is_readonly from the outer scope.
Answer: Generally speaking, you should avoid looking outside the current directive for anything that would affect your internal state (bar events). The preferred method of communication is scopes. Here is how I would recommend you do this.
<div ng-controller="MyCtrl">
<div>
<input ng-readonly="is_readonly" />
</div>
<div>
<div>
<input ng-readonly="is_readonly" />
</div>
</div>
</div>
Simply toggling $scope.is_readonly from MyCtrl is enough to disable all child inputs that have ng-readonly="is_readonly", and it's bindable too, so you can add/remove readonly status at runtime.
Now, if there is a genuine need to have the behaviour you described, then you have 2 options.
Stick with your current implementation, but this will not be performant, since your DOM traversal will fire any time the scope changes, for any reason, anywhere up the scope tree.
Rewrite to use directive's require property.
Here is how you would do the latter.
.directive("readonly", function() {
return {
restrict: "C",
link: function() {},
controller: function() {},
}
})
.directive("input", function() {
return {
restrict: "E",
// search for optional parent directive named `readonly`
require: "^?readonly",
// 4th parameter is `readonly` controller or null.
link: function($scope, element, attr, has_readonly) {
var org = attr.ngReadonly;
if (org) {
return; // ng-readonly is set explicitly, so let's not touch it
} else if (has_readonly) {
attr.$set("readonly", true)
}
},
};
})
This tells angular that it should pass you a reference to the readonly controller if found, or null otherwise. In this way, you can tell at link time whether the parent node exists. This saves you from performing the DOM traversal yourself, and should be faster too.
Again, I highly recommend that you simply bind to a scope variable instead of checking for the presence of a class further up the tree. | {
"domain": "codereview.stackexchange",
"id": 8685,
"tags": "javascript, performance, angular.js"
} |
Connected and disconnected conductors spheres | Question: I have an exercise in which I have the following situation:
there are two conducting spheres (far apart), both of a given radius (R1, R2 = 2R1). The smaller sphere has a positive charge q and the other sphere is uncharged.
The two spheres are connected with a thin cable and then disconnected.
Now I have to establish the final charges q1 and q2 and the final potentials V1 and V2.
That's what I thought: when they are connected they stay at the same V so I can write:
$$\frac{kQ_1}{R_1} = \frac{kQ_2}{R_2}, \qquad \text{where } Q_1+Q_2=q$$
and using this I can establish Q1 and Q2.
But what exactly happen when I disconnect them? Do they just stay at the same V, with the charges they were having while connected, or do they have a different behavior?
Thank you in advance for the help.
Answer: When you connect them, charge can flow, and it will do so until there is no net electric field it can move along (the surfaces are the same potential). Imagine I just suddenly destroyed the wire; now the charges are stuck on the spheres. They are still under the condition that they want to equilibrate and put a constant voltage across the surface, but they were already in that situation (and removing the wire doesn't change that) so nothing will change when you remove the wire.
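To put numbers on this with the radii from the question ($R_2 = 2R_1$, and the spheres far enough apart that each one's potential is simply $kQ/R$), the equal-potential condition gives:

$$\frac{kQ_1}{R_1}=\frac{kQ_2}{2R_1},\qquad Q_1+Q_2=q \;\Longrightarrow\; Q_1=\frac{q}{3},\quad Q_2=\frac{2q}{3},\quad V_1=V_2=\frac{kq}{3R_1}.$$

These charges and potentials are unchanged by disconnecting the wire, for exactly the reason described above.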
An interesting extension to think about conceptually, if you want: you're essentially imagining these spheres to be very far apart, so that the field from one sphere doesn't affect the other. What happens if they're very close? Each sphere is generating a field that will push the surface charge on the other away. So I end up with more charge on the outside-facing sides of each sphere, and less on the inside-facing sides. (Calculating the exact distribution is a lot more work, though, so don't worry about that! It's just a good thing to keep in mind in these problems.) | {
"domain": "physics.stackexchange",
"id": 32162,
"tags": "charge, potential, conductors"
} |
How to make a ROS topic? | Question:
How do I make a ROS topic? I want to make a ROS topic (either a subscriber or a publisher) which I can either write to or read from.
For example, ROS programs have topics I can subscribe to or publish on; I want to make my own.
I just want to be able to send a float32 type message using a ROS command on my topic.
Is this possible to do?
Originally posted by sdcguy on ROS Answers with karma: 23 on 2018-03-09
Post score: 2
Answer:
Of course you can do that. First I would recommend you to follow the ROS tutorials. Also, you can use your own message types; follow the ROS message tutorials.
You can create your own topics using ROS messages. Also, you can create different topics using the same message type. If you need, you can create your own message type as well.
You always subscribe to topics published by a publisher, not to messages directly. To do that:
You should create a publisher that publishes on a topic, such as chatter.
Then, you should create a subscriber that subscribes to that specific topic published by the publisher.
Whenever the subscriber receives a message on that topic, your callback function in the subscriber node is called. Then you can use the received data.
Originally posted by Gayan Brahmanage with karma: 929 on 2018-03-09
This answer was ACCEPTED on the original site
Post score: 5
Original comments
Comment by sdcguy on 2018-03-09:
Thanks for the response.
The first tutorial you have mentioned is a tutorial to write a publisher, which publishes on the topic "chatter". It is making a call to the advertise function which "notifies anyone that is trying to subscribe to that topic".
Comment by sdcguy on 2018-03-09:
So, should I make a subscriber for that topic? And that is basically me creating my own topic.
Comment by Gayan Brahmanage on 2018-03-10:
Hi sdcguy, I just updated the answer with a brief explanation. Hope this will help you. | {
"domain": "robotics.stackexchange",
"id": 30262,
"tags": "ros, ros-lunar, topic, create, rostopic"
} |
Are all pseudo-random number generators ultimately periodic? | Question: Are all pseudo-random number generators ultimately periodic? Or are they periodic at all in the end?
By periodic I mean that, like rational numbers, they in the end generate a periodic subsequence...
And pseudo-random means algorithmic/mathematical generation of random numbers...
Answer: All pseudorandom generators that don't rely on outside randomness and use a bounded amount of memory are necessarily ultimately periodic since they have finite state. You can think of them as huge deterministic finite automata which have special "output" states in which they give their output. All finite automata are eventually periodic, and so all pseudorandom generators produce eventually periodic output.
However, the period length can be enormous. For example, a PRNG with a cryptographic state of 128 bits might only cycle once every $2^{128}$ bits of output, and so even if outputting one bit every nanosecond, the solar system will be dead ere the PRNG repeats.
If the PRNG is allowed to use an unbounded amount of memory (which isn't realistic) then it can, for example, output the binary expansion of $\sqrt{2}$, which we know isn't eventually periodic (since $\sqrt{2}$ is irrational). | {
"domain": "cs.stackexchange",
"id": 6780,
"tags": "randomness, pseudo-random-generators"
} |
UI Headers Not Updating qt_ros | Question:
I created a qt_ros package and integrated my QT application files into it. Everything has worked perfectly so far, but today when I added a new button to the .ui file the associated ui header file was not updated. Is this normal, or is there something wrong with my CMakeLists?
Note: This project started as a .pro project and uic ran fine on the .ui files then.
CMakeLists.txt
##############################################################################
# Rosbuild2
##############################################################################
if(ROSBUILD)
include(rosbuild.cmake OPTIONAL)
return()
endif()
##############################################################################
# CMake
##############################################################################
cmake_minimum_required(VERSION 2.4.6)
##############################################################################
# Ros Initialisation
##############################################################################
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
rosbuild_init()
#set the default path for built executables to the "bin" directory
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
#set the default path for built libraries to the "lib" directory
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)
rosbuild_genmsg()
# Set the build type. Options are:
# Coverage : w/ debug symbols, w/o optimization, w/ code-coverage
# Debug : w/ debug symbols, w/o optimization
# Release : w/o debug symbols, w/ optimization
# RelWithDebInfo : w/ debug symbols, w/ optimization
# MinSizeRel : w/o debug symbols, w/ optimization, stripped binaries
#set(ROS_BUILD_TYPE Debug)
##############################################################################
# Qt Environment
##############################################################################
rosbuild_include(qt_build qt-ros)
rosbuild_prepare_qt4(QtCore QtGui QtWebkit QtNetwork) # Add the appropriate components to the component list here
##############################################################################
# Sections
##############################################################################
file(GLOB QT_FORMS RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} ui/*.ui)
file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/Google_Maps_CMAKE/*.hpp)
QT4_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
QT4_WRAP_UI(QT_FORMS_HPP ${QT_FORMS})
QT4_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
##############################################################################
# Sources
##############################################################################
file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
##############################################################################
# Binaries
##############################################################################
rosbuild_add_executable(Google_Maps_CMAKE ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
target_link_libraries(Google_Maps_CMAKE ${QT_LIBRARIES})
Originally posted by newToRos on ROS Answers with karma: 52 on 2013-02-25
Post score: 0
Original comments
Comment by 130s on 2013-02-26:
Can you update your question to add; CMakeLists.txt, .ui files and the command line you used? At least without those info it's hard to tell what's going on.
Comment by newToRos on 2013-02-27:
Sure. I will add them later tonight.
Comment by newToRos on 2013-02-28:
I just added the CMakeLists.txt
Answer:
Alright, I figured out what was going on. When I started the project everything was in the same directory and I was using a .pro file. I needed to include the ui_form.h file in my form.cpp file to be able to access my buttons. When I would build the project this way, the ui_form.h file would be generated and stayed in the same directory as all of the other files.
I then decided to organize my project into include, src, ui, and resources folders. Since the ui_form.h file was a header file, I moved it to the include folder and updated my include statement to look in the include directory. Around this time, I switched to using CMake and discovered that the files were not updating when I changed the interface. I thought I would just delete the ui_form.h, since it was auto-generated. I did, and got a compiler error because my include statement pointed to the include directory.
I just realized that CMake puts the ui_form.h file in the build directory. I deleted the one I had in the include directory and changed my include statements in form.cpp to
#include "../build/ui_form.h"
instead of
#include "../include/ui_form.h"
I am still not sure if this is the best way to do it since the build directory could be changed, but at least everything is working now.
Originally posted by newToRos with karma: 52 on 2013-02-28
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by newToRos on 2013-03-01:
I am not yet able to accept my own answer. Feel free to accept it for me. | {
"domain": "robotics.stackexchange",
"id": 13060,
"tags": "ros, gui, cmake"
} |
On the expansion of space on small distances | Question: After reading this post :Why does space expansion not expand matter? and after posting some comments to Peter Diehr, I was invited to make my argument to a question. The main things one needs from the post above are
The answer seems to be (from Marek in the previous question) that the gravitational force is so much weaker than the other forces that large (macro) objects move apart, but small (micro) objects stay together. However, this simple explanation seems to imply that expansion of space is a 'force' that can be overcome by a greater one. That doesn't sound right to me.
And
Hi. @PeterDiehr and Sushant23 . But why Brooklyn doesn't? If on the small scale we agree on the posted answer, that we cannot see the expansion since everything is expanding, then why see it on the big scale? Is it because it expands faster on the big scales but on the small the speed is the same for all
objects? Thanks. – Constantine Black 1 hour ago
@ConstantineBlack: the expansion is equivalent to a very weak force - local binding forces always overwhelm it: atoms, molecules, people (eg, not a valid excuse for the waistline!), planets, solar systems, and galaxies. But you can see it over very great distances - hence the red shift due to cosmic expansion is a good proxy for distance, though other proxies are used to set the distance scale. See Hubble's Law – Peter Diehr 1 hour ago
@PeterDiehr Thanks for the fast response. I find it conceptually wrong to admit that expansion is a force such that you can use an equation like Newton's, or any argument at least saying that the total force on the object is expansion + other_forces, so that the result on small scales is non-expansion. It's more reasonable to either say that experiments say this or that, or that on small scales the expansion rate is the same for all objects (even inside the galaxy??) so that we don't observe it. Am I losing something here? Thanks. – Constantine Black 53 mins ago
from the comment discussion. I am posting them as they are, because in the end, I don't know if my question is something new besides the fact that the answers at the pre-mentioned post are not satisfactory( at least for me), and because indeed, my question is a function of the preceding discussion.
Thank you.
Answer: In general relativity a free particle moves on a trajectory called a geodesic and to make it diverge from that geodesic you need to apply a force to it. To take an everyday example, an object momentarily at rest at the surface of the Earth would normally follow a geodesic that leads radially towards the centre of the Earth with an acceleration relative to the surface of $g$. To keep the object stationary at the surface we need to apply a force, and that force is of course just the gravitational force $F=mg$.
You'll hear people say that gravity isn't a force and while this is true it's also misleading. The curvature of spacetime does result in a force but that force is a bit different to the naive view. There isn't a force pulling objects down, but you do need a force to prevent them falling down i.e. to accelerate them away from the geodesic they would otherwise follow.
The point of all this is that I suspect your key concern is your statement:
I find it conceptually wrong to admit that expansion is a force such that you can use an equation like Newton's
You are correct that the expansion of space is not a force, but to make an object not follow that expansion you have to apply a force to it, and that is a real force that is in principle measurable (in practice it would be far too small to measure). So if you could tie two objects together with a string many light years long, to keep them at rest relative to each other, there really would be a tension in that string. That tension arises because you are forcing the objects to accelerate away from the geodesics they would otherwise follow, and it arises in the same way as the tension in the string if you suspend an object in Earth's gravity.
Having said this, I actually agree that it is meaningless to talk about the force on an atom due to the expansion, except just possibly in some idealised circumstances. By the expansion of spacetime we mean a spacetime geometry called the FLRW metric, but this is a large scale geometry due to a homogeneous distribution of matter and the distribution of matter isn't homogenous. If you looked at the spacetime geometry with some imaginary curvometer then you'd find at the small scale it didn't look anything like the FLRW metric. And if it locally doesn't look like the FLRW metric then that means locally there is no expansion and therefore no force needs to be applied to resist that expansion.
A footnote: when I started writing this I intended to go into more details about geodesics in an expanding spacetime, but this turns out to be somewhat complicated so it's a temptation I have resisted. If this is a subject that interests you it might be worth writing a new question. | {
"domain": "physics.stackexchange",
"id": 31216,
"tags": "cosmology, space-expansion"
} |
Why does measured pressure change over time in closed hose with temperature gradient | Question: I have a 4' hose that is closed at one end and connected to a Airdata Test Set (precise control of pressure) and a high accuracy pressure monitor on the other end with a T and valve. The valve allows the Airdata Test Set connection to be closed off resulting in a hose connected to the pressure monitor and closed at the other end. The valve is a high quality needle valve. The pressure monitor is a a Druck DPI 142. Half the length of the hose is in a temperature chamber controlled to 70 C. The Druck connected end is outside the chamber at roughly 22 C. When the airdata test set is commanded to a pressure of 1300 mb and allowed to settle for 30 seconds or so, then the valve closed, the pressure reported by the Druck drops over 20 minutes or so with a decreasing rate of change. The airdata test set draws air from the room when operating. The hose is ~0.190" ID neoprene, Saint-Gobain P/N 06404-15. The temp chamber, hose, etc, are given 1 hour to thermally stabilize prior to commanding the pressure to 1300 mb. The difference between initial pressure and stable pressure is ~18 mb. Why does the pressure take 20 minutes to stabilize?
Answer: Firstly, and forgive me for asking the obvious, are you certain that there are no leaks anywhere in your setup? I'd suggest getting someone else to check it over in person - it may be something obvious you've overlooked that a fresh pair of eyes would see. Again, apologies if you've already tried this :-)
Assuming that's not it:
As you increase the pressure in the hose, the air temp in the hose will also increase (work is done on the gas to increase the pressure). As the temp settles back to equilibrium, the pressure decreases slightly according to $pV = NRT$. (I think you say you allow 1 hour for equilibrium prior to increasing the pressure).
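As a rough plausibility check (my own back-of-the-envelope, using only the 1300 mb and ~18 mb figures from the question): once the valve is closed the gas volume is fixed, so cooling at constant volume gives

$$\frac{\Delta P}{P}=\frac{\Delta T}{T}\;\Longrightarrow\;\Delta T\approx T\,\frac{\Delta P}{P}\approx 295\ \mathrm{K}\times\frac{18\ \mathrm{mb}}{1300\ \mathrm{mb}}\approx 4\ \mathrm{K}.$$

So the trapped air only needs to be about 4 K above its final temperature at the moment the valve closes for thermal settling to explain the observed drop.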
If this is true, then increasing the pressure again should result in a much smaller pressure drop (since an increase of 18mb will result in much less heating than 1300mb).
Are you able to monitor temperatures inside the hose?
Given that the pressure does seem to settle, this is where I'd put my money. | {
"domain": "physics.stackexchange",
"id": 183,
"tags": "fluid-dynamics"
} |
In continuum mechanics, what is work potential in the context of total potential energy? | Question: I'm reading a book on the finite element method. Specifically I'm looking at the background material where they are discussing potential energy, equilibrium, and the Rayleigh–Ritz method.
The book claims that potential energy is equal to strain energy + work potential energy. I see that strain energy is like spring energy from a spring being stretched or compressed. However, I don't see what work potential represents. I feel that if a spring system is perturbed via forces, the work put into the system should be equal to the amount stored in the springs. (By integrating f(x) over x). The book seems to suggest otherwise since they are simply multiplying the force applied in the equilibrium position, times the displacement.
What's going on here, and especially what is the intuitive meaning behind work potential?
Answer: They probably mean external work (load * displacement) potential; summed with (inner) strain potential gives total potential energy. See e.g. here (just a quick search). | {
"domain": "physics.stackexchange",
"id": 86368,
"tags": "energy, work, potential-energy, spring, continuum-mechanics"
} |
Electric field produced by a uniformly polarized sphere | Question: I am thinking about the classic problem of a uniformly polarized sphere, within which the polarization is in $z$ direction.
I've been trying to find the electric field inside the uniformly polarized sphere using the electric displacement $\vec{D} = \varepsilon_{0}\vec{E} + \vec{P}$. By using the "Gauss Law" for electric displacement $\oint \vec{D} \cdot d\vec{A} = Q_{\text{free}}$, $\vec{D}$ is $0$ because there is no free charge inside the material. Hence, I have
$$ \varepsilon_0\vec{E} + \vec{P}=0$$
from the above equation the electric field inside the sphere is $\vec{E}= -\frac{\vec{P}}{\varepsilon_0}$. But the correct answer is $\vec{E}=-\frac{\vec{P}}{3\varepsilon_0}$. Can anyone please explain this inconsistency?
Answer: The macroscopic Gauss's Law for $\vec{D}$ is not enough to solve this problem. In a (static) system with the divergence equation $\vec{\nabla}\cdot\vec{E}=0$, you can conclude that $\vec{E}$ is a constant, since the curl equation $\vec{\nabla}\times\vec{E}=0$ is also known. Hence, by Helmholtz's Theorem, $\vec{E}$ is zero if it vanishes at spatial infinity. However, an analogous argument does not hold for $\vec{D}$, since $\vec{D}$ is not a curl-free (that is, conservative) vector field. In general, $\vec{\nabla}\times\vec{D}=\vec{\nabla}\times\vec{P}$, and while $\vec{\nabla}\times\vec{P}=0$ inside the sphere, it is singular (meaning it is a $\delta$-function) at the surface of the sphere.
Because, in general, $\vec{\nabla}\times\vec{D}\neq0$, using $\vec{D}$ to calculate the fields is not useful in most situations in which there is not a planar, cylindrical, or spherical symmetry (in which cases, the symmetry ensures $\vec{\nabla}\times\vec{D}=0$). It is more useful to work with the conservative field $\vec{E}=-\vec{\nabla}\Phi$, solving for the scalar potential $\Phi$ using a method such as separation of variables. To solve the problem that way, rather than by introducing $\vec{D}$, you should calculate the bound charges $\rho_{b}$ and $\sigma_{b}$ from the polarization $\vec{P}$. In this case, $\rho_{b}=0$ and $\sigma_{b}=P\cos\theta$, which makes for an easy separation of variables solution (with only $\ell=1$ terms, since $\cos\theta = P_1(\cos\theta)$).
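For reference, the standard separation-of-variables result for this boundary condition (the textbook uniformly polarized sphere, e.g. in Griffiths) is

$$\Phi_{\text{in}}(r,\theta)=\frac{P}{3\varepsilon_0}\,r\cos\theta \;\Longrightarrow\; \vec{E}_{\text{in}}=-\vec{\nabla}\Phi_{\text{in}}=-\frac{\vec{P}}{3\varepsilon_0},$$

which is uniform inside the sphere and is exactly the quoted answer.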
To see concretely why this more general method is necessary, note that any constant $\vec{E}=E_{0}\hat{n}$ inside the sphere would satisfy $\vec{\nabla}\cdot\vec{E}=0$ (or equivalently $\oint d\vec{S}\cdot\vec{E}=0$) inside the sphere. So Gauss's Law cannot possibly be enough to fix $\vec{E}$ (or $\vec{D}$). | {
"domain": "physics.stackexchange",
"id": 98561,
"tags": "electromagnetism, electrostatics, gauss-law, polarization"
} |
Polish Notation in C++ | Question: This is my homemade Polish Notation implementation in C++.
I'd like a review to improve it and to improve my coding skills. I will also put this implementation on my GitHub account.
//======================================================
// Author : Omar_Hafez
// Created : 28 July 2022 (Thursday) 6:10:44 AM
//======================================================
#include <math.h>
#include <iostream>
#include <stack>
#include <vector>
class PolishNotation {
private:
std::string operations[5] = {"+", "-", "*", "/", "^"};
bool isDigit(char &ch) const { return ch >= '0' && ch <= '9'; }
bool isOperation(std::string str) const {
for (std::string x : operations) {
if (str == x) return 1;
}
return 0;
}
bool isNumber(std::string str) const {
bool floatingPoint = 0;
if (str.length() == 1) {
return isDigit(str[0]);
}
for (int i = (str[0] == '-'); i < str.length(); i++) {
if (!floatingPoint && str[i] == '.') {
floatingPoint = 1;
continue;
}
if (!isDigit(str[i])) return 0;
}
return 1;
}
bool isHigher(std::string a, std::string b) const {
int cnt1 = 0, cnt2 = 0;
cnt1 = (a == "+" || a == "-" ? 1 : a == "*" || a == "/" ? 2 : 3);
cnt2 = (b == "+" || b == "-" ? 1 : b == "*" || b == "/" ? 2 : 3);
return cnt1 > cnt2;
}
std::string calculate(std::string a, std::string b, std::string operation) const {
long double x = stold(a);
long double y = stold(b);
if (operation == "+") return std::to_string(x + y);
if (operation == "-") return std::to_string(x - y);
if (operation == "*") return std::to_string(x * y);
if (operation == "/") return std::to_string(x / y);
if (operation == "^") return std::to_string(pow(x, y));
return a;
}
std::vector<std::string> cutter(std::string str) const {
std::vector<std::string> ope;
std::string tmp = "";
int ind = 0;
while (ind < str.length()) {
while (ind < str.length() &&
(isDigit(str[ind]) || str[ind] == '.')) {
tmp += str[ind++];
}
if (tmp != "") {
ope.push_back(tmp);
tmp = "";
}
while (ind < str.length() && !isDigit(str[ind])) {
tmp += str[ind++];
}
if (tmp.length() > 1 && tmp.back() == '-') {
tmp.pop_back();
ope.back() = "-" + ope.back();
}
if (tmp != "") {
ope.push_back(tmp);
tmp = "";
}
}
return ope;
}
std::vector<std::string> convertToPolishNotation(std::vector<std::string> ope) const {
if (ope.size() == 1) return {ope[0]};
std::vector<std::string> a;
a.push_back(ope[0]);
std::stack<std::string> st;
for (int i = 1; i < ope.size(); i += 2) {
if (st.empty()) {
st.push(ope[i]);
} else {
while (!st.empty() && !isHigher(ope[i], st.top())) {
a.push_back(st.top());
st.pop();
}
st.push(ope[i]);
}
a.push_back(ope[i + 1]);
}
while (!st.empty()) {
a.push_back(st.top());
st.pop();
}
return a;
}
public:
std::vector<std::string> convertToPolishNotation(std::string str) const {
return convertToPolishNotation(cutter(str));
}
long double calculateThePolishNotation(std::vector<std::string> a) const {
int ind = 0;
std::stack<std::string> st;
while (ind < a.size()) {
while (st.empty() || isNumber(st.top())) {
st.push(a[ind++]);
}
std::string operation = st.top();
st.pop();
std::string x = st.top();
st.pop();
std::string y = st.top();
st.pop();
st.push(calculate(y, x, operation));
}
return stold(st.top());
}
};
int main() {
std::string str;
std::cin >> str;
PolishNotation p = PolishNotation();
std::cout << std::fixed << p.calculateThePolishNotation(p.convertToPolishNotation(str));
}
Answer: Include All Necessary Include Files
The code does not contain the necessary include for std::string; it does not compile on my computer (Windows 10, Visual Studio 2019 Professional) without it.
Don't Reinvent the Wheel
The C programming language has ctype.h, which includes the function isdigit(), so there is no need to write your own; you can include cctype to get access to this function in C++. You might also want to try the first answer in this Stack Overflow question.
Alternate Implementation of the calculate() Function
One problem with the current implementation of the calculate() function is that there is no error checking: what happens if someone inputs an operator that you don't check for, such as % or $?
A second problem with this function is that it is difficult to add a new operator/operation.
An alternate implementation that could solve both these problems would be to use std::map or std::unordered_map that uses a character or string as a Key and has a pointer to a function as the mapped return type. | {
"domain": "codereview.stackexchange",
"id": 43671,
"tags": "c++, math-expression-eval"
} |
Reference for first paper using light clocks | Question: I am trying to find the first historical paper that uses light clocks for special relativity arguments (the famous photon bouncing off mirrors). It seems to me that actually Einstein wasn't the first one talking about this.
Any reference would be greatly appreciated! Thank you!
Answer: Mathematically, it already appeared in Michelson and Morley's famous paper from 1887, because the path of the transverse interferometer arm resembles the light clock perfectly. They obtained the formula on page 336:
$$
2D \sqrt{1+\frac{v^2}{V^2}}
$$
Michelson, Albert A.; Morley, Edward W. (1887), "On the Relative Motion of the Earth and the Luminiferous Ether", American Journal of Science, 34: 333–345
https://en.wikisource.org/wiki/On_the_Relative_Motion_of_the_Earth_and_the_Luminiferous_Ether
In the context of relativity, it seems that Lewis and Tolman in 1909 were the first to use it. They obtained the formula on p. 715:
$$
\frac{mn}{op}=\frac{1}{\sqrt{1-\frac{v^{2}}{c^{2}}}}
$$
Lewis, Gilbert N.; Tolman, Richard C. (1909), "The Principle of Relativity, and Non-Newtonian Mechanics", Proceedings of the American Academy of Arts and Sciences, 44: 709–726
https://en.wikisource.org/wiki/The_Principle_of_Relativity,_and_Non-Newtonian_Mechanics | {
"domain": "physics.stackexchange",
"id": 46262,
"tags": "special-relativity, speed-of-light, time, history, specific-reference"
} |
Finding impulse response/polynomial zero on the unit circle | Question: The polynomial here is impulse coefficients of minimum phase FIR filter, or it's impulse response.
A code somewhere tries to finding roots of polynomial on the circle. It have roots of polynomial ru and it's derivatives root rdu.
The algorithm to finding them uses weird merit to increase unit circle tolerance and that is:
if ~rem(length(ru),2) && length(rdu) == length(ru)/2 && ~isempty(ru)
then stop increasing the tolerance and use rdu instead of ru.
But why must the number of derivative roots be half the number of roots of the original polynomial? Why a multiple of two?
Appendix
function [ru,tol,Nsz] = findUnitCircleZeros(b,r,tol,Nsz)
% Find single multiplicity zeros on the unit circle

% Compute the zeros of the derivative.
% This will contain the same zeros on the unit circle as b, but not double
rd = roots(polyder(b));

% Initialize flag indicating whether to use the zeros from the derivative
% polynomial or not. Don't use derivative zeros by default.
derivflag = 0;

% Initialize flag
flag = 1;

while tol < 1e-3 && flag
    % Find the zeros on the unit circle
    ru = r(abs(abs(r)-1) < tol);
    % Find the zeros on the unit circle corresponding to the derivative
    rdu = rd(abs(abs(rd)-1) < tol);
    if nargin > 3
        % Check if the number of single zeros found corresponds to the
        % given number of zeros
        if length(rdu) >= Nsz
            rdu = rdu(1:Nsz);
            flag = 0;
            derivflag = 1;
        elseif ~rem(length(ru),2) && length(ru)/2 == Nsz
            flag = 0;
        else
            % Increase the tolerance
            tol = 10*tol;
        end
    elseif ~rem(length(ru),2) && length(rdu) == length(ru)/2 && ~isempty(ru)
        % Check that the number of derivative zeros is half the number of
        % zeros of the original polynomial on the unit circle. If not,
        % increase the tolerance and try again
        flag = 0;
        Nsz = length(rdu);
        if max(abs(polyval(b,ru))) > max(abs(polyval(b,rdu)))
            derivflag = 1;
        end
    else
        % Increase the tolerance
        tol = 10*tol;
    end
end

% If max tolerance has been reached and the number of unit circle zeros has
% not been specified, use the zeros on the unit circle found from the
% original polynomial (not the derivative)
if tol >= 1e-3 && nargin < 4
    Nsz = length(ru)/2;
end

if ~isempty(ru)
    if derivflag
        % Use unit circle zeros from derivative
        ru = rdu;
    else
        ru = getsinglezeros(ru,Nsz);
    end
    if tol < 1e-3
        % Force length of zeros to be exactly one, keep the angle the same
        ru = exp(1i*angle(ru));
    end
else
    Nsz = 0;
end
Answer: I'm missing some context but from what I know about such algorithms I can make an educated guess about what's going on. I don't think that the polynomial provided to the function is a minimum-phase polynomial. I believe it is a zero-phase polynomial with a non-negative amplitude response. Since it is non-negative, any zeros on the unit circle must be double (or of a larger even order). Consequently, the number of zeros on the unit circle must be even. Creating a minimum-phase filter from the given response can be done by keeping just one of each double zero on the unit circle. Of course, the zeros that are not on the unit circle must be taken care of somewhere else in the code (keep the ones inside, throw away the ones outside).
The derivative of the polynomial has single roots where the original polynomial has double roots.
For example, if $p(z)$ is
$$p(z) = (z-A)^2(z-B)^2$$
then the derivative is
\begin{align*}
\frac{dp(z)}{dz} &= 2(z-A)(z-B)^2 + 2(z-A)^2(z-B) \\
&= 2(z-A)(z-B)(z-B + z-A)\\
&= 4(z-A)(z-B)(z - (A+B)/2)
\end{align*}
I think that ideally you wouldn't need these roots, but they're used as a second option if there's some (numerical) problem. | {
"domain": "dsp.stackexchange",
"id": 12106,
"tags": "finite-impulse-response, minimum-phase, polynomial"
} |
Calculator - C++ operator-overloading | Question: I am new to object-oriented concepts. The following is my attempt at creating a basic calculator using classes and operator overloading. Please review it for improvements. Also, how do I make it more intuitive? I want to display +, -, *, / as options and use a switch on them.
#include <iostream>
#include <conio.h>
using namespace std;
class calculator
{
private:
float val;
public:
calculator(): val(0)
{}
void getdata()
{
cout << "enter number: ";
cin >> val;
}
void showdata()
{
cout << "value: " << val << endl;
}
calculator operator + (calculator) const;
calculator operator - (calculator) const;
calculator operator * (calculator) const;
calculator operator / (calculator) const;
};
calculator calculator::operator + (calculator arg2) const
{
calculator temp;
temp.val = val + arg2.val;
return temp;
}
calculator calculator::operator - (calculator arg2) const
{
calculator temp;
temp.val = val - arg2.val;
return temp;
}
calculator calculator::operator * (calculator arg2) const
{
calculator temp;
temp.val = val * arg2.val;
return temp;
}
calculator calculator::operator / (calculator arg2) const
{
calculator temp;
temp.val = val / arg2.val;
return temp;
}
void main()
{
calculator obj1, obj2, obj3;
char ch;
int choice;
obj1.getdata();
cout << "1st value entered: ";
obj1.showdata();
cout << endl;
obj2.getdata();
cout << "2nd value entered: ";
obj2.showdata();
cout << endl;
cout << "Input choice as integer: 1: +, 2: -, 3: *, 4:/ ";
cin >> choice;
cout << endl;
switch (choice)
{
case 1:
obj3 = obj1 + obj2;
break;
case 2:
obj3 = obj1 - obj2;
break;
case 3:
obj3 = obj1 * obj2;
break;
case 4:
obj3 = obj1 / obj2;
break;
default:
cout << "Invalid choice! " << endl;
}
cout << "Result ";
obj3.showdata();
cout << endl;
}
Answer: You can also overload operator>> instead of using getdata():
std::istream& operator>>(std::istream& in, calculator& obj)
{
return in >> obj.val;
}
Similar approach with showdata(), using operator<<:
std::ostream& operator<<(std::ostream& out, calculator const& obj)
{
return out << "value: " << obj.val;
}
As for your existing overloads, you shouldn't need to return a calculator. You appear to treat each calculator object as a single value. This makes each one appear to be single-use, and the more values you wish to enter, the more objects, and thus the more code, you'll need.
What you should do is maintain one object and have val maintained as you enter data. However, this will also mean that your arithmetic operators aren't needed. If you wish to keep them, then you'll have to approach this differently. For instance, you can calculate (not merely input) a final value for different calculators and use those operators to get a new value. This may look like a needless approach for a simple calculator, but at least you're still able to utilize these arithmetic operators.
Some miscellaneous notes:
Be aware of the issues of using using namespace std.
You don't appear to be using <conio.h>, so just remove it.
You don't need to use std::endl in so many places. A simple "\n" for outputting a newline should be sufficient. More info about that here. | {
"domain": "codereview.stackexchange",
"id": 42366,
"tags": "c++, beginner, object-oriented, overloading, calculator"
} |
How correctly move joints by SetPositionPID() and SetPositionTarget() | Question:
I want to set a joint to a certain angle properly and get its torque using Joint::GetForceTorque() in the process.
So I followed this tutorial to try to control a joint.
I used SetPositionPID() and SetPositionTarget() to control the joint, but when I do this the joint cannot stop, even after its PID gains are set to zero with SetPositionPID(name, common::PID(0, 0, 0)).
P.S. I am using Gazebo-8, WITHOUT ROS. I have added (part of) the code from my plugin below.
std::string name;
if (update_num == 0)
{
//Joint velocity using PID controller
this->jointController.reset(new physics::JointController(this->model));
this->jointController->AddJoint(model->GetJoint("purple_joint"));
name = model->GetJoint("purple_joint")->GetScopedName();
// this->jointController->SetVelocityPID(name, common::PID(100, 0, 0));
this->jointController->SetPositionPID(name, common::PID(100, 0, 0));
this->jointController->SetPositionTarget(name,1.57);
// this->jointController->SetVelocityTarget(name, 1.0);
}
else if (update_num < 200)
{
// Must update PID controllers so they apply forces
this->jointController->Update();
}
else if (update_num >= 200)
{
name = model->GetJoint("purple_joint")->GetScopedName();
this->jointController->SetPositionPID(name, common::PID(0, 0, 0));
this->jointController->SetPositionTarget(name,0);
}
update_num++;
Is something wrong here, and why?
How do I set a joint to a certain angle properly and read its torque with Joint::GetForceTorque() while doing so?
Best Regards!
xinaxjm
Originally posted by xianxjm on Gazebo Answers with karma: 15 on 2018-03-21
Post score: 1
Answer:
The controller cannot stop the joint if the PID gains are set to common::PID(0, 0, 0).
The gains control how much torque the controller will apply. All zeros means everything in the equation used to calculate the torque applied by the controller is being multiplied by zero. Some torque needs to be applied to stop a moving joint.
Often the gains on the controller are set once at startup.
if (update_num == 0)
{
// Initialize the controller with the gains to use
this->jointController->SetPositionPID(name, common::PID(100, 0, 0));
}
else if (update_num == X)
{
// time to tell the joint to go to position 1.57
this->jointController->SetPositionTarget(name, 1.57);
}
else if (update_num == Y)
{
// time to tell the joint to go back to position 0
this->jointController->SetPositionTarget(name, 0);
}
// The controller applies torque when it is updated.
// This must be called every update that the controller should be controlling the joint position
this->jointController->Update();
You might be interested in reading
https://en.wikipedia.org/wiki/PID_controller#Mathematical_form
Originally posted by sloretz with karma: 558 on 2018-03-21
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by xianxjm on 2018-04-25:
thank you very much~ | {
"domain": "robotics.stackexchange",
"id": 4247,
"tags": "control"
} |