anchor | positive | source |
|---|---|---|
Identification - hornet | Question: It is around 3 cm long. Does it look like a European or an Asian hornet?
The place where I found it is Central Europe/Romania but, judging from the Wikipedia picture of the Asian hornet's head, it looks exactly the same.
Thank you!
Answer: European hornet (Vespa crabro). In the Asian hornet, the black body bands are much wider and more uniform. For the European hornet, as in your specimen, the bands are thinner and appear as thin lines connecting triangular black dots. See this image for comparison:
https://www.sciencephoto.com/media/980933/view/asian-and-european-hornets-illustration | {
"domain": "biology.stackexchange",
"id": 10952,
"tags": "species-identification, entomology"
} |
Python - identify mechanical systems from input / output in the time domain | Question: I have a set of input / output time series measurements for different times and machines.
I assume each machine's behavior doesn't change over time
the output is generally linked to the integral of the input
Machines have different characteristics that would allow me to discern them
I generally work with Python
My first approach is building FFT(Y)/FFT(X) to observe the response at different frequencies. The idea is then to compare the FFT(Y)/FFT(X) graphs to see if I can distinguish the different machines.
Is the approach correct (notably regarding the part where I expect the output to be an integral of the input)? Are there better ways to proceed?
Answer: Assuming the machines are LTI (Linear Time Invariant) systems, they are completely determined by their transfer function. In theory $FFT(y)/FFT(x)$ will give you this, but that's not so easy in practice.
You need a good enough signal-to-noise ratio at ALL frequencies. If the input has low or no energy at some frequencies you'll end up with a "divide by zero" problem. It also requires proper framing, since the output typically lags the input by a frequency-dependent amount.
You may be better off determining the impulse response using an adaptive filter and then doing the Fourier Transform of that.
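As an illustration, the segment-averaged cross-spectral estimate (the idea behind Welch-style estimators) can be sketched in plain NumPy. This is only a toy: the "machine" here is a pure gain of 2 standing in for a real LTI system, and the signal length and segment size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8192)
y = 2.0 * x            # stand-in "machine": a pure gain of 2 (an LTI system)

# Averaged cross-spectral estimate H1 = <X* Y> / <|X|^2>, which is far more
# robust to noise than dividing a single FFT(y) by a single FFT(x).
nseg, nfft = 16, 512
X = np.fft.rfft(x.reshape(nseg, nfft), axis=1)
Y = np.fft.rfft(y.reshape(nseg, nfft), axis=1)
Pxx = np.mean(np.conj(X) * X, axis=0).real   # averaged input power spectrum
Pxy = np.mean(np.conj(X) * Y, axis=0)        # averaged cross spectrum
H = Pxy / Pxx                                # estimated frequency response
```

In practice one would reach for `scipy.signal.csd` and `scipy.signal.welch`, which additionally handle windowing and overlap; the point is that averaging over segments before dividing tames the noise that wrecks a raw single-shot ratio.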
Once you have the transfer function of the system you can try to identify the poles and the zeros of the transfer function (which isn't a trivial problem). These correspond to masses, springs, resonances, resistances, modes, etc. of your machines. | {
"domain": "dsp.stackexchange",
"id": 10610,
"tags": "fft, signal-analysis, power-spectral-density"
} |
Unmet dependencies installing Groovy on Oneiric | Question:
I am unable to install Groovy on a fresh install of Ubuntu 11.10 (Oneiric). I get the following:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help resolve the situation:
The following packages have unmet dependencies:
ros-groovy-desktop-full : Depends: ros-groovy-rqt-robot-plugins (= 0.2.7-0oneiric-20130103-2316-+0000) but it is not going to be installed
Depends: ros-groovy-robot-model (= 1.9.32-0oneiric-20130103-0134-+0000) but it is not going to be installed
Depends: ros-groovy-visualization-tutorials (= 0.7.3-s1357253948~oneiric) but it is not going to be installed
Depends: ros-groovy-simulator-gazebo (= 1.7.6-s1357263088~oneiric) but it is not going to be installed
Depends: ros-groovy-rviz (= 1.9.18-0oneiric-20130103-2224-+0000) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
I followed the directions exactly as described in the official documentation (including setting up sources, updating repositories, etc.). Any suggestions on how to fix this?
Originally posted by kamek on ROS Answers with karma: 79 on 2013-01-10
Post score: 1
Answer:
There was an improper version of libassimp2 in the repository. It's been set to the right version now.
Originally posted by tfoote with karma: 58457 on 2013-01-10
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 12359,
"tags": "ros, ubuntu, ubuntu-oneiric, dependencies, ros-groovy"
} |
Phase difference between two waves confusion | Question: I'm facing much confusion about a rather silly problem recently. Suppose I have two functions $\sin(x+\phi_1)$ and $\sin(x+\phi_2)$.
Defining $\theta_1=x+\phi_1$ as the phase of the first and $\theta_2=x+\phi_2$ as the phase of the second function.
The phase difference, by definition, comes out to be simply:
$$\Delta\theta=\theta_1-\theta_2=x+\phi_1-x-\phi_2=\phi_1-\phi_2$$
Hence, $$\Delta\theta=\phi_1-\phi_2$$
So, there is a constant phase difference between these two waves.
However, what if I have two functions $\sin(x+\phi_1)$ and $\sin(\phi_2-x)$ instead ? If I follow the same definition above, then I have $\theta_1=x+\phi_1$ and $\theta_2=\phi_2-x$.
In this case, the phase difference comes out to be $$\Delta \theta=2x+(\phi_1-\phi_2)$$
As $x$ increases, the phase difference seems to increase here. However, this doesn't seem correct to me.
For example, I know $\cos(x)=\sin(\frac{\pi}{2}-x)=\sin(\frac{\pi}{2}+x)$. This automatically implies, that both $\sin(\frac{\pi}{2}-x)$ and $\sin(\frac{\pi}{2}+x)$ have the same phase, since they are both equal to $\cos(x)$.
However if I use the definition as above, then $\theta_1=\frac{\pi}{2}-x$ and $\theta_2=\frac{\pi}{2}+x$. In this case however,
$$\theta_1-\theta_2=2x=2\omega t \,\,\,\,\,\,(x=\omega t)$$
However, this implies that the phase difference increases over time. When the difference is an even multiple of $\pi$, the waves are in phase, and otherwise they are out of phase. However, in this example, since both are essentially the same function, we should have obtained a constant phase difference that was always $0$.
What is wrong with my definition then, and how would I find the phase difference between two waves in general?
Answer: In calculating the phase difference between two waves, it seems necessary that both waves have the same functional form in the independent variable and that the frequencies of the two waves are the same, i.e., their periods are the same. In that case the phase difference may be interpreted as the horizontal shift that is necessary to make the two waves fall right on top of each other (assuming they have the same amplitude). It is this fact that necessitates the same period for the two waves. Once the periods are the same, the shift necessary to make the two waves identical is at most one wave period. With that, one can proceed to each of your examples to calculate the phase difference. In the first case the two waves are $\sin(x+\phi_{1})$ and $\sin(x+\phi_{2})$. Note that the two waves have the exact same frequency and both are the same periodic function. Thus the magnitude of the phase difference is
$$|(x+\phi_{1})-(x+\phi_{2})|=|\phi_{1}-\phi_{2}|$$
In the second case the two waves are $\sin(x+\phi_{1})$ and $\sin(\phi_{2}-x)$. While both waves are $\sin$ functions, they do not have the exact same frequency (though the absolute value is the same). Thus, as mentioned in the comments
$$\sin(\phi_{2}-x)=-\sin(x-\phi_{2})=\sin(x-\phi_{2}+\pi)$$
Subsequently, the magnitude of the phase difference in this case is
$$|(x+\phi_{1})-(x-\phi_{2}+\pi)|=|\phi_{1}+\phi_{2}-\pi|$$
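A quick numerical sanity check of this result, as a sketch; the grid and the choice $\phi_1=\phi_2=\pi/2$ are mine:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 1001)
phi1 = phi2 = np.pi / 2

w1 = np.sin(x + phi1)   # sin(pi/2 + x)
w2 = np.sin(phi2 - x)   # sin(pi/2 - x)

# Both waves coincide with cos(x), so the shift needed to overlay them is zero,
# in agreement with |phi1 + phi2 - pi| = 0 for these phases.
phase_difference = abs(phi1 + phi2 - np.pi)
```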
Note that as mentioned in your problem, the implication of the above equation for $\cos(x)=\sin(\pi/2-x)=\sin(\pi/2+x)$ is that the phase difference between $\sin(\pi/2-x)$ and $\sin(\pi/2+x)$ is 0. I think this answer provides general guidelines for finding the phase difference between two waves through specific examples mentioned in the question. Please let me know if you have any questions or find the answer incorrect. | {
"domain": "physics.stackexchange",
"id": 85743,
"tags": "waves, harmonic-oscillator, phase-diagram"
} |
Work Integral and its derivation | Question: The work integral is something I saw long time ago and in completely understood it.
\begin{align}
W_{12} & =\int F(x)dx=m\int^{t_2}_{t_1}adx=m\int\left(\frac{dv}{dt}\right)dx=m\int\left(\frac{dv}{dx}\right)\left(\frac{dx}{dt}\right)dx\\
&=m\int\left(\frac{dx}{dt}\right)dv=\frac12\left(mv_2^2-mv_1^2\right)
\end{align}
which is clear as day.
But then I saw another version and I cannot follow it:
So my question is:
How did $$m\int\frac{d}{dt}[\dot{x}(t)]\dot{x}(t)dt$$ become $$\frac{m}{2}\int\frac{d}{dt}[\dot{x}(t)]^2dt$$
Where does the $\frac{1}{2}$ come from?
And how did that expression become the change in kinetic energy equation?
I'm sure this is a simple question but I haven't been able to find a solution online, so I'm asking here. Thanks for the help
Answer: They're using the $2^{nd}$ principle of dynamics $\frac{d \mathbf{Q}}{dt} = \mathbf{F}$ to replace $\mathbf{F}$ with $\frac{d \mathbf{Q}}{dt} = \frac{d}{dt}(m\mathbf{v})$.
With the assumption $\dot m = 0$, you can further manipulate the expression $\mathbf{F} = m \frac{d \mathbf{v} }{dt} $, before writing the work integral as
$W = \displaystyle \int_{t_0}^{t_1} \mathbf{F} \cdot \mathbf{v} dt = m \int_{t_0}^{t_1} \underbrace{\frac{d \mathbf{v} }{dt} \cdot \mathbf{v}}_{=\frac{d}{dt} \left(\frac{1}{2} \mathbf{v} \cdot \mathbf{v} \right)} dt = m \int_{t_0}^{t_1} \dfrac{d}{dt} \left( \dfrac{1}{2} |\mathbf{v}|^2 \right) dt = \int_{0}^{1} dT = T_1 - T_0$. | {
"domain": "physics.stackexchange",
"id": 91177,
"tags": "newtonian-mechanics, energy, work"
} |
Versatile string validation | Question: I took a technical test the other day; one part included a cheap string validation, and I am wondering whether it can be further improved to be more versatile to requirement changes.
The requirements were something like this:
Create a Validate method, which accepts a string and returns true if it's valid and false if it's not.
A string is valid if it satisfies the rules below:
The string must be at least 6 characters long and not exceed 16 characters.
The string must contain only letters, numbers and optionally one hyphen (-).
The string must start with a letter, and must not end with a hyphen.
For example, validate("Michelle Belle"); would return false because it contains a space.
My solution was like that:
public static class ComparableExtensions
{
public static bool IsStrictlyLowerThan<TComparable>(this TComparable comparable, TComparable value)
where TComparable : IComparable<TComparable>
{
return comparable.CompareTo(value) < 0;
}
public static bool IsStrictlyGreaterThan<TComparable>(this TComparable comparable, TComparable value)
where TComparable : IComparable<TComparable>
{
return comparable.CompareTo(value) > 0;
}
public static bool IsStrictlyNotBetween<TComparable>(this TComparable comparable, TComparable lowerBound, TComparable upperBound)
where TComparable : IComparable<TComparable>
{
if (lowerBound.IsStrictlyGreaterThan(upperBound))
{
throw new ArgumentOutOfRangeException(nameof(lowerBound) + nameof(upperBound));
}
return comparable.IsStrictlyLowerThan(lowerBound) || comparable.IsStrictlyGreaterThan(upperBound);
}
}
public static class CharExtensions
{
public static bool IsLetterOrDigit(this char c)
{
return char.IsLetterOrDigit(c);
}
public static bool IsLetter(this char c)
{
return char.IsLetter(c);
}
public static bool IsHyphen(this char c)
{
return c == '-';
}
}
public class Test
{
public static bool Validate(string str)
{
if (str.Length.IsStrictlyNotBetween(6, 16))
{
return false;
}
if (!str.First().IsLetter() || str.Last().IsHyphen())
{
return false;
}
var hyphenCount = 0;
for (var i = 1; i < str.Length - 1; i++)
{
if (str[i].IsLetterOrDigit())
{
continue;
}
if (str[i].IsHyphen())
{
hyphenCount++;
if (hyphenCount > 1)
{
return false;
}
}
else
{
return false;
}
}
return true;
}
}
I purposefully decided to not go with Regular Expressions to keep the logic readable and I am wondering if my code can be further refactored to incorporate new business rules.
Answer: Not much to say about the extension methods, as they are mostly wrappers.
However if you're looking for ways to make the algorithm more readable, LINQ is your friend. You can replace most of your logic with a one-liner:
var hyphenCount = 0;
for (var i = 1; i < str.Length - 1; i++)
{
if (str[i].IsLetterOrDigit())
{
continue;
}
if (str[i].IsHyphen())
{
hyphenCount++;
if (hyphenCount > 1)
{
return false;
}
}
else
{
return false;
}
}
return true;
Like this:
return str.All(c => c.IsHyphen() || c.IsLetterOrDigit()) && str.Count(c => c.IsHyphen()) <= 1;
This more clearly explains your intent. You can also move those expressions into separate methods to make it as readable as possible; this way you can keep adding new conditions without modifying the existing logic (unless they interfere with one another, that is). | {
"domain": "codereview.stackexchange",
"id": 33379,
"tags": "c#, interview-questions, validation"
} |
Koopman-von Neumann (KvN) Theory | Question: I was just wondering, does anyone have any informative sources, apart from the obvious Wikipedia articles, regarding Koopman-von Neumann (KvN) theory? Or, if it's possible, could someone explain the basic premise?
I just cannot find anything on the web at all!
Answer: See for example Topics in Koopman-von Neumann Theory by D. Mauro. This should be one of the most extensive overviews of KvN theory; it also contains examples of applying the theory to some well-known problems such as the Aharonov-Bohm effect. | {
"domain": "physics.stackexchange",
"id": 15996,
"tags": "quantum-mechanics, classical-mechanics, resource-recommendations, wavefunction, hilbert-space"
} |
How could Leptoquarks explain Lepton Flavour universality (LFU) violation? | Question: I recently read about the possibility of Leptoquarks and that this new particle could also explain a possible LFU violation.
Why would introducing a new particle explain LFU violation?
Answer: What is a leptoquark?
Leptoquarks (LQs) are hypothetical particles that would interact with quarks and leptons. Leptoquarks are color-triplet bosons that carry both lepton and baryon numbers. Their other quantum numbers, like spin, (fractional) electric charge and weak isospin, vary among theories. Leptoquarks are encountered in various extensions of the Standard Model.
In general extra elementary particles in a new model modify the effect of higher loop corrections in the calculations of an interaction. The fact that leptoquarks carry lepton number means that higher order loop corrections can be different for decays that end up in leptons depending on the type of lepton and the mass of the leptoquark, thus affecting universality. | {
"domain": "physics.stackexchange",
"id": 80051,
"tags": "particle-physics, magnetic-moment, beyond-the-standard-model, leptons"
} |
What chemicals are used in receipt paper? | Question: Most "invisible inks" turn brown when heated and they take a while to transform. I would like to make (or obtain) an "ink" that responds to the heat of an iron and permanently turns black or a similarly dark color - instantly. Thermal receipt paper does this where the paper is white until it is heated with almost instant response times AND the "ink" has a permanence of sorts (e.g. sticking a receipt in a freezer won't revert the reaction). What chemical combinations in what amounts will do an equivalent transformation?
Bonus question: What chemical combinations would create similar effects for standard palette colors? That is, transparent/hidden until heat from an iron is applied, whereupon it instantly appears.
My goal is to use such an ink with paintbrushes and fountain pens on canvas. Black is probably sufficient, but knowing how to make standard colors would dramatically expand my options. The ink ideally shouldn't be toxic to touch after drying nor emit toxic fumes when heated with an iron at a "Cotton" setting. It can be semi-toxic while wet, but nothing that will melt nitrile gloves, fingers, or human lungs. I will, of course, wear sufficient protection at all times, as I don't really need any chemicals leaching through my skin. Having a drawing/painting appear instantly in front of an audience from a seemingly blank canvas would be very dramatic, and painting with invisible ink will be an interesting adventure.
Answer: Have a look at the Wikipedia article on Thermal Paper. The whole trick is in the three components, which are mixed as solid substances and applied as a coating to the paper. When you heat the paper, the solid substances melt and the coloring reaction occurs.
For this you need finely pulverized chemicals, stirred into a suspension in some volatile solvent, which you then apply to the canvas. It should not be a big problem; see e.g. a patent on Thermal Paper.
If all this effort is for an artistic purpose, I would suggest as a first try approaching the companies manufacturing thermal paper (there should be a lot of them) and asking them for a bottle of the prepared suspension, telling them your plans. The success rate would not be high, but hopefully non-zero, and it saves you 99% of the work.
Additionally, the painting itself could be quite fun; I would expect the components to be UV-active, so a black light could help you. | {
"domain": "chemistry.stackexchange",
"id": 1786,
"tags": "heat"
} |
Height dependency when adding volume from below to a fluid column | Question: Suppose there is a fluid column of height H and there is a screw already inserted from outside the column, entering through the bottom of the column. The screw has a total length L and only 0.1*L is inside the fluid column.
Let Fa be the force required to screw the screw into the column an additional 0.5*L.
Let Fb be the force required to do the same thing when H changes to 2*H.
My question is: is Fb = 2*Fa ?
If the screw end (the part that is into the column) had the shape of a sphere, would that change the relation between Fa and Fb?
Answer: Yes, $F_b \approx 2 F_a$.
The force needed to advance the screw into the water is $F = PA$ where $P$ is the pressure on the end of the screw and $A$ is the cross-sectional area of the screw. When the height of the water is doubled, the pressure is also doubled.
The relation is not exact because your scenario doubles the height of the water from the bottom of the vessel, which is not exactly the same thing as doubling the height of water above the end of the screw.
Changing the shape of the screw doesn't matter; you are still doing work against pressure that is twice as great, no matter the shape of the screw.
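The $F = PA$ relation above can be made concrete with a tiny numeric sketch; the density, gravity, and cross-sectional area values below are illustrative choices of mine, and atmospheric pressure and friction are neglected, as in the answer:

```python
RHO, G = 1000.0, 9.81   # water density (kg/m^3) and gravity (m/s^2)
AREA = 3.2e-5           # screw cross-sectional area (m^2) -- an arbitrary illustrative value

def insertion_force(height_m):
    # F = P * A with gauge pressure P = rho * g * H at the bottom of the column
    return RHO * G * height_m * AREA

Fa = insertion_force(1.0)   # column of height H = 1 m
Fb = insertion_force(2.0)   # column of height 2H
```

Since the pressure is linear in the water height, doubling the height doubles the force exactly in this idealized model.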
Of course, for a real screw there is usually friction as you screw it in; I've ignored that here. | {
"domain": "physics.stackexchange",
"id": 19865,
"tags": "fluid-dynamics, pressure"
} |
Seeing Azha (Eta Eridani) in Northern Hemisphere | Question: When I use Stellarium, I am unable to see Azha in London or Atlanta, Georgia. However, it is possible to see Azha in Tokyo and Auckland.
I am not changing the time, only the location. I presume that my time is converted to their time. So for example, if my time is 11:30 (London) then when I select Tokyo, I should see what the sky is like at 11:30 Tokyo time.
I can see Azha in Auckland too. I would've thought the ability to see a star was based on the N/S hemisphere, not the E/W hemisphere. Is it right that I should be able to see Azha in the Eastern Hemisphere but not in the West, or am I using Stellarium incorrectly?
I am asking because of the Eta Eridanids. They might be an obscure or disappointing meteor shower. The reason for choosing such differing locations was to see where in the world it's possible to see them.
Answer: Eta Eridani has a declination of about -8 degrees, so it is visible from most of the Northern Hemisphere. Eridanus is a constellation that is next to Orion, and like Orion, it is easiest to see during winter.
The Eta Eridanids are a meteor shower that peaks on about the 11th of August. The radiant on that date is just south of Eta Eridani, at about -11 degrees. On the 11th of August, Eta Eridani rises at about 3am in London, about an hour before twilight.
Tokyo, which is much further south and doesn't use daylight saving, sees Eta Eridani rise at about 1am (local time).
When you change location in Stellarium the time doesn't change. So if you change from London at 3am in August (UTC+1), the time in Tokyo will be 11 am (UTC+9).
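As a sanity check on the visibility claim, the altitude of a star at upper culmination (for a star crossing the meridian south of the zenith, ignoring refraction) is 90° minus the latitude plus the declination. A rough Python sketch, where the declination and city latitudes are approximate figures of mine:

```python
def culmination_altitude(latitude_deg, declination_deg):
    # Altitude in degrees of a star at upper culmination, for a star that
    # crosses the meridian south of the zenith; refraction is ignored.
    return 90.0 - latitude_deg + declination_deg

ETA_ERI_DEC = -8.9   # approximate declination of Eta Eridani (degrees)

london = culmination_altitude(51.5, ETA_ERI_DEC)   # around 30 degrees up
tokyo = culmination_altitude(35.7, ETA_ERI_DEC)    # around 45 degrees up
```

Both values are comfortably above the horizon, so the star is visible from either city when it is up at night; the more southern site simply sees it higher.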
The Eta Eridanids are a minor shower with a ZHR of about 6 meteors per hour. They will be visible whenever the radiant is above the horizon, but from more northern locations it is only visible for a short time. It also clashes with the Perseids. One may only see a couple of Eta Eridanids during the viewing window, which is why the shower was only identified in 2005. | {
"domain": "astronomy.stackexchange",
"id": 2395,
"tags": "star"
} |
How to form 6-31G basis set; specific instructions | Question: I'm new to basis sets. I read that for 3-21G, given the following basis set for carbon (from Basis Set Exchange):
C S
172.2560000 0.0617669
25.9109000 0.3587940
5.5333500 0.7007130
C SP
3.6649800 -0.3958970 0.2364600
0.7705450 1.2158400 0.8606190
C SP
0.1958570 1.0000000 1.0000000
You form three 'atomic orbitals' (is that the right term?) as:
let f(alpha) = (2*alpha/pi)^(3/4) * exp(-alpha*r^2)
O1 = 0.061 f(172) + 0.358 f(25) + 0.7 f(5.5)
O2 = -.39 f(3.66) + 1.21 f(.77) + 1 f(.195)
O3 = 0.23 f(3.66) + 0.86 f(.77) + 1 f(.195)
I truncated the numbers to make it easier to type...
Assuming that is correct, that makes sense to me. But when I look at the 6-31G basis set data for carbon:
C S
3047.5249000 0.0018347
457.3695100 0.0140373
103.9486900 0.0688426
29.2101550 0.2321844
9.2866630 0.4679413
3.1639270 0.3623120
C SP
7.8682724 -0.1193324 0.0689991
1.8812885 -0.1608542 0.3164240
0.5442493 1.1434564 0.7443083
C SP
0.1687144 1.0000000 1.0000000
C SP
0.0438000 1.0000000 1.0000000
From the naming nomenclature, I thought the only difference would be 6 primitive Gaussians for the inner shell, which I see, but... I was not expecting that third SP line. I don't know how to use it.
I want to write:
O1 = .0018 f(3047) ...
O2 = -.119 f(7.86) + ... + 1.0 f(.168)
O3 = 0.068 f(7.86) + ... + 1.0 f(.168)
Just like I did for the 3-21G case.
Maybe I could add:
O4 = -.119 f(7.86) + ... + 1.0 f(0.0438)
O5 = 0.068 f(7.86) + ... + 1.0 f(0.0438)
But I don't think this is right because I read online that there should be 9 orbitals formed (although they didn't explain how to get them).
Thanks for the help!
Update
Ok, so I looked up 6-311G instead of 6-31G. However, I still don't see how to form 9 orbitals from 6-31G:
C S
3047.5249000 0.0018347
457.3695100 0.0140373
103.9486900 0.0688426
29.2101550 0.2321844
9.2866630 0.4679413
3.1639270 0.3623120
C SP
7.8682724 -0.1193324 0.0689991
1.8812885 -0.1608542 0.3164240
0.5442493 1.1434564 0.7443083
C SP
0.1687144 1.0000000 1.0000000
I really need to see all 9 orbitals written out explicitly.
Update 2
Ok, with more research I finally found this
So it looks like I'm missing a function. I need to define:
f1(alpha) = (2*alpha/pi)^(3/4) exp(-alpha r^2)
f2(alpha, x) = (128 alpha^5/pi^3)^(1/4) x exp(-alpha r^2)
Then the 9 orbitals are:
O1 = .001 f1(3047) + .014 f1(457) + ... + .362 f1(3.16)
O2 = -.119 f1(7.68) + -.16 f1(1.88) + 1.14 f1(.54)
O3 = 1.0 f1(.168)
O4 = .068 f2(7.68, x) + .316 f2(1.88, x) + .744 f2(.54, x)
O5 = 1.0 f2(.168, x)
O6 = .068 f2(7.68, y) + .316 f2(1.88, y) + .744 f2(.54, y)
O7 = 1.0 f2(.168, y)
O8 = .068 f2(7.68, z) + .316 f2(1.88, z) + .744 f2(.54, z)
O9 = 1.0 f2(.168, z)
Can anyone confirm this? Or, if not confirm, specify what's wrong?
Thanks!
Answer: I believe you have not correctly interpreted the first basis set, which should read:
O1 = 0.061 f(172) + 0.358 f(25) + 0.7 f(5.5)
O2 = -.39 f(3.66) + 1.21 f(.77)
O3 = 1 f(.195)
Note that these are only the s-orbitals. Using the coefficients in the right-most column, two triplets of p-orbitals are also present. By doing it this way for the second basis set, which is 6-31G, you would obtain 3 s-orbitals and two triplets of p-orbitals, making it nine AOs in total. Note that all orbitals in a tuple must have the same contraction scheme, otherwise, the calculation would not be rotationally invariant (to within integration accuracy).
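To make the s-part of the contraction concrete, here is a small Python sketch of the three contracted s-orbitals of 6-31G carbon, built from the exponents and coefficients quoted in the question. The function names and the radial-only form are my own simplification; the p-orbitals would multiply by x, y or z with the p-type normalization instead.

```python
import numpy as np

def s_gaussian(alpha, r):
    # normalized primitive s-type Gaussian: (2*alpha/pi)^(3/4) * exp(-alpha*r^2)
    return (2.0 * alpha / np.pi) ** 0.75 * np.exp(-alpha * r**2)

# The three contracted s-orbitals of 6-31G carbon (exponent, coefficient pairs
# taken from the basis set data in the question).
def O1(r):
    params = [(3047.5249, 0.0018347), (457.36951, 0.0140373),
              (103.94869, 0.0688426), (29.210155, 0.2321844),
              (9.286663, 0.4679413), (3.163927, 0.3623120)]
    return sum(c * s_gaussian(a, r) for a, c in params)

def O2(r):
    params = [(7.8682724, -0.1193324), (1.8812885, -0.1608542),
              (0.5442493, 1.1434564)]
    return sum(c * s_gaussian(a, r) for a, c in params)

def O3(r):
    return s_gaussian(0.1687144, r)
```

A quick consistency check on the coefficients is that each contracted orbital integrates to (approximately) unit norm with the radial weight $4\pi r^2\,dr$.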
Aside: Why does this basis specify s- and p-orbitals this way?
Depending on the integral scheme, one can reuse certain intermediates of some integrals for others and save some computational time. However, modern programs tend to work better with segmented basis sets (i.e. basis sets that do not allow for reuse, because the $\alpha$ exponents are all different). The results are also of higher quality. IMHO, the Pople basis sets are only of historical interest or for hobbyists with very limited resources. | {
"domain": "chemistry.stackexchange",
"id": 9266,
"tags": "computational-chemistry, orbitals, basis-set"
} |
Performant Sort function for big arrays | Question: In sort(arr), I want to sort an array. Children must be beneath their parent, and children of the same parent are sorted using their orderPos value.
I guess the complexity of this algorithm is quite high; is there a way to make it more performant for very long lists?
(the language used here is typescript)
interface data { id:number; parentPK:number; orderPos:string; children: data[] }
function sort(arr:data[]) { const sorted : data[] = [];
let parent: data = {children: [] , id : 0 , orderPos:'0' , parentPK:0} ; let i = 1;
// detect parent of all , parent pk is -1 , assumes theres only one root element whose parent id is -1
arr.forEach(x => {
if (x.parentPK == -1) {
parent = x;
sorted.push(x);
arr.splice(arr.indexOf(x), 1);
return;
} }); // root element is now first in the array // get root's children and place them beneath their parent (sorted by orderPos)
let temp: data[] = arr
.filter(x => x.parentPK === (parent && parent.id))
.sort((a, b) => parseInt(a.orderPos, 10) - parseInt(b.orderPos, 10)); sorted.push(...temp); temp.forEach(x => parent.children.push(x)); while (i < arr.length - 1) {
// new parent
parent = sorted[sorted.indexOf(parent) + 1];
// place children beneath parent
temp = arr.filter(x => x.parentPK === (parent && parent.id)).sort((a, b) => parseInt(a.orderPos, 10) - parseInt(b.orderPos, 10));
temp.forEach(x => parent.children.push(x));
sorted.splice(sorted.indexOf(parent) + 1, 0, ...temp);
i++; }
return sorted; }
let array : data[] = [ { id: 4, parentPK : 1, orderPos:'100' , children:[]}, {id:1, parentPK : -1, orderPos:'1', children:[] }, { id:2, parentPK : 1, orderPos:'51',children:[] }, {id:3, parentPK : 2, orderPos:'1',children:[] }, {id:5, parentPK : 2, orderPos:'10',children:[] } ]
let sortedArray = sort(array)
console.log('sorted array : ',sortedArray.map(element => [element.id, element.parentPK,element.orderPos] ))
Answer:
The main improvement that can be made in the algorithm is recognizing that you can reduce more expensive sorting (and make the algorithm a lot easier to understand at a glance) by arranging the data into a structure indexed by the parent ID first. This operation is O(n). (In contrast, sorting is O(n log n)) The data structure will look like:
// This is not runnable, I'm just using a snippet to hide this long code
const itemsByParentId = {
// Key: parent ID
"1": [
// Value: Array of children which have that ID as a parent (parentPK)
{
"id": 4,
"parentPK": 1,
"orderPos": "100",
"children": []
}, {
"id": 2,
"parentPK": 1,
"orderPos": "51",
"children": []
}],
"2": [{
"id": 3,
"parentPK": 2,
"orderPos": "1",
"children": []
}, {
"id": 5,
"parentPK": 2,
"orderPos": "10",
"children": []
}],
"-1": [{
"id": 1,
"parentPK": -1,
"orderPos": "1",
"children": []
}]
}
Then:
For each subarray (where all elements with the same parent are grouped together), sort it by orderPos (there's no getting around the O(n log n) complexity here)
Make a function that, given a parent, finds child elements in the data structure with that parent ID (O(1)), pushes them to the output array (O(n)), and recursively iterates over its children, depth-first, as the algorithm requires.
function sort(arr: data[]) {
const itemsByParentId: {
[id: number]: Array<data>;
} = {};
// (1) Group by parent ID
for (const item of arr) {
if (!itemsByParentId[item.parentPK]) {
itemsByParentId[item.parentPK] = [];
}
itemsByParentId[item.parentPK].push(item);
}
// (2) Sort each subarray
for (const childrenArr of Object.values(itemsByParentId)) {
childrenArr.sort((a, b) => Number(a.orderPos) - Number(b.orderPos));
}
// (3) Recursively add parents and their children, depth-first
const sorted: data[] = [];
const addParent = (parent: data) => {
sorted.push(parent);
if (!itemsByParentId[parent.id]) {
return;
}
for (const child of itemsByParentId[parent.id]) {
parent.children.push(child);
addParent(child)
}
};
addParent(itemsByParentId[-1][0]);
return sorted;
}
Live snippet of the compiled code:
function sort(arr) {
const itemsByParentId = {};
for (const item of arr) {
if (!itemsByParentId[item.parentPK]) {
itemsByParentId[item.parentPK] = [];
}
itemsByParentId[item.parentPK].push(item);
}
for (const childrenArr of Object.values(itemsByParentId)) {
childrenArr.sort((a, b) => Number(a.orderPos) - Number(b.orderPos));
}
const sorted = [];
const addParent = (parent) => {
sorted.push(parent);
if (!itemsByParentId[parent.id]) {
return;
}
for (const child of itemsByParentId[parent.id]) {
parent.children.push(child);
addParent(child);
}
};
addParent(itemsByParentId[-1][0]);
return sorted;
}
let array = [{ id: 4, parentPK: 1, orderPos: '100', children: [] }, { id: 1, parentPK: -1, orderPos: '1', children: [] }, { id: 2, parentPK: 1, orderPos: '51', children: [] }, { id: 3, parentPK: 2, orderPos: '1', children: [] }, { id: 5, parentPK: 2, orderPos: '10', children: [] }];
let sortedArray = sort(array);
console.log('sorted array : ',sortedArray.map(element => [element.id, element.parentPK,element.orderPos] ))
For a review of your existing code:
Put each statement on its own line, usually - this significantly improves readability and means that readers of the code don't have to scroll horizontally.
Indent when starting a new block, and indent when chaining methods off an expression above - for similar reasons to above, it makes it a lot easier to pick up on the logic at a glance. (Also use the same indentation across the same block.) There are many IDEs which will automatically format your code to adhere to some sort of standard like this. They're well worth using.
parentPK? Currently, an object's parentPK value links to another object's id value. I'm not sure what these objects represent in your wider script, but you might consider if it would be clearer if the parentPK property was named parentID or idOfParent instead, to make the link clearer.
Declare variables close to where they'll be used, to reduce cognitive overhead. For example, rather than:
let parent;
let i = 1;
// many lines of code that doesn't reference i
while (i < arr.length - 1) {
you'll usually want to do
let parent;
// many lines of code that doesn't reference i
let i = 1;
while (i < arr.length - 1) {
You could also consider a for loop, since you increment i at the end:
for (let i = 1; i < arr.length - 1; i++) {
Use strict equality with === and !== - avoid != and ==, since they have strange coercion rules a reader of the script should not have to have memorized in order to understand the logic.
return does nothing in a forEach callback except terminate the current callback. It doesn't stop further callbacks from running. If you want to find the index of an element with a particular property in an array, use findIndex:
const rootIndex = arr.findIndex(item => item.parentPK === 1);
sorted.push(arr[rootIndex]);
arr.splice(rootIndex, 1);
Unnecessary comparison? I think parent will always exist, so the .filter(x => x.parentPK === (parent && parent.id)) check of parent first is superfluous. (If it might not exist, then you could simplify with optional chaining: x.parentPK === parent?.id) | {
"domain": "codereview.stackexchange",
"id": 40121,
"tags": "javascript, algorithm, sorting, typescript, complexity"
} |
xbee communication for Groovy on Ubuntu 12.04 | Question:
I am trying to use an Xbee Series 1 Pro to communicate from an Arduino to a laptop. I was planning on using rosserial_xbee, but the stack was removed from the metapackage. Is there any way to use the Xbee for communication?
Originally posted by tonybaltovski on ROS Answers with karma: 2549 on 2013-03-24
Post score: 3
Answer:
Was recently fixed.
Originally posted by tonybaltovski with karma: 2549 on 2013-08-01
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 13507,
"tags": "rosserial"
} |
Could not find a package configuration file in catkin workspace | Question:
hello everyone,
I want to use the rosbuild packages "seds" and "mathlib" in my catkin workspace (which has a catkin package "iiwa_ros"), so based on the website http://wiki.ros.org/catkin/migrating_from_rosbuild#Prerequisites I migrated the rosbuild packages to catkin packages and added them to my catkin workspace. Everything is OK, and catkin build can build all the packages in the workspace. But when I try to add `#include "seds/**.h"` to the .cpp files in my "iiwa_ros" package, catkin build says something is wrong:
Errors << iiwa_ros:check /home/edward/kuka_catkin/logs/iiwa_ros/build.check.057.log
CMake Warning at /opt/ros/indigo/share/catkin/cmake/catkinConfig.cmake:76 (find_package):
Could not find a package configuration file provided by "seds" with any of
the following names:
sedsConfig.cmake
seds-config.cmake
Add the installation prefix of "seds" to CMAKE_PREFIX_PATH or set
"seds_DIR" to a directory containing one of the above files. If "seds"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
CMakeLists.txt:6 (find_package)
CMake Error at /opt/ros/indigo/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
Could not find a package configuration file provided by "seds" with any of
the following names:
sedsConfig.cmake
seds-config.cmake
Add the installation prefix of "seds" to CMAKE_PREFIX_PATH or set
"seds_DIR" to a directory containing one of the above files. If "seds"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
CMakeLists.txt:6 (find_package)
make: *** [cmake_check_build_system] Error 1
cd /home/edward/kuka_catkin/build/iiwa_ros; catkin build --get-env iiwa_ros | catkin env -si /usr/bin/make cmake_check_build_system; cd -
...............................................................................
Failed << iiwa_ros:check [ Exited with code 2 ]
Failed <<< iiwa_ros [ 0.4 seconds ]
[build] Summary: 18 of 19 packages succeeded.
[build] Ignored: 2 packages were skipped or are blacklisted.
[build] Warnings: None.
[build] Abandoned: None.
[build] Failed: 1 packages failed.
and my CMakeLists.txt is as follows; I only added seds to find_package:
cmake_minimum_required(VERSION 2.8.3)
project(iiwa_ros)
set(CMAKE_CXX_FLAGS "-std=c++11 ${CMAKE_CXX_FLAGS}")
find_package(catkin REQUIRED COMPONENTS
iiwa_msgs
tf
cmake_modules
seds
)
catkin_package(
INCLUDE_DIRS include
LIBRARIES ${PROJECT_NAME}
CATKIN_DEPENDS iiwa_msgs
)
## Specify additional locations of header files
include_directories(include
${catkin_INCLUDE_DIRS}
)
## Declare a cpp library
add_library(${PROJECT_NAME}
include/iiwaRos.h
src/iiwaRos.cpp
#~/kuka_catkin/src/seds/include/GMR.h
)
## Declare a cpp executable
add_executable(${PROJECT_NAME}-example
src/main.cpp
)
## Add dependence to the iiwa_msg module for the executable
add_dependencies(${PROJECT_NAME}-example
iiwa_msgs_generate_messages_cpp)
## Add dependence to the iiwa_msg module for the library
add_dependencies(${PROJECT_NAME}
iiwa_msgs_generate_messages_cpp)
## Specify libraries to link a library or executable target against
target_link_libraries(${PROJECT_NAME}-example
${PROJECT_NAME}
${catkin_LIBRARIES}
)
install(DIRECTORY include/
DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION})
and the package.xml
<?xml version="1.0"?>
<package>
<name>iiwa_ros</name>
<version>1.4.0</version>
<description>The iiwa_ros package</description>
<author email="salvo.virga@tum.de">Salvo Virga</author>
<maintainer email="salvo.virga@tum.de">Salvo Virga</maintainer>
<license>BSD</license>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>roscpp</build_depend>
<build_depend>iiwa_msgs</build_depend>
<run_depend>iiwa_msgs</run_depend>
<build_depend>seds</build_depend>
<run_depend>seds</run_depend>
</package>
so, can anybody help?
Thanks very much!
thanks the help of mgruhler @mgruhler
And the CMakeLists.txt and package.xml of the migrated packages are shown below:
CMakeLists.txt and package.xml of "seds":
CMakeLists.txt of "seds":
cmake_minimum_required(VERSION 2.8.3)#SEDS2.4.6
project(seds)
#set the default path for built executables to the "bin" directory
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
#set the default path for built libraries to the "lib" directory
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)
set(CMAKE_CXX_FLAGS "-std=c++11 ${CMAKE_CXX_FLAGS}")
#common commands for building c++ executables and libraries
FILE(GLOB theSourceFiles src/*.cpp)
FILE(GLOB theHeaderFiles include/*.h)
find_package(catkin REQUIRED COMPONENTS
roscpp
std_msgs
message_generation
mathlib
)
catkin_package(
INCLUDE_DIRS include
LIBRARIES ${PROJECT_NAME}
CATKIN_DEPENDS roscpp message_runtime #pass the dependencies to the package in find_package
)
## Specify additional locations of header files
include_directories(include ${catkin_INCLUDE_DIRS})
## Declare a cpp library
add_library(${PROJECT_NAME}
include/GMR.h
src/GMR.cpp
)
package.xml of "seds":
<?xml version="1.0"?>
<package>
<name>seds</name>
<version>1.4.0</version>
<description>The iiwa_ros package</description>
<author email="mohammad.khansari@epfl.ch">Mohammad Khansari</author>
<maintainer email="mohammad.khansari@epfl.ch">Mohammad Khansari</maintainer>
<license>BSD</license>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>message_generation</build_depend>
<build_depend>roscpp</build_depend>
<run_depend>roscpp</run_depend>
<run_depend>message_runtime</run_depend>
<build_depend>mathlib</build_depend>
</package>
and the CMakeLists.txt of "mathlib":
cmake_minimum_required(VERSION 2.8.3)#2.4.6
project(mathlib)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DCHECK_N_WRAP_MALLOCS")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,--wrap=malloc,--wrap=free")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} -Wl,--wrap=malloc,--wrap=free")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -Wl,--wrap=malloc,--wrap=free")
set(CMAKE_CXX_FLAGS "-std=c++11 ${CMAKE_CXX_FLAGS}")
#set the default path for built executables to the "bin" directory
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
#set the default path for built libraries to the "lib" directory
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)
FILE(GLOB theSourceFiles src/*.cpp)
FILE(GLOB theHeaderFiles include/*)
find_package(catkin REQUIRED COMPONENTS
roscpp
std_msgs
message_generation)
catkin_package(
INCLUDE_DIRS include
LIBRARIES ${PROJECT_NAME}
CATKIN_DEPENDS roscpp message_runtime #pass the dependencies to the package in find_package
)
## Specify additional locations of header files
include_directories(include ${catkin_INCLUDE_DIRS})
### Declare a cpp library
add_library(${PROJECT_NAME}
${theSourceFiles} ${theHeaderFiles})
and the package.xml of "mathlib":
<?xml version="1.0"?>
<package>
<name>mathlib</name>
<version>1.4.0</version>
<description>Klas Kronander</description>
<author email="mohammad.khansari@epfl.ch">Klas Kronander</author>
<maintainer email="mohammad.khansari@epfl.ch">Klas Kronander</maintainer>
<license>BSD</license>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>message_generation</build_depend>
<build_depend>roscpp</build_depend>
<run_depend>roscpp</run_depend>
<run_depend>message_runtime</run_depend>
</package>
the package "seds" use some of the function in package "mathlib".
Originally posted by RLoad on ROS Answers with karma: 13 on 2018-11-22
Post score: 1
Original comments
Comment by mgruhler on 2018-11-22:
@RLoad please use the preformatted text button (the one with 101010) on it, when copy-pasting code the next time. This just makes reading much easier :-)
So, what you are saying is: you converted seds and mathlib from rosbuild to catkin? There is probably something wrong there.
Comment by mgruhler on 2018-11-22:
Please show the CMakeLists.txt and package.xml of the migrated packages.
Comment by RLoad on 2018-11-22:
Thanks a lot for helping me change the text; it was my first time copying code into ROS Answers.
And the CMakeLists.txt and package.xml of the migrated packages are shown.
Comment by RLoad on 2018-11-22:
thank your for your reminder @mgruhler
I'm not familiar with writing CMakeLists.txt, so I read the wiki page: http://wiki.ros.org/catkin/CMakeLists.txt and changed my CMakeLists.txt and package.xml.
I call catkin_package and use include_directories and add_library to declare the .h and .cpp files.
Comment by RLoad on 2018-11-22:
but the problem still exists, and I can't find where the error is. Thanks a lot for your help~
Comment by mgruhler on 2018-11-23:
the rosbuild_* calls are for rosbuild, not catkin.
Please check again the migration guide you linked, as well as the catkin documentation. This is crucial to make this work. If you have specific questions, please come back here.
Comment by RLoad on 2018-11-23:
Sorry for the misunderstanding: the rosbuild_* calls are in the CMakeLists.txt of the old dry packages before migration; I showed them to describe the migration process more clearly. The CMakeLists.txt and package.xml of the wet packages have been updated as you advised. I will check them again based on the wiki.
Comment by RLoad on 2018-11-24:
the problem was solved by catkin clean and rebuilding the catkin workspace, but I still thank you for your help @mgruhler. The migration process is right, but there are still some problems using these two packages, though they are not part of this discussion.
Answer:
thx for the update @RLoad.
There is something seriously wrong with the CMakeLists.txt that you have there. Did you really follow the migration guide?
Just as a hint, but not complete errors:
you need to find_package seds in the mathlib package, if you want to use this there.
you need to call catkin_package. This is commented.
you don't compile an executable or library. Or are both mathlib and seds header-only?
you should not change the output paths, afaik
There are quite a few more things you could do to improve this, but I guess this is enough for now...
Originally posted by mgruhler with karma: 12390 on 2018-11-22
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by RLoad on 2019-04-18:
Thanks for your help, the code works now! I think I should learn more knowledge about ROS, it's so convenient for me! | {
"domain": "robotics.stackexchange",
"id": 32089,
"tags": "ros, catkin, build, package, ros-indigo"
} |
Neural networks of arbitrary/general topology? | Question: Usually neural networks consist of layers, but is there a research effort that tries to investigate more general topologies for connections among neurons, e.g. arbitrary directed acyclic graphs (DAGs)?
I guess there can be 3 answers to my question:
every imaginable DAG topology can be reduced to the layered DAGs already actively researched, so there is no sense in seeking more general topologies;
general topologies exist, but there are fundamental restrictions why they are not used, e.g. maybe learning does not converge in them, maybe they generate chaotic oscillations, maybe they generate bifurcations and do not provide stability;
general topologies exist and are promising, but scientists are not ready to work with them, e.g. maybe they have no motivation, standard layered topologies are good enough.
But I have no idea, which answer is the correct one. Reading the answer on https://stackoverflow.com/questions/46569998/calculating-neural-network-with-arbitrary-topology I start to think that answer 1 is the correct one, but there is no reference provided.
If answer 3 is correct, then a big revolution can be expected. E.g. layered topologies in many cases reduce learning to matrix multiplication, and good tools for this have been created - TensorFlow software and dedicated processors. But there seems to be no software or tools for general topologies, if they indeed make sense.
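To make the question concrete, here is a minimal sketch of what "general topology" means computationally: a forward pass over an arbitrary DAG of neurons, evaluated in dependency order. All names and the wiring are my own illustration, not taken from any framework.

```python
import math

def dag_forward(edges, inputs, n_nodes):
    """edges maps node -> list of (predecessor, weight); inputs maps
    source node -> value. Nodes are numbered 0..n_nodes-1.
    Assumes the graph is acyclic (a DAG), otherwise this never finishes."""
    values = dict(inputs)
    remaining = [n for n in range(n_nodes) if n not in values]
    while remaining:
        for n in list(remaining):
            preds = edges.get(n, [])
            # Evaluate a node once all of its predecessors have values.
            if all(p in values for p, _ in preds):
                values[n] = math.tanh(sum(values[p] * w for p, w in preds))
                remaining.remove(n)
    return values

# A non-layered wiring: node 3 reads directly from input 0 AND from
# hidden node 2 -- a skip connection that strict layering cannot express.
edges = {2: [(0, 0.5), (1, -0.3)], 3: [(0, 1.0), (2, 2.0)]}
out = dag_forward(edges, {0: 1.0, 1: 0.5}, 4)
print(out[3])
```

Any layered network is a special case of this evaluation; the converse is not true without inserting pass-through units, which is part of what the three candidate answers disagree about.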
Answer: The simplistic neural networks that have been given away for free, after they proved insufficient by themselves in field use, consist solely of two orthogonal dimensions.
Layer width — the number of ordinal or floating point numbers that represent the signal path through any given layer; each layer comprises an array of layer elements
Network depth — the number of layers in the primary signal path, which is the number of activation function arrays, convolution kernels, or other such elements
However, in large corporations that have AI pipelines, this is not the case. We are beginning to see more interesting topologies in open source. We see this in generative systems for images, text, and speech. We see this in the control of robots. The truth is that these more sophisticated topologies have been in play for years, but were just not appearing in the open source community because they were company confidential. Enough academic work, releasing of portions of corporate IP, and the accumulation of independent OSS work has occurred to start to see these topologies in GIT repos.
Cyclic Not Acyclic
Artificial network topologies are generally cyclic, not acyclic in terms of their causality or their signal pathways, depending on how you depict them theoretically. These are three basic examples from among dozens in the literature and in the open source repositories.
Back-propagation represents the introduction of a deliberate cycle in signal paths in a basic multilayer perceptron, making that topology a sequence of layers represented by vertices, connected sequentially by a set of directed edges representing forward propagation, and a set of directed edges in the reverse direction to distribute the corrective error determined at the network output according to the principle of gradient descent. For efficiency, the corrective signal is distributed recursively backward through the layers to the $N - 1$ matrices of parameters attenuating the signals between $N$ layers. Back propagation requires the formation of these $N - 1$ cycles for convergence to occur.
In a generative adversarial network (GAN), we have the signal path of each of the two networks feeding the training criteria of the other. Such a topological arrangement is like negative feedback in a stable control system in that an equilibrium is formed between the generative network and discriminative network. The two directed edges, (a) the one that causally affects G with D's result, and (b) the one that causally affects D with G's result, create a cycle on top of the cycles in each of G and D.
Attention-based networks, touted as theoretically advantageous over LSTM (which has been dominating over CNNs), have a much more complex topology and more cycles above the supervisory layer than GANs do.
Analysis of Answer One of Three
It is true that every directed graph can be realized in an arbitrarily large RNN because they are Turing complete, but that doesn't mean they are a great topology for all finite algorithms.
Turing was aware that his punched tape model was not the best general purpose, high speed computing architecture. He was not intending to prove anything about computing speed but rather what could be computed. His Turing machine had a trivial topology deliberately. He wanted to illustrate his completeness theorem to others and resurrect the forward movement of rationalism after Gödel disturbed it with his two incompleteness theorems.
Similarly, John von Neumann proposed his computing architecture, with a central processing unit (CPU) and unified data and instruction bus, to reduce the number of relays or vacuum tubes, not to maximize parallel algorithm execution. That topology as a directed graph has the instruction controller and the arithmetic unit in the center and everything else branching out from the data and address bus leading from them.
That a topology can accomplish a task is no longer a justification for persisting in the use of that topology, which is why Intel acquired Nervana, which deviates from traditional von Neumann architecture, DSP architecture, and the current CUDA core architecture that NVidia GPUs use and offer for artificial network realization through C libraries that can be called via integrated Java and Python adapters.
There is definitely sense to seek for more general topologies, if they are fit for the purpose, just as with Turing's or von Neuman's.
Analysis of Answer Two of Three
General topologies exist, the most economically viable of which are the CUDA cores introduced by NVidia, which can be configured for MLPs, CNNs, RNNs, and general 2D and 3D video processing. They can be configured with or without cycles depending on the characteristics of the parallelism desired.
The realization of topologies unlike the Cartesian arrangements of activation functions in artificial networks or the kernel cells in convolution engines do have barriers to use, but they are not fundamental restrictions. The primary barrier is not one of hardware or software. It is one of linguistics. We don't think topologically because we don't talk topologically. That's what is great about this question's challenge.
FORTRAN began to dominate over LISP during the time when general purpose programming began to emerge in many corporations. That is not surprising because humans communicate in orthogonal ways. It is cultural. When a child scribbles, teachers are indoctrinated to say nice things but respond by drawing a shape. If the child draws a square, the teacher smiles. The child is given blocks. The books are rectangular. Text is justified into rectangles.
We can see this in building architecture dating back to Stonehenge. Ninety degree angles are clearly dominant in artificial things, whereas nature doesn't seem to have that bias.
Although directed graphs were easy to implement and traverse in recursive structures and were commonplace in the LISP community, FORTRAN's realization of vectors and matrices as one- and two-dimensional arrays was easier to grasp for those with less theoretical background in data structures.
The result is that even when learning ECMAScript (JavaScript), which has its seed in the LISP community and is not biased toward orthogonal data structures, people tend to proceed from HelloWorld.js to something with a basic loop in it, with an underlying array through which the loop iterates.
There are three wonderfully inquisitive and insightful phrases in answer two of three.
Maybe learning is not converging in them — Interestingly an algorithm cannot learn without a cycle. Directly applying a formula or converging using a known convergent series of terms does not qualify as learning. Gradient descent relies entirely on the cyclical nature of a corrective action at the end of each sample processing or batch of them.
Maybe they generate chaotic [oscillations] — This gets into chaos theory and control theory's concept of stability. They can do so, but so can a basic multilayer perceptron if the learning rate is set too high.
Maybe they generate bifurcations — Now we have fully entered the realm of chaos, which is arguably closely related to creativity. Mandelbrot proposed the relationship between new forms of order and the apparent chaotic behavior arising from the appropriate level of feedback in a system with signal path components that cannot be modelled with a first degree equation. Since then, we find that most phenomena in nature are actually strange attractors. The plot of the training of a network from a continuous feed of consistently distributed data in phase space will reveal ... you guessed it ... a strange attractor. When perturbations are deliberately injected into a training epoch from a pseudo-random number generator, the specific purpose is a bifurcation, so that the global optimum can be found when the training gets stuck in a local optimum.
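The bifurcation behavior described above can be reproduced with the classic logistic map, the standard textbook example of a feedback system passing through bifurcations into chaos as a single parameter grows (this is a generic illustration, not a model of any particular network):

```python
def logistic_orbit(r, x0=0.2, burn=500, keep=8):
    """Iterate x -> r*x*(1-x), discard a burn-in, return the settled orbit.
    r plays the role of the feedback strength (cf. a learning rate)."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return orbit

# r = 2.8: the orbit settles on a single fixed point.
# r = 3.2: past the first bifurcation, a stable period-2 oscillation.
print(logistic_orbit(2.8))
print(logistic_orbit(3.2))
```

Pushing r further (past about 3.57) yields the chaotic regime with its strange attractor, the same qualitative picture the answer attributes to over-driven training dynamics.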
Analysis of Answer Three of Three
General topologies exist and are promising, and researchers are ready to work with them. It is enthusiasts that can have a dismissive attitude. They don't yet understand the demos they've downloaded and painstakingly tweaked to run on their computer, they're about to launch their AI career amidst the growing demand from all the media hype, and now someone is introducing something interesting and not yet implemented in code. The motivational direction is generally to either dismiss or resist the creative proposals.
In this case, Google, CalTech, IBM, MIT, U Toronto, Intel, Tesla, Japan, and a thousand other governments, institutions, corporations, and open source contributors will solve that problem, provided people keep talking about topology and the restrictions inherent in purely Cartesian thinking.
Misunderstanding Topology to Mean Dimensionality or Topography
There has been some confusion in terms. The SO reference in the question is an example of thinking that changing an array dimension is changing the topology. If such were so, then there would be no change one could make to the geometry of an AI system that would not be topological. Topology can only have meaning if there are features that are not topological. When one draws a layer, they don't need to increase the height of the rectangle representing it if the number of activations, the width of the layer, is changed from 100 to 120.
I've also seen academic papers that called the texture or roughness of an error surface its topology. That completely undermines the concept of topology. They meant to use the term topography. Unfortunately neither the publisher nor the editor noticed the error.
Software or Tools
Most programming languages support directed graphs in recursive hashmaps. LISP and its derivatives supported them at a more machine instruction efficient level, and that's still the case. Object oriented databases and graph libraries exist and are in use. Google uses them extensively in web indexing and lookup. FaceBook's API is called the Graph API, because it is a query and insert API into the graph that is FaceBook's user data store.
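As a sketch of the "recursive hashmaps" idea, a directed graph with typed edges can be held in plain dicts and traversed directly (the node names and `follows` edge type here are made up for illustration, not any real API):

```python
# A directed graph as nested hash maps: node -> {edge_type: [targets]}.
graph = {
    "user:1": {"follows": ["user:2", "user:3"]},
    "user:2": {"follows": ["user:3"]},
    "user:3": {"follows": []},
}

def reachable(graph, start, edge_type="follows"):
    """All nodes reachable from `start` via edges of `edge_type`,
    using an iterative depth-first traversal."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, {}).get(edge_type, []))
    return seen

print(sorted(reachable(graph, "user:1")))
```

Nothing here requires orthogonal arrays; adding a new edge type is just another key in the per-node dict, which is essentially how graph query APIs expose their stores.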
The explosion is here in global software giants. There is open source for it. The revolution that is missing is among those who are not yet educated as to the meaning of topology, the difference between a hierarchy and a network or the role of feedback in any learning system.
Regarding Java and Python there are many barriers to the revolution in thinking, primarily these.
There are no keywords in either Java or Python to directly deal with directed graphs other than the idea of a class with references to instances of other classes, which is quite limited. None of these languages can add edge types with a single simple language construct.
There is no mapping to hardware yet, although Nervana allegedly developed one, and Intel acquired Nervana, so that barrier may evaporate soon.
The bias still exists in preschool, kindergarten, and first grade
Hilbert spaces are not generally taught in calculus
Graphviz and other graphing software that auto-generates diagrams from unconstrained directed or bidirectional graph representations have done much to bust through the barriers because the generated images are visible across the web. It may be through visual representations of graphs that linguistic representations, thought, hardware, and software begin to emerge representing the paradigm shift the question investigates.
It is not that constraints are not useful. Only some patterns and paradigms produce results, but since the results from the human brain demand attention, and the human brain is
Not at all orthogonal,
Not implemented using Cartesian neural patterns, and
Not topologically a box,
One can all but conclude that those are not particularly well chosen constraints. Neither is the acyclic criteria. Nature is cyclic, and intelligence probably requires it in many ways and at many levels. | {
"domain": "ai.stackexchange",
"id": 720,
"tags": "neural-networks, topology"
} |
Is there a difference between "cell line" and "cell strain"? | Question: I'm translating one Russian text where a cell line is transfected to produce a drug. I wonder if I can sometimes use "cell strain" or simply "strain" instead of "cell line" or instead of "producer" (продуцент, meaning "the producer of the drug", i.e. the cells that produce the drug).
I googled and found this answer on Yahoo answers:
Cells in culture can be passaged a finite number of times before reaching a crisis which can be compared with aging. The number of passages, before reaching crisis, has been termed the Hayflick limit and is related to the longevity of the species from which the tissue was originally derived.
Within the Hayflick limit, the cells are referred to as a cell strain.
Cells that survive the crisis and continue to grow are referred to as a cell line. Cell lines can also be derived directly from cancer cells. There are many properties that distinguish cell lines from cell strains, including altered chromosome number, changes at the cell membrane, and reduced requirement for certain growth factors.
Therefore:
A cell strain is defined as a euploid population of cells subcultivated once or twice in vitro, lacking the property of indefinite serial passage. Cell strains ultimately undergo degeneration or death (senescence).
A cell line is an aneuploid population of cells that can be grown in cultures indefinitely.
Is this true? Is a cell line always aneuploid and capable of indefinite growth, as opposed to being euploid and incapable of indefinite growth?
Answer: "Strain", in general, is usually used for whole organisms (whether they are unicellular or multicellular). For example, there are mouse strains.
"Cell line", on the other hand, is a very specific term that is used to refer cultured cells obtained from a multicellular organism. They need not be necessarily aneuploid.
Is a cell line always aneuploid and capable of indefinite growth, as
opposed to being euploid and incapable of indefinite growth?
Aneuploidy does not have any causative relationship with indefinite growth. Most cancerous cells evade the cell cycle checkpoints, which allows them to grow out of control. Some of these checkpoints also ensure proper segregation of chromosomes. Bypassing them leads to aneuploidy.
Are cell lines essentially immortal?
Usually primary cell cultures are not referred to as "cell lines" these days. However, originally, any cultured cell that grows in vitro is called cell line. From Freshney (Basic Principles of Cell Culture):
When cells are isolated from donor tissue, they may be maintained in a number of different ways. A simple small fragment of tissue that adheres to the growth surface, either spontaneously or aided by mechanical means, a plasma clot, or an extracellular matrix constituent, such as collagen, will usually give rise to an outgrowth of cells. This type of culture is known as a primary explant, and the cells migrating out are known as the outgrowth (Figs. 1.1, 1.2, See Color Plate 1). Cells in the outgrowth are selected, in the first instance, by their ability to migrate from the explant and subsequently, if subcultured, by their ability to proliferate. When a tissue sample is disaggregated, either mechanically or enzymatically (See Fig. 1.1), the suspension of cells and small aggregates that is generated will contain a proportion of cells capable of attachment to a solid substrate, forming a monolayer. Those cells within the monolayer that are capable of proliferation will then be selected at the first subculture and, as with the outgrowth from a primary explant, may give rise to a cell line.
Elsewhere, a cell strain is defined as positively selected subpopulation of cells:
A cell strain is a subpopulation of a cell line that has been positively selected from the culture, by cloning or some other method. A cell strain will often have undergone additional genetic changes since the initiation of the parent line. Individual cell strains may, for example, have become more or less tumorigenic than the established line, or they may be designated as a separate strain following transfection procedures.
This definition does not say anything about what kind of selection was used. You can select for cell that express GFP, for instance. So that strain would be simply designated as GFP+. | {
"domain": "biology.stackexchange",
"id": 6142,
"tags": "terminology, cell-culture"
} |
ekf_localization_node navsat_transform_node | Question:
Hi Tom! Remember my system:
-I have a IMU microstrain 3DMGX2
-GPS
-I want integrate with ekf
My configuration:
ekf_localization_node
-Inputs
IMU/DATA output
navsat_transform_node output
-Outputs
Odometry message
navsat_transform_node
-Inputs
IMU/DATA output
nmea serial GPS (NavSatFix)
Odometry message
-Outputs
odometry gps
With rqt_graph I can see:
Then I include a broadcast transform base_link -> IMU, and when I do this my rqt graph changes:
When I echo gps/odometry it is not correct; do I need to do another broadcast?
I uploaded my launch file and debug output to Dropbox:
https://www.dropbox.com/sh/ab23ou7fo7nw2cd/AAD722h-vmk0dCveJaWuOfZBa?dl=0
Thanks
EDIT 2: when I point the IMU towards magnetic north (positive X axis forward, with the serial cable on the opposite side), the IMU reports the following orientation:
X: 0
Y: 0
Z: 0.7
W: 0.7
Is it right?
Originally posted by Porti77 on ROS Answers with karma: 183 on 2015-03-02
Post score: 1
Original comments
Comment by Tom Moore on 2015-03-02:
Removed the comments for easier reading.
Comment by Porti77 on 2015-03-03:
I simplified the nodes and included the rostopic echo. Thank you very much
Comment by Tom Moore on 2015-03-10:
I think you're going to need to be more descriptive than "is not correct." What exactly is wrong with it? Is it always zero? Does it jump around too much? Feel free to record and post a bag file and I'll take a look when I get a chance.
Comment by Porti77 on 2015-03-14:
The odometry/filtered output gives a consistent value, then suddenly begins to increase a lot, and then converges back to consistent values. I leave a link to a rosbag where you can view all the topics. Thank you https://www.dropbox.com/s/rjqd6lc6tqtojz2/viernes13_3.bag?dl=0
Comment by Tom Moore on 2015-03-17:
Cool, I'll check it out soon.
Answer:
Can you post a sample IMU message? At first glance, I'd say you should get rid of your xxx_rejection_threshold settings. As for your question about tf, ekf_localization_node publishes a tf transform from the world_frame to the base_link_frame, and navsat_transform_node (optionally) publishes a transform from the UTM grid to your world frame. Even though you have that parameter set to false, the TransformBroadcaster object gets declared anyway, and I'm guessing that's enough to cause the graph to draw the arrow.
EDIT in response to updates/comments: the rejection thresholds are intended to help reject outlier measurements. Given your measurement sources, I don't think you need them. Just delete them from your launch file.
I think you should back up and try one thing at a time. Just run ekf_localization_node, and start with just your IMU. Make sure that it's producing output, and then move on to integrating the navsat_transform_node. And to answer what I think you were asking, yes, if you have some other source of odometry for your robot, then you should definitely fuse that as well.
EDIT 2: The nav_msgs/Odometry message uses quaternions to represent orientations. This is common in ROS. If you want to transform it, you'll need to write a node that takes in the quaternion and converts it. Alternatively, you can use tf_echo to listen to the world_frame->base_link_frame transform being broadcast by ekf_localization_node. tf_echo does the conversion for you.
For the GPS, no, you don't currently need a base_link->gps transform.
EDIT 3 (in response to comments): I'm afraid I don't understand. What do you mean by you "adapted an EKF"? ekf_localization_node and navsat_transform_node are all you should need. Do rostopic echo /odometry/gps. If it's not publishing, then navsat_transform_node isn't getting all the data it needs, or there's an issue with the data it's getting. As to whether I'll revise the code, if you meant that I should fix navsat_transform_node, I think it's more likely that there's an issue with the data. If you meant can I revise your code, I'm afraid I don't have the spare time.
EDIT 4: After looking at your debug file, I have a few notes:
Your position is definitely not 0. I don't know what topic you're listening to or for how long, but if you search through the debug file for "Corrected full state is," you'll see that it's definitely integrating your GPS and IMU data, and that your positions are not 0. Are you certain you're echoing the right topic?
That said, the GPS odometry messages that are coming in are all over the place. Not sure why. Have you plotted raw data from all your sensors to make sure the incoming data is correct?
Your IMU data has incorrect accelerations. If you have it mounted flat, then your linear acceleration should be +9.81 for Z. Are you using a Microstrain GX2? Try this fork. Your accelerations and velocities are exploding.
I think you might benefit from reading through all the robot_localization tutorials again, and pay particular attention to the sections on preparing your sensor data.
EDIT 5: No, when the sensor faces magnetic north, the quaternion values should be close to (0, 0, 0, 1), which is equivalent to 0 roll, 0 pitch, and 0 yaw. Why don't we take this offline and update the question and answer when we've figured it out? Please use my e-mail address on the wiki page.
Originally posted by Tom Moore with karma: 13689 on 2015-03-02
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Porti77 on 2015-03-03:
When I connect the IMU with a single EKF, I don't get /odometry/filtered. Do I have to launch something other than the IMU node and the EKF? Or make a call to a service, or change any settings?
Thanks for your time
Comment by Tom Moore on 2015-03-03:
No. One sensor is enough. If you're not getting output, then something is wrong. Are you broadcasting a transform from base_link->imu?
Comment by Porti77 on 2015-03-03:
I'm broadcasting a transform from base_link->imu, now i need euler angles because is more easy to interpret. When i want integrate a GPS i need broadcasting a transform from base_link->gps too?
Comment by Porti77 on 2015-03-05:
Thanks for all the help you have provided me. I'm using the same IMU as you, and I want to place it in the center of the vehicle; is the transform I'm doing correct? Because the filtered odometry message and the IMU output are not consistent. The tf launch is above.
Comment by Tom Moore on 2015-03-05:
Your parameters to static_transform_publisher are wrong. It accepts two different parameter sets. One lets you specify Euler angles, and one lets you specify a quaternion. You are using the quaternion, but have the 1 in qX location, when it should be the qW location.
Comment by Porti77 on 2015-03-06:
I've corrected it, and when I view the frames in rviz it's right! Thanks. Now I want to integrate GPS with navsat; I adapted an EKF for this, but when I look at the filtered odometry the x and y position is 0. However, gps/odometry is correct. Can you review the code?
Comment by Porti77 on 2015-03-06:
OK, thanks. I was asking whether you thought my launch parameters are fine.
When I echo /odometry/gps the values are correct, but when I echo /odometry/filtered the position is always 0 in X and Y. I've already edited the launch file with the nodes that I launch. Thank you.
Comment by Tom Moore on 2015-03-06:
Set debug mode to true, run your launch file, wait until a few /odometry/gps messages have been published, stop everything, and then post the debug output file using DropBox or something similar.
Comment by Porti77 on 2015-03-07:
Ok! Thanks! https://www.dropbox.com/sh/ab23ou7fo7nw2cd/AAD722h-vmk0dCveJaWuOfZBa?dl=0
Comment by Porti77 on 2015-03-25:
I sent you an email! One more thing: which axis of the IMU points to magnetic north, X or Y? Thanks.
Comment by Tom Moore on 2015-03-25:
I'm aware. I've been busy with work and will respond when I get a chance. | {
"domain": "robotics.stackexchange",
"id": 21031,
"tags": "ros, navigation, ekf, navsat-transform, robot-localization"
} |
Language to define perfectly a programming problem | Question: Is there any language which can be used to define all programming problems perfectly?
By perfectly, I mean with these two properties:
p is the problem.
d is the definition in the language.
P(d, p): "d is the definition of the problem p"
$$
\forall p\exists d(P(d, p))
$$
$$
\forall p\,\forall d_1, d_2\left(\left(P(d_1, p) \land P(d_2, p)\right) \rightarrow d_1 = d_2\right)
$$
Example: The simple problem of outputting the first N numbers can be defined in many ways, for example in the English language. I want a language which restricts the number of definitions to 1 for every problem. In other words, no 2 distinct definitions can define the same problem, and every problem has a definition.
1. Does this language exist?
2. If such a language does not exist, is it possible to create one?
Answer: This question is somewhat unclear to me; however, under one interpretation there is a result which indicates that the answer is unsatisfyingly yes: namely, the existence of Friedberg numberings. Roughly speaking, a Friedberg numbering is a programming language which is Turing complete but in which no two programs perform the same task.
(More formally: a Friedberg numbering is a computable function $\varphi$ of two variables such that for each computable function $\psi$ of one variable there is exactly one $e_\psi^\varphi$ such that $\lambda x.\varphi(e_\psi^\varphi,x)\cong\lambda x.\psi(x)$.)
A simple proof of the existence of such numberings was given by Kummer.
That said, it's easy to show that we can never "translate into" a Friedberg numbering, which renders the positive result above somewhat misleading at best: if $(\theta_i)_{i\in\mathbb{N}}$ is the usual numbering of computable functions of one variable and $\varphi$ is a Friedberg numbering, the map $$(*)_\varphi:\theta_i\mapsto e_{\theta_i}^\varphi$$ is not computable. Essentially, what this means is that programming in the usual sense is impossible in the context of a Friedberg numbering: while every computable function has a corresponding program, there's no way to find it.
To prove this, simply note that from the map $(*)_\varphi$ we can compute the set of indices for the never-defined function: let $c$ be the unique number such that $\lambda x.\varphi(c,x)$ is never defined, and to tell whether $\theta_i$ is never defined just check whether $(*)_\varphi(i)=c$. (Note that this can be improved: if we replace the never-defined function with the identity function, or more generally any partial computable function with infinite domain, the resulting set of indices is strictly more complicated than the halting problem - it has Turing degree $\bf 0''$. So in fact "translating into a Friedberg numbering" is really very impossible indeed.)
This "impossibility of translation" is what breaks the "obvious" proof that Friedberg numberings are impossible. It also points the way to the general study of numberings, which is a fruitful area of study within computability theory. The numberings which are Turing complete in a "non-stupid" way are the acceptable numberings, which are also those which are maximal in a certain natural pre-ordering on numberings. | {
"domain": "cs.stackexchange",
"id": 14949,
"tags": "formal-languages, logic"
} |
Determine if a String has a mirroring head and tail | Question: Problem statement:
Given a string, look for a mirror image (backwards) string at both the
beginning and end of the given string. In other words, zero or more
characters at the very beginning of the given string, and at the very
end of the string in reverse order (possibly overlapping). For
example, the string "abXYZba" has the mirror end "ab".
Examples:
mirrorEnds("abXYZba") → "ab"
mirrorEnds("abca") → "a"
mirrorEnds("aba") → "aba"
Below is my solution to the problem in java:
public String mirrorEnds(String string) {
final int len = string.length();
final int half = len / 2;
String result = "";
for (int i = 0; i < half; i++) {
if (string.charAt(i) != string.charAt(len -1 -i)) {
break;
} else {
result += string.substring(i, i + 1);
}
}
return result.length() == half ? string : result;
}
Is it safe to say that in terms of time complexity the solution is optimal already? Any other review comments are also welcome.
Answer: I don't program much in Java, but suspect it is suboptimal to be building the string in the loop one character at a time.
Also, calculating a "fresh" tail end position each time from base units may take cycles, rather than decrementing a reverse counter. You then end up with an empty else{}, which should also help loop optimisation.
So something like, where j (as a variable that survives loop destruction) is overloaded to be the "tail test" position in the loop, and the number of matched characters as the loop exits:
[BTW, can't test this as no Java system to hand - just editing as I go. Particularly check the final arithmetic on "j".]
public String mirrorEnds(String string) {
    final int len = string.length();
    final int half = len / 2;
    int j = len - 1;
    for (int i = 0; i < half; i++) {
        if (string.charAt(i) != string.charAt(j--)) {
            j = len - j - 2;  // recover the matched count i from the tail position
            return string.substring(0, j);
        }
    }
    return string;  // every compared pair matched, so the whole string qualifies
}
or
public String mirrorEnds(String string) {
int len = string.length();
final int half = len / 2;
int i = 0;
while (i < half) {
if (string.charAt(i) != string.charAt(--len)) {
break;
}
i++;
}
return i == half ? string : string.substring(0, i);
} | {
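As a quick sanity check, the while-loop variant can be exercised against the examples from the problem statement (a sketch; the method is made static inside a wrapper class purely for ease of testing):

```java
public class MirrorEnds {
    static String mirrorEnds(String string) {
        int len = string.length();
        final int half = len / 2;
        int i = 0;
        while (i < half) {
            if (string.charAt(i) != string.charAt(--len)) {
                break;
            }
            i++;
        }
        return i == half ? string : string.substring(0, i);
    }

    public static void main(String[] args) {
        // Examples from the problem statement
        System.out.println(mirrorEnds("abXYZba")); // ab
        System.out.println(mirrorEnds("abca"));    // a
        System.out.println(mirrorEnds("aba"));     // aba
    }
}
```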
"domain": "codereview.stackexchange",
"id": 38004,
"tags": "java, strings"
} |
Commuting observables and an eigenstate | Question:
Consider the observables $A$ and $B$ with $[A, B]=0$. Suppose that the state of the system satisfies the equation $A|a_1\rangle=a_1 |a_1\rangle$. After a measurement of the observable $A$, in what state will the system be? If we then measure the observable $B$, what will the final state of the system be?
I am a bit confused by this question. By the measurement postulate, I know that after a measurement of $A$ the system will jump into one of the eigenstates of $A$. But the question doesn't say anything about all the eigenvectors of $A$, I only know that it has an eigenvector $|a_1\rangle$. Is it safe to assume that $A$ has only this eigenvector? If this is the case, the question is trivial: after a measurement of $A$, the system will still be in the state $|a_1\rangle$ and if we then measure $B$ the system I think will still be in the state $|a_1\rangle$ since $A$ and $B$ commute. But what if $A$ has more than one eigenvector? What conclusions could be drawn?
Answer: Your initial state is an eigenstate $\lvert a_1 \rangle$ of the observable $A$. In the time leading up to your measurement of observable $A$, there is no interaction occurring which would change the initial state to another state. In other words, for all time $t$ leading up to measurement of observable $A$, the state remains in the state $\lvert a_1 \rangle$. Thus, the state remains in an eigenstate of the observable you are measuring up to the point in time at which you measure it. Recall that if a state is already is an eigenstate $\lvert a_1 \rangle$ of an observable $A$ and you perform a measurement of observable $A$ that you will read measurement value $a_1$ 100% of the time, and the state post-measurement is still $\lvert a_1 \rangle$.
If $A$ and $B$ have nondegenerate spectrums, $[A, B] = 0$ implies that they share a unique set of eigenvectors which form a basis for the Hilbert space said operators are defined over. This means that
$$A\lvert a_1 \rangle = a_1 \lvert a_1 \rangle$$
and
$$B\lvert a_1 \rangle = b_1 \lvert a_1 \rangle$$
and the same reasoning we applied in the first paragraph would then be applied again. Note that if $\lvert \psi \rangle$ is a shared eigenstate of operators $A$ and $B$ with eigenvalues $a_1$ and $b_1$, respectively, it is customary to write the eigenstate as $\lvert \psi \rangle = \lvert a_1, b_1 \rangle$ which makes it clear that the ket is an eigenstate of both operators with the corresponding eigenvalues.
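This can be illustrated numerically (a sketch with an assumed toy example: two Hermitian matrices built to be diagonal in the same random orthonormal basis, with nondegenerate spectra, so they commute and share eigenvectors):

```python
import numpy as np

# Assumed toy example: two observables diagonal in the same random basis
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(X)                        # columns form an orthonormal basis
A = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.conj().T
B = Q @ np.diag([5.0, 6.0, 7.0]) @ Q.conj().T

print(np.allclose(A @ B, B @ A))              # [A, B] = 0
a1 = Q[:, 0]                                  # eigenstate |a_1> of A, eigenvalue 1
print(np.allclose(A @ a1, 1.0 * a1))
print(np.allclose(B @ a1, 5.0 * a1))          # a1 is also an eigenstate of B
```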
If the spectrums are degenerate for either operator or both, then the above possibly does not hold true. And I would say what happens when you measure $B$ is unknowable given the information provided in the quotation. At the beginning of Quantum texts, it is usually implicit that spectrums are nondegenerate; however, I am not sure that is the case here. | {
"domain": "physics.stackexchange",
"id": 95547,
"tags": "quantum-mechanics, operators, hilbert-space, commutator, observables"
} |
Extract single qubit state from combined state in QuTiP using a Hamiltonian | Question: This questions is very similar to another question asked a couple of months ago. I would like to extract the state of single qubits from combined states, using QuTip (the Quantum Toolbox in Python) using a Hamiltonian simulator.
Example
For example, suppose I begin with two qubits, |0⟩ and |1⟩. I can create these in QuTiP (as q1 and q2) as follows:
q1 = basis(2,1)
q2 = basis(2,0)
$q_1 = \begin{bmatrix}0\\1\end{bmatrix} \quad q_2 = \begin{bmatrix}1\\0\end{bmatrix}$
I can then combine q1 and q2 using tensor multiplication:
q1q2 = tensor(q1, q2)
$q_1 \otimes q_2 = \begin{bmatrix}0\\0\\1\\0\end{bmatrix}$
Now, suppose that I apply a CNOT gate to $q_1\otimes q_2$. We can define CNOT as:
circ = QubitCircuit(2)
circ.add_gate("CNOT",targets=[1],controls=[0])
U = gate_sequence_product(circ.propagators())
$U=\begin{bmatrix}
1&0&0&0 \\
0&1&0&0 \\
0&0&0&1 \\
0&0&1&0 \\
\end{bmatrix}$
Probability of $q_2$
I now want to figure what q2 would look like after application of this gate. We can do this using the partial trace of the density matrix:
M = ket2dm(U*q1q2).ptrace(0)
q_2' = np.diag(np.sqrt(M.full()))
$M = \begin{bmatrix}0&0\\0&1 \end{bmatrix}\\
q_2' = \begin{bmatrix}0\\1 \end{bmatrix}$
Probability of $q_2$ version 2
I would like to do the same now, but using a Hamiltonian simulator. For this, I use the built-in master equation solver which solves:
$i\hbar\frac{d}{dt}|\psi\rangle = H |\psi\rangle$
The code for this is:
times = np.linspace(0.0, 20.0, 100)
data = mesolve(U, q1q2, times)
M2 = ket2dm(data.states[-1]).ptrace(0)
q_2'' = np.diag(np.sqrt(M2.full()))
$M_2 = \begin{bmatrix}0.167&0.373j\\-0.373j&0.833 \end{bmatrix}\\
q_2'' = \begin{bmatrix}0.40809641\\0.91293884 \end{bmatrix}$
Which is not what I expected. Question: can I simply use my unitary matrix here or do I need to do some transformation before/after running the simulator? Are there other ways to come up with the probabilities of one qubit, besides using ptrace?
Answer: Say $U$ is a unitary evolution (the CNOT in your case), and $\vert\psi\rangle$ an initial state.
Evolving $|\psi\rangle$ with $U$ amounts to computing $U\lvert\psi\rangle$.
If on the other hand you want to study the dynamics of this evolution, then you need a generator for the evolution. In other words, you want a Hamiltonian $H$ such that after a certain time $t$ you get $$e^{-i t H}\lvert\psi\rangle=U\lvert\psi\rangle.$$
Such a Hamiltonian is what you need to feed to mesolve. Here is a working version of the code doing just this:
import numpy as np
import scipy
import qutip
cnot_generator = qutip.Qobj(1j * scipy.linalg.logm(qutip.cnot().full()), dims=[[2] * 2] * 2)
initial_state = qutip.tensor(qutip.basis(2, 1), qutip.basis(2, 0))
times = np.arange(0, 1.1, 0.1)
qutip.mesolve(cnot_generator, initial_state, times).states
Note how the last state given by the above is (in good approximation) the same you obtained yourself.
I used ket notation here because, being everything unitary, there is no need to use density matrices. You may adapt the code appropriately to use density matrices. | {
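The generator construction itself, $H = i\log U$ so that $e^{-iH} = U$ at $t=1$, can be sanity-checked without QuTiP (a sketch using only numpy/scipy):

```python
import numpy as np
import scipy.linalg

# CNOT in the computational basis
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Generator H = i log(U): it is Hermitian, and exp(-i H) recovers the CNOT
H = 1j * scipy.linalg.logm(CNOT)
print(np.allclose(H, H.conj().T))                     # Hermitian generator
print(np.allclose(scipy.linalg.expm(-1j * H), CNOT))  # exp(-i H) == CNOT
```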
"domain": "physics.stackexchange",
"id": 47315,
"tags": "quantum-information, quantum-computer"
} |
rosbridge connection closed | Question:
I have already rosrun rosbridge rosbridge.py
And then when I run a test HTML file in the browser, "connection closed" is shown... why can't it connect to rosbridge?
html file:
function main() {
var connection = new ros.Connection("ws://hostname:9090");
connection.setOnClose(function (e) {
document.write('connection closed');
});
connection.setOnError(function (e) {
document.write('error!');
});
connection.setOnOpen(function (e) {
document.write('connected to ROS');
});
}
Originally posted by koprenee on ROS Answers with karma: 15 on 2012-07-19
Post score: 0
Original comments
Comment by Felix Endres on 2012-07-19:
You have no tags on your question, it is unlikely to be found by people who know about these things
Answer:
Could you test using the head version of rosbridge (svn url: http://brown-ros-pkg.googlecode.com/svn/trunk/experimental/rosbridge/). It's possible that you're experiencing a bug that has since been fixed but hasn't propagated to the latest version yet.
As for the "closed 10" error - the current version of rosbridge isn't very good at displaying meaningful errors. We're currently working on a more robust version of rosbridge which deals with this kind of thing much more gracefully. For now though, it may say "closed 10" even though you are actually connected.
Originally posted by Jon Mace with karma: 431 on 2012-07-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 10283,
"tags": "rosbridge"
} |
Why are hydathodes called hydathodes and not hydrothodes? | Question: I can't seem to find any etymological root for the hyda- in hydathode. I expected the water-relater structure to be called a hydrothode, but it just isn't!
Answer: The etymology of hydathode, according to the Collins dictionary, is "way (hodos) of water (hydat-)". So the correct way of dividing the word is not hyda-thode, but hydat-hode.
Also, "hydat-" is the genitive stem of the Greek word for water, "hydor" (literally "of water"), whereas "hodos" is in its nominative form ("the way"). If you want to say "water way" in ancient Greek, you have to use nominative+genitive, not nominative+nominative, though there might also be euphonic reasons why it is not *hydrohode.
"domain": "biology.stackexchange",
"id": 9931,
"tags": "plant-anatomy, etymology"
} |
Prove the hermiticity of $\sigma_{\mu\nu} F^{\mu\nu}$ | Question: In exercise 4.2, "Relativistic Quantum Mechanics" by Bjorken-Drell, an additional term is added to the Dirac Hamiltonian such that new equation of motion is
$$\left(i\gamma_\mu{\nabla^\mu}-e_{i}\gamma_\mu{A^\mu}+\frac{\kappa_{i}e}{4M_{i}}\sigma_{\mu\nu}F^{\mu\nu}-M_{i}\right)\psi(x)=0.$$
The first two and fourth terms generate the Dirac equation. The third term has to be hermitian where $\kappa_{i}$ is a real number and $e$ is the electron charge. How do I prove it?
My attempt:
$$\left(\sigma_{\mu\nu}F^{\mu\nu}\right)^{\dagger}$$
$$=(-i)\gamma_{\nu}^{\dagger}\gamma_{\mu}^{\dagger}F^{\mu\nu}$$
$$=-i\begin{cases}
-\gamma_{0}\gamma_{i}F^{i0} & \nu=0,\ \mu=i\\
\gamma_{i}\gamma_{j}F^{ji} & \nu=i,\ \mu=j
\end{cases}$$
$$=\begin{cases}
\sigma_{0i}F^{i0} & \nu=0,\ \mu=i\\
-\sigma_{ij}F^{ji} & \nu=i,\ \mu=j
\end{cases}$$
$$=\begin{cases}
-\sigma_{0i}F^{i0} & \nu=0,\ \mu=i\\
+\sigma_{ij}F^{ij} & \nu=i,\ \mu=j
\end{cases}$$
In this way, I can't find it to be hermitian.
Answer: It is wrong trying to prove that
$$
\sigma_{\mu\nu}F^{\mu\nu}
$$
is hermitian. Actually it's not!
The right way is to prove that the action
$$
S= \int dx^4\bar{\psi}(x)\left(i\gamma_\mu{\nabla^\mu}-e_{i}\gamma_\mu{A^\mu}+\frac{\kappa_{i}e}{4M_{i}}\sigma_{\mu\nu}F^{\mu\nu}-M_{i}\right)\psi(x)
$$
is hermitian or equivalently to prove that the counterpart of the equation of motion
$$
\left(i\gamma_\mu{\nabla^\mu}-e_{i}\gamma_\mu{A^\mu}+\frac{\kappa_{i}e}{4M_{i}}\sigma_{\mu\nu}F^{\mu\nu}-M_{i}\right)\psi(x)=0
$$
for $\bar{\psi}(x)$ is
$$
\bar{\psi}(x)\left(i\gamma_\mu{\nabla^\mu}-e_{i}\gamma_\mu{A^\mu}+\frac{\kappa_{i}e}{4M_{i}}\sigma_{\mu\nu}F^{\mu\nu}-M_{i}\right)=0.
$$
Therefore, it can be easily seen that for the extra non-Dirac term, the proof of hermiticity boils down to proving
$$
\gamma_0\left(\sigma_{\mu\nu}F^{\mu\nu}\right)^{\dagger}\gamma_0= \sigma_{\mu\nu}F^{\mu\nu}
$$
or
$$
\gamma_0\left(\sigma_{\mu\nu}\right)^{\dagger}\gamma_0= \sigma_{\mu\nu}.
$$
The proof is pretty straightforward, and I will leave it to you. A hint:
$$
\left(\gamma_\mu\right)^{\dagger}= \gamma_0\gamma_\mu\gamma_0.
$$
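As a numerical sanity check of that identity (a sketch; the Dirac representation, the mostly-minus metric, and the convention $\sigma_{\mu\nu}=\frac{i}{2}[\gamma_\mu,\gamma_\nu]$ are assumed), one can verify $\gamma_0\sigma_{\mu\nu}^{\dagger}\gamma_0=\sigma_{\mu\nu}$ for all index pairs:

```python
import numpy as np

# Pauli matrices and the Dirac-representation gamma matrices
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

# sigma_{mu nu} = (i/2) [gamma_mu, gamma_nu]; check g0 sigma^dagger g0 == sigma
ok = True
for mu in range(4):
    for nu in range(4):
        sigma = 0.5j * (gammas[mu] @ gammas[nu] - gammas[nu] @ gammas[mu])
        ok = ok and np.allclose(g0 @ sigma.conj().T @ g0, sigma)
print(ok)  # True
```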
Added note:
The extra term:
$$
\bar{\psi}(x)\left(\frac{\kappa_{i}e}{4M_{i}}\sigma_{\mu\nu}F^{\mu\nu}\right)\psi(x)
$$
is of dimension 5, thus not renormalizable and excluded from the Standard Model.
However, from an effective field theory point of view, it should be part of the Lagrangian, since it observes all the required symmetries: the Lorentz symmetry and the $U_{EM}(1)$ gauge symmetry. The catch is that the term is extremely small
($\kappa_{i}$ is small) under low-energy circumstances. That said, this sort of term would be unavoidably important right after the big bang, where the high-energy Planck-scale effect is front and center.
"domain": "physics.stackexchange",
"id": 72806,
"tags": "homework-and-exercises, operators, quantum-electrodynamics, dirac-equation, dirac-matrices"
} |
What is an entropic trap? | Question: Can someone explain the concept of entropic traps, or provide good sources to look up? The term is not present in any chemistry books I own, and I cannot find online sources explaining it.
Answer: I think this should address your query:
The separation of macromolecules such as polymers and DNA by means of electrophoresis, gel permeation chromatography or filtration exploits size-dependent differences in the time it takes for the molecules to migrate through a random porous network. Transport through the gel matrices, which usually consist of full swollen crosslinked polymers depends on the relative size of the macromolecule compared with the pore radius. Sufficiently small molecules are thought to adopt an approximately spherical conformation when diffusing through the gel matrix, whereas larger ones are forced to migrate in a snake-like fashion. Molecules of intermediate size, however, can get temporarily trapped in the largest pores of the matrix, where the molecule can extend and thus maximize its conformational entropy. This 'entropic trapping' is thought to increase the dependence of diffusion rate on molecular size. Here we report the direct experimental verification of this phenomenon. Bragg diffraction from a hydrogel containing a periodic array of monodisperse water voids confirms that polymers of different weights partition between the hydrogel matrix and the water voids according to the predictions of the entropic trapping theory. Our approach might also lead to the design of improved separation media based on entropic trapping. – Source
I don't know if you needed a textbook definition, but thought you'll get a more elaborate view from this.
This is a schematic of an entropic trap device (ref: Anal. Chem. 2002, 74, 394-401).
"domain": "chemistry.stackexchange",
"id": 4968,
"tags": "photochemistry, entropy"
} |
Typical form of the beta function of the renormalization group | Question: Why in "typical" cases (according to some non-English text I read), does the $\beta$-function have the form
$$
\beta (g) = ag^{2} + bg^{3} + O(g^{4})\ ?
$$
I.e., why are there no linear or logarithmic terms?
Answer: Typically, when you calculate quantum effects, you will put some cut-off $\Lambda$, and typical integrals, say for a $\lambda \phi^4$ theory, go at first order as $\log \Lambda$ (e.g. $\int \frac{d^4k}{k^2 (p-k)^2}$).
While defining renormalized quantities at some energy scale $p_0$, you may remove the cut-off and typically get equations like (at first order):
$$\lambda_r(p) = \lambda_r(p_0) + C \lambda_r(p)^2 \log (\frac{p}{p_0}) \tag{1}$$
where $C$ is some constant, and $p$ represents an energy scale.
So finally, at first order, typical beta functions go like $\beta(\lambda_r) = \frac{\partial \lambda_r}{\partial \log p} = C \lambda_r^2$.
Going back, we see that a $\log \lambda$ factor would seem strange, because the factor $\lambda_r^2$ in $(1)$ comes from the fact that there are $2$ vertices in the Feynman diagram. If we had $4$ vertices, we would have a $\lambda_r^4$ factor, and so on. So, unless you are doing a fine-tuned infinite sum, you would not obtain a factor $\log(\lambda)$, or more generally a factor $\lambda^l \log(\lambda)^m$.
You do not obtain a linear factor simply because the quantum corrections imply more vertices, so the first-order quantum correction necessarily gives at least a $\lambda_r^2$ factor.
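As an illustration (a toy numerical check, with assumed values $C = 1$ and $\lambda_r(p_0) = 0.1$), integrating $\beta(\lambda_r) = C\lambda_r^2$ reproduces the closed-form one-loop running $\lambda_r(p) = \lambda_r(p_0)/\left(1 - C\,\lambda_r(p_0)\log(p/p_0)\right)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

C, lam0 = 1.0, 0.1               # assumed toy values
t = np.linspace(0.0, 5.0, 50)    # t = log(p / p0), kept below the Landau pole

# integrate d(lambda)/dt = C * lambda^2
sol = solve_ivp(lambda t, y: C * y**2, (t[0], t[-1]), [lam0],
                t_eval=t, rtol=1e-10, atol=1e-12)
exact = lam0 / (1.0 - C * lam0 * t)

print(np.allclose(sol.y[0], exact, atol=1e-6))  # True
```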
"domain": "physics.stackexchange",
"id": 16319,
"tags": "quantum-field-theory, renormalization, regularization"
} |
Positron-electron annihilation in the Sun | Question: As 75% of the Sun is hydrogen and 25% helium, and the latter derives from 4 hydrogen atoms where half of the protons that formed neutrons expelled their positive charges as positrons that annihilated with nearby electrons, half of the electrons are missing too. In that case, the 12.5% of all electrons and positrons (protons) that are now missing formed very energetic gamma-rays, which must have a role in the fusion reactions in the core. Maybe they partially roll back to form electron-positron pairs, but how much of the energy in the core is due to the mass defect of helium, and how much to electron-positron annihilation?
Answer: The positrons were created from the excess energy of the fusion reaction. So you can't count them in addition to the fusion. It's all part of it. But the loss of the original 2 electrons is included in the total energy budget. When you see a figure like 26.7MeV for 4H -> 1He, that includes the electron loss.
The binding energy released from forming the $\text{He}^4$ nucleus is about $0.03u$. But the energy from removing two electrons is equal to their mass or about $0.001u$. So 3-4% of the contribution is from the electron loss. | {
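A back-of-the-envelope check of that share (a sketch; neutral-atom masses are assumed, so the two "lost" electrons are already counted in the defect):

```python
# masses in atomic mass units (u)
m_H = 1.007825    # neutral hydrogen-1 atom
m_He = 4.002602   # neutral helium-4 atom
m_e = 0.000549    # electron

delta = 4 * m_H - m_He            # total mass defect of 4H -> He-4
electron_share = 2 * m_e / delta  # fraction contributed by the two lost electrons

print(f"{delta:.4f} u, electron share ~{100 * electron_share:.1f}%")
```

This gives roughly 0.029 u for the defect and an electron share of a few percent, consistent with the 3-4% figure above.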
"domain": "physics.stackexchange",
"id": 68970,
"tags": "energy, particle-physics, sun, fusion"
} |
String encryption function in admin section | Question: I have been using this function locally to encrypt strings, for example a username.
If the user successfully logs in, I place the encrypted username into a session, then decrypt the username in the admin section and use it for various things.
Is it a strong method and are there any security concerns with the way I'm using it?
Cipher Function:
public function cipher($string, $method) {
$key = pack('H*', "bcb04b7e103a0cd8b54763051cef08bc55abe029fdebae5e1d417e2ffb2a00a3");
$iv_size = mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_CBC);
$iv = mcrypt_create_iv($iv_size, MCRYPT_RAND);
if($method === 1) {
$string = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $string, MCRYPT_MODE_CBC, $iv);
$string = $iv . $string;
$encryptedString = base64_encode($string);
return trim($encryptedString, "\0");
}
if($method === 2) {
$string = base64_decode($string);
$iv_dec = substr($string, 0, $iv_size);
$ciphertext_dec = substr($string, $iv_size);
$decryptedString = mcrypt_decrypt(MCRYPT_RIJNDAEL_128, $key, $ciphertext_dec, MCRYPT_MODE_CBC, $iv_dec);
return trim($decryptedString, "\0");
}
}
Function Usage:
//Encrypt
$_SESSION['username'] = $backend->cipher($username, 1)
// Decrypt
echo $backend->cipher($_SESSION['username'], 2)
Answer: It looks great, however, you can improve it as follows:
Break the function out into two: encrypt and decrypt. Each of your functions should do as little as possible.
return on trim - no need to set $encryptedString.
In the encrypt, no need to have the second $string when you concat $iv. Just concat on the initial mcrypt call.
In the end, we're left with three functions. One setupCipher which creates the key, iv_size and iv. One encrypt and one decrypt to encrypt and decrypt, as follows:
private function setupCipher(){
$iv_size = mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_CBC);
return array(
pack('H*', "bcb04b7e103a0cd8b54763051cef08bc55abe029fdebae5e1d417e2ffb2a00a3"),
$iv_size,
mcrypt_create_iv($iv_size, MCRYPT_RAND)
);
}
public function decrypt($string){
list($key, $iv_size) = $this->setupCipher();
$string = base64_decode($string);
$iv_dec = substr($string, 0, $iv_size);
$ciphertext_dec = substr($string, $iv_size);
return trim(mcrypt_decrypt(MCRYPT_RIJNDAEL_128, $key, $ciphertext_dec, MCRYPT_MODE_CBC, $iv_dec), "\0");
}
public function encrypt($string) {
list($key, $iv_size, $iv) = $this->setupCipher();
$string = $iv . mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $string, MCRYPT_MODE_CBC, $iv);
return trim(base64_encode($string), "\0");
}
You would use them nearly identically to how you used them before, except no need to pass in the $method:
# Encrypt
$_SESSION['username'] = $backend->encrypt($username);
# Decrypt
echo $backend->decrypt($_SESSION['username']); | {
"domain": "codereview.stackexchange",
"id": 9310,
"tags": "php, strings, cryptography"
} |
Is there any documented case in which floating substances led to the introduction of a new species? | Question: Floating substances in the marine or freshwater environment often carry hitchhiking organisms. It is argued that many species may use this dispersal vector to reach new habitats; however, to my knowledge there is no hard evidence / no documented case. This may be because of the difficulty of associating an established species with a certain vector of introduction.
By floating substances I refer to natural floating substances such as detached algae, trees or pumice and those of anthropogenic origin such as litter items, detached buoys, etc. (in whatever aquatic environment: sea, lakes, rivers)
Any help to find a reference / documented case study is very welcome!
Answer: There is a number of reported cases of marine species reaching new locations through "hitchhiking" in recent times. However, it seems harder to find reports of species actually becoming established in a new location through this.
The following articles describe examples of species transported with ocean debris:
In an early study on the dispersal of plastic pellets (3), it was suggested that pellets encrusted with the bryozoan Membranipora tuberculata was transported to New Zealand across the Tasman Sea from Australia. Barring one exception, this was the first time the species was observed in New Zealand waters.
Appendix A of a later article by the same author (1) contains 13 examples of species transported by sea (not necessarily through debris, or invasive), including the previous example. Of it, the author writes that
Later, L. M. Stevens (1992, unpublished data) was to report that [Membranipora tuberculata]
was abundant on both eastern and western shores around northernmost
New Zealand.
Thus, this is possibly a case of a species transported on floating debris and subsequently becoming established in a new habitat.
Jose Derraik also deals with the topic in a review (4):
Plastics floating at sea may acquire a fauna of various encrusting
organisms such as bacteria, diatoms, algae, barnacles, hydroids and
tunicates (Carpenter et al., 1972; Carpenter and Smith, 1972; Minchin,
1996; Clark, 1997). The bryozoan Membranipora tuberculata, for
instance, is believed to have crossed the Tasman Sea, from Australia
to New Zealand, encrusted on plastic pellets (Gregory, 1978). The same
species together with another bryozoan (Electra tenella) were found on
plastics washed ashore on the Florida coast, USA, and they seem to be
increasing their abundance in the region by drifting on plastic debris
from the Caribbean area (Winston, 1982; Winston et al., 1997). Minchin
(1996) also describes barnacles that crossed the North Atlantic Ocean
attached to plastic debris. Drift plastics can therefore increase the
range of certain marine organisms or introduce species into an
environment where they were previously absent (Winston, 1982).
Finally, the book Marine Pollution: New Research (2) contains the following passage which ends with the observation that conclusive proof is difficult to obtain:
Drift plastics are known to have introduced exotic marine species to
several areas (Winston et al., 1997; Derraik, 2002). Winston et al.
(1997) reported that the non-indigenous oyster Lopha cristagalli was
found on plastics washed ashore in southern New Zealand, and that the
exotic bryozoan Thalamoporella evelinae was found on plastics washed
ashore in Florida. Barnes and Milner (2005) found the exotic barnacle
Elminius modestus on plastic debris in the Shetland Islands (Atlantic
Ocean). The bryozoan Membranipora tuberculata is believed to have
crossed the Tasman Sea from Australia to New Zealand rafting on
plastic pellets (Gregory, 1978). M. tuberculata and the bryozoan Electra
tenella seem to be increasing their abundance on the Florida coast by
drifting on plastic debris from the Caribbean Sea (Winston, 1982;
Winston et al., 1997). Maso et al. (2003) (...) However, attributing a
marine biological invasion to floating marine debris and not to other
mechanisms is very difficult in most cases, and available data are
generally insufficient. | {
"domain": "biology.stackexchange",
"id": 2055,
"tags": "marine-biology, invasive-species"
} |
Why are the negative frequencies of the DFT symmetrically reflected at the Nyquist frequency to the positive frequencies? | Question: I was playing around with plotting the DFT and realized that the negative frequencies are symmetric to the positive frequencies, reflected at the Nyquist frequency.
Plot shown for the signal $f(x) = \cos(\frac{\pi}{2}x - \frac{\pi}{2}) + 2\cos(\pi x+ \frac{\pi}{2})$.
So why are the negative frequencies symmetrically distributed for every DFT?
Possible approaches
I understand that the negative frequencies are required for the complete inverse DFT, even though leaving them out will not change the result of the DFT; this does not explain the symmetry, though.
Since for even $N$ the Nyquist frequency is at bin $N/2$, the following should be true: $\exp(-\frac{j2 \pi n}{N}(\frac{N}{2}+1)) = \exp(-\frac{j2 \pi n}{N}(\frac{N}{2}-1))$
Looking at the statement above:
$\exp(-\frac{j2 \pi n}{N}(\frac{N}{2}+1)) = \exp(-j \pi n -\frac{j2 \pi n}{N}) = \exp(- j \pi n) \cdot \exp(-\frac{j2 \pi n}{N}) = [-1]^n \cdot \exp(-\frac{j2 \pi n}{N})$
should be equal to
$\exp(-\frac{j2 \pi n}{N}(\frac{N}{2}-1)) = \exp(-j \pi n + \frac{j2 \pi n}{N}) = \exp(- j \pi n) \cdot \exp(\frac{j2 \pi n}{N}) = [-1]^n \cdot \exp(\frac{j2 \pi n}{N})$
Answer: The DFT $X[k]$ of $x[n]$ is
$$X[k]=\sum_{n=0}^{N-1}x[n]e^{-j2\pi nk/N}\tag{1}$$
If $x[n]$ is real-valued, the complex conjugate mirrored DFT coefficients are
$$\begin{align}X^*[N-k]&=\sum_{n=0}^{N-1}x[n]e^{j2\pi n(N-k)/N}\\&=\sum_{n=0}^{N-1}x[n]e^{j2\pi n}e^{-j2\pi nk/N}\\&=\sum_{n=0}^{N-1}x[n]e^{-j2\pi nk/N}\\&=X[k]\end{align}$$
So for real-valued $x[n]$, $X[k]=X^*[N-k]$ holds, i.e., all DFT bins $X[k]$ for $k>\lfloor N/2\rfloor$ are redundant.
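The symmetry is easy to check numerically. A quick NumPy sketch (my addition, not part of the original answer) using the question's example signal sampled at integer points:

```python
import numpy as np

# Real-valued signal from the question: cos(pi/2 * x - pi/2) + 2*cos(pi * x + pi/2)
n = np.arange(16)
x = np.cos(np.pi / 2 * n - np.pi / 2) + 2 * np.cos(np.pi * n + np.pi / 2)

X = np.fft.fft(x)
N = len(x)

# Conjugate symmetry for real input: X[k] == conj(X[N - k]) for k = 1..N-1
k = np.arange(1, N)
assert np.allclose(X[k], np.conj(X[N - k]))
print("X[k] = X*[N-k] holds for all k")
```

The same check generally fails for a complex-valued x, matching the closing remark of the answer.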
This makes sense because we need only $N$ real-valued numbers to represent the $N$ (real) values $x[n]$, and the real and imaginary parts of a complex DFT coefficient represent two numbers. For complex-valued sequences $x[n]$, there is generally no redundancy in the DFT coefficients. | {
"domain": "dsp.stackexchange",
"id": 11993,
"tags": "discrete-signals, fourier-transform, frequency-spectrum, dft"
} |
Yet Another Fraction | Question: A recent question inspired me to implement a Fraction class. I decided to write this one in vba, because I like writing tools for the poor souls that still have to deal with that language myself.
I really wanted to be able to create a Fraction instance with a static-like syntax, so I created a FractionType class with a VB_PredeclaredId attribute value of True - so I can write code like this:
Public Sub Test()
Dim result As Fraction
If FractionType.TryParse("34/178", result) Then
Debug.Print result.ToString & " = " & result.ToDouble
End If
Set result = FractionType.Create(0, 0)
Debug.Print result.ToString & " = default? " & result.Equals(FractionType.Default)
End Sub
...well, if I wanted to output this:
17/39 = 0.435897435897436
0/0 = default? True
Because I have a List class that can work with objects that implement IEquatable and IComparable interfaces in my toolkit, I can also use it like this:
Dim fractions As New List
fractions.Add FractionType.Create(1,2), _
FractionType.Create(2,3), _
FractionType.Create(0,0), _
FractionType.Create(9,8), _
FractionType.Create(123,345), _
FractionType.Create(48,0), _
FractionType.Create(12,182)
fractions.Sort
Dim item As Fraction
For Each item in fractions
Debug.Print item.ToString, item.ToSingle
Next
As it turns out, I'm especially interested in the CompareTo implementation. I decided to split the functionality into two types:
FractionType class module
Private Const MinimumInt As Integer = -32768
Private Const MaximumInt As Integer = 32767
Option Explicit
Public Property Get Default() As Fraction
Static result As New Fraction
Set Default = result
End Property
Public Property Get Zero() As Fraction
Static result As New Fraction
result.Numerator = 0
result.Denominator = 1
Set Zero = result
End Property
Public Property Get One() As Fraction
Static result As New Fraction
result.Numerator = 1
result.Denominator = 1
Set One = result
End Property
Public Property Get MinValue() As Fraction
Static result As New Fraction
result.Numerator = MinimumInt
result.Denominator = 1
Set MinValue = result
End Property
Public Property Get MaxValue() As Fraction
Static result As New Fraction
result.Numerator = MaximumInt
result.Denominator = 1
Set MaxValue = result
End Property
Public Function Create(ByVal numeratorValue As Integer, ByVal denominatorValue As Integer) As Fraction
Dim result As New Fraction
result.Numerator = numeratorValue
result.Denominator = denominatorValue
Set Create = result.Simplify
End Function
Public Function Parse(ByVal value As String) As Fraction
Dim operatorIndex As Integer
operatorIndex = InStr(1, value, "/")
If operatorIndex <= 1 _
Or operatorIndex = Len(value) _
Then
RaiseInvalidFormatError value, "Parse"
End If
Dim numeratorPart As String
numeratorPart = Left(value, operatorIndex - 1)
Dim denominatorPart As String
denominatorPart = Right(value, operatorIndex - 1)
Set Parse = Create(CInt(numeratorPart), CInt(denominatorPart))
End Function
Public Function TryParse(ByVal value As String, ByRef outResult As Fraction) As Boolean
On Error GoTo CleanFail
Dim result As Boolean
Set outResult = Parse(value)
result = True
CleanExit:
TryParse = result
Exit Function
CleanFail:
result = False
Resume CleanExit
End Function
Private Sub RaiseInvalidFormatError(ByVal errorValue As String, ByVal errorSource As String)
Err.Raise vbObjectError + 44, errorSource, "Error: Invalid format. Value '" & errorValue & "' could not be parsed to a valid fraction."
End Sub
Public Function Simplify(ByVal value As Fraction) As Fraction
If value.IsUndefined Then
Set Simplify = value
Exit Function
End If
Dim gcd As Integer
gcd = GetGreatestCommonDivisor(value.Numerator, value.Denominator)
Dim result As New Fraction
result.Numerator = value.Numerator / gcd
result.Denominator = value.Denominator / gcd
Set Simplify = result
End Function
Private Function GetGreatestCommonDivisor(ByVal value1 As Integer, ByVal value2 As Integer) As Integer
Dim result As Integer
If value2 = 0 Then
result = value1
Else
result = GetGreatestCommonDivisor(value2, value1 Mod value2)
End If
GetGreatestCommonDivisor = result
End Function
Fraction class module
Option Explicit
Private Type TFraction
Numerator As Integer
Denominator As Integer
End Type
Private this As TFraction
Implements IEquatable
Implements IComparable
Public Property Get Numerator() As Integer
Numerator = this.Numerator
End Property
Public Property Let Numerator(ByVal value As Integer)
this.Numerator = value
End Property
Public Property Get Denominator() As Integer
Denominator = this.Denominator
End Property
Public Property Let Denominator(ByVal value As Integer)
this.Denominator = value
End Property
Public Property Get IsUndefined() As Boolean
IsUndefined = this.Denominator = 0
End Property
Public Property Get IsNaN() As Boolean
IsNaN = this.Denominator = 0 And this.Numerator = 0
End Property
Public Property Get IsPositiveInfinity() As Boolean
IsPositiveInfinity = IsUndefined And this.Numerator > 0
End Property
Public Property Get IsNegativeInfinity() As Boolean
IsNegativeInfinity = IsUndefined And this.Numerator < 0
End Property
Public Function Simplify() As Fraction
Set Simplify = FractionType.Simplify(Me)
End Function
Public Function ToString() As String
ToString = this.Numerator & "/" & this.Denominator
End Function
Public Function ToSingle() As Single
ToSingle = CSng(this.Numerator / this.Denominator)
End Function
Public Function ToDouble() As Double
ToDouble = CDbl(this.Numerator / this.Denominator)
End Function
Public Function Equals(ByVal other As Fraction) As Boolean
Dim simplifiedOther As Fraction
Set simplifiedOther = other.Simplify
Dim simplifiedMe As Fraction
Set simplifiedMe = Me.Simplify
Equals = simplifiedOther.Numerator = simplifiedMe.Numerator _
And simplifiedOther.Denominator = simplifiedMe.Denominator
End Function
Private Function IEquatable_Equals(ByVal other As Variant) As Boolean
IEquatable_Equals = Equals(other)
End Function
Public Function CompareTo(ByVal other As Fraction) As Integer
If Me.IsUndefined Or other.IsUndefined Then
CompareTo = 0
If Me.IsNaN And other.IsPositiveInfinity Then
CompareTo = 1
ElseIf Me.IsNaN And other.IsNegativeInfinity Then
CompareTo = -1
ElseIf other.IsNaN Then
If Me.IsPositiveInfinity Then
CompareTo = -1
ElseIf Me.IsNegativeInfinity Then
CompareTo = 1
End If
End If
Else
Dim otherValue As Double
otherValue = other.ToDouble
Dim myValue As Double
myValue = Me.ToDouble
If otherValue > myValue Then
CompareTo = 1
ElseIf otherValue < myValue Then
CompareTo = -1
Else
CompareTo = 0
End If
End If
End Function
Private Function IComparable_CompareTo(ByVal other As Variant) As Integer
IComparable_CompareTo = CompareTo(other)
End Function
I find the CompareTo method sticks out like a sore thumb. Is it just me? How would I write this more elegantly?
Any issues I'm not seeing?
Answer: First, let's talk about what I consider to be a bug in the fraction class. Running this code results in an Overflow error.
Public Sub bug()
Dim item As Fraction
Set item = FractionType.Create(0, 0)
Debug.Print item.ToDouble
End Sub
So let's look at what's going on in the ToDouble() function.
Public Function ToDouble() As Double
ToDouble = CDbl(this.Numerator / this.Denominator)
End Function
Okay, so we're dividing zero by zero. It's no wonder we're getting a runtime error. Also note that ToSingle suffers from this same issue. The least we can do is raise an error that sufficiently describes what actually went wrong, but I would carefully consider how you really want to handle this issue. It might make for a better API to just return zero, but raising this error is semantically correct. It's really a judgement call, but don't make the dev using your class dig into the code to figure out why they're getting an overflow error.
Public Function ToDouble() As Double
If IsUndefined Then
RaiseUndefinedError
Else
ToDouble = CDbl(this.Numerator / this.Denominator)
End If
End Function
Private Sub RaiseUndefinedError()
' Raises Division by Zero Error instead of letting an overflow error happen.
Const DivByZeroError As Integer = 11
Err.Raise DivByZeroError, TypeName(Me), "Division by Zero is Undefined"
End Sub
While your Fraction.CompareTo function does not break my Single Screen Principle, I do see an opportunity to clarify it by breaking it down into a few distinct functions. Keep in mind that VB6's And operator does not short-circuit. This means that when checking to see if Me is not a number, the code will evaluate both calls every time. So, first, a quick refactor to resolve the slight inefficiency.
Public Function CompareTo(ByVal other As Fraction) As Integer
If Me.IsUndefined Or other.IsUndefined Then
CompareTo = 0
If Me.IsNaN Then
If other.IsPositiveInfinity Then
CompareTo = 1
ElseIf other.IsNegativeInfinity Then
CompareTo = -1
End If
ElseIf other.IsNaN Then
If Me.IsPositiveInfinity Then
CompareTo = -1
ElseIf Me.IsNegativeInfinity Then
CompareTo = 1
End If
End If
Else
Dim otherValue As Double
otherValue = other.ToDouble
Dim myValue As Double
myValue = Me.ToDouble
If otherValue > myValue Then
CompareTo = 1
ElseIf otherValue < myValue Then
CompareTo = -1
Else
CompareTo = 0
End If
End If
End Function
But now we're verging on arrow code. There are two main parts to the CompareTo implementation, comparing undefined fractions and comparing defined fractions. Those sound like pretty good private function names to me.
Public Function CompareTo(ByVal other As Fraction) As Integer
If Me.IsUndefined Or other.IsUndefined Then
CompareTo = CompareUndefined(other)
Else
CompareTo = CompareDefined(other)
End If
End Function
Private Function CompareUndefined(ByVal other As Fraction) As Integer
CompareUndefined = 0
If Me.IsNaN Then
If other.IsPositiveInfinity Then
CompareUndefined = 1
ElseIf other.IsNegativeInfinity Then
CompareUndefined = -1
End If
ElseIf other.IsNaN Then
If Me.IsPositiveInfinity Then
CompareUndefined = -1
ElseIf Me.IsNegativeInfinity Then
CompareUndefined = 1
End If
End If
End Function
Private Function CompareDefined(ByVal other As Fraction) As Integer
Dim otherValue As Double
otherValue = other.ToDouble
Dim myValue As Double
myValue = Me.ToDouble
If otherValue > myValue Then
CompareDefined = 1
ElseIf otherValue < myValue Then
CompareDefined = -1
Else
CompareDefined = 0
End If
End Function
Now, you could repeat this process for CompareDefined, but it's not nested so badly now, and the function is pretty short and concise as is. You know what though, I'm not quite happy with it... I think this is a case for IIf(). It's shorter and undoes the nesting, but does sacrifice a little bit of "understandability" (as any ternary operator would).
Private Function CompareUndefined(ByVal other As Fraction) As Integer
CompareUndefined = 0
If Me.IsNaN Then
CompareUndefined = IIf(other.IsPositiveInfinity, 1, -1)
ElseIf other.IsNaN Then
CompareUndefined = IIf(Me.IsPositiveInfinity, -1, 1)
End If
End Function
There's not much else to say. You generally write very readable code, so I won't bother with any nitpicks about style. | {
"domain": "codereview.stackexchange",
"id": 9547,
"tags": "object-oriented, parsing, vba, rational-numbers"
} |
Is the triiodide ion polar? | Question: Three professors argue it is non-polar.
My professor argues that it is a monopole, like most ions.
The structure of the triiodide ion places a negative formal charge on the central iodine atom. The molecular geometry is also linear (at least according to VSEPR), and its electronic geometry is trigonal bipyramidal. Given the high amount of symmetry, shouldn't triiodide be non-polar? Sure, it might have a charged central atom, but so does carbon dioxide.
What's the answer?
Answer:
The structure of the triiodide ion places a negative formal charge on the central iodine atom.
No it doesn’t. The two resonance structures that describe the four-electron three-centre bond put the negative formal charge on the outer iodines ($\ce{1/2-}$ each).
That said, polarity is usually defined as having a non-zero dipole moment. The dipole moment’s vector must display the same symmetry as the entire molecule. Since the molecule is linear and both $\ce{I\bond{...}I}$ distances are equal, its point group is $D_{\infty \mathrm{h}}$ which includes $i$ which means the overall dipole moment must be zero.
The charge it carries does not matter. Any single-atom ion also has zero dipole moment and would thus be called non-polar. ‘Non-polar’ does not mean ‘free of electrostatic interaction’. | {
"domain": "chemistry.stackexchange",
"id": 4735,
"tags": "inorganic-chemistry, polarity"
} |
How to include calls to an $O(n)$ subroutine on finite-sized inputs in an analysis? | Question: I am trying to calculate the runtime complexity of a function that does not have fixed size input, but uses several helper methods that do have fixed size input. I was unsure of how to include the helper methods in my calculations.
If I have an array with a fixed size of 32 indices, and I have a function that sums up the elements in that array, will that function be $O(n)$, or $O(1)$? I think that a function that sums up the elements of an array is $O(n)$ because each element of an $n$-length array must be visited, but if the function only sums up arrays of length 32, is it still $O(n)$?
Answer: There are some things to consider here.
Conceptually, an algorithm that iterates once over the input array has runtime $\Omega(n)$ (and $O(n)$ if it does constant-time work per element), $n$ being the size of the array.
In application, if the algorithm is only ever called with arrays of length $n \leq n_0$ for some constant $n_0$, you may treat the used time as $O(n) \subseteq O(n_0) = O(1)$. Note that the $n$ here is probably not the one you use for your whole algorithm!
In practice, every algorithm runs constant time as real computers have finite memory. This is not an interesting model in the sense that it does not help to differentiate between algorithms that do behave very differently in practice, so we don't use it.
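The second point above can be made concrete with a small sketch (illustrative, not from the answer): a summing subroutine pinned to length-32 arrays does a constant amount of work, so it vanishes into the constant of the caller's analysis:

```python
def sum32(arr):
    # Theta(n) over the array length in general, but with n fixed at 32
    # the loop always does exactly 32 additions, i.e. O(1) as a subroutine.
    assert len(arr) == 32
    total = 0
    for v in arr:
        total += v
    return total

def process(blocks):
    # If the overall input is m blocks of 32 elements, this is
    # m * O(1) = O(m): the 32 disappears into the constant factor.
    return [sum32(b) for b in blocks]

print(process([[1] * 32, [2] * 32]))  # -> [32, 64]
```

Note the "n" that matters asymptotically here is m, the number of blocks, not the fixed 32.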
You may be interested in some of our reference questions on asymptotics, runtime analysis and the time used by arithmetic operations. | {
"domain": "cs.stackexchange",
"id": 1774,
"tags": "terminology, algorithm-analysis, runtime-analysis"
} |
How are snakemake's --cluster and --drmaa options implemented? | Question: I'm fairly new to snakemake and I'm trying to understand the difference between the --cluster and --drmaa flags, both of which are used to submit jobs to compute clusters/nodes.
The docs give a few hints about the advantages of using --drmaa here:
If your cluster system supports DRMAA, Snakemake can make use of that
to increase the control over jobs. E.g. jobs can be cancelled upon
pressing Ctrl+C, which is not possible with the generic --cluster
support.
And here:
If available, DRMAA is preferable over the generic cluster modes
because it provides better control and error handling.
So I have a conceptual understanding of the advantages of using --drmaa. However, I don't consider the above a very complete explanation, and I don't know how these flags are implemented in snakemake under the hood, can someone elaborate?
Note, that although this could be considered a more general programming question, snakemake is predominantly used in bioinformatics; I was persuaded that this question would be considered on-topic because of this meta-post and this answer.
Answer: I'd always kind of wondered how this worked too, so I took this as an excuse to look into the snakemake code. At the end of the day this becomes a question of (1) how are jobs actually submitted and (2) how is it determined if jobs are done (and then whether they failed)?
For DRMAA, python has a module (appropriately named "drmaa") that wraps around the libdrmaa library that comes with most schedulers. This is a very popular route, for example the Galaxy project uses this for dealing with most clusters (e.g., I use it to connect our internal Galaxy instance to our slurm cluster). The huge benefit here is that DRMAA does magic to allow you to simply submit commands to your cluster without needing to know whether you should execute qsub or srun or something else. Further, it then provides methods to simply query whether a job is running or not and what its exit status was.
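The generic --cluster route, described next, has none of this support and must infer job state from the filesystem. A toy illustration of that sentinel-file handshake (simplified and hypothetical, not snakemake's actual code; the filenames mimic its {jobid}.jobfinished / {jobid}.jobfailed convention):

```python
import subprocess
import tempfile
import time
from pathlib import Path

workdir = Path(tempfile.mkdtemp())
jobid = "42"

# The wrapper "jobscript" a generic-cluster submission runs: do the work,
# then touch a sentinel file recording success or failure.
script = workdir / "jobscript.sh"
script.write_text(
    "#!/bin/sh\n"
    "true  # the rule's actual shell command would go here\n"
    f"if [ $? -eq 0 ]; then touch '{workdir}/{jobid}.jobfinished'; "
    f"else touch '{workdir}/{jobid}.jobfailed'; fi\n"
)
subprocess.run(["sh", str(script)], check=True)

# The scheduler-agnostic side: poll for either sentinel to appear.
while True:
    if (workdir / f"{jobid}.jobfinished").exists():
        status = "finished"
        break
    if (workdir / f"{jobid}.jobfailed").exists():
        status = "failed"
        break
    time.sleep(0.1)

print(status)  # -> finished
```

With DRMAA none of this bookkeeping is needed: the library reports job status and exit codes directly.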
Using the --cluster command requires a LOT more magic on the side of snakemake. At the end of the day it creates a shell script that it then submits using the command you provided. Importantly, it includes some secret files in that script that it can then watch for (ever noticed that .snakemake directory where you execute a script? That appears to be among the things it's used for.). These are named {jobid}.jobfinished and {jobid}.jobfailed and one of them will get touched, depending on the exit status of your command/script. Once one of those is there, then snakemake can move on in its DAG (or not, if there's a failure). This is obviously a LOT more to keep track of and it's then not possible for snakemake to cancel running jobs, which is something one can do easily with DRMAA. | {
"domain": "bioinformatics.stackexchange",
"id": 60,
"tags": "snakemake"
} |
Why aren't 100% UV blocked sunglasses safe to view an eclipse with? | Question: I am not planning on staring into the sun during an eclipse or any other time.
I have been reading about how no variety of regular sunglasses are safe enough to view the eclipse with. I'm not talking about being able to see things clearly, but just actual eye safety.
From what I understand it is the ultraviolet light that causes damage to the retina, but maybe it is more complicated.
How do my eyes get hurt if I am looking at the sun through so called "100% UV protection" and what makes the eclipse glasses sold in stores different?
edit: To clarify this is not about how the rays from the sun are dangerous, but about why "100% UV blocking" sunglasses fail. Do other dangerous rays get through? Is the "100%" marketing? Essentially, in what way are the best consumer sunglasses inadequate for looking at an eclipse.
Answers about pupil dilation and what makes an eclipse more dangerous for naked-eye viewers are not what I'm after.
Answer: You are correct that almost always it is the UV content of sunlight and not its power that is the main hazard in staring at the Sun.
The lighting during a total eclipse is one of those situations outside the "almost always". Eclipses did not weigh heavily on our evolution, so we are ill kitted to deal with them.
Moreover, UV sunglasses are not designed to attenuate direct sunlight, only reflected sunlight.
Normally, the eye's pupil is shrunken to about a millimeter diameter in bright sunlight. This means that it admits about a milliwatt of sunlight, which, for healthy retinas, is nowhere near enough to do thermal damage (see my answer here for further discussion).
During an eclipse, the pupil dilates to about $7\,\mathrm{mm}$ diameter to adapt for the low light levels of the eclipse's twilight. Thus its aperture is fifty times bigger than it normally is in sunlight. This means it admits a great deal more UV than normal (and the corona, at $100\,000\,\mathrm K$, radiates a great deal of this). You're getting about $50$ times the dose you would normally get even looking directly at the Sun.
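That factor of fifty is just the ratio of the two pupil areas implied by the quoted diameters; a quick arithmetic check (illustrative):

```python
import math

d_sun = 1.0      # mm, pupil diameter constricted in direct sunlight (from the answer)
d_eclipse = 7.0  # mm, pupil diameter dilated in eclipse twilight (from the answer)

def area(d_mm):
    # Admitted light scales with pupil area, i.e. with diameter squared.
    return math.pi * (d_mm / 2.0) ** 2

ratio = area(d_eclipse) / area(d_sun)
print(round(ratio))  # -> 49, i.e. roughly the "fifty times" in the text
```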
Furthermore, suddenly the diamond ring phase begins, and high levels of sunlight reach the retina before the pupil can shrink again. The latter happens only very slowly. So even thermal damage is a risk here. | {
"domain": "physics.stackexchange",
"id": 42686,
"tags": "sun, vision, biology, eclipse, laboratory-safety"
} |
A table of particles and energies in different satellite orbits (GEO, MEO, LEO) | Question: I have searched extensively for information about which particles, and at what energies, exist in the near-Earth space environment (the inner and outer Van Allen belts, or satellite orbits such as GEO, MEO, and LEO). I have found only sporadic and sometimes contradictory information. Any help would be appreciated.
Answer: To understand what particles are in what zones, you have to understand the magnetic fields around the planet. Since they move dramatically and we don't really have anything like a real-time map, you might be approaching your question the wrong way. Particles don't get funneled into specific orbits; they bounce, interact, and bend their way around the planet. Some of them are caught and work their way towards the north pole, but that region moves around all the time. Without knowing the reason for your question, I can only suggest you look into NASA's THEMIS results and the related satellite programs, which you can read about here, as well as the Radiation Belt Storm Probes and the WIND spacecraft. Perhaps you'll find what you're looking for in their research results.
EDIT: The question is still a bit vague, but since you're interested in satellites I think the area you need to look into is radiation hardening. Orbit does play a role, but designs are specified mostly based on expected lifetime exposure to hard radiation, because there's no real rule of what particle energy sits in what orbit (primarily protons and electrons at various energy levels). It is much more dynamic than that and depends heavily on what the sun is up to.
A reference for you that might help is: A. Holmes-Siedle and L. Adams, "Handbook of Radiation Effects," Oxford University Press, 1993, where you'll see some of the expected exposures and testing requirements for hardware.
Another area for you to look into is radiation hardening and test assurance for spacecraft. This covers the tests, radiation database data, models, methods, etc. It should be enough to point you in the direction you need.
"domain": "physics.stackexchange",
"id": 9268,
"tags": "experimental-physics, space"
} |
When sending messages through rostopic, should both rosnodes need on? | Question:
Hi All:
Here is a question about rostopic: can a client rosnode send a message to a topic when the server rosnode is off, or vice versa?
The topic concept is also used in the MQTT protocol. According to the MQTT website, sending a message from one node to another through a topic does not depend on the receiver's state (the receiver can be on or off), to avoid wasteful long-term internet connections and unnecessary use of bandwidth. Is the same true for ROS topics?
Originally posted by PaulTsai111 on ROS Answers with karma: 3 on 2016-11-03
Post score: 0
Original comments
Comment by mgruhler on 2016-11-04:
@JoshMarino why not post this as an answer? :-)
Comment by NEngelhard on 2016-11-04:
"A publisher sends the message regardless if anyone listens to it or not." How is this possible if messages are transported via TCP? Or do you just mean that the .publish(msg) does not block if there is no listener?
Comment by gvdhoorn on 2016-11-05:
publish(..) never blocks.
And whether TCP is used or not doesn't matter: if there are no subscribers, there are no connections between nodes, so the middleware will simply discard the msg passed to it by the various .publish(..) methods.
Only nodes exchange msgs, the master is not involved.
Answer:
A publisher sends the message regardless if anyone listens to it or not.
A subscriber will not enter its callback function if no message is received. The subscriber can still be running though and doing nothing if no message is ever received.
Originally posted by JoshMarino with karma: 592 on 2016-11-03
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 26147,
"tags": "rosnode, ubuntu-trusty, ubuntu, rostopic"
} |
Genetic Algorithm Minimum Population Size | Question: Is there a minimum limit to a pool (population) size when using the genetic algorithm to solve an optimization problem? For example a population of size 2.
Answer: I had a similar question and emailed Inman Harvey from the University of Sussex (author of Artificial Evolution: A Continuing SAGA and The Microbial Genetic Algorithm). Here is his answer (emphasis mine):
What is an optimum population size? Why?
If, like the natural world, a trillion microbes can evolve in the sea in parallel with no cost to you, then the bigger the better. In practice, on a serial machine, 10 times the pop-size costs you 10 times the time. Experience, with rather little theory, suggests that pop sizes of 30 to 100 minimum get you quite a lot of the advantages that GAs have over hill-climbing (ie pop of 2). So rule of thumb is to start here.
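Harvey's microbial GA, referenced above, is itself evidence that tiny tournaments suffice: it only ever touches two individuals at a time while keeping a larger pool. A minimal sketch on a onemax problem (illustrative only; the parameter choices are mine, not Harvey's):

```python
import random

random.seed(1)
GENES, POP, TOURNAMENTS = 20, 30, 2000
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
fitness = lambda g: sum(g)  # onemax: count the 1-bits

for _ in range(TOURNAMENTS):
    a, b = random.sample(range(POP), 2)          # tournament of just two
    winner, loser = (a, b) if fitness(pop[a]) >= fitness(pop[b]) else (b, a)
    for i in range(GENES):
        if random.random() < 0.5:                # loser copies winner's gene
            pop[loser][i] = pop[winner][i]
        if random.random() < 0.02:               # occasional mutation
            pop[loser][i] ^= 1

best = max(fitness(g) for g in pop)
print(best)  # typically at or very near the optimum of 20
```

Even though each step compares only two individuals, the pool of 30 preserves diversity, which is the advantage over a true population of 2 (hill-climbing).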
So basically, the bigger the population the better, but realistically you often have to make compromises to reach your goal in a reasonable amount of time. When building a genetic algorithm, you have to guess what the optimal values are for a lot of parameters. There is no good answer. Mostly there is a lot of trial and error. | {
"domain": "cs.stackexchange",
"id": 3850,
"tags": "algorithms, optimization, randomized-algorithms, genetic-algorithms"
} |
API Client and usage implementation | Question: I'm really bothered by my approach here, as I'd like to use the client in multiple projects.
The client extends Guzzle and I'm using a factory method to initialize the client with the necessary settings:
<?php
namespace App\Services\ApiClient;
use App\Tenant;
use GuzzleHttp\Client as GuzzleClient;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Middleware;
use GuzzleHttp\Psr7\Request;
use Illuminate\Support\Str;
use Psr\Http\Message\RequestInterface;
use Psr\Http\Message\UriInterface;
class ApiClient extends GuzzleClient {
protected Tenant $tenant;
public static function factory(Tenant $tenant)
{
$handler = HandlerStack::create();
$handler->push(Middleware::mapRequest(function (RequestInterface $request) use ($tenant) {
$extraParams = ['tenant' => $tenant->api_tenant];
$uri = $request->getUri();
$uri .= (isset(parse_url($uri)['query']) ? '&' : '?');
$uri .= http_build_query($extraParams);
return new Request(
$request->getMethod(),
$uri,
$request->getHeaders(),
$request->getBody(),
$request->getProtocolVersion()
);
}));
$client = [
'base_uri' => rtrim($tenant->api_base_url, '/') . "/api/{$tenant->api_rest_version}/companies({$tenant->api_company_id})/",
'timeout' => 5.0,
'handler' => $handler,
'auth' => [$tenant->api_user, $tenant->api_password],
'curl' => [CURLOPT_SSL_VERIFYPEER => false],
'debug' => false,
'headers' => [
'Accept' => 'application/json',
'If-Match' => '*',
'Accept-Language' => 'en-US',
'OData-Version' => '4.0',
'Prefer' => 'odata.continue-on-error',
],
];
return new static($client, $tenant);
}
public function __construct(array $config = [], Tenant $tenant)
{
parent::__construct($config);
$this->tenant = $tenant;
}
public function getBatchUri() {
return rtrim($this->tenant->api_base_url, '/') . "/api/{$this->tenant->api_rest_version}/\$batch";
}
/**
* Converts request data for batch preparation.
*
* @param array $body
* @param string $method
* @param string $url
* @return array
*/
public function batch(array $body = [], $method = 'POST', string $url)
{
$extraParams = ['tenant' => $this->tenant->api_tenant];
$uri = $url;
$uri .= (isset(parse_url($uri)['query']) ? '&' : '?');
$uri .= http_build_query($extraParams);
return array_filter([
'method' => $method,
'atomicityGroup' => uniqid(null, true),
'id' => 'id_' . uniqid(null, true),
'url' => $uri,
'body' => (empty($body)) ? null : $body,
'headers' => [
'Content-Type' => 'application/json; odata.metadata=minimal; odata.streaming=true',
'OData-Version' => '4.0',
'If-Match' => '*',
'Prefer' => 'odata.continue-on-error',
],
]);
}
}
The available endpoints that the client is using differs based on the Tenant and the application the client is being used in.
As an example, I'm using the client to get some contacts from the external service and I then try and use that information for authentication.
Example:
<?php
namespace App\Http\Controllers\Api;
use App\Http\Controllers\Controller;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Auth;
use Illuminate\Validation\ValidationException;
class AuthenticationController extends Controller
{
public function login(Request $request)
{
$request->validate([
'email' => 'required|string',
'password' => 'required|string',
]);
if (!$user = Auth::guard('nav')->attempt($request->all())) {
throw ValidationException::withMessages([
'email' => ['The provided credentials are incorrect.'],
]);
}
return response()->json(['token' => $this->createToken($user)]);
}
}
The attempt method contains some nested methods that ends up with the following use of the API Client:
public function retrieveByCredentials(array $credentials)
{
$request = $this->apiClient->get('contacts', ['query' => [
'$filter' => "E_Mail eq '{$credentials['email']}'",
'$top' => 1
]]);
$data = json_decode($request->getBody()->getContents());
if(!empty($data->value)) {
return new NavContact((array)$data->value[0]);
}
return null;
}
NavContact is a Data Transfer Object (that resembles an Eloquent Model).
Now, the whole ordeal feels and looks awful (to me at least). And I'm not particularly interested in having a client library blowing up to the size of PayPal or Google's PHP libraries in the same way I'm not particularly interested in having something that requires me to use my memory alone to fetch the right endpoints with the required query parameters.
I'm sure there must be a place in the middle?
Answer: While there isn't a lot of code here, there is some duplicated code - e.g.
$extraParams = ['tenant' => $tenant->api_tenant];
$uri = $request->getUri();
$uri .= (isset(parse_url($uri)['query']) ? '&' : '?');
$uri .= http_build_query($extraParams);
in the callback to Middleware::mapRequest passed to $handler->push() in the factory() method, as well as similar lines in the batch() method:
$extraParams = ['tenant' => $this->tenant->api_tenant];
$uri = $url;
$uri .= (isset(parse_url($uri)['query']) ? '&' : '?');
$uri .= http_build_query($extraParams);
This could be seen as a violation of the Don't Repeat Yourself principle. The similar lines could be abstracted into a static method that accepts a URL and a Tenant object. | {
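To make the shape of that extraction concrete, here is a sketch in Python rather than PHP, purely to show the pattern; the helper name `with_tenant_param` is hypothetical, and in the actual client it would be a static method that accepts the URL and a Tenant object:

```python
from urllib.parse import urlencode, urlparse

def with_tenant_param(url, tenant_id):
    # Decide '?' vs '&' in exactly one place instead of in both call sites.
    sep = '&' if urlparse(url).query else '?'
    return url + sep + urlencode({'tenant': tenant_id})
```

Both the middleware callback and the batch() method would then delegate to this single helper, so the query-string logic lives in one place.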
"domain": "codereview.stackexchange",
"id": 42472,
"tags": "php, api, laravel, client"
} |
Installation of Fuerte under Fedora 14 | Question:
Hey,
I followed the instructions and everything went fine until the cmake stuff.
Fedora Installation Guide
cmake .. -DCMAKE_INSTALL_PREFIX=/opt/ros/fuerte -DSETUPTOOLS_DEB_LAYOUT=OFF
-- +++ catkin
-- Could NOT find PY_em (missing: PY_EM)
CMake Error at catkin/cmake/empy.cmake:29 (message):
  Unable to find either executable 'empy' or Python module 'em'... try
  installing package 'python-empy'
Call Stack (most recent call first):
  catkin/cmake/all.cmake:51 (include)
  catkin/CMakeLists.txt:12 (include)
any ideas?
Originally posted by Flowers on ROS Answers with karma: 342 on 2012-07-16
Post score: 2
Answer:
It looks like you are missing the EmPy Python package. On Ubuntu that is installed as a Debian package dependency.
Maybe there is a corresponding Fedora 14 package. If so, I would use it.
Otherwise, try installing it with pip:
sudo pip install -U EmPy
Originally posted by joq with karma: 25443 on 2012-07-16
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 10216,
"tags": "ros, python, ros-fuerte, fedora"
} |
How can heat turn into light | Question: I am confused about how hot surfaces can radiate light to their surroundings. When I shine a light on a surface the light turns to heat spontaneously, and when I leave that hot surface it radiates light spontaneously. To me this suggests that the process isn't driven by entropy increasing because it is two directional. Could anyone explain how this process complies with the second law of thermodynamics?
Answer: Heat, in the context of something "giving off heat" in everyday conversation, is a term we often use to describe emission of a specific part of the electromagnetic spectrum (namely the infrared spectrum). As you pour more and more energy into an object, its electrons get more and more excited (which is the process of absorbing photons), until they emit a photon to return to a lower energy state. As the object gets hotter, the electrons can jump up further and further, until on returning to the ground state they emit visible (or UV/X-ray) photons.
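A quick way to see the shift from infrared "heat" to a visible glow is Wien's displacement law, $\lambda_\text{peak} = b/T$. A small numeric check (the temperatures below are illustrative choices):

```python
b = 2.898e-3  # Wien's displacement constant, in metre-kelvin

def peak_wavelength(T):
    # Wavelength at which a blackbody at temperature T emits most strongly.
    return b / T

room = peak_wavelength(300.0)         # ~9.7 micrometres: deep infrared
sun_surface = peak_wavelength(5800.0) # ~500 nanometres: visible light
```

So a room-temperature surface re-radiates almost entirely in the infrared, while something at thousands of kelvin glows visibly.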
In terms of entropy and the second law: when shining the light on a surface, the electrochemical processes occurring in the flashlight result in a greater amount of entropy, since the chemical concentrations of the battery cells increase the number of micro-states. Similar would be true of whatever is driving the light emission from your source. When the energy is emitted from the surface, heat is flowing from a hot source to a colder one, increasing the number of potential states (multiplicity) and thus the entropy of the entire system. Throughout the entire process entropy has a net increase if you observe the entire system.
"domain": "physics.stackexchange",
"id": 15384,
"tags": "thermodynamics, visible-light, entropy, radiation"
} |
Carbonic acid in water | Question: If we add a weak acid such as carbonic acid to water, it weakly dissociates to form bicarbonate and hydrogen ions. Now water already has some hydrogen ions, having a pH of 7.
So now indirectly we added hydrogen ions and base (bicarbonate) ions to water which already has a hydrogen ion concentration. The resulting solution is acidic. This means that on the addition of equimolar hydrogen ions and bicarbonate (base) ions, hydrogen ions dominate.
The only reason which I can think of is this: let's say there were 1000 hydrogen ions in water; now the addition of 100 hydrogen ions and 100 bicarbonate ions will cause a net increase in hydrogen ion concentration, because bicarbonate won't be able to counteract the increase in hydrogen ions. Bicarbonate, when acting as a base, will be in equilibrium with carbonic acid, i.e. 100 bicarbonate ions won't be able to neutralize 100 hydrogen ions present in the solution; because of the equilibrium they will neutralize only, say, 50, so overall there will be a net increase of 50 hydrogen ions.
Is my explanation correct?
Answer: To start off, here is the equilibrium we are talking about so we can refer to it later:
$$\ce{H2CO3 <--> H+ + HCO3-}$$
Since the pH of the solution is $\mathrm{-log[H+]}$, the pH of pure water will decrease upon the addition of carbonic acid.
Where I think you are getting confused is regarding the action of the bicarbonate that is produced. As this is an equilibrium reaction, bicarbonate will indeed act as a base, which is the reaction from right to left above. But let's say a molecule of bicarbonate reacts with a hydrogen ion to reform carbonic acid. That just means another molecule of carbonic acid will dissociate to take its place. This is all just a way of saying that the reaction shown above is an equilibrium, such that both the left-to-right and right-to-left reactions are occurring. But in the end there is never any net hydroxide produced or hydrogen ions consumed by this reaction, as there is always plenty of carbonic acid available to counteract the basic action of the bicarbonate.
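To see the net acidification numerically, one can solve the equilibrium directly. The concentration and $K_{a1}$ below are illustrative assumptions ($K_{a1} \approx 4.3\times10^{-7}$ for carbonic acid at 25 °C), and water's own autoionization is neglected since the acid contributes far more than $10^{-7}$ M of $\ce{H+}$:

```python
import math

Ka1 = 4.3e-7   # assumed first dissociation constant of H2CO3 at 25 C
C = 0.01       # assumed mol/L of dissolved carbonic acid

# At equilibrium x = [H+] = [HCO3-]:  x^2 / (C - x) = Ka1
# Rearranged to x^2 + Ka1*x - Ka1*C = 0 and solved with the quadratic formula.
x = (-Ka1 + math.sqrt(Ka1**2 + 4 * Ka1 * C)) / 2
pH = -math.log10(x)
# pH comes out near 4.2: clearly acidic, despite the bicarbonate produced.
```

Even though every dissociation event produces one bicarbonate ion per hydrogen ion, the solution still ends up well below pH 7.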
I hope I understood your question correctly. Feel free to use the comments below to ask for any clarifications. | {
"domain": "chemistry.stackexchange",
"id": 8110,
"tags": "acid-base, equilibrium, ph"
} |
What is condition for emission spectrum? | Question: Deexcitation of an electron gives us an emission spectrum, but my book states it's only possible if
$\Delta n\ne 0$,
$\Delta \ell=\pm 1$,
$\Delta m=0,\pm 1$.
Answer: The first condition on $\Delta n$ guarantees the initial and final states do not have the same energy, so the photon can carry the difference.
The second condition arises because in atoms the emitted radiation is usually of the dipole type. The dipole is a vector, with angular momentum $\ell=1$, so the change in $\ell$ is due to the momentum carried away by the radiation. Note that a $\Delta \ell=0$ transition, which would leave $\ell$ unchanged, is prevented by Laporte's rule (i.e. by a parity argument). (Note also that, in deformed nuclei, quadrupole transitions can occur with $\Delta \ell=\pm 2$.)
The last condition is related to the polarization of the radiation. For $\Delta m=\pm 1$, the emitted radiation will have circular polarization, whereas for $\Delta m=0$ the radiation will be linearly polarized. | {
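The three conditions are easy to encode as a predicate for electric-dipole transitions between hydrogen-like states $(n, \ell, m)$; this is a simplified sketch that checks only these three quantum numbers:

```python
def dipole_allowed(n1, l1, m1, n2, l2, m2):
    # Electric-dipole selection rules: the photon carries the energy
    # difference (n must change), one unit of angular momentum
    # (delta l = +/-1), and delta m must be 0 or +/-1.
    return n1 != n2 and abs(l1 - l2) == 1 and abs(m1 - m2) <= 1
```

For example, 2p to 1s is allowed, while 2s to 1s and 3d to 1s are dipole-forbidden.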
"domain": "physics.stackexchange",
"id": 63378,
"tags": "particle-physics, photons, spectroscopy, photon-emission"
} |
How to store explored nodes in ROS? | Question:
Hi,
I am using nav2d to explore a map. Is there any way I can store the order of explored nodes in a data structure?
I was wondering whether I can use rosbag for this somehow. Please let me know if anyone knows about this
Thanks
Originally posted by robo_explorer on ROS Answers with karma: 29 on 2015-11-02
Post score: 0
Original comments
Comment by Sebastian Kasperski on 2015-11-04:
What do you want to achieve? All nodes are published on a Marker topic, but this list is more or less unordered.
Comment by robo_explorer on 2015-11-04:
I want to check the overlap in the algorithm. So, I want to have the order of explored nodes, so that I can track what path the robot followed.
Answer:
I don't see how this could be done out-of-the-box. But the pointclouds that are attached to the nodes should have a sequence number or at least a timestamp, that can be used to order them. But you would have to find a proper way to output them via ROS.
Another way might be to write a very simple node that regularly logs the robot's pose via TF.
Originally posted by Sebastian Kasperski with karma: 1658 on 2015-11-05
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by robo_explorer on 2015-11-05:
Yes, that's a great idea, I can write a node to log the robot's pose through tf_listener. But, I am not sure, how can I trace the position back to the specific occupancy grid cell. Is there any way where I can relate a position with the cell number?
Comment by Sebastian Kasperski on 2015-11-06:
I don't think so. The grid map is dynamically created from the laser scans and the corrected poses from the slam-backend whenever get_map is called. Such a relation would even change with an optimization run of the backend after a loop was closed. | {
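For what it's worth, if the map's origin and resolution are treated as fixed (an assumption; as noted in the comment above, nav2d rebuilds the grid dynamically and an optimization run can change it), relating a world-frame position to a grid cell is a one-line conversion:

```python
import math

def world_to_cell(x, y, origin_x, origin_y, resolution):
    # Map a world-frame position (metres) to (row, col) indices of an
    # occupancy grid whose lower-left corner sits at (origin_x, origin_y).
    col = int(math.floor((x - origin_x) / resolution))
    row = int(math.floor((y - origin_y) / resolution))
    return row, col
```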
"domain": "robotics.stackexchange",
"id": 22885,
"tags": "ros, exploration, rosbag, nav2d"
} |
Circumpolar Equation Derivation | Question: Edit: Geometric answer from Math.stackexchange
I am trying to understand where the circumpolar equation comes from.
$\delta \geq \frac{\pi}{2} - l$
I can understand the sort of "rotation" of the reference frame that one experiences in horizon coordinates. I'm not sure where to go from the fact that a star's lowest altitude must be greater than 0.
It is pretty simple to show that $a = l$ and $c = \frac{\pi}{2} - l$. $a$ in this case is the altitude of the north celestial pole. What I need to do is show that the cone made by a circumpolar star satisfies $\delta + l \geq \frac{\pi}{2}$.
You can deduce that $a_{min} \geq 0$ for a star to be considered circumpolar. I just need to do this geometrically in terms of its declination from the celestial equator.
Edit: I think my latest attempt has found a solution.
For a star to be circumpolar, the star must, at the very least, never dip below the horizon.
$0 \leq a \leq \frac{\pi}{2}$
In the celestial sphere coordinate system, $c$ can be found to be
$c = \frac{\pi}{2} - \delta$
In the horizon coordinate system, $c$ can be found to be
$c = l - a_{min}$
Therefore
$l - a_{min} = \frac{\pi}{2} - \delta$
$\\$
$l + \delta = \frac{\pi}{2} + a_{min}$
$a_{min}$ can be set to 0 because it is the lower bound for circumpolarity.
$l + \delta = \frac{\pi}{2}$
$\delta$ must be in the cone made by the star's rotation about its declination angle. Therefore
$\frac{\pi}{2} - l \leq \delta$
Answer: There is nothing wrong with your explanation. You could add some explanatory notes like $a_{min} = l-c$ and $a_{max}=l+c$. Also, I suggest that you put the equation $l+\delta = \frac{\pi}2+a_{min}$ on a separate line.
It is very good to figure out these equations on your own.
$ $
Here is an alternative explanation with less algebra:
If you walk outside at night, you will notice that the north star has an elevation of $l$ degrees if your latitude is $l$.
The north star does not move.
The angular distance between stars does not change (in the short term!).
In order for a star to be circumpolar, it needs to be within $l$ degrees of the north star (otherwise it will dip below the horizon when its hour angle is 12 hours.)
So, the angle $\theta$ between the north star and a circumpolar star needs to be less than $l$. That angle is $\theta=90^\circ-\delta = \pi/2 -\delta$ where $\delta$ is the declination of the star.
Finally,
$$
\begin{align} \theta &\leq l\\
\pi/2 -\delta &\leq l \\
\pi/2 &\leq \delta + l \\
\pi/2 - l &\leq \delta.
\end{align}
$$ | {
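The final inequality translates directly into code. A minimal check for a northern-hemisphere observer (angles in degrees, refraction ignored):

```python
def is_circumpolar(declination_deg, latitude_deg):
    # A star never sets for a northern observer when its declination
    # satisfies delta >= 90 - l, i.e. it stays within l degrees of the pole.
    return declination_deg >= 90.0 - latitude_deg
```

For instance, Polaris (declination about 89.3 degrees) is circumpolar from any mid-northern latitude, while a star on the celestial equator never is.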
"domain": "astronomy.stackexchange",
"id": 3454,
"tags": "observational-astronomy"
} |
How does AMCL publishes tf using pose? | Question:
In my project, I am able to obtain a special pose (a future pose based on prediction) of a robot, and I need to publish the pose as a tf from the map frame to an odom_future frame (my new frame). I know that amcl provides such tf-publishing code. Therefore, I tried to use the AMCL code to write a publisher that converts a pose and publishes a tf like amcl does. However, I encountered some difficulties.
In the amcl_node.cpp file, it contains the following lines:
// subtracting base to odom from map to base and send map to odom instead
geometry_msgs::PoseStamped odom_to_map;
...
tf2::Quaternion q;
q.setRPY(0, 0, hyps[max_weight_hyp].pf_pose_mean.v[2]);
tf2::Transform tmp_tf(q, tf2::Vector3(hyps[max_weight_hyp].pf_pose_mean.v[0],
hyps[max_weight_hyp].pf_pose_mean.v[1],
0.0));
geometry_msgs::PoseStamped tmp_tf_stamped;
tmp_tf_stamped.header.frame_id = base_frame_id_;
tmp_tf_stamped.header.stamp = laser_scan->header.stamp;
tf2::toMsg(tmp_tf.inverse(), tmp_tf_stamped.pose); //invert the given pose
this->tf_->transform(tmp_tf_stamped, odom_to_map, odom_frame_id_); ///subtraction occurs here
...
I tried to use this snippet to help build my tf broadcaster, but it gives the error:
"Do not call canTransform or lookupTransform with a timeout unless you are using another thread for populating data. Without a dedicated thread it will always timeout. If you have a seperate thread servicing tf messages, call setUsingDedicatedThread(true) on your Buffer instance.";
My only major modification is the timestamp, because my node does not subscribe to a laser scan (I also tried subscribing to a laser scan and using its timestamp, but that doesn't work either):
tmp_tf_stamped.header.stamp = ros::Time::now();
I've also tried setUsingDedicatedThread(true). The error disappears, but the node doesn't do anything.
Questions:
What exactly does tf_->transform(...) do? I couldn't find any documentation about this function.
Does the error originated from the timestamp?
Is there any other elegant way to compute the tf between map and odom_future from pose? (in the same way of odom frame).
Originally posted by alex_f224 on ROS Answers with karma: 28 on 2018-06-21
Post score: 0
Answer:
I figured out it is non-trivial in my application to compute tf(map, odom_future) and tf(odom_future, base_footprint_future) given amcl_pose using the method used in AMCL.
As shown in the AMCL ROS webpage, it has the following tfs:
tf(map, odom) + tf(odom, base_footprint) = tf(map, base_footprint)
tf(map, odom) is the correction for the drift of the odometry.
tf(odom, base_footprint) is the pose of the robot from the odometer reading.
The pose given by AMCL is tf(map, base_footprint). In my application, since I am predicting the future, the amcl_pose is in the future. In other words, tf(map, odom), tf(odom, base_footprint), tf(map,base_footprint) are in the future, while only tf(map,base_footprint) is given by my motion predictor.
One way to overcome this is to build a motion model to predict future robot pose in the odom frame, i.e. tf(odom, base_footprint), which is not a trivial task. One could use the velocity motion model, but it might be a naive solution.
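For reference, the "naive" constant-velocity model mentioned above can be sketched as follows: exact arc integration of a planar pose $(x, y, \theta)$ under linear velocity $v$ and angular velocity $\omega$ over a prediction horizon $\Delta t$:

```python
import math

def predict_pose(x, y, theta, v, w, dt):
    # Velocity motion model: drive along a circular arc of radius v/w,
    # falling back to a straight line when w is (numerically) zero.
    if abs(w) < 1e-9:
        return x + v * dt * math.cos(theta), y + v * dt * math.sin(theta), theta
    return (x + (v / w) * (math.sin(theta + w * dt) - math.sin(theta)),
            y + (v / w) * (math.cos(theta) - math.cos(theta + w * dt)),
            theta + w * dt)
```

This predicts tf(odom, base_footprint) at the future time, under the strong assumption that the commanded velocities stay constant over the horizon.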
Originally posted by alex_f224 with karma: 28 on 2018-06-21
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 31067,
"tags": "ros, navigation, ros-kinetic, tf2, amcl"
} |
Why can't concave lenses be used as magnifying glasses? | Question: I see why convex lenses work, but why can't concave lenses work? The light from an object diverges, making it wider, so technically that should make the object bigger, shouldn't it? Why isn't this the case?
My thinking:
Answer: In your image it is not clear how you actually end up with a "big image". It is not enough to take into account just one light ray per point on the object.
You also need to consider how we actually see something, because if you think of it, at any moment there is light hitting our eye that comes from all kinds of directions and all kinds of objects. However only a small part of it is actually used to make up the image we see.
You need to distinguish between virtual images and real images.
Real images are formed if all (or at least many) light rays coming from one point of the object are focused into one point on the viewer side. In that case you can for instance put a screen/paper on the viewer side on which an image gets projected. This is for instance the case in projectors or cameras.
In the case of Virtual images, the light rays diverge on the viewer side, so rays coming from one point of the object are not focused into a single point on the viewer side anywhere. Therefore one cannot have a projection directly onto a screen. However all the rays from one point of the object appear on the viewer side as if they are coming from a single point (if there was no lens). The eye re-focuses all these rays and produces a real image inside the eye, in the same way as it would do when looking at a real object the size/distance of the virtual image.
Let's look at the four combinations of virtual/real convex/concave lenses for a single lens:
Convex, real image
If the object is located outside the focal length of the lens, a real (reversed) image is formed on the viewer side. This is effectively what you have in the case of cameras, though they have a somewhat more complicated optical system, since for instance they need to keep the distance from the lens to the film/sensor/screen constant, independent of how far away the object is. If you take a magnifying glass and look at a distant object through it you should also get this effect. But note that in that case your eye is not where the image is formed but further away (to the right in the image below).
Convex, virtual image
If the object is within the focal length of the lens a virtual image is formed. Light rays on the observer side are diverging but appear to come from a point corresponding to a larger upright (virtual) object. This is the case for the magnifying glass when used to magnify things.
Concave, virtual image
Independent of whether the object is inside or outside the focal length, these lenses always produce a virtual image that is smaller than the original.
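A thin-lens calculation makes this last point quantitative. Using the "real-is-positive" convention $1/f = 1/d_o + 1/d_i$, with $f<0$ for a diverging lens (the numbers below are arbitrary):

```python
def thin_lens_image(f, d_o):
    # Thin-lens equation, "real-is-positive" convention: 1/f = 1/d_o + 1/d_i.
    # A negative d_i means a virtual image on the same side as the object.
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)
    m = -d_i / d_o   # lateral magnification
    return d_i, m

# A concave (diverging) lens with f = -10, at several object distances:
results = [thin_lens_image(-10.0, d_o) for d_o in (2.0, 5.0, 20.0, 100.0)]
# every image is virtual (d_i < 0), upright and reduced (0 < m < 1)
```

No matter where the object sits, the magnification stays below 1, which is why a single concave lens cannot serve as a magnifying glass.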
"domain": "physics.stackexchange",
"id": 37789,
"tags": "optics, visible-light, refraction, lenses"
} |
Energy-Momentum Tensor with mixed indices | Question: I know that $T_{\mu\nu}$ is the Energy-Momentum Tensor and $T=g^{\mu\nu}T_{\mu\nu}$, but does anyone know what $T^{\nu}_{\mu}$ is and how its calculated?
Answer: $T_\mu^\nu = T_{\mu\sigma}g^{\sigma\nu}$ | {
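Numerically, that contraction is a single `einsum`. A sketch using the (+,-,-,-) Minkowski metric (an assumed signature convention) and an arbitrary placeholder tensor:

```python
import numpy as np

g_inv = np.diag([1.0, -1.0, -1.0, -1.0])   # inverse Minkowski metric, (+,-,-,-)
T = np.arange(16.0).reshape(4, 4)          # placeholder components T_{mu sigma}

# T_mu^nu = T_{mu sigma} g^{sigma nu}
T_mixed = np.einsum('ms,sn->mn', T, g_inv)

# Contracting the remaining indices gives the scalar T = T_mu^mu.
trace = np.einsum('mm->', T_mixed)
```

With a diagonal metric, raising an index just multiplies each column by the corresponding metric component, and the trace reproduces $T = g^{\mu\nu}T_{\mu\nu}$.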
"domain": "physics.stackexchange",
"id": 19880,
"tags": "general-relativity, metric-tensor, notation, tensor-calculus, stress-energy-momentum-tensor"
} |
Does a ball thrown down exert less force on the ground when we walk? | Question: Scenario A: You stand still and throw a ball vertically down. When the ball hits the ground it exerts a specific force on the ground.
Scenario B: While walking you throw a ball vertically down. When the ball hits the ground it exerts a specific force on the ground.
Is the force in both scenarios equal or is it less in scenario B (because the ball hits the ground at a lower angle)?
Answer: If the vertical velocity is the same then the vertical force is the same.
Clearly the weight does not change. | {
"domain": "physics.stackexchange",
"id": 29460,
"tags": "newtonian-mechanics, classical-mechanics"
} |
Compound microscope vs astronomical telescope | Question: In compound microscope, we take such an objective lens which has small focal length. While in astronomical telescope, we take such an objective lens which has large focal length.
Why don't we use the same objective lens in both? The function of both devices is to enlarge an object at infinity.
Answer: Wikipedia's article on Focal length explains pretty clearly (emphasis mine):
In most photography and all telescopy, where the subject is essentially infinitely far away, longer focal length (lower optical power) leads to higher magnification and a narrower angle of view; conversely, shorter focal length or higher optical power is associated with a wider angle of view. On the other hand, in applications such as microscopy in which magnification is achieved by bringing the object close to the lens, a shorter focal length (higher optical power) leads to higher magnification because the subject can be brought closer to the center of projection.
So it basically depends on the fact that most of our astronomical sources are at distances larger than $\sim10^{18}$ cm away from us, while our microscopic objects are less than $\sim100$ cm away from us. | {
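The two regimes can be made concrete with the standard thin-lens magnification formulas (the numbers are illustrative): for a telescope the angular magnification is $M = f_\text{obj}/f_\text{eye}$, while for a microscope objective with a fixed tube length $L$ the lateral magnification scales roughly as $L/f_\text{obj}$:

```python
def telescope_magnification(f_objective, f_eyepiece):
    # Angular magnification of a simple two-lens telescope.
    return f_objective / f_eyepiece

def objective_magnification(tube_length, f_objective):
    # Approximate lateral magnification of a microscope objective with a
    # fixed tube length (the classic 160 mm standard, an assumption here).
    return tube_length / f_objective

# A long objective focal length helps the telescope:
telescope_mag = telescope_magnification(1200.0, 10.0)   # 120x
# while a short objective focal length helps the microscope:
microscope_mag = objective_magnification(160.0, 4.0)    # 40x
```

So the same design knob, the objective's focal length, is turned in opposite directions depending on whether the object is effectively at infinity or right next to the lens.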
"domain": "physics.stackexchange",
"id": 9064,
"tags": "optics, geometric-optics, lenses, telescopes, microscopy"
} |
What are the exceptions to Mendel's laws? | Question: What are the exceptions to the following:
Law of dominance
Law of independent assortment
Law of segregation
My knowledge:
Exception of law of dominance:
Incomplete dominance
In incomplete dominance, when a red-flowered snapdragon plant is crossed with a white-flowered plant, an intermediate phenotype appears in the F1 hybrid instead of a parental phenotype.
Codominance
Unlike dominance, in codominance when an A ($I^AI^A$) blood group individual mates with a B ($I^BI^B$) blood group individual, the offspring have blood group AB ($I^AI^B$) instead of A or B.
Exception of law of segregation:
Nondysjunction
During meiosis, homologous chromosomes or sister chromatids, and hence genes, may move to a common gamete, violating the law of segregation.
Exception of law of independent assortment:
Linkage
When genes are present on the same chromosome they tend to remain together and enter the same gamete. This is the reason behind the deviation of the dihybrid test cross ratio from 1:1:1:1 and the occurrence of parental combinations in high frequency.
Am I correct? Are there any more exceptions to these laws?
Answer: You are right but the list is certainly not exhaustive. Here are a few more concepts you may want to consider.
Exception of law of dominance:
epistasis
Allelic effects at a given locus depend on the variants at other loci
environment dependence and specific case of frequency dependence
The concept of dominance can be used for any quantitative trait. Typically, when the quantitative trait of interest is fitness, the trait often depends on the environment. A typical and interesting case is when fitness depends on the frequency of alleles in the population. Depending on the frequency of the allele at a given locus, the relationship might be one of dominance, additivity, or dominance of the other allele.
Exception of law of segregation:
meiotic drive
from wiki: "Meiotic drive is a type of intragenomic conflict, whereby one or more loci within a genome will affect a manipulation of the meiotic process in such a way as to favor the transmission of one or more alleles over another, regardless of its phenotypic expression. More simply, meiotic drive is when one copy of a gene is passed on to offspring more than the expected 50% of the time."
Exception of law of Independent assortment:
epistasis for fitness
Imagine a case where a given combination of alleles is lethal at a very young age; then you would never see these two alleles together in an individual. It is, per se, not an exception to independent assortment, but it gives the appearance of such an exception. A more extreme case, where a given combination of alleles prevents fecundation, could however be considered an exception to independent assortment.
"domain": "biology.stackexchange",
"id": 5803,
"tags": "genetics"
} |
Comparing the orbit radius of two spherical objects | Question:
Assume the mass of star 2 is 4 times the mass of star 1. Compare the radius of the orbit of star 1 to that of star 2.
Possible answers:
R1:R2=1:4
R1:R2=1:2
R1:R2=2:1
R1:R2=4:1
R1:R2=16:1
What's the answer?
I guess I'd use the gravitational force formula:
$ F = G\cdot \frac {m_1 \cdot m_2}{r^{2}} $
But I don't see how the $r$ from the formula, which is the distance between the two stars, can be related to what the question is essentially asking: the radius of the orbit of the smaller star.
And when the question says: "Compare the radius of the orbit of star 1 to that of star 2"
Wouldn't it mean that the bigger star (2) will stay stationary, and the smaller star would orbit around it? Or will they both orbit each other? I guess the latter.
If so, wouldn't they orbit each other the same way? e.g. always remain the same distance from each other?
Answer: For two objects to remain in a stable circular orbit, the force acting on them must be equal to the centripetal force corresponding to their rotation. $$F=\frac{mv^2}{r}$$
or in terms of angular velocity $$F=m\omega^2r$$
where $r$ is the radius of orbit in this case.
As the gravitational force acting on the two stars is the same, $F$ is equal in both cases. Additionally the angular velocity must be equal in both cases; otherwise the force will not keep acting along the line to the centre of orbit, which is obviously unphysical. Therefore we get:
$$m_1\omega^2r_1=m_2\omega^2r_2$$
$$mr_1=4mr_2$$
$$r_1=4r_2$$
The gravitational force is of limited use for your problem. If you knew the masses and wanted to work out the radial distances then you could use the gravitational force. | {
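The same result follows from the centre-of-mass condition $m_1 r_1 = m_2 r_2$ together with $r_1 + r_2 = d$, the separation between the stars. A small check:

```python
def orbit_radii(m1, m2, separation):
    # Two bodies orbit their common centre of mass: m1*r1 = m2*r2,
    # with r1 + r2 equal to the separation between them.
    r1 = separation * m2 / (m1 + m2)
    r2 = separation * m1 / (m1 + m2)
    return r1, r2

# Star 2 four times as massive as star 1: the lighter star has the larger orbit.
r1, r2 = orbit_radii(1.0, 4.0, 10.0)   # gives r1 = 8, r2 = 2, i.e. R1:R2 = 4:1
```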
"domain": "physics.stackexchange",
"id": 21075,
"tags": "newtonian-mechanics, newtonian-gravity, conservation-laws, inertial-frames"
} |
Projecting rectangular stereography centred on 42°N, 10°E | Question: I am reposting from GIS Stack Exchange as here might also be someone who might help:
I have a precipitation GRIB2 file obtained at EUMETSAT H-SAF I am able to decode it to ascii but the software I am using (gdal, ArcMap,...) does not recognize the projection (in the title) so I am not able to properly project to Web Mercator for example.
So can anyone please suggest software, preferably working on Ubuntu, that either projects grib2 files to other projections or is able to recognise the projection from the decoded ascii?
Answer: The product (H-SAF 05) manual stated the datasets are in the projection mentioned in the title. EUMETSAT user support answered me the datasets use a polar stereographic projection with 25-75°N lat, 25°W- 45°E long. | {
"domain": "earthscience.stackexchange",
"id": 1603,
"tags": "meteorology, precipitation, weather-satellites, projection, coordinate-system"
} |
Time evolution of a Gaussian wave packet, why convert to k-space? | Question: I'm trying to do a homework problem where I'm time evolving a Gaussian wave packet with a Hamiltonian of $ \frac{p^{2}}{2m} $
So if I have a Gaussian wave packet given by:
$$ \Psi(x) = Ae^{-\alpha x^{2}} \, .$$
I want to time evolve it, my first instinct would be to just tack on the time evolution term of $e^{-\frac{iEt}{\hbar}}$.
However, in the solution it tells me that this is incorrect, and I first need to convert the wave function into k-space by using a Fourier transform, due to the Hamiltonian being $ p^2/2m$. Can anyone tell me why I need to convert it to k-space first? In a finite well example with the same Hamiltonian we can just multiply the time evolution term to each term of the wave function. Why can't we do that to a Gaussian wave packet?
Answer: Tacking on a term $e^{-iEt/\hbar}$ is the correct interpretation of the Schrödinger equation $$i\hbar |\partial_t \Psi\rangle = \hat H |\Psi\rangle$$only for those eigenstates for which $$\hat H |\Psi\rangle = E|\Psi\rangle,$$as otherwise you do not know what value of $E$ should be used to substitute. Hypothetically you can still do it, but you pay a very painful cost that the $E$ is in fact a full-fledged operator and you therefore need to exponentiate an operator, which is nontrivial.
If this is all sounding a bit complicated, please remember that QM is just linear algebra in funny hats, and so you could get an intuition for similar systems by just using some matrices and vectors, for example looking at $$i\hbar \begin{bmatrix} f'(t) \\ g'(t) \end{bmatrix} = \epsilon \begin{bmatrix} 0&1\\1&0\end{bmatrix} \begin{bmatrix} f(t) \\ g(t)\end{bmatrix}.$$One can in fact express this as $$\begin{bmatrix}f(t)\\g(t)\end{bmatrix} = e^{-i\hat H t/\hbar} \begin{bmatrix} f_0\\ g_0\end{bmatrix},$$ but one has to exponentiate this matrix. That is not hard because it squares to the identity matrix, causing a simple expansion, $$\begin{bmatrix}f(t)\\g(t)\end{bmatrix} = \cos(\epsilon t/\hbar) \begin{bmatrix} f_0\\ g_0\end{bmatrix} - i \sin(\epsilon t/\hbar) \begin{bmatrix} g_0\\ f_0\end{bmatrix}. $$ One can then confirm that indeed this satisfies the Schrödinger equation given above. One can also immediately see that this does not directly have the form $e^{-i\epsilon t/\hbar} [f_0; g_0],$ but how could it? That would be a different Hamiltonian $\hat H = \epsilon I.$
But, with some creativity, one can see that if $f_0 = g_0$ those two remaining vectors would be parallel, or if $f_0 = -g_0$, and one can indeed rewrite this solution in terms of those eigenvectors of the original $\hat H$ as $$\begin{bmatrix}f(t)\\g(t)\end{bmatrix} = e^{-i\epsilon t/\hbar} \alpha \begin{bmatrix} 1\\ 1\end{bmatrix} + e^{i\epsilon t/\hbar} \beta \begin{bmatrix} -1\\ 1\end{bmatrix}. $$ So the trick to more easily finding general solutions is to find these eigenvectors first and then form a general linear combination of those eigenvectors once they have been multiplied individually by their time dependence. Then for a given initial state, we need to find the $\alpha$ and $\beta$ terms: in this case it is simple enough by looking at $t=0$ where $\alpha - \beta = f_0$ while $\alpha + \beta = g_0.$
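The closed form above is easy to verify numerically: exponentiate $\hat H = \epsilon\sigma_x$ via its eigendecomposition and compare against $\cos(\epsilon t/\hbar)\,I - i\sin(\epsilon t/\hbar)\,\sigma_x$. A sketch with $\hbar=\epsilon=1$ for convenience:

```python
import numpy as np

eps, hbar, t = 1.0, 1.0, 0.7
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
H = eps * sigma_x

# exp(-i H t / hbar) computed from the eigendecomposition of H
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T

# the closed form obtained because sigma_x squares to the identity
U_closed = (np.cos(eps * t / hbar) * np.eye(2)
            - 1j * np.sin(eps * t / hbar) * sigma_x)
```

The two matrices agree to machine precision, which is exactly the eigenvector trick in miniature.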
Similarly for your Hamiltonian $\hat H = \hat p^2/(2m) = -\frac{\hbar^2}{2m}\frac{\partial^2~}{\partial x^2},$ you know that the eigenvectors are plane waves, $$\phi_k(x) = e^{ikx}.$$You know that you can then add time dependence to them in the obvious way, $$\Phi_k(x, t) = e^{i(k x - \omega_k t)},$$ where of course $$\hbar \omega_k = \frac{\hbar^2k^2}{2m}.$$ So the eigenvector story is just beautifully simple for you to do, all you need is the ability to take derivatives of exponentials.
The part of the story that is more complicated is assembling an arbitrary $\psi(x)$ as a sum of these exponentials. However while it is complicated it is not impossible: you know from Fourier's theorem that $$\psi(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk ~e^{i k x} \int_{-\infty}^\infty d\xi ~e^{-ik\xi} ~\psi(\xi).$$ Let your eyes glaze over the second integral and see it as just what it is, some $\psi[k]$ coefficient in $k$-space. What we have here then is a sum—a continuous sum, but still a sum!—of coefficients times eigenfunctions:$$\psi(x) = \int_{-\infty}^{\infty}\frac{dk~\psi[k]}{2\pi}~\phi_k(x).$$
And we know how to Schrödinger-ize such a sum, we just add $e^{-i\omega_k t}$ terms to each of the eigenfunctions, turning $\phi_k$ into $\Phi_k.$ So we get,
$$\Psi(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk ~e^{i (k x - \omega_k t)} ~\psi[k].$$
You do not have to do it this way, you can try to do some sort of $$\exp\left[-i \frac{\hbar t}{2m} \frac{\partial^2~}{\partial x^2}\right] e^{-a x^2}$$
monstrosity, expanding the operator in a power series and then seeing whether there are patterns you can use among the $n^\text{th}$ derivatives of Gaussians to simplify. But the operator expansion way looks really pretty difficult, while the eigenvector way is really easy.
The reason it is really easy is that both $\hat H$ and $i\hbar \partial_t$ are linear operators: they distribute over sums. So if you are still feeling queasy about this procedure, convince yourself by just writing it out: calculate this value $$0 = \left(i\hbar \frac{\partial~}{\partial t} + \frac{\hbar^2}{2m}\frac{\partial^2~}{\partial x^2}\right) \frac{1}{2\pi} \int_{-\infty}^\infty dk~\psi[k] ~e^{i (k x - \omega_k t)}.$$ Notice that it holds with pretty much no restriction on the actual form of $\psi[k]$ so that you only need to choose coefficients $\psi[k]$ such that $\Psi(x, 0) = \psi(x).$ | {
"domain": "physics.stackexchange",
"id": 57680,
"tags": "quantum-mechanics, homework-and-exercises"
} |
How old are Chile's fjords? | Question: Do we have any knowledge about the age of Chile's fjords, more specifically, those found near the Northern Patagonian Ice Field?
Is it reasonable to conclude that they were formed in Quaternary given the fact that there exist glaciers there at this point in time?
Answer: The Quaternary is definitely a good guess. But it is difficult to answer your question because the "age of a fjord" is a rather ambiguous concept.
Also, I'll assume you are interested in the bedrock topography associated with the fjords, and not only the sea inlets (as in that case they would have formed very recently, just when the glaciers receded enough to allow the ocean water to take their place). With that said, let's try to answer the question:
The fjords of western Patagonia are a feature created by the Pleistocene glaciations, therefore you can definitely say that the fjords as a feature of the landscape were formed during the Quaternary. However, they are in constant evolution, so you can arguably also say that the fjords as you see them today, were formed today.
Glacier erosion rates in Patagonia range roughly from a few tenths of a millimeter to a few centimeters per year. Faster rates probably happen at the edges of the ice sheets, where glacier basal sliding velocities are higher. Long-term averages (~10 ka) are probably between 0.5 mm/yr and 1 mm/yr, meaning that in the 2.6 Ma of the Quaternary you would erode just about one to two kilometers. As some fjords can be much deeper than that (when measured from the mountain tops), it is clear that the fjords as a geographical feature are not in steady state, and they are now more dramatic than ever throughout the Quaternary.
Erosion rates measured in outlet glaciers in Patagonia and other areas over different timescales. From figure 2.a in "The relative efficacy of fluvial and glacial erosion over modern to orogenic timescales" 2009, Nature Geoscience 2(9):644-647.
The reason fjords can become deeper than that one-to-two-kilometer estimate is the concentrated erosion in outlet glaciers at the edge of the ice sheet. For example, the most visited glacier in the Northern Patagonian Icefield, the San Rafael glacier, shows erosion rates of 0.83 mm/year. And as shown in the figure above, outlet glaciers can erode up to ~5 cm/yr at shorter timescales.
Summarizing:
The Northern Patagonian Ice Field's fjords started to form with the first Pleistocene glaciation at the beginning of the Quaternary, about 2.6 million years ago, but they have become deeper after each successive glaciation and they are still forming today. Current glaciers are eroding their bedrock at rates on the order of one to a few centimeters per year. | {
"domain": "earthscience.stackexchange",
"id": 1394,
"tags": "geology, glaciology, geomorphology, ice-age, glaciation"
} |
Roscore without peer to peer | Question:
Hello,
Might be an odd question.
Is there a way for Roscore (i.e. the Master) to disable p2p and act as a server transmitting data back and forth, instead of letting peers communicate with each other directly, or not?
Originally posted by maxime on ROS Answers with karma: 27 on 2020-07-23
Post score: 0
Original comments
Comment by gvdhoorn on 2020-07-24:
Could you please describe what made you ask this question (ie: why)?
It could be an xy-problem.
Answer:
Short answer: no.
http://wiki.ros.org/Master#Master_Name_Service_Example
Originally posted by Dirk Thomas with karma: 16276 on 2020-07-23
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2020-07-24:
And the answer is "no" as this would be a complete departure from how pub-sub is implemented in ROS 1.
Comment by maxime on 2020-07-27:
I was just wondering if it could be done easily. It would have allowed me to test something quickly and give it a go.
Basically @gvdhoorn, in the solution we are building, we have several ROS1 nodes. They are all built and run as a whole using Docker; each node is a Docker container.
We have a Docker internal network. The thing is, one of the containers needs to be linked to the outside world. So what I tried was to take that container out of the internal network, put it on the host network instead, and give it our Master node's Docker IP. It works and it sees the Master node, but the Master node hands it peer IPs that are within the Docker internal network, so it doesn't work. Hence my question. | {
"domain": "robotics.stackexchange",
"id": 35316,
"tags": "ros, roscore, master"
} |
Apriori gives 0 rules | Question: I am trying to use apriori() on this dataset.
After cleaning it, I made all attributes categorical with as.factor(). Then I used this instruction:
chocolate_rules <- apriori(chocolateApriori, parameter=list(minlen=3, supp=0.1, conf=0.7), appearance=list(default="lhs", rhs=c("RatFactor=1", "RatFactor=2", "RatFactor=3", "RatFactor=4", "RatFactor=5")))
where RatFactor is the categorical rounded "Rating" from the original dataset (and has only 5 possible values).
It seems to work, but when I call chocolate_rules the result is a set of 0 rules.
Can you explain why? Or can you help me get another result?
Answer: The "Apriori" algorithm is used for "association rules" learning.
In very simple terms, it tries to determine whether people who buy chocolates also buy roses with them, or whether they buy chocolate with ice cream more often, or whether it is always chocolate + roses + ice cream together, or any combination of these.
So, the data containing these purchase transactions is analysed and "frequent item sets" are determined.
From these, "rules" are derived; e.g. here a rule can be that chocolates are bought with roses, or roses are bought with chocolates.
Each rule has a confidence and a support.
Now, your dataset doesn't seem to have transactions; rather, it has attributes regarding chocolates and their ratings, which doesn't suit association rule learning since the objective is different.
As for the "learning" purpose: if it reports that there are NO rules, it means there are no rules to determine (at the chosen support and confidence thresholds).
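To make the transactional input shape concrete, here is a minimal pure-Python sketch (the toy basket data, item names and the 0.5 support threshold are invented for illustration, not taken from the chocolate dataset) of the counting that the first Apriori passes perform:

```python
from itertools import combinations

# Toy market-basket transactions -- the input shape Apriori expects.
transactions = [
    {"chocolate", "roses"},
    {"chocolate", "ice-cream"},
    {"chocolate", "roses", "ice-cream"},
    {"roses"},
    {"chocolate", "roses"},
]

def frequent_itemsets(transactions, size, min_support):
    """Count every itemset of the given size; keep those meeting min_support."""
    items = sorted(set().union(*transactions))
    n = len(transactions)
    result = {}
    for candidate in combinations(items, size):
        support = sum(1 for t in transactions if set(candidate) <= t) / n
        if support >= min_support:
            result[candidate] = support
    return result

pairs = frequent_itemsets(transactions, 2, min_support=0.5)
print(pairs)  # {('chocolate', 'roses'): 0.6}
```

A rule such as roses => chocolate would then be read off a frequent itemset together with its confidence; a table with one row per chocolate (attributes plus rating) has no such baskets to count.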
Here is something to get you started:
https://en.wikipedia.org/wiki/Apriori_algorithm
https://en.wikipedia.org/wiki/Association_rule_learning | {
"domain": "datascience.stackexchange",
"id": 7562,
"tags": "dataset, association-rules"
} |
How to install packages as subdirectory of metapackage? | Question:
So, currently I have packages that install their binaries to catkin_ws/install/lib/package_name
I made a metapackage to contain these directories. Now I would like them to install like this: install/lib/meta_package_name/package_name
Is there any good way of achieving this?
Originally posted by pachuc on ROS Answers with karma: 13 on 2014-10-01
Post score: 0
Answer:
Unless I misunderstand you, metapackages don't work that way (they are not stacks). They are essentially a way to group a number of packages and make them easily installable by only installing the metapackage. Dependency resolution then pulls in the others.
In the original design, metapackages didn't even have a CMakeLists.txt.
Do you have any specific reason for such a directory layout?
Originally posted by gvdhoorn with karma: 86574 on 2014-10-01
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by pachuc on 2014-10-01:
well my source is laid out like this: /src/group_of_files/many_individual_package_folders.
I was hoping that my install could look the same way ie: install/lib/group_of_files_/my_packages
I think this is mostly just for convenience and easily identifying package install locations.
Comment by pachuc on 2014-10-01:
I guess it's not necessary, but it would be nice. So did stacks use to work this way (sorry, I'm new to catkin/ROS)? If so, why did they change it for metapackages? I guess the only point of a metapackage is just to make installation easier?
Comment by gvdhoorn on 2014-10-01:
If you really want to know, take a look at the rationale behind the move to catkin (FHS compliance, etc). Stacks did not put everything in install/lib, but they did contain all packages.
As to 'being nice': you typically do things like $(find PKGNAME), so does it matter where things are?
Comment by pachuc on 2014-10-01:
ok, thanks for the answer. I guess it doesn't really matter... it's just... annoying lol | {
"domain": "robotics.stackexchange",
"id": 19594,
"tags": "ros, catkin, stack, metapackage, package"
} |
Role of information in physics | Question: The fact that information is preserved is often stated as a fundamental fact in physics.
Not being a physicist I do not understand how information enters physics at all. What is it an attribute of? How is it defined? What laws exist?
I hope it is not only the notion of entropy that introduces information into physics. In mathematics it is very clear cut: entropy is an attribute of probability distributions.
You can make a convincing case that the information gain from observing an event of probability $p$ is $-\log(p)$. With this, the entropy of a discrete probability distribution is the average gain of information from observing an atomic event.
A more general distribution $P$ needs to be related via a density to the most random distribution $U$ and you can then interpret $(entropy(U) - entropy(P))$ as a measure of the gain of information when you know that outcomes are controlled by $P$ rather than $U$.
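Concretely, this bookkeeping looks as follows in a short Python sketch (log base 2 and the two 4-outcome distributions are arbitrary choices for illustration):

```python
import math

def entropy(dist):
    """Average information gain, -sum p*log2(p), of a discrete distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

U = [0.25] * 4            # the "most random" distribution over 4 outcomes
P = [0.7, 0.1, 0.1, 0.1]  # a less random, more informative alternative

gain = entropy(U) - entropy(P)
print(entropy(U))        # 2.0 (bits per observation under U)
print(round(gain, 3))    # 0.643 bits gained by knowing outcomes follow P, not U
```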
If you accept this, the Second Law of Thermodynamics states that the information from observing the macroscopic state of a system is steadily decreasing as the configuration becomes more and more random.
So it is unlikely that entropy is the way in which information enters into physics.
I also do not believe that the information associated with a system is the information needed to completely specify the state of the system since this is infinite in almost every case (e.g. whenever any initial condition is a random real).
So what is it?
Answer: By "conservation of information", physicists mean that you can reconstruct the past if you're given the present state. That is, all the information about the past is never destroyed. You can, in principle, extract it all from the present state by re-winding the laws. Conservation of information may be false in Quantum Mechanics depending on the interpretation, because the wavefunction collapse is probabilistic. But an interpretation like Many-Worlds still allows for the laws to be re-wound.
Entropy does not come into this "information conservation" concept. A different definition of information can be given using entropy, but that information is not what the law of conservation of information is referring to. Information defined using entropy is less fundamental, as it is merely a measure of our ignorance about the exact state of the system, due to our choice of "macro-variables" to describe the system. The macro-variables contain compressed information about a physical system. | {
"domain": "physics.stackexchange",
"id": 92358,
"tags": "statistical-mechanics, entropy, definition, information"
} |
Simple Single Linked List | Question: I'm currently studying data structures, and I have implemented a simple singly linked list as shown below.
I would like to know how can I improve it?
#include <iostream>
#include <memory>
#include <cassert>
template <typename T>
class List
{
struct node
{
T value;
std::unique_ptr<node> next;
};
public:
List();
void add(T value);
T first() const;
T value(std::size_t position) const;
std::size_t length() const;
void clear();
private:
std::unique_ptr<node> mFirst;
std::size_t mLength;
};
template <class T>
List<T>::List()
: mFirst(nullptr)
, mLength()
{
}
template <typename T>
std::size_t List<T>::length() const
{
return mLength;
}
template <typename T>
T List<T>::first() const
{
assert(mFirst != nullptr);
return mFirst->value;
}
template <typename T>
void List<T>::add(T value)
{
auto next = std::move(mFirst);
mFirst = std::make_unique<node>();
mFirst->value = value;
mFirst->next = std::move(next);
mLength++;
}
template <class T>
T List<T>::value(std::size_t position) const
{
assert(mFirst != nullptr);
auto current = mFirst.get();
for (auto i = 0u; i < position; ++i)
{
current = current->next.get();
}
return current->value;
}
template <class T>
void List<T>::clear()
{
assert(mFirst != nullptr);
auto length = mLength;
for (auto i = 0u; i < length; ++i)
{
mFirst = std::move(mFirst->next);
mLength--;
}
}
int main()
{
List<int> list;
for (int i = 0; i < 10; ++i)
list.add(i);
for (auto i = 0u; i < list.length(); ++i)
std::cout << list.value(i) << ' ';
list.clear();
std::cout << "\n\n" << list.length() << "\n\n";
}
Answer: Choice of Members
You currently keep only a head pointer and a length, so you can only add at the front; adding at the back would be a linear operation. It would be much better to additionally keep a tail pointer, so that adding at either end is constant time.
Since linked lists do not support random access, it's questionable to have a method like T value(size_t ). If you need to index into a container, you probably don't want to use a linked list. Better to not provide it.
Unfortunately, this is your only way to get anything other than the first value - which makes iteration take quadratic time! What you need to do is to do as the standard library do and add something like:
struct iterator {
// basically a wrapper around node* that works as a ForwardIterator
};
iterator begin() { return iterator{mFirst.get()}; }
iterator end() { return iterator{nullptr}; } // past-the-end for a singly linked list
This will additionally allow everybody to use the standard library algorithms with your class. You will want a const_iterator as well, to allow for iteration over a const List<T>.
Use References
Right now add() takes a T and first() returns a T. For something like int, the former is OK, but the latter is still questionable. What if you want to modify the list elements? That seems like a useful operation. To that end, you should add:
void add(T const& );
void add(T&& );
T& first();
T const& first() const;
You may also want to add an emplace, to construct values in-place:
template <typename... Args>
void emplace(Args&&...);
You may want to rename first to front for convention. Similarly length to size.
Clearing
Since you are using smart pointers for your list, clearing is very simple. You don't need a loop at all:
template <class T>
void List<T>::clear()
{
mFirst.reset();
mLast = nullptr;
mLength = 0;
} | {
"domain": "codereview.stackexchange",
"id": 16511,
"tags": "c++, c++11, linked-list"
} |
how to pass pointcloud from the callback function in subscriber | Question:
Aim: subscribe to the point cloud from the Kinect and access it from the main() function
I created a class and callback function as its member function.
I am able to subscribe to the pointcloud data from kinect using
ros::Subscriber sub = nh.subscribe ("/camera/depth_registered/points", 3,&pass_cloud::callback,&cloud1);
I am changing the member value in the callback but I am not able to see it working.
You can find my code here. Please let me know what changes can be done:
https://dl.dropbox.com/u/95042389/sampl.cpp
Originally posted by sai on ROS Answers with karma: 1935 on 2013-02-06
Post score: 0
Answer:
Avoid global variables; it is bad practice. Here is how you could do it:
#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>
#include <pcl/point_types.h>
class pass_cloud
{
public:
pass_cloud (ros::NodeHandle& nh) : nh_(nh), i(0)
{
sub = nh_.subscribe ("/camera/depth_registered/points", 3,&pass_cloud::callback,this);
} // your constructor and store the passed node handle (nh) to your class attribute nh_
void callback (const pcl::PointCloud<pcl::PointXYZ>& cloud)
{
    input = cloud; // copy the cloud that the callback passes in to your class variable (attribute) input
    i = i + 1;
}
void process ()
{
    // Only print if the cloud is not empty and i is still a valid index
    if (!input.points.empty () && i < static_cast<int> (input.points.size ()))
    {
        std::cout << " " << input.points[i].x << " " << input.points[i].y << " " << input.points[i].z << std::endl;
        std::cout << "reached here" << std::endl;
    }
}
void spin ()
{
ros::Rate rate (30);
while (ros::ok ())
{
ros::spinOnce ();
rate.sleep ();
}
}
protected: // Your class attributes
ros::NodeHandle nh_;
pcl::PointCloud<pcl::PointXYZ> input;
ros::Subscriber sub;
int i;
}; // end of class
int main(int argc, char **argv)
{
ros::init(argc, argv, "pointcloud_node");
ros::NodeHandle nh;
pass_cloud* node = 0;
node = new pass_cloud(nh);
node->spin ();
return 0;
}
Hope this helps!
Originally posted by K_Yousif with karma: 735 on 2013-02-07
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by sai on 2013-02-07:
This is very nice but I have some basic questions to ask. could you give your email id
Comment by K_Yousif on 2013-02-07:
I am not a ROS expert as I only started learning ROS less than 2 months ago. But I will be happy to help if can. My email is khalid.ayousif@hotmail.com
Comment by sai on 2013-02-07:
Hi, if possible , could you see the code at this link and tell what might be wrong https://dl.dropbox.com/u/95042389/sampl.cpp
Comment by K_Yousif on 2013-02-07:
In your code, it gets stuck in a for loop without updating your cloud. I do not think you need a for loop, since ros::spin() is already constantly looping until you press Ctrl+C. I have updated the code in my answer to your case. Copy and paste it and see if it works. Let me know. | {
"domain": "robotics.stackexchange",
"id": 12780,
"tags": "ros, kinect, callback, pointcloud"
} |
Shear Force Diagram of a Simply Supported Beam with triangular load distribution | Question: I've been attempting this fundamental shear force diagram problem for several days, but can't seem to get the correct result. I'm trying to calculate the shear force diagram in terms of $x$, but I'm unsure about the intensity $w(x)$ of the triangular load distribution between $0m \le x \lt 3m$. I am able to calculate the correct result for the latter section $3m \lt x \le 6m$, so I'm a little confused as to what the correct intensity of the triangular load distribution is and how to calculate the correct shear force using the correct intensity $w(x)$?
Below I've attached the problem and calculated the support reactions, which are $A_y=15kN$ and $B_y=15kN$.
Now, I've attached my free body diagram of the first section between $0m \le x\lt 3m $ and indicated the positive sign convention for this beam.
I then proceeded to find the shear force in terms of $x$ as follows:
$\sum F_y=0:$
$$15-w(x)·x·\frac12 - v_1 = 0 \quad (eq\ 1)$$
Where $w(x)·x·\frac12$ is the area of the triangular load distribution.
This is where I get confused. My understanding of triangular load distribution in terms of the intensity $w(x)$ is that:
$$w(x)=\frac{w_0x}{L}$$
Where $w_0 = 10$ and $L=3$ for this problem.
But substituting these values into the intensity $w(x)$ and back into $(eq\ 1)$ gets me the wrong result of:
$$v_1=15-\frac53 x^2$$
After reading multiple textbooks and watching several videos, I finally found out that if the maximum load of a triangular load distribution is at the initial point $x=0$ then the following formula should be applied:
$$w(x)=\frac{w_0x}{L}-w_0$$
I now understand this a bit, but I am wondering where I could get a good explanation as to why?
I'm struggling to find a good explanation as almost every example I've found in textbooks/videos use triangular load distributions that increase from the initial point and not decrease.
However, after utilising this formula, I still get the wrong solution. My working out is as follows:
$\sum F_y=0:$
$$15-\Bigl(w(x)·x·\frac12 \Bigr) - v_1 = 0$$
$$15-\biggl(\Bigl(\frac{10x}{3}-10\Bigr)·x·\frac12 \biggr)- v_1 = 0$$
$$15-\biggl(\Bigl(\frac{10x}{6}-\frac{10}{2}\Bigr)·x\biggr) - v_1 = 0$$
$$15-\biggl(\Bigl(\frac{5x}{3}-5\Bigr)·x \biggr)- v_1 = 0$$
$$15-\frac{5x^2}{3}+5x - v_1 = 0$$
$$\Rightarrow v_1=15-\frac{5x^2}{3}+5x$$
The actual solution is:
$$v_1=15+\frac{5x^2}{3}-10x$$
So I'm not sure whether I'm using the correct intensity $w(x)$ and/or whether the triangle area has been correctly calculated using this intensity $w(x)$.
For the second section $3m\le x\lt6m$ I am able to calculate the correct shear force in terms of $x$, this solution is:
$$v_2=-15-\frac{5x^2}{3}+10x$$
Plotting a diagram of the correct shear forces $v_1$ and $v_2$ in terms of $x$ looks the following:
For your reference, this problem (F11.6) can be found in chapter 11 of Statics and Mechanics of Materials (4th Ed. SI edition) by Hibbeler.
I'd appreciate if someone could explain intensity loads for situations similar to above and where I went wrong in my calculations.
Thank you.
Edit:
After reading a few examples, I found that if I calculate the shear force from the left end I am able to get the correct shear force using my initial intensity $w(x)=\frac{w_0x}{L}$ and not the latter intensity $w(x)=\frac{w_0x}{L}-w_0$.
However I'm unsure why I can't calculate this from the right end? Does it have something to do with the left support $A_y=15kN$ creating a discontinuity? If I calculate from the left end am I correct in changing the section's range to $0m \lt x \le 3m$ to not include the left support $A_y$?
My working out is as follows:
$\sum F_y=0:$
$$-\Bigl(w(x)·(3-x)·\frac12 \Bigr) + v_1 = 0$$
$$-\biggl(\Bigl(\frac{10}{3}(3-x)\Bigr)·(3-x)·\frac12 \biggr)+ v_1 = 0$$
$$-\biggl(\Bigl(10-\frac{10x}{3}\Bigr)·(3-x)·\frac12 \biggr)+ v_1 = 0$$
$$-\biggl(\bigl(30-10x-10x+\frac{10x^2}{3}\bigr)·\frac12 \biggr)+ v_1 = 0$$
$$-\biggl(\bigl(30-20x+\frac{10x^2}{3}\bigr)·\frac12 \biggr)+ v_1 = 0$$
$$-\bigl(15-10x+\frac{10x^2}{6}\bigr)+ v_1 = 0$$
$$-15+10x-\frac{5x^2}{3}+ v_1 = 0$$
$$\Rightarrow v_1=15+\frac{5x^2}{3}-10x$$
This is the correct solution.
Answer: Your procedure is correct, but you have made a mistake with the sign convention.
Apparently you are using the same convention as I do, where a downward load/force is negative, and an upward force is positive.
Consequently, as you've written correctly
$$ w(x)=\frac{w_0}{L}x-w_0 $$
Your mistake happens as you formulate the force equilibrium equation.
With this definition of $w(x)$ you already comply to the sign convention.
If you now formulate the shear force equation and write
$$ V_1=15-\int w(x)dx $$
you basically reverse the sign convention again.
To formulate the force equilibrium equation you have to sum all forces, not subtract them, thus
$$ V_1=15+\int w(x)dx $$
which leads to
$$ V_1=15+\frac{5}{3}x^2-10x $$
which is the correct result.
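As a numerical cross-check (a sketch; it uses the relation $dV/dx = w(x)$ that holds with this sign convention, with $w(x)=\frac{10}{3}x-10$ on $0 \le x \le 3$):

```python
def V1(x):
    """Shear force 15 + (5/3) x^2 - 10 x on the first section."""
    return 15 + (5.0 / 3.0) * x**2 - 10 * x

def w(x):
    """Load intensity, negative downward: -10 kN/m at x=0, 0 at x=3."""
    return (10.0 / 3.0) * x - 10

# Boundary values: V1 equals the +15 kN reaction at x=0 and drops to 0 at x=3
assert abs(V1(0) - 15) < 1e-9
assert abs(V1(3)) < 1e-9

# Central finite difference of V1 matches w(x) at interior points
h = 1e-6
for x in (0.5, 1.0, 1.5, 2.0, 2.5):
    dVdx = (V1(x + h) - V1(x - h)) / (2 * h)
    assert abs(dVdx - w(x)) < 1e-6
print("V1(x) is consistent with the triangular load")
```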
Sign convention [edited]
Take a look at your $w(x)$. It's a force pointing downwards, so it should be negative.
You have written it as
$$ w(x)=\frac{w_0}{L}x-w_0 \qquad \mbox{for}\qquad x=\{0...3\}$$
Thus $w(0)=-w_0$, which means your load $w(x)$ is already defined in the coordinate system you specified. If you now sum or integrate and add a minus sign in front of $w(x)$ you basically turn the downward load into an upwards facing load.
Consider, e.g. $w(x)=const.=-w_0$, i.e. an evenly distributed downwards load, thus a negative value. It's easy to figure out, that the reaction at the bearing is $A_y=\frac{1}{2} w_0L $.
Now to find the shear force distribution, if we use your method, we'd write:
$$ V_1=A_y-\int_0^L (-w_0)dx=\frac{1}{2} w_0L - (-w_0)L=\frac{3}{2}w_0L $$
This is the wrong result, because for the shear force distribution calculation we suddenly switch sign convention (subtracting during integration instead of adding).
In the first part you said $w(x)$ was $w(x)=-w_0+\frac{x}{L}w_0$, which goes linearly from $-w_0$ to $0$, which in your example would be $w(x)=\frac{10}{3}(x-3)$. During your second calculation you managed to get the right result because you changed the sign convention of your $w(x)$, as you inserted $w(x)=\frac{10}{3}(3-x)$ (which is an upwards load), but then you turned it downwards again by adding a negative sign in front of the whole term.
I really recommend sticking to one sign convention. Either you say you sum all forces, and defined downward loads as negative, or you subtract downward loads from upward reactions. | {
"domain": "engineering.stackexchange",
"id": 2063,
"tags": "statics"
} |
Impulse response of an LTI system given the input and output signals | Question: I have been given the input and output signals of an LTI system as:
$x[n] = (\frac{1}{2})^nu[n] + 2^nu[-n-1]$
$y[n] = 6(\frac{1}{2})^nu[n] - 6(\frac{3}{4})^nu[n]$
I have found the system function $H(z)$ by using $H(z) = \frac{Y(z)}{X(z)}$ and using the standard transform tables to get:
$X(z) = \frac{1}{1-\frac{1}{2} {z^{-1}}} - \frac{1}{1-2z^{-1}} = \frac{-\frac{3}{2}z^{-1}}{(1-\frac{1}{2}z^{-1})(1-2z^{-1})}$
$Y(z) = 6*\frac{1}{1-\frac{1}{2} {z^{-1}}} - 6*\frac{1}{1-\frac{3}{4}z^{-1}} = \frac{-\frac{3}{2}z^{-1}}{(1-\frac{1}{2}z^{-1})(1-\frac{3}{4}z^{-1})}$
$H(z) = \frac{Y(z)}{X(z)} = \frac{\frac{-\frac{3}{2}z^{-1}}{(1-\frac{1}{2}z^{-1})(1-\frac{3}{4}z^{-1})}}{\frac{-\frac{3}{2}z^{-1}}{(1-\frac{1}{2}z^{-1})(1-2z^{-1})}} = \frac{1-2z^{-1}}{1-\frac{3}{4}z^{-1}}$
Then this can be further simplified to:
$H(z) = 1 - \frac{\frac{5}{4}z^{-1}}{1-\frac{3}{4}z^{-1}}$
However from here I am stuck on how to get the form of the second term back into the time domain as it does not match anything from the standard tables.
I am just wondering if I have missed anything or made a mistake in my calculations, or if there is some way to get it into time-domain form from here that I am not seeing, so that I can get the impulse response $h[n]$.
Thank you for any help you can give me.
Answer: For a term $\frac{1}{1-az^{-1}}$ it can either be $a^nu[n]$ with Region of Convergence (ROC) $|z| \gt |a|$ OR $-a^nu[-n-1]$ with ROC $|z| \lt |a|$.
Since your input and output are both stable, your $h[n]$ is also stable. If you look at its original form $\frac{1-2z^{-1}}{1-\frac{3}{4}z^{-1}}$, it has a pole at $z=3/4$; its ROC is $|z| \gt 3/4$ so that the region includes the unit circle. Since this region extends out to $|z|=\infty$, the expansion of $H(z)$ will contain no positive powers of $z$; that is, $h[n]$ is causal and $H(z)$ is composed of only non-positive powers of $z$. So from the final form of $H(z)$, the first term $1$ will translate to $\delta[n]$. The second term $(\frac{3}{4})^nu[n]$ will be scaled by $5/4$ and delayed by 1 sample due to the presence of $z^{-1}$. So
$$
h[n] = \delta[n] - (\frac{5}{4})(\frac{3}{4})^{n-1}u[n-1]
$$
OR equivalently
$$
h[n] = (\frac{3}{4})^nu[n] - 2(\frac{3}{4})^{n-1}u[n-1]
$$ | {
"domain": "dsp.stackexchange",
"id": 8532,
"tags": "discrete-signals, z-transform, impulse-response"
} |
How many comparisons do we need to find min and max of n numbers? | Question: Suppose we are given a list of 100 numbers. How can we calculate the minimum number of comparisons required to find the minimum and the maximum of the 100 numbers?
Recurrence for the above problem is
$$T(n)=T(\lceil \frac{n}{2} \rceil) + T(\lfloor \frac{n}{2} \rfloor) + 2$$
$$\approx 1.5 \times n - 2$$
hence
$$T(100)=1.5 \times 100 - 2 = 148$$
But by solving it as I've shown below, I'm coming up with the answer 162.
We can divide the list of 100 numbers into two lists (50, 50). Now, upon combining the results of these two lists, 2 more comparisons will be required. Recursively we break the lists as below, which will make a binary tree
$$100 \implies (50,50)$$
$$50 \implies (25,25)$$
$$25 \implies (12,13)$$
$$12 \implies (6,6), 13 \implies (6,7)$$
$$6 \implies (3,3), 7 \implies(3,4)$$
$$3 \implies (2,1), 4 \implies (2,2)$$
By combining the results upwards in the tree, with 2 comparisons on merging each pair of lists, I'm coming up with the answer 162. What am I overcounting?
Answer: You're dividing badly. If you look at your execution tree, you'll notice that most of the leaves on the last level are 3's. This means you'll need 3 comparisons here. For the same cost you could also compare 4 elements. Your efficiency here is 75%.
Your partial trees have to be full to be the most efficient. So you'll need to divide into trees with numbers of elements that are powers of two.
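For comparison, the classic pairwise method attains the $1.5 \times n - 2$ bound directly; here is a sketch that counts its comparisons for $n = 100$:

```python
import random

def min_max(a):
    """Return (min, max, number of comparisons) using the pairwise method."""
    comparisons = 0
    n = len(a)
    if n % 2 == 0:                 # seed lo/hi from the first pair
        comparisons += 1
        lo, hi = (a[0], a[1]) if a[0] < a[1] else (a[1], a[0])
        start = 2
    else:                          # odd length: seed from the single first element
        lo = hi = a[0]
        start = 1
    # Each remaining pair costs 3 comparisons: within the pair, then vs lo and hi
    for i in range(start, n - 1, 2):
        comparisons += 1
        small, big = (a[i], a[i + 1]) if a[i] < a[i + 1] else (a[i + 1], a[i])
        comparisons += 1
        if small < lo:
            lo = small
        comparisons += 1
        if big > hi:
            hi = big
    return lo, hi, comparisons

data = random.sample(range(10000), 100)
lo, hi, c = min_max(data)
print(c)  # 148 = 1.5 * 100 - 2
```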
In your case, dividing into a left-oriented tree, it's
T(100) = T(64) + T(36) +2
= T(64) + T(32) + T(4) + 4
= 94 + 46 + 4 + 4
= 148 | {
"domain": "cs.stackexchange",
"id": 3807,
"tags": "algorithm-analysis, recurrence-relation"
} |
Does there exist an algorithm to find all equivalence classes of a given regular expression without Kleene star? If yes, what is it? | Question: Does there exist an algorithm to find all equivalence classes of a given regular expression without Kleene star?
If yes, what is it?
Note that the alphabet of the given regular expression is binary, i.e. it contains exactly 2 symbols.
This is important, because I can easily construct an NFA from the given regular expression and then construct the DFA from that NFA, but the resulting DFA is not minimal and its number of states is exponential.
Of course I can run Hopcroft's algorithm to minimize the DFA, but if the DFA's number of states is $\Theta(2^n)$ then Hopcroft's algorithm runs in $\Theta(2^n\log(2^n)) = \Theta(2^n \cdot n) = \Theta(n2^n)$ and this might take too much time to get the minimal DFA for the given regular expression.
Also my computer has only 4GB of RAM.
If the given regular expression is long and the resulting NFA is big, then in the middle of the conversion from NFA to DFA an out-of-memory exception might be thrown, and my application won't make it to calling Hopcroft's algorithm at all.
For the given regular expression, running an algorithm to find all its equivalence classes is considered much better than constructing an NFA from the regular expression, constructing a DFA from the NFA and running Hopcroft's algorithm to minimize it, but only if the algorithm to find all the equivalence classes doesn't use too many resources and doesn't take too long to finish; then I can use the equivalence classes to efficiently and quickly construct the minimal DFA for the given regular expression.
I tried to google the answer to this question, but I haven't found it yet.
Thinking up this algorithm myself is very difficult.
I am very curious to find out what this algorithm is.
Answer: No such algorithm is forthcoming. As I mentioned in an answer to one of your other questions, you can encode SAT as a regular expression that equals $(0+1)^n$ iff the given formula is unsatisfiable. This means that the algorithm you are looking for will be able to solve SAT. | {
"domain": "cs.stackexchange",
"id": 9457,
"tags": "formal-languages, automata, regular-languages, finite-automata, regular-expressions"
} |
Question about zero-phase filter bank | Question: I found what I believe is a valid solution to the following assignment problem but the answer is bothering me.
Let $$H_0(z) = 1+2z^{-1}+3z^{-2}+2z^{-3}+z^{-4}\quad\textrm{and}\quad H_1(z)=H_0(-z).$$
Find causal FIR filters $F_0(z)$ and $F_1(z)$ such that $\hat{x}(n)$ agrees with $x(n)$ except for a possible delay and (nonzero) scale factor.
So I came up with the following causal FIR filters for $F_0(z)$ and $F_1(z)$
\begin{align}
F_0(z) &= \frac{1}{4}\left(2-5z^{-1}+4z^{-2}-2z^{-3}\right) \\
F_1(z) &= \frac{1}{4}\left(2+5z^{-1}+4z^{-2}+2z^{-3}\right)
\end{align}
Let's see what happens when I plug them in...
First let's determine the transfer function for the system
\begin{equation}
\hat{X}(z) = X(z)H_0(z)F_0(z) + X(z)H_1(z)F_1(z)
\end{equation}
then
\begin{equation}
\frac{\hat{X}(z)}{X(z)} = H_0(z)F_0(z) + H_1(z)F_1(z)
\end{equation}
Plugging in my answer for $F_0(z)$ and $F_1(z)$ I get
\begin{equation}
\frac{\hat{X}(z)}{X(z)} = 1
\end{equation}
But does this make sense? Shouldn't there always be some delay when I filter? I always thought the best I would be able to do would be linear phase distortion when all of my filters are causal. $F_0$ and $F_1$ are causal right? They are a linear combination of delayed inputs so I don't see how they wouldn't be causal. Any help pointing out where I may have made a mistake would be appreciated. Or is it reasonable to have causal FIR filters composed such that the output has zero phase distortion?
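To double-check the algebra numerically (an aside, using NumPy polynomial convolution; coefficient lists are in ascending powers of $z^{-1}$):

```python
import numpy as np

# Coefficients in ascending powers of z^-1.
H0 = np.array([1, 2, 3, 2, 1], dtype=float)
H1 = H0 * np.array([1, -1, 1, -1, 1])      # H1(z) = H0(-z)
F0 = np.array([2, -5, 4, -2], dtype=float) / 4
F1 = np.array([2, 5, 4, 2], dtype=float) / 4

# Cascading filters multiplies transfer functions, i.e. convolves coefficients.
total = np.convolve(H0, F0) + np.convolve(H1, F1)
print(total)  # [1. 0. 0. 0. 0. 0. 0. 0.]  ->  X_hat(z)/X(z) = 1
```

So the cross terms really do cancel to a pure unity gain with no residual delay.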
Edit: Thanks to Teague's answer I think I understand now. Also I verified using simulink that the output does indeed match the input. Here's the simulation.
Edit 2: From Mr. Lyons' request I'm posting the method I used to solve the problem.
Disclaimer: The problem gave a hint that I should consider polyphase representations but I couldn't figure out how to get a solution using a polyphase approach so I did it a different way. If anyone has an answer for how to solve this using polyphase I will post a new question so you can answer it.
The question asks to find FIR filters $F_0(z)$ and $F_1(z)$ so the first thing I did was assume that FIR filters existed and I thought it was reasonable to think they were no higher order than $H_0(z)$ and $H_1(z)$. I let
\begin{equation}
F_k(z) = \sum_{m=0}^{4}{b_{km}z^{-m}}, \hspace{10pt} k=0,1
\end{equation}
and solve for the 10 coefficients $b_{00}, b_{01}, ... b_{04}, b_{10}, b_{11}, ..., b_{14}$
Then plugging $F_0(z)$ and $F_1(z)$ into the transfer function
\begin{align}
\frac{\hat{X}(z)}{X(z)} & = H_0(z)F_0(z) + H_1(z)F_1(z) \\
& = (b_{00} + b_{10}) + (2b_{00} + b_{01} - 2b_{10} + b_{11})z^{-1} + ... + (b_{04} + b_{14})z^{-8}
\end{align}
I want to find the coefficients that result in only a delay and scale. I figured there was no reason mathematically that I couldn't solve for $\hat{X}(z)/X(z)=1$. So I set up the following matrix to zero all the delays except $z^{0}$.
\begin{equation}
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
2 & 1 & 0 & 0 & 0 & -2 & 1 & 0 & 0 & 0 \\
3 & 2 & 1 & 0 & 0 & 3 & -2 & 1 & 0 & 0 \\
2 & 3 & 2 & 1 & 0 & -2 & 3 & -2 & 1 & 0 \\
1 & 2 & 3 & 2 & 1 & 1 & -2 & 3 & -2 & 1 \\
0 & 1 & 2 & 3 & 2 & 0 & 1 & -2 & 3 & -2 \\
0 & 0 & 1 & 2 & 3 & 0 & 0 & 1 & -2 & 3 \\
0 & 0 & 0 & 1 & 2 & 0 & 0 & 0 & 1 & -2 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
b_{00} \\
b_{01} \\
b_{02} \\
b_{03} \\
b_{04} \\
b_{10} \\
b_{11} \\
b_{12} \\
b_{13} \\
b_{14} \\
\end{bmatrix}
=
\begin{bmatrix}
1 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
\end{bmatrix}
\end{equation}
The matrix is 9x10 with rank 9, meaning the system is underconstrained. Because of this I can add an arbitrary 10th equation. Since I thought it would be nice if $F_0(z) = F_1(-z)$, I added the constraint $b_{02} = b_{12} \rightarrow b_{02} - b_{12} = 0$. Adding this to the bottom of the matrix I get
\begin{equation}
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
2 & 1 & 0 & 0 & 0 & -2 & 1 & 0 & 0 & 0 \\
3 & 2 & 1 & 0 & 0 & 3 & -2 & 1 & 0 & 0 \\
2 & 3 & 2 & 1 & 0 & -2 & 3 & -2 & 1 & 0 \\
1 & 2 & 3 & 2 & 1 & 1 & -2 & 3 & -2 & 1 \\
0 & 1 & 2 & 3 & 2 & 0 & 1 & -2 & 3 & -2 \\
0 & 0 & 1 & 2 & 3 & 0 & 0 & 1 & -2 & 3 \\
0 & 0 & 0 & 1 & 2 & 0 & 0 & 0 & 1 & -2 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\
\end{bmatrix}
\begin{bmatrix}
b_{00} \\
b_{01} \\
b_{02} \\
b_{03} \\
b_{04} \\
b_{10} \\
b_{11} \\
b_{12} \\
b_{13} \\
b_{14} \\
\end{bmatrix}
=
\begin{bmatrix}
1 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
\end{bmatrix}
\end{equation}
This makes the matrix 10x10 with rank 10. Solving the linear system yields the solutions for $F_0(z)$ and $F_1(z)$ given above.
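For anyone who wants to reproduce this step, here is the same system solved numerically (a sketch with NumPy; the matrix is copied verbatim from above):

```python
import numpy as np

# 10x10 system from the post: rows 1-9 force X_hat(z)/X(z) = 1,
# row 10 is the extra constraint b02 - b12 = 0.
A = np.array([
    [1, 0, 0, 0, 0,  1,  0,  0,  0,  0],
    [2, 1, 0, 0, 0, -2,  1,  0,  0,  0],
    [3, 2, 1, 0, 0,  3, -2,  1,  0,  0],
    [2, 3, 2, 1, 0, -2,  3, -2,  1,  0],
    [1, 2, 3, 2, 1,  1, -2,  3, -2,  1],
    [0, 1, 2, 3, 2,  0,  1, -2,  3, -2],
    [0, 0, 1, 2, 3,  0,  0,  1, -2,  3],
    [0, 0, 0, 1, 2,  0,  0,  0,  1, -2],
    [0, 0, 0, 0, 1,  0,  0,  0,  0,  1],
    [0, 0, 1, 0, 0,  0,  0, -1,  0,  0],
], dtype=float)
rhs = np.zeros(10)
rhs[0] = 1

b = np.linalg.solve(A, rhs)
F0, F1 = b[:5], b[5:]
print(F0)  # [ 0.5  -1.25  1.   -0.5   0. ]  =  (1/4)(2 - 5z^-1 + 4z^-2 - 2z^-3)
print(F1)  # [ 0.5   1.25  1.    0.5   0. ]  =  (1/4)(2 + 5z^-1 + 4z^-2 + 2z^-3)
```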
Interestingly, the final constraint is pretty arbitrary as long as it isn't linearly dependent on the other 9 rows. I can make it pretty much anything. For example, if I want the coefficients to sum to 0 then I could make the last row of the matrix all $1$s, which yields a different solution than the one I showed above, that is...
\begin{align}
F_0(z) & = \frac{1}{8} (7 - 16z^{-1} + 17z^{-2} - 10z^{-3} + 3z^{-4}) \\
F_1(z) & = \frac{1}{8} (1 + 4z^{-1} - 1z^{-2} - 2z^{-3} - 3z^{-4})
\end{align}
Again if anyone knows the polyphase way to do this problem I am curious. Let me know in the comments and I'll post the question.
Answer: Your work is correct.
First just think about how causal digital filters produce an output as a function of the current and previous inputs. Right?
Now think about the case where a 'filter' only produces an output as a function of the current input (i.e. not influenced by the previous inputs). We don't typically classify these as filters, instead we generally call these 'gain blocks'.
That's essentially what you've created. Your individual transfer functions are causal, and individually their outputs depend on the current and previous inputs. However, when combined, they act as a single gain block with a gain of 1.
I understand where you're coming from though where you think there should be some kind of delay and you are correct. In an actual digital system, it would take time to produce the output after doing the intermediate calculations (e.g. what if you had one billion gain blocks in a row? How would this happen instantaneously?).
The best way to think about this is if you were to implement it on a DSP chip. In this realm, you are working with two different clocks: the system update rate and the processor clock. Say you want to operate a digital filter at 1kHz. It is likely that your processor will be operating at a clock rate in the MHz (i.e. not at 1kHz). So what takes multiple steps within the processor (i.e. takes multiple clock ticks) appears to happen instantaneously with regard to the 1kHz filter update rate.
Make sense? | {
"domain": "dsp.stackexchange",
"id": 4301,
"tags": "filters, homework, digital-filters, filter-bank"
} |
Why things wobble when submitted to wind? | Question: If I place a little piece of paper in front of a fan, it is going to wobble/oscillate.
Is it because the wind flow isn't constant ? Or because the air movement makes the piece of paper move away, so it doesn't offer the same resistance, so it comes back, and then the air can push it with more strength again ?
Answer: The wind flow does not need to be variable. Things can still wobble because of turbulence. If you could visualize the airflow around the object, you could see that it is not a smooth flow, but a flow with vortices. These vortices form near the object but then detach from it. The force acting on the object is different while a vortex forms vs. when it detaches.
Then there is aeroelastic flutter (see https://en.wikipedia.org/wiki/Aeroelasticity#Flutter. I naively thought that it is no different from vortex shedding, but I stand corrected. I thank Glen the Udderboat for pointing my nose in the right direction.) This is the response of an elastic object to a fluid flow, altering its shape and thus in turn altering the force acting on it. This can result in harmonic motion, oscillation, and it can even be destructive.
Remember the famous Tacoma Narrows Bridge, also known as Galloping Gertie? This was the bridge that was destroyed in 1940, shortly after it was built, by wind. (If you haven't seen the video, look it up on YouTube). The wind flow was steady, but the bridge started wobbling nonetheless, just like a strip of paper in front of a fan. This was originally thought to have been the result of vortex shedding, in particular vortex shedding frequency, the rate at which vortices detach from an object. This is approximately linearly dependent on wind velocity. However, if the object in question has a natural oscillation, when the vortex shedding frequency approaches that natural oscillation frequency, a lock-in condition may set in so effectively, vortex shedding and the object itself remain in resonance even if the wind velocity changes slightly.
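As a rough illustration of the numbers involved, the shedding frequency is usually estimated from the Strouhal relation $f = \mathrm{St}\,U/D$. This is only a sketch: the value $\mathrm{St} \approx 0.2$ is a typical figure for a cylinder at moderate Reynolds numbers, not something specific to any particular structure.

```python
def vortex_shedding_frequency(wind_speed, diameter, strouhal=0.2):
    """Approximate shedding frequency f = St * U / D for a bluff body.

    wind_speed in m/s, diameter (cross-flow dimension) in m.
    """
    return strouhal * wind_speed / diameter

# A 0.1 m cable in a 10 m/s wind sheds vortices at roughly 20 Hz;
# trouble starts when this lands near a natural frequency of the structure.
print(vortex_shedding_frequency(10.0, 0.1))  # 20.0
```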
Here is a nice page with additional details, including plenty of relevant equations and some nice pictures of Galloping Gertie: http://ntl.bts.gov/DOCS/ch2.html | {
"domain": "physics.stackexchange",
"id": 23500,
"tags": "forces, fan"
} |
Text-based adventure game where you fight two enemies per round | Question: This is a text-based adventure game where you need to fight against 2 enemies in every round. The damage dealt and the potion heal amount are random numbers.
How can I improve?
Game class:
package textbasedadventuregame;
import java.util.Scanner;
public class Game {
public static void main(String[] args) {
String[] enemyTypes = {"Skeleton", "Giant", "Goblin", "Demon", "Defeated Knight", "Warrior", "Demon Lord"};
String[] enemyStatsText = createEnemyStats(enemyTypes);
Player player = new Player(150, 3);
System.out.println("How many rounds do you want to play?");
int maxRound = new Scanner(System.in).nextInt();
int roundCount = 0;
for (roundCount = 1; roundCount < maxRound; roundCount++) {
roundStart(chooseEnemys(enemyStatsText), player, roundCount);
}
}
private static String[] chooseEnemys(String[] enemyStatsText) {
String[] enemies = new String[2];
enemies[0] = enemyStatsText[randomNumber(0, enemyStatsText.length - 1)];
enemies[1] = enemyStatsText[randomNumber(0, enemyStatsText.length - 1)];
return enemies;
}
private static String[] createEnemyStats(String[] enemyType) {
String[] enemyStatsText = new String[enemyType.length];
int minHealth = 25;
int maxHealth = 70;
int minDamage = 15;
int maxDamage = 25;
for (int i = 0; i < enemyStatsText.length; i++) {
enemyStatsText[i] = enemyType[i] + ";" + randomNumber(minHealth, maxHealth) + ";" + randomNumber(minDamage, maxDamage);
}
return enemyStatsText;
}
public static int randomNumber(int min, int max) {
int range = (max - min) + 1;
int stats = (int) (Math.random() * range) + min;
return stats;
}
private static void roundStart(String[] enemies, Player player, int roundCount) {
System.out.println(enemies[0] + " " + enemies[1]);
String[] enemyOne = enemies[0].split(";");
String enemyOneName = enemyOne[0];
int enemyOneHP = Integer.parseInt(enemyOne[1]);
int enemyOneDamage = Integer.parseInt(enemyOne[2]);
boolean enemyOneDead = false;
String[] enemyTwo = enemies[1].split(";");
String enemyTwoName = enemyTwo[0];
int enemyTwoHP = Integer.parseInt(enemyTwo[1]);
int enemyTwoDamage = Integer.parseInt(enemyTwo[2]);
boolean enemyTwoDead = false;
int stopLoop = 1;
while (stopLoop == 1) {
System.out.println(" \n \n \n Round " + roundCount + " is starting!");
System.out.println("Player HP: " + player.getPlayerHP());
System.out.println(" \n 1: I want to fight \n 2: I want to drink a potion. I have " + player.getPotionCount() + " potions." + "\n 3: Escape");
int input = new Scanner(System.in).nextInt();
if (input == 1) {
System.out.println("So, you choose to fight!");
stopLoop = 2;
chooseEnemyToFight(player, enemyOneName, enemyOneHP, enemyOneDamage, enemyOneDead, enemyTwoName, enemyTwoHP, enemyTwoDamage, enemyTwoDead, roundCount);
} else if (input == 2) {
potionDrink(player);
System.out.println("You have drinked a potion. Potions remaining: " + player.getPotionCount() + ". Your hp is: " + player.getPlayerHP());
stopLoop = 2;
} else if (input == 3) {
System.out.println("You are escaped. Your score is: " + roundCount);
stopLoop = 2;
} else {
System.out.println("Invaild number!");
}
}
}
private static void chooseEnemyToFight(Player player, String enemyOneName, int enemyOneHP, int enemyOneDamage, boolean enemyOneDead, String enemyTwoName, int enemyTwoHP, int enemyTwoDamage, boolean enemyTwoDead, int roundCount) {
while (enemyOneDead == false || enemyTwoDead == false) {
int input = 0;
if (enemyOneDead == false && enemyTwoDead == false) {
System.out.println("The two enemy is: " + enemyOneName + " with " + enemyOneHP + " health and " + enemyOneDamage + " attack DMG and a " + enemyTwoName + " with " + enemyTwoHP + " health and " + enemyTwoDamage + " attack DMG");
System.out.println("Which one do you want to hit? The other one can attack you!");
System.out.println("If you want to fight with the " + enemyOneName + ", type number 1. If you want to fight with the " + enemyTwoName + ", type number 2.");
input = new Scanner(System.in).nextInt();
} else if (enemyOneDead == false && enemyTwoDead == true) {
System.out.println("You will fight with the " + enemyOneName + " who has " + enemyOneHP + " HP and " + enemyOneDamage + " attack damage.");
input = 1;
} else if (enemyOneDead == true && enemyTwoDead == false) {
System.out.println("You will fight with the " + enemyTwoName + " who has " + enemyTwoHP + " HP and " + enemyTwoDamage + " attack damage.");
input = 2;
}
if (input == 1) {
if (enemyTwoDead == false) {
System.out.println("So you want to fight with the " + enemyOneName + " first \n");
}
enemyOneHP = attackEnemy(enemyOneName, enemyOneHP, enemyOneDamage, player);
if (enemyOneHP <= 0) {
System.out.println(enemyOneName + " is died. \n");
enemyOneDead = true;
}
playerDamageRecived(player, enemyTwoDamage, enemyTwoName, roundCount);
} else if (input == 2) {
if (enemyOneDead == false) {
System.out.println("So you want to fight with the " + enemyTwoName + " first \n");
}
enemyTwoHP = attackEnemy(enemyTwoName, enemyTwoHP, enemyTwoDamage, player);
if (enemyTwoHP <= 0) {
System.out.println(enemyTwoName + " is died. \n");
enemyTwoDead = true;
}
playerDamageRecived(player, enemyOneDamage, enemyOneName, roundCount);
}
}
}
private static void playerDamageRecived(Player player, int enemyOneDamage, String enemyOneName, int roundCount) {
player.setPlayerHP(enemyAttack(player, enemyOneDamage));
if (player.getPlayerHP() > 0) {
System.out.println(enemyOneName + " attacked you! Your hp is: " + player.getPlayerHP());
} else if (player.getPlayerHP() <= 0) {
System.out.println("You are dead! Your score is: " + roundCount);
System.exit(0);
}
}
private static int attackEnemy(String enemyName, int enemyHP, int enemyDamage, Player player) {
System.out.println("Which attack do you want to use, your knife (1) or your gun (2)? If you use your knife, you can use a heal potion too, or attack twice! \n");
int input1 = new Scanner(System.in).nextInt();
if (input1 == 1) {
enemyHP = enemyHP - player.getPlayerKnifeDamage();
System.out.println("The enemy hp is: " + enemyHP + ". Do you want to attack again with the knife(1), or do you want to use a potion(2). You have " + player.potionCount + " potions.");
int input2 = new Scanner(System.in).nextInt();
if (input2 == 1) {
enemyHP = enemyHP - player.getPlayerKnifeDamage();
System.out.println("The enemy hp is: " + enemyHP);
} else if (input2 == 2) {
potionDrink(player);
System.out.println("You have drinked a potion. Potions remaining: " + player.getPotionCount() + ". Your hp is: " + player.getPlayerHP() + "\n");
}
} else if (input1 == 2) {
enemyHP = enemyHP - player.getPlayerGunDamage();
}
return enemyHP;
}
private static void potionDrink(Player player) {
player.setPotionCount(player.getPotionCount() - 1);
player.setPlayerHP(player.getPlayerHP() + player.getPotionHeal());
if (player.getPlayerHP() > 100) {
player.setPlayerHP(100);
}
}
private static int enemyAttack(Player player, int enemyDamage) {
return player.getPlayerHP() - enemyDamage;
}
}
Player class:
package textbasedadventuregame;
import static textbasedadventuregame.Game.randomNumber;
public class Player {
int playerHP;
int playerKnifeDamage;
int playerGunDamage;
int potionCount;
int potionHeal;
public Player(int playerHP, int potionCount) {
this.playerHP = playerHP;
this.potionCount = potionCount;
}
public int getPlayerHP() {
return playerHP;
}
public void setPlayerHP(int playerHP) {
this.playerHP = playerHP;
}
public int getPlayerKnifeDamage() {
return playerKnifeDamage = randomNumber(15, 25);
}
public void setPlayerKnifeDamage(int playerKnifeDamage) {
this.playerKnifeDamage = playerKnifeDamage;
}
public int getPlayerGunDamage() {
return playerGunDamage = randomNumber(20, 60);
}
public void setPlayerGunDamage(int playerGunDamage) {
this.playerGunDamage = playerGunDamage;
}
public int getPotionCount() {
return potionCount;
}
public void setPotionCount(int potionCount) {
this.potionCount = potionCount;
}
public int getPotionHeal() {
return potionHeal = randomNumber(50, 80);
}
public void setPotionHeal(int potionHeal) {
this.potionHeal = potionHeal;
}
}
Answer: Java is idiomatically very object-oriented, and the current code doesn't quite get there - it has too many statics, and doesn't properly encapsulate the classes that it should. Particularly, Enemy should get a class. Game can also be instantiated from main().
roundStart isn't particularly well-named, since it doesn't just start the round - it runs the entire round.
chooseEnemys should be spelled chooseEnemies.
Don't pack your enemy stats into semicolon-separated strings - you aren't serializing them to disk or sending them over the network. Just put those stats as members on objects.
You should generalise your code so that it's easy to change the number of enemies per round, not hard-coding everything to two enemies.
Don't write your own randomNumber - ThreadLocalRandom has what you want.
Don't make a new Scanner every time that you need to get input - just make a single instance. If you want to make this testable you can accept it as a parameter to the class constructor, though I haven't shown this.
Avoid writing \n as it isn't portable. Use %n in format strings instead.
The score is a strange calculation. You get the same score if you run away from every single fight as when you win every single fight. This probably needs refinement.
You have drinked should just be You drank, and You are escaped should just be You escaped. The two enemy is should be The two enemies are. Is died should just be died. Recived is spelled Received.
Consider use of switch rather than your repeated ifs when checking input. Also, your input validation is a good start but is not comprehensive - for instance, if the user enters a letter instead of a number your loop will not catch it. The easy way around this is to work with strings and don't bother calling nextInt.
Avoid calling System.exit(). Instead, let the round logic pick up on the fact that the player is dead.
Remove the word Player from all of your methods on the player class, since it's redundant.
Your damage should not be represented as variables on the player class. Delete those and keep your damage calculation functions.
Remove basically all of your set methods and constrain your state manipulation to supported operations like taking damage and drinking potions.
Health and damage are continuous quantities. Why not represent them as doubles instead of ints?
Suggested
Game.java
package textbasedadventuregame;
import java.util.Iterator;
import java.util.Scanner;
import static java.lang.System.out;
public class Game implements Iterator<Round> {
public final int nRounds;
private int roundIndex = 0;
private final Player player = new Player();
public static void main(String[] args) {
System.out.println("How many rounds do you want to play?");
int nRounds = new Scanner(System.in).nextInt();
new Game(nRounds).run();
}
public Game(int nRounds) { this.nRounds = nRounds; }
@Override
public boolean hasNext() { return roundIndex < nRounds && player.isAlive(); }
@Override
public Round next() {
roundIndex++;
out.printf("%n%n%nRound %d is starting!%n", roundIndex);
return new Round(player);
}
public void run() { forEachRemaining(Round::run); }
}
Round.java
package textbasedadventuregame;
import java.util.List;
import java.util.Scanner;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;
import static java.lang.System.out;
public class Round {
private final List<Enemy> enemies = List.of(
new Enemy(), new Enemy()
);
private final Player player;
private static final Scanner scanner = new Scanner(System.in);
public Round(Player player) { this.player = player; }
public String enemyDescriptions() {
// Return a comma-separated string with the descriptions of all living enemies.
return enemiesAlive()
.map(Enemy::toString)
.collect(Collectors.joining(", "));
}
public void run() {
out.printf(
"%s%n"
+ "Player HP: %.0f%n",
enemyDescriptions(), player.getHealth());
while (true) {
out.printf(
"1: I want to fight!%n"
+ "2: I want to drink a potion. I have %d potions.%n"
+ "3: Escape.%n",
player.getPotionCount());
String input = scanner.next();
switch (input) {
case "1" -> fight();
case "2" -> drinkPotion();
case "3" -> out.println("You have escaped.");
default -> {
out.println("Invalid choice!");
continue;
}
}
break;
}
}
public Stream<Enemy> enemiesAlive() {
return enemies.stream().filter(Enemy::isAlive);
}
private void fight() {
out.println("So, you choose to fight!");
while (player.isAlive() && enemiesAlive().findAny().isPresent()) {
Enemy toFight = chooseEnemy();
attackEnemy(toFight);
enemiesAlive()
.filter(enemy -> enemy != toFight)
.forEach(this::receiveDamage);
}
}
public Enemy chooseEnemy() {
List<Enemy> alive = enemiesAlive().collect(Collectors.toList());
Enemy enemy;
if (alive.size() == 1)
enemy = alive.get(0);
else {
out.println("The enemies are:");
out.println(
IntStream.range(0, alive.size())
.mapToObj(index -> String.format(
"%d. %s", index + 1, alive.get(index)
))
.collect(Collectors.joining(System.lineSeparator()))
);
out.println("Which one do you want to hit? The others can attack you!");
enemy = alive.get(scanner.nextInt() - 1);
}
out.printf("So you want to fight with the %s.%n", enemy.name);
return enemy;
}
private void receiveDamage(Enemy enemy) {
player.takeDamage(enemy.damage);
out.printf("%s attacked you! Your health is: %.0f%n",
enemy.name, player.getHealth());
if (!player.isAlive())
out.println("You are dead!");
}
private void attackEnemy(Enemy enemy) {
out.println("Which attack do you want to use, your knife (1) or your gun (2)? If you use your knife, you can use a heal potion too, or attack twice!");
boolean bonusAction;
double damage;
while (true) {
switch (scanner.next()) {
case "1" -> {
bonusAction = true;
damage = player.getKnifeDamage();
}
case "2" -> {
bonusAction = false;
damage = player.getGunDamage();
}
default -> {
out.println("Invalid choice!");
continue;
}
}
break;
}
damageEnemy(enemy, damage);
if (bonusAction) {
out.println("Do you want to attack again with the knife(1), or do you want to use a potion(2)?");
switch (scanner.next()) {
case "1" -> damageEnemy(enemy, player.getKnifeDamage());
case "2" -> drinkPotion();
}
}
}
public void damageEnemy(Enemy enemy, double damage) {
enemy.takeDamage(damage);
out.printf("The enemy health is: %.0f%n", enemy.getHealth());
if (!enemy.isAlive())
out.printf("%s died.%n", enemy.name);
}
private void drinkPotion() {
player.drinkPotion();
out.printf(
"You drank a potion. Potions remaining: %d. Your health is: %.0f.%n",
player.getPotionCount(), player.getHealth());
}
}
Body.java
package textbasedadventuregame;
public class Body {
public final double startingHealth;
protected double health;
protected Body(double health) {
startingHealth = health;
this.health = health;
}
public double getHealth() { return health; }
public boolean isAlive() { return health > 0; }
public void takeDamage(double damage) {
health = Math.max(0, health - damage);
}
public void heal(double amount) {
health = Math.min(startingHealth, health + amount);
}
}
Player.java
package textbasedadventuregame;
import java.util.concurrent.ThreadLocalRandom;
public class Player extends Body {
private static final ThreadLocalRandom rand = ThreadLocalRandom.current();
private int potionCount;
public Player(double health, int potionCount) {
super(health);
this.potionCount = potionCount;
}
public Player() { this(150, 3); }
public int getPotionCount() { return potionCount; }
public void drinkPotion() {
if (potionCount > 0) {
potionCount--;
heal(rand.nextDouble(50, 80));
}
}
public double getKnifeDamage() {
return rand.nextDouble(15, 25);
}
public double getGunDamage() {
return rand.nextDouble(20, 60);
}
}
Enemy.java
package textbasedadventuregame;
import java.util.concurrent.ThreadLocalRandom;
public class Enemy extends Body {
private static final ThreadLocalRandom rand = ThreadLocalRandom.current();
public static final String[] names = {
"Skeleton", "Giant", "Goblin", "Demon", "Defeated Knight", "Warrior", "Demon Lord",
};
public final double damage;
public final String name;
public Enemy(String name, double health, double damage) {
super(health);
this.name = name;
this.damage = damage;
}
public Enemy() {
// Random enemy
this(
names[rand.nextInt(names.length)],
rand.nextDouble(25, 70),
rand.nextDouble(15, 25)
);
}
@Override
public String toString() {
return String.format(
"%s with %.0f health and %.0f attack damage",
name, health, damage
);
}
} | {
"domain": "codereview.stackexchange",
"id": 42587,
"tags": "java, game, adventure-game, battle-simulation"
} |
Show that $\vec L$ and $\vec S$ commute with each other | Question: It is stated in Griffiths in a hint to a question that $\vec L$ and $\vec S$ commute with each other but no proof is given. $\vec L$ is given in the differential form and $\vec S$ is given in matrix form. How can I prove that they commute with each other?
Answer: I think of it this way:
$\vec{L}$ and $\vec{S}$ live in two completely different Hilbert spaces; that's why you know they commute: they do not "talk" with each other.
Usually, $\vec{L}$ lives in $L^2(\mathbb{R}^3)$ and $\vec{S}$ in $\mathbb{C}^n$.
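The statement "operators on different tensor factors commute" can be checked directly for small matrices (a sketch using an $\ell = 1$ orbital block and spin-$\frac{1}{2}$, in units of $\hbar$):

```python
import numpy as np

Lz = np.diag([1.0, 0.0, -1.0])                  # l = 1 block of L_z
Sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])  # spin-1/2 S_z

# On the product space the operators really are L ⊗ I and I ⊗ S.
L_full = np.kron(Lz, np.eye(2))
S_full = np.kron(np.eye(3), Sz)

commutator = L_full @ S_full - S_full @ L_full
print(np.allclose(commutator, 0))  # True: they never "talk" to each other
```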
You can visualize this by thinking that $\vec{L}$ acts as a differential operator on a 3-dimensional function and $\vec{S}$ acts on a n-dimensional vector. | {
"domain": "physics.stackexchange",
"id": 51216,
"tags": "quantum-mechanics, commutator"
} |
Direction of restriction for NP-hardness proofs | Question: I have a very silly question. As I am reading through proofs showing that a problem is NP-hard, one of the techniques is to show that an already-proven NP-complete problem is a special case of that problem.
I am wondering, shouldn't that be the other way around? I mean, if you show your problem is a special case of an NP-complete problem, doesn't that show it is NP-complete as well? I know this logic is wrong, but why?
What is the advantage and disadvantage of this technique.
Comment: this question actually contains the answer to why we need to reduce an NP-hard problem to our problem to prove its NP-hardness.
Answer: The technique in your top paragraph shows NP-hardness. The technique in your next paragraph shows membership in NP.
"domain": "cs.stackexchange",
"id": 3970,
"tags": "np-complete, proof-techniques"
} |
Is the rate of light emissions actually the frequency of the light? | Question: I am confused by the concept of frequency in the context of the Doppler effect while studying SR. Schutz says that if the source emits pulses more frequently, then the observer will see the light not only more frequently, but also with more energy!
Is that true? Can't we emit LOW-frequency (low-energy) light like infrared in pulses at a HIGH rate, about 10^100 Hz?
Answer: This question raises several concerns. From a purely signal-processing point of view, where the aforementioned pulse is not a photon but rather a time-domain pulse of a certain duration (here less than 10^-100 seconds), the answer is NO: you cannot have pulses shorter than the period of the wave in question and still have the wave be well defined. Consider sound: when a 440 Hz A note gets short enough, you just hear a "click", not a well-defined pitch.
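The sound analogy is the time-bandwidth trade-off at work: the shorter the pulse, the wider its spectrum. A quick NumPy sketch with rectangular pulses (an illustration, not part of the original answer):

```python
import numpy as np

N_FFT = 4096
widths = (256, 64)   # samples: a "long" pulse and a 4x shorter one
lobe_edges = []
for width in widths:
    pulse = np.zeros(N_FFT)
    pulse[:width] = 1.0
    mag = np.abs(np.fft.fft(pulse))
    # The first spectral null of a width-W rectangular pulse sits at bin
    # N_FFT / W, so the main lobe (a bandwidth proxy) widens as W shrinks.
    lobe_edges.append(N_FFT // width)
    print(width, N_FFT // width)

print(lobe_edges)  # [16, 64]: the shorter pulse has the 4x wider main lobe
```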
The other concern is with the Lorentz invariance of the de Broglie relation: when you boost an EM wave, the frequency and energy density change--they are reference-frame dependent--but the number of photons simply cannot change, as it's quantized. If it did change, there would be "special" velocities where you add or subtract a photon--and that's just nonsense. Others have worked it out here:
http://www.physics.princeton.edu/~mcdonald/examples/uoveromega.pdf | {
"domain": "physics.stackexchange",
"id": 30668,
"tags": "special-relativity, visible-light, relativity, frequency, doppler-effect"
} |
LOL'ing-Up Project Euler One | Question: It's been a while since I last wrote some lolcode, so I felt like tackling Project Euler #1.
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
This is the code/script I wrote to solve this:
HAI 1.2
VISIBLE "HAI, PROJEK LOLLER ONE!!"
I HAS A LIMIT ITZ 1000
I HAS A TOTAL ITZ 0
I HAS A CHEEZ ITZ 3
I HAS A BURGER ITZ 5
I HAS A CHEEZBURGER ITZ PRODUKT OF CHEEZ AN BURGER
HOW IZ I ADDTOTAL YR VALUE
I HAS A RESULT ITZ SUM OF VALUE AN TOTAL
FOUND YR RESULT
IF U SAY SO
IM IN YR MIND UPPIN YR NUMBER TIL BOTH SAEM NUMBER AN LIMIT
I HAS A PICKLE ITZ FAIL
BOTH SAEM 0 AN MOD OF NUMBER AN CHEEZBURGER
O RLY?, YA RLY
PICKLE R WIN
NO WAI
BOTH SAEM 0 AN MOD OF NUMBER AN CHEEZ
O RLY?, YA RLY
PICKLE R WIN
NO WAI
BOTH SAEM 0 AN MOD OF NUMBER AN BURGER
O RLY?, YA RLY
PICKLE R WIN
OIC
OIC
OIC
BOTH SAEM PICKLE AN WIN
O RLY?, YA RLY
TOTAL R I IZ ADDTOTAL YR NUMBER MKAY
OIC
IM OUTTA YR MIND
VISIBLE SMOOSH "TEH ANSWER IZ " AN TOTAL MKAY
VISIBLE "DOWN WITH PROJEK LOLLER ONE!"
KTHXBYE
This code can be executed on CodingGround, and produces this output:
HAI, PROJEK LOLLER ONE!!
TEH ANSWER IZ 233168
DOWN WITH PROJEK LOLLER ONE!
Is the fizzbuzz-like logic appropriate, or I fell in the "easy" trap? Would there be a way to collapse all the conditions into one and make a shorter script? Or is this algorithm a bad/inefficient one?
Answer: Same Guy, Same Language, same criticism?
Let me start by (again) criticizing your variable casing choice of SHOUTCASE
It's so darn hard to tell what's a variable and what's not :(
But your names have gotten better since last time. Cheez and Burger aren't the best, but given that their content is arbitrarily set and accordingly somewhat meaningless, they really stand their ground.
One thing though: your Cheezburgers are not needed.
It's a waste of computation power to check divisors by the biggest one first if you return early. (I recall rolfl having some nice stats on that in one fizz-buzz review I can't find right now.) We can completely eliminate that O RLY? YA RLY from your code.
Oh and while we're at cheeseburgers... There's one thing I really dislike on cheeseburgers... Pickles.
The temporary TROOF variable to remove those three YA RLY branches doing the same thing is complete overkill. Instead of PICKLE R WIN it would be sufficient to add the current Number to the total.
Code until now:
IM IN YR Mind UPPIN YR Number TIL BOTH SAEM Number AN Limit
BOTH SAEM 0 AN MOD OF Number AN Cheez
O RLY?, YA RLY
Total R I IZ AddToTotal YR Number MKAY
NO WAI
BOTH SAEM 0 AN MOD OF Number AN Burger
O RLY?, YA RLY
Total R I IZ AddToTotal YR Number MKAY
OIC
OIC
IM OUTTA YR Mind
Now what have we got here? This looks like it should be either an ElseIf or just a simple combination of conditions into an Or.
The latter is the easier one, but for reference: There is an else if. It's MEBBE
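If you want to convince yourself that the collapsed Or really matches the original nested O RLY? chain, here is a quick cross-check outside LOLCODE (plain Python, purely as an aside):

```python
# The nested version checks 15, then 3, then 5; the collapsed one is a single OR.
nested = [n for n in range(1000)
          if n % 15 == 0 or n % 3 == 0 or n % 5 == 0]
collapsed = [n for n in range(1000) if n % 3 == 0 or n % 5 == 0]

assert nested == collapsed
print(sum(collapsed))  # 233168, matching TEH ANSWER IZ above
```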
Final Code:
HAI 1.2
VISIBLE "HAI, PROJEK LOLLER ONE!!"
I HAS A Limit ITZ 1000
I HAS A Total ITZ 0
I HAS A Cheez ITZ 3
I HAS A Burger ITZ 5
HOW IZ I AddToTotal YR Value
I HAS A Result ITZ SUM OF Value AN Total
FOUND YR Result
IF U SAY SO
BTW Iterates from 0 to 999
IM IN YR Mind UPPIN YR Number TIL BOTH SAEM Number AN Limit
EITHER OF BOTH SAEM 0 AN MOD OF Number AN Cheez AN BOTH SAEM 0 AN MOD OF Number AN Burger
O RLY?, YA RLY
Total R I IZ AddToTotal YR Number MKAY
OIC
IM OUTTA YR Mind
VISIBLE SMOOSH "TEH ANSWER IZ " AN Total MKAY
VISIBLE "DOWN WITH PROJEK LOLLER ONE!"
KTHXBYE | {
"domain": "codereview.stackexchange",
"id": 11965,
"tags": "programming-challenge, lolcode"
} |
When factoring N=21 on IBMQ Devices with Shor's algorithm, does the quantum subroutine require modular exponentiation? | Question: I am currently working on Qiskit code implementing Shor algorithm in the form of a quantum circuit. I am factoring $N = 21$ (into 3 and 7) using 5 qubits, with 3 qubits in the work register and 2 in the control. The purpose is that a random integer between 2 and 20 (inclusive) will be input and the program will either use classical or quantum+classical computing to derive the factors.
One thing I noticed as I was creating the circuit was that if a random number is input into the quantum subroutine, say the number 4, and there is no modular exponentiation function, just an inverse QFT on the work register, it still works and finds the period $r$, which is 6. I originally assumed this was a bug or a coding mistake, but it also made me wonder if the modular exponentiation is the same for each input.
Due to this, I have two questions: Is there a universal series of gates that can transform $x$ into $a^x \mod N$, the control register output? I was under the impression it was different depending on what $x$ was, but the results I am getting are seemingly disproving that.
Similarly, for numbers as small as $N=21$, does the control register output even matter? The work register output from the inverse QFT appears to be doing all of the work, no matter if the mod. exp. function is included or not?
Picture below, approx. same for every input that reaches the quantum subroutine of Shor's algorithm in my code:
Answer: Period finding for N=21 requires a 5 qubit work register, not 3. You need at least $\lceil \log_2 N \rceil$ work qubits. Otherwise you can't actually store intermediate values during the modular exponentiation.
You also need at least $10$ controls, not 2. If you use fewer than $2 \lceil \log_2 N \rceil$ controls, the result doesn't actually have the necessary resolution to distinguish between different periods. That being said, you can use the standard trick of recycling the control qubits so there's only one alive at a time. But you should be expecting 10 recycling steps, not 2 recycling steps.
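A quick sanity check on those numbers, using $n = \lceil \log_2 N \rceil$ work qubits and the common $2n$-controls convention:

```python
import math

N = 21
n_work = math.ceil(math.log2(N))  # work qubits needed to store values mod N
n_ctrl = 2 * n_work               # controls needed for enough phase resolution
print(n_work, n_ctrl)  # 5 10
```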
Using fewer than 5 work qubits, or fewer than 10 steps, or making the steps not do actual multiplications, is a clear sign that you are not running Shor's algorithm and are instead using knowledge of the factors to optimize the circuit (aka cheating). In which case you might as well just solve the problem classically and have the quantum computer flip coins so it can get a participation award. | {
"domain": "quantumcomputing.stackexchange",
"id": 3917,
"tags": "qiskit, shors-algorithm, modular-exponentiation"
} |
Io: Why so much sulfur? | Question: Io is the only body in the Solar System whose crust is dominated by sulfur and its compounds. Carbon and nitrogen are also very common in the universe but we don't see them or their compounds in Io's crust. Why not?
Answer: Io is not dominated by sulfur. Io is mostly silicate rock and iron. Sulfur is a thin coating on the surface and sulfur dioxide makes up most of the atmosphere, but that's because sulfur dioxide is a volatile. In other words, it has a much lower melting and boiling point than silicate rock. So while the rock remains solid, the sulfur dioxide is gas.
Carbon is not common on Io's crust for the same reason that it is not common on Earth's crust. Carbon may be common in the universe, but it binds to oxygen to make a gas. When the solar system was forming most of the carbon was in the form of CO or CO2 and would not easily have coalesced into a solid body. Silicon is much less common than carbon, but Earth and Io are made of silicon because silicon makes solid stuff at the relevant temperatures. | {
"domain": "astronomy.stackexchange",
"id": 2400,
"tags": "jupiter, natural-satellites, geology"
} |
What is meant by a 'definite value' in physics? | Question:
What are the allowed values of a constant when a physical theory states that the constant has a 'definite' value (for example, the definite velocity c in special relativity)? What is meant by the addition of 'definite'? Is it to exclude infinity? What about zero?
Is a universal constant always assumed to have a non-zero and non-infinite value?
Answer: Definite doesn't mean anything special. It just emphasises that it is a constant and that its value is important for the theory. A constant can be zero, but then it isn't there, so there is no point talking about it. A constant cannot be infinite. Infinity is not something you can measure; it is a purely mathematical concept, albeit a very useful one. | {
"domain": "physics.stackexchange",
"id": 76082,
"tags": "terminology, definition, physical-constants"
} |
Simple Quantum Mechanics question about the Free particle, (part2) | Question: Continuing from my first question titled, Simple Quantum Mechanics question about the Free particle, (part1)
Griffiths goes on and says,
"A fixed point on the waveform corresponds to a fixed value of the argument and hence to x and t such that,"
x $\pm$ vt = constant, or x = $\mp$ vt + constant
What does this mean? I am so confused.
He goes on and says that psi(x,t) might as well be the following:
$$\psi(x,t) = A e^{i(kx-(\hbar k^2/2m)t)}$$
because the original equation,
$$\psi(x,t) = A e^{ik(x-(\hbar k/2m)t)} + B e^{-ik(x + (\hbar k/2m)t)}$$
only differs by the sign in front of k. He then says let k run negative to cover the case of waves traveling to the left, $k = \pm \sqrt{2mE}/\hbar$.
Then after trying to normalize psi(x,t), you find out you can't do it! He then says the following,
"A free particle cannot exist in a stationary state; or, to put it another way, there is no such thing as a free particle with a definite energy."
How did he come to this logical deduction. I don't follow. Can someone please explain Griffith's statement to me?
Answer: The first part about velocities says that we're looking at a function
$$\psi(x,t) = \psi(u)$$
for $u=x-vt$. For example, $\psi = \cos(x-vt)$.
Now pick some fixed value for $\psi$, say $0.4$. Find a place where $\psi = 0.4$ such as $x=1.16$. If you let some small amount of time $\textrm{d}t$ go by, then look at $x=1.16$ again, $\psi$ has changed a little, but if you look at the point $x = 1.16 + v\textrm{d}t$, you will find $\psi = 0.4$ there, so we could say that the place where $\psi$ is $0.4$ is moving at speed $v$.
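A quick numerical illustration of that (using the hypothetical $\psi = \cos(x - vt)$ from above, not a physical wavefunction):

```python
import numpy as np

v, dt = 2.0, 1e-6
psi = lambda x, t: np.cos(x - v * t)

x0 = float(np.arccos(0.4))  # point where psi = 0.4 at t = 0 (about 1.16)
# After a small time dt the value 0.4 is no longer at x0, but it is
# found at x0 + v*dt: the feature moves at the phase velocity v.
assert abs(psi(x0 + v * dt, dt) - 0.4) < 1e-9
```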
This is not the same as saying that the particle is moving at that speed. It is saying that $v$ is the phase velocity of the wave.
The second part about normalizability says that a wavefunction must be an element in a vector space called a Hilbert space (by physicists; I think mathematicians call it $L_2$). The Hilbert space consists of wavefunctions that are square-normalizable; you can square them, integrate from negative infinity to infinity, and get a finite value. Things that die off exponentially on their tails do this, for example.
The sine function doesn't die off exponentially, or at all. If you square it and integrate from negative infinity to infinity, you get something infinite. Thus, the sine wave doesn't represent a reasonable probability density function for the location of a particle. The particle would be "infinitesimally likely to be observed everywhere in an infinite region", which physically does not make sense. Instead, for a real particle, we must have a normalizable wavefunction. Since a free particle with definite energy would have a pure sinusoidal wavefunction, a free particle with definite energy is physically not possible. | {
"domain": "physics.stackexchange",
"id": 28529,
"tags": "quantum-mechanics"
} |
Venues for short research articles | Question: I've just completed a short (5 page) paper on proving a certain combinatorial game NP-Complete. This is in no way a result of huge significance, yet it's one that I believe is publishable. What venues would be good for a paper like this? The only one I'm aware of is Information Processing Letters; are there other ones like this?
Answer: Fun with Algorithms! Although it is a conference, not a journal as you might be looking for, several NP-hardness results about (combinatorial) games, among other things, have been published there, including this one.
However, this is a triennial conference (except for 2014; the most recent one was in 2012), so if you want to publish it soon then there might be a better choice.
"domain": "cstheory.stackexchange",
"id": 2796,
"tags": "journals"
} |
How do I convert an ROS image into a numpy array? | Question:
Hi All,
I'm trying to read ROS images published from a video camera (from MORSE) into a numpy array in my python script, but I don't know what is the correct way to do this, and everything I have tried hasn't worked.
Does anyone know how to do this?
Here is a snippet of what I have tried so far:
import numpy
import rospy
from sensor_msgs.msg import Image
def vis_callback( data ):
im = numpy.array( data.data, dtype=numpy.uint8 )
doSomething( im )
rospy.init_node('bla',anonymous=True)
sub_vis = rospy.Subscriber('navbot/camera/image',Image,vis_callback)
rospy.spin()
I get an error that looks like this:
[ERROR] [WallTime: 1370383204.000265] bad callback: <function vis_callback at 0x32e2398>
Traceback (most recent call last):
File "/opt/ros/fuerte/lib/python2.7/dist-packages/ros_comm-1.8.15-py2.7.egg/rospy/topics.py", line 678, in _invoke_callback
cb(msg)
File "ros_simulate.py", line 55, in vis_callback
im = numpy.array(data.data,dtype=numpy.uint8)
ValueError: invalid literal for long() with base 10: "':\x0f\xff';\x10\xff+A\x12\xff7Q\x16\xffB`\x18\xffGg\x18\xffDa\x15\xff=W\x12\xff2H\r\xff*<\x0b\xff&8\n\xff$6\t\xff%6\t\xff'9\n\xff*>\n\xff0G\x0b\xff8S\x0b\xff;Y\n\xff:X\t\xff8V\x07\xff6T\x06\xff2N\x04\xff-F\x04\xff+B\x03\xff&<\x03\xff(?\x05\xff#8\x07\xff\x1e0\x07\xff\x19)\x07\xff\x19(\x08\xff\x1b+\t\xff\x1e0\x0b\xff\x1e0\x0b\xff 3\x0c\xff\x1e1\x0b\xff\x1e1\x0b\xff\x1d/\n\xff\x1c-\t\xff\x1c,\t\xff\x1a*\x08\xff\x1d.\x08\xff(>\x0b\xff-F\x0c\xff<[\x0f\xffJo\x12\xffJp\x12\xffKr\x12\xffLs\x12\xffMt\x13\xffMu\x13\xff"
I noticed that the Image object has a deserialize_numpy() method, I couldn't find any documentation on how to use it, so I crossed my fingers and tried it out:
def vis_callback( data ):
im = numpy.array(data.deserialize_numpy(data.data, numpy ), dtype=numpy.uint8)
doSomething( im )
and got more errors:
[ERROR] [WallTime: 1370383699.289428] bad callback: <function vis_callback at 0x34b5410>
Traceback (most recent call last):
File "/opt/ros/fuerte/lib/python2.7/dist-packages/ros_comm-1.8.15-py2.7.egg/rospy/topics.py", line 678, in _invoke_callback
cb(msg)
File "ros_simulate.py", line 53, in vis_callback
im = numpy.array(data.deserialize_numpy(data.data, numpy ), dtype=numpy.uint8)
File "/opt/ros/fuerte/lib/python2.7/dist-packages/sensor_msgs/msg/_Image.py", line 282, in deserialize_numpy
raise genpy.DeserializationError(e) #most likely buffer underfill
DeserializationError: unpack requires a string argument of length 8
Originally posted by Albatross on ROS Answers with karma: 157 on 2013-06-04
Post score: 6
Answer:
You need to deserialize the message data with cv_bridge first (python tutorial). Once you have the image as a cv.Mat, you can just call numpy.asarray on it.
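As an aside on why the original attempt failed: data.data is a raw byte string, and numpy.array tries to parse it as number literals. The bytes can instead be reinterpreted in place — a rough sketch, assuming a simple 8-bit encoding such as rgb8 or mono8 (cv_bridge handles the general case, including other encodings and row padding; image_to_array is a hypothetical helper name, not part of any ROS API):

```python
import numpy

def image_to_array(msg):
    # Reinterpret the raw byte buffer instead of parsing it as text;
    # assumes a simple 8-bit encoding like rgb8 (3 bytes/pixel) or mono8.
    flat = numpy.frombuffer(msg.data, dtype=numpy.uint8)
    return flat.reshape(msg.height, msg.width, -1)
```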
Originally posted by Dan Lazewatsky with karma: 9115 on 2013-06-04
This answer was ACCEPTED on the original site
Post score: 10
Original comments
Comment by Albatross on 2013-06-05:
Thanks! That worked perfectly for me | {
"domain": "robotics.stackexchange",
"id": 14426,
"tags": "sensor-msgs, rospy, ros-fuerte, compressedimage, morse"
} |
ActiveRecord model for upvotes and downvotes | Question: How can I make my model DRY and KISS?
class Rating < ActiveRecord::Base
attr_accessible :item_type, :item_id, :rating, :voters_up, :voters_down
serialize :voters_up, Hash
serialize :voters_down, Hash
belongs_to :ranks, :polymorphic => true
def self.up(item, user, iteration = 1)
@rating = Rating.where(item_type: item.class.name, item_id: item.id).first
@rating = Rating.create(item_type: item.class.name,
item_id: item.id,
rating: 0,
voters_up: {users:[]}, voters_down: {users:[]}) unless @rating
changed = nil
if !@rating.voters_up[:users].include?(user.id) && !@rating.voters_down[:users].include?(user.id)
if changed.nil?
@rating.increment(:rating, 1)
@rating.voters_up[:users] << user.id
changed = true
end
end
if @rating.voters_up[:users].include?(user.id) && !@rating.voters_down[:users].include?(user.id)
if changed.nil?
@rating.decrement(:rating, 1)
@rating.voters_up[:users].delete user.id
changed = true
end
end
if @rating.voters_down[:users].include?(user.id) && !@rating.voters_up[:users].include?(user.id)
if changed.nil?
@rating.voters_up[:users] << user.id
@rating.voters_down[:users].delete user.id
@rating.increment(:rating, 2)
changed = true
end
end
@rating.save
item.update_attribute(:rating_value, @rating.rating)
end
def self.down(item, user, iteration = 1)
@rating = Rating.where(item_type: item.class.name, item_id: item.id).first
@rating = Rating.create(item_type: item.class.name,
item_id: item.id,
rating: 0,
voters_up: {users:[]}, voters_down: {users:[]}) unless @rating
changed = nil
if !@rating.voters_down[:users].include?(user.id) && !@rating.voters_up[:users].include?(user.id)
if changed.nil?
@rating.decrement(:rating, 1)
@rating.voters_down[:users] << user.id
changed = true
end
end
if @rating.voters_down[:users].include?(user.id) && !@rating.voters_up[:users].include?(user.id)
if changed.nil?
@rating.increment(:rating, 1)
@rating.voters_down[:users].delete user.id
changed = true
end
end
if @rating.voters_up[:users].include?(user.id) && !@rating.voters_down[:users].include?(user.id)
if changed.nil?
@rating.voters_down[:users] << user.id
@rating.voters_up[:users].delete user.id
@rating.decrement(:rating, 2)
changed = true
end
end
@rating.save
item.update_attribute(:rating_value, @rating.rating)
end
end
Answer: Here's a 2-part answer: First, a direct answer to your question, and then an alternative approach to the whole thing.
DRYing the current implementation
Firstly, I'd make sure that every "item" has_one :rating. Right now, you're doing a lot of manual record creation, and Rating has to leave all of its attributes accessible to mass-assignment. It'd also be nicer to simply say question.upvote or answer.upvote directly, instead of "manually" calling the Rating.up method and having to supply both user and item.
That can be done, provided you can store the current user somewhere that's accessible to the models. But let's keep it a little simpler and just go for a syntax like question.upvote(user)
To DRY out the "vote-enabled" models, try this:
# in app/concerns/votable.rb
require 'active_support/concern'
module Votable
extend ActiveSupport::Concern
included do
has_one :rating, :as => :item
end
# These are instance methods: upvote/downvote are called on the votable record
def upvote(user)
# Build the Rating record if it's missing
build_rating if rating.nil?
# Pass the user on
rating.upvote user
end
def downvote(user)
build_rating if rating.nil?
rating.downvote user
end
end
Then, in a record that users can vote on, include that concern:
class Question < ActiveRecord::Base
include Votable
# ...
end
class Answer < ActiveRecord::Base
include Votable
# ...
end
Now the Question and Answer records will automatically build their own Rating record, if they don't have one already, and then pass the upvote/downvote calls on to that rating record.
As for Rating itself, here's what I'd do (I've changed the attribute names a little, but you can of course keep using yours):
class Rating < ActiveRecord::Base
belongs_to :item, :polymorphic => true
# Serialize stuff as straight-up Array
serialize :upvoters, Array
serialize :downvoters, Array
# Create arrays after record initialization, so they're not nil on new records
after_initialize :initialize_voters
# Update the "owner's" rating cache after every save
after_save :update_item
# Note: No attr_accessible at all; it's not needed.
# I'll skip the comments, as this should all be very easily readable.
def upvote(user)
if upvoted_by?(user)
decrement :rating
remove_upvoter(user)
elsif downvoted_by?(user)
increment :rating, 2
add_upvoter user
remove_downvoter user
else
increment :rating
add_upvoter user
end
save
end
def downvote(user)
if downvoted_by?(user)
increment :rating
remove_downvoter user
elsif upvoted_by?(user)
decrement :rating, 2
remove_upvoter user
add_downvoter user
else
decrement :rating
add_downvoter user
end
save
end
def upvoted_by?(user)
upvoters.include? user.id
end
def downvoted_by?(user)
downvoters.include? user.id
end
private
def add_upvoter(user)
upvoters << user.id
end
def add_downvoter(user)
downvoters << user.id
end
def remove_upvoter(user)
upvoters.delete user.id
end
def remove_downvoter(user)
downvoters.delete user.id
end
def initialize_voters
# Assign through self; bare names here would create local variables, not attributes
self.upvoters ||= []
self.downvoters ||= []
end
def update_item
if item.present?
item.update_attribute :rating_value, rating
end
end
end
And that's pretty much it. Now you can do Question.find(23).upvote(current_user) or whatever, and it should work out.
Going straight to the database
Alternatively, I'd suggest using "raw" database queries instead of models with serialized attributes and all that. All you really need is a simple table. It doesn't need an id column or a model-layer abstraction. All it needs is 4 columns: user_id, vote, item_type, and item_id. You can skip ActiveRecord models and such, since all you need to do is check for a certain user/item combo.
Make user_id, item_id and item_type the primary key (or add a uniqueness constraint) and you can do the entire logic in SQL (at least MySQL). If, for instance, user 23 votes on answer 42, it'd look like this:
INSERT INTO `votes` (`user_id`, `item_id`, `item_type`, `vote`)
VALUES (23, 42, "Answer", X)
ON DUPLICATE KEY UPDATE `vote`=IF(`vote`=X, 0, X);
where X is 1 for an upvote, or -1 for a downvote. That's actually all the upvote/downvote logic in one query.
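The same toggle can be sketched in Python with SQLite standing in for MySQL — ON CONFLICT ... DO UPDATE is SQLite's upsert (available since SQLite 3.24) playing the role of ON DUPLICATE KEY UPDATE; cast_vote is a hypothetical helper, and table/column names follow the answer:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE votes (
    user_id INTEGER, item_id INTEGER, item_type TEXT, vote INTEGER,
    PRIMARY KEY (user_id, item_id, item_type))""")

def cast_vote(user_id, item_id, item_type, x):  # x = +1 upvote, -1 downvote
    # Insert the vote; on a repeated (user, item) key, toggle it off if it
    # matches the previous vote, otherwise switch to the new vote.
    db.execute("""INSERT INTO votes (user_id, item_id, item_type, vote)
                  VALUES (?, ?, ?, ?)
                  ON CONFLICT (user_id, item_id, item_type)
                  DO UPDATE SET vote = CASE WHEN vote = excluded.vote
                                            THEN 0 ELSE excluded.vote END""",
               (user_id, item_id, item_type, x))

cast_vote(23, 42, "Answer", 1)   # upvote
cast_vote(23, 42, "Answer", 1)   # same vote again cancels it -> 0
cast_vote(23, 42, "Answer", -1)  # downvote -> -1
print(db.execute("SELECT vote FROM votes").fetchone())  # (-1,)
```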
Right after that, update the total rating for the post:
UPDATE `answers`
SET `rating_value`=(
SELECT IFNULL(SUM(`vote`), 0)
FROM `votes`
WHERE `item_id`=42 AND `item_type`= "Answer"
)
WHERE id=42;
From time to time, you may want to do some clean-up to keep the table a little leaner (i.e. delete rows where the vote is zero, or when an associated item is deleted), but otherwise you should have a pretty robust solution with none of the overhead of Rails and ActiveRecord. And it can all be abstracted into a mixin like the concern above, and you can easily add the upvoted_by?(user)/downvoted_by?(user) methods there too with some simple SQL:
SELECT * FROM `votes` WHERE `vote`<>0 AND `user_id`=23 AND `item_id`=42 AND `item_type`="Answer"; | {
"domain": "codereview.stackexchange",
"id": 3206,
"tags": "ruby, ruby-on-rails"
} |
BWA doesn't recognize HiC options | Question: I'm trying to run bwa on HiC data. This is the command I'm giving:
bwa mem -5SP -T0 -t4 -R '@RG\tID:\tSM:\tLB:\tPL:ILLUMINA\tPU:none' REFERENCE.fa PAIR1.fq PAIR2.fq
and here's the error message:
mem: illegal option -- 5
Supposedly, the 5SP option is for HiC data (cite), but my BWA isn't recognizing it. Anybody have any ideas? My initial thought is to update BWA to the newest version, but I don't want to mess with our reproducibility since I am new to the lab and don't want to be updating software when I don't need to. The server is a Mac Pro and the bwa version is 0.7.12-r1039. Thanks.
Answer: This option has been added in 0.7.16a, see https://github.com/lh3/bwa/releases, so consider updating. | {
"domain": "bioinformatics.stackexchange",
"id": 1588,
"tags": "bwa, hi-c"
} |
How the increasing of sampling frequency in OFDM didn't cause increase on the required channel bandwidth | Question: In OFDM, how the increasing of sampling frequency didn't cause increase on the required channel bandwidth (fs>>BW), as i know sampling frequency means number of samples per second, if this number of samples increased by logic the the needed bandwidth increase.
Answer: You can say the null subcarriers have their own bandwidth if you define an alphabet including "zeros" and use null carriers to transmit these "zeros". As MBaz commented:
If we start with the discrete-time OFDM symbol s[n], and increase the
sampling rate (defined by the time interval between successive samples
of s[n]), then the OFDM symbol becomes "shorter" and therefore its
bandwidth will increase.
But it is not the case. The OFDM symbol duration is $T_u = N_{dft} T_s = 1/\Delta f$.
In this bandwidth interpretation, I think the term "virtual subcarrier" is more appropriate.
What you send over the electromagnetic wave is the data on the $N$ active subcarriers, where $N < N_{dft}$; the frequency positions of the null/virtual subcarriers can even be used by other systems, because when you revert the time-domain signal back to the frequency domain, all you need are the frequency positions of the active subcarriers.
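A small numeric sketch of these relationships (the parameter values are toy numbers chosen purely for illustration):

```python
# Toy OFDM parameters: 64-point IFFT, 15 kHz spacing, 52 active subcarriers
N_dft, delta_f, N_active = 64, 15e3, 52

fs = N_dft * delta_f              # implementation sampling rate
T_u = 1 / delta_f                 # symbol duration -- set by delta_f, not by fs
occupied_bw = N_active * delta_f  # spectrum the active subcarriers actually occupy

print(fs, occupied_bw)  # 960000.0 780000.0 -> fs exceeds the occupied bandwidth
```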
It is easier to think of OFDM as FDM. In this case you have $N$ narrowbands of bandwidth $B_{nb}$, and a sample rate $F_s > N B_{nb}$ is required for practical implementations, such as easily-realizable anti-aliasing filters. OFDM is simply one implementation of this system, using $\Delta f = B_{nb}$ and $F_s = N_{dft} \Delta f = N_{dft} B_{nb} > N B_{nb}$. | {
"domain": "dsp.stackexchange",
"id": 5199,
"tags": "fft, ofdm, ifft"
} |
Sudoku challenge driver program | Question: For my planned Sudoku with Handicap challenge over at codegolf.SE, I need a driver program. I've decided to write it in Python, for learning purposes. Since I've got no Python experience, I thought I put it here to see whether it can be improved.
The code is intended to implement the rules as written in the the linked post, except that it tests only one Sudoku (combining the score of several Sudokus is easy to do).
The program gets as first argument the file containing the Sudoku, followed by the command line to start the entry to be scored.
The Sudoku is given in a file containing a 9×9 digit grid, with unknown digits represented by a dot. That grid is followed by an empty line, and a second grid containing the solution. The program checks that the solution indeed fits the original Sudoku (that way I've already spotted a typo on the solution of the first Sudoku in my post), but doesn't currently do any further tests of the validity of the input data.
While validation of the Sudoku file is only nice to have, correct validation of the entries is crucial: It should catch all clients that violate the rules, and even more importantly, it should not declare any valid entry as invalid.
Here's the contents of the file for my first test Sudoku (that's the file I've also done all my testing with):
4.5.7.89.
..2.5.6..
..79..542
..35.6489
...3.8...
6847.91..
238..59..
..6.9.3..
.79.3.2.1
415672893
892453617
367981542
723516489
951348726
684729135
238165974
146297358
579834261
And this is the code:
#!/usr/bin/python
from __future__ import print_function
import sys
import itertools
import subprocess
import re
import copy
# -------------------------------------------------------------------
# general functions
# -------------------------------------------------------------------
# Remove leading and trailing whitespace from all lines
def trim_lines(text):
return map(lambda s: s.strip(), text);
# (for debugging purposes only)
# print a list of lists as 2D array
def arrayprint(array, width):
for row in array:
print( (("{:>"+str(width)+"}")*len(array[0])).format(*row) )
# print on stderr
def error(*content):
print(*content, file=sys.stderr)
# load the content of a file
def loadfile(filename):
try:
file = open(filename)
file_content = file.readlines()
file.close()
except:
error("Could not read file ", filename)
sys.exit(1)
return file_content
def testall(predicate, list):
return reduce(lambda x, y: x and y, [True]+map(predicate, list))
# -------------------------------------------------------------------
# field manipulation and verification
# -------------------------------------------------------------------
def constfield(value):
return [x[:] for x in [[value]*9]*9]
# get the data indicated by the specifier string from the working field
def get_data(field, scratch, specifier):
def subfield(field, k):
i = (k-1) / 3
j = (k-1) % 3
sub3x3=map(lambda s: s[3*i:3*i+3],field[3*j:3*j+3])
return itertools.chain(*sub3x3)
if specifier != "S":
index = int(specifier[1]) - 1
if specifier[0] == 'R':
return field[index]
elif specifier[0] == 'C':
return map(lambda s: s[index], field)
elif specifier[0] == 'F':
i = index / 3
j = index % 3
sub3x3=map(lambda s: s[3*j:3*j+3], field[3*i:3*i+3])
return itertools.chain(*sub3x3)
else:
return scratch
# set the field parts indicated by the specifier string
# to the given data combined with the original data
# using the given function
def set_data(field, scratch, specifier, newvals, combine):
newvals =list(newvals)
if specifier != "S":
index = int(specifier[1]) - 1
if specifier[0] == 'R':
field[index] = map(combine, field[index], newvals)
elif specifier[0] == 'C':
for i in xrange(0,len(field)):
field[i][index] = combine(field[i][index], newvals[i])
elif specifier[0] == 'F':
i = index / 3
j = index % 3
for k in xrange(0,3):
for l in xrange(0,3):
field[3*i+k][3*j+l] = combine(field[3*i+k][3*j+l],
newvals[3*k+l])
else:
for i in xrange(9):
scratch[i] = combine(scratch[i], newvals[i])
# check whether field2 is obtained by making progress on field1
def validate(field1, field2):
for i in xrange(0,9):
for j in xrange(0,9):
if field1[i][j] & field2[i][j] != field2[i][j]:
print("mismatch: ", i, ", ", j, ": ",
field1[i][j], " -> ", field2[i][j], sep="");
return False
return True
# -------------------------------------------------------------------
# parse Sudoku file
# -------------------------------------------------------------------
# parse a single character from the Sudoku field
def parse_sudoku_char(char, allowdot):
if char.isdigit():
return 1 << (int(char) - 1)
elif allowdot and char == '.':
return 511
else:
error("Invalid character in Sudoku field:", char)
sys.exit(1)
# parse a line of the Sudoku field
def parse_sudoku_line(line, allowdot):
if len(line) != 9:
error("Sudoku field: Line too long")
sys.exit(1)
return map(lambda c: parse_sudoku_char(c, allowdot), line)
# parse a complete Sudoku field
def parse_sudoku_field(field, allowdot):
return map(lambda r:parse_sudoku_line(r, allowdot), field)
# parse the complete Sudoku input file
def read_sudoku(filename):
sudoku_text = trim_lines(loadfile(sys.argv[1]));
# The file contains two Sudoku fields of 9 lines each,
# separated by an empty line
if len(sudoku_text) != 19 or sudoku_text[9] != "":
error(sys.argv[1], ": file must consist of two Sudoku boards")
error("separated by an empty line")
sys.exit(1)
unsolved_field = parse_sudoku_field(sudoku_text[0:9], True)
solved_field = parse_sudoku_field(sudoku_text[10:19], False)
if not validate(unsolved_field, solved_field):
error("The given solution does not solve the given Sudoku")
sys.exit(1)
return (unsolved_field, solved_field)
# -------------------------------------------------------------------
# check the field and run the solver
# -------------------------------------------------------------------
# test if a specifier is valid
def test_specifier(specifier):
if len(specifier) < 1 or len(specifier) > 2:
return False
if specifier[0] not in "RCFS":
return False
if specifier[0] == "S":
return len(specifier) == 1
return specifier[1].isdigit() and specifier[1] != '0'
# pass data through the solver
# returns one of "continue", "finished" and "disqualified"
CONTINUE = "continue"
FINISHED = "finished"
DISQUALIFIED = "disqualified"
def step(field, scratch, specs, solvercommand):
speclist = specs[0]
speclength = len(speclist)
solver = subprocess.Popen(solvercommand,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE)
print(*speclist, file=solver.stdin)
for spec in speclist:
print(*get_data(field, scratch, spec), file=solver.stdin)
solver.stdin.close()
output = trim_lines(solver.stdout.readlines())
if len(output) != speclength+1:
return DISQUALIFIED
if not testall(lambda line: re.match(r'^(\d+\s+)+\d+$', line),
output[:speclength]):
return DISQUALIFIED
replacement_data = map(lambda line: map(int, line.split()),
output[:speclength])
def row_valid(row):
return len(row) == 9 and testall(lambda num: 0 <= num < 2**9, row)
if not testall(lambda row: row_valid(row), replacement_data):
return DISQUALIFIED
tmpfield = constfield(511)
tmpscratch = [511]*9
for index in xrange(speclength):
set_data(tmpfield, tmpscratch,
speclist[index], replacement_data[index],
lambda x, y: x & y)
for index in xrange(speclength):
set_data(field, scratch, speclist[index],
get_data(tmpfield, tmpscratch, speclist[index]),
lambda x, y: y);
action = output[len(speclist)]
if (action == "STOP"):
return FINISHED
speclist = action.split()
specs[0] = speclist
if len(speclist) not in range(1,4):
return DISQUALIFIED
if not testall(test_specifier, speclist):
return DISQUALIFIED
return CONTINUE
# The actual runloop and scoring
def runloop(command, unsolved_field, solved_field):
# the working field is the field the process is actually operating on
working_field = copy.deepcopy(unsolved_field)
# The scratch data
scratch = [0]*9
# The list of specifiers for the data to provide
# Initially, that's just the scratch space
# Additional list level to provide reference semantics
specs = [[ "S" ]]
steps = 0;
action = CONTINUE;
while action == CONTINUE:
steps = steps + 1
action = step(working_field, scratch, specs, command)
print("Number of runs:", steps);
if action == DISQUALIFIED:
return None
print("checking with puzzle ... ", end="");
if not validate(unsolved_field, working_field):
return None
print("Ok.");
print("checking with solution ... ", end="");
if not validate(working_field, solved_field):
return None
print("Ok.");
onebits = 0;
for num in itertools.chain(*working_field):
onebits = onebits + bin(num).count('1')
print("Number of set bits:", onebits);
return steps + 5* onebits - 45
# -------------------------------------------------------------------
# main program
# -------------------------------------------------------------------
# Check command line arguments
if len(sys.argv) < 3:
error("Usage: ", sys.argv[0], "sudoku solver [solver-args]")
error(" sudoku = file containing the sudoku to solve")
error(" solver = executable that should solve it")
error(" solver-args = arguments to be passed to the solver")
sys.exit(2)
fields = read_sudoku(sys.argv[1]);
score = runloop(sys.argv[2:], *fields)
if (score):
print("Total score: ", score)
else:
print("Disqualified.")
Answer: That was quite a lot of code, so I don't think I'll cover all of it, but lets start from the top:
Add from __future__ import division, and use // for the intended integer divisions – That makes the intent of every division explicit, so none of them changes behavior unexpectedly (e.g. when the code later moves to Python 3)
Comments are supposed to be """docstrings""" – You have seemingly good comments, but they should be docstrings on the first line after the function definitions instead of hash-comments in front
Use list comprehension instead of map for trim_lines – List comprehension tends to be easier to read and maintain. Read accepted answer on Python List Comprehension vs. Map, and see code suggestion below:
def trim_lines2(texts):
"Remove leading and trailing whitespace from all lines."
return [text.strip() for text in texts]
In arrayprint(): Simplify print calls, if possible – Even though only one line, it is really hard to read. Here are two variants, where I think I prefer the last one:
def arrayprint_v3(array, width):
row_length = len(array[0])
col_format = '{{:>{}}}'.format(width)
for row in array:
print((col_format*row_length).format(*row))
def arrayprint_v4(array, width):
row_length = len(array[0])
for row in array:
print(''.join('{0:>{1}}'.format(col, width) for col in row))
Use with ... for reading files – See http://www.dotnetperls.com/file-python, which explains why it is better to use with ... instead of ordinary try...except when reading files
What's the duplicate in testall? – I'm not sure I understand what's happening in your testall as you seem to obfuscate the original reduce(map(...)) with an extra lambda call?!? Can surely be more readable, and most likely a lot shorter... Shorter version: all(map(predicate, list))
Try out the divmod() function – Try the following on for size: div, mod = divmod(k-1, 3)
Switch to a one-dimensional sudoku array, using index function – You have loads of double loops, and double indexing and inplace calculation to address a given sudoku field. This could be simplified greatly if you switch to a one-dimensional array, and having a function doing transforming from 2D to 1D. The logic is a little bit to confusing, so I might be wrong, but here is some code trying to illustrating simplifications which should be available:
# Replace
i = index / 3
j = index % 3
for k in xrange(0,3):
for l in xrange(0,3):
field[3*i+k][3*j+l] = combine(field[3*i+k][3*j+l],
newvals[3*k+l])
# with something like
for i in xrange(9):
field[index][i] = combine(field[index][i], newvals[i])
# optionally use some variatons over
i, j = divmod(index, 3)
def pos(x, y):
return 3*x + y
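To make the 2D ↔ 1D suggestion concrete, here is a small runnable sketch (my own illustration, not part of the original review) using divmod to map a 3x3 block index to its nine cells:

```python
def block_cells(index):
    """Return the (row, col) coordinates covered by 3x3 block `index` (0..8)."""
    i, j = divmod(index, 3)
    return [(3 * i + k, 3 * j + l) for k in range(3) for l in range(3)]

print(block_cells(4))  # centre block: rows 3-5, cols 3-5
```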
You are overly fond of for i in xrange, followed by list indexing – Your code would look neater if either you used for row in rows, or even for i, row in enumerate(rows). Then you could access row directly, and if you need to cross reference you could use the i in the latter option. Also remember that xrange(0, 9) is the same as xrange(9)
... me is exhausted trying to get through the code ...
Final note
Even though you have good variable names and function names, and some good vertical spacing (except not having two newlines before a function), your code is hard to read, and difficult to understand.
This in some part due to heavy use of lambdas and maps, combined with the complicated for loops. I do believe there is much room for improvement through simplifying list structures, and finding the more pythonic ways of doing things instead of repeating the habits of your older coding language. | {
"domain": "codereview.stackexchange",
"id": 16182,
"tags": "python, beginner, python-2.x, validation, sudoku"
} |
Is Gravity on a bell curve? | Question: Is there a point where gravity decreases the farther away from an object that is not related to distance?
Answer: Variables "on a bell curve", if I am interpreting you correctly, are random variables with probability distributions, that is, they are non-deterministic. Repeated measurements (or experiments) of random variables yield distinct results, with probability obeying a certain cumulative distribution function (CDF). One such distribution is the famous bell curve (shown are the chances of falling in certain intervals).
I guess your question can be rephrased as "Is Gravity non-deterministic?" -- that is, would we expect different results from repeated measurements of Gravity at the same distance from an attractive object?
The answer would be no, because the force of gravity is theoretically consistent across measurements. In contrast, Quantum Mechanics elucidates phenomena at very small scales that indeed effectively do not yield the same results for repeated measurements.
Keep in mind that gravity measurement instruments, however, do measure "on a bell curve", because of error. The source of error is usually noise: a plethora of small interference effects (ranging from radio waves and vibrations to thermal fluctuation of electronics).
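A quick numerical illustration of this (my own sketch, not from the original answer): adding a small random reading error to an idealized free-fall measurement spreads the inferred $g$ values into a bell curve around the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
g_true = 9.81
# 10,000 simulated readings of the height fallen after t = 1 s,
# each with an assumed 5 cm random reading error
h_meas = g_true / 2 + rng.normal(0.0, 0.05, size=10_000)
g_est = 2 * h_meas                # inferred g from each noisy reading
print(g_est.mean(), g_est.std())  # mean near 9.81, spread near 0.1
```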
If you get a chronometer and measure the height $h$ a ball released at time $t=0$ has fallen at time $t=1s$, you might expect from the effects of gravity alone the ball has fallen $g/2$ meters, giving a measurement of your local gravity $g$. However, in practice you won't obtain consistent results (that's an easy experiment to try yourself! you can do it by using smartphone video recording and dropping objects against a measuring tape; you should get $g/2 \approx 4.9$). | {
"domain": "physics.stackexchange",
"id": 33287,
"tags": "gravity, space, distance"
} |
Molality of solution to be used when finding water separated from solution as ice | Question: Question
$\pu{1 kg}$ of a solution of cane sugar is cooled and maintained at $\pu{-4 ^\circ C}$. How much ice will separate out if molality of solution is $0.75$?
$K_\mathrm{f} \ \text{(water)} = 1.86\ \mathrm{K\ kg/mol}$
Solution given:
$\Delta T_\mathrm{f} = 0 - (-4)=4$
$i = 1$ for sucrose.
Therefore, since $\Delta T_\mathrm{f}$ is defined as
$$\Delta T_\mathrm{f}= K_\mathrm{f}\cdot i \cdot \frac {\text{amount of sucrose in } \pu{mol}}{\text{weight of water left in }\pu{kg}}$$
[...]
My doubts
$\Delta T_\mathrm{f}$ is the absolute value of the difference of freezing point of pure solvent and the solution with the non volatile solute. Here, the freezing point of the solution is not $\pu{-4 ^\circ C}$. So how can we say $\Delta T_\mathrm{f}$ is equal to $4$?
Why does the molality term contain amount of remaining water in the denominator? In the original equation shouldn't it be the original molality here?
Answer:
The question: $\pu{1kg}$ of a solution of cane sugar is cooled and maintained at $\pu{-4 ^\circ C}$. How much ice will separate out if molality of solution is $0.75$?
Given: $K_\mathrm{f} \text{ (Water)}= \pu{1.86 K kg/mol}$
The solution is given assuming you understand the concept. It seems you did not completely understand the colligative properties of solutions. Thus I'm giving some clues you how to approach this problem.
The textbook equation you are going to use is:
$$\Delta T_\mathrm{f}= K_\mathrm{f}\times i \times \text{molality of the solution} = K_\mathrm{f}\times i \times \frac {\text{amount of sucrose in }\pu{mol}}{\text{weight of solvent in }\pu{kg}} \tag 1$$
where $i = 1$ for sucrose (sucrose does not ionize in water).
The solution is kept at $\pu{-4 ^\circ C}$. The freezing point of pure water is $\pu{0 ^\circ C}$ ($\therefore \ \Delta T = 0 - (-4)=4$). First, find the freezing point of the given solution using equation $(1)$. If $\Delta T_\mathrm{f} \gt 4$, the game is over: this solution does not freeze at $\pu{-4 ^\circ C}$. If $\Delta T_\mathrm{f} \lt 4$, the game is on: some water will freeze out until the molality reaches that of a solution which freezes at $\pu{-4 ^\circ C}$.
For given solution (no units was used):
$$\Delta T_\mathrm{f}= K_\mathrm{f}\times i \times m = 1.86 \times 1 \times 0.75 = \pu{1.395} $$
Thus, this solution freezes at $\pu{-1.395 ^\circ C}$.
Now, you have to find out at what molality a solution would be just about to freeze at $\pu{-4 ^\circ C}$. That means $\Delta T_\mathrm{f}= 4$ (still, $i = 1$):
$$\Delta T_\mathrm{f}= K_\mathrm{f}\times i \times m \ \Rightarrow \ m= \frac{\Delta T_\mathrm{f}}{K_\mathrm{f}} = \frac{4}{1.86} = 2.15$$
This means water in the original mixture freezes out until the molality of the solution becomes $\pu{2.15 mol/kg}$.
Put it this way:
The original solution has $\pu{0.75 mol}$ of sucrose in $\pu{1 kg}$ of water. The sucrose itself does not freeze out.
Thus, how much water would remain with $\pu{0.75 mol}$ of sucrose to keep molality at 2.15?
I'm sure it is given in your solution manual. Regardless, I'll show you the calculations. Suppose the weight of water is $W \ \pu{kg}$ (units omitted in the equation):
$$ \frac{0.75}{W} = 2.15 \ \Rightarrow \ W = \frac{0.75}{2.15} = 0.349$$
Thus, the weight of water in final solution after freezing at $\pu{-4 ^\circ C}$ is $\pu{0.349 kg}$ (the amount of sucrose is still $\pu{0.75 mol}$).
However, this calculation deviates significantly from the actual amount, since the given $\pu{1.0 kg}$ is the mass of the whole solution, not of the water alone. That means the solution does not contain $\pu{0.75 mol}$ of sucrose. Now, the molecular weight of sucrose is $\pu{342.3 g/mol}$. If the amount of sucrose in $\pu{1.0 kg}$ of the original solution is $x$ (in grams), then:
$$\frac{\frac{x}{342.3}}{(1000-x)\times 10^{-3}} =0.75 \ \Rightarrow \ 1.257x = 0.75 \times 342.3 \ \Rightarrow \ x = \frac{256.725}{1.257} = \pu{204.2 g}$$
Therefore, the amount of water in that solution is $\pu{(1000-204.2) g} = \pu{795.8 g}$ or $\pu{0.796 kg}$. Thus, the amount of sucrose in the original solution is: $\frac{\pu{204.2 g}}{\pu{342.3 g/mol}} = \pu{0.597 mol}$.
Using these values, we can recalculate the water amount in final solution. Suppose the weight of water in final solution is $W_f \ \pu{kg}$:
$$ \frac{0.597}{W_f} = 2.15 \ \Rightarrow \ W_f = \frac{0.597}{2.15} = 0.278$$
Thus, the weight of water in final solution after freezing at $\pu{-4 ^\circ C}$ is $\pu{0.278 kg}$ (the amount of sucrose is $\pu{0.597 mol}$ making it $\pu{2.15 molal}$).
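As a numeric sanity check (my own addition, not part of the quoted solution), the arithmetic above can be reproduced in a few lines:

```python
# Verify the worked numbers: grams of sucrose in 1 kg of 0.75 molal
# solution, water remaining at 2.15 molal, and the ice frozen out.
M_sucrose = 342.3                        # g/mol, from the text
m_final = 4 / 1.86                       # molality that freezes at -4 C
# Solve (x/342.3) / ((1000 - x)/1000) = 0.75 for x (grams of sucrose)
x = 0.75 * M_sucrose / (1 + 0.75 * M_sucrose / 1000)
water0 = (1000 - x) / 1000               # kg of water initially
n_sucrose = x / M_sucrose                # mol of sucrose
water_f = n_sucrose / m_final            # kg of water left unfrozen
ice = water0 - water_f
print(round(x, 1), round(water_f, 3), round(ice, 3))  # 204.3 0.278 0.518
```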
Thus, the amount of water frozen out of the original $\pu{1.0 kg}$ of solution is:
$$\pu{(0.796-0.278) kg} =\pu{0.518 kg}$$ | {
"domain": "chemistry.stackexchange",
"id": 14265,
"tags": "physical-chemistry, solutions, phase, colligative-properties"
} |
Text to speech in a python node | Question:
I've done the soundplay tutorial and can do text-to-speech from the command line, but can someone explain how to do it from inside a python program?
Originally posted by ringo42 on ROS Answers with karma: 55 on 2011-08-04
Post score: 1
Answer:
Hi Ringo,
The following works for me in Diamondback.
First add a dependency on sound_play to your package manifest.xml file:
<depend package="sound_play"/>
and re-run rosmake in your package directory.
Then launch the soundplay node from the command line or your launch file:
$ rosrun sound_play soundplay_node.py
or
<launch>
<node name="soundplay_node" pkg="sound_play" type="soundplay_node.py"/>
</launch>
Finally, add the following Python snippets to your main speech script.
from sound_play.libsoundplay import SoundClient
self.soundhandle = SoundClient()
rospy.sleep(1)
self.soundhandle.say('Take me to your leader.')
--patrick
Originally posted by Pi Robot with karma: 4046 on 2011-08-04
This answer was ACCEPTED on the original site
Post score: 5
Original comments
Comment by dinosaur on 2015-09-21:
I did this in hydro on a PR2. It worked without adding the dependency. Also, our PR2 automatically runs a soundplay node. If a second one is accidentally started, speech sounds choppy until the whole system is restarted (even restarting all ros processes doesn't fix it). | {
"domain": "robotics.stackexchange",
"id": 6351,
"tags": "ros, python, sound-play"
} |
Resampling FIR coefficients | Question: I have received from a client a 32-coefficient FIR filter running at 1 kHz. I would like to adapt the filter to a 24-coefficient FIR filter running at 750 Hz while preserving the 0-375 Hz frequency response as much as possible, especially the 0-30 Hz band. Is there a way to resample my FIR coefficients?
Edit :
The FIR has linear phase, is a low-pass filter, and also has zeroes at specific frequencies (multiples of 60 Hz)
Answer: As mentioned in a comment, I would probably try to re-design the filter at the new sampling rate. Take equidistant samples of the magnitude of the existing filter between DC and the new Nyquist frequency as a desired response. Since you want to make sure that the new filter has zeros at integer multiples of $60$ Hertz, split your new filter response into two parts:
$$H(z)=P(z)G(z)\tag{1}$$
where $P(z)$ is a polynomial with zeros at integer multiples of $60$ Hz. Your desired magnitude is then defined by equidistant samples of
$$M_D(e^{j\omega})=\left|\frac{H_D(e^{j\omega})}{P(e^{j\omega})}\right|\tag{2}$$
where $H_D(z)$ is the transfer function of the original filter. In your frequency grid, avoid frequencies at which the zeros of $P(z)$ occur. Of course, those zeros are cancelled by the zeros of $H_D(z)$, but you might nevertheless have some numerical problems otherwise.
Now you find a linear phase transfer function $G(z)$ approximating $M_D(e^{j\omega})$ on the unit circle, and your final filter transfer function is given by Eq. $(1)$.
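As a concrete illustration (my own sketch, not from the original answer): a weighted least-squares fit of a type-I (odd-length) linear-phase FIR to a sampled magnitude. The toy low-pass `desired` here stands in for $M_D$; in practice you would sample $|H_D/P|$ of the client's filter on this grid and multiply the result by $P(z)$, and a type-II variant would give the even length of 24.

```python
import numpy as np

def wls_linear_phase_fir(num_taps, w_grid, desired, weights):
    """Weighted least-squares fit of a type-I linear-phase FIR amplitude."""
    assert num_taps % 2 == 1, "type-I filter needs an odd number of taps"
    half = num_taps // 2
    # Amplitude response: A(w) = a0 + 2 * sum_k a_k cos(k w)
    basis = np.cos(np.outer(w_grid, np.arange(half + 1)))
    basis[:, 1:] *= 2.0
    sw = np.sqrt(weights)
    a, *_ = np.linalg.lstsq(sw[:, None] * basis, sw * desired, rcond=None)
    # Fold the half-coefficients back into a symmetric impulse response
    return np.concatenate([a[:0:-1], a])

w = np.linspace(0.0, np.pi, 256)            # 0 .. new Nyquist (375 Hz)
desired = (w < 0.3 * np.pi).astype(float)   # toy stand-in for M_D
weights = np.where(w < np.pi * 30 / 375, 10.0, 1.0)  # emphasize 0-30 Hz
h = wls_linear_phase_fir(23, w, desired, weights)
```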
I would use a weighted least squares approximation, which just requires the solution of a system of linear equations. If the range between DC and $30$ Hz is especially important, you can assign those frequencies a higher weight, such that the approximation is better in that range (at the cost of the approximation outside that range). | {
"domain": "dsp.stackexchange",
"id": 9218,
"tags": "filter-design, finite-impulse-response, digital-filters, decimation, linear-phase"
} |
SPOJ Adding reversed number in Java | Question: I just started to practice for competitive programming and just started to solve the most solved problems on SPOJ.
This is what I made for ADDREV: Adding reversed numbers in Java (https://www.spoj.com/problems/ADDREV/)
The Antique Comedians of Malidinesia prefer comedies to tragedies. Unfortunately, most of the ancient plays are tragedies. Therefore the dramatic advisor of ACM has decided to transfigure some tragedies into comedies. Obviously, this work is very hard because the basic sense of the play must be kept intact, although all the things change to their opposites. For example the numbers: if any number appears in the tragedy, it must be converted to its reversed form before being accepted into the comedy play.
Reversed number is a number written in arabic numerals but the order of digits is reversed. The first digit becomes last and vice versa. For example, if the main hero had 1245 strawberries in the tragedy, he has 5421 of them now. Note that all the leading zeros are omitted. That means if the number ends with a zero, the zero is lost by reversing (e.g. 1200 gives 21). Also note that the reversed number never has any trailing zeros.
ACM needs to calculate with reversed numbers. Your task is to add two reversed numbers and output their reversed sum. Of course, the result is not unique because any particular number is a reversed form of several numbers (e.g. 21 could be 12, 120 or 1200 before reversing). Thus we must assume that no zeros were lost by reversing (e.g. assume that the original number was 12).
Input
The input consists of N cases (equal to about 10000). The first line of the input contains only positive integer N. Then follow the cases. Each case consists of exactly one line with two positive integers separated by space. These are the reversed numbers you are to add.
Output
For each case, print exactly one line containing only one integer - the reversed sum of two reversed numbers. Omit any leading zeros in the output.
import java.util.*;
import java.lang.*;
class Main
{
public static void main (String[] args) throws java.lang.Exception
{
Scanner sc = new Scanner(System.in);
sc.nextLine();
while(sc.hasNextInt())
{
printNextNumber(sc);
}
}
public static void printNextNumber(Scanner sc)
{
int a = sc.nextInt();
int b = sc.nextInt();
int reversedA = reverseNumber(a);
int reversedB = reverseNumber(b);
int reversedSum = reverseNumber(reversedA+reversedB);
System.out.println(reversedSum);
}
public static int reverseNumber(int number)
{
return listToNumberMadeOfDigits(numberToListOfDigits(number));
}
public static List<Integer> numberToListOfDigits(int number)
{
List<Integer> list = new ArrayList<>();
while(number>0){
list.add(number%10);
number /= 10;
}
Collections.reverse(list);
return list;
}
public static int listToNumberMadeOfDigits(List<Integer> digits)
{
int number = 0;
for(int i = 0; i < digits.size();i++){
number += Math.pow(10,i)*digits.get(i);
}
return number;
}
}
How is this code?
What should I avoid in the future?
Answer: First, note that this review is of general good coding practices for production code, and not all suggestions will be appropriate for the sphere of "competitive programming", in which I have no significant experience.
Some thoughts:
For readability, use whitespace consistently and more frequently. There should be whitespace after control statements (if, while, for), around operators (+, %, ..), and before curly braces ({).
In Java, the convention is that open curly braces go on the same line, not on a newline.
Use final aggressively when variable declarations will not change to reduce cognitive load on readers.
AutoCloseable resources such as Scanner should be used in a try-with-resources block to ensure they get closed.
a and b are poor choices for variable names because they're meaningless. Variable names should describe the value they hold. You probably don't even need the intermediate variables.
Methods not intended for use outside their class should be private. Always prefer the most restrictive access you can reasonably apply.
In a real application, it would be highly preferable to have the reversed sum pushed back to the main() method to handle printing. This gives you flexibility, rather than pushing your output choice through your code - if you want to change it later, it changes in fewer, higher-level places.
It might be nice to use the number of cases they provide you with.
Don't throw Exception ever - throw the most specific checked exception possible. Don't declare thrown checked exceptions if you don't actually throw them.
Your algorithm is much more complex than it needs to be. Lists are extraneous. The problem can be solved with math.
If you were to apply all the changes I suggested, your code might look more like:
import java.util.Scanner;
class Main {
public static void main(final String[] args) {
try (final Scanner sc = new Scanner(System.in)) {
final int numberOfCases = sc.nextInt();
for (int i = 0; i < numberOfCases; i++) {
final int firstNumber = sc.nextInt();
final int secondNumber = sc.nextInt();
System.out.println(addReversedNumbers(firstNumber, secondNumber));
}
}
}
private static int addReversedNumbers(final int firstNumber, final int secondNumber) {
return reverseNumber(reverseNumber(firstNumber) + reverseNumber(secondNumber));
}
private static int reverseNumber(final int number) {
int numberToReverse = number;
int reversedNumber = 0;
while (numberToReverse > 0) {
final int digit = numberToReverse % 10;
numberToReverse /= 10;
reversedNumber = 10 * reversedNumber + digit;
}
return reversedNumber;
}
} | {
"domain": "codereview.stackexchange",
"id": 33596,
"tags": "java, programming-challenge"
} |
Energy cycle centered around methane production / consumption | Question: Many car manufacturers have developed projects for hydrogen cars, but I've always wondered why they put so much effort into hydrogen: it is hard to store, and the fuel cells used to convert the energy back rely on rare-earth metals, which are not suitable for producing cars for everyone.
Nature mainly stores energy with carbon. I'm not a chemist, but is it really that hard, or not yet economically viable, that we see no rising projects around simple hydrocarbon synthesis? Easy to store, easy to consume, but maybe hard to produce, right?
Production : $\ce{CO_2 + 2H_2O}$ (+ Renewable Energy Source) $\ce{-> CH_4 + 2O_2}$
Consumption : $\ce{CH_4 + 2O_2 \rightarrow CO_2 + 2H_2O}$ (+ Energy To Use)
Could even $\ce{CO_2}$ be simply captured from air with limewater in such cycle ?
Sorry if my question seems too simplistic, but I'd really like to understand (in simple terms) the economic/technical constraints around such an apparently simple round trip.
Answer: Vehicles that run on natural gas already exist, and have done for many years - you can find CNG (compressed natural gas) cars and larger vehicles around the world.
But there's very little synthesising of methane because there are very few places with a sensible carbon price, so it's just cheaper to extract natural gas and burn it.
Research is ongoing on methane synthesis - this is usually done under the name "power to gas", which can refer to hydrogen or methane. Either way, it usually starts with the splitting of water to get hydrogen, and then is sometimes combined with methanation of the hydrogen. The latter is an exothermic reaction, so combining the two processes can give pretty good efficiencies (see e.g. this paper).
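To put rough numbers on the round trip (my own back-of-envelope figures, not from the original answer; the standard enthalpy of combustion of methane is about 890 kJ/mol):

```python
# Burning CH4 releases ~890 kJ/mol, so synthesising it from CO2 and H2O
# must cost at least that much energy per mole, before any inefficiencies.
dH_comb = 890e3              # J/mol, enthalpy of combustion of methane
M_ch4 = 16.04e-3             # kg/mol
energy_per_kg = dH_comb / M_ch4           # minimum input, J per kg of CH4
eta = 0.6                    # assumed illustrative power-to-gas efficiency
elec_kwh_per_kg = energy_per_kg / eta / 3.6e6
print(energy_per_kg / 1e6)   # ~55 MJ/kg
print(elec_kwh_per_kg)       # ~26 kWh of electricity per kg of CH4
```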
Hydrogen / methane for cars is looking like a lost battle: electric cars have almost certainly already won; it's just going to take a few years until they completely dominate. However, there is still the possibility that larger vehicles (heavy goods vehicles, boats, aircraft) might settle on hydrogen or renewable methane as their decarbonisation option.
By the way, just because a fuel-cell might use rare-earth metals, doesn't make it unsuitable for mass market. Most rare-earth metals aren't rare, despite the name. Fuel cells have lots of issues, and fuel-cell vehicles really haven't achieved much yet, but that's to do with lack of reliability and longevity, rather than scarcity of components. | {
"domain": "chemistry.stackexchange",
"id": 7222,
"tags": "hydrocarbons, green-chemistry"
} |
Is the Liénard-Wiechert electric field conservative? | Question: I know that an accelerated charge should emit an e.m. field and loose energy. Therefore, the Liénard-Wiechert (L.W.) electric field of an accelerated charge should be non-conservative.
But I checked first what happens when the charge is not accelerated, i.e. moves with a constant velocity. I expected to find a conservative field as in the case when the charge is at rest. A charge moving with constant velocity doesn't radiate. But it seems that this is not what happened.
Given the scalar potential $\phi$ and vector potential $\vec A$, the electric field is
$$ \ (1) \ \vec E = - \nabla \phi - \frac {∂ \vec A}{∂t},$$
where
$$ (2) \ \phi (r, t) = \frac {1}{4 \pi \epsilon _0} \left( \frac {q}{(1 - \vec n \vec \beta _s)|\vec r - \vec r_s|} \right)_{t_r},$$
$$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3) \ \vec A = \frac {\mu _0 c}{4 \pi} \left( \frac {q \ \vec {\beta} _s}{(1 - \vec n \vec \beta _s)|\vec r - \vec r_s|} \right)_{t_r} = \frac {\vec \beta _s (t_r)}{c} \phi (r, t).$$
see the article.
I assume that for constant velocity of the charge, $t_r = t$.
A field that obeys
$$ \ (4) \ \vec F(\vec r) = \nabla V(r)$$
is conservative, i.e.
$$ \ (5) \ \int_{\vec {r_1}}^{\vec {r_2}} \vec F \ d \vec {\ell} = V(\vec {r_2}) - V(\vec {r_1}).$$
So, I expected that for constant velocity formula (1) would turn into (4), i.e. that I would get that $\vec A$ does not depend on time. But this doesn't happen. Why? A charge moving with constant velocity shouldn't radiate; its electric field should be conservative.
Am I confused, or am I making a mistake?
Answer: An Electric Field is only conservative if it is static. The propagation of E with a L-W field contradicts this, so it is not conservative. | {
"domain": "physics.stackexchange",
"id": 19517,
"tags": "lienard-wiechert, conservative-field"
} |
Can I call ROS “ros::init(…)” within the player (Player/Stage) driver? | Question:
I am trying to write a Player driver that will publish messages on ROS.
Player driver does not create an executable file and hence I am not sure how to call ROS initialize within the player driver.
The main function of player driver looks like this...
void PlayerDriver::Main()
{
int argc; // Player Main() method does not take argument
char **argv; // What to do with argc and argv??
geometry_msgs::Point commandInput;
ros::init(argc, argv, "Command");
ros::NodeHandle n;
ros::Publisher command_pub = n.advertise<geometry_msgs::Point>("servocommand", 1000);
ros::Rate loop_rate(1);
while (ros::ok())
{
ros::spinOnce();
ProcessMessages();
//Do some stuff
commandInput.x = globalVel.v;
commandInput.y = globalVel.w;
commandInput.z = 0.0;
command_pub.publish(commandInput);
//Do some stuff
loop_rate.sleep();
}
}
The Player driver compiles and creates a shared library and I have a cfg file. It is called by "player playerdriver.cfg" and works fine, gets connected to a Player Client but it does not publish messages on ROS.
As the Player Main() method does not take arguments, I believe this is where I am making a mistake. Any suggestions are welcome.
Originally posted by sks on ROS Answers with karma: 36 on 2012-07-24
Post score: 1
Original comments
Comment by sks on 2012-07-24:
Thanks @allenh1 for the reply. I saw your example, Robot class. I think here the Player is a client. I am trying to create a Player server. Do you think I can create a class as you have suggested? I have had no problem integrating a player client with ROS publisher/subscriber.
Comment by allenh1 on 2012-07-24:
Hm. When you say player server, what exactly do you mean?
Comment by sks on 2012-07-24:
Player server (also known as Player Plugin). Please see following link
http://psurobotics.org/wiki/index.php?title=Writing_a_Player_Plugin
Comment by allenh1 on 2012-07-24:
Oh. I think I see what you're doing now. You're writing a Player, ROS driver to publish ros messages into player, right?
Comment by sks on 2012-07-24:
What I am trying to write is this "Simulated Robot with Player driver" which will allow a player client to subscribe with Position2d. The client can send Position2d msgs to the simulated robot. Now, I want to put a ROS Publisher in this "simulated player robot". Hope I was able to explain my problem.
Answer:
There are several approaches to using a player driver in ROS:
Wrapping the player library in a ROS node (like erratic_player does).
Converting the driver to ROS manually.
The first probably gets the job done quicker, but is harder to maintain. With a little practice, the second is relatively straightforward. I've done quite a few. If you have follow-up questions, feel free to ask.
When converting a player driver to a ROS node, the class inherited from the
player Driver class is no longer needed and just adds unnecessary
complexity.
Make a simple C++ main program, something like this:
int main(int argc, char** argv)
{
ros::init(argc, argv, NODE);
ros::NodeHandle node;
// topics to read and write
ros::Subscriber brake_cmd = node.subscribe("cmd", 10, ProcessCommand,
ros::TransportHints().tcpNoDelay(true));
ros::Publisher brake_state = node.advertise<art_brake::State>("state", 10);
if (GetParameters() != 0) // from the player constructor
return 1;
ros::Rate cycle(20.0); // set driver cycle rate
if (Setup() != 0) // from the player Setup()
return 2;
// Main loop; grab messages off our queue and republish them via ROS
while(ros::ok())
{
... // activities from player Main()
cycle.sleep(); // sleep until next cycle
}
Shutdown();
return 0;
}
Originally posted by joq with karma: 25443 on 2012-07-24
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by sks on 2012-07-24:
Ok, what I am trying to do is exactly same as erratic_player ...however, at a smaller scale, as I have no sensors, just Position2d interface. An easier method will help but I am also looking at erratic_player.cpp to find solution.
Comment by sks on 2012-07-24:
The robot driver is in ROS but the client controller is written using player for an old robot. I am just taking a short-cut by not writing the client controller again using ROS. Instead I am writing this "erratic_Player" type driver to use position2dmsgs from player client and publish for ROS robot.
Comment by joq on 2012-07-24:
Why not just convert the client controller to ROS? Do you need to maintain both versions?
Comment by sks on 2012-07-24:
I might have to spend a lot of time to convert client controller to ROS. Initially I was trying the same, but its a large code written by someone else in my lab. I thought writing a player driver will be an easier task. Is there a way to email you my driver? Can you look at my code?
Comment by sks on 2012-07-24:
From your example, what i am getting is I can create an object of PlayerDriver class inside a small ros publisher subscriber program and call its methods in a sequence that Player will follow? Am I correct?
Comment by joq on 2012-08-03:
I think so, but that class would not be derived from PlayerDriver. | {
"domain": "robotics.stackexchange",
"id": 10339,
"tags": "ros, driver"
} |
Why does pasta become sticky after being cooked? | Question: This question was posed by my biochemistry professor at the end of a class and then never answered.
What I've gathered is that:
It has something to do with the denaturing of a protein, since we were on the subject of denaturing that lecture.
It most likely involves gluten, because that's the main protein in wheat. And, from personal experience of making glue, gluten is very sticky.
The answer might involve hydrophilic/hydrophobic interactions, because those interactions are disturbed during the denaturing process.
Any ideas?
Answer: After cooking, pasta has a lot of free surface starch ready to bond with things. This is why we rinse well after cooking to remove extra starch and cool down the pasta to prevent it from producing more surface starch. | {
"domain": "chemistry.stackexchange",
"id": 2329,
"tags": "biochemistry, food-chemistry"
} |
Is it a law of physics that all machines will break? | Question: The question sounds kinda dumb when I say it out loud but at the same time I'm very curious. When things break, is it solely due to an intrinsic design flaw or is it due to entropy? And is the machine that never breaks analogous to a perpetual motion machine?
To clarify my question take computer hard drives. If I'm not mistaken, they used to write to a spinning disk, but the moving parts would inevitably erode. Then they came out with ssd but heat would eventually corrode the transistors. So is it possible to make some design that lasts forever (with continued use)?
Answer: It is at least certainly a law of engineering ;-)
Never and infinity is a strange place; I would say, everything that is not repaired by a system does break down at infinity. Things like radiation, non-pure vacuums, non-pure materials, Heisenberg uncertainty, and other crazy physics we don't understand yet, degrade engineered systems with some amount of background entropy we can not control.
That said, we can get very close :-)
In mechanical engineering there is a concept called infinite life. It basically means that a material(usually metal) is not stressed beyond its purely elastic region. In practice if it can survive 1 million cycles it is considered "infinite". In an exaggerated example; if I machined a stainless steel door stop, it would probably continue to stop doors in excess of 1 million cycles. It is way over engineered for the task we require of it; so provided they still use doors in the future it would last for millions and millions of years.
One story I heard a long time ago talked about making a lathe with a wooden screw carved by hand. Using that wooden screw inside the lathe as a component you were able to turn a better screw out of wood. Then put that screw in the lathe and turn a better one out of metal and so on. Though I am not certain how well that would really work in practice; the principle is that we can achieve a higher degree of precision and engineering beyond the precision our tools were built with. An individual component can not beat an infinity of entropy, but a system can. | {
"domain": "physics.stackexchange",
"id": 27119,
"tags": "everyday-life, entropy, perpetual-motion, solid-mechanics"
} |
Prove that every component of angular momentum commutes with $f$ | Question:
Let $f( \hat {\vec r} , \hat {\vec p} )$ be any polynomial in the variables $r^2, p^2 $ and
$(\vec r \cdot \vec p)$. Prove that every component of angular momentum commutes with $\hat f$:
$[\hat L_k , f( \hat {\vec r} , \hat {\vec p} )] = 0 $.
So maybe it is enough to prove it for $\hat L_z$; I know how to prove commutation with $r^2$, $p^2$ and $(\vec r \cdot \vec p)$, but how do I prove commutation with any combination of those?
I tried proving this using $f = (\vec r + \vec p)^2$ but I don't know if that's enough to prove it for any polynomial.
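One way to finish this approach (a note added here, not part of the original post): the commutator obeys the Leibniz rule,
$$[\hat L_k , \hat A \hat B] = \hat A [\hat L_k , \hat B] + [\hat L_k , \hat A] \hat B,$$
so if $[\hat L_k , \hat A] = [\hat L_k , \hat B] = 0$ then $[\hat L_k , \hat A \hat B] = 0$. Together with linearity of the commutator, induction on the degree of the monomials extends the three basic commutators to every polynomial $f(r^2, p^2, \vec r \cdot \vec p)$.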
Answer: Commuting with $L$ is equivalent to being invariant under rotations. The quantities $r^2$, $r\cdot p$ and $p^2$ are all rotationally invariant, as is any function of them. That is all that is needed. | {
"domain": "physics.stackexchange",
"id": 90060,
"tags": "quantum-mechanics, homework-and-exercises, operators, angular-momentum, commutator"
} |
Neural Networks to Output Metrics other than Prediction | Question: I have a Deep Learning Network that predicts values. However, along with the prediction I also want it to output some further metrics. For example, for each image I want it to give me the total sum of all pixels. How does one achieve this?
Answer: You could create a custom callback function. Let's say you want to find the total sum of all pixels in your validation data. Here's what the callback function would look like:
import numpy as np
from keras.callbacks import Callback

class PixelSumCallback(Callback):
    def __init__(self, val_gen, batch_size):
        super().__init__()
        self.val_gen = val_gen
        self.batch_size = batch_size
        self.pixel_sum = {}
        """
        pixel_sum will look as such:
        {1: {"1.jpg": 5,
             "2.jpg": 3,
             ...},
         2: {...},
         ...}
        """
        self.epoch = 0

    def on_epoch_begin(self, epoch, logs={}):
        self.epoch += 1
        self.pixel_sum[self.epoch] = {}

    def on_epoch_end(self, epoch, logs={}):
        batches = len(self.val_gen)
        # store the per-image values derived from predictions in self.pixel_sum
        for batch in range(batches):
            xVal, yVal = next(self.val_gen)
            # derive the values by passing the batch through the model
            pixel_values = np.asarray(self.model.predict(xVal))
            # some way of retrieving image paths from val_gen is assumed here
            for pixel_value, image_path in zip(pixel_values, image_paths):
                self.pixel_sum[self.epoch][image_path] = pixel_value
| {
"domain": "datascience.stackexchange",
"id": 7465,
"tags": "machine-learning, neural-network, deep-learning, keras, metric"
} |
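A minimal, framework-free sketch of the same idea (shapes and values here are illustrative assumptions, not part of any Keras API): compute the extra per-image metric — the total pixel sum — for a mini-batch with plain NumPy, one number per image.

```python
import numpy as np

# Two hypothetical 4x4 grayscale "images" in one mini-batch.
batch = np.arange(2 * 4 * 4, dtype=np.float64).reshape(2, 4, 4)

# Sum over every axis except the batch axis: one metric value per image.
pixel_sums = batch.sum(axis=(1, 2))

# These sums can then be logged next to the model's predictions,
# e.g. keyed by image path as in the callback above.
print(pixel_sums)  # [120. 376.]
```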
Universal robots calibration offsets | Question:
When running the Universal robots ur5_driver and connecting to the robot I always see these messages:
[WARN] [WallTime: 1390394126.210844] No calibration offset for joint "shoulder_pan_joint"
[WARN] [WallTime: 1390394126.212435] No calibration offset for joint "shoulder_lift_joint"
[WARN] [WallTime: 1390394126.214354] No calibration offset for joint "elbow_joint"
[WARN] [WallTime: 1390394126.216533] No calibration offset for joint "wrist_1_joint"
[WARN] [WallTime: 1390394126.219127] No calibration offset for joint "wrist_2_joint"
[WARN] [WallTime: 1390394126.222308] No calibration offset for joint "wrist_3_joint"
[ERROR] [WallTime: 1390394126.223789] Loaded calibration offsets: {}
However, I cannot find anything in the documentation about this. Is this to compensate for any drift? Or, with the calibration loaded, do you not have to initialize the joints by moving them around? How can this be used?
Originally posted by davinci on ROS Answers with karma: 2573 on 2014-01-22
Post score: 1
Original comments
Comment by sedwards on 2014-01-22:
Can you please verify which version of the driver you are using? Debian (ROS version) or from source (which branch)?
Comment by davinci on 2014-01-22:
I think it is the Debian version (as I copied it from /opt/ros/fuerte) of this package: http://wiki.ros.org/ur5_driver
Comment by maxgitt on 2017-08-03:
Did this end up being an issue for you? I'm having the exact same [WARN] message.
Comment by expo on 2018-10-19:
When these warnings occurred, the Python file didn't run successfully to control the robot. As the answer said, the robot should run well in this situation. Is there something else wrong with my robot?
Answer:
This isn't a real answer, but I can tell you that we work without the joint offsets all the time. The calibration procedure of the UR is separate from any ROS piece. In my opinion, the calibration done by the UR is good enough and no further offsets are required. The offsets are really only needed if the mechanics of the UR do not properly locate the true "zero" position. Most industrial arms have well calibrated zero position since they often depend on accurate kinematics for cartesian control.
Originally posted by sedwards with karma: 1601 on 2014-01-23
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by expo on 2018-10-19:
As you said, the robot should run well in this situation, but when these warnings occurred, the Python file didn't run successfully to control my robot. So sad. | {
"domain": "robotics.stackexchange",
"id": 16725,
"tags": "ros, calibration, ros-industrial, universal-robot, ur5"
} |
How to calculate the specific heat of ethanol? | Question: I am currently stuck with this question. Can somebody please help me with this?
100 g of water was heated from $\pu{15 °C}$ to $\pu{70 °C}$ by burning $\pu{20 g}$ of ethanol. How much energy per mole does this ethanol have?
So far I have calculated that there are $\pu{0.43 mol}$ in $\pu{20 g}$ of ethanol using the formula $n = m/M$.
\begin{align}
m &= \pu{20 g}\\
M &= 12 + 3 + 12 + 2 + 16 + 1 = \pu{46 g/mol}\\
n &= 20/46 = \pu{0.43 mol}\\
\end{align}
From here, I'm not sure if I should divide the energy used to heat up the water by the amount of substance. By using the specific heat formula,
$$Q = c \cdot m \cdot \Delta{}T
= \pu{4.2 J//g \cdot K} \cdot \pu{100 g} \cdot \pu{55 K}
= \pu{23100 J}$$
$$\text{Energy per mole} = \frac{\pu{23100 J}}{\pu{0.43 mol}} = \pu{53720.93 J//mol}.$$
Is this the right answer?
Answer: Each gram of water has a specific heat of about 4 J/(K.g)*. So heating $\pu{100 g}$ of water from $\pu{15 °C}$ to $\pu{70 °C}$ would take:
Heat = (70°C - 15°C) * 100g * 4 J/(K.g) = 22,000 joules = 22 kJ
$\pu{20 g}$ of ethanol would be:
Ethanol mol = (20g / 46.07 g/mol) = 0.43 mol
So this means that:
Energy per mol = 22,000 J / 0.4341 mol
Energy per mol ≈ 50,677 J ≈ 50.67 kJ
So this gives 50.67 kJ/mol vs the 53.72 kJ/mol you found. I would say you have done your homework fine; there is very little difference (I have used some decimals in the calculation, maybe you did not; below you can find a more accurate answer).
* Water's specific heat: water's specific heat changes with its temperature. For example (Wikipedia):
Water at 100 °C (steam) 2.080 J/(K.g)
Water at 25 °C 4.1813 J/(K.g)
Water at 100 °C 4.1813 J/(K.g)
Water at −10 °C (ice) 2.05 J/(K.g)
Normally the value used is: 4.1813 J/(K.g)
A better table can be found on a French page.
Using this table we can make a better approximation:
Water heat = (enthalpy at 70°C - enthalpy at 15°C) * 100g
Water heat = (293.03 J/g - 63.04 J/g) * 100g
Water heat = 229.99 J/g * 100g
Water heat = 22,999 J = 22.999 kJ ≈ 23.0 kJ
Ethanol mol = (20g / 46.07 g/mol) = 0.43 mol
Exactly: 0.4341219882787063164749294551769 mol
Energy per mol = 22,999 J / 0.4341 mol
Energy per mol ≈ 52,978.19 J = 52.97819 kJ ≈ 52.97 kJ
Exactly: 52978.196500000000000000000000001 J
Answer: Ethanol has 52.97 kJ/mol. A value closer to your own value.
PS: I am only a normal person who likes chemistry, so maybe I am wrong. For example, I don't really know the theory behind $Q = c \cdot m \cdot \Delta{}T = \pu{4.2 \frac{J}{g \cdot K}} \cdot \pu{100 g} \cdot \pu{55 K} = \pu{23100 J}$. Well, I have some ideas, but I haven't learned it in school.
Edit:
I notice that you used your own molar mass of ethanol (46 g/mol instead of my 46.07 g/mol) and your own specific heat of water (4.2 J/(K.g)). So you would do:
Water heat = (70°C - 15°C) * 100g * 4.2 J/(K.g)
Water heat = 55 K * 100g * 4.2 J/(K.g)
Water heat = 23100 J = 23.1 kJ
Ethanol mol = (20g / 46 g/mol) = 0.43 mol
Exactly: 0.43478260869565217391304347826087 mol
Energy per mol = 23,100 J / n
Using all the decimals (n = 0.43478260869565217391304347826087 mol):
Energy per mol = 53,130 J = 53.13 kJ
Using only 2 decimals (n = 0.43 mol):
Energy per mol = 53,720.93 J ≈ 53.72 kJ
Exactly: 53720.930232558139534883720930233 J
Answer: Ethanol gives 53.13 kJ/mol (with all decimals) or 53.72 kJ/mol (with 2 decimals). So yes, you have done it right! | {
"domain": "chemistry.stackexchange",
"id": 8403,
"tags": "thermodynamics, heat"
} |
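To double-check the arithmetic above, here is a quick sketch in Python using the question's own numbers (c = 4.2 J/(g·K), M = 46 g/mol — the exercise's rounded values, not precise physical constants):

```python
c = 4.2            # specific heat of water, J/(g*K), as used in the question
m_water = 100.0    # grams of water
dT = 70.0 - 15.0   # temperature rise, K
Q = c * m_water * dT          # heat absorbed by the water, in J

M_ethanol = 46.0   # g/mol, the question's rounded molar mass
n = 20.0 / M_ethanol          # mol of ethanol burned

energy_per_mol = Q / n        # J/mol
print(Q, round(energy_per_mol))  # 23100.0 53130
```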
Why is the Mixed Faraday Tensor a matrix in the algebra so(1,3)? | Question: The mixed Faraday tensor $F^\mu{}_\nu$ explicitly in natural units is:
$$(F^\mu{}_\nu)=\left(\begin{array}{cccc}0&E_x&E_y&E_z\\E_x&0&B_z&-B_y\\E_y&-B_z&0&B_x\\E_z&B_y&-B_x&0\end{array}\right)$$
and thus has the same form as a general element of the Lorentz algebra $\mathfrak{so}(1,\,3)$. Is there a physical interpretation of this or is it co-incidence?
Answer: Recall that the Faraday tensor in this form is a linear mapping that maps a charged particle's contravariant four-velocity to the latter's rate of change, wrt proper time (modulo scaling by invariant rest mass $m$ and invariant charge $q$):
$$m\,\frac{\mathrm{d} v^\mu}{\mathrm{d}\tau} = q\, F^\mu{}_\nu\,v^\nu\tag{1}$$
Now let's think of a particle's four-velocity evolving on the particle's world line. If the time evolution is linear then $v(\tau) = G(\tau)\,v(0)$ for some proper-time-varying transformation matrix $G(\tau)\in GL(4,\,\mathbb{R})$.
But now recall that a four-velocity's pseudo-norm is always constant: $\langle v,\,v\rangle = v_\nu\,v^\nu = c^2$. Therefore, $G(\tau)$ must conserve this norm; in other words, $G(\tau) \in SO(1,\,3)$, the group of matrices that do so (it can't lie in the components of $O(1,\,3)$ outside $SO(1,\,3)$ if $G(\tau)$ varies continuously, since $G(0)=\mathrm{id}$; but this is an aside). Therefore, we must have:
$$\frac{\mathrm{d} v}{\mathrm{d}\tau} = \dot{G}(\tau) v(0) = \dot{G}(\tau) \,G(\tau)^{-1} v(\tau) = F(\tau)\, v(\tau)\tag{2}$$
where $F(\tau) = \dot{G}(\tau) \,G(\tau)^{-1}$ belongs to the Lie algebra of $SO(1,\,3)$. (2) then is the form that (1) must take, for any four-force that depends homogeneously and linearly on the four-velocity alone. So the structure of the mixed Faraday tensor is precisely defined by the fact that four velocities have constant norm and the notion is general to any situation of the kind just described, not only the Lorentz force law.
If we now lower the index $\mu$ in (1) to put the covariant velocity on its left-hand side, the Faraday tensor becomes skew-symmetric, so that it can be a two-form and thus fulfil $\mathrm{d}F=0$, recovering Faraday's picture that tubes of $F$ never end. | {
"domain": "physics.stackexchange",
"id": 24581,
"tags": "electromagnetism, special-relativity, group-theory, lie-algebra, tensor-calculus"
} |
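The membership claim can also be checked numerically. A small sketch (the field values are arbitrary made-up numbers): a matrix $X$ lies in $\mathfrak{so}(1,3)$ exactly when $X^T\eta + \eta X = 0$, and the mixed Faraday tensor from the question satisfies this identity.

```python
import numpy as np

# Arbitrary illustrative field components.
Ex, Ey, Ez = 1.0, 2.0, 3.0
Bx, By, Bz = 4.0, 5.0, 6.0

# Mixed Faraday tensor F^mu_nu as written in the question.
F = np.array([[0.0,  Ex,  Ey,  Ez],
              [Ex,  0.0,  Bz, -By],
              [Ey, -Bz, 0.0,  Bx],
              [Ez,  By, -Bx, 0.0]])

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

# Defining condition of the Lie algebra so(1,3): generators of the
# transformations that preserve the pseudo-norm v^T eta v.
assert np.allclose(F.T @ eta + eta @ F, 0.0)
print("F is in so(1,3)")
```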
Count check before Linq Single() call | Question: Should I check Count before Linq Single() call if I expect only one element?
if (objects.Count != 1)
{
    throw new InvalidOperationException(
        "%Collection% should contain one %element%, but now it's: "
        + objects.Count);
}

return objects.Single();
Single() implements the same check and throws, but its message is less specific.
P.S. In my case a wrong count is handled by just writing a log message like
_log.Error("Error during %operation%", ex);
task.Complete(TaskResult.Fail);
Answer: Yep, go for it. Adding more detail to error messages is helpful.
Also, depending on which style you prefer, you could avoid explicitly checking the count by catching the exception thrown by Single():
try
{
    return objects.Single();
}
catch (InvalidOperationException ex)
{
    throw new InvalidOperationException(
        "%Collection% should contain one %element%, but now it's: " + objects.Count,
        ex);
} | {
"domain": "codereview.stackexchange",
"id": 20623,
"tags": "c#, linq, exception"
} |
In Batch Normalisation, are $\hat{\mu}$, $\hat{\sigma}$ the mean and stdev of the original mini-batch or of the input into the current layer? | Question: In Batch Normalisation, are the sample mean and standard deviation we normalise by computed from the original data fed into the network, or from the inputs to the layer we are currently applying BN to?
For instance, suppose I have a mini-batch size of 2 which contains $\textbf{x}_1, \textbf{x}_2$. Suppose now we are at the $k$th layer and the outputs from the previous layer are $\tilde{\textbf{x}}_1,\tilde{\textbf{x}}_2$. When we perform batch norm at this layer would be subtract the sample mean of $\textbf{x}_1, \textbf{x}_2$ or of $\tilde{\textbf{x}}_1,\tilde{\textbf{x}}_2$?
My intuition tells me that it must be the mean and sd of $\tilde{\textbf{x}}_1,\tilde{\textbf{x}}_2$, otherwise I don't think the result would be normalised to have mean 0 and sd 1.
Answer: Your intuition is correct. We will be normalizing the inputs of the layer under consideration (just right before applying the activation function).
So, if this layer receives an input $\mathrm{x}=\left(x^{(1)} \ldots x^{(d)}\right)$, the formula for normalizing the $k^{th}$ dimension of $\mathrm{x}$ would look as follows:
$$\widehat{x}^{(k)}=\frac{x^{(k)}-\mathrm{E}\left[x^{(k)}\right]}{\sqrt{\operatorname{Var}\left[x^{(k)}\right]}}$$
Note that in practice a constant $\epsilon$ is also added under the square root in the denominator to ensure stability.
Source: The original Batch Normalization paper (Section 3).
Andrew Ng's video on this topic might also be useful for illustration. | {
"domain": "ai.stackexchange",
"id": 2059,
"tags": "neural-networks, deep-learning, batch-normalization"
} |
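A minimal NumPy sketch of the formula above (the values standing in for $\tilde{\textbf{x}}_1, \tilde{\textbf{x}}_2$ are made up): normalising the previous layer's outputs per dimension over the mini-batch yields zero mean and (nearly) unit variance in each dimension.

```python
import numpy as np

# Previous layer's outputs for the two mini-batch examples; shape (batch, features).
x_tilde = np.array([[1.0, 10.0],
                    [3.0, 30.0]])

eps = 1e-5                      # stability constant added under the square root
mu = x_tilde.mean(axis=0)       # per-dimension mean over the mini-batch
var = x_tilde.var(axis=0)       # per-dimension (biased) variance over the mini-batch
x_hat = (x_tilde - mu) / np.sqrt(var + eps)

print(x_hat.mean(axis=0))  # ~[0. 0.]
print(x_hat.std(axis=0))   # ~[1. 1.]
```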