anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Mass-Energy relation | Question: Einstein's mass-energy relation states $E=mc^2$. It means that if the energy of a particle increases then its mass also increases, and vice versa.
My question is: what is the actual meaning of the statement "mass increases"? Is the mass of the particle really increasing, or what?
Answer: The rest mass of an object is, by definition, independent of the energy. But all other forms of mass are indeed increasing with the energy, as $E=mc^2$. With the relativistic interpretation of the kinetic energy, the total mass is
$$ m = \frac{m_0}{\sqrt{1-v^2/c^2}}$$
Here, $m_0$ is the mass measured at rest, i.e. the rest mass. The corrected, total mass goes to infinity if $v\to c$ and it holds for the following interpretations of the mass:
the inertial mass, i.e. the resistance to acceleration, increases. For example, the protons at the LHC have a mass about 4,000 times larger than their rest mass (the energy is 4 TeV), and that's the reason why it's so hard to accelerate them beyond their speed of 99.9999% of the speed of light and e.g. surpass the speed of light. It's impossible to surpass it because the object becomes increasingly heavy, as measured by the inertial mass.
the conserved mass. If you believe that the total mass of all things is conserved, it's true but only if you interpret the "total mass" as the "total energy over $c^2$". In this conserved quantity, the fast objects near the speed of light indeed contribute much more than their rest mass. If you considered the total rest mass of objects, it wouldn't be conserved
the gravitational mass that enters Newton's force $Gm_1m_2/r^2$. If an object is moving back and forth at a speed close to the speed of light, it produces a stronger gravitational field than the same object at rest. For example, if you fill a box with mirrors with lots of photons that carry some huge energy and therefore "total mass" $m=E/c^2$, they will increase the gravitational field of the box even though their rest mass is zero. Be careful: in general relativity, the pressure from the photons (or anything else) creates a gravitational field too, as an independent component that curves spacetime in a different way.
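As a sanity check on the LHC number in the first bullet, here is a minimal sketch computing the Lorentz factor $\gamma = E/(m_0c^2)$; the proton rest energy of 0.938 GeV is an assumed input, not stated in the answer:

```java
public class LorentzFactor {
    public static void main(String[] args) {
        double restEnergyGeV = 0.938;  // proton rest energy (assumed value)
        double beamEnergyGeV = 4000.0; // 4 TeV, as quoted above
        double gamma = beamEnergyGeV / restEnergyGeV;         // E = gamma * m0 c^2
        double beta = Math.sqrt(1.0 - 1.0 / (gamma * gamma)); // from gamma = 1/sqrt(1 - beta^2)
        System.out.printf("gamma = %.0f, v/c = %.8f%n", gamma, beta);
    }
}
```

The result, $\gamma \approx 4260$, is the "about 4,000 times" factor, and $v/c$ indeed comes out just below 1.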
Despite this Yes Yes Yes answer to the question whether the total mass indeed increases, Crazy Buddy is totally right that especially particle physicists tend to reserve the term "mass" for the "rest mass" and they always prefer the word "energy" for the "total mass" times $c^2$. | {
"domain": "physics.stackexchange",
"id": 67428,
"tags": "special-relativity, energy, mass, mass-energy"
} |
Conjugate secondary antibody | Question: Why is the secondary antibody conjugated to the enzyme in ELISA, instead of the primary antibody? Wouldn't it be easier to conjugate the enzyme to the primary antibody?
Answer: Making an antibody-enzyme conjugate isn't trivial. By using a primary/secondary set-up you can use the same well-characterised conjugate in combination with many different primary antibodies (as long as these primaries are all raised in the same species). There is also the possibility of some amplification: for example, if the secondary is an anti-Fab then two secondary Igs will bind to each primary.
Response to OP comment
Most primary antibodies in common use are derived from rabbit or mouse, and most are IgG. So, for example, the secondary antibodies goat anti-rabbit IgG and goat anti-mouse IgG will cover most experiments. | {
"domain": "biology.stackexchange",
"id": 4365,
"tags": "immunology"
} |
Stuck while deriving the Lindblad Master Equation | Question: I was following Quantum Markov Processes from the book The Theory of Open Quantum Systems by Breuer and Petruccione. In the section The Markovian Quantum Master Equation the authors proceed to 'construct the most general form for the generator $\mathcal{L}$ of a quantum dynamical semigroup.' Then I observed that almost the same approach is taken in the book Quantum Dynamical Semigroups and Applications by Alicki and Lendi, so I moved there.
We have the definition of $\mathcal{L}$: $$\frac{d}{dt}\rho_t=\mathcal{L}\rho_t$$ with $\rho_t=\Lambda_t\rho$. And this gives $\Lambda_t=e^{\mathcal{L}t}, t\ge0.$
For unbounded $\mathcal{L}$ they have used the following definition of exponential - $$e^{\mathcal{L}t}\rho=\lim\limits_{n \to \infty}\left(1-\frac{t}{n}\mathcal{L}\right)^{-n}\rho.$$ After that,
We find now a general form of $\mathcal{L}$ in the case of finite-dimensional
Hilbert space $\mathcal{H_S}$ (dim $\mathcal{H_S} = N$). Introducing a linear basis {$F_k$}, $k =$
$0,1,\ldots,N^2-1$ in $\mathcal{B(H_S)}$ such that $F_0 = \mathbb{1}$ we may write a
time-dependent version of $\Lambda\rho=\sum_{\alpha} W_{\alpha}\rho W_{\alpha}^*$ as follows, $$\Lambda_t\rho=\sum_{k,l=0}^{N^2-1}C_{kl}(t)F_k\rho F_l^*,$$ where $C_{kl}(t)$ is a positive-definite matrix.
The part after this is what I'm totally in the dark with. Let me paste the photo as it will be too much effort to type:
How are we getting this expression for $\mathcal{L}\rho$ with all these limits, $\epsilon$'s and all that? What I got is the following: $$\mathcal{L}\rho=\frac{d}{dt}\rho_t=\frac{d}{dt}{[\Lambda_t\rho]}=\frac{d}{dt}\left(\sum_{k,l=0}^{N^2-1}C_{kl}(t)F_k\rho F_l^*\right)$$ but can proceed no further.
Answer: You have to be careful with the notation, $\rho = \rho_0$ is the initial state here. The left hand side of your last equation should be $\mathcal L \rho_t$.
We thus have to evaluate the equation at $t=0$, obtaining
$$ \mathcal L \rho = \frac{d}{dt} \Bigl( \sum_{kl} C_{kl}(t) F_k \rho F_l^\dagger \Bigr)_{t=0} . $$
The result now follows from
$$ \left. \frac{d}{dt} f(t) \right|_{t=0} = \lim_{\epsilon \to 0} \frac{1}{\epsilon} \bigl( f(\epsilon) - f(0) \bigr) , $$
that is, $\mathcal L \rho = \lim_{\epsilon \to 0} \frac{1}{\epsilon} \bigl( \Lambda_\epsilon \rho - \rho \bigr)$.
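Spelling the limit out with the expansion of $\Lambda_t$ from above (a small added step; note that $\Lambda_0 = \mathrm{id}$ forces $C_{kl}(0) = \delta_{k0}\delta_{l0}$ because $F_0 = \mathbb{1}$):

$$ \mathcal L \rho = \lim_{\epsilon \to 0} \frac{1}{\epsilon} \Bigl( \sum_{k,l=0}^{N^2-1} C_{kl}(\epsilon) F_k \rho F_l^\dagger - \rho \Bigr) . $$

This is the kind of expression with the limits and $\epsilon$'s that the question asks about.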
There is a typo in Alicki / Lendi, the sums in the final result are all supposed to start at $1$ instead of zero. The same calculation can be found for example in the proof of Thm. 5.1 in the Rivas / Huelga book, I think it is a bit easier to understand there. | {
"domain": "physics.stackexchange",
"id": 97718,
"tags": "quantum-mechanics, quantum-information, density-operator, open-quantum-systems"
} |
What is the implication of NP-completeness if P=NP? | Question: If a certain problem $X$ is NP-complete and $P\neq NP$, then $X$ is not polynomial. But we still don't know that $P\neq NP$, so in theory $X$ may be polynomial. Does the fact that $X$ is NP-complete say anything about it being more difficult than other polynomial problems? In particular:
Does it mean that $X$ cannot be solved in linear time? (alternatively, are there NP-complete problems that can be solved in linear time if $P=NP$)?
Does it mean that $X$ cannot be solved in quadratic time? (alternatively, are there NP-complete problems that can be solved in quadratic time if $P=NP$)?
If $X$ is NP-complete, and I prove a lower bound of $\Omega(n^2)$ on the run time of $X$ (a lower bound that does not depend on whether $P=NP$), did I prove anything new?
Answer: Yes, but without further information, only that $X$ has a lower bound of $\Omega(n^{2})$. (I'm assuming that, in line with the title, you're interested in what happens if it turns out that $P=NP$) The problem with transferring this information elsewhere is with the reductions used. As the only constraint is that they're computable in polynomial time, we can easily have the case that we simply solve the problem as part of the reduction (again assuming $P=NP$), so we can reduce it to any other problem in $P$ (this is why $P$-completeness is defined using more restrictive reductions - $NC$ reductions or logspace reductions, or similar, depending on exactly what version you're going for).
With some more specific information we can say something. If we have a problem $W$ where $X \leq_{P} W$ and we can compute the reduction in $O(n^{c})$ with $c < 2$, then we know that $W$ has a lower bound of $\Omega(n^{2/c})$, essentially because an upper bound on a problem tells us something. Say $A$ has an upper bound of $O(n^{c_1})$ for some $c_1$, and we have a problem $B$ such that $B \leq_{P} A$ where we can compute the reduction in $O(n^{c_{2}})$; then $B$ has, at worst, an $O(n^{c_{1}c_{2}})$ algorithm: at worst we can convert the instance of $B$ to an instance of $A$ and solve that instead. So in our case, if $W$ had an upper bound of $O(n^{2/c - \varepsilon})$ for any $\varepsilon > 0$, then $X$ could be solved in $O(n^{c\cdot (2/c - \varepsilon)}) = O(n^{2-c\varepsilon})$, contradicting the lower bound.
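In symbols, the composition step used in this argument (all exponents hypothetical):

$$ B \leq_{P} A \text{ with reduction time } O(n^{c_2}), \quad A \text{ solvable in } O(n^{c_1}) \;\Longrightarrow\; B \text{ solvable in } O\!\bigl(n^{c_1 c_2}\bigr), $$

since a size-$n$ instance of $B$ becomes an instance of $A$ of size at most $O(n^{c_2})$, which is then solved in $O\bigl((n^{c_2})^{c_1}\bigr)$ time.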
Note of course that none of this really has anything to do with $X$ being $NP$-complete, it's true for any problem in $P$. | {
"domain": "cs.stackexchange",
"id": 4071,
"tags": "complexity-theory, time-complexity, np-complete"
} |
Physics of weird "boing" sound in racquetball courts? | Question: While playing racquetball, I frequently hear a very prominent "boing" sound (or more formally, a chirp). For example, you can hear it in this video when the ball hits the front wall.
Does anyone know what the origin of this sound is, and why the pitch rises?
Here is the spectrogram from the above video:
A careful examination shows that there are at least four linear chirps, which I've highlighted below. If you really listen carefully, all four of these are audible. (However I can only distinguish between the two high frequency chirps when the audio is played at half speed.)
Answer: After much investigation, simulation and a deep literature search, I've figured out the true answer.
You perceive a chirp because you are being hit with the echoes of the sharp noise that generated the sound. The time between the arrivals of those echoes decreases inversely with time, so it sounds as if it were a tone with a fundamental frequency increasing linearly in time, hence the chirp.
To get a feel for the phenomenon, consider a simulation:
Above you see a slowed down version of the simulated pressure wave inside a 2D racquetball court. I threw up the generated sound on soundcloud.
If you watch the simulation, pick a particular point, and watch the reflected sounds go by, you'll notice the different instances of the multiple echoes arrive faster and faster as time goes on.
You can clearly hear the chirps in the generated sound, and if you listen closely you can hear secondary chirps as well. These are also visible in the spectrogram:
This phenomenon was studied and published recently by Kenji Kiyohara, Ken'ichi Furuya, and Yutaka Kaneda: "Sweeping echoes perceived in a regularly shaped reverberation room," J. Acoust. Soc. Am. Vol. 111, No. 2, 925-930 (2002).
In particular, they explain not only the main sweep but also the appearance of the secondary sweeps, using some number theory. Worth reading in full. This suggests that for the best sweep one should both stand and listen in the center of the room, though the sweeps should be generic at any location.
Simple geometric argument
Following the paper, we can give a simple geometric argument. If you imagine standing in the middle of a standard racquetball court, which is twice as long as it is tall or wide, and clap, your clap will start propagating and reflecting off the walls. A simple way to study the arrival times is with the method of images, so you imagine other claps generated by reflecting your clap across the walls, and then reflections of those claps and so on. This will generate a whole set of "image" claps, located at positions
$$ ( m, l, 2k) L $$
where $m,l,k$ are integers and $L$ is 20 feet for a racquetball court. The time for any particular clap to reach you is $t = d/c$, and so we have
$$ t = \sqrt{m^2 + l^2 + 4k^2} \frac{L}{c} $$
for our arrival times. If we look at how these distribute in time:
It becomes clear why we perceive a chirp. The various sets of missing bars, which themselves are spaced like a chirp, give rise to our perceived subchirps.
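The thickening of the arrival times can be checked numerically by enumerating the image sources from the formula above; in this sketch the court dimension, speed of sound, and lattice truncation are assumed values:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class EchoArrivals {
    public static void main(String[] args) {
        double L = 6.096;  // 20 ft in metres (assumed)
        double c = 343.0;  // speed of sound in m/s (assumed)
        int M = 8;         // truncation of the image lattice
        List<Double> t = new ArrayList<>();
        for (int m = -M; m <= M; m++)
            for (int l = -M; l <= M; l++)
                for (int k = -M; k <= M; k++)
                    t.add(Math.sqrt(m * m + l * l + 4.0 * k * k) * L / c);
        Collections.sort(t);
        // Echoes per 20 ms window: the count grows with time, so the gaps
        // between successive arrivals shrink and the pitch sweeps upward.
        for (double t0 = 0.0; t0 < 0.08; t0 += 0.02) {
            final double lo = t0, hi = t0 + 0.02;
            long n = t.stream().filter(x -> x >= lo && x < hi).count();
            System.out.printf("%.0f-%.0f ms: %d echoes%n", lo * 1000, hi * 1000, n);
        }
    }
}
```

(The counts are only reliable well inside the truncated lattice; the qualitative growth is the point.)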
Details of the 2D Simulation
For the simulation, I numerically solved the wave equation:
$$ \frac{\partial^2 p}{\partial t^2} = c^2 \nabla^2 p $$
and used impedance boundary conditions on the walls
$$ \nabla p \cdot \hat n = -c \eta \frac{\partial p}{\partial t} $$
I used a collocation method spatially, with a Chebyshev basis of order 64 along the short axis and 128 along the long axis, and used RK4 for the time integration.
I modeled the room as 20 feet by 40 feet and started it off with a Gaussian pressure pulse in one corner of the room. I listened near the back wall towards the top corner.
I put up an IPython notebook of my code, with the embedded audio and video. I recommend playing with it yourself. On my desktop it takes about a minute to do a full simulation of the sound.
Effect of listening location
I've updated the code to generate the sound at multiple listening locations. I can't seem to embed audio on Stack Exchange, but if you click through to the IPython notebook view, you can listen to all of the generated sounds. What I can do here is show the spectrograms:
These are laid out in roughly their locations inside of the room. Here the noise was generated in the lower left, but the chirps should be generic for any listening and generation location. | {
"domain": "physics.stackexchange",
"id": 15895,
"tags": "acoustics, everyday-life"
} |
Existence of infinite unique FSAs | Question: It is reasonably simple to show that there are an infinite number of different finite state automata that can be constructed, but has it been proven that there is an infinite number of unique finite state automata, meaning no two automata that recognize the same language?
I asked my professor and he didn't know, and I haven't been able to find anything about it online.
Answer: Sure. You can easily find an automaton that recognizes the language $\{a\}$, and one that recognizes the language $\{aa\}$, and one that recognizes the language $\{aaa\}$. You can probably see where I'm going from here... | {
"domain": "cs.stackexchange",
"id": 18951,
"tags": "finite-automata, theory"
} |
How does aluminium react with bases to form aluminates? | Question: An example of reaction:
$$\ce{2Al + 2KOH + 6H2O->2 K[Al(OH)4] + 3H2 ^}$$
Aluminium is not ionic, how then does it attract the $\ce{OH-}$ groups to bond with them into the complex ion $\ce{[Al(OH)4]-}$?
I started re-reading non-organic chemistry at a slightly deeper level recently after going through the basics of organic chemistry. This reaction stumped me. I've re-read some stuff about aluminium and complex ions, but can't seem to understand it yet.
There's no positive charge on $\ce{Al}$, yet it reacts with hydroxide groups floating around. Where will the electron from a non-ionic aluminium atom go?
I cannot 'imagine' the reaction, visualize its mechanism, and this makes it hard to remember it.
P.S. Aluminium hydroxide seems to be mysterious in its structure. I quote Chemguide:
The chemistry textbooks that I have to hand aren't too clear about the structure of aluminium hydroxide as far as the degree of covalent character is concerned, and a web search (until I got totally bored with it!) didn't throw up any reliable chemistry sites which discussed it. Several geology sites describe the mineral gibbsite (a naturally occurring form of aluminium hydroxide) in terms of ions, but whether it actually contains ions or whether this is just a simplification as a convenient way of talking about and drawing a complicated structure, I don't know.
Answer:
There's no positive charge on Al, yet it reacts with hydroxide groups floating around.
Your observation is correct. In the beginning, aluminium in its elemental state has the oxidation state 0. Apparently, that changes in the course of the reaction:
$$\ce{Al <=> Al^3+ + 3e-}$$
Where will the electron from a non-ionic aluminium atom go?
Again, your observation is correct. If one species is oxidized, another one must get reduced. In this case, protons from the water are reduced to form hydrogen gas:
$$\ce{ 2H2O + 2e- <=> 2OH- + H2 ^}$$
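Written out, scaling the half-reactions to match the six electrons and adding the complexation of $\ce{Al^3+}$ by hydroxide (this last step is stated here explicitly for clarity; it is one way to account for how the aluminate forms):

$$\ce{2Al -> 2Al^3+ + 6e-}$$
$$\ce{6H2O + 6e- -> 6OH- + 3H2 ^}$$
$$\ce{2Al^3+ + 6OH- + 2KOH -> 2K[Al(OH)4]}$$

Adding the three equations reproduces $\ce{2Al + 2KOH + 6H2O -> 2K[Al(OH)4] + 3H2 ^}$.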
If you combine both half reactions and balance the stoichiometry, you end up with the initial equation for the whole redox reaction. | {
"domain": "chemistry.stackexchange",
"id": 5542,
"tags": "redox, ions"
} |
Find The Parity Outlier | Question: I am trying to solve this coding exercise:
You are given an array (which will have a length of at least 3, but could be very large) containing integers. The array is either entirely comprised of odd integers or entirely comprised of even integers except for a single integer N. Write a method that takes the array as an argument and returns N.
For example:
[2, 4, 0, 100, 4, 11, 2602, 36]
Should return: 11
[160, 3, 1719, 19, 11, 13, -21]
Should return: 160
Here is my code:
import java.util.ArrayList;

public class FindOutLier {

    public static int search(ArrayList<Integer> lists, int num) {
        int count = 0;
        int index;
        Integer a[] = new Integer[lists.size()];
        a = lists.toArray(a);
        for (index = 0; index < a.length; index++) {
            if (num == a[index]) {
                count++;
            }
        }
        return count;
    }

    public static boolean checkEvenOrOdd(int[] integers) {
        int t = 0;
        double result = 0.0;
        ArrayList<Integer> l = new ArrayList<Integer>(integers.length);
        while (t < integers.length) {
            result = integers[t] % 2;
            if (result == 0.0) {
                l.add(1);
                t++;
            } else {
                l.add(0);
                t++;
            }
        }
        int counter = search(l, 1);
        if (counter == 1)
            return true;
        else
            return false;
    }

    public static int find(int[] integers) {
        int t = 0;
        double newresult = 0.0;
        boolean result = checkEvenOrOdd(integers);
        if (result == false) {
            while (t < integers.length) {
                newresult = integers[t] % 2;
                if (newresult != 0.0) {
                    break;
                } else {
                    t++;
                }
            }
        } else {
            while (t < integers.length) {
                newresult = integers[t] % 2;
                if (newresult == 0.0) {
                    break;
                } else {
                    t++;
                }
            }
        }
        return integers[t--];
    }
}
It passed Junit Test in Eclipse. Please review it/ suggest any improvements.
Thanks.
Answer: Your approach is far too complicated.
In checkEvenOrOdd() you create an ArrayList<Integer> containing zeros or ones.
In search() you create an Integer a[] from that list.
Then you iterate over the array and count the number of zeros or ones.
You can iterate over int[] integers directly to find the number of odd (or even) entries:
int oddCount = 0;
for (int n : integers) {
    if (n % 2 != 0) {
        oddCount += 1;
    }
}
Note how a "enhanced for statement" can be used to iterate over
the array elements, instead of a for-loop or a while-statement for
the array indices, as in your code.
There is also no need to use a double result or floating point literals
when doing an integer remainder calculation.
if (counter == 1)
    return true;
else
    return false;
can be simplified to
return counter == 1;
But in this case the entire logic can be put into a single function,
e.g. like this:
public static int find(int[] integers) {
    int oddCount = 0;
    for (int n : integers) {
        if (n % 2 != 0) {
            oddCount += 1;
        }
    }
    if (oddCount == 1) {
        for (int n : integers) {
            if (n % 2 != 0) {
                return n;
            }
        }
    } else {
        for (int n : integers) {
            if (n % 2 == 0) {
                return n;
            }
        }
    }
    return 0;
}
Then one could try to avoid the code repetition for the even/odd
case.
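One way to avoid that even/odd repetition is a small helper parameterized by the parity being searched for. This is only a sketch (the class and method names are mine), using Math.floorMod so that negative values such as -21 are classified correctly:

```java
public class ParityOutlier {
    // Returns the first element with the given parity (0 = even, 1 = odd).
    static int firstWithParity(int[] a, int parity) {
        for (int n : a) {
            if (Math.floorMod(n, 2) == parity) {
                return n;
            }
        }
        throw new IllegalArgumentException("no element with parity " + parity);
    }

    public static int find(int[] integers) {
        int oddCount = 0;
        for (int n : integers) {
            if (Math.floorMod(n, 2) == 1) {
                oddCount++;
            }
        }
        // Exactly one odd value means the outlier is odd; otherwise it is even.
        return firstWithParity(integers, oddCount == 1 ? 1 : 0);
    }

    public static void main(String[] args) {
        System.out.println(find(new int[]{2, 4, 0, 100, 4, 11, 2602, 36})); // 11
        System.out.println(find(new int[]{160, 3, 1719, 19, 11, 13, -21})); // 160
    }
}
```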
But actually I would suggest a completely different, more efficient approach:
Check whether the first two array elements are even or odd.
If they are both even, the outlier can be found by searching for the first odd element, starting at index 2.
If they are both odd, the outlier must be even, and can be found similarly.
If the first two array elements have different parity, one of them is the outlier. Which one can be determined by checking the third array element. | {
"domain": "codereview.stackexchange",
"id": 29835,
"tags": "java, programming-challenge"
} |
Comparing acidic strengths between benzylammonium ion and phenol | Question: Question
Compare the acidic strengths of the benzylammonium ion and phenol.
I first tried to remove the H+ ion from both of them and tried to compare the relative stabilities of resulting conjugate bases, but I am not sure, which one would be more stable.
The first one would be a neutral compound and the second one would have resonating structures on which the negative charge gets mostly delocalised to carbon.
I have seen somewhere on the internet that benzylammonium ion has a $\mathrm{pK_a}$ value of 9.33 while phenol has a $\mathrm{pK_a}$ value of 10. If this data is correct, then how do we explain the acidity order?
Answer: The acid strength of each compound can be explained, but the acidity order is much more difficult to compare, because the two compounds are only remotely connected. It is misleading to conclude that the mere presence of a phenyl group somehow connects these molecules. The similarity of the pK$_a$s is likely a coincidence.
The question needs a clear definition of pK$_a$. We can ask what the pK$_a$ of phenol is, or the pK$_b$ of phenoxide. When we ask for the pK$_a$ of benzyl amine, we want the pK$_a$ of benzylammonium ion, not the pK$_a$ of benzyl amine going to C$_6$H$_5$CH$_2$NH$^-$. There is a bit of flexibility (or sloppiness) in the terminology here, and also a conflict in the values presented: the question quotes 9.33 for benzylammonium ion (I found 9.34 - no big deal), but the comment by the OP gives 8.82 for benzyl amine. The values should be identical because we know what the OP means. It appears that 8.82 is incorrect.
The difference between these molecules is far more important than any similarities. For example, one is a neutral molecule that you can put into a bottle with 6 x 10$^{23}$ more; the other is a charged piece of a molecule that you can hardly bring next to another one.
The phenyl ring has an enormous effect on the acidity of phenol, due to the resonance effect. It stabilizes the oxygen anion by more than 5 orders of magnitude, compared to, e.g., methanol (pK$_a$ = 15.5) or benzyl alcohol (15.4) or ethanol (15.9).
(Interesting that phenylethanol (pK$_a$ = 14.81) is more acidic than benzyl alcohol. This suggests that the methyl group in ethanol is more electron-donating than the benzyl group in benzyl alcohol; this agrees with the discussion in #3 below, for phenethylammonium ion.)
The phenyl ring has a much smaller effect on the amine molecule-ion; no resonance, just inductive. The pK$_a$ of benzylammonium ion (9.34) can be compared with, e.g.:
methylammonium ion (pK$_a$ = 10.66) Replacing the phenyl of benzylammonium ion with H reduces the K$_a$ by a factor of 21.
This is equivalent to saying that benzyl amine is less basic (pK$_b$ = 4.66) than methylamine (pK$_b$ = 3.34); therefore the protonated benzyl amine will give up an added proton more readily than methylamine. We can conclude that methyl is more electron-donating than benzyl.
ethylammonium ion (pK$_a$ = 10.71) Replacing the phenyl of benzylammonium ion with a methyl group reduces the K$_a$ by a factor of 23. Thus, ethylamine, like methylamine, is more basic than benzyl amine because methyl is more electron-donating than phenyl. (But we knew that.)
Phenethylammonium ion has a pK$_a$ = 9.73. Putting another CH$_2$ group between phenyl and nitrogen reduces K$_a$ by a factor of 2.5. This indicates that a benzyl group is less electron-donating than a methyl or ethyl group, in agreement with the comparison of the inductive effect between benzylammonium vs methyl- or ethylammonium ions and benzyl alcohol vs phenylethanol.
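The factors quoted in the comparisons above follow directly from the pK$_a$ differences, since the ratio of dissociation constants is $K_{a,1}/K_{a,2} = 10^{\,\mathrm{p}K_{a,2}-\mathrm{p}K_{a,1}}$; a quick numeric check:

```java
public class KaFactors {
    public static void main(String[] args) {
        // Ratio of acid dissociation constants from pKa differences.
        System.out.printf("%.0f%n", Math.pow(10, 10.66 - 9.34)); // ~21 (methylammonium vs benzylammonium)
        System.out.printf("%.0f%n", Math.pow(10, 10.71 - 9.34)); // ~23 (ethylammonium vs benzylammonium)
        System.out.printf("%.1f%n", Math.pow(10, 9.73 - 9.34));  // ~2.5 (phenethylammonium vs benzylammonium)
    }
}
```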
Benzyl amine and phenol are the neutral actors in this drama, and might react nicely in a one-to-one ratio. What would be the pH in solution? The graph shows the ratio of ion concentration to neutral molecule concentration for phenol and benzyl amine, separately, over the pH range of 5 to 9. They show contrasting slopes which cross at pH 7.33. At this pH, the ionic fraction of each compound is 0.21% of the whole - it doesn’t matter what concentration you started with if everything is soluble (benzyl amine is very soluble, but phenol, mp 43 °C, is soluble at slightly less than ~0.1 M in water at room temperature). Just add the same number of moles of each neutral compound to get 0.21% reaction and pH 7.33. What a coincidence! Practically the same pH as pure water. | {
"domain": "chemistry.stackexchange",
"id": 15533,
"tags": "organic-chemistry, acid-base, aromatic-compounds, amines, phenols"
} |
Message for publishing roll, pitch, yaw without converting to quaternion | Question:
Hi there,
I have been looking for a way to publish roll, pitch, yaw angles without changing them to quaternions, but I didn't find that kind of message after googling for a while. I know I can write my own message types, but I don't want to reinvent the wheel if that is the case.
So, does anyone know a standard message with position and orientation, with the orientation being the angles in rpy form?
Regards,
Juan Sandubete López.
Originally posted by JSandu on ROS Answers with karma: 20 on 2018-10-31
Post score: 0
Original comments
Comment by gvdhoorn on 2018-10-31:
There is none in the standard msg set, exactly because RPY (and Euler angles in general) has quite a few disadvantages which quaternions avoid.
You may not be looking for a discussion about the relative merits of quaternions, but can you perhaps tell us why you want to use RPY?
Comment by JSandu on 2018-10-31:
Thank you for the fast answer, @gvdhoorn.
I am trying it for better visualizing the angles with rqt_bag, because it is not easy to guess what is happening to the robot (with the playing rosbag) only seeing its quaternion values. With rpy it is easier.
Comment by gvdhoorn on 2018-10-31:
I wouldn't have an answer to that: raw quaternion values are indeed not very "nice" to look at. But that is what visualisation tools are for. Changing your msgs for that seems like the wrong thing to do.
Would creating a small rqt plugin that plots the state of your robot be an option?
Comment by gvdhoorn on 2018-10-31:
What sort of robot are you using? A mobile platform?
Comment by JSandu on 2018-10-31:
To create an rqt plugin could be an option, but I don't really need it that much. Maybe I can spend some time learning how to create rqt plugins when I am less busy.
It is a UAV, a quadcopter with the usual X configuration.
Answer:
One alternative I can think of is to use topic_tools/transform to write a very simple quaternion to rpy converter (one dimension at a time) that publishes to 3 topics (one for R, P and Y) and then use the gauges plugin to visualise that.
Sort of a poor man's GCS.
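If writing a plugin is overkill, the conversion itself is only a few lines and can live wherever the values are logged; here is a sketch using the common ZYX (yaw-pitch-roll) convention, with made-up class and method names:

```java
public class QuatToRpy {
    // Converts a unit quaternion (x, y, z, w) to {roll, pitch, yaw} in
    // radians, using the aerospace ZYX convention.
    static double[] toRpy(double x, double y, double z, double w) {
        double roll  = Math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y));
        double sinp  = 2 * (w * y - z * x);
        double pitch = Math.asin(Math.max(-1.0, Math.min(1.0, sinp))); // clamped
        double yaw   = Math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z));
        return new double[]{roll, pitch, yaw};
    }

    public static void main(String[] args) {
        double s = Math.sqrt(0.5); // 90 degrees about z: (0, 0, sin 45, cos 45)
        double[] rpy = toRpy(0, 0, s, s);
        System.out.printf("roll=%.3f pitch=%.3f yaw=%.3f%n", rpy[0], rpy[1], rpy[2]);
    }
}
```

This prints a yaw of about 1.571 rad (90 degrees) and zero roll and pitch, which is much easier to eyeball in a bag than raw quaternion components.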
Originally posted by gvdhoorn with karma: 86574 on 2018-10-31
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2018-10-31:
Just noticed that the wiki/topic_tools/transform page even has an example for exactly this (ie: quaternion to euler). But that doesn't work directly with rqt_bag of course.
Comment by gvdhoorn on 2018-10-31:
Final comment: this seems like an xy-problem really btw (the real question seems to be: "how to plot/visualise pose of my quadcopter instead of looking at raw quaternion values", but your post is about msgs for rpy values).
Comment by JSandu on 2018-10-31:
That seems to be enough for what I need (at least for now). And yep, I noticed the example, it fit like a glove
And no problem with the xy relation with yaw, I resolve the transformations before publishing anything.
Thanks you @gvdhoorn!
Comment by gvdhoorn on 2018-10-31:
The xy-problem is not a mathematical problem. It is a term used to describe a situation in which a user asks for the wrong thing: instead of asking for a solution for X, they ask how to implement Y, which they themselves already decided is the solution to X. They should ask about X.
Comment by JSandu on 2018-10-31:
Ok, should I change this question?
Comment by gvdhoorn on 2018-10-31:
No :)
I just wanted to make you aware of it, so you avoid doing it in the future.
It happens quite often.
Comment by JSandu on 2018-10-31:
Alright! Thanks for your help | {
"domain": "robotics.stackexchange",
"id": 31993,
"tags": "ros, pose, ubuntu, ubuntu-trusty, messages"
} |
Need to prepare the data to Link Analysis project? | Question: I've a dataset with following schema:
Customer_ID - Unique ID
Product - ID of purchased product
Department - ID of the department that sells the product
Product_Type - The purchased product type
Date - The date of purchase
Quantity - The number of units purchased
I need to do a link analysis project to analyze some consumption patterns of the products and answer the following questions:
"If product B is purchased then customer will also take product A"
I will use Scala/Python to do the link analysis over the datasets, but the examples I have seen use datasets with direct links, like a "Flight Data" project whose schema is:
ID
Origin
Destination
My question is: do I need to prepare my dataset to do the link analysis (are there best practices for this?), or can I analyze the dataset with that structure?
Many thanks! Sorry for my inexperience on this topic!
Answer: One option is that you could make a bipartite graph from your transaction log and then implement some link analysis. Depending on your requirements (volume of the data) there are different frameworks that you can use. One which is quite popular for not-so-large data is networkx, where (see the manual) you can find already-implemented algorithms for link analysis and link prediction.
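Independently of the graph framework, the pattern "if product B is purchased then the customer will also take product A" reduces to counting co-occurrences per customer; a toy sketch of the support/confidence counts behind such a rule, with entirely hypothetical basket data:

```java
import java.util.List;
import java.util.Set;

public class CoPurchase {
    public static void main(String[] args) {
        // One purchase set per customer (hypothetical data).
        List<Set<String>> baskets = List.of(
                Set.of("A", "B", "C"),
                Set.of("A", "B"),
                Set.of("B", "D"),
                Set.of("A", "C"));
        long withB = baskets.stream().filter(b -> b.contains("B")).count();
        long withAB = baskets.stream()
                .filter(b -> b.contains("A") && b.contains("B")).count();
        // confidence(B => A) = P(A | B), here 2 of the 3 baskets with B also have A
        System.out.println((double) withAB / withB);
    }
}
```

So the raw schema is already usable: group rows by Customer_ID (or by customer and Date to get per-visit baskets), collect the Product values into sets, and count.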
Maybe the community would be able to help you more if you try to be more specific about what kind of link analysis you want and what kind of problem you are trying to solve (does it perhaps have to do with supervised or unsupervised learning?). | {
"domain": "datascience.stackexchange",
"id": 2404,
"tags": "data-mining, bigdata, dataset, apache-spark, social-network-analysis"
} |
Separation of variables and choosing solutions | Question: On Griffiths E&M page 145 he does an example of an uncharged metal sphere of radius R in a uniform E-field $E=E_0\hat{z}$. Now with the general solution $$V(r,\theta)=\sum_{l=0}^{\infty}\left(A_lr^l+\frac{B_l}{r^{l+1}}\right)P_l(\cos\theta)$$ and noting that $V=-E_0z+C$ and $V=0$ at $r=R$, we get $$A_l R^l+\frac{B_l}{R^{l+1}}=0$$ This is fine. But then it says that for the solution as $r\rightarrow \infty$, the $B_l$ terms become negligible and therefore we only deal with the $A_l$ terms. This is breaking my head a little bit because I thought that we always chose the non-problematic term as the solution, i.e. the one not diverging or blowing up at zero. But the $A_l r^l$ terms will diverge as $r\rightarrow \infty$. What is going on here? Any help would be greatly appreciated!
Answer: You've written it out yourself: "$V= -E_0z+ C$". $z= r\cos\theta$, so $V\sim -E_0 r$ as $r\rightarrow \infty$. Then we do have to deal with the $A_l$ terms, but it doesn't mean all of them. We want to match the coefficients of $A_l$ to $V$, which means $A_1 \neq 0$ and $A_l = 0$ for $l\neq1$. | {
"domain": "physics.stackexchange",
"id": 57107,
"tags": "electrostatics, electric-fields"
} |
How are London Dispersion Forces generated? | Question: While talking about the gaseous state of matter we came to the topic of London Dispersion Forces, caused by the generation of a dipole in one atom which induces a dipole in another. While talking about the cause of such interactions I stated that, since Schrödinger's equation for multi-electron atoms is time-dependent, at some moment a dipole may get generated (due to an increased probability of finding the electrons in that region). Though now I am thinking that what I said was quite vague and may not be the reason for it.
So what is the reason for the generation of dipole moment in one atom to cause London forces?
Answer: Let me crash the party here.
TL;DR: The classical explanation of induced dipole attractions from electron densities "evading" each other does, by itself, not adequately or intuitively explain the actual charge density patterns that arise in these situations.
(I know that this must seem like an outrageous statement, and surely downvote fingers are itching now. Bear with me.)
Based on the Hellmann-Feynman theorem, it is known that the forces acting on a nucleus arise from two purely coulombic sources: its attraction to its surrounding electron distribution, and its repulsion with other nuclei. Hence, the observation that e.g. the two atoms in a rare gas dimer are attracted to each other immediately implies that there is a concentration of electron density in between the nuclei, so that the resulting net force pulls them "inwards" towards each other. As Feynman put it himself in 1939 (emphasis in the original):
... the charge distribution of each is distorted from central symmetry, a dipole moment of order $1/R^{7}$ being induced in each atom. The negative charge distribution of each atom has its center of gravity moved slightly toward the other. It is not the interaction of these dipoles which leads to van der Waals' force, but rather the attraction of each nucleus for the distorted charge distribution of its own electrons that gives the attractive $1/R^{7}$ force.
This is the complete opposite picture of the momentary effects in the induced-dipole explanation, where the electron densities "evade" each other by simultaneous displacements in the same direction to create attractive dipole interactions. The Feynman explanation is not very popular, but there are in fact some authors who have picked it up and commented on it, most notably perhaps Richard Bader (of Atoms-In-Molecules fame). Politzer and Murray have written a nice article on the topic.
But, is Feynman actually correct? If so, then we should be able to observe some accumulation of charge density between two neutral atoms that are bound by dispersion interactions, right? Indeed we do. The below image comes from a publication by Thonhauser et al. and shows the difference in charge density that arises when the dispersion interaction between two Ar atoms is "switched on" (highlight mine):
So, this may hold true for atoms, but what about entire molecules? Luckily, Hunt has shown in a very laborious 1990 paper that Feynman's picture holds true in that case as well:
... Feynman's statement concerning forces between well-separated atoms in S states generalizes to interacting molecules A and B of arbitrary symmetry (with A and B in their ground electronic states). To leading order, the dispersion force on each nucleus I in molecule A results from the attraction of I to the dispersion-induced change in polarization of the electronic charge cloud on molecule A itself.
Obviously, the "inwards polarization" effect must seem counter-intuitive at first. Why would the negatively charged electron clouds actually want to approach instead of evade each other? Thankfully, a straightforward rationalization of this effect for a rare gas dimer comes from another paper by Clark, Murray and Politzer:
What causes the polarization of the electronic densities toward each other (...)? This can be easily understood when it is noted that the electrostatic potential produced by the nucleus and electrons of any free ground-state atom is positive everywhere; the effect of the nucleus dominates over that of the dispersed electrons. This positive potential is what each atom “sees” of the other atom (...).
Of course, the astute reader may also voice another point of protest: "The fluctuating dipoles are variable in time, whereas Feynman's deformed charge density explanation is entirely static. How do we even compare the two? And what is the effect of the fluctuating dipoles when averaged over time?"
As it turns out, the two explanations are apparently consistent with each other, as detailed by several authors; Hunt herself in the paper mentioned above acknowledges fluctuating dipoles as a possible starting point, and a paper by Clark dedicates a full paragraph to the seeming dichotomy of the two pictures. At its core, however, the suggestive "electron-evading" nature of this explanation is very much misleading in light of the observable static effects of the "inwards" charge redistributions -- which, again, are actually required to create the resulting attractive interactions. | {
"domain": "chemistry.stackexchange",
"id": 12873,
"tags": "physical-chemistry, quantum-chemistry, intermolecular-forces, gas-phase-chemistry"
} |
Relationship between Townsend Discharge and Capacitively coupled plasma | Question: I'm researching on processes for plasma generation, and I found articles discussing both Townsend discharge and capacitively coupled plasma (CCP).
Both appear to describe an almost identical mechanism. As I understand it, when a strong electric field is applied to a gas, free electrons and ions are accelerated, causing them to collide with (and ionize) other molecules/atoms, resulting in an electron avalanche which eventually leads to avalanche breakdown that makes the gas conductive (hence, a plasma).
My guess is that CCP is the name of an industrial process that makes use of Townsend discharge. Can someone clarify exactly what the relationship between these two processes is?
Answer:
My guess is that CCP is the name of an industrial process that makes use of Townsend discharge. However, I was unable to find any article or paper confirming this.
A Townsend discharge operates by applying a DC voltage across a gap between a cathode and an anode. Electrons emitted by the cathode collide with air molecules in the gap. Once a molecule is ionized, the positive ion moves toward the cathode while the new secondary electron moves toward the anode. This process repeats many times, producing a shower of electrons that culminates in an avalanche breakdown.
A capacitively coupled plasma (CCP) relies on a radio frequency (RF) power supply connected to two parallel, conducting plates. Once an ionization occurs in the gas between the plates, the electrons are accelerated by the AC electric field of the RF power supply but the much heavier ions experience much smaller accelerations due to their masses being at least 1836 times more than the electrons. The accelerated electrons can then initiate an avalanche breakdown if the oscillating electric fields are large enough.
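The mass disparity in the paragraph above is easy to quantify: a free charge driven by a field $E_0\sin(\omega t)$ oscillates with amplitude $x_0 = qE_0/(m\omega^2)$, so the electron-to-ion amplitude ratio is just the mass ratio. A sketch with illustrative numbers (the field strength is arbitrary; 13.56 MHz is a common industrial RF frequency, and a proton is taken as the lightest possible ion):

```python
import math

# Amplitude of a driven free charge, x0 = q*E0/(m*omega^2): inversely
# proportional to mass, so electrons respond far more strongly than ions.
q = 1.602e-19                  # elementary charge, C
E0 = 1.0e4                     # illustrative field amplitude, V/m
f = 13.56e6                    # common industrial RF frequency, Hz
omega = 2.0 * math.pi * f

m_e = 9.109e-31                # electron mass, kg
m_ion = 1836.0 * m_e           # proton mass, the lightest possible ion

x_e = q * E0 / (m_e * omega**2)
x_ion = q * E0 / (m_ion * omega**2)
print(x_e / x_ion)             # → 1836.0 (exactly the mass ratio)
```

For heavier ions (e.g. Ar+) the ratio is larger still, which is why the ions are effectively stationary on the RF timescale.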
If one isolates one of the plates using a capacitor, the electrons will accumulate on this plate, resulting in a DC electric field being established in addition to the AC field from the RF power supply. The negative plate does not discharge immediately because of the capacitor used to isolate it. Recall that the ions were not significantly accelerated by the AC field in the first design; here, with the DC field present, they can be accelerated toward the isolated plate.
Can someone clarify exactly what the relationship between these two processes is?
It seems they are related, but the Townsend discharge relies almost entirely on a DC power supply while the CCP often uses an RF power supply (at least this is the main difference between Townsend's original version and modern CCP setups). | {
"domain": "physics.stackexchange",
"id": 76312,
"tags": "electromagnetism, plasma-physics"
} |
Electricity from Pendulum | Question: Can electricity be generated from a pendulum? Consider the pendulum in its ideal condition, i.e. it never stops. If yes, how? The pendulum can be simple, compound, or any other type. What I mean to ask is: can the oscillatory motion of the pendulum be converted into another kind of motion, and that motion then be used to generate electricity, perhaps by connecting a dynamo?
Answer: The pendulum motion can be converted to energy any number of ways. For example, if the pendulum bob is a magnet simply placing a coil of wire near it as it swings will induce a current which can be siphoned off and used. | {
"domain": "physics.stackexchange",
"id": 17486,
"tags": "electricity"
} |
Getting error message for rospack depends geometry_msgs | Question:
Hi,
I haven't been using ROS for a while.
Today I tried to compile some ROS nodes I created last month.
However, I was getting an error message...
I figured out that it has something to do with my geometry_msgs package.
When I tried to look at the dependencies with
rospack depends geometry_msgs
I got the same error message I got for the make command:
[rospack] woah! expanding the dependency tree made it blow up.
There must be a circular dependency somewhere.
[rospack] circular dependency
I'm using diamondback-desktop-full.
I updated/reinstalled it. However, this didn't make anything better.
Any ideas what I'm messing up?
cheers
mimax
Originally posted by Mimax on ROS Answers with karma: 174 on 2011-09-29
Post score: 1
Answer:
Do you have two packages that depend on one another anywhere in your ROS_PACKAGE_PATH? If you have any two packages that depend on one another, you could cause this circular dependency. In case you're curious, the output of rospack depends geometry_msgs on your system should look like this:
rosbuild
roslang
cpp_common
roscpp_traits
rostime
roscpp_serialization
rospack
roslib
xmlrpcpp
rosconsole
std_msgs
rosgraph_msgs
roscpp
rospy
rosclean
rosgraph
rosparam
rosmaster
rosout
roslaunch
rosunit
rostest
topic_tools
rosbag
rosbagmigration
Originally posted by DimitriProsser with karma: 11163 on 2011-09-29
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Mimax on 2011-09-29:
Mmmh.. thanks for your answer. One package in my own rospackages folder was causing the problem... I still don't know what exactly the problem was, since I could not see any circular dependencies. Anyway, it's working ;) | {
"domain": "robotics.stackexchange",
"id": 6825,
"tags": "rosmake, geometry-msgs, ros-diamondback, make"
} |
What is the mean power of a complex random variable? | Question: Say $\alpha$ is a complex random variable, then which one of the following expressions is correct?
$\mathbb{E}[\alpha^2]$ or
$\mathbb{E}[\alpha \alpha^*]$?
Answer: The correct expression for the mean power of a complex random variable $\alpha=x+jy$ is
$$
\begin{align}
\bar P &= \operatorname E\left[\alpha \alpha^*\right]\\
&= \operatorname E\left[x^2 + y^2\right]\\
&= \operatorname E\left[x^2\right] + \operatorname E\left[y^2\right]\\
&= \bar P_\mathrm{x} + \bar P_\mathrm{y}
\end{align}
$$
In other words, the mean power of a complex random variable is the sum of the mean powers of its real and imaginary part, respectively. In contrast, expression no. 1 from your question evaluates to
$$
\begin{align}
\operatorname E\left[\alpha^2\right] &= \operatorname E\left[x^2 + j2xy - y^2\right]\\
&= \operatorname E\left[x^2\right] - \operatorname E\left[y^2\right] + \operatorname E\left[j2xy\right]\\
&= \bar P_\mathrm{x} - \bar P_\mathrm{y} + j2\operatorname E\left[xy\right]
\end{align}
$$
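A quick numerical sanity check (a sketch using NumPy; the distributions and sample size are arbitrary choices for illustration) shows the two expressions diverging exactly as derived:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Complex random variable alpha = x + jy with independent real/imaginary parts.
x = rng.normal(loc=0.0, scale=2.0, size=n)   # E[x^2] = 4
y = rng.normal(loc=0.0, scale=1.0, size=n)   # E[y^2] = 1

alpha = x + 1j * y

mean_power = np.mean(alpha * np.conj(alpha)).real  # E[alpha alpha*] ≈ 4 + 1 = 5
wrong_expr = np.mean(alpha**2)                     # E[alpha^2]      ≈ 4 - 1 = 3

print(mean_power, wrong_expr)
```

The first estimate converges to $\bar P_x + \bar P_y = 5$, while the second converges to $\bar P_x - \bar P_y = 3$ (with a vanishing imaginary part, since $x$ and $y$ are independent and zero-mean here).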
Note that for the special case of $x$ and $y$ being uncorrelated, $\operatorname E\left[xy\right]=\operatorname E\left[x\right]\operatorname E\left[y\right]$. If, in addition, $x$ or $y$ has zero mean, then $\operatorname E\left[xy\right]=0$. | {
"domain": "dsp.stackexchange",
"id": 1747,
"tags": "statistics"
} |
Regularity of language of words containing a square | Question: $$L = \{w\mid w\text{ contains a substring of form }yy\text{, where }y\text{ is any non-empty string}\}.$$ Is this language regular? We do not know what $y$ looks like in advance. And why is this language regular or not regular?
Answer: Slightly more formally, for an alphabet $\Sigma$, the language of repetitive strings over $\Sigma$ is defined as follows.
$$ RR = \{w: w=uyyv \text{ for some }u, y, v \in \Sigma^*, y \text{ is not empty}\}$$ The question is, is RR a regular language?
There are three cases.
$\Sigma$ has one symbol.
WLOG, let $\Sigma=\{a\}$. Then $RR = aaa^*$ is a regular language.
$\Sigma$ has two symbols.
WLOG, let $\Sigma=\{a, b\}$. Suppose $w\in RR$ does not contain $aa$ or $bb$. That is, if an $a$ in $w$ is followed by another symbol, that symbol must be $b$; if a $b$ in $w$ is followed by another symbol, that symbol must be $a$. That means, if $w$ starts with $a$, it must contain $abab$; otherwise, it must contain $baba$. So any string in $RR$ must have one of $aa, bb, abab$ and $baba$ as its substring. Obviously, any string that contain one of those four strings is a repetitive string. So we can write $RR = (a+b)^\ast(aa+bb+abab+baba)(a+b)^\ast$ as a regular expression. Hence, $RR$ is regular.
$\Sigma$ has 3 or more symbols. This is the most interesting case.
WLOG, let $\Sigma\supseteq\{a,b,c\}$. We know there is an infinite sequence $u$ of $a,b,c$'s that has no consecutive repeated substring. Let $u_k$ be the first $k$ symbols in $u$. For example, if $u=abcabacabcb\cdots$, then $u_1=a, u_2=ab, \cdots, u_6=abcaba, \cdots$.
Let $i<j$ be two positive integers. Then $u_j=u_iv$ for some non-empty string $v\in\Sigma^*$. Since $u_iv=u_j\notin RR$ and $u_jv=u_ivv\in RR$, $u_i$ and $u_j$ are in different Myhill–Nerode equivalence classes for $RR$. So we have infinitely many Myhill–Nerode equivalence classes, represented by $u_1, u_2, \cdots$ respectively. According to the Myhill–Nerode theorem, $RR$ cannot be regular.
In the analysis for the last case, we have used one of Axel Thue's results on non-repetitive strings over three symbols. He showed that there are many such strings of infinite length. One of them is given by the following procedure. Let $\Sigma = \{a,b,c\}$. Let $A_0 = a$ and $\phi : a \mapsto abcab, b\mapsto acabcb, c\mapsto acbcacb$. So
$$\begin{aligned}
A_0 = a& \\
\phi(A_0)= a&bcab\\
\phi^2(A_0)= a&bcab\ acabcb\ acbcacb\ abcab\ acabcb\\
\phi^3(A_0)= a&bcab\ acabcb\ acbcacb\ abcab\ acabcb \\
a&bcab\ acbcacb\ abcab\ acabcb\ acbcacb\ acabcb\\
a&bcab\ acbcacb\ acabcb\ acbcacb\ abcab\ acbcacb\ acabcb\\
a&bcab\ acabcb\ acbcacb\ abcab\ acabcb\\
a&bcab\ acbcacb\ abcab\ acabcb\ acbcacb\ acabcb\\
\cdots\end{aligned}
$$
Note that $A_0$ is a prefix of $\phi(A_0)$. By a very easy mathematical induction, we can show that $\phi^n(A_0)$ is a prefix of $\phi(\phi^{n}(A_0))=\phi^{n+1}(A_0)$. (Thinking about this recurrence relation, I can feel the logical beauty of self-reference. That is why I cannot help but write down this example.) So we can specify a unique infinite string $\omega$ that has $\phi^n(A_0)$ as a prefix for every $n$. $\omega$ is the wanted string. For a proof that $\omega$ is a string without consecutive repeated substrings, the reader can check Axel Thue, Über die gegenseitige Lage gleicher Teile gewisser Zeichenreihen; Norske Vid. Skrifter I Mat.-Nat. Kl.; Christiania; 1912, pages 1–67.
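The construction above is easy to experiment with. A brute-force sketch (assuming Python) that iterates $\phi$ and searches for squares:

```python
def has_square(w: str) -> bool:
    """Return True iff w contains a non-empty substring of the form yy."""
    n = len(w)
    for i in range(n):
        for h in range(1, (n - i) // 2 + 1):
            if w[i:i + h] == w[i + h:i + 2 * h]:
                return True
    return False

# Thue's morphism as given above.
PHI = {'a': 'abcab', 'b': 'acabcb', 'c': 'acbcacb'}

def phi(w: str) -> str:
    return ''.join(PHI[ch] for ch in w)

w = 'a'          # A_0
for _ in range(2):
    w = phi(w)   # w is now phi^2(A_0), matching the expansion shown above
print(w, has_square(w))  # → abcabacabcbacbcacbabcabacabcb False
```

The quadratic-time square check is naive but fine for small iterates; by Thue's theorem every $\phi^n(A_0)$ tests square-free.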
In fact, as HendrikJan pointed out, the language of repetitive strings over three or more symbols is not context-free. This fact is proved in Repetitive strings are not context-free by R.Ross and K.Winklmann, RAIRO - Theoretical Informatics and Applications, 16 (1982) 191-199.
Since the complement of a regular language is also regular, the complement of the language of repetitive strings over one or two symbols is regular as well. In fact, these complements are finite languages: all non-repetitive strings over $\{a\}$ are $\{\epsilon, a\}$, and all non-repetitive strings over $\{a,b\}$ are $\{\epsilon, a, b, ab, ba, aba, bab\}$.
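This short list can be confirmed by brute force (a Python sketch):

```python
from itertools import product

def has_square(w: str) -> bool:
    """True iff w contains a non-empty substring yy."""
    return any(w[i:i + h] == w[i + h:i + 2 * h]
               for i in range(len(w))
               for h in range(1, (len(w) - i) // 2 + 1))

# Every binary string of length >= 4 contains a square, so enumerating
# up to length 4 already yields the complete list.
square_free = [''.join(p)
               for n in range(5)
               for p in product('ab', repeat=n)
               if not has_square(''.join(p))]
print(square_free)  # → ['', 'a', 'b', 'ab', 'ba', 'aba', 'bab']
```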
Exercise. Show that an infinite language that does not contain a square is not context-free.
Question. Is the language of almost repetitive strings defined below regular? If not regular, is it context-free?
$$ RR = \{w: w=uyxyv \text{ for some }u, y, v \in \Sigma^*, y \text{ is not empty}, x\in\Sigma\}$$ | {
"domain": "cs.stackexchange",
"id": 12374,
"tags": "formal-languages, regular-languages"
} |
Products of Lead(II) nitrate decomposition | Question: It is known, that decomposition of Lead(II) nitrate is one of the ways of generating $\ce{NO2}$ for lab use.
I recently did this in order to acquire $\ce{NO2}$ in liquid form (ambient temperature was ~0 °C), and got quite strange results. First, liquid $\ce{NO2}$ had 2 visible layers, usual brown one at the top and transparent at the bottom:
Also, during condensation liquid initially had green color (I know that Graham condenser is not safe in this configuration - I wouldn't see this color change in conventional condenser setup).
Can anyone clarify why I got such variety of products instead of pure $\ce{NO2}$?
Answer: The colorless layer may be $\ce{N2O4}$, which is in equilibrium with $\ce{NO2}$. $\ce{N2O4}$ is colorless and has a greater density, and is therefore found at the bottom. The green compound might be $\ce{N2O3}$, which is in equilibrium with $\ce{NO2}$ and $\ce{NO}$.
Edit :
The boiling point of $\ce{N2O4}$ is 21.69 °C, so the temperature at which you perform the experiment is also critical. Moreover, there is a high probability that $\ce{N2O4}$ is present in the liquid state too. | {
"domain": "chemistry.stackexchange",
"id": 2207,
"tags": "experimental-chemistry"
} |
No stable closed orbits for a Newtonian gravitational field in $d\neq 3$ spatial dimensions | Question: We are supposed to show that orbits in 4D are not closed.
Therefore I derived a Lagrangian in hyperspherical coordinates
$$L=\frac{m}{2}(\dot{r}^2+\sin^2(\gamma)(\sin^2(\theta)r^2 \dot{\phi}^2+r^2 \dot{\theta}^2)+r^2 \dot{\gamma}^2)-V(r).$$
However, we are supposed to express the Lagrangian in terms of constant generalized momenta and the variables $r,\dot{r}$. But since $\phi$ is the only cyclic coordinate in what I derived, this seems fairly impossible. Does anybody know how to find further constant momenta?
Answer: Hints:
Prove that the angular momentum $L^{ij}:=x^ip^j-x^jp^i$ is conserved for a central force law in $d$ spatial dimensions, $i,j\in\{1,2,\ldots ,d\}.$
Since the concept of closed orbits does not make sense for $d\leq 1$, let us assume from now on that $d\geq 2$.
Choose a 2D plane $\pi$ through the origin that is parallel to the initial position and momentum vectors. Deduce (from the equations of motion $\dot{\bf x} \parallel {\bf p}$ and $\dot{\bf p} \parallel {\bf x}$) that the point mass continues to be confined to this 2D plane $\pi$ (known as the orbit plane) for all time $t$. Thus the problem is essentially 2+1 dimensional with radial coordinates $(r,\theta)$ and time $t$. [In other words, the ambient $d-2$ spatial dimensions are reduced to passive spectators. Interestingly, this argument essentially shows that the conclusion of Bertrand's theorem are independent of the total number $d\geq 2$ of spatial dimensions; namely the conclusion that only central potentials of the form $V(r) \propto 1/r$ or $V(r) \propto r^2$ have closed stable orbits.]
Deduce that the Lagrangian is $L=\frac{1}{2}m(\dot{r}^2 +r^2\dot{\theta}^2) -V(r)$.
The momenta are $$p_{r}~=~\frac{\partial L}{\partial \dot{r}}~=~m\dot{r}$$ and $$p_{\theta}~=~\frac{\partial L}{\partial \dot{\theta}}~=~mr^2\dot{\theta}.$$
Note that $\theta$ is a cyclic variable, so the corresponding momentum $p_{\theta}$ (which is the angular momentum) is conserved.
Deduce that the Hamiltonian is $H=\frac{p_{r}^2}{2m}+\frac{p_{\theta}^2}{2mr^2}+ V(r)$.
Interpret the angular kinetic energy term
$$\frac{p_{\theta}^2}{2mr^2}~=:~V_{\rm cf}(r)$$
as a centrifugal potential term in a 1D radial world. See also this Phys.SE post. Hence the problem is essentially 1+1 dimensional $H=\frac{p_{r}^2}{2m}+V_{\rm cf}(r)+V(r)$.
From now on we assume that the central force $F(r)$ is Newtonian gravity. Show via a $d$-dimensional Gauss' law that a Newtonian gravitational force in $d$ spatial dimensions depends on distance $r$ as $F(r)\propto r^{1-d}$. (See also e.g. the www.superstringtheory.com webpage, or B. Zwiebach, A First course in String Theory, Section 3.7.) Equivalently, the Newtonian gravitational
potential is
$$V(r)~\propto~\left\{\begin{array}{rcl} r^{2-d} &\text{for}& d~\neq~ 2, \\
\ln(r)&\text{for}& d~=~2. \end{array}\right. $$
So from Bertrand's theorem, candidate dimensions $d$ for closed stable orbits with Newtonian gravity are:
$d=0$: Hooke's law (which we have already excluded via the assumption $d\geq 2$).
$d=3$: $1/r$ potential (the standard case).
$d=4$: $1/r^2$ potential (suitably re-interpreted as part of a centrifugal potential).
We would like to show that the last possibility $d=4$ does not lead to closed stable orbits after all.
Assume from now on that $d=4$. Notice the simplifying fact that in $d=4$, the centrifugal potential $V_{\rm cf}(r)$ and the gravitational potential $V(r)$ have precisely the same $1/r^2$ dependence!
Thus if one of the repulsive centrifugal potential $V_{\rm cf}(r)$ and the attractive gravitational potential $V(r)$ dominates, it will continue to dominate, and hence closed orbits are impossible. The radial coordinate $r$ would either go monotonically to $0$ or $\infty$, depending on which potential dominates.
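This monotonic behavior is easy to see numerically. Below is a sketch (assuming NumPy; units with $m=1$, and the values of $C$ are arbitrary illustrative constants) that integrates the effective radial equation $\ddot{r} = 2C/r^3$, where $V_{\rm eff}(r) = C/r^2$ collects both the centrifugal and gravitational $1/r^2$ terms:

```python
import numpy as np

def integrate_radial(C, r0=1.0, v0=0.0, dt=1e-4, t_max=1.0):
    """Leapfrog integration of r'' = 2*C/r**3 (units with m = 1), the
    effective 1D radial equation in d = 4 where V_eff(r) = C/r**2."""
    r, v = r0, v0
    rs = [r]
    for _ in range(int(t_max / dt)):
        v_half = v + 0.5 * dt * (2.0 * C / r**3)
        r = r + dt * v_half
        v = v_half + 0.5 * dt * (2.0 * C / r**3)
        rs.append(r)
    return np.array(rs)

# C > 0: the centrifugal term dominates and r runs away monotonically.
r_out = integrate_radial(C=+0.5, t_max=2.0)
# C < 0: gravity dominates and r falls monotonically toward 0.
r_in = integrate_radial(C=-0.1, t_max=1.0)

print(r_out[-1], r_in[-1])  # roughly sqrt(5) and sqrt(0.8)
```

For these initial conditions the exact solution is $r(t)=\sqrt{1+2Ct^2}$ (from energy conservation), so the integrator is overkill here, but the same sketch applies to any effective potential.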
However, if the repulsive centrifugal potential $V_{\rm cf}(r)$ and the attractive gravitational potential $V(r)$ just happen to cancel for one distance $r$, they would continue to cancel for all distances $r$. Newton's second law becomes $\ddot{r}=0$. Hence a closed circular orbit $\dot{r}=0$ is possible. However, this closed circular orbit is not stable against perturbations in the radial velocity $\dot{r}$, in accordance with Bertrand's theorem. | {
"domain": "physics.stackexchange",
"id": 8432,
"tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, lagrangian-formalism, orbital-motion"
} |
How similar is the recent (2021) tardigrade experiment to a Schrödinger's cat experiment? | Question: In a recent paper on the arXiv, a team of researchers from Singapore and elsewhere claim to have established entanglement between a superconducting qubit and a tardigrade (or water-bear), which is a teeny little critter that's known for being able to survive all sorts of extreme conditions (apparently even in the vacuum of outer space), by being able to enter a cryptobiotic state of suspended animation.
There may be scientific value in the study of the extremes tardigrades can be exposed to, e.g. in a dil-fridge, but it's natural to consider the experiment as a very trimmed-down version of Schrödinger's cat.
If we define a Schrödinger's cat experiment as preparing and maintaining coherence of a state $\vert\psi\rangle=\frac{1}{\sqrt 2}(\vert\text{alive}\rangle+\vert\text{dead}\rangle)$ for some macroscopic cat-state, and then measuring in the appropriate basis to prove that the cat is in a coherent superposition of being alive and being dead, then can we say that the tardigrade experiment "also" put a tardigrade into a similar superposition?
Clearly there's some click-baity headlines and perhaps some run-away conclusions, but can we formalize what the Singapore team did, in a way that requires a superconducting quantum computer?
Answer: I would say that this experiment is not at all similar to preparing such a cat-state, because no measurable property of the tardigrade was ever probed, or rather, that the measurable property is not experimentally justified. The authors make a number of assumptions in order to claim some correlation between the energy state of qubit B and the energy state of the tardigrade, such as:
the entire tardigrade is cooled to its ground state
the tardigrade is well-modeled by a set of harmonic oscillators
these oscillators all couple similarly to the qubit
there is never more than 1 photon in the system
which would be pretty extreme claims for something as big and complex as a tardigrade even in the presence of experimental data, which they don't provide. All of this is to assert that if qubit A is entangled with qubit B, and qubit B is correlated with the tardigrade, then the tardigrade must also participate in the entanglement to some degree. But this correlation is entirely assumed, and the effect of the tardigrade on the circuit is never shown to be more than a frequency shift, which is a classical effect. Notice that the tomography circuit only acts on the qubits -- this should hint that one can't directly control the state of the tardigrade nor measure any relevant observable.
PS#1: They use a tardigrade, but it could have been any speck of dust or matter of about the same size.
PS#2: Two of the authors were previous Ig-Nobel Prize winners, so make of that what you will ;) | {
"domain": "quantumcomputing.stackexchange",
"id": 3363,
"tags": "interpretations, quantum-biology"
} |
Lie derivative vs. covariant derivative in the context of Killing vectors | Question: Let me start by saying that I understand the definitions of the Lie and covariant derivatives, and their fundamental differences (at least I think I do). However, when learning about Killing vectors I discovered I don't really have an intuitive understanding of the situations in which each one applies, and when to use one over the other.
An important property of a Killing vector $\xi$ (which can even be considered the definition) is that $\mathcal{L}_\xi\, g = 0$, where $g$ is the metric tensor and $\mathcal{L}$ is the lie derivative. This says, in a way, that the metric doesn't change in the direction of $\xi$, which is a notion that makes sense. However, if you had asked me how to represent the idea that the metric doesn't change in the direction of $\xi$, I would have gone with $\nabla_\xi g = 0$ (where $\nabla$ is the covariant derivative), since as far as I know the covariant derivative is, in general relativity, the way to generalize ordinary derivatives to curved spaces.
But of course that cannot be it, since in general relativity we use the Levi-Civita connection and so $\nabla g = 0$. It would seem that $\mathcal{L}_\xi\, g = 0$ is be the only way to say that the directional derivative of $g$ vanishes. Why is this? If I didn't know that $\nabla g = 0$, would there be any way for me to intuitively guess that "$g$ doesn't change in the direction of $\xi$" should be expressed with the Lie derivative? Also, the Lie derivative is not just a directional derivative since the vector $\xi$ gets differentiated too. Is this of any consequence here?
Answer: Nice question. One way to think about it is that given a metric $g$, the statement $\mathcal L_Xg = 0$ says something about the metric, whereas $\nabla_Xg = 0$ says something about the connection. Now what $\mathcal L_Xg = 0$ says, is that the flow of $X$, where defined, is an isometry for the metric, while $\nabla_Xg = 0$ says that $\nabla$ transports a pair of tangent vectors along the integral curves of $X$ in such a way that their inner product remains the same.
As an example, consider the upper half plane model of the hyperbolic plane. Its metric is $y^{-2}(dx^2 + dy^2)$, so clearly $\partial_x$ is a Killing vector field; its flow, horizontal translation, is an isometry. The fact that $\nabla_{\partial_x}g = 0$ doesn't say anything about $g$, but it does say that Euclidean parallel transport is compatible with this directional derivative of the connection.
Now consider $\partial_y$. This of course is not a Killing vector field, since vertical translation is not an isometry. The connection however can be made such (by the theorem of Levi-Civita) that a pair of tangent vectors can be parallel transported in such a way that the inner product is preserved.
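These two claims are easy to verify symbolically. A sketch (assuming SymPy) computing $(\mathcal L_X g)_{ab} = X^c\partial_c g_{ab} + g_{cb}\,\partial_a X^c + g_{ac}\,\partial_b X^c$ for the upper half-plane metric:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)  # y > 0: upper half plane
coords = [x, y]
g = sp.Matrix([[1 / y**2, 0], [0, 1 / y**2]])  # hyperbolic metric y^-2 (dx^2 + dy^2)

def lie_derivative_metric(X, g, coords):
    """(L_X g)_{ab} = X^c d_c g_{ab} + g_{cb} d_a X^c + g_{ac} d_b X^c."""
    n = len(coords)
    L = sp.zeros(n, n)
    for a in range(n):
        for b in range(n):
            term = sum(X[c] * sp.diff(g[a, b], coords[c]) for c in range(n))
            term += sum(g[c, b] * sp.diff(X[c], coords[a]) for c in range(n))
            term += sum(g[a, c] * sp.diff(X[c], coords[b]) for c in range(n))
            L[a, b] = sp.simplify(term)
    return L

print(lie_derivative_metric([1, 0], g, coords))  # zero matrix: d/dx is Killing
print(lie_derivative_metric([0, 1], g, coords))  # nonzero: d/dy is not Killing
```

The first call returns the zero matrix, while the second returns $\operatorname{diag}(-2/y^3, -2/y^3)$, confirming that horizontal translation is an isometry and vertical translation is not.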
EDIT
I think I have a more illustrative example: consider the sphere embedded in $\Bbb R^3$. Pick an axis and take the velocity vector field $\xi$ associated to rotation around the axis at some constant angular velocity. Also consider a second vector field $\zeta$ that is everywhere proportional to $\xi$ (in a neighbourhood of the equator; extend it in any smooth way toward the poles), but that has constant speed everywhere, something like in this image (downloaded from this page).
Obviously $\xi$ is a Killing field, as it integrates to an isometry. An immediate way to see that $\zeta$ is not, is by noting that curves parallel to the equator remain parallel to the equator under the flow of $\zeta$, hence so do their tangent vectors. What happens to a curve whose tangent vector at the equator points toward a pole is that the flow of $\zeta$ moves the point at the equator over a smaller angle than a point above the equator, so these two vectors don't remain perpendicular. For parallel transport, on the other hand, two perpendicular tangent vectors at a point on the equator will remain perpendicular both under $\xi$ and under $\zeta$, since they only depend on the restriction of the vector fields to the equator, where the two fields are equal. This doesn't say anything about the vector field generating an isometry, i.e. being a Killing vector field. | {
"domain": "physics.stackexchange",
"id": 22631,
"tags": "differential-geometry, metric-tensor, differentiation, vector-fields"
} |
How do ribosomes contribute to their own synthesis? | Question: In other words, what products synthesized by ribosomes are actual parts of ribosomes (if any)? How are they involved in their own synthesis otherwise? What is the cycle/chain of products starting with the proteins synthesized by ribosomes and actually ending with the synthesis of a ribosome?
Answer: The most important parts of the ribosome are not made by other ribosomes: the 5 rRNAs (ribosomal RNAs) of the ribosome actually do most of the direct work of creating the protein, and they are made by RNA polymerase (a protein, but not the ribosome).
Then there are 92 ribosomal proteins, which as a rule bind to ribosomal RNA to support its structure and keep everything going. These are all made by ribosomes. They are thought to have appeared later in the evolution of the ribosome, though I imagine it would not be possible to constitute a working ribosome without each one of them.
These numbers are for the eukaryotic ribosome; the prokaryotic ribosome has 3 rRNAs and 52 ribosomal protein components.
http://en.wikipedia.org/wiki/Ribosome | {
"domain": "biology.stackexchange",
"id": 204,
"tags": "ribosome"
} |
Partition function for generic spin state | Question: I am studying statistical mechanics starting with the Gibbs state and the postulate of the partition function.
I learned that the partition function is a sum over all the possible states of a system and applied it in a number of simple systems like spin 1/2, and other two or three level systems.
But now I am stuck on the problem of calculating the partition function for a generic spin state:
given that the spin can be an integer or half-integer, $S = 1/2, 1, 3/2, 2$, etc., that $s_z = S, S - 1, \ldots, -S + 1, -S$, and that the energy eigenvalues of this system in a magnetic field are given by
$$E_s = -hs$$
Since the eigenvalues are equally spaced and the ground state is $s_z = S$, how can I calculate the partition function?
I started by just writing down the definition:
$$Z = \sum_{s_z=-S}^{S} e^{-\beta E_s} = e^{h\beta S} + e^{h\beta (S-1)} + \cdots + e^{h\beta (-S+1)} + e^{-h\beta S}$$
Is this right? If yes, I understood the "sum over all states" thing. But I do not know how to proceed in calculating this sum. If this is not right, then I do not even know how to start.
One way of perhaps simplifying this is to add a constant, since it does not modify the final result. I can take the ground state to have zero energy, so that the shifted quantum numbers run through $s_z - S = 0, -1, \ldots, -2S$, and the exponentials take a simpler form. Can I do this?
Answer: Your expression for $Z$ looks correct. Notice that each term is $e^{-h\beta}$ times the previous term, so it is a geometric series, which you can easily sum. | {
"domain": "physics.stackexchange",
"id": 61257,
"tags": "statistical-mechanics, partition-function"
} |
How was the Marinoan Glaciation triggered? | Question: The Marinoan Glaciation (a.k.a. Elatina Glaciation) was a glaciation that is thought to have occurred towards the end of the aptly-named Cryogenian period at ca. 650Ma. It is particularly known as one of the glaciations that may or may not have been a Snowball Earth.
Whether or not this glaciation was truly global, there is evidence that this glaciation existed. But what are the current hypotheses on how this glaciation was triggered?
Answer: Several articles suggest that the mechanisms involved with the weathering and/or the breakup of supercontinents as being a mechanism for this kind of glaciation, in the timeframe pertinent to your question, Rodinia.
According to the article Precambrian supercontinents, glaciations, atmospheric oxygenation, metazoan evolution and an impact that may have changed the second half of Earth history (Young, 2013), a suggested mechanism of the onset of glaciation is the formation of the supercontinent, specifically, the author postulates:
Enhanced weathering on the orogenically and thermally buoyed supercontinents would have stripped $CO_2$ from the atmosphere, initiating a cooling trend that resulted in continental glaciation.
The article The Marinoan glaciation (Neoproterozoic) in northeast Svalbard (Halverson et al. 2004), which examines the Polarisbreen Formation, suggests that weathering was one of the main factors in sustaining the glaciation.
An alternative hypothesis is proposed in the article From Volcanic Winter to Snowball Earth: An Alternative Explanation for Neoproterozoic Biosphere Stress (Stern et al. 2008), where the authors suggest that increased global volcanism accompanying the breakup of the supercontinent triggered the cooling. | {
"domain": "earthscience.stackexchange",
"id": 226,
"tags": "glaciology, paleoclimatology, glaciation, precambrian"
} |
Degree of anisotropy of crystal tensors | Question: Does there exist a scalar that can describe how anisotropic the elasticity of a crystal is? What about other tensors such as the permittivity or susceptibility? I found a Wikipedia article that was particularly illuminating:
Fractional anisotropy is a scalar value between zero and one that describes the degree of anisotropy of a diffusion process. A value of zero means that diffusion is isotropic, i.e. it is unrestricted (or equally restricted) in all directions. A value of one means that diffusion occurs only along one axis and is fully restricted along all other directions.
Could this be extended to $C_{ijkl}$? If so, how do I construct this parameter that is between 0 and 1? I'm assuming it starts by somehow contracting the elastic tensor. This can be very useful if you have a bimaterial system in which a particular physical phenomenon emerges from the mismatch of this anisotropy parameter.
Answer: I'm going to follow a paper, Phys. Rev. Lett. 101, 055504, that seems to answer this question very concisely. Using the usual Voigt notation $C_{ijkl} \to C_{mn}$, we define the Voigt and Reuss estimators as in Proc. Phys. Soc. A 65 349. For example: $$K^V= \frac{1}{9}\left(C_{11} + C_{22} + C_{33} + 2 (C_{12} + C_{23} + C_{31})\right)$$ and so on for $K^R, G^V, \mathrm{and} \,G^R$. I will define these further in this answer later, if needed. The authors of the PRL then go on to define $$A^U = 5 \frac{G^V}{G^R} + \frac{K^V}{K^R} - 6 \geq 0 $$ as the universal elastic anisotropy index (the Voigt estimators bound the moduli from above and the Reuss estimators from below, so each ratio is at least one). Their claims are exact: $A^U = 0$ for isotropic crystals, where $C_{11} = C_{22} = C_{33}$, $C_{44} = C_{55} = C_{66} = \frac{C_{11} - C_{12}}{2}$, and $C_{12} = C_{13} = C_{23}$, as are their claims about the cubic classes. This quantity covers all crystal classes and does not have the conflicting issues that the Zener anisotropy index has, as I understand it. | {
"domain": "physics.stackexchange",
"id": 22510,
"tags": "material-science, elasticity, continuum-mechanics, stress-strain"
} |
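The universal index from the answer above can be evaluated numerically. A Python sketch assuming the standard closed-form Voigt and Reuss bounds for cubic crystals (note the ratios are taken Voigt-over-Reuss, the convention that makes the index non-negative, since the Voigt bounds are the upper ones; the copper stiffnesses are approximate literature values):

```python
def universal_anisotropy_cubic(c11, c12, c44):
    """A^U = 5*Gv/Gr + Kv/Kr - 6 for a cubic crystal (stiffnesses in any one unit)."""
    kv = kr = (c11 + 2 * c12) / 3.0                           # bulk bounds coincide for cubic
    gv = (c11 - c12 + 3 * c44) / 5.0                          # Voigt (upper) shear bound
    gr = 5 * (c11 - c12) * c44 / (4 * c44 + 3 * (c11 - c12))  # Reuss (lower) shear bound
    return 5 * gv / gr + kv / kr - 6

# isotropy condition C44 = (C11 - C12)/2 makes the index vanish
print(universal_anisotropy_cubic(110, 50, 30))   # -> 0.0
# copper (roughly C11=168, C12=121, C44=75 GPa) comes out clearly anisotropic
print(universal_anisotropy_cubic(168, 121, 75))  # positive
```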
A better approach to parsing this file name in Java? | Question: I am creating a file uploader, and one function of the uploader will parse the file name to make sure that it is in the correct format.
This is the proper format of the file name: FILENAME_USERNAME_YYYYMMDD.csv,
where FILENAME and USERNAME are any chars of any length.
Specifically, I must extract the date from the file name and compare it to a date given in the .csv file content. (comparing to a date: MM/DD/YYYY).
Below is the way that I have approached the problem, but if the user happens to input a file with a random, arbitrary file name, it's very possible for the method to throw a null pointer exception. I'm hoping for a suggestion for a more elegant solution.
public boolean compareDate(File file, String cellDate)
{
    String date;
    String tempDate;
    String year, month, day;
    int startIndex = file.getName().lastIndexOf("_");
    int lastIndex = file.getName().lastIndexOf(".");
    if (startIndex > 0 && lastIndex > 0)
    {
        tempDate = file.getName().substring(startIndex+1, lastIndex);
        if (tempDate.length() < 8) // 8 == chars in date
            return false;
    }
    year = tempDate.substring(0,4);
    month = tempDate.substring(4,6);
    day = tempDate.substring(6,8);
    date = month + "/" + day + "/" + year;
    if (date.equals(cellDate) || date.contains(cellDate)) // (ex. 01/25/2013 == 1/25/2013
        return true;
    else
        return false;
}
Answer: My version:
import java.io.File;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CodeReview_27975 {
    // match everything between the last "_" and "." [explanation in the notes!]
    private final static Pattern FILENAME_DATE_PATTERN = Pattern.compile(".*_(.*?)\\..*");

    // apply the pattern to extract date from file name
    private String extractDateFromFilename(String filename) {
        Matcher m = FILENAME_DATE_PATTERN.matcher(filename);
        if (!m.matches()) {
            throw new IllegalArgumentException(
                    "Filename doesn't match expected format: " + filename);
        }
        return m.group(1);
    }

    private boolean compareDate(String fileName, String cellDateString) {
        SimpleDateFormat fileDateParser = new SimpleDateFormat("yyyyMMdd");
        SimpleDateFormat cellDateParser = new SimpleDateFormat("MM/dd/yyyy");
        Date d1, d2;
        try {
            d1 = fileDateParser.parse(extractDateFromFilename(fileName));
            d2 = cellDateParser.parse(cellDateString);
        } catch (ParseException e) {
            throw new IllegalArgumentException(
                    "One of the input dates failed to be parsed:" + fileName
                            + ", " + cellDateString);
        }
        return d1.equals(d2);
    }

    // this is the only public method, but it delegates to private utility functions
    public boolean compareDate(File file, String cellDate) {
        return compareDate(file.getName(), cellDate);
    }

    public static void main(String[] args) {
        CodeReview_27975 app = new CodeReview_27975();
        // easy to test
        System.out.println(app.compareDate("file_user_20130125.csv", "1/25/2013"));
        System.out.println(app.compareDate("file_user_20130125.csv", "01/25/2013"));
        System.out.println(app.compareDate("file_user_20130101.csv", "1/1/2013"));
    }
}
split the single method into smaller, easier to test, chunks.
each function deals with a single concern (most of the code deals with strings, no need to pass the File object further down the stack). This makes it easier to test
regular expressions are useful when dealing with patterns in strings. Extracting the date from the file name is much more robust this way (you were most likely getting the NullPointerException for non-standard input; the current expression doesn't solve this problem but at least it reports it correctly, making it easier to create a fix)
comparing dates as strings is just wrong; your equals-or-contains trick would fail when comparing 1/1/2013 and 01/01/2013. Creating a Date object is a small price to pay for correctness.
the class could be visually longer but I believe it's easier to understand this way; most of the new code comes from exception handling - if you decided to simply make the caller handle checked exceptions, you could remove most of that.
Also, the convention for compare functions is to return an integer: <0 if the first argument is smaller, 0 if both are equal and >0 if the first argument is bigger. This would make the method more flexible and potentially useful for the caller. Changing the name to areDatesSame or something like that would be another option.
edit:
".*_(.*?)\\..*" pattern dissected:
.* any character, zero or more of them at the start
_ an underscore
(.*?) capture a group (after underscore, before the dot)
\. dot character (has to be escaped into "\\." if inside a String)
.* any character, any number of them at the end
Regular expressions are a broad subject but knowing the basics (and where/how to look for more detailed information) will always pay off. You can start by having a look at the Java Tutorial on Regex
The expression could be changed to only accept exactly 8 digits in the group (rather than ANY string between "_" and "."), making it a bit more complicated but more specific.
Visualisation from debuggex.com: | {
"domain": "codereview.stackexchange",
"id": 4067,
"tags": "java, strings, parsing"
} |
Refactoring while-do into tail recursion with F# | Question: I have a while-do loop nested inside a while-do loop in my program, due to it needing to iterate in two dimensions through an array of arrays, and originally my loop looked like this:
while ( tHeightIndex < tSearchHeight - 1 ) && tContinue do
    while ( tWidthIndex < tSearchWidth - 1 ) && tContinue do
        let tCurrentValue = tLargeArray.[tHeightIndex].[tWidthIndex]
        if tCurrentValue = tFirstSmallPixel then
            tMatch <- ArrayFunctions.SearchSubset tSmallArray tLargeArray ( tHeightIndex, tWidthIndex )
            if tMatch then tContinue <- false
        if tMatch = false && tContinue = true then
            tWidthIndex <- tWidthIndex + 1
    if tMatch = false && tContinue = true then
        tWidthIndex <- 0 // Reset to search next row
        tHeightIndex <- tHeightIndex + 1
tMatch, tWidthIndex, tHeightIndex
However, seeing as I'm doing this project entirely to learn better functional programming practices, I refactored that nested loop into two tail-recursive functions. This code passes my unit tests and appears to work correctly, so I would like advice about the style of the code and whether there's better ways I can make the code read clearly.
The part corresponding to the original while-do loop is located after the declaration of firstSmallPixel.
module ImageSearch =
    open ImageFunctions

    let SearchBitmap (smallBitmap:Bitmap) (largeBitmap:Bitmap) =
        let smallArray = Transform2D <| LoadBitmapIntoArray smallBitmap
        let largeArray = Transform2D <| LoadBitmapIntoArray largeBitmap
        let searchWidth = largeBitmap.Width - smallBitmap.Width
        let searchHeight = largeBitmap.Height - smallBitmap.Height
        let firstSmallPixel = smallArray.[0].[0]

        let WidthLoop heightIndex =
            let rec WidthLoopRec heightIndex widthIndex =
                let ContinueLoop () = WidthLoopRec heightIndex (widthIndex + 1)
                let currentLargePixel = largeArray.[heightIndex].[widthIndex]
                match ( widthIndex < searchWidth , currentLargePixel = firstSmallPixel ) with
                | ( true , true ) ->
                    let foundImage = ArrayFunctions.SearchSubset smallArray largeArray ( heightIndex, widthIndex )
                    if foundImage then widthIndex , foundImage
                    else ContinueLoop ()
                | ( true , false ) -> ContinueLoop ()
                | ( false, _ ) -> widthIndex , false
            WidthLoopRec heightIndex 0

        let HeightLoop () =
            let rec HeightLoopRec heightIndex =
                let widthIndex, foundImage = WidthLoop heightIndex
                match ( foundImage, heightIndex < searchHeight ) with
                | ( false , true ) -> HeightLoopRec ( heightIndex + 1 )
                | ( _ , _ ) -> foundImage , widthIndex , heightIndex
            HeightLoopRec 0

        HeightLoop ()
Answer: I think that learning functional programming should be about making your code more readable (and also better in other aspects), neither of your samples seems very readable to me.
I also think that when working with collections in functional programming, the most important concept is not recursion, it's higher-order functions.
If you use those in the form of a query expression, your code could look like this:
query {
    for row in 0..searchHeight do
    for col in 0..searchWidth do
    let pixel = largeArray.[row].[col]
    where (pixel = firstSmallPixel)
    select (row, col, pixel)
    head
}
The nice thing about this code is not only that it's more readable and shorter, it's that it emphasizes what you want to do, not how to do it. | {
"domain": "codereview.stackexchange",
"id": 8956,
"tags": "array, recursion, functional-programming, f#"
} |
Storage container for components of entities (ECS) | Question: Overview
After playing a while with the ECS implementation of the Unity engine and liking it very much, I decided to try to recreate it as a challenge. As part of this challenge I need a way of storing the components grouped by entity; I solved this by creating a container called a Chunk.
Unity uses archetypes to group components together and stores these components in pre-allocated chunks of fixed size.
I made a simple design of my implementation as clarification:
Here Archetype is a linked list of chunks; the chunks contain arrays of all the components that make the archetype - in this case Comp1, Comp2 and Comp3. Once a chunk is full a new chunk is allocated and can be filled up and so on.
The chunk itself is implemented like this:
With this solution I can store the components grouped by entity while making optimal use of storage and cache because the components are tightly packed in an array. Because of the indirection provided by the array of indices I am able to delete any component and move the rest of the components down to make sure there aren't any holes.
Questions
I have some items I'd like feedback on in order to improve myself
Is the code clear and concise?
Are there any obvious performance improvements?
Because this is my first somewhat deep-dive in templates, are there any STL solutions I could've used that I have missed?
Code
chunk.h
Contains the container.
#pragma once
#include "utils.h"
#include "entity.h"
#include <cstdint>
#include <tuple>

template<size_t Capacity, typename ...Components>
class chunk
{
public:
    struct index
    {
        uint16_t id;
        uint16_t index;
        uint16_t next;
    };

    chunk()
        :
        m_enqueue(Capacity - 1),
        m_dequeue(0),
        m_object_count(0)
    {
        static_assert((Capacity & (Capacity - 1)) == 0, "number should be power of 2");
        for (uint16_t i = 0; i < Capacity; i++)
        {
            m_indices[i].id = i;
            m_indices[i].next = i + 1;
        }
    }

    const uint16_t add()
    {
        index& index = m_indices[m_dequeue];
        m_dequeue = index.next;
        index.id += m_new_id;
        index.index = m_object_count++;
        return index.id;
    }

    void remove(uint16_t id)
    {
        index& index = m_indices[id & m_index_mask];
        tuple_utils<Components...>::tuple_array<Capacity, Components...>::remove_item(index.index, m_object_count, m_items);
        m_indices[id & m_index_mask].index = index.index;
        index.index = USHRT_MAX;
        m_indices[m_enqueue].next = id & m_index_mask;
        m_enqueue = id & m_index_mask;
    }

    template<typename... ComponentParams>
    constexpr void assign(uint16_t id, ComponentParams&... value)
    {
        static_assert(arg_types<Components...>::contain_args<ComponentParams...>::value, "Component type does not exist on entity");
        index& index = m_indices[id & m_index_mask];
        tuple_utils<Components...>::tuple_array<Capacity, ComponentParams...>::assign_item(index.index, m_object_count, m_items, value...);
    }

    template<typename T>
    constexpr T& get_component_data(uint16_t id)
    {
        static_assert(arg_types<Components...>::contain_type<T>::value, "Component type does not exist on entity");
        index& index = m_indices[id & m_index_mask];
        return std::get<T[Capacity]>(m_items)[index.index];
    }

    inline const bool contains(uint16_t id) const
    {
        const index& index = m_indices[id & m_index_mask];
        return index.id == id && index.index != USHRT_MAX;
    }

    inline const uint32_t get_count() const
    {
        return m_object_count;
    }

    static constexpr uint16_t get_capacity()
    {
        return Capacity;
    }

private:
    static constexpr uint16_t m_index_mask = Capacity - 1;
    static constexpr uint16_t m_new_id = m_index_mask + 1;

    uint16_t m_enqueue;
    uint16_t m_dequeue;
    uint16_t m_object_count;
    index m_indices[Capacity] = {};
    std::tuple<Components[Capacity]...> m_items;
};
utils.h
Contains utility functions for templates used by the chunk class.
// utils.h
#pragma once
#include <tuple>
#include <type_traits>
#include <algorithm>

// get total size in bytes of an argument pack
template<typename First, typename... Rest>
struct args_size
{
    static constexpr size_t value = args_size<First>::value + args_size<Rest...>::value;
};

template <typename T>
struct args_size<T>
{
    static constexpr size_t value = sizeof(T);
};

template<typename... Args>
struct arg_types
{
    // check if variadic template contains types of Args
    template<typename First, typename... Rest>
    struct contain_args
    {
        static constexpr bool value = std::disjunction<std::is_same<First, Args>...>::value ?
            std::disjunction<std::is_same<First, Args>...>::value :
            contain_args<Rest...>::value;
    };

    template <typename Last>
    struct contain_args<Last>
    {
        static constexpr bool value = std::disjunction<std::is_same<Last, Args>...>::value;
    };

    // check if variadic template contains type of T
    template <typename T>
    struct contain_type : std::disjunction<std::is_same<T, Args>...> {};
};

template<typename... Args>
struct tuple_utils
{
    // general operations on arrays inside a tuple
    template<size_t Size, typename First, typename... Rest>
    struct tuple_array
    {
        static constexpr void remove_item(size_t index, size_t count, std::tuple<Args[Size]...>& p_tuple)
        {
            First& item = std::get<First[Size]>(p_tuple)[index];
            item = std::get<First[Size]>(p_tuple)[--count];
            tuple_array<Size, Rest...>::remove_item(index, count, p_tuple);
        }

        static constexpr void assign_item(size_t index, size_t count, std::tuple<Args[Size]...>& p_tuple, const First& first, const Rest&... rest)
        {
            std::get<First[Size]>(p_tuple)[index] = first;
            tuple_array<Size, Rest...>::assign_item(index, count, p_tuple, rest...);
        }
    };

    template <size_t Size, typename Last>
    struct tuple_array<Size, Last>
    {
        static constexpr void remove_item(size_t index, size_t count, std::tuple<Args[Size]...>& p_tuple)
        {
            Last& item = std::get<Last[Size]>(p_tuple)[index];
            item = std::get<Last[Size]>(p_tuple)[--count];
        }

        static constexpr void assign_item(size_t index, size_t count, std::tuple<Args[Size]...>& p_tuple, const Last& last)
        {
            std::get<Last[Size]>(p_tuple)[index] = last;
        }
    };
};
Usage
auto ch = new chunk<2 * 2, TestComponent1, TestComponent2>();
auto id1 = ch->add();
auto id2 = ch->add();
auto contains = ch->contains(id1);
ch->assign(id1, TestComponent2{ 5 });
ch->assign(id2, TestComponent1{ 2 });
ch->remove(id1);
Tests
#include "chunk.h"
#define CATCH_CONFIG_MAIN
#include "catch.h"

struct TestComponent1
{
    int i;
};

struct TestComponent2
{
    int j;
};

struct TestComponent3
{
    char t;
};

SCENARIO("Chunk can be instantiated")
{
    GIVEN("A Capacity of 4 * 4 and 3 component types as template parameters")
    {
        chunk<4 * 4, TestComponent1, TestComponent2, TestComponent3> testChunk;
        THEN("Chunk has Capacity of 4 * 4 and is empty")
        {
            REQUIRE(testChunk.get_capacity() == 4 * 4);
            REQUIRE(testChunk.get_count() == 0);
        }
    }
}

SCENARIO("Items can be added and removed from chunk")
{
    GIVEN("A Capacity of 4 * 4 and 3 component types as template parameters")
    {
        chunk<4 * 4, TestComponent1, TestComponent2, TestComponent3> testChunk;
        auto entityId = 0;
        WHEN("Entity is added to chunk")
        {
            entityId = testChunk.add();
            THEN("Chunk contains entity with id")
            {
                REQUIRE(testChunk.contains(entityId));
                REQUIRE(testChunk.get_count() == 1);
            }
        }
        WHEN("Entity is removed from chunk")
        {
            testChunk.remove(entityId);
            THEN("Chunk does not contain entity with id")
            {
                REQUIRE(!testChunk.contains(entityId));
                REQUIRE(testChunk.get_count() == 0);
            }
        }
    }
}

SCENARIO("Items can be given a value")
{
    GIVEN("A Capacity of 4 * 4 and 3 component types as template parameters with one entity")
    {
        // prepare
        chunk<4 * 4, TestComponent1, TestComponent2, TestComponent3> testChunk;
        auto entity = testChunk.add();
        auto value = 5;
        WHEN("entity is given a type TestComponent2 with a value of 5")
        {
            testChunk.assign(entity, TestComponent2{ value });
            THEN("entity has component of type TestComponent2 with value of 5")
            {
                auto component = testChunk.get_component_data<TestComponent2>(entity);
                REQUIRE(component.j == value);
            }
        }
    }
}
Answer: Answers to your questions
Is the code clear and concise?
That's definitely a yes.
Are there any obvious performance improvements?
That is hard to say. For generic use, I think it will do just fine. However, if the components are very small, the overhead of m_indices might become noticeable; a bitmask marking which elements are in use might be better in that case. Also, there might be access patterns that could benefit from a different implementation: if you add a lot of entities, then use them, then delete all of them and start over, you have wasted cycles keeping track of the indices. But again, for generic use it looks fine. Use a profiling tool like Linux's perf to measure performance bottlenecks, and if you see you spend a lot of cycles in the member functions of class chunk, you can then decide whether another approach might be better.
Because this is my first somewhat deep-dive in templates, are there any STL solutions I could've used that I have missed?
The list-of-chunks looks a lot like what std::deque does. You could use a std::deque in your class archetype, and not have a class chunk. The only issue is that std::deque hides the chunks it uses internally from you. So if you go this way, you probably cannot initialize the indices like you did in class chunk, but have to do this in a more dynamic way.
Assert that you don't overflow uint16_t variables
The template parameter Capacity is a size_t, but you use uint16_t indices. Add a static_assert() to ensure you don't overflow the index variables. Note: static_assert()s are declarations, not statements, so you don't have to put them inside a member function.
Add runtime assert()s
Apart from compile-time checks, it might also be useful to add run-time checks to ensure errors are caught early in debug builds. For example, in Chunk::add() you should assert(m_object_count < Capacity).
Consider combining add() and assign()
When reading your code, I was wondering why add() and remove() looked so different. Adding a new entity is apparently a two-step process: first you call add() to reserve an ID, and then you assign() values to the components of that ID. Why not make this a one-step process?
High bits in IDs
You seem to be using the high bits as a kind of generation counter. Is this doing anything useful? If Capacity is set to 65536, then there are no high bits left, so you can't be relying on this. I would avoid this altogether, this way you can remove m_index_mask, m_new_id and all the & m_index_mask operations.
Try to make your class look and act like STL containers
The standard library containers all have a similar interface; you only have to learn it once and you can apply this knowledge on all the containers it provides. It helps if you follow the same conventions, so you don't have to learn and use different terms for your classes. Mostly, it's just renaming a few member functions:
add() -> insert() (just like std::set)
remove() -> erase()
get_component_data() -> get() (just like std::tuple)
get_count() -> size()
get_capacity() -> capacity()
You also might want to add some functions commonly found in STL containers, such as empty() and clear(). Most importantly, I assume you want to loop over all entities at some point and call a function on each of them. For this, it helps if you add iterators to this class, so they can be used in range-based for-loops, in STL algorithms, and makes it easy to interact with anything else that supports iterators. | {
"domain": "codereview.stackexchange",
"id": 39614,
"tags": "c++, entity-component-system"
} |
What is the mass of a single erythrocyte? | Question: I have been searching the internet in different languages, but can't find any article that answers the question of what the mass of a single erythrocyte is. I think it should be pretty easy to measure experimentally, but I didn't find anything.
Has anyone measured single erythrocyte mass, and if yes, what is the value?
My try
Experimentally
I am not a biologist or medical student. What I do know: blood consists of a liquid part (water, salts) and a solid part (red blood cells, white blood cells and thrombocytes). If it is possible, the white cells and thrombocytes can be separated from the blood in some test tube, so that only erythrocytes are left. Then, there should be a medical statistical value like erythrocyte density or the number of erythrocytes per liter.
We have some blood with only erythrocytes, and we know how many erythrocytes are in it. We can measure that blood's weight, subtract the liquid's weight (somehow), divide by the erythrocyte number and we will get the mass of a single erythrocyte.
Theoretically
About 90% of erythrocyte mass is hemoglobin. Hemoglobin is a molecule, and molecules are countable things, so perhaps there is information about how many hemoglobin molecules, on average, are in one erythrocyte.
According to the article Red blood cell proteomics update: is there more to discover? by Angelo D’Alessandro, Monika Dzieciatkowska, Travis Nemkov, and Kirk C. Hansen, it is about 270 million per red blood cell. The molecular mass of hemoglobin is about $64\ kDa$; its absolute mass equals $1.106*10^{-22}\ kg$.
If my assumptions are correct, then 90% of erythrocyte mass is about
$$2.97*10^{-14} kg$$
Answer: About 90% of erythrocyte mass is hemoglobin.
According to the article Red blood cell proteomics update: is there more to discover? by Angelo D’Alessandro, Monika Dzieciatkowska, Travis Nemkov, and Kirk C. Hansen, there are about 270 million hemoglobin molecules per red blood cell. The molecular mass of hemoglobin is about $64\ kDa$, or $1.106*10^{-22}\ kg$.
From the given, 90% of erythrocyte mass is about
$$2.97*10^{-14} kg$$
Now, using the proportion, calculate that the average erythrocyte mass is
$$m=\dfrac{2.97*10^{-14}kg*100\%}{90\%}=3.3*10^{-14} kg$$
$$\text{erythrocyte mass} = 33\ \mathrm{pg}$$
@David's calculation result is also of a similar order of magnitude to mine; he got $83\ pg$ | {
"domain": "biology.stackexchange",
"id": 10616,
"tags": "cell-biology, hematology, red-blood-cell"
} |
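The chain of arithmetic in the answer above can be reproduced in a few lines of Python (a sketch using the answer's quoted figures; it lands near the quoted $33\ pg$, with the small difference coming from rounding of the hemoglobin molecular mass):

```python
DALTON_KG = 1.66054e-27       # one dalton (unified atomic mass unit) in kg

n_hb = 270e6                  # hemoglobin molecules per red blood cell (from the answer)
m_hb_kg = 64e3 * DALTON_KG    # one ~64 kDa hemoglobin molecule, in kg

hb_mass = n_hb * m_hb_kg      # total hemoglobin mass per cell
cell_mass = hb_mass / 0.90    # hemoglobin is ~90% of the cell's mass

print(f"hemoglobin per cell: {hb_mass:.2e} kg")
print(f"erythrocyte mass:    {cell_mass * 1e15:.0f} pg")  # 1 pg = 1e-15 kg
```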
Do simple, non-sonic, omni-directional rangefinding beacons exist? | Question: I am on a robotics team that plans to compete in a competition where one of the rules is that no sort of sonic sensor is allowed to be used. I guess that limits it to some sort of EM frequency right?
Ideally, my team is looking for a simple beacon system, where beacon A would be attached to the robot, while beacon B would be attached to a known point on the competition space. Then, beacon A can give information about how far away B is. After some searching, I could only turn up laser rangefinders that required pointing at the target. I am a CS student, so I'm not familiar with the terminology to aid searches.
Another nice property would be if the beacons also gave the angle of beacon A in beacon B's field of view, although this is not necessary, since multiple beacons could be used to obtain this information.
We have an Xbox 360 Kinect working, and able to track things and give distances, but it loses accuracy quickly over distance (the arena is about 6 meters long), and this beacon should be as simple as possible. We ONLY need it for a relative position of our robot.
Alternate Solution:
Another way to solve this would be for an omni-directional beacon to only give angle information, two of these could be used to triangulate, and do the job just as well.
Answer: Yes, there is such a system available today, ScenSor from DecaWave:
These tags can measure their distance from base stations using the time of flight of radio packets.
They have a precision of about 10 cm, i.e. successive samples are randomly distributed in a 10 cm diameter cloud around the true location. Also, the radio signal needs a clear line of sight from the tags to the base stations. Any obstacles will cause a slight delay in the signal, giving a slight error in the readings.
However, if you combine enough tags and base stations, plus odometry from your robot, you can get pretty good precision indeed.
One caveat: The chips on the modules are very complex to use, and the example source code provided by Decawave is a nearly indecipherable mess, so expect to put a lot of time and effort into getting these to work. | {
"domain": "robotics.stackexchange",
"id": 460,
"tags": "localization, electronics, laser, rangefinder"
} |
What's the brightness of Alpha Centauri from Proxima Centauri? | Question: Self-explanatory, but I would like a comparison as well. Is the light enough to see by? How disrupted will the pitch darkness on the spot opposite of the 'solar pole' be?
Answer: Not close to being able to read by.
Proxima Centauri is about 13,000 AU from the two binary Alpha Centauri stars. Together they have about twice the luminosity of the Sun, but at 13,000 AU that's roughly 2/169,000,000 of the visible light that the Earth gets from the Sun.
The brightness variation of the full moon to the Sun is about 1 to 440,000, so, some rough math, the two stars would shine about 1/190th as bright as the full moon. That would make those 2 stars by far the brightest stars in the Proxima Centauri sky, but far short of reading light. Together, the two stars would be about 10 times as bright as Venus at Venus' peak. (Venus is about 1/2000th as bright as the moon, -4.4 apparent magnitude to -12.6 for the moon).
That would, I think, be bright enough to be visible during the day under an earth like sky a lot of the time.
The two stars would be quite close to each other too; the maximum visible distance between them would be roughly 1/4 the diameter of the full moon, at which separation they would still be visible as separate stars, but they would pass close to each other too, perhaps appearing to touch to the naked eye.
The two stars complete a full orbit around each other in 79.9 years, so the variation would be noticeable over a human lifetime. | {
"domain": "astronomy.stackexchange",
"id": 2562,
"tags": "star, the-sun, magnitude"
} |
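The rough math in the answer above can be reproduced with the inverse-square law and the magnitude scale (a sketch; the luminosity, distance, and the moon and Venus magnitudes are the round figures used in the answer, and the Sun's apparent magnitude of $-26.74$ is an assumed standard value):

```python
import math

L_ab = 2.0      # combined luminosity of Alpha Cen A + B, in solar units
d_au = 13000.0  # Proxima's distance from the pair, in AU

# inverse-square law: flux relative to sunlight at Earth (1 AU from 1 L_sun)
flux_rel = L_ab / d_au ** 2

m_sun = -26.74  # Sun's apparent magnitude seen from Earth (assumed value)
m_ab = m_sun - 2.5 * math.log10(flux_rel)

# brightness ratio between magnitudes m1 and m2 is 10**((m2 - m1) / 2.5)
vs_moon = 10 ** ((-12.6 - m_ab) / 2.5)   # pair relative to the full moon
vs_venus = 10 ** ((-4.4 - m_ab) / 2.5)   # pair relative to Venus at peak

print(f"apparent magnitude of the pair: {m_ab:.1f}")
print(f"about 1/{1 / vs_moon:.0f} of the full moon, {vs_venus:.0f}x Venus")
```

This recovers the answer's figures: roughly magnitude $-7$, about 1/190th of the full moon, and about ten times Venus.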
Why does the surface structure of a metal make it hydrophobic? | Question: I was just reading this article from phys.org describing water-repellant surfaces.
However, the article doesn't go into enough detail to explain why a particular structure repels the water.
Can someone please explain why the water molecules react to a particular pattern on the surface of a metal in this way?
Answer: The article Gowtham linked to seems to be the one you want. Basically (if I understood correctly), if the material is already hydrophobic (water is more attracted to itself than the material), the surface tension of the water prevents it from filling small empty spaces in the surface which will remain filled with air. Thus the contact surface and attractive force between water droplet and the surface will be very small.
The pattern itself is quite simple. It is simply very small empty spaces the walls of which are also full of even smaller empty spaces. This together with the material means that barring some outside pressure forcing the water into the spaces, surface tension will keep it out. Gravity will not be enough.
And in fact, if you force water into the surface it will try to bounce back, as long as the water is less attracted to the surface material than it is to itself. Water trapped in the small spaces will be more attracted to the rest of the drop than it is to the surface, and if the drop and the spaces are small enough, this attraction wins over the gravity trying to force water into the surface.
Hope this is in the correct direction AND at least somewhat understandable. | {
"domain": "physics.stackexchange",
"id": 19230,
"tags": "experimental-physics, water, molecules"
} |
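The trapped-air picture in the answer above is commonly quantified with the Cassie-Baxter relation (my addition; the model name and the numbers below are not from the answer): the apparent contact angle rises as the fraction of solid actually touching the drop shrinks.

```python
import math

def cassie_baxter(theta_flat_deg, solid_fraction):
    # apparent contact angle for a drop resting on a fraction f of solid
    # and (1 - f) of trapped air: cos(theta*) = f * (cos(theta) + 1) - 1
    c = solid_fraction * (math.cos(math.radians(theta_flat_deg)) + 1) - 1
    return math.degrees(math.acos(c))

# a flat surface with a 110-degree contact angle (mildly hydrophobic),
# textured so only 10% of the area under the drop is solid, becomes
# strongly water-repellent (close to 160 degrees)
print(round(cassie_baxter(110, 0.10)))
```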
Locating a list of parts | Question: I have an ASP.NET MVC3 web application that displays a list of parts (a part is a simple entity with a number and a description).
I have updated the action method to support filtering and paging:
[HttpGet]
public ViewResult Index(int page = 1, int pageSize = 30, string filter = "All")
{
    IEnumerable<Part> parts;
    int totalParts;
    var acceptableFilters = new string[] {
        "All", "123", "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L",
        "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z" };

    // This checks that the filter value received by the method is an acceptable value.
    // E.G. If a user types in a value in the query string that is not contained in the
    // array above, then the filter is reset to 'All' and all records are queried.
    if (!acceptableFilters.Contains(filter))
    {
        filter = "All";
    }

    if (filter == "All")
    {
        parts = Database.Parts
            .OrderBy(p => p.Number)
            .Skip((page - 1) * pageSize)
            .Take(pageSize);
        totalParts = Database.Parts.Count();
    }
    else if (filter == "123")
    {
        var numbers = new string[]{"1","2","3","4","5","6","7","8","9","0"};
        parts = Database.Parts
            .OrderBy(p => p.Number)
            .Where(p => numbers.Contains(p.Number.Substring(0,1)))
            .Skip((page - 1) * pageSize)
            .Take(pageSize);
        totalParts = Database.Parts
            .Count(p => numbers.Contains(p.Number.Substring(0, 1)));
    }
    else
    {
        parts = Database.Parts
            .OrderBy(p => p.Number)
            .Where(p => p.Number.StartsWith(filter))
            .Skip((page - 1) * pageSize)
            .Take(pageSize);
        totalParts = Database.Parts.Count(p => p.Number.StartsWith(filter));
    }

    PartsListViewModel viewModel = new PartsListViewModel()
    {
        Filter = filter,
        PageInfo = new PageInfo(page, pageSize, totalParts),
        Parts = parts,
    };
    return View(viewModel);
}
The idea is this:
If the filter is equal to 'All' then query all records.
If the filter is equal to '123' then query all records that start with a number.
If the filter is equal to a letter (A, B, C) then query all records that begin with said letter.
Once the required records have been queried I then need to do some calculations to determine how many pages I have and how many items to display on each page.
This works perfectly but I do not like the code I currently have, specifically the if statement that determines the Linq query to be used as the majority of the code is identical except for the where clause (or lack of where clause if all records are being pulled down). I also don't like the fact that I have to run each query twice: once to get the records and a second time to determine the total record set size.
So, is there a better way of achieving the same result? Can the LINQ queries be restructured in a more elegant way to reduce the redundant code, or is this the best way?
Answer: You can construct LINQ queries step by step instead of writing them all at once:
parts = Database.Parts;
if (filter == "123") {
parts = parts.Where(p => char.IsDigit(p.Number[0]));
} else if (filter != "All") { // Alphabetic filter
parts = parts.Where(p => p.Number.StartsWith(filter));
}
int totalParts = parts.Count();
parts = parts
.OrderBy(p => p.Number)
.Skip((page - 1) * pageSize)
.Take(pageSize);
I followed the DRY software design principle here. DRY = Don't Repeat Yourself. If you repeat the same (or almost the same) code over and over, the code is less maintainable and more susceptible to mistakes. The fact that you have to write more if you repeat yourself is rather secondary. | {
"domain": "codereview.stackexchange",
"id": 2286,
"tags": "c#, linq, asp.net-mvc-3"
} |
Can rosbag record only some fields from a topic? | Question:
I'm trying to do so, but when I try to play back, rosbag shows this error:
[FATAL] [1338427215.385261009]: Time is out of dual 32-bit range
This happens regardless of how long I record data. When recording the whole topic, all goes fine.
I have tried to record both the desired fields and the topic time stamp, but I get the same error.
Many thanks.
Originally posted by jorge on ROS Answers with karma: 2284 on 2012-05-30
Post score: 0
Original comments
Comment by Martin Günther on 2012-05-30:
Can you show the parameters you used for rosbag?
Comment by jorge on 2012-05-31:
nothing special; just "rosbag record /"
Answer:
Rosbag can only record full topics, not just some fields from a topic. However, if for some reason you want to omit some fields from a recorded bag file, you can use rosbag's Code API, which is pretty amazing. It should take less than 10 lines of code to filter a "full" bag file, drop some fields, and write the result to a second (reduced) bag file.
when I try to play back, rosbag shows this error:
[FATAL] [1338427215.385261009]: Time is out of dual 32-bit range
Yes, that error message is not very intuitive. :-)
What's happening is this: When you do, say:
rosbag record /scan/ranges
... rosbag thinks that you want to record the topic /scan/ranges, but that doesn't exist ("ranges" is a field of topic "scan"). That means that your resulting rosbag will be empty, and that is what the error message really means. It's misleading though, and I filed a bug report here.
Originally posted by Martin Günther with karma: 11816 on 2012-06-01
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by jorge on 2012-06-10:
Thank you a lot, that makes it all clear now. What I did was to record the whole bag and then filter the fields I need with rostopic. | {
"domain": "robotics.stackexchange",
"id": 9604,
"tags": "rosbag"
} |
To-do app front-end in Vue 3 | Question: I have put together a To-do Application with the Slim framework on the back-end (API) and a Vue 3 front-end. I added a demo on my YouTube channel.
In the main App.vue file I have:
<template>
<div id="app">
<Header
title="My todo list"
:unsolvedTodos = unsolvedTodos
/>
<List
:todos="todos"
:dataIsLoaded=dataIsLoaded
@delete-todo="deleteTodo"
@toggle-todo="toggleTodo" />
<Footer
:isValidInput=isValidInput
newTitle = ""
placeholder= "+ Add new todo"
validationMsg = "Please add at least 3 characters"
@add-todo="addTodo"
/>
</div>
</template>
<script>
import axios from 'axios'
import '@fortawesome/fontawesome-free/js/all.js';
import Header from './components/Header.vue'
import List from './components/List.vue'
import Footer from './components/Footer.vue'
export default {
name: 'App',
components: {
Header,
List,
Footer
},
data() {
return {
apiUrl: "http://todo.com/api",
dataIsLoaded: false,
isValidInput: true,
todos: [],
unsolvedTodos: [],
}
},
methods: {
showTodos: function(){
axios.get(`${this.apiUrl}/todos`)
.then((response) => {
this.todos = response.data;
})
.then(this.getUnsolvedTodos)
.then(this.dataIsLoaded = true);
},
getUnsolvedTodos: function(){
this.unsolvedTodos = this.todos.filter(todo => {
return todo.completed == 0;
});
},
toggleTodo: function(todo) {
let newStatus = todo.completed == "0" ? 1 : 0;
axios.put(`${this.apiUrl}/todo/update/${todo.id}`, {
title: todo.title,
completed: newStatus
})
},
deleteTodo: function(id) {
axios.delete(`${this.apiUrl}/todo/delete/${id}`)
},
addTodo: function(newTitle){
const newToDo = {
title: newTitle,
completed: 0
}
if(newTitle.length > 2){
this.isValidInput = true;
axios.post(`${this.apiUrl}/todo/add`, newToDo);
} else {
this.isValidInput = false;
}
}
},
created() {
this.showTodos();
},
watch: {
todos() {
this.showTodos();
}
}
}
</script>
In Header.vue:
<template>
<header>
<span class="title">{{title}}</span>
<span class="count" :class="{zero: unsolvedTodos.length === 0}">{{unsolvedTodos.length}}</span>
</header>
</template>
<script>
export default {
props: {
title: String,
unsolvedTodos: Array
},
}
</script>
In Footer.vue:
<template>
<footer>
<form @submit.prevent="addTodo()">
<input type="text" :placeholder="placeholder" v-model="newTitle">
<span class="error" v-if="!isValidInput">{{validationMsg}}</span>
</form>
</footer>
</template>
<script>
export default {
name: 'Footer',
props: {
placeholder: String,
validationMsg: String,
isValidInput: Boolean
},
data () {
return {
newTitle: '',
}
},
methods: {
addTodo() {
this.$emit('add-todo', this.newTitle)
this.newTitle = ''
}
}
}
</script>
The to-do list (List.vue):
<template>
<transition-group name="list" tag="ul" class="todo-list" v-if=dataIsLoaded>
<TodoItem v-for="(todo, index) in todos"
:key="todo.id"
:class="{done: Boolean(Number(todo.completed)), current: index == 0}"
:todo="todo"
@delete-todo="$emit('delete-todo', todo.id)"
@toggle-todo="$emit('toggle-todo', todo)"
/>
</transition-group>
<div class="loader" v-else></div>
</template>
<script>
import TodoItem from "./TodoItem.vue";
export default {
name: 'List',
components: {
TodoItem,
},
props: {
dataIsLoaded: Boolean,
todos: Array
},
emits: [
'delete-todo',
'toggle-todo'
]
}
</script>
The single to-do item (TodoItem.vue):
<template>
<li>
<input type="checkbox" :checked="Boolean(Number(todo.completed))" @change="$emit('toggle-todo', todo)" />
<span class="title">{{todo.title}}</span>
<button @click="$emit('delete-todo', todo.id)">
<i class="fas fa-trash-alt"></i>
</button>
</li>
</template>
<script>
export default {
name: 'TodoItem',
props: {
todo: Object
}
}
</script>
Questions / concerns:
Is the application well-structured?
Could the code be significantly "shortened"?
Answer:
Is the application well-structured?
On the whole it seems okay, though see the answer to the next question that means the structure could be slightly changed for the better.
Could the code be significantly "shortened"?
Simon Says: use computed properties
Like Simon suggests: use computed properties - the implementation inside getUnsolvedTodos() could be moved to a computed property, with a return instead of assigning the result from calling .filter() to a data variable. Then there is no need to call that method and set up the property within the object returned by the data function.
Promise callback consolidation
The call to axios.get() in showTodos() has multiple .then() callbacks:
showTodos: function(){
axios.get(`${this.apiUrl}/todos`)
.then((response) => {
this.todos = response.data;
})
.then(this.getUnsolvedTodos)
.then(this.dataIsLoaded = true);
},
Those can be consolidated to a single callback - especially since none of them return a promise.
showTodos: function(){
axios.get(`${this.apiUrl}/todos`)
.then((response) => {
this.todos = response.data;
this.getUnsolvedTodos(); //this can be removed - see previous section
this.dataIsLoaded = true;
});
},
While this does require one extra line, it would avoid confusion because somebody reading the code might think the statements passed to .then() should be functions that return promises.
Single-use variables
In toggleTodo the variable newStatus is only used once so it could be consolidated into the object passed to the call:
axios.put(`${this.apiUrl}/todo/update/${todo.id}`, {
title: todo.title,
completed: todo.completed == "0" ? 1 : 0
})
If that variable is kept it could be created with const instead of let since it is never re-assigned.
Passing events to parent
In List.vue the <TodoItem has these attributes:
@delete-todo="$emit('delete-todo', todo.id)"
@toggle-todo="$emit('toggle-todo', todo)"
Those seem redundant. In Vue 2 these lines could be replaced with a single line: v-on="$listeners" but apparently that was removed with Vue3. I tried replacing those lines with v-bind="$attrs" but it didn't seem to work - I will search for the VueJS 3 way to do this. | {
"domain": "codereview.stackexchange",
"id": 41568,
"tags": "javascript, html, ecmascript-6, event-handling, vue.js"
} |
Fermionic occupation operator and nearest neighbor Fermionic hopping interaction as a qubit operator | Question: How to express the Fermionic occupation operator $(\hat{a}_j^\dagger\hat{a}_j)$ and the nearest-neighbor Fermionic hopping interaction ($H_h= J\sum_{i=1}\hat{a}_i^\dagger \hat{a}_{i+1}+\hat{a}_{i+1}^\dagger \hat{a}_{i}$) as qubit operators?
Answer:
The oldest and most commonly known way is the Jordan-Wigner transformation. The qubit operators will be $\mathcal{O}(N)$-local for $N$ occupiable orbitals.
A significantly more complicated way is the Bravyi-Kitaev transformation for which the qubit operators will be $\mathcal{O}(\log N)$-local.
There are many other ways, but the above two are by far the most important to know in the early stages of your project.
You can simply transform creation and annihilation operators into "occupied" and "unoccupied" operators:
$$a_j^\dagger \to \frac{X_j - iY_j}{2}, \qquad a_j \to \frac{X_j + iY_j}{2}$$
But you also have to make sure the wavefunction satisfies the anti-symmetric property for fermions, so you have to add strings of $Z$ operators to count the number of $-1$'s (this is the Jordan-Wigner transformation, which is $n$-local):
$$a_j^\dagger \to Z_1 \cdots Z_{j-1}\,\frac{X_j - iY_j}{2}, \qquad a_j \to Z_1 \cdots Z_{j-1}\,\frac{X_j + iY_j}{2}$$
The occupation operator in the first part of your question is simply (the $Z$ strings cancel):
$$\hat{a}_j^\dagger\hat{a}_j = \frac{I - Z_j}{2}$$
To get the other operators in your question, you can just substitute the $a_j$ and $a^\dagger_j$ expressions from above; for nearest neighbors the $Z$ strings again cancel, giving $\hat{a}_i^\dagger \hat{a}_{i+1}+\hat{a}_{i+1}^\dagger \hat{a}_{i} = \tfrac{1}{2}(X_i X_{i+1} + Y_i Y_{i+1})$.
I used this source to get the equations.
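As a sanity check, the Jordan-Wigner identities discussed above can be verified numerically. Below is a small sketch (assuming one common convention: $|1\rangle$ is the occupied state and the first qubit is site 1) that builds the two-site operators with NumPy:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
a = (X + 1j * Y) / 2          # single-site annihilation: maps |1> to |0>

# Jordan-Wigner on two sites: a Z string precedes the lowering operator
a1 = np.kron(a, I2)           # site 1 (no string needed)
a2 = np.kron(Z, a)            # site 2 (Z on site 1)

# Occupation operator: a_j^dag a_j = (I - Z_j)/2
n1 = a1.conj().T @ a1
assert np.allclose(n1, np.kron((np.eye(2) - Z) / 2, I2))

# Fermionic anticommutation survives the mapping: {a1, a2} = 0
assert np.allclose(a1 @ a2 + a2 @ a1, 0)

# Nearest-neighbor hopping: a1^dag a2 + a2^dag a1 = (X X + Y Y)/2
hop = a1.conj().T @ a2 + a2.conj().T @ a1
assert np.allclose(hop, (np.kron(X, X) + np.kron(Y, Y)) / 2)
print("all identities verified")
```

The $Z$ string cancels in the hopping term because the two sites are adjacent.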
Note that if your Hamiltonian has only nearest-neighbor couplings, then the issue of the JW transformation being $N$-local can be mitigated due to cancellation when multiplying (for example) $a_j$ with $a_{j+1}^\dagger$, as noted in Norbert Schuch's comment here. | {
"domain": "quantumcomputing.stackexchange",
"id": 2534,
"tags": "hamiltonian-simulation, chemistry, solid-state"
} |
Charge on conductors | Question: In the book Concepts of Physics by HC Verma, it is written that on a conductor only one surface cannot be charged: both surfaces have to be charged in order for the net electric field inside the conductor to be zero. I had a doubt regarding this. Suppose a conductor has a positive charge only on one side. Then the positive charge will create an electric field, due to which the free electrons inside the conductor will move away from the field. So on one side there would be a positive charge and on the other a negative charge. These charges will also create an electric field in a direction opposite to that of the field from the positive charge on the surface. Hence the net electric field will become zero at some point inside the conductor. So why is a conductor charged only on one side not possible?
Answer: Conductors (here) are of two types :
1) Having net charge
2) Not having net charge (*Conductors can have a charge separation without being injected with external charges)
Let's get to the theory of why, for the electric field inside a (neutral) conductor to be zero, both sides must have charge.
Now let's see what happens if we place a 1 coulomb charge inside the conductor.
The negative charges on the left surface and positive charges on the right surface will push the 1C charge towards left.
And the Electric field will push the 1C charge towards right.
Both of these forces cancel each other out, and so the net electric field inside the conductor is 0.
It is not possible to charge only one side of the (*neutral) conducting cube. This is because the moment a charge develops on one side, the opposite charges are repelled by it to the other sides, causing charge separation on both sides.
Edit : In the second photo , it should be 'Repelled by $\color{red}{positive}$ charge on the right surface' | {
"domain": "physics.stackexchange",
"id": 68343,
"tags": "electrostatics, electric-fields, conductors"
} |
In computer science how is using passive voice regarded? | Question: A soft question: when I was in high school and in university I took scientific writing classes, and people told me I should use the passive voice as much as possible to sound objective; but when I entered grad school people told me I should not use the passive voice unless I work in Chemistry. So which one should I follow?
Answer: The bottom line advice is: Use whichever voice best aids understanding.
Many people over-use the passive voice, thinking that it makes them sound more scientific. In practice, it's more common for the passive voice to make sentences harder to follow.
Therefore, as a rough guideline, it's helpful to use the following principle: use the active voice wherever possible. If you find yourself writing in the passive voice, check whether your sentence would be clearer, simpler, or easier to understand if written in the active voice.
I recommend you read style guides on clear writing. You'll find they tend to warn against over-use of passive voice: active voice often makes sentences simpler, more direct, more forceful, more vivid, and thus both more concise and more memorable. Strunk & White have some good examples of this guideline; see, e.g., Rule 11 and Rule 12.
There are some special cases where there are conventions or norms within the field that are worth following. See, e.g.,
Use of “I”, “we” and the passive voice in a scientific thesis
Is it recommended to use “we” in research papers? | {
"domain": "cs.stackexchange",
"id": 3512,
"tags": "research"
} |
Algorithm to convert decimal number to binary | Question: I am reading this material to understand the basics of number systems.
I am stuck at a point in that material where it writes the algorithm to convert a decimal number to binary number.
The heading of that part where I am stuck is Decimal to Base
The algorithm it mentions there (possibly presented less than faithfully here; please refer to the link) is:
Let $p = \lfloor \sqrt{V} \rfloor$
Let $v = \lfloor \dfrac V {B^p} \rfloor$
(v is the next digit to the right)
Make $V = V − v * B^p$
Repeat steps 1 through 3 until $p = 0$
It is explaining by taking an example of converting decimal number 236 to binary.
I am not getting how it is calculating the 1st step, i.e. to get the value of p.
It writes that p = int(square root of V)
Now, square root of 236 = 15.36229149573721635154
As per point number 1, p = integer part of 15.36229149573721635154
So, I remove the decimal part and p then becomes 15. But the material there says it is 7.
I can't get what is happening here. I am stuck.
Answer: Just converting the comment into a short answer:
$7 = \text{int}(\log_{2} 236)$. Generally, $p = \text{int}(\log_{B}V)$.
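A quick check of that formula in Python:

```python
import math

V = 236
p = int(math.log2(V))   # int() truncates, matching int(...) above
print(p)                # 7
# p is the position of the highest set bit: 2**7 <= 236 < 2**8
assert 2 ** p <= V < 2 ** (p + 1)
```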
As other people pointed out, this algorithm is needlessly complicated and not practical; it is not easy to calculate $\log_B V$ for large $V$ by pencil and paper. Instead, use the other algorithm which is also mentioned in the article you are reading:
From decimal to binary
Step 1: Check if your number is odd or even.
Step 2: If it's even, write 0 (proceeding backwards, adding binary digits to the left of the result).
Step 3: Otherwise, if it's odd, write 1 (in the same way).
Step 4: Divide your number by 2 (dropping any fraction) and go back to step 1. Repeat until your original number is 0. | {
"domain": "cs.stackexchange",
"id": 5859,
"tags": "algorithms, number-formats, base-conversion"
} |
Error when building package of calibration_publisher while installing Autoware | Question:
In Autoware 1.12.0, I built the packages from source as in the tutorial, and I came across the following errors from calibration_publisher:
--- stderr: calibration_publisher
CMakeFiles/calibration_publisher.dir/src/calibration_publisher.cpp.o: In function `main':
calibration_publisher.cpp:(.text.startup+0x9b4): undefined reference to `cv::read(cv::FileNode const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
collect2: error: ld returned 1 exit status
make[2]: *** [devel/lib/calibration_publisher/calibration_publisher] Error 1
make[1]: *** [CMakeFiles/calibration_publisher.dir/all] Error 2
make: *** [all] Error 2
Failed <<< calibration_publisher [ Exited with code 2 ]
Aborted <<< ds4
Aborted <<< detected_objects_visualizer
Aborted <<< decision_maker_panel
I have tried many times to fix it, but nothing has worked.
Originally posted by park on ROS Answers with karma: 21 on 2019-08-31
Post score: 0
Original comments
Comment by park on 2019-09-01:
I add an empty file named "COLCON_IGNORE" in the package of calibration_publisher to avoid the bug, and finally build successfully
If calibration_publisher is ignored, though, the demo test will fail.
Comment by amc-nu on 2019-09-02:
Can you please provide information about your system? ROS version? Did you install OpenCV from source? (if so what version?).
Comment by park on 2019-09-02:
Sorry, I didn't provide enough information
Hardware Platform: Jetson AGX Xavier
OS: Ubuntu 18.04, installed by the SDK Manager
ROS: melodic, installed from sources by using installROSXavier script
OpenCV: version is 3.4.3, installed from sources by using buildOpenCVXavier script in the path /usr/local, and during install ROS, 3.2.0 is also installed in the path /usr
I also notice that the error also existed in opencv's answers camera_calibration.cpp: undefined reference to cv::read
Comment by amc-nu on 2019-09-02:
I haven’t used the Jetson Hacks’ script you mentioned. I would recommend you to try a clean install. Don’t forget to run rosdep to install the opencv version to which ros’ cv_bridge was linked.( please also have in mind that Nvidia provides its own versions of OpenCV. In the case you selected that when you set up your AGX).
Comment by park on 2019-09-02:
Yes, I have tried the version of OpenCV provided by Nvidia, but it also failed. Next step, I will try to build autoware 1.12.0 on a x86_64 platform for test
Comment by park on 2019-09-02:
The error is caused by the codes in calibration_publisher.cpp;
fs["CameraExtrinsicMat"] >> CameraExtrinsicMat;
fs["CameraMat"] >> CameraMat;
fs["DistCoeff"] >> DistCoeff;
fs["ImageSize"] >> ImageSize;
fs["DistModel"] >> DistModel;
Comment by amc-nu on 2019-09-02:
Please use apt versions of OpenCV. The easiest way is to use rosdep to install the correct ones.
Comment by park on 2019-09-02:
Yes, I have also tried that, and it doesn't seem to be a problem with the OpenCV installation.
Now, I have commented out the following line:
fs["DistModel"] >> DistModel;
and the build passed, which is very strange.
Answer:
This error occurs when the OpenCV library cannot be linked. Please refer to the address below:
https://github.com/Autoware-AI/autoware.ai/pull/2090/commits/63abbb1c4d26be67ea0312b04b6dd9918cef3978
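For reference, the fix in the linked commit amounts to making the OpenCV dependency explicit in the package's build files. A rough sketch of that kind of change in CMakeLists.txt (the exact target and variable names in the commit may differ):

```cmake
# Sketch: explicitly find OpenCV and link it to the node target
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
target_link_libraries(calibration_publisher ${OpenCV_LIBS})
```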
Originally posted by jdj with karma: 56 on 2020-11-02
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by heylypp on 2020-11-11:
Awesome! It works.
Comment by warriorUSP on 2021-08-12:
Great, this works for me too.
Basically adding opencv libraries path in CMakeLists.txt and package.xml fixed my issue. | {
"domain": "robotics.stackexchange",
"id": 33714,
"tags": "ros-melodic"
} |
problems running image_publisher node | Question:
Hi! It's the first time I have had to deal with images in ROS. As I am a beginner, I tried to run this node:
rosrun image_publisher image_publisher /opt/ros/indigo/share/rviz/images/splash.png
but I get
[ERROR] [1638817012.952972146]: Failed to load image (/opt/ros/indigo/share/rviz/images/splash.png): cap_.isOpened() onInit /tmp/binarydeb/ros-melodic-image-publisher-1.14.0/src/nodelet/image_publisher_nodelet.cpp 147
I don't have a splash.png as far as I know.
Originally posted by v.leto on ROS Answers with karma: 44 on 2021-12-06
Post score: 0
Answer:
Hi @v.leto
The issue is that the image path is incorrect.
If you are running Melodic, the file path would be:
/opt/ros/melodic/share/rviz/images/splash.png
After running rosrun image_publisher image_publisher /opt/ros/melodic/share/rviz/images/splash.png, image_publisher will print messages such as:
[ INFO] [1638830830.340437700]: File name for publishing image is : /opt/ros/melodic/share/rviz/images/splash.png
[ INFO] [1638830830.348288400]: Flip horizontal image is : false
[ INFO] [1638830830.349854100]: Flip flip_vertical image is : false
To visualize, run rqt in another terminal; under Plugins->ImageView, select the topic from the drop-down menu and you will be able to visualize the image.
Originally posted by osilva with karma: 1650 on 2021-12-06
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by v.leto on 2021-12-07:
thank you very much @osilva. Is this the way to see images with OpenCV? I mean, do I have to run rqt and then Plugins->ImageView? And if I would like to see another .png image, where should I put it?
Comment by osilva on 2021-12-07:
Yes you can see published images in OpenCV, you will need something called cvbridge. Here a link to get you started: http://wiki.ros.org/cv_bridge
Comment by osilva on 2021-12-07:
You can place your .png file in any file path, just change it:
rosrun image_publisher image_publisher <file path>
Comment by v.leto on 2021-12-07:
thanks again! | {
"domain": "robotics.stackexchange",
"id": 37221,
"tags": "ros, rviz, ros-melodic, node"
} |
Building a map with SLAM using turtlebot in gazebo | Question:
Hi,
I am trying to build a map of the environment created in gazebo using the turtlebot. I am following the tutorial
But I see that many topics are not being published. They are:
/scan
/map
/particlecloud
/move_base/TrajectoryPlannerROS/global_plan
/move_base/local_costmap/obstacles
/move_base/local_costmap/inflated_obstacles
While launching the gmapping_demo.launch file I have removed the Kinect launch, as the turtlebot simulator in Gazebo automatically launches the Kinect. I can see the point clouds in rviz.
Has anyone tried SLAM gmapping for the turtlebot in gazebo? Any help here is appreciated.
Note: I am using 11.04 and ROS electric
I think the Fake laser that is generated from the pointclouds of the kinect in turtlebot_simulation is the problem here.
I have used the robot.launch from the turtlebot_gazebo package to run the robot in the environment. Can someone help me in understanding the changes that I have to make in the launch file so that the above topics have messages being published? The following is the launch file I am using.
<launch>
<param name="robot_description" command="$(find xacro)/xacro.py '$(find turtlebot_description)/urdf/turtlebot.urdf.xacro'" />
<node name="spawn_turtlebot_model" pkg="gazebo" type="spawn_model" args="$(optenv ROBOT_INITIAL_POSE) -unpause -urdf -param robot_description -model turtlebot" respawn="false" output="screen"/>
<node pkg="diagnostic_aggregator" type="aggregator_node" name="diagnostic_aggregator" >
<rosparam command="load" file="$(find turtlebot_bringup)/config/diagnostics.yaml" />
</node>
<node pkg="robot_state_publisher" type="state_publisher" name="robot_state_publisher" output="screen">
<param name="publish_frequency" type="double" value="30.0" />
</node>
<!-- The odometry estimator -->
<node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf">
<param name="freq" value="30.0"/>
<param name="sensor_timeout" value="1.0"/>
<param name="publish_tf" value="true"/>
<param name="odom_used" value="true"/>
<param name="imu_used" value="false"/>
<param name="vo_used" value="false"/>
</node>
<!-- throttling -->
<node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager" respawn="true">
<param name="max_rate" value="20.0"/>
<remap from="cloud_in" to="/camera/depth/points"/>
<remap from="cloud_out" to="cloud_throttled"/>
</node>
<!-- Fake Laser -->
<node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager" respawn="true">
<param name="output_frame_id" value="/kinect_depth_frame"/>
<!-- heights are in the (optical?) frame of the kinect -->
<param name="min_height" value="-0.15"/>
<param name="max_height" value="0.15"/>
<remap from="cloud" to="/cloud_throttled"/>
</node>
<!-- Fake Laser (narrow one, for localization -->
<node pkg="nodelet" type="nodelet" name="kinect_laser_narrow" args="load pointcloud_to_laserscan/CloudToScan openni_manager" respawn="true">
<param name="output_frame_id" value="/kinect_depth_frame"/>
<!-- heights are in the (optical?) frame of the kinect -->
<param name="min_height" value="-0.025"/>
<param name="max_height" value="0.025"/>
<remap from="cloud" to="/cloud_throttled"/>
<remap from="scan" to="/narrow_scan"/>
</node>
</launch>
Thanks,
Karthik
Originally posted by karthik on ROS Answers with karma: 2831 on 2012-02-02
Post score: 0
Answer:
This is caused by a bug in the gazebo.urdf.xacro file. The camera frame names were not correct and therefore the gazebo plugins couldn't initialize properly. It's fixed in 330:4253a4e5f257 . It's released in turtlebot 0.9.2
Originally posted by mmwise with karma: 8372 on 2012-03-05
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 8095,
"tags": "slam, navigation, turtlebot-navigation, gmapping, turtlebot-gazebo"
} |
Suggest good example of multi-threading with ROS? | Question:
Can someone point me to a good example of multi-threading using ROS?
I would like to move calculations outside the main ROS callbackQueue, and then publish a topic after the calculations are done.
Searching wiki and answers.ros 53055, I see suggestions to do this, and to use boost::thread, but it's not clear how best to tie the results back to publishing an advertised topic.
Should I create a separate node handle, and advertise from within the worker thread?
Or, is publish/node handle thread safe so that I can call publish from within worker thread using a node handle and topic initialized/advertised in the main thread?
Originally posted by dcconner on ROS Answers with karma: 476 on 2013-02-25
Post score: 10
Original comments
Comment by Mani on 2015-12-15:
Since this question is becoming very popular, I suggest that you modify it with a minimal example scenario. It does not need to be robot related, but it should showcase your use case well. Then people can discuss that in detail and provide solutions which hopefully will turn into design patterns.
Answer:
This question is a bit old, and since I didn't get a response we worked out our own plan in the interim.
Since the question has recently got some traction, I'll briefly describe what we did.
We used boost::thread to create our worker threads.
ros was set up in the main thread, and publishers/subscribers were created per the normal tutorials.
Data coming in via subscriptions would have the const ptr stored in a lock-protected copy. The worker thread would then grab a copy of the latest const pointer at the appropriate cycle. Data processing and calculations would occur in the worker thread.
The worker thread would then use the publisher to send ROS data. Minimal processing between lock/unlock, and only publishing from one thread seemed to work well.
Using the worker thread maintained the responsiveness for handling ROS messages that were coming in at 1 kHz. Our control loop ran around 250-300 Hz.
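The same pattern (a lock-protected slot holding the latest message, with processing done in the worker thread) can be sketched without ROS in plain Python; all names here are illustrative, not from the original controller:

```python
import threading
import time

class LatestValue:
    """Lock-protected holder for the most recent message."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def set(self, value):          # called from the subscriber callback
        with self._lock:
            self._value = value

    def get(self):                 # called from the worker thread
        with self._lock:
            return self._value     # read the latest value under the lock...

latest = LatestValue()
results = []
stop = threading.Event()

def worker():
    while not stop.is_set():
        msg = latest.get()
        if msg is not None:
            results.append(msg * 2)   # ...process (and publish) outside it
        time.sleep(0.001)

t = threading.Thread(target=worker)
t.start()
for i in range(5):                 # stands in for incoming subscription callbacks
    latest.set(i)
    time.sleep(0.01)
time.sleep(0.05)                   # let the worker see the final message
stop.set()
t.join()
assert 8 in results                # the last message (4) was processed
```

Keeping the critical section to a bare read or write mirrors the "minimal processing between lock/unlock" point above.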
Originally posted by dcconner with karma: 476 on 2016-02-12
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by uwleahcim on 2016-04-19:
Do you have an example code with these optimizations? Currently I used a class to track data and keep my callbacks as fast as possible with heavy calculations done in the main loop.
Comment by dcconner on 2016-07-06:
Unfortunately, our main controller contained proprietary data and we could not open source it. | {
"domain": "robotics.stackexchange",
"id": 13059,
"tags": "ros, multithreading"
} |
Are biases necessary to make neural networks universal approximators when using sigmoid activations? | Question: In a neural network, a bias is a constant term that is added to the weighted input in a neuron/unit:
output = activation_function( input1*weight1 + ... + inputn*weightn + bias)
I can see that the bias in a sigmoid activation function adds the ability to control the threshold of the activation. But when I started learning about neural nets we didn't use any bias, just the weighted inputs. I've also been told that sigmoid in particular can do without a bias, but this is not intuitively true for me at all.
So is it true that biases are redundant in sigmoid neural nets? How can neural nets learn to approximate any continuous function if they do not have a bias input?
Answer: No, you don't need a bias. You can have a "dummy" input (input(n+1) in your formulation) which is always set to 1. Then the bias term is absorbed into the weights. | {
"domain": "cstheory.stackexchange",
"id": 3699,
"tags": "machine-learning, ne.neural-evol"
} |
Card Deck class for a Poker game - version 2 | Question: This code was first critiqued here (without the Card class): Card Deck class for a Poker game
After learning more about data structures both online and in class, I wanted to revisit my Card and Deck classes and see if I can still make them better. Based on my driver tests, they are doing nearly everything I want them to do.
However, I'm not sure if all my operations between these two classes and the interface are proper, even though they do give proper results.
For instance:
Is it still (theoretically) okay to return a blank Card if there are no more cards to return from the empty Deck?
Do I still need to maintain my Rule of Three, even though I don't do any memory allocation in either class?
These are my main concerns, but I'd like a review of all the code in case there's something I've overlooked and/or if some parts can be cleaned.
Card.h
#ifndef CARD_H
#define CARD_H
#include <iostream>
#include <string>
class Card
{
private:
unsigned rankValue;
char rank;
char suit;
std::string card;
public:
Card();
Card(char, char);
Card(const Card&);
~Card();
char getRank() const {return rank;}
char getSuit() const {return suit;}
std::string getCard() const {return card;}
bool operator<(const Card &rhs) const {return (rank < rhs.rank);}
bool operator>(const Card &rhs) const {return (rank > rhs.rank);}
bool operator==(const Card &rhs) const {return (suit == rhs.suit);}
bool operator!=(const Card &rhs) const {return (suit != rhs.suit);}
Card& operator=(const Card &obj);
friend std::ostream& operator<<(std::ostream&, const Card&);
};
#endif
Card.cpp
#include "Card.h"
Card::Card() : rankValue(0), rank('*'), suit('*'), card("**") {}
Card::Card(const Card &obj)
{
rankValue = obj.rankValue;
rank = obj.rank;
suit = obj.suit;
card = obj.card;
}
Card::~Card() {}
Card::Card(char rank, char suit)
{
this->rank = rank;
this->suit = suit;
card += rank;
card += suit;
if (rank == 'A')
rankValue = 1;
else if (rank == '2')
rankValue = 2;
else if (rank == '3')
rankValue = 3;
else if (rank == '4')
rankValue = 4;
else if (rank == '5')
rankValue = 5;
else if (rank == '6')
rankValue = 6;
else if (rank == '7')
rankValue = 7;
else if (rank == '8')
rankValue = 8;
else if (rank == '9')
rankValue = 9;
else if (rank == 'T')
rankValue = 10;
else if (rank == 'J')
rankValue = 11;
else if (rank == 'Q')
rankValue = 12;
else if (rank == 'K')
rankValue = 13;
}
Card &Card::operator=(const Card &obj)
{
// if not self-assignment and lhs is a Blank card
if (this != &obj && this->card == "**")
{
this->card = obj.card;
}
return *this;
}
std::ostream& operator<<(std::ostream &out, const Card &aCard)
{
out << '[' << aCard.card << ']';
return out;
}
Deck.h
#ifndef DECK_H
#define DECK_H
#include <array>
#include "Card.h"
class Deck
{
private:
static const unsigned MAX_SIZE = 52;
int topCardIndex;
std::array<Card, MAX_SIZE> cards;
void build();
public:
Deck();
void shuffle();
Card deal();
unsigned size() const {return topCardIndex+1;}
bool empty() const {return topCardIndex == -1;}
friend std::ostream& operator<<(std::ostream&, const Deck&);
};
#endif
Deck.cpp
#include <algorithm>
#include "Deck.h"
Deck::Deck() : topCardIndex(-1)
{
build();
shuffle();
}
void Deck::build()
{
const unsigned NUMBER_OF_RANKS = 13;
const unsigned NUMBER_OF_SUITS = 4;
const char RANKS[NUMBER_OF_RANKS] = {'A','2','3','4','5','6','7','8','9','T','J','Q','K'};
const char SUITS[NUMBER_OF_SUITS] = {'H','D','C','S'};
for (unsigned rank = 0; rank < NUMBER_OF_RANKS; ++rank)
{
for (unsigned suit = 0; suit < NUMBER_OF_SUITS; ++suit)
{
Card newCard(RANKS[rank], SUITS[suit]);
topCardIndex++;
cards[topCardIndex] = newCard;
}
}
}
void Deck::shuffle()
{
topCardIndex = MAX_SIZE-1;
std::random_shuffle(&cards[0], &cards[topCardIndex]);
}
Card Deck::deal()
{
if (empty())
{
std::cerr << "\nDECK IS EMPTY\n";
Card blankCard;
return blankCard;
}
topCardIndex--;
return cards[topCardIndex+1];
}
std::ostream& operator<<(std::ostream &out, const Deck &aDeck)
{
for (unsigned iter = aDeck.size(); iter--> 0;)
{
out << aDeck.cards[iter] << "\n";
}
return out;
}
Answer: To add to the already suggested improvements:
You have:
bool operator<(const Card &rhs) const {return (rank < rhs.rank);}
This does not sound right to me.
If you ever decide to create a set of Cards or sort a collection of Cards, I assume you are going to rely on this function. In order to sort a collection of Cards, you have to answer the following questions:
Is an Ace to be put before a Two or after a King?
Is an Ace of Diamonds to be equal in ordering to an Ace of Clubs?
Is a Two of Diamonds to be less than or greater than a Three of Clubs?
Assuming the following answers:
An Ace is to be put before a Two.
Use the following order of suits: Clubs, Diamonds, Hearts, Spades (the ordering used in the game Bridge). By that logic, an Ace of Diamonds is greater than an Ace of Clubs.
Using the ordering used in Bridge, a Two of Diamonds is greater than a Three of Clubs.
You'll have to update your operator< function to:
bool operator<(const Card &rhs) const
{
if ( this->suit != rhs.suit )
{
// Suits are 'C', 'D', 'H', and 'S'.
// Lucky coincidence.
return (this->suit < rhs.suit);
}
// Need to use rankValue, which are ordered 1-13, not rank.
return (this->rankValue < rhs.rankValue);
}
Also, I would change the implementation of the constructor
Card::Card(char rank, char suit)
to
Card::Card(char rank, char suit) : rank(rank),
suit(suit),
rankValue(getRankValue(rank))
// Initialize members in
// the initializer list whenever possible.
{
}
And move the logic of getting the ordered rank value from the input rank to a helper function. In the function, use a switch instead of an if-elseif logic.
static int getRankValue(char rank)
{
switch (rank)
{
case 'A':
return 1;
case '2':
case '3':
case '4':
case '5':
case '6':
case '7':
case '8':
case '9':
return rank-'0';
case 'T':
return 10;
case 'J':
return 11;
case 'Q':
return 12;
case 'K':
return 13;
default:
return -1;
}
} | {
"domain": "codereview.stackexchange",
"id": 11117,
"tags": "c++, c++11, classes, playing-cards"
} |
What do people mean by gauge invariance of the normalization of field? | Question: Lets have the scalar Klein-Gordon field interacting with EM field:
$$
L = \partial_{\mu}\varphi \partial^{\mu}\varphi - m^2\varphi \varphi^{*} - j_{\mu}A^{\mu} + q^{2} A_{\mu}A^{\mu}\varphi \varphi^{*} - \frac{1}{4}F_{\mu \nu}F^{\mu \nu}. \qquad (1)
$$
I heard that the normalization of the Klein-Gordon field in the theory $(1)$ is invariant under gauge transformations. What normalization is meant? Does it refer to the factor $\frac{1}{\sqrt{2(2 \pi)^{3} E_{\mathbf p}}}$? How can it be proved?
An edit.
It was the invariance of condition $\int j^{0}d^{3}\mathbf r = q$ under $U(1)$ local gauge transformations. $j^{0} = \frac{q}{2m}(\psi^{*}\partial^{0}\psi - \psi \partial^{0}\psi^{*}) - \frac{q^2}{m}A^{0}|\Psi |^{2}$.
Answer: I can't be sure what the source meant without seeing the context, however I suspect the author meant the following. A $ U(1) $ gauge transformation acting on a charged scalar field gives:
\begin{equation}
\phi (x) \rightarrow e ^{ i \alpha (x) } \phi (x)
\end{equation}
Under such a transformation the normalization is invariant since $\phi$ simply gains a phase. This is just the definition of a $U(1)$ rotation. | {
"domain": "physics.stackexchange",
"id": 11993,
"tags": "klein-gordon-equation, gauge-invariance, gauge-symmetry"
} |
Sampling rate in Wifi 802.11 -> 20 MHz enough with a BW of 20 MHz? | Question: In a Wifi 802.11n context, I'm looking to find the sampling rate to perform some processing on the signal and I've something I cannot understand : in the book Next Generation Wireless LAN's: 802.11n and 802.11ac, I would like to know the meaning of this sentence :
In 802.11a, the fundamental sampling rate is 20 MHz, with a 64-point
FFT/IFFT. The Fourier transform symbol period, T, is 3.2 μs in
duration and F is 312.5 kHz.
Why is the sampling rate "20 MHz"? The bandwidth of Wifi 802.11 is 20 MHz, so my first idea would be to double this value to satisfy the Nyquist-Shannon criterion. In addition, is the guard interval not included?
Answer: The radio-frequency signal of 802.11n can have a bandwidth of 20 MHz or 40 MHz. But even if it is 20 MHz, it can be sampled at 20 MSa/s if an I/Q demodulator is used: at the receiver the radio-frequency signal is downconverted to the baseband. The I/Q demodulator has two lowpass output signals (In-phase/Quadrature, or real/imaginary part) with a bandwidth of 10 MHz each. Two samplers with 20 MSa/s each are then sufficient to fulfill the sampling criterion.
There are other receiver architectures where the received signal is first downconverted to an intermediate frequency (for example 10 MHz) and is then sampled at a higher rate (for example 40 MSa/s). In this case only one sampler is necessary.
The guard interval has only a negligible influence on the signal bandwidth. It is not included in the "Fourier transform period $T$", because it is only added after the transform. The symbol duration after the transmitter IFFT is therefore
$$
T = N/f_\mathrm s = 64/(20\,\text{MHz}) = 3.2\,\mathrm{\mu s}
$$ | {
"domain": "dsp.stackexchange",
"id": 2404,
"tags": "sampling, digital-communications, ofdm, bandwidth"
} |
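The numbers in the answer above can be sanity-checked directly from the quoted parameters (64-point FFT, 20 MHz sampling rate):

```python
N = 64       # FFT size (802.11a)
fs = 20e6    # fundamental sampling rate, Hz

T = N / fs   # Fourier transform symbol period
F = fs / N   # subcarrier spacing, equivalently 1/T

assert abs(T - 3.2e-6) < 1e-15    # 3.2 us, as quoted
assert abs(F - 312.5e3) < 1e-6    # 312.5 kHz, as quoted
```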
Software for mission planning in multi-robot systems | Question: I am interested in mission planning for multi-robot systems. Given multiple robots and multiple tasks and an environment, I need to specify missions and software should plan for the robot team to accomplish the mission.
To be more precise, tasks are just a bunch of waypoints, with or without time-stamps or deadlines. More elaborately, a task in the abstract sense is something like "patrol location A using two robots", which essentially can be coded up as two sets of waypoints or trajectories for the robots; hence the assertion above that tasks can just be viewed as a bunch of waypoints.
So there are multiple tasks and what tasks have to be executed in which order has to be planned by user or software so as to fulfil a mission. I am looking for Github repositories where people have tackled such problems to take inspiration. I am open to any software framework.
As a prime example of the kind of work or software I am looking for, FLYAQ - An open source platform for mission planning of autonomous quadrotors is an example. Please share any code or PDF links, if possible.
Answer: First, it's important to note that depending on your problem specifications, we may not know of any good algorithms to solve it - this is especially true when you start to add in more problem constraints.
There are three levels of abstraction here - your question primarily deals with the highest level, but FLYAQ covers all three so I include them here as well.
High level: Task assignment and ordering
Given a list of tasks, constraints, and costs, we first find the best set of ordered sets of waypoints for each robot in your team that respects the constraints. This is generally called the "vehicle routing problem", and it has many variants. One open source solver is OptaPlanner. HeuristicLab also has an impressively mature software stack for these problems. You could also consider solving these problems using a linear program solver such as Gurobi or open source alternatives. If you have a more specific problem in mind, create another question and I'll post an answer (this is my area of research).
Mid-level: Trajectory planning
Once a robot has been assigned an ordered set of waypoints to visit, it still needs to decide how to connect the dots. Depending on your hardware this might already be abstracted away (e.g. with MotionPlanner and Ardu* planners). If you need to solve this problem, there are a few very good libraries for it, such as the Open Motion Planning Library and Drake. The output of this layer is typically a polynomial or spline representation of a trajectory that is guaranteed to be collision free.
Low level: Rate control
At the lowest level the robot needs to translate a desired trajectory into rate commands for its actuators. This is entirely dependent on your hardware platform, and is often abstracted away. If you already have hardware, then see if a google search yields an example of how to go from a waypoint list or trajectory to rate commands.
Putting everything together
A typical software set-up will separate each level of abstraction into different applications, since they run at different rates (typically high level is ~0.1-1Hz, mid level is 1-10Hz, low level is 10-100Hz). One very popular middleware is the Robot Operating System, which has an active community and many tutorials.
This answer is vague by necessity - your question covers a range of topics which has thousands of published work on. If you have a more specific problem or abstraction layer in mind, there are definitely more specific answers available. | {
"domain": "robotics.stackexchange",
"id": 1403,
"tags": "mobile-robot, motion-planning, algorithm, multi-agent, reference-request"
} |
Expression for the gradient of gravity potential in spherical coordinates | Question: I am currently using the GGM05C Stokes' coefficients to reconstruct the gradient of gravity potential of Earth.
I have found an expression for said gradient in spherical coordinates in this technical report by the ICGEM. In particular, equation 122 on page 23 shows the partial derivatives of the gravity potential $W$ with respect to the three spherical coordinates parameters $r$ (distance to center), $\lambda$ (longitude) and $\varphi$ (geocentric latitude).
These equations involve the use of associated Legendre functions $P_{\ell m}$, which are a function of the latitude $\varphi$. I understand then that, when performing the partial derivatives with respect to $r$, $\varphi$ and $\lambda$, the Legendre functions remain unaltered in the derivatives with respect to $r$ and $\lambda$, since the Legendre functions are not a function of either $r$ or $\lambda$. However, since they are a function of the latitude $\varphi$, when we calculate $\dfrac{\partial W}{\partial \varphi}$, we need to differentiate the associated Legendre functions, obtaining the following expression, as indicated in equation 122 of the linked document:
$$
\dfrac{\partial W}{\partial \varphi} = \frac{GM}{r}\sum_{\ell=0}^{\ell_{max}}\left(\frac{R}{r}\right)^{\ell}\sum_{m=0}^{\ell}\dfrac{\partial P_{\ell m}(\sin\varphi)}{\partial \varphi}\left(C_{\ell m}^{W}\cos(m\lambda)+S_{\ell m}^{W}\sin(m\lambda)\right)
$$
However, following the chain rule, shouldn't this derivative also include a multiplication by $\cos\varphi$, since that is the derivative of the $\sin\varphi$ nested within $P_{\ell m}(\sin\varphi)$?
Answer: The chain rule is already included, since the derivative is taken with respect to $\varphi$. Note that
\begin{equation}
\frac{\partial P_{\ell m}(\sin(\varphi))}{\partial \varphi} = \frac{\partial P_{\ell m}(\sin(\varphi))}{\partial \sin(\varphi)} \frac{\partial \sin(\varphi)}{\partial \varphi}.
\end{equation} | {
"domain": "earthscience.stackexchange",
"id": 2431,
"tags": "gravity"
} |
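The answer's point, that the $\cos\varphi$ factor is already contained in a derivative taken with respect to $\varphi$ itself, can be checked numerically. The sketch below uses the low-degree Legendre polynomial $P_2(x) = (3x^2-1)/2$ (chosen here purely for illustration) and compares a finite-difference derivative of $P_2(\sin\varphi)$ with the chain-rule expression $P_2'(\sin\varphi)\cos\varphi$:

```python
import math

def P2(x):
    # Legendre polynomial P_2(x) = (3x^2 - 1)/2
    return (3 * x**2 - 1) / 2

def P2_prime(x):
    # dP_2/dx = 3x
    return 3 * x

phi = 0.7   # arbitrary latitude, radians
h = 1e-6

# d/dphi of P2(sin(phi)) by central finite difference ...
numeric = (P2(math.sin(phi + h)) - P2(math.sin(phi - h))) / (2 * h)
# ... equals P2'(sin(phi)) * cos(phi) by the chain rule
analytic = P2_prime(math.sin(phi)) * math.cos(phi)

assert abs(numeric - analytic) < 1e-8
```

So writing the derivative as $\partial P_{\ell m}(\sin\varphi)/\partial\varphi$ already bakes in the $\cos\varphi$; only if the derivative were taken with respect to $\sin\varphi$ would an explicit factor be needed.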
What is the general way for the slab loading to be distributed to adjacent beams? | Question: From what I know, for a nice rectangular slab with 4 edges bounded by beams, one can distribute the slab loading to the adjacent beams via the Tributary Area method.
But what if the slab is irregular, or there are arbitrary line loads on the slab, or the slab is not uniform, or not all sides of the slab are covered by beams? How should the slab's dead load and live load, and any other load on the slab, be distributed to the beams? Is there a general way?
Answer: Extension of my comment to an answer, feel free to edit and expand it!
The most general method would be to use a numerical model that includes the slab as well, so that the load-transfer mechanism is captured by the model. Within the finite element (FEM) paradigm, plate elements would suffice. With today's computers and FEM applications it can be easily solved.
I do not know about your background and experience but using plate elements with line elements to model slabs with beams is quite widespread. That is why I just mentioned FEM without any further details. If you could provide more details on your experience or formulate more specific questions, e.g. how to model the eccentricity of beams, beam slab connection, then we might be able to give equally specific answers.
Here are two practitioners' guides on modelling of reinforced concrete slabs:
Recommendations for finite element analysis for the
design of reinforced concrete slabs
How to Design r.c. Flat Slabs Using Finite Element Analysis
Regarding how the tributary approach compares with FEM plate model for a simple case:
A Case Study Comparing Two Approaches for Applying Area Loads: Tributary Area Loads vs Shell Pressure Loads
Based on their simple example for a particular slab-to-beam stiffness ratio:
The tributary area method is on the safe side and yields about 20% overestimation of the maximal bending moment and shear force.
These are based on a single, simple example thus the results are not conclusive, at best only indicative.
Since the structure type was not mentioned and maybe you have other than a building structure in mind, here are two relevant books on bridge decks:
O'Brien and Keogh: Bridge Deck Analysis
Hambly: Bridge Deck Behaviour | {
"domain": "engineering.stackexchange",
"id": 815,
"tags": "structural-analysis"
} |
On Calculating Expectation Values in Sakurai's Modern Quantum Mechanics | Question: Expectation value $\langle A\rangle=\langle \alpha|A|\alpha\rangle$. Which can be written as $\langle A\rangle = \sum_{a}\sum_{b} \langle\alpha|b\rangle \langle b|A|a\rangle\langle a|\alpha\rangle$.
How is $\langle A\rangle = \sum_{a}\sum_{b} \langle\alpha|b\rangle \langle b|A|a\rangle\langle a|\alpha\rangle=\sum_{a} a|\langle a|\alpha\rangle|^2$?,
where $A = \sum_{a} a\Lambda_{a}$
and $\sum_{a}\Lambda_{a} = \sum_{a} |a\rangle\langle a|=1$
Answer: Notice that the definitions imply $A |a\rangle = a |a\rangle$, $|\langle a | \alpha \rangle|^2 = \langle a | \alpha \rangle\langle \alpha | a \rangle$ and $\sum_b |b\rangle\langle b | = 1$. Substituting these expressions on the expression you provided for $\langle A \rangle$ leads to the desired conclusion. | {
"domain": "physics.stackexchange",
"id": 84687,
"tags": "quantum-mechanics"
} |
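The identity $\langle\alpha|A|\alpha\rangle = \sum_a a\,|\langle a|\alpha\rangle|^2$ can be verified with a small concrete case; the 2×2 observable and state below are arbitrary choices for illustration, with eigenvalues and eigenvectors written out by hand:

```python
import math

# Observable A = [[2, 1], [1, 2]]: eigenvalues 1 and 3, with orthonormal
# eigenvectors (1, -1)/sqrt(2) and (1, 1)/sqrt(2).
eigen = [(1.0, (1 / math.sqrt(2), -1 / math.sqrt(2))),
         (3.0, (1 / math.sqrt(2),  1 / math.sqrt(2)))]

alpha = (0.6, 0.8)  # normalized real state |alpha>

# Direct computation: <alpha| A |alpha>
A = [[2.0, 1.0], [1.0, 2.0]]
A_alpha = [sum(A[i][j] * alpha[j] for j in range(2)) for i in range(2)]
direct = sum(alpha[i] * A_alpha[i] for i in range(2))

# Spectral form: sum over eigenvalues a of  a * |<a|alpha>|^2
spectral = sum(a * (v[0] * alpha[0] + v[1] * alpha[1]) ** 2 for a, v in eigen)

assert abs(direct - spectral) < 1e-9
```

Both sides evaluate to 2.96 here, as the spectral decomposition guarantees.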
Getting structure and sequence related to PDB IDs | Question: I have two kinds of interactions: transient and stable. We are supposed to work on stable interaction, like interactions between two monomers in a heterodimer.
In a heterodimer there are two chains in which there are some residues between monomers which take part in the interaction of monomers and building the structure of heterodimer, and there are other residues interacting inside each monomer for building the structure of that monomer. There are some PDB IDs related to heterodimers in PDB.
Now imagine that we are trying to show which residues are physically interacting between two monomers in a heterodimer, and then we are trying two show the site of these physically interacting residues in the sequence related to that special structure. For that, we need to find the sequence related to that structure and we can download it from PDB.
But if we want to make a multiple sequence alignment of that sequence, first we need to find the refseq sequence of that and then we must blast the sequence and find homologues of that sequence.
Here there are some challenges:
we must find residues interacting in the structure of heterodimer between two monomers.
we must find the sequences of monomer chains of the heterodimer and map the interacting residues in the structure to the sequence.
we know that when a structure is solved and its monomers are sequenced, may be they are not able to completely sequence the heterodimer and residue numbers are maybe different from the related refseq sequence.
I would like to know how can we find the related refseq sequence (or sequences, in the case of heterodimer) and then how can we map the physically interacting residues in the structure of PDB ID to refseq sequence (finding the sites of these interacting residues in that refseq sequence) regarding three challenges mentioned above.
Imagine that I have 100 PDB IDs of 100 nonredundant heterodimers and I would like to find and download the structures, sequence and finally refseq sequence. What should I do?
Answer: You can download seqs and structures based on a list of PDB ids using http://www.rcsb.org/pdb/download/download.do#FASTA | {
"domain": "bioinformatics.stackexchange",
"id": 348,
"tags": "protein-structure"
} |
(Co)-induction, fixpoints and inference systems | Question: I'm learning about induction and co-induction. From what I know, given a set of judgments $U$ and an inference system $\Phi \subseteq \wp(U) \times U$, where $(\left\{ h_1,\dots,h_n \right\}, c) \in \Phi$ stands for the rule $\frac{h_1 \quad \dots \quad h_{n}}{c}$, we can define the function:
$ F_{\Phi}(X) = \left\{ c \mid \exists (H, c) \in \Phi \;.\; H \subseteq X \right\}, $
and the least fixpoint $\mu F_{\Phi}$ is the set of all judgments that have a finite proof in $\Phi$.
The induction principle states that $F_{\Phi}(X) \subseteq X \Rightarrow \mu F_{\Phi} \subseteq X$. It is a consequence of the Knaster-Tarski theorem, which requires that $F_{\Phi}$ be monotone.
However, not all inference systems seem to be monotone; they would be if they contained "identity" rules of the form $(c,c)$ for all $c \in U$. For instance, take $U = \left\{ 0,1 \right\}$ and $\Phi = \left\{ \frac{}{0}, \frac{0}{1} \right\}$: $F_{\Phi}(\varnothing) = \{0\}$ and $F_{\Phi}(\{1\}) = \varnothing$, so $F_{\Phi}$ is not monotone.
On the other hand, if an inference system has identity rules then co-induction is meaningless, since the greatest fixpoint of $F_{\Phi}$ is $U$ in that case.
There's clearly some flaw in my understanding, I would be grateful if somebody could point out my mistakes.
Answer: Intuitively, you can see that $F_\Phi(X)$ is monotone in $X$ by carefully looking at the body of the definition
$$ \exists (H, c)\in \Phi,\ H\subseteq X$$
If there is an $H, c$ with $H\subseteq X$, then definitely the same $H$ will work if $X$ is taken to be larger!
You are correct about the identity deduction rules making co-induction trivial. | {
"domain": "cs.stackexchange",
"id": 21622,
"tags": "logic, fixed-point, coinduction"
} |
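The operator $F_\Phi$ for the two-rule example is small enough to compute exhaustively. The sketch below (plain Python) shows that $F_\Phi(\{1\})$ is actually $\{0\}$, not $\varnothing$, because the axiom's empty premise set is contained in every $X$; monotonicity therefore holds on all subsets, and the least fixpoint is reached by iterating from the empty set.

```python
from itertools import chain, combinations

U = {0, 1}
# Phi = { (emptyset, 0), ({0}, 1) }, i.e. the rules  ---/0  and  0/1
Phi = [(frozenset(), 0), (frozenset({0}), 1)]

def F(X):
    # F_Phi(X) = { c : exists (H, c) in Phi with H subset of X }
    return {c for (H, c) in Phi if H <= X}

def powerset(s):
    s = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# The claimed counterexample fails: F({1}) = {0}, since the axiom ---/0
# has an empty premise set, which is a subset of any X.
assert F({1}) == {0}

# Monotonicity holds for every pair of subsets of U.
for X in powerset(U):
    for Y in powerset(U):
        if X <= Y:
            assert F(X) <= F(Y)

# Least fixpoint by iteration: {} -> {0} -> {0, 1} -> {0, 1}.
X = set()
while F(X) != X:
    X = F(X)
assert X == {0, 1}
```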
Not direct/inverse proportion implies systematic error | Question: We were doing error-analysis and my physics teacher said:
A relationship between physical constants is either direct proportion or inverse proportion. If these are not true then there is a systematic error.
The wording may be different but I remember him saying something like this.
Does anyone (due to having more experience than me) know what this might mean? Even if it is technically wrong, I'm sure he meant something else and I want to know what he was referring to. (Yes I can ask on Monday but I don't want to wait until then.)
Thanks!
Answer: From a trivial point of view, and in line with what you say, $x \propto y \iff x=k_{1}y$ for some constant $k_{1}$. On the other hand, $x \propto \frac{1}{y} \iff x=\frac{k_{2}}{y}$ for some constant $k_{2}$. If neither of these relationships holds, then there may well be an error in the assumption of proportionality.
"domain": "physics.stackexchange",
"id": 21895,
"tags": "experimental-physics, error-analysis"
} |
Is work is equal to $mv^2$ (without $\frac{1}{2}$)? | Question: I was trying to come up with an equation for work that doesn't include time, because I don't know time.
Here's what I did:
$$ work = Fd = mad = m{v\over t}d = m{v\over\left({d\over v}\right)}d = mv^2 $$
Now, $mv^2$ is familiar, from the kinetic energy equation I think, but I believe that was $\frac{1}{2}mv^2$.
Where did I go wrong?
Answer: Remember how work is defined. The key word is displacement. I think that when you wrote $Fd$, you considered $d$ to define a single point, and not displacement. Now back to your problem.
The work done by a net force is $$W=Fd=mad=ma\left(\frac{v_{f}^{2}-v_{i}^{2}}{2a}\right)=\frac{mv_{f}^{2}}{2}-\frac{mv_{i}^{2}}{2}=\Delta E_{c}$$
The single difference is that I considered $d$ to be the particle's displacement (i.e. $v_{f}^{2}=v_{i}^{2}+2ad$).
"domain": "physics.stackexchange",
"id": 15942,
"tags": "homework-and-exercises, newtonian-mechanics, energy, work"
} |
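The step the question is missing is that $d$ must be the displacement accumulated while the speed changes, $d = (v_f^2 - v_i^2)/(2a)$, rather than $d = vt$ with a single fixed $v$. A quick numeric check (arbitrary values) confirms the work-energy theorem with the factor $\frac{1}{2}$:

```python
m, a = 2.0, 3.0    # mass (kg) and constant acceleration (m/s^2), arbitrary
vi, vf = 1.0, 5.0  # initial and final speeds (m/s)

d = (vf**2 - vi**2) / (2 * a)  # displacement from v_f^2 = v_i^2 + 2 a d
W = m * a * d                  # work done by the net force F = m a

delta_KE = 0.5 * m * vf**2 - 0.5 * m * vi**2
assert abs(W - delta_KE) < 1e-12   # W = change in kinetic energy
```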
Pointcloud data to mesh conversion | Question:
Hi,
I would like to create a mesh out of a pointcloud which I keep in a list, let's say. Is this really possible? If so, does anyone know a way to do it?
Originally posted by Jägermeister on ROS Answers with karma: 81 on 2019-05-10
Post score: 1
Answer:
This can be done with functionality available in PCL; here's an example of using it: cloud_to_mesh_node, cloud_to_mesh_ros.h. This node converts a point cloud it receives on the ~/cloud topic to a mesh and publishes that as a Marker or Shape message.
Using PCL functionality like this might not be the most efficient method though. For real-time use you might want to look into using other tools like voxblox. This builds a TSDF or ESDF representation from sensor data and can efficiently mesh it in real-time.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2019-05-10
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Jägermeister on 2019-05-28:
Thanks! Voxblox looks impressive, I'll take a look into it!
Comment by askkvn on 2020-04-27:
@Jägermeister, have you tried Voxblox to create mesh file? I am also working on same problem. please share your solution if you have made it? | {
"domain": "robotics.stackexchange",
"id": 32998,
"tags": "ros-melodic, mesh, pointcloud"
} |
Are fatty acids and glycerol lipids? | Question: As far as I know, lipids are defined as biomolecules which are hydrophobic.
Triglycerides are composed of fatty acids and glycerol and are considered lipids, but are fatty acids alone or glycerol alone considered lipids?
Answer: From IUPAC Goldbook:
A loosely defined term for substances of biological origin that are
soluble in nonpolar solvents. They consist of saponifiable lipids,
such as glycerides (fats and oils) and phospholipids, as well as
nonsaponifiable lipids, principally steroids.
That means that lipid does not mean any hydrophobic molecule.
Lipids can be amphipathic as you already know about fatty acids.
Glycerol is not a lipid; neither is it hydrophobic. If glycerol were to be classified into one of the biomolecular classes, it would be that of the monosaccharides (in the form of a sugar alcohol, like sorbitol and xylitol).
"domain": "biology.stackexchange",
"id": 3062,
"tags": "biochemistry, lipids"
} |
Is there a way to merge bag files? | Question:
I have data across multiple bag files, but I don't care about absolute timestamps, just relative time. I would like to merge the separate bag files into one bag file. For example, if I was recording images to a bag file, then later recorded more images to a different bag file, I would like to combine these bag files into a single one. Right now, I am playing them back and recording them into another bag file, but this seems clunky.
Originally posted by cmansley on ROS Answers with karma: 198 on 2011-07-21
Post score: 9
Original comments
Comment by lucasw on 2022-06-24:
https://www.clearpathrobotics.com/assets/downloads/support/merge_bag.py doesn't do the time offsetting but does merge two or more bag files
Answer:
You can check out the rosbag Code API and the associated cookbook; you should be able to do exactly what you want pretty quickly.
Originally posted by tfoote with karma: 58457 on 2011-07-21
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by cmansley on 2011-07-21:
Thanks! I ended up doing just that. I just felt like there should be a nice way to merge two bag files with the same topics. I can see this being helpful for many different types of data. Something like rosbag merge *.bag /topic | {
"domain": "robotics.stackexchange",
"id": 6222,
"tags": "ros, rosbag, rxbag"
} |
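For reference, the core of a merge script built on the rosbag Code API is just reading (topic, msg, t) records from each input bag and writing them, time-ordered, into an output bag. Since the rosbag library isn't assumed to be available here, the sketch below demonstrates the same merge logic with plain tuples standing in for the records a real `Bag.read_messages()` call would yield.

```python
import heapq

# Stand-ins for two recorded bags: time-sorted (timestamp, topic, msg) records.
bag_a = [(0.0, "/camera/image", "imgA0"), (0.5, "/camera/image", "imgA1")]
bag_b = [(0.2, "/camera/image", "imgB0"), (0.7, "/camera/image", "imgB1")]

# Merge by timestamp, like playing both bags back and re-recording them,
# but without any replay.  heapq.merge keeps the result time-ordered.
merged = list(heapq.merge(bag_a, bag_b))

assert [m[2] for m in merged] == ["imgA0", "imgB0", "imgA1", "imgB1"]
```

In a real script each record would be written with `out_bag.write(topic, msg, t)`; since the question only cares about relative time, the timestamps could also be offset per bag before merging.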
Why is Clique NP-complete while k-Clique is in P for all k? | Question: I just stumbled upon this question here Why is the clique problem NP-complete?
and I am confused by the given answer.
The question is about whether $k$-clique is NP-hard for a fixed $k$, and the answer is no. However, we know that clique in general is NP-hard. Also, we know that $0 \leq k \leq n$ must hold (for a graph of size $n$).
What is wrong about the following reasoning:
Assume I have some algorithm $A_i$ that solves $i$-clique for some fixed $i$ in polynomial time.
Now, given some general clique problem without a specific $k$ (these problems are supposedly NP-hard), I simply run $A_i$ for $i=1$ to $n$. Since there are only $n$ runs and every $A_i$ runs in polynomial time $p_i$, the resulting algorithm also runs in polynomial time $\sum_i p_i$, but this cannot be the case, since clique is NP-hard.
What is wrong about my reasoning?
Answer: The polynomial depends on the parameter $k$. In particular, the algorithm that people have in mind runs in time $O(n^k)$ (better algorithms might exist, but I believe we don't know a running time better than $O(n^{O(k)})$). The running time of your proposed algorithm is then big O of
$$
\sum_{k=1}^n n^k,
$$
which is no longer polynomial. | {
"domain": "cs.stackexchange",
"id": 7355,
"tags": "complexity-theory, np-hard, clique"
} |
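The $O(n^k)$ algorithm the answer refers to simply tests every $k$-subset of vertices; a brute-force sketch (the function name and example graph are illustrative):

```python
from itertools import combinations

def has_k_clique(n, edges, k):
    """Try all C(n, k) vertex subsets: roughly O(n^k) work for fixed k."""
    adj = {frozenset(e) for e in edges}
    return any(all(frozenset((u, v)) in adj
                   for u, v in combinations(subset, 2))
               for subset in combinations(range(n), k))

# Triangle plus a pendant vertex: contains a 3-clique but no 4-clique.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
assert has_k_clique(4, edges, 3)
assert not has_k_clique(4, edges, 4)
```

For fixed $k$ this is polynomial in $n$, but the exponent grows with $k$, which is exactly why running it for all $k$ up to $n$ is not a polynomial-time algorithm.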
Influence of a diagonal fold on a web + double fold situation | Question: I need to build a steel frame with a steel web in it, as illustrated below.
Flanges of the frame will be welded, as well as the web, along its whole perimeter.
Would adding a diagonal fold to the web allow this structure to withstand a bigger load? Or, for an equivalent loading, would the fold allow me to use a thinner web?
Addendum:
Considering the doubly folded strap of web material welded at the ends to the frame, is there a critical angle for this double fold where, at some point, it is stronger to have no folds at all rather than two folds?
Intuitively, and I may be wrong about this, concentrating material close to the diagonal tends to make the web behave as a truss. However, shear flow must be disrupted as well with two folds of a very obtuse angle. See the illustration below.
In this case, can it be assumed that one single fold, whatever the angle of the fold, weakens the structure, but two diagonal folds, whatever the angle, make it stronger?
Answer: If you add a fold, you disrupt the flow of shear force, which acts in a rotating pattern all around the web, and actually make it weaker, because you are encouraging it to warp.
However, if you just put a doubly folded strap of the web material, with strong welds at the ends to the frame, and leave the web undisturbed, it would act as a truss and be stronger.
Edit
I attached a sketch of the shear flow below. As you see, you create two triangles of shear panel with much less shear resistance. The shear strength of a panel is usually related to the cube of its side length, $a^3$; each triangle here has an effective side of roughly $a/2$, giving $(a/2)^3 = a^3/8$ each, or $a^3/4$ for the two together, i.e. roughly four times less shear strength.
"domain": "engineering.stackexchange",
"id": 2682,
"tags": "structural-analysis, steel"
} |
How easy is it to make animal horns transparent? | Question:
I never knew that the HORN in HORNbook refers to animal horns, or that animal horns can be made transparent. If Medievalists could make horn transparent, the process must have been cheap, easy, and quick, right?
The Horn Book | Why is it called "The Horn Book"?
Back in the sixteenth century, English monks began to make hornbooks to help their pupils learn to read. Usually a wooden paddle with an alphabet and a verse glued to the surface, hornbooks derived their name from the piece of transparent horn protecting the verse. The picture to the right shows a modern replica of a hornbook.
Hornbook - Wikipedia
A hornbook is a book that serves as primer for study. The hornbook originated in England as long ago as 1450,[1] or earlier.[2] The term has been applied to a few different study materials in different fields. In children's education, in the years before modern educational materials were used, it referred to a leaf or page displaying the alphabet, religious materials, etc., covered with a transparent sheet of horn (or mica) and attached to a frame provided with a handle.[3]
Answer: I found this description of how the horn is prepared in a post by Tammy L. Austin, at the University of Notre Dame's website (https://www3.nd.edu/~rbarger/www7/hornbook.html):
The horn of oxen and sheep were used to make the laminating structure. The horn was left in cold water for several weeks, which separated the usable part from the bone. It was then heated, first in boiling water then by fire, and pressed by plates and machines to make it smooth and transparent. | {
"domain": "chemistry.stackexchange",
"id": 14742,
"tags": "everyday-chemistry"
} |
Quantum tunnelling near the speed of light | Question: Given a particle travelling very close to the speed of light encountering a barrier, is it possible for the particle to exceed the speed of light by tunnelling forward in the direction of motion through the barrier?
Answer: Here is the simple quantum mechanical barrier tunneling situation:
According to classical physics, a particle of energy E less than the height U0 of a barrier could not penetrate - the region inside the barrier is classically forbidden. But the wavefunction associated with a free particle must be continuous at the barrier and will show an exponential decay inside the barrier. The wavefunction must also be continuous on the far side of the barrier, so there is a finite probability that the particle will tunnel through the barrier.
etc.
The important thing to keep in mind is that the energy is the same inside and outside the barrier, so the particle must have the same momentum, i.e. velocity, throughout. It is only the probabilities that change, because this is a quantum mechanical phenomenon.
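To put rough numbers on that exponential decay, here is a small sketch (not part of the original answer) using the thick-barrier estimate $T \approx e^{-2\kappa L}$ with $\kappa = \sqrt{2m(U_0 - E)}/\hbar$; the 5 eV / 10 eV / 1 nm values are illustrative assumptions:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electron-volt in joules

def tunneling_probability(E_eV, U0_eV, L_m, m=M_E):
    """Thick-barrier estimate T ~ exp(-2*kappa*L) for a particle of
    energy E hitting a rectangular barrier of height U0 > E, width L."""
    kappa = math.sqrt(2.0 * m * (U0_eV - E_eV) * EV) / HBAR
    return math.exp(-2.0 * kappa * L_m)

# Illustrative numbers: a 5 eV electron and a 10 eV, 1 nm wide barrier.
T = tunneling_probability(5.0, 10.0, 1e-9)
```

Note that the probability depends only on the energies and the barrier width; nothing in it lets the particle exceed its barrier-entry velocity, consistent with the point above.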
A particle moving near the speed of light and hitting a barrier may radiate Cherenkov radiation on entering the medium, and thus reduce its energy, but that is another story: not tunneling, but scattering. | {
"domain": "physics.stackexchange",
"id": 46682,
"tags": "quantum-mechanics, special-relativity, speed-of-light, quantum-tunneling"
} |
Are there papers or experiments that point to non-fusion sources of solar neutrinos? | Question: I've read that nuclear fusion is what generates such large quantities of solar neutrinos.
Also saw that it was recently experimentally confirmed that two types/energies of neutrinos are generated in the sun via two types of fusion: proton-proton and CNO-fusion. https://www.livescience.com/rare-solar-cno-neutrinos-detected.html.
I am having trouble finding papers that support fusion as the only source of solar neutrinos vs. some other process. Is it just that we cannot think of any other source? Or are there some measurements or experiments that have strongly confirmed this?
Is there a way to detect neutrinos produced from some of the experimental, earth-based fusion reactors that have run (albeit for short bursts of time)? Has this been looked into?
Are there any experiments or papers that point to non-fusion sources of solar neutrinos?
NOTE: I understand we can detect neutrinos generated in accelerators and other extra-solar phenomena. I also understand that the math behind quantum interactions and the standard model may point to fusion as the source of solar neutrinos. However, with this question, I'm looking for experiments or papers that specifically point to non-fusion sources of solar neutrinos.
Answer: For each fusion reaction, we can predict the resulting neutrino energy spectrum. In addition, for each possible source of experimental background (for example, radioactive decay inside the detector itself), we can predict (and independently measure) the shape of the spectrum for that background component. So, when we do an actual solar neutrino energy spectrum measurement (like this one), we get something like this:
The extracted signal components are shown in red, and the background components are all other colors. When we compare the measurement of the extracted neutrino flux of all of these types with the predictions that we can make about solar neutrino fluxes of various types, the predictions agree with the measurements, to within experimental uncertainty. So there really isn't any reason to suspect that another process within the Sun contributes to neutrino fluxes.
For a moment, though, let's suppose that there was some other process. Based on what we have measured, let's see what must be true about this hypothetical process. The main limiter is that there just isn't a whole lot of "room" for a different spectral shape to be added to that plot above. So either:
The new process has basically the same spectrum as one of the signal components, and is currently being mixed up with the signal;
The new process has basically the same spectrum as one of the background components, and is currently being mixed up with the background; or
The new process has a distinct shape, but its contribution is so tiny that it only causes variations on the order of the current experimental uncertainty.
Options 1 and 2 are essentially ruled out by Occam's Razor - it's exceedingly unlikely that a totally unrelated process just so happens to have the same spectrum as something we both know very well and have measured independently in other contexts. So let's talk about option 3.
As we collect more and more precise experimental data, the amount of "room" for a new process to alter the spectrum gets smaller and smaller. This means that the maximum size of the effect gets lower and lower. Of course, this doesn't rule something out, in a strictly technical sense. But, in science, nothing is ruled out, in a strictly technical sense. What it means to "rule something out" in science is different than what it means in other contexts. Scientific conclusions are not logical proofs arising from immutable axioms; instead, they're collections of observations, and new observations can always come along that would alter those conclusions. When we say that something is "ruled out" in science, what we usually mean is that the possibility that it exists depends on such a convoluted series of hypotheses that it's no longer plausible.
This is what happened to the "luminiferous aether" hypotheses in the early 20th century - after Michelson and Morley demonstrated that there was no "aether wind" detectable on Earth, there was of course a significant effort to amend the theory. For example, many postulated that the Earth "dragged" the aether along with it, such that no effect was detectable at its surface, but this would imply that there was a detectable optical distortion arising from the distortion of the aether, which required yet another component to fix, etc. At some point, special relativity became convincing enough in its predictions, given its relative simplicity next to the increasingly-convoluted aether-dragging models, that there was very little doubt that it was the most correct description of nature that we had. In a similar way, it is technically possible to construct a geocentric model of the cosmos that fully explains the motions of all heavenly bodies (this is essentially what many flat-Earthers try to do). It's just that such a model would be so tremendously convoluted, with so many arbitrary moving parts compared to the heliocentric model, that no rational person would be convinced that it was more correct than the heliocentric model.
The same applies here - you can postulate an increasingly miniscule contribution from some other unrelated process, but unless the introduction of that process either a) explains the data better, or b) makes the explanation of other phenomena simpler, it's simply not going to gain much traction. And experiments cost time and money, so it needs to gain at least some traction before somebody puts in the effort to experimentally test it.
Of course, there is the possibility of conducting a different type of experiment to disprove the conclusions of a particular model in a particular parameter space, but how to do this, and whether this has been done, depends heavily on what that other process actually looks like. Without specifying a particular model, there's not much more that can be said other than the above.
As for detecting "solar-like" neutrinos from terrestrial fusion reactors, there's one huge problem with that idea. Most of the fusion in the Sun is proton-proton (pp) fusion. This kind of fusion has one of the highest temperature thresholds, and is possible in the Sun mainly because the Sun's immense gravity leads to very high plasma densities in the core. We simply don't do pp fusion in terrestrial fusion reactors. The most common fusion "fuel" is a mix of deuterium and tritium - the fusion reaction between those species is orders of magnitude easier to achieve. | {
"domain": "physics.stackexchange",
"id": 71406,
"tags": "astrophysics, neutrinos, fusion"
} |
How to visualize data of a multidimensional dataset (TIMIT) | Question: I've built a neural network for a speech recognition task using the TIMIT dataset. I've extracted features using the perceptual linear prediction (PLP) method. My feature space has 39 dimensions (13 PLP values, 13 first-order derivatives, and 13 second-order derivatives).
I would like to improve my dataset. The only thing I've tried thus far is normalizing the dataset using a standard scaler (standardizing features with mean 0 and variance 1).
My questions are:
Since my dataset has high dimensionality, is there a way to visualize it? For now, I've just plotted the dataset values using a heat map.
Are there any methods for separating my sample even more, making it easier to differentiate between the classes?
My heat map is below, representing 20 samples. In this heatmap there are 5 different phonemes, related to vowels, in particular, uh, oy, aw, ix, and ey.
As you can see, each phoneme is not really distinguishable from the others. Does anyone know how I could improve it?
Answer: As I said in the comment, you'll need to perform dimension reduction; otherwise you won't be able to visualize the $\mathbb{R}^n$ vector space, and this is why:
Visualization of high-dimensional data sets is one of the traditional applications of dimensionality reduction methods such as PCA (Principal components analysis).
In high-dimensional data, such as experimental data where each dimension corresponds to a different measured variable, dependencies between different dimensions often restrict the data points to a manifold whose dimensionality is much lower than the dimensionality of the data space.
Many methods are designed for manifold learning, that is, to find and unfold the lower-dimensional manifold. There has been a research boom in manifold learning since 2000, and there now exist many methods that are known to unfold at least certain kinds of manifolds successfully.
One of the most used methods for dimension reduction is called PCA or Principal component analysis. PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. You can read more on this topics here.
So once you reduce your high-dimensional space to an $\mathbb{R}^3$ or $\mathbb{R}^2$ space, you will be able to project it using an adequate visualization method.
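As a minimal sketch of that reduction step (plain NumPy rather than any specific toolkit; the random matrix is only a stand-in for the 39-dimensional PLP features described in the question):

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project the rows of X onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # scores in reduced space

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 39))       # stand-in for 20 samples of PLP features
Z = pca_project(X, n_components=2)  # shape (20, 2): ready for a 2-D scatter
```

Coloring the resulting 2-D points by phoneme label would show directly how separable the classes are.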
References:
Information Retrieval Perspective to Nonlinear Dimensionality Reduction for Data Visualization - Jarkko Venna
EDIT: To avoid confusion for some concerning PCA and Dimension Reduction, I add the following details :
PCA will allow you to compute the principal components of your vector model, so the information is not lost but "synthesized".
Unfortunately there is no other imaginable way to display 39 dimensions on a 2/3 dimension screen. If you wish to analyze correlations between your 39 features, maybe you should consider another visualization technique.
I would recommend a scatter plot matrix in this case. | {
"domain": "datascience.stackexchange",
"id": 490,
"tags": "machine-learning, neural-network, feature-selection, visualization, preprocessing"
} |
Why are trans fats worse than saturated fats? | Question: Saturated fat molecules have no double-bonded carbons, so they are long and straight, which means they stack easier and tend to form solids at room temperature, and solids are better at forming plaques in your arteries and interacting with cholesterol in the bloodstream.
Moving onward to hydrogenation, to my current understanding, if you hydrogenate vegetable oil all the way, you get saturated fats (no double bonds anywhere because you've shoved in so much hydrogen to occupy all the bonds). But for the purposes of making trans fats, it's usually partial hydrogenation.
I assume that most vegetable oils that get partially hydrogenated must be polyunsaturated fats to begin with (meaning more than one double-bond) because partial means you aren't replacing all the double bonds, so there must be at least two double-bonds present.
Partial hydrogenation removes some of these double bonds and starts straightening out the chains, which makes them easier to stack (which is why trans fats tend to be solid or semisolid), like saturated fats.
My question:
Why are trans fats worse than saturated fats? It looks as though partial hydrogenation is simply taking a polyunsaturated fat and bringing it closer and closer to the status of a saturated fat, but not all the way. If the goal is to get a fat that is solid and lasts longer, why not just use a saturated fat?
And if, for whatever reason, saturated fats aren't the answer: isn't the end result of partial hydrogenation technically just another unsaturated fat? I thought unsaturated fats were considered healthy? So how are trans fats worse than fully-stackable saturated fats?
Or upon reflection maybe my question is, why are trans fats considered so much worse than other unsaturated fats? If all hydrogenation does is break up double bonds and insert hydrogen, straightening out any cis-kinks in the chain, doesn't this just generate the same acid with fewer double bonds and straighter chains? How is this any different than going out in nature and finding the same unsaturated fat, rather than going to the trouble of converting?
What's causing the difference, here? Where is my understanding breaking down?
Answer: Someone may very well give a more thorough answer but from what I understand, most naturally occurring unsaturated fats have their double bonds in the cis conformation which is generally higher in energy than the trans conformation.
Thus, the idea is that when you ingest trans fats (i.e. fats with a trans double bond) your body has to somehow hydrogenate this double bond before it can break it down further. This is more difficult for your body to do because it must either isomerize to a cis bond and then hydrogenate, or just hydrogenate the trans bond which is not easy for your body to do effectively. In some cases hydrogenation of trans fats can't actually happen because nature basically makes cis double bonds in fats (which sounds weird but is true) so your body has ways of breaking these fats down but not trans fats. So, the fat just gets stored or excreted without doing anything to it. | {
"domain": "chemistry.stackexchange",
"id": 17756,
"tags": "hydrogen-bond, fats"
} |
Filtering out rude words | Question: This program filters the input by replacing matching bad words with "Bleep!". I'd like the code to be more concise and more idiomatic C++ where possible. One thing that bugs me is the was_bad flag, which I think is needed to skip printing a word that has matched one of my bad words. Is there some better way to skip the rest of the while loop upon encountering a bad word, so I don't print "Bleep! poop", for instance?
#include <iostream>
#include <vector>
#include <string>
using namespace std;
int main(void)
{
int i, was_bad = false;
string input, bad[] = {"poop", "balls"};
vector <string> badwords(bad, bad + sizeof(bad) / sizeof(string));
while (cin >> input)
{
for (i = 0; i < badwords.size(); ++i)
if (badwords[i] == input)
{
cout << "Bleep! ";
was_bad = true;
break;
}
if (!was_bad)
cout << input << " ";
was_bad = false;
}
return 0;
}
One thing that did strike me was to use a ternary operator:
while (cin >> input)
{
for (i = 0; i < badwords.size(); ++i)
if (badwords[i] == input)
{
is_bad = true;
break;
}
cout << (is_bad ? "Bleep! " : input + " ");
is_bad = false;
}
Answer: The word you are looking for is continue;
while (cin >> input)
{
    // std::find (from <algorithm>) does the search, so the continue
    // sits directly in the while loop; inside the inner for loop it
    // would only have skipped to the next bad word.
    if (std::find(badwords.begin(), badwords.end(), input) != badwords.end())
    {
        cout << "Bleep! ";
        continue; // This starts the next iteration of the while loop,
                  // i.e. we go back and read the next word
                  // (or attempt to do so).
    }
    cout << input << " ";
}
You can also improve your search. Rather than using std::vector<std::string> use a std::set<std::string>. Then a find will automatically do a $O(\log n)$ search of the data.
while (cin >> input)
{
if (badwords.find(input) != badwords.end())
{
cout << "Bleep! ";
continue;
}
cout << input << " ";
}
Now that we have the basic algorithm we can use some of the standard algorithms:
So now replace the main loop:
std::transform(std::istream_iterator<std::string>(std::cin),
std::istream_iterator<std::string>(),
std::ostream_iterator<std::string>(std::cout, " "),
BadWordFilter(badwords));
Then you just need to define the BadWordFilter
struct BadWordFilter
{
std::set<std::string> const& badWordSet;
BadWordFilter(std::set<std::string> const& badWordSet) : badWordSet(badWordSet) {}
std::string operator()(std::string const& word) const
{
static std::string badWord("Bleep!");
return (badWordSet.find(word) == badWordSet.end())
? word
: badWord;
}
}; | {
"domain": "codereview.stackexchange",
"id": 2702,
"tags": "c++"
} |
Omnidirectional Movement for 3 Wheeled Spherical Omni wheel robot | Question: I was watching James Bruton's video on his spherical omniwheeled robot, where he explains how to go directly sideways (from 6:09 to 7:06). I didn't understand why the wheels not facing the direction of movement spin at exactly cos(60) times the velocity. I kind of get that the yaw force from the directional wheel needs to be counteracted by the other two wheels, so each wheel spins at half the velocity of the directional wheel, but what does this have to do with cos(60)? Can someone please explain the math behind this?
Answer: This is not really difficult trigonometry. Suppose we want the robot to move in the forward direction: we need to use the two front motors. (I tried to sketch this out in the pictures; I will now explain the scribble.) Because the motors are not at right angles to the axis we need, we have to feed them a slightly higher speed so that the robot travels at the required axial speed. To calculate how much the motor speeds must be increased to get the axial speed we need, we use trigonometry.
Imagine that we need to move forward at an axial speed of 10. What speed must be applied to the left and right motors so that the entire robot moves forward at a speed of 10? Let's find out. X is the required axial speed. The front wheels are off-axis by 30 degrees. The side is adjacent, so we take the cos of 30 degrees. We get 10 / cos(30) ≈ 11.54. But we have two motors, so we halve that and feed about 5.77 to each. (I do not take the direction of rotation into account here.)
Now it's more interesting how to move to the sides? In this case, all motors must move in the same axial direction. Let's start with the back motor. It stands parallel to the axis we need, so we don't need to change its speed. Well, just split it in half, because the top and bottom parts of the robot must move at the same speed.
Now the front motors. They are located at an angle of 60 degrees to the desired axis. The side is again adjacent, so we take the cosine. We take half of the required speed and multiply it by cos(60), which gives the speed for each of the two front motors. For example, if we want to move at a speed of 10 to the side, then we apply speed 5 to the back motor and 5 · cos(60) = 2.5 to each of the front ones.
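The same cos(60) factor falls out of the standard omniwheel velocity projection. A minimal sketch (the 0/120/240-degree wheel layout and the sign conventions here are my own illustrative assumptions, not taken from the video):

```python
import math

def wheel_speeds(vx, vy, wheel_angles_deg):
    """Kiwi-drive kinematics (translation only): each wheel's speed is the
    component of the desired velocity along that wheel's rolling direction."""
    speeds = []
    for a in wheel_angles_deg:
        rad = math.radians(a)
        # dot product of (vx, vy) with the wheel's unit rolling direction
        speeds.append(vx * math.cos(rad) + vy * math.sin(rad))
    return speeds

# Rolling direction of the back wheel aligned with the sideways axis (0 deg),
# the two front wheels offset by 120 degrees either way.
s = wheel_speeds(10, 0, [0, 120, 240])
```

The off-axis wheels come out at cos(120°) = −cos(60°) = −0.5 times the directional wheel's speed: half the magnitude, with the opposite sign that cancels the yaw.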
So that's why there is a cos(60). | {
"domain": "robotics.stackexchange",
"id": 2363,
"tags": "kinematics, motion"
} |
Are the following two compounds isomers or not? | Question: Are the following two compounds isomers or not?
I feel they are enantiomers (non-superimposable mirror images); however, my book states that they are not isomers.
I cannot understand why.
Answer: You are right, these two molecules are enantiomers. If you rotate the bottom molecule 180 degrees, you can see that it is the mirror image of the top molecule. Enantiomers are stereoisomers, so the only reason your book would say they are not isomers is that it (unfortunately) contains a typo. | {
"domain": "chemistry.stackexchange",
"id": 6241,
"tags": "organic-chemistry, stereochemistry, isomers"
} |
Creating a custom Vector class | Question: I'm new to C++ and am doing the C++ 4th Edition Stroustrup book. I expanded on one of the examples, and have a few questions to ask (embedded within the code: ////QUESTION 1-9).
Please provide any critiques, I'd like to bang in safe practice as early as possible. The Vector.h file simply declares these functions, and two private members, elem (list of elements, int*) and sz (the size, int).
Vector::Vector(std::initializer_list<int> list) //called via list init: ie, Vector v = {1, 2, 3, 4};
:elem{ new int[list.size()] }, sz{ list.size() }
{
copy(list.begin(), list.end(), elem); //copy list from start to end to elem
}
Vector::Vector(int s) //Constructor w/ size
{
if (s < 0) throw length_error{"Vector::Vector"};
elem = new int[s];
for (int i = 0; i < s; i++)
elem[i] = 0; //init elems to 0
sz = s;
}
Vector::Vector(const Vector& a) //copy constructor - rule of 3 (if destructor then copy constructor & copy assignment op)
:elem{ new int[sz] },
sz{ a.sz } ////QUESTION 1
{
for (int i = 0; i < sz; i++)
elem[i] = a.elem[i];
}
Vector::Vector() //empty constructor - functionally useless
{
sz = 0;
}
Vector::~Vector() { //DESTRUCTOR
cout << "DESTRUCTOR TRIGGERED! END OF DAYS COMING\n";
delete[] elem;
}
int& Vector::operator[](int i) const { ////QUESTION 2
if (i<0 || i>=size()) throw out_of_range{ "Vector::operator[]" };
return elem[i];
}
Vector& Vector::operator=(const Vector& a) { //copy assignment op
int* p = new int[a.sz];
for (int i = 0; i < sz; i++) ////QUESTION 3
p[i] = a.elem[i];
delete[] elem;
this->elem = p;
this->sz = a.sz;
return *this;
}
const bool Vector::operator==(Vector& right) const { ////QUESTION 4
if (size() != right.size())
return false;
else {
for (int i = 0; i < size(); i++){ //left and right have same size, doesn't matter which
if (elem[i] != right[i])
return false;
}
}
return true;
} ////QUESTION 5
//MEMSAFE (or so I like to think?)
Vector& Vector::operator+=(const Vector& a) {
int* p = new int[sz + a.sz];
for (int i = 0; i < sz; i++)
p[i] = elem[i];
for (int i = sz, ctr = 0; i < sz + a.sz; i++, ctr++)
p[i] = a.elem[ctr];
delete[] elem;
this->elem = p;
this->sz += a.sz;
return *this;
}
const Vector& Vector::operator++() {
this->pushBack(0);
return *this;
}
//MEMSAFE
////QUESTION 6
const Vector& Vector::operator--() {
//delete elem[sz - 1]; //delete (elem+sz); //this hates me.
this->sz -= 1;
return *this;
}
const Vector Vector::operator+(const Vector& a) { ////QUESTION 7
if (this->sz != a.sz)
return NULL;
Vector v = sz; //init's Vector with all 0's ( O(2n) with this init, and the for loop below..)
for (int i = 0; i < this->sz; i++)
v.elem[i] = elem[i] + a.elem[i];
return v;
}
const Vector& Vector::operator+(int x) {
this->pushBack(x); //recycling working code
return *this;
}
//Returns a Vector with the calling Vector's remaining elem's (ie, all except last) - doesn't affect calling Vector in any way
Vector Vector::softRest() const {
Vector v = *this;
int* p = new int[v.sz - 1];
for (int i = 0; i < sz - 1; i++)
p[i] = v[i + 1];
delete[] v.elem;
v.elem = p;
v.sz -= 1;
return v;
} //seems wildly inefficient, suggestions?
//Sets the calling Vector to be all elem's except last.
const Vector& Vector::hardRest() {
int* p = new int[sz - 1];
for (int i = 0; i < sz - 1; i++)
p[i] = elem[i + 1];
delete[] elem;
this->elem = p;
this->sz -= 1;
return *this;
}
const Vector& Vector::pushBack(int x) {
int* temp = new int[sz + 1];
for (int i = 0; i < sz; i++)
temp[i] = elem[i];
temp[sz] = x; //temp is new int[sz+1]; so temp[sz] = last elem
delete[] elem; ////QUESTION 8
this->elem = temp;
this->sz += 1;
return *this;
}
//MEMSAFE
void Vector::addToEnd(std::initializer_list<int> list) {
Vector v(list);
this->operator+=(v);
}
int Vector::size() const {
return this->sz; ////QUESTION 9
}
Questions (for ease of reading, they still have their placeholders in the code to let you know what I'm referring to):
How would it differ if I set elem & sz in the body of the constructor? As of now, they are being declared after the method declaration, but before the start of the actual function (in the Copy Constructor).
removed
I use the sz variable as an upper bound for a loop. What is safest: sz, this->sz, size(), or this->size()? size() is a function within the code which returns this->sz;
Am I overusing const? Since there are no assignment operators within the function, it doesn't perform any changes to its class members - so is the last const useless?
My operator==(Vector&) function is pretty ugly. Any suggestions for a nicer/more efficient solution?
In my operator--() function (which is meant to remove the last element in the vector), I simply reduce the sz variable for the calling vector by 1. I'm not actually deleting anything. Is this bad practice? What is a better solution? Is there a way to delete a single entry in an array?
I understand that a (const Vector& a argument to the operator+(..) function) is a const, but the function doesn't change the argument whatsoever. If the function remains as is, could I remove the const declaration which prepends the argument?
In my pushBack(int) function, an int array (called temp) is created using new - which means it must be deleted. However, using _CrtDumpMemoryLeaks();, I get no objection from the compiler. Is this because it automatically self-deconstructs because it's an int array?
Why shouldn't my size() return sz instead of this->sz? Am I correct in understanding this is primarily for multithreaded reasons?
EDIT:
Class declaration (Vector.h) -
#include <initializer_list>
#include <iostream>
#include <stdexcept>
using namespace std;
class Vector {
public:
//Constructors
Vector(std::initializer_list<int>); //constructor with {x, y, z} init (ie Vector v({1, 2, 3}); )
Vector(int); //declare size, and initialize all elements to 0
Vector(); //empty constructor
Vector(const Vector& a); //COPY CONSTRUCTOR - rule of 3
~Vector(); //DESTRUCTOR - rule of 3
//Overloaded operators
int& operator[](int) const; //function type: int& because returns a[i] (or &a[i])
const Vector& operator++();
const Vector& operator--();
const bool operator==(Vector&) const;
Vector& operator=(const Vector&); //COPY ASSIGNMENT - rule of 3
Vector& operator+=(const Vector&); //a Vector, += a vector (since its IN the vector class)
const Vector operator+(const Vector&); //adds values of two equal sized vectors
const Vector& operator+(int); //deals with adding a single int (essentially .pushBack(int))
//Input functions
const Vector& pushBack(int); //add single element to end
void addToEnd(std::initializer_list<int>); //add list to end
Vector softRest() const;
const Vector& hardRest();
//Output functions
int size() const;
void arr_print() const;
private:
int sz;
int* elem;
};
Answer: Questions:
How would it differ if I set elem & sz in the body of the constructor? As of now, they are being declared after the method declaration, but before the start of the actual function (in the Copy Constructor).
It's best to set everything in the initializer list (good habit when things could be arbitrary objects).
removed
I use the sz variable as an upper bound for a loop. What is safest: sz, this->sz, size(), or this->size()? size() is a function within the code which returns this->sz;
Just use sz.
Use of this-> is discouraged: it means you are trying to force the compiler to resolve a particular variable that is shadowed, which in turn means you are using a naming scheme that is susceptible to shadowing.
Shadowing causes all sorts of problems. One way to get around it is to force the use of this-> on all members (which is fine until you accidentally miss one).
The better option is turn up your compiler warnings so it warns you about shadowing. Then treat all warnings as errors (your code should compile warning free on the highest warning level).
Am I overusing const? Since there are no assignment operators within the function, it doesn't perform any changes to its class members - so is the last const useless?
Don't bother with const on the return type when returning by value.
Be judicious on your use of returning Vector by const ref.
A lot of the time you want to return *this as a ref to allow chaining.
My operator==(Vector&) function is pretty ugly. Any suggestions for a nicer/more efficient solution?
You could use a standard algorithm to help you check. But your code does not look that bad.
In my operator--() function (which is meant to remove the last element in the vector), I simply reduce the sz variable for the calling vector by 1. I'm not actually deleting anything. Is this bad practice? What is a better solution? Is there a way to delete a single entry in an array?
Normally a vector contains two sizes:
The number of elements currently in the vector.
The amount of space allocated.
This is space allocated but currently unused. Normally when creating arrays you allocate slightly more space than you need. So you can use it without having to reallocate the whole data segment and copy it just for adding a single value (or when deleting a value you just reduce the size and can safely re-use it).
I understand that a (const Vector& a argument to the operator+(..) function) is a const, but the function doesn't change the argument whatsoever. If the function remains as is, could I remove the const declaration which prepends the argument?
?
In my pushBack(int) function, an int array (called temp) is created using new - which means it must be deleted. However, using _CrtDumpMemoryLeaks();, I get no objection from the compiler. Is this because it automatically self-deconstructs because it's an int array?
?
Why shouldn't my size() return sz instead of this->sz? Am I correct in understanding this is primarily for multithreaded reasons?
It's the same thing. See my description of this-> usage above.
Comments on Code:
Always prefer to use the initializer list. The compiler is going to plant the appropriate code anyway, so you may as well take advantage of this fact and use the compiler to put the correct initial values in place. (Note: with POD data there is no initialization, but for user-defined types there will be, so the object's members are constructed before the function body is entered.)
Vector::Vector(int s) //Constructor w/ size
// Add initializer list.
The following can be done in a single line:
elem = new int[s];
for (int i = 0; i < s; i++)
elem[i] = 0; //init elems to 0
//
elem = new int[s](); // zero initialize all members.
// Or default construct them if you change the Vector to
// use generic types.
Note: members are initialized in the order they are declared in the class declaration (not the order they appear in the initializer list). If you turn up warnings the compiler will warn you about this. If you make the compiler treat warnings as errors (as you should be doing) then it will not compile if the initializer list is in the wrong order.
Vector::Vector(const Vector& a)
:elem{ new int[sz] }, // Is `sz` defined at this point ???
// I can't tell because I don't have the class
// declaration.
sz{ a.sz } ////QUESTION 1 // But the order here is not conducive to reading
// as it looks like you are setting sz after
// you have used it in the previous line.
In this one you don't initialize elem.
Vector::Vector() //empty constructor - functionally useless
{
sz = 0;
}
This means it is pointing at some random piece of memory. When the destructor is run you delete a random uninitialized pointer.
You actually did the assignment operator correctly, but the hard way. Though if you had user-defined types rather than int in your object, it may not have worked correctly.
Vector& Vector::operator=(const Vector& a) {
int* p = new int[a.sz];
for (int i = 0; i < sz; i++)
p[i] = a.elem[i];
delete[] elem;
this->elem = p;
this->sz = a.sz;
return *this;
}
To have the strong exception guarantee you need to do the assignment in three distinct phases.
Make a copy of the RHS
int* p = new int[a.sz];
for (int i = 0; i < a.sz; i++) ////QUESTION 3
p[i] = a.elem[i];
Replace the content of the current object using exception safe NO THROW techniques.
std::swap(this->elem, p); // use swap rather than assignment (see below)
this->sz = a.sz;
Deallocate the old object.
delete[] p; // Note it was swapped above.
Note: We do the deallocation after updating the object to a consistent state. This is because the deallocation may fail (or throw an exception). So if you deallocate before your object is consistent you leave your object in an invalid state that is not usable.
Luckily for you, int does not throw exceptions on deallocation (but a user-defined type may). So you should be careful: if another programmer comes along behind you and tries to make your Vector generic but does not notice this, they may get screwed over accidentally.
Looking at your function again:
Vector& Vector::operator=(const Vector& a) {
int* p = new int[a.sz];
for (int i = 0; i < sz; i++)
p[i] = a.elem[i];
delete[] elem; // Assume your vector is not int but a user type.
// Deleting the array here will call the destructor
// on all the elements. Which may result in an exception.
// If this happens you do not know the state of `elem`
// but your object refers to it.
// So you have a dangling pointer.
// and because of the exception the rest of the code is
// not executed and you have an object in an invalid state.
//
// Also note if this happens you leak `p`
this->elem = p;
this->sz = a.sz;
return *this;
}
There is also another technique to resolve all these problems. It's called the "Copy and Swap Idiom".
Vector& Vector::operator=(Vector a) // Pass by value so you get a copy.
{ // You were making a copy anyway.
// This just makes a copy in a way that can't
// leak if there is an exception.
a.swap(*this); // Swap the content of a and this.
// Swap is a no-throw operation so totally safe.
// The old data from `this` is now inside `a`
return *this;
} // When `a` goes out of scope at the end of the
// function it calls the destructor and tidies up
// any allocated memory (remember the old this data
// is now inside `a` and thus gets correctly deleted).
// And the whole thing is exception safe.
// And much shorter.
void swap(Vector& other) noexcept
{
std::swap(elem, other.elem);
std::swap(sz, other.sz);
}
Returning const bool does not make any sense.
const bool Vector::operator==(Vector& right) const { ////QUESTION 4
You suffer from the same problem here as you did in the assignment operator.
Vector& Vector::operator+=(const Vector& a) {
int* p = new int[sz + a.sz];
for (int i = 0; i < sz; i++)
p[i] = elem[i];
for (int i = sz, ctr = 0; i < sz + a.sz; i++, ctr++)
p[i] = a.elem[ctr];
delete[] elem;
this->elem = p;
this->sz += a.sz;
return *this;
}
I would rewrite the above as:
Vector& Vector::operator+=(const Vector& other) {
Vector newValue(sz + other.sz); // Make a new object to hold the tmp data.
// This makes sure that there are no leaks
// if there are exceptions.
// Now copy the data into the new object
std::copy(this->elem, this->elem + this->sz, newValue.elem);
std::copy(other.elem, other.elem + other.sz, newValue.elem + this->sz);
// Now Swap the newValue with the current object.
newValue.swap(*this);
return *this;
} // Destructor of `newValue` handles the deallocation.
Not sure this makes sense for a vector.
Personally I would remove this function completely.
const Vector& Vector::operator++() {
this->pushBack(0);
return *this;
}
Just like the operator++, this makes no sense.
Remove this function.
const Vector& Vector::operator--() {
//delete elem[sz - 1]; //delete (elem+sz); //this hates me.
this->sz -= 1;
return *this;
}
Interesting concept. (const Vector makes no sense on a return type (return by value)).
const Vector Vector::operator+(const Vector& a) {
if (this->sz != a.sz)
return NULL;
Vector v = sz; //init's Vector with all 0's ( O(2n) with this init, and the for loop below..)
for (int i = 0; i < this->sz; i++)
v.elem[i] = elem[i] + a.elem[i];
return v;
}
This should not compile:
if (this->sz != a.sz)
return NULL;
If it does, it is not doing what you think. It is calling a constructor that will convert the NULL into a Vector object. If this is happening I would try and find which one and make that constructor explicit so the compiler can't do that. Because it is probably not doing anything good.
This is not doing quite what you think.
Vector v = sz;
This is the same as:
Vector v = Vector(sz);
Which is the same as:
Vector v(Vector(sz));
So you are constructing a Vector object then using the copy constructor to copy the temporary vector into your new vector v. Luckily for you the compiler is allowed to optimize that heavily and you probably only get one vector construction. But I would change the declaration to
Vector v(sz); // Much clearer.
Your comment:
//init's Vector with all 0's ( O(2n) with this init, and the for loop below..)
A valid concern. I actually fixed this problem above. But in situations where that is not possible, I would add another constructor that takes two Vectors and adds their content. I would just make the constructor private so that only operator+ could use it.
Vector Vector::operator+(const Vector& a) {
if (this->sz != a.sz)
throw AnExceptionThatIsAppropriate("Plop");
return Vector(*this, a);
}
private:
Vector(Vector const& lhs, Vector const& rhs)
: elem{new int[lhs.sz]}
, sz{lhs.sz}
{
std::transform(lhs.elem, lhs.elem + sz,
rhs.elem,
elem,
std::plus<int>());
}
Not sure why you want to use operator+ to add elements. Seems a bit of a stretch.
const Vector& Vector::operator+(int x) {
this->pushBack(x); //recycling working code
return *this;
}
But OK, let's use it (just as a demo case). In this case I would not make the result const Vector& as this prevents further mutation. Just return a reference to the Vector and it will allow you to chain operators.
Vector mine;
mine + 5; // Now vector has 5 in it.
mine + 6 + 7; // Fails. As the result of `mine + 6` is a reference to a const Vector.
// If you change it to return a reference it allows you to add multiple values.
Vector st;
st.pushBack(5);
st.pushBack(6).pushBack(7); // Chained calls (works once pushBack returns Vector&).
Yes, this is really inefficient as you are creating multiple arrays and copying stuff around. I would create a new private constructor to solve the issue.
Vector Vector::softRest() const {
Vector v = *this;
int* p = new int[v.sz - 1];
for (int i = 0; i < sz - 1; i++)
p[i] = v[i + 1];
delete[] v.elem;
v.elem = p;
v.sz -= 1;
return v;
} //seems wildly inefficient, suggestions?
Got bored. | {
"domain": "codereview.stackexchange",
"id": 7620,
"tags": "c++, beginner, memory-management, vectors"
} |
Is sound really adiabatic because it is a fast process? | Question: In many books I have consumed so far there is the statement that sound is adiabatic because heat transfer does not have nearly enough time to reach isothermal equilibrium. Doesn't this contradict meteorological processes, which are very very slow as compared to sound and yet adiabatic to a very good approximation?
Isn't it the other way around, that sound becomes isothermal at very high frequencies due to the increased temperature gradient due to the short wavelengths? I found this in another textbook and it appears more logical to me.
What is the truth now? Are so many textbooks really wrong or do I have a misinterpretation?
Answer: Sound propagation is indeed better modeled as an isothermal process at higher frequencies, for exactly the reason you note: With a shorter wavelength, the hotter and colder regions are closer and more readily exchange heat. The topic is discussed in Wu's "Are sound waves isothermal or adiabatic?".
More broadly, to consider whether an adiabatic assumption is reasonable without the answer depending on the circumstances or the units used, you might consider using a dimensionless number such as the Fourier number $\mathrm{Fo}=\alpha t/L^2$, which incorporates a time scale $t$, length scale $L$, and material thermal diffusivity $\alpha$ (which itself incorporates the thermal conductivity, density, and specific heat capacity).
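As a rough sketch of this estimate for an audible sound wave in air (the property values, and the choice of half a wavelength and half a period as the length and time scales, are my assumptions for illustration):

```python
alpha = 2e-5        # m^2/s, approximate thermal diffusivity of air (assumed)
f = 1000.0          # Hz, an audible frequency (assumed)
c = 343.0           # m/s, speed of sound in air

L = (c / f) / 2     # half a wavelength: distance between hot and cold regions
t = 1 / (2 * f)     # half a period: time available for heat to diffuse

Fo = alpha * t / L**2
print(f"Fo = {Fo:.1e}")   # far below 1, so the adiabatic idealization holds here
```

Since both $L$ and $t$ scale as $1/f$, $\mathrm{Fo}$ grows linearly with frequency, consistent with the point above that sound tends toward isothermal behavior at very high frequencies.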
If the Fourier number is much greater than one, enough time has passed (for that length scale and those material properties) for most of the material to be at nearly the same temperature—good justification for the isothermal idealization.
If the Fourier number is much less than one, so little time has passed that most of the material doesn't thermally "know" what's happening at the perimeter—good justification for the adiabatic idealization. | {
"domain": "physics.stackexchange",
"id": 98892,
"tags": "thermodynamics, fluid-dynamics, waves, acoustics, adiabatic"
} |
How is ‘x + ½ = 2 and x ∈ ℤ’ an open statement? | Question: I was watching this video on statements. There is an example:
$x + \frac12 = 2$
It's an open statement as the truth value could be T or F depending on the value of $x$.
$∃x: x + \frac12 = 2$ and $x ∈ ℤ$
Now the statement is closed statement (proposition) as the truth value is F.
But later the professor said:
$x + \frac12 = 2$ and $x ∈ ℤ$
is an open statement. But it's not clear to me how it's an open statement as for any value of $x$ the statement is F?
Update from the author of the video:
$x$ is a free (unquantified) variable here, so by definition this is an open statement. But you've identified something that's a source of confusion. Many online sites say that a statement is open if its truth value depends on what values the variable(s) take on. That usually coincides with the definition I indicated above. But this example points out that the two definitions are not the same. While it's easy to slip into the "is the truth value known" definition (as I seemed briefly to do at around 19:40), the definition of an open statement that works best is that it's a statement with one or more free variables, as I point out in several places in this video.
Answer: $$x=1$$
$$∃x: x=1$$
The first is an open statement, since no value for $x$ is given. $x$ is called a free variable here.
The second is a closed statement, because it talks about all possible values of $x$. $x$ is not a free variable here.
An open statement can be true or false depending on what the values of its free variables are.
A closed statement is either always true or always false. (But we might not know which)
$$x=x+1$$
This is an open statement since there is no $∃x$ there. It just so happens that it is always false, but that is not relevant to openness. | {
"domain": "cs.stackexchange",
"id": 19118,
"tags": "first-order-logic"
} |
NL - iterating all edges of a graph in log space | Question: Given a Turing machine which has logarithmic space, and consists of an input tape and a working tape, is it possible to iterate all edges of an input graph? I know the answer is probably NO, because clique and other similar NP problems belong to NP and not to NL. Unfortunately, I can't find a satisfying and clear explanation of that fact. Can someone make it more clear for me?
Answer: There's absolutely no problem iterating over all edges of a graph in logspace (even deterministic logspace!). The details depend on how the graph is encoded. For example, if the graph is encoded as an adjacency matrix, then you can simply go over all pairs of vertices.
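As a sketch of the idea (Python for readability, not a logspace-faithful machine): the only working-tape state is the two counters `i` and `j`, each $O(\log n)$ bits, while the adjacency matrix plays the role of the read-only input tape.

```python
def edges(adj):
    """Yield all edges of a graph given as an adjacency matrix.
    The only state kept is the pair of counters (i, j)."""
    n = len(adj)
    for i in range(n):          # counter 1: O(log n) bits
        for j in range(n):      # counter 2: O(log n) bits
            if adj[i][j]:
                yield (i, j)

adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 1]]
print(list(edges(adj)))  # [(0, 1), (1, 0), (1, 2), (2, 1), (2, 2)]
```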
However, this doesn't give an NL algorithm for clique. The problem is that you need to iterate on $k$-tuples of vertices, for non-constant $k$. This is something you cannot do in logspace.
NP is sometimes described in terms of witnesses. However, the "official" definition uses nondeterministic Turing machines, which are allowed to have more than one correct move at any given point in time. The class of languages accepted by nondeterministic polytime Turing machines coincides with the class of languages for which a polynomial-size witness can be verified in polynomial time. Unfortunately, there is no such alternative description for nondeterministic logspace Turing machines. | {
"domain": "cs.stackexchange",
"id": 14051,
"tags": "complexity-theory, graphs"
} |
Artificial Intelligence Software Packages: Professionals, University education is oft' a step behind. What's actually being used? | Question: Currently using Windows 8, what software packages for artificial intelligence programming (robotics branch) are used in today's professional environment as standard. Lots of internet suggestions, but companies seem to keep this a closely guarded secret. And are the internet rumors true? Would switching to Ubuntu offer me more in terms of depth.
Context: Educational field: Computer Science and Artificial Intelligence, current focus (though obviously experience in others) in programming languages stands at C++, C and Python. Looking to build, program and develop a human-like bot (NOT aiming for singularity at this point ;)) and am asking this question in order to build my toolbox a little.
Answer: I am not sure which end of the AI work you intended, so this may not apply. If you are working disconnected from hardware one might be better off with straight MATLAB / Octave.
If you are planning to demonstrate on hardware, the most widely used robotics control architecture / middleware is ROS, which grew out of the DARPA-funded Player/Stage/Gazebo effort. There are many middlewares based on CORBA as the main alternatives. It has interfaces for all your language preferences. http://playerstage.sourceforge.net/
ROS has sort of won the OS wars in robotics. People clung to ACE TAO / MIRO / OROCOS / CORBA middlewares until they - mostly - accepted that ORBs are flexible but bloated, producing bloated cores. The telling feature of all this collected effort is that ROS has the widest device driver array for sensors and robot chassis you will find:
http://wiki.ros.org/Sensors
It might be better to dual boot that machine to Ubuntu and slowly acclimatize. ROS has native Ubuntu .apts. I dual-booted my first machine in 2003 and have never looked back. Nor rebooted the Windows partition either... Best of luck! | {
"domain": "robotics.stackexchange",
"id": 714,
"tags": "design, software, artificial-intelligence, programming-languages"
} |
Why are they taking the cosine to find the x-component of the resultant vector in this problem? | Question: What I don't understand in this picture is why they are taking $$5\cos130^{\circ}$$ I usually do these problems by drawing the components along the axes in my case I would draw the y component along the y-axis and the x component would be going across on the top from the y axis to the resultant vector.
If I draw my right triangle like that then the x component would be $$5\sin130^{\circ}$$ I do see though that this way would give me an incorrect x component but I don't know why they are using cosine as if I drew it on paper the angle that is between the hypotenuse and base would not $$=130^{\circ}$$
Answer: Another way of understanding the cosine is through the definition of the scalar product
$$
\vec{a}\cdot\vec{b} := |\vec{a}||\vec{b}|\cos\angle(\vec{a},\vec{b})
$$
If you want to find the x component $B_x$ of your vector $\vec{B}$, you just have to multiply with the unit vector in x direction, this is called $\hat{i}$ in your drawing. This way you get
$$
B_x = \vec{B}\cdot\hat{i}=|\vec{B}|\underbrace{|\hat{i}|}_{=1}\cos(130°)=5\cos(130°)
$$
Similarly you get for $B_y$
$$
B_y = \vec{B}\cdot\hat{j}=|\vec{B}||\hat{j}|\cos(40°)=5\sin(130°)
$$ | {
"domain": "physics.stackexchange",
"id": 38482,
"tags": "vectors"
} |
If fluid flows faster through a narrower pipe, why do hourglasses work? | Question: I'm essentially describing a fallacy due to a flaw in my understanding, and I'm trying to understand what the flaw is.
I know that if a pipe narrows, the fluid moving through it will move faster to preserve the same volumetric flow rate.
But, by that logic, an hourglass should not work, nor any type of funnel, nor should drains ever clog, because fluids should just flow faster through the narrower hole and ultimately pass through the opening in the same amount of time as if it were a wider opening.
What is the flaw in my reasoning?
Answer: As mentioned in other answers, the fluid doesn't flow on its own.
Think of your problem this way:
1. Gravity pulls the fluid through the opening. Only the fluid column above the opening is moving down as the other parts are restricted by the container. The larger the opening, the more fluid can be pulled down per unit time. The volume left void is just replaced by the remaining fluid in the top half.
2.
I know that if a pipe narrows, the fluid moving through it will move faster to preserve the same volumetric flow rate.
Yes, in the case of steady flow of an incompressible fluid. The hourglass still passes the criteria for constant volumetric flow. By definition, during a steady-flow process, the total amount of mass (= volume when incompressible) contained within a control volume does not change with time. The control volume here is the opening, where Volume in = Volume out.
3. What would make it flow faster?
A larger acceleration (due to gravity here) would make it flow faster. It's independent of the size of the opening. The same applies in other situations. If the flow is also steady and incompressible then constant volumetric flow applies. | {
"domain": "physics.stackexchange",
"id": 44144,
"tags": "fluid-dynamics, bernoulli-equation"
} |
On a lunar map, why is it labelled East on Earth, West on Moon? | Question:
source
Why is east and west reversed for the moon? Why does north and south remain the same?
Please explain like I'm five.
Answer: Imagine you're lying down outside, looking up at the sky, with your body aligned so your head is pointing North, and your Feet are pointing South. If you look to your Left, you'll be looking East, and see the East part of the ground and sky. If you look to your right, you'll be looking West, and see the West part of the ground and sky.
Now imagine a friend holds a globe above your head, centered on your home city, with the north pole pointing North, and the South pole pointing South.
When you look at your country on that globe, the parts East of your city will be on the right side of the globe, and the points west of it will be on the left side of the globe.
The same thing happens with the Moon. When you're standing on the Earth, looking up at the Moon near the meridian, from the Northern Hemisphere (so it appears generally south of you), The parts of the Moon that are Lunar-Surface-East of the Moon's center will appear on the right side, and the parts of the Moon that are Lunar-Surface-West will appear on the Left. | {
"domain": "astronomy.stackexchange",
"id": 5249,
"tags": "the-moon, crater"
} |
finding angular velocity and regular velocity when bouncing off a surface with friction | Question: Take the game of pong as a simple example. When you hit the ball with a paddle that has a frictional surface, the ball will spin as well as change direction according to the coefficient of kinetic friction on the paddle and the velocity of the paddle. The ball will then spin, and when it hits another surface this spin will cause a change in direction as well. Assuming that the velocity of the paddle is never low enough to use the static coefficient of friction of the paddle, how would I find the generated angular velocity on the ball and the new velocity of the ball?
Answer: First of all, choose the reference frame co-moving with the paddle and assume that this reference frame is inertial. This is the key to all ball-and-wall problems. Of course, ignore gravity and air drag. Now we have a spinning ball incident on a stationary surface with friction. Let $\mathbf{V}$ be the vector of the ball's velocity with respect to the paddle, and $\mathbf{V}_n$ and $\mathbf{V}_t$ be the normal and the tangential components of the ball's velocity before impact. Let $\boldsymbol{\omega}$ be the ball's angular velocity before impact. All quantities in bold are vectors.
Then it gets as complicated as you wish. To make it as simple as possible, let us assume that the ball spends time $\tau$ in contact with the surface, and that all forces acting on it during contact are constant in time. Further, assume that the friction coefficient, $\mu$, does not depend on the relative velocity of the ball and the paddle. Important: This assumption will break down if rotation of the ball ever reaches such speed that the ball will start rolling instead of sliding; at this moment the tangential force vanishes, and such situation must be treated independently. A LOT of assumptions to make it possible to tackle the problem analytically, huh? But the good news is that it's all Mechanics 101 from now on.
If the ball is perfectly elastic, then there are no dissipative forces acting in the direction normal to the surface. This means that when the ball rebounds, its normal component of velocity will just be inverted:
$$
\mathbf{V}_n'=-\mathbf{V}_n\tag{1}
$$
This allows us to calculate the average normal force acting on the ball during contact (just Newton's 2nd law):
$$\left|F_n\right|=2m\frac{\left|V_n\right|}{\tau}$$
Tangential force acting on the ball due to friction during contact is
$$\mathbf{F}_t=\mu\left|F_n\right|\hat{\mathbf{e}}=2m\mu\frac{\left|V_n\right|}{\tau}\hat{\mathbf{e}},$$
where $\hat{\mathbf{e}}$ is a unit vector in the direction of the friction force. It can be found as
$$
\hat{\mathbf{e}}=\frac{\left(\mathbf{R}\times\boldsymbol{\omega}\right)-\mathbf{V}_t}{\left|\left(\mathbf{R}\times\boldsymbol{\omega}\right)-\mathbf{V}_t\right|},
$$
where $\mathbf{R}$ is the vector from the center of the ball to the point of contact, $\boldsymbol{\omega}$ is the ball's angular velocity, and $\mathbf{V}_t$ is the tangential speed of the ball. Here | | denote taking a vector's modulus (length).
This force, acting for $\tau$ seconds, will change the ball's tangential momentum by
$$ \Delta \mathbf{p}_t=\tau\mathbf{F}_t=2m\mu\left|V_n\right|\hat{\mathbf{e}},$$
(note that $\tau$ cancels out!), so the ball's tangential speed after impact will be
$$\mathbf{V}_t'=\mathbf{V}_t+\hat{\mathbf{e}}\frac{\left|\Delta\mathbf{p}_t\right|}{m} = \mathbf{V}_t + 2\mu\left|V_n\right|\hat{\mathbf{e}}\tag{2}$$
Finally, the angular momentum picked up by the ball equals
$$\Delta\mathbf{L}=R\tau|F_t|\hat{\mathbf{f}}=2Rm\mu\left|V_n\right|\hat{\mathbf{f}}$$
where $R$ is the radius of the ball (also, $R=\left|\mathbf{R}\right|$), and $\hat{\mathbf{f}}$ is a unit vector in the direction in which the angular momentum was picked up. To find $\hat{\mathbf{f}}$, use
$$\hat{\mathbf{f}}=\frac{\mathbf{R}\times\hat{\mathbf{e}}}{R}.$$
This change in $\mathbf{L}$ will make the ball's angular velocity after impact
$$\boldsymbol{\omega}'=\boldsymbol{\omega}+\hat{\mathbf{f}}\frac{|\Delta L|}{I}=\boldsymbol{\omega} + \frac{|2Rm\mu V_n|}{I}\hat{\mathbf{f}}\tag{3}$$
Here $I=(2/5)mR^2$ is the moment of inertia of the ball (assuming that it is a solid uniform sphere). Note that the mass $m$ cancels out.
Equations (1), (2) and (3) give you the relationship between pre- and post-impact linear velocity $\mathbf{V}$ and angular velocity $\boldsymbol{\omega}$. They are determined in the paddle's reference frame, so to get lab (room) frame quantities, you need to transfer back to that frame. It is tedious to do by hand, but if you are developing a ping pong emulator in which you use a game console's accelerometer to emulate a paddle, the computer will happily do this for you.
Let me re-iterate at this point that this calculation does not apply to cases when the ball is rolling (or nearly rolling), as opposed to sliding. This situation will occur if $[\mathbf{R} \times \boldsymbol{\omega} ] = \mathbf{V}_t$, or, equivalently, $\hat{\mathbf{e}}=0$. If this situation occurs (it will if initial conditions are close to it), the ball will start rolling, its tangential momentum and angular velocity will freeze, and it will just rebound. To treat this situation properly, you may want to choose a finite $\tau$, a large integer $N$, and break $\tau$ into smaller time steps $\Delta\tau=\tau/N$. Then calculate all quantities during impact in $\Delta\tau$ increments. Once you detect the rolling condition $\hat{\mathbf{e}}=0$, you make the ball rebound with the current $\mathbf{V}_t'$ and $\boldsymbol{\omega}'$.
I may have messed up the signs here and there, but, basically, it is all very simple. To top it off, here is a diagram (you are correct, I am not an artist!) In my example, $\boldsymbol{\omega}$ is pointing toward us, but in the solution, its direction can be arbitrary.
EDIT: I just realized that equations (2) and (3) are only correct if the ball is spinning in the direction it is flying (i.e. if vector $\hat{\mathbf{e}}$ is parallel or anti-parallel to vector $\mathbf{V}_t$). So 2D Pong is OK with equations (2) and (3). However, in the 3D problem, if the ball is spinning sideways, then during contact, $\boldsymbol{\omega}$ and $\mathbf{V}_t$ will change in length, and therefore unit vectors $\hat{\mathbf{e}}$ and $\hat{\mathbf{f}}$ will change direction. This means that to get the ball's speed and spin, we need to integrate in time. It can be done analytically (a nice problem to torture graduate students), and it is not difficult to do numerically. One should choose a contact duration $\tau$, a large integer $N$, and a small time step $\Delta t=\tau/N$. And then calculate
$$\mathbf{V}_t\left(t+\Delta t\right)=\mathbf{V}_t(t)+\hat{\mathbf{e}}\frac{\left|\mathbf{F}_t\right|}{m}\Delta t$$
$$\boldsymbol{\omega}\left(t+\Delta t\right)=\boldsymbol{\omega}(t)+\hat{\mathbf{f}}\frac{\left|\mathbf{F}_t\right|R}{I}\Delta t$$
and do it $N$ times, re-calculating the values of $\hat{\mathbf{e}}$ and $\hat{\mathbf{f}}$ every time step. A fringe benefit of this approach is that it is easy to control the onset of rolling. If any component of vector $\hat{\mathbf{e}}$ changes sign (i.e., goes through 0) at any time step, sliding stops and rolling begins. The answer should not depend on $\tau$ or $N$.
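A minimal numerical sketch of this time-stepping scheme, reduced to 2D so that $\mathbf{V}_t$ and $\boldsymbol{\omega}$ become scalars and $\hat{\mathbf{e}}$ collapses to a sign; all parameter values are my assumptions for illustration (roughly a table-tennis ball):

```python
def sign(x):
    return (x > 0) - (x < 0)

def bounce(vn, vt, w, R=0.02, m=0.0027, mu=0.3, tau=1e-3, N=1000):
    """2D contact: vn/vt are the normal/tangential speeds, w the spin about z.
    Returns the post-impact (vn', vt', w') per the scheme above."""
    I = (2.0 / 5.0) * m * R**2          # solid uniform sphere
    Fn = 2.0 * m * abs(vn) / tau        # average normal force during contact
    dt = tau / N
    for _ in range(N):
        slip = (-R * w) - vt            # x-component of (R x w) - V_t
        if slip == 0:                   # already rolling: no tangential force
            break
        e = sign(slip)                  # ê reduces to a sign in 2D
        vt += e * mu * Fn / m * dt      # V_t(t + dt)
        w += e * mu * Fn * R / I * dt   # omega(t + dt)
        if sign((-R * w) - vt) != e:    # slip changed sign: rolling begins
            break
    return -vn, vt, w                   # normal component inverted, eq. (1)

# Ball hits the floor moving right with no spin: it loses tangential speed
# and picks up rolling spin until vt + R*w is (nearly) zero.
vn2, vt2, w2 = bounce(vn=-3.0, vt=2.0, w=0.0)
print(vn2, vt2, w2)
```

In the sample run, rolling sets in partway through the contact, after which the tangential speed and spin freeze, exactly as described above.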
EDIT END | {
"domain": "physics.stackexchange",
"id": 1286,
"tags": "forces, friction, torque"
} |
How can I calculate the power required to make an air turbine spin at a constant rpm? | Question: Say I have an air turbine with diameter $d$, number of blades $N$, and moment of inertia $\frac {d^{2}*M_{Blade}^{**1}}{4}$ (at the tip of the blade).
How do I calculate the power required to spin this turbine at a constant $RPM$, ignoring all the dissipative forces, except for the air drag on the blades$^{**1}$?
$^{**1}$: $M_{total} = M_{Blade}+M_{Air}$
Answer: You need more information than this to solve the problem. Here's how you get started:
You model the fan as a black box that takes in air at a known mass flow rate and pressurizes it from ambient to a known pressure rise (i.e., the delta p across the fan disc). From this you can figure out the rate at which work must be expended to accomplish this task.
The rate of work is power, and so you convert units of the airflow power to shaft horsepower; this tells you the size of the electric motor needed to spin the fan.
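A sketch of that black-box estimate (all flow numbers below are made-up illustrative values, not from the question):

```python
mass_flow = 2.0     # kg/s of air through the fan (assumed)
rho = 1.2           # kg/m^3, air density
delta_p = 500.0     # Pa, pressure rise across the fan disc (assumed)

Q = mass_flow / rho          # volumetric flow rate, m^3/s
P_air = Q * delta_p          # W, ideal (loss-free) airflow power
P_hp = P_air / 745.7         # shaft horsepower before efficiency losses

print(f"{P_air:.0f} W ~= {P_hp:.2f} hp")
```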
This furnishes an estimate of the horsepower requirement as it ignores efficiency losses in the fan blades. It also does not tell you exactly how to design the fan (blade number, pitch, diameter). | {
"domain": "physics.stackexchange",
"id": 53695,
"tags": "rotational-kinematics, power"
} |
Is internal resistance relevant in motional EMF? | Question: When a conductor passes a magnetic field and connected to a circuit, the induced voltage is calculated via the motional EMF($\epsilon$): $$\epsilon=-vBL$$ Is the conductor's resistance (or internal resistance) relevant to the voltage? Meaning does it affect the magnitude of the voltage induced, or only determines the amount of current induced?
Trying to link Ohm's law, voltage drop, and internal resistance to the motional emf. To put the question another way, can two conductors of the same ($L$) moving at the same ($v$) in the same ($B$) strength but of different resistance/internal resistance induce the same voltage, but with higher/lower current?
Answer: Be careful because that formula is only valid for a very limited set of field geometries. It is always better to derive EMF from the change of magnetic flux. To answer your question, the induced voltage at zero current does not depend on the resistance of the conductor. As soon as a load is connected to it, the effective voltage measured on the conductor will be reduced by $V=IR$, though. | {
"domain": "physics.stackexchange",
"id": 23244,
"tags": "electromagnetism, electric-current, voltage"
} |
Using DONUT algorithm with keras | Question: I am trying to get this repo of Xu's DONUT algorithm running, however I am getting an error I do not quite understand. The readme says I should load raw_data as follows:
timestamp, values, labels = ...
# If there is no label, simply use all zeros.
labels = np.zeros_like(values, dtype=np.int32)
# Complete the timestamp, and obtain the missing point indicators.
timestamp, missing, (values, labels) = \
complete_timestamp(timestamp, (values, labels))
however, when I do so, I get this error:
ValueError: The shape of ``arrays[0]`` does not agree with the shape of `timestamp` ((109577, 11) vs (109577,))
which does not make sense to me as I can't think of a reason timestamp would be an 11 dim array. When I pass the values as the timestamp arg, I get "timestamp must be a 1D array"
Very confused, hopefully, someone can shed some light.
Here are the checks in the code:
if len(timestamp.shape) != 1:
raise ValueError('`timestamp` must be a 1-D array')
has_arrays = arrays is not None
arrays = [np.asarray(array) for array in (arrays or ())]
for i, array in enumerate(arrays):
if array.shape != timestamp.shape:
raise ValueError('The shape of ``arrays[{}]`` does not agree with '
'the shape of `timestamp` ({} vs {})'.
format(i, array.shape, timestamp.shape))
As well as the repo itself:
https://github.com/haowen-xu/donut
Answer: What does your timestamp look like? Apparently there are too many dimensions.
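For example, a quick way to reproduce the shape mismatch and extract a single 1-D series (the shapes below just mirror the error message; picking column 0 is an arbitrary choice):

```python
import numpy as np

values = np.random.rand(109577, 11)   # 2-D: 11 series, as in the error message
timestamp = np.arange(len(values))    # 1-D, shape (109577,)

print(values.shape, timestamp.shape)  # (109577, 11) vs (109577,)

# complete_timestamp checks array.shape == timestamp.shape, so pass a
# single 1-D series (one column) at a time:
one_series = values[:, 0]
assert one_series.shape == timestamp.shape
```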
When using pandas DataFrames you could pass the .index (in case it's no multiindex) or just np.arange(len(<your_data>)) as timestamps. | {
"domain": "datascience.stackexchange",
"id": 3467,
"tags": "deep-learning, data, anomaly-detection, autoencoder"
} |
How do we know a "quantum function call" is worth the same amount of time as a "classical function call?" | Question: In quantum and classical algorithms, we often need to do "function calls." Quantum algorithms such as Grover's algorithm or the Deutsch–Jozsa algorithm can take fewer function calls than their classical counterparts, and this is often argued as a reason why these algorithms are more efficient. However, I see these arguments as having a gap.
My main question is, how do we know a "quantum function call" is worth the same amount of time as a "classical function call?"
For example, in an unstructured search problem, we have a function $f$ such that $f(x) = 0$ if $x$ is not a solution to our search problem and $f(x) = 1$ if $x$ is a solution to our search problem. In Grover's algorithm, we utilize an oracle $\hat{O}_{f}:|x\rangle\mapsto (-1)^{f(x)}|x\rangle$.
Now we could imagine a scenario where an implementation of $\hat{O}_{f}$ would require $N$ seconds. Meanwhile a classical call to $f$ for a single item might take $1$ second. In this scenario, Grover's algorithm would take $O(\sqrt{N})$ oracle calls, but the entire search would take $O(N\sqrt{N})$ seconds to complete, which is not faster than $O(N)$ seconds classically.
Answer: You are probably mixing two aspects. One is the actual complexity of an algorithm and the second is "auxiliary work", i.e. preparing the circuit, transpiling, setting qubits to an initial state, etc. On a classical computer you also do not care about the time to load data into RAM, set up registers in the processor, compile source code or interpret scripts, etc. You just care about the actual number of operations needed to perform the algorithm. In assessing the complexity of, for example, the Grover algorithm, you assume that your circuit is already prepared. Of course, there will be some auxiliary work which can slightly hinder the speed-up provided by a quantum computer, but this is the classic issue of the difference between theory and practical application. See the paper Is Quantum Database Search Practical? for additional information on this issue. | {
"domain": "quantumcomputing.stackexchange",
"id": 3654,
"tags": "grovers-algorithm, speedup, deutsch-jozsa-algorithm, oracles"
} |
Short hash generator | Question: I am working on some code that will generate a link with a relatively unique 10 character hash. I am not worried too much about collisions as long as they are rare enough that I could have a couple thousand links outstanding, since I soft delete after they are used and a collision between an open item and a closed item is not an issue. I would also like the method to be deterministic (the same input generates the same output), so worst case I can recalculate the hash if needed. Here is the code I have; it works, but I am wondering if there is a better way.
string input = //Some input text that includes the datetime the hash was created;
using (SHA1Managed sha1 = new SHA1Managed())
{
var hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(input));
//make sure the hash is only alphanumeric to prevent characters that may break the url
return string.Concat(Convert.ToBase64String(hash).ToCharArray().Where(x => char.IsLetterOrDigit(x)).Take(10));
}
Answer: This seems fine.
This should give you a key-space of roughly 62^10 (Base64 has 64 symbols, but the alphanumeric filter keeps only 62 of them).
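The same pipeline and the collision claim can be sanity-checked with a short sketch (Python rather than C#; the input string is made up):

```python
import base64
import hashlib
from itertools import islice

def short_hash(text: str, length: int = 10) -> str:
    """Deterministic short token: SHA-1 -> Base64 -> keep only alphanumerics."""
    digest = hashlib.sha1(text.encode("utf-8")).digest()
    b64 = base64.b64encode(digest).decode("ascii")
    return "".join(islice((c for c in b64 if c.isalnum()), length))

token = short_hash("order-42|2012-08-07T12:00:00")
assert token == short_hash("order-42|2012-08-07T12:00:00")  # same input, same output
assert token.isalnum() and len(token) <= 10

# Birthday bound: with n outstanding links, P(collision) ~ n^2 / (2 * 62**10)
n = 5000
p_collision = n ** 2 / (2 * 62 ** 10)  # on the order of 1e-11
```

At a couple of thousand outstanding links the estimated collision probability is indeed many orders of magnitude below a lottery jackpot.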
If you have only a few thousand links then you're more likely to win the lottery jackpot than get a collision. | {
"domain": "codereview.stackexchange",
"id": 15470,
"tags": "c#, url"
} |
How to approach this kind of task about kinetic energy? | Question: A bullet with mass $m_{ball}=0.2\ \mathrm{kg}$ travels with speed $v=2\ \frac{m}{s}$, hits a plasticine sphere with mass $m_{sphere}=2.5\ \mathrm{kg}$ and gets stuck. I need to find the amount of heat ejected. How do I approach this task? Probably it's about kinetic energy transforming to heat, and the law of conservation of momentum.
Answer: The velocity of the system after collision is
$$
V=\frac{m_b}{m_b+m_s}v_b
$$
The loss of kinetic energy can be taken as the ejected heat in question,
\begin{align}
\Delta KE &= m_b v_b^2 /2 - (m_b+m_s)V^2 /2\\
&= m_b v_b^2 /2 - \frac{m_b^2}{m_b+m_s}v_b^2 /2\\
&= m_b v_b^2/2 \left(1-\frac{m_b}{m_b+m_s} \right)\\
&= \frac{m_b m_s}{m_b+m_s}v_b^2/2 \\
\end{align} | {
"domain": "physics.stackexchange",
"id": 12425,
"tags": "homework-and-exercises, kinetic-theory"
} |
How to implement $\sqrt{iSWAP}$ in Qiskit | Question: I want to implement the $\sqrt{iSWAP}$ operator using simple operations in Qiskit such as it is done for the $iSWAP$ here or $\sqrt{SWAP}$ gate here. How can I do this? If possible I would like to know what 'methods' do people use to find such decomposition.
Answer: Given that the
$$\sqrt{iSWAP} = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1/\sqrt{2} & i/\sqrt{2} & 0 \\ 0 & i/\sqrt{2} & 1/\sqrt{2} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} $$
then we can use the decompose method in qiskit to get the set of elementary gates that would be implemented on IBM hardware, which comes out to be:
You can use Quirk to do something similar, I believe. Now, recently IBM changed its native set of gates to $\{ CZ, ID, RZ, SX, X \}$. So to see how this is implemented on the hardware you can use the transpilation method, which will transpile the above circuit to the following circuit on 'ibmq_athens':
If you wish to do the decomposition within qiskit, you can use the following script:
from qiskit.quantum_info.operators import Operator
from qiskit import QuantumCircuit, QuantumRegister
import numpy as np
sqrt2 = np.sqrt(2)
controls = QuantumRegister(2)
circuit = QuantumCircuit(controls)
Matrix = Operator( [
[1, 0, 0, 0],
[0, 1/sqrt2, 1j/sqrt2, 0],
[0, 1j/sqrt2, 1/sqrt2, 0],
[0, 0, 0, 1] ])
circuit.unitary(Matrix, [0,1])
decomp = circuit.decompose()
print(decomp)
And the transpilation process can be done as:
from qiskit import IBMQ
from qiskit.compiler import transpile
provider = IBMQ.load_account()
Circuit_Transpile = transpile(decomp, provider.get_backend('ibmq_athens') , optimization_level=3)
print(Circuit_Transpile) | {
"domain": "quantumcomputing.stackexchange",
"id": 2457,
"tags": "qiskit, gate-synthesis"
} |
How does global and local path finding work in ROS? | Question:
Is there some good resource about how ROS implemented global and local path finding? If I want add some extra sensors which will have the local path finding, what is the best way to do it?
Thanks!
Originally posted by rozoalex on ROS Answers with karma: 113 on 2017-10-22
Post score: 0
Answer:
Hi rozoalex,
You can find an overview of how the ROS navigation stack works here: http://wiki.ros.org/nav_core?distro=lunar
Details of the local planner here: http://wiki.ros.org/base_local_planner?distro=lunar
Details of the global planner here: http://wiki.ros.org/global_planner?distro=lunar
This is the basic scheme:
(diagram: http://wiki.ros.org/nav_core?action=AttachFile&do=get&target=move_base_interfaces.png)
As you can see, both the local planner and the global planner use a costmap (a 2D or 3D representation of free/occupied space) constructed using sensor information.
If you want to add information from new sensors for the local costmap to consider you should do so by adding new layers to the local costmap. The layer type will depend on the sensor type. You should check the tutorials and documentation of the costmap_2d package.
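For instance, a hypothetical extra laser could be fed into the local costmap through an obstacle layer. The sketch below uses the standard costmap_2d layer plugins; the topic and frame names are placeholders, not part of the original answer:

```yaml
# local_costmap params (sketch) -- layers the local planner's costmap is built from
plugins:
  - {name: static_layer,    type: "costmap_2d::StaticLayer"}
  - {name: obstacle_layer,  type: "costmap_2d::ObstacleLayer"}
  - {name: inflation_layer, type: "costmap_2d::InflationLayer"}

obstacle_layer:
  observation_sources: extra_laser
  extra_laser: {sensor_frame: extra_laser_link, data_type: LaserScan,
                topic: /extra_scan, marking: true, clearing: true}
```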
I hope this helps
Originally posted by Martin Peris with karma: 5625 on 2017-10-22
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by rozoalex on 2017-10-26:
Thank you so much! Very helpful! | {
"domain": "robotics.stackexchange",
"id": 29162,
"tags": "navigation"
} |
jQuery .validate() plugin, additional method to validate numbers | Question: A while ago I've written a custom validation method for .validate(). It is a local validation plugin, with some limited validation.
I was required to write a validation method with the following requirements:
Must accept a custom string.
Must allow a maximum value.
Must allow a minimum value.
Has to allow to have dynamic values on the string.
Must validate using a regular expression.
Must allow numbers in the Portuguese and English formats.
At the time, it was quite a challenge and I was happy with the result, but the code is BAD. It works, but it is so hacky and kludged it hurts my soul.
(function(){
var __c_number_between=new String('');
__c_number_between.valueOf=function(){
return 'The value must be between __MIN__ and __MAX__'
.replace(/__MIN__/g,this.min)
.replace(/__MAX__/g,this.max);
};
__c_number_between.toString=function(){return this.valueOf();};
$.validator.addMethod('c_number_between',
function(d,i,o){
var num=d.replace(/\./g,'').replace(',','.')/1;
__c_number_between.min=o.min;
__c_number_between.max=o.max;
return o.max>=num&&num>=o.min;
},__c_number_between);
})();
Yes, that is the code. It fulfills all my needs, but relies on a really bad behaviour in Javascript, which is that objects are passed as a reference. It also relies on Javascript being executed linearly, instead of having multiple threads, which is REALLY bad!
How can I re-write this in a clean and decent way?
Also, worth noticing is that this is a cross-posting from StackOverflow, on the following question: jQuery .validation plugin: help cleaning aditional method
Answer: jQuery.validator.format()
jQuery.validator has a format method
This allows you to replace the String you created entirely.
However, this would require you to replace your 'o' parameter object with an array ( [min,max] ).
$.validator.addMethod('c_number_between',
function(d,i,o){
var num=d.replace(/\./g,'').replace(',','.')/1,
// min/max assigns for context readability (optional)
min = o[0],
max = o[1];
return max>=num&&num>=min;
},$.validator.format('The value must be between {0} and {1}'));
Update:
While the docs say:
The default message to display for this method. Can be a function created by ''jQuery.validator.format(value)''. When undefined, an existing message is used (handy for localization), otherwise the field-specific messages have to be defined.
Any function that returns a string can be used so boom:
$.validator.addMethod('c_number_between',
function(d,i,o){
var num=d.replace(/\./g,'').replace(',','.')/1;
return o.max>=num&&num>=o.min;
},function(params){return 'The value must be between {0} and {1}'.replace('{0}',params.min).replace('{1}',params.max);}); | {
"domain": "codereview.stackexchange",
"id": 15098,
"tags": "javascript, jquery, validation"
} |
Print a chessboard | Question: I'm working through Eloquent Javascript and just did the examples in chapter 2. One is to create a chess board grid of arbitrary size. My code works, but the if/else if statements I used feel clunky. This is a common feeling I have when I'm writing code, but I'm often at a loss of how else to do it.
How could I write this code in a more compact, elegant way (recognizing the subjectivity of "elegant")?
http://eloquentjavascript.net/02_program_structure.html
var size = 8;
var result = "";
for(var i = 0; i < size; i++) {
for(var j = 0; j < size; j++) {
if(i !== 0 && j ===0){
result += "\n";
}
else if((j % 2 === 0 && i % 2 === 0) ||
(j % 2 === 1 && i % 2 === 1)){
result += " ";
}
else if((j % 2 === 0 && i % 2 === 1) ||
(j % 2 === 1 && i % 2 === 0)){
result += "#";
}
}
}
console.log(result);
Answer: First of all, your code has a little flaw. It will print the first two lines like this:
◼◻◼◻◼◻◼◻
◼◻◼◻◼◻◼
You may not have noticed it, because it's hard to spot with a whitespace. This happens because you use an else if after prepending the newline:
if (i !== 0 && j === 0) {} else if {} else if {}
Remove the first else branch and it will work.
Personally, I would move this part to the end of the loop, as it reads more like:
If the line is completed, append a line-break.
This test should be sufficient and is maybe easier to read:
if (size - 1 == j) {
result += "\n";
}
To take it even a little further, move this out of the inner loop and get rid of the test at all. As you always are going to append a line-break after each row, this is sufficient:
result += "\n";
Thanks to 200_success for pointing that out.
Then you want to print a field of the chessboard in any case. So the second test is actually redundant:
if ((j % 2 === 0 && i % 2 === 0) || (j % 2 === 1 && i % 2 === 1)) {
result += " ";
} else if((j % 2 === 0 && i % 2 === 1) || (j % 2 === 1 && i % 2 === 0)){
result += "#";
}
… can become:
if ((j % 2 === 0 && i % 2 === 0) || (j % 2 === 1 && i % 2 === 1)) {
result += " ";
} else {
result += "#";
}
Now, the condition is still hard to read. Let's simplify it:
if (0 === (i + j) % 2) {
result += "◼";
} else {
result += "◻";
}
This looks quite simple already. You can alternatively use the ternary operator and rely on JavaScript casting the result to boolean:
result += (i + j) % 2 ? "◻" : "◼";
The final result
var size = 8,
result = "";
for (var i = 0; i < size; i++) {
for (var j = 0; j < size; j++) {
result += (i + j) % 2 ? "◻" : "◼";
}
result += "\n";
}
console.log(result); | {
"domain": "codereview.stackexchange",
"id": 25988,
"tags": "javascript, beginner, ascii-art"
} |
Reduce a JavaScript array into named objects | Question: The code below reduces an array into "named" objects (not sure if that's the correct terminology!)
It works, but I'm sure the code could be improved. There is some repetition going on in the reduce.
It checks if a key exists (if (accumulator[name])). If not then initialize the results array, if it does then push onto the results array.
let response = {
columns: [
'n'
],
data: [
{
graph: {
nodes: [
{
id: '169',
labels: [
'Container'
],
properties: {
reference: 'REF002',
name: 'Cupboard',
id: '003'
}
}
],
relationships: []
}
},
{
graph: {
nodes: [
{
id: '170',
labels: [
'Container'
],
properties: {
reference: 'REF003',
name: 'Cupboard A',
id: '03a'
}
}
],
relationships: []
}
},
{
graph: {
nodes: [
{
id: '964',
labels: [
'Equipment'
],
properties: {
reference: 'REF004',
name: 'Cupboard B',
id: '03b'
}
}
],
relationships: []
}
}
]
}
const result = response.data.reduce(
(accumulator, currentValue, currentIndex, array) => {
const name = currentValue.graph.nodes[0].labels[0];
if (accumulator[name]) {
accumulator[name].results.push({
title: currentValue.graph.nodes[0].properties.name,
description: currentValue.graph.nodes[0].properties.reference
});
} else {
accumulator[name] = {
name,
results: [
{
title: currentValue.graph.nodes[0].properties.name,
description: currentValue.graph.nodes[0].properties.reference
}
]
};
}
return accumulator;
},
{}
);
console.clear();
console.log(result);
Output Required
{
Container: {
name: 'Container',
results: [
{
title: 'Cupboard',
description: 'REF002'
},
{
title: 'Cupboard A',
description: 'REF003'
}
]
},
Foo: {
name: 'Foo',
results: [
{
title: 'Cupboard B',
description: 'REF004'
}
]
}
}
Answer: As it looks like you are using ES6 syntax, you could also throw in some destructuring assignments, but I guess the main point is that one would need to create the output object only once. One could also extract the accumulation into a descriptive name:
const byFirstLabel = (acc, {graph: {nodes: [node]}}) => {
let label = node.labels[0]
let { name: title, reference: description } = node.properties
let entry = { title, description }
acc[label] ? acc[label].results.push(entry) :
acc[label] = { name: label, results: [ entry ] }
return acc
}
response.data.reduce(byFirstLabel, {}) | {
"domain": "codereview.stackexchange",
"id": 27911,
"tags": "javascript, functional-programming"
} |
Regulation of chromatin structure | Question: Recently, I reviewed the different levels of chromatin structure. The primary level is nucleosomes, where DNA is bound to histones, and has structural similarity to "beads on a string." The secondary level is a 30nm fiber, and the tertiary level is formed by radially looping the fibers.
I've also been learning about the histone code and how different modifications to the core histones relate to transcriptional regulation. Are these modifications the primary regulation mechanism for chromatin structure? In other words, does chromatin assume the most compact structure possible until histone modifications are made to enable transcription? Or have other regulatory mechanisms unrelated to transcription been discovered and characterized?
Answer: "Are these modifications the primary regulation mechanism for chromatin structure?"
It depends on how you define primary, we might currently think of histone modifications as primary because other regulatory mechanisms have not yet been well studied. Something else you can think of are the various regulatory proteins that interact with histone marks to modify chromatin.
I don't think you should imagine chromatin as assuming "the most compact structure possible until histone modifications are made to enable transcription". Rather, think of histone modification as a dynamic process, with various transcription factors (a class of proteins) coming in and adding/removing histone marks as well as 'remodeling chromatin' (adding/removing nucleosomes).
As a side note, I don't believe much is known about the tertiary stage you initially mentioned, so it could play a large role in the regulatory mechanisms of chromatin structure; it has just not been well explored. | {
"domain": "biology.stackexchange",
"id": 301,
"tags": "molecular-biology, molecular-genetics, epigenetics, histone, chromatin"
} |
Why do fusion and fission both release energy? | Question: I only have high school physics knowledge, but here is my understanding:
Fusion: 2 atoms come together to form a new atom. This process releases the energy keeping them apart, and is very energetic. Like the sun!
Fission: Something fast (like an electron) smashes into an atom breaking it apart. Somehow this also releases energy. Less energy than fusion, and it's like a nuclear reactor.
Now my understanding is that the lowest energy state is when everything is tightly stuck together (as per fusion), and it costs energy to break them apart.
So.. why do both fusion and fission release energy?
Answer: In general, both fusion and fission may either require or release energy.
Purely classical model
Nucleons are bound together with the strong (and some weak) nuclear force.
The nuclear binding is very short range; this means that we can think of nucleons as "sticking" together due to this force.
Additionally the protons repel due to their electric charge.
As geometry means that a nucleon has only a limited number of other nucleons it can "stick" to, the attractive force per nucleon is more or less fixed.
The repulsive electric field is long range. That means that as the nucleus grows, the repulsion grows, so that eventually that repulsion exceeds the attractive effect and one cannot grow the nucleus further. Hence a limited number of possible elements.
Effectively this means the attractive force per nucleon increases rapidly for a small number of nucleons, then tops out and starts to fall.
Equivalently, the binding energy per nucleon behaves similarly.
As @cuckoo noted, iron and nickel have the most tightly bound nuclei; iron-56 having lowest mass per nucleon and nickel-62 having most binding energy.
This image (from Wikipedia) illustrates the curve in the typically presented manner:
However, I prefer to think of binding energy as negative and therefore better visualize iron as being the lowest energy state:
For lighter elements:
Fission requires energy
Fusion releases energy
For heavier elements, the opposite is true.
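The shape of that curve can be reproduced roughly from the semi-empirical mass formula; the coefficients below are common textbook values (in MeV) and are not from this answer:

```python
def binding_energy(A: int, Z: int) -> float:
    """Semi-empirical mass formula (pairing term omitted), in MeV."""
    a_v, a_s, a_c, a_a = 15.8, 18.3, 0.714, 23.2
    return (a_v * A                                # volume: every nucleon "sticks"
            - a_s * A ** (2 / 3)                   # surface nucleons stick less
            - a_c * Z * (Z - 1) / A ** (1 / 3)     # long-range Coulomb repulsion
            - a_a * (A - 2 * Z) ** 2 / A)          # neutron/proton asymmetry

for name, A, Z in [("He-4", 4, 2), ("Fe-56", 56, 26), ("U-238", 238, 92)]:
    print(f"{name}: {binding_energy(A, Z) / A:.2f} MeV per nucleon")
# B/A peaks near iron, so fusing light nuclei or splitting heavy ones both move downhill
```

(The formula is a poor fit for very light nuclei, but it captures the rise, peak near iron, and slow fall of binding energy per nucleon.)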
The reason we mainly observe the release energy cases is because:
It is easier to do
It is more "useful" | {
"domain": "physics.stackexchange",
"id": 55646,
"tags": "nuclear-physics, mass-energy, fusion, binding-energy, elements"
} |
Is there a straightforward method to find the Kraus operators of a given quantum channel? | Question: $\newcommand{\Ket}[1]{\left|#1\right>}$
I would like to know if there is a systematic way of finding a set of Kraus operators $E_k$ for a quantum channel $\varepsilon$ defined by its action on a density matrix $\rho$ using these properties:
$\varepsilon(\rho) = \sum_{k}E_k\rho E_k ^{\dagger}$
$\sum_{k}E_k^{\dagger} E_k = \mathbb{1}$
I feel like you can "educated" guess the answer but I would like to know if there is a more formal method.
To be more specific, there are 2 different input possibilities for the problem: the first is when the channel $\varepsilon$ is defined by its action on a density matrix $\rho$, and the second is when it is defined by its action on kets.
Let me give an example for both these cases:
Action on $\rho$: find the Kraus operators for the dephasing channel:
$\rho \rightarrow \rho ' = (1-p)\rho + p\, diag(\rho_{00},\rho_{01})$
Action on kets: find the Kraus operators for the amplitude damping channel, defined by the action:
$\Ket{00} \rightarrow \Ket{00}$, $\Ket{10} \rightarrow \sqrt{1-p}\Ket{10} + \sqrt{p}\Ket{01}$
I cannot figure out a method for any of these types of cases even though I know a possibility of Kraus operators for both of these cases. For the dephasing channel:
$E_0 = \sqrt{1-p/2}\mathbb{1}$ and $E_1 = \sqrt{p/2}\sigma_z$
and for the amplitude damping channel:
$E_0 = \begin{pmatrix}
1 & 0 \\
0 & \sqrt{1-p}
\end{pmatrix} \quad E_1=
\begin{pmatrix}
0 & \sqrt{p} \\
0 & 0
\end{pmatrix}$
Answer: You can use the Choi isomorphism: apply the Choi map to your channel to obtain the corresponding Choi matrix, then compute the spectral decomposition of this matrix.
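As a concrete sketch (Python with NumPy, using the dephasing channel from the question with an assumed value of $p$; the reshaping step must match the convention used to build the Choi matrix):

```python
import numpy as np

p = 0.3  # assumed dephasing probability
Z = np.diag([1.0, -1.0])

def channel(rho):
    # dephasing channel; equivalent to (1-p) rho + p diag(rho)
    return (1 - p / 2) * rho + (p / 2) * Z @ rho @ Z

d = 2
# Choi matrix: J = sum_{ij} E(|i><j|) (x) |i><j|
J = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        E_ij = np.zeros((d, d), dtype=complex)
        E_ij[i, j] = 1.0
        J += np.kron(channel(E_ij), E_ij)

# spectral decomposition -> one Kraus operator per nonzero eigenvalue
vals, vecs = np.linalg.eigh(J)
kraus = [np.sqrt(v) * vecs[:, k].reshape(d, d)
         for k, v in enumerate(vals) if v > 1e-12]

# checks: completeness and reproducing the channel on a test state
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(d))
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
assert np.allclose(sum(K @ rho @ K.conj().T for K in kraus), channel(rho))
```

For this channel the two surviving eigenvectors reshape to multiples of $\mathbb{1}$ and $\sigma_z$, recovering the $E_0 = \sqrt{1-p/2}\,\mathbb{1}$, $E_1 = \sqrt{p/2}\,\sigma_z$ operators quoted in the question.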
The Kraus operators are the eigenvectors rearranged (vectorization) into matrices, each scaled by the square root of the corresponding eigenvalue. | {
"domain": "physics.stackexchange",
"id": 65443,
"tags": "quantum-mechanics, quantum-information"
} |
Calculating Phase difference | Question: Based on the image below, you can see two waves. The black wave is phase shifted towards the left.
If I know their starting positions 3$^{\circ}$ and 5$^{\circ}$ respectively for the black and blue waves, is it possible to calculate the phase difference between the two based on the equation and information listed below?
$$\theta(t) = \theta_{max}\:cos(\omega t + \varphi) $$
Known variables; $\omega$ , $t$ , $\theta_{max}$ and period
Answer: Yes, try to substitute $t=0$ in the equation for $\theta(t)$ for both the blue wave and the black wave, and use the information you can read from the graph:
$$\theta_\text{blue}(0)=5^\circ$$
$$\theta_\text{black}(0)=3^\circ$$
And the maximum amplitude, which is $5^\circ$ for both waves. Then you are left with two equations for the phases $\varphi$, which you only need to solve. | {
"domain": "physics.stackexchange",
"id": 75900,
"tags": "oscillators"
} |
Kitchen sponge microbiomes; management versus replacement versus "don't use sponges"? | Question: I have unfortunately just skimmed through the recent open access paper Microbiome analysis and confocal microscopy of used kitchen sponges reveal massive colonization by Acinetobacter, Moraxella and Chryseobacterium species (M. Cardinale et al. Scientific Reports 7, Article number: 5791 (2017) doi:10.1038/s41598-017-06055-9)
It reports an extensive analysis of a group of kitchen sponge specimens. From the abstract:
Two of the ten dominant OTUs, closely related to the RG2-species Chryseobacterium hominis and Moraxella osloensis, showed significantly greater proportions in regularly sanitized sponges, thereby questioning such sanitation methods in a long term perspective. FISH–CLSM showed an ubiquitous distribution of bacteria within the sponge tissue, concentrating in internal cavities and on sponge surfaces, where biofilm–like structures occurred. Image analysis showed local densities of up to 5.4 * 1010 cells per cm3, and confirmed the dominance of Gammaproteobacteria. Our study stresses and visualizes the role of kitchen sponges as microbiological hot spots in the BE, with the capability to collect and spread bacteria with a probable pathogenic potential. (emphasis added)
A quick take-home message from the paper is that typical efforts to try to re-sanitize used sponges may only shift the sponge's bacterial population toward a more potentially pathogenic population, and that it is probably a better idea to simply dispose of the sponge perhaps weekly and get a fresh one instead of microwaving it.
There's a nice write-up in the NYTimes science section, Cleaning a Dirty Sponge Only Helps Its Worst Bacteria, Study Says:
Stop. Drop the sponge and step away from the microwave.
That squishy cleaning apparatus is a microscopic universe, teeming with countless bacteria. Some people may think that microwaving a sponge kills its tiny residents, but they are only partly right. It may nuke the weak ones, but the strongest, smelliest and potentially pathogenic bacteria will survive.
Then, they will reproduce and occupy the vacant real estate of the dead. And your sponge will just be stinkier and nastier and you may come to regret having not just tossed it, suggests a study published last month in Scientific Reports.
Personally I've always hated the whole concept of kitchen sponges and now feel vindicated. Isn't something that can trap biomatter (bacteria, food particles) and keep it moist for extended periods of time the antithesis of what you would want to clean your dishes with?
Question: Aren't sponges simply a bad idea for washing dishes in the first place?
I'm not looking for opinion, rather I'm asking about policy and practice. Since what we do in the privacy of our own kitchen sink is our own, I'd like to ask if kitchen sponges are generally approved for use in restaurants or institutional food preparation settings? Do they have sterilization protocols? Does this paper speak to those?
Background:
above: "(A) Kitchen sponges, due to their porous nature (evident under the binocular; (B)) and water-soaking capacity, represent ideal incubators for microorganisms. Scale bar (B): 1 mm. (C) Pie charts showing the taxonomic composition of the bacterial kitchen sponge microbiome, as delivered by pyrosequencing of 16S rRNA gene libraries of 28 sponge samples (top and bottom samples of 14 sponges, respectively). For better readability, only the 20 most abundant orders and families are listed." From here.
above: "Neighbor-joining phylogenetic tree of the ten most abundant OTUs in the analyzed kitchen sponges, as retrieved by pyrosequencing of 16S rRNA gene amplicon libraries. The relative abundance (percentage of the total sequence dataset) and the detection frequency (number of sponges where they were detected) are given in parenthesis after the OTU number. The most similar reference sequences retrieved by BLAST and EzTaxon alignment (type strains only) were included in the tree, followed by the corresponding accession numbers. Red circles indicate risk group 2 organisms, according to the German Technical Rule for Biological Agents No. 466 (TRBA 46631). Numbers at the nodes indicate percentage values of 1000 bootstrap re–samplings (only percentages ≥ 50 are shown). Scale bar represents substitution rate per nucleotide position." From here.
Answer: If your question is, indeed,
I'm asking about the nature of sponges, and if it's a good idea to use them to wash dishes and I'd like to stay focused on that.
I can answer that.
I like the sponge+mesh scrubber for washing dishes (not countertops or other) because the sponge conforms to the shape of things and reaches into crevices (with dishsoap and water) that I normally can't reach without more effort, and the mesh, well, scrubs. I don't care how grungy it gets; I rinse it out till it looks clean, throw it in the dishwasher when I run a load, which isn't often, and reuse until it's falling apart. Is it an ideal home for bacteria? Yep. You may think this is a red herring, but so is your toothbrush. Yet no one tells you to use a new toothbrush every week.
What is important is not what's in the sponge (or the toothbrush), but 1) how much of that bacteria stays deposited on the dishes washed with a sponge and rinsed with clean water, and 2) are any of the said bacteria dangerous to one's health?
Unfortunately, while the microbiome of the sponge was analyzed, the more important questions were not. There isn't a study yet that answers that aspect. So, until there is, I'll keep using my sponge without fear, as I do my toothbrush.
For example, the most common organisms grown from sponges were from the class Gammaproteobacteria. The important question is, is Gammaproteobacteria a pathogen?
What has been shown is that lack of Gammaproteobacteria in one's environment predisposes an individual to asthma and allergies. Also worth mentioning is that the mouth has a very high density microbiome itself,
The oral microbiome is comprised of over 600 prevalent taxa at the species level, with distinct subsets predominating at different habitats.
The oral microbiome is largely still unknown, but includes some of the species found in the kitchen sponge (chicken or egg situation?) Or is it that the prevalence of these organisms in our environment (including our hands) seeds the sponge and the mouth?
While the similarities suggest a kitchen sponge core-microbiome, the more variable fractions might differ from region to region. Clearly, more data from different regions are needed to support this hypothesis, but apparently bacteria affiliated with the Moraxellaceae seem “typical” for kitchen sponges. Interestingly, Moraxellaceae have been consistently detected on sink surfaces, faucets, refrigerator doors and stoves30, i.e. surfaces that might be regularly cleaned with kitchen sponges, suggesting them as source for these surface contaminations. However, Moraxellaceae also represent typical human skin bacteria, suggesting also other sources for the contamination of kitchen surfaces. In turn, human skin (hands) might represent a source for the contamination of the sponges with Moraxellceae during use. Recently it has been shown that in particular Moraxellaceae get significantly enriched on cotton laundry during a domestic washing process.
Should we stop laundering our cotton clothing?
As a physician (a shameless appeal to authority here), I'll worry when studies show that dishes washed with a sponge are colonized by harmful bacteria. So I'll continue to use my sponge to wash dishes partly because I like studies to be meaningful to health and disease, partly because I like my dishwashing sponge, and partly because whatever I use to wash my dishes (except my bare hands, which I deem insufficient to the task) will become a breeding ground for all manner of bacteria.
For all other kitchen cleanup, I use a cloth, which is doubtlessly packed with bacteria as well, but I can fool myself into thinking it is cleaner because I launder them every couple of days, or immediately after a relatively dirtier task. | {
"domain": "biology.stackexchange",
"id": 7625,
"tags": "microbiology, microbiome, sterilisation"
} |
Is roc auc graph better than roc auc score? If yes why? | Question: This was asked in viva of my ML course. I answered yes but could not precisely explain why.
By 'better' I mean whether geometric interpretation gives more information than just the numeric score.
Answer: Yes, the graph contains information that the AUC number alone does not have.
It is most interesting when comparing 2+ models that have very close AUC numbers. The graph can tell you that one model favours recall, and the other model favours precision. Or, if the lines are basically on top of the other, it tells you two models have the same performance at all thresholds.
The graph can also help you choose a threshold. So, if precision is more important, you are interested in the threshold where the line just starts to leave the left side. If recall is more important, you are interested in the point where it just starts to leave the top side.
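To make the comparison concrete, both the curve and the single number can be computed from scores alone. A sketch with made-up labels and scores (no tied scores assumed):

```python
import numpy as np

def roc_points(y_true, y_score):
    """One (FPR, TPR) point per example, sweeping the threshold from high to low."""
    y = np.asarray(y_true)[np.argsort(-np.asarray(y_score))]
    tpr = np.cumsum(y) / y.sum()                    # recall as the threshold drops
    fpr = np.cumsum(1 - y) / (len(y) - y.sum())     # false positive rate
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

fpr, tpr = roc_points([0, 0, 1, 1, 0, 1, 0, 1],
                      [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoidal area, 0.875 here
```

Two models could both report `auc == 0.875` yet trace visibly different `(fpr, tpr)` curves, which is exactly the information the graph adds.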
Flipping it around, looking at a graph requires a human expert. The AUC number allows you to compare thousands of models in the blink of an eye, and stay objective. | {
"domain": "datascience.stackexchange",
"id": 10361,
"tags": "auc, roc"
} |
Minimum velocity at the top of an object on a rope vs attached to a rigid body? | Question: When working through a physics problem, I realized there's a fundamental difference between when an object is spinning in a circle and is attached to some rigid object such as a beam fixed to an axle vs a non-rigid object such as a rope.
When looking at the minimum velocity at the top to continue moving in a circle, with an object such as a rope, the minimum velocity is $\sqrt{rg}$ to get around the top. But with an object fixed to a rigid object, the velocity at the top can be zero and it will continue to move in a circle. I feel like it has something to do with the rigid object providing a normal force that counteracts gravity which a rope doesn't, but I'm not sure how this provides the difference between minimum clearing velocity at the top.
Answer: With a rigid tether, the question is whether the object starts with enough kinetic energy to make up the potential difference and reach the top. The rod is allowed to pull or push.
The rope can only pull, and that causes this problem: some of the upwards kinetic energy possessed when the object passes the height of the circle's center has to be converted into sideways kinetic energy to stay on course. Tension can do this. However, in this quadrant of the circle, sideways energy can't be converted back into upwards kinetic energy or into gains in potential energy without a pushing force, which the rope can't provide.
Try thinking of the problem backwards: start at the top and release the object. With a rod, it will swing outward because the rod pushes it. With the rope, it will fall straight down unless you start by throwing it sideways.
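The same bookkeeping can be sketched numerically (the radius $r = 1$ m is an arbitrary choice, not from the question):

```python
import math

g, r = 9.81, 1.0  # example values

# Rope: tension can only pull, so at the top  m*g + T = m*v**2/r  with T >= 0,
# which forces v_top**2 >= r*g.
v_top_rope = math.sqrt(r * g)

# Rod: it can also push, so v_top = 0 is allowed.
v_top_rod = 0.0

# Energy conservation fixes the required speed at the bottom (height gain 2r):
v_bottom_rope = math.sqrt(v_top_rope ** 2 + 4 * g * r)  # sqrt(5 g r)
v_bottom_rod = math.sqrt(v_top_rod ** 2 + 4 * g * r)    # sqrt(4 g r)
assert v_bottom_rope > v_bottom_rod  # the rope demands extra energy at the bottom
```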
Basically, without the ability to push, you can't use all of the sideways motion to climb the top of the hill, so some energy goes to waste. | {
"domain": "physics.stackexchange",
"id": 34846,
"tags": "newtonian-mechanics, classical-mechanics, forces, rotational-dynamics, centripetal-force"
} |
How to compare energy levels in hybridized sp orbitals? | Question: How to compare energy levels in $\ce{sp, sp^2, sp^3}$ orbitals?
Since a higher energy level implies lower stability, an $\ce{sp-sp}$ bond must have the lowest energy level, since it is formed by the overlap of one sigma and two pi bonds in total, more than those in $\ce{sp^2-sp^2}$ or $\ce{sp^3-sp^3}$. Hence, $\text{energy level(}\ce{sp < sp^2 < sp^3}\text{)}$.
Is my calculated trend correct? If so, then is this the intended reasoning? If not, then why?
Answer:
How to compare energy levels in $\ce{sp, sp^2, sp^3}$ orbitals?
Background
Electrons in an atomic $\ce{2s}$ orbital will be lower in energy than electrons in an atomic $\ce{2p}$ orbital since the s orbital is closer to the nucleus.
When we describe a hybrid orbital as $\ce{sp^n}$, this is really a shorthand notation for $\ce{s^1p^n}$, which means that the orbital is made up from 1 part s orbital and n parts p orbital.
Analysis
As the "n" in $\ce{sp^n}$ grows larger we will have more p character and less s character in the hybrid orbital and, as explained above, it will consequently be higher in energy. So an $\ce{sp}$ hybrid orbital that has 50% s character will be lower in energy than an $\ce{sp^3}$ orbital which only contains 25% s character.
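The s-character fractions behind this ordering are easy to tabulate; a small sketch, treating $\ce{sp^n}$ as 1 part s mixed with n parts p as described above:

```python
# s-character of an sp^n hybrid: 1 part s mixed with n parts p,
# so the s fraction is 1/(1 + n); more s character -> lower orbital energy.
for n in (1, 2, 3):
    label = "sp" if n == 1 else f"sp{n}"
    print(f"{label}: {1 / (1 + n):.0%} s character")
# sp: 50% s character
# sp2: 33% s character
# sp3: 25% s character
```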
The energy ordering you proposed in your question is correct: $\ce{sp < sp^2 < sp^3}$. | {
"domain": "chemistry.stackexchange",
"id": 9615,
"tags": "bond, hybridization"
} |
amcl demo: map_server could not open yaml file (yaml file exists) | Question:
tl;dr
I am trying to open amcl demo with:
roslaunch chefbot_gazebo amcl_demo.launch map_file:=home/neuronet/hotel_world.yaml
And I am getting error:
Map_server could not open home/neuronet/hotel_world.yaml
Long version
I'm in indigo, ros version 1.11.20 on Ubuntu 14, and I am working through a project (chefbot) that is based on turtlebot:
http://www.instructables.com/id/Chefbot-A-DIY-Autonomous-mobile-robot-for-serving-/?ALLSTEPS#step2
I have run the gmapping_demo and created a map, and saved the map using
rosrun map_server map_saver -f ~/hotel_world
This has saved hotel_world.yaml and hotel_world.pgm in my home directory. Then the goal is to run the amcl demo. First (as instructed at instructables) I open the robot in gazebo, which works:
roslaunch chefbot_gazebo chefbot_hotel_world.launch
And then I launch the amcl_demo:
roslaunch chefbot_gazebo amcl_demo.launch map_file:=home/neuronet/hotel_world.yaml
It seems to start working (nodes start) but eventually I get the error that the map server could not open the yaml file:
core service [/rosout] found
process[map_server-1]: started with pid [6488]
process[amcl-2]: started with pid [6489]
process[move_base-3]: started with pid [6491]
**[ERROR] [1471437118.250759705]: Map_server could not open home/neuronet/hotel_world.yaml.**
Strangely, the .yaml file does exist in the intended directory. Here are the yaml file contents:
image: /home/neuronet/hotel_world.pgm
resolution: 0.010000
origin: [-11.240000, -12.200000, 0.000000]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196
Further, the pgm file (image) does exist, here it is:
Note I'm working with rospy, if that matters (any technical c++ stuff will likely go over my head).
Finally, here is the file amcl_demo.launch file:
<launch>
<!-- Map server -->
<arg name="map_file" default="$(find chefbot_gazebo)/maps/playground.yaml"/>
<node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" />
<!-- Localization -->
<arg name="initial_pose_x" default="0.0"/>
<arg name="initial_pose_y" default="0.0"/>
<arg name="initial_pose_a" default="0.0"/>
<include file="$(find chefbot_bringup)/launch/includes/amcl.launch.xml">
<arg name="initial_pose_x" value="$(arg initial_pose_x)"/>
<arg name="initial_pose_y" value="$(arg initial_pose_y)"/>
<arg name="initial_pose_a" value="$(arg initial_pose_a)"/>
</include>
<!-- Move base -->
<include file="$(find chefbot_bringup)/launch/includes/move_base.launch.xml"/>
</launch>
Originally posted by neuronet on ROS Answers with karma: 39 on 2016-08-17
Post score: 0
Answer:
I guess you are missing a / before home, it should probably be:
roslaunch chefbot_gazebo amcl_demo.launch map_file:=/home/neuronet/hotel_world.yaml
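The underlying issue is plain path resolution: without the leading /, the path is interpreted relative to the current working directory rather than as an absolute path, so map_server looks for a different file than the one you saved. A quick sketch (using the paths from the question):

```python
import os.path

rel = "home/neuronet/hotel_world.yaml"   # what was passed: a relative path
ab = "/home/neuronet/hotel_world.yaml"   # what was meant: an absolute path

print(os.path.isabs(rel))  # False: resolved against the current directory
print(os.path.isabs(ab))   # True: always names the same file
```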
Originally posted by mgruhler with karma: 12390 on 2016-08-18
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by neuronet on 2016-08-18:
Dear me, I am such a noob, not just to ROS but to *nix. To think I just halted design for 36 hours because of this mistake; I am super efficient! On the plus side, I will not make this mistake again. :)
Comment by mgruhler on 2016-08-18:
I guess everybody here stumbled upon this one at least once ;-) | {
"domain": "robotics.stackexchange",
"id": 25537,
"tags": "navigation, mapping, turtlebot, map-server, amcl"
} |
Where did mess up while calculating the expected value of the momentum squared? | Question: I have the correct answer except with a negative sign.
The wave function is given as,
$$\Phi=A\exp\left[-a\left(\frac{mx^2}{\hbar} + it\right)\right]$$
By squaring the momentum quantity, I found the expectation value of momentum squared to be $\langle-\hbar^2\frac {\partial^2}{\partial x^2}\rangle$.
I then computed the second derivative of $\Phi$ and found it to be
$$\frac{\partial^2\Phi}{\partial x^2}=A\exp\left[-a\left(\frac{mx^2}{\hbar} + it\right)\right]\cdot\left[4\left(\frac {am}\hbar\right)^2x^2-\left(\frac {am}\hbar\right)\right].$$
The expectation value can therefore be written as
$$-\hbar^2 4\left(\frac {am}\hbar\right)^2(\int \Phi^*(x^2)\Phi\mathop{}\!\mathrm dx -\frac {am}\hbar\int \Phi^*\Phi \mathop{}\!\mathrm dx)$$
$\int \Phi^*(x^2)\Phi\mathop{}\!\mathrm dx$ is just the expectation value for $x^2$, and the other integral is just 1 (since the wave function is normalized).
I previously found the expectation value $x^2$ to be $\frac \hbar{4ma}$.
The expectation value of momentum squared should then simplify to
$$-\hbar^2 4\left(\frac {am}\hbar\right)^2\cdot(\frac \hbar{4ma}-\frac {am}\hbar)=-\hbar am +4a^3m^3$$
The given answer is $$\hbar am.$$
Answer: First of all, your second derivative is wrong; it should be $$\frac{d^{2}\Phi}{dx^{2}}=\Phi\left[\left(\frac{2ma}{\hbar}\right)^{2}x^{2}-\left(\frac{2ma}{\hbar}\right)\right]$$
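The corrected derivative is easy to verify numerically; a sketch with units chosen so that $a = m = \hbar = 1$ (the constant $A$ and the time phase drop out of the $x$-derivative structure):

```python
import math

def phi(x):
    # Phi with a = m = hbar = 1, t = 0, A = 1
    return math.exp(-x**2)

def d2_analytic(x):
    # Phi * [(2ma/hbar)^2 x^2 - (2ma/hbar)]  ->  phi(x) * (4x^2 - 2)
    return phi(x) * (4 * x**2 - 2)

# Central finite difference approximation of the second derivative:
x, h = 0.7, 1e-4
d2_numeric = (phi(x + h) - 2 * phi(x) + phi(x - h)) / h**2
print(abs(d2_numeric - d2_analytic(x)) < 1e-6)  # True
```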
Second, you wrote the expression for the expectation value incorrectly:
$$\langle p^{2}\rangle=-\hbar^{2}\left[\left(\frac{2ma}{\hbar}\right)^{2}\int\! \Phi^{*}(x^{2})\Phi\, dx-\left(\frac{2ma}{\hbar}\right)\int\! \Phi^{*}\Phi\,dx\right]=-\hbar^{2}\left[\left(\frac{2ma}{\hbar}\right)^{2}\left(\frac{\hbar}{4ma}\right)-\left(\frac{2ma}{\hbar}\right)\right]=\hbar ma$$ | {
"domain": "physics.stackexchange",
"id": 54606,
"tags": "quantum-mechanics, homework-and-exercises, operators, momentum, wavefunction"
} |
Calculator supporting multiplication, division, or modulo of two numbers | Question: This is for a college assignment, it has been submitted but I'd like some general feedback on how to improve it or whether I should be using more functions or less functions.
Also, is my commenting too extreme?
/**
* This program takes two binary integers and performs one of three operations
* on them.
*
* It will either multiply, divide or get the modulo of the two
* numbers.
*
* It ignores whitespace and accepts input in the following format
* <number><operation><number> = <enter>
*
* The program will then convert the binary integer to decimal, perform its
* operation, convert the output back to binary and print the result.
*
* An error will occur if the incoming input is in the wrong format or if the
* binary integers are not unsigned (positive).
*
* @author Sam Dunne <sam.dunne@ucdconnect.ie> 10308947
*/
/**
* Include standard libraries
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <limits.h>
/**
* This function checks the string to ensure it's valid
*
* @param char input The string input to be checked
*
* @var int i
* @var char operation The operation to be performed
*
* @return char operation
*/
static char check_string(char *input) {
char operation;
unsigned int i = 0;
/**
* Check to see which operation is contained within the string.
* Assign the correct operation to the 'operation' variable.
*/
operation = strchr(input, '*') ? '*' : ( strchr(input, '/') ? '/' : ( strchr(input, '%') ? '%' : '0'));
/**
* Ensure a valid operator was found
*/
if(operation == '0') {
printf("\n\nNo operator found. Closing\n\n");
exit(1);
}
/**
* Ensure there are no illegal characters in the string
*/
for(i = 0; i < strlen(input); ++i) {
if(input[i] != '0' && input[i] != '1' && input[i] != '*' && input[i] != '/' && input[i] != '%' && input[i] != ' ') {
printf("\n\nInvalid Input. Closing\n\n");
exit(1);
}
}
return operation;
}
/**
* This function parses the input into two numbers and an operation and
* performs the operation
*
* @param char input The string input to be parsed
* @param char operation The operation being performed
* @param int dec_output The pointer that will 'return' the decimal value
* @param int bin_output The pointer that will 'return' the binary value
*
* @var int dec[] The decimal representations of the binary strings
* @var int dec_result The decimal result of the operation
* @var int mask This is the mask for the decimal to binary conversion
* @var char bin[] The binary strings
* @var char rest The rest of the string after it has been tokenised
* @var char junk Junk left over from strtol()
*
* @return int 0
*/
static void parse(char *input, char operation, unsigned int *dec_output, char *bin_output){
unsigned int dec[2], dec_result = 0, mask = 0;
char *bin[2], *rest, *junk;
/**
* Tokenise the string and assign the two tokens to variables
*/
bin[0] = strtok_r(input, &operation, &rest);
bin[1] = rest;
/**
* Convert binary number to decimal
*/
dec[0] = strtol(bin[0], &junk, 2);
dec[1] = strtol(bin[1], &junk, 2);
if(dec[1] == 0){
printf("\n\nDividing by zero and modulus with zero is impossible. Destroy the Universe elsewhere. Good day.\n\n");
exit(1);
}
/**
* Perform correct operation
*/
dec_result = (operation == '*') ? (dec[0]*dec[1]) : ( (operation == '/') ? (dec[0]/dec[1]) : (dec[0]%dec[1]) );
/**
* Convert result back to binary
*/
*(bin_output+16) = '\0';
mask = 0x10 << 1;
while(mask >>= 1)
*bin_output++ = !!(mask & dec_result) + '0';
*dec_output = dec_result;
}
/**
* This is the main function that calls all other functions
*
* @var char line[] The array for storing the incoming string
* @var int i
* @var int dec_output The decimal result to be printed
* @var int bin_output The binary result to be printed
*
* @return int 0
*/
int main(void) {
char operation, line[256], bin_output[256];
unsigned int i = 0, dec_output = 0;
/**
* Read in the input to 'char line[]'
*/
printf("Enter two binary integers separated by one of [ * / %% ] and press enter\nCALC: ");
fgets(line, sizeof(line), stdin);
/**
* Remove newline from string if present
*/
i = strlen(line) - 1;
line[i] = '\0';
/**
* Check validity of the input
*/
operation = check_string(line);
/**
* Call the parser for results
*/
parse(line, operation, &dec_output, bin_output);
/**
* Print the final answer
*/
printf("\n\n|*---|Result = {DECIMAL:%d || BINARY:%s}|---*|\n\n", dec_output, bin_output);
return 0;
}
Answer: In general the code is very clear.
That said, you use way too many comments. Do not paraphrase the code in comments, and instead use them to say things you cannot express in code: “code tells you how, comments tell you why.”
Comments such as “This is the main function that calls all other functions” or “This function checks the string to ensure it's valid” are useless. They are only useful if the reader doesn’t understand C, and this isn’t the point of comments.
In particular, the main function needs no comment. Its use is clear to everybody. The check_string function, on the other hand, needs an improved comment: use it to explain what constitutes a valid string (instead of just saying that it checks validity).
What’s more, a multi-line comment in C starts with /*, not /**. Use the latter only for documentation comments. By convention, they are then used by documentation systems to parse the code and generate documentation. For all other comments (in functions etc.) use normal single-line comments or multi-line comments that start with a single star (/*).
A word on the conditional operator: I believe the other commenters are wrong in their assessment of this operator. It’s safe, readable and should absolutely be used when necessary. However, a slightly different formatting can make chained conditions much more readable:
operation = strchr(input, '*') ? '*' :
strchr(input, '/') ? '/' :
strchr(input, '%') ? '%' : '0';
Note that thanks to the precedence rules of the conditional operator, the original parentheses aren’t necessary. Chaining conditionals in this way is a pretty well established idiom and certainly improves readability over using if. | {
"domain": "codereview.stackexchange",
"id": 1507,
"tags": "c, calculator"
} |