Intuition behind recurrences with growth O(n log n) vs O(n²)
Question: I've been trying to get the intuition behind why two very similar recurrence relations don't follow the pattern I would expect. They are pretty well-known relations: Relation 1 - $T(n) = 2T(\frac{n}{2}) + n$ Relation 2 - $T(n) = 2T(\frac{n}{2}) + n^2$ When sketching out a tree for the first recurrence, the way I was taught is to think that at every level of the tree you are doing some amount of work: Height 1 - do O(n) work. Height 2 - do O(n) work. ... Height log(n) - do O(n) work. Now in my head, if I visualize the total as the sum of all the work done, it seems like it's $O(n\log(n))$ of work, which according to the master theorem is correct. However, the same intuition for the second recurrence is wrong: the master theorem tells us the answer is $O(n^2)$. Is there a reason why the intuition seems to be correct for the first recurrence but wrong for the second? Answer: What you are missing is the constants hidden by the big O. In the first case, at successive heights you are doing work of $$n, 2\frac{n}{2}, 4\frac{n}{4}, \ldots = n,n,n,\ldots$$ In the second case, at successive heights you are doing work of $$n^2, 2\frac{n^2}{4}, 4\frac{n^2}{16}, \ldots = n^2,\frac{n^2}{2},\frac{n^2}{4},\ldots$$ Since the series $1+1/2+1/4+\cdots$ converges, the total work is only $O(n^2)$.
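To make the level-by-level accounting concrete, here is a small Python sketch (my own illustration, not part of the original exchange) that sums the work per level of the recursion tree for both recurrences:

```python
# Sum the work done at each level of the recursion tree for
# T(n) = 2T(n/2) + f(n): at level k there are 2^k subproblems of size n/2^k.
def level_work(n, f):
    """Total work per level of the recursion tree, from root to leaves."""
    levels = []
    size, count = n, 1
    while size >= 1:
        levels.append(count * f(size))
        size /= 2
        count *= 2
    return levels

n = 1 << 10  # 1024
linear = level_work(n, lambda m: m)         # f(n) = n: every level does n work
quadratic = level_work(n, lambda m: m * m)  # f(n) = n^2: level work halves each step

print(sum(linear))     # grows like n * log(n): every level contributes equally
print(sum(quadratic))  # stays below 2 * n^2: the root level dominates
```

The linear case contributes $n$ at every one of the $\log n$ levels, while the quadratic case forms a geometric series dominated by the root, matching the answer's point about the hidden constants.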
{ "domain": "cs.stackexchange", "id": 3357, "tags": "asymptotics, recurrence-relation" }
Color Variation in RR Lyrae
Question: I've been doing some research on RR Lyrae stars and haven't really been able to find an answer to this question. RR Lyrae are well known for their periodic magnitude, and are also usually found in certain color ranges (the "RR Lyrae color box"). My question is: does the color of RR Lyrae stars (G-R in particular) vary over its period? I would also love it if anyone has a link to a topical journal article! Answer: The short answer is that yes, the color would have to change somewhat. All the stars in the instability strip (RR Lyrae, Cepheids, etc.) are variable because they pulsate. Over the course of the pulsation, they vary in brightness, size and surface temperature. It is the temperature change that is important here. As the temperature goes up, the luminosity increases for a given amount of surface area on the star. This, plus the changing size, accounts for variations in brightness. However, as the temperature changes, the peak wavelength changes as well (as described by Wien's law). This means that the star gets redder and bluer over the course of the pulsation cycle. Now as to the amount of change and how measurable it would be, that I'm not sure of. At a fundamental level there has to be a change, but whether it is noticeable with broadband (or even narrowband) photometry is a different question. I know people who might know (as this is what they study), and I'll ask them and see if I can find a reference as well.
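As a rough illustration of the answer's Wien's-law point (my own sketch; the temperatures below are made-up values for an RR Lyrae-like pulsator, not measured ones):

```python
# Wien's displacement law: lambda_peak = b / T.
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def peak_wavelength_nm(T_kelvin):
    """Peak blackbody wavelength in nanometres for temperature T."""
    return WIEN_B / T_kelvin * 1e9

# Hypothetical hot/cool phases of the pulsation cycle:
hot, cool = 7500.0, 6000.0
print(peak_wavelength_nm(hot))   # shorter (bluer) peak at maximum temperature
print(peak_wavelength_nm(cool))  # longer (redder) peak at minimum temperature
```

The peak shifts by roughly a hundred nanometres over such a temperature swing, which is the sense in which the star gets "bluer and redder" through the cycle; how that translates into a measurable G-R change is the photometry question the answer leaves open.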
{ "domain": "physics.stackexchange", "id": 3043, "tags": "visible-light, stars" }
Question regarding the projection postulate of quantum mechanics
Question: In Quantum Mechanics, McIntyre states the projection postulate like so: After a measurement of $A$ that yields the result $a_n$, the quantum system is in a new state that is the normalized projection of the original system ket onto the ket (or kets) corresponding to the result of the measurement: $$\left|\psi^\prime\right> = \frac{P_n\left|\psi\right>}{\sqrt{\left<\psi\right|P_n\left|\psi\right>}}.$$ Doesn't this then mean that $$\left|\psi^\prime\right> = \frac{P_n\left|\psi\right>}{\sqrt{\left<\psi\right|P_n\left|\psi\right>}} = \frac{\left|a_n\right>\left<a_n\mid\psi\right>}{\sqrt{\left<\psi\right|P_n\left|\psi\right>}} = \textrm{(some number)}\left|a_n\right>\;?$$ Because if so, then that number must be some phase (right?), and then we might as well say that the new state is just the eigenstate $\left|a_n\right>$ corresponding to the result $a_n$. So why isn't the postulate stated that way? Answer: The key point is [...] onto the ket (or kets) [...] When you measure observable $A$ and get result $a_n$, the corresponding eigenspace may have more than one dimension, i.e. you cannot speak of "the eigenstate $\left|a_n\right>$" corresponding to the result $a_n$. Hence, you really need to project the original state onto the full eigenspace. For example, the states of the hydrogen atom are usually labelled as $\left|nlm\right>$, and if you measure energy eigenvalue $E_n$, you still have quite a number of states indexed by $l$ and $m$ to project onto.
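A small numerical sketch of the postulate (mine, not McIntyre's; the 3-dimensional space and the 2-dimensional eigenspace spanned by e1, e2 are hypothetical):

```python
import math

def dot(u, v):
    """Real inner product of two vectors given as tuples."""
    return sum(a * b for a, b in zip(u, v))

# Orthonormal basis of a degenerate (2-dimensional) eigenspace for result a_n:
e1 = (1.0, 0.0, 0.0)
e2 = (0.0, 1.0, 0.0)

s = 1 / math.sqrt(3)
psi = (s, s, s)  # some normalized initial state

# P_n |psi> = |e1><e1|psi> + |e2><e2|psi>
proj = tuple(dot(e1, psi) * e1[i] + dot(e2, psi) * e2[i] for i in range(3))
norm_sq = dot(psi, proj)  # <psi| P_n |psi>
psi_after = tuple(c / math.sqrt(norm_sq) for c in proj)

print(dot(psi_after, psi_after))  # 1.0 up to rounding: the new state is normalized
```

Because the eigenspace is 2-dimensional, the post-measurement state is a genuine superposition within that eigenspace, not a single ket $\left|a_n\right>$ times a phase.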
{ "domain": "physics.stackexchange", "id": 52097, "tags": "quantum-mechanics, hilbert-space, wavefunction-collapse" }
Airplane executing horizontal loop
Question: The Problem: An aircraft executes a horizontal loop at a speed of $720$ km/h (i.e. $200$ m/s) with its wings banked at an angle of $15^{\circ}$. What is the radius of the loop? My Confusion: Unlike the case of cars on banked roads executing a loop (which have the aid of friction and normal forces to provide the centripetal force), there seems to be no force (except air resistance) that helps the plane execute a loop. So how does an aircraft maintain a horizontal loop? Answer: $$\tan \left( \beta \right) =\dfrac {f_{z}}{f_g}=\dfrac {m\dfrac {v^{2}}{r}}{m\cdot g}\tag 1$$ where $f_z$ is the centrifugal force, $f_g$ is the weight, and $r$ is the radius of the loop. $$r=\dfrac {v^{2}}{g\tan \left( \beta \right)}= \dfrac {200^2}{10\cdot \tan \left( \dfrac {15\pi }{180}\right) }\simeq 14.9\ \text{km}$$
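A quick numerical check of the answer's formula (my addition), using the same $g = 10\ \mathrm{m/s^2}$ as the answer:

```python
import math

v = 200.0                  # speed in m/s (720 km/h)
beta = math.radians(15.0)  # bank angle
g = 10.0                   # m/s^2, as used in the answer

r = v ** 2 / (g * math.tan(beta))  # r = v^2 / (g * tan(beta))
print(round(r))  # radius in metres, about 14.9 km
```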
{ "domain": "physics.stackexchange", "id": 55825, "tags": "homework-and-exercises, newtonian-mechanics, centripetal-force" }
Idea of precursors of the electro-magnetic waves
Question: The idea of the material Maxwell equations is mostly clear. But I'm curious about the idea that, besides the material equations, the pure Maxwell equations should also work, just in a harder sense: with more currents and charges. There is plenty of vacuum in a solid body, so I think there should be some precursors of the electromagnetic waves that propagate through the body: $v = c/n$ is the speed of the wave in the body, but part of the wave propagates in the vacuum with speed $c$. And for any frequency some part of the amplitude would be observed much earlier. Is there some theory behind this, and is this phenomenon observed? Answer: $n$ depends on the wave frequency. For high frequencies $n\rightarrow 1$. So for a general electromagnetic field each harmonic propagates with its own speed. EDIT: In order to speak of front propagation, you have to have a whole spectrum of harmonics that exist always and everywhere. Each harmonic does not have a front in space; it is their superposition that has one. In other words, a front "spreads out" due to the frequency dispersion of waves in the material.
{ "domain": "physics.stackexchange", "id": 5458, "tags": "electromagnetism, electromagnetic-radiation, refraction, maxwell-equations" }
How to select a model based on statistical analyses?
Question: I've been statistically validating the performance of different deep learning models in classifying parasitized and normal cells. In the process, I found that there is no statistically significant difference in performance between the pre-trained CNN models. In the absence of such significance, which model am I supposed to choose? Answer: This is a very general question, since you are not providing any statistics that one could comment on. But in general: yes, this can be the case. However, there is an interesting paper on comparing classification models with statistical tests written by Demšar (2006). He shows a way to visualize the results of different models that do not significantly differ in quality.
{ "domain": "datascience.stackexchange", "id": 2429, "tags": "deep-learning, statistics, performance, descriptive-statistics" }
C++ templates for sin, cos, tan taylor series
Question: I used C++ metaprogramming to build Taylor series for sin, cos, and tan (not really for tan). The whole thing is on my GitHub. Please give any feedback related (but not limited) to style, functionality, etc. Overall I think the biggest flaw is that the client needs to make their own struct with a static const long double member called value, and pass that struct to the "functions." I would have just passed the double, but C++ doesn't let you use floating-point numbers as template arguments. Using a struct was the only way I could think of to get around that. Thanks!

#ifndef TRIG_HPP
#define TRIG_HPP

namespace mt {

const long double PI = 3.14159265358979323f;

// FACTORIAL
template <int N>
struct fact {
    static const long long value = N * fact<N-1>::value;
};

template <>
struct fact<1> {
    static const long long value = 1L;
};

// EXPONENTS
template <class X, int N>
struct power {
    static const long double value;
};

template <class X, int N>
const long double power<X, N>::value = X::value * power<X, N-1>::value;

template <class X>
struct power<X, 0> {
    static const long double value;
};

template <class X>
const long double power<X, 0>::value = 1.0f;

// SIMPLIFY RADIAN TO INTERVAL [-2π, 2π]
template <class X>
struct SimplifyRad {
    static const long double value;
};

template <class X>
const long double SimplifyRad<X>::value =
    X::value - (2 * PI * (double) (int) (X::value/(2*PI))); // float modulo?

// COSINE
template <class X, int N>
struct cos {
    static const long double value;
};

template <class X, int N>
const long double cos<X, N>::value =
    (((N % 2 == 0 ? 1 : -1) * power<SimplifyRad<X>, 2*N>::value)
     / ((long double) fact<2*N>::value)) + cos<X, N-1>::value;

template <class X>
struct cos<X, 0> {
    static const long double value;
};

template <class X>
const long double cos<X, 0>::value = 1.0f;

// SINE
template <class X, int N>
struct sin {
    static const long double value;
};

template <class X, int N>
const long double sin<X, N>::value =
    (((N % 2 == 0 ? -1 : 1) * power<SimplifyRad<X>, 2*N-1>::value)
     / ((long double) fact<2*N-1>::value)) + sin<X, N-1>::value;

template <class X>
struct sin<X, 0> {
    static const long double value;
};

template <class X>
const long double sin<X, 0>::value = 0.0f;

// TANGENT
template <class X, int N>
struct tan {
    static const long double value;
};

template <class X, int N>
const long double tan<X, N>::value = sin<X, N>::value / cos<X, N>::value;

}

#endif

Example usage:

#include <cstdio>
#include "../include/trig.hpp"

#define DEPTH 8

struct var {
    static const long double value;
};
const long double var::value = mt::PI / 4;

int main(int argc, char *argv[]) {
    printf("%Lf\n", var::value);
    printf("sin(%Lf) = %Lf\ncos(%Lf) = %Lf\ntan(%Lf) = %Lf\n",
           var::value, mt::sin<var, DEPTH>::value,
           var::value, mt::cos<var, DEPTH>::value,
           var::value, mt::tan<var, DEPTH>::value);
}

Output:

$ g++ main.cpp
$ ./a.exe
0.785398
sin(0.785398) = 0.707107
cos(0.785398) = 0.707107
tan(0.785398) = 1.000000

Answer: I see several issues with your code.

Your motivation should be stronger/clearer

While you may have a very good motivation for having this functionality, it is not apparent from your post here nor from your GitHub repo. At compile time there are typically very few values I'd need the sine or cosine of; is it really clearer to write

printf("%f", apply_my_complicated_TMP_construct(PI/4, 8) );

than

auto sine_of_quarter_pi = 0.707106781187f;
printf("%f", sine_of_quarter_pi);

? I don't know that it is. Plus, the second way, the reader doesn't need to think about iterations, the Taylor series etc. It will also reduce compilation time. Now, you could say "Ah, but I have a lot of these values to apply sin() or cos() to" - but honestly, I doubt it. If that's the case you should just do that offline and independently of developing your C++ code. In your repository, you list the following advantages: All the work is done at compile-time.
The compile-time efficiency is actually going to be better than run-time efficiency compared to "normal." But, again, there isn't really any work; and there's no "efficiency" issue if you just use the value you need. Even if you had to calculate a sine or cosine a couple dozen times - that's really negligible.

Don't reinvent the wheel I

There's an M_PI constant available in POSIX-compliant systems' <math.h> header (and thus, effectively, in most <cmath> headers, including MSVC's with a bit of work, see here). In case you want to be more portable, you might do:

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

... but, in fact, these days you can even get a better-accuracy constant, M_PIl, on some systems. So you would have actually gained from not reinventing the wheel. ... and there's more. Remember my sqrt(2)/2 from the example above? That's often predefined too. Have a look at your system's <math.h>.

Don't reinvent the wheel II

Someone has already provided what looks like a more robust implementation of the same functionality - here on the site: constexpr Sin Function C++ 14. Now, granted, I haven't scrutinized that code. Maybe it's buggy or has other problems. But I'll bet you 100 reputation you haven't read that implementation before writing yours... although you could have, if you'd looked. I didn't know about it before I was writing this review.

Drop most template recursion in favor of C++14 constexpr functions

You use recursive template instantiation when you could just put a for loop in your function - and sometimes not even need a template:

constexpr long long factorial(int n) {
    return (n <= 1) ? 1L : (n * factorial(n - 1));
}

(taken from here). Also, when you can't just "go constexpr", you can still iterate over multiple template arguments with for_each_argument and so on.
{ "domain": "codereview.stackexchange", "id": 26864, "tags": "c++, template-meta-programming" }
Approximating Max-Coverage when the elements need to be covered multiple times
Question: In the set multicover problem we are given a set $N$ of $n$ elements and a set $S$ of $m$ subsets of $N$. Additionally, each element has a coverage requirement, i.e. the number of times it has to be covered. The question is to cover $N$ with the minimum number of subsets from $S$. I'm aware of the approximation algorithms for this problem (Rajagopalan & Vazirani). I'm interested in a different version of the problem (max multi-coverage): we are given $K$ and we ask for the maximum number of elements from $N$ we can cover with $K$ subsets from $S$. We count an element as covered if it is covered at least the required number of times. Are there any approximation algorithms for this case? Are there any lower bounds for this case (or even for the standard set multicover problem)? Thanks in advance! Answer: If the required number of times we need to cover each element is 2, we have the following densest $k$-subgraph problem (imagine the edges are elements and the nodes are sets): Given a graph $G$ and an integer $k$, find a subgraph of $G$ on $k$ nodes with maximum density. Khot proved that no PTAS exists under plausible complexity assumptions; there is an $O(n^{1/4+\epsilon})$ approximation to the problem by Bhaskara, Charikar, Chlamtac, Feige and Vijayaraghavan; and a polynomial integrality gap result by Bhaskara, Charikar, Guruswami, Vijayaraghavan and Zhou. Therefore I believe your question is hard to approximate in the most general case.
{ "domain": "cstheory.stackexchange", "id": 2419, "tags": "set-cover, approximation" }
Perception of a rapidly flickering light
Question: Suppose a light switches on and off at a fast rate, with equal time off and on. Why do we see a light that appears only on rather than only off? Answer: This is a question for a biology board. It is because of the way the cells in our retina work. Have you heard of the "after image"? If one sees a bright enough object and shuts one's eyes, an after image appears with the eyes closed. The same biological mechanism keeps the sense of light, rather than dark, and bridges over the gap. I am guessing at the physics here, but it must be due to the chemistry of the organic molecules and how it is used to transmit the on/off information to the brain. Obviously, once excited to an on state, there is a lifetime for the decay back to the off state, and beyond a certain frequency this bridges the gap. Something like that also happens with the change of frames in the film of movies and other optical motions.
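The decay-lifetime argument can be sketched numerically (my own toy model, not from the answer): treat the photoreceptor response as a leaky integrator with time constant tau, and drive it with a 50% duty-cycle flicker. The tau value below is made up.

```python
def flicker_response(freq_hz, tau=0.1, duration=1.0, dt=0.001):
    """Euler-integrate y' = (x - y)/tau for a square-wave input x;
    return (min, max) of the output after transients die down."""
    y, ys, t = 0.0, [], 0.0
    while t < duration:
        x = 1.0 if (t * freq_hz) % 1.0 < 0.5 else 0.0  # on/off flicker
        y += (x - y) * dt / tau
        if t > duration / 2:
            ys.append(y)
        t += dt
    return min(ys), max(ys)

lo, hi = flicker_response(5.0)      # slow flicker: output swings visibly
lo2, hi2 = flicker_response(100.0)  # fast flicker: output is nearly steady
print(hi - lo, hi2 - lo2)
```

At high frequency the output hovers near the mean level instead of dropping to dark, which is the "bridging the gap" the answer describes.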
{ "domain": "physics.stackexchange", "id": 1131, "tags": "visible-light, vision" }
is this NPC Prob? Minimum count of distinct values at all matrix columns provided only in-row swap operation
Question: I am searching for an algorithm for this! I cannot find anything useful in textbooks so far. Thanks in advance! Question: The input is an $N \times K$ matrix, where $N$ and $K$ are positive numbers (usually $N$ is a large number and $K$ is a small number). Each row of the matrix contains distinct values, but each column may contain duplicate values. For $i = 1, 2, \cdots, K$, let $m_i$ denote the count of distinct values in the $i$-th column and define $\rho := \min(m_1, m_2, \cdots, m_K)$. If we can swap any two numbers within each row, how do we minimize $\rho$, given that we can perform any number of the given operations (swaps)? Can this problem be NP-complete in terms of $N$? For example: in a $4\times3$ matrix $$ \begin{matrix} 1&2 & 3\\ 3&4& 5\\ 4&5& 7\\ 6&7& 8\\ \end{matrix} $$ Since only swaps within a row are allowed, we can swap 1 and 2 in the first row, but cannot swap 1 in the first row with 4 in the second row. I think after performing a few swaps, one optimal solution we can get is $$ \begin{matrix} 1&2 & 3\\ 4&5& 3\\ 4&5& 7\\ 6&8& 7\\ \end{matrix} $$ where $m_1 = 3, m_2=3, m_3 = 2$. Hence the minimum count of distinct values over all columns is $\rho = 2$. Answer: The first thing we have to do is define a decision version of your problem, for example: Is it possible to swap the elements inside the rows of the matrix so that in at least one column the number of distinct elements is less than or equal to $\rho$? The first thing to take into account is that this problem is just asking whether we can swap elements inside the rows in order to find a column where the number of distinct elements is less than or equal to $\rho$. We are going to reduce the decision version of 3-HITTING SET to the decision version of your problem. 3-HITTING SET is an NP-complete variant of minimum hitting set where all subsets have exactly three elements. Hitting set is like the inverse of set cover.
In minimum set cover you cover members of the universe with subsets, and in minimum hitting set you cover subsets with members of the universe. That means that if an element appears in a subset, that subset is covered by the element. The objective is to find the minimum number of elements of the universe that are enough to cover all subsets. A decision version of 3-HITTING SET is: Is it possible to cover all subsets with at most $\rho$ elements of the universe? Now for the reduction: We simply convert each subset of the hitting set problem into a row of your problem. That is, we create a matrix $A$. Then, for each subset $i$ that belongs to the hitting set instance, we create a row $i$ inside the matrix $A$. And then for each element $j$ that belongs to the universe in the hitting set instance being reduced, we create a number inside the matrix; this number will be inside row $A_i$ if the element $j$ of the universe appears inside the subset $i$ in the hitting set instance. If there is a hitting set of size $\rho$, then we will have a matrix where we can form a column, by swapping, with at most $\rho$ distinct elements. The reason for this is that if we can cover the sets of the hitting set instance with $\rho$ elements, then we can form a column with $\rho$ or fewer distinct elements inside the matrix by picking the numbers that were mapped to elements of the universe in the hitting set instance and swapping them inside the rows of the matrix so that they are all in the same column. For the other direction, that is, for the cases in which there is no hitting set with at most $\rho$ elements, we have to show that those cases are converted by our reduction into matrices where we can't form a column with at most $\rho$ distinct elements by swapping numbers inside rows. If on those instances we could, by swapping numbers inside rows, create a column with at most $\rho$ distinct elements inside the matrix, that would mean that there exist $\rho$ elements that can cover the subsets in the hitting set instance. This is a contradiction. The reason is that we have mapped every element of the universe in the hitting set instance to a different number inside the matrix, and we have also mapped every subset to a different row inside the matrix. So if we could form a column with at most $\rho$ distinct elements by swapping elements inside the rows, that would mean that there is at least one of those numbers inside each row. Because each subset of the hitting set instance was mapped to a different row and each element of the universe was mapped to a different number inside the matrix, that would mean that we can cover the subsets with those $\rho$ elements in the hitting set instance.
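For the small 4×3 example in the question, the claim $\rho = 2$ can be checked by brute force over all per-row permutations (my own check, feasible only because the instance is tiny):

```python
from itertools import permutations, product

matrix = [(1, 2, 3), (3, 4, 5), (4, 5, 7), (6, 7, 8)]

# Try every way of permuting each row and record the smallest number of
# distinct values that any single column can be made to have.
best = min(
    min(len(set(col)) for col in zip(*rows))
    for rows in product(*(list(permutations(r)) for r in matrix))
)
print(best)  # 2: e.g. column (3, 3, 7, 7); no value is common to all four rows
```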
{ "domain": "cs.stackexchange", "id": 6815, "tags": "algorithms, combinatorics, matrices" }
weight of a canister of air
Question: Consider four scenarios (on a typical weighing scale): Measure the weight of a canister of air filled at atmospheric pressure. Measure the weight of a compressed canister of air. Measure the weight of a canister with no air inside. Measure the weight of an empty canister on an earth with no atmosphere. I would guess that 1 is the same as 4, and that 2 and 3 weigh the same as 1 plus the added (or minus the subtracted) mass of air. Any explanations? This is just because a canister's volume doesn't, or shouldn't, change the force of its weight being applied to the weighing scale. A weighing scale accounts for air pressure as it's applied equally on all sides and the machine is tared. When the canister is compressed, the air inside is now heavier and tries to sink. I want to explain this more mathematically in terms of pressure resulting in a greater force on the scale. As you can probably tell, my reasoning doesn't hold much weight at the moment. (not sure if pun intended) Answer: Keep in mind that we live at the bottom of an ocean of air, which is a fluid, so gravity is not the only factor in determining what your scale reads. There's an upward buoyant force to consider. It's this force that makes balloons full of helium rise in air. The magnitude of the buoyant force is equal to the weight of the fluid that is displaced by the can. In case #1, the scale reading is the net of the downward force of gravity on the can itself and the upward buoyant force due to the displacement of air by the closed can. Because the air in the can has the same pressure and density as the air outside, it does not affect the result. The net result is going to be almost the same as the downward gravitational force on the can alone. In case #2, you have more air molecules inside of the can, so your scale reading will be what you get for case #1 PLUS the extra downward gravitational weight of the extra gas molecules inside the can.
In case #3, with no air inside the can, you will get case #1 MINUS the gravitational weight of the gas molecules that you removed. In case #4, there is no atmosphere, so there is no upward buoyant force, and your scale reading is only the gravitational weight of the can itself, since it contains no gas.
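The four cases collapse into one formula: reading = (gravitational weight of can plus contents) minus (buoyant force, when there is an atmosphere). The numbers below (can volume, masses, fill pressure) are invented for illustration; the sketch is mine, not the answer's:

```python
g = 9.81        # m/s^2
rho_air = 1.2   # kg/m^3, ambient air density
V = 0.01        # m^3, external volume of the can
m_can = 1.0     # kg, mass of the can itself

m_air_inside = rho_air * V       # case 1: air at ambient density
m_compressed = 3 * m_air_inside  # case 2: hypothetical 3 atm fill

def reading(m_contents, atmosphere=True):
    """Scale reading in newtons: gravity minus the buoyant force, if any."""
    f = (m_can + m_contents) * g
    if atmosphere:
        f -= rho_air * V * g  # weight of the displaced air
    return f

case1 = reading(m_air_inside)
case2 = reading(m_compressed)
case3 = reading(0.0)
case4 = reading(0.0, atmosphere=False)

print(round(case1 - case4, 9))  # ~0: cases 1 and 4 agree
print(round(case2 - case1, 5))  # extra gas weight added in case 2
print(round(case1 - case3, 5))  # gas weight removed in case 3
```

The internal air's weight in case 1 exactly cancels the buoyancy of the displaced air, which is why cases 1 and 4 read the same.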
{ "domain": "physics.stackexchange", "id": 42400, "tags": "forces, pressure, weight" }
How does hemoglobin-free blood transport oxygen?
Question: Snails... We have some kind of huge garden snails appearing lately (read since the last 10 years or so). Try as one might, it's impossible to avoid them when driving. These snails do not appear to have red blood - I doubt they have blood at all. Wikipedia tells me blood's red colour is due to haemoglobin - which serves to transport oxygen. How do snails (and any other animals that do not have red coloured blood) get oxygen? Or, do such animals need no oxygen at all? Answer: Snails, like most molluscs, have a protein called hemocyanin dissolved directly in the hemolymph ("blood"). Hemocyanins are copper-containing metalloproteins: the binding site for a single O2 molecule contains two copper atoms. Unlike hemoglobin, where reversible oxygen binding is accomplished without a change in the oxidation state of the Fe(II) atoms in the heme prosthetic groups, in hemocyanin the copper undergoes a transition from colourless Cu(I) in the deoxygenated state to blue Cu(II) when oxygenated. Incidentally, there are certain species of fish that manage with little or no hemoglobin - icefish.
{ "domain": "biology.stackexchange", "id": 670, "tags": "red-blood-cell" }
Nitrobenzene - Aromatic Substitution Reactions
Question: Which EAS (electrophilic aromatic substitution) reactions does nitrobenzene show? We know that nitrobenzene is particularly deactivated towards most such reactions, as the nitro group is strongly electron-withdrawing via resonance and inductive effects. However, I wish to know exactly where to draw the line - which reactions nitrobenzene undergoes, and which ones it doesn't. Of course, there's no clear boundary as to whether or not a compound undergoes a particular EAS, but can't we at least comment on the yield obtained in such reactions? I haven't been able to find much about EAS reactions of nitrobenzene online, hence I come to Chem SE, seeking data and sources. The reactions of interest are: 1. Friedel-Crafts alkylation and acylation 2. Nitration (concentrated sulphuric and nitric acid) 3. Sulphonation (fuming sulphuric acid) 4. Halogenation in the presence of Lewis acids, etc. Please help! Thanks a lot! Answer: Of course, there must be some boundary as to whether or not nitrobenzene undergoes a particular electrophilic aromatic substitution (EAS), since the nitro group is very deactivating. I leave it to you to decide based on the yields obtained in the following reactions, because there are some data you may fail to find but which, nonetheless, exist in the literature. Chlorination of nitrobenzene at $\pu{35\!-\! 45 ^{\circ}C}$ in the presence of iron (III) chloride ($\ce{FeCl3}$) gives an isomer mixture, which contains 86% 3-chloronitrobenzene, 10% 2-chloronitrobenzene and 4% 4-chloronitrobenzene (Ref.1). Even though the nitro group is $m$-directing, this reaction gives a significant amount of the $o$-product. You may compare this reaction with the nitration of chlorobenzene at $\pu{40\!-\! 70 ^{\circ}C}$ with mixed acids (30% $\ce{HNO3}$ : 56% $\ce{H2SO4}$ : 14% $\ce{H2O}$), which typically gives a 98% yield of an isomer mixture consisting of 34-36% 2-chloronitrobenzene, 63-65% 4-chloronitrobenzene, and about 1% 3-chloronitrobenzene (Ref.1).
A simple and efficient method for the sulfonation of nitrobenzene using liquid $\ce{SO3}$ as the sulfonating agent in a microreactor at $\pu{40\!-\! 110 ^{\circ}C}$ was developed and conducted recently, and an 88% yield of 3-nitrobenzenesulfonic acid (m-NBSA) has been obtained (Ref.2). The main by-product is bis(3-nitrophenyl)sulfone (BNPS), and the proposed mechanism was depicted in the following figure: On the other hand, the sulfonation of toluene has given an $o:m:p$ ratio of 21.2% : 2.1% : 76.7% in 77.8% $\ce{H2SO4}$ (Ref.3). The $o:m:p$ ratio became 38.8% : 2.6% : 58.6% when the solvent was changed to 84.3% $\ce{H2SO4}$ (a significant decrease of the $p$-product with increasing acidity; Ref.3). Also, note that the sulfonation by sulfuric acid of nitrobenzene itself and of various aromatic compounds in nitrobenzene solution has been studied at a series of temperatures (Ref.4). Yet, no concomitant sulfonation of nitrobenzene in the presence of other activated substrates such as toluene and xylene has been reported. Using $\ce{NO2+PF6-}$ as the nitrating reagent, Olah studied the nitration of nitrobenzene in two different solvents: he obtained an $o:m:p$ ratio of 7.1% : 91.5% : 7.4% in 96% $\ce{H2SO4}$ and one of 10% : 88.5% : 1.5% in $\ce{CH3NO2}$ (Ref.3). For comparison, the same nitrating agent ($\ce{NO2+PF6-}$) has given an $o:m:p$ ratio of 68.2% : 2% : 29.8% in $\ce{CH3NO2}$ when used to nitrate toluene (Ref.3). To my knowledge, no Friedel-Crafts reaction of nitrobenzene (alkylation or acylation) has been reported. I'd love to hear it if somebody says otherwise. References: International Agency for Research on Cancer (World Health Organization), IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans, Vol. 65, 1996, “2-Chloronitrobenzene, 3-chloronitrobenzene and 4-chloronitrobenzene,” pp. 263-296. Y. Chen, Y. Su, F. Jiao, G. Chen, “A simple and efficient synthesis protocol for sulfonation of nitrobenzene under solvent-free conditions via a microreactor,” RSC Adv. 2012, 2, 5637–5644 (DOI: 10.1039/C2RA20406A). George A. Olah, “Aromatic substitution. XXVIII. Mechanism of electrophilic aromatic substitutions,” Acc. Chem. Res. 1971, 4(7), 240–248 (DOI: 10.1021/ar50043a002). F. J. Stubbs, C. D. Williams, C. N. Hinshelwood, “The Kinetics of Aromatic Sulphonation Reactions. Sulphonation by Sulphuric Acid in Nitrobenzene Solution,” J. Chem. Soc. 1948, 1065–1078 (DOI: 10.1039/JR9480001065).
{ "domain": "chemistry.stackexchange", "id": 11931, "tags": "organic-chemistry, aromatic-compounds, electrophilic-substitution" }
Publish without validation score?
Question: My mentor wants me to write and submit an academic paper reporting a predictive model, but without any validation score. Everything I have read in textbooks or on the Internet says that this is wrong, but is there any case where only reporting a train score makes sense? Background: The model was fit "by hand" by someone in our team, using a visual inspection of features extracted from our entire dataset. It is a linear model based on hand-crafted features extracted from some very nonlinear and high-dimensional data. The linear model is based on fewer than fifty features, but those features were extracted from thousands. We do not have any data left to use as validation. Answer: The most likely issue here is to do with "fifty features, but those features were extracted from thousands". If those features were selected according to a pre-data-analysis theory, and other selections were not considered, then a linear model that fit the data might be strong proof that the theory was plausible. However, a linear model that fits well due to selection from a large feature set in order to make it fit is very likely to be overfit. You absolutely need a hold-out test data set in this case, as you have used your initial data to form a hypothesis, and have no proof of validity at all. I cannot advise you whether to submit the paper or not. There may be ways you can word it to make it clear that the work establishes a hypothesis and does not validate it (but without making a song and dance about the lack of rigour in validation, as then you are undermining your own submission). I think that as long as you do not try to obfuscate the lack of follow-up work, and present the results so far accurately, then it is a fair submission - it may then get rejected if a reviewer wants to see some validation, or it may get accepted and there will need to be follow-up work that either validates or refutes the model in a second paper. That might be your work, it might be another team's.
How good or bad those scenarios are depends on how your field works in general. Perhaps ask, with some relevant details, on https://academia.stackexchange.com/ to gauge the responses, as in some ways this is a people problem - how to please your mentor whilst retaining pride in your work and progressing your career (which in turn depends on a mix of pleasing your supervisor and performing objectively good work). Your mentor may still be open to discussing the technical merits of the work. Perhaps they have not fully understood the implications that you are seeing for how the model was constructed. However, they might fully understand this, and may be able to explain, from their point of view, the merits of publishing at an early pre-validation stage for this project.
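The overfitting risk described above can be demonstrated in a few lines (my own toy experiment, not the poster's data): with a purely random target, picking the single best of a few thousand random features still yields a sizeable in-sample correlation.

```python
import random

random.seed(0)
n, n_features = 50, 2000
y = [random.gauss(0, 1) for _ in range(n)]  # pure-noise target

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((z - mb) ** 2 for z in b)
    return cov / (va * vb) ** 0.5

# Best absolute in-sample correlation among 2000 independent noise features:
best = max(
    abs(corr([random.gauss(0, 1) for _ in range(n)], y))
    for _ in range(n_features)
)
print(best > 0.3)  # True: spurious "signal" found despite there being none
```

This is exactly why a train score alone proves nothing once features were selected on the same data: the selection step manufactures apparent fit.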
{ "domain": "datascience.stackexchange", "id": 6093, "tags": "cross-validation" }
QM - States with same orbital angular momentum quantum number but different magnetic quantum number are equiprobable
Question: I'm studying the hydrogen atom. My professor says that in a spherically symmetric problem, the choice of the $z$-axis is arbitrary and so, given a fixed value of the orbital angular momentum quantum number $l$, all the states with same $l$ and different magnetic quantum number $m$ have to be equiprobable and this should also be the reason why the charge density of a full subshell is spherically symmetric. Why is this the case? I don't understand why equiprobability is assumed here. If, for example, i have an electron in the hydrogen atom with a fixed value of the modulus of the orbital angular momentum $L$, then the orbital angular momentum quantum number $l$ is fixed, but the $z$-component of the orbital angular momentum can have $2l+1$ different values with probabilities depending on the coefficient multiplying the spherical harmonics in the eigenfunction expansion of the state function $\psi$. As far as i know, the only requisite is that the sum of the modulus squared of the coefficients of the eigenfunction expansion has to be equal to 1. How do I get this equiprobability condition? Answer: When we have degenerate or quasi-degenerate states, even a very small perturbation of a system prepared in a well-defined quantum state is enough to allow transitions to the other degenerate states. It is then reasonable to assume that a system, weakly interacting with a thermal environment, could be found in each of its degenerate eigenstates with equal probability.
{ "domain": "physics.stackexchange", "id": 96273, "tags": "quantum-mechanics, angular-momentum, atomic-physics, hydrogen, orbitals" }
Is it true that if L* is recursive, L is also recursive?
Question: Is it true that if $L^*$ is recursive, where $*$ is the Kleene star, then $L$ is also recursive? I know that the opposite direction is true: if $L$ is recursive, then $L^*$ is recursive. But I don't know how to prove it in this direction. Answer: No. Take any language $L$ over $\Sigma = \{0, 1\}$ that contains 0 and 1. Then $L^* = \Sigma^*$ (which is recursive, even regular), regardless of what $L$ might be. $L$ could even be a language that is not recursively enumerable.
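The counterexample can be checked mechanically: Kleene-star membership (for a finite word set) is decidable by the standard word-break dynamic program, and once 0 and 1 are in $L$, every binary string splits into single-symbol words no matter what else $L$ contains. A small Python sketch (the extra word in $L$ is an arbitrary placeholder):

```python
from itertools import product

def in_star(w, L):
    """Word-break DP: ok[i] is True iff w[:i] splits into words of L."""
    ok = [True] + [False] * len(w)
    for i in range(1, len(w) + 1):
        ok[i] = any(ok[j] and w[j:i] in L for j in range(i))
    return ok[len(w)]

# L contains 0 and 1 plus an arbitrary extra word (stand-in for any language).
L = {"0", "1", "0110"}

# Every binary string up to length 8 is in L*, regardless of the rest of L.
all_in = all(in_star("".join(bits), L)
             for n in range(9)
             for bits in product("01", repeat=n))
```

Of course this only demonstrates the identity $L^* = \Sigma^*$ on finite samples; the point of the answer is that it holds for every $L \supseteq \{0, 1\}$, including non-r.e. ones.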
{ "domain": "cs.stackexchange", "id": 16586, "tags": "formal-languages, computability, kleene-star" }
Sauna thermodynamics
Question: Why does it get hotter (feel hotter) in a sauna when one pours water over the hot stones? Wikipedia says that the water condenses onto the skin, but the actual air humidity is so low that I doubt anything is condensing there. The water (sweat) evaporates from skin instead. Answer: It actually gets a bit complicated, since several effects are involved: Evaporating water does require heat, which comes primarily from the hot stones. So throwing water on the stones does cool them down. (This is where the claim one occasionally hears, that "throwing water on the stones makes the sauna colder", comes from. Technically it's true, if one considers the total heat content of the sauna as a whole. But since most of that heat is in the stones, and since you don't sit on the stones, that's pretty much completely irrelevant to how hot the part of the sauna that you do sit in gets, or feels.) On the other hand, throwing water on the stones also significantly increases the heat transfer rate from the stones to the air: the evaporation produces a lot of hot steam, which will rise and mix with the ambient air in the sauna. So it is possible for the air temperature in the sauna to increase, even as the stones are cooled down. Also, the introduction of steam obviously increases the humidity of the air, which will increase the rate of water precipitation on skin, and/or decrease the rate of sweat evaporation. (The relative importance of these two effects will depend on the baseline humidity of the air, which can vary quite a lot. My gut feeling, based on experience, is that in all but the driest of saunas condensation probably dominates, simply because human skin is so much cooler than the air.) In either case, the effect will be to transfer more heat to the skin, and thus to make the air feel hotter. Finally, as the hot steam rises off the stones, it will push hot air around the sauna in front of it. 
While this increase in air movement is slight and transient, it probably does have a noticeable effect: as the hot air flows past the people in the sauna, it will act to disperse the layer of cooler air that forms over the skin, and thus increases heat transfer to the skin. (If you don't believe me, try blowing some air over your skin in a sufficiently hot sauna. It burns.) The upshot is that throwing water on the stones increases heat transfer, both from the stones to the air and from the air to your skin. As long as the stones stay hot enough to supply that heat, the net effect will be that you feel hotter. However, if you throw too much water on the stones, it's possible to "kill the stones" by cooling them close to or even below the boiling point of water. At that point, throwing more water is useless, and all you can do is add more firewood or turn up the thermostat and wait for the stones to heat up again. Or, if you manage to do this in a smoke sauna, go wash yourself and get dressed up because the sauna is over for the night.
{ "domain": "physics.stackexchange", "id": 2626, "tags": "thermodynamics, heat, water, evaporation" }
Qualitative understanding of Euler rotation on gravitational vector
Question: If I choose the sequence of my Euler rotations to be $Z\rightarrow Y \rightarrow X$ or in terms of matrix multiplication $R_x(\phi)R_y(\theta)R_z(\psi)$, a stationary $3$-axis accelerometer can measure a gravitational vector through the following equation $$ \left[ \begin{array}{@{\;}c@{\;}} a^i_x \\ a^i_y \\ a^i_z \end{array}\right] = \left[ \begin{array}{@{\;}r@{\;\;\;} @{\;\;\;}r@{\;\;\;} @{\;\;\;}r@{\;}} % \mathrm{c}_{\theta} \mathrm{c}_{\psi} & \mathrm{c}_{\theta} \mathrm{s}_{\psi} & -\mathrm{s}_{\theta} \\ % \mathrm{s}_{\phi} \mathrm{s}_{\theta} \mathrm{c}_{\psi} - \mathrm{c}_{\phi}\mathrm{s}_{\psi} & \mathrm{s}_{\phi} \mathrm{s}_{\theta} \mathrm{s}_{\psi} + \mathrm{c}_{\phi} \mathrm{c}_{\psi} & \mathrm{s}_{\phi} \mathrm{c}_{\theta} \\ % \mathrm{c}_{\phi} \mathrm{s}_{\theta} \mathrm{c}_{\psi} + \mathrm{s}_{\phi} \mathrm{s}_{\psi} & \mathrm{c}_{\phi} \mathrm{s}_{\theta} \mathrm{s}_{\psi} - \mathrm{s}_{\phi} \mathrm{c}_{\psi} & \mathrm{c}_{\phi} \mathrm{c}_{\theta} % \end{array} \right] \left[ \begin{array}{@{\;}c@{\;}} 0 \\ 0 \\ g \end{array}\right] = \left[ \begin{array}{@{\;}r@{\;}} -g\sin({\theta}) \\ g\sin(\phi) \cos({\theta}) \\ g\cos(\phi) \cos(\theta) % \end{array} \right] $$ I can follow the derivation and see how all $\psi$ terms are zeroed out in the matrix multiplication, but I don't grasp the qualitative meaning of it. The gravitational vector is just a vector. Any Euler rotation applied to a vector in 3D space has to be specified by a chosen sequence of three Euler angles $\psi,\theta$ and $\phi$. The equation however suggests that any frame (or vector) can rotate to any orientation with just two Euler angles $\phi$ and $\theta$ instead of three. Visualizing a rotation of any vector confirms this to be true - any vector can be repositioned to point in any direction with just two Euler rotations. If we can rotate a vector or the frame with just two Euler angles, then why is an Euler rotation defined by three angles?
[Edit] One use I can think of for a third rotation is if I visualize the vector as an airplane instead, which can roll about its own axis. Other than that, a geometric vector like the gravitational vector is simply a line with an arrowhead, which makes no use of a third Euler angle. Answer: The vector $\vec{g}$ points in the z-direction, and the angle $\psi$ in the sequence of rotations is about the z-axis also. So the angle $\psi$ has no effect on $\vec{g}$. Or in mathematical terms $$ R_z(\psi) \vec{g}= \vec{g} $$ If you had chosen a different set of Euler angles, such that the rotation acting on $\vec{g}$ first is not a rotation about the z-axis, then the angle $\psi$ would not cancel out. The two angles that orient a vector are exactly the angles of the spherical coordinate system, similar to the earth's latitude/longitude angles.
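A quick numeric check of this point with NumPy (angle values arbitrary). The elementary matrices below are written in the passive (frame-rotation) convention, which is what reproduces the question's matrix; with $\vec{g}$ along z, the $R_z(\psi)$ factor acts on it first and leaves it unchanged, so $\psi$ drops out.

```python
import numpy as np

# Passive (frame) rotation matrices, matching the question's convention.
def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

g = 9.81
g_vec = np.array([0.0, 0.0, g])
phi, theta = 0.3, -0.7                    # arbitrary test angles

def body_accel(phi, theta, psi):
    # The question's full matrix product applied to (0, 0, g).
    return Rx(phi) @ Ry(theta) @ Rz(psi) @ g_vec

# Right-hand side of the question's equation.
expected = np.array([-g * np.sin(theta),
                     g * np.sin(phi) * np.cos(theta),
                     g * np.cos(phi) * np.cos(theta)])

# Rz(psi) acts on g_vec first and leaves (0, 0, g) unchanged, so psi drops out.
psi_independent = all(np.allclose(body_accel(phi, theta, psi), expected)
                      for psi in np.linspace(0.0, 2 * np.pi, 7))
```

The same check with $\vec{g}$ replaced by a vector not along z would show the $\psi$-dependence reappearing, which is the answer's point about choosing a different angle set.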
{ "domain": "physics.stackexchange", "id": 92675, "tags": "rotational-kinematics, rotation" }
Question regarding the equation $\Delta U = Q - W$
Question: One of the specification points in my A-Level is to be able to interpret negative/positive values in the $\Delta U = Q - W$ equation. I can't seem to find an intuitive explanation in my textbook, so could anyone else help? EDIT - If you are pushing a piston into a cylinder, what is happening to the variables in this equation? Answer: Always know how to answer this: the key question you have to ask yourself is, what do these symbols mean? Heat: $Q$ means heat, of course, but does it mean heat added to or removed from the system? In almost all cases it takes the sense of "added to". Work: should $W$ be the work done on the system or the work done by the system? If you are primarily interested in the system then it should be the work done on the system, which has the added bonus that both symbols then mean energy added to the system. However, the main practical application of early thermodynamics was in the building of engines. And when you are designing an engine the things you worry about are how much fuel you put in and how much work you get out. The obvious convention for an engine designer is to have $W$ represent the work done by their machine, meaning that $W$ represents energy leaving the system. First law: so the engineer's version of the first law is $$ \Delta U = Q - W \tag{engineering} \;,$$ while most chemists and physicists prefer $$ \Delta U = Q + W \tag{science} \;.$$ Side effects of this choice: note that the effects of this choice reverberate through the math. Taking the case of a fluid system, the mechanical work is $$ W = +P \,\mathrm{d}V \tag{engineering} \;, $$ or $$ W = -P \,\mathrm{d}V \tag{science} \;, $$ and that means that the way you write the various thermodynamic potentials changes as well. Comparing texts that use different sign conventions for work is a painstaking undertaking. Aside: Reif's textbook is the only one I am familiar with to use two symbols to represent both meanings.
$W$ is the work done by the system, $\mathcal{W}$ is the work done on the system. That way you can write the first law in whichever manner makes the sign handling easiest for each problem you come to. I've been using this approach in all the writing I do for my students, and have even adopted some of Reif's other notational conventions that I don't care much for ($\bar{E}$ for the internal energy? Really?) just so that my students can compare the notes I give them directly to one of their texts.
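To tie this back to the question's edit about pushing a piston into a cylinder: in that case work is done on the gas, so $W$ is positive in the science convention and negative in the engineering convention, and both forms of the first law give the same $\Delta U$. A tiny numeric illustration (all values made up):

```python
# Pushing a piston IN: the surroundings do work on the gas.
# Made-up numbers for illustration only.
Q_added = -30.0       # J, heat leaving the compressed gas (Q is "heat added to")
W_on = 100.0          # J, work done ON the gas by the piston
W_by = -W_on          # J, work done BY the gas (negative here)

dU_science = Q_added + W_on          # dU = Q + W  (W = work on the system)
dU_engineering = Q_added - W_by      # dU = Q - W  (W = work by the system)
```

Both conventions give $\Delta U = 70\ \mathrm{J}$: the internal energy rises because more energy enters as work than leaves as heat. Only the bookkeeping of the sign of $W$ differs.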
{ "domain": "physics.stackexchange", "id": 47859, "tags": "thermodynamics, work, conventions" }
Why don't miners get boiled to death at $4$ km deep?
Question: The Mponeng Gold Mine is nearly $4$ km deep. It has the largest elevators in the world and is considered one of the most dangerous mines in the world. The geothermal gradient is $25$ degrees Celsius per kilometer, which would be $100$ degrees. Therefore, it would be well over boiling temperature at the deepest part of the mine, at least theoretically. Why don't the miners get boiled to death? Also, I have read that the temperature in the mine is only $150$ °F [$66$ °C] which would seem to conflict with the geothermal gradient. Why is that? Answer: As noted in CountTo10's answer, the main answer is simple - miners don't "boil" because the mines use suitable cooling and ventilation equipment, plain and simple. That said, there is a contradiction, at least if you go only by Wikipedia and explicitly ignore its caveats. The Wikipedia page for the Mponeng gold mine makes the maximum rock temperature at 66 °C, and if all you read from the Wikipedia page on the geothermal gradient is the stuff in bold, then yes, a 25 °C/km gradient over a 4 km depth would give you 100 °C on top of the surface temperature. However, the actual text in that page reads Away from tectonic plate boundaries, it is about 25 °C per km of depth (1 °F per 70 feet of depth) near the surface in most of the world. and makes it clear that there can be local variations. With that in mind even some very mild digging turns up this map of the geothermal heat flow in South Africa: (Taken from S. Afr. J. Sci. 110 no. 3-4, p. 1 (2014).) This makes it clear that the Mponeng mine is right on top of a cold spot in the Wits basin. The stated heat flows are not enough to reconstruct the thermal gradient (you need the thermal conductivity for that), and I'm not going to go on an expedition for fully trustworthy sources for that gradient. 
However, some more cursory digging unearthed this source, which looks reasonable (if not particularly scientific), and which claims that mining at these depths is only feasible in South Africa’s Wits Basin due to a relatively low geothermal gradient (nine degrees Celsius/km) and the presence of gold reefs in hard competent country rocks. This is enough of an agreement to call it a day. Backtracking a 9 °C/km gradient over 4 km gives a ~36 °C difference, and taking that away from the 66 °C (maximal!) rock temperature in the mine gives a ~30 °C average surface temperature. This is relatively high, but it is within a reasonable envelope, and there's plenty of leeway in the numbers (e.g. making the gradient 10 or 11 °C/km) to take away any glaring contradictions.
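The arithmetic above is simple enough to check in a few lines (figures taken from the answer itself):

```python
depth_km = 4.0
grad_global = 25.0     # degC/km, textbook average geothermal gradient
grad_wits = 9.0        # degC/km, reported Wits Basin gradient
rock_temp_max = 66.0   # degC, Wikipedia's maximum rock temperature at Mponeng

naive_rise = grad_global * depth_km          # 100 degC -- the apparent paradox
wits_rise = grad_wits * depth_km             # 36 degC with the local gradient
implied_surface = rock_temp_max - wits_rise  # ~30 degC average surface temperature
```

So the "boiling" prediction only follows from the global-average gradient; with the local Wits Basin value the numbers are self-consistent.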
{ "domain": "physics.stackexchange", "id": 35128, "tags": "temperature, geophysics" }
Why Refill of Gel Pen contains some transparent type liquid?
Question: See this image for full clarity on what I was asking. I also read the Wikipedia page on gel pens; nowhere does it mention the transparent thing I am asking about. Answer: I'm not sure this is really physics, but since you asked: the clear fluid is called the ink follower. The gel is a suspension of pigment particles in an aqueous solution of a polymer. The gel has to have just the right properties - it has to be thick enough to suspend the pigment particles but thin enough to flow onto the ball and then the paper. As it happens, many polymer solutions are shear thinning, and since suspending the pigment is low shear while flowing onto the ball is high shear, a single polymer can meet both requirements. However, using a shear-thinning polymer does mean that in high-stress situations, e.g. dropping the pen onto the floor, the gel could shear thin and flow out of the end of the tube and create a mess. Adding a plug of ink follower prevents this. The ink follower does not have to flow onto the ball and paper and it doesn't have to suspend pigment particles, so it can be chosen solely for its ability to resist sudden shocks like dropping the pen. Ink followers tend to be low-volatility oils of various types, thickened with a solid such as silica or clay. Being non-polar means they don't mix with the gel, and it also prevents evaporation of the gel.
{ "domain": "physics.stackexchange", "id": 41237, "tags": "material-science" }
waitForMessage functionality for TimeSynchronizer
Question: Hi, I am interested in getting new sensor values (published to a topic) while in a service callback. For a single topic I was able to do this using ros::topic::waitForMessage. Unfortunately I am actually looking for a pair of images + camera information (4 topics in total) that I synchronize using message_filters::TimeSynchronizer. Is there a way to wait (block) for the next synchronized set while in a service callback? Thanks Originally posted by dk on ROS Answers with karma: 176 on 2011-09-05 Post score: 1 Answer: There is no 'shortcut' such as ros::topic::waitForMessage for the message time synchronizer. You'll have to construct your own mechanism where you connect to the topics from the service callback, and monitor the callback of the time synchronizer. Originally posted by Wim with karma: 2915 on 2011-09-16 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by dk on 2011-09-23: I tried this but it did not work. The callback (given to TimeSynchronizer) does not get called while the service callback is active. Looking at the implementation of waitForMessage it seems I need to specify a different CallbackQueue, but I could not figure out how to do this for TimeSynchronizer.
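Since there is no ready-made API for this, any solution is hand-rolled. The sketch below shows only the generic blocking mechanism, in plain Python with no ROS dependency; all names are hypothetical. In a real node you would register latch as the TimeSynchronizer callback and, as the follow-up comment points out, make sure the synchronizer's subscriptions are serviced by a separate callback queue or spinner thread, otherwise the synchronizer callback can never fire while the service callback is blocked.

```python
import threading

class SyncedMessageWaiter:
    """Generic 'wait for the next synchronized set' helper (not a ROS API).

    In ROS you would pass self.latch as the callback registered on the
    time synchronizer; here we only sketch the blocking mechanism itself.
    (Simplified: a set that arrives before wait_for_set() is discarded.)
    """
    def __init__(self):
        self._event = threading.Event()
        self._msgs = None

    def latch(self, *msgs):          # called by the synchronizer's thread
        self._msgs = msgs
        self._event.set()

    def wait_for_set(self, timeout=None):
        self._event.clear()
        if not self._event.wait(timeout):
            return None              # timed out
        return self._msgs

waiter = SyncedMessageWaiter()

# Simulate the synchronizer firing from another thread shortly after.
t = threading.Timer(0.05, waiter.latch,
                    args=("img_left", "img_right", "info_left", "info_right"))
t.start()
result = waiter.wait_for_set(timeout=2.0)
```

The timeout matters in practice: if the synchronizer never produces a matched set (e.g. because of the callback-queue issue above), the service callback should fail gracefully rather than block forever.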
{ "domain": "robotics.stackexchange", "id": 6609, "tags": "ros, topic, callbackqueue, service" }
Rendezvous Orbits
Question: If I have two orbits, Orbit 1 and Orbit 2, and the spacecraft in Orbit 2 is trying to dock with the spacecraft in Orbit 1, but does not have a stable orbit (it will crash into Earth), how will this affect Orbit 1? Here's a simple diagram. So how does the rendezvous affect Orbit 1? Will this make Orbit 1 shrink, or would it stay the same (or something else)? Answer: When Orbit 2 makes rendezvous with Orbit 1, it is travelling more slowly than Orbit 1 (which is why, if it misses rendezvous, it will then drop to a lower altitude and reenter). So there are two possibilities for the rendezvous. 1) Just before joining up with #1, #2 fires rockets to gain speed to match with #1. In this case, the two conjoined ships will continue in Orbit 1. 2) #2 does not speed up, and the two ships collide. Assuming that they survive this collision, and they fuse together, and their masses are approximately equal, the new composite vehicle will have a velocity which is the average of the two original velocities, and an orbit sort of like #2's original Orbit 2. However, the new orbit will not approach the earth quite as closely at perigee. If Orbit 2 just grazed the earth, the new orbit will miss earth entirely.
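The equal-mass "average velocity" claim in case 2 is just conservation of momentum for a perfectly inelastic collision; with made-up speeds:

```python
# Perfectly inelastic docking "collision": momentum is conserved.
m1, m2 = 1000.0, 1000.0          # kg, equal masses as in the answer
v1, v2 = 7800.0, 7600.0          # m/s, made-up speeds at the rendezvous point

v_combined = (m1 * v1 + m2 * v2) / (m1 + m2)       # the simple average, 7700 m/s

# With unequal masses the result is momentum-weighted instead, closer to
# the heavier vehicle's velocity.
v_unequal = (3000.0 * v1 + 1000.0 * v2) / 4000.0   # 7750 m/s
```

This is why "their masses are approximately equal" matters in the answer: for very different masses the composite orbit stays close to the heavier ship's original orbit.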
{ "domain": "physics.stackexchange", "id": 23078, "tags": "orbital-motion, space" }
Reducing voltage with resistors
Question: I am working on a structures project in which I have to rotate the second storey of a model building. My assembly consists of a 12 V battery connected to a 12 V motor, which is connected to a shaft, which in turn is connected to blades/a paddle wheel, and those blades are joined to the floor of the second storey using screws. My problem is that the wheel rotates at high speed and I have to limit the speed, but I couldn't find a voltage regulator, so how can I reduce the voltage given to the motor using resistors? I am not good at circuits, so please make it as simple as you can. Answer: In general, reducing the voltage to control the rpm of a motor is not a good strategy, and it's even worse when executed with resistors. If you use a resistor, the voltage drop across the resistor will be $V_{R}=RI_m$, i.e. the remaining voltage for the motor will be $V_m=V_0-V_R=V_0-RI_m$, writing $V_0$ for the supply voltage. The more torque the motor has to generate, the higher the current $I_m$ will be, the more voltage will drop across the resistor, and the less will be available for the motor. Since the output power of the motor is $P=V_{m}I_m=(V_0-RI_m)I_m=V_0I_m - RI_m^2$, there will be a roughly quadratically increasing power loss with increasing torque. Since the motor will need a substantial current just to start moving, the resistor method usually leads to very poor starting behavior. If you don't have a voltage regulator, at the very least try your luck with several diodes in series. Each diode will drop the voltage by approx. 0.7 V, relatively independent of the motor current $I_m$. This only imposes a motor power penalty that is roughly linear in $I_m$. The diodes need to be able to handle the motor current and a power dissipation of $I_m\times0.7\,\mathrm{V}$. In practice even this is not a very good solution. I would always use a motor controller for this purpose, if possible. They are cheap and they get the job really done.
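A quick numeric illustration of this point (12 V supply; the resistor value and currents are made up): the resistor's loss grows with the square of the motor current, while a chain of diodes drops a roughly fixed voltage regardless of load.

```python
# Made-up numbers: 12 V supply, series dropping resistor R, motor current I_m.
V0 = 12.0     # V, supply voltage
R = 6.0       # ohm, hypothetical dropping resistor

def motor_power(I_m):
    V_m = V0 - R * I_m            # voltage left for the motor
    return V_m * I_m              # P = V0*I_m - R*I_m**2

# Power wasted in the resistor grows quadratically with motor current:
loss_low = R * 0.5 ** 2          # 1.5 W at 0.5 A
loss_high = R * 1.0 ** 2         # 6.0 W at 1.0 A

# Diode alternative: each silicon diode drops ~0.7 V, roughly independent
# of current, so the motor voltage stays predictable under load.
n_diodes = 4
V_m_diodes = V0 - 0.7 * n_diodes # ~9.2 V for any reasonable motor current
```

Note also that with R = 6 Ω the motor gets only 6 V at 1 A, half the supply, and even less at start-up currents, which is exactly the poor starting behavior described above.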
{ "domain": "physics.stackexchange", "id": 29943, "tags": "electric-circuits, electric-current, electrical-resistance, electronics" }
Problem with pr2_2dnav tutorial
Question: Hello, I'm trying to follow the pr2_2dnav tutorial, but I found that when I execute the command roslaunch pr2_2dnav pr2_2dnav.launch it says: [pr2_2dnav.launch] is neither a launch file in package [pr2_2dnav] nor is [pr2_2dnav] a launch file name I used sudo apt-get install ros-hydro-pr2-2dnav in order to obtain the package. What could be causing this problem? Thanks beforehand! Originally posted by pexison on ROS Answers with karma: 82 on 2015-03-16 Post score: 0 Answer: Looking at the CMakeLists.txt for pr2_2dnav for Hydro, it would appear that the launch file is not installed (the build script is missing an install(..) rule). That would result in the error message you quoted. You can check this by doing a ls -al /opt/ros/hydro/share/pr2_2dnav/ and then looking for the pr2_2dnav.launch file in the listing. It will most likely not be there. If the file is missing, please log an issue on the pr2_navigation_apps issue tracker. Originally posted by gvdhoorn with karma: 86574 on 2015-03-17 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by pexison on 2015-03-17: under this directory there is only a cmake folder and a package.xml file; there are no more files/folders. Should I log an issue then? Comment by gvdhoorn on 2015-03-17: Yes, file an issue and please add a link to this question to the issue. Comment by DevonW on 2015-03-17: Thanks for bringing this up. I've fixed it and re-released it. See: https://github.com/ros/rosdistro/pull/7546 Comment by pexison on 2015-03-18: By the way, is it possible to use this library for the turtlebot? Or is it only for the PR2 robot? Comment by DevonW on 2015-03-18: Please open a new question if this one is answered.
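For reference, the kind of catkin install rule the answer says was missing looks like the snippet below. This is a generic illustration of the standard pattern, not the actual patch (the real fix was made in the repository linked above):

```cmake
# Generic catkin pattern: install the package's launch files so that
# `roslaunch <pkg> <file>.launch` can find them after a binary install.
install(DIRECTORY launch
  DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
)
```

Without such a rule the launch files exist only in the source tree, which is why a package installed via apt-get shows only the cmake folder and package.xml under /opt/ros/.../share/.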
{ "domain": "robotics.stackexchange", "id": 21143, "tags": "ros, tutorial, pr2" }
Energy of a Continuous Charge Distribution
Question: I had a question regarding the derivation of the following expression for the energy of a continuous charge distribution $$W=\frac{\epsilon_0}{2}\int_\text{all space} E^2d\tau$$ To get this result, we start from the fact that when considering point charges, the work is given by the sum $$W=\frac{1}{2}\sum_{i=1}^nq_iV(\vec{r}_i)=\frac{1}{2}\sum_{i=1}^{n}\sum_{\substack{j=1\\j\ne i}}^n\frac {q_iq_j}{4\pi\epsilon_0}\frac{1}{r_{ij}}$$ Then, we extend this to continuous distributions by making it a volume integral, taking into account the charge distributions in two regions of space $$W=\frac{1}{2}\int_\text{all space}\frac{\rho_1\rho_2}{4\pi\epsilon_0}\frac{1}{r_{12}}d\tau_1d\tau_2$$ We can see that $\int\frac{\rho_2}{4\pi\epsilon_0}\frac{1}{r_{12}}d\tau_2$ is just the potential at region 1, due to the charge distribution in region 2. Hence we can rewrite this as $$W=\frac{1}{2}\int_\text{all space}\rho Vd\tau$$ I understand that $\rho=\rho_1+\rho_2$ and that $V=V_1+V_2$, so $$\frac{1}{2}\int\rho Vd\tau=\frac{1}{2}\left(\int\rho_1 V_1d\tau+\int\rho_1V_2d\tau+\int\rho_2V_1d\tau+\int\rho_2V_2d\tau\right)$$ where the two integrals in the middle are equal, so by dividing their sum by two we get the total work. However, the terms which include the product between a potential and its own charge distribution should vanish, yet I haven't been able to see how this happens, since when I try to solve those integrals, like $\int\rho_1V_1d\tau$, the result diverges. I wanted to know whether $\rho$ and $V$ are what I understand them to be, and if so how the integrals vanish; or, if this is wrong, what charge distribution and potential $\rho$ and $V$ stand for. Answer: You will soon see that splitting the charge density and potential into 2 distinct elements is the same as splitting $\vec{E}$ into 2 elements.
$$\vec{E}_{total} = \vec{E}_{1} + \vec{E}_{2}$$ $$W = \frac{1}{2} \epsilon_{0} \iiint |\vec{E}_{total}|^2 d^3 r$$ $$W = \frac{1}{2} \epsilon_{0} \iiint |\vec{E}_{1} + \vec{E}_{2}|^2 d^3 r$$ Computing this expression gives us 3 distinct terms. $$W = \frac{1}{2} \epsilon_{0} \iiint |\vec{E}_{1} |^2 d^3 r $$ $$+\frac{1}{2} \epsilon_{0} \iiint |\vec{E}_{2} |^2 d^3 r $$ $$+\epsilon_{0} \iiint \vec{E}_{1}\cdot \vec{E}_{2} d^3 r $$ What do they represent? The first term represents the energy of $\vec{E}_{1}$. The second term represents the energy of $\vec{E}_{2}$. The third term represents the potential energy between the charge distributions (building up field 1 in the presence of field 2) - I'll leave it to you to prove this! Splitting up the charge density: this decomposition of $\vec{E}$ into 2 elements is the same as splitting up the charge distribution into 2 elements $$\rho = \rho_{1} + \rho_{2}$$ $$V = V_{1} + V_{2}$$ $V_{1}$ is caused by $\rho_{1}$, and $V_{2}$ is caused by $\rho_{2}$ $$ W= \frac{1}{2}\iiint [\rho_{1} + \rho_{2}][V_{1} + V_{2}] d^3r$$ There are 3 distinct terms in this expression: $$W= \frac{1}{2}\iiint \rho_{1}V_{1} d^3r$$ $$+\frac{1}{2}\iiint \rho_{2}V_{2} d^3r$$ $$+\frac{1}{2}\iiint [\rho_{1}V_{2} + \rho_{2} V_{1}] d^3r$$ The first 2 terms take the form that we are familiar with; they do not vanish. They are the individual energies of $\vec{E}_{1}$ and $\vec{E}_{2}$. The last term is slightly more complicated. This term represents the potential energy between the 2 charge distributions! To show this: $$\iiint \rho_{1}V_{2} d^3r = \iiint \rho_{2}V_{1} d^3r$$ as building up distribution 1 in the presence of potential 2 is the same as building up distribution 2 in the presence of potential 1 [which is intuitive; you can also prove this mathematically]. Substituting this identity into our third term reveals that this term is in fact $$\iiint \rho_{1} V_{2} d^3r$$ This is obviously the potential energy between our charge distributions, since we are building up a charge $\rho_{1} d^3r$ in the presence of $V_{2}$. Note: you say the first 2 terms diverge; if you're using this expression for a point charge then yes, the field energy is infinite with these formulas. The formula is not valid for point charges, since the derivation assumes $\rho$ is finite [discussed further in Griffiths]. Instead we model the energy of point charges using the discrete formula you mentioned, or using renormalisation.
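For completeness, here is a sketch of the two proofs left as exercises above, using only the definitions already in play. For the symmetry identity, write out the potential explicitly: $$\iiint \rho_1(\vec r)\,V_2(\vec r)\,d^3r = \iiint\!\!\iiint \frac{\rho_1(\vec r)\,\rho_2(\vec r\,')}{4\pi\epsilon_0\,|\vec r-\vec r\,'|}\,d^3r'\,d^3r$$ Relabeling the dummy variables $\vec r \leftrightarrow \vec r\,'$ and using $|\vec r - \vec r\,'| = |\vec r\,' - \vec r|$ turns the right-hand side into $\iiint \rho_2(\vec r)\,V_1(\vec r)\,d^3r$. For the field cross term, use $\vec E_2 = -\nabla V_2$ and integrate by parts (the boundary term vanishes for localized charges): $$\epsilon_0\iiint \vec E_{1}\cdot\vec E_{2}\,d^3r = -\epsilon_0\iiint \vec E_{1}\cdot\nabla V_{2}\,d^3r = \epsilon_0\iiint V_{2}\,(\nabla\cdot\vec E_{1})\,d^3r = \iiint \rho_{1} V_{2}\,d^3r$$ by Gauss's law $\nabla\cdot\vec E_{1} = \rho_{1}/\epsilon_0$. This confirms that the cross term in the field picture is exactly the interaction term in the charge picture.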
{ "domain": "physics.stackexchange", "id": 89096, "tags": "electrostatics, charge, work, potential-energy, voltage" }
Is a Weighted Average the Best Method to Aggregate Information?
Question: I'm working on a recommendation system. My system uses a user's past rating data to predict future ratings. I designed mathematical methods for generating recommendation algorithms that allow me to generate an unlimited number of recommendation algorithms (given a large enough set of relevant ratings). I intend to aggregate the results from all these algorithms to compute a final expected rating for the user, so I thought of taking a weighted average of the expected ratings of each algorithm. Basically I either: 1. Assign weights based on the amount of information (ratings collected). 2. Assign weights based on the accuracy of the algorithm used. The algorithms arrive at their answers via different means, and it is possible that two may give the same answer, but they may take into account different information and/or use different inferential techniques on that information. So if an accuracy of 1 means there's no deviation between the algorithm's expected ratings and the user's actual ratings, and an accuracy of 0 means there's maximum deviation between an algorithm's expected ratings and the user's actual ratings, we may get $\{Al_1, Al_2, Al_3\}$ with accuracies of $(0.9, 0.85, 0.92)$. I have a minimum information and/or accuracy threshold that I would use to filter the algorithms to assign weights to (I have not yet figured out what this threshold would be, and as it may vary on a per-user basis, I want to leave determining the threshold to an ML system). My question is this: is a weighted average the best method for me to aggregate the data provided by my various algorithms and infer from it? If not, what other methods are more apt for the problem at hand? Answer: A weighted average might or might not be the best method. This will depend on (a) the nature of the algorithms, and (b) what objective function you are trying to minimize. Let me give you a sample set of assumptions.
Suppose we treat the predictions from the recommendation algorithms as independent random variables (e.g., because each recommendation algorithm was trained independently). Also, suppose we wish to minimize the variance of the final prediction. In other words, our loss function is the $L_2$ loss: if our prediction was $Y$, and the correct value was $\tilde{Y}$, then the loss (penalty) is $(Y-\tilde{Y})^2$. Then in this case we can find the optimal set of weights. In particular, in this case, if we restrict ourselves to linear combinations, the optimal set of weights is inversely proportional to the variance of each estimator. Suppose $X_1,\dots,X_n$ are the predictions from our $n$ recommendation algorithms. Suppose that we have measured the accuracy of each recommendation algorithm by measuring its $L_2$ loss on a validation set; in other words, we know the variance $\text{Var}(X_i)$ for each $i$. Here $\text{Var}(X_i)$ is a measure of the accuracy of the $i$th recommendation algorithm: the larger the variance, the larger the expected $L_2$ loss, and thus the less accurate the prediction is. Then the optimal linear weighting is to select a final prediction $Y$ by $$Y = c_1 X_1 + \dots + c_n X_n,$$ where we choose the weights $$c_i = {a_i \over a_1 + \dots + a_n}$$ where $$a_i = {1 \over \text{Var}(X_i)}.$$ Is a linear combination optimal (as opposed to other, nonlinear methods of combining the predictions)? In this situation, if we additionally know that the errors made by each recommendation algorithm have a Gaussian distribution, then yes, a linear combination is optimal. If the errors have a different distribution, then some other combination might be better. As you can see, this will depend on the loss function we select, and on other assumptions. That said, in practice, a linear combination (i.e., a weighted average) is often a reasonable choice. There's lots written about this sort of thing in the statistics literature. 
For instance, the above discussion is just a restatement of the fact that the average is the optimal estimator of the mean of a Gaussian distribution (i.e., it minimizes the mean squared error). See, e.g., https://en.wikipedia.org/wiki/Estimator, https://en.wikipedia.org/wiki/Mean_squared_error, https://stats.stackexchange.com/q/81571/2921, https://stats.stackexchange.com/q/97765/2921, https://stats.stackexchange.com/q/48864/2921, https://math.stackexchange.com/q/9032/14578.
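The inverse-variance rule derived above is easy to verify by simulation (all numbers made up): three independent, unbiased "algorithms" with different error variances, combined with weights $c_i \propto 1/\text{Var}(X_i)$, give a lower-variance estimate than even the best single algorithm, and the combined variance matches the theoretical value $1/\sum_i 1/\text{Var}(X_i)$.

```python
import numpy as np

rng = np.random.default_rng(1)

true_rating = 4.0
sigmas = np.array([0.3, 0.5, 0.2])       # per-algorithm error std devs (made up)
n_trials = 20000

# Each "algorithm" predicts the true rating plus independent Gaussian error.
preds = true_rating + sigmas * rng.normal(size=(n_trials, 3))

# Inverse-variance weights, as derived in the answer.
w = (1 / sigmas**2) / np.sum(1 / sigmas**2)
combined = preds @ w

var_combined = combined.var()
var_best_single = preds[:, 2].var()       # the most accurate single algorithm
var_theory = 1 / np.sum(1 / sigmas**2)    # predicted variance of the combination
```

Note the caveats baked into this demo: the errors are independent, unbiased, and Gaussian. If the algorithms' errors are correlated (likely when they share training data), the optimal weights change and the variance reduction is smaller.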
{ "domain": "cs.stackexchange", "id": 9261, "tags": "machine-learning, recommendation-systems" }
Relativity of simultaneity: two light sources
Question: A. Suppose a moving train. The train has two wheels (front and back) and each wheel is connected to a light source inside the train. A light source is triggered (light is emitted) when the train passes a certain point on the track, and both light sources are triggered at the same time (the gap between the trigger points is identical to the gap between the two wheels). A-1. Suppose an observer at the middle of the train (the same distance from the two light sources). He observes the two lights at the same time, since in his frame both he and the light sources are stationary. A-2. Suppose an observer outside the train, who can observe the light reaching the middle of the train (by a mirror or similar). He observes the front light reaching the middle first, since in his frame the train is moving. So the event of both lights reaching the middle is not simultaneous for both observers. Am I understanding and thinking about this correctly? Answer: Although the two lights are emitted simultaneously in the train's frame, they do not appear to be emitted simultaneously in the track's frame. The observer on the track sees the rear light emitted earlier than the front light, by exactly $\Delta t = \frac{v/c}{\sqrt{1 - (v/c)^2}} \frac{L}{c}$, where $L$ is the rest length of the train and $v$ is its velocity on the track. This is just relativity of simultaneity at work, and the main source of confusion in the understanding of this typical train setup. So the observer on the track does observe the front light taking a shorter time to reach the middle of the train than the rear light, but both he and the observer on the train see the two lights reaching the middle of the train simultaneously. What is seen from the track is that by the time the front light is emitted, the rear light has already traveled a distance $d = c\Delta t = \frac{v/c}{\sqrt{1 - (v/c)^2}} L$, and from that moment both lights travel equal distances until they meet in the middle of the train.
The reason why the latter simultaneity is allowed is because the event occurs at a single point in space-time. The emission events, on the other hand, occur at two distinct locations in any frame and are always spacelike separated. This means that although they are simultaneous in the train's frame, they are not simultaneous in any other frame. Moreover, different frames observe them in different order: they are not causally related. Another source of confusion: in the track's frame the distance between the locations at which the two lights are seen emitted is neither the contracted length of the train $L\sqrt{1-(v/c)^2}$, nor the rest length of the train, but actually a dilated length $L/\sqrt{1-(v/c)^2} > L > L\sqrt{1-(v/c)^2}$. Please keep in mind though that this is a distance between events that are observed at different moments in the track's time. The phenomenon occurs for any two events that are simultaneous in a given frame. Derivation of the delay $\Delta t$: Let the train's frame have primed coordinates $(x', ct')$, and the track frame have unprimed coordinates $(x, ct)$. If the frames are synchronized such that for $ct = 0$ on the track then $ct' = 0$ on the train, the Lorentz transformation from train coordinates to track coordinates reads $$ x = \gamma(x' + \beta ct')\\ ct = \gamma(ct' + \beta x') $$ for $\beta = v/c$ and $\gamma = 1/\sqrt{1 - \beta^2}$. Suppose the two lights are emitted in the train's frame at coordinates $(x'_1 = -L/2, ct'_0 = 0$) and ($x'_2 = L/2, ct'_0 = 0$). Then in the track frame these events have coordinates $$ x_1 = \gamma(-L/2 + \beta \cdot 0) = -\gamma L/2\\ ct_1 = \gamma(0 + \beta (-L/2)) = -\beta \gamma L/2 $$ for the rear light and $$ x_2 = \gamma ( L/2 + \beta \cdot 0) = \gamma L/2\\ ct_2 = \gamma(0 + \beta L/2) = \beta \gamma L/2 $$ for the front light. 
From these we have that the track observer sees the rear light being emitted first, since $ct_2 > ct_1$, and so the front light appears to be delayed by $$ \Delta t = t_2 - t_1 = \beta\gamma \frac{L}{c} = \frac{v/c}{\sqrt{1-(v/c)^2}} \frac{L}{c} $$ Likewise, the distance between the locations at which the track observer sees the lights being emitted is $$ \Delta x = x_2 - x_1 = \gamma L = \frac{L}{\sqrt{1-(v/c)^2}} $$ Fun exercise: Apply the same kind of reasoning to calculate where and when the meeting of the two lights in the middle is observed in the track frame. Then calculate the distances traveled by the two lights until they meet, as seen on the track :)
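A quick numerical check of the derivation (and of the fun exercise at the end), as a sketch of my own with $c = 1$ and an arbitrary $\beta$:

```python
import math

def boost_to_track(xp, ctp, beta):
    """Lorentz-transform an event (x', ct') from the train frame to the track frame."""
    g = 1.0 / math.sqrt(1.0 - beta * beta)
    return g * (xp + beta * ctp), g * (ctp + beta * xp)

beta, L = 0.6, 1.0                  # v = 0.6c, rest length L = 1 (units with c = 1)
g = 1.0 / math.sqrt(1.0 - beta**2)  # Lorentz factor gamma

# Emission events, simultaneous in the train frame at ct' = 0
x1, ct1 = boost_to_track(-L / 2, 0.0, beta)   # rear light
x2, ct2 = boost_to_track(+L / 2, 0.0, beta)   # front light

assert ct1 < ct2                                  # rear light is emitted first on the track
assert abs((ct2 - ct1) - beta * g * L) < 1e-12    # delay  c*dt = beta*gamma*L
assert abs((x2 - x1) - g * L) < 1e-12             # dilated separation gamma*L

# The lights meet at the middle of the train: (x' = 0, ct' = L/2) in the train frame
xm, ctm = boost_to_track(0.0, L / 2, beta)
assert abs((ctm - ct1) - abs(xm - x1)) < 1e-12    # rear light traveled at speed c
assert abs((ctm - ct2) - abs(xm - x2)) < 1e-12    # front light traveled at speed c
```

The last two lines reproduce the fun exercise: on the track the meeting is observed at $x = \beta\gamma L/2$ and $ct = \gamma L/2$, and each light indeed covers its (unequal) distance at speed $c$.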
{ "domain": "physics.stackexchange", "id": 29353, "tags": "special-relativity, inertial-frames, observers" }
sending image to html
Question: Hi, I am getting the following error in the Rosbridge WebSocket server when I try to send an image to an HTML web page using roslibjs with cv_bridge: "Tried to register topic /camera/image/compressed with type sensor_msgs/CompressedImage but it is already established with type sensor_msgs/Image" Any ideas on what I am doing wrong? I can see the image being published to rviz, and I have the option there to choose raw or compressed, so I'm assuming it's something I am doing wrong on the HTML side. This is what I have on the HTML side:

var image_listener = new ROSLIB.Topic({
  ros : ros,
  name : '/camera/image/compressed',
  messageType : 'sensor_msgs/CompressedImage'
});

image_listener.subscribe(function(message) {
  //document.getElementById("liveimage").innerHTML = m.data;
  console.log("image listener event fired");
  var imagedata = "data:image/png;base64," + message.data;
  document.getElementById("liveimage").src = imagedata;
  //document.getElementById("liveimage").src = m.data;
});

This is what I have on the C++ side:

ros::init(argc, argv, "test image");
ros::NodeHandle n;
cv::Mat image2 = cv::imread("test.png", 0);
image_transport::ImageTransport it_(n);
image_transport::Publisher image_pub_ = it_.advertise("/camera/image/compressed", 1);
cv_bridge::CvImagePtr cv_ptr(new cv_bridge::CvImage);
ros::Rate loop_rate(10); //10 hz
while (ros::ok()) {
#if 1
    ros::Time time = ros::Time::now();
    //cv_ptr->encoding = "bgr8";
    cv_ptr->encoding = "mono8";
    cv_ptr->header.stamp = time;
    cv_ptr->header.frame_id = "/camera/image/compressed";
#endif
    //cv_ptr->image = image;
    cv_ptr->image = image2;
    image_pub_.publish(cv_ptr->toImageMsg());
    ROS_INFO("ImageMsg Sent.");
    ROS_INFO("Subscribers: %d", image_pub_.getNumSubscribers());
    ros::spinOnce();
    loop_rate.sleep();
}
#endif
ros::spin();
return 0;

Originally posted by Jt3d on ROS Answers with karma: 5 on 2020-09-22 Post score: 0

Answer: When you advertise the image topic you need to supply only the base topic name, which in your case should be like this:

image_transport::Publisher image_pub_ = it_.advertise("/camera/image", 1);

Once advertised, transports will automatically be appended depending on what you have installed on your system. Since you supplied '/camera/image/compressed' as the base topic name, your node will advertise a topic by this name of type sensor_msgs/Image. To see what image transports you have installed on your system, run

rosrun image_transport list_transports

Make sure you see image_transport/compressed in the output. If you don't see it, then it's probably not installed. You can install it using

sudo apt install ros-$ROS_DISTRO-compressed-image-transport

With your current node running, list the topics using 'rostopic list -v'. You should see '/camera/image/compressed' advertised as a sensor_msgs/Image topic. But if you have image_transport/compressed installed, you should also see '/camera/image/compressed/compressed' being advertised. This is the topic of type sensor_msgs/CompressedImage. If you plan to use this node for capturing images directly from your camera and publishing them, then take a look at image_transport::ImageTransport::CameraPublisher. Hope this helps!

Originally posted by mwenger with karma: 36 on 2020-09-22 This answer was ACCEPTED on the original site Post score: 0

Original comments

Comment by Jt3d on 2020-09-23: Removing compressed off the topic name worked - many thanks. As a matter of interest, what are the benefits of using the camera publisher vs the normal image publisher? Any good tutorials / examples you would recommend? Also, for anyone else who might be struggling with setting this up, you need this on the HTML side of things, i.e. you need the topic with compressed at the end:

var image_listener = new ROSLIB.Topic({
  ros : ros,
  name : '/camera/image/compressed',
  messageType : 'sensor_msgs/CompressedImage'
});

Comment by mwenger on 2020-09-23: CameraPublisher is just a convenience class that simplifies interacting with the camera.
I don't know of any good tutorials that use it. However, you could look at the usb_cam source code and see how it's used there. Comment by Jt3d on 2020-09-23: OK - many thanks for the help again
{ "domain": "robotics.stackexchange", "id": 35561, "tags": "ros, ros-melodic, image" }
Printing Stabilizer State of a circuit using Cirq
Question: How do I print the stabilizers for a given circuit using Cirq (just like in Qiskit)? For example, suppose I have the following cluster state and build a circuit for it using Cirq. If I give the circuit as input, I should get the output: {+XZII, +ZXZI, +IZXZ, +IIZX}

Answer: This should work, though there might be some better UX on top of it that I'm not aware of.

a, b, c, d = cirq.LineQubit.range(4)
circuit = cirq.Circuit(
    cirq.H.on_each([a, b, c, d]),
    cirq.CZ(a, b),
    cirq.CZ(b, c),
    cirq.CZ(c, d),
)

t = cirq.CliffordTableau(4)
state = cirq.CliffordTableauSimulationState(t, qubits=[a, b, c, d])
for op in circuit.all_operations():
    cirq.act_on(op, state)
print(t)
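If you only want the generators in the {+XZII, ...} form, the same update can also be done by hand for this particular circuit. The sketch below is my own illustration, not Cirq API: it conjugates each initial Z_i generator through H and CZ gates, and only handles the sign-free {I, X, Z} cases that actually occur for a cluster-state circuit (a full implementation would also need Y and phase tracking, which is what the Clifford tableau does for you):

```python
def cluster_stabilizers(n, cz_pairs):
    """Track the Z_i generators through H on every qubit, then the given CZ gates.

    Only covers Paulis in {I, X, Z} with + sign, which is all a
    cluster-state circuit produces.
    """
    # Start from the computational-basis stabilizers Z_0, ..., Z_{n-1}
    gens = [['I'] * n for _ in range(n)]
    for i in range(n):
        gens[i][i] = 'Z'

    # H on every qubit: X <-> Z
    swap = {'I': 'I', 'X': 'Z', 'Z': 'X'}
    gens = [[swap[p] for p in g] for g in gens]

    # CZ(a, b): X_a -> X_a Z_b and X_b -> Z_a X_b; Z's pass through unchanged.
    mul_z = {'I': 'Z', 'Z': 'I'}  # multiplying Z onto X would need sign/Y tracking
    for a, b in cz_pairs:
        for g in gens:
            xa, xb = (g[a] == 'X'), (g[b] == 'X')
            if xa:
                g[b] = mul_z[g[b]]
            if xb:
                g[a] = mul_z[g[a]]
    return ['+' + ''.join(g) for g in gens]

print(cluster_stabilizers(4, [(0, 1), (1, 2), (2, 3)]))
# ['+XZII', '+ZXZI', '+IZXZ', '+IIZX']
```

This reproduces exactly the X-on-the-qubit, Z-on-its-neighbours generators asked for in the question.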
{ "domain": "quantumcomputing.stackexchange", "id": 4230, "tags": "programming, cirq, stabilizer-state" }
Does a ball stretch in a vacuum with constant velocity and no force
Question: I am trying to understand the physics behind why and how a ball stretches when bouncing. Two questions I have regarding this: Does the amount of stretch depend on the velocity vector? If so, would a ball in a vacuum stretch when moving with a constant velocity? Most common depictions of a bouncing ball show the ball stretch in the direction of movement (i.e. the velocity vector). Is this accurate? My intuition tells me the ball should stretch in the direction of the force/acceleration vector. So if a ball bounces while moving forward horizontally, it should stretch in the vertical direction only, and not in the direction of the velocity vector?

Answer: A ball does not stretch in the manner shown in the animation image, nor does a tennis ball stretch as you suggest in your comment to trula's answer. The animation image: the stretch in the picture is just artistic licence - it is used by an animator because it unconsciously leads the viewer of the animation to anticipate the collision of the ball with the floor. In reality, the ball does not deform at all before it hits the ground. In practice, the ball will only deform: while it is in contact with the floor, as a result of elastic compression; and possibly immediately after it has left the floor, as a result of internal 'wobbling'. But unless the ball is very soft or moving very fast, there is unlikely to be any discernible deformation even during the collision. The tennis ball comment: This is just an optical illusion resulting from how photography works. During each frame, the camera shutter is open long enough for the ball's position to move a little. As a result the ball is 'smeared' in the image and appears elongated. In fact, if you know the shutter-open time of your camera (or the electronic collection period for a shutterless camera), then from the length of the 'smear' you can tell exactly how fast an object is moving. But in reality there is no physical elongation.
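To put a number on the motion-blur point: the apparent elongation is just speed times shutter-open time, so the speed is the smear length divided by the exposure time. The values below are illustrative choices of my own, not from the original post:

```python
# Recover an object's speed from its motion-blur "smear" in a single frame.
# Illustrative numbers: shutter open 1/250 s, ball smeared by 12 cm.
exposure_s = 1 / 250          # shutter-open time in seconds (4 ms)
smear_m = 0.12                # apparent elongation of the ball, in metres
speed_m_s = smear_m / exposure_s
print(speed_m_s)              # 30.0 m/s, i.e. 108 km/h
```

So a 12 cm smear at a 1/250 s exposure corresponds to a ball moving at 30 m/s, a plausible speed for a tennis shot.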
{ "domain": "physics.stackexchange", "id": 91311, "tags": "newtonian-mechanics, elasticity" }
Core radius of King's model
Question: I've been learning about the King's model for globular clusters and have been taught that it assumes a core radius: $$r_c = \sqrt \frac{9\sigma^2}{4\pi \rho_0 G} \ (1)$$ where $r_c$ is the core radius (the radius at which the density $\rho$ is half the central density value $\rho_0$) and $\sigma$ is the velocity dispersion. But there are 2 things I'm confused about: How is this equation derived? When I attempt to derive it (given that $\rho(r_c) = \frac{\rho_0}{2}$), I just keep getting this equation: $$r_c = \sqrt \frac{6\sigma^2}{4\pi \rho_0 G}\ (2)$$ When I was searching about the core radius in my attempt to derive the equation, I found that Equation 1 is actually the equation for the King radius ($r_0$) not $r_c$, and that $r_0 \neq r_c$. Is this true? And if so, what is the difference between $r_0$ and $r_c$ then? Thanks for any help anyone can give! [EDIT] References: https://iopscience.iop.org/article/10.1088/0004-637X/778/1/57 , pg. 1 of PDF This is the first article I came across that had the equation I was looking for, but here the equation involves $r_0$ instead of $r_c$. It also says (on line 12 of the paragraph below Equations 1, 2 and 3) that $r_c$ is not equivalent to $r_0$. https://people.ast.cam.ac.uk/~vasily/Lectures/SDSG/sdsg_7_clusters.pdf , pg. 40-41 This is the 2nd and only other reference to this equation that I have found so far, which also has the same above information as the 1st reference. Answer: The first thing you should know (as Peter Erwin and eshaya have implied) is that the King Model is notorious for being a mishmash of observational and theoretical definitions. From what I understand, the original King Model was defined as an empirical tool that used the core radius, tidal radius, and central surface density as free parameters to achieve a fit to a few globular clusters (https://articles.adsabs.harvard.edu/pdf/1962AJ.....67..471K). 
I don't believe that this definition of a King Model is easily or cleanly extendable to the 3D density of the cluster or the potential of the cluster. I think where the confusion appears is that there is a related but distinct concept, which is also called a King Model, but is better described as a truncated isothermal sphere model. This model is the one present in Vasily Belokurov's course notes that you shared above, and his treatment of it is nearly identical to the discussion of King Models in Binney & Tremaine (2008), which is the reference for the discussion below. B&T also has the definition for $r_0$ that you call $r_c$ in your equation (1). The isothermal sphere version of the King Model has a simple analytical form for the density of the singular isothermal sphere given the velocity dispersion of the cluster $\sigma^2$: \begin{equation} \rho(r) = \frac{\sigma^2}{4\pi G r^2} \end{equation} although this definition leads to a non-physical infinite density at the center of the cluster. You can get around this by instead expressing the density and radius as dimensionless quantities which are divided by the (observed) central density $\rho_0$ and the core radius $r_0$ (I will use $r_0$ to match the B&T notation rather than the $r_c$ you use above). The choice of core radius is completely arbitrary as long as it has units of length. It seems like the common definition is the one you share in your equation (1) above. If you define things this way, you get a Poisson equation for $\rho(r)$ of the form \begin{equation} \frac{\textrm{d}}{\textrm{d}r}\left( r^2 \frac{\textrm{d} \ln \rho}{\textrm{d}r} \right) = -9 r^2 \rho. \end{equation} This DE has to be solved numerically. Why was 9 chosen for this definition? Your guess is as good as mine. I don't think this is a quantity that has been derived from anything in particular from first principles.
I couldn't find the original place that this definition for $r_0$ was first used, but I suspect that the factor of 9 was empirically added because it causes $\rho(r_0) \approx \rho_0/2$ to be roughly true. To illustrate this, you could easily replace the 9 in the above equation with 1 (or any other number, it doesn't change the physics at all) with a more "natural" definition of $r_0$: \begin{equation} r_0 = \sqrt{\frac{\sigma^2}{4\pi G \rho_0}}. \end{equation} However, $\rho(r_0) \approx \rho_0/2$ is no longer true for this "natural" definition of $r_0$, which could make it less appealing to observers. If anyone knows more about this, or if there actually is a clean way to derive this extra factor of 9, I would appreciate seeing it! As for your question about why $r_0\neq r_c$, this is because the observational definition of $r_c$ is the half-light radius i.e. $\Sigma(r_c) = \Sigma(0)/2$, whereas the definition for the truncated isothermal sphere scale radius $r_0$ appears to be arbitrary and coincidentally roughly corresponds to $\rho(r_0) \approx \rho(0)/2$. These two definitions are not the same.
{ "domain": "astronomy.stackexchange", "id": 7054, "tags": "astrophysics, globular-clusters" }
Generic 2D engine for simulation
Question: I'm using SFML to manage my windows and I'm writing this engine to visualize some PSO work. I'm trying to make it generic enough that I can extend it for other simulations later on and keep it as minimal as possible. (I'm going to copy this design pattern to a freeglut-initialized OpenGL 3D engine w/ CUDA later on as well, so I'm going for performance here.) It would be a great help to be given some technique and design tips before I continue and get too deep to make it an easy rework. I suppose the relevant knowledge in this area is mostly game-loop design and class structuring, but this is mostly skeleton code anyway. Please be pedantic, but not mean! I'm going to implement exception catching everywhere, and I want to make sure I'm compartmentalizing as much as possible, but not too much! Maybe there are ways to store my variables or implement handlers differently. Basically, let me know of general good coding practices that you think I may be over- or underdoing.

main.cpp

#include "App.h"

int main()
{
    class GE gol;
    gol.run();
    return 0;
}

App.h

#include <SFML/Graphics.hpp>
#include <iostream>
#include <string>

class App
{
public:
    App(sf::VideoMode mode, std::string title, _Uint32t style) : Window(mode, title, style)
    {
        quit_s = false, vs_s = true;
        logic_ticks_sec = 60; // make ctor to handle these, default ctor will always init like this
        frame_time = sf::milliseconds(1000 / logic_ticks_sec);
        max_frameskip = 5;
        Window.setVerticalSyncEnabled(vs_s);
    }
    ~App() { }

    int run();

protected:
    sf::RenderWindow Window;
    sf::Event Event;

    //State Variables
    //Timing
    sf::Clock clock;
    int logic_ticks_sec, max_frameskip;
    sf::Time frame_time, elapsed;

    // Resources

    //Functions
    //Event Handlers
    inline void toggleVsync() { vs_s = !vs_s; Window.setVerticalSyncEnabled(vs_s); };
    inline void quit() { quit_s = true; };
    virtual void handleMouse();

    //Resource Management
    virtual int initResources();
    virtual int cleanResources();

    //Game Execution
    virtual int gameLoop();   //not usually overridden, maybe not virtual and private?
    virtual int eventLoop();  //handle events
    virtual int renderLoop(); //virtual

private:
    //State Variables
    bool quit_s, vs_s;
    sf::Vector2i mouse_pos;
};

class GE : public App // GOL example engine derivation, move to own files when App project is finished
{
public:
    GE() : App(sf::VideoMode::getDesktopMode(), std::string("Game of Life"), sf::Style::None) {};
    ~GE() {};
private:
};

App.cpp

#include "App.h"

//App
int App::run()
{
    initResources();
    gameLoop();
    cleanResources();
    Window.close();
    return 0;
}

//Event Handlers
void App::handleMouse()
{
    mouse_pos = sf::Mouse::getPosition(Window);
}

//Resource Management
int App::initResources() { return 0; }
int App::cleanResources() { return 0; }

//Main Loops
int App::gameLoop()
{
    while (!quit_s)
    {
        clock.restart();
        eventLoop();
        renderLoop();
        elapsed = clock.getElapsedTime();
    }
    return 0;
}

int App::eventLoop()
{
    while (Window.pollEvent(Event))
    {
        switch (Event.type)
        {
        case sf::Event::Closed:
            quit();
            break;
        case sf::Event::MouseMoved:
            handleMouse();
            break;
        case sf::Event::KeyPressed:
            if (sf::Keyboard::isKeyPressed(sf::Keyboard::Escape)) { quit(); }
            else if (sf::Keyboard::isKeyPressed(sf::Keyboard::F5)) { toggleVsync(); }
            break;
        case sf::Event::MouseButtonPressed:
            if (sf::Mouse::isButtonPressed(sf::Mouse::Button::Left)) { }
            else if (sf::Mouse::isButtonPressed(sf::Mouse::Button::Right)) { }
            break;
        default:
            break;
        }
    }
    //Physics
    //AI
    return 0;
}

int App::renderLoop()
{
    Window.clear();
    Window.display();
    return 0;
}

Answer: Just as it was nicely put by @David in his answer, you will want to split responsibilities into separate classes, if your intention is to build more on top of this. A good thing to keep in mind when designing is the Single Responsibility Principle. Design each class to do just one thing, but do this one thing very well. Code Review: I didn't see an Include Guard in your header file. If you don't have one, you'll definitely have to add it.
App is meant to be inherited from, so it should have a virtual destructor. If you are targeting C++11, you can declare a default virtual destructor, since you are currently not performing any manual cleanup: virtual ~App() = default;

_Uint32t, where is that type coming from? It is not standard, and the _ prefix tells me that it is probably an internal detail of one of your libraries or the compiler. Use the standard sized types from <cstdint> instead.

inline is implicit when you declare and define a class method directly inside the class body. It should be removed to avoid redundancy.

Use booleans instead of return codes. Your methods return 0 on success, which can be quite error prone. A bool with true on success and false on failure is a lot clearer.

I didn't get the point behind the _s suffix on the booleans quit_s and vs_s. Also, vs_s is impossible to guess what it means. Please give it a better name. (Okay, after looking around you figure out that it stands for V-Sync, so just name it as such: vsyncEnabled or similar.)

Explicitly constructing a string from a char *, such as here: : App(sf::VideoMode::getDesktopMode(), std::string("Game of Life"), sf::Style::None) is not necessary, since std::string implicitly constructs from a char array. Doing this is potentially worse, since it might create an extra copy of the string that gets thrown away on the next line. Some compilers will not optimize away this tiny mistake. These little things are a good reminder that we should stop trying to help the compiler ;)
{ "domain": "codereview.stackexchange", "id": 11785, "tags": "c++, sfml" }
How to understand the kernel as a transition amplitude?
Question: Consider the time evolution operator $U(t_f, t_i)$ that controls the evolution of a wave function according to $|\psi(t_f) \rangle = U(t_f, t_i) | \psi(t_i) \rangle$. As I understand it, the Born rule says that we interpret $ \langle x | \psi(t) \rangle$ as the complex probability amplitude of measuring the system $\psi$ to have position $x$ at time $t$. Using the concept of the wave function, we write $\psi(x, t) = \langle x | \psi(t) \rangle$, and the square norm of this wave function gives us a pdf. Thus, it seems reasonable to interpret $ \langle x_f | U(t_f, t_i) | \psi(t_i) \rangle$ as the probability amplitude of measuring the position of $\psi$ to be $x_f$ at time $t_f$. Let us say that $| \psi(t_i) \rangle = | x_i \rangle$. Then it seems reasonable to interpret $ K(x_f, t_f; x_i, t_i) := \langle x_f | U(t_f, t_i) | x_i \rangle$ as the "transition amplitude" of measuring a system to have position $x_f$ at time $t_f$ when we know the system had position $x_i$ at time $t_i$. However, I have only seen the term "transition amplitude" used (in Townsend's "A Modern Approach to Quantum Mechanics", Chapter 8, and these notes, Section 1, http://www.blau.itp.unibe.ch/lecturesPI.pdf) to refer to the notation $\langle x_f, t_f | x_i, t_i \rangle$, where the state $ |x_i, t_i \rangle$ is expressly NOT the time evolution of $ |x_i \rangle$ but rather the eigenfunction of the time-evolved position operator in the Heisenberg picture (thus, $ |x_i, t_i \rangle = \exp\{i \hat{H} t_i\} | x_i \rangle$).
So, my question (finally) is: why does it seem that the term "transition amplitude" is applied to $\langle x_f, t_f | x_i, t_i \rangle$ and NOT (the equivalent) $K(x_f, t_f; x_i, t_i)$, when it seems that $K(x_f, t_f; x_i, t_i)$ has a very clear interpretation as the transition amplitude of a particle starting at position $x_i$ at time $t_i$ to be measured at position $x_f$ at time $t_f$, whereas the interpretation of $\langle x_f, t_f | x_i, t_i \rangle$ seems much less straightforward? Answer: The overlap ${}_H\langle x_f, t_f | x_i, t_i \rangle{}_H$ is indeed the transition amplitude between the two instantaneous position eigenstates $| x_i, t_i \rangle{}_H$ and $| x_f, t_f \rangle{}_H$ in the Heisenberg picture. It is also equal to ${}_S\langle x_f | U(t_f, t_i) | x_i \rangle{}_S$ in the Schrödinger picture. In both pictures, it is often called a transition amplitude, see e.g. Section 2.2 & Section 2.5 in Sakurai. For a connection between the kernel and the Green's function, see e.g. this Phys.SE post.
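For completeness, here is the one-line check (my own addition, with $\hbar = 1$ and a time-independent Hamiltonian, so that $U(t_f, t_i) = e^{-i \hat{H} (t_f - t_i)}$) that the two notations agree: $$ \langle x_f, t_f | x_i, t_i \rangle = \left( e^{i \hat{H} t_f} | x_f \rangle \right)^\dagger e^{i \hat{H} t_i} | x_i \rangle = \langle x_f | e^{-i \hat{H} t_f} e^{i \hat{H} t_i} | x_i \rangle = \langle x_f | U(t_f, t_i) | x_i \rangle = K(x_f, t_f; x_i, t_i). $$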
{ "domain": "physics.stackexchange", "id": 57367, "tags": "quantum-mechanics, terminology, time-evolution, quantum-states, born-rule" }
ROS on a custom robot
Question: I've been working on a robot project using a Rover 5 base and mecanum wheels. I really want to get started controlling it with ROS. Right now I have it running on an Arduino Mega, where I send it commands over Bluetooth from my computer, controlled by an Xbox controller. My question is: where can I find good tutorials to get started with ROS? Yes, I am familiar with the tutorials on the ROS wiki and found the silent ROS tutorials on YouTube. Is there anyone out there who can point me at an awesome ROS class online? I've watched all of ROSCon 2012 and found Bill Smart's class, Mobile Robotics, but can't find any of his lectures. Is there anyone who can point me in the right direction getting started with ROS, or feels like taking a beginner under their wing to answer all of my silly questions? I'd love to talk on Skype with someone that's knowledgeable about it. I'm jrombousky on skype, please feel free to message me about anything with ROS. Thanks for your help! Originally posted by atticdweller on ROS Answers with karma: 1 on 2013-01-01 Post score: 0

Answer: None of the lectures for the class are online. In the classes, we didn't really cover ROS all that much. The vast majority of the time was spent on core robotics stuff, like kinematics, localization, mapping, and the like. I usually mentioned the associated ROS packages, but not in any depth. Mostly, what you see on the web is what the students got: they did the ROS tutorials, and then started to write some simple code to move a robot about in Stage, and then ported it to a real platform (Turtlebots, in the most recent iteration of the class). If you're looking for more ROS-specific stuff for your own robot, I'd recommend Patrick Goebel's book, or (as Tully suggested in a comment to another answer) the Homebrew Robotics Club. Originally posted by Bill Smart with karma: 1263 on 2013-01-03 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 12243, "tags": "ros, arduino, bluetooth" }
C++ lock wrapper class around pointer
Question: I have created a class which wraps pointers and behaves like a pointer would behave, minus a few cases. It is a lock wrapper class which simply locks before usage of the pointer and unlocks once the pointer usage is done. It must behave exactly like a pointer EXCEPT in the following cases:

It cannot be implicitly converted to the pointer type it is pointing to.
It does not handle any implicit memory allocation/deallocation (it provides a method to delete the pointer to the object).
When compared to another wrapper object, it compares their pointers to see if they are the same.

Here is what I have so far:

template <class PtrType, class LockType>
class ELockWrapper
{
public:
    class Proxy
    {
    public:
        Proxy(PtrType* p, LockType* lock) : ptr(p), mLock(lock) { mLock->Lock(); }
        ~Proxy() { mLock->Unlock(); }

        PtrType* operator->() { return ptr; }
        PtrType operator*() { return *ptr; }

    private:
        PtrType* ptr;
        LockType* mLock;
    };

    ELockWrapper() : ptr(nullptr), lock(nullptr) {}
    ELockWrapper(nullptr_t t) : ELockWrapper() {}
    ELockWrapper(PtrType *p, LockType* l) : ptr(p), lock(l) {}
    ELockWrapper(PtrType *p, LockType& l) : ptr(p), lock(&l) {}
    ELockWrapper(const ELockWrapper& copy) = default;
    ELockWrapper& operator=(const ELockWrapper& x) = default;

    bool operator==(const ELockWrapper& cmp) { return cmp.ptr == ptr; }
    bool operator!=(const ELockWrapper& cmp) { return !operator==(cmp); }
    bool operator==(PtrType* t) { return ptr == t; }
    bool operator!=(PtrType* t) { return ptr != t; }
    bool operator==(bool b) { return (ptr && b) || (!ptr && !b); }
    bool operator!=(bool b) { return !operator==(b); }
    operator bool() const { return ptr; }

    Proxy operator->() { return Proxy(ptr, lock); }
    PtrType operator*() { return *Proxy(ptr, lock); }

    void Delete() { Proxy(ptr, lock); delete ptr; }

private:
    PtrType* ptr;
    LockType* lock;
};

Here it is in action with some test cases. Any mistakes/suggestions would be much appreciated.
One quick thing I want to ask: if ANY of the methods on ELockWrapper can be called concurrently, should I wrap each overloaded boolean operator with a lock? I'm thinking the Delete method might be called while a thread is interrupted in one of the operators, which could be problematic. Just a confirmation if this is the right thing to do?

Answer: operator== and operator!= should be const.

You need to #include <cstddef> to use nullptr_t.

You are defaulting the copy constructor and assignment operator, but not the move constructor and move assignment operator. That's fine because the defaults are generated anyway, but it is inconsistent, I think.

What is the point of operator==(bool)?

Proxy should be private. You don't want to have users create Proxys.

ELockWrapper does not actually own the pointer it holds. It is copyable and will just copy the pointer it holds. It also does not create anything or delete anything in the constructor/destructor. Nonetheless you provide a Delete member. That seems very wrong to me. You don't know how the memory you point to was acquired; it could be with static storage or an array. In both cases calling Delete would cause undefined behavior. The object creating the pointer with new should also be responsible for deleting it. Calling Delete on two copies of ELockWrapper also causes undefined behavior.

The fact that the pointer is not owned seems to be a major flaw here anyway. Currently the user of your class needs to guarantee that the object pointed to stays in scope until all ELockWrappers holding it are destroyed, and then has to take care of proper deletion. Using it outside of ELockWrapper is not allowed. So why not move the responsibility of managing the pointer to ELockWrapper? The same holds true for the lock. Currently the user needs to provide a suitable lock and keep it alive until no ELockWrapper references it anymore. Why not create the lock/mutex inside the constructor that takes the pointer?
Though I realize that if you do it like this, you need to manage references to the lock, and you end up fully reimplementing std::shared_ptr. Therefore I will just assume that this wrapper is explicitly only about locking, not about managing resource lifetime, and that the user is required to run the destructors properly.

operator-> is fine if copy elision is performed (not required in C++11, but required in C++17), but operator* is not either way. The Proxy object you create in operator* is destroyed before the function returns. That means that Unlock is called before the caller's expression using the pointer ends. operator* currently returns a copy. It should return a reference. At least that is the usual way this operator is interpreted.

In general the approach with operator* and operator-> has some limitations. You may not call either twice in one expression, or you get a deadlock. You are also not allowed to save the pointer returned by operator*, or a pointer/reference to the object referred to, because these are no longer guarded by the lock.

I guess it would be much better to let the caller call an acquire() method which returns a Proxy. Access to the held pointer is then only allowed via operator* and operator-> of this Proxy. As soon as the Proxy goes out of scope, its destructor releases the lock, similar to std::lock_guard. This still allows the user to misuse a saved reference/pointer to the raw object, but at least more than one use can be made within the same lock acquisition.
An implementation via the C++11 standard library would be similar to this (not tested):

template<typename T, typename Mutex>
class ELockWrapper {
    T* t;
    Mutex* mut;

    class Proxy {
        T* t;
        std::unique_lock<Mutex> lock;
        Proxy(ELockWrapper& wr) : t(wr.t), lock(*wr.mut) { }
        friend ELockWrapper;
    public:
        T& operator*() { return *t; }
        T* operator->() { return t; }
    };

public:
    ELockWrapper() : t(nullptr), mut(nullptr) {}
    ELockWrapper(std::nullptr_t) : ELockWrapper() {}
    ELockWrapper(T* t, Mutex* mut) : t(t), mut(mut) {}
    ELockWrapper(T* t, Mutex& mut) : t(t), mut(&mut) {}
    Proxy acquire() { return {*this}; }
};
{ "domain": "codereview.stackexchange", "id": 22197, "tags": "c++, c++11, locking, wrapper, pointers" }
Defining an HTML Template as an Algebraic Type
Question: Wondering if/how you could define a highly nested structure as a dependent type (or an algebraic or parameterized type). Specifically, an HTML template. Not that they work like this (HTML templates don't have variables to plug in), but imagine a template like this:

<template id="MyTemplate">
  <section>
    <header>
      <h1>{title}</h1>
      <h2>{subtitle}</h2>
    </header>
    <div>{content}</div>
    <footer>
      <cite>{author}</cite>
      <time>{year}</time>
    </footer>
  </section>
</template>

This is a template, so it basically acts like a class (or a type). So you would instantiate the type like this:

var node = new MyTemplate({
  title: 'A title',
  subtitle: 'A subtitle',
  content: 'Foo bar ...',
  author: 'foo@example',
  year: 2018
})

And you would get:

<section>
  <header>
    <h1>A title</h1>
    <h2>A subtitle</h2>
  </header>
  <div>Foo bar ...</div>
  <footer>
    <cite>foo@example</cite>
    <time>2018</time>
  </footer>
</section>

The HTML node that is returned is like the type instance. (I'm assuming these are HTMLEntity objects and related DOM objects, not strings.) The way the DOM node instances are generically represented is:

{
  tag: 'section',
  children: [
    { tag: 'header', children: [ ... ] },
    ...
  ]
}

But the template, being a type, is like it is defining multiple nested types at once. That is, this is a type: <h1>{title}</h1> And that is wrapped in this type:

<header>
  <h1>{title}</h1>
  <h2>{subtitle}</h2>
</header>

And that is wrapped in the <section>...</section> type. It's like a type like this:

type Section {
  type Header {
    type H1 { title: String }
    type H2 { subtitle: String }
  },
  type Div { content: String },
  type Footer {
    type H1 { cite: String }
    type H2 { time: Integer }
  }
}

Or perhaps, since we are actually plugging this into the HTMLEntity's textContent property, it would be more like this:

type Section {
  type Header {
    type H1 { title: String where textContent = title }
    type H2 { subtitle: String where textContent = subtitle }
  },
  ...
}

Either way, wondering if you can do anything like that in Haskell, or another type-theory-oriented language like Coq. In Haskell, a (binary) tree is represented as a recursive structure:

data Tree a = Nil | Node a (Tree a) (Tree a)

I don't know much Haskell, so I'm not sure how to represent the above HTML template "type" as a Haskell algebraic type (or if it is possible). But it seems like it could be defined as some form of an algebraic or a parameterized type. My first question is what kind of type the template would be called, and how to model it as a type (in Haskell, or Coq, or some language using a lot of type theory). The second question is whether an extended version of this template, which has looping, would be considered a dependent type (and then how to model it as a dependent type). That might look like this:

<template id="MyTemplateWithIteration">
  <section>
    <header>
      <h1>{title}</h1>
      <h2>{subtitle}</h2>
    </header>
    <ul>
      {each label in labels}
      <li>{label}</li>
      {/each}
    </ul>
    <footer>
      <cite>{author}</cite>
      <time>{year}</time>
    </footer>
  </section>
</template>

The reason I am thinking this could potentially be a dependent type is because dependent types deal with "forall", which seems like what the iteration is doing. I might be misunderstanding that part. To finish up, what I normally just do is create a template object, instead of a type, and then from the template object you create an instance of some other type (the HTMLEntity in this case). But it seems like this could be formalized some more: instead of a template object we could upgrade it to a template type, and then we would just be creating a template instance when instantiating it. Hoping to see how that definition would look for this highly nested structure. Related note: wondering if this (modeling natural-language trees using types) is similar. I'm not sure if they are modeling the tree as a type, or just the nodes as types. Or perhaps grammars are nested types of some sort.
Answer: You might be looking for CDuce programming language.
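As an illustrative sketch only (my own names, in Python rather than Haskell or CDuce): the nested template-as-type idea from the question can be approximated with nested record types, where constructing a value of the type *is* instantiating the template:

```python
from dataclasses import dataclass

# Hypothetical nested record types mirroring the question's <template>:
# each type fixes its tag structure, and the leaf fields are the template holes.
@dataclass(frozen=True)
class Header:
    title: str      # fills <h1>
    subtitle: str   # fills <h2>

@dataclass(frozen=True)
class Footer:
    author: str     # fills <cite>
    year: int       # fills <time>

@dataclass(frozen=True)
class Section:
    header: Header
    content: str    # fills <div>
    footer: Footer

    def render(self) -> str:
        # Instantiating the type is instantiating the template.
        return (
            "<section><header>"
            f"<h1>{self.header.title}</h1><h2>{self.header.subtitle}</h2>"
            "</header>"
            f"<div>{self.content}</div>"
            "<footer>"
            f"<cite>{self.footer.author}</cite><time>{self.footer.year}</time>"
            "</footer></section>"
        )

node = Section(Header("A title", "A subtitle"), "Foo bar ...",
               Footer("foo@example", 2018))
print(node.render())
```

This only captures the product-type ("record of records") part of the question; the iteration variant would correspond to a list-typed field rather than a dependent type.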
{ "domain": "cs.stackexchange", "id": 11754, "tags": "type-theory, functional-programming, dependent-types" }
Why don't individuals with sickle cell trait suffer from sickle cell anemia?
Question: Ostensibly, in people with sickle cell trait, half of the hemoglobin in their body would be defective. Is it actually the case that sickle cell hemoglobin is produced in equal amounts? If not, how is that regulated? If so, how does that still not cause anemia? Answer: This could be a very long answer, but I'll try to keep it brief. The first thing to understand is what causes the anaemia. I'm going to refer to defective haemoglobin found in a sickle cell patient as HbS and normal haemoglobin as HbA. Under certain circumstances (low O2 concentrations) the HbS protein is prone to aggregating into long filaments. Here is an image of a lysed erythrocyte releasing some of these filaments. The presence of these filaments in the erythrocytes causes them to become misshapen (sickle-shaped) and this in turn causes them to get filtered out for destruction in the spleen which is continuously monitoring for defective erythrocytes (which could cause blockages in the circulation). This causes the anaemia (specifically a haemolytic anaemia, due to erythrocyte destruction, and nothing to do with a lack of iron). The appearance of the ability to form filaments is due to a single Glu>Val mutation in the two β subunits of the haemoglobin tetramer. The new amino acid can fit into a pre-existing hydrophobic pocket on the surface of another subunit and so the molecules can daisy-chain to form the filaments. The normal HbA can also bind on to a growing filament (it has the binding pocket) but that terminates the filament (because HbA doesn't have the Val residue). In this way people with sickle cell trait are protected from the anaemia by their normal haemoglobin.
{ "domain": "biology.stackexchange", "id": 1132, "tags": "hematology" }
Dynamic Array in java
Question: I am implementing a class to mimic dynamic array as a part of my learning process of data structures. I have written the following class which works as a dynamic array data structure. I have gone through some video tutorials and also read some code from Java's ArrayList implementation to understand how to code it in a simple manner. I would like to get the code reviewed from the following perspective: Is there any design pattern that I should be following Improvements like removing compile time warning related to generics. Any tips on including test cases that can expose hidden issues in the code Any other tips that will make it a better code like improve its performance or make it more modular. import java.util.Arrays; import java.util.Iterator; public class DynamicArray<T> implements Iterable<T> { private Object[] elements; private int length = 0; private int capacity = 0; public DynamicArray() { this(10); } public DynamicArray(int capacity) { this.capacity = capacity; this.length = 0; this.elements = new Object[capacity]; } public int size() { return length; } public boolean isEmpty() { return size() > 0; } public T get(int index) { validateIndexValue(index); return (T)elements[index]; } public void set(int index, T element) { validateIndexValue(index); elements[index] = element; } public void clear() { for(int i = 0; i < length; i++) { elements[i] = null; } length = 0; } public boolean add(T element) { ensureCapacity(length + 1); elements[length++] = element; return true; } public T removeAt(int index) { validateIndexValue(index); T element = (T) elements[index]; removeElement(index); return element; } public boolean remove(T element) { if(element == null) { for(int i = 0; i < length; i++) { if(elements[i] == null) { removeElement(i); return true; } } } else { for(int i = 0; i < length; i++) { if(element.equals(elements[i])) { removeElement(i); return false; } } } return false; } public int indexOf(T element) { if(element == null) return -1; for(int i 
= 0; i < length; i++) { if(element.equals(elements[i])) { return i; } } return -1; } public boolean contains(T element) { return indexOf(element) > -1; } public java.util.Iterator<T> iterator() { return new java.util.Iterator<T>() { int index = 0; public boolean hasNext() { return index < length; } public T next() { return (T) elements[index++]; } }; } private void removeElement(int index) { int elementsToMove = length - index - 1; if(elementsToMove > 0) { System.arraycopy(elements, index + 1, elements, index, elementsToMove); } elements[--length] = null; } private void ensureCapacity(int minSize) { if(minSize >= capacity) { capacity *= 2; elements = Arrays.copyOf(elements, capacity); } } private void validateIndexValue(int index) { if(index >= length || index < 0) { throw new IllegalArgumentException("Bad index value passed : " + index); } } } ``` Answer: return size() > 0; Typo, should be (less than or) equal zero. for(int i = 0; i < length; i++) { elements[i] = null; } length = 0; It might be interesting to actually throw the array away and get a clean one in place, for example if this list was previously filled with 100 million entries. public boolean add(T element) { ensureCapacity(length + 1); elements[length++] = element; return true; } Given that this can never return false, it might be interesting to return a more meaningful value instead, for example the current instance. That would allow a fluent use when adding multiple values: dynamicArray .add(someObject) .add(anotherObject) .add(yetAnotherObject); Your remove accepts null, indexOf does not. Both should accept null, then you can also use indexOf in remove. public boolean remove(T element) { int index = indexOf(element); if (index >= 0) { removeElement(index); return true; } else { return false; } } private void ensureCapacity(int minSize) { if(minSize >= capacity) { capacity *= 2; elements = Arrays.copyOf(elements, capacity); } } Depending on the use-case, this might be overkill. 
Maybe a flatter, more conservative growth curve might be appropriate. private void validateIndexValue(int index) { if(index >= length || index < 0) { throw new IllegalArgumentException("Bad index value passed : " + index); } } Split it into two exceptions, one for negative values and one for too large values, to make it easier to debug. You stuck to the conventions already established in the Java world, that's great, and so is your naming of things for the most part. I'd rename validateIndexValue to verifyIndex or assertIndex, though "assert" might yield the wrong message. As far as I can see, you can infer capacity from the array size. Overall, this looks quite good. Javadoc is missing; you should write one, it's a good exercise and will make you recheck your implementation. Let's talk API design. public class DynamicArray<T> implements Iterable<T> { private Object[] elements; private int length = 0; private int capacity = 0; public DynamicArray(); public DynamicArray(int capacity); public int size(); public boolean isEmpty(); public T get(int index); public void set(int index, T element); public void clear(); public boolean add(T element); public T removeAt(int index); public boolean remove(T element); public int indexOf(T element); public boolean contains(T element); public java.util.Iterator<T> iterator(); private void removeElement(int index); private void ensureCapacity(int minSize); private void validateIndexValue(int index); } Your class is only extensible as far as the public methods go. If I wanted to extend this class and implement a swap method, I would have a hard time doing that. Implementing a different size strategy will basically mean rewriting methods. Now, how could we improve that situation? The easiest is to make the internal properties protected. It's not exactly a neat or fancy solution, but if somebody wants to implement additional methods, they will have all the necessary access.
public class DynamicArray<T> implements Iterable<T> { protected Object[] elements; protected int length = 0; protected int capacity = 0; // public SNIP protected void removeElement(int index); protected void ensureCapacity(int minSize); protected void validateIndexValue(int index); } That gives every extending class the same access as you had, meaning that implementing a swap method or a different strategy will be easy to do. But many people will now start ranting about how that is bad design (I disagree on that matter, but let's leave it at that). So we arrive at the question: Can we give an extending class enough access to implement additional functionality, but also protect base functionality? For example, right now elements could be null after any call. We can: we can guard our elements array behind some functions that ensure its well-being: public class DynamicArray<T> implements Iterable<T> { private Object[] elements; private int length = 0; protected final Object[] getElements() { return elements; } protected final int getLength() { return length; } protected final void setElements(Object[] elements, int length) { requireNotNull(elements); requireZeroOrPositive(length); requireLessThan(length, elements.length); this.elements = elements; this.length = length; } protected final void setLength(int length) { requireZeroOrPositive(length); requireLessThan(length, elements.length); this.length = length; } // SNIP methods as above } Neatish, I'd say. This allows us to make fair assumptions about the internal state. It also gives the extending class any freedom to access and modify the array, without the possibility of nulling the reference. Note that modifying the array directly is still possible, and I'd argue that it should stay that way, because the extending class must be able to modify it. Forcing the extending class to retrieve a copy of the array, modify that copy and set it back will do nothing but waste memory and processing power.
With that in place, a swapping method in an extending class can look like this: public void swap(int firstIndex, int secondIndex) { validateIndexValue(firstIndex); validateIndexValue(secondIndex); Object[] elements = getElements(); Object firstElement = elements[firstIndex]; elements[firstIndex] = elements[secondIndex]; elements[secondIndex] = firstElement; } Which is about as simple as it can get. Of course, the internal state can still be "corrupted" to a certain extent, by setting a "wrong" length value, but I'd argue that that is the problem of the extending class. After all, the extending class could also override add to perform a division by zero. We can't and shouldn't guard against everything, otherwise you'll end up with a too restrictive API/class design which will make it impossible to extend your classes. In the real world, that means people will either throw out your class, or copy and modify its code.
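As a cross-check of the behaviors discussed (doubling growth, remove expressed through an index lookup), here is a minimal Python sketch of the same structure; the names and details are illustrative, not the reviewed Java:

```python
class DynamicArray:
    """Minimal sketch: doubling growth, remove via index_of, fluent add."""
    def __init__(self, capacity=10):
        self._elems = [None] * capacity
        self._length = 0

    def __len__(self):
        return self._length

    def add(self, element):
        if self._length >= len(self._elems):
            # double the backing store, as in ensureCapacity
            self._elems.extend([None] * len(self._elems))
        self._elems[self._length] = element
        self._length += 1
        return self  # fluent style, as suggested in the review

    def index_of(self, element):
        for i in range(self._length):
            if self._elems[i] == element:
                return i
        return -1

    def remove(self, element):
        # remove expressed through index_of, as the review recommends
        i = self.index_of(element)
        if i < 0:
            return False
        # shift the tail left and null out the last slot
        self._elems[i:self._length] = self._elems[i + 1:self._length] + [None]
        self._length -= 1
        return True

arr = DynamicArray(2).add("a").add("b").add("c")   # forces one doubling
```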
{ "domain": "codereview.stackexchange", "id": 40758, "tags": "java, performance, array" }
Representation of time evolution operator and spectral decomposition
Question: I was preparing for my quantum mechanics exam, and I came to think about this question regarding the spectral representation of time evolution operator. Let's say we are given the Hamiltonian: $\hat{H} = S_{-} S_{+} $. What will be the time evolution operator in the spectral representation? Here is my shot at it. What worries me is that, shouldn't this Hamiltonian have two eigenstates? I am only getting one. Anyways, here is my approach: The raising and lowering spin operators can be written, in terms of the z-basis, as: $S_{+} = \hbar |+ z \rangle \langle -z|$, $S_{-} = \hbar |- z \rangle \langle +z | $. Then, the Hamiltonian becomes: $\hat{H} = \hbar^2 |- z \rangle \langle +z |+ z \rangle \langle -z| = \hbar^2 |-z\rangle \langle -z|. $ Now, the only eigenvector of this equation is $|-z\rangle$, with the corresponding eigenvalue, $\hbar^2$. So, we can write the spectral representation of the time evolution operator, given by: $e^{i\hat{H}t/\hbar} = e^{i\hbar t} |-z\rangle \langle -z|$. Is this it? Or am I missing something here? Any help would be greatly appreciated! Answer: $\newcommand{\bra}[1] {\left< #1 \right|} \newcommand{\ket}[1] {\left| #1 \right>} \newcommand{\bracket}[2] {\left< #1 \vert #2 \right>} $ Your calculation of the Hamiltonian $$\hat{H} = \hbar^2 \ket{-z}\bra{-z}$$ is correct so far. One eigenvector is $\ket{-z}$, with eigenvalue $\hbar^2$. But you missed (as Dani already wrote in his comment): another eigenvector is $\ket{+z}$, with eigenvalue $0$. So you could also write (in a rather pedantic way): $$\hat{H} = 0\ket{+z}\bra{+z} + \hbar^2 \ket{-z}\bra{-z}$$ Then it is more obvious, that the time evolution operator becomes: $$e^{i\hat{H}t/\hbar} = 1 \ket{+z}\bra{+z} + e^{i\hbar t}\ket{-z}\bra{-z}.$$ Yet another equivalent approach is by using the matrix representation (with $\ket{+z}$ and $\ket{-z}$ as base vectors) and calculate the matrix exponential of a diagonal matrix. 
We have $$\hat{H}=\begin{pmatrix}0 & 0 \\ 0 & \hbar^2\end{pmatrix}.$$ Then we get $$e^{i\hat{H}t/\hbar} = \exp\left[i\begin{pmatrix}0 & 0 \\ 0 & \hbar^2\end{pmatrix}t/\hbar\right] = \exp \begin{pmatrix}0 & 0 \\ 0 & i\hbar t\end{pmatrix} = \begin{pmatrix}1 & 0 \\ 0 & e^{i\hbar t}\end{pmatrix} $$
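As a numeric sanity check (my own sketch, not part of the answer), NumPy can rebuild $e^{i\hat{H}t/\hbar}$ from the spectral decomposition in units where $\hbar = 1$, so that $\hat{H} = \mathrm{diag}(0, 1)$ in the $\{|+z\rangle, |-z\rangle\}$ basis:

```python
import numpy as np

# H = hbar^2 |-z><-z| becomes diag(0, 1) in units where hbar = 1.
t = 0.7
H = np.diag([0.0, 1.0])
vals, vecs = np.linalg.eigh(H)

# exp(iHt) built from the spectral decomposition: sum_k e^{i E_k t} |k><k|
U = sum(np.exp(1j * E * t) * np.outer(v, v.conj())
        for E, v in zip(vals, vecs.T))

# Should equal diag(1, e^{it}), the matrix in the answer
assert np.allclose(U, np.diag([1.0, np.exp(1j * t)]))
```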
{ "domain": "physics.stackexchange", "id": 61936, "tags": "quantum-mechanics, hilbert-space, operators, hamiltonian, time-evolution" }
Solving the game Set in F#
Question: I was playing the game Set online, and thought it would be a good exercise to write an F# script to find all of the sets for me. The rules of set are as follows: A SET is 3 cards for which each feature is either common across all 3 cards or is different on each card. The object is to find all the SETs among the cards. I started learning F# four days ago, so what I'm looking for are obvious/easy/"no duh" improvements. I've started at a low bar of just writing a function to compare a group of three cards, and tell you if those three cards form a set. You can see my first attempt below. It feels a bit clunky, like I'm not really making use of all that F# has to offer. // Create a record type to represent each card type Card = { Color: string Shape: string Pattern: string Number: int } // Create some functions for comparing elements of a collection let allEqual l = l |> Seq.pairwise |> Seq.forall (fun (x, y) -> x = y) let allDifferent l = (l |> Seq.distinct |> Seq.length) = (Seq.length l) // Combine them let isSet l = (allEqual l) || (allDifferent l) // The meat of this script let compare card1 card2 card3 = // Put the cards together for easy iteration let cardList = [card1; card2; card3] // Check each field of the record to see if it makes a set let colorSet = cardList |> List.map (fun c -> c.Color ) |> isSet let shapeSet = cardList |> List.map (fun c -> c.Shape ) |> isSet let patternSet = cardList |> List.map (fun c -> c.Pattern) |> isSet let numsSet = cardList |> List.map (fun c -> c.Number ) |> isSet // Check that they all make a set colorSet && shapeSet && patternSet && numsSet // Test the functions out let c1 = {Color = "blue"; Shape = "oval"; Pattern = "stripes"; Number = 3} let c2 = {Color = "red"; Shape = "diamond"; Pattern = "solid"; Number = 2} let c3 = {Color = "green"; Shape = "squiggle"; Pattern = "empty"; Number = 2} let areCardsSet = compare c1 c2 c3 printfn "The cards c1, c2, and c3 are a set? 
%b" areCardsSet Answer: Well, you're off to a good start, I think. Only one major thing comes to mind when I read your code. Tuples Instead of operating on seqs or similar, use a tuple. The size of a tuple is fixed and known at compile time. In this game, you know you'll always consider three cards at a time, so it fits the bill. This will also help simplify your allEqual and allDifferent functions. The isSet function still works, as the parameter l is now a tuple instead of a seq. let allEqual (c1, c2, c3) = c1 = c2 && c2 = c3 let allDifferent (c1, c2, c3) = c1 <> c2 && c2 <> c3 && c1 <> c3 This affects the shape of your compare function as well. The line let colorSet = cardList |> List.map (fun c -> c.Color ) |> isSet now becomes like let colorSet = isSet (card1.Color, card2.Color, card3.Color) and similar for shapeSet, patternSet and numsSet. Now the cardList variable may be removed, and the compare function is short and to the point. Abstracting the isSet function Suppose you think the way the colorSet, shapeSet, etc. variables are defined looks too clunky. You could abstract away the commonalities by defining a function for getting the Color property and so on. These could then be supplied as arguments to the isSet, allEqual and allDifferent functions. Like this: let color c = c.Color // ... let allEqual feature (c1, c2, c3) = feature(c1) = feature(c2) && feature(c2) = feature(c3) // ... let cards = (card1, card2, card3) let colorSet = isSet color cards // ... I've only shown a couple of the required functions above, but I hope you get the point. In a small example like this, this might be overkill, but had you had many more features, I would probably go that way. Discriminated unions One last point is that it would be obvious to use discriminated unions instead of string and int. I.e. you could define types for each feature, and use them in the Card definition type Color = Green | Blue | Red type Number = One | Two | Three // ...
type Card = { Color: Color; Number: Number; ... } This will help readability of your code. Furthermore, if this should interact with the outside world, where input comes in the shape of strings, it pushes parsing and validation to the boundary layer.
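The tuple-based isSet logic above translates almost directly to other languages. Here is a sketch in Python (my own naming), using the observation that a feature triple is valid exactly when its set of values has size 1 (all equal) or 3 (all different):

```python
from itertools import combinations

# Each card is a (color, shape, pattern, number) tuple.
def is_set(c1, c2, c3):
    # A feature is valid when its three values are all equal or all different,
    # i.e. the set of values never has exactly 2 members.
    return all(len({a, b, c}) != 2 for a, b, c in zip(c1, c2, c3))

cards = [
    ("blue", "oval", "stripes", 3),
    ("red", "diamond", "solid", 2),
    ("green", "squiggle", "empty", 1),
]
# Find every valid SET among a hand of cards.
sets = [trio for trio in combinations(cards, 3) if is_set(*trio)]
```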
{ "domain": "codereview.stackexchange", "id": 37160, "tags": "f#" }
Degrees of freedom of wrist
Question: Why does a wrist have 3 degrees of freedom? The image below illustrates them but pronation can be done even without the wrist present so I don't understand why it is considered to be a degree of freedom of the wrist. During pronation the whole wrist itself is moving. Answer: Why does a wrist have 3 degrees of freedom? Is the premise of the question correct? Consider a simple online search for wrist "two degrees of freedom": "The articular complex of the wrist has basically two degrees of freedom. When these are compounded with pronation and supination, i.e. rotation of the forearm around its long axis, the hand can be oriented at any angle to grasp or hold an object." Kapandji, The Physiology of the Joints, vol. 1, p. 130, full text. "The wrist movements are limited to two degrees of freedom: flexion–extension and ulnar–radial deviation. The apparent axial rotation of the palm—called pronation and supination—occurs at the proximal and DRUJs, with the hand moving with the radius, not separately from it." (Source.) "The wrist has two degrees of freedom, that is, flexion and extension and ulnar and radial deviation." (Source.) "...the carpus essentially moves with only two degrees of freedom-pitch (flexion/extension) and yaw (radial deviation/ulnar deviation)-while largely preventing roll (pronation/supination)." (Source.) "The wrist has two degrees of freedom: pitch (flexion/extension) and yaw (radial and ulnar deviation). All other movements are resisted, including rotation (prono-supination)" (Source.) "The wrist has two degrees of freedom which must be measured. In addition the wrist has several small motions which make it difficult to transduce the major wrist motions." (Source.) "No significant torsion occurs." (Source.) So, no, I don't buy the premise of the question. This is a subjective matter of biomedical/biomechanics terminology/convention. 
Some authors model the wrist joint as having only two primary degrees of freedom as a joint, with any pronation at all subsumed into rotation around the longitudinal axis of the forearm. Others broadly include pronation motion of the entire wrist as a degree of freedom of the lower arm associated with wrist revolution, or include pronation measured under certain test conditions.* *Example: "Our electrogoniometer was designed with the capability of measuring rotation as well as flexion/extension and radioulnar deviation. However, it was not felt that the goniometer was accurate for rotation unless it was affixed directly to the bone with Steinmann pins. The one subject who was tested with the goniometer affixed to the bone revealed that for each task performed, wrist rotation ranged from 2.2° to 11.8° with an average of 6.9°. We feel that this confirms that the wrist joint has not two but three degrees of freedom."
{ "domain": "physics.stackexchange", "id": 93786, "tags": "newtonian-mechanics" }
Good text on algorithm complexity
Question: Where should I look for a good introductory text in algorithm complexity? So far, I have had an Algorithms class, and several language classes, but nothing with a theoretical backbone. I get the general idea of complexity, but sometimes it's hard for me to differentiate between O(1) and O(n); plus there's the whole theta notation and all that, a basic explanation of P=NP, simple algorithms, tractability. I want a text that covers all that, and that doesn't require a heavy mathematical background, or something that can be read through. LE: I'm still in high school, not in university, and by heavy mathematical background I mean something perhaps not very high above Calculus and Linear Algebra (it's not that I can't understand it, it's the fact that for example learning Taylor series without having done Calculus I is a bit of a stretch; that's what I meant by not mathematically heavy. Something in which the math, with a normal amount of effort, can be understood). And, do pardon if I'm wrong, but theoretically speaking, a class in which they teach algorithm design methods and actual algorithms should be called an "Algorithms" class, don't you think? In terms of my current understanding, infinite series, limits and integrals I know (most of the complexity books I've glanced at seemed to use those concepts), but you've lost me at the Fast Fourier Transform. Answer: It is my very personal opinion that the book of Jon Kleinberg and Éva Tardos is the best book for studying the design and analysis of efficient algorithms. It might not be as comprehensive as Cormen et al. but it is a great textbook.
Let me point out why I think this book might suit your interests best: you don't need heavy math machinery for the proofs; the book often gives a great intuition for why something is working (or not), which is in my opinion very important for beginners and self-learners; it takes a very intuitive approach to NP-completeness; it has a great chapter on how to deal with NP-complete problems in practice; and it focuses on design patterns, which might help you design your own clever algorithms. You should also notice that there is a lot of free material available on the WWW. Great lecture notes are provided by Jeff Erickson. And you can even watch the whole MIT class on "Introduction to Algorithms" taught by Charles Leiserson and Erik D. Demaine. Cool stuff!
{ "domain": "cs.stackexchange", "id": 427, "tags": "complexity-theory, reference-request, algorithm-analysis, education, books" }
Which kind of Kinect should I buy?
Question: Hi, I want to buy a Kinect to control the NAO robot. There are two kinds of Kinect I can buy: 1) Kinect for Xbox 360, 2) Kinect for Windows. I want to use OpenNI and the Kinect to develop an application to control the NAO. Does anyone know which kind of Kinect I should buy? Originally posted by doudoushuixiu on ROS Answers with karma: 31 on 2013-11-05 Post score: 0 Answer: I'd say go for the XBOX360 Kinect, since it's widely used in the ROS community and some people report problems getting the Kinect for Windows to work with ROS. PS: IMO the ASUS Xtion Pro Live is an even better choice, since it only requires a USB connection, is smaller and lighter, and has the same performance as the Kinect. Unfortunately, it is usually a bit more expensive. Originally posted by bit-pirate with karma: 2062 on 2013-11-07 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by doudoushuixiu on 2013-11-08: Is the problem (Kinect for Windows) serious? Comment by bit-pirate on 2013-11-08: I included a link to the Q&A on how to get the Kinect for Windows to work with ROS in my answer. Seems like a bit of work to get it up and running.
{ "domain": "robotics.stackexchange", "id": 16067, "tags": "ros, kinect, nao" }
Are there problems that a classical computer can solve and a quantum computer can never solve?
Question: Apologies if this is a silly question. But I've heard quantum computers can solve problems that classical computers can't. What about the converse, are there any problems that a classical computer can solve and a quantum computer can never solve? Answer: A classical computer only manipulates logic 0 and logic 1, which is the essence of a logic circuit. So anything a classical computer can solve, it solves merely by manipulating logic 0 and logic 1, and a quantum computer can easily mimic this by manipulating only $|0\rangle$ and $|1\rangle$. So I think there does not exist any problem that a classical computer can solve while a quantum computer cannot.
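The mimicry claim can be made concrete with a standard textbook construction (not stated in the answer above): any classical Boolean circuit can be embedded in a reversible, hence unitary, circuit. For example, the Toffoli gate is a permutation of three bits, and with its target bit prepared as 1 it computes NAND, which is universal for classical logic. A sketch on classical bits:

```python
import itertools

# Toffoli: flip the target bit c iff both controls a and b are 1.
# As a permutation of basis states, this is a valid unitary on 3 qubits.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

# With the target prepared in |1>, the output bit is NAND(a, b).
for a, b in itertools.product([0, 1], repeat=2):
    _, _, out = toffoli(a, b, 1)
    assert out == 1 - (a & b)  # NAND(a, b)
```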
{ "domain": "quantumcomputing.stackexchange", "id": 2945, "tags": "classical-computing" }
Is $\epsilon$ always contained in $\Sigma^*$?
Question: Please correct me on any terminology. For some reason I'm a bit confused. $\Sigma = \{\epsilon, 0, 1\}$ This means my alphabet, $\Sigma$, contains three symbols ($\epsilon, 0, 1$). $\Sigma^*$ is the language over $\Sigma$, and it equals $\{\epsilon, 0, 1, 01, 10\}$. My regular expression for $\Sigma^*$: $\epsilon+0+1+(01)+(10)$. First question: Does every $\Sigma^*$ include $\epsilon$? I see some with, and some without. I feel like this is a big difference because your regular expression and DFSA will be different. Second question: At this point, I would have five accepting states in a DFSA? Since the first state is the empty string, is it $\epsilon$? Or is the first state just nothing, which transitions to a second state via $\epsilon$ which contains the empty string? Answer: Normally, $\epsilon$ stands for the empty string: the string with no characters, which would be "" in most programming languages. It's too confusing to have $\epsilon$ be a symbol in the alphabet, so I'm going to rename it and write your alphabet as $\Sigma=\{e,0,1\}$. (The alternative would be to use some other symbol to denote the empty string.) Now, by definition, $\Sigma^*$ is the set of all finite strings that can be written using the characters of $\Sigma$. This always includes the empty string $\epsilon$ and, as long as $\Sigma\neq\emptyset$, it also contains strings of all finite lengths. So the claim in the question that $\Sigma^*=\{e, 0, 1, 01, 10\}$ is incorrect: $\Sigma^*$ is an infinite set. The regular expression for $\Sigma^*$ is $(e+0+1)^*$; the automaton consists of a single state, which is accepting and has transitions to itself for each symbol in $\Sigma$.
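That single-state automaton can be sketched directly (my own code, treating the alphabet's renamed symbol as the letter e, as the answer does): one accepting state with a self-loop on every symbol of $\Sigma$, so every word over $\Sigma$, including the empty string, is accepted.

```python
# One-state DFA for Sigma* over Sigma = {e, 0, 1}.
def accepts_sigma_star(word, alphabet=frozenset("e01")):
    state = 0                      # the only state, which is accepting
    for ch in word:
        if ch not in alphabet:     # symbol outside Sigma: reject
            return False
        state = 0                  # self-loop on every symbol of Sigma
    return True                    # state 0 is accepting; "" is accepted too

assert accepts_sigma_star("")      # the empty string is in Sigma*
assert accepts_sigma_star("e010e")
assert not accepts_sigma_star("2")
```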
{ "domain": "cs.stackexchange", "id": 5026, "tags": "formal-languages, terminology, regular-languages" }
Problem starting rviz with OGRE EXCEPTION(7:InternalErrorException)
Question: Hi, I am new to ROS. As I am trying to start rviz following the tutorial, I receive this error message right after issuing "rosrun rviz rviz". Please help me if you know the solution. Thanks. Tuan Failed to initialize Ogre: OGRE EXCEPTION(7:InternalErrorException): Could not load dynamic library Plugin_CgProgramManager. System Error: Plugin_CgProgramManager.so: cannot open shared object file: No such file or directory in DynLib::load at /tmp/buildd/ros-diamondback-visualization-common-1.4.1/debian/ros-diamondback-visualization-common/opt/ros/diamondback/stacks/visualization_common/ogre/build/ogre_src_v1-7-1/OgreMain/src/OgreDynLib.cpp (line 91) [ERROR] [1311070723.641349333]: Caught exception while loading: OGRE EXCEPTION(7:InternalErrorException): Could not load dynamic library Plugin_CgProgramManager. System Error: Plugin_CgProgramManager.so: cannot open shared object file: No such file or directory in DynLib::load at /tmp/buildd/ros-diamondback-visualization-common-1.4.1/debian/ros-diamondback-visualization-common/opt/ros/diamondback/stacks/visualization_common/ogre/build/ogre_src_v1-7-1/OgreMain/src/OgreDynLib.cpp (line 91) Originally posted by Tuan on ROS Answers with karma: 111 on 2011-07-18 Post score: 0 Answer: Duplicate of: http://answers.ros.org/question/1503/rivzogre-crash-after-update Originally posted by dornhege with karma: 31395 on 2011-07-19 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 6179, "tags": "rviz" }
rotation curve in the innermost part of galaxies
Question: I read in Sofue's articles (1999) that he used a method to investigate the bulge of galaxies to obtain the rotation curve of this part of galaxies. So my question is: how could Vera Rubin et al. obtain the rotation curve in their article (1978), especially the steep increase at the beginning of the curve, since in Sofue's article he seems to say it was rather difficult, at that time, to investigate galactic bulges? Answer: So they are talking about different scales. Rubin is finding the rotation curve on kpc scales. And Sofue is talking about how hard it is to get the rotation curve within the central kiloparsec. In the center you cannot use the same techniques as Rubin did. Rotation curves of galaxies are most often measured by the gas speed. The speed of the stars can be used, but owing to dynamical heating over time, the stellar rotation speed does not indicate the true rotation speed of the galaxy as well as the gas speed. When looking for the speed of something, astronomers will look for the redshift or blueshift of the stars or the gas which they are observing. So, they need to identify a gas, which will be present in all galaxies, which has a well known and easy to observe "line" in the spectrum which will allow a velocity determination by seeing how red- or blueshifted it is compared to that emission line in a laboratory. Two such lines are H$\alpha$ and molecular CO (the one mentioned in that paper you've linked). The H$\alpha$ line is an optical line at 656.28 nm. The CO lines are radio waves at 2.6 and 1.3 mm. It looks like the Rubin+1978 paper took spectra of the H$\alpha$ lines, and from Figure 2, it looks as if they've collected spectra from various radii from the center. If you do this for many different points inside the galaxy at different distances from the nucleus, you can see how fast they are moving relative to each other.
That steep increase at the beginning of the curve usually occurs over roughly the extent of the bulge, a few kiloparsecs (as in this image from Rubin+1978). It looks like they had a resolution of about a kpc, which would not tell you much about the center, except that the rotation curve rises very high, very quickly (you assume the very center is 0 rotation): Sofue+1999 seems to be talking about even more central than that, like the "central few hundred pc region", which will require a new technique. A large limitation, as mentioned in the Sofue+1999 paper, is that the H$\alpha$ emission is often obscured in the very centers (<0.5 kpc) of these galaxies by dust. Galactic dust, in general, extinguishes shorter-wavelength light much more effectively than longer-wavelength light. In this way, looking at the galaxy in longer wavelengths allows you to see through the dust. The H$\alpha$ line is much more easily erased by galactic extinction than the CO line at a longer wavelength. So CO makes it easier to look into the galactic central hundred parsecs than H$\alpha$. You can see this in the galactic extinction curve: The CO line is in the radio band (mm range), a type of astronomy that was just picking up speed in the 1960s and 1970s. Although radio waves had been predicted in the 1800s by Maxwell's equations, scientists were unable to observe any radio signals from the Galaxy until a serendipitous observation of Sagittarius A* by Karl Jansky at Bell Labs in the 1930s. However, radio astronomy comes with a problem of resolution; the resolving power of a telescope is basically: $\Theta = 1.22 \, \frac{\lambda}{D}$ where $\lambda$ is the wavelength of the observation and $D$ is the diameter of the telescope. And, for reference, the distances to two galaxies from that rotation curve above are: NGC4594 = 9.45 Mpc NGC7217 = 15 Mpc At the distances of these galaxies, we need to be looking in the right wavelength to penetrate, and we need enough resolution to get the rotation speed at maybe 10 parsec scales. 
This resolution, at 15 Mpc, is maybe 0.1 arcseconds. $3600 \cdot \arctan\left(\frac{10}{15\times10^{6}}\right) \cdot \frac{180}{\pi} \approx 0.1375 \text{ arcseconds}$ To get that resolution observing at the 2.6 mm you need for the CO line, you need a dish of diameter: $1.22 \cdot 3600 \cdot \frac{180}{\pi} \cdot \frac{2.6\times10^{-3}}{0.1375} \approx 4758 \text{ m}$ A nearly 5 km telescope… Hmmmmm. A bit of a problem considering the unfortunate tale of the 300 foot telescope. Then, computers to the rescue. Interferometry is an idea whereby one can use two telescopes to increase their resolving power. When combining their signals, they have an effective diameter of their separation. Of course, they are not equivalent to a single dish telescope of that diameter, because they collect far less total starlight. The technique was successfully utilized scientifically as early as the 1920s, to resolve the star Betelgeuse. But using this technique in its modern form, combining tens or hundreds of telescopes at once, was impractical before the advent of computers which could handle the calculations in the 1960s and 1970s. And now, the final piece. Interferometry grows much more difficult the smaller your wavelength. Interferometry is only possible if the amplitude and phase of the incoming wave are recorded and recombined precisely. So an optical interferometer needs: more precisely polished mirrors, more precisely measured separations, stable signal transport from the dishes (fiber optics), and more computing power to run more Fourier transforms. While interferometry at radio wavelengths came relatively early, it took all the way until the 1990s to do optical interferometry, and it's still far from routine (see the Large Binocular Telescope). So, to summarize: 1) To get a rotation curve of a galaxy, we need a spectral line that we can get a redshift or blueshift from. H$\alpha$ (optical) and CO (mm) are good choices. 
2) The very centers of galaxies are obscured by dust, which destroys optical signals, but allows mm to pass, so CO is the molecule of choice for rotation curves of the nuclei of these galaxies, as the H$\alpha$ signal is poor in the centers. 3) However, for a mm telescope to be able to spatially resolve small parts of the galaxies at the large distances of 10+ Mpc in this wavelength, we need a telescope a few kilometers wide. 4) We can solve this with interferometry, but only recently has the technology to do interferometry at this wavelength been accomplished. The first interferometer observing at this wavelength, the SubMillimeter Array (SMA; eight 6-meter telescopes), opened to guest astronomers for the first time in 1990. The SMA was still being expanded at the time of the paper you linked. These days, the Atacama Large Millimeter Array (ALMA) is just now nearing completion, and is focusing on the same wavelength range but with far greater resolution and collecting area (66 telescopes of 7-12 meters). They are collecting CO rotation curves to such precision that they claim to be able to estimate the central black hole masses of galaxies to 5%. Stuff: https://en.wikipedia.org/wiki/Astronomical_interferometer https://en.wikipedia.org/wiki/Aperture_synthesis https://ned.ipac.caltech.edu/level5/Fitzpatrick/Figures/figure1.gif https://www.nrao.edu/whatisra/images/300ft-before.gif https://www.nrao.edu/whatisra/images/300ft-after.gif http://www.atnf.csiro.au/outreach//images/wallpapers/atcamorningsml.jpg http://www.almaobservatory.org/
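The two back-of-the-envelope numbers above — the angle subtended by a 10 pc feature at 15 Mpc, and the dish diameter needed to resolve it at 2.6 mm — are easy to check numerically. The following sketch is my own illustration of that arithmetic, not part of the original answer:

```python
import math

ARCSEC_PER_RAD = 3600 * 180 / math.pi  # ~206265 arcseconds per radian

def angular_size_arcsec(feature_pc, distance_pc):
    """Angle subtended by a feature of the given physical size at the given distance."""
    return math.atan(feature_pc / distance_pc) * ARCSEC_PER_RAD

def required_diameter_m(wavelength_m, resolution_arcsec):
    """Dish diameter for diffraction-limited resolution: theta = 1.22 * lambda / D."""
    theta_rad = resolution_arcsec / ARCSEC_PER_RAD
    return 1.22 * wavelength_m / theta_rad

# A 10 pc feature in NGC 7217 at 15 Mpc, observed in the 2.6 mm CO line
theta = angular_size_arcsec(10, 15e6)
diameter = required_diameter_m(2.6e-3, theta)
print(f"resolution needed: {theta:.4f} arcsec -> dish diameter: {diameter / 1000:.1f} km")
```

A single dish several kilometres across is out of the question, which is exactly why the answer turns to interferometry: the array's longest baseline plays the role of D.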
{ "domain": "astronomy.stackexchange", "id": 2221, "tags": "rotation, dark-matter" }
How and how long will the metallic ball move?
Question: We have suspended an electrically neutral metallic ball from an insulated string between two parallel metallic plates that are oppositely charged but have the same magnitude. So, the charge in this system is conserved. We put the metallic ball in contact with one of the plates (any one of them) and then leave it to move freely. How will the ball move, and for how long? Here's what I think would happen: Let's assume that the metal plate on the left has a charge of $(+x)$ $e$ and the one on the right has a charge of $(-x)$ $e$ and that the metallic ball is put in contact with the metallic plate on the right. So, net charge between the ball and the plate = $0+(-x) = 0-x = (-x)$ $e$, so after contact, both of them will have a charge of $\Big (\dfrac {-x}{2} \Big )$ $e$. And then, the ball would move towards the metallic plate on the left with a charge of $\Big (\dfrac {-x}{2} \Big )$ $e$ because the ball being negatively charged and the plate on the left being positively charged will attract each other. Now, the net charge between the ball and the plate on the left will be $\Bigg ((x)+\Big (\dfrac {-x}{2} \Big ) \Bigg )e = \Big (\dfrac {+x}{2} \Big )$ $e$. So, after the contact, both the metallic ball and the plate on the left will have a charge of $\Big (\dfrac {+x}{4} \Big )$ $e$. Now, the ball being positively charged and the metallic plate on the right being negatively charged $\Bigg (\Big (\dfrac {-x}{2} \Big )$ $e$ $\Bigg )$ will attract each other and the same process will follow. So, according to me, the ball would move indefinitely and will keep on acquiring a charge of $\Big ( \pm \dfrac {x}{2^n} \Big )$ $e$, where $n \in \Bbb N$. Let me know if I'm right/wrong, thanks! Answer: On each full swing, the ball is picking up electrons from the negative plate and delivering them to the positive plate. If the charge in the system is conserved, then the charge on each plate is decreasing. 
The charge picked up from the negative plate by the ball will be proportional to the charge remaining on that plate, so the rate of decrease will be a stepped exponential. (In theory never reaching zero.)
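The answer's "stepped exponential" can be seen in a tiny simulation of the model used in the question, where each contact shares the combined charge equally between ball and plate. This sketch is my own addition; the exact-halving rule is an idealization (a real ball and plate would split charge according to their capacitances), but the geometric decay is the same:

```python
def shuttle(x, swings):
    """Simulate the ball shuttling between the plates.

    left/right are the plate charges and ball is the ball's charge; each
    contact replaces both charges involved by their average (equal sharing)."""
    left, right, ball = +x, -x, 0.0
    ball_history = []
    for _ in range(swings):
        ball = right = (ball + right) / 2.0  # touch the negative (right) plate
        ball_history.append(ball)
        ball = left = (ball + left) / 2.0    # swing across, touch the positive plate
        ball_history.append(ball)
    return left, right, ball_history

left, right, history = shuttle(1.0, 20)
print(history[:4])  # [-0.5, 0.25, -0.125, 0.0625] -- the question's +/- x/2^n pattern
print(left, right)  # both plate charges shrink toward zero but never reach it
```

After a few dozen contacts, the remaining charges are vanishingly small, so in this ideal model the swinging slows without ever strictly stopping — the stepped exponential of the answer.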
{ "domain": "physics.stackexchange", "id": 67750, "tags": "electromagnetism, charge" }
Snake Game C++ Improvements?
Question: Is there anything you would suggest to improve this code? In particular, any code to make the snake move more smoothly and make the cursor disappear. #include <iostream.h> #include <conio.h> #include <stdlib.h> #include <dos.h> unsigned long int score, high_score = 0; const int no_rows = 22, no_columns = 77, head_position = 0; char choice, token, token_food, key_press, direction_head, overlay; char board[no_rows][no_columns]; unsigned int i, j, k, snake_length, grow, crash; struct position { int row; int column; } head_map[no_rows * no_columns], food, temp1, temp2; void Initialize() { start: clrscr(); cout << "\t\t\t\tWELCOME TO SNAKE GAME\n\n\n"; cout << "Choose the Body of the Snake(Enter a symbol):\n"; cin >> token; cout << "\nChoose the Food of the Snake(Enter a symbol):\n"; cin >> token_food; if(token_food == token) { clrscr(); cout << "\nDon't use the same symbols for food and body of the snake!!!\n\n\n"; goto start; } cout << "\n\nUP\t-\tW\nDOWN\t-\tS\nRIGHT\t-\tD\nLEFT\t-\tA\n\n\nARE YOU READY(Y / N) ?? 
\n"; cin >> choice; for (i = 0; i < no_rows; i++) { for (j = 0; j < no_columns; j++) board[i][j] = ' '; } head_map[0].row = no_rows / 2; head_map[0].column = no_columns / 2; board[head_map[0].row][head_map[0].column] = token; direction_head = 'R'; snake_length = 1; grow = crash = score = 0; } void Food() { do { food.row = rand() % no_rows; food.column = rand() % no_columns; } while(board[food.row][food.column] == token); board[food.row][food.column] = token_food; } void Display() { clrscr(); for(k = 0; k < (no_columns + 2); k++) cout << "-"; cout << endl; for(i = 0; i < no_rows; i++) { cout << "|"; for(j = 0; j < no_columns; j++) cout << board[i][j]; cout << "|" << endl; } for(k = 0; k < (no_columns + 2); k++) cout << "-"; cout << "\t\t\t\t\t SCORE :" << score; } void Input() { if(kbhit()) { key_press = getch(); if((key_press == 'd' || key_press == 'D') && (key_press != 'a' || key_press != 'A') && (direction_head != 'L')) direction_head = 'R'; if((key_press == 's' || key_press == 'S') && (key_press != 'w' || key_press != 'W') && (direction_head != 'U')) direction_head = 'D'; if((key_press == 'a' || key_press == 'A') && (key_press != 'd' || key_press != 'D') && (direction_head != 'R')) direction_head = 'L'; if((key_press == 'w' || key_press == 'W') && (key_press != 's' || key_press != 'S') && (direction_head != 'D')) direction_head = 'U'; if(key_press == 'p' || key_press == 'P') delay(5000); } temp1 = head_map[head_position]; board[head_map[head_position].row][head_map[head_position].column] = ' '; switch(direction_head) { case 'R': { if(head_map[head_position].column == no_columns) crash = 1; overlay = board[head_map[head_position].row][++head_map[head_position].column]; break; } case 'L': { if(head_map[head_position].column == 0) crash = 1; overlay = board[head_map[head_position].row][--head_map[head_position].column]; break; } case 'D': { if(head_map[head_position].row == no_rows) crash = 1; overlay = 
board[++head_map[head_position].row][head_map[head_position].column]; break; } case 'U': { if(head_map[head_position].row == 0) crash = 1; overlay = board[--head_map[head_position].row][head_map[head_position].column]; break; } default: break; } } void Move() { board[head_map[head_position].row][head_map[head_position].column] = token; for(i = 1; i < snake_length; i++) { temp2 = head_map[i]; head_map[i] = temp1; board[head_map[i].row][head_map[i].column] = token; board[temp2.row][temp2.column] = ' '; temp1 = temp2; } if(grow == 1) { head_map[i] = temp1; board[head_map[i].row][head_map[i].column] = token; ++snake_length; score += 10; Food(); grow = 0; } } void Change() { if(overlay == token) crash = 1; if(overlay == token_food) grow = 1; } void Game_over() { getch(); clrscr(); cout << "\n\n\n\n\n\n\n\n\n\n\t\t\t\tGAME OVER!!!\n\t\t\t\tFINAL SCORE: " << score; if(score > high_score) { high_score = score; cout << "\n\t\t\tYOU BEAT THE HIGH SCORE!!!"; } else { if(score == high_score) cout << "\n\t\t\tYOU TIED THE HIGH SCORE!!!"; else cout << "\n\t\t\tHIGH SCORE: " << high_score; } getch(); clrscr(); cout << "\n\n\n\n\n\n\n\n\n\n\n\t\t\tDO YOU WANT TO CONTINUE(Y/N)??\n\t\t\t\t\t"; cin >> choice; getch(); } int main() { do { Initialize(); Food(); Display(); do { Input(); Change(); if(crash == 1) break; Move(); delay(200); Display(); } while(crash != 1); Game_over(); } while(choice == 'Y' || choice == 'y'); return 0; } Answer: Making the code more modern Your code looks very old. This is because the <iostream.h> header in line 1 is not used anymore since about the year 1998. Same for the <dos.h> and <conio.h> headers. To get your program to compile with a modern C++ compiler, I had to replace the first paragraph of your code with this: #include <iostream> using std::cin; using std::cout; using std::endl; extern "C" { void clrscr(); bool kbhit(); void delay(int); int getch(); } I only did this change to make your code valid for my compiler. 
It won't run after these changes. Therefore you should not apply these changes to your code. Just leave your code as it is. Don't be surprised though when you try to run your code in a more modern environment, as this won't work. Putting text on the screen I noticed that you use long sequences of \n and \t to place the text on the screen. Since you are using the clrscr function, I suppose that the function gotoxy is also defined. If so, you can replace this code: clrscr(); cout << "\n\n\n\n\n\n\n\n\n\n\t\t\t\tGAME OVER!!!\n\t\t\t\tFINAL SCORE: " << score; with this code: clrscr(); gotoxy(32, 10); cout << "GAME OVER!!!"; gotoxy(32, 11); cout << "FINAL SCORE: " << score; This code takes more vertical space than before, but there is no need anymore to count the number of \n characters in the string. Input methods You are using two fundamentally different input methods: cin >> choice, which reads a character, but the program only sees this character after Enter has been pressed. This is not suitable for a snake game. getch(), which reads a single key without requiring the Enter key. This function works closely together with kbhit(). You should not mix these two, at least not in the same phase of the game. There is the dialog phase ("play again?"), which should use cin >> choice, and there is the playing phase, which should use kbhit() and getch() (in this order). Reducing the scope of variables Your code declares (among others) these variables: unsigned int i, j, k; These variables are used later by pieces of code that are completely unrelated to each other. Therefore it doesn't make sense that these unrelated code pieces use the same variables. One of these pieces is: for (i = 0; i < no_rows; i++) { for (j = 0; j < no_columns; j++) board[i][j] = ' '; } After the opening parenthesis of each for loop, you should declare the variable, which is then in scope for the rest of the for loop. 
The changed code is: for (unsigned int i = 0; i < no_rows; i++) { for (unsigned int j = 0; j < no_columns; j++) board[i][j] = ' '; } When you do that in the other for loops as well, there will be a compile error: head_map[i] = temp1; board[head_map[i].row][head_map[i].column] = token; This compile error means that your code is somewhat unusual. You used i in a loop, and usually that variable is not needed after the loop. Not so in this case. When the for loop is finished, i will be the same as snake_length. Therefore you can replace the code with the very similar: head_map[snake_length] = temp1; board[head_map[snake_length].row][head_map[snake_length].column] = token; This makes the intention of the code a bit clearer, since for experienced programmers the variable name i means a variable that changes its value often, such as in your loops that fill the board with spaces. That name i would be misleading here, since the code handles the tail of the snake. The expression head_map[snake_length] expresses this more clearly than the expression head_map[i]. Redundant conditions Further down, you have this code: if((key_press == 'w' || key_press == 'W') && (key_press != 's' || key_press != 'S') && (direction_head != 'D')) This code is redundant. If the pressed key is 'w' or 'W', it cannot be 's' at the same time. Therefore you don't need to check for 's' at all. Furthermore, the expression key_press != 's' || key_press != 'S' will always be true. There are 3 cases: key_press != 's': the first condition is true, therefore the whole expression is true. key_press != 'S': the second condition is true, therefore the whole expression is true. any other key: both conditions are true, therefore the whole expression is true. 
Therefore, the simplified code is: if((key_press == 'w' || key_press == 'W') && (true) && (direction_head != 'D')) This can be further simplified to: if((key_press == 'w' || key_press == 'W') && direction_head != 'D') Pausing the game if(key_press == 'p' || key_press == 'P') delay(5000); This looks wrong. When I press the 'p' key, I expect the game to pause until I explicitly continue it by pressing 'p' again. Waiting 5 seconds is something entirely different. To implement the pause correctly, you should define a global variable: bool paused = false; After that, adjust the code from above: if(kbhit()) { ... if(key_press == 'p' || key_press == 'P') paused = !paused; } if (paused) return; Since the second part of the Input function does not deal with the input at all but instead moves the snake, that part is skipped as long as the game is paused. Final words Your code is structured well, especially since the code of the main function gives a rough overview over the whole game flow, just as it should. You named the functions well, which makes the main function easy to grasp. There are many more things that can be said about your code, but they are not urgent. Getting a few ideas and thinking about them is easier than getting a hundred tips at once.
{ "domain": "codereview.stackexchange", "id": 36571, "tags": "c++, snake-game" }
Pertinence of the wave function of the universe, or complete description of system with massive number of dof
Question: I have heard a couple of times about the concept of the wave function of the universe, an object that would capture every degree of freedom inside it (every particle, me, even you, dear reader, etc...), and it always sounded fallacious, or at least non-pertinent: what would be the point of using that gigantic object to describe our universe? From my first classes of statistical mechanics, I learned that there is no point in trying to monitor $10^{23}$ or more degrees of freedom, and that we need to look for pertinent emergent quantities (pressure, temperature, etc...). Even more, I am now reading an article where the author (there is no point in giving you the title/name) takes the example of a local QFT completely describing the Solar system, so roughly $10^{60}$ degrees of freedom, plus the possibility that an observer could be monitoring all those in real time, which is totally unrealistic. Does this make sense to you? Answer: Yes, it is a logically positivistically meaningless notion, so in an absolute sense it is complete bullshit--- you can't measure the wavefunction of the universe, nor give a sense to the idea that it is A and not B when the overlap of A and B is nonzero. But it is useful bullshit, as a figure of speech, used as a conceptual aid, to get you to understand how the Everett interpretation works, how cosmology can give a complete description, how the dynamics of vacuum selection could be working, and how a nonsymmetric universe can emerge from symmetric initial conditions. The point is that it is an imprecise but good crutch for the intuition, like the idea of an infinitesimal quantity in mathematics, or the idea of negative coupling phi-4 theory in quantum field theory, or a bazillion other things which have precise analogs, but don't need precise analogs to be useful as figures of speech. Here is a question which is clarified by considering the wavefunction of the universe. 
Suppose the laws of physics are rotationally invariant, and the universe started out completely rotationally symmetric. Would we observe a symmetric state? The answer is no, because the wavefunction of the universe would be in a superposition of states which would be symmetric, but no observer's perception would be symmetric. Now to make this precise, you could go to an observer's point of view, and consider the universe relative to this observer, and see that it is not symmetric. Or you could say that you collapsed the wavefunction of the universe by looking around, or whatever you prefer. The end result is talking about sense-impressions, but the philosophical crutch allows you to understand that this does not require a breaking of symmetry in any fundamental law or initial condition. The wavefunction of the universe is also used to give predictions on the likelihood of different states in quantum gravity. Here the point is to make sure that we are not making models whose a-priori probability is too low to be plausible. In this context, the wavefunction of the universe is a useful figure of speech. It is not a mistake to use positivistically unverifiable notions, so long as you always know how to translate this into sense impressions at the end, so that you eliminate the metaphysical looking things.
{ "domain": "physics.stackexchange", "id": 2821, "tags": "quantum-mechanics" }
Converting a number between number systems
Question: I've got the following task: Keeping in mind Horner's Scheme, write an application that converts a given number x in n-number system to m-number system where m, n <= 10. x is a natural number or zero, x <= unsigned(-1) My code is as follows: #include <iostream> int toDecimal (int, int); int fromDecimal (int, int); int convert(int, int, int); int main() { int base, number, desiredBase; std::cin >> number >> base >> desiredBase; std::cout << convert(number, base, desiredBase) << std::endl; } int toDecimal (int base, int number) { if (number / 10 == 0) { return number; } return (number % 10) + (base * toDecimal (base, number / 10)); } int fromDecimal (int base, int number) { if (number / base == 0) { return number; } return (number % base) + (10 * fromDecimal (base, number / base)); } int convert(int number, int base, int desiredBase) { int p = toDecimal(base, number); return fromDecimal(desiredBase, p); } What can I improve? What can be done better? Answer: In general, the code is well presented, easy to read, but, it is hard to spot the 'Horner's Scheme' in the code. When implementing a specific algorithm in code, it is useful to clearly comment where the elements of the algorithm are being used. In this case, the depth-first recursion is processing the most significant digits first, and as a result, it computes the high-order values in the Horner's scheme first. Making that value available to be added to the next value in the system. Note that this makes tail-recursion optimization impossible, but it does simplify the code. So, I had to look up how Horner's scheme would help your code, and I had to figure out how your code is helped by it. This is not work that should be hard. You should make that easy for the person reading the code. 
I would expect something like: // based on Horner's scheme: http://en.wikipedia.org/wiki/Horner%27s_method // The source base of the value can be considered to be Xo in the algorithm, and // the digit value is the coefficient for that base. As for your recursion, you can simplify it a little by recursing one level more, and returning 0 (eliminating a duplicated division on each level). Consider your code: int toDecimal (int base, int number) { if (number / 10 == 0) { return number; } return (number % 10) + (base * toDecimal (base, number / 10)); } and replacing that code with: int toDecimal (int base, int number) { if (number == 0) { return 0; } return (number % 10) + (base * toDecimal (base, number / 10)); } The difference is marginal, trading one division/comparison with a simple comparison and an extra level of recursion. Regardless, I prefer the reduced code duplication, and it makes the recursion termination easier to see. The other item I see missing is validation on the input. I would prefer to see some exceptions thrown if the input is in a base that does not support the supplied digits. 
For example, with the input: 12345 4 10 the digits 4 and 5 are not valid in base 4, so the program should reject the number instead of silently producing a wrong result. Putting this all together, I suggest the following: #include <iostream> #include <stdexcept> int toDecimal (int, int); int fromDecimal (int, int); int convert(int, int, int); int main() { int base, number, desiredBase; std::cin >> number >> base >> desiredBase; try { std::cout << convert(number, base, desiredBase) << std::endl; } catch (const std::invalid_argument& e) { std::cerr << "Unable to convert " << number << " from base " << base << std::endl; return 1; } } int toDecimal (int base, int number) { if (number == 0) { return 0; } int digit = number % 10; if (digit >= base) { throw std::invalid_argument( "received out-of-range digits in the input for the supplied base"); } return digit + (base * toDecimal (base, number / 10)); } int fromDecimal (int base, int number) { if (number == 0) { return 0; } return (number % base) + (10 * fromDecimal (base, number / base)); } int convert(int number, int base, int desiredBase) { int p = toDecimal(base, number); return fromDecimal(desiredBase, p); } In addition to handling the exceptions from invalid input numbers, you should also handle requests to/from invalid bases as well (like negative bases or bases > 10).
{ "domain": "codereview.stackexchange", "id": 12113, "tags": "c++, algorithm, converting" }
Kinect green light keeps flashing. Seems there is no driver
Question: I have installed freenect from sudo apt-get install ros-hydro-freenect-stack and also blacklisted gspca_kinect (I am using Ubuntu 12.04 LTS). But when I try the command roslaunch freenect_launch freenect.launch it says [ INFO] [1380174487.842752473]: No devices connected.... waiting for devices to be connected and when I then try lsusb it shows Bus 002 Device 015: ID 045e:02c2 Microsoft Corp. Bus 002 Device 016: ID 045e:02be Microsoft Corp. Bus 002 Device 017: ID 045e:02bf Microsoft Corp. These are the Kinect's devices, but I could not find anything like a camera or similar. Could you please help me? Since September I have not been able to run the Kinect on Ubuntu at all, and I feel confused. Thank you so much, and I will wait for the reply. Originally posted by Battery on ROS Answers with karma: 25 on 2013-09-25 Post score: 0 Original comments Comment by brianaxelrod on 2013-10-03: I'm having this problem too. Were you able to find a solution? Comment by bit-pirate on 2013-10-03: Does it work using openni_launch? I'm using Ubuntu 12.04 with Hydro and not having any issues with my Kinects. Comment by Battery on 2013-10-04: Do you mean that you can launch the Kinect successfully? What is your version of Kinect? Could you please leave your email so we could discuss directly. Comment by bit-pirate on 2013-11-12: @brianaxelrod please post your comment as an answer in order to be able to mark this question properly as closed. Thx. Answer: It turns out that it was the specific Kinect. Changing the Kinect seemed to fix it. I'll have another chance to use it on Tuesday or Thursday, so I can grab some more information and test it. @Battery, how would I grab the version? Feel free to email me at kinect-problems@mit.edu in the next 48 hours. Originally posted by brianaxelrod with karma: 16 on 2013-10-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Victor Gomez on 2014-02-20: Did you stick with a Kinect for Windows or change to a Kinect for Xbox? 
I'm having the exact same problem with a Kinect for Windows 1517. Comment by Victor Gomez on 2014-02-27: Any update, anyone?
{ "domain": "robotics.stackexchange", "id": 15659, "tags": "ros, kinect, driver, freenect" }
Refactoring duplicated code
Question: How can I remove duplicated code? It's hard for me because each piece of code uses a different service class and model. public class AcrAccessAreaController extends BaseController { @Autowired private AcrAccessAreaService acrAccessAreaService; @RequestMapping("/action") public String action(ModelMap model, AcrAccessAreaModel acrAccessAreaModel) { try { if(acrAccessAreaModel.getTrStatus().equals(TrStatus.DELETE)) { Map cateData = acrAccessAreaService.chkUseYnArea(acrAccessAreaModel); int cnt = Integer.parseInt(((AcrAccessAreaModel)cateData.get("chkInfo")).getCnt()); if(cnt > 0) { super.setResult(model,Result.E04); return "/dummy"; } } Map accData = acrAccessAreaService.action(acrAccessAreaModel); model.put("accAreaInfo",accData.get("accAreaInfo")); super.setResult(model); } catch(Exception e) { super.setResult(model, new ApprovalException(e)); } return "/dummy"; } } public class AcrAssociationController extends BaseController { @Autowired private AcrAssociationService acrAssociationService; @RequestMapping("/action") public String action(ModelMap model, AcrAssociationModel acrAssociationModel) { try { if(acrAssociationModel.getTrStatus().equals(TrStatus.DELETE)) { Map assoData = acrAssociationService.chkUseYn(acrAssociationModel); int cnt = Integer.parseInt(((AcrAssociationModel)assoData.get("chkInfo")).getCnt()); if(cnt > 0) { super.setResult(model,Result.E04); return "/dummy"; } } Map assoData = acrAssociationService.action(acrAssociationModel); model.put("assoInfo",assoData.get("assoInfo")); super.setResult(model); } catch(Exception e) { super.setResult(model, new ApprovalException(e)); } return "/dummy"; } } public class AcrCategoryController extends BaseController { @Autowired private AcrCategoryService acrCategoryService; @RequestMapping("/action") public String action(ModelMap model, AcrCategoryModel acrCategoryModel) { try { if(acrCategoryModel.getTrStatus().equals(TrStatus.DELETE)) { Map cateData = acrCategoryService.chkUseYn(acrCategoryModel); int cnt = 
Integer.parseInt(((AcrCategoryModel)cateData.get("chkInfo")).getCnt()); if(cnt > 0) { super.setResult(model,Result.E04); return "/dummy"; } } Map cateData = acrCategoryService.action(acrCategoryModel); model.put("categoryInfo",cateData.get("categoryInfo")); super.setResult(model); } catch(Exception e) { super.setResult(model, new ApprovalException(e)); } return "/dummy"; } } Answer: I will give you an idea. You need to have a dummy class with the functionality that you want. As you haven't provided much detail, I will make the assumption that all the models are sibling classes. Same assumption for other things. After you have the dummy class you can just sub-class it, give the infoVariable a new value, and override the action method. In the action method, just call super with the appropriate variables. public class DummyController extends BaseController { String infoVariable = "info"; @RequestMapping("/action") public String action(ModelMap model, ParentInterfaceOfModels myModel) { try { if(myModel.getTrStatus().equals(TrStatus.DELETE)) { Map data = myService.chkUseYn(myModel); int cnt = Integer.parseInt(((AcrCategoryModel)data.get("chkInfo")).getCnt()); if(cnt > 0) { super.setResult(model,Result.E04); return "/dummy"; } } Map data = myService.action(myModel); model.put(infoVariable,data.get(infoVariable)); super.setResult(model); } catch(Exception e) { super.setResult(model, new ApprovalException(e)); } return "/dummy"; } } I have not made all changes necessary because many things are not clear. In the typecasting I have left the Model class as it is. You can simply get the class name at runtime to do the casting. A simple example for casting to the String class at runtime is Class.forName("java.lang.String").cast(obj). But how will you get the name of the class as a String in the sub-classes? I will leave that to you to find out.
{ "domain": "codereview.stackexchange", "id": 7922, "tags": "java" }
Calculating the one-direction speed of light
Question: According to Derek Muller from Veritasium, measuring the one-direction speed of light requires synchronised clocks, and that's a problem because moving one of the clocks will slow down its time. In the example from the video, he keeps one of the clocks stationary while moving the other one 1 km. While watching the explanation I thought "why not start both clocks at a given position, say A, and then move them in opposite directions at the same speed, so their time dilation will be the same, and they will end up at points B and C separated by 1 km while still synchronised". Obviously, thousands of people way more intelligent than me have already thought about this problem, so the probable explanation is that, instead of my having single-handedly solved it, my solution has an obvious flaw, but I can't see it. What is the flaw? Answer: The flaw is that you can't assume the clocks stay synchronized. Assuming they stay synchronized is equivalent to adopting Einstein synchronization, and thus adopting the convention that the one-way speed of light is isotropic. The typical "equation of time dilation" used in relativity already makes this assumption. If you don't want to assume this, then it is one of many equations that are no longer valid.
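To make the answer's point concrete, here is a small numerical illustration (my own addition, using the standard Reichenbach ε-synchronization parametrization, not anything from the video): pick any ε between 0 and 1 as the fraction of the round-trip time assigned to the outbound leg. The one-way speeds then differ from c unless ε = 1/2 (Einstein synchronization), yet every measurable round trip still averages to exactly c, which is why no experiment of this kind can pin down the one-way speed:

```python
C = 299_792_458.0  # the measured two-way speed of light, m/s

def one_way_speeds(eps):
    """Reichenbach convention: out of a round-trip time 2L/c, the outbound leg is
    assigned duration eps*(2L/c) and the return leg (1-eps)*(2L/c)."""
    return C / (2 * eps), C / (2 * (1 - eps))

def round_trip_speed(eps, L=1000.0):
    """Total distance over total time for an out-and-back light signal."""
    c_out, c_back = one_way_speeds(eps)
    return 2 * L / (L / c_out + L / c_back)

for eps in (0.3, 0.5, 0.7):
    c_out, c_back = one_way_speeds(eps)
    print(f"eps={eps}: one-way out {c_out:.3e} m/s, back {c_back:.3e} m/s, "
          f"round trip {round_trip_speed(eps):.1f} m/s")
```

Changing ε amounts to redefining what "simultaneous" means at the two clock positions — exactly the conventional freedom that the symmetric-transport scheme in the question cannot eliminate.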
{ "domain": "physics.stackexchange", "id": 88779, "tags": "special-relativity, speed-of-light, time, one-way-speed-of-light" }
Error running ccny_rgbd
Question: Hey I'm trying to run the ccny_rgbd package. I have everything built properly but when I run ccny_openni_launch openni.launch publish_cloud:=true I get a lot of errors.

nft@nft-desktop:~/ros_workspace/ccny_rgbd_tools$ roslaunch ccny_openni_launch openni.launch publish_cloud:=true
... logging to /home/nft/.ros/log/5efebfd8-1e6b-11e3-981d-74f06d80936b/roslaunch-nft-desktop-25268.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server [hidden]

SUMMARY
========

PARAMETERS
 * /openni_driver/depth_camera_info_url
 * /openni_driver/depth_frame_id
 * /openni_driver/depth_mode
 * /openni_driver/device_id
 * /openni_driver/image_mode
 * /openni_driver/rgb_camera_info_url
 * /openni_driver/rgb_frame_id
 * /rgbd_image_proc/calib_path
 * /rgbd_image_proc/publish_cloud
 * /rgbd_image_proc/scale
 * /rgbd_image_proc/unwarp
 * /rgbd_image_proc/verbose
 * /rosdistro
 * /rosversion

NODES
  /
    camera_rgb_frame_tf (tf/static_transform_publisher)
    camera_rgb_optical_frame_tf (tf/static_transform_publisher)
    debayer (nodelet/nodelet)
    openni_driver (nodelet/nodelet)
    rgbd_image_proc (nodelet/nodelet)
    rgbd_manager (nodelet/nodelet)

auto-starting new master
process[master]: started with pid [25284]
ROS_MASTER_URI=[hidden]
setting /run_id to 5efebfd8-1e6b-11e3-981d-74f06d80936b
process[rosout-1]: started with pid [25297]
started core service [/rosout]
process[rgbd_manager-2]: started with pid [25309]
process[openni_driver-3]: started with pid [25320]
[ INFO] [1379293230.457714541]: Initializing nodelet with 2 worker threads.
[ INFO] [1379293230.569478335]: Loading nodelet /openni_driver of type openni_camera/driver to manager rgbd_manager with the following remappings:
[ INFO] [1379293230.570003891]: /depth -> /camera/depth
[ INFO] [1379293230.570188411]: /ir -> /camera/ir
[ INFO] [1379293230.570338431]: /rgb -> /camera/rgb
process[debayer-4]: started with pid [25361]
process[rgbd_image_proc-5]: started with pid [25379]
process[camera_rgb_frame_tf-6]: started with pid [25401]
process[camera_rgb_optical_frame_tf-7]: started with pid [25417]
[FATAL] [1379293235.769048546]: Service call failed!
[rgbd_manager-2] process has died [pid 25309, exit code -11, cmd /opt/ros/fuerte/stacks/nodelet_core/nodelet/bin/nodelet manager __name:=rgbd_manager __log:=/home/nft/.ros/log/5efebfd8-1e6b-11e3-981d-74f06d80936b/rgbd_manager-2.log].
log file: /home/nft/.ros/log/5efebfd8-1e6b-11e3-981d-74f06d80936b/rgbd_manager-2*.log
[rgbd_image_proc-5] process has died [pid 25379, exit code 255, cmd /opt/ros/fuerte/stacks/nodelet_core/nodelet/bin/nodelet load ccny_rgbd/RGBDImageProcNodelet rgbd_manager __name:=rgbd_image_proc __log:=/home/nft/.ros/log/5efebfd8-1e6b-11e3-981d-74f06d80936b/rgbd_image_proc-5.log].
log file: /home/nft/.ros/log/5efebfd8-1e6b-11e3-981d-74f06d80936b/rgbd_image_proc-5*.log
[ INFO] [1379293238.822060943]: Bond broken, exiting
[openni_driver-3] process has finished cleanly
log file: /home/nft/.ros/log/5efebfd8-1e6b-11e3-981d-74f06d80936b/openni_driver-3*.log

Originally posted by avalada on ROS Answers with karma: 31 on 2013-09-15
Post score: 1

Original comments
Comment by sigmaduarte on 2013-10-11: Did you find out how to fix this? I get a failure loading RGBDImageProcNodelet.

Answer: Well, not a fix, but I did a fresh install of everything, including Ubuntu, and then it started working fine. I think the problem is that something goes wrong if you have both ccny_rgbd and rgbdslam installed. Maybe ccny_rgbd is using an older version of some calls or something like that.

Originally posted by avalada with karma: 31 on 2013-10-11
This answer was ACCEPTED on the original site
Post score: 0
{ "domain": "robotics.stackexchange", "id": 15533, "tags": "ros, openni, ccny-rgbd" }
Rheological Definition of Friction
Question: I was listening to a recorded lecture at our university about friction and its rheological definition. At first I thought it was the normal definition: Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. However, the lecture pointed out that the rheological definition is different from the normal one. The professor who recorded the lecture is out of town now, so I can't ask him. So I just need to know: what is meant by the rheological definition of friction, and what makes it different from the normal one? Any help is appreciated. Answer: I'll assume you meant to say "Rheological definition of friction". Rheology deals with the friction of fluid layers against one another and with any solid boundaries. The factors of fluid viscosity determine the frictional forces exerted within a fluid and with its boundaries. A brief overview of the principles and mathematics can be found here
{ "domain": "physics.stackexchange", "id": 79191, "tags": "friction, definition" }
Angular momentum and star system in space with four spatial dimensions
Question: I came across a YouTube video by minutephysics, a YouTuber with a Ph.D. degree in physics. In this video, he explains that the solar system is approximately flat for two reasons: 1) conservation of angular momentum, 2) our universe has three spatial dimensions, and the same does not hold in a universe with four spatial dimensions. I completely agree with the first point, and I think it would be nice to add that under the constraints of conservation of momentum and angular momentum, the total energy is minimized if all particles lie in the same plane. I have doubts about the second point, about which the video explains very little. In summary, I have two questions that I need help with. 1) If we can visualize angular momentum in 3D as a vector, how should we think about its counterpart in 4D, a tensor? And what about the law of conservation of angular momentum? 2) What is the shape of the solar system in a universe with four spatial dimensions? Answer: 1) If we can visualize angular momentum in 3D as a vector, how should we think about its counterpart in 4D, a tensor? And what about the law of conservation of angular momentum? In general, angular momentum is a geometric object called a rank $2$ differential form. To oversimplify a bit, you can think of it as a plane, along with a signed magnitude. In three dimensions, and only in three dimensions, every plane corresponds to a vector (i.e. the normal vector), so we can also think of angular momentum as a vector. 2) What is the shape of the solar system in a universe with four spatial dimensions? Since angular momentum is a plane in any number of dimensions, the conservation law argument actually works the exact same way: if the initial total angular momentum lies in a plane, then the solar system ends up orbiting in that plane.
At least, it would, but gravitational orbits in four spatial dimensions are not stable, or more precisely they are neutrally stable, with particles easily drifting into drastically larger or smaller orbits. For this reason, it's unclear if anything like a solar system could form at all.
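For concreteness, the rank-$2$ object described in the answer can be written down explicitly (this is the standard definition, added here for illustration). In $n$ spatial dimensions a point particle has angular momentum

```latex
L^{ij} = x^i p^j - x^j p^i, \qquad i, j = 1, \dots, n,
```

an antisymmetric tensor with $n(n-1)/2$ independent components. In 3D that is three components, which can be packaged into the familiar vector via $L_k = \tfrac{1}{2}\epsilon_{kij} L^{ij}$; in 4D there are six independent components, so no equivalent vector exists and one really does have to picture a plane (or sum of planes) carrying a magnitude. Each component $L^{ij}$ is separately conserved for a central force, which is the tensor form of the conservation law.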
{ "domain": "physics.stackexchange", "id": 66039, "tags": "classical-mechanics, newtonian-gravity, angular-momentum, conservation-laws, astrophysics" }
Melting Points and Pressure for Water
Question: I'm thoroughly confused by the relationship between the melting point of water and pressure. Some sources suggest that since water expands upon freezing, applying pressure makes it "harder" for the water to expand in volume, thus requiring a greater amount of energy, and thus a greater melting point. Other sources suggest that water does indeed have a lower melting point under greater pressure. Clearly I'm misunderstanding something. What am I missing? Answer: Just bear in mind that water has its highest density at 4 Celsius. For most substances, increasing the pressure will increase the melting point, in accordance with Le Chatelier's principle: since they expand upon melting, an increase in pressure tends to prevent them from melting, therefore raising their melting point. Water, by contrast, contracts upon melting, so an increase in pressure encourages melting, and its melting point decreases.
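The two competing claims in the question can be settled quantitatively with the Clausius–Clapeyron relation, which the answer invokes only implicitly; adding it here for reference:

```latex
\frac{dP}{dT} = \frac{L}{T\,\Delta v},
```

where $L > 0$ is the latent heat of fusion and $\Delta v = v_{\text{liquid}} - v_{\text{solid}}$ is the volume change on melting. For most substances $\Delta v > 0$, so the melting curve has positive slope and pressure raises the melting point; for water $\Delta v < 0$, the slope is negative, and pressure lowers the melting point.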
{ "domain": "physics.stackexchange", "id": 85772, "tags": "pressure, water, phase-transition" }
How do I factor in features which are IDs?
Question: I am an absolute beginner in data science and I had this (possibly stupid) question on my mind while reading a problem on Kaggle: Say I'm given IDs of some clients, IDs of products that they sell, and the quantity of the product sold, and I'm asked to predict the quantity of a product given the client ID and the product ID. Now, say the client IDs are in the range 10000 - 50000 and the product IDs are in the range 1-10. Suppose, for a moment, that the client IDs were random integers from 1-1000000 and the product IDs were random integers from 1000-2000. This isn't supposed to make the slightest change in the results, is it? After all, IDs are mere tags. But thinking data-wise, I've just bloated up two entire columns in my data to a higher scale, and these are two features as well. So how do I think about this? How do I factor in features which are IDs? How do I 'normalize' them? I hope I'm not being vague here. I just don't know a better way of phrasing this question. Answer: These IDs should not be represented as numerical values to your model. If you did, your model would think 2 and 3 are closer together than 2 and 2000, while it's just an ID; the number is just a name. Some models can deal with them, but then they need to be factors or categories (like decision trees). However, most models cannot deal with categories at all; there are numerous ways to solve this problem. The most used one is probably one-hot encoding, which means that for every category in your feature you add a column, and you put a 1 if it's that category and a 0 otherwise. Example:

ID | target
1  | 0
1  | 1
2  | 3
3  | 2

To:

ID_1 | ID_2 | ID_3 | target
1    | 0    | 0    | 0
1    | 0    | 0    | 1
0    | 1    | 0    | 3
0    | 0    | 1    | 2

This works very well if you have few categories; however, in the case of thousands of IDs this will increase your dimensionality too much. What you can do instead is collect statistics about the target and other features per group, join these onto your set, and then remove the categories.
This is what is usually done with a high number of categories. You have to be careful not to leak any information about your target into your features, though (a problem called label leakage).
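The one-hot table above can be produced with a few lines of code. Here is a minimal sketch using only the standard library; the ID values are the ones from the toy example, not from any real dataset.

```python
# One-hot encode a column of categorical IDs without any external libraries.

def one_hot(ids):
    """Turn a list of categorical IDs into one-hot encoded rows."""
    categories = sorted(set(ids))                # fixed column order
    index = {cat: i for i, cat in enumerate(categories)}
    rows = []
    for value in ids:
        row = [0] * len(categories)              # all zeros...
        row[index[value]] = 1                    # ...except the matching category
        rows.append(row)
    return categories, rows

categories, rows = one_hot([1, 1, 2, 3])
print(categories)  # [1, 2, 3]
print(rows)        # [[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

In practice a library routine (e.g. a dataframe utility) would do the same thing, but the underlying transformation is exactly this one-column-per-category expansion.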
{ "domain": "datascience.stackexchange", "id": 1010, "tags": "dataset, data-cleaning" }
How to serialize C++ objects containing ROS messages
Question: Hello, I have the following class I'd like to serialize:

class Foo {
    int x;
    std::vector<double> bars;
    moveit_msgs::RobotState _startState;
};

In the absence of any ROS/MoveIt components, I am able to serialize out/in using the Boost serialization library. I presume that if all I had was the moveit message type then I could use the MoveIt serialization described on MessagesSerializationAndAdaptingTypes (assuming moveit_msgs are serializable by ROS?). But when I have a class containing both, I am not sure what to do. How do I go about serializing it now? I am using ROS Groovy. Thanks Originally posted by sned on ROS Answers with karma: 1 on 2014-06-18 Post score: 0 Answer: In ROS you cannot serialize and send arbitrary datatypes; you need to use ROS message datatypes. Some libraries provide custom mappings from native datatypes to ROS message-serialized formats, but you need a target ROS message format. More on messages here: http://wiki.ros.org/Messages Originally posted by tfoote with karma: 58457 on 2014-10-01 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 18313, "tags": "ros, serialization" }
Intuitive explanation for why voltage decreases as distance between charges decreases?
Question: A uniform electric field can be set up as shown: The force for a given charge will be constant anywhere in this field. However, as the distance from the charge to the oppositely charged plate decreases, the potential difference also decreases. This explains why force remains constant, as electric field strength is voltage divided by distance. But why does potential difference decrease if separation of a charge and the plate decrease? Perhaps it may be because I have the wrong definition for potential difference, or the wrong intuition behind it. I thought of potential difference as the difference in potential/energy between two points. Could someone correct/explain this distance question? Answer: This explains why force remains constant, as electric field strength is voltage divided by distance. The electric field is constant and that explains why the force is constant. I thought of potential difference as the difference in potential/energy between two points. Not quite. The difference in potential energy between the two points is the work done to move charge between the two points. The potential difference $V$ between two points is defined as the work per unit charge to move the charge between the two points, or the work divided by the charge. For simplicity, let's assume a vacuum in the space between the plates and a uniform electric field $E$. Suppose we wish to move a positive charge $q$ initially located in the space near the surface of the negatively charged plate over to the positively charged plate. To do this we would need to apply a force $F$ against the attraction force of the electric field, $$F=qE$$ The work $W$ required to move the charge the distance $d$ between the plates is $$W=qEd$$ This work equals the increase in potential energy of the charge. The greater the distance $d$ the greater the work and increase in potential energy. 
The potential difference, or voltage between the plates, is the work required per unit charge to move the charge between the plates, or $$V=\frac{W}{q}=Ed$$ The potential difference also increases with the distance $d$.

Comment: This helped massively! I have a question related to this answer: the pd between any two points in a series circuit is assumed to be the pd of the battery/voltage source.

Reply: You can't assume the pd between any two points in a series circuit will be the pd of the battery/voltage source. For example, if you have several resistors in series with the battery/voltage source, the pd across a specific resistor will not be the pd of the voltage source. For a simple series circuit consisting only of resistors connected to the source, the sum of the pd's across the series resistors will equal the pd across the battery terminals.

Comment: Is this an ideal circuit, or is it that way in reality? Wouldn't it relate in any way to the distance a point in the circuit is from the voltage source?

Reply: Not sure exactly what you mean by "an ideal circuit". All real battery/voltage sources have internal resistance (or more generally, internal impedance), which makes the source pd across its terminals depend on the amount of current delivered by the source. With no load connected to the source (open circuit) the pd at the terminals is the source emf. With a load connected, the pd at the terminals will equal the emf minus the voltage drop across the internal resistance (impedance). Also, all real wires connected between components in a circuit have resistance. But it is often considered negligible enough to be ignored. Regardless of whether the circuit is ideal or not, the potential difference (voltage) between any two points in a circuit is the work required per unit charge to move the charge between the two points.
For an ideal wire whose resistance (impedance) is considered zero no work is required to overcome the resistance (impedance) and thus the pd between any two points on such a wire is assumed to be zero. Hope this helps.
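As a quick numeric sanity check of the relations above ($W = qEd$ and $V = W/q = Ed$), with made-up values for the field, separation and charge:

```python
# Illustrative values (not from the question): a uniform field between
# parallel plates, an elementary charge moved across the full gap.
E = 5.0e3      # field strength, V/m (assumed)
d = 2.0e-2     # plate separation, m (assumed)
q = 1.6e-19    # charge being moved, C

W = q * E * d  # work done moving the charge across the gap, J
V = W / q      # potential difference = work per unit charge, V

# V equals E*d (here about 100 V) and is independent of q, which is the
# point of defining pd as work *per unit charge*.
print(V)
```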
{ "domain": "physics.stackexchange", "id": 71875, "tags": "electric-fields, charge, potential, voltage, capacitance" }
C++ Trie using std::map and std::unique_ptr
Question: I'm learning C++ and whilst trying to write in modern C++, I've made an attempt to rewrite a trie implementation written in C found here: http://www.geeksforgeeks.org/trie-insert-and-search/ It uses arrays to hold the branches in each node, and because it's C it was done using malloc() and no freeing of memory was done in the example. Is this an efficient approach of representing a trie in C++11? What other ways can I store children in trie nodes?

#include <iostream>
#include <map>
#include <algorithm>
#include <memory>

class Trie {
    struct Node;
    typedef std::unique_ptr<Node> spNode;
    struct Node {
        std::map<char, spNode> children;
        bool isLeaf;
        Node() : isLeaf{false} {}
    };
    spNode root;
public:
    Trie();
    void insert(const std::string& str);
    bool search(const std::string& str);
};

Trie::Trie():root{nullptr}{}

void Trie::insert(const std::string& str)
{
    if (root == nullptr) {
        std::unique_ptr<Node> node(new Node());
        root = std::move(node);
    }
    Node *temp = root.get();
    for (const char& c : str) {
        if (temp->children.find(c) == temp->children.end()) { //if char not in map
            std::unique_ptr<Node> node(new Node());
            temp->children[c] = std::move(node);
        }
        temp = temp->children[c].get();
    }
    temp->isLeaf = true;
}

bool Trie::search(const std::string &str)
{
    if (root == nullptr)
        return false;
    Node *temp = root.get();
    for (const char& c : str) {
        if (temp->children.find(c) == temp->children.end())
            return false;
        temp = temp->children[c].get();
    }
    return (temp->isLeaf);
}

int main (void)
{
    std::string words[] = { "Hello", "hi", "hey", "howdy", "ho"};
    Trie test;
    for (const auto& str : words) {
        test.insert(str);
    }
    if (test.search("Hello"))
        std::cout << " 'Hello' is found in the trie\n";
    else
        std::cout << " 'Hello' is not found in the trie\n";
    if (test.search("yo"))
        std::cout << " 'yo' is found in the trie\n";
    else
        std::cout << " 'yo' is not found in the trie\n";
}

Answer: I don't see the need for unique_ptr for the root node.

spNode root;

I would just make this a node.
Node root;

The use of std::map is fine. But it does have O(log(n)) lookup. If you switch back to an array it's O(1). It's a time-for-space trade-off. Pick the one you want.

I don't like the two-line creation of nodes.

std::unique_ptr<Node> node(new Node());
temp->children[c] = std::move(node);

Just use reset:

temp->children[c].reset(new Node());

Or if you have C++14, use std::make_unique():

temp->children[c] = std::make_unique<Node>();

Personally, search() does not seem quite the correct verb.

bool search(const std::string& str);

What is wrong with find()?
{ "domain": "codereview.stackexchange", "id": 25667, "tags": "c++, c++11, memory-management, hash-map, trie" }
Are correlation functions $\langle \phi(x_{1}) \cdots\phi(x_{n}) \rangle $ invariant under cyclic permutations?
Question: If I have a correlation function $$\langle \phi(x_{1}) \phi(x_{2}) \cdots \phi(x_{n-1}) \phi(x_{n})\rangle ,$$ can I cycle through the fields? I'm not sure I'm using the correct terminology. But for example, is it true that $$\langle \phi(x_{1})\phi(x_{2})\phi(x_{3})\rangle = \langle\phi(x_{2})\phi(x_{3})\phi(x_{1})\rangle = \langle \phi(x_{3})\phi(x_{1})\phi(x_{2})\rangle ~?$$ I think this is not true... but I just wanted to be sure. Answer: It depends on the action $S$ of the quantum field theory. In general, correlation functions are clearly not invariant under cyclic permutations. Simply take fermion fields $\psi(x_1),\psi(x_2)$; these fields are Grassmann numbers (i.e. mathematical objects with $ab = - ba$ for any Grassmann numbers $a,b$). The propagator $\langle\psi(x_1)\psi(x_2)\rangle \mapsto \langle\psi(x_2)\psi(x_1)\rangle = - \langle\psi(x_1)\psi(x_2)\rangle$ is clearly not invariant under cyclic permutations.
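The two-point sign flip in the answer generalizes: anticommuting the first field past the remaining $n-1$ fields gives, for a purely fermionic correlator,

```latex
\langle \psi(x_1)\psi(x_2)\cdots\psi(x_n)\rangle
  = (-1)^{\,n-1}\,\langle \psi(x_2)\cdots\psi(x_n)\psi(x_1)\rangle,
```

so a cyclic permutation costs a sign whenever $n$ is even. (This is standard Grassmann bookkeeping, stated here for completeness; for commuting scalar fields $\phi$ inside a time-ordered product the correlator is symmetric under all permutations, so the equalities the question asks about do hold in that case.)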
{ "domain": "physics.stackexchange", "id": 35947, "tags": "quantum-field-theory, correlation-functions" }
Prove $EXP \subset E^E$
Question: I'm trying to prove $EXP \subset E^E$ (strictly). I believe I need to construct my own $A \in E^E$ and show that $A \notin EXP$, but I cannot think of a smart way of doing that. Thanks. Answer: The class $E^E$ contains $2E$, the class of problems solvable in doubly exponential time $2^{2^{O(n)}}$. The reason is that the E oracle can be applied to a padded input of exponential size. The time hierarchy theorem then separates $2E$ from EXP.
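The padding argument in the answer can be spelled out a little more (my paraphrase, not the original poster's): given $A \in 2E$, i.e. decidable in time $2^{2^{cn}}$, define the padded language

```latex
A_{\mathrm{pad}} = \{\, x\,01^{2^{c|x|}} : x \in A \,\}.
```

On a padded input of length $m = \Theta(2^{cn})$, membership can be decided in time $2^{2^{O(n)}} = 2^{O(m)}$, so $A_{\mathrm{pad}} \in E$. An $E^E$ machine deciding $A$ just writes the query $x\,01^{2^{c|x|}}$ (which takes $2^{O(n)}$ steps) and asks the oracle, giving $2E \subseteq E^E$. Strictness then follows from the time hierarchy theorem, which separates, for example, $\mathrm{DTIME}(2^{2^n}) \subseteq 2E$ from $EXP$, since $2^{n^{O(1)}} = o(2^{2^n})$.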
{ "domain": "cs.stackexchange", "id": 11902, "tags": "complexity-theory, time-complexity" }
Why do smaller weights converge faster for RNNs?
Question: I am writing a Recurrent Neural Network using only the NumPy library for a binary classification problem. When I initialize the weights with np.random.randn, after 1000 epochs it gets ~60% accuracy, whereas when I divide the weights by 1000 first, it reaches 100% accuracy after the same number of epochs. Why is this? Do RNNs work better with smaller weights, or does the number 1000 mean something? Any and all help is welcome, thanks. Answer: There is no magic value that works for every network, but in general:
too large initial weights lead to exploding gradients (i.e. no convergence)
too small initial weights lead to vanishing gradients (i.e. small loss without real convergence)
the best initialization strategies use uniformly or normally distributed weights, and set an order of magnitude that depends on the number of parameters in the previous layer (see Xavier initialization and Kaiming initialization)
Check this blog post for pretty good animations of both problems.
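To illustrate the scaling idea, here is a rough standard-library sketch of Xavier-style initialization. The layer sizes and the $1/\sqrt{\text{fan\_in}}$ factor are illustrative choices, not taken from the question's network.

```python
# Compare naive N(0, 1) initialization against Xavier-style scaling.
import math
import random

random.seed(0)

def init_weights(fan_in, fan_out, scale):
    """A fan_in x fan_out weight matrix of scaled Gaussian samples."""
    return [[random.gauss(0.0, 1.0) * scale for _ in range(fan_out)]
            for _ in range(fan_in)]

fan_in, fan_out = 256, 128
naive = init_weights(fan_in, fan_out, 1.0)                      # ~ np.random.randn
xavier = init_weights(fan_in, fan_out, 1.0 / math.sqrt(fan_in)) # Xavier-style

def std(matrix):
    """Empirical standard deviation of all entries."""
    flat = [w for row in matrix for w in row]
    mean = sum(flat) / len(flat)
    return math.sqrt(sum((w - mean) ** 2 for w in flat) / len(flat))

# The Xavier-scaled weights are much smaller, which keeps the variance of
# activations roughly constant from layer to layer instead of blowing up.
print(std(naive), std(xavier))
```

Dividing by 1000 in the question plays the same role as the scale factor here: the specific number matters less than getting the order of magnitude down to something the recurrence doesn't amplify.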
{ "domain": "ai.stackexchange", "id": 3434, "tags": "recurrent-neural-networks, python, binary-classification, weights-initialization, numpy" }
Path to becoming an audio engineer
Question: Can someone write down steps an totally beginner in EE and DSP needs to pass to become a audio engineer? From which subject to start with (math, electronics), and with which subjects to continue this road? My goal is to get a good knowledge in audio signal processing, things like creating sounds (sound synthesis), sound transformation, mixdown audio, understand fundamental things. I will be very grateful if someone give me instruction from where to start and what path to follow, and on what to mostly pay attention, I have a great desire to learn all of that, I have 6 years experience in electronic music production. Thank you very much! Answer: (TLDR: if you want to dive in I might recommend Will Pirkle's books Designing Audio Effect Plug-Ins in C++ and Designing Software Synthesizer Plug-Ins in C++ but you might be completely overwhelmed depending on your background) First off, if you want to create new sounds, I love Max/MSP and that will get you creating new sounds quickly but it will not teach you DSP. Before a mod closes this as off-topic or too open-ended, I will say this: There is no recipe, no set list of steps. Period. Or as Umberto Eco wrote in his famous thriller "Foucault's Pendulum"....there is no map. Take that for what you will. I'm sure everybody's story who works with audio is different. I grew up messing around with 4-track tape recorders, and since I grew up with computers, started playing with digital audio workstations like Audacity and ProTools (Ardour during my hardcore linux purist phase), making demos and playing in bands. During this time I also tried my hand at BASIC and later scripting languages. I got interested in theoretical physics in high school, and majored in physics in college (where I made a lot of demo recordings with bands and learned how to solve problems, but not a lot of domain specific knowledge (which I think is a good thing)). 
I didn't want to be a particle physicist and had met a professor who worked in architectural acoustics. I found a grad program that was able to pay me to think about sound and space (mostly psychoacoustics) for 4 years and earn my MS and PhD. At the end of my PhD I met (through a random LinkedIn message I initiated) the CTO of a startup who was willing to transfer my skills in psychoacoustics to programming microcontrollers to do effects and equalization. My job got me started down the path of really doing synthesis and processing. I had (almost) no idea what I was doing, and that's when I really learned. I'm just starting on this journey of being able to digitally manipulate sound, and I find that there's just so much out there and it is totally fascinating. Every day I realize I know so little, but take in what I can each day (and try not to get frustrated that I don't move faster). Understanding is a slow process that cannot be expedited. That being said, if I had to do it all over again knowing what I know now (and how could I have really?), I would have taken as much math and programming as I could have in high school (which wasn't offered in Catholic school) and majored in electrical engineering (which wasn't offered at the liberal arts college I attended). EE is really where all this stuff stems from. I'm biased but I would say it doesn't hurt to study something about hearing science and psychoacoustics too. Music appreciation can fuel your enthusiasm.
{ "domain": "dsp.stackexchange", "id": 3180, "tags": "audio" }
How to use the step count method correctly for binary search?
Question: I've tried to use the step counting method to get the worst-case time complexity for binary search. But I seem to mess it up, as my final result would be O(n) and not O(log(n)). My implementation:

fn binarySearch(array:[i32;20], target:i32) -> isize {
    let mut min = 0;                          //c
    let mut max = array.len();                //c
    let mut guess:usize;                      //c
    while max > min {                         //n
        guess = (max+min)/2;                  //c
        println!("Guess: {}", array[guess]);  //c
        if array[guess] == target {           //c
            return guess as isize;
        } else if array[guess] < target {     //c
            min = guess + 1;                  //c
        } else {                              //c
            max = guess - 1;                  //c
        }
    }
    return -1 as isize;                       //c
}

I've written the time it takes in the comments: c for constant, n for linear. But based on this I get something like this: T(n) = c+c+c+n*(c+c+c+c+c+c+c+c), which should boil down to T(n) = 3c+n*8c, which would be O(n) and not O(log(n)). Answer: The mistake in your analysis is counting the number of iterations of the while loop as $n$. The key observation with binary search is that you cut the range roughly in half in every iteration. So if initially your array has $n$ items, in the worst case you will divide the array in half $k$ times until only one element is left (after this the subarray will be empty and the loop will end). You get $n/2^k = 1$, which is equivalent to $n=2^k$. Here $k$ is also the worst-case number of iterations. Taking the logarithm of both sides gives you $k =\log_2 n$.
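The halving argument can be checked empirically. The sketch below (in Python rather than Rust, for brevity) counts loop iterations of a plain binary search; the array sizes are arbitrary.

```python
# Count iterations of binary search and confirm they grow like log2(n), not n.
import math

def binary_search_steps(array, target):
    """Return (index or -1, number of loop iterations)."""
    low, high = 0, len(array) - 1
    steps = 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if array[mid] == target:
            return mid, steps
        elif array[mid] < target:
            low = mid + 1          # discard the lower half
        else:
            high = mid - 1         # discard the upper half
    return -1, steps

for n in (16, 1024, 1 << 20):
    data = list(range(n))
    _, steps = binary_search_steps(data, -1)   # worst case: absent element
    # The range halves each iteration, so the count stays near log2(n).
    assert steps <= math.floor(math.log2(n)) + 1
    print(n, steps)
```

In the step-count notation of the question, the loop body costs c per iteration but runs only $\log_2 n$ times, giving $T(n) = 3c + 8c\log_2 n = O(\log n)$.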
{ "domain": "cs.stackexchange", "id": 19821, "tags": "algorithms, time-complexity, runtime-analysis, binary-search" }
What is the meaning of preserving local or global structure of the data?
Question: I read about the PaCMAP dimensionality reduction method (PaCMAP). The authors write that this method preserves both the local and the global structure of the data in the original space. What is the meaning of preserving the local structure of the data? What is the meaning of preserving the global structure of the data? Can you please add examples of those preserved structures? Answer: When reducing the dimensionality of data, you want to keep local characteristics of the data, such as nearest neighbours, while preserving the holistic view, such as keeping far-apart data points far from each other. This is a trade-off that most dimensionality reduction algorithms, especially non-linear ones, try to balance. It is a trade-off because usually, and naturally, going toward one of them will destroy the other one, so what we want is to preserve the local structure while preserving the global one. Example: See the nonlinear curve below. It looks like an English S. Now let's discuss the modelling of two different distances here. Those distances are the distance between the dark blue region at the beginning of the S and:
1. the yellow region immediately under it, which is on the curvature of the S
2. the very light blue region, which is on the curvature of the opposite side
How do you want to model it? The global structure says that distance (1) is smaller than (2) (right?), but you cognitively see and know that the continuous form of the S shape suggests that distance (1) is actually larger than (2). Simply because you see the global structure of the S, and you see that this structure is a continuous form of many local structures shown by different colours. You intuitively know that if you walk on the data, you arrive at the light blue region faster than the yellow region! Here you are preserving local structure, and if you don't, you will fall into the trap of seeing the yellow region as closer than the blue one. This is what LLE is showing you.
It embeds the data in a lower-dimensional space in which the light blue region is closer to the dark blue one than the yellow region is, which means the local structure was safely preserved, while the global structure of the shape is reduced from an S shape to a simple band (see it as opening or flattening the S shape). Hope I did not confuse you even more! Good luck
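The curve intuition above can be checked numerically. The sketch below uses a hairpin-shaped curve (an arbitrary stand-in for the S, standard library only) and shows that straight-line distance and along-the-curve distance can rank pairs of points in opposite order, which is exactly the local-versus-global tension described in the answer.

```python
# Straight-line (global) vs along-the-curve (local) distance on a hairpin.
import math

# Sample a hairpin curve: x = 4*t^2, y = t, for t in [-1, 1].
ts = [-1.0 + i * 2.0 / 400 for i in range(401)]
curve = [(4.0 * t * t, t) for t in ts]

def euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def path_dist(i, j):
    # Distance "walking along the data": sum of small segments.
    return sum(euclid(curve[k], curve[k + 1]) for k in range(i, j))

a, b, c = 0, 400, 240   # a: tip of one arm; b: tip of the other; c: just past the bend
# Straight-line view: the opposite tip b looks *closer* to a than c does...
assert euclid(curve[a], curve[b]) < euclid(curve[a], curve[c])
# ...but walking along the curve, c is much closer to a than b is.
assert path_dist(a, c) < path_dist(a, b)
print(euclid(curve[a], curve[b]), euclid(curve[a], curve[c]))
```

A method that preserves only the Euclidean (global) ranking would place the two arm tips side by side; a method that preserves the walking-distance (local) ranking unrolls the hairpin into a band, which is the behaviour attributed to LLE above.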
{ "domain": "datascience.stackexchange", "id": 10698, "tags": "machine-learning, dimensionality-reduction" }
Condition for constructive interference of light question
Question: The question was asking me to find an expression for the constructive interference between light which follows the paths $PQ$ and $PRS$, once the light has reached the air outside of the two glass plates. I found the correct expression: $$2 \pi \left ( \frac {2d}{ \lambda \cos \theta} - \frac {2d \sin \theta \tan \theta} { \lambda} \right) + \pi = 2 \pi$$ But I don't understand why the expression is the phase change over path $PRS$ minus the phase change over path $PQ = 2 \pi$, and not just the phase change over path $PRS = 2 \pi$. Can someone explain to me why I must take the path $PQ$ into account? Answer: At P the incoming waves go along one of two paths, along PQ and along PRS. At point P, whichever way the waves go, they are in phase with one another. The waves are then made to overlap and so you get interference. If you took a snapshot of the wave you would find that the phase of the wave at Q would not be the same as the phase of the wave at P. Likewise the phase of the wave at S would not be the same as the phase of the wave at P. What you are interested in is the phase difference between Q and S. This phase difference is given by the phase difference between S and P minus the phase difference between Q and P. That phase difference is related to the difference in the optical path lengths that the waves have travelled.
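Written out, the condition in the question is a statement about the phase difference between the two routes: each wave accumulates a phase $2\pi \times (\text{optical path length})/\lambda$ between P and the point where the waves are compared, so constructive interference requires (keeping the extra $\pi$ that appears in the quoted expression, presumably a reflection phase shift, though the problem statement for that is not shown here)

```latex
\Delta\phi = \frac{2\pi}{\lambda}\left(\mathrm{OPL}_{PRS}-\mathrm{OPL}_{PQ}\right) + \pi = 2\pi m, \qquad m \in \mathbb{Z}.
```

It is this difference, not the phase along PRS alone, that decides whether the overlapping waves reinforce, because both waves start in phase at P.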
{ "domain": "physics.stackexchange", "id": 42907, "tags": "homework-and-exercises, optics, visible-light, refraction, interference" }
Review structure of PHP/HTML
Question: I am very new to PHP and I kind of sort of want to create the pages like I would in ASP.NET; it's just the way I am wired. I have been inserting the header/menu using a PHP Include and here is what the code looks like for just the first page. Am I doing this properly, or is there a better way? index.php <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>New Jerusalem</title> <link rel="stylesheet" type="text/css" href="Main.css"/> </head> <body> <div id="wrapper"> <div id="header"> <?php include ('Menu.php') ?> </div> <div id="content"> <h1 class="PageTitle" >Sturgis Hellfighters</h1> <div class="indexright" width="50%"> <h3 class="smallheader">Scripture Corner</h3> <p><img src="images/Romans5-8.jpg" width="475" alt="Romans 5:8" /></p> <p></p> </div> <div class="indexleft" width="50%"> <p><img src="images/BridalFallsSpearfishCanyon.jpg" width="475" height="720" alt="Bridal Falls Spearfish Canyon" /> </p> </div> </div> </div> </body> </html> Menu.php <img src="images/SiteHeader.jpg" width="1000" height="150" alt="HellFighters New Jerusalem" id="siteheader"/> <ul id="HomeMenu"> <li class="TopLevel"><a href="index.php">Home</a></li> <li class="TopLevel"><a href="#">News</a> <ul> <li class="SecondLevel"><a href="#">Sturgis Current Events</a></li> <li class="SecondLevel"><a href="#">Rapid City Events</a></li> <li class="SecondLevel"><a href="#">Pierre Events</a></li> <li class="SecondLevel"><a href="#">Other Events</a></li> </ul> </li> <li class="TopLevel"><a href="#">Photos</a> <ul> <li class="SecondLevel"><a href="christmasatpineridge.php">Christmas On Pine Ridge</a></li> <li class="SecondLevel"><a href="XmasMission.php">Christmas At the Mission</a></li> <li class="SecondLevel"><a href="OpenHouse.php">Open House</a></li> <li class="SecondLevel"><a 
href="Nationals.php">Nationals</a> </ul> </li> <li class="TopLevel"><a href="#">Events</a> <ul> <!-- --> <li class="SecondLevel"><a href="#">A Pine Ridge Christmas</a> <li class="SecondLevel"><a href="#">A Sturgis HellFighter Christmas</a> <li class="SecondLevel"><a href="#">Sturgis Motorcycle Rally</a></li> <li class="SecondLevel"><a href="#">Random City Swap Meet</a></li> <li class="SecondLevel"><a href="#">Cool Event</a></li> </ul> </li> <li class="TopLevel"><a href="#">Contact Info</a> <ul> <li class="SecondLevel"><a href="#">President</a></li> <li class="SecondLevel"><a href="#">Vice-President</a></li> <li class="SecondLevel"><a href="#">Preacher Man</a></li> </ul> </li> </ul> I would like to use HTML5 if possible, so if your review brings up HTML5 that is cool with me too. If you would like to see the front page, click here. This page will be changing as I am currently working on it. Answer: Going to some HTML5 stuff straight away. Replace your doctype with the HTML5 doctype. This is necessary for the next few steps <!DOCTYPE html> Use the shorter charset meta tag and the meta viewport tag inside your head area. The viewport meta tag is necessary for mobile devices and ensures a good default viewport. <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> Side note: You may omit the type-attribute for style, script and link tag (not for RSS) and the closing / for self-closing tags. This is not a recommendation, but it's possible. Use ID's only when you definitely can say "There will be only one element of this per page, ever". A wrapper is not unique to a page. It should be a class. Generally speaking, ID's don't provide you with a true benefit and kinda make your life harder. I don't use ID's as styling hooks anymore and I'm really happy with it. Your navigation is included with PHP and is named menu.php, but it actually contains a header image as well. A better name would be header.php, wouldn't it?
That being said, you may consider moving everything above your header, and the header itself, into a header.php, even your head area and the html tag. This only works if your header is the same on all of your pages. I'm using dash-delimited class names in my HTML. No CamelCase names. This is my preference, and if you want to use CamelCase, this is fine. However, I would avoid undelimited, all-lowercase names like indexright or smallheader. That's it for now. I'm going to edit some more stuff later. Stay tuned.
{ "domain": "codereview.stackexchange", "id": 6041, "tags": "php, html, beginner, html5" }
Is Active Transport Required For Life?
Question: Is active transport required for all living cells to function? I was under the impression that if a cell doesn't have active transport, it either would lose molecules through the membrane and not be able to maintain a proper gradient, or molecules would be trapped inside the membrane (waste) or outside the membrane (food). Answer: For an organism (think single-celled) living in just the right kind of environment, it might just be possible to survive using only facilitated diffusion, at least as far as small molecules are concerned. This kind of organism would have to maintain exactly the right concentration of the molecules it wants to keep or get rid of, based on the external concentration of those molecules. For example, if an organism was living in a glucose-rich environment, it might be able to fulfill its glucose requirements by (1) making the membrane itself tightly impermeable to glucose and (2) tightly controlling the number of passive glucose transporters. It would be nigh-impossible to achieve this for every solute in the environment, especially those that are able to permeate the membrane. If there were such a thing, it might be found in organisms that live in very low energy aquatic regimes such as anaerobic methane oxidizers, where the solute concentration in the environment doesn't really change in small time scales. Even in such environments, the number of small ions in the environment is likely to be high enough that some active transport would be required to maintain the osmotic balance within the cell.
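The "facilitated diffusion only" scenario in the answer can be sketched with a toy simulation. Everything here — the model, the function name, and all numbers — is an illustrative assumption, not something from the answer; the point is only that a gradient-following flux term can relax the internal concentration toward the external one, never hold a gradient against it.

```python
def passive_only(c_in, c_out, permeability, area, volume, dt, steps):
    """Evolve internal concentration with only a passive flux term.

    Flux follows the gradient (J = P * A * (c_out - c_in)); with no
    pump term, c_in can only relax toward c_out.
    """
    for _ in range(steps):
        flux = permeability * area * (c_out - c_in)  # mol/s into the cell
        c_in += flux * dt / volume
    return c_in

# Hypothetical numbers, chosen only to show the relaxation.
c_final = passive_only(c_in=1.0, c_out=10.0, permeability=1e-3,
                       area=1.0, volume=1.0, dt=0.1, steps=100_000)
# c_final ends up very close to c_out: no gradient survives passively.
```

To maintain any other internal concentration, the cell would need an extra pump term in the update — which is exactly what active transport provides.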
{ "domain": "biology.stackexchange", "id": 1039, "tags": "cell-membrane" }
How can I integrate a smart phone with my robotics project?
Question: Smart phones these days typically come with a gyroscope, accelerometer, compass, camera, and GPS sensor all on board. They also usually have a connection to the internet with Wifi and mobile data networks. I've seen many cases of using a phone as a remote control for a robot, but to me, it seems like the phone itself is a perfect lightweight computing and sensing platform for an autonomous robot. The main obstacle I see is interfacing with actuators. Being able to control motors to steer even a table-top robot, or control servos, for example. Connecting and communicating to a microcontroller could be an obstacle as well. As a robot hobbyist, I'd like to know how I can overcome these and other obstacles to be able to harness the power of my smart phone with my robotics projects. Answer: The Android Accessory Development Kit (ADK) should do everything you need. It's an Arduino board that is specifically designed to interact with Android. You can connect the two over Bluetooth or USB (or WiFi/Ethernet, I think). Since the whole architecture is open, you can use each part for what it's best at. You write the Android code to get data from the phone's built-in sensors and control the display. And you write the Arduino code to control actuators, servo's and other sensors that phone's don't come. The code to communicate between the two is provided open source via Google. From the Arduino Blog: The Android Open Accessory Development Kit (ADK) provides an implementation of an Android USB accessory that is based on the Arduino open source electronics prototyping platform, the accessory’s hardware design files, code that implements the accessory’s firmware, and the Android application that interacts with the accessory. The hardware design files and code are contained in the ADK package download. Most of the examples are designed around using the phone as a graphical interface for some sort of hardware accessory like a clock. 
But there's no reason you couldn't access the phone's sensors as well and pass data back and forth. Adafruit even mentions using it to make robots on their store's page for the ADK: Fast track your Android phone hardware development or just design cell-phone robots with the Arduino ADK! The Arduino ADK is a microcontroller board based on the powerful ATmega2560 (datasheet). It has a USB host interface to connect with Android based phones, and a power supply boost converter to charge up the phone from DC power while its plugged into the ADK (emphasis mine)
{ "domain": "robotics.stackexchange", "id": 28, "tags": "actuator" }
What does a zero determinant of the metric tensor in a space mean?
Question: Does it show that a coordinate transformation occurs, and how? Answer: If the metric has a zero determinant at some point or surface, then the metric is changing signature at that point, e.g., from $+---$ to $0---$. It's conceivable that spacetime would actually do this. However, it would violate the equivalence principle, and the standard mathematical machinery of GR breaks down in this situation, because everything is predicated on the assumption that we can use the metric to raise and lower indices at will. If you encounter a spacetime, described in some coordinates, where this is happening, then the more likely situation is that you're looking at it using an inappropriate choice of coordinates. If so, then by Sylvester's law the transformation to the well-behaved coordinates will have to be singular. I have a worked-out example of this kind of thing in section 6.4 of my GR book, which is free online: http://www.lightandmatter.com/genrel/
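A standard worked illustration of the "inappropriate coordinates" point is the usual Rindler-style case (this is the generic textbook example, not necessarily the one in the linked book):

```latex
\[
ds^2 = -x^2\,dt^2 + dx^2 ,\qquad \det g = -x^2 \;\to\; 0 \text{ as } x \to 0 .
\]
With $T = x\sinh t$, $X = x\cosh t$ one finds
\[
-dT^2 + dX^2 = -x^2\,dt^2 + dx^2 ,\qquad
\frac{\partial(T,X)}{\partial(t,x)} = x .
\]
```

So the metric with vanishing determinant is just flat Minkowski space in disguise, and the transformation to the well-behaved coordinates $(T,X)$ is singular (Jacobian $= x$) exactly where $\det g = 0$, as the answer describes.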
{ "domain": "physics.stackexchange", "id": 60202, "tags": "general-relativity, metric-tensor, coordinate-systems, singularities" }
How long does it take for the atmosphere to mix?
Question: If I release, say, one litre of gas at a particular point on the planet, how long, on average, would it take to be distributed evenly (mixed) across the entire Earth's atmosphere? Answer: The time scale of interhemispheric tropospheric transport is on the order of one year (Chapter 4 of the book Introduction to Atmospheric Chemistry written by Daniel J Jacob). This is not an exact answer to your question, but might be a fair enough number for the problem you have in mind.
{ "domain": "earthscience.stackexchange", "id": 140, "tags": "meteorology, climate, atmospheric-circulation" }
Can below-threshold potential changes of neurons convey information?
Question: In neuroscience we learn that when the membrane potential of a neuron reaches a threshold (typically around -55mV) it "spikes": That is, it actively propagates a signal. I have two related questions in this respect: The spike initiation zone is typically (e.g. for mammals) at the axon hillock - from there the action potential is actively (opening of ion channels...) propagated through the axon. But what happens then at and after the (chemical) synapse? Is the propagation after the synapse to the postsynaptic cell passive? Newer imaging techniques (e.g. calcium imaging) can capture sub-threshold changes in the membrane potential. How are these sub-threshold potentials relevant for information processing? Are they propagated to postsynaptic cells, albeit again only in a passive manner? Answer: This type of passive conduction (sub-threshold) is called electrotonic conduction. When an action potential reaches the axon terminal (pre-synaptic knob) it induces a Post-Synaptic Potential (PSP), through a chemical or electrical synapse. Now if there is EPSP (i.e. excitatory) generation, then in the post-synaptic neuron there will be an electrotonic potential that will move towards the axon hillock. Up to the axon hillock, the conduction is mostly electrotonic, and hence we need this type of conduction to actually generate an Action Potential. In theoretical neuroscience this electrotonic conduction along dendrites is computed by using Cable Theory. It will eventually die out with distance as $V(x)={V_o}\, e^{-\frac{x}{\sqrt{r_m/r_i}}}$; standard notations used. ref article: https://www.sciencedirect.com/science/article/pii/B9780123971791000178 Hence we can conclude that for information transmission, sub-threshold potentials are extremely important.
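The cable-theory decay quoted in the answer can be evaluated numerically. The values below are illustrative only (real $r_m$ and $r_i$ depend on the neuron); the point is that after one length constant the potential has fallen to $1/e$ of its starting value but has not vanished:

```python
import math

def electrotonic_potential(v0, x, r_m, r_i):
    """Steady-state passive decay along a cable: V(x) = V0 * exp(-x/lam),
    where lam = sqrt(r_m / r_i) is the length constant."""
    lam = math.sqrt(r_m / r_i)
    return v0 * math.exp(-x / lam)

# With r_m = r_i (so lam = 1 length unit), a 10 mV EPSP has decayed to
# 10/e (about 3.7 mV) after one length constant: attenuated, but still
# able to sum with other inputs arriving at the axon hillock.
v_at_hillock = electrotonic_potential(v0=10.0, x=1.0, r_m=1.0, r_i=1.0)
```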
{ "domain": "biology.stackexchange", "id": 10819, "tags": "neuroscience, action-potential" }
Can ethanol alone be used as fuel?
Question: Ethanol is a clean fuel and added as an additive to petrol. Why can't ethanol itself be used as fuel? Answer: Ethanol clearly can be used as a fuel. But there are several reasons why it isn't. The first is simple economics. Ethanol costs more to produce than gasoline: oversimplifying only slightly, fermentation is more complicated than simple distillation. Those countries that do use mixtures of ethanol in gasoline do so for "environmental" reasons (which are not always well thought through, as being made from plants doesn't make something good if associated production costs are high or alternative uses are better). And "environmental" causes are a good excuse to subsidise favoured groups, like US or Brazilian farmers. In the absence of subsidy, there would be very little use of ethanol as a fuel. But those are not the only reasons. Ethanol carries less energy than the same volume of gasoline. In other words, your mileage will be lower; your fuel efficiency will be lower. Worse, though, ethanol doesn't have exactly the same characteristics as gasoline. Engines may need to be adjusted so the alternative fuels burn equally well, which is annoying but easily fixable in electronically controlled engines. In addition, ethanol may be more corrosive to some of the components used in fuel pipes (nothing like as bad as methanol, but clearly worse than gasoline). Ethanol is also much more hygroscopic than gasoline, absorbing significant amounts of water from air. This is another potential cause of contamination in fuel systems and may make corrosion worse. In short, there are no fundamental reasons why ethanol can't be used. But there are a range of practical issues that make it less economical or inconvenient.
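The mileage point can be made quantitative with rough ballpark figures. The energy densities below are approximate literature-style values chosen for illustration (real numbers vary with blend and source), not figures from the answer:

```python
# Approximate lower heating values per litre (MJ/L); ballpark only.
ethanol_mj_per_l = 21.0
gasoline_mj_per_l = 32.0

ratio = ethanol_mj_per_l / gasoline_mj_per_l  # roughly 2/3

# A tank of pure ethanol therefore delivers roughly two-thirds the range
# of the same tank of gasoline, all else being equal.
litres_for_same_energy = 1.0 / ratio  # ~1.5 L ethanol per 1 L gasoline
```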
{ "domain": "chemistry.stackexchange", "id": 13395, "tags": "combustion, fuel" }
P2os navigation stack
Question: Hi I'm using the navigation stack of the p2os package with Gazebo. I'm launching pioneer3dx.gazebo.launch to simulate the p3dx robot. <launch> <include file="$(find gazebo_worlds)/launch/office_world.launch"/> <include file="$(find p2os_urdf)/launch/upload_pioneer3dx.xml"/> <!-- push robot_description to factory and spawn robot in gazebo --> <node name="spawn_pioneer" pkg="gazebo" type="spawn_model" args="-z 0.051 -urdf -param robot_description -model robot_description" respawn="false" output="screen" /> <!-- Controller Manager --> <include file="$(find pr2_controller_manager)/controller_manager.launch" /> <!-- load controllers --> <node name="diffdrive" pkg="gazebo_plugins" type="gazebo_ros_diffdrive" respawn="true" output="screen"/> </launch> Then I'm launching the navigation.launch file: <launch> <master auto="start"/> <!-- Run the map server (you can run it here or in another terminal) --> <node name="map_server" pkg="map_server" type="map_server" args="$(find p2os_nav)/simple_office_map.png 0.1" respawn="true"/> <!--- Run AMCL --> <include file="$(find p2os_launch)/amcl.launch" /> <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen"> <rosparam file="$(find p2os_launch)/costmap_common_params.yaml" command="load" ns="global_costmap" /> <rosparam file="$(find p2os_launch)/costmap_common_params.yaml" command="load" ns="local_costmap" /> <rosparam file="$(find p2os_launch)/local_costmap_params.yaml" command="load" /> <rosparam file="$(find p2os_launch)/global_costmap_params.yaml" command="load" /> <rosparam file="$(find p2os_launch)/base_local_planner_params.yaml" command="load" /> <param name="base_global_planner" type="string" value="NavfnROS" /> <param name="conservative_reset_dist" type="double" value="3.0" /> <param name="controller_frequency" type="double" value="15.0" /> </node> </launch> but I get this error: [ WARN] [1358420453.650886449, 1103.836000000]: Waiting on transform from /base_link to /map to become 
available before running costmap, tf error: Shouldn't this transformation be published by the navigation stack itself? Should I create a node which publishes it? Originally posted by camilla on ROS Answers with karma: 255 on 2013-01-17 Post score: 0 Original comments Comment by georgebrindeiro on 2013-01-17: Does the Pioneer 3-DX model you use have a laser scanner? Is the tf between the laser scanner and /base_link being published? I have a feeling your problem might be related to amcl not being able to localize your robot because these are missing: http://ros.org/wiki/amcl#Transforms Comment by camilla on 2013-01-17: I tried to publish a static tf between odom and map and now I get this warn: [ WARN] [1358422420.491334367, 44.120000000]: The scan observation buffer has not been updated for 32.25 seconds, and it should be updated every 0.20 seconds. So you're right, probably there is some problem with the laser Comment by georgebrindeiro on 2013-01-17: This might interest you: http://answers.ros.org/question/28171/transform-from-base_link-to-map/ Comment by camilla on 2013-01-17: Thank you so much, george. Can you tell me how I can properly set the scan_topic in amcl? Comment by georgebrindeiro on 2013-01-17: What is the topic the simulated Hokuyo is publishing scans to? You have to remap that to the /scan topic in amcl.launch (or somewhere else). Comment by camilla on 2013-01-17: Done and the error disappeared! Thx! If you want to post your comment as an answer I'll mark it as right :) Comment by camilla on 2013-01-18: I have this problem now: http://answers.ros.org/question/52708/navigation-stack-problem/ Comment by georgebrindeiro on 2013-01-18: Which part of it was actually the right answer? Was it remapping the /scan topic? Comment by camilla on 2013-01-20: Yes, the fact that the problem was in the amcl file and that I have to remap the scan topic Answer: Run rxgraph to see what is going on: the Hokuyo laser needs to publish its scans to the same topic AMCL is listening to.
If needed, you can remap the /scan topic in amcl.launch. You may be interested in this related question. Originally posted by georgebrindeiro with karma: 1264 on 2013-01-21 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 12460, "tags": "ros, navigation, mapping, amcl, p2os" }
What does phase velocity physically represent, and why can it be superluminal?
Question: Phase velocity is defined as $v_p=\frac{\omega}{k}$ and is described in various textbooks as being the speed at which the phase of a wave propagates. If you have a wave train that is modulated by an envelope, then while the group velocity gives you the speed of the envelope the phase velocity gives you the speed of the wave within the envelope. Do $v_p$ and $v_g$ have any meaning if we are only considering one sinusoidal wave and not a superposition of such waves? I have been told that the speed of a single sinusoidal wave is its phase velocity, and that phase velocity is what we're really referring to when we talk about a wave's speed. However, how can phase velocity be greater than light if this is the case? The standard response seems to be that the group velocity is what determines the rate of information transfer, and it is this velocity that cannot exceed $c$. However, in special relativity a simple limit is set on the speed of any particle, and photons must travel only at speed $c$. If $v_p>c$ then does this not mean that the physical wave within the envelope is superluminal, and so the photons that comprise it are also superluminal? Answer: The short answer is: group velocity and phase velocity are just terms that help describe how frequency depends on wavelength in a material, and in specific instances can help give us information about how wave propagate in said material. However, at the end of the day, they're just mathematical quantities that aren't under any special obligation to have a neat physical interpretation. Now, for the slightly longer answer. As you might already be aware, purely sinusoidal waves are in reality a poor way of modeling real signals, since they're infinite both in time and space. Luckily for us, we can express any real life signal that has some spatial confinement as an integral of sinusoidal functions, and these sinusoidal functions are in many ways easier to handle. 
The tool that lets us do this is the Fourier transform, which basically says that given an arbitrary wave $\alpha(x,t)$ that depends on position and time, we can rewrite it as $$\alpha(x,t)=\int_{-\infty}^{\infty}A(k)e^{i(kx-\omega t)}dk$$ where $k$ is the wavenumber (basically the reciprocal of wavelength), $A(k)$ is the Fourier transform of the waveform at $t=0$ (which basically tells us how much of each wavelength the initial signal packet contains), and $\omega=\omega(k)$ is some function of the wavenumber (notation here shamelessly stolen from the wikipedia page on group velocity). So far, this is pure math-- all we've done is write a function in a different way. Now, remembering that $e^{i\theta}=\cos(\theta)+i\sin(\theta)$, you might realize that the integrand looks like an infinite sinusoidal wave traveling to the right at velocity $\omega / k$ for any given value of $k$ that we happen to be integrating over. This speed is the phase velocity $v_p$, and since $\omega$ is a function of $k$, $v_p$ is as well. The important thing to note is that there isn't necessarily a clean physical interpretation of this quantity, since the thing we physically observe is the integral of the sinusoids, not any individual components of this integral. About all we can say in general about the phase velocity is that it tells us how fast the crest of an infinite sinusoid of definite frequency would travel in our medium. But infinite sinusoids don't really transfer information, given that they're already present everywhere, so the phase velocity doesn't tell us anything about the rate of information transfer in any generality. So, it's perfectly possible for $v_p$ to be greater than $c$ for some specific value of $k$ as long as $\omega (k)$ is a function such that no signal can propagate faster than $c$. That being said, there are a few specific cases where phase velocity does have a physical interpretation.
Namely, if $\omega/k$ is a constant, then waves will travel at the phase velocity undistorted so that the phase velocity is in fact the rate of information transfer. Aside from EM waves in a vacuum, this is rarely the case in physics-- $\omega$ is rarely proportional to $k$ and thus the phase velocity ceases to have a single value or simple physical meaning. Finally, group velocity is defined as $\frac{\partial \omega}{\partial k}$ and so it doesn't really have much meaning for a single sinusoidal wave since derivatives depend on values around a point, not just at it. The group velocity is useful if our $\omega (k)$ is nearly linear, in which case $v_g$ gives the approximate rate of information transfer (this is exact if the dispersion is exactly linear, as with EM waves in a vacuum). Like before, this isn't true for all materials and almost every material will exhibit non-linear dispersion if pushed into an extreme enough regime. It can also be useful if the packet doesn't contain a large spread of frequencies or doesn't travel a long distance (basically, it's useful whenever we can readily approximate $\omega(k)$ as its first order Taylor expansion in the integral above). TL;DR- In general, how a wave propagates through a medium is a very complex function that both depends on the medium and the shape of the wave. However, for some simple cases, the phase velocity and group velocity can point us in the right direction and save a lot of unnecessary work.
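The "$\omega$ rarely proportional to $k$" point can be illustrated with a concrete dispersion relation. Deep-water gravity waves — a textbook dispersive example unrelated to the EM case discussed above — obey $\omega(k)=\sqrt{gk}$, for which the group velocity comes out as exactly half the phase velocity:

```python
import math

g = 9.81  # m/s^2

def omega(k):
    # Deep-water gravity-wave dispersion: omega = sqrt(g * k)
    return math.sqrt(g * k)

def v_phase(k):
    return omega(k) / k

def v_group(k, h=1e-6):
    # Numerical d(omega)/dk via a central difference
    return (omega(k + h) - omega(k - h)) / (2.0 * h)

k = 2.0
vp, vg = v_phase(k), v_group(k)
# Analytically v_g = (1/2) * sqrt(g/k) = v_p / 2: crests outrun the packet.
```

Here both velocities depend on $k$, so a packet containing many wavelengths spreads as it travels — the situation the answer contrasts with the linear $\omega \propto k$ case.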
{ "domain": "physics.stackexchange", "id": 58452, "tags": "waves, dispersion, phase-velocity" }
What are the ways to keep track of branches in the analysis?
Question: I'm going through an RNA-seq pipeline in R/Bioconductor and want to try multiple parameters at subsequent steps, for example, running clustering with different settings, running RegressOut or not on unwanted effects etc. That's a lot of "versions", even if I don't do combinations of these steps. How can I keep track of this, and my conclusions? I don't necessarily want to save the results. Save the different scripts with git (seems overkill) Make notes in the script itself Answer: The main purpose of git is to version code, which usually means sequential improvement of the codebase. While it is possible to use branches for multiple variants of the software, permanent branches are traditionally used for gradual integration of new features (i.e. dev/testing/master branches). Supporting multiple independent branches requires some investment, i.e. distributing common changes among branches via merge or cherry-pick. This is hard to manage when you have more than two or three branches. If you compare different methods of analysis, you probably want to compare the results between methods. Having them on separate branches makes it hard. In my opinion you should integrate all methods of analysis into the master branch. To avoid copy & paste, it is better to put common code in a library or an independent script. You can also specify a method as a run-time parameter of your pipeline, and create a meta-script which will execute all methods of interest. Once you have performed benchmarking, you shouldn't remove unused methods from your master branch. Having them is important for reproducible research, and your scripts could be used in the future for new datasets.
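The "method as a run-time parameter plus a meta-script" suggestion might look like the sketch below. The actual pipeline in the question is R/Bioconductor; this is a language-agnostic illustration with made-up step names, not the questioner's code:

```python
def cluster_kmeans(data, k=3):
    # stand-in for a real clustering step
    return f"kmeans(k={k}) on {len(data)} samples"

def cluster_hierarchical(data):
    return f"hierarchical on {len(data)} samples"

# Every variant lives on the same (master) branch, selected by name
# rather than by git branch, so results stay easy to compare.
METHODS = {
    "kmeans_k3": lambda d: cluster_kmeans(d, k=3),
    "kmeans_k5": lambda d: cluster_kmeans(d, k=5),
    "hierarchical": cluster_hierarchical,
}

def run_all(data):
    """The meta-script: execute every registered variant and keep
    the results side by side for benchmarking."""
    return {name: fn(data) for name, fn in METHODS.items()}

results = run_all([0] * 10)
```

Adding a new analysis variant is then one new entry in the registry, and retired variants stay in the codebase for reproducibility, as the answer recommends.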
{ "domain": "bioinformatics.stackexchange", "id": 103, "tags": "r, bioconductor, git, workflow-management, best-practice" }
MULTI-SESSION RTABMAP PROBLEM
Question: Hello everyone, I am working with rtabmap, and I want to use the multi-session feature of this package to merge local maps. I am following this tutorial: multi-session tutorial. However, I figured out that besides the .db files of the maps, I need the multi-session1Hz.ini file as well (like in the tutorial). The tutorial doesn't clearly show how to generate the multi-session1Hz.ini file. Can you tell me how to do that please? Thanks in advance! Originally posted by N.N.Huy on ROS Answers with karma: 32 on 2021-09-16 Post score: 0 Answer: With recent rtabmap versions, parameters are saved in the database, so you don't need the ini file. Anyway, the ini can be exported with Window->Preferences->Save settings (*.ini). At the top of the page: Update: While this tutorial should still work, it is slightly outdated as maps created with recent RTAB-Map versions don't require an INI file to be merged (parameters are saved in database for convenience). Follow this multi-session tutorial with RTAB-Map Tango instead. Originally posted by matlabbe with karma: 6409 on 2021-09-16 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by N.N.Huy on 2021-11-01: Thank you so much, sir!
{ "domain": "robotics.stackexchange", "id": 36923, "tags": "ros, slam, navigation, 3d-slam, rtabmap" }
How/Why are linear systems so crucial to computer science?
Question: I've begun to get involved with Mathematical Optimization quite recently and am loving it. It seems a lot of optimization problems can be easily expressed and solved as linear programs (e.g. network flows, edge/vertex cover, travelling salesman etc.) I know that some of them are NP-hard, but the point being that they can be 'framed as a linear program' if not solved optimally. That got me thinking: We've always been taught systems of linear equations and linear algebra all throughout school/college. And seeing the power of LPs for expressing various algorithms, it's kinda fascinating. Question: Although we have non-linear systems prevalent all around us, how/why are linear systems so crucial to computer science? I do understand that they help simplify the understanding and are computationally tractable most of the time, but is that it? How good is this 'approximation'? Are we over-simplifying, and are the results still meaningful in practice? Or is it just 'nature', i.e. the problems that are the most fascinating are indeed simply linear? Would it be safe to say that 'linear algebra/equations/programming' are among the cornerstones of CS? If not, then what would be a good counterexample? How often do we deal with non-linear stuff (I don't necessarily mean theoretically but also from a 'solvability' standpoint, i.e. just saying it's NP doesn't cut it; there should be a good approximation to the problem, and would it end up being linear?) Answer: The premise of the question is a little flawed: there are many who would argue that quadratics are the real "boundary" for tractability and modelling, since least-squares problems are almost as 'easy' as linear problems. There are others who'd argue that convexity (or even submodularity in certain cases) is the boundary for tractability. Perhaps what is more relevant is "why do linear systems admit tractable solutions?" which is not exactly what you asked, but is related. One perspective on this is composability.
Since the defining property of a linear system is that $f(x + y) = f(x) + f(y)$, this imparts a kind of "memorylessness" to the system. To build up a solution to a problem I can focus on individual pieces and combine them with no penalty. Indeed, the premise of most algorithms for flow is precisely that. This memorylessness imparts efficiency: I can break things into pieces, or work iteratively, and I don't lose by virtue of doing so. I can still make bad decisions (c.f. greedy algorithms) but the act of splitting things up itself doesn't hurt me. This is one reason why linearity has such power. There are probably many others.
{ "domain": "cstheory.stackexchange", "id": 5520, "tags": "optimization, big-picture, linear-programming, linear-equations" }
How do alpha particles contribute to pressure of nuclear decay products?
Question: A closed vessel with rigid walls contains 1 mol of $_{92}\ce{U}^{238}$ and 1 mol of air at 298K. Considering complete decay of uranium to $_{82}\ce{Pb}^{206}$, the ratio of final pressure to initial pressure of the system at 298K is? The given solution is that during this decay, 8 moles of alpha particles will be emitted, so the total moles of gas will be 9. At constant temperature and volume, pressure is proportional to moles of gas. So the answer is 9. My question is: are helium nuclei considered gas molecules, and should we count them while finding $n$ in the ideal gas equation? My thought process for this question was to find out the number of $\beta^{-}$ particles (electrons) emitted (which is 6 here) and then figure out that they would combine with 3 helium nuclei to form 3 helium atoms. Then I would use the ideal gas equation and the answer would come out to be 7. Answer: As long as it is an alpha particle, it is not a molecule (IUPAC), which is by definition neutral: An electrically neutral entity consisting of more than one atom (n>1). Rigorously, a molecule, in which n>1 must correspond to a depression on the potential energy surface that is deep enough to confine at least one vibrational state. In contrast, a molecular entity (IUPAC) covers ions too, so an alpha particle is a molecular entity: Any constitutionally or isotopically distinct atom, molecule, ion, ion pair, radical, radical ion, complex, conformer etc., identifiable as a separately distinguishable entity. When an alpha particle becomes a molecule, it is not an alpha particle anymore, as an alpha particle is an ion, a nucleus of helium-4. When they slow down enough, they are able to take electrons from literally any atom but helium. Well, in the first step even from helium, as $$\ce{He^2+ + He -> 2 He+}$$ In the idealized case when no high energy particle leaves the idealized container, the total charge balance remains neutral.
There would be neutral atoms, molecules, ion pairs, ions and electrons that would eventually recombine to neutral $\pu{1 mol}$ of lead-206 and $\pu{8 mol}$ of helium-4, together with $\pu{1 mol}$ of air ($\pu{1 mol}$ of any aerial molecules). Note that long before all the uranium decays (which would take much longer than the age of the universe), the lead would already be oxidized by oxygen.
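The mass- and charge-balance arithmetic behind the question's figures (8 alphas, 6 betas, pressure ratio 9) can be checked in a few lines:

```python
# U-238 -> Pb-206: balance mass number with alphas, charge with betas.
A_U, Z_U = 238, 92
A_Pb, Z_Pb = 206, 82

n_alpha = (A_U - A_Pb) // 4          # each alpha removes 4 mass units -> 8
n_beta = 2 * n_alpha - (Z_U - Z_Pb)  # each alpha drops Z by 2, each beta raises it by 1 -> 6

# At fixed T and V, pressure is proportional to moles of gas. Uranium and
# lead are solids, so only the air counts initially; after complete decay
# the helium (whatever its transient charge state) is also gas.
n_initial = 1                  # 1 mol air
n_final = n_initial + n_alpha  # 1 mol air + 8 mol He
pressure_ratio = n_final / n_initial  # 9
```

The electrons from beta decay neutralize helium ions but do not change the count of gas particles, which is why the ratio is 9 rather than 7.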
{ "domain": "chemistry.stackexchange", "id": 17515, "tags": "radioactivity, ideal-gas" }
Notation for specific/volumetric entropy
Question: I'm working on a problem related to entropy, and for this particular use case it's advantageous to write down properties in 'specific' units. In the case of entropy (with units of Joules over Kelvin, which I think is commonly written as $[S] = \pu{J K-1}$), I want to know how to communicate volumetric entropy, i.e. Joules per cubic meter Kelvin: $[\color{red}{?}] = \pu{J K-1 m-3 }$. Is it something like $\bar{S}$? Answer: Specific entropy According to the IUPAC "Green Book" Quantities, units, and symbols in physical chemistry, specific entropy is denoted as lowercase latin "s": $s$ [1, p. 56], whereas $\bar{S}$ would refer to molar entropy: \begin{array}{lllll} \text{Name} & \text{Symbol} & \text{Definition} & \text{SI unit} & \text{Notes} \\ \hline [...]\\ \text{molar quantity}~X & X_\mathrm{m}, (\bar{X}) & X_\mathrm{m} = X/n & [X]/\pu{mol} & 5,6 \\ \text{specific quantity}~X & x & x = X/m & [X]/\pu{kg} & 5,6 \\ [...]\\ \end{array} [...] $(5)$ The definition applies to pure substance. However, the concept of molar and specific quantities (see Section 1.4. p. 6) may also be applied to mixtures, n is the amount of substance (see Section 2.10, notes 1 and 2, p. 47). $(6)$ $X$ is an extensive quantity, whose SI unit is $[X]$. In the case of molar quantities the entities should be specified. Example $V_\mathrm{m,\ce{B}} = V_\mathrm{m}(\ce{B}) = V/n$ denotes the molar volume of $\ce{B}$. Just as specific heat capacity $c$, specific entropy $s$ is measured in $\pu{J K-1 kg-1}$ [1, p. 90]. General note [1, p. 6]: The adjective specific before the name of an extensive quantity is used to mean divided by mass. When the symbol for the extensive quantity is a capital letter, the symbol used for the specific quantity is often the corresponding lower case letter. Volumetric entropy Quite often there is no special notation used, and volumetric entropy is denoted with $S$.
Specifically, volumetric entropy generation rate for the convective heat transfer in flowing viscous fluid is often denoted with S-prime notation, e.g. from classical paper "A Study of Entropy Generation in Fundamental Convective Heat Transfer" [2]: \begin{align} & S' & [\pu{W K-1 m-1}] \\ & S'' & [\pu{W K-1 m-2}] \\ & S''' & [\pu{W K-1 m-3}] \end{align} References IUPAC “Green Book”. Quantities, units, and symbols in physical chemistry, 3rd ed.; Cohen, R. E., Mills, I., Eds.; IUPAC Recommendations; RSC Pub: Cambridge, UK, 2007. ISBN 978-0-85404-433-7. Bejan, A. J. Heat Transfer 1979, 101 (4), 718–725. DOI: 10.1115/1.3451063.
{ "domain": "chemistry.stackexchange", "id": 9231, "tags": "physical-chemistry, entropy, notation" }
Video of Earth spinning?
Question: If the Earth is spinning or rotating at a really fast speed, why haven't we seen any videos from space of it spinning when we get a lot of photos of it? Answer: First of all, let's calculate the rotational velocity of the Earth, at the equator. The diameter of the earth is 12,756 km. Therefore, the speed is ${12,756km*\pi}/{24 hrs}=1669 km/hr$. Given that we know the rate of spin of the Earth, what else would be required to see this change? The point of reference would need to be relative to the Sun, or possibly the Earth-Sun system, but not LEO like most spacecraft are. Geosync. orbit would not work either, as the satellite is moving at the same speed as the Earth. A very high orbit might work, such as the one the Moon is in. In order to achieve such a point of reference, a significant distance from the Earth is required. Given that a large distance is required, the Earth will not appear to be moving as quickly as it would otherwise. Just to compare, I pulled the speed of the Moon orbiting the Earth from Wikipedia. That speed is 1.022 km/s, or about 3,680 km/hr. That is a much higher speed than the Earth's rotational speed. Given that the distance requirements to see the rotational speed of the Earth are similar, it seems that one would have to notice the Moon orbiting if one could notice the rotation of the Earth in real time. (Note: it's a bit easier to see the Earth rotating, due to the easy comparison, but...)
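As a quick sanity check of the arithmetic above (a sketch added for illustration, not part of the original answer):

```python
import math

# Equatorial rotation speed from the Earth's diameter, as in the answer.
earth_diameter_km = 12756
rotation_speed = earth_diameter_km * math.pi / 24  # km/hr

# The Moon's orbital speed, converted from km/s to km/hr.
moon_speed = 1.022 * 3600  # km/hr

print(rotation_speed, moon_speed)
```

Both values agree with the figures quoted in the answer (1669 km/hr and about 3,680 km/hr); the Moon's orbital speed comes out a bit over twice the equatorial rotation speed.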
{ "domain": "physics.stackexchange", "id": 11070, "tags": "rotation, earth" }
Calculate the path sum of a binary tree
Question: I want to calculate the sum of all possible paths from root to leaf. Each node has the value of an integer number (decimal). In the following example:

    1
   / \
  2   4
 /   / \
3   5   6

Possible paths are 123, 145, and 146, which add up to 414. In this example, 12 is not a valid path since 2 is not a leaf node. I did basic tests and it works, and I'm wondering if the code is logically correct, and how I can improve its performance.

class Node:
    def __init__(self, value, left, right):
        self.value = value
        self.left = left
        self.right = right

def sum(node, prefix):
    r1 = 0
    r2 = 0
    if not node:
        return
    if node and not node.left and not node.right:
        return prefix*10 + node.value
    if node.left:
        r1 = sum(node.left, prefix*10 + node.value)
    if node.right:
        r2 = sum(node.right, prefix*10 + node.value)
    return r1 + r2

if __name__ == "__main__":
    node3 = Node(3, None, None)
    node5 = Node(5, None, None)
    node6 = Node(6, None, None)
    node4 = Node(4, node5, node6)
    node2 = Node(2, node3, None)
    node1 = Node(1, node2, node4)
    print sum(node1, 0)

Answer: What you really need is comments. Why are you multiplying prefix by 10? Let's see, if sum() is first called with ... I need to figure it out! Your code should have comments that clearly explain what is going on. When I first glanced through your function, I noticed you calling sum() with two arguments. sum() takes only one argument (an iterable) unless you also have an initial argument. Oh, oh, you are calling your own sum function. It is a bad idea to shadow built-ins. If you actually wanted to use the built-in sum function, you might get some unexpected behavior that could be difficult to debug. Also, future readers (which might include you) could be confused, as I was. You should use default arguments. If you want to create a leaf node, it is much easier to use Node(3) than Node(3, None, None). It is also nicer to say Node(8, right=node2) than Node(8, None, node2). To implement, simply put =None after the left and right parameters when you define __init__().
Also, sum() would benefit from a default value of 0 for prefix. The person using the function might wonder, "Why do I always need to provide this argument? It is always the same." Making it a default argument means that it can be called with just sum(node1) instead of sum(node1, 0). If the user really wants to find the sum from the middle, he can still provide that argument. Starting from the middle, however, will probably always be something that only your function uses. Your function is meant to return a number. Certain parts of the function depend on other calls returning a number. Therefore, I would change if not node: return to if not node: return 0 You use prefix*10 + node.value three times. It might be a good idea to define a variable for that. While it will be calculated a maximum of only two times per function call, it's something, and it's easy to fix.
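Folding the reviewer's suggestions together (default arguments, a function name that no longer shadows the built-in sum, returning 0 for an empty tree, and a local variable for the repeated expression), one possible Python 3 revision — a sketch, not code from the original review — is:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def path_sum(node, prefix=0):
    """Sum of the decimal numbers spelled by every root-to-leaf path."""
    if node is None:
        return 0  # consistent numeric return value
    number = prefix * 10 + node.value  # the repeated expression, computed once
    if node.left is None and node.right is None:
        return number  # leaf: this path's completed number
    return path_sum(node.left, number) + path_sum(node.right, number)

# The tree from the question: paths 123, 145, 146
root = Node(1, Node(2, Node(3)), Node(4, Node(5), Node(6)))
print(path_sum(root))  # 414
```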
{ "domain": "codereview.stackexchange", "id": 20871, "tags": "python, algorithm, python-2.x, binary-search" }
custom message fails to build; no such file or directory
Question: I followed the custom message documentation to the letter, and searched all the related questions on here, and unfortunately I am still stuck. Could you please have a look? I am getting the same error building in Ubuntu 14.04 and 16.04, both running Kinetic. All my packages that have custom messages fail to build (catkin_make exits with error). The custom message definitions are part of the package, and are not from another package. The error reason is the same for all the packages with custom messages (from my limited understanding the .msg file is not found for some reason):

fatal error: ... : No such file or directory

The package organization follows the documentation: the custom message file is called my_msg.msg and is inside the msg directory. The contents of my_msg.msg are not suspect either:

int32 my_int
float64 my_float

The package.xml does contain the needed lines:

<build_depend>message_generation</build_depend>
<run_depend>message_runtime</run_depend>

The CMakeLists.txt contains everything the documentation mentions, and I even ran catkin_create_pkg test to make sure all the items are arranged in the correct order.
Here is the file:

cmake_minimum_required(VERSION 2.8.3)
project(g_custom_messages)

find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
  message_generation
)

add_message_files(
  FILES
  my_msg.msg
)

generate_messages(DEPENDENCIES std_msgs)

catkin_package(CATKIN_DEPENDS message_runtime)

include_directories(include ${catkin_INCLUDE_DIRS})

add_executable(custom_msg_subscriber src/custom_msg_subscriber.cpp)
target_link_libraries(custom_msg_subscriber ${catkin_LIBRARIES})

and in the source code (file name is custom_msg_subscriber.cpp and it is inside the src directory), I am referencing the custom message as #include <g_custom_messages/my_msg.h>, and am accessing the data via:

void poseMessageReceived(const g_custom_messages::my_msg &msg)
{
    storeInt = msg.my_int;
    storeFloat = msg.my_float;
}

All the web search gymnastics I did have not helped. Could you please let me know why I am getting an error? Originally posted by BuilderMike on ROS Answers with karma: 247 on 2018-10-26 Post score: 2 Original comments Comment by gvdhoorn on 2018-10-27: Please check the catkin documentation on how to do this. I'm fairly certain you're missing the add_dependencies(..) bit to let CMake know that custom_msg_subscriber depends on the msgs having been generated. Comment by gvdhoorn on 2018-10-27: Additionally: for the future: please don't post screenshots of terminals. It's all text, so you can just copy-paste it into your question. Comment by gvdhoorn on 2018-10-27: And for reference: updating the CMakeLists.txt of nodes using your custom message is explained in the next tutorial. Comment by BuilderMike on 2018-10-27: @gvdhoorn Thank you very much for helping. I am learning ROS on my own and I really appreciate your help. Thank you! Comment by gvdhoorn on 2018-10-28: No problem. Good to hear you got things to work. Answer: gvdhoorn was absolutely right. Thank you.
If you follow the msg and srv tutorial (http://wiki.ros.org/ROS/Tutorials/CreatingMsgAndSrv), which is what I did, the only mention of add_dependencies(...) is in a link at the end of section 5... which I must have missed. When looking over the next tutorial (as gvdhoorn pointed out; see: http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29 ), you can see how add_dependencies(source_file_name package_name_generate_messages_cpp) is used to build projects with custom messages. Adding add_dependencies(...) to all the CMakeLists.txt files in the packages with custom messages fixed my catkin build problems. Originally posted by BuilderMike with karma: 247 on 2018-10-27 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by adde on 2020-03-25: Hi, I had the same problem and when I added the 'add_dependencies' some compilation errors were fixed. But one type of error remains: it seems that the namespace declared in the generated header (msg_name.h) does not exist. The scenario I have is the following: in file foo.h

#include "msg_name.h"
...
namespace_in_h_file::msg_name foo;
...

This compiles 9 times out of 10 with catkin build, and the error is that namespace_in_h_file is not a type. Any tips on how to proceed? Comment by Cullen SUN on 2022-08-15: yes, this does fix my problem. My code change can be found here. https://github.com/CullenSUN/robotics_se_project_5/commit/762b370b5ac814a7309c5292615739a25a4ec57f
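For reference, the fix described above amounts to a single extra line in the question's CMakeLists.txt, placed next to the add_executable target (target and package names taken from the question; ${PROJECT_NAME}_generate_messages_cpp is the conventional catkin-generated target name):

```cmake
add_executable(custom_msg_subscriber src/custom_msg_subscriber.cpp)
# Make sure the g_custom_messages/my_msg.h header is generated before
# this target is compiled:
add_dependencies(custom_msg_subscriber ${PROJECT_NAME}_generate_messages_cpp)
target_link_libraries(custom_msg_subscriber ${catkin_LIBRARIES})
```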
{ "domain": "robotics.stackexchange", "id": 31971, "tags": "ros, catkin-make, ros-kinetic, std-msgs, custom-message" }
How does the MMR vaccine affect lymph nodes in preventing measles?
Question: I am trying to understand this statement about the Measles part of the MMR (Mumps, Measles and Rubella) vaccine Measles prevention: MMR (AB protect during primary and secondary viremia) Measles enters from the respiratory tract, through the mucosa of the upper respiratory tract. The virus passes into cervical lymph nodes. Now the first viremia starts (MMR somehow helping here), which lasts about 5 days. Then antibodies are secreted during 15 days, which leads to a rash lasting about 3 days. I think this second viremia refers to the possible dissemination of the virus to other lymph nodes by blood. This suggests to me that the vaccine primarily has a mechanism that acts in the lymph nodes or generally in the lymphatic circulation. How does the MMR vaccine affect lymph nodes in preventing measles? Answer: Let's define viremia first: The primary viremia occurs in the cells which the virus enters first in the body. There it replicates to high titers before it leaves these cells, spreads through the body and infects other cells. This is then the second viremia. For measles the primary viremia occurs in the respiratory epithelium and in the local lymph nodes. The second viremia then occurs in skin, conjunctiva, respiratory tract, and other distant organs. In this phase the characteristic rash also occurs, due to a hypersensitivity reaction of the skin. For details see here and here. The measles vaccine works like any other vaccine: by raising an immune response against the vaccination virus. This leads to a normal immune response which leads to the production of specific antibodies against the measles virus and also to immunologic memory. These antibodies are available to fight the virus when it enters the body of a vaccinated person. This also triggers the proliferation of virus-specific memory B cells which then differentiate into plasma cells and produce vast amounts of protective antibodies. So there is no special mechanism involved here beyond immunologic memory.
{ "domain": "biology.stackexchange", "id": 2307, "tags": "microbiology, virology, virus" }
What is the pressure drop in a venturi with a compressible fluid?
Question: I would like to know if there is an equation to predict the pressure drop in a venturi device using a compressible fluid as the working medium. In particular, I'd like to use this equation to predict the amount of vacuum created for a given $\Delta p$ across the device. Simplifying assumptions can be made, such as equivalent inlet and outlet areas. Answer: Yep. Bernoulli's equation has its compressible form.
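To make the answer a bit more concrete, here is a numerical sketch of the compressible (isentropic) Bernoulli relation combined with mass conservation, solved by bisection for the throat conditions. All numbers (air, a 4:1 area contraction, the inlet state) are illustrative assumptions, not data from the question:

```python
gamma = 1.4                 # ratio of specific heats for air (assumed)
p1, rho1 = 101325.0, 1.2    # inlet pressure [Pa] and density [kg/m^3] (assumed)
v1 = 20.0                   # inlet velocity [m/s] (assumed)
area_ratio = 4.0            # A1 / A2 (assumed)

def energy_residual(r):
    """r = rho2/rho1; zero when isentropic Bernoulli + continuity both hold."""
    v2 = v1 * area_ratio / r                        # continuity: rho*A*v = const
    enthalpy = gamma / (gamma - 1) * p1 / rho1 * (1 - r ** (gamma - 1))
    return enthalpy + (v1 ** 2 - v2 ** 2) / 2

# Bisection for the subsonic root: residual is positive at r=0.3, negative at r=1.
lo, hi = 0.3, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if energy_residual(mid) > 0:
        lo = mid
    else:
        hi = mid

r = 0.5 * (lo + hi)
p2 = p1 * r ** gamma        # isentropic relation: p / rho^gamma = const
dp = p1 - p2                # pressure drop at the throat (the "vacuum" created)
print(r, dp)
```

At these low Mach numbers the result stays close to the incompressible estimate $\rho(v_2^2-v_1^2)/2$; the compressible correction grows as the throat approaches Mach 1, after which the flow chokes and this subsonic branch no longer applies.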
{ "domain": "physics.stackexchange", "id": 2594, "tags": "fluid-dynamics, ideal-gas, flow" }
How fast can a VEX robotics light sensor operate?
Question: In my engineering class we must make a machine that can sort 15 marbles of 3 different types in less than 2 minutes. I plan on having one marble go over/under a light sensor with a flashlight on the opposite side so it can read its reflection, then have a gear with three cups spin according to what the light sensor value is. Do you think that the light sensor will be able to read the value and spin in less than 1 second? Or should I put something to stop the marble while it is being read? Answer: The light sensor should not be a problem. Depending on the system you use, this should work in milliseconds or less. The mechanical spinning of the cup will probably be the limiting factor. 1 s may be possible, but balls probably drop faster than this if you don't have a gate. I would probably add a gate or use a different method to control the balls. The other obvious issue to me is how the balls are flowing in. If they are near-continuous, do you need to be able to separate two balls of the same type? In general, you should try to design a flexible system so that you can add and remove things like a gate as needed, as there will likely be some issues you haven't thought of yet.
{ "domain": "engineering.stackexchange", "id": 217, "tags": "electrical-engineering, sensors, optics, robotics" }
Is it possible to falsify the $SU(2)_{lepton, left}*SU(2)_{quark, left}*U(1)$ symmetry group as an alternative candidate for GSW Model?
Question: We know that the current symmetry of GSW is $SU(2)_{fermions, left}*U(1)$, and the correct representation of the $SU(2)_{fermions, left}$ is the $2+2$ representation. I want to know what is the reason we don't consider the symmetry group to be $SU(2)_{lepton, left}*SU(2)_{quark, left}*U(1)$? Considering quarks and leptons are both represented in the fundamental representations. I guess that it can lead to a fine-tuning problem like why the couplings to both sectors (leptons and quarks) are the same? (Surely if the couplings are the same then according to the unique Higgs vev the masses of bosons in both sectors turn out to be the same). Am I right?! Then to ease this problem doesn't one have to introduce another symmetry?!! But is there any deeper reason that can really falsify this symmetry group as an alternative to the current one? Answer: Because it is already falsified, by any scattering experiment that produces leptons from weak decay of hadrons, or the other way around. Your model forbids the $W$ bosons emitted by quarks to decay into leptons, and the other way around. We observe such decays.
{ "domain": "physics.stackexchange", "id": 68051, "tags": "quantum-field-theory, particle-physics, symmetry, standard-model, beyond-the-standard-model" }
Model from an aggregate
Question: I’m in a place where we’re unable to train models on data due to GDPR. What I want is to predict people getting a job (y) given (x,x,x,x…) their employment type working full time or part time, work sector, area and age. Since I cannot train on real data, I created an aggregate from the real data. Then used numpy.random.choice to generate a synthetic data frame with data from the aggregate. Then created a decision tree boost model that had a good f1 score and accuracy. The behaviour of the model is similar to the aggregate. When I presented this to people, they informed me that what I did is not a real model and that by recreating data from an aggregate, you lose important correlations from the data for the individuals represented in the dataset. I disagree on these points since the model is acting based on the aggregate. After reading, I figured out that you do not need to have correlations to build models. There are many research papers based on nonlinear datasets with no/few correlations. I don’t understand how a model from an aggregate loses its statistical traits. Answer: There are correlations regarding the creation dates or maybe the geographical zone, but if you keep the study in a similar time frame, you shouldn't lose anything important. There is a correlation loss as long as there are links between individuals and the environment (ex: time and place). If there are no links between individuals, I don't see why it is not meaningful to make an aggregate. In addition, most models need random data to find statistical traits; one exception is time series, because those models learn from sequential data that couldn't be randomized for obvious reasons. It would be very interesting to know the reason why an aggregate loses its statistical traits (apart from the time frame).
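A minimal sketch of the workflow described in the question — publish only aggregate statistics, then resample synthetic individuals with numpy.random.choice. The categories, rates, and sample size are hypothetical stand-ins, not the real aggregate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical aggregate: category frequencies and per-category hire rates.
categories = ["full_time", "part_time"]
p_category = [0.7, 0.3]
p_hired = {"full_time": 0.6, "part_time": 0.4}

# Resample synthetic individuals from the aggregate.
n = 5000
employment = rng.choice(categories, size=n, p=p_category)
hired = np.array([rng.random() < p_hired[e] for e in employment])

# By construction the synthetic sample reproduces the aggregate, but any
# dependence between features that the aggregate does not record (e.g. joint
# structure across sector, area and age) is gone -- which is both the
# critics' point and the limit of what such a model can learn.
```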
{ "domain": "datascience.stackexchange", "id": 11145, "tags": "statistics, machine-learning-model, decision-trees, beginner" }
The Parallelepiped on a Sloping Plane: A Reconsideration of an Old Friend
Question: Consider the following elementary situation, which all the students of physics meet during their first-year study of classical mechanics. A homogeneous rigid parallelepiped is at rest on a sloping plane, under the action of the weight force. Let $\alpha$ be the slope of the plane with respect to the horizontal ground. Is there some elementary way of determining the distribution of the reaction forces acting at each point of the parallelepiped's base? Clearly, if $\alpha=0$, we expect that no friction is present and that the perpendicular reaction force is constant everywhere. But what is the answer for a general $\alpha$? Mathematically, the problem is indeterminate, since we consider the parallelepiped a rigid body, and there are for sure infinitely many distributions (of reaction forces and internal forces) compatible with this static situation. But I think some reasonable assumption should let us find a unique solution. I am particularly interested in the distribution of the perpendicular reaction forces, that is in the pressure. Given the symmetry of the problem, we can reduce it to a 2-dimensional one, in which the parallelepiped is replaced by a rectangle with surface density $\sigma$ which has a base along a straight line $x$ which meets the ground at an angle $\alpha$. Let $Oxy$ be a cartesian system, and let $R(\xi)=(R_{x}(\xi),R_{y}(\xi))$ be the reaction force per unit length acting on the rectangle at the point $(\xi,0)$ of its base. The issue is to determine $R(\xi)$. Note that, whatever the distribution of the friction forces $R_{x}$, the total moment of $R_{x}$ with respect to the center of mass of the rectangle is determined (its magnitude is equal to $\frac{h}{2} Mg \sin \alpha$, where $h$ is the height of the rectangle and $M$ its mass). So, the total moment of the forces $R_{y}$ with respect to the center of mass is known too: it must be opposite to that of the $R_{x}$. What assumption can we add to find the distribution of $R_{y}$? 
In the special case $\alpha=0$, we can reason as follows. If you consider a vertical little strip of the rectangle, it is reasonable to assume that the neighboring parts of the rectangle exert no force on it, so that $R_{y}(\xi)=\sigma g h$ for all $\xi$ on the base. But how to reason for a general $\alpha$? Thank you very much in advance. PS I have never taken a course in continuum mechanics, so I apologize if my question should turn out to be trivial. Answer: Actually, our problem is indeterminate also in the special case in which $\alpha=0$ and there is no friction, as the following reasoning shows. Let $L$ be the length of the base of our original rectangle $T$, and imagine cutting $T$ along the segment $S$ with endpoints $(\frac{L}{2},0)$ and $(\frac{L}{2}+\epsilon,h)$ for small enough $\epsilon > 0$, so that each one of the two resulting trapezoids $T_1$ and $T_2$ is in equilibrium on the plane (with no contact forces between $T_1$ and $T_2$). This corresponds mathematically to assuming that there are no internal forces in $T$ along $S$. Call $R(\xi)$ a solution of this configuration. Then $R_{y}(\xi)$ cannot be constant, since each trapezoid must support exactly its own weight: \begin{equation} \int_{0}^{\frac{L}{2}} R_{y}(\xi)\, d \xi = \frac{g \sigma h \left( L + \epsilon \right)}{2} > \int_{\frac{L}{2}}^{L} R_{y}(\xi)\, d \xi = \frac{g \sigma h \left( L - \epsilon \right)}{2}. \end{equation} Since this solution is as "physically reasonable" as the one found above in which $R_{y}(\xi)$ is constant, we conclude that our problem cannot be solved without information about the internal forces inside $T$.
{ "domain": "physics.stackexchange", "id": 39022, "tags": "classical-mechanics, elasticity, continuum-mechanics" }
How to calculate the charge on silicates?
Question: An isolated $\ce{SiO4^4-}$ ion has $-4$ charge. These discrete tetrahedral units form the ortho-silicates. As these tetrahedral units combine, they form various types of silicates. How do we calculate the total charge on various forms of silicates? Examples: $\ce{Si4O11^6-}$ has $-6$ charge and so does $\ce{Si2O7^6-}.$ But I am not able to understand how to calculate this charge. Is there any general rule? Answer: In this answer, the basic concept of how to understand what figure they are and how much charge they have would be discussed. As OP has well stated, the basic unit of any silicate is the $\ce{SiO4^4-}$ tetrahedron which has four oxygens that are connected to one silicon atom. Each unshared oxygen provides -1 charge and each shared oxygen contributes 0.5 oxygens to one tetrahedral unit. Now, depending on how these units interconnect and how oxygens are shared, we get multiple different structures. I shall discuss three of them here since that should be enough to get the idea across. Type 1: Chain Silicates In this type of silicates, as you can see above, the tetrahedron shares one oxygen on either side and so the basic unit here would be the $\ce{SiO3^2-}$ unit, since two oxygens are being shared. Since two oxygens are shared per tetrahedral unit, the basic unit would have 3 (2 unshared + 0.5 * 2 shared) oxygens and 2- charge. Type 2: Pyro silicates Here, the one oxygen in the middle is shared by the two tetrahedral units. This means that there is a total of 6 free oxygen atoms that are only bonded to the silicon with a single bond. Therefore the structure arises as $\ce{Si2O7^6-}$. Since one oxygen is shared per tetrahedral unit, the basic unit would have 3.5 (3 unshared + 0.5 * 1 shared) oxygens and 3- charge. However since the values need to be integers $\ce{SiO_{3.5}^3-}$ would be written as $\ce{Si2O7^6-}$ Type 3: 3D Silicates In this case, all the oxygen atoms are shared leading to the formation of $\ce{(SiO2)_n}$ molecules. 
Further Reading https://www.adichemistry.com/inorganic/p-block/group-14/silicates/silicates-1.html
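The counting rule used above can be captured in a few lines (a sketch added for illustration): per SiO4 tetrahedron, each shared (bridging) oxygen counts as half an oxygen and contributes no charge, while each unshared oxygen counts fully and contributes -1:

```python
def tetrahedron_unit(shared):
    """(oxygens per Si, charge per Si) when `shared` of the 4 corners bridge."""
    unshared = 4 - shared
    oxygens = unshared + 0.5 * shared
    charge = -unshared
    return oxygens, charge

# shared = 0: SiO4^4- (ortho), 1: Si2O7^6- (pyro, i.e. SiO3.5^3- per Si),
# 2: SiO3^2- (chain), 3: Si2O5^2- (sheet), 4: SiO2 (3-D framework)
for shared in range(5):
    print(shared, tetrahedron_unit(shared))
```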
{ "domain": "chemistry.stackexchange", "id": 15433, "tags": "inorganic-chemistry, carbon-family" }
Limiting the save content of file to certain size
Question: The goal here is to only save (to file) the most recent Objects created if Object::maxMemoryOfObjects is exceeded.

#include <iostream>
#include <list>
#include <algorithm>
#include <fstream>
#include <map>

struct Object {
    const int ID;
    Object() : ID(++lastObjectID) {}
    static const int maxMemoryOfObjects = 200;
    static int totalMemoryOfObjects, lastObjectID;
};
int Object::totalMemoryOfObjects = 0, Object::lastObjectID = 0;

struct Location {
    std::list<Object*> objectsLyingAround;
    Location() {
        for (int i = 0; i < 12; i++) {
            Object* object = new Object;
            objectsLyingAround.push_back(object);
            Object::totalMemoryOfObjects += sizeof(*object);
        }
    }
};

struct Person {
    void visitLocation (const Location*) {}
};

std::list<Location*> itinerary;

void saveObjects() {
    std::ofstream outfile ("Objects.txt");
    std::map<int, Object*/*, std::greater<int>*/> objectMap;
    for (Location* x : itinerary)
        for (Object* o : x->objectsLyingAround)
            objectMap.emplace (o->ID, o);
    std::cout << "Objects being saved (from oldest to newest) IDEALLY are:\n";
    for (const auto& x : objectMap)
        std::cout << x.first << ' ';
    if (Object::totalMemoryOfObjects > Object::maxMemoryOfObjects) {
        std::cout << "\nAlert! totalMemoryOfObjects > maxMemoryOfObjects! Will save only the newest objects.\n";
        auto it = objectMap.begin();
        while (Object::totalMemoryOfObjects > Object::maxMemoryOfObjects) {
            Object::totalMemoryOfObjects -= sizeof(*it->second);
            std::cout << "totalMemoryOfObjects reduced to " << Object::totalMemoryOfObjects << std::endl;
            it = objectMap.erase(it);
        }
        std::cout << "Objects being saved (from oldest to newest) NOW are:\n";
        for (const auto& x : objectMap)
            std::cout << x.first << ' ';
    }
    outfile << objectMap.size() << std::endl;
    for (const auto& x : objectMap)
        outfile << x.first << ' ';
}

void loadObjects() {
    int numObjects;
    std::ifstream infile ("Objects.txt");
    infile >> numObjects;
    // etc...
}

int main() {
    Person sam;
    for (int i = 0; i < 5; i++) {
        Location* nextLocation = new Location;
        itinerary.push_back (nextLocation);
        sam.visitLocation (nextLocation);
    }
    saveObjects();
    loadObjects();
}

Output:

Objects being saved (from oldest to newest) IDEALLY are:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60
Alert! totalMemoryOfObjects > maxMemoryOfObjects! Will save only the newest objects.
totalMemoryOfObjects reduced to 236
totalMemoryOfObjects reduced to 232
totalMemoryOfObjects reduced to 228
totalMemoryOfObjects reduced to 224
totalMemoryOfObjects reduced to 220
totalMemoryOfObjects reduced to 216
totalMemoryOfObjects reduced to 212
totalMemoryOfObjects reduced to 208
totalMemoryOfObjects reduced to 204
totalMemoryOfObjects reduced to 200
Objects being saved (from oldest to newest) NOW are:
11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60

But the way I'm doing it ironically adds more memory to the Object class (its ID data member). Is there a better way to do this? We assume here that the Objects are being saved and loaded many times until megabytes or even gigabytes of data are being stored in memory, at which point a max should be enforced. Really old items that are lying around in Locations but never picked up are assumed to be discarded first, since they are probably forgotten anyway.

Answer: First: Crappy indentation makes it hard to read. Your algorithm seems unnecessarily complicated. Why not work out how many items you want to serialize and just serialize those items.

// Work out how many items we will need to drop.
std::size_t countOfItems = objectMap.size();
std::size_t maxNumberOfItems = Object::maxMemoryOfObjects/sizeof(Object);
std::size_t advance = (countOfItems > maxNumberOfItems) ?
    (countOfItems - maxNumberOfItems) : 0;

// Get an iterator to the first item.
// Then advance it forward the number of items we want to drop.
// These are the older items (as map is a sorted container).
auto printer = objectMap.begin();
std::advance(printer, advance);

// Print all the items using std::copy and a stream iterator.
// May not work out of the box for you but you should learn how to do this.
std::copy(printer, objectMap.end(), std::ostream_iterator<Object>(outfile));

Memory Ownership

Your code is littered with pointers. This is not a good idea in general (but hard to tell without more context in this case). RAW Pointers are very rare in C++ as they do not convey object ownership. Usually pointers are managed by some owning object that manages their lifespan (as a result you never interact with them directly but via their owning object). You should be using containers of objects (not pointers) or, if you have object hierarchies, then use "pointer containers" that understand their content is a pointer and can act accordingly. If you don't have pointer containers available (they are in boost) then you should at least be using smart pointers to maintain the concept of ownership.

Looks like an error:

    for (const auto& x : objectMap)
        outfile << x.first << ' ';
}

You are not serializing the object but only the ID. I think you meant:

    for (const auto& x : objectMap)
        outfile << *x.second << ' ';
}

But now you also need to define:

std::ostream& operator<<(std::ostream& str, Object const& data)
{
    return str << /* Serialize object */;
}
std::istream& operator>>(std::istream& str, Object& data)
{
    return str >> /* De-Serialize object */;
}

Looks like it might be an issue:

Object::totalMemoryOfObjects += sizeof(*object);

The size of an object in memory is always fixed. So you may as well count the number of objects. Note: sizeof() is computed at compile time, not runtime. If you want to add up the number of bytes in things like strings you need to write your own method to do it.
But when you use operator<< the object is generally serialized and the number of bytes can vary (unless you write your object in binary fixed-size format to the stream (but then you need to worry about machine peculiarities like endianness (so unless there is an extreme need for speed it is usually best to use a non-binary format))).

Declare variables JUST before you use them

std::ofstream outfile ("Objects.txt");
// 500 lines of code not using outfile
outfile << objectMap.size() << std::endl;
for (const auto& x : objectMap)
    outfile << x.first << ' ';
}

Move the declaration of outfile to the point just before you start using it. In C++, because of constructors/destructors, the side effects may allocate resources. If you don't need them yet, don't create them yet; wait until you need them. It also has the benefit of placing the declaration near the code that uses it, so it makes it really easy to see the type of the object you are using (as it is just there with the code).

Encapsulation

struct Person {

You seem to be making all your objects publicly accessible. This does not help encapsulation. You should make all your data members private by default. Then provide methods (which are verbs) that act on the object.

class Object
{
    static int const maxMemoryOfObjects = 200;
    static int totalMemoryOfObjects;   // defined (and initialized to 0) outside the class
    static int lastObjectID;           // defined (and initialized to 0) outside the class
    static int allocateNextAvailableID() { return ++lastObjectID; }
    const int ID;
    public:
        Object() : ID(allocateNextAvailableID()) {}
        // define an ordering on your objects.
        friend bool operator<(Object const& lhs, Object const& rhs)
        {
            // With an ordering you can use Object as the key
            // in a sorted container.
            return lhs.ID < rhs.ID;
        }
};

int main()
{
    std::set<Object> objectSet;
    // No need to allow access to the ID
    // by external entities that do not need to know.
    // Provide the appropriate access method (in this
    // case `operator<()`, which defines the ordering).
}
{ "domain": "codereview.stackexchange", "id": 12521, "tags": "c++, file" }
Finding a unitary transformation for a qubit mixed state that projects onto a pure state
Question: Suppose we have a single qubit mixed state described by a density matrix $\rho$, and we want to find a unitary transformation that brings $\rho$ to the pure state $|0\rangle\langle 0|$. Is there a matrix $U$ such that $U^{\dagger}U=I$ and that $U \rho U^\dagger=|0\rangle\langle 0|$, independent of the matrix elements of $\rho$?. That is, if for an arbitrary single qubit $\rho$, we can always find such projection with a fixed unitary $U$, with constant matrix elements, always the same $U$ regardless of the entries of $\rho$. EDIT: Probably this is not possible since $\rho$ lies inside the unit sphere and a unitary transformation necessarily preserves the norm, so one needs a non-unitary operation in order to do that. Is there any good reference where I could look into this issue? Answer: What you're asking is basically what is forbidden by the no-deleting theorem. If you want to work out things yourself, here's a (probably incomplete) reasoning. Suppose that $\rho$ is not pure. As such $\DeclareMathOperator{tr}{tr}\tr\left(\rho^2\right)<1$. Denote $\sigma=U\rho U^{\dagger}$. Then $\tr\left(\sigma^2\right)=\tr\left(U\rho^2U^\dagger\right)=\tr\left(\rho^2\right)<1$. Thus, $\sigma$ cannot be pure. For the task of mapping an arbitrary state $\rho$ to $|0\rangle\!\langle0|$, this is forbidden by the fact that the transformation you're asking for is unitary, and as such invertible.
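The purity argument in the answer is easy to verify numerically (a sketch; the Bloch vector and the unitary below are arbitrary choices, not from the question):

```python
import numpy as np

# A mixed single-qubit state rho = (I + r . sigma)/2 with |r| < 1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rx, ry, rz = 0.3, 0.2, 0.4
rho = 0.5 * (np.eye(2) + rx * sx + ry * sy + rz * sz)

# Some unitary U, obtained from the QR decomposition of a complex matrix.
A = np.array([[1 + 2j, 0.5 - 1j], [2 - 0.3j, 1 + 1j]])
U, _ = np.linalg.qr(A)

sigma = U @ rho @ U.conj().T
purity = lambda m: np.trace(m @ m).real

# tr(rho^2) = (1 + |r|^2)/2 = 0.645 here, and it is unchanged by U --
# so sigma can never be the pure state |0><0|, whose purity is 1.
print(purity(rho), purity(sigma))
```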
{ "domain": "quantumcomputing.stackexchange", "id": 4729, "tags": "quantum-state, density-matrix, bloch-sphere" }
Can pair end reads with high heterozygosity be used to polish PacBio assembly?
Question: I have used canu (correct) and smartdenovo to assemble PacBio reads. Next I am going to polish my assembly using Illumina paired-end reads with Pilon. However, I found there is high heterozygosity in my paired-end reads when I used jellyfish to do a k-mer analysis: there are two peaks in the k-mer frequency distribution plot. The size of my genome is about 190M, and it is a diploid organism. What I am unsure about is whether I can use this paired-end data to polish my assembly. If not, I will have to do the sequencing again. Answer: Resequencing your sample is not going to make it any less heterozygous. I am assuming that you are working with a diploid organism? Please update your question and specify the genome size and ploidy of your organism (the type of organism will help us update our answers, too). The answer: yes, you can use Pilon + short reads to correct the PacBio assembly even with a highly heterozygous sample. To do so you should use the --diploid option in Pilon to let it know that there will be heterozygous sites. The way that it corrects heterozygous SNPs and heterozygous indels is different, and it seems to perform best with SNPs and sometimes does not correctly identify heterozygous indels (source). Pilon itself doesn't handle phasing, but just corrects with whatever reads are in the bam file that it is provided with. The end result of providing a bam with reads from both haplotypes is that the corrected sequence will be a mosaic of the dominant allele at each heterozygous site, as observed in the short-read data. This may or may not select for some alleles based on the sequencing coverage bias of Illumina sequencers.
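The two-peak k-mer spectrum mentioned in the question comes from counting k-mers across the reads, which jellyfish does at scale. A toy Python version of the counting step (illustrative only, not a substitute for jellyfish; the reads and the "het site" are invented for the example):

```python
from collections import Counter

def kmer_counts(reads, k=4):
    """Count all k-mers across a set of reads (toy jellyfish)."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

# In a heterozygous diploid, k-mers overlapping a het site appear at roughly
# half the coverage of k-mers shared by both haplotypes, which is what
# produces the second (half-depth) peak in the k-mer frequency histogram.
reads = ["ACGTACGT", "ACGTACGT", "ACGTTCGT"]  # third read carries a "het" A->T
counts = kmer_counts(reads, k=4)
print(counts["ACGT"])  # shared k-mer, high count
print(counts["GTTC"])  # variant-specific k-mer, low count
```

Fitting a mixture to the resulting histogram (as tools like GenomeScope do on jellyfish output) is how the heterozygosity rate and genome size are estimated from the two peaks.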
{ "domain": "bioinformatics.stackexchange", "id": 462, "tags": "pacbio" }
Wave packet at $t<0$
Question: If there's a wave packet $\psi(x,t)$ in zero potential and $\psi(x,0)$ is given, what happens when $t<0$? I know that for $t>0$ the wave as a group moves at the group velocity $\frac{d\omega}{dk}|_{k_0}$, where we assume that the spectrum is strongly peaked around $k_0$. The width of the wave becomes wider as $t$ grows large. But what about $t<0$? Does the wave just become narrower? Answer: In the Heisenberg picture, the time evolutions of the position operator $X(t)$ and the momentum operator $P(t)$ of a free particle with mass $m$ are given by $$X(t)=X(0)+\frac{P(0) }{m}t, \quad P(t)=P(0). \tag{1} \label{1}$$ Defining $\langle A\rangle_\psi:=\langle \psi |A|\psi \rangle$ and $\left(\Delta_\psi A \right)^2:= \langle \left(A-\langle A \rangle_\psi \right)^2\rangle_\psi =\langle A^2 \rangle_\psi-\langle A\rangle_{\!\psi}^{\,2}$ for an arbitrary (hermitean) operator $A$ and an arbitrary state vector $|\psi\rangle$ (with normalization $\langle \psi |\psi \rangle=1$), we find $$\begin{align}\left(\Delta_\psi X(t)\right)^2&= \langle X(0)^2 \rangle_\psi+\frac{t}{m}\langle \{X(0),P(0)\} \rangle_\psi +\frac{t^2}{m^2}\langle P(0)^2 \rangle_\psi \\[5pt] & -\langle X(0) \rangle_{\! \psi}^{\,2}- \frac{2t}{m} \langle X(0)\rangle_\psi \langle P(0) \rangle_\psi -\frac{t^2}{m^2}\langle P(0) \rangle_{\! \psi}^{\,2} \\[5pt] &=\left( \Delta_\psi X(0) \right)^2 +\frac{t}{m} \left(\langle \{X(0),P(0)\}\rangle_\psi-2\langle X(0)\rangle_\psi\langle P(0)\rangle_\psi \right) +\frac{t^2}{m^2} \left( \Delta_\psi P(0)\right)^2,\end{align} \tag{2}$$ where $\{A,B\}:=AB+BA$ denotes the anticommutator of two operators $A$, $B$.
Assuming (without loss of generality) $\langle X(0) \rangle_\psi=0$, the time dependence of the square of $\Delta_\psi X(t)$ assumes the simple form $$\left( \Delta_\psi X(t) \right)^2 = \left( \Delta_\psi X(0) \right)^2+\frac{t}{m} \langle \{ X(0),P(0) \}\rangle_\psi+\frac{t^2}{m^2}\left( \Delta_\psi P(0) \right)^2, \tag{3} \label{3}$$ a parabola with its minimum at $$t_{\rm min}=-\frac{m \langle \{ X(0), P(0) \} \rangle_\psi}{2 \left(\Delta_\psi P(0) \right)^2} \tag{4} \label{4}$$ with minimum value $$ \left(\Delta_\psi X(t_{\rm min})\right)^2 = \left(\Delta_\psi X(0)\right)^2-\frac{\langle \{X(0),P(0)\}\rangle_{\! \psi}^{\, 2}}{4 \left(\Delta_\psi P(0)\right)^2} \ge 0 \tag{5} \label{5}$$ and $\Delta_\psi X(t) \to +\infty$ for $t\to \pm \infty$. Note that for the special case of the Gaussian wave packet $$\langle x |\psi \rangle = \frac{1}{(2\pi)^{1/4}\sigma^{1/2}} e^{-x^2/4\sigma^2} \tag{7} \label{7}$$ with $\Delta_\psi X(0) = \sigma$ and $\Delta_\psi P(0)= \hbar/2 \sigma$, the expectation value of the anticommutator of $X(0)$ and $P(0)$ vanishes and \eqref{3} simplifies to $$\left(\Delta_\psi X(t) \right)^2= \sigma^2+\frac{t^2 \hbar^2}{4 m^2 \sigma^2}.\tag{8}$$
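Numerically, eq. (8) makes the $t<0$ behaviour explicit: the width is an even function of $t$, minimal at $t=0$, so the packet was indeed narrowing as it approached $t=0$ from negative times and spreads symmetrically afterwards. A quick sketch in natural units (my own illustration):

```python
# Width of a free Gaussian wave packet, eq. (8):
#   (Delta x)^2 = sigma^2 + t^2 hbar^2 / (4 m^2 sigma^2)
hbar, m, sigma = 1.0, 1.0, 0.5

def width_sq(t):
    return sigma**2 + (t**2 * hbar**2) / (4 * m**2 * sigma**2)

# The parabola is symmetric about t = 0: the packet at t = -2 is exactly as
# wide as at t = +2, and the minimum width sigma occurs at t = 0.
print(width_sq(-2.0), width_sq(2.0))  # equal
print(width_sq(0.0))                  # sigma^2 = 0.25
```

For a general (non-Gaussian) packet the anticommutator term in eq. (3) shifts the minimum to $t_{\rm min}\neq 0$, but the parabola, and hence the narrow-then-spread behaviour, is the same.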
{ "domain": "physics.stackexchange", "id": 100142, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, dispersion" }
Formulating a function on Matlab for the Shannon interpolation formula
Question: I am trying to formulate an algorithm for applying the Shannon interpolation formula to the discrete signal $$x[n]=\frac{c^2}{4}\int\limits_{0}^{n T}y\left(\tfrac{c}{2}s\right)\,ds,$$ where $c$ is a constant. Now, if $\frac{1}{T}=f_{s}>2B$, where $B$ is the band-limit of $X(f)$, then we can reconstruct the continuous-time signal as: $$x_{\text{const}}(t)=\sum\limits_{n=-\infty}^{\infty}\frac{c^2}{4}{\int\limits_{0}^{n T}y\left(\tfrac{c}{2}s\right)\,ds}\cdot\text{sinc}\left(\frac{t-n T}{T}\right)$$

    function x = interpolate(y, N, T, c)
        x = 0;
        for n = 1:N
            x = x + ((c^2)/4) * integral(y, 0, n.*T) .* sinc((t - n.*T)./T);
        end
    end

However, I am having trouble with incorporating the t variable here. Setting it as global and running interpolate(y,20,20,1) also produces nothing of note. Note that the continuous-time signal in question is given by $$x_{\text{true}}(t)=\frac{c^2}{4}\int_{0}^{t}y\left(\tfrac{c}{2}s\right)\,ds.$$ Answer: Given your samples $x[n]$, here's a code to perform the interpolation by the sinc functions:

    F = 1000;      % the sampling frequency used for drawing the continuous function
    Fs = 50;       % the sampling frequency
    T = 1/Fs;      % the sampling time
    t = 0:1/F:1;   % the time samples for "continuous" plotting
    ts = 0:T:1;    % the time at the sample points

    % the original function. In your case, this would be the integral expression
    x = @(t) sin(2*pi*10*t) + cos(2*pi*11*t);
    xn = x(ts);    % the discrete sequence

    % perform interpolation from xn to x:
    interpolated = 0;
    for n = 1:length(ts)
        % n-1 due to the 1-based indexing of Matlab
        interpolated = interpolated + xn(n) * sinc((t-(n-1)*T)/T);
    end

    hold off;
    plot(t, x(t), 'r', 'linewidth', 3); hold on;   % plot continuous
    stem(ts, xn); hold on;                         % plot sampled
    plot(t, interpolated, 'k-o');                  % plot interpolated on top of the red curve

As a bonus, here's what happens when the signal is not sufficiently bandlimited: I changed the frequency of the first sine from $\sin(2\pi 10 t)$ to $\sin(2\pi 30 t)$. Note: the Shannon interpolation requires bandlimited signals, so in particular it does not work exactly with time-limited signals (since these are never band-limited). So, you will see some deviations, especially at the edges of the signal.
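The same reconstruction can be written with numpy if Matlab isn't at hand. A port of the answer's loop (my own sketch; note that numpy's np.sinc is the normalized sinc $\sin(\pi x)/(\pi x)$, matching Matlab's sinc):

```python
import numpy as np

Fs = 50.0                          # sampling frequency
T = 1.0 / Fs                       # sampling period
t = np.arange(0, 1, 1 / 1000.0)    # dense grid for "continuous" evaluation
ts = np.arange(0, 1, T)            # sample instants

x = lambda t: np.sin(2 * np.pi * 10 * t) + np.cos(2 * np.pi * 11 * t)
xn = x(ts)                         # the discrete sequence

# Shannon interpolation: sum of shifted sincs weighted by the samples.
interpolated = np.zeros_like(t)
for n, sample in enumerate(xn):
    interpolated += sample * np.sinc((t - n * T) / T)

# Away from the edges (where truncation of the infinite sum bites),
# the reconstruction matches the bandlimited signal closely.
mid = (t > 0.3) & (t < 0.7)
print(np.max(np.abs(interpolated[mid] - x(t)[mid])))  # small
```

At the sample instants themselves the reconstruction is exact, since sinc of a nonzero integer vanishes; the edge deviations are the time-limitation effect the answer warns about.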
{ "domain": "dsp.stackexchange", "id": 4775, "tags": "matlab, algorithms, interpolation" }
Estimation of occupied frequency bins (location of non-zero fourier coefficients)
Question: I work in the circuits field and am not very familiar with spectrum-sensing techniques. Is there a method to identify the locations of the non-zero Fourier coefficients of a signal (just the locations, not the values) with minimum computational cost (less than the cost of an FFT)? Suppose my signal is a vector of length $N$ contaminated with some white noise, and suppose the demanded frequency accuracy is $$\frac{f_\text{sample}}{10N}.$$ Imagine my signal is 1000 samples quantized through a low-resolution ADC, and the signal is sparse in the frequency domain, meaning that only a fraction of its DFT coefficients are non-zero (less than 10% of the coefficients). Currently I am using a method called the periodogram, which is based on the FFT. Other than the FFT, do you think using multiple band-pass filters would work? I was thinking of band-pass filters, but they seem to be more computationally costly. I mean, I could pass the signal through multiple (say 100) band-pass filters which together cover the whole spectrum and then measure the energy of each frequency portion of the signal. Answer: Is there a method with minimum computational cost (less than the computations of an FFT)? So, first of all, the computation of the FFT: remember, the $10N$-point FFT, which you need for the resolution you demand ($f_\text{sample}/10N$), has complexity $\mathcal O\left(10N\log(10N)\right)$. You've got a sparsity of ca. $\frac1{20}$, per your clarifying comment: "I give an example: imagine we have a signal with a length of 1000 or 2000. We take its DFT and there are fewer than 50 coefficients non-zero."
which means that it's not that sparse, and all algorithms exploiting sparsity by throwing linear algebra at the problem can't really benefit much from the situation; a $K$/$L$-rank matrix/matrix operation typically has best-case complexity $K\cdot L$, so considering $K=N$, this can only work out¹ in favour of the sparse operation if $L<10\log(10N)$. Now you gave $N\approx 1000$, and let's choose a very modest logarithm base here – $\log_{10}$ – so you'd end up with the condition $L < 10\log_{10} 10000 = 40$, and as you can see, with $L\approx 50$, this is not generally the case – this can really only work out if your sparsity-exploiting "tone finder" is very, very, very efficient. But, you'll say, there are super-efficient parametric estimators such as ESPRIT that can get a much higher resolution with fewer samples! Yes, that's what I'd have suggested in your situation in general, too, but you've got to realize that these subspace/eigenvalue-based approaches work on an autocorrelation signal model – i.e. they need an estimate of the autocorrelation matrix, which you typically get by calculating the dyadic product of the $N$-sample vector – an operation with complexity $\frac 12 N^2$, i.e. much, much more than the complexity of the FFT. So, to answer your question: no, there's to my knowledge no estimator with lower computational cost, but there are more mathematically efficient estimators than the DFT; however, you might actually get a $\frac1{10}$-bin resolution with fewer than $N$ samples, but really, you need the rank of your matrix estimate to be sufficiently higher than the number of tones you're looking for, so I'm afraid this won't change that much. ¹ I'm well aware that's not really how Landau symbols work. However, FFT implementations are really highly optimized, and so is LAPACK, and thus my gut feeling says I can really use the complexity limits as "constants".
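For concreteness, the periodogram approach the asker is already using can be sketched in a few lines. A toy example (illustrative; the bin locations and noise level are made up) of finding the occupied bins of a frequency-sparse signal by thresholding an FFT:

```python
import numpy as np

N = 1000
n = np.arange(N)
rng = np.random.default_rng(0)

# Sparse spectrum: only a handful of bins carry tones, plus white noise.
true_bins = [37, 120, 301]
signal = sum(np.cos(2 * np.pi * k * n / N) for k in true_bins)
noisy = signal + 0.1 * rng.standard_normal(N)

# Periodogram: |FFT|^2, then threshold to locate the occupied bins.
power = np.abs(np.fft.rfft(noisy)) ** 2
detected = np.flatnonzero(power > 0.5 * power.max())
print(sorted(detected.tolist()))  # recovers the true bins
```

This is exactly the $\mathcal O(N\log N)$ baseline the answer argues is hard to beat; any sparsity-exploiting alternative has to locate those same bins in fewer operations.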
{ "domain": "dsp.stackexchange", "id": 4091, "tags": "frequency-spectrum, signal-detection, compressive-sensing, spectrum-estimation" }