Time stamps in header message are always 0
Question: Using ROS Kinetic on Ubuntu Linux. For some reason, the time stamps (sec and nsec values) in the standard header msg always come out as 0. Does something have to be 'enabled' somewhere? Originally posted by rRawCWwTKVM on ROS Answers with karma: 3 on 2019-06-26 Post score: 0 Answer: ROS does not set the header timestamps automatically; the code that publishes the message has to set the timestamp. Originally posted by ahendrix with karma: 47576 on 2019-06-26 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by rRawCWwTKVM on 2019-06-26: Ah! haha. Excellent. I thought it was automatic
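The fix is a one-liner on the publishing side: fill in the header stamp just before publishing. A minimal sketch of the pattern (the `Header` class below is a hypothetical stub standing in for `std_msgs.msg.Header`, and `time.time` stands in for `rospy.Time.now()`, so the snippet runs without a ROS installation):

```python
import time

class Header:
    """Hypothetical stub for std_msgs/Header; real code uses std_msgs.msg.Header."""
    def __init__(self):
        self.stamp = 0.0   # defaults to zero, exactly the symptom in the question
        self.frame_id = ""

def stamp_and_publish(msg_header, publish, clock=time.time):
    # In rospy this line would be: msg_header.stamp = rospy.Time.now()
    msg_header.stamp = clock()
    publish(msg_header)

sent = []
stamp_and_publish(Header(), sent.append)
assert sent[0].stamp > 0.0  # no longer zero once the publisher sets it
```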
{ "domain": "robotics.stackexchange", "id": 33270, "tags": "ros-kinetic" }
How does a thermal temperature gun work?
Question: I once worked as a kitchen porter over a winter season. We had fun with thermal temperature guns (like these) which I learned can be used for measuring the temperature of something a reasonable distance away (aside from the obvious use of laser tag), which to my mind is pretty impressive. How do they work? Answer: They basically measure the intensity of the infrared blackbody radiation in some wavelength region and calculate the temperature needed to give that intensity according to Planck's law.
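A back-of-the-envelope version of that inversion, using the Stefan–Boltzmann law as a simplified stand-in for integrating Planck's law over the instrument's wavelength band (emissivity and geometry assumed ideal; real guns correct for both):

```python
SIGMA = 5.670374419e-8  # Stefan–Boltzmann constant, W/(m^2 K^4)

def radiated_intensity(T):
    """Total blackbody exitance of a surface at temperature T (kelvin)."""
    return SIGMA * T**4

def temperature_from_intensity(M):
    """Invert: the temperature needed to radiate intensity M, per the answer."""
    return (M / SIGMA) ** 0.25

# Round trip: a 300 K surface radiates ~459 W/m^2; the "gun" recovers 300 K.
M = radiated_intensity(300.0)
assert abs(temperature_from_intensity(M) - 300.0) < 1e-6
```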
{ "domain": "physics.stackexchange", "id": 7137, "tags": "thermodynamics" }
What is incorrect about this depiction of the hydrolysis of an ester?
Question: First of all, I'd like to apologize for again asking a question solely featuring the deficiencies of my textbook. I have a few specific questions, and a general one about the image at the bottom of this question: Is the product really an acid? It doesn't even have an $H^+.$ It's stated that hydrolysis can happen when esters are treated with an acid as well as a base. But acids only donate $H^+,$ so where would the extra $O$ come from to produce an alcohol as a product? It's stated that a molecule of water is produced. Is this statement incorrect, or is the diagram incorrect? In general, how would we change the diagram/information below to make it correct? If you can answer any/all of these questions, I would greatly appreciate it. Thanks again for this professional community. Answer: There are indeed two mechanisms for ester hydrolysis, one using base catalysis and one using acid catalysis. The mechanisms of both are provided below. (As an aside, you mention that your textbook isn't great: you should consider getting a different one. Even if your teacher gives assignments out of the other book, it'll give you a second opinion and something to cross-check your book against.) Base-catalysed ester hydrolysis Source: Organic Chemistry (Wothers and Clayden) Acid-catalysed ester hydrolysis Source: Organic Chemistry (Wothers and Clayden) Is the product really an acid? It doesn't even have an H+. Yes, it is an acid. The acid is actually formed initially; however, because of the basic conditions of the base-catalysed mechanism, it gets deprotonated to give the carboxylate. Upon workup the carboxylate salt (in your example the sodium salt of the carboxylate) would be protonated to give the carboxylic acid (COOH). It's stated that hydrolysis can happen when esters are treated with an acid as well as a base. But acids only donate H+ so where would the extra O come from to produce an alcohol as a product?
The product is again a carboxylic acid, and as you can see from the mechanism above, water is involved, which is where the additional oxygen comes from. It's worth noting that with the acid-catalysed mechanism the whole thing is reversible. It's stated that a molecule of water is produced (in the base-catalysed mechanism). Is this statement incorrect, or is the diagram incorrect? It could be water or an alcohol depending on whether hydroxide or the alkoxide does the deprotonation. Water is most likely, however, as shown above. In the acid-catalysed mechanism it is the alcohol that is formed. In general, how would we change the diagram/information below to make it correct? See the mechanisms above, taken from Clayden and Wothers. In general the diagram in your textbook looks essentially 'fine', though I don't entirely like the way the structures are drawn (it's a very American-textbook way of doing it).
{ "domain": "chemistry.stackexchange", "id": 6046, "tags": "organic-chemistry" }
On the Shannon Capacity of Cycles
Question: Let $\alpha(C_{n}^{\boxtimes k})$ be the independence number of the $k$-fold strong product of an $n$-vertex cycle graph $C_{n}$. $\forall n > 6$, is $\alpha(C_{n}^{\boxtimes (k+r)})^{\frac{1}{k+r}} \geq \alpha(C_{n}^{\boxtimes k})^{\frac{1}{k}}$ for infinitely many positive integer pairs $(k,r)$? Answer: It is a standard fact that $\lim_{k \to \infty} \sqrt[k]{\alpha(G^k)}=\sup_{k \geq 1} \sqrt[k]{\alpha(G^k)}$ for every graph $G$. (See the proof below.) Denoting $b_k := \sqrt[k]{\alpha(G^k)}$, this implies that the tail of the sequence $(b_k)_{k \geq 1}$ cannot be monotone decreasing, and hence there are infinitely many $k$'s for which $b_{k+1} \geq b_k$, as required. In order to prove that $\lim_{k \to \infty} \sqrt[k]{\alpha(G^k)}=\sup_{k \geq 1} \sqrt[k]{\alpha(G^k)}$, note that the sequence $(\alpha(G^k))_{k \geq 1}$ is supermultiplicative, i.e., $\alpha(G^{k+\ell}) \geq \alpha(G^k) \alpha(G^\ell)$ for all $k,\ell \geq 1$. Indeed, if $I$ is an independent set in $G^k$, and $J$ is an independent set in $G^\ell$, then $I \times J$ is an independent set in $G^{k+\ell}$. Therefore, by (the multiplicative version of) Fekete's lemma it follows that $\lim_{k \to \infty} \sqrt[k]{\alpha(G^k)}=\sup_{k \geq 1} \sqrt[k]{\alpha(G^k)}$.
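Supermultiplicativity can be strict, which is exactly why the root sequence can climb. A small sanity check for $C_5$: the pairs $(i, 2i \bmod 5)$ form Shannon's classical independent set of size 5 in $C_5 \boxtimes C_5$, whereas $\alpha(C_5)^2 = 4$ (toy code, brute-force pairwise check only):

```python
from itertools import combinations

n = 5  # the cycle C5

def strong_adjacent(u, v):
    """Adjacency of vertex pairs u, v in the strong product C_n x C_n."""
    if u == v:
        return False
    near = lambda a, b: (a - b) % n in (0, 1, n - 1)  # equal or adjacent on the cycle
    return near(u[0], v[0]) and near(u[1], v[1])

# Shannon's independent set of size 5 in C5 boxtimes C5:
S = [(i, (2 * i) % n) for i in range(n)]
assert all(not strong_adjacent(u, v) for u, v in combinations(S, 2))
# alpha(C5) = 2, so alpha(C5)^2 = 4 < 5 <= alpha(C5 boxtimes C5): the inequality
# is strict, and b_2 >= sqrt(5) > 2 = b_1 for the root sequence b_k.
```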
{ "domain": "cstheory.stackexchange", "id": 2209, "tags": "graph-theory, it.information-theory" }
Identification of a certain type of standing wave
Question: The rough Paint drawing attached is meant to show a sort of standing wave, where there is a 1,2,1,2,1,2,1,2 pattern: same wavelength but every other cycle is double amplitude. Is there a name for this particular type of wave? I did a bit of poking around in discussions & texts about standing waves and couldn't find a ready label for it. Thanks! Answer: Realize that a standing wave is the superposition (interference) of a wave upon itself, facilitated by trapping and reflecting the waves within a space whose dimensions match the wavelength (or half-integer multiples of it). The wave you've drawn is more than likely a traveling wave, and more specifically the amplitude modulation of one sine wave by another. Another name for modulation is signal mixing. The mixing of two signals is simply done by taking their product (multiplying the two signals). In your specific example both the carrier and the information signal are sine waves whose frequencies are close to one another. If you are able to capture a longer sample of the actual time signal, you could analyze it by computing the Fourier transform, and this would reveal the carrier frequency and side bands (information).
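That sideband structure is easy to verify numerically. A sketch with assumed integer-bin frequencies (carrier bin 10, modulation bin 3) and a naive DFT, showing that the product of two cosines puts all its energy at the sum and difference frequencies rather than at the carrier:

```python
import math, cmath

N, fc, fm = 64, 10, 3  # sample count, carrier bin, modulation bin (assumed values)
x = [math.cos(2 * math.pi * fc * t / N) * math.cos(2 * math.pi * fm * t / N)
     for t in range(N)]

def dft_bin(x, k):
    """Naive DFT coefficient at bin k (O(N) per bin, fine for a toy check)."""
    return sum(xt * cmath.exp(-2j * math.pi * k * t / N) for t, xt in enumerate(x))

mags = [abs(dft_bin(x, k)) for k in range(N // 2)]
peaks = sorted(sorted(range(N // 2), key=lambda k: -mags[k])[:2])
assert peaks == [fc - fm, fc + fm]  # sidebands at bins 7 and 13
assert mags[fc] < 1e-9              # no energy at the carrier bin itself
```

This is just the product-to-sum identity cos(A)cos(B) = [cos(A−B) + cos(A+B)]/2 made visible in the spectrum.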
{ "domain": "physics.stackexchange", "id": 42262, "tags": "waves, oscillators" }
Lift and drag coefficients on other planets
Question: The question I'm trying to answer seemed simple: how hard would it be to fly on a planet with lower gravity but also a thinner atmosphere compared to Earth? Ideally the answer would also hint at how different an airplane designed to fly there would look. I know the atmospheric pressure, atmospheric composition (and hence molar mass) and temperature at the surface of the hypothetical planet. However, I have a problem with determining the lift and drag coefficients. The NASA site says these coefficients depend on the viscosity and compressibility of air, the form of the aircraft and the angle of attack. My first thought was to separate the part of the coefficients that depends on the aircraft from the atmospheric parameters. However, I have trouble finding a formula for it. This page says that under certain conditions the lift coefficient is $$ C_l=2\pi \alpha $$ But I'm not sure if this approximation holds in a different atmosphere. L/D Ratio and Mars Aircraft may be relevant. Also, can I assume that lift to drag, or maximum lift to drag, is the same in any atmosphere, and if so under what conditions? Answer: Physics should not be different on other planets, so the same laws apply as on earth. Only the results of an optimization might look unfamiliar. See here for an answer on Aviation SE on a Mars solar aircraft. The lift slope equation you found is only valid for slender bodies, like fuselages and fuel tanks, and once wing span becomes a sizable fraction of length, more complicated equations will be needed, and Mach number effects must be considered, too. See here for a more elaborate answer. Generally, to fly like on earth would mean that the ratio of dynamic pressure and mass is the same. Then you would use the same aircraft as on earth (provided the other planet's atmosphere contains enough, but not too much, oxygen for the engine to function).
Dynamic pressure $q$ is the product of the square of airspeed $v$ and air density $\rho$: $$q = \frac{\rho\cdot v^2}{2}$$ Lift is dynamic pressure times wing area $S$ and lift coefficient $c_L$ and must be equal to weight, that is, the product of mass $m$ and the local gravitational acceleration $g$: $$L = q\cdot S\cdot c_L = m\cdot g$$ The lift coefficient is a measure of how much lift can be created by a given wing area and can reach values of up to 3 in the case of a landing airliner. Then the wing uses all kinds of high-lift devices (slats, slotted flaps), and once those are put away, the lift coefficient of an airliner is at about 0.5. For observation aircraft, less speed is required, and a normal lift coefficient for them would be 1.2. I see no reason why this number should be different just because the atmosphere is different. The most important number would be the Reynolds number $Re$. It is the ratio of inertial to viscous forces in a flow and is affected by the dimensions of your plane (on earth we use the wing chord $l$) and the density and dynamic viscosity $\mu$ of your planet's atmosphere. $$Re = \frac{v\cdot\rho\cdot l}{\mu}$$ Lower Reynolds numbers will translate into higher friction drag, which depresses the maximum achievable lift-to-drag ratio. Gliders fly at Reynolds numbers between 1,000,000 and 3,000,000, and airliners can easily achieve 50,000,000. When you need to optimize for a more gooey atmosphere, your wings will become less slender than on earth, because you will enlarge the wing chord $l$ to keep $Re$ up. Once you need speed to get the weight lifted, the Mach number $Ma$ might become important. Generally, subsonic flight is the most efficient, and it has a natural limit at $Ma^2 \cdot c_L = 0.4$. This is what can be achieved with today's technology. The speed of sound in a gas is mainly a function of temperature; Mach 1 on Mars is 238 m/s. The first parts of an airplane to hit a Mach limit are the propeller tips.
Maybe you need several small, slow-spinning propellers rather than one big, honking propeller, which would provide the best efficiency as long as its tips are well below Mach 1. Last, you need to know the number of atoms per gas molecule. Air is dominated by diatomic molecules, but maybe your planet has an atmosphere like early earth's, with lots of carbon dioxide. This will affect the ratio of specific heats $\kappa$: the rate of heating and cooling with compression and expansion of the gas might be different than on earth. This will come into play when you approach or exceed Mach 1.
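Plugging the formulas above together gives a quick feel for the numbers. A sketch with assumed Mars-like surface values (density ~0.020 kg/m³, gravity ~3.71 m/s²) and an entirely hypothetical 50 kg aircraft; every constant here is an illustrative assumption, not a design figure:

```python
import math

# Assumed Mars-like surface values (illustrative only)
rho, g = 0.020, 3.71        # density kg/m^3, gravity m/s^2
m, S, c_L = 50.0, 8.0, 1.2  # mass, wing area, cruise lift coefficient (hypothetical)
mu = 1.1e-5                 # dynamic viscosity, Pa*s (assumed CO2-ish value)
chord = 1.0                 # wing chord l, m (assumed)

# Level flight: L = q*S*c_L = m*g  =>  v = sqrt(2*m*g / (rho*S*c_L))
v = math.sqrt(2 * m * g / (rho * S * c_L))
q = 0.5 * rho * v**2
Re = v * rho * chord / mu

assert abs(q * S * c_L - m * g) < 1e-9  # lift balances weight by construction
print(f"airspeed {v:.1f} m/s, Reynolds number {Re:.0f}")
```

With these numbers the required airspeed comes out near 44 m/s and the Reynolds number near 8 × 10⁴, far below the glider range quoted above, which is precisely the friction-drag penalty (and larger-chord remedy) the answer describes.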
{ "domain": "physics.stackexchange", "id": 80823, "tags": "fluid-dynamics, aerodynamics, drag, viscosity, exoplanets" }
Can STFT (Short-time Fourier Transform ) be more useful than FFT for analyzing stationary signals under some circumstances?
Question: The nature of the STFT is to be applied to non-stationary signals. For stationary signals, STFT and FFT sound exactly the same to me. However, I was wondering if the STFT can lead to better results compared to the FFT (when applied to stationary signals), especially when the STFT window function or other possible parameters are optimized? Answer: In the continuous version of the short-time Fourier transform, if you choose a uniform unit-height window, it just boils down to the continuous Fourier transform. In practice, for the discrete case, you can find STFT parameters that just give you the same information as that of the FFT. Hence, the STFT can always be as useful as the FFT. Since real signals are often finite length, they are not stationary in the purest sense. Even if the noise is "stationary", we only have access to a few realizations. For instance, the estimation of the parameters of an additive noise from a few realizations would probably be more robust in the STFT space than from the FFT.
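The first claim is easy to check in code: with a rectangular window spanning the whole signal, the (single-frame) STFT is just the DFT. A minimal sketch with naive toy implementations (not a library API):

```python
import cmath, random

def dft(x):
    """Naive DFT, O(N^2); fine for a toy check."""
    N = len(x)
    return [sum(xt * cmath.exp(-2j * cmath.pi * k * t / N)
                for t, xt in enumerate(x)) for k in range(N)]

def stft(x, window, hop):
    """Windowed frames of x, each transformed by the DFT."""
    W = len(window)
    frames = [x[i:i + W] for i in range(0, len(x) - W + 1, hop)]
    return [dft([f[t] * window[t] for t in range(W)]) for f in frames]

random.seed(0)
x = [random.gauss(0, 1) for _ in range(32)]
# One full-length rectangular window: the STFT collapses to the plain DFT.
single_frame = stft(x, [1.0] * len(x), hop=1)[0]
assert max(abs(a - b) for a, b in zip(single_frame, dft(x))) < 1e-9
```

Shorter windows then trade frequency resolution for the frame-averaging robustness the answer mentions.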
{ "domain": "dsp.stackexchange", "id": 5619, "tags": "fft, stft" }
Regulation of Glycolysis and other pathways at ‘irreversible’ reaction steps
Question: The hexokinase, phosphofructokinase and pyruvate kinase steps of glycolysis (1, 3 and 10, below) are the only ones that are irreversible, and are also the steps where glycolysis is regulated. Is it necessary for a regulatory step in glycolysis to be irreversible, and if so does this apply to metabolic pathways generally? Answer: As far as glycolysis is concerned, the answer is straightforward. In certain cells and tissues there is a pathway working in the opposite direction — gluconeogenesis — in which the ‘irreversible’ steps of glycolysis are, in fact (and of necessity), reversed by a different enzymic reaction in which the position of the equilibrium is in the opposite direction. Obviously if it is metabolically appropriate for glycolysis to occur it is inappropriate for gluconeogenesis to occur. The only way of turning e.g. glycolysis off while at the same time turning gluconeogenesis on is by regulating the activity of the different enzymes at these three steps. Glycolysis is, therefore, a special case in sharing many reactions with another pathway working in the reverse direction. What about pathways in which the interconversions only proceed in one direction? Classic examples are biosynthetic pathways that are regulated by what is known as ‘feedback’ or ‘end-product’ inhibition. An example (for which I happen to have my own diagram) is the synthesis of isoleucine from threonine in bacteria: When the concentration of isoleucine increases to a certain amount (sufficient for the cell’s needs), this inhibits the enzyme threonine deaminase, preventing the wasteful conversion of threonine to isoleucine. The main point is not the position of equilibrium of the threonine deaminase reaction (I haven’t checked it) but that it is the first unique step in the synthesis pathway. Hence regulating this step prevents the unnecessary removal of threonine in a way that does not allow the wasteful accumulation of intermediates.
Moral: There is an old party game in which you pass a pair of scissors to your neighbour saying “I pass you these scissors crossed” or “I pass you these scissors uncrossed”. The initiates then tell you whether you have performed the operation correctly. The uninitiated player assumes that what is important is whether the scissors are crossed. In fact, it is whether or not your legs are crossed. Beware of false associations.
{ "domain": "biology.stackexchange", "id": 8018, "tags": "biochemistry, metabolism, pathway" }
Quantum numbers and radial probability of the electrons
Question: In this book it has been written: The $ns$, $(n − 1)d$, and $(n − 2)f$ orbitals are so close to one another in energy, and interpenetrate one another so extensively. And in the Wikipedia article on the Pauli exclusion principle it has been written: The Pauli exclusion principle is the quantum mechanical principle that states that two identical fermions (particles with half-integer spin) cannot occupy the same quantum state simultaneously. In the case of electrons in an atom, it can be stated as follows: it is impossible for two electrons of a poly-electron atom to have the same values of the four quantum numbers. Does this mean that two electrons of an atom can have significant radial probability at the same location even if they are defined by different sets of quantum numbers? Answer: Yes, of course! Pauli's exclusion principle is only about the 'quantum numbers'; more correctly, it states that 'A system containing several electrons must be described by an antisymmetric total eigenfunction', which is the stronger statement. The weaker statement is that no two electrons can have identical sets of quantum numbers. It doesn't say anything about the probability or the energy or any other observable. Any property deduced satisfying the condition thereof is purely a mathematical result and entirely in accordance with the principle. As a bonus, if we consider the space variables of two electrons (identical particles) to have almost the same values, then their wavefunctions are 'almost' identical if they are in the same quantum state, i.e., $\psi_{a}(1)~ \simeq~\psi_{a}(2)$ and $\psi_{b}(1)~\simeq~\psi_{b}(2)$ [the labels 1 and 2 denote the spatial co-ordinates of electrons '1' and '2', i.e. ($x_1,y_1,z_1$) and ($x_2,y_2,z_2$), and the labels a and b for the wavefunction denote the three quantum numbers $n,l,m$ of two different quantum states].
In this case, the antisymmetric space eigenfunction describing the system of two electrons is $$\frac{1}{\sqrt2}[\psi_{a}(1)\psi_{b}(2) - \psi_{a}(2) \psi_{b}(1)]\simeq\frac{1}{\sqrt2}[\psi_{a}(1)\psi_{b}(2) - \psi_{a}(1) \psi_{b}(2)] = 0$$ In summary, the Pauli exclusion principle works in such a way as to reduce to zero the probability density of finding the two particles at nearly the same location.
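A toy numerical illustration of that vanishing, with two made-up one-dimensional orbitals standing in for $\psi_a$ and $\psi_b$ (purely illustrative functions, not actual hydrogenic orbitals):

```python
import math

def psi_a(x):
    """Illustrative 'orbital' a: a Gaussian (not a real atomic orbital)."""
    return math.exp(-x * x / 2)

def psi_b(x):
    """Illustrative 'orbital' b: a first-excited-like state x*exp(-x^2/2)."""
    return x * math.exp(-x * x / 2)

def antisym(x1, x2):
    """The antisymmetric two-electron spatial eigenfunction from the answer."""
    return (psi_a(x1) * psi_b(x2) - psi_a(x2) * psi_b(x1)) / math.sqrt(2)

far = antisym(0.0, 2.0)        # well-separated electrons: sizeable amplitude
close = antisym(1.0, 1.0001)   # nearly coincident electrons: amplitude ~ 0
assert abs(close) < 1e-3 < abs(far)
```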
{ "domain": "physics.stackexchange", "id": 34645, "tags": "quantum-mechanics, electrons, atomic-physics, pauli-exclusion-principle, orbitals" }
Finding common edges of two graphs
Question: Is there any algorithm that finds the common edges and vertices between two graphs? It's not a common subgraph problem though; the edges which are common between the two graphs may not be connected to each other, and may be far apart. How do I find all the common edges in the graphs? Like finding the common subgraphs between two graphs such that none of the subgraphs are connected to each other. Like these two graphs, the black and the blue part are the uncommon part... Answer: This appears to be the maximum subgraph isomorphism problem: given graphs $G,H$, you want to find the largest pair of subgraphs $G',H'$ that are isomorphic (where $G'$ is a subgraph of $G$, $H'$ is a subgraph of $H$, and $G'$ is isomorphic to $H'$). Then the edges of $G',H'$ are common to both $G$ and $H$. Apparently, this problem has been applied in chemistry, which might be of interest to you given the diagram you showed. In the comments you asked about the case where $G$ and $H$ both have two common subgraphs which are not connected to each other. The maximum subgraph isomorphism solution will automatically find those. Nothing requires $G'$ or $H'$ to be connected. Thus, in your case, the optimal solution will have $G'$ consist of two disconnected components, and $H'$ consist of the same two components. So, this problem already does what you want. This problem is NP-hard for general graphs, so the general expectation is that there's not likely to be any efficient algorithm that can handle large, general graphs. However, you have two aspects that offer room for hope. First, it appears that your graphs are relatively small, so there might be algorithms that are acceptable for graphs that are about the size of the examples in your diagram: even exponential-time algorithms might be practical. Second, and more importantly, it looks like you aren't dealing with arbitrary graphs.
As Tom van der Zanden accurately explains: The graphs shown in the example are both trees and they appear to be models of molecules. While of course molecules (like benzene) can have cycles, depending on what molecules the OP is interested in the problem may not be NP-hard. So, I suggest you search the research literature to see if there's been work done on the maximum subgraph isomorphism problem for trees. There might be algorithms that have been described in the algorithms literature that you could use.
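For graphs as small as the ones pictured, even a naive brute force is workable. A sketch (a hypothetical helper, not a library routine) that tries every injective vertex mapping and counts preserved edges; note that it finds disconnected common subgraphs automatically, as the answer points out:

```python
from itertools import permutations

def max_common_edges(GV, GE, HV, HE):
    """Brute-force maximum number of common edges over injective maps V(G) -> V(H).
    Exponential time; only for tiny graphs. Assumes len(GV) <= len(HV)."""
    H_edges = {frozenset(e) for e in HE}
    best = 0
    for image in permutations(HV, len(GV)):
        f = dict(zip(GV, image))
        preserved = sum(frozenset((f[u], f[v])) in H_edges for u, v in GE)
        best = max(best, preserved)
    return best

# A triangle plus a separate edge in both graphs: the common part is disconnected.
GV, GE = [0, 1, 2, 3, 4], [(0, 1), (1, 2), (0, 2), (3, 4)]
HV, HE = "abcde", [("a", "b"), ("b", "c"), ("a", "c"), ("d", "e")]
assert max_common_edges(GV, GE, HV, HE) == 4
```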
{ "domain": "cs.stackexchange", "id": 4976, "tags": "graphs" }
How to process PointCloud2 message data in python?
Question: I am trying to save pointcloud data using Python, but I haven't been able to import pcl. I am not sure if my installation is correct, as I keep getting this error: ImportError: No module named pcl How do I save the point cloud data using Python? Originally posted by Joy16 on ROS Answers with karma: 112 on 2017-02-23 Post score: 0 Answer: There is no (official) pcl package for Python afaik, so that won't work. You can try and see whether strawlab/python-pcl can do what you want, but it appears not to have been kept up-to-date with PCL. Originally posted by gvdhoorn with karma: 86574 on 2017-02-23 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Joy16 on 2017-02-23: Thank you @gvdhoorn for your reply. Can you please tell me how to install python-pcl from strawlab? I did a git clone from their repo to my working directory. I then in my py script said "import pcl", and it wouldn't work, giving the error mentioned above in my question. Thanks! Comment by gvdhoorn on 2017-02-23: That is really more of a Python question than a ROS one. Check whether Installing Packages - Installing from a local src tree helps.
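As an alternative to pcl, the raw PointCloud2 byte buffer can be unpacked with the standard library alone. A sketch with a stubbed-out message (the `FakeCloud` class is a hypothetical stand-in carrying the same fields as the real message; with rospy you would instead iterate `sensor_msgs.point_cloud2.read_points(msg)` on an actual message):

```python
import struct

class FakeCloud:
    """Hypothetical stand-in for sensor_msgs/PointCloud2 with x,y,z float32 fields."""
    point_step = 12  # 3 x float32 per point, no padding assumed
    data = struct.pack("<6f", 1.0, 2.0, 3.0, 4.0, 5.0, 6.0)  # two points

def read_xyz(cloud):
    """Unpack (x, y, z) tuples straight out of the message's byte buffer."""
    n = len(cloud.data) // cloud.point_step
    return [struct.unpack_from("<3f", cloud.data, i * cloud.point_step)
            for i in range(n)]

points = read_xyz(FakeCloud())
assert points == [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
```

The parsed tuples can then be written out to CSV/PCD with ordinary file I/O, with no pcl dependency.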
{ "domain": "robotics.stackexchange", "id": 27115, "tags": "rospy" }
Removing collisions from a link using parameter passing
Question: I am trying to load some static models into a world. I would like them to not cause any collisions, so I am trying to remove them using the tutorial here: http://sdformat.org/tutorials?tut=param_passing_tutorial&cat=specification& Has this been implemented in SDF 1.7 and the Gazebo release for ROS Noetic? The models are showing up with the collision elements present. I am running the latest release of ROS Noetic on Ubuntu 20.04.5. My code is attached (test.world). Thanks! Originally posted by Gavriel-CTO on Gazebo Answers with karma: 3 on 2022-12-08 Post score: 0 Answer: Parameter passing is only implemented in libsdformat 10 and later. I assume you're using Gazebo 11 (from the tag on the question), and that uses libsdformat 9. However, I believe you can install libsdformat 10 or later, run ign sdf -p on the input file and get an output that has the param-passing changes applied. Originally posted by azeey with karma: 704 on 2022-12-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Gavriel-CTO on 2022-12-11: Correct. Where is the documentation connecting ROS, Gazebo, SDFormat version numbers and libsdformat version numbers? I couldn't find any definitive resource that would contain the kind of information you provided in this answer. Thanks! Comment by azeey on 2022-12-12: http://sdformat.org/tutorials?tut=roadmap&cat=developers&#libsdformat-releases
{ "domain": "robotics.stackexchange", "id": 4681, "tags": "ros-noetic, gazebo-11, collision" }
Does a reentrant list for signal queue in a single-thread environment exist?
Question: I need to handle Unix signals in a single-threaded application with the following goals: Signals are not masked on receipt (thus, the signal handler must be reentrant). I am not allowed to lose signal data (thus, if a new signal comes before the handler of the previous one returns, it must also be handled correctly). I have the common multi-threaded primitives (spinlocks, semaphores, etc.), but they don't seem enough, because my higher-level data structures (even one as simple as a list) aren't thread-safe. My initial idea was the following: I use a list, in which I quickly store the data of the incoming signal, and process it (call the possibly much slower handlers) later, out of the critical section. The main problem with that is that the list data structure isn't thread-safe. If I lock it, I can't store a second signal anywhere. I can't wait until the previous handler exits, because on the second signal it is essentially suspended in a critical section. I simply don't have any idea how to handle the following scenario: signal1 comes, the process suspends, and the handler of signal1 starts; signal2 comes, the handler of signal1 suspends, and the handler of signal2 starts; the handler of signal2 returns, and execution returns to the handler of signal1; the handler of signal1 returns, and execution returns to the main program. After thinking a lot about it, I have the impression that maybe my problem is unsolvable. Am I right? How do operating systems handle similar problems (for example, possibly bursting interrupts from hardware)? Answer: Finally I found a solution which handles the whole problem at once. It is relatively easy. The key to the solution: we can use the signal stack as a "to-do stack". It is important to remember that although this problem is also about race condition elimination, its solution differs significantly from the "lock everything you use, do your task, release everything" solutions.
It is because it is not about parallelisation, it is about reentrancy. The common lock-based solutions would lead to deadlock here, because the parent signal1 handler (in the example) will surely be suspended during the whole execution of the handler of signal2. So, this is a disadvantage, but it is an advantage as well: we can guarantee that signal1 won't do anything while our signal2 runs. We can't simply lock things, but we also don't need to. So, blocking locks are ruled out; only the nonblocking ones are left. What I came up with is the following C code:

#define STORE(a, b) __atomic_store(&(a), &(b), __ATOMIC_SEQ_CST)
#define SWAP(a, b)  __atomic_exchange(&(a), &(b), &(b), __ATOMIC_SEQ_CST)

/* action handler wrapper (registered via sigaction with SA_SIGINFO) */
void ss_wrapper(int signum, siginfo_t* siginfo, ucontext_t* ucontext) {
    /* currently top element on the signal stack */
    static struct ss_hit *top = NULL;
    struct ss_hit *hit = ss_hit_new(signum, siginfo, ucontext);
    struct ss_hit *bkp;
again:
    bkp = hit;
    SWAP(top, hit);
    if (!hit) {
        /* we got the lock, we are the master */
        ss_fire(bkp);
        SWAP(top, bkp->next); /* release the lock, find out if a new element arrived */
        if (bkp->next) {
            /* there IS one: process it as well */
            hit = bkp->next;  /* read before freeing bkp */
            free(bkp);
            goto again;
        } else {
            free(bkp);
        }
    } else {
        /* we didn't get the lock, but we got the old top in hit */
        STORE(hit->next, bkp);
    }
}

As we can see, it would have been very beautiful to have a separate, reentrant stack (and not list) data structure. The main problem was to understand that adding a new element to the stack AND testing whether it is empty must be done in a single atomic operation (this is why top serves both as a spinlock and as the pointer to the top of the stack); similarly, removing an element from the stack and learning whether it is now empty must also be done atomically.
From these days of thinking I also learned that constructing a reentrant algorithm is much harder than constructing a multithreaded one, and that the most important thing is that reentrant algorithms should have only very few variables through which they interact, and all operations on those variables should be atomic.
{ "domain": "cs.stackexchange", "id": 5152, "tags": "data-structures, threads, deadlocks" }
How to understand complex masses of unstable particles? The conceptual problem of calculating decay rate
Question: If a particle has a complex mass, $p^2-m^2=0$ leads to $p^μ \notin \mathbb R^4$. What does it mean? When you want to calculate S-matrix elements of a decay process $\langle p_f,\ldots\mid p_i\rangle$, you compute the $n$-point correlation function $$\langle 0\mid Tφ(x_1)φ(x_2)\ldots\mid 0\rangle$$ and use the LSZ formula. The Fourier transform of the correlation function has poles at $p_j^2=m_j^2$, because it includes the vacuum polarization amplitudes $$\frac i {p_j^2-m_j^2+O((p_j^2-m_j^2)^2)}\approx \frac i{p_j^2-m_j^2}$$ on each external leg in the diagrams. The LSZ formula says that when you multiply by $Π_j(p_j^2-m_j^2)/i$ and take the limit $p_j^2→m_j^2$, you get the S-matrix element $\langle p_f,\ldots\mid p_i\rangle$. Therefore, you may know that you need to consider only "amputated diagrams" when you calculate the S-matrix. However, beyond tree level, the vacuum polarization amplitude of an unstable particle has its pole not at a real but at a complex value. This means the on-shell limit $p_i^2→m_i^2$ leads to $p_i^\mu \notin \mathbb R^4$. On the other hand, if you take a limit such as $p_i^2\to\operatorname{Re} m_i^2$, you cannot make the approximation "vacuum polarization amplitudes $\approx \frac i{p_i^2-m_i^2}$", because $p_i^2$ is far from the pole. After all, my question is how to compute decay rates of unstable particles correctly. Can I calculate S-matrix elements by computing "amputated diagrams" as usually done? The related problem is noted in Srednicki's textbook on p. 162: http://web.physics.ucsb.edu/~mark/qft.html He said in and out states should consist of infinitely long-lived particles. He treated unstable particles as intermediate states, and regarded the decay rate as a quantity related to the width of the resonance (p. 165, eq. (25.25)): $$f(E)=\frac{1}{E-E_0+iΓ/2}.$$ The Fourier transform of $f(E)$ gives $g(t)=\exp(iE_0t-Γt/2)$, which seems to express decay. I also want to know the precise meaning of this procedure.
(At first, I considered the time evolution of $|ψ\rangle = \int dE \, f(E)|E\rangle$ by the Schrodinger equation, but $|ψ(t)\rangle\neq g(t)|ψ(0)\rangle$). Answer: I'll discuss this in the context of Yukawa theory, and use renormalized perturbation theory. The way I understand it is this. Setup: Consider the renormalization scheme $$ \psi_R = \frac{1}{\sqrt{Z}}\psi_0 \\ m_R = \frac{1}{Z_m} m_0 $$ where $Z_i = 1 + \delta_i$. Then the renormalized propagator is $$G^R = \frac{1}{Z} G^{bare}.$$ Now recall that summing over all 1PI insertions we may write $$iG^{bare} = \frac{i}{\not{p} - m_0 + \Sigma(\not{p})}$$ where $\Sigma$ is the sum over all 1PI insertions. Now, we can write $\Sigma = \Sigma_2 + {\cal O}(g^4)$, keeping only those insertions of order less than $g^4$, so that our renormalized propagator to order $g^2$ is $$G^R = \frac{1}{1+\delta}\frac{i}{\not{p} - m_0 + \Sigma_2(\not{p})} \\ \boxed{G^R= \frac{i}{\not{p} - m_R + \Sigma_R(\not{p})} } $$ where we have defined $$ \Sigma_R = \Sigma_2 + \delta \not{p} - (\delta + \delta_m)m_R $$ Now, recall that we define the physical mass as the pole of the propagator. Therefore, $$ m_P - m_R + \Sigma_R(m_P) = 0 .$$ This implies that $$ \boxed{\delta_m m_P = \Sigma_R(m_P)} $$ Your question: Let us call $\Sigma_2(\not{p}) = i\Delta m$, anticipating that the correction is purely imaginary. Then, our propagator is $$\tilde{G}^R= \frac{i}{\not{p} - m_R+ i\Delta m} $$ where now we know that our propagator has a pole at $m_P$. Recall that the amplitude for a particle to propagate from $x$ to $y$ is given by the Fourier transform $$D(x-y) = \int \frac{d^4p}{(2\pi)^4} \frac{i}{\not{p} - m_R + i\Delta m}e^{-ip(x-y)} $$ To make things easier, suppose that $y=0$. Then we have $$D(x) = \int \frac{d^4p}{(2\pi)^4} \frac{i}{\not{p} - m_R+ i\Delta m}e^{i\mathbf{p\cdot x}}e^{-iEt} $$ where $E = \sqrt{p^2 + m_{rest}^2}$. What is the mass of the particle in its rest frame? Well, by definition, it's the pole of the propagator!
And so we may replace $$E = \sqrt{p^2 + (m_R - i \Delta m)^2} \\ =\sqrt[4]{\left(-{\Delta m}^2+m^2+p^2\right)^2+4 {\Delta m}^2 m^2} \left(i \sin \left(\frac{\phi}{2}\right)+\cos \left(\frac{\phi}{2}\right)\right)\\ \boxed{E \equiv \xi \left(i \sin \left(\frac{\phi}{2}\right)+\cos \left(\frac{\phi}{2}\right)\right) }$$ where $\phi$ is the $\mathrm{Arg}$ of the radicand. Finally, we obtain $$ \boxed{D(x) = \int \frac{d^4p}{(2\pi)^4} \frac{ie^{i\mathbf{p\cdot x}} e^{-i\xi\cos(\frac{\phi}{2}) t}}{\not{p} - m_R+ i\Delta m}e^{-\xi\sin(\frac{\phi}{2}) t}} $$ and so we see that indeed the probability for a particle to exist at a time $t$ decays exponentially.
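The connection quoted from Srednicki between the resonance lineshape and exponential decay can also be checked numerically: Fourier-transforming $f(E)=1/(E-E_0+i\Gamma/2)$ over a large truncated range gives $|g(t)| \propto e^{-\Gamma t/2}$. A sketch with assumed parameters $E_0=0$, $\Gamma=1$ (truncation limits chosen only for illustration):

```python
import cmath, math

E0, Gamma = 0.0, 1.0  # assumed resonance position and width

def g(t, L=400.0, dE=0.02):
    """Truncated numerical Fourier transform of f(E) = 1/(E - E0 + i*Gamma/2)."""
    s = 0j
    steps = int(2 * L / dE)
    for k in range(steps):
        E = -L + (k + 0.5) * dE  # midpoint rule
        s += cmath.exp(-1j * E * t) / (E - E0 + 1j * Gamma / 2) * dE
    return s

# |g(t)| should fall off as exp(-Gamma*t/2), matching the resonance width:
ratio = abs(g(2.0)) / abs(g(1.0))
assert abs(ratio - math.exp(-Gamma / 2)) < 0.05
```

Closing the contour in the lower half-plane picks up the pole at $E_0 - i\Gamma/2$, which is exactly the exponential the numerics reproduce.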
{ "domain": "physics.stackexchange", "id": 54866, "tags": "quantum-field-theory, resonance, complex-numbers, correlation-functions, s-matrix-theory" }
Are there any radical approaches in CPU development?
Question: I have a desktop PC with an Intel i7-12700F CPU (180 W TDP), a Zalman CNPS10X Extreme CPU cooler, and Windows 11. My problem is that the CPU cooling makes a lot of noise even when simple tasks are being performed: opening directories or windows, opening or closing small programs such as Notepad, Calculator, or a web browser, or taking the PC out of sleep mode. I am not a CS student, just a layman trying to understand whether the desktop CPU development field is somewhat "stuck" in the past decades. If desktop CPUs of the 2020s generally use more power than those of previous decades, and hence get hot faster and require more extensive and noisy cooling, I'd carefully assume that desktop CPUs of the 2020s aren't essentially very different from those of the 90s and 80s. Are there perhaps different paradigms for creating essentially differently designed desktop CPUs, which might allow similar capabilities but with fanless/passive cooling? Currently, fanless cooling is nearly nonexistent when it comes to desktop CPUs. A notable fanless cooler, the Nofan CR-80EH, is aimed at desktop CPUs of no more than 80 W TDP, and I don't know if such CPUs are still manufactured. I am asking this ignorantly, and by a possibly false analogy to electricity production. If I am not mistaken, electricity production has had some radical engineering shifts beyond coal burning, such as producing electricity from water flow, wind flow, nuclear reactions (such as fission), and solar panels (also heat-based, but still a radical change), and who knows what the future holds? Is there any radical approach to developing desktop CPUs, at least in theory or currently in research labs, that would allow completely silent/fanless desktop computers with CPU capabilities similar to those of current desktop CPUs? By the way, why not create motherboards with several small general-purpose processors, each one with passive cooling?
Answer: Two areas of research to consider: Reversible logic, and in particular, adiabatic circuits. The idea here is to increase energy conservation as much as possible by not destroying information most of the time. Asynchronous logic. The theory behind this is that the best CPU clock speed is no clock at all.
{ "domain": "cs.stackexchange", "id": 21194, "tags": "computer-architecture, cpu" }
Can interference occur between two waves that are parallel but separated by a small distance?
Question: This is an image of diffraction in a crystal. My doubt is: how do the parallel waves coming out interfere if they are separate? Answer: In answering your question, a lot could be said about the art of mathematical modeling, but the short answer is: they don't. But the rays in the scheme are only an approximation, and one that fails at the atomic scale: a beam of light, no matter how laser-like or faint, is never exactly a 1-D mathematical line; it spreads sideways. That's why they can interfere.
{ "domain": "physics.stackexchange", "id": 31583, "tags": "interference" }
Thermodynamics of ideal suspended body in a vacuum
Question: We have a body suspended in a vacuum at temperature T (purple circle). We have a single heat source of incident radiation, A, shown by the red arrow. Heat is shown radiated away as the blue arrow, B. There is no single heat-sink, so this would be in many directions. The system is in equilibrium, and temperature T is stable. The purple body is not a heat source. We want to change or control temperature T. We cannot change A or B in any way. We cannot change the mass of the purple body; we can only change its chemical make-up. What kind of things could we do to the purple body to control temperature T? CLARIFICATION: I want to control or determine the temperature at the surface of the body, by constructing the body from different materials / structures. My first thought is to try to change the reflectiveness of the surface. If the surface were made shiny, would that cause T to drop, because heat A bounces off? What else could we do to change temperature T? I am also interested in whether the size of the body would affect how this operates. Answer: To enable a mathematical analysis, let us simplify the problem to an infinite plane with radiation incident from one side, and uniform in space: Energy enters the body via the incident radiation; since it is a vacuum the only way for it to leave is through emitted radiation. The fraction of the incident radiation which it absorbs is given by the emissivity $\varepsilon$. If the body is opaque, the remainder of the incident radiation is reflected. The amount of emitted radiation is determined by the Stefan-Boltzmann law. So if the power per unit area of the incident radiation is $J$, then we have $$\frac{\mathrm{d}E}{\mathrm{d}t} = A \varepsilon \left( J - \sigma T_H^4 - \sigma T_C^4 \right),$$ where $T_H$ is the temperature of the hot side of the plane (left in the figure) and $T_C$ is the temperature of the cold side. I am ignoring any possible wavelength-dependence of the emissivity.
To get equilibrium, set the time derivative of the energy to zero (i.e. make the rate of emission equal to the rate of absorption). Thus: $$T_H^4 + T_C^4 = \frac{J}{\sigma}.$$ So far, the properties of the material haven't seemed to make a difference! ($\sigma$ is a universal constant, and the emissivity dropped out of the equation!) But where they enter is in determining the relationship between $T_C$ and $T_H$. To determine the equilibrium relationship between these two temperatures, we must balance conduction from the hot side to the cold side against radiation from the cold side. Conduction in the body is described by the heat equation, which at equilibrium inside the slab gives a linear temperature profile, with the temperature decreasing linearly from $T_H$ to $T_C$. The rate of heat conduction (per unit area) in this quasi-1d problem is $$J_{\mathrm{conduction}} = \kappa \frac{\partial T}{\partial x} = \kappa \frac{T_H - T_C}{h},$$ where $\kappa$ is the thermal conductivity and $h$ is the thickness of the slab. At equilibrium, this flux must balance the outgoing radiation on the cold side of the slab, so $$ \frac{\kappa}{h \varepsilon \sigma} \left(T_H - T_C \right) = T_C^4.$$ Notice that in this simplified geometry, we only have one control parameter, the combination which I will call $$ \phi \equiv \frac{\kappa}{h \varepsilon \sigma}.$$ We can make $\phi$ very large either by making the material conduct heat very well, making it very thin, or making it very reflective. Any of the above will allow the body to equilibrate internally, and give $T_H = T_C$. On the other hand, if we have a slab which conducts very poorly or is very very thick, then we will have $T_H \gg T_C$. In principle we can completely solve the problem by solving the above equation for $T_C$ as a function of $T_H$, then plugging it back into the equation containing $J$. But let's just solve the two extreme limits, and we'll basically understand everything.
In the $\phi \to \infty$ limit there is one temperature, $T$, and $$T = \left( \frac{J}{2 \sigma} \right)^{1/4}.$$ In the limit $\phi \to 0$, $T_C = 0$ from the second equation, and therefore $T_H = \left( \frac{J}{\sigma} \right)^{1/4}$. So in summary: We only have one control parameter, which is basically (conduction rate) / (radiation absorptivity). If this number is high, the entire body will come to a single, medium-ish temperature. If the number gets lower, the "sunward side" will get hotter, but the "dark side" will get colder. A more realistic geometry will complicate this analysis, but I think the conclusions should be similar. I don't know if a very "weird" $\varepsilon{\left(\lambda\right)}$, produced by some sort of metamaterial, would have an interesting effect.
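To see the two boxed limits emerge numerically, one can solve the pair of equilibrium conditions by bisection. Everything below (the solar-constant-like value of $J$, the extreme values of $\phi$) is an arbitrary illustration, not part of the answer above:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperatures(J, phi, tol=1e-9):
    """Solve T_H^4 + T_C^4 = J/sigma together with phi*(T_H - T_C) = T_C^4
    by bisection on T_C, where phi = kappa / (h * emissivity * sigma)."""
    def t_hot(t_c):
        # The overall radiation balance fixes T_H once T_C is chosen.
        return (J / SIGMA - t_c ** 4) ** 0.25

    def residual(t_c):
        # Conduction through the slab minus radiation from the cold face;
        # strictly decreasing in t_c, so bisection applies.
        return phi * (t_hot(t_c) - t_c) - t_c ** 4

    lo, hi = 0.0, (J / (2.0 * SIGMA)) ** 0.25  # T_C is at most the phi -> infinity value
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid  # conduction still exceeds radiation: T_C must be higher
        else:
            hi = mid
    t_c = 0.5 * (lo + hi)
    return t_hot(t_c), t_c
```

For very large $\phi$ both faces sit near $(J/2\sigma)^{1/4}$, and for very small $\phi$ the cold face drops toward zero while the hot face approaches $(J/\sigma)^{1/4}$, matching the two limits above.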
{ "domain": "physics.stackexchange", "id": 61466, "tags": "thermodynamics" }
First order expression for functional dependency
Question: I'm puzzled by the functional dependency formula in first-order logic. It is triggered by http://rjlipton.wordpress.com/2010/01/17/a-limit-of-first-order-logic/ where there seems to be a confusion between dependency and functional dependency. The expression $\displaystyle \forall x \exists y : S(x, y) .$ is anything but a functional dependency. The formula for functional dependency is $\displaystyle \forall y_{1} \forall y_{2} \forall x : S(x, y_{1}) \wedge S(x, y_{2}) \implies y_{1} = y_{2} .$ Now, suppose we have a ternary predicate Q(x,y,z). How does one express the functional dependency y=f(x)? Answer: Apparently, we must project away the variable z from the ternary relation Q, which is done via an existential quantifier: $\displaystyle \forall y_{1} \forall y_{2} \forall x : (\exists z : Q(x, y_{1}, z)) \wedge (\exists z : Q(x, y_{2}, z)) \implies y_{1} = y_{2} .$ This is equivalent to the standard textbook FD definition: $\displaystyle \forall y_{1} \forall y_{2} \forall x \, \forall z_{1} \forall z_{2} : Q(x, y_{1}, z_{1}) \wedge Q(x, y_{2}, z_{2}) \implies y_{1} = y_{2} .$
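As an aside, the "project z away" reading can be checked mechanically on a finite relation. The helper and sample data below are my own sketch, not from the original posts:

```python
def satisfies_fd(relation, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds in `relation`
    (a set of tuples); `lhs` and `rhs` are tuples of attribute positions.
    Attributes not listed in either side are implicitly projected away,
    mirroring the existential quantifier in the formula above."""
    seen = {}
    for row in relation:
        key = tuple(row[i] for i in lhs)
        val = tuple(row[i] for i in rhs)
        # If this key was seen with a different value, the FD is violated.
        if seen.setdefault(key, val) != val:
            return False
    return True
```

For example, in the relation {(1, 'a', 10), (1, 'a', 20), (2, 'b', 10)} the FD x -> y holds, while x -> z fails because x = 1 occurs with both z = 10 and z = 20.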
{ "domain": "cstheory.stackexchange", "id": 4352, "tags": "lo.logic, db.databases" }
Does light always take the shortest path?
Question: Does light always take the shortest path? And is it possible to change the probability of a photon travelling to a point by only disturbing the paths that are far away from the shortest path? Answer: Consider a light source Q shining through a window at a detector Z. The direct path from Q to Z goes through the window's center M. According to wave optics, to get the amplitude at Z we have to add the contributions (complex amplitudes) of all the "possible paths" through the window from Q to Z. If you graphically represent each contribution as a vector and add them by putting them head to tail, you will see that the result forms a so-called Cornu spiral. The straight middle section corresponds to the path that goes through M and its close vicinity. The paths that are far away from M correspond to the spiral ends of the Cornu spiral. This means all the paths far away from M interfere destructively and only the path through M contributes to the amplitude at Z. This is the classical result that light takes the shortest path. However, we can place obstacles at specific points inside the window in such a way that we remove the destructive interference and make the remaining paths far away from M interfere constructively with the main path through M. This is done by placing the obstacles in paths that correspond to the parts of the Cornu spiral that go against the direction of the straight part in the middle of the Cornu spiral. This will make it brighter at Z even though we didn't do anything to the main path through M. I can also do the opposite and place obstacles in the window such that the remaining paths interfere destructively with the main path, such that it is dark at Z even though there is an unobstructed direct path from Q to Z through M. If I now place an obstacle at M to block the main path, it will actually increase the amplitude at Z because the remaining paths interfere constructively and we removed the destructively interfering main path.
This means that counter-intuitively we can actually make it brighter at Z by placing obstacles which block the light. Feynman also explains how to calculate this Cornu spiral in his lectures, section 26-6. If light always took the shortest path, then I would expect the amplitude at Z to remain constant as long as I don't make changes to the main path. Remember that we didn't use mirrors to reflect light to Z; we placed obstacles that absorb light. But we can still increase the brightness at Z by placing obstacles in the window far away from M. This still works with individual photons. Does that mean an individual photon takes all paths at once? Or does it choose one of these paths according to the probabilities? In my view quantum objects are not point particles that have a well-defined path in the first place, so a photon does not always travel along the shortest path, because it does not travel along any well-defined path at all. p.s. I created this initially as an answer to this comment.
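The phasor summation described above is easy to reproduce numerically. The toy model below sums $e^{ikL(x)}$ over two-segment paths through a 1-D window; all the geometry numbers are arbitrary assumptions for illustration, and the mask implements the zone-plate-style blocking the answer describes:

```python
import cmath
import math

D = 100.0       # source-to-window and window-to-detector distance (illustrative)
WAVELEN = 1.0   # wavelength, in the same illustrative units

def aperture_amplitude(mask=None, half_width=40.0, n=4001):
    """Sum the complex contributions exp(i*k*L(x)) of all straight two-segment
    paths source -> window point x -> detector, for a 1-D window of width
    2*half_width sampled at n points. mask(x) -> True means point x is open."""
    k = 2.0 * math.pi / WAVELEN
    total = 0.0 + 0.0j
    for i in range(n):
        x = -half_width + 2.0 * half_width * i / (n - 1)
        if mask is None or mask(x):
            path_len = 2.0 * math.hypot(D, x)  # symmetric geometry
            total += cmath.exp(1j * k * path_len)
    return total

def zone_plate(x):
    """Open only the window zones whose extra path length, relative to the
    direct path through M (x = 0), lies in an even half-wavelength band."""
    extra = 2.0 * (math.hypot(D, x) - D)
    return int(2.0 * extra / WAVELEN) % 2 == 0
```

With the mask in place, the magnitude of the summed amplitude at Z comes out larger than with the window fully open, i.e. blocking parts of the window makes it brighter at Z.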
{ "domain": "physics.stackexchange", "id": 74652, "tags": "optics, visible-light, quantum-electrodynamics, interference, variational-principle" }
What is wrong with a nonrenormalizable theory?
Question: Non-renormalizable theories, when regarded as effective field theories below a cut-off $\Lambda$, are perfectly meaningful field theories. This is because non-renormalizable operators can be induced in the effective Lagrangian while integrating out high energy degrees of freedom. But as far as the modern interpretation is concerned, renormalizable theories are also effective field theories. Then why is the renormalizability of field theories still an important demand? For example, QED, QCD and the standard model are renormalizable. What would be wrong if they were not? Answer: In the modern effective field theory point of view, there's nothing wrong with non-renormalizable theories. In fact, one may prefer a non-renormalizable theory inasmuch as it tells you the point at which it fails (the energy cut-off). To be concrete, consider an effective Lagrangian expanded in inverse powers of the energy cut-off $\Lambda$: \begin{equation} \mathcal{L}_\mathrm{eff}(\Lambda)=\mathcal{L}_\mathrm{renorm}+ \sum_\mathcal{\alpha}\frac{g_\alpha}{\Lambda^{ \operatorname{dim}\mathcal{O}_\alpha-4}}\mathcal{O}_\alpha \end{equation} where $\mathcal{L}_\mathrm{renorm}$ doesn't depend on $\Lambda$, $\mathcal{O}_\alpha$ are non-renormalizable operators (dim. > 4) and $g_\alpha$ are the corresponding coupling constants. So at very low energies $E\ll \Lambda$ the contributions from the non-renormalizable operators will be suppressed by powers of $E/\Lambda$. That's why the Standard Model is renormalizable; we're just unable to see the non-renormalizable terms because we're looking at too low energies. Notice also that as we increase the energy, the first operators to become important will be the ones with the lowest dimension. In general, contributions from non-renormalizable operators will become important in the order given by their dimension.
So you can see that, although there are infinitely many possible non-renormalizable coupling constants, you can make the approximation of cutting the expansion of the effective Lagrangian at some power of the cut-off and get a finite number of parameters.
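The "order given by their dimension" counting can be made concrete with a one-line estimate; the 100 GeV energy and 10 TeV cut-off used below are arbitrary illustrative numbers, not from the answer:

```python
def operator_suppression(energy, cutoff, dim):
    """Naive estimate (E / Lambda)**(dim - 4) of how strongly a
    dimension-`dim` operator is suppressed relative to the
    renormalizable (dim <= 4) terms at energy E << Lambda."""
    if dim < 4:
        raise ValueError("intended for operators of dimension >= 4")
    return (energy / cutoff) ** (dim - 4)
```

At $E = 100$ GeV and $\Lambda = 10$ TeV, a dimension-5 operator is suppressed by $10^{-2}$ and a dimension-6 operator by $10^{-4}$, so lower-dimension operators indeed become important first as the energy rises.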
{ "domain": "physics.stackexchange", "id": 35716, "tags": "quantum-field-theory, standard-model, renormalization, beyond-the-standard-model, effective-field-theory" }
Short function to remove unnecessary whitespace
Question: I have a function consisting of one line of code:

def trimString(string):
    """ Remove unnecessary whitespace. """
    return re.sub('\s+', ' ', string).strip()

But I've been debating with myself whether the following would be better, seeing as explicit > implicit.

def trimString(string):
    """ Remove unnecessary whitespace. """
    string = re.sub('\s+', ' ', string).strip()
    return string

So which is preferable, the former or the latter? And why is that the case? The question may be off-topic but I find myself asking it often enough and thought this was the place to ask. Answer: I think the shorter, the better. Since you are in Python 3, if you really want to make explicit that the function is returning a string, you can use type hints:

def trimString(string) -> str:

You can also specify it in the parameter:

def trimString(string: str) -> str:

(keep in mind that Python will ignore type hints, but some IDEs like PyCharm use them to detect warnings and errors) On a side note, you should try to follow the PEP 8 styling conventions; the function name should be in snake case, so trim_string.
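Putting the advice together, one possible final version looks as follows. Note that PEP 8 actually prescribes snake_case for function names, and a raw string avoids invalid-escape warnings for \s in newer Python versions (both details are my additions, not from the answer):

```python
import re

def trim_string(string: str) -> str:
    """Collapse internal whitespace runs to single spaces and strip the ends."""
    return re.sub(r'\s+', ' ', string).strip()
```

For example, trim_string("  a \t b\n c  ") returns "a b c".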
{ "domain": "codereview.stackexchange", "id": 35461, "tags": "python, python-3.x, strings, regex" }
Is limiting the input type in subclasses legitimate (does it give a stronger or the same specification)?
Question: Note: I have read a similar post, but my problem seems different from that. Read my attempts to understand the problem for why I believe they are different. My problem: I know that (see this and this)

A subtype of another type should give stronger or the same specifications as the supertype. Spec $A$ is stronger than spec $B$ iff the set of implementations that satisfy $A$ is a proper subset of the set of those that satisfy $B$. Therefore, $A$ is stronger than $B$ if either i. $A$ has weaker preconditions than $B$ does, or ii. $A$ has stronger postconditions than $B$ does.

However, I have encountered a confusing situation in today's programming: I have a base class Essay, and subclasses of different kinds of essays, say EssayA and EssayB, that inherit Essay. To summarise a list of a specific kind of essay, I define the base class EssaySummariser that

has a virtual method include(Essay e) that takes an essay into account for the future summary.
include()'s precondition: e must be of a certain type of Essay.
include()'s postcondition: e is included in the summariser and will be used to form the summary.
has a virtual method summarise() that summarises all included essays.

Now to implement a summariser for essays of type A, EssayA, I define a subclass of EssaySummariser, EssayASummariser, which overrides the include() method:

new include()'s precondition: e must be of the type EssayA
new include()'s postcondition: e is included in the summariser and will be used to form the summary.

It appears that EssayASummariser's spec of include() has a stronger precondition than EssaySummariser's. However, it also doesn't seem wrong as a subtype. My attempts to understand the problem I have attempted to treat the new specification as having the same precondition but as having a stronger postcondition: only e's of type A are included. Other types will cause the method to throw.
This attempt to interpret it seems wrong, because the new postcondition I imagined violates the original one (the original one says any type of e will be included). I have just read a similar post here. Trying to apply the answer to that post, I may imagine that my EssaySummariser is not a true ADT, but EssayASummariser is. However, the situation in my problem here is different from that in that post. Specifically, my EssaySummariser is not a generic type. More seriously, if the two situations are the same, then I imagine I can convert the generic classes in that problem to non-generic classes in my problem, and say that some classes cannot be ADTs because of such specifications. This does not sound right because I think any specs can be allowed into an ADT. My questions: Is it legitimate to create subtypes like I did in my problem? In particular, was my attempt 2 correct in applying that post's knowledge to my problem? If it is, how do I map the theories to the reality in the problem? Answer: I'm not an expert on this, but with the way you have defined EssayASummarizer, I don't believe it is a subtype of EssaySummarizer. You have already explained why: the set of implementations that can instantiate EssayASummarizer is not a subset of the set of implementations that can instantiate EssaySummarizer. Specifically, an instance of EssayASummarizer has an include() method that cannot accept an EssayB, so it does not qualify as an instance of EssaySummarizer. I suggest you read more about covariance and contravariance, e.g., https://en.wikipedia.org/wiki/Covariance_and_contravariance_(computer_science). Your EssaySummarizer is like a function Essay -> Output, and your EssayASummarizer is like a function EssayA -> Output. The latter function is not a subtype of the former function, due to contravariance of the type of the input to the function.
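The answer's point can be made concrete with hypothetical Python classes mirroring the question; the class bodies and the TypeError guard below are my own sketch, not from either post:

```python
class Essay:
    pass

class EssayA(Essay):
    pass

class EssayB(Essay):
    pass

class EssaySummariser:
    def __init__(self):
        self._included = []

    def include(self, e):
        """Base contract: accepts any Essay."""
        self._included.append(e)

class EssayASummariser(EssaySummariser):
    def include(self, e):
        # Strengthened precondition: rejects arguments the base contract
        # promises to accept, so contravariance of argument types is violated
        # and this is not a behavioural subtype of EssaySummariser.
        if not isinstance(e, EssayA):
            raise TypeError("EssayASummariser only accepts EssayA")
        super().include(e)
```

Code written against EssaySummariser may legitimately call include() with an EssayB, so substituting an EssayASummariser breaks it; that strengthened precondition is exactly the contravariance violation the answer describes.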
{ "domain": "cs.stackexchange", "id": 21873, "tags": "type-theory, abstract-data-types" }
Interpretation of Variation Notes
Question: I would like an explanation of how this Lagrangian partial derivative was taken (eq. 3). This probably is more suited for the Math Stack Exchange; however, this is for a physics course, which is why I am posting here. Based on the definition of a Taylor expansion: I don't understand how or why only the partial wrt $q_i$ appears in the second term, and not partials wrt all the other variables, along with why it's only wrt $\dot{q}_i$ in the third term. Moreover, it seems that there are no constants being multiplied against each function, nor anything corresponding to the $(x-a)$ term. The full derivative is defined in eq. 5 but it doesn't match up with what the full derivatives in eq. 3 should be: A full breakdown of the math would be appreciated, or at least a general, formulaic explanation. Answer: Your definition of the Taylor series in one variable to first-order derivatives is $$ f(x) = f(a) + \frac{\partial f}{\partial x}(a)(x-a) $$ Notice we neglect higher-order terms. In two variables this would look like $$ f(x_1,x_2) = f(a_1,a_2) + \frac{\partial f}{\partial x_1}(a_1,a_2)(x_1-a_1)+ \frac{\partial f}{\partial x_2}(a_1,a_2)(x_2-a_2) $$ Let us change notation so that $\delta x = (x-a)$, which represents an infinitesimal shift in the parameters. The above Taylor series is $$ f(x_1,x_2) = f(a_1,a_2) + \frac{\partial f}{\partial x_1}(a_1,a_2)\delta x_1+ \frac{\partial f}{\partial x_2}(a_1,a_2)\delta x_2 $$ Now, what happens if we had lots of variables? We can modify the above expression to $$ f(\vec x) = f(\vec a) + \sum_i\frac{\partial f(\vec a)}{\partial x_i}\delta x_i $$ Now, what happens if half of the variables are naturally grouped together (all the positions, and all the velocities too)? Why don't we write this for each dimension $i$, specifically pulling out both variable types!
$$ f(x_1, \dots,x_n,y_1,\dots ,y_n) = f(\vec a) + \sum_i\frac{\partial f}{\partial x_i}(\vec a)\delta x_i+ \sum_i\frac{\partial f}{\partial y_i}(\vec a)\delta y_i $$ but, given the Einstein summation notation, we know that the summation is implied so we could drop the summation signs if we liked. The action is defined by $$ S = \int L(q_1,\dots ,q_n,v_1,\dots ,v_n,t)dt $$ where $q_i$ and $v_i$ are the position and velocity in each dimension. A variation in the action is the first order Taylor series in the positions and velocities minus the un-peturbed action. $$ \delta S = \int dt\bigg\{L(q_1,\dots ,q_n,v_1,\dots ,v_n,t) + \sum_i\frac{\partial L}{\partial q_i}\delta q_i+ \sum_i\frac{\partial L}{\partial v_i}\delta v_i\bigg\} - \int L(q_1,\dots ,q_n,v_1,\dots ,v_n,t)dt $$ Which simplifies to $$ \delta S = \int dt\bigg\{ \sum_i\frac{\partial L}{\partial q_i}\delta q_i+ \sum_i\frac{\partial L}{\partial v_i}\delta v_i\bigg\} $$ Which is your final result. To arrive at the Euler-Lagrange equations, we can then set $v_i=\dot q_i$ (i.e. only when the velocity is the time derivative of the position is the solution an extremal path of the action, but that's just details). $$ \delta S = \int dt\bigg\{ \sum_i\frac{\partial L}{\partial q_i}\delta q_i+ \sum_i\frac{\partial L}{\partial \dot q_i}\delta \dot q_i\bigg\} $$ In order to proceed with this expression, it is useful to collect the variations in each dimension so we can pull them out of the equation together. This can be achieved since $\dot q_i = d_tq_i$ and hence the final term can be integrated by parts to yield $$ \delta S = \int dt\bigg\{ \sum_i\frac{\partial L}{\partial q_i}-\sum_i\frac{d}{dt} \frac{\partial L}{\partial \dot q_i}\bigg\}\delta q_i + \delta q_i\frac{\partial L}{\partial \dot q_i}\bigg|^2_1 $$ The final term is a boundary term and this vanishes due to the imposition that the variation vanishes at this point i.e. $\delta q=0$. 
You result in the Euler-Lagrange equation after imposing only extremal solutions $\delta S=0$.
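The stationary-action statement can be illustrated numerically. Below, a discretised action for the harmonic-oscillator Lagrangian $L = \tfrac12\dot q^2 - \tfrac12 q^2$ is evaluated on the classical path $q(t) = \sin t$ and on perturbed paths whose endpoints are fixed; the discretisation scheme and all numbers are my own choices for illustration:

```python
import math

def action(path, dt):
    """Discrete action for L = (1/2) qdot^2 - (1/2) q^2 (unit mass and spring
    constant), using interval velocities and midpoint positions."""
    S = 0.0
    for i in range(len(path) - 1):
        v = (path[i + 1] - path[i]) / dt
        q = 0.5 * (path[i] + path[i + 1])
        S += (0.5 * v * v - 0.5 * q * q) * dt
    return S
```

Perturbing the classical path by $\varepsilon\,\sin(\pi t)$ (which vanishes at both endpoints) changes the action only at order $\varepsilon^2$: scaling $\varepsilon$ down by 10 shrinks the change by roughly 100, which is the numerical signature of $\delta S = 0$ at first order.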
{ "domain": "physics.stackexchange", "id": 71770, "tags": "lagrangian-formalism, differentiation, notation, calculus, variational-calculus" }
"Up" script for moving up directories quickly
Question: A long time ago I created a script for moving up directories very quickly in the command line using the command up. You can find usage notes here. It's a very simple script with just 8 lines of source code, as follows:

if [ -z "$1" ]; then
    cd ..
else
    for i in `seq 1 $1`; do
        cd ..
    done
fi

I've never personally had any problems with it since I made it and started using it myself — but are there any accidental or malicious inputs (particularly with the blind injection of $1) that might cause this to do something bad that I'm not aware of? Given that the up command isn't used for anything in any command line I'm aware of, I'd like to promote more widespread usage of this script file so that people can type up instead of cd .. all the time, saving three keystrokes (or more if they want to move up more directories) for a very common operation. Answer: Your recommendation is to define alias up=". path/to/up" so that when you type up 3, it expands to . path/to/up 3. However, since you want to take an optional argument and affect the state of the current shell, I think you would be better off defining a shell function instead. As it turns out, the [ -z "$1" ] special case is not necessary, since seq 1 just expands to 1. You end up executing n separate cd .. commands for up n. This leads to a usability bug: cd - or cd $OLDPWD, which normally take you back to the previous directory, don't work the way I expect. Suggested solution:

up() {
    cd $(for i in $(seq 1 $1); do echo -n ../; done)
}
{ "domain": "codereview.stackexchange", "id": 32556, "tags": "console, bash" }
Why does a lattice have to have an inversion center?
Question: Indeed all lattices have inversion symmetry, but my teacher said a lattice has to have an inversion center: why? If a lattice doesn't have inversion symmetry, what would happen? Answer: In three dimensions a mathematical lattice (which crystallographers call a Bravais lattice) is the set of points $\{m\vec a + n\vec b + p\vec c\}$ where $\vec a, \vec b, \vec c$ are vectors which span space and $m,n,p$ are integers. If we invert a point of the lattice in the point $\vec q$ we get the point $2\vec q - m\vec a - n\vec b - p\vec c$. So if $2\vec q$ is a point of the lattice then inversion in $\vec q$ maps the lattice to itself, so $\vec q$ is an inversion centre for the lattice. In particular, $\vec q = \tfrac 1 2(\vec a + \vec b + \vec c)$ is an inversion centre for the lattice that is not on the lattice itself.
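A quick numeric check of this argument, using an arbitrary (hypothetical) set of basis vectors: inversion through $\vec q = \tfrac12(\vec a + \vec b + \vec c)$ maps the eight corners of one unit cell onto each other, while $\vec q$ itself is not a lattice point.

```python
import itertools

# An arbitrary, hypothetical basis (not from the answer above).
a = (1.0, 0.0, 0.0)
b = (0.5, 0.8, 0.0)
c = (0.2, 0.3, 0.9)

def lattice_point(m, n, p):
    """The lattice point m*a + n*b + p*c."""
    return tuple(m * ai + n * bi + p * ci for ai, bi, ci in zip(a, b, c))

def key(v):
    """Round away floating-point noise so points can be compared as set elements."""
    return tuple(round(x, 9) for x in v)

# Candidate inversion centre q = (a + b + c)/2.
q = tuple(0.5 * (ai + bi + ci) for ai, bi, ci in zip(a, b, c))

# The eight corners of one unit cell, and their images under inversion in q.
cell = {key(lattice_point(m, n, p)) for m, n, p in itertools.product((0, 1), repeat=3)}
inverted = {key(tuple(2 * qi - xi for qi, xi in zip(q, x))) for x in cell}
```

Each corner $(m, n, p)$ maps to $(1-m, 1-n, 1-p)$, so the set of eight corners is invariant under the inversion.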
{ "domain": "physics.stackexchange", "id": 86814, "tags": "solid-state-physics, symmetry, group-theory, crystals" }
Python dice class
Question: I've built a simple dice class utility for rolling dice. It supports weighted rolling, and usage is fairly simple. Here's the code.

from random import choice


class Die(object):
    """
    The Die class represents a sided die with n amount of sides.
    Sides can be weighted. For example:
    die = Die({
        "side_name": weight
        ...
    })
    """

    def __init__(self, dice_sides: dict):
        self.dice_sides     = dice_sides
        self._choosing_data = []

    def _populate_choosing_data(self):
        """
        Populate self._choosing_data with data to choose from, based on weight.
        """
        for key, value in self.dice_sides.items():
            if type(value) == int:
                for weight in range(value):
                    self._choosing_data.append(key)
            else:
                raise TypeError("Weight value needs to be an integer.")

    def select_side(self):
        """
        Select a random side from the list of sides based on their weights.
        The side name will be returned as a string.
        """
        self._populate_choosing_data()
        return choice(self._choosing_data)

Here's some example usage. The dictionary structure is as follows: {"side name": weight}.

import dice_utils

die = dice_utils.Die({
    "1": 1,
    "2": 5,
    "3": 1,
    "4": 2,
    "5": 7,
    "6": 1
})

print(die.select_side())

I'd like some suggestions for improvement. I especially don't like how I have to allocate a private variable as well, so if there are any suggestions for improvement there, I'd like some. Answer: There is no need to explicitly inherit from object in Python 3 - all classes are new-style. And you can't do it for backwards compatibility if you're also using function annotations, which 2.x doesn't support. Your docstrings aren't accurate, for example:

class Die(object):
    """
    The Die class represents a sided die with n amount of sides.

... but there is no n in the code! Also, note that per the guidelines: Multi-line docstrings consist of a summary line just like a one-line docstring, followed by a blank line, followed by a more elaborate description.
.select_side() seems like an odd name for the method - typically, this would be named .roll(), as that's what you do with a Die! You have some minor violations of the style guide, for example: Avoid extraneous whitespace in the following situations: ... More than one space around an assignment (or other) operator to align it with another. and yet:

self.dice_sides     = dice_sides
self._choosing_data = []

if type(value) == int: is not the correct way to test whether value is an integer, for two reasons: int, like the other built-in types (and None), is a singleton, so you can and should use is int (identity) for comparison rather than == int (equality); and isinstance(value, int) is a much better approach, as it supports inheritance correctly (e.g. if you create some int subclass, for example a PositiveNonZeroInt for the weights and number of sides, it will still be accepted). Initialising _choosing_data in __init__ but populating it in _populate_choosing_data is a little awkward, and leaves you open to bugs... which you've introduced:

>>> die = Die({1: 1, 2: 2})
>>> die.select_side()
1
>>> die._choosing_data
[1, 2, 2]
>>> die.select_side()
1
>>> die._choosing_data
[1, 2, 2, 1, 2, 2]  # oh dear!

This is a bit inefficient in the best case, where the dice_sides never get changed (the proportions of the different items remain the same) and can lead to incorrectly-distributed outputs in the worst case, where the user tries to alter the proportions. Minimally, you can move self._choosing_data = [] to the start of _populate_choosing_data. However, it would probably be better to call _populate_choosing_data from __init__, and protect dice_sides as read-only using a property:

class Die(object):
    def __init__(self, dice_sides: dict):
        self._dice_sides = dice_sides.copy()
        self._populate_choosing_data()

    @property
    def dice_sides(self):
        return self._dice_sides.copy()

    ...
As dice_sides is a mutable object, note that I've introduced .copy() to prevent the user from accidentally changing the version that the instance uses internally. There is also space for some inheritance here; consider:

class Die(object):
    def __init__(self, sides):
        self.sides = sides

    def roll(self):
        return random.randrange(self.sides) + 1


class WeightedDie(Die):
    def __init__(self, sides, weights):
        super().__init__(sides)
        self._weights = {side: weights.get(side, 1) for side in range(1, sides+1)}
        ...

    def roll(self):
        ...

This lets the user specify the minimal information required; your example:

die = dice_utils.Die({1: 1, 2: 5, 3: 1, 4: 2, 5: 7, 6: 1})

becomes:

die = dice_utils.WeightedDie(6, {2: 5, 4: 2, 5: 7})

This only supports numerical sides at present (note that I've altered your example accordingly, and removed the extraneous whitespace), but could be adapted to allow non-numerical sides. Weighted random choices are a pretty common problem, and have been solved in various ways that you could adopt; see e.g. "A weighted version of random.choice".
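Beyond the linked recipes, Python 3.6+ ships random.choices with a weights parameter, which avoids materialising the expanded side list entirely. A possible sketch (the factory-function shape and the retained integer-weight check are my own choices; random.choices itself also accepts float weights):

```python
import random

def make_weighted_die(sides):
    """Return a roll() callable for a {side: weight} mapping, using
    random.choices instead of an expanded side list."""
    names = list(sides)
    weights = [sides[name] for name in names]
    # Mirror the original class's validation at construction time.
    if not all(isinstance(w, int) and w > 0 for w in weights):
        raise TypeError("Weight values need to be positive integers.")

    def roll():
        return random.choices(names, weights=weights, k=1)[0]

    return roll
```

A die built this way never stores more than one entry per side, no matter how large the weights are.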
{ "domain": "codereview.stackexchange", "id": 14167, "tags": "python, python-3.x, random, dice" }
changing the dimension of the base link moved the robot upward from the ground
Question: I have a mobile robot, "husky". I have the following things in a urdf file:

<!-- Size of the base-->
<property name="base_x_size" value="0.98740000" />
<property name="base_y_size" value="0.57090000" />
<property name="base_z_size" value="0.24750000" />

In the same urdf file I have this information related to the base link:

<link name="base_link">
  <inertial>
    <mass value="${base_mass}" />
    <!--This is the pose of the inertial reference frame, relative to the link reference frame. The origin of the inertial reference frame needs to be at the center of gravity. The axes of the inertial reference frame do not need to be aligned with the principal axes of the inertia.-->
    <origin xyz="${base_x_com} ${base_y_com} ${base_z_com}" />
    <!--The 3x3 rotational inertia matrix. Because the rotational inertia matrix is symmetric, only 6 above-diagonal elements of this matrix are specified here, using the attributes ixx, ixy, ixz, iyy, iyz, izz.-->
    <inertia ixx="${base_ixx_com_cs}" ixy="${base_ixy_com_cs}" ixz="${base_ixz_com_cs}" iyy="${base_iyy_com_cs}" iyz="${base_iyz_com_cs}" izz="${base_izz_com_cs}" />
  </inertial>
  <visual>
    <origin xyz="0 0 0" rpy="0 0 0" />
    <geometry>
      <mesh filename="package://husky_description/meshes/base_test.stl" />
    </geometry>
    <material name="Black" />
  </visual>
  <collision name="colloision">
    <origin xyz="0 0 ${wheel_x_size/2 - base_z_origin_to_wheel_origin - 0.02}" rpy="0 0 0 " />
    <geometry>
      <box size = "${base_x_size+0.02} ${base_y_size} ${base_z_size + 0.02}"/>
      <!--making it slightly bigger in x and z direction-->
    </geometry>
    <max_contacts>10</max_contacts>
  </collision>
</link>

I changed the dimension of the base_link in the stl file; therefore I have to change the following in the urdf file:

<property name="base_z_size" value="1.24750000" />

The problem is that the mobile robot is now floating 1 meter above the ground.
How can I change the dimension of the base_link and keep the mobile robot on the ground?

Originally posted by RSA_kustar on ROS Answers with karma: 275 on 2014-09-24 Post score: 1

Answer: First, I changed

<!-- Size of the base-->
<property name="base_x_size" value="0.98740000" />
<property name="base_y_size" value="0.57090000" />
<property name="base_z_size" value="0.24750000" />

to

<property name="base_x_size" value="0.98740000"/>
<property name="base_y_size" value="0.57090000"/>
<property name="base_z_size" value="10"/>

i.e. to the same value I set in the .stl file, so that what I am seeing is the same as what is constructed.

Second, I changed the origin of the collision from

<origin xyz="0 0 ${wheel_x_size/2 - base_z_origin_to_wheel_origin - 0.02}" rpy="0 0 0 " />

to

<origin xyz="0 0 5" rpy="0 0 0 "/>

and it worked. I don't know how, but it did work.

Originally posted by RSA_kustar with karma: 275 on 2014-10-09 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 19507, "tags": "mobile-robot, collision, ubuntu, base-link, ubuntu-precise" }
Is enthalpy a presence of energy or a change in energy?
Question: I see that the words enthalpy and change in enthalpy are often used interchangeably. Do they mean the same thing? Are change in enthalpy and enthalpy different? What is the true definition of enthalpy? Answer: From the Wikipedia article: The total enthalpy, H, of a system cannot be measured directly. The same situation exists in classical mechanics: only a change or difference in energy carries physical meaning. Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point; therefore what we measure is the change in enthalpy, ΔH. The change ΔH is positive in endothermic reactions, and negative in heat-releasing exothermic processes. Does this answer your question?
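As a brief sketch of the standard definitions (an aside, not from the quoted article): enthalpy is built out of state variables, and at constant pressure its change equals the heat exchanged, which is why only ΔH is ever measured:

```latex
H = U + PV
\qquad\Longrightarrow\qquad
\Delta H = \Delta U + \Delta(PV),
\qquad
\left.\Delta H\right|_{\text{const }P} = Q_p
```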
{ "domain": "physics.stackexchange", "id": 19641, "tags": "definition, thermodynamics" }
Concept of toppling and rolling
Question: Toppling usually happens to blocks and is caused by a torque. If we consider the motion of rolling, can we consider it to be a complicated form of toppling where the number of sides of the polygon tends to infinity? Please correct me if I am wrong. Answer: Your thinking is correct. See the figures below. You can think of the toppling of the block as rotation about a single point of contact (in 2D) of its leading edge with the surface, which we can consider its pivot point. If the maximum possible static friction force between the block and the surface is not exceeded, the block will topple without sliding. For a circular object, the point of the object in contact with the surface at any given time is the pivot point. The pivot point keeps changing as the object rolls along. You can think of it as constantly toppling over its constantly changing pivot point. If the maximum possible static friction force at its pivot point is not exceeded, it will continue to roll (topple) without sliding. Hope this helps.
{ "domain": "physics.stackexchange", "id": 92456, "tags": "newtonian-mechanics, rotational-dynamics" }
Size of Mamenchisaurus sinocanadorum
Question: In most books the size of the long-necked sauropod Mamenchisaurus is stated as about 26 meters. Recently, a huge specimen of Mamenchisaurus sinocanadorum was discovered. Gregory S. Paul estimates its length at 35 meters and its weight at more than 65,000 kg, rivaling Argentinosaurus and Patagotitan, the largest known dinosaurs. However, many doubt these estimates and say they are exaggerated. What are the current and most reliable estimates of the size of Mamenchisaurus sinocanadorum? Answer: The holotype specimen originally discovered in 1993 is described as being 26 m in length: An articulated neck of a large, mature sauropod, with enormously elongated cervical ribs, was discovered in strata of Late Jurassic age in the eastern Junggar Basin, Xinjiang, People's Republic of China. The animal is estimated to have originally measured 26 m in length, but was lightly proportioned relative to Brachiosaurus and Apatosaurus. ... In the present specimen, one rib from the midregion of the neck was seen to be at least 3540 mm long in the field, and after preparation measures 4100 mm in length. "A large mamenchisaurid from the Junggar Basin, Xinjiang, People's Republic of China", Russell, D.A., Zheng, Z. (1993), Canadian Journal of Earth Sciences. The larger measurements come from additional remains attributed to this species, but not yet formally described. Here is the relevant section from Paul's book: The Princeton Field Guide to Dinosaurs, Paul, G.S. (2010), Princeton University Press. Paul's figures are echoed in the following papers: 2006年,中科院古脊椎所在新疆昌吉奇台(原中加马门溪龙标本发现地附近)发掘出一具巨型蜥脚类恐龙,经初步鉴定应归入马门溪龙。从发现的部分颈椎长度推测,这具马门溪龙长度可达 35 m 以上,成为了中国乃至亚洲目前发现的个体最大的恐龙。 [Translation] In 2006, the Institute of Vertebrate Paleontology of the Chinese Academy of Sciences excavated a giant sauropod dinosaur in Qitai, Changji, Xinjiang (near the site where the original Mamenchisaurus sinocanadorum specimen was discovered); preliminary identification assigned it to Mamenchisaurus.
From the length of the partial cervical vertebrae found, this Mamenchisaurus could reach a length of more than 35 m, making it the largest individual dinosaur yet discovered in China or even Asia. 马门溪龙化石研究综述 [A review of research on Mamenchisaurus fossils], (2008); Mamenchisaurus sinocanadorum (体长26~35 m) [body length 26-35 m], Advances in research on dinosaur gigantism, X. Xu, Q. Zhao (2016). Though the first paper does not cite a source, and the second paper cites only Russell and Zheng's original paper describing the 26 m specimen.
{ "domain": "biology.stackexchange", "id": 8825, "tags": "palaeontology, dinosaurs" }
Pressure change due to Temperature change in a pipe
Question: I want to know how the following formula is derived: $$\Delta P=\frac{B\Delta T}{0.884\frac Rt+A}$$ where $\Delta P$ is the pressure change, $B$ is the difference between water and steel thermal expansion coefficients, $R$ is the internal radius of the pipe, $t$ is the thickness of the pipe and $A$ is the isothermal compressibility of water. Answer: The formula you give is an approximate version of the general formula, $$ \Delta P = \frac{\Delta T(\alpha_v-3\alpha_l)}{A+\left(\frac{R}{Et}\right)(2.5-2\mu)}, $$ where $\alpha_v$ is the cubic expansion coefficient of the liquid, $\alpha_l$ is the linear expansion coefficient of the wall, $E$ is the modulus of elasticity for the wall, $\mu$ is Poisson's ratio and the other variables are as you give them. I have assumed there is no leakage across the valve, if there is leakage then there is an additional term in the numerator. Your formula comes from taking particular values for $\mu$ and $E$. You can find the general formula and more detail in the pdf linked to in the comments of this Reddit post.
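To get a feel for the magnitudes the general formula produces, here is a rough numerical sketch; every property value below is an illustrative assumption (order-of-magnitude figures for water in a steel pipe), not a value from the post:

```python
# Thermal pressure rise of blocked-in liquid in an elastic pipe:
#   dP = dT*(alpha_v - 3*alpha_l) / (A + (R/(E*t))*(2.5 - 2*mu))
# All property values below are assumed, typical figures for water/steel.
alpha_v = 2.1e-4   # 1/K, cubic expansion coefficient of water (approx.)
alpha_l = 1.2e-5   # 1/K, linear expansion coefficient of steel (approx.)
A       = 4.6e-10  # 1/Pa, isothermal compressibility of water (approx.)
E       = 2.0e11   # Pa, modulus of elasticity of steel
mu      = 0.3      # Poisson's ratio of steel
R       = 0.05     # m, internal radius of the pipe
t       = 0.005    # m, wall thickness
dT      = 10.0     # K, temperature rise

dP = dT * (alpha_v - 3 * alpha_l) / (A + (R / (E * t)) * (2.5 - 2 * mu))
print(f"dP = {dP / 1e6:.2f} MPa")  # a few MPa for a 10 K rise
```

The denominator shows the two compliances in play: the liquid's compressibility A and the elastic stretch of the wall, which is why a thin-walled pipe (large R/t) relieves much of the pressure.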
{ "domain": "physics.stackexchange", "id": 80591, "tags": "thermodynamics, fluid-dynamics, fluid-statics" }
If $L=\big\{\langle M_1,M_2\rangle\mid M_1, M_2\text{ are TM and } L(M_1)\cup L(M_2)=\Sigma^* \big\}$ is in $RE$ or $coRE$ or not in $RE\cup coRE$?
Question: I tried to solve it as follows:

$$\overline{L}=\big\{\langle M_1,M_2\rangle\mid M_1, M_2\text{ are TM and } L(M_1)\cup L(M_2)\neq\Sigma^* \big\}$$

I'll show that $\overline{L}\not\in RE$ by reduction from

$$\overline{A_{TM}} = \big\{ \langle M, w \rangle \mid M \text{ is TM and }M \text{ rejects } w \big\}$$

I begin by assuming that $\overline{L}$ is recognized by some TM $M'$. Next, I construct a machine $M_{\overline{A_{TM}}}$ that will use $M'$ to recognize $\overline{A_{TM}}$.

TM $M'$ on input $\langle M, w\rangle$:

Build TMs $M_1, M_2$.
Simulate $M_1, M_2$ on all $x\in\Sigma^*$; any simulation will run for at most $|x|$ steps.
If $M_1$ and $M_2$ reject the same $x$, then $M'$ will accept $\langle M, w\rangle$.
If $M_1$ or $M_2$ accepts some $x$, then $M'$ will reject $\langle M, w\rangle$.

Therefore, $\langle M, w \rangle\in\overline{A_{TM}}$ exactly when $\langle M_1,M_2\rangle\in\overline{L}$. Because $\overline{A_{TM}}\not\in RE$, neither is $\overline{L}$, and therefore $L\not\in coRE$.

Is this correct, and can I use a similar approach to show that $L\not\in RE$? Thanks.

Answer: Your proof is not very clear to me. In your description of $M'$ I don't understand what $M_1$ and $M_2$ are, or why they have the properties you claim they do. A typical reduction proof looks like this:

You have some language $L$ which you wish to prove is, say, not $RE$.
You have a language $L'$ which you know is not $RE$.
Suppose there exists a TM $D$ which recognizes $L$.
Construct a TM $D'$ to recognize $L'$. Usually $D'$ will take some input $x$ and transform it into an input $y$ for $D$, such that $D'$ accepts $x$ if and only if $D$ accepts $y$.

So let's do the same thing for this problem. We'll reduce $TOTAL$ (which is neither $RE$ nor $coRE$) to $L$. Suppose $D$ recognizes $L$. Let $D'$ be the TM:

TM $D'$ on input $\langle M \rangle$:

Let $M'$ be the TM that acts as $M$ but inverts the output.
Run $D$ on $\langle M, M' \rangle$.
Accept if $D$ accepts, reject if $D$ rejects.
Now we want to prove that $D'$ recognizes $TOTAL$. This is straightforward. $\langle M, M' \rangle$ is in $L$ if and only if $L(M) \cup L(M') = \Sigma^*$. But $L(M')$ is exactly the set of inputs that $M$ rejects, so $L(M) \cup L(M')$ is the set of inputs on which $M$ halts. So $D'$ accepts $M$ if and only if $M$ either accepts or rejects every input, which is precisely what it means for $M$ to be in $TOTAL$! So we have shown that no TM can recognize $L$, and hence $L$ isn't $RE$. To show that it's not $co-RE$ either, we take $D$ to recognize $\bar{L}$ instead, and define $D'$ just as above. Now $D'$ accepts $M$ if and only if $\langle M, M' \rangle$ is not in $L$. The same argument as above shows this is exactly the case when $M$ is not in $TOTAL$.
{ "domain": "cs.stackexchange", "id": 11914, "tags": "turing-machines, reductions" }
Finding large Fibonacci Number in Python
Question: I'm doing Project Euler's 1000-digit Fibonacci number - Problem 25, which says:

The Fibonacci sequence is defined by the recurrence relation: Fn = Fn−1 + Fn−2, where F1 = 1 and F2 = 1. Hence the first 12 terms will be: F1 = 1, F2 = 1, F3 = 2, F4 = 3, F5 = 5, F6 = 8, F7 = 13, F8 = 21, F9 = 34, F10 = 55, F11 = 89, F12 = 144. The 12th term, F12, is the first term to contain three digits. What is the index of the first term in the Fibonacci sequence to contain 1000 digits?

I approached this by writing a recursive function in Python that finds the nth Fibonacci number as follows:

def Fibonacci(n):
    if n == 1:
        return 1
    elif n == 2:
        return 1
    else:
        return (Fibonacci(n-1) + Fibonacci(n-2))

However, this function runs very, very slowly. It slows down severely as n approaches 100. What can I do to make it faster?

Answer:

Issue

Your issue is that you compute fibonacci for the same number multiple times. To understand this, consider computing fibonacci(5). The function would call fibonacci(4) and fibonacci(3) at the end. Now what happens in fibonacci(4)? It would call fibonacci(3) and fibonacci(2). Notice that the function gets called twice for the number 3, which is a serious issue, since the recursion can go very deep and you incur a massive overhead because of this.

Solution

Because fibonacci is a pure function, it returns the same value every time it is called with the same parameters. So there's a simple trick to avoid spending so much time recalculating what you have already found. You can store the result of fibonacci for a specific number in a dictionary, and if it's called again, just return that instead of computing everything again. This trick is called memoization. The Fibonacci sequence is often used as the introductory example for dynamic programming and memoization. See this nice video explaining the concepts of memoization and dynamic programming.

Code

This is a simple implementation of memoization for this problem.
cache = {}

def fibonacci(n):
    if n in cache:
        return cache[n]
    if n == 1 or n == 2:
        return 1
    else:
        result = fibonacci(n-1) + fibonacci(n-2)
        cache[n] = result
        return result
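For the 1000-digit target itself, even the memoized version recurses thousands of calls deep and can run into Python's default recursion limit. An iterative sketch sidesteps that entirely (the helper name here is made up for illustration):

```python
def first_fib_index_with_digits(digits):
    """Index of the first Fibonacci number with at least `digits` digits."""
    a, b = 1, 1  # F(1), F(2)
    index = 2
    while len(str(b)) < digits:
        a, b = b, a + b  # slide the window one term forward
        index += 1
    return index

print(first_fib_index_with_digits(3))     # -> 12, matching the problem statement
print(first_fib_index_with_digits(1000))  # the Problem 25 answer
```

Because Python integers are arbitrary precision, no special big-number handling is needed; the loop runs in linear time in the index.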
{ "domain": "codereview.stackexchange", "id": 25954, "tags": "python, programming-challenge, recursion, time-limit-exceeded, fibonacci-sequence" }
How to use unigrams and bigrams as features for SVM or logistic regression
Question: How can I use unigrams and bigrams as features to build a Natural Language Inference model with an SVM or logistic regression? In my dataset I have premise, hypothesis and label columns. I'm planning to use the unigrams and bigrams of the premise, or the hypothesis, or both, as one of the features in my training. For example:

premise | hypothesis | hypothesis bigram
I am planning to use the unigram and bigram | I am planning to use the unigram | [(i, am), (am, planning), (planning, to), (to, use), (use, the), (the, unigram)]

The hypothesis bigram is a list of bigrams (word pairs), so I can't use it directly as input to my SVM or logistic regression. Can I convert the hypothesis bigram into a vector?

Answer: You need to create a vocabulary of the n-grams, i.e., a numbered inventory of the bigrams that you are going to use as features. Typically, these are the most frequent ones. When you create the feature vector, you start with a zero vector and put a one (or add one) at the corresponding index whenever an n-gram from the vocabulary appears in your sentence. Machine learning libraries typically have functions that do that. For instance, in scikit-learn, you can use CountVectorizer to do the job. Its constructor has an ngram_range argument that controls the lengths of the n-grams that go into the feature vectors.
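To make the vectorization step concrete, here is a minimal pure-Python sketch of the bag-of-n-grams idea (in practice scikit-learn's CountVectorizer with ngram_range=(1, 2) does the same job with many more options; the sentences and helper names below are made up for illustration):

```python
from itertools import chain

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def featurize(sentences):
    """Map each sentence to a unigram+bigram count vector over a shared vocabulary."""
    tokenized = [s.lower().split() for s in sentences]
    grams = [ngrams(t, 1) + ngrams(t, 2) for t in tokenized]
    # Number every n-gram seen anywhere in the corpus.
    vocab = {g: i for i, g in enumerate(sorted(set(chain.from_iterable(grams))))}
    vectors = []
    for gs in grams:
        v = [0] * len(vocab)
        for g in gs:
            v[vocab[g]] += 1  # count occurrences at the n-gram's index
        vectors.append(v)
    return vocab, vectors

vocab, vectors = featurize(["I am planning to use the unigram",
                            "I am planning to use the bigram"])
```

Each resulting vector has one slot per n-gram in the vocabulary, so premise and hypothesis vectors can be concatenated and fed straight to an SVM or logistic regression.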
{ "domain": "datascience.stackexchange", "id": 7812, "tags": "machine-learning, nlp, logistic-regression, svm, feature-engineering" }
Applying a projector to a qubit in a qiskit circuit
Question: I'd like to be able to apply $|0 \rangle \langle 0|$ to project a qubit to the state $|0 \rangle$ in the middle of a qiskit circuit (see, for example, the attached circuit). I wonder if, in general, one can customize those orange boxes in some way and if that works on the real hardware.

Answer: I hope this is what you are looking for. Within IBM hardware, there is a new option called "reset" that allows you to reset a certain qubit back to the state $|0\rangle$. For instance,

from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit

qreg_q = QuantumRegister(3, 'q')
creg_c = ClassicalRegister(3, 'c')
circuit = QuantumCircuit(qreg_q, creg_c)

circuit.x(qreg_q[0])
circuit.h(qreg_q[2])
circuit.cswap(qreg_q[2], qreg_q[0], qreg_q[1])
circuit.h(qreg_q[2])
circuit.reset(qreg_q[1])

Within the circuit composer, you can find the reset operation as well, with the $|0\rangle$ symbol:
{ "domain": "quantumcomputing.stackexchange", "id": 2391, "tags": "quantum-gate, qiskit, error-correction" }
MAX-SAT approximation factor
Question: I am stuck on an exercise that asks for the approximation factor of an approximate MAX-SAT algorithm generalized from a MAX-3SAT algorithm.

MAX-3SAT:
set every variable to a random value ($0$ or $1$, each with probability $\frac12$)
check how many clauses are satisfied

This algorithm has a $\frac78$-approximation factor since:
the probability that all $3$ variables come out false is $\frac1{2^3}$
the probability that at least one of the $3$ variables comes out true is $1 - \frac1{2^3}$ (and to be satisfied a clause needs at least 1 true variable)

Now I am confused about the generalization to MAX-SAT. Since we have $N$ variables, I'm inclined towards a $\left(1 - \frac1{2^N}\right)$-approximation factor since:
the probability of falsifying a clause is $\frac1{2^N}$
the probability of satisfying a clause is $1 - \frac1{2^N}$

However, I'm not sure about that, since in a SAT problem there is no guarantee that all clauses will have exactly $N$ variables; $N$ here represents only an upper bound on the number of variables per clause.

Answer: You are right in your analysis: this algorithm has a $\left(1-\frac{1}{2^N}\right)$ approximation factor only when each clause has at least $N$ independent literals (independent meaning on different variables). In the general case, you could have some clauses with exactly one literal, which means the approximation factor is only expected to be at least $\frac12$. Moreover, since the algorithm is randomized, it cannot guarantee getting at least half of the clauses satisfied. One interesting fact is that this algorithm can be derandomized to guarantee a $\geqslant \frac12$ approximation factor (the idea is to choose each variable assignment so as to maximize the expected number of satisfied clauses given the choices made so far).
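The $\frac78$ figure for clauses with three distinct variables is easy to check empirically. A small Monte Carlo sketch (the random formula and all parameter choices below are synthetic, for illustration only):

```python
import random

random.seed(0)
n_vars, n_clauses = 20, 200

# Random 3-CNF: each clause has 3 distinct variables, each negated with prob 1/2.
# A literal is stored as (variable index, is_negated).
clauses = [[(v, random.random() < 0.5)
            for v in random.sample(range(n_vars), 3)]
           for _ in range(n_clauses)]

def satisfied_fraction(assignment):
    # A literal (v, neg) is true when the assigned value differs from the negation flag.
    ok = sum(any(assignment[v] != neg for v, neg in clause) for clause in clauses)
    return ok / len(clauses)

trials = 500
avg = sum(satisfied_fraction([random.random() < 0.5 for _ in range(n_vars)])
          for _ in range(trials)) / trials
print(f"average satisfied fraction ~ {avg:.3f}")  # close to 7/8 = 0.875
```

Each clause with three distinct variables is falsified only when all three literals come out false (probability $1/8$), so the empirical average should hover around $0.875$.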
{ "domain": "cs.stackexchange", "id": 20684, "tags": "satisfiability, approximation, decision-problem, 3-sat, maxsat" }
Could not find or load the Qt platform plugin "windows" in ""
Question: I am trying to build a Qt5 Widget application as a ROS2 package with colcon. After following this tutorial (except that I am using a Widget application instead of a Quick app), I finally was able to compile the package, but I cannot run it because I get the following output:

c:\dev\ros2>ros2 run plainwidget plainwidget
This application failed to start because it could not find or load the Qt platform plugin "windows" in "". Reinstalling the application may fix this problem.

I am using a custom package.xml and CMakeLists.txt as follows:

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format2.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="2">
  <name>plainwidget</name>
  <version>1.0.0</version>
  <description>
    Demo zum colcon builden eines Qt5 GUIs
  </description>
  <maintainer email="...@...">...</maintainer>
  <license>Apache License 2.0</license>
  <buildtool_depend>ament_cmake</buildtool_depend>
  <build_depend>rclcpp</build_depend>
  <build_depend>std_msgs</build_depend>
  <build_depend>qtbase5-dev</build_depend>
  <build_depend>qt5-qmake</build_depend>
  <exec_depend>libqt5-core</exec_depend>
  <exec_depend>rclcpp</exec_depend>
  <exec_depend>std_msgs</exec_depend>
  <export>
    <build_type>ament_cmake</build_type>
  </export>
</package>

...
cmake_minimum_required(VERSION 3.5)
project(plainwidget)

set(CMAKE_CXX_STANDARD 14)
if(NOT WIN32)
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14 -Wall -Wextra -fPIC")
endif()

IF (NOT DEFINED BUILD_VERSION)
  SET(BUILD_VERSION "not set")
ENDIF()
ADD_DEFINITIONS(-DBUILD_VERSION="${BUILD_VERSION}")

find_package(ament_cmake REQUIRED)
find_package(example_interfaces REQUIRED)
find_package(rclcpp REQUIRED)
find_package(rcutils)
find_package(rmw REQUIRED)
find_package(std_msgs REQUIRED)
find_package(Qt5 REQUIRED COMPONENTS Core Gui Widgets)

set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTOUIC ON)
set(CMAKE_AUTORCC ON)
set(CMAKE_INCLUDE_CURRENT_DIR ON)

include_directories(
  ${rclcpp_INCLUDE_DIRS}
  ${std_msgs_INCLUDE_DIRS}
  ${Qt5Core_INCLUDE_DIRS}
  ${Qt5Gui_INCLUDE_DIRS}
  ${Qt5Widgets_INCLUDE_DIRS}
)
include_directories(src)

set(SOURCE_FILES
  src/main.cpp
  src/maingui.cpp
)

add_executable(${PROJECT_NAME} ${SOURCE_FILES})
ament_target_dependencies(${PROJECT_NAME}
  "example_interfaces"
  "rclcpp"
  "rcutils"
  "std_msgs"
)
target_link_libraries(${PROJECT_NAME}
  Qt5::Core
  Qt5::Gui
  Qt5::Widgets
)

install(TARGETS ${PROJECT_NAME} DESTINATION lib/${PROJECT_NAME})

ament_package()

Maybe it has something to do with the entries under <build_depend> and <exec_depend>? I just took them out of the tutorial, which is over 8 months old, so maybe something is outdated here. Where do I get the correct dependencies that I need to put into package.xml?

Originally posted by uPrizZ on ROS Answers with karma: 31 on 2019-02-28 Post score: 1

Answer: The solution was to make use of Qt5's windeployqt feature. Set up cmd so it can find windeployqt (i.e. extend PATH with your Qt5 path) and then run "windeployqt /install/Lib//.exe", which will create all necessary dependencies so you can run the Qt5 project...

Originally posted by uPrizZ with karma: 31 on 2019-03-01 This answer was ACCEPTED on the original site Post score: 1

Original comments Comment by summauto on 2021-05-30: works brilliantly, thank you!
{ "domain": "robotics.stackexchange", "id": 32559, "tags": "ros, ros2, colcon, qt5, ros-crystal" }
Why do diesel engines tend to have larger engine displacements?
Question: There are cars with petrol engines having 1.0 l or even less engine displacement, but I have never seen a car with a diesel engine having less than 1.5 l. Are there technical reasons that a diesel engine cannot have less than about 1.5 l? Answer: There are model diesel engines, so technically it is possible. My guess in the case of car engines would be that it is about economics. Diesel engines are more expensive to manufacture (because they need to be stronger to withstand higher compression), but you earn that back through their better fuel efficiency and lower fuel taxes if you drive enough kilometers. People who buy small cars are more likely not to drive a lot of kilometers per year, and more likely to care more about the car's initial sale price than about fuel costs.
{ "domain": "engineering.stackexchange", "id": 2055, "tags": "diesel" }
Why doesn't energy conservation work for a moving box filling with rain?
Question: I am unsure as to why the conservation of kinetic energy can't be used to solve this problem: An open box is moving at $1\ \mathrm{m/s}$ on a frictionless surface. If it starts to rain, what happens to the motion of the open box? If the box weighs $2.0\ \mathrm{kg}$, its initial momentum is: $2 \times 1 = 2$. Now let's say that at a certain point after the rain starts, the mass of the water that went into the open box is $3\ \mathrm{kg}$. So the final momentum is: $5 \times V$. Now I calculated the final velocity by conservation of momentum: $2 = 5V$, $V = 0.4\ \mathrm{m/s}$. The initial kinetic energy of the system was: $$\begin{align} E_k &= \frac{1}{2}mv^2 \\ E_k &= \frac{1}{2}(2)(1)^2 \\ E_k &= 1\ \text{joule} \end{align}$$ The final kinetic energy of the system (using the final velocity calculated using the conservation of momentum) is: $$\begin{align} E_k &= \frac{1}{2}mv^2 \\ E_k &= \frac{1}{2}(5)(0.4)^2 \\ E_k &= 0.4\ \text{joules} \end{align}$$ I am confused as to how the kinetic energy of the system decreased from $1\ \text{joule}$ to $0.4\ \text{joules}$. I thought that the kinetic energy of the system would still be conserved and that the velocity would have to decrease since the mass becomes larger, but that is not the case. If it were, I could solve it using the conservation of kinetic energy: $$\begin{align} E_k(\text{initial}) &= E_k(\text{final}) \\ 1\ \text{joule} &= \frac{1}{2}(5)(V)^2 \\ V &= 0.63\ \mathrm{m/s} \end{align}$$ I realize I am getting a different velocity from the conservation-of-momentum method, but I cannot understand why. Answer: The kinetic energy is not conserved because the rain sticks to the box. That's an inelastic collision. Also, the momentum of the rain is not constant. First it's vertical, downward, and after colliding with the box it is horizontal. The horizontal momentum of the system remains constant because the system is isolated horizontally from outside forces.
Vertically, the normal force of the earth on the box/rain system removes vertical momentum from that system. The speed of the rain also changes from whatever it is before the collision to whatever it is after the collision (you don't tell us a speed for the rain). So you haven't done a full kinetic energy accounting either. The reason the box slows down is that the rain drops exert a horizontal force backwards on the box when the box makes the rain begin to move forward horizontally (Newton's 3rd Law).
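The bookkeeping above is easy to verify numerically; this sketch just recomputes the question's numbers, assuming the rain falls vertically and so carries no horizontal momentum:

```python
m_box, v0 = 2.0, 1.0   # box mass (kg), initial speed (m/s)
m_rain = 3.0           # mass of rain collected (kg)

# Horizontal momentum is conserved: m_box*v0 = (m_box + m_rain)*v1
v1 = m_box * v0 / (m_box + m_rain)

ke_initial = 0.5 * m_box * v0**2
ke_final = 0.5 * (m_box + m_rain) * v1**2

print(f"v = {v1:.2f} m/s, KE: {ke_initial:.2f} J -> {ke_final:.2f} J")
# The 0.6 J deficit is dissipated in the perfectly inelastic "collision"
# between the falling drops and the moving box.
```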
{ "domain": "physics.stackexchange", "id": 44542, "tags": "homework-and-exercises, newtonian-mechanics, momentum, conservation-laws, collision" }
Is glutamate always involved in the deamination and amination of the other amino acids?
Question: For example, are there pathways for the deamination of phenylalanine that simply produce ammonia, or pathways for it to be synthesized from phenylpyruvate with ammonia being utilized to form the amine group? Preferably, I want to know how it is with human metabolism mainly. Answer: For most amino acids, the removal of the α-amino group involves α-ketoglutarate and glutamate. The amino group is first transferred to α-ketoglutarate by transaminases, and the resulting glutamate is then deaminated (via glutamate dehydrogenase) to yield ammonia. The same is true for amination. Glutamate and glutamine are the two major amino-group donors. Most ketoacids are converted to their respective amino acids by transamination involving glutamate or glutamine. Glutamine can be synthesized by amination of glutamate with ammonia without transamination (via a synthetase enzyme), and glutamate can be aminated with ammonia too. Exceptions do exist, of course. For example, not all transaminations involve glutamate/glutamine (as this user has replied), and serine and threonine can be directly deaminated (via dehydratase enzymes, as opposed to the dehydrogenase used for glutamate).
{ "domain": "biology.stackexchange", "id": 10110, "tags": "biochemistry" }
Is it possible to train an AI to put a picture story in the correct order (correct story flow)?
Question: I want to know if it is possible to train a neural network (or some other kind of AI) to put a simple picture story, given in random order, into the correct order, so that the story has the correct flow. For example, a simple picture story of a few panels (the two example images are omitted here). So, imagine the pictures of these stories are in a random order and the AI has to put them in the order in which the correct story is told. Most 8-year-olds would be able to do that. So, can an AI learn it? What would an approach look like? Does anyone know if something like that has been achieved or even attempted? From my research so far, the approach would be first to translate the images into descriptive sentences and then try to order them in a meaningful way. But I will do further research; so far I have found this paper: Sort Story: Sorting Jumbled Images and Captions into Stories (2016). To clarify, this is not a "real problem" for me; I am asking from a philosophical standpoint and out of interest. I will not attempt to solve it, because I think if it is possible it would be extremely difficult. Answer: This is a really hard problem for statistical AI such as a neural network. The difficulty is due to the lack of grounding and common sense in what a neural network can process. A neural network could feasibly label all objects in the example scenes, and even do pose estimation, detect activities and guess the emotional state of the "actors". It can even create a vector representing the content of the image and convert to/from a caption for the image. However, so far any structure or embedding that neural networks have produced is not amenable to reasoning or common sense. Such embeddings can be translated into other representations, but lack "grounding" in the sense of a deeper understanding based on a more general model of the world. It is this understanding - e.g.
a parent will be stressed if they think a child is missing (panel 2 from the second example), and may then act to find them (panel 4) - that is missing, and it is not at all clear how such a world model could be added to neural network training. There are some neural network models that come close in different ways:

Large language models. Descriptive text often has a narrative structure, and language models like GPT-3 can easily produce stories as sophisticated as the example panels. In theory such a model could be used to analyse the likelihood of different orderings of static descriptions extracted by an image captioning system, and identify the highest-likelihood story by trying all combinations. I do not know if this has been attempted.

Video activity prediction. In the simpler world of immediate actions and consequences (as opposed to understanding inner state, motivation and narrative), predicting what happens next using a neural network is already possible. These predictions would be short term - in the first panel of the first example, a neural network might predict that the fish will go into the bucket. That doesn't mean the neural network models what a fish or a bucket actually are in enough detail to reason about this further, nor that it could extrapolate to the excited child looking at the fish in the bucket in panel 2.
{ "domain": "ai.stackexchange", "id": 3292, "tags": "neural-networks, machine-learning, reference-request, computer-vision" }
Sorting an object array with null values using underscore's sortBy
Question: I'm sorting my object array by its property activityOrder, which will sometimes contain null values if the user has not explicitly stated an order in which an activity should appear. Since null will always appear topmost in the sort (unless reversed), it messed up my sort. The solution I came up with was to sort by the id if activityOrder was null, since the id will always be greater than the activityOrder within our application, and then there is some logical order to the sort too.

var data = [{
    "id": 150,
    "name": "Andrew",
    "activityOrder": null
}, {
    "id": 151,
    "name": "Andrew",
    "activityOrder": null
}, {
    "id": 152,
    "name": "Andrew",
    "activityOrder": 1
}];

data = _.sortBy(data, function(o) {
    if (o.activityOrder === null) return o.id;
    return o.activityOrder;
});

_.each(data, function(x) {
    $('#cnt').append("<tr><td>" + x.name + " - Order " + x.activityOrder + "</td></tr>");
});

I had also tried amending the value to 99999999999 if activityOrder was null, and resetting it back to null after doing whatever. I didn't like the idea of this one.

data = _.sortBy(data, function(o) {
    if (o.activityOrder === null) o.activityOrder = 99999999999;
    return o.activityOrder;
});

// Do whatever

// Reset activityOrder back
_.each(d, function(o) {
    if (o.activityOrder === 99999999999) {
        o.activityOrder = null;
    }
});

How best would you sort an object array by a property which contains null values?

Answer: Flapdoodle!

if (o.activityOrder === null) o.activityOrder = 99999999999;

Dude, don't do this. Really.

Sorting idiom

JavaScript supports sorting pretty much as you'll see in other languages. Collections have sorting. Sort functions can take a function delegate for customized sorting. Typically, you compare the desired values and return an integer that means less-than, equal-to, or greater-than - typically -1, 0, 1 respectively. Handling null does not require converting it to a valid value in your set domain. Read the documentation!
{ "domain": "codereview.stackexchange", "id": 33306, "tags": "javascript, sorting, null, underscore.js" }
Can uranium-233 mixed with natural uranium be used as a replacement for light-enriched uranium?
Question: My idea is to make a new form of enriched uranium fuel (or more accurately, a substitute for enriched uranium) that's made by mixing uranium-233 (transmuted from thorium in a breeder reactor) with a larger quantity of natural uranium. The uranium-233 is a substitute for the additional uranium-235 in traditional enriched uranium. This would most likely be done by countries that operate both thorium-cycle reactors and uranium-cycle reactors. As far as I can see, this would be advantageous over low-enriched uranium. It might be cheaper than uranium enrichment in the long term. But the advantage I see in uranium-233 is that it has a higher fission cross-section and fission/capture ratio than uranium-235, especially at intermediate neutron energies. This may give better neutron economy for the Reduced-Moderation Water Reactor (in development), which is likely to have a broad range of neutron energies including an intermediate spectrum. It also might allow reduced moderation for other reactor types, such as gas-cooled reactors. Fast reactors utilizing U-233 may require less fissile material, due to the higher fission cross-section of U-233 compared to U-235. Surprisingly, I can't find any research papers on this idea. Would the nuclear fuel I'm proposing have any problems operating in a reactor designed for low-enriched uranium? Would it work for a reactor designed specifically for it? How would it behave differently? Would this fuel be more difficult to reprocess once spent than traditional uranium fuel? Answer: Yes, you could substitute fissile U235 with fissile U233. Given a U235 enrichment, you could find a consistent U233 enrichment that would give you the same reactivity. It is a little more complicated because you need to look at the reactivity as a function of burnup, and find the average over the depletion time. The same process of finding equivalent enrichments is also needed when you use MOX fuel, which contains plutonium.
You need to find an equivalent enrichment of U235 and the plutonium isotopes.
{ "domain": "physics.stackexchange", "id": 73126, "tags": "nuclear-physics, nuclear-engineering" }
Simple form validation script
Question: This is a simple form validation script. I'd like to: improve the jQuery validation simplify the jQuery code Questions: Should I be exporting pure JS validation to avoid potential conflicts with other libraries that users might have installed? Is it worth the effort or should I stick with my jQuery code? Is there a way to reduce the chances for conflict with the jQuery code without having to rework it in JS? <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"></script> <script type="text/javascript"> /*<![CDATA[*/ $(document).ready(function() { // when submit button is pressed $("#form_name").submit(function() { var pass = true; var errors = { required : 'this field is required', email : 'enter a valid email address', numeric : 'enter a number without spaces, dots or commas', alpha : 'this field accepts only letters &amp; spaces' }; var tests = { email : /^([A-Za-z0-9_\-\.])+\@([A-Za-z0-9_\-\.])+\.([A-Za-z]{2,4})$/, numeric : /^[0-9]+$/, alpha : /^[a-zA-Z ]+$/ }; // clear error messages $(".error").removeClass(); $(".error-message").remove(); function displayError(el, type) { $(el).parent().addClass("error").find('label').append('<span class=\"error-message\"> &#8211; ' + errors[type] + '</span>'); } $('.required, .email, .numeric, .alpha').each(function(){ var value = $(this).val(); var valueExists = value.length === 0 ?
false : true; var required = $(this).hasClass('required'); var email = $(this).hasClass('email'); var numeric = $(this).hasClass('numeric'); var alpha = $(this).hasClass('alpha'); if (required && value.length===0) { displayError(this,'required'); pass=false; } else if (email && valueExists && !tests.email.test(value)) { displayError(this,'email'); pass=false; } else if (numeric && valueExists && !tests.numeric.test(value)) { displayError(this,'numeric'); pass=false; } else if (alpha && valueExists && !tests.alpha.test(value)) { displayError(this,'alpha'); pass=false; } }); return pass; }); }); /*]]>*/ </script> Answer: To avoid conflict with any other libraries, wrap your code using the jQuery function, and additionally call jQuery.noConflict(); The only conflict you may have is if somebody else imported an object named jQuery into the global namespace. Reference: http://docs.jquery.com/Using_jQuery_with_Other_Libraries Example: jQuery.noConflict(); jQuery(document).ready(function($){ //This is a jQuery function $('.myClass'); });
{ "domain": "codereview.stackexchange", "id": 841, "tags": "javascript, jquery, validation" }
For very simple linear regression can we quantify the prediction accuracy hit between using one hot encoding and simple numerical mapping?
Question: Suppose I had a simple linear regression model that had the following input or X variable: [North] [East] [West] [South] [North, East] ... [North, East, West, South] and I decided to enumerate them like: [North] - 0 [East] - 1 [West] - 2 [South] - 3 [North, East] - 4 ... [North, East, West, South] - 15 I had someone take a look at my model and tell me to use One Hot Encoder or One Hot Binary Encoder instead of assigning inputs like this. My question is: from a linear regression perspective, what is the advantage of using OHE over my simple numerical mapping? If we can quantify an accuracy loss would it be substantial? If I had 10 model variables that I had to map like this would the loss be more substantial? I want to know what sacrifice I'm making by not using OHE. Answer: The issue with numerical encoding in this context is that you are enforcing that your input variable X is ordinal when it's likely not. This is telling your model that the order in which you encode your inputs is either increasing or decreasing monotonically with your target. Let's say you encoded your data like this: [North] - 0 [East] - 1 [West] - 2 [South] - 3 [North, East] - 4 ... [North, East, West, South] - 15 If you train a linear regression model with this encoding you are telling your model that [North] either indicates a higher or lower target than [North, East, West, South], and that [East], [West], [South], and [North, East] are somewhere in between. What if [West] typically has a lower target than either [North] or [North, East, West, South]? In this case you would be enforcing some constraint on your model which is not true. To test this you could split your input data in a 70/30 train-test-split and evaluate how your numerical encoding performs against one-hot-encoding on the test set.
Another alternative would be to look into Target Encoding - this would allow you to keep a small feature space (an issue with one-hot-encoding) while attempting to keep your input and target increasing/decreasing monotonically. I would recommend testing all three encoding methods to figure out the ideal solution for your given problem.
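To make the ordinality problem concrete, here is a small sketch with invented category codes and target values (not from the question's data): a one-feature linear fit on the ordinal codes cannot match a target where [West] breaks the imposed ordering, while a one-hot fit, which reduces to per-category means, fits it exactly.

```python
# Made-up data: West's target breaks any monotone ordering of the codes.
codes = {"North": 0, "East": 1, "West": 2, "South": 3}
data = [("North", 10.0), ("East", 12.0), ("West", 3.0), ("South", 11.0)]

# Ordinal encoding: one feature, least-squares line y = a*x + b.
xs = [codes[c] for c, _ in data]
ys = [y for _, y in data]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
ordinal_sse = sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

# One-hot encoding: one weight per category; least squares then just
# recovers each category's mean, so these points are fit exactly.
means = {c: y for c, y in data}  # one observation per category here
onehot_sse = sum((means[c] - y) ** 2 for c, y in data)

print(ordinal_sse, onehot_sse)  # the ordinal fit has error, the one-hot fit is exact
```

The gap between the two sum-of-squared-errors values is exactly the "sacrifice" the question asks about, for this toy target.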
{ "domain": "datascience.stackexchange", "id": 8750, "tags": "regression, linear-regression, categorical-data, one-hot-encoding" }
Shedding light on "cyber-physical systems"
Question: These days, one often hears of cyber-physical systems. Reading on the subject, though, it is very unclear how those systems differ from distributed and/or embedded systems. Examples from Wikipedia itself only make them look more like traditional distributed systems. For example: A real-world example of such a system is the Distributed Robot Garden at MIT in which a team of robots tend a garden of tomato plants. This system combines distributed sensing (each plant is equipped with a sensor node monitoring its status), navigation, manipulation and wireless networking. Obviously, any distributed system consists of sensing, actuations (which can easily include navigation) and networking. My question is, how exactly does cyber-physical systems differ from traditional distributed systems? Is it just a fancy name, or is there something considerably different with it? Answer: Reading through some of the articles linked to in the Wikipedia article, I'll respectfully disagree with @Theran. The distinction seems quite well grounded, although Wikipedia does a poor job of making it. The term embedded systems (ES) has been around since the 60s and can, arguably, refer to anything from an airplane to a Furby. I think the term cyber-physical systems (CPS) was coined to distinguish it from what are traditionally thought of as embedded systems, namely closed-loop, non-networked "boxes" that operate in a very well-defined and constrained domain with a limited power to affect physical systems. CPS, on the other hand, embody the idea of think globally, act locally (my apologies to Patrick Geddes), that is, they are usually highly networked systems that bring about change in a local physical system dependent on the state and actions of other entities in the wider network. While many robotic applications fit this definition, and can therefore be termed cyber-physical systems, many are not. 
What bestows the honour on MIT's robotic garden, I believe, is the fact that the robots form part of a wider, decentralised system (PDF). It is the plants, equipped with sensors, that decide when to request watering or other services from the robots, while it is the robots that then decide between them which one will fulfil that request. Furthermore, not all CPS are thought of as "robotic", for example, an intelligent power grid. Cybernetics, as @Theran has noted, is occupied with the study of control systems, and so will form a core part of studying CPS, but also has a broader range of applications in fields such as mathematics, economics, and sociology, for example. This report on cyber-physical systems (PDF), by Edward Lee from UC Berkeley, makes clear that CPS are a next step in the evolution of embedded systems with many of the same constraints (real-time capabilities, reliability) plus a few extra ones (robustness, adaptability, intelligence, interconnectedness). As such, the field of CPS is, in parts, concerned with developing completely new approaches to hard- and software architecture. For example: But I believe that to realize its full potential, CPS systems will require fundamentally new technologies [...] One approach that is very much a bottom-up approach is to modify computer architectures to deliver precision timing [...] Complementing bottom-up approaches are top-down solutions that center on the concept of model-based design [...] In this approach, "programs" are replaced by "models" that represent system behaviors of interest. Software is synthesized from the models. Lee's thoughts are echoed in this Embedded Computing column (PDF) by Wayne Wolf of Georgia Tech. After all, we've had computers attached to stuff for a long time. Why, you may ask, do we need a new term to describe what we've been doing for years? [...] We have a surprisingly small amount of theory to tell us how to design computer-based control systems. 
Cyber-physical systems theory attempts to correct this deficiency. [...] Cyber-physical systems actively engage with the real world in real time and expend real energy. This requires a new understanding of computing as a physical act—a big change for computing. I recommend reading both articles for a good view on how CPS are different from "mere" embedded systems. Cyberphysicalsystems.org has a concept map of CPS on their homepage that nicely illustrates many of the aspects involved in developing CPS. As for the origin of the term, none of the sources I found attributed it to anyone. Many papers defined it without attribution while clearly not being the first to use them. The term first crops up in the literature in 2006 but, by that time, the US National Science Foundation had already organised a Workshop on Cyber-Physical Systems, suggesting that the term was already in use by then.
{ "domain": "robotics.stackexchange", "id": 148, "tags": "distributed-systems, embedded-systems" }
Why in quantum mechanics must orthogonal states stay orthogonal?
Question: Given two states $|A(t)\rangle$ and $|B(t)\rangle$. If $\langle A(0)|B(0)\rangle=0$ then for all $t$, $\langle A(t)|B(t)\rangle=0$. This is a fundamental rule of quantum mechanics. And we can infer that states evolve unitarily, with $|A(t)\rangle = U(t)|A(0)\rangle$. Which is equivalent(?) to saying that states evolve linearly. One can think of this as two arrows on a circle. And they evolve by going round the circle, keeping at right angles to each other. But one could imagine an evolution where the speed of the arrow on the circle depends on its position on the circle. Then the arrows would not stay orthogonal. It would be replacing the unitary group with the holomorphic-diffeomorphism group. States would evolve with operators $\psi'(t)=iH(\psi(t))\psi(t)$, i.e. non-linearly. But they would always remain distinguishable. Would this be against some physical principle? Edit: Although the arrows would move around the circle at different speeds they would stay on the circle, hence the evolution is still unitary and states would always stay the same length. You would simply have a unitary operator dependent on the state, e.g. $U(\psi(t))$. E.g. $\langle A(t)|A(t)\rangle$ would always stay the same value, but $\langle A(t)|B(t)\rangle$ would not. Answer: I would say that the physical principle is linearity. The whole basis of quantum mechanics is the notion that things like time evolution are linear. It is the fact that time evolution is linear that allows double-slit interference, for example. Once you combine linearity with norm-preservation, you get unitarity as a mathematical consequence. A linear operator $U$ that preserves the norm of all states must be unitary. Proof: $U$ must preserve the norm of the state $|A\rangle + e^{i\phi}|B\rangle$ for all $A,B,\phi$.
Since $U$ also preserves the norms of $|A\rangle$ and $|B\rangle$ we have $$e^{i\phi} (\langle UA|UB\rangle-\langle A|B\rangle)+e^{-i\phi} (\langle UB|UA\rangle-\langle B|A\rangle) = 0$$ Since $e^{i\phi}$ and $e^{-i\phi}$ are linearly independent functions of $\phi$ this implies $\langle U A| UB\rangle = \langle A | B\rangle$ for all $A,B$. This is the definition of a unitary operator.
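As a purely numerical illustration of this conclusion (not part of the proof), the sketch below applies a hypothetical 2x2 unitary, chosen arbitrarily, to two complex vectors and checks that the inner product is unchanged:

```python
# Illustration only: a norm-preserving linear map preserves inner products.
# U is a hypothetical 2x2 unitary (a rotation mixed with phases); A and B
# are arbitrary complex vectors.
import cmath
import math

theta = 0.7
U = [[cmath.exp(0.3j) * math.cos(theta), -math.sin(theta)],
     [math.sin(theta), cmath.exp(-0.3j) * math.cos(theta)]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def inner(a, b):
    # <a|b>: conjugate the bra, as in the Dirac notation above.
    return sum(x.conjugate() * y for x, y in zip(a, b))

A = [1 + 2j, -0.5j]
B = [0.3 + 0j, 1 - 1j]
lhs = inner(apply(U, A), apply(U, B))
rhs = inner(A, B)
print(abs(lhs - rhs))  # ~0: <UA|UB> = <A|B>
```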
{ "domain": "physics.stackexchange", "id": 54027, "tags": "quantum-mechanics, hilbert-space, time-evolution, unitarity" }
What is a Light Efficient System?
Question: When reading this paper, I encountered this sentence on the sixth page: When used in conjunction with wide-field microscopy, iterative restoration methods are light efficient. This is most valuable in light-limited applications such as high-resolution fluorescence imaging, where objects are typically small and contain few fluorophores (15,18), or in live-cell fluorescence imaging, where exposure times are limited by the extreme sensitivity of live cells to phototoxicity (9,24,46,54). What exactly does light efficient mean here? Answer: This means that the technique reduces the amount of light the organism under the microscope is exposed to - it is efficient in its use of light. I haven't read the paper in depth to understand why this might be so.
{ "domain": "physics.stackexchange", "id": 28378, "tags": "visible-light, microscopy, imaging" }
Kinematic Equations contradicting each other
Question: For example, there is an object that is being thrown into the air, and we need to find the time it takes for it to reach its maximum height. Here are the variables: a = -9.8 m/$s^2$ $v_i$ = 300 m/s $v_f$ = 0 m/s $\Delta$d = 900 m t = ? Using the kinematic equation $\Delta d = v_i t + (1/2)at^2$ we can rearrange the equation and plug in our values to get $4.9t^2 - 300t + 900 = 0$. This is a quadratic equation and using the quadratic formula we get that $t \approx 3s$. Now if we use the other kinematic equation, $v_f = v_i + at$, and then plug in our values and rearrange we get $t=-300/-9.8$, which then becomes $t\approx 31s$. I haven't tried these values with other equations, but I am fairly confident they will also give different results. My question here is, why are these equations giving me contradicting results, and which one is the right answer, if any? Sorry if this is formatted poorly, I am quite new here. Answer: The answer is simple: too much information is given, and the information given is contradictory! Think about it: if we have an object at $300m/s$ and we were to decelerate it at $9.8m/s^2$ to $0m/s$, it would take, as you have correctly calculated, $30.6s$. However, it would not have traveled a mere $900m$ in that much time. It would have traveled way more distance than that, about $4590m$! Evidently, the question is flawed, so a contradiction arises.
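The inconsistency is easy to verify numerically with just the two kinematic relations quoted in the question:

```python
# Consistency check of the stated data (v_i = 300 m/s, a = -9.8 m/s^2,
# delta_d = 900 m), using only the question's two kinematic equations.
v_i = 300.0      # m/s
a = -9.8         # m/s^2
d_given = 900.0  # m

t_stop = -v_i / a                            # from v_f = v_i + a*t with v_f = 0
d_stop = v_i * t_stop + 0.5 * a * t_stop**2  # distance covered in that time

print(t_stop)  # ~30.6 s
print(d_stop)  # ~4591.8 m, far more than the stated 900 m, so the data conflict
```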
{ "domain": "physics.stackexchange", "id": 53578, "tags": "homework-and-exercises, kinematics, projectile" }
Write a function to determine whether an array contains consecutive numbers for at least N numbers
Question: I am trying to write a function to determine whether an array contains consecutive numbers for at least N numbers. For example, the input is [1,5,3,4] and 3, it turns true because the array has 3 consecutive numbers, which is [3,4,5] Here this function requires sorting beforehand and it is not the most eloquent solution in my opinion. Can someone take a look and suggest some improvements on this? function hasConsecutiveNums(array, N) { if (array.length < N) return false; if (N === 0) return true; const sortedArray = array.slice().sort((a, b) => a - b); let count = 0; let prev = null; for (const num of sortedArray) { if (prev && num === prev + 1) { count++; } else { count = 1; } if (count === N) return true; prev = num; } return false; } console.log(hasConsecutiveNums([1, 4, 5, 6], 3)) // true console.log(hasConsecutiveNums([1, 4, 5, 6], 4)) // false Answer: One issue to consider: what if an element in the array is 0, and thus falsey? Then if (prev && will not be fulfilled: console.log(hasConsecutiveNums([-1, 0, 1], 3)) // false... oops function hasConsecutiveNums(array, N) { if (array.length < N) return false; if (N === 0) return true; const sortedArray = array.slice().sort((a, b) => a - b); let count = 0; let prev = null; for (const num of sortedArray) { if (prev && num === prev + 1) { count++; } else { count = 1; } if (count === N) return true; prev = num; } return false; } console.log(hasConsecutiveNums([-1, 0, 1], 3)) // false... oops Another tweak to make the code a bit more elegant would be to assign prev to the first element of the array first, and initialize count to 1, thus starting comparison on the second element rather than on the first, avoiding the need to compare against null. 
With this method, you also need to return true immediately if the array's length is only 1, like the other answer recommends, otherwise there won't be any iterations within which return true could be reached: function hasConsecutiveNums(array, N) { if (array.length < N) return false; if (N <= 1) return true; const sortedArray = array.slice().sort((a, b) => a - b); let prev = sortedArray.shift(); let count = 1; // first element of the array is already in prev for (const num of sortedArray) { if (num === prev + 1) { count++; } else { count = 1; } if (count === N) return true; prev = num; } return false; } console.log(hasConsecutiveNums([1, 4, 5, 6], 3)) // true console.log(hasConsecutiveNums([1, 4, 5, 6], 4)) // false console.log(hasConsecutiveNums([-1, 0, 1], 3)) // true If, as the comment notes, you'd want [1, 2, 2, 3] to return true, de-duplicate the numbers with a Set: function hasConsecutiveNums(array, N) { if (array.length < N) return false; if (N <= 1) return true; const sortedArray = [...new Set(array.slice().sort((a, b) => a - b))]; let prev = sortedArray.shift(); let count = 1; // first element of the array is already in prev for (const num of sortedArray) { if (num === prev + 1) { count++; } else { count = 1; } if (count === N) return true; prev = num; } return false; } console.log(hasConsecutiveNums([1, 4, 5, 6], 3)) // true console.log(hasConsecutiveNums([1, 4, 5, 6], 4)) // false console.log(hasConsecutiveNums([-1, 0, 1], 3)) // true console.log(hasConsecutiveNums([1, 2, 2, 3], 3)) // true
{ "domain": "codereview.stackexchange", "id": 38045, "tags": "javascript, algorithm, array" }
how to bring up sensors in a stand alone Gazebo
Question: Hello! I am starting a stand alone gazebo and loading a .world from a cpp file with : gazebo::setupServer(); // Load a world gazebo::physics::WorldPtr world = gazebo::loadWorld( "worlds/pepper_base.world"); for (unsigned int i = 0; i < 10000; ++i) { gazebo::runWorld(world, 1); } This world includes a robot model containing a hokuyo laser : pepper_model.sdf : <include> <uri>model://hokuyo</uri> <pose>0.0562 0 -0.303 0 -0.0175 0</pose> </include> <joint name="hokuyo_joint" type="revolute"> <child>hokuyo::link</child> <parent>Tibia</parent> <axis> <xyz>0 0 1</xyz> <limit> <upper>0</upper> <lower>0</lower> </limit> </axis> </joint> When I am importing this model from the GUI gazebo, no problem, I can listen to the laser topic with : gz topic -e /gazebo/default/virtual_pepper/hokuyo/link/laser/scan But when launching my cpp program ( without any gazebo launched, except the server written above ), there is no data coming from the laser. I tried several functions from the gazebo API, such as : gazebo::sensors::SensorManager *mgr = gazebo::sensors::SensorManager::Instance(); gazebo::sensors::Sensor_V vect_sensor = mgr->GetSensors(); => to init() the sensors "by hand", but vect_sensor is always empty, whereas : gazebo::physics::LinkPtr tibia= pepper->GetLink("Tibia"); nom=tibia->GetSensorName(0); gives me the name of my sensor. What am I missing? Originally posted by scarlett on Gazebo Answers with karma: 11 on 2015-09-22 Post score: 0 Answer: By searching in this file in the Gazebo lib : server.cc I had the idea to add a few lines in my code to take care of the sensors : gazebo::setupServer(); gazebo::physics::WorldPtr world = gazebo::loadWorld( "worlds/pepper_base.world"); // Make sure the sensors are updated once before running the world. // This makes sure plugins get loaded properly.
gazebo::sensors::run_once(true); gazebo::sensors::run_threads(); for (unsigned int i = 0; i < 100000; ++i) { gazebo::runWorld(world, 1); gazebo::sensors::run_once(); gazebo::common::Time::MSleep(1); } Now it is working fine Originally posted by scarlett with karma: 11 on 2015-09-23 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 3821, "tags": "gazebo" }
Solve X(n) = 2X(n-1) + 2 recurrence relation
Question: I'm trying to solve the exercise from Knuth's "Concrete Mathematics": A Double Tower of Hanoi contains 2n disks of n different sizes, two of each size. As usual, we're required to move only one disk at a time, without putting a larger one over a smaller one. How many moves does it take to transfer a double tower from one peg to another, if disks of equal size are indistinguishable from each other? My solution was like this. Let $n$ be the number of disks of different sizes and $X_n$ the number of moves. The first few solutions are: $n = 0$ $X_n = 0$ $n = 1$ $X_n = 2$ $n = 2$ $X_n = 6$ $n = 3$ $X_n = 14$ Recurrence is $X_n = 2X_{n-1} + 2$ We can add $2$ to both sides: $X_n+2 = 2X_{n-1} + 4$ let $Y_n = X_n + 2$, then $Y_n = 2Y_{n-1}$ $Y_n = 2^n$ $X_n = 2^n - 2$ The problem is that the correct solution is $2^{n+1} - 2$, but I cannot find an error in my approach. Answer: Your $Y_0$ is probably wrong. $Y_0$ must be equal to $X_0+2=0+2=2$. Then $Y_n=2^{n+1}$ and hence $X_n=2^{n+1}-2$. Please double check.
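The corrected closed form is easy to sanity-check by iterating the recurrence directly, for example:

```python
# Sanity check of the closed form: iterate X(n) = 2*X(n-1) + 2 from X(0) = 0
# and compare with 2**(n+1) - 2 (since Y_0 = X_0 + 2 = 2, Y_n = 2**(n+1)).
def x_iter(n):
    x = 0
    for _ in range(n):
        x = 2 * x + 2
    return x

assert all(x_iter(n) == 2 ** (n + 1) - 2 for n in range(12))
print([x_iter(n) for n in range(4)])  # [0, 2, 6, 14], matching the question's table
```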
{ "domain": "cs.stackexchange", "id": 9540, "tags": "recurrence-relation" }
Pulling a string with both ends fixed
Question: Consider a string with both ends fixed. If it is pulled somewhere in the middle and released, how would it oscillate and what is its equation? My solution, assuming the result is a standing wave: The point which is pulled will be an anti-node, so the length of string before this point is $(2n-1)\times\frac{\gamma}{4}$ and the length of string after this point will be $(2n^\prime-1)\times\frac{\gamma}{4}$. Calling the first length $l$ and the second length $l^\prime$, we will have: $$\frac{2n-1}{2n^\prime-1}=\frac{l}{l^\prime}$$ So the oscillator will be in its $(n+n^\prime -1)$-th harmonic. But what if that equation doesn't have a natural-number solution? And what if it has a solution such that double of it is also a solution? Answer: Mathematically the shape of the string is the superposition of multiple standing waves. In general it has the form: $$ y(x,t) = \sum_{i=1}^{\infty} \sin \left(i\frac{ \pi x}{\ell} \right) \left( A_i \sin \left( i\frac{ \pi c t}{\ell} \right)+ B_i \cos \left( i\frac{ \pi c t}{\ell} \right) \right) $$ where $\ell$ is the length from end to end, and $c$ is the speed of wave propagation along the string. Now, given an initial pluck of the string of triangular shape, you find the coefficients $A_i$ and $B_i$ using a Fourier series.
The result is: $$ \begin{align} A_i &= 0 \\ B_i &= Y \frac{2 \ell^2 \sin \left( \frac{i \pi x_p}{\ell} \right)}{\pi^2 i^2 x_p (\ell-x_p)} \end{align} $$ where $x_p$ is the position of the pluck point, $Y$ is the pluck amplitude, and $i=1 \ldots \infty$ To arrive at this use the pluck shape $y_0(x) = Y\,{\rm if}(x \leq x_p, \tfrac{x}{x_p}, 1-\tfrac{x-x_p}{\ell-x_p})$ and the known initial conditions $$ \begin{aligned} \lim \limits_{t\rightarrow 0} y(x,t) & = y_0(x) = {\rm triangle} \\ \lim \limits_{t\rightarrow 0} \frac{\partial}{\partial t} y(x,t) & = v_0(x) =0 \end{aligned}$$ Now pre-multiply with $\sin\left(i \frac{\pi x}{\ell} \right)$ and integrate over the length of the string $$ \begin{aligned} \int \sin\left(i \frac{\pi x}{\ell} \right) y_0(x) {\rm d}x &= \int \sin\left(i \frac{\pi x}{\ell} \right) \lim_{t\rightarrow 0} y(x,t) {\rm d}x = B_i \frac{\ell}{2} \\ \int \sin\left(i \frac{\pi x}{\ell} \right) v_0(x) {\rm d}x &= \int \sin\left(i \frac{\pi x}{\ell} \right) \lim_{t\rightarrow 0} \frac{\partial}{\partial t} y(x,t) {\rm d}x = A_i \frac{i\,\pi\, c}{2} \end{aligned} $$ or $$\begin{aligned} A_i &= \frac{2}{i\,\pi\,c} \int \sin\left(i \frac{\pi x}{\ell} \right) v_0(x) {\rm d}x = 0 \\ B_i & = \frac{2}{\ell} \int \sin\left(i \frac{\pi x}{\ell} \right) y_0(x) {\rm d}x = \frac{2 Y \ell^2}{\pi^2 i^2 x_P (\ell-x_p)} \sin\left(i \frac{\pi x_p}{\ell} \right)\end{aligned} $$
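As a numerical sanity check of these coefficients, the sketch below (with arbitrary illustrative values of $\ell$, $x_p$ and $Y$) sums a truncated series at $t=0$ and compares it against the triangular pluck shape:

```python
# Numerical check: at t = 0 the truncated series sum_i B_i sin(i*pi*x/ell)
# should reproduce the triangular pluck shape. The values of ell, x_p and Y
# are arbitrary choices for the illustration.
import math

ell, x_p, Y = 1.0, 0.3, 1.0

def B(i):
    return (Y * 2 * ell**2 * math.sin(i * math.pi * x_p / ell)
            / (math.pi**2 * i**2 * x_p * (ell - x_p)))

def y_series(x, terms=2000):
    return sum(B(i) * math.sin(i * math.pi * x / ell) for i in range(1, terms + 1))

def y_triangle(x):
    # Y*if(x <= x_p, x/x_p, 1 - (x - x_p)/(ell - x_p)), as in the answer.
    return Y * x / x_p if x <= x_p else Y * (ell - x) / (ell - x_p)

for x in (0.1, x_p, 0.8):
    print(x, y_series(x), y_triangle(x))  # agree to better than 1e-3
```

Since the $B_i$ fall off like $1/i^2$, a few thousand terms already reproduce the pluck shape to well under a percent.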
{ "domain": "physics.stackexchange", "id": 55887, "tags": "waves, string, wavelength, harmonics" }
Data redundancy between train and test dataset - why is it bad (source needed)
Question: I know that it is not OK to have too similar data in the train and test set (for example two pictures that differ by only one pixel). I'm trying to find a scientifically valid explanation why it is bad, I mean a paper in a peer-reviewed journal explaining (or even mentioning) this. Couldn't find anything appropriate for several hours. Do you know any reliable source? Answer: There are several reasons why having too similar data in the train and test set is not recommended. One reason is that it can lead to overfitting, where the model performs well on the training data but poorly on the test data, because it has essentially memorized the training data instead of learning generalizable patterns. This is especially true if the training and test data are very similar, as the model may not be able to generalize to new, unseen data. Another reason is that it can give an overly optimistic estimate of model performance, because the model is being tested on data that is very similar to the training data. This is not representative of the true performance of the model on new, unseen data. While there may not be a specific paper addressing the exact scenario you described (two pictures that differ by only one pixel), there are several reasons why having too similar data in the train and test set is generally not recommended in machine learning. A paper that discusses the importance of representative sampling in machine learning is The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization by Adam Coates, Andrew Y. Ng, and Honglak Lee, published in the Journal of Machine Learning Research (JMLR) in 2011. While this paper does not specifically address the scenario you described, it does emphasize the importance of representative training and test data for accurate machine learning performance evaluation.
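As a toy illustration of the inflated-estimate point (the data here are entirely made up), a 1-nearest-neighbour model, which memorises its training set, scores perfectly on near-duplicate test points but not necessarily on genuinely new ones:

```python
# Made-up 2-D points: a 1-NN model copies the label of the closest
# training point, so near-duplicates of training data are always "right".
train = [((0.0, 0.0), 0), ((1.0, 1.0), 1), ((0.2, 0.1), 0), ((0.9, 1.1), 1)]

def predict(p):
    nearest = min(train, key=lambda t: (t[0][0] - p[0]) ** 2 + (t[0][1] - p[1]) ** 2)
    return nearest[1]

def accuracy(tests):
    return sum(predict(p) == y for p, y in tests) / len(tests)

dup_test = [((0.01, 0.0), 0), ((1.0, 1.01), 1)]   # near-duplicates of train
new_test = [((0.5, 0.4), 0), ((0.45, 0.45), 1)]   # genuinely unseen points

print(accuracy(dup_test), accuracy(new_test))  # 1.0 vs 0.5 for these points
```

The duplicate-laden test set reports a perfect score even though the model generalises poorly, which is exactly the overly optimistic estimate described above.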
{ "domain": "datascience.stackexchange", "id": 11665, "tags": "machine-learning, data, training" }
Energy of an object or a system?
Question: We know that the gravitational potential energy of a system consisting of at least two bodies is given by $U=-\frac{Gm_1m_2}{r}$ where masses $m_1$ and $m_2$ are at a distance $r$ from each other. My question is since this energy is defined for a system, what will be the individual potential energies of them? For example, take the example of potential energy of an object $m$ at a height $h$ (which is pretty small compared to the radius of the earth) from the ground. Here we say that the potential energy of the object is $mgh$ which is derived from $\frac{mgh}{1+\frac{h}{R}}$ considering the fact that $\frac{h}{R}\approx 0$. But this energy is actually the energy of the configuration which consists of earth and the object. Then why do we say that the potential energy of the object is $mgh$? Does that mean the potential energy of the earth is also $mgh$? Answer: Saying the potential energy of the object is $mgh$ is technically incorrect, but nothing bad happens if you think of it like this, at least for simple systems. The reason is that we can just think of the object as the system. Then the constant gravitational field is external to the system. If we want to look at changes in mechanical energy of the object moving under the force of gravity only, then we have $$\Delta E=\Delta K=W_\text{ext}=-mg\Delta h$$ or we can just say that $$\Delta K+mg\Delta h=0$$ This $mg\Delta h$ is usually stated as the change in potential energy of the object. But it doesn't really matter what you call it as long as you recognize the validity of the above equation. In general you do need to be careful with it. For example, if we have two planets approaching each other, if we were to add up the (incorrect) changes in "individual potential energy", then we would be double-counting the change in potential energy.
{ "domain": "physics.stackexchange", "id": 82078, "tags": "gravity, potential, potential-energy" }
Why nuclear fusion deliver energy instead of taking?
Question: In nuclear physics, when you break e.g. a nucleus of Uranium, some neutrons are liberated, and the original atom degrades to a lighter element. The energy that was used to keep these subatomic particles together is liberated (strong interaction), and a small part of the mass is converted to energy ($E=mc^2$), so you get a lot of energy from a small amount of atoms. So why can nuclear fusion (the opposite operation) liberate MUCH more energy? I would naively expect it to take a lot of energy, not liberate it. I'm not a physicist nor a student, just interested in physics, and this has always been a mystery to me. EDIT: actually, both of your answers are great and crystal-clear. That makes perfect sense and is exactly what I was looking for. I wish I could accept both answers, but the one with the graphic was a bit more complete. But thank you two! Answer: Good question. To think of it in that way can be very confusing. Every atom wants to reach the least energetic state it possibly can. This is what happens in fission, as you explain, and the uranium nucleus gives out energy after it splits into other nuclei, as those daughter nuclei have lost energy in order to reach that state, losing the energy which we use. Why do they lose energy though? This is where binding energy comes in. Binding energy is defined as the energy required to split a given nucleus into its individual protons and neutrons. Rephrased, it is the energy released when protons or neutrons come together to form a nucleus. Now it is not the binding energy that determines the stability, but the binding energy per nucleon (protons and neutrons are collectively called nucleons, as they constitute the nucleus). For example, if one man has 100 dollars, and a family of 10 has 500 dollars, the family has more money collectively; however, individually, the man has more money. In the same way, it is the binding energy per nucleon that determines the stability.
The graph of binding energy per nucleon vs. mass number is given below (figure not reproduced here). You can see that iron has the highest binding energy per nucleon, which makes it the most stable element. Now every atom wants to attain this stability that iron has. Uranium, on the right side of the graph, should split in order to come closer to the binding energy per nucleon value that iron has, whereas light elements like hydrogen need to fuse in order to gain iron's stability. You can see that hydrogen is way below iron in the graph, which is also why fusion releases a lot more energy than fission. I hope you understood, have a great day!
{ "domain": "physics.stackexchange", "id": 45059, "tags": "nuclear-physics, fusion, binding-energy" }
Bounds on this Strategy for Separating Words
Question: Question Given binary string $z \in \{0,1\}^n$, let $f(z)$ be the smallest integer $k$ such that there exists a DFA with $k$ states, such that reading $z$ from a specific starting state, we end at a state $t$ where either reading a $0$ or a $1$ at $t$ takes us to a new state. (i.e. a state which has not been reached in the path we took when reading $z$) Then, defining $F(n) = \max\{f(z):z \in \{0,1\}^n\}$, I was wondering if any bounds are known for $F$. Clearly, we have $F(n) \le n+1$. Motivation Generally, the word separation problem is: given distinct binary strings $x,y \in \{0,1\}^n, x \neq y$, find the smallest DFA that accepts $x$ but not $y$. I was wondering if there have been results on this particular method: Since $x\neq y$, let $z$ be the longest common prefix of $x$ and $y$. (example: if $x = 1101101,y=1100110$, then $z = 110$ because $x,y$ differ on their fourth letter) WLOG, let's assume $x= z|0|x', y=z|1|y'$, where $|$ denotes concatenation and $x',y'$ are arbitrary. If there exists a DFA with $k$ states such that reading $z|0$ or $z|1$ ends at a state $s'$ not visited by reading $z$, then there is a DFA with $k +O(\log(n))$ states separating $x$ and $y$. (because $x,y$ will reach $s'$ at different times, it reduces to unary word separation, which is known to take $O(\log(n))$ states by the prime number theorem) Rough Ideas Currently this strategy has stood out to me: we have that $f(z) \le g(z_m)+F(n-m)$ where $z_m$ is the subword consisting of the first $m$ letters in $z$, and $g(w)$ is the smallest integer $k$ such that there is a DFA on $k$ states, such that reading $w$ at a specific starting state, we end at a new state $t$. For upper bounding $g(w)$, for any integers $k,i$, and any $w' \in \{0,1\}^k$, there exists a DFA on $2k$ states such that when reading a word $w$, we reach the state $t$ iff $w'$ appears as a factor/substring whose first letter is the $qk+i$-th letter of $w$. (i.e.
the first letter is the $m$-th letter of $w$ where $m$ has the same residue as $i$ modulo $k$) Of course, if $z$ is a string of only 1's, then $g(z_m) = m$ for all $m$, thus we need to combine this with a second idea to handle the cases when $z$ is periodic or otherwise not quasi-random in some sense, to get a sublinear bound. Answer: The second section of Robson's "Separating strings with small automata" proves $F(n) = O((n \log n)^{1/2})$. The string sequence $(10^n)^n$ gives a lower bound of $\Omega(n^{1/2})$. If the automaton has $<n$ states then both of the sequences $\delta_0^{\circ m} (\delta_{(10^n)^{n-1}1}(q_0))$ and $\delta_{10^n}^{\circ m}(q_0)$ will reach a cycle before $m=n$.
{ "domain": "cstheory.stackexchange", "id": 5138, "tags": "automata-theory, dfa" }
Chances of life on other planets, how watertight is the idea that 'As soon as life could occur, it did occur’?
Question: I’ve heard it reasoned that ‘Almost as soon as life could occur on Earth, it did. Therefore it seems likely that life develops easily, and thus that there’s probably lots of life out there’. Is this notion undermined by any of these things?:
– If life develops easily, it’s likely to have developed independently multiple times on Earth (any evidence of this?).
– The fact that life developed soon after particular conditions emerged indicates only that that’s what happened on that occasion. It doesn’t indicate frequency. It might occur once every trillion years under those circumstances.
– Life could happen under many circumstances in which life as we know it couldn’t exist. If there’s no evidence it did on Earth under different conditions, how does the reasoning hold?
Answer: Yes, early life on Earth does not imply easy life. Full disclosure: I have a paper on this: Snyder-Beattie, A. E., Sandberg, A., Drexler, K. E., & Bonsall, M. B. (2021). The Timing of Evolutionary Transitions Suggests Intelligent Life is Rare. Astrobiology, 21(3), 265-278. The mental model most people have is that there is some constant rate $\lambda$ of life emergence on lifeless worlds, so we should expect the probability of life at time $t$ to grow like $1-\exp(-\lambda t)$, with the expected emergence time at $t=1/\lambda$. The earliest life was conservatively 3.77 Gya and oceans 4.5 Gya, so that gives estimates like $\lambda \sim 1.37$ per Gy. One can of course quibble about whether the rate is constant, how long the window of emergence is, and many other factors. But as the question asks, the fact that we observe life early in Earth's history may not be good evidence. Suppose $\lambda = 10^{-12}$ per year, making the probability of getting life over a 10-billion-year sun-like star lifespan 1%. Any intelligent observers that evolve will be on one of these rare 1% planets.
If they try to guess how likely life is on other worlds they will estimate a $\lambda$ that is much bigger, typically $10^{-9}$ or so based on when life emerged on their world. But this bias is likely to be much worse, since intelligent observers come about after a long evolutionary chain that may contain several fairly unlikely steps. For most of Earth's history life was very simple, and becoming eukaryotes, land-living or intelligent may have required very unlikely steps with their own low probabilities. Observers will always find themselves on worlds where these steps have been passed, even if they have their own super-low probabilities and the observers are extremely rare lucky cases. When you run the math on this, you will find that the hard steps get roughly evenly distributed across the habitable time of a planet. If you need 5 hard steps (say: life emerging, photosynthesis, eukaryotes, multicellularity, intelligence) the observers will typically see life show up around 1/6th of the habitable time, and they show up 5/6th of the way to the end.
[Figure: random samples of histories where rare transitions happen but a full set of transitions happens before the end of the time interval; colour indicates the number of the transition.]
[Figure: probability density of the transitions over the habitable period of the biosphere for 5 very hard transitions.]
Getting back to the questions: if we were to discover several independently evolved kinds of life on Earth (or elsewhere in the solar system, even if extinct), then we would have good reason to think that at least the life emergence step was easy and there will be lots of life in the universe. Same for discovering life that can thrive in very different environments. It might also be that true life emergence is very rare, but then spreads by panspermia across the galaxy. In that case it would seem as if life was easy, but it is just received from elsewhere.
However, later hard steps will still tend to push the appearance of life early, since a lot of time is needed for the other unlikely steps to happen so that the very rare observers can emerge. What we did in our paper is to show how the pattern of steps can be used to constrain estimates of life emergence, but there is plenty of room to argue about which steps were truly hard or even when they happened. I would agree with the original post: early life does not have to imply that life is easy. But the way to know the difficulty of life is of course to look for it, not just do statistics.
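The "evenly spaced hard steps" claim in the answer above is easy to check numerically. In the hard-step limit (transition rates tending to zero), the step durations conditioned on overall success become uniform on the simplex, so the k-th transition time is distributed like the k-th order statistic of n uniform draws; the sketch below samples that limiting distribution directly rather than doing rejection sampling (the normalization T = 1, the step count, and the trial count are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

T = 1.0          # habitable lifetime of the planet, normalized
n_steps = 5      # number of very hard evolutionary transitions
trials = 100_000

# In the hard-step limit the k-th cumulative transition time is distributed
# like the k-th order statistic of n_steps uniform draws on (0, T).
times = np.sort(rng.uniform(0.0, T, size=(trials, n_steps)), axis=1)
mean_times = times.mean(axis=0)
print(mean_times)  # close to [1/6, 2/6, 3/6, 4/6, 5/6]
```

With 5 hard steps the expected transition times come out near 1/6, 2/6, ..., 5/6 of the habitable window, matching the "life at ~1/6th, observers at ~5/6th" figures quoted in the answer.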
{ "domain": "astronomy.stackexchange", "id": 5488, "tags": "origin-of-life" }
AMCL drift rotation
Question: Hello, I have an AMCL drift problem when the robot rotates around itself. The most serious problem occurs if it performs small rotations, for example <15º, as in this video I recorded. If the robot moves back and forth, AMCL works perfectly. video showing the problem I have already checked the wheelbase and it is correct. Is this expected? Or is there a parameter to fix this problem? Thanks. Originally posted by mateusguilherme on ROS Answers with karma: 125 on 2019-11-16 Post score: 0 Original comments Comment by stevemacenski on 2019-11-17: Are you sure you have your robot otherwise set up correctly? Comment by mateusguilherme on 2019-11-17: Thanks for your time. If that helps, I added some images from "rqt_graph / rqt_tf_tree". Do not hesitate to ask me for more information, I am learning ROS and I have many doubts Answer: Before integrating with AMCL I would suggest taking a look at section 1 on the Navigation Stack Troubleshooting page; this will make sure that you are estimating the odometry correctly: Is My Odometry Good Enough for AMCL Problem: The robot doesn't seem to be localized properly. Its position estimate jumps around a lot in rviz and the navigation stack doesn't seem able to follow the plans produced by the global planner. Is this a problem with AMCL or my robot's odometry? Solution: There are a couple of tests that are helpful to run to see how good the odometry of a robot is: Test 1: Open up rviz and make sure that you're subscribed to the laser scan topic for your robot. Next, set the decay time to something like 30 seconds. Also set the fixed frame to the odom frame. Perform an in-place rotation with the robot and look at the laser scans. If odometry is fairly accurate, you should see scans from the previous rotation overlap with those generated on the current rotation. You'll want to do this in an area where you have distinctive features in your laser scan. Test 2: Set up rviz the same way as the previous test.
Point the robot at a wall and drive it towards it. With good odometry, the wall should stay in about the same place as the robot moves towards it. If you see a lot of movement in the positions of the scans relative to the wall that means odometry is poor. Test 3: Drive the robot straight down a hallway. The laser scans of the hallway should stay straight. If you see them move a lot, it means your odometry is poor. Originally posted by Martin Peris with karma: 5625 on 2019-11-17 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 34023, "tags": "ros-kinetic" }
Random number generator based on a hash of the time
Question:
from time import time

def baseRandom():
    return hash(str(time()))

def randFromZero(maximum):
    return baseRandom() % maximum

def randRange(minimum, maximum):
    return randFromZero(maximum - minimum) + minimum

I ask what's wrong because it seems so simple and stupid. This is something I just thought up one day and put together in 5 minutes, but it seems to generate even and unpredictable results. I've done a couple of simple tests like generating a lot of numbers and looking at the variance between the amount of times each number is generated, and using it to create a little grid which has tiles that are either black or white with equal probability, in which I saw no clear patterns. Hopefully someone who knows a thing or two about this can educate me. Answer: It looks like you're relying on running this on a system where the granularity of the time() function is smaller than the amount of time it takes to execute baseRandom(), do what you want with the results, and come back to call baseRandom() again. If not, you'll get more repeating numbers than you should. You're also relying on the result of the hash() function to be sufficiently pseudorandom given the input you're feeding to it. Detecting an obvious pattern depends on how you look for it. I think Knuth gave an example of an RNG that looked OK when you plotted pairs of numbers on an x-y plane, but if you took three numbers in a row as x-y-z coordinates, and looked at the resulting cube from the right direction, you could clearly see a finite number of planes where all the outcomes lie. In other words, just because you tried a couple of things and didn't see a pattern doesn't mean a non-random pattern isn't there. There are fifteen tests for randomness listed here, if you're interested: http://csrc.nist.gov/groups/ST/toolkit/rng/stats_tests.html
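The granularity point in the answer can be demonstrated directly: if the clock ticks more slowly than the generator is called, consecutive calls hash the same string and return the same "random" value. A sketch, where coarse_time is an invented helper simulating a hypothetical platform whose clock only updates every 10 ms:

```python
from time import time

def base_random(clock):
    # same idea as baseRandom(), but with the clock injectable for testing
    return hash(str(clock()))

def coarse_time(granularity=0.01):
    # hypothetical platform whose clock only updates every 10 ms
    return round(time() / granularity) * granularity

values = [base_random(coarse_time) for _ in range(1000)]
distinct = len(set(values))
print(distinct)  # far fewer than 1000: long runs of identical "random" numbers
```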
{ "domain": "codereview.stackexchange", "id": 7276, "tags": "python, python-3.x, random, reinventing-the-wheel" }
Implication of Exponential Time Hypothesis falseness?
Question: A proof that the Exponential Time Hypothesis (ETH) is true would imply that P != NP, among other consequences (for example, that SAT is not solvable in sub-exponential time in the worst case). But if ETH were proven false, how would that impact the complexity hierarchy, and what would the implications of such a proof be? What results would immediately follow from the falseness of ETH (based on our current knowledge)? Answer: One such result is by Megiddo and Vishkin. They proved that minimum dominating set in tournaments is in $QP$. Additionally, they showed that tournament dominating set has a polynomial-time algorithm if and only if SAT has a subexponential-time algorithm. Therefore, the falseness of ETH would imply that tournament dominating set is in $P$, which seems unlikely.
{ "domain": "cstheory.stackexchange", "id": 4075, "tags": "cc.complexity-theory" }
Memory leak I can't identify using Bitmap and Graphics classes
Question: I have several Parallel.For loops nested one inside another. The innermost Parallel.For contains a normal for loop that should create images by combining other images. The images are generated, but the memory consumed by the process slowly increases. I'm using .NET 6, and as you can see I have disposed all the Bitmaps and the Graphics objects. I'm also forcing garbage collection so the memory stops growing (I ran the code for 4 hours without forcing collection and the disposed objects were not collected). Here is the code: using System.Drawing; using System.Drawing.Drawing2D; using System.Drawing.Imaging; Console.WriteLine("Generando!"); var count = 0; Parallel.For(1, 11, (a) => { Parallel.For(1, 11, (b) => { Parallel.For(1, 11, (c) => { Parallel.For(1, 11, (d) => { Parallel.For(1, 11, (e) => { for (int f = 1; f <= 10; f++) { Bitmap source1 = new Bitmap($"1/{a}.png"); Bitmap source2 = new Bitmap($"2/{b}.png"); Bitmap source3 = new Bitmap($"3/{c}.png"); Bitmap source4 = new Bitmap($"4/{d}.png"); Bitmap source5 = new Bitmap($"5/{e}.png"); Bitmap sourceBase = new Bitmap($"Rostro Base.png"); Bitmap source6 = new Bitmap($"6/{f}.png"); var target = new Bitmap(source1.Width, source1.Height, PixelFormat.Format32bppArgb); var graphics = Graphics.FromImage(target); graphics.CompositingMode = CompositingMode.SourceOver; // this is the default, but just to be clear graphics.DrawImage(sourceBase, 0, 0); graphics.DrawImage(source6, 0, 0); graphics.DrawImage(source5, 0, 0); graphics.DrawImage(source4, 0, 0); graphics.DrawImage(source3, 0, 0); graphics.DrawImage(source2, 0, 0); graphics.DrawImage(source1, 0, 0); count++; var nombre = $"{count}_{a}-{b}-{c}-{d}-{e}-{f}"; var target2 = Cropimage(target); target2.Save($"rostros/{nombre}.png", ImageFormat.Png); source1.Dispose(); source2.Dispose(); source3.Dispose(); source4.Dispose(); source5.Dispose(); source6.Dispose(); sourceBase.Dispose(); target.Dispose(); target2.Dispose(); graphics.Dispose(); GC.Collect(); } }); Console.Write($"\r{count}
imagenes generadas "); }); }); }); }); Bitmap Cropimage(Bitmap input) { // Find the min/max non-white/transparent pixels Point min = new Point(int.MaxValue, int.MaxValue); Point max = new Point(int.MinValue, int.MinValue); for (int x = 0; x < input.Width; ++x) { for (int y = 0; y < input.Height; ++y) { Color pixelColor = input.GetPixel(x, y); if (pixelColor.A > 0) { if (x < min.X) min.X = x; if (y < min.Y) min.Y = y; if (x > max.X) max.X = x; if (y > max.Y) max.Y = y; } } } // Create a new bitmap from the crop rectangle Rectangle cropRectangle = new Rectangle(min.X, min.Y, max.X - min.X, max.Y - min.Y); Bitmap newBitmap = new Bitmap(cropRectangle.Width, cropRectangle.Height); using (Graphics g = Graphics.FromImage(newBitmap)) { g.DrawImage(input, 0, 0, cropRectangle, GraphicsUnit.Pixel); } return newBitmap; } Answer: Just a few tips:
- Prefer using over manually calling Dispose().
- Consumed memory isn't always busy memory; the GC can free the memory anytime it wants. That is OK, trust the GC. Just assume that no leaks are possible in managed code unless you have manually allocated unmanaged memory.
- Manually calling GC.Collect() is almost never effective but makes the app slower. The above code isn't an exception.
- NEVER use GetPixel/SetPixel if you don't want to die before the app finishes working; it's a super-slow way to deal with a Bitmap.
- count is a shared counter; increment it thread-safely.
- No need to read the same image from disk for each thread; a lock here is more efficient.
int count = 0; using Bitmap sourceBase = new Bitmap($"Rostro Base.png"); Parallel.For(1, 11, (a) => { using Bitmap source1 = new Bitmap($"1/{a}.png"); Parallel.For(1, 11, (b) => { using Bitmap source2 = new Bitmap($"2/{b}.png"); Parallel.For(1, 11, (c) => { using Bitmap source3 = new Bitmap($"3/{c}.png"); Parallel.For(1, 11, (d) => { using Bitmap source4 = new Bitmap($"4/{d}.png"); Parallel.For(1, 11, (e) => { using Bitmap source5 = new Bitmap($"5/{e}.png"); for (int f = 1; f <= 10; f++) { using Bitmap source6 = new Bitmap($"6/{f}.png"); using Bitmap target = new Bitmap(source1.Width, source1.Height, PixelFormat.Format32bppArgb); using Graphics graphics = Graphics.FromImage(target); graphics.CompositingMode = CompositingMode.SourceOver; // this is the default, but just to be clear lock(sourceBase) graphics.DrawImage(sourceBase, 0, 0); graphics.DrawImage(source6, 0, 0); graphics.DrawImage(source5, 0, 0); lock(source4) graphics.DrawImage(source4, 0, 0); lock(source3) graphics.DrawImage(source3, 0, 0); lock(source2) graphics.DrawImage(source2, 0, 0); lock(source1) graphics.DrawImage(source1, 0, 0); int localCount = Interlocked.Increment(ref count); string nombre = $"{localCount}_{a}-{b}-{c}-{d}-{e}-{f}"; using Bitmap target2 = Cropimage(target); target2.Save($"rostros/{nombre}.png", ImageFormat.Png); } }); Console.Write($"\r{count} imagenes generadas "); }); }); }); }); Bitmap Cropimage(Bitmap input) { // Find the min/max non-white/transparent pixels Point min = new Point(int.MaxValue, int.MaxValue); Point max = new Point(int.MinValue, int.MinValue); // Retreiving bitmap data to array BitmapData data = input.LockBits(new Rectangle(Point.Empty, input.Size), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb); byte[] bytes = new byte[input.Height * input.Width * 4]; // 32bpp Marshal.Copy(data.Scan0, bytes, 0, input.Height * data.Stride); // Stride can be negative in some bitmaps but Marshal supports that. 
// In short: (Math.Abs(data.Stride) == input.Width * 4) for 32bpp is always 'true'. input.UnlockBits(data); // bytes array contains sequence of 4-byte pixels like B G R A B G R A for (int y = 0; y < input.Height; ++y) { int rowOffset = y * input.Width * 4; for (int x = 0; x < input.Width; ++x) { int colOffset = x * 4; if (bytes[rowOffset + colOffset + 3] > 0) { if (x < min.X) min.X = x; if (y < min.Y) min.Y = y; if (x > max.X) max.X = x; if (y > max.Y) max.Y = y; } } } // Create a new bitmap from the crop rectangle Rectangle cropRectangle = new Rectangle(min.X, min.Y, max.X - min.X, max.Y - min.Y); Bitmap newBitmap = new Bitmap(cropRectangle.Width, cropRectangle.Height); using Graphics g = Graphics.FromImage(newBitmap); g.DrawImage(input, 0, 0, cropRectangle, GraphicsUnit.Pixel); return newBitmap; } This might work ~100x times faster than the initial code. It also may work without locks but I'm not sure if DrawImage uses LockBits internally and didn't try accessing single Bitmap from multiple threads. But you may try. Anyway there will be no any sensitive difference in performance. Reading same images thousands times from disk is significantly slower in comparison to reading it from memory even locked for single-threaded access.
{ "domain": "codereview.stackexchange", "id": 42787, "tags": "c#, image, memory-optimization, .net-core" }
Is there any clear tutorial for how to use AutoEncoders with text as input
Question: I have a pandas dataframe that describes some fields of the register. I have used one hot encoding to encode the feature vectors that are not numbers. Finally my dataset now has 4000 rows * 4 columns. It contains only numbers. I want to generate the same input using AutoEncoders, but I didn't find any useful link that I can use for that. The ones that I used had some dimension problems when I used my data. Can anyone recommend a helpful tutorial? Answer: Concerning encoding, this link helped me a lot. If you try the code in part 'One Hot Encode with scikit-learn', you will get your encoded vectors. You just have to feed it a list of all your tokens. So I have extracted the fields of the register into a list of markups. As an output of the one hot encoding part, you will get an n-dimensional array, which you feed to the script in part 'Let's build the simplest possible autoencoder' of the AutoEncoder link. In the Input vector you need to put how many classes you have (instead of the value 784); in our case, you specify the number of columns of the nd array, which represent the unique tokens. Then, to compare the predicted output, you have to use decoding in order to visually compare the text that was regenerated.
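For completeness, here is a minimal end-to-end sketch of the pipeline the answer describes, written in plain NumPy instead of Keras so it is self-contained (the data shape, hidden size, learning rate, and iteration count are arbitrary illustrative choices, not values from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the one-hot-encoded dataframe: 400 rows, 12 one-hot columns
n, d, h = 400, 12, 6
X = np.eye(d)[rng.integers(0, d, size=n)]

# Single-hidden-layer autoencoder with sigmoid activations
W1 = rng.normal(0.0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.1, (h, d)); b2 = np.zeros(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1 + b1)   # encoder: compress to h dimensions
    Y = sigmoid(H @ W2 + b2)   # decoder: reconstruct the one-hot input
    return H, Y

loss_before = ((forward(X)[1] - X) ** 2).mean()

lr = 0.5
for _ in range(2000):
    H, Y = forward(X)
    dY = (Y - X) * Y * (1 - Y)        # gradient of MSE through the output sigmoid
    dH = (dY @ W2.T) * H * (1 - H)    # backprop into the hidden layer
    W2 -= lr * (H.T @ dY) / n; b2 -= lr * dY.mean(axis=0)
    W1 -= lr * (X.T @ dH) / n; b1 -= lr * dH.mean(axis=0)

loss_after = ((forward(X)[1] - X) ** 2).mean()
print(loss_before, loss_after)  # reconstruction error drops as training proceeds
```

The hidden activations H are the learned compressed code; decoding Y back and comparing it with X mirrors the "decode and compare visually" step the answer mentions.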
{ "domain": "datascience.stackexchange", "id": 3413, "tags": "keras, pandas, lstm, autoencoder, text" }
What is the definition of a "pole" of a celestial body?
Question: What is the definition of a "pole" of a celestial body? Earth's pole is defined as its rotational pole. The North and South Poles are the two points on Earth where its axis of rotation intersects its surface. Apparently (according to everything I've read), the poles of astronomical bodies are determined based on their axis of rotation in relation to the celestial poles of the celestial sphere. But this is an inadequate definition. What about Pluto? Pluto's axis of rotation does not intersect its surface. Pluto is tidally locked with Charon, and Pluto orbits a barycenter that is located outside of Pluto. And yet Pluto has a North and a South Pole. Why are the North and South Poles of Pluto where they are, as opposed to any other location on the surface of Pluto? How are these poles defined for Pluto? What is the definition of a "pole" of a celestial body? Answer: I'll just add a supplement to @planetmaker's answer. As long as a body is distinct and not connected to anything else, it will have a center of mass. If the body is roughly spherical its center of mass will be near its middle. The body's rotational axis by definition passes through its center of mass, and is parallel to its own angular momentum vector. If there are two bodies orbiting each other, to a good approximation (but not exactly, see Which mass distributions guarantee two bodies have non-Keplerian orbits? Which non-spherical distributions still allow noncircular Keplerian orbits?) we can consider one's center of mass orbiting around the other's center of mass, and the center of both of their masses to be the pair's barycenter. Their mutual rotation and orbital angular momentum will be defined by another axis passing through that barycenter, which may be inside one of them (like the Earth-Moon system) or in space between them like the Pluto-Charon system. It doesn't matter.
One is the rotation of a single body around its own center of mass, the other is the rotation of two centers of mass about their common center of mass. Apples and oranges. If however a body is crazy-shaped, like a big letter "C" perhaps, that center of mass might be outside the body. That poses a conundrum for placing the "pole" of the body since the axis of rotation will not intersect the body's surface. In that case the body will simply not have a true pole. But poles are constructs and not fundamental. It still has a center of mass and a rotational axis and angular momentum vector, and those are really what matter. I don't know where the center of mass of comet 67P/Churyumov–Gerasimenko is exactly, nor where its poles are, so I think that would be an excellent follow-up question!
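The Pluto–Charon case raised in the question reduces to a one-line center-of-mass calculation. The masses, separation, and radius below are approximate published values quoted from memory, so treat them as illustrative:

```python
m_pluto = 1.303e22     # kg (approximate published value)
m_charon = 1.586e21    # kg (approximate published value)
separation = 19_570.0  # km, center-to-center distance (approximate)
r_pluto = 1_188.0      # km, Pluto's mean radius (approximate)

# Distance of the Pluto-Charon barycenter from Pluto's center of mass
d = separation * m_charon / (m_pluto + m_charon)
print(d, d > r_pluto)  # ~2100 km, outside Pluto's surface
```

So the mutual-orbit axis really does pass outside Pluto, but Pluto's own spin axis still passes through Pluto's center of mass, and that is the axis its poles are defined by.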
{ "domain": "astronomy.stackexchange", "id": 5087, "tags": "orbit, rotation, definition, pole" }
Function for determining triangle type
Question: A while back I was asked to write some sample code for a job that I was applying for. They wanted a class that had a function that would accept three lengths and return the appropriate type of triangle (Scalene, Isosceles or Equilateral) based on that. They also wanted unit tests. After sending this I never heard back, so I'm wondering if anyone would have any suggestions for a better way to implement this. using NUnit.Framework; namespace Triangle { /**** * This test is divided into two halves. * 1. Implement the GetTriangleType method so that it returns the appropriate * enum value for different inputs. * 2. Write a complete series of passing unit tests for the GetTriangleType method * under various input conditions. In this example we're using NUnit, but * you can use whatever testing framework you prefer. * * When finished, send back your solution (your version of TriangleTester.cs * will suffice) including any explanatory comments you feel are needed. */ public enum TriangleType { Scalene = 1, // no two sides are the same length Isosceles = 2, // two sides are the same length and one differs Equilateral = 3, // all sides are the same length Error = 4 // inputs can't produce a triangle } public class TriangleTester { /// <summary> /// Given the side lengths a, b, and c, determine and return /// what type of triangle the lengths describe, or whether /// the input is invalid /// </summary> /// <param name="a">length of side a</param> /// <param name="b">length of side b</param> /// <param name="c">length of side c</param> /// <returns>The triangle type based on the number of matching sides passed in.</returns> public static TriangleType GetTriangleType(int a, int b, int c) { //Placing items in an array for processing int[] values = new int[3] {a, b, c}; // keeping this as the first check in case someone passes invalid parameters that could also be a triangle type. //Example: -2,-2,-2 could return Equilateral instead of Error without this check. 
//We also have a catch all at the end that returns Error if no other condition was met. if (a <= 0 || b <= 0 || c <= 0) { return TriangleType.Error; } else if (values.Distinct().Count() == 1) //There is only one distinct value in the set, therefore all sides are of equal length { return TriangleType.Equilateral; } else if (values.Distinct().Count() == 2) //There are only two distinct values in the set, therefore two sides are equal and one is not { return TriangleType.Isosceles; } else if (values.Distinct().Count() == 3) // There are three distinct values in the set, therefore no sides are equal { return TriangleType.Scalene; } else { return TriangleType.Error; } } } [TestFixture] public class TriangleTesterTests { [Test] public void Test_GetTriangleType() { Assert.AreEqual(TriangleType.Equilateral, TriangleTester.GetTriangleType(4, 4, 4), "GetTriangleType(4, 4, 4) did not return Equilateral"); Assert.AreEqual(TriangleType.Isosceles, TriangleTester.GetTriangleType(4, 4, 3), "GetTriangleType(4, 4, 3) did not return Isosceles"); Assert.AreEqual(TriangleType.Scalene, TriangleTester.GetTriangleType(4, 3, 2), "GetTriangleType(4, 3, 2) did not return Scalene"); Assert.AreEqual(TriangleType.Error, TriangleTester.GetTriangleType(-4, 4, 4), "GetTriangleType(-4, 4, 4) did not return Error"); Assert.AreEqual(TriangleType.Error, TriangleTester.GetTriangleType(4, -4, 4), "GetTriangleType(4, -4, 4) did not return Error"); Assert.AreEqual(TriangleType.Error, TriangleTester.GetTriangleType(4, 4, -4), "GetTriangleType(4, 4, -4) did not return Error"); } } } Answer: I don't code C#, so just some generic notes: You can omit the else keyword if you return immediately: if (a <= 0 || b <= 0 || c <= 0) { return TriangleType.Error; } if (values.Distinct().Count() == 1) //There is only one distinct value in the set, therefore all sides are of equal length { return TriangleType.Equilateral; } ... Some comments are just says what's in the code. I'd remove them, they're just noise. 
The code doesn't check the triangle inequality: the sum of the lengths of any two sides of a triangle must be greater than the length of the third side (a + b > c). About the specification: in case of an error you might want to throw an exception with a detailed error message instead of returning the Error enum value, which tells clients nothing about the cause of the error. Just a link for @Jeff's point: too many asserts in one test is a bad smell. If the first AreEqual throws an exception you won't know anything about the results of the other assert calls, which could be important because they could help debugging and defect localization.
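The missing triangle-inequality check is the key point above; a quick sketch of the same classifier with that check added (in Python rather than C#, and raising an exception instead of returning an Error value, as the answer suggests):

```python
from enum import Enum

class TriangleType(Enum):
    SCALENE = 1      # no two sides the same
    ISOSCELES = 2    # exactly two sides the same
    EQUILATERAL = 3  # all sides the same

def triangle_type(a, b, c):
    # Reject non-positive sides and triples that violate the triangle
    # inequality: the longest side must be shorter than the sum of the others.
    if min(a, b, c) <= 0 or 2 * max(a, b, c) >= a + b + c:
        raise ValueError("inputs can't produce a triangle")
    return {1: TriangleType.EQUILATERAL,
            2: TriangleType.ISOSCELES,
            3: TriangleType.SCALENE}[len({a, b, c})]

print(triangle_type(4, 4, 4))  # TriangleType.EQUILATERAL
```

Note that (1, 2, 3) now raises, whereas the original C# would happily call it scalene.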
{ "domain": "codereview.stackexchange", "id": 17986, "tags": "c#, .net, unit-testing, nunit" }
Finding the objects with the cheapest seats
Question: i have the following json object const file = [ { "seatNumber": "1A", "price": "£19.99", "available": true, "disabilityAccessible": true }, { "seatNumber": "2A", "price": "£19.99", "available": false, "disabilityAccessible": false }, { "seatNumber": "3A", "price": "£19.99", "available": false, "disabilityAccessible": false }, { "seatNumber": "4A", "price": "£19.99", "available": true, "disabilityAccessible": false }, { "seatNumber": "5A", "price": "£19.99", "available": false, "disabilityAccessible": false }, { "seatNumber": "1B", "price": "£12.99", "available": true, "disabilityAccessible": true }, { "seatNumber": "2B", "price": "£12.99", "available": false, "disabilityAccessible": false }, { "seatNumber": "3B", "price": "£12.99", "available": false, "disabilityAccessible": false }, { "seatNumber": "4B", "price": "£12.99", "available": false, "disabilityAccessible": false }, { "seatNumber": "5B", "price": "£12.99", "available": true, "disabilityAccessible": false }, { "seatNumber": "1C", "price": "£12.99", "available": true, "disabilityAccessible": true }, { "seatNumber": "2C", "price": "£12.99", "available": true, "disabilityAccessible": false }, { "seatNumber": "3C", "price": "£12.99", "available": true, "disabilityAccessible": false }, { "seatNumber": "4C", "price": "£12.99", "available": true, "disabilityAccessible": false }, { "seatNumber": "5C", "price": "£12.99", "available": true, "disabilityAccessible": false }, { "seatNumber": "1D", "price": "£12.99", "available": true, "disabilityAccessible": true }, { "seatNumber": "2D", "price": "£12.99", "available": false, "disabilityAccessible": false }, { "seatNumber": "3D", "price": "£12.99", "available": true, "disabilityAccessible": false }, { "seatNumber": "4D", "price": "£12.99", "available": true, "disabilityAccessible": false }, { "seatNumber": "5D", "price": "£12.99", "available": true, "disabilityAccessible": false }, { "seatNumber": "1E", "price": "£8.99", "available": true, 
"disabilityAccessible": true }, { "seatNumber": "2E", "price": "£8.99", "available": true, "disabilityAccessible": false }, { "seatNumber": "3E", "price": "£8.99", "available": false, "disabilityAccessible": false }, { "seatNumber": "4E", "price": "£8.99", "available": true, "disabilityAccessible": false }, { "seatNumber": "5E", "price": "£8.99", "available": false, "disabilityAccessible": false } ] I am trying to display seatNumber of the cheapest objects. so far i have the following which works but would like to know if there is a better way following big o notation in es6 format const seats = file.filter( seat => seat.price.replace(/[^0-9.-]+/g,"") == Math.min(...file.map(function ( seat ) { return Number(seat.price.replace(/[^0-9.-]+/g,"")) }) ) ).map(seat => seat.seatNumber); console.log(seats) output is [ '1E', '2E', '3E', '4E', '5E' ] Answer: Following the little big O. You ask "...if there is a better way following big o notation..." (?) Big O notation is a formalized mathematical convention used to express how a function (mathematical function) behaves as it approaches infinity. It is used in computer science to classify an algorithms complexity in regard to a definable input metric, usually the size of the input array when dealing with arrays. You can not "follow" big O notation as it provides no insight into how to reduce an algorithms complexity apart from a comparison. Find the big O To classify your function using Big O, first we need to make it at least readable, and convert it to a function. See snippet. Now count the number of times the function loops over each item in the input array data. Experience lets you do this in your head, but to demonstrate we modify the function to count every time you handle an item in the input array. Because we need to fit a curve we need at least 3 different input sizes which I do with some random data. 
function bestSeats(data) { var c = 0; // the counter const seats = data.filter( seat => (c++, // count the filter iterations seat.price.replace(/[^0-9.-]+/g, "") == Math.min( ...data.map(function ( seat ) { c += 2; // this counts as two // once each to map to the arguments of Math.min // once each to find the min return Number(seat.price.replace(/[^0-9.-]+/g, "")) } ) ))).map(seat => ( c++, // count each result seat.seat )); return "O(n) = O(" + data.length + ") = " + c; } function randomData(size) { const data = []; while (size--) { data.push({seat:size, price: "$"+(Math.random() * 100 | 0)})} return data; } console.log("Eval complexity A:" + bestSeats(randomData(10))); console.log("Eval complexity B:" + bestSeats(randomData(100))); console.log("Eval complexity C:" + bestSeats(randomData(500))); The 3 points we can use to find the curve that best fits O(10) ~= 211 O(100) ~= 20,102 O(500) ~= 500,005 Experience tells me it's a polynomial of some (not too high) order. Using a graphing calculator I found a reasonable fits at 2.15 making your functions big O complexity \$O(n^{2.15})\$ Which is insanely inefficient. OMDG!!!! So keeping "...in es6 format" (?) in mind a cleaner more readable, less CO2 producing sea level rising approach is to do a two pass scan of the seats. The first pass finds the min price, the second extracts the min price seat numbers. This example uses for of loops rather than the Array methods like filter and map, because these array methods have a nasty way of blinding people to the insane level of complexity of what they do, and for of is a little quicker and often allows better optimizations than the array methods, so it's win win for for of Examples in \$O(n)\$ linear time. 
function bestSeats(seats) { var min = Infinity, minVal; const result = []; for (const seat of seats) { const price = Number(seat.price.slice(1)); if (price < min) { min = price; minVal = seat.price; } } for (const seat of seats) { if (seat.price === minVal) { result.push(seat.seatNumber) } } return result; } And if you must use the array methods. function bestSeats(seats) { const price2Num = seat => Number(seat.price.slice(1)); const min = (min, seat) => price2Num(seat) < min ? price2Num(seat) : min; const minPrice = "$" + seats.reduce(min, Infinity); return seats.filter(seat => seat.price === minPrice).map(seat => seat.seatNumber); }
{ "domain": "codereview.stackexchange", "id": 35710, "tags": "javascript, ecmascript-6" }
Is there any truth to interpreting the definition of a second as corresponding to oscillations?
Question: As far as I understand the definition of a second, the Cs-133 atom has two hyperfine ground states (I don't really understand what they are, but it's not really important here), with a specific energy difference between them. Whenever the atom transitions from the higher-energy to the lower-energy state, the difference in energy is released as a photon. A photon with that energy is equivalent to EM radiation of a specific frequency. A second is then defined as 9192631770 divided by this frequency. In many places I see people claiming that the cesium atom oscillates between the two states, transitioning from one to the next 9192631770 times per second, and that this is what the definition is based on. This makes no sense to me, and seems incompatible with the interpretation above - which is based on the energy of a single transition, not on rapid transitions. So I usually just dismiss it and/or correct the person claiming this. When I saw the "oscillations" interpretation repeated in a video by the hugely popular Vsauce, I started to think maybe I got it all wrong. Maybe the second is defined by oscillations after all? Or maybe the two interpretations are somehow equivalent? So, is there any truth to Vsauce's description? And if not, why is the misconception of oscillations so popular? Answer: The definition for the cesium clock is: 9192631770 cycles per second is the frequency of the radio waves which cause maximum resonance, a physically measurable condition, in the cesium atoms. This corresponds to a particular tuning of the radio. Keeping it tuned provides the reference frequency cited.
{ "domain": "physics.stackexchange", "id": 30898, "tags": "atomic-physics, frequency, definition, si-units, metrology" }
variable repetitions in pumping lemma for context-free languages
Question: Above is the proof of the pumping lemma for context-free languages, coming from the book 'Formal Languages and Automata' by Peter Linz. The picture below is in support of the proof. I do not understand two things about this proof: 1) "we can assume that no variable repeats in the subtree $T_5$". Why should there not be any variable that repeats itself? Why is that a problem? 2) "Similarly, we can assume that no variable repeats in the subtrees $T_3$ and $T_4$." I don't quite see how we can fix the issue if there are variables that repeat in the subtrees $T_3$ or $T_4$? Answer: It is certainly possible that some variables repeat in the subtrees $T_3$, $T_4$ or $T_5$. There is nothing wrong with those situations, except that they are too relaxed for us to ensure "the lengths of the strings $v$, $x$, and $y$ depend only on the productions of the grammar and can be bounded independently of $w$ so that (8.2) holds". Let us verify that we are able to make those two assumptions. One way to enable those two assumptions is described in the textbook; I will let you convince yourself it works. Here is a slightly different way to enable those two assumptions. We know that there are repeated variables on some path from the root to a leaf. Choose a pair of the same variable so that the number of nodes in the subtree rooted at the upper appearance of the variable is the smallest among all possible such pairs. Let that pair be shown as the two $A$s in the following derivation, which corresponds to figure 8.1.
$$ S\stackrel{*}{\implies}uAz\stackrel{*}{\implies}uvAyz\stackrel{*}{\implies}uvxyz$$ There is no repeated variable in subtrees $T_3$, $T_4$ and $T_5$, since any repeated variable in $T_3$, $T_4$ or $T_5$, say $B$, would mean the subtree rooted at the upper appearance of $B$ has a smaller number of nodes than the subtree rooted at the upper $A$ (as the former tree is strictly contained in the latter tree), which contradicts how the $A$s in the figure have been selected. That means any path from the upper $A$ to any leaf in the tree passes through at most $|V|+2$ nodes, where $V$ is the set of variables. ("$+2$" since $A$ may appear twice in that path and the leaf node holds a terminal symbol instead of a variable.) Since each node can have at most $\ell$ children, where $\ell$ is the maximum length of the string on the right side of any production, the number of leaves in the subtree rooted at the upper $A$ is at most $\ell^{|V|+1}$, i.e., $|vxy|\le\ell^{|V|+1}$. Note $\ell^{|V|+1}$ is a constant that depends on the grammar only.
{ "domain": "cs.stackexchange", "id": 20170, "tags": "formal-languages, automata, finite-automata, context-free, pumping-lemma" }
How do new species come into existence?
Question: The only reason for the creation of new species that I found on the internet is geographical isolation. Are there any more reasons? Answer: A paper from Müller et al. proposes a molecular marker called a CBC, differences in which can be used to call two different species, even when closely related: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1950759/ In this study we are looking for a molecular classifier that might indicate that two organisms belong to different species. We are interested in an indicator hypothesis that is easy to work upon and additionally yields a certain probability that two organisms belong to distinct species. Compensatory base changes (CBCs) in the internal transcribed spacer 2 region (ITS2) of the nuclear rRNA cistron have been suggested as such a classifier. Some mutation event that creates sufficiently different CBCs in a viable offspring capable of further reproduction, i.e., survives natural selection, would therefore lead to speciation, by this measure.
{ "domain": "biology.stackexchange", "id": 10370, "tags": "genetics, gene, speciation" }
The derivative of rotational kinetic energy in terms of period gives me the wrong answer. Why should I use the product rule?
Question: This is my first question here so I hope I do it correctly. I've tried to solve this, and google it, but I can't find the answer to this particular question. This equation comes from Carroll and Ostlie, "An Introduction to Modern Astrophysics" (chapter on neutron stars) and it is not a homework problem. I'm reading the book on my own and I got stuck. I will try to show my work. We begin by knowing $K = {{1} \over {2}}I\omega^2 $ for an object with rotational energy. Using $\omega = {{2\pi} \over {P}}$ we can show that $K = {{1} \over {2}}I\omega^2 = {{2\pi^2 I} \over {P^2}}$. So far so good, this is the formula we want. Now the text says we want the ${{dK} \over {dt}}$ of ${{2\pi^2 I} \over {P^2}}$. And this is where I'm stuck; I think it's because I don't understand how $dt$ is applied when there's no $t$. I've taken calculus and assumed that I just applied it like a polynomial, getting ${{dK} \over {dt}} = {{-4\pi^2 I} \over {P^3}}$. But the book claims the real answer should be ${{dK} \over {dt}} = {{-4\pi^2 I\dot{P}} \over {P^3}}$. The only way I could get that answer is via the product rule (I assume) but I have no clue where that $\dot{P}$ comes from, or why it's a reasonable answer. I do know what the symbols mean though. Please help me? Answer: $ KE = \frac{I \omega ^2}{2} $ From the chain rule, $ \frac{dK}{dt}= I \omega \frac{d\omega}{dt}$ Using $\omega= 2 \pi /P $ we get: $ \frac{dK}{dt}= I(\frac{(2\pi)^2}{P})(\frac{d(\frac{1}{P})}{dt})$ Again applying the chain rule on $P$: $ \frac{dK}{dt}= I(\frac{(2\pi)^2}{P})(\frac{d(\frac{1}{P})}{dP})(\frac{dP}{dt}) $ Which gives us $ \frac{dK}{dt}= \frac{I \cdot 4\pi^2}{P} \cdot (\frac{-1}{P^2})(\frac{dP}{dt})$ The term $ \dot{P} $ is nothing but the first time derivative of $P$. Hope it helps.
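The chain-rule result can also be sanity-checked numerically. A minimal sketch (the moment of inertia and the linearly growing period below are arbitrary test values, not from the book):

```python
import math

I = 3.0                  # arbitrary moment of inertia
P0, Pdot = 2.0, 0.1      # arbitrary period and its (constant) time derivative

def P(t):
    return P0 + Pdot * t              # period as a function of time

def K(t):
    return 2 * math.pi**2 * I / P(t)**2   # K = 2*pi^2*I / P^2

t0, h = 5.0, 1e-6
dK_numeric = (K(t0 + h) - K(t0 - h)) / (2 * h)       # central finite difference
dK_formula = -4 * math.pi**2 * I * Pdot / P(t0)**3   # the book's chain-rule result

print(dK_numeric, dK_formula)   # the two values agree closely
```

The $\dot{P}$ appears because $P$ itself changes with time: differentiating $P^{-2}$ with respect to $t$ brings down a factor of $dP/dt$ by the chain rule, not the product rule.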
{ "domain": "physics.stackexchange", "id": 92353, "tags": "classical-mechanics, astrophysics, calculus" }
Are antibodies removed before blood transfusion
Question: I am an O blood group person, meaning I can donate my blood to all as I don't have antigens A and B. But my body does contain antibodies A and B, right? If they came along with the donor blood, wouldn't they cause clotting with the recipient's blood? We spent a whole period over this. My teacher said they probably remove the antibodies before transfusion, but I couldn't find much info on that. I postulated that these antibodies cannot work outside the donor body for some reason. Thanks for the help. Answer: I quote from Blood Groups and Red Cell Antigens by Laura Dean (page 7), available at http://www.ncbi.nlm.nih.gov/books/NBK2265/ : "Red blood cell incompatibility may also occur when the patient's RBC antigens are attacked by antibodies from the donor's plasma. This tends to be a minor problem because of the small amount of antibody present in the donated plasma, which is further diluted on transfusion into the recipient's circulation." I don't think the antibodies are removed, but in countries with a good medical system I believe the matching blood type is always used. http://en.wikipedia.org/wiki/Acute_hemolytic_transfusion_reaction
{ "domain": "biology.stackexchange", "id": 11123, "tags": "blood-circulation, antibody, antigen, blood-group, blood-transfusion" }
Whiteboarding - Multiply all values in array except at the index for each
Question: Consider this interview question: You have an array of integers, and for each index you want to find the product of every integer except the integer at that index. Write a method get_products_of_all_ints_except_at_index() that takes an array of integers and returns an array of the products. For example, given: [1, 7, 3, 4] your method would return: [84, 12, 28, 21] by calculating: [7 * 3 * 4, 1 * 3 * 4, 1 * 7 * 4, 1 * 7 * 3] Here's the catch: You can't use division in your solution! And here's my solution: def get_products_of_all_ints_except_at_index(arr) products = [] (0...arr.length).each do |index| product = 1 arr.each do |num| if num != arr[index] product *= num end end products << product end products end Answer: First of all, congrats on producing a solution that works correctly. However, keep in mind that correctness is often not the only criteria for a successful candidate. Other typical criteria include performance, as well as the analysis of corner cases, where this solution does poorly. Time complexity analysis What is the time complexity of your solution? \$O(n^2)\$ That's a typical question to expect at a programming interview. The point of the question is not so much the accurate computation, but an open discussion around this topic. Would you describe this solution as optimal? No. It's a brute-force solution. A likely next question is: can we do better? Algorithm Here's the catch: You can't use division in your solution! That's some sort of hint. It's trying to guide you in a certain direction. Without the catch, would you implement the solution the same way? If we can use division, it's easy to see a simple optimization: first compute the product of all elements, and then for each element, divide the total product by the element. That would have time complexity \$O(n)\$ instead of \$O(n^2)\$. The catch prevents us from using the simple optimization. Why? So that we find another way. 
This is the hard part, to discover something clever under high pressure. But at least it's important to reach this point by thinking out loud about time complexity, and the nature of the problem at hand. You could compute "prefix products" \$L\$, such that \$L[i]\$ is the product of all the values that are on the left of \$a[i]\$, and "prefix products" \$R\$, such that \$R[i]\$ is the product of all the values that are on the right of \$a[i]\$. With these helper arrays, the target value to compute for each \$i\$ is \$L[i] * R[i]\$. No division needed, and time complexity of this solution is \$O(n)\$. Then a discussion can follow about tradeoffs. For example, how does this compare to your original solution? Time complexity is improved, but space complexity is now \$O(n)\$, instead of \$O(1)\$ of brute-force. A likely next question is: can we do better? Corner cases Other important points an interviewer may look for: Does the candidate look for corner cases? If you don't look for them, the interviewer will probably nudge you to go look for them. It's important to recognize the nudge. Is the candidate able to find corner cases? If you are not able to, the interviewer will probably nudge you in the general direction. It's important to recognize the nudge, and then the general direction, and verbalize all that, thinking out loud. Can the candidate correctly adapt the solution to handle the corner cases? So what's an interesting corner case here? When computing products of numbers, there may be a risk of integer overflow. At this point it's important to ask about the minimum and maximum values that may in the array, as well as the length of the array. Based on that, the candidate should discuss about the possibility of integer overflows, and try to compute if it can happen or not. And sure enough, if you are able to conclude that based on the input parameters integer overflow cannot happen, the interviewer will adjust the parameters accordingly. 
And then you need to discuss strategies to deal with the added complication. You can expect the interviewer to keep adding twists to the problem, raising more and more challenges. The interview can branch and go in multiple possible directions, often in directions where you seem least comfortable. It's good to try to anticipate potential complications. Trying to find corner cases is probably a good starting point. Discussion The posted question contains simply the problem description and the solution. A discussion around the solution is missing, maybe you didn't think it's important. But it is. Thinking out loud, and expressing your logic clearly during a programming interview is usually just as important as the solution itself.
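For concreteness, here is a sketch of the prefix-products approach described above, in Python rather than the question's Ruby (the function name mirrors the problem statement; the implementation details are mine):

```python
def get_products_of_all_ints_except_at_index(arr):
    """O(n) time, no division. left[i] holds the product of everything to
    the left of index i; right[i] the product of everything to the right."""
    n = len(arr)
    left = [1] * n
    for i in range(1, n):
        left[i] = left[i - 1] * arr[i - 1]
    right = [1] * n
    for i in range(n - 2, -1, -1):
        right[i] = right[i + 1] * arr[i + 1]
    return [left[i] * right[i] for i in range(n)]

print(get_products_of_all_ints_except_at_index([1, 7, 3, 4]))  # → [84, 12, 28, 21]
```

Note that the posted Ruby skips elements by value (`num != arr[index]`) rather than by index, so it would mishandle arrays with duplicate values; the sketch above avoids that by working purely with indices.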
{ "domain": "codereview.stackexchange", "id": 32415, "tags": "ruby, interview-questions" }
Why is the complexity of factorial a function of n?
Question: When we compute the complexity of calculating the factorial of a number $n$, why is it in terms of $n$ instead of the number of bits occupied by $n$ (like we do in number-theoretic algorithms like primality checking etc.)? Answer: Complexity can be expressed in terms of any reasonable measure. For example, when discussing graph algorithms, we usually state the complexity in terms of the number of vertices and/or edges, rather than the number of bits required to write the graph as input, e.g., as an adjacency matrix. So I guess you've just come across resources that discuss the complexity in terms of the number being "factorialed" rather than the number of bits required to write down that number – I'm sure you'll agree that this is a reasonable measure, even if it's not your favourite one. Further, it's easy to convert between the two: a $k$-bit number can represent values up to $2^k$, so just take the expression you've been given and substitute $2^n$ for $n$.
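To make the two measures concrete, here is a small sketch counting the multiplications the naive factorial performs, against both the value $n$ and its bit length (the counting helper is mine, purely illustrative):

```python
def factorial_with_count(n):
    """Naive factorial; also returns the number of multiplications used."""
    result, mults = 1, 0
    for k in range(2, n + 1):
        result *= k
        mults += 1
    return result, mults

for n in (15, 255, 4095):
    _, m = factorial_with_count(n)
    # Work grows linearly in the value n, i.e. exponentially in n.bit_length()
    print(n, n.bit_length(), m)
```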
{ "domain": "cs.stackexchange", "id": 11994, "tags": "complexity-theory, time-complexity, primes, factorial" }
Why does the Standard Model not unify $SU(3)$ and $SU(2)\times U(1)$?
Question: I am struggling with the definition of a unification. I read this question and I am wondering why the standard model does not unify the strong and weak forces according to the given definition of a unification, namely that "'Unification' refers to explaining two sets of phenomena (theories) which were previously unrelated, and combining them into a single cohesive description." Isn't the Standard Model doing exactly this with all three forces? It gives a framework (even in one Lagrangian) that explains all three forces. Regarding this question, I am wondering what we mean when we say the Lagrangian of the SM is $SU(3)\times SU(2)\times U(1)$ invariant. In my understanding there is a part which is $SU(3)$ invariant and a part that is $SU(2)\times U(1)$ invariant. So my questions are: What is the precise definition of a unification in QFT? Why is the standard model not a unified theory for the strong and electroweak forces (and why is it for the electroweak theory)? In which sense is the SM Lagrangian $SU(3)\times SU(2)\times U(1)$ invariant? Or alternatively, in which sense is it invariant if we define the charge of each field under each specific group? Answer: 1) I'm not aware of a precise definition of unification in QFT. In gauge theories, practitioners tend to mean that the gauge bosons transform in the adjoint representation of a single simple Lie group, for example SO($N$) or SU($N$) or $E_8$. Transforming under a single simple group means they must all have the same coupling strength at sufficiently high energies where the group remains unbroken. Recall that for each simple Lie group, one can have a separate $\frac{1}{2g^2} \operatorname{Tr} F^2$ type kinetic term with a different gauge coupling $g$.
The smallest simple Lie group which contains SU(3)$\times$SU(2)$\times$U(1) as a subgroup is SU(5). As I understand it, proton lifetime measurements have pretty much ruled out SU(5) unification. This Wikipedia article has a more in depth discussion. In my choice of definition, I would say that the electroweak theory is not unified in the standard model. I would say that it allows for successful employment of the Higgs mechanism, which leads to nontrivial mixing between the SU(2) and U(1) factors at low energy. 3) The standard model is SU(3)$\times$SU(2)$\times$U(1) invariant in the usual sense. If one transforms the gauge bosons under the adjoint representations of their respective groups and if one transforms the fermions under the appropriate fundamental and singlet representations, the Lagrangian shifts at most by a total derivative. The SU(3) and SU(2)$ \times $U(1) groups do not split nicely in their action on the Lagrangian. For example, the quarks transform in the fundamental of SU(3) and are also charged under the E&M U(1) subgroup of SU(2) $\times$ U(1).
{ "domain": "physics.stackexchange", "id": 38790, "tags": "standard-model, gauge-theory, group-theory, unified-theories" }
Why is a star at its highest point when local sidereal time = RA (right ascension)?
Question: I'm currently taking a course called Introduction to Astronomy at Coursera. It's not very formal so they didn't present a proof of the statement in the title. I understand why logically one could argue that when a star lies on the projection of the observer's local meridian onto the celestial sphere (I understand that this projection defines the observer's local sidereal time) then this star is at its highest point in the sky. I haven't found any mathematical proof of this statement and also do not know how to approach the problem geometrically (either Euclidean geometry or analytic geometry). I appreciate any hint or direct answer on how to approach the problem. Thanks. Answer: At some geocentric latitude $\phi$ and longitude $\lambda$, the elevation or altitude $a$ of a star with right ascension $\alpha$ and declination $\delta$ is given by $$\begin{aligned} \sin a = \sin\phi \sin \delta + \cos \phi \cos\delta \cos h&&&&&&(1) \end{aligned}$$ where $$\begin{aligned} h=\theta_\phi - \alpha\qquad&&\qquad&&\qquad&&\quad\,&&(2) \end{aligned}$$ is the hour angle to the star, expressed in terms of the local sidereal time $\theta_{\phi}$ and the right ascension $\alpha$. Differentiating (1) with respect to time yields $$\begin{aligned} \cos a \frac{da}{dt} = & \phantom{+}\,(\cos\phi\sin\delta - \sin\phi \cos\delta \cos h)\,\frac{d\phi}{dt} \\ & + \,(\sin\phi \cos\delta - \cos\phi \sin\delta \cos h)\,\frac{d\delta}{dt} \\ & - \cos\phi \cos\delta \sin h\,\frac{dh}{dt} &\!\!\!(3) \end{aligned} $$ While not quite zero, the time derivatives of latitude $\phi$ and declination $\delta$ are negligibly small compared to the $2\pi$ radians per sidereal day time derivative of hour angle $h$. For all practical purposes, the above thus reduces to $$\begin{aligned} \cos a \frac{da}{dt} = - \cos\phi \cos\delta \sin h\,\frac{dh}{dt} \quad\!&&&&&&(4) \end{aligned} $$ The left hand side of (4) is zero at extrema of elevation angle.
Thus we're looking for conditions that make the right hand side of (4) equal zero. Since $dh/dt$ is nonzero, the right hand side is zero only if one or more of $\cos \phi$, $\cos \delta$, or $\sin h$ is zero. The first two cases ($\cos \phi = 0$ and $\cos \delta = 0$) represent conditions where elevation is constant. The only condition of interest is $\sin h = 0$, which means an hour angle of 0° or 180° (hour angle is constrained to lie between 0° (inclusive) and 360° (exclusive)). Since latitude $\phi$ and declination $\delta$ both lie between -90° and +90°, the condition $h=180^{\circ}$ represents the minimum possible elevation, while $h=0$ represents the maximum possible elevation. From (2), $h=0$ means $\theta=\alpha$, or local sidereal time being equal to right ascension.
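Equation (1) also lends itself to a quick numerical check that elevation peaks at $h=0$. A sketch with arbitrary latitude and declination values:

```python
import math

phi = math.radians(40.0)     # arbitrary observer latitude
delta = math.radians(15.0)   # arbitrary star declination

def elevation(h):
    """Equation (1): sin a = sin(phi) sin(delta) + cos(phi) cos(delta) cos(h)."""
    return math.asin(math.sin(phi) * math.sin(delta)
                     + math.cos(phi) * math.cos(delta) * math.cos(h))

# Sample the hour angle over a full turn and locate the extremes
hs = [2 * math.pi * k / 3600 for k in range(3600)]
h_max = max(hs, key=elevation)   # hour angle of highest elevation
h_min = min(hs, key=elevation)   # hour angle of lowest elevation

print(h_max, h_min)  # → 0.0 and (approximately) pi
```

So the maximum occurs at hour angle zero, i.e. when the local sidereal time equals the star's right ascension, matching the derivation above.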
{ "domain": "astronomy.stackexchange", "id": 677, "tags": "positional-astronomy" }
Generating a bit sequence containing every possible subsequence of lengths 1...n
Question: Generating a sequence containing every possible bit sequence of lengths from $1$ to $n$ is trivial - just generate every possible sequence of length $n$. But how do you generate the shortest sequence containing every possible bit subsequence of lengths from $1$ to $n$ (using the fact that subsequences may partially overlap)? What complexity class does that problem belong to? Answer: Such a sequence (for arbitrary $k$-ary encodings) is known as a De Bruijn sequence. For the case of binary encodings, the De Bruijn sequence has length $2^n$. As the output is exponential, this problem is clearly not in $P$, but it lies in $EXP$, as it can be constructed via a Hamiltonian path along the De Bruijn graph (see also here).
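A small sketch of one standard construction (the FKM / Lyndon-word concatenation algorithm) for a binary De Bruijn sequence $B(2,n)$, with a check that every $n$-bit string occurs as a window of the cyclic sequence:

```python
def de_bruijn(n, k=2):
    """FKM (Lyndon-word concatenation) construction of a De Bruijn
    sequence B(k, n); returns a list of k**n symbols."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

n = 4
s = de_bruijn(n)                 # length 2**n = 16
cyclic = s + s[:n - 1]           # wrap around to read cyclic windows
windows = {tuple(cyclic[i:i + n]) for i in range(len(s))}
print(len(s), len(windows))      # 16 distinct 4-bit windows: all of them appear
```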
{ "domain": "cs.stackexchange", "id": 10575, "tags": "algorithms, complexity-theory" }
Why is it only carbons that have hydrogens attached (and not just hydrogens directly attached) to the charged carbon which result in hyperconjugation?
Question: For the following compound: Looking at figure 2, my understanding of hyperconjugation is that carbon 2 is more electronegative than the hydrogens attached to it, and so pulls charge towards carbon 2, giving it a partial negative charge. This negative charge then stabilises the positive charge on carbon 1. Figure 2 My book says, for the compound $\ce{C+H2CH2CH2CH3}$, there is only one hyperconjugation centre (i.e. the $\ce{CH2}$ attached to the C+). Why can't the hydrogens attached to the C+ act as hyperconjugation centres? E.g. won't these hydrogens be less electronegative than the C+, and so the C+ will attract the electrons closer to it, somewhat stabilising the positive charge on the carbon? Answer: Hyperconjugation happens because of an overlap of orbitals. Consider this: in (uncharged) 2-methylpropane, the central carbon has sp3 hybridisation (and a hydrogen bound to it). When the carbocation of the image is formed, the hydrogen leaves, the hybridisation becomes sp2 (hence, planar), and now there is a free p orbital available. Here is what causes hyperconjugation: p orbitals are considerably larger than they are drawn in books, and they can overlap partially with the orbitals of the adjacent hydrogens, and get part of their charge. In the example of butane, the thing is the same: when the carbocation is formed, the carbon and the two hydrogens attached become planar (sp2), and a perpendicular p orbital is formed. It cannot interact with the hydrogens attached to the charged carbon because they are on its nodal plane. Instead, it overlaps partially with the orbitals of the hydrogens on the adjacent carbon.
{ "domain": "chemistry.stackexchange", "id": 6161, "tags": "organic-chemistry, hyperconjugation" }
C program to read data from text file and write it to a binary one
Question: This program will read data in the form of: Number Name In each line of a text file, then it will write it to a binary file and print the resulting binary file. I am getting a warning in line 73, but I don't think it's an issue as null is converted into 0, right? while(((fread((char*)&student, 1, sizeof(student), binary_file)) != NULL)) I'm talking about this line above #include <stdio.h> #include <stdlib.h> #include <string.h> #define MAX_SIZE 50 void create_binary_file(const char *dest, const char *src); void print_binary_file(const char *dest); int main(void){ const char ascii[MAX_SIZE] = "text.txt"; char binary[MAX_SIZE]; puts("Enter the name of the binary file"); fgets(binary, sizeof(binary), stdin); binary[strlen(binary)-1] = '\0'; create_binary_file(binary, ascii); print_binary_file(binary); return 0; } void create_binary_file(const char *dest, const char *src){ struct student_tem{ int ID; char name[MAX_SIZE]; }; struct student_tem student; FILE *text_file; FILE *binary_file; if((text_file = fopen(src, "r")) == NULL){ perror(src); exit(EXIT_FAILURE); } if((binary_file = fopen(dest, "wb")) == NULL){ perror(dest); exit(EXIT_FAILURE); } while(!feof(text_file)){ if(1 != fscanf(text_file, "%i", &student.ID)){ fprintf(stderr, "Error reading student number"); exit(EXIT_FAILURE); } if(1 != fscanf(text_file, "%s", student.name)){ fprintf(stderr, "Error reading student name"); exit(EXIT_FAILURE); } fwrite(&student, 1, sizeof(student), binary_file); } fclose(text_file); fclose(binary_file); } void print_binary_file(const char *dest){ struct student_tem{ int ID; char name[MAX_SIZE]; }; struct student_tem student; FILE *binary_file; if((binary_file = fopen(dest, "rb")) == NULL){ perror(dest); exit(EXIT_FAILURE); } while(((fread((char*)&student, 1, sizeof(student), binary_file)) != NULL)){ printf("%i %s\n", student.ID, student.name); } } Answer: I am getting a warning in line 73, but I don't think it's an issue as null is converted into 0, right? 
while(((fread((char*)&student, 1, sizeof(student), binary_file)) != NULL)){ Well, if you look at the fread() reference documentation, you see that the return value is a simple size_t, not a pointer. NULL is meant to be a void* pointer and may be implemented like #define NULL ((void*)0) according to this. C is still a strongly statically typed language; a ((void*)0) isn't the same as a ((size_t)0), hence the warning. And you should fix that: it actually is an issue. Another thing you should look into is while(!feof(text_file)){ Why is iostream::eof inside a loop condition considered wrong? It's merely the same problem in plain C code. You should rather check the stream's state from the results of the fscanf() operations.
{ "domain": "codereview.stackexchange", "id": 25171, "tags": "c, file" }
Back propagation through a simple convolutional neural network
Question: Hi, I am working on a simple convolutional neural network (image attached below). The input image is 5x5, the kernel is 2x2, and it undergoes a ReLU activation function. After ReLU it gets max pooled by a 2x2 pool; these then are flattened and headed off into the fully connected layer. Once through the fully connected layer the outputs are converted into Softmax probabilities. I've propagated forward through the network and am now working on the back propagation steps. I have taken the derivative of cross entropy and softmax, and calculated the weights in the fully connected layer. Where I get confused is how to perform back propagation through max pooling and then ultimately find the derivatives of the weights in the convolution layer. What I've found online is that you need to find the derivative of the loss with respect to the flattened layer, but I am unsure how you do that. If I could get some help with an explanation, ideally with equations, it would be awesome. Cross-posted on Stack Overflow (https://stackoverflow.com/questions/63022091/back-propagation-through-a-simple-convolutional-neural-network) Answer: The backpropagation algorithm attributes a penalty per weight in the network. To get the associated gradient for each weight we need to backpropagate the error back to its layer using the derivative chain rule. Flattening layer The derivative of a layer depends on the function that is being applied. In the case of the flattening layer it is simply reshaping (a mapping) the values. Thus no additional loss will be added at this layer. All you need to know is how the flattening occurs. For example if the forward pass flattening is $flatten\begin{pmatrix} a & b\\ c & d \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix}$, then you can easily map the associated cost so far back to the $2 \times 2 \times 1$ feature map.
Max pooling layer In the forward pass the max pooling layer is taking the maximum value in a $3 \times 3$ window that is passed along your image. For example the bold values in the first $3 \times 3$ window would have a maximum of $11$. $maxpooling \begin{pmatrix} \bf{1} & \bf{2} & \bf{3} & 4 \\ \bf{5} & \bf{6} & \bf{7} & 8 \\ \bf{9} & \bf{10} & \bf{11} & 12 \\ 13 & 14 & 15 & 16 \end{pmatrix} = \begin{pmatrix} \bf{11} & 12\\ 15 & 16 \end{pmatrix}$ Thus the resulting error backpropagation would only pass through the maximum values which were passed down by the forward pass. For all other values the error term would not backpropagate. Thus the current error matrix you had backpropagating until this point would be multiplied by $\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 \end{pmatrix}$ Thus only 4 error terms would continue onto earlier layers. Convolutional layers I have gone in detail about how to do backpropagation through convolutions here: CNN backpropagation between layers.
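The max-pooling routing described above can be sketched in plain Python: each upstream gradient flows only to the arg-max position of its forward-pass window. This uses stride 1 and a 3×3 window to match the answer's example; the all-ones upstream gradient is an illustrative assumption:

```python
def maxpool_backward(x, grad_out, win=3):
    """Stride-1 max-pool backward pass: route each output gradient to the
    position that held the maximum of its window in the forward pass."""
    n = len(x)
    m = n - win + 1                        # output side length
    grad_in = [[0.0] * n for _ in range(n)]
    for i in range(m):
        for j in range(m):
            positions = [(r, c) for r in range(i, i + win)
                                for c in range(j, j + win)]
            bi, bj = max(positions, key=lambda rc: x[rc[0]][rc[1]])
            grad_in[bi][bj] += grad_out[i][j]
    return grad_in

x = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
g = maxpool_backward(x, [[1, 1], [1, 1]])
print(g)   # nonzero only at the four forward-pass maxima (11, 12, 15, 16)
```

Real frameworks do the same thing with cached arg-max indices from the forward pass instead of recomputing the maxima.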
{ "domain": "datascience.stackexchange", "id": 7968, "tags": "cnn, backpropagation, convolutional-neural-network" }
Can the formula of buoyancy be used in this arrangement?
Question: The buoyant force exerted on an object in water is given by $\rho \cdot g \cdot v_{submerged}$, which only depends on the volume submerged and nothing else (I think). But can it be applied to partially submerged bodies like in the diagram below? I feel like it can be applied to the part which is submerged, but my teacher says otherwise. An intuitive explanation on why it can/cannot be would be much appreciated. Answer: I think it's useful to consider what causes Archimedes' principle to be true to see why it does not apply as you are suggesting here. The fluid on the bottom is actually necessary for the principle to apply. We know from hydrostatics that the pressure of a fluid increases with depth. This means that for any object submerged in a way where there is fluid below it, the pressure below should be higher than the pressure above, where the magnitude of that difference depends on height difference, fluid density, and gravity. We also know that the force due to pressure depends on the area that the pressure acts on. So the object will have a greater force acting below it than above it, and you can see how the larger the area of the object in the horizontal direction, the greater the difference in vertical force should be due to the vertical pressure difference. The area in the vertical direction(s) should make no difference, because the forces on each side will cancel out for an object totally surrounded by water. So then when you consider that the pressure on the bottom varies based on the height of the object in each location, and that the force due to that pressure will depend on how much area the bottom has at each location, you might be able to get an intuitive grasp for why Archimedes' principle says the buoyant force depends on the submerged volume. It is really due to these pressure differences and the forces that show up due to them, and how when all combined the force scales directly with the volume.
But the important thing to remember is that this only applies when fluid is below it. In this picture you have, the fluid is only above, so there is pressure pushing down on the object due to the water, but no water below the object to push back up, so Archimedes' principle no longer applies. The mistake of assuming it works when there is no fluid below the object is not uncommon though, because when Archimedes' principle is taught, it's rarely explained how the principle is derived, and so what constitutes a "submerged" object is often seen in the way you thought here. But yes, your teacher is correct. I apologize for the really long-winded answer, but this is how I got an intuitive understanding for this exact topic, and if you do manage to follow what I'm saying I think you will feel a lot more comfortable with buoyancy and Archimedes' principle too.
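The pressure bookkeeping in this answer can be checked with a quick numerical sketch for a hypothetical vertical cylinder fully surrounded by water (all numbers and variable names below are illustrative assumptions, not from the original post): the net of the top and bottom pressure forces comes out exactly to $\rho g V$.

```python
# Illustrative check: the net vertical pressure force on a fully
# submerged vertical cylinder equals rho * g * V (Archimedes' principle).
rho = 1000.0  # water density, kg/m^3
g = 9.81      # gravitational acceleration, m/s^2
A = 0.02      # cross-sectional area of the cylinder, m^2
h = 0.5       # cylinder height, m
d = 1.0       # depth of the top face below the surface, m

p_top = rho * g * d              # hydrostatic pressure on the top face
p_bottom = rho * g * (d + h)     # higher pressure on the deeper bottom face
net_up = (p_bottom - p_top) * A  # side forces cancel by symmetry

V = A * h  # submerged volume
assert abs(net_up - rho * g * V) < 1e-9  # matches rho*g*V exactly
```

Note that the upward term exists only because water pushes up on the bottom face; if the bottom face is sealed against the container with no water underneath, that term disappears, which is exactly the teacher's point.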
{ "domain": "physics.stackexchange", "id": 73150, "tags": "fluid-statics, buoyancy" }
Simulating noise in computed tomography reconstruction
Question: In research on computed tomography (CT) reconstruction, one needs to simulate the noise during CT projection and capture. My questions are: What is the proper noise type? How should the noise be added, in the sinogram domain or the frequency domain? Any reference materials are also welcome. Answer: The easiest, and probably most straightforward, way seems to be to add the noise in the measurement domain, hence the sinogram.
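As a rough sketch of what adding noise in the sinogram domain can look like: CT noise is commonly modeled as Poisson photon-counting noise on the detected intensities, which are related to the sinogram values through the Beer-Lambert law. The array size, photon count, and variable names below are illustrative assumptions, not from the answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sinogram of line integrals (Radon-transform values);
# shape and values are purely illustrative.
sinogram = np.full((4, 8), 2.0)
I0 = 1.0e5  # assumed incident photon count per detector bin

expected_counts = I0 * np.exp(-sinogram)     # Beer-Lambert law
noisy_counts = rng.poisson(expected_counts)  # photon-counting (Poisson) noise
noisy_sinogram = -np.log(noisy_counts / I0)  # log-transform back to line integrals

# The perturbation is small here because I0 is large; lowering I0
# makes the simulated sinogram noisier, as with a lower-dose scan.
assert np.allclose(noisy_sinogram, sinogram, atol=0.05)
```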
{ "domain": "dsp.stackexchange", "id": 4281, "tags": "noise, reconstruction" }
Kolbe's electrolysis reaction for alkenes' production
Question: I came across a question in which the major product was asked for Kolbe's electrolysis of the potassium salt of 2,3-dimethylbutane-1,4-dioic acid. The answer was given as trans-(but-2-ene). I couldn't understand how the trans isomer forms as the specific major product from this reaction, following a free-radical mechanism. Would someone please help me with this? Thanks! Answer: Generally, during electrolysis of a dicarboxylic acid we get a few different products, but the main product is an unsaturated hydrocarbon. Based on ChemElectroChem, 2020, 7, 4874 (toward the bottom of the page), the reaction mechanism may be as shown in the drawing:
{ "domain": "chemistry.stackexchange", "id": 16758, "tags": "organic-chemistry, reaction-mechanism, electrolysis, hydrocarbons" }
Which is the strongest base among the given anilines?
Question: I am facing difficulty in the following problem. In my view N,N-dimethylaniline 2 should be the strongest base because it has two methyl groups which increase the electron density on N, and hence it can donate it more easily than in the other cases. However I am not sure about it. Answer: In my opinion, option 4 should be the answer. This is because too much crowding near the nitrogen atom will throw it out of the plane, as a result of which the lone pair of electrons of the nitrogen atom gets localized and is better available for accepting protons. Moreover, the presence of $\ce{-CH3}$ groups also increases basicity due to the +I effect.
{ "domain": "chemistry.stackexchange", "id": 7098, "tags": "organic-chemistry, acid-base, aromatic-compounds, amines" }
Which is the true Planck's constant: $h$ or $ħ$?
Question: My quantum mechanics book says that $ħ$ is the Planck's constant. The book uses ħ throughout and not one single use of $h$. My statistical mechanics book says that $h$ is the Planck's constant and doesn't use $ħ$ at all. Now I know that one of the constant is the other scaled by $2\pi$. But one of them is the Planck's constant and the other is not. Which one of them is true Planck's constant? Answer: In the usual terminology we have \begin{align} h &&&\text{Planck's constant} \\ \hbar &= \frac{h}{2\pi} &&\text{reduced Planck's constant} \end{align} The significance of $2\pi$ here is the ratio between a full circle and a radian, because the energy of a photon is $$ E = hf = \hbar \omega \;,$$ where $f$ is the cyclic frequency of the light and $\omega = 2 \pi f$ is its angular frequency. Both are common because—by long tradition—the frequency and wavelength of waves are generally measured with respect to a full cycle, but mathematical expressions involving waves may be written down more compactly in terms of angular (radian-based) quantities such as the angular frequency and the wavenumber ($k = 2\pi/\lambda$).
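A quick numerical sanity check of the two conventions (the exact SI value of $h$ is used; the frequency is an arbitrary illustrative choice):

```python
import math

h = 6.62607015e-34        # Planck's constant, J*s (exact by SI definition)
hbar = h / (2 * math.pi)  # reduced Planck's constant

f = 5.0e14                # cyclic frequency, Hz (illustrative value)
omega = 2 * math.pi * f   # angular frequency, rad/s

# Same photon energy either way: E = h*f = hbar*omega
assert math.isclose(h * f, hbar * omega)
```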
{ "domain": "physics.stackexchange", "id": 51629, "tags": "quantum-mechanics, terminology, physical-constants" }
What is the status of Mpemba effect investigations?
Question: There is this puzzling thing that is called the Mpemba effect: paradoxically, warm (35°C) water freezes faster than cold (5°C) water. As a physicist, I've been asked about it several times already. And I have no definite answer to that, apart from the standard: "there are many things that can influence it". So, does anyone know about the status or progress on that effect? Any recent reviews, publications or other references? Answer: One boring Monday morning in the lab a group of us did the experiment, and to our surprise we found that the hot water (in sealed containers) did freeze faster. On closer examination we discovered that the shelves in our freezer were covered in frost, like I imagine most freezers, and the hot water was melting the frost and creating a good thermal contact between the beaker of water and the shelf. That turned out to be why the hot water froze faster. When we thoroughly cleaned the freezer shelf the effect went away and the hot water took longer to freeze. I think the rumours about hot water freezing faster illustrate the dangers of improperly controlled experiments. As Ron mentions, evaporation could also be a factor and it would be easy for a home experimenter to get the wrong conclusion. Add to that the fact we'd secretly all be delighted if we could prove hot water really does freeze faster, and you can see how the rumour has spread.
{ "domain": "physics.stackexchange", "id": 23059, "tags": "thermodynamics, water, freezing, ice" }
Why is the beamwidth of this antenna shown like this?
Question: I am not yet super familiar with the way beamwidths are calculated, but from what I've read, it's based on the point where the strength of the signal reaches -3 dB. Here is an excerpt from CCNA Wireless 200-355 Official Cert Guide. I don't know if I'm reading the diagram incorrectly, but it seems clear to me that the lines from which the angle is calculated are crossing the signal at around -6 to -7 dB and definitely not -3 dB. Why is this beamwidth measured from below -5 dB? Should it be -6 dB because it's twice -3 dB, for reasons I'm not aware of? Answer: The labeling on the diagram is a little confusing. The outermost ring is 0 dBm, then each ring represents an amplitude decrease by 5 dBm. The dashed lines cross the solid lines between the 0 dBm and -5 dBm circles. Your way of reading it is very tempting, however --- threw me for a minute, too.
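To make the -3 dB convention concrete, here is a small sketch that extracts the half-power beamwidth from a sampled pattern. The Gaussian-shaped lobe, the function name, and all numbers are made up for illustration, not taken from the book's diagram.

```python
def beamwidth_3db(angles_deg, gain_db):
    """Angular width over which the pattern stays within 3 dB of its peak."""
    peak = max(gain_db)
    within = [a for a, g in zip(angles_deg, gain_db) if g >= peak - 3.0 - 1e-9]
    return max(within) - min(within)

# Illustrative main lobe: gain(theta) = -12 * (theta/30)^2 dB,
# which falls to -3 dB at theta = +/-15 degrees.
angles = [a * 0.1 for a in range(-600, 601)]  # -60..+60 degrees
gains = [-12.0 * (a / 30.0) ** 2 for a in angles]

bw = beamwidth_3db(angles, gains)
assert abs(bw - 30.0) < 0.5  # half-power beamwidth is ~30 degrees
```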
{ "domain": "physics.stackexchange", "id": 49506, "tags": "antennas" }
Do we actually need negative probabilities in quantum mechanics?
Question: I was reading this thread and I'm a bit confused. The answer says negative probabilities can account for destructive wave interference and the events cancelling out. But if events just cancel out, shouldn't that make the probability zero? Why would it be negative? Additionally, my (possibly incorrect) understanding has always been that we only get negative probabilities in QM in the context of "probability amplitudes", which is just a fancy name for the amplitudes of the wavefunctions. But it always seemed kind of weird to talk about probability amplitudes at all -- if we just square the normalized wavefunction and treat that as the PDF, like the Born rule says to do, everything is the same as in regular probability, right? Why even try to interpret the amplitudes themselves, as opposed to their squares, as probabilities? Am I mistaken in the above paragraph, and we can't always get a regular PDF from squaring the normalized wavefunction and we instead sometimes get quasiprobability distributions that allow negative probabilities? Or, are there situations where we need to consider the wavefunctions themselves, as opposed to first normalizing and squaring them, in order to correctly compute some quantity? I did also see this thread and the first answer basically says the math is just a lot simpler if we use probability amplitudes to explain interference patterns. I get that using actual wavefunctions to describe the very wave-like behavior of groups of particles, such as in the double-slit experiment, is simpler and hence preferable; I wouldn't suggest that we do all the math using the PDFs. But I don't see why that necessitates this idea of "probability amplitudes" and negative probability -- why can't we just call the wavefunctions wavefunctions and leave the talk of probability entirely to the actual PDF's we get by applying the Born rule? 
Or CAN we do that, and calling the wavefunction amplitudes probability amplitudes is just one way of thinking about that math? In other words, is this just a question of language, of whether or not to try to bring the label of probability into more of the theory by generalizing the definition of probability, or is there more to it? Answer: Not only do we not "need" negative probabilities in quantum mechanics, but in fact there are no negative probabilities in QM. All probabilities are real numbers between 0 and 1 by definition. The answer says negative probabilities can account for destructive wave interference and the events cancelling out. That is incorrect. Probability amplitudes can be negative and can experience destructive wave interference, but probabilities cannot. Probability amplitudes are not probabilities. My (possibly incorrect) understanding has always been that we only get negative probabilities in QM in the context of "probability amplitudes", which is just a fancy name for the amplitudes of the wavefunctions. That is very close to correct; it's correct to a first approximation. 99% of the time that people talk about "negative probabilities" in QM, they really mean complex probability amplitudes. In very advanced applications, they might instead be referring to the Wigner quasiprobability distribution, which is a different notion that is loosely analogous to "negative probabilities" (but only analogous - actual probabilities are still always nonnegative). Until you become much more comfortable with QM, it's probably best to totally forget about the Wigner quasiprobability distribution for now. But it always seemed kind of weird to talk about probability amplitudes at all -- if we just square the normalized wavefunction and treat that as the PDF, like the Bourne rule says to do, everything is the same as in regular probability, right? Right (except that it's spelled "Born", not "Bourne"). 
Why even try to interpret the amplitudes themselves, as opposed to their squares, as probabilities? We don't interpret them as probabilities. (At least, people who know what they are talking about don't.) They are closely related to probabilities, but they also have fundamental differences. Am I mistaken in the above paragraph, and we can't always get a regular PDF from squaring the normalized wavefunction and we instead sometimes get quasiprobability distributions that allow negative probabilities? We always get a regular PDF from squaring the normalized wavefunction. We never get quasiprobability distributions; those come from a very different procedure, which it's probably best to completely ignore until you get much more familiar with QM. Or, are there situations where we need to consider the wavefunctions themselves, as opposed to first normalizing and squaring them, in order to correctly compute some quantity? Yes, there definitely are such situations. This is a deep and complicated subject. The fast and loose answer is that it's tremendously convenient to use the phase structure of amplitudes for practical calculations. The somewhat more complete answer is that we need to use the complex amplitudes in order to explain both time evolution and the possibility of changing the measurement basis. The full and deep answer is that the Kochen–Specker theorem and Bell's theorem demonstrate that we can't reproduce the predictions of the standard formalism of QM using only regular PDFs, at least not without making some extremely bizarre assumptions. The complex structure of the amplitudes is fundamentally necessary for reproducing the predictions of QM; it is not just a calculational convenience. I mean this entirely respectfully, but these theorems are deep and complex, and you probably are not yet familiar enough with QM to fully understand them. But you can take a crack at it. 
You should pose any follow-up questions specific to these theorems in a separate Physics SE question. But I don't see why that necessitates this idea of "probability amplitudes" and negative probability -- why can't we just call the wavefunctions wavefunctions and leave the talk of probability entirely to the actual PDF's we get by applying the Bourne rule? "Negative probability" is just a misnomer. Most people who use that term are either just being sloppy and leaving off the word "amplitude", or they are just confused. "Probability amplitude" is correct terminology, but again, probability amplitudes are NOT probabilities. Their interpretation is very different. If you don't like using two similar terms for very different mathematical concepts, then for now it's fine to just stick to the term "wavefunction" instead (although there are a few minor differences between the terms "wavefunction" and "probability amplitude" around the edges). Or CAN we do that, and calling the wavefunction amplitudes probability amplitudes is just one way of thinking about that math? Correct. In other words, is this just a question of language, of whether or not to try to bring the label of probability into more of the theory by generalizing the definition of probability, or is there more to it? I don't quite understand this final question, but yes, it's basically just confusing terminology. Probability amplitudes are related to probabilities, but they are NOT probabilities. They are an intermediate tool that eventually gets converted into true probabilities. There are no negative probabilities in QM. It's fine to drop the p-word and just call the components of the wavefunction "amplitudes" if you want - people will still understand what you mean.
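The distinction this answer keeps hammering on shows up in three lines of arithmetic: amplitudes add (and can cancel), and only then does the Born rule square them into a probability. A sketch with two equal-magnitude, opposite-phase path amplitudes (the numbers are illustrative):

```python
import cmath
import math

# Two path amplitudes with equal magnitude and opposite phase
a1 = cmath.exp(1j * 0.0) / math.sqrt(2)
a2 = cmath.exp(1j * math.pi) / math.sqrt(2)

# Born rule: probability = |sum of amplitudes|^2
p_interfering = abs(a1 + a2) ** 2  # destructive interference -> 0

# Naively adding the individual probabilities gives a different answer
p_classical = abs(a1) ** 2 + abs(a2) ** 2  # -> 1

assert p_interfering < 1e-12
assert math.isclose(p_classical, 1.0)
```

The amplitudes here are negative (or complex) and cancel; the resulting probabilities are 0 and 1, both perfectly ordinary.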
{ "domain": "physics.stackexchange", "id": 94533, "tags": "quantum-mechanics, probability, mathematics, born-rule, quasiprobability-distributions" }
How to find the compounds that form the solid (chemical formulas)
Question: Suppose we have a solution that contains water and potassium sulfate (K2SO4), and we add to this solution magnesium chloride (MgCl2) and lead(II) nitrate (Pb(NO3)2). What would be the compounds that form the solid (chemical formulas)? Where is my mistake? 2 H2O + K2SO4 --> 2 KOH + H2SO4 MgCl2 + Pb(NO3)2 --> PbCl2 + Mg(NO3)2 I feel something here doesn't make sense. Answer: Your first reaction is wrong. You can't just form sulfuric acid from its salt (the sulfate). It will just form bisulfate to some extent, in an equilibrium reaction, not a one-way arrow. PbCl2 formation is expected; since it is poorly soluble, it will be precipitated. Look for the most insoluble combination in similar cases; it will be precipitated.
{ "domain": "chemistry.stackexchange", "id": 10102, "tags": "organic-chemistry, structural-formula" }
What are some efficient algorithms for determining if a quadratic multivariate polynomial has a solution?
Question: I know that in general, multivariate polynomial satisfiability is equivalent to 3-SAT; however, I'm wondering if there are any good techniques in the quadratic case, specifically if there is a polynomial time solution. I guess the more general question would be, are there any classes of multivariate polynomials for which the satisfiability problem is efficiently solvable? Answer: You can decide if a quadratic polynomial $p: \mathbb{R}^n \rightarrow \mathbb{R}$ has real roots with some linear algebra. As you note, the general case should be hard. Observe first that $p(x) \neq 0$ for all $x \in \mathbb{R}^n$ if either $p(x) > 0$ or $p(x) < 0$ for all $x$ (this follows by continuity). So it is enough to be able to decide if $p(x) > 0$ for all $x$. In general this is related to complexity-theoretic versions of Hilbert's 17th problem: a polynomial $p(x)$ is positive over the reals if and only if you can write $p$ as the sum of squares of rational functions and a positive constant $c$ (this is a theorem by Artin). Finding this decomposition or solving the decision problem is most likely hard in general, but the quadratic case is easy, because of the magic of the spectral theorem. For more information about the general case, look at Devanur,Lipton,Vishnoi, and Monique Laurent's survey. Let us write $p(x) = p_2(x) + p_1(x) + c$ where $p_2$ is homogeneous of degree 2, $p_1$ is linear, and $c$ is a constant. Let us define $q(x_0,x) = p_2(x) + x_0p_1(x) + cx_0^2$ to be the homogenization of $p$, where $x_0$ is an additional variable. Claim. $p(x) > 0 \Leftrightarrow \forall x_0 \neq 0: q(x_0,x) > 0$ The "if" direction is easy. In the non-trivial direction, assume $p(x) > 0$ for all $x$ and assume $x_0 \neq 0$: $$ q(x_0,x) = x_0^2q\left(1,\frac{x}{x_0}\right) = x_0^2 p\left(\frac{x}{x_0}\right) > 0. $$ QED Notice also that, because $q(x_0, x)$ is continuous, if $p(x) > 0$ for all $x$ then $q(x_0,x) \geq 0$ for all $(x_0,x)$ (including $x_0 = 0$). 
Since $q$ is homogeneous, we can write $q(x_0,x) = y^TQy$, where $Q$ is a symmetric matrix and $y = (x_0,x)$. By the above, if $p(x) > 0$ for all $x$, then $Q$ is positive semidefinite. Moreover, $q(x_0, x) > 0$ for all $x_0 \neq 0$ if and only if the kernel of $Q$ is a subset of the hyperplane $\{y = (0,x): x \in \mathbb{R}^n\}$. Both these conditions can be decided in polynomial time by computing the SVD of $Q$.
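The decision procedure above can be sketched in a few lines of NumPy (the function names and the numeric tolerance are my own; this is an illustrative floating-point sketch of the exact linear-algebra test, with $p(x) = x^\top A x + b^\top x + c$):

```python
import numpy as np

def homogenize(A, b, c):
    # Q with [x0, x]^T Q [x0, x] = x^T A x + x0 * b^T x + c * x0^2
    n = len(b)
    Q = np.zeros((n + 1, n + 1))
    Q[0, 0] = c
    Q[0, 1:] = Q[1:, 0] = np.asarray(b) / 2
    Q[1:, 1:] = A
    return Q

def is_strictly_positive(A, b, c, tol=1e-9):
    # p(x) > 0 for all real x  iff  Q is PSD and every kernel vector
    # of Q lies in the hyperplane {x0 = 0} (per the argument above).
    Q = homogenize(A, b, c)
    w, V = np.linalg.eigh(Q)         # eigenvalues in ascending order
    if w[0] < -tol:                  # Q not PSD -> p takes negative values
        return False
    kernel = V[:, np.abs(w) <= tol]  # (numerical) kernel of Q
    return bool(np.all(np.abs(kernel[0, :]) <= tol))

def has_real_root(A, b, c):
    # p vanishes somewhere unless it is strictly positive or strictly negative
    A, b = np.asarray(A, float), np.asarray(b, float)
    return not (is_strictly_positive(A, b, c)
                or is_strictly_positive(-A, -b, -c))

# x^2 - 1 has real roots; x^2 + 1 does not; (x-1)^2 touches zero at x = 1
assert has_real_root([[1.0]], [0.0], -1.0)
assert not has_real_root([[1.0]], [0.0], 1.0)
assert has_real_root([[1.0]], [-2.0], 1.0)
```

The symmetric eigendecomposition plays the role of the SVD mentioned at the end: it both certifies positive semidefiniteness and exposes the kernel.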
{ "domain": "cstheory.stackexchange", "id": 2410, "tags": "cc.complexity-theory, polynomials, polynomial-time, quadratic" }
$SU(N)$ contribution to the gluon propagator
Question: This is a simple question that arose during the evaluation of the gluon propagator in the Landau gauge for $SU(N)$ Yang-Mills theory. I have to evaluate the integral $$ \int d^4x\, e^{ipx}\langle T^cA^c_\mu(x),T^dA^d_\nu(0)\rangle, $$ where $T^c$ are the generators of the group, which we know has the form $$ G\left(\eta_{\mu\nu}-\frac{p_\mu p_\nu}{p^2}\right)\Delta(x,0) $$ where $G$ is the contribution due to the $SU(N)$ group in the fundamental representation. One assumes that the propagator $\Delta(x,0)$ is given (maybe). Now, my guess for $G$ is $$ G=\frac{N^2-1}{2N}. $$ Is this correct? Answer: It kind of depends on what you mean by $T^c$. Let these matrices live in a representation $R$ of the gauge algebra. Due to $\mathrm{SU}(N)$ invariance, we have $$ \langle A^a A^b\rangle\propto \delta^{ab} $$ and therefore the colour structure $G$ is given by $$ G=T^aT^a\equiv C(R)\times \text{identity matrix} $$ as per Casimir. If $R$ is the fundamental representation, we have $C(F)=\frac{N^2-1}{2N}$. If $R$ is the adjoint representation, $C(A)=N$.
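As a quick numerical check of $C(F)=\frac{N^2-1}{2N}$ in the simplest case $N=2$, where the fundamental generators are $T^a=\sigma^a/2$ (an illustrative sketch using NumPy):

```python
import numpy as np

# Pauli matrices; the fundamental SU(2) generators are T^a = sigma^a / 2
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
T = [s / 2 for s in sigma]

casimir = sum(t @ t for t in T)  # T^a T^a summed over a
N = 2
expected = (N**2 - 1) / (2 * N)  # C(F) = 3/4 for SU(2)

# The quadratic Casimir is proportional to the identity, as it must be
assert np.allclose(casimir, expected * np.eye(2))
```

The same check works for SU(3) with the Gell-Mann matrices, giving $C(F)=4/3$.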
{ "domain": "physics.stackexchange", "id": 45785, "tags": "group-theory, yang-mills, propagator" }
How do matter and antimatter interact gravitationally?
Question: Me and my brother were watching television tonight, and he asked me a question that was somewhat along the lines of this: What would happen if the Earth was made of matter, and the moon was made of antimatter? I tried to answer this, saying a few things about how atoms attract, and such, but I then realized I didn't actually know the answer to this. I'm now curious as to how matter and antimatter interact gravitationally. Do they attract? Repel? I'm specifically wondering what would happen on a planetary scale, but the nitty-gritty particle interactions would be nice too. Answer: What would happen if the Earth was made of matter, and the moon was made of antimatter To start with, our elementary particle physics standard model has antiparticles which have the same mass as particles and all the quantum numbers, charge, baryon number ..., the opposite. What happens when a particle meets an antiparticle is that the quantum numbers annihilate (add up to zero) and the energy of the two masses is freed to become other particles/antiparticles, from photons to electron-positron pairs etc, as long as the total quantum numbers add to zero. This model is validated by innumerable data. As an antiparticle has the same type of mass, i.e. positive, the hypothesis is that antiparticles behave under the force of gravity the same as particles. When antimatter (like antihydrogen, which has been created in the lab) meets matter (hydrogen) the same thing happens. Annihilation of quantum numbers and release of energy. We know that the moon is composed of matter from the gross experiment of having landed on it. No explosion resulted. Any dust, meteorite small or large, of matter hitting the moon would create explosions and thus we know it is not made of antimatter. This argument extends up to galaxies, because we would be detecting the specific annihilation radiations if an antigalaxy or even a cluster of antigalaxies existed. 
Now the same gravitational interaction is a hypothesis and is being tested in the AEGIS experiment at CERN: A system of gratings in the deflectometer splits the antihydrogen beam into parallel rays, forming a periodic pattern. From this pattern, the physicists can measure how much the antihydrogen beam drops during its horizontal flight. Combining this shift with the time each atom takes to fly and fall, the AEGIS team can then determine the strength of the gravitational force between the Earth and the antihydrogen atoms. The AEGIS experiment will represent the first direct measurement of a gravitational effect on an antimatter system. So current knowledge tells us that an antimatter moon would behave the same as a matter moon as far as gravity goes. Maybe AEGIS will find something different. One should add that an antimatter moon in a matter planetary system, or where there exists matter and antimatter would not last long (in cosmological times) due to the explosions from matter meteorites falling on antimatter.
{ "domain": "physics.stackexchange", "id": 25453, "tags": "gravity, antimatter, matter" }
Implementation of Kosaraju's algorithm for strongly connected components
Question: To learn and practice coding in C++, I wrote an implementation of Kosaraju's two-pass algorithm for computing the strongly connected components in a directed graph, using depth-first search. This was my first time touching any C++ code in a while (I mainly program in Python or C#). As such, any critique or advice on style, layout, readability, maintainability, and best practice would be greatly appreciated. And, although performance wasn't a primary goal, I am highly interested in any optimizations that could be made (currently it can process about 800 thousand nodes with 5 million edges in around 10 seconds). #include <iostream> #include <fstream> #include <vector> #include <map> #include <list> using std::vector; using std::map; using std::list; using std::ifstream; using std::cout; using std::endl; // Constants //------------------------------- const char FILENAME[] = "SCC.txt"; // Prototypes //------------------------------- long get_node_count(const char filename[]); vector< vector<long> > parse_file(const char filename[]); map< long, vector<long> > compute_scc(vector< vector<long> > &graph); vector< vector<long> > reverse_graph(const vector< vector<long> > &graph); void dfs_loop(const vector< vector<long> > &graph, vector<long> &finishTime, vector<long> &leader); long dfs(const vector< vector<long> > &graph, long nodeIndex, vector<bool> &expanded, vector<long> &finishTime, long t, vector<long> &leader, long s); list<unsigned long> get_largest_components(const map< long, vector<long> > scc, long size); /** * Main */ int main() { // Get the sequential graph representation from the file vector< vector<long> > graph = parse_file(FILENAME); // Compute the strongly-connected components map< long, vector<long> > scc = compute_scc(graph); // Compute the largest 5 components and print them out list<unsigned long> largestComponents = get_largest_components(scc, 5); list<unsigned long>::iterator it; for (it = largestComponents.begin(); it != 
largestComponents.end(); it++) { cout << *it << ' '; } cout << endl; return 0; } /** * Parse an input file as a graph, and return the graph. */ vector< vector<long> > parse_file(const char filename[]) { // Get the node count and prepare the graph long nodeCount = get_node_count(filename); vector< vector<long> > graph(nodeCount); // Open file and extract the data ifstream graphFile(filename); long nodeIndex; long outIndex; while (graphFile) { graphFile >> nodeIndex; graphFile >> outIndex; // Add the new outgoing edge to the node graph[nodeIndex - 1].push_back(outIndex - 1); } return graph; } /** * Get the count of nodes from a graph file representation */ long get_node_count(const char filename[]) { // Open file and keep track of how many times the value changes ifstream graphFile(filename); long maxNodeIndex = 0; long nodeIndex = 0; while (graphFile) { // Check the node index graphFile >> nodeIndex; if (nodeIndex > maxNodeIndex) { maxNodeIndex = nodeIndex; } // Check the outgoing edge graphFile >> nodeIndex; if (nodeIndex > maxNodeIndex) { maxNodeIndex = nodeIndex; } } return maxNodeIndex; } /** * Compute all of the strongly-connected components of a graph * using depth-first search, Kosaraju's 2-pass method */ map< long, vector<long> > compute_scc(vector< vector<long> > &graph) { // Create finishing time and leader vectors to record the data // from the search vector<long> finishTime(graph.size(), 0); vector<long> leader(graph.size(), 0); // Initialize the finish time initially to be the numbers of the graph vector<long>::iterator it; long index = 0; for (it = finishTime.begin(); it != finishTime.end(); it++) { *it = index; index++; } // Reverse the graph, to compute the 'magic' finishing times vector< vector<long> > reversed = reverse_graph(graph); dfs_loop(reversed, finishTime, leader); // Compute the SCC leaders using the finishing times dfs_loop(graph, finishTime, leader); // Distribute nodes to SCCs map< long, vector<long> > scc; vector<long>::iterator lit; 
for (lit = leader.begin(); lit != leader.end(); lit++) { long nodeIndex = lit - leader.begin(); // Append node to SCC scc[*lit].push_back(nodeIndex); } return scc; } /** * Reverse a directed graph by looping through each node/edge pair * and recording the reverse */ vector< vector<long> > reverse_graph(const vector< vector<long> > &graph) { // Create new graph vector< vector<long> > reversed(graph.size()); // Loop through all elements and fill new graph with reversed endpoints vector< vector<long> >::const_iterator it; for (it = graph.begin(); it != graph.end(); it++) { long nodeIndex = it - graph.begin(); // Loop through all outgoing edges, and reverse them in new graph vector<long>::const_iterator eit; for (eit = graph[nodeIndex].begin(); eit != graph[nodeIndex].end(); eit++) { reversed[*eit].push_back(nodeIndex); } } return reversed; } /** * Compute a depth-first search through all nodes of a graph */ void dfs_loop(const vector< vector<long> > &graph, vector<long> &finishTime, vector<long> &leader) { // Create expanded tracker and copied finishing time tracker vector<bool> expanded(graph.size(), 0); vector<long> loopFinishTime = finishTime; long t = 0; vector<long>::reverse_iterator it; // Outer loop through all nodes in order to cover disconnected // sections of the graph for (it = loopFinishTime.rbegin(); it != loopFinishTime.rend(); it++) { // Compute a depth-first search if the node hasn't // been expanded yet if (!expanded[*it]) { t = dfs(graph, *it, expanded, finishTime, t, leader, *it); } } } /** * Search through a directed graph recursively, beginning at node 'nodeIndex', * until no more node can be searched, recording the finishing times and the * leaders */ long dfs( const vector< vector<long> > &graph, long nodeIndex, vector<bool> &expanded, vector<long> &finishTime, long t, vector<long> &leader, long s ) { // Mark the current node as explored expanded[nodeIndex] = true; // Set the leader to the given leader leader[nodeIndex] = s; // Loop through 
outgoing edges vector<long>::const_iterator it; for (it = graph[nodeIndex].begin(); it != graph[nodeIndex].end(); it++) { // Recursively call DFS if not explored if (!expanded[*it]) { t = dfs(graph, *it, expanded, finishTime, t, leader, s); } } // Update the finishing time finishTime[t] = nodeIndex; t++; return t; } /** * Computes the largest 'n' of a strongly-connected component list * and return them */ list<unsigned long> get_largest_components(const map< long, vector<long> > scc, long size) { // Create vector to hold the largest components list<unsigned long> largest(size, 0); // Iterate through map and keep track of largest components map< long, vector<long> >::const_iterator it; for (it = scc.begin(); it != scc.end(); it++) { // Search through the current largest list to see if there exists // an SCC with less elements than the current one list<unsigned long>::iterator lit; for (lit = largest.begin(); lit != largest.end(); lit++) { // Compare size and change largest if needed, inserting // the new one at the proper position, and popping off the old if (*lit < it->second.size()) { largest.insert(lit, it->second.size()); largest.pop_back(); break; } } } return largest; } An example input file (SCC.txt) is composed of edges represented by begin-end node numbers. It could look like this: 1 2 2 3 3 1 4 3 4 5 5 6 6 4 7 6 7 8 8 9 9 10 10 7 Answer: Fair enough. Lazy but OK I suppose. Personally when using using like this I bind it to the tightest scope possible. using std::vector; using std::map; using std::list; using std::ifstream; using std::cout; using std::endl; The idea of making it std and not standard was so that it would not be too obnoxious when going std::cout in the code. This is very obnoxious to read. 
A bit of work formatting these lines to make it easy to read would have gone a long way: long get_node_count(const char filename[]); vector< vector<long> > parse_file(const char filename[]); map< long, vector<long> > compute_scc(vector< vector<long> > &graph); vector< vector<long> > reverse_graph(const vector< vector<long> > &graph); void dfs_loop(const vector< vector<long> > &graph, vector<long> &finishTime, vector<long> &leader); long dfs(const vector< vector<long> > &graph, long nodeIndex, vector<bool> &expanded, vector<long> &finishTime, long t, vector<long> &leader, long s); list<unsigned long> get_largest_components(const map< long, vector<long> > scc, long size); Nothing wrong with this: // Get the sequential graph representation from the file vector< vector<long> > graph = parse_file(FILENAME); But don't you think it would have been more intuitive to read as: MyGraph graph(FILENAME); A tiny bit of work wrapping your data into a class goes a long way to make the code more readable. Style-wise, you are using the classes available in C++, but really your style is more C than C++. Learn to use the standard algorithms: list<unsigned long>::iterator it; for (it = largestComponents.begin(); it != largestComponents.end(); it++) { cout << *it << ' '; } cout << endl; // Easier to write as std::copy(largestComponents.begin(), largestComponents.end(), std::ostream_iterator<unsigned long>(std::cout, " ")); std::cout << std::endl; Main is special (you don't actually need to return anything; if there is no return in main() it is equivalent to return 0). return 0; If your code never returns anything but 0 then I would not return anything (let the language do its stuff). Do this to distinguish it from code that can return an error code. I hope you profiled to make sure it was worth reading the file twice. long nodeCount = get_node_count(filename); vector< vector<long> > graph(nodeCount); The vector automatically re-sizes as required. 
It uses a heuristic to prevent it re-sizing too many times. If you are using a C++11 compiler then the cost of resizing will be minimized, as it will be using move rather than copy to reduce the cost. This is almost always wrong: while (graphFile) { The last successful read will read up to but not past the end of file, leaving the stream in a good state even though there is no data left to read from the stream. Thus the loop will be entered one extra time, and the read inside it will fail. But inside your loop you don't check for failure. As a result you push_back() one more node than exists in the file (though the last node is a copy of the previous node). The correct way to write the loop is: while(graphFile >> nodeIndex >> outIndex) { // Read of both values was successful // Add it to the graph. graph[nodeIndex - 1].push_back(outIndex - 1); } Learn the standard functions: if (nodeIndex > maxNodeIndex) { maxNodeIndex = nodeIndex; } // Easier to read as: maxNodeIndex = std::max(maxNodeIndex, nodeIndex); There is no point in calculating nodeIndex. Either use the iterators, or use the index into the array; the combination of both techniques is untidy. for (it = graph.begin(); it != graph.end(); it++) { long nodeIndex = it - graph.begin(); // Loop through all outgoing edges, and reverse them in new graph vector<long>::const_iterator eit; for (eit = graph[nodeIndex].begin(); eit != graph[nodeIndex].end(); eit++) { reversed[*eit].push_back(nodeIndex); }
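The two passes used here (DFS for finishing times on one orientation of the graph, then DFS in decreasing finishing order on the other) are Kosaraju's algorithm. A compact, language-neutral sketch in Python — my own illustration, not code from the original post — run on the edge list from the question's SCC.txt example:

```python
from collections import defaultdict

def dfs_finish_order(graph, nodes):
    """Iterative DFS over all nodes, returning them in finishing order."""
    visited, order = set(), []
    for start in nodes:
        if start in visited:
            continue
        visited.add(start)
        stack = [(start, iter(graph[start]))]
        while stack:
            node, neighbours = stack[-1]
            advanced = False
            for nxt in neighbours:  # resumes where the iterator left off
                if nxt not in visited:
                    visited.add(nxt)
                    stack.append((nxt, iter(graph[nxt])))
                    advanced = True
                    break
            if not advanced:        # all edges explored: node is finished
                order.append(node)
                stack.pop()
    return order

def scc_sizes(edges):
    """Kosaraju's algorithm: strongly connected component sizes, largest first."""
    graph, rev, nodes = defaultdict(list), defaultdict(list), set()
    for u, v in edges:
        graph[u].append(v)
        rev[v].append(u)
        nodes.update((u, v))
    order = dfs_finish_order(rev, nodes)   # pass 1: reversed graph
    visited, sizes = set(), []
    for leader in reversed(order):         # pass 2: forward graph
        if leader in visited:
            continue
        visited.add(leader)
        stack, count = [leader], 0
        while stack:
            node = stack.pop()
            count += 1
            for nxt in graph[node]:
                if nxt not in visited:
                    visited.add(nxt)
                    stack.append(nxt)
        sizes.append(count)                # one DFS tree == one SCC
    return sorted(sizes, reverse=True)

EDGES = [(1, 2), (2, 3), (3, 1), (4, 3), (4, 5), (5, 6), (6, 4),
         (7, 6), (7, 8), (8, 9), (9, 10), (10, 7)]
print(scc_sizes(EDGES))  # [4, 3, 3]
```

The iterative DFS avoids the recursion-depth limits that the recursive C++ version above can hit on large inputs.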
{ "domain": "codereview.stackexchange", "id": 12353, "tags": "c++, algorithm, graph, search" }
Could dark matter possibly be anti-matter?
Question: Considering the broken symmetry after the big bang - what I understand as there being a huge surplus of matter and a lesser presence of anti matter - is it possible that dark matter could be anti-matter? And thus explain the missing anti-matter we might expect from symmetry? Is there data or any reliable theories that exclude this possibility? So far our experience with anti-matter is small scale, right? We really don't have experience with anti-matter in bulk. Or are our theories reliable enough to predict bulk properties? Answer: No, we know enough of the "bulk properties" of antimatter to rule this out. Antimatter interacts with the electromagnetic field in exactly the same way as regular matter, just with the opposite charge. Therefore, antimatter should be detectable using most of the techniques we use to detect regular matter in astronomy. This works even if the antimatter is a big, cold, diffuse gas of antihydrogen, because we can detect diffuse hydrogen through its 21 cm emission line. (Not to mention the gamma rays we should see from antimatter annihilating with neighboring matter.) It is precisely because we have not seen these signals that we know dark matter is not antimatter.
{ "domain": "physics.stackexchange", "id": 31964, "tags": "solid-state-physics, symmetry, dark-matter, antimatter" }
joint_states messages different
Question: Hello, I have a custom 6DOF arm that I'm trying to interface with moveit. I have created the URDF and the model, and I have loaded it into rviz. I have made a lot of progress with it I think. I'm using ros indigo on ubuntu 14.04 I subscribe to the joint_states topic on my arduino. Using the arduino code I am able to individually control each joint using the joint state sliders when I run the display.launch I've completed the moveit configuration and it loads with no problems in rviz, and I can plan movements, etc. however when my arduino subscribes to the joint_states topic at this point, I get errors in rosserial saying [ERROR] [WallTime: 1479872259.522084] Message from ROS network dropped: message larger than buffer. What is different with these messages than the ones generated from display.launch? I performed some investigation and found something interesting: This is the message from when I use moveit demo.launch header: seq: 954 stamp: secs: 1479873407 nsecs: 957350015 frame_id: '' name: ['shoulder_base', 'upper_arm_shoulder', 'elbow_upper_arm', 'forearm_elbow', 'wrist_forearm', 'gripper_wrist', 'left_jaw_gripper', 'right_jaw_gripper'] position: [4.435017290525138e-05, 9.012486469545027e-05, 3.372531827300197e-05, 5.395870120266208e-05, 4.7403243313592324e-05, 0.8750411991072651, 0.0, 0.0] velocity: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] effort: [] and this is the message when I use display.launch header: seq: 228 stamp: secs: 1479873728 nsecs: 257888078 frame_id: '' name: ['shoulder_base', 'upper_arm_shoulder', 'elbow_upper_arm', 'forearm_elbow', 'wrist_forearm', 'gripper_wrist', 'left_jaw_gripper', 'right_jaw_gripper'] position: [0.0, 0.0, 0.0, 0.0, 0.0, 1.2491756999999999, 0.0, 0.0] velocity: [] effort: [] why the differences, and more importantly, can I make the position data the same data types/precision? 
Originally posted by karlhm76 on ROS Answers with karma: 43 on 2016-11-22 Post score: 0 Original comments Comment by gvdhoorn on 2016-11-23: Not an answer, but: JointState msgs are really typically used for reporting joint states, not for controlling them. Look into JointTrajectory and the related action servers. Ideally: write a hardware_interface for your controller and use ros_control. The JointState msgs .. Comment by gvdhoorn on 2016-11-23: .. that you are seeing are probably those published by one of the fake controllers in MoveIt, which are just there to make it possible to visualise things without needing real hardware. Comment by karlhm76 on 2016-11-23: Thanks for the info. Yes I've been reading about this. I was simply trying to get the thing to work as easily as possible. I was reading that the fake controller published joint states and i already had something that worked with them, or so i thought. Comment by karlhm76 on 2016-11-23: I've been having a few problems with getting started with the trajectory action server and catkin and netbeans. Not relevant here though. Even though I've made good progress with the setup I'm still fairly new at this. Answer: I've decided to look into writing an action server for FollowJointTrajectory. See how it goes I guess. Thanks again. Originally posted by karlhm76 with karma: 43 on 2016-11-24 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2016-11-24: I'd really encourage you to take a look at writing a hardware_interface for your Arduino setup. Take a look at ros_control_boilerplate by Dave Coleman. If you have communication with your Arduino setup, it shouldn't be too difficult. Comment by gvdhoorn on 2016-11-24: After you have the hardware interface implemented, you can re-use the ros_control action server (the joint_trajectory_controller) without writing any additional code. Comment by karlhm76 on 2016-11-25: This is very useful information. Thanks so much
{ "domain": "robotics.stackexchange", "id": 26309, "tags": "ros, moveit, rosserial, joint-states, buffer" }
Where is angular momentum supposed to be conserved when dealing with questions on rotation
Question: What I know Angular momentum is conserved about a point where there is no external torque on the system When solving questions based on pure rolling (fairly simple concept), if for example, we have a ball that is slipping and not rolling on a rough surface, we are asked to find the velocity when pure rolling starts. Out of the various ways to solve it, one that has always confused me was the conservation of angular momentum method. Friction acts on the point of contact, so angular momentum can be conserved about the point of contact. So the general formula is $$mv_0r=I\omega +mvr $$ Where, $\omega=\frac vr$ But the moment of inertia $I$, is taken about the center of mass, and not about the point of contact, to get the right answer. Now I could have lived with that, perhaps angular momentum always has the moment of inertia taken about the COM. But here is another question: A uniform rod AB of mass m and length $2a$ is falling freely without any rotation under gravity with AB horizontal. Suddenly, the end $A$ is fixed when the speed of the rod is $v$. Find the angular speed with which the rod begins to rotate. Conserving angular momentum about A, $$L_i=L_f$$ $$mva=I\omega$$ $$mva =\frac{m(2a)^2}{3} \omega $$ $$\omega =\frac{3v}{4a}$$ In this case, the moment of inertia is calculated about the point of contact A, and not the center of mass. I just want to know when we calculate the MOI about the COM, and when we calculate it about the point of contact. Such questions always confuse me and I want to confirm it once and for all. Answer: In your general formula: $$mv_0r=I\omega +mvr$$ The RHS is equal to the angular momentum about the point in contact with the ground, where the friction acts. But in order to calculate that, you need the moment of inertia about that point, which would be done through integration.
An easier way to get the angular momentum about any general point, $A$, is to calculate the angular momentum about the centre of mass of the system, which is $I\omega$, and add to it the angular momentum of the centre of mass about the point $A$, which is $mvr$. This is what the first example in your question does. In the example of the rod, you have the moment of inertia of the rod known exactly about its end point, so you do not need to calculate it about the centre of mass first and then get the angular momentum about the end point.
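Both worked examples can be checked numerically. The sketch below is my own illustration (units chosen so that m = r = a = 1), not part of the original answer:

```python
from fractions import Fraction as F

def rolling_speed(v0):
    """Slipping ball: m*v0*r = I_cm*omega + m*v*r, with I_cm = (2/5)*m*r**2
    and omega = v/r. In units m = r = 1 this gives v0 = (2/5 + 1)*v."""
    return F(v0) / (F(2, 5) + 1)

def rod_omega(v):
    """Falling rod pinned at A: m*v*a = I_A*omega, with I_A = m*(2a)**2/3.
    In units m = a = 1, I_A = 4/3."""
    return F(v) / F(4, 3)

print(rolling_speed(1))  # 5/7 of the initial speed
print(rod_omega(1))      # 3/4, i.e. omega = 3v/(4a)
```

Exact rational arithmetic makes it obvious that the two textbook results v = (5/7)v₀ and ω = 3v/(4a) drop straight out of the conservation equations.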
{ "domain": "physics.stackexchange", "id": 68715, "tags": "classical-mechanics, angular-momentum, rotational-dynamics" }
Can you see on Miller's planet?
Question: I have read this question: Miller's world would be fried by a strong flux of extreme ultraviolet (EUV) radiation. The cosmic microwave background (CMB) would be blueshifted by gravitational time dilation There are two effects acting on a body in a deep gravity well that increase this radiation: Blueshift: Time going 61,000$\times$ slower would increase the observed frequency of the photons by the same factor. The energy of photons is proportionally increased. ($E=hf$) Wouldn't Miller's planet be fried by blueshifted radiation? There is a very nice explanation by @profrob, and I must ask a follow-up question because there are two points where I am asking for some more details on how specifically gravitational time dilation would cause blueshift: Is there an effect of gravitational time dilation that would increase all photons' frequencies just because they (and the observer) are in a deep gravitational field? Can someone please explain that effect and how that works? And is it correct to say that this affects all photons, so not just the CMB? All along I thought gravitational time dilation was an effect that is only realizable when compared to another frame, far away from the black hole. If you are on Miller's planet, your clock runs normally for you. It is only when you compare it to another clock that you realize that your clock runs much slower on Miller's planet, relative to the other clock far away from the black hole. Isn't this the same way (because of relativity) with a photon's energy? A photon's energy in GR is observer dependent. For an observer on Miller's planet, the photons would be normal (i.e. visible would still be visible); it is just when you would compare them to another photon far away from the black hole that you would see the energy difference?
Now if you go with the calculations to that question, and all photons are blueshifted just because they are in a strong gravitational field (even for a local observer on Miller), then this means not just CMB, but all photons are blueshifted, including visible light photons (which would be blueshifted to non-visible range), but that means it is impossible to see on this planet. Question: Can you see on Miller's planet? Answer: I think the answer is yes, but the illumination levels would be similar to a heavily overcast day on Earth. The CMB is a blackbody continuous spectrum, so even though the peak is blueshifted into the ultraviolet, there are still visible photons hitting the planet. Indeed, a higher temperature blackbody is more intense at all wavelengths than a cooler blackbody of the same emitting area. I use italics because the blueshifted CMB comes from a tiny hot spot in the sky with a much smaller angular extent than the Sun for instance. I ran the numbers using this calculator. According to my answer to the linked question (and recall, these are numbers from a GR ray-tracing simulation published by others), the radiation incident upon Miller's planet is in the form of blackbody radiation at about 700,000 K from an intensely bright spot on the sky, which bathes (one side of) the planet with 400 kW/m$^2$ of mainly EUV and UV radiation. If you calculate what fraction of this blackbody flux falls in the visible band between 400nm and 700nm, it is only about 2.2 W/m$^2$. This is several hundred times fainter than direct illumination by direct sunlight. This is similar to the illumination one might experience on a heavily overcast day. Of course the spectrum is very, very different and heavily weighted towards the UV.
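That 2.2 W/m² figure is straightforward to reproduce: integrate the Planck function over 400–700 nm, divide by σT⁴, and scale the 400 kW/m² total. A rough numerical check in Python (my own sketch, simple trapezoidal rule, not from the original answer):

```python
import math

H = 6.626e-34     # Planck constant, J s
C = 2.998e8       # speed of light, m/s
KB = 1.381e-23    # Boltzmann constant, J/K
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck(lam, T):
    """Spectral radiance B_lambda(T) in W m^-3 sr^-1."""
    return 2 * H * C**2 / lam**5 / math.expm1(H * C / (lam * KB * T))

def band_fraction(T, lam1, lam2, steps=2000):
    """Fraction of the total blackbody flux sigma*T^4 emitted between
    lam1 and lam2 (trapezoidal integration of pi * B_lambda)."""
    dl = (lam2 - lam1) / steps
    total = 0.0
    for i in range(steps + 1):
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * planck(lam1 + i * dl, T)
    return math.pi * total * dl / (SIGMA * T**4)

# 700,000 K blackbody, visible band 400-700 nm, scaled to 400 kW/m^2:
visible_flux = 400e3 * band_fraction(7e5, 400e-9, 700e-9)
print(visible_flux)  # about 2.2 W/m^2
```

At 700,000 K the visible band sits deep in the Rayleigh-Jeans tail, which is why only a few parts per million of the total flux land there.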
{ "domain": "physics.stackexchange", "id": 88566, "tags": "quantum-mechanics, general-relativity, gravity, black-holes" }
Converting a number to the text representation
Question: Following along with some previous questions: The @dbasnett original here: Number to Words @nhgrif's here: Int extension for translating integer to plain English I wanted to answer the original question with a different algorithm, but was not able to get the VB.net code to work. I see @nhgrif had the same idea as me, and proposed a Swift solution. This is the algorithm I would use, but in Java. Like other questions, I am looking for possible improvements, or suggestions for other aspects that may make this more robust, and more usable. public class IntToText { private static final String[] SCALES = {"", "thousand", "million", "billion", "trillion", "quadrillion", "quintillion", "sextillion"}; private static final String[] SUBTWENTY = {"zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"}; private static final String[] DECADES = {"zero", "ten", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"}; /** * Convert any value from 0 to 999 inclusive, to a string. * @param value The value to convert. * @param and whether to use the word 'and' in the output. * @return a String representation of the value. */ private static final String tripleAsText(int value, boolean and) { if (value < 0 || value >= 1000) { throw new IllegalArgumentException("Illegal triple-value " + value); } if (value < SUBTWENTY.length) { return SUBTWENTY[value]; } int subhun = value % 100; int hun = value / 100; StringBuilder sb = new StringBuilder(50); if (hun > 0) { sb.append(SUBTWENTY[hun]).append(" hundred"); } if (subhun > 0) { if (hun > 0) { sb.append(and ? 
" and " : " "); } if (subhun < SUBTWENTY.length) { sb.append(SUBTWENTY[subhun]); } else { int tens = subhun / 10; int units = subhun % 10; if (tens > 0) { sb.append(DECADES[tens]); } if (units > 0) { sb.append(" ").append(SUBTWENTY[units]); } } } return sb.toString(); } /** * Convert any long input value to a text representation * @param value The value to convert * @param useand true if you want to use the word 'and' in the text (eleven thousand and thirteen) * @param negname * @return */ public static final String asText(long value, boolean useand, String negname) { if (value == 0) { return SUBTWENTY[0]; } // break the value down in to sets of three digits (thousands). int[] thous = new int[SCALES.length]; boolean neg = value < 0; // do not make negative numbers positive, to handle Long.MIN_VALUE int scale = 0; while (value != 0) { // use abs to convert thousand-groups to positive, if needed. thous[scale] = Math.abs((int)(value % 1000)); value /= 1000; scale++; } StringBuilder sb = new StringBuilder(scale * 40); if (neg) { sb.append(negname).append(" "); } boolean first = true; while (--scale > 0) { if (!first) { sb.append(", "); } first = false; if (thous[scale] > 0) { sb.append(tripleAsText(thous[scale], useand)).append(" ").append(SCALES[scale]); } } if (!first && useand && thous[0] != 0) { sb.append(" and "); } sb.append(tripleAsText(thous[0], useand)); return sb.toString(); } public static void main(String[] args) { System.out.printf("%15d %s%n", Integer.MIN_VALUE, asText(Integer.MIN_VALUE, true, "minus")); System.out.printf("%15d %s%n", Integer.MAX_VALUE, asText(Integer.MAX_VALUE, true, "minus")); System.out.printf("%15d %s%n", 0, asText(0, true, "minus")); System.out.printf("%15d %s%n", Long.MIN_VALUE, asText(Long.MIN_VALUE, true, "minus")); System.out.printf("%15d %s%n", Long.MAX_VALUE, asText(Long.MAX_VALUE, true, "minus")); } } The program produces the output (added 'x' to avoid bulletted formatting problem): x x -2147483648 minus two billion, one 
hundred and forty seven million, four hundred and eighty three thousand and six hundred and forty eight x 2147483647 two billion, one hundred and forty seven million, four hundred and eighty three thousand and six hundred and forty seven x 0 zero x -9223372036854775808 minus nine quintillion, two hundred and twenty three quadrillion, three hundred and seventy two trillion, thirty six billion, eight hundred and fifty four million, seven hundred and seventy five thousand and eight hundred and eight x 9223372036854775807 nine quintillion, two hundred and twenty three quadrillion, three hundred and seventy two trillion, thirty six billion, eight hundred and fifty four million, seven hundred and seventy five thousand and eight hundred and seven Answer: I would define DECADES[0] as "" instead of "zero" so that the if (tens>0) can be removed. Notice that DECADES[0] was never used and its value "zero" was a repetition with respect to SUBTWENTY[0]. Also there is a repetition in the creation of the "subhundred" part of the number. In fact you can safely remove this part of code: if (value < SUBTWENTY.length) { return SUBTWENTY[value]; } which is anyway correctly handled by the code following. Looking carefully you see that there is a problem with the handling of "zero". Try your code with 1000 to see that it doesn't work. Notice however that the tripleAsText should return the emptystring when value is 0, because "zero" should be considered as a special case and trapped in the main function (notice that "zero" is never used in the spelling of numbers apart from zero itself). 
So your function could be simplified as: private static final String[] SCALES = {"", "thousand", "million", "billion", "trillion", "quadrillion", "quintillion", "sextillion"}; private static final String[] SUBTWENTY = {"", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"}; private static final String[] DECADES = {"", "ten", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"}; /** * Convert any value from 0 to 999 inclusive, to a string. * @param value The value to convert. * @param and whether to use the word 'and' in the output. * @return a String representation of the value. */ private static final String tripleAsText(int value, boolean and) { int subhun = value % 100; int hun = value / 100; StringBuilder sb = new StringBuilder(50); if (hun > 0) { sb.append(SUBTWENTY[hun]).append(" hundred "); if (subhun > 0 && and) { sb.append("and "); } } if (subhun < SUBTWENTY.length) { sb.append(SUBTWENTY[subhun]); } else { int tens = subhun / 10; int units = subhun % 10; sb.append(DECADES[tens]); if (units>0) { sb.append(" "); } sb.append(SUBTWENTY[units]); } return sb.toString(); }
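The reviewer's scheme (empty strings in the lookup tables, "zero" trapped once at the top level) translates almost line for line into other languages. A Python sketch of the same idea for comparison — my own illustration, omitting the optional "and":

```python
SUBTWENTY = ("", "one", "two", "three", "four", "five", "six", "seven",
             "eight", "nine", "ten", "eleven", "twelve", "thirteen",
             "fourteen", "fifteen", "sixteen", "seventeen", "eighteen",
             "nineteen")
DECADES = ("", "ten", "twenty", "thirty", "forty", "fifty", "sixty",
           "seventy", "eighty", "ninety")
SCALES = ("", "thousand", "million", "billion", "trillion")

def triple(n):
    """0 <= n <= 999 as words; returns "" for 0 (zero is the caller's job)."""
    parts = []
    if n >= 100:
        parts.append(SUBTWENTY[n // 100] + " hundred")
    sub = n % 100
    if sub >= 20:
        word = DECADES[sub // 10]
        if sub % 10:
            word += " " + SUBTWENTY[sub % 10]
        parts.append(word)
    elif sub:
        parts.append(SUBTWENTY[sub])
    return " ".join(parts)

def number_to_words(n):
    if n == 0:
        return "zero"            # the only place "zero" is ever spelled
    sign = "minus " if n < 0 else ""
    n, groups, scale = abs(n), [], 0
    while n:
        n, t = divmod(n, 1000)
        if t:                    # skip all-zero groups entirely
            suffix = " " + SCALES[scale] if scale else ""
            groups.append(triple(t) + suffix)
        scale += 1
    return sign + ", ".join(reversed(groups))

print(number_to_words(1000))     # "one thousand" -- no stray "zero"
```

Because Python integers are arbitrary precision, abs() here is safe even for the equivalent of Long.MIN_VALUE, a case the Java version has to handle specially.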
{ "domain": "codereview.stackexchange", "id": 10008, "tags": "java, strings, converting, rags-to-riches, numbers-to-words" }
How can I find the index of the cell from map coordinates?
Question: Hi, I have the (x,y,z) position of a point on the map. From that information, how can I find the cell index and also the costmap value (free, obstacle, ...) related to these coordinates? Thanks Originally posted by Developer on ROS Answers with karma: 69 on 2018-03-11 Post score: 1 Answer: worldToMap Originally posted by David Lu with karma: 10932 on 2018-03-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Developer on 2018-03-12: Hi David, I could not understand your solution proposal. Let me explain the case better. I have a position (x,y,z) of a map on Rviz. And from that position information, I want to first find the related cell index (I'm not sure if gridcell or costmap), then I want to change this cost value manually. Comment by David Lu on 2018-03-13: Do you have a Costmap2dROS object that you are manipulating in code? Comment by Developer on 2018-03-14: No, I don't have one. I'm trying to get the position of the related grid cell from the position of the interactive marker that I put on the Rviz (map), and change the value of this grid cell.
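For reference, worldToMap is a method of costmap_2d::Costmap2D. The conversion itself is just offset-and-divide arithmetic; the sketch below mirrors that maths in Python (the map parameters in the example are hypothetical, and real code should call the C++ API rather than re-deriving it):

```python
def world_to_map(wx, wy, origin_x, origin_y, resolution, size_x, size_y):
    """World coordinates (metres) -> integer cell coordinates (mx, my),
    or None if the point lies outside the grid."""
    if wx < origin_x or wy < origin_y:
        return None
    mx = int((wx - origin_x) / resolution)
    my = int((wy - origin_y) / resolution)
    if mx < size_x and my < size_y:
        return (mx, my)
    return None

def cell_index(mx, my, size_x):
    """Flatten (mx, my) to the row-major index used by the costmap array."""
    return my * size_x + mx

# Hypothetical 10 m x 10 m map, 0.25 m/cell, origin at (-5, -5):
cell = world_to_map(1.0, 2.0, -5.0, -5.0, 0.25, 40, 40)
print(cell, cell_index(*cell, 40))  # (24, 28) 1144
```

The z coordinate plays no role: a 2D costmap is indexed by (x, y) only.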
{ "domain": "robotics.stackexchange", "id": 30272, "tags": "ros, navigation, position, costmap" }
Search a directory structure for tar files and report on their size
Question: I have a complex directory structure on my web server that includes .tar files for download; I'm trying to create a report on the size distribution of these files (e.g. 10 are less than 1MB, two are 1-2MB, three are 2-3MB...) Is there a better way to organize this code? I started it as a console application, writing directly in the Main() function. Now I'm taking a step back and thinking about improvements. public class MyClass { private static int counter = 0; private static Dictionary<int, int> sizeMap = new Dictionary<int, int>() { {0, 0} , {1, 0} , {2, 0} , {3, 0} , {4, 0} , {5, 0} , {6, 0} , {7, 0} , {8, 0} , {9, 0} , {10, 0} , {20, 0} , {30, 0} , {40, 0} , {50, 0} , {75, 0} , {100, 0} , {999, 0} }; static void Main(string[] args) { DirectoryInfo cloud1 = new DirectoryInfo("C:\\brian"); IEnumerable<DirectoryInfo> cloud1Dirs = cloud1.EnumerateDirectories(); foreach (var dir in cloud1Dirs) { ParseDir(dir); } Console.WriteLine("=================================================="); foreach (int i in sizeMap.Keys) { Console.WriteLine(i + "M\t" + sizeMap[i]); } Console.WriteLine("Press any key to exit"); Console.ReadKey(); } private static void ParseDir(DirectoryInfo currentDir) { if (counter++ > 1000) { return; } IEnumerable<FileInfo> tarFiles = currentDir.EnumerateFiles("*.tar"); if (tarFiles.Count() > 0) { foreach (FileInfo tarFile in tarFiles) { int megLength = (int)(tarFile.Length / 1048576); int key = MakeDictionaryKey(megLength); sizeMap[key]++; Console.WriteLine(tarFile.Name + "\t" + megLength + "M"); } } else { foreach (var dir in currentDir.EnumerateDirectories()) { ParseDir(dir); } } } private static int MakeDictionaryKey(int n) { if (n <= 10) { return n; } else if (n <= 50) { return (int) Math.Ceiling(n / 10.0) * 10; } else if (n <= 75) { return 75; } else if (n <= 100) { return 100; } else { //Really large file! return 999; } } } Answer: The biggest complaint I would have with this code is that you're essentially defining your size map twice. 
Imagine if you decide you don't want to report on files between 50 and 75MB - you remove the "75" entry from the dictionary, but forget to remove it from the MakeDictionaryKey method, and your program will occasionally fail at runtime. First, I'd consider modifying your dictionary so instead of using "999" as a synonym for "really big file", you use int.MaxValue: , {100, 0} , {int.MaxValue, 0} In your MakeDictionaryKey method, you're then looking for the smallest key value that is greater than or equal to the file size you've provided to it. With a bit of LINQ, that simply becomes: private static int MakeDictionaryKey(int n) { return sizeMap.Keys.Where(x => x >= n).Min(); }
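The same "smallest bucket edge that is at least the file size" lookup can also be done with a sorted edge list and a binary search. Here is the idea in Python for comparison (names and sample sizes are my own):

```python
import bisect
from collections import Counter

# Bucket edges in MB: 0-10 individually, coarser steps to 100, then "huge".
BUCKETS = list(range(11)) + [20, 30, 40, 50, 75, 100, float("inf")]

def bucket_for(megabytes):
    """Smallest bucket edge >= the file size (what MakeDictionaryKey computes)."""
    return BUCKETS[bisect.bisect_left(BUCKETS, megabytes)]

def size_report(sizes_mb):
    """Histogram of file sizes keyed by bucket edge."""
    return Counter(bucket_for(s) for s in sizes_mb)

print(sorted(size_report([0, 3, 3, 15, 64, 2048]).items()))
# [(0, 1), (3, 2), (20, 1), (75, 1), (inf, 1)]
```

As with the LINQ version, the bucket boundaries live in exactly one place, so adding or removing a bucket cannot get out of sync with the lookup logic.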
{ "domain": "codereview.stackexchange", "id": 7468, "tags": "c#, file-system" }
Atomic Physics - Bohr's model of atom
Question: Well I'm learning about the models that have been proposed for the atom, and the Bohr model came up. My teacher told me that one of the main postulates of the theory is that when an atom is in the ground state, it can neither absorb nor emit energy. This is what gives the atom its stability. However, it can obviously absorb energy to jump to a higher energy level. I don't understand: how can it jump to a higher energy level (from the ground state, obviously) if it cannot absorb energy? Is there some problem here? Answer: In Bohr's model the postulates are: 1. Electrons in atoms orbit the nucleus. 2. The electrons can only orbit stably, without radiating, in certain orbits (called by Bohr the "stationary orbits") at a certain discrete set of distances from the nucleus. These orbits are associated with definite energies and are also called energy shells or energy levels. In these orbits, the electron's acceleration does not result in radiation and energy loss as required by classical electromagnetics. (These postulates state atomic stability.) 3. Electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation with a frequency ν determined by the energy difference of the levels. So it does require energy to be absorbed to make the transition from a lower energy level to a higher one, and it will radiate energy while coming from a higher energy level down to a lower one. You can read everything about Bohr's model on Wikipedia here: http://en.wikipedia.org/wiki/Bohr_model Hope this will help you
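Postulate 3 is easy to make quantitative for hydrogen, where the Bohr levels are E_n = −13.6 eV / n². A small numerical check in Python (my own illustration):

```python
RYDBERG_EV = 13.6057  # hydrogen ground-state binding energy in eV

def level_energy(n):
    """Bohr energy of level n for hydrogen: E_n = -13.6 eV / n**2."""
    return -RYDBERG_EV / n**2

def transition_energy(n_upper, n_lower):
    """Photon energy absorbed (n_lower -> n_upper) or emitted (reverse)."""
    return level_energy(n_upper) - level_energy(n_lower)

# Ground state (n=1) to first excited state (n=2): the atom must absorb
# a photon of about 10.2 eV; dropping back down, it emits the same amount.
print(transition_energy(2, 1))  # ~10.2 eV (Lyman-alpha)
```

This shows the resolution of the question directly: the ground state is stable against *emission* (there is no lower level), but a photon of exactly the right energy can still be absorbed.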
{ "domain": "physics.stackexchange", "id": 14891, "tags": "electrons, atomic-physics" }