Effect of concentration on rate
Question: How do I solve this question? My thinking is this: since O2 is not a reactant in any step, and since the reaction rate depends only on the reactants, increasing the concentration of O2 does not affect the rate of the reaction. Is my thinking correct? Answer: The question is solved; Poutnik taught me to do it using the steady-state approximation :) Just adding the image for somebody who might find this in the future.
{ "domain": "chemistry.stackexchange", "id": 13504, "tags": "kinetics" }
Fractional vertex cover number may not be feasible? Very confusing!
Question: For my project, I am trying to use the minimum fractional vertex cover number (MFVC). (Please find definition details below.) MFVC can be formulated as the optimal solution of a linear programming relaxation. However, I find that every linear programming relaxation is either infeasible, unbounded, or has an optimal solution. Does this mean that MFVC may not exist, or may not be a finite number, even though MFVC is less than or equal to the minimum vertex cover number? Very confusing! :( Many thanks! Definitions: K-uniform hypergraph: Minimum fractional vertex cover number (MFVC): link (on page 8) MFVC as a linear programming relaxation: The minimum fractional vertex cover $\tau^* (G)$ of a k-uniform hypergraph $G=(V,E)$ is $\min \sum_{v\in V}w (v)$ such that $0 \le w(v) \le 1$, and for each $e \in E$, we have $\sum_{v\in e} w(v) \ge 1$. Answer: The linear program defining the minimum fractional vertex cover always has an optimal solution. The fact that some linear programs are infeasible or unbounded doesn't mean that every linear program is infeasible or unbounded. To give another example, sometimes you can't divide by $x$. But if $x$ is a positive integer, then you can always divide by $x$. This in no way contradicts the fact that there exist $x$ that you can't divide by - it just means that this complication doesn't occur for positive integers. To see that the minimum fractional vertex cover always exists, we have to show that the relaxation is feasible and bounded (from below). To see that it is feasible, notice that if you assign $w(v) = 1$ for all $v$, you get a feasible solution. To see that it is bounded from below, notice that $w(v) \geq 0$ guarantees that the objective function is always non-negative.
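The feasibility and boundedness argument can also be checked numerically. Below is a minimal sketch (assuming SciPy is available) that solves the MFVC relaxation for a triangle viewed as a 2-uniform hypergraph; the fractional optimum is $3/2$, attained by $w(v)=1/2$ on every vertex, while the integer vertex cover number is 2.

```python
import numpy as np
from scipy.optimize import linprog

# Triangle K3 as a 2-uniform hypergraph: vertices 0,1,2 and edges
# {0,1},{1,2},{0,2}. Minimize sum_v w(v) subject to w(u)+w(v) >= 1
# for every edge, with 0 <= w(v) <= 1.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3
c = np.ones(n)                        # objective: sum of the weights
A_ub = np.zeros((len(edges), n))      # encode -w(u) - w(v) <= -1 per edge
for row, (u, v) in enumerate(edges):
    A_ub[row, u] = A_ub[row, v] = -1.0
b_ub = -np.ones(len(edges))
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
print(res.status, res.fun)            # 0 (optimal found), 1.5
```

The solver reports an optimal solution, never "infeasible" or "unbounded", exactly as the answer argues.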
{ "domain": "cs.stackexchange", "id": 11351, "tags": "graphs, linear-programming" }
Information entropy and physics correlation
Question: I'm starting to study Information Theory, and the concept of entropy is not clear to me; I have already checked other posts in this category, but none of them seems to answer my questions, so I have some of my own: Is the Shannon entropy equivalent to the thermodynamic entropy? I have read both answers, yes and no, and could not understand yet; many people say "yes", but the Shannon entropy is related to symbols, while the thermodynamic entropy is related to micro-states, and these are related to temperature (and Shannon entropy couldn't care less about temperature). However, the Gibbs entropy formula is exactly the same as the Shannon entropy, so I could not form a consensus in my head about the topic. What exactly is the difference between the Boltzmann, Shannon, Gibbs and von Neumann concepts of entropy? I have read in one topic here that Shannon entropy measures the "minimum amount of yes/no questions to fully specify a system", but how could a physical system obey this? For example, if the entropy of a volume of gas is $x$, what questions could I ask to "fully specify" this gas? If these entropies are related, how could I convert J/K (the thermodynamic unit) to bits (the Shannon unit)? And if one uses $\ln$ instead of $\log_{2}$, the unit would be nats; I understand that information is a way to measure differences between things, and it is clear to me that a bit is the minimum amount of information, since it distinguishes between 2 things, but what would a nat measure in this case? If a bit distinguishes between 2 things, a nat would distinguish between 2.718 things (I can't understand that). I've already searched many books and sites, and questioned my professor, but still don't have a clue about these topics, so any hint will be much appreciated... Answer: I hope that my answers below will all be helpful. There is more than one way to think about this, but the one I find most helpful is to think of thermodynamic entropy as a specific instance of Shannon entropy. 
Shannon entropy is defined by the formula $$ H = -\sum_i p_i \log p_i, $$ but this formula has many different applications, and the symbols $p_i$ have different meanings depending on what the formula is used for. Shannon thought of them as the probabilities of different messages or symbols being sent over a communication channel, but Shannon's formula has since found plenty of other applications as well. One specific thing you can apply it to is the microscopic states of a physical system. If the probabilities $p_i$ represent the equilibrium probabilities for a thermodynamic system to be in microscopic states $i$, then you have the thermodynamic entropy. (Very often it is multiplied by Boltzmann's constant in this case, to put it into units of $JK^{-1}$ --- see below.) If they represent something else (such as, for example, a non-equilibrium ensemble) then you just have a different instance of the Shannon entropy. So in short, the thermodynamic entropy is a Shannon entropy, but not necessarily vice versa. (One should note, though, that this isn't the way it developed historically --- the formula was in use in physics before Shannon realised that it could be generalised, and the entropy was a known quantity before that formula was invented. For a very good overview of the historical development of information theory and physics, see Jaynes' paper "Where do we stand on maximum entropy?" It is very long, and quite old, but well worth the effort.) The paper linked above will also help with this. Essentially, the Shannon entropy is the formula quoted above; the Gibbs entropy is that same formula applied to the microscopic states of a physical system (so that sometimes it's called the Gibbs-Shannon entropy); the Boltzmann entropy is $\log W$, which is a special case of the Gibbs-Shannon entropy that was historically discovered first; and the von Neumann entropy is the quantum version of the Gibbs-Shannon entropy. This is straightforward. 
The physical definition of the entropy is $$ S = -k_B \sum_i p_i \log p_i, $$ where the logarithms have base $e$, and $k_B \approx 1.38\times 10^{-23} JK^{-1}$ is Boltzmann's constant. Physicists generally consider $\log p_i$ to be unitless (rather than having units of nats), so the expression has units of $JK^{-1}$ overall. Comparing this to the definition of $H$ above (with units of nats) we have $$ 1\,\mathrm{nat} = k_B\,JK^{-1}, $$ i.e. the conversion factor is just Boltzmann's constant. If we want to express $H$ in bits then we have to change the base of the logarithm from $e$ to 2, which we do by dividing by $\ln 2$: $$ H_\text{bit} = -\sum_i p_i \log_2 p_i = -\sum_i p_i \frac{\ln p_i}{\ln 2} = \frac{H_\text{nat}}{\ln 2}. $$ So we have $$ 1\,\mathrm{bit} = \ln 2\,\,\mathrm{nat}, $$ and therefore $$ 1\,\mathrm{bit} = k_B\ln 2\,JK^{-1} \approx 9.57\times 10^{-24} JK^{-1}. $$ You will see this conversion factor, for example, in Landauer's principle, in which erasing one bit requires $k_B T \ln 2$ joules of energy. This is really just saying that deleting a bit (and therefore lowering the entropy by one bit) requires raising the entropy of the heat bath by one bit, or $k_B \ln 2$. For a heat bath of temperature $T$ this can be done by raising its energy by $k_B T \ln 2\,\, J$. As for the intuitive interpretation of nats, this is indeed a little tricky. The reason nats are used is that they're mathematically more convenient. (If you take the derivative you won't get factors of $\ln 2$ appearing all the time.) But it doesn't make nice intuitive sense to think of distinguishing between 2.718 things, so it's probably better just to think of a nat as $\frac{1}{\ln 2}$ bits, and remember that it's defined that way for mathematical convenience.
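The unit conversions above are easy to check numerically. A minimal sketch in plain Python, using an illustrative room temperature of 300 K for the Landauer bound:

```python
import math

k_B = 1.380649e-23            # Boltzmann's constant in J/K (exact since 2019)

# 1 nat of entropy corresponds to k_B J/K, so 1 bit = ln(2) nats
# corresponds to k_B * ln(2) J/K, as derived above.
bit_in_JK = k_B * math.log(2)
print(bit_in_JK)              # ~9.57e-24 J/K

# Landauer's bound: erasing one bit at temperature T dissipates at
# least k_B * T * ln(2) joules. T = 300 K is an illustrative choice.
T = 300.0
landauer_J = k_B * T * math.log(2)
print(landauer_J)             # ~2.87e-21 J
```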
{ "domain": "physics.stackexchange", "id": 45575, "tags": "thermodynamics, entropy, information" }
Algorithm: turning a fraction into a decimal expansion string
Question: I already asked this question over on Mathematics and got the suggestion to ask it here. I'm implementing a number type that can represent all fractions, and was working on an algorithm to compute the decimal expansion for said fractions. Let's say we have the reduced fraction $\frac{n}{m}$. For converting it into its decimal expansion I now have two algorithms. The first algorithm is simply long division to calculate the decimal expansion up to a given number of decimal places. The second is: Let $a \in \{1,2,\ldots\}$ be a specifier for accuracy. Calculate: $$ \begin{align} p &= \lceil \log_{10}(m) \rceil + a \\\\ f &= \lfloor \frac{10^p}{m} \rfloor \\\\ v &= n \cdot f \end{align} $$ Then in $v$ insert the decimal point at the correct place, or prepend "0." with leading zeros. This works well, but it is hard to control the accuracy with $a$. For example, for the fraction $\dfrac{884279719003555}{281474976710656} \approx \pi$ I get (acc = number of correct decimal places):

a | dec. exp. | acc
---|--------------------------------|---
1 | 3.0949790165124425 | 0
2 | 3.13919300246262025 | 1
3 | 3.14096156190062736 | 1
8 | 3.14159264580768862709685 | 7
9 | 3.14159265288192637912529 | 8
10 | 3.141592653589350154328134 | 12
11 | 3.141592653589350154328134 | 12
12 | 3.141592653589350154328134 | 12

f = 3.1415926535897931159979634... (acc 15)
pi = 3.1415926535897932384626433...

So it seems I can control with $a$ that at least $a-1$ decimal places are correct, but I'm not sure if this will always be the case. Also, I benchmarked both algorithms, and the second is more than 5 times faster, so I really want it to be controllable.

Method | Mean | Error | StdDev
---|---|---|---
first | 4,929.2 ns | 24.34 ns | 20.33 ns
second | 848.8 ns | 4.00 ns | 3.54 ns

So my question basically is: does anybody have suggestions on improving the algorithm, or maybe another algorithm that does the job even better (i.e. fast)? Answer: All we need is a function $f$ that gives the whole part of a fraction. 
One can define this using integer division: $$f(a, b) = \lfloor a / b\rfloor$$ Finally, let's assume we have a routine $g(a, b)$ that reduces a fraction to its simplest form. This can be done with the Euclidean algorithm. Let's first take the integer part of the fraction out using $f$, and reduce our fraction using $g$, giving us $\frac{a}{b} < 1$. Then we can repeatedly ask for the next decimal $d$ and continue with $10\cdot \frac{a}{b} - d$: $d := f(10a, b)$ $a, b := g(10a - b\cdot d, b)$ This algorithm will give you all digits, ad infinitum, and $g$ makes sure the numbers stay small and manageable.
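A minimal sketch of this digit-by-digit scheme in plain Python (the function name `decimal_digits` is chosen here for illustration):

```python
from math import gcd

def decimal_digits(n, m, k):
    """First k decimal digits of the fractional part of n/m (n, m > 0)."""
    a, b = n % m, m               # take the integer part out, so a/b < 1
    digits = []
    for _ in range(k):
        d, a = divmod(10 * a, b)  # d = floor(10a/b), a = 10a - b*d
        g = gcd(a, b)             # reduce the fraction (Euclidean algorithm)
        a, b = a // g, b // g
        digits.append(d)
    return digits

# The asker's fraction, the double-precision approximation of pi:
print(decimal_digits(884279719003555, 281474976710656, 10))
# [1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
```

Because everything is exact integer arithmetic, every digit produced is correct, which sidesteps the accuracy-control problem of the second algorithm in the question.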
{ "domain": "cs.stackexchange", "id": 15977, "tags": "algorithms" }
Problem debugging ROS nodes on Mac OS X
Question: Hello Community, I am on Mac OS X 10.11.4 with ROS Jade, and I have some strange problems debugging ROS nodes with gdb or lldb. Namely, the debugger is not able to locate ROS-related shared libraries. Here is the gdb output, for example: Starting program: /Users/plistov/EPFL/ROS/ros_catkin_ws/devel_isolated/test_package/lib/test_package/ekf_node dyld: Library not loaded: libtf2_ros.dylib Referenced from: /Users/plistov/EPFL/ROS/ros_catkin_ws/devel_isolated/test_package/lib/test_package/ekf_node Reason: image not found Program received signal SIGTRAP, Trace/breakpoint trap. 0x00007fff5fc01075 in ?? () This happens even though DYLD_LIBRARY_PATH points to the folder containing this library. Furthermore, I did not manage to launch standard ROS nodes. rosbag, for instance: (gdb) exec-file rosbag "/Users/plistov/EPFL/ROS/ros_catkin_ws/install/bin/rosbag": not in executable format: File format not recognised I have not experienced any problems debugging non-ROS-related applications on my computer before. I would therefore very much appreciate any hint or advice. Thanks, Peter Originally posted by Peter Listov on ROS Answers with karma: 338 on 2017-04-07 Post score: 0 Answer: The issue seems to be OS-specific. Well explained here. Had to switch to lldb, though. Originally posted by Peter Listov with karma: 338 on 2017-04-13 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 27553, "tags": "ros, catkin, gdb, ros-jade" }
Why would a reaction be nonspontaneous at higher temperatures?
Question: Typically we think of a higher temperature speeding up the reaction rate and/or supplying the activation energy of a reaction. So why is it the case that some reactions are only spontaneous at lower temperatures? Using the Gibbs free energy equation $ \Delta G = \Delta H - T \Delta S $: if I have a reaction where $\Delta H$ is negative (exothermic?) and $ \Delta S$ is negative, $\Delta G$ becomes positive at higher temperatures, which means the reaction is nonspontaneous at higher temperatures. Why would this be the case? Answer: You expect both the forward and the reverse reaction to proceed faster at higher temperature. If the reverse reaction speeds up by a higher factor, this will affect the equilibrium. "Spontaneous" is a technical term that does not reflect our day-to-day use of spontaneous. If you substitute the longer "goes forward or would have to go forward, starting from standard state, to attain equilibrium", it might make more sense. If I start at standard state at a given temperature, either the forward reaction will be faster than the reverse reaction, or vice versa. If I repeat the experiment at higher temperature, I will see an increased forward rate and an increased reverse rate. These increases are typically not of the same magnitude, however. So the direction that the reaction takes to reach equilibrium, starting from standard conditions, can be different depending on temperature. With respect to the direction of the reaction as written, it can change from spontaneous to non-spontaneous, or from non-spontaneous to spontaneous. If you write the reaction in the other direction, as Karl mentions, you come to the opposite conclusion. In conclusion, there is always one net direction of the reaction that becomes "more favored", and the other net direction that becomes "less favored" as you increase the temperature and individual rates go up.
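The sign change of $\Delta G$ can be made concrete with a toy calculation: for $\Delta H < 0$ and $\Delta S < 0$, $\Delta G = \Delta H - T\Delta S$ crosses zero at $T = \Delta H/\Delta S$. A plain-Python sketch with purely illustrative assumed values:

```python
# Hypothetical exothermic reaction with a negative entropy change
# (values assumed for illustration only):
dH = -50_000.0      # J/mol
dS = -100.0         # J/(mol K)

T_cross = dH / dS   # temperature at which dG = 0
print(T_cross)      # 500.0 K

for T in (300.0, 500.0, 700.0):
    dG = dH - T * dS
    print(T, dG)    # negative (spontaneous) below 500 K, positive above
```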
{ "domain": "chemistry.stackexchange", "id": 16911, "tags": "entropy" }
How do I find the wave function in a separable Hilbert Space?
Question: I am confused as to how I would go about finding the wave function in the Hilbert Space. As I understand, a wavefunction in the Hilbert space can be represented as $$|\Psi\rangle = \sum_{n} c_n|\psi_n\rangle$$ and the wave function can be calculated with $$i\frac{\partial |\psi\rangle}{\partial t} = H|\psi\rangle$$ I think I should use the second equation to try and find the wave function, then find a representation for it in terms of the orthonormal basis of the Hilbert Space. But I'm not exactly sure what that process would look like. Also, I'm new to quantum mechanics and this site. If there's anything I need to do to improve my question or other resources I should look at, I'll gladly take any advice. Answer: I agree with the comment from ZeroTheHero: The standard text book examples will help you. Yet, it is also reasonable to discuss the following (rather simple but useful) computation scheme: Consider a finite dimensional Hilbert space with a given orthonormal basis $\{|n\rangle\}$. We further restrict the discussion to time-independent systems, i.e. we aim to solve the time-independent Schrödinger equation: $$ H \,|\Psi\rangle = E \, |\Psi\rangle \quad . $$ To do so, we can make use of the completeness relation $\mathbb{1} = \sum\limits_n |n\rangle\langle n|$ and multiply the equation from the left with $\langle m|$, which yields $$ \sum\limits_n \langle m|H|n\rangle \, \langle n |\Psi\rangle = E\, \langle m|\Psi\rangle \quad .$$ If we expand the wave function in the $n$-basis, i.e. if we write $|\Psi\rangle = \sum\limits_n c_n \, |n\rangle$, then we find $c_m = \langle m|\Psi\rangle$. Moreover, we can interpret $ H_{mn} \equiv \langle m|H|n\rangle $ as elements of a matrix. The above equation is nothing but the time-independent Schrödinger equation in matrix form: $$ \sum\limits_n H_{mn}\, c_n = E \,c_m \quad . 
$$ All in all, for the given case, we just have to choose a basis from which we build our Hamiltonian matrix and then have to diagonalize this matrix (numerically) in order to find the eigenvalues and eigenvectors. The elements of the Hamiltonian of course depend on the problem you are studying.
{ "domain": "physics.stackexchange", "id": 75418, "tags": "quantum-mechanics, hilbert-space, wavefunction, schroedinger-equation" }
Quantum mechanics problem?
Question: I had a test on quantum mechanics a few days ago, and there was a problem which I had no clue how to solve. Could you please explain it to me? The problem is: Consider the $\hat H=E_0[|1 \rangle \langle 2| + |2 \rangle \langle1|]$ two-state quantum system, where $E_0$ is a constant, and $\langle i|j \rangle=\delta_{ij}$ $(i,j=1,2)$. \begin{equation} \hat O= \Omega_0 [3 |1 \rangle \langle1|- |2 \rangle \langle2|] \end{equation} is an observable quantity, and its expectation value at $t=0$ is $\langle \hat O \rangle =-\Omega_0$, where $\Omega_0$ is a constant. What is the state $|\psi(0) \rangle$ of the system at $t=0$, and what is the minimum time $t>0$ needed for the system to be in the state $|\psi(t) \rangle =|1 \rangle$? I never came across a problem like this; I tried to construct the time evolution operator $\hat U$, but I couldn't, and I have no idea how to start. Answer: Part 1 The state vector can be written in terms of the two states at time $t$ as $$ \left|\psi\left(t\right)\right> = c_1\left(t\right) \left|1\right> + c_2\left(t\right) \left|2\right> $$ and at time $t=0$ as $$ \left|\psi\left(0\right)\right> = c_1\left(0\right) \left|1\right> + c_2\left(0\right) \left|2\right>. $$ We know $$ \begin{align} -\Omega_0 = \left<\hat{O}\right> &= \left<\psi\left(0\right)\right| \hat{O} \left|\psi\left(0\right)\right> \\ &= \Omega_0 \left(c^*_1\left(0\right) \left<1\right| + c^*_2\left(0\right) \left<2\right|\right) \left(3 \left|1\right>\left<1\right|-\left|2\right>\left<2\right|\right) \left(c_1\left(0\right) \left|1\right> + c_2\left(0\right) \left|2\right> \right) \\ &= \Omega_0 \left(c^*_1\left(0\right) \left<1\right| + c^*_2\left(0\right) \left<2\right|\right)\left(3c_1\left(0\right) \left|1\right> - c_2\left(0\right) \left|2\right> \right) \\ &= \Omega_0 \left(3 \left|c_1\left(0\right)\right|^2 - \left|c_2\left(0\right)\right|^2 \right), \end{align} $$ so $$ 3 \left|c_1\left(0\right)\right|^2 - \left|c_2\left(0\right)\right|^2 = -1. 
$$ Since the state vector must be normalized, $$ \left|c_1\left(0\right)\right|^2 + \left|c_2\left(0\right)\right|^2 = 1. $$ You can finish this part. Part 2 The Schrödinger equation tells us $$ i \hbar \frac{d}{dt} \left|\psi\left(t\right)\right> = \hat{H} \left|\psi\left(t\right)\right>, $$ or $$ i \hbar \left({\dot{c}}_1\left(t\right) \left|1\right> + {\dot{c}}_2\left(t\right) \left|2\right>\right) = E_0 \left(\left|1\right>\left<2\right| + \left|2\right>\left<1\right|\right) \left(c_1\left(t\right) \left|1\right> + c_2\left(t\right) \left|2\right>\right). $$ I'll let you take it from here.
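For readers who want to check their final result numerically, here is a sketch (assuming NumPy/SciPy, in units with $\hbar = E_0 = \Omega_0 = 1$). Part 1 forces $|c_1(0)|^2 = 0$, so $|\psi(0)\rangle = |2\rangle$ up to a global phase, and the evolution first reaches $|1\rangle$ (up to a phase) at $t = \pi\hbar/2E_0$:

```python
import numpy as np
from scipy.linalg import expm

E0 = 1.0                                     # illustrative value, hbar = 1
H = E0 * np.array([[0, 1],
                   [1, 0]], dtype=complex)   # E0(|1><2| + |2><1|)
O = np.array([[3, 0],
              [0, -1]], dtype=complex)       # with Omega0 = 1

# Part 1: <O> = -1 plus normalization gives |c_1|^2 = 0, |c_2|^2 = 1,
# i.e. |psi(0)> = |2> up to a global phase.
psi0 = np.array([0, 1], dtype=complex)
print((psi0.conj() @ O @ psi0).real)         # -1.0

# Part 2: evolve with U(t) = exp(-i H t) and check when |<1|psi(t)>| = 1.
t = np.pi / (2 * E0)                         # candidate first time
psi_t = expm(-1j * H * t) @ psi0
print(abs(psi_t[0]))                         # 1.0 up to numerical error
```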
{ "domain": "physics.stackexchange", "id": 10994, "tags": "quantum-mechanics, homework-and-exercises, operators" }
Theory on how to break the speed of light
Question: I am 14 years old and I think I have a good grasp on physics but I have a question. Imagine you were to shrink everything in the universe (except for yourself) down to the size of an atom and it was on your finger. Now, if you move your hand, would everything in the shrunken "universe" be going faster than the speed of light? Answer: If you had an atom-sized object with the mass of the universe, it would be a black hole. With it on your fingertip, you'd be within its event horizon. Hypothetical physics questions just stopped being the most pressing matter in your mind. Also, moving it would be extremely difficult due to its huge mass. Ignoring the above, the fact of the matter is that you have a very small object, not a very large object that you can magically move around very fast. Light speed barriers remain intact.
{ "domain": "physics.stackexchange", "id": 48940, "tags": "faster-than-light, order-of-magnitude, scales" }
Why does the sensitivity to GWs drop off in inverse proportion to the distance?
Question: This answer makes me wonder why the sensitivity to gravitational waves decreases proportionally to the distance. Since gravitational waves extend in all directions, my (uneducated) guess would be that the same argument can be made as for the decrease in sensitivity for electromagnetic waves. Why is this not the case? Why does the sensitivity to GWs decrease in a linear fashion? Answer: EDIT I'm leaving the original, highly upvoted answer below, but I've had a fundamental rethink about this, prompted by questions from Keith McClary and a helpful clarification from a Physics SE question. The original answer I gave is the reason that we can detect gravitational waves (GWs) at all. Their coherent nature as single oscillators, means that despite their comparatively low powers, they can be detected right across the universe. In comparison, electromagnetic sources are usually the superposed light from countless incoherent emission sources. On average this has a destructive interference effect that reduces the power (intensity) received; and due to the rapidly changing signal, it is usually intensity that is measured. However, the answer to the present question is actually just how "sensitivity" is defined. In order to detect a source, we must identify it in a background of noise. This is done by defining a signal-to-noise ratio. The signal is the product of your source strength (more on that in a minute) and how long you observed it for. The noise is a property of your instrument. The sensitivity of the instrument then is something like the minimum (signal $\times$ observation time) that will produce a significant detection. In astronomy it is conventional to express the signal in terms of power received since, due to the arguments given above and in the original answer, power (intensity) is generally what is measured. 
The sources of noise are therefore also defined in terms of a power, and the sensitivity has units of something like Watts $\times$ seconds, or more conventionally, W/Hz. In gravitational wave astronomy, because it is the amplitude that is directly detected, GW astronomers express their source signal in terms of amplitude (which is proportional to the square root of detected power) and their sensitivities are expressed in terms of gravitational wave amplitude (which is dimensionless) divided by $\sqrt{Hz}$ as a result. i.e. we are not comparing like with like. Doubling the sensitivity of a gravitational wave detector is actually like quadrupling the sensitivity of an electromagnetic wave detector. Thus there is no fundamental difference here; the apparent difference in behaviour is merely a result of how sensitivity is defined. The reason for the different definition is as per my original answer below. Original answer The difference is that usually when we detect sources of electromagnetic waves, we are detecting intensity, which obeys the inverse square law. In contrast, we are detecting the amplitude of gravitational waves, and amplitude only scales as the inverse of distance. Why the difference? Sources of gravitational waves are coherent oscillators. A merging binary produces a single coherent wave train with an amplitude that can be defined and measured. By contrast, when we look at a distant star or galaxy in electromagnetic waves we are seeing the incoherent contribution from countless accelerating particles and atoms, and all we can detect is the resultant summed intensity. There is no coherent electromagnetic wave with an amplitude that can be measured. This difference in behaviour is fundamentally because whilst there are positive and negative electric charges, which require contrived circumstances in which to behave coherently (e.g. 
in a laser), gravitational waves are produced by accelerating masses, and since there is only one sign of "gravitational charge", the individual parts of a gravitational wave source are able to act in concert quite naturally to produce a coherent waveform that has a wavelength larger than the body itself. An excellent discussion of these points can be found on the first page of the review article by Hendry & Woan (2007). In principle, if we were looking at a single coherent source of electromagnetic waves then we can detect the amplitude (for example by the force it exerts on charged particles), and then the sensitivity would just reduce as the inverse of distance. At optical frequencies the electric field varies so rapidly that this cannot be done, but it is possible at radio frequencies. Unfortunately the coherence length and coherence time (the time over which the phase of the wave is predictable) are so short that this is rarely practical in laboratory, let alone astronomical, sources.
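The amplitude-versus-intensity scaling argument can be illustrated with a few lines of plain Python (purely illustrative units, source strength normalized to 1 at unit distance):

```python
# Amplitude falls as 1/r while intensity (power flux) falls as 1/r^2,
# so a detector measuring amplitude loses signal only linearly with
# distance:
for r in (1.0, 2.0, 4.0):
    amplitude = 1.0 / r
    intensity = 1.0 / r ** 2
    print(r, amplitude, intensity)

# Survey consequence: doubling an amplitude-limited detector's
# sensitivity doubles its range, i.e. 8x the surveyed volume; doubling
# an intensity-limited detector's sensitivity gains only sqrt(2) in range.
range_gain_amplitude = 2.0
range_gain_intensity = 2.0 ** 0.5
print(range_gain_amplitude ** 3, range_gain_intensity ** 3)
```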
{ "domain": "astronomy.stackexchange", "id": 4208, "tags": "astrophysics, gravitational-waves" }
Why hemin and glucose in acute porphyria?
Question: I am confused about the treatment of porphyria. I read somewhere that porphyria is treated with hemin and glucose given IV. Please give me an easy explanation of it. Answer: Since porphyria is not one disease but many, I suppose that with "acute porphyria" you mean acute intermittent porphyria. The cause of the disease is an autosomal dominant mutation in the enzyme porphobilinogen deaminase, an enzyme that converts porphobilinogen to hydroxymethylbilane by a deamination reaction. Heme is produced in many steps, and the enzyme above is important in the early steps of heme synthesis. In persons with porphyria, these heme precursors accumulate, causing toxicity. In acute porphyria the symptoms are mostly from the central nervous system. The figure below shows the steps of the biosynthesis of heme, and as you see at the upper right corner, a defect in the enzyme porphobilinogen deaminase (PBG-deaminase) results in accumulation of porphobilinogen. Glucose and other carbohydrates can help suppress disease activity, are given by vein or by mouth, and are part of initial treatment. Intravenous heme, however, is both more specific and effective than glucose, and should be started if the patient's symptoms fail to improve within 36 hours. http://www.porphyriafoundation.com/about-porphyria/types-of-porphyria/AIP [2015-05-02] More about carbohydrates and the suppression of symptoms in porphyria: http://www.porphyriafoundation.com/about-porphyria/diet-and-nutrition/the-glucose-effect-in-acute-porphyria [2015-05-02].
{ "domain": "biology.stackexchange", "id": 3887, "tags": "biochemistry" }
Dirac Delta and Sloppy Notation
Question: I am an undergraduate neuroscientist and recently I have been studying probability distributions in relation to information theory, and came across the definition of the Dirac Delta as a singular distribution. My mathematical maturity is relatively low, seeing as I am only just now taking multivariable calculus, so bear with me if my understanding is not very deep. I understand that the Dirac Delta is a rule of the form \begin{equation} \delta_{x_0}(\phi) = \phi(x_0) \end{equation} which assigns to each value of the test function $\phi(x)$ its value at $x_0$, namely $\phi(x_0)$. I also understand the proof by negation which shows that there is no regular distribution which satisfies this property, making the Delta a singular distribution. My confusion arises in the physicists' notation of Delta, which I understand to be an abuse of notation: \begin{equation} \delta(x-x_0)(\phi)=\int_{-\infty}^{\infty}\delta(x-x_0)\phi(x)dx = \phi(x_0)\end{equation} So my question is twofold: What mathematical power does one gain by representing the Dirac Delta as such and why does the $(x - x_0)$ notation come in? It seems that it could just as easily be written\begin{equation} \delta_{x_0}(\phi)=\int_{-\infty}^{\infty}\delta(x_0)\phi(x)dx = \phi(x_0)\end{equation} since this notation is simply the physicist imagining that some function which satisfies this distribution exists, even though one does not. Answer: The notation $$\delta_{x_0}(\phi) = \phi(x_0)$$ makes sense to me. It's clear that $\delta_{x_0}$ is a linear$^{[a]}$ functional, i.e. it takes a function $\phi$ as an input and produces a number $\phi(x_0)$ as output. Suppose we'd like an integral representation of $\delta_{x_0}$. One way to do it is with a limit, e.g. 
$$\delta_{x_0}(\phi) \equiv \phi(x_0) = \lim_{\sigma \rightarrow 0} \int_{-\infty}^\infty \underbrace{ \left[ \frac{1}{\sqrt{2 \pi \sigma^2}} \exp \left( - \frac{(x-x_0)^2}{2 \sigma^2 }\right) \right]}_{\text{Approximate delta function}} \phi(x) \, dx \, .$$ The thing marked by the underbrace has integral 1, and as $\sigma \rightarrow 0$, it becomes thinner and taller. It's just annoying to write the limit and the Gaussian function in the integral, so physicists use a shorthand and write $$\delta_{x_0}(\phi) \equiv \phi(x_0) = \int_{-\infty}^\infty \delta(x-x_0) \phi(x) \, dx \, . \tag{$\star$}$$ Here we're thinking of $\delta(x-x_0)$ as a function. Of course, as you say, it's not really a function; there is no function that satisfies the integral equation $(\star)$. Indeed, $\delta(x-x_0)$ is better thought of as a limit of functions, as we wrote in the first integral equation. Now let's answer the questions. why does the $\delta(x−x_0)$ notation come in? Well, we just showed that it more or less comes from the fact that, in a physicist's sense, $\delta$ stands for a limit of a real function, and that real function has argument $(x-x_0)$ (look at the Gaussian in the first integral equation). That's a pretty weak answer though, but... What mathematical power does one gain by representing the Dirac Delta as such? ...we can see why the $\delta(x-x_0)$ is powerful through an example. Consider the equation $$ f(x_0) = \int_{-\infty}^\infty \delta(x-x_0) f(x) \, dx \, . \tag{$\star \star$}$$ Let's represent $f$ by its Fourier transform $$f(x) = \int_{-\infty}^\infty \frac{dk}{2\pi} \tilde{f}(k) e^{ikx} \, . 
$$ Inserting the Fourier transform into $(\star \star)$ gives \begin{align} f(x_0) &= \int_{-\infty}^\infty \int_{-\infty}^\infty dx \, \frac{dk}{2\pi} \delta(x-x_0) \tilde{f}(k) e^{ikx} \\ (\text{change variables }y\equiv x-x_0) \quad &= \int_{-\infty}^\infty \int_{-\infty}^\infty dy \, \frac{dk}{2\pi} \delta(y) \tilde{f}(k) e^{ik(y+x_0)} \\ &= \int_{-\infty}^\infty \frac{dk}{2\pi} \tilde{f}(k) e^{i k x_0} \\ &\equiv f(x_0) \, . \end{align} The point here is that we can manipulate the argument of $\delta(x-x_0)$ e.g. in changes of variables just like any other function and it works. It's pretty obvious that this should work though, because as we pointed out above, you can always think of a $\delta$ as a limit of a sequence of ever-narrower real functions. $[a]$: I say it's linear because $$\delta_{x_0}(\phi + \theta) \equiv (\phi + \theta)(x_0) = \phi(x_0) + \theta(x_0) \equiv \delta_{x_0}(\phi) + \delta_{x_0}(\theta)\, .$$
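The "limit of ever-narrower real functions" picture can be checked numerically. A minimal sketch (assuming NumPy; the helper name `delta_apply` is chosen here for illustration) integrates a normalized Gaussian of width $\sigma$ against a test function and watches the result approach $\phi(x_0)$:

```python
import numpy as np

def delta_apply(phi, x0, sigma):
    """Riemann sum of a narrow normalized Gaussian against phi."""
    x = np.linspace(x0 - 10 * sigma, x0 + 10 * sigma, 20001)
    dx = x[1] - x[0]
    g = np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
    return float(np.sum(g * phi(x)) * dx)

# As sigma -> 0 the integral approaches phi(x0) = cos(0.5) ~ 0.8776:
for sigma in (0.1, 0.01, 0.001):
    print(delta_apply(np.cos, 0.5, sigma))
```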
{ "domain": "physics.stackexchange", "id": 38572, "tags": "mathematical-physics, notation, singularities, dirac-delta-distributions" }
How to build an algorithm with quantum circuits to calculate the outer product of two states?
Question: I want to directly use quantum circuits or quantum algorithms to obtain the outer product of $U_A|a\rangle$ (a column vector) and another column vector $U_B|b\rangle$; what are some reference ideas? Answer: I don't think that quantum computing can provide a speedup here, as long as the representation of the outer product you're looking for is classical. The outer product of $|a\rangle$ and $|b\rangle$, two column vectors of respective sizes $m$ and $n$, can simply be computed in $\mathcal{O}(mn)$, carrying out the product for each term of the outer product. More precisely, assuming $p$-bit numbers, a multiplication takes around $p^2$ operations, so the final complexity is $\mathcal{O}\left(mnp^2\right)$. Classical representation of the outer product Now, where is it possible to gain a speedup using quantum computing? No matter what you do, as long as you want a classical representation of the outer product, you will have to perform at least $\mathcal{O}(mnp)$ operations (at least to write down the result). Indeed, measuring a quantum system will only result in a single component of the outer product. You may think that you could get several terms at once by increasing the number of qubits you're dealing with, but that's also true for a classical computer: you can run the computation in parallel. The only gain you could get is if multiplying two numbers were faster, from a complexity point of view, on a quantum computer. For instance, an algorithm performing a multiplication in $\mathcal{O}\left(p\log(p)\right)$ operations would result in an algorithm to compute the outer product in $\mathcal{O}\left(mnp\log(p)\right)$, which would be better than the classical one. However: I'm not aware of such an algorithm. The gain would be mostly theoretical, as $p$ is rarely larger than $64$. This assumes that encoding a quantum state from a classical number, performing the quantum algorithm and measuring it is faster than doing the multiplication classically. 
Thus, though it may be possible to obtain a better complexity than with a classical computer, the overall gain seems at best marginal. Quantum representation of the outer product in $\mathcal{O}(1)$ if the second vector is real: Note that this reasoning holds as long as you're after a classical representation of the outer product. If you're looking for a quantum representation in order to use it as part of a larger quantum algorithm, then a (trivial) quantum speedup is possible. Let us call the two column vectors $|a\rangle$ and $|b\rangle$. Their outer product is: $$|a\rangle\langle b|=\sum_{i=0}^{2^m-1}\sum_{j=0}^{2^n-1}a_i\overline{b_j}|i\rangle\langle j|$$ Now, let us assume that we have a quantum representation of $|a\rangle$ and $|b\rangle$. The state of the resulting quantum system is: $$|a\rangle\otimes|b\rangle=\sum_{i=0}^{2^m-1}\sum_{j=0}^{2^n-1}a_ib_j|i,j\rangle$$ If $b$ only has real coordinates, then $|a\rangle\otimes|b\rangle$ is already a representation of the outer product of $|a\rangle$ and $|b\rangle$. In that case, the complexity is trivially $\mathcal{O}(1)$ (assuming $|a\rangle$ and $|b\rangle$ are given). Choosing a different representation which would lead to writing down the coordinates of $|a\rangle$ and $|b\rangle$ would, I think, be equivalent to the classical case, and would thus provide no speedup.
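The classical $\mathcal{O}(mn)$ computation described above is a one-liner in NumPy; a minimal sketch (the vector values here are arbitrary illustrations, not from the answer):

```python
import numpy as np

# Arbitrary example column vectors |a> (size m = 3) and |b> (size n = 2).
a = np.array([1 + 0j, 2j, -1])
b = np.array([0.5, -0.5j])

# Outer product |a><b|: an m x n matrix with entries a_i * conj(b_j),
# computed term by term in O(m*n) multiplications.
outer = np.outer(a, b.conj())

print(outer.shape)  # (3, 2)
```

Note that np.outer multiplies entries without conjugating, so the $\overline{b_j}$ in the formula above has to be applied explicitly.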
{ "domain": "quantumcomputing.stackexchange", "id": 3651, "tags": "quantum-algorithms" }
Moment of weight of leaning beam
Question: Having trouble with an FBD moment. The problem is: How big can the force couple C (looks like a G in the pic) be in order for the disk to not spin? The disc has mass m and the beam also has mass m. All other variables are given. Let point A = bottom of the disc and point B = where the beam touches the ground. Also let $\ell = R/\tan(\alpha)$ = distance from point A to point B. In the solution sheet, the professor stated that the counter-clockwise moment about point A due to the beam is $\frac{\ell}{2} m g$, implying that the center of mass of the beam is located halfway between A and B. How can this be? If the beam were parallel to the ground, surely that would have been the case. But now it's tilted $\alpha$ degrees - should that not shift the center of gravity to the right? For reference, this is his equation of moment equilibrium about A: $$M_a = N_b \ell - m g \frac{\ell}{2} + C = 0.$$ Answer: So you have two rigid bodies in contact with the ground. Let us call $N_a$ and $N_b$ the contact forces (normal to the ground) and $F_a$, $F_b$ the frictional forces at A and B (arbitrarily chosen to act in the positive x direction). We can call the pin forces $P_x$ and $P_y$ acting on the bar (and reacting on the disk). The sum of moments about the disk center is $$ r F_a + C = I \ddot{\theta} $$ and about the bar center $$ \frac{\ell}{2} \cos\theta (N_b-P_y) - \frac{\ell}{2} \sin\theta (N_b+P_x) = 0 $$ coupled with the equations of motion along the x and y axes for the disk $$ m \ddot{x} = m R \ddot{\theta} = F_a -P_x \\ m \ddot{y} = N_a -P_y - m g = 0 $$ and the bar $$ m \ddot{x} = F_b + P_x \\ 0 = N_b+P_y - m g $$ For the condition of remaining motionless, with $\ddot{x}=\ddot{\theta}=0$, the above is 6 equations for 6 unknowns ($P_x, P_y, N_a, N_b, F_a, F_b$), which is solved by elimination. Next find the smallest $C$ value that makes $|F_a| = \mu_s N_a$ and $|F_b| = \mu_s N_b$. If you get the correct result you will have $P_x<0$ and $F_a<0$ and $C \propto \mu_s R m g$
{ "domain": "physics.stackexchange", "id": 11147, "tags": "homework-and-exercises, statics" }
How is minimum infectious dose defined?
Question: I'm a bit confused by the concept of a minimum infectious dose (MID). It seems from what my research so far has turned up, that while any dose greater than 0 could potentially cause an infection, for most pathogens the likelihood of this occurring is low enough to be dismissed. What I'm now trying to figure out is, what metric is used to define the MID? Is it the amount of the pathogen which is expected to cause an infection in 50% of those exposed (similar to an LD50?) Is there some other value used instead? Secondly, is there a specified relationship between the received dose, as a multiple of the MID, and the likelihood of infection (for example if the MID caused an infection in 50% of people, then perhaps 2 MIDs would cause infection in 75% and 3 MIDs would cause infection in 87.5%). Note, I am not asking about how the MID is empirically measured, but rather about the conceptual underpinnings of the concept itself. Also, any links you can provide to additional resources would be most appreciated. Google is useless these days now that the search results are flooded with nothing but SARS-CoV-2 specific articles. Answer: Minimum infective dose is the minimum number of infectious particles that causes infection/illness in 50% of those given that dose. Practically this is only really measurable in a controlled laboratory setting, but one can estimate it for real world circumstances as well. It isn't easy to know how exactly the probability will vary around the MID - there are too many factors related both to the individual and the pathogen to make simplistic assumptions (like assuming each individual infectious particle has an independent probability of leading to infection). It is typical that higher initial exposure (particularly for foodborne illness, but seems to be an issue with SARS-CoV-2 and other respiratory viruses as well) leads to more severe illness, however.
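For what it's worth, the questioner's hypothetical scaling (50% → 75% → 87.5%) is exactly the simplistic independent-action model that, as noted above, real dose-response curves need not follow; a purely illustrative sketch:

```python
def p_infection(dose_in_mids, p_single=0.5):
    """Naive independent-action model: each MID-sized increment of dose
    independently fails to infect with probability (1 - p_single).
    Illustrative only -- real pathogens need not behave this way."""
    return 1 - (1 - p_single) ** dose_in_mids

print(p_infection(1))  # 0.5
print(p_infection(2))  # 0.75
print(p_infection(3))  # 0.875
```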
{ "domain": "biology.stackexchange", "id": 10595, "tags": "immunology, bacteriology, virology, pathology" }
generate graph from .eps file (preferably using R)
Question: Using the R package 'Deducer', I saved a graphic (chart) as an .eps file. I can open the .eps file, it's just a bunch of text. How do I re-generate the graphic (chart) from the .eps file (preferably using Deducer or R)? Answer: EPS is "Encapsulated PostScript". It's meant for embedding, like an image, in documents, or for sending to printers. You can view it with a PostScript document viewer, and there are free PostScript document viewers for Linux, Windows, and Mac OSs: Ghostview, Evince, etc. So although you can view the graphic once you've got a PostScript document viewer, you cannot load it into R as if you had just plotted it.
{ "domain": "datascience.stackexchange", "id": 872, "tags": "r" }
Search in a nested list of a dictionary
Question: I have a dictionary structured as {a_1: [((b_11,c_11),d_11), ((b_12, c_12), d_12), ...], a_2: [...], ...}. I want to extract the maximum abs(c) among all the possible c values. Currently I'm doing this through two for loops: one for each a, and another for each tuple ((b, c), d). elements = { 0: [((0, -4), 1)], 1: [((1, -5), 1), ((0, -4), 1)], 2: [((2, -3), 1)], 3: [((3, -2), 1), ((0, -5), 1)], } max_c = 0 for a in list(elements): for (b, c), d in elements[a]: max_c = max(max_c, abs(c)) Is there any way to do this without the for loops? And if there is, is it Pythonic, or should I keep it this way because it's more understandable? Thanks in advance. Answer: When you are looping over the keys a and values elements[a] of a dictionary, use the items method to loop over both at the same time: for a, value in elements.items(): for (b, c), d in value: (If I knew what these values represented, I would pick a better name than value.) The code here doesn't use the keys, it only uses the values. So use the values method instead: for value in elements.values(): for (b, c), d in value: The code doesn't use b or d either. It is conventional to use the name _ for unused variables: for (_, c), _ in value: A double iteration over some lists and then over the elements of those lists can be combined into a single iteration using itertools.chain.from_iterable: from itertools import chain for (_, c), _ in chain.from_iterable(elements.values()): The repeated calls to max can become a single call taking a generator expression: values = chain.from_iterable(elements.values()) max_abs_c = max(abs(c) for (_, c), _ in values) It's common to worry whether code is "Pythonic" but it is better to think in terms of general principles (clarity, simplicity, maintainability, testability, usability, efficiency, etc.) that apply to code in any language.
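Putting the final suggestion together into a complete, runnable snippet (using the elements dictionary from the question):

```python
from itertools import chain

elements = {
    0: [((0, -4), 1)],
    1: [((1, -5), 1), ((0, -4), 1)],
    2: [((2, -3), 1)],
    3: [((3, -2), 1), ((0, -5), 1)],
}

# Flatten all ((b, c), d) entries across all keys and take the largest |c| in one pass.
values = chain.from_iterable(elements.values())
max_abs_c = max(abs(c) for (_, c), _ in values)
print(max_abs_c)  # 5
```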
{ "domain": "codereview.stackexchange", "id": 32320, "tags": "python, hash-map" }
spinor vs vector indices of Dirac gamma matrices
Question: I am struggling to understand the nature of the components of the Dirac matrices. If we view the four components of a Dirac spinor as $\psi^a$ with $a$ being a 'spinor' index, then if a gamma matrix acts on this to give another spinor, then its indices would be ... ?? $\gamma^{\mu b}{}_{a}$ where $\mu$ selects the gamma matrix, and $a,b$ are spinor indices specifying the components of the 4x4 matrix ? Since the current four-vector is $$J^\mu = \bar{\psi} \gamma^\mu \psi$$ that suggests the $\mu$ index is a vector index here. Writing all the indices gives $$J^\mu = \bar{\psi}_a \gamma^{\mu a}{}_b \psi^b.$$ However, $$\bar{\psi} = \psi^\dagger \gamma^0$$ which makes it seem like I shouldn't view that as a vector index, because the zeroth component is being used without the rest!? I'm clearly confusing a lot of things. So how exactly should we view the components (and thus indices) of these objects? $$\gamma^{\mu a}{}_b,\ \psi^a,\ \bar{\psi}_a \text{ ... or } \bar{\psi^a} \text{ ?}$$ Answer: Gamma matrices are defined by the Clifford algebra $$ \{\gamma^\mu, \gamma^\nu\}= 2g^{\mu\nu}\mathbb I_n \,. $$ So, you see the index $\mu$ in $\gamma^\mu$ runs from $0$ up to $D-1$ where $D$ is the number of spacetime dimensions. It does not mean $\gamma^\mu$ is a vector. The $\mu$ index here only tells you how many gamma matrices there are. The dimensionality of the matrices themselves is $n= 2^{[D/2]}$ where $[\cdot]$ gives you the integer part of a number. For example, in $(1+2)-$dimensions, $D=3$ and hence the Dirac matrices are $2^{[1.5]}= 2$ dimensional, which you recognize are the Pauli matrices. The labels of the entries of the gamma matrices are known as spinor indices. So, in 3 dimensions, for example, the $a,b$ in $\gamma^\mu_{ab}$ would run from $1$ to $2$. What is a $4$-vector? It is something that transforms like a vector under Lorentz transformations $\Lambda$. Namely, $X^\mu$ is a vector if it transforms like $$ X^\mu\to {\Lambda^\mu}_\nu X^\nu \,. 
$$ That's the definition! Just having a $4$-dimensional column vector with Greek indices labelling its entries does not make it a Lorentz vector. It needs to transform the right way. Okay, so what is a spinor? A spinor is something that transforms like a spinor. Namely, $\psi$ is a spinor if it transforms, under a Lorentz transformation parametrized by $\omega_{\mu\nu}$, like $$ \psi \to \Lambda_{1/2} \psi\, \qquad (\Rightarrow \overline\psi \to \overline\psi\ \Lambda_{1/2}^{-1}\ ) \,, $$ where $\Lambda_{1/2} = \exp{(-\frac i2 \omega_{\mu\nu} S^{\mu\nu})}$ and $S^{\mu\nu} = \frac i4 [\gamma^\mu, \gamma^\nu]$ generates an $n-$dimensional representation of the Lorentz algebra. Let's make a remark on why we use something like $\overline \psi = \psi^\dagger \gamma^0$. Well, because we want to construct bilinear Lorentz scalars like $\psi^\dagger \psi$, but $\psi^\dagger \psi$ is not a Lorentz scalar precisely because the matrix $\Lambda_{1/2}$ is not unitary. Under a Lorentz transformation, $$ \psi^\dagger \to \psi^\dagger \Lambda_{1/2}^\dagger \ne \psi^\dagger \Lambda_{1/2}^{-1}\,.$$ However, we notice an interesting property of the gamma matrix $\gamma^0$. $$ \boxed{ \Lambda_{1/2}^\dagger \gamma^0 = \gamma^0 \Lambda_{1/2}^{-1} }$$ This immediately tells us that defining something like $\overline \psi \equiv \psi^\dagger \gamma^0$ will do the job. $$ \overline \psi \to (\psi^\dagger \Lambda_{1/2}^\dagger)\gamma^0 = \psi^\dagger \gamma^0 \Lambda_{1/2}^{-1} = \overline\psi \Lambda_{1/2}^{-1} $$ Because of this special property of $\gamma^0$, now we have that $\overline\psi\psi\to \overline\psi\psi$. You can check that the gamma matrices also satisfy the relation $$ \Lambda_{1/2}^{-1} \gamma^\mu_{ab} \Lambda_{1/2} = {\Lambda^\mu}_\nu \gamma^\nu_{ab}\,. $$ Understand that this is not a transformation of the gamma matrices under a Lorentz transformation. Gamma matrices are fixed constant matrices that form the basis of an algebra. They do not transform. 
The above is just a property of the gamma matrices due to them being generators of a particular representation of the Lorentz algebra. However, this relation allows you to take the $\mu$ index in $\gamma^\mu$ "seriously". Because, due to this you can immediately see that under a Lorentz transformation, the current $J^\mu := \overline\psi \gamma^\mu \psi= \overline\psi^a \gamma^\mu_{ab} \psi^b$ indeed transforms like a vector. $$ J^\mu \to {\Lambda^\mu}_\nu J^\nu \,.$$
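As a numerical sanity check of the defining relation $\{\gamma^\mu, \gamma^\nu\} = 2g^{\mu\nu}\mathbb{I}_n$, here is a sketch in the Dirac representation (one common choice; the answer itself does not fix a representation), using the mostly-minus metric $g = \mathrm{diag}(1,-1,-1,-1)$:

```python
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, sigma_i], [-sigma_i, 0]]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, si], [-si, Z2]]) for si in s]

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # mostly-minus metric g^{mu nu}

for mu in range(4):
    for nu in range(4):
        anticom = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anticom, 2 * eta[mu, nu] * np.eye(4))
print("Clifford algebra verified")
```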
{ "domain": "physics.stackexchange", "id": 47278, "tags": "tensor-calculus, representation-theory, spinors, dirac-matrices" }
Antimatter bomb
Question: I stumbled upon this Wikipedia article on antimatter weaponry. Being greatly appalled by the sad fact that large sums of money are being wasted on this, I could not stop myself from thinking for a moment about the physics behind it. If you somehow put together a gram of antimatter and a gram of matter all at the same instant and at high density (so the "annihilation efficiency" would be 100%), would there actually be any explosion? AFAIK, this would just produce large amounts of gamma photons, neutrinos etc., but there'd be very little (anti)matter left to absorb the energy in the form of kinetic energy. In other words -- it would be a radiation bomb. There wouldn't even be a flash of light to warn anyone. Would this indeed be the case? Or am I overlooking something here? Answer: Have a look at these cross section plots of proton proton scattering and anti-proton proton, where the anti proton has an order of magnitude higher probability of interacting. It is not true that most of the energy goes into radiation, it goes into creating particles, with an average multiplicity for annihilation at rest of about five charged particles. The interaction is strong and gluon mediated, the photons produced come from $\pi^0$ decays, direct photons are a higher order effect. Thus one gets an "explosion" as this image shows, once one obtains large numbers of such annihilations in a small space. The charged pions will eventually end up as electrons and muons carrying a lot of the kinetic energy of the reaction and a destructive power. The $\pi^0$s, about 1/3 of the charged number, will give two photons which on average will have enough energy to be destructive when hitting nuclei.
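For scale (a back-of-the-envelope addition, not part of the original answer): the total energy released by annihilating one gram of matter with one gram of antimatter follows directly from $E = mc^2$; how that energy is shared between particles and photons is the separate question addressed above.

```python
# Total rest-mass energy of 1 g matter + 1 g antimatter annihilating.
m = 2e-3                 # kg (one gram of each)
c = 2.998e8              # m/s, speed of light
E = m * c ** 2           # joules

TNT_TON = 4.184e9        # J per ton of TNT equivalent
print(f"{E:.2e} J ~ {E / TNT_TON / 1e3:.0f} kilotons of TNT")
```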
{ "domain": "physics.stackexchange", "id": 99010, "tags": "radiation, antimatter, explosions" }
The source of energy for the movement of tectonic plates
Question: In about 7th grade we learn that the center of the Earth is molten iron, and we learn about the movement of tectonic plates. That movement causes earthquakes, volcanoes, mountain formation, etc. All that activity expends energy, and it is all caused by the gravitational attraction of Earth's mass. A naive understanding would say this will continue forever, but that sounds like perpetual motion. How is the paradox resolved? Answer: The initial heat from the formation of Earth is slowly escaping into space. The interior of the Earth heated up by a combination of two mechanisms: losing gravitational potential energy from parts that went from a high-energy configuration (cloud of dust) to a low-energy one (clump of matter), and the decay of radioactive elements. This is ultimately the source of the energy for the movement of the plates. When it cools enough to make the interior solid (what happened e.g. with Mars), the plates will stop moving. Since the interior of Earth is liquid, its cooling results in chaotic movement of the liquid (convection). The liquid interior of the Earth drags on the tectonic plates by friction and causes them to move.
{ "domain": "physics.stackexchange", "id": 33498, "tags": "gravity, newtonian-gravity, energy-conservation" }
How to calculate the altitude of the Moon?
Question: With the known latitude coordinate of the observing position, how do I find the altitude of the Moon when it is highest, i.e. when it crosses the local meridian? Answer: The transit altitude of an object does not only depend on your latitude ($b$), but also on the declination of the object ($\delta$): $$h_\mathrm{tr} = \arcsin\left(\sin b \sin \delta + \cos b \cos\delta\right).$$
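A quick numerical sketch of this formula (the latitude and declination below are arbitrary example values):

```python
from math import asin, sin, cos, radians, degrees

def transit_altitude(lat_deg, dec_deg):
    """Transit (upper-culmination) altitude from latitude b and declination delta:
    h_tr = arcsin(sin b sin delta + cos b cos delta), i.e. 90 - |b - delta| degrees."""
    b, d = radians(lat_deg), radians(dec_deg)
    return degrees(asin(sin(b) * sin(d) + cos(b) * cos(d)))

# Example: observer at 52 deg N, Moon's declination +20 deg.
print(round(transit_altitude(52, 20), 6))  # 58.0
```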
{ "domain": "astronomy.stackexchange", "id": 7038, "tags": "the-moon, amateur-observing, observational-astronomy" }
What exactly is a mole?
Question: I have some trouble understanding exactly what a mole represents. As I understand, one unit mole is 1/12 of the mass of an atom of carbon-12 (thus it is the mass of one nucleon?). What is a mole, then? Answer: A mole is nothing more than a countable number of things. Specifically, it is approximately $6.02 \times 10^{23}$ things. Hence, you can have a mole of argon atoms, a mole of electrons, or a mole of house keys, all of which contain approximately $6.02 \times 10^{23}$ of their respective items. Note that each of these groups of things will have a different mass. What you are referring to as 1/12 of the mass of a $^{12}$C atom is not a mole, but an atomic mass unit.
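A small worked illustration of the mole as a counting unit (Avogadro's number rounded as in the answer; the 24 g sample is an arbitrary example):

```python
N_A = 6.02e23  # things per mole (Avogadro's number, rounded)

# 24 g of carbon-12, whose molar mass is 12 g/mol:
mass_g = 24.0
molar_mass_g = 12.0

moles = mass_g / molar_mass_g      # 2.0 mol
atoms = moles * N_A                # ~1.204e24 atoms
print(moles, atoms)
```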
{ "domain": "physics.stackexchange", "id": 1278, "tags": "units" }
Battery life with hot and cold wires
Question: If an AA battery was kept at a constant temperature and was connected in a simple circuit, will the battery drain faster if the wires are kept at a cold temperature or at a hot temperature? I understand that the cooler the wire is, the less resistance there will be for the electrons to flow through. My current thinking is that if the wires have less resistance the battery would drain faster, as the electrons aren't being affected as much as in the hot wire; however, if the wire were hot, would the electrons need more energy to go through the wire, so that the battery would drain faster? Answer: To answer this, one needs to understand a couple of somewhat independent concepts. First is that of resistance at cool temperatures. Assuming that we are talking of a regular conductor (metal) and a temperature variation of, say, 0 °C to 100 °C, the cooler conductor will indeed have a lower resistance. Now, the question of how much lower is a separate one, but let's assume that it is measurably lower. Let's write $R_1(T_1) < R_2(T_2)$ where $T_1<T_2$ are the temperatures. If the same battery is connected to these resistors, we will have $I_1(R_1) > I_2(R_2)$. Now let's turn to the second concept, namely that of battery capacity. From Wikipedia: A battery's capacity is the amount of electric charge it can deliver at the rated voltage. The more electrode material contained in the cell the greater its capacity. A small cell has less capacity than a larger cell with the same chemistry, although they develop the same open-circuit voltage.[30] Capacity is measured in units such as amp-hour (A·h). The rated capacity of a battery is usually expressed as the product of 20 hours multiplied by the current that a new battery can consistently supply for 20 hours at 68 °F (20 °C), while remaining above a specified terminal voltage per cell. For example, a battery rated at 100 A·h can deliver 5 A over a 20-hour period at room temperature.
The fraction of the stored charge that a battery can deliver depends on multiple factors, including battery chemistry, the rate at which the charge is delivered (current), the required terminal voltage, the storage period, ambient temperature and other factors.[30] The higher the discharge rate, the lower the capacity.[31] The relationship between current, discharge time and capacity for a lead acid battery is approximated (over a typical range of current values) by Peukert's law: $$t=\frac{Q_P}{I^k}$$ $Q_P$ is the capacity when discharged at a rate of 1 amp. $I$ is the current drawn from the battery (A). $t$ is the amount of time (in hours) that a battery can sustain. $k$ is a constant around 1.3. Using this law, you can see that for lower current, the discharge (drain) time is larger. Vice versa, for higher current, the discharge time is smaller. In other words, $t_1(T_1) < t_2(T_2)$: for the higher temperature, the discharge time is longer.
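Peukert's law from the quoted passage is easy to play with directly; the capacity and currents below are illustrative values, not data from the answer. Note that doubling the current more than halves the runtime:

```python
def discharge_time(q_p, current, k=1.3):
    """Peukert's law: t = Q_P / I**k, with Q_P the capacity at a 1 A discharge
    rate and k a battery-dependent constant (~1.3 for lead-acid)."""
    return q_p / current ** k

t1 = discharge_time(100, 1.0)   # 100.0 hours at 1 A
t2 = discharge_time(100, 2.0)   # ~40.6 hours at 2 A -- less than half of t1
print(t1, round(t2, 1))  # 100.0 40.6
```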
{ "domain": "physics.stackexchange", "id": 41564, "tags": "electricity, batteries" }
Rust decimal to roman numeral kata
Question: I just wrote this decimal to Roman numeral converter, inspired by Kevlin Henney's talk "Get Kata". I have been reading the Rust docs little by little and this is one of my first attempts at some code. I'd like to get feedback on style, general rust idioms and how I could have implemented this algorithm in a more succinct manner. Code also available here. use std::env; use std::io; use std::io::BufWriter; use std::io::Write; static BASE10_NUMERALS: [&str; 7] = ["I", "X", "C", "M", "X̄", "C̄", "M̄"]; static CENTRE_NUMERALS: [&str; 6] = ["V", "L", "D", "V̄", "L̄", "D̄"]; struct Bases { base: usize } impl Iterator for Bases { type Item = usize; fn next(&mut self) -> Option<usize> { self.base = self.base - 1; Some(self.base) } } fn iter_bases(largest_base: usize) -> Bases { Bases { base: largest_base } } fn encode((decimal_number, base, ): (char, usize, )) -> String { let digit = decimal_number.to_digit(10).unwrap(); let max_base = CENTRE_NUMERALS.len(); if base >= max_base { BASE10_NUMERALS[BASE10_NUMERALS.len() - 1] //This pow function is the main limiter for decimal size .repeat((10_u32.pow((base - max_base) as u32) * digit) as usize) } else { if digit == 9 { format!("{}{}" , if base == 3 { "Ī"} else { BASE10_NUMERALS[base] } , BASE10_NUMERALS[base + 1] ) } else if digit >= 5 { format!("{}{}", CENTRE_NUMERALS[ base], BASE10_NUMERALS[base] .repeat((digit - 5) as usize)) } else if digit == 4 { format!("{}{}", BASE10_NUMERALS[base], CENTRE_NUMERALS[base]) } else { // Less than 4 BASE10_NUMERALS[base].repeat(digit as usize) } } } fn main() { let args: Vec<String> = env::args().collect(); let mut writer = BufWriter::new(io::stdout()); let input = &args[1]; for roman_numeral in input .chars() .zip(iter_bases(input.len())) .map(encode) { writer.write(roman_numeral.as_bytes()) .expect("Unable to write to stdout"); } } Answer: cargo fmt and cargo clippy Use cargo fmt to format your code according to the official Rust Style Guide. 
Clippy's suggestions: collapse else { if .. } into else if ..; change self.base = self.base - 1 to self.base -= 1; use write_all instead of write when writing a slice of bytes, since the latter only makes one writing attempt: io::Write::write(_vectored) and io::Read::read(_vectored) are not guaranteed to process the entire buffer. They return how many bytes were processed, which might be smaller than a given buffer’s length. If you don’t need to deal with partial-write/read, use write_all/read_exact instead. (source) The interface Instead of passing in a tuple, it is preferable to pass separate parameters: fn encode(decimal_number: char, base: usize) -> String { // ... } Now, we notice that the interface is fairly curious. A better self-explanatory interface might simply be: fn to_roman(number: u32) -> String { // ... } which can be coded to encapsulate calls to encode while avoiding repetitive allocation. Grouping use declarations It's common to group use declarations together: use std::{ env, io::{self, BufWriter, Write}, }; The Bases utility Bases is not necessary — you can use (0..n).rev() to iterate from n - 1 to 0.
{ "domain": "codereview.stackexchange", "id": 40020, "tags": "rust, roman-numerals" }
Is entanglement necessary for quantum computation?
Question: Entanglement is often discussed as being one of the essential components that makes quantum different from classical. But is entanglement really necessary to achieve a speed-up in quantum computation? Answer: Short answer: yes One has to be a little bit more careful setting up the question. Thinking about a circuit as being composed of state preparation, unitaries, and measurements, it is always in principle possible to "hide" anything we want, such as entangling operations, inside the measurement. So, let us be precise. We want to start from a separable state of many qubits, and the final measurements should consist of single-qubit measurements. Does the computation have to transition through an entangled state at some point in the computation? Pure states Let's make the one further assumption that the initial state is a pure (product) state. In that case, the system must go through an entangled state. If it didn't, it is easy to simulate the computation on a classical computer because all you have to do is hold $n$ single-qubit pure states in memory, and update them one at a time as the computation proceeds. One can even ask how much entanglement is necessary. Again, there are many different ways that entanglement can be moved around at different times. A good model that provides a reasonably fair measure of the entanglement present is measurement-based quantum computation. Here, we prepare some initial resource state, and it is single-qubit measurements that define the computation that happens. This lets us ask about the entanglement of the resource state. There has to be entanglement and, in some sense, it has to be at least "two-dimensional", it cannot just be the entanglement generated between nearest neighbours of a system on a line [ref]. Moreover, one can show that most states of $n$ qubits are too entangled to permit computation in this way. Mixed states The caveat in all that I've said so far is that we're talking about pure states. 
For example, we can easily simulate a non-entangling computation on pure product states. But what about mixed states? A mixed state is separable if it can be written in the form $$ \rho=\sum_{i=1}^Np_i\rho^{(1)}_i\otimes\rho^{(2)}_i\otimes\ldots\otimes\rho^{(n)}_i. $$ Importantly, there is no limit on the value $N$, the number of terms in the sum. If the number of terms in the sum is small, then by the previous argument, we can simulate the effects of a non-entangling circuit. But if the number of terms is large, then (to my knowledge) it remains an open question as to whether it can be classically simulated, or whether it can give enhanced computation.
{ "domain": "quantumcomputing.stackexchange", "id": 273, "tags": "entanglement, speedup" }
Why can't we see Saturn's phases from earth?
Question: In the Astronomy Picture of the Day from August 8, 2020 ("Crescent Saturn"), the caption states "From Earth, Saturn never shows a crescent phase". Why is this? We can observe phases on planets such as Venus which have an apparent size a lot smaller than Saturn. Saturn is not tidally locked to the sun (only Mercury is). So why does the sunlit face of Saturn always face Earth? Answer: Phases are just different perceived illuminations of an object at different illumination and observing angles. If the observer is, with respect to the object, located in a similar direction to the light source shining on the object, then you should expect to see the vast majority of the object illuminated; if the observer is located in the opposite direction you would see the object back-lit; and if you are observing the object at right angles with respect to the direction of the light source you would see the object half-illuminated. Since Mercury and Venus are always inside Earth's orbit and move around the Sun at different rates than Earth does, the Earth (the observer) is able to locate itself at any angle with respect to the light source that shines on the observed planet. This means that you can see any phase of Mercury or Venus (except for a perfect 100% illuminated phase, due to the body of the Sun blocking the view). Here you have an example for Venus: Now think about what happens from the point of view of an observer on Earth for the exterior planets. The Earth will never have a chance to see the planet's back-lit side since there's no position in its orbit that would allow for this. As seen from the other planet, the Earth is always close to the Sun, so it can be seen almost exclusively during the day, which means that from the Earth you almost exclusively see illuminated regions of the planet. Mars is the closest exterior planet, so the Earth manages to gain enough elongation to see a bit of the night side, but it is a tiny fraction of the disk as viewed from here.
Here you have a picture of the phases of Mars as seen from the Earth: This is even worse for faraway planets. Saturn is so far away that, from its vantage point, the Earth is basically always close to the Sun (the light source). From the Earth, Saturn always seems fully illuminated, save for an extremely small margin that allows one to see a slim crescent of darkness in perfect conditions. Only with spacecraft like Cassini and Voyager have we been able to see what Saturn looks like from behind. The first time humanity did this was in 1980 with photographs like this one (from Voyager): Before that we had never seen the night side covering more than a percent of the disk. Even the shadow of Saturn cast over the rings is nearly impossible to spot from the Earth. Look at this amateur photograph that shows precisely a bit of that shadow over the rings behind Saturn (bottom-right part): That shadow is almost non-existent from Earth's point of view, and it's all because Earth's orbit is inside Saturn's and Saturn is far away from the Earth. The Earth is always so close to the line connecting the light source (the Sun) and the illuminated object (Saturn), that you shouldn't expect any more phases than "full" from here. Simple geometry.
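The geometry can be quantified (this numeric sketch is an addition, assuming circular orbits and Saturn at about 9.58 au): for a superior planet the maximum phase angle seen from Earth is $\arcsin(r_\mathrm{Earth}/r_\mathrm{planet})$, and the illuminated fraction of the disk at phase angle $\alpha$ is $(1+\cos\alpha)/2$.

```python
from math import asin, cos, degrees, radians

def max_phase_angle_deg(r_planet_au):
    """Largest Sun-planet-Earth angle for a superior planet, circular orbits."""
    return degrees(asin(1.0 / r_planet_au))

def illuminated_fraction(phase_deg):
    """Fraction of the disk lit at phase angle alpha: (1 + cos(alpha)) / 2."""
    return (1 + cos(radians(phase_deg))) / 2

alpha = max_phase_angle_deg(9.58)   # ~6.0 degrees for Saturn
k = illuminated_fraction(alpha)     # ~0.997: the disk always looks nearly full
print(round(alpha, 2), round(k, 4))  # 5.99 0.9973
```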
{ "domain": "astronomy.stackexchange", "id": 4718, "tags": "the-sun, saturn" }
On the Robertson uncertainty relation when $\big|\dfrac{1}{2} \langle\{A,B\}\rangle - \langle A \rangle \langle B \rangle \big|^2 \neq 0$
Question: The Robertson uncertainty relation is $\sigma^2_A \sigma^2_B \geq \big|\dfrac{1}{2} \langle\{A,B\}\rangle - \langle A \rangle \langle B \rangle \big|^2 + \big| \dfrac{1}{2i} \langle [A,B] \rangle \big|^2.$ Here $\sigma^2_X$ is the variance of the operator $X$ and $\{A,B\}$, $[A,B]$ are the anti-commutator and the commutator of the Hermitian operators $A$ and $B$, respectively. The uncertainty relation is more commonly presented in the form $\sigma^2_A \sigma^2_B \geq \big| \dfrac{1}{2i} \langle [A,B] \rangle \big|^2.$ There are common physical examples of this, e.g. $\sigma_x \sigma_p \geq \dfrac{\hbar}{2}$, but these examples have $\big|\dfrac{1}{2} \langle\{A,B\}\rangle - \langle A \rangle \langle B \rangle \big|^2=0$. I am trying to find a quantum system where the term $\big|\dfrac{1}{2} \langle\{A,B\}\rangle - \langle A \rangle \langle B \rangle \big|^2 \neq 0$, so that the lowest limit of the product of the variances of $A$ and $B$ has a dependence on the latter. So, to answer my question, it is necessary to give a possible physical system where $\big|\dfrac{1}{2} \langle\{A,B\}\rangle - \langle A \rangle \langle B \rangle \big|^2 \neq 0$ for $A$ and $B$ Hermitian. Any help or ideas are welcome. Answer: You might profit from calculating a few expectation values for, e.g. the oscillator, for which $a^\dagger |n\rangle= \sqrt{n+1} | n+1\rangle$. Take $A=a$ and $B=a^\dagger$, so that $$ [a,a^\dagger ]=1, \qquad \{ a,a^\dagger \} =a a^\dagger + a^\dagger a=1+2N. $$ Look at the first excited state, $|1\rangle$, so $a|1\rangle=|0\rangle$, so your expectation values are $$ \langle 1| a|1\rangle=\langle 1|a^\dagger| 1\rangle=0,\\ \langle 1| \{ a,a^\dagger \} |1\rangle= 3,\\ \langle 1| [a,a^\dagger ]|1\rangle= 1, $$ so that, for this state $$ \sigma_a^2 \sigma^2_{a^\dagger}\geq 9/4 +1/4= 10/4. $$ The anticommutator, of course, is not a constant, unlike the commutator, nor should you expect it to be.
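The expectation values above are easy to verify numerically with truncated ladder-operator matrices (the truncation dimension below is an arbitrary choice, large enough that the state $|1\rangle$ is unaffected):

```python
import numpy as np

dim = 6  # truncated Fock-space dimension (anything >= 3 works for state |1>)
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.conj().T

state = np.zeros(dim)
state[1] = 1.0  # the first excited state |1>

def expval(op):
    return state @ op @ state

anticomm = a @ adag + adag @ a   # {a, a^dagger}
comm = a @ adag - adag @ a       # [a, a^dagger] (exact except at the truncation edge)

# Matches <1|a|1> = <1|a+|1> = 0, <1|{a,a+}|1> = 3, <1|[a,a+]|1> = 1
# from the answer, up to floating-point rounding.
print(expval(a), expval(adag), expval(anticomm), expval(comm))
```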
{ "domain": "physics.stackexchange", "id": 48239, "tags": "quantum-mechanics, operators, heisenberg-uncertainty-principle" }
Proof that the commutator of angular momentum and 4-momentum is 0
Question: I have this commutator $[P^2,J_{\mu\nu}]$ that I'm supposed to prove is zero. If we expand it (given that $[P_{\alpha}, J_{\mu\nu}] = i(g_{\mu\alpha}P_{\nu} - g_{\nu\alpha}P_{\mu})$ and $[P_{\alpha}, P_{\beta}] = 0$) we end up with $$[P^2,J_{\mu\nu}] = [P_{\alpha}P_{\alpha},J_{\mu\nu}] = i(P_{\alpha}g_{\mu\alpha}P_{\nu} - P_{\alpha}g_{\nu\alpha}P_{\mu} + g_{\mu\alpha}P_{\nu}P_{\alpha} - g_{\nu\alpha}P_{\mu}P_{\alpha}).$$ This is supposed to be zero, but how do I prove it? If all indices are equal, or if all are different, because we are using Minkowski metric, this is straightforward. But for example in the case where $\mu=\alpha\neq\nu$, we end up with $$[P^2,J_{\mu\nu}] = [P_{\alpha}P_{\alpha},J_{\mu\nu}] = i(P_{\alpha}g_{\mu\alpha}P_{\nu} + g_{\mu\alpha}P_{\nu}P_{\alpha}).$$ How do I prove this is zero? Answer: You have written, $$[P^2,J_{\mu\nu}] = [P_{\alpha}P_{\alpha},J_{\mu\nu}] = i(P_{\alpha}g_{\mu\alpha}P_{\nu} - P_{\alpha}g_{\nu\alpha}P_{\mu} + g_{\mu\alpha}P_{\nu}P_{\alpha} - g_{\nu\alpha}P_{\mu}P_{\alpha}).$$ The $\alpha$ here is summed over, per the Einstein summation convention; I would really write it as, $$P^2 = P^\alpha P_\alpha = g^{\alpha\beta}P_\alpha P_\beta.$$ So the right hand side reduces to $i(2P_\mu P_\nu - 2P_\mu P_\nu) =0$. Always remember "conservation of indices." (There are some exceptions in some notational conventions.) Your left hand side is $[P^2,J_{\mu\nu}]$, there is no $\alpha$ index, so there is no free $\alpha$ index allowed on the right hand side either.
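To spell out the contraction the answer describes (my own filling-in of the intermediate steps, with indices kept properly raised): write $P^2 = g^{\alpha\beta}P_\alpha P_\beta$ and apply the Leibniz rule, $$ [P^2, J_{\mu\nu}] = g^{\alpha\beta}\big(P_\alpha [P_\beta, J_{\mu\nu}] + [P_\alpha, J_{\mu\nu}]P_\beta\big) = i\,g^{\alpha\beta}\big(P_\alpha(g_{\mu\beta}P_\nu - g_{\nu\beta}P_\mu) + (g_{\mu\alpha}P_\nu - g_{\nu\alpha}P_\mu)P_\beta\big). $$ Contracting with $g^{\alpha\beta}g_{\mu\beta} = \delta^\alpha_\mu$ and $g^{\alpha\beta}g_{\nu\beta} = \delta^\alpha_\nu$ gives $$ [P^2, J_{\mu\nu}] = i\left(P_\mu P_\nu - P_\nu P_\mu + P_\nu P_\mu - P_\mu P_\nu\right) = 0, $$ with no free $\alpha$ surviving on the right, as conservation of indices demands. Since $[P_\mu, P_\nu] = 0$, reordering the middle terms reproduces the answer's $i(2P_\mu P_\nu - 2P_\mu P_\nu) = 0$.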
{ "domain": "physics.stackexchange", "id": 65682, "tags": "homework-and-exercises, special-relativity, metric-tensor, commutator, poincare-symmetry" }
Implement get_or_create() with Rust and Diesel
Question: I just started learning Rust, and am trying to implement a function like Django's get_or_create. Now my implementation looks too verbose. I hope that Rust can be neater. Therefore, how can I implement this function in a less verbose way? There is probably a way to shorten the nested match constructions below? use log::warn; use diesel::prelude::*; use diesel::result; use crate::db::{get_connection, PgPool}; use crate::models::{NewUser, User}; pub fn get_or_create_user(pool: &PgPool, email: &str) -> User { use crate::schema::users; let new_user = NewUser { email }; let mut conn = get_connection(pool); let result = diesel::insert_into(users::table) .values(&new_user) .get_result(&mut conn); match result { Ok(user) => return user, Err(err) => match err { result::Error::DatabaseError(err_kind, info) => match err_kind { result::DatabaseErrorKind::UniqueViolation => { warn!( "{:?} is already exists. Info: {:?}. Skipping.", new_user, info ); // another query to DB to get existing user by email let user = user_by_email(pool, new_user.email); return user; } _ => { panic!("Database error: {:?}", info); } }, _ => { // TODO: decide how to deal with unexpected errors return User { id: 0, email: "".into(), }; } }, } } pub fn user_by_email(pool: &PgPool, user_email: &str) -> User { use crate::schema::users::dsl::*; let mut conn = get_connection(pool); let user = crate::schema::users::dsl::users .filter(email.eq(user_email)) .first(&mut conn) .unwrap(); return user; } Answer: welcome to the Rust community! get or create You may indeed make code readable by replacing multiple nested matches with a single match that has nested patterns. For example, we match on Err(DatabaseError(UniqueViolation, info)) and that grabs all errors that contain a DatabaseError variant of Diesel's Error enum, with an inner UniqueViolation variant of DatabaseErrorKind. We bind the second value within DatabaseError to info, so we can print the info super easily too. 
If, for example, the Error is something other than UniqueViolation, we fall through to the next match arm. The pattern sublanguage is like a language within a language -- you have to learn it and build your intuition about it. The result of our effort is super readable: match result { Ok(user) => return user, Err(DatabaseError(UniqueViolation, info)) => { warn!( "{:?} is already exists. Info: {:?}. Skipping.", new_user, info ); // another query to DB to get existing user by email user_by_email(new_user.email) } Err(DatabaseError(_, info)) => { panic!("Database error: {:?}", info); } _ => { // TODO: decide how to deal with unexpected errors User { id: 0, email: "".into(), } } } I had an idea that you may only build one query, which would use ON CONFLICT, and kill two birds with one stone. Unfortunately, the Diesel DSL does not seem to support ON CONFLICT (...) DO NOTHING RETURNING *. Other concerns Syntax nitpick: let user = schema::users::dsl::users .filter(email.eq(user_email)) .first(&mut conn) .unwrap(); return user; You may just return the user value directly, replacing the above code with this: schema::users::dsl::users .filter(email.eq(user_email)) .first(&mut conn) .unwrap() Result The result is available on my github: https://github.com/pczarn/codereview/tree/81d3fcddd3921bf1b4df4bb347be5dcad3de743f/2022/9/get_or_create I cleaned up your code, migrated to sqlite for local testing and this is what I got: extern crate diesel; mod schema; use diesel::sqlite::SqliteConnection; use diesel::prelude::*; use dotenvy::dotenv; use std::env; use log::warn; use diesel::prelude::*; use schema::users; #[derive(Debug, Insertable)] #[diesel(table_name = users)] struct NewUser<'a> { email: &'a str, } #[derive(Queryable)] pub struct User { id: i32, email: String, } pub fn get_or_create_user(email: &str) -> User { use diesel::result::{Error::DatabaseError, DatabaseErrorKind::UniqueViolation}; let new_user = NewUser { email }; let mut conn = get_connection(); let result = 
diesel::insert_into(users::table) .values(&new_user) .get_result(&mut conn); match result { Ok(user) => return user, Err(DatabaseError(UniqueViolation, info)) => { warn!( "{:?} is already exists. Info: {:?}. Skipping.", new_user, info ); // another query to DB to get existing user by email user_by_email(new_user.email) } Err(DatabaseError(_, info)) => { panic!("Database error: {:?}", info); } _ => { // TODO: decide how to deal with unexpected errors User { id: 0, email: "".into(), } } } } pub fn user_by_email(user_email: &str) -> User { use schema::users::dsl::*; let mut conn = get_connection(); let user = schema::users::dsl::users .filter(email.eq(user_email)) .first(&mut conn) .unwrap(); return user; } pub fn get_connection() -> SqliteConnection { dotenv().ok(); let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set"); SqliteConnection::establish(&database_url) .unwrap_or_else(|_| panic!("Error connecting to {}", database_url)) } fn main() { get_or_create_user("example@example.com"); get_or_create_user("example@example.com"); }
{ "domain": "codereview.stackexchange", "id": 43883, "tags": "rust" }
How is an airplane propeller screw pitch calculated?
Question: I know a 30" * 12" airplane propeller means the prop has a diameter of 30 inches and will travel 12 inches forward per revolution, but what I don't know is - what angle of attack does the propeller require to attain that screw pitch of 12". Forget the propeller twist (I figure that will make it more complicated). Let the 30" * 12" propeller have a constant AoA from root to tip but a tapering chord length from root to tip. So, at what AoA will the 30" * 12" specification hold true for the prop? Or is screw pitch the same as AoA (angle of attack)? Answer: An angle of attack of zero, just like the AoA of a woodscrew that you drive into a piece of timber. This assumes that blade twist is proportioned correctly, which is often not the case near the propeller hub since that would require fancy twisting in an area that does not matter much. By far the largest amount of thrust is generated near the outer portion of the propeller, so a good reference point for the general pitch angle is found there. The tip angle can be one such reference, since the tip speed appears in the equations for propeller thrust, power and torque. But the tip AoA may be modified to alleviate Mach compression effects, so the best reference point for generic propeller pitch is the 0.75 blade radius point.
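The zero-slip geometry behind the answer can be put into numbers. In one revolution a blade station at radius $r$ travels $2\pi r$ around its circle while advancing the pitch $p$ forward, so the blade angle there is $\theta = \arctan\!\big(p/(2\pi r)\big)$. A quick sketch of my own (not from the answer), using the 30" × 12" prop and the 0.75R reference station the answer mentions:

```python
import math

def pitch_angle_deg(pitch_in, radius_in):
    """Blade angle (deg) giving the stated geometric pitch at one radius.

    One revolution advances the blade pitch_in axially while this station
    travels 2*pi*radius_in around, so tan(theta) = pitch / (2*pi*r).
    """
    return math.degrees(math.atan2(pitch_in, 2.0 * math.pi * radius_in))

tip_r = 15.0  # 30" diameter -> 15" tip radius
print(pitch_angle_deg(12, 0.75 * tip_r))  # 0.75R station: ~9.63 deg
print(pitch_angle_deg(12, tip_r))         # tip: ~7.26 deg
```

So a 30" × 12" prop needs a blade angle of roughly 9.6° at 0.75R (about 7.3° at the tip) just to make good on its 12-inch geometric pitch, before any working angle of attack is added, which is why the blade must twist toward the root.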
{ "domain": "physics.stackexchange", "id": 43325, "tags": "propulsion" }
What's electropositive ion flux in a plasma?
Question: Is it similar to the plasma density? Or is it directly proportional to the plasma density? Answer: To me, the "electropositive ion flux" from this paper which the OP is referring to looks like the flux of positively charged ions, or possibly the flux of ions of species with an affinity towards becoming positively charged. In other words, it's the number of ions passing through a unit area per unit time, which should be proportional to the number density of positive ions.
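To make the proportionality concrete: a particle flux is a density times a velocity, $\Gamma = n v$. For positive ions arriving at a wall or sheath edge, a common choice of characteristic speed is the Bohm speed $u_B = \sqrt{eT_e/m_i}$. A rough illustrative sketch of my own (the density, temperature, and argon-ion values are assumed example numbers, not taken from the question's paper):

```python
import math

E = 1.602e-19    # elementary charge (C)
M_AR = 6.63e-26  # argon ion mass (kg)

def bohm_flux(n_i, Te_eV, m_i=M_AR):
    """Ion flux Gamma = n_i * u_B in ions per m^2 per s.

    u_B = sqrt(e*Te/m_i) is the Bohm speed, so the flux is directly
    proportional to the ion number density n_i.
    """
    u_b = math.sqrt(E * Te_eV / m_i)
    return n_i * u_b

print(bohm_flux(1e16, 3.0))  # ~2.7e19 m^-2 s^-1 for this example density
print(bohm_flux(2e16, 3.0))  # doubles when the density doubles
```

The linear scaling in n_i is the point: flux and density are not the same quantity, but for a fixed characteristic speed one is proportional to the other.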
{ "domain": "physics.stackexchange", "id": 62747, "tags": "plasma-physics" }
What are the downsides of reversing fan and filter direction on a Corsi-Rosenthal box so the fan blows air into the box?
Question: I am trying to come up with a way to clean the air in my one-car garage/DIY-woodshop, removing fine sawdust particles smaller than 1 micron, and thought that maybe one or two Corsi-Rosenthal boxes would do it. Could the fan orientation be reversed so air is blown into the box, with the filters also reversed so the air flows through them out into the room? Sawdust would then collect on the inside of the filters and disposal would be easier. What would happen in that scenario if the filters couldn't keep up with the fan? If, say, 2" thick filters didn't have enough air flow and what was needed were 4" thick filters? Could the force of the air from the fan dislodge the dust from the filters? Would "blowback" result, with dust coming back out of the box through the fan? Answer: In this link is what you are describing. One could add some legs and add a 5th filter opposite the fan so the static pressure is reduced and there is less risk of dust escaping the fan blade boundary. I hope this helps, and please update on how the reverse CR Box is performing if possible. Thanks. https://www.facebook.com/reel/688198306589401?mibextid=NnVzG8
{ "domain": "engineering.stackexchange", "id": 5332, "tags": "airflow, air-filtration" }
Determine whether input-output pairs represent something that could be a proper math function
Question: The input is read line by line from stdin. Input Format The first line contains an integer, T, denoting the number of test cases. The subsequent lines describe T test cases, and the input for each test case is as follows: The first line contains an integer, N, the number of (x,y) pairs in the test case. The N subsequent lines each contain two space-separated integers describing the respective x and y values for each ordered pair. Constraints \$1 \le T \le 5\$ \$2 \le N \le 100\$ \$0 \le x, y \le 500\$ x and y are both integers. Output Format On a new line for each test case, print YES if the set of ordered pairs represents a valid function, or NO if it does not. Sample Input 2 3 1 1 2 2 3 3 4 1 2 2 4 3 6 4 8 Sample Output YES YES Full problem statement I'm looking for feedback regarding ways this could be: more concise/clean, more idiomatic Haskell, or if there are algorithmic/mathematical tricks I missed that could further simplify the computation. import System.IO (readLn, getLine) import Control.Monad (replicateM, mapM_, liftM) import Data.List (nub) import qualified Data.Map.Strict as M data YesNo = YES | NO deriving Show boolToYesNo :: Bool -> YesNo boolToYesNo x = if x then YES else NO readInt :: IO Int readInt = readLn tuplefy :: [a] -> (a, a) tuplefy xs = case xs of [a, b] -> (a, b) _ -> error "each item must consist of 2 values." readPair :: [String] -> [(Int, Int)] readPair = map $ tuplefy . map (read :: String -> Int) . words getPairs :: Int -> IO [String] getPairs = flip replicateM getLine readTestCase :: Int -> IO [(Int, Int)] readTestCase = fmap readPair . 
getPairs readTestCases :: Int -> IO [[(Int, Int)]] readTestCases = flip replicateM $ readInt >>= readTestCase printAnswers :: Show a => [a] -> IO () printAnswers = mapM_ print isSingletonSet :: Eq b => [b] -> Bool isSingletonSet xs = case nub xs of [x] -> True _ -> False listToMultiMap :: (Ord k, Eq k, Eq v) => [(k, v)] -> M.Map k [v] listToMultiMap ((k, v):vs) = toMap (M.fromList [(k, [v])]) vs where toMap m xs = case xs of [] -> m (k0, v0):ys -> case M.lookup k0 m of Nothing -> toMap (M.insert k0 [v0] m) ys Just v0s -> toMap (M.insert k0 (v0:v0s) m) ys functionOrNot :: [(Int, Int)] -> YesNo functionOrNot tuples = boolToYesNo $ foldl1 (&&) $ M.elems $ M.map isSingletonSet $ listToMultiMap tuples main :: IO () main = getAnswers >>= printAnswers where getAnswers = (liftM . map) functionOrNot $ readInt >>= readTestCases Answer: Your monadic helper functions are all one-offs and even if they were well-named (no worries, naming just gets harder as you get more experienced, there's a reason Adam was told to do it), I'd still think it better to inline them. Then the main ends up cluttered with pointfreed binds and I turned that into a do block. (I did reorder the monadic effects a little, moving the output of each answer in front of the reading of the next question, but I found that sacrifice worthy.) I'm not sure you need to try your hand at error-handling in tuplefy - a badly formatted input file should be as likely to have a typo in its number of words in a line as in its stated number of pairs in a function. listToMultiMap - (k, v) is no different from the other elements of vs, so you could go listToMultiMap = toMap M.empty. toMap is folding left, bad style. You can fold right like: toMap [] = M.empty toMap ((k0, v0):ys) = case M.lookup k0 m of Nothing -> M.insert k0 [v0] m Just v0s -> M.insert k0 (v0:v0s) m where m = toMap ys and won't need to carry a parameter into the recursive calls. 
And of course this can be written in terms of foldr: listToMultiMap = foldr foo M.empty where foo (k0, v0) m = case M.lookup k0 m of Nothing -> M.insert k0 [v0] m Just v0s -> M.insert k0 (v0:v0s) m And we could reduce the duplication in the last two lines there, but then again this whole thing is already handled by Data.Map's fromListWith. {-# LANGUAGE TupleSections #-} import System.IO (readLn, getLine) import Control.Monad (replicateM, replicateM_) import qualified Data.Map.Strict as M import Data.Bool (bool) noDuplicates :: Ord a => [a] -> Bool noDuplicates = all (==1) . M.fromListWith (+) . map (,1) -- = all ((==1) . length) . M.fromListWith (++) . map (,["Why are you looking at the ordinate?"]) -- = and . M.fromListWith (\_ _ -> False) . map (,True) main :: IO () main = do functioncount <- readLn replicateM_ functioncount $ do paircount <- readLn abscissae <- replicateM paircount (read . head . words <$> getLine :: IO Int) putStrLn $ bool "NO" "YES" $ noDuplicates abscissae It is unfortunate for pointfreedom that the first parameter of replicateM indeed should be Int. There must be a better way than flip. Perhaps a new infix operator, semantically id, that has highest precedence to the left, and lowest to the right, to simulate enclosing brackets around its entire right side and disappearing?
{ "domain": "codereview.stackexchange", "id": 19339, "tags": "programming-challenge, haskell, functional-programming" }
Based on a file containing people's names, output some statistics and generate some modified names
Question: Noobie here. I applied for a job as a Jr. engineer and they gave me this task. After they read through my solution they rejected my application because the code was hard to follow. I'd like to get an idea of what they meant by that and how I can improve the approach I took. The full assignment is below. Write a program that processes the included example file that contains people's names, outputs some statistics and generates updated output. Input: An arbitrary file with the same format as the attached example file. Your program will be tested on much (e.g. 1000x) larger files. Example lines from file: Graham, Mckenna -- ut Voluptatem ipsam et at. Marvin, Garfield -- non Facere et necessitatibus animi. McLaughlin, Mariah -- consequatur Eveniet temporibus ducimus amet eaque. Lang, Agustina -- pariatur Unde voluptas sit fugit. Bradtke, Nikko -- et Maiores ab officia sed. Adams, Luis -- error Repellendus alias officia amet et perspiciatis. Lehner, Matilde -- nesciunt Incidunt et ut necessitatibus porro. Ortiz, Anita -- fuga Tempore eos et hic. Koch, Berry -- vel Laborum perferendis inventore eveniet. Cartwright, Nicolas -- et Optio aliquid earum exercitationem vitae fugit. Fisher, Elmo -- non Ipsum provident nobis explicabo voluptas ipsa aperiam. Output: The unique count of full, last, and first names (i.e. duplicates are counted only once) The ten most common last names (the names and number of occurrences) The ten most common first names (the names and number of occurrences) A list of 25 completely unique names (using names within the file) A list of 25 modified names (must be new names made up of the 25 names from list number 4. 
Must be made by switching first and last names) var fs = require('fs'); var async = require('async'); var readline = require('readline'); var prompt = require('prompt'); var fullNameHash = {}; var firstNameHash = {}; var lastNameHash = {}; var listSizeN = []; var rejectList = []; var howMany = 25; var fileName = 'yesware-data.txt'; var getUserInput = function(cb) { prompt.start(); prompt.get(['fileName', 'listLength'], function (err, result) { console.log(' fileName: ' + result.fileName); fileName = result.fileName; console.log(' listLength: ' + result.listLength); howMany = result.listLength; console.log(fileName); cb(result); }); }; var readFile = function (processLine, callback) { var lineReader = readline.createInterface({ input: fs.createReadStream(fileName) }); lineReader.on('line', function (line) { processLine(line, lineReader); }); lineReader.on('close', function() { if (callback) callback(); }); }; var buildListSizeN = function(line, listSizeN) { var fullName = parseFullName(line); if(fullName){ fullName = fullName.trim(); splitNames(fullName, function(fullName) { var firstOrLastNotInArray = (!isInArray(fullName[0], listSizeN) && !isInArray(fullName[1], listSizeN)); var firstOrLastInRejectList = (isInArray(fullName[0], rejectList) || isInArray(fullName[1], rejectList)); if (firstOrLastNotInArray && !firstOrLastInRejectList) { var fullNameString = fullName[1] + " " + fullName[0]; // FirstName + lastName listSizeN.push(fullNameString); } else { var fullNameString = fullName[1] + " " + fullName[0]; // FirstName + lastName rejectList.push(fullNameString); } }); } return listSizeN; }; var isInArray = function(target, list) { if (typeof list === 'object'){ list = JSON.stringify(list); } if (list.indexOf(target) > -1) { return true; } return false; }; var getNamesWithCounts = function(list, HashWithCounts) { var namesWithCounts = {}; list.forEach(function(name) { var count = HashWithCounts[name]; namesWithCounts[name] = count; }); return namesWithCounts; }; var 
getFirstTen = function(list) { return list.slice(0,10); }; var parseFullName = function(line) { var ifDashes = line.indexOf('--'); if (ifDashes > 0 && line){ var fullName = line.substring(0, line.indexOf('--')); var firstName = fullName.split(' ').slice(0, -1).join(' '); var lastName = fullName.split(' ').slice(-1).join(' '); return firstName + lastName; } }; var splitNames = function(fullName, cb) { if (fullName) { var fullNameArray = fullName.split(', '); cb(fullNameArray); } }; // Input will be ex. ['Morales', 'Sua'] var hashFirstLastandFullNames = function(fullNameArray) { var firstName = fullNameArray[1]; var lastname = fullNameArray[0]; if (lastname in fullNameHash) { fullNameHash[lastname].push(firstName); } else { fullNameHash[lastname] = [firstName]; } if (firstName in firstNameHash) { firstNameHash[firstName] = firstNameHash[firstName] +=1; } else { firstNameHash[firstName] = 1; } if (lastname in lastNameHash) { lastNameHash[lastname] = lastNameHash[lastname] +=1; } else { lastNameHash[lastname] = 1; } }; var sizeOfObject = function(object) { return Object.keys(object).length; }; var sortObjectByValue = function(object) { var keys = []; Object.keys(object) .map(function (key) { return [key, object[key]]; }) .sort(function (first, second) { return second[1] - first[1]; }) .forEach(function (each) { keys.push(each[0]); }); return keys; }; var logResults = function(firstNameHashSize,lastNameHashSize,fullNameHashSize,uniqueList, modifiedList){ console.log("The unique count of full names: ", firstNameHashSize); console.log("The unique count of last names: ", lastNameHashSize); console.log("The unique count of first names: ", fullNameHashSize); console.log("**************************************************************"); var mostCommonFirstNames = sortObjectByValue(firstNameHash); var firstTenFNames = getFirstTen(mostCommonFirstNames); console.log("10 Most Common First Names: ", getNamesWithCounts(firstTenFNames,firstNameHash)); var mostCommonLastNames = 
sortObjectByValue(lastNameHash); var firstTenLNames = getFirstTen(mostCommonLastNames); console.log("10 Most Common Last Names: ", getNamesWithCounts(firstTenLNames,lastNameHash)); console.log("\n A list of " + howMany + " completely unique names: " + JSON.stringify(uniqueList, null, 2)); console.log("\nA list of " + howMany + " modified names: " +JSON.stringify(modifiedList, null, 2)); }; var createUniqueListFromFile = function(cb) { var uniqueList = []; readFile( function (line, lineReader) { uniqueList = buildListSizeN(line, listSizeN); if (uniqueList.length < howMany) { // do nothing } else { lineReader.close(); } }, function(){ cb(uniqueList); }); }; var swapFullNames = function(nameOne, nameTwo) { nameOne = nameOne.split(' '); nameTwo = nameTwo.split(' '); var newNameOne = nameOne[0] + " " + nameTwo[1]; var newNameTwo = nameTwo[0] + " " + nameOne[1]; return [newNameOne, newNameTwo]; }; var swapNamesInList = function(list){ var listLength = list.length; for (var i = 0; i < listLength; i++) { if (listLength[i] && list[i+1]) { var newNames = swapFullNames(list[i], list[i+1]); list[i] = newNames[0]; list[i+1] = newNames[1]; i++; } else { // do when it's the last item var newNames = swapFullNames(list[i], list[0]); list[i] = newNames[0]; list[0] = newNames[1]; } } return list; }; var main = (function() { async.series([ function(callback) { console.log("START"); getUserInput(function(){ callback(); }); }, function(callback) { var sizeOfFirstNames; var sizeOfLastNames; var sizeOfFullNames; readFile( function (line) { var fullName = parseFullName(line); splitNames(fullName, function(fullName) { hashFirstLastandFullNames(fullName); }); }, function(){ sizeOfFirstNames = sizeOfObject(firstNameHash); sizeOfLastNames = sizeOfObject(lastNameHash); sizeOfFullNames = sizeOfObject(fullNameHash); callback(null, sizeOfFirstNames, sizeOfLastNames, sizeOfFullNames); }); }, function(callback){ createUniqueListFromFile(function(uniqueList){ var tempUniqueList = uniqueList.slice(0); 
var modifiedNames = swapNamesInList(tempUniqueList); callback(null, uniqueList, modifiedNames); }); }], function(err, result) { result.shift(); var listSizes = result[0]; var firstNameHashSize = listSizes[0]; var lastNameHashSize = listSizes[1]; var fullNameHashSize = listSizes[2]; var uniqueListAndModifiedList = result[1]; var uniqueList = uniqueListAndModifiedList[0]; var modifiedList = uniqueListAndModifiedList[1]; logResults(firstNameHashSize,lastNameHashSize,fullNameHashSize,uniqueList, modifiedList); }); Answer: The essential issue here is that there's just too much code. The complexity of the problem doesn't merit over 200 lines of code. You're transforming some regularly formatted text into a list of names. Then answering some questions about those lists. Critiquing your code line by line just won't be that helpful in this case, because, although there are some things to improve, what you really want to do is just take a simpler approach at a high level. Here's one (of many) ways you might do that: // a few object utilities /////////////////////////// const length = o => Object.keys(o).length; const byCount = (a, b) => b[1] - a[1]; const top10 = o => Object.entries(o).sort(byCount).map(x => x[0]).slice(0,10); function addItem(countObj, key) { const newKey = (countObj[key] === undefined); if (newKey) countObj[key] = 0; countObj[key]++; } // get the name counts /////////////////////////// const lineReader = require('readline').createInterface({ input: require('fs').createReadStream('names.txt') }); const [firstNames, lastNames, fullNames] = [{}, {}, {}]; lineReader.on('line', function (line) { const matches = line.match(/(.*), (.*) -- .*/); if (!matches) return; const [_, last, first] = matches; addItem(firstNames, first); addItem(lastNames, last); addItem(fullNames, first + ' ' + last); }); // print results /////////////////////////// lineReader.on('close', function (line) { console.log('Unique first: ', length(firstNames)); console.log('Unique last: ' , 
length(lastNames)); console.log('Unique full: ' , length(fullNames)); console.log('Top 10 first: ', top10(firstNames)); console.log('Top 10 last: ' , top10(lastNames)); }); Note that the data structure I'm using here is a plain object, whose keys are the names and whose values are the number of times that name appears. I'll leave the final questions as an exercise, because it wasn't entirely clear to me what those questions were asking.
{ "domain": "codereview.stackexchange", "id": 22471, "tags": "javascript, beginner, interview-questions, file, async.js" }
Type converter framework (v2)
Question: This is the second version of my type converter framework. The previous one can be found here: Type creator service & framework In this version I mostly implemented what @Dmitry Nogin suggested in his great reviews. I also extended it so that each converter has access to all the registered converters in case it needs them, like for creating collections. After giving it another thought I changed my mind again and went back to naming them just converters :-) neither activator, nor factory nor anything else seemed to be ok. The framework's goals are: super easy to use - the user needs to override only one method super easy to extend - the user can create any converter he wants and also use any registered converter type safe can be used without generics - types are resolved dynamically from properties of other classes at runtime Base class for all converters: public abstract class TypeConverter { public CompositeConverter CompositeConverter { get; internal set; } public abstract bool TryConvert(Type type, object arg, CultureInfo culture, out object instance); } Composite converter to hold multiple converters: public class CompositeConverter : TypeConverter { private CompositeConverter() { } private CompositeConverter(CompositeConverter compositeConverter, TypeConverter typeConverter) { Converters = compositeConverter.Converters .Concat(new TypeConverter[] { typeConverter }).ToArray(); // point every converter at the new composite foreach (var converter in Converters) { converter.CompositeConverter = this; } } public TypeConverter[] Converters { get; private set; } = Enumerable.Empty<TypeConverter>().ToArray(); public CompositeConverter Register<TConverter>() where TConverter : TypeConverter, new() { return (this + new TConverter()); } public override bool TryConvert(Type type, object arg, CultureInfo culture, out object instance) { foreach (var converter in Converters) { if (converter.TryConvert(type, arg, culture, out instance)) { return true; } } instance = null; return 
false; } public object Convert(Type type, object arg, CultureInfo culture) { object instance; if (!TryConvert(type, arg, culture, out instance)) throw new NotSupportedException(); return instance; } public static CompositeConverter Create() => new CompositeConverter(); public static CompositeConverter operator +(CompositeConverter left, TypeConverter right) => new CompositeConverter(left, right); } Base converter for user converters: public abstract class TypeConverter<TArg, TResult> : TypeConverter { public override bool TryConvert(Type type, object arg, CultureInfo culture, out object instance) { instance = type.IsAssignableFrom(typeof(TResult)) && arg is TArg ? Convert((TArg)arg, culture) : (object)null; return instance != null; } public abstract TResult Convert(TArg value, CultureInfo culture); } Several converters that I wrote to test the new architecture: public class StringToInt32Converter : TypeConverter<string, int> { public override int Convert(string value, CultureInfo culture) { return int.Parse(value); } } public class Int32ToStringConverter : TypeConverter<int, string> { public override string Convert(int value, CultureInfo culture) { return value.ToString(culture); } } public class SingleToStringConverter : TypeConverter<float, string> { public override string Convert(float value, CultureInfo culture) { return value.ToString(culture); } } public class StringToEnumConverter<TEnum> : TypeConverter<string, TEnum> { public StringToEnumConverter() { // enum constraints on TEnum are not supported so we need to check the type here if (typeof(TEnum).BaseType != typeof(Enum)) { throw new InvalidOperationException("TEnum must be of Enum type."); } } public override TEnum Convert(string value, CultureInfo culture) { return (TEnum)Enum.Parse(typeof(TEnum), value); } } public class StringToListConverter<T> : TypeConverter<IEnumerable<string>, List<T>> { public override List<T> Convert(IEnumerable<string> values, CultureInfo culture) { // use other converters to 
create a list of T return values.Select(v => (T)CompositeConverter.Convert(typeof(T), v, culture)).ToList(); } } Usage: var compositeTypeConverter = CompositeConverter .Create() .Register<StringToInt32Converter>() .Register<Int32ToStringConverter>() .Register<StringToEnumConverter<ConsoleColor>>() .Register<StringToListConverter<int>>() .Register<SingleToStringConverter>(); var text1 = compositeTypeConverter.Convert( typeof(int), "123", CultureInfo.InvariantCulture); var int1 = compositeTypeConverter.Convert( typeof(string), 123, CultureInfo.InvariantCulture); var float1 = compositeTypeConverter.Convert( typeof(string), 123.456f, CultureInfo.InvariantCulture); var consoleColor1 = compositeTypeConverter.Convert( typeof(ConsoleColor), "Black", CultureInfo.InvariantCulture); var list1 = compositeTypeConverter.Convert(typeof( List<int>), new[] { "1", "3", "7" }, CultureInfo.InvariantCulture); These examples include only StringToSomething converters, but in the final project there will also be ByteToSomething converters; that's why one of the requirements is to not only support strings. In the final project I will add the other helpers suggested in the first reviews because I first wanted to build a proof-of-concept and then add the nice-to-haves. I'll also make the culture parameter optional and default to the invariant culture. Answer: Thanks for sharing – it is an interesting problem :) I would go with mutable state only when it is required for performance optimization. Immutability almost always makes code way cleaner and more maintainable. We should not depend on implementation – let’s prefer abstraction where possible. I mean we should work with TypeConverter, not CompositeConverter, to define a service. Here is an example. 
Demo code: class Program { static void Main(string[] args) { TypeConverter converter = TypeConverter.Default .Register<StringToInt32Converter>(); Console.WriteLine( converter.Convert( typeof(int), "42", CultureInfo.InvariantCulture)); } } Where: public class StringToInt32Converter : TypeConverter<string, int> { protected override int Convert(string arg, ConversionContext context) => int.Parse(arg, context.Culture); } Now library. Let’s define our abstraction as clean as possible: public abstract class TypeConverter { public static readonly TypeConverter Default = CompositeConverter.Empty; protected internal abstract bool TryConvert( ConversionContext context, object arg, out object instance); } And all the API with multiple overloads will come as two sets of extension methods – I really like this trick :) public static class Composition { public static TypeConverter Register<TConverter>(this TypeConverter that) where TConverter : TypeConverter, new() => that.Register(new TConverter()); // etc… // base method to be used above public static TypeConverter Register(this TypeConverter that, TypeConverter converter) => new CompositeConverter(that, converter); } And: public static class Conversions { public static object Convert(this TypeConverter converter, Type type, object arg, CultureInfo culture) { object instance; if (!converter.TryConvert(type, arg, culture, out instance)) throw new NotSupportedException(); return instance; } // etc // base method to be used above public static bool TryConvert(this TypeConverter converter, Type type, object arg, CultureInfo culture, out object instance) => converter.TryConvert(new ConversionContext(converter, type, culture), arg, out instance); } As you see – API references TypeConverter abstraction only. 
Helper class (to reduce amount of arguments): public class ConversionContext { public ConversionContext(TypeConverter service, Type type, CultureInfo culture) { Service = service; Type = type; Culture = culture; } public TypeConverter Service { get; } public Type Type { get; } public CultureInfo Culture { get; } } Now simple immutable composite converter: public class CompositeConverter : TypeConverter { public static readonly TypeConverter Empty = new CompositeConverter(); public CompositeConverter(params TypeConverter[] converters) { Converters = converters; } protected internal sealed override bool TryConvert(ConversionContext context, object arg, out object instance) { instance = null; foreach (var converter in Converters) if (converter.TryConvert(context, arg, out instance)) return true; return false; } TypeConverter[] Converters { get; } } And a super-type for your business objects: public abstract class TypeConverter<TArg, TResult> : TypeConverter { protected internal sealed override bool TryConvert(ConversionContext context, object arg, out object instance) { var match = context.Type.IsAssignableFrom(typeof(TResult)) && arg is TArg; instance = match ? 
(object)Convert((TArg)arg, context) : null; return match; } protected abstract TResult Convert(TArg arg, ConversionContext context); } UPDATE - using context.Service property TypeConverter converter = TypeConverter.Default .Register<StringToInt32Converter>() .Register<StringArrayToInt32ArrayConverter>(); Console.WriteLine( converter.Convert( typeof(int[]), new[] { "42" }, CultureInfo.InvariantCulture)); where: public class StringToInt32Converter : TypeConverter<string, int> { protected override int Convert(string arg, ConversionContext context) => int.Parse(arg, context.Culture); } public class StringArrayToInt32ArrayConverter : TypeConverter<string[], int[]> { protected override int[] Convert(string[] arg, ConversionContext context) => arg .Select(s => (int)context.Service.Convert(typeof(int), s, context.Culture)) .ToArray(); }
{ "domain": "codereview.stackexchange", "id": 20428, "tags": "c#, parsing, type-safety, framework" }
Support for an interactive form with multiple tabs
Question: I just failed in a JavaScript test and I would really appreciate some comments of how I can improve so I can keep learning. The test was about making a form interactive, where the code should do things like changing to the next tab while showing not visible content, validate email, etc. I have received the following feedback to consider: [-] caching (no jquery selector is cached)---- [-] performance optimization (nope, usage of jquery each instead of a native loop)--- [-] reusable code (hardly, as commented above)--- [+] clean code and good structure--- [ ] Extra points for applied design pattern $("<link rel='stylesheet' href='css/ui-lightness/jquery-ui-1.10.1.custom.css' type='text/css' media='screen' />").insertAfter("[type='text/css']"); // CSS de Jquery UI function isValidEmailAddress(emailAddress) { var pattern = new RegExp(/^((([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+(\.([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+)*)|((\x22)((((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(([\x01-\x08\x0b\x0c\x0e-\x1f\x7f]|\x21|[\x23-\x5b]|[\x5d-\x7e]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(\\([\x01-\x09\x0b\x0c\x0d-\x7f]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))))*(((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(\x22)))@((([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.)+(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.?$/i); return pattern.test(emailAddress); }; function setTabOn(stepNumber) { stepNumber = parseInt(stepNumber); $("#step" + stepNumber + "_tab").addClass("on"); $("#step" + stepNumber + "_tab").click(function() { $(".step").hide(); $("#step" + 
stepNumber).show(); $("#steps a.active").removeClass("active"); }); } function nextStep(stepNumber) { stepNumber = parseInt(stepNumber) + 1; $(".step").hide(); $("ol#steps li a").removeClass("active"); $("#step" + stepNumber).show(); $("#step" + stepNumber + "_tab").addClass("active"); } $(document).ready(function() { $(".back").click(function() { var idBack = $(this).attr("id"); var idBack = idBack.match(/\d/) - 1; $(".step").hide(); $("#step" + idBack).show(); }); $("#step2_confirm_button").click(function() { $("#step2_confirm").dialog("destroy"); nextStep(2); setTabOn(2); }); $("#step2_cancel_button").click(function() { $("#step2_confirm").dialog("destroy"); }); $(".step").hide(); $("#step1").show(); $("#step1_tab").addClass("active"); $("#step2_confirm").hide(); $("#step1_next").click(function() { var flagFirstStepError = 0; var flagFirstStepEmpty = 0; var flagFirstStepEmail = 0; $("#step1_errors").html(""); $(".required").each(function() { if ($(this).val() == "") { flagFirstStepError = 1; return false } else { flagFirstStepEmpty = 1; } }); // end each if (flagFirstStepError == 1) { $("#step1_errors").append("We are sorry, but you need to enter all the required fields in order to proceed. 
The following information is missing: <br /><ol>"); $(".required").each(function() { if ($(this).val() == "") { var labelId = $(this).attr("id"); var y = $("#step1 label[for=" + labelId + "]").html(); $("#step1_errors").append("<li>You must fill the " + y + " field</li>"); } }); // end each $("#step1_errors").append("</ol>"); } var userEmail = $("#email").val(); if (!isValidEmailAddress(userEmail)) { $("#step1_errors").append("<p> You must provide a valid email adress"); flagFirstStepEmail = 0; } else { flagFirstStepEmail = 1; } if (flagFirstStepEmail + flagFirstStepEmail == 2) { nextStep(1); setTabOn(1); } }); //end click $("#step2_next").click(function() { $("#confirm_list li").remove(); $("#share_list li label").find("input:checked").each(function() { var x = $(this).val(); $("#confirm_list").append("<li>" + x + "</li>"); }); $("#step2_confirm").dialog(); }); }); Answer: Initial notes [-] caching (no jquery selector is cached) You should be creating variables that store the elements you're selecting with jQuery. The reason for this is that when you call $("#step" + stepNumber + "_tab") for example, jQuery will determine what type of selector it is (id), and then call document.getElementById("step" + stepNumber + "_tab"). This is particularly important when you're selecting multiple elements. [-] performance optimization (nope, usage of jquery each instead of a native loop) So you used $.each, what is the problem with that? Well it is doing a tonne of other stuff just so you can have some nice convenient syntax. For systems you're trying to optimise you should probably stay away from it, prefer this: var required = $(".required"); for (var i = 0; i < required.length; i++) { // object is required[i] } As you can see from this jsPerf, a native for loop is much faster than $.each. You can see the source code for each at this StackOverflow post. [-] reusable code (hardly, as commented above) The main issue here is that your document ready is a whopping 85 lines long.
You should be splitting it up into logical groupings and placing it in functions. This is something that a lot of new programmers have trouble with and I never really understood until I saw a great example. Consider a function of 85 lines of code vs: $(document).ready(function () { setupItemA(); setupItemB(); setupItemC(); setupItemD(); }); This is much more readable, modular and reusable. [+] clean code and good structure This ties in a lot with the above point 'reusable code' in my opinion. I assume your teacher was marking this point on consistency, indentation, appropriate calls, etc. which it seems to do pretty well. [ ] Extra points for applied design pattern This would depend on what design patterns you have been learning in your course. Code walkthrough I'll now go through my implementation, line-by-line. // Ideally this should go in your HTML page, I assume it wasn't an option though. $("<link rel='stylesheet' href='css/ui-lightness/jquery-ui-1.10.1.custom.css' type='text/css' media='screen' />").insertAfter("[type='text/css']"); // CSS de Jquery UI // This regex wouldn't be acceptable in a lot of companies as it's huge and difficult // to read. A link to the source would be good in this situation.
function isValidEmailAddress(emailAddress) { var pattern = new RegExp(/^((([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+(\.([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+)*)|((\x22)((((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(([\x01-\x08\x0b\x0c\x0e-\x1f\x7f]|\x21|[\x23-\x5b]|[\x5d-\x7e]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(\\([\x01-\x09\x0b\x0c\x0d-\x7f]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))))*(((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(\x22)))@((([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.)+(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.?$/i); return pattern.test(emailAddress); }; // If your spacing is more consistent, your code will look cleaner. More than one // line in a row can make code look messy. function setTabOn(stepNumber) { // This conversion is redundant, you're converting an int to an int and then using // it in a string. JavaScript does conversions like this implicitly anyhow //stepNumber = parseInt(stepNumber); // Caching var tab = $("#step" + stepNumber + "_tab"); tab.addClass("on"); tab.click(function() { $(".step").hide(); $("#step" + stepNumber).show(); $("#steps a.active").removeClass("active"); }); } function nextStep(stepNumber) { stepNumber = parseInt(stepNumber) + 1; $(".step").hide(); // You don't need to include the ol in 'ol#steps', it makes your code more specific // and thus less reusable. 
$("#steps li a").removeClass("active"); $("#step" + stepNumber).show(); $("#step" + stepNumber + "_tab").addClass("active"); } $(document).ready(function() { // Your indentation went a little funny in your document ready $(".back").click(backClick); $("#step2_confirm_button").click(step2Confirm); $("#step2_cancel_button").click(step2Cancel); $("#step1_next").click(step1NextClick); $("#step2_next").click(step2NextClick); // Shuffled around in better order $(".step").hide(); $("#step1").show(); $("#step1_tab").addClass("active"); $("#step2_confirm").hide(); }); function step2Confirm() { $("#step2_confirm").dialog("destroy"); nextStep(2); setTabOn(2); } function step2Cancel() { $("#step2_confirm").dialog("destroy"); } function backClick() { // You defined idBack twice here? //var idBack = $(this).attr("id"); //var idBack = idBack.match(/\d/) - 1; // Use this.id instead of the jQuery alternative, it's much faster var idBack = this.id; // I'm not sure what your regex was trying to do $(".step").hide(); $("#step" + idBack).show(); } function step1NextClick() { var flagFirstStepError = 0; var flagFirstStepEmpty = 0; var flagFirstStepEmail = 0; $("#step1_errors").html(""); var required = $(".required"); for (var i = 0; i < required.length; i++) { if ($(required[i]).val() == "") { flagFirstStepError = 1; return false; // Missing semi-colon added } else { // Place "} else {" all on the same line flagFirstStepEmpty = 1; } } if (flagFirstStepError == 1) { showError(this, required); } var userEmail = $("#email").val(); if (!isValidEmailAddress(userEmail)) { // Include a closing tag, address spelt wrong ;) $("#step1_errors").append("<p>You must provide a valid email address</p>"); flagFirstStepEmail = 0; } else { flagFirstStepEmail = 1; } // Never do an operation on the left-hand side of an if if (flagFirstStepEmpty == 1 && flagFirstStepEmail == 1) { nextStep(1); setTabOn(1); } } function showError(sender, required) { // Only append the string once, keep in a variable var message = "We
are sorry, but you need to enter all the required fields in order to proceed. The following information is missing: <br /><ol>"; for (var i = 0; i < required.length; i++) { if ($(required[i]).val() == "") { var y = $("#step1 label[for=" + required[i].id + "]").html(); message += "<li>You must fill the " + y + " field</li>"; } } message += "</ol>"; $("#step1_errors").append(message); } // I moved your events down here to reduce indentation, the size of document ready, and // improve readability function step2NextClick() { $("#confirm_list li").remove(); $("#share_list li label").find("input:checked").each(function() { var x = $(this).val(); $("#confirm_list").append("<li>" + x + "</li>"); }); $("#step2_confirm").dialog(); }
{ "domain": "codereview.stackexchange", "id": 3475, "tags": "javascript, performance, jquery, form" }
Bernoulli equation and pumps
Question: I have a doubt on the use of Bernoulli equation for pumps. Consider the situation in the picture. I marked different points: $1$ on the surface of first tank, $2$ in the exit from first tank, $3$ just before the pump, $4$ just after the pump and $5$ entering the second tank. Now consider Bernoulli equation in the "normal form" (ignoring the pump) $$p_a+\frac{1}{2} \rho v_a^2 +\rho g h_a=p_b +\frac{1}{2} \rho v_b^2 +\rho g h_b\tag{1}$$ And in the form for the presence of pump delivering power $\mathscr{P}$ $$(p_a+\frac{1}{2} \rho v_a^2 +\rho g h_a) Q +\mathscr{P}=(p_b +\frac{1}{2} \rho v_b^2 +\rho g h_b) Q \tag{2}$$ $a$ and $b$ are two generic points among the ones listed above. My question now is: can I use $(2)$ between any point before the pump and any point after the pump, regardless of the height, velocity and pressure in such points? I have this doubt because usually one takes points $1$ and $5$ and uses $(2)$ - and I'm ok with that - but, if the answer to the previous question is yes, I could also choose to use $(2)$ between $1$ and $4$ or $2$ and $5$ or $2$ and $3$ and so on, and that sounds strange because the quantity $p+\frac{1}{2} \rho v^2 +\rho g h$ should be the same before and after the pump, independently from the particular point chosen. In other words I should be able to use $(1)$, the normal Bernoulli equation, between $1$ and $2$, which is not very realistic, since the fluid in $2$ will probably move with a velocity that is influenced by the pump. That is, even if $2$ is before the pump, the velocity there is different from the situation with no pump. And that's what I cannot understand here. How is that possible? And can I use $(1)$ between $1$ and $2$? Any suggestion is highly appreciated.
Taking the datum of elevation z as that of points 3 and 4, we have $$p_{atm}+(10)\rho g=p_2+\frac{1}{2}\rho v^2$$ $$p_3+\frac{1}{2}\rho v^2=p_2+\frac{1}{2}\rho v^2$$ $$p_{atm}+(120)\rho g=p_5+\frac{1}{2}\rho v^2+(120-h)\rho g$$where h is the depth of point 5 below the surface of the tank on the right. $$p_4+\frac{1}{2}\rho v^2=p_5+\frac{1}{2}\rho v^2+(120-h)\rho g$$ If we combine these equations, we obtain: $$p_4-p_3=(120-10)\rho g$$ Power = $(p_4-p_3)Q$
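For readers who want the intermediate step (same notation and datum as above, with the 10 and 120 m surface heights): combining the first two equations gives $p_3$, and combining the last two gives $p_4$, $$p_3 = p_2 = p_{atm} + 10\,\rho g - \tfrac{1}{2}\rho v^2,$$ $$p_4 = p_5 + (120-h)\,\rho g = p_{atm} + 120\,\rho g - \tfrac{1}{2}\rho v^2,$$ so the unknown depth $h$ and the velocity terms drop out and $p_4-p_3=(120-10)\rho g$.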
{ "domain": "physics.stackexchange", "id": 31876, "tags": "homework-and-exercises, fluid-dynamics, pressure, power, bernoulli-equation" }
Record tf_static with rosbag
Question: Hi all, I'm trying to record all my topics into a bag, running rosbag record -a. However, rosbag doesn't record the /tf_static topic. How can I arrange this? Thanks Originally posted by Federico on ROS Answers with karma: 100 on 2014-06-29 Post score: 2 Answer: I think rosbag only records topics that are actually publishing messages. From what I can tell, /tf_static is only used by new tf2 publishers, and very few packages currently make use of it. I would verify that there are messages published on /tf_static before worrying that they're not being recorded. If you do have a node publishing messages to /tf_static and they're not being recorded, that's probably a bug in rosbag. Originally posted by ahendrix with karma: 47576 on 2014-06-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 18434, "tags": "ros, rosbag, tf-static" }
Compare elements of each list of lists to corresponding ones in different list
Question: My goal is to reduce processing time significantly (if possible) by making this working code more efficient. Currently, 50k-row by 105-column data takes about 2 hours overall to process, and this piece accounts for 95% of that time. This piece is a key part of my Python 3.6.3 script that compares two sets of lists of lists element by element regardless of datatype. I spent long hours but it seems I have reached my limits here. This is running on Windows 10. Sorry about the many variables. Here is a description: Ap, Bu - lists of lists. Each sublist may contain any datatype (usually String, Number, Null, Date). The 1st element of each sublist is always a unique string. Each sublist has the same number of elements as the others. Each list in Ap has a corresponding list in Bu (corresponding here means the 1st element and the ID element (not necessarily other elements) of a sublist of Ap match those of Bu; that is considered a corresponding match). Each sublist will contain a unique ID in the same position, meaning the index of the ID is the same in every sublist (be it Ap or Bu).
prx - is the index of a list within Ap urx - corresponding/matching index of a list within Bu, as evidenced by urx = l_urx.index(prx) cx - is the index of an element in a single list of Ap ux - is the corresponding element index of an element in a matching list of Bu, as evidenced by ux = l_ux.index(cx) rng_lenAp - is range(len(Ap)) rng_ls - is range(individual list within Ap) To visualize (just an example): Ap = [['egg', 12/12/2000, 10, ID1, NULL], ['goog', 23, 100, ID2, 12/12/2000]] Bu = [['goog', '3434', 100, ID2, 12/12/2000], ['egg', 12/12/2000, 45, ID1, NULL]] for prx in rng_lenAp: urx = l_urx.index(prx) if Ap[prx][0] == Bu[urx][0]: for cx in rng_ls: ux = l_ux.index(cx) # If not header, non-matching cells get recorded with their current value if cx != 0 and Ap[prx][cx] != Bu[urx][ux]: output[prx].append(str(Ap[prx][cx]) + '^' + str(Bu[urx][ux])) # Unless it is row header or ID in column, matching cells get 'ok' elif cx != 0 and prx != 0 and urx != 0 and Ap[prx][cx] == Bu[urx][ux]: output[prx].append('ok' + '^' + 'ok') # Anything else gets recorded with its current value else: output[prx].append(str(Ap[prx][cx]) + '^' + str(Bu[urx][ux])) There must be a way to reduce processing time drastically. Currently the cell by cell comparison of 50k row by 100 column data to 50k row by 100 column data takes about 2 hours. Expected under 30 min. 3.1 GHz, 4 CPUs (8196MB RAM).
It may still not be perfect, but I think it serves the purpose for now and somebody may find this version helpful: r = prx (renamed for readability only) c = ux (renamed for readability only) # Outer loop to iterate thru rows (sublists) for r in rng_lenAp: # Inner loop to iterate thru columns (elements of a sublist) for c in rng_ls: # retain 1st sublist, which contains IDs if r == 0: output[r].append(str(Ap[r][c]) + '^' + str(Bu[l_urx[r]][l_ux[c]])) else: # retain 1st elements in each row: they uniquely id rows if c == 0: output[r].append(str(Ap[r][c]) + '^' + str(Bu[l_urx[r]][l_ux[c]])) else: if Ap[r][c] == Bu[l_urx[r]][l_ux[c]]: output[r].append('ok^ok') else: output[r].append(str(Ap[r][c]) + '^' + str(Bu[l_urx[r]][l_ux[c]]))
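The precomputed `l_urx`/`l_ux` lists can also be replaced entirely. Here is a hedged sketch (not the poster's actual script: `compare`, `id_col`, and the sample data are hypothetical, and it assumes the columns of matched rows are already aligned — otherwise the `l_ux` mapping would have to be applied first): matching rows through a dict keyed on the unique ID gives O(1) lookups instead of the O(n) `list.index()` scans that dominated the original run time.

```python
def compare(Ap, Bu, id_col):
    # Build the row lookup once: unique ID -> matching Bu row. Each later lookup
    # is O(1), instead of scanning a list with .index() on every iteration.
    bu_by_id = {row[id_col]: row for row in Bu}
    output = []
    for r, a_row in enumerate(Ap):
        b_row = bu_by_id[a_row[id_col]]
        out_row = []
        for c, (a, b) in enumerate(zip(a_row, b_row)):
            if r != 0 and c != 0 and a == b:
                out_row.append('ok^ok')       # matching non-header cell
            else:
                out_row.append(f'{a}^{b}')    # header row, row label, or mismatch
        output.append(out_row)
    return output

# Tiny hypothetical example: row 0 is the header row, column 2 holds the unique IDs.
Ap = [['h1', 'h2', 'id'], ['a', 1, 'ID1'], ['b', 2, 'ID2']]
Bu = [['b', 2, 'ID2'], ['h1', 'h2', 'id'], ['a', 9, 'ID1']]
result = compare(Ap, Bu, id_col=2)
```

The output format follows the rewrite above: header row and first column are kept verbatim, matching cells become 'ok^ok', and mismatches record both values joined by '^'.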
{ "domain": "codereview.stackexchange", "id": 35105, "tags": "python, performance, python-3.x" }
Problem positioning map
Question: I have a problem positioning a map. In my Stage simulation the map is positioned correctly, but in rviz it looks totally different. How can I change that? Originally posted by Ico on ROS Answers with karma: 23 on 2014-05-15 Post score: 0 Original comments Comment by AbuIbra on 2014-05-15: Perhaps you just have to change your fixed frame to /map in rviz. Comment by Ico on 2014-05-16: I tried, but it's not a solution Comment by slivingston on 2014-05-16: You should give more details about what you are trying. Note that the map in Stage is not really a map in the sense of being some model or reference used by the robot, but rather it is the ground truth used by Stage for simulation, and your robot may not have direct access to it. Answer: If you are using map_server to publish the map, then you can change the location of the map origin in the corresponding YAML file. Originally posted by slivingston with karma: 254 on 2014-05-15 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Ico on 2014-05-16: I'm not using a YAML file
{ "domain": "robotics.stackexchange", "id": 17965, "tags": "navigation, simulation, mapping, rviz, stage" }
Selection of system in Conservation of momentum
Question: I came across a question in which a cart is moving while carrying sand. Suddenly the sand valve malfunctions and sand starts falling from the cart. So the momentum of which system will remain conserved? Answer: First, define clearly what is in The System; it can be anything you want. This definition is then fixed. If any part of this system experiences a force from outside the system in a particular direction, then the total momentum of the system in that direction is not conserved. If all the forces on any part of the system are from other parts of the system, then the momentum of the system is conserved. If you define the system as just the cart, and the sand falls vertically out through the bottom of the cart, then the sand exerts no horizontal force on the cart, and the momentum of this system is conserved. If you define the system as the cart plus original amount of sand, then, as the sand falls out and hits the ground, the ground exerts a horizontal force on part of the system, and the momentum of this system is not conserved. Edited to address comment: Suppose you define the system to be the cart and all the sand in it at a particular instant, $t_i$, and allow a small amount of this sand to dribble out the valve and start to fall to the ground. Let's consider such a small amount of sand that the first part of the dribbling sand doesn't have time to reach the ground before the last part of the small amount leaves the valve. There are no external horizontal forces acting on this system, so the total momentum is conserved. The falling sand has no horizontal forces acting on it, so that part of the system has no change in momentum. So the cart and the remaining sand (after the dribble is over) also has its momentum conserved.
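A one-line worked check of the "just the cart" case (my own illustration, with hypothetical symbols: cart plus remaining sand of mass $M$ moving horizontally at $v$, and a small sand mass $\mathrm{d}m$ leaving vertically): the departing sand keeps its horizontal momentum, so $$\underbrace{Mv}_{\text{before}} \;=\; \underbrace{(M-\mathrm{d}m)\,v}_{\text{cart + remaining sand}} \;+\; \underbrace{\mathrm{d}m\,v}_{\text{falling sand}},$$ and the cart's speed is unchanged while the sand leaks out.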
{ "domain": "physics.stackexchange", "id": 18709, "tags": "momentum" }
Reading a number from the standard input
Question: This would seem obvious, but there is a lot that can go wrong. What happens is the user enters alpha values etc. I have come up with this function and would appreciate any suggestions or comments on how to handle this better: #include <iostream> #include <string> #include <cstdlib> #include <limits> bool readNumber(int& value, const std::string failPrompt = "") { for (;;) { std::cin >> value; if (std::cin.fail()) { if (std::cin.eof()) return false; std::cin.clear(); std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); if (failPrompt.empty()) return false; std::cout << failPrompt; continue; } return true; } } int main() { std::cout << "Please enter a number.\n"; int value; if (!readNumber(value, "That is not a number, please try again\n")) return EXIT_FAILURE; std::cout << "number = " << value << "\n"; } Answer: Consider all possible errors You only considered the possibility of failing to parse the number and reaching the end of the input. However, there might also be a read error that causes fail() to return true, eof() to return false, but which cannot be recovered from. That will cause readNumber() to go into an infinite loop. A solution is to check if bad() is true, and if so return false. Another solution is to read in a whole line at a time using std::getline(), and parse the line that you read using std::stoi(), like so: for (std::string line; std::getline(std::cin, line); std::cout << failPrompt) { std::size_t pos; value = std::stoi(line, &pos); if (line[pos] == '\0') { /* The whole line was a valid number */ return true; } /* Else there was a parse error (note that std::stoi itself throws if the line does not start with a number) */ } Write error messages to std::cerr You should prefer using std::cerr for error messages. This is especially useful if the normal output is redirected to a file for example. Pass strings by reference when possible Your readNumber() takes a string by value, which makes an unnecessary copy.
Pass it by reference instead: bool readNumber(int& value, const std::string& failPrompt = "") Consider using std::optional to return the value It's good that you return a bool, so it makes it easy to check for an error. You could mark the function [[nodiscard]] as well so the compiler will warn if the caller ignores the return value. However, it still is annoying that you have to declare a variable first and then pass that by reference. It would be nice if you got the number that was read as the return value. Since C++17, there is a handy type that can give you both the number and indicate whether you successfully read the input or not: std::optional. Your code would then look like: std::optional<int> readNumber(const std::string &failPrompt = "") { ... if (/* value read successfully */) return value; else /* if there is an error */ return std::nullopt; } And then you can use it like so in main(): if (auto value = readNumber("Try again\n")) std::cout << "number = " << *value << "\n"; else return EXIT_FAILURE;
{ "domain": "codereview.stackexchange", "id": 42235, "tags": "c++" }
What is a mode in quantum optics?
Question: I am studying quantum optics and it is often cited the word "mode", in particular there are spatial and temporal modes. I really don't know what they are. I know the general definition of modes for example looking at: What is a mode? but I don't know what spatial and temporal means, and why are they so important in quantum optics. Answer: First consider the idea of normal mode in classical mechanics: it is pattern of oscillation in which every part of a many-body system oscillates at the same frequency. Now take a natural generalisation to a continuous field, and in the case of the electromagnetic field you have a distribution of amplitude where the whole pattern is oscillating at a single frequency. Thus a mode is by definition monochromatic. And also, a mode is by definition extended in both space and time, so the terminology "spatial mode" and "temporal mode" can indeed be puzzling. I think that terminology is introduced in order to draw attention to the chief way in which two or more different modes are differing from one another. If they both have the same spatial distribution but different frequency then they are said to be different temporal modes. If they have the same frequency but different spatial distribution then they are said to be different spatial modes. I think in both cases this is a convenient shorthand, a way of saying "modes of the same spatial pattern but differing frequency" or "modes of the same frequency but differing spatial pattern" but in both cases the modes are neither purely temporal nor purely spatial, but spatio-temporal. However, one can of course write a monochromatic pattern of oscillation in the form $$ f(t,x,y,z) = e^{i (\omega t + \alpha)} \phi(x,y,z) $$ and then the function $\phi(x,y,z)$ may be called a spatial mode. This terminology is commonly adopted in discussing the field patterns inside an optical resonator for example. 
Also, if the resonator has a sense of direction, say circular symmetry with the symmetry axis along $z$, then the $x,y$ part is transverse and the $z$ part is longitudinal and sometimes you see people blurring the distinction between the spatial modes and the temporal behaviour, since for the eigenmodes of a cavity the two are linked. The usage is not strictly uniform across the subject. Sometimes one sees the term "mode" applied to a dependence on time which is not simply sinusoidal; it is because the word, like many words in science, blurs over into its use in everyday speech and people are not always careful with their definitions.
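To make the shorthand concrete (my own illustration, in the same notation as above): $$f_1 = e^{i\omega t}\,\phi(x,y,z) \quad\text{and}\quad f_2 = e^{i\omega' t}\,\phi(x,y,z)$$ share a spatial pattern but differ in frequency, so they would be called different temporal modes, while $$f_3 = e^{i\omega t}\,\phi'(x,y,z)$$ has the same frequency as $f_1$ but a different spatial distribution, so $f_1$ and $f_3$ would be called different spatial modes.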
{ "domain": "physics.stackexchange", "id": 93757, "tags": "terminology, definition, quantum-optics, oscillators, normal-modes" }
Resources for quantum algorithm basics
Question: I have just started to learn about quantum computing, and I know a little bit about qubits. What is a resource where I can learn a basic quantum algorithm and the concepts behind how it works? Answer: Most textbooks and lecture courses start with solving the Deutsch problem using quantum computing. Parts 1 to 4 of John Watrous's lecture notes will describe the concepts, starting from basics. By the end of lecture 4, you will have learned how a quantum computer can solve the Deutsch problem with fewer operations than a classical computer would need. All 22 lecture notes can be found here.
{ "domain": "quantumcomputing.stackexchange", "id": 3435, "tags": "quantum-algorithms, experimental-realization, programming, resource-request" }
Why are the true variance of these orbits out by ~pi?
Question: I have an object that is in orbit around a point mass in a 2D environment with a known speed, radius and zenith. I calculate the following orbital parameters as such and can confirm that these values are correct (where $R_{p}$ is the periapsis, $R_{a}$ is the apoapsis, $e$ is the eccentricity, $a$ is the semimajor axis, $r$ is the radius, $v$ is the velocity and $GM$ is the product of the mass of the point mass and the gravitational constant): $C = \frac{2GM}{rv^{2}}$ $R_{p} = r \times \frac{-C+\sqrt{C^{2}-4(1-C)(-\sin^{2}z)}}{2(1-C)}$ $R_{a} = r \times \frac{-C-\sqrt{C^{2}-4(1-C)(-\sin^{2}z)}}{2(1-C)}$ $e = \left | \frac{R_{a} - R_{p}}{R_{a} + R_{p}} \right |$ $a = \frac{R_{a} + R_{p}}{2}$ I then go on to attempt to calculate the true anomaly of the orbit as follows (where $z$ is the zenith and $\theta$ is the true anomaly): $N = \frac{rv^{2}}{GM}$ $\theta = \tan^{-1} \frac{N\sin z \cos z}{(N\sin^{2} z)-1}$ This calculates a correct value as long as the radius is greater than a value slightly less than the semiminor axis of the orbit. Adding pi to the true anomaly corrects this error, except where the radius is close in value to the length of the semiminor axis. Why is the true anomaly off by ~pi in this case? Answer: As indicated in the comments, this is probably a question for Computational Science. But I will try to answer your question in such a manner that it may be on-topic enough to stay here. I think you can solve this by using Python's atan2(y,x) function, rather than defining $z = y/x$ and using atan(z). Let me address this concern first: ...except where the radius is close in value to the length of the semiminor axis. To see what happens when $a$ and $r$ are close in value, let me set them equal to each other. Then plug the expressions for $R_{p,a}$ into the expression for $a$: that gives $a = \frac{Cr}{2(C-1)}$, so $a = r \Rightarrow C = 2$. Also note, from $C = \frac{2GM}{rv^2}$ and $N = \frac{rv^2}{GM}$, that $N = 2/C$ is true regardless of the value of $r$. In the case of $r = a$, this becomes $N = 1$. Plug $N = 1$ into your expression for $\theta$. This gives $\theta = \tan^{-1}(\frac{\sin{z}\cos{z}}{\sin^2{z} - 1})$. Note that if $z = \pi/2 = 90^\circ$, then the denominator is zero (and so is the numerator — that is the circular-orbit limit, where the true anomaly is undefined). Near that point the ratio becomes huge; mathematically that's okay, because $\tan^{-1}$ of a very large number is perfectly well-defined. But I think you may be inadvertently dividing by a very small number, and that's making your $\tan^{-1}$ give you unpredictable results. If you're running into machine precision issues, then trying to fix things by adding or subtracting $\pi$ won't help. The other issue I think you're having is related. If $r$ changes by a bit, then the constant $N$ in the denominator of your $\tan^{-1}$ changes, which might change the sign of the expression. If it changes sign, $\tan^{-1}$ might be giving you a value for $\theta$ that's not in the quadrant you expect it to be in. If that happens, it will be off by $\pi$. If I'm correct about both of those, then atan2 should fix them both. You call atan2 with both the numerator and the denominator, so it has the sign information for both, and knows which quadrant you want. And I think it does a better job of handling values of x close to zero (the folks at Comp Sci would definitely be able to give you a better answer about that than I can).
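A small Python sketch of the quadrant point (hypothetical N and z values, chosen only so that the denominator of the true-anomaly expression is negative): plain atan() receives only the ratio and so loses the individual signs, while atan2() receives both arguments and places the angle in the right quadrant.

```python
import math

# Hypothetical values: with these, N*sin(z)**2 - 1 < 0 while the numerator is
# positive, so the angle lies in a quadrant that plain atan() cannot report.
N = 4 / 3
z = math.radians(50)

num = N * math.sin(z) * math.cos(z)   # numerator of the tan(theta) expression
den = N * math.sin(z) ** 2 - 1        # denominator (negative here)

theta_atan = math.atan(num / den)     # the ratio loses the quadrant information
theta_atan2 = math.atan2(num, den)    # keeps the signs of both arguments

# atan returns a negative angle; atan2 returns the correct second-quadrant
# angle, and the two differ by exactly pi.
```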
{ "domain": "physics.stackexchange", "id": 13083, "tags": "orbital-motion" }
Stack as a Persistent Data Structure Implementation
Question: After reading some articles about immutability, I found out that there are persistent data structures. In the following, I implemented a stack as a persistent data structure. What I like about the code in its current state: it contains no if-statements, no method is longer than one line, and it works like recursion. Implementation The implementation is based on the abstraction Stack, which has two concrete data-types: EmptyStack and NonEmptyStack. A Stack needs to implement 4 methods as described below. public interface Stack<E> { /** * @return the number of elements */ int size(); /** * adds a new element to the top of the {@code Stack} * * @param top represents the new element to be added * @return a {@code Stack} with {@code top} on it */ Stack<E> push(E top); /** * removes the top element of the {@code Stack} * * @return a {@code Stack} without the top element */ Stack<E> pop(); /** * @return if {@code Stack} is empty {@code Optional.empty}; * otherwise {@code Optional.of(E)} where {@code E} is the top element */ Optional<E> top(); } The EmptyStack represents the base case: when a Stack has no element. For each method it is easy to predict all return values: the size is always 0 push() always will return a NonEmptyStack with the new top element and the current instance as the previous version pop() can't return a top element; so it always will return an EmptyStack top() can't return a top element; so it always will return an Optional.empty class EmptyStack<E> implements Stack<E> { @Override public int size() { return 0; } @Override public Stack<E> push(E top) { return new NonEmptyStack<>(top, this); } @Override public Stack<E> pop() { return this; } @Override public Optional<E> top() { return Optional.empty(); } } On the other hand there is the NonEmptyStack which represents a Stack that contains elements. A NonEmptyStack is made up of its element on the top and a Stack as its tail, which represents the previous version. 
the size is always the size of the previous version plus 1 for the new top element push() always will return a NonEmptyStack with the new top element and the current instance as the previous version pop() always returns the tail top() always returns the element which represents the top and since it can be null I used Optional.ofNullable class NonEmptyStack<E> implements Stack<E> { private final Stack<E> tail; private final E top; NonEmptyStack(E top, Stack<E> tail) { this.tail = tail; this.top = top; } @Override public int size() { return 1 + tail.size(); } @Override public Stack<E> push(E top) { return new NonEmptyStack<>(top, this); } @Override public Stack<E> pop() { return tail; } @Override public Optional<E> top() { return Optional.ofNullable(top); } } EmptyStack and NonEmptyStack are package-private so that a client only interacts with a Stack instead of two different implementations of it. For that I created a factory, StackFactory, which returns an EmptyStack as a Stack, so the client never interacts directly with a concrete implementation. 
public class StackFactory<E> { public Stack<E> create() { return new EmptyStack<>(); } } Unit Tests import org.junit.jupiter.api.Test; import java.util.Optional; import static org.junit.jupiter.api.Assertions.*; class EmptyStackTest { private final Stack<String> EMPTY_STACK = new EmptyStack<>(); @Test void givenAnEmptyStack_whenQueryTheSize_thenExpect0() { // arrange // act int size = EMPTY_STACK.size(); // assert assertEquals(0, size); } @Test void givenAnEmptyStack_whenPushAnElementToIt_thenExpectANonEmptyStack() { // arrange // act Stack<String> stack = EMPTY_STACK.push("first"); // assert assertTrue(stack instanceof NonEmptyStack); } @Test void givenAnEmptyStack_whenRemoveTheFirstElement_thenExpectAnEmptyStack() { // arrange // act Stack<String> stack = EMPTY_STACK.pop(); // assert assertTrue(stack instanceof EmptyStack); } @Test void givenAnEmptyStack_whenAccessTopElement_thenExpectItDoNotExists() { // arrange // act Optional<String> top = EMPTY_STACK.top(); // assert assertTrue(!top.isPresent()); } } import org.junit.jupiter.api.Test; import java.util.Optional; import static org.junit.jupiter.api.Assertions.assertEquals; import static org.junit.jupiter.api.Assertions.assertTrue; class NonEmptyStackTest { private final String ITEM = "first"; @Test void givenEmptyStackAndItem_whenInstantiateAndQueryTheSize_thenExpect1() { // arrange Stack<String> stack = new NonEmptyStack<>(ITEM, new EmptyStack<>()); // act int size = stack.size(); // assert assertEquals(1, size); } @Test void givenNonEmptyStackWitOneItemAndANewItem_whenInstantiateAndQueryTheSize_thenExpect2() { // arrange NonEmptyStack<String> nonEmptyStack = new NonEmptyStack<>(ITEM, new EmptyStack<>()); Stack<String> stack = new NonEmptyStack<>(ITEM, nonEmptyStack); // act int size = stack.size(); // assert assertEquals(2, size); } @Test void givenEmptyStackAndItem_whenPushTheItemToTheStack_thenTheItemShouldBeInTheStack() { // arrange Stack<String> stack = new EmptyStack<>(); // act Stack<String> 
nonEmptyStack = stack.push(ITEM); // assert assertEquals(Optional.of(ITEM), nonEmptyStack.top()); } @Test void givenNonEmptyStackAndItem_whenPushTheItemToTheStack_thenTheItemShouldBeInTheStack() { // arrange Stack<String> emptyStack = new EmptyStack<>(); Stack<String> firstChange = emptyStack.push("value"); // act Stack<String> stack = firstChange.push(ITEM); // assert assertEquals(Optional.of(ITEM), stack.top()); } @Test void givenNonEmptyStackWithOneItem_whenRemoveTheTopItem_thenExpectEmptyStack() { // arrange Stack<String> testCandidate = new NonEmptyStack<>(ITEM, new EmptyStack<>()); // act Stack<String> stack = testCandidate.pop(); // assert assertTrue(stack instanceof EmptyStack); } @Test void givenNonEmptyStackWithTwoItems_whenRemoveTheTopItem_thenExpectNonEmptyStack() { // arrange Stack<String> testCandidate = new NonEmptyStack<>(ITEM, new EmptyStack<>()).push(ITEM); // act Stack<String> stack = testCandidate.pop(); // assert assertTrue(stack instanceof NonEmptyStack); } @Test void givenNonEmptyStack_whenQueryTopItem_thenExpectTopItem() { // arrange Stack<String> stack = new NonEmptyStack<>(ITEM, new EmptyStack<>()); // act Optional<String> top = stack.top(); // assert assertEquals(Optional.of(ITEM), top); } } Example public class Main { public static void main(String[] args) { Stack<String> stack = new StackFactory<String>().create() .push("first") .push("second") .push("third"); Stack<String> modified = stack.pop() .pop(); modified.top() .ifPresent(System.out::println); // "first" modified.pop() .top() .ifPresent(System.out::println); // nothing happens } } This little experiment was very entertaining. I appreciate every feedback and thank you very much for reading my code! :] Answer: In this implementation, null entries are problematic. From the public interface, it is impossible to tell if the stack has a null entry, or has reached the bottom: in both cases, top() will return Optional.empty(). 
It seems wrong to silently convert nulls into Optional.empty()s. I would do one of the following: Do not store null entries in the stack. Instead, throw an exception on pushing a null value. Stop using Optionals and instead throw an exception when top is called on an empty stack. Add an empty method to determine if the stack is empty. Other than that, very clean simple code! A few smaller comments. Note that push has the same implementation in both EmptyStack and NonEmptyStack. If you like, they could inherit from a single abstract class implementing push. This is a judgement call: repeated code is kinda bad, but adding a whole new abstract class is complicated. Perhaps the cure is worse than the disease... Computing size is slow: time O(n). If you are worried about this, you could compute and store the size in the constructor.
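The last suggestion (store the size in the constructor) can be sketched quickly. This is a hypothetical illustration in Python for brevity; the change maps one-to-one onto the Java NonEmptyStack constructor above:

```python
class EmptyStack:
    size = 0  # the empty stack always has size 0

    def push(self, top):
        return NonEmptyStack(top, self)


class NonEmptyStack:
    def __init__(self, top, tail):
        self.top = top
        self.tail = tail
        # Computed once at construction, so reading the size is O(1)
        # instead of an O(n) recursive walk down the tails.
        self.size = tail.size + 1

    def push(self, top):
        return NonEmptyStack(top, self)

    def pop(self):
        return self.tail


stack = EmptyStack().push("first").push("second").push("third")
print(stack.size)        # read from the stored field, no recursion
print(stack.pop().size)  # older versions keep their own cached sizes
```

Because every version of the stack is immutable, the cached size can never go stale, so the trade-off is purely one extra field of memory per node.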
{ "domain": "codereview.stackexchange", "id": 33996, "tags": "java, object-oriented, stack, immutability" }
Have there been any studies on 'surf balls'? (not related to surfing)
Question: I know the title sounds like something else, but I'm actually wondering about these things, which are also apparently called 'whale burps': This one was found on a beach on Lake Michigan. Does anyone know if any scientific studies have been done on these? If so, can you refer me to them? Answer: These surf or beach balls are made of plant material throughout the structure. They form by wave action along beaches. As the naturalist at Wilderness State Park, the campers and I discovered over 30 of these balls along the beach the summer of 2017. In previous summers we had found 0. Lake Michigan and Lake Huron were at record levels during the summer of 2017, the highest since 1998. This may explain the formation of these grass balls in abundance. We found some of the balls still in the water, in a depression, just offshore. Others were rolling in and out with the waves. Some had been deposited on the shore and were filled with large quantities of sand. I shook out nearly a cup of sand from one ball. I dissected one surf ball and found predominantly roots and grasses. I will continue to investigate the species of plants that form these curious creations and try to discover the secret to their formation.
{ "domain": "biology.stackexchange", "id": 7718, "tags": "botany, life" }
Where does entropy enter in Soft Actor-Critic?
Question: I am currently trying to understand SAC (Soft Actor-Critic), and I am thinking of it as a basic actor-critic with the entropy included. However, I expected the entropy to appear in the Q-function. From SpinningUp-SAC, it looks like the entropy is entering through the value-function, so I'm thinking it enters by the $\log \pi_{\phi}(a_t \mid s_t)$ in the value function? I'm a little stuck on understanding SAC; can anyone confirm/explain this to me? Also, side-note question: is being a soft agent equivalent to including entropy in one of the objective functions? Answer: In the answer I'll be using notation similar to the one from the SAC paper. If we look at the standard objective function for policy gradient methods we have \begin{align} J_\pi &= V_\pi(s_t)\\ &= \mathbb E_{a_t \sim \pi(a|s_t)}[Q(s_t, a_t)]\\ &= \mathbb E_{a_t \sim \pi(a|s_t)}[ \mathbb E_{s_{t+1} \sim p(s|s_t, a_t)} [r(s_t, a_t) + V(s_{t+1})]]\\ &= \mathbb E_{a_t \sim \pi(a|s_t)}[ \mathbb E_{s_{t+1} \sim p(s|s_t, a_t)} [r(s_t, a_t) + \mathbb E_{a_{t+1} \sim \pi(a|s_{t+1})}[ \mathbb E_{s_{t+2} \sim p(s|s_{t+1}, a_{t+1})} [r(s_{t+1}, a_{t+1}) + V(s_{t+2})]]]]\\ &\cdots\\ &= \sum_t \mathbb E_{(a_t, s_t) \sim \rho_\pi} [r(s_t, a_t)] \end{align} If you keep unwinding this $V(s_{t+i})$ you will get the expected sum of rewards. We can define soft state value as \begin{align} V(s_t) &= \mathbb E_{a_t \sim \pi(a|s_t)}[Q(s_t, a_t) + \mathcal H(\cdot|s_t)]\\ &= \mathbb E_{a_t \sim \pi(a|s_t)}[Q(s_t, a_t) + \mathbb E_{a \sim \pi(a|s_t)}[-\log(\pi(a|s_t))]]\\ &= \mathbb E_{a_t \sim \pi(a|s_t)}[Q(s_t, a_t) - \log(\pi(a_t|s_t))] \end{align} The third equality comes from the fact that $\mathbb E_{a \sim \pi(a|s_t)}[-\log(\pi(a|s_t))]$ is nonrandom, so it's the same thing as if we are sampling over $\pi$ only once. 
In the maximum entropy framework, the objective function would then be \begin{align} J_\pi &= V_\pi(s_t)\\ &= \mathbb E_{a_t \sim \pi(a|s_t)}[Q(s_t, a_t) - \log(\pi(a_t|s_t))]\\ &= \mathbb E_{a_t \sim \pi(a|s_t)}[ \mathbb E_{s_{t+1} \sim p(s|s_t, a_t)} [r(s_t, a_t) - \log(\pi(a_t|s_t)) + V(s_{t+1})]]\\ & \cdots\\ &= \sum_t \mathbb E_{(a_t, s_t) \sim \rho_\pi} [r(s_t, a_t) -\log(\pi(a_t|s_t))]\\ &= \sum_t \mathbb E_{(a_t, s_t) \sim \rho_\pi} [r(s_t, a_t) + \mathcal H(\cdot|s_t)] \end{align}
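Numerically, the only change from the ordinary value estimate is the extra $-\log\pi(a_t|s_t)$ term. A minimal sketch for a discrete policy (hypothetical code, not taken from the SAC paper or SpinningUp) makes the connection explicit:

```python
import numpy as np

def soft_state_value(pi, q, alpha=1.0):
    """V(s) = E_{a~pi}[Q(s, a) - alpha * log pi(a|s)].

    pi: action probabilities at state s, shape (n_actions,)
    q:  Q(s, a) for each action,         shape (n_actions,)
    The -alpha * log pi term averages to alpha times the entropy of pi,
    which is exactly where the entropy bonus enters the value function.
    """
    pi = np.asarray(pi, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(pi * (q - alpha * np.log(pi))))

pi = [0.5, 0.5]
q = [1.0, 1.0]
# Uniform policy over two actions: entropy is log(2), so the soft value
# is E[Q] + alpha * log(2) = 1 + log(2).
print(soft_state_value(pi, q, alpha=1.0))
```

Setting alpha to 0 recovers the standard (non-soft) value estimate, which is one way to see that "soft" means nothing more than adding the entropy term to the objective.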
{ "domain": "ai.stackexchange", "id": 1554, "tags": "reinforcement-learning, actor-critic-methods, soft-actor-critic" }
Turtlebot + Frontier Exploration: Turning in Place at the Start
Question: I have curious issues using the frontier_exploration package on my Turtlebot and would appreciate anyone helping me understand better what is going on under the hood. Firstly, this is a simulated run on Gazebo of my robot setup. After I ask the robot to go forth and explore, it begins by turning in place. This invariably happens no matter what starting exploration point I pick. I also get the following error messages, which probably point at the problem: [ WARN] [1466389076.511953487, 657.183000000]: Please select an initial point for exploration inside the polygon [ INFO] [1466389079.366449392, 658.110000000]: Sending goal [ INFO] [1466389079.574537491, 658.152000000]: Region boundary set [ INFO] [1466389086.897870100, 660.082000000]: Got new plan [ERROR] [1466389086.898554785, 660.082000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389087.596126130, 660.278000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389088.219151245, 660.484000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389088.984638998, 660.678000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389089.730673868, 660.880000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ INFO] [1466389090.284299471, 661.110000000]: Got new plan [ERROR] [1466389090.284972952, 661.110000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389090.755919337, 661.278000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389091.294571326, 661.478000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389091.920927691, 661.678000000]: None of the 0 first of 0 
(0) points of the global plan were in the local costmap and free [ERROR] [1466389092.427586470, 661.878000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ INFO] [1466389092.912826709, 662.078000000]: Got new plan [ERROR] [1466389092.913268554, 662.078000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389093.670462783, 662.280000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389094.369503898, 662.478000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389095.643821480, 662.678000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ INFO] [1466389096.474418339, 662.878000000]: Got new plan [ERROR] [1466389096.474903849, 662.878000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389097.468142710, 663.078000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389098.693172111, 663.278000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389099.417643831, 663.478000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389100.220558345, 663.678000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389100.953090959, 663.878000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ INFO] [1466389101.617699327, 664.083000000]: Got new plan [ERROR] [1466389101.618116982, 664.083000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389102.286638000, 664.278000000]: None of the 0 first of 0 (0) points of the global plan were in 
the local costmap and free [ERROR] [1466389103.214008032, 664.478000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ERROR] [1466389104.448767752, 664.679000000]: None of the 0 first of 0 (0) points of the global plan were in the local costmap and free [ INFO] [1466389105.521163815, 664.878000000]: Got new plan [ INFO] [1466389113.576508452, 666.078000000]: Got new plan [ INFO] [1466389119.446550737, 667.281000000]: Got new plan [ INFO] [1466389121.907575527, 667.878000000]: Goal reached Beyond this point, everything chugs along quite smoothly. Notice at the start there are no debug messages indicating recovery behavior - it just turns for some other reason. Source code reveals that the error messages are printed out in base_local_planner/src/map_grid.cpp. When I run regular navigation tutorials, using the same dwa local planner, I do not get the error messages. Here are my setup files, cobbled together from various sources: exploration.launch: <?xml version="1.0"?> <launch> <node pkg="frontier_exploration" type="explore_client" name="explore_client" output="screen"/> <node pkg="frontier_exploration" type="explore_server" name="explore_server" output="screen"> <param name="frequency" value="1.0"/> <!-- Should be less than sensor range --> <param name="goal_aliasing" value="2.0"/> <rosparam file="$(find turtlebot_navigation)/param/costmap_common_params.yaml" command="load" ns="explore_costmap" /> <rosparam file="$(find turtlebot_navigation)/param/costmap_exploration.yaml" command="load" ns="explore_costmap" /> </node> </launch> costmap_common_params.yaml: max_obstacle_height: 0.60 footprint: [[0.2, 0.2], [0.2, -0.2], [-0.2, -0.2], [-0.2, 0.2]] update_frequency: 1.0 publish_frequency: 0.5 resolution: 0.05 transform_tolerance: 0.5 robot_base_frame: base_footprint inflation_radius: 0.4 obstacle_range: 3.5 raytrace_range: 3.5 obstacle_layer: observation_sources: laser bump laser: {data_type: LaserScan, clearing: true, 
marking: true, topic: scan, inf_is_valid: true} bump: {data_type: PointCloud2, topic: mobile_base/sensors/bumper_pointcloud, marking: true, clearing: false, min_obstacle_height: 0.0, max_obstacle_height: 0.15} inflation_layer: enabled: true static_layer: enabled: true map_topic: map subscribe_to_updates: true costmap_exploration.yaml: track_unknown_space: true global_frame: map static_map: true rolling_window: false plugins: - {name: static_layer, type: "costmap_2d::StaticLayer"} - {name: explore_boundary, type: "frontier_exploration::BoundedExploreLayer"} - {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"} - {name: inflation_layer, type: "costmap_2d::InflationLayer"} explore_boundary: resize_to_boundary: false frontier_travel_point: middle explore_clear_space: false move_base.launch.xml: <launch> <include file="$(find turtlebot_navigation)/launch/includes/velocity_smoother.launch.xml"/> <include file="$(find turtlebot_navigation)/launch/includes/safety_controller.launch.xml"/> <arg name="odom_frame_id" default="odom"/> <arg name="base_frame_id" default="base_footprint"/> <arg name="global_frame_id" default="map"/> <arg name="odom_topic" default="odom" /> <arg name="laser_topic" default="scan" /> <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen"> <rosparam file="$(find turtlebot_navigation)/param/costmap_common_params.yaml" command="load" ns="global_costmap" /> <rosparam file="$(find turtlebot_navigation)/param/costmap_common_params.yaml" command="load" ns="local_costmap" /> <rosparam file="$(find turtlebot_navigation)/param/local_costmap_params.yaml" command="load" /> <rosparam file="$(find turtlebot_navigation)/param/global_costmap_params.yaml" command="load" /> <rosparam file="$(find turtlebot_navigation)/param/dwa_local_planner_params.yaml" command="load" /> <rosparam file="$(find turtlebot_navigation)/param/move_base_params.yaml" command="load" /> <rosparam file="$(find 
turtlebot_navigation)/param/global_planner_params.yaml" command="load" /> <rosparam file="$(find turtlebot_navigation)/param/navfn_global_planner_params.yaml" command="load" /> <remap from="cmd_vel" to="navigation_velocity_smoother/raw_cmd_vel"/> <remap from="odom" to="$(arg odom_topic)"/> <remap from="scan" to="$(arg laser_topic)"/> </node> </launch> local_costmap_params.yaml: local_costmap: global_frame: odom static_map: false rolling_window: true width: 4.0 height: 4.0 plugins: - {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"} - {name: inflation_layer, type: "costmap_2d::InflationLayer"} global_costmap_params.yaml: global_costmap: global_frame: map rolling_window: false track_unknown_space: true static_map: true plugins: - {name: static_layer, type: "costmap_2d::StaticLayer"} - {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"} - {name: inflation_layer, type: "costmap_2d::InflationLayer"} move_base_params.yaml: shutdown_costmaps: false controller_frequency: 5.0 controller_patience: 3.0 recovery_behaviour_enabled: true planner_frequency: 1.0 planner_patience: 5.0 oscillation_timeout: 10.0 oscillation_distance: 0.2 base_local_planner: "dwa_local_planner/DWAPlannerROS" base_global_planner: "navfn/NavfnROS" dwa_local_planner_params.yaml: DWAPlannerROS: max_vel_x: 0.5 # 0.55 min_vel_x: 0.0 max_vel_y: 0.0 min_vel_y: 0.0 max_trans_vel: 0.5 min_trans_vel: 0.1 trans_stopped_vel: 0.1 max_rot_vel: 5.0 min_rot_vel: 0.4 rot_stopped_vel: 0.4 acc_lim_x: 1.0 acc_lim_theta: 2.0 acc_lim_y: 0.0 # Goal Tolerance Parameters yaw_goal_tolerance: 0.3 xy_goal_tolerance: 0.15 # Forward Simulation Parameters sim_time: 1.0 sim_granularity: 0.025 vx_samples: 6 vy_samples: 1 vtheta_samples: 20 # Trajectory Scoring Parameters path_distance_bias: 64.0 goal_distance_bias: 24.0 occdist_scale: 1.0 forward_point_distance: 0.325 stop_time_buffer: 0.2 scaling_speed: 0.25 max_scaling_factor: 0.2 meter_scoring: true # Oscillation Prevention Parameters oscillation_reset_dist: 0.05 # 
0.05 publish_traj_pc : true publish_cost_grid_pc: true global_frame_id: odom navfn_global_planner_params.yaml: NavfnROS: visualize_potential: false allow_unknown: true planner_window_x: 0.0 planner_window_y: 0.0 default_tolerance: 0.1 global_planner_params.yaml: GlobalPlanner: old_navfn_behavior: false use_quadratic: true use_dijkstra: false use_grid_path: false allow_unknown: true planner_window_x: 0.0 planner_window_y: 0.0 default_tolerance: 0.25 publish_scale: 100 planner_costmap_publish_frequency: 0.0 lethal_cost: 253 neutral_cost: 50 cost_factor: 3.0 publish_potential: true The simulated robot in the video can stay localized with the turns at the start because I have artificially widened its vision, thereby vastly improving localization ability. My real-world setup consists of an Asus Xtion Pro with a narrower view. In simulations with this actual FOV, turning in place at the start causes the robot to get lost very quickly and the whole scenario to fail. I would appreciate any pointers on how I should go about investigating this, and where to start looking. Thanks! Originally posted by eugene_ang on ROS Answers with karma: 11 on 2016-06-20 Post score: 1 Answer: Try turning off the robot model (show the footprint instead) and turning on the frontiers (point cloud) in rviz, and observe how the explore algorithm discovers and chooses the frontiers. There are three parameter options for how to choose a frontier as a navigation goal (closest, middle, centroid). The error message means the base_local_planner cannot find the target cells; try cloning the navigation code and check how to adjust the planner parameters. Hope this can help. :) Originally posted by Jam with karma: 77 on 2016-08-10 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 25004, "tags": "ros, navigation, turtlebot, frontier-exploration, dwa-local-planner" }
Looking for in-depth knowledge of evaluation metrics
Question: I am dealing with an unbalanced dataset. The total number of instances in my dataset is 1273, with 174 in the Yes class and 1099 in the No class. So the imbalance ratio is about 1:6. Now I know recall measures how many yes classes we might have missed in the entire sample. I used Weka to classify the dataset. I was using a Naive Bayes classifier and got recall 0.413. As per the definition of recall I can find out how many yes classes I might have missed. 1273*41.3% = 525.749 However, I wonder how I could miss 525 yes classes when the number of yes classes is itself only 174. Any help would be appreciated. Answer: I think this is just a confusion with the definition of recall. You can better remember it based on the positive class. When we talk about recall, we should look at the actual count of the positive class in the dataset. In your case, if you take 'Yes' as the positive class, there are 174 entries in this class. In simple terms, recall measures the ratio of actual positive class in the prediction against the actual positive class in the population, whereas precision measures the ratio of actual positive class in the prediction against the total predicted positive class. In both cases, only the denominator changes. Your model gives .413 as recall, meaning out of the 1273 records in the population where 174 are the real positive class entries, 'x' number is classified as positive where x/174 = .413. This gives x = .413*174 = 71.862. This means your model could classify only 71 as 'Yes' out of 174. I found this excellent video lesson for this - https://www.youtube.com/watch?v=2osIZ-dSPGE
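The arithmetic can be checked directly. A small sketch (hypothetical, just re-doing the numbers from the answer with 'Yes' as the positive class):

```python
total = 1273        # all instances in the dataset
actual_yes = 174    # actual 'Yes' (positive class) entries
recall = 0.413      # reported by the Weka Naive Bayes run

# Wrong reading: applying recall to the whole population.
wrong = recall * total                # 525.749, which is meaningless here

# Right reading: recall = TP / actual positives, so
# TP = recall * actual positives.
true_positives = recall * actual_yes  # 71.862, i.e. about 71 'Yes' rows found

print(wrong, true_positives)
```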
{ "domain": "datascience.stackexchange", "id": 10638, "tags": "classification, dataset, data-mining, model-evaluations, naive-bayes-classifier" }
How many tautologies are there?
Question: Given $m, n, k$, how many $k$-DNFs with $n$ variables and $m$ clauses are tautologies? (or how many $k$-CNFs are unsatisfiable?) Answer: The answer depends on $k$, $m$, and $n$. Exact counts are generally not known, but there is a "threshold" phenomenon: for most settings of $k$, $m$, $n$, either nearly all $k$-SAT instances are satisfiable, or nearly all instances are unsatisfiable. For example, when $k=3$, it has been empirically observed that when $m < 4.27 n$, all but a $o(1)$ fraction of 3-SAT instances are satisfiable, and when $m > 4.27n$, all but a $o(1)$ fraction are unsatisfiable. (There are also rigorous proofs of bounds known.) One starting point is "The Asymptotic Order of the k-SAT Threshold". Amin Coja-Oghlan has also done a lot of work on these satisfiability threshold problems.
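The threshold phenomenon is easy to observe empirically for tiny instances. A hypothetical brute-force sketch (exhaustive over all $2^n$ assignments, so only practical for very small $n$):

```python
import itertools
import random

def random_3sat(n, m, rng):
    # A clause is a tuple of (variable, sign) pairs: 3 distinct
    # variables, each negated with probability 1/2.
    return [tuple((v, rng.choice([True, False]))
                  for v in rng.sample(range(n), 3))
            for _ in range(m)]

def satisfiable(n, clauses):
    # Exhaustive check: a clause is satisfied when at least one of its
    # literals matches the assignment; the formula needs every clause.
    return any(
        all(any(assign[v] == sign for v, sign in clause)
            for clause in clauses)
        for assign in itertools.product([True, False], repeat=n)
    )

rng = random.Random(0)
n = 10
for ratio in (3.0, 4.27, 6.0):
    m = int(ratio * n)
    sat = sum(satisfiable(n, random_3sat(n, m, rng)) for _ in range(20))
    # Well below the ~4.27 threshold almost every instance is
    # satisfiable; well above it almost none are.
    print(f"m/n = {ratio}: {sat}/20 satisfiable")
```

At such small $n$ the transition is smeared out, but the trend from "almost all satisfiable" to "almost all unsatisfiable" as $m/n$ grows is already visible.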
{ "domain": "cstheory.stackexchange", "id": 3673, "tags": "co.combinatorics, lo.logic, sat" }
Feynman rules with helicity states.
Question: Whenever Feynman rules are stated they are always without any mention of the helicities - this I find to be very confusing. How does one introduce and account for that? Is there an intuitive/simple argument for why massless particles should have "helicities" (and not polarizations) and they can only be of the form $\pm\text{ some positive integer}$? (..i have seen some very detailed arguments for that which hinge on the representation theory for the little group of massless particles and various other topological considerations - i am here looking for some "quick" explanation for that..) Is there some reason why polarized gluon scattering amplitudes at the tree-level can somehow "obviously" be written down? Like for example, consider a process where two positive helicity gluons of momenta $p_1$ and $p_2$ scatter into two negative helicity gluons of momenta $p_3$ and $p_4$; then at tree level the scattering amplitude is, $A(p_1^+,p_2^+,p_3^-,p_4^-)= \frac{ig^2}{4p_1.p_2} \epsilon_2^+ \epsilon_3^-(-2p_3.\epsilon_4^-)(-2p_2.\epsilon_1^+)$ where $\epsilon^{\pm}_i$ is the polarization of the $i^{th}$ particle. I have at places seen this expression being almost directly written down. Is the above somehow obvious? Answer: The argument for the first question goes as follows: Consider the Pauli-Lubanski vector $ W_{\mu} = \epsilon_{\mu\nu\rho\sigma}P^{\nu}M^{\rho\sigma}$, where $P^{\mu}$ are the momenta and $M^{\mu\nu}$ are the Lorentz generators. (The norm of this vector is a Poincare group Casimir but this fact will not be needed for the argument.) By symmetry considerations we have $W_{\mu} P^{\mu} = 0$. Now, in the case of a massless particle, a vector orthogonal to a light-like vector must be proportional to it (easy exercise). Thus $ W^{\mu} = h P^{\mu}$, ($ h = const.$). 
Now, the zero component of the Pauli-Lubanski vector is given by: $ W_{0} = \epsilon_{0\nu\rho\sigma}P^{\nu}M^{\rho\sigma} = \epsilon_{abc}P^{a}M^{bc} = \mathbf{P}.\mathbf{J}$, (where the summation after the second equality is on the spatial indices only, and $\mathbf{J}$ are the rotation generators). Therefore the proportionality constant $ h = \frac{W^{0}}{P^{0}}= \frac{\mathbf{P}.\mathbf{J}}{|\mathbf{P}|}$ is the helicity. Now, on the quantum level, if we rotate by an angle of $2 \pi$ around the momentum axis, the wave function acquires a phase of: $exp(2 \pi i\frac{\mathbf{P}}{|\mathbf{P}|}.\mathbf{J}) = exp(2 \pi i h)$. This factor should be $\pm 1$ according to the particle statistics, thus $h$ must be a half-integer. As for the second question, a very powerful method to construct the gluon amplitudes is by the twistor approach. Please see the following article by N.P. Nair for a clear exposition. Update: This update refers to the questions asked by user6818 in the comments: For simplicity I'll consider the case of a photon and not gluons. The strategy of the solution is based on the explicit construction of the angular momentum and spin of a free photon field (which depend on the polarization vectors) and showing that the above relations are satisfied for the photon field. The photon momentum and the angular momentum densities can be obtained via the Noether theorem from the photon Lagrangian. Alternatively, it is well known that the photon linear momentum is given by the Poynting vector (proportional to) $\vec{E}\times\vec{B}$, and it is not difficult to convince oneself that the total angular momentum density is (proportional to) $\vec{x}\times (\vec{E}\times\vec{B})$. Now, the total angular momentum can be decomposed into orbital and spin angular momenta (please see K.T. 
Hecht: quantum mechanics (page 584 equation 16)) $\vec{J} = \int d^3x \, (\vec{x}\times (\vec{E}\times\vec{B})) =\int d^3x \, (\vec{E}\times\vec{A} + \sum_{j=1}^3 E_j \vec{x} \times \vec{\nabla} A_j )$ The first term on the right hand side can be interpreted as the spin and the second as the orbital angular momentum as it is proportional to the position. Now, neither the spin nor the orbital angular momentum densities are gauge invariant (only their sum is). But, one can argue that the total orbital angular momentum is zero because the position averages to zero, thus the total spin: $ \vec{S} =\int d^3x \, (\vec{E}\times\vec{A})$ is gauge invariant. Now, we can observe that in canonical quantization: $[A_j, E_k] = i \delta_{jk}$, we get $[S_j, S_k] = 2i \epsilon_{jkl} S_l$, which are the angular momentum commutation relations apart from the factor of 2. Now, by substituting the plane wave solution: $A_k = \sum_{k,m=1,2} a_{km} \vec{\epsilon_m}(k) exp(i(\vec{k}.\vec{x}-|k|t)) +h.c.$ (The condition $\vec{\epsilon_m}(k).\vec{k} = 0$ is just a consequence of the vanishing of the sources). We obtain: $\vec{S} = \sum_{k,m=1,2}(-1)^{m} a^\dagger_{km}a_{km} \hat{k} = \sum_{k}(n_1-n_2)\hat{k}$ (where $n_1$, $n_2$ are the numbers of right and left circularly polarized photons). Thus for a single free photon, the total spin, thus the total angular momentum, are aligned along or opposite to the momentum, which is the same result stated in the first part of the answer. Secondly, the photon total spin operators exist and transform (up to a factor of two) as spin 1/2 angular momentum operators.
{ "domain": "physics.stackexchange", "id": 3324, "tags": "quantum-field-theory, research-level, quantum-chromodynamics, feynman-diagrams, helicity" }
Lithium Oxide Colour
Question: According to Chemistry LibreText: Lithium is the lightest of all metals and is named from the Greek word for stone (lithos). It is the first member of the Alkali Metal family. It is less dense than water (with which it reacts) and forms a black oxide in contact with air. However, if I'm not wrong, all alkali metals react with non-metals to form white oxides. $\text{Li}_2 \text{O}$ is a white chemical compound, isn't it? What's with the site stating the opposite -- anything I'm possibly missing out on? Answer: From Handbook of Inorganic Chemicals by Pradyot Patnaik: LITHIUM OXIDE White cubic crystals; refractive index $\pu{1.644}$; density $\pu{2.013 g/cm3}$; melts at $\pu{1,570 °C}$; dissolves and decomposes in water ($\pu{6.67 g/100g}$ at $\pu{0 °C}$ and $\pu{10.02 g/100g}$ at $\pu{100 °C}$). What the quote states is not correct. Lithium on contact with air forms both lithium oxide and lithium nitride, which looks black during the reaction, perhaps due to constant oxidation (actually pure lithium nitride exists as red hexagonal crystals). The reaction is exothermic. $$\ce{6Li + N2 -> \underset{black}{2Li3N}}$$
{ "domain": "chemistry.stackexchange", "id": 15945, "tags": "color, salt, alkali-metals" }
'Length measure' vs. 'length measurement'
Question: I have to write a scientific paper where I refer to a CAD drawing. Now, I'm a bit confused, because I have to refer to the length "parameters" on that drawing (as opposed to the angle "parameters"), e.g. the segment lengths l1, l2, ... Should I refer to these length parameters as "length measurements" or as "length measures"? I used both as a synonym so far, but I'm not sure if that is accurate. Answer: I might be misreading your question. As I understand it, your question is more about English language usage of engineering terminology than about an engineering issue. To me, a measurement is a reading of a quantity. In your situation it is length, for an electrical engineer it might be volts or amps. Likewise, my interpretation of a measure is an action undertaken, such as: "the air strike in Foobar province was a measure against counter insurgency". For your paper, I would suggest you use length measurements.
{ "domain": "engineering.stackexchange", "id": 676, "tags": "mechanical-engineering, computer-aided-design" }
rviz visualization
Question: Hi everybody, I have problems visualizing rviz. I have installed ROS on Ubuntu 10.04. In one terminal I run roscore and in another terminal I run rosrun rviz rviz and I get rviz revision number 1.6.7 ogre_tools revision number 1.6.2 compiled against OGRE version 1.7.3 (Cthugha) Loading general config from [/home/acp/.rviz/config] Loading display config from [/home/acp/.rviz/display_config] RTT Preferred Mode is PBuffer. Texture for pass 0: creating with size 1 x 1 Segmentation fault Any clue about this segmentation fault? In advance, thank you very much. Originally posted by acp on ROS Answers with karma: 556 on 2011-11-22 Post score: 0 Answer: well, it is a matter of giving it some tries: run $ rosrun rviz rviz again and rviz will be launched. Problem solved :) Originally posted by acp with karma: 556 on 2011-11-23 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 7396, "tags": "ros, rviz, visualization" }
From the debs Gazebo wants to remove ros-hydro-desktop-full
Question: Hi folks. The Gazebo ROS integration tutorial suggest to install ros-hydro-desktop-full. OK. And then, to install gazebo. But this last package suggest to remove the previous one. Do we have to keep gazebo-prerelease? Thx. arnaud@hercules:~$ sudo apt-get install gazebo Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: ffmpeg fltk1.3-doc fluid freeglut3-dev gir1.2-gtk-2.0 libasound2-dev libatk1.0-dev libav-tools libavahi-client-dev libavahi-common-dev libavcodec-dev libavdevice53 libavfilter2 libavformat-dev libavutil-dev libbullet libbullet-dev libcaca-dev libcairo-script-interpreter2 libcairo2-dev libcf0 libdbus-1-dev libflann-dev libflann1.7 libfltk-forms1.3 libfltk-images1.3 libfltk1.1 libfltk1.1-dev libfltk1.3 libfontconfig1-dev libgdk-pixbuf2.0-dev libgl2ps-dev libglib2.0-dev libgtk2.0-dev libharfbuzz-dev libhdf5-7 libjasper-dev libjs-sphinxdoc libjs-underscore libnetcdf-dev libnetcdfc++4 libnetcdfc7 libnetcdff5 libogg-dev libpango1.0-dev libphonon4 libpixman-1-dev libpulse-dev libpyside-dev libpyside-py3-1.1 libpyside1.1 libqhull-dev libqwt-dev libqwt5-qt4 libqwt6 libsdl-image1.2 libsdl-image1.2-dev libsdl1.2-dev libshiboken-dev libshiboken-py3-1.1 libshiboken1.1 libslang2-dev libswscale-dev libtheora-dev libusb-1.0-0-dev libv4l-dev libvtk5-dev libvtk5.8 libwebp4 libxcb-render0-dev libxcb-shm0-dev libxcomposite-dev libxcursor-dev libxft-dev libxi-dev libxinerama-dev libxml2-dev libxml2-utils libxrandr-dev libxrender-dev libxss-dev libyaml-cpp-dev libyaml-cpp0.3 phonon phonon-backend-gstreamer python-epydoc python-jinja2 python-kitchen python-matplotlib python-matplotlib-data python-opengl python-pydot python-pyparsing python-pyside python-pyside.phonon python-pyside.qtcore python-pyside.qtdeclarative python-pyside.qtgui python-pyside.qthelp python-pyside.qtnetwork python-pyside.qtopengl python-pyside.qtscript python-pyside.qtsql 
python-pyside.qtsvg python-pyside.qttest python-pyside.qtuitools python-pyside.qtwebkit python-pyside.qtxml python-qt4-dev python-qwt5-qt4 python-sphinx python-tz ros-hydro-actionlib ros-hydro-actionlib-msgs ros-hydro-actionlib-tutorials ros-hydro-amcl ros-hydro-angles ros-hydro-base-local-planner ros-hydro-bfl ros-hydro-bond-core ros-hydro-bondpy ros-hydro-camera-calibration ros-hydro-camera-calibration-parsers ros-hydro-camera-info-manager ros-hydro-carrot-planner ros-hydro-clear-costmap-recovery ros-hydro-collada-parser ros-hydro-collada-urdf ros-hydro-common-msgs ros-hydro-common-tutorials ros-hydro-compressed-depth-image-transport ros-hydro-compressed-image-transport ros-hydro-convex-decomposition ros-hydro-costmap-2d ros-hydro-cv-bridge ros-hydro-depth-image-proc ros-hydro-diagnostic-aggregator ros-hydro-diagnostic-analysis ros-hydro-diagnostic-common-diagnostics ros-hydro-diagnostic-msgs ros-hydro-diagnostic-updater ros-hydro-diagnostics ros-hydro-driver-base ros-hydro-driver-common ros-hydro-dwa-local-planner ros-hydro-eigen-conversions ros-hydro-eigen-stl-containers ros-hydro-executive-smach ros-hydro-fake-localization ros-hydro-filters ros-hydro-gazebo-msgs ros-hydro-gencpp ros-hydro-genlisp ros-hydro-geometric-shapes ros-hydro-geometry ros-hydro-geometry-experimental ros-hydro-geometry-msgs ros-hydro-geometry-tutorials ros-hydro-gmapping ros-hydro-image-common ros-hydro-image-geometry ros-hydro-image-pipeline ros-hydro-image-proc ros-hydro-image-rotate ros-hydro-image-transport ros-hydro-image-transport-plugins ros-hydro-image-view ros-hydro-interactive-marker-tutorials ros-hydro-interactive-markers ros-hydro-ivcon ros-hydro-joint-state-publisher ros-hydro-kdl-conversions ros-hydro-kdl-parser ros-hydro-laser-assembler ros-hydro-laser-filters ros-hydro-laser-geometry ros-hydro-laser-pipeline ros-hydro-librviz-tutorial ros-hydro-map-msgs ros-hydro-map-server ros-hydro-message-filters ros-hydro-message-generation ros-hydro-mk ros-hydro-move-base 
ros-hydro-move-base-msgs ros-hydro-move-slow-and-clear ros-hydro-nav-core ros-hydro-nav-msgs ros-hydro-navfn ros-hydro-navigation ros-hydro-nodelet-core ros-hydro-nodelet-topic-tools ros-hydro-nodelet-tutorial-math ros-hydro-octomap ros-hydro-opencv2 ros-hydro-openslam-gmapping ros-hydro-orocos-kdl ros-hydro-pcl ros-hydro-pcl-conversions ros-hydro-pcl-msgs ros-hydro-pcl-ros ros-hydro-perception-pcl ros-hydro-pluginlib-tutorials ros-hydro-polled-camera ros-hydro-python-qt-binding ros-hydro-qt-dotgraph ros-hydro-qt-gui ros-hydro-qt-gui-app ros-hydro-qt-gui-core ros-hydro-qt-gui-cpp ros-hydro-qt-gui-py-common ros-hydro-random-numbers ros-hydro-resource-retriever ros-hydro-robot-model ros-hydro-robot-pose-ekf ros-hydro-robot-state-publisher ros-hydro-ros ros-hydro-ros-comm ros-hydro-ros-tutorials ros-hydro-rosbash ros-hydro-rosboost-cfg ros-hydro-rosbuild ros-hydro-rosclean ros-hydro-rosconsole-bridge ros-hydro-roscpp-tutorials ros-hydro-roscreate ros-hydro-rosdoc-lite ros-hydro-roslang ros-hydro-roslaunch ros-hydro-roslisp ros-hydro-rosmake ros-hydro-rosmaster ros-hydro-rosnode ros-hydro-rosout ros-hydro-rosparam ros-hydro-rospy-tutorials ros-hydro-rostest ros-hydro-rostopic ros-hydro-rosunit ros-hydro-roswtf ros-hydro-rotate-recovery ros-hydro-rqt-action ros-hydro-rqt-bag ros-hydro-rqt-bag-plugins ros-hydro-rqt-common-plugins ros-hydro-rqt-console ros-hydro-rqt-dep ros-hydro-rqt-graph ros-hydro-rqt-gui ros-hydro-rqt-gui-cpp ros-hydro-rqt-gui-py ros-hydro-rqt-image-view ros-hydro-rqt-launch ros-hydro-rqt-logger-level ros-hydro-rqt-moveit ros-hydro-rqt-msg ros-hydro-rqt-nav-view ros-hydro-rqt-plot ros-hydro-rqt-pose-view ros-hydro-rqt-publisher ros-hydro-rqt-py-common ros-hydro-rqt-py-console ros-hydro-rqt-reconfigure ros-hydro-rqt-robot-dashboard ros-hydro-rqt-robot-monitor ros-hydro-rqt-robot-plugins ros-hydro-rqt-robot-steering ros-hydro-rqt-runtime-monitor ros-hydro-rqt-rviz ros-hydro-rqt-service-caller ros-hydro-rqt-shell ros-hydro-rqt-srv ros-hydro-rqt-tf-tree 
ros-hydro-rqt-top ros-hydro-rqt-topic ros-hydro-rqt-web ros-hydro-rviz ros-hydro-rviz-plugin-tutorials ros-hydro-rviz-python-tutorial ros-hydro-self-test ros-hydro-sensor-msgs ros-hydro-shape-msgs ros-hydro-shape-tools ros-hydro-smach ros-hydro-smach-msgs ros-hydro-smach-ros ros-hydro-stage ros-hydro-std-srvs ros-hydro-stereo-image-proc ros-hydro-stereo-msgs ros-hydro-tf ros-hydro-tf-conversions ros-hydro-tf2 ros-hydro-tf2-bullet ros-hydro-tf2-geometry-msgs ros-hydro-tf2-kdl ros-hydro-tf2-msgs ros-hydro-tf2-py ros-hydro-tf2-ros ros-hydro-tf2-tools ros-hydro-theora-image-transport ros-hydro-timestamp-tools ros-hydro-trajectory-msgs ros-hydro-turtle-actionlib ros-hydro-turtle-tf ros-hydro-turtlesim ros-hydro-urdf ros-hydro-urdf-parser-plugin ros-hydro-urdf-tutorial ros-hydro-urdfdom ros-hydro-urdfdom-headers ros-hydro-vision-opencv ros-hydro-visualization-marker-tutorials ros-hydro-visualization-msgs ros-hydro-visualization-tutorials ros-hydro-voxel-grid ros-hydro-xacro shiboken sphinx-common sphinx-doc tango-icon-theme tcl8.5-dev tk8.5-dev x11proto-composite-dev x11proto-randr-dev x11proto-render-dev x11proto-scrnsaver-dev x11proto-xinerama-dev Use 'apt-get autoremove' to remove them. The following packages will be REMOVED: gazebo-prerelease ros-hydro-desktop-full ros-hydro-gazebo-plugins ros-hydro-gazebo-ros ros-hydro-gazebo-ros-pkgs The following NEW packages will be installed: gazebo 0 upgraded, 1 newly installed, 5 to remove and 0 not upgraded. Need to get 0 B/37,4 MB of archives. After this operation, 13,8 MB disk space will be freed. Originally posted by Arn-O on Gazebo Answers with karma: 316 on 2013-08-02 Post score: 0 Original comments Comment by evilBiber on 2013-08-02: I guess the package ros-hydro-desktop-full package allready installed the Gazebo Version you are looking for! (The other Version is probably 1.86 or so without the new Ros-Integration...) Comment by Arn-O on 2013-08-02: Thx for the comment. 
I would have said gazebo is the 1.9.0 and the 1.9.5 ... according to my tries. Answer: There is a temporary lag in the release system, as soon as the next update to Hydro is released ros-hydro-desktop-full will pull gazebo, not gazebo-prerelease Originally posted by davetcoleman with karma: 369 on 2013-08-02 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by tfoote on 2013-08-02: The repository was just synced. The new versions of ROS Hydro packages are built against the gazebo package no longer the gazebo-prerelease package. Comment by Arn-O on 2013-08-04: Thx for the help.
{ "domain": "robotics.stackexchange", "id": 3413, "tags": "ros" }
I am not getting classification output by predict_generator()
Question: I am trying to classify pre-downloaded images from my dataset to "Rock" , "Paper" , "Scissor" classifications but I am getting outputs as numbers. I have already divided the dataset to Train folder where 3 folders with the names of the categories exist. Is it because I am not using a convolutional neural network? I don't understand what to do here . Code: from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Dense, Flatten import numpy as np train_directory = 'D:\D_data\Rock_Paper_Scissors\Train' training_datgagen = ImageDataGenerator(rescale = 1./255) training_generator = training_datgagen.flow_from_directory( train_directory, target_size = (28,28), class_mode = 'categorical', classes = ["Rock", "Paper" , "Scissor"]) validation_directory = 'D:\D_data\Rock_Paper_Scissors\Train' validation_datagen = ImageDataGenerator(rescale= 1./255) validation_generator = validation_datagen.flow_from_directory( validation_directory, target_size = (28,28), class_mode = 'categorical', classes = ["Rock", "Paper" , "Scissor"] ) model = Sequential() model.add(Flatten(input_shape = (28,28,3))) model.add(Dense(128,activation = 'relu')) model.add(Dense(64, activation = 'relu')) model.add(Dense(16, activation = 'relu')) model.add(Dense(3, activation = 'softmax')) model.compile(optimizer = 'adam', loss = 'categorical_crossentropy',metrics = ['accuracy'],) filenames = validation_generator.filenames nb_samples = len(filenames) desired_batch_size = 1 model.fit_generator(training_generator,epochs=20,validation_data = validation_generator) predict = model.predict_generator(validation_generator, steps = np.ceil(nb_samples/desired_batch_size)) print(predict) Output: [[2.2792105e-01 7.7206087e-01 1.8017196e-05] [2.8221733e-03 9.9715865e-01 1.9176909e-05] [5.1788869e-04 9.9948126e-01 8.8551565e-07] ... 
[9.9283433e-01 7.1645100e-03 1.1474091e-06] [9.9999678e-01 3.2666808e-06 4.5816462e-09] [9.9640071e-01 3.5960779e-03 3.1497514e-06]] Answer: model.predict_generator returns a NumPy array of class probabilities, one row per image. You can take the class with the maximum probability using the NumPy argmax function, or use predict_classes. This link might help: How to predict class label from class probability given by predict_generator for testdata?
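To turn those probability rows into class labels, you can take the argmax over each row and map it back to the class list that was passed to flow_from_directory — a minimal sketch, using the first three probability rows from the question's output:

```python
import numpy as np

# Probabilities as returned by model.predict_generator(), one row per image
# (values copied from the question's printed output, truncated to three rows).
predict = np.array([
    [2.2792105e-01, 7.7206087e-01, 1.8017196e-05],
    [2.8221733e-03, 9.9715865e-01, 1.9176909e-05],
    [5.1788869e-04, 9.9948126e-01, 8.8551565e-07],
])

# Column order matches the `classes` list given to flow_from_directory.
class_names = ["Rock", "Paper", "Scissor"]

# argmax over axis 1 picks the most probable class index for each row.
predicted_indices = np.argmax(predict, axis=1)
predicted_labels = [class_names[i] for i in predicted_indices]
print(predicted_labels)  # all three rows peak in column 1 -> "Paper"
```

Note that for the predictions to line up with generator.filenames, the validation generator should be created with shuffle=False; otherwise the row order is randomized.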
{ "domain": "datascience.stackexchange", "id": 7442, "tags": "machine-learning, deep-learning, dataset, image-classification" }
Possible to make a shaped charge with a nuclear weapon?
Question: Ignoring the practicality of doing something like this, would it be possible to make a large scale shaped charge using nuclear weapons, or is there some limitation that makes this only possible with small scale explosions? Answer: Yes, a nuclear "shaped charge" is possible. You can think of a conventional shaped charge as a set of small explosive charges in a particular 3D arrangement(e.g., a conical shell). It is the arrangement of the charges and the relative timing with which they are triggered that determines the shape of the emerging shock front. An array of small nuclear bombs, triggered with specific relative timing, can produce a non-spherical shock front just as a shaped charge does.
{ "domain": "physics.stackexchange", "id": 62866, "tags": "nuclear-physics, explosions" }
From scratch pub sub definition and implementation
Question: I am finished with my implementation of PUB/SUB or Observer pattern, however I just want to see any ways that I may be able to improve this pattern. I wanted to be able to use the typical PubSub pattern except add performance enhancements by using the new key word to instantiate each new instance of a PubSub object, as well as make sure that the initialize fn as well as all of the public methods are chain-able. Here is my PubSub definition: (function(global) { 'use strict'; var newPubSub; global.PubSub = { core: function core() {}, init: function(moduleNs) { if (!global[moduleNs]) global[moduleNs] = new PubSub.core(); else throw new Error('A module has already been initialized under this namespace'); newPubSub = global[moduleNs]; return newPubSub; } }; var instance = PubSub.core.prototype; /** * Will check for the existance of prev array under this ns and if none exists it will create * a new prop with the ns and the function as the sole member of the array. If an array already * exists under this namespace then the function is simply pushed into that namespaces array of * methods that it will be triggered when that ns becomes published. 
**/ instance.sub = function(ns, fn) { if (!newPubSub[ns]) { newPubSub[ns] = [fn]; } else if (Array.isArray(newPubSub[ns]) && newPubSub[ns].length > 0) { newPubSub[ns].forEach(function(savedFn) { if (fn === savedFn) throw new TypeError('This function already exists under this namespace, please provide a new method for this namespace or add a new namespace to hold this method/fn'); else { newPubSub[ns].push(fn); } }); } return this; }; /** * Will remove all methods for a listener and the listener ns is not provided, * or will remove a single method from a listener if the ns provided has a matching method/fn **/ instance.unsub = function(ns, fn) { function revokeAllMethods(ns) { if (newPubSub[ns]) { delete newPubSub[ns]; } } // loops through all the methods in a namespace provided and compares to see if the method provided exists, if it does it will be removed function revokeSpecificMethod(ns, fn) { newPubSub[ns].forEach(function(savedMethod, idx) { // Remove the method that matches the one provided to unsubscribe if (fn === savedMethod) newPubSub[ns].splice(idx, 1); // Otherwise an error will be thrown else throw new Error('No method match found under the namespace '+ns); }); } if (typeof ns === 'string') { if (!fn) { revokeAllMethods(ns); } else if (typeof fn === 'function') { revokeSpecificMethod(ns, fn); } } return this; }; /** * Will trigger an event with provided params on any methods under the ns triggered */ instance.pub = function(ns, params) { if (!newPubSub[ns]) throw new TypeError('No method found under the namespace "' + ns + '" inside of Object. 
It has either been unsubscribed or never existed.'); newPubSub[ns].forEach(function(fn) { fn.apply(null, params); }); return this; }; })(window); Implementation: PubSub.init('test'); test.sub('alert', function(msg){ alert(msg); }).pub('alert',['bob']).sub('somethingelse', function(){ alert('look ma no params'); }).pub('somethingelse').pub('alert',['bob']).unsub('alert').pub('alert',[12]); Answer: if (!global[moduleNs]) global[moduleNs] = new PubSub.core(); I think this is a practice handed down from the OOP forefathers (Java, C++) where a singleton is created using a single instance of a class definition. In JS, you don't need to do this. Setting up the constructor, using new, putting methods etc is just unneeded code. You can simply use an object literal. Also, there's an alternate way to "use existing or create new global": ;(function(lib){ // `lib` is an object which you can attach your API to. var events = {}; lib.sub = function(){...}; lib.unsub = function(){...}; // This is the "use existing or create new" }(this.PubSub = this.PubSub || {})); Unless there is actually danger in proceeding, there is no need to throw errors all over the place. For instance, there is no harm in trying to subscribe a function that's already there, or unsubscribe a function that's not there. You can do a harmless log, but it's not something you should halt your app for. Your sub can be better if it just received a function instead of a function or an array. You can always create a separate function for adding an array, which in turn, just calls sub for each item in the array. This avoid bloating your sub. lib.sub = function(event, handler){ if(typeof handler !== 'function') throw new Error('Handler not a function'); // Use existing array or create an array for event name events[event] = events[event] || []; // We can use indexOf to check if the value is in the array. // If not, we push it there. 
if(!~events[event].indexOf(handler)) events[event].push(handler); } lib.subArray = function(event, handlers){ handlers.forEach(function(handler){ lib.sub(event, handler); }); } unsub can be simplified too. It's just a matter of finding and removing the handler. You can either do loop and splice, or do a filter. Either way is costly, but for simplicity, I went for filter. lib.unsub = function(event, handler){ if(!events[event]) return; events[event] = events[event].filter(function(storedHandler){ return storedHandler !== handler; }); } lib.unsubArray = function(event, handlers){ handlers.forEach(function(handler){ lib.unsub(event, handler); }); } lib.unsubAll = function(event){ if(!events[event]) return; // A neat way to clear an array is to set its length to 0 events[event].length = 0; } pub can be optimized. PubSub pattern is fire and forget. Fire some event, and the caller doesn't care who receives it. Thus, there is no need to throw an error if the event is in the registry or not. Also, order of events should never be relied upon. Events can fire at any time, at any order by anyone. Knowing the order means you know who called, which means your code is coupled. Taking advantage of this fact, we can use timers to fire our events instead of a loop. Handlers are third-party code, which are potentially long-running (like contain a 100k loop). The main advantage of doing so is to provide a gap between handler calls, giving other tasks like UI rendering, a chance to execute. Regarding params, you can allow pub to accept normal args, not just one. You can just slice them after the event name onwards. lib.pub = function(event){ if(!events[event]) return; var params = Array.prototype.slice.call(arguments, 1); var handlerIndex = 0; var timer = setInterval(function(){ // Using apply to send params. Events will receive them as args. 
// Also forcing context to null to avoid ambiguity of what `this` is events[event][handlerIndex].apply(null, params); // We clear the timer when done if(++handlerIndex === events[event].length) clearInterval(timer); }, 0); }
{ "domain": "codereview.stackexchange", "id": 15970, "tags": "javascript, performance, object-oriented, design-patterns" }
Good reading on the Keldysh formalism
Question: I'd like some suggestions for good reading materials on the Keldysh formalism in the condensed matter physics community. I'm familiar with the imaginary time, coherent state, and path integral formalisms, but lately, I've been seeing Keldysh more and more in papers. My understanding is that it is superior to the imaginary time formalism at least in that one can evaluate non-equilibrium expectations. Answer: I am somewhat biased towards condensed matter physics, even though the subject extends also to fields such as cosmology and QCD. In the context of condensed matter physics I recommend the following books (even though various techniques also apply outside this regime): Rammer's Quantum Field Theory of Non-Equilibrium States. This was my first read on it, and I was quite content with it. If you are familiar with the idea of using periodic imaginary time to simulate a temperature then this book will explain the small additional step you need to take to grasp the basics of the Keldysh formalism. Unfortunately it's just formalism for the first 7 (!) chapters and sometimes the pace is a bit slow. Kadanoff and Baym's Quantum Statistical Mechanics: Green's Function Methods in Equilibrium and Nonequilibrium Problems. A classic. Kubo, Toda and Hashitsume's Statistical Physics II: Nonequilibrium Statistical Mechanics. Has some elements of classical statistical physics as well. The authors are very insightful. There's also chapter 18 in Kleinert which I find a nice read. This book is huge though and treats a lot of other topics. Still, if you go through Rammer then this chapter by Kleinert nicely summarizes everything without dropping too many details. The newest edition of Altland and Simons has two chapters on classical and quantum systems out of equilibrium, but I was fairly disappointed with their treatment considering the rest of the book is fantastic. 
As for quantum transport, where this formalism is frequently employed, I can recommend Di Ventra as an undergrad-level introductory book and this book by Datta for some other interesting topics. Weiss is excellent for dissipative (open) systems, although this field opens up a whole new can of worms so you might want to avoid at first. Other sources not in book form: Rammer and Smith's review article "Quantum field-theoretical methods in transport theory of metals" (Rev. Mod. Phys. 58 no. 2, 323–359 (1986), also available here). Solid. Kamenev and Levchenko's article "Keldysh technique and non-linear σ-model: basic principles and applications" (Adv. Phys. 58 no. 3 (2009), pp. 197-319, arXiv:0901.3586) is very advanced, but it treats some important details.
{ "domain": "physics.stackexchange", "id": 65946, "tags": "quantum-field-theory, condensed-matter, research-level, resource-recommendations, many-body" }
Change in focal length of lens inside water
Question: I read it on Yahoo Answers (link below) that the power of a lens changes when the lens is placed inside water. But then, if we consider the formula f=c/2, the radius of curvature of the lens still remains the same, so how can the focal length change? While considering the definition (the point where light rays meet) we see that the focal length changes. But while considering the formula (f=c/2) the focal length doesn't change. Can anyone please explain this and please do mention whether the focal length actually changes or not. http://answers.yahoo.com/question/index?qid=20090217024946AAk1njB Answer: The formula $$f=\frac{c}{2}$$ where $c$ is the radius of curvature is for mirrors, not lenses. The reason you are getting a wrong result is that you are applying a formula designed for mirrors to a lens. Incidentally, the formula $f=\frac{c}{2}$ is still valid for mirrors underwater.
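For completeness, the reason a lens's focal length really does change underwater follows from the thin-lens lensmaker's equation, where what matters is the refractive index of the lens relative to the surrounding medium; the radii of curvature themselves are unchanged, but the prefactor shrinks (the numbers in the comments assume a typical glass lens, n ≈ 1.5, in water, n ≈ 1.33):

```latex
% Thin-lens lensmaker's equation in a surrounding medium of index n_m:
\frac{1}{f} \;=\; \left(\frac{n_{\text{lens}}}{n_m} - 1\right)
                  \left(\frac{1}{R_1} - \frac{1}{R_2}\right)
% In air (n_m = 1) the prefactor is about 0.5 for glass;
% in water (n_m \approx 1.33) it drops to about 0.13,
% so the focal length roughly quadruples and the power drops accordingly.
```

A mirror's focal length, by contrast, depends only on geometry, which is why $f=c/2$ still holds underwater.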
{ "domain": "physics.stackexchange", "id": 25314, "tags": "lenses" }
What frequencies are present in the Fourier transform of the Dirac impulse?
Question: When I do the Fourier transform of the Dirac impulse I get a pure sinusoid (or complex exponential, however you wanna call it) but I read in several places that all frequencies are present in the dirac impulse and all of them with the same amplitude. How is this possible? Am I wrong when I perform the transform? Answer: A Dirac impulse $x(t)=\delta(t-d)$ has the continuous-time Fourier transform $X(\Omega)$ of $$\mathcal{F}\{\delta(t-d) \} = 1 e^{-j\Omega d} $$ whose magnitude is $$|X(\Omega)| = 1 ~~~, \text{ for all } \Omega $$ and a phase of $$\angle X(\Omega) = -\Omega \cdot d $$ Note that it's incomplete to think of the real or imaginary parts of the Fourier transform alone. Rather the magnitude and phase point of views are more reflective of the nature of the result. So in this case the magnitude is $1$ and hence it's said to contain all frequencies of magnitude $1$. Note that these are differential amplitude components of continuum frequency range as opposed to a finite amplitude of discrete set of frequency components, aka line components.
{ "domain": "dsp.stackexchange", "id": 6567, "tags": "fourier-transform" }
Gravitational-Wave Strain and Power (watt/square metre)
Question: If we detect a gravitational wave with a strain of, for example, $h=10^{-20}$, what is the flux of power carried by this wave, in SI units, $W/m^2$? How can flux of power be calculated for a given strain, such as $h=10^{-50}$ or $h=10^{-100}$? Answer: Some initial considerations. The total energy would depend on the length of time that the wave was continuous for, and the total area through which the wave passes. That is, there is no hope to find the "Energy, in Joules" but instead to find the "flux" in Watts per square metre. Further thought: it is not the strain that will be proportional to the energy, but how fast the strain is changing. Consider two waves with the same strain, but one is at a much higher frequency. And recall Feynman's sticky bead thought experiment. The higher frequency wave would move the bead more quickly and so be of a higher power. At the opposite end of the spectrum, a wave with (nearly) infinite wavelength would give a (nearly) constant strain, and so not transfer energy at all. So the relationship will be: "flux" (in SI units of Joules/second/square-metre) is a function of $\dot h$, the rate of change of strain with respect to time. For convenience, it is normal to resolve $h$ and $\dot h$ into + and × components. http://www.tapir.caltech.edu/~teviet/Waves/gwave_details.html gives a formula for the flux: $$\def\d{\mathrm{d}}\text{flux}=\frac{\d E}{\d A\, \d t}=\frac{c^3}{16\pi G} (\dot h_+^2 + \dot h_×^2)$$ For sinusoidal waves, $\dot h$ is proportional to $f h$ where $f$ is the frequency of the waves, so the flux would be in proportion to $f^2h^2$, and so a wave with a strain of $h=10^{-50}$, which is $10^{-30}$ times smaller, would carry $10^{-60}$ times less energy (per second per square metre) than a wave with $h=10^{-20}$ if the frequencies were equal. (That's like the difference between the sound energy released when a mote of dust lands and a supernova.)
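Plugging numbers into that flux formula for a single-polarization sinusoidal wave — where the time average gives $\langle \dot h^2 \rangle = (2\pi f h)^2/2$ — can be sketched as follows; the 100 Hz frequency is an arbitrary illustrative choice, not part of the question:

```python
import math

c = 299_792_458.0        # speed of light, m/s
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2

def gw_flux(h: float, f: float) -> float:
    """Time-averaged flux (W/m^2) of a single-polarization sinusoidal
    gravitational wave with strain amplitude h and frequency f (Hz)."""
    h_dot_sq_avg = (2 * math.pi * f * h) ** 2 / 2   # <h_dot^2> for h(t) = h sin(2*pi*f*t)
    return c**3 / (16 * math.pi * G) * h_dot_sq_avg

# Strain 1e-20 at 100 Hz: roughly 0.16 W/m^2
print(gw_flux(1e-20, 100.0))

# Flux scales as h^2, so a strain of 1e-50 carries 10^60 times less flux
print(gw_flux(1e-50, 100.0) / gw_flux(1e-20, 100.0))
```

This reproduces the $f^2 h^2$ scaling stated above: halving neither amplitude nor frequency, the $10^{-30}$-times-smaller strain gives a $10^{-60}$-times-smaller flux.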
{ "domain": "astronomy.stackexchange", "id": 6913, "tags": "gravity, general-relativity, gravitational-waves, gravitation" }
Countdown script in JavaScript
Question: I've written this little countdown script to count until the end of our school days. Now, that was written quickly, badly remembering modulo from a few years ago, and I don't think that's very optimized. I thought about updating only the seconds and check if it needs to update the minute too, etc. But it wouldn't be accurate as setTimeout is not (depending if the browser lags, etc). (function() { //the variable block here seems kinda weird, but whatever toots your horn var el = document.getElementById('countdown'), endDate = new Date('March 30, 2012 18:10:00'), curDate, diff, days, hours, minutes, seconds, tmp, countdown, //added Math.floor. this is already a shortcut, might as well make it a double one pad = function(number) { return (number < 10 ? '0' : '') + Math.floor(number) }, //calculate these constants once, instead of over and over minute = 60 * 1000, hour = minute * 60, day = hour * 24 ;(function tick() { curDate = new Date() //you want the absolute value of this, not of individual calculations using this diff = Math.abs(new Date(curDate.getTime() - endDate.getTime())) days = diff / day tmp = diff % day hours = tmp / hour tmp = tmp % hour minutes = tmp / minute tmp = tmp % minute seconds = tmp / 1000 //parseInt was redundant countdown = pad(days) + ':' + pad(hours) + ':' + pad(minutes) + ':' + pad(seconds) if ( 'textContent' in el ) { el.textContent = countdown } else { el.innerText = countdown } //dont't use arguments.calle, it's deprecated //simply use a named function expression setTimeout(tick, 1000) }()) }())​ About the semicolons missing: it is a deliberate choice as I find this more readable (the ; right before the (function() is because it is the only edge case where the semicolon insertion doesn't work correctly). Answer: Your whole code can be reduced to this. 
/* secsToDHMS() * Author Gary Green * http://stackoverflow.com/users/63523/gary-green */ function secsToDHMS(s) { var x = [86400, 3600, 60, 1],z,i=-1; while (z=x[++i]) x[i] = ("0" + parseInt(s / z,10)).slice(-2), s %= z; return x.join(':'); }; (function() { var currentDate = new Date(); var endDate = new Date('March 30, 2012 18:10:00'); var secsToGo = (new Date(endDate - currentDate)).getTime() / 1000; var countdownElement = document.getElementById('countdown'); var timer = setInterval(function() { if (--secsToGo < 0) { clearInterval(timer); return; } countdownElement.innerHTML = secsToDHMS(secsToGo); }, 1000); })(); Fiddle: http://jsfiddle.net/xg63V/4 Compressed version 311 bytes (function(){var d=(new Date(new Date("March 30, 2012 18:10:00")-new Date)).getTime()/1E3, f=document.getElementById("countdown"),g=setInterval(function(){if(0>--d)clearInterval(g); else{for(var a=d,b=[86400,3600,60,1],c,e=-1;c=b[++e];)b[e]=("0"+parseInt(a/c,10)).slice(-2), a%=c;f.innerHTML=b.join(":")}},1E3)})();​
{ "domain": "codereview.stackexchange", "id": 1517, "tags": "javascript, datetime" }
Integer endianness types for protocol structures, take 2
Question: Motivation When working with storage or wire protocols, we often read or write structures containing integers with specific byte-ordering (e.g. big-endian for Internet Protocol, or little-endian for USB). It's common to use functions such as the htons() family to convert values, but it's easy to miss a conversion (particularly if the protocol is the same endianness as the development system). Instead, I prefer to use the type system to distinguish host integers from their representations' protocol byte-order. Known limitations (non-goals): I haven't had a need to handle floating-point values, so only integers are handled here. It's not suitable for protocols such as IIOP or X11 where one of the hosts chooses at run-time which endianness to use. Example usage I'm currently using this when sending values over the wire. A simplified example, with all the error handling removed, looks something like: struct Response { BigEndian<std::uint16_t> seq_no; BigEndian<std::uint16_t> sample_value; }; We send by assigning (which implicitly converts our integer value to big-endian byte sequence) and then writing the structure: void send_result(std::uint16_t value) { Response r; r.seq_no = counter++; r.sample_value = value; write(fd, &r, sizeof r); } On the receive side, we read the wire representation into the same structure (so it's bitwise identical to the sending side) and then use the conversion operator to access the data in native form: std::uint16_t recv_result() { Response r; read(fd, &r, sizeof r); // ignore seq_no, for now return r.sample_value; } Changes since previous version Since version 1, I've changed the following: Renamed the views which give us access to the storage in the desired order, and added comments to make it clearer. This was previously giving the impression that round-trip would byte-reverse the value, which is not the case. Uniformly scatter/gather to native char size (in units of CHAR_BIT). 
Added warning for non-8-bit platforms to encourage audit of octet-orientated code. Optimised for reading/writing native-endian values directly with no wrapper class (will do the Right Thing for unicorn/dinosaur platforms such as mixed-endian or no-endian). Split out the default constructor, since serialising zero is a no-op. Added constexpr to conversion operations. Demonstrate plain old round-trip in the tests, rather than modifying the storage. I think this makes the tests clearer. Implementation #ifndef ENDIAN_HPP #define ENDIAN_HPP #include <array> #include <bit> #include <climits> #include <concepts> #include <ranges> #include <type_traits> #ifndef ENDIAN_SUPPORT_NON_8BIT static_assert(CHAR_BIT == 8, "This header splits into chars, not octets. " "Define ENDIAN_SUPPORT_NON_8BIT to enable."); #endif namespace endian { namespace detail { template<std::integral T, // type to represent auto BigFirst, // view that presents MSB first auto LittleFirst> // view that presents LSB first struct Endian { // We use unsigned T for bitwise operations using U = std::make_unsigned_t<T>; // The underlying storage std::array<unsigned char, sizeof (T)> data = {}; constexpr Endian() = default; // implicit conversion from T constexpr Endian(T value) { // unpack value starting with the least-significant bits auto uval = static_cast<U>(value); for (auto& c: data | LittleFirst) { c = static_cast<unsigned char>(uval); uval >>= CHAR_BIT; } } // implicit conversion to T constexpr operator T() const { // compose value starting with most-significant bits U value = 0; for (auto c: data | BigFirst) { value <<= CHAR_BIT; value |= c; } return static_cast<T>(value); } }; } template<std::integral T> using BigEndian = std::conditional_t<std::endian::native == std::endian::big, T, // no conversion needed detail::Endian<T, std::views::all, std::views::reverse>>; template<std::integral T> using LittleEndian = std::conditional_t<std::endian::native == std::endian::little, T, // no conversion needed 
detail::Endian<T, std::views::reverse, std::views::all>>; } #endif // ENDIAN_HPP Unit Tests using endian::BigEndian; using endian::LittleEndian; #include <gtest/gtest.h> #include <cstdint> #include <cstring> // Ensure there's no padding static_assert(sizeof (BigEndian<int>) == sizeof (int)); static_assert(sizeof (LittleEndian<int>) == sizeof (int)); // Helper function to inspect representation template<typename T> auto byte_array(const T& t) { std::array<unsigned char, sizeof t> bytes; std::memcpy(bytes.data(), &t, sizeof t); return bytes; } // Now the tests themselves TEST(big_endian, uint8) { const std::uint8_t x = 2; auto be = BigEndian<std::uint8_t>{x}; std::array<unsigned char, 1> expected{{2}}; EXPECT_EQ(byte_array(be), expected); // round trip back to native std::uint8_t y = be; EXPECT_EQ(y, x); } TEST(little_endian, uint8) { const std::uint8_t x = 2; auto le = LittleEndian<std::uint8_t>{x}; std::array<unsigned char, 1> expected{{2}}; EXPECT_EQ(byte_array(le), expected); std::uint8_t y = le; EXPECT_EQ(y, x); } TEST(big_endian, uint16) { const std::uint16_t x = 0x1234; BigEndian<std::uint16_t> be = x; std::array<unsigned char, 2> expected{{0x12, 0x34}}; EXPECT_EQ(byte_array(be), expected); std::uint16_t y = be; EXPECT_EQ(y, x); } TEST(little_endian, uint16) { const std::uint16_t x = 0x1234; auto le = LittleEndian<std::uint16_t>{x}; std::array<unsigned char, 2> expected{{0x34, 0x12}}; EXPECT_EQ(byte_array(le), expected); std::uint16_t y = le; EXPECT_EQ(y, x); } TEST(big_endian, uint32) { const std::uint32_t x = 0x12345678; auto be = BigEndian<std::uint32_t>{x}; std::array<unsigned char, 4> expected{{ 0x12, 0x34, 0x56, 0x78 }}; EXPECT_EQ(byte_array(be), expected); std::uint32_t y = be; EXPECT_EQ(y, x); } TEST(little_endian, uint32) { const std::uint32_t x = 0x12345678; auto le = LittleEndian<std::uint32_t>{x}; std::array<unsigned char, 4> expected{{ 0x78, 0x56, 0x34, 0x12 }}; EXPECT_EQ(byte_array(le), expected); std::uint32_t y = le; EXPECT_EQ(y, x); } Answer: 
This looks very good! About the limitations I haven't had a need to handle floating-point values, so only integers are handled here. If you did want to support floating-point values, and perhaps other types that might have an endianness, then your approach falls apart, or at least it would require some reinterpret_casting (or std::bit_casting since C++20) from the non-integer to the integer types, and it would still only work for types whose sizes match those of integers. Another way to solve the issue would be to reinterpret the value as std::bytes, and just rearrange them into the std::array as necessary for the desired endianness. It's not suitable for protocols such as IIOP or X11 where one of the hosts chooses at run-time which endianness to use. You could add a template parameter that would switch between a statically chosen endianness and one that can be changed at run-time, analogous to std::span's std::dynamic_extent. Add support for alignment You did ask about this in your previous version, and I think it is still important to consider this: you probably want to make sure your Endian<T> has the same alignment as T. Consider that you have a struct like this: struct Foo { char bar; uint16_t baz; uint32_t quux; }; And now you want to ensure the endianness is fixed, so you just add your wrappers: struct Foo { LittleEndian<char> bar; LittleEndian<uint16_t> baz; LittleEndian<uint32_t> quux; }; Before, sizeof(Foo) == 8 and all variables have their natural alignment. Afterwards sizeof(Foo) == 7, and baz and quux are incorrectly aligned. The compiler will see that, and will no longer be able to optimize the loads and stores to these variables on a little-endian machine. 
So I would do something like: template<std::integral T, // type to represent auto BigFirst, // view that presents MSB first auto LittleFirst, // view that presents LSB first std::size_t Align> // alignment requirement struct alignas(Align) Endian { … }; template<std::integral T, std::size_t Align = alignof(T)> using LittleEndian = std::conditional_t<std::endian::native == std::endian::little, T, // no conversion needed detail::Endian<T, std::views::reverse, std::views::all, Align>>; … Improvements to the unit tests I haven't used this myself, but I believe GoogleTest has some support for testing templated code, which allows you to get rid of a lot of code duplication. Even before you do that, it might be a good exercise to rewrite the tests a bit so that the actual type you want to test is only mentioned once, and so that you have a generic way to generate test values. For example: TEST(little_endian, uint16_t) { using T = std::uint16_t; constexpr T x = static_cast<T>(0x12345678); constexpr LittleEndian<T> le = x; std::array<unsigned char, 4> expected_full{{0x78, 0x56, 0x34, 0x12}}; decltype(byte_array(le)) expected; std::copy_n(expected_full.begin(), sizeof(T), expected.begin()); EXPECT_EQ(byte_array(le), expected); constexpr T y = le; EXPECT_EQ(y, x); } (Note that for little-endian the low-order bytes come first, so the first sizeof(T) bytes of the full 32-bit pattern are exactly the representation of the truncated value.) Also consider adding support for 64-bit integers. You also want to test whether you can use your class correctly in a constexpr environment. If you add alignment, you should also test that.
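Incidentally, the byte sequences the unit tests expect can be cross-checked quickly with Python's struct module, which uses the same big-/little-endian conventions:

```python
import struct

# Byte layouts the C++ unit tests expect, cross-checked with struct
assert struct.pack(">H", 0x1234) == bytes([0x12, 0x34])               # big-endian uint16
assert struct.pack("<H", 0x1234) == bytes([0x34, 0x12])               # little-endian uint16
assert struct.pack(">I", 0x12345678) == bytes([0x12, 0x34, 0x56, 0x78])
assert struct.pack("<I", 0x12345678) == bytes([0x78, 0x56, 0x34, 0x12])
print("layouts match the expectations in the unit tests")
```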
{ "domain": "codereview.stackexchange", "id": 45220, "tags": "c++, serialization" }
A tunnel inside the Earth (but not an ordinary tunnel)
Question: I decided to dig a tunnel inside the Earth, in the equatorial plane. It should be designed in such a way that it follows the Coriolis effect. That means that if, say, a stone is dropped from rest into the tunnel, then it travels in the tunnel without touching the walls until it reaches the tunnel's lip on the other side somewhere near the surface of the Earth. So the question itself is an equation of the curve of the tunnel. I suspect that there is no nice closed form for the curve. But who knows? Nevertheless, it would be good to know how deep the tunnel goes, where it rises again to the surface, and the travel time. You could say that it would be impossible to do this on Earth, but you actually could do this on the Moon. Answer: I am not sure whether you meant initially at rest relative to the universe, or to the surface of the Earth. Here are the answers to both versions: Universe Let the latitude be $\theta_0$. In the non-rotating reference frame (of the universe), the motion of the stone is simple harmonic motion. So $r(t) = r_0 \cos (\omega t)$, where $2\pi / \omega$ is the period of the oscillation (so the stone reaches the center after a quarter period, $\pi / 2\omega$). The latitude is constant. And the longitude $\phi_0$ (in the non-rotating reference frame) is also constant. But the earth spins with constant angular velocity $\Omega = 2\pi$ radians per 24 hours. So in the rotating coordinates $\phi'(t) = \phi_0 - \Omega t$. So measured relative to the surface of the earth in spherical coordinates you have the parametric description $$ r(t) = r_0\cos(\omega t) $$ where $r_0$ is the radius of the Earth. (Note: this assumes that the Earth is perfectly spherical and of uniform density inside, which is obviously not quite physical. Of course, digging a tunnel like that is also not quite physical...) $$ \theta(t) = \theta_0, \quad \phi(t) = \phi_0 - \Omega t $$ Finding $\omega$ from the assumption that Earth is perfectly spherical and of uniform density is left as an exercise to the reader.
A second interesting exercise is to find the conditions on the density of Earth and on the rate of revolution that allow the stone to travel in a closed orbit (again, this is much, much simpler when considered in the non-rotating reference frame...) Surface of the earth Again we work in the non-rotating reference frame. We have again, via the constant density assumption (and taking the mass of the stone to be 1), that twice the potential energy is $2P = \alpha r^2$, and twice the kinetic energy is $2K = \dot{r}^2 + r^2\dot{\theta}^2 + (r\cos\theta)^2\dot{\phi}^2$. The conservation of angular momentum means that the angular component of the velocity has size $L / r$, where $L$ can be computed from the rate of rotation of the Earth. So the conserved quantity (twice the total energy) is $$ E = \dot{r}^2 + \frac{L^2}{r^2} + \alpha r^2 $$ This gives an ODE for $r$. Similarly, using a tilted coordinate system you can solve for the angles using the conservation of angular momentum by integrating an ODE and plugging in the solution for $r$. For the maximum depth, however, you don't need to explicitly solve the ODE: The energy is known initially: $E_0 = 0 + \frac{L^2}{r_0^2} + \alpha r_0^2$, where $L$ depends only on the rate of revolution of the Earth, and $\alpha$ on the mass of Earth (assuming uniform density). $r_0$ is the radius of Earth. At the maximum depth, $\dot{r}$ is again 0. So you are down to finding the "other" positive root of the quartic polynomial $$ E_0 r^2 = L^2 + \alpha r^4 $$ which, being quadratic in $r^2$, you can solve explicitly using the quadratic formula $$ r^2 = \frac{E_0 \pm \sqrt{ E_0^2 - 4 \alpha L^2}}{2\alpha } $$ where the $+$ solution gives the radius of the Earth, and the $-$ solution gives the radius of closest approach to the center (the maximum depth is $r_0$ minus that). Plugging in physical numbers: at the initial drop, $\dot{\theta} = 0$ and $\dot{\phi} = \Omega = \frac{2\pi}{86400} s^{-1}$. The radius of Earth we take to be $6.38 \times 10^6 m$, the mass $6\times 10^{24} kg$. So $\alpha = G M / r_0^3 = 1.5 \times 10^{-6} s^{-2}$.
The conserved angular momentum is initially $$ L_0^2 = r_0^4 \cos^4(\theta_0)\, \Omega^2 = 8.8\times 10^{18}\cos^4(\theta_0) \; m^4 s^{-2} $$ and the conserved energy is initially $$ E_0 = L_0^2 r_0^{-2} + \alpha r_0^2 = 2.15 \times 10^5 \cos^4(\theta_0) + 6.1\times 10^7 \; m^2 s^{-2} \sim 6.1\times 10^7 \; m^2 s^{-2} $$ (the initial angular momentum contribution is very small). Now $E_0^2 = 3.7 \times 10^{15}$ and $4\alpha L_0^2 = 5.28\times 10^{13}\cos^4(\theta_0)$, so the answer is that the maximum depth is very close to the center of the earth! Using the binomial expansion we get that $$ r^2 \sim \frac{L_0^2}{E_0} \implies r \sim 3.9\times 10^5 \cos^2(\theta_0) \; m$$ So if you start at the equator, where $\theta_0 = 0$, the stone will get about 94% of the way to the center of the Earth.
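As a sanity check on that closing estimate, here is a small Python sketch (using the same round values for $G$, $M$, $r_0$ and $\Omega$ as above) solving the quadratic in $r^2$ numerically:

```python
import math

G, M, r0 = 6.674e-11, 6.0e24, 6.38e6   # SI units, same round values as above
Omega = 2 * math.pi / 86400.0          # Earth's angular velocity
theta0 = 0.0                           # drop from the equator

alpha = G * M / r0**3
L2 = r0**4 * math.cos(theta0)**4 * Omega**2   # conserved (angular momentum)^2
E0 = L2 / r0**2 + alpha * r0**2               # conserved "energy"

# Turning points solve E0 r^2 = L^2 + alpha r^4, a quadratic in r^2
disc = math.sqrt(E0**2 - 4 * alpha * L2)
r_outer = math.sqrt((E0 + disc) / (2 * alpha))   # ~ r0, the starting radius
r_inner = math.sqrt((E0 - disc) / (2 * alpha))   # radius of closest approach

print(r_outer / r0, r_inner, 1 - r_inner / r0)
```

This gives an inner turning radius of about $3.7\times 10^5$ m (the $3.9\times 10^5$ above reflects the rounded $\alpha$), i.e. the stone gets roughly 94% of the way to the center.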
{ "domain": "physics.stackexchange", "id": 1161, "tags": "classical-mechanics" }
Magnetization currents (amperian currents): how to show that they are always zero in total from the definition?
Question: For the magnetic field in matter, the following two amperian current densities are defined: Surface currents (units: $\mathrm{A/m}$): $$J_{A,s}= \mathbf{M} \times \hat{n}$$ Volume currents (units: $\mathrm{A/m^2}$): $$J_{A,V}= \nabla \times \mathbf{M} $$ where $\mathbf{M}$ is the magnetization and $\hat{n}$ is the outward normal of the object considered. My question is: how can one see, from the definition, that the total amperian current is zero? I.e., why is $$I_{A,S}+I_{A,V}=0$$ in total, for any magnetized object? Answer: If you're comfortable with delta functions etc., you can prove that the surface current formula is a special case of the volume current formula (the special case where M goes sharply to zero across a boundary). So we really only need the volume current case. Kelvin-Stokes theorem says $$\oint_\Gamma \mathbf{M}\, \cdot\, d{\mathbf{\Gamma}} = \iint_S (\nabla\times\mathbf{M})\, \cdot\, d\mathbf{S}$$ Let's prove the z-component of the total current is zero. (x and y are obviously the same argument.) Say we have some object of finite extent surrounded by air (air has magnetization zero). For each number $z_0$, we draw a loop in the $z=z_0$ plane, entirely outside the object, and then apply Kelvin-Stokes. The LHS is zero, so the formula says that the integrated z-component of $J_{A,V}$ in this $z=z_0$ slice of the object is zero. If the total in each $z=z_0$ slice is zero, then (by integration) the total in the whole object is also zero.
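The cancellation can also be spot-checked numerically, since $\int_V (\nabla\times\mathbf{M})\,dV = \oint_S (\hat{n}\times\mathbf{M})\,dA = -\oint_S (\mathbf{M}\times\hat{n})\,dA$. A Python sketch using a made-up polynomial magnetization $\mathbf{M}=(y^2, z^2, x^2)$ on a unit cube (the magnetization and its hand-computed curl are invented test data, not anything from the question):

```python
# M = (y^2, z^2, x^2) inside the unit cube, zero outside.
# Its curl, computed by hand, is (-2z, -2x, -2y).
def M(x, y, z):
    return (y * y, z * z, x * x)

def curlM(x, y, z):
    return (-2 * z, -2 * x, -2 * y)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

n = 40                                  # midpoint-rule grid per dimension
pts = [(k + 0.5) / n for k in range(n)]
w2, w3 = 1.0 / n**2, 1.0 / n**3

# Volume bound current: integral of curl M over the cube
I_vol = [0.0, 0.0, 0.0]
for x in pts:
    for y in pts:
        for z in pts:
            c = curlM(x, y, z)
            for i in range(3):
                I_vol[i] += w3 * c[i]

# Surface bound current: integral of M x n over the six faces
faces = [
    ((-1, 0, 0), lambda u, v: (0.0, u, v)), ((1, 0, 0), lambda u, v: (1.0, u, v)),
    ((0, -1, 0), lambda u, v: (u, 0.0, v)), ((0, 1, 0), lambda u, v: (u, 1.0, v)),
    ((0, 0, -1), lambda u, v: (u, v, 0.0)), ((0, 0, 1), lambda u, v: (u, v, 1.0)),
]
I_surf = [0.0, 0.0, 0.0]
for nrm, pt in faces:
    for u in pts:
        for v in pts:
            j = cross(M(*pt(u, v)), nrm)
            for i in range(3):
                I_surf[i] += w2 * j[i]

total = [a + b for a, b in zip(I_vol, I_surf)]
print(I_vol, I_surf, total)   # ~(-1,-1,-1), ~(1,1,1), ~(0,0,0)
```

The volume and surface pieces are each nonzero here, but their sum vanishes, exactly as the Kelvin-Stokes argument predicts.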
{ "domain": "physics.stackexchange", "id": 37064, "tags": "homework-and-exercises, electromagnetism, magnetic-fields, magnetic-moment" }
How can I use the Create of the Turtlebot on the serial port?
Question: I have a custom built Turtlebot with an iRobot Create. However I do not have a serial to USB adapter. How can I use the serial cable of the Create for my Turtlebot? I already found out that in turtlebot_node.py the port for the Create is set to ttyUSB0. What do I have to change here, so that I can use the serial port? Do I have to change anything else? Originally posted by TillScout on ROS Answers with karma: 53 on 2012-07-31 Post score: 0 Answer: You can set your serial port path in launch file: <launch> <node pkg="turtlebot_node" type="turtlebot_node.py" name="turtlebot_node" output="screen" respawn="true" args="--respawnable"> <param name="bonus" value="false" /> <param name="update_rate" value="30.0" /> <param name="port" value="/dev/ttyS0" /> <param name="publish_tf" value="true" /> </node> </launch> Originally posted by kszonek with karma: 459 on 2012-08-01 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 10433, "tags": "turtlebot" }
How can an OS calculate CPU cache size?
Question: There's an interview question. In my opinion, the OS can calculate the cache hit rate and miss rate (from the total running time of a process and the number of memory blocks transferred to the cache). But how can it know the size of the cache? Is it possible to calculate it from information available to the OS (such as the size of a transfer to the cache)? We can assume anything we need to calculate easily (e.g. only one cache, no L1/L2 and so on; the cache is fully associative, set associative, or direct-mapped; and so on). Answer: It's not clear to me what you want. An OS can query the processor about its cache structure as long as the processor has the feature (for instance, x86 exposes this through the CPUID instruction). An OS could also deduce the information from any means of identifying the processor (nowadays something like CPUID, but in a given processor family there have always been more or less precise means to deduce which member you are running on, especially if you have access to privileged instructions, as an OS does) and having hardcoded tables giving the sizes for each model. Any program can try to measure the number of cache levels, their size, associativity, line size and latency. What you have to do is measure the latency of memory accesses using well-thought-out patterns and plot the latency versus the total memory touched by the pattern. You get some plateaus giving the latency of each cache, and the positions of the jumps give the cache sizes. Comparing the results of different patterns gives the associativity and the line sizes. The more advanced the cache is (eviction buffers, pre-loading, ...), the more careful you have to be in the way the patterns are computed (a common technique is implementing the pattern as a pointer chase so that it is more difficult for the processor to guess the next address to be loaded), or you'll measure something different from what you want.
Other factors to take into account are the effect of the TLB, the presence of shared caches between cores and processors, the load on the other cores, the effect of process migrations between cores by the OS, and the effect of throttling by the OS or the hardware due to temperature or battery load. In conclusion, such measurements are pretty technical, and although you can deduce a lot of things, it is better to have a preconceived notion of the memory structure and of the external factors which may have an influence.
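As a sketch of the pointer-chase technique the answer describes (written in Python for brevity; interpreter overhead dominates there, so only the coarsest jumps, such as the working set falling out of the last-level cache, will be visible, and a real measurement would use C):

```python
import random
import time

def pointer_chase_ns(n_elems, steps=100_000):
    """Rough ns-per-access for a random pointer chase over n_elems slots.
    Each load depends on the previous one, defeating simple prefetching."""
    order = list(range(n_elems))
    random.shuffle(order)
    chain = [0] * n_elems
    for a, b in zip(order, order[1:] + order[:1]):
        chain[a] = b                 # one big cycle through every slot
    p = order[0]
    t0 = time.perf_counter()
    for _ in range(steps):
        p = chain[p]                 # the next load depends on this one
    return (time.perf_counter() - t0) / steps * 1e9

for n in (2**12, 2**16, 2**20):
    print(f"{n:>8} slots: ~{pointer_chase_ns(n):.0f} ns/access")
```

Plotting this time against the working-set size is exactly the plateaus-and-jumps picture described above; the jump positions estimate the cache sizes.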
{ "domain": "cs.stackexchange", "id": 8913, "tags": "operating-systems, cpu-cache" }
Decidability of language containing strings of length equal to that of some substring of 1s in $\pi$?
Question: I read that the following language is decidable. $\{w \mid w \in \{0, 1\}^* \text{ and } 1^{|w|} \text{ is a substring of the binary expansion of } \pi \}$ The proof has been given by considering the two possible cases: Suppose it is true that there exists a minimal integer $N$ such that the binary expansion of $\pi$ only contains substrings of 1s having length at most $N$. That is, it contains the substring $1^N$ but not $1^{N + 1}$. Define a corresponding Turing machine $M_N$ that accepts the input string if its length is at most $N$ (if $1^N$ is a substring, then so is $1^k$ for every $k \le N$), and otherwise rejects it. This machine decides the given language. Suppose the binary expansion of $\pi$ contains substrings of $1$s of all lengths. Then the Turing machine that accepts all strings decides the given language. We are not able to construct a single Turing machine that decides the given language, but one of the machines thus constructed will decide it; we just do not know which. I don't quite understand how this language is decidable. Answer: You are confusing knowledge and truth. Something may be true without us knowing precisely why. An object may exist without us being able to construct it. If you believe in classical logic, then it is true that the maximum length of consecutive 1's in the expansion of $\pi$ is either finite or infinite. In each case, there exists a decision procedure for your language. Therefore a decision procedure exists. But we do not know which of the cases happens, and so we do not know a particular decision procedure, even though one exists. If you don't like this line of reasoning, then you should ask for a constructive proof of decidability of your language.
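To make the two cases concrete: the candidate deciders are trivial to write down, and the nonconstructive part is only knowing which one is correct. A Python sketch:

```python
def decider_bounded(N):
    """Case 1: the longest run of 1s in pi's binary expansion has length N.
    Then 1^{|w|} occurs iff |w| <= N (any shorter run sits inside the long one)."""
    return lambda w: len(w) <= N

def decider_unbounded(w):
    """Case 2: pi contains runs of 1s of every length, so accept everything."""
    return True

# Exactly one of decider_unbounded, decider_bounded(0), decider_bounded(1), ...
# decides the language; classical logic guarantees it exists without telling us which.
print(decider_bounded(3)("01"), decider_bounded(3)("010101"))  # True False
```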
{ "domain": "cs.stackexchange", "id": 7829, "tags": "turing-machines, undecidability" }
Can we get induction without a transformer core?
Question: We usually see induction produced by two coils wound around a transformer core: [] Can you please specify if there is induction if we eliminate the core? Do we have to arrange the coils one inside the other? If the core can increase induction, to what extent can we increase it? What happens if, in the above transformer, we connect an appliance to both windings: do they both get half power, or will only the appliance connected to the second winding work? Also, can we get induction with straight wires? Answer: To answer the questions: Yes, there is still induction even if you remove the core of the transformer. No, you do not need to arrange the coils one inside the other, although this will increase the amount of current you induce (given that the two windings are electrically isolated from one another). We can increase the induction as determined by a parameter of the core known as the magnetic permeability $\mu$. This will depend on the type of metal you use as your core, and is quite large for iron. Connecting the transformer to multiple appliances from either winding will not do anything; in this case, there is no circuit. The intended use of a transformer is to take high-voltage AC on the primary and induce current into the secondary winding, usually for the purpose of stepping down the voltage to something reasonable for consumer use. Can you get induction with straight wires? This is an interesting question, and the answer is yes. It is most evident with the differential form of Faraday's law: the presence of a changing magnetic field will produce a curling electric field in the vicinity of the wire, inducing a current. The primary coil generates an oscillating magnetic field which couples to the secondary coil. In fact, in an ideal world this oscillating EM field will propagate through all of space at the speed of light, and will couple to any coil that is any distance away.
However, the effects of this field will be negligible as you move too far away, and in the case of an everyday transformer, it is necessary to incorporate the iron core to facilitate proper inductive coupling between the primary and secondary windings. You can think about why this is the case with the integral form of Faraday's law of Induction, $$\mathscr{E_{2,1}} = - \frac{d\Phi_{2,1}}{dt}.$$ Suppose the secondary is a loop of wire that encloses a total area A, and that the magnetic field generated by loop 1 passing through loop 2 is $B(t)$ at time $t$ and $B(t+\Delta t)$ at time $t+\Delta t$. What our good friend Faraday says is that in a short time $\Delta t$, the voltage induced in the 2nd winding is proportional to $A\frac{B(t+\Delta t)- B(t)}{\Delta t}$. It is intuitive then, that the strength of the magnetic flux passing through the loop is important, as having higher peak flux implies that the change in flux will be greater as well. The amount of current you will induce into the secondary winding will be significantly greater with the iron core, as the iron core traps the magnetic flux generated by the primary into a region of space that passes through the secondary. This is why it is so important to have the iron core in a transformer.
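As a rough numerical illustration of the flux-change argument (every number here is invented for the example; none comes from the question):

```python
import math

A = 0.01            # m^2, area enclosed by the secondary loop (hypothetical)
N_turns = 100       # turns on the secondary (hypothetical)
B0, f = 0.5, 50.0   # peak flux density in tesla, mains frequency in Hz

# With B(t) = B0 sin(2 pi f t), Faraday's law gives a peak EMF of
# N * A * dB/dt|max = N * A * B0 * 2 pi f
emf_peak = N_turns * A * B0 * 2 * math.pi * f
print(emf_peak)     # ~157 V peak for these numbers
```

The role of the high-permeability core is visible here: it is what lets the primary drive a large $B_0$ through the secondary's area, and thus a usable EMF.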
{ "domain": "physics.stackexchange", "id": 39425, "tags": "electromagnetic-induction" }
What happens if we exert more force on an object which would be travelling at near maximum light speed?
Question: If we exert force on an object which is travelling at near light speed, to an extent that would make it move faster than the speed of light in vacuum, what would happen? If the object doesn't move faster than light, what happens to that excess force? Has this experiment been done? What would happen if we exerted force on light travelling at its maximum speed? Answer: If you apply a force (in the direction of motion) to an object with mass, moving close to the speed of light, the object will accelerate and its speed will become even closer to the speed of light. But no amount of force can make it reach, or exceed, the speed of light. The reason that objects behave this way is that momentum is not $m\mathbf{v}$ as Newton thought; it is in fact $m\mathbf{v}/\sqrt{1-v^2/c^2}$ as Einstein realized. Applying a constant force will cause the momentum to increase indefinitely, but the momentum becomes arbitrarily large while the speed only approaches $c$. There is no way to exert force on light. It always travels (in vacuum) at the speed of light, and cannot be sped up or slowed down. Why? Well, photons don’t have charge, so they don’t feel electromagnetic force; in fact they carry electromagnetic force. Nor do they feel the strong or weak nuclear forces. They do feel gravitational force, but it doesn’t make them move faster or slower. (In General Relativity, they just move on lightlike geodesics.) These four forces are all the fundamental forces, as far as we know. There is plenty of experimental evidence for these facts.
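The "constant force forever, yet never reaching $c$" behavior is easy to see numerically: inverting Einstein's momentum formula above gives $v = pc/\sqrt{p^2 + m^2c^2}$, which is always below $c$ no matter how large $p = Ft$ grows (the mass and force values below are arbitrary):

```python
import math

c = 299_792_458.0     # m/s
m = 1.0               # kg, arbitrary test mass
F = 1.0e8             # N, constant applied force (arbitrary)

# p = F t grows without bound; v approaches, but never reaches, c
for t in (1, 10, 100, 1_000, 10_000):
    p = F * t
    v = p * c / math.sqrt(p**2 + (m * c)**2)
    print(f"t = {t:>6} s   v/c = {v / c:.12f}")
```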
{ "domain": "physics.stackexchange", "id": 53244, "tags": "special-relativity, speed-of-light, mass-energy, faster-than-light" }
Particle deflection in a magnetic dipole field
Question: Consider a small magnetic dipole of magnetic moment $\vec{\boldsymbol{\mu}} = (\mu_x, \; \mu_y, \; \mu_z)$ sitting at the origin. The magnetic field all around is \begin{equation}\tag{1} \mathbf{B} = \frac{\mu_0}{4 \pi} \Big( \frac{3 (\vec{\boldsymbol{\mu}} \cdot \vec{\mathbf{r}})}{r^5} \, \vec{\mathbf{r}} - \frac{\vec{\boldsymbol{\mu}}}{r^3} \Big). \end{equation} A particle of charge $q$ is moving from infinity with initial velocity $\vec{\mathbf{v}}_0 = (v_0, \; 0, \; 0)$ and impact parameter $b$ along the $y$ axis. The particle return to infinity with some deflection. What is the momentum variation, to lowest order? The magnetic force is simply $\vec{\mathbf{F}} = q \, \vec{\mathbf{v}} \times \vec{\mathbf{B}}$. Integrating Newton's equation gives the momentum variation if we assume a slight deviation from a straight path: $\vec{\mathbf{r}}(t) \approx (v_0 \, t, \; b, \; 0)$ for the force. Calculations give this result: \begin{align} \Delta p_x &\approx 0, \tag{2} \\[12pt] \Delta p_y &\approx \frac{\mu_0 q}{2 \pi b^2} \, \mu_z, \tag{3} \\[12pt] \Delta p_z &\approx \frac{\mu_0 q}{2 \pi b^2} \, \mu_y. \tag{4} \end{align} Surprisingly, this momentum variation doesn't depend on the initial velocity $v_0$. These components aren't exactly a vectorial product, since $\vec{\mathbf{v}}_0 \times \vec{\boldsymbol{\mu}} = (0, -\, v_0 \mu_z, \; +\, v_0 \mu_y)$. I then have two questions: I need to confirm the results (2)-(4). More importantly, how can I write (2)-(4) in a simple closed vectorial form, using only the vectors $\vec{\boldsymbol{\mu}}$, $\vec{\mathbf{v}}_0$ (and probably the impact vector $\vec{\mathbf{b}} = (0, b, 0)$)? Answer: I confirmed your results (2)-(4). They can be written in vector form as $$\Delta\vec{p}=\frac{\mu_0q}{2\pi b^2}\hat{v}_0\times\left[2(\vec{\mu}\cdot\hat{b})\hat{b}-\vec{\mu}\right].$$ It reduces to your result using $\hat{v}_0=\hat{x}$ and $\hat{b}=\hat{y}$. 
I found this by doing the whole calculation in vector form rather than using components. A handwaving argument for why the impulse is independent of the velocity is that, at twice the velocity, the Lorentz force is twice as large, but the time it acts over is half as large. It’s handwaving because of course in both cases the time is actually infinite. But if you put in a distance cutoff where the force becomes negligible, the time would be half as large.
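The confirmation of (2)-(4) can also be reproduced numerically: integrate $q\,\vec{\mathbf{v}}_0\times\vec{\mathbf{B}}$ along the unperturbed straight path and compare with $\frac{\mu_0 q}{2\pi b^2}(0,\,\mu_z,\,\mu_y)$. A pure-Python sketch (the values of $q$, $v_0$, $b$ and $\vec{\boldsymbol{\mu}}$ are arbitrary test numbers):

```python
import math

MU0_OVER_4PI = 1e-7                  # SI
q, v0, b = 1.0, 1.0e4, 1.0           # arbitrary charge, speed, impact parameter
mu = (0.3, 0.5, 0.7)                 # arbitrary dipole moment

def cross(a, c):
    return (a[1]*c[2] - a[2]*c[1],
            a[2]*c[0] - a[0]*c[2],
            a[0]*c[1] - a[1]*c[0])

def B(r):
    rn = math.sqrt(r[0]**2 + r[1]**2 + r[2]**2)
    mdotr = sum(m * x for m, x in zip(mu, r))
    return tuple(MU0_OVER_4PI * (3 * mdotr * x / rn**5 - m / rn**3)
                 for m, x in zip(mu, r))

def force(t):
    r = (v0 * t, b, 0.0)             # unperturbed straight-line path
    return cross((q * v0, 0.0, 0.0), B(r))

# Integrate F dt over (-inf, inf) via t = (b/v0) tan(u), composite Simpson
N = 4000
h = math.pi / N
dp = [0.0, 0.0, 0.0]
for k in range(1, N):                # integrand vanishes at the endpoints
    u = -math.pi / 2 + k * h
    w = (4 if k % 2 else 2) * h / 3
    jac = (b / v0) / math.cos(u)**2
    f = force((b / v0) * math.tan(u))
    for i in range(3):
        dp[i] += w * f[i] * jac

pref = 2 * MU0_OVER_4PI * q / b**2   # = mu0 q / (2 pi b^2)
print(dp)
print((0.0, pref * mu[2], pref * mu[1]))
```

The two printed triples agree to high accuracy, matching both the component results (2)-(4) and the vector form above.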
{ "domain": "physics.stackexchange", "id": 59455, "tags": "particle-physics, magnetic-fields, scattering, classical-electrodynamics, magnetic-moment" }
Heisenberg's uncertainty principle
Question: In the Heisenberg uncertainty principle, $$\Delta x \cdot \Delta p \geq \frac{h}{4\pi}$$ the values of $\Delta x$ and $\Delta p$ are the standard deviations which we get from the probability distribution function of the particle, and I heard that they have nothing to do with the measuring instrument. Actually, while measuring, the probability distribution function of a particle also changes. Does this mean that the measuring instrument has some effect? Answer: Actually, while measuring, the probability distribution function of a particle also changes. Does this mean that the measuring instrument has some effect? The measuring process may change the boundary conditions of the solutions of the quantum mechanical equations of the system under measurement, so the complex conjugate square of the wave function (the probability) may change. Good measurements are done with a minimal disturbance of the process under study by the measuring method.
{ "domain": "physics.stackexchange", "id": 27792, "tags": "quantum-mechanics, heisenberg-uncertainty-principle, measurement-problem" }
‘SensorManager’ is not a member of ‘gazebo::sensors’
Question: As said in the title I am getting thrown the following error: ‘SensorManager’ is not a member of ‘gazebo::sensors’ I am running ROS Kinetic with Gazebo 7 on Ubuntu 16.04 My GCC version is "gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609" Other people who have had this problem have a linking problem in the CMakelists file, but I've gone over it a few times and can't spot the problem. Any help would be appreciated. The function is in a world plugin - int getNumberOfWalls() { //Get bumpsensor gazebo::sensors::SensorManager mgr = sensors::SensorManager::Instance(); } Here is my CMakeLists.txt cmake_minimum_required(VERSION 2.8.7) project(weaselball_gazebo) add_compile_options(-std=c++11) find_package(catkin REQUIRED COMPONENTS roscpp gazebo_ros ) find_package(gazebo REQUIRED) #------------------------------------------------------------------------------ ## Dependencies include( FindPkgConfig ) if( PKG_CONFIG_FOUND ) pkg_check_modules( GAZEBO REQUIRED gazebo ) endif() message (${GAZEBO_LIBRARY_DIRS}) link_directories( ${GAZEBO_LIBRARY_DIRS} ) include_directories( ${GAZEBO_INCLUDE_DIRS} ) #if( PKG_CONFIG_FOUND ) # pkg_check_modules( RAVELIN ravelin ) #endif() #include_directories( ${RAVELIN_INCLUDE_DIRS} ) #link_directories( ${RAVELIN_LIBRARY_DIRS} ) include (FindPkgConfig) if (PKG_CONFIG_FOUND) pkg_check_modules(GAZEBO gazebo) pkg_check_modules(OGRE OGRE) pkg_check_modules(OGRE-Terrain OGRE-Terrain) endif() link_directories(${GAZEBO_LIBRARY_DIRS} ${OGRE_LIBRARY_DIRS}) include_directories(${Boost_INCLUDE_DIR} ${catkin_INCLUDE_DIRS} ${GAZEBO_INCLUDE_DIRS} ${OGRE_INCLUDE_DIRS} ${OGRE-Terrain_INCLUDE_DIRS}) #------------------------------------------------------------------------------ ## Code set( LIBS ${GAZEBO_LIBRARIES} ${RAVELIN_LIBRARIES} libRavelin.so ) set( HDRS include/common.h include/math.h include/log.h include/state.h include/virtual_time.h include/vicon.h include/video.h include/movie.h include/models.h include/weazelball.h include/gazebo_log.h 
include/bumpSensor.h ) set( SRCS ) add_library(configure SHARED plugins/initConditionHandler.cpp) target_link_libraries(configure ${GAZEBO_LIBRARIES}) add_library(StateRecorder SHARED plugins/stateRecorder.cpp) target_link_libraries(StateRecorder ${GAZEBO_LIBRARIES} ${CATKIN_LIBRARIES}) #This plugin is still being developed... #add_library(bumpSensor SHARED src/bumpSensor.cpp) #target_link_libraries(bumpSensor ${GAZEBO_LIBRARIES}) #target_link_libraries(${PROJECT_NAME} ${catkin_LIBRARIES} ${GAZEBO_LIBRARIES}) add_library( simulation-controller SHARED plugins/simulation_controller.cpp ${HDRS} ${SRCS} ) target_link_libraries( simulation-controller ${LIBS} ) catkin_package( DEPENDS roscpp gazebo_ros ) echoing the GAZEBO_LIBRARY_DIRS gave me: /usr/lib/x86_64-linux-gnu/gazebo-7/plugins/usr/lib/x86_64-linux-gnu/usr/lib/x86_64-linux-gnu/gazebo-7/plugins/usr/lib/x86_64-linux-gnu and finally my package.xml - <?xml version="1.0"?> <package format="2"> <name>weaselball_gazebo</name> <version>0.0.0</version> <buildtool_depend>catkin</buildtool_depend> <build_depend>roscpp</build_depend> <build_export_depend>roscpp</build_export_depend> <exec_depend>roscpp</exec_depend> <build_depend>gazebo_ros</build_depend> <exec_depend>gazebo_ros</exec_depend> <build_export_depend>gazebo</build_export_depend> <exec_depend>gazebo</exec_depend> <!-- The export tag contains other, unspecified, tags --> <export> <!-- Other tools can request additional information be placed here --> <gazebo_ros plugin_path="${prefix}/lib" gazebo_media_path="${prefix}" /> </export> </package> Once again any help would be appreciated. Originally posted by Cler on ROS Answers with karma: 1 on 2018-08-01 Post score: 0 Answer: Can you verify that you are including <gazebo-7/gazebo/sensors/SensorManager.hh>? 
Also try changing your line to gazebo::sensors::SensorManager *mgr = gazebo::sensors::SensorManager::Instance(); Originally posted by stevejp with karma: 929 on 2018-08-01 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Cler on 2018-08-01: Yep that was it, I feel stupid. Thanks for the help.
{ "domain": "robotics.stackexchange", "id": 31438, "tags": "gazebo, ros-kinetic" }
Ways to compute the n-th derivative of a discrete signal
Question: This is a pretty general question about how to compute derivatives of a digital signal $x[n]$. I would like to know what are the different approaches (from naive to complex) and how are they compared to one another? Is it possible with FIR/IIR filters? What are the pro's and contra's. Which are better for real-time applications? Answer: The estimation of derivative is straightforward: $$x'(n)~=\frac{x(n+1)-x(n-1)}{2}$$ $$x''(n)~={x(n+1)-2*x({n})+x({n-1})}$$ or if you have a signal sampled at $t_i=i\Delta t$, it is $$x'(t_{i})~=\frac{x(t_{i+1})-x(t_{i-1})}{2*\Delta t}$$ $$x''(t_{i})~=\frac{x(t_{i+1})-2*x(t_{i})+x(t_{i-1})}{(\Delta t)^2}$$ What you are interested in may be how to smooth the estimation. And yes you can use some recursive filters such as $y(n) = a \cdot x(n) + (1-a) \cdot y(n-1)$, or just implement your estimation with some simple windows (Hann window for example). To achieve the high SNR without distorting the signal very much, Savitzky–Golay filter also smooths your data by fitting successive sub-sets of adjacent data points with a low-degree polynomial with linear least squares. EDIT Matlab code for an N-th derivative of signal row vector x dx = x; %'Zeroth' derivative for n = 1:N % Apply iteratively dif = diff(dx,1); % First derivative first = [dif(1) dif]; last = [dif dif(end)]; dx = (first+last)/2; end
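As a rough sketch of the finite-difference scheme from the answer in Python (pure standard library; the helper name `nth_derivative` is mine, not from the answer):

```python
def nth_derivative(x, order, dt=1.0):
    """Iterated derivative estimate mirroring the MATLAB snippet above:
    averaging the forward and backward first differences gives the
    central difference (x[i+1] - x[i-1]) / (2*dt) at interior samples;
    the two edge samples fall back to one-sided differences."""
    dx = [float(v) for v in x]
    for _ in range(order):
        dif = [(b - a) / dt for a, b in zip(dx, dx[1:])]
        first = [dif[0]] + dif   # backward difference at each sample
        last = dif + [dif[-1]]   # forward difference at each sample
        dx = [(f + l) / 2.0 for f, l in zip(first, last)]
    return dx
```

On x[n] = n² sampled at unit spacing the interior estimates are exact: the first pass gives 2n and the second gives 2; only the edge samples degrade.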
{ "domain": "dsp.stackexchange", "id": 1505, "tags": "filters, discrete-signals, signal-analysis, derivative, derivation" }
Tolerance of Natural Frequency & Resonance?
Question: I'm writing a report at the moment about natural frequency, driving frequency and resonance - and I was wondering, is there a typical % tolerance inside which the driving frequency will cause resonance (or exhibit resonance-like characteristics)? Or does this tolerance depend on the material and construct involved? (If you have any sources where I can read about this also, it'd be much appreciated). Note - I'm writing about an oscillatory system (a torsion pendulum), so it would be resonance in an oscillatory sense. Answer: An oscillator is usually characterized by its quality factor Q. This is a dimensionless parameter which measures how "good" of an oscillator it is. It also relates to the quantity you are interested in - a linear, damped oscillator will exhibit a Lorentzian peaked response in the frequency domain. The bandwidth of the resonance (the points where the power response drops to 50% of its peak) is given by $\Delta f = \frac{f_0}{Q}$. The quality factor can also be related to the damping coefficient - for more info check out Wikipedia: http://en.wikipedia.org/wiki/Q_factor
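A quick numerical sanity check of the $\Delta f = f_0/Q$ relation (the response model is the standard driven damped oscillator; the function names are my own):

```python
def power_response(f, f0, q):
    """Squared magnitude (up to a constant) of a driven, damped
    harmonic oscillator's steady-state response; peaks near f0."""
    return 1.0 / ((f0**2 - f**2)**2 + (f0 * f / q)**2)

def half_power_bandwidth(f0, q, n=100001):
    """Scan frequencies around f0 and measure the width between the
    half-power points of the resonance peak."""
    fs = [f0 * (0.9 + 0.2 * i / (n - 1)) for i in range(n)]
    p = [power_response(f, f0, q) for f in fs]
    half = max(p) / 2.0
    above = [f for f, pw in zip(fs, p) if pw >= half]
    return above[-1] - above[0]
```

For f0 = 100 Hz and Q = 50 the measured half-power width comes out very close to f0/Q = 2 Hz; the small residual is the O(1/Q²) correction to the Lorentzian approximation.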
{ "domain": "physics.stackexchange", "id": 2175, "tags": "classical-mechanics, resonance" }
An exercise that asks for informal description of the language accepted by a specific PDA
Question: This is a problem that I found in Introduction to Automata Theory, Languages and Computation by John Hopcroft and Jeffrey Ullman. PDA P=({q0, q1, q2, q3, f}, {a, b}, {Z0, A, B}, δ, q0, Z, {f}) has the following rules defining δ: δ(q0,a,Z)=(q1,AAZ) δ(q0,b,Z)=(q2,BZ) δ(q0,ε,Z)=(f,ε) δ(q1,a,A)=(q1,AAA) δ(q1,b,A)=(q1,ε) δ(q1,ε,Z)=(q0,Z) δ(q2,a,B)=(q3,ε) δ(q2,b,B)=(q2,BB) δ(q2,ε,Z)=(q0,Z) δ(q3,ε,B)=(q2,ε) δ(q3,ε,Z0)=(q1,AZ) Note that since each of the sets above has only one choice of move, we have omitted the set brackets from each of the rules. The exercise asks for an informal description of L(P). After observing, it seems to me that in case the string starts with a, no prefix should contain more b's than twice the number of a's, and in the whole string, the number of a's times 2 equals the number of b's. Am I wrong about this particular description? How can I informally describe the language that the PDA accepts? Answer: The language of this PDA seems to be the set of all strings that have twice as many $b$'s as $a$'s. If there is a deficit of $a$'s in the prefix seen so far in the sense that the number of $b$'s exceeds twice the number of $a$'s, then the PDA would be operating essentially from state $q_2$, where it would modify the number of $B$'s on the stack. In this state, the PDA would remove two $B$'s for each $a$ it sees in the input and push a $B$ for each $b$ it sees in the input. If there is a deficit of $b$'s in the prefix seen so far in the sense that the number of $b$'s is less than twice the number of $a$'s, then the PDA would be operating essentially out of state $q_1$, where it would work with the $A$'s that are on the stack: for each $a$ in the input, push two $A$'s onto the stack, and for each $b$ in the input, remove an $A$ from the stack. Thus, at any given time, the stack contains either all $A$'s or all $B$'s depending on which way the deficit is.
The machine needs to deal with these two situations differently: if there is a surplus of $B$'s, then the next input $a$ causes two $B$'s to be popped from the stack, whereas if there is a surplus of $A$'s, then the next input $a$ causes two more $A$'s to be pushed onto the stack. When the deficit swings from one way to another, the machine transitions from $q_2$ to $q_1$ (via $q_3$) or from $q_1$ to $q_2$ (via $q_0$).
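The counting argument in the answer can be made concrete with a single signed counter (my own illustration, not a full simulation of the PDA's states): positive values stand for A's on the stack, negative for B's, and acceptance depends only on the counter returning to zero.

```python
from itertools import product

def in_language(s):
    """The informal description from the answer: twice as many b's as a's."""
    return s.count("b") == 2 * s.count("a")

def counter_accepts(s):
    """One signed counter mimicking the stack discipline: each 'a' adds
    two A's (or cancels two B's), each 'b' adds one B (or cancels one A)."""
    n = 0
    for c in s:
        n += 2 if c == "a" else -1
    return n == 0

# Exhaustively compare the two characterizations on all short strings.
agree = all(
    in_language("".join(w)) == counter_accepts("".join(w))
    for length in range(7)
    for w in product("ab", repeat=length)
)
```

The exhaustive check over strings of length up to 6 confirms the two views coincide, e.g. "abb", "bab" and "bba" are accepted while "ab" is not.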
{ "domain": "cs.stackexchange", "id": 19989, "tags": "formal-languages, pushdown-automata" }
Two charged black holes in equilibrium
Question: Consider a pair of (possibly rotating) charged black holes with masses $m_1$ and $m_2$, and like charges $q_1$ and $q_2$. It seems that under certain conditions gravitational attraction should exactly cancel electrostatic repulsion and a stationary spacetime will result. What are these conditions? The point-charge analogy suggests the equation $$k q_1 q_2 = G m_1 m_2.$$ However, it is by no means obvious that this equation is the correct condition except in the large-distance limit. Also, is it possible to write down this solution of Einstein-Maxwell theory in closed form? Answer: There is a quite instructive paper: G. A. Alekseev and V. A. Belinski, Equilibrium configurations of two charged masses in General Relativity, Phys. Rev. D76 (2007) 021501; arXiv:0706.1981 [gr-qc]. In it, the authors mention a work on the non-existence of static equilibrium configurations of two charged black holes by P. Chrusciel and P. Tod, Commun. Math. Phys. 271, 577 (2007); arXiv:gr-qc/0512043, and derive the condition for equilibrium of two charged masses: $m_1 m_2 = (e_1-\gamma)(e_2+\gamma)$ with $\gamma = (m_2 e_1-m_1e_2)/(l+m_1+m_2)$.
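As a small numeric illustration (assuming geometrized units with G = c = k = 1; the helper names are mine), the quoted Alekseev–Belinski condition reduces to the naive balance $m_1 m_2 = e_1 e_2$ when the separation parameter $l$ grows large, since $\gamma \to 0$:

```python
def gamma(m1, m2, e1, e2, l):
    # gamma = (m2*e1 - m1*e2) / (l + m1 + m2), from the quoted paper
    return (m2 * e1 - m1 * e2) / (l + m1 + m2)

def equilibrium_residual(m1, m2, e1, e2, l):
    """Zero exactly when the condition m1*m2 == (e1 - gamma)*(e2 + gamma)
    holds; tends to e1*e2 - m1*m2 as the separation l grows."""
    g = gamma(m1, m2, e1, e2, l)
    return (e1 - g) * (e2 + g) - m1 * m2
```

For a mismatched pair (e1 e2 ≠ m1 m2) the residual tends to e1 e2 − m1 m2 at large l, recovering the questioner's large-distance point-charge criterion, while a matched pair balances in the far limit.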
{ "domain": "physics.stackexchange", "id": 3292, "tags": "electromagnetism, general-relativity, black-holes, charge, binary-stars" }
UPDATE : How to continue Computing the Convolution
Question: $$ x(t):=\begin{cases} 1&\text{if $0<t<T$}\\ \\ 0&\text{if otherwise} \end{cases} \qquad\text{and}\qquad h(t):=\begin{cases} t&\text{if $0<t<2T$}\\ \\ 0&\text{if otherwise} \end{cases} $$ Compute $(x*h)(t)$. My approach: We first note that :$$\displaystyle x(\tau)=\begin{cases} 1&\text{if $0<\tau<T$}\\ \\ 0&\text{if otherwise} \end{cases}\qquad\text{and}\qquad\displaystyle h(t-\tau)=\begin{cases} t-\tau&\text{if $0<t-\tau<2T$}\\ \\ 0&\text{if otherwise} \end{cases}$$ We observe that : $$ 0<\tau<T $$ $$ t-2T<\tau<t $$ Thus, $(x*h)(t)\in\mathcal{R}[\max(0,t-2T),\min(t,T)]$ (i.e. Riemann integrable). Therefore : \begin{align*} (x*h)(t)&:=\int_{-\infty}^{\infty}x(\tau)h(t-\tau)\;\text{d}\tau\\ &=\int_{\max(0,t-2T)}^{\min(t,T)}t-\tau\;\text{d}\tau\\ &=\left. t\tau-\frac{\tau^{2}}{2}\right|_{\max(0,t-2T)}^{\min(t,T)}\\ &=\left[t\min(t,T)-\frac{\min^{2}(t,T)}{2}\right]-\left[t\max(0,t-2T)-\frac{\max^{2}(0,t-2T)}{2}\right]\\ &=t[\min(t,T)-\max(0,t-2T)]-\frac{1}{2}[\text{min}^{2}(t,T)-\text{max}^{2}(0,t-2T)] \end{align*} Problem: I want to further solve this to obtain the final solution as a piecewise function (if it has such form) I present to you the MATLAB simulation of the convolution First one corresponds to $T=1$ and the second one corresponds to $T=2$. Here is the algorithm used: t = linspace(0,10,1000); % assume a time span for "t" T=1; m = min(t,T); % find min of (t-1 , 5) for each "t" M = max(0,t-2*T); % find max of (t-3 , 3) for each "t" vind = M < m; % find "valid" range of "t" y = t.*(m-M)-0.5*(m.^2-M.^2);% EVALUATE the CONVOLUTION y = y.*vind; % force non-valid range to zero. figure,plot(t,y); % DISPLAY: title('convolution is :'); Credits to $\text{@Fat32}$ for the code in his answer in the link provided in the comment Answer: Some general hints for this type of problem If the function is defined piecewise than chances are your calculation and solution will also have to be done piecewise in sections. Determine first where the result is zero, i.e. 
where $x(\tau)$ and $h(t-\tau)$ don't overlap. This determines the start and end of the sections that you need to consider. Hint: if both functions are causal then $t=0$ is always a boundary. The length of a convolution is always equal to (or smaller than) the sum of the lengths of the signals. Then find the section boundaries: these are where the mins() and maxes() change from one argument to the other. In each section replace the max()/min() with the correct value for this section. Solve each section individually and splice the solutions together. It's also super helpful if you can develop some graphical intuition for this type of problem: draw both functions with a few different overlaps and see what happens when they start overlapping, when they fully overlap and when they stop overlapping. You already came a long way. Give that a shot and post back if you are still stuck. Happy to help more. EDIT More help on the piecewise solution. For simplicity let's call the convolution $y(t)$. $y(t)$ must be zero for $t < 0$: the convolution cannot start earlier than either of the signals. $y(t)$ must be zero for $t > 3T$: the convolution cannot be longer than the sum of the lengths of the signals. That means you only need to focus on the interval $0 < t < 3T$. $min(t,T)$ is $t$ for $0 < t < T$ and $T$ for $T < t < 3T$. $max(0,t-2T)$ is $0$ for $0 < t < 2T$ and $t-2T$ for $2T < t < 3T$. That means that you need to split the solution into three sections: $[0,T]$, $[T,2T]$ and $[2T, 3T]$. Within each section, you can replace the min/max functions by the actual value since it doesn't change over the section. Example: in the first section $[0,T]$, we have $max(0,t-2T) = 0$ and $min(t,T)=t$ so you would be integrating over $[0,t]$.
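Resolving the min/max on each section gives a closed form; the sketch below (Python rather than MATLAB, and the function names are mine) checks it against the general expression derived in the question:

```python
def conv_general(t, T):
    """The questioner's min/max formula, zeroed where the supports
    of x(tau) and h(t - tau) do not overlap."""
    lo, hi = max(0.0, t - 2 * T), min(t, T)
    if hi <= lo:
        return 0.0
    return t * (hi - lo) - (hi**2 - lo**2) / 2.0

def conv_piecewise(t, T):
    """Closed form per section: the overlap grows on [0, T], x sits
    fully inside h on [T, 2T], and the overlap shrinks on [2T, 3T]."""
    if t <= 0 or t >= 3 * T:
        return 0.0
    if t <= T:
        return t**2 / 2.0                      # integral of (t - tau) over [0, t]
    if t <= 2 * T:
        return t * T - T**2 / 2.0              # integral over [0, T]
    return t * T + 1.5 * T**2 - t**2 / 2.0     # integral over [t - 2T, T]
```

Sampling across the whole support shows the two expressions agree, and the piecewise form exposes the ramp-up on $[0,T]$, the linear middle section and the quadratic tail.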
{ "domain": "dsp.stackexchange", "id": 9924, "tags": "convolution, continuous-signals" }
Unable to install drcsim and gazebo on Ubuntu 12.04 or 12.10
Question: When I try to install drcsim, gazebo, or ROS I get the following errors on both Ubuntu 12.04 as well as 12.10. I have tried multiple times to install and have followed the directions on the site. I get the following errors when I try. Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: drcsim : Depends: gazebo (>= 1.2.0) but it is not going to be installed Depends: ros-fuerte-pr2-mechanism but it is not going to be installed Depends: ros-fuerte-std-msgs but it is not going to be installed Depends: ros-fuerte-common-msgs but it is not going to be installed Depends: ros-fuerte-image-common but it is not going to be installed Depends: ros-fuerte-geometry but it is not going to be installed Depends: ros-fuerte-pr2-controllers but it is not going to be installed Depends: ros-fuerte-geometry-experimental but it is not going to be installed Depends: ros-fuerte-robot-model-visualization but it is not going to be installed E: Unable to correct problems, you have held broken packages. Originally posted by aqiruse on Gazebo Answers with karma: 48 on 2012-12-04 Post score: 1 Original comments Comment by asomerville on 2012-12-04: If you put the errors in code formatting (see the 010101 icon above the edit box) they'll be easier to read. Comment by hsu on 2012-12-05: can you post the console output when you run 'sudo apt-get -f`, thanks. Comment by aqiruse on 2012-12-05: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed.
This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or b Comment by aqiruse on 2012-12-05: basically I get the same exact error message that I do when I try to do sudo apt-get install drcsim.... the sudo apt-get install -f drcsim gives me the same message Answer: Okay, I reinstalled Ubuntu 12.04 for the 7th time. Retried. And it works. It crashes often but it does launch, and I can place a model on the simulation. With a completely fresh install, I was able to download drcsim using "sudo apt-get install drcsim" and it offered no problems. The only difficulty I encountered was that I had to add a source pointer to gazebo-1.3/setup.sh before the one to drcsim/setup.sh in the ~/.bashrc file. Without it, it would give an error. As soon as I placed it before the drcsim/setup.sh source, it will launch and run, though it still crashes but for other reasons I presume. Thank you all for trying to help me. I really appreciate it.
{ "domain": "robotics.stackexchange", "id": 2842, "tags": "gazebo, installation, drcsim, ubuntu" }
Current without potential difference, in the given circuit?
Question: In the given circuit, the value of current comes out to be 0.5 A. Now if I calculate the potential difference between points A and B, i.e. $$V_B + 3i-1.5=V_A$$ On putting in the value $i=0.5\,A$ we get $$V_A-V_B=0$$ This means the potential difference between points A and B is zero, but still a current is flowing (in the circuit) through it! How?! Please explain. Answer: There's no revelation here. All you have established is the voltage drop across the 3 Ohm resistor equals the voltage rise across the 1.5 volt battery. You get the same result if you start at A and go to B through the 3 Ohm and 1 Ohm resistor and 2 volt battery. In any case, a potential difference is not required for current to flow. There are no potential differences across the interconnecting wires where zero resistance is assumed, yet the current is flowing through them. Superconductors carry current with no potential difference. So clearly a potential difference is not required for current to flow. Hope this helps.
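The loop bookkeeping in the question can be checked directly (a trivial sketch; the function name is mine):

```python
def branch_potential_difference(i, r=3.0, emf=1.5):
    """V_A - V_B walking from B to A through the 3 ohm resistor and
    the 1.5 V battery, per the question's relation V_B + 3i - 1.5 = V_A."""
    return r * i - emf
```

At the stated current i = 0.5 A the resistor drop and the battery rise cancel exactly, which is all that V_A − V_B = 0 expresses; at any other current the two points would sit at different potentials.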
{ "domain": "physics.stackexchange", "id": 88476, "tags": "electric-circuits, electric-current, potential, voltage" }
Can compounds dissolved in different miscible solvents react?
Question: For example, can crystal iodine dissolved in ethanol react with sodium thiosulfate dissolved in water when mixed together? Answer: There are two ways to add two miscible solvents to each other. One is the bartender skill, whereby you attempt to create two different layers of solvents and then try to keep the flask motionless to prevent mixing. The other way is to mix them all along, whether by shaking, careless pouring, or a stirring bar. I will concentrate on the second as that is, in my opinion, the far more common use in chemistry. Mixing two miscible solvents, even with compounds dissolved in them, results in a mixed solvent that for all intents and purposes of this question can be considered a new solvent. The question is no longer whether the individual solutes dissolve in their respective solvents but whether they dissolve in the mixed solvent. In many reactions, pouring the crude reaction mixture into a different solvent (miscible with the reaction solvent) serves as an easy way for the product to precipitate ready for isolation. For example, when xanthones are synthesised by dehydration in concentrated sulphuric acid, the crude mixture is then poured into water. Xanthones are not soluble in water but also not in water/sulphuric acid mixtures if the acidity is not sufficiently high so the xanthones precipitate and can be filtered off. Obviously, there are three cases here: both compounds remain soluble, neither compound remains soluble and one of the compounds remains soluble. The first two cases are pretty trivial: if both compounds are soluble they can react as they would in a single solvent. Likewise, if neither compound is, they will both precipitate and not react (disregarding very slow, very minor solid state reactions if there are any relevant interfaces between the precipitates—but I digress). The interesting case is the third case: what happens when one compound remains soluble but the other does not.
Interestingly, with sufficient stirring there is typically enough contact surface exposed to molecules in solution that these two compounds will still react. Furthermore, precipitation doesn’t always mean entirely insoluble; the solubility might be low but sufficient to sustain a reaction. Thus, in most cases if one reactant is not sufficiently soluble, a reaction will still occur. I will spare a few words on the bartender skill as there is an example reaction undergrad students may well perform as part of their lab courses: a reaction to test for the presence of nitrate. At first, nitrate is reduced by iron(II) salts in diluted sulphuric acid giving nitrogen oxide: $$\ce{3Fe^2+ + NO3- + 4H+ -> 3Fe^3+ + NO + 2 H2O}\tag{1}$$ Then, the solution is layered with concentrated sulphuric acid whose greater density means it will drop to the bottom of the test tube. At the interphase, water is drawn into the sulphuric acid leading to the formation of pentaaquanitrosyliron(II) ions $\ce{[Fe(H2O)5(NO)]^2+}$ from unreacted iron(II). This complex has a dark-red, violet or brown colour and forms in a ring-like manner at the bottom of the water phase as shown in the image from Wikipedia. In German, this is known as the ring test. $$\ce{[Fe(H2O)6]^2+ + NO <=>> [Fe(H2O)5(NO)]^2+ + H2O}\tag{2}$$ This shows how the compounds may still be able to react.
{ "domain": "chemistry.stackexchange", "id": 12827, "tags": "inorganic-chemistry, solubility, solutions, solvents" }
Modifying Floyd–Warshall Algorithm for Vertex Weights
Question: I was trying to modify the Floyd–Warshall's algorithm to take into account the weights over the vertices, in addition to the weight of the edges, while computing the shortest path. The length of a path from vertex A to an adjacent vertex B, Path(A,B) , is defined as: Vertex_Weight(A) + Edge_Weight(AB) For a vertex A: Path(A,A) = 0 I was wondering if the Floyd–Warshall can be modified to handle this, and in particular if the following idea will work: The weight on a vertex V can be added to the weights of all the outgoing edges from V. Thus new edge weight Edge_Weight' is: Edge_Weight'(AB) = Vertex_Weight(A) + Edge_Weight(AB) One can transform the original graph with the vertex weights to a graph without vertex weights, where the edge weights are modified as above. The normal Floyd–Warshall can be applied now. If anyone could envisage a scenario where this might not work or if anyone knew of a better way, I would be thankful. Answer: "I was wondering if the Floyd–Warshall can be modified to handle this, and in particular if the following idea will work." Yes, your cool idea works! In fact, other people have found the same cool technique of including vertex weight in the new edge weight. By the way, I would not say you have modified the Floyd-Warshall algorithm. It is better to say that you have adapted the problem brilliantly so that Floyd-Warshall algorithm can be applied flawlessly. "If anyone could envisage a scenario where this might not work or if anyone knew of a better way, I would be thankful" It looks like there is little chance for you to become thankful by that cause.
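A sketch of the proposed reduction in Python (function names are mine; the answer confirms the technique itself): fold each vertex weight into its outgoing edges, then run the unmodified Floyd–Warshall triple loop.

```python
INF = float("inf")

def shortest_paths_with_vertex_weights(n, vertex_w, edges):
    """edges is a list of directed (u, v, w) triples. A path's cost is
    the sum of its edge weights plus the weight of every vertex on it
    except the last, matching Path(A,B) = Vertex_Weight(A) + Edge_Weight(AB);
    dist[i][i] = 0 encodes Path(A,A) = 0."""
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], vertex_w[u] + w)  # the transform
    for k in range(n):           # plain Floyd-Warshall from here on
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

With vertex weights [1, 10, 1] and edges (0,1,1), (1,2,1), (0,2,5), the route through the heavy middle vertex costs (1+1)+(10+1) = 13, so the direct edge at 1+5 = 6 wins, showing the vertex weights are properly accounted for.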
{ "domain": "cs.stackexchange", "id": 12002, "tags": "algorithms, shortest-path" }
Why do only bass sounds (muscle cars and subwoofers) shake the environment?
Question: I have read this question: The walls have their resonant frequencies and being large objects, these tend to be low frequencies. Therefore bass sound waves are better at triggering resonance in the walls, which helps their transmittance. Why do bass tones travel through walls? I have noticed that whenever a muscle car or a car with loud subwoofers passes by, everything in the environment shakes (vibrates). This includes big objects (like the answer says), the house's walls, but it includes small objects that are close too. So I do not fully understand the answers there; they do not fully answer my question (they answer more about why the sound travels through the wall). But why do higher pitch sounds not shake (vibrate) the environment? In my experience this only works with these very loud low pitch noises. Question: Why do only bass sounds (muscle cars and subwoofers) shake the environment? Answer: This question leads to a rabbit hole of phenomena, and I believe a complete answer would turn into a book. With that in mind, here are a number of considerations that might put your mind at ease: Human hearing is limited between roughly 20 Hz and 20 kHz, but the response is not flat. We hear best around 1000 Hz (see this article for more), but for lower frequencies we need more acoustic energy to perceive a similar level of loudness. Thus, having a heavy bass is associated with pumping a lot more energy into the sound than having a loud melody. Usually we think of sound as responding linearly, or if you put in a given frequency you will get the same frequency out later. However, as the amplitude increases you start to get nonlinear effects, where additional frequencies of sound can be generated (e.g., higher-frequency rattling associated with low-frequency sources). How loud do you need to get to start having nonlinear effects? It depends on the situation.
Perhaps one of the easiest ways to get this nonlinear behavior is to have a contact discontinuity (such as a crack in a solid or a mirror hanging on the wall). There are very different behaviors of the discontinuity depending on whether you are pushing or pulling (think of pulling the mirror off the wall versus pushing it into the wall), which is a highly nonlinear behavior and can lead to many new frequencies, usually higher than the original, being excited. Often, these new, higher frequencies are associated with the rattling of small objects. A final thought: As you mentioned in the original post, there are resonances to consider. Resonating a structure causes it to have relatively large displacements. If you resonate a small cup in a cupboard, it will shake a small portion of the cupboard itself, which does not lead to efficient radiation of sound (shaking of the cupboard). On the other hand, if you resonate the cupboard, every single cup in the cupboard will start to shake at least a little. I do not believe that I have provided a sufficient answer to your question, but the number of phenomena that I could keep talking about (e.g., efficiency of radiation from different sized structures) keeps growing in my mind as I write. If you have specific questions about any of what I have written, let me know in the comments and I will do my best to answer them.
{ "domain": "physics.stackexchange", "id": 81920, "tags": "waves, acoustics" }
Bragg's interference
Question: This may be a little of a stupid question, but I was looking at a diagram describing Bragg's Law of Diffraction and I was like... how can an interference happen if wave beam C and wave beam C' are hitting different points on the detector screen?! They are not hitting the same point on the detector. Per Wikipedia: The two separate waves will arrive at a point with the same phase, and hence undergo constructive interference, if and only if this path difference is equal to any integer value of the wavelength... but how would they arrive at the same point? I think the graph is very confusing. EDIT: a reply to dmckee's comment: If an integral number of wavelengths $n\lambda$ can fit into the distance $2l$ then the two rays' contributions to the scattered wave front would be in phase and we would obtain maximum diffraction at angle $\phi$. Answer: We usually assume that the screen, $L$, is much further away than the distance $d$ between the lattice planes. Then the lines which converge on a point are NEARLY parallel, although they are not quite parallel. Then we make the approximation that they ARE parallel. If you work through the math carefully, you'll see this approximation only changes the length of the lines at second order in $\frac{d}{L}$. Thus, since $d \ll L$, this approximation is valid. In simpler words: The paths aren't parallel, they converge to a point. However, in the limit that the screen is much further away than the distance between lattice planes, we can approximate the paths as parallel without changing their lengths much. The amount we're wrong is proportional to $(\frac{d}{L})^2$, which goes to zero quickly when $d$ is much less than $L$.
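The second-order claim is easy to verify numerically (my own illustration with two point emitters a distance d apart and a screen at distance L; function names are mine):

```python
import math

def path_diff_exact(y, d, L):
    """Exact difference in path length from emitters at heights 0 and d
    to the screen point (L, y)."""
    return math.hypot(L, y) - math.hypot(L, y - d)

def path_diff_parallel(y, d, L):
    """Parallel-ray approximation d*sin(theta) toward the same point."""
    return d * y / math.hypot(L, y)
```

Halving d quarters the error of the parallel-ray approximation, i.e. the error is second order in d/L, while the path difference itself is first order, just as the answer states.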
{ "domain": "physics.stackexchange", "id": 23738, "tags": "quantum-mechanics, diffraction, geometric-optics" }
Why sleep? No, actually, why wake?
Question: I was just reading the sleep threads, and I wondered, why is the body constructed so that sleep is necessary? (Is it just a design error?) But then, how do things come to be awake at all? How did this happen? Yes, I can see the advantages, but how did it happen? What is "awake"? Are single-cell animals awake? The question I want to ask is, how did things progress from just sitting around ingesting and growing, like plants, to being something more? Is it more? What do "wakers" have that the rest don't? Sorry, I'm not a biologist, this may be a foolish question, or in the wrong place. Answer: Based on various comments, I've expanded this answer. Though long, it cannot cover the full depth of the topic of sleep, but I have tried to address the issues asked in the original question while broadly highlighting various aspects of sleep research. I welcome any suggestions for improvement. What is sleep? To know what it means to be awake, you must know what it means to be in a state of sleep. These are two alternative states for animals that sleep. Finding a concise scientific definition of sleep is difficult, probably because sleep is a very complex process. For example, Diekelmann and Born (2010) state that, Sleep has been identified as a state that optimizes the consolidation of newly acquired information in memory... and later Sleep is characterized by the cyclic occurrence of rapid eye movement (REM) sleep and non-REM sleep, which includes slow wave sleep (SWS, stages 3 and 4) and lighter sleep stages 1 and 2.... Most attempts to explain what sleep is include the four stages of sleep along with REM (rapid eye movement sleep). The stages are nicely explained by this National Institutes of Health website on Brain Basics: Understanding Sleep. Each stage of sleep can be identified by specific types of brain waves, shown below (image from here). Notice that the first wave pattern is characteristic of being awake.
The wave patterns change as an individual becomes drowsy and then falls asleep. While asleep, the brain cycles through the different stages, each with its characteristic brain waves. Associated with these stages are changes in metabolic activity and physical capabilities. For example, the Brain Basics site notes that we become less responsive to external stimuli and metabolic activity slows. During REM, breathing becomes rapid and we can't move the muscles in our arms or legs. Similar states are found in other invertebrate and vertebrate organisms, which suggests that these are common indicators of sleep or a sleep-like state. Some examples of these states are given in the citations and links below. Defining "awake" seems just as difficult as defining "asleep" but once again neural activity seems to be the key indicator. As shown in the figure above, being awake has characteristic wave patterns in the brain that are distinct from the wave patterns associated with sleep. In addition, none of the physiological/physical states associated with sleep stages are present when you are awake. In both cases, asleep and awake, the indicators seem to be based on characteristic neurological activities in the brain or, in the case of some invertebrates, the organized nervous system. This interpretation is important when considering whether organisms without any type of nervous system are capable of sleep or the alternate state: awake. Why sleep? Science magazine, in the 125th anniversary issue, made a list of 125 big questions that remain to be answered by scientists. One of those questions was "Why do we sleep?" The role of sleep remains elusive but many ideas have been put forth about the role of and the evolution of sleep. The leading idea seems to be memory consolidation. This article by Kavanau (1997) reviews ideas about the function of sleep and why it evolved.
He suggests that sleep and memory evolved as ways to improve or maintain effective connections among the nerve cells in the brain. Some circuits are used frequently, such as those used to process sensory information like vision. Other circuits are used less frequently, such as circuits that can be used to store memories. Sleep may allow those infrequently used circuits to be activated and used without causing conflict with the circuits used during "restful waking." In other words, sleep may allow the lesser used circuits to be exercised, which can help to consolidate memories. So, at least one function of sleep may be to help maintain efficiency in how information is moved around the brain by maintaining the brain circuitry. See also this review by Siegel (2005) and the previously mentioned article by Diekelmann and Born (2010). This Scitable article has a nice overview of the importance of sleep for humans, as does the Brain Basics site. What organisms sleep or have sleep-like activity? Sleep has been well-studied in vertebrates, from fishes (Marshall 1972) to mammals (Pace-Schott and Hobson 2002). However, not all vertebrates necessarily sleep (Kavanau 1998). For example, continuously swimming fishes like some sharks and blind cave fishes may not sleep. These are just three references of many about sleep in vertebrates. Sleep and sleep-like activity have not been as well studied in invertebrates. As noted in this thread, there is evidence for sleep in at least some insects. Cephalopods like octopuses show signs of sleep (Mather 2008). Caenorhabditis elegans, a nematode worm and model biological system, shows sleep-like states that are similar to mammals and fruit flies (Raizen et al. 2008). The most basal organism that shows evidence of sleep-like patterns is a jellyfish called Chironex fleckeri (Kavanau 2006). Cnidarians like jellyfishes are the basal (most "simple") group of organisms with an organized nervous system. Interestingly, C.
fleckeri processes a lot of visual information obtained through their 24 eyes. Twelve of the eyes are simple, light-receiving structures but 8 of the eyes are camera-like, meaning they have a lens. Kavanau (2006) argued that sleep (up to 15 hours) in this jellyfish allows time for its simple nervous system to process the visual information obtained while it is awake. The presence of sleep-like activity in jellyfish and nematode worms suggests that sleep is an evolutionarily ancient phenomenon. This Wikipedia page on sleep in non-humans overviews evidence for sleep in vertebrates and invertebrates. It's not clear whether androids sleep or what they dream about if they do sleep.$^1$ Do single-celled organisms sleep? This is (to me) a tricky question. Sleep is associated with specific types of neural activity in the brain or nervous system. In addition, sleep-like neural activity has been found in a diversity of organisms with some type of nervous system, and seems evolutionarily old. I therefore think that any organism that has some form of organized neural center (brain, brain-like organ or centralized nervous system) has the potential to show some type of sleep-like patterns. In contrast, organisms lacking any type of organized nervous system would not be capable of sleep, at least as currently defined. If they do not sleep, then they also cannot be awake because being "awake" seems to be recognized as neural activity not typical of sleep. Therefore, it seems to me that single-celled organisms like bacteria and paramecium are neither sleeping nor awake. Plants are neither sleeping nor awake. They are metabolically active but they are neither asleep nor awake because they do not have any type of neural activity. I'm not a sleep researcher (but sometimes I am a sleepy one) nor am I a neurophysiologist. I just walked through my train of thought. I welcome contrary views along with the supporting science. Literature Cited Diekelmann, S. and J. Born. 2010.
The memory function of sleep. Nature Reviews Neuroscience 11: 114-126. Kavanau, J.L. 1997. Origin and evolution of sleep: Roles of vision and endothermy. Brain Research Bulletin 42: 245-264. Kavanau, J.L. 1998. Vertebrates that never sleep: Implications for sleep's basic function. Kavanau, J.L. 2006. Is sleep's 'supreme mystery' unraveling? An evolutionary analysis of sleep encounters no mystery; nr does life's earliest sleep, recently discovered in jellyfish. Medical Hypotheses 66: 3-9. Marshall, N.B. 1972. Sleep in fishes. Proceedings of the Royal Society of Medicine (abstract). Mather, J.A. 2008. Cephalopod consciousness: Behavioral evidence. Consciousness and cognition 17: 37-48. Pace-Schott, E.F. and J.A. Hobson. 2002. The neurobiology of sleep: Genetics, cellular physiology and subcortical networks. Nature Reviews Neuroscience 3: 591-605. Raizen, D.M. et al. 2008. Lethargus is a Caenorhabditis elegans sleep-like state. Nature 451: 561-572. Siegel, J.M. 2005. Clues to the functions of mammalian sleep. Nature 437: 1264-1271. Footnote I could not resist.
{ "domain": "biology.stackexchange", "id": 2756, "tags": "sleep" }
xacro arg to xacro:property, or math on xacro args?
Question: Should it be possible to set a xacro property with a value passed in as an argument? <xacro:property name="length" value="$(arg length_arg)" /> Right now I get: ValueError: could not convert string to float: $(arg chassis_length_arg) I'd like to be able to do math on xacro arguments, but ${$(arg length_arg) / 2} also doesn't work, so there appears to be no way to do it other than precomputing the right values and passing them in as args. Originally posted by lucasw on ROS Answers with karma: 8729 on 2014-01-06 Post score: 0 Answer: It just now occurred to me that the solution is to make a xacro macro wrap all the math on the arguments, and have the call to the macro reference the arg: <xacro:macro name="foo" params="chassis_length"> ... do something with ${chassis_length/2} ... </xacro:macro> <xacro:foo chassis_length="$(arg chassis_length)" /> Originally posted by lucasw with karma: 8729 on 2014-01-07 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 16583, "tags": "ros, ros-hydro, xacro" }
What is the best way to block cigarette smoke from the body?
Question: Would it be better to hold your breath after smelling the presence of it, or to quickly inhale and exhale to get rid of it in your system (body)? EDIT: Changed from carbon monoxide to cigarette smoke. Answer: You don't smell carbon monoxide in cigarette smoke. It's odorless. (And not exactly the part of cigarette smoke I would be worried about.) You smell the other components of cigarette smoke. Which are not odorless. Holding your breath and getting out of the smoke seems the way to go. Even better would be exhaling, then holding your breath until you are clear.[1] Carbon monoxide, as well as other components of cigarette smoke like nicotine, are very good at getting into your blood stream quickly. Plus, when at rest, we inhale / exhale at only a fraction of our lung capacity. By (deeply) inhaling (your alternative course of action), you would just get more of the smoke into your lungs than you already have. [1]: The trick with holding your breath is of course that, if you run out of breath while still in the middle of the smoke, you can't help but inhale deeply...
{ "domain": "biology.stackexchange", "id": 5068, "tags": "human-biology" }
Zipfile password recovery program
Question: I would like to get a few tips on how to improve this program, which is supposed to help recover lost passwords from .zip files.

import zipfile
import string
import itertools
import random
import time

"""
Zipfile password cracker using a dictionary attack and,
if not successful, switches to bruteforce
"""

def crack(zipfilename, dictionary):
    t0 = time.time()
    password = None
    zip_file = zipfile.ZipFile(zipfilename)
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # Tries at first a dictionary attack
    with open(dictionary, 'r') as f:
        for line in f.readlines():
            password_string = line.strip('\n')
            try:
                password = bytes(password_string, 'utf-8')
                zip_file.extractall(pwd=password)
                t1 = time.time()
                total = t1 - t0
                print('Password found : %s' % password_string)
                print('Time spent : %f seconds' % total)
                return
            except:
                pass
    # If the password hasn't been found yet, the function switches to bruteforce
    nbcharmax = 10  # Maximum length to test
    for i in range(1, nbcharmax):
        print('Testing length = %i' % i)
        for j in itertools.product(alphabet, repeat=i):
            try:
                password_string = ''.join(j)
                password = bytes(password_string, 'utf-8')
                zip_file.extractall(pwd=password)
                t1 = time.time()
                total = t1 - t0
                print('Password found : %s' % password_string)
                print('Time spent : %f seconds' % total)
                return
            except:
                pass

if __name__ == "__main__":
    crack('test.zip', 'dictionary.txt')

So now the program seems to work fine but is a bit slow in my opinion. I'm wondering if there's a faster way than zip_file.extractall() to open archive files?

Answer:

1. Follow the styleguide (PEP8)

- don't import modules which you're not using (random)
- more, imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants. I always like to put my imports in alphabetical order, so that it gets easier to find them.
- you should have two newlines before defining your methods
- you should also have a space after every ,
- you should put a space before and after operators (x = 'something'); this is not required when defining arguments.
- instead of commenting code in the middle of the method, try to split the logic of your program into separate functions.
- you should be using the new .format() style

2. Working on the code

- you should really let the user choose where the files actually are
- don't use bare excepts
- you can directly use .encode() if you want to convert a string to bytes. That's actually the pythonic way of doing it, as per this SO discussion
- when reading a file, you can omit the r mode, as that's the default one

3. Efficiency

The problem does not come from the zip module but rather from the cartesian product you're doing. As @jonsharpe mentioned in his comments, your program has to go through a lot of passwords.

The final code would look like this:

"""
Zipfile password cracker using a dictionary attack and,
if not successful, switches to bruteforce
"""
import itertools
import string
import time
import zipfile

ARCHIVE_PATH = 'test.zip'
DICTIONARY_PATH = 'dictionary.txt'


def dictionary_attack():
    """Tries at first a dictionary attack"""
    t0 = time.time()
    with open(DICTIONARY_PATH) as f:
        for password in f:
            password = password.rstrip().encode()
            try:
                zipfile.ZipFile(ARCHIVE_PATH).extractall(pwd=password)
                t1 = time.time()
                print('Password found: {}\nTime spent: {} seconds'.format(password.decode(), t1 - t0))
                return True
            except RuntimeError:
                pass
    return False


def bruteforce_attack(nbcharmax):
    """If the password hasn't been found yet, the function switches to bruteforce"""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    t0 = time.time()
    for i in range(1, nbcharmax):
        for j in itertools.product(alphabet, repeat=i):
            password = ''.join(j).encode()
            try:
                zipfile.ZipFile(ARCHIVE_PATH).extractall(pwd=password)
                t1 = time.time()
                print('Password found: {}\nTime spent: {} seconds'.format(password.decode(), t1 - t0))
                return True
            except RuntimeError:
                pass
    return False


if __name__ == "__main__":
    if not dictionary_attack():
        bruteforce_attack(4)
{ "domain": "codereview.stackexchange", "id": 30704, "tags": "python, performance, beginner, python-3.x" }
Is it possible to project a magnetic field at a location in space?
Question: A magnetic field's strength drops off quickly as the distance from a magnet increases. Is there any way to use electromagnetic fields to create a magnetic field at a location? For example, if there are strong electromagnetic fields from many strong transmitters intersecting at a location, could constructive interference be created which produces a local magnetic field? The engineering idea is that if you create an array of radio transmitters working together at high power, could there be an interference pattern at a point far from the transmitters (maybe 100 meters away) where the fields produce a magnetic field at that location? I think this could be done, as an EM field is composed of an electric field and a magnetic field; shouldn't there be a way to make the magnetic field strong at a location in space? A positive consequence of this would be that a magnet placed at that location 100 meters away could be moved by the field. Also, if the field were alternating, rather than steady, then an electromagnet at that location could be alternating, and the force on the electromagnet could provide motive force. I guess since I have never heard of this, it cannot be done. But then again, it was once said that levitating a normal magnet could not be done; then someone did it with a spinning magnet which levitates above another magnet, and now it is a popular toy. The reason it works is that the magnet is spinning so that it cannot flip over. See spin-stabilized magnetic levitation. --- Update Hi, I would like to clarify: I wish to move a physical object, a small magnet, at a distance, not to transmit power to the object. Thanks! --- Update 2 My engineering goal is a small spin-stabilized magnet being pushed from a distant location. A small spin-stabilized magnet pushed from below by some "projected" magnetic field could potentially overcome the force of gravity. Imagine a thin lightweight spinning disk with a magnet in the center.
If a projected magnetic field were pushing it upwards, it would stay stable and could be projected from the ground into space. This might be done with an array of thousands of transmitters on the ground, somewhere in the desert far away from people. All transmitters could focus a small magnetic field at this magnet. Alternatively, if the magnetic field alternates at some frequency, then the magnet in the levitated device could also alternate via electromagnetism; it could be powered by a battery or by microwave power transmitted to the object. This is more or less an engineering problem if there is some basic science which can allow this to be a potential reality. -- Update 3 Could "Radiation Friction" be used? https://physicsworld.com/a/radiation-friction-could-make-huge-magnetic-fields-with-lasers/ Answer: This is possible, it's just very very difficult to do. People regularly do this with the electric field of light to move dielectric particles (insulators) in the lab, and the technique is known as "optical tweezers": http://en.wikipedia.org/wiki/Optical_tweezers The reason you don't want to try this with magnetic particles is that the magnetic field of light is much weaker than the electric field, or rather that it doesn't interact very much with most magnetic materials. Greg's answer above is half correct - the fact that these fields oscillate in time means that the applied force would oscillate as well; however, the field gradient is exploited instead to make optical tweezers work.
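The premise in the question's first sentence (field strength dropping off quickly with distance) can be made quantitative: far from a magnet, the field is approximately that of a dipole, falling off as 1/r³ on the axis. Here is a small illustrative sketch (the dipole moment m = 1 A·m² is an arbitrary example value, not from the question):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def on_axis_dipole_field(m, r):
    """On-axis field of a magnetic dipole: B = mu0*m / (2*pi*r**3)."""
    return MU0 * m / (2 * math.pi * r**3)

b_1m = on_axis_dipole_field(1.0, 1.0)
b_2m = on_axis_dipole_field(1.0, 2.0)
# doubling the distance weakens the on-axis field by a factor of 2**3 = 8,
# which is one reason "projecting" a strong static field 100 m away is so hard
```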
{ "domain": "physics.stackexchange", "id": 24829, "tags": "electromagnetism" }
How to include my own libraries in cmake to compile with rosmake?
Question: Hi there, I am working on a thesis where I have to use ROS within a system (and the other way round). So far, what I have changed in the CMakeLists.txt is including the following lines:

rosbuild_add_executable(ROSPublisher2MaCIClient ROSPublisher2MaCIClient.cpp)
include_directories(../../../../include/utils)
target_link_libraries(${GIMUtils} ../../../../lib/libGIMutils.a)
include_directories(../../../../include/MaCI)
target_link_libraries(${MaCI} ../../../../lib/libMaCI.a)
include_directories(../../../../include/GIMI)
include_directories(../../../../include/GIMnetAP)
include_directories(../../../../include)
target_link_libraries(${GIMI} ../../../../lib/libGIMI.a)
include_directories(/usr/include/libxml2)
TARGET_LINK_LIBRARIES(xml2)
add_definitions(-DLINUX_OS)

But there's no way it's working. I get a lot of "undefined reference" errors and I don't have an idea of how to fix them. That's why I was wondering: is this the way to do it (at least the lines I have included in the CMakeLists.txt so far), or is there something I'm missing (or some other file I have to update to make it all work with rosmake)? Cheers, Originally posted by jlo on ROS Answers with karma: 385 on 2012-02-08 Post score: 0 Original comments Comment by dornhege on 2012-02-10: There is the link_directories command, that might do something useful for what you want. Comment by jlo on 2012-02-09: The link http://gim.tkk.fi/MaCI :'D (what's up with the character limit?) Comment by jlo on 2012-02-09: MaCI, GIM, GIMI and GIMutils. I would love to tell you more about these systems, but I know almost as much about them as you guys do. If anyone is interested in taking a look into MaCI, here's a link: It's in a way a state-of-the-art take on what ROS is for, used for remote-controlled machinery, f.i. Comment by jlo on 2012-02-09: Sorry if it seemed a bit of an obtuse question; my intention was to see if the syntax was right to include libraries.
Then again, it will probably be better to ask it in a cmake forum. In any case, this is all I know: I need to include this path "../../../../lib/" (I checked) and the libraries Comment by dornhege on 2012-02-08: You are using some variables (MaCI, etc.) that nobody knows how they are defined, and you are referring to some paths in your system that we do not know. Also, nobody here knows what libraries need to be linked/built for your project. Without this information it is impossible to answer that question. Comment by tfoote on 2012-03-14: @jlo The character limit is to encourage most information to be in the question and in a curated answer. Usually it's recommended for the original poster to update their question instead of replying in the comments.
{ "domain": "robotics.stackexchange", "id": 8145, "tags": "ros, libraries, cmake" }
Ordering training text data by length
Question: If I have text data where the length of documents varies greatly and I'd like to use it for training with batching, there is a great chance that long strings will be mixed with short strings, and the average time to process each batch will increase because of padding within the batches. I imagine sorting documents naively by length would create a bias of some sort, since long documents and short ones would tend to be similar to each other. Are there any methods that have been tried that can help reduce training time in this case without sacrificing model performance? Answer: What you are referring to is called "bucketing". It consists of creating batches of sequences with similar length, to minimize the needed padding. In TensorFlow, you can do it with tf.data.Dataset.bucket_by_sequence_length. Take into account that previously it was in different Python packages (tf.data.experimental.bucket_by_sequence_length, tf.contrib.data.bucket_by_sequence_length), so examples online may contain the outdated name. To see some usage examples, you can check this Jupyter notebook, other answers on Stack Overflow, or this tutorial.
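The mechanics behind bucketing are easy to sketch without TensorFlow. Below is a minimal pure-Python illustration (function and variable names are my own, not a TensorFlow API): sequences are grouped into length buckets, batched within each bucket so padding only goes up to the longest member of the batch, and both intra-bucket order and batch order are shuffled to reduce the length-ordering bias the question worries about.

```python
import random

def bucket_batches(sequences, bucket_boundaries, batch_size, pad=0):
    """Group sequences into length buckets, then batch and pad per bucket.

    Sequences longer than every boundary are simply dropped in this sketch."""
    buckets = {b: [] for b in bucket_boundaries}
    for seq in sequences:
        for b in sorted(bucket_boundaries):
            if len(seq) <= b:
                buckets[b].append(seq)
                break
    batches = []
    for bucket in buckets.values():
        random.shuffle(bucket)  # shuffle within a bucket: lengths stay similar
        for i in range(0, len(bucket), batch_size):
            batch = bucket[i:i + batch_size]
            width = max(len(s) for s in batch)  # pad only to the batch maximum
            batches.append([s + [pad] * (width - len(s)) for s in batch])
    random.shuffle(batches)  # interleave buckets across the epoch
    return batches

docs = [[1] * n for n in (3, 4, 9, 10, 2, 8)]
batches = bucket_batches(docs, bucket_boundaries=[5, 12], batch_size=2)
# every batch is rectangular, and short docs are never padded out to length 10
```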
{ "domain": "datascience.stackexchange", "id": 11290, "tags": "machine-learning, nlp, bert, text-classification, performance" }
rosbuild error bad current directory
Question: Hello, I just started to learn ROS. I set up the package as in the beginner tutorial (but my package name is firstpackage). The error when I type make is the following:

mkdir -p bin
cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=`rospack find rosbuild`/rostoolchain.cmake ..
[rosbuild] Building package firstpackage
CMake Error at /opt/ros/electric/ros/core/rosbuild/private.cmake:110 (message):
  [rosbuild] rospack found package "firstpackage" at "", but the current
  directory is "/home/unais/workspace_ros/firstpackage". You should
  double-check your ROS_PACKAGE_PATH to ensure that packages are found in the
  correct precedence order.
Call Stack (most recent call first):
  /opt/ros/electric/ros/core/rosbuild/public.cmake:180 (_rosbuild_check_package_location)
  CMakeLists.txt:12 (rosbuild_init)
-- Configuring incomplete, errors occurred!
make: *** [all] Error 1

Question: What could be the error? I don't know how to set my path correctly. This is my .bashrc file; the last few lines are these:

# Add an "alert" alias for long running commands. Use like so:
#   sleep 10; alert
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'

# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi

# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then .
/etc/bash_completion fi

source /opt/ros/fuerte/setup.bash

### Added by the Heroku Toolbelt
export PATH="/usr/local/heroku/bin:$PATH"
PATH=$PATH:$HOME/.rvm/bin # Add RVM to PATH for scripting

export ROS_PACKAGE_PATH=~/workspace_ros:$ROS_PACKAGE_PATH
export ROS_PACKAGE_PATH=~/gspeech-master:$ROS_PACKAGE_PATH
export ROS_PACKAGE_PATH=~/fuerte_workspace/sandbox/beginner_tutorials:$ROS_PACKAGE_PATH
export ROS_PACKAGE_PATH=~/beginner_tutorials:$ROS_PACKAGE_PATH

source /opt/ros/electric/setup.bash
source /opt/ros/electric/setup.bash

Originally posted by unais on ROS Answers with karma: 313 on 2013-03-03 Post score: 1 Answer: Clean up your .bashrc first: there should only be one line like source /opt/ros/electric/setup.bash; you currently have three. If you need to use both fuerte and electric, then don't put anything like that into your bashrc, but run this line in the shell every time you start a new shell. Then don't use multiple workspaces: use one workspace for electric and one workspace for fuerte, and do not mention both in your bashrc. Ideally use rosws to create a workspace, and source its setup.sh either manually or in the bashrc. If you use fuerte and electric, create separate workspaces, don't source anything in your bashrc, don't modify the ROS_PACKAGE_PATH in your bashrc, and just source the workspace's setup.bash when you start a new shell. Also rely on the rosmake tool rather than invoking "mkdir bin ...". The rosmake tool is much easier to use for beginners than trying to manually do the right thing. Your .bashrc ending should look e.g. like this:

# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then .
/etc/bash_completion fi

source ~/workspace_ros/setup.bash

Originally posted by KruseT with karma: 7848 on 2013-03-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by unais on 2013-03-04: Thank you. I need this "workspace_ros" workspace only. Could you please help me with what changes I need to make in my .bashrc file? Comment by unais on 2013-03-04: Hello, are these two lines needed? ### Added by the Heroku Toolbelt export PATH="/usr/local/heroku/bin:$PATH" PATH=$PATH:$HOME/.rvm/bin # Add RVM to PATH for scripting Comment by KruseT on 2013-03-05: Not for ROS. I just gave them for you to know where to make changes. Comment by unais on 2013-03-05: thank you very much.
{ "domain": "robotics.stackexchange", "id": 13163, "tags": "ros, beginner-tutorials" }
Why does the friction force peak just before the object moves?
Question: This diagram shows the magnitude of the friction force on a typical object in contact with a surface. The friction force increases as we apply an external force, until it reaches a maximum value just before the object starts to move; then it drops sharply and remains roughly constant as long as the object is still moving. Could you explain all of the physical interactions involved in that crucial moment just before the object begins to move, which result in the diagram above? Answer: The idea here is that static friction is larger than dynamic friction. This is something that depends on the nature of the materials in contact and is not true in general, because friction is not a fundamental force: it is a result of some very complicated phenomena at a microscopic level. An explanation that does make sense to me is to think of the two objects in contact as having little peaks and valleys that fit into each other. To start the motion, you need to apply a large enough force to get the largest peak out of the largest valley. However, once things are moving, the object has "broken free", and you can see that the motion of the body also contributes to getting in and out of the valleys, i.e. you need to apply a smaller force. This is one model, but there are others out there that give you the same result. The way I would prefer to think of it is that it is an experimental fact that there is some sort of "sticking together" that you break when the object is set into motion, and this is the difference between static and dynamic friction.
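The curve in the question can be reproduced with the standard Coulomb friction model. Here is a short sketch (the coefficient values mu_s = 0.6 and mu_k = 0.4 are illustrative, not taken from the question):

```python
def friction_force(applied, normal, mu_s=0.6, mu_k=0.4):
    """Coulomb model: while the object is at rest, static friction exactly
    balances the applied force up to its maximum mu_s * N; once that peak
    is exceeded, the object slides and friction drops to mu_k * N."""
    if applied <= mu_s * normal:
        return applied            # at rest: friction matches the pull
    return mu_k * normal          # sliding: constant, lower kinetic friction

N = 10.0  # normal force in newtons
curve = [friction_force(F, N) for F in (0.0, 3.0, 5.9, 6.1, 9.0)]
# rises linearly to the peak near mu_s*N = 6.0 N, then drops sharply to 4.0 N
```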
{ "domain": "physics.stackexchange", "id": 24313, "tags": "newtonian-mechanics, friction" }
What is the difference between Euler and Eulerian graph?
Question: A graph is Eulerian iff $\exists$ an Eulerian cycle or all the vertices of the graph have even degree. What is an Euler graph? Wiki has a definition for the Eulerian graph but not for the Euler graph. Answer: The distinction is given at Wolfram. An Euler graph is a graph in which all vertices have even degree; such a graph may also be disconnected. An Eulerian graph is a graph in which there exists an Eulerian cycle; equivalently, the graph must be connected and every vertex must have even degree. In other words, all Eulerian graphs are Euler graphs, but not vice versa.
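Both definitions are mechanical enough to check in code. This sketch (helper names are mine) separates the two conditions, following the answer's convention: even degrees alone for an Euler graph, and even degrees plus connectivity for an Eulerian graph.

```python
from collections import defaultdict

def degree_map(vertices, edges):
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def is_euler(vertices, edges):
    """Euler graph: every vertex has even degree (may be disconnected)."""
    return all(d % 2 == 0 for d in degree_map(vertices, edges).values())

def is_connected(vertices, edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:                      # iterative depth-first search
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == set(vertices)

def is_eulerian(vertices, edges):
    """Eulerian graph: connected and every vertex has even degree."""
    return is_connected(vertices, edges) and is_euler(vertices, edges)

# Two disjoint triangles: every degree is 2 (even), but the graph is
# disconnected -- Euler, yet not Eulerian.
V = list(range(6))
E = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
```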
{ "domain": "cs.stackexchange", "id": 19018, "tags": "graphs, discrete-mathematics" }
ros3djs Inverse kinematics
Question: Hi Everyone, I am new to ROS. I have a simple three link robotic arm, and I would like to find the inverse kinematics for the joint angles. I already have the URDF. I want to click in a 3D space in the web browser widget and the end effector should point to that location. Are there any tutorials on how to do this? Thank You! Originally posted by Kavita Krishnaswamy on ROS Answers with karma: 11 on 2016-09-20 Post score: 1 Original comments Comment by JoshMarino on 2017-02-18: You would like the end effector to reach that 3D space, or be pointing to it (outside of reach)? Both very different problems. Answer: MoveIt! is a popular choice for inverse kinematics. Take a look at the documentation. The tutorials there will help you to configure a solver and use RViz for controlling your arm. Once that's working, it's just a small step to use InteractiveMarkers in ros3djs to do the same from a web browser. Originally posted by T045T with karma: 48 on 2017-02-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25802, "tags": "ros, kinematics, inverse, ros3djs" }
Make querySelector in kotlin-js more convenient and type-safe
Question: I'd like you to take a look at these extension functions I made in Kotlin/JS (version 1.3.61):

inline fun <reified E : Element> Element.queryHtmlSelectorOrNull(selectors: String): E? {
    return this.querySelector(selectors = selectors) as E?
}

inline fun <reified E : Element> Element.queryHtmlSelector(selectors: String): E {
    return this.queryHtmlSelectorOrNull(selectors = selectors) ?: error("couldn't find element by selectors '$selectors'")
}

Now let me elaborate. I was using the classic JavaScript querySelector. Its signature, fun querySelector(selectors: String): Element?, has some downsides: It is nullable. When I force-cast to, for example, HTMLDivElement and the result is actually null, I get a NullPointerException with no additional information/error message. Example: document.querySelector(".something-non-existent") as HTMLDivElement //throws I have to explicitly cast my query to the specific element type I want (for example HTMLDivElement? or HTMLDivElement). My functions fix some of the issues, but create new ones: You can explicitly select what type you need, or the compiler can infer it based on the variable type. Example: document.queryHtmlSelector<HTMLDivElement>(".something") or val element: HTMLDivElement = document.queryHtmlSelector(".something") The downside is that this doesn't work (you need to set the type explicitly): val element = document.queryHtmlSelector(".something") It would be cool somehow to "default" the base type to Element if none is specified, but I can't think of a way to do that. I'd like something that can pretty much replace the original function and just add the benefits of stronger typing and a nicer error message. Any hints on how to improve this are greatly appreciated. Answer: "It would be cool somehow to "default" the base type to Element if none is specified, but I can't think of a way to do that." Indeed it is not possible to do that, at least not with the same method. An option could be to have a separate method for that. document.queryHtmlSelectorRaw ? 
Or wait, actually... That leaves you back to what you were starting with, Element.querySelector. "I'd like something that can pretty much replace the original function and just add the benefits of stronger typing and a nicer error message. Any hints on how to improve this are greatly appreciated." Given those requirements, I'd say that the current method is about as good as you're gonna get. You have the stronger typing (but you will have to specify it). You have the better error message if it's null. The only things that I would consider are: Get rid of reified; I'm not convinced that you need it. Change the method names; queryHtmlSelector can easily be confused with the built-in querySelector. May I recommend just the name selector or select? Improve the error message even further, such as No element found using selector '$selector' on $this.
{ "domain": "codereview.stackexchange", "id": 37829, "tags": "kotlin" }
Emf in ac generator
Question: I know that the emf induced in an AC generator equals NBAw·sin(wt), and I noticed that the negative sign from Lenz's law has disappeared. Does this mean that the emf will not oppose the change in magnetic flux? Answer: As we know, the magnetic flux is Φ = NBA·cos(wt). According to Faraday's law, emf = -dΦ/dt. Differentiating with respect to t, we get emf = NBAw·sin(wt). This does not mean that it violates Lenz's law. If you study Lenz's law or Faraday's law carefully, you will come to know that the emf produced does not oppose the magnetic flux itself; it opposes the change in magnetic flux. So in the case of an AC generator, let B represent the magnetic field and assume the armature coil is moving in such a way that the flux through it is decreasing; then the emf's direction (for simplicity, use the current's direction) will be such that it compensates for the decreasing flux by creating a magnetic field in the same direction. If the flux through the coil is increasing, then the current will flow in the direction that tries to decrease the increasing flux.
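The sign bookkeeping above can be checked numerically. In the sketch below (the values N = 100, B = 0.5 T, A = 0.1 m², w = 50 rad/s are illustrative only), a central-difference derivative of Φ(t) = NBA·cos(wt) confirms that emf = -dΦ/dt = NBAw·sin(wt): the minus sign from Faraday's law is exactly what turns -d/dt[cos] into +sin, so no sign has been lost.

```python
import math

N, B, A, w = 100, 0.5, 0.1, 50.0  # illustrative values, not from the question

def flux(t):
    # Phi(t) = N * B * A * cos(w t)
    return N * B * A * math.cos(w * t)

def emf_analytic(t):
    # emf = N * B * A * w * sin(w t)
    return N * B * A * w * math.sin(w * t)

def emf_numeric(t, h=1e-6):
    # emf = -dPhi/dt, estimated by a central difference
    return -(flux(t + h) - flux(t - h)) / (2 * h)

t = 0.0123
# emf_numeric(t) and emf_analytic(t) agree to high precision
```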
{ "domain": "physics.stackexchange", "id": 52248, "tags": "electromagnetic-induction" }
How can there be a single electron, while we observe many simultaneously?
Question: Wheeler and Feynman's idea of the one-electron universe says that all the electrons and positrons are in fact a single particle bouncing forward and backward in time. I don't get the nature of the bouncing phenomenon here. We observe multiple electrons simultaneously, so does this mean that the bouncing occurs at infinite speed and in no time? Is this possible? How should we understand that "bouncing"? Answer: It was just a "what if" discussion, as this link shows; Feynman was struck by Wheeler's insight that antiparticles could be represented by reversed world lines, and credits this to Wheeler, saying in his Nobel speech: "I did not take the idea that all the electrons were the same one from [Wheeler] as seriously as I took the observation that positrons could simply be represented as electrons going from the future to the past in a back section of their world lines. That, I stole!" It never was a complete model, as there is an asymmetry between electrons and positrons in numbers, and all those other pesky particles discovered since then. But it proved a useful idea of Wheeler's, with use in Feynman diagrams. So understanding that "bouncing" is understanding how to read Feynman diagrams and decide on the limits in the integrals they represent when a calculation is done.
{ "domain": "physics.stackexchange", "id": 50225, "tags": "electrons, antimatter" }