Simple stop watch class
Question: I've made a little library where, from time to time, I add functionalities that I miss in the Java standard library. I am sad that, for example, the Java library has no stopwatch class that lets me measure time without having to deal with the details of a time library. When I write library functionality, I set myself the following requirements: Keep it simple, stupid. The code has to be on point and should only do what is really needed. The code has to be very readable, even if this leads to a few more lines of code. The task has to be solved efficiently; memory and computation time should not be wasted. If the standard library or anything similar is needed, only use stable, long-lasting and modern technologies, prioritized in this order. So this class was born. What do you think about it? Did I meet my own requirements?

import java.time.Instant;
import java.time.Duration;

public class StopWatch {

    private Instant startDate;
    private Instant stopDate;

    public StopWatch() {
        reset();
    }

    public void start() {
        if (startDate == null) {
            startDate = Instant.now();
        }
    }

    public void stop() {
        if (stopDate == null && startDate != null) {
            stopDate = Instant.now();
        }
    }

    public void reset() {
        startDate = null;
        stopDate = null;
    }

    public long getMilli() {
        if (startDate != null && stopDate != null) {
            Duration duration = Duration.between(startDate, stopDate);
            long nanos = duration.getSeconds() * 1000000000 + duration.getNano();
            return nanos / 1000000;
        }
        return 0;
    }
}

Purpose: The code that follows is not part of the code to be reviewed. Some people wanted me to show the use case, and it is this: I have a class whose objects calculate prime numbers. I want to measure how much time it takes to calculate these numbers, so I need a stopwatch class to avoid bloating my code with the date-time APIs of my programming language.
Main.java

public class Main {
    public static void main(String[] args) throws Exception {
        StopWatch sw = new StopWatch();
        sw.start();
        PrimeNumberCalculator pnc = new PrimeNumberCalculator();
        pnc.setCurrentNumber(10000);
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            pnc.calculateNextPrime();
            sb.append((i+1) + ";" + pnc.getCurrentPrimeNumber() + "\n");
        }
        sw.stop();
        System.out.println(sb.toString());
        System.out.println(sw.getMilli());
    }
}

PrimeNumberCalculator.java

public class PrimeNumberCalculator {

    private long currentNumber;
    private long currentPrimeNumber;

    public PrimeNumberCalculator(long currentNumber) {
        this.currentNumber = currentNumber;
    }

    public PrimeNumberCalculator() {
        this(3);
        currentPrimeNumber = 2;
    }

    public long getCurrentPrimeNumber() {
        return currentPrimeNumber;
    }

    public void setCurrentNumber(long currentNumber) {
        this.currentNumber = currentNumber;
    }

    public void calculateNextPrime() {
        boolean isPrime;
        do {
            isPrime = true;
            for (long i = 2; i < currentNumber; i++) {
                if (currentNumber % i == 0 && currentNumber != 2) {
                    isPrime = false;
                    break;
                }
            }
            if (isPrime) {
                currentPrimeNumber = currentNumber;
            }
            currentNumber++;
        } while (!isPrime);
    }
}

The class is also needed in my own Minesweeper implementation, where players can play against the clock and be ranked by the time they needed. Answer: For a library class, JavaDocs should be provided (here the interface is so small and well-named that one can easily guess what each method does, but still). As already said, the getMilli() method should be replaced with a getDuration() one. Your conversion to millis not only involves ugly computation code, but also cripples the available resolution. And if a user wants milliseconds, there is the Duration.toMillis() method.
{ "domain": "codereview.stackexchange", "id": 42677, "tags": "java, timer" }
Has non-conservation of baryon number been observed?
Question: CP violation (as I understand it) allows for non-conservation of baryon number, and thus can contribute (at least a little) to the baryon asymmetry in the universe today (far more matter than antimatter). But have we ever seen baryon number change in the laboratory? Or had almost-but-not-quite-conclusive evidence of it? Answer: Short answer: nothing has been seen. Long answer: Questions like this on the experimental limits in particle physics can usually be answered by looking things up in the Particle Data Group's annual Review of Particle Physics. There is a summary online version and an extensive (but free!) print version. EDIT: Here (pdf) is the full section on conservation laws. (And if you are game for a 44 MB pdf download you can get the full 2012 review here.) See page 23 of the linked pdf for the relevant section, though if you trust the standard model the limits on lepton number violation can also be converted into an indirect limit on baryon number violation because $B-L$ is a conserved quantum number in the standard model. Usually people look for $B$ violation in models beyond the standard model because it is exponentially suppressed in the standard model at temperatures below the electroweak phase transition (above the phase transition, at very early times in the universe, $\Delta B=\Delta L$ processes are in thermal equilibrium; but now there is an insurmountable energy barrier to $B$ violating reactions, which can only happen by quantum tunneling at an extremely small (i.e. unobservable) rate). The full review lists about a dozen odd searches for baryon violation, all of which are limits, i.e. maximum constraints on the rates. No $B$ violating processes have been observed in nature. (Obviously baryogenesis happened somehow in the early universe, but we have no direct evidence for how. 
By the way, the CP violation in the standard model is in principle able to create baryogenesis above the electroweak phase transition, but it turns out to be numerically far too small to get the right answer. That is why people look for beyond-SM sources of CP violation.) The PDG limits on $B$ violating decay rates look like $\Gamma(Z\to p e)/\Gamma_{tot} < 1.8\times10^{-6}$ at 95% confidence. That means that the $Z$ boson undergoes this particular $B$ violating decay less than about one in a million times. The exponents for all the processes listed are in the same range, $-5$ to $-8$, so any $B$ violating processes are quite rare and below current detection thresholds. The most famous $B$ violating process is proton decay, which is stringently constrained. The lifetime of the proton is $>2.1\times 10^{29}$ years. Constraints on individual decay channels are even tighter, for example $\tau(p\to e^+ \pi)>8200\times 10^{30}$ years. Constraints on bound neutron decays are similar. $n\leftrightarrow\bar{n}$ oscillation is constrained to $\gtrsim 10^8$ seconds, a surprisingly weak bound. But then again, neutrons are funny like that. :) EDIT: Prompted by Lumo's good comment above to clarify the relationship between $B$ and $CP$ violation. They are logically independent things: one can exist without the other. The reason they are often brought up in the same breath is that they are both part of the Sakharov conditions which are needed to dynamically produce a baryon-antibaryon asymmetry in the early universe: $B$ violation, $C$ and $CP$ violation, departure from thermal equilibrium. The proof that these are necessary conditions for baryogenesis is pretty trivial (see the wiki page) so I won't go into that. But neither are they sufficient conditions - you have to do detailed calculations to work out the asymmetry in a given particle physics model. 
It turns out that the standard model falls about eight orders of magnitude short in its predicted asymmetry despite having all the conditions 1-3 fulfilled at the electroweak phase transition. This is why people attack the problem hoping to find beyond standard model physics. (A few still hope that the standard model can work if exotic quark matter states get involved in the QCD phase transition somehow. I don't know enough about the QCD phase transition to tell you how reasonable this is, but quark matter proposals have had a checkered history.)
{ "domain": "physics.stackexchange", "id": 9577, "tags": "particle-physics, conservation-laws, cp-violation, baryons, baryogenesis" }
How to prove that antipodal points on the Bloch sphere are orthogonal?
Question: I started by assuming two antipodal states $$ |(\theta,\psi)\rangle = \cos\dfrac{\theta}{2}|0\rangle + \sin\dfrac{\theta}{2}e^{i\psi}|1\rangle\\ |(\theta+\pi,\psi+\pi)\rangle= \cos\dfrac{\theta+\pi}{2}|0\rangle + \sin\dfrac{\theta+\pi}{2}e^{i(\psi+\pi)}|1\rangle $$ and then tried to take their inner product. However, the math doesn't check out (the result I get is $\ne0$). What is wrong with my deduction? Answer: In spherical coordinates the point antipodal to $(\theta,\psi)$ is $(\theta+\pi,\psi)$, not $(\theta+\pi,\psi+\pi)$. (Reduced to the standard range $0\le\theta\le\pi$, that is the point $(\pi-\theta,\psi+\pi)$.) With the correct point the inner product does vanish: since $\cos\frac{\theta+\pi}{2}=-\sin\frac{\theta}{2}$ and $\sin\frac{\theta+\pi}{2}=\cos\frac{\theta}{2}$, $$\langle(\theta,\psi)|(\theta+\pi,\psi)\rangle = -\cos\frac{\theta}{2}\sin\frac{\theta}{2} + \sin\frac{\theta}{2}e^{-i\psi}\cos\frac{\theta}{2}e^{i\psi} = 0.$$
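The orthogonality (and the failure of the point used in the question) can be checked numerically. A quick sketch in plain Python; the helper names and the test angles are made up for illustration:

```python
import cmath
import math

def state(theta, psi):
    # (|0>, |1>) amplitudes of the Bloch-sphere state
    return (math.cos(theta / 2),
            math.sin(theta / 2) * cmath.exp(1j * psi))

def inner(a, b):
    # <a|b> = conj(a0)*b0 + conj(a1)*b1
    return a[0].conjugate() * b[0] + a[1].conjugate() * b[1]

theta, psi = 0.7, 1.9                          # arbitrary test point
s     = state(theta, psi)
anti  = state(theta + math.pi, psi)            # correct antipodal point
wrong = state(theta + math.pi, psi + math.pi)  # the point used in the question

print(abs(inner(s, anti)))   # ~0: orthogonal
print(abs(inner(s, wrong)))  # sin(theta), not zero
```

The second magnitude comes out as $\sin\theta$, matching the nonzero result the questioner obtained.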
{ "domain": "quantumcomputing.stackexchange", "id": 4784, "tags": "mathematics, bloch-sphere" }
Angular momentum quantum mechanics operator
Question: In classical mechanics the angular momentum is given as $\bf L = r \times p$ and when going to quantum mechanics you replace $\bf r$ and $\bf p$ by their respective quantum operators, namely $\bf \hat{r} = r$ and ${\bf \hat{p}} = -i \hbar \bf{\nabla}$, that is $${\bf \hat{L}}= -i \hbar {\bf r} \times \nabla.\tag{1}$$ Shouldn't one use a symmetrised form for $\bf\hat{L}$, i.e. $${ {\bf \hat{L}} = -i \hbar \frac{1}{2} (\ \bf r \times \nabla - \nabla \times r )}?\tag{2}$$ Answer: No, because as an operator identity $\nabla \times {\bf r} = -\,{\bf r} \times \nabla$ (the $\delta_{jk}$ term from the product rule vanishes when contracted with $\epsilon_{ijk}$), so the symmetrised form (2) reduces exactly to (1). More fundamentally, there is no ordering ambiguity to resolve in the first place: each component of ${\bf \hat L}$, e.g. $\hat L_z = \hat x \hat p_y - \hat y \hat p_x$, only multiplies position and momentum operators along different axes, and those commute. ${\bf \hat L}$ is fixed by the correspondence principle (plus its commutation relations).
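The operator identity used in the answer can be spelled out index by index (a short derivation with the summation convention; $\psi$ is an arbitrary test function):

```latex
(\nabla\times\mathbf{r})_i\,\psi
  = \epsilon_{ijk}\,\partial_j(x_k\,\psi)
  = \epsilon_{ijk}\left(\delta_{jk}\,\psi + x_k\,\partial_j\psi\right)
  = \epsilon_{ijk}\,x_k\,\partial_j\psi
  = -\,\epsilon_{ijk}\,x_j\,\partial_k\psi
  = -(\mathbf{r}\times\nabla)_i\,\psi
```

since $\epsilon_{ijk}\delta_{jk}=0$, and the dummy indices $j,k$ were swapped in the second-to-last step. Hence $\tfrac12(\mathbf r\times\nabla-\nabla\times\mathbf r)=\mathbf r\times\nabla$: the symmetrised definition adds nothing.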
{ "domain": "physics.stackexchange", "id": 49515, "tags": "quantum-mechanics, classical-mechanics, angular-momentum, operators" }
Internal energy and rest mass energy
Question: When a cup of water is at rest, does the potential energy between the water molecules contribute to the rest mass, or just the kinetic energy of the molecules? Also, is this the same as asking whether the total internal energy of a resting system equals the rest mass of the system? Thank you. Answer: Short version: the mass of the system is (modulo an uninteresting factor of $c^2$) the same as the total energy of the system computed in a frame where it has zero total momentum (therefore including contributions from the mass of the parts, the kinetic energy of the parts relative to the center of mass, and the potential energy of the system, but not the kinetic energy of the center of mass). Notably, the mass of the system is generally not the same as the sum of the masses of its parts (though it is impractical to measure the difference in most cases). You should keep in mind that the electromagnetic potential energy of a condensed fluid (like liquid water) is generally negative. That is, the energy of a drop of liquid water is less than the energy of the same set of molecules with the same speeds taken far from each other.
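To see why the difference is impractical to measure, here is a back-of-the-envelope estimate, not part of the original answer: the latent heat of vaporization of water is used as a proxy for the scale of the intermolecular binding energy.

```python
# Rough estimate: how much does the (negative) intermolecular potential
# energy of liquid water change its mass, per kilogram?
c = 2.998e8      # speed of light, m/s
L_vap = 2.26e6   # latent heat of vaporization of water, J/kg
                 # (sets the scale of intermolecular binding)

# Binding lowers the energy, so the liquid is lighter than the same
# molecules separated, by roughly E / c^2 per kilogram:
dm_over_m = L_vap / c**2
print(f"fractional mass deficit ~ {dm_over_m:.1e}")  # ~2.5e-11
```

A part in $10^{11}$ is far below what any scale can resolve, which is exactly the answer's point.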
{ "domain": "physics.stackexchange", "id": 39868, "tags": "energy, mass, potential, kinetic-theory" }
Will the plates of a parallel-plate capacitor keep their charge after being charged and then separated from the non-conductor?
Question: Suppose I had 3 plates, 2 metal and 1 glass, and I put them together to form a basic parallel-plate capacitor. After charging it and bringing it near an electroscope, nothing happened, because the electric field outside a capacitor is 0. But what happens if I separate the 3 plates? Will the metal plates keep their charge, and why? What happens when I bring them near the electroscope? Answer: It will depend on how you separate the plates and whether you have disconnected the charger too. If you handle the plates with non-conducting gloves, once separated they will keep their charge until it dissipates into the air, due to the small but non-zero conductivity of air. With bare hands, prepare yourself for a shock. Until the charge dissipates, the electroscope will see the field the plates create. The charges will distribute themselves on both sides of each plate. This link shows the field for a single metal plate and for a plate in a capacitor.
{ "domain": "physics.stackexchange", "id": 5111, "tags": "electrostatics, electric-fields, capacitance" }
What is the meaning behind convolution?
Question: As I know, if we want to know the output of an LTI system, we do a convolution between the input x[n] and the impulse response h[n]. But in this question I want to know the meaning behind convolution. Why do we do convolution (a sum of products) rather than just a single addition or multiplication between the input signal and the impulse response? Answer: I explained this in "Why is time inversion?" Red is a reference delta impulse (of height 1), whereas green is a typical response to that impulse, denoted by the function h(t). In an LTI system, the output is proportional to the input; that is, if we have a delta impulse of height a at the input at time 0, the output at time t will be a*h(t). Now, instead of a single impulse at the origin, you apply a series of input impulses at various times. What will be the output? Say there was an input impulse of height $a_1$ at $T_1$. Since the current time is $t$, the impulse occurred $t-T_1$ seconds ago and its contribution to the current output y(t) is $a_1 h(t - T_1)$. There is another contribution from another impulse, which occurred at $T_2$. Its contribution is $a_2 h(t-T_2)$. So, $y(t) = a_1 h(t-T_1) + a_2 h(t-T_2)$. You simply add up the contributions because of LTI linearity. In general, you have $y(t) = \sum_{i=0}^{t} a_i h(t-i)$. That is the convolution formula. It also appears when you multiply two polynomials, $(a_0 + a_1 z + a_2 z^2 + \dots)(b_0 + b_1 z + b_2 z^2 + \dots) = \sum_{n=0}^\infty c_n z^n$ where $c_n = \sum_i a_i b_{n-i}$. That is why you tend to represent series as z-transforms. In this case you can simply multiply them and get the convolution in the background.
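The sum-of-shifted-responses picture can be checked directly. A minimal sketch in plain Python; the example sequences are made up:

```python
def conv(a, h):
    # Full discrete convolution: y[n] = sum_i a[i] * h[n - i].
    # Each input impulse a[i] launches a shifted, scaled copy of h,
    # and linearity lets us add all the copies up.
    y = [0] * (len(a) + len(h) - 1)
    for i, ai in enumerate(a):
        for j, hj in enumerate(h):
            y[i + j] += ai * hj
    return y

a = [1, 2, 3]        # input impulses a_i at times 0, 1, 2
h = [1, -1]          # impulse response h[n]
print(conv(a, h))    # [1, 1, 1, -3]

# Same numbers as the polynomial product (1 + 2z + 3z^2)(1 - z)
# = 1 + z + z^2 - 3z^3, illustrating the z-transform remark above.
```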
{ "domain": "dsp.stackexchange", "id": 3663, "tags": "convolution" }
How to Pick a Journal for Publishing a Research Work?
Question: I got my MS in EE way back in 1994 and have not presented at a conference or published in a journal before, but I had, and still have, an intense interest in the applied math that I learned. I have been working on my own research project for a few years, and I want to publish a paper on my work. What I did is develop an algorithm similar to a discrete Kalman filter, and it gives better estimates under certain conditions. I am looking for advice on which journal would be a good choice. It would take too long for me to learn LaTeX (is that what it's called), and I don't have the education needed to develop the sophisticated math that I see in many journals. Answer: Getting work published in a high-impact-factor journal is quite a slog. If you're starting from not having presented at a conference etc. before, that's going to be an even bigger ask. However, it's not impossible. For IEEE journals, short notes about signal processing topics can be sent to the IEEE Signal Processing Letters journal. However, starting there might be too much. What I'd suggest is to look at starting with a preprint server such as arXiv or an open-access journal such as PLOS ONE. Even before submitting there, I'd suggest posting a question here that might be relevant to the situations where the algorithm might be better than others, and also posting an answer with your algorithm. The aim with any of these submissions is to get feedback about your work and improve it or its presentation.
{ "domain": "dsp.stackexchange", "id": 3175, "tags": "estimation, self-study, research" }
Multipole expansion in cylindrical coordinates
Question: I am seeking the general solution for the Laplace equation in cylindrical coordinates or $$\nabla^2 \omega = 0. $$ In several texts, the general solution can be found via separation of variables and I get the general solution $$\omega = (A_0+B_0\theta)(C_0+D_0 \ln r) + \sum_{n = 1}^\infty (A_n\cos(\lambda_n\theta)+B_n\sin(\lambda_n\theta))(C_nr^{\lambda_n}+D_nr^{-\lambda_n})$$ In this general solution, most of the terms are represented by the exterior and interior multipole expansion except for $B_0D_0\theta\ln r$. So my first question is why does this term show up and why is it not included in the multipole expansion? Since the multipole expansion is an orthogonal basis shouldn't it cover all possible solutions? Another problem I have is that I have found that $$\omega = -\dfrac{2}{r} [A_{1L} \cos(\theta) + B_{1L} \sin(\theta) + C_{1L}(\theta \cos(\theta)- \sin(\theta) \ln r) + D_{1L}( \cos(\theta) \ln r + \theta \sin(\theta))]$$ is a solution to the Laplace equation. This was obtained by taking the Laplacian of a solution of $\psi$ where $\nabla^4 \psi = 0$. Specifically I see terms with $\dfrac{\ln r}{r}$ appear. Has this solution been discussed anywhere and how does it fit into the exterior/interior multipole expansion? EDIT: Modified equation to clearly group harmonic terms Answer: The expression you give is indeed the general solution for a harmonic function (i.e. $\nabla^2 f=0$) in two dimensions. The solution $f=\theta\ln(r)$ is usually omitted because it cannot be sustained as a periodic function over a $2\pi$ range in $\theta$. Moreover, even if you have a limited range in $\theta$, this term is singular at the origin, which reflects the fact that if you set two plates at an angle at different electrostatic potentials (say) then the solution will be singular because your boundary conditions are discontinuous. However, for regions which are also bounded in $r$ (i.e. 
$0\leq \theta\leq\theta_0<2\pi$, $r>r_0>0$) this term is a crucial part of the solution and it is trivial to construct boundary conditions that cannot be matched without it. If you have any books that claim otherwise -- i.e. that omit this term in situations which include a wedge with a limited range in $\theta$, and without appropriate boundary conditions to rule it out -- then they are wrong. Most resources I know don't fall into this category, but if you have specific examples then we can comment on the details for those. (Apologies for having missed some terms in a previous version. Take this as a learning opportunity: displaying a big sum of terms without explicitly indicating which terms are repeated can, and will, make people misread your work. Communication is a two-way process but you need to make your expressions as easy to read (or as hard to misread) as possible.) The functions \begin{align} \omega_1 = \frac{\cos(\theta) \ln r + \theta \sin(\theta)}{r} \quad\text{and}\quad \omega_2 = \frac{\theta \cos(\theta) - \sin(\theta) \ln r}{r} \end{align} are indeed harmonic. They are not included explicitly in the multipolar expansion because they are not separable - they cannot be expressed in the form $\omega=R(r) \Theta(\theta)$. (The first two functions, in $A_{1L}$ and $B_{1L}$, are explicitly included in it.) Since the multipolar expansion is a basis, the two functions above can always be expressed in terms of it - i.e. they can be cast as a multipolar series if so desired. Note, however, that for this function to be allowed, you need a limited range in $\theta$, of the form $\theta_0<\theta<\theta_1<2\pi+\theta_0$, or you will have a discontinuous function (or, at best, a discontinuous derivative). Depending on the exact situation, you will also need to work in a domain that's bounded from below in $r$, or the $\omega_i$ will have infinite energy. 
Both of these constraints obviously affect the details of the orthogonality of the multipolar components, so you'll need to work a bit harder to make the expansion work. I won't post that process here because it depends on exactly what domain you want, and it's on you to do the drudgework. If you get stuck you can ask it here, of course.
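That $\omega_1$ is harmonic can be verified numerically with a finite-difference polar Laplacian, $f_{rr}+f_r/r+f_{\theta\theta}/r^2$. A sketch in plain Python; the sample points and step size are arbitrary choices:

```python
import math

def w1(r, t):
    # omega_1 = (cos(t) * ln(r) + t * sin(t)) / r
    return (math.cos(t) * math.log(r) + t * math.sin(t)) / r

def polar_laplacian(f, r, t, h=1e-3):
    # f_rr + f_r / r + f_tt / r^2 by central differences
    f_rr = (f(r + h, t) - 2 * f(r, t) + f(r - h, t)) / h**2
    f_r  = (f(r + h, t) - f(r - h, t)) / (2 * h)
    f_tt = (f(r, t + h) - 2 * f(r, t) + f(r, t - h)) / h**2
    return f_rr + f_r / r + f_tt / r**2

print(polar_laplacian(w1, 1.7, 0.9))  # ~0 (truncation error only)
```

In fact $\omega_1$ and $\omega_2$ are the real and imaginary parts of the analytic function $\log(z)/z$, which is another way to see that both are harmonic away from the origin.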
{ "domain": "physics.stackexchange", "id": 25970, "tags": "potential, boundary-conditions, multipole-expansion" }
Zero charge density, yet non-zero current density?
Question: I am reading about retarded potentials in Griffiths' Introduction to Electrodynamics, and came across the particular case of wires. Griffiths takes wires to be electrically neutral, so the retarded scalar potential, whose integrand is proportional to the charge density, is zero. The retarded vector potential depends on the current density in the wire, which is non-zero. However, I recall that the current density is $\textbf{J} = \rho \textbf{v}$. Hence my question: how can we have a non-zero current density but a zero charge density? Are the $\rho$'s different? A constant current density in, say, the z direction together with zero charge density does not violate conservation of charge, but how do we reconcile this with the relationship between $\rho$ and $\textbf{J}$? Is the charge density found, for instance, in the Maxwell equation $\nabla \cdot \textbf{E} = \frac{\rho}{\epsilon_0}$ an 'overall' charge density which takes every charge into account, while the $\rho$ found in the expression for $\textbf{J}$ is a 'mobile' charge density? Answer: If the positive and negative charges are moving at different velocities, then their contributions to the current density will be different, even if their charge densities are equal and opposite. For example, in a normal wire, we can usually model the electrons as moving and the positive charges as stationary (though this is material-dependent and in particular doesn't hold for semiconductors). The charge density of the positive charges is $\rho$ and the charge density of the electrons is $-\rho$, so the total charge density is $\rho-\rho=0$. But the electrons are moving at velocity $-\vec{v}$ (where $\vec{v}$ points in the direction of the electric field) and the positive ions are effectively stationary, so the total current density is $\rho\vec{0}-\rho(-\vec{v})=\rho\vec{v}$, which is nonzero. 
You can add the current densities like this because Maxwell's equations are linear, and therefore obey the principle of superposition. For another example, suppose we somehow had a wire made out of electrons and positrons of equal charge density (neglecting annihilation or any of the more interesting physics that this would imply). When an electric field is applied to this wire, the positrons will move in the direction of the electric field and the electrons will move in the opposite direction. Since the electron and positron charge have the same absolute value, the magnitude of the force on each species will be the same, and they will move at the same speed. Once again, the charge density is zero, since the positrons have charge density $\rho$ and the electrons have charge density $-\rho$. For the current density, we define $\vec{v}$ to be pointing in the same direction as the electric field; then the positrons have charge density $\rho$ and are moving with velocity $\vec{v}$, and the electrons have charge density $-\rho$ and are moving with velocity $-\vec{v}$, so the total current density is $\rho\vec{v}-\rho(-\vec{v})=2\rho\vec{v}$. So even when the charge carriers move at the same speed, the current density can still be nonzero if their velocity vectors are pointing in different directions.
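The two-species bookkeeping can be written out numerically (illustrative numbers only, not taken from the answer):

```python
# Two charge species in an ordinary wire, one spatial dimension:
rho = 1.0        # positive-ion charge density (C/m^3), illustrative
v_pos = 0.0      # ions are stationary
v_el = -1e-4     # electron drift velocity (m/s), opposite to E

rho_total = rho + (-rho)          # charge densities cancel ...
J = rho * v_pos + (-rho) * v_el   # ... but current densities do not

print(rho_total)  # 0.0
print(J)          # 0.0001  (= rho * |v_el|)
```

The superposition of the per-species terms is exactly what the answer's $\rho\vec{0}-\rho(-\vec{v})=\rho\vec{v}$ expresses.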
{ "domain": "physics.stackexchange", "id": 51991, "tags": "electromagnetism, electric-current" }
Is every Lorentz invariant a Lorentz scalar?
Question: All examples of Lorentz-invariant quantities that I have come across seem to be scalars: rest mass, proper time, the spacetime interval, the dot product of two 4-vectors, etc. Another thing is that these are all index contractions. So, is there any Lorentz-invariant quantity which isn't a Lorentz scalar? (My guess is that there isn't: if the quantity isn't a scalar, it must have indices. Such a thing must be a tensor of non-zero rank. But a thing which is a tensor under Lorentz transformations will have its components change from frame to frame and therefore can't be an invariant. One loophole in this reasoning is the assumption that the indexed quantity is in fact a tensor of some rank. So, is it possible to have indexed quantities constructed from spacetime which aren't tensors?) Answer: "Scalar" and "Lorentz invariant" are synonyms in the context of Special and General Relativity. However, it is possible to have constant tensors whose components don't actually change when transformed, such as the Minkowski metric tensor $\eta_{\mu\nu}$ in Special Relativity. We don't call these "invariant". Some people call these tensors "isotropic"; others reserve this terminology for constant tensors in Riemannian spaces, such as the Kronecker delta $\delta_{ij}$, rather than those in semi-Riemannian spaces. As for indexed quantities constructed from spacetime which aren't tensors: the coordinates $x^\mu$ have one index but don't constitute a tensor in curved spacetime (but $dx^\mu$ is a tensor). The Lorentz transformations $\Lambda^\mu{}_\nu$ have two indices but aren't tensors. Christoffel symbols have three indices but aren't tensors. "Scalar" means "rank-0 tensor". "Vector" means "rank-1 tensor". Tensors are always defined with respect to a particular transformation group, so you can have rotational tensors, Lorentz tensors, etc.
{ "domain": "physics.stackexchange", "id": 62010, "tags": "special-relativity, tensor-calculus, lorentz-symmetry, covariance, invariants" }
Is random initialization of the weights the only choice to break the symmetry?
Question: My knowledge: Suppose you have a layer that is fully connected, and that each neuron performs an operation like a = g(w^T * x + b), where a is the output of the neuron, x the input, g our generic activation function, and finally w and b our parameters. If both w and b are initialized with all elements equal to each other, then a is equal for each unit of that layer. This means that we have symmetry, so at each iteration of whichever algorithm we choose to update our parameters, they will update in the same way, and there is no need for multiple units since they all behave as a single one. In order to break the symmetry, we could randomly initialize the matrix w and initialize b to zero (this is the setup that I've seen most often). This way a is different for each unit, so that all neurons behave differently. Of course, randomly initializing both w and b would also be okay, even if not necessary. Question: Is randomly initializing w the only choice? Could we randomly initialize b instead of w in order to break the symmetry? Does the answer depend on the choice of the activation function and/or the cost function? My thinking is that we could break the symmetry by randomly initializing b, since in this way a would be different for each unit and, since in the backward propagation the derivatives of both w and b depend on a (at least this should be true for all the activation functions that I have seen so far), each unit would behave differently. Obviously, this is only a thought, and I'm not sure that it is absolutely true. Answer: Randomising just b sort of works, but setting w to all zeros causes severe problems with vanishing gradients, especially at the start of learning. Using backpropagation, the gradient at the outputs of a layer L involves a sum multiplying the gradient of the inputs to layer L+1 by the weights (and not the biases) between the layers. This will be zero if the weights are all zero. 
A gradient of zero at L's output will further cause all earlier layers (L-1, L-2, etc., all the way back to layer 1) to receive zero gradients, and thus not update either weights or biases at the update step. So the first time you run an update, it will only affect the last layer. The next time, it will affect the two layers closest to the output (but only marginally at the penultimate layer), and so on. A related issue is that a layer whose weights are all zero, or all the same, maps all inputs, no matter how they vary, onto the same output. This too can adversely affect the gradient signal that you are using to drive learning - for a balanced data set you have a good chance of starting learning close to a local minimum in the cost function. For deep networks especially, to fight vanishing (or exploding) gradients, you should initialise weights from a distribution whose expected magnitude (after multiplying the inputs) and gradient magnitude neither vanish nor explode. Analysis of the values that work best in deep networks is how Xavier/Glorot initialisation was discovered. Without careful initialisation along these lines, deep networks take much longer to learn, or in the worst cases never recover from a poor start and fail to learn effectively. Potentially, to avoid these problems, you could try to find a good non-zero fixed value for the weights, as an alternative to Xavier initialisation, along with a good magnitude/distribution for bias initialisation. These would both vary according to the size of the layer and possibly with the activation function. However, I would suspect this could suffer from other issues, such as sampling bias: there are more weights than biases, so you get a better fit to the desired aggregate behaviour by setting all the weight values randomly than you would by setting biases randomly.
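The vanishing-gradient argument can be demonstrated on a toy network. A sketch with NumPy; the architecture, loss, and random biases are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network: x -> h = tanh(W1 x + b1) -> y = W2 h + b2,
# with zero weights and random biases (the scheme discussed above).
W1 = np.zeros((4, 3)); b1 = rng.normal(size=4)
W2 = np.zeros((2, 4)); b2 = rng.normal(size=2)

x = rng.normal(size=3)
h = np.tanh(W1 @ x + b1)
y = W2 @ h + b2

# Backpropagate a squared-error loss gradient dL/dy:
dy = y - np.ones(2)                  # arbitrary target of ones
dW2 = np.outer(dy, h)                # nonzero: the last layer does update
dh = W2.T @ dy                       # zero! gradients cannot pass W2 = 0
dW1 = np.outer(dh * (1 - h**2), x)   # therefore zero as well

print(np.abs(dW2).max() > 0)  # True
print(np.abs(dW1).max())      # 0.0
```

Only `dW2` is nonzero on the first update, exactly as described: the gradient reaching earlier layers is a product with the (all-zero) weights, not the biases.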
{ "domain": "ai.stackexchange", "id": 568, "tags": "neural-networks, training, weights-initialization" }
OFDM Simulation process
Question: I'm trying to understand OFDM by making a simulation. Are these steps correct?

1. Generate M random complex QAM symbols, e.g. (1+j, 1-j, -1-j, 1-j, ...).
2. Split my M samples into N arrays of size 2048.
3. Take the IFFT of each array individually.
4. Add a cyclic prefix to each array.
5. Make N square-shaped signals by oversampling my IFFT arrays.
6. Pulse-shape the individual signals with a raised cosine.
7. Mix up the signals with frequency separation = 1/symbol_period by multiplying them by $e^{j2\pi ft}$.

Answer: Steps 1-4 (generate M random complex QAM symbols, split them into 2048-sized arrays, take the IFFT of each array individually, add a cyclic prefix to each array): Exactly! In real-world OFDM systems, not all carriers are used; typically, you leave out one or two DC carriers and leave a few guard carriers at the band edges, so you'd have something like "split my M samples into 1950-sized chunks, and map them to the elements of a 2048-vector, leaving the center ones and the edge ones free". The edge guard carriers make the fifth step (make N square-shaped signals by oversampling my IFFT arrays) unnecessary, so scratch that, usually. Step 6 (pulse-shape the individual signals with a raised cosine): Don't do that! That's wrong. This is OFDM, and it's only used when you encounter multipath channels; matched filtering does nothing good here; it only makes things worse by making the channel more complicated and reducing average power. 
Remember why you'd use a raised cosine in single-carrier systems: you want things to be nicely pulse-shaped, which simply doesn't make sense in the context of OFDM; you want matched filtering to maximize SNR in an AWGN channel, but OFDM is never used in channels that are purely AWGN; and after convolution with the conjugate pulse-shaping filter in the receiver you might have an easier time recovering timing, but that doesn't apply to OFDM either, because in the wideband-channel scenario timing recovery must be done by other means (keywords: Schmidl-Cox, autocorrelation timing recovery methods). Step 7 (mix up the signals with frequency separation = 1/symbol_period by multiplying them by $e^{j2\pi ft}$): No; your subcarrier separation is what the IFFT provides; you just mix the baseband signal generated by step 4 up to bandpass with your $e^{j2\pi ft}$. And since you're only simulating things, and we know that bandpass-region channels have an equivalent representation in baseband (in fact, it is only because we know that works that we can use OFDM at all): don't do this at all, but work with the complex baseband signal directly.
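The agreed-on steps 1-4 plus the matching baseband receiver can be sketched in a few lines (NumPy; the QPSK mapping, FFT size, and cyclic-prefix length are illustrative choices, and the channel here is ideal):

```python
import numpy as np

rng = np.random.default_rng(1)
N, CP = 2048, 144   # FFT size and cyclic-prefix length (assumed values)

# Step 1: random QPSK symbols, one per subcarrier
bits = rng.integers(0, 2, size=(N, 2))
syms = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Steps 2-4: IFFT to the time domain, then prepend the cyclic prefix
tx = np.fft.ifft(syms)
tx_cp = np.concatenate([tx[-CP:], tx])

# Receiver (ideal channel): strip the CP, FFT back to subcarriers
rx = np.fft.fft(tx_cp[CP:])

print(np.allclose(rx, syms))  # True: subcarriers recovered exactly
```

Everything stays in complex baseband, as the answer recommends; a multipath channel would be simulated by convolving `tx_cp` with a short impulse response and equalizing per subcarrier after the FFT.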
{ "domain": "dsp.stackexchange", "id": 6876, "tags": "fft, digital-communications, ofdm, ifft, numpy" }
How does the runtime of the Ukkonen's algorithm depend on the alphabet size?
Question: I am concerned with the question of the asymptotic running time of Ukkonen's algorithm, perhaps the most popular algorithm for constructing suffix trees in linear (?) time. Here is a citation from the book "Algorithms on strings, trees and sequences" by Dan Gusfield (section 6.5.1): "... the Aho-Corasick, Weiner, Ukkonen and McCreight algorithms all either require $\Theta(m|\Sigma|)$ space, or the $O(m)$ time bound should be replaced with the minimum of $O(m \log m)$ and $O(m \log|\Sigma|)$". [$m$ is the string length and $|\Sigma|$ is the size of the alphabet] I don't understand why that is true. Space: well, in case we represent branches out of the nodes using arrays of size $\Theta(|\Sigma|)$, then, indeed, we end up with $\Theta(m|\Sigma|)$ space usage. However, as far as I can see, it is also possible to store the branches using hash tables (say, dictionaries in Python). We would then have only $\Theta(m)$ pointers stored in all hash tables altogether (since there are $\Theta(m)$ edges in the tree), while still being able to access the children nodes in $O(1)$ time, as fast as when using arrays. Time: as mentioned above, using hash tables allows us to access the outgoing branches of any node in $O(1)$ time. Since Ukkonen's algorithm requires $O(m)$ operations (including accessing children nodes), the overall running time would then also be $O(m)$. I would be very grateful to you for any hints on why I am wrong in my conclusions and why Gusfield is right about the dependence of Ukkonen's algorithm on the alphabet. Answer: As @jogojapan mentions in the comments, hashing generally is only amortized $O(1)$, so you would only get amortized bounds for the algorithm. However, I think you do not even get these: In order to get amortized $O(1)$ hashing, the hash tables have to be of size $\Omega(|\Sigma|)$, so you still have $\Theta(m|\Sigma|)$ space (and the same time requirement for initialization).
What's more, in practice the time for setting up all these hash tables will be much higher than the time to set up arrays. You might fare better using a global hash table that is indexed with (node, character)-pairs, but at least the "only amortized" argument will remain.
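The global hash table indexed with (node, character)-pairs can be sketched in a few lines (a hypothetical minimal structure for the child-lookup part only, not a full Ukkonen implementation):

```python
# One dict for the whole tree: key = (node_id, character), value = child node id.
# Total space is Theta(m) entries (one per edge), but lookups are only
# *amortized* O(1), which is the answer's point about the time bound.
children = {}

def add_child(node, ch, child):
    children[(node, ch)] = child

def get_child(node, ch):
    return children.get((node, ch))  # None if no outgoing edge labelled ch

add_child(0, 'a', 1)
add_child(0, 'b', 2)
add_child(1, 'a', 3)
```

This avoids allocating one table per node, at the cost of larger keys; the amortized-only caveat applies either way.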
{ "domain": "cs.stackexchange", "id": 2816, "tags": "algorithms, data-structures, algorithm-analysis, strings" }
Implementing a simple queue in C
Question: I am learning C and I have some trouble with pointers. I decided to create a queue to practice a bit. The program works as intended; however, I want to know some good practices and suggestions. Here is queue.c: #include <stdlib.h> #include "bank_sim.h" #include "queue.h" void initiliaze(Queue *queue) { queue->head = NULL; queue->tail = NULL; } void enqueue(Queue *queue, Customer *customer) { if (queue->head == NULL) { queue->head = customer; queue->tail = customer; queue->customer_cnt++; } else { queue->tail->next = customer; queue->tail = customer; queue->customer_cnt++; } } Customer *dequeue(Queue *queue) { if (queue->head == NULL) { return NULL; } else { Customer *head_customer = queue->head; queue->head = queue->head->next; head_customer->next = NULL; queue->customer_cnt--; return head_customer; } } queue.h: #ifndef QUEUE_H #define QUEUE_H #include "bank_sim.h" typedef struct { Customer *head; Customer *tail; int customer_cnt; } Queue; void initiliaze(Queue *queue); void enqueue(Queue *queue, Customer *customer); Customer *dequeue(Queue *queue); # endif Finally bank_sim.h: #ifndef BANK_SIM_H #define BANK_SIM_H typedef struct Customer Customer; struct Customer { int id; int service_time; struct Customer *next; }; typedef struct { Customer *customer; int occupied; } Bank_Teller; #endif Comments regarding anything are welcome, including whether my structure for Customer and Queue can be improved. Answer: One major thing: a queue consists of nodes, and the nodes contain the data. Here, your data is Customer. The data should not be coupled with the Queue, i.e. think of it this way: typedef struct Node { struct Node *next; /* navigation pointer */ Customer customer; /* data */ } Node; Considering the current implementation, here are a few points to go with: initiliaze() misses initializing customer_cnt = 0;. Where are you allocating the Queue instance? Instead, have a getQueueNode function which allocates memory for the node and initializes it as well.
Allocation in one place helps with, e.g., easy debugging when malloc returns NULL due to insufficient memory. Extract the common code out of the if-else block in enqueue(). Keep related code together, i.e. in dequeue() the operations on head_customer could be consecutive.
{ "domain": "codereview.stackexchange", "id": 10947, "tags": "beginner, c, queue" }
How to detect anomalies in each feature - time series
Question: I have a dataset with 5 features corresponding to 5 sensors that measure the state of an accelerator every three seconds. It is structured as follows: Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4 | Sensor 5 | Label 1.5 1.1 0.8 1.2 1.2 0 1.2 1.4 1.4 1.4 1.1 0 1.2 1.1 1.2 1.3 1.5 0 The label indicates if the time series is anomalous (=1) or not (=0). I have an anomaly detection task, and the frameworks I've chosen (1, 2) give me as output an array of length 3 with the predicted labels: (0, 1, 0). I usually worked with anomaly detection frameworks which gave me a threshold, and I could easily mark the values above it as anomalies. In this specific case, with this array of length 3, is it right to assume that I could rewrite the dataset as follows? (True = anomaly, False = normal) Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4 | Sensor 5 | False False False False False True True True True True False False False False False So, instead of marking one value at a time, does it directly mark the whole time series as anomalous? Answer: I believe the assumption you've made is incorrect (the whole row being anomalous or not). To explain this thoroughly, you would need to know which algorithm you're using in order to detect whether the final label is 0 or 1. For anomaly detection, you can approach the problem as a Supervised (pretty much a classification problem), Unsupervised or Semi-Supervised one. Judging from your data, you've chosen the unsupervised approach. Unsupervised Anomaly Detection Unsupervised anomaly detection has a plethora of algorithm subsets: Distance Based, Statistical, Classification, Angle Based, DBSCAN, Neural Networks and more. Under those subsets lie algorithms that can help detect anomalous values, such as unique NN architectures etc. Some algorithms (most of the ones I've used) have some form of dimensionality reduction, such as PCA.
Due to that fact, it's a lot more difficult to grasp whether a specific column is anomalous on its own (e.g. Sensor 1). A better way to wrap your head around how/why a data point is tagged anomalous or not would be to plot a t-SNE graph. If you're interested in me editing my answer to create an anomaly detection model that tags data points as anomalous or not (along with the anomaly score, for you to be able to set your personal threshold) and plotting a t-SNE graph, let me know.
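To make the row-level vs. per-sensor distinction concrete, here is a toy stdlib-only sketch using a plain per-column z-score rule (this is not one of the frameworks from the question, and the numbers are made up): a framework that only emits row labels would flag the last reading as a whole, while a per-sensor view shows that only one sensor is actually extreme.

```python
from statistics import mean, stdev

def zscore_flags(rows, threshold=2.0):
    """Flag each cell whose value is > threshold sample std devs from its column mean."""
    cols = list(zip(*rows))
    stats = [(mean(c), stdev(c)) for c in cols]
    return [[abs(v - m) > threshold * s for v, (m, s) in zip(row, stats)]
            for row in rows]

# Toy data: 3 sensors, 8 readings; only sensor 3 spikes in the last reading.
readings = [
    [1.5, 1.1, 1.2],
    [1.2, 1.4, 1.1],
    [1.2, 1.2, 1.2],
    [1.4, 1.2, 1.3],
    [1.3, 1.3, 1.1],
    [1.1, 1.2, 1.2],
    [1.2, 1.5, 1.3],
    [1.3, 1.3, 9.9],   # row would be labelled 1, but only the third cell is extreme
]
flags = zscore_flags(readings)
```

Real unsupervised detectors (especially after PCA or a neural embedding) lose exactly this per-column attribution, which is why the answer suggests visualizing with t-SNE instead.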
{ "domain": "datascience.stackexchange", "id": 10213, "tags": "time-series, dataset, anomaly-detection, dataframe, anomaly" }
What is evidence for an irreversible change?
Question: Knowing something about thermodynamics and reactions, I do understand how it can be shown that a change is reversible. But irreversible? Why can't it be that a change that was deemed irreversible thousands of years ago could, via new techniques perhaps developed by physicists, or via changes, processes and reactions from some other part of the world or space, be shown in the future to be reversible? For instance, with diseases: as science progressed, we could reverse conditions that were previously said to be irreversible. Answer: I think what you are essentially asking is, "Can we violate or circumvent the second law of thermodynamics?" The answer is no, based on all the physics we know so far and the observations we have made so far. When you freeze melted ice back into ice, you are creating irreversible changes elsewhere in the universe (e.g., outside your refrigerator). That said, there is a deeper conversation on whether the second law is fundamental or only a measure of our ignorance about a complex system, a conversation that is not philosophically well resolved. You can search for work by Ilya Prigogine, or about the arrow of time. But setting philosophical questions aside, in the real, known world we cannot violate or circumvent the second law of thermodynamics.
{ "domain": "physics.stackexchange", "id": 6301, "tags": "thermodynamics, reversibility" }
Is it possible to prove that the curl of a gradient equals zero in this way?
Question: If $(\nabla\times\nabla\Phi)_i = \epsilon_{ijk}\partial_j\partial_k\Phi$, where Einstein summation is being used to find the $i$th component... Using Clairaut's theorem $\partial_{i}\partial_{j}\Phi = \partial_{j}\partial_{i}\Phi$, so $$\epsilon_{ijk}\partial_j\partial_k\Phi = \epsilon_{ijk}\partial_k\partial_j\Phi = -\epsilon_{ikj}\partial_k\partial_j\Phi$$ Now here is where I am confused. If the positive is equal to the negative, the value must be zero. Have I adequately shown this? Answer: Yes, that's fine. You could write out each component individually if you want to assure yourself. A more-intuitive argument would be to prove that line integrals of gradients are path-independent, and therefore that the circulation of a gradient around any closed loop is zero. The curl is a limit of such a circulation, and so the curl must be zero.
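The "write out each component" route can also be done mechanically: contract the Levi-Civita symbol with an arbitrary symmetric matrix H (standing in for the matrix of second partials $\partial_j\partial_k\Phi$, which is symmetric by Clairaut's theorem) and the zero vector falls out. A small sketch:

```python
def levi_civita(i, j, k):
    """epsilon_{ijk}: +1 for even permutations of (0,1,2), -1 for odd, 0 otherwise."""
    return (i - j) * (j - k) * (k - i) // 2

# Any symmetric H_jk models the mixed partials d_j d_k Phi.
H = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 5.0],
     [3.0, 5.0, 6.0]]

curl = [sum(levi_civita(i, j, k) * H[j][k]
            for j in range(3) for k in range(3))
        for i in range(3)]
print(curl)  # [0.0, 0.0, 0.0]
```

Each component pairs an antisymmetric factor with a symmetric one (e.g. the i = 0 component is H[1][2] - H[2][1]), which is exactly the "positive equals negative" cancellation in the question.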
{ "domain": "physics.stackexchange", "id": 4910, "tags": "homework-and-exercises, vectors, vector-fields" }
Escape velocity of a rocket standing on Ganymede (Moon of Jupiter)
Question: I want to calculate the escape velocity of a rocket standing on the surface of Ganymede (moon of Jupiter) and trying to leave Ganymede. My thinking was, the kinetic energy $E_{\text{KIN}}$ must be equal to or bigger than the potential energy $E_{\text{POT}}$. So $E_{\text{KIN}}\geq E_{\text{POT}}$ Assuming $m$ is the mass of the rocket and $M_G$ is the mass of the moon Ganymede and $R_G$ is the radius of the moon and $v$ is the velocity of the rocket and $G$ is the gravitational constant, we can set $$G \frac{mM_G}{R^2_G}=\frac{mv^2}{2R_G}$$ which results in $$\sqrt{2G \frac{M_G}{R_G}}=v$$ The first question that I have is: Is $G$ a constant only for Earth, or does it apply to all other planets, too? (If it does not apply, how do I calculate it?) If you have a look at the following picture you can see that the rocket does not only have to overcome Ganymede but Jupiter itself, too: So I built the formula this way: $$G \frac{M_J}{R^2} + G \frac{M_G}{R^2_G}=\frac{v^2}{2R_G}$$ where $M_J$ is the mass of Jupiter and $R$ is the distance from Jupiter's center to the starting point of the rocket. Does this sound right? Answer: The answer is not correct, but only because the potential energy and kinetic energy formulas are written wrong. To find the escape velocity, the change in kinetic plus potential energy of the rocket is set to zero. If the rocket is going at escape velocity, it isn't moving at the end, and it has no potential energy (in the usual convention where the potential energy between point gravitating masses vanishes at infinity). This gives the equation: $$ {mv^2\over 2} -{G m M_J\over R_{JG}} - {G m M_G\over R_G} = 0 $$ where the left side involves $R_{JG}$, the distance between Jupiter and Ganymede, $R_G$, the radius of Ganymede, $M_J$, the mass of Jupiter, $M_G$, the mass of Ganymede, and $m$, the mass of the rocket (which divides out of both sides).
There are also small corrections depending on the rotational velocity of Ganymede and whether you are starting on the Jupiter side or the far side of the moon. Solving for v gives your equation. I don't know why you divided by R factors; the potential energy is just G times the product of the masses divided by the distance. G is universal for all matter in the universe: it is a universal constant which is the same on Earth, on Jupiter, or anywhere else. You can think of it as a constant that fixes the correct unit of mass for our universe as the Planck mass, once you first set the speed of light and Planck's constant (divided by $2\pi$) to one. The answer for the escape velocity is $$ v= \sqrt{{2 GM_J \over R_{JG}} + {2 GM_G\over R_G} } $$ EDIT: stupid omission I left out something non-negligible, which is that Ganymede is orbiting Jupiter! So it has a large velocity already, enough to compensate for half the negative potential energy of gravity. The rocket, if it takes off in the optimal direction, going along with the orbital direction of Ganymede, will only need to increase its velocity from the initial, already large, orbital velocity up to the escape velocity. Relative to Jupiter it needs to reach the total speed $v$ from above, not acquire all of $v$ relative to Ganymede's surface. The starting velocity of the rocket is equal to Ganymede's orbital velocity around Jupiter, $v_i$, which is found by making the gravitational force the centripetal force: $$ v_i = \sqrt{GM_J\over R_{JG}} $$ And the correct answer for the escape velocity that a rocket would need to acquire relative to Ganymede is $v-v_i$. EDIT: second G? Added a lost G constant :/
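Plugging numbers into the formulas above gives a feel for the magnitudes involved (the constants below are rounded textbook reference values, not from the question):

```python
from math import sqrt

G    = 6.674e-11   # m^3 kg^-1 s^-2, universal gravitational constant
M_G  = 1.482e23    # kg, mass of Ganymede
R_G  = 2.634e6     # m, radius of Ganymede
M_J  = 1.898e27    # kg, mass of Jupiter
R_JG = 1.070e9     # m, Jupiter-Ganymede distance

v_ganymede_only = sqrt(2 * G * M_G / R_G)                 # escape from Ganymede alone
v_escape        = sqrt(2 * G * M_J / R_JG + 2 * G * M_G / R_G)
v_orbital       = sqrt(G * M_J / R_JG)                    # Ganymede's orbital speed
delta_v         = v_escape - v_orbital                    # what the rocket must add
```

With these values, escaping Ganymede alone takes about 2.7 km/s, escaping the combined Jupiter + Ganymede potential about 15.6 km/s, and since Ganymede already orbits at roughly 10.9 km/s, the rocket only needs to add on the order of 4.8 km/s in the optimal direction.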
{ "domain": "physics.stackexchange", "id": 1918, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, rocket-science, escape-velocity" }
How to see intuitively that $\kappa_T$ is constant for solids?
Question: In thermodynamics we define $$\kappa_T = -\dfrac{1}{V}\left(\dfrac{\partial V}{\partial p}\right)_T.$$ By this definition, if we are treating $V$ as a function of $p$ and $T$, then $\kappa_T$ is also a function of $p$ (since we are keeping $T$ fixed); thus it can depend on pressure and temperature. I've read, though, that $\kappa_T$ is nearly constant for a solid, which allows us to treat it as a constant in whatever we have to do. My question here is: how can we see conceptually that $\kappa_T$ is constant and doesn't depend on the pressure? Obviously there are other ways to see this: I believe (although I haven't tried yet) that using Statistical Mechanics we could deduce $\kappa_T$ in the same way as we get the specific heat of solids, considering the solid as a lattice of harmonic oscillators. This would be a mathematical deduction which could show why $\kappa_T$ is nearly constant. Another way would be experimental. We could perform experiments and, looking at the data, conclude that $\kappa_T$ is nearly constant. But is there any other, conceptual way to infer that $\kappa_T$ is nearly constant? Is there any intuitive point of view here? Answer: The force between atoms in a solid is going to look something like: This is obviously not linear, but for small displacements $\delta r$ around the equilibrium position we can expand the force as a polynomial: $$ F(r_0-\delta r) = A\delta r + \mathcal O (\delta r^2) $$ then drop the terms in $\delta r^2$ on the grounds that they are small. So for sufficiently small displacements we can assume the displacement is linear in the applied force, which will give us a constant compressibility. And most solids and liquids have a very low compressibility, so under most circumstances the displacements of the atoms $\delta r$ are indeed very small. That's why the compressibility is roughly constant. Once you apply enough pressure to move outside the linear area, the compressibility cannot be assumed constant.
For example, a quick Google search found this data for the variation of the compressibility of water with pressure:
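The "linear for small displacements" argument can be illustrated numerically with a Lennard-Jones force law (reduced units with sigma = epsilon = 1; an illustrative model, not the water data referred to above): near equilibrium the force tracks the linear term closely, while large compressions deviate wildly.

```python
def lj_force(r):
    """Force -dV/dr for the Lennard-Jones potential V = 4(r^-12 - r^-6)."""
    return 48.0 * r**-13 - 24.0 * r**-7

r0 = 2.0 ** (1.0 / 6.0)       # equilibrium separation: lj_force(r0) == 0
h = 1e-6
k = (lj_force(r0 - h) - lj_force(r0 + h)) / (2 * h)   # stiffness A = -F'(r0)

def nonlinearity(dr):
    """Relative deviation of the true force from the linear model k * dr."""
    return abs(lj_force(r0 - dr) - k * dr) / (k * dr)

# Tiny compression: essentially Hookean.  Large compression: far from linear.
small, large = nonlinearity(0.001 * r0), nonlinearity(0.2 * r0)
```

For a 0.1% compression the force is within roughly a percent of linear (hence a roughly constant compressibility), while at 20% compression the linear model is off by an order of magnitude.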
{ "domain": "physics.stackexchange", "id": 29500, "tags": "thermodynamics" }
How does class_weight work in Decision Tree?
Question: I am interested in cost-sensitive learning, and I am trying to understand how class_weight in a DecisionTree works in terms of math. I read in a lot of articles that there are a lot of cost-sensitive decision tree algorithms. So what exactly does class_weight do in a Decision Tree? Answer: It is used, for example, when classes are imbalanced, so different weights are assigned to different classes, instead of equal ones. Another case is when some class is more significant than others, so the loss wrt this class counts more. The class_weight parameter (e.g. for decision trees) works by giving different weights to samples of different classes (doc: https://scikit-learn.org/stable/modules/generated/sklearn.utils.class_weight.compute_class_weight.html, src: https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/utils/class_weight.py#L11), which are then used to weight each sample's contribution accordingly. Note that class_weight can be used in different ways depending on the algorithm and model used, i.e. whatever makes sense for each algorithm. Mathematically, it is usually a simple multiplication of the sample's value in some loss function. Related: How does the class_weight parameter in scikit-learn work?. For CARTs with the Gini criterion: The original Gini impurity is defined as: $${\displaystyle \operatorname {I} _{G}(p)=\sum _{i=1}^{J}\left(p_{i}\sum _{k\neq i}p_{k}\right)=\sum _{i=1}^{J}p_{i}(1-p_{i})=1-\sum _{i=1}^{J}{p_{i}}^{2}}$$ If classes are assigned weights $w_i$ then the weighted Gini impurity is computed as follows: the weight of all the observations in a potential child node, $c$, is $$t_c = \sum_i w_i * n_i$$ where $n_i$ is the number of observations of class $i$ in $c$, and $w_i$ is the weight assigned to class $i$. The impurity of child node $c$ is then $$i_c = 1 - \sum_i (\frac{w_i * n_i}{t_c})^2$$ where $n_i$ is again the number of observations of class $i$ in the node, $w_i$ is the weight assigned to the class and $t_c$ is as calculated previously.
The impurity of the entire potential split is then $$\sum_c \frac{t_c}{t_p} * i_c$$ where $t_c$ and $i_c$ are as calculated previously, and $t_p$ is the total weight of all observations in the parent node that is being split. Reference: How is the Weighted Gini Criterion defined?
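The weighted-impurity formulas above translate directly into a few lines of Python (a sketch of the math, not scikit-learn's actual Cython implementation):

```python
def weighted_gini(counts, weights):
    """Impurity i_c = 1 - sum((w_i * n_i / t_c)^2) of one node."""
    t_c = sum(w * n for w, n in zip(weights, counts))
    return 1.0 - sum((w * n / t_c) ** 2 for w, n in zip(weights, counts))

def split_impurity(children, weights):
    """Weighted average impurity sum(t_c / t_p * i_c) over the child nodes."""
    totals = [sum(w * n for w, n in zip(weights, counts)) for counts in children]
    t_p = sum(totals)
    return sum(t / t_p * weighted_gini(c, weights)
               for t, c in zip(totals, children))

# With equal weights this reduces to the ordinary Gini impurity:
print(weighted_gini([2, 2], [1, 1]))   # 0.5
# A 9:1 node looks nearly pure with equal weights (~0.18), but
# upweighting the minority class by 9 makes it look balanced again:
print(weighted_gini([9, 1], [1, 9]))   # 0.5
```

So class_weight changes which splits look attractive: a split that isolates minority-class samples reduces the weighted impurity more than the unweighted criterion would suggest.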
{ "domain": "datascience.stackexchange", "id": 9278, "tags": "python, classification, scikit-learn, decision-trees, class-imbalance" }
Relativistic effects in element 137 (Feinmanium) and above
Question: Here I saw that if an element above atomic number 137 is to exist, it must have an electron speed greater than the speed of light. My question is: has this calculation been done keeping Einstein's relativity in mind? (I am just asking; I have not done this calculation.) Answer: Yes, the value 137 occurs even when considering Einstein's theory of special relativity. 137 comes from considering the Sommerfeld or Dirac relativistic theories, when the nucleus is modeled as a point. See equation 1 of A new method for solving the Z > 137 problem and for determination of energy levels of hydrogen-like atoms
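The point-nucleus result referred to is the Dirac (or Sommerfeld) ground-state energy, which for a hydrogen-like atom scales as $E_{1s} = mc^2\sqrt{1-(Z\alpha)^2}$; the square root turns imaginary once $Z\alpha > 1$, i.e. just above Z = 137. A quick numeric check (using the CODATA value of the fine-structure constant):

```python
from math import sqrt

ALPHA = 7.2973525693e-3          # fine-structure constant, ~1/137.036

def dirac_1s_energy_ratio(Z):
    """E_1s / (m c^2) for a point nucleus, or None once Z*alpha > 1."""
    x = 1.0 - (Z * ALPHA) ** 2
    return sqrt(x) if x >= 0 else None   # None: the formula breaks down

print(dirac_1s_energy_ratio(1))     # ~0.99997 for hydrogen
print(dirac_1s_energy_ratio(137))   # ~0.023: still real, but barely
print(dirac_1s_energy_ratio(138))   # None: the "Z > 137 problem"
```

A finite-size nucleus softens the potential and pushes the breakdown to higher Z, which is what the cited paper addresses.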
{ "domain": "chemistry.stackexchange", "id": 4635, "tags": "elements, periodic-table" }
Isolating testable portions in rendered html, without imposing on content or formatting
Question: I'm unit testing some webpages and I'm trying to figure out how to best isolate the portions that need to be tested. There are two goals: Don't impose on the webpage's content or format, and be as clear as possible to the webpage editors what's going on, so they don't inadvertently break the tests. I've come up with something that works, but I wondering if I'm reinventing the wheel, or if there's just a better/more elegant/more standard way of doing this. Below is how I've documented it: Take this (Django) html template: <!DOCTYPE html> <html lang="en"> <head> <title>My user profile</title> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <meta name="viewport" content="width=device-width"/> </head> <body> <h1>My user profile</h1> <p>Back to <a href="{% url 'birth_year_stats' %}">user-stats</a></p> <ul> <li>Username: {{ user.username }}</li> <li>Email: {{ user.email }}</li> <li>Name: {{ user.first_name }} {{ user.last_name }}</li> <li>Year of birth: {{ user.profile.birth_year }}</li> </ul> </body> </html> Each of these Django variables (such as {{ first_name }}) will be tested, by comparing the value as stored in the database with a test user: test_user = {"username": "kermit", "password": "timrek", "first_name": "Kermit", "last_name": "The Frog", "birth_year": 1955, "email": "kermit@muppets.com"} which leads to a problem: How do you determine which string in the rendered template... <ul> <li>Username: kermit</li> <li>Email: kermit@muppets.com</li> <li>Name: Kermit The Frog</li> <li>Year of birth: 1955</li> </ul> ...refers to which variable? 
While we could test each with something like this Python code: page_content_bytes = client.get(reverse('user_profile')).content match = re.search(r'<li>Email: (.+?)</li>', str(page_content_bytes)) self.assertIsNotNone(match) self.assertEqual(test_user['email'], match.group(1)) This imposes specific content and formatting onto the webpage--<li>Email:...</li>, can never be altered without breaking the tests. In addition, there's nothing making it clear that this must never be touched, or the tests will fail. To solve this problem, I've come up with a specialized html comment that clearly isolates each testable portion. So this: <ul> <li>Username: {{ user.username }}</li> <li>Email: {{ user.email }}</li> <li>Name: {{ user.first_name }} {{ user.last_name }}</li> <li>Year of birth: {{ user.profile.birth_year }}</li> </ul> Becomes this: <ul> <li>Username: <!-- UNITRQD-start: username -->{{ user.username }}<!-- UNITRQD-end --></li> <li>Email: <!-- UNITRQD-start: email -->{{ user.email }}<!-- UNITRQD-end --></li> <li>Name: <!-- UNITRQD-start: first_name -->{{ user.first_name }}<!-- UNITRQD-end --> <!-- UNITRQD-start: last_name -->{{ user.last_name }}<!-- UNITRQD-end --></li> <li>Year of birth: <!-- UNITRQD-start: birth_year -->{{ user.profile.birth_year }}<!-- UNITRQD-end --></li> </ul> Which, when rendered, becomes this: <ul> <li>Username: <!-- UNITRQD-start: username -->kermit<!-- UNITRQD-end --></li> <li>Email: <!-- UNITRQD-start: email -->kermit@muppets.com<!-- UNITRQD-end --></li> <li>Name: <!-- UNITRQD-start: first_name -->Kermit<!-- UNITRQD-end --> <!-- UNITRQD-start: last_name -->The Frog<!-- UNITRQD-end --></li> <li>Year of birth: <!-- UNITRQD-start: birth_year -->1955<!-- UNITRQD-end --></li> </ul> Now it's clear exactly what's being tested, to both the testing code and to whomever edits the web page. 
The comments are admittedly distracting, but as long as each block is preserved--both the variable-value and all the text-and-spacing in the comments themselves--each of these blocks may be moved around freely, without affecting either the template or the tests. The above template therefore becomes the following: <!-- UNITRQD blocks These blocks contain text that is expected by the unit tests, including the text inside the comments. Changing them will cause the tests to fail. If one must be changed, it must first be communicated to the appropriate persons, so the tests can be adjusted. The benefit of these blocks is to minimize the portions in this template that must not be changed, but to make it exactly clear which piece refers to which test. Note that only dynamic information is contained in these testing blocks. No hard-coded, static, *publicly displayed* text or formatting is required by the tests. As long as each individual block is left untouched, they may be moved around freely. --> <!DOCTYPE html> <html lang="en"> <head> <title>My user profile</title> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <meta name="viewport" content="width=device-width"/> </head> <body> <!-- UNITRQD: user-profile page --> <h1>My user profile</h1> <p>Back to <a href="{% url 'birth_year_stats' %}">user-stats</a></p> <ul> <li>Username: <!-- UNITRQD-start: username -->{{ user.username }}<!-- UNITRQD-end --></li> <li>Email: <!-- UNITRQD-start: email -->{{ user.email }}<!-- UNITRQD-end --></li> <li>Name: <!-- UNITRQD-start: first_name -->{{ user.first_name }}<!-- UNITRQD-end --> <!-- UNITRQD-start: last_name -->{{ user.last_name }}<!-- UNITRQD-end --></li> <li>Year of birth: <!-- UNITRQD-start: birth_year -->{{ user.profile.birth_year }}<!-- UNITRQD-end --></li> </ul> </body> </html> And now you can test each field with, for example:
is 'reluctant'. Eliminating it would capture all text through the *final* '<!' in the document. """ page_content_bytes = client.get(reverse('user_profile')).content match = re.search(r': email -->(.+?)<!', str(page_content_bytes)) self.assertIsNotNone(match) self.assertEqual(test_user['email'], match.group(1)) Answer: It's an interesting solution, and an admirable effort. Unfortunately, this is over-engineering: it increases the complexity of your project, and may also weaken your security. The biggest problem with including markers like <!-- UNITRQD-start: username --> in the code is that it leaks internal implementation details on the user interface. This example might not reveal a lot: it's kind of obvious that you would have a table with a field "username". But as the practice spreads in the project, revealing more and more intimate details about your design, attackers might be able to piece together enough information to help them hack your site. You can work around this issue by implementing deployment scripts that wipe out these markers when releasing to production. But that's non-negligible extra work, and adds to the overall complexity of your project. This imposes specific content and formatting onto the webpage First of all, the ultimate target of your testing is not 100% clear. What are you really trying to test here? It seems you want to make sure that your elements like email are between specific invisible markers. But is this a good target for testing? It sounds a bit analogous to micro-managing. I'm not sure what these tests will really achieve. Have you considered the possibility of going too far? Isn't there a better target for testing? <li>Email:...</li>, can never be altered without breaking the tests. Well, what's wrong with breaking tests? If the code evolves, it's normal that tests have to evolve with it. Imagine that your project has 100% coverage: all execution paths are verified by unit tests.
At that point you cannot change anything without breaking tests. In addition, there's nothing making it clear that this must never be touched, or the tests will fail. In the code I write tests for, I don't normally add a friendly comment saying "hey, there are some unit tests for this code, check it out". So it seems normal to me that you never know in advance that touching something might break some tests. You find that out afterwards, when running the tests. Conclusions It's ok for tests to break. If you find that you spend too much time fixing tests, then maybe you're testing the wrong things. Take a step back, and find better targets to test, better identifying characteristics. If you want to test that nobody removed {{ user.username }} from the template, write a test for exactly that, on the template file itself before rendering. Normally there are no signs attached to code saying "beware, this stuff is unit tested". Instead, after an editor is done editing, they should run the test suites before they commit. Even better: make it automated in pre-commit hooks, or in a centralized continuous integration system. It's ok that a CI will catch errors a bit late, as long as you enforce a policy of cleaning up failed builds ASAP. For software that reports errors, it is generally recommended to be careful about what you include in error messages, to avoid revealing too much. The code under review doesn't report errors, but I think we can extend that concept to testing techniques as well: it's probably best to avoid revealing anything about the internal design, and your technique violates that principle, therefore it should be avoided. Keep your testing simple. (Keep everything simple, in general.) I think your clever technique can cause more problems than it solves. Remember Occam's Razor: simple solutions tend to be the best. The technique in question is not exactly simple.
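The "test the template file itself, before rendering" suggestion needs no markers at all. A minimal sketch (the helper name, variable list and file path are hypothetical, and it assumes the "{{ var }}" spacing used in this template):

```python
def missing_variables(source, variables):
    """Return the template variables that no longer appear in the source.

    Assumes the '{{ var }}' spacing convention used in the template above.
    """
    return [v for v in variables if "{{ " + v + " }}" not in source]

REQUIRED = [          # the dynamic fields the page must render
    "user.username", "user.email",
    "user.first_name", "user.last_name",
    "user.profile.birth_year",
]

# In a unit test you would read the template source (not the rendered page):
#   with open("templates/user_profile.html") as f:   # hypothetical path
#       self.assertEqual(missing_variables(f.read(), REQUIRED), [])
```

This fails as soon as an editor removes a variable, yet leaves them free to change any surrounding content and formatting, and it leaks nothing into the rendered page.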
{ "domain": "codereview.stackexchange", "id": 10104, "tags": "html, unit-testing, user-interface" }
How to log Clearpath Husky topic values in a text file
Question: At the moment, I control my Husky robot by using a wireless joypad as described in the Husky-ROS tutorial: roscore roslaunch clearpath_base example.launch roslaunch clearpath_teleop teleop.launch Since I need to log some parameters (such as left and right linear velocity, encoder ticks, etc.), I use "rostopic echo" and then redirect the output to a text file. For example, I do as follows: rostopic echo /clearpath/robots/default/cmd_vel >> log-for-vel.txt After this, I wrote a bash script that extracts only the desired values from the log files. Typically, each topic file is something like this: header: seq: 778 stamp: secs: 1380713547 nsecs: 429730892 frame_id: '' left_speed: 0.34 right_speed: 0.34 left_accel: 0.5 right_accel: 0.5 --- header: seq: 779 stamp: secs: 1380713544 nsecs: 735946893 frame_id: '' left_speed: 0.26 right_speed: 0.26 left_accel: 0.5 right_accel: 0.5 And I just need to extract the timestamp and the left and right velocity information. Since this is an intricate solution, I would like to know if there is any smarter solution to log these values in a text file without having to redirect the output for each topic. What's the best solution for my problem? I really hope you can help me! Thank you very much! Originally posted by Marcus Barnet on ROS Answers with karma: 287 on 2013-10-27 Post score: 0 Answer: You could use the rosbag tool. See: http://wiki.ros.org/ROS/Tutorials/Recording%20and%20playing%20back%20data Originally posted by BennyRe with karma: 2949 on 2013-10-27 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by PaulvdVorst on 2013-11-01: I also recommend using rosbag, which lets you record whatever topics you like, and then play them back later in a different ROS environment.
http://wiki.ros.org/rosbag You can then use rosbag to save only the values you want in a TXT file; see some of the responses to this question for further instructions: http://answers.ros.org/question/9102/how-to-extract-data-from-bag/?answer=13122 I hope this helps! Let me know if you need any more information :) Paul -Clearpath Robotics | Robot Whisperer
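If you do stay with the rostopic echo approach, the extraction step need not be an intricate bash script: a few lines of stdlib Python can reduce the echoed blocks to (timestamp, left_speed, right_speed) rows. This is a sketch that parses the echoed text (field names taken from the sample output in the question; it is not a ROS API):

```python
import re

FIELDS = ("secs", "nsecs", "left_speed", "right_speed")

def extract_rows(echo_text):
    """Reduce 'rostopic echo' output to (timestamp, left_speed, right_speed) rows."""
    rows = []
    for block in echo_text.split("---"):        # messages are separated by ---
        found = {f: re.search(rf"\b{f}:\s*([-\d.]+)", block) for f in FIELDS}
        if all(found.values()):                 # skip incomplete/empty blocks
            stamp = float(found["secs"].group(1)) + 1e-9 * float(found["nsecs"].group(1))
            rows.append((stamp,
                         float(found["left_speed"].group(1)),
                         float(found["right_speed"].group(1))))
    return rows

# e.g.: rows = extract_rows(open("log-for-vel.txt").read())
```

The rows can then be written out with the csv module, or compared against the same topics extracted from a rosbag recording.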
{ "domain": "robotics.stackexchange", "id": 15973, "tags": "ros, clearpath, husky, topic" }
'Learn projects the hard way': logfind project
Question: I'm a new programmer who has just finished his first small project. It's a sort of basic imitation of the grep command from Linux. I'm learning from projects the hard way. Here is the description of the project: The first project that we'll try is called logfind. It's nothing more than a simple version of grep so you can learn a few simple basic things in Python: Opening files based on regular expressions in a file. Scanning those files for strings from command line arguments. Creating a complete installable Python project. When I run the script, the first thing it will look for is a logfind.txt file on the user's computer; then it will scan this file according to a regular expression to search for filenames of the example.extension type. The files specified in the logfind.txt file will be searched for the strings that the user has specified on the command line. I have a few questions about my program: Could someone critique my code on style and readability? Could you also check the overall design of the code and evaluate it? Are there functions whose syntax can be shortened? Source code: # logfind.py # Project explanation: http://projectsthehardway.com/ # A basic implementation of the Linux grep command. The program will search for the given strings in files that are listed in a logfind.txt file. # By default the program will search for all the given strings in a file. If the -o option is enabled, the program will return the file if one of the strings is present. # The results will be written to results.txt located in the current working directory.
import argparse import os import re def cl_handler(): """ Handles the command line input, with strings as the words to search for in files, and -o as optional argument """ parser = argparse.ArgumentParser(description="find strings inside files") parser.add_argument("strings", nargs = "*", help="The files will be searched according to this words") parser.add_argument("-o", action="store_true", help="This option resets the string1 AND string2 and string3 logic to a string1 OR string2 OR string3 search logic") args = parser.parse_args() return args.strings, args.o def scan_logfind(logfind_dir): """ Opens the logfind file and scans it for filenames according to a regular expression (filename.extension). Returns a list with the filenames """ files = [] with open(logfind_dir, "r") as logfind: regex = re.compile(r"^[\w,\s-]+\.[A-Za-z]+$") for word in logfind.read().split(): file = regex.match(word) if file: files.append(word) return files def scan_directory(file): """ Scans the computer for a specified file, starting with the home directory. Returns the absolute directory of the file """ home = os.path.expanduser("~") for root, dirs, files in os.walk(home): for f in files: if f == file: file_directory = os.path.join(root, f) return file_directory def search_strings(file_dir, strings, or_option=False): """ Searches the file for the specified files. Returns boolean true if all strings are found in the file If the or_option is enabled the function will return boolean true if one string is found in the file. 
""" with open(file_dir, "r") as logfile: logfile = logfile.read().lower() results = [] for string in strings: if string in logfile: results.append("True") else: results.append("False") if or_option: for result in results: if result == "True": return True return False else: for result in results: if result == "False": return False return True def main(): """ main """ results = open("results.txt", "w") strings, or_option = cl_handler() logfind = scan_directory("logfind.txt") logfiles = scan_logfind(logfind) logfiles_dir = [] for logfile in logfiles: logfiles_dir.append(scan_directory(logfile)) for logfile_dir in logfiles_dir: if search_strings(logfile_dir, strings, or_option): results.write("{}\n".format(logfile_dir)) print("Search complete. Results written to results.txt") if __name__ == "__main__": main() Answer: Generators Sometimes you make a list a piece at a time, to write a generator function you just yield the result each iteration and not build a list, the pattern: result = [] for i in a_thing: result.append(do_thing(i)) return result Must be avoided as it wastes memory and is boilerplate. More concretely, you have: def scan_logfind(logfind_dir): """ Opens the logfind file and scans it for filenames according to a regular expression (filename.extension). Returns a list with the filenames """ files = [] # <- 1 with open(logfind_dir, "r") as logfind: regex = re.compile(r"^[\w,\s-]+\.[A-Za-z]+$") for word in logfind.read().split(): file = regex.match(word) if file: files.append(word) # <- 2 return files # <- 3 You should write: def scan_logfind(logfind_dir): """ Opens the logfind file and scans it for filenames according to a regular expression (filename.extension). Returns a generator with the filenames """ ## Deleted -- files = [] # <- 1 with open(logfind_dir, "r") as logfind: regex = re.compile(r"^[\w,\s-]+\.[A-Za-z]+$") for word in logfind.read().split(): file = regex.match(word) if file: yield word # Gives this element out of the function. 
## Deleted -- return files # <- 3 Ternary when sensible Ternaries should not be overused, but can be a valuable tool to reduce code-bloat, see: if string in logfile: results.append("True") else: results.append("False") The action append is the same, only the value changes, a perfect job for ternaries: results.append("True" if string in logfile else "False") But the whole top part of the function search_strings should be a generator expression as noted above. Temporary overuse You really like temporary variables, I would make less use of them, especially if they get confusing names: file = regex.match(word) # This is not a filename nor a filecontent Just write: if regex.match(word): yield word Another example: file_directory = os.path.join(root, f) return file_directory This follows the: result = computation(thing) return result anti-pattern, just write: return os.path.join(root, f) A case analysis: search_strings search_strings was the worst of your functions considering all the wheels that you reinvented there. Python has a ton of built-ins, please use them. A first refactor was just: Reducing nesting by closing the file early. Using generator expressions. Using the built-ins all and any def search_strings(file_dir, strings, or_option=False): """ Searches the file for the specified files. Returns boolean true if all strings are found in the file If the or_option is enabled the function will return boolean true if one string is found in the file. """ with open(file_dir, "r") as f: logfile = f.read().lower() results = ("True" if string in logfile else "False" for string in strings) if or_option: return any(result == "True" for result in results) else: return all(result == "True" for result in results) After: Realizing you are using "True" instead of True for mysterious reasons. Using a ternary for more FP. def search_strings(file_dir, strings, or_option=False): """ Searches the file for the specified files.
Returns boolean true if all strings are found in the file If the or_option is enabled the function will return boolean true if one string is found in the file. """ with open(file_dir, "r") as f: logfile = f.read().lower() any_or_all = (any if or_option else all) return any_or_all(string in logfile for string in strings) An FP purist would avoid all temporary variables and write it like: def search_strings(file_dir, strings, or_option=False): """ Searches the file for the specified files. Returns boolean true if all strings are found in the file If the or_option is enabled the function will return boolean true if one string is found in the file. """ with open(file_dir, "r") as f: return (any if or_option else all)(string in f.read().lower() for string in strings) The second and third versions are more or less readable to different people; choosing between them is subjective, but surely any of them is better than your version (3-7 lines vs 20 of yours, and simpler than it).
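The `(any if or_option else all)` idiom recommended above is easy to demonstrate in isolation. Below is a small, hypothetical `search_text` helper (not part of the reviewed code) that applies the same pattern to an in-memory string instead of a file; it also lowercases the search strings, since the original only lowercases the file contents:

```python
def search_text(text, strings, or_option=False):
    """AND-search by default; OR-search when or_option is True."""
    haystack = text.lower()
    # all() short-circuits on the first miss, any() on the first hit
    return (any if or_option else all)(s.lower() in haystack for s in strings)

print(search_text("ERROR: disk full", ["error", "disk"]))                     # True
print(search_text("ERROR: disk full", ["error", "network"]))                  # False
print(search_text("ERROR: disk full", ["error", "network"], or_option=True))  # True
```

Because the argument is a generator expression, no intermediate list of "True"/"False" strings is ever built.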
{ "domain": "codereview.stackexchange", "id": 16624, "tags": "python, beginner, regex, file" }
VSLAM ROS Source Down?
Question: I am unable to download the VSLAM ROS source code found here: https://code.ros.org/svn/ros-pkg/stacks/vslam/trunk Has it been removed or is the site just temporarily down? Originally posted by aelkman on ROS Answers with karma: 1 on 2017-01-20 Post score: 0 Answer: code.ros.org has been off-line for a long time now. Looks like contradict/ros-vslam is a fork of that. Originally posted by gvdhoorn with karma: 86574 on 2017-01-20 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 26785, "tags": "ros, vslam" }
Why does thermal radiation only occur at infrared and visible frequencies?
Question: The resources that I've checked out seem to say thermal radiation only occurs in the infrared and visible spectrum. For example my heat transfer textbook and the Wikipedia page on emissivity. In my mind thermal radiation is just energy emitted as a result of internal collisions from temperature. So, since temperature can range from 0 K to huge numbers, I would think you would have a similar range of emitted energies. So then thermal radiation could occur throughout the whole of the electromagnetic spectrum. Answer: You are correct. Thermal radiation can have any frequency at all; it depends on the temperature of the radiating body. However, most bodies in the universe have temperatures that make them emit most of their radiation in the visible or infrared part of the spectrum. If the body in question is a "black body" (one that absorbs all electromagnetic radiation falling onto it) and it is in thermal equilibrium with its environment, then it emits "black body radiation". The correct spectrum of black body radiation was first calculated by Planck. It shows that a black body radiates at all frequencies, but with a peak at a frequency that depends on the temperature. The sun, for example, with a temperature of $5800\:\mathrm{K}$ peaks in the yellow to green part of the visible spectrum. However, its measurable radiation extends well into the IR and UV parts of the spectrum. Very hot stars will shine more strongly in the UV, and even into the X-ray part of the spectrum. Another example of black body radiation is the Cosmic Microwave Background. With a temperature of $2.75\:\mathrm{K}$ the CMB peaks in the microwave part of the spectrum, at $160\:\mathrm{GHz}$.
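The peak-versus-temperature relation behind both examples is Wien's displacement law. A quick numeric sketch (rounded constants; note that the wavelength form and the frequency form of the law peak in different places because the spectrum is per-unit-wavelength vs. per-unit-frequency, which is why the Sun's peak is usually quoted near 500 nm while the CMB's is quoted near 160 GHz; the measured CMB temperature 2.725 K is used here, which the answer rounds to 2.75 K):

```python
# Wien's displacement law in both of its common forms (rounded constants).
B_WAVELENGTH = 2.898e-3  # m*K  : lambda_peak = B_WAVELENGTH / T
B_FREQUENCY = 5.879e10   # Hz/K : nu_peak     = B_FREQUENCY  * T

def peak_wavelength_m(temperature_k):
    return B_WAVELENGTH / temperature_k

def peak_frequency_hz(temperature_k):
    return B_FREQUENCY * temperature_k

print(round(peak_wavelength_m(5800) * 1e9))   # ~500 nm: green light, as quoted for the Sun
print(round(peak_frequency_hz(2.725) / 1e9))  # ~160 GHz: microwaves, as quoted for the CMB
```

The same two functions confirm the question's intuition: plugging in very low or very high temperatures moves the peak anywhere from radio frequencies to X-rays.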
{ "domain": "physics.stackexchange", "id": 35998, "tags": "electromagnetic-radiation, thermal-radiation" }
Language consisting of all Turing machine encodings
Question: $A=\{\langle M\rangle : M \text{ is a Turing machine}\}$. What can be said about $A$? Specifically, is $A$ decidable, regular, CFL, CSL? I would say $A$ is decidable, since we can write an algorithm to check whether a string is a valid encoding of a Turing machine. But is $A$ regular (or CFL, or CSL)? Edit: Someone argued that he could make an encoding where all the possible strings (what would be the alphabet here? The same as the encoding's, I suppose) are valid encodings of a TM (since there is a one-to-one correspondence between two countably infinite sets), hence making $A$ regular.
{ "domain": "cs.stackexchange", "id": 4068, "tags": "formal-languages, computability, turing-machines" }
Regarding the advantages of generalized linear phase filters
Question: I know that for a linear-phase filter with frequency response given by $$H(e^{j\omega}) = |H(e^{j\omega})|e^{j\phi(\omega)} $$ if the input of the system is $$x[n] = s[n]\cos[\omega_0 n] $$ where $s[n] $ is a narrowband signal with bandwidth $W\ll\omega_0$, then the output of the system is approximately $$y[n] \approx |H(e^{j\omega_0})|s[n-\tau_g]\cos[\omega_0 (n -\tau_p)] $$ (if $\tau_g$ and $\tau_p$ are integers) where $$\tau_g =\left.-\frac{d\phi(\omega)}{d\omega}\right|_{\omega=\omega_o} $$ $$\tau_p =\left.-\frac{\phi(\omega)}{\omega}\right|_{\omega=\omega_o} $$ A broadband signal can be thought as a superposition of narrowband signals, this means that if we have a filter with linear phase (i.e. $\phi(\omega)= \tau\omega$) both group delay and phase delay are constant and equal, therefore all the narrowband signal packets will be shifted in the same amount. But what about generalized linear phase filters? In this case $$\phi(\omega) = n_d \omega + \phi_o $$ so the output of a narrowband input centered at $\omega_0$ will now be $$y[n]= |H(e^{j\omega_0})|s[n-n_d]\cos\left[\omega_0\left(n -n_d-\frac{\phi_0}{\omega_0}\right)\right] $$ and the term $\frac{\phi_0}{\omega_0} $ will differ from one signal packet to another as the center frequency ($\omega_0$) of each packet changes. So why are generalized linear phase filters as important as linear phase ones? Isn't the term $\frac{\phi_0}{\omega_0} $ important? The only related question I could find here did not have a complete answer to my question. 
Answer: Note that with the definition of generalized linear phase $\phi(\omega)$ according to $$H(e^{j\omega})=|H(e^{j\omega})|e^{j\phi(\omega)}\tag{1}$$ and $$\phi(\omega)=\alpha\omega+\beta\tag{2}$$ the restriction that the impulse response $h[n]=\text{IDTFT}\{H(e^{j\omega})\}$ be real-valued only allows two possible values for $\beta$: $$\beta\in\{0,\pi\}\tag{3}$$ This is due to the required conjugate symmetry of $H(e^{j\omega})$: $H(e^{j\omega})=H^*(e^{-j\omega})$. The value $\beta=0$ obviously results in a system without phase distortion, and the value $\beta=\pi$ results in a sign reversal without any other phase distortions. However, if you define generalized linear phase according to $$H(e^{j\omega})=A(\omega)e^{j\phi(\omega)}\tag{4}$$ with $\phi(\omega)$ given by $(2)$, and with a real-valued but possibly bipolar amplitude function $A(\omega)$, then for real-valued filters the constant $\beta$ can take on four different values: $$\beta\in\{0,\pi/2,\pi,3\pi/2\}\tag{5}$$ For $\beta=0$ and $\beta=\pi$ the amplitude function $A(\omega)$ is even, and we get no phase distortion apart from a sign reversal for $\beta=\pi$. However, for $\beta=\pi/2$ and $\beta=3\pi/2$, $A(\omega)$ is odd, and there will be phase distortion, i.e., the group delay does not equal the phase delay. So for narrow-band signals, the envelope experiences a different delay than the carrier. Basically, - apart from the delay - the carrier is just shifted by $\pm 90$ degrees. This is what is desired when implementing differentiators or Hilbert transformers. Consequently, systems with generalized linear phase and with $\beta=\pi/2$ or $\beta=3\pi/2$ are important for implementing causal (FIR) approximations to differentiators and Hilbert transformers, where the phase shift is implemented exactly, apart from a delay necessary to make the filter causal. 
In sum, systems with generalized linear phase generally distort the phase (unless $\beta=0$ or $\beta=\pi$), but these distortions are desired when implementing differentiators and Hilbert transformers, and - apart from a delay - they implement the desired phase exactly. As a final remark, note that the approximation of the response of an LTI system to a narrow-band input signal of bandwidth $W$ using phase delay and group delay is only valid if the magnitude of the system's frequency response is (approximately) constant in the band $[\omega_0-W,\omega_0+W]$.
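The $\beta=\pi/2$ case is easy to check numerically on a minimal type III FIR (odd length, antisymmetric impulse response), $h=[1,0,-1]$: its DTFT factors as $H(e^{j\omega}) = 2\sin(\omega)\,e^{j(\pi/2-\omega)}$, i.e. the amplitude $A(\omega)=2\sin\omega$ is odd and the phase is exactly $\pi/2-\omega$ wherever $A(\omega)>0$. A plain-Python sketch of this check:

```python
import cmath
import math

h = [1.0, 0.0, -1.0]  # antisymmetric, odd length: a type III generalized-linear-phase FIR

def dtft(h, w):
    """H(e^{jw}) = sum_n h[n] e^{-jwn}, evaluated directly."""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

# On (0, pi) the amplitude A(w) = 2 sin(w) is positive, so the measured phase
# must equal beta + alpha*w with beta = pi/2 and alpha = -1 (one-sample delay).
for k in range(1, 50):
    w = k * math.pi / 50
    assert abs(cmath.phase(dtft(h, w)) - (math.pi / 2 - w)) < 1e-9
print("phase(H) == pi/2 - w on (0, pi): beta = pi/2 confirmed")
```

The constant $\pi/2$ offset is exactly the 90-degree carrier shift described above, which is why such filters approximate differentiators and Hilbert transformers.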
{ "domain": "dsp.stackexchange", "id": 5031, "tags": "filters, delay, linear-phase, distortion" }
If a system is in a state $L^2 =2 \hbar^2$ and $L_x=0$ why can't $L_z$ be 0?
Question: A particle is in a state where $L_x=0$ and $L^2 = 2\hbar^2$. This means $l=1$ and $m_x = 0$. I will call this state $Y^x_{lm} = Y^x_{10}$. I wanted to know what possible values $L_z$ could have in this state if it were measured. I know $L_z$ can only take the values $\hbar, 0$ or $-\hbar$, with corresponding eigenfunctions $Y^z_{11}$, $Y^z_{10}$ or $Y^z_{1-1}$. I want to write the state $Y^x_{10}$ as a linear combination of the eigenfunctions of $L_z$. If I rotate the coordinate system around the y-axis to put the x-axis in the z position, I can prove that $$Y^x_{10} = \frac{1}{\sqrt{2}} (Y^z_{11} - Y^z_{1-1})$$ Notably, from the mathematics I can clearly see that a particle in state $Y^x_{10}$ can never have the value $L_z=0$. My question: Why? Is there an insightful explanation of why it cannot take the value 0? Answer: The intuitive explanation is that an eigenstate of $\hat L_x$ with eigenvalue $m_x=0$ is invariant by rotation around the $x$ axis (by definition), but not by rotations around any other axis. Thus it cannot be an eigenstate of $\hat L_z$ with eigenvalue $m_z=0$. See for example this picture of the eigenfunctions of $\hat L{}^2$ and $\hat L_x$:
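The expansion in the question can be checked with bare matrices. A small sketch (setting $\hbar=1$) writes $\hat L_x$ for $l=1$ in the $\hat L_z$ eigenbasis, verifies that $(Y^z_{11}-Y^z_{1-1})/\sqrt 2$ really is the zero-eigenvalue eigenvector of $\hat L_x$, and reads off that the $m_z=0$ amplitude vanishes:

```python
import math

s = 1 / math.sqrt(2)
# L_x for l = 1 in the L_z eigenbasis {|1,1>, |1,0>, |1,-1>}, with hbar = 1
Lx = [[0, s, 0],
      [s, 0, s],
      [0, s, 0]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

psi = [s, 0.0, -s]  # the state Y^x_10 expanded in the L_z basis

# It is an eigenstate of L_x with eigenvalue 0 ...
assert all(abs(c) < 1e-12 for c in matvec(Lx, psi))
# ... and the Born-rule probability of measuring L_z = 0 is |<1,0|psi>|^2 = 0
print(abs(psi[1]) ** 2)  # 0.0
```

The zero middle component is exactly the statement in the question: a measurement of $L_z$ on this state returns $+\hbar$ or $-\hbar$ with probability $1/2$ each, and never $0$.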
{ "domain": "physics.stackexchange", "id": 27863, "tags": "quantum-mechanics, angular-momentum" }
Making a CFG for a^i b^j c^k such that i+j = 3k
Question: I have the language $L = \{a^i b^j c^k \mid i+j=3k\}$; however, I am struggling to convert it to a CFG. I have made it into a PDA fairly easily; it's just converting this to a CFG that is the issue. I have thought about dividing it into 3 cases and then taking the union of them, for example: $i=0, j=3k$ $i=3k, j=0$ But this still hasn't gotten me very far. Any help would be appreciated. Thanks Answer: I am not sure whether it is a good idea to actually provide the grammar here in an answer, but since the grammar is actually fairly easy I cannot see which details I could hide in a hint, so here it is: $G = (\{S, A\}, \{a, b, c\}, P, S)$ with $$ P = \{S \to aaaSc \mid aabAc \mid abbAc \mid bbbAc \mid \varepsilon, A \to bbbAc \mid \varepsilon\}.$$ You start by adding three $a$s for each $c$; after finitely many (possibly 0) applications of the first rule, you start creating three $b$s for each $c$, but one time you may actually add 2 $a$s and 1 $b$ for one $c$, or 1 $a$ and 2 $b$s for one $c$ (2nd and 3rd rule, respectively).
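The grammar can be machine-checked by brute force. The sketch below enumerates every terminal string the grammar derives up to length 12 (BFS over sentential forms) and compares that set against all strings $a^ib^jc^k$ with $i+j=3k$ of that length; every word in $L$ has length $i+j+k = 4k$, so only lengths 0, 4, 8, 12 occur:

```python
from collections import deque

# The grammar from the answer: S -> aaaSc | aabAc | abbAc | bbbAc | eps,
#                              A -> bbbAc | eps
RULES = {"S": ["aaaSc", "aabAc", "abbAc", "bbbAc", ""],
         "A": ["bbbAc", ""]}
MAX_LEN = 12

def generated(max_len):
    """All terminal strings of length <= max_len derivable from S.
    Terminals are never erased by any rule, so pruning sentential forms
    by terminal count is safe."""
    seen, out, queue = {"S"}, set(), deque(["S"])
    while queue:
        form = queue.popleft()
        nt = next((c for c in form if c in RULES), None)
        if nt is None:
            out.add(form)
            continue
        i = form.index(nt)
        for rhs in RULES[nt]:
            new = form[:i] + rhs + form[i + 1:]
            if sum(c not in RULES for c in new) <= max_len and new not in seen:
                seen.add(new)
                queue.append(new)
    return out

language = {"a" * i + "b" * (3 * k - i) + "c" * k
            for k in range(MAX_LEN // 4 + 1) for i in range(3 * k + 1)}
assert generated(MAX_LEN) == language
print(len(language))  # 22 strings up to length 12, including the empty string
```

Agreement on all short strings is of course only evidence, not a proof, but it catches most mistakes when writing such grammars by hand.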
{ "domain": "cs.stackexchange", "id": 11438, "tags": "context-free, formal-grammars, pushdown-automata" }
How to calculate energy from power I receive every 5 minutes
Question: I have to calculate energy from power measurements, which I receive every 5 minutes. The values are summed and assigned to measurements, then monthly measurements and finally yearly measurements. Having that aggregates of power measurements, the question is if I can calculate energy in kWh unit by equation: $\text{energy} = \text{power_sum} \times \frac{5}{60} \times \frac 1 {1000}$ as I want to convert 5 min to hours and W to kW. Answer: Yes, if your power values are in W and if you assume that power consumption is constant over each five minute period then your formula gives your total energy consumption in kWh. Essentially, you are integrating the power consumption values using the rectangle rule. For a slightly more accurate estimate, you could use the trapezoidal rule, which assumes that power consumption varies linearly over each five minute period, instead of remaining constant. If your power values are $P_0$ to $P_n$ every $5$ minutes over a period of $5n$ minutes then your energy estimate becomes $\displaystyle \text{Energy} = \left[ P_0 + P_n + 2\sum_{k=1}^{n-1} P_k \right] \times \frac 1 {24000}$
{ "domain": "physics.stackexchange", "id": 78694, "tags": "homework-and-exercises, energy, electricity, power" }
Shifting a fourier spectrum by subpixel amount in python
Question: I am working on a Fourier Ptychography problem. My research problem requires me to shift the Fourier spectrum of an image by a floating-point value. For a real-valued image, we can simply use cv2.warpAffine to shift the image by floating-point values (aka subpixel shifting). Taking the Fourier transform of an image produces a complex matrix. The problem is, cv2.warpAffine does not support complex matrices, so I cannot use it on them. I tried searching for alternatives, but none of them seem to work. I came across numpy.roll, but the problem is, it does not support subpixel shifting. Rounding off the shift values translates to loss of information in my case. Is there a solution in Python that allows for subpixel shifting on complex matrices? Thanks. EDIT: Based on Marcus' answer, I did some digging and implemented a nifty little script for subpixel shifting in python based on a Matlab script for the same. Here's the link to the script. Hope it helps!
{ "domain": "dsp.stackexchange", "id": 8739, "tags": "fft" }
Ubuntu 22.04 and Gazebo Classic not working
Question: Hello, I upgraded Ubuntu 20.04 to the 22.04 version. I installed ROS2 Humble (sudo apt install ros-humble-desktop-full) and I wanted to use Gazebo Classic with Nav2 simulations, but I cannot use it - my packages have unmet dependencies that I couldn't fix. Please give me a hint about what I should do now. My terminal logs:

When I tried a normal sudo apt install gazebo:

ljaniec@ljaniec-PC:~$ gazebo
Command 'gazebo' not found, but can be installed with:
sudo apt install gazebo
ljaniec@ljaniec-PC:~$ sudo apt install gazebo
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
The following packages have unmet dependencies:
gazebo : Depends: libgazebo11 (= 11.10.2+dfsg-1) but 11.12.0-1~focal is to be installed
E: Unable to correct problems, you have held broken packages.

When I tried sudo apt install ros-humble-gazebo-ros-pkgs:

ljaniec@ljaniec-PC:~$ sudo apt install ros-humble-gazebo-ros-pkgs
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
gazebo : Depends: libgazebo11 (= 11.10.2+dfsg-1) but 11.12.0-1~focal is to be installed
gz-tools2 : Conflicts: gazebo (>= 11.0.0) but 11.10.2+dfsg-1 is to be installed
libgazebo-dev : Depends: libgazebo11 (= 11.10.2+dfsg-1) but 11.12.0-1~focal is to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
ljaniec@ljaniec-PC:~$ sudo apt install libgazebo11
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
libgazebo11 is already the newest version (11.12.0-1~focal).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Best
Łukasz

Originally posted by ljaniec on Gazebo Answers with karma: 1 on 2022-11-12
Post score: 0

Answer: This problem was resolved, description: https://github.com/gazebosim/gazebo-classic/issues/3277

# Check what is going to be removed
dpkg -l | grep '^ii.*\(libignition\|sdformat\|gazebo\).*~focal'
# Remove everything
while [[ -n $(dpkg -l | grep '^ii.*\(libignition\|sdformat\|gazebo\).*~focal' | awk '{ print $2 };') ]]; do sudo dpkg -r $(dpkg -l | grep '^ii.*\(libignition\|sdformat\|gazebo\).*~focal' | awk '{ print $2 }';); done
# Install new Gazebo on Jammy (or just install the humble desktop full)
sudo apt install libgazebo-dev

Originally posted by ljaniec with karma: 1 on 2022-11-15
This answer was ACCEPTED on the original site
Post score: 0

Original comments
Comment by lfernandez on 2022-11-17: Thanks @jlaniec but I'm not using focal repos but jammy. I haven't updated Ubuntu. It's a clean installation on Ubuntu 22.04.
{ "domain": "robotics.stackexchange", "id": 4674, "tags": "gazebo-11" }
Quantum mechanics and rigorous math
Question: I was reviewing a little of quantum mechanics in a rigorous way, and I realized there are a lot of concepts that are similar in wording but different in meaning; I would appreciate any help understanding them: Kets -> The "physics entity" you are measuring: momentum, energy, etc. Operators -> What you actually use to get the results for momentum, energy, etc. Eigenkets -> The basis, the possible states the operator can have, e.g. spin up or spin down Eigenvalues -> The possible values you can get. Eigenfunctions -> Reserved for wave functions Eigenvectors -> ? State vector -> ? Answer: Kets In quantum mechanics the possible states of the system are elements of a separable, projective Hilbert space (i.e. two states differing by an overall complex constant are equivalent). The kets, e.g. $|\psi\rangle$, are elements of the Hilbert space $\mathcal H$; the bras, e.g. $\langle \psi|$, are elements of its dual, $\mathcal H^*$. Operators Each physically measurable quantity is associated with a Hermitian operator, e.g. position $\hat x$, momentum $\hat p$, energy (aka the Hamiltonian) $\hat H$, etc. Eigenkets These are eigenvectors of the operators that correspond to the observables. Note that since we are dealing with Hermitian operators, the eigenkets corresponding to distinct eigenvalues are orthogonal. *Note: terms often used synonymously with "eigenket" in physics: eigenstate, eigenvector, eigenfunction. Eigenvalues The spectra of the operators that correspond to observables are the (only) possible values of that quantity that can be obtained upon measurement of the observable in question.
Given a system in a state $|\psi\rangle$, the probability of obtaining the value $a$ and (necessarily) finding the system in the associated eigenstate $|a\rangle$ after measuring the observable $\hat A$ is given by: $$\mathrm{Probability}=|\langle a|\psi\rangle|^2 \tag{1}$$ Eigenfunctions There is an isomorphism $\mathcal H\cong L^2(\Bbb R)$ that allows us to associate elements of our Hilbert space with elements of the space of square-integrable functions; elements of the latter are called wavefunctions. Additional note: You can be more rigorous than this; rigorous quantum mechanics is possible to do, but it requires a significant amount of heavy lifting and a fair amount of machinery that is not introduced even in fairly advanced courses.
{ "domain": "physics.stackexchange", "id": 73379, "tags": "quantum-mechanics, hilbert-space, operators, probability, quantum-states" }
Does the loss of mass create an observable change in a comet's orbit?
Question: Comets lose their mass through water evaporation due to close encounters with the Sun, so my question is: does the loss of mass create an observable change in a comet's orbit? What kind of change would that be, a shrinking orbit or an expanding orbit? Answer: This problem was studied in Yu & Zheng (1995), who evaluated the effects of the change in the Sun's mass over time and the change in a comet's mass over time, for the case of Shoemaker-Levy 9, which had recently crashed into Jupiter. Given their mass model for the comet (Equation 6), they found that the Sun's mass loss created an increase in semi-major axis of about 8.5 centimeters per year, whereas the comet's mass loss created an increase in semi-major axis of about 10,000 kilometers per year. Several things to note: The comet orbited Jupiter prior to breaking up, so it did not come as close to the Sun as most short-period comets. Mass loss rates can change over time, depending on the distance from the Sun. Shoemaker-Levy 9 should not be considered a normal comet, in any sense, given its orbit and eventual destruction. 10,000 kilometers a year, though, is nothing to sniff at. Over the course of an orbit, that can be quite a lot - although keep in mind that longer orbits involve much larger semi-major axes - and I'd argue that it should be observable, given the correct calculations of how the orbit should evolve over time.
{ "domain": "astronomy.stackexchange", "id": 1864, "tags": "orbit, the-sun, comets" }
Relationship between thermal conductivity and reaching the steady temperature
Question: The diagram is for three materials with 3 different thermal conductivities. I think the material with the highest thermal conductivity is number 1, because it transfers heat much better, so it reaches the steady temperature sooner than the other two. Am I correct? If not, or if my reasoning isn't correct, I'd appreciate your help. Answer: This must represent the average temperature, or the temperature at some specific location within the body (since the temperature will be a function of spatial location and time). Just ask yourself what the curve would look like if the thermal conductivity of the material were zero. This should give you your answer.
{ "domain": "physics.stackexchange", "id": 29275, "tags": "thermodynamics, thermal-conductivity" }
List and post tasks with jQuery
Question: I've written the code below to display a list of tasks from a Flask API. The form and associated .ajax method post to the API, immediately appending the data to the list of tasks. The code is working, but I'm fairly new to jquery and am curious if my code could be improved. <html> <head> <title>Home</title> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script> <script type="text/javascript"> $(document).ready(function () { $.getJSON("/api/v1/tasks", function(result){ $.each(result['tasks'], function(key, item){ $("div").append('<li>Task:' + item.title + ' Description:' + item.description + 'Status: ' + item.done + '</li>'); }); }); }); </script> </head> <body> <div></div> <p>Add Task</p> <form id="task-form"> <input id="title" type="text"> <input id="description" type="text"> <button type="submit">Submit</button> </form> <script> $("#task-form").submit(function(e){ e.preventDefault(); var data = { 'title': document.getElementById('title').value, 'description': document.getElementById('description').value }; var json = JSON.stringify(data); $.ajax({ type : 'POST', url : '/api/v1/tasks', data: json, contentType: 'application/json;charset=UTF-8', success: function(result) { $("div").append('<li>Task:' + data["title"] + ' Description:' + data["description"] + 'Status: false</li>'); } }); }); </script> </body> </html> Answer: Here are some improvements that can be done Move the <script>s to the end of <body>. This way, the UI is show quickly to the end user and does not have to wait to load the scripts. Also, as the code is moved to the end of the <body>, all the elements are already loaded and there is no need to use ready(). Create a string of HTML and append it to the element. By this, the DOM is accessed only once for any number of items in the tasks array. There is no need to convert the Object to String using JSON.stringify(). jQuery does this for you. Use val() to get the value of an element. 
It is good practice to trim the strings entered by the user. Use a unique ID on the <div> element and use it to select it using jQuery. $('div') will select all the <div> elements on the page and the results will be unexpected when there is more than one element on the page. $.getJSON("/api/v1/tasks", function (result) { var html = ''; result.tasks.forEach(function (item) { // Append to the string html += '<li>Task:' + item.title + ' Description:' + item.description + ' Status:' + item.done + '</li>'; }); // Update the DOM $('div').append(html); }); $("#task-form").submit(function (e) { e.preventDefault(); var data = { title: $('#title').val().trim(), description: $('#description').val().trim() }; $.ajax({ type: 'POST', url: '/api/v1/tasks', data: data, contentType: 'application/json;charset=UTF-8', success: function () { $('div').append('<li>Task:' + data.title + ' Description:' + data.description + ' Status: false</li>'); } }); }); Here's the same code written using the Revealing Module pattern. var taskModule = (function ($) { 'use strict'; // Private variables and functions var _apiURL = 'api/v1/tasks', _readTasks = function () { $.getJSON(_apiURL, function (result) { _showTasks(result.tasks); }); }, /** * To add a new task in the database * @param {Object} task Object containing task title and description */ _addTask = function (task) { $.ajax({ type: 'POST', url: _apiURL, data: task, success: function () { _showTasks([task]); } }); }, _showTasks = function (tasks) { var taskHTML = ''; tasks.forEach(function (task) { taskHTML += '<li>Task:' + task.title + ' Description:' + task.description + ' Status:' + (task.done || 'false') + '</li>'; }); $('#tasksContainer').append(taskHTML); }, _bindEvent = function () { $('#task-form').on('submit', function (e) { e.preventDefault(); _addTask({ title: $('#title').val().trim(), description: $('#description').val().trim() }); }); }; // Load tasks from server _readTasks(); // Bind form submit event _bindEvent(); // For public Interface // return { // publicAlias: method/Variable name // }; }(jQuery));
{ "domain": "codereview.stackexchange", "id": 21378, "tags": "javascript, jquery, json, api, to-do-list" }
Why is $K \cdot T$ a conserved vector?
Question: I am working through Chapter 2 of Takagi's Vacuum Noise and Stress Induced by Uniform Acceleration. For a free real scalar field $\phi$ the stress-energy tensor is: $$ T_{\mu\nu} = ( \partial_{\mu} \phi ) ( \partial_{\nu} \phi ) - g_{\mu\nu} \tfrac{1}{2} g^{\alpha\beta} ( \partial_{\alpha} \phi ) ( \partial_{\beta} \phi ) - \tfrac{1}{2} g_{\mu\nu} m^2 \phi^2 $$ For $K$ a timelike Killing vector of the spacetime, define: $$ H_{K} = - \int_{\Sigma} d^3\Sigma_{\nu}\ K^{\mu} T_{\mu}^{\ \nu} $$ where $\Sigma$ is a spacelike hypersurface and $d^3\Sigma_{\nu}$ the 3-volume 1-form over this surface. Then $H_{K}$ is a conserved charge and is independent of the choice of $\Sigma$ used to integrate it. Takagi says that $K^{\mu} T_{\mu}^{\ \nu}$ is a conserved vector. So I have two questions: 1. Does $K^{\mu} T_{\mu}^{\ \nu}$ being a 'conserved vector' mean that it obeys $\partial_{\nu} K^{\mu} T_{\mu}^{\ \nu}= 0$? If this is true, how do I see this? 2. What does it mean that $H_K$ is a conserved charge? Does it mean $\mathcal{L}_{K} H_{K} = 0$ (where $\mathcal{L}_{K}$ is the Lie derivative)? Normally you'd have $K = \frac{\partial}{\partial x^0}$ for ordinary Minkowski time and so I'd understand $H_{\partial_0}$ being conserved as the statement $\frac{\partial}{\partial x^0} H_{\partial_0} = 0$ EDIT: I've also read the following statement in DeWitt's A Global Approach to Quantum Field Theory: In a general stationary background $H_{K}$ is the only conserved charge that there is for this system. Why is this true? I know that in a general stationary spacetime there exists one global timelike Killing vector, but independent of this isn't it still true that $T_{\mu\nu}$ is a conserved current? To me it seems that there should still be four corresponding conserved charges, independent of whether the spacetime is stationary or not.
Answer: $K_\mu T^{\mu \nu}$ being conserved means it has no covariant divergence: $$ \nabla_\nu (K_\mu T^{\mu \nu}) = 0 \,.$$ To see this, expand using the product rule and apply energy-momentum conservation and Killing's equation. Note that $\partial_\nu (K_\mu T^{\mu \nu})$ is not a scalar. $H_K$ is not some field defined through spacetime so Lie derivatives aren't really appropriate here. The statement that $H_K$ is independent of $\Sigma$ can be proved as follows: for any $\Sigma, \Sigma'$ consider the volume in spacetime bounded by these two surfaces along with the timelike boundary at spatial infinity. Then integrate $ \nabla_\nu (K_\mu T^{\mu \nu})$ over this volume and apply the divergence theorem, assuming that $T_{\mu \nu}$ vanishes sufficiently quickly at spatial infinity. If we choose $\Sigma = \Sigma_t$ to be a surface of constant $t$, then this result can be specialised to $$ \frac{\mathrm{d}}{\mathrm{d}t} H_K(\Sigma_t) = 0 \,,$$ which is what is meant by $H_K$ being conserved. The conservation law $\nabla_\mu T^{\mu \nu} = 0$ is a little different from a conservation law of the form $\nabla_\mu J^\mu = 0$. The presence of an extra free index in the former case means we cannot apply the usual (covariant) divergence theorem to conclude the existence of a conserved charge.
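To spell out the expansion the answer refers to (this is the standard one-line argument, added here for completeness): $$ \nabla_\nu (K_\mu T^{\mu \nu}) = (\nabla_\nu K_\mu)\, T^{\mu \nu} + K_\mu\, \nabla_\nu T^{\mu \nu} = 0 \,, $$ where the first term vanishes because $T^{\mu \nu}$ is symmetric while Killing's equation $\nabla_\mu K_\nu + \nabla_\nu K_\mu = 0$ makes $\nabla_\nu K_\mu$ antisymmetric (the contraction of a symmetric with an antisymmetric tensor is zero), and the second term vanishes by energy-momentum conservation $\nabla_\nu T^{\mu \nu} = 0$.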
{ "domain": "physics.stackexchange", "id": 50647, "tags": "quantum-field-theory, energy-conservation, stress-energy-momentum-tensor, qft-in-curved-spacetime" }
100% accuracy on both train and test after feature engineering
Question: The original dataset consists of ~17K compound structures, almost equally divided between yes and no labels. After heavy use of mol2vec and RDKit I have created ~300 features. Using the boosted-trees method on the same shuffled train and test dataset gives 98% train accuracy and 89% test accuracy, but a simple neural network gives 100% train and test accuracy. I have checked the code again to ensure target leakage is not occurring, and I have also coded it from scratch twice to ensure I haven't made any mistake, yet I do not believe I should be getting 100% accuracy on both train and test. Does this mean that the model is actually accurate due to so many data points? Answer: Yes: getting 100% accuracy is possible for neural networks where tree-based models fall short. Neural networks can learn smooth non-linear relationships through their activation functions, whereas tree-based models are restricted to piecewise-constant, axis-aligned splits.
{ "domain": "datascience.stackexchange", "id": 8244, "tags": "neural-network, tensorflow, overfitting" }
MAP (BCJR) Algorithm with channel LLRs
Question: I have a question about the decoding of convolutional codes with the MAP (BCJR) algorithm. Let $\mathbf{u}$ denote the uncoded bits and $\mathbf{v}$ the coded bits. Here is the point: let $\mathbf{v}$ be modulated with an arbitrary linear modulation (M-PSK, M-QAM) and transmitted through an AWGN channel. The receiver performs the following to obtain the log-likelihood ratios of the uncoded bits $\mathbf{u}$: $$L\left(u_{l}\right) \equiv \ln \left[\frac{P\left(u_{l}=+1 \mid \mathbf{r}\right)}{P\left(u_{l}=-1 \mid \mathbf{r}\right)}\right]$$ which equals $$L\left(u_{l}\right)=\log \frac{ <u_l=+1>\sum_{m'} \sum_{m}\alpha_{k-1}\left(m^{\prime}\right) \gamma_{k}\left(m^{\prime}, m\right) \beta_{k}(m)}{<u_l=-1>\sum_{m^{\prime}} \sum_{m} \alpha_{k-1}\left(m^{\prime}\right) \gamma_{k}\left(m^{\prime}, m\right) \beta_{k}(m)}.$$ Here, $m$ and $m'$ stand for the trellis states, and $<u_l = i>$ denotes that the sums to its right are taken under the condition $u_l=i$. Now, my question is: can I manipulate the MAP algorithm as given below: $$L\left(u_{l}\right) \equiv \ln \left[\frac{P\left(u_{l}=+1 \mid L(\mathbf{c})\right)}{P\left(u_{l}=-1 \mid L(\mathbf{c})\right)}\right]$$ where $L(\mathbf{c})$ denotes the channel LLRs. In this case, I could not figure out how to write the $\gamma_k$ values. Answer: I finally derived a solution to my problem. I believe that this explanation is consistent. Let $\mathbf{a} = [a_1,a_2,\dots,a_N]$ be the uncoded bit vector. Without loss of generality, let us assume that the convolutional encoder is of rate $R=1/n$. Therefore, each $a_k$ will be encoded to a $\mathbf{c}_k = [c_{1,k},c_{2,k},\dots,c_{n,k}]$.
Here, the overall encoded data can be written as the concatenation of the $\mathbf{c}_k$ arrays: $\mathbf{c} = [\mathbf{c}_1,\mathbf{c}_2,\dots, \mathbf{c}_N]$. In the algorithm, $\gamma_k(m',m)$ is defined as $$\gamma_k(m',m) = \sum_{i=0,1}\Pr(a_k=i|m', m)\Pr(Y_k|a_k=i)\Pr(a_k=i)$$ where $m'$ and $m$ denote the states of the trellis diagram at the $(k-1)$th and $k$th branches, respectively, and $Y_k$ denotes the modulated $\mathbf{c}_k$. Since the encoder performs the encoding $a_k \rightarrow\mathbf{c}_k$ in the $k$th branch of the trellis, the expression $\Pr(Y_k|a_k=i)$ is equivalent to $\Pr(Y_k|\mathbf{d}_k = \mathbf{c}^{i}_k)$. Here, $\mathbf{c}^i_k$ denotes the encoded output when $a_k=i$ ($i = 0,1$). Using Bayes' theorem: $$\Pr(Y_k|a_k = i) = K \Pr(a_k=i|Y_k)$$ as well as $$\Pr(Y_k|\mathbf{d}_k = \mathbf{c}^i_k) = K \Pr(\mathbf{d}_k = \mathbf{c}_k^i|Y_k) = K \prod_{\ell=1}^n \Pr(d_{\ell,k}=c_{\ell,k}^i|Y_k)$$ where $K$ is a constant under the assumption that the $a_k$ are equiprobable. In the sequel, $K$ will be omitted since it is of no significance to the derivation. On the other hand, it is assumed that the receiver has calculated the channel LLR values $LLR(\mathbf{d}_k)$ from the noisy received signals $Y_k$. (NOTE: Channel LLRs do not imply $LLR(\mathbf{c}_k)$). The channel LLR values can be calculated as $$LLR(d_{\ell,k}) = \log \left( \frac{\Pr(d_{\ell,k} = 0 |Y_k)}{\Pr(d_{\ell,k} = 1 |Y_k)} \right)$$ After some algebra, $\Pr(d_{\ell,k} = c_{\ell,k}^i |Y_k)$ can be written as $$ \Pr(d_{\ell,k} = c_{\ell,k}^i |Y_k) = \frac{\exp\left\{(1-c_{\ell,k}^i)LLR(d_{\ell,k})\right\}}{1+ \exp\left\{LLR(d_{\ell,k})\right\}}$$ Therefore, the expression $\gamma_k(m',m)$ can now be written in terms of the channel LLRs.
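As a sketch of how this final expression could be used in code (all names and the rate-1/2 example values here are hypothetical, not from the post):

```javascript
// Pr(d = bit | Y) recovered from LLR = log(P(d=0|Y) / P(d=1|Y)), for bit in {0, 1}
const bitProbFromLlr = (llr, bit) => Math.exp((1 - bit) * llr) / (1 + Math.exp(llr));

// gamma contribution of one trellis branch: product over the n coded bits,
// with the constant K (and the a-priori terms) dropped as in the derivation
const gammaBranch = (llrs, codeBits) =>
  llrs.reduce((p, llr, i) => p * bitProbFromLlr(llr, codeBits[i]), 1);

// e.g. a rate-1/2 branch labelled (0, 1) with channel LLRs (+3.0, -1.0)
const g = gammaBranch([3.0, -1.0], [0, 1]);
```

An LLR of 0 gives both bit values probability 0.5, and large positive LLRs push the probability of a 0 toward 1, which is a quick sanity check on the sign convention.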
{ "domain": "dsp.stackexchange", "id": 10801, "tags": "digital-communications, channelcoding" }
Identify outliers for annotation in text data
Question: I read the book "Human-in-the-Loop Machine Learning" by Robert (Munro) Monarch about Active Learning. I don't understand the following approach to get a diverse set of items for humans to label: Take each item in the unlabeled data and count the average number of word matches it has with items already in the training data Rank the items by their average match Sample the item with the lowest average number of matches Add that item to the ‘labeled’ data and repeat 1-3 until we have sampled enough for one iteration of human review It's not clear how to calculate the average number of word matches. Answer: The idea is to find the documents which are not well represented in the current labeled data. The first point is indeed a bit vague and can probably be interpreted in different ways. My interpretation would be something like this: For every document $d_u$ in the unlabeled data, count the number of words in common with every document $d_l$ in the labeled data. This value is the "match score" between $d_u$ and $d_l$. Note: I think that this value should be normalized, for example using the overlap coefficient. Note that other similarity measures could be used as well, for instance cosine-TFIDF. As output from the above step, for a single document $d_u$ one obtains a "match score" for every labeled document. The average across the labeled documents gives the "average match" for $d_u$.
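A sketch of one way to make steps 1-2 concrete, using the overlap coefficient the answer suggests (function names hypothetical):

```javascript
// overlap coefficient between two documents' word sets:
// |A ∩ B| / min(|A|, |B|), i.e. a normalized "word match" score
function overlap(docA, docB) {
  const a = new Set(docA), b = new Set(docB);
  if (a.size === 0 || b.size === 0) return 0;
  let common = 0;
  for (const w of a) if (b.has(w)) common++;
  return common / Math.min(a.size, b.size);
}

// average match of one unlabeled document against all labeled documents
const avgMatch = (doc, labeled) =>
  labeled.reduce((s, d) => s + overlap(doc, d), 0) / labeled.length;
```

Sampling then just means sorting the unlabeled documents by avgMatch ascending and taking the lowest, re-running after each item is moved to the labeled set.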
{ "domain": "datascience.stackexchange", "id": 8843, "tags": "machine-learning, nlp, annotation, active-learning" }
Laminar flow hood/air straightener
Question: I was looking at laminar flow filters to aid in my mycology work I'm doing. Most filters use an aluminum extrusion with fins. I figured the options out there are pretty expensive for what you get and figured I could just 3d print my own straightener. Question is what shape do I use? I have seen honeycomb used and would be an easy option, but it really got me thinking about what shape would be optimal for this. We all know that when streamlining a positive body that a teardrop is close to optimal, but putting teardrops in a grid would yield a weird negative shape in the pattern. Any thoughts or ideas on this would be greatly appreciated, I plan on open sourcing the final design. Thanks Answer: Just doing a quick search, this NASA paper (https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19810020599.pdf) indicates that no particular geometry has significant advantages for producing flow uniformity. I would stick to a well-tested geometry that is the simplest to produce with a 3D printer. Also, keep in mind the purpose of using honeycomb is to remove turbulence (vortices) from the flow via reducing the hydraulic diameter. Thus, so long as you don’t inflict a large pressure drop through the honeycomb (if you care about pressure drop), any shape with an appreciable reduction in flow hydraulic diameter will reduce the turbulent intensity. This paper seems to be useful as well: https://conservancy.umn.edu/bitstream/handle/11299/108725/pr338.pdf?sequence=1
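For comparing candidate cell shapes quantitatively, the relevant quantity is the hydraulic diameter D_h = 4A/P (flow area over wetted perimeter); a quick sketch (the 6 mm cell size is just an illustrative number, not from the question):

```javascript
// Hydraulic diameter D_h = 4 * area / wetted perimeter for one straightener cell
const dhSquare = s => (4 * s * s) / (4 * s);                       // = side length s
const dhCircle = d => (4 * (Math.PI * d * d / 4)) / (Math.PI * d); // = diameter d
const dhHexagon = w => {                                           // = across-flats width w
  const a = w / Math.sqrt(3);                       // side length of a regular hexagon
  return (4 * (3 * Math.sqrt(3) / 2) * a * a) / (6 * a);
};

// e.g. 6 mm cells: all three shapes give D_h equal to the cell's characteristic
// width, consistent with the finding that the exact geometry matters little
const dhs = [dhSquare(6), dhCircle(6), dhHexagon(6)];
```

This supports the answer's point: pick whichever shape prints cleanly and choose the cell size (and length) to get the hydraulic diameter, and hence turbulence reduction, you need.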
{ "domain": "engineering.stackexchange", "id": 2815, "tags": "fluid-mechanics, airflow, aerodynamics, airfoils" }
What is the fate of the typical tree?
Question: Walking through the local established broadleaf forests dominated by oak and tulip poplar (Eastern U.S.), it is common to find patches of young tree saplings, say 10-20 oak or tulip poplar saplings growing in close proximity. The obvious restriction on living space means that probably at most one of these trees will make it to maturity. In fact, judging by the apparent lack of mid-aged trees of these types in the woods, it seems very possible that none of these trees will last much longer. I'm curious: at what point and how would these trees die? The answer might also help me take better care of trees I've planted in my yard. I can easily speculate about some causes: Competition for water/nutrients in the soil Not enough sunlight Animal predators Pathogens But these suggestions don't answer the how or the when. And it's not clear which has the biggest impact. Clearly, the above factors didn't prevent the trees from growing in the first place. And the first suggestion (competition for nutrients) puts a limit on their growth but doesn't say per se why a plant would die.
Early successional tree species (in GMF, white ash/red maple/red and white oak) grow especially slowly, and hence are more likely to die, in low-light conditions; late successional (hemlock and beech) grow slowly even in high light, but are tolerant of slow growth, so are likely to survive until the adult tree above them dies and falls down. The exact answer to "at what point do these trees die" is a little tricky, but here's a figure from Kobe et al. showing estimated survival probability for different species as a function of light availability (the assumption is of independent mortality probabilities, given light availability, at each time step, so there is no particular stage at which individuals die in the model). Kobe, Richard K., Stephen W. Pacala, John A. Silander Jr, and Charles D. Canham. “Juvenile Tree Survivorship as a Component of Shade Tolerance.” Ecological Applications 5, no. 2 (1995): 517–532. Pacala, Stephen W., Charles D. Canham, John Saponara, John A. Silander Jr, Richard K. Kobe, and Eric Ribbens. “Forest Models Defined by Field Measurements: Estimation, Error Analysis and Dynamics.” Ecological Monographs 66, no. 1 (1996): 1–43. Pacala, Stephen W., Charles D. Canham, and John A. Silander Jr. “Forest Models Defined by Field Measurements: I. The Design of a Northeastern Forest Simulator.” Canadian Journal of Forest Research 23, no. 10 (1993): 1980–1988. Pacala, Stephen W., Charles D. Canham, John A. Silander Jr, and Richard K. Kobe. “Sapling Growth as a Function of Resources in a North Temperate Forest.” Canadian Journal of Forest Research 24, no. 11 (1994): 2172–2183.
{ "domain": "biology.stackexchange", "id": 10877, "tags": "botany, ecology, plant-physiology, trees" }
Where does gravitational potential energy come from?
Question: If a body with mass is a finite distance away from another body with mass, then they possess a gravitational potential energy. Consider that both the bodies didn't have any energy previously, but when they are in one another's gravitational influence they are gaining gravitational potential energy. Is this energy coming from a quantum field of gravity (just as electric potential energy comes from the photon field and strong nuclear potential energy from the gluon field)? Is this the case? If not, where am I wrong? Answer: Total energy is a conserved quantity, so gravitational potential energy always comes from some other form of energy. Consider that both the bodies didn't have any energy previously This is not possible. In order for the bodies to have gravitational potential energy now, they must have had some other form of energy previously. Or some other system must have transferred energy (work) to the bodies. A scenario where an isolated system without energy suddenly obtains energy violates the known laws of physics.
{ "domain": "physics.stackexchange", "id": 95035, "tags": "gravity, potential-energy" }
Mocking config data in JavaScript unit tests
Question: I'd really like someone to sanity check my approach for unit testing the summarise() function and mocking its dependencies. Background Each option has a set of values, which come from the app state Options also each have a config, which are defined in configs.js (potentially a large list, with 'dynamic' data-like variations) The tests I'm testing the summarise function. To isolate my tests, I've mocked the option configs. This means I don't have to couple the test to certain option configs, which allows settings to be changed freely. My tests work, but I feel it has a few issues: I can't directly spy on a config object, so I've exported a function getOptionConfig() which I can spy on. I feel like it would be cleaner to avoid an API and just spy on the object if possible. I feel it'd be much cleaner to pass the _optionConfigs object to summarise() (to avoid mocks completely). However, this would only push the same issue up to any functions which call summarise(). Because I'm forcing a return value in the spies, I'm not testing the parameter that I'm passing to getOptionConfig(). Is this bad? configs.js const _optionConfigs = { exampleOption: { type: 'red', }, exampleOption2: { type: 'blue', }, exampleOption3: { type: 'red', exclude: true, }, exampleOption4: { type: 'red', }, // ... the list goes on... }; export function getOptionConfig(id) { return _optionConfigs[id]; } summarise.js import { getOptionConfig } from './configs'; /** * Summarise an option's values * @param {string} optionName - an option name, used for referencing its config data * @param {array} optionValues - application state.
values of a particular option * @returns {string} a summary of the option's values */ export function summarise(optionName, optionValues) { const optionConfig = getOptionConfig(optionName); if (optionConfig.exclude) { return ''; } if (optionConfig.type === 'red') { return optionValues.map(value => value + ' with a dash of red').join(', '); } else if (optionConfig.type === 'blue') { return optionValues.map(value => value + ' with a bit of blue').join(', '); } } summarise.test.js // ENV - jasmine import * as configs from './configs'; describe('summarise', () => { it('ignores when excluded', () => { spyOn(configs, 'getOptionConfig').and.returnValue({ exclude: true, type: 'blue', }); const summary = summarise('testOption3'); expect(summary).toBe(''); }); it('summarises blue types', () => { spyOn(configs, 'getOptionConfig').and.returnValue({ type: 'blue', }); const summary = summarise('testOption2', [ 'Value 1', 'Value 2', ]); expect(summary).toBe('Value 1 with a bit of blue, Value 2 with a bit of blue'); }); it('summarises red types', () => { spyOn(configs, 'getOptionConfig').and.returnValue({ type: 'red', }); const summary = summarise('testOption', [ 'Value 1', 'Value 2', ]); expect(summary).toBe('Value 1 with a dash of red, Value 2 with a dash of red'); }); }); Answer: I agree with point number 2, just passing in the config is better (i.e. dependency injection). You could do it like this: // Definition of summarize: export function summarize(config, optionName, optionValues) { // stuff } // In some other file: const summarizeWithConfig = require('./summarize'); const config = require('./config'); // Create a new function which has its first argument bound to the config let summarize = summarizeWithConfig.bind(null, config);
However, binding up functions like this becomes a little tedious (with the extra name "summarizeWithConfig", the ugly bind syntax, etc.), so I would really recommend using classes instead, with the constructors of those classes as the injection points.
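A hypothetical sketch of what that class-based injection point could look like, adapted to the question's summarise logic:

```javascript
class Summariser {
  // the config lookup is injected, so tests can pass a stub instead of spying
  constructor(getOptionConfig) {
    this.getOptionConfig = getOptionConfig;
  }

  summarise(optionName, optionValues) {
    const config = this.getOptionConfig(optionName);
    if (config.exclude) return '';
    const suffix = config.type === 'red' ? ' with a dash of red' : ' with a bit of blue';
    return optionValues.map(v => v + suffix).join(', ');
  }
}

// production: new Summariser(getOptionConfig)
// tests:      new Summariser(() => ({ type: 'blue' })) -- no module spies needed
```

The test no longer touches the configs module at all, which removes the "I can't directly spy on a config object" problem from the question.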
{ "domain": "codereview.stackexchange", "id": 25191, "tags": "javascript, unit-testing, mocks" }
3 Eigenstates for $X$?
Question: Unless I have made an error, the Pauli $X$ operator has 3 eigenstates. Qiskit lists the two eigenstates for $X$: $|+\rangle= \frac 1{\sqrt2}(|1\rangle + |0\rangle)$ and $|-\rangle = \frac 1{\sqrt 2}(|0\rangle -|1\rangle)$. Attempting to prove this myself, I did the work behind finding eigenstates with $\lambda=\pm1$, found by solving $$\det(X-\lambda I) = 0.$$ Yet, I found that there were two possible solutions for the eigenstate correlated with eigenvalue $-1$. Assuming that we know our eigenvalues, we find eigenstates of $X$ through the form: $$(X-\lambda I)\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}=\begin{bmatrix} 0-\lambda&1\\1&0-\lambda \end{bmatrix}\begin{bmatrix} x_1\\x_2\end{bmatrix} = \begin{bmatrix} 0\\0 \end{bmatrix}.$$ When $\lambda=1$, our eigenstate is $\frac 1{\sqrt 2}(|0\rangle + |1\rangle)$, or $|+\rangle$. But when $\lambda=-1$, we are left with $x_1 + x_2 =0$, meaning that either $x_1$ or $x_2$ can be $-1$. This leaves our eigenvalue of $\lambda = -1$ having both $\frac 1{\sqrt 2}(|0\rangle - |1\rangle)$ and $\frac 1{\sqrt 2}(|1\rangle - |0\rangle)$ as eigenstates. This is problematic considering that the $X$ basis of measurement is based on the eigenstates of $X$: $|+\rangle$ and $|-\rangle$. Please help me understand this dilemma. Thank you. Answer: Remember that states differing by a global phase are physically equivalent. The two states mentioned differ by a global factor of $-1$. By the same token, one can find other eigenstates with eigenvalue $+1$ such as $(-|0\rangle-|1\rangle)/\sqrt{2}$.
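The global-phase point is easy to verify numerically; a quick sketch with plain arrays (no libraries assumed):

```javascript
const X = [[0, 1], [1, 0]];
// matrix-vector product for 2x2 real matrices
const mul = (M, v) => M.map(row => row.reduce((s, m, i) => s + m * v[i], 0));

const minus = [1 / Math.sqrt(2), -1 / Math.sqrt(2)];   // (|0> - |1>)/sqrt(2)
const minusFlipped = minus.map(x => -x);               // (|1> - |0>)/sqrt(2)

// X|psi> = -|psi> holds for both vectors: they differ only by a global phase of -1
const r1 = mul(X, minus);        // equals -1 * minus
const r2 = mul(X, minusFlipped); // equals -1 * minusFlipped
```

Both sign choices solve the eigenvalue equation, so there is still only one eigenvalue-(-1) ray, which is all the measurement basis cares about.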
{ "domain": "physics.stackexchange", "id": 92829, "tags": "quantum-information" }
How exactly are nucleoli made of NORs?
Question: I read that nucleoli are made of DNA, RNA, and nucleolus organizer regions. I don't quite understand how that happens with NORs. Isn't there an envelope delimiting the nucleolus from the rest of the nucleus content? Do NORs from different chromosomes come together? I mean, they are not detaching from the rest of the chromosome and aggregating freely... Answer: No, there is no membrane that separates the nucleolus from the rest of the nucleus. The nucleolus is an aggregate of all kinds of molecules involved in ribosome assembly, such as precursor and mature rRNA, small nucleolar RNAs, rRNA-processing enzymes, ribosomal proteins, and partly assembled ribosomes (but no fully functional ribosomes). Aside from all these proteins and RNAs, there are indeed regions of chromosomes, called nucleolus organizer regions (NORs), present in the nucleolus. These NORs are located near the tips of the short (p) arms of chromosomes 13, 14, 15, 21, and 22, so yes, NORs from different chromosomes do come together in the nucleolus while remaining attached to their chromosomes. During interphase these regions decondense due to proteins like nucleolin and are thus open for transcription. Many of the proteins I mentioned are needed to form, or are present in, the aggregate that is the nucleolus.
{ "domain": "biology.stackexchange", "id": 9172, "tags": "cell-biology" }
What exactly is hydrolysis? What are the products of hydrolysis of aluminium?
Question: I came across three reactions while studying p-block compounds in inorganic chemistry. $$\ce{2Al + 2NaOH + 6H2O -> 2 Na[Al(OH)4] + 3H2} \label{eq:1} \tag{1}$$ $$\ce{Al2O3 + 2NaOH + 3H2O -> 2 Na[Al(OH)4]} \label{eq:2} \tag{2}$$ $$\ce{Al2O3 + 6NaOH + 3H2O -> 2 Na3[Al(OH)6]} \label{eq:3} \tag{3}$$ Look at the above reactions. $\eqref{eq:1}$ and $\eqref{eq:2}$ have different reactants (aluminium and aluminium oxide) but they give the same product. On the other hand, $\eqref{eq:2}$ and $\eqref{eq:3}$ have the same reactants but give different products. What exactly is going on in these reactions? How do I predict which product is going to be formed in the major amount? The writers of the book haven't specified the reaction conditions. Thanks Answer: I will answer your question in a more general way, since the specific one depends on conditions the book does not give. Aluminium is absolutely a metal, but it can manifest peculiar features of non-metals. Its oxide, $\text{Al}_2\text{O}_3$, is very inert, while the hydroxide $\text{Al}(\text{OH})_3$ is a white gelatinous solid with amphoteric behavior; that is, it can react with both acids and bases. I suggest you read the "bible", Chemistry of the Elements by Greenwood and Earnshaw.
{ "domain": "chemistry.stackexchange", "id": 15254, "tags": "inorganic-chemistry, coordination-compounds, hydrolysis" }
What does allelomorph mean?
Question: Is there any difference between allele and allelomorph, since most websites call them the same? If they are the same, then why two different terms? Answer: Allele, allelomorphs - the common part in both is "allele": alternative variants of a gene, occupying the same locus on homologous chromosomes and coding for similar or contrasting versions of a trait, qualify as alleles. Now let's look at the second part, "morph" - the adjective "morphological" means relating to the form or structure of things. Combining "morph" with "poly" gives "polymorphic" (meaning many shapes/forms) - polymorphism is the ability of something to take on many forms. This is the same behavior that we see in alleles. Let me put this polymorphism more specifically for our topic. Gene polymorphism - "A gene is said to be polymorphic if more than one allele of a gene exists within a population. In addition to having more than one allele at a specific locus, each allele must also occur in the population at a rate of at least 1% to generally be considered polymorphic." - Wiki definition. Comparing the definition of allele with this definition of gene polymorphism, we get allelomorphs - a single gene taking different forms, with multiple variants occurring within a given population. Here is an example for easier understanding: eye color is a trait. Let us say that eye color is controlled by a single gene, for simplicity (this is generally not true in nature). The blue-eye gene variant is an allele, the red-eye gene variant is an allele, the yellow-eye gene variant is an allele. Your eye color is then controlled by which of these alleles you have for the eye-color gene. Here the eye-color gene takes different forms, thus making it polymorphic. Putting all these forms together we get allelomorphs (plural); each individual one is an allele/allelomorph (singular). So there is no real difference between the two terms: "allelomorph" is the original, full word, which was later shortened to "allele".
{ "domain": "biology.stackexchange", "id": 10996, "tags": "genetics, terminology" }
Gravitational Waves - Are all detectors finding the same gravitational waves?
Question: I read that there have been approximately 90 recorded cases of gravitational waves. Have the 4 different gravitational wave detectors agreed on specific individual recordings or have all 90 cases been identified by individual detectors but not confirmed by other detectors? Answer: Most, but not all, detections have been detected simultaneously$^\star$ by two or more detectors. It helps to know a little history. O1 (observing run 1): September 2015 - January 2016. LIGO-Hanford and LIGO-Livingston were both operating, and all events were detected by both detectors. (I think there were 3 events detected in total; one of these was first announced as a candidate and later upgraded to a detection.) In fact, the consistency of the signal in both detectors (in terms of arrival time and other parameters) is an important criterion for being able to claim a detection at all, because it reduces the probability of a terrestrial effect causing the signal. (It's much harder to think of terrestrial effects that can cause two gravitational wave detectors to go off simultaneously than to think of effects that would cause a trigger in one detector). O2: December 2016 - August 2017: LIGO-Hanford and LIGO-Livingston were operating the entire time and both detected all (approximately 10) events. Virgo was added to the network in August, and participated in the detection of at least 2 events. Ironically, Virgo did not detect GW170817 (the famous binary neutron-star multi-messenger event), but the non-detection of Virgo actually helped to constrain the sky position of the event significantly (because, essentially, the event had to be in a blind spot for Virgo), which was crucial in enabling the multi-messenger detection. O3: April 2019 - March 2020: LIGO-Hanford, LIGO-Livingston, and Virgo were all operating. KAGRA also started running, but not at a sufficient level of sensitivity to detect events.
There was a mix of events detected by all three detectors and by two detectors, and for the first time, "single" events detected by only one detector were discovered. (Detecting "singles" required a lot of development of data analysis algorithms, and probably would not have been possible as a first discovery of GWs, but became possible as there was increased confidence working with the data). The main factor determining the number of detectors seeing an event is simply which combination of detectors is online and in observing mode at the time the event occurs. There is regular maintenance performed where one or more of the detectors is taken offline, and seismic events can lead to one or more of the interferometers losing lock, meaning they are not able to observe until they are re-locked. O4 is supposed to start later this year, if all goes well. $^\star$ By "simultaneously", I mean "up to the light travel time between the detectors (which is order 10 ms for Hanford and Livingston)." The small time delay (as well as phase shifts and differences in the amplitude of the signal due to the interaction of the polarization of the wave and the detector antenna pattern) allows the network to localize the events on the sky.
{ "domain": "physics.stackexchange", "id": 87143, "tags": "gravitational-waves, gravitational-wave-detectors" }
Is the radiation problem actually solved in the classical quantum model of hydrogen?
Question: It's often said that in classical physics an electron-proton system is not stable due to "Bremsstrahlung" and that one instead has to look at it quantum mechanically. This doesn't make sense to me. The quantum mechanical Hamiltonian doesn't account for "Bremsstrahlung" either. Is this taken care of only in QED? Answer: The quantum mechanical model of a bound electron-proton system does not include bremsstrahlung because the electrons are not small balls orbiting the nucleus. They exist in stationary energy eigenstates, and do not emit any radiation unless they are making a transition. At the most simplified level, we could simply couple the classical electromagnetic field to the quantum mechanical model of the atom by inserting the expectation values of the charge and current densities into Maxwell's equations. If one does this, then one finds that the Larmor radiation formula yields a radiated power which depends on $\frac{d}{dt}\langle\mathbf p\rangle$. For an energy eigenstate, $\langle \mathbf p \rangle=0$, so no radiation is generated. As a more sophisticated model, one could "second-quantize" the electromagnetic field and couple the single-electron Hilbert space to the photon Fock space. In this picture, if we restrict our attention to the space of states in which the electron is in an "old" energy eigenstate, then the zero-photon state of the electromagnetic field is an effective ground state, and no bremsstrahlung photons are emitted. That being said, such a state is not a true eigenstate of the full Hamiltonian if the electron is not in its ground state, and vacuum fluctuations can induce transitions in which the electron moves to a lower energy state and the photon number increases by one - this is spontaneous emission.
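The vanishing of $\frac{d}{dt}\langle\mathbf p\rangle$ can be made explicit (a standard one-line check, added for completeness): for an energy eigenstate $|\psi(t)\rangle = e^{-iEt/\hbar}|\psi_E\rangle$ every expectation value is time-independent, so Ehrenfest's theorem gives $$ \langle\mathbf p\rangle = m\,\frac{d}{dt}\langle\mathbf x\rangle = 0 \,, \qquad \frac{d}{dt}\langle\mathbf p\rangle = 0 \,, $$ and the Larmor power, which is proportional to $\left|\frac{d}{dt}\langle\mathbf p\rangle\right|^2$, vanishes.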
{ "domain": "physics.stackexchange", "id": 70799, "tags": "quantum-mechanics, atomic-physics, quantum-electrodynamics, hydrogen" }
temperature of each layer of a cooler
Question: I have a cooler with five layers, and I know the temperature outside and inside of the cooler. I created a MATLAB model that requires the temperature of each layer of the cooler as input. I want to know how I can find the temperature of each layer (given that I know each layer's thermal resistance). Can someone help me find the temperature of each layer? Answer: Finally, I found out how to calculate the temperature of each layer.
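The question can presumably be answered with the standard steady-state series-resistance calculation (a "thermal voltage divider"); a minimal sketch with made-up resistance values:

```javascript
// Steady-state 1-D conduction: heat flow q = dT_total / sum(R_i), and the
// temperature at each layer boundary changes in proportion to the resistance crossed.
function interfaceTemperatures(tInside, tOutside, resistances) {
  const rTotal = resistances.reduce((s, r) => s + r, 0);
  const q = (tOutside - tInside) / rTotal;              // same heat flow through every layer
  const temps = [tInside];
  for (const r of resistances) temps.push(temps[temps.length - 1] + q * r);
  return temps; // inner surface, each interface in order, ..., outer surface
}

// five layers, hypothetical resistances in K/W, 5 degC inside / 30 degC outside
const temps = interfaceTemperatures(5, 30, [0.5, 2.0, 0.5, 1.0, 1.0]);
```

Layer mid-point temperatures, if the model wants those instead, are just the averages of adjacent boundary temperatures.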
{ "domain": "physics.stackexchange", "id": 48474, "tags": "thermodynamics, temperature, thermal-conductivity" }
Probability basic rules when additional variables are added.
Question: I'm very new in this field, but yesterday I was thinking about the following problem. Inside a black box there are ten balls; five are red, and five are blue. If the balls are the same size, the probability to get one red ball is 0.5, and to get a blue one is 0.5. But what happens if the balls are not the same size? If the balls are of different sizes, for example 2:1 or 3:1, how does this change the probability to get a blue or a red one? If the shapes are, for example, cubes and other shapes, does this modify the probabilities? Any idea where I can find an answer to my question? Answer: You can't really settle on a probability value unless you specify more precisely the procedure used to extract balls from the urn. Even in the case where the balls have equal weight, if you were allowed to peek into the urn and select a ball of the color of your liking, then the odds would not be 1:1 (you could always extract the blue ball, for example). Similarly, imagine a case where instead of having two balls of the same size you have a large cube and a tiny cylinder, but extraction from the urn is performed by another person who decides which one to extract depending on the flip of a fair coin. Then you could expect the probabilities to be 50% for the cube and 50% for the cylinder. The urn model in which balls are considered equally likely to be extracted is nothing more than what the name says: a model. That is, an idealized description of some phenomenon. It may or may not be a useful representation of an aspect of the world depending on the context. For example, even the simple model of a coin flip which assigns a probability of 50% to heads and 50% to tails is a simplification that hides a lot of complicated physics, but which nevertheless is a good approximation to our day-to-day experience. See the paper "Dynamical bias in the coin toss" by Persi Diaconis and others (https://statweb.stanford.edu/~susan/papers/headswithJ.pdf).
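To make the answer's point concrete: once you commit to one specific extraction procedure, the odds follow directly. The sketch below assumes a hypothetical procedure in which each ball's chance of being grabbed is proportional to a weight (e.g. its size) - this is an assumption about the procedure, exactly the kind of modeling choice the answer says must be specified.

```python
import random

# One concrete extraction procedure (an assumption, as the answer stresses):
# each ball's chance of being grabbed is proportional to a weight, e.g. its size.
def draw_probabilities(weights):
    total = sum(weights.values())
    return {color: w / total for color, w in weights.items()}

# five red and five blue balls, each blue ball twice as easy to grab (2:1)
weights = {"red": 5 * 1, "blue": 5 * 2}
probs = draw_probabilities(weights)   # blue: 10/15 = 2/3

# quick simulation of the same procedure
random.seed(0)
draws = random.choices(list(weights), weights=list(weights.values()), k=100_000)
blue_freq = draws.count("blue") / len(draws)
```

Under this size-proportional model the 2:1 size ratio shifts the blue probability from 1/2 to 2/3; a different procedure (like the coin-flipping extractor in the answer) would give a different number.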
{ "domain": "physics.stackexchange", "id": 52765, "tags": "probability" }
Buffered asynchronous writer
Question: Imagine I have a connection in which I have to write numbers, represented by this contract: interface ISocket { Task WriteAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancel); } There is one single thread calling this WriteAsync method. Till now each caller to ISocket was buffering independently and knew when it had to flush; however, I would like to centralize that logic. Also I want to make sure that when such a buffer is being transmitted, I can take advantage of async/await and release the thread pool thread. My idea would be to create something like this: class SocketWriter { private ISocket _socket; private Byte[] _buffer; // assume 8K private Int32 _cursor; [ ... ] private async Task FlushCacheIfNotEnoughSpace(Int32 length, CancellationToken cancel) { if(!FitsInBuffer(length)) { await _socket.WriteAsync(_buffer, 0, _cursor, cancel).ConfigureAwait(false); _cursor = 0; } } [ ... ] public async Task WriteAsync(UInt64 value, CancellationToken cancel) { await FlushCacheIfNotEnoughSpace(8, cancel).ConfigureAwait(false); CopyToByteArray(value); } public Task Flush(CancellationToken cancel) { return FlushCacheIfAny(cancel); } } There are some memory constraints, so I cannot create more than one of those Byte[] buffers. CopyToByteArray requires enough space in the _buffer and increments _cursor accordingly. Note that FlushCacheIfNotEnoughSpace only awaits if the given length does not fit the current buffer.
My intention is that when the buffer is not complete and there is space for another number this method call executes synchronously, or properly awaits using IO completion ports otherwise: await _writer.WriteAsync(1UL, cancel).ConfigureAwait(false); I wonder if there is a better or more elegant way of doing this, because it seems like a lot of "noise": await _writer.WriteAsync(obj.Member1, cancel).ConfigureAwait(false); await _writer.WriteAsync(obj.Member2, cancel).ConfigureAwait(false); await _writer.WriteAsync(obj.Member3, cancel).ConfigureAwait(false); await _writer.WriteAsync(obj.Member4, cancel).ConfigureAwait(false); Answer: OK, now it is SOLID :) Let's not mix infrastructure and application code. Unfortunately a lot of stuff is missing in .NET, so infrastructure first. AsyncStream: public interface IAsyncStream { Task WriteAsync( byte[] buffer, int offset, int count, CancellationToken cancellationToken); Task FlushAsync(CancellationToken cancellationToken); } BufferedAsyncStream: public class BufferedAsyncStream : IAsyncStream { public BufferedAsyncStream(IAsyncStream inner, long bufferSize = 8192) { Inner = inner; Buffer = new byte[bufferSize]; BufferStream = new MemoryStream(Buffer); } public async Task WriteAsync( byte[] buffer, int offset, int count, CancellationToken cancellationToken) { if (FreeSpace < count) await FlushAsync(cancellationToken) .ConfigureAwait(false); BufferStream.Write(buffer, offset, count); } public async Task FlushAsync(CancellationToken cancellationToken) { await Inner.WriteAsync(Buffer, 0, Allocated, cancellationToken) .ConfigureAwait(false); await Inner.FlushAsync(cancellationToken) .ConfigureAwait(false); BufferStream.Seek(0, SeekOrigin.Begin); } int FreeSpace => (int)BufferStream.Length - (int)BufferStream.Position; int Allocated => (int)BufferStream.Position; IAsyncStream Inner { get; } byte[] Buffer { get; } MemoryStream BufferStream { get; } } AsyncWriter: public class AsyncWriter { public AsyncWriter(IAsyncStream stream)
{ Stream = stream; } public Task WriteAsync(ulong value, CancellationToken cancellationToken) { var buffer = BitConverter.GetBytes(value); return Stream.WriteAsync(buffer, 0, buffer.Length, cancellationToken); } public Task FlushAsync(CancellationToken cancellationToken) { return Stream.FlushAsync(cancellationToken); } IAsyncStream Stream { get; } } Now application code. ISocket being adapted to IAsyncStream: class SocketStream : IAsyncStream { public SocketStream(ISocket socket) { Socket = socket; } public Task WriteAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken) { return Socket.WriteAsync(buffer, offset, count, cancellationToken); } public Task FlushAsync(CancellationToken cancellationToken) { return Task.CompletedTask; } ISocket Socket { get; } } So, we can construct writer in the following way: var writer = new AsyncWriter( new BufferedAsyncStream( new SocketStream(socket))); UPDATE You could use this overload to remove "noise": public class AsyncWriter { ... public async Task WriteAsync( CancellationToken cancellationToken, params ulong[] values) { foreach (var value in values) await WriteAsync(value, cancellationToken) .ConfigureAwait(false); } Now it is this way: await writer .WriteAsync( cancellationToken, obj.Member1, obj.Member2, obj.Member3, obj.Member4) .ConfigureAwait(false);
{ "domain": "codereview.stackexchange", "id": 18298, "tags": "c#, async-await" }
Is there a quantum implementation like HashSet?
Question: There are many data structures in classical computers, like Tree, HashSet, etc. These data structures give convenience to the performance (time complexity) of algorithms. I am wondering how to create a similar data structure on a quantum computer. Specifically, I want to know if there is a quantum HashSet that supports $\mathcal{O}(1)$ cost for adding and accessing elements. If not, how might one implement a hash function on a quantum computer? I think a quantum computer can do at least the same as a classical computer, but I could not find a solution on Google. Answer: A major problem with implementing a hash set on a quantum computer is that, if you are inserting a superposed item, it can go into a superposition of buckets. But if you don't operate on a bucket, you can't possibly have inserted an item into it. Therefore to insert (or query) a superposed item correctly, which goes into a superposition of all the buckets, you have to apply at least one operation per bucket. Therefore the number of operations scales like O(n) instead of O(1). That being said, this is kind of unfair to the quantum computer. The classical logic circuit implementing a hash set also uses O(n) gates to perform a query, but in a CPU we hide almost all of that circuit behind an abstraction we call "querying memory". We only pay attention to the time cost instead of the hardware time cost. The quantum circuit model tends to highlight the hardware time cost. So, in order for a quantum hash set to be efficient, you need quantum memory that's cheap enough compared to quantum computation that you think of querying across that memory as having constant cost instead of cost proportional to the size of the memory.
{ "domain": "quantumcomputing.stackexchange", "id": 2748, "tags": "quantum-gate, quantum-algorithms, complexity-theory, quantum-circuit" }
How/Why did Feynman relate the element of Hamiltonian matrix $H_{12}$ to the amplitude to go from $|1\rangle$ to $| 2\rangle$?
Question: Our problem, then, is to understand the matrix $U(t_2,t_1)$ for an infinitesimal time interval—for $t_2=t_1+Δt$. We ask ourselves this: If we have a state $ϕ$ now, what does the state look like an infinitesimal time $Δt$ later? Let’s see how we write that out. Call the state at the time t, $\ket{ψ(t)}$ (we show the time dependence of $ψ$ to be perfectly clear that we mean the condition at the time $t$). Now we ask the question: What is the condition after the small interval of time $Δt$ later? The answer is $ \newcommand{\bk}[2]{\left\langle #1 | #2 \right\rangle} \newcommand{\ket}[1]{\left| #1 \right\rangle} \newcommand{\bra}[1]{\left\langle #1 \right|} \newcommand{\biik}[3]{\left\langle #1 | #2| #3\right\rangle} $ $$\ket{ψ(t+Δt)}=U(t+Δt,t)\ket{ψ(t)}.$$ We can also resolve the $\ket{ψ(t)}$ into base states and write $$\bk{i}{ψ(t+Δt)}=\sum_j \biik{i}{U(t+Δt,t)}{j} \bk{j}{ψ(t)}.$$Each amplitude at $(t+Δt)$ is proportional to all of the other amplitudes at $t$ multiplied by a set of coefficients. Let’s call the $U$-matrix $U_{ij}$, by which we mean $$U_{ij}=\biik{i}{U}{j}.$$ Then we can write $$C_i(t+Δt)=\sum_j U_{ij}(t+Δt,t)C_j(t).$$This, then, is how the dynamics of quantum mechanics is going to look. [..] if $Δt$ goes to zero, nothing can happen—we should get just the original state. So, $U_{ii}→1$ and $U_{ij}→0$, if $i≠j$. 
In other words, $U_{ij}→δ_{ij}$ for $Δt→0.$ Also, we can suppose that for small $Δt$, each of the coefficients $U_{ij}$ should differ from $δ_{ij}$ by amounts proportional to $Δt$; so we can write $$U_{ij}=δ_{ij}+K_{ij}Δt.$$ However, it is usual to take the factor $(−i/ℏ)$ out of the coefficients $K_{ij}$, for historical and other reasons; we prefer to write $$U_{ij}(t+Δt,t)=δ_{ij}−\frac{i}{ℏ} H_{ij}(t)Δt.$$The terms $H_{ij}$ are just the derivatives with respect to $t_2$ of the coefficients $U_{ij}(t_2,t_1)$, evaluated at $t_2=t_1=t.$ Using this form for $U$, we have $$C_i(t+Δt)=\sum_j \left[δ_{ij}−\frac{i}{ℏ}H_{ij}(t)Δt\right]C_j(t).$$ Taking the sum over the $δ_{ij}$ term, we get just $C_i(t)$, which we can put on the other side of the equation. Then dividing by $Δt$, we have what we recognize as a derivative $$\frac{C_i(t+Δt)−C_i(t)}{Δt}=−\frac{i}{ℏ}\sum_j H_{ij}(t)C_j(t)$$ or $$iℏ\frac{dC_i(t)}{dt}= \sum_j H_{ij}(t)C_j(t).$$ This is how Feynman defined $H_{ij}$ as the derivative of $U_{ij}$. This is the ${ij}^\text{th}$ element of the Hamiltonian matrix. Then he wrote rather abruptly, The coefficients $H_{ij}$ are called the Hamiltonian matrix or, for short, just the Hamiltonian. (How Hamilton, who worked in the 1830s, got his name on a quantum mechanical matrix is a tale of history.) It would be much better called the energy matrix, for reasons that will become apparent as we work with it. So the problem is: Know your Hamiltonian! So, $H_{ij}$, which is the time-derivative of the $U_{ij}$ matrix, is related to the energy of the system. But after two chapters, he mentioned out of nowhere that $H_{ij}$ is the amplitude to go from $\ket 1$ to $\ket 2$. As A positively ionized hydrogen molecule consists of two protons with one electron worming its way around them. If the two protons are very far apart, what states would we expect for this system?
The answer is pretty clear: The electron will stay close to one proton and form a hydrogen atom in its lowest state, and the other proton will remain alone as a positive ion. So, if the two protons are far apart, we can visualize one physical state in which the electron is "attached" to one of the protons. There is, clearly, another state symmetric to that one in which the electron is near the other proton, and the first proton is the one that is an ion. We will take these two as our base states, and we'll call them $\ket{1}$ and $\ket{2}.$ There is some small amplitude for the electron to move from one proton to the other. As a first approximation, then, each of our base states $\ket{1}$ and $\ket{2}$ will have the energy $E_0$, which is just the energy of one hydrogen atom plus one proton. We can take that the Hamiltonian matrix elements $H_{11}$ and $H_{22}$ are both approximately equal to $E_0.$ The other matrix elements $H_{12}$ and $H_{21}$, which are the amplitudes for the electron to go back and forth, we will again write as $−A$. I'm not understanding this; $H_{12}$ & $H_{21}$ are the time-derivatives of $U_{12}\;\&\;U_{21}$ respectively. How can they be the amplitude to go from $\ket 1$ to $\ket 2$? After all, it is related to the Kronecker delta $\delta_{ij}$ or, if under time evolution, then related to $U_{ij}$. $U_{ij}$ should be the amplitudes for the electron to go back and forth, that is, the amplitude for the hydrogen ion to go from $\ket 1$ to $\ket 2$ or vice versa. So, why did Feynman write $H_{ij}$ as the amplitude instead of $U_{ij}$? After all, $H_{ij}$ is the time-derivative of $U_{ij}$ and not an amplitude to go from $\ket 1$ to $\ket 2$. Answer: $ \newcommand{\bk}[2]{\left\langle #1 | #2 \right\rangle} \newcommand{\ket}[1]{\left| #1 \right\rangle} \newcommand{\bra}[1]{\left\langle #1 \right|} \newcommand{\biik}[3]{\left\langle #1 | #2| #3\right\rangle} $ To first order, we can write $\hat U(\delta t)=1-\frac{i}{\hbar}\hat H\delta t$.
Then if we start in state $\ket{1}$, our amplitude on state $\ket{2}$ is $\bra{2}\hat U(\delta t)\ket{1}=-\frac{i}{\hbar}\delta t\bra{2}\hat H\ket{1}$. So we see that the instantaneous transition rate to go from $\ket{1}$ to $\ket{2}$ is (up to factors of $\hbar$) $\bra{2}\hat H\ket{1}$, as desired.
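A small numerical sanity check of the answer (with $\hbar = 1$ and made-up values for $E_0$ and $A$): for $H_{11}=H_{22}=E_0$ and $H_{12}=H_{21}=-A$, the propagator can be written exactly as $U(t)=e^{-iE_0 t}(\cos(At)\,I + i\sin(At)\,\sigma_x)$, and its off-diagonal element reduces to $-\frac{i}{\hbar}H_{21}\,\delta t$ for small $\delta t$.

```python
import cmath

# Illustrative two-state Hamiltonian H = [[E0, -A], [-A, E0]] with hbar = 1.
# Exact propagator: U(t) = exp(-i*E0*t) * (cos(A*t)*I + i*sin(A*t)*sigma_x),
# so the 2 <- 1 matrix element of U(t) is exp(-i*E0*t) * i*sin(A*t).
E0, A = 1.0, 0.2   # made-up numbers, not from the original post

def amplitude_1_to_2(dt):
    return cmath.exp(-1j * E0 * dt) * 1j * cmath.sin(A * dt)

dt = 1e-4
exact = amplitude_1_to_2(dt)
first_order = (-1j) * (-A) * dt   # -(i/hbar) * H_21 * dt, with H_21 = -A
```

For small dt the exact amplitude agrees with the first-order expression, which is exactly the answer's point: H_21 sets the instantaneous transition rate, while U_21 is the finite-time amplitude.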
{ "domain": "physics.stackexchange", "id": 25172, "tags": "quantum-mechanics, operators, hamiltonian, time-evolution" }
Neutrino versus Anti-neutrino Detection
Question: Is there a detection method in use that can distinguish a neutrino from its anti-neutrino? Answer: Some detection methods only detect neutrinos, as opposed to antineutrinos. For instance, the original solar neutrino experiment in the Homestake mine worked on the basis of the reaction $$ \nu+{}^{37}{\rm Cl}\to {}^{37}{\rm Ar}+e^- $$ if I recall correctly. Only a neutrino can induce that reaction; an antineutrino can't. Other experiments can detect neutrinos through multiple channels, some of which (elastic scattering off an electron, for instance) are sensitive to both $\nu,\overline\nu$ and some of which only detect one.
{ "domain": "physics.stackexchange", "id": 15318, "tags": "particle-physics, experimental-physics, neutrinos" }
Can't compile my C++ ros_nxt publisher
Question: I'm new to programming, so I'm trying to make my own publisher to the topic which controls the motors of my NXT, so here is the code that I have written. I'm not sure if it is the correct thing that I need to control the motors, but I can't compile it. #include <nxt_msgs/JointCommand.h> #include <ros/ros.h> template<class M> ros::Publisher ros::NodeHandle::advertise(const nxt_msgs::JointCommand, uint32_t, bool) int main(int argc, char **argv) { ros::init(argc, argv, "JointCommand"); ros::NodeHandle n; ros::Publisher pub = n.advertise<nxt_msgs::>("JointCommand", 1); ros::Rate loop_rate(10); int test=1; nxt_msgs::JointCommand c; c = nxt_msgs::JointCommand(); test = c.effort; pub.publish(c); ros::spinOnce(); loop_rate.sleep(); return 0; } When I try to make my package, it gives me the following error: /ros_workspace/NXT/src/rc.cpp:6:1: error: expected initializer before ‘int’ Originally posted by Saho on ROS Answers with karma: 1 on 2013-01-23 Post score: 0 Answer: The error seems to be caused by the template before int main(). I'm not sure what you're trying to do with that line...it looks like a function prototype without a semicolon. The template also doesn't make sense. If the prototype was valid, you'd also want to add a semicolon at the end to terminate it. Originally posted by mirzashah with karma: 1209 on 2013-01-23 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 12554, "tags": "ros" }
Find the values of $A$, $B$, and $C$ such that the action is a minimum
Question: A particle is subjected to the potential $V (x) = −F x$, where $F$ is a constant. The particle travels from $x = 0$ to $x = a$ in a time interval $t_0 $. Assume the motion of the particle can be expressed in the form $x(t) = A + B t + C t^2$. Find the values of $A, B$ and $C$ such that the action is a minimum. I was thinking it can be solved using the Lagrangian rather than the Hamiltonian. There's no frictional force. $$L=\frac{1}{2}m\dot{x}^2+Fx$$ $$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}}\right)-\frac{\partial L}{\partial x}=0\implies m\ddot{x}=F\implies \ddot{x}=\frac{F}{m}$$ Differentiate $x(t)$ twice. $$2C=\frac{F}{m}\implies C=\frac{F}{2m}$$ For finding B I was thinking to integrate $\ddot{x}$ once. $$\dot{x}=\int \ddot{x} \mathrm dt =\ddot{x}t$$ The initial position is 0, so I am not writing the constant. $$\dot{x}=\frac{F}{m}$$ Differentiate $x(t)$ once. $$B+2Ct=\frac{F}{m}$$ $$\implies B=\frac{F}{m}-\frac{2Ft}{2m}=-\frac{Ft}{2m}$$ Again, going to integrate $\ddot{x}$ twice. $$x=\iint \ddot{x} dt dt=\frac{\ddot{x}t^2}{2}$$ The initial velocity and initial position are 0. $$x=\frac{Ft^2}{2m}$$ $$A+Bt+Ct^2=\frac{Ft^2}{2m}$$ $$A=\frac{Ft^2+Ft-F}{2m}$$ According to my work, I think that C is the minimum (I think it is B, because B is negative; negative is less than positive). And A is maximum. A person was saying that "It asked you to minimise the action; it told you the particle moved from $0$ to $a$ in time $t_0$; it gave you the equation of the trajectory." In my work, where should I put the interval? Answer: The Euler–Lagrange equation gives the equations of motion that, once solved, give you a family of solutions that minimize the action. A unique solution is given by specifying boundary conditions. It is just a case of inputting those boundary conditions. Without loss of generality, let $ x(0)=0 $ and $x(t_0)=a $. Integrating $\ddot{x} = \frac{F}{m}$ gives the general solution $x(t)=\frac{F}{2m}t^2 +Bt + A$, fixing $C$.
Subbing in $x(0)=0$ gives $A=0$ and subbing $x(t_0)=a$ gives $B$ as $B=\frac{a - Ct_0^2}{t_0}$.
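The answer's result can be checked numerically. The values of F, m, a, and t0 below are illustrative placeholders; the checks confirm the boundary conditions and the equation of motion x'' = F/m.

```python
# Numerical check of the answer's coefficients (F, m, a, t0 are made-up values).
F, m, a, t0 = 2.0, 1.0, 5.0, 3.0

C = F / (2 * m)               # from the equation of motion x'' = F/m = 2C
A_coef = 0.0                  # from the boundary condition x(0) = 0
B = (a - C * t0**2) / t0      # from the boundary condition x(t0) = a

def x(t):
    return A_coef + B * t + C * t**2
```

Note that B is fixed entirely by the endpoint condition x(t0) = a, which is where the interval (0, t0) enters - the step the question was missing.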
{ "domain": "physics.stackexchange", "id": 82448, "tags": "homework-and-exercises, lagrangian-formalism, projectile" }
How long can an octopus survive out of the water?
Question: I saw videos of octopuses crawling on the ground and I was wondering how long an octopus can survive when out of the water? Does it depend on either its size (i.e., does a big octopus from deep sea survive longer than a tiny octopus) or the species? Answer: Short answer Under ideal conditions, an octopus may survive several minutes on land. Background Octopuses have gills and hence are dependent on water for the exchange of oxygen and carbon dioxide. Gills collapse on land because of the lack of buoyancy (source: UC Santa Barbara). Octopuses have three hearts. Two of these are dedicated to move blood to the animal’s gills, emphasizing the animal's dependence on its gills for oxygen supply. The third heart keeps circulation flowing to the organs. This organ heart actually stops beating when the octopus swims, explaining the species’ tendency to crawl rather than swim (source: Smithsonian). According to the Scientific American, crawling out of the water is not uncommon for species of octopus that live in intertidal waters or near the shore (Fig. 1). Because most species of octopus are nocturnal, we humans just don't see it often. Their boneless bodies are seemingly unfit for moving out of water, but it is thought to be food-motivated, e.g. shellfish and snails that can be found in tidal pools. Octopuses depend on water to breathe, so in addition to being a cumbersome mode of transportation, the land crawl is also a gamble. When their skin stays moist, a limited amount of gas exchange can occur through passive diffusion. This allows the octopus to survive on land for short periods of time, because oxygen is absorbed through the skin, instead of the gills. In moist, coastal areas it is believed they can crawl on land for at least several minutes. Mostly they go from pool to pool, never staying out of the water for extended periods. If faced with a dry surface in the sun, they will not survive for long (source: Scientific American). Fig. 1. Octopus on land. 
Source: BBC Your sub-questions: I think small octopuses may survive longer, since passive gas exchange is the mode of survival on land. In general, an increase in diameter causes the volume to increase with the third power, while the surface increases with the second power. Therefore, an increase in body size reduces the surface-to-volume ratio and leads to reduced gas exchange. Because passive gas exchange needs large surface-to-volume ratios, I am inclined to believe small octopuses may cope better with terrestrial environments. However, in hot, arid conditions it is likely a bigger one will have an advantage, because it can store more oxygen in its blood. In terms of species, I have to say I couldn't find any sources going into so much detail on this. Likely, as said, smaller species may do better in cool, moist conditions, while larger specimens may be better off in dry environments. Reference: Harmon Courage, Scientific American (November 2011)
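The surface-to-volume scaling argument can be made explicit with a crude idealization - modeling the animal as a sphere of radius r (an assumption made purely for illustration):

```python
import math

# Idealized sketch: treat the animal as a sphere of radius r (a crude assumption).
# Surface S = 4*pi*r**2 grows like r**2 while volume V = (4/3)*pi*r**3 grows like
# r**3, so the surface-to-volume ratio S/V = 3/r shrinks as the animal gets bigger.
def surface_to_volume(radius):
    surface = 4 * math.pi * radius**2
    volume = (4 / 3) * math.pi * radius**3
    return surface / volume

small, big = surface_to_volume(1.0), surface_to_volume(2.0)
```

Doubling the radius halves the surface-to-volume ratio, which is why passive skin diffusion favors smaller bodies.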
{ "domain": "biology.stackexchange", "id": 8074, "tags": "physiology, marine-biology, invertebrates, molluscs" }
Is Gauss' law valid for time-dependent electric fields?
Question: Maxwell's equation $\boldsymbol{\nabla}\cdot \textbf{E}(\textbf{r})=\frac{\rho(\textbf{r})}{\epsilon_0}$ is derived from Gauss's law in electrostatics (which is in turn derived from Coulomb's law). Therefore, $\textbf{E}$ must be an electrostatic field, i.e., time-independent. Then how is this equation valid for the electric field $\textbf{E}(\textbf{r},t)$ which is time-dependent (for example, the electric field of an electromagnetic wave)? Can we prove that $\boldsymbol{\nabla}\cdot \textbf{E}(\textbf{r},t)=\frac{\rho(\textbf{r},t)}{\epsilon_0}$ ? EDIT: I have changed $\boldsymbol{\nabla}\cdot \textbf{E}=0$ to $\boldsymbol{\nabla}\cdot \textbf{E}=\frac{\rho}{\epsilon_0}$ in the question. Answer: You need to watch what you mean by the ambiguous term "derive", which can mean either "was derived historically" (i.e. was motivated by or is a derivative of, in the non-mathematical sense) or "is derived logically/mathematically". Historically, I think you are correct that $\boldsymbol{\nabla}\cdot \textbf{E}(\textbf{r},t)=\frac{\rho(\textbf{r},t)}{\epsilon_0}$ was "derived" by Maxwell from the electrostatic version $\boldsymbol{\nabla}\cdot \textbf{E}(\textbf{r})=\frac{\rho(\textbf{r})}{\epsilon_0}$, which in turn was "derived" from Coulomb's law. Logically, it's the other way around. $\boldsymbol{\nabla}\cdot \textbf{E}(\textbf{r},t)=\frac{\rho(\textbf{r},t)}{\epsilon_0}$ is a fundamental law of the universe (at least it is in classical electromagnetism; in reality it is "derived" from quantum electrodynamics). The electrostatic version of it can be "derived" mathematically from this law as a special case. The same is true of Coulomb's law.
{ "domain": "physics.stackexchange", "id": 33042, "tags": "electromagnetism, electrostatics, classical-electrodynamics, maxwell-equations, gauss-law" }
Negative absolute pressure with positive absolute temperature
Question: Can the derivative defining pressure $dU \over dV$ or ${∂S \over ∂V}|_{E,N} $ be negative in processes occurring in systems that are not cosmological but statistical (gases or solids or liquids - I mean the statistical study of systems)? Although I have read about negative temperatures and negative pressures, could we have, for a system at positive temperature, a negative value of (absolute) pressure? Note: I have read Are negative temperatures typically associated with negative absolute pressures? Answer: Depending on how "real" you want the system to be. In a Casimir plate setup, the vacuum in a way gets a negative pressure. But for any system of real particles a negative pressure means that the system will be unstable and collapse, so you cannot have negative pressure in equilibrium. For example, negative pressure can formally occur in the van der Waals model of non-ideal gases, but there it is only an artifact as the uniform phase is unstable (even when the pressure does not drop below zero), and instead there is an equilibrium between gas and liquid at a constant temperature and pressure.
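The van der Waals example from the answer is easy to reproduce. In reduced variables the equation of state reads P_r = 8T_r/(3V_r − 1) − 3/V_r², and below the critical temperature the formal isotherm can dip below zero - exactly the unstable region the answer describes (the specific T_r and V_r values below are arbitrary illustrative choices):

```python
# The van der Waals equation of state in reduced variables (P_r = P/P_c, etc.):
#   P_r = 8*T_r / (3*V_r - 1) - 3 / V_r**2
# Below the critical temperature the formal isotherm can go negative -- the
# unstable region where the uniform phase gives way to gas-liquid coexistence.
def vdw_reduced_pressure(t_r, v_r):
    return 8 * t_r / (3 * v_r - 1) - 3 / v_r**2

p_cold = vdw_reduced_pressure(0.70, 1.0)   # formally negative
p_hot = vdw_reduced_pressure(1.20, 1.0)    # positive
```

At T_r = 0.70 and V_r = 1.0 the formula gives 2.8 − 3 = −0.2, a formally negative pressure; at T_r = 1.20 the same volume gives a positive one.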
{ "domain": "physics.stackexchange", "id": 22720, "tags": "thermodynamics, statistical-mechanics" }
If you have an object on an inclined slope, would applying a horizontal force be able to lift the object from the slope?
Question: I am programming a free body force diagram and as part of it I am allowing the user to adjust the angle of the plane and apply a force to an object on such a plane at any angle to the horizontal. My question is, if they apply a large enough force horizontally while the slope is at an angle, would the object lift from the plane? Any help/advice would be appreciated. Answer: Your force has a component along the slope, so yes, the object will move along the slope. It will not leave the surface though, if that's what you mean by "lift". If you find it counterintuitive why the object has a vertical acceleration component despite your applied force being horizontal, you must think about the normal force. This is always, as the name suggests, normal to the surface, i.e. the slope. Hence, it has that vertical component you seek.
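For the simulation, the answer's claim can be sketched by resolving forces along the surface normal. This assumes the horizontal force presses into the slope face, the case the answer describes; if the force instead points away from the face, the F·sin(θ) term flips sign and a large enough force could reduce N to zero.

```python
import math

# Assumption: the horizontal force presses *into* the slope face. Resolving
# along the surface normal gives N = m*g*cos(theta) + F*sin(theta), which only
# grows with F, so the block never leaves the surface in this case.
# (If the force pointed away from the face, the F term would flip sign.)
def normal_force(mass, theta_deg, f_horizontal, g=9.81):
    theta = math.radians(theta_deg)
    return mass * g * math.cos(theta) + f_horizontal * math.sin(theta)

n_pushed = normal_force(2.0, 30.0, 1000.0)   # huge horizontal push
n_rest = normal_force(2.0, 30.0, 0.0)        # no applied force
```

In a simulator, checking whether the computed N drops to zero is the natural lift-off test: while N > 0 the object stays constrained to the plane.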
{ "domain": "physics.stackexchange", "id": 21277, "tags": "newtonian-mechanics, forces" }
Are probabilities of mutations symmetric?
Question: For the premise of this question let's assume that there is an allele A and an allele B. The allele A has a probability P to mutate into the allele B in the given timeframe. Is it also true that the allele B has the probability P to mutate into the allele A? Answer: The short answer is no, there is heterogeneity in the rate at which different nucleotides mutate into one another. This is generally a property of their differing chemistries (although I'm not an expert on this). Therefore, it doesn't really make sense to talk of alleles being $p$ or $q$ like we do in most of population genetics, because the actual nucleotide (whether it's A, C, G or T) does make a difference. This is important in fields like phylogenetics, where people construct substitution matrices which describe the rate at which one base in a sequence changes to another nucleotide. Currently, it's possible to estimate the matrix parameters from empirical data relatively easily. For example, in one of the earliest and simplest models, Kimura (1980) introduced a matrix which had two parameters - one for the mutation rate of transition substitutions (A <-> G and C <-> T, more likely) and one for the rate of transversions (purine <-> pyrimidine, less likely). Later methods got progressively more complex, accounting for, e.g., the different amino/keto properties of different nucleotides. Felsenstein's model (1981) accounted for the equilibrium frequency of the target nucleotide. These substitution rates can also be allowed to vary across sites (e.g. Yang 1994). References Kimura, Motoo. "A simple method for estimating evolutionary rates of base substitutions through comparative studies of nucleotide sequences." Journal of Molecular Evolution 16.2 (1980): 111-120. Felsenstein, Joseph. "Evolutionary trees from DNA sequences: a maximum likelihood approach." Journal of Molecular Evolution 17.6 (1981): 368-376. Yang, Ziheng.
"Maximum likelihood phylogenetic estimation from DNA sequences with variable rates over sites: approximate methods." Journal of Molecular evolution 39.3 (1994): 306-314.
{ "domain": "biology.stackexchange", "id": 10801, "tags": "genetics, mutations, allele" }
How to determine at which temperature iron in stainless steel undergoes spontaneous oxidation?
Question: It is my understanding that iron oxidation in stainless steel is prevented by adding chromium, which creates chromium oxides at the surface that shield the bulk of the material from further oxidation. I have some stainless steel samples that I heated up to 400°C, and they came out of the oven with signs of oxidation. How can I determine at which temperature iron will start oxidizing despite this protective chromium oxide layer? Note that I do not have enough samples to conduct an experiment to find the temperature; I'm looking for a theoretical approach. Answer: Stainless steels are indeed protected by an extremely tightly-adherent, conformal coating of chromium oxide which forms the instant that the part is first exposed to air. This coating is optically transparent because of its thinness - only a few tens of atoms thick - but it protects the material underneath it because the rate at which oxygen can diffuse through it is extremely slow. When it is heated as in your furnace, the diffusion rate is increased and so the creation of chrome oxide is "turned on" again until such time as the thickened oxide begins to choke off the diffusion of oxygen again, and the thickness of the oxide stabilizes. This temperature-dependent, diffusion-limited and self-limiting pattern of behavior is called self-passivation and greatly complicates the dynamics of high-temperature corrosion. As the oxide layer thickens, it begins to show optical interference effects between the front and back surfaces of the layer - and instead of appearing perfectly colorless, the film gets colored - first a light yellow which darkens into orange and then a brilliant blue as the film thickens. This color pattern then fades out and repeats; by counting these "fringes" you can accurately estimate the total thickness of the oxide.
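The self-limiting, diffusion-controlled growth the answer describes is commonly modeled with a parabolic growth law and an Arrhenius-activated rate constant. The sketch below illustrates only the qualitative temperature dependence; k0 and Ea are placeholder values, not measured data for any steel.

```python
import math

# Toy model of self-limiting (parabolic) oxide growth with an Arrhenius rate
# constant: x**2 = k_p * t, with k_p = k0 * exp(-Ea / (R*T)).
# k0 and Ea below are placeholder values, NOT measured data for any alloy.
def oxide_thickness(temp_c, seconds, k0=1e-6, ea_j_per_mol=150e3):
    r_gas = 8.314                                   # J/(mol*K)
    k_p = k0 * math.exp(-ea_j_per_mol / (r_gas * (temp_c + 273.15)))
    return math.sqrt(k_p * seconds)

thin = oxide_thickness(25.0, 3600.0)    # one hour at room temperature
thick = oxide_thickness(400.0, 3600.0)  # one hour at the oven temperature
```

Because the rate constant is exponential in −1/T, there is no sharp onset temperature in this picture - oxidation just becomes noticeable once the oxide grown in your timeframe passes the thickness where interference colors appear.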
{ "domain": "physics.stackexchange", "id": 53964, "tags": "thermodynamics, material-science, physical-chemistry" }
How do I publish gazebo position of a robot model on odometry topic
Question: Hello, I have a robot model that runs using teleop key, and I want to publish its position and velocity on odometry topic. How should I achieve this? Thanks. Originally posted by cybodroid on ROS Answers with karma: 234 on 2015-12-06 Post score: 2 Answer: You can use the gazebo_ros_p3d plugin in your robot's URDF to directly publish (ground truth) odometry for your robot. Here's a usage example: tracker_chassis.gazebo.xacro.xml For your use case (publishing directly to odom) it would probably look like this: <plugin name="p3d_base_controller" filename="libgazebo_ros_p3d.so"> <alwaysOn>true</alwaysOn> <updateRate>50.0</updateRate> <bodyName>base_link</bodyName> <topicName>odom</topicName> <gaussianNoise>0.01</gaussianNoise> <frameName>world</frameName> <xyzOffsets>0 0 0</xyzOffsets> <rpyOffsets>0 0 0</rpyOffsets> </plugin> Originally posted by Stefan Kohlbrecher with karma: 24361 on 2017-10-09 This answer was ACCEPTED on the original site Post score: 12 Original comments Comment by Ankita on 2021-07-24: Thanks, it worked for me. Now, I am able to get position data of my robot from the gazebo. Thanks a lot.
{ "domain": "robotics.stackexchange", "id": 23159, "tags": "gazebo, navigation, odometry, state, robot" }
Assigning books to boxes
Question: I am trying to model the following problem correctly as a min-cut network flow problem. I have $n$ books and 2 boxes. I also have books that I know must go in one of the two boxes. In addition, each book has a certain profit if I put it in the same box with another book. So for instance, if I pair book $i$ with book $j$ I might have a profit of 10 dollars so long as they're in the same box. If I have 3 books in one box, I'd have to sum the profit of 1 and 2, 2 and 3, and 1 and 3. I want to find the best way to assign the not-yet assigned books to either box 1 or 2 to maximize my profit. Formally: 2 boxes: $b_1$ and $b_2$ Set: $N$ of $1...n$ books $S_1$ = set of all books that must go to box 1 $S_2$ = set of all books that must go to box 2 $p_{ij}$ = The profit by having books $i$ and $j$ in the same box Objective (roughly): $max(\sum_{i=1}^{2}\sum p_{ij})$ (maximize the profit over all boxes) My ideas so far: Formulate the problem as a min-cut problem because we are trying to end up with two sets of books (one for box 1, one for box 2). Would it be correct to say that $-min(-\sum_{1}^{2}\sum p_{ij})$ is equivalent to our maximization above? I tried simplifying it further but I'm not sure how. Make source node for box 1, node for each book not assigned (not in $S_1$ and not in $S_2$) and a sink node for box 2. My question: With the previous formulation in mind, I'm confused on what the edges would be like. I have edges from box 1 to the book nodes and then the book nodes to box 2 but I'm not sure if this makes sense, largely because I need to make sure my summation notation is correct and how to turn that into an appropriate graph. Could anyone offer advice on the minimization I wrote above and how to translate it to a graph correctly? Answer: First, I assume that it doesn't matter which box a pair of books go into, e.g., the value of book 1 and 2 being in box 1 is the same as the value of book 1 and 2 being in box 2. 
Now, denote $B_1, B_2$ as the books in boxes 1 and 2 respectively. The value of this partitioning is precisely $$ \sum_{{i,j} \in B_1:\ i<j} p_{ij}+ \sum_{{i,j} \in B_2:\ i<j} p_{ij}= \sum_{{i,j} \in N:\ i<j} p_{ij} - \sum_{i \in B_1}\sum_{j\in B_2}p_{ij}$$ (in words, the value of the paired books in box 1 + the values of the paired books in box 2 = the value of all paired books - the value of the books that aren't paired) Observe that $\sum_{{i,j} \in N:\ i<j} p_{ij}$ is constant regardless of how you place your books and therefore maximizing $\sum_{{i,j} \in B_1:\ i<j} p_{ij}+ \sum_{{i,j} \in B_2:\ i<j} p_{ij}$ is equivalent to minimizing $\sum_{i \in B_1}\sum_{j\in B_2}p_{ij}$ (as you've roughly stated). Under this assumption (that the box doesn't matter), we now exactly have a min cut problem. Let your set of vertices be $N$ (there is no need for a source node to identify which box is which). We let the graph be complete (there is an edge between every pair of nodes) the value of the edge between $i$ and $j$ is $p_{ij}$. A cut on this graph is a partitioning ${B_1,B_2}$. The value of such a cut is $\sum_{i \in B_1}\sum_{j\in B_2}p_{ij}$. Hence, to find the maximum way to place your books, it suffices to find a minimum cut on this graph. In the event you want a formulation with terminal nodes (sink/sources), you can add in dummy nodes s and t (denoting box 1 and box 2), connect edges from all books to these nodes with an arbitrary weight W. A partitioning that divides s, t has weight \begin{align}\sum_{i \in B_1}\sum_{j\in B_2}p_{ij}+\sum_{j\in B_2}p_{sj}+\sum_{i\in B_1} p_{it}&=\sum_{i \in B_1}\sum_{j\in B_2}p_{ij}+|B_2|⋅W+|B_1|⋅W\\ &=\sum_{i \in B_1}\sum_{j\in B_2}p_{ij}+|N|⋅W.\end{align} In this formulation, a valid cut is $\{s\},\{t,N\}$ with value $W\cdot |N|$ and therefore to ensure at least one book goes in each box, $W$ needs to be made large. 
There is one final component of your problem we have not addressed, i.e., there is a set of books $S_1$ that MUST go into box 1 and a set $S_2$ that MUST go into box 2. Off the top of my head, I do not see a way to fix this without incorporating a source and sink $s$ and $t$ (for box 1 and 2 respectively). The formulation I've given above is almost sufficient. If $j\in S_1$ (book $j$ MUST go into box 1), make $p_{sj}$ arbitrarily large (much more than $W$). This ensures an arbitrarily large cost will be charged if you try to put a book that must go into box 1 into box 2. Similarly, if $i\in S_2$, make $p_{it}$ arbitrarily large. Assuming $S_1$ and $S_2$ are non-empty (that books are already forced into each box), $W$ can be taken as 0 (since a proper partition is enforced by $S_1,S_2$) and $p_{sj}=p_{it}$ for $j\in S_1, i\in S_2$ can be as little as $\sum_{{i,j} \in N:\ i<j} p_{ij}$ (assuming each $p_{ij}\geq 0$).
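The identity the answer relies on (within-box value = total pair value minus cut value) can be checked by brute force on a toy instance; the pair profits below are made up for illustration:

```python
from itertools import combinations

# hypothetical pair profits for 4 books (keys are unordered pairs)
p = {frozenset(pair): w for pair, w in
     {(0, 1): 10, (0, 2): 1, (0, 3): 2,
      (1, 2): 3, (1, 3): 1, (2, 3): 8}.items()}
total = sum(p.values())

def cut_value(box1, box2):
    """Profit lost: pairs split between the two boxes."""
    return sum(p[frozenset((i, j))] for i in box1 for j in box2)

def within_value(box1, box2):
    """Profit kept: pairs sharing a box."""
    return sum(p[frozenset(pair)]
               for box in (box1, box2)
               for pair in combinations(sorted(box), 2))

# enumerate all 2-colourings: within = total - cut holds for every partition
for mask in range(2 ** 4):
    b1 = {i for i in range(4) if mask >> i & 1}
    b2 = set(range(4)) - b1
    assert within_value(b1, b2) == total - cut_value(b1, b2)
```

So minimizing the cut is the same as maximizing the kept profit, exactly as argued above.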
{ "domain": "cs.stackexchange", "id": 13701, "tags": "algorithms, graphs, optimization" }
Can a gene-expression or epigenetic 'user-history' be found in the body?
Question: (EDITED - a lot of what I am saying is implicit and simplified. I'm not looking to recreate the numerous textbooks and scientific papers on how DNA works). As far as I can understand it, an organism's basic building blocks (proteins) are made up of DNA, Genes, and Chromosomes. The most basic form of this is DNA, made up of molecules in double helix strands. DNA carries all the instructions for the body in a simple 'database' or 'blueprint' form. Genes are chunks, or sequences, of the DNA telling the body 'what to do'(1) by reading the 'DNA database' in a myriad of unique coding sequences. Genes naturally(2) switch on and off throughout an organism's lifespan allowing amongst other things for an organism to grow from young to old. DNA is essentially(3) stored in Chromosomes. External and environmental factors can influence the expression of genes, and the information stored in the DNA (4) - the study of which is Epigenetics. These Epigenetic factors can change over time, switching genes on and off. (5) Does the body store a history of these expressions? Do the chromosomes (or some other part) of an older organism store a 'user-history' of which genes were previously activated when the organism was younger? e.g., can you tell which genes were active when a person was 12, from the cells of an 80-year-old? Apologies, I do not have a background in Biology. (6) EDIT NOTES (1) what to do, how to do it, when to do it. not necessarily all life supporting instructions but also the minute-to-minute, day-to-day processes etc. (2) naturally. there is no black and white in nature, only shades of grey. (3) essentially. not exactly and not in all cases but mostly. (4) As in, factors influence the expression of genes (which in turn are made up of DNA information). I am not suggesting that external factors can influence the 'DNA database' itself but rather just how they are 'read', sequenced and expressed as genes. (5) again, naturally.
no sharp on/offs but variations of concentrations. (6) I may not have used the correct terminology. Answer: Main question Does the body store a history of these expressions? Do the chromosomes (or some other part) of an older organism store a 'user-history' of which genes were previously activated when the organism was younger? Cells can sometimes have a "memory" of the gene expression state which allows the cell to perpetuate its gene expression programme. This is facilitated by epigenetic mechanisms. Cells can also maintain a short term memory via feedbacks and switches (Casadesús and D'Ari, 2002). Immune cells store the memory of previous exposures to a certain pathogen (in this case each clonal population stores one piece of history, which is unlike one cell storing the entire history). However, I don't think there is an explicit "user history" kind of mechanism. Most responses are dynamic (and memoryless); there are only a very few situations (such as immune response) that really need a history log. Usually the cellular memory is just one level deep. Storing deeper than that would cost the cell a lot. Other corrections not directly related to the question As far as I can understand it, an organism's basic building blocks (proteins) are made up of DNA, Genes, and Chromosomes. This statement is an extreme oversimplification. As a matter of fact, water is the most abundant molecule in the bodies of most (if not all) organisms. RNAs and proteins are essential molecules required for cellular functions. Lipids are also important as they constitute the cell membrane. Moreover, proteins are not made up of "DNA, genes and chromosomes". Your subsequent explanation of how the genes work is correct but this statement is quite wrong. The most basic form of this is DNA, made up of molecules in double helix strands. Misleading again. It is not correct to call DNA the "most basic form". In what way? This statement is also opinion-based. DNA is essentially(3) stored in Chromosomes.
Chromosome is an assembly consisting of the DNA and some proteins. In a way, DNA is contained in the chromosomes. As also pointed out by Remi, your statement can have misleading interpretations. External and environmental factors can influence the expression of genes, and the information stored in the DNA (4) - the study of which is Epigenetics. These Epigenetic factors can change over time, switching genes on and off. This process is simply called "gene regulation" or "regulation of gene expression". Epigenetic mechanisms (such as DNA methylation and histone modifications) are one of the mechanisms of gene regulation but they are not the only one. You can simply google "gene regulation" and you'll get plenty of resources on this topic.
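The "short term memory via feedbacks and switches" mentioned in the answer can be illustrated with a toy model (the equation and parameters here are invented for illustration, not a model of any real gene): a self-activating gene with decay settles into different stable expression levels depending on its history.

```python
# Toy positive-feedback gene circuit: dx/dt = a*x^2/(1+x^2) - x
# (Hill-type self-activation minus first-order decay; parameters made up).
def simulate(x0, a=3.0, dt=0.01, steps=5000):
    """Euler-integrate the toy circuit from initial expression level x0."""
    x = x0
    for _ in range(steps):
        x += dt * (a * x * x / (1 + x * x) - x)
    return x

low = simulate(0.1)   # starts low  -> decays to the "off" state
high = simulate(1.0)  # starts high -> locks into the "on" state
print(low, high)
```

The same dynamics, run from two different starting points, end in two different stable states - a one-bit "memory" of past conditions, with no explicit history log anywhere.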
{ "domain": "biology.stackexchange", "id": 6128, "tags": "genetics, gene-expression, genomes, epigenetics" }
How to store option key names
Question: I have to store some properties in a database properties table which looks like this: CREATE TABLE `property` ( `id` int(11) NOT NULL AUTO_INCREMENT, `computer_name` varchar(64) NOT NULL, `name` varchar(64) NOT NULL, `value` varchar(1024) NOT NULL, PRIMARY KEY (`id`) ); Sample data: +----+---------------+-----------------+-------+ | id | computer_name | name | value | +----+---------------+-----------------+-------+ | 9 | PC002 | firewall_status | 3 | | 10 | PC011 | firewall_status | 0 | | 11 | PC011 | some_property | 1 | To access data in that table I have a getProperty($computerName, $propertyName) method in a repository, which finds the value of specified property for certain computer. And in different parts of my application I call the method like this: $computer = $repository->findComputer(); $firewallStatus = $repository->getProperty($computer->name, 'firewall_status'); What I don't like here is hard coded name of the property. My solution was to put available property names to a model: class ComputerModel { const PROPERTY_FIREWALL_STATUS = 'firewall_status'; const PROPERTY_SOME_PROPERTY = 'some_property'; } ... 
$computer = $repository->findComputer(); $firewallStatus = $repository->getProperty($computer->name, ComputerModel::PROPERTY_FIREWALL_STATUS); We discussed storing property names as constants and decided that we have some trade-offs: Good (mostly for code maintenance): keep the value in one place, which makes it simpler to change it; keep the available property dictionary in one place (in the model); make code completion available in the IDE instead of searching for the correct property name to copy/paste it; Not good (mostly for debugging): by storing literal values we create extra overhead by storing variable names in new variables; worsen debugging by adding one more step to find the object containing debug information; we need to support an additional dictionary; possible data inconsistency if someone adds a new value to the database but forgets to update it in the model. Subjective: improves/reduces code readability. The question is: what is the best way to store such parameter names? Or maybe to use literals. Answer: keep the value in one place, which makes it simpler to change it You will never change it (why would you?) so this is not a valid reason. And if you did change it then search/replace works fine. keep the available property dictionary in one place (in the model) This looks like MySQL to me. So I suggest using an enum for this instead of a varchar. It will be much faster, take less storage space, prevent invalid values, and have the benefit of storing the dictionary right in the database. make code completion available in the IDE instead of searching for the correct property name to copy/paste it This is nice - however is there no other way to do this? Can you not add some kind of configuration file that will define these? In the "not good" column I would put adds complexity for little gain. You've taken a string, and turned it back into a different string, only uppercase. Now, if you were going to store these as integers (i.e.
do the equivalent of enum yourself) then making constants has value. You are turning a string into a number, so it actually does something. So, my suggestion: Either use enums in your database, and just write them as strings, or store integers, and use the constants. But don't store these properties as strings. (BTW, suppose you want to convert from one to the other: Search/replace! I've seen too many cases of overengineering things just from fear of search/replace.)
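The same constant-vs-literal trade-off, sketched in Python for illustration (the property names follow the question; the class name is made up): a string-backed enum keeps the dictionary in one place, gives IDE completion, and rejects invalid names at lookup time.

```python
from enum import Enum

class ComputerProperty(str, Enum):       # hypothetical dictionary of property names
    FIREWALL_STATUS = "firewall_status"
    SOME_PROPERTY = "some_property"

# callers use the symbolic name; storage still sees the plain string
assert ComputerProperty.FIREWALL_STATUS.value == "firewall_status"

# unknown names fail fast instead of silently querying nothing
try:
    ComputerProperty("firewal_status")   # typo
except ValueError:
    print("rejected invalid property name")
```

This mirrors the "enum in the database" suggestion at the application layer: the set of valid names is closed, and a typo becomes an immediate error rather than an empty query result.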
{ "domain": "codereview.stackexchange", "id": 2040, "tags": "php" }
Examples of Differential equations in Biology
Question: I am a mathematician currently teaching some math classes at a university. Next semester, I'll have bachelor's degree biochemistry students. I want to know where certain math tools might be needed in such a field. Where do differential equations arise naturally in professional Biology? Maybe from some data analysis of a complex phenomenon? What scenario? I was thinking of a scenario where, for a complex set of variables, you would take one of them, say $x$ and plot $\frac{\Delta x}{\Delta t}$ vs $x$ and find that the relationship was e.g. polynomial ($\frac{\Delta x}{\Delta t} \propto x^{3}$). Answer: For biochemistry students the obvious link to differential equations is the Michaelis–Menten equation which is a simple model of the kinetics of reactions involving enzymes. Your students will be hearing a lot about Michaelis–Menten in their biochemistry classes if they haven't already. It's also very accessible to students having their first exposure to differential equations. Considering the wider field of biology there are several major topics that revolve around differential equations. Population Biology and the related field of Population Genetics are heavily mathematical with lots of differential equations involved. Expanding beyond ordinary differential equations there is Alan Turing's Theory of Morphogenesis which tried to explain striping and spotting in animal hides using a system of coupled partial differential equations to model reaction rates and diffusion.
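The Michaelis–Menten rate law mentioned in the answer, v = Vmax·[S]/(Km + [S]), is easy to explore numerically; here is a sketch with arbitrary constants that checks the textbook half-saturation property and Euler-integrates d[S]/dt = −v.

```python
def mm_rate(s, vmax=1.0, km=0.5):
    """Michaelis-Menten rate v = Vmax*[S]/(Km + [S]) (constants are arbitrary)."""
    return vmax * s / (km + s)

# at [S] = Km the rate is exactly half of Vmax
assert abs(mm_rate(0.5) - 0.5) < 1e-12

# Euler integration of d[S]/dt = -v: substrate decays monotonically
s, dt, history = 2.0, 0.001, []
for _ in range(5000):
    history.append(s)
    s -= dt * mm_rate(s)
print(history[0], history[-1])
```

Students can see both regimes directly: near-linear (first-order) decay when [S] ≪ Km, and near-constant (zero-order) consumption when [S] ≫ Km.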
{ "domain": "biology.stackexchange", "id": 12478, "tags": "theoretical-biology, protein-structure" }
Why does relativistic beaming / Doppler beaming occur at non-relativistic speeds
Question: The reflexive motion of a binary star system causes the host star to occasionally wobble towards and away from an observer on Earth, which gives rise to an effect called relativistic beaming. This is when light becomes more concentrated in the direction of motion of the host star when viewed from an observer on Earth. In the rest frame of the star, light will be radiating isotropically (uniformly in all directions). It is surprising to me why this effect is even considered relativistic when the host star won't be moving more than $10^3 \frac{m}{s}$ but apparently it's true. Can anyone explain why this effect occurs? Thanks. Answer: The effect is upon the brightness of relativistic jets (including those emitted by the binaries), not upon the brightness of the accreting matter (binaries themselves). Here's an explanation https://en.wikipedia.org/wiki/Relativistic_beaming
{ "domain": "physics.stackexchange", "id": 60182, "tags": "special-relativity, doppler-effect, exoplanets, binary-stars" }
Bushing and Shaft Alignment
Question: I'm working on a design and having difficulty calculating possible misalignment. I have a shaft (diameter D, tolerance t) that is running through two bushings. The two bushings are separated by a distance L. The two bushings are of width W1 and W2 (measured along the axis parallel to the shaft), with internal diameter of D1 and D2 with tolerances of T1 and T2. (see image below) How do I go about calculating the potential misalignment of the shaft? When I started this I thought it was going to be a quick 5 minute challenge but the more I work on it the more difficult it's seeming. Answer: I would like to point out that your CAD application can most likely do this for you. Just make a parametric drawing and sweep the values you need. This may be more practical as you go forward since the CAD application can effortlessly make changes based on geometric constraints and you don't need to calculate this. Image 1: Parametric sweep of one value. No harder than typing 3 values. Note the drawing is intentionally exaggerated. Making the sweep by script is not much harder. This opens up statistical analysis by Monte Carlo simulation. Doing it this way reduces the chance of error made during the mathematical analysis. Of course, this is both a good thing and a bad thing. Analytical analysis can give insights into global minima, but as the complexity of your analysis grows this might not be feasible so a local minimum may suffice. But yeah you can calculate this by hand too, the problem is that once you start adding the things you now consider trivial this may no longer be a trivial extension. Image 2: Vector expression You can express this as an equation of vector expressions $$ \vec a + \vec b + \vec c = 0 $$ where $\vec a$ is known, the magnitude of $\vec b$ is known and the direction of $\vec c$ is linked to vector $\vec b$. So vector $\vec b$ is $D \cdot \{\sin(\theta), \cos(\theta)\}$ therefore $\vec c$ is $x \cdot\{-\cos(\theta), \sin(\theta)\}$.
Since you have 2 directions and 2 unknowns this is solvable, but I won't take this further.
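The closing vector loop can in fact be solved directly. Dotting $\vec a + \vec b + \vec c = 0$ with the unit vector of $\vec b$ (and using $\vec b \perp \vec c$) gives $a_x\sin\theta + a_y\cos\theta = -D$, which pins down $\theta$; the magnitude $x$ then follows by projecting onto $\vec c$'s direction. A sketch with made-up numbers:

```python
import math

def solve_loop(ax, ay, D):
    """Solve a + D*(sin t, cos t) + x*(-cos t, sin t) = 0 for (t, x).
    Projecting onto b's direction: ax*sin t + ay*cos t = -D,
    i.e. R*sin(t + phi) = -D with R = hypot(ax, ay), phi = atan2(ay, ax)."""
    R, phi = math.hypot(ax, ay), math.atan2(ay, ax)
    t = math.asin(-D / R) - phi          # one branch of the solution
    x = ax * math.cos(t) - ay * math.sin(t)   # projection onto c's direction
    return t, x

t, x = solve_loop(3.0, 4.0, 2.0)         # made-up geometry, not from the question
# residual of the original vector equation should vanish
rx = 3.0 + 2.0 * math.sin(t) - x * math.cos(t)
ry = 4.0 + 2.0 * math.cos(t) + x * math.sin(t)
print(t, x, rx, ry)
```

Note `asin` returns only one branch; a real tolerance study would check both candidate angles and, as the answer suggests, sweeping the dimensions in CAD or by script is the more robust route once more features are added.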
{ "domain": "engineering.stackexchange", "id": 1672, "tags": "mechanical-engineering, tolerance" }
How to fit a math formula to data?
Question: I have a math formula and some data and I need to fit the data to this model. The model is $y(x) = ax^k + b$ and I need to estimate $a$ and $b$. I have tried gradient descent to estimate these parameters but it seems that is somewhat time consuming. Is there any efficient way to estimate the params in a formula? Answer: If you know $k$, which it seems you do, then this is just a linear regression. In fact, with just one feature (the $x^k$), this is a simple linear regression, and easy equations apply without you having to resort to matrices. $$ \hat a=\dfrac{ \text{cov}(x^k, y) }{ \text{var}(x^k) }\\ \hat b =\bar y-\hat a \, \overline{x^k} $$ These are the ordinary least squares estimates of $a$ and $b$.
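The closed-form fit is a few lines once the feature is transformed to $z = x^k$. A quick sketch with synthetic, noise-free data (true $a = 2$, $b = 5$, $k = 3$):

```python
# Simple linear regression of y on z = x**k for the model y = a*z + b
k = 3
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [2 * x**k + 5 for x in xs]           # synthetic data, no noise

zs = [x**k for x in xs]
z_bar = sum(zs) / len(zs)
y_bar = sum(ys) / len(ys)

# slope = cov(z, y) / var(z); intercept from the means
a_hat = sum((z - z_bar) * (y - y_bar) for z, y in zip(zs, ys)) \
        / sum((z - z_bar) ** 2 for z in zs)
b_hat = y_bar - a_hat * z_bar

print(a_hat, b_hat)   # recovers a = 2, b = 5 up to rounding
```

Unlike gradient descent there is no learning rate to tune and no iteration: the estimate is exact in one pass over the data.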
{ "domain": "datascience.stackexchange", "id": 11373, "tags": "gradient-descent" }
Fourier Series Representation of Continuous-Time Periodic Signals
Question: As a novice in signal processing, I have been going through Signals & Systems by Oppenheim to try and understand how continuous time periodic signals are represented by Fourier series coefficients. The book defines a pair of equations which represent the Fourier series of a periodic continuous time signal. I cannot understand when one should use the synthesis equation (3.38) and when one should use the analysis equation (3.39) to determine the set of Fourier series coefficients. Can someone give examples/explain as to when I should use one equation over another? Why is this so? Answer: When you are given a continuous-time periodic signal $x(t)$ and you want to find out the corresponding CTFS (continuous-time Fourier series) coefficients $a_k$ associated with $x(t)$, then you use the analysis equation; i.e., analyse $x(t)$ to find out $a_k$, using $$ \boxed{ a_k = \frac{1}{T} \int_{0}^{T} x(t) e^{-j \frac{2 \pi}{T} k t} dt }\tag{1}$$ On the other hand, when you are given a set of CTFS coefficients $a_k$ and you want to obtain the corresponding continuous-time periodic signal $x(t)$, then you use the synthesis equation; i.e., sum up exponentials weighted by $a_k$ and create $x(t)$ as in $$ \boxed{ x(t) = \sum_{k=-\infty}^{\infty} a_k e^{j \frac{2\pi}{T} k t } } \tag{2}$$
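A numerical sanity check of the pair (pure Python, odd square wave with $T = 2\pi$): run the analysis equation (1) with a Riemann sum to get $a_k$, then the synthesis equation (2) to rebuild $x(t)$. For this wave the theory gives $|a_1| = 2/\pi \approx 0.6366$.

```python
import cmath, math

T = 2 * math.pi
x = lambda t: 1.0 if (t % T) < math.pi else -1.0   # odd square wave

def a(k, N=2000):
    """Analysis eq. (1): a_k = (1/T) * integral of x(t) e^{-j 2pi k t / T} dt,
    approximated by a midpoint Riemann sum with N samples."""
    ts = [(i + 0.5) * T / N for i in range(N)]
    return sum(x(t) * cmath.exp(-1j * 2 * math.pi * k * t / T)
               for t in ts) / N

coeffs = {k: a(k) for k in range(-50, 51)}

def synth(t):
    """Synthesis eq. (2): truncated sum of a_k e^{j 2pi k t / T}."""
    return sum(c * cmath.exp(1j * 2 * math.pi * k * t / T)
               for k, c in coeffs.items()).real

print(abs(coeffs[1]))        # close to 2/pi
print(synth(math.pi / 2))    # close to x(pi/2) = 1
```

Analysis goes from the waveform to the coefficients; synthesis goes back, and with a modest truncation the reconstruction already matches the signal away from its discontinuities.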
{ "domain": "dsp.stackexchange", "id": 7032, "tags": "fourier-transform, continuous-signals" }
Having trouble calculating the asymptotic running time of MAX-HEAPIFY
Question: I don't understand the $T(2n / 3)$ part in the recurrence relation for MAX-HEAPIFY in the book CLRS. There is another post that explains it but I can't follow it. Answer: Suppose that you are running MAX-HEAPIFY on some vertex $v$ of a heap $H$. Then the subtree $H_v$ rooted at $v$ is also a heap. Let $n$ be the number of vertices of $H_v$. Clearly if $v$ has no children or only one (left) child then $T(n) = O(1)$ so let's focus on the case in which $v$ has two children $u$ and $w$, where $u$ is the left child and $w$ is the right child. Let $n_u$ be the number of vertices in $H_u$ and $n_w$ be the number of vertices in $H_w$. Clearly the worst case happens when we choose to recurse on the subtree with the most nodes between $H_u$ and $H_w$. By the properties of the heap we know that $n_u \ge n_w$ so we can restrict ourselves to the case in which we recurse on $H_u$. The question now becomes: how large can $n_u$ be compared to $n$? To answer this question let $h_v$ be the height of $H_v$. We know that the height $h_u$ of $H_u$ must be $h_v - 1$. Moreover, the height of $H_w$ can be either $h_v-1$ or $h_v-2$ (otherwise $H$ would not be a complete binary tree). The maximum number of nodes in a binary tree of a generic height $h$ is at most $2^{h+1}-1$ (which corresponds to a perfect binary tree). This tells us that $n_u \le 2^{h_u + 1} - 1 = 2^{h_v} - 1$. Moreover, the number of nodes in a complete binary tree of height $h$ is at least $2^h$ (where $2^h - 1$ nodes are from a perfect binary tree of height $h-1$ and there must be at least one node on the $h$-th level). This tells us that $n_w \ge 2^{h_w} \ge 2^{h_v-2}$. We are now ready to find the maximum possible ratio between $n_u$ and $n = n_u + n_w + 1$.
$$ \begin{align*} \frac{n_u}{n} &= \frac{n_u}{n_u + n_w + 1} \le \frac{n_u}{n_u + 2^{h_v-2} + 1} = 1 - \frac{2^{h_v-2} + 1}{n_u + 2^{h_v-2} + 1} \\ &\le 1 - \frac{2^{h_v-2} + 1}{2^{h_v-1} + 2^{h_v-2} + 1} = 1 - \frac{2^{h_v-2} + 1}{3 \cdot 2^{h_v-2} + 1} \\ & < 1 - \frac{2^{h_v-2} + 1}{3 \cdot ( 2^{h_v-2} + 1)} = 1 - \frac{1}{3} = \frac{2}{3}. \end{align*} $$
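The 2/3 bound is easy to confirm by brute force: for a complete binary tree with $n$ nodes, the left subtree size has a closed form, and its share of $n$ stays strictly below 2/3, approaching it for trees whose bottom level just fills the left half.

```python
def left_subtree_size(n):
    """Number of nodes in the left subtree of a complete binary tree on n nodes."""
    if n <= 1:
        return 0
    h = n.bit_length() - 1          # height = floor(log2(n))
    last = n - (2 ** h - 1)         # nodes on the (possibly partial) bottom level
    # perfect part of the left subtree, plus its share of the bottom level
    return (2 ** (h - 1) - 1) + min(last, 2 ** (h - 1))

# worst case: bottom level exactly half full, e.g. n = 5 -> left has 3 nodes
print(left_subtree_size(5))         # 3, i.e. ratio 3/5
worst = max(left_subtree_size(n) / n for n in range(2, 5000))
print(worst)                        # stays below 2/3, approaching it from below
```

This matches the derivation above: the extreme cases are exactly the trees where $n_w$ hits its lower bound $2^{h_v-2}$.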
{ "domain": "cs.stackexchange", "id": 18680, "tags": "asymptotics, heaps" }
How do I know which state the qubit is in at each step of the circuit for the simulator in qiskit?
Question: I would like to know how to tell which state a qubit is in (I am talking about single-qubit errors), because in order to apply a non-unitary gate in the simulator I have to renormalize the state or the corresponding non-unitary Kraus operator (the non-unitary gate in my circuit). Therefore I need to know which state the circuit is in. I am of course talking about the simulator, which in fact is classical, and therefore there must be a way to know the state at each point of the circuit. Answer: You can take snapshots of the statevector of the circuit when you use the 'qasm_simulator'. You simply append snapshot instructions into your circuit where you would like to see the statevector, and can then see the values in the result object that is returned. You add a snapshot instruction using from qiskit.extensions.simulator import snapshot qc.snapshot('my_label')
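Under the hood, any classical statevector simulator just multiplies matrices, which is why intermediate states are available at all. A toy single-qubit illustration of the snapshot idea (this is not qiskit's internals, just the principle):

```python
import math

def apply(gate, state):
    """Multiply a 2x2 gate matrix by a single-qubit statevector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
X = [[0, 1], [1, 0]]

state, snapshots = [1.0, 0.0], {}       # start in |0>
state = apply(H, state)
snapshots['after_h'] = list(state)      # "snapshot" of the intermediate state
state = apply(X, state)
snapshots['after_x'] = list(state)

print(snapshots)
```

A labelled snapshot is nothing more than copying the simulator's internal vector at that point, which is exactly what the qiskit instruction above exposes through the result object.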
{ "domain": "quantumcomputing.stackexchange", "id": 1318, "tags": "quantum-gate, qiskit, programming, noise" }
Percent Increase vs. Factor By Which Something Increases
Question: I would appreciate a quick peer check on the following. I am currently enrolled in a class where, I think, the professor is repeatedly confusing percent increase and factor by which something increases. There is a difference isn't there? I'm not going crazy here am I? As I understand it, to calculate the percent increase I would do: $$ IV = initial \; value\\ FV = final \; value\\ Assume \; FV > IV\\ \% \, Inc. = \left( \frac{FV - IV}{IV} \right) * 100 = \left( \frac{FV}{IV} - 1 \right) * 100 $$ If I want the factor by which the value increased with respect to the initial value I would take the ratio of the final value to the initial: $$ Inc. \; Factor = \frac{FV}{IV} $$ To reiterate, these are not the same things. I just want to make sure I'm not confusing anything before I point it out to him and ask for clarification. Answer: Yes, what you show is correct. For example, a value going from 8 to 12 is a 50% increase. You can also say it was increased by a factor of 1.5. People sometimes get a little sloppy with this. If it's clear enough from context what the professor really means, I would let it go. If you're the professor, it's OK to insist your students use the terms correctly. When you're the student, it's better to reserve picking a fight for something more important.
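The distinction is a one-liner to encode; using the 8 → 12 example from the answer:

```python
def percent_increase(iv, fv):
    """Percent increase from initial value iv to final value fv."""
    return (fv / iv - 1) * 100

def increase_factor(iv, fv):
    """Factor by which the value increased."""
    return fv / iv

print(percent_increase(8, 12))   # 50.0  (a 50% increase...)
print(increase_factor(8, 12))    # 1.5   (...i.e. a factor of 1.5)
```

The two are related by factor = 1 + percent/100, which is exactly why confusing them shifts every answer by one.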
{ "domain": "engineering.stackexchange", "id": 1321, "tags": "heat-transfer" }
java.sql.Timestamps: There's gotta be an easier way
Question: I have a function that counts how many "assists" a team performs during the course of their day. The way I've written it feels bulky and inefficient using GregorianCalendar. Basically, the function counts specific types of assistance then returns the information in an object. Does anyone know of a cleaner way to do this? public static AssistReportData getAssistReportByDate(int teamId, Date date) throws ClassNotFoundException, SQLException { AssistReportData ard = null; Connection conn = getConnection(); String query = "SELECT assistanceProvided FROM fauassist WHERE teamId=? AND timeOut BETWEEN ? AND ?;"; PreparedStatement ps = conn.prepareStatement(query); ps.setInt(1, teamId); GregorianCalendar gc = new GregorianCalendar(); gc.setTime(date); gc.set(Calendar.HOUR, 0); gc.set(Calendar.MINUTE, 0); gc.set(Calendar.SECOND, 0); ps.setTimestamp(2, new java.sql.Timestamp(gc.getTime().getTime())); GregorianCalendar gc2 = new GregorianCalendar(); gc2.setTime(date); gc.set(Calendar.HOUR, 59); gc.set(Calendar.MINUTE, 59); gc.set(Calendar.SECOND, 59); ps.setTimestamp(3, new java.sql.Timestamp(gc2.getTime().getTime())); ResultSet rs = ps.executeQuery(); while (rs.next()) { ard = new AssistReportData(); ard.setTeam(teamId); switch (rs.getString("assistanceProvided")) { case ("TEAM EFFORT"): ard.addTeamEffort(1); break; case ("OUTSIDE AGENCY"): ard.addOutsideAgency(1); break; case("SEARCH WARRANT"): ard.addSearchWarrant(1); break; case("TRANSPORT"): ard.addTransport(1); default: } } return ard; } Answer: Bug gc.set(Calendar.HOUR, 59); I suppose you mean gc.set(Calendar.HOUR, 23); here? Potential Bug while (rs.next()) { ard = new AssistReportData(); // ... } I suppose this works currently because you only get a single result row, but please be aware that if you have multiple rows, you will simply be re-referencing until the last row of your ResultSet. 
Getting dates All you need is a nice method that converts Date to Timestamp: // using shortened variable names for brevity private static Timestamp convertAndReset(Date date, int hr, int min, int sec, int nano) { GregorianCalendar calendar = new GregorianCalendar(); calendar.setTime(date); calendar.set(Calendar.HOUR, hr); calendar.set(Calendar.MINUTE, min); calendar.set(Calendar.SECOND, sec); Timestamp result = new Timestamp(calendar.getTimeInMillis()); result.setNanos(nano); return result; } You can then call it as such: ps.setInt(1, teamId); ps.setTimestamp(2, convertAndReset(date, 0, 0, 0, 0)); ps.setTimestamp(3, convertAndReset(date, 23, 59, 59, 0)); Java 8? If you happen to be on Java 8, then congratulations: the new java.time.* classes ('Time APIs') are much better for chronological representations. You can then consider rewriting the same method as such: private static Timestamp convertAndReset(Date date, int hr, int min, int sec, int nano) { return Timestamp.valueOf(date.toInstant().atZone(ZoneId.systemDefault()) .toLocalDate().atTime(hr, min, sec, nano)); } Convert Date to Instant via Date.toInstant(). Convert to ZonedDateTime via Instant.atZone(ZoneId). Convert to either a LocalDate or LocalDateTime. Using the former because... Setting the time via LocalDate.atTime() is arguably more fluent, which gives us a LocalDateTime instance with the desired time in another 'step'. Finally, call Timestamp.valueOf(LocalDateTime) to get the Timestamp instance.
{ "domain": "codereview.stackexchange", "id": 16338, "tags": "java, mysql, datetime" }
Can colors be detected using Neural Nets?
Question: How do I represent a color as an activation value within a neuron? ( might be off-topic) I want to detect colors using Neural Nets. To my knowledge, any activation function which is generally used will push the values in between 0-1, which isn't of any use to me... Just wondering how Windows 10 decides on the theme color change when we switch themes.. (led me to think about this, or is it just simple averaging on the pixel values of the channels?) So it seems like an impossible task for Neural Nets then?
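The three-neuron encoding from the answer is just per-channel normalization, e.g. scaling 8-bit RGB into three [0, 1] activations:

```python
def rgb_to_activations(rgb):
    """Map an 8-bit (R, G, B) triple to three input activations in [0, 1]."""
    return tuple(channel / 255 for channel in rgb)

print(rgb_to_activations((255, 128, 0)))
```

So values in the 0-1 range are not a problem at all: the color is not a single activation but a point in a three-dimensional input space, one coordinate per channel.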
{ "domain": "datascience.stackexchange", "id": 2828, "tags": "machine-learning, neural-network" }
What energy is $E=mc^2$ talking about?
Question: Everyone knows Einstein’s $E=mc^2$ equation: it is the beautiful relation that links energy with mass. My high school (tenth grade) book says that the equation can be used to find the energy released from nuclear fission and gives several examples where mass is mindlessly multiplied by 9x10^16. I, however, don’t believe in what the book says - which is by the way written by the school for the school. What energy does the equation give? Is it the energy from nuclear fission or nuclear fusion or matter-antimatter annihilation, because each one of these is orders of magnitude less powerful than the other? Wikipedia (third paragraph, second line) says that the way to release the energy expressed in the Einstein equation is by matter-antimatter annihilation. If this is true, how did the equation mark the beginning of many great inventions that have nothing to do with antimatter? My questions are What energy does the equation $E=mc^2$ give? If it is the absolute energy contained in matter that can only be released by matter-antimatter destruction, how was the equation useful at all in our limited technology that cannot utilize or even find antimatter? Please give me a complete answer that explains things in a simple but non-dumbed down way and please excuse my naivety, lack of understanding, lack of knowledge; I am fifteen years old. Answer: If this is true, how did the equation mark the beginning of many great inventions like the steam engine? It did not mark the beginning of the steam engine and has nothing to do with steam power. The final paper that Einstein wrote about his then new theory of Special Relativity was published in September 1905, and introduced $m = E/c^2$. This is according to Einstein's Miracle Year. Thanks to anna v for this correction: A particle at rest has its rest mass equal to energy but the m in the formula is the relativistic mass, en.wikipedia.org/wiki/… the rest mass is the $m_0$ in the formula for relativistic mass.
The energy released on total annihilation is exactly what the equation says: the product of the mass of the object and the speed of light squared; I will leave it to you to ensure this is dimensionally correct. If it is the absolute energy contained in matter that can only be released by matter-antimatter destruction, how was the equation useful at all in our limited technology that cannot utilize or even find antimatter? True, we do not utilise antimatter, but we can make tiny amounts of it, amounts so small in mass that even their total energy on annihilation is tiny. Antimatter is also produced naturally, from Cosmic Rays, for example: Of primary cosmic rays, which originate outside of Earth's atmosphere, about 99% are the nuclei (stripped of their electron shells) of well-known atoms, and about 1% are solitary electrons (similar to beta particles). Of the nuclei, about 90% are simple protons, i.e. hydrogen nuclei; 9% are alpha particles, identical to helium nuclei, and 1% are the nuclei of heavier elements, called HZE ions. A very small fraction are stable particles of antimatter, such as positrons or antiprotons. The precise nature of this remaining fraction is an area of active research. An active search from Earth orbit for anti-alpha particles has failed to detect them. You have lots of things correctly set out in your post, except you have the dates mixed up. The equation is far more useful (actually essential) in many theories of modern physics than in any practical application, excluding nuclear bombs, in which a small amount of matter is converted to energy.
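To put a number on "exactly what the equation says", here is a short illustrative calculation (the one-gram figure is an arbitrary choice, not something from the question):

```python
# E = m c^2: energy released if a given mass is fully converted to energy,
# e.g. in matter-antimatter annihilation.
c = 299_792_458   # speed of light in m/s (exact by definition)
m = 0.001         # total mass converted, in kg (1 gram, purely illustrative)

E = m * c ** 2    # energy in joules
print(E)          # ~9.0e13 J, on the order of 21 kilotons of TNT
```

For comparison, a kiloton of TNT is defined as 4.184e12 J, so annihilating a single gram of mass is already in the range of an early fission bomb's yield.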
{ "domain": "physics.stackexchange", "id": 38226, "tags": "energy, mass-energy, antimatter, matter" }
ForEachAsync extension method (a way to run an async operation on each item of a sequence in parallel)
Question: In a recent project I worked on, we faced some issues due to an excess of parallelization (thousands of threads were created and the overall result was a degradation of performance and several spikes in CPU usage). What we needed to solve this problem was a way to apply an asynchronous operation to each item of a sequence in a parallel fashion, with the possibility of specifying a maximum degree of parallelism. By looking at this Stack Overflow question I jumped into Stephen Toub's ForEachAsync. Starting from there I implemented the following extension methods: using System; using System.Collections.Generic; using System.Threading.Tasks; using System.Collections.Concurrent; using System.Linq; namespace Lib.Concurrency.Extensions { /// <summary> /// Extension methods for enumerables /// </summary> public static class EnumerableExtensions { /// <summary> /// Executes an asynchronous operation for each item inside a source sequence. These operations are run concurrently in a parallel fashion. The invocation returns a task which completes when all of the asynchronous operations (one for each item inside the source sequence) complete. It is possible to constrain the maximum number of parallel operations.
/// </summary> /// <typeparam name="T">The type of the items inside <paramref name="source"/></typeparam> /// <param name="source">The source sequence</param> /// <param name="maxDegreeOfParallelism">The maximum number of operations that are able to run in parallel</param> /// <param name="operation">The asynchronous operation to be executed for each item inside <paramref name="source"/></param> /// <returns>A task which completes when all of the asynchronous operations (one for each item inside <paramref name="source"/>) complete</returns> /// <exception cref="ArgumentNullException"><paramref name="source"/> is <c>null</c>.</exception> /// <exception cref="ArgumentNullException"><paramref name="operation"/> is <c>null</c>.</exception> /// <exception cref="ArgumentOutOfRangeException"><paramref name="maxDegreeOfParallelism"/> is less than or equal to zero.</exception> public static Task ForEachAsync<T>( this IEnumerable<T> source, int maxDegreeOfParallelism, Func<T, Task> operation) { if (source == null) throw new ArgumentNullException(nameof(source)); if (operation == null) throw new ArgumentNullException(nameof(operation)); EnsureValidMaxDegreeOfParallelism(maxDegreeOfParallelism); var tasks = from partition in Partitioner.Create(source).GetPartitions(maxDegreeOfParallelism) select Task.Run(async () => { using (partition) { while (partition.MoveNext()) { await operation(partition.Current).ConfigureAwait(false); } } }); return Task.WhenAll(tasks); } /// <summary> /// Executes an asynchronous operation for each item inside a source sequence. These operations are run concurrently in a parallel fashion. The invocation returns a task whose result is a sequence containing the results of all the asynchronous operations (in source sequence order). It is possible to constrain the maximum number of parallel operations.
/// </summary> /// <typeparam name="TSource">The type of the items inside the source sequence</typeparam> /// <typeparam name="TResult">The type of the object produced by invoking <paramref name="operation"/> on any item of <paramref name="source"/></typeparam> /// <param name="source">The source sequence</param> /// <param name="maxDegreeOfParallelism">The maximum number of operations that are able to run in parallel</param> /// <param name="operation">The asynchronous operation to be executed for each item inside <paramref name="source"/>. This operation will produce a result of type <typeparamref name="TResult"/></param> /// <returns>A task which completes when all of the asynchronous operations (one for each item inside <paramref name="source"/>) complete. This task will produce a sequence of objects of type <typeparamref name="TResult"/> which are the results (in source sequence order) of applying <paramref name="operation"/> to all items in <paramref name="source"/></returns> /// <exception cref="ArgumentNullException"><paramref name="source"/> is <c>null</c>.</exception> /// <exception cref="ArgumentNullException"><paramref name="operation"/> is <c>null</c>.</exception> /// <exception cref="ArgumentOutOfRangeException"><paramref name="maxDegreeOfParallelism"/> is less than or equal to zero.</exception> public static async Task<IEnumerable<TResult>> ForEachAsync<TSource, TResult>( this IEnumerable<TSource> source, int maxDegreeOfParallelism, Func<TSource, Task<TResult>> operation) { if (source == null) throw new ArgumentNullException(nameof(source)); if (operation == null) throw new ArgumentNullException(nameof(operation)); EnsureValidMaxDegreeOfParallelism(maxDegreeOfParallelism); var resultsByPositionInSource = new ConcurrentDictionary<long, TResult>(); var tasks = from partition in Partitioner.Create(source).GetOrderablePartitions(maxDegreeOfParallelism) select Task.Run(async () => { using (partition) { while (partition.MoveNext()) { var positionInSource = 
partition.Current.Key; var item = partition.Current.Value; var result = await operation(item).ConfigureAwait(false); resultsByPositionInSource.TryAdd(positionInSource, result); } } }); await Task.WhenAll(tasks).ConfigureAwait(false); return Enumerable.Range(0, resultsByPositionInSource.Count) .Select(position => resultsByPositionInSource[position]); } private static void EnsureValidMaxDegreeOfParallelism(int maxDegreeOfParallelism) { if (maxDegreeOfParallelism <= 0) { throw new ArgumentOutOfRangeException( nameof(maxDegreeOfParallelism), $"Invalid value for the maximum degree of parallelism: {maxDegreeOfParallelism}. The maximum degree of parallelism must be a positive integer."); } } } } Can you spot any error or issue with this code? Any suggestion to improve this implementation is welcome (I have already planned new overloads to offer support for cancellation). Update (10th September 2018) After some testing with the version of the code shown above, we decided to opt for a different implementation based on the SemaphoreSlim class. The issue we found with the previously posted version is due to the fact that, given a fixed maximum degree of parallelism of n, exactly n partitions will be created and therefore exactly n tasks will be created. The desired behavior is different: if the maximum degree of parallelism is set to n, then the number of parallel tasks should be less than or equal to n. For instance, given a sequence of m items with m < n, we expect m parallel operations. This was not possible with the implementation shown above.
Here is the final version of the code (support for cancellation is still missing): using System; using System.Collections.Generic; using System.Threading.Tasks; using System.Collections.Concurrent; using System.Linq; using System.Threading; namespace Deltatre.Utils.Concurrency.Extensions { /// <summary> /// Extension methods for enumerables /// </summary> public static class EnumerableExtensions { /// <summary> /// Executes an asynchronous operation for each item inside a source sequence. These operations are run concurrently in a parallel fashion. The invocation returns a task which completes when all of the asynchronous operations (one for each item inside the source sequence) complete. It is possible to constrain the maximum number of parallel operations. /// </summary> /// <typeparam name="T">The type of the items inside <paramref name="source"/></typeparam> /// <param name="source">The source sequence</param> /// <param name="operation">The asynchronous operation to be executed for each item inside <paramref name="source"/></param> /// <param name="maxDegreeOfParallelism">The maximum number of operations that are able to run in parallel. If null, no limits will be set for the maximum number of parallel operations (same behaviour as Task.WhenAll)</param> /// <returns>A task which completes when all of the asynchronous operations (one for each item inside <paramref name="source"/>) complete</returns> /// <exception cref="ArgumentNullException"><paramref name="source"/> is <c>null</c>.</exception> /// <exception cref="ArgumentNullException"><paramref name="operation"/> is <c>null</c>.</exception> /// <exception cref="ArgumentOutOfRangeException"><paramref name="maxDegreeOfParallelism"/> is less than or equal to zero.</exception> public static Task ForEachAsync<T>( this IEnumerable<T> source, Func<T, Task> operation, int?
maxDegreeOfParallelism = null) { if (source == null) throw new ArgumentNullException(nameof(source)); if (operation == null) throw new ArgumentNullException(nameof(operation)); EnsureValidMaxDegreeOfParallelism(maxDegreeOfParallelism); return (maxDegreeOfParallelism == null) ? ApplyOperationToAllItems(source, operation) : ApplyOperationToAllItemsWithConstrainedParallelism(source, operation, maxDegreeOfParallelism.Value); } private static Task ApplyOperationToAllItems<T>( IEnumerable<T> items, Func<T, Task> operation) { var tasks = items.Select(operation); return Task.WhenAll(tasks); } private static async Task ApplyOperationToAllItemsWithConstrainedParallelism<T>( IEnumerable<T> items, Func<T, Task> operation, int maxDegreeOfParallelism) { using (var throttler = new SemaphoreSlim(maxDegreeOfParallelism)) { var tasks = new List<Task>(); foreach (var item in items) { await throttler.WaitAsync().ConfigureAwait(false); #pragma warning disable IDE0039 // Use local function Func<Task> bodyOfNewTask = async () => #pragma warning restore IDE0039 // Use local function { try { await operation(item).ConfigureAwait(false); } finally { throttler.Release(); } }; tasks.Add(Task.Run(bodyOfNewTask)); } await Task.WhenAll(tasks).ConfigureAwait(false); } } /// <summary> /// Executes an asynchronous operation for each item inside a source sequence. These operations are run concurrently in a parallel fashion. The invocation returns a task whose result is a sequence containing the results of all the asynchronous operations (in source sequence order). It is possible to constrain the maximum number of parallel operations.
/// </summary> /// <typeparam name="TSource">The type of the items inside the source sequence</typeparam> /// <typeparam name="TResult">The type of the object produced by invoking <paramref name="operation"/> on any item of <paramref name="source"/></typeparam> /// <param name="source">The source sequence</param> /// <param name="operation">The asynchronous operation to be executed for each item inside <paramref name="source"/>. This operation will produce a result of type <typeparamref name="TResult"/></param> /// <param name="maxDegreeOfParallelism">The maximum number of operations that are able to run in parallel. If null, no limits will be set for the maximum number of parallel operations (same behaviour as Task.WhenAll)</param> /// <returns>A task which completes when all of the asynchronous operations (one for each item inside <paramref name="source"/>) complete. This task will produce a sequence of objects of type <typeparamref name="TResult"/> which are the results (in source sequence order) of applying <paramref name="operation"/> to all items in <paramref name="source"/></returns> /// <exception cref="ArgumentNullException"><paramref name="source"/> is <c>null</c>.</exception> /// <exception cref="ArgumentNullException"><paramref name="operation"/> is <c>null</c>.</exception> /// <exception cref="ArgumentOutOfRangeException"><paramref name="maxDegreeOfParallelism"/> is less than or equal to zero.</exception> public static Task<TResult[]> ForEachAsync<TSource, TResult>( this IEnumerable<TSource> source, Func<TSource, Task<TResult>> operation, int? maxDegreeOfParallelism = null) { if (source == null) throw new ArgumentNullException(nameof(source)); if (operation == null) throw new ArgumentNullException(nameof(operation)); EnsureValidMaxDegreeOfParallelism(maxDegreeOfParallelism); return (maxDegreeOfParallelism == null) ? 
ApplyOperationToAllItems(source, operation) : ApplyOperationToAllItemsWithConstrainedParallelism(source, operation, maxDegreeOfParallelism.Value); } private static Task<TResult[]> ApplyOperationToAllItems<TItem, TResult>( IEnumerable<TItem> items, Func<TItem, Task<TResult>> operation) { var tasks = items.Select(operation); return Task.WhenAll(tasks); } private static async Task<TResult[]> ApplyOperationToAllItemsWithConstrainedParallelism<TItem, TResult>( IEnumerable<TItem> items, Func<TItem, Task<TResult>> operation, int maxDegreeOfParallelism) { var resultsByPositionInSource = new ConcurrentDictionary<long, TResult>(); using (var throttler = new SemaphoreSlim(maxDegreeOfParallelism)) { var tasks = new List<Task>(); foreach (var itemWithIndex in items.Select((item, index) => new { item, index })) { await throttler.WaitAsync().ConfigureAwait(false); #pragma warning disable IDE0039 // Use local function Func<Task> bodyOfNewTask = async () => #pragma warning restore IDE0039 // Use local function { try { var item = itemWithIndex.item; var positionInSource = itemWithIndex.index; var result = await operation(item).ConfigureAwait(false); resultsByPositionInSource.TryAdd(positionInSource, result); } finally { throttler.Release(); } }; tasks.Add(Task.Run(bodyOfNewTask)); } await Task.WhenAll(tasks).ConfigureAwait(false); } return Enumerable .Range(0, resultsByPositionInSource.Count) .Select(position => resultsByPositionInSource[position]) .ToArray(); } private static void EnsureValidMaxDegreeOfParallelism(int? maxDegreeOfParallelism) { if (maxDegreeOfParallelism <= 0) { throw new ArgumentOutOfRangeException( nameof(maxDegreeOfParallelism), $"Invalid value for the maximum degree of parallelism: {maxDegreeOfParallelism}. The maximum degree of parallelism must be a positive integer."); } } } } Answer: Rather than writing something custom, you could use the TPL Dataflow library.
public static Task ForEachAsync<TSource>( this IEnumerable<TSource> items, Func<TSource, Task> action, int maxDegreesOfParallelism) { var actionBlock = new ActionBlock<TSource>(action, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = maxDegreesOfParallelism }); foreach (var item in items) { actionBlock.Post(item); } actionBlock.Complete(); return actionBlock.Completion; } Check out this Fiddle to see it in action. EDIT If you need the results: public static async Task<IEnumerable<TResult>> ForEachAsync<TSource, TResult>( this IEnumerable<TSource> items, Func<TSource, Task<TResult>> action, int maxDegreesOfParallelism) { var transformBlock = new TransformBlock<TSource, TResult>(action, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = maxDegreesOfParallelism }); var bufferBlock = new BufferBlock<TResult>(); using (transformBlock.LinkTo(bufferBlock, new DataflowLinkOptions {PropagateCompletion = true})) { foreach (var item in items) { transformBlock.Post(item); } transformBlock.Complete(); await transformBlock.Completion; } bufferBlock.TryReceiveAll(out var result); return result; }
{ "domain": "codereview.stackexchange", "id": 33361, "tags": "c#, .net, concurrency, async-await, task-parallel-library" }
Ramifications of black hole stellar system
Question: Recently, I got around to seeing the movie Interstellar. In it, the characters of the movie visit a stellar system that appears to be built around a black hole instead of a star. On top of this, their mission is to find a habitable planet in this system. Question: I was wondering how stable the system, as a whole, would be in terms of operation and would any of the planets be habitable at all due to the obvious differences between a star and a black hole. In the movie the black hole is supposed to be a 'supermassive rotating black hole'. As for being a binary star system, it's never stated as such nor depicted as such. It's depicted visually as the black hole alone being the center. However, one of the characters does mention a "Neutron Star" as part of the system so it could possibly be a binary star system. The ambient lighting for the planets is generated by the accretion disk of the black hole. The size and rotation speed of the event horizon are not defined in the movie. At least not that I can remember. As for the planets, their proximity varies, with the first planet depicted as being so close to the event horizon that it is affected by time dilation. The distances of the other two are not specified directly, but traveling to the second planet out seemingly takes days, while traveling to the third planet out is stated to take months, if I remember correctly. As for their surface gravity, the first planet is depicted as having higher surface gravity than Earth but not so much higher that movement is impossible, just strained. The second planet is depicted as being '80% Earth's gravity', if memory serves me correctly. To better help define certain variables relevant to the question I found an info-graphic related to the movie that illustrates the size of the black hole and its rotational speed: http://tinyurl.com/pqph8wl Answer: For those who haven't seen it: Some human explorers land on a planet orbiting a black hole.
The black hole is surrounded by a large accretion disk. The planet orbits at a distance such that going any closer to the black hole will mean that your odds of getting out are slim; it's also composed of water. Finally, time dilation from the black hole means that even though the characters spend about two hours on the planet, a decade or so passes for their colleague on board. The basic answer is that a planet can orbit a black hole. There are stable circum-black-hole orbits, just as there are stable orbits around just about any celestial body. There's a problem: A black hole typically forms as a result of a supernova. This will eject most nearby planets out of the stellar system. Alternatively, it's unlikely that a planet could be captured by a black hole and be in a stable orbit, so the whole premise - while possible - is highly unlikely. Then again, it's improbable that a planet will be made out of water, a wormhole will open up near Jupiter, or Matthew McConaughey will star in a decent sci-fi movie, so why should anything else in the story be normal? However, you can't just put a planet anywhere near a black hole, give it a strong enough push, and hope it orbits. The innermost orbit is at the boundary of the photon sphere. On this sphere, only photons can orbit. Inside it, nothing can orbit. However, the only stable orbit is twice as far away, at $2r_p$. The radius of the sphere is $$r_p=\frac{3GM}{c^2}$$ We'll assume that the object is not rotating (I don't remember exactly if it is or isn't rotating, but it's simpler in this demo to say it isn't.). 
The formula for gravitational time dilation is $$t_0=t_f \sqrt{1-\frac{3r_0}{2r_f}}$$ where $$r_0=\frac{2GM}{c^2}$$ Assuming that $t_0$ (the time for the observer inside the field) is two hours (7200 seconds) and $t_f$ is ten years (315360000 seconds), $$\frac{t_0}{t_f}=\frac{1}{43800}=1-\frac{(3)2GM}{(2)r_fc^2}$$ Simplifying, and saying that $\frac{2GM}{c^2}=\frac{2}{3}r_p$, $$\frac{r_p}{r_f}=\frac{43799}{43800}$$ $$r_f \approx 1.0000228315715 r_p$$ which is outside the photon sphere, but just barely. However, it's well inside $2r_p$, and so most likely unstable. The planet is, in short, not going to survive for long. And so giant waves - mini-spoiler - are the least of Matthew McConaughey's problems. Post-question-edit modifications: It couldn't have been a supermassive black hole; these form at the center of galaxies. It could have been a stellar-mass black hole, though an intermediate-mass black hole is also likely - if not likelier, if the massiveness is emphasized. The existence of the neutron star is interesting. If the black hole were intermediate-mass, I would expect that it would have gobbled up the neutron star by now - and the planets, too. So I'd bet the black hole is a slightly-more-massive-than-average stellar-mass black hole. I highly doubt that multiple planets could orbit a black hole - for the reason I gave above; the supernova would have destroyed them or flung them out of the system. Would any of the planets be habitable? I doubt it. The accretion disk could heat up enough to provide some light, but there probably wouldn't be a lot. I'll write up the calculations either later today or possibly tomorrow, as I'm a bit LaTeXed-out after writing a math-heavy answer on Worldbuilding to find the luminosity, but I suspect it'll be negligible - as will Hawking radiation, in case any smart-aleck was planning on bringing that up.
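A quick numeric check of the figures quoted above, following the answer's own simplification $t_0/t_f = 1 - r_p/r_f$ (i.e. with the square root dropped, as in the quoted derivation):

```python
# Numeric check of the quoted time-dilation figures.
t0 = 2 * 60 * 60            # two hours on the planet, in seconds
tf = 10 * 365 * 24 * 3600   # ten years for the distant observer, in seconds

assert t0 / tf == 1 / 43800  # the ratio quoted in the answer

# Answer's simplification: t0/tf = 1 - r_p/r_f  =>  r_p/r_f = 43799/43800
rf_over_rp = 43800 / 43799
print(rf_over_rp)            # ~1.0000228315715, just outside the photon sphere
```

This only verifies the arithmetic as stated; it does not vouch for the underlying relativity.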
{ "domain": "astronomy.stackexchange", "id": 4904, "tags": "orbit, black-hole, exoplanet, habitable-zone" }
Origin of the biochemical term, Pi (inorganic phosphate)
Question: I would like to know when the term Pi (inorganic phosphate) was introduced in the representation of biochemical reactions, how it was originally defined, and the justification given then for using it rather than an individual species of phosphate. (I would also be interested in the current justification, but that’s probably another question.) Let me provide some background to my question. Phosphoric acid (H3PO4) has three ionizations, which produce successively the species: dihydrogen phosphate (H2PO4–), monohydrogen phosphate (HPO42–) and orthophosphate (PO43–). At pH 7.4, according to the Wikipedia entry on phosphate, the main species are the mono- and di-hydrogen phosphates (61% and 39% respectively). The term Pi must have been introduced in the 1950s at the latest (perhaps before the war), at a time when there would have been little knowledge of the nature of the species involved in reactions involving phosphate — certainly not at the active sites of enzymes. One of the reasons I am curious to know how the term was introduced is the extent to which it persists in 21st century biochemical textbooks, where it would seem that many authors either do not know or do not care to explain to their readers why they are still using it at a time when much more is known about the reaction mechanisms. Neither of two well-known texts explains the different ionizations of phosphate, and both give only parenthetical definitions in terms of a single species — different in each case: Berg et al. referred to Pi as orthophosphate, whereas Nelson and Cox’s Lehninger Principles of Biochemistry referred to it as HPO42–. Acknowledgement: This question was provoked by the SE-Biology question — Where is the H+ ion in this step of glycolysis coming from?
Answer: This terminology is at least as old as September 1944, when "Enzymatic Synthesis of Acetyl Phosphate" (Journal of Biological Chemistry 155, 55-70) was published by Lipmann. It says: "Inorganic phosphate, referred to as Pi, was estimated colorimetrically." See also the definition of "inorganic phosphate" and "orthophosphate" from this 1943 University of Wisconsin thesis: Compound: Inorganic phosphate or orthophosphate. Definition: Phosphate whose calcium salt is insoluble in water-alcohol mixtures under the conditions to be described. E.g. NaH2PO4. In other words, "orthophosphate" was a generic term for mono-, di-, or tribasic phosphate. It did not have the narrower meaning attributed in the OP.
{ "domain": "biology.stackexchange", "id": 6830, "tags": "biochemistry, metabolism, enzymes, phosphate" }
Actionlib notifications not received
Question: Hi, I have been trying to arrange for both the action server and the action client to be notified when either of them dies. I would like the client to be notified by the server whenever this server dies. Similarly, I would like the server to be notified if the current client dies for some reason. I am using a simple action server with the execute callback option, and a client with the done, active and feedback callbacks. In the destructor of the node containing the action server, I have added the following code: if ( as_.isActive() ) { as_.setAborted(); } as_.shutdown(); Unfortunately, the client never gets any kind of notification. I was expecting the done callback to receive an aborted status, or something similar. For the client notification, I have tried to use the isServerConnected() and getState() functions from the node's destructor, but I always get the same result whether the server is running or not: isServerConnected()=0 and getState()=Aborted. I have also called cancelGoal() and stopTrackingGoal() to let the server cancel the current goal, but they seem to have no effect. Maybe I am missing some important key points. Any suggestions? Joan Originally posted by joan on ROS Answers with karma: 245 on 2011-07-24 Post score: 1 Answer: I think I've run into this before when trying to do something similar. In my case, ROS had already shut down when my destructor was called, which meant that no actionlib-related messages were going over the wire. There's actually a tool called bond that would be perfect for your use case. Not only will it let you know when nodes go down cleanly, it'll also let you know if they go down from a crash. Hope this helps. Originally posted by eitan with karma: 2743 on 2011-07-26 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 6242, "tags": "ros, actionlib, simpleactionclient" }
Is it only possible to look at solar systems with stars at least as old as ours to be able to find intelligent life?
Question: Assuming the time it takes to develop intelligent life is the same in all solar systems, would we need to look at exoplanets with stars as old as or older than ours to find intelligent life? A young star would be assumed to have been surrounded by planets that are relatively new and haven’t had time to develop intelligent life. Visiting those planets with young stars relative to the sun would most likely only give us planets with prehistoric life. Answer: Assuming the time it takes to develop intelligent life is the same in all solar systems, We don't know if this is true. We don't know if: Life has started on more than one planet in the Universe. How long, on average, it takes for life to start. Whether life normally evolves past the "bacterial slime" stage to form complex multicellular life, and how long this takes. Whether multicellular life normally develops intelligence, and how long this takes. How long intelligent life survives, on average. So we don't know what time it takes to develop intelligent life. However, planets are the same age as their stars, so it is reasonable to suppose that older stars are more likely to harbour intelligent life. Moreover, visiting any extrasolar planets is too hard with current technology. Stars are just too far away.
{ "domain": "astronomy.stackexchange", "id": 4316, "tags": "solar-system, exoplanet, star-formation, planetary-formation, extra-terrestrial" }
Drinking River Water Unsafe?
Question: I've read that drinking river water is unsafe. Did people in the past drink water from the well exclusively or did they deal with diarrhea constantly? Answer: It depends how far in the past you're looking. Post-agriculture but pre-modern-sanitation, yes, diarrhoea is a major problem. Pre-agriculture, however, would have been a different matter. The majority of pathogens in river water are derived either from agriculture (e.g. cryptosporidium, giardia), from large human populations (cholera), or from rodents associated with human populations (leptospira). There would, therefore, have been far fewer disease-causing organisms in river water pre-agriculture. Add to that the effect on the immune system of post-agriculture diet and lifestyle, and it would seem unlikely that the cleanliness of most river water would have been a major problem.
{ "domain": "biology.stackexchange", "id": 6039, "tags": "pathology" }
SUSY Loop diagrams from a categorical viewpoint
Question: In the paper "A Prehistory of $n$-Categorical Physics" J. Baez and A. Lauda give an account of the use of category theory throughout physics. In section “Penrose (1971)” starting from page 25 they explain how one can use the language of monoidal categories to interpret Feynman diagrams from a categorical point of view (using the fact that all representations of a group and their intertwiners form a monoidal category). Then on page 29-30 it is said that the divergence of loop diagrams is related to the fact that the relevant unitary representations of the Poincaré group are infinite-dimensional (and loops in the diagrams give the dimension of the representation). I was working on a project for university based on this paper and when talking about the part mentioned above, my professor said that this was not completely correct. He mentioned that when including supersymmetry, hence enlarging the relevant symmetry group, certain loop diagrams become finite and this seems to be in contradiction with the dimension argument from Baez and Lauda. Now I was wondering what the formal explanation behind this phenomenon is? Is it because the super-Poincaré group does have finite-dimensional unitary representations or because the transition from dimensions to super-dimensions introduces a cancellation which keeps the loops finite? Answer: The divergences of Feynman diagrams have nothing to do with the infinite dimensionality of the unitary representations of the Poincaré Group (PG). I agree with the argument given by your professor. And you don't even need SUSY to argue that the claim in the paper is misleading/wrong. There are non-supersymetric models in lower dimensions which are perfectly finite (e.g., Glimm & Jaffe's $\phi^4_2$), yet they have reps of PG which are infinite-dimensional (as long as $d>0$, all unitary reps are infinite-dimensional). And, more importantly, Feynman diagrams know nothing about the unitary representations of the Poincaré Group. 
The PG appears in two different ways in QFT (cf. e.g. this PSE post): Particles, described by unitary (and hence infinite-dimensional) reps of PG, and Fields, described by finite-dimensional (and hence non-unitary) reps of PG. Feynman diagrams encode the properties of fields, not particles, and therefore they carry the information of finite-dimensional (non-unitary) representations. The unitary representations appear when using the LSZ formula which, in short, amputates external legs and attaches a polarisation vector carrying the one-particle state information. This remains true when considering the super-Poincaré Group (SPG): Particles are organised into multiplets, which are unitary reps of SPG, and which can be thought of as collections of unitary reps of the standard PG. They are still infinite dimensional (recall that the infinite dimensionality is required because the group is non-compact; the culprit is the subgroup of translations, which is also present in the super case, and whose eigenvalues are momenta; supermultiplets also carry momentum quantum numbers, and this is where the infinite dimensionality comes from). Fields are organised into superfields, which are finite-dimensional reps of SPG, and which can be thought of as collections of finite-dimensional reps of the standard PG. The improved UV behaviour of super-theories has nothing to do with the dimension of the representations; indeed, the particles are still infinite-dimensional, and the fields are still finite-dimensional. It has to do with cancellations, or with more subtle properties of supersymmetry (e.g., the so-called non-renormalisation theorems; in short, divergences must be supersymmetric, but sometimes one can prove that there is no counterterm with the required symmetry/divergence structure, and so the divergence is not there to begin with, cf. e.g. this PSE post). So what do Baez & Lauda mean? 
Hard to tell, but my guess is the following: loops indeed are associated to traces over a representation of PG (times a representation of an internal group, like colour), and so they are in a sense proportional to the dimensionality of the rep. But the rep is that of the field associated to the line, not a particle, and so it is finite-dimensional. For example, gluon loops typically grow like $N^2$, and quark loops like $N$; this is because gluons live in the adjoint, and quarks in the fundamental. These are finite-dimensional representations. So the authors are either confused or I didn't understand their point.
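As a gloss on the colour-counting in the last paragraph (my own sketch, not part of the original answer): a closed line contributes a trace of the identity over the finite-dimensional representation carried by the corresponding field, so

```latex
\text{loop} \;\propto\; \operatorname{Tr}_{R}\,\mathbf{1} \;=\; \dim R,
\qquad
\dim\big(\text{adjoint of } SU(N)\big) = N^2 - 1 \sim N^2,
\qquad
\dim\big(\text{fundamental}\big) = N .
```

which reproduces the $N^2$ growth of gluon loops and the $N$ growth of quark loops quoted above.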
{ "domain": "physics.stackexchange", "id": 60671, "tags": "quantum-field-theory, supersymmetry, feynman-diagrams, representation-theory, category-theory" }
Littlewood Polynomial Heatmap
Question: The following Python program generates heat maps of the roots of Littlewood polynomials. It works fine with a small number of roots, however I tried to use 2^21 in the first loop and it ate up 12 gigabytes of RAM on my computer. How can I make the code less memory-intensive while still allowing reasonably fast performance? Ideally, I would like to use this code to generate very large images but I cannot do so with the program in its current state. Any help would be greatly appreciated. Thanks!

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from math import sqrt, sin, log10

def f(x):
    # function that maps [0,1] to itself for the color space so that values closer to 1 end up further from 1
    return sqrt(x)

def roots(n):
    # given n, returns a Littlewood polynomial with coefficients representing the binary representation of n
    roots = np.polynomial.polynomial.polyroots([int(i)*2-1 for i in bin(n)[2:]])
    return [[i.real, i.imag] for i in roots]

# get array of all possible roots and split values up into arrays x and y
temp = []
for i in xrange(2**16):
    if i % 100 == 0:
        print i
    for j in roots(i):
        temp.append(j)
x = [i[0] for i in temp]
y = [i[1] for i in temp]

# generate histogram of roots to make heatmap
gap = 0.01
print len(np.arange(-2.5, 2.5, gap))
normal_histogram = np.histogram2d(x, y, bins=np.arange(-2.5, 2.5, gap), normed=True)

# generate image
from PIL import Image
normal_histogram = normal_histogram[0]
size = len(normal_histogram)
im = Image.new("RGB", (size, size))
im2 = Image.new("RGB", (size, size))
#color = [int(256*k) for k in cm.hot(histogram[i][j])[0:3]]
for i in range(len(normal_histogram)):
    print i
    for j in range(len(normal_histogram)):
        temp = int(normal_histogram[i][j]*256)
        im.putpixel((i, j), tuple([int(256*k) for k in cm.hot(f(normal_histogram[i][j]))][0:3]))
        im2.putpixel((i, j), tuple([int(256*k) for k in cm.nipy_spectral(f(normal_histogram[i][j]))][0:3]))
im.save("test.png")
im2.save("test2.png")

Things I've tried:

using a txt file to store the polynomial roots
using matplotlib's ability to make heatmaps to plot the data

Answer: As Oscar mentions, switching to python3 would probably make this easier, just because it's better designed for collection-like things that aren't lists. However, I don't think it's strictly necessary. The first thing that I notice is that you're caching an awful lot of data into that temp array. The temp array itself, as far as I can tell, doesn't need to exist at all: you can just pack the numbers straight into x and y. However, that would at best halve your memory usage, which isn't going to cut it here. Fundamentally, for the operations that you're doing, you should not need to store the x and y values either. Instead, you just want to tally up the histogram. I suggest that you implement your own version of histogram2d. I'll grant that home-rolling a python function that works at even comparable speed to numpy is usually downright impossible. In this case, however, you just need to go faster than the combined numpy function and allocating and packing 12GB of data! That is likely perfectly tractable. If in fact you can't get the performance that you want without letting numpy work its magic, consider batching up instead. Get the first batch of X thousand array elements, build a histogram, discard that batch and get the next X thousand, build another histogram, sum the histograms, and repeat. Looping against a generator and yield is probably the easiest way to go about doing this. Because your program works fine with 2**16 in the first loop, and nothing beyond generating the histogram needs memory allocated in proportion to that loop size, that should solve the issue for essentially as big a number as you have the patience to run the program for.
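The batching idea in the last paragraph can be sketched as follows. This is my own hedged illustration, not the asker's code: a toy point generator stands in for the polynomial-root computation, and the names `point_batches` and `accumulate_histogram` are invented for the sketch. The point is that only one batch of points is alive at a time while the bin counts accumulate.

```python
def point_batches(points, batch_size):
    """Yield successive slices of an iterable of (x, y) points.

    In the real program this would wrap the root generator, so that only
    batch_size points are held in memory at once instead of all of them.
    """
    batch = []
    for p in points:
        batch.append(p)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

def accumulate_histogram(batches, lo, hi, nbins):
    """Sum per-batch 2D bin counts instead of storing every point."""
    width = (hi - lo) / float(nbins)
    counts = [[0] * nbins for _ in range(nbins)]
    for batch in batches:
        for x, y in batch:
            i = int((x - lo) / width)
            j = int((y - lo) / width)
            if 0 <= i < nbins and 0 <= j < nbins:
                counts[i][j] += 1
    return counts

# toy data standing in for polynomial roots
pts = [(-1.0, 0.0), (0.5, 0.5), (0.5, 0.5), (2.4, -2.4)]
hist = accumulate_histogram(point_batches(pts, 2), -2.5, 2.5, 5)
```

The same pattern works with numpy: build `np.histogram2d` per batch (with fixed bin edges) and add the count arrays together.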
{ "domain": "codereview.stackexchange", "id": 30290, "tags": "python, performance, python-2.x, memory-optimization" }
Looking for the actual reason of refraction explained precisely without analogies
Question: I'm a high school teacher trying to teach my students (15-year-olds) about refraction. I've seen a lot of good analogies to explain why the light changes direction, like the marching band analogy, that the light "chooses" the fastest way, etc., and for most of my students these are satisfying ways to explain the phenomenon. Some students, however, are able to understand a more precise and physically correct answer, but I can't seem to find a good explanation of why the lightwaves actually change direction. So what I'm looking for is an actual explanation, without analogies, of how an increase/decrease in the speed of a lightwave causes it to change direction. Thanks a lot Answer: You could appeal to boundary conditions of Maxwell's equations, or any wave equation for that matter. This is not as abstract as it sounds. See my rough and hurried drawing below: The waves in the first medium travel quickly, so their crests are further apart than in the second medium. The frequencies of all waves being the same, the ratio of the spacings is $n_2/n_1$. Now you can explain that the electromagnetic field has to be continuous across the interface - it can't suddenly jump from one value to another. Therefore, the variations of both waves must align at the interface. You can then fiddle with the geometry a bit to show that the spacing between the intersections of the left hand waves with the interface is inversely proportional to $n_1\sin\theta_1$. The spacing between the intersections of the right hand waves with the interface is inversely proportional to $n_2\sin\theta_2$ (same proportionality constant, to wit $c/\nu$). If, as we have argued, the variations must match up, $n_1\sin\theta_1 = n_2\sin\theta_2$ and you then get Snell's law. You don't of course need to derive Snell's law to show that the directions of the waves have to be different if their variations along the interface have to align.
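The matching argument can be checked numerically. A short sketch (my own illustration, not part of the answer): given the indices and the incident angle, recover the refracted angle from the continuity condition, i.e. Snell's law.

```python
import math

def refracted_angle(n1, n2, theta1):
    """Angle theta2 such that n1*sin(theta1) = n2*sin(theta2).

    The spacing of wave-crest intersections along the interface goes as
    1/(n*sin(theta)); requiring the two spacings to match at the boundary
    gives Snell's law.
    """
    return math.asin(n1 * math.sin(theta1) / n2)

# light entering glass (n = 1.5) from air (n = 1.0) at 30 degrees
theta2 = refracted_angle(1.0, 1.5, math.radians(30))
```

For this example theta2 comes out near 19.5 degrees, i.e. the ray bends towards the normal as the wave slows down.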
{ "domain": "physics.stackexchange", "id": 11987, "tags": "visible-light, refraction" }
Leetcode longest substring with no duplicates
Question:

class Solution(object):
    def lengthOfLongestSubstring(self, s):
        """
        :type s: str
        :rtype: int
        """
        longest = ''
        max_len = 0
        for key, value in enumerate(s):
            if value in longest:
                longest = longest[longest.index(value)+1:] + value
            else:
                longest += value
            max_len = max(max_len, len(longest))
        return max_len

Here is my solution for leetcode's "longest-substring-without-repeating-characters" question. I don't quite understand why my time complexity is so efficient (99% out of all python submissions). Since strings are immutable in python, assuming a string of length n, won't my algorithm be $$ 1+2+3+\dots+(n-1)+n = O(n^2)$$ time? But the optimal solution is \$O(n)\$, yet the runtime from the test cases says my solution is just as good as the optimal one provided? Am I missing something? Also I am having a hard time proving the space complexity for my solution as well. Answer: The linearity of your approach is due to the fact that your longest could never be longer than the size of the alphabet (otherwise it'd have a duplicate for sure). It means that your code runs at \$O(N * A)\$, where \$N\$ is the length of the string, and \$A\$ is the size of the alphabet. Since the latter is a constant, you may safely take it out of the big-O, yielding \$O(N)\$. The asymptotic constant is still a little bit too large (I don't know what alphabet is used in the test cases, but it is safe to assume at least 26), so there is room for improvement. You correctly noticed that the immutability of the strings may hurt performance; try to get rid of them. The most obvious approach is to preallocate a list (of the alphabet size), and use it as a circular buffer. In fact, even that is suboptimal. Try to get rid of the explicit buffer at all (hint: two indices into the original string). It also may be beneficial to keep a dictionary indexed by letters currently present in the buffer with the values being their corresponding indices. This way you can do a constant-time lookup rather than a linear search in the buffer. Keep in mind however that such a dictionary may itself be costly; I expect it to improve performance if the alphabet is quite large. Try and profile it. As for the code, it is simple and clean. Not much to say.
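The two-index (sliding window) version hinted at above can be sketched like this. This is my own take on the hint, not the reviewer's code: a dict from character to its last seen index makes each step O(1).

```python
def length_of_longest_substring(s):
    """O(n) sliding window: `start` is the left edge of the current
    duplicate-free window; `last_seen` maps each character to the index
    of its most recent occurrence."""
    last_seen = {}
    start = 0
    best = 0
    for i, ch in enumerate(s):
        # if ch was already seen inside the current window, move the left edge
        if ch in last_seen and last_seen[ch] >= start:
            start = last_seen[ch] + 1
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best
```

Unlike the original, nothing is sliced or concatenated, so there is no per-step string copying at all.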
{ "domain": "codereview.stackexchange", "id": 35883, "tags": "python, algorithm, programming-challenge, strings, complexity" }
Select topics with search pattern (regex) in rosbag2
Question: Is there a simple way to select multiple topics (e.g. all topics from a node or (sub)namespace) with some sort of search pattern in rosbag2? The previous ROS(1) rosbag implementation allowed the use of regular expressions of the form

$ rosbag record -e "/(.*)_foo/bar"

I cannot find anything like that in the rosbag2 CLI or documentation. However, this looks like a pretty common use case. Am I missing something, or is there a more generic way of doing just this in ROS2? Originally posted by Phgo on ROS Answers with karma: 218 on 2020-10-20 Post score: 0 Original comments Comment by timok on 2020-11-11: Wondering the same thing. There is no issue in the rosbag2 repo for regex. I thought that ros2 targets to have feature parity with ros(1). Is it not so? Comment by timok on 2020-11-11: I added issues for -e and -x options for rosbag2 Answer: To sum this up, it looks like this feature is simply not yet implemented and there is at least no obvious generic way in ROS2 of achieving this. You can follow up the corresponding issue on the rosbag2 repo. If you find a suitable workaround feel free to add another answer. UPDATE: In the meantime the feature was added and is available for Galactic via the ros2 bag record -e and -x options. For older distributions you might want to consider the workaround provided by @gilaadb

-e REGEX, --regex REGEX
    recording only topics matching provided regular expression
-x EXCLUDE, --exclude EXCLUDE
    exclude topics matching provided regular expression. Works with -a and -e, subtracting excluded topics

Originally posted by Phgo with karma: 218 on 2021-01-08 This answer was ACCEPTED on the original site Post score: 1
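For anyone wondering what the -e pattern actually selects: it is an ordinary regular expression over topic names, so the matching can be prototyped in plain Python before recording. This is a sketch of the matching semantics only, not rosbag2 code, and I am assuming search-style (substring) matching here:

```python
import re

def select_topics(pattern, topics):
    """Return the topic names that match the given regular expression."""
    rx = re.compile(pattern)
    return [t for t in topics if rx.search(t)]

# hypothetical topic list; the pattern is the one from the question
topics = ["/camera_foo/bar", "/lidar_foo/bar", "/camera/image", "/tf"]
selected = select_topics(r"/(.*)_foo/bar", topics)
```

Running the sketch selects the two `_foo/bar` topics and leaves the others out.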
{ "domain": "robotics.stackexchange", "id": 35658, "tags": "ros2" }
causal sketches
Question: I don't have much of an idea of how to draw causal sketches. I know that you need to work out the gradient of the light cones, which can be done using a given metric and null vectors. But how do you use null vectors to work out the gradient from the metric? I've got a metric $ds^2=-dt^2+(e^t)(dx^2)$ that I need to draw a causal sketch for. How do I find the gradient (using the null vectors) to draw light cones for a causal sketch? Answer: The null curves have: $$ 0 = ds^2 = -dt^2 + e^t dx^2$$ that gives you the null-curve differential equation: $$ \frac{dx}{dt} = e^{-t/2} $$ Solving it you get the equation of the null curves, and so the light cone.
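For completeness, the integration the answer leaves to the reader can be finished in closed form (my own addition):

```latex
% Integrate dx/dt = +- e^{-t/2} for the two sides of the cone:
\frac{dx}{dt} = \pm\, e^{-t/2}
\quad\Longrightarrow\quad
x(t) = \mp\, 2\, e^{-t/2} + C .
```

Plotting a few of these exponential null curves for different constants $C$ gives the causal sketch: since $dx/dt \to 0$ as $t \to \infty$, the light cones become narrower and narrower at late times.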
{ "domain": "physics.stackexchange", "id": 10906, "tags": "homework-and-exercises, general-relativity, causality" }
Regression - predict n numbers based on another n numbers
Question: I'd like to get a recommendation on how to attack a problem of predicting multiple numbers. The training data contains 4 columns, each giving the probability of the record being in that bucket. So, for example: X = [0.25, 0.5, 0.25, 0.0] and the corresponding output should be e.g. Y = [0.8, 0.1, 0.0, 0.1]. Each row should sum to 1.0. What type of approach would you recommend? I already tried a simple neural network with 4 neurons in the last layer and softmax activation, but I am wondering if there is a better solution. Thanks! Answer: I would recommend using an encoder-decoder structure based on LSTM-RNN for sequence-to-sequence learning; it will provide you with exactly what you want. A link that I find very helpful is: https://machinelearningmastery.com/define-encoder-decoder-sequence-sequence-model-neural-machine-translation-keras/
{ "domain": "datascience.stackexchange", "id": 3173, "tags": "regression" }
How close to a host star can a tidally locked planet be and its dark side still maintain a moderate temperature?
Question: So, imagine an atmosphere-less planet, tidally locked to a sun-like star. How close to the star can the planet be before its dark side becomes too hot? I imagine that at some point the rocks on its sunlit side will melt and evaporate so that the dark side would experience rocky precipitation. Would this be true? Also, at some point the atmosphere of the star itself would engulf the planet. But at which point would these effects make the conditions on the dark side unbearable? Answer: Your general idea about this process is correct. At close semimajor-axis distances, rock can evaporate and will form a silicate-oxygen atmosphere. For low-mass rocky planets, the condensation flow from day- to night-side, which is necessarily very hot, will have to compete with the possibility of escaping vertically from the nightside instead of precipitating. For higher-mass planets, vertical escape will be too difficult, and the hot silicate-oxygen gas will recondense on the nightside, heating it via thermal radiation and the latent heat of condensation. This is an area of active research, and not many groups have worked on it, with the exception of one fantastic article that came out this year. They glue four different 1-D models together (vertical and horizontal, for each of the day- and night-sides of catastrophically evaporating planets) in order to create a sort of fake-2D model of the planetary silicate atmosphere. Their nightsides are very hot in general (500-1000 K), but they sadly do not give silicate densities or pressures. Hence, without the density, it is difficult to estimate how large the heat transfer between the silicate gas and a human would be, i.e. how much a human would 'feel'. If you are very curious though, I am sure you can construct this estimate by assuming silicate saturation pressures, which are given in this article.
{ "domain": "astronomy.stackexchange", "id": 5546, "tags": "planet, exoplanet, temperature, tidal-locking" }
How does the coherence of a spin state relate to the physical concept of "coherence"?
Question: In my textbook the coherence of a spin state $|\psi\rangle $ is measured by the quantity $|\langle \uparrow|\psi| \downarrow \rangle|$. The thing is that I am not sure how this quantity is related to the physical concept of coherence. So is there anybody who could motivate why this quantity actually measures coherence? Answer: Suppose I have two different spins, spin $A$ is in state $\left|\uparrow \right\rangle$ and spin $B$ is in state $\left|\downarrow \right\rangle$. I give you one of these spins randomly, so you have a 1/2 probability of spin up and 1/2 probability of spin down. There's no coherence between up and down because you either have a spin which is definitely up or you have one which is definitely down. The density matrix for this situation is $$\rho = \frac{1}{2} |A\rangle \langle A| + \frac{1}{2} |B\rangle \langle B| = \frac{1}{2} \left( \left|\uparrow\right\rangle\left\langle\uparrow\right| + \left|\downarrow\right\rangle\left\langle\downarrow\right| \right) = \frac{1}{2} \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) \, . $$ Now if we compute $\left\langle \uparrow \right| \rho \left| \downarrow \right\rangle$ we get zero. This is because there are no off-diagonal terms in $\rho$. Physically this means that if we were to measure $\langle\sigma_x\rangle$ we would get zero:$^{[a]}$ \begin{equation} \begin{array}{c} \langle A | \sigma_x | A \rangle = 0, \quad \text{probability }=1/2 \\ \langle B | \sigma_x | B \rangle = 0, \quad \text{probability }=1/2 \\ \end{array} \end{equation} Now suppose instead I gave you $$ |C\rangle = \frac{1}{\sqrt{2}} \left( \left|\uparrow\right\rangle+\left|\downarrow\right\rangle \right) \, .$$ This is a true quantum superposition of spin up and down. You can't think of this as a classical probability of spin up or spin down. 
The density matrix is $$\rho = |C\rangle \langle C| = \frac{1}{2} \left( \begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array} \right) = \frac{1}{2} \left( \left|\uparrow\right\rangle\left\langle\uparrow\right| + \left|\downarrow\right\rangle\left\langle\downarrow\right| + \left|\downarrow\right\rangle\left\langle\uparrow\right| + \left|\uparrow\right\rangle\left\langle\downarrow\right| \right)$$ and you can compute that $\left\langle\uparrow \right| \rho \left| \downarrow \right\rangle = \tfrac{1}{2} \neq 0$. So you see that the nonzero value of $\left\langle\uparrow \right| \rho \left| \downarrow \right\rangle$ directly detects whether or not the state has coherence. Physically this corresponds to the fact that in this state $\langle \sigma_x \rangle = 1 \neq 0$. $[a]$: The same is true for measuring $\sigma_y$.
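The two density matrices can be checked with a few lines of code (my own illustration, using plain 2x2 matrix arithmetic in the {|up>, |down>} basis):

```python
def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(a):
    return a[0][0] + a[1][1]

# sigma_x in the {|up>, |down>} basis
sigma_x = [[0, 1], [1, 0]]

# classical mixture: 1/2 |up><up| + 1/2 |down><down|
rho_mixed = [[0.5, 0.0], [0.0, 0.5]]

# pure superposition |C> = (|up> + |down>)/sqrt(2)
rho_pure = [[0.5, 0.5], [0.5, 0.5]]

# coherence = off-diagonal element <up|rho|down>
coherence_mixed = rho_mixed[0][1]
coherence_pure = rho_pure[0][1]

# expectation value <sigma_x> = Tr(rho sigma_x)
sx_mixed = trace(matmul(rho_mixed, sigma_x))
sx_pure = trace(matmul(rho_pure, sigma_x))
```

The mixture has zero off-diagonal element and zero <sigma_x>; the superposition has off-diagonal element 1/2 and <sigma_x> = 1, exactly as in the answer.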
{ "domain": "physics.stackexchange", "id": 19159, "tags": "quantum-mechanics, quantum-spin, coherence" }
What could a cloud of mini radio dishes see?
Question: Suppose an astronomer gave a 1 m radio dish to 500 people scattered over the face of the Earth and connected them to the internet. The people are directed to set their radio antennae up in their backyard per given instructions. The astronomer would then control the array of radio antennae remotely via the internet. Could such an array of radio dishes be used to do any kind of meaningful astronomical interferometry? If so, what kind? If not, why not? I fully expect that the radio dishes wouldn't hold a candle to the sensitivity of other radio dish networks (EHT, VLA, EVLA, etc.) due to the tiny dish size. I expect time synchronization would be an issue, but perhaps they could find a clever way to synchronize. I imagine this array would have very high resolution due to the large number of large and small baselines, extended further by the Earth's rotation. Between low dish sensitivity, large numbers of dishes and a huge baseline count, I'm curious what such a network could be capable of. Answer: Partial answer: I imagine this array would have very high resolution due to the large number of large and small baselines and I expect time synchronization would be an issue, but perhaps they could find a clever way to synchronize. The 1 meter dishes are small, and so for the dish to have any relevance to the project the wavelength has to be a lot smaller. To do interferometry your synchronization will have to be extremely good; a 3 cm wavelength would be 10 GHz and the period is 0.1 nanosecond. You'll need to have all kinds of expensive and sophisticated electronics to maintain a stable and drift-free timebase at that level of accuracy. You might be able to leverage your long baselines and the fairly wide fields of view of these small dishes by using timing somehow. If you had ten distant dishes each pointed in 50 different directions and there was a sudden event, you might be able to localize it by arrival time differences, a bit like how lightning detectors work. 
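The 0.1 ns figure is just wavelength-to-period arithmetic; a quick sketch of it (my own addition):

```python
# period of the carrier you would need to phase-track for a 3 cm wavelength
c = 299792458.0             # speed of light, m/s
wavelength = 0.03           # 3 cm, in metres
frequency = c / wavelength  # ~1e10 Hz = 10 GHz
period = 1.0 / frequency    # ~1e-10 s = 0.1 ns
```

Any clock drift comparable to that period smears the fringes, which is why VLBI stations carry hydrogen masers rather than relying on, say, NTP over the internet.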
I don't know anything about it, but there is also something called intensity interferometry, though it's not clear if that's actually helpful at all. See the Hanbury Brown and Twiss effect and (paywalled) R. Hanbury Brown, R.Q. Twiss (1954) LXXIV. A new type of interferometer for use in radio astronomy A new type of interferometer for measuring the diameter of discrete radio sources is described and its mathematical theory is given. The principle of the instrument is based upon the correlation between the rectified outputs of two independent receivers at each end of a baseline, and it is shown that the cross-correlation coefficient between these outputs is proportional to the square of the amplitude of the Fourier transform of the intensity distribution across the source. The analysis shows that it should be possible to operate the new instrument with extremely long baselines and that it should be almost unaffected by ionospheric irregularities. See Boffin: a personal story of the early days of radar, radio astronomy and quantum optics See also Dainis Dravins' 2010 BOSON INTERFEROMETRY From astronomy to particle physics, and back, especially the radio part. Only somewhat related, with some discussion of the effect before going on to discuss the optical implementation (but using electronic correlation): Intensity interferometry: Optical imaging with kilometer baselines I wrote a related answer here: https://astronomy.stackexchange.com/a/42131/7982
{ "domain": "astronomy.stackexchange", "id": 5505, "tags": "observational-astronomy, radio-astronomy, radio-telescope, interferometry" }
CameraInfoManager is deprecated, what is replacing it?
Question: The camera_info_manager webpage says CameraInfoManager class moved to the camera_info_manager namespace. For compatibility with ROS C Turtle, the global CameraInfoManager class name is still supported. It is deprecated in Electric Emys and will be removed in Fuerte. Is this just a name change or is something different replacing it? Thanks. Originally posted by Kevin on ROS Answers with karma: 2962 on 2011-08-03 Post score: 1 Answer: It's just the rename to camera_info_manager::CameraInfoManager from CameraInfoManager. Originally posted by tfoote with karma: 58457 on 2011-08-03 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 6335, "tags": "ros, camera-info-manager" }
KMP - longest palindromic prefix
Question: I wanted to ask if there is a way to use the KMP (Knuth-Morris-Pratt) algorithm to find the longest palindromic prefix of a word. I've seen this algorithm used for determining if a word is a rotation of another, for example. We can concatenate the first word with itself and look for a pattern which would be the second word (and if found, the answer is yes). But I cannot imagine how I could use KMP for my problem. I am not sure what the pattern would be, etc. Can it also be done in linear time using KMP? Thank you. Answer: The KMP algorithm is able to solve your problem. Suppose your input string can be represented as $S=AB$ where $A$ is a palindrome ($B$ can be an empty string). Now reverse $S$ and we get $S'=B'A$. Consider the string $T=S*S'=AB*B'A$ where '*' is a character which doesn't appear in $S$. As we can see, $A$ is a border of $T$. Conversely, if $X$ is a border of $T$, then $X$ is a palindrome and thus a palindromic prefix of $S$. So the answer to your problem is the longest border of $T$, which directly corresponds to the last item of the "fail" array in the KMP algorithm. In fact, in order to obtain the longest palindromic prefix of a word, you can use some general methods such as Manacher's algorithm, which is the best choice in dealing with palindromes. Manacher's algorithm can find the longest palindromic substring for every palindromic center in linear time. For example, if the input string is "abbabbba", then the algorithm can give you the following results (every digit corresponds to the length of a palindromic substring centered at its position):

a b b a b b b a
101410501252101

Manacher's algorithm can solve your problem easily. After getting the longest palindromic substring for every palindromic center, you can simply check, for every palindromic center, whether its longest palindromic substring touches the first character on the left. Consequently, the whole algorithm can be done in $O(|s|)$, where $s$ is the input string.
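The KMP construction in the answer translates directly into code. A sketch (my own implementation of the described trick, using '#' as the separator character assumed absent from the input):

```python
def longest_palindromic_prefix(s):
    """Longest palindromic prefix of s via the KMP failure function.

    Build t = s + '#' + reversed(s); a border of t is exactly a
    palindromic prefix of s, so the last entry of the failure array
    gives the answer's length.  Runs in O(|s|).
    """
    if not s:
        return ""
    t = s + "#" + s[::-1]
    fail = [0] * len(t)
    for i in range(1, len(t)):
        k = fail[i - 1]
        while k > 0 and t[i] != t[k]:
            k = fail[k - 1]
        if t[i] == t[k]:
            k += 1
        fail[i] = k
    return s[:fail[-1]]
```

For the answer's example string "abbabbba" this returns "abba", the longest prefix that reads the same both ways.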
{ "domain": "cs.stackexchange", "id": 12372, "tags": "algorithms, search-algorithms, string-matching" }
Calculator using Shunting Yard algorithm
Question: I have implemented a simple calculator using the Shunting Yard algorithm. It takes infix expression as an input "3 + 5 * 2", which evaluates and returns the value 13. public class ShuntingYardAlgorithm{ private static Map<String, Operator> operatorMap = initializeOperatorMap(); private static Map<String, Operator> initializeOperatorMap() { operatorMap = new HashMap<>(); operatorMap.put(Operator.ADD.sign, Operator.ADD); operatorMap.put(Operator.SUBTRACT.sign, Operator.SUBTRACT); operatorMap.put(Operator.MULTIPLY.sign, Operator.MULTIPLY); operatorMap.put(Operator.DIVIDE.sign, Operator.DIVIDE); return operatorMap; } public ShuntingYardAlgorithm() { // Default Constructor } private enum Operator { ADD(1, "+"), SUBTRACT(2, "-"), MULTIPLY(3, "*"), DIVIDE(4, "/"); final int precedence; final String sign; Operator(int precedence, String sign) { this.precedence = precedence; this.sign = sign; } } private static boolean isHighPrecedence(String currentToken, String peekedOperator) { return operatorMap.containsKey(peekedOperator) && operatorMap.get(peekedOperator).precedence >= operatorMap.get(currentToken).precedence; } public static Integer evaluateExpression(String infix) { Stack<String> operatorStack = new Stack<>(); Stack<Integer> outputStack = new Stack<>(); for (String currentToken : infix.split("\\s")) { if ("(".equalsIgnoreCase(currentToken)) { operatorStack.push(currentToken); } else if (operatorMap.containsKey(currentToken)) { while (!operatorStack.isEmpty() && isHighPrecedence(currentToken, operatorStack.peek())) { String higherPrecedenceOperator = operatorStack.pop(); Integer operandLeft = outputStack.pop(); Integer operandRight = outputStack.pop(); outputStack.push(evaluate(operandLeft, operandRight, higherPrecedenceOperator)); } operatorStack.push(currentToken); } else if (currentToken.equalsIgnoreCase(")")) { while (!"(".equalsIgnoreCase(operatorStack.peek())) { String higherPrecedenceOperator = operatorStack.pop(); Integer operandLeft = 
outputStack.pop(); Integer operandRight = outputStack.pop(); outputStack.push(evaluate(operandLeft, operandRight, higherPrecedenceOperator)); } operatorStack.pop(); } else { outputStack.push(Integer.valueOf(currentToken)); } } while (!operatorStack.empty()) { String higherPrecedenceOperator = operatorStack.pop(); Integer operandRight = outputStack.pop(); Integer operandLeft = outputStack.pop(); outputStack.push(evaluate(operandLeft, operandRight, higherPrecedenceOperator)); } return outputStack.pop(); } private static Integer evaluate(Integer operandLeft, Integer operandRight, String operator) { BigDecimal retVal = BigDecimal.ZERO; BigDecimal left = BigDecimal.valueOf(operandLeft); BigDecimal right = BigDecimal.valueOf(operandRight); if (operator.equals(Operator.MULTIPLY.sign)) { retVal = left.multiply(right); } else if (operator.equals(Operator.ADD.sign)) { retVal = left.add(right); } else if (operator.equals(Operator.SUBTRACT.sign)) { retVal = left.subtract(right); } else if (operator.equals(Operator.DIVIDE.sign)) { retVal = left.divide(right); } return retVal.intValue(); } } Answer: Minor remarks: The java convention is to put the opening { on the same line, not on a new line. It's great that you're consistent in your style though! If you're doing the calculations using BigDecimal anyway, why not do that all the way and store the intermediat values as BigDecimal's as well in your stack? method name isHighPrecedence wasn't immediately clear to me which one was the higher precedence. Not sure how to fix this one easily though. Bigger idea: I like how you used an enum for the Operators. This gives a nice list of all the possible (known) operators and an easy place to check if things like precedence are implemented correctly. What I would do though, is go a step further. In java, an Enum is still a full class (except that all instances are created on class loading, you can't create new instances). This means that you can also provide methods there. 
I would expect the precedence comparison to be put in there. Although initially this might still be a bit clunky with the bracket handling (more on that later). Another nice candidate is parsing an operator from a string. This can easily be implemented as follows: public static Operator from(String oper) { for (Operator operator : Operator.values()) { if (operator.sign.equalsIgnoreCase(oper)) { return operator; } } return NIL; } What most people don't know, is that you can also override methods for each of the instances. The evaluate is a perfect candidate for this one. For example: private enum Operator { ADD(1, "+") { @Override public BigDecimal evaluate(BigDecimal left, BigDecimal right) { return left.add(right); } }, SUBTRACT(2, "-") { @Override public BigDecimal evaluate(BigDecimal left, BigDecimal right) { return left.subtract(right); } }, ... [include all operators here] }; final int precedence; final String sign; Operator(int precedence, String sign) { this.precedence = precedence; this.sign = sign; } public abstract BigDecimal evaluate(BigDecimal left, BigDecimal right); That way, if you have an operator you can just call outputStack.push(higherPrecedenceOperator.evaluate(operandLeft, operandRight)); Changing the operator stack to actually contain Operator instead of String got me in a little bit of trouble storing the opening bracket on there. Since the idea is to first parse the operand from the input string, let's also add an Operand for the closing bracket, and a default option for a missing operand (that we can abuse for number inputs as well). 
```java
OPEN_BRACKET(0, "(") {
    @Override
    public BigDecimal evaluate(BigDecimal left, BigDecimal right) {
        throw new IllegalStateException("Cannot apply bracket operand to left and right numbers");
    }
},
CLOSE_BRACKET(0, ")") {
    @Override
    public BigDecimal evaluate(BigDecimal left, BigDecimal right) {
        throw new IllegalStateException("Cannot apply bracket operand to left and right numbers");
    }
},
NIL(0, "") {
    @Override
    public BigDecimal evaluate(BigDecimal left, BigDecimal right) {
        throw new IllegalStateException("trying to evaluate invalid operator");
    }
};
```

Updating the `evaluateExpression` method to work with `Operator`s required a little bit of reordering to handle the numbers correctly again (the right operand sits on top of the output stack, so it must be popped first):

```java
public static BigDecimal evaluateExpression(String infix) {
    Stack<Operator> operatorStack = new Stack<>();
    Stack<BigDecimal> outputStack = new Stack<>();

    for (String currentToken : infix.split("\\s")) {
        Operator currentOperator = Operator.from(currentToken);
        if (currentOperator == Operator.OPEN_BRACKET) {
            operatorStack.push(currentOperator);
        } else if (currentOperator == Operator.CLOSE_BRACKET) {
            while (!Operator.OPEN_BRACKET.equals(operatorStack.peek())) {
                Operator higherPrecedenceOperator = operatorStack.pop();
                BigDecimal operandRight = outputStack.pop();
                BigDecimal operandLeft = outputStack.pop();
                outputStack.push(higherPrecedenceOperator.evaluate(operandLeft, operandRight));
            }
            operatorStack.pop();
        } else if (currentOperator != Operator.NIL) {
            while (!operatorStack.isEmpty() && operatorStack.peek().isHigherPrecedence(currentOperator)) {
                Operator higherPrecedenceOperator = operatorStack.pop();
                BigDecimal operandRight = outputStack.pop();
                BigDecimal operandLeft = outputStack.pop();
                outputStack.push(higherPrecedenceOperator.evaluate(operandLeft, operandRight));
            }
            operatorStack.push(currentOperator);
        } else {
            outputStack.push(BigDecimal.valueOf(Integer.valueOf(currentToken)));
        }
    }

    while (!operatorStack.empty()) {
        Operator higherPrecedenceOperator = operatorStack.pop();
        BigDecimal operandRight = outputStack.pop();
        BigDecimal operandLeft = outputStack.pop();
        outputStack.push(higherPrecedenceOperator.evaluate(operandLeft, operandRight));
    }
    return outputStack.pop();
}
```

This got me thinking. What would we have to change to let the bracket operators evaluate just like the other operators instead? We could first update the current `evaluate` method to take the `outputStack` instead of a left and right operand. That way each `Operator` can decide how many numbers it needs. Specifically for the brackets, we also need to pass in the operator stack so that the opening bracket can push itself onto it, and the closing bracket can pop the stack until it finds an opening bracket.

```java
private enum Operator {
    ADD(1, "+") {
        @Override
        public void evaluate(Stack<BigDecimal> numbers, Stack<Operator> operatorStack) {
            BigDecimal right = numbers.pop();
            BigDecimal left = numbers.pop();
            numbers.push(left.add(right));
        }
    },
    SUBTRACT(2, "-") {
        @Override
        public void evaluate(Stack<BigDecimal> numbers, Stack<Operator> operatorStack) {
            BigDecimal right = numbers.pop();
            BigDecimal left = numbers.pop();
            numbers.push(left.subtract(right));
        }
    },
    MULTIPLY(3, "*") {
        @Override
        public void evaluate(Stack<BigDecimal> numbers, Stack<Operator> operatorStack) {
            BigDecimal right = numbers.pop();
            BigDecimal left = numbers.pop();
            numbers.push(left.multiply(right));
        }
    },
    DIVIDE(4, "/") {
        @Override
        public void evaluate(Stack<BigDecimal> numbers, Stack<Operator> operatorStack) {
            BigDecimal right = numbers.pop();
            BigDecimal left = numbers.pop();
            numbers.push(left.divide(right));
        }
    },
    OPEN_BRACKET(Integer.MIN_VALUE, "(") {
        @Override
        public void evaluate(Stack<BigDecimal> numbers, Stack<Operator> operatorStack) {
            operatorStack.push(this);
        }
    },
    CLOSE_BRACKET(Integer.MAX_VALUE, ")") {
        @Override
        public void evaluate(Stack<BigDecimal> numbers, Stack<Operator> operatorStack) {
            while (!operatorStack.isEmpty() && operatorStack.peek() != OPEN_BRACKET) {
                operatorStack.pop().evaluate(numbers, operatorStack);
            }
            if (operatorStack.isEmpty()) { // no open bracket found!
                throw new IllegalStateException("closing bracket requires earlier matching opening bracket");
            }
            operatorStack.pop();
        }
    },
    NIL(0, "") {
        @Override
        public void evaluate(Stack<BigDecimal> numbers, Stack<Operator> operatorStack) {
            throw new IllegalStateException("trying to evaluate invalid operator");
        }
    };

    private final int precedence;
    private final String sign;

    Operator(int precedence, String sign) {
        this.precedence = precedence;
        this.sign = sign;
    }

    public static Operator from(String oper) {
        for (Operator operator : Operator.values()) {
            if (operator.sign.equalsIgnoreCase(oper)) {
                return operator;
            }
        }
        return NIL;
    }

    public boolean isHigherPrecedence(Operator other) {
        return this.precedence <= other.precedence;
    }

    public abstract void evaluate(Stack<BigDecimal> numbers, Stack<Operator> operatorStack);
}
```

Quick note: I also updated the precedence values for the open and close brackets so that they're handled correctly while going through the stack. This lets us greatly simplify the `evaluateExpression` implementation as well, with one small special case: an opening bracket is pushed directly, so it never forces the operators already on the stack to evaluate.

```java
public static BigDecimal evaluateExpression(String infix) {
    Stack<Operator> operatorStack = new Stack<>();
    Stack<BigDecimal> outputStack = new Stack<>();

    for (String currentToken : infix.split("\\s")) {
        Operator currentOperator = Operator.from(currentToken);
        if (currentOperator == Operator.NIL) {
            // number (or error from unknown operator?)
            outputStack.push(BigDecimal.valueOf(Integer.valueOf(currentToken)));
        } else if (currentOperator == Operator.OPEN_BRACKET) {
            // an opening bracket goes straight onto the stack
            operatorStack.push(currentOperator);
        } else {
            while (!operatorStack.isEmpty() && !operatorStack.peek().isHigherPrecedence(currentOperator)) {
                Operator higherPrecedenceOperator = operatorStack.pop();
                higherPrecedenceOperator.evaluate(outputStack, operatorStack);
            }
            operatorStack.push(currentOperator);
        }
    }

    while (!operatorStack.empty()) {
        operatorStack.pop().evaluate(outputStack, operatorStack);
    }
    return outputStack.pop();
}
```

The best part about implementing the Operators this way?
We can easily add new operators, and they're not limited to 2 operands either! For example, adding the following `Operator` to the enum class:

```java
ABS(5, "abs") {
    @Override
    public void evaluate(Stack<BigDecimal> numbers, Stack<Operator> operatorStack) {
        numbers.push(numbers.pop().abs());
    }
},
```

is all we need to do to support input like `"abs -2"`, which resolves to `2`.
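To see the whole pattern work end-to-end, here is a compact, self-contained sketch of the same design. The names (`MiniCalc`, `Op`, `evaluate`, `yieldsTo`) are made up for illustration and it is not a drop-in replacement for the reviewed class; it also assumes a strict `<` comparison so that equal-precedence operators stay left-associative, and pushes the opening bracket directly as discussed above.

```java
import java.math.BigDecimal;
import java.util.Stack;

public class MiniCalc {

    private enum Op {
        ADD(1, "+") {
            void eval(Stack<BigDecimal> numbers, Stack<Op> ops) {
                BigDecimal right = numbers.pop();
                numbers.push(numbers.pop().add(right));
            }
        },
        SUB(1, "-") {
            void eval(Stack<BigDecimal> numbers, Stack<Op> ops) {
                BigDecimal right = numbers.pop();
                numbers.push(numbers.pop().subtract(right));
            }
        },
        MUL(2, "*") {
            void eval(Stack<BigDecimal> numbers, Stack<Op> ops) {
                BigDecimal right = numbers.pop();
                numbers.push(numbers.pop().multiply(right));
            }
        },
        ABS(3, "abs") {
            void eval(Stack<BigDecimal> numbers, Stack<Op> ops) {
                numbers.push(numbers.pop().abs()); // unary: needs only one operand
            }
        },
        OPEN(Integer.MIN_VALUE, "(") {
            void eval(Stack<BigDecimal> numbers, Stack<Op> ops) {
                ops.push(this); // only reached for an unmatched '('
            }
        },
        CLOSE(Integer.MAX_VALUE, ")") {
            void eval(Stack<BigDecimal> numbers, Stack<Op> ops) {
                while (ops.peek() != OPEN) {
                    ops.pop().eval(numbers, ops); // resolve everything inside the brackets
                }
                ops.pop(); // discard the matching '('
            }
        },
        NIL(0, "") {
            void eval(Stack<BigDecimal> numbers, Stack<Op> ops) {
                throw new IllegalStateException("not an operator");
            }
        };

        final int prec;
        final String sign;

        Op(int prec, String sign) { this.prec = prec; this.sign = sign; }

        static Op from(String token) {
            for (Op op : values()) {
                if (op.sign.equals(token)) { return op; }
            }
            return NIL;
        }

        // strict comparison keeps equal-precedence operators left-associative
        boolean yieldsTo(Op incoming) { return this.prec < incoming.prec; }

        abstract void eval(Stack<BigDecimal> numbers, Stack<Op> ops);
    }

    public static BigDecimal evaluate(String infix) {
        Stack<Op> ops = new Stack<>();
        Stack<BigDecimal> out = new Stack<>();
        for (String token : infix.split("\\s+")) {
            Op op = Op.from(token);
            if (op == Op.NIL) {
                out.push(new BigDecimal(token)); // plain number
            } else if (op == Op.OPEN) {
                ops.push(op); // an opening bracket goes straight onto the stack
            } else {
                while (!ops.isEmpty() && !ops.peek().yieldsTo(op)) {
                    ops.pop().eval(out, ops); // resolve higher/equal precedence first
                }
                ops.push(op);
            }
        }
        while (!ops.isEmpty()) {
            ops.pop().eval(out, ops); // a pending ')' resolves its bracket group here
        }
        return out.pop();
    }

    public static void main(String[] args) {
        System.out.println(evaluate("( 2 + 3 ) * 4")); // 20
        System.out.println(evaluate("2 * ( 3 + 4 )")); // 14
        System.out.println(evaluate("abs -5"));        // 5
    }
}
```

Note that `"2 * ( 3 + 4 )"` only works because of the direct push of the opening bracket; if `(` went through the generic precedence loop, the pending `*` would try to evaluate with a single operand on the output stack.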