How did scientists come to know what to do with $E=mc^2$?
Question: When Albert Einstein proposed his famous equation $E=mc^2$ for rest mass, he never said anything about breaking the nucleus. So how did scientists come to know what it means? But then, mass is not only about the nucleus: there are also electrons, and protons made up of quarks. Shouldn't protons and neutrons break apart too, until everything becomes pure energy like photons? Answer: Nuclei are not elementary particles: they are made up of smaller constituents with lower mass, i.e., protons and neutrons. This is the main reason why they can decay (break down). Electrons are elementary particles, and there is no other particle similar to the electron (technically, a lepton) with a smaller mass, therefore they cannot "break down". However, electrons can be converted into photons (as you said, "pure energy") in a process called electron-positron annihilation. Basically, if an electron $e^-$ collides with its antiparticle $e^+$ (a positron), the outcome of the collision is the production of photons $\gamma$, as $$e^-+e^+\to2\gamma$$ So, you cannot break down an electron to get photons; the only way to get photons out of electrons is to collide matter and antimatter. Regarding neutrons, they can actually "break down". There is more than one type of neutron decay, but the most frequent is $$n\to p+e^-+\nu$$ where the neutron $n$ decays into a proton, an electron and an (anti)neutrino. Proton decay is a matter of speculation so far. However, all particles can annihilate when they collide with their own antiparticles, producing photons, in a way analogous to electron-positron annihilation. Importantly, if you use $E=mc^2$ to calculate the total energy before and after any decay process, the initial and final energies are the same; that is, energy is conserved.
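To make the conservation bookkeeping concrete, here is a small illustrative sketch (my addition, not part of the original answer; constants are CODATA values) computing the photon energy released in electron-positron annihilation:

```python
# Energy bookkeeping for e- + e+ -> 2 gamma, using E = m c^2.
# Constants are CODATA values; this script is illustrative only.
m_e = 9.1093837015e-31   # electron (and positron) rest mass, kg
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

E_total = 2 * m_e * c**2                  # total rest energy entering the collision
E_per_photon_MeV = m_e * c**2 / eV / 1e6  # each photon carries one electron rest energy

print(E_per_photon_MeV)  # ~0.511 MeV
```

For annihilation at rest each photon carries about 0.511 MeV, which is why 511 keV gamma rays are the signature of positron annihilation (e.g. in PET scanners).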
{ "domain": "physics.stackexchange", "id": 71361, "tags": "special-relativity, nuclear-physics, mass-energy" }
What is the time average over entire phase space
Question: If this equation is an ensemble average over phase space $$\langle A\rangle=\int_\Gamma \prod^{3N} \ {\rm d}q_i \ {\rm d}p_i\ A(\{q_i\},\,\{p_i\})\,\rho(\{q_i\},\,\{p_i\})$$ what is the time average of $A$, and how can it be calculated? Answer: Using an abbreviated notation, the time average is $$\bar A \equiv \lim_{T\to \infty} \frac 1 T \int_0^T A(p(t),q(t)) \ dt$$ Also notice that your expression is wrong. It should be, if you want to write it explicitly, $$\langle A \rangle = \int_{\Gamma} \prod_{i=1}^{3N} dp_i dq_i \ A(p_1,q_1,\dots p_{3N}, q_{3N}) \ \rho (p_1,q_1,\dots p_{3N}, q_{3N})$$ or, in abbreviated notation, $$\langle A \rangle = \int A(p,q) \rho(p,q) \ dp dq$$
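As a numerical illustration (my own sketch, not from the answer): the time average of $A = q^2$ along the trajectory $q(t) = \cos t$, $p(t) = -\sin t$ of a unit harmonic oscillator converges to $1/2$ for large $T$:

```python
import numpy as np

# Time average of A(q(t), p(t)) = q(t)^2 along q(t) = cos(t), p(t) = -sin(t).
# The long-time average of cos^2 is 1/2.
T = 10_000.0
t = np.linspace(0.0, T, 2_000_000)
A = np.cos(t) ** 2
time_avg = A.mean()   # uniform sampling: the mean approximates (1/T) * integral A dt

print(time_avg)  # ~0.5
```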
{ "domain": "physics.stackexchange", "id": 47054, "tags": "statistical-mechanics, time, phase-space" }
Why does acceleration seem not to be the gradient of gravitational potential?
Question: Consider a spherically symmetric distribution of density $\rho(r)$. We can define the mass enclosed within each radius $r$ using $\frac{dM(r)}{dr} = 4\pi r^2 \rho(r)$, with the condition that $M(r=0) = 0$. The gravitational acceleration and potential can then be given as $$\begin{align} a_g &= -\frac{GM}{r^2}\\ \Phi_g &= -\frac{GM}{r}. \end{align}$$ But then the acceleration wouldn't seem to be the gradient of the potential! $$a_g = - \frac{GM}{r^2} \neq -\nabla \Phi_g = \frac{d}{dr}\left(\frac{GM(r)}{r}\right).$$ What am I missing? This comes up, for example, when writing the equations of stellar structure or the Lane-Emden equation. I've seen these written both as $\frac{1}{\rho}\frac{dP}{dr} = -\frac{GM}{r^2}$ and as $\frac{1}{\rho}\frac{dP}{dr} = -\nabla \Phi_g$; for example, the Wikipedia article on Lane-Emden includes both of these expressions. This seems to suggest that $\Phi_g \neq \frac{GM(r)}{r}$... Answer: Actually, your expression for the potential $\Phi(r)$ is incorrect. The expression $\Phi(r) = -\frac{GM(r)}{r}$ is only valid outside the sphere. As an explicit demonstration of its invalidity, note that $$\underset{r\rightarrow0}{\text{lim}}\,\Phi(r)=\underset{r\rightarrow0}{\text{lim}}\,\left[-\frac{G}{r}\int_0^r4\pi r'^2\rho(r')\,dr'\right]=0$$ assuming that $\rho(r)$ is finite. In particular, this also predicts that $\Phi(0)=0$ for a uniform-density sphere. However, for a uniform-density sphere we actually have $\Phi(0)=-2\pi G\rho R^2$ (using the convention $\Phi(\infty)=0$). 
Actual Potential Inside the Distribution: The gravitational potential $\Phi_r(a)$ at radius $a$ due to a thin spherical shell of radius $r$ and density $\rho(r)$, written per unit shell thickness, is $$\Phi_r(a)=\left\{\begin{aligned} &-\frac{4\pi G r^2\rho(r)}{a} &&: a>r\\ &-4\pi G \rho(r) r &&: a\leq r \end{aligned} \right.$$ and thus the correct expression for the potential due to an arbitrary spherically-symmetric mass distribution $\rho(r)$ becomes $$\Phi(a)=\int_0^\infty\Phi_r(a)\,dr.$$ If $\rho(r)=0$ for all $r\geq R$, then we can rewrite the integral as $$\Phi(a)=\int_0^R\Phi_r(a)\,dr=\left\{\begin{aligned} &\int_0^R-\frac{4\pi G r^2\rho(r)}{a}\,dr &&: a>R\\ &\int_0^a-\frac{4\pi G r^2\rho(r)}{a}\,dr+\int_a^R-4\pi G \rho(r) r\,dr &&: a\leq R \end{aligned} \right. \\ =\left\{\begin{aligned} &-\frac{GM(R)}{a} &&: a>R\\ &-\frac{GM(a)}{a}+\int_a^R-4\pi G \rho(r) r\,dr &&: a\leq R \end{aligned} \right..$$ In essence, your expression for the potential was missing the term $\int_a^R-4\pi G \rho(r) r\,dr$, which caused the errors you saw. Finally, answering your question: we then compute the gradient, $\nabla\Phi(a)$, using the $a\leq R$ case for $\Phi$ listed above. In Mathematica this easily becomes -D[Integrate[-((4*G*Pi*r^2*\[Rho][r])/a), {r, 0, a}] + Integrate[-4*G*Pi*r*\[Rho][r], {r, a, R}], a]//TeXForm yielding $$a_g(a)=-\nabla\Phi(a)=-\int_0^a \frac{4 \pi G r^2 \rho (r)}{a^2} \, dr=-\frac{GM(a)}{a^2},$$ exactly as you wanted. Lane-Emden in a nutshell: In essence, Lane-Emden is just the combination of Poisson's equation $\Delta\Phi=4\pi G\rho$ (which is universally valid), hydrostatic equilibrium $\nabla P=-\rho\nabla\Phi$ (which is valid in a static gravitating fluid), the polytropic constitutive relation $P=C\rho^k$ (which is an approximation), and spherical symmetry. The polytropic assumption allows you to eliminate $P$, resulting in a differential equation solely in $\rho$. (Minor note: Lane-Emden is commonly expressed in terms of a dimensionless parameter $\theta$, rather than $\rho$ itself.)
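A quick numerical sanity check of the interior potential (my own sketch, taking $G=\rho=R=1$ for a uniform-density sphere): the negative gradient of $\Phi(a)$, including the outer-shell term that the question was missing, reproduces $-GM(a)/a^2$.

```python
import numpy as np

# Uniform-density sphere with G = rho = R = 1:
# Phi(a) = -G M(a)/a + Int_a^R (-4 pi G rho r) dr, and -dPhi/da should
# equal a_g = -G M(a)/a^2.
G = rho = R = 1.0

def M(r):                      # mass enclosed within radius r
    return 4.0 / 3.0 * np.pi * rho * r**3

def Phi(a):                    # potential inside the sphere (a <= R)
    outer = -4.0 * np.pi * G * rho * (R**2 - a**2) / 2.0  # Int_a^R -4 pi G rho r dr
    return -G * M(a) / a + outer

a, h = 0.6, 1e-6
a_g = -(Phi(a + h) - Phi(a - h)) / (2 * h)   # -dPhi/da via central difference
print(a_g, -G * M(a) / a**2)                  # the two numbers agree
```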
{ "domain": "physics.stackexchange", "id": 13491, "tags": "newtonian-gravity, acceleration, potential-energy, gauss-law" }
process numerical input arguments of mixed ints, floats, and "array-like" things of same length>1
Question: I'm trying to process numerical input arguments for a function. Users can mix ints, floats, and "array-like" things of ints and floats, as long as they are all length=1 or the same length>1. Now I convert them as follows:

convert to float:
    float
    int
    np.float64
    range object resulting in len == 1
    len==1 list or tuple of int or float
    len==1 np.ndarray of dtype np.int or np.float

convert to np.ndarray; dtype==float:
    range object resulting in len > 1
    len > 1 list or tuple of ints & floats
    len > 1 np.ndarray of dtype np.int or np.float

I then test all resulting arrays to make sure they are the same length. If so, I return a list containing floats and arrays; if not, I return None. I want to avoid up-conversion of booleans and small byte-length ints. The script below appears to do what I want by brute-force testing, but I wonder if there is a better way? Desired behaviors: fixem(some_bad_things) returns None; fixem(all_good_things) returns [42.0, 3.14, 3.141592653589793, 2.718281828459045, 3.0, 42.0, 3.0, 42.0, array([1. , 2.3, 2. ]), array([3.14, 1. , 4. 
]), array([1., 2., 2.]), array([3., 1., 4.]), array([0., 1., 2.]), array([0, 1, 2])] and sum(fixem(all_good_things)) returns array([149.13987448 149.29987448 156.99987448])

    def fixit(x):
        result = None
        if type(x) in (int, float, np.float64):
            result = float(x)
        elif type(x) == range:
            y = list(x)
            if all([type(q) in (int, float) for q in y]):
                if len(y) == 1:
                    result = float(y[0])
                elif len(y) > 1:
                    result = np.array(y)
        elif type(x) in (tuple, list) and all([type(q) in (int, float) for q in x]):
            y = np.array(x)
            if y.dtype in (int, float):
                if len(y) == 1:
                    result = float(y[0])
                elif len(y) > 1:
                    result = y.astype(float)
        elif (type(x) == np.ndarray and len(x.shape) == 1
              and x.dtype in (np.int, np.float)):
            if len(x) == 1:
                result = float(x[0])
            elif len(x) > 1:
                result = x.astype(float)
        return result

    def fixem(things):
        final = None
        results = [fixit(thing) for thing in things]
        floats = [r for r in results if type(r) is float]
        arrays = [r for r in results if type(r) is np.ndarray]
        others = [r for r in results if type(r) not in (float, np.ndarray)]
        if len(others) == 0:
            if len(arrays) == 0 or len(set([len(a) for a in arrays])) == 1:  # none or all same length
                final = floats + arrays
        return final

    import numpy as np

    some_bad_things = ('123', False, None, True, 42, 3.14, np.pi, np.exp(1),
                       [1, 2.3, 2], (3.14, 1, 4), [1, 2, 2], (3, 1, 4), (3,), [42],
                       (3.,), [42.], np.array([True, False]), np.array([False]),
                       np.array(False), np.array('xyz'), np.array(42), np.array(42.),
                       np.arange(3.), range(3))

    all_good_things = (42, 3.14, np.pi, np.exp(1), [1, 2.3, 2], (3.14, 1, 4),
                       [1, 2, 2], (3, 1, 4), (3,), [42], (3.,), [42.],
                       np.arange(3.), range(3))

    for i, things in enumerate((some_bad_things, all_good_things)):
        print(i, fixem(things))
    print(sum(fixem(all_good_things)))  # confirm

Answer: I suggest using isinstance instead of type to check the types of variables. You can read about it in detail here: What are the differences between type() and isinstance()? 
So, for example, instead of writing:

    if type(x) in (int, float, np.float64):

you would write:

    if isinstance(x, (int, float)):

You can check that it works for np.exp(1), which is of type np.float64. When x is of type range, the following check is redundant:

    if all([type(q) in (int, float) for q in y])

as the elements of y will always be integers. Also, there is no need to convert the range to a list. The following will also work:

    result = float(x[0]) if len(x) == 1 else np.array(x)

To check if a list is empty in Python we usually write:

    if not others:

instead of:

    if len(others) == 0:

Imports should be at the top of the script. Move the import numpy as np line there. In the future, when you have a function that can accept variables of different types and its behavior depends on which type it gets, you could try using singledispatch. I could come up with the following implementation:

    from collections.abc import Iterable
    from functools import singledispatch
    from numbers import Real

    import numpy as np

    @singledispatch
    def fixit(x):
        return None

    @fixit.register
    def _(x: Real):
        return float(x)

    @fixit.register
    def _(x: Iterable):
        y = np.array(x)
        if y.dtype in (np.int, np.float) and len(y.shape) == 1:
            return float(y[0]) if len(y) == 1 else y.astype(float)
        else:
            return None

I didn't check it thoroughly, but for your test cases it works (it looks a bit ugly, though). A better way to solve your problem would be to convert all the values immediately to NumPy arrays of at least one dimension. 
We would use the np.atleast_1d function for that:

    def to_normalized_data(values):
        arrays = list(map(np.atleast_1d, values))
        sizes = set(map(np.size, arrays))
        has_bad_types = any(array.dtype not in (np.int32, np.float64)
                            for array in arrays)
        if len(sizes) > 2 or has_bad_types:
            return None
        max_size = max(sizes)
        singletons = [float(array[0]) for array in arrays if array.size == 1]
        iterables = [array.astype(float) for array in arrays if array.size == max_size]
        return singletons + iterables

    >>> to_normalized_data(all_good_things)
    [42.0, 3.14, 3.141592653589793, 2.718281828459045, 3.0, 42.0, 3.0, 42.0, array([1. , 2.3, 2. ]), array([3.14, 1. , 4. ]), array([1., 2., 2.]), array([3., 1., 4.]), array([0., 1., 2.]), array([0., 1., 2.])]
    >>> sum(to_normalized_data(all_good_things))
    array([149.13987448, 149.29987448, 156.99987448])
    >>> print(to_normalized_data(some_bad_things))
    None

Answering your comments:

"For some reason my anaconda's numpy (1.17.3) returns int64 rather than int32 as you have in item 6."

Looks like this behavior is OS-specific. Probably a better way to check the types of the obtained arrays would be by using np.can_cast. So, instead of writing:

    has_bad_types = any(array.dtype not in (np.int32, np.float64) for array in arrays)

we could write:

    has_bad_types = not all(np.can_cast(array.dtype, np.float64) for array in arrays)

"Item #6 doesn't reject two different lengths if no singletons are present. With ([1, 2, 3], [1, 2, 3, 4]) as input, the output is [array([1., 2., 3., 4.])] and [1, 2, 3] just falls through the cracks and disappears."

Welp, I missed this case... We can add it back as:

    if len(sizes) > 2 or 1 not in sizes or has_bad_types:

"Also, my original script tested if len(x.shape) == 1 in order to reject ndim > 1 arrays, which item #6 doesn't, but that can be easily added back with something like testing for set(map(np.ndim, arrays)) == set((1,)). This is important because np.size won't distinguish between a length=4 1D array and a 2x2 array." 
Yep, that's right. Taking all the above into account, the final code could look like this:

    def to_normalized_data(values):
        arrays = list(map(np.atleast_1d, values))
        sizes = set(map(np.size, arrays))
        have_bad_types = not all(np.can_cast(array.dtype, np.float64)
                                 for array in arrays)
        have_several_dimensions = set(map(np.ndim, arrays)) > {1}
        if (len(sizes) > 2 or 1 not in sizes
                or have_bad_types or have_several_dimensions):
            return None
        max_size = max(sizes)
        singletons = [float(array[0]) for array in arrays if array.size == 1]
        iterables = [array.astype(float) for array in arrays if array.size == max_size]
        return singletons + iterables
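A short usage sketch of the final function (the function is repeated here so the snippet is self-contained; the inputs are illustrative, not from the original post):

```python
import numpy as np

def to_normalized_data(values):
    arrays = list(map(np.atleast_1d, values))
    sizes = set(map(np.size, arrays))
    have_bad_types = not all(np.can_cast(a.dtype, np.float64) for a in arrays)
    have_several_dimensions = set(map(np.ndim, arrays)) > {1}
    if len(sizes) > 2 or 1 not in sizes or have_bad_types or have_several_dimensions:
        return None
    max_size = max(sizes)
    singletons = [float(a[0]) for a in arrays if a.size == 1]
    iterables = [a.astype(float) for a in arrays if a.size == max_size]
    return singletons + iterables

good = to_normalized_data((42, (3,), [1.0, 2.0, 3.0]))  # scalars plus one array
bad = to_normalized_data(([1, 2, 3], [1, 2, 3, 4]))     # mismatched lengths
print(good)  # [42.0, 3.0, array([1., 2., 3.])]
print(bad)   # None
```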
{ "domain": "codereview.stackexchange", "id": 36718, "tags": "python, python-3.x, numpy, converting" }
Why don't we breathe nitrogen when it makes up most of the air?
Question: Why don't we breathe nitrogen when it makes up most of the air? Why do we always tend to use oxygen, not hydrogen or nitrogen? Answer: Animals use oxygen as a chemical energy source because oxygen gas can react with many other compounds to form oxides, which releases energy and happens spontaneously. Both carbon and nitrogen can be made to react with oxygen, but otherwise they are pretty inert. So of all the gases in the air present at over a fraction of a percent, oxygen is the only one we can use for energy. Nitrogen gas itself (N2) is incredibly chemically inert; N2 requires energy input to react chemically, whereas biometabolism relies on a chemical release of energy. If we had ammonia gas (NH3) in our air it would be a great redox source of energy: taking energy from the ammonia could produce N2. N2, by contrast, takes a lot of work put into it to get the nitrogen out for other uses. Hydrogen (and sulfur) are both possible substitutes for oxygen in the role of redox energy source, but both are normally pretty small components of our environment. On another planet they might well be the basis of biometabolism. Of course, the fact that plants can use carbon dioxide to fix carbon is a different case of biology using a gas out of the air; it's the defining quality of plants! Fixing carbon from CO2 is endothermic, i.e. it requires energy input, so plants have to use sunlight to supply that energy, which is very costly energetically. Animals can afford to move and grow because they use oxygen while they eat plants.
{ "domain": "biology.stackexchange", "id": 6865, "tags": "cellular-respiration, respiration" }
Textbook Problem: Fiber Coupling Spheres
Question: I am reading the book Fundamentals of Photonics and trying to solve the problems at the end of chapter 1, but I got stuck on this one. Tiny glass balls are often used as lenses to couple light into and out of optical fibers. The fiber end is located at a distance $f$ from the sphere. For a sphere of radius $a=1$ mm and refractive index $n=1.8$, determine $f$ such that a ray parallel to the optical axis at a distance $y=0.7$ mm is focused onto the fiber, as illustrated in the picture. Could you help me? Answer: You can look up geometric optics and the paraxial approximation, though here the angles aren't small, which is a little strange for such a problem; in optical engineering we use the paraxial formulas for these calculations. It seems that the right answer is 0.156 mm.
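For readers who want to check numbers like this themselves, here is a hedged sketch (the geometry is reconstructed from the problem statement, not taken from the book's solution manual) comparing the paraxial back focal distance of a ball lens with an exact Snell's-law trace of the $y=0.7$ mm ray; a ray this far off axis is strongly non-paraxial for $n=1.8$, so the two numbers differ substantially:

```python
import numpy as np

# Ball lens of radius a and index n; incoming ray parallel to the axis at
# height y. All geometry below is an assumption reconstructed from the
# problem statement.
a, n, y = 1.0, 1.8, 0.7   # mm, dimensionless, mm

# Paraxial ball-lens focal length: EFL = n*a/(2*(n-1)) measured from the
# centre, so the back focal distance from the rear surface is BFD = EFL - a.
bfd = a * (2.0 - n) / (2.0 * (n - 1.0))

# Exact meridional trace via Snell's law.
t1 = np.arcsin(y / a)                       # incidence angle at the front surface
t2 = np.arcsin(np.sin(t1) / n)              # refraction angle inside the glass
phi2 = (np.pi - t1) - (np.pi - 2.0 * t2)    # polar angle of the exit point
x2, y2 = a * np.cos(phi2), a * np.sin(phi2)
# The exit ray is deviated by 2*(t1 - t2) in total; find its axis crossing.
f_exact = (x2 + y2 / np.tan(2.0 * (t1 - t2))) - a

print(bfd, f_exact)   # paraxial BFD 0.125 mm; this particular ray crosses closer
```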
{ "domain": "physics.stackexchange", "id": 20505, "tags": "homework-and-exercises, optics" }
Vibrational spectroscopy energy spectrum
Question: I have a question regarding vibrational spectroscopy. In vibrational spectroscopy we describe the vibration of molecules with the Morse potential, which gives us stationary wavefunctions; these give the probability amplitude for finding one atom at a certain distance from the other atom. The energies corresponding to these wavefunctions are discrete, and so the spectrum is also discrete. However, if the molecule is classically vibrating it should be described by a superposition of eigenstates, and hence the spectrum should be continuous. So where is the problem? Answer: "However, if the molecule is classically vibrating it should be described by a superposition of eigenstates, and hence the spectrum should be continuous. So where is the problem?" Superposition of eigenstates is fine, but it does not imply that the absorption spectrum should be continuous. When you analyze the interaction of an external field with the Hamiltonian, it is always strong for resonant frequencies defined by differences between Hamiltonian eigenvalues, and much weaker for frequencies that are not close to any such resonant frequency. A superposition state only changes which resonant frequencies can be "seen" in that state; it does not introduce a continuous spectrum. For a continuous absorption spectrum, one needs a continuous spectrum of the Hamiltonian.
{ "domain": "physics.stackexchange", "id": 93334, "tags": "harmonic-oscillator, spectroscopy, vibrations" }
Inconsistent integral and distance in spherical coordinates
Question: I am currently studying this problem: 14 b) There you see an integral $$A(r) = \int f(\theta)\, (-\sin(\phi), \cos(\phi), 0)\, d\Omega$$ where $f$ is the function containing all the rest of the integrand that you see there, and I don't see why this integral is not zero (I am especially referring to the integral in the second row of part b). I mean, clearly $$\int_{0}^{2\pi} (-\sin(\phi), \cos(\phi), 0)\, d\phi = 0,$$ so why does this integral not vanish completely? Also, I don't get why there is this $\phi'$ in the denominator. I mean, don't we have $$\|r-r'\| = \sqrt{r^2+r'^2-2rr'\cos(\theta-\theta')},$$ so this should not depend on $\phi'$? (This is the reason why I said that $f$ only depends on $\theta$.) Answer: Your statement about $\|r-r'\|$ is true only if $r$ and $r'$ have the same $\phi$ coordinate (the same "longitude"). The denominator does have a $\phi'$ dependence. The value of that modulus will be larger when $\phi \neq \phi'$.
{ "domain": "physics.stackexchange", "id": 14390, "tags": "homework-and-exercises, electrostatics, coordinate-systems, calculus" }
Which algorithm can I use to estimate total number of passengers carried from time series of passenger counts
Question: I have time series data coming at 10-second intervals from a passenger counter in a bus: [10, 10, 10, 10, 9, 9, 9, 5, 5, 5, 10, 10, ...]. I need to estimate the total number of passengers carried in 1 hour. When the count decreases, it means someone/some people got off, and when it increases it means new people got on. Answer: Maybe I'm missing something, but it seems to me that, to know the total number of people that have been in a bus during an hour, you just need to start with the initial value of people for that hour and add all the increments (not the decrements) over that hour. For instance, if during one hour we had the following counter values: 10, 10, 10, 10, 9, 9, 9, 5, 5, 5, 10 We would first compute the successive differences (starting at the first value): 10, 0, 0, 0, -1, 0, 0, -4, 0, 0, +5 And then we would add only the positive values together: 10 + 5 = 15 Please clarify if my understanding of the problem is not correct.
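The rule the answer describes (initial count plus the sum of the positive increments) is a few lines to implement; a minimal sketch:

```python
# Total riders = initial count + sum of all positive increments in the
# counter series. A decrease only means people got off, so it is ignored.
def total_passengers(counts):
    total = counts[0]
    for prev, cur in zip(counts, counts[1:]):
        if cur > prev:            # an increase means new people boarded
            total += cur - prev
    return total

print(total_passengers([10, 10, 10, 10, 9, 9, 9, 5, 5, 5, 10]))  # -> 15
```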
{ "domain": "datascience.stackexchange", "id": 11728, "tags": "time-series, counts" }
What is the difference between the Chandrasekhar limit and the Schwarzschild radius?
Question: I want a qualitative difference between the Chandrasekhar limit and the Schwarzschild radius. They both pretty much look like the same thing. Answer: The key difference (other than the fact that one is a radius limit and the other a mass limit) is that the Schwarzschild limit is when the gravity is so strong that no force can oppose it, whereas the Chandrasekhar limit is when the gravity is so strong that one specific force, the gas pressure of degenerate electrons, can no longer oppose it. Also, as pointed out above, the former is a limit on the radius for a given mass, for any forces, whereas the latter is a limit on the mass itself, applying only when the electrons are degenerate. The connection is that the Schwarzschild limit says how much you can compress a given mass before no force could oppose gravity, whereas the Chandrasekhar limit says how much mass you need such that the self-gravity, given a long enough time of losing heat, will always contract within the Schwarzschild limit for that mass. Any mass above the Chandrasekhar limit will eventually contract below its Schwarzschild limit, but a lesser mass might not contract below its Schwarzschild limit, ever. Also, a mass much larger than the Chandrasekhar limit can contract below its Schwarzschild radius without ever becoming degenerate, so would collapse even if there was no such thing as degeneracy or the Chandrasekhar limit.
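As a numeric illustration of the connection (my own sketch, with rounded constants): the Schwarzschild radius for an object right at the Chandrasekhar mass, roughly $1.4\,M_\odot$, is only a few kilometres:

```python
# Schwarzschild radius r_s = 2GM/c^2 for a Chandrasekhar-mass object.
# Constants are rounded standard values; this is an illustrative estimate.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
M_ch = 1.4 * M_sun   # the Chandrasekhar limit, roughly

r_s = 2 * G * M_ch / c**2
print(r_s / 1e3)  # ~4 km: compress 1.4 solar masses below this and no force can resist
```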
{ "domain": "physics.stackexchange", "id": 39015, "tags": "black-holes, astrophysics, event-horizon, stellar-physics" }
What happens when two photons collide with one another, head on, dead center?
Question: If two photons were to collide directly, head on, and are of the same energy, what happens? Are new particles created? Is energy released? Or do they just pass through one another? Answer: Photons don't directly interact with each other, but if one photon produced an e+/e- pair, then the second photon could interact with that pair. The interaction has to conserve the energy of the two photons, and of course their momentum as well. But yes, most probably (depending on their energy) they would just pass right "through" each other.
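For scale (a hedged side calculation, not part of the original answer): in a symmetric head-on collision, pair production $\gamma\gamma \to e^+e^-$ becomes possible only when each photon carries at least the electron rest energy:

```python
# Threshold for gamma + gamma -> e+ + e- in a symmetric head-on collision:
# each photon needs at least the electron rest energy m_e c^2.
h = 6.62607015e-34       # Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

E_threshold_MeV = m_e * c**2 / eV / 1e6   # ~0.511 MeV per photon
lam = h / (m_e * c)                        # corresponding wavelength (Compton), ~2.4 pm

print(E_threshold_MeV, lam)
```

Visible-light photons (a few eV) are about six orders of magnitude below this threshold, which is why they overwhelmingly just pass through each other.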
{ "domain": "physics.stackexchange", "id": 23900, "tags": "photons, collision" }
How to Normalize a Wave Function?
Question: To talk about this topic, let's use a concrete example. Suppose I have a one-dimensional system subject to a linear potential, such that the hamiltonian of the system is: $$H=\frac{\hat{p}^2}{2m}-F\hat{x}, \qquad \hat{x}=i\hbar\frac{\partial}{\partial p},$$ then I might want to find the eigenfunctions of the hamiltonian: $$\psi _E(p)=\langle p|E\rangle,$$ where $|p\rangle$ are the eigenvectors of the momentum operator and $|E\rangle$ are the eigenvectors of the hamiltonian. After a bit of work with the TISE I came to the following expression for $\psi _E(p)$: $$\psi _E(p)=N\exp\left[-\frac{i}{\hbar F}\left(\frac{p^3}{6m}-Ep\right)\right].$$ I am almost there! The only thing missing is the normalization constant $N$. How should I move forward? I could try to apply the normalization condition directly by imposing the integral of this function equal to 1, but this seems like a lot of work. However, my lecture notes suggest that I take advantage of the fact that the eigenvectors of the hamiltonian must be normalized: $$\langle E'|E\rangle=\delta(E-E')$$ where $\delta$ is the Dirac delta function [1]. However, I cannot see how to use this information to derive the normalization constant $N$. Are my lecture notes right? How should I use the normalization condition of the eigenvectors of the hamiltonian then? Is it quicker to simply try to impose the integral equal to 1? [1]: Based on my current understanding, this is a (not so rigorous) generalization of the normalization condition of the eigenvectors of an observable in the discrete case: $$\langle E'|E\rangle=\delta_{EE'} \ \Rightarrow \ \langle E'|E\rangle=\delta(E-E')$$ where $\delta_{EE'}$ is the Kronecker delta, equal to one if the eigenvectors are the same and zero otherwise. Answer: The proposed "suggestion" should actually be called a requirement: you have to use it as a normalization condition. 
This is because the wavefunctions are not normalizable: what has to equal 1 is the integral of $|\psi|^2$, not of $\psi$, and $|\psi|^2$ is a constant. Just like a regular plane wave, the integral without $N$ is infinite, so no value of $N$ will make it equal to one. One option here would be to just give up and not calculate $N$ (or say that it's equal to 1 and forget about it). This is not wrong! The functions $\psi_E$ are not physical - no actual particle can have them as a state. Physical states $\psi(p)$ are superpositions of our basis wavefunctions, built as $$\psi(p) = \int dE\, f(E) \psi_E(p)$$ with $f(E)$ some function. This new wavefunction is physical, and it must be normalized, and $f(E)$ handles that job - you have to choose it so that the result is normalized. But there are two reasons we decide to impose $\langle E | E' \rangle = \delta(E-E')$. One is that it's useful to have some convention for our basis, so that later calculations are easier. Having a delta function is unavoidable, since regardless of the normalization the inner product will be zero for different energies and infinite for equal energies, but we could put some (possibly $E$-dependent) coefficient in front of it - that's just up to convention. The other reason is that if you dig a little deeper into the normalization of the $\psi(p)$ above, the delta function appears anyway. We have $$\langle \psi | \psi \rangle = \int dp\, \int dE\, \int dE'\, f(E)^* f(E') \psi_E^*(p) \psi_{E'}(p),$$ and you can see that the inner product $\langle E | E' \rangle$ is right there, as the $p$ integral. So we have to use the fact that it is proportional to $\delta(E-E')$, and it's neater to fix the constant of proportionality beforehand. So to recap: having $\langle E | E' \rangle \propto \delta(E-E')$ just falls out of the definition of the $\psi_E(p)$, and it's also obviously the manifestation of the fact that stationary states with different energies are orthogonal. 
We're just free to choose what goes in front of the delta function, which is equivalent to giving a (possibly energy dependent) value for $N$. Using $\delta(E-E')$ by itself is just the simplest choice, but sometimes other factors are used. Now, actually calculating $N$ given this convention is pretty easy: I won't give you the answer, but notice that when you calculate the inner product of two wavefunctions with different energies (that is, the integral of $\psi_E^* \psi_{E'}$), the parts with $p^3$ in the exponential cancel, because they don't depend on the energy. What's left is a regular complex exponential, and by using the identity $$\int_{-\infty}^\infty dx\, e^{ikx} = 2\pi \delta(k)$$ (which is rigorous enough for our purposes), you show that the whole thing must be proportional to $\delta(E'-E)$, and derive the value of $N$ from there.
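Spelling that last step out (a worked sketch of the computation the answer leaves to the reader, assuming $F>0$), the $p^3/6m$ terms cancel between $\psi_{E'}^*$ and $\psi_E$ and the delta-function identity does the rest:

```latex
\begin{aligned}
\langle E'|E\rangle
  &= \int_{-\infty}^{\infty} \psi_{E'}^*(p)\,\psi_E(p)\,dp
   = |N|^2 \int_{-\infty}^{\infty}
     \exp\!\left[\frac{i(E-E')p}{\hbar F}\right] dp \\
  &= |N|^2\, 2\pi\hbar F\,\delta(E-E'),
\end{aligned}
```

so imposing $\langle E'|E\rangle = \delta(E-E')$ fixes $|N|^2 = 1/(2\pi\hbar F)$, up to an arbitrary phase.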
{ "domain": "physics.stackexchange", "id": 71156, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, hamiltonian, normalization" }
[SOLVED] fedora: rqt_plot runtime error
Question: I'm running ROS Kinetic on Fedora 29. After successfully running these commands:

    rosrun turtlesim turtlesim_node
    rosrun turtlesim turtle_teleop_key
    rqt_graph

I try to run

    rqt_plot /turtle1/pose/x /turtle1/pose/y

or equivalently

    rqt_plot /turtle1/pose/x:y

and it gives the following error:

    Warning: QT_DEVICE_PIXEL_RATIO is deprecated. Instead use:
    QT_AUTO_SCREEN_SCALE_FACTOR to enable platform plugin controlled per-screen factors.
    QT_SCREEN_SCALE_FACTORS to set per-screen factors.
    QT_SCALE_FACTOR to set the application global scale factor.
    PluginManager._load_plugin() could not load plugin "rqt_plot/Plot":
    Traceback (most recent call last):
      File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/qt_gui/plugin_handler.py", line 99, in load
        self._load()
      File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/qt_gui/plugin_handler_direct.py", line 54, in _load
        self._plugin = self._plugin_provider.load(self._instance_id.plugin_id, self._context)
      File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/qt_gui/composite_plugin_provider.py", line 71, in load
        instance = plugin_provider.load(plugin_id, plugin_context)
      File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/qt_gui/composite_plugin_provider.py", line 71, in load
        instance = plugin_provider.load(plugin_id, plugin_context)
      File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/rqt_gui_py/ros_py_plugin_provider.py", line 60, in load
        return super(RosPyPluginProvider, self).load(plugin_id, plugin_context)
      File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/qt_gui/composite_plugin_provider.py", line 71, in load
        instance = plugin_provider.load(plugin_id, plugin_context)
      File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/rqt_gui/ros_plugin_provider.py", line 101, in load
        return class_ref(plugin_context)
      File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/rqt_plot/plot.py", line 55, in __init__
        self._data_plot = DataPlot(self._widget)
      File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/rqt_plot/data_plot/__init__.py", line 149, in __init__
        raise RuntimeError('No usable plot type found. Install at least one of: PyQtGraph, MatPlotLib (at least %s) or Python-Qwt5.' % version_info)
    RuntimeError: No usable plot type found. Install at least one of: PyQtGraph, MatPlotLib (at least 1.4.0) or Python-Qwt5.

So I've checked these three libraries and it seems they're all installed:

- for PyQtGraph I have python3-pyqtgraph
- for MatPlotLib I have python2-matplotlib
- for Python-Qwt5 I have PyQwt-5.2.0-40.fc29.src.rpm

So I don't get why this is not working properly, since I have all of them. Anyway, I think that instead of python3-pyqtgraph I should install python2-pyqtgraph, but dnf doesn't find it because it's an old version, I think (or maybe it is deprecated).

Originally posted by 0novanta on ROS Answers with karma: 18 on 2019-03-27. Post score: 0.

Answer: Solved by running

    sudo pip install PyQtGraph

Originally posted by 0novanta with karma: 18 on 2019-03-28. This answer was ACCEPTED on the original site. Post score: 0.

Original comments: Comment by cottsay on 2019-09-17: This is the right way to work around this. For context, python2-pyqtgraph was dropped prior to the Fedora 30 release: https://src.fedoraproject.org/rpms/python-pyqtgraph/c/99946ff0e44ba9241e6c2a3ffc56bf1f77b0554e?branch=master
{ "domain": "robotics.stackexchange", "id": 32770, "tags": "ros-kinetic, rqt-plot" }
How do I interpret this block diagram correctly?
Question: I am reading notes ahead of the class and have encountered this particular slide: While I completely agree with the first block diagram, I am at a loss trying to understand how the second block diagram is equivalent to the first. Here's how I interpret the first diagram: x[n] enters H1, and H1 outputs b0 x[n] + b1 x[n-1]. The output of H1 enters H2, and H2 outputs -a1 y[n-1] + b0 x[n] + b1 x[n-1]. Here's how I interpret the second diagram: I am actually lost! I have no idea how to make sense of it. I understand how the "simpler" diagrams work, for example the multiplication and addition components. But this is Greek to me, and it seems like I cannot find any list of rules to interpret this. Answer: It is better to "parse" these networks from the output back towards the input, calling their input some general $x$ and performing substitutions and/or compositions. So, let's call these networks $U$pper and $L$ower. From the upper diagram: $$UH_2[n] = x[n] + x[n-1] \cdot -a_1$$ and $$UH_1[n] = x[n] \cdot b_0+x[n-1] \cdot b_1$$ Now, the output of $U$pper is given by the composition of the two: $$UH_y[n] = UH_2[UH_1[n]]$$ This is because the output of one becomes the input to the other. Or... $$UH_y[n] = UH_1[n] + UH_1[n-1] \cdot -a_1$$ ...and if you substitute... $$UH_y[n] = x[n] \cdot b_0 + x[n-1] \cdot b_1 + (x[n-1] \cdot b_0 + x[n-2] \cdot b_1) \cdot -a_1 \Rightarrow \\ x[n] \cdot b_0 + x[n-1] \cdot b_1 + x[n-1] \cdot b_0 \cdot -a_1 + x[n-2] \cdot b_1 \cdot -a_1$$ Notice here that taking the $[n-1]$ of $x[n-1]$ gives $x[n-2]$. And this concludes the $U$pper part. For the $L$ower part, we are not going to go through the whole thing, for two reasons: first, if you notice, $H2$ and $H1$ are reversed (the $L$ower network calls $H2$ what the $U$pper network calls $H1$); second, we have already mapped $H2$ and $H1$. So, the lower network's response is: $$LH_y[n] = LH_1[LH_2[n]]$$ Or... 
$$LH_y[n] = LH_2[n] \cdot b_0+LH_2[n-1] \cdot b_1$$ And if you substitute: $$LH_y[n] = (x[n]+ -a_1 \cdot x[n-1]) \cdot b_0 + (x[n-1]+ -a_1 \cdot x[n-2]) \cdot b_1 \Rightarrow \\ x[n] \cdot b_0 + -a_1 \cdot b_0 \cdot x[n-1] + x[n-1] \cdot b_1 + b_1 \cdot -a_1 \cdot x[n-2]$$ Which, after you re-arrange, looks exactly the same. Hope this helps.
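The equivalence the two diagrams claim can also be checked numerically. Below is a sketch of my own (not from the original slides), assuming $H_1$ is the feed-forward section $b_0 x[n] + b_1 x[n-1]$ and $H_2$ is the first-order feedback section $y[n] = x[n] - a_1 y[n-1]$, as described in the question; running the two sections in either order gives the same output because both are LTI systems, and cascaded LTI systems commute:

```python
import numpy as np

def upper(x, b0, b1, a1):
    # H1 (feed-forward) first, then H2 (feedback)
    v = np.zeros(len(x))
    for n in range(len(x)):
        v[n] = b0 * x[n] + (b1 * x[n - 1] if n > 0 else 0.0)
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = v[n] + (-a1 * y[n - 1] if n > 0 else 0.0)
    return y

def lower(x, b0, b1, a1):
    # H2 (feedback) first, then H1 (feed-forward)
    w = np.zeros(len(x))
    for n in range(len(x)):
        w[n] = x[n] + (-a1 * w[n - 1] if n > 0 else 0.0)
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = b0 * w[n] + (b1 * w[n - 1] if n > 0 else 0.0)
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(50)
assert np.allclose(upper(x, 0.5, 0.3, -0.8), lower(x, 0.5, 0.3, -0.8))
```

Swapping the section order is exactly the step that takes direct form I to direct form II in filter-structure terminology.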
{ "domain": "dsp.stackexchange", "id": 7961, "tags": "filters, filter-design, digital-filters" }
How will I calculate the time and space complexity for this pyramid algo?
Question: This is an algorithm for displaying a character pyramid when the buildPyramids() method is passed an argument str, e.g. "12345": 1 121 12321 1234321 123454321 Code: void buildPyramids(string str) { size_t len = str.length(); size_t i, j, k, m; for(m=0; m<len; m++) { for(i=len-m-1; i > 0; i--) { cout << " "; } for(j=0; j<=m; j++) { cout << str[j]; } for(k=1; k<j; k++) { cout << str[j-k-1]; } cout << endl; } } What's the correct way to calculate the space and time complexity of this? Could you also guide me to some resources for a deeper understanding? Answer: There is one major for loop in this case: for(m=0; m<len; m++) It has a complexity of O(len). Inside this loop there are three other loops, but they are additive in nature and not nested. Each of those loops has a length of <=len. Thus the overall time complexity of this program is O(len*len). The auxiliary space complexity is only O(len), since nothing beyond the input string and a few counters is stored; the printed output itself, if you count it, is O(len*len) characters.
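To make the O(len*len) bound concrete, you can count the characters printed per row: row $m$ emits $(len-m-1)$ spaces, $(m+1)$ ascending characters and $m$ descending ones, i.e. $len+m$ characters in total, and $\sum_{m=0}^{len-1}(len+m) = (3\,len^2 - len)/2$. A small Python sketch (a hypothetical helper of mine, not part of the original C++ code) verifies the closed form:

```python
def pyramid_chars(n):
    """Total characters printed by the pyramid routine for input length n."""
    total = 0
    for m in range(n):
        total += (n - m - 1)   # leading spaces
        total += (m + 1)       # ascending characters
        total += m             # descending characters
    return total

for n in (1, 2, 5, 10, 100):
    assert pyramid_chars(n) == (3 * n * n - n) // 2
```

The leading quadratic term is what makes the time complexity Θ(len²).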
{ "domain": "cs.stackexchange", "id": 15465, "tags": "complexity-theory, time-complexity, space-complexity, c++" }
Cross Correlation, how can any signals except the trivial cases be uncorrelated?
Question: I have been working on a problem where I am trying to subtract two Power Spectral Densities (PSDs) from one another in the following way: $$|F(k) - G(k)|^2 = |F(k)|^2 - |G(k)|^2$$ which is only valid if the cross-correlation terms are 0. It's my understanding that for signals $f$ and $g$, the cross-correlation is given by $$(f \star g)(\tau) = \int_{-\infty}^{\infty} \overline{f(t)}g(t+\tau) dt$$ and that, by this definition, for the signals to be uncorrelated, $$(f \star g) = 0$$ has to be true for all $\tau$. But what I don't understand is that unless either $f$ or $g$ is just zero for all $t$, there will be a value of $\tau$ where $$\overline{f(t)}g(t+\tau) \neq 0$$ It then follows that if there are any non-zero points then there must exist other ones which cancel so that the integral ends up being 0. It seems to me that the chances of this happening for any useful finite signal are next to nothing, and so it's highly unlikely that anybody could ever use spectral subtraction. Can anyone explain what I am missing? Answer: Tl;DR version: You are not missing anything; finite-duration signals cannot be uncorrelated signals. If the cross-correlation function $x\star y$ of $x$ and $y$ is zero everywhere, then the Fourier transform of $x\star y$, which is $X(f)Y^*(f)$ (or $X^*(f)Y(f)$ for left-handed folks), must also have value $0$ for all $f$. But finite-duration signals have Fourier transforms with support $(-\infty, \infty)$, and so the product $X(f)Y^*(f)$ cannot be zero everywhere. It is of course possible to have uncorrelated signals, but these have Fourier transforms with finite (and non-overlapping) support, and so (by duality) must have infinite duration.
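As a concrete illustration of the answer's last paragraph (my own example, not from the original post): two periodic signals whose DFT spectra occupy disjoint bins have a circular cross-correlation that is identically zero at every lag, because the spectral product $X^*[k]Y[k]$ vanishes bin by bin:

```python
import numpy as np

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 3 * n / N)   # energy only in DFT bins 3 and N-3
y = np.cos(2 * np.pi * 7 * n / N)   # energy only in DFT bins 7 and N-7

# Circular cross-correlation computed in the frequency domain: conj(X) * Y
xcorr = np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)).real
assert np.allclose(xcorr, 0, atol=1e-10)
```

Note that periodic signals like these have infinite duration when viewed on the real line, consistent with the duality argument above.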
{ "domain": "dsp.stackexchange", "id": 7548, "tags": "power-spectral-density, correlation" }
Why are there 6 interactions with 4 positive charges?
Question: From a homework assignment, there are 4 spheres spaced 1 cm apart. Each of the spheres is charged to +10 nC and weighs 1 gram. The question wants us to find the final speed of the charges once they've drifted far apart. I've found the answer, and I realized where my calculations were incorrect. What I don't understand is why my calculations are incorrect. To find the solution to this question, I realized I'd need to find the potential energy of the entire system. I assembled the potential energy with the following formula: $$ \Delta U=\int -F_e\,dr = \frac{kq_1q_2}{r}$$ This led me to the following incorrect equation: $$U_f-U_i=0-4\left(\frac{kQ^2}{L} + \frac{kQ^2}{L} + \frac{kQ^2}{\sqrt2 L}\right)$$ The intuition behind this formula was that the interaction of three spheres on one sphere would be $\frac{kQ^2}{L} + \frac{kQ^2}{L} + \frac{kQ^2}{\sqrt2 L}$, so the interaction of all spheres on each other would be 4 times that quantity. However, it turns out there are only 6 total interactions in the entire system: $$U_f - U_i = 0 - \left(\frac{kQ^2}{L} + \frac{kQ^2}{L} + \frac{kQ^2}{L} + \frac{kQ^2}{L} + \frac{kQ^2}{\sqrt2 L} + \frac{kQ^2}{\sqrt2 L}\right)$$ But wouldn't this count interactions going in only one direction? Specifically, suppose the spheres were named A, B, C, and D (order doesn't matter). Then the only interactions that could have been noted would be $A\,\to\,B, A\,\to\,C, A\,\to\,D, B\,\to\,C, C\,\to\,D, B\,\to\,D$. But what about the other half of the interactions: $B\,\to\,A, C\,\to\,A, D\,\to\,A, C\,\to\,B, etc.$ Does the other half not matter in the context of potential energies? Answer: No, if you included the other ones you would be double-counting (notice your way has 12 terms instead of 6). An interaction involves two charges. So you have the A-B interaction, the A-C interaction, etc. The X-Y interaction isn't the combination of the "X on Y" and "Y on X" interactions. It's just a single interaction between the two charges.
Another way to look at it is to use the other (but technically the same) definition of potential energy as the energy needed to construct the charge configuration by bringing in each charge slowly from infinity. 1) Bring charge $D$ in first for free. 2) Bring in charge $C$ which feels a force from $D$. 3) Bring in charge $B$ which feels a force from both $C$ and $D$. 4) Bring in charge $A$ which feels a force from charges $B$, $C$, and $D$. And there are your six terms you need for the total energy.
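The counting argument can be checked with a few lines of Python (an illustrative sketch of my own, assuming the four spheres sit on the corners of a square of side $L$): `itertools.combinations` enumerates each unordered pair exactly once, which is precisely the "six interactions" bookkeeping.

```python
from itertools import combinations
from math import dist, isclose, sqrt

# Four identical charges on the corners of a square of side L
L = 1.0
corners = [(0, 0), (L, 0), (0, L), (L, L)]
pairs = list(combinations(corners, 2))
assert len(pairs) == 6          # C(4,2) unordered pairs, each counted once

# Total potential energy in units of k*Q^2:
# four side pairs at distance L, two diagonal pairs at distance sqrt(2)*L
U = sum(1.0 / dist(p, q) for p, q in pairs)
assert isclose(U, 4 / L + 2 / (sqrt(2) * L))
```

Using ordered pairs (`permutations`) would give 12 terms and double the energy, which is exactly the mistake the answer describes.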
{ "domain": "physics.stackexchange", "id": 62019, "tags": "homework-and-exercises, electrostatics, charge, potential-energy, coulombs-law" }
Does silver oxide react with hydrogen sulfide?
Question: Electronic connectors are often silver plated. However, silver tarnishes fairly quickly and heavily. There exists a widespread misconception that the tarnishing of silver contacts is harmless, because silver oxide has about the same conductivity as silver itself. The problem, however, is that silver does not oxidize under normal conditions. The tarnish on the contacts is not silver oxide, but silver sulfide that develops due to the presence of some hydrogen sulfide in the air. Unlike silver oxide, silver sulfide is not a conductor, but a semiconductor with various potential adverse effects for the connection. I have come across a reference that suggests oxidizing silver contacts before using them in order to prevent the development of the silver sulfide layer. This implies that silver oxide does not react with hydrogen sulfide in the air under normal conditions. Is this claim correct? EDITS I have corrected the typo by changing "sulfur dioxide" to "hydrogen sulfide" in the title and body of the question. Thanks for pointing this out in the answer! Answer: Silver sulfide is formed by Ag and hydrogen sulfide, not sulfur dioxide. You need reducing conditions under which the latter can be reduced: $$\ce{SO2 ->[reduction] H2S}$$ Actually, sulfur dioxide chemisorbs on an ultraclean silver surface; however, heating can remove it. So this is reversible sorption, as suggested by Lassiter [1]. Just note that Auger electron spectroscopy is done in an extremely clean environment. There is no trace of water, oxygen or any other component! The real atmosphere is far more complicated, and tons of photochemical reactions occur in it. Typical indoor air has plenty of undesirable components. How $\ce{SO2}$ reacts with Ag must be another story, because we cannot avoid or control other factors. Coming to the second part of the query: if we have a surface layer of silver oxide, will it prevent sulfide formation?
In principle, possibly yes, because $\ce{Ag2O}$ is a decent oxidizing agent. The moment traces of $\ce{H2S}$ come in contact with the oxide, it will reduce the oxide to elemental silver. This is my personal speculation. As per Franey et al. [2]: Polycrystalline silver has been exposed to the atmospheric gases $\ce{H2S}$, $\ce{OCS}$, $\ce{CS2}$ and $\ce{SO2}$ in humidified air under carefully controlled laboratory conditions. $\ce{OCS}$ is shown to be an active corrodant while $\ce{CS2}$ is quite inactive. At room temperature, the rates of sulfidation by $\ce{H2S}$ and $\ce{OCS}$ are comparable, and are more than an order of magnitude greater than those of $\ce{CS2}$ and $\ce{SO2}$. It appears that $\ce{OCS}$ is the principal cause of atmospheric sulfidation of silver except near sources of $\ce{H2S}$ where high concentrations may render the latter gas important. At constant absolute humidity, the sulfidation rate of silver by both $\ce{H2S}$ and $\ce{OCS}$ decreases from 20 to 40 °C and then increases from 40 to 80 °C. So hydrogen sulfide may not be a major culprit! References Lassiter, W. S. Interaction of Sulfur Dioxide and Carbon Dioxide with Clean Silver in Ultrahigh Vacuum. J. Phys. Chem. 1972, 76 (9), 1289–1292. https://doi.org/10.1021/j100653a011. Franey, J. P.; Kammlott, G. W.; Graedel, T. E. The Corrosion of Silver by Atmospheric Sulfurous Gases. Corrosion Science 1985, 25 (2), 133–143. https://doi.org/10.1016/0010-938X(85)90104-0.
{ "domain": "chemistry.stackexchange", "id": 11839, "tags": "inorganic-chemistry" }
Does the Shannon theorem not apply when the amplitude of a wave is changed faster than half the time period of the wave?
Question: Shannon's version of the sampling theorem states that if a function contains frequencies all strictly less than $B$ hertz, then it is completely determined by giving its ordinates at a series of points spaced $\frac{1}{2B}$ seconds apart. Now, let us suppose we are talking about just one frequency (say a laser with frequency $B-\varepsilon$ hertz, where $\varepsilon$ is an arbitrarily small positive number) with an amplitude that is varied over time for relaying a signal. Let's assume that samples of this wave are taken at the following points of time, measured in seconds: $ \{0, \frac{1}{2B}, \frac{2}{2B}, \frac{3}{2B}, \ldots \} $. Suppose the laser amplitude is changed to half its value at $\frac{1.4}{2B}$ seconds and is then doubled back to its original amplitude at $\frac{1.6}{2B}$ seconds; wouldn't this information go undetected? It seems to me that the theorem assumes that the amplitude does not change inside one wavelet. Answer: Once you start changing the amplitude you are increasing the bandwidth of the signal. That's called "amplitude modulation", and the highest frequency is now the sum of the original frequency and the highest frequency in the modulation signal. The sampling theorem still holds: you still only need to sample at twice the bandwidth, but the bandwidth has increased due to the modulation.
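The answer's point can be made concrete with a quick numerical experiment (my own sketch, with arbitrarily chosen frequencies): amplitude-modulating a 100 Hz carrier with a 30 Hz envelope produces sidebands at $100 \pm 30$ Hz, so the signal's bandwidth, and hence the required sampling rate, grows with the modulation.

```python
import numpy as np

fs = 1000.0               # sample rate, Hz
t = np.arange(0, 1, 1 / fs)
fc, fm = 100.0, 30.0      # carrier and modulation frequencies

carrier = np.cos(2 * np.pi * fc * t)
am = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * carrier

spectrum = np.abs(np.fft.rfft(am))
freqs = np.fft.rfftfreq(len(am), 1 / fs)

# Bins carrying significant energy: the carrier plus two sidebands at fc +/- fm
peaks = set(freqs[spectrum > 1.0].astype(int))
assert peaks == {int(fc - fm), int(fc), int(fc + fm)}
```

Sampling at just over $2 f_c$ would alias the upper sideband; you need just over $2(f_c + f_m)$ to capture the modulated signal.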
{ "domain": "dsp.stackexchange", "id": 10125, "tags": "sampling, nyquist" }
How does electrophoretic separation work in terms of separating soil into minerals?
Question: I found this diagram and would like to know how it works with soil. Google searches yield protein/biology things; does anyone know how this method would work with soil? Answer: The search term "continuous particle electrophoresis" yielded many more results. electrophoresis - the movement of charged particles in a fluid or gel under the influence of an electric field. source: http://www.youtube.com/watch?v=LNUgcq1M4LI - source: http://www.nature.com/articles/srep19911/figures/1 I heard some guy invented it while messing with some clay, but couldn't find any video examples of that.
{ "domain": "chemistry.stackexchange", "id": 6986, "tags": "reduction-potential, distillation" }
How does rocker bogie keep the body almost flat?
Question: How does the rocker-bogie mechanism keep the body flat / keep the solar panel almost flat all the time? I know there is a differential system that connects the two rockers (left and right) together. But how does it actually work? Edited: Please provide relevant references. Answer: I was looking for something similar, and I found Mars Rover Rocker-Bogie Differential to be really helpful. With my level of understanding it took me a while, but the link my professor provided me with really helped; it has decent animations to aid understanding of the concept. Okay, so here's my understanding of the mechanism. The differential system essentially consists of either of two mechanisms: the differential gearbox or the differential bar. The differential gearbox consists of three gears. The one in the middle (2) is connected to the body, while the ones on the side (1) are connected to the rockers of the system. If you were to pick the rocker-bogie up, hold the body in place, and tilt one of the side rockers downwards, the gears in the gearbox would make the rocker on the other side tilt upwards. If you tilt it upwards, the other one would tilt downwards. More complex systems use more gears to make the whole system more sensitive to the movements. As for the differential bar, it's not used on the rover as it interferes with the solar panel, but it works in a similar way, except that it's a rod connecting the two rockers, and this rod is pivoted in the middle onto the body. So, as far as my understanding goes, look at the picture and consider the gears on either side (gears numbered 1). If one of the gears (let's say the top one) is turning clockwise, it would make gear no. 2 (which sits in the groove of both of the other gears) move anti-clockwise, which in turn would make the other no. 1 gear turn anti-clockwise. NOTE: ALL THE GEARS FIT INTO THE GROOVES OF EACH OTHER. So any rotation that takes place in this system affects the other two gears.
And the gears are attached and arranged in such a way that the two no. 1 gears run in opposite directions. Here you can see the differential bar on Curiosity.
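A toy numerical model (my own simplification, not from the linked article) captures the averaging behaviour of the differential: because the two rockers are constrained to rotate by equal and opposite amounts relative to the body, the body is held at the mean of the two rocker angles, so a single-side obstacle tilts the body only half as much.

```python
# Toy model: the differential couples the rockers so a rotation of +theta
# on one side (relative to the body) forces -theta on the other, leaving
# the body pitch at the average of the two terrain-imposed rocker angles.
def body_pitch(left_rocker_deg, right_rocker_deg):
    return (left_rocker_deg + right_rocker_deg) / 2.0

# Flat ground: both rockers level, body level
assert body_pitch(0.0, 0.0) == 0.0
# Left wheels climb a 10-degree obstacle while the right side stays flat:
# the body tilts by only half of that
assert body_pitch(10.0, 0.0) == 5.0
# Symmetric opposite tilts cancel entirely
assert body_pitch(8.0, -8.0) == 0.0
```

This is why the solar panel stays close to level even on uneven terrain: the differential halves (and, for symmetric disturbances, cancels) the terrain-induced body roll/pitch.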
{ "domain": "robotics.stackexchange", "id": 156, "tags": "wheeled-robot, rocker-bogie" }
What made this tiny honeycomb-like structure?
Question: I found this on the ground in Maryland. I believe the twig is from a tulip poplar tree. The structure looks like a tiny honeycomb except that the cells are individual tubes rather than a true hexagonal honeycomb shape. It's one layer deep and wraps completely around the branch. I assume it was made by some tiny insect, but what? Answer: Many insects lay eggs in tight clusters; because of the shape of the eggs (like honeycomb cells themselves), the cluster looks a bit like a honeycomb. If you had a high quality macro shot, the eggs might be easier to identify definitively. Newly hatched Beet Armyworms: Wheel Bug (Arilus cristatus) Eggs: Wheel bugs are common in eastern North America, but are confirmed to occupy areas of Mexico and Guatemala. With wheel bug for size: My guess is that you found a cluster of Wheel Bug eggs. Females lay eggs at a low elevation on trees, bushes, twigs, and other objects.[5] Secreted glue serves as an adhesive which maintains the cluster formation of the eggs. - Wikipedia
{ "domain": "biology.stackexchange", "id": 7362, "tags": "entomology" }
Scalar particles are described by a real scalar field or by a complex one?
Question: Well, in the title is already stated my main question. I know you can use a complex scalar field to describe two real scalar fields, by using just one that involves both of them. But, in the modern quantum field theories, how are actual scalar particles such as the Higgs boson described? I mean, which Klein Gordon lagrangian density do you use, the real one or the complex one? Answer: In the Standard Model, the Higgs field has two complex components. The "two" comes from how it couples to the $SU(2)_L$ gauge field, the one that couples only to the left-handed components of the fermions. Even without the $SU(2)_L$ gauge field, the "complex" would still be related to the fact that it also couples to the $U(1)_Y$ gauge field, the one whose charges are called "hypercharges." Under a $U(1)_Y$ gauge transformation $A_\mu\rightarrow A_\mu+\partial_\mu\theta$, the Higgs field transforms as $\Phi\rightarrow \exp(i\theta)\Phi$. As noted in the OP, we could think of it as four real fields instead of two complex fields, but the complex representation is easier to manage. More generally, any time the scalar field is "charged" with respect to a $U(1)$ gauge field, representing it as a complex field tends to be easier to manage than representing it as a pair of real fields. For example, the Abelian Higgs model (which is sometimes studied in the context of superconductivity, for example) involves a complex scalar field.
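For completeness, here is the textbook identity behind the "one complex scalar field describes two real scalar fields" statement in the question, written with the usual normalization $\phi = (\phi_1 + i\phi_2)/\sqrt{2}$ (standard free Klein-Gordon Lagrangians, added here for illustration):

```latex
\mathcal{L}
  = \partial_\mu \phi^{*}\,\partial^{\mu}\phi - m^{2}\phi^{*}\phi
  = \sum_{a=1}^{2}\left(
      \tfrac{1}{2}\,\partial_\mu \phi_a\,\partial^{\mu}\phi_a
      - \tfrac{1}{2}\,m^{2}\phi_a^{2}
    \right)
```

The complex form makes the $U(1)$ transformation $\phi \to e^{i\theta}\phi$ look like a simple phase, whereas in the real-field language the same symmetry is a rotation mixing $\phi_1$ and $\phi_2$, which is why the complex representation is easier to manage for charged fields.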
{ "domain": "physics.stackexchange", "id": 54960, "tags": "field-theory, standard-model, higgs, complex-numbers, klein-gordon-equation" }
Program that organizes files based on folder name and file name
Question: I am rather new to coding, and I would like to see what improvements should I make to the code I wrote to ensure that I am using good practices, and that it will function as it's supposed to. The following code detects the operating system (I have it set for either Linux or Windows), detects the folders around it, detects the files around it, then moves the files into folders based on the similarity between the file and folder names. import java.io.BufferedReader; import java.io.InputStreamReader; public class Organizer { public static void main(String[] args) throws Exception{ //This part organizes the files String p1 = null; String p2 = null; if ((confstring()).equalsIgnoreCase("linux")){ p1 = "bash"; p2 = "-c"; } else if ((confstring()).equalsIgnoreCase("windo")){ p1 = "CMD"; p2 = "/C"; } String newname = null ; String[] confcmd = configure() ;//Retrieves the commands from the configure() function String[] folder = execute(confcmd[0], p1,p2);//Executes the first command and gets a list of folders String[] interact = new String[folder.length] ; for (int n=0 ; n < folder.length; n++){ interact[n] = folder[n].substring(0,3);//Makes the keyword to compare the filenames to //System.out.println(interact[n]); } String[] stdoutput = execute(confcmd[1],p1,p2); if ((confstring()).equalsIgnoreCase("linux")){//both of the following if statements get the filenames, compares them to the keyword, then moves and renames the files for ( int n=0 ; n < interact.length; n++){ for(int i=0 ; i<stdoutput.length; i++){ if((stdoutput[i]).contains(interact[n])){ //System.out.println(stdoutput[i]); newname = confcmd[2] + " " + stdoutput[i] + " " + folder[n] +confcmd[3] + stdoutput[i] + ")" + "-" +stdoutput[i]; //The ")" results from a shell scripting quirk I had to address //System.out.println(newname); execute(newname, p1, p2); } } } } else if((confstring()).equalsIgnoreCase("windo")){ for ( int n=0 ; n < interact.length; n++){ for(int i=0 ; i<stdoutput.length; i++){ 
if((stdoutput[i]).contains(interact[n])){ String[] date = execute(confcmd[3]+" "+stdoutput[i],p1,p2); date[0] = date[0].substring(0,10); date[0] = date[0].replace("/","-"); newname = confcmd[2] + " " + stdoutput[i] + " " + folder[n] + "\\" + date[0] + "-" + stdoutput[i]; System.out.println(newname); execute(newname,p1,p2);// } } } } } private static String[] configure(){//Sets the OS and the commands needed to organize the files String[] commands = null; String os = System.getProperty("os.name"); os = os.substring(0, 5) ; if ((os).equalsIgnoreCase("linux")){ commands = new String[4] ; commands[0] = "ls -d */" ; commands[1] = "find . -maxdepth 1 -type f -printf '%f\n'"; commands[2] = "mv" ; commands[3] = "$(date +%d-%m-%y -r " ; } else if ((os).equalsIgnoreCase("windo")){ commands = new String[4]; commands[0] = "DIR /B /AD" ; commands[1] = "DIR /B /A-D-H"; commands[2] = "MOVE" ; commands[3] = "DIR /B /T:C"; } else if ((os).equalsIgnoreCase("mac")){ } return commands ; } private static String confstring(){ String os = System.getProperty("os.name"); os = os.substring(0, 5) ; //System.out.println(os); return os; } private static String[] execute(String cmd, String p1, String p2) throws Exception{//Runs the commands int term = 0; String[] intermediate = new String[100]; ProcessBuilder builder = new ProcessBuilder(p1, p2, cmd); builder.redirectErrorStream(true); Process p = builder.start(); BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream())); String line; for (int n=0; true ; n++ ) { line = r.readLine(); if (line == null) { intermediate[n]="EXIT";break; } System.out.println(line); intermediate[n] = line ; } for (int i = 0 ; !(intermediate[i]).equals("EXIT"); i++){ term = i ; } term ++; //System.out.println(term); String[] output = new String[term]; for (int k=0 ; k < term; k++){ output[k]=intermediate[k]; } return output; //output is used to organize files } } Answer: Some suggestions: Don't wrap shell commands in Java unless you absolutely 
have to. Java supports listing files, moving them, getting dates etc. natively. As an added bonus you'll instantly support basically any platform without writing any platform specific code, and you won't have to do a bunch of string parsing to get the result in a usable format. Format your code with a linter or IDE. It will make it much easier to read for others. Short variable names (p1, confcmd, etc.) make your code harder to read. Programming is a brain problem, not a typing problem, so you should optimise for understanding rather than brevity. Anything in your main function is basically not reusable in other Java code. If you can think of any use case of importing your code into another file, you should create a separate method for that functionality avoiding dependencies on the surrounding shell (such as arguments). Inline comments are a code smell. Everywhere you feel the need for an inline comment you should ask yourself if you could refactor the code in such a way that the comment is unnecessary.
{ "domain": "codereview.stackexchange", "id": 20664, "tags": "java, beginner" }
What is standardization in computer science?
Question: I was reading about the differences between standards and specifications for C. I understood that programming languages are usually standardized, and I learnt that there are different approaches to standardization. Anyway, I did not understand yet what these terms refer to. If I think of C, I believe it refers to a standard API, the API of the C standard library (libc). This seems to make sense to me, as if someone told me "Look, these are the only functions, types, macros, etc. you can use", because they're the only things you "see" after you download the library, and they make your code portable. Anyway, I don't know if my idea of standardization is in some ways correct, if it's totally wrong, or if it's a simplified version, so here I am. Answer: A standard for a programming language is a document defining the syntax and semantics for that language. Usually, for real-world languages, this document involves a (hopefully) precise description in intuitive terms, rather than a formal semantics written in mathematical terms. Still, this document acts as a contract between the programmer and the language implementation (usually a compiler/interpreter, plus some "standard libraries"). The programmer knows that, if they stick to what the standard mandates, they will get back the intended result from the implementation. Vice versa, the implementor of the language can assume that the programmer used no other features than those mandated. Often, the implementor will also provide non-standard features (additional APIs, additional language constructs, OS-specific libraries, etc.). Note that the standard does not only involve describing the APIs in the standard library. In the case of the C language, as mentioned by the OP, the standard defines among other things: the syntax, the type system, the memory model, the statements, the variable declarations, etc. For instance, what happens if we write int *x; *x=0;? The ISO standard answers that an implementation printing mooo!
on screen is perfectly conforming (as any other implementation is), because dereferencing an uninitialized pointer is undefined behavior. So, summing up, a standard is the authoritative document which answers questions like "is running code C guaranteed to have behavior B?". Most standards are written by a committee made up of the most prominent organizations which developed / are developing the most widespread implementations. They vote on which features should be in or out, until they reach an agreement on what should be the standard definition. Some languages are revised every few years, so as to include more features.
{ "domain": "cs.stackexchange", "id": 20329, "tags": "terminology, programming-languages, c" }
Minesweeper grid representation + calculating neighbor-sums
Question: I'm trying to get better at Python and would like a critique. Problem statement: Accept a 2D grid containing 1's and 0's (1 being mine locations) and print a solved sum grid (the way the final grid would show upon completion of a minesweeper game with all the neighbor sums). Beyond that, what might be horribly wrong with the code below? I am interested in finding a recommended balance between readability and what the cool kids call idiomatic. For example, List comprehension in some places was deliberately omitted in favor of making something that I thought was easier on the eyes. The top-level functions are called via: if __name__ == '__main__': locMap = [[1,0,0,0],[0,0,1,0],[0,1,0,1]] # Sample input print "Location map = " printMineField(locMap,locMap) newMap = getSummedMap(locMap) print "Summed map = " printMineField(locMap,newMap) The output is: ''' Location map = * 0 0 0 0 0 * 0 0 * 0 * Summed map = * 2 1 1 2 3 * 2 1 * 3 * ''' The actual functions are: def generatePaddedGrid(grid): """ Input: 2D list. Output: 2D list Purpose: Returns a 0-padded boundary around a grid. 
Useful to perform easier 2D traversal + operations of the inner grid without painful boundary checking """ # Get Dimensions of grid nRows = len(grid) nCols = len(grid[0]) # Create a grid with +2 row/col padding (on all sides) paddedGrid = [[0 for x in range(0,nCols + 2)] for x in range(0,nRows + 2)] # Insert 'centered' locMap into the Padded Grid for xIter in range(0,nRows): for yIter in range(0,nCols): paddedGrid[xIter+1][yIter+1] = grid[xIter][yIter] return paddedGrid def getSummedMap(locMap): """ Input: 2D list Output: 2D list Purpose: Takes in a 2D array (locMap) and returns a same-sized array with each cell now containing the sum of all the neighbors including diagonals (max of 8 neigbor cells) """ # Insert + Center this grid into a padded grid to prevent access errors paddedGrid = generatePaddedGrid(locMap) nRows = len(locMap) #Actual rows to be iterated nCols = len(locMap[0]) #Actual cols to be iterated. # Create a target/output grid of the actual size to write the sums into sumGrid = [[0 for x in range(0,nCols)] for y in range(0,nRows)] for xIter in range(1,nRows+1): for yIter in range(1,nCols+1): Sum = 0 # Top + Bottom + Left + right Sum += paddedGrid[xIter-1][yIter] +\ paddedGrid[xIter+1][yIter] +\ paddedGrid[xIter][yIter+1] +\ paddedGrid[xIter][yIter-1] # Then 4 diagonals Sum += paddedGrid[xIter-1][yIter-1] +\ paddedGrid[xIter+1][yIter-1] +\ paddedGrid[xIter+1][yIter+1] +\ paddedGrid[xIter-1][yIter+1] sumGrid[xIter-1][yIter-1] = Sum return sumGrid def printMineField(locMap, SumMap): """ Input: 2 x 2D Arrays Output: No return, just print an overlayed Sum-map over a given Loc-map but leave the mine-locations as '*' (instead of the neighbor-sums) """ numActualRows = len(locMap) #Actual rows to be iterated numActualCols = len(locMap[0]) #Actual cols to be iterated. for xIter in range(0,numActualRows): for yIter in range(0,numActualCols): if locMap[xIter][yIter] == 1: print '*', else: print str(SumMap[xIter][yIter]), print '\n' Answer: One idea ... 
in your getSummedMap function, you could put all the +1, -1, 0's in a list of 2-tuples and get the Sum with one for loop, e.g. ... neighbor_incs = [(-1, 0), (1, 0), (0, 1), (0, -1), # Top + Bottom + Left + right (-1, -1), (1, -1), (1, 1), (-1, 1), ] # diagonals for x_inc, y_inc in neighbor_incs: Sum += paddedGrid[xIter + x_inc][yIter + y_inc]
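Pulling the reviewer's `neighbor_incs` idea into a complete function (a sketch of my own, using bounds checks instead of the padded grid, so `generatePaddedGrid` is no longer needed):

```python
def summed_map(grid):
    """Neighbor-mine count for every cell, using offset tuples."""
    neighbor_incs = [(-1, 0), (1, 0), (0, 1), (0, -1),      # up, down, right, left
                     (-1, -1), (1, -1), (1, 1), (-1, 1)]    # diagonals
    rows, cols = len(grid), len(grid[0])
    return [[sum(grid[r + dr][c + dc]
                 for dr, dc in neighbor_incs
                 if 0 <= r + dr < rows and 0 <= c + dc < cols)
             for c in range(cols)]
            for r in range(rows)]

# Sample input from the question
loc_map = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 1]]
assert summed_map(loc_map) == [[0, 2, 1, 1], [2, 3, 2, 2], [1, 1, 3, 1]]
```

The printing step can then keep overlaying `*` at mine locations exactly as in the original `printMineField`; mine cells still carry a sum internally (e.g. the corner mine's neighbors sum to 0 here), it just never gets displayed.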
{ "domain": "codereview.stackexchange", "id": 4770, "tags": "python, minesweeper" }
How does paracetamol interfere with immune system?
Question: Paracetamol is used to reduce body temperature when it is too high. A high body temperature (fever) is known to be an indication that the immune system is fighting an infection. In this context I have two related questions: Is high body temperature one of the mechanisms of the immune system to suppress an infection, or is it just a side effect? If it is just a side effect, does paracetamol remove it by decreasing the intensity with which the immune system fights the infection, or not? Answer: First question: Yes. The immune system releases pyrogenic cytokines such as IL-1. Bacteria typically aren't adapted to 37 °C; they prefer working below that temperature, which lets them function in the environment. Our body, however, can tolerate a few degrees here or there, and this severely compromises bacterial enzyme activity. The same is true for other pathogen enzymes. The body also increases copper concentrations in the blood for similar reasons. The problem, of course, is that if the body goes into overdrive and raises our temperature too much, this compromises our own ability to fight the infection, so in that case antipyretics like paracetamol can reduce fever. Second question: It isn't a side effect; however, paracetamol works by blocking things like IL-1 that raise temperature. How IL-1 raises your temperature is quite interesting: it tells the hypothalamus (our thermostat) that it should be set higher. So we feel cold (and try to keep ourselves warm) and the body thinks it's cold (so it increases the temperature, mainly by burning glucose).
{ "domain": "biology.stackexchange", "id": 968, "tags": "pharmacology, immunology, medicinal-chemistry" }
How to Increase UR Arm Joint Speed
Question: Hi, I want to increase the universal robot joint speed. I do not set the speed anywhere in my code. The only place I have it set is on the UR touchpad (and it is set to the very max, 191 deg/second). However, when I measure the robot's speed, it is about 90deg/sec. I am using a Universal Robot 3 e-series and ROS Melodic. I have used both the Universal Robots package from Universal Robots and ROS Industrial Also, after launching my UR driver, I see the topic /speed_scaling_factor, which echos "data: 1.0" and data: 1.0 The /joint_group_vel_controller/command also seems like it could be related to joint speed, but nothing seems to be written to that topic. Does anyone know how to increase the speed of a UR robot arm? Originally posted by fruitbot on ROS Answers with karma: 19 on 2020-11-04 Post score: 0 Answer: Your description is a bit vague, and there could be various reasons for why robots don't move at their maximum capabilities. None of the packages you mention are artificially limiting the joint velocity limits, so I'd be surprised if it'd make any difference which you use. When you write: I have used both the Universal Robots package from Universal Robots and ROS Industrial I assume you're not talking about the drivers, are you? One thing to check would be to make sure you have acceleration limits configured in your MoveIt configuration, if you generated it yourself. The setup assistant will not configure them for you, which will make MoveIt use 1.0 rad/s^2 as defaults. Another thing to check would be MoveIt's settings: the default for the speed and acceleration scaling factors has recently been changed to something much lower than 1.0 -- and for good reason. If you don't change that, you'll get much slower motion. Edit: Yes, I am referring to the drivers. you should not be using ur_driver with a modern CB3 or e-Series controller. I very much doubt that you have. Neither ur_modern_driver: it's been marked as deprecated for about a year now. 
I doubt you could have missed that. As far as the acceleration limits, I don't see an option to set those in the Moveit! Setup Assistant, but the file my_moveit_config/config/joint_limits.yaml has the variable has_acceleration_limits set to false (so I would assume it is not the issue) No, that's actually a problem. See here where DEFAULT_ACCEL_MAX will be used if there are no acceleration limits. And DEFAULT_ACCEL_MAX = 1.0 (here). Additionally, config/chomp_planning.yaml has an acceleration variable, smoothness_cost_acceleration, which is set to 1.0. I tried setting it equal to 0.01 and 100 and observed no difference in speed. That's all unrelated, unless you're actually using CHOMP as your planner. Which I don't believe you are. Originally posted by gvdhoorn with karma: 86574 on 2020-11-05 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by fruitbot on 2020-11-05: Yes, I am referring to the drivers. As far as the acceleration limits, I don't see an option to set those in the Moveit! Setup Assistant, but the file my_moveit_config/config/joint_limits.yaml has the variable has_acceleration_limits set to false (so I would assume it is not the issue) Here is an example of the joint configuration in joint_limits.yaml: joint_limits: elbow_joint: has_velocity_limits: true max_velocity: 3.14 has_acceleration_limits: false max_acceleration: 0 Additionally, config/chomp_planning.yaml has an acceleration variable, smoothness_cost_acceleration, which is set to 1.0. I tried setting it equal to 0.01 and 100 and observed no difference in speed. Thank you for your help. Please let me know if any of the above doesn't check out or if you have any other suggestions. Comment by fruitbot on 2020-11-05: Ah that was the problem! I set has_acceleration_limits to true and increased max_acceleration to 5 and now see that the arm speed has increased significantly. Thank you for your help!
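For reference, a minimal joint_limits.yaml fragment with acceleration limits enabled might look like the following. The numbers are illustrative placeholders only, not vendor-recommended limits for a UR3e; pick values appropriate for your robot:

```yaml
joint_limits:
  elbow_joint:
    has_velocity_limits: true
    max_velocity: 3.14              # rad/s
    has_acceleration_limits: true   # was false, which made MoveIt fall back to 1.0 rad/s^2
    max_acceleration: 5.0           # rad/s^2 -- illustrative value only
```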
{ "domain": "robotics.stackexchange", "id": 35713, "tags": "ros, ros-melodic, ros-industrial, universal-robot" }
Determining Language from Context Free Grammar
Question: I am trying to understand how to write the language given the productions of a context-free grammar. As an example, I have the following grammar: $S \to 0B \mid 1A$ $A \to 0 \mid 0S \mid 1AA$ $B \to 1 \mid 1S \mid 0BB$ What this says to me is that the first symbol can be either a 0 or 1, and the next could be 0 or 1 or 1 or 0, and you continue down the parse tree from there. How does one determine what the language of the grammar is? I am having trouble putting the patterns of symbols into a language definition. Answer: Structural induction would prove the following: All words generated from $S$ have the same number of 0s and 1s. All words generated from $A$ have an excess of one 0. All words generated from $B$ have an excess of one 1. Presumably the reverse is also true, but I'll leave such pondering for you.
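The induction claims above can be sanity-checked by brute force: enumerate every terminal string the grammar derives up to a bounded length and count the 0s and 1s. A small sketch (the length bound and function names are my own; since every production maps one symbol to at least one symbol, sentential forms never shrink, so pruning by length is safe):

```python
from collections import deque

# Grammar: S -> 0B | 1A ; A -> 0 | 0S | 1AA ; B -> 1 | 1S | 0BB
RULES = {
    "S": ["0B", "1A"],
    "A": ["0", "0S", "1AA"],
    "B": ["1", "1S", "0BB"],
}

def words(start, max_len):
    """Enumerate all terminal strings derivable from `start`, up to max_len."""
    out = set()
    queue = deque([start])
    seen = {start}
    while queue:
        form = queue.popleft()
        # Find the leftmost nonterminal, if any.
        i = next((k for k, c in enumerate(form) if c in RULES), None)
        if i is None:
            out.add(form)          # all terminals: a word of the language
            continue
        for rhs in RULES[form[i]]:
            nxt = form[:i] + rhs + form[i + 1:]
            # Forms never shrink, so anything longer than max_len is dead.
            if len(nxt) <= max_len and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return out
```

Running `words("S", 8)` yields strings such as "01" and "10", all with equally many 0s and 1s, matching the induction.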
{ "domain": "cs.stackexchange", "id": 9913, "tags": "formal-languages, formal-grammars" }
Confusion regarding Matrix representation of Lorentz transformation in Jackson
Question: In Jackson (3rd edition p. 545) there are the following equations: $$A = e^L \tag{11.87}$$ $$\det A = \det(e^L) = e^{\text{Tr}\, L}$$ $$g\widetilde{A}g = A^{-1} \tag{11.88}$$ $$ A = e^L , \quad g\widetilde{A}g = e^{g\widetilde{L}g} , \quad A^{-1} = e^{-L}$$ $$ g\widetilde{L}g = -L $$ $\widetilde{A}$ is the transpose of $A$. I have several doubts: How is the equation $\det(e^L) = e^{\text{Tr}\, L}$ derived? Are we assuming a special type of $L$? In $g\widetilde{A}g = e^{g\widetilde{L}g}$, how is it possible to move $g$ inside the exponential? Answer: The property $\det(e^L) = e^{\text{Tr}\, L}$ follows from the matrix identity \begin{equation} \log \det (M) = \text{Tr}\, \log(M) \to \det(M) = e^{\text{Tr}\, \log (M)} \end{equation} Choosing $M=e^L$ yields $\,\det(e^L) = e^{\text{Tr}\, L}$. The second property can be shown using a Taylor expansion for the matrix $L$ if we assume $g^2=\mathbb{1}$. Suppose we want to show that \begin{equation}g \,e^{\widetilde{L}} g = e^{g \widetilde{L} g} \end{equation} By expanding the l.h.s. one has \begin{equation} g \,e^{\widetilde{L}} g = g\left( 1 + \widetilde{L} + \frac{1}{2}\widetilde{L}^2+... \right) g. \end{equation} Expanding the r.h.s. we have \begin{equation} e^{g \widetilde{L} g} = 1 + g \widetilde{L} g+\frac{1}{2} g \widetilde{L} g \cdot g \widetilde{L} g + ... \end{equation} Assuming $g^2=\mathbb{1}$ the two expressions agree.
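Both identities are easy to check numerically for a concrete matrix. A pure-Python sketch for the $2\times 2$ case, using a truncated Taylor series for the matrix exponential (the helper names and the test matrix are my own):

```python
import math

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(L, terms=40):
    """exp(L) for a 2x2 matrix L via a truncated Taylor series."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        power = matmul(power, L)   # L^k
        fact *= k                  # k!
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

L = [[0.3, -1.2], [0.7, 0.1]]
g = [[1.0, 0.0], [0.0, -1.0]]          # satisfies g^2 = identity
lhs = matmul(matmul(g, expm2(L)), g)   # g e^L g
rhs = expm2(matmul(matmul(g, L), g))   # e^{g L g}
```

Here `det2(expm2(L))` agrees with `exp(Tr L) = exp(0.4)`, and `lhs` matches `rhs` entrywise, illustrating both $\det(e^L)=e^{\text{Tr}\, L}$ and $g\,e^{L}g = e^{gLg}$ for $g^2=\mathbb{1}$.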
{ "domain": "physics.stackexchange", "id": 67951, "tags": "special-relativity, metric-tensor, inertial-frames" }
Are black holes really singularities?
Question: Can't black holes just be super dense objects? They could still be black (having the color of black never really required special physics, after all) and have a really strong gravitational field. If we suspect that it actually absorbs light due to its gravity, then it is possible that there is gravity strong enough to capture light, not letting any radiate out when matter falls into this object. I'm just having a hard time accepting that anything can exist with an infinitely large property, as it would lead to infinitely large mass leading to infinitely large forces... that would just destroy the universe infinitely fast, wouldn't it? So I think a supermassive object can be a badass just as well without having to be a singularity. Answer: This answer is to some degree opinion-based. I share your scepticism about the existence of strict mathematical singularities as General Relativity would predict. This is mainly because the assumption of a strict singularity ignores quantum theory. One approach to overcome the singularity is a Gravastar. Related is a Planck star. Both approaches try to overcome the paradoxes near the singularity. A full answer might eventually be provided by a still to be defined quantum gravity. Another difficulty for "real" black holes is rotation. The Schwarzschild solution is not to be expected to occur in real-world black holes. Instead a Kerr solution (or possibly a Kerr-Newman metric) will get closer to real astronomical objects, hence including at least rotation, and possibly some residual electric charge. A massless particle travelling with the speed of light results in an undefined energy according to Special Relativity. Nature opens a physics dedicated to massless particles by allowing them to take any energy to resolve this undefined range. In a similar way nature may open a new kind of physics inside black holes to resolve the paradox between General Relativity and quantum theory.
{ "domain": "astronomy.stackexchange", "id": 1931, "tags": "gravity, black-hole, singularity" }
Why does tension on a wedge affect it at all?
Question: I can't understand how tension is affecting the wedge in the 2nd diagram. I understand how it affects the other two masses, but not how it accelerates the wedge. If the pulley is frictionless, wouldn't the string just slide over the pulley and not do anything? If you need context, one of the questions I was stuck on is attempted in this video: https://www.youtube.com/watch?v=zaXmU6t_BKA&t=65s Answer: Because the rope changes direction as it goes over the pulley, it does exert a net horizontal force on the pulley (and hence on the wedge) unless the two wedge angles are equal. This horizontal force is $T(\cos \theta_2 - \cos \theta_1)$ However, the blocks also exert forces $R_1$ and $R_2$ on the wedge, with horizontal components $R_1 \sin \theta_1$ and $R_2 \sin \theta_2$. So the blocks exert a net horizontal force on the wedge $R_1 \sin \theta_1 - R_2 \sin \theta_2$ If we assume there is no friction between the wedge and the ground then the total horizontal force on the wedge is $R_1 \sin \theta_1 - R_2 \sin \theta_2 + T(\cos \theta_2 - \cos \theta_1)$ If the blocks are in equilibrium then we have $T = m_1 g \sin \theta_1$ on one side and $T = m_2 g \sin \theta_2$ on the other side. We also have $R_1 = m_1 g \cos \theta_1$ and $R_2 = m_2 g \cos \theta_2$. So the net horizontal force on the wedge is then $m_1 g \cos \theta_1 \sin \theta_1 - m_2 g \cos \theta_2 \sin \theta_2 + T(\cos \theta_2 - \cos \theta_1) \\ = T \cos \theta_1 - T \cos \theta_2 + T(\cos \theta_2 - \cos \theta_1)$ which is zero, as expected. In other words, the condition for the blocks to exert zero net horizontal force on the wedge is the same as the condition for the blocks to be in equilibrium, which is $m_1 \sin \theta_1 = m_2 \sin \theta_2$.
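The force bookkeeping above can be verified numerically. A small sketch (assuming a massless, frictionless pulley and string; $m_2$ is chosen so that $m_1\sin\theta_1 = m_2\sin\theta_2$, the equilibrium condition from the answer):

```python
import math

def net_horizontal_force(m1, m2, th1, th2, g=9.81):
    """Net horizontal force on the wedge: normal-force components of both
    blocks plus the horizontal force the rope exerts on the pulley.
    Tension is taken from block 1's equilibrium, T = m1 g sin(th1)."""
    T = m1 * g * math.sin(th1)
    R1 = m1 * g * math.cos(th1)
    R2 = m2 * g * math.cos(th2)
    return R1 * math.sin(th1) - R2 * math.sin(th2) + T * (math.cos(th2) - math.cos(th1))

th1, th2 = math.radians(30), math.radians(45)
m1 = 2.0
m2 = m1 * math.sin(th1) / math.sin(th2)   # equilibrium: m1 sin(th1) = m2 sin(th2)
```

At equilibrium the three horizontal contributions cancel to zero; with a mismatched `m2` they do not.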
{ "domain": "physics.stackexchange", "id": 98242, "tags": "newtonian-mechanics, forces, free-body-diagram, string" }
Does the observed period of a pulsar change with the time of year?
Question: Over in the Physics SE a question was posted asking about the difference in the time dilation of the Earth between perihelion and aphelion: Does Earth experience any significant, measurable time dilation at perihelion? Rather to my surprise it turns out that because of the changes in the Earth-Sun distance and the Earth's orbital velocity there is a difference of about $60\mu$s per day between the two extremes. A commentator pointed out that pulsars can be measured accurately enough to detect this difference. However I have never heard of a pulsar measurement having to be corrected for the time of year, and Googling has found me nothing related. I would be interested to know if this is something that needs to be considered. The difference is slightly over one part in $10^9$, so presumably it depends on whether pulsars can be timed this accurately. Answer: Yes. In terms of pulsar timing measurements, this is a massive effect! A +/- 30 km/s doppler shift changes the pulsar frequency by +/- 1 part in 10000. This sounds small, but the accumulated phase shift over many periods is readily apparent. In addition, the light travel time across the solar system has to be taken into account, as well as the Earth's rotation and some other smaller effects - such as Shapiro delay. If the question refers to the specific annually varying difference in clock rates caused by the different gravitational potential experienced by an Earthbound telescope on an elliptical (as opposed to circular) orbit -the answer is still yes. This is item 4 in the list of applied corrections given on p.52 of "Pulsar Astronomy" by Lyne et al. The maximum effect is a rate change of $3\times 10^{-9}$ leading to a maximum lead or lag of 1.7 ms.
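To get a feel for the magnitudes involved, a back-of-envelope sketch (this is not a real pulsar-timing correction pipeline; the orbital speed is the usual ~30 km/s mean value, and the 60 microseconds/day figure is the one quoted in the question):

```python
C = 299_792_458.0        # speed of light, m/s
V_ORBIT = 29_800.0       # Earth's mean orbital speed, m/s (approximate)

# Doppler effect of the orbital motion: ~1 part in 10^4 of the pulse frequency.
doppler_fraction = V_ORBIT / C

# Left uncorrected, that rate error accumulates to several seconds per day.
uncorrected_drift_per_day = doppler_fraction * 86_400.0   # seconds

# The perihelion/aphelion clock-rate effect from the question: 60 microseconds
# per day is a fractional rate of ~7e-10, i.e. about 1 part in 10^9.
annual_rate_fraction = 60e-6 / 86_400.0
```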
{ "domain": "astronomy.stackexchange", "id": 7189, "tags": "pulsar" }
Comparing two Strings which could be null or blank in a Comparator
Question: I would like a review of how I am comparing two java.lang.String in a private method in a java.util.Comparator. Either of the Strings could be null or blank, and would be "less than" the other String if that other String were not null/blank. My gut feeling is that this is at the very least inelegant, probably difficult to read, and at worst inefficient if it had to be done millions of times per second. Oh, and there could even be a flaw in the logic! Is there a better way to do this? private Integer compareDateStrings(BeanToDoTask arg0, BeanToDoTask arg1, String strProperty) { /* Don't worry too much about this part. */ String strDate0 = BeanUtils.getProperty(arg0, strProperty); _logger.debug("strDate0 = " + strDate0); String strDate1 = BeanUtils.getProperty(arg1, strProperty); _logger.debug("strDate1 = " + strDate1); /* If strDate0 is null or blank and strDate1 is not, then strDate1 is greater. */ if ((strDate0 == null || strDate0.equals(""))) { if (strDate1 != null && !strDate1.equals("")) { return -1; } else { /* They both are null or blank! */ return 0; } } /* We know strDate0 is not null or blank. */ if (strDate1 == null || strDate1.equals("")) { return 1; } /* At this point neither strDate0 or strDate1 are null or blank, so let's compare them. */ return strDate0.compareTo(strDate1); } Answer: You could write the code like this; it is doing the same, but I think it is more readable, and you almost don't need any comments to follow the return value. 
private Integer compareDateStrings(BeanToDoTask arg0, BeanToDoTask arg1, String strProperty) { String strDate0 = BeanUtils.getProperty(arg0, strProperty);_logger.debug("strDate0 = " + strDate0); String strDate1 = BeanUtils.getProperty(arg1, strProperty);_logger.debug("strDate1 = " + strDate1); return compareDateStrings(strDate0, strDate1); } private Integer compareDateStrings(String strDate0, String strDate1) { int cmp = 0; if (isEmpty(strDate0)) { if (isNotEmpty(strDate1)) { cmp = -1; } else { cmp = 0; } } else if (isEmpty(strDate1)) { cmp = 1; } else { cmp = strDate0.compareTo(strDate1); } return cmp; } private boolean isEmpty(String str) { return str == null || str.isEmpty(); } private boolean isNotEmpty(String str) { return !isEmpty(str); }
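For comparison, the same empty-first ordering can be sketched outside Java; this is my own Python rendering of the comparator's logic, not part of the original review:

```python
from functools import cmp_to_key

def compare_date_strings(a, b):
    """None/blank sorts before any non-blank string; two None/blank values tie."""
    a_empty = a is None or a == ""
    b_empty = b is None or b == ""
    if a_empty and b_empty:
        return 0
    if a_empty:
        return -1
    if b_empty:
        return 1
    return (a > b) - (a < b)   # standard -1/0/1 comparison of non-blank strings

sorted_dates = sorted(["2020", None, "", "2019"], key=cmp_to_key(compare_date_strings))
```

Because the sort is stable, the tied None/blank values keep their relative order and land at the front.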
{ "domain": "codereview.stackexchange", "id": 30348, "tags": "java" }
Shoot the Messenger pt. 2
Question: This is a follow up to Messenger supporting notifications and requests I've written a lightweight (I think) class that acts as a messenger service between classes for both notifications (fire and forget updates to other classes) and requests (a notification sent out that expects a returned value). Since the last question, I have altered the request system to return an IEnumerable filled with all the results from all of the registered functions. This allows the caller to pick from the results or operate on all of them, although I imagine typical use will be to call .Single() or .First(). I have also removed the default static instance to prevent laziness and poor code practices, and I have removed a pointless cast between Action and Delegate I'm looking for a general review here on style, usability, best practices, etc. Here's the code (.NET Fiddle here) public class Messenger { /// <summary> /// The actions /// </summary> private Dictionary<Type, Delegate> actions = new Dictionary<Type, Delegate>(); /// <summary> /// The functions /// </summary> private Dictionary<Type, Collection<Delegate>> functions = new Dictionary<Type, Collection<Delegate>>(); /// <summary> /// Register a function for a request message. /// </summary> /// <typeparam name="T"> Type of message to receive. </typeparam> /// <typeparam name="R"> Type of the r. </typeparam> /// <param name="request"> The function that fills the request. </param> public void Register<T, R>(Func<T, R> request) { if (request == null) { throw new ArgumentNullException("request"); } if (functions.ContainsKey(typeof(T))) { functions[typeof(T)].Add(request); } else { functions.Add(typeof(T), new Collection<Delegate>() { request }); } } /// <summary> /// Register an action for a message. /// </summary> /// <typeparam name="T"> Type of message to receive. </typeparam> /// <param name="action"> The action that happens when the message is received. 
</param> public void Register<T>(Action<T> action) { if (action == null) { throw new ArgumentNullException("action"); } if (actions.ContainsKey(typeof(T))) { actions[typeof(T)] = (Action<T>)Delegate.Combine(actions[typeof(T)], action); } else { actions.Add(typeof(T), action); } } /// <summary> /// Send a request. /// </summary> /// <typeparam name="T"> The type of the parameter of the request. </typeparam> /// <typeparam name="R"> The return type of the request. </typeparam> /// <param name="parameter"> The parameter. </param> /// <returns> The result of the request. </returns> public IEnumerable<R> Request<T, R>(T parameter) { if (functions.ContainsKey(typeof(T))) { var applicableFunctions = functions[typeof(T)].OfType<Func<T, R>>(); foreach (var function in applicableFunctions) { yield return function(parameter); } } } /// <summary> /// Sends the specified message. /// </summary> /// <typeparam name="T"> The type of message. </typeparam> /// <param name="message"> The message. </param> public void Send<T>(T message) { if (actions.ContainsKey(typeof(T))) { ((Action<T>)actions[typeof(T)])(message); } } /// <summary> /// Unregister a request. /// </summary> /// <typeparam name="T"> The type of request to unregister. </typeparam> /// <typeparam name="R"> The return type of the request. </typeparam> /// <param name="request"> The request to unregister. </param> public void Unregister<T, R>(Func<T, R> request) { if (functions.ContainsKey(typeof(T)) && functions[typeof(T)].Contains(request)) { functions[typeof(T)].Remove(request); } } /// <summary> /// Unregister an action. /// </summary> /// <typeparam name="T"> The type of message. </typeparam> /// <param name="action"> The action to unregister. 
</param> public void Unregister<T>(Action<T> action) { if (actions.ContainsKey(typeof(T))) { actions[typeof(T)] = Delegate.Remove(actions[typeof(T)], action); } } } Example usage: public class Receiver { public Receiver(Messenger messenger) { messenger.Register<string>(x => { Console.WriteLine(x); }); messenger.Register<string, string>(x => { if (x == "hello") { return "world"; } return "who are you?"; }); messenger.Register<string, string>(x => { if (x == "world") { return "hello"; } return "what are you?"; }); } } public class Sender { public Sender(Messenger messenger) { messenger.Send<string>("Hello world!"); Console.WriteLine(""); foreach (string result in messenger.Request<string, string>("hello")) { Console.WriteLine(result); } Console.WriteLine(""); foreach (string result in messenger.Request<string, string>("world")) { Console.WriteLine(result); } } } Answer: Good You are following the naming guidelines The method and parameternames are well choosen and meaningful Improvable XML comments should, if used, be complete e.g /// <typeparam name="R"> Type of the r. </typeparam> instead of often calling typeof(T) call it once and reuse it. 
using early returns removes horizontal spacing creation of the Collection<Delegate>() functions.Add(typeof(T), new Collection<Delegate>() { request }); this is less readable than this functions.Add(type, new Collection<Delegate>() { request }); So this public void Register<T, R>(Func<T, R> request) { if (request == null) { throw new ArgumentNullException("request"); } if (functions.ContainsKey(typeof(T))) { functions[typeof(T)].Add(request); } else { functions.Add(typeof(T), new Collection<Delegate>() { request }); } } will become public void Register<T, R>(Func<T, R> request) { if (request == null) { throw new ArgumentNullException("request"); } var type = typeof(T); if (functions.ContainsKey(type)) { functions[type].Add(request); return; } functions.Add(type, new Collection<Delegate>() { request }); } or this public IEnumerable<R> Request<T, R>(T parameter) { if (functions.ContainsKey(typeof(T))) { var applicableFunctions = functions[typeof(T)].OfType<Func<T, R>>(); foreach (var function in applicableFunctions) { yield return function(parameter); } } } will become public IEnumerable<R> Request<T, R>(T parameter) { var type = typeof(T); if (!functions.ContainsKey(type)) { return Enumerable.Empty<R>(); } var applicableFunctions = functions[type].OfType<Func<T, R>>(); foreach (var function in applicableFunctions) { yield return function(parameter); } }
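To make the register/send/request flow concrete outside C#, here is a hedged Python sketch of the same idea (message types as dictionary keys, requests lazily yielding every handler's result); the names and structure are my own, not a port of the reviewed class:

```python
from collections import defaultdict

class Messenger:
    """Minimal messenger: notifications keyed by message type, plus requests
    that yield the result of every registered handler."""
    def __init__(self):
        self._actions = defaultdict(list)
        self._functions = defaultdict(list)

    def register_action(self, msg_type, action):
        self._actions[msg_type].append(action)

    def register_function(self, msg_type, func):
        self._functions[msg_type].append(func)

    def send(self, message):
        for action in self._actions[type(message)]:
            action(message)

    def request(self, message):
        for func in self._functions[type(message)]:
            yield func(message)

received = []
bus = Messenger()
bus.register_action(str, received.append)
bus.register_function(str, str.upper)
bus.register_function(str, len)
bus.send("hello")
results = list(bus.request("hi"))
```

As in the C# version, the caller of `request` can pick one result or consume them all.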
{ "domain": "codereview.stackexchange", "id": 10917, "tags": "c#" }
What is the apparent rotation time for a planet relative to another planet?
Question: Just my curiosity. This is not my assignment etc. Let planet A take m Earth days to complete one full round around the Sun, and planet B take n Earth days for one full round around the Sun. Assuming that both are orbiting in the same direction, what is the duration, as a number of Earth days recorded by an observer on planet A, for planet B to complete one full round around planet A? Answer: Note, an observer on Planet A can't see a rotation of Planet B. He can see the Sun orbiting around him, and Planet B orbiting around the Sun. Thus: If Planet B has an orbit of smaller radius than Planet A, then the observer can see Planet B sometimes before, sometimes after the Sun. If Planet B has an orbit of larger radius than Planet A, then the observer can see Planet B cycle across the sky, roughly in the plane of the ecliptic, but a little bit faster than the Sun. The calculation is this: Planet A takes $\frac{1}{m}$ of a full circle in a day. Planet B takes $\frac{1}{n}$ of a full circle in a day. Thus, the observer on Planet A sees an $|\frac{1}{m}-\frac{1}{n}|$ fraction of a full circle of rotation of Planet B in a day. Thus, he sees a period of Planet B of $\frac{1}{|\frac{1}{m}-\frac{1}{n}|}$ Earth days, which is in simplified form $$\underline{\underline{\left|\frac{mn}{m-n}\right|}}.$$
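This is the standard synodic-period formula, and it can be sanity-checked against known values; the planet periods below are approximate sidereal periods in Earth days:

```python
def apparent_period(m, n):
    """Earth days for planet B (period n) to complete one apparent circuit
    as seen from planet A (period m); infinite if the periods coincide."""
    if m == n:
        return float("inf")
    return abs(m * n / (m - n))

earth_mars = apparent_period(365.25, 687.0)    # known synodic period ~780 d
earth_venus = apparent_period(365.25, 224.7)   # known synodic period ~584 d
```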
{ "domain": "astronomy.stackexchange", "id": 3142, "tags": "planet, apparent-motion" }
Using the t-SNE algorithm on microarray data + an error bonus
Question: I'm trying to use the t-SNE algorithm on some microarray data. More specifically, my data frame has 18600 columns with genes (features) and 72 rows with conditions with replicates ( 10xWt , 10xTg , etc ). The expression values are in log2 scale. Here is the code that I'm trying to run. # t-SNE implementation library(Rtsne) set.seed(1) for(i in 1:15){ tsne = Rtsne(data.T[,-18601], dims = 2, perplexity=i, verbose=TRUE, max_iter = 1000, pca=T) colors = rainbow(length(unique(data.T$classes))) names(colors) = unique(data.T$classes) plot(tsne$Y, t='n', main="tsne") text(tsne$Y, labels=data.T$classes, col=colors[data.T$classes]) readline(prompt="Press [enter] to continue") } Please note that I'm not counting column 18601 because this column contains the labels/classes for each condition. The thing here is that when I execute this script, R returns this error: Error: protect(): protection stack overflow Should I change the --max-ppsize, or is it a bug in the Rtsne package? Also, I was wondering if it is more meaningful to run the t-SNE algorithm using not the log2 expression values but the log fold change values with respect to the Wt (wild type) condition. I'm asking because I couldn't find such an implementation of t-SNE on microarray data. For the configuration of the Rtsne function I read this article. Any other suggestions on the implementation are welcome. Answer: Converting your data.frame to a matrix (and then removing the data.frame) will often free up enough memory that you won't run into this. Note that a matrix is more memory efficient than a data.frame and you're requiring Rtsne() to hold both in memory at the same time (many math-centric functions will end up converting things to a matrix at some point for efficiency). For what it's worth, it's never been entirely clear to me what the interaction is between a data.frame and the pointer protection stack, but it's often the case that this solves this sort of error.
{ "domain": "bioinformatics.stackexchange", "id": 230, "tags": "r, visualization, microarray, clustering" }
Show that boring boolean circuit belongs to NP-complete class
Question: We say that a boolean circuit with $n$ input gates is boring if it returns the same result for more than $\frac34$ of the possible inputs. Hence, a boring circuit returns the same output ($0$ or $1$) for more than $\frac34 \cdot 2^n$ inputs. Prove that checking whether a boolean circuit is boring is NP-complete. Can you help me? I have no idea how to start. I tried a reduction from $3$-SAT but with no result. Answer: Given a Boolean formula $\varphi$, let $x,y$ be two fresh variables, and consider the circuit computing the function $$ x \lor y \lor \varphi. $$ This circuit is boring iff $\varphi$ is satisfiable.
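The reduction can be checked by brute force on tiny instances: the circuit $x \lor y \lor \varphi$ already outputs 1 on exactly $\frac34$ of its inputs (whenever $x$ or $y$ is 1), so it crosses the strict $>\frac34$ threshold precisely when $\varphi$ has a satisfying assignment with $x=y=0$. A sketch (function names are my own):

```python
from itertools import product

def is_boring(circuit, n):
    """Brute force: does the n-input circuit give one output on > 3/4 of inputs?"""
    outputs = [circuit(bits) for bits in product([0, 1], repeat=n)]
    threshold = 3 * 2 ** n // 4          # strict inequality required
    return outputs.count(0) > threshold or outputs.count(1) > threshold

def reduce_from_formula(phi, k):
    """Map a k-variable formula phi to the (k+2)-input circuit x or y or phi."""
    circuit = lambda bits: 1 if (bits[0] or bits[1] or phi(bits[2:])) else 0
    return circuit, k + 2

sat = lambda b: b[0] and not b[1]        # satisfiable by (1, 0)
unsat = lambda b: b[0] and not b[0]      # unsatisfiable

boring_circuit, n1 = reduce_from_formula(sat, 2)
plain_circuit, n2 = reduce_from_formula(unsat, 2)
```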
{ "domain": "cs.stackexchange", "id": 9639, "tags": "complexity-theory, np-complete, circuits" }
Standard deviation of the spectrum of white noise
Question: I have a temporal signal which looks like $f(t) = t\eta(t)$, where $\eta(t)$ is a white noise with the mean $\eta_m$ and STD $\sigma$. I want to calculate the corresponding spectrum, $F(\omega)$, take its magnitude, $|F(\omega)|$, and finally compute the STD of $|F(\omega)|$ in terms of the given parameters. I have searched the literature but couldn't find anything which helps me directly tackle the problem. Does anyone know how to do this, or which resources you would refer me to? The following pictures display $f(t)$ and $|F(\omega)|$. The one below is the histogram of $|F(\omega)|$ excluding some values around the central peak. I want to derive a mathematical analysis which can compute the standard deviation of the above histogram. Answer: There are a few misconceptions here and confusion in what you've plotted versus what you've asked. This is an attempt to clarify things. ${\tt CORRECTED}$ : OK, so your noise $\eta(t)$ is non-zero mean, which is why the $t\eta(t)$ term increases. As robert says, "white noise" is a useful construct in continuous time. Only "bandlimited white noise" exists in discrete time. Your question's title, Standard deviation of the spectrum of white noise, needs interpretation to make any sense. The power spectral density of bandlimited white noise is known, and is constant. If the variance of the noise is $\sigma^2$ then the value of the power spectral density is $\sigma^2$ for all $\omega$. This means that the power spectral density does not have a standard deviation. It's possible to take the DFT of one realization of the bandlimited white noise. The DFT of bandlimited white noise is... bandlimited white noise. One interpretation of your question is then: what is the variance of one realization of the DFT of bandlimited white noise? Does that get you the answer you need?
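One relation that can be pinned down exactly is the energy bookkeeping between the two domains: by Parseval's theorem, $\frac{1}{N}\sum_k |X_k|^2 = \sum_n |x_n|^2$, which is what connects the variance of the time-domain noise to the expected magnitude of its DFT bins. A small check in the discrete, bandlimited setting of the answer (a single seeded realization, not a derivation of the histogram's standard deviation):

```python
import cmath
import random

def dft(x):
    """Naive O(N^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

random.seed(0)
n = 64
x = [random.gauss(0.0, 1.0) for _ in range(n)]   # one white-noise realization
X = dft(x)

time_energy = sum(v * v for v in x)
freq_energy = sum(abs(v) ** 2 for v in X) / n    # Parseval normalization
```

The two energies agree to floating-point precision, whatever the realization.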
{ "domain": "dsp.stackexchange", "id": 3346, "tags": "fourier-transform, noise" }
Bond length comparison between two carbon atoms
Question: Why is the bond length of double and triple bonds between two carbon atoms shorter than the single bond length between two carbon atoms? Answer: In the case of a carbon-carbon single bond, 2 electrons are shared in the bond connecting the two carbon atoms. With a carbon-carbon double bond, 4 electrons are shared between the two carbon atoms, and 6 electrons are shared in a triple bond. Having additional electrons between the two atoms 1) improves the bonding overlap (makes the bond stronger) between the two carbon atoms and 2) better screens the two carbon nuclei from each other. Both of these factors, better bonding overlap and better nuclear screening, will allow the two carbon atoms to approach closer together. Consequently, the more electrons (or the more bonds) between two carbon atoms, the shorter the distance between them.
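The trend shows up directly in the approximate textbook bond lengths, roughly 154, 134 and 120 pm for single, double and triple C–C bonds (these are typical tabulated values, not measurements from the question):

```python
# Approximate carbon-carbon bond lengths in picometres (textbook values).
CC_BOND_LENGTH_PM = {"single": 154, "double": 134, "triple": 120}

# More shared electrons -> shorter bond, matching the argument above.
ordered = sorted(CC_BOND_LENGTH_PM, key=CC_BOND_LENGTH_PM.get, reverse=True)
```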
{ "domain": "chemistry.stackexchange", "id": 4294, "tags": "organic-chemistry, bond" }
Algorithm Design for Linear Programming
Question: I am trying to complete a question and would like to avoid copying answers, but I do not necessarily understand what I am doing. I am working on the following problem: Suppose you are consulting for a company that manufactures PC equipment and ships it to distributors all over the country. For each of the next n weeks, they have a projected supply $s_i$ of equipment (measured in pounds) that has to be shipped by an air freight carrier. Each week's supply can be carried by one of two air freight companies, $A$ or $B$. Company $A$ charges a fixed rate $r$ per pound (so it costs $r \times s_i$ to ship a week's supply $s_i$). Company $B$ makes contracts for a fixed amount $c$ per week, independent of the weight. However, contracts with company $B$ must be made in blocks of four consecutive weeks at a time. A schedule for the PC company is a choice of air freight company ($A$ or $B$) for each of the $n$ weeks, with the restriction that company $B$, whenever it is chosen, must be chosen for blocks of four contiguous weeks in time. The cost of the schedule is the total amount paid to companies $A$ and $B$, according to the description above. Give a polynomial-time algorithm that takes a sequence of supply values $s_1, s_2, \cdots, s_n$ and returns a schedule of minimum cost. For example, suppose $r = 1$, $c = 10$, and the sequence of values is $11, 9, 9, 12, 12, 12, 12, 9, 9, 11.$ Then the optimal schedule would be to choose company $A$ for the first three weeks, company $B$ for the next block of four contiguous weeks, and then company $A$ for the final three weeks. I am trying to write an algorithm, in pseudocode, that completes the following steps in order to solve the challenge, but I don't even know where to begin this solution. My initial thought on how to solve this: Determine what the "optimal" formula looks like, which is: MIN(r*s + OPT(i-1), 3c + OPT(i-3)) This would determine the optimal choice between $A$ or $B$ given their restrictions. 
Therefore I would need to determine the optimal schedules for the weeks preceding the $i^{\text{th}}$ week. Meaning I would end up running OPT(n-1) for every n, thus resulting in OPT(n) total runs, so $O(n)$ time complexity. Any advice on how to generate pseudocode for this would be appreciated... I am trying to learn but am not very effective. Answer: The approach you used is called dynamic programming. In dynamic programming, a series of decisions are made in order to maximize some function, where the options available at any given time depend on the decisions we have made before. You may want to check a tutorial on TopCoder or some course material at hand. You have started to figure out the recurrence relation, the hallmark of dynamic programming. However, it looks like you made an off-by-one error. You can imitate the pseudocode in that tutorial. Or many other pseudocodes, such as those in Introduction to Algorithms by Cormen et al. Here is my pseudocode for this problem, where you can find the correct recurrence relation. I assume $n\ge4$; otherwise the only choice is to use company $A$ always. Let $OPT$ be an array of size $n+1$. $OPT[0]=0$, $OPT[1]=s_1r$, $OPT[2]=(s_1+s_2)r$, $OPT[3]=(s_1+s_2+s_3)r$ For $i$ = 4 to $n$, $OPT[i] = \min(OPT[i-1] + s_ir, OPT[i-4] + 4c)$ Output $OPT[n]$
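The corrected recurrence $OPT[i] = \min(OPT[i-1] + s_i r,\; OPT[i-4] + 4c)$ is a few lines of code; this sketch lets the first three weeks default to company A, matching the pseudocode's initialization:

```python
def min_shipping_cost(supplies, r, c):
    """Dynamic program: opt[i] = minimum cost of scheduling the first i weeks.
    Either week i ships with A, or weeks i-3..i form a 4-week block with B."""
    n = len(supplies)
    opt = [0.0] * (n + 1)
    for i in range(1, n + 1):
        opt[i] = opt[i - 1] + supplies[i - 1] * r          # week i with A
        if i >= 4:
            opt[i] = min(opt[i], opt[i - 4] + 4 * c)       # B block ending at i
    return opt[n]

example = min_shipping_cost([11, 9, 9, 12, 12, 12, 12, 9, 9, 11], r=1, c=10)
```

On the textbook example (A for three weeks at cost 29, one B block at 40, A for three weeks at 29), the minimum cost is 98.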
{ "domain": "cs.stackexchange", "id": 12628, "tags": "algorithms, dynamic-programming" }
ROS2 on Minimal Ubuntu?
Question: Is it possible to run ROS2 on Minimal Ubuntu? Originally posted by thinwybk on ROS Answers with karma: 468 on 2018-07-12 Post score: 0 Answer: On Dockerhub, the new Ubuntu 18.04 LTS image is now the new Minimal Ubuntu 18.04 image. Launching a Docker instance with docker run ubuntu:18.04 therefore launches a Docker instance with the latest Minimal Ubuntu. (Minimal Ubuntu, on public clouds and Docker Hub) The ROS2 Bouncy docker images are already based on "Minimal Ubuntu". Refer to its Dockerfile FROM ubuntu:bionic (ubuntu:18.04 is equivalent to ubuntu:bionic). (ROS2 Ardent is not.) Originally posted by thinwybk with karma: 468 on 2018-07-26 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 31254, "tags": "ros2, ros-bouncy" }
In a vacuum, can you see light which is not travelling towards you?
Question: In air, when there is light propagating in a direction, we can still see it even when it is not primarily travelling in our direction, because a small part of the light hits the air molecules, and changes its direction; it travels towards us. Does this mean that, in a vacuum, you would not be able to see light which is not travelling towards you? Answer: If the light has nothing to scatter off of to reach your eyes you won't see anything.
{ "domain": "physics.stackexchange", "id": 67224, "tags": "visible-light, scattering, vacuum" }
What happens to the training data after your machine learning model has been trained?
Question: What happens after you have used machine learning to train your model? What happens to the training data? Let's pretend it predicted correctly 99.99999% of the time and you were happy with it and wanted to share it with the world. If you put in 10GB of training data, is the file you share with the world 10GB? If it was all trained on AWS, can people only use your service if they connect to AWS through an API? What happens to all the old training data? Does the model still need all of it to make new predictions? Answer: In many cases, a production-ready model has everything it needs to make predictions without retaining training data. For example: a linear model might only need the coefficients, a decision tree just needs rules/splits, and a neural network needs architecture and weights. The training data isn't required as all the information needed to make a prediction is incorporated into the model. However, some algorithms retain some or all of the training data. A support vector machine stores the points ('support vectors') closest to the separating hyperplane, so that portion of the training data will be stored with the model. Further, k-nearest neighbours must evaluate all points in the dataset every time a prediction is made, and as a result the model incorporates the entire training set. Having said that, where possible the training data would be retained. If additional data is received, a new model can be trained on the enlarged dataset. If it is decided a different approach is required, or if there are concerns about concept drift, then it's good to have the original data still on hand. In many cases, the training data might comprise personal data or constitute a company's competitive advantage, so the model and the data should stay separate. If you'd like to see how this can work, this Keras blog post has some information (note: no training data required to make predictions once a model is re-instantiated).
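A tiny illustration of the first point: after fitting a linear model, the coefficients are the entire deployed artifact, and the training points can be discarded. A sketch with ordinary least squares (all names here are my own):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b. After fitting, only (a, b)
    are needed to predict; the training points can be thrown away."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])   # data lies exactly on y = 2x + 1
predict = lambda x: a * x + b                 # the whole deployed "model"
```

Shipping `predict` means shipping two floats, not the 10GB of data that produced them.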
{ "domain": "ai.stackexchange", "id": 671, "tags": "neural-networks, machine-learning, training, datasets, training-datasets" }
Moment of inertia of hollow body and solid body
Question: I read in my textbook about the various results of moment of inertia for different geometrical shapes like solid and hollow cylinder, sphere, disc and ring etc. Something general I noted is that $M.I$ for a hollow body is always greater than a solid body, say for a sphere $$MI_{hollow}=\frac{2}{3}MR^2,$$ $$MI_{solid}=\frac{2}{5}MR^2.$$ So clearly $MI_{hollow}$ is greater. And this works even for 2D shapes like ring and disc, for which $MI_{ring}$ is twice $MI_{disc}$. My question is: is it true for any irregular shape that the hollow version will have more MI than the solid version of the body? And secondly, is there any proof of it? Answer: You can find the general approach to calculating the moment of inertia here: http://hyperphysics.phy-astr.gsu.edu/hbase/mi.html#:~:text=For%20a%20point%20mass%2C%20the,a%20collection%20of%20point%20masses. The moment of inertia is a measure of an object's resistance to angular acceleration. For an object of given mass the moment of inertia will be greatest when the mass is concentrated further away from the center of rotation. This is why the moment of inertia is greater for the hollow sphere than the solid sphere of the same mass. Regarding any two irregular shapes, one hollow and one solid but having the same mass, the hollow shape will have the greater moment of inertia about the center of mass. This follows from the moment of inertia of a particle rotating about a point being $I=MR^2$. The moment of inertia varies linearly with mass but as the square of the distance away from the center of rotation. Hope this helps.
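As a quick numerical illustration of this point (my own sketch, not part of the original answer), integrating $I = \int r^2\,dm$ for a uniform disc and a ring of the same mass $M$ and radius $R$ reproduces $MR^2/2$ and $MR^2$: concentrating the mass at the rim doubles the moment of inertia.

```python
import numpy as np

M, R = 1.0, 1.0
N = 200000
dr = R / N
r_mid = (np.arange(N) + 0.5) * dr        # midpoint radii for the integration

# Uniform disc: surface density sigma = M / (pi R^2), dm = sigma * 2*pi*r dr,
# so I_disc = integral of r^2 dm -> M R^2 / 2.
sigma = M / (np.pi * R**2)
I_disc = np.sum(sigma * 2.0 * np.pi * r_mid**3) * dr

# Ring of the same mass: everything sits at radius R, so I_ring = M R^2.
I_ring = M * R**2
```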
{ "domain": "physics.stackexchange", "id": 95662, "tags": "rotational-dynamics, rigid-body-dynamics, moment-of-inertia" }
How do aerobic Rhizobium bacteria survive in root nodules while fixing atmospheric nitrogen?
Question: I read that rhizobium carry the enzyme nitrogenase, which is irreversibly damaged upon exposure to oxygen. Inside the root nodule of legumes, leg-haemoglobin maintains a microaerophilic environment where the nitrogenase enzyme can function and molecular nitrogen can be fixed (source: Brock's Biology of Microorganisms). Shouldn't Rhizobium bacteria therefore die in such a microaerophilic condition inside the root nodules? Answer: TLDR One hypothesis for survival is nitrate respiration. Long Answer The amount of oxygen available in root nodules is controlled by the host plant in two ways: by the presence of leghemoglobin, and by the diffusion resistance. Under anaerobic conditions the bacteria survive by making use of the denitrification process, which can be used to produce ATP under anaerobic conditions. It has been known for a while that, beside nitrogenase activity, in many symbiotic associations between legumes and Rhizobium the activity of nitrate reductase also exists [1,2]. Most of the nitrate reductase activity has been found to be concentrated in the infected bacterial body, i.e., the bacteroid [3]. The bacteroid uses the conversion of nitrate into nitrite to create a proton gradient, which is in turn used for the generation of ATP. Reference.
{ "domain": "biology.stackexchange", "id": 6852, "tags": "microbiology, symbiosis" }
Density of states and elliptic integral
Question: It is known, for example Equation (14) in the graphene review of Castro Neto (arXiv), that the full expression for the density of states (DOS) of graphene is in terms of an elliptic integral. Close to the Dirac point, the well known DOS which goes linearly with energy is found. How can one recover Eq. (14)? Or, more precisely, how to show the following integral for DOS leads to an elliptic integral? $\rho (E) = \int \frac{d^2 k }{(2 \pi)^2} \delta(E-E_{\pm}(\boldsymbol{k}))$ where $E_{\pm}(\boldsymbol{k}) = \pm t \sqrt{3+f(\boldsymbol{k})}$ and $f(\boldsymbol{k}) = 2 \cos (\sqrt{3}k_y a) + 4 \cos (\tfrac{\sqrt{3}}{2}k_y a) \cos (\tfrac{3}{2} k_x a)$ Answer: I was also stuck on this problem. Since this is an old question and I didn't find the full answer to it, I'll write down my attempt. It doesn't represent the full solution; however, I think that it almost gives the answer. The density of states $\rho (E)$ is the imaginary part of the self-energy $\Sigma (\mathbf r , \mathbf r, E+i\epsilon)$, where $\epsilon \to 0^{+}$: $$ \rho (E) = -\frac{1}{\pi}\text{Im}\left(\lim_{\epsilon \to 0^{+}}\Sigma (\mathbf r , \mathbf r , E + i\epsilon) \right) $$ How can we determine $\Sigma (\mathbf r , \mathbf r , E + i\epsilon)$? By definition, the Green operator is $$ \hat{G} (T) = \frac{1}{T\hat{I} - \hat{H}} \equiv \sum_{\mathbf k}\frac{|\mathbf k \rangle \langle \mathbf k|}{T - E(\mathbf k)}, \quad T \equiv E + i\epsilon $$ Next, the Green function which connects the site $l$ of the lattice with itself (which is exactly the self energy) is $$ \tag 1 \Sigma (T, l, l) = \sum_{\mathbf k}\frac{\langle \mathbf{l}|\mathbf k \rangle \langle \mathbf k| \mathbf{l}\rangle}{T - E(\mathbf k)} $$ Let's talk about graphene in the nearest-neighbour approximation. Its lattice is hexagonal (honeycomb), which can be represented by two interpenetrating triangular lattices with the strength of interaction given by $t$.
Only the nearest sites (say, $A$ and $B$) of these lattices interact, so $\hat{H}$ lives in the space which is the direct product of the spaces of the two triangular lattices. As a result, the Hamiltonian can be written as a two-dimensional matrix. Thus, for a given site the denominator of $(1)$ is $$ \tag 2 \begin{pmatrix} T & -\mu t \\ -\mu^{*}t & T\end{pmatrix} $$ Here $\mu$ defines the character of the lattice, being $$ \mu = e^{ik_{x}a} + e^{i\left(\frac{\sqrt{3}k_{y}a}{2} - \frac{k_{x}a}{2} \right)} + e^{-i\left(\frac{\sqrt{3}k_{y}a}{2} + \frac{k_{x}a}{2} \right)} $$ Substituting $(2)$ into $(1)$, you can convert $(1)$ to the form $$ \tag 3 \rho (E) = -\frac{1}{\pi}\text{Im}\left[ \lim_{\epsilon \to 0^{+}}\int \limits_{\text{1st Br. zone}}\frac{d^{2}\mathbf k}{(2 \pi)^{2}}\frac{T}{T^{2} - t^{2}|\mu|^{2}}\right] = -\frac{E}{8t^{2}\pi}\text{Im} \left[\lim_{\epsilon \to 0^{+}}\tilde{G}\left(\frac{T}{t}\right)\right], $$ where $$ \tilde{G}\left(\frac{T}{t}\right) \equiv \frac{1}{\pi^2}\int \limits_{-\pi}^{\pi}\frac{dxdy}{ \frac{\frac{T^{2}}{t^{2}} - 3}{2} - \cos(2y) - 2\cos (y)\cos(3x)} $$ This quantity can be computed (there is no derivation of this result) for $\frac{\frac{T^{2}}{t^{2}} - 3}{2} > 3$, and $$ \tag 4 \tilde{G}\left(\frac{T}{t}\right) = \frac{T}{t\pi}\frac{1}{\sqrt{\left(\frac{T}{t} - 1\right)^3\left(\frac{T}{t} + 3\right)}}K\left( \frac{4\sqrt{\frac{T}{t}}}{\sqrt{\left(\frac{T}{t} - 1\right)^3\left(\frac{T}{t} + 3\right)}}\right), $$ where $K(x)$ is the elliptic integral of the first kind: $$ K(x) = \int \limits_{0}^{\frac{\pi}{2}}\frac{dy}{\sqrt{1 - x^{2}\sin^{2}(y)}} $$ The only thing which you have to do is to compute the analytic continuation of $(4)$ and then to compute its imaginary part multiplied by four (which corresponds to the degeneracy of spins and two sites). An edit Here is the full derivation of the density of states in graphene.
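As a small numerical sanity check (my own addition, not part of the original answer), the dispersion quoted in the question indeed vanishes at a Dirac point such as $\mathbf K = \left(\frac{2\pi}{3a}, \frac{2\pi}{3\sqrt 3 a}\right)$, which is where the linear-DOS regime sets in, while $|E| = 3t$ at the zone centre:

```python
import math

a, t = 1.0, 1.0  # lattice constant and hopping (values are arbitrary)

def f(kx, ky):
    # f(k) exactly as quoted in the question
    return (2 * math.cos(math.sqrt(3) * ky * a)
            + 4 * math.cos(math.sqrt(3) * ky * a / 2) * math.cos(1.5 * kx * a))

def E(kx, ky):
    # upper band; max(., 0) guards against tiny negative rounding at the K point
    return t * math.sqrt(max(3 + f(kx, ky), 0.0))

K = (2 * math.pi / (3 * a), 2 * math.pi / (3 * math.sqrt(3) * a))
E_at_K = E(*K)        # -> 0 at the Dirac point
E_at_Gamma = E(0, 0)  # -> 3t at the zone centre
```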
{ "domain": "physics.stackexchange", "id": 29376, "tags": "quantum-mechanics, condensed-matter, solid-state-physics, graphene" }
Capacity of Uniquely Solvable Puzzle (USP)
Question: In their seminal paper Group-theoretic algorithms for matrix multiplications, Cohn, Kleinberg, Szegedy and Umans introduce the concept of uniquely solvable puzzle (defined below) and USP capacity. They claim that Coppersmith and Winograd, in their own groundbreaking paper Matrix multiplication via arithmetic progressions, "implicitly" prove that the USP capacity is $3/2^{2/3}$. This claim is reiterated in several other places (including here on cstheory), yet nowhere is an explanation to be found. Below is my own understanding on what Coppersmith and Winograd do prove, and why it's not enough. Is it true that the USP capacity is $3/2^{2/3}$? If so, is there a reference for the proof? Uniquely solvable puzzles A uniquely solvable puzzle (USP) of length $n$ and width $k$ consists of a subset of $\{1,2,3\}^k$ of size $n$, which we also think of as three collections of $n$ "pieces" (corresponding to the places where the vectors are $1$, the places where they are $2$, and the places where they are $3$), satisfying the following property. Suppose we arrange all the $1$-pieces in $n$ lines. Then there must be a unique way to put the other pieces, one of each type in each line, so that they "fit". Let $N(k)$ be the maximum length of a USP of width $k$. The USP capacity is $$ \kappa = \sup_k N(k)^{1/k}. $$ In a USP, each of the pieces needs to be unique - that means that no two lines contain a symbol $c \in \{1,2,3\}$ in exactly the same places. This shows (after a short argument) that $$ N(k) \leq \sum_{a+b+c=k} \min \left\{ \binom{k}{a}, \binom{k}{b}, \binom{k}{c} \right\} \leq \binom{k+2}{2} \binom{k}{k/3}, $$ and so $\kappa \leq 3/2^{2/3}$. 
Example (a USP of length $4$ and width $4$): $$\begin{align*} 1111 \\ 2131 \\ 1213 \\ 2233 \end{align*}$$ Non-example of length $3$ and width $3$, where the $2$- and $3$-pieces can be arranged in two different ways: $$\begin{align*} 123 && 132 \\ 231 && 321 \\ 312 && 213 \end{align*}$$ Coppersmith-Winograd puzzles A Coppersmith-Winograd puzzle (CWP) of length $n$ and width $k$ consists of a subset $S$ of $\{1,2,3\}^k$ of size $n$ in which the "pieces" are unique - for any two $a \neq b \in S$ and $c \in \{1,2,3\}$, $$ \{ i \in [k] : a_i = c \} \neq \{ i \in [k] : b_i = c \}. $$ (They present it somewhat differently.) Every USP is a CWP (as we commented above), hence the CWP capacity $\lambda$ satisfies $\lambda \geq \kappa$. Above we commented that $\lambda \leq 3/2^{2/3}$. Coppersmith and Winograd showed, using a sophisticated argument, that $\lambda = 3/2^{2/3}$. Their argument was simplified by Strassen (see Algebraic complexity theory). We sketch a simple proof below. Given $k$, let $V$ consist of all vectors containing $k/3$ each of $1$s, $2$s, $3$s. For $c \in \{1,2,3\}$, let $E_c$ consist of all pairs $a,b \in V$ such that $\{ i \in [k] : a_i = c \} = \{ i \in [k] : b_i = c \}$, and put $E = E_1 \cup E_2 \cup E_3$. Every independent set in the graph $G = (V,E)$ is a CWP. It is well-known that every graph has an independent set of size $|V|^2/4|E|$ (proof: select each vertex with probability $|V|/2|E|$, and remove one vertex from each surviving edge). In our case, $$ |V| = \binom{k}{k/3} \binom{2k/3}{k/3}, \quad |E| \leq 3|E_1| = \frac{3}{2}\binom{k}{k/3} \binom{2k/3}{k/3}^2. $$ Hence $$ \frac{|V|^2}{4|E|} = \frac{1}{6} \binom{k}{k/3} \Longrightarrow \lambda \geq \frac{3}{2^{2/3}}. $$ Answer: Like many other questions, the answer to this one can be found in Stothers' thesis. A local USP is a CWP in which the only way in which a 1-piece, a 2-piece and a 3-piece can fit together is if their union is in $S$. 
Clearly a local USP is a USP, and a construction from [CKSU] shows that the USP capacity is achieved by local USPs (we are going to show that constructively). Coppersmith and Winograd construct an almost 2-wise independent distribution $S$ on $2^V$ with the following two properties: (1) $\Pr[x \in S] = (|V|/2|E|)^{1-\epsilon}$, (2) For any $x,y,z \in V$ such that the 1-piece of $x$, the 2-piece of $y$ and the 3-piece of $z$ together form a vector $w \in V$: if $x,y,z \in S$ then $w \in S$. We choose a random subset $S$ of $V$ according to the distribution, and for each edge $(x,y) \in E$, we remove both vertices $x,y$. The expected number of vertices left is roughly $(|V|^2/2|E|)^{1-\epsilon}$. The resulting set $T$ is a local USP: if there are $x,y,z \in T$ in which the 1-piece of $x$, the 2-piece of $y$ and the 3-piece of $z$ fit, forming a piece $w$, then $x,y,z,w \in S$, and so all of $x,y,z$ are removed from $S$.
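As a concrete aside (my own sketch, not part of the original answer): for small puzzles the unique-solvability condition can be checked by brute force, interpreting "the pieces fit" as the three pieces in each line being pairwise disjoint, following the CKSU definition. On the examples from the question this gives exactly one valid arrangement for the USP and two for the non-example.

```python
from itertools import permutations

def pieces(word, symbol):
    return frozenset(i for i, c in enumerate(word) if c == symbol)

def count_arrangements(words):
    """Count assignments of the 2-pieces and 3-pieces to the lines of
    1-pieces such that in every line the three pieces are pairwise disjoint."""
    ones = [pieces(w, '1') for w in words]
    twos = [pieces(w, '2') for w in words]
    threes = [pieces(w, '3') for w in words]
    n = len(words)
    count = 0
    for p2 in permutations(range(n)):
        # prune: the 2-piece placed in each line must avoid that line's 1-piece
        if any(ones[i] & twos[p2[i]] for i in range(n)):
            continue
        for p3 in permutations(range(n)):
            if all(not (ones[i] & threes[p3[i]])
                   and not (twos[p2[i]] & threes[p3[i]])
                   for i in range(n)):
                count += 1
    return count

usp = ["1111", "2131", "1213", "2233"]   # the example from the question
non_usp = ["123", "231", "312"]          # the non-example
n_usp = count_arrangements(usp)          # 1: uniquely solvable
n_non = count_arrangements(non_usp)      # 2: hence not a USP
```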
{ "domain": "cstheory.stackexchange", "id": 1643, "tags": "ds.algorithms, co.combinatorics, algebraic-complexity, matrix-product" }
How to derive the sum-of-squares error function formula?
Question: I'm attending a Machine Learning course and I'm studying linear models for classification right now. Slides present approaches to learn linear discriminants (Least squares, Fisher's linear discriminant, Perceptron and SVM), more specifically, how to compute the weight matrix $\tilde{\textbf{W}}$ to determine the discriminant function: \begin{equation}y = \tilde{\textbf{W}}^T \tilde{\textbf{x}} + w_0. \end{equation} My problem is about least squares: I don't understand how the minimization of the sum-of-squares error function: \begin{equation}E(\tilde{\textbf{W}}) = \frac{1}{2} Tr\Bigl\{(\tilde{\textbf{X}}\tilde{\textbf{W}} - \textbf{T})^T(\tilde{\textbf{X}}\tilde{\textbf{W}} - \textbf{T})\Bigr\} \end{equation} (where $Tr$ is the trace) is derived and how it is possible to reach the closed formula solution: \begin{equation}\tilde{\textbf{W}} = (\tilde{\textbf{X}}^T\tilde{\textbf{X}})^{-1}\tilde{\textbf{X}}^T\textbf{T}\end{equation} Can someone explain to me the main steps in the simplest and clearest possible way to make sense of these formulas? I'm a beginner. P.S. These formulas come from C. Bishop, Pattern Recognition and Machine Learning. Answer: There are two interpretations of this formula; I will explain one of them. \begin{equation} Xw = y \end{equation} \begin{equation} X^tXw = X^ty \end{equation} The step above ensures that you form a square matrix, which may then have an inverse. It is possible that $X^tX$ does not have an inverse, but this is unlikely in linear regression problems. The reason is that you have a matrix $X$ which belongs to $R^{m \times n}$, where $m$ represents the number of samples and $n$ represents the number of features. Usually, the number of samples is much larger than the number of features.
Next, \begin{equation}(X^tX)^{-1}(X^tX)w = (X^tX)^{-1}X^ty\end{equation} \begin{equation}w = (X^tX)^{-1}X^ty\end{equation} Consequently, you have found a closed form for $w$ in the linear regression problem, which can be generalised to non-linear regression. Be aware that $(X^tX)^{-1}X^t$ is called the pseudo-inverse of the matrix $X$. The reason is that $X$ is not a square matrix and does not have an inverse, but with the formula above you can find its pseudo-inverse. Just multiply it by $X$ and you will get the identity matrix. There is another interpretation of this, which you can find here.
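A quick numerical check of the closed form against NumPy's built-in least-squares and pseudo-inverse routines (my own sketch, not from the original answer; the data is random):

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 200, 5                    # many more samples than features
X = rng.normal(size=(m, n))
T = rng.normal(size=m)           # targets

# Closed form: w = (X^T X)^{-1} X^T T
w_closed = np.linalg.inv(X.T @ X) @ X.T @ T

# Reference 1: minimise ||X w - T||^2 directly
w_lstsq, *_ = np.linalg.lstsq(X, T, rcond=None)

# Reference 2: the pseudo-inverse route mentioned at the end of the answer
w_pinv = np.linalg.pinv(X) @ T
```

All three routes agree to machine precision when $X^tX$ is well conditioned.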
{ "domain": "datascience.stackexchange", "id": 4006, "tags": "machine-learning, neural-network, linear-regression, optimization" }
Tensor manipulation
Question: Having a bit of trouble applying what I know about tensor manipulation, given, $T^{\mu \nu} = \left( g^{\mu \nu} - \frac{p^\mu n^\nu + p^\nu n^\mu}{p \cdot n} \right)$, I need to compute quantities such as (i) $T^{\mu \nu}T_{\mu \nu}$, (ii) $T^\mu{}_\nu$ and (iii) $ T^\mu{}_\mu $. Start with (iii) $ T^\mu{}_\mu = g_{\mu\nu}T^{\mu\nu}$ I don't think this can be correct because both indices appear twice. Just by inspection I would guess that $T^\mu{}_\mu = \left( g^{\mu}{}_\mu - \frac{p^\mu n_\mu + p_\mu n^\mu}{p \cdot n} \right)$ $g^\mu{}_\mu =2$ here I summed all the diagonal terms $\frac{p^\mu n_\mu + p_\mu n^\mu}{p^{\mu} n_\mu} $= $\frac{2 p\cdot n}{p \cdot n} = 2$ $T^\mu{}_\mu = 0$? If this is the case then I get the feeling that if the indices are doing things like that, i.e., if you have repeated indices on the tensor, then you will end up with a scalar - and maybe that that scalar will be zero. (ii) $T^\mu{}_\nu = g_{\nu\sigma}T^{\mu\sigma}=g_{\nu\sigma}\left(g^{\mu\sigma} - \frac{p^\mu n^\sigma + p^\sigma n^\mu}{p\cdot n}\right)=\left(g_{\nu\sigma}g^{\mu\sigma} - \frac{p^\mu n_\nu + p_\nu n^\mu}{p \cdot n}\right)$ I think that $g_{\nu\sigma}g^{\mu\sigma} = \delta^\nu_\mu$ and $\frac{p^\mu n_\nu + p_\nu n^\mu}{p \cdot n}$ I think, is simplified as far as it can go. (i) I wanted to start with $T_{\mu\nu}T^{\mu\nu}=T^{\mu\nu}g_{\mu\sigma}g_{\nu\rho}T^{\sigma\rho}$ but I can sort of see that this will, firstly, immediately be: $\left( g^{\mu \nu} - \frac{p^\mu n^\nu + p^\nu n^\mu}{p \cdot n} \right)\left( g_{\mu \nu} - \frac{p_\mu n_\nu + p_\nu n_\mu}{p \cdot n} \right)$. 
And also that there are tricks involved in getting the answer quickly, but if I multiply out the terms I get $\left( g^{\mu \nu}g_{\mu \nu} - g_{\mu \nu}\frac{p^\mu n^\nu + p^\nu n^\mu}{p \cdot n} -g^{\mu \nu}\frac{p_\mu n_\nu + p_\nu n_\mu}{p \cdot n} + \frac{p^\mu n^\nu + p^\nu n^\mu}{p \cdot n}\frac{p_\mu n_\nu + p_\nu n_\mu}{p \cdot n}\right)$ Here's where my understanding breaks down, how do I evaluate $g_{\mu\nu} p^\mu n^\nu$ in terms of raising and lowering, which index do I choose? It must be a scalar, since it is a double sum. $g_{\mu\nu} p^\mu n^\nu = p \cdot n$. $\left( g^{\mu \nu}g_{\mu \nu} - g_{\mu \nu}\frac{p^\mu n^\nu + p^\nu n^\mu}{p \cdot n} -g^{\mu \nu}\frac{p_\mu n_\nu + p_\nu n_\mu}{p \cdot n} + \frac{p^\mu n^\nu + p^\nu n^\mu}{p \cdot n}\frac{p_\mu n_\nu + p_\nu n_\mu}{p \cdot n}\right)=\delta_\mu^\nu -2-2+ 4 \frac{(p\cdot n)^2}{(p \cdot n)^2} = \delta_\mu^\nu $ Which seems like it could be the right answer, but is it? Answer: Start with (iii) $ T^\mu{}_\mu = g_{\mu\nu}T^{\mu\nu}$ I don't think this can be correct because both indices appear twice. What's wrong with $ g_{\mu\nu}T^{\mu\nu}$? Both indices are contracted. Explicitly it means $$ \sum_{\mu=0}^3\sum_{\nu=0}^3 g_{\mu\nu}T^{\mu\nu}$$ which is a perfectly good scalar. $g^\mu{}_\mu =2$ here I summed all the diagonal terms What dimension spacetime are you working in? The general result is $$ g^\mu_\mu \equiv g_{\mu\nu} g^{\mu\nu} = \delta^\mu_\mu = D, $$ which equals 4 in 4 dimensions. if you have repeated indices on the tensor, then you will end up with a scalar - and maybe that that scalar will be zero. Contracting indices on a tensor does not necessarily give you a scalar and if it does that scalar is not necessarily zero. 
An important example is given by the contractions of the Riemann tensor: $$ \begin{array}{lcl} R_{\mu\nu} &\equiv& R^\alpha_{\ \mu\alpha\nu} \\ R &\equiv& R^\mu_\mu = g^{\mu\nu} R_{\mu\nu} = g^{\mu\nu} R^\alpha_{\ \mu\alpha\nu} \end{array}$$ neither of which necessarily vanishes. (They are physically important because they enter into Einstein's field equation of general relativity: $ R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = 8\pi G T_{\mu\nu}$). I think that $g_{\nu\sigma}g^{\mu\sigma} = \delta^\nu_\mu$ and $\frac{p^\mu n_\nu + p_\nu n^\mu}{p \cdot n}$ I think, is simplified as far as it can go. I think you mean $\delta^\mu_\nu$, which is numerically equal to $\delta^\nu_\mu$, but still different. :) Otherwise you're correct. Here's where my understanding breaks down, how do I evaluate $g_{\mu\nu} p^\mu n^\nu$ in terms of raising and lowering, which index do I choose? It must be a scalar, since it is a double sum. $g_{\mu\nu} p^\mu n^\nu = p \cdot n$. Either index - you get the same answer both ways. Your answer is right: $p\cdot n$. $\left( g^{\mu \nu}g_{\mu \nu} - g_{\mu \nu}\frac{p^\mu n^\nu + p^\nu n^\mu}{p \cdot n} -g^{\mu \nu}\frac{p_\mu n_\nu + p_\nu n_\mu}{p \cdot n} + \frac{p^\mu n^\nu + p^\nu n^\mu}{p \cdot n}\frac{p_\mu n_\nu + p_\nu n_\mu}{p \cdot n}\right)=\delta_\mu^\nu -2-2+ 4 \frac{(p\cdot n)^2}{(p \cdot n)^2} = \delta_\mu^\nu $ There is an error in your first term: both indices are contracted, so it should be $\delta_\mu^\mu = D$. There is also an error in your last term. Expanding out: $$ (p^\mu n^\nu + p^\nu n^\mu)(p_\mu n_\nu + p_\nu n_\mu) = p^\mu n^\nu p_\mu n_\nu + p^\mu n^\nu p_\nu n_\mu + p^\nu n^\mu p_\mu n_\nu + p^\nu n^\mu p_\nu n_\mu. $$ If you look carefully you'll see that you get $(p\cdot n)^2$ terms but also $ p^2 n^2 $ terms as well. Which seems like it could be the right answer, but is it? No, because the expression should be a scalar and you have a tensor with uncontracted indices.
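These contractions are easy to check numerically (my own sketch, not part of the original answer; mostly-minus Minkowski metric in $D=4$ with random four-vectors $p$ and $n$). The results being verified are $T^\mu{}_\mu = D-2$ and, collecting the expansion above, $T^{\mu\nu}T_{\mu\nu} = D - 2 + 2\,p^2 n^2/(p\cdot n)^2$:

```python
import numpy as np

D = 4
g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric, signature (+,-,-,-); g^{-1} = g here

rng = np.random.default_rng(1)
p = rng.normal(size=D)                  # random four-vectors (components arbitrary)
n = rng.normal(size=D)

def dot(u, v):
    return float(np.einsum('a,ab,b->', u, g, v))   # u . v = u^a g_ab v^b

pn = dot(p, n)

# T^{mu nu} = g^{mu nu} - (p^mu n^nu + p^nu n^mu) / (p.n)
T_up = g - (np.outer(p, n) + np.outer(n, p)) / pn

# trace: T^mu_mu = g_{mu nu} T^{mu nu} = D - 2
trace = float(np.einsum('ab,ab->', g, T_up))

# full contraction: lower both indices with the metric, then contract
T_down = np.einsum('ac,bd,cd->ab', g, g, T_up)
full = float(np.einsum('ab,ab->', T_up, T_down))
expected = D - 2 + 2 * dot(p, p) * dot(n, n) / pn**2
```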
{ "domain": "physics.stackexchange", "id": 6843, "tags": "homework-and-exercises, tensor-calculus, special-relativity, stress-energy-momentum-tensor" }
How does liquid stay in a Pasteur pipette/eye dropper instead of dripping out?
Question: From what I understand about how pipettes work (correct me if I am wrong), when you squeeze the bulb of a pipette, you are removing the air from it and when you dip the end in liquid and release the bulb, the area increases so the pressure in the bulb drops. So due to the atmospheric pressure pushing down on the liquid surface outside the pipette and not having an equal pressure pushing down inside the pipette, the liquid rises into the pipette until the area in the bulb gives an equal pressure to atmospheric pressure. However, I don't understand how the liquid stays in the pipette once you raise it out of the liquid and doesn't fall out until you squeeze the bulb again. It seems like the liquid should be heavy enough to fall out? Is it due to intermolecular forces holding the liquid there? Answer: There are two factors at play: the first is surface tension, which tends to distribute the action of forces across an area, instead of those forces localizing at one infinitesimal point. Second, atmospheric pressure is still in effect. As gravity pulls on the liquid in the pipette, pressure in the bulb diminishes, so external atmospheric pressure pushes back on the liquid. It DOES move down the pipette, but only until those pressures equalize.
{ "domain": "physics.stackexchange", "id": 42903, "tags": "pressure, everyday-life" }
What would be the basis vectors for this 2D crystal structure?
Question: In the above image, I have a 2D crystal structure. The lattice vectors are described by: a = {-1/2, -Sqrt[3]/2}; b = {1, 0}; and the location of atoms A and B are given by: \[Tau][A] = {2/3, 1/3}; \[Tau][B] = {0, 0}; Now the problem I'm tasked with is to plot the locations of the atoms A and B, which I have done as seen above, where A is red and B is blue. I also have to plot the lattice vectors which I have done in blue arrows as seen above, and lastly I have to plot the basis vectors and this is where I don't know what to do. I don't know what the basis vectors are. Googling has led me to discover that a lot of people use lattice vectors and basis vectors interchangeably, and overall, I have no clear definition to work with. Sometimes people say basis vectors are orthonormal, sometimes they need to be linear combinations with integers, sometimes with any real numbers. I'm frustrated by the looseness with which all kinds of sources use these terms, and ultimately I don't have a clear reference to go with. Answer: When talking about crystal lattices, the lattice vectors are what determines the translational symmetry of the crystal, and you have correctly identified those. The basis vectors are the vectors that tell you where the different atoms in your unit cell are. Thus, the basis vectors are those "locations of atoms A and B": The basis vector for atom B is just $(0,0)$ and the basis vector for atom A is $(2/3, 1/3)$. They tell you how to find your atoms within the unit cell once you have found the origin of that cell via the lattice vectors.
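As an illustrative sketch (my own addition, using the numbers from the question): the Cartesian position of an atom is the cell translation $n\mathbf a + m\mathbf b$ plus the basis vector, with the basis given in fractional (lattice-vector) coordinates:

```python
import math

# lattice vectors from the question
a = (-0.5, -math.sqrt(3) / 2)
b = (1.0, 0.0)

# basis vectors: positions of the atoms in fractional (lattice) coordinates
tau = {"A": (2 / 3, 1 / 3), "B": (0.0, 0.0)}

def position(atom, n, m):
    """Cartesian position of `atom` in the unit cell translated by n*a + m*b."""
    f1, f2 = tau[atom]
    return ((n + f1) * a[0] + (m + f2) * b[0],
            (n + f1) * a[1] + (m + f2) * b[1])

A00 = position("A", 0, 0)   # (0, -1/sqrt(3)) in the home cell
B00 = position("B", 0, 0)   # (0, 0): atom B sits on the lattice points
```

Sweeping `n` and `m` over a few integers and plotting the resulting points reproduces the two sublattices in the figure.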
{ "domain": "physics.stackexchange", "id": 7610, "tags": "condensed-matter, solid-state-physics, material-science, crystals" }
Annihilation Operator on the Fock space
Question: I agree that $$\hat a|0\rangle=0$$ But then, based on the above, the following should hold $$\hat a_k |N_1,...,N_{k-1},0,N_{k+1},...\rangle=|N_1\rangle\oplus\cdots\oplus |N_{k-1}\rangle\oplus \hat a_k|0\rangle \oplus|N_{k+1}\rangle\cdots$$ $$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~=|N_1\rangle\oplus\cdots\oplus |N_{k-1}\rangle\oplus 0 \oplus|N_{k+1}\rangle\cdots$$ However, I've seen QFT textbooks claim that the calculation should result in $0$. $$\hat a_k |N_1,...N_{k-1},0,N_{k+1},...\rangle=0$$ Then does this mean that $$|N_1\rangle\oplus\cdots\oplus |N_{k-1}\rangle\oplus 0 \oplus|N_{k+1}\rangle\cdots=0\tag{*}$$ I don't see how what we've derived makes sense. Perhaps someone could explain why (*) makes sense mathematically? Answer: Suppose I have two quantum modes $|\cdot \rangle_A$ and $|\cdot \rangle_B$. Suppose mode $A$ is in state $$\sum_n a_n|n\rangle_A$$ and mode $B$ is in state $$\sum_m b_m|m\rangle_B . $$ Then the state of the combined system is $$\sum_{n,m} a_n b_m \left( |n\rangle_A \otimes |m\rangle_B \right). $$ Note that the coefficients are multiplied. Now consider the special case where mode $A$ is in state $$|0\rangle_A . $$ Then the system state is $$ \sum_m b_m \left( |0\rangle_A \otimes |m\rangle_B \right) = |0\rangle_A \otimes \left( \sum_m b_m |m\rangle_B \right) . $$ If we act $a_A$ on this thing we get $$ a |0\rangle_A \otimes \left( \sum_m b_m |m\rangle_B \right) = 0 \otimes \left( \sum_m b_m |m\rangle_B \right) = 0. $$ If your question really boils down to what those little $\otimes$ symbols really mean then that might make another good question. You can roughly (and quite successfully) think of them as a thing which multiplies numbers and concatenates vector spaces (i.e. concatenates the spaces of the various quantum modes). It's the multiplication of the numbers that leads to the zero in this problem. This discussion of second quantisation may be useful. P.S. 
Before the theory train demolishes my house, I know that this summary of how tensor products work is rough and incomplete. This is intentional.
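The "multiplicative zero" can also be seen with explicit matrices (my own sketch, not part of the original answer; each mode is truncated to 4 levels, and mode B is put in an arbitrary number state):

```python
import numpy as np

N = 4                                         # truncate each mode to 4 levels
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # a|n> = sqrt(n) |n-1>
eye = np.eye(N)

vac = np.zeros(N); vac[0] = 1.0               # |0> of mode A
psi = np.zeros(N); psi[2] = 1.0               # mode B in |2>, say

state = np.kron(vac, psi)                     # |0>_A (x) |2>_B
a_A = np.kron(a, eye)                         # annihilate mode A only

result = a_A @ state                          # (a|0>)_A (x) |2>_B = 0 (x) |2>_B = 0
```

The result is the zero vector of the full tensor-product space, not a state "containing" a zero slot, which is exactly the point of (*).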
{ "domain": "physics.stackexchange", "id": 16642, "tags": "quantum-field-theory, second-quantization, hilbert-space" }
Why is the centre of compression different for two different graphs of a longitudinal wave?
Question: According to the first picture, the centre of compression corresponds with the peak of the wave. However, in the second picture, the centre of compression lies on the x-axis. This is because to the left side of the compression point on the x-axis, the particles move to the right. On the right side, the particles move to the left. This causes compression at the red points. What causes this difference and which one is correct? Answer: The text under the second picture is right. The slope of the displacement graph determines the amount of compression/rarefaction. The waveform in the first picture shows something other than displacement. For sound it could be pressure, but I have no clue what the author is displaying.
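A small numerical sketch (my own addition) makes the answer's point explicit: for a displacement wave $s(x)=\sin(x)$, the compression goes as $-\partial s/\partial x$, so the centre of compression falls at a zero crossing of the displacement graph, not at its peak:

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 10001)
s = np.sin(x)                      # displacement graph
compression = -np.gradient(s, x)   # pressure variation goes as -ds/dx

i_max = int(np.argmax(compression))
x_c = x[i_max]                     # centre of compression, ~ pi
s_at_c = s[i_max]                  # displacement there, ~ 0 (a zero crossing)
```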
{ "domain": "physics.stackexchange", "id": 86769, "tags": "waves" }
Why does the torque on a current-carrying circular loop in a uniform magnetic field differ from the result of $\mu \times \vec{B}$ when derived by the calculus method?
Question: We have a current carrying circular wire kept in a uniform magnetic field $\vec{B}$, as shown. I tried to derive the torque $\vec{\tau}$ acting on it. For 2 elemental parts on the wire subtending angle $d\theta$ at the center, situated directly opposite each other, $$d\vec{\tau} = 2i\,dl\, B \sin\theta\, r$$ which gives $$\tau = -2i r^2 B \cos \theta$$ On varying $\theta$ from $0$ to $\pi$ I get net torque $\tau = 4ir^2B$, but by applying $\tau = I\vec{A}\times\vec {B}$ I got another answer, $\tau = I\pi r^2 B$, which is correct. Note: here $r$ is the radius of the wire; there is a factor of 2 because there are 2 elemental parts situated directly opposite each other, and the torques on both elemental parts have the same magnitude and add up. I replaced $dl$ with $r\,d\theta$. The angle $\theta$ is shown in the picture. I don't know why there is such a difference between the answers, although both processes look correct.
It can be easily shown (I'll leave it as an exercise) that one of those terms integrates to $0$ as $\theta$ runs from $[0,2\pi)$, and the other term gives you $$\boldsymbol{\tau} = -\mathbf{\hat{x}} I B r^2 \int_0^{2\pi}\sin^2\theta\, \text{d}\theta = -I \pi r^2 B\, \mathbf{\hat{x}},$$ which is exactly what you'd get if you calculated it using $$\boldsymbol{\tau} = I \mathbf{A}\times\mathbf{B} = I \pi r^2 B \mathbf{\hat{z}}\times \mathbf{\hat{y}} = - I \pi r^2 B \mathbf{\hat{x}}.$$
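As a numerical cross-check (my own sketch, not part of the original answer; current, radius and field values are arbitrary), one can sum $\mathbf r \times d\mathbf F$ over the loop elements and compare with $I\mathbf A \times \mathbf B$:

```python
import numpy as np

I_cur, R, B = 2.0, 0.5, 1.5              # arbitrary current, radius, field
Bvec = np.array([0.0, B, 0.0])           # B in the loop plane, along y

Nseg = 100000
theta = np.linspace(0.0, 2.0 * np.pi, Nseg, endpoint=False)
dtheta = 2.0 * np.pi / Nseg

zeros = np.zeros_like(theta)
r = R * np.stack([np.cos(theta), np.sin(theta), zeros], axis=1)
dl = R * dtheta * np.stack([-np.sin(theta), np.cos(theta), zeros], axis=1)

dF = I_cur * np.cross(dl, Bvec)          # force on each element
tau = np.cross(r, dF).sum(axis=0)        # total torque = sum of r x dF

# compare with tau = I A x B, where A = pi R^2 z-hat
tau_formula = I_cur * np.pi * R**2 * np.cross(np.array([0.0, 0.0, 1.0]), Bvec)
```

Both routes give $-I\pi R^2 B$ along $x$; the angle-dependent cross terms integrate to zero over the full loop.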
{ "domain": "physics.stackexchange", "id": 69450, "tags": "electromagnetism, magnetic-fields, magnetic-moment" }
Center of momentum frame for two dimensions
Question: I have been investigating the scenario of two photons of different frequencies traveling in space (in a lab frame of reference) that are about to collide with an angle $\theta$ between them. I set their 4-momentum vectors to be $(hν_1/c , hν_1/c \cos(\theta) ,hν_1/c \sin(\theta), 0)$ for a gamma photon and $(hν_2/c , hν_2/c , 0 , 0)$ for a CMB photon. I am having a hard time figuring out what the energy would be in a zero momentum frame due to the angle. Pair production along with a Lorentz transformation leads me to believe that $hν_1/c \sin(\theta)$ would always equal zero, which is hard to believe as I can adjust the angle in other frames. Answer: Set $c=1$, so that $h\nu=E=p$; much less cluttered. Assuming you are serious about the signs you have chosen, the total energy-momentum four-vector in the lab frame is $$ (p_1+p_2, p_2+ p_1 \cos\theta, p_1\sin\theta, 0). $$ In the center of momentum frame, it is $(E,0,0,0)$. But $s=E^2-\vec{p}^2$ is a relativistic invariant, and so the same in both frames, so $$ E^2= 2p_1p_2 (1-\cos\theta), $$ with a minimum at 0 and a maximum at $4p_1p_2$. This is the smallest energy squared at which a pair of particles of mass $m$ will be produced at rest, where $m^2=p_1p_2$.
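A quick numerical sketch (my own addition; $c=1$ units, with arbitrary photon momenta) confirming that the invariant $s = E^2 - \vec p^{\,2}$ of the summed four-momenta matches $2p_1p_2(1-\cos\theta)$:

```python
import numpy as np

def com_energy_sq(p1, p2, theta):
    """s = (P1 + P2)^2 for the two photon four-momenta from the question."""
    P1 = np.array([p1, p1 * np.cos(theta), p1 * np.sin(theta), 0.0])
    P2 = np.array([p2, p2, 0.0, 0.0])
    tot = P1 + P2
    return tot[0]**2 - float(np.dot(tot[1:], tot[1:]))   # E^2 - |p|^2

p1, p2 = 50.0, 0.2                 # arbitrary photon momenta
theta = 2.0 * np.pi / 3.0
s = com_energy_sq(p1, p2, theta)
s_closed = 2.0 * p1 * p2 * (1.0 - np.cos(theta))   # the closed form above
```

At $\theta = 0$ (parallel photons) $s$ vanishes, so no centre-of-momentum energy is available, as expected.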
{ "domain": "physics.stackexchange", "id": 76094, "tags": "special-relativity, particle-physics, photons, mass-energy, pair-production" }
Sum of all distances in connected DAG in $O(n\log n)$
Question: I have a DAG with $n$ nodes and $n-1$ edges. The edges in the DAG (fixed) are defined as follows: every node $i$, $1 \le i \le n-1$, is connected to node $i+1$. The lengths of the $n-1$ edges are given as input. I need to compute the sum of all of the distances between all pairs of different vertices, i.e., to compute $\sum_{u<v} d(u,v)$, which is the sum of the distances for each pair $u,v$ such that $u<v$. For example, suppose $n=4$, and the edge lengths are $1,1,1$. Then the pairs along with their distance would be $(1,2):1,(1,3):2,(1,4):3,(2,3):1,(2,4):2,(3,4):1$, so the total sum of distances would be $1+2+3+1+2+1=10$. I tried Dijkstra's Algorithm (time complexity $O(n\log n)$) on every node, but the overall time complexity is $O(n^2 \log n)$, which is inefficient. Is there a way to reduce it to $O(n \log n)$ or less? Answer: I think the question is not clear enough, but what I understood is that we want to find the sum of all the shortest paths from $u$ to $v$ such that $u < v$. The trick is that the graph is actually a directed chain, so we can represent the edges in an array $A$ of length $n - 1$. The brute force formula is $\sum_{i=1}^{n -1} \sum_{j=i + 1}^{n} \delta(i,j)$, where $\delta(i,j)$ is the shortest path from $i$ to $j$; because our graph is a chain, we can simply express this as $\delta(i,j) = \sum_{k=i}^{j - 1} A_{k}$. In the example you provided with 4 nodes, $A =[1,1,1]$. The above formulas have a complexity of $\Theta(n^3)$, which can be improved to $\Theta(n^2)$ by doing a prefix sum on $A$. But a pattern can easily be detected by looking carefully at where the items of $A$ repeat. For example, if $A = [1,2,3]$ the above formulas would produce the following operations: $(1 + (1+2) + (1+2+3)) + (2 + (2+3)) + (3)$, where each "big parenthesis" is the computation from each node. The last node is obviously ignored.
We can rewrite this as: $3 *(3 * 1) + 2 *(2 * 2) + 1 * (1 * 3)$. For example, $3 *(3 * 1)$ means that $3$ was repeated once ($1$) in each of $3$ parentheses. Likewise, $2 *(2 * 2)$ means that $2$ is repeated two times in each of $2$ parentheses, and so on. The repetitions have a pattern: the number of "big parentheses" an element appears in is actually the index of the element, while the number of times it is repeated inside each parenthesis decreases as the index increases, because fewer paths start from nodes that are closer to the end of the chain. Work through the example and you will find the pattern; make sure you understand this paragraph. As you mentioned that you have not found the pattern, the above discussion can be summarized in this formula: $\sum_{i=1}^{n - 1} A_{i} * i * (n - i)$. Remember that $n$ is equal to the number of nodes and $A$ contains the edges, which are $n - 1$. The proof of the above intuition is left as an exercise for the reader :-). EDIT: Just in case, the complexity is $\Theta(n)$ because we only do a loop to compute the summation and the inner operations are $\Theta(1)$. Depending on how you represent the graph in the input, the memory complexity can be either $\Theta(1)$ or $\Theta(n)$.
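A minimal sketch of the closed form, checked against a brute-force prefix-sum version (Python; the function names are mine, not from the question):

```python
def sum_pair_distances(edges):
    """Sum of d(u, v) over all pairs u < v on the chain.

    edges[i] is the length of the edge between nodes i+1 and i+2,
    so len(edges) == n - 1.  Closed form: sum A_i * i * (n - i),
    which is Theta(n).
    """
    n = len(edges) + 1
    return sum(a * i * (n - i) for i, a in enumerate(edges, start=1))


def sum_pair_distances_brute(edges):
    """Theta(n^2) check using prefix sums: d(u, v) = prefix[v] - prefix[u]."""
    prefix = [0]
    for a in edges:
        prefix.append(prefix[-1] + a)
    n = len(edges) + 1
    return sum(prefix[v] - prefix[u]
               for u in range(n) for v in range(u + 1, n))
```

For the example in the question, `sum_pair_distances([1, 1, 1])` gives 10, and for `[1, 2, 3]` both versions give 20, matching the worked operations above.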
{ "domain": "cs.stackexchange", "id": 8553, "tags": "algorithms, graphs, shortest-path" }
How do you apply a filter after DFT on an image?
Question: Let's say the size of the image is 100 x 100 and the kernel matrix is 5x5. I took the DFT of both the image and the kernel. But how do I multiply these two matrices? And which parts are involved in this multiplication: real, imaginary or magnitude? Thanks. Is this what I am supposed to do? And at what locations do I put 0s to make the kernel 100x100? for i to 102 for j to 102 { newReal = realImage * realKernel - imaginaryImage * imaginaryKernel newImaginary = realImage * imaginaryKernel + imaginaryImage * realKernel } and do InverseFourierTransform(newReal, newImaginary) Answer: Based on your C++ snippet, the correct way to do complex multiplication between samples of DFT sequences is as follows: $$ Z[k] = X[k]\cdot Y[k] = (a + jb)(c+jd) = (ac-bd) + j (ad+bc) $$ where the real part of the result is $ac-bd$, and the imaginary part is $ad+bc$. Then you can take the inverse 2D-DFT. Note that, for an alias-free circular convolution implementation, your DFT length should be at least $100 + (5-1)/2 = 102 \times 102$ points. Then after the inverse DFT of $102 \times 102$ points, discard the first $2$ rows and columns and retain the remaining $100 \times 100$ part, which contains the samples of the filtered image.
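A small sketch of the componentwise rule above (Python/NumPy; the helper name is mine, and NumPy's native complex arithmetic is used only to cross-check the hand-rolled real/imaginary version):

```python
import numpy as np

def complex_multiply(real_x, imag_x, real_y, imag_y):
    """Pointwise product of two DFTs stored as separate real/imaginary
    arrays: (a + jb)(c + jd) = (ac - bd) + j(ad + bc)."""
    real_z = real_x * real_y - imag_x * imag_y
    imag_z = real_x * imag_y + imag_x * real_y
    return real_z, imag_z

# Cross-check against NumPy's built-in complex arithmetic on random data.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
y = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
real_z, imag_z = complex_multiply(x.real, x.imag, y.real, y.imag)
assert np.allclose(real_z + 1j * imag_z, x * y)
```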
{ "domain": "dsp.stackexchange", "id": 6921, "tags": "image-processing, fourier-transform, convolution" }
Is one of these solved problems incorrect?
Question: I am currently learning about the Lorentz transformations. So I compared two solved problems from two different textbooks in order to see how the Lorentz transformations are applied. I got confused because I believe the two problems imply two different conclusions to what seems to be two analogous problems. Let's look at the first one. $\textbf{First Problem:}$ The following problem is taken from Hugh Young and Roger Freedman's University Physics with Modern Physics, 13th edition, example 37.6: NOTE: I'm only looking at the length variables for now, as the next problem only looks at the length variable. As expected, we obtain the locations of events 1 and 2 via the inverse Lorentz transformation. $$x_1 = \gamma(x'_1 +vt'_1) = \frac{0 +0.600c(0)}{\sqrt{1 -0.600^2}}\text{ m} = 0$$ $$x_2 = \gamma(x'_2 +vt'_2) = \frac{-300 +0.600c(0)}{\sqrt{1 -0.600^2}}\text{ m} = -375\text{ m}$$ $\textbf{Second Problem:}$ The following problem is taken from the online version of Openstax's University Physics Volume 3, taken on 07/OCT/2022, example 5.7 (Can be accessed here: https://openstax.org/books/university-physics-volume-3/pages/5-5-the-lorentz-transformation): Here, the second problem is, to my understanding, analogous to the first one. However, what I see in the second problem is a length contraction, whereas in the first problem, I see a length dilation. So one of them must be wrong or at least erroneous. I suspect it is the second one. Here, I take that the surveyor takes two measurements, $(x_1, t_1), (x_2, t_2)$, such that $\Delta x = x_2 -x_1 = 100\text{ m}$. 
I also noticed that the problem uses the following equation: $$x_2 -x_1 = \frac{x'_2 +vt}{\sqrt{1 -v^2/c^2}} -\frac{x'_1 +vt}{\sqrt{1 -v^2/c^2}}$$ Which implies $$x_1 = \frac{x'_1 +vt}{\sqrt{1 -v^2/c^2}}$$ and $$x_2 = \frac{x'_2 +vt}{\sqrt{1 -v^2/c^2}}$$ From the Lorentz transformations we see that: $$x_1 = \frac{x'_1 +vt'_1}{\sqrt{1 -v^2/c^2}}$$ and that $$x_2 = \frac{x'_2 +vt'_2}{\sqrt{1 -v^2/c^2}}$$ The equations above assume that $vt = vt'_2 = vt'_1$ and therefore $t'_2 = t'_1$, which, I believe, is not always the case, as $t'_2$ and $t'_1$ depend on $t_1$ and $t_2$, respectively. Moreover, the problem does not give the assumption that $t'_2 = t'_1$ or any data regarding $t_1$ and $t_2$. Is it fair to say that solved problem 2 is erroneous, since we do not have enough observations/data about time in frame S to make inferences about space in frame S'? (In relativity, time and space are not independent of each other, after all.) $\textbf{A question:}$ This leads me to another thought. Let's go back to the first solved problem. Although it is what I expected given my understanding of the Lorentz transformations, from Stanley's perspective, the spaceship is travelling at a speed of 0.600c. If my understanding of length contraction is correct, shouldn't Stanley observe the length of the ship to be $300\sqrt{1 -0.6^2}\text{ m} = 240\text{ m}$? Because this seems to be a length dilation. I wonder if anyone can give me an intuition regarding this transformation? Answer: Both problems are correct. The issue is that they're not talking about the same thing. In particular, in the first problem, the pair of events considered is the same between the two frames; while in the second problem, it's a different pair of events in each frame. In the first problem, it is hopefully evident that there are only two events being considered: the front of the spaceship crossing the finish line (Event $A$), and the message being emitted from the back of the spaceship (Event $B$). 
The Lorentz transformation is then being applied to find the coordinates of these events in one frame given their coordinates in the other frame. But there's an important difference in the second problem. Whenever we say "the length of an object in frame $S$", we mean "the distance between two simultaneous events on the worldlines of the object's ends." Suppose that in frame $S$, events $A$ and $B$ (on the worldlines of the object's ends) are simultaneous. The length as measured in frame $S$ will then be the spatial distance between $A$ and $B$ (in that frame.) But in a different frame $S'$, event $B$ is not simultaneous with $A$! Instead, there is another event $B'$, on the same worldline as $B$, that is simultaneous with $A$ in the frame $S'$. And in the frame $S'$, the length of the object is the spatial separation between $A$ and $B'$, not $A$ and $B$. Since we're not considering the same pair of events in both frames, there is no reason to believe that the two spatial separations will be related in the same way they were in the first problem; and indeed, we do not get the same result. As a general rule, the way to avoid "paradoxes" in special relativity is to carefully phrase everything in terms of "events". In the above example, carefully defining the idea of "length" in terms of simultaneous events makes it clear that the events in question must be different in different frames.
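The numbers in the first problem are easy to verify numerically; a quick sketch (the helper name and unit conventions are my own — positions in meters, $t'$ given as $ct'$ in meters so only the dimensionless $\beta = v/c$ appears):

```python
import math

def inverse_lorentz_x(x_prime, t_prime, beta):
    """x = gamma * (x' + v t'), with v = beta * c, x' in meters,
    and t' supplied as c*t' in meters so the units cancel."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * (x_prime + beta * t_prime)

# Event 1 (front crosses the finish line) and Event 2 (message emitted),
# both at t' = 0 in the ship frame, with v = 0.600c:
x1 = inverse_lorentz_x(0.0, 0.0, 0.600)      # 0 m
x2 = inverse_lorentz_x(-300.0, 0.0, 0.600)   # -375 m, as in the text

# Stanley's length-contracted measurement of the 300 m proper length:
contracted = 300.0 * math.sqrt(1.0 - 0.600 ** 2)  # 240 m
```

The $375$ m and $240$ m answers coexist precisely because, as explained above, they refer to different pairs of events.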
{ "domain": "physics.stackexchange", "id": 91274, "tags": "homework-and-exercises, special-relativity, inertial-frames, observers" }
cylinder of milk: time required to reach given temperature
Question: This is a very practical question, I've looked at Wikipedia's heat equation page but it is too complicated. I am in the middle of making some yogurt but I've broken my thermometer. The temperature of the milk (post-stirring) was approximately 145 degrees Fahrenheit at 16:30. The milk is contained in a pot/cylinder of 9" diameter and 4" high (1 gallon of milk before evaporation). The air temperature is 73 degrees Fahrenheit. How long till the temperature reaches 110 degrees Fahrenheit? I'll accept an actual time but would prefer an annotated simple (non-differential) equation pre-solved for a cylindrical shape including a reference to milk's heat capacity. I can't believe how much this sounds like one of those inane homework questions, but I need an answer soon (before 110 degrees). Answer: There is no quick and easy way to calculate this and certainly not before your yoghurt reaches $110\:\mathrm{F}$ (that's the most unusual 'ultimatum' I've come across on this site, ever!) Nor is it possible to derive the expressions for that temperature (temperature v. time function) without using differential equations. We can distinguish two main cases: 1. Temperature inside the pot is uniform: For stirred liquids that's a reasonable assumption. We can then derive a cooling curve by simple application of Newton's cooling law. But a differential equation is needed for that too. The cooling curve takes on the general form: $$\Large{T_2=T_{\infty}+(T_1-T_{\infty})e^{-\frac{hA}{mc_p}t}}$$ Where $T_1$ is the initial temperature, $T_2$ the temperature after an amount of time $t$ has elapsed, $T_{\infty}$ the ambient temperature, $h$ the convection heat transfer coefficient, $A$ the total surface area of the object, $m$ its mass and $c_p$ the specific heat capacity of the object's material. 2. 
There are radial temperature gradients: For non-stirred liquids or solids we know the temperature will vary from the outermost layer to the core: $$\frac{\partial T}{\partial r} < 0$$ (For cooling). The consequence of this temperature gradient is that, all other things being equal, this second mode of cooling is slower than the first case because core heat has to conduct (diffuse) through the various layers between core and surface. In this case Fourier's heat equation for a cylinder can be used. That's a second order, linear, partial differential equation in $r$, $T$ and $t$ ($T(r,t)$). This excellent paper should give an idea of the mathematical complexity involved in solving this equation. Most engineers/physicists would seek practical, numerical solutions using Wolfram's NDSolve feature or similar.
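For case 1 (uniform temperature), the cooling curve above can be inverted for the elapsed time. The sketch below uses entirely guessed values for $h$, $A$, $m$ and $c_p$ — the real convection coefficient in particular would have to be measured — so treat the resulting time as an order-of-magnitude estimate only:

```python
import math

def cooling_time(t1, t2, t_inf, h, area, mass, cp):
    """Invert T2 = T_inf + (T1 - T_inf) * exp(-h*A/(m*cp) * t) for t.
    Works in any temperature scale, since only differences enter."""
    return -(mass * cp) / (h * area) * math.log((t2 - t_inf) / (t1 - t_inf))

def temperature_at(t, t1, t_inf, h, area, mass, cp):
    """Forward form of the same cooling curve."""
    return t_inf + (t1 - t_inf) * math.exp(-(h * area) / (mass * cp) * t)

# Guessed constants: h ~ 10 W/(m^2 K) for still air, pot surface ~ 0.2 m^2,
# ~3.8 kg of milk with cp ~ 3900 J/(kg K).  Temperatures stay in deg F,
# which is fine because only temperature differences appear in the law.
t_seconds = cooling_time(145.0, 110.0, 73.0, h=10.0, area=0.2,
                         mass=3.8, cp=3900.0)
# Roughly 5000 s (~80 min) with these guesses -- order of magnitude only.
```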
{ "domain": "physics.stackexchange", "id": 32773, "tags": "thermodynamics" }
How to prove formally that grammar isn't LR(1)
Question: I want to prove that the grammar $$ \begin{cases} S'\rightarrow S\\ S\rightarrow aSb ~|~ A\\ A\rightarrow bA~|~b \end{cases} $$ isn't $LR(1)$. I've constructed the parser table and got a Shift-Reduce conflict. I want to prove that without the parser table, using another $LR(1)$ definition. Here's the definition: A grammar is $LR(1)$ if from $S' \Rightarrow^*_r uAw \Rightarrow_r uvw$ $S' \Rightarrow^*_r zBx \Rightarrow_r uvy$ $FIRST(w) = FIRST(y)$ it follows that $uAy=zBx.$ So how can I prove that? Answer: $$S'\Rightarrow^*\underbrace{ab}_uA\underbrace{b}_w\Rightarrow \underbrace{ab}_u\underbrace{b}_v\underbrace{b}_w$$ $$S'\Rightarrow^*\underbrace{abbbb}_zA\underbrace{b}_x\Rightarrow \underbrace{ab}_u\underbrace{b}_v\underbrace{bbbb}_y$$ $$FIRST(w)=FIRST(y)=b$$ But: $$abAbbbb=uAy\neq zBx=abbbbAb$$
{ "domain": "cs.stackexchange", "id": 1475, "tags": "formal-grammars, parsers" }
Black hole as a gravitational spherical shell. Why not?
Question: I think that Leonard Susskind's holography, George Chapline's "dark energy star," Emil Mottola and Pawel Mazur's "Gravastar," Polchinski's "firewall," and the recent ideas of nonsingular black holes clearly suggest the possibility of understanding this phenomenon as a massive spherical shell with an asymptotically thin wall. I think that the whole mass of the black hole can be located at the surface that today we call the event horizon. What could prevent the collapse of this shell is the hypothesis that gravity has a limit of intensity. This limit would be reached only at the event horizon. I imagine that the intensity of gravity should not be infinite. If this is possible, then black holes have no content, because inside them there would be no gravitational field, no space, no time. A place that does not really exist. A contour region of our universe. Can a black hole be a spherical shell? Answer: The problem with this model of the gravitational field (a problem that was first noticed by Einstein) is that something needs to keep the mass shell from collapsing in upon itself. The simplest way to try to do this is to suppose that the mass shell is really made up of many bodies in circular orbits around the center of mass. This works fine, so long as the radius of the shell is larger than the Schwarzschild radius $R_{S}=\frac{2GM}{c^{2}}$ (the radius of the event horizon); however, as the radius approaches $R_{S}$, the orbital speed of particles approaches $c$, which is impossible. (If you try to make the shell a solid, you run into a similar problem with the speed of acoustic waves that can propagate along the solid shell.) Einstein concluded, on the basis of this kind of calculation, that black holes were not possible. However, that is not quite correct. What is not possible is for there to be a static black hole (like the mass shell model). 
There is no timelike Killing vector in Schwarzschild spacetime, because at the event horizon, the variable $t$ changes from timelike to spacelike. (And $r$ becomes timelike; this represents the fact that if you are falling into the black hole, a location at smaller $r$ must lie in your future.)
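For a sense of scale, the Schwarzschild radius formula quoted above is easy to evaluate; a quick sketch with standard SI constants (rounded values):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # one solar mass, kg

def schwarzschild_radius(mass_kg):
    """R_s = 2 G M / c^2, in meters."""
    return 2.0 * G * mass_kg / C ** 2

r_sun = schwarzschild_radius(M_SUN)   # about 3 km for one solar mass
```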
{ "domain": "physics.stackexchange", "id": 89046, "tags": "black-holes, event-horizon, singularities, black-hole-firewall" }
Is the M81 group bound to the Local group?
Question: Andromeda and the Milky Way are set to collide in 4bn years. Will the Local Group collide with M81, and what about further away groups? If so, when? Answer: Interestingly enough, although we tend to group galaxy clusters together into superclusters, superclusters are actually not gravitationally bound. This suggests that, due to Hubble flow, galaxies in separate clusters will never "collide". In fact, it may be impossible to ever leave one's galaxy cluster. EDIT: In a previous version of this post, I stated that the Great Attractor may be reason to suspect a gravitational binding of Laniakea. In reality, the Great Attractor only reduces the expected relative velocities of these galaxies away from one another by between $200-400$ $\mathrm{km}\cdot\mathrm{s}^{-1}$ (I would like to thank eshaya for pointing out my error here). Due to the current expansion rate of the universe of $67.6^{+0.7}_{-0.6}$ $\mathrm{km}\cdot\mathrm{s}^{-1}\cdot\mathrm{Mpc}^{-1}$, some galaxies in Laniakea have recession velocities of up to $30000$ $\mathrm{km}\cdot\mathrm{s}^{-1}$, as given by Tully et al. (2014), far greater than could ever hope to be counteracted by the Great Attractor. The question now becomes one of whether the M81 Group is gravitationally bound to the Local Group. Given the M81 Group's status as being one of the closest to the Local Group, at only $3.6$ $\mathrm{Mpc}$ distant, I would not be entirely surprised if through some chance it managed to be. Given the current estimate for Hubble's constant, due to Hubble flow the M81 Group would be expected to recede at $\sim243.4$ $\mathrm{km}\cdot\mathrm{s}^{-1}$. This is what would need to be countered through gravitational effects for the two groups to be bound, and thus eventually coalesce.
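The $\sim243.4$ $\mathrm{km}\cdot\mathrm{s}^{-1}$ figure is just Hubble's law, $v = H_0 d$; a one-line sketch (function name mine):

```python
def recession_velocity(distance_mpc, h0=67.6):
    """Hubble's law v = H0 * d, in km/s for d in Mpc and H0 in km/s/Mpc."""
    return h0 * distance_mpc

v_m81 = recession_velocity(3.6)   # ~243.4 km/s for the M81 Group
```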
{ "domain": "astronomy.stackexchange", "id": 1958, "tags": "galaxy, cosmological-inflation, local-group" }
Take element of a Set collection depending on the value of another element
Question: I had to populate the fields of a Service object using a Set (HashSet) with any configuration parameters -> ConfigurationMap (Class with 2 attributes: key and value). The problem is that the attribute quantity of Service object can be set from two distinct parameters ("service_quantity" or "publications_amount") based on the value of another parameter("type"). I've gotten it to work with the code below, but I think it's pretty messy and inefficient. Is there any way to make it more readable considering that the methods signatures are invariable? @Override protected void populateServiceConfigurationData(Service theService, Set<ConfigurationMap> configurationParams) { boolean publishService = false; for (ConfigurationMap configParam : configurationParams) { String paramValue = configParam.getValue(); String paramKey = configParam.getKey(); setFieldValue(theService, paramKey, paramValue); if("type".equals(paramKey) && paramValue.equals("20")) { publishService = true; } } if(publishService) { for (ConfigurationMap configParam : configurationParams) { String paramKey = configParam.getKey(); if(paramKey.equals("publications_amount")) { String paramValue = configParam.getValue(); setFieldValue(theService, paramKey, paramValue); } } } } @Override public void setFieldValue(Service service, String paramKey, String paramValue) { if ("type".equals(paramKey)) { service.setType(Integer.valueOf(paramValue)); } if ("service_quantity".equals(paramKey) || "publications_amount".equals(paramKey)) { service.setQuantity(Integer.valueOf(paramValue)); } if ("gateway_id".equals(paramKey)) { service.setGatewayId(paramValue); } if ("comissions".equals(paramKey)) { service.setCommissions(paramValue); } } Answer: I think it's pretty messy and inefficient Let's say that it is getting messy, especially if the number of parameters (and hidden rules) keeps growing. About being inefficient, I am not sure. 
Even if the input is very large, the method populateServiceConfigurationData runs in $O(N)$, so I don't think it will be a big problem. My suggestions: Readability: setting the quantity of the service depends on more than one parameter. So the "rule" to set quantity would be more evident if included in a single function, like setServiceQuantity. Performance: the input set can be converted to a map, to easily access the parameters. The performance gain is minimal, but it should help to make the code clearer. Design: using setFieldValue in populateServiceConfigurationData seems unnecessary. The method populateServiceConfigurationData can be completely independent of setFieldValue. More on this later. public void populateServiceConfigurationData(Service service, Set<ConfigurationMap> configurationParams) { // Convert the input set to a map Map<String, String> params = configurationParams.stream() .collect(Collectors.toMap(ConfigurationMap::getKey, ConfigurationMap::getValue)); if (params.containsKey("type")) { service.setType(Integer.valueOf(params.get("type"))); } if (params.containsKey("gateway_id")) { service.setGatewayId(params.get("gateway_id")); } if (params.containsKey("comissions")) { service.setCommissions(params.get("comissions")); } setServiceQuantity(service, params); } private void setServiceQuantity(Service service, Map<String, String> params){ boolean isPublishService = params.get("type").equals("20"); String quantity = isPublishService ? params.get("publications_amount") : params.get("service_quantity"); service.setQuantity(Integer.valueOf(quantity)); } (Note: not tested, it's just to give an idea) Now the logic for setting each property of the service is clear and can be easily extracted into functions like setServiceQuantity. One issue is that populateServiceConfigurationData needs to know how to set the properties into the service (for example, parsing type to Integer). 
If you don't like this approach, the setters can be replaced with setFieldValue (as before) or let service do the parsing in its own methods. The method setFieldValue looks fine, the only thing I can suggest is to use else if since only one condition will match per invocation.
{ "domain": "codereview.stackexchange", "id": 40883, "tags": "java, collections, set" }
Lower bound of a summation with an exponential
Question: For the following (related to a binary tree complexity question): $$f(n) = \sum_{h=0}^{\lg{}n} h2^h$$ Is there any way to express this only in terms of $n$? Or approximate it? Put another way, I figure at worst for an upper bound, we could guess at it in the following way: $f(n) = 0 + (1 \cdot 2) + (2 \cdot 2^2) + (3 \cdot 2^3) + ... + (\lg{}n \cdot 2^{\lg{}n})$ This means since we go from $0$ to $\lg{}n$, and the largest term above when reduced is $n\lg{}n$, I could write $\mathcal{O}(n\lg^2{}n)$. Supposedly $f(n)$ reduces to $\Theta(n\lg{}n)$ or better (by better I mean closer to $\Theta(1)$) and I'm struggling to show that... and I don't know if it's even possible. I can't prove it though, so I can't claim it's not; reducing the summation, if possible, would hopefully lead to a quick answer. Answer: There is a general formula for this sum: $$ \sum_{h=0}^m h2^h = \sum_{h=1}^m \sum_{k=1}^h 2^h = \sum_{k=1}^m \sum_{h=k}^m 2^h = \sum_{k=1}^m (2^{m+1}-2^k) = m2^{m+1} - (2^{m+1}-2). $$ Overall, we get $$ \sum_{h=0}^m h2^h = (m-1)2^{m+1} + 2. $$ When $m = \lg n$, this works out to be $2n\lg n - 2n + 2$.
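The closed form can be checked mechanically against the raw summation (a quick sketch; function names are mine):

```python
def direct_sum(m):
    """sum_{h=0}^{m} h * 2^h, evaluated term by term."""
    return sum(h * 2 ** h for h in range(m + 1))

def closed_form(m):
    """(m - 1) * 2^(m + 1) + 2, as derived above."""
    return (m - 1) * 2 ** (m + 1) + 2

assert all(direct_sum(m) == closed_form(m) for m in range(20))

# With n = 2^m (so m = lg n) this equals 2*n*lg(n) - 2*n + 2:
n, m = 1024, 10
assert closed_form(m) == 2 * n * m - 2 * n + 2
```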
{ "domain": "cs.stackexchange", "id": 7335, "tags": "time-complexity, asymptotics, binary-trees" }
Meaning of the direction of the cross product
Question: I was doing calculations with torque and then I came across something very confusing: I understand that the magnitude of the torque is given by the product of the displacement (from the center of rotation) at a point and the force applied at that point. This magnitude, perhaps, says something about how "powerful" the rotation is. The question that confuses me, however, is what does the direction of the cross product tell? The direction is perpendicular both to the force and the displacement. What really is this the direction of? Answer: When studying angular things - torque, angular velocity, angular momentum, etc. - physicists do a clever thing to avoid having to describe curves. You see, you might be tempted to draw a curved arrow for a torque, indicating that you are twisting something around in a circular-ish way. But then when you try to add two such arrows together, all of a sudden you realize your notation no longer has a natural, intuitive meaning. Instead, we draw the arrow pointing perpendicular to the plane of the curve you are tempted to draw. More precisely in the case of torque, perpendicular to the plane defined by the radial vector and the force vector. Note that this uniquely defines what plane your curved arrow must reside in, and, given the right-hand rule, clears up the ambiguity as to which way your curved arrow should point (if your right-hand fingers curl in the direction of the curved arrow you want to draw, your thumb points in the direction of the straight arrow you should draw instead). It is then a simple matter to encode the magnitude of the torque/angular velocity/whatever in the length of this vector. The benefit is that you end up with straight arrows describing everything, and they add exactly as your torques should add - you have a genuine vector space, and are free to abstract away from all diagrams. And it is not even terribly counterintuitive - the torque vector is parallel to the axis around which you are applying torque. 
If you think about it long enough, you should be able to convince yourself that if you had to choose a single direction to define things, this is the least ambiguous.
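A quick numerical illustration of the convention (NumPy sketch; the specific vectors are my own example, not from the answer): a force along $+y$ applied at a point on the $+x$ axis gives a torque along $+z$, i.e. perpendicular to the plane of $\vec r$ and $\vec F$, with magnitude $|\vec r||\vec F|\sin\theta$.

```python
import numpy as np

r = np.array([1.0, 0.0, 0.0])   # lever arm along +x (m)
f = np.array([0.0, 2.0, 0.0])   # force along +y (N)

torque = np.cross(r, f)          # right-hand rule gives +z: [0, 0, 2]

# Here theta = 90 degrees, so |torque| = |r| * |F| exactly.
```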
{ "domain": "physics.stackexchange", "id": 18817, "tags": "vectors, torque" }
What is the biochemistry of love?
Question: How is love induced between humans? Say, between mother and child, couples, etc. Does the phenomenon of love exist in other mammals, too? Answer: "Love" is a subjective phenomenon that can't really be applied to non-human animals, because we can't ask them about their subjective experiences. However, there is some clear evidence, especially in more social mammals, of caring behavior and social bonds with other conspecifics. As far as a biological basis goes, I suppose you could consider it biochemistry: the peptide hormone oxytocin is correlated with the formation of social bonds between animals of many species (although Wikipedia isn't the greatest source, you will find this particular page is saturated with links to scientific literature). I would caution against the popular press treatment of oxytocin as "the love hormone" and caution against simplifying a complex set of behaviors to the action of one peptide, but certainly the evolutionary conservation and broad applicability suggest that this is indeed an important hormone in social relationships.
{ "domain": "biology.stackexchange", "id": 6665, "tags": "biochemistry, zoology, neurophysiology, human-physiology, mammals" }
What planets or exoplanets orbit the Sun’s elder twin HIP 102152?
Question: Has anyone done any research on the planets that orbit HIP 102152? Since that star is similar to ours and older, I postulate that its planets are likely among the most worthwhile targets to search for advanced civilizations. Answer: No planets have been detected in orbit around HIP 102152. That does not mean that no planets exist, but that our current techniques are not able to detect them. Most planets are detected by the transit method. This observes the very small dip in light when a planet goes in front of the star. However, if the planet's orbit doesn't line up exactly with Earth, then the planet will not be detected. Other methods can detect large planets that orbit close to the star, or very large planets in orbit very far from the star. Solar systems like ours are harder to detect. Given what we know about the abundance of planets, it is likely that HIP 102152 has a planetary system, but actual detection might not be possible with current technology. Moreover, there is no obvious reason to think that life or intelligent life is particularly likely just because the star is similar to the Sun. The nature of the star may rule out intelligent life. But as far as we know, life just needs a reasonably stable energy source and a lot of luck. It doesn't need a solar twin.
{ "domain": "astronomy.stackexchange", "id": 4315, "tags": "star, exoplanet, extra-terrestrial" }
Bash Shell Script uses Sed to create and insert multiple lines after a particular line in an existing file
Question: This code seems to work, but I'd like someone with shell-fu to review it. Is the temp file a good idea? How would I do without? Is sed the right tool? Any other general advice for improving my shell script Script/Code to Review: # Grab max field lengths from each .hbm.xml file and # put them into the corresponding .java file for myFile in $(find generatedSchema/myApp/db/ -name *.hbm.xml) do # Calculate the java file name javaFile=${myFile/%\.hbm\.xml/.java} echo $javaFile # Find each field name and length and format it for the java file. # Save result lines to temp.txt sed -n '/<property name="[^"]\+" type="string">/{ N s/ \+<property name="\([^"]\+\)" type="string">\n \+<column name="[^"]\+" length="\([0-9]\+\)".*/ @SuppressWarnings({"UnusedDeclaration"}) public static final int MAX_\1 = \2;/p}' $myFile >temp.txt # Load results from temp file into a shell variable str="" while read line do echo $line str="$str\n $line" done < temp.txt # Put new lines after the serialVersinUID line in the existing Java file. sed -i -e "s/^ private static final long serialVersionUID = [0-9]\+L;$/\0\n$str/" $javaFile # clean up rm temp.txt done Existing Input .hbm.xml file excerpt: <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping> <class name="com.myCo.CodeReview" table="code_review" catalog="stackexchange"> <!-- ... --> <property name="address1C" type="string"> <column name="address1_c" length="50" /> </property> <property name="address2C" type="string"> <column name="address2_c" length="50" /> </property> <!-- etc. --> Existing Input Java File: // ... preserve java code above private static final long serialVersionUID = 20130318164201L; // ... preserve java code below Desired Output Java File: // ... 
preserved java code above private static final long serialVersionUID = 20130318164201L; @SuppressWarnings({"UnusedDeclaration"}) public static final int MAX_address1C = 50; @SuppressWarnings({"UnusedDeclaration"}) public static final int MAX_address2C = 50; // ... preserved java code below Background This script generates Java objects that represent database tables. It modifies the output of the Hibernate Database Reverse-Engineering tools to add maximum field length tokens to my Java objects that match the length of those columns in the database. Answer: 1. Comments on your code Your example input file is not a well-formed XML document. An XML document must have a single root element, but your example has two. What tool did you use to generate this file? You should ask yourself how it went wrong. (If you're just writing it "by hand" in some other program using a bunch of print statements, you might ask yourself why you are using XML at all rather than something simple like CSV. You clearly can't be intending to process these file using standard XML tools, since most XML tools will refuse to process ill-formed documents.) In the find command: find generatedSchema/myApp/db/ -name *.hbm.xml the pattern *.hbm.xml would be expanded by the shell (rather than the find command) if there happened to be any files matching the pattern in the directory where this command is being run. So single-quote the pattern to make sure this can't happen: find generatedSchema/myApp/db/ -name '*.hbm.xml' You might also consider adding the argument -type f to ensure that the names returned all refer to regular files. (It's not very likely in your case that someone will name a directory foo.hbm.xml, but adding a -type argument to a find command is often good practice.) In this line you don't need to escape the dots: javaFile=${myFile/%\.hbm\.xml/.java} The pattern in ${VAR/pat/rep} here is not a regular expression, so dots are not special. 
For example: $ FOO='a.b.c' $ echo ${FOO/./|} a|b.c You use regular expressions to transform your XML into Java source. Now, as I'm sure you're aware, you can't reliably process XML using regular expressions. Perhaps what you've got here works because you control the format of these XML files. But really you should be using an XML-aware tool. See section 2 below. You use a temporary file to store the output of your sed command, and then load it into a variable using a loop over its lines. You don't need this loop, because you can load the contents of a file into a variable by writing: VARIABLE=$(<filename) And you don't need the temporary file either, because you can load the output of a command into a variable by writing VARIABLE=$(command) So just write str=$(sed -n "...") You could choose better variable names than myFile (why yours?) and str (what's in the string?). In your last sed command, you use the expression s/pattern/\0\n$str/ to append $str after every match of pattern. You might consider using p;c instead so that you can simplify the pattern (it doesn't have to match the whole line any more): sed -i -e "/^private static final long serialVersionUID/{p;c\ $str }" You expand variables myFile and javaFile in your script in contexts like this: sed -i -e "..." $javaFile This will go wrong if the variable javaFile contains spaces. You should double-quote the variable to ensure that it's treated as a single argument to sed even if it contains spaces. (Perhaps you know in this case that it can't contain spaces, but double-quoting variables containing filenames is still a good habit to get into.) sed -i -e "..." "$javaFile" I don't like the way you modify the Java source code in-place. This would be dangerous if this is your original Java source code, because a bug in your script could overwrite the Java source code with nonsense and break your build. (But since I can't see the rest of your build system I can't tell if this concern is justified.) 
If the whole Java file is automatically generated, wouldn't it be better to do it in one go rather than piecemeal like this? I have a feeling that there are more improvements to be made to this code generation process. 2. Using a programming language with XML support If you need to process data from XML documents, then use a programming language together with an XML parsing library. For example, here's how you might rewrite this same script in Python. First, you'd change the way you generate your .hbm.xml files so that they are well-formed XML documents. Let's suppose that they now look like this: <properties> <property name="address1C" type="string"> <column name="address1_c" length="50" /> </property> <property name="address2C" type="string"> <column name="address2_c" length="50" /> </property> </properties> Then you might write the following Python program: #!/usr/bin/python import fileinput import sys import xml.etree.ElementTree def main(): # Check command-line argument. hbm_xml_filename = sys.argv[1] ext = '.hbm.xml' if not hbm_xml_filename.endswith(ext): raise RuntimeError("Filename {} doesn't end with {}" .format(hbm_xml_filename, ext)) # Load property names and maximum lengths from input file. tree = xml.etree.ElementTree.parse(hbm_xml_filename) properties = [] for property in tree.findall('property[@type="string"]'): column = property.find('column') properties.append(dict(name = property.attrib['name'], length = column.attrib['length'])) # Rewrite Java in-place to add "MAX_name = length;" declarations. 
java_filename = hbm_xml_filename.replace(ext, '.java') for line in fileinput.input(files=[java_filename], inplace=True): sys.stdout.write(line) if line.startswith('private static final long serialVersionUID'): sys.stdout.write('\n') for p in properties: sys.stdout.write( '@SuppressWarnings({{"UnusedDeclaration"}}) ' 'public static final int MAX_{name} = {length};\n' .format(**p)) if __name__ == '__main__': main() You'll see that I've used the xml.etree.ElementTree module to parse and query the XML, I've used an XPath query to find all properties with the attribute type="string", and I've used the fileinput module to update the Java source code in-place. Then you can run this Python program from your shell script: find generatedSchema/myApp/db/ -type f -name '*.hbm.xml' | while read property_file; do add_property_lengths.py "$property_file" done 3. Addendum You said you were having some difficulty getting VAR=$(command) to work. You should be able to verify that it works in general by testing it in the shell: $ NUMBERS=$(yes '' | head | nl -ba) $ echo $NUMBERS 1 2 3 4 5 6 7 8 9 10 Clearly we have different versions of sed (I'm using the BSD-ish version that comes with Mac OS X, so I had to specify -E for extended regular expressions and change the regular expression syntax a bit) but the following works for me: VAR=$(sed -E -n '/ *<property name="([^"]+)" type="string">/{ N s/ *<property name="([^"]+)" type="string">\n *<column name="[^"]+" length="([0-9]+)".*/ @SuppressWarnings({"UnusedDeclaration"}) public static final int MAX_\1 = \2;/ p }' "$myFile") so you should be able to figure out how to make something like it work for you (if you must do it this way).
{ "domain": "codereview.stackexchange", "id": 3563, "tags": "bash, shell, sed" }
Insertion sorting an int array
Question: I'd like to improve this Insertion sort code

```java
package com.arun.sort;

import java.util.Arrays;

public class InsertionSort {

    public static void main(String[] args) {
        int[] arr = { 9, 4, 6, 2, 1, 7 };
        insertionSort(arr);
        System.out.println("Elements after sorting :" + Arrays.toString(arr));
    }

    public static int[] insertionSort(int[] arr) {
        int value, hole;
        for (int i = 1; i < arr.length; i++) {
            value = arr[i];
            hole = i;
            while (hole > 0 && arr[hole - 1] > value) {
                arr[hole] = arr[hole - 1];
                hole--;
            }
            arr[hole] = value;
        }
        return arr;
    }
}
```

Answer: Step 1: extract to a method

In the current code you have a hardcoded array, and the main logic follows right after. It's hard to test this way. What if you want to see if the implementation works with a different set of numbers? You have to rewrite the array. Better to extract the main logic into its own, independent method:

```java
int[] sort(int[] arr) {
    // ...
}
```

Now you can test with multiple different inputs more easily:

```java
arr = new int[]{2, 5, 1, 8, 12, 3, 7};
InsertionSort.sort(arr);
System.out.println(Arrays.toString(arr));

arr = new int[]{4, 3, 2, 1, 2};
InsertionSort.sort(arr);
System.out.println(Arrays.toString(arr));
```

Step 2: convert print statements to proper unit tests

The problem with print statements is that every time you change something and rerun, you have to re-verify the output of each statement. Unit tests can automate the verification step, and converting is easy enough to do:

```java
private void sort(int[] arr) {
    InsertionSort.sort(arr);
}

@Test
public void testMixedValues() {
    int[] arr = {2, 5, 1, 8, 12, 3, 7};
    sort(arr);
    assertEquals("[1, 2, 3, 5, 7, 8, 12]", Arrays.toString(arr));
}

@Test
public void testDecreasingValues() {
    int[] arr = {4, 3, 2, 1};
    sort(arr);
    assertEquals("[1, 2, 3, 4]", Arrays.toString(arr));
}
```

Btw, I didn't type the expected strings in the assertEquals. I wrote the test cases first, with "" as the expected values, and ran the tests.
All the tests failed, of course, but the error messages told me the actual values that were different from the expected "". I verified that they are correct and copy-pasted the correct texts into the test cases. Now you can make changes and the test cases will flag an error if something breaks. Unless you do something really horrible, typically only a few of the test cases will break, and you don't need to reverify the others that are still working, which makes debugging a lot easier. Return void instead of int[] It might be better to return void instead of int[]. The problem with returning int[] is that the caller might wonder if the returned array is a new array or not. It is not a new array. For example in this code: int[] arr1 = { 9, 4, 6, 2, 1, 7 }; int[] arr2 = InsertionSort.sort(arr1); arr2[3] = -1; Both arr1 and arr2 will be modified, which might be counter-intuitive. The fact that java.util.Collections.sort() also returns void suggests that it's better that way. Minor things The method name insertionSort is redundant, because the word "insertion" is already in the class name. How about simply sort?
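The aliasing pitfall behind the "return void" advice is easy to demonstrate in any language. Here is a small Python transliteration (my sketch, not part of the Java code under review) showing why returning the same array you mutated can mislead the caller:

```python
def sort_in_place(arr):
    """Insertion sort that mutates arr and (misleadingly) also returns it."""
    for i in range(1, len(arr)):
        value, hole = arr[i], i
        # Shift larger elements right to open a hole for `value`.
        while hole > 0 and arr[hole - 1] > value:
            arr[hole] = arr[hole - 1]
            hole -= 1
        arr[hole] = value
    return arr  # same object, not a copy!

arr1 = [9, 4, 6, 2, 1, 7]
arr2 = sort_in_place(arr1)
arr2[3] = -1
# Both names refer to one list, so arr1 was modified too.
print(arr1 is arr2)  # True
```

Returning nothing (void in Java, `None` in Python) removes the temptation to treat the result as a fresh copy.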
{ "domain": "codereview.stackexchange", "id": 8822, "tags": "java, beginner, sorting, insertion-sort" }
About Born's rule
Question: I wanted to gain a better understanding of the Born rule to make my class on quantum mechanics feel less ad hoc. To do so I attempted to show that the version (1) given in my book is equivalent to the version (2) on Wikipedia. The version in my book: $P_x = \frac{\langle\psi_x\mid\psi_x\rangle}{\langle\psi\mid\psi\rangle}$ where $P_x$ is the probability of getting a state $\psi_x$ when measuring. The version on Wikipedia: the probability of measuring a given eigenvalue $\lambda_i$ will equal $\langle\psi\mid P_i\mid\psi\rangle$, where $P_i$ is the projection onto the eigenspace of $A$ corresponding to $\lambda_i$. There is a short explanation but I would be glad if someone could put it into better words. Specifically, I don't understand what $P_i$ is. I do know about the Hermitian operator, its eigenvalues and eigenspaces. But, for example, why is $P_i = \mid\psi_i\rangle\langle\psi_i\mid$ when the eigenspaces are one-dimensional? And how do I go from there to the form in my book?

Answer: $P_i = \mid\psi_i\rangle\langle\psi_i\mid$ is the one-dimensional "projection" operator. By "one-dimensional" it means this projection operator projects $\psi$ onto a single dimension in Hilbert space. Firstly, any wavefunction $\psi$ can be written as a linear combination of orthogonal components. That is, $\psi = \sum a_i\mid\psi_i\rangle$ where $a_i$ is some coefficient. If there are $n$ such non-zero coefficients, $\psi$ can be thought of as a vector in $n$ dimensions, having a component of length $a_i$ in each direction of this $n$-dimensional Hilbert space. $a_i$ is also the amplitude that the result $\psi_i$ will be obtained if the wavefunction is measured in this basis. The probability is the amplitude squared, $\mid a_i \mid^2$. The projection measurement essentially "projects" the state $\psi$ onto one of these components. It is easiest to demonstrate why $P_i = \mid\psi_i\rangle\langle\psi_i\mid$ by applying it to the state $\psi$.
$P_i\psi = P_i\sum a_k\mid\psi_k\rangle = \mid\psi_i\rangle\langle\psi_i\mid\sum a_k\mid\psi_k\rangle = \mid\psi_i\rangle\sum a_k\langle\psi_i\mid\psi_k\rangle = a_i\mid\psi_i\rangle$ Therefore, the operator $P_i$, acting on some arbitrary state $\psi$, projects $\psi$ onto its $i$-th component vector. (This is analogous to projecting a 2D vector onto, say, its x-component in Euclidean geometry. For instance, if a vector $V = ax + by$, then the projection onto the x-axis yields $V_x = ax$.) So, since $P_i\mid\psi\rangle=a_i\mid\psi_i\rangle$, we can easily show that $\langle\psi\mid P_i\mid\psi\rangle=\sum \langle\psi_k\mid a_k^*a_i\mid\psi_i\rangle = \sum a_k^*a_i\langle\psi_k\mid\psi_i\rangle = a_ia_i^* = \mid a_i\mid^2$ Therefore we have shown that $\langle\psi\mid P_i\mid\psi\rangle$ gives the probability of the wavefunction being in the eigenstate $\psi_i$. The next step is to show how the expression in your book is also this probability. Note that your book's $P_x$ represents a probability and is not the projection operator. First consider the denominator $\langle\psi\mid\psi\rangle = \sum^j\sum^k\langle\psi_j\mid a_j^* a_k\mid\psi_k\rangle$. The only terms that survive are those with $j=k$. Therefore we arrive at: $\langle\psi\mid\psi\rangle = \sum^j\langle\psi_j\mid a_j^* a_j\mid\psi_j\rangle = \sum^j a_j^*a_j = \sum^j \mid a_j\mid ^2 $ This is the total probability of obtaining any state which, if the state is normalized, should be 1. Therefore $\langle\psi\mid\psi\rangle = 1$ for normalized states; otherwise it is the sum of all the squared amplitudes. Next we consider the numerator. This is the inner product of the $x$-components of the state, which yields $a_x^* a_x = \mid a_x \mid ^2$ Therefore, the numerator over the denominator gives $\frac{\mid a_x \mid ^2}{\sum^j \mid a_j\mid ^2}$. This is the probability for a particular state $x$ to occur divided by the probability that any of the possible states will occur (which should be 1 for normalized states).
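The equivalence of the two forms can also be checked numerically. The following sketch (my illustration, not from the original post) builds a random unnormalized state in a 3-dimensional Hilbert space and verifies that $\langle\psi\mid P_i\mid\psi\rangle/\langle\psi\mid\psi\rangle$ equals $\mid a_i\mid^2/\sum_j \mid a_j\mid^2$ for every basis direction:

```python
import numpy as np

rng = np.random.default_rng(0)
# Unnormalized state: complex amplitudes a_i in an orthonormal basis {|psi_i>}.
a = rng.normal(size=3) + 1j * rng.normal(size=3)
psi = a

norm = np.vdot(psi, psi).real  # <psi|psi> = sum_j |a_j|^2

for i in range(3):
    e_i = np.zeros(3, dtype=complex)
    e_i[i] = 1.0
    P_i = np.outer(e_i, e_i.conj())              # |psi_i><psi_i|
    born = np.vdot(psi, P_i @ psi).real / norm   # <psi|P_i|psi> / <psi|psi>
    textbook = abs(a[i]) ** 2 / norm             # |a_i|^2 / sum_j |a_j|^2
    assert np.isclose(born, textbook)

# The probabilities also sum to 1, as they must.
```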
{ "domain": "physics.stackexchange", "id": 4680, "tags": "quantum-mechanics, measurement-problem, probability, born-rule" }
How do I interpret the wording of this passage about abstract binding trees from the book Practical Foundations of Programming Languages
Question: On page 7/8, section 1.2, of Practical Foundations of Programming Languages, 2nd edition, Robert Harper gives this initial definition of abstract binding trees: The smallest family of sets closed under the conditions If $x \in \mathcal{X}_s$, then $x \in \mathcal{B}[\mathcal{X}]_s$ For each operator $o$ of arity $(\vec{s_1}.s_1,\ldots,\vec{s_n}.s_n)s$, if $a_1 \in \mathcal{B}[\mathcal{X},\vec{x_1}]_{s_1},\,\ldots,\, a_n \in \mathcal{B}[\mathcal{X},\vec{x_n}]_{s_n}$, then $o(\vec{x_1}.a_1;\ldots;\vec{x_n}.a_n) \in \mathcal{B}[\mathcal{X}]$ (Here $\mathcal{X}$ denotes a set of variables, $\mathcal{X},x$ the union of $\mathcal{X}$ with $\{x\}$ where $x$ is fresh for $\mathcal{X}$, $\vec{x}$ a sequence of variables,$\mathcal{X}_s$ a set of variables of sort $s$, $\mathcal{B}[X]_s$ the set of abstract binding trees of sort $s$ over the variables in $\mathcal{X}$ This definition is almost correct, but fails to properly account for renaming of bound variables. An abt of the form $\text{let}(a_1;x.\text{let}(a_2;x.a_3))$ is ill-formed according to this defnition, because the first binding adds $x$ to $\mathcal{X}$, which implies that the second cannot also add $x$ to $\mathcal{X},x$, because it is not fresh for $\mathcal{X},x$. I am confused about his meaning here. How does this definition result in an ill-formed abt? By first/second binding does he mean A) outer/inner (read from left to right) or B) inner/outer (read from the inside out)? What I think he is saying: Because of the outer("first") binding of $x$, assume that $x$ occurs free in $a_2$. For example $a_2=x,\, a_3=x$. Then because $x$ occurs free in $a_2$, it must be that $a_2 \in \mathcal{B}[\mathcal{X}]$ where $\mathcal{X} = \{x\}$. 
Since $a_3$ occurs inside an abstractor that binds $x$, $a_3 \in \mathcal{B}[\mathcal{X,x}]$, but then $a_3 \in \mathcal{B}[\{x\},x]$ which is ill-formed since $x$ is not fresh for $\{x\}$ But then I think of the concrete example $\text{let}(y,x.\text{let}(z,x.x))$ in which $a_2 \in \mathcal{B}[\{z\}]$ and $a_3 \in \mathcal{B}[\{z\},x]$, which poses no problems in this interpretation. Edit to elaborate on the accepted answer... What I now believe Harper meant is that the outer binding of $x$ indicates that $x$ is considered to be among the free, or "already used" variables in the inner let. This may or may not mean that $x$ must actually appear free in the inner let. In either case, it means that validation of abts for well-formedness must proceed from the outside-in. In the specific examples Harper gives, the outer binding of $x$ means $x \in X$ in the validation of the inner let: if $a_2 \in \mathcal{B}[\mathcal{X}]$ and $a_3 \in \mathcal{B}[\mathcal{X},x] \ldots$ (<-- ill formed; $x$ is not fresh for $\mathcal{X}$) If in particular the wording means that $x$ must specifically appear free in the inner let, then in the given example, it would have to be in $a_2$ as suggested in my question and in the answer below. This amounts to saying that a particular instance of an abt in $\mathcal{X}$ is not automatically an abt in $\mathcal{X}\cup\mathcal{Y}$ for any set of variables $\mathcal{Y}$. Answer: Just to clear something up that may not have been obvious, $\chi$ is a set and the notation $B[\chi, x]$ is meant to be ABTs under free variables that are either $x$ or are in $\chi$. In this notation I believe it is implied that $x \notin \chi$ when you write $B[\chi, x]$, which is important. Using the definition of ABT above, you cannot prove for any $\chi$ that let(z, x.x) is in $B[\chi,x]$, but i claim that this is necessary to prove if you want to use it as the inner formula $a_1$ in let(y, x.$a_1$). 
The reason you cannot prove let(z, x.x) is in $B[\chi,x]$ is because, using rule #1 stated above, the free variable $x$ is only an ABT in $B[\chi]$ when $x \in \chi$ (or alternatively: $x$ is only an ABT in $B[\chi',x]$ for some $\chi'$), but then using rule #2, we deduce $z \in B[\chi] \wedge x \in B[\chi, x] \rightarrow let(z, x.x) \in B[\chi]$. As I had mentioned in the first paragraph, this implies $x \notin \chi$, which means that the formula let(z, x.x) is ONLY in ABT sets $B[\chi]$ where $x$ is not among the free variables. The reason why $x$ needs to be free in this inner formula is that otherwise we can't apply rule #2 again on the outer formula let(y, x.$a_1$).
{ "domain": "cs.stackexchange", "id": 21433, "tags": "type-theory, variable-binding, syntax-trees" }
Why does a smooth rolling ball roll indefinitely despite there being static friction?
Question: For a rigid, smooth ball rolling down a ramp, the acceleration of the center of mass is given by: $$a_{\text{com}, x} = - \frac{g \sin \theta}{1+\frac{I_\text{com}}{MR^2}}.$$ However, if $\theta=0$, the acceleration would be zero, meaning that the ball would travel at constant velocity. Why wouldn't there be an acceleration in the backwards direction caused by the static frictional force?

Answer: Real rolling objects on a horizontal plane (i.e., for $\theta=0$) do not slow down due to static or dynamic friction, but due to aerodynamic drag and rolling friction (caused by deformation of the rolling object). The former two occur when two surfaces move against each other, which does not happen during pure rolling motion, i.e., as long as the rolling condition $\omega R = v$ is met. As rolling friction is excluded (your ball is rigid) and air resistance is not mentioned, the ball will just continue rolling when on a plane. Static friction plays a role on an inclined plane since it causes some of the downhill force to accelerate the ball rotationally instead of linearly. (This is as long as the downhill force is smaller than the static friction.) If you place a resting, non-rotating ball on an inclined plane without static or dynamic friction, it would simply slide down the plane without rotating. Dynamic friction becomes relevant if the rolling condition is not met. We then have a rolling-with-slipping scenario, and the sliding friction linearly decelerates and rotationally accelerates the ball or vice versa until the rolling condition is met.
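Plugging a concrete moment of inertia into the quoted formula makes the $\theta = 0$ case tangible. For a uniform solid sphere $I_\text{com} = \frac{2}{5}MR^2$, so the magnitude of the acceleration is $\frac{5}{7}g\sin\theta$, which vanishes on a flat plane. A quick numerical sketch (my illustration, not from the original post):

```python
import math

def rolling_acceleration(theta_rad, i_over_mr2, g=9.81):
    """Magnitude of a_com for a rigid body rolling without slipping,
    where i_over_mr2 = I_com / (M R^2)."""
    return g * math.sin(theta_rad) / (1.0 + i_over_mr2)

SOLID_SPHERE = 2.0 / 5.0
print(rolling_acceleration(math.radians(30), SOLID_SPHERE))  # (5/7) * g * sin(30 deg)
print(rolling_acceleration(0.0, SOLID_SPHERE))               # 0.0: no deceleration on a flat plane
```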
{ "domain": "physics.stackexchange", "id": 95022, "tags": "newtonian-mechanics, rotational-dynamics, energy-conservation, friction, rigid-body-dynamics" }
Formula for Bessel low-pass filter coefficients
Question: When I am filtering a signal in Python, there is a built-in function to generate Bessel filter coefficients given a cutoff ratio and a filter order (number of poles). I am trying to translate this to C code, but I cannot seem to find the formula that is used to calculate the coefficients. Can someone point me to the formulas to calculate the IIR filter coefficients given a cutoff ratio and order? I am trying to read the scipy source on GitHub but I am having a very hard time of it...

Answer: Looking at the source code for scipy.signal.bessel, you're out of luck for finding a formula: they don't use one. They just have a big if/elif sequence for various values of the filter order:

```python
if N == 0:
    p = []
elif N == 1:
    p = [-1]
elif N == 2:
    p = [-.8660254037844386467637229 + .4999999999999999999999996j,
         -.8660254037844386467637229 - .4999999999999999999999996j]
elif N == 3:
    p = [-.9416000265332067855971980,
         -.7456403858480766441810907 - .7113666249728352680992154j,
         -.7456403858480766441810907 + .7113666249728352680992154j]
elif N == 4:
```

The Wikipedia page has some formulae for Bessel filters in the continuous domain: $$ \theta(s) = \sum_{k=0}^n a_k s^k $$ where $$ a_k = \frac{(2n-k)!}{2^{n-k}\,k!\,(n-k)!} \mbox{ for } k = 0,1,\ldots,n. $$ To get discrete-time near-equivalents, you'll need to do a continuous-to-discrete transformation. Note that this will not necessarily preserve the nice phase properties of the continuous-time Bessel filter.
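The coefficients $a_k = \frac{(2n-k)!}{2^{n-k}\,k!\,(n-k)!}$ are straightforward to compute directly, which is handy when porting to C. A short sketch (my illustration, not from the answer); for $n = 3$ it reproduces the reverse Bessel polynomial $\theta_3(s) = s^3 + 6s^2 + 15s + 15$:

```python
from math import factorial

def bessel_poly_coeffs(n):
    """Coefficients a_k, k = 0..n, of the reverse Bessel polynomial theta_n(s),
    via a_k = (2n - k)! / (2**(n - k) * k! * (n - k)!). Always integers."""
    return [factorial(2 * n - k) // (2 ** (n - k) * factorial(k) * factorial(n - k))
            for k in range(n + 1)]

# theta_3(s) = 15 + 15 s + 6 s^2 + s^3
print(bessel_poly_coeffs(3))  # [15, 15, 6, 1]
```

The poles hardcoded in scipy are (up to normalization) the roots of these polynomials, so this formula plus a root finder recovers them without the big if/elif table.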
{ "domain": "dsp.stackexchange", "id": 5148, "tags": "lowpass-filter, python, infinite-impulse-response" }
Can upward resistive force be greater than downward gravitational force?
Question: Objects fall through air and eventually reach terminal velocity, but why won't this upward force increase further?

Answer: The resistive force is generally proportional to the speed of the falling object. So when an object starts falling (with initial speed zero), the gravitational force is greater than the resistive force. The object then gains speed up to the instant where the resistive force becomes equal to the gravitational force; since equilibrium is reached at that point, the speed stops changing, and with it the resistive force stops changing too (because it is directly proportional to the speed). So everything reaches a steady state and the object falls at a constant velocity. Although, when we mathematically work out the time it takes to reach this speed, it comes out infinite; so in reality the resistive force will always be slightly less than the gravitational force. A complete mathematical description proves the point, but a more intuitive, physical approach to this question is the following. The nature of all resistive forces is essentially to decrease the relative motion between the surfaces in contact. So no friction or resistive force will ever act to make the object's speed keep increasing forever. This alone cannot tell us, in a particular case with other forces also acting on the object, whether the terminal velocity will be zero or not. We have to work out all the math if we want an exact answer.
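The "infinite time" statement can be made precise for linear drag. Taking $F_\text{drag} = -kv$ (an illustrative model; the answer only assumes drag proportional to speed), Newton's second law integrates in closed form:

```latex
m\frac{\mathrm{d}v}{\mathrm{d}t} = mg - kv
\qquad\Longrightarrow\qquad
v(t) = \frac{mg}{k}\left(1 - e^{-kt/m}\right).
```

The terminal speed $v_t = mg/k$ is approached only as $t \to \infty$; at every finite time $kv(t) < mg$, so the resistive force stays just below the gravitational force.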
{ "domain": "physics.stackexchange", "id": 26314, "tags": "forces" }
A misunderstanding about the energy profile of reactions with a catalyst involved
Question: All of us are aware of the importance of catalysts in biochemistry. For a high school learner like me, catalysts, and therefore enzymes, play a bridge-like role that connects high school biology to high school chemistry. Yet I got baffled by the two common energy profiles drawn for a generic reaction in the presence of a catalyst. The question may seem rudimentary, I agree, but the possible answers to my question were too technical for me to understand. One of the energy profiles had several curves in the pathway for $E$, and the other one has only one curve in the pathway where the catalyst should have had an effect. My guess is that the first energy profile is about a net change in $E_\mathrm{a}$ but the second one demonstrates the reality when an exothermic reaction occurs. Now, here are the questions: Why are the two energy profiles different? Please explain as if you're teaching thermochemistry to a little kid! Why do those curves occur in the latter energy profile, and is there a way to know how many of those curves will occur in the presence of a catalyst? Image credits: chemwiki.ucdavis.edu and Wikipedia.

Answer: The upper graph is just the simplest way to visualize the effect of a catalyst on a reaction $\ce{S -> P}$: the activation energy is lowered. The activation energy for the reaction in that direction is the difference of the energies of the starting material $S$ and a transition state $TS^\#$. Since it is the same starting material in the presence or absence of the catalyst, the energy of the transition state must be different. Can the same transition state have two different energies - just through the remote, magical action of a catalyst located somewhere? Probably not! It is much more plausible that - in the absence and presence of a catalyst - two completely different (different in structure and different in energy) transition states are involved. Exactly this situation is described in the second graph!
The catalyst "reacts" with the starting material, either by forming a covalent bond, by hydrogen bonding, etc. and thus opens a different reaction channel with different intermediates and a transition state not found in in the non-catalyzed reaction. In the end, the same product $P$ is formed and the catalyst is regenerated, but this doesn't mean that the catalyst wasn't heavily involved in the reaction going on.
{ "domain": "chemistry.stackexchange", "id": 4729, "tags": "physical-chemistry, thermodynamics, catalysis" }
Why only plane polarized light is absorbed in polarized glasses?
Question: Polarizing glasses can cancel out the light that is reflected from either the horizontal plane or the vertical plane. But why can they not cancel out the light that is reflected from the perpendicular plane?

Answer: A typical polarizing filter contains light-absorbing molecules that are long and thin, all oriented in one direction. An individual molecule is electrically conductive along its length, so it absorbs light whose electric field vector is oriented in the direction of the molecule's length. Light that is plane-polarized with the electric field in that direction is absorbed, while light plane-polarized in an orthogonal direction is not absorbed. Any light that can be represented as consisting of two plane-polarized components (e.g., light whose polarization is circular, elliptical, or random) is converted to linearly polarized light by passing through that kind of polarizing filter. The light that emerges from the filter is all plane-polarized in the direction orthogonal to the absorption axis of the filter. It is interesting to note that light polarized at 45 degrees to vertical can be considered to contain two components polarized in the vertical and horizontal directions. So, light polarized at 45 degrees is only partly absorbed by a polarizing filter that absorbs at 0 degrees: light emerges that is polarized at 90 degrees. Light that reflects at an angle of about 54 degrees away from normal from a smooth glass (or plastic, water, or even stone) surface is polarized with its electric vector parallel to the surface. So, in your photo, light reflected from the tabletop has horizontal polarization, while light reflected from the vertical surfaces has vertical polarization. So, if you tip your head 90 degrees, or rotate your polarizing glasses 90 degrees, you will find that reflections from vertical surfaces will be absorbed while reflections from horizontal surfaces will be transmitted.
Light that is reflected multiple times can have its polarization direction rotated to almost any angle.
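The "about 54 degrees" figure is the Brewster angle, $\theta_B = \arctan(n)$, at which the reflected light is completely polarized parallel to the surface. A quick sketch (my illustration; the refractive indices are typical assumed values):

```python
import math

def brewster_angle_deg(n, n_incident=1.0):
    """Brewster angle theta_B = arctan(n / n_incident), in degrees."""
    return math.degrees(math.atan2(n, n_incident))

print(round(brewster_angle_deg(1.33), 1))  # water (n ~ 1.33): ~53.1 degrees
print(round(brewster_angle_deg(1.50), 1))  # common glass (n ~ 1.50): ~56.3 degrees
```

Away from this exact angle the reflected light is only partially polarized, which is why polarizing glasses still dim, rather than fully cancel, most real-world glare.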
{ "domain": "physics.stackexchange", "id": 63691, "tags": "visible-light, everyday-life, polarization" }
Can I use potassium acetate instead of potassium carbonate?
Question: In the synthesis of N-ethoxycarbonyl L-proline methyl ester from L-proline and ethyl chloroformate, I use $\ce{K2CO3}$ to neutralize $\ce{HCl}$. Can I use $\ce{CH3CO2K}$ instead of $\ce{K2CO3}$? I think I cannot use it because it reacts with $\ce{HCl}$, forming acetic acid, which can hydrolyse the ester. Is that correct?

Answer: Stay with the carbonate. In the course of the reaction $\ce{HCl}$ is formed. The aim is to remove this from the reaction mixture both as quickly and as completely as reasonably possible. Compare:

- The addition of potassium acetate will increase the complexity of your reaction mixture. $\ce{HCl}$ is soluble in methanol (reference), and upon dissociation ($\ce{HCl <=> H+ + Cl-}$), the $\ce{H+}$ and the acetate $\ce{OAc-}$ join each other to generate the buffer system $\ce{HOAc <<=> H+ + OAc-}$. Think of the $\ce{HOAc}$ as a reservoir; it may release $\ce{H+}$ at any moment you probably do not intend; a prominent example may be an aqueous workup. The quantity of $\ce{HOAc}$ generated may be small in comparison with the methanol used as the solvent of the reaction. Still, it may complicate an aqueous workup and the following extraction, because this organic substance dissolves very well in water, but well enough in organic solvents such as hexanes or chloroform, too. So your extraction of the product may be affected. A distillation (bp 118 °C (pure acetic acid) instead of 65 °C (pure methanol) at ambient pressure) ahead of the workup may remove it from your reaction mixture.
- The alternative addition of $\ce{K2CO3}$: darn efficient removal of $\ce{H+}$ once and for all, because the quench along $\ce{K2CO3 + 2H+ -> 2K+ + H2O + CO2}\uparrow$ is practically irreversible. In addition, it allows you to monitor the advancement of the reaction, since $\ce{K2CO3}$ is not well soluble in methanol.
{ "domain": "chemistry.stackexchange", "id": 14296, "tags": "organic-chemistry, reaction-mechanism" }
Publishing a message in an interrupt
Question: Hi, I don't want to publish messages in a while(ros::ok) loop, but rather when an interrupt is called. How can I do that? I'm completely new to ROS, so please try to answer so that I can understand it correctly. Originally posted by Loreno on ROS Answers with karma: 1 on 2016-05-20 Post score: 0

Answer: The closest thing to an "interrupt" in ROS is that ROS uses callbacks that get called when particular ROS events happen. For example, when creating a subscriber you provide a callback function that executes every time a message on a particular topic is received. You could easily publish a message on a topic in a subscriber callback. There are also callbacks for service providers (which could also contain a publish command). One thing you may be interested in is a Timer (Python and CPP). These allow you to specify a callback and a time period (as a ros::Duration). The callback will then be executed at the frequency specified by the time period. You can, of course, publish messages in that timer callback as well. Originally posted by jarvisschultz with karma: 9031 on 2016-05-20 This answer was ACCEPTED on the original site Post score: 0
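The pattern, publishing from inside a callback instead of a loop, can be sketched without ROS at all. The toy event bus below is purely illustrative (it is NOT the rospy/roscpp API; the class and method names are made up), but it mirrors the structure of a subscriber callback that republishes on another topic:

```python
class ToyBus:
    """Minimal stand-in for a pub/sub middleware (not a real ROS API)."""
    def __init__(self):
        self.callbacks = {}  # topic -> list of callables

    def subscribe(self, topic, callback):
        self.callbacks.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for cb in self.callbacks.get(topic, []):
            cb(msg)

bus = ToyBus()
received = []
bus.subscribe("/out", received.append)

# The "interrupt": a callback fired on each incoming message
# that publishes a derived message. No while-loop needed.
bus.subscribe("/in", lambda msg: bus.publish("/out", msg * 2))

bus.publish("/in", 21)
print(received)  # [42]
```

In real ROS code the same shape applies: create the publisher once at startup, then call its publish method from inside the subscriber, service, or timer callback.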
{ "domain": "robotics.stackexchange", "id": 24705, "tags": "ros, publisher" }
QED and anomaly
Question: I've just started to learn about anomalies in quantum field theories. I have a question. How can one show that QED is free from a vector current anomaly, and what would happen if it were not? In other words, how can we show that $\partial_\mu j^\mu=0$ even at the quantum level? As I understand it, a violation of current conservation will cause a violation of the Ward identity. A violation of the Ward identity is related to a violation of unitarity. How do the unphysical photon polarization states appear in the theory through the anomaly? And how does their appearance violate the unitarity of the theory? Why would the vector current anomaly be a problem in QED but not the chiral current anomaly? Don't we have to get rid of the axial current anomaly in QED?

Answer: 1. How can we show that $\partial\cdot j\equiv 0$ at the quantum level? For example, by showing that the Ward Identity holds. It should be more or less clear that the WI holds if and only if $\partial\cdot j=0$. There are multiple proofs of the validity of the WI; some of them assume that $\partial\cdot j=0$, and some of them use a diagrammatic analysis to show that the WI holds perturbatively (and this is in fact how Ward originally derived the identity, cf. 78.182). It is a very complicated combinatorial problem (you have to show inductively that an arbitrary diagram is zero when you take $\varepsilon^\mu\to k^\mu$), but it can be done. Once you have proven that the WI holds to all orders in perturbation theory, you can logically conclude that $\partial\cdot j\equiv 0$. For a diagrammatic discussion of the WI, see for example Bjorken & Drell, section 17.9. See also Itzykson and Zuber, section 7-1-3. For scalar QED see Schwartz, section 9.4. Alternatively, you can also show that $\partial\cdot j=0$ by showing that the path integral measure is invariant (à la Fujikawa) under global phase rotations. This implies that the vector current is not anomalous. 2.a.
How do the unphysical photon polarization states appear in the theory through the anomaly? Take your favourite proof that the WI implies that the unphysical states do not contribute to $S$ matrix elements, and reverse it: assume that $\partial \cdot j\neq 0$ to convince yourself that now the unphysical states do contribute to $S$ matrix elements. Alternatively, make up your own modified QED theory using a non-conserved current and check for yourself that scattering amplitudes are not $\xi$ independent. 2.b. And how does their appearance violate the unitarity of the theory? Morally speaking, because unphysical polarisations have negative norm. If the physical Hilbert space contains negative-norm states, the whole paradigm of probability amplitudes breaks down. 3. Why would the vector current anomaly be a problem in QED but not the chiral current anomaly? Because in pure QED the axial current is not coupled to a gauge field, and therefore its conservation is not fundamental to the quantum theory. The axial anomaly in pure QED would be nothing but a curiosity of the theory (a nice reminder that classically conserved currents need not survive quantisation). On the other hand, in QED the vector current is coupled to a gauge field, the photon field, and as such its conservation is crucial to the consistency of the theory: without it the WI fails, and therefore we lose unitarity (or covariance, depending on how you formulate the theory).
{ "domain": "physics.stackexchange", "id": 38293, "tags": "quantum-field-theory, quantum-electrodynamics, unitarity, quantum-anomalies, ward-identity" }
Extending $\mathbb R^3$ coordinate systems concepts
Question: I was thinking about how to use different coordinate systems in 3D space and how to describe curved surfaces embedded in 3D space when I realized that all the notations I know make sense only if everything is embedded $\mathbb R^3$, where I can use cartesian coordinates. In particular, what made me understand about this limitation is the following: Given a certain coordinate system $(q_1,q_2...)$ a definition for tangent basis is $\vec e_{q_i}=\frac {\partial \vec P}{\partial {q_i}}$. Using this definition the derivative of a vector field along the coordinate $q_j$ become: $\frac {\partial \vec A}{\partial q_j}=\frac {\partial A^i}{\partial q_j} \vec e_{q_i}+A^i\frac {\partial \vec e_{q_i}}{\partial q_j}$. What's the problem? the terms $\frac {\partial \vec P}{\partial {q_i}}$ and $\frac {\partial \vec e_{q_i}}{\partial q_j}$. Indeed for me these terms make sense only if the space is embedded is $\mathbb R^3$ because then I can give to $P$ and $\vec e_{q_i}$a cartesian form that allows me to perform the derivative. For example for polar coordinates on a plane: $\vec P= r \cos(\theta)\vec i+r \sin(\theta)\vec j$ form which I obtain $\vec e_\theta=-r\sin(\theta) \vec i+r \cos(\theta) \vec j$. If I would like to describe a curved space from inside I wouldn't know how to interpret these terms (for example space-time in general relativity or to do an easier example an inhabitant of flatland that lives on a curved surface). Does this make sense for you or do you think these concepts can be used also for the description of a curved space from inside? If it is possible, can you give me an intuitive idea of the interpretation of those terms that I point out? Answer: There are two different ways to do differential geometry: the 'extrinsic' view, and the 'intrinsic' view. The extrinsic view is what you just described: you set up an embedding of your manifold inside a copy of $\mathbb R^n$, and you refer your manifold's geometry to that ambient space. 
The intrinsic view is what you're asking about: studying the geometry of the space without any reference to geometrical concepts that lie outside that space. As it happens, the intrinsic view is perfectly possible, but you do need additional structures in place for it. So, for example: Since you can no longer use the idea of distance provided by the ambient space, you need your manifold to come equipped with a metric. Since you can no longer use the 'invariant directions' of the cartesian basis vectors of your space, if you want to compare tangent vectors at different (or neighbouring) points, then you will need a way to 'transport' vectors from one place to the other, and for this you need your manifold to come equipped with a suitable connection (which can often be derived from the metric). These concepts completely supersede the notions in your question (including in particular the derivatives of the basis vectors and their expressions in terms of an invariant cartesian basis), which are not available in intrinsic geometry. For more details, see a good textbook on differential geometry. I really like the first volume of Spivak's course, but it is a fairly long book, and there are probably shorter introductions out there.
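For a reader wondering what concretely replaces the troublesome terms $\frac{\partial \vec e_{q_i}}{\partial q_j}$ in the intrinsic view, the standard machinery can be sketched as follows (these are textbook formulas for the Levi-Civita connection, not anything specific to this answer):

```latex
% In the intrinsic view, the problematic term
% \partial \vec e_{q_i}/\partial q_j is replaced by the connection
% coefficients \Gamma^k_{ij}, defined without any ambient space by
%   \nabla_{e_j} e_i = \Gamma^k_{ij}\, e_k .
% For the Levi-Civita connection they come from the metric alone:
\Gamma^k_{ij} = \tfrac{1}{2}\, g^{kl}\left(
    \partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij} \right)
% and the derivative of a vector field becomes the covariant derivative:
\nabla_j A^k = \partial_j A^k + \Gamma^k_{ij}\, A^i
```

For polar coordinates on the plane, $g_{rr}=1$, $g_{\theta\theta}=r^2$ gives $\Gamma^r_{\theta\theta}=-r$ and $\Gamma^\theta_{r\theta}=1/r$, reproducing exactly the basis-vector derivatives the question computed extrinsically, but with no mention of $\vec i$ and $\vec j$.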
{ "domain": "physics.stackexchange", "id": 78357, "tags": "differential-geometry, coordinate-systems, curvature" }
Optimize Array.indexOf
Question: How can I optimize checking a value in recursion before operating on it? I'm using Array.indexOf (this is JavaScript) var nums = [], lengths = []; function findNumLength(n) { //preliminary check to see if //we've done this number before var indexOf = nums.indexOf(n); if(indexOf !== -1) { return lengths[indexOf]; } function even (n2) { return n2%2===0; } if(n===1) { return 1; } if(even(n)) { l = findNumLength(n/2) + 1; if(indexOf===-1) { lengths.splice(0,0 ,l); nums.push(n); } return l; } else { l = findNumLength(3*n + 1) + 1; if(indexOf===-1){ lengths.splice(0,0,l); nums.push(n); } return l; } } (note: I've answered my own question with one solution I've found; it is by no means the only solution (though it may be the best. I don't know). Please, still answer.) Answer: var nums = [], lengths = []; function findNumLength(n) { This function name would do better to actually mention Collatz. //preliminary check to see if //we've done this number before var indexOf = nums.indexOf(n); if(indexOf !== -1) { return lengths[indexOf]; } indexOf is going to search through the entire list to find the correct number. That's going to be slow. Instead, I'd suggest that you use an array large enough that each number n could be an index into it. Leave the default undefined for any entries you haven't calculated yet. function even (n2) { return n2%2===0; } Functions are going to be somewhat expensive, and you only use this one once. It might be better just to stick this in the if. if(n===1) { return 1; } if(even(n)) { I'd make this else if, just to be more explicit l = findNumLength(n/2) + 1; if(indexOf===-1) { If this wasn't true, we'd have returned above. So why are you testing it here? lengths.splice(0,0 ,l); nums.push(n); I'm not really following what you are doing here. Shouldn't you be pushing on both arrays?
} return l; } else { l = findNumLength(3*n + 1) + 1; if(indexOf===-1){ lengths.splice(0,0,l); nums.push(n); } return l; } There's a lot of common logic between both sides of the if. Most of it should be moved after the if and run in either case. }
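The review's main suggestions — a single cache keyed by n, and the even/odd branches merged so the common logic appears only once — are easier to see in a compact sketch. This is my own illustration in Python rather than the asker's JavaScript, with names of my choosing:

```python
# Memoized Collatz sequence length, along the lines the review suggests:
# one cache keyed by n, and the two branches merged so the shared logic
# (recurse, add 1, store) appears only once.
cache = {1: 1}

def collatz_length(n):
    """Number of terms in the Collatz sequence starting at n (inclusive)."""
    if n not in cache:
        nxt = n // 2 if n % 2 == 0 else 3 * n + 1
        cache[n] = collatz_length(nxt) + 1
    return cache[n]

print(collatz_length(6))  # 9: the chain is 6, 3, 10, 5, 16, 8, 4, 2, 1
```

Because every intermediate value lands in the cache, repeated queries (the whole point of the asker's nums/lengths arrays) become O(1) lookups instead of linear indexOf scans.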
{ "domain": "codereview.stackexchange", "id": 1487, "tags": "javascript, optimization, array" }
Efficient Traversal and Manipulation of the DOM with Native JavaScript Using For/In Loop
Question: With native JavaScript, I intend to traverse a collection of elements in the DOM that contain a link and an image (and possibly other elements). The image may or may not be inside the link—in most cases the image will be in the link, but in some cases it may be directly adjacent to the link, or it may be a child of another div or span that is adjacent to its relevant link. However, both the link and the image will always be contained within a container element, of which there may be several. For example, envision a stock photo gallery arranged in a grid layout. The HTML for each item in the gallery's collection may look like any one of the snippets below (simplified for cleanliness and brevity). It might be: <div class="item"> <a href="..."> <img src="..." /> </a> </div> Or it may be: <div class="item"> <img src="..." /> <a href="...">Image Title</a> </div> Or it may be: <div class="item"> <span data-blah="..."> <a href="...">Image Title</a> </span> <span data-meh="..."> <img src="..." /> </span> </div> Note: I can't control the given HTML. Nevertheless, I would like to traverse and manipulate the DOM flexibly enough to handle any instance of HTML structure given above. To do so efficiently, I'd first need to build the collection of container elements before traversing, yes? So this: var items = document.querySelectorAll('.item'); for(i in items) { ... } // more efficient rather than this: for(i in document.querySelectorAll('.item')) { ... } // less efficient as the latter would need to re-query the .item selector again inside the loop, which is inefficient. Right? However, to manipulate the link and image of each item being looped, I still need to access the link and image of said item. 
To do that, I could use querySelector() on the .item like so: var items = document.querySelectorAll('.item'); for(i in items) { var link = items[i].querySelector('a'), img = items[i].querySelector('img'); // manipulate link and img elements } But is triggering querySelector() twice per loop-instance the most efficient approach? Is there a more efficient approach to access the link and image of each .item element being looped-through than by calling querySelector() twice on each looped .item element? Consider that the primary intent of manipulation is to change the link URL to the image source URL and add some attributes to the link and/or image. Seems simple enough. I just want to make sure the loop logic is efficient. Ideally, the logic will ultimately be used in either a bookmarklet or a browser plugin. The collection of .item elements could potentially number in the thousands, so I'm hoping to keep this traversal and manipulation process as efficient as possible. Answer: You should not iterate over an array or array-like object using a foreach loop, you should use a good old-fashioned for loop instead: var items = document.querySelectorAll('.item'); for (var i = 0; i < items.length; ++i) { ... } Yes, this sequential for loop doesn't look as elegant as a foreach loop, but unfortunately, using a foreach loop for arrays in JavaScript is incorrect. As for this: var items = document.querySelectorAll('.item'); for (var i = 0; i < items.length; ++i) { var item = items[i]; var link = item.querySelector('a'); var img = item.querySelector('img'); } Having two .querySelector lookups in each iteration is not really a problem, because the lookups are done on a small object, a tiny subset of the entire document. I wrote the loop body slightly differently from your original: I put the value of items[i] in a local item variable. This is to avoid duplicating items[i].
It's good to eliminate duplications, because if you ever need to change something, you need to do it in all duplicates, which is potentially error-prone. I declared link and img on two lines. This is slightly more verbose, but a bit clearer.
{ "domain": "codereview.stackexchange", "id": 11735, "tags": "javascript, dom, query-selector, bookmarklet" }
Parse a value to a given datatype
Question: For our (internal) automated test project, I am writing a part where parameters can be passed to an SSIS Package before it gets executed. The tester delivers the parameter name, value and desired datatype in a table format. I get the datatype and parse it to the correct type. We assume a happy flow (when testers do something wrong, no pretty feedback is needed, they can read the exceptions). The following code is the snippet I am not happy with: foreach (var row in parametersTable.Rows) { object value; if (parametersTable.ContainsColumn("DataType") && !String.IsNullOrEmpty(row["DataType"])) { string dataType = row["DataType"].ToLower(); if ("int64".Equals(dataType)) { value = Int64.Parse(row["Value"]); } else if ("datetime".Equals(dataType)) { value = DateTime.Parse(row["Value"]); } else { throw new ArgumentException("DataType " + row["DataType"] + " is not supported"); } } else { value = row["Value"]; } pfe.AddParameter(row["Name"], value); } I think my string manipulation and testing can be improved and I'd like to make the "test and parse" section more expandable. Maybe an Enum or a helper object where I just have to add a single value to make it work for additional datatypes. Answer: There is no magic method in .NET that will know the datatype and magically return the right parsed value for you. So, creating your own method of parsing and doing manipulation is not a bad thing. This can be improved though. Use a method: I prefer to place code that does a certain manipulation or calculation on variables in a separate method/function. This gives me the ability to reuse the code if necessary, and maintaining the code is also easier than looking in chunks of code to change just a small bit (like adding a datatype in your case). switch vs. if/else: When facing a simple check for a value or condition of a variable, an if/else-statement will certainly do. But when you have a list of options for a variable, it's better to use a switch-statement.
Your code not only looks cleaner, it is cleaner and, again, easier to maintain. This results in the following code: public object ParseValue(string type, string value) { switch(type) { case "int64" : return Int64.Parse(value); case "datetime" : return DateTime.Parse(value); case "double" : return Double.Parse(value); default: throw new ArgumentException("DataType is not supported"); } } And the usage: foreach (var row in parametersTable.Rows) { object value; if (parametersTable.ContainsColumn("DataType") && !String.IsNullOrEmpty(row["DataType"])) { var dataType = row["DataType"].ToLower(); value = ParseValue(dataType, row["Value"]); } else { value = row["Value"]; } pfe.AddParameter(row["Name"], value); } Another way of using a method is making it return a bool to see if the parsing succeeded and using the out parameter. This results in the following method and usage: public bool TryParseValue(string type, string value, out object parsedValue) { switch(type) { case "int64" : parsedValue = Int64.Parse(value); return true; case "datetime" : parsedValue = DateTime.Parse(value); return true; case "double" : parsedValue = Double.Parse(value); return true; default: parsedValue = null; return false; } } foreach (var row in parametersTable.Rows) { object value; if (parametersTable.ContainsColumn("DataType") && !String.IsNullOrEmpty(row["DataType"])) { var dataType = row["DataType"].ToLower(); if(!TryParseValue(dataType, row["Value"], out value)) { throw new ArgumentException("DataType is not supported"); } //'value' will be correct if the parsing succeeded } else { value = row["Value"]; } pfe.AddParameter(row["Name"], value); }
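The asker's wish for "a helper object where I just have to add a single value" is essentially a dispatch table. A hedged sketch of that idea in Python (names are mine; the C# analogue would be a Dictionary<string, Func<string, object>> consulted inside ParseValue):

```python
from datetime import datetime

# Map each supported type name to a parser function; supporting a new
# datatype is then a one-line addition instead of a new switch case.
PARSERS = {
    "int64": int,
    "double": float,
    "datetime": datetime.fromisoformat,
}

def parse_value(type_name, value):
    parser = PARSERS.get(type_name.lower())
    if parser is None:
        raise ValueError(f"DataType {type_name} is not supported")
    return parser(value)

print(parse_value("Int64", "42"))  # 42
```

The lookup also folds the ToLower() normalization into one place, so the table keys stay canonical.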
{ "domain": "codereview.stackexchange", "id": 10380, "tags": "c#, strings" }
Was the filling of the Three Gorges Dam's impact on the Earth's rotation rate detectable?
Question: I'm a big fan of Vsauce and the video How Earth Moves is just one example of science related to the Earth available there (there's plenty more). But the statement in this video starting at 01:39 strikes me as a little surprising. You know how a figure skater, spinning in place, can slow down their (rotational) speed by extending their arms out; by moving some of their body mass away from the middle of their body? Well the same thing can happen to Earth. The Three Gorges Dam did exactly what a figure skater does when they move their arms away from their center. It transferred thirty nine trillion kilograms of water one hundred and seventy five meters above sea level. NASA calculated that that massive amount of water moved, caused Earth’s rotation to slow down, so that every day of your life since that dam was finished, has been longer by 0.06 microseconds. (emphasis added) If I calculated correctly, that would be a "slip" of about 1 centimeter per year at the equator. There are many causes of change to the Earth's distribution of mass, its moment of inertia, and therefore the length of a day. Tidal forces from the Moon can contribute substantially to changes in Earth's rotation as well, and there could be short-term geological effects as well. Is this roughly 0.06 microsecond/day shift well below the "noise" level, where in this case I'm using "noise" to refer to effects that are smaller than the error that can be modeled? So for example exchanges between sea ice and sea water can be measured and modeled to some level of accuracy, and the Moon's tidal effects can be measured and modeled to great precision, so even if the Three Gorges Dam effect turns out to be small compared to those, it could still be detected as long as its effect is larger than the error with which those effects can be calculated. [Below: screenshots from the Vsauce YouTube video "Water".]
Answer: One needs to calculate the change in the moment of inertia of the Earth and use conservation of angular momentum (the rotation period is proportional to the moment of inertia). Most of the water will ultimately come from the oceans, effectively removing a thin layer of water. Jerry Mitrovica discusses this effect (in reverse) in a Nautilus interview: Is water moving off glaciers, slowing the Earth’s rotation, this time analogous to a figure skater putting arms out? Right. Glaciers are mostly near the axis. They’re near the North and South Poles and the bulk of the ocean is not. In other words, you’re taking glaciers from high latitudes like Alaska and Patagonia, you’re melting them, they distribute around the globe, but in general, that’s like a mass flux toward the equator because you’re taking material from the poles and you’re moving it into the oceans. That tends to move material closer to the equator than it once was. So the melting mountain glaciers and polar caps are moving bulk toward the equator? Yes. Of course, there is ocean everywhere, but if you’re moving the ice from a high latitude and you’re sticking it over oceans, in effect, you’re adding to mass in the equator and you’re taking mass away from the polar areas and that’s going to slow the earth down. The contribution of the removed water to the moment of inertia depends on the distance from the axis and hence on the latitude. This is a simple calculation if we assume the world is all ocean. The moment of inertia of the lake is $m(R\cos L)^2$ and the moment of inertia of a spherical shell is $\frac{2}{3}mR^2$, where m is the mass of water, $R$ is the Earth's radius and $L$ is the latitude of the lake ($30.82305$ degrees for Three Gorges). 
The relative change in the moment of inertia $I$ of the Earth is then $$\frac{mR^2}{I}(\cos^2 L - \frac{2}{3}) = \frac{39 \times 10^{12} \times(6.37 \times 10^{6})^2}{8.04×10^{37}}(\cos^2 L - \frac{2}{3})$$ $$=1.97×10^{-11}(\cos^2 L - \frac{2}{3})$$ Multiplying by the number of microseconds in a day ($8.64 \times 10^{10}$) gives $$1.7(\cos^2 L - \frac{2}{3}) = 1.7 × 0.071 = 0.12$$ microseconds. Why the difference from NASA's $0.06$ ? Note that the expression changes sign at $\cos^2 L = \frac{2}{3}$ or $L ≈ 35$ degrees (pretty close to the latitude of Three Gorges). The Earth will actually speed up if the lake is at high latitudes and slow down if it is at the equator. The $\frac{2}{3}$ term comes from the "all ocean" assumption. As I understand this paper, the $\frac{2}{3}$ term should be multiplied by $\frac{1.414}{1.38}$ to account for the shape of the oceans (search in the PDF for those numbers), resulting in $0.09$ microseconds.
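The arithmetic above is easy to check numerically. A short script using the same constants as the answer (all-ocean assumption, no shape correction):

```python
import math

m = 39e12                      # kg of impounded water
R = 6.37e6                     # Earth's radius, m
I = 8.04e37                    # Earth's moment of inertia, kg m^2
lat = math.radians(30.82305)   # latitude of the Three Gorges Dam

# Relative change in the moment of inertia under the all-ocean assumption
rel = (m * R**2 / I) * (math.cos(lat)**2 - 2/3)

# Lengthening of the day, in microseconds (8.64e10 microseconds per day)
dt_us = rel * 8.64e10
print(round(dt_us, 2))  # 0.12
```

Replacing the 2/3 with 2/3 × 1.414/1.38 (the ocean-shape correction mentioned at the end) roughly halves the result, bringing it toward NASA's 0.06 μs.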
{ "domain": "earthscience.stackexchange", "id": 1634, "tags": "water, earth-rotation, dams" }
Installing pcl1.4 in ROS, please help
Question: Hi, By default the old version of pcl gets installed, which is 1.1. The standalone version of pcl1.4 works fine, but when I try the unstable version I am getting an error. This is what I have done to make pcl1.4 work in ROS: download perception_pcl_unstable to my ros_workspace folder, then did rosmake, but I get an error about deserialization. Can somebody please help me out. Thank you. Originally posted by faizan on ROS Answers with karma: 11 on 2012-02-16 Post score: 1 Original comments Comment by faizan on 2012-02-16: I have even done this : Download perception_pcl_unstable from SVN and overlay it to your electric installation by placing it first in your ROS_PACKAGE_PATH. export ROS_PACKAGE_PATH=~/perception_pcl_unstable:/opt/ros/electric/stacks:$ROS_PACKAGE_PATH Comment by Dan Lazewatsky on 2012-02-17: Can you post the exact error you're getting, and the svn path you're checking out? Comment by faizan on 2012-02-17: @ Diaz, I am sorry but I posted this question in the pcl forum, and they are helping me with it. But to help others, I was getting an error regarding deserialize. Then the authors at pcl.org told me to change my svn_url to point it to the new version of pcl. I did that by uncommenting it in the makefile in pcl. Answer: After discussion on pcl-users@pointclouds.org, it became clear that @faizan was installing the standalone PCL 1.4 library and not the unstable version of the perception_pcl ROS stack. It remains to be seen exactly what svn URL is needed to answer this question fully. EDIT: Another option is installing the ALPHA version of the ROS Fuerte distribution, which includes PCL 1.4. Beware: Fuerte is still very much a work in progress.
Originally posted by joq with karma: 25443 on 2012-02-17 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Kevin on 2012-02-26: Now that PCL 1.5.1 is out and they are shooting for 1.6 by end of March, why can't we setup PCL as an external library like we did with opencv? Then this would question could extend to flann, cminpack, etc Comment by tfoote on 2012-03-05: Setting up PCL as an external library is in progress for fuerte.
{ "domain": "robotics.stackexchange", "id": 8276, "tags": "pcl" }
Is there any way to read the Feynman Lectures for free?
Question: The Feynman Lectures on Physics probably constitute the most famous introductory physics textbook ever written. The level of intuition it provides is hard to beat; I first started reading it in middle school, and a decade later I'm still finding new insights in it. Yet when I recommend it to others, they often have sticker-shock. It is a three-volume book, after all. So is there any legal way to read the Feynman Lectures for free, on the Internet? Answer: The publishers of the Feynman Lectures recently released a free online edition! See this link. This should prove an invaluable resource for physics students. One note for Feynman Lectures purists like myself: the content is of course the same as the original books, but the look and feel of this edition is very different from the original. The font has been changed for easier online readability, the equations have been retyped in Latex, and the diagrams have been redrawn.
{ "domain": "physics.stackexchange", "id": 11344, "tags": "specific-reference, education" }
Optimization of code for searching in db
Question: This is my code: var test = (from x in myDb.myTable where (x.name == tmp || x.name == tmp2 || x.name == tmp3) && x.unit == u select x).FirstOrDefault(); if (test == null) test = (from x in myDb.myTable where (x.name == tmp || x.name == tmp2 || x.name == tmp3) select x).FirstOrDefault(); How can I optimize it? Answer: var test = (from x in myDb.myTable where (x.name == tmp || x.name == tmp2 || x.name == tmp3) select x) .AsEnumerable() .OrderBy(x => x.unit == u ? 0 : 1) .FirstOrDefault(); I removed the x.unit == u condition. Instead I sort the items so that the ones where this condition would be met appear first. FirstOrDefault then does the rest. I split the EF part from the LINQ-to-objects part with AsEnumerable() as I am not sure if EF can translate the order by to SQL. If it can, you can try this: var test = (from x in myDb.myTable where (x.name == tmp || x.name == tmp2 || x.name == tmp3) orderby x.unit == u ? 0 : 1 select x) .FirstOrDefault();
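The reviewer's trick — make the preferred rows sort first, then take the first match — is language-independent. A Python sketch with made-up sample rows (not from the question):

```python
# Prefer a row whose unit matches u, but fall back to any name match.
# Sorting is stable, so ties keep their original order.
rows = [
    {"name": "b", "unit": "kg"},
    {"name": "a", "unit": "kg"},
    {"name": "a", "unit": "m"},
]

def find_first(rows, names, u):
    matches = [r for r in rows if r["name"] in names]
    # Rows matching the preferred unit sort first (False < True);
    # FirstOrDefault becomes "take the head of the list, or None".
    matches.sort(key=lambda r: r["unit"] != u)
    return matches[0] if matches else None

print(find_first(rows, {"a"}, "m"))  # {'name': 'a', 'unit': 'm'}
```

This collapses the original two-query fallback into a single pass over the matches, just as the OrderBy version does.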
{ "domain": "codereview.stackexchange", "id": 3028, "tags": "c#, optimization, entity-framework" }
Leak detection simple class
Question: The basic idea is to use a class with static methods to add and remove references in a static vector that keeps track of these references, and to check that vector upon exit. The class is detecting intentional leaks that I create, but maybe you can find a case where it does not detect the leak, that I am not seeing #include <iostream> #include <stdexcept> #include <memory> #include <vector> #include <cstdlib> // for std::atexit #include <algorithm> // for std::remove void checkLeakStack(); class LeakDbg { private: LeakDbg() { /// Constructor will only be called once, /// since it's a singleton class std::atexit( checkLeakStack ); } public: struct Pair { std::string name; void* ref; bool operator==( const Pair &other ) const { return ref == other.ref; } }; static bool locked; static std::vector<Pair> stack; static LeakDbg& instance() { static LeakDbg INSTANCE; return INSTANCE; } static void addRef(const std::string& nm, void* ptr) { stack.push_back(Pair{ nm, ptr }); } static void remRef(void* ptr) { /// If it's not enabled, it means /// it's OK to remove a ref if( !LeakDbg::locked ){ Pair search = Pair{"",ptr}; std::vector<Pair> vect; // std::remove(vect.begin(), vect.end(), search); stack.erase(std::remove(stack.begin(), stack.end(), search), stack.end()); } } }; bool LeakDbg::locked = false; std::vector<LeakDbg::Pair> LeakDbg::stack = std::vector<LeakDbg::Pair>(); void checkLeakStack() { /// Here the stack should be empty /// you can print or assert if the stack is not empty std::cout << "There are " << LeakDbg::stack.size() << " leaks ..." 
"\n"; for ( LeakDbg::Pair pair : LeakDbg::stack) { const std::string msg = pair.name + " is leaked"; std::cout << msg << std::endl; } } Add defines #define ADD_LEAK(msg, ptr) LeakDbg::addRef(msg, ptr); #define REM_LEAK(ptr) LeakDbg::remRef(ptr); #define CREATE_LEAK_DET() LeakDbg::instance(); #define LCK_LEAK_DET(st) LeakDbg::locked = st; // ADD_LEAK -> Add it in a class constructor // REM_LEAK -> Add it in a class destructor // CREATE_LEAK_DET -> Call it once to make sure "std::atexit" is called // LCK_LEAK_DET -> If set to "true", all objects destructed after, // -> will be considered a leak Test struct Test { Test() { ADD_LEAK( "Cls", this ) } ~Test() { REM_LEAK( this ) } }; int main() { CREATE_LEAK_DET() Test *obj1 = new Test(); Test *obj2 = new Test(); Test *obj3 = new Test(); delete obj2; LCK_LEAK_DET( true ) } Update 12/12/2019 If anybody is interested, I refactored the code to be reusable and less intrusive. Github Answer: Here are some things that may help you improve your code. Use the required #includes The code uses std::string which means that it should #include <string>. It might compile on your machine because some other header includes that file, but you can't count on that, and it could change with the next compiler update. Use only necessary #includes The #include <stdexcept> and #include <memory> lines are not necessary and can be safely removed because nothing from those headers appears to be used here. Avoid C-style macros I'd advise not using C-style macros like the ones in this code, preferring either inline functions or even lambdas. See ES.31 for details. Consider thread safety If multiple threads are using this code, there is likely to be a problem because the single shared instance of the std::vector is not protected by a mutex. I would also recommend renaming the existing locked variable to something like complete or finished to better distinguish what it's doing. 
Avoid singletons A singleton is basically just another way to create global variables, and we don't like global variables much because they make code linkages much harder to see and understand. See I.3 for more on that. In this case, since you already have two global variables, much of the complexity can easily be avoided by simply using a namespace instead of a class. Here's one way to do that which eliminates the need for instance and CREATE_LEAK_DET: namespace LeakDbg { struct Pair { std::string name; void* ref; bool operator==( const Pair &other ) const { return ref == other.ref; } }; static bool locked = false; static std::vector<Pair> stack; static void addRef(const std::string& nm, void* ptr) { stack.emplace_back(Pair{ nm, ptr }); } static void remRef(void* ptr) { if( !LeakDbg::locked ){ stack.erase(std::remove(stack.begin(), stack.end(), Pair{"",ptr}), stack.end()); } } void checkLeakStack() { std::cout << "There are " << LeakDbg::stack.size() << " leaks ..." "\n"; for ( LeakDbg::Pair pair : LeakDbg::stack) { std::cout << pair.name << " is leaked\n"; } } static const bool registered{std::atexit( checkLeakStack ) == 0}; } Consider the user The current code requires that the user explicitly instruments the code, which seems a bit intrusive. Here's an alternative approach the modifies things just slightly, using the Curiously Recurring Template Pattern, or CRTP for short. First we isolate the leak detector bits into a templated class. template <typename T> struct LeakDetector { LeakDetector() { LeakDbg::addRef(typeid(T).name(), this); } ~LeakDetector() { LeakDbg::remRef(this); } }; Now to use it is much simpler than before. No ugly macros are required and we only need to add one simple thing to the declaration of the class to be monitored: struct Test : public LeakDetector<Test> { Test() { } ~Test() { } }; An even less intrusive approach might be to override new and delete as outlined in this question. 
Consider alternatives Leak detection is a worthwhile thing to do, since many C++ bugs stem from that kind of error. However, there are already a number of existing approaches to this, some of which may already be installed on your computer. There is, for example the useful valgrind tool. If you're using clang or gcc and have the libasan library installed, you can get a very nice runtime printout. Just compile the code with g++ -g -fsanitize=address myprogram.cpp -o myprogram Then at runtime, a memory leak report might look like this: There are 2 leaks ... Cls is leaked Cls is leaked ================================================================= ==71254==ERROR: LeakSanitizer: detected memory leaks Direct leak of 1 byte(s) in 1 object(s) allocated from: #0 0x7fe67c2c69d7 in operator new(unsigned long) (/lib64/libasan.so.5+0x10f9d7) #1 0x4057a6 in main /home/edward/test/memleak/src/main.cpp:97 #2 0x7fe67bcbb1a2 in __libc_start_main (/lib64/libc.so.6+0x271a2) Direct leak of 1 byte(s) in 1 object(s) allocated from: #0 0x7fe67c2c69d7 in operator new(unsigned long) (/lib64/libasan.so.5+0x10f9d7) #1 0x405774 in main /home/edward/test/memleak/src/main.cpp:95 #2 0x7fe67bcbb1a2 in __libc_start_main (/lib64/libc.so.6+0x271a2) SUMMARY: AddressSanitizer: 2 byte(s) leaked in 2 allocation(s).
{ "domain": "codereview.stackexchange", "id": 36790, "tags": "c++, memory-management" }
Dice-rolling Python script
Question: I completed writing a dice roll script in Python but I thought it looked too messy. Is there anything I should change here? import random, os class Dice: result = [] total = 0 def __roll_(sides=1): return random.randint(1, sides) def roll(sides=1, times=1): for time in range(0, times): Dice.result.append(Dice.__roll_(sides)) Dice.result = Dice.result[len(Dice.result) - times:len(Dice.result)] Dice.sumResult() return Dice.result def sumResult(): Dice.total = 0 for num in range(0, len(Dice.result)): Dice.total += Dice.result[num] return Dice.total def saveResult(directory=''): if directory == '': savetxt = open('savedResult.txt', 'a+') else: savetxt = open(os.path.join(directory, 'savedResult.txt'), 'a+') savetxt.write(str(Dice.result) + '\n') savetxt.close() def saveTotal(directory=''): if directory == '': savetxt = open('savedTotal.txt', 'a+') else: savetxt = open(os.path.join(directory, 'savedTotal.txt'), 'a+') savetxt.write(str(Dice.total) + '\n') savetxt.close() Answer: Your class is not a class, self is totally missing. You have to rewrite the whole thing. Internal methods start with one single underscore _roll. You can access lists from the end with negative indices, len is unnecessary. Never change the internal state of an instance and return a value. Do one or the other. os.path.join works with empty strings, so the if is unnecessary. Open files with the with-statement. Never use the string representation of Python objects like lists or dicts for other purposes than debugging. Remember the naming conventions in PEP-8. 
import random import os class Dice: def __init__(self, sides=1): self.sides = sides self.result = [] self.total = 0 def _roll(self): return random.randint(1, self.sides) def roll(self, times=1): self.result[:] = [self._roll() for time in range(times)] self.sum_result() def sum_result(self): self.total = sum(self.result) def save_result(self, directory=''): with open(os.path.join(directory, 'savedResult.txt'), 'a') as txt: txt.write('%s\n' % ', '.join(map(str, self.result))) def save_total(self, directory=''): with open(os.path.join(directory, 'savedTotal.txt'), 'a') as txt: txt.write('%d\n' % self.total)
{ "domain": "codereview.stackexchange", "id": 17718, "tags": "python, python-3.x, dice" }
Why does the Simbad page "A.A. Michelson's Jovian Galilean-satellite interferometer" show data for Betelgeuse?
Question: When searching for things related to How did Michelson measure the diameters of Jupiter's moons using optical interferometry? I came across the ui.adsabs.harvard.edu entry A. A. Michelson's Jovian Galilean-Satellite Interferometer at Lick Observatory in 1891 which links to this Simbad page. But I don't understand what I'm looking at. The title of the Simbad page is A.A. Michelson's Jovian Galilean-satellite interferometer at Lick Observatory in 1891, but the data directly below it seems to be Betelgeuse. I don't use Simbad so I'm not sure exactly how to understand the juxtaposition of the title's "Jovian Galilean-satellite interferometer" with data about Betelgeuse. Is it possible to explain the page's purpose? Answer: The SIMBAD link might be there just because Osterbrock's 2004 AAS presentation about the interferometer mentioned an observation of Betelgeuse. This would be consistent with the policy stated in Wenger et al. 2000: No assessment is made of the relevance of the citation in terms of astronomical contents: the paper can be entirely devoted to the object, or simply give a side mention of it - in both cases this gives a reference in SIMBAD. [...] SIMBAD approach favours exhaustivity, at the cost of increased information noise. Other articles about SIMBAD bibliography, from Laloë et al. 1993 to Delacour et al. 2018, show an evolution of their process from mostly manual to largely automated, but knowledgeable humans are still in the loop. Sometimes an instrument or project intended primarily to investigate one subject also makes important contributions when reused on another subject. For example, the Dark Energy Survey of distant galaxies has also contributed to the discovery of several trans-Neptunian objects. However, that is not exactly what happened here. Michelson 1920 discusses another interferometer spanning the objective of the 100-inch telescope at Mt. 
Wilson, noting that atmospheric seeing conditions affected the results less than expected. Betelgeuse required a longer baseline, so Michelson and Pease 1921 used four 6-inch mirrors mounted on a 20-foot steel beam.
{ "domain": "astronomy.stackexchange", "id": 4931, "tags": "history, jupiter, interferometry, star-catalogues, betelgeuse" }
Calculating the power for acceleration of an elevator
Question: An elevator does need some kind of acceleration when it starts to rise up so there has to be a force acting on it. But it surely does not accelerate the whole time, so after some distance or time it reaches a speed and stops accelerating. In an old physics textbook I found an example where the power a motor would need to lift it was calculated like this: P = F * v. But they did not take the average speed for v but the speed which would be reached at the end of the acceleration. Why was this speed used and not the average speed? Answer: What are you trying to calculate? The average power OR the instantaneous power at the end of the acceleration? The first part will be calculated by taking the average force/acceleration along with the average speed. The second part will be calculated using the final speed after acceleration and multiplying it by the weight of the lift + passengers (as this is the amount of force required to maintain a constant speed). Please post the exact question you want answered. Edit: Do not make two posts regarding the same question.
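A minimal numeric sketch of the distinction the answer draws, with made-up numbers (a 1000 kg lift reaching 2 m/s; the extra accelerating force m*a is ignored for simplicity):

```python
g = 9.81          # m/s^2
m = 1000.0        # kg, lift + passengers (illustrative value, not from the question)
v_final = 2.0     # m/s, speed at the end of the acceleration phase

# At constant speed the motor only balances gravity, so the textbook's
# P = F * v with the final speed gives the steady-state (instantaneous) power:
F = m * g
P_instant = F * v_final       # 19620.0 W

# Averaged over the acceleration phase (uniform acceleration from rest),
# the average speed is v_final / 2, so the average power against gravity
# during that phase is only half as large:
P_avg = F * (v_final / 2.0)   # 9810.0 W

print(P_instant, P_avg)
```

Presumably the textbook used the final speed because it wanted the power the motor must sustain once the lift cruises, not the smaller average over the brief acceleration phase.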
{ "domain": "physics.stackexchange", "id": 60315, "tags": "newtonian-mechanics, acceleration, power" }
Is Norton's dome valid (or does $\frac{d^2 \vec{p}}{dt^2} = \vec{0} \implies \frac{d^n \vec{p}}{dt^n} = \vec{0} \ \forall \ n > 2$)?
Question: I came across Norton's dome and I don't agree that it proves anything. First, here's an obviously ridiculous and completely nonsensical example that I constructed from thinking about simple harmonic motion: Consider a particle of unit mass at rest in free space, at $x = 0$. Newton's second law states that $\ddot{x} = 0$ at $t=0$. There's an obvious way to describe the particle's motion that satisfies Newton's second law of motion, and that is $x(t) = 0$. Let us denote this solution by $s_1(t)$. However, here's another non-obvious way to describe the particle's motion that satisfies Newton's second law: $$ x(t) = \begin{cases} \sin(t - T) - t + T \quad &\text{if }\ t \ge T \\ 0 & \text{if }\ t < T \end{cases} $$ Let us denote this solution by $s_2(t)$. Notice that $s_2(t)$, $\dot{s_2}(t)$ and $\ddot{s_2}(t)$ all obviously equal $0$ for all $t < T$. Now consider time $t = T$. Notice that: $$ \begin{align*} s_2(T) & = \sin(T - T) - T + T = 0 \\ \dot{s_2}(T) & = \cos(T - T) - 1 = 0 \\ \ddot{s_2}(T) & = -\sin(T - T) = 0 \end{align*} $$ Thus, $s_2(t)$ is a completely valid solution which obeys the constraint that the particle is at rest at time $t=0$. However, notice that as $t$ grows beyond $T$, the particle starts to spontaneously shoot off to infinity in the negative $x$ direction. Also notice that $T$ is an arbitrarily chosen constant, it could assume any positive real number as its value. Thus, the second solution states that the particle could spontaneously start shooting off to infinity in the negative $x$ direction after some arbitrary amount of time $T$. So we have proven that Newtonian mechanics isn't deterministic, can harbor uncaused events, yada yada. Why are Norton's dome and my example not valid? They're incorrect because in those solutions, the particle isn't really at rest when the spontaneous motion starts occurring. 
Notice that the fourth derivative of Norton's $r(t)$ is $\frac{d^4}{dt^4} \left( \frac{(t - T)^4}{144} \right) = \frac{1}{6}$, and that the third derivative of $s_2(t)$ at $t=T$ is equal to $-\cos(T-T) = -1$. Clearly Norton believes that an object being "at rest" does not necessitate that $\frac{d^n \vec{p}}{dt^n} = \vec{0} \ \forall \ n \ge 2 $. He believes that Newton's first law does not imply $ \frac{d^2 \vec{p}}{dt^2} = \vec{0} \implies \frac{d^n \vec{p}}{dt^n} = \vec{0} \ \forall \ n \ge 2 $. However, believing this leads to all sorts of completely nonsensical predictions for even the simplest system one can think of, a particle at rest in free space, as I have shown above. Therefore, it only makes sense to believe that $ \frac{d^2 \vec{p}}{dt^2} = \vec{0} \implies \frac{d^n \vec{p}}{dt^n} = \vec{0} \ \forall \ n > 2$. So, coming to my questions: Are my arguments valid? Is Norton's second solution just plain incorrect? If you disagree with my arguments and instead agree with Norton, what do you think of my completely nonsensical second solution for a particle at rest in free space? Answer: Your example is not analogous to Norton's dome. Norton's point is that Newtonian mechanics says that the trajectory of a particle obeys $\ddot{x}(t) = F(x(t),\dot{x}(t), t)$ at every point in time $t$, and that physics hence is often interpreted to constitute a notion of "causation" between the force $F$ and the motion $x(t)$ as the solution $x(t)$ is usually claimed to be unique (as a consequence of various uniqueness theorems about solutions to differential equations, it often is) given an initial position $x(t_0)$ and velocity $\dot{x}(t_0)$. What you're talking about has nothing to do with either Norton's dome - or, indeed, Newtonian mechanics - since your $s_2(t)$ only obeys the differential equation $\ddot{s}_2(t) = 0$ for $t \leq T$, and not for all times $t$. $s_2(t)$ is hence not a valid solution to the equations of motion.
In contrast to this, Norton's dome purposefully constructs a dome with a shape such that its associated $F$ acting on a particle sliding over it is not Lipschitz continuous, and hence the Picard-Lindelöf theorem can no longer guarantee uniqueness of the solution to the equations of motion. Then, Norton shows explicitly that there are two different solutions $x(t)$ that both start with the particle at rest at the top of the dome (in the simple, colloquial sense $\dot{x} = 0$ - the particle isn't moving) and both solve the equations of motion for all times $t$. Sure, you could force uniqueness in this case by saying that you need to specify all derivatives of $x(t)$ at $t_0$, but that's not what physics does - a lot of classical physics is founded on the idea that position and velocity at a given time (or position and momentum) completely specify the state of a particle since they suffice to produce a unique trajectory via the equations of motion (there could be a digression about gauge theory here but that would be beside the point). Also, note that e.g. for analytic functions, knowing all derivatives means knowing the value of the function at every point - this would mean the equations of motion lose their predictive power if we essentially need to know the solution already to fully specify it, and the idea of "causation" that Norton is concerned about would no longer enter.
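For contrast with the free-particle example: on Norton's dome the equation of motion is $\ddot{r} = \sqrt{r}$, and it genuinely admits two solutions through the initial data $r = \dot{r} = 0$: the trivial $r(t) = 0$ and, for any delay $T$, $r(t) = (t-T)^4/144$ after $t = T$. A quick finite-difference check (pure Python) that the nontrivial branch satisfies the equation everywhere, including across $t = T$:

```python
import math

def r(t, T=1.0):
    """Norton's nontrivial solution: at rest until t = T, then (t - T)^4 / 144."""
    return 0.0 if t <= T else (t - T) ** 4 / 144.0

def second_derivative(f, t, h=1e-4):
    # central finite difference
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h ** 2

# r'' = sqrt(r) should hold on the resting branch, at the transition, and beyond
for t in (0.5, 1.0, 2.0, 5.0):
    assert abs(second_derivative(r, t) - math.sqrt(r(t))) < 1e-6, t
```

Both branches pass this check with identical initial data, which is exactly the non-uniqueness Norton exploits; the $s_2(t)$ above would fail it for $t > T$, since there $\ddot{s}_2 = -\sin(t - T) \neq 0$ while the free-particle force is zero.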
{ "domain": "physics.stackexchange", "id": 88858, "tags": "newtonian-mechanics, symmetry-breaking, determinism" }
How to read files in a directory?
Question: Hello I need to read some files from a directory and std::filesystem does not seem to work. I already tried std::experimental::filesystem but I get the corresponding error ‘std::filesystem’ has not been declared or ‘std::experimental::filesystem’ has not been declared. Is there any other way to read files? std::vector<std::string> get_folder_content(std::string path) { std::vector<std::string> files; std::filesystem::directory_iterator path_iterator(path); for (const auto& entry : path_iterator) { files.push_back(entry.path().string()); } return files; } Originally posted by anonymous74063 on ROS Answers with karma: 16 on 2021-05-06 Post score: 0 Original comments Comment by gvdhoorn on 2021-05-07: I'm sorry to close your question, especially since it already has an answer, but this is not a ROS question. It's a general C++ question. Please keep your ROS Answers questions on topic, meaning: they should ask questions about ROS. For all other questions (about programming, robotics, math, etc), try to find an appropriate forum, such as Stack Overflow, the Ubuntu fora, Robotics Stack Exchange, etc.
Answer: I found this c++ implementation of the glob function some time ago and have been using it since: std::vector<std::string> glob(const std::string& pattern) { using namespace std; // glob struct resides on the stack glob_t glob_result; memset(&glob_result, 0, sizeof(glob_result)); // do the glob operation int return_value = glob(pattern.c_str(), GLOB_TILDE, NULL, &glob_result); if (return_value != 0) { globfree(&glob_result); stringstream ss; ss << "glob() failed with return_value " << return_value << endl; throw std::runtime_error(ss.str()); } // collect all the filenames into a std::list<std::string> vector<string> filenames; for (size_t i = 0; i < glob_result.gl_pathc; ++i) { filenames.push_back(string(glob_result.gl_pathv[i])); } // cleanup globfree(&glob_result); // done return filenames; } You can call this function as: filenames = glob("/*."); Originally posted by Akhil Kurup with karma: 459 on 2021-05-06 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by anonymous74063 on 2021-05-06: Thanks I gonna try it out.
{ "domain": "robotics.stackexchange", "id": 36414, "tags": "c++, catkin" }
Finding a minimum weight path with certain restrictions
Question: I have a directed weighted multigraph whose vertices are sets of URLs. We add to this multigraph all edges of the form $i\to j$ where $i\subset j$ (such edges are of zero weight), where $i$, $j$ are vertices of the graph. Now having a set of source vertices and destination vertices, I need to find the minimum weight path (or several paths, if there are several such paths of the same weight) from a source to a destination of nonzero length and having at least one edge not of the form $i\to j$ where $i\subset j$. Also it is desirable that the paths do not have adjacent edges of the form $i\to j$ where $i\subset j$. I am currently working with Python and NetworkX. I think (but am not sure) that we can assume that the set of source vertices is disjoint from the set of destination vertices. Answer: You can solve this in $O(|E| \log |V|)$ time, if all weights are non-negative. Basically, you'll build a larger graph, of twice the size, then do a shortest-paths query in this graph. I will call the edges of the original multigraph regular, and the added edges of the form $i\to j$ for $i\subset j$ irregular. For each set $i$, you have two vertices $i^-,i^+$ in the new graph, where $i^-$ represents a step along the path that visits set $i$ before visiting any regular edge; and $i^+$ represents a step along the path after visiting a regular edge. It is easy to work out the edges in this graph (if you have an irregular edge $i \to j$ in the original graph, you have $i^- \to j^-$ and $i^+ \to j^+$; if $i \to j$ is a regular edge in the original graph, you have $i^- \to j^+$ and $i^+ \to j^+$). Add a source node $s$ to this large graph, with an edge $s \to i^-$ for each $i$ in your set of source vertices; and similarly add a sink node $t$ with an edge $i^+ \to t$ for each sink vertex $i$. Finally, find the shortest path from $s$ to $t$ in this bigger graph using any standard algorithm for shortest paths (e.g., Dijkstra's algorithm).
You can accommodate the restriction about adjacent edges by doubling the size of the graph again. I'll let you work out the details. Basically, each vertex in the bigger graph represents a vertex in the original graph, plus a little bit of state summarizing everything you need to know about what comes before this on the path.
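A sketch of the doubled-graph construction in plain Python (Dijkstra via heapq rather than NetworkX, so it runs standalone; the toy vertices and weights are invented for illustration). Each vertex v becomes ('-', v) before any regular edge has been used and ('+', v) afterwards; only '+' states may reach the sink, which enforces "at least one regular edge":

```python
import heapq
from itertools import count

def layered_shortest_path(regular, irregular, sources, sinks):
    """regular / irregular: iterables of directed edges (u, v, w); the
    irregular ones are the zero-weight i -> j edges with i a subset of j.
    Returns the minimum weight of a source -> sink path containing at
    least one regular edge, or None if no such path exists."""
    adj = {}
    def add(u, v, w):
        adj.setdefault(u, []).append((v, w))
    for u, v, w in regular:
        add(('-', u), ('+', v), w)   # a regular edge promotes to the '+' layer
        add(('+', u), ('+', v), w)
    for u, v, w in irregular:
        add(('-', u), ('-', v), w)   # irregular edges stay in their layer
        add(('+', u), ('+', v), w)
    for s in sources:
        add('SRC', ('-', s), 0)      # paths start with no regular edge seen yet
    for t in sinks:
        add(('+', t), 'SNK', 0)      # only '+' states may finish

    dist = {'SRC': 0}
    tie = count()                    # tiebreaker so the heap never compares nodes
    pq = [(0, next(tie), 'SRC')]
    while pq:
        d, _, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        if u == 'SNK':
            return d
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, next(tie), v))
    return None

# Toy example: A is a subset of B, giving the zero-weight irregular edge A -> B.
regular = [('B', 'C', 5), ('A', 'C', 9)]
irregular = [('A', 'B', 0)]
print(layered_shortest_path(regular, irregular, ['A'], ['C']))  # 5
```

The second doubling (to forbid two consecutive irregular edges) works the same way: add one more bit of state to each vertex label, "was the previous edge irregular", and wire the edges accordingly.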
{ "domain": "cs.stackexchange", "id": 11992, "tags": "graphs, graph-traversal, weighted-graphs, python" }
Store a hash in a cookie to identify the user's account to keep logged in after close browser
Question: I made an option to let the user choose to stay logged in even after he closes the browser. It's really simple to implement, but I'm not sure about the hash I used to identify the user that is stored in a cookie. To generate the hash I used bin2hex(random_bytes(59/2)) and this function generates a string like this one: 118f9bc738e28b5079cf04bc3c88b2754f8ae04c9f1e1127d020389c49 To make sure that it's unique for each user, before registering new accounts I check all hashes that I already have in my database. I only use this hash to identify the user using the cookie, is it safe? The length of the hash is good enough for a brute-force attack to be impossible, right? What do you guys think about my code? register.php $hash = bin2hex(random_bytes(59/2)); login.php ## ## Get data from database using $_POST variable from <form> ## if(!empty($_POST['stay_loggedIn'])){ setcookie('stay_loggedIn', $fetch_data['hash'], strtotime('+14 days'), '/'); } index.php if(!empty($_COOKIE['stay_loggedIn']) && empty($_SESSION['logged_cookie'])){ ## Set this session variable to run this code only one time $_SESSION['logged_cookie'] = 1; $hash = $_COOKIE['stay_loggedIn']; $logeIn = $conn->prepare("SELECT * FROM users WHERE `hash` = :hash"); $logeIn->bindValue(':hash', $hash, PDO::PARAM_STR); $logeIn->execute(); $user_data = $logeIn->fetch(PDO::FETCH_ASSOC); ## ## Set session variables ## } logout.php $params = session_get_cookie_params(); setcookie('stay_loggedIn', '', time() - 3600, $params['path'], $params['domain'], $params['secure'], isset($params['httponly'])); Answer: I think this is a bad approach on two fronts (the second one is more important): Having to check a randomly generated 'hash' against the database, to see if it has been used before, is inefficient. First of all, this is not a hash, it's a random string. A hash is created from something sensible, to verify it later. Can I assume you already have a unique identifier for each user in your database?
You could store that in the cookie, it tells you which user it is. User ids don't need to be secret. However, you don't want someone to change it and see information from another user. To prevent this, you could add a small random string to the cookie, and in your database, to verify that they are who they say they are. Such a string is called a 'token'. A user is only valid if the user id and the token in the cookie match with the id and token in the database. Your queries will now probably be quicker because you can use the user id to look up the token. Staying 'logged in' after closing the browser is done by not erasing the session cookie when the browser closes. It's as simple as that. See: https://www.php.net/manual/en/function.session-set-cookie-params.php (see 'lifetime', it defaults to 0).
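The id-plus-token scheme from the answer, sketched in Python for concreteness (the PHP equivalent would use random_bytes() and hash_equals(); the names and in-memory storage here are invented toys, and in production you would store a hash of the token rather than the token itself):

```python
import hmac
import secrets

db = {}  # toy in-memory store: user id -> remember-me token

def issue_cookie(user_id):
    token = secrets.token_hex(16)       # 128 random bits per user; no database-wide
    db[user_id] = token                 # uniqueness check needed, the id is the key
    return f"{user_id}:{token}"         # value for the stay-logged-in cookie

def check_cookie(cookie):
    user_id, _, token = cookie.partition(":")
    stored = db.get(user_id)            # O(1) lookup by id, not a scan over hashes
    # constant-time comparison, so timing does not leak token prefixes
    return stored is not None and hmac.compare_digest(stored, token)

cookie = issue_cookie("42")
assert check_cookie(cookie)
assert not check_cookie("42:" + "0" * 32)   # forged token for a real id is rejected
```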
{ "domain": "codereview.stackexchange", "id": 34271, "tags": "php, security" }
Why do spiral arms occur at potential minima?
Question: I've been learning about the density wave theory of spiral arms, and also how the gravitational potential of galaxies is non-axisymmetric, resulting in a sinusoidal spiral potential. I've then learnt that the spiral arms/density waves occur at the minima of this spiral potential. Why is this? Answer: In general there will be a gravitational force associated with any gradient in potential. The force tends to zero where the potential gradient is zero - at the minimum in potential - and so that's where matter accumulates.
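A toy numerical illustration of the answer (amplitude and arm number made up): for a sinusoidal perturbation $\Phi(\theta) = \Phi_0 \cos(m\theta)$, the azimuthal force $F = -d\Phi/d\theta$ vanishes at the potential minimum and points back toward it from either side, so that is where matter collects:

```python
import math

Phi0, m = 1.0, 2.0                           # toy amplitude and arm multiplicity
F = lambda th: Phi0 * m * math.sin(m * th)   # F = -dPhi/dtheta for Phi = Phi0*cos(m*th)

th_min = math.pi / m                         # cos(m*th) = -1 here: a potential minimum
eps = 0.05

assert abs(F(th_min)) < 1e-12                 # zero force exactly at the minimum
assert F(th_min - eps) > 0 > F(th_min + eps)  # restoring force from both sides
print("matter settles at theta =", th_min)
```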
{ "domain": "physics.stackexchange", "id": 97008, "tags": "potential, astrophysics, galaxies" }
Vertex function QED and Deeply Virtual Compton Scattering (DVCS)
Question: In QED we have one vertex where one line is virtual and the other two are physical: But recently I came across the so-called Deeply Virtual Compton Scattering in which, after the interaction of an electron with a proton, a physical photon appears. How is this possible in QED? Or is another field theory used here? I also don't understand why this reaction is called DVCS. A photon is born real. (I would be grateful if you could advise me where to read about it in detail). Answer: I am assuming you were looking at this paper, as it is using the same Feynman diagrams that you show. Addressing the name of DVCS - Compton scattering is an inelastic scattering process where a photon interacts with a charged particle, resulting in altered momenta of both the photon and the charged particle. In the DVCS case, the charged particle is a quark inside the proton. As Triatticus points out in the comments, it's virtual because the photon which scatters off the quark is virtual, and I assume that deeply comes from deep inelastic scattering and the Bjorken limit, i.e., the energy and momentum of the virtual photon going to infinity at the same rate. As the diagrams show, the $e p \rightarrow e p \gamma$ process gets contributions from two mechanisms, as one can't be sure how the photon was generated - the first one is the DVCS, and the other is the Bethe-Heitler interaction (with the addition of an interference term). If the only interacting fermions were electrons, the process would be described by the theory of QED - unfortunately, the proton is a composite particle held together by the strong interaction. One can't know the kinematic properties of individual quarks inside the proton, but we can characterise the contents of the proton by parton distribution functions describing what fraction of the total momentum is carried by each parton (and is dependent on the energy at which the nucleon is probed).
There are also nucleon form factors describing the electric and magnetic distributions inside a nucleon. The document says (in section 1.3.4.1) the B-H term is computed "using pure QED and can be expressed as a function of the elastic Form Factors", while the DVCS term is said to be parameterised by combinations of form factors. So in conclusion, while only interactions between fermions and photons are present, you need information about the structure of the proton (a QCD bound state) to carry out the calculation. I would also argue that the photon-quark interaction is not described by QED, as the only fermions in QED are the electrons (you could probably argue muons and tau are included as well).
{ "domain": "physics.stackexchange", "id": 94689, "tags": "quantum-field-theory, experimental-physics, quantum-electrodynamics, protons, elementary-particles" }
simple transaction process
Question: I ask if the following code need some refactoring : public static int InsertBonus(DataTable dt, int month, int year) { int affectedRow = -1; using (IfxConnection con = new IfxConnection(ConfigurationManager.ConnectionStrings["xxx"].ToString())) { using (IfxTransaction tran = con.BeginTransaction()) { try { StringBuilder cmdTxt = new StringBuilder(); cmdTxt.Append(" DELETE FROM rh7uiw WHERE bonus_mon = ? AND bonus_year = ? "); using (var myIfxCmd = new IfxCommand(cmdTxt.ToString(), con)) { myIfxCmd.CommandType = CommandType.Text; myIfxCmd.Parameters.Add("bonus_mon", IfxType.Integer); myIfxCmd.Parameters.Add("bonus_year", IfxType.Integer); if (con.State == ConnectionState.Closed) { con.Open(); } myIfxCmd.Parameters[0].Value = ((object)month) ?? DBNull.Value; myIfxCmd.Parameters[1].Value = ((object)year) ?? DBNull.Value; affectedRow = myIfxCmd.ExecuteNonQuery(); } cmdTxt.Length = 0; cmdTxt.Append(" INSERT INTO rh7uiw (emp_num,bonus_year,bonus_mon,bonus_value,bonus_no,rr_code) VALUES(?,?,?,?,?,?) "); using (var myIfxCmd = new IfxCommand(cmdTxt.ToString(), con)) { myIfxCmd.CommandType = CommandType.Text; myIfxCmd.Parameters.Add("emp_num", IfxType.Integer); myIfxCmd.Parameters.Add("bonus_year", IfxType.Integer); myIfxCmd.Parameters.Add("bonus_mon", IfxType.Integer); myIfxCmd.Parameters.Add("bonus_value", IfxType.Integer); myIfxCmd.Parameters.Add("bonus_no", IfxType.Integer); myIfxCmd.Parameters.Add("rr_code", IfxType.Integer); foreach (DataRow r in dt.Rows) { if (con.State == ConnectionState.Closed) { con.Open(); } myIfxCmd.Parameters[0].Value = (r["emp_num"]) ?? DBNull.Value; myIfxCmd.Parameters[1].Value = (r["year"]) ?? DBNull.Value; myIfxCmd.Parameters[2].Value = (r["month"]) ?? DBNull.Value; myIfxCmd.Parameters[3].Value = (r["bonus_al"]) ?? 
DBNull.Value; myIfxCmd.Parameters[4].Value = 1; myIfxCmd.Parameters[5].Value = 1; affectedRow = myIfxCmd.ExecuteNonQuery(); } } tran.Commit(); con.Close(); con.Dispose(); return affectedRow; } catch { tran.Rollback(); throw; } } } } Answer: Yes: You don't need to use a StringBuilder when you are not building a string. Just use the string. Don't check the state of the connection for every query. Just open the connection before the first query, and it stays open. The check for null for month and year in the first query is not needed. An int can not be null. As you are creating the connection in a using block, you don't need to close and dispose it. The using block will do that for you. Returning affectedRow from the method seems pointless, as it only contains how many records the last query affected.
{ "domain": "codereview.stackexchange", "id": 4630, "tags": "c#, asp.net" }
How to create a Time Series Training Dataset with variable sequence length
Question: I have time series data with variable sequence lengths. So something like: date value label 2020-01-01 2 0 # first input time series 2020-01-02 1 0 # first input time series 2020-01-03 1 0 # first input time series 2020-01-01 3 1 # second input time series 2020-01-03 1 1 # second input time series how is it possible to create a training dataset (numpy arrays) of shape [samples, time_steps, n_features] when time_steps is not consistent? Additional Info: The model that is going to be trained is an LSTM which is capable of handling variable input lengths. Answer: I solved it the following way: zero padding all time series that are smaller than the longest one adding a Masking() layer with mask_value=0. which ignores all zero values before feeding the network. The part of the model with the Masking layer looks like the following: model = keras.Sequential() model.add(layers.Masking(mask_value=0., input_shape=(None, 1))) model.add(layers.LSTM(100)) Additional Info: Implementing the Masking Layer for a Model with two separate Inputs was kind of inconvenient with the Keras Functional API, therefore I implemented that part of the model with the Keras Sequential API and connected it to the rest of the model which is implemented with the Functional API.
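The zero-padding step can be sketched without TensorFlow (plain NumPy, reusing the toy values from the question): pad every series to the length of the longest one, giving the rectangular [samples, time_steps, n_features] array that the Masking layer then filters. Note the implicit assumption that 0 never occurs as a real data value, otherwise mask_value=0. would hide genuine observations:

```python
import numpy as np

# The two variable-length series from the question's example
series = [
    [2, 1, 1],   # first input time series: 3 steps
    [3, 1],      # second input time series: 2 steps
]

max_len = max(len(s) for s in series)
batch = np.zeros((len(series), max_len, 1))   # [samples, time_steps, n_features]
for i, s in enumerate(series):
    batch[i, :len(s), 0] = s                  # left-aligned, zero-padded at the end

print(batch.shape)      # (2, 3, 1)
print(batch[1, :, 0])   # the trailing 0.0 is what the Masking layer skips
```

Keras also ships a pad_sequences helper (with padding='post' for trailing zeros) that does this padding in one call.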
{ "domain": "datascience.stackexchange", "id": 7337, "tags": "python, time-series, dataset, training" }