Second derivative of energy as frequency of oscillations
Question: Is there a way to algebraically see why, when I take the second derivative of a potential energy at a point where it is minimal (force is zero), I generally get the frequency (squared) of the oscillations around this point? Answer: In a harmonic oscillator, $$F=-kx$$ where $k$ is a constant and $x$ is the displacement from the mean position. Thus, $$\frac{\mathrm dF}{\mathrm dx}=-k\tag{1}$$ about the equilibrium point. Now, $$F=-\frac{\mathrm dV}{\mathrm dx}\tag{2}$$ where $V$ is the potential energy at any point. Substituting equation $(2)$ in equation $(1)$, we get $$-\frac{\mathrm d^2V}{\mathrm dx^2}=-k$$ Cancelling the minus signs, and dividing by the mass of the body ($m$), we obtain \begin{align} \frac 1 m \frac{\mathrm d^2V}{\mathrm dx^2}&=\frac k m\\ &=\omega^2 \end{align} where $\omega$ is the angular frequency of oscillation.
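A quick numerical sanity check of this relation, sketched in Python (the potential and constants here are illustrative, not from the original question):

```python
import math

def omega_from_potential(V, x0, m, h=1e-5):
    """Estimate the angular frequency of small oscillations about a
    minimum x0 of the potential V via a central second difference."""
    d2V = (V(x0 + h) - 2 * V(x0) + V(x0 - h)) / h**2
    return math.sqrt(d2V / m)

# Harmonic potential V = k x^2 / 2 with k = 4, m = 1: exact omega = 2.
k, m = 4.0, 1.0
omega = omega_from_potential(lambda x: 0.5 * k * x**2, x0=0.0, m=m)
```

For the quadratic potential the second difference is exact up to rounding, so `omega` lands on the analytic value $\sqrt{k/m}$.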
{ "domain": "physics.stackexchange", "id": 69903, "tags": "classical-mechanics, harmonic-oscillator, differentiation, approximations, calculus" }
Nucleotide Count
Question: I am learning Java and hence am practicing it from here. The solution seems pretty trivial to me, but I would still like an honest review that would guide me on the following ideas: Am I maintaining class invariants? Do I need to have more domain-specific error messages? Although the solution seems trivial, am I missing some performance boost? Am I following proper encapsulation? Am I using correct data structures? Any other suggestions are most welcome.

import java.util.Map;
import java.util.HashMap;
import java.util.Collections;

public class DNA {

    private final Map<Character, Integer> nucleotideMap = new HashMap<>();
    private final String dnaSequence;

    public DNA(String dnaSequence) {
        this.dnaSequence = dnaSequence;
        initializeDefaultNucleotideMap();
        if (!dnaSequence.isEmpty()) {
            countNucleotides();
        }
    }

    private Map<Character, Integer> initializeDefaultNucleotideMap() {
        nucleotideMap.put('A', 0);
        nucleotideMap.put('C', 0);
        nucleotideMap.put('G', 0);
        nucleotideMap.put('T', 0);
        return nucleotideMap;
    }

    private void countNucleotides() {
        for (Character chr : dnaSequence.toCharArray()) {
            nucleotideMap.put(chr, nucleotideMap.get(chr) + 1);
        }
    }

    public int count(char nucleotide) {
        if (!nucleotideMap.containsKey(nucleotide)) {
            throw new IllegalArgumentException("Invalid nucleotide");
        }
        return nucleotideMap.get(nucleotide);
    }

    public Map<Character, Integer> nucleotideCounts() {
        return Collections.unmodifiableMap(nucleotideMap);
    }
}

Test suites:

import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.entry;
import org.junit.Test;

public class NucleotideTest {

    @Test
    public void testEmptyDnaStringHasNoAdenosine() {
        DNA dna = new DNA("");
        assertThat(dna.count('A')).isEqualTo(0);
    }

    @Test
    public void testEmptyDnaStringHasNoNucleotides() {
        DNA dna = new DNA("");
        assertThat(dna.nucleotideCounts()).hasSize(4).contains(
            entry('A', 0), entry('C', 0), entry('G', 0), entry('T', 0)
        );
    }

    @Test
    public void testRepetitiveCytidineGetsCounted() {
        DNA dna = new DNA("CCCCC");
        assertThat(dna.count('C')).isEqualTo(5);
    }

    @Test
    public void testRepetitiveSequenceWithOnlyGuanosine() {
        DNA dna = new DNA("GGGGGGGG");
        assertThat(dna.nucleotideCounts()).hasSize(4).contains(
            entry('A', 0), entry('C', 0), entry('G', 8), entry('T', 0)
        );
    }

    @Test
    public void testCountsOnlyThymidine() {
        DNA dna = new DNA("GGGGGTAACCCGG");
        assertThat(dna.count('T')).isEqualTo(1);
    }

    @Test
    public void testCountsANucleotideOnlyOnce() {
        DNA dna = new DNA("CGATTGGG");
        dna.count('T');
        assertThat(dna.count('T')).isEqualTo(2);
    }

    @Test
    public void testDnaCountsDoNotChangeAfterCountingAdenosine() {
        DNA dna = new DNA("GATTACA");
        dna.count('A');
        assertThat(dna.nucleotideCounts()).hasSize(4).contains(
            entry('A', 3), entry('C', 1), entry('G', 1), entry('T', 2)
        );
    }

    @Test(expected = IllegalArgumentException.class)
    public void testValidatesNucleotides() {
        DNA dna = new DNA("GACT");
        dna.count('X');
    }

    @Test
    public void testCountsAllNucleotides() {
        String s = "AGCTTTTCATTCTGACTGCAACGGGCAATATGTCTCTGTGTGGATTAAAAAAAGAGTGTCTGATAGCAGC";
        DNA dna = new DNA(s);
        assertThat(dna.nucleotideCounts()).hasSize(4).contains(
            entry('A', 20), entry('C', 12), entry('G', 17), entry('T', 21)
        );
    }
}

Answer: Am I maintaining class invariants? Yes. But I wonder if you really need to expose nucleotideCounts. You did it safely, by wrapping the map in unmodifiableMap (and because the values are immutable too). To eliminate the risk of abuses and errors, it would be better to keep this implementation detail hidden.

Do I need to have more domain-specific error messages? I don't see an obvious reason for that. The IllegalArgumentException seems appropriate when trying to get an invalid nucleotide.

Although the solution seems trivial, am I missing some performance boost? Since the number of nucleotides is unlikely to change, using a Map for the storage seems a bit of an overkill. A simple array of 4 elements would be more lightweight.
That being said, premature optimization is considered evil, so I wouldn't worry about this too much until the current implementation is proven to be a bottleneck.

Am I following proper encapsulation? As mentioned earlier, it would be better to hide the implementation detail of the storage of the nucleotide counts, that is, to remove the nucleotideCounts method. But it depends on your use case. If you really need that method, then you cannot remove it. In any case, it's important to ask the question: should this be hidden?

Using correct data structures? Yes.

Any other suggestions are most welcome. Some additional tips and observations:
- No need to keep dnaSequence in a field. Once you have built the map of counts from it, you no longer need it; I suggest getting rid of it.
- In the constructor, it's unnecessary to check if dnaSequence is empty. The iteration logic in countNucleotides naturally embeds that check.
- You'll get an NPE on invalid nucleotides, for example new DNA("hello").
{ "domain": "codereview.stackexchange", "id": 19695, "tags": "java, algorithm, programming-challenge, unit-testing" }
Can fats be composed of fatty acid esters other than triglycerides?
Question: "Fats" mainly refers to triglycerides. In a triglyceride, a glycerol molecule forms three ester linkages with fatty acids. I think it should also be possible for a butane-1,2,3,4-tetraol molecule to form four ester linkages with fatty acids, or for an ethane-1,2-diol molecule to form two ester linkages with fatty acids. But why are they not considered a form of fat? Or, to make my question clearer, why should fats be triglycerides? Answer: Indeed there's nothing stopping such molecules from existing. In fact it is possible to make dendrimeric polyol polyesters such as Olestra, where sucrose (a disaccharide containing eight free hydroxyl groups) has 6 to 8 of its hydroxyls esterified with long-chain fatty acids. Olestra (and presumably a number of other similar polyol polyesters) behaves much like regular triglyceride cooking oils, and it was used in the recent past in commercial products for human consumption. However, these do not arise from natural biochemistry, and Olestra at least does not appear to be significantly metabolized by humans. As such, considering them to be fats may be technically correct by slightly extrapolating their chemistry and material properties, but biochemically speaking it is probably a discouraged classification. Ben Krasnow of Applied Science has a nice video with the story, properties and synthesis of Olestra, and he even fries some chips in it!
{ "domain": "chemistry.stackexchange", "id": 17754, "tags": "food-chemistry, fats" }
Why does water have several different solid phases but only one liquid and one gas phase
Question: Why does water have several different solid phases but only one liquid and one gas phase? Is there any meaning or reason behind it, or is it just the way nature behaves? Answer: Solid phases differ by the arrangement of the molecules. Molecules in a solid stay in the same place, so you can have different geometrical arrangements (different phases). In liquids and gases, the molecules are always moving, so you cannot define a fixed arrangement.
{ "domain": "physics.stackexchange", "id": 39620, "tags": "states-of-matter" }
Which is the correct formula for change in displacement?
Question: I am trying to find the distance between two bodies A and B ($s$) as a function of time ($t$), given acceleration ($a$) as a function of distance. My approach is to model the situation with a differential equation that I can then solve. An object A is released, with initial velocity $0$, $d$ meters above a static body B. A accelerates towards (or away from) B with the acceleration $a(s)$. Is it correct to express the displacement as $$s(t+\Delta t)=s(t)+v(t)\Delta t$$ or as $$s(t+\Delta t)=s(t)+v(t)\Delta t+\frac{a(s(t))\Delta t^2}{2}~?$$ Note: Here I let $\Delta t$ tend towards $0$. Answer: The first equation is a first-order approximation which will be useful only if $\Delta t$ is very small (unless the acceleration is zero throughout the interval, in which case it is exactly correct). The second equation is a second-order approximation, which will be useful (that is, not too far out!) over larger time intervals $\Delta t$, but is exactly correct if the acceleration is constant.
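The difference between the two updates can be seen numerically. A sketch with illustrative numbers (constant free-fall acceleration, where the second-order update is exact and the first-order one accumulates an $O(\Delta t)$ error):

```python
def integrate(a, s0, v0, dt, steps, second_order=True):
    """March s(t), v(t) forward using the one-step updates from the answer.
    a is the acceleration as a function of displacement s."""
    s, v = s0, v0
    for _ in range(steps):
        a_s = a(s)
        # first-order: s += v dt ; second-order adds the (1/2) a dt^2 term
        s = s + v * dt + (0.5 * a_s * dt**2 if second_order else 0.0)
        v = v + a_s * dt
    return s

g = 9.8
exact = 10.0 - 0.5 * g * 1.0**2   # free fall from s0 = 10 m for t = 1 s
s2 = integrate(lambda s: -g, 10.0, 0.0, dt=0.1, steps=10, second_order=True)
s1 = integrate(lambda s: -g, 10.0, 0.0, dt=0.1, steps=10, second_order=False)
```

With constant acceleration, `s2` matches `exact` to rounding error, while `s1` is off by roughly $\tfrac12 g\,\Delta t\,T$.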
{ "domain": "physics.stackexchange", "id": 39940, "tags": "kinematics, displacement" }
Sub-exponential for Subset Sum on the total bit length of the input
Question: I was crawling the internet and found this in a paper (1): "[...], a variant of dynamic programming called dynamic dynamic programming has been shown to have a worst-case sub-exponential time complexity of $2^{O(\sqrt x)}$ when the total bit length $x$ of the input set is used as the complexity parameter." As I understand it, this claim violates the Exponential Time Hypothesis, and there have been some answers in the forum clearly stating that no such sub-exponential algorithm exists. It would be interesting to know what the experts in the area think about the proposed algorithm in terms of worst-case complexity. Direct link to the paper (2) explaining the dynamic dynamic approach. (1) Thomas E. O’Neil - An Empirical Study of Algorithms for the Subset Sum Problem (2) Thomas E. O’Neil, Scott Kerlin - A Simple $2^{O(\sqrt x)}$ Algorithm for PARTITION and SUBSET SUM Answer: This doesn't violate the exponential time hypothesis. The exponential time hypothesis says that k-SAT takes exponential time, i.e., $\Omega(2^{cn})$ for some constant $c>0$, where $n$ is the number of variables. So if you found a $O(2^{\sqrt{n}})$ time algorithm for k-SAT, that would violate the exponential time hypothesis, but a $O(2^{\sqrt{x}})$ time algorithm for subset sum does not: the parameter here is the total bit length $x$, and the standard reductions from k-SAT to subset sum blow up the bit length enough that no sub-exponential k-SAT algorithm would follow.
{ "domain": "cs.stackexchange", "id": 10048, "tags": "time-complexity, np-complete, runtime-analysis" }
For a particle to have physical mass, is it always necessary to have a mass term in the lagrangian?
Question: Since the self-energy adds to the bare mass defined in the Lagrangian, is it possible to create a physical particle mass from the self-energy alone, with no mass terms occurring in the Lagrangian? On a possibly related note, Wikipedia says: "The photon and gluon do not get a mass through renormalization because gauge symmetry protects them from getting a mass." Answer: It is possible for particles to get masses at loop level while they are absent at tree level, as long as there is no (non-anomalous) symmetry that forbids it. However, in most models, if there is a particle that doesn't have a bare mass, then it's due to a symmetry, which then protects it from getting masses at loop level. This is often a subtle topic due to chiral symmetry, \begin{equation} \psi \rightarrow e ^{i \gamma_5 \alpha}\psi \end{equation} which can protect fermions from getting masses under loop corrections. This symmetry is broken by a fermionic mass term, $\bar{\psi} \psi $, but can be conserved by the rest of the Lagrangian. In this case, if the mass term doesn't appear at tree level (due to some imposed symmetry), it can't appear at higher orders, since the chiral symmetry will protect it. This topic is often discussed in the context of neutrinos.
{ "domain": "physics.stackexchange", "id": 16069, "tags": "quantum-field-theory, mass, lagrangian-formalism" }
Deriving potential inside a conductive, neutral shell containing a charge
Question: The shell has radii $a$ and $b$ ($a<b$), and it has a point charge $q$ in the center. The field and potential in every possible region are given. I can't see how my professor derived the potential for the region $0<r<a$ (even though I can see why it makes sense). I've tried looking for the same result somewhere but I can't find any explanations. Useful links or any possible explanation are welcome. Answer: By definition, potential is a measure of how much work is necessary to bring a test charge $q$ from infinity to a particular point. In the case of bringing a test charge to somewhere in the region $0\leq r\leq a$, we first have to bring it through the field in the region $r>b$, and then through the field in the region $0\leq r\leq a$. That is, we exert a force equal and opposite to the force exerted by the field, $-E$, the entire way to our desired point. However, because $E$ varies continuously along the path, we compute this via an integral (or rather a series of integrals). From $r=\infty\to b$, we have, $$V=-\int_\infty^b\frac{1}{4\pi\epsilon_0}\frac{q}{r^2} \mathrm d r=\frac{q}{4\pi\epsilon_0}\frac{1}{r}\bigg\rvert_\infty^b=\frac{q}{4\pi\epsilon_0}\frac{1}{b}.$$ In the region $a\leq r\leq b$, the electric field is $0$, and so the work done is also zero. Finally, in the region $0\leq r\leq a$, to bring a test charge from the boundary $a$ to some radius $r$ within the region, we have $$V=-\int_a^r\frac{q}{4\pi\epsilon_0}\frac{1}{r^2}\mathrm d r=\frac{q}{4\pi\epsilon_0}\frac{1}{r}\bigg\rvert_a^r=\frac{q}{4\pi\epsilon_0}\left(\frac{1}{r}-\frac{1}{a}\right).$$ Summing the total work done, we have $$V(r)=\frac{q}{4\pi\epsilon_0}\left(\frac{1}{r}-\frac{1}{a}+\frac{1}{b}\right).$$ This is the potential at a point $r$ in the region $0\leq r\leq a$. On a more procedural note, the general idea is $$V=-\int_\infty^rE\,\mathrm d r,$$ where we might have to break up the integral into different pieces as the electric field varies over space.
Edit to address comment: I'll clarify what I meant by, test charge, $q$. If we place a test charge, $q$ at any point in an electric field, the force on $q$ is given by $qE$. In calculating work then, we would technically have to integrate the expression $-qE$, over our specified bounds, however, because $q$ (the test charge) is a constant, we can just ignore it. The expression that I arrived at in my solution ignores the test charge, and the $q$ in the solution refers only to the charge generating the field. If we multiply the final $V(r)$ by test charge, $q$ we end up with potential experienced by that test charge, but the general potential is just $V(r)$. Hope that clears up some stuff.
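The procedural recipe $V=-\int_\infty^r E\,\mathrm dr$ is easy to check numerically. A sketch with illustrative values for $q$, $a$, $b$ (none taken from the original problem), integrating the piecewise field inward from a large radius:

```python
import numpy as np

eps0 = 8.854e-12
k = 1 / (4 * np.pi * eps0)
q, a, b = 1e-9, 0.1, 0.2   # illustrative: 1 nC charge, shell radii in metres

def V_numeric(r, r_far=50.0, n=200_000):
    """V(r) = integral of E from r out to r_far, with E = kq/r'^2 in the
    cavity (r' <= a) and outside the shell (r' >= b), and E = 0 inside
    the conductor (a < r' < b). Trapezoid rule on a uniform grid."""
    rs = np.linspace(r, r_far, n)
    Es = np.where((rs <= a) | (rs >= b), k * q / rs**2, 0.0)
    return np.sum(0.5 * (Es[1:] + Es[:-1]) * np.diff(rs))

r = 0.05                                     # a point inside the cavity
V_closed = k * q * (1 / r - 1 / a + 1 / b)   # the closed form derived above
```

The numerical integral agrees with the closed form to well under a percent (the small residual comes from truncating the integral at `r_far` and from the field discontinuities at the shell surfaces).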
{ "domain": "physics.stackexchange", "id": 39604, "tags": "electric-fields" }
The SOTA of derivative-free optimization
Question: As titled, I want to ask what the SOTA of derivative-free algorithms is. I am not familiar with this area at all; the only derivative-free optimization algorithm I am familiar with is GA, and for others like Bayesian optimization I just know the name, but I don't even know what they do. The model I want to optimize is a bi-LSTM model with about 3 layers, which has an input shape of (timesteps~=1024, features=512) and an output shape of (timesteps, 256). What I also know is that the model is a little big, but the loss calculation is not differentiable, and RL algorithms are not suitable for me, so that led me down to this. Answer: The problem is not the input size but the model size. Indeed, derivative-free/zero-order optimization methods usually tend to estimate a descent direction that correlates with some notion of local gradient (which might happen not to exist because the loss is not differentiable). You can consider zero-order optimization as a "practical" finite-difference method: $$ f'(x) = \lim_{\,h\,\rightarrow 0} \frac{f(x+h) - f(x)}{h} $$ Thus, what these methods aim to do is to find a multidimensional direction $h$ that is a descent direction. At this point, you can see that in order for such a direction to span the whole parameter space, you need $N$ function evaluations (in the general case), so it will be very computationally expensive to optimize the problem if your bi-LSTM is very big in model size. However, nothing stops you from taking a direction that might not span the whole space. Having clarified that the problem is the model size, you can consider any evolutionary-strategy algorithm for such a problem; something like the cross-entropy method (CE) or CMA-ES should work fine for your problem. You can find improved versions in the citations of the papers on those methods, though they might lack an implementation in the programming language you are using.
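As an illustration of the evolutionary-strategy route the answer suggests, here is a minimal cross-entropy method sketch on a toy non-differentiable loss (all dimensions and hyperparameters are illustrative, not tuned for an actual bi-LSTM):

```python
import numpy as np

def cross_entropy_minimize(f, dim, iters=80, pop=300, elite_frac=0.2, seed=0):
    """Minimal cross-entropy method: sample a Gaussian population, keep the
    best (elite) samples, refit the Gaussian to them, repeat.
    Only function values of f are used; no gradients are ever taken."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = int(pop * elite_frac)
    for _ in range(iters):
        xs = rng.normal(mu, sigma, size=(pop, dim))
        elite = xs[np.argsort([f(x) for x in xs])[:n_elite]]
        mu = elite.mean(axis=0)
        # a small floor keeps the search from collapsing prematurely
        sigma = np.maximum(elite.std(axis=0), 1e-3)
    return mu

# Toy loss with a kink at the optimum (not differentiable there).
best = cross_entropy_minimize(lambda x: np.abs(x - 3.0).sum(), dim=3)
```

Despite never touching a gradient, the fitted mean converges to the optimum at 3 in every coordinate; this is the kind of method the answer means by CE/CMA-ES, just stripped to its core.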
{ "domain": "ai.stackexchange", "id": 4106, "tags": "optimization, state-of-the-art" }
Why do we call translations and time evolutions "operators" while they are not (linear) operators?
Question: I have been reading Sakurai's Modern Quantum Mechanics and I am confused by this: A translation operator $\mathscr T(\delta x)$ is defined as $\mathscr T(\delta x) |x \rangle = | x + \delta x \rangle$ and it is used to derive the momentum operator $\hat p$ by the expression $\mathscr T(\delta x) = \exp(- \mathrm i \hat p \delta x/\hbar)$. And similarly, to derive the Hamiltonian $\hat H$, the time evolution operator $\mathscr U(t, t_0)$ is introduced. Question: Notice that $\mathscr T(\delta x) | 0 \rangle = | \delta x\rangle \neq | 0 \rangle$; $\mathscr T(\delta x) |\lambda x \rangle = |\lambda x + \delta x \rangle \neq \lambda\mathscr T(\delta x) |x \rangle$ ($\lambda \in \mathbb C$); ... We know that the translations are not linear operators, but we still call them operators, and treat them as operators. We even find their Hermitian adjoints ($\mathscr T(\delta x)^\dagger \mathscr T(\delta x) \simeq \hat 1$, or "the infinitesimal translations are unitary"), which doesn't make any sense to me. The problem also occurs in discussing the time evolution operators, and Sakurai wrote: $\mathscr U(t, t_0)^\dagger \mathscr U(t, t_0) = \hat 1$, where the time evolution operators are not infinitesimal. So when we call $\mathscr T(\delta x)$ and $\mathscr U(t, t_0)$ "operators", what do we want to say? If they are not operators in $\mathrm{End} (\mathbb H)$ (endomorphisms, i.e. linear operators), where do they belong? Are they affine mappings in the affine group (I know little about this group)? And how do we treat their Hermitian adjoints? I would like a mathematically rigorous answer instead of an intuitive explanation, thank you! Answer: (a) Not all operators are linear operators. (b) Both spatial translations and time translations obey the property \begin{equation} \mathcal{O}(a |\psi_1\rangle + b | \psi_2 \rangle ) = a \mathcal{O} |\psi_1\rangle + b \mathcal{O}| \psi_2 \rangle \end{equation} and therefore are linear operators on Hilbert space.
Note that $|0\rangle \neq 0$ (ie the "0 ket" does not equal the "0 vector on Hilbert space"), this seems to be a confusion in your question.
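The linearity claim in (b) is easy to check concretely on a finite-dimensional toy model: discretize position to $N$ sites on a ring, so $\mathscr T$ becomes a permutation matrix (the ring discretization is an illustration, not Sakurai's construction):

```python
import numpy as np

N = 8

def ket(x):
    """Position ket |x> on a ring of N sites, as a standard basis vector."""
    v = np.zeros(N, dtype=complex)
    v[x % N] = 1.0
    return v

# Translation by one site: T|x> = |x+1>, a permutation matrix.
T = np.zeros((N, N))
for x in range(N):
    T[(x + 1) % N, x] = 1.0

a, b = 2.0 - 1.0j, 0.5j
psi1, psi2 = ket(0), ket(3)

lhs = T @ (a * psi1 + b * psi2)          # T acting on a superposition
rhs = a * (T @ psi1) + b * (T @ psi2)    # superposition of the images
```

Here `T @ ket(0)` equals `ket(1)`, not `ket(0)`, yet there is no conflict with linearity, because `ket(0)` is a basis vector rather than the zero vector; `T` is also unitary, since `T.T @ T` is the identity.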
{ "domain": "physics.stackexchange", "id": 71984, "tags": "quantum-mechanics, hilbert-space, operators" }
Decomposition of $H(z)$ as maximum-phase, minimum-phase
Question: The frequency response is: $$H(z) = 2-7z^{-1}+7z^{-2}-2z^{-3}$$ I see that it has $3$ zeros: $z_{01} = \frac 12$, $z_{02} = 2$, and $z_{03} = 1$; and $3$ poles at $$z = 0.$$ Now, I have to write it like: $$H(z) = H_{\rm min}(z) H_{\rm max}(z) H_{\rm uc}(z)$$ where $H_{\rm min}(z)$ is the minimum-phase frequency response, $H_{\rm max}(z)$ is the maximum-phase frequency response and $H_{\rm uc}(z)$ only has zeros on $\lvert z\rvert=1$. For $H_{\rm uc}(z)$, I have one zero on $\lvert z\rvert=1$, so: $$H_{\rm uc}(z) = z-1$$ Is that OK? For the minimum-phase part, I have all the poles/zeros that are inside the unit circle, so: $$H_{\rm min}(z) = \frac{z-1/2}{z^3}$$ For the maximum-phase part, I have all the poles/zeros that are outside the unit circle, so: $$H_{\rm max}(z) = z-2$$ But I also know that I can find $H_{\rm max}(z)$ as: $$H_{\rm max}(z) = H_{\rm min}\left(z^{-1}\right) z^{-M_i}$$ where $M_i$ is the number of zeros of $H_{\rm min}(z)$. So: $$H_{\rm max}(z) = \frac{z^{-1} - 1/2}{z^{-3}} = z^3 \left(z^{-1} - 1/2\right) = z^2 \left(z-1/2\right)$$ So I have $2$ different expressions for the same $H_{\rm max}(z)$. What am I doing wrong? Answer: So let's check that: $$H(z) = H_{\rm min}(z) H_{\rm max}(z) H_{\rm uc}(z) \tag{1}$$ where: $$ H_{\rm min}(z) = \frac{z-1/2}{z^3}\tag{2} $$ and $$H_{\rm max}(z) = z-2\tag{3}$$ and $$H_{\rm uc}(z) = z-1\tag{4}$$ Substituting $(2)$, $(3)$, and $(4)$ into $(1)$: \begin{align} H_{\rm min}(z) H_{\rm max}(z) H_{\rm uc}(z) &= \frac{z-1/2}{z^3} (z-2) (z-1)\\ &= \frac{z-1/2}{z^3} \left(z^2 - 3z + 2\right)\\ &= \frac{1}{z^3} \left(z^3 - 3z^2 + 2z - \frac{1}{2}\left(z^2 - 3z + 2\right)\right) \\ &= \frac{1}{z^3} \left( z^3 - \frac{7}{2} z^2 + \frac{7}{2} z - 1\right) \\ &= \frac{H(z)}{2}\\ &\not= H(z) \end{align} So something is wrong! (The product equals $H(z)/2$: the overall gain of $2$ is missing.)
Try: $$ H_{\rm min}(z) = 1-\frac{1}{2}z^{-1}\tag{2A} $$ and $$H_{\rm max}(z) = 1-2z^{-1}\tag{3A}$$ and $$H_{\rm uc}(z) = 2\left(1- z^{-1}\right)\tag{4A}$$ Now: \begin{align} H_{\rm min}(z) H_{\rm max}(z) H_{\rm uc}(z) &= \left(1-\frac{1}{2}z^{-1}\right) \left(1-2z^{-1}\right) \left(1-z^{-1}\right) 2\\ &= \left(1 - \frac{5}{2} z^{-1} + z^{-2}\right) \left(1-z^{-1}\right) 2\\ &= 2 - 5 z^{-1} + 2z^{-2} - \left(2z^{-1} - 5 z^{-2} + 2 z^{-3}\right)\\ &= 2 - 7 z^{-1} + 7 z^{-2} - 2z^{-3}\\ &= H(z) \end{align} I believe your interpretation of Oppenheim and Schafer (OS) is incorrect, and I can find no reference that says the poles have to be outside the unit circle for a maximum-phase system. This one and the one I referenced in the comments both mention only the zero locations, and my copy of OS says the same, mentioning only zero locations. I believe exercise 5.63 is in error. The definition used in the body of the book again does not mention pole locations. Also, having $$ H_{\rm max}(z) = H_{\rm min}\left(z^{-1}\right) z^{-M_i} $$ ensures that $H_{\rm max}$ is causal for FIR $H_{\rm min}$. So you have two options: either the exercise 5.69 definition is wrong, or the relationship between $H_{\rm max}$ and $H_{\rm min}$ is wrong.
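The corrected factorization can be verified by multiplying the coefficient sequences, since polynomial multiplication in $z^{-1}$ is just convolution:

```python
import numpy as np

# Coefficients in powers of z^{-1}: H(z) = 2 - 7 z^-1 + 7 z^-2 - 2 z^-3.
H    = [2, -7, 7, -2]
Hmin = [1, -0.5]   # 1 - (1/2) z^-1 : zero at z = 1/2, inside the unit circle
Hmax = [1, -2]     # 1 - 2 z^-1    : zero at z = 2, outside the unit circle
Huc  = [2, -2]     # 2 (1 - z^-1)  : zero on the unit circle, carries the gain 2

product = np.convolve(np.convolve(Hmin, Hmax), Huc)
```

The convolution reproduces `[2, -7, 7, -2]` exactly, confirming (2A)(3A)(4A) multiply back to $H(z)$.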
{ "domain": "dsp.stackexchange", "id": 3970, "tags": "filters, signal-analysis, filter-design, phase, minimum-phase" }
Input from multiple channels, merge and use in Nextflow
Question: I need to merge multiple channels to reuse them in two processes. So far I have tried the following:

channel.from([15,20,21]).into {k1;k2}
channel.from(['a','b','c']).into {m1;m2}
channel.from([1,2,3]).into {l1;l2}

para1 = m1.merge(k1).merge(l1)
para2 = m2.merge(k2).merge(l2)

process p1 {
    input:
    each m, k, l from para1

    """
    echo $m, $k, $l
    """
}

process p2 {
    input:
    each m, k, l from para2

    """
    echo $m, $k, $l
    """
}

The error I am getting:

Unknown process directive: '_in_each' Did you mean of these? each

If I use para1.view() I can see the merged elements as expected. Any help?

Answer: I don't think you can unpack tuples using the each qualifier like that. You can, however, unpack them in the script block:

process test {
    input:
    each triple from para1

    script:
    def (m, k, l) = triple
    """
    echo "${m} ${k} ${l}"
    """
}

Consider also that the merge operator is deprecated and will be removed from future Nextflow releases. Instead, you may need to join by index, for example:

def indexedChannel( items ) {
    return Channel.from( items.withIndex() ).map { item, idx -> tuple( idx, item ) }
}

k_ch = indexedChannel( [ 15, 20, 21 ] )
m_ch = indexedChannel( [ 'a', 'b', 'c' ] )
l_ch = indexedChannel( [ 1, 2, 3 ] )

m_ch
    .join( k_ch )
    .join( l_ch )
    .map { idx, m, k, l -> tuple( m, k, l ) }
    .into { para1; para2 }
{ "domain": "bioinformatics.stackexchange", "id": 1775, "tags": "nextflow" }
What is the implication of the Schmidt decomposition?
Question: According to the Schmidt decomposition, if I have a pure state $|\psi\rangle$ in the composite Hilbert space $AB$ (both $A$ and $B$ are Hilbert spaces of dimension $n$), then it can be written as $$|\psi\rangle = \sum_i \lambda_i |i_A\rangle |i_B\rangle$$ where $\{i_A\}$ and $\{i_B\}$ are orthonormal bases for the Hilbert spaces $A$ and $B$ respectively. The above holds true for any vector $|\psi\rangle$ (if I am not wrong), even if it's not normalized (if not normalized, then $\sum_i \lambda_i^2$ won't be equal to 1). Thus any vector of a space of dimension $n \times n$ is being written as a linear combination of only $n$ orthonormal vectors ($|i_A\rangle |i_B\rangle$ for $1 \le i \le n$), which should not be possible. Am I missing something or interpreting the Schmidt decomposition incorrectly? Answer: Denote $|\psi\rangle = \sum\limits_{i = 1}^m \sum\limits_{j = 1}^n h_{ij} |ij\rangle$ as $|\psi\rangle \rightarrow H = (h_{ij})_{m \times n}$. Then we have the following lemma: Lemma: Define a matrix $U$ (in the original basis) as a new setting for Alice and $V$ for Bob; then the state $|\psi\rangle$ in the basis of the new settings is $U^* H V^\dagger$. Proof: Denote the original bases of Alice and Bob both as $|0\rangle, |1\rangle$, and the new settings as $|0_a\rangle, |1_a\rangle$ and $|0_b\rangle, |1_b\rangle$ for Alice and Bob, respectively, so that $$ \begin{bmatrix} |0_a\rangle\\ |1_a\rangle \end{bmatrix} = U \begin{bmatrix} |0\rangle\\ |1\rangle \end{bmatrix}, \begin{bmatrix} |0_b\rangle\\ |1_b\rangle \end{bmatrix} = V \begin{bmatrix} |0\rangle\\ |1\rangle \end{bmatrix}. $$ In this way, the state is expressed as $$ |\psi\rangle = [|0\rangle, |1\rangle] H \begin{bmatrix} |0\rangle\\ |1\rangle \end{bmatrix} = [|0_a\rangle, |1_a\rangle] (U^\dagger)^T H V^\dagger\begin{bmatrix} |0_b\rangle\\ |1_b\rangle \end{bmatrix}. $$ Thus, in the basis of the new settings, the state is expressed as $U^* H V^\dagger$.
Linear algebra tells us that, given a matrix $H$, there are unitary matrices $U,V$ such that $U^* H V^\dagger$ is a diagonal matrix (this is the singular value decomposition). So, given any state $|\psi\rangle$, there are bases $\{|i_a\rangle\}, \{|i_b\rangle\}$ in which $|\psi\rangle$ is represented by a diagonal matrix ${\rm Diag}(\lambda_1,\lambda_2,\cdots,\lambda_n)$, that is, $$ |\psi\rangle = \sum_i \lambda_i |i_a\rangle |i_b\rangle. $$ Obviously, these bases may be different for different $|\psi\rangle$, which is why a single set of $n$ product vectors does not have to span the whole $n^2$-dimensional space.
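This diagonalization is exactly the singular value decomposition, which numpy makes easy to check (random state for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# An arbitrary (not necessarily normalized) pure state on C^n (x) C^n,
# stored as the coefficient matrix H with |psi> = sum_{ij} h_ij |i>|j>.
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

U, lam, Vh = np.linalg.svd(H)   # H = U diag(lam) Vh, with lam >= 0

# Rebuild the full n*n-dimensional vector from only n product terms:
# |psi> = sum_i lam_i |u_i>|v_i>, with |u_i> = U[:, i], |v_i> = Vh[i, :].
psi = sum(lam[i] * np.kron(U[:, i], Vh[i, :]) for i in range(n))
```

The reconstructed `psi` matches `H.reshape(-1)` exactly: the $n$ product vectors span only an $n$-dimensional subspace, but that subspace depends on $H$, so every state still gets its own decomposition.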
{ "domain": "physics.stackexchange", "id": 21610, "tags": "quantum-mechanics, quantum-information, many-body" }
Proof that a $2^n \times 2^n$ operator can be decomposed in terms of $2 \times 2$ operators
Question: What is the proof that any $2^n\times 2^n$ quantum operator can be expressed in terms of the tensor product of $n$ number of $2\times 2$ quantum operators acting on a single qubit space each? Answer: I presume you mean a $2^n\times 2^n$ quantum operator, $U$? Let's assume we can write $$ U=\sum_{x,y\in\{0,1\}^n}U_{xy}|x\rangle\langle y|. $$ All we have to do is show that we can construct any $|x\rangle\langle y|$ using Pauli operators. But this is just the same as $$ \bigotimes_{i=1}^n|x_i\rangle\langle y_i|, $$ so provided I can create any $|x_i\rangle\langle y_i|$ for $x_i,y_i\in\{0,1\}$ using Pauli matrices, I'm done. It's a simple exercise to check $$ |0\rangle\langle 0|=(\mathbb{I}+Z)/2\qquad |1\rangle\langle 1|=(\mathbb{I}-Z)/2 $$ and $$ |0\rangle\langle 1|=(X+iY)/2\qquad |1\rangle\langle 0|=(X-iY)/2. $$
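The four identities at the end, and the tensor-product step, are quick to verify numerically:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# |x><y| built from Pauli matrices, exactly as in the answer.
pauli_form = {
    (0, 0): (I2 + Z) / 2,
    (1, 1): (I2 - Z) / 2,
    (0, 1): (X + 1j * Y) / 2,
    (1, 0): (X - 1j * Y) / 2,
}

e = np.eye(2)
for (x, y), M in pauli_form.items():
    # compare against the outer product definition of |x><y|
    assert np.allclose(M, np.outer(e[x], e[y]))

# A two-qubit |01><10| is then just the tensor product of one-qubit pieces:
M2 = np.kron(pauli_form[(0, 1)], pauli_form[(1, 0)])
```

Here `M2` equals the $4\times 4$ matrix with a single 1 in row $|01\rangle$ (index 1) and column $|10\rangle$ (index 2), matching $\bigotimes_i |x_i\rangle\langle y_i|$.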
{ "domain": "quantumcomputing.stackexchange", "id": 581, "tags": "quantum-gate, gate-synthesis, linear-algebra" }
Intermediate/Coding representation for Levenshtein Distance
Question: The phrases: The quick brown fox jumps over the lazy dog [A] and The uick brown fox jumps oower the lazy dog [B] can be compared using the Levenshtein distance algorithm to determine similarity, by calculating the minimum number of single-character additions, deletions, or replacements that are necessary to transform A into B. I'm interested to know if there is an intermediate representation, or possibly a coding scheme, for the Levenshtein distance. Not for use between two phrases, but just a coding applied to a single phrase such that character index does not affect comparisons. In B, the 'q' is missing compared to A. A normal string comparison would match 'The ' and then fail at 'uick brown fox...' merely because of a single-character offset. The Levenshtein distance could be used to compare it to the original phrase A for a more forgiving comparison, but in my case, I won't have two phrases, just one. So, I'm looking for some way of unambiguously coding a sentence in packets of information, little atoms of truth (I'm thinking one packet per character?) that maintain a local ordering and so on, but if some of the packets are wrong, it doesn't affect later characters. Each unique phrase should map to one and only one unique encoding/intermediate representation, giving sets A' and B'. Computing the Levenshtein distance of A and B would then be the same as comparing the sets A' and B'. Alternatively, if this problem does not have a solution (and it surely maps to a well-trodden area of research, so I wouldn't be surprised), I would welcome some convincing argument or proof of its unsolvability. Answer: There's indeed some research in this vein for the edit distance, with some positive and some negative results. (I might not be understanding the question precisely, so I'll try to answer the questions I know how to answer.)
Here's one interpretation: (I1) you want to compute, for each string A, a set f(A) such that, for any two strings A, B, the edit distance ed(A,B) is equal to the size of the symmetric difference between f(A) and f(B) (in some sense the opposite of the intersection of the two sets). This question has been well studied (though it is far from solved), and is known as the question of embedding edit distance into Hamming distance ($\ell_1$). In particular, achieving (I1) precisely is not possible, but it is possible up to some approximation (i.e., we approximate ed(A,B) up to some factor): Krauthgamer and Rabani prove that $\Omega(\log n)$ approximation is required ($n$ is the length of the strings); Ostrovsky and Rabani prove that $2^{O(\sqrt{\log n\log\log n})}$ approximation is achievable. Here's a slightly more liberal interpretation: (I2) we produce some sketch f(A) for each string A, and we estimate the distance ed(A,B) via some calculation on f(A), f(B) (i.e., not necessarily by taking the symmetric difference). The point is to have f(A) much shorter than the length of the original string, $n$ (otherwise, one has a trivial solution by f(A)=A). This interpretation (I2) is more general than (I1) (i.e., easier to achieve), though we do not know of any strictly better solutions. There's some partial progress where the estimation of ed(A,B) is done from f(A) and B (i.e., one string, say B, is fully known). There's surely more literature in this vein, but let me know first if this is anywhere close to what you meant.
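For reference, the edit distance the question starts from is computed with the classic dynamic program; on the two phrases above it gives 3 (drop 'q', insert 'o', substitute 'v' for 'w'):

```python
def levenshtein(a, b):
    """Classic dynamic program: O(len(a) * len(b)) time, two rows of memory."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute (free on match)
        prev = cur
    return prev[-1]

A = "The quick brown fox jumps over the lazy dog"
B = "The uick brown fox jumps oower the lazy dog"
d = levenshtein(A, B)   # -> 3
```

It is exactly this index-shift-tolerant alignment that a fixed per-character "packet" encoding cannot reproduce via plain set intersection, which is why the embedding results above only achieve it approximately.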
{ "domain": "cstheory.stackexchange", "id": 292, "tags": "string-matching, edit-distance" }
Relativity and components of a 1-form
Question: I have a question regarding Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973), Gravitation, ISBN 978-0-7167-0344-0. It is a book about Einstein's theory of gravitation. On page 313, exercise 13.2, "Practice with Metric", presents a four-dimensional manifold in spherical coordinates + $v$ that has the line element $$ds^2 = - (1-2 M/r) dv^2 + 2 dv dr+r^2 (d\theta^2 + \sin^2 \theta d\phi^2).$$ Part (b) asks: Define a scalar field $t$ by $$t \equiv v - r - 2M \ln((r/2M)-1).$$ What are the covariant and contravariant components of the 1-form $dt$ (denoted $\tilde u$)? What is the squared length $u^2$ of the corresponding vector? Show that $u$ is timelike in the region $r > 2M$. My attempt: First differentiate to get the 1-form $dt$: $$dt = dv - dr - \frac{dr}{2M} \cdot \frac{1}{(r/2M)-1} = dv - dr \left(1+\frac{1}{r-2M}\right)$$ However, the solution says that $u_r = -1/(1-2M/r)$, which is not equivalent to what I wrote. Where is my mistake? I understand that the squared length of $u$ comes from the nonzero $v$ pair of covariant and contravariant components, $1\cdot \left(-1/(1-2M/r)\right)$, while the $r$, $\phi$ and $\theta$ terms have zero contravariant components. Now, to prove that it is timelike in a certain region, do I need to take the dot product of $dt$ with the spatial components and find zero? For the angles it seems rather trivial, but for $r$ I am not sure how to show that $dt \cdot dr = 0$. Could someone help me please? Answer: You forgot that the $2M$ multiplying $\ln((r/2M)-1)$ cancels the $1/(2M)$ coming from the chain rule: $$\mathrm dt=\mathrm dv-\mathrm dr-\frac{2M}{(r/2M)-1}\cdot\frac{1}{2M}\,\mathrm dr$$ $$\mathrm dt=\mathrm dv-\left(1+\frac{1}{(r/2M)-1}\right)\mathrm dr$$ $$\mathrm dt=\mathrm dv-\frac{1}{1-2M/r}\,\mathrm dr$$ You now have $u_v = 1$, $u_r=-1/(1-2M/r)$, $u_\theta=u_\varphi=0$. Raising indices with the inverse metric (here $g^{vv}=0$, $g^{vr}=g^{rv}=1$, $g^{rr}=1-2M/r$): $$u^v=g^{v \mu}u_\mu=1\cdot u_r=-1/(1-2M/r)$$ and $$u^r=g^{r \mu}u_\mu=1\cdot u_v+(1-2M/r)\cdot u_r=1-1=0.$$ Hence $$u^{\mu} u_{\mu}= -1/(1-2M/r),$$ which is negative in the region $r>2M$, and so $u$ is timelike there.
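A quick finite-difference check of the corrected component (with $M = 1$ chosen for illustration):

```python
import math

M = 1.0

def t_field(v, r):
    """The scalar field t = v - r - 2M ln(r/2M - 1), valid for r > 2M."""
    return v - r - 2 * M * math.log(r / (2 * M) - 1)

r, h = 5.0, 1e-6
# u_r = dt/dr at fixed v, by central difference, vs the closed form above.
u_r_numeric = (t_field(0.0, r + h) - t_field(0.0, r - h)) / (2 * h)
u_r_closed = -1 / (1 - 2 * M / r)

# With u_v = 1 and u^r = 0, the squared length is u^mu u_mu = u_r,
# which is negative (timelike) whenever r > 2M.
u_squared = u_r_closed
```

The numerical derivative agrees with $-1/(1-2M/r)$, and its sign confirms the timelike character for $r > 2M$.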
{ "domain": "physics.stackexchange", "id": 59594, "tags": "differential-geometry, metric-tensor, tensor-calculus, relativity" }
Explanation of Width of Slit Being Proportional to Light Intensity
Question: In this question Immortal Player states that: The intensity of light due to a slit (source of light) is directly proportional to width of the slit. The second answer posted by R C Mishra states that: The amplitude should be proportional to the width. While I am inclined to believe that the first statement is correct due to other sources I found online, I do not understand why the width of the slit would be proportional to the intensity. Intuitively, I would have thought that the width of the slit would be proportional to the amplitude, since a wider opening should allow a taller wave (greater amplitude) to pass through, and the height of the wave should be directly proportional to the slit width. Could someone please explain to me what is wrong with my intuition, and why the width of the slit is actually proportional to the intensity of the light. Answer: I'm getting conflicting answers from various sources on the internet, and stackexchange isn't helping. Unfortunately, I'm thinking R C Mishra is right and the accepted answer there is wrong (as of when this is posted). If I'm wrong, I'd like someone to point out why. Initially, I made the following comment: My naive approach is to think in terms of conservation of energy. When you halve the slit, only half the energy gets through, so the intensity is halved. Amplitude, on the other hand, does not obey any conservation law. Emphasis on naive here. Most of what I said was true, but the conclusion about intensity is false. My comment was incomplete because it left out the effects of single-slit diffraction. Short Intuitive Answer. Amplitude is additive, so doubling the slit width doubles the amplitude at the middle of the projected pattern. Intensity does not follow any straightforward additivity law! Instead, you need to use conservation of energy (which is not the same thing). 
In the case of a single-slit, you have one factor of 2x because the slit is twice as wide, and in addition there is another 2x factor because the diffraction pattern is twice as narrow. Therefore, conservation of energy + diffraction considerations tell you there is 4x more intensity at the middle of the pattern. I find it really interesting how both reasonings, despite being so distinct, end up giving you the exact same conclusion. Longer Intuitive Answer. For amplitude, remember that amplitude is additive. This means that if region 1 contributes amplitude $a_{1}$ to point $P$, and region 2 contributes amplitude $a_{2}$ to point $P$, the result will be amplitude $a_{1} + a_{2}$ at point $P$. (Note $a_{1}, a_{2}$ can be negative or positive, so it's not as straightforward as you might expect since cancellation may occur.) Since the light from the single-slit is projected to a wall that is very far away from the slits, we may assume all distances from each point at the slit to the middle point of the pattern are approximately the same. As a result, if a planewave is sent through the slit, all points contribute the same amplitude. Since doubling the slit width means you're doubling the number of points at the slit, additivity of amplitude means you will double the amplitude at the middle point of the pattern. Since amplitude (at the middle point of the pattern) is directly proportional to slit width, intensity (at the middle point of the pattern) is directly proportional to the square of the slit width. However, we can actually reason about intensity directly as well! For intensity, we think in terms of conservation of energy (as I mentioned in my original comment), but also remember to factor in diffraction and interference. When you double the slit, twice the energy gets through, so you'd expect the intensity at the middle of the pattern to be doubled. 
However, making the slit wider causes the diffraction pattern to become narrower, so the energy is twice as focused at the center. Therefore, there is another factor of 2x for intensity, and therefore the intensity (at the middle point of the pattern) is quadrupled. Math Analysis. When planar light passes through a slit, it creates a single-slit pattern. (For reference, the double-slit pattern occurs when two single-slit patterns interfere and create dark fringes within the middle spot, as you can see below.) We'll only look at the single-slit scenario. The way to derive the single-slit pattern is to use the Fraunhofer diffraction equation (if you are interested, I can explain where this equation comes from). Based on the derivation here, the amplitude function for a single-slit of width $w$ is $$ A(\theta) = A_{0}w \operatorname{sinc} \left( \tfrac{\pi w\sin\theta}{\lambda} \right). $$ I changed a few symbols from the wikipedia page. Here, $\theta$ is the angle from the slit to the point on the screen you are looking at, $A(\theta)$ is the amplitude function, $A_{0}$ is a constant, and $\lambda$ is the wavelength. The middle of the screen is at $\theta = 0$. When we plug this into our function, we get $A(0) = A_{0}w$. As we can see, the amplitude at the middle of the screen is directly proportional to the slit width. The intensity is the square of the amplitude, so we have $$ I(\theta) = A_{0}^{2}w^{2} \operatorname{sinc}^{2} \left( \tfrac{\pi w\sin\theta}{\lambda} \right). $$ At the middle of the screen we have $I(0) = A_{0}^{2}w^{2}$. We can see the intensity is proportional to the square of the slit width.
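The two factors of 2 can be checked directly against the formulas above. This sketch (my own addition; the wavelength and slit width are arbitrary sample values) evaluates the single-slit expressions at the middle of the pattern and at the first minimum:

```python
import math

def sinc(x):
    """Unnormalized sinc: sin(x)/x, with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(x) / x

def amplitude(theta, w, wavelength, a0=1.0):
    """Single-slit Fraunhofer amplitude A(theta) = A0 w sinc(pi w sin(theta) / lambda)."""
    return a0 * w * sinc(math.pi * w * math.sin(theta) / wavelength)

def intensity(theta, w, wavelength):
    return amplitude(theta, w, wavelength) ** 2

lam = 500e-9   # assumed wavelength (green light)
w = 10e-6      # assumed slit width, 10 micrometres

# At the middle of the pattern (theta = 0): doubling w doubles A, quadruples I.
a_ratio = amplitude(0.0, 2 * w, lam) / amplitude(0.0, w, lam)
i_ratio = intensity(0.0, 2 * w, lam) / intensity(0.0, w, lam)
print(a_ratio, i_ratio)  # 2.0 4.0
assert abs(a_ratio - 2.0) < 1e-12
assert abs(i_ratio - 4.0) < 1e-12

# The first minimum sits at sin(theta) = lambda / w, so doubling the width
# halves the angular width of the central peak (the "twice as focused" factor).
theta_min = math.asin(lam / w)
assert intensity(theta_min, w, lam) < 1e-20
```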
{ "domain": "physics.stackexchange", "id": 64486, "tags": "quantum-mechanics, optics, waves, double-slit-experiment" }
Optimizing calls for reauthentication by caching tokens and cookies
Question: I am currently developing a REST service in ASP.NET Core 2.2 that acts as a wrapper upon a REST API provided by a reporting solution. The reporting systems REST API authenticates the user based on a username and a password and subsequent requests must contain both a token and the cookies received during the authentication process. My primary goal is optimize the traffic by caching tokens and cookies for similar requests (that use the same reporting project which has its own REST API instance) and reuse them until their expiration. My secondary goal is to write a class that is very easy to consume by the controller which does not have even know about authentication and retrials. I am using Api Client Generation Tools to automatically generate the code used for actual REST API calls (type MsiRestClient). The client service public class MsiRestClientService : IMsiRestClientService { #region Constants private const int TokenRefreshCoolDownPeriod = 30; // seconds #endregion #region Variables private static readonly object LockSync = new object(); private static readonly SemaphoreSlim GetNewTokenSemaphore = new SemaphoreSlim(1, 1); /// <summary> /// stores all the tokens generated for each project source /// </summary> private static readonly Dictionary<string, TokenInfo> MsiProjectSourceToTokenMap = new Dictionary<string, TokenInfo>(); private static readonly Dictionary<string, DateTime> LastSuccessfulTokenFetchTimestamps = new Dictionary<string, DateTime>(); #endregion #region Properties // this is used to provide information about current reporting instance private IMsiProjectSourceService MsiProjectSourceService { get; } private ILoggingService Logger { get; } #endregion #region Constructor public MsiRestClientService(ILoggingService logger, IMsiProjectSourceService msiProjectSourceService) { Logger = logger; MsiProjectSourceService = msiProjectSourceService; } #endregion #region Private methods private TokenInfo GetProjectSourceTokenInfo(string 
projectSourceName) { lock (LockSync) { return MsiProjectSourceToTokenMap.ContainsKey(projectSourceName) ? MsiProjectSourceToTokenMap[projectSourceName] : null; } } private void SetProjectSourceTokenInfo(string projectSourceName, TokenInfo token) { lock (LockSync) { MsiProjectSourceToTokenMap[projectSourceName] = token; } } private DateTime? GetLastSuccessfulTokenFetchTimestamp(string projectSourceName) { lock (LockSync) { return LastSuccessfulTokenFetchTimestamps.ContainsKey(projectSourceName) ? LastSuccessfulTokenFetchTimestamps[projectSourceName] : (DateTime?) null; } } private void SetLastSuccessfulTokenFetchTimestamp(string projectSourceName, DateTime dateTime) { lock (LockSync) { LastSuccessfulTokenFetchTimestamps[projectSourceName] = dateTime; } } private (MsiRestClient, CookieContainer) GetMsiRestClientInfo(string projectSourceName, CookieContainer cookies = null) { // using provided cookies if any, otherwise creating new ones CookieContainer actualCookies = cookies ?? new CookieContainer(); var msiHttpClientHandler = new HttpClientHandler { CookieContainer = actualCookies }; var msiHttpClient = new HttpClient(msiHttpClientHandler); var msiRestClient = new MsiRestClient(msiHttpClient) { // this is required before the generator does not fetch the URL correctly BaseUrl = MsiProjectSourceService.GetMsiProjectSourceRestApiUrl(projectSourceName) }; if (string.IsNullOrWhiteSpace(msiRestClient.BaseUrl)) throw new ArgumentException($"No MSI Rest Api URL found for project source {projectSourceName}"); return (msiRestClient, actualCookies); } private async Task<bool> RefreshToken(string projectSourceName) { // do not refresh if the token if it has just been successfully been refreshed recently // this is done to avoid mass-refresh when multiple clients want to use the service to query MSI var credentials = MsiProjectSourceService.GetMsiProjectSourceCredentials(projectSourceName); if (credentials == null) { Logger.LogError($"Failed to get credentials for project source 
{projectSourceName}"); return false; } await GetNewTokenSemaphore.WaitAsync(); try { DateTime lastSuccessfulTokenFetchTimestamp = GetLastSuccessfulTokenFetchTimestamp(projectSourceName) ?? new DateTime(2000, 1, 1); int interval = (int) (DateTime.Now - lastSuccessfulTokenFetchTimestamp).TotalSeconds; if (interval < TokenRefreshCoolDownPeriod) return true; var (msiRestClient, cookies) = GetMsiRestClientInfo(projectSourceName); var authData = new AuthRequest { Username = credentials.Username, Password = credentials.Password }; try { await msiRestClient.PostLoginAsync(authData); } //TODO: replace with ApiException when NullReferenceException is solved catch (Exception exc) { Console.WriteLine("Failed to authenticate for MSI project source: " + exc); throw; } TokenInfo ti = new TokenInfo {Token = msiRestClient.Token, Cookies = cookies}; SetProjectSourceTokenInfo(projectSourceName, ti); SetLastSuccessfulTokenFetchTimestamp(projectSourceName, DateTime.Now); return true; } finally { GetNewTokenSemaphore.Release(); } } // checks if token information is available and reauthenticates if needed. Also, allows to forcefully reauthenticate (e.g. caller knows about a failure) private async Task<TokenInfo> EnsureTokenInfo(string projectSourceName, bool force) { var tokenInfo = GetProjectSourceTokenInfo(projectSourceName); if (force || tokenInfo == null) { await RefreshToken(projectSourceName); tokenInfo = GetProjectSourceTokenInfo(projectSourceName); } if (tokenInfo == null) { throw new ApplicationException( $"Failed to get cached info for project source {projectSourceName}. 
Should not happen since token info was just refreshed"); } return tokenInfo; } private string HandleRestApiCallException(Exception e, int trial) { string errorMessage = $"Unexpected exception during Rest Api Call {trial}"; if (e is ApiException apiExc) { return $"ExecuteWithTokenRefresh failed #{trial}: Response = {apiExc.Response}, Code = {apiExc.StatusCode}"; } if (e is ArgumentNullException) { errorMessage = $"ExecuteWithTokenRefresh failed #{trial}: {e.Message}"; Logger.LogInfo(errorMessage); return errorMessage; } if (e is NullReferenceException) { errorMessage = "Null reference exception received while executing ExecuteWithTokenRefresh - did you fix ApiException code?"; Logger.LogError(errorMessage); return errorMessage; } return errorMessage; } private async Task<ValidationResult<TRes>> ExecuteWithTokenRefresh<TRes>(string projectSourceName, Func<MsiRestClient, string, Task<TRes>> requestFunc) { var tokenInfo = await EnsureTokenInfo(projectSourceName, false); // creating a REST client based on data got from authentication (including cookies) var (msiRestClient, _) = GetMsiRestClientInfo(projectSourceName, tokenInfo.Cookies); try { var result = await requestFunc(msiRestClient, tokenInfo.Token); // no exception means that it successfully completed return new ValidationResult<TRes> {Payload = result}; } catch (Exception e) { HandleRestApiCallException(e, 1); } // error is most probably caused by an authentication / transient REST service -> retrying tokenInfo = await EnsureTokenInfo(projectSourceName, true); var (msiRestClientBis, _) = GetMsiRestClientInfo(projectSourceName, tokenInfo.Cookies); string errorMessage; try { var result = await requestFunc(msiRestClientBis, tokenInfo.Token); // no exception means that it successfully completed return new ValidationResult<TRes> { Payload = result }; } catch (Exception e) { errorMessage = HandleRestApiCallException(e, 2); } return new ValidationResult<TRes> { IsError = true, Message = errorMessage }; } #endregion 
#region Public methods public async Task<TokenInfo> GetTokenInfo(string projectSourceName) { bool refreshResult = await RefreshToken(projectSourceName); if (!refreshResult) return null; return GetProjectSourceTokenInfo(projectSourceName); } // all actual calls that do not deal with authentication or retrial simply use ExecuteWithTokenRefresh to wrap the actual call public async Task<ValidationResult<SessionInfo>> GetCurrentUserSessionInfo(string projectSourceName) { return await ExecuteWithTokenRefresh(projectSourceName, async (msiRestClient, token) => await msiRestClient.SessionSessionIdUserInfoGetAsync(token)); } #endregion } Usage [HttpGet("[action]")] public async Task<ActionResult<ValidationResult<SessionInfo>>> GetCurrentUserSessionInfo(string localName = null) { Logger.LogInfo("Test/GetCurrentUserSessionInfo called"); var result = await MsiRestClientService.GetCurrentUserSessionInfo(localName); return new ActionResult<ValidationResult<SessionInfo>>(result); } Am I on the right track? Answer: Review Don't use regions to group members by type. This is redundant grouping. (Regions pattern or anti-pattern?) Use proper naming conventions and casing of variables. LockSync is generally called syncRoot. GetNewTokenSemaphore indicates a method name, rename it to newTokenMutex. It's a mutex because you use the semaphore as a mutex. Prefer TryGetValue over the two-phase ContainsKey + Indexer lookup on a Dictionary. Refactor GetProjectSourceTokenInfo and GetLastSuccessfulTokenFetchTimestamp to use this method instead. GetMsiRestClientInfo is a factory method, so rename it to CreateMsiRestClientInfo. GetMsiRestClientInfo creates instances of HttpClient. This class uses a socket connection and is IDisposable to manage its connection with it. But you never dispose instances of this class. Also, creating instances all the time might lead to an influx in socket connections. 
(HttpClient Considerations) I suggest also providing a cache of instances, keyed by the cookies, together with a dispose strategy. RefreshToken mixes sandbox-style failure (return false) with exceptional failure (throw); there is no clear specification of what this method should return and when. It seems a mess. Using DateTime.Now to validate cache expiration is bad practice. Prefer a strategy that does not rely on your system's local time; an option is Stopwatch.
{ "domain": "codereview.stackexchange", "id": 35334, "tags": "c#, asynchronous, rest, asp.net-core, asp.net-core-webapi" }
Download Progress Calculator
Question: Here is my progress calculator class. It is used for downloads/uploads. It takes bytes in and calculates the percentage(to the nearest 5%) that should show on a progress bar. Please let me know how I could improve it. Thanks a lot!! public class ProgressCalculator { private int totalAmount; private int progressAmount; private int amountProgressItems; private int progressBarPercentage; //returns if the progress bar needs to change public boolean progress(int amount) { progressAmount = progressAmount + amount; return updateProgressbarPercentage(); } public int getCurrentValue() { return progressBarPercentage; } //returns if the progressbar needs to change public boolean addProgressItem(int itemAmount) { totalAmount = totalAmount + itemAmount; amountProgressItems++; return updateProgressbarPercentage(); } //returns true if has been updated; private boolean updateProgressbarPercentage() { int newProgressBarPercentage = round(calculateProgressPercentage()); if (progressBarPercentage == newProgressBarPercentage) { return false; } else { progressBarPercentage = newProgressBarPercentage; return true; } } public int getAmountOfItems() { return amountProgressItems; } public boolean removeProgressItem(int byteAmount) { if (amountProgressItems > 0) { totalAmount = totalAmount - byteAmount; amountProgressItems--; } return updateProgressbarPercentage(); } private int calculateProgressPercentage() { double x = progressAmount; double y = totalAmount; double result = (x / y) * 100; return (int) result; } private int round(int num) { int temp = num % 5; if (temp < 3) return num - temp; else return num + 5 - temp; } public void clear() { totalAmount = 0; progressAmount = 0; amountProgressItems = 0; progressBarPercentage = 0; } } Answer: It's really hard to tell what this class does. From what you say, I feel like you have a number of TransferTasks and want to have a sort of ProgressTracker for them. 
So, as for me, your goal is to make the user's code look like this: ProgressTracker progressTracker = new ProgressTracker(); progressTracker.setListener(new ProgressTrackerListener() { @Override public void onProgressChanged(double progressPercentage) { // TODO: update your UI here } }); ... TransferTask transferTask = new UploadTask("/home/jiduvah/1.txt"); progressTracker.track(transferTask); transferTask.start(); Why? Because it doesn't need any comments to describe what happens here: you just write it in English. Update, in case you're absolutely sure you like your approach. First, in this code: private int round(int num) { int temp = num % 5; if (temp < 3) return num - temp; else return num + 5 - temp; } I see magic numbers like 5 and 3. It looks like it does exactly what you need it to do, and you've even written about the meaning of 5, but 3 is still a mystery. Then: double x = progressAmount; double y = totalAmount; double result = (x / y) * 100; return (int) result; When you divide int by int, you get int. When you divide int by double or double by int, you get double. So you may just write: double result = 100 * progressAmount / (double)totalAmount; Regarding the whole idea, I'm not sure whether this code works at all, because it's hard to understand how one should use it.
{ "domain": "codereview.stackexchange", "id": 1976, "tags": "java, android" }
Push, Delete and Print stack elements
Question: I've tried asking about the performance on the HackerRank discussion forum, but it didn't work out. The task is to write a program with three operations: 1 x Push the element x onto the stack. 2 Delete the element present at the top of the stack. 3 Print the maximum element in the stack. The first input line is the number of lines in the program; all subsequent lines are one of the three instructions. Sample Input: 10 1 97 2 1 20 2 1 26 1 20 2 3 1 91 3 Sample Output: 26 91 My Solution: data = [] for _ in range(int(input())): ins = input().split() if ins[0] == '1': data.append(int(ins[1])) elif ins[0] == '2': data.pop() else: print(max(data)) It gets slow with input sizes of 1000 elements or so; how could I speed this up? Answer: Try tracking the current maximum, otherwise frequent occurrences of 3 will push your run time towards \$\mathcal{O}(n^2)\$. If you take a closer look at what your input actually means, you will notice that smaller values being pushed onto the stack have no significance if a greater value has been pushed previously. So for every fill level of the stack, you already know the corresponding maximum at the time you push onto the stack. Use that knowledge: current_max = [] for _ in range(int(input())): ins = input().split() if ins[0] == '1': new_max = int(ins[1]) if current_max and new_max < current_max[-1]: new_max = current_max[-1] current_max.append(new_max) elif ins[0] == '2': current_max.pop() elif ins[0] == '3': print(current_max[-1]) By storing the maximum instead of the raw value on the stack, you can always access the current maximum directly. Just don't forget to handle the special case when the stack is empty, so the new value will always be the maximum.
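For reference, here is the same idea wrapped in a function and run against the sample input from the question (a small harness of my own; the leading count line is omitted since the list length supplies it):

```python
def run_program(instructions):
    """Process '1 x' / '2' / '3' instructions, tracking the running maximum
    per stack level so every maximum query is O(1)."""
    current_max = []
    out = []
    for ins in (line.split() for line in instructions):
        if ins[0] == '1':
            new_max = int(ins[1])
            if current_max and new_max < current_max[-1]:
                new_max = current_max[-1]
            current_max.append(new_max)
        elif ins[0] == '2':
            current_max.pop()
        else:
            out.append(current_max[-1])
    return out

# Sample input from the question (the leading count line omitted).
sample = ["1 97", "2", "1 20", "2", "1 26", "1 20", "2", "3", "1 91", "3"]
result = run_program(sample)
print(result)  # [26, 91]
assert result == [26, 91]
```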
{ "domain": "codereview.stackexchange", "id": 22005, "tags": "python, performance, programming-challenge, python-3.x" }
How can a moving rod in homogeneous magnetic field have changing enclosed flux?
Question: A conducting rod XY and a conducting rectangular loop ABCD are placed in an infinitely wide region with a homogeneous magnetic field (not changing with respect to time). They are moved perpendicularly to the magnetic field as shown below. The induced emf in ABCD is zero because there is no changing enclosed flux. I cannot see how the rod XY can have a non-zero induced emf. It is a bit counterintuitive, as in my mental model both have no changing enclosed flux. Answer: Rod $XY$ has a motional emf (10.2), $BLv$, induced in it; let us assume (as the direction of the magnetic field is not specified) that end $Y$ is at the higher potential relative to end $X$. In the second example, the magnitude of the motional emf due to rods $AB$ and $DC$ is the same, $BLv$, and so the net emf induced in the loop is zero, as ends $B$ and $C$ are at a higher potential than ends $A$ and $D$.
{ "domain": "physics.stackexchange", "id": 52334, "tags": "electromagnetism" }
Curious about an old algorithm which calculates modular inverse
Question: I am not sure if I should ask this question here or somewhere else. In fact, I initially asked my question at mathoverflow.net, but it was marked as off-topic. Background: I was searching through random mathematics papers related to cryptography and I came across this paper (page 3). I just read the abstract and the algorithm itself; I don't understand Chinese. It offers a new method to find modular inverses. It has some interesting properties that I observed: after each iteration of the loop, $x_{11} * x_{22} + x_{12} * x_{21} = m$, which is useful to validate the result during each iteration; and the algorithm terminates in an even number of steps for some unknown reason. In the abstract, the author says this method was invented by the mathematicians cited there. My question is: why does this algorithm always terminate in an even number of steps (i.e. why is the number of iterations of the loop always even)? Algorithm to calculate $a^{-1} (\bmod m)$: $\text{xgcd}(a, m):$ $\quad x_{11} \leftarrow 1, x_{21} \leftarrow 0, x_{12} \leftarrow a, x_{22} \leftarrow m$ $\quad \text{While }(x_{12} > 1) \text{ do}$ $\quad \quad \text{If }(x_{22} > x_{12}) \text{ then}$ $\quad\quad\quad\quad q \leftarrow \Big\lfloor\frac{x_{22} - 1}{x_{12}}\Big\rfloor$ $\quad\quad\quad\quad r \leftarrow x_{22} - q ~x_{12}$ $\quad\quad\quad\quad \begin{pmatrix}x_{11} & x_{12}\\x_{21} & x_{22}\end{pmatrix} \leftarrow \begin{pmatrix}x_{11} & x_{12}\\q~x_{11} + x_{21} & r\end{pmatrix} $ $\quad\quad \text{If }(x_{12} > x_{22}) \text{ then}$ $\quad\quad\quad\quad q \leftarrow \Big\lfloor\frac{x_{12} - 1}{x_{22}}\Big\rfloor$ $\quad\quad\quad\quad r \leftarrow x_{12} - q~x_{22}$ $\quad\quad\quad\quad \begin{pmatrix}x_{11} & x_{12}\\x_{21} & x_{22}\end{pmatrix} \leftarrow \begin{pmatrix}q~x_{21} + x_{11} & r\\x_{21} & x_{22}\end{pmatrix}$ $\quad \text{Return } x_{11}$ Screenshot of the algorithm. Python and SageMath implementations. Answer: You can replace: $$ ~~q \leftarrow \Big\lfloor\frac{a - 1}{b}\Big\rfloor $$
$$ r \leftarrow a - q ~b $$ by $$q, r \leftarrow \text{divmod}(a, b)$$ (with one caveat: the original $q \leftarrow \lfloor (a-1)/b \rfloor$ forces the remainder into the range $1 \le r \le b$, so it never produces $r = 0$; plain divmod differs from it exactly when $b$ divides $a$, which here can happen once $b$ reaches $1$). Regarding "the algorithm terminates in an even number of steps for some unknown reason": those matrix assignments seem to be a fancy way of writing $$ \gcd(m_0, \color{blue}{a_0}) = \gcd(a_0, \color{red}{m_0 \pmod{a_0}}) = \gcd(\color{red}{m_1}, \color{blue}{\underbrace{a_0 \pmod{m_1}}_{a_1}}) = \dots = \gcd(m_n, 1).$$ So $a_i$ decreases every $2$ iterations, and the $\gcd$ computation stops when $a_n = 1$. Of course, you're not interested in computing the $\gcd$ per se, since you already know it's $1$, but in finding the integers such that $Xm + Ya = \gcd(m, a) = 1$.
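Here is a direct Python transcription of the pseudocode above (my own sketch, keeping the $\lfloor (a-1)/b \rfloor$ quotient exactly as written), which makes both observed properties easy to check empirically:

```python
def xgcd_inverse(a, m):
    """Compute a^{-1} mod m with the algorithm above (assumes gcd(a, m) = 1).

    Returns (inverse, steps), where steps counts the loop iterations."""
    x11, x21, x12, x22 = 1, 0, a, m
    steps = 0
    while x12 > 1:
        if x22 > x12:
            q = (x22 - 1) // x12      # note: (x22 - 1) // x12, not x22 // x12
            r = x22 - q * x12         # remainder always in 1..x12
            x21, x22 = q * x11 + x21, r
        elif x12 > x22:
            q = (x12 - 1) // x22
            r = x12 - q * x22
            x11, x12 = q * x21 + x11, r
        steps += 1
        # The invariant observed in the question holds after every iteration.
        assert x11 * x22 + x12 * x21 == m
    return x11, steps

for a, m in [(3, 7), (10, 17), (97, 100), (7, 11)]:
    inv, steps = xgcd_inverse(a, m)
    print(a, m, inv, steps)
    assert (a * inv) % m == 1   # it really is the modular inverse
    assert steps % 2 == 0       # termination in an even number of steps
```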
{ "domain": "cs.stackexchange", "id": 7888, "tags": "algorithm-analysis, correctness-proof, number-theory, modular-arithmetic" }
Are there compressors that can compress both water and water vapor?
Question: I noticed while reading about compressors that they have various types and are generally made for a specific function. However, are there types of compressors (or pumps) that can compress both water and water vapor? Answer: Water at "normal" pressures is assumed to be incompressible. However, if you subject water to about 200 atmospheres of pressure then its volume will reduce by about 1%. So, water vapor will behave differently, depending on the ratio of gas to liquid. There are tables which you can consult to work out the density or specific volume for given temperatures. Those tables are often called "steam tables".
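The 1% figure quoted in the answer is consistent with water's bulk modulus. A rough check (my own sketch; the value $K \approx 2.2\,\mathrm{GPa}$ for water at room temperature is a commonly quoted figure, not from the answer):

```python
# Fractional volume change under pressure: |dV/V| = dP / K (linear elasticity).
K_WATER = 2.2e9   # bulk modulus of water in Pa (commonly quoted value)
ATM = 101325.0    # one standard atmosphere in Pa

dp = 200 * ATM                      # ~200 atmospheres, as in the answer
fractional_change = dp / K_WATER    # magnitude of dV/V
print(fractional_change)            # roughly 0.009, i.e. about 1%
assert 0.005 < fractional_change < 0.015
```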
{ "domain": "engineering.stackexchange", "id": 5084, "tags": "mechanical-engineering, fluid, compressors" }
Can we develop a virus which is amicable for us but deadly for SARS-CoV-2 and HIV?
Question: I am not a biologist and do not have any special education in the field. It is known that in the wild, some species have their own deadly enemies. What I am suggesting is: can we humans find or create a virus that is amicable for us but super deadly for SARS-CoV-2, HIV, etc.? If I am not mistaken, there are plenty of native bacteria/viruses in the human body. As an option, can we genetically modify one of them to turn it into the worst enemy of certain viruses, like SARS-CoV-2, for a restricted amount of time? P.S. I am not a specialist, therefore I apologize if my questions are too naive. With regards, Almas Answer: Viruses are obligate intracellular parasites. They act like a living organism only when they infect a cell. Apart from some protein structures which help them to attach to and enter specific kinds of cells, viruses don't have enzyme systems that can be used for activities like "destroying other viruses". A virus cannot infect another virus, as this would be a one-time action: viruses infect cells in order to replicate, and a virus can't replicate by using another virus. So if you engineer a virus which will attach to and destroy another kind of virus, it will no longer be a virus; it will be an unnecessarily complicated antiviral drug. In addition to this point, you should consider that our bodies have leukocytes (white blood cells) which can identify and destroy viruses. Some kinds of leukocytes also produce antibodies to fight the infectious agents. If you aim to engineer a bacterium in order to fight a virus inside our bodies, first you have to ensure that the bacterium stays in the tissues of our bodies for a sufficient amount of time and/or that its virus-attacking secretions enter and remain at least in the bloodstream. It is obvious that this approach is far more complicated and hard to achieve.
{ "domain": "biology.stackexchange", "id": 10746, "tags": "human-biology, virology, coronavirus, antibody, antibiotics" }
How to obtain the explicit form of Lorentz transformation matrix using Lie algebra?
Question: Consider the Minkowski space $\mathbb R^4$ with the Minkowski metric tensor \begin{align} \langle,\rangle:\ \mathbb R^4\times\mathbb R^4&\longrightarrow\mathbb R \\ (u,v)&\longmapsto\langle u,v\rangle=-u_0v_0+u_1v_1+u_2v_2+u_3v_3=[u]^\top\eta\,[v] \end{align} where $\eta=\operatorname{diag}(-1,I_3)$. We know that a Lorentz transformation is represented by a matrix $L\in GL_4(\mathbb R)$ such that $$L^\top\eta\,L=\eta.$$ Each Lorentz transformation is a transformation of space-time coordinates between a stationary inertial frame $(t,x)$ and an inertial frame $(t',y)$ moving with velocity $v=(v_1,v_2,v_3)$ with respect to the frame $(t,x)$. I want to derive the explicit form of $L$. One common approach is to decompose the vector $v$ into two terms $v=v_{\perp}+v_{\parallel}$, do some classical algebra and geometry, and then arrive at the concrete matrix \begin{align} L=\begin{bmatrix} \gamma & -\gamma v \\ -\gamma[v] & I_3+(\gamma-1)[\widetilde v][\widetilde v]^\top \end{bmatrix} \end{align} This way is indeed quite simple, but it doesn't satisfy my desire. I am interested in the approach using Lie algebras, which may involve more calculation, but it will be a good chance, a first step, to get used to dealing with Lie algebras and from there use them proficiently in later parts of General Relativity. At this time I don't know much about Lie algebras: I don't know what the subject is about or what tools it uses. So I really hope that someone will guide me. Answer: The Lie algebra is the linear approximation of the Lie group at the identity, $$L = 1_4 + \epsilon \, l_{ik}$$ where $1_4$ is the 4d identity matrix and $l_{ik}$ is a linear transformation in the 2d-plane with indices $i,k$, e.g.
$l_{0,2}$: $$L=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) + \epsilon \left( \begin{array}{cccc} a & 0 & b & 0 \\ 0 & 0 & 0 & 0 \\ c & 0 & d & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} \right) $$ Now you can simply use $$\partial_\epsilon \left( \left( 1_4 + \epsilon \, l_{ik}\right)^T \cdot \eta \cdot \left( 1_4 + \epsilon \, l_{ik}\right) - \eta \right) \Big|_{\epsilon\to 0} = 0 $$ and $$\det\left( 1_4 + \epsilon \, l_{ik}\right) = 1.$$ Together these are the defining equations of the special orthogonal group $SO(3,1)$, the Lorentz group. As a Lie group, you can go to the limit of infinite powers $$\left( 1_4 + \epsilon \, l_{ik} \right)\to \left( 1_4 + \frac{\epsilon}{n} \, l_{ik}\right)^n \to e^{\epsilon \, l_{ik}} $$ to obtain the three 1-parameter subgroups of boosts and the three rotation subgroups in the six 2-planes of $\mathbb R^4$. It takes some analysis of matrix functions to show that the limit of powers $$\lim_{n\to \infty } \left(1 + \frac{x}{n}\right)^n = e^{x} = \sum_{n=0}^{\infty}\frac{x^n}{n!}$$ converges in norm for matrix operators, and for operators in Hilbert spaces generally. The explicit forms are $$l_{0,1}=\left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} \right) $$ etc., with exponential $$\exp\left(u \, l_{0,1}\right) = 1_4 + l_{0,1} \sum_{n=0}^\infty \frac{u^{2n+1}}{(2n+1)!} + l_{0,1}^2 \sum_{n=1}^\infty \frac{u^{2n}}{(2n)!} = 1_4 + \sinh u \ l_{0,1} + (\cosh u - 1) \, l_{0,1}^2,$$ where $l_{0,1}^2 = \operatorname{diag}(1,1,0,0)$, so that $\cosh u$ and $\sinh u$ fill the $(0,1)$ block while the rest of the diagonal stays $1$; the same with alternating signs and trigonometric functions gives the rotation subgroups.
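The series limit above can be verified numerically. This sketch (my own addition, plain-Python 4×4 matrices, no external libraries) exponentiates the boost generator $l_{0,1}$ by its power series and checks both the cosh/sinh form and the defining relation $L^\top \eta L = \eta$ (the rapidity $u = 0.5$ is an arbitrary sample value):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(4)] for i in range(4)]

def mat_scale(A, s):
    return [[A[i][j] * s for j in range(4)] for i in range(4)]

def mat_exp(A, terms=30):
    """exp(A) via its power series sum_n A^n / n! (truncated)."""
    result = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    power = [row[:] for row in result]
    for n in range(1, terms):
        power = mat_mul(power, A)
        result = mat_add(result, mat_scale(power, 1.0 / math.factorial(n)))
    return result

eta = [[-1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
l01 = [[0, 1.0, 0, 0], [1.0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]  # boost generator

u = 0.5
L = mat_exp(mat_scale(l01, u))

# L should be cosh(u) on the (0,1) diagonal block, sinh(u) off-diagonal.
assert abs(L[0][0] - math.cosh(u)) < 1e-12
assert abs(L[0][1] - math.sinh(u)) < 1e-12
assert abs(L[2][2] - 1.0) < 1e-12

# Check the defining relation L^T eta L = eta.
LT = [[L[j][i] for j in range(4)] for i in range(4)]
check = mat_mul(mat_mul(LT, eta), L)
assert all(abs(check[i][j] - eta[i][j]) < 1e-9 for i in range(4) for j in range(4))
print("L^T eta L = eta verified")
```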
{ "domain": "physics.stackexchange", "id": 99745, "tags": "special-relativity, group-theory, lie-algebra" }
Drift velocity of charges in current
Question: In an electric circuit, charges (electrons, for example) move around randomly and very, very fast. When a current is set up in a circuit, the charges still move randomly, but acquire a drift velocity around the circuit. This is only of the order of about 0.1 mm/s. The question is short and simple, and maybe the answer is too: why do collisions happen so much more frequently while drifting than when there is no current? Turning on a flashlight or an electrical heater must give a large increase in collision frequency to produce that much more energy than in the electrostatic case. Or what? Answer: The collision frequency of electrons in a metal at room temperature is given by the thermal distribution of the electron velocities (please note that this is already a somewhat questionable approximation; metals really require a quantum mechanical treatment). I do not believe that this collision frequency increases much when a current flows through the metal. What does happen, though, is that on average there is no energy transfer between the electrons and the lattice if there is no drift, because the electrons are in thermodynamic equilibrium with the lattice. When we add an electric field, electrons accelerate a little between any two collisions and then they are no longer in thermal equilibrium with the metal ions. As a result they will shed their additional kinetic energy to the ions in these collisions, which heats the metal.
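For scale, the drift velocity of "order of about 0.1 mm/s" mentioned in the question follows from $v_d = I/(nqA)$. A quick estimate (my own illustrative numbers: a 1 A current in a 1 mm² copper wire; the electron density of copper is the commonly quoted value):

```python
# Drift velocity v_d = I / (n * q * A).
I = 1.0          # current in amperes (assumed)
A = 1.0e-6       # cross-sectional area in m^2 (1 mm^2, assumed)
n = 8.5e28       # free-electron density of copper, per m^3 (commonly quoted)
q = 1.602e-19    # elementary charge in coulombs

v_drift = I / (n * q * A)
print(v_drift)   # ~7e-5 m/s, i.e. a few hundredths of a millimetre per second
assert 1e-5 < v_drift < 1e-3

# By contrast, the thermal/Fermi velocity of electrons in copper is of order
# 1e6 m/s, some ten orders of magnitude larger than the drift velocity.
```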
{ "domain": "physics.stackexchange", "id": 18760, "tags": "electric-circuits, electric-current" }
Why is the charge Q multiplied by -1, when calculating the z component of the magnetic Force?
Question: This question is from a practice exam, and this is the solution that the teacher gave. I don't understand why the charge is multiplied by a -1, when calculating the z-component of the magnetic force. Can you explain to me why? Answer: From the definition of the cross product: $$F_z = q(v_xB_y - v_yB_x)$$ With $v_x = 0,$ $$F_z = q(0 - v_yB_x) = -qv_yB_x$$
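The minus sign is just the $-v_yB_x$ term of the cross product; a quick numerical check (Python/NumPy sketch with made-up values for $q$, $\vec v$, $\vec B$):

```python
import numpy as np

q = 2.0                        # arbitrary charge
v = np.array([0.0, 3.0, 5.0])  # v_x = 0, as in the problem
B = np.array([4.0, 1.0, 0.0])

F = q * np.cross(v, B)         # magnetic force F = q v x B

# z-component: F_z = q (v_x B_y - v_y B_x), which is -q v_y B_x when v_x = 0
assert np.isclose(F[2], -q * v[1] * B[0])
```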
{ "domain": "physics.stackexchange", "id": 44255, "tags": "homework-and-exercises, electromagnetism, magnetic-fields" }
How would the gravitons cause time dilation?
Question: If we consider gravity to be a particle (graviton) then how can time dilation be explained by quantum physics? I suspect the graviton flux passing through an atom will slow down the atomic cohesion speed and velocity. Is that a valid statement? Answer: We can have an idea of what happens, but it is necessarily an incomplete view. For a linear approximation of gravity, it is similar to the way we treat and understand the photon -- with some differences to be pointed out in a couple of paragraphs below. The graviton is the quantum excitation of the gravitational field, which can be expressed in terms of the spacetime metric. So it is the smallest change in the metric, the quantum unit of change. A small perturbation of the vacuum metric, classically, can be written as $$g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$$ with the first term on the right the Minkowski spacetime metric and the second term, $h_{\mu\nu}$, the perturbation. The graviton is the smallest perturbation, the quantum of the gravitational field. The h terms can be quantized similarly to the quantization of the electromagnetic field, with 2 spin possibilities as is true for any massless field. There are creation and annihilation operators that one can define, and thus create state vectors to define any configuration. So, yes, a graviton will change the metric ever so slightly, and in a linear approximation we can calculate the h terms (if not too complex a situation). The metric change h can then dilate time (or contract it, or make it oscillate) ever so slightly; a macroscopic gravitational change will usually involve a huge number of gravitons. The spatial part of the metric can also change slightly. You get enough gravitons together and you can describe or 'create' a macroscopically observable metric change. If the metric change is physical, not simply that we changed coordinate systems, the invariant curvature will change.
Some caveats: 1- just like photons used to describe electromagnetic fields it is hard to describe complex macroscopic fields that way. But gravitational waves can be described similarly to electromagnetic waves as a superposition of many gravitons at various different frequencies. 2- gravitational waves, and specifically gravitons, have spin 2 (photons are spin 1), and are indeed massless. They have therefore two polarizations but unlike (say the electric field of) the photon they are oscillations in two dimensions -- a gravitational wave going in the z direction will vibrate a bit like a balloon squeezed in the x axis and expanding in the y axis, and then the other way around and oscillate back and forth, or at 45 degrees, denoted respectively as + and x polarizations. 3- the above is a linear approximation, in a weak field. But gravity, as general relativity has it, is a nonlinear theory, so the gravitons also interact with each other (unlike, for the most part electromagnetism where photons pass by each other), and since they carry energy (equivalent to mass) they also interact with anything that has energy (or mass). To solve the nonlinear problem even in classical general relativity is not usually possible if you have many particles, or gravitons, and other bodies interacting. The nonlinear quantum theory for Gravity has not been able to be developed and be consistent -- we still don't know what strong quantum gravity really is. The linear approximation we believe is like the view expressed above. We can solve the full nonlinear classical relativity equations for certain symmetric spacetimes, like spherical or axially symmetric and stationary, and some others. The 3 body problem in general relativity is not solvable exactly. 4- to be accurate, one has to be careful in general relativity that what one calculates is not simply an artifact of the coordinate system, or of changing the gauge, so that one is calculating invariant changes in the curvature. 
5- the gravitational interaction is many orders of magnitude weaker than electromagnetism, and the effect of a graviton on an atom is too small to be measured. You'd need a hugely high-frequency graviton and you'd be in the realm of very nonlinear quantum gravity before you could get anything measurable. However, it is definitely the case that a macroscopic gravitational field, many many gravitons, will have time dilation effects on atoms. The atomic transition in the cesium atom used to calculate time in the GPS satellites is indeed affected by the gravity change between there and the surface of the earth, and the times then have to be adjusted to correspond to earth-surface times. It's been measured, and adjusted.
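The GPS correction mentioned above is easy to estimate in the weak-field limit, $\Delta t/t \approx \frac{GM}{c^2}\left(\frac{1}{r_\oplus}-\frac{1}{r_{\rm GPS}}\right)$ (a Python sketch with standard rounded values; the special-relativistic slowdown from the orbital speed, which partially offsets this, is ignored here):

```python
GM = 3.986e14       # Earth's gravitational parameter, m^3/s^2
c = 2.998e8         # speed of light, m/s
r_earth = 6.371e6   # Earth's mean radius, m
r_gps = 2.6571e7    # GPS orbital radius (~20,200 km altitude), m

# fractional rate difference between a GPS clock and a ground clock
frac = GM / c**2 * (1.0 / r_earth - 1.0 / r_gps)
shift_per_day = frac * 86400.0  # seconds gained by the satellite clock per day

# roughly 45 microseconds per day -- the well-known gravitational part
# of the GPS clock correction
assert 40e-6 < shift_per_day < 50e-6
```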
{ "domain": "physics.stackexchange", "id": 39298, "tags": "quantum-mechanics, time" }
Doubt on the derivation of grand canonical probability density function
Question: I'm following the derivation proposed in Statistical Mechanics by Huang. He's considering two systems, the one of interest labeled 1 and a heat/particle reservoir, labeled 2. $Q_N$ is the partition function of the whole system (1+2). I can follow up to $$ Q_N(V,T)=\sum_{N_1=0}^N\int_{V_1} dq_1\int dp_1\frac{1}{h^{3N_1}N_1!}Q_{N_2}(V_2,T)e^{-\beta H(q_1,p_1,N_1)} $$ but then he says that The relative probability $\rho(q_1,p_1,N_1)$ that there are $N_1$ particles in $V_1$ with coordinates $\{q_1,p_1\}$ is proportional to the summand of $\int dp_1dq_1\sum_{N_1}$. And so $$ \rho(q_1,p_1,N_1)=\frac{1}{h^{3N_1}N_1!}\frac{Q_{N_2}(V_2,T)}{Q_N(V,T)}e^{-\beta H(q_1,p_1,N_1)}. $$ While I find this derivation quite intuitive I am not fully convinced. Just because this function is normalized (which is the only thing I can personally infer from this proof) doesn't mean that is the right PDF. I hope that someone can give me a solid motivation on why this is the case indeed. Answer: I agree that Huang is not being very clear here. However, what he is doing is pretty simple, since he is merely using the definition of relative probability, or, in more mathematical terms, he is calculating a marginal probability density. Simple example with two variables: you have the joint probability density $\rho_{X,Y}(x,y)$ of the two random variables $X,Y$, and you want to get the density $\rho_X(x)$ for the random variable $X$. The recipe is simple: $$\rho_X(x) = \int_y \rho_{X,Y}(x,y) dy \label{0}\tag{0}$$ Now, your case. The joint probability density is $$\rho(p,q,N)=\frac 1 {h^{3N} N!} \frac{e^{-\beta H(p,q,N)}}{Q_N(V,T)} \tag{1}\label{1}$$ where $$H(p,q,N)=H_1(p_1,q_1,N_1)+H_2(p_2,q_2,N_2) \tag{2}\label{2}$$ The marginal probability density for system 1 is obtained by integrating over the variables relative to system 2 like in Eq.\ref{0}. 
However, since we don't care which particles are in system 1 and which in system 2 (the particles are identical), we also have to multiply by the number of ways of splitting the $N$ particles so that $N_1$ end up in $V_1$ and $N_2=N-N_1$ in $V_2$, i.e. the binomial factor $N!/(N_1!N_2!)$. We therefore obtain $$\rho_1(p_1,q_1,N_1)=\frac{N!}{N_1!N_2!} \int dq_2 dp_2 \rho(p,q,N) \tag{3}\label{3}$$ Applying \ref{3} with $\rho(p,q,N)$ defined in \ref{1} and \ref{2}, you obtain the desired result.
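Eq. (0) is ordinary marginalization; here is a small numerical illustration with two independent standard normals (a Python/NumPy sketch, a toy case unrelated to the specific statistical-mechanics densities above):

```python
import numpy as np

xs = np.linspace(-5, 5, 201)
ys = np.linspace(-5, 5, 2001)
dy = ys[1] - ys[0]
X, Y = np.meshgrid(xs, ys, indexing="ij")

# joint density of two independent standard normals
rho_xy = np.exp(-(X**2 + Y**2) / 2) / (2 * np.pi)

# marginalize over y, as in Eq. (0): rho_X(x) = integral of rho(x, y) dy
rho_x = rho_xy.sum(axis=1) * dy

expected = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)
assert np.allclose(rho_x, expected, atol=1e-5)
```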
{ "domain": "physics.stackexchange", "id": 45784, "tags": "statistical-mechanics, partition-function" }
Interpretation of a good overfitting score
Question: As shown below, my deep neural network is overfitting: the blue lines are the metrics obtained on the training set and the red lines on the validation set. Is there anything I can infer from the fact that the accuracy on the training set is really high (almost 1)? From what I understand, it means that the complexity of my model is enough / too big. But does it mean my model could theoretically reach such a score on the validation set with the same dataset and appropriate hyperparameters? With the same hyperparameters but a bigger dataset? My question is not how to avoid overfitting. Answer: It doesn't tell you very much, to be honest. It does mean that (assuming your training and validation distributions are similar) your model could get the same results on your validation set should you train on that, but that would still be overfitting. Really, the only useful thing overfitting tells you is that you don't have enough regularisation.
{ "domain": "ai.stackexchange", "id": 760, "tags": "neural-networks, machine-learning, deep-learning, overfitting" }
How do I calculate the number of photons emitted?
Question: I am reading through these slides about the transition radiation in the optical range. It says that the spectrum of the intensity of the photons as a function of the frequency is given by: $$\dfrac{dI}{d\omega}=\dfrac{e^2}{6\pi c} \left( \dfrac{\gamma \omega_p}{\omega} \right)^4$$ where $\omega_p$ is the plasma frequency and $\gamma$ is the relativistic gamma factor. How do I calculate from this formula the number of photons emitted in a given range of frequencies? Answer: Considering the intensity is in $W/m^2$ you will need to consider the area over which you want to calculate the number of photons, say $A$. You can then multiply the $dI$ by the area to get the infinitesimal power $dP$ $$dP=dIA=\frac{A\alpha}{\omega^4}d\omega$$ where $\alpha=\frac{e^2}{6\pi c}\left(\gamma\omega_p\right)^4$. Considering the energy of a photon at a frequency $\omega$ is $E=\hbar\omega$, the number of photons at $\omega$ emitted per unit time for $dI$ is then $$dN=\frac{A\alpha}{\hbar\omega^5}d\omega$$ So the number of photons per unit time emitted over the range of frequencies $[\omega_1,\omega_2]$ is $$N=\int_{\omega_1}^{\omega_2}\frac{A\alpha}{\hbar}\omega^{-5}d\omega=\frac{A\alpha}{4\hbar}\left(\omega_1^{-4}-\omega_2^{-4}\right)$$
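The closed-form integral can be checked against numerical quadrature (a Python sketch; the overall constant $A\alpha/\hbar$ is set to 1 since it only scales both sides):

```python
import numpy as np

w1, w2 = 1.0, 2.0                 # arbitrary frequency range
w = np.linspace(w1, w2, 100001)
f = w**-5

# trapezoidal rule for the integral of w^-5 over [w1, w2]
numeric = ((f[:-1] + f[1:]) / 2 * np.diff(w)).sum()

analytic = (w1**-4 - w2**-4) / 4  # the closed form from the answer
assert np.isclose(numeric, analytic, rtol=1e-8)
```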
{ "domain": "physics.stackexchange", "id": 66892, "tags": "electromagnetic-radiation" }
Adding "realistic" noise to signals of different amplitudes
Question: I have a question regarding basic signal processing, which I have to start using since I have started studying fault isolation. The thing is, I have three computed signals, which are gathered in the matrix $\mathbf X \in \Re^{p \times n}$, where $p$ is the signal length and $n=3$ is the number of signals. The three signals should emulate some real measurements in different sensors, for instance, $$\mathbf x_i = A_i \sin(\omega t), \quad \text{with } A_1 \neq A_2 \neq A_3.$$ Now, I want to add some white Gaussian noise to these signals, but then I started thinking: how can I do this somewhat "realistically" when the three signals have different amplitudes? My basic understanding is that the amplitude of the noise in real measurements is independent of the amplitudes of the signals, hence implying that my signal-to-noise ratio will not be the same for the three signals (since the variance of the noise is taken as constant). Is this correct? What I have done at the moment is simply to form $\mathbf Y = \text{vec}\left(\mathbf X\right) \in \Re^{pn \times 1}$ (with $\text{vec}$ being the vectorization operator), calculated the variance of $\mathbf Y$, and then added noise as a fraction of this variance (with zero mean). The reason for this is that I often come across the phrase "we added this percentage of noise to the signals", and I don't suspect that one adds this to each signal separately according to the variance of each signal. Sorry for the long post. I hope you get my point and can answer my questions. James Answer: I am assuming we're talking about discrete measurements here, i.e. $n$ instead of $t$ with $n = 0, \cdots, p-1$. If all you want is adding some realistic noise to your three-component matrix $\mathbf X$, you could do this in the case of white Gaussian noise $\mathbf W_{p\times n}$.
Using the subscript $\sigma$ for noisy data: Same noise in all three channels: \begin{align} \mathbf X_\sigma &= \mathbf X + \mathbf W\tag{$p\times n$ matrices}\\ \Rightarrow\mathbf x_{i\sigma} &= \mathbf x_i + \mathbf w\tag{$p\times 1$ vectors}\\ \Rightarrow x_{i\sigma}[n] &= A_i \sin(\omega n)+ w[n] \quad\text{ with } w \sim \mathcal N\left(0, \sigma^2\right) \end{align} Even though you're adding the same noise in all three channels, i.e. $w_1=w_2=w_3=w$, the SNR of the channels will be different (growing with the amplitudes $A_i$, since the SNR goes as $A_i^2/\sigma^2$) unless the three different sensors are capturing exactly identical (real copies) measurements. Different noise values in the three channels: \begin{align} \mathbf X_\sigma &= \mathbf X + \mathbf W\tag{$p\times n$ matrices}\\ \Rightarrow\mathbf x_{i\sigma} &= \mathbf x_i + \mathbf w_i\tag{$p\times 1$ vectors}\\ \Rightarrow x_{i\sigma}[n] &= A_i \sin(\omega n)+ w_i[n] \quad\text{ with } w_i \sim \mathcal N\left(0, \sigma_i^2\right) \end{align} Your SNR of each channel $i$ will be different, as the value will be directly proportional to the squared amplitude $A_i^2$ and inversely proportional to the channel's variance $\sigma_i^2$. If you had one physical quantity in all three signals (a three-axis sensor for instance), you could compute the norm for each value of $n$ and work with one signal measurement, the magnitude of size $p\times 1$. Like: $$\left|x[n]\right|=\sqrt{\sum_{i=1}^3 x_i^2[n]}$$ Then you would be looking at one $p\times 1$ signal component versus one $p\times 1$ noise component.
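The first case -- same noise power in every channel, hence amplitude-dependent SNR -- can be sketched directly (Python/NumPy; amplitudes, noise level, and frequency are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 10_000
t = np.arange(p)
amps = np.array([1.0, 2.0, 4.0])       # A_1, A_2, A_3
omega = 0.05
sigma = 0.5                            # same noise std in every channel

X = amps * np.sin(omega * t)[:, None]  # p x 3, one sinusoid per column
W = rng.normal(0.0, sigma, size=X.shape)
X_noisy = X + W

# SNR per channel in dB, using the known signal and noise powers
snr_db = 10 * np.log10((X**2).mean(axis=0) / sigma**2)

# doubling the amplitude raises the SNR by ~6 dB when the noise level is fixed
assert snr_db[1] - snr_db[0] > 5.5
assert snr_db[2] - snr_db[1] > 5.5
```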
{ "domain": "dsp.stackexchange", "id": 3675, "tags": "discrete-signals, noise" }
Is purification physically meaningful?
Question: Consider a quantum system with Hilbert space $\mathscr{H}$ and suppose the quantum state is specified by a density operator $\rho$. Since it is Hermitian, it has a spectral decomposition: $$\rho = \sum p_i |\phi_i\rangle \langle \phi_i |.$$ Now take another quantum system with Hilbert space $\mathscr{H}'$ with dimension at least equal to the first. Take any basis $|\psi_i\rangle$ and consider the state $$|\Psi\rangle = \sum \sqrt{p_i} |\phi_i\rangle \otimes \lvert\psi_i\rangle.$$ A partial trace over the second system yields the first state. This is the purification. A mixed state is always a partial trace of some pure state in a composite system. There are issues, however: (1) the purification is highly non-unique, any Hilbert space of dimension equal or higher to the first will work, and we can pick any basis we want yielding distinct pure states. (2) this is a mathematical construction. The purifying system seems to have no true meaning physically, this seems to be further implied by the non-uniqueness described in (1). So is purification a purely mathematical construction with no physical meaning, or it indeed has some physical meaning ? If so, what is the physical meaning of the purification? Answer: This really depends whether you believe in the "church of the larger Hilbert space". If you feel that pure states are more fundamental than mixed states, then you might argue that any mixed state is just a lack of knowledge, and somewhere out there is the missing piece of the system which will give you full information (i.e., a pure state). Even though you don't know what it is, you know it is out there. (As you can see, this is really more a matter of interpretation of quantum mechanics, since mathematically, the two perspectives are equivalent.) There are many cases where this is a very reasonable perspective on mixed states regardless of what you believe, e.g. 
when you have a pure state which becomes mixed by coupling to the environment: While the state of the system looks mixed to you, it has just unitarily interacted with the environment, so the overall state of system + environment will be pure, and the environment will hold the purification of your system. Clearly, this is true more generally as long as your initial global state is pure and you consider part of an isolated system (i.e. unitary dynamics). Beyond that, purifications are of course also a powerful mathematical tool.
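Whatever one's interpretation, the construction in the question is easy to verify numerically; a Python/NumPy sketch for a random 3-dimensional mixed state, purified against the computational basis of a same-sized ancilla:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

# random density matrix: rho = A A† / tr(A A†) is Hermitian, PSD, trace 1
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)

# spectral decomposition rho = sum_i p_i |phi_i><phi_i|
p, phi = np.linalg.eigh(rho)  # columns of phi are eigenvectors
p = np.clip(p, 0.0, None)     # guard against tiny negative round-off

# purification |Psi> = sum_i sqrt(p_i) |phi_i> (x) |e_i>
basis = np.eye(d)
Psi = sum(np.sqrt(p[i]) * np.kron(phi[:, i], basis[i]) for i in range(d))

# partial trace over the ancilla: reshape |Psi> into a d x d matrix M,
# then rho_1 = M M†
M = Psi.reshape(d, d)
rho_back = M @ M.conj().T

assert np.allclose(rho_back, rho)  # the purification reproduces rho
```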
{ "domain": "physics.stackexchange", "id": 51303, "tags": "quantum-mechanics, hilbert-space, quantum-information, quantum-entanglement, density-operator" }
Is the only absolute difference between types of light frequency?
Question: Probably a bad question but for some reason, it seems too simple in my head that anyone at home could theoretically create anything from radio waves to gamma waves by generating electrical signals at different frequencies. Say I had an electronic frequency generator that was able to produce a signal at any frequency, and for illustrative purposes, say there was a diode hooked up to this generator that could receive its signals. If it created a signal at $10^{12}$ Hz, the diode would give off infrared radiation. If I increased the signal to $10^{20}$ Hz, the diode would give off gamma radiation. I’m using this example just to emphasize my question, is frequency the absolute and only differentiator in types of light on the electromagnetic spectrum? Answer: Your question is a little confused at the end, but I think the answer to what you're trying to ask is "yes". Names like infrared and gamma apply here to ranges that have been divided up for historical and practical reasons, but they do not denote something other than an electromagnetic wave within certain frequency ranges. If you had the hypothetical device that you mentioned, then you could create waves of any of the types that you mentioned. (Although I know of no such single device that covers such a range.) Also note that there can be more than one name for a particular range. For example "radio" waves have different bands and those bands go by different names by country and by science / engineering discipline. For example, K Band or X Band. The "electromagnetic spectrum" would cover all frequencies by definition, so your device would not create something "outside" of the spectrum. It might, I suppose, create a wave in a frequency range that has no conventional name.
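The point that band names are just human-drawn ranges over one continuous quantity can be sketched in code (Python; the boundaries below are rough illustrative values only — actual definitions vary by field and by country):

```python
# approximate upper edges of conventional bands, in Hz (illustrative only)
BANDS = [
    (3e11, "radio/microwave"),
    (4e14, "infrared"),
    (7.5e14, "visible"),
    (3e16, "ultraviolet"),
    (3e19, "x-ray"),
    (float("inf"), "gamma"),
]

def band(freq_hz):
    """Return the conventional name for a frequency: same physics, different label."""
    for upper, name in BANDS:
        if freq_hz < upper:
            return name

assert band(1e12) == "infrared"  # the question's first example
assert band(1e20) == "gamma"     # and its second
```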
{ "domain": "physics.stackexchange", "id": 60130, "tags": "waves, electromagnetic-radiation" }
Alternative to the Josh Bloch Builder pattern in C#
Question: The Josh Bloch Builder Pattern (hereafter JBBuilder) introduced me to the idea of using a nested class to aid in construction. In Java the JBBuilder pattern is aimed at fixing the telescoping constructor problem. That problem occurs in Java mostly because Java doesn't have named and optional arguments the way C# does. Because C# has them I'm wondering how good an idea the JBBuilder is in C#. I'm trying to solve a different problem with a structure similar to the JBBuilder. I will also use a nested class to build. But rather than following the JBBuilder pattern and simulating optional arguments I'll use the nested class to let me simulate constructors with different names. These simulated constructors will be much like static factory methods but without them being static. I'm avoiding static factory methods because they can't be passed around dependency injection style. I want whatever decides which method to call to not have to know which concrete method this is. So I'm hanging them off a stateless instance that could have any implementation. These simulated constructors will give an input string different meanings and choose an implementation of a Strategy Pattern to wrap that string with different Validate() behavior. Using differently named 'constructors' allows us to deal with the fact that input in both cases is the same type: string. One is a regular expression. The other is a wildcard pattern. The meaning of the string is decided by the method used from the Build object. The different names avoid a single constructor being forced to do logic or accept a flag to understand the string's meaning and select an implementation, or choosing a constructor based on the type passed in. The new pattern presented below is a hack that allows constructors to have different names without resorting to static factory methods. It gives the string different types that then polymorphically decide the Validate() behavior. My questions: Is the new pattern needed in C#?
If not, what makes it unnecessary? Does the nested builder class have a name when used to allow non-static constructors different names? It's structurally similar to the Josh Bloch Builder but the motivation here is different. The Strategy Pattern used by the new pattern: public interface IValidationStrategy { bool Validate(string pStringToValidate); } public class RegexValidator : IValidationStrategy { private Regex regEx; public RegexValidator(Regex regEx) { this.regEx = regEx; } public bool Validate(string stringToValidate) { return regEx.IsMatch(stringToValidate); } } public class WildCardValidator : IValidationStrategy { private string wildCard; public WildCardValidator(string wildCard) { this.wildCard = wildCard; } public bool Validate(string pStringToValidate) { //http://stackoverflow.com/questions/30299671/matching-strings-with-wildcard string regex = Regex.Escape(wildCard).Replace("\\*", ".*"); return Regex.IsMatch(pStringToValidate, "^" + regex + "$"); } } The new pattern: Accepts a dependency injection (IValidationStrategy) and uses a nested builder class to construct the different implementations in an immutable way. The method used decides how the input string is interpreted: as a wildcard or as a regular expression.
public class StringValidator { private IValidationStrategy validationStrategy; //Dependency Injection constructor public StringValidator(IValidationStrategy validationStrategy) { this.validationStrategy = validationStrategy; } public bool Validate(string stringToValidate) { return validationStrategy.Validate(stringToValidate); } public class Builder { public StringValidator Regex(string regex) { return new StringValidator(new RegexValidator(new Regex(regex))); } public StringValidator WildCard(string wildCard) { return new StringValidator(new WildCardValidator(wildCard)); } } } Two different ways to test: class Program { static void Main(string[] args) { Console.Out.WriteLine( "IsValid: {0}", new StringValidator.Builder() .Regex(@"\d+") .Validate("55") ); Console.Out.WriteLine( "IsValid: {0}", new StringValidator.Builder() .WildCard("*") .Validate("Whatever string to be validated") ); // Or, if you hate using nameless temporary objects Console.Out.WriteLine(); StringValidator.Builder stringValidatorBuilder = new StringValidator.Builder(); string regex = @"\d+"; StringValidator regValidator = stringValidatorBuilder.Regex(regex); bool isValid = regValidator.Validate("55"); Console.Out.WriteLine("IsValid: {0}", isValid); string wildCard = "*"; StringValidator wildCardValidator = stringValidatorBuilder.WildCard(wildCard); isValid = wildCardValidator.Validate("Whatever string to be validated"); Console.Out.WriteLine("IsValid: {0}", isValid); } } Outputs: IsValid: True IsValid: True IsValid: True IsValid: True This code is from a Software Engineering Stack Exchange answer by Vladimir Stokic that I've made some improvements to with his kind permission. You can explore the original form, that had no builder, in the edit history if you wish. The responses are surprising me with how fixated they are on explaining how to properly use the builder pattern. I see no reason every use of an inner class must conform to the Josh Bloch Builder Pattern.
I mentioned it mostly to show how that pattern's usefulness in C# is limited compared to Java. Perhaps a different example solution will help. If I didn't mind using static factory methods (I do, they can't be passed around nicely) I'd solve the problem of constructors needing different names more simply: public class StringValidatorStatic { private IValidationStrategy validationStrategy; //Dependency Injection constructor public StringValidatorStatic(IValidationStrategy validationStrategy) { this.validationStrategy = validationStrategy; } public static StringValidatorStatic Regex(string regex) { return new StringValidatorStatic(new RegexValidator(new Regex(regex))); } public static StringValidatorStatic WildCard(string wildCard) { return new StringValidatorStatic(new WildCardValidator(wildCard)); } public bool Validate(string stringToValidate) { return validationStrategy.Validate(stringToValidate); } } And use it this way: static void Main(string[] args) { Console.Out.WriteLine( "IsValid: {0}", StringValidatorStatic .Regex(@"\d+") .Validate("55") ); Console.Out.WriteLine( "IsValid: {0}", StringValidatorStatic .WildCard("*") .Validate("Whatever string to be validated") ); } This works fine. Usage isn't more complicated. But now I'm stuck referring to this statically. I want my version of the builder to be something that can be passed around as a reference. Answer: Why create a class to wrap the validation strategy at all? A common pattern with immutable objects is to have an empty or seed value to start construction from.
public interface IValidationStrategy { bool Validate(string pStringToValidate); } public class RegexValidationStrategy : IValidationStrategy { private Regex regEx; public RegexValidationStrategy(Regex regEx) { this.regEx = regEx; } public bool Validate(string stringToValidate) { return regEx.IsMatch(stringToValidate); } } public class StringValidator { private class EmptyValidator : IValidationStrategy { public bool Validate(string input) => true; } public static readonly IValidationStrategy Empty = new EmptyValidator(); } public static class RegexValidationStrategyExtensions { public static IValidationStrategy Regex(this IValidationStrategy strategy, string pattern) { return new CompositeValidationStrategy(strategy, new RegexValidationStrategy(new Regex(pattern))); } } public class CompositeValidationStrategy : IValidationStrategy { private readonly IValidationStrategy left; private readonly IValidationStrategy right; public CompositeValidationStrategy(IValidationStrategy left, IValidationStrategy right) { this.left = left; this.right = right; } public bool Validate(string input) { return left.Validate(input) && right.Validate(input); } } Then your test: StringValidator .Empty .Regex(@"^\d+$") .Validate("55") // true You can also apply more than 1 rule: StringValidator .Empty .Regex(@"^\d+$") .Regex("5{2,}") .Validate("1255") // true By having separate classes and using extension methods to construct instances, you make it trivially easy to add more validators - create a type that implements the strategy and add an extension method to instantiate the type. You could add additional extension methods to IValidationStrategy to make composition easier and avoid leaking the knowledge of CompositeValidationStrategy.
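For comparison across languages, the same strategy-plus-composite idea can be sketched in Python (illustrative only; the class and method names here are invented, not taken from the C# code above):

```python
import re

class RegexValidator:
    """Strategy: the string is a regular expression pattern to match fully."""
    def __init__(self, pattern):
        self._rx = re.compile(pattern)
    def validate(self, s):
        return self._rx.fullmatch(s) is not None

class WildcardValidator:
    """Strategy: the string is a glob-style wildcard, translated to a regex."""
    def __init__(self, wildcard):
        self._rx = re.compile("^" + re.escape(wildcard).replace(r"\*", ".*") + "$")
    def validate(self, s):
        return self._rx.match(s) is not None

class CompositeValidator:
    """All contained strategies must pass -- mirrors CompositeValidationStrategy."""
    def __init__(self, *parts):
        self._parts = parts
    def validate(self, s):
        return all(p.validate(s) for p in self._parts)

v = CompositeValidator(RegexValidator(r"\d+"), WildcardValidator("5*"))
assert v.validate("55")       # digits only, and starts with 5
assert not v.validate("15a")  # fails the regex rule
```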
{ "domain": "codereview.stackexchange", "id": 22711, "tags": "c#, design-patterns" }
The 100 chair survivor challenge
Question: I came across the following challenge. Here is my implementation in Python 3.7 A room has 100 chairs numbered from 1-100. On the first round of the game the FIRST individual is eliminated. On the second round, the THIRD individual is eliminated. On every successive round the number of people skipped increases by one (0, 1, 2, 3, 4...). So after the third person, the next person asked to leave is the sixth (skip 2 people - i.e: 4 and 5). This game continues until only 1 person remains. Which chair is left by the end? I actually don't know what the true answer is, but my code outputs 31. I think this is the fastest solution and at worst has an O(N^2) time complexity n = 100 skip = 0 players = [x for x in range (1, n+1)] pointer = 0 while len(players) > 1: pointer += skip while pointer >= len(players): pointer = pointer - len(players) players.pop(pointer) skip += 1 print(players[0]) Answer: The loop while pointer >= len(players): pointer = pointer - len(players) is a long way to say pointer %= len(players). You'd be in a better position factoring the computations into a function, def survivor(n): skip = 0 players = [x for x in range (1, n+1)] pointer = 0 while len(players) > 1: pointer += skip while pointer >= len(players): pointer = pointer - len(players) players.pop(pointer) skip += 1 return players[0] and adding an if __name__ == '__main__' clause. This way it is easy to generate the first few results. TL;DR I did so, generated the first values, 1 2 2 2 4 5 4 8 8 7 11 8 13 4 11 12 8 12 2 and searched for this sequence in OEIS. Surprisingly there was a match. A closer inspection demonstrated that the problem is a Josephus problem with eliminating every n-th person. Try to prove, or at least convince yourself, that it is indeed the case. Further reading revealed that there is a linear solution.
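Putting the review's suggestions together -- the modulo simplification wrapped in a reusable function (a Python sketch; the n = 100 assertion is the value the question reports):

```python
def survivor(n):
    """Chair left after repeatedly eliminating with an ever-growing skip."""
    players = list(range(1, n + 1))
    pointer = skip = 0
    while len(players) > 1:
        pointer = (pointer + skip) % len(players)  # replaces the inner while loop
        players.pop(pointer)
        skip += 1
    return players[0]

if __name__ == "__main__":
    # first few values, matching the sequence quoted in the answer
    assert [survivor(n) for n in range(1, 7)] == [1, 2, 2, 2, 4, 5]
    assert survivor(100) == 31  # the result the question reports
```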
{ "domain": "codereview.stackexchange", "id": 35033, "tags": "python, python-3.x, programming-challenge" }
Cardinality of the set of algorithms
Question: Someone in a discussion brought up that (he reckons) there can be at least a continuum of strategies to approach a specific problem. The specific problem was trading strategies (not algorithms but strategies) but I think that's beside the point for my question. This got me thinking about the cardinality of the set of algorithms. I have been searching around a bit but have come up with nothing. I've been thinking that, since Turing machines operate with a finite alphabet and the tape has to be indexable, thus countable, it's impossible to have an uncountable number of algorithms. My set theory is admittedly rusty so I am not at all certain my reasoning is valid and I probably wouldn't be able to prove it, but it's an interesting thought. What is the cardinality of the set of algorithms? Answer: An algorithm is informally described as a finite sequence of written instructions for accomplishing some task. More formally, they're identified as Turing machines, though you could equally well describe them as computer programs. The precise formalism you use doesn't much matter but the fundamental point is that each algorithm can be written down as a finite sequence of characters, where the characters are chosen from some finite set, e.g., roman letters, ASCII or zeroes and ones. For simplicity, let's assume zeroes and ones. Any sequence of zeroes and ones is just a natural number written in binary. That means there are at most a countable infinity of algorithms, since every algorithm can be represented as a natural number. For full credit, you should be worried that some natural numbers might not code valid programs, so there might be fewer algorithms than natural numbers. (For bonus credit, you might also be wondering if it's possible that two different natural numbers represent the same algorithm.) However, print 1, print 2, print 3 and so on are all algorithms and all different, so there are at least countably infinitely many algorithms.
So we conclude that the set of algorithms is countably infinite.
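The encoding at the heart of the argument can be made concrete (a Python sketch; prefixing a '1' is one simple way to map binary strings injectively into the naturals, an arbitrary choice):

```python
def string_to_nat(s):
    """Injective encoding of a finite binary string as a natural number."""
    # prepend '1' so leading zeros aren't lost: '' -> 1, '0' -> 2, '1' -> 3, ...
    return int("1" + s, 2)

programs = ["", "0", "1", "00", "01", "10", "11"]
codes = [string_to_nat(s) for s in programs]

assert codes == [1, 2, 3, 4, 5, 6, 7]
assert len(set(codes)) == len(programs)  # distinct strings get distinct numbers
```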
{ "domain": "cs.stackexchange", "id": 12043, "tags": "algorithms, turing-machines, combinatorics" }
Writing unit tests for constructor with shared_ptr arguments
Question: Should I write unit tests which verify that constructor is throwing in simple cases when a nullptr was provided instead of a shared_ptr? Let's say I have the following class: #pragma once #include "SharedItemsListInteractorInput.hpp" #include "SharedItemsListInteractorOutput.hpp" #include "Store.hpp" namespace ri { class ItemsListInteractor : public SharedItemsListInteractorInput { private: std::shared_ptr<Store> _store; std::shared_ptr<SharedItemsListInteractorOutput> _output; public: ItemsListInteractor( const std::shared_ptr<Store> &store, const std::shared_ptr<SharedItemsListInteractorOutput> &output ); void fetchItems() override; }; } Implementation: #include "ItemsListInteractor.hpp" #include <iostream> namespace ri { ItemsListInteractor::ItemsListInteractor( const std::shared_ptr<Store> &store, const std::shared_ptr<SharedItemsListInteractorOutput> &output ) { if (store == nullptr) { throw std::invalid_argument("Store should not be an std::nullptr"); } if (output == nullptr) { throw std::invalid_argument("Output should not be an std::nullptr"); } _store = store; _output = output; } void ItemsListInteractor::fetchItems() { _store->getItems(0, [=] (std::vector<SharedItem> items, bool nextPage) { if (_output != nullptr) { _output->didFetchItems(items, nextPage); } }); } } Construction tests: #include "ItemsListInteractor.hpp" #include "catch.hpp" #include "fakeit.hpp" using namespace Catch; using namespace fakeit; using namespace ri; TEST_CASE( "items list interactor", "[ItemsListInteractor]" ) { SECTION( "no store" ) { Mock<SharedItemsListInteractorOutput> outputMock; Fake(Dtor(outputMock)); auto output = std::shared_ptr<SharedItemsListInteractorOutput>(&outputMock.get()); REQUIRE_THROWS_AS(ItemsListInteractor(nullptr, output), std::invalid_argument); } SECTION( "no output" ) { Mock<Store> storeMock; Fake(Dtor(storeMock)); auto store = std::shared_ptr<Store>(&storeMock.get()); REQUIRE_THROWS_AS(ItemsListInteractor(store, nullptr), 
std::invalid_argument); } } Feels like writing constructor tests, in this case, brings too much boilerplate code. Answer: Missing includes You are missing at least: in the header: #include <memory> in the implementation: #include <stdexcept> Extraneous includes The implementation includes <iostream> but never uses it. Prefer <iosfwd> over <iostream> where possible, and prefer nothing over <iosfwd> where possible. Prefer initialization to assignment Instead of _store = store; _output = output; it's better to use initializers - especially if you set your compiler to warn when you forget to initialize (recommended): ItemsListInteractor::ItemsListInteractor( std::shared_ptr<Store> store, std::shared_ptr<SharedItemsListInteractorOutput> output) : store{std::move(store)}, output{std::move(output)} { if (!this->store) throw std::invalid_argument("Store should not be null"); if (!this->output) throw std::invalid_argument("Output should not be null"); } I made other improvements in the above: pass by value, then std::move() in preference to passing by const ref name the members the same as the arguments to prevent accidental use after move from (and to be less ugly - a leading underscore shouts "DANGER", especially to those who also write C). use the (more idiomatic) operator bool to test the pointers' validity - this is easier to understand than comparing against a shared pointer implicitly constructed from std::nullptr_t. I changed the message - an empty shared pointer is a different type to the null pointer. Don't repeat the test in the method There's no way that output can be assigned to, other than via the compiler-provided copy constructor and assignment operator, both of which require a valid ItemsListInteractor. A valid ItemsListInteractor cannot have a null output if its constructor succeeded. 
void ItemsListInteractor::fetchItems() { store->getItems(0, [=](std::vector<SharedItem> items, bool nextPage) { output->didFetchItems(items, nextPage); }); } Tests The tests look adequate, as far as they go. The fetchItems() method should also be tested.
{ "domain": "codereview.stackexchange", "id": 28340, "tags": "c++, unit-testing" }
False solution of Landau Hamiltonian
Question: The Landau Hamiltonian in 2D is given (in natural units $q=c=2m=1$) by $$ \hat{H} = (\hat{\vec{p}}-\vec{A}(\hat{\vec{x}}))^2 \,,$$ where $\vec{A}$ is the magnetic vector potential field. We know that the momentum operator $\hat{\vec{p}}$ may be shifted using the position operator, that is, if $f$ is any scalar field, then $$\exp(if(\hat{\vec{x}}))\hat{\vec{p}}\exp(-if(\hat{\vec{x}})) = \hat{\vec{p}}-(\nabla f)(\hat{\vec{x}}) \,.$$ Hence we may re-write $\hat{H}$ as \begin{align} \hat{H} &= \exp(i\int_{\vec{x_0}}^{\hat{\vec{x}}}A(\vec{l})\cdot d\vec{l})\,\,\,\,\,\hat{\vec{p}}^2\,\exp(-i\int_{\vec{x_0}}^{\hat{\vec{x}}}A(\vec{l})\cdot d\vec{l})\label{eq:one}\end{align}where $\vec{x_0}$ is any arbitrarily chosen reference point. Since we know the (non-normalizable) eigenstates of $\hat{\vec{p}}^2$, namely, for any $\vec{k}$, we have $\psi_k^{\mathrm{A=0}}(x)=\exp(\pm i\vec{k}\cdot\vec{x})$, we may now write down the eigenstates of $\hat{H}$ as $$ \psi_k^{\mathrm{A\neq0}}(x)=\exp(i\int_{\vec{x_0}}^{\vec{x}}A(\vec{l})\cdot d\vec{l})\exp(\pm i\vec{k}\cdot\vec{x}) \,. $$ This is of course false, as it doesn't give the famous quantization of the Landau energy levels. My question is: what is the mistake I made? Answer: You've assumed that $\vec{A}$ can be described by a function $f$ such that $\nabla f=\vec{A}$. You've also attempted to write down an explicit formula for $f$, namely $f=\int_{x_0}^{x}\vec{A}\cdot d\vec{\ell}$. This all works fine, assuming $\vec{A}$ has no curl. If $\vec{A}$ has curl, you cannot find a function $f$ such that $\nabla f=\vec{A}$, and your formula $f=\int_{x_0}^{x}\vec{A}\cdot d\vec{\ell}$ is not well-defined because it depends on the path $x_0\rightarrow x$. Of course, if $\vec{A}$ has no curl, it describes a system with zero magnetic field, and you don't get Landau quantization. You are interested in precisely when $\vec{A}$ HAS curl, which is exactly when your argument fails.
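The path dependence is easy to check numerically. Below is a small sketch (my own illustration, not part of the original exchange): for the symmetric gauge $\vec{A}=(-By/2,\,Bx/2)$, which has constant curl $B$, the line integral $\int\vec{A}\cdot d\vec{l}$ from $(0,0)$ to $(1,1)$ along two different paths differs by exactly the enclosed flux, so the would-be phase $f$ is not a function of the endpoint alone.

```python
# Path dependence of the line integral of A when curl A != 0.
# Symmetric gauge in 2D: A = (-B*y/2, B*x/2), so curl A = B (uniform field).
B = 1.0

def A(x, y):
    return (-B * y / 2.0, B * x / 2.0)

def line_integral(path, n=10_000):
    """Midpoint-rule approximation of the integral of A . dl along path(t), t in [0, 1]."""
    total = 0.0
    for i in range(n):
        x0, y0 = path(i / n)
        x1, y1 = path((i + 1) / n)
        ax, ay = A((x0 + x1) / 2, (y0 + y1) / 2)
        total += ax * (x1 - x0) + ay * (y1 - y0)
    return total

# Two paths from (0, 0) to (1, 1):
right_then_up = lambda t: (2 * t, 0.0) if t < 0.5 else (1.0, 2 * t - 1.0)
up_then_right = lambda t: (0.0, 2 * t) if t < 0.5 else (2 * t - 1.0, 1.0)

# The two results differ by B, the flux through the unit square (Stokes' theorem),
# so exp(i * integral) cannot define a single-valued gauge function f.
```

The first path gives $+B/2$ and the second $-B/2$; only when the curl (and hence the flux) vanishes do all paths agree and the substitution in the question goes through.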
{ "domain": "physics.stackexchange", "id": 55219, "tags": "quantum-mechanics, magnetic-fields, wavefunction, hamiltonian, wilson-loop" }
The Von Neumann-Wigner interpretation as the explanation for the fine-tuning problem and the existence of free-will
Question: Disclaimer: Some of the concepts I'm using here are considered by some to be pseudo-science. I do not intend to have a hocus-pocus discussion of fairies and wizards here, my intention is to have a rational scientific discussion. If anyone feels that I am invoking any nonsensical ideas please do point them out to me with clear reasoning. There exists a theory supposed to resolve the measurement problem in quantum physics which goes by the name of the Von Neumann-Wigner interpretation and I think it has some stigma associated with it. Be that as it may, consider for a moment that by some as yet unknown mechanism, something - which I will refer to simply as the abstract, "consciousness" for the sake of convenience - is capable not only of initiating wave-function collapse, but furthermore, capable of having some influence on the outcome of the observation. This is a supposition, I have no strong proof of this. Given that, I reason that one can explain at least the following 2 problems in our current understanding of the universe. Firstly, the so called, "Fine-tuning Problem" which asks why the universe seems to have such carefully chosen parameters. And secondly, the action of "free-will" and where it might have a place in our reality. The Fine-tuning Problem I will assume that everyone is aware of this and also aware that the Weak Anthropic Principle is often postulated as an explanation to this problem by suggesting that given a large selection of possible universes, the one with the right conditions for sapient life is the one which will have people like us asking such philosophical questions as "why is the universe the way it is". However, this theory relies on invoking the notion of multiple universes in some form and often comes hand in hand with multiverse theory which seems like a stretch. Now, here is my question. Is it not possible to validate the WAP using simple quantum physics provided that the above supposition is correct? 
An "un-observed" quantum system will evolve in a superposition of states until it is observed and collapsed. So then, we do not need multiverse theory to explain this problem. The universe can be said to have evolved in multiple possibilities as a quantum system for some period of time (effectively having many universes evolving simultaneously) before one of the many branches of possibility 'lucks out' (statistical inevitability) and ultimately gives rise to the first ever sapient life. In that "instant" (there is no real concept of time in this abstract possibility space?), the whole universal wave-function is collapsed due to the presence of an observer, and the history of that "branch" is retroactively "collapsed" into existence. Thereby explaining the Fine-tuning problem without the need to invoke multiverse theory. Free Will In ancient times, Descartes postulated that the universe was divided into two realms, the physical world, and the mental (or spirit) world (this theory is known as "Descartes Duality"). As we all know, this theory falls apart because in order for the "spirit" world to have any influence on the physical world, there would need to be an exchange of energy between the two worlds and this would violate the principle of energy conservation. However, is it not true that in the funky realm of quantum mechanics, a system - right before the moment of collapse - can have many possible outcomes (sometimes infinite). If my above supposition is correct, then this abstract "consciousness" (whatever that really is) may be able to subtly influence the outcome of quantum measurements and thus, influence the evolution of the physical world from the "outside" which in turn opens the door for the existence of free-will. In the aftermath of the Newtonian revolution, the concepts of materialism, determinism, and strong objectivity came about and declared the concept of "free-will" as unscientific. 
It would seem to me that since the advent of quantum physics in the early 20th century, the material paradigm should have loosened its grip! We now know that the physical reality we live in is - at best - the tip of a very abstract iceberg. We are now understanding the universe in terms of abstract, infinite-dimensional Hilbert spaces! We have such mad concepts as non-locality and the uncertainty principle and yet, it seems almost as if the mainstream scientific community is still stuck in the 17th century! Why is the concept of materialism still so prevalent in modern science? Also, note: I am not suggesting that this "consciousness" has total control over the outcomes of quantum measurements, we know from experiment (and math) that the probability distributions for these measurements follow certain patterns and rules (the Born rule), but it is perfectly possible to influence the outcome of any specific measurement without affecting the overall probability distribution is it not? I am not trying to prove anything here. I am merely exploring a theory and am curious as to why the scientific community does not seem to look into this? There must be something I'm missing? Is it simply because they feel that this makes too many assumptions without enough tangible evidence? Is it because they feel this is too "pure" and not practically relevant? It seems to me that we have two big problems in science which fit together like a plug in a socket. On the one hand, we have no working explanation for the nature of consciousness. On the other hand, we have the Measurement Problem of quantum physics... Surely I can't be the only one noticing the correspondence here?! Answer: I am not trying to prove anything here. I am merely exploring a theory and am curious as to why the scientific community does not seem to look into this? There must be something I'm missing? 
Firstly, without meaning in the least to sound patronising, I think you have a very good question, but I will try to list the problems that I think physics has with your question. To be blunt, you are not proving anything here. You don't have a theory, you have a hypothesis. This implies that no facts to support your argument are supplied, which would tip the balance towards a theory, as would any predictions made in your question. Any answer you get regarding the involvement of consciousness (however it is "created/generated") will involve a degree of personal opinion, in the same way as a question regarding "life" after death / ghosts will, in my opinion (you see, here we go with the opinions:) also involve personal feelings, biases, prejudices etc, because in both situations we have no physics based definite theories that we can use to explore the subject further. That's, again imo, why the science community, or at least the physics section of it, does not look into it, because there is nothing definite to go on. If you were, for example, to provide a theory with supporting evidence that ghosts existed, and that you could somehow summon them on command and others could replicate your findings, then of course physics would investigate, but we both know, so far anyway, that nobody can treat ghosts in the same way as we treat a scientific theory like General Relativity, that makes predictions (which we can and do test, daily), which explains physical phenomena that earlier theories could not, such as the motion of Mercury. Be that as it may, consider for a moment that by some as yet unknown mechanism, something - which I will refer to simply as the abstract, "consciousness" for the sake of convenience - is capable not only of initiating wave-function collapse, but furthermore, capable of having some influence on the outcome of the observation. This is a supposition, I have no strong proof of this. Short answer, nobody else has proof either. 
I can give you 6 or 8 different causes for the nature of reality, (as could lots of people on this site) but I can't even start to prove a single one of them. On the one hand, we have no working explanation for the nature of consciousness. On the other hand, we have the Measurement Problem of quantum physics... Surely I can't be the only one noticing the correspondence here My "answer" to that is, we have two mysteries (which again is a personal opinion, look up decoherence, as some will say, there is no mystery to solve in the first place). But these issues, to stay on safer ground, may or may not be connected. I think your question will be closed because, although everyone likes a mystery, not many people like paying to see unsolved movies, and that is the central point I am trying to make. It does not seem that the physics community can proceed along either of these issues with the paucity of evidence we presently unfortunately possess. I'll be honest, I expected as much... The thing is I don't want to have some silly conversation with philosophers! I want hard physics from people who know what they're talking about. Instead, no progress is made because we refuse to look in new places... Physics should be trying to explain everything I know you expected as much, :), and you know my answer is the only answer I can give you :), because you wanted an answer based on the physics method, which is experimental and math based, so I can only answer in those terms. Every so often even a professional physicist or a mathematician (and I am not one of either) asks a similar question on this site....because people want to know, which we both know is what separates us from sheep, say. As I am sure you already know, Einstein and Bohr spent a good bit of their time on this, (as have you, I'm sure). It has not given us a definitive answer in over a century of debate. 
It's as if the physics community has decided that physics ends at this arbitrary boundary and won't look into anything unless it's on the right side of that line. Ok, I would like to keep the philosophy out of it as much as possible too. Take the conscious mind. Physics can tell you, to some degree, how we make and store memories, pretty much the same way as primates do, but physics can't tell you how we humans are able to imagine how a house or bridge looks before it's built, but a non human animal, no matter how big its brain, can't achieve that level of abstraction. Why not? In other words, we spend at least as much time in an abstract world as in the real world, and yet our genes are 94% (or some figure like that) as close to certain primates. Why is that? I have no idea. I would absolutely not agree that the physics community has drawn a line in the sand on these issues. Iconoclasts exist in every field, and if any particular physicist thought his/her time was best spent "solving" how we can think abstractly, and prove their theory to the same experimental standards as in other areas of physics, they would do it. Physicists, AFAIK, can often be as competitive, ego-driven, and posterity minded as people in any other field of human enterprise. But they don't do it, mainly I believe, because there is no obvious open crack in these issues that can be expanded and exploited to make a math based theory that can be tested and makes predictions, that is to follow the physics principles that work for every other accepted theory. Perhaps if theoretical physicists would spend some time thinking about these concepts they would come up with testable predictions They do, they really do think about these concepts, the interpretation of quantum mechanics is still debated, it's just that no new insight has been expanded enough to convince enough people "to give it legs" and become more openly debated. Physics should be trying to explain everything. 
Of course it should, within the experimental framework of physics, which as I say above, finds it difficult to get a physical toehold on the topics you raise. We are thinking about how we think, that's pretty philosophically based, (unfortunately), at the moment. I don't agree that my "hypothesis" makes no predictions. Consider this experiment. If consciousness is capable of influencing the outcome of particular observations, then have a person consciously trying to influence the outcome of a spin up/down measurement to go "up" at certain times and other times not to try at all and see if there is any slight skew in the distribution as a result of their conscious influence on the system. Repeat the whole process many times to eliminate statistical anomalies. When has anyone ever tried to consciously affect the result of a quantum measurement in a lab? I don't know offhand, but I would be extremely surprised if this has not been tried lots and lots of times and I bet you a beer, the results are inconclusive and are completely random. I would doubt that Nature/ Reality/ God???, would make it that easy for us. It's something you could do yourself, and actually you do a version (the many worlds interpretation) of this experiment every time you try to predict the future. I invariably get the future wrong, and I would think you do too. Also, I have looked into decoherence in the past and it was my understanding that it does not even try to resolve the measurement problem? Decoherence is a theory for the mechanism by which information is "lost" to the environment but it does not explain what causes collapse in the first place. Have I misunderstood? Now I admit I am on iffy territory here. 
It was my assumption that decoherence is one of the reasons why, for example, quantum computers are difficult to build, as outside influences affect results on a quantum scale without human consciousness being involved directly, but obviously indirectly when we notice the machine has started to behave like a normal computer, rather than one that exploits a superposition of states. You could ask a question here on this site, and / or read up on this topic easily enough on the Web. It is my understanding that some physicists believe decoherence extends further into the quantum world than others do. I do apologise, I simply never followed the arguments over it, (they are probably beyond me), but at least it is physics and it is testable.
{ "domain": "physics.stackexchange", "id": 35710, "tags": "quantum-mechanics, cosmology, measurement-problem, anthropic-principle" }
ERA Interim, how to handle total precipitation
Question: I am trying to compute accumulated annual precipitation for year 2000-2001 based on the ERA Interim, but I am stuck with some unresolved questions: If I download the synoptic monthly means of "total precipitation" from ECMWF for year 2000-2001 with "Select time" set to "00:00:00" and "12:00:00" respectively, and choose "Select step" as "3", I get a *.nc file with 48 time steps. The file also contains a "scale_factor" and "add_offset". I assume that I must use those values in (add_offset + (downloaded data * scale_factor)) to get actual precipitation in meters? And I assume that if I sum the 48 time steps of data (corrected with add_offset and scale_factor) and divide by 2 (2 years of data) then I get the average accumulated precipitation per year. Is this correct? What then puzzles me is that the "scale_factor" is very low (3.9E-8), meaning that the values I get will be almost equal to the "add_offset" which is 0.0013. Am I making any obvious mistakes? Or do any of you know of a guide that is more elaborate than the documentation found at ECMWF? Thanks a lot in advance for any ideas. Answer: First of all, these netCDF files follow the CF Metadata Conventions, which describe the use of scale_factor and add_offset in section 8.1 Packed data of the conventions description. In short, you're applying them correctly: If both attributes are present, the data are scaled before the offset is added. However, I think that you've selected the wrong fields for your purpose. The total precipitation variable is an accumulation since the start of the re-forecast, and you've selected a monthly mean of accumulations over the first 3 hours of the 00Z re-forecasts and the same thing from the 12Z re-forecasts. This means that you're missing the accumulated precipitation from hours 3 to 12 of the re-forecasts, so your annual mean estimates might only be 1/4 of any ballpark values you're expecting. 
I suspect that you need to choose step=12 rather than step=3, as described in the ERA FAQ. Also note, I think the comment in that FAQ about the monthly mean from daily mean data availability being "planned" but "not yet implemented" is out of date and that may be the better route to calculate the annual means (see also section 3 of Berrisford et al, 2011).
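As a side note on the packing question, the CF unpacking rule from section 8.1 can be sketched in a couple of lines (a hypothetical illustration with made-up packed values; real code would read the attributes from the file, e.g. with the netCDF4 or xarray libraries, which usually apply them automatically):

```python
# Attribute values quoted in the question (assumed, not read from a real file)
scale_factor = 3.9e-8
add_offset = 0.0013

# netCDF files typically pack each float into a 16-bit integer in roughly
# [-32767, 32767], so packed * scale_factor spans about +/-0.00128 -- the same
# order of magnitude as add_offset. The tiny scale_factor is therefore expected,
# not a mistake: the offset centres the data range and the scale spreads it.
packed = [-32000, 0, 32000]  # hypothetical packed shorts

# CF conventions, section 8.1: scale first, then add the offset
unpacked = [add_offset + p * scale_factor for p in packed]
```

With these numbers the unpacked values run from about 0.00005 m up to 0.00255 m, so they are not all "almost equal to the add_offset" once the packed integers span their full range.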
{ "domain": "earthscience.stackexchange", "id": 657, "tags": "precipitation, reanalysis, scaling, era" }
Incorporating newline-as-statement-terminator heuristics into context-free languages
Question: Several block structured languages (Scala, Go, Ruby, Julia, Quorum, ...) use semicolons as statement terminators, but allow newlines instead of semicolons under certain circumstances. My question is: how can I represent Scala-like optional semicolons in a context free grammar? The specific issue is that some kinds of nesting delimiters "enable newlines" while others "disable newlines," so you need to pay attention to what kind of delimiters you are most immediately inside of. I'm specifically asking about Scala's heuristic, because Julia and Quorum don't have specs. Ruby has a spec, but handles the problem by scattering "[no line-terminator here]" throughout the formal grammar and I've been unable to find a general rule. Go has a well described heuristic but it's lexical only, which makes it obvious how to specify and implement, but its usability is somewhat disappointing. (Go inserts a semicolon even if you haven't closed the most recent ( token.) Scala goes well beyond the Go lexical rule with a nesting rule (cf reference, Section 1.2). In addition to lexically ensuring that both the token before and after the newline are consistent with the insertion of a statement terminator, newlines-as-statement-terminators are disabled between matching ( and ) parenthesis and [ and ] brackets, but then re-enabled between matching { and } braces. I can figure out how to implement a simple pre-processor as a push-down automaton. The automaton starts with enabled on the stack. As you process the token stream, when you see a ( or [ push disabled onto the stack, when you see { push enabled onto the stack, and when you see ), ] or } pop the top of the stack. Then when you see a newline that otherwise satisfies the lexical rules for the insertion of a statement terminator, you insert the terminator if and only if enabled is on the top of the stack. 
So Scala's newline-to-statement-terminator rule is "context free" in some sense, but I haven't been able to figure out how to incorporate this push-down automaton in with the rest of the language grammar. Answer: I do not know Scala (though I hope it has a good opera-tional semantics). I am answering on the basis of the information you give in the question. I think your problem is using the stack rather than the finite state to remember local behavior. Finite state is for information you are currently using, while the stack is for information that you have to remember for future use, but do not need right now. At least, I see this description as a convenient way of designing pushdown uses - though I am aware of all the equivalence games between various definitions of PDA. So your automaton should start with nothing on the stack, but with a terminator register containing enabled. As you process the token stream, when you see a ( or [ push the content of the register onto the stack and put disabled in the register, when you see { push the content of the register onto the stack and put enabled in the register, and when you see ), ] or } pop the top of the stack and put it in the register. Then when you see a newline that otherwise satisfies the lexical rules for the insertion of a statement terminator, you insert the terminator if and only if enabled is in the register. The register may be seen as a simple finite-state control that can be used for a cross-product with the rest of the finite-state control of the PDA parsing your Scala syntax. I expect that by proceeding this way, you get a stack policy for terminator insertion rules that can be smoothly merged with the policy for parsing the rest of the syntax, since the various types of parentheses must match for the rest of the language too. And your parser should sing without dissonance. Doing finite state control cross-product is another technique for designing PDAs (among other devices, most likely). 
I think it has been extensively analyzed for FA (I think, but it should be checked that this is related to Krohn–Rhodes theory).
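A minimal sketch of that register-plus-stack preprocessor (my own illustration, not from the Scala spec; the real rule also checks that the tokens before and after the newline can legally end and begin a statement, which is elided here so that every newline under an enabled policy becomes a terminator):

```python
def insert_terminators(tokens):
    """Rewrite a token stream, turning newlines into ';' only where enabled."""
    out = []
    register = "enabled"   # current newline policy (the finite-state register)
    stack = []             # saved policies for the enclosing delimiters
    for tok in tokens:
        if tok in ("(", "["):
            stack.append(register)   # save, then disable inside ( ) and [ ]
            register = "disabled"
            out.append(tok)
        elif tok == "{":
            stack.append(register)   # save, then re-enable inside { }
            register = "enabled"
            out.append(tok)
        elif tok in (")", "]", "}"):
            if stack:
                register = stack.pop()   # restore the enclosing policy
            out.append(tok)
        elif tok == "\n":
            if register == "enabled":
                out.append(";")
            # the newline itself is dropped either way
        else:
            out.append(tok)
    return out

# Inside parens the newline does not terminate; inside braces it does:
insert_terminators(["f", "(", "a", "\n", "b", ")", "\n"])
# -> ['f', '(', 'a', 'b', ')', ';']
insert_terminators(["{", "a", "\n", "b", "}"])
# -> ['{', 'a', ';', 'b', '}']
```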
{ "domain": "cs.stackexchange", "id": 3145, "tags": "context-free, programming-languages, pushdown-automata" }
Algorithm for competing cells of 0s and 1s
Question: I'm working on a practice algorithm problem, stated as follows: There are eight houses represented as cells. Each day, the houses compete with adjacent ones. 1 represents an "active" house and 0 represents an "inactive" house. If the neighbors on both sides of a given house are either both active or both inactive, then that house becomes inactive on the next day. Otherwise it becomes active. For example, if we had a group of neighbors [0, 1, 0] then the house at [1] would become 0 since both the house to its left and right are both inactive. The cells at both ends only have one adjacent cell so assume that the unoccupied space on the other side is an inactive cell. Even after updating the cell, you have to consider its prior state when updating the others so that the state information of each cell is updated simultaneously. The function takes the array of states and a number of days and should output the state of the houses after the given number of days. Examples: input: states = [1, 0, 0, 0, 0, 1, 0, 0], days = 1 output should be [0, 1, 0, 0, 1, 0, 1, 0] input: states = [1, 1, 1, 0, 1, 1, 1, 1], days = 2 output should be [0, 0, 0, 0, 0, 1, 1, 0] Here's my solution: def cell_compete(states, days): def new_state(in_states): new_state = [] for i in range(len(in_states)): if i == 0: group = [0, in_states[0], in_states[1]] elif i == len(in_states) - 1: group = [in_states[i - 1], in_states[i], 0] else: group = [in_states[i - 1], in_states[i], in_states[i + 1]] new_state.append(0 if group[0] == group[2] else 1) return new_state state = None j = 0 while j < days: if not state: state = new_state(states) else: state = new_state(state) j += 1 return state I originally thought to take advantage of the fact they are 0s and 1s only and to use bitwise operators, but couldn't quite get that to work. How can I improve the efficiency of this algorithm or the readability of the code itself? Answer: EDIT: Thanks to @benrg pointing out a bug of the previous algorithm. 
I have revised the algorithm and moved it to the second part since the explanation is long. While the other answer focuses more on coding style, this answer will focus more on performance. Implementation Improvements I will show some ways to improve the performance of the code in the original post. The use of group is unnecessary in the for-loop. Also note that if a house has a missing adjacent neighbour, its next state will be the same as the existing neighbour. So the loop can be improved as follows. for i in range(len(in_states)): if i == 0: out_state = in_states[1] elif i == len(in_states) - 1: out_state = in_states[i - 1] else: out_state = in_states[i - 1] != in_states[i + 1] new_state.append(out_state) It is usually more efficient to use list comprehensions rather than explicit for-loops to construct lists in Python. Here, you need to construct a list where: (1) the first element is in_states[1]; (2) the last element is in_states[-2]; (3) all other elements are in_states[i - 1] != in_states[i + 1]. In this case, it is possible to use a list comprehension to construct a list for (3) and then add the first and last elements. new_states = [in_states[i-1] != in_states[i+1] for i in range(1, len(in_states) - 1)] new_states.insert(0, in_states[1]) new_states.append(in_states[-2]) However, insertion at the beginning of a list requires updating the entire list. A better way to construct the list is to use extend with a generator expression: new_states = [in_states[1]] new_states.extend(in_states[i-1] != in_states[i+1] for i in range(1, len(in_states) - 1)) new_states.append(in_states[-2]) An even better approach is to use the unpack operator * with a generator expression. This approach is more concise and also has the best performance. 
# state_gen is a generator expression for computing new_states[1:-1] state_gen = (in_states[i-1] != in_states[i+1] for i in range(1, len(in_states) - 1)) new_states = [in_states[1], *state_gen, in_states[-2]] Note that it is possible to unpack multiple iterators / generator expressions into the same list like this: new_states = [*it1, *it2, *it3] Note that if it1 and it3 are already lists, unpacking will make another copy so it could be less efficient than extending it1 with it2 and it3, if the size of it1 is large. Algorithmic Improvement Here I show how to improve the algorithm for more general inputs (i.e. a varying number of houses). The naive solution updates the house states for each day. In order to improve it, one needs to find a connection between the input states \$s_0\$ and the states \$s_n\$ after some days \$n\$ for a direct computation. Let \$s_k[d]\$ be the state of the house at index \$d\$ on day \$k\$ and \$H\$ be the total number of houses. We first extend the initial state sequence \$s_0\$ into an auxiliary sequence \$s_0'\$ of length \$H'=2H+2\$ based on the following: $$ s_0'[d]=\left\{\begin{array}{ll} s_0[d] & d\in[0, H) \\ 0 & d=H, 2H + 1\\ s_0[2H-d] & d\in(H,2H] \\ \end{array}\right.\label{df1}\tag{1} $$ The sequence \$s_k'\$ is updated based on the following recurrence, where \$\oplus\$ and \$\%\$ are the exclusive-or and modulo operations, respectively: $$ s_{k+1}'[d] = s_k'[(d-1)\%H']\oplus s_k'[(d+1)\%H']\label{df2}\tag{2} $$ Using two basic properties of \$\oplus\$: \$a\oplus a = 0\$ and \$a\oplus 0 = a\$, the relationship (\ref{df1}) can be proved to hold on any day \$k\$ by induction: $$s_{k+1}'[d] = \left\{ \begin{array}{ll} s_k'[1]\oplus s_k'[H'-1] = s_k'[1] = s_k[1] = s_{k+1}[0] & d = 0 \\ s_k'[d-1]\oplus s_k'[d+1] = s_k[d-1]\oplus s_k[d+1]=s_{k+1}[d] & d\in(0,H) \\ s_k'[H-1]\oplus s_k'[H+1] = s_k[H-1]\oplus s_k[H-1] = 0 & d = H \\ s_k'[2H-(d-1)]\oplus s_k'[2H-(d+1)] \\ \quad = s_k[2H-(d-1)]\oplus s_k[2H-(d+1)] = s_{k+1}[2H-d] & 
d\in(H,2H) \\ s_k'[2H-1]\oplus s_k'[2H+1] = s_k'[2H-1] = s_k[1] = s_{k+1}[0] & d = 2H \\ s_k'[2H]\oplus s_k'[0] = s_k[0]\oplus s_k[0] = 0 & d = 2H+1 \end{array}\right. $$ We can then verify the following property of \$s_k'\$ $$ \begin{eqnarray} s_{k+1}'[d] & = & s_k'[(d-1)\%H'] \oplus s_k'[(d+1)\%H'] & \\ s_{k+2}'[d] & = & s_{k+1}'[(d-1)\%H'] \oplus s_{k+1}'[(d+1)\%H'] \\ & = & s_k'[(d-2)\%H'] \oplus s_k'[d] \oplus s_k'[d] \oplus s_k'[(d+2)\%H'] \\ & = & s_k'[(d-2)\%H'] \oplus s_k'[(d+2)\%H'] \\ s_{k+4}'[d] & = & s_{k+2}'[(d-2)\%H'] \oplus s_{k+2}'[(d+2)\%H'] \\ & = & s_k'[(d-4)\%H'] \oplus s_k'[d] \oplus s_k'[d] \oplus s_k'[(d+4)\%H'] \\ & = & s_k'[(d-4)\%H'] \oplus s_k'[(d+4)\%H'] \\ \ldots & \\ s_{k+2^m}'[d] & = & s_k'[(d-2^m)\%H'] \oplus s_k'[(d+2^m)\%H'] \label{f1} \tag{3} \end{eqnarray} $$ Based on the recurrence (\ref{f1}), one can directly compute \$s_{k+2^m}'\$ from \$s_k'\$ and skip all the intermediate computations. We can also substitute \$s_k'\$ with \$s_k\$ in (\ref{f1}), leading to the following computations: $$ \begin{eqnarray} d_1' & = & (d-2^m)\%H' & \qquad d_2' & = & (d+2^m)\%H' \\ d_1 & = & \min(d_1',2H-d_1') & \qquad d_2 & = & \min(d_2', 2H-d_2') \\ a_1 & = & \left\{\begin{array}{ll} s_k[d_1] & d_1 \in [0, H) \\ 0 & \text{Otherwise} \\ \end{array}\right. & \qquad a_2 & = & \left\{\begin{array}{ll} s_k[d_2] & d_2 \in [0, H) \\ 0 & \text{Otherwise} \\ \end{array}\right. \\ & & & s_{k+2^m}[d] & = & a_1 \oplus a_2 \label{f2}\tag{4} \end{eqnarray} $$ Note that since the sequence \$\{2^i\%H'\}_{i=0}^{+\infty}\$ has no more than \$H'\$ states, it is guaranteed that \$\{s_{k+2^i}\}_{i=0}^{+\infty}\$ has a cycle. More formally, there exists some \$c>0\$ such that \$s_{k+2^{a+c}}=s_{k+2^a}\$ holds for every \$a\$ that is greater than a certain threshold. Based on (\ref{f1}) and (\ref{f2}), this entails either \$H'|2^{a+c}-2^a\$ or \$H'|2^{a+c}+2^a\$ holds. 
If \$H'\$ is factorized into \$2^r\cdot m\$ where \$m\$ is odd, we can see that \$a\geq r\$ must hold for either divisibility condition. That is to say, if we start from day \$2^r\$ and find the next \$t\$ such that \$H'|2^t-2^r\$ or \$H'|2^t+2^r\$, then \$s_{k+2^t}=s_{k+2^r}\$ holds for every \$k\$. This leads to the following algorithm:
Input: \$H\$ houses with initial states \$s_0\$, number of days \$n\$
Output: House states \$s_n\$ after \$n\$ days
Step 1: Let \$H'\leftarrow 2H+2\$, find the maximal \$r\$ such that \$2^r\mid H'\$
Step 2: If \$n\leq 2^r\$, go to Step 5.
Step 3: Find the minimal \$t, t>r\$ such that either \$H'|2^t-2^r\$ or \$H'|2^t+2^r\$ holds.
Step 4: \$n\leftarrow (n-2^r)\%(2^t-2^r)+2^r\$
Step 5: Divide \$n\$ into a power-of-2 sum \$2^{b_0}+2^{b_1}+\ldots+2^{b_u}\$ and calculate \$s_n\$ based on (\ref{f2})
As an example, if there are \$H=8\$ houses, \$H'=18=2^1\cdot 9\$. So \$r=1\$. We find that \$t=4\$ is the minimal number such that \$18\mid 2^4+2=18\$. Therefore \$s_{k+2}=s_{k+2^4}\$ holds for every \$k\geq 0\$. So we reduce any \$n>2\$ to \$(n-2)\%14 + 2\$, and then apply Step 5 of the algorithm to get \$s_n\$. Based on the above analysis, every \$n\$ can be reduced to a number in \$[0, 2^t)\$ and \$s_n\$ can be computed within \$\min(t, \log n)\$ steps using the recurrence (\ref{f2}). So the ultimate time complexity of the algorithm is \$\Theta(H'\cdot \min(t, \log n))=\Theta(H\cdot\min(m,\log n))=\Theta(\min(H^2,H\log n))\$. This is much better than the naive algorithm, which has a time complexity of \$\Theta(H\cdot n)\$.
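The skip-ahead scheme above can be sketched in a few lines of Python. This follows the XOR update used in the derivation (note the snippet at the top compares neighbours with ==, i.e. the complementary encoding), and only implements the doubling of Step 5; the cycle reduction of Steps 1–4 is omitted for brevity:

```python
def step(states):
    # one naive day: an interior house XORs its two neighbours; the end
    # houses behave as if they had a permanently-off phantom neighbour
    return ([states[1]]
            + [states[i - 1] ^ states[i + 1] for i in range(1, len(states) - 1)]
            + [states[-2]])

def after_days(states, n_days):
    # skip ahead: build the extended sequence s' of eq. (1), then apply the
    # doubling recurrence (3) once per set bit of n_days
    H = len(states)
    ext = states + [0] + states[::-1] + [0]      # s'_0, length H' = 2H + 2
    Hp = len(ext)
    shift = 1                                    # current 2^b, reduced mod H'
    while n_days:
        if n_days & 1:
            ext = [ext[(d - shift) % Hp] ^ ext[(d + shift) % Hp]
                   for d in range(Hp)]
        shift = (shift * 2) % Hp
        n_days >>= 1
    return ext[:H]

print(after_days([1, 0, 1, 1, 0], 1))   # -> [0, 0, 1, 1, 1], same as step(...)
```

Each application of the list comprehension costs Θ(H'), and at most min(t, log n) applications are needed, matching the complexity analysis above.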
{ "domain": "codereview.stackexchange", "id": 36004, "tags": "python, performance, algorithm, programming-challenge, cellular-automata" }
Why total heat of hydrogenation of 1,3-cyclohexadiene is more than that of benzene?
Question: I'm told that heat of hydrogenation (HOH) is directly proportional to the number of π bonds and inversely proportional to stability. So, is the aromaticity responsible for this? Also, what is the general approach to problems like this? Say I encountered naphthalene and some hydrocarbon which contains about 2 rings and has fewer π bonds than naphthalene, just like stated above – then how shall I decide? Is there any specific rule like "if a compound is aromatic, its HOH will be reduced by this amount", or something like that? Answer: Yes. The relatively smaller heat of hydrogenation (HOH) for benzene as compared to that for 1,3-cyclohexadiene is due to the aromaticity of the first. Analyzing the thermochemistry is indeed among the first and perhaps more intuitive ways to present and quantify aromaticity itself.
A) Cyclohexene HOH = -120 kJ/mol
B) 1,4-Cyclohexadiene HOH = -240 kJ/mol
C) 1,3-Cyclohexadiene HOH = -232 kJ/mol
D) Benzene HOH = -208 kJ/mol
While for B the HOH is about double that of cyclohexene (A), and thus accords with the assertion in the question, the conjugation in C does result in a "lower than expected" value. The 8 kJ/mol difference is the resonance energy of the two conjugated double bonds. The situation is even more striking for D, as this difference from "an expected value" now amounts to about 152 kJ/mol. This is again a measure of stability, and its distinctively high value is a manifestation of aromaticity. The HOH of benzene is even less than that of the cyclohexadienes. This shouldn't come as a surprise, though, as there are not really individual double bonds. In other words, while hydrogenation of an individual double bond is an exothermic reaction, hydrogenation of benzene to cyclohexadienes is an endothermic process. Once you disrupt aromaticity, you can think again in terms of individual double bonds (although delocalisation may occur to a small extent, as in C).
And this fact should answer the question "why HOH of benzene is higher than that of cyclopentene?", too.
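The bookkeeping behind the quoted 8 kJ/mol and 152 kJ/mol figures can be checked in a few lines of Python (a back-of-the-envelope sketch using the HOH values listed above):

```python
# heats of hydrogenation quoted above, in kJ/mol
hoh = {
    "cyclohexene":        -120,
    "1,4-cyclohexadiene": -240,
    "1,3-cyclohexadiene": -232,
    "benzene":            -208,
}

per_double_bond = hoh["cyclohexene"]   # -120 kJ/mol for one isolated C=C

# "expected" HOH for n independent double bonds, minus the measured value;
# the (negative) difference is the resonance stabilization
stabilization_13 = 2 * per_double_bond - hoh["1,3-cyclohexadiene"]
stabilization_bz = 3 * per_double_bond - hoh["benzene"]

print(abs(stabilization_13), abs(stabilization_bz))   # -> 8 152
```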
{ "domain": "chemistry.stackexchange", "id": 11329, "tags": "organic-chemistry, heat, hydrocarbons, aromaticity" }
Magnetic field (by current density)
Question: Hello, I need someone to help me see what I'm doing wrong: we know that the magnetic field that a circular loop of radius "a" generates on its axis is $$\vec{B}(\vec{r})=\frac{\mu_0I}{4\pi}\int\frac{\vec{dl'}\times(\vec{r}-\vec{r}\ ')}{|\vec{r}-\vec{r}\ '|^3}=\frac{\mu_0Ia^2}{2(z^2+a^2)^{3/2}}\hat{k}$$ with $\vec{r}-\vec{r}\ '=z\hat{k}-a\hat{\rho}\ '$ and $\vec{dl}\ '=ad\varphi'\vec{\varphi}\ '$. Now, when I try to calculate it with the expression for the current density, $$\vec{J}=\underbrace{\frac{I}{2\pi a}\hat{\varphi}}_{\vec{J_l}}\delta(z)\delta(\rho-a)$$ then $$\vec{B}(\vec{r})=\frac{\mu_0}{4\pi}\int\frac{\vec{J}\times(\vec{r}-\vec{r}\ ')}{|\vec{r}-\vec{r}\ '|^3}dV'=\frac{\mu_0Ia}{4\pi(z^2+a^2)^{3/2}}\hat{k}$$ with $\vec{r}-\vec{r}\ '=z\hat{k}-a\hat{\rho}\ '$. Can someone help me? Am I miscalculating the current density? Why doesn't it come out right? Answer: The idea is that $J{\rm d}V = I {\rm d}l$, so the current density should be $$ \vec{J}({\bf r}) = I\delta(z)\delta(\rho - a)\,\hat{\varphi} $$ That way $$ \int {\rm d}V~ J({\bf r}) = 2\pi a I = \int {\rm d}l ~ I $$ When you put that into your second expression you will get the same result.
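The normalization can also be checked numerically: integrate the Biot–Savart law around a discretized loop and compare with the closed-form on-axis field. The current, radius and height below are made-up values:

```python
import numpy as np

mu0 = 4e-7 * np.pi             # vacuum permeability, T*m/A
I, a, z = 2.0, 0.05, 0.03      # current (A), loop radius (m), axial height (m)

# discretize the loop: r' = a(cos phi, sin phi, 0), dl' = a dphi (-sin, cos, 0)
phi = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
dphi = 2.0 * np.pi / phi.size
rp = np.stack([a * np.cos(phi), a * np.sin(phi), np.zeros_like(phi)], axis=1)
dl = a * dphi * np.stack([-np.sin(phi), np.cos(phi), np.zeros_like(phi)], axis=1)

R = np.array([0.0, 0.0, z]) - rp                       # r - r'
integrand = np.cross(dl, R) / np.linalg.norm(R, axis=1, keepdims=True) ** 3
B = mu0 * I / (4.0 * np.pi) * integrand.sum(axis=0)

B_exact = mu0 * I * a**2 / (2.0 * (z**2 + a**2) ** 1.5)
print(B[2], B_exact)    # z components agree; x and y components vanish
```

The z component of the integrand happens to be constant in φ here, so the Riemann sum reproduces the analytic result to floating-point precision.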
{ "domain": "physics.stackexchange", "id": 54363, "tags": "homework-and-exercises, electromagnetism, magnetic-fields" }
A simple and safe implementation for strnstr (search for a substring in the first n characters of char array)
Question: I'd like to suggest the following implementation: // Find an instance of substr in an array of characters // the array of characters does not have to be null terminated // the search is limited to the first n characters in the array. char *strnstr(char *str, const char *substr, size_t n) { char *p = str, *pEnd = str+n; size_t substr_len = strlen(substr); if(0 == substr_len) return str; // the empty string is contained everywhere. pEnd -= (substr_len - 1); for(;p < pEnd; ++p) { if(0 == strncmp(p, substr, substr_len)) return p; } return NULL; } The rationale for the first parameter not being const is that you may want to use the return value pointer to modify the array at that location. For completeness, in C++ it's possible to add an overloaded variant that's const: const char *strnstr(const char *str, const char *substr, size_t n) { return strnstr((char *)str, substr, n); } Any comments? As suggested, here's a test program: #include <iostream> #include <cstring> #include <string> int main() { char s[] = "1234567890abcdefgh"; size_t n = sizeof(s) - 1; const char *patterns[] = { "efgh", "0ab", "0b", NULL }; const char *result = NULL; const char *pPattern = patterns[0]; std::cout << "array of length " << n << " is: " << s << std::endl; for (int i = 0; pPattern; pPattern = patterns[++i]) { result = strnstr(s, pPattern, n); std::cout << "finding " << pPattern << " n=" << n << ": " << (result ? result : "(null)") << std::endl; } pPattern = patterns[0]; result = strnstr(s, pPattern, n-1); std::cout << "finding " << pPattern << " n=" << n-1 << ": " << (result ? result : "(null)") << std::endl; return 0; } Output: array of length 18 is: 1234567890abcdefgh finding efgh n=18: efgh finding 0ab n=18: 0abcdefgh finding 0b n=18: (null) finding efgh n=17: (null) Answer: Design: Treating str as an array searched up to n is inconsistent with string-like functions and strstr().
Rather than "the search is limited to the first n characters in the array", I'd also expect characters in str that follow a null character are not searched. IMO, a design flaw. The following review assumes str[i] == 0 has no special meaning. Weak argument name str. str is the address of an array and maybe not a string. Calling a potential non-string str conveys the wrong idea. Suggest src, etc. When looking for sub-strings, I like needle and haystack. As the C version does not change str contents, recommend making that const. "The rationale for the first parameter is not to be const is that you may want to use the return value pointer to modify the array in that location." does not apply. Just cast the return value to char *. Follow strstr()'s style. // From C library. char *strstr( const char *s1, const char *s2); // Expected signatures: (spaced for clarity) // C char *strnstr(const char *src, const char *substr, size_t n); // C++ char *strnstr( char *src, const char *substr, size_t n); const char *strnstr(const char *src, const char *substr, size_t n); Using a name that is close to standard names is tricky. C reserves names with certain prefixes, etc., and so does *nix, etc. Maybe use CP_strnstr() and an optional #define strnstr CP_strnstr. Corner case: Returning str with if(0 == substr_len) return str; does not make sense when n == 0. I'd expect NULL. Underflow possible. The length of the needle may be longer or shorter than the haystack: // add check if (n + 1 < substr_len) { return NULL; } pEnd -= (substr_len - 1); Minor: In debug mode, consider testing against NULL char *strnstr(char *str, const char *substr, size_t n) { assert(str || n == 0); assert(substr);
{ "domain": "codereview.stackexchange", "id": 45129, "tags": "c++, c, strings" }
Computing Riemann tensor components for Alcubierre metric
Question: I am currently trying to compute the Riemann tensor components for the Alcubierre metric, and already, on the computation of the first component, I'm running into some issues. The trouble component in question is $R^x_{\;txt}$, so I've started with the formula: $$ R^x_{\;txt} = \partial_x\Gamma^x_{\;tt} -\partial_t\Gamma^x_{\;xt}+\Gamma^x_{\;x\mu}\Gamma^\mu_{\;tt}-\Gamma^x_{\;t\mu}\Gamma^\mu_{\;xt} $$ Using the Christoffel symbols provided by Mueller and Grave's Catalogue of Spacetimes, I started computing the first term, $\partial_x\Gamma^x_{\;tt}$: $$ \begin{align} \partial_x\Gamma^x_{\;tt} &= \partial_x\frac{f^3f_xv_s^4-c^2ff_xv_s^2-c^2f_tv_s}{c^2} \\ &=\frac1{c^2}\Big(\partial_xf^3f_xv_s^4-\partial_xc^2ff_xv_s^2-\partial_xc^2f_tv_s\Big) \end{align} $$ And, again isolating the first term and computing further: $$ \partial_xf^3f_xv_s^4 = f_xv_s^4\partial_xf^3 + f^3v_s^4\partial_xf_x + f^3f_x\partial_xv_s^4 $$ if my math is correct. The area I'm having trouble with is the partial derivative $\partial_xv_s$. I can't quite understand how to compute it. For reference, $v_s$ is defined by Alcubierre as a function of time: $$ v_s(t) = \frac{dx_s(t)}{dt} $$ and $x_s(t)$ is simply described as an "arbitrary function of time", describing the trajectory of the hypothetical spacecraft in the scenario of the metric. I don't quite see how the actual $x$ axis relates to this function, and so I'm at a loss as to how I should treat a derivative of the function with respect to $x$. What am I missing? Answer: As G. Smith helped answer, $v_s(t)$ is a function that depends solely on time, and as such does not change along the $x$ direction, so $\partial_xv_s(t) = 0$. The function is defined as the derivative of a function named $x_s(t)$, which also depends only on time, and it's the name of this function which misled me.
{ "domain": "physics.stackexchange", "id": 65519, "tags": "general-relativity, metric-tensor, curvature, warp-drives" }
Frequency estimates from three-dimensional data
Question: I have a lot of three-dimensional positional data from patients with tremor, unevenly sampled at approx. 50 Hz, with timestamps. I am trying to find the dominant frequencies in the signal/tremor. However, the dominant frequency may not be constant. My current strategy so far has been:
1. Calculating the magnitude (dR = sqrt(df$X**2 + df$Y**2 + df$Z**2)) and detrending by removing the mean.
2. Interpolating this to a sample rate of 50 Hz using a spline (see sample data in Figure 1).
3. Using pwelch from package oce to obtain the PSD with pwelch(indsp$y, fs = smplfrq).
My hypothesis is that I should observe a peak somewhere around 8-12 Hz. However, zero to few peaks are observed (if I'm reading this right), and those peaks have very little strength. What can I do to improve my frequency analysis? Is it correct to do the calculation on the magnitude of the "signal"? I would love to be able to extract the dominant frequencies and amplitude from my data. EDIT: Added info from comments.
The danger of this happening in exactly that way depends on how the data are acquired but the point remains that by obtaining the magnitude you are introducing some interference in the measurements. So, use with caution. Now, the way that a multidimensional DFT works is by first of all assuming that the dimensions the signal is measured over are orthogonal and then (as a byproduct of that) repeatedly applying the transform to the "remaining" dimension. In the one dimensional case, you have the way a quantity evolves in time, as a time series. The application of the DFT here is straightforward and it decomposes the quantity over time into a sum of sinusoids over time. In the two dimensional case, you apply the DFT over the "rows" of a two dimensional matrix holding your "signal" and then once more over the columns of the already transformed rows from the previous step. The result of this process is a two dimensional spectrum where the equivalent frequency bin (from the one dimensional case) is now a frequency ring. And this is because in the two dimensional case, it is not enough to ask "which frequency... (?)", you also have to specify "...along a particular direction". For more information about this, please see here and here. In the three dimensional case, you apply DFT to the "rows", you then apply DFT to the "columns" of the transformed "rows" and then you apply DFT once again along the "depth" rows (or, really, the remaining dimension) of the previously transformed data. This returns a spatial representation of your data where the "frequency ring" (which used to be the "frequency bin") is now a "frequency shell", that is, a hollow sphere. It is not enough to ask "which frequency..." now, you have to specify the direction on the surface of a sphere. You may be wondering "so what?" by now and that is putting it mildly. 
The point here is that if you do a three dimensional DFT you will also be able to infer the most dominant direction along which the tremor is happening. And that might be "correlateable" with other parameters of the health condition. That is, different brain circuits deteriorating, leading to tremors along specific directions. To do that, do a three dimensional DFT, shift the spectrum so that the low frequencies are towards the centre of the described "cube", take the magnitude of the complex result and find the maximum value (that is not at DC). The maximum value will be at some point $m,n,k$. The "angle" between a vector denoting the "forward" direction and the $m,n,k$ vector would give you the direction along which this movement is happening. The tremor might be up-down, diagonal, back to front, circular, etc. A three dimensional DFT will characterise this periodic movement fully. Hope this helps.
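A minimal numpy-only sketch of the one-dimensional version of this pipeline (uniform resampling followed by a spectral peak search) on synthetic data; the 9 Hz tremor frequency, sampling jitter and noise level are all made-up values, and a plain periodogram stands in for pwelch:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "tremor": a 9 Hz oscillation, unevenly sampled at roughly 50 Hz
f_tremor = 9.0
t = np.cumsum(rng.uniform(0.015, 0.025, 3000))    # uneven timestamps, s
x = np.sin(2 * np.pi * f_tremor * t) + 0.3 * rng.standard_normal(t.size)

# step 1: interpolate onto a regular 1/50 s grid
fs = 50.0
tu = np.arange(t[0], t[-1], 1.0 / fs)
xu = np.interp(tu, t, x)

# step 2: detrend, then take a periodogram (pwelch would average segments)
xu = xu - xu.mean()
freqs = np.fft.rfftfreq(xu.size, d=1.0 / fs)
power = np.abs(np.fft.rfft(xu)) ** 2

peak = freqs[np.argmax(power)]
print(peak)    # close to 9 Hz
```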
{ "domain": "dsp.stackexchange", "id": 7405, "tags": "signal-analysis, frequency-spectrum" }
Inspector Rubberduck and the abstract inspections
Question: The Rubberduck code inspections have just seen yet another structural change, hopefully for the better. The IInspectionModel interface was originally named IInspection; it only exposes the bare-bones inspection properties, those needed by the CodeInspectionSetting class: public interface IInspectionModel { /// <summary> /// Gets the inspection type name. /// </summary> string Name { get; } /// <summary> /// Gets the name of the inspection, without the "Inspection" suffix. /// </summary> string AnnotationName { get; } /// <summary> /// Gets a short description for the code inspection. /// </summary> string Description { get; } /// <summary> /// Gets a value indicating the type of the code inspection. /// </summary> CodeInspectionType InspectionType { get; } /// <summary> /// Gets a value indicating the severity level of the code inspection. /// </summary> CodeInspectionSeverity Severity { get; set; } } The CodeInspectionSetting type is XML-serialized into the Rubberduck settings file; this allows us to let the user determine an inspection's Severity level: [XmlType(AnonymousType = true)] public class CodeInspectionSetting : IInspectionModel { [XmlAttribute] public string Name { get; set; } [XmlIgnore] public string Description { get; set; } // not serialized because culture-dependent [XmlIgnore] public string AnnotationName { get; set; } [XmlAttribute] public CodeInspectionSeverity Severity { get; set; } [XmlIgnore] public string SeverityLabel { get { return RubberduckUI.ResourceManager.GetString("CodeInspectionSeverity_" + Severity, RubberduckUI.Culture); } set { foreach (var severity in Enum.GetValues(typeof (CodeInspectionSeverity))) { if (value == RubberduckUI.ResourceManager.GetString("CodeInspectionSeverity_" + severity, RubberduckUI.Culture)) { Severity = (CodeInspectionSeverity)severity; return; } } } } [XmlAttribute] public CodeInspectionType InspectionType { get; set; } public CodeInspectionSetting() { //default constructor required for 
serialization } public CodeInspectionSetting(string name, string description, CodeInspectionType type, CodeInspectionSeverity severity) { Name = name; Description = description; InspectionType = type; Severity = severity; } public CodeInspectionSetting(IInspectionModel inspection) : this(inspection.Name, inspection.Description, inspection.InspectionType, inspection.Severity) { } } The SeverityLabel is used by the settings UI. Does it even belong there? The IInspection interface exposes additional members, including a method responsible for returning actual inspection results (given a parser state, but that's now an implementation detail): /// <summary> /// An interface that abstracts a runnable code inspection. /// </summary> public interface IInspection : IInspectionModel, IComparable<IInspection>, IComparable { /// <summary> /// Runs code inspection on specified parse trees. /// </summary> /// <returns>Returns inspection results, if any.</returns> IEnumerable<CodeInspectionResultBase> GetInspectionResults(); /// <summary> /// Gets a string that contains additional/meta information about an inspection. /// </summary> string Meta { get; } } Up until two days ago, every code inspection implemented IInspection directly. 
The recent changes introduce an abstract class, to help remove the redundant code that's common to all implementations: public abstract class InspectionBase : IInspection { protected readonly RubberduckParserState State; protected InspectionBase(RubberduckParserState state) { State = state; } public abstract string Description { get; } public abstract CodeInspectionType InspectionType { get; } public abstract IEnumerable<CodeInspectionResultBase> GetInspectionResults(); public virtual string Name { get { return GetType().Name; } } public virtual CodeInspectionSeverity Severity { get; set; } public virtual string Meta { get { return InspectionsUI.ResourceManager.GetString(Name + "Meta"); } } // ReSharper disable once UnusedMember.Global: it's referenced in xaml public virtual string InspectionTypeName { get { return InspectionsUI.ResourceManager.GetString(InspectionType.ToString()); } } public virtual string AnnotationName { get { return Name.Replace("Inspection", string.Empty); } } protected virtual IEnumerable<Declaration> Declarations { get { return State.AllDeclarations.Where(declaration => !declaration.IsInspectionDisabled(AnnotationName)); } } protected virtual IEnumerable<Declaration> UserDeclarations { get { return State.AllUserDeclarations.Where(declaration => !declaration.IsInspectionDisabled(AnnotationName)); } } public int CompareTo(IInspection other) { return string.Compare(InspectionType + Name, other.InspectionType + other.Name, StringComparison.Ordinal); } public int CompareTo(object obj) { return CompareTo(obj as IInspection); } } The main reason for introducing this base class, was to enable the @Ignore {InspectionName} annotations that the IgnoreOnceQuickFix is inserting, without having to specify in every single inspection that it needs to check for IsInspectionDisabled(AnnotationName). Another reason was to avoid the redundant filtering on !declaration.IsBuiltIn, since most inspections operate on user declarations and their usages. 
This is the quick fix class in question: public class IgnoreOnceQuickFix : CodeInspectionQuickFix { private readonly string _annotationText; private readonly string _inspectionName; public IgnoreOnceQuickFix(ParserRuleContext context, QualifiedSelection selection, string inspectionName) : base(context, selection, InspectionsUI.IgnoreOnce) { _inspectionName = inspectionName; _annotationText = "'" + Parsing.Grammar.Annotations.AnnotationMarker + Parsing.Grammar.Annotations.IgnoreInspection + ' ' + inspectionName; } public override bool CanFixInModule { get { return false; } } // not quite "once" if applied to entire module public override bool CanFixInProject { get { return false; } } // use "disable this inspection" instead of ignoring across the project public override void Fix() { var codeModule = Selection.QualifiedName.Component.CodeModule; var insertLine = Selection.Selection.StartLine; var codeLine = insertLine == 1 ? string.Empty : codeModule.get_Lines(insertLine - 1, 1); var annotationText = _annotationText; var ignoreAnnotation = "'" + Parsing.Grammar.Annotations.AnnotationMarker + Parsing.Grammar.Annotations.IgnoreInspection; int commentStart; if (codeLine.HasComment(out commentStart) && codeLine.Substring(commentStart).StartsWith(ignoreAnnotation)) { annotationText = codeLine + ' ' + _inspectionName; codeModule.ReplaceLine(insertLine - 1, annotationText); } else { codeModule.InsertLines(insertLine, annotationText); } } } For context, here's an implementation - the ImplicitPublicMemberInspection, which finds public members without an explicit access modifier: public sealed class ImplicitPublicMemberInspection : InspectionBase { public ImplicitPublicMemberInspection(RubberduckParserState state) : base(state) { Severity = CodeInspectionSeverity.Warning; } public override string Description { get { return RubberduckUI.ImplicitPublicMember_; } } public override CodeInspectionType InspectionType { get { return 
CodeInspectionType.MaintainabilityAndReadabilityIssues; } } private static readonly DeclarationType[] ProcedureTypes = { DeclarationType.Function, DeclarationType.Procedure, DeclarationType.PropertyGet, DeclarationType.PropertyLet, DeclarationType.PropertySet }; public override IEnumerable<CodeInspectionResultBase> GetInspectionResults() { var issues = from item in UserDeclarations where ProcedureTypes.Contains(item.DeclarationType) && item.Accessibility == Accessibility.Implicit let context = new QualifiedContext<ParserRuleContext>(item.QualifiedName, item.Context) select new ImplicitPublicMemberInspectionResult(this, string.Format(Description, ((dynamic)context.Context).ambiguousIdentifier().GetText()), context); return issues; } } All inspection classes are sealed, and pass the RubberduckParserState dependency down the base type's constructor. For completeness' sake, here's the accompanying ImplicitPublicMemberInspectionResult class: public class ImplicitPublicMemberInspectionResult : CodeInspectionResultBase { private readonly IEnumerable<CodeInspectionQuickFix> _quickFixes; public ImplicitPublicMemberInspectionResult(IInspection inspection, string result, QualifiedContext<ParserRuleContext> qualifiedContext) : base(inspection, result, qualifiedContext.ModuleName, qualifiedContext.Context) { _quickFixes = new CodeInspectionQuickFix[] { new SpecifyExplicitPublicModifierQuickFix(Context, QualifiedSelection), new IgnoreOnceQuickFix(qualifiedContext.Context, QualifiedSelection, Inspection.AnnotationName), }; } public override IEnumerable<CodeInspectionQuickFix> QuickFixes { get { return _quickFixes; } } } All inspection results that can be ignored once have an IgnoreOnceQuickFix; Rubberduck uses this list of quick-fixes to dynamically populate a menu in the code inspection window's toolbar. Since I introduced the base/abstract class Something doesn't feel right anymore about IInspection: am I right to think it has become superfluous? 
The base class should simply implement IInspectionModel, and IInspection could be removed - and heck, IInspectionModel could then be renamed back to IInspection as it originally was called. It really feels like CodeInspectionSetting knows more than it needs to about an inspection. Anything else jumps at you? Answer: A couple minor suggestions... I'd introduce some extension methods to give the common predicates on collections of declarations somewhere to live: public static class DeclarationsPredicates { public static IEnumerable<Declaration> WhichAreUserDeclarations(this IEnumerable<Declaration> source) { if (source == null) { // throw an error or sneakily coerce to Enumerable.Empty<Declaration>(); } return source.Where(declaration => !declaration.IsBuiltIn); } public static IEnumerable<Declaration> WhereInspectionIsNotDisabledByAnnotation( this IEnumerable<Declaration> source, string annotationName) { if (source == null) { // throw an error or sneakily coerce to Enumerable.Empty<Declaration>(); } return source.Where(declaration => !declaration.IsInspectionDisabled(annotationName)); } } I'm sure there are others that you'd find useful! You could now rewrite your properties as: protected virtual IEnumerable<Declaration> Declarations { get { return State.AllDeclarations.WhereInspectionIsNotDisabledByAnnotation(AnnotationName); } } protected virtual IEnumerable<Declaration> UserDeclarations { get { return State.AllUserDeclarations.WhereInspectionIsNotDisabledByAnnotation(AnnotationName); } } I think that's better because you've centralised your logic (I'm not overly fond of the name though). You could potentially remove the .IsInspectionDisabled() method from Declaration and put the logic in the extension method but that's up to you. This is interesting ((dynamic)context.Context)... Why are you doing that? 
I agree that SeverityLabel definitely doesn't fit ;) I'd expect it to be readonly anyway: // C#6 public string SeverityLabel => RubberduckUI.ResourceManager.GetString("CodeInspectionSeverity_" + Severity, RubberduckUI.Culture); If you want to change the label, you should have to change the severity! You have a lot of magic strings around the place: "Inspection", "Meta", "CodeInspectionSeverity_"... They should be well named constants. The trailing underscore in this name looks odd to me: RubberduckUI.ImplicitPublicMember_ I'm guessing that the string is "ImplicitPublicMember_"? Either way, chop the underscore off. This method is named in camelCase but should be PascalCase ambiguousIdentifier() This property name doesn't look quite right: string Meta { get; } I think it should be Metadata. Your doc comment here isn't quite right: /// <summary> /// Gets a value indicating the severity level of the code inspection. /// </summary> CodeInspectionSeverity Severity { get; set; } It actually gets or sets a value... There's also the <value> tag that you should add. I use the GhostDoc extension to help generate code documentation - that might help you out too. I agree with your summary at the end - I'd drop back to just using an ICodeInspection interface. I changed the name because I don't like the double I in IInspection and it's also a little more descriptive. On the whole - very nice!
{ "domain": "codereview.stackexchange", "id": 17926, "tags": "c#, rubberduck" }
Determining the reaction order of acetylsalicylic acid to salicylic acid with a spectrophotometer
Question: How can I determine the reaction order of the reaction acetylsalicylic acid + water to salicylic acid + acetic acid with use of a spectrophotometer? I thought of measuring the absorbance spectrum of both substances individually, compare the 2 spectra, and then look at which wavelength the absorbance of acetylsalicylic acid exists whereas the absorbance of salicylic acid at that wavelength is negligible. Lastly, use that wavelength to calculate concentrations of acetylsalicylic acid at certain time intervals and conclude the reaction order out of the change in concentration vs time. (for example making a graph on Excel for Concentration vs time -> linear = 0 order, exponential = 1st order) Is this possible or am I thinking too hard/easy? Is there any other other way? Answer: I think your reasoning is sound. However, a factor you may need to take into account is that you have a number of possible related species here, each of which has a different UV absorbance spectrum. You have acetylsalicylic acid, acetylsalicylate anion, salicylic acid, salicylate anion, and salicylate dianion (this one will probably have a markedly different UV spectrum from the others). The relative amounts of these species present would be affected by pH, so it would be important to buffer the reaction solution, to maintain constant pH.
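The order-determination step at the end can be prototyped before any bench work. The sketch below uses simulated data (the rate constant and concentrations are made up): for zero order, [A] vs. t is linear, while for first order, ln[A] vs. t is linear, so one compares how well each transform fits a straight line:

```python
import numpy as np

t = np.linspace(0.0, 120.0, 25)   # sampling times, min (hypothetical)
k = 0.025                         # first-order rate constant, 1/min (made up)
c0 = 0.010                        # initial concentration, mol/L (made up)
conc = c0 * np.exp(-k * t)        # simulated first-order hydrolysis

def r_squared(y):
    # goodness of a straight-line fit of y against t
    return np.corrcoef(t, y)[0, 1] ** 2

r2_zero = r_squared(conc)            # zero order:  [A]   = c0 - k t
r2_first = r_squared(np.log(conc))   # first order: ln[A] = ln c0 - k t

print(r2_zero, r2_first)   # the first-order transform fits far better
```

With real absorbance data, Beer's law converts absorbance at the chosen wavelength into concentration before making this comparison.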
{ "domain": "chemistry.stackexchange", "id": 4271, "tags": "experimental-chemistry" }
Derivation of the index of refraction of glass as a function of rotation angle and number of fringe transitions
Question: Recently, I've looked at how the index of refraction of a piece of glass can be related to the angle of rotation from normal incidence and the associated number of fringe transitions through a Michelson interferometer. I'm having a bit of trouble going through the derivation of the following result: $$ n_g = \frac{(2tn_a - N \lambda_0)(1- \cos{\theta}) + \frac{N^2 \lambda_0^2}{4t}}{2tn_a(1-\cos{\theta}) - N \lambda_0}.$$ In this expression, $n_g$ is the index of refraction of glass, $n_a$ is the index of refraction of air, $t$ is the thickness of the piece of glass, $N$ is the number of fringes, and $\theta$ is the angle of rotation of the glass plate from the perpendicular (see the figure). The figure for carrying out this analysis is given here: I've almost figured out the whole derivation, but I'm getting stuck at a final step. I'm following the analysis that you can get from Light Principles and Experiments, by George S. Monk (page 377 or thereabout). I'll go through the work I've done so far, and where I'm having trouble. The heart of the problem is that we're looking at the optical path difference between two arms in an interferometer, and that optical path difference is all from this change in distance due to rotating the glass plate. First, note that the path length when the light travels along $OP$ and the glass plate isn't inclined is given by $n_g \overline{ab} + n_a \overline{bc}$. Then, the optical path as the glass plate is rotated at an angle $\theta$ is given by $n_g \overline{ad} + n_a \overline{de}$. The total increase in optical path length for the interferometer is then twice the difference between these optical path lengths (since the light travels it twice). Therefore, the total increase in optical path length is: \begin{equation} \label{eq:rotating-interference} \Delta S = 2 \left( n_g \overline{ad} + n_a \overline{de} - \left( n_g \overline{ab} + n_a \overline{bc} \right) \right). 
\end{equation} From the figure, some simplifications can be made. Note that $\overline{ad} = t / \cos{\alpha}$. Furthermore, one can use Snell's equation to find that $\angle dce = \theta$, so this gives: \begin{equation} \overline{de} = \overline{dc} \sin{\theta} = \left( \overline{fc} - \overline{fd} \right) \sin{\theta}. \end{equation} Then, note that $\overline{fc} = t \tan \theta$ and $\overline{fd} = t \tan \alpha$. Finally, $\overline{bc} + t = t / \cos{\theta}$. For constructive interference, $\Delta S$ must be equal to $N \lambda$. Making the appropriate substitutions, one gets: \begin{equation} \frac{n_g t}{\cos{\alpha}} + n_a \left( t \tan \theta \sin{\theta} - t \tan \alpha \sin{\theta} \right) - n_g t - \frac{n_a t}{\cos{\theta}} + n_a t = \frac{N \lambda}{2}. \end{equation} This is where I'm having trouble. In the text I reference, the author simply gives the following explanation (this in paraphrased in my words). This can be further simplified by using Snell's equation, $n_g \sin{\alpha} = n_a \sin{\theta}$. The result becomes: \begin{equation} n_g = \frac{(2tn_a - N \lambda_0)(1- \cos{\theta}) + \frac{N^2 \lambda_0^2}{4t}}{2tn_a(1-\cos{\theta}) - N \lambda_0}. \end{equation} Unfortunately, I can't seem to get that. I've been able to simplify the equation a bit, but not to this final result. Here's what I've done so far. First, I note that: \begin{equation} n_g \sin{\alpha} = n_a \sin{\theta} \implies \sin{\alpha} \sin{\theta} = \frac{n_g}{n_a} \sin^2{\alpha}. \end{equation} This means: \begin{equation} \frac{n_g}{\cos{\alpha}} - n_a \tan{\alpha} \sin{\theta} = \frac{1}{\cos{\alpha}} \left( n_g - n_a \frac{n_g}{n_a} \sin^2{\alpha} \right) = n_g \cos{\alpha}. \end{equation} Similarly: \begin{equation} \tan\theta \sin\theta - \frac{1}{\cos\theta} = -\cos\theta. \end{equation} Therefore, the equation for constructive interference simplifies to: \begin{equation} n_g \cos\alpha + n_a \left(1 - \cos\theta \right) = \frac{N \lambda}{2t}. 
\end{equation} This is where I'm at right now. I don't see how I can get this to the result shown at the beginning of the post. It seems like I would maybe have to square the equation, but honestly I'm not sure. Any help in figuring out this last step (or series of steps) would be appreciated. One last thing: using Snell's equation, we can write: \begin{equation} \cos\alpha = \frac{\sqrt{n_g^2 - n_a^2 \sin^2 \theta}}{n_g}. \end{equation} Edit: I've worked on the expression a bit more, but still can't quite get the result. Isolating for $n_g \cos\alpha$, we get: \begin{equation} n_g \cos\alpha = \frac{N \lambda}{2t} - n_a \left(1 - \cos\theta \right). \end{equation} Squaring this equation and noting that $\left( n_g \cos \alpha \right)^2 = n_g^2 - n_a^2 \sin^2 \theta$, this becomes: \begin{equation} n_g^2 - n_a^2 \sin^2 \theta = \left( \frac{N \lambda}{2t} \right)^2 - 2 \left( \frac{N \lambda}{2t} \right) n_a \left(1 - \cos\theta \right) + n_a^2 \left(1 - \cos\theta \right)^2. \end{equation} Then, $\left( 1 - \cos \theta \right)^2 + \sin^2 \theta = 2 \left( 1 - \cos \theta \right)$. Simplifying, we get: \begin{equation} n_g^2 = \left( \frac{N \lambda}{2t} \right)^2 + 2 n_a \left(1 - \cos\theta \right) \left[ n_a - \frac{N \lambda}{2t} \right]. \end{equation} I don't know if this helps at all, but I think it might hopefully point us in the right direction. Answer: This equation \begin{equation} n_g \cos\alpha + n_a \left(1 - \cos\theta \right) = \frac{N \lambda}{2t}. \end{equation} is wrong because you dropped the $-n_g t$ that was in your starting equation. The correct equation should be \begin{equation} n_a \left(1 - \cos\theta \right) - n_g \left(1- \cos\alpha\right)= \frac{N \lambda}{2t}. \end{equation} As you note, you can eliminate $\alpha$ using Snell's law and the relation: $\sin^2\alpha + \cos^2\alpha = 1$. This introduces a square root, but when you square the equation the $n_g^2$ terms cancel nicely, making the end of the derivation easy.
{ "domain": "physics.stackexchange", "id": 85594, "tags": "refraction, geometric-optics, interferometry" }
Gazebo not updating urdf
Question: I suspect that my Gazebo simulation is not updating from my urdf file. To test this I made the length of my rectangular robot 10x the normal size, and Gazebo still shows it without any change. See the photo: Obviously this is not right (...or is it?)! I am worried that the changes I am making in my urdf file are not being updated in Gazebo. Below are the launch files I am using: bender_simulation.launch: <launch> <arg name="model" default="$(find bender_model)/model.urdf.xacro"/> <arg name="gui" default="false" /> <arg name="rvizconfig" default="$(find bender_model)/urdf.rviz"/> <include file="$(find bender_model)/gazebo.launch"> <arg name="model" value="$(arg model)" /> </include> <node name="rviz" pkg="rviz" type="rviz" args="-d $(arg rvizconfig)" /> <rosparam command="load" file="$(find bender_model)/joints.yaml" ns="bender_joint_state_controller" /> <rosparam command="load" file="$(find bender_model)/diffdrive.yaml" ns="bender_diff_drive_controller" /> <node name="bender_controller_spawner" pkg="controller_manager" type="spawner" output="screen" args="bender_joint_state_controller bender_diff_drive_controller"/> <!--node name="rqt_robot_steering" pkg="rqt_robot_steering" type="rqt_robot_steering"> <param name="default_topic" value="/bender_diff_drive_controller/cmd_vel"/> </node--> <!-- <node name="teleop" pkg="teleop_twist_keyboard" type="teleop_twist_keyboard.py" output="screen"> <remap from="cmd_vel" to="/bender_diff_drive_controller/cmd_vel"/> </node> --> </launch> Gazebo.launch: <launch> <!-- these are the arguments you can pass this launch file, for example paused:=true --> <arg name="paused" default="false"/> <arg name="use_sim_time" default="true"/> <arg name="gui" default="true"/> <arg name="headless" default="false"/> <arg name="debug" default="false"/> <arg name="model" default="$(find bender_model)/model.urdf.xacro"/> <!-- We resume the logic in empty_world.launch, changing only the name of the world to be launched --> <include 
file="$(find gazebo_ros)/launch/empty_world.launch"> <arg name="world_name" value="$(find bender_model)/bender_worlds/navigation_testing.world"/> <arg name="debug" value="$(arg debug)" /> <arg name="gui" value="$(arg gui)" /> <arg name="paused" value="$(arg paused)"/> <arg name="use_sim_time" value="$(arg use_sim_time)"/> <arg name="headless" value="$(arg headless)"/> </include> <param name="robot_description" command="$(find xacro)/xacro.py '$(find bender_model)/model.urdf.xacro'" /> <!-- push robot_description to factory and spawn robot in gazebo --> <!--node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model" args="-z 1.0 -unpause -urdf -model robot -param robot_description" respawn="false" output="screen" /--> <node name="spawn_urdf" pkg="gazebo_ros" type="spawn_model" args="-param robot_description -urdf -model robot" /> <node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher"> <param name="publish_frequency" type="double" value="50.0" /> </node> </launch> Originally posted by cmfuhrman on ROS Answers with karma: 200 on 2018-03-21 Post score: 1 Answer: I am able to answer my own question. I was loading a robot I had accidentally saved in my .world file instead of spawning a new robot from my xacro file. 
my launch file now looks like this: <launch> <!-- these are the arguments you can pass this launch file, for example paused:=true --> <arg name="paused" default="false"/> <arg name="use_sim_time" default="true"/> <arg name="gui" default="true"/> <arg name="headless" default="false"/> <arg name="debug" default="false"/> <!--arg name="model" default="$(find bender_model)/model.urdf.xacro"/--> <!-- Robot pose --> <arg name="x" default="0.1"/> <arg name="y" default="0.5"/> <arg name="z" default="0"/> <arg name="roll" default="0"/> <arg name="pitch" default="0"/> <arg name="yaw" default="0"/> <!-- We resume the logic in empty_world.launch, changing only the name of the world to be launched --> <include file="$(find gazebo_ros)/launch/empty_world.launch"> <arg name="world_name" value="$(find bender_model)/bender_worlds/navigation_testing.world"/> <arg name="debug" value="$(arg debug)" /> <arg name="gui" value="$(arg gui)" /> <arg name="paused" value="$(arg paused)"/> <arg name="use_sim_time" value="$(arg use_sim_time)"/> <arg name="headless" value="$(arg headless)"/> </include> <param name="robot_description" command="$(find xacro)/xacro.py '$(find bender_model)/model.urdf.xacro'" /> <!-- push robot_description to factory and spawn robot in gazebo --> <!--node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model" args="-z 1.0 -unpause -urdf -model robot -param robot_description" respawn="false" output="screen" /--> <node name="spawn_urdf" pkg="gazebo_ros" type="spawn_model" args="-param robot_description -urdf -model bender -x $(arg x) -y $(arg y) -z $(arg z) -R $(arg roll) -P $(arg pitch) -Y $(arg yaw)" /> <node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher"> <param name="publish_frequency" type="double" value="10.0" /> </node> </launch> Originally posted by cmfuhrman with karma: 200 on 2018-04-02 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 30409, "tags": "gazebo, urdf, ros-kinetic" }
What is the speed of light in geometrical optics?
Question: Geometrical optics or Hamiltonian optics is the short-wavelength limit of Maxwell's equations of light. In Hamiltonian form it is equivalent to the Hamiltonian description of a single, classical, non-relativistic particle with mass $m$. However, in non-relativistic, classical mechanics it is sometimes said that the speed of light is assumed to be infinite. But I believe it must be assumed that the speed of light is at least finite (but probably not invariant for inertial observers?) for the theory to make sense. But I'm not sure. What is the assumed speed of light within the framework of this theory? Answer: In the construction of geometrical optics, no assumption is made on the speed of light. One can construct the Fermat principle and associated approximations even in fully relativistic formalism. (An example can be found in the book Gravitation by Misner, Thorne and Wheeler.) On the other hand, if deriving non-relativistic mechanics from relativity, we do assume that the constituents of the mechanical system move at speeds $v$ much smaller than the speed of light $c$ in our lab frame. I.e. $v/c \approx 0$ and our lab frame plays a privileged role. The fact that classical mechanics and ray optics both have a Lagrangian and Hamiltonian formulation does not mean they are both "nonrelativistic" in some sense or that they can be derived using some unified limit - they are not. In fact, even fully relativistic particle mechanics have a Lagrangian and Hamiltonian formulation. This is because the Lagrangian and Hamiltonian formalism is more of a mathematical method for describing the dynamics of a broad class of systems. But! You are correct that using Hamiltonian optics and generally any light dynamics along with non-relativistic mechanics leads to a weird inelegant system. Namely, weird extra terms pop up e.g. in dispersion relations of light when transforming between reference frames using Galilean transformations. 
This was often historically hand-waved away by saying that the "nice" equations for the propagation of light are defined with respect to the reference frame of the medium, and this also led to the postulation of aether, the "vacuum medium". Ultimately, this weird aether business and the Michelson-Morley experiment led Einstein to his special relativity. So the "classical" or "non-relativistic" sets of theories for light and massive particles are not quite consistent and one cannot really expect them to be. (They are quite useful nonetheless...)
{ "domain": "physics.stackexchange", "id": 35322, "tags": "speed-of-light, hamiltonian-formalism, geometric-optics" }
Why doesn't incoherent light cancel itself out?
Question: What is the precise mathematical description of an incoherent single-frequency signal for any type of wave? The reason I'm asking is because of the following apparent paradox in which incoherent light cannot exist. Consider sunlight, for example, which has passed through a polarizing filter and frequency filter, so that only waves with wave numbers very close to $k_0$ are allowed to pass through. Since sunlight is totally incoherent, it seems reasonable to model the signal as a sum of sine waves $E_\alpha(x,t)=A_\alpha\sin(k_0x-\omega_0t+\phi_\alpha),$ where $E$ is the electric field in the direction of the polarizing filter, $\omega_0=ck_0$, $A_\alpha$ is a random amplitude, and $\phi_\alpha$ is a random phase shift. If the light were coherent, then the $\phi_\alpha$'s would all be identical; so it seems reasonable that for "maximal incoherence" the $\phi$'s and $A_\alpha$'s would be different and uniformly distributed. But then for every component with phase shift $\phi$ and amplitude $A$, there exists a wave $A\sin(k_0x-\omega_0t+\phi+\pi)$, which cancels the original. Hence all components cancel and there is no wave (spectrometer detects nothing). So what's the flaw here? I'm guessing the model of incoherent light is where the problem lies, but maybe it's in the reasoning. I'm also curious whether or not the answer necessarily relies on quantum mechanics. EDIT: Since there are some votes to close based on the proposed duplicate, I'll just say that both questions get at the same idea, but I think mine (which could also have focused on polarization) is more specific, since I'm asking for a precise model and whether quantum physics is a necessary part of the explanation. From what I can tell, the answers to the linked question do not address these points. Answer: That the phase is random does not mean that the waves of all the phases are present at any space point at any time. 
The averaging happens in the eye (or photodetector), whose reaction time and spatial resolution are larger than the coherence time and coherence length of the light. This is where the model described in the OP applies... except the eye/photodetector does not register the amplitude of the electromagnetic wave, but its intensity: $$ I \propto \left[E(x,t)\right]^2 $$ For this quantity the averaging gives a finite result. Remarks If our eyes/detectors were measuring the wave amplitude rather than its intensity, then they would not be able to perceive even coherent light, due to averaging over times and lengths much greater than the period and wavelength of light. @uhoh has brought up a useful analogy in the comments: Why doesn't white noise cancel itself out? White noise actually cancels itself out in the same sense, as implied in the OP: it has zero (or constant) average. It is the intensity of the white noise that is not zero. Supplementary: Modeling incoherent light Incoherence may come from many sources: different atoms emit at different times, with different frequencies, different polarizations, and in different directions; the light may come from different sources; the observed light may be coming not directly from the source, but after multiple reflections. Thus, the light observed at point $\mathbf{x}$ is a sum of many waves: $$ \mathbf{E}(\mathbf{x},t) = \sum_i \mathbf{E}_i(\mathbf{x},t) $$ Now, even if we assume that all these waves are plane waves with random amplitudes and initial phases, we have $$ \mathbf{E}(\mathbf{x},t) = \sum_i \mathbf{A}_i\cos(\mathbf{k}_i\mathbf{x} - \omega_i t +\phi_i) $$ We can now meaningfully consider this as a random wave field and characterize it by its correlation functions: $$ K_{\alpha\beta}(\mathbf{x},t;\mathbf{x}',t') = \langle E_\alpha (\mathbf{x},t)E_\beta(\mathbf{x}',t')\rangle $$ Update In more rigorous quantum optics terms, one uses the correlation coefficient rather than the correlation function to characterize the 
coherence of light, see degree of first order coherence, and also Loudon's The Quantum Theory of Light. More references A series of articles from the 60s: Coherence Properties of Blackbody Radiation. I. Correlation Tensors of the Classical Field, Coherence Properties of Blackbody Radiation. III. Cross-Spectral Tensors The Nobel lecture by Roy Glauber
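The field-versus-intensity averaging at the heart of this answer is easy to check numerically. A minimal sketch (my own illustration, not from the original post; the emitter and sample counts are arbitrary): average the summed field of many unit-amplitude, random-phase emitters over an ensemble. The mean field comes out near zero — the amplitudes "cancel" — while the mean intensity stays finite, close to $N/2$.

```python
import math
import random

random.seed(42)
N = 100    # independent emitters, unit amplitude, uniformly random phase
M = 500    # ensemble realizations to average over

field_avg = 0.0
intensity_avg = 0.0
for _ in range(M):
    # total field at one space-time point: sum of cosines with random phases
    E = sum(math.cos(random.uniform(0.0, 2.0 * math.pi)) for _ in range(N))
    field_avg += E
    intensity_avg += E * E

field_avg /= M
intensity_avg /= M
# <E> is ~0, but <E^2> ~ N/2 remains finite -- what a detector registers
```

Each cosine has mean 0 and variance 1/2, so the intensity average should land near N/2 = 50 here, while the field average shrinks toward zero as more realizations are included.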
{ "domain": "physics.stackexchange", "id": 80303, "tags": "waves, electromagnetic-radiation, interference" }
Creating "completely different" anagrams in python
Question: For a text adventure I'm writing, I need to establish an anagram X of a word Y such that letter Z of X is not letter Z of Y. In other words, ABCDEE could go to EEDCBA or DCEEBA but not ABCEED, because ABCEED matches ABCDEE at slot 5. # # amak.py: this makes an anagram of a word with no identical letter slots. # # in other words, HEAT and HATE have the first letter identical, but EATH has no letter slots in common with HEAT. # import re import sys from collections import defaultdict #option(s). There may be more later. shift_1_on_no_repeat = False try_rotating_first = False # determine if we can still switch a pair. With 3 letters left, it is not possible. With 2, it should be. # def can_take_even(x): if x % 2 == 0: return x > 0 else: return x > 3 # here is the explanation of the algorithm: # # 1. unless we have exactly 3 letters to place, we look for the 2 most frequent letters that have not been switched yet nd switch the earliest incidences of each # 2. if there are 3 unique letters remaining, then we go a->b->c. # 2a. Note that we can never have 2-1 left, because the previous would have to have 3-?-?. If we started with, say, 2-2-1, we would have 1-1-1 after. Similarly we can never have x-(summing less to x) unless we start with something unviable, because we'd have to have had x+1 and (something less than x+1) on the previous try. If we had x on the previous try, we would have deducted from it. # note having y>x/2 in x letters means we cannot have a unique anagram. That is because we would have x-y slots to move the y to, but x<2y so that doesn't work. def find_nomatch_anagram(x): x = re.sub("[- '\.]", "", x.lower()) # allow for spaces, apostrophes, etc. 
old_string = list(x) new_string = ['-'] * len(x) f = defaultdict(list) letters_to_place = len(old_string) if not len(x): print("Blank string...") return "" for y in range(0, len(x)): if old_string[y] not in 'abcdefghijklmnopqrstuvwxyz': print("Nonalphabetical character in", x, 'slot', y, "--", old_string[y]) return "" f[x[y]].append(y) if shift_1_on_no_repeat and len(f) == len(old_string): return x[1:] + x[0] #abcde quickly sent to bcdea if try_rotating_first: for y in range(1, len(x)): retval = x[-y:] + x[:-y] print("Trying", retval) bad_matches = False for z in range(0, len(x)): bad_matches |= (retval[z] == old_string[z]) if not bad_matches: return retval for q in f: if len(f[q]) > len(old_string) / 2: print(q, "appears too many times in", x, "to create an anagram with no letter slots in common.") return "" while can_take_even(letters_to_place): u = sorted(f, key=lambda x:len(f[x]), reverse=True) x1 = f[u[0]].pop(0) x2 = f[u[1]].pop(0) new_string[x1] = u[1] new_string[x2] = u[0] letters_to_place -= 2 if letters_to_place == 3: u = sorted(f, key=lambda x:len(f[x]), reverse=True) new_string[f[u[0]][0]] = u[1] new_string[f[u[1]][0]] = u[2] new_string[f[u[2]][0]] = u[0] for y in range(0, len(x)): if old_string[y] == new_string[y]: print("Uh oh, failure at letter", y) print(old_string[y]) print(new_string[y]) sys.exit() if new_string[y] == '-': print("Uh oh, blank letter at", y) sys.exit() return ''.join(new_string) def show_results(q, result_string = "has this anagram with no letters in common:"): temp = find_nomatch_anagram(q) if not temp: return print(q, result_string, temp) if len(sys.argv) > 1: for q in sys.argv[1:]: if q == 's1': shift_1_on_no_repeat = True #this works for one option, but what if there are several? elif q == 'tr': try_rotating_first = True #this works for one option, but what if there are several? for q in sys.argv[1:]: if q != 's1' and q != 'tr': show_results(q, "<=>") # this feels like a real hack, again. 
I want to process meta commands before any results, though. else: #these are just general test cases show_results("aabbb") #throw error show_results("stroll") show_results("aaabbbc") show_results("aaabbcc") show_results("basically") show_results("TeTrIs") show_results("try this") show_results("") What I have works. But I am wondering about a few things: is there any way I can write the command line better? I am taking two passes through it right now, but this seems inefficient. I want to be able to give the user the option of trying the obvious anagrams (shift everything 1/2/3/etc. letters over until you find one) While my algorithm seems to work provably, the code for it seems awkward. I plan (n/2) swaps where I match the 2 top remaining frequencies for unswapped letters, then take them, until I am at 3 or 0. Then I do a 3-way rotation for the final letters. Answer: Things that stand out as particularly good: Test cases, including tests that unsolvable inputs get handled correctly. Comments, including comments that describe the high level algorithm. The algorithm description makes it clear how it's avoiding painting itself into a corner. Testing, as early as possible, for inputs that break your function rather than discovering part way through is fantastic. Possible room for improvement: Test cases should specify their expected output. Ideally, you'd have a testing rig that automatically checks they give a satisfying output. Think about how you handle failures. In python, I'd suggest that exceptions are the way to go. Where you have multiple approaches to solving the same problem, that is a particularly good sign that it's time to split them out of a mega-function. Many of your variable names could do with elaboration, especially the single letter ones. Run the code through a Pep-8 style linter and ensure it's laid out in standard pythonic form. For example single line condition and response is discouraged. 
This algorithm produces an output that is deterministic and, depending on its place in your game, may be a little bit boring and predictable. Having the rotate mode mixes it up a bit, but that actually makes for an even more obvious pattern. It may be better to see whether you can introduce some random element. The easiest option would be to do a random shuffle, and then swap out any letters that violate the rule.
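The reviewer's last suggestion — shuffle randomly, then repair any slots that still match — can be sketched like this. This is a hypothetical illustration of mine, not code from the review; the function name `scrambled_anagram` and the retry limit are made up.

```python
import random

def scrambled_anagram(word, tries=1000):
    """Shuffle-then-repair sketch: return a random anagram of `word` with
    no letter left in its original slot, or "" if none exists."""
    letters = list(word)
    for _ in range(tries):
        random.shuffle(letters)
        ok = True
        for i in range(len(letters)):
            if letters[i] != word[i]:
                continue
            # repair a fixed point: find a swap partner j such that after
            # swapping, neither slot i nor slot j matches the original word
            for j in range(len(letters)):
                if j != i and letters[j] != word[i] and letters[i] != word[j]:
                    letters[i], letters[j] = letters[j], letters[i]
                    break
            else:
                ok = False  # no partner found; reshuffle and retry
                break
        if ok and all(a != b for a, b in zip(letters, word)):
            return ''.join(letters)
    return ""  # e.g. "aabbb": the three b's cannot all avoid their own slots
```

Unlike the deterministic pairing scheme, each call can return a different valid anagram, and the final `all(...)` check doubles as a safety net.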
{ "domain": "codereview.stackexchange", "id": 35099, "tags": "python" }
What is the difference between relative time dilation and absolute time dilation
Question: I know special relativity says that traveling at high speeds (or really any speed) causes time dilation; and General relativity says that gravity also causes time dilation. I was wondering if relative time dilation (where two observers each measure the other's time to be slow) was caused not by time dilation, but instead because with the relative velocity difference between them, if they became increasingly far from each other, light would take longer and longer to reach them from the other. This would result in them both observing each other to have a slower time, though neither would necessarily experience the time dilation. Answer: It's a sensible thought but no. "A sees B's clock running slow." , which you meet in introductory relativity explanations, is shorthand for "A sees the ticks of B's clock arrive at a certain rate. A knows that with every tick, the clock is getting further away (or, in some cases, nearer) so each light signal has further (or less far) to travel, and A compensates for that in working out the rate at which B's clock had to be ticking in order to arrive at the rate they perceive. This calculated rate is slow."
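The distinction the answer draws — the raw arrival rate of ticks versus the rate A calculates after compensating for light travel time — can be made concrete with numbers. A rough sketch of my own (β = 0.6 chosen arbitrarily; units with c = 1), not from the original answer:

```python
import math

beta = 0.6                                  # B recedes from A at 0.6c
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)    # Lorentz factor, 1.25 here
tau = 1.0                                   # proper interval between B's ticks

# what A *sees*: relativistic Doppler stretches the arrival intervals
arrival = gamma * (1.0 + beta) * tau        # Doppler k-factor times tau

# what A *calculates*: each successive signal travels an extra
# beta * (tick interval in A's frame), so dividing out (1 + beta)
# recovers the tick interval in A's frame
tick_in_A_frame = arrival / (1.0 + beta)    # equals gamma * tau
```

The compensated rate is `gamma * tau`, slower than `tau` — the time-dilation factor survives even after the light-travel-time effect is subtracted out, which is exactly why dilation is not just a signal-delay illusion.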
{ "domain": "physics.stackexchange", "id": 49179, "tags": "general-relativity, special-relativity, time-dilation" }
How does Vibrio cholerae benefit from infecting its host?
Question: As far as I know, V. cholerae secretes a toxin called choleragen into the intestinal lumen, which affects the intestinal epithelial cells, causing release of Na+ and Cl- ions into the lumen and reducing the lumen's water potential; this causes water to flow into the intestinal lumen, resulting in diarrhea. How does this benefit V. cholerae in any way? Answer: After V. cholerae gets into the human intestine it starts to multiply its numbers, and then becomes virulent after sufficiently expanding its numbers. This virulence drives the diarrhea which in part causes the bacteria to slough off into the intestinal lumen, and then into the external environment again. So in short it uses the human intestine to increase cell numbers. See work by Bonnie Bassler for a really fascinating understanding of the complexity of this infection.
{ "domain": "biology.stackexchange", "id": 6644, "tags": "bacteriology, ecology, pathology, bacterial-toxins, infectious-diseases" }
How to read and writing ROS message efficient into files c++
Question: Hi! With the following code I am able to read and write ros message into binary files using c++. But I have to create first memory buffer to use the ros ros::serialization::OStream or ros::serialization::IStream is there a way how to stream messages directly into files similar to a std::ofstream? void LaserLineFilterNode::callback (const sensor_msgs::LaserScan::ConstPtr& _msg) { { // Write to File std::ofstream ofs("/tmp/filename.txt", std::ios::out|std::ios::binary); uint32_t serial_size = ros::serialization::serializationLength(*_msg); boost::shared_array<uint8_t> obuffer(new uint8_t[serial_size]); // This i like to avoid ros::serialization::OStream ostream(obuffer.get(), serial_size); // This i like to avoid ros::serialization::serialize(ostream, *_msg); // I would like to use the ofstream here? ofs.write((char*) obuffer.get(), serial_size); // This i like to avoid ofs.close(); } { // Read from File to msg_scan_ //sensor_msgs::LaserScan msg_scan_ --> is a class variable std::ifstream ifs("/tmp/filename.txt", std::ios::in|std::ios::binary); ifs.seekg (0, std::ios::end); std::streampos end = ifs.tellg(); ifs.seekg (0, std::ios::beg); std::streampos begin = ifs.tellg(); uint32_t file_size = end-begin; boost::shared_array<uint8_t> ibuffer(new uint8_t[file_size]); // This i like to avoid ifs.read((char*) ibuffer.get(), file_size); // This i like to avoid ros::serialization::IStream istream(ibuffer.get(), file_size); // I would like to use the ifstream here? ros::serialization::deserialize(istream, msg_scan_); // This i like to avoid ifs.close(); } .... Originally posted by Markus Bader on ROS Answers with karma: 847 on 2014-02-12 Post score: 2 Answer: rosbag has a C++ API for reading and writing bag files: http://wiki.ros.org/rosbag/Code%20API Originally posted by ahendrix with karma: 47576 on 2014-02-12 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 16961, "tags": "ros, c++, binary, message, serialization" }
Intuition behind the Hadamard gate
Question: I'm trying to teach myself about quantum computing, and I have a decent-ish understanding of linear algebra. I got through the NOT gate, which wasn't too bad, but then I got to the Hadamard gate. And I got stuck. Mainly because while I "understand" the manipulations, I don't understand what they really do or why you'd want to do them, if that makes sense. For example, when the Hadamard gate takes in $|0\rangle$ it gives $\frac{|0\rangle + |1\rangle}{\sqrt{2}}$. What does this mean? For the NOT gate, it takes in $|0\rangle$ and gives $|1\rangle$. Nothing unclear about that; it gives the "opposite" of the bit (for the superposition, it takes in $\alpha|0\rangle+\beta|1\rangle$ and gives $\beta|0\rangle + \alpha|1\rangle$) and I understand why that is useful; for the same reasons (basically) that it is useful in a classical computer. But what (for example) is the Hadamard gate doing geometrically to a vector $\begin{bmatrix}\alpha \\ \beta \end{bmatrix}$? And why is this useful? Answer: The Hadamard gate might be your first encounter with superposition creation. When you say you can relate the usefulness of the Pauli $X$ gate (a.k.a. NOT) to its classical counterpart – well, Hadamard is exactly where you leave the realm of classical analogue, then. It is useful for exactly the same reason, however, namely that it is often used to form a universal set of gates (like clasical AND with NOT and fan-out, or NOR with fan-out alone). While a single $H$ gate is somewhat directly useful in random number generation (as Yuval Filmus said), its true power shows when appearing in more instances or in combination with other gates. 
When you have $n$ qubits initialized in $|0\rangle$, for example, and apply one $H$ to each of them in any order, what you get is $$(|0\rangle + |1\rangle) \otimes (|0\rangle + |1\rangle) \otimes \ldots \otimes (|0\rangle + |1\rangle) / 2^{n/2}$$ which can be expanded to $$1/2^{n/2} \cdot (|00\ldots00\rangle + |00\ldots01\rangle + |00\ldots11\rangle + \ldots + |11\ldots11\rangle)$$ Voilà, we can now evaluate functions on $2^n$ different inputs in parallel! This is, for example, the first step in Grover's algorithm. Another popular use is a Hadamard on one qubit followed by a CNOT controlled with the qubit you just put into a superposition. See: $$CNOT \big(2^{-1/2}(|0\rangle+|1\rangle)\otimes|0\rangle \big) = 2^{-1/2} CNOT(|00\rangle + |10\rangle) = 2^{-1/2} (|00\rangle + |11\rangle)$$ That's a Bell state, which is a cornerstone of various quantum key distribution protocols, measurement-based computation, quantum teleportation and many more applications. You can also use a CNOT repeatedly on more zero-initialized target qubits (with the same control) to create $$2^{-1/2} (|00\ldots00\rangle + |11\ldots11\rangle)$$ which is known as the GHZ state, also immensely useful. Last but not least, it's a quite useful basis transform that is self-reversible. So another Hadamard gate undoes, in a sense, what a previous application did ($H^2 = I$). You can experiment with what happens if you use it to "sandwich" other operations, for example put one on the target qubit of a CNOT gate and another after it. Or on both of the qubits (for a total of 4 Hadamards). Try it yourself and you'll certainly learn a lot about quantum computation! Re "what is the Hadamard gate doing geometrically to a vector": read up on the Bloch sphere, you're going to hear about it everywhere. In this representation, a Hadamard gate does a 180° rotation about a certain slanted axis. The Pauli gates (NOT being one out of three) also do 180° rotations but only about $x$ or $y$ or $z$. 
Because such geometrical operations are quite restricted, these gates alone can't really do much. (Indeed, if you restrict yourself to those and a CNOT in your quantum computer, you just build a very expensive and ineffective classical device.) Rotating about something tilted is important, and one more ingredient you usually need is rotating by a smaller fraction of the angle, like 45° (as in the Phase shift gate).
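A quick way to see both properties mentioned above — superposition creation and self-reversibility ($H^2 = I$) — is to apply the matrix directly. A small sketch of my own (plain Python, no quantum library assumed):

```python
import math

s = 1.0 / math.sqrt(2.0)
H = [[s, s],
     [s, -s]]          # the Hadamard matrix

def apply(gate, state):
    """Multiply a 2x2 gate into a 2-component state vector."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

zero = [1.0, 0.0]          # |0>
plus = apply(H, zero)      # (|0> + |1>)/sqrt(2): the equal superposition
back = apply(H, plus)      # applying H again returns us to |0>
```

`plus` has both amplitudes equal to $1/\sqrt{2}$ (so measuring gives 0 or 1 with probability 1/2 each), and a second application undoes the first — the geometric 180° rotation applied twice is the identity.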
{ "domain": "cs.stackexchange", "id": 7370, "tags": "logic, quantum-computing" }
Height of Mirror Required
Question: I was curious about the minimum height of a mirror required to see your full body, and I found out that it was half of your height, in other words the minimum height to view the image = your height / 2. Using this, how do you calculate the minimum height of a mirror required to view other objects? For instance, say you have an object half of the distance to the mirror that you are (if you are 1 meter away, the object will be at 0.5). Is there a relationship to find out what height the mirror must be to see the whole object, in terms of the height of the object? Assume the mirror is a plane mirror. Edit: Making it clearer, I'll include a practice question. You are standing near a plane mirror at a distance of 2d, and there is a ball at point d. What height must the mirror be, in terms of the height of the ball, b, in order for you to observe the whole ball in the mirror? Edit 2: My specific question is how do you find the minimum height of the mirror required to see objects other than you in the mirror. For example, say you are standing near a plane mirror at a distance of 2d, and there is a cat at distance d from the plane mirror. What height must the mirror be, in terms of the height of the cat, h, in order for you to observe the whole cat in the mirror? Answer: You will be able to understand it better if you imagine that the mirror forms a reflection of every object present in front of its reflecting surface. It will only be visible to you when your field of view makes it possible to see the reflection. In this diagram, object A forms a reflection A'. The reflection is visible only when the person's field of view covers it. Note that if the mirror were to be smaller and not lie beneath A, even then a reflection would be formed, whose view would be limited by the light rays reaching you after reflecting at a certain angle.
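For the practice question, the geometry reduces to similar triangles once the object's image is placed the same distance behind the mirror as the object is in front of it. A sketch of my own computation (treating the eye as a single point level with the object's centre — an assumption not stated in the question):

```python
def min_mirror_height(h, d_eye, d_obj):
    """Minimum mirror height to see an object of height h, with a point
    eye d_eye in front of the mirror and the object d_obj in front of it.
    The image sits d_obj *behind* the mirror, so the rays from the eye to
    the image's top and bottom cross the mirror plane at the fraction
    d_eye / (d_eye + d_obj) of the way to the image."""
    return h * d_eye / (d_eye + d_obj)

# eye at 2d, ball at d: image is 3d from the eye, so the mirror needs 2h/3
# eye and object at the same distance (d_eye == d_obj): recovers h/2,
# the familiar "half your height" result for seeing yourself
```

Under this sketch the answer to the practice question would be 2b/3 for the ball (or 2h/3 for the cat); the exact value shifts slightly once eye height relative to the object is accounted for.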
{ "domain": "physics.stackexchange", "id": 100309, "tags": "optics, visible-light, reflection" }
Coracoid vs. Coronoid - Etymology/Naming Choice?
Question: The word coracoid (e.g., coracoid process of scapula) literally means "resembling a crow/raven" or "of the form of a crow/raven." In this case, I assume, resembling the hooked characteristic of a corvid's beak. Etymology: korax ("raven",Greek) + -oid (from Greek -oeidēs meaning "form"). However, the word coronoid (e.g., coronoid process of the ulna or of the mandible) also refers to a hooked projection of bone. Its etymology, however, is more straightforward: Etymology: korōnē ("hooked", Greek) + -oid (from Greek -oeidēs meaning "form"). My question: Why the distinction? - Why is one termed the "coracoid" while the other structures are referred to as "coronoid"? Since the coracoid process looks no more like an actual corvid's beak than the other structures, it seems that they should have all been referred to as "coranoid". I get that we've just kept things the way they originally were b/c we like to do that in biology, but what was the original reason for the naming of the coracoid to break from the trend? Answer: It appears that both naming conventions originate with Galen, the Greek physician, almost 2000 years ago (for example, see: Singer, 1952). Although the precise motivations behind the naming conventions aren't entirely clear, it seems that the origin of the coracoid naming convention is quite simple: Galen thought it looked specifically like a raven's beak, as you state, rather than the beak of some other bird. Fortuine (2000) seems to suggest that the origins for both words originally applied to beaks of different birds, which implies that Galen may have thought some bone structures looked like beaks of one bird, and other bone structures looked like the beaks of another bird, so he used the corresponding terms of description. 
Specifically, "korōnē" may have referred to a crow or cormorant (I feel like the cormorant link makes more sense, though both are supported by Arnott (2007) - it may not be possible to know how the distinction was applied from surviving texts), as well as referring to a hooked shape. The same source mentions several other bird-derived terms, including the coccyx resembling the beak of a cuckoo and "rostrum" used more generally for beak-like structures. Today, coronoid and coracoid may sound similar enough to be easily confused by students of medicine. However, giving these different bones distinct names does allow for some useful shorthand, such that one need not refer to the "coronoid of the scapula": I would assume, though I'm not sure if it is possible to find a good reference on this, that this is the motivation to keep things as-is, besides of course the history. References Arnott, W. G. (2007). Birds in the Ancient World from A to Z. Routledge, New York. Duckworth, W. L. H., & Lyons, M. C. (2010). Galen on anatomical procedures: the later books. Cambridge University Press. Fortuine, R. (2000). The words of medicine: sources, meanings, and delights. Charles C Thomas Publisher. Singer, C. (1952). Galen's elementary course on bones.
{ "domain": "biology.stackexchange", "id": 6869, "tags": "human-anatomy, terminology, anatomy, etymology" }
What resources are available for learning QCL?
Question: I'm struggling to find much about the language QCL, rather than about quantum computing itself. Is there anything out there like that? It doesn't have to be free. Answer: A quick googling reveals that Bernhard Ömer has worked extensively on this topic. Check out the documentation section here. He describes the installation procedure on the corresponding GitHub page. Quantum Programming in QCL (PDF) My master thesis in computing science deals with computational and architectural questions of quantum programming and illustrates the design of quantum algorithms in QCL. For readers with a CS rather than a physical background, this book also features a brief introduction into quantum physics in general. A Procedural Formalism for Quantum Computing (PDF) My master thesis in theoretical physics about QCL. Besides a general introduction to quantum programming and a description of the language, a complete QCL implementation of the Shor algorithm is presented. Structured Quantum Programming (PDF) My PhD thesis on structured programming languages for quantum computing (latest revision Jan 9 2009). Classical Concepts in Quantum Programming This paper from the QS2002 conference describes classical concepts in QCL, including new features like conditional operators, quantum conditions and quantum if-statements. The print version appeared in the International Journal of Theoretical Physics 44/7, pp. 943-955, 2005. Also, check out these video lectures on QCL by Macheads101: Quantum Programming Tutorial #1: Installing QCL Quantum Programming Tutorial #2: Basic Qubit Operations Quantum Programming Tutorial #3: The V Gate
{ "domain": "quantumcomputing.stackexchange", "id": 590, "tags": "programming, resource-request, qcl" }
Would an Object Near a Pre-Blackhole Star Experience the Same Gravity as Post-Blackhole?
Question: My question was inspired by this question, which got me thinking. According to Newton's Law of Gravitation, $$F = G\frac{m_1m_2}{r^2},$$ the gravity of an object is inversely proportional to the square of the distance between the objects, meaning that the closer the objects, or, rather, their centers of mass, get, the higher the gravitational force between them. If this is the case, why are black holes "special"? Seeing as a star is made of gas & plasma, would an object at what would become the event horizon after it becomes a black hole be "sucked in" and, assuming it isn't destroyed by heat or various pressures, not be able to get out of the gravitational pull? If an object were extremely close to the center of gravity of a planet, whether it was solid or gas, would it be able to get out? The Schwarzschild radius for Earth, according to Wikipedia, is 8.87 millimeters. If someone were able to get that close to its center of gravity, would he/she be able to escape? What about for smaller objects, which have a Schwarzschild radius measured in nanometers or smaller, which is the size of atoms & subatomic particles? I assume there is a limit where subatomic forces like the strong & weak forces take over, but what is that limit & why does it happen? Answer: Here are several thought experiments (and what happens in each). I'll ignore relativistic effects like time distortion - not for your sake, but mine :) The earth collapses to a black hole beneath our feet. We fall with it, and end up inside a black hole, presumably dead. The earth collapses to a black hole beneath our feet, but we stay in the same place. In this case, we feel exactly the same gravitational pull (although a lot more fear). The earth remains the same size and shape, and we drill down to within its Schwarzschild radius. Say we're considering a particle at the very center of the earth. It will feel no gravitational pull at all, because each bit of the earth will be pulling it in a different direction.
The pull from the mass at the north pole will be opposed by the pull from the mass at the south pole, and so on. Escape is easy from here. Black holes are special not because the mass is large (many black holes have masses on the order of the mass of the sun), but because the radius is small. You can get close to the black hole, and every particle there will pull you in the same direction. The forces add up, instead of cancelling.
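The cancellation described above is the shell theorem: inside a uniform sphere, only the mass enclosed below your radius pulls on you, so gravity falls linearly to zero at the center. A rough numerical sketch (assuming a uniform-density Earth, which the real Earth is not):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth mass, kg
R = 6.371e6     # Earth radius, m

def g(r):
    """Gravitational acceleration at radius r for a uniform-density
    sphere.  Outside: the usual GM/r^2.  Inside: by the shell theorem
    only the mass enclosed within r contributes, giving GM r / R^3."""
    if r >= R:
        return G * M / r**2
    return G * M * r / R**3

print(g(0))     # 0.0 -- no net pull at the very center
print(g(R))     # ~9.8 m/s^2 at the surface
```

This is why an object near the center of the (uncollapsed) Earth can escape easily: the pull there is small, not enormous, even though it sits well inside the 8.87 mm Schwarzschild radius's worth of mass.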
{ "domain": "physics.stackexchange", "id": 15175, "tags": "general-relativity, black-holes, newtonian-gravity" }
Why can't matter annihilate with itself?
Question: I am wondering, in particle physics, when we have an annihilation vertex, we always have a particle and an antiparticle annihilating. Why is that? Let me give an example to be clearer. Let's take a weak interaction vertex. We know an electron and an electron antineutrino can annihilate each other on a weak interaction vertex to give a W- boson. Why is this not possible with an electron and an electron neutrino? I understand in some cases it is forbidden by conservation of charge; that is why an electron and positron can annihilate but not 2 electrons. But in the example above, I fail to see which conservation is violated, if there is one. Or is this annihilation rule an experimental postulate? Answer: One has to realize that words in physics have a definite meaning. This meaning, for words used in physics, is dependent on the specific mathematical model that uses it. The mathematical model for particle physics is called the standard model. All matter and energy emerge from interactions and composites of these particles. The model encapsulates the large amount of particle data measured over the last half century. In all particle interactions, particles may transmute at the vertices, changing quantum numbers and identity as defined in the table. Changing identity is not called annihilation. The basic definition of annihilation as used in particle physics is that after the interaction all the quantum numbers of the initiating particles add up to zero. The particles coming out of the vertex add up to zero, and thus new pairs can appear with different quantum numbers. Thus we get, as an example, e+e- annihilating into a quark antiquark pair, which have completely different quantum numbers than the electron positron pair, but their addition adds up to zero. In your example an electron antineutrino scattering on an electron will make the lepton number zero but not the charge, so it is not within the definition of "annihilation". 
It is merely an interaction. Of course an electron neutrino on an electron gives a lepton number of 2, and it can only be a scattering. Searching for Feynman diagrams I did find this: A Feynman diagram representing the annihilation of an electron neutrino and a positron to a muon neutrino and a muon. So the author does not include charge zero within the definition of annihilation, so one might find such a usage of the word, but it is not mainstream, and it should be noted that it is from an astrophysics course, not a particle physics one :).
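The bookkeeping in the answer — "annihilation" means every quantum number of the incoming pair sums to zero — can be sketched in a few lines. The particle table below is a toy subset (electric charge and lepton number only; other quantum numbers are omitted for brevity):

```python
# (electric charge, lepton number) for a few leptons -- a toy subset,
# just enough to illustrate the sum-to-zero rule discussed above.
PARTICLES = {
    "e-":      (-1, +1),
    "e+":      (+1, -1),
    "nu_e":    ( 0, +1),
    "nubar_e": ( 0, -1),
}

def can_annihilate(a, b):
    """Annihilation in the sense used in the answer: every quantum
    number of the pair must add up to zero."""
    qa, la = PARTICLES[a]
    qb, lb = PARTICLES[b]
    return qa + qb == 0 and la + lb == 0

print(can_annihilate("e-", "e+"))       # True: all sums vanish
print(can_annihilate("e-", "nubar_e"))  # False: net charge is -1
print(can_annihilate("e-", "nu_e"))     # False: lepton number is 2
```

The second case is exactly the question's example: lepton number cancels but charge does not, so it is a scattering (or a W- vertex), not an annihilation.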
{ "domain": "physics.stackexchange", "id": 36477, "tags": "quantum-field-theory, particle-physics, conservation-laws, antimatter, weak-interaction" }
Algorithm for a special case of SAT/#SAT
Question: Does anyone know of an algorithm that can solve the following special case of SAT in polynomial time? Are there any algorithms that can solve the counting (#SAT) version of it in polynomial time? Special case: If clause $a$ and clause $b$ have one or more variables in common, that is, there exists some variable $x$ that is in both $a$ and $b$, at least one of the shared variables is positive in $a$ and negated in $b$ or vice-versa. Example: $$(a \vee b \vee c) \wedge (\bar{a} \vee c \vee d)$$ Example of an instance that does not fit in the special case: $$(a \vee b \vee c) \wedge (a \vee c \vee d)$$ Answer: From Solving #SAT using vertex covers, published at SAT'06 by Naomi Nishimura, Prabhakar Ragde, and Stefan Szeider: A cluster formula is a variable-disjoint union of so-called hitting formulas; any two clauses of a hitting formula clash in at least one literal. The known polynomial time algorithm for computing the number of models of a hitting formula can be extended in a straight-forward way to compute the number of models of a cluster formula. Clash is later clarified/defined as: We consider propositional formulas in conjunctive normal form (CNF), represented as sets of clauses. That is, a literal is a (propositional) variable $x$ or a negated variable $\overline{x}$; a clause is a finite set of literals not containing a complementary pair $x$ and $\overline{x}$; a formula is a finite set of clauses. For a literal $ℓ = \overline{x}$ we write $\overline{ℓ} = x$; for a clause $C$ we set $\overline{C} = \{ \overline{ℓ} : ℓ ∈ C \}$. We say that two clauses $C$, $D$ overlap if $C ∩ D \neq ∅$; we say that $C$ and $D$ clash if $C$ and $\overline{D}$ overlap. Note that two clauses can clash and overlap at the same time. [...] A formula is a hitting formula if any two of its clauses clash (see [17]). A cluster formula is the variable-disjoint union of hitting formulas [...] Lemma 3. #SAT can be solved in polynomial time for cluster formulas. 
So their cluster formula seems to be exactly what you have defined. There's also a journal version of that paper, it seems; the result is (not surprisingly) also mentioned in a 2011/2012 survey paper "Backdoors to satisfaction" on which Szeider is a co-author, and which was published in some festschrift. My first instinct that this was a more suitable question on cstheory.SE was perhaps not wrong. :-) Also the notion of hitting formulas is cited to [17]: H. Kleine Büning and X. Zhao. Satisfiable formulas closed under replacement. In H. Kautz and B. Selman, editors, Proceedings for the Workshop on Theory and Applications of Satisfiability, volume 9 of Electronic Notes in Discrete Mathematics. Elsevier Science Publishers, North-Holland, 2001. In another paper, the fact that "It is known that for hitting formulas the satisfiability problem can be solved efficiently [7]" is cited to an even older paper: K. Iwama, "CNF satisfiability test by counting and polynomial average time", SIAM J. Comput., 18 (1989), pp. 385–391.
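The polynomial-time counting algorithm for hitting formulas alluded to above is short enough to state: since any two clauses clash in some literal, no assignment can falsify two clauses at once, so the sets of falsifying assignments are pairwise disjoint and #models = 2^n − Σ_i 2^(n−|C_i|). A sketch (clauses as sets of nonzero integers, −x for a negated variable x):

```python
from itertools import product

def count_models_hitting(clauses, n):
    """#SAT for a hitting formula over variables 1..n.  Each clause C
    is falsified by exactly 2^(n - |C|) assignments, and because every
    pair of clauses clashes, these sets are pairwise disjoint, so the
    non-models can simply be subtracted."""
    return 2**n - sum(2**(n - len(c)) for c in clauses)

def count_models_brute(clauses, n):
    """Exponential brute-force count, for validation only."""
    count = 0
    for bits in product([False, True], repeat=n):
        sat = lambda lit: bits[abs(lit) - 1] == (lit > 0)
        if all(any(sat(l) for l in c) for c in clauses):
            count += 1
    return count

# The question's example: (a v b v c) & (~a v c v d), vars a..d = 1..4.
F = [{1, 2, 3}, {-1, 3, 4}]
print(count_models_hitting(F, 4))  # 12
print(count_models_brute(F, 4))    # 12
```

The extension to cluster formulas is then just multiplication: the variable-disjoint hitting components are independent, so their model counts multiply.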
{ "domain": "cs.stackexchange", "id": 4251, "tags": "algorithms, complexity-theory, satisfiability, polynomial-time, counting" }
Having trouble proving a language is NP-complete
Question: I'm asked to prove that, if P=NP, 0*1* is NP-complete, but I'm having trouble going about doing it. I know it's fairly easy to prove it's in NP by creating a TM to verify an input (which can be done in O(n) time, and that's polynomial). But then I now have to reduce an NP-complete problem to 0*1* in order to prove that 0*1* is NP-complete. I'm thinking of reducing SAT to it, but I have no idea how to do that, since in SAT all you can use is and, or, and negate, and there's no way to tell if a 1 came before a 0 in an input by doing that (at least, as far as I can tell). Thanks Answer: I might be taking this question too lightly, but if you are given that P = NP then showing that 0*1* is in P should satisfy the definition of NP-Complete... i.e. if a language L is in P we now know it's also in NP (because P=NP is given) and every language in NP is clearly polynomial time reducible to L! Think about it: if A is polynomial time reducible to L and we know L is in P (you'd have to prove this part), then A is also in P...and since P=NP...we know all languages in NP will be polynomial time reducible to L. This satisfies the two conditions of NP-completeness: 1. L is in NP and 2. every A in NP is polynomial time reducible to L
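The answer's reduction can be made concrete: under the assumption P = NP, any NP language A has a polynomial-time decider, and the reduction to 0*1* just runs it and outputs a fixed yes-instance or no-instance. A sketch (the decider below is a toy stand-in; any polynomial-time decider for A would play the same role):

```python
import re

def in_zero_star_one_star(s):
    """Membership test for the regular language 0*1* (clearly in P)."""
    return re.fullmatch(r"0*1*", s) is not None

def reduce_to_zero_star_one_star(x, decide_A):
    """Polynomial-time many-one reduction from A to 0*1*, assuming a
    poly-time decider for A exists (which P = NP grants for any A in
    NP): map yes-instances to a fixed member of the language and
    no-instances to a fixed non-member."""
    return "01" if decide_A(x) else "10"

# Toy stand-in for A: strings of even length.
decide_A = lambda x: len(x) % 2 == 0
print(in_zero_star_one_star(reduce_to_zero_star_one_star("ab", decide_A)))   # True
print(in_zero_star_one_star(reduce_to_zero_star_one_star("abc", decide_A)))  # False
```

The trick is that the reduction is allowed to do all the hard work itself — it decides A outright, then emits a trivial instance — which is legal precisely because P = NP makes that deciding step polynomial.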
{ "domain": "cs.stackexchange", "id": 2221, "tags": "complexity-theory, np-complete, np-hard, np" }
G4v Gravity Theory: Why does this get rid of Dark Energy?
Question: Earlier this year, Carver Mead of CalTech published a paper which seems to be garnering a lot of attention: http://arxiv.org/abs/1503.04866 http://www.npl.washington.edu/AV/altvw180.html http://www.geekwire.com/2015/after-100-years-einsteins-general-relativity-faces-a-big-party-and-a-big-test/ I also watched the video of his talk at CalTech: https://www.youtube.com/watch?v=XdiG6ZPib3c The Q&A at the end of this talk seemed to indicate that he may be misapplying GR equations for tasks for which they may not have been designed or for which they need proper manipulation. The G4v theory claims, among other things, that it does away with the need for a Cosmological constant (which, based on the gravitational wave uses, I can understand) and also DOES AWAY with Dark Energy. It seems future LIGO experiments could provide supporting or refuting evidence for G4v. My Question: How/why does this theory do away with the need for Dark Energy? Does it invalidate prior calculations that the universe is expanding at an accelerating rate? Or does it just describe the accelerating expansion without the need for the cosmological constant? If the latter, that still requires something accelerating the expansion, so I'm confused. Answer: John G. Cramer discussed G4V in a recent Analog Alternate View Column (Mar. 2016), and how Advanced LIGO data could possibly falsify G4V, General Relativity or even both of them (their predicted gravity wave signatures differ). Cramer also stated that there would be no dark energy since G4v explains distant receding Type Ia supernova dimming as partially due to relativistic beaming, leaving no need for a cosmological constant. In other words, the accelerated expansion is an illusion because more distant Type Ia supernovas appear dimmer than previously predicted if G4V is correct.
{ "domain": "physics.stackexchange", "id": 28507, "tags": "general-relativity, dark-energy, cosmological-constant" }
Does a fast process always have to be adiabatic?
Question: In common questions on thermodynamic processes, say for example a simple straightforward question like "A gas at $T_1\ K$ and $P_1$ atm is suddenly released at atmospheric pressure. Find the final temperature of the gas", we assume the process to be adiabatic since no heat is exchanged between the system and the surroundings in that small interval of time. So my question is, do all quick processes have to be adiabatic, and similarly, is a slow process always isothermal? Answer: If a process is rapid enough that there is little heat transfer between system and surroundings then treating it as adiabatic serves as a good first approximation (an adiabatic process strictly requires zero heat transfer). For example this approximation is employed in calculating sound speed through a medium, because the contraction-expansion cycle of the medium due to the passage of an acoustic wave is considered rapid. Ultimately whether such an approximation is good enough is verified only by doing experiments. A slow process, on the other hand, approximates a quasistatic process. A quasistatic process is one in which the system passes through a succession of equilibrium states while executing a process. A quasistatic process can be isothermal if the process involves maintaining constant temperature, but is not limited to it.
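The sound-speed example mentioned in the answer makes the adiabatic-vs-isothermal distinction concrete. A sketch for air treated as an ideal gas (γ ≈ 1.4 and molar mass ≈ 0.029 kg/mol are standard textbook values, not from the answer):

```python
import math

R = 8.314     # gas constant, J/(mol K)
M = 0.029     # molar mass of air, kg/mol (approximate)
gamma = 1.4   # adiabatic index for a diatomic gas

def sound_speed(T, adiabatic=True):
    """Speed of sound in an ideal gas: sqrt(gamma R T / M) when the
    compressions are treated as adiabatic (the rapid-process
    approximation), sqrt(R T / M) when treated as isothermal
    (Newton's original, incorrect assumption)."""
    k = gamma if adiabatic else 1.0
    return math.sqrt(k * R * T / M)

T = 293.0  # room temperature, K
print(round(sound_speed(T, adiabatic=True)))   # ~343 m/s, close to experiment
print(round(sound_speed(T, adiabatic=False)))  # ~290 m/s, noticeably too low
```

That the adiabatic value matches measurement while the isothermal one falls ~15% short is exactly the experimental verification the answer refers to: the acoustic compressions really are too fast for heat to flow.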
{ "domain": "physics.stackexchange", "id": 48057, "tags": "thermodynamics, ideal-gas, gas, adiabatic" }
Prepending Input layer to pre-trained model
Question: I'm trying to input numpy arrays of shape (1036800,) - originally images of shape (480, 720, 3) - into a pre-trained VGG16 model to predict continuous values. I've tried several variations of the code below: input = Input(shape=(1036800,), name='image_input') initial_model = VGG16(weights='imagenet', include_top=False) x = Flatten()(initial_model(input).output) x = Dense(200, activation='relu')(x) x = Dense(1)(x) model = Model(inputs=input, outputs=x) Previous variations of the above code yielded errors related to the input being the wrong dimensions, input_shape needing to have 3 channels (when using (1036800,) for that parameter in the initialization of VGG16), and the most recent error that results from running the above code is this: Traceback (most recent call last): File "model_alex.py", line 57, in <module> model = initialize_model() File "model_alex.py", line 20, in initialize_model x = Flatten()(initial_model(input).output) File "/home/aicg2/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 596, in __call__ output = self.call(inputs, **kwargs) File "/home/aicg2/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 2061, in call output_tensors, _, _ = self.run_internal_graph(inputs, masks) File "/home/aicg2/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 2212, in run_internal_graph output_tensors = _to_list(layer.call(computed_tensor, **kwargs)) File "/home/aicg2/.local/lib/python2.7/site-packages/keras/layers/convolutional.py", line 164, in call dilation_rate=self.dilation_rate) File "/home/aicg2/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 3156, in conv2d data_format='NHWC') File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 639, in convolution input_channels_dim = input.get_shape()[num_spatial_dims + 1] File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 500, in __getitem__ return 
self._dims[key] IndexError: list index out of range Here is the full code. Here is the sample data file used in the script. One possible approach to fixing this might be to resize the raw image files to 224x224 and turn them into numpy arrays of shape (224, 224, 3) so they can be plugged into the pre-trained model's first layer. However, I don't want to warp the images or waste another night pre-processing data when I should already be training. Besides that, all I can think to do is Google my problem and try to adapt the found solutions or aimlessly tweak various shape-related parameters and functions -- neither of which has gotten me very far over the past 4 hours. Answer: The issue is that you shouldn't flatten the images into a 1-dimensional vector, because VGG16 contains 2D convolution layers (i.e. spatial convolution over images), which require the input to have the shape (number_of_images, image_height, image_width, image_channels), given that keras.backend.image_data_format() returns 'channels_last'. If your image_data_format is 'channels_first', change the input data shape to (number_of_images, image_channels, image_height, image_width). Here is your fixed code (tested with Keras 2.0.4): x_train = x_train.reshape((x_train.shape[0], 480, 720, 3)) x_test = x_test.reshape((x_test.shape[0], 480, 720, 3)) initial_model = VGG16(weights='imagenet', include_top=False) input = Input(shape=(480, 720, 3), name='image_input') x = Flatten()(initial_model(input)) x = Dense(200, activation='relu')(x) x = BatchNormalization()(x) x = Dropout(0.5)(x) x = Dense(1)(x) model = Model(inputs=input, outputs=x) model.compile(loss='mse', optimizer='adam') model.fit(x_train, y_train, epochs=20, batch_size=16) score = model.evaluate(x_test, y_test, batch_size=16)
{ "domain": "datascience.stackexchange", "id": 1875, "tags": "python, keras" }
What does a sphere moving close to the speed of light look like?
Question: What shape does the viewer in a reference frame with $v=0$ perceive? I suppose that since the sphere moves in one direction only (along oX, not oY) its cross-section would change into an ellipse, where the horizontal diameter would be shorter. However, my textbook says that the viewer still perceives a regular spherical shape. How come? Answer: This is just a footnote to Crazy Buddy's answer (which is correct! :-): Length contraction is a real phenomenon, and indeed the RHIC observes this every day because the nuclei are moving so fast that the collision is between two disks, not two spheres. However, to see something you need to have light emitted from the object reach your eye, and the light from different parts of the moving sphere takes different times to reach your eye. This distorts the image of the contracted object and has the apparently paradoxical effect of making it look spherical even though it is contracted. So the moving sphere looks spherical even though it isn't spherical. The calculation of how light from the object reaches your eye is quite involved, and I'm afraid I don't know of a simple analogy to understand it. There are various animations showing this effect on the web. See for example this one.
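The contraction itself (as opposed to the appearance) is straightforward to compute: the diameter along the direction of motion shrinks by the Lorentz factor. A sketch (the RHIC-like speed below is illustrative, not a quoted figure):

```python
import math

def contracted_length(L0, beta):
    """Length along the direction of motion as measured in the lab
    frame: L = L0 * sqrt(1 - beta^2), where beta = v/c."""
    return L0 * math.sqrt(1 - beta**2)

# A 1 m sphere at beta ~ 0.9999 is squashed to about 1.4 cm along its
# motion -- a thin disk, as for RHIC nuclei -- even though, per the
# answer above, it would still *look* spherical to an observer.
print(contracted_length(1.0, 0.9999))
```

The transverse diameter is unchanged, which is why the contracted shape really is an oblate disk in the lab frame; only the light-travel-time effect restores the round appearance.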
{ "domain": "physics.stackexchange", "id": 13778, "tags": "special-relativity, speed-of-light, reference-frames, observers" }
What would be alternative ways to model IConvention?
Question: public class Item { /*...*/ } public class Model { /*...*/ } public interface IConventionInstanceSelector { IConventionInstance Select(Item item); } public interface IConventionInstance { void ApplyTo(Model model); } public interface IConvention { bool IsMatchedBy(Item item); IConventionInstance CreateInstance(Item item); } public class ConventionInstanceSelector : IConventionInstanceSelector { readonly List<IConvention> _conventions; public ConventionInstanceSelector(IEnumerable<IConvention> conventions) { _conventions = conventions.ToList(); } public IConventionInstance Select(Item item) { foreach(var convention in _conventions) { if(convention.IsMatchedBy(item)) { return convention.CreateInstance(item); } } return NullConventionInstance.Instance; } } public class ModelReader { IConventionInstanceSelector _selector; public ModelReader(IConventionInstanceSelector selector) { _selector = selector; } public Model Read(IEnumerable<Item> items) { var model = new Model(); foreach(var item in items) { _selector.Select(item).ApplyTo(model); } return model; } } I'm a bit concerned about the specification and factory method being merged into one interface (IConvention) and that they can be called separately when in fact one depends on the decision of the other. I also get the feeling there's a pattern here I'm not seeing. Just to give a little bit more context: I'm trying to build an API whereby people can build their own conventions that get applied to an input stream (an enumeration of items in this case) and have each convention apply a transformation on the input (the item in this case) and add it to a model. The main concern is the convention represents both the fact an item satisfies the convention as well as the logic to apply it to the model AND the fact that how I've modeled it right now, those are two distinct operations whereby the latter relies on the first to be able to do its job properly. 
It's like specification pattern meets factory pattern. Answer: First of all, the combination of a Specification and another method is not that unusual - it's called the Tester-Doer Pattern (although to be fair, I'm not sure whether it's a bona-fide pattern, as I've never seen it described outside of .NET). Even so, despite the precedent, I completely understand why you dislike the Temporal Coupling it implies. So which alternatives are available? The first refactoring we could apply is to change the definition of IConvention to employ a Maybe monad. Assuming that we've defined a Maybe<T> class, IConvention now looks like this: public interface IConvention { Maybe<IConventionInstance> CreateInstance(Item item); } This would allow you to check, in a type-safe way, whether or not an IConventionInstance instance could be created from the item. However, once you realize that a Maybe monad really is nothing more than a special case of IEnumerable, you could consider refactoring IConvention once more: public interface IConvention { IEnumerable<IConventionInstance> CreateInstances(Item item); } A normal implementation would return zero or one item. This would make the ConventionInstanceSelector and IConventionInstanceSelector types redundant (always a good thing), and you could implement the ModelReader class like this (caveat: I haven't tried to compile it): public class ModelReader { IEnumerable<IConvention> _conventions; public ModelReader(IEnumerable<IConvention> conventions) { _conventions = conventions; } public Model Read(IEnumerable<Item> items) { var model = new Model(); var conventionInstances = from item in items from c in _conventions from ci in c.CreateInstances(item) select ci; foreach(var ci in conventionInstances) { ci.ApplyTo(model); } return model; } }
{ "domain": "codereview.stackexchange", "id": 1384, "tags": "c#" }
Simplifying exception handling on Enumerators
Question: I have a files search function that iterates over a given directory until a given depth is reached and returns the found files. I did this via the Enumerate methods of the Directory class and yield return the results. Since it's very likely that I hit a directory that I can't access (e.g. system directories) or generate a too long path (if the depth is large), I have to catch the exceptions from these cases. However, since I can't use yield in try/catch statements, I find my code pretty bloated, because I have to separate the critical method calls from the rest. Is there a better/shorter way to do this, or are there best practices for that case? private IEnumerable<string> SearchSubdirs(string currentDir, int currentDepth) { IEnumerable<string> exeFiles; try { exeFiles = Directory.EnumerateFiles(currentDir, "*.exe"); } catch (UnauthorizedAccessException uae) { Debug.WriteLine(uae.Message); yield break; } catch (PathTooLongException ptle) { Debug.WriteLine(ptle.Message); yield break; } foreach (string currentFile in exeFiles) { // Ignore uninstaller *.exe files if (currentFile.IndexOf("unins", 0, StringComparison.CurrentCultureIgnoreCase) == -1) { yield return currentFile; } } if (currentDepth < maxDepth) { IEnumerable<string> subDirectories; currentDepth++; try { subDirectories = Directory.EnumerateDirectories(currentDir); } catch (UnauthorizedAccessException uae) { Debug.WriteLine(uae.Message); yield break; } catch (PathTooLongException ptle) { Debug.WriteLine(ptle.Message); yield break; } foreach (string subDir in subDirectories) { foreach (string file in SearchSubdirs(subDir, currentDepth)) { yield return file; } } } } Answer: I would use Linq methods to simplify the code. Also I've extracted 2 methods to simplify the main method. And please rename GetFilesWeAreLookingFor to whatever you find appropriate :). 
private static IEnumerable<string> GetFilesWeAreLookingFor(string currentDir) { try { return Directory.EnumerateFiles(currentDir, "*.exe") .Where(fileName => fileName.IndexOf("unins", 0, StringComparison.CurrentCultureIgnoreCase) == -1); } catch (UnauthorizedAccessException uae) { Debug.WriteLine(uae.Message); } catch (PathTooLongException ptle) { Debug.WriteLine(ptle.Message); } return Enumerable.Empty<string>(); } private IEnumerable<string> GetNestedFiles(string currentDir, int nestedDepth) { try { return Directory.EnumerateDirectories(currentDir) .SelectMany(subDirectory => SearchSubdirs(subDirectory, nestedDepth)); } catch (UnauthorizedAccessException uae) { Debug.WriteLine(uae.Message); } catch (PathTooLongException ptle) { Debug.WriteLine(ptle.Message); } return Enumerable.Empty<string>(); } private IEnumerable<string> SearchSubdirs(string currentDir, int currentDepth) { IEnumerable<string> filesWeAreLookingFor = GetFilesWeAreLookingFor(currentDir); if (currentDepth < _maxDepth) filesWeAreLookingFor = filesWeAreLookingFor.Concat(GetNestedFiles(currentDir, currentDepth + 1)); return filesWeAreLookingFor; }
{ "domain": "codereview.stackexchange", "id": 3398, "tags": "c#, exception-handling" }
How to restore deleted objects in R?
Question: Suppose if I delete all the objects from current session: rm(list=ls()) Is there any way in base R or using a function from a package which lets me restore the deleted objects from the current session? Answer: The answer is unfortunately no. There is no handy ctrl-z method. A tip to avoid these situations: I suggest you always save either the 'environment' or, as I prefer to do, the scripts with the codes for the desired objects, and save them regularly. I never type any commands directly into a work space, but always in a script which I save. This is so I can always look back at which steps I took in my coding. Personally, I save my scripts with date notations in the file name - so when I change a script I'll still have access to the old version of the scripts. It has happened quite a few times that I've changed a code and later realized that the old code was better. It's quite a hassle when you don't have the old file saved.
{ "domain": "datascience.stackexchange", "id": 7013, "tags": "r, data-mining, statistics" }
Has the use of the holographic principle of string theory in condensed matter physics silenced the skeptics?
Question: It seems to me that the use of string theory in calculations of strongly-interacting matter in condensed matter physics is an example of the theory being on the right track. And then there's the application of string theory to black holes. Have these quieted the skeptics? If not, what's the deal? Could a theory be used in making accurate calculations and still be wrong? Aren't the odds of that happening rather small? Answer: I know very little about this field, so please don't take my answer too seriously. My impression is that it's mainly string theorists who are excited about this line of research; condensed matter physicists are mostly skeptical (but many are following the field with interest). I guess the main reason is that these holographic calculations have not yet (as far as I know) given anything which is both new and impressive (from a condensed matter perspective). But I have to add that recently I have noticed papers using holographic methods with only conventional (and serious) condensed matter authors (say, this one). This might be a sign of slow acceptance by part of the condensed matter community...? My impression is also that these holographic calculations are in no way under control (compared to the original Maldacena proposal) and rely on many layers of conjectures. Given a QFT, I don't think anybody knows how to systematically construct the gravitational dual. People just try to construct spacetimes with the correct asymptotic symmetries (AdS, Lifshitz, Schrödinger, ...), try different matter configurations and then assume that the AdS/CFT dictionary is still valid. They only try to say certain generic things about a big class of QFTs, using holographic methods, rather than calculate precise quantities for a very specific QFT. But these tools have potential to become very useful for non-perturbative physics if they come under control in the future. 
But whether string theory can make precise and useful calculations for condensed matter physics won't say anything about how correct it is as a theory of quantum gravity (or "everything"), as you seem to imply. What application to black holes do you have in mind? Microscopic calculations of black hole entropy? These calculations show that string theory is consistent as a theory of gravity (there are many other impressive calculations of this sort). Only a few people would disagree on the impressive consistency of string theory, and no alternative theory has been as successful in this regard. But consistency is not enough to declare a theory correct.
{ "domain": "physics.stackexchange", "id": 11338, "tags": "string-theory, soft-question" }
Identification of a segmented black insect in France
Question: Found in the Lot department of southern France. Answer: I think this is some sort of soldier fly larva (family Stratiomyidae). That would explain the lack of legs. There are thousands of species worldwide, with both aquatic and terrestrial larvae, so it might be possible to narrow it down a bit more. Image from bugguide.net for comparison: Thanks to @bli for reminding me of dipteran larvae!
{ "domain": "biology.stackexchange", "id": 6017, "tags": "zoology, species-identification, entomology" }
How does this equation arise, and what does it describe?
Question: I'm having some trouble with momentum and impulse. In this equation, Thrust = F = m * a = dp / dt = m * (dv / dt) + v * (dm / dt) how does the term m * (dv / dt) + v * (dm / dt) arise, and what does it mean? Answer: If $m$ and $v$ are both functions of time, then $$\frac{dp}{dt} = \frac{d}{dt} (m(t)v(t)) = m\frac{dv}{dt} + v\frac{dm}{dt}, $$ by the product rule for differentiation.
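The product rule can also be checked numerically (a supplementary sketch, not part of the original answer; the mass and velocity profiles below are made up purely for illustration):

```python
def m(t): return 2.0 + 3.0 * t   # hypothetical time-varying mass
def v(t): return 5.0 * t**2      # hypothetical velocity profile

def deriv(f, t, h=1e-6):
    """Central finite difference approximation of df/dt."""
    return (f(t + h) - f(t - h)) / (2 * h)

t = 1.7
lhs = deriv(lambda s: m(s) * v(s), t)          # d(mv)/dt computed directly
rhs = m(t) * deriv(v, t) + v(t) * deriv(m, t)  # product rule, term by term
assert abs(lhs - rhs) < 1e-4
```

The two sides agree to numerical precision, which is exactly the statement $\frac{d}{dt}(mv) = m\frac{dv}{dt} + v\frac{dm}{dt}$.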
{ "domain": "physics.stackexchange", "id": 55378, "tags": "newtonian-mechanics, forces, momentum, rocket-science" }
Physics behind a match performing a trick on center of mass
Question: https://www.youtube.com/watch?v=Ucdw0DDI4n8 I've seen another variation where the whole match stick turned to ash. What's going on in this trick? Answer: This is really two tricks in one. Let's look at each one individually. The forks/cork/match set is balancing while being mostly not on top of the cup. This has entirely to do with the center of mass for those objects. The center of mass for those four items appears to be on the lip of the cup. This is why, when the presenter pushes down on it, it "wobbles" and then goes back to balancing. To understand more of what is going on, you can try looking at the system from the side. If the cork is on the leftmost part of what we can see, and the fork-grips are on the rightmost part of what we can see, you can start drawing some arrows representing the forces and torques on the system. The "balance point" is where the match meets the glass. If the torque on the left side of the balance point is equal to the torque on the right side, then the whole system will stay up. Effectively, the fork-grips are stopping the rest of the things from falling. You can do this process with anything hanging off an edge; try it with a book! If you make diagrams showing the center-of-mass and the torques on each object, you may notice they look very similar. The fun part of the forks/cork/match is that you may not expect it on a single match stick. Speaking of which... The match being lit on fire: this appears just to be simple showmanship. (Fire! Pretty!) If the match-stick can support the weight of the forks/cork/match combination, it'll stay up. Turning the match-stick into ash simply degrades the structural integrity of the match. It may also change the weight distribution of the forks/cork/match combination, but not by much, so it stays up. So, simple analysis of weight distribution and torque lets us do this sort of thing.
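The balance condition described above — torques about the balance point canceling, i.e. the combined center of mass sitting over the support — can be sketched numerically. The masses and positions below are invented for illustration, not measured from the video:

```python
# Each entry: (mass_kg, x_m), with x measured from the support point on the
# cup's lip. Negative x is inboard (over the cup), positive x hangs outward.
# All numbers are hypothetical, chosen so the torques cancel exactly.
parts = [
    (0.045, -0.02),  # fork 1
    (0.045, -0.02),  # fork 2
    (0.012,  0.15),  # cork + match assembly hanging outward
]

def net_torque(parts, g=9.81):
    """Net torque (N*m) about the support point; zero means it balances."""
    return sum(m * g * x for m, x in parts)

def center_of_mass_x(parts):
    return sum(m * x for m, x in parts) / sum(m for m, _ in parts)

assert abs(net_torque(parts)) < 1e-9        # left and right torques cancel
assert abs(center_of_mass_x(parts)) < 1e-9  # so the CoM sits over the lip
```

If one mass or lever arm changes, the net torque becomes nonzero and the system tips toward the side with the larger torque — which is why burning away part of the match barely matters, but moving the forks does.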
{ "domain": "physics.stackexchange", "id": 13314, "tags": "newtonian-mechanics, gravity" }
publish only some topics of a rosbag file (Command line or launch file)
Question: Hi! I am using rosbag play in a launch file, and one of the recorded topics is processed data from the other topics. As I want to produce it again, I would like to play all topics except that one. I know I should have given it an anonymous name or I can play it in a namespace. But I would like just not to have that topic playing. Thanks! Cristian Originally posted by cduguet on ROS Answers with karma: 187 on 2012-09-11 Post score: 4 Answer: Another option would be to create another bag file without the unwanted topic using 'rosbag filter' http://www.ros.org/wiki/rosbag/Tutorials/Producing%20filtered%20bag%20files Originally posted by david.hodo with karma: 395 on 2012-09-11 This answer was ACCEPTED on the original site Post score: 7
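For reference, a filter invocation could look like the following sketch — the topic name /processed_data is a placeholder for the actual processed topic, not a name from the question:

```shell
# Write a new bag containing every message except those on the unwanted topic.
# The filter expression is a Python expression over the variables topic, m, t.
rosbag filter original.bag filtered.bag "topic != '/processed_data'"

# Then play back the filtered bag as usual
rosbag play filtered.bag
```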
{ "domain": "robotics.stackexchange", "id": 10981, "tags": "ros, roslaunch, rosbag, namespace, play" }
A mobile robot only moves horizontally and vertically, navigation avoid obstacle problem
Question: Hi all, I am simulating a mobile robot that only moves horizontal and vertical, like: It's length is 7 meters, width is 2.5 meters, and LiDAR is on middle of the robot, the green arrow is the direction it can move, but only one direction at a time. The wheels are using steering, so it can rotate all wheels for 90 degrees to change direction, like: In this case, footprint data sets to:[[-1.25, -3.5], [-1.25, 3.5], [1.25, 3.5], [1.25, -3.5]] In this case, local_planner is: dwa_local_planner When it moves vertical, it can find the local obstacle, and it can avoid obstacle(Let mobile robot stop), like: When it moves horizontal, it finds the obstacle, too, but it can't avoid obstacle, like: Then, I use dynamic_reconfigure update my footprint data to: [[-3.5, -3.5], [-3.5, 3.5], [3.5, 3.5], [3.5, -3.5]] And, it success, like: But now, It can't move horizontal to the edge of map, like: I want it to move like: But this case it can't avoid obstacle of edge of mobile robot. My parameters of navigation: move_base_params.yaml footprint: [[-1.25, -3.5], [-1.25, 3.5], [1.25, 3.5], [1.25, -3.5]] controller_frequency: 5.0 controller_patience: 15.0 planner_frequency: 0.0 planner_patience: 5.0 max_planning_retries: 0 oscillation_timeout: 30.0 recovery_behavior_enabled: true recovery_behaviors: [{name: conservative_reset, type: clear_costmap_recovery/ClearCostmapRecovery}, {name: aggressive_reset, type: clear_costmap_recovery/ClearCostmapRecovery}] conservative_reset: reset_distance: 1.25 dwa_local_planner_params.yaml base_local_planner: dwa_local_planner/DWAPlannerROS DWAPlannerROS: max_vel_x: 0.5 min_vel_x: -0.5 max_vel_y: 0.5 min_vel_y: -0.5 max_vel_trans: 0.5 min_vel_trans: -0.5 max_vel_theta: 0.05 min_vel_theta: -0.05 acc_lim_x: 5.0 acc_lim_y: 5.0 acc_lim_theta: 3.2 xy_goal_tolerance: 0.03 yaw_goal_tolerance: 100 latch_xy_goal_tolerance: true sim_time: 1.0 vx_samples: 10 vy_samples: 10 vth_samples: 15 path_distance_bias: 24.0 goal_distance_bias: 72.0 occdist_scale: 
-1.0 forward_point_distance: 0.01 stop_time_buffer: 0.2 scaling_speed: 0.25 max_scaling_factor: 0.2 prune_plan: true oscillation_reset_dist: 0.001 oscillation_reset_angle: 0.002 publish_traj_pc : true publish_cost_grid_pc: true costmap_common_params.yaml robot_base_frame: base_footprint transform_tolerance: 0.4 update_frequency: 5.0 publish_frequency: 1.0 obstacle_range: 6.0 publish_voxel_map: true navigation_map: map_topic: /map obstacles: observation_sources: scan scan: sensor_frame: laser_link data_type: LaserScan topic: scan marking: true clearing: true global_costmap_params.yaml global_costmap: global_frame: map static_map: true raytrace_range: 7.0 resolution: 0.05 z_resolution: 0.2 z_voxels: 10 inflation: cost_scaling_factor: 3.0 inflation_radius: 0.6 plugins: - {name: navigation_map, type: "costmap_2d::StaticLayer" } - {name: obstacles, type: "costmap_2d::VoxelLayer" } - {name: inflation, type: "costmap_2d::InflationLayer" } local_costmap_params.yaml local_costmap: global_frame: odom static_map: false rolling_window: true raytrace_range: 7.0 resolution: 0.05 width: 10.0 height: 10.0 origin_x: 0.0 origin_y: 0.0 inflation: cost_scaling_factor: 3.0 inflation_radius: 3.5 plugins: - {name: obstacles, type: "costmap_2d::VoxelLayer" } - {name: inflation, type: "costmap_2d::InflationLayer" } My questions are: Can I eliminate global costmap obstacle in local costmap? In addition to changing the footprint, how can I change the obstacle inflation in different axis? Is there any other solution to figure out this problem in this case? If not, I think I should give up using navigation to avoid obstacle in this case. This is my first time ask question here, if there is any violation or Insufficient narrative, please let me know and I will improve it as soon as possible. Thanks! 
Originally posted by Arthur6057 on ROS Answers with karma: 13 on 2020-08-11 Post score: 1 Answer: Thank you for providing all the required information but I'm not entirely sure I get what is not working for you exactly. I'm still answering to provide you some insights. First of all you shouldn't change the footprint, the first value you set it to ([[-1.25, -3.5], [-1.25, 3.5], [1.25, 3.5], [1.25, -3.5]]) is correct because it matches the robot dimensions. I would recommend you to check #q12874 and also the costmap2d wiki to understand the relation between the footprint and the inflation radius. The costmap calculates a cost for each cell according to the distance from an obstacle, but it's only the case if this distance is less than the inflation_radius which is set to 3.5 for your local costmap. So when you increase the footprint all the cells closer than 3.5 meters to an obstacle will be considered unavailable because the inflation_radius matches your footprint size and it won't calculate the cost of the other cells. You want to set the inflation_radius to the maximum distance at which you want your robot to avoid obstacles (from the robot center). If you can rotate your robot that would mean that you are only limited by its width, so I would set it to 1.25 + distance_from_obstacle (1.25 is width/2). If you can't rotate it then it should be set to 3.5 + distance_from_obstacle (3.5 is length/2). For small and/or slow robots distance_from_obstacle can be around 0.1 meters but given the dimensions of your robot you should at the very least have it at 0.5 meters. So you should set the inflation_radius of your local and global costmaps to minimum 4 meters (because you can't rotate it if I understood properly; if not you can have it at minimum 2.75 meters). Even though it might not be the core issue it will probably improve the navigation. The second thing I've noticed is that you have set occdist_scale to -1. 
If you look at the dwa_local_planner wiki about the trajectory scoring parameters, you can tell the local planner to prefer trajectories close to the global planner trajectory (path_distance_bias), or to trust the local goals more than the global trajectory (goal_distance_bias), or to avoid obstacles over respecting the trajectory (occdist_scale). Then the cost of the trajectory is calculated with: cost = path_distance_bias * (distance to path from the endpoint of the trajectory in meters) + goal_distance_bias * (distance to local goal from the endpoint of the trajectory in meters) + occdist_scale * (maximum obstacle cost along the trajectory in obstacle cost (0-254)) In your case occdist_scale is negative so you probably end up with negative cost values (I haven't found what happens in this case but a trajectory with a negative cost would likely be ignored). You can set it to the default value (0.01) to see if there are improvements. To directly answer your questions: Check #q10620, it should help you understand the difference between the two costmaps. You don't change it in different axes, but a big inflation_radius shouldn't prevent your robot from finding trajectories. If you have a big inflation_radius but you have set occdist_scale to a small value (but not negative) it doesn't matter. I think you should be able to have it working now. Originally posted by Delb with karma: 3907 on 2020-08-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Arthur6057 on 2020-08-12: Thanks for your reply, I checked the costmap2d wiki and dwa_local_planner wiki again, and now I do know what my problem is, and it works successfully. I apologize for not looking for answers carefully. Thank you very much for all your help. Comment by Delb on 2020-08-12: It's normal to make mistakes, even though move_base is well documented there are a lot of parameters to be aware of. Can you tell what exactly solved your issue? 
Is it simply changing the inflation_radius, the occdist_scale parameter, or both? Comment by Arthur6057 on 2020-08-12: Sorry for replying late. I only changed occdist_scale to the default value (0.01) and kept the footprint as [[-1.25, -3.5], [-1.25, 3.5], [1.25, 3.5], [1.25, -3.5]]. I didn't change inflation_radius in the global costmap, because I only want the mobile robot to move straight; if I make it bigger, it will plan a curved path. In this case, if the mobile robot encounters obstacles, I don't want it to replan, I just want it to stop. So I think I can keep inflation_radius as it is in the local costmap. Sorry that my explanation misled you.
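The trajectory-cost formula quoted from the dwa_local_planner wiki can be sketched directly, to see why a negative occdist_scale misbehaves. The bias defaults below are the values from the question's yaml; the candidate-trajectory numbers are invented for illustration:

```python
def dwa_trajectory_cost(path_dist, goal_dist, max_obstacle_cost,
                        path_distance_bias=24.0, goal_distance_bias=72.0,
                        occdist_scale=0.01):
    """Trajectory cost as given on the dwa_local_planner wiki."""
    return (path_distance_bias * path_dist
            + goal_distance_bias * goal_dist
            + occdist_scale * max_obstacle_cost)

# Same hypothetical candidate trajectory (0.5 m off the path, 1.0 m from the
# local goal, grazing a cell of obstacle cost 200), scored both ways:
ok = dwa_trajectory_cost(0.5, 1.0, 200, occdist_scale=0.01)   # default
bad = dwa_trajectory_cost(0.5, 1.0, 200, occdist_scale=-1.0)  # question's value
assert ok > 0
assert bad < 0   # negative occdist_scale *rewards* brushing obstacles
```

With occdist_scale = -1, trajectories that run close to obstacles score lower (better), which is the opposite of obstacle avoidance.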
{ "domain": "robotics.stackexchange", "id": 35395, "tags": "ros, navigation, ros-melodic, move-base, avoid-obstacle" }
When does the nearest neighbor heuristic fail for the Traveling Salesperson?
Question: Can you provide an example of NN algorithm failure on the Euclidean traveling salesman problem? I was trying to construct a specific example of this for my friends and was failing. Answer: Consider a ladder a----b----c | | | d----e----f Say length of a-b is $2$ and length of a-d is $1$. The optimal route is a-b-c-f-e-d-a, $10$ units long. Starting at a, NN would produce a-d-e-b-c-f-a which is $7 + \sqrt{17} > 11$ units long. There is actually a four node example, a rhombus A /|\ B-+-C \|/ D Say length of B-C is $10$, length of A-D is $24$ and thus length of A-B is $13$. The optimal route is A-B-D-C-A, $52$ units long. NN would produce the path A-B-C-D-A, $60$ units long.
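The four-node rhombus can be checked numerically (a supplementary sketch; coordinates are chosen so the half-diagonals are 5 and 12, giving the edge lengths stated above):

```python
import math
from itertools import permutations

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tour_length(points, order):
    n = len(order)
    return sum(dist(points[order[i]], points[order[(i + 1) % n]])
               for i in range(n))

def nearest_neighbor(points, start=0):
    """Greedy NN tour: always move to the closest unvisited city."""
    unvisited = [i for i in range(len(points)) if i != start]
    order = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[order[-1]], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# Rhombus: A-B = A-C = B-D = C-D = 13, B-C = 10, A-D = 24.
pts = [(0, 12), (-5, 0), (5, 0), (0, -12)]  # A, B, C, D
nn = tour_length(pts, nearest_neighbor(pts))
best = min(tour_length(pts, [0] + list(p)) for p in permutations([1, 2, 3]))
assert abs(nn - 60) < 1e-9 and abs(best - 52) < 1e-9
```

Starting from A, NN greedily takes the short B-C edge and is then forced across the long A-D diagonal, producing the 60-unit tour versus the 52-unit optimum.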
{ "domain": "cs.stackexchange", "id": 1669, "tags": "algorithms, heuristics, traveling-salesman" }
Landau levels degeneracy in symmetric gauge
Question: I'm reading David Tong's lecture notes on the Quantum Hall Effect. When the symmetric gauge is taken, a basis of the lowest Landau level wave functions is $$\psi_{LLL,m}\sim\left(\frac{z}{l_B}\right)^m e^{-|z|^2/4l_B^2},$$ where $z=x-iy$, and we have $$J_z\psi_{LLL,m}=\hbar m \psi_{LLL,m}.$$ On page 25, it says that the profiles of the wavefunctions form concentric rings around the origin. The higher the angular momentum $m$, the further out the ring. The wavefunction with angular momentum $m$ is peaked on a ring of radius $r=\sqrt{2m}l_B$. This means that in a disc-shaped region of area $A=\pi R^2$, the number of states is roughly (the integer part of) $$N=R^2/2l_B^2=A/2\pi l_B^2$$ I can't understand these two statements. I think the profile of $e^{-|z|^2/4l_B^2}$ does form concentric rings around the origin, but does not when multiplied by $(\frac{z}{l_B})^m$. And why $r_{max}=\sqrt{2m}l_B$? For the second statement, my understanding is that it divides the area in real space by the area "a wave function occupies", but if this is the case, shouldn't there be an $m$ in the denominator? Answer: $r_\text{max}$ is the location where $|\psi|^2$ is maximized. Even after multiplying the wavefunction by $z^m$, $|\psi|^2$ is still symmetric under rotation around the origin (only a function of $|z|$), so in this sense the wavefunctions still represent concentric rings. Remember the integer $m$ actually labels different eigenstates. The area of the annulus between two neighboring states is $\pi r_{m+1}^2-\pi r_m^2=2\pi l_B^2$, so the number of states is roughly $\frac{A}{2\pi l_B^2}$. Alternatively, as $r_{\text{max}}$ grows with $m$, the maximal value of $m$ before the wavefunction completely goes out of the disc is determined by $\sqrt{2m}l_B\leq R$, which gives $m\leq \frac{R^2}{2l_B^2}$. So that's roughly how many eigenstates there are in the disc.
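To see explicitly where $r_{\max}=\sqrt{2m}\,l_B$ comes from (a supplementary derivation, not part of the original answer), maximize the probability density:

```latex
% From \psi_m \sim (z/l_B)^m e^{-|z|^2/4l_B^2}, the density is
%   |\psi_m|^2 \propto r^{2m} e^{-r^2/2l_B^2}.
% Maximizing its logarithm over r:
\frac{d}{dr}\!\left( 2m \ln r - \frac{r^2}{2 l_B^2} \right)
  = \frac{2m}{r} - \frac{r}{l_B^2} = 0
\qquad\Longrightarrow\qquad
r_{\max} = \sqrt{2m}\, l_B .
```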
{ "domain": "physics.stackexchange", "id": 89226, "tags": "quantum-mechanics, quantum-hall-effect" }
Is there a "Planck law" for any type of field/particle?
Question: When looking at the derivation of Planck's law, I wondered if we could do the same derivation for any field, not just the electromagnetic field. Is it indeed possible? Is there an equivalent of Planck's law for any type of field? Answer: The specific form of Planck's law depends on mainly two properties of the field. The first is the spin, which tells you what kind of statistics it follows. The statistics tells you how many particles lie in a certain state at a given temperature. In the case of EM fields, it has spin 1, so it follows the Bose-Einstein statistics. The second is the dispersion relation, which in turn determines the density of states, which then tells you effectively how many states there are that have a certain energy. Photons have a linear dispersion relation, which in 3D gives a density of states that is proportional to the square of energy. So yes, if we were given a field that still follows Bose-Einstein statistics, but has a different dispersion relation, we could get the Planck's law for that particle that looks different from the one for photons. Let me expand on that with specific examples. For ease of illustration, let's also use the frequency distribution. Planck's law tells us that the spectral radiance of radiation exiting a small hole on a cavity full of photon gas is $$B_\nu(T) = \frac{c}{4\pi} u_\nu(T)\sim \frac{(h\nu)^3}{e^{h\nu/k_B T} - 1} \>.$$ Here $u_\nu(T)$ is the spectral energy density function, which gives us the distribution of energy in each mode ($\nu$) at a given temperature. $B_\nu(T)$ and $u_\nu(T)$ are directly proportional like this because radiation is the same in all directions. The expression for $u_\nu$ is derived via statistical mechanical properties of the system. Suppose the total energy of the system is $U(T)$, we want to find out how $U(T)$ is distributed into modes with different energy, which means we want to know what is $dU/d\epsilon$. 
$U(T)$ can be written as the integral $$U(T) = \int_0^\infty n(\epsilon, T) g(\epsilon)\epsilon d\epsilon \>. $$ The integrand is the spectral energy distribution we want. Note that it is called spectral because the spectral distribution of the radiation in frequency can be rewritten as a function of energy given the dispersion relation. Here $n(\epsilon, T)$ is the mean occupation number of a given state with energy $\epsilon$ at a given temperature $T$. For a free Bose-Einstein field (particles in a box), it is $(e^{\beta\epsilon} - 1)^{-1}$, where $\beta = 1/k_B T$ is the reciprocal temperature. This tells us at a certain temperature, how many particles are in a certain state with energy $\epsilon$. $g(\epsilon)$ is the density of states, which tells us how many states are available with this energy. This can be derived from the dispersion relation, which relates the wave vector $k$ (or momentum $\hbar k$) to the energy. To find out how they are specifically related, you can refer to the Wikipedia page on density of states. Most often the density of states depends on $\epsilon$ by a power law, $g(\epsilon)\sim \epsilon^m$. In the case of photons, the dispersion relation is $ \epsilon = \hbar k c $, and as a result $m = 2$. Knowing $n(\epsilon, T)$ and $g(\epsilon)$, we then basically have Planck's law, $$u_\epsilon(T) \sim \epsilon \times \frac{1}{e^{\beta\epsilon} - 1} \times \epsilon^2 =\frac{\epsilon^3}{e^{\beta\epsilon} - 1} \>. $$ Note that this derivation is quite general, and $u_\epsilon(T) = n(\epsilon) g(\epsilon) \epsilon $ is always true. So if we had a different field with a different dispersion relation, say for example $\epsilon \sim k^2$ instead of $\epsilon \sim k$, we would have a density of states $g(\epsilon)\sim \epsilon^{1/2}$, which would give $u_\epsilon \sim \epsilon^{3/2}/(e^{\beta\epsilon} - 1) $. Of course, the question of how useful this is still remains. 
Even if we found out the spectral energy distribution of some field, it doesn't necessarily have the same significance as the EM field. The field might interact with matter very differently, and we might not really want to conceive of it as "radiation".
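The density-of-states step used above can be spelled out (a supplementary sketch for the 3D, isotropic case):

```latex
% Modes in a spherical shell of k-space: g(\epsilon)\,d\epsilon \propto k^2\,dk.
% Photons, \epsilon = \hbar c k:
k \propto \epsilon,\qquad dk \propto d\epsilon
\;\Longrightarrow\; g(\epsilon) \propto \epsilon^{2}.
% Quadratic dispersion, \epsilon \propto k^2:
k \propto \epsilon^{1/2},\qquad dk \propto \epsilon^{-1/2}\,d\epsilon
\;\Longrightarrow\; g(\epsilon) \propto \epsilon^{1/2},
% which reproduces u_\epsilon \sim \epsilon^{3/2}/(e^{\beta\epsilon}-1) above.
```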
{ "domain": "physics.stackexchange", "id": 37767, "tags": "quantum-mechanics, quantum-field-theory" }
How is the depth of the filters of convolutional layers determined?
Question: I am a bit confused about the depth of the convolutional filters in a CNN. At layer 1, there are usually about 40 3x3x3 filters. Each of these filters outputs a 2d array, so the total output of the first layer is 40 2d arrays. Does the next convolutional filter have a depth of 40? So, would the filter dimensions be 3x3x40? Answer: Does the next convolutional filter have a depth of 40? So, would the filter dimensions be 3x3x40? Yes. The depth of the next layer $l$ (which corresponds to the number of feature maps) will be 40. If you apply $8$ kernels with a $3\times 3$ window to $l$, then the number of feature maps (or the depth) of layer $l+1$ will be $8$. Each of these $8$ kernels has an actual shape of $3 \times 3 \times 40$. Bear in mind that the details of the implementations may change across different libraries. The following simple TensorFlow (version 2.1) and Keras program

import tensorflow as tf

def get_model(input_shape, num_classes=10):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Input(shape=input_shape))
    model.add(tf.keras.layers.Conv2D(40, kernel_size=3))
    model.add(tf.keras.layers.Conv2D(8, kernel_size=3))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(num_classes))
    model.summary()
    return model

if __name__ == '__main__':
    input_shape = (28, 28, 1)  # MNIST digits usually have this shape.
    get_model(input_shape)

outputs the following

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 26, 26, 40)        400
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 24, 24, 8)         2888
_________________________________________________________________
flatten (Flatten)            (None, 4608)              0
_________________________________________________________________
dense (Dense)                (None, 10)                46090
=================================================================
Total params: 49,378
Trainable params: 49,378
Non-trainable params: 0
_________________________________________________________________

where conv2d has the output shape (None, 26, 26, 40) because there are 40 filters in the first convolutional layer; consequently, each of the 8 kernels in conv2d_1 will have a $3\times 3 \times 40$ shape. The documentation of the first argument (i.e. filters) of the Conv2D says filters – Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). and the documentation of the kernel_size parameter states kernel_size – An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions. It doesn't actually say anything about the depth of the kernels, but this is implied from the depth of the layers. Note that the first layer has $(40*(3*3*1))+40 = 400$ parameters. Where do these numbers come from? Note also that the second Conv2D layer has $(8*(3*3*40))+8 = 2888$ parameters. Try to set the parameter use_bias of the first Conv2D layer to False and see the number of parameters again. Finally, note that this reasoning applies to 2d convolutions. In the case of 3d convolutions, the depth of the kernels could be different than the depth of the input. Check this answer for more details about 3d convolutions.
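The parameter counts quoted above follow a simple formula — filters × (kernel height × kernel width × input depth), plus one bias per filter — which can be checked without TensorFlow (a supplementary sketch):

```python
def conv2d_params(num_filters, kernel_h, kernel_w, in_depth, use_bias=True):
    """Weights per filter are kernel_h*kernel_w*in_depth; one bias per filter."""
    per_filter = kernel_h * kernel_w * in_depth
    return num_filters * per_filter + (num_filters if use_bias else 0)

# First Conv2D: 40 filters of shape 3x3x1 (grayscale input has depth 1)
assert conv2d_params(40, 3, 3, 1) == 400
# Second Conv2D: 8 filters of shape 3x3x40 (depth = previous layer's filters)
assert conv2d_params(8, 3, 3, 40) == 2888
# With use_bias=False only the kernel weights remain
assert conv2d_params(40, 3, 3, 1, use_bias=False) == 360
```

Note how the input depth appears inside the per-filter product: that is exactly the sense in which each kernel's depth is "implied from the depth of the layers".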
{ "domain": "ai.stackexchange", "id": 1683, "tags": "convolutional-neural-networks, filters, convolutional-layers, convolution-arithmetic, feature-maps" }
Two Different Sorts of Inertia: Inertial Mass and Moment of Inertia
Question: There are two different sorts of inertia: inertial mass and moment of inertia. I am currently reading about moment of inertia. Now, I know inertia is an important concept; with it, we can determine how difficult something is to move. For linear motion, the difficulty of altering an object's position is dependent upon the mass of the object being moved. For rotational motion, the difficulty in rotating an object around an axis is dependent upon the mass, and the radius of that mass to the axis of rotation. Why do we need two different measures of inertia? And for rotational motion, why is the inertia dependent upon mass, and its radial distance from the axis of rotation? Answer: The moment of inertia is merely a generalisation/application of the ‘usual’ inertia to rotations. Since translations and rotations are different kinds of motion, it appears sensible (to me) to have different kinds of inertia associated with them. Regarding your second question: Imagine a particle at position $(x,0,0)$ which you would like to rotate with angular velocity $\omega$ about the $(0,0,z)$ axis. To do so, you have to initially accelerate the particle along the $(0,y,0)$ axis to velocity $v_y = \omega x, v_{x,z} = 0$, as this is the velocity the particle would have at this point if it were already rotating. As you can clearly see, the momentum $p$ associated with this velocity is proportional to $r$ ($p_y = m v_y = m \omega x$), hence it takes more energy to accelerate a particle to angular velocity $\omega$ if it is further away from the centre of rotation. As this is exactly the quantity described by the ‘moment of inertia’, the moment of inertia depends on the radial distance of the mass.
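The same point can be made through the kinetic energy (a supplementary derivation): the quantity multiplying $\omega^2$ plays the role that mass plays for $v^2$, and the radius enters squared because $v = r\omega$.

```latex
% Kinetic energy of a particle at radius r rotating with angular velocity \omega:
E = \tfrac{1}{2} m v^2
  = \tfrac{1}{2} m (r\omega)^2
  = \tfrac{1}{2} \left(m r^2\right) \omega^2
  \equiv \tfrac{1}{2} I \omega^2 ,
\qquad
I = \sum_i m_i r_i^2 \; \text{ for a collection of masses.}
```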
{ "domain": "physics.stackexchange", "id": 5470, "tags": "newtonian-mechanics, kinematics, rotational-kinematics, inertia, moment-of-inertia" }
Why isn't $SO(n)/SO(n\!-\!1)$ a symmetric space?
Question: It's my understanding that one way to define a symmetric space $G/H$ is by the commutation relations $$ [T^a, T^b] = f^{abc} T^c, \qquad [T^a, X^{\hat{b}}] = f^{a\hat{b}\hat{c}}X^{\hat{c}}, \qquad [X^{\hat{a}}, X^{\hat{b}}] = f^{\hat{a}\hat{b}c} T^c$$ where the generators $\{T,X\}$ of $G$, and the generators $\{T\}$ of $H$ live in the Lie algebra $$ \mathfrak{h} = \{T\}, \qquad \mathfrak{g} = \{T\} \oplus \{X\}$$ If this is enough to define a symmetric space (such that it then has all the other properties: automorphism $\sigma$, where $\sigma^2 = I$ and so on), then $SO(n)/SO(n-1)$ should be a symmetric space. Its generators satisfy this relation, since the broken generators can be given by $$\left(\begin{matrix} 0 & \vec{v}\\ -\vec{v}^T & 0 \end{matrix}\right),$$ which clearly commute to give the $T$ generators. However, it is always said that the symmetric space is $SO(n)/(SO(n-1)\times SO(1))$. Which step am I missing here? UPDATE: Perhaps it is a symmetric space, as suggested by this master's thesis (p. 14). However other works (p. 3) emphasise that the subgroup should be $SO(n-1) \times SO(1)$. The distinction is vital, because being able to treat $SO(n)/SO(n-1)$ as a symmetric space allows great simplification of symmetry breaking phenomena. Answer: Both spaces: $SO(n)/SO(n-1)$ and also $SO(n)/S(O(n-1) \times O(1))$ (Please notice the difference from the form given in the question) are symmetric spaces. These spaces are locally isomorphic as homogeneous spaces, because the additional group: $$O(1) \cong \mathbb{Z}_2$$ is discrete. (It is the group of orthogonal one-dimensional matrices, hence the discrete group $\{ \pm 1\}$.) (The notation $S(O(n-1) \times O(1))$ means that we pick up only matrices with unit determinant from the direct product.) 
Due to the local isomorphism, the Lie algebra decomposition for both spaces is identical and thus both are symmetric spaces: the decomposition commutation relation in the form given in the question is sufficient for a homogeneous space to be symmetric. These are very well-known symmetric spaces: the first is a sphere $$S^{n-1} = SO(n)/SO(n-1)$$ and the second is a real projective space: $$\mathbb{R}P^{n-1} = SO(n)/S(O(n-1) \times O(1))$$ (which is a sphere with each two antipodal points identified). A basic reference for symmetric spaces is Helgason's book. More elementary expositions can be found in the physics literature, for example Appendix A in McMullan and Tsutsui, and the article by Arvanitogeorgos, where the two cases in the question are given in Examples 7. A more advanced reference in which quantum dynamics on homogeneous and symmetric spaces is described is the report by Camporesi.
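As a supplementary check of the relation $[X,X]\subset\mathfrak{h}$, the bracket of the block generators given in the question can be computed directly:

```latex
% With X_v = \begin{pmatrix} 0 & \vec{v} \\ -\vec{v}^T & 0 \end{pmatrix},
% \vec{v} \in \mathbb{R}^{n-1}, block multiplication gives
X_v X_w = \begin{pmatrix} -\vec{v}\,\vec{w}^T & 0 \\ 0 & -\vec{v}^T\vec{w} \end{pmatrix}
\;\Longrightarrow\;
[X_v, X_w] = \begin{pmatrix} \vec{w}\,\vec{v}^T - \vec{v}\,\vec{w}^T & 0 \\ 0 & 0 \end{pmatrix},
% since \vec{v}^T\vec{w} = \vec{w}^T\vec{v} cancels in the lower block.
```

The upper-left block is an antisymmetric $(n-1)\times(n-1)$ matrix, i.e. an element of $\mathfrak{so}(n-1) = \mathfrak{h}$, confirming that the broken generators close into the unbroken subalgebra.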
{ "domain": "physics.stackexchange", "id": 55953, "tags": "differential-geometry, symmetry, group-theory, lie-algebra" }
[Solved]catkin_make error: c++: error: $(catkin_LIBRARIES) no such file or directory
Question: Good afternoon partners! I'm running ROS Groovy on Raspbian. I wrote a publisher in C++ "robot1.cpp". After that and following the instructions from: http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29 I added these lines at the bottom of my CMakeLists.txt: include_directories(include ${catkin_INCLUDE_DIRS}) add_executable(robot1 src/robot1.cpp) target_link_libraries(robot1 ${catkin_LIBRARIES}) add_dependencies(robot1 alvaro_generate_messages_cpp) When I do "catkin_make" I get the following: [ 21%] Building CXX object alvaro/CMakeFiles/robot1.dir/src/robot1.cpp.o /tmp/ccREQxsT.s: Assembler messages: /tmp/ccREQxsT.s:505: Warning: swp{b} use is deprecated for this architecture Linking CXX executable /home/pi/catkin_ws/devel/lib/alvaro/robot1 c++: error: $(catkin_LIBRARIES):There is no such file or directory make[2]: *** [/home/pi/catkin_ws/devel/lib/alvaro/robot1] Error 1 make[1]: *** [alvaro/CMakeFiles/robot1.dir/all] Error 2 make: *** [all] Error 2 Invoking "make" failed I've just followed the tutorial steps, so I don't know what's happening. Can you help me please? Originally posted by 4LV4R0 on ROS Answers with karma: 70 on 2014-03-27 Post score: 0 Original comments Comment by 4LV4R0 on 2014-03-27: Oh my God! I'm quite blind .... apologies for any inconvenience. I need my glasses to see ... I'm absolutely a disaster. It was exactly what you said to me. I didn't realise it. ${catkin_LIBRARIES} rather than $(catkin_LIBRARIES). Thanks a lot! Answer: Do you have find_package(catkin REQUIRED COMPONENTS roscpp ... in your CMakeLists.txt? Are you sure your CMakeLists.txt contains ${catkin_LIBRARIES} rather than $(catkin_LIBRARIES) (as suggested by your error)? Originally posted by Wolf with karma: 7555 on 2014-03-27 This answer was ACCEPTED on the original site Post score: 3
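For reference, a working fragment might look like the sketch below — the find_package component list is an assumption (it depends on what the package actually uses), and the target/package names follow the question:

```cmake
find_package(catkin REQUIRED COMPONENTS roscpp std_msgs message_generation)

include_directories(include ${catkin_INCLUDE_DIRS})

add_executable(robot1 src/robot1.cpp)
# Curly braces: ${...} is a CMake variable; $(...) is passed through
# to the linker as a literal file name, producing the error above.
target_link_libraries(robot1 ${catkin_LIBRARIES})
add_dependencies(robot1 alvaro_generate_messages_cpp)
```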
{ "domain": "robotics.stackexchange", "id": 17444, "tags": "ros" }
Turning a car on a perfect surface
Question: Say you have a car with perfect wheels and a perfect surface with so much friction that it never skids. When the car is traveling forward, the wheels INSTANTANEOUSLY turn 90 degrees. Does the car come to a complete stop or does it change direction and continue traveling left/right at the same speed? Answer: Let's rework this problem slightly into something more realistic, so that people complaining that this problem can't exist are somewhat satisfied. Let's say the car went airborne while traveling forward, and then the wheels turned to 90 degrees while it was airborne. It lands in one of 3 ways: Front wheels first Back wheels first All wheels simultaneously In all cases one should think of a rigid body and apply forces at the respective areas (like in engineering statics). In all cases the lines of action of the reaction forces point towards the rear of the car and create a moment tipping the rear upwards. Now whether or not there is enough force to tip the car over depends on the energetics involved.
{ "domain": "physics.stackexchange", "id": 15361, "tags": "newtonian-mechanics" }
Would my implementation of this enum pass?
Question: I have a feature with feature flagging enabled; based on that condition I want to load different pages in my screen. To achieve this I have the following enum:

enum class ImageHolderEnum(
    val titleId: Int,
    val qFragment: BaseFragment,
    val fragment: BaseFragment
) : IPageHolder {
    PAGE1(R.string.tab_shop_baby, BabyTwoFragment(), BabyThreeFragment()) {
        override fun getTitle(): Int = titleId
        override fun getFragmentToAdd(): BaseFragment =
            if (isFeatureAllowed()) qFragment else fragment
    },
    PAGE2(R.string.tab_shop_mom, MomThreeFragment(), MomTwoFragment()) {
        override fun getTitle(): Int = titleId
        override fun getFragmentToAdd(): BaseFragment =
            if (isFeatureAllowed()) qFragment else fragment
    },
    PAGE3(R.string.tab_shop_dad, DadTwoFragment(), DadThreeFragment()) {
        override fun getTitle(): Int = titleId
        override fun getFragmentToAdd(): BaseFragment =
            if (isFeatureAllowed()) qFragment else fragment
    };

    fun isFeatureAllowed(): Boolean {
        val qSlideConfig: QSlideConfig by remoteFeatureFlag() // Kind of dependency injection here
        return qSlideConfig.isQSlideEnabled()
    }
}

The interface is as follows:

interface IPageHolder {
    fun getTitle(): Int
    fun getFragmentToAdd(): BaseFragment
}

I am concerned that I am using dependency injection inside the enum and breaking some principles. Answer: Why are you giving separate implementations for each enum value? All getTitle() and getFragmentToAdd() implementations are the same.
enum class ImageHolderEnum(
    val titleId: Int,
    val qFragment: BaseFragment,
    val fragment: BaseFragment
) : IPageHolder {
    PAGE1(R.string.tab_shop_baby, BabyTwoFragment(), BabyThreeFragment()),
    PAGE2(R.string.tab_shop_mom, MomThreeFragment(), MomTwoFragment()),
    PAGE3(R.string.tab_shop_dad, DadTwoFragment(), DadThreeFragment()),
    ;

    override fun getTitle(): Int = titleId
    override fun getFragmentToAdd(): BaseFragment =
        if (isFeatureAllowed()) qFragment else fragment

    fun isFeatureAllowed(): Boolean {
        val qSlideConfig: QSlideConfig by remoteFeatureFlag() // Kind of dependency injection here
        return qSlideConfig.isQSlideEnabled()
    }
}

Interfaces can also have properties, so there's no need to have getTitle() as a function, when it can just be a value.

interface IPageHolder {
    val titleId: Int
    fun getFragmentToAdd(): BaseFragment
}

And you can then use it like this in the enum class:

enum class ImageHolderEnum(
    override val titleId: Int,
    ...
{ "domain": "codereview.stackexchange", "id": 38388, "tags": "enum, kotlin" }
Finding the minimum number of required deletions to have a non-repeating string
Question: I wrote code for the following problem: Given a string, print out the number of deletions required so that adjacent characters are distinct. Please suggest different methods by which I can increase the speed of execution.

tst = int(input())
for i in range(0, tst):
    str = input()
    length = len(str)
    str = list(str)
    j = 0
    count = 0
    while (j < length):
        if (j + 1 < length):
            if (str[j] == str[j + 1]):
                del str[j + 1]
                count += 1
                length -= 1
            else:
                j += 1
        else:
            j += 1
    print(count)

Answer: Here's your culprit:

del str[j+1]

When you remove one character from a string, all subsequent characters need to be shifted into the hole that you create. That changes your algorithm from O(Length) to a worst-case scenario of O(Length²), if every character is the same. Additionally, your inner loop looks a lot like C code. Here is one way to reformulate it.

def count_consecutive_deletions(s):
    deletions = 0
    for i in range(1, len(s)):
        if s[i] == s[i - 1]:
            deletions += 1
    return deletions

def testcases():
    for _ in range(int(input())):
        yield input()

for case in testcases():
    print(count_consecutive_deletions(case))
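As an aside (not part of the original answer): the same count can be expressed with itertools.groupby, since each maximal run of equal characters keeps exactly one character, so the number of deletions is the string length minus the number of runs. A minimal sketch:

```python
from itertools import groupby

def count_consecutive_deletions(s):
    # len(s) minus the number of maximal runs of identical characters:
    # a run of length r forces r - 1 deletions.
    return len(s) - sum(1 for _ in groupby(s))
```

This stays O(n) like the answer's loop, just more declarative.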
{ "domain": "codereview.stackexchange", "id": 11565, "tags": "python, performance, algorithm, python-3.x, edit-distance" }
How did our ancestors discover the Solar System?
Question: I wonder, how did our ancestors discover the Solar System? They did not have any telescopes to see distant objects, right? Even a planet looks like a star from a distance. They discovered the rotations of different planets without having much technology. Answer: 1. Ancient cultures observed the sky Night skies are naturally dark and there was no light pollution in ancient times. So if weather permits, you can easily see a lot of stars. No need to mention the Sun and the Moon. Ancient people had good reasons to study the night skies. In many cultures and civilizations, stars (and also the Sun and the Moon) were perceived to have religious, legendary, premonitory or magical significance (astrology), so a lot of people were interested in them. It did not take long for someone (in reality, a lot of different people independently in many parts of the world) to see some useful patterns in the stars that would be useful for navigation, localization, counting hours, counting days and relating days to seasons, etc. And of course, those patterns in the stars were also related to the Sun and the Moon. So, surely all ancient cultures had people who dedicated many nights of their lives to studying the stars in detail, right from the Stone Age. They would also perceive meteors (falling stars) and eclipses. And sometimes a very rare and spectacular comet. Then there are the planets Mercury, Venus, Mars, Jupiter and Saturn. They are quite easy to notice as distinct from the stars, because all the stars seem to be fixed on the celestial sphere, but the planets don't. They are very easy to notice wandering around in the sky with the passage of the days, especially Venus, which is the brightest "star" in the sky and is also a formidable wanderer. Given all of that, the ancient people surely became very aware of those five planets.
About Mercury, initially the Greeks thought that Mercury was two bodies: one that showed up only in the morning a few hours before sunrise, and another only a few hours after sunset. However, they soon figured out that it was in fact only one body, because either one or the other (or neither) could be seen on a given day, and the computed position of the unseen body always matched the position of the seen body. 2. The Earth seems to be round Now, out of the Stone Age and into ancient times, navigators and merchants who travelled great distances perceived that the Sun's rising and setting points could vary not only due to seasonal variation, but also according to location. Also, the distance from the polar star to the horizon line varies according to location. This fact points to the existence of the concept nowadays known as latitude, and this was perceived by ancient astronomers in places like Greece, Egypt, Mesopotamia and China. Astronomers and people who depended on astronomy (like navigators) would wonder why the distance from the polar star to the horizon varied, and one possibility was that the Earth is round. Also, registering different Sun angles in different locations of the world on the same day and at the same hour also gives a hint that the Earth is round. The shadow on the Moon during a lunar eclipse also gives a hint that the Earth is round. However, this by itself is not a proof that the Earth is round, so most people would bet on some other, simpler explanation, or simply not care about the phenomenon. Most cultures in ancient times presumed that the world was flat. However, the idea of the world being round has existed since ancient Greece. Contrary to the popular modern misconception, in the Middle Ages almost no educated person in the Western world thought that the world was flat.
About the Earth's size: by observing different Sun positions and shadow angles in different parts of the world, Eratosthenes in ancient Greece calculated the size of the Earth correctly for the first time, and also estimated the distance between the Earth and the Sun, as far back as the third century B.C. However, due to the confusion among all the different and inconsistent units of measure in use back then, and the difficulty of precisely estimating long land and sea distances, confusion and imprecision persisted until modern times. Ancient cultures also figured out that the shiny part of the Moon is illuminated by the Sun. Since the full Moon is easily seen even at midnight, this implies that the Earth is not infinite. The fact that the Moon enters a rounded shadow when exactly on the opposite side of the sky from the Sun also implies that it is the Earth's shadow on the Moon. This also implies that the Earth is significantly larger than the Moon. 3. Geocentrism So, people observed the Sun, the Moon, Mercury, Venus, Mars, Jupiter, Saturn and the fixed sphere of stars all revolving around the sky. They naturally thought that the Earth was the center of the universe and that all of those bodies revolved around the Earth. This culminated in the work of the philosopher Claudius Ptolemaeus on geocentrism. Although we now know that the Ptolemaic geocentric model is fundamentally wrong, it could be used to compute the positions of the planets, the Sun, the Moon and the celestial sphere of stars with a somewhat acceptable precision for the time. It managed to account for the observed velocity variations and retrograde motions of the planets, and also to couple Mercury and Venus to the Sun, so they would never go very far from it. Further, based on the velocity of the motion of those bodies in the sky, the universe should be something like: Earth at the center. Moon orbiting the Earth. Mercury orbiting the Earth farther than the Moon. Venus orbiting the Earth farther than Mercury.
Sun orbiting the Earth farther than Venus. Mars orbiting the Earth farther than the Sun. Jupiter orbiting the Earth farther than Mars. Saturn orbiting the Earth farther than Jupiter. The celestial sphere of stars rotating around the Earth, being the outermost sphere. In fact, the Ptolemaic model is a very complicated model, way more complicated than the Copernican, Keplerian and Newtonian models. In particular, it could be compared to software that is based on severely flawed concepts but still works thanks to a lot of complex, tangled and unexplainable hacks and kludges that are there just for the sake of making the thing work. 4. The discovery of the Americas Marco Polo, in the last years of the 1200s, was the first European to travel to China and back and leave a detailed chronicle of his experience. So he could bring to the Europeans a lot of knowledge about what existed in central Asia, the east of Asia, the Indies, China, Mongolia and even Japan. Before Marco Polo, very little was known to the Europeans about what existed there. This greatly inspired European cartographers, philosophers, politicians and navigators in the years to come. Portugal and Spain fought a centuries-long war against the invading Moors on the Iberian Peninsula. The Moors were finally expelled in 1492. The two states were looking for something profitable after so many years of war. Since Portugal ended its part of the war first, it had a head start and went to explore the seas first. Both Portugal and Spain were trying to find a navigation route to reach the Indies and China in order to trade highly profitable spices and silk. Those could not be traded by land efficiently anymore, because the lands of West Asia and North Africa were dominated by Muslim cultures unfriendly to Christian Europeans, a situation that was only made worse after the fall of Constantinople in 1453.
Portugal was colonizing the Atlantic coast of Africa, and eventually managed to reach the Cape of Good Hope in 1488 (with Bartolomeu Dias). A Genoese navigator called Cristoforo Colombo believed that if he sailed west from Europe, he could eventually reach the Indies from the east side. Inspired by Marco Polo and underestimating the size of the Earth, he estimated the distance between the Canary Islands and Japan to be 3700 km (in fact it is 12500 km). Most navigators would not venture on such a voyage because they (rightly) thought that the Earth was larger than that. Colombo tried to convince the king of Portugal to finance his journey in 1485, but after submitting the proposal to experts, the king rejected it because the estimated journey distance was too low. Spain, however, after finally expelling the Moors in 1492, was convinced by him. Colombo's idea was far-fetched, but after centuries of wars with the Muslims, if it worked, Spain could profit quickly. So the Spanish king approved the idea. And just a few months after expelling the Moors, Spain sent Colombo to sail west across the Atlantic, and he reached the island of Hispaniola in Central America. After he came back, the news about the discovery of lands on the other side of the Atlantic spread quickly. Portugal and Spain then divided the world by the Treaty of Tordesillas in 1494. In 1497, Amerigo Vespucci reached the American mainland. Portugal would not be left behind: they managed to navigate around Africa to reach the Indies in 1498 (with Vasco da Gama). And they sent Pedro Álvares Cabral, who reached Brazil in 1500 before crossing the Atlantic back in order to go on to the Indies. After that, Portugal and Spain quickly started to explore the Americas and eventually colonize them. France, England and the Netherlands also came to the Americas some time later. 5. The Earth IS round Afterwards, the Spanish discovered and settled in the Americas (and Colombo's plan in fact didn't work).
The question of whether it was possible to sail around the globe to reach the Indies from the east side remained open, and the Spanish were still interested in it. They eventually discovered the Pacific Ocean after crossing the Isthmus of Panama by land in 1513. Eager to find a maritime route around the globe, the Spanish crown funded an expedition led by the Portuguese Fernão de Magalhães (or Magellan, as his name was translated to English) to try to circle the globe. Magellan was an experienced navigator, and had reached what is present-day Malaysia traveling through the Indian Ocean before. They departed from Spain on September 20th, 1519. It was a long and tough journey that cost the lives of most of the crew. Magellan himself did not survive, having died in a battle in the Philippines in 1521. At least he lived long enough to be aware that they had in fact reached East Asia by traveling around the globe to the west, which also proves that the Earth is round. The journey was eventually completed under the leadership of Juan Sebastián Elcano, one of Magellan's crewmen. They reached Spain back through the Indian and Atlantic Oceans on September 6th, 1522, after traveling for almost three years a distance of 81449 km. 6. Heliocentrism There were some heliocentric or hybrid geo-heliocentric theories in ancient times. Notably by the Greek philosopher Philolaus in the 5th century BC. By Martianus Capella around the years 410 to 420. And by Aristarchus of Samos in the 3rd century BC. Those models tried to explain the motion of the stars as rotation of the Earth, and the positions of the planets, especially Mercury and Venus, as translation around the Sun. However, those early models were too imprecise and flawed to work appropriately, and the Ptolemaic model was still the model with the best prediction of the positions of the heavenly bodies.
The idea that the Earth rotates was much less revolutionary than heliocentrism, and was already more or less accepted, with reluctance, in the Middle Ages. This is because if the stars rotated around the Earth, they would need to do so at an astonishing velocity, dragging the Sun, the Moon and the planets with them, so it would be easier if the Earth itself rotated. People were uncomfortable with this idea, but they still accepted it, and it became easier to accept after the Earth's sphericity was an established concept. In the first years of the 1500s, while the Portuguese and Spanish were sailing around the globe, a Polish and very skilled mathematician and astronomer called Nikolaus Kopernikus spent some years thinking about the mechanics of the heavenly bodies. After some years making calculations and observations, he created a model of circular orbits of the planets around the Sun and perceived that his model was much simpler than the Ptolemaic geocentric model and at least as precise. His model also features a rotating Earth and fixed stars. Further, his model implied that the Sun was much larger than the Earth, something that was already strongly suspected at the time due to calculations and measurements, and it also implied that Jupiter and Saturn were several times larger than the Earth, so the Earth would definitively be a planet just like the other five then-known planets. This could be seen as the birth of the model known today as the Solar System. Fearing persecution and harsh criticism, he avoided publishing many of his works, sending manuscripts only to his closest acquaintances; however, his works eventually leaked out and he was convinced to allow their full publication anyway. Legend says that he was presented with his finally fully published work on the very day that he died, in 1543, so he could die in peace. There was a heated debate between supporters and opponents of Copernicus's heliocentric theory in the mid-1500s.
One argument of the opposition was that stellar parallaxes could not be observed, which implied that either the heliocentric model was wrong, or the stars were very, very far away and many of them would be even larger than the Sun, which seemed to be a crazy idea at the time. Tycho Brahe, who did not accept heliocentrism, in the late 1500s tried to save geocentrism with a hybrid geo-heliocentric model that featured the five heavenly planets orbiting the Sun while the Sun and the Moon orbited the Earth. However, he also published a theory which better predicted the position of the Moon. Also, by this time, the observation of some supernovas showed that the celestial sphere of the stars was not exactly immutable. In 1600, the scientist William Gilbert provided a strong argument for the rotation of the Earth: by studying magnets and compasses, he could demonstrate that the Earth is magnetic, which could be explained by the presence of enormous quantities of iron in its core. 7. With telescopes All of what I wrote above happened without telescopes, only by using naked-eye observations and measurements around the globe. Now, add even some small telescopes, and things change quickly. The earliest telescopes were invented in 1608. In 1609, the astronomer Galileo Galilei heard about that and constructed his own telescope. In January of 1610, Galileo Galilei, using a small telescope, observed four small bodies orbiting Jupiter at different distances, figuring out that they were Jupiter's "moons"; he could also predict and calculate their positions along their orbits. Some months later, he also observed that Venus had phases as seen from the Earth. He also observed Saturn's rings, but his telescope was not powerful enough to resolve them as rings, and he thought that they were two moons. These observations were incompatible with the geocentric model.
A contemporary of Galilei, Johannes Kepler, working on Copernicus' heliocentric model and making a lot of calculations, created, in order to explain the differing orbital velocities, a heliocentric model where the planets orbit the Sun in elliptic orbits with one of the ellipse's foci at the Sun. His works were published in 1609 and 1619. He also suggested that tides were caused by the motion of the Moon, though Galilei was skeptical of that. His laws predicted a transit of Mercury in 1631 and of Venus in 1639, and such transits were in fact observed. However, a predicted transit of Venus in 1631 could not be seen, due to imprecision in the calculations and the fact that it was not visible in much of Europe. In 1650 the first double star was observed. Further into the 1600s, Saturn's rings were resolved by the use of better telescopes by Robert Hooke, who also observed a double star in 1664 and developed microscopes to observe cellular structures. From then on, many stars were discovered to be double. In 1655, Titan was discovered orbiting Saturn, putting more confidence in the heliocentric model. Four more Saturnian moons were discovered between 1671 and 1684. 8. Gravitation Heliocentrism was reasonably well accepted in the mid-1600s, but people were not comfortable with it. Why do the planets orbit the Sun after all? Why does the Moon orbit the Earth? Why do Jupiter and Saturn have moons? Although Keplerian mechanics could predict their movement, it was still unclear what made them move that way. In 1687, Isaac Newton, who was one of the most brilliant physicists and mathematicians that ever lived (although he was also an implacable persecutor of his opponents), provided the gravitational theory (based on prior work by Robert Hooke).
Ideas for the gravitation theory and the inverse-square law had already been developed in the 1670s, but Newton published a very simple and clear theory of gravitation, very well founded in physics and mathematics, and it explained the motions of the celestial bodies with great precision, including comets. It also explained why the planets, the Moon and the Sun are spherical, explained tides, and also served to explain why things fall to the ground. This made heliocentrism definitively and widely accepted. Also, Newton's gravitational law predicted that the Earth's rotation would make it not exactly spherical, but a bit ellipsoidal, by a factor of 1:230. This agreed with measurements done using pendulums in 1673. 9. What are the stars and the Solar System after all? In the early 1700s, Edmund Halley, already knowing about Newtonian laws (he was a contemporary of Newton), perceived that comets which passed near the Earth would eventually return, and he found that there was a particular series of sightings every 76 years, so he could note that those comets were in reality all the same comet, which is named after him. The only remaining problem with the heliocentric model was the lack of observation of stellar parallax. And nobody knew for sure what the stars were. However, if they were in fact very distant bodies, most of them would be much larger than the Sun. In the first half of the 1700s, trying to observe parallax, James Bradley perceived phenomena like the aberration of light and the Earth's nutation, and those phenomena also provide a way to calculate the speed of light. But the observation of parallax remained a challenge during the 1700s. In 1781, Uranus was discovered orbiting the Sun beyond Saturn. Although barely visible to the naked eye in the darkest skies, it is so dim that it escaped the attention of astronomers until then, and so it was discovered with a telescope. The first asteroids were also discovered in the early 1800s.
Investigation of perturbations of Uranus' orbit relative to its predicted Newtonian and Keplerian movement eventually led to the discovery of Neptune in 1846. In 1838, the astronomer Friedrich Wilhelm Bessel, who had measured the positions of more than 50000 stars with the greatest precision possible, could finally measure the parallax of the star 61 Cygni successfully, which proved that stars are in fact very distant bodies and that many of them are larger than the Sun. This also demonstrates that the Sun is a star. Vega and Alpha Centauri also had their parallaxes measured successfully in 1838. Further, those measurements made it possible to estimate the distances between those stars and the Solar System to be on the order of many trillions of kilometers, or several light-years.
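The distance estimate in that last step follows from simple trigonometry on the Earth's orbit. A minimal sketch (the constants and the 0.29-arcsecond parallax for 61 Cygni are modern reference values supplied here for illustration, not figures from the original answer):

```python
import math

AU_KM = 1.496e8    # Earth-Sun distance (astronomical unit), in km
LY_KM = 9.461e12   # one light-year, in km

def distance_from_parallax(parallax_arcsec):
    """Distance to a star from its annual parallax angle.

    The parallax is the angle subtended by 1 AU at the star, so
    distance = AU / tan(parallax).
    """
    parallax_rad = math.radians(parallax_arcsec / 3600.0)
    return AU_KM / math.tan(parallax_rad)

# 61 Cygni: Bessel measured about 0.31 arcsec; the modern value is ~0.29,
# giving a distance around 11 light-years -- tens of trillions of km.
d_km = distance_from_parallax(0.29)
```

Even this single measurement immediately implies "many trillions of kilometers", exactly as the answer states.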
{ "domain": "astronomy.stackexchange", "id": 988, "tags": "solar-system, history" }
Resolving multiple "paths" in nested attributes
Question: I need to resolve "multiple paths" specified like so: p1 = 'item.element.[compact|fullsize].documents.[note|invoice]' A list like this is expected: ['item.element.compact.documents.note', 'item.element.fullsize.documents.note', 'item.element.compact.documents.invoice', 'item.element.fullsize.documents.invoice'] Code:

def resolve_paths(path):
    parts = path.split('.')
    depth = len(parts)
    new_paths = []
    for level, part in enumerate(parts):
        mult_branches = re.findall(r'\[(\w+)(?:\|(\w+))*\]', part)
        if mult_branches:
            mult_branches = flatten_iterable(mult_branches)
            for branch in mult_branches:
                interm_path = '.'.join(parts[:level] + [branch] + parts[level+1:])
                new_paths.extend(resolve_paths(interm_path))
            return new_paths
        elif level == depth - 1:
            new_paths.append(path)
            return new_paths

Several tests I wrote for this function pass, but I'm not entirely happy with this solution, it's somewhat convoluted. Better solutions? Simplifications? Answer: These combinatorial problems typically have a compact and elegant solution using itertools. In this case, it is itertools.product you want to use:

from itertools import product

def resolve_paths(path):
    subpaths = path.split('.')
    for idx, subpath in enumerate(subpaths):
        if subpath[0] == '[' and subpath[-1] == ']':
            subpaths[idx] = subpath[1:-1].split('|')
        else:
            subpaths[idx] = [subpath]
    for path in product(*subpaths):
        yield '.'.join(path)

I have made it an iterator, which I like better for this type of problem.

>>> path = 'item.element.[compact|fullsize].documents.[note|invoice]'
>>> list(resolve_paths(path))
['item.element.compact.documents.note', 'item.element.compact.documents.invoice', 'item.element.fullsize.documents.note', 'item.element.fullsize.documents.invoice']
{ "domain": "codereview.stackexchange", "id": 15020, "tags": "python, recursion" }
How does air drying actually work (e.g. on clothes)?
Question: I wonder how laundry, which you hang inside or outside on a laundry rack, clothesline or something similar, actually becomes dry. Water turns gaseous at around 100 °C (it depends on the altitude, of course), but undeniably the air temperature, either inside or outside, is not even remotely close to 100 °C, especially not where I live in central Europe. So, how do the water molecules actually get out of the laundry? Answer: No, water doesn't only turn into a gas at 100 °C. Every liquid has a vapor pressure dependent on its temperature. 100 °C is only the special case where the vapor pressure equals atmospheric pressure. Once that point is exceeded, bubbles form in the liquid, which we call "boiling". Evaporation happens without boiling because the vapor pressure is non-zero. This process is slower because there is less pressure "forcing" the water vapor into the air. Since the process is bi-directional, it also matters how much water is already in the air. Think of each molecule on the surface of the water as having a probability of detaching from the liquid and diffusing into the air. The higher the vapor pressure relative to the ambient pressure, the higher this probability. However, water molecules in the air also have a probability of condensing. When there are few water molecules in the air, more will evaporate into the air than the other way around, and the clothes will dry. If, however, the air is humid enough and the clothes cool enough, water molecules in the air actually have a higher probability of condensing onto the clothes than they do of evaporating from the clothes. In that case the clothes will actually get wetter. This phenomenon is commonly called "dew". In the typical situation of clothes on a clothesline on a sunny day, the equilibrium reached, where the same number of water molecules evaporate from the clothes as condense on them, is what you call "dry".
Even "dry" clothes in typical conditions still contain significant moisture, but not enough for us to feel.
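The temperature dependence of vapor pressure described above can be put in rough numbers with the empirical Tetens formula (my choice of approximation for illustration; it is not part of the original answer):

```python
import math

def saturation_vapor_pressure_kpa(t_celsius):
    """Tetens approximation for water's saturation vapor pressure, in kPa.

    Reasonably accurate for everyday temperatures; it reaches roughly
    atmospheric pressure (~101 kPa) near 100 degrees Celsius, which is
    why water boils there.
    """
    return 0.6108 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))
```

At 20 °C this gives about 2.3 kPa: even far below boiling the vapor pressure is non-zero, which is exactly what lets the laundry dry.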
{ "domain": "physics.stackexchange", "id": 41467, "tags": "thermodynamics, water, air, evaporation, humidity" }
[SOLVED] Remove all ROS_PACKAGE_PATH
Question: Hi guys, I am trying to install a package (https://github.com/erik-nelson/blam) which says in its instructions: This repository contains two ROS workspaces (one internal, one external). The build process is proctored by the update script. To build, first make sure that you do not have any other ROS workspaces in your ROS_PACKAGE_PATH, then clone the repository and from the top directory execute ./update My question is: to do that, should I delete the following lines from my .bashrc? source /opt/ros/indigo/setup.bash source ~/catkin_ws/devel/setup.bash However, when I do echo $ROS_PACKAGE_PATH I obtain /home/dan/catkin_ws/src:/opt/ros/indigo/share:/opt/ros/indigo/stacks Should I also remove the /stacks to install the package? And another question: when I install that, to use my normal workspaces, should I source from the terminal? I mean, how can I use my normal workspaces: /home/lsi2/catkin_ws/src:/opt/ros/indigo/share:/opt/ros/indigo/stacks when I install that? Thanks a lot!!!! Originally posted by Kailegh on ROS Answers with karma: 146 on 2016-09-07 Post score: 0 Original comments Comment by hachbani on 2019-05-22: Hello, Did you manage to get it to work? I started feeling not optimistic at all about this BLAM thing after struggling for days to get the update file to work.. kind of tried every relevant suggestion I found on the Internet Comment by gvdhoorn on 2019-05-22: @hachbani: what are you referring to with "BLAM thing"? Comment by hachbani on 2019-05-22: @gvdhoom What I'm referring to is the first step to build the BLAM: when I run sudo ./update (without the sudo I get a lot of permission denied errors) I get the following errors: https://pastebin.com/9E6754Vp I removed the catkin_ws path from ROS_PACKAGE_PATH, I only have /opt/ros/kinetic/share. I think it's because of the catkin_make_isolated.. but can't figure out why it doesn't work nor how to fix it. Any suggestion please? Comment by gvdhoorn on 2019-05-22: "BLAM" is not something I recognise.
If you're referring to something that is known in a particular community, please explain or provide a link. Finally, I would recommend you post a new question instead of commenting on a 3 year old one that has already been answered. Comment by hachbani on 2019-05-22: Will do, thanks for ur time! I posted my question here as a reply because it's relevant to the post, since the OP had kind of the same issue. Answer: I think you should just remove /home/dan/catkin_ws/src from ROS_PACKAGE_PATH, and run update. Then, if something goes wrong, you should remove source /opt/ros/indigo/setup.bash and source ~/catkin_ws/devel/setup.bash, and run update. Originally posted by Shay with karma: 763 on 2016-09-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Kailegh on 2016-09-08: yeah, I forgot to post the solution. The first one is right: you only have to delete the part referring to your personal workspaces, works fine, thanks
{ "domain": "robotics.stackexchange", "id": 25701, "tags": "ros-package-path" }
Find the Pumping Length for Language L of (2+3k) a's or (10+12k) b's
Question: The following question on the theory of computation is GATE 2019 CS question 24: For $Σ = \{a, b\}$, let us consider the regular language: $$L = \{x \mid x = a^{2+3k} \text{ or } x = b^{10+12k}, k \geq 0\}$$ Which one of the following can be a pumping length (the constant guaranteed by the pumping lemma) for $L$? (A) 3$\quad$(B) 5$\quad$(C) 9$\quad$(D) 24 My Attempt I tried to solve it like this. I divide the minimum possible string into $x(y^i)z$, that is $a^2$, getting the value $i=2$, but that option is not available. Then I take the second minimum, $a^5$, i.e., taking $x=\epsilon$, $y=a^5$ and $z=\epsilon$. I am getting $i=5$. So is that the correct answer? Answer: I am getting i=5. So is it correct answer? I am afraid I could not understand your argument and conclusion clearly. Anyway, can you check whether you are able to pump $a^2, a^5, b^{10}, b^{22}$ using the pumping length of your choice? Here are a few results stated in the form of exercises that you can prove and use to solve the problem in the question and beyond. Assume $L$ is an arbitrary regular language. Exercise 1. Show that there exists $p_0=p_0(L)\in \Bbb N^+$ such that $p\in\Bbb N$ is a pumping length for $L$ iff $p\ge p_0$. We call $p_0$ the minimum pumping length of $L$. Exercise 2. Let $L_{m,n}=\{a^m, a^{m+n}, a^{m+2n}, \cdots\}$ for $m,n\in\Bbb N$, $0<n$. Show that $p_0(L_{m,n})=\max(m+1, n)$. Exercise 3. Let $L_1, L_2$ be two regular languages. Show that $p_0(L_1\cup L_2)\le\max(p_0(L_1), p_0(L_2))$ Exercise 4. Let $L_1, L_2$ be two regular languages over disjoint alphabets that do not contain the empty word. Show that $p_0(L_1\cup L_2)=\max(p_0(L_1), p_0(L_2))$ Exercise 5. Show that 12 is the minimum pumping length of $\{x\mid x=a^{2+3k} \text{ or } x=b^{10+12k},\ k\ge0\}$. (In particular, pumping $b^{22}$ requires a $y$ whose length is a multiple of 12 within the first $p$ symbols, so any valid pumping length is at least 12; among the options, only 24 qualifies.)
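The exercises above can be sanity-checked by brute force over short words. The sketch below (my own illustration, not part of the original answer) tests the Sipser-style pumping condition, including the pump-down case $i=0$, for every word of $L$ up to a length bound; this can refute an invalid pumping length outright and corroborate a valid one on the words examined.

```python
def in_L(s):
    """Membership in L = { a^(2+3k) : k >= 0 } union { b^(10+12k) : k >= 0 }."""
    n = len(s)
    if n and s == "a" * n:
        return n >= 2 and (n - 2) % 3 == 0
    if n and s == "b" * n:
        return n >= 10 and (n - 10) % 12 == 0
    return False

def is_pumping_length(p, max_len=60, max_i=4):
    """Test the pumping condition on every word of L up to max_len.

    A word w with len(w) >= p must admit a split w = xyz with
    |xy| <= p and |y| >= 1 such that x + y*i + z stays in L for
    i = 0..max_i.  Bounded checking can refute a pumping length but
    never fully prove one; for this L the bound separates the options.
    """
    words = {w for n in range(max_len + 1)
             for w in ("a" * n, "b" * n) if in_L(w)}
    for w in words:
        if len(w) < p:
            continue  # words shorter than p are exempt
        ok = any(
            all(in_L(w[:j] + w[j:k] * i + w[k:]) for i in range(max_i + 1))
            for k in range(1, min(p, len(w)) + 1)  # k = |xy| <= p
            for j in range(k)                      # j = |x|, so |y| >= 1
        )
        if not ok:
            return False
    return True
```

Running it refutes 3, 5 and 9 (they already fail on the $b$-words), while 24 passes, consistent with option (D).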
{ "domain": "cs.stackexchange", "id": 13242, "tags": "formal-languages, regular-languages, pumping-lemma, discrete-mathematics" }
Haskell: downpair :: (Monad m) => m (a, b) -> (m a, m b) implementation
Question: I'm trying to implement a function downpair which takes a monad of pairs and transforms it into a pair of monads. My current implementation is import Control.Monad import Control.Arrow import Data.Maybe downpair :: (Monad m) => m (a, b) -> (m a, m b) downpair = ((return . fst =<<) &&& (return . snd =<<)) main = do print $ downpair $ Just ("Hello", "World") Is there a way to implement downpair in a more compact manner? PS Sorry for my poor English Answer: (return . fst =<<) can be replaced by fmap fst. Your definition then becomes downpair = fmap fst &&& fmap snd
{ "domain": "codereview.stackexchange", "id": 11950, "tags": "haskell, monads" }
Feature classification - am I doing it right?
Question: I have a system where I get as input an array of feature strings: ["kol","bol","sol","nol"] The length of this array is dynamic; I can get 2, 4, 6, etc., with fewer than 20 features in total. I need to make a decision according to this array, and the decision is another string: x = ["feature1","feature5","feature3","feature8"] #in y = "john" #decide What I end up doing is creating a table, 1 if present, 0 otherwise, for each training example (pandas DataFrame): feature1 feature2 feature3 feature4 feature5... decision 1 0 1 0 1 1 (john mapped to 1, Ly to 2, etc.) I feed this into a Decision Tree Classifier using sklearn (DecisionTreeClassifier). I train it with 100+ input feature arrays and desired outcomes. It works, but I do have a feeling that it won't really provide value if the input differs from the training data, because there is no real meaning/weight to these binary values. These feature strings come from a bag of words: if one appears in a text, I extract it, to create a well-defined set of features to train/predict on. Can I, or should I, change the values from 1/0 to more weighted ones? How do I get them? Is this the right approach, assuming I have a bag of words that I look for in a text, producing features that are both in the text and in the bag? Answer: This looks closely similar to text classification. The main concept in any supervised classification is that the model receives the same features (in the same order) when it is applied as when it was trained. This is why traditionally the bag-of-words representation is used: every word in the vocabulary is assigned an index $i$ to be represented as feature $i$. The value of the feature can be boolean (1 if present in the instance, 0 otherwise) or numerical (frequency of the word in the instance, or some more complex value like TFIDF). The meaning of these features is simple: they tell the model whether a particular word is present or not. 
The model calculates how often a particular label is associated with a particular word. Thus in a decision tree the model is made of conditions such as: "if the instance contains word A and does not contain word B and contains word C then the label is Y". Crucially, the vocabulary is fixed at the training stage. This implies that any new word found in the test instances cannot be used at all. This is the problem of out-of-vocabulary (OOV) words. It's also usually recommended to remove the least frequent words, because they likely happen by chance and cause a high risk of overfitting (see previous link). Overfitting is when the model thinks that there's a strong association between a particular word and a label even though it only had one or two examples which happened by chance.
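As a minimal sketch of the fixed-vocabulary encoding described above (the vocabulary and tokens are invented for illustration, not taken from the question):

```python
VOCAB = ["kol", "bol", "sol", "nol", "tol"]  # frozen at training time

def to_binary_features(tokens, vocab=VOCAB):
    """Map a variable-length token list to a fixed-width 0/1 vector.
    Out-of-vocabulary tokens are silently dropped -- the OOV
    limitation mentioned in the answer."""
    present = set(tokens)
    return [1 if word in present else 0 for word in vocab]

print(to_binary_features(["kol", "sol", "zzz"]))  # -> [1, 0, 1, 0, 0]
```

Replacing the 1s with counts or TF-IDF weights changes only how each cell's value is computed; the fixed-width layout, and hence the OOV problem, stays the same.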
{ "domain": "datascience.stackexchange", "id": 10627, "tags": "machine-learning, classification, feature-selection" }
Change simulator stage model configurations in runtime
Question: Can I change the stage/stageros robot model properties at runtime without restarting the stage simulator? Originally posted by Pablo Iñigo Blasco on ROS Answers with karma: 2982 on 2011-12-17 Post score: 1 Original comments Comment by Pablo Iñigo Blasco on 2011-12-20: The attached patch in my own answer is a solution for the problem itself. As Perko suggested, I requested this enhancement. I only focused on laser config, so this patch can be improved. Almost any aspect of Stage can be reconfigured at runtime using this template - Bridge ros-service C++ Stage API Comment by Arkapravo on 2011-12-20: @Pablo Iñigo Blasco : I am using Stage (as Player/Stage and also as ROS/stageros) for about 2 years now, I am very apprehensive of any solution to your problem. I follow your question most keenly. Comment by Pablo Iñigo Blasco on 2011-12-17: I've refactored the entry to clarify what is the question and what is the answer. Comment by Pablo Iñigo Blasco on 2011-12-17: Currently there is no support for this feature. Maybe somebody needs it. Here you can also find the answer about how to solve the problem. Comment by joq on 2011-12-17: Is this a question? Answer: This is not currently supported. However, here you can find a partial solution. Here I attached a patch over the stage package which provides a service to change elemental attributes of the laser beams of the robot models. This service can also serve as a template for further runtime configuration changes. 
Example: import roslib roslib.load_manifest('stage') import rospy import stage.msg import stage.srv stage_model_config_srv=rospy.ServiceProxy('/stageros/set_models_configurations',stage.srv.SetModelConfig) model_config=stage.msg.ModelConfig() model_config.laser_fov=-1.0 model_config.laser_range_max=-1 model_config.laser_range_min=-1 model_config.laser_resolution=-1 model_config.laser_samples=8 stage_model_config_srv(models_configurations=[model_config]) It was useful for some test benchmarks in my research and may be useful to someone else. The diff text below is a good summary of the changes made. However, the attached patch (rename it from .jpg to .diff) C:\fakepath\stage_configuration_service.diff.jpg should be used. It is a bit more verbose (a lot of eclipse-ros indentation format changes). This patch has been applied over: URL: https://code.ros.org/svn/ros-pkg/stacks/stage/trunk Repository Root: https://code.ros.org/svn/ros-pkg Repository UUID: eb33c2ac-9c88-4c90-87e0-44a10359b0c3 Revision: 38364 Diff text: Index: msg/ModelConfig.msg =================================================================== --- msg/ModelConfig.msg (revision 0) +++ msg/ModelConfig.msg (revision 0) @@ -0,0 +1,8 @@ +#geometry_msgs/Pose2D laser_pose +#to maintiain its current value set each config property to -1 + +int32 laser_samples +int32 laser_resolution +float64 laser_fov +int32 laser_range_max +int32 laser_range_min Index: src/stageros.cpp =================================================================== --- src/stageros.cpp (revision 38364) +++ src/stageros.cpp (working copy) @@ -47,6 +47,7 @@ #include #include "tf/transform_broadcaster.h" +#include "stage/SetModelConfig.h" #define USAGE "stageros " #define ODOM "odom" @@ -79,6 +80,9 @@ std::vector cmdvel_subs_; ros::Publisher clock_pub_; + ros::ServiceServer models_configurations_service_; + bool model_config_callback(stage::SetModelConfig::Request& request, stage::SetModelConfig::Response& response); + // A helper function 
that is executed for each stage model. We use it // to search for models of interest. static void ghfunc(Stg::Model* mod, StageNode* node); @@ -213,9 +217,47 @@ this->laserMsgs = new sensor_msgs::LaserScan[numRobots]; this->odomMsgs = new nav_msgs::Odometry[numRobots]; this->groundTruthMsgs = new nav_msgs::Odometry[numRobots]; + + this->models_configurations_service_ = localn.advertiseService("set_models_configurations", &StageNode::model_config_callback, this); } +bool StageNode::model_config_callback(stage::SetModelConfig::Request& request, stage::SetModelConfig::Response& response) +{ + boost::mutex::scoped_lock lock(msg_lock); + if(request.models_configurations.size()>lasermodels.size()) + { + ROS_ERROR("Service stage model config: configurations.count > simulator_models.count"); + return false; + } + + for(unsigned int i=0;ilasermodels[i]->GetConfig(); + if(current_msg_config.laser_samples!=-1) + laser_config.sample_count=current_msg_config.laser_samples; ///SetConfig(laser_config); + } + + return true; +} + // Subscribe to models of interest. Currently, we find and subscribe // to the first 'laser' model and the first 'position' model. Returns // 0 on success (both models subscribed), -1 otherwise. 
Index: srv/SetModelConfig.srv =================================================================== --- srv/SetModelConfig.srv (revision 0) +++ srv/SetModelConfig.srv (revision 0) @@ -0,0 +1,2 @@ +ModelConfig[] models_configurations +--- \ No newline at end of file Index: CMakeLists.txt =================================================================== --- CMakeLists.txt (revision 38364) +++ CMakeLists.txt (working copy) @@ -22,6 +22,9 @@ include_directories(${STAGE_INCLUDE_DIRS} ${PROJECT_SOURCE_DIR}/include/Stage-3.2) link_directories(${STAGE_LIBRARY_DIRS} ${PROJECT_SOURCE_DIR}/lib) +rosbuild_genmsg() +rosbuild_gensrv() + rosbuild_add_executable(bin/stageros src/stageros.cpp) rosbuild_link_boost(bin/stageros thread) Originally posted by Pablo Iñigo Blasco with karma: 2982 on 2011-12-17 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Pablo Iñigo Blasco on 2011-12-17: Done. https://code.ros.org/trac/ros-pkg/ticket/5307 Comment by Eric Perko on 2011-12-17: I suggest you open an enhancement ticket against the stage package and attach the patch to that. That way, this patch won't get lost and may make it into a future release of the stage package. Please update your answer to include that trac link once you create the ticket.
{ "domain": "robotics.stackexchange", "id": 7674, "tags": "simulation, stage" }
I wanna learn Quantum Physics
Question: I'm searching for a book, video documentary, or any other source of information to learn Quantum Physics from the beginning. I know almost nothing on physics, so I guess I would need the basics first. And although I can't skip the math, what I'm seeking is more on the theoretical part. Also I'd love to know how history on this goes. I mean, what Plank did, Einstein, Schrodinger... Things the way they happened. But of course, anything helps. Answer: As others have stated, it really depends on why you want to learn quantum mechanics, and how deeply you want to learn it. (1) If you want to learn it as badly as you want to watch a movie at the movie theaters (i.e. not that badly - you're just mildly interested), then I'd recommend, aside from the books already mentioned, Mr. Tompkins in Paperback by George Gamow. It's a classically wonderful story book that plunges you into the wonderland of modern physics (up until the mid 1900's though). Also, I'd recommend watching a bunch of youtube videos of Richard Feynman. Richard Feynman (1918-1988) was a theoretical physicist with an extremely interesting personality and view of the world. Watching videos of him will get you into science and critical thinking. Finally, reading The Quantum Universe by Hey and Walters will give you what you want. (Beware! There's a book by the same title written by Brian Cox which, in my opinion, isn't that great) (2) If you want to learn it to scratch it off your bucket list (i.e. you're more than mildly interested in it - it's always attracted you, but you have many more primary interests), I'd recommend to go through what I mentioned in the previous paragraph, and then go through The Theoretical Minimum by Susskind and Hrabovsky. Then, maybe if you're up for it, pick up Introduction to Quantum Mechanics by Griffiths. 
(3) If you really want to learn it so badly that you're willing to embark on a life changing journey to truly understand the beauty of quantum mechanics and possibly many other advanced topics of physics, this page is designed for you. Also, once you go through quantum mechanics for the first time (if you do), watch this lecture by Sidney Coleman titled "Quantum Mechanics in Your Face". It'll give the right way of thinking about both quantum mechanics and classical physics. If you're in between (2) and (3), I'd recommend taking a look at The Road to Reality by Penrose. It's huge, but it might be (a) well suited for you given your background, and (b) the type of journey you're looking for. Also, as others have stated, the only way to correctly communicate the ideas of quantum mechanics is through the mathematics on which the theory is built. Why this dissuades people so much is because you actually have to think, and most people enjoy having ideas given to them in a way their mind is already accustomed to. That's exactly why I recommended Richard Feynman videos (his books are great too) in (1). If you can learn to appreciate critical thinking and intelligence, the mathematics will become mental masturbation. Blatantly put, the only real way to learn quantum mechanics is to embark on the journey described in (3), and this is more than possible if you can find the motivation through sources like those outlined in (1).
{ "domain": "physics.stackexchange", "id": 27717, "tags": "quantum-mechanics, resource-recommendations, education" }
Reasons for dry conditions in substitution reactions with benzene
Question: All substitution reactions of benzene must be carried out in dry conditions with a catalyst that produces a powerful electrophile. This was a statement from my book. My question is, why must they be carried out in dry conditions? What will occur if moisture is present? Answer: Benzene and other aromatic hydrocarbons are immiscible with water, so there is probably no problem with benzene itself if moisture is present. However, most of the substitution reactions that benzene undergoes require a Lewis acid catalyst, such as a ferric halide or an aluminium halide. The most commonly used catalysts are anhydrous $\ce{AlCl3}$ and $\ce{FeBr3}$, which are deactivated if they react with water. So this requires dry conditions.
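To make the deactivation concrete, here is the standard hydrolysis of the most common catalyst (my addition, not part of the original answer):

$$\ce{AlCl3 + 3 H2O -> Al(OH)3 + 3 HCl}$$

Once hydrolysed, the aluminium centre is no longer available as a Lewis acid, so the powerful electrophile (e.g. $\ce{Cl+}$ in chlorination) is never generated.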
{ "domain": "chemistry.stackexchange", "id": 10607, "tags": "organic-chemistry, reaction-mechanism, aromatic-compounds" }
Time in a cosmological model
Question: In a cosmological model, do we usually set $g_{00}$ to be a function of $x_0$ only because we want each static observer to measure the same time? Answer: I'm not sure what the question is, but we usually define the time coordinate as that of an observer travelling along the Hubble flow, i.e. isotropic with respect to the expansion of the universe. Then in something like the Friedmann metric we have $g_{00} = -1$ (assuming a trivial lapse function and the $(-,+,+,+)$ signature). Edit (more detail): I'm not sure this answers your question still, but I thought I'd include more detail. To be more technical, all we need is for the spacetime manifold to be globally hyperbolic (i.e. foliated by 3-manifold hypersurfaces) in order to define a global $t$. But in cosmology we already have much more: we assume the cosmological principle, which means spacetime can be foliated into spacelike hypersurfaces which are also spherically symmetric about each point. This (along with Weyl's postulate, which demands that the matter geodesics are orthogonal to the spacelike hypersurfaces at $t$) gives us the orthogonal structure we need to write $$ ds^2= -dt^2 + h_{ij}(x^{\mu})\, dx^{i}dx^{j} $$ for $i,j=1,2,3$. And then we can interpret $t$ as the cosmic time, which, as I said before, is the proper time for moving along with the Hubble flow. You can then quite easily go on to show that the 3-metric $h_{ij}$ can be written in the form $a(t)^2 h_{ij}(x^{k})$, and then get to the FLRW metric, but I'm being a bit sloppy and lazy with this - I assume you're only interested in the $t$ coordinate term. If you're instead asking about why we make the assumptions in the cosmological principle, then that's a different question which I'm sure has been answered on here before.
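For concreteness, carrying that last step through (a textbook result quoted for reference, not derived here) gives the FLRW line element in comoving spherical coordinates, with $k$ the spatial curvature constant:

$$ds^2 = -dt^2 + a(t)^2\left[\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\right]$$

so $g_{00} = -1$ identically in the $(-,+,+,+)$ signature, and the cosmic time $t$ is the proper time of every comoving observer.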
{ "domain": "physics.stackexchange", "id": 73868, "tags": "general-relativity, cosmology, coordinate-systems, time, space-expansion" }