anchor | positive | source |
|---|---|---|
What kind of data do I need to be able to complete Differential Expression using Limma | Question: I am very new to bioinformatics but have some background in coding with R. I have been asked to find a data set and complete differential expression using the Limma package in R.
I have tried a few data sets but haven't been able to get any code to work.
What kind of data do I need to have in order for Differential Expression to work with Limma R? I have tried load and read.table, is there a read function that I am missing?
I was advised to look on Gene Expression Omnibus to find published data sets to work on but again I am struggling to find any that will work.
Thank you
Answer: Limma is relatively generic in what it accepts. Typically it is data on the log2 scale that are either already normalized or need one of the built-in normalization methods (such as quantile normalization); in the case of RNA-seq you can provide raw counts, which can then run through the limma-voom pipeline.
The format is always a matrix of counts/values, with genes/observations in rows and samples/specimen in columns.
For practice, you could follow the RNA-seq/limma-voom section (chapter 15) of the limma manual which has a RNA-seq case study available at https://subread.sourceforge.net/RNAseqCaseStudy.html with example data and code.
GEO has a lot of published data, often microarray gene expression values or RNA-seq, but let's discuss that in a separate question if you need it, to keep this one focused on limma. | {
"domain": "bioinformatics.stackexchange",
"id": 2535,
"tags": "r, differential-expression"
} |
Why is bandwidth, the range of frequencies, important when sending wave signals, such as in radio? | Question: In wired/wireless networking and radio, signals are sent in the form of waves. Then the concept of bandwidth comes in, which is the difference between the highest and lowest frequency in a signal. But I do not get why bandwidth determines the maximum information per second that can be sent. If we are able to send signals of any frequency within the bandwidth, then as the number of component frequencies in an aggregated signal increases, the information that can be sent should increase without bound.
Is this not possible because adding some frequency necessarily disturbs the information carried at another frequency? And why does the maximum information per second that can be sent depend only on the bandwidth, and not on the highest frequency in the aggregated signal?
Answer: From a physics perspective, the fundamental reason for this is something called the bandwidth theorem (also known as the Fourier limit or bandwidth limit, and closely related to the Heisenberg uncertainty principle). In essence, it says that the bandwidth $\Delta\omega$ of a pulse of signal and its duration $\Delta t$ are related:
$$
\Delta\omega\,\Delta t\gtrsim 2\pi.
$$
A signal with a limited time duration needs more than one frequency to be realizable. (Conversely, you need infinite time to confirm that a signal really is monochromatic.) The bandwidth theorem, which can be proved rigorously for reasonable definitions of the bandwidth and the duration, means that the smaller the time duration is, the larger the bandwidth it requires. It is a direct consequence of a basic fact of Fourier transforms, which is that shorter pulses will have broader support in frequency space.
(This last statement is easy to see. If you have a signal $f(t)$ and you make it longer by a factor $a>1$, so your new signal is $g(t)=f(t/a)$, the new signal's transform is now
$$
\tilde g(\omega)
=\int g(t)e^{i\omega t}\text dt
=\int f(t/a)e^{i\omega t}\text dt
=a\int f(\tau)e^{ia\omega \tau}\text d\tau
=a\tilde f(a\omega),
$$
and this now scales the other way, so it's narrower in frequency space.)
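This scaling is easy to check numerically. The sketch below (my own, using Gaussian pulses as an illustrative choice) computes the rms duration and rms angular-frequency bandwidth of two pulses and shows that halving the duration doubles the bandwidth, leaving the product at the Gaussian minimum of $1/2$:

```python
import numpy as np

def rms_width(x, density):
    """RMS width of a non-negative density sampled on a uniform grid x."""
    p = density / density.sum()
    mean = (x * p).sum()
    return np.sqrt(((x - mean) ** 2 * p).sum())

t = np.linspace(-10, 10, 4096)
dt = t[1] - t[0]

widths = {}
for tau in (1.0, 0.5):  # two Gaussian pulse durations
    f = np.exp(-t**2 / (2 * tau**2))
    # Power spectrum of the pulse, with angular frequencies in increasing order
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(f))) ** 2
    omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(t), dt))
    widths[tau] = (rms_width(t, f**2), rms_width(omega, spectrum))

# For a Gaussian, the rms duration-bandwidth product is exactly 1/2,
# the minimum allowed by the bandwidth theorem.
```

The shorter pulse ends up with exactly twice the spectral width of the longer one, as the scaling argument predicts.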
More intuitively, the theorem says that it's impossible to have a very short note with a clearly defined pitch. If you try to play a central A, at 440 Hz, for less than, say, 10 milliseconds, then you won't have enough periods to really lock in on the frequency, and what you hear is a broader range of notes.
Suppose your communications protocol consists of sending pulses of light down a fibre, with a fixed time interval $T$ between them, in such a way that sending a pulse means '1' and not sending it means '0'. The rate at which you can transmit information is essentially given by the pulse separation $T$, which you want to be as short as possible. However, you don't want this to be shorter than the duration $\Delta t$ of each pulse, or else the pulses may start triggering the detection of neighbouring pulses. Thus, to increase the capacity of the fibre, you need to use shorter pulses, and this requires a higher bandwidth.
Now, this is probably very much a physics perspective, and the communications protocols used by real-world fibres and radio links are much more complex. Nevertheless, this limitation will always be there, because there will always be an inverse relation between the width of a signal in the frequency and time domains. | {
"domain": "physics.stackexchange",
"id": 15313,
"tags": "electromagnetism, signal-processing, radio, radio-frequency"
} |
Generating Ising model steady state configurations | Question: What is the most efficient way to simulate steady state configurations of the Ising model? I am just interested in having a large set of random steady state configurations of the 1D Ising model (with homogeneous coupling constants). A few ideas came to mind:
Brute force sampling. Since the Ising model is exactly solvable in 1D and 2D, one has exact expressions for the probabilities of each state. However, random sampling over a set of $2^N$ states will likely cause memory problems already for small $N$.
Monte Carlo dynamics. One could run the usual Monte Carlo algorithms (e.g. Glauber dynamics) on random initial states and wait until the system converges to thermal equilibrium. However, this seems inefficient when you are not interested in the dynamics and only want steady state configurations.
Using the density of states. One could also first randomly sample the energy of the system, according to $P(E) \sim N(E) \exp(-\beta E)$, where $N(E)$ is the density of states, which is computable (at least numerically). Then one generates a random configuration with this energy, e.g. using a spin flip algorithm where one flips single spins to increase/decrease the energy until it matches the target energy. But I'm not sure if the configurations obtained this way statistically follow the Boltzmann distribution.
Note: in 1D there is also an exact expression for the Ising density of states, $g(E(k)) = 2 \binom{N-1}{k}$ with $E(k) = -N + 2k + 1$. See this other question: Ising model density of states.
Any ideas on what is the best way to approach this?
Answer: For the one-dimensional model, the most efficient way, by far, of simulating the Ising model is by using a Markov chain on $\{-1,1\}$, generating one spin at a time, conditionally on the values taken by the previous spins. Note also that in this way, you are sampling exactly from the Gibbs distribution, with no approximation (in contrast to a Monte Carlo approach).
For simplicity, let me consider the model with free boundary condition, that is, the model with Hamiltonian
$$
\beta\mathcal{H} = - \beta\sum_{i=2}^N \sigma_{i-1}\sigma_i .
$$
(You can also add a magnetic field, but I won't do it here to simplify the exposition).
Then, $\sigma_1$ is equal to $+1$ or $-1$ with probability $\tfrac12$ by symmetry. Moreover, for any $k\geq 2$,
$$
\mathrm{Prob}(\sigma_k=\sigma_{k-1} \,|\, \sigma_1, \dots, \sigma_{k-1})
=
\mathrm{Prob}(\sigma_k=\sigma_{k-1})
=
\frac{e^{\beta}}{e^{\beta} + e^{-\beta}}
=
\frac{1}{1+e^{-2\beta}}.
$$
Let us call this probability $p$.
To summarize:
You sample $\sigma_1$: it is $+1$ with probability $\tfrac12$ and $-1$ with probability $\tfrac12$.
Given $\sigma_1$, you set $\sigma_2 = \sigma_1$ with probability $p$ and $\sigma_2 = -\sigma_1$ with probability $1-p$.
Given $\sigma_2$, you set $\sigma_3 = \sigma_2$ with probability $p$ and $\sigma_3 = -\sigma_2$ with probability $1-p$.
and so on...
This is very easy to implement and extremely fast (of course, compute $p=1/(1+e^{-2\beta})$ only once). Then most of the time is taken by the pseudorandom number generation. In this way, you can simulate chains of arbitrarily large length without any problem.
(See also this answer for another point of view of the relationship between one-dimensional models and Markov chains.)
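The recipe above is only a few lines of code. A minimal sketch in Python/numpy (function and variable names are mine):

```python
import numpy as np

def sample_ising_chain(N, beta, rng):
    """Exact sample from the 1D free-boundary Ising model at inverse temperature beta."""
    p = 1.0 / (1.0 + np.exp(-2.0 * beta))  # Prob(sigma_k = sigma_{k-1}), computed once
    spins = np.empty(N, dtype=np.int8)
    spins[0] = 1 if rng.random() < 0.5 else -1
    same = rng.random(N - 1) < p  # one random draw per bond
    for k in range(1, N):
        spins[k] = spins[k - 1] if same[k - 1] else -spins[k - 1]
    return spins

rng = np.random.default_rng(0)
beta = 0.5
chain = sample_ising_chain(200_000, beta, rng)
# Sanity check: the nearest-neighbour correlation should equal 2p - 1 = tanh(beta).
corr = (chain[:-1] * chain[1:]).mean()
```

Since $\sigma_k = \sigma_0 \prod_j (2\,\mathrm{same}_j - 1)$, the Python loop can also be replaced by a single `np.cumprod` if speed matters.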
Explanation of the formula for $p$.
The simplest way of seeing why the formula for $p$ given above holds is by using either the random-cluster or the high-temperature representations of the Ising model, if you're familiar with them (they are described, for instance, in Sections 3.7.3 and 3.10.6 in this book).
If you are not familiar with these representations, let me try to provide a direct argument.
Let $s_1,\dots,s_N \in \{-1,1\}$ and write $s=(s_1,\dots,s_{k-1},s_k,\dots,s_N)$ and $s'=(s_1,\dots,s_{k-1},-s_k,\dots,-s_N)$ (that is, the configuration $s'$ is obtained from the configuration $s$ by flipping the spins at $k, k+1, \dots N$).
Now,
$$
\frac{{\rm Prob}(\sigma = s)}{{\rm Prob}(\sigma = s')}
=
\frac{\exp\bigl( -\beta \mathcal{H}(s) \bigr)}{\exp\bigl( -\beta\mathcal{H}(s') \bigr)}
=
\exp(2\beta\, s_{k-1}s_{k}).
$$
In particular,
$$
\frac{{\rm Prob}(\sigma_k=\sigma_{k-1})}{{\rm Prob}(\sigma_k = -\sigma_{k-1})}
=
\exp(2\beta).
$$
But this implies that
$$
{\rm Prob}(\sigma_k=\sigma_{k-1})
= e^{2\beta}\, {\rm Prob}(\sigma_k = -\sigma_{k-1})
= e^{2\beta} \bigl( 1 - {\rm Prob}(\sigma_k = \sigma_{k-1}) \bigr),
$$
and therefore
$$
(1+e^{2\beta})\, {\rm Prob}(\sigma_k=\sigma_{k-1}) = e^{2\beta},
$$
from which the formula for $p$ follows immediately. | {
"domain": "physics.stackexchange",
"id": 71567,
"tags": "statistical-mechanics, computational-physics, simulations, ising-model"
} |
What happens to the water on the surface of the Earth if the Earth is not rotating about its axis in the Earth-Moon system? | Question: Suppose the Earth is not rotating. As usual, the Moon follows its normal path around the Earth. Let's assume it's a circular motion and that there are no other gravitational influences.
A test particle on the surface of the Earth directed to the Moon on the line that connects the middle points of both will feel the Earth's gravity directed to its center and the Moon's gravity directed to the Moon.
A test particle on the opposite side of the Earth will also feel the Earth's gravity directed to its center (equal but opposite to the gravity experienced by the above-mentioned test particle) and the (smaller because of the greater distance to the Moon) gravity caused by the Moon.
So the surface water on Earth is attracted on both sides where the test particles reside with equal strength to the Earth. But the gravitational influence of the Moon on the water is bigger on the side directed to the Moon than it is on the opposite side. The difference is the greatest for the two test particles.
You would expect that because the water is pulled to the Moon by a little bigger gravitational force on the side of the Earth facing the Moon than on the opposite side (not facing the Moon), a bulge of water, directed to the Moon, will emerge that rotates around the Earth in sync with the Moon's motion.
But we must of course not forget that in this case there are also centrifugal forces that pull on the water due to the rotation of the Earth and the Moon around their CM (this is of course not the rotation of the Earth around its axis, which I put to zero). The CM of the Earth-Moon system lies at 4600 km from the center of the Earth in the direction of the Moon. The centrifugal force is bigger on the part of the Earth furthest from the CM, where the gravitational effect of the Moon is the smallest (when the Earth is rotating, the centrifugal forces due to the rotation of the Earth itself also come into play, which makes the situation more complicated).
So the question reduces to: What is the ratio between the centrifugal force plus the gravitational force caused by the Moon on the furthest part on Earth to the Moon and the centrifugal force plus the gravitational force caused by the Moon on the closest part of the Earth.
On the far side (from the Moon) of the Earth the Moon's gravitation is smaller but the centrifugal force bigger, while on the close side the Moon's gravitation is bigger and the centrifugal force smaller. Can anybody do this quite straightforward calculation (to find the ratio)? One thing is sure: two opposite bulges will develop, each of which goes around the Earth in the same time as the Moon makes one complete cycle around the Earth. The ratio gives us information about the height of the bulges.
EDIT
I made an obvious error (to make my error clear I leave the question as it is). The Earth isn't rotating (as can be read in my question), so no centrifugal forces are present. It's true that, in this case, the CM (4600 km from the center of the Earth) rotates around the center of the Earth in sync with the rotation of the Moon, but the Earth doesn't rotate around the CM. So there are only tidal forces at work, which cause the two bulges rotating around the Earth in one Moon cycle.
Answer: The ratio is 1. It is possible, and often done, to analyze the situation as you have described. But it is not necessary to do it that way. You can just think of the Earth as being in free fall toward the Moon. The forces on it are just the same as if it were positioned at the same distance, but with no transverse velocity, so that the two bodies would soon violently slam together. They don't, because the transverse velocity causes them to orbit about each other endlessly, but that transverse velocity does not affect the variation of the gravitational fields, or the acceleration of the Earth, which is directly toward the Moon in either case.
The only thing that enters into the calculation when considered this way is the difference between the Moon's gravitational field on either side of the Earth and that at the center of the Earth (these are known as tidal forces). When the distance from the Earth to the Moon is much greater than the radius of the Earth (as it is), these differences are very nearly identical on the two sides.
You will get the same answer using centrifugal force, but it is unnecessarily complicated. | {
"domain": "physics.stackexchange",
"id": 53511,
"tags": "newtonian-gravity, earth, moon, tidal-effect"
} |
Is this sorting problem NP-complete? | Question: Consider an array $A=(a_1,a_2,...,a_n)$ such that the $a_i$s are positive integers. Moreover, we have $k$ binary tuples, each of length $n$. In each iteration, we choose one of those tuples and subtract it from the array. For example, if $A=(5,7,4,2)$ and we subtract $(1,0,0,1)$ from it, it becomes $(4,7,4,1)$. We cannot use a tuple more than once. Moreover, we know that the sum of all those $k$ tuples is $A$. What is the minimum number of tuples we need to subtract from $A$ in order for $A$ to be sorted in decreasing order?
I want to know whether it is an NP-hard problem.
Example: $A=[2,3,4,5]$
tuples=$(0,1,1,1),(0,0,1,1),(1,0,0,1),(1,0,1,1),(0,1,1,1),(0,1,0,0)$
The answer is 4. By subtracting (0,1,1,1), (0,1,1,1), (0,0,1,1) and (1,0,0,1), we will have [1,1,1,1], which is decreasingly sorted. We used 4 tuples, and it is not possible with fewer tuples (at least, I could not do it).
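For small instances like this one, the claimed minimum can be checked by brute force over all subsets of tuples. A hypothetical helper (my own, not part of the question):

```python
from itertools import combinations

def min_tuples_to_sort(A, tuples):
    """Smallest number of tuples whose subtraction leaves A non-increasing (brute force)."""
    for size in range(len(tuples) + 1):
        for chosen in combinations(range(len(tuples)), size):
            result = list(A)
            for idx in chosen:
                # Subtract the chosen tuple entrywise
                result = [x - bit for x, bit in zip(result, tuples[idx])]
            if all(result[i] >= result[i + 1] for i in range(len(result) - 1)):
                return size
    return None

A = [2, 3, 4, 5]
tuples = [(0,1,1,1), (0,0,1,1), (1,0,0,1), (1,0,1,1), (0,1,1,1), (0,1,0,0)]
```

This exhaustive check confirms the minimum of 4 for the example, but of course runs in exponential time in general.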
Answer: You can reduce from exact-cover by $3$-sets (X3C): given a set $X$ of $3n$ elements $x_1, \dots, x_{3n}$, and a collection $S = \{S_1, \dots, S_m\}$ of $m$ sets each containing exactly $3$ elements, is there an exact cover, i.e., a subset $S'$ of $S$ such that $|S'|=n$ and $\cup_{S_j \in S'} S_j = X$?
Consider the instance of your problem where:
$A = (a_0, a_1, a_2, a_3, \dots, a_{3n}) = (3n, 3n, 3n-1, 3n-2, \dots, 1)$ is an array of length $3n+1$ (for ease of notation I'm indexing the array from $0$) where $a_0 = 3n$ and $a_i = 3n-i+1$ for $i \ge 1$
There is a tuple $t_j$ for each set $S_j$. The $i$-th entry of $t_j$ is set to $1$ if and only if $x_i \in S_j$ (so entry $0$ is always $0$).
Suitably add a bunch of tuples, each with only a single entry set to $1$, in order to ensure that the sum of all tuples yields exactly $A$ (notice that you only need polynomially many tuples).
If there is an exact cover $S'$, then selecting all $t_j$ such that $S_j \in S'$ yields the array $(3n, 3n-1, 3n-2, \dots, 0)$.
If you can use $n$ tuples to transform $A$ into a decreasing vector, then each entry $a_i$ with $i \ge 1$ is decreased at least once. Moreover, since each tuple has at most $3$ entries set to $1$, each entry $a_i$ with $i \ge 1$ is decreased exactly once, while $a_0$ is never decreased. This means that the collection of sets $S_j$ such that $t_j$ is a selected tuple is an exact cover.
To summarize, there is a solution to the instance of your problem using at most $n$ tuples if and only if there is a solution to the X3C instance. | {
"domain": "cs.stackexchange",
"id": 20921,
"tags": "np-complete, reductions, sorting, np-hard, polynomial-time-reductions"
} |
Finding the k-th smallest ternary sum of elements from three different arrays | Question: The problem goes like this:
Given arrays $\{ a_i: 0\leq i \leq n-1 \},\{ b_i: 0\leq i \leq n-1 \} $ and $\{ c_i: 0\leq i \leq n-1 \}$, we want to know what is the $k$-th smallest combination $a_r+b_s+c_t$ where $r, s, t$ are arbitrary indices.
Since $k$ is relatively much smaller than $n^3$ (we may suppose $k \approx n$ for simplicity), it would be wasteful to naively enumerate all $n^3$ possibilities and find the $k$-th smallest using a binary heap.
What is a more efficient way (in terms of time complexity) to solve this problem? I tried to optimize the naive algorithm described above by first sorting the three arrays and then doing the heaping for $\{ a_r + b_s + c_t : r+s+t < k \}$, but I believe this is far from the most efficient algorithm. Thanks.
(Rmk: the algorithm is intended to be comparison-based, since elements might be non-integers.)
Answer: You can first sort the three arrays. In the following answer we assume these arrays are already sorted.
You can maintain a priority queue $Q$. Initially it contains only $a_0+b_0+c_0$. Then in each iteration, you pop the smallest element (say $a_r+b_s+c_t$), and add $a_{r+1}+b_s+c_t$, $a_r+b_{s+1}+c_t$ and $a_r+b_s+c_{t+1}$ (duplicates must not be inserted; it is easy to check whether a combination already exists in $Q$, e.g. with a hash set of index triples). Now we can claim the sum popped in the $i$-th iteration is the $i$-th smallest sum.
This algorithm takes $O(n\log n + k\log k)$ time: $O(n\log n)$ for sorting and $O(k\log k)$ for the $k$ heap iterations. If the given three arrays are already sorted, the time is reduced to $O(k\log k)$.
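A sketch of this procedure in Python (function and variable names are mine; index triples are stored alongside the sums so that neighbours can be generated):

```python
import heapq

def kth_smallest_sum(a, b, c, k):
    """k-th smallest value (1-indexed) of a[r] + b[s] + c[t]."""
    a, b, c = sorted(a), sorted(b), sorted(c)
    heap = [(a[0] + b[0] + c[0], 0, 0, 0)]
    seen = {(0, 0, 0)}  # index triples already pushed, to avoid duplicates
    for _ in range(k):
        val, r, s, t = heapq.heappop(heap)
        # Push the three successors of the popped triple
        for nr, ns, nt in ((r + 1, s, t), (r, s + 1, t), (r, s, t + 1)):
            if nr < len(a) and ns < len(b) and nt < len(c) and (nr, ns, nt) not in seen:
                seen.add((nr, ns, nt))
                heapq.heappush(heap, (a[nr] + b[ns] + c[nt], nr, ns, nt))
    return val
```

Each of the $k$ iterations pops once and pushes at most three triples, so the heap never holds more than $O(k)$ entries.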
We can use mathematical induction on $i$ to prove the correctness. Suppose the $i$-th smallest sum is $a_r+b_s+c_t$, then we have the following chain:
\begin{align}
&a_0+b_0+c_0\\
\le\ &a_1+b_0+c_0 \\
\le\ &\cdots \\
\le\ &a_r+b_0+c_0 \\
\le\ &a_r +b_1+c_0 \\
\le\ &\cdots \\
\le\ &a_r +b_s+c_0 \\
\le\ &\cdots \\
\le\ &a_r+b_s+c_t.
\end{align}
For convenience, we denote by $S_0,\ldots,S_{\ell}$ these sums in the chain. Let $S_j$ be the first sum in the chain that is not popped before the $i$-th iteration. Then in the iteration that $S_{j-1}$ is popped, $S_j$ is added to $Q$ according to our algorithm. This means the sum popped in the $i$-th iteration must be no more than $S_j$, thus no more than $S_{\ell}=a_r+b_s+c_t$. By inductive assumption, the first $i-1$ smallest sums are popped before the $i$-th iteration, so the sum popped in the $i$-th iteration is exactly the $i$-th smallest sum. | {
"domain": "cs.stackexchange",
"id": 12881,
"tags": "algorithms, algorithm-analysis, sorting, trees, heaps"
} |
How to choose the frequency of the local oscillator for the heterodyne principle | Question: I am trying to implement the heterodyne principle in order to make an AM signal demodulator that can demodulate signals in a set bandwidth (550 - 1720 kHz). Is there a formula or some other method of deciding which local oscillator frequency to use? Or have I completely missed the mark on what exactly this does?
Answer: Your band spans 550 kHz - 1720 kHz, but any given signal within it is centered around a particular frequency. Now it depends if you want to move the signal to baseband (centered at zero frequency) or some other intermediate frequency. You get to decide this as the system designer.
Given a signal centered around the frequency $f_c$, that you wish to modulate to a new frequency $f_0$, the expression is simple for the required oscillator frequency $f_{LO}$
$$f_0 = f_c - f_{LO}$$
$$\rightarrow f_{LO} = f_c - f_0$$
The signs assume that you are shifting down from $f_c$ to $f_0$, which is usually the case. For example, when getting a signal to baseband, you set $f_c = f_{LO}$ so that $f_0 = 0$.
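The effect of mixing is easy to see numerically: multiplying the incoming carrier by the local oscillator produces spectral lines at the difference frequency $f_c - f_{LO}$ (the desired intermediate frequency) and the sum frequency $f_c + f_{LO}$. A small numpy sketch, with all frequencies chosen arbitrarily for illustration:

```python
import numpy as np

fs = 1_000_000                    # sample rate, Hz
t = np.arange(10_000) / fs        # 10 ms of samples
f_c, f_0 = 100_000, 20_000        # carrier and desired intermediate frequency
f_lo = f_c - f_0                  # local oscillator at 80 kHz

# Mixing = multiplication; the product contains f_c - f_lo and f_c + f_lo.
mixed = np.cos(2 * np.pi * f_c * t) * np.cos(2 * np.pi * f_lo * t)
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])  # the two strongest lines
```

A band-pass filter after the mixer keeps the 20 kHz difference term and rejects the 180 kHz sum term.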
Some systems do a combination of upconverting and downconverting, in which the equations have their signs changed to accommodate an increase or decrease in frequency. | {
"domain": "dsp.stackexchange",
"id": 10314,
"tags": "signal-analysis, digital-communications, demodulation, oscillator, amplitude-modulation"
} |
Is it possible for a substance to absorb a longer wavelength of EM wave and emit a shorter wavelength? | Question: I know a fluorescent lamp works by emitting UV first and then the specific substance inside absorbs UV and finally emits visible light. An object may emit infrared under sunlight due to heating. But those examples are absorbing short wavelength of EM and then reemitting a longer one. Is there a substance that can do the opposite (e.g.: absorb visible light and then emit UV)?
Answer: It is possible, but impractical. You can "upconvert" light. This effect, as Wikipedia relays, is due to the sequential absorption of two photons per emitted photon.
This is certainly possible to make in a fluorescent / phosphorescent light scenario, but the engineering challenges (or, more specifically - the cost) involved in such a solution would be steep.
There is a chemical model of such a compound in the two-photon-absorbance wiki site, but I have not the nomenclature skills to name it for you. I even tried using online search for the structure (and it took me a while. I hate nested benzene groups - probably why I chose inorganic chemistry) | {
"domain": "chemistry.stackexchange",
"id": 8514,
"tags": "photochemistry"
} |
In the adiabatic theorem, how do we know which eigenstate we start on? (STIRAP) | Question: I am aware of the question here, but it doesn't have an answer and also doesn't answer my question. I'm wondering about a specific case in STIRAP, where the 3 eigenstates are $$|\Psi_\pm\rangle = \frac{1}{\sqrt{2(\Omega_P^2+\Omega_S^2)}}\left[\begin{array}{ccc} \Omega_P \\ \pm\sqrt{\Omega_P^2+\Omega_S^2} \\ \Omega_S \end{array}\right], |\Psi_\text{dark}\rangle = \frac{1}{\sqrt{\Omega_P^2+\Omega_S^2}}\left[\begin{array}{ccc} \Omega_S \\ 0 \\ -\Omega_P \end{array}\right],$$ with eigenenergies $E_\pm = \pm \frac{\hbar}{2} \sqrt{\Omega_P^2+\Omega_S^2},\ E_\text{dark} = 0$. Then we kick off $\Omega_S$ while leaving $\Omega_P = 0$ to push the system into $|\Psi_\text{dark}\rangle$ and we can apply the adiabatic elimination to slowly nudge the dark state from $|1\rangle \rightarrow |3\rangle$.
Here's my question: what if we kick off $\Omega_P$ first, and perform transfer through the $|\Psi_\pm\rangle$? I assume doing that would put the system into some superposition of the $|\Psi_\pm\rangle$? What superposition would it be, $|\Psi_+\rangle + |\Psi_-\rangle$ or $|\Psi_+\rangle - |\Psi_-\rangle$ or something else?
If the answer is $|\Psi_+\rangle + |\Psi_-\rangle$, wouldn't that mean we can also perform coherent transfer through $|\Psi_\pm\rangle$? Or would the system fall into a specific eigenstate instead?
Answer: With $\Omega_P = 0$ (with non-zero $\Omega_S$) the dark state is $| 1 \rangle$, as you say, and when $\Omega_S = 0$ (with non-zero $\Omega_P$) then the dark state is $| 3 \rangle$. The idea is that the system state should be dark throughout.
If you start with $\Omega_S \ne 0,\;\Omega_P = 0$ then the system will first evolve towards $| 1 \rangle$ by optical pumping (before the STIRAP sequence commences). If it is not already in $| 1 \rangle$ then this involves photon scattering, which of course is what we normally wish to avoid in STIRAP.
If you start with $\Omega_P \ne 0,\;\Omega_S = 0$ then the system will first evolve towards $| 3 \rangle$ by optical pumping (before the STIRAP sequence commences). If it is not already in $| 3 \rangle$ then this involves photon scattering, which of course is what we normally wish to avoid in STIRAP. The STIRAP sequence (with $\Omega_P$ slowly switched off while $\Omega_S$ is slowly switched on) will then transfer the system to $| 1 \rangle$ while remaining in a dark state throughout. | {
"domain": "physics.stackexchange",
"id": 93203,
"tags": "quantum-mechanics, quantum-optics, approximations, adiabatic"
} |
Capacitance of bodies with different charge | Question: How does one calculate the capacitance of two bodies with different charges? I was looking at coefficients of potential, but they don't seem helpful.
Answer:
How does one calculate the capacitance of two bodies with different charges?
To be clear, capacitance doesn't depend on the amount of charge; the capacitance is determined by the geometry of the bodies.
If you have two conductors, there are actually three capacitances to consider, the self-capacitance of each and mutual capacitance of the two conductors.
In electrical circuits, the term capacitance is usually a shorthand for the mutual capacitance between two adjacent conductors, such as the two plates of a capacitor. However, for an isolated conductor there also exists a property called self-capacitance, which is the amount of electrical charge that must be added to an isolated conductor to raise its electrical potential by one unit (i.e. one volt, in most measurement systems).[20] The reference point for this potential is a theoretical hollow conducting sphere, of infinite radius, centered on the conductor.
Let $Q_1$ be the charge on conductor 1 and $Q_2$ the charge on conductor 2.
Further let $C_1$ be the self-capacitance of conductor 1, $C_2$ the self-capacitance of conductor 2, and $C_{12}$ the mutual capacitance.
We can then write:
$$Q_1 = C_1 V_1 + C_{12}(V_1 - V_2)$$
$$Q_2 = C_2 V_2 + C_{12}(V_2 - V_1)$$
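Given the charges and the three capacitances, these two equations form a linear system for the potentials $V_1$ and $V_2$, which can be solved directly. A small numpy sketch with made-up values (the numbers are purely illustrative):

```python
import numpy as np

# Hypothetical values, just to illustrate the linear system (farads, coulombs)
C1, C2, C12 = 5e-12, 8e-12, 100e-12
Q1, Q2 = 2e-9, -1e-9

# Q1 = C1*V1 + C12*(V1 - V2),  Q2 = C2*V2 + C12*(V2 - V1)
M = np.array([[C1 + C12, -C12],
              [-C12,      C2 + C12]])
V1, V2 = np.linalg.solve(M, np.array([Q1, Q2]))
```

The coefficient matrix has determinant $C_1 C_2 + C_{12}(C_1 + C_2) > 0$, so the system always has a unique solution.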
For an intentional capacitor, the self-capacitance of the conductors is insignificant, i.e., the mutual capacitance dwarfs the self-capacitance of the conductors and we speak of the capacitance of the capacitor which is understood to be the mutual capacitance. | {
"domain": "physics.stackexchange",
"id": 12297,
"tags": "capacitance"
} |
Ancilla qubit error spreading to multiple data qubits? | Question: In this paper, the author considers a four qubit stabilizer circuit shown below. Note that the first four qubits from the top are data qubits and can be in any state (here we just put them in $\vert 0000\rangle$). The last qubit is the ancilla qubit used for the stabilizer measurement.
The claim is that an error on the ancilla qubit can "propagate to multiple data qubits". What does that exactly mean?
Concretely, suppose I have an $X$ error after the Hadamard on the ancilla qubit. Then, it appears to do nothing since the state is already in the $\vert +\rangle$ state. If I had a $Z$ error, then the ancilla gets flipped to $\vert -\rangle$ and then we apply the stabilizer measurement. How do I see this as an error on the data qubits?
Answer: Consider the following circuit.
Let the $|\psi\rangle$ state be some arbitrary state
$$|\psi\rangle = a|0\rangle + b|1\rangle$$
If there are no errors, then you can compute
$$|\psi\rangle |+\rangle \to |\psi\rangle |+\rangle$$
$$|\psi\rangle |-\rangle \to Z|\psi\rangle |-\rangle$$
Thus, when the target qubit is $|+\rangle$, the control qubit is unchanged.
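Both identities, and the phase-flip case discussed next, can be verified with a direct state-vector computation. A minimal numpy sketch (my own; the first qubit is the control):

```python
import numpy as np

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = (zero + one) / np.sqrt(2), (zero - one) / np.sqrt(2)
Z = np.diag([1.0, -1.0])
# CNOT with the first (control) qubit in the high-order position
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

a, b = 0.6, 0.8                               # arbitrary normalized amplitudes
psi = a * zero + b * one

no_error = CNOT @ np.kron(psi, plus)          # target |+>: control untouched
with_error = CNOT @ np.kron(psi, Z @ plus)    # Z error flips the target to |->
```

Comparing `with_error` against $Z|\psi\rangle \otimes |-\rangle$ confirms that the single target-qubit error ends up on both qubits.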
However, now suppose a phase-flip error happens to the target qubit right before the CNOT gate, when the target qubit is $|+\rangle$.
Since this flips the target qubit to $|-\rangle$, this means that the control qubit state becomes $Z|\psi\rangle$, whereas it should have been $|\psi\rangle$:
$$|\psi\rangle |+\rangle \xrightarrow{Z_2 \text{error}} |\psi\rangle |-\rangle \xrightarrow{\text{CNOT}} Z|\psi\rangle |-\rangle \equiv (Z|\psi\rangle)(Z |+\rangle)$$
whereas, without the $Z_2$ error, the state would've been just $|\psi\rangle |+\rangle\,.$ So you can see here that a single error on the target qubit became two errors. | {
"domain": "quantumcomputing.stackexchange",
"id": 5512,
"tags": "error-correction, quantum-circuit, stabilizer-code"
} |
Setting unique code to model before creating (ruby/rails) | Question: I have a Table model that needs a unique code. I have a method creating the code (and making sure it's unique), but I have the same line self.code = rand.to_s[2..5] twice. Is there a neat way to only have that line once?
class Table < ActiveRecord::Base
before_create :set_code
private
def set_code
self.code = rand.to_s[2..5]
while(Table.find_by(code: code) != nil)
self.code = rand.to_s[2..5]
end
end
end
Answer: It's the same in Ruby as in other languages where you want a loop to always run at least once: Move the condition to the end.
begin
code = ...
end until(Table.find_by(code: code).nil?)
Ruby's syntax is a little odd ("end until"), but it's the same as a do { ... } while() in C-like languages.
However, the overall system you're using is really strange. If all you want is a unique number for the new table, just use id. That's what it's for. It's set automatically at the database layer, so there'll be no collisions.
If you just want 4 random digits, don't make a random float (which may not have enough digits, e.g. 0.0), convert it to a string and pull out a section. Just call rand(1000..9999).to_s. Or rand(10_000).to_s.rjust(4, '0') to get a number padded with leading zeros if necessary.
However, anything you do in the application layer is vulnerable to race conditions! Between the time you check if a code already exists in the database, and the time you save your record, that code may have been added, giving you duplicates. E.g. 2 threads both check for the code 1234 at the same time, and both see that it's not in the database, so both then save records with that same code. Oops.
The proper way to avoid such things is to add a uniqueness constraint in the database itself, and rescue the resulting RecordNotUnique error that'll get raised.
Here's an article discussing other ways to generate unique tokens, and a follow-up on how to handle collisions.
However: just use id. Or find a gem that generates unique tokens for you. | {
"domain": "codereview.stackexchange",
"id": 25698,
"tags": "ruby, ruby-on-rails"
} |
Difference between torsion, out of plane, coplanar and perpendicular bends? | Question: In Infrared spectroscopy, stretches are easily understandable. But, how do I visualize (or conceptually understand) the difference between dihedral (torsions), out of plane bends, coplanar bends, and perpendicular bends? Can you give examples?
Answer: A molecule with N atoms has 3N degrees of freedom
3 translational modes
if it is linear, it will have 2 rotational modes; if non-linear 3
This leaves 3N-6 (3N-5 if linear) vibrational modes. Vibrational modes are further subdivided into stretching, bending or torsional motions.
Stretching motions can be symmetric or asymmetric
Bending motions are often referred to as scissoring, twisting, wagging, and rocking
Torsional motions refer to rotations about bonds
This diagram illustrates these various stretching and bending vibrations
Depending on the geometric relationship between the transition moment for a specific vibrational mode and the symmetry axis for the mode, vibrational modes may be further classified as parallel or perpendicular.
Here is a link to a 31 page article (slide show) that has a lot of background and pictures on this subject. Section 4.11 (page 19) might be particularly interesting to you.
One final example that you might be interested in, formaldehyde has 4 atoms and is non-linear so it will have 6 vibrational modes. These are all pictured on page 2 in this link. | {
"domain": "chemistry.stackexchange",
"id": 1587,
"tags": "physical-chemistry, molecules, spectroscopy, molecular-structure, ir-spectroscopy"
} |
PCL : computing FPFH at given keypoints | Question:
Suppose I have a bunch of keypoints detected by 2D SIFT or SURF and I want to calculate FPFH descriptors at those keypoints. How can I do this?
Originally posted by Pavel on ROS Answers with karma: 3 on 2013-03-04
Post score: 0
Answer:
There is a PCL tutorial to show how to do it
Originally posted by kalectro with karma: 1554 on 2013-03-04
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Pavel on 2013-03-11:
@kalectro: thank you, but I'd like to compute FPFH for single points and not for an entire point cloud. How can I do this?
Comment by kalectro on 2013-03-11:
As far as I know FPFH descriptors need a point cloud to work with. So do the following:
If you have a bunch of keypoints, just create a new empty point cloud and add the keypoints one by one.
Comment by Pavel on 2013-03-11:
@kalectro. Thank you again. I thought of that, but for computing a local descriptor the points around a keypoint should be needed as well. Am I wrong?
Comment by kalectro on 2013-03-11:
You are right, I did not read the tutorial and only assumed it would work like the SHOT descriptor. Try to ask this question on pcl-users.org
Comment by Pavel on 2013-03-11:
Already did that but unfortunately no one has answered (http://www.pcl-users.org/Computing-FPFH-at-given-keypoints-td4026504.html). Thank you anyway.
Comment by hamed12 on 2021-11-07:
How do I save the descriptor output in PCL and move it to MATLAB?
"domain": "robotics.stackexchange",
"id": 13177,
"tags": "pcl"
} |
Set default value for remaining size of list by use of Java 8 and Stream | Question: I have written a method which will return a size-16 List every time. If the input List has a size less than 16, the remaining slots will be filled with default values. Is there any way to remove the if and for conditions from the enrichAddress method?
@Component
public class TestService{
public List<TestA> enrichAddress(List<TestB> cardAccountDetails) {
if (CollectionUtils.isEmpty(cardAccountDetails)) return Collections.emptyList();
List<TestA> testAList = cardAccountDetails.stream().map(TestService::buildAddress).collect(Collectors.toList());
if (testAList.size() < 16) {
int accountCounts = testAList.size();
for (; accountCounts < 16; accountCounts++) {
testAList.add(buildDefaultAddress());
}
}
return testAList;
}
private TestA buildDefaultAddress() {
TestA testA = new TestA();
testA.setA("NONE");
testA.setAccS("TEST_STATUS");
testA.setAccT("TEST_ACCOUNT_TYPE");
testA.setADes("");
return testA;
}
private static TestA buildAddress(TestB testB) {
TestA testA = new TestA();
testA.setDes(testB.getDes());
testA.setAcc(testB.getAcN());
testA.setAccSt(testB.getAccS());
testA.setAccT(testB.getAcCT());
testA.setAc(testB.getAct());
return testA;
}
}
Answer: As mtj already mentioned in the comment, adding stream trickery to fulfill this need only makes the code less readable. Streams are nifty, exciting and popular but they are not always the right tool for the job. Sometimes a plain loop is still the best choice.
The common pattern for padding a collection (well, most often it's a string that gets padded) to a certain size is to use a while loop. It has the least amount of excess variables and code, and the syntax communicates intent quite naturally ("while size is less than 16, do this"). But this still only tells the reader that you are intentionally padding the collection to 16 elements. You also need to document in comments the reason why you are padding it to 16 elements.
List<Address> addresses = cardAccountDetails
.stream()
.map(AccountService::buildAddress)
.collect(Collectors.toList());
while (addresses.size() < 16) {
addresses.add(buildDefaultAddress());
} | {
"domain": "codereview.stackexchange",
"id": 38723,
"tags": "java, stream, hash-map"
} |
robot_pose_ekf is there a way to override the tf base_footprint | Question:
robot_pose_ekf publishes a tf: odom_combined → base_footprint. I would like to change it to base_link. How is this best done, rather than adding a base_footprint to my URDF?
Originally posted by rnunziata on ROS Answers with karma: 713 on 2013-12-18
Post score: 0
Answer:
I added the base_footprint and static tf to base_link. This seemed to be the easiest solution that works.
Originally posted by rnunziata with karma: 713 on 2014-01-07
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 16485,
"tags": "ros"
} |
Intermolecular Forces for Ionic Compound? | Question: So in class we have learned London dispersion, dipole-dipole, ion-dipole and hydrogen bonding for intermolecular forces. Our teacher always uses covalent molecules as examples. So I was wondering which intermolecular forces ionic compounds exhibit, and how do they arise?
Answer: Ions in ionic compounds are held together by electrostatic attractions, i.e. the idea that "opposite charges attract".
The strength of an electrostatic attraction is given by Coulomb's law:
$$F = \frac{1}{4\pi\varepsilon_0}\frac{q_1q_2}{r^2}$$
where $q_1$ and $q_2$ are the charges on the two ions and $r$ is the distance between them. In a completely ionic bond, $q_1$ and $q_2$ are multiples of the elementary charge, $e = 1.602 \times 10^{-19} \text{ C}$; however, no bond is completely ionic. | {
"domain": "chemistry.stackexchange",
"id": 17205,
"tags": "intermolecular-forces, covalent-compounds, ionic-compounds"
} |
Merge two OccupancyGridMaps | Question:
Hi there,
as the topic says: I want to merge/combine two OccupancyGridMaps.
The conditions:
I receive two messages (nav_msgs/OccupancyGrid) from two services. These two maps have the same resolution, but different positions and sizes (orientation can be omitted).
My question:
Is there an easy way to combine/overlay these maps and build a new map out of these? Or do I have to do this the hard way?
Kind regards, tiko
Originally posted by tiko on ROS Answers with karma: 13 on 2012-03-05
Post score: 1
Answer:
The occupancy_grid_utils package (http://www.ros.org/wiki/occupancy_grid_utils) provides a "union of grids" operation.
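As a rough illustration of what a union of two same-resolution, axis-aligned grids involves (this is a sketch, not the occupancy_grid_utils implementation; the conservative max-merge rule is my own assumption):

```python
import numpy as np

def union_grids(a, origin_a, b, origin_b, res):
    """Overlay two occupancy grids (2-D arrays, -1 = unknown, 0..100 = occupancy)
    sharing resolution `res`; origins are (x, y) positions in metres."""
    # Convert the origins to cell coordinates and find the combined bounding box.
    ax, ay = (int(round(o / res)) for o in origin_a)
    bx, by = (int(round(o / res)) for o in origin_b)
    x0, y0 = min(ax, bx), min(ay, by)
    x1 = max(ax + a.shape[1], bx + b.shape[1])
    y1 = max(ay + a.shape[0], by + b.shape[0])
    out = np.full((y1 - y0, x1 - x0), -1, dtype=int)  # start all-unknown
    for g, gx, gy in ((a, ax, ay), (b, bx, by)):
        view = out[gy - y0 : gy - y0 + g.shape[0], gx - x0 : gx - x0 + g.shape[1]]
        # Known values win over unknown (-1); on overlap, occupied wins over free.
        np.maximum(view, g, out=view)
    return out

a = np.zeros((2, 2), dtype=int)          # free 2x2 grid at (0, 0)
b = np.full((2, 2), 100, dtype=int)      # occupied 2x2 grid at (1, 1)
merged = union_grids(a, (0.0, 0.0), b, (1.0, 1.0), 1.0)
print(merged.shape)  # (3, 3)
```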
Originally posted by bhaskara with karma: 1479 on 2012-03-06
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 8494,
"tags": "ros, navigation, nav-msgs, occupancy-grid"
} |
Derivative of the electromagnetic tensor invariant $F_{\mu\nu}F^{\mu\nu}$ | Question: The electromagnetic field tensor is $F_{\mu\nu}=\partial_\mu A_\nu - \partial_\nu A_\mu$. I am trying to calculate the quantity
$$ \frac{\partial(F_{\alpha\beta}F^{\alpha\beta})}{\partial(\partial_{\mu}A_{\nu})}.
$$
This calculation arises when trying to derive the electromagnetic equations of motion (i.e. Maxwell's equations) from the Lagrangian $\mathcal{L}=C F_{\mu\nu}F^{\mu\nu}$. According to P. 14 of these online notes, this derivative is
$$ \frac{\partial(F_{\alpha\beta}F^{\alpha\beta})}{\partial(\partial_{\mu}A_{\nu})}=2F^{\alpha\beta}\frac{\partial F_{\alpha\beta}}{\partial(\partial_\mu A_{\nu})}
.$$
This result surprises me. I can use the product rule to find
$$ \frac{\partial(F_{\alpha\beta}F^{\alpha\beta})}{\partial(\partial_{\mu}A_{\nu})}=F_{\alpha\beta}\frac{\partial(F^{\alpha\beta})}{\partial(\partial_{\mu}A_{\nu})}+F^{\alpha\beta}\frac{\partial(F_{\alpha\beta})}{\partial(\partial_{\mu}A_{\nu})}
$$
and it is clear that if $F_{\alpha\beta}\frac{\partial(F^{\alpha\beta})}{\partial(\partial_{\mu}A_{\nu})}=F^{\alpha\beta}\frac{\partial(F_{\alpha\beta})}{\partial(\partial_{\mu}A_{\nu})}$ then you get the desired result. However, I can't see why this is true. In particular, I don't understand how to take the derivative
$$\frac{\partial(F^{\alpha\beta})}{\partial(\partial_{\mu}A_{\nu})}=\frac{\partial(\partial^{\alpha}A^{\beta}-\partial^{\beta}A^{\alpha})}{\partial(\partial_{\mu}A_{\nu})}=\frac{\partial(\partial^{\alpha}A^{\beta})}{\partial(\partial_{\mu}A_{\nu})}-\frac{\partial(\partial^{\beta}A^{\alpha})}{\partial(\partial_{\mu}A_{\nu})}
$$ where the downstairs part has lower indices and the upstairs part has upper indices.
Answer: For your first question: the components of the metric don't depend on $\partial_\mu A_\nu$, or for that matter, anything at all. So we have, e.g.
$$J^\mu \partial K_\mu = J^\mu \partial (\eta_{\mu\nu} K^\nu) = J^\mu \eta_{\mu\nu} \partial K^\nu = J_\nu \partial K^\nu = J_\mu \partial K^\mu$$
where $\partial$ stands for any kind of derivative whatsoever and $J$ and $K$ are arbitrary. The proof for your case is identical.
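A quick numeric sanity check of this identity with the Minkowski metric (my own illustration, not part of the notes):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)
rng = np.random.default_rng(0)
J_up = rng.normal(size=4)  # components J^mu
K_up = rng.normal(size=4)  # components K^mu

J_dn = eta @ J_up  # lower the index: J_mu = eta_{mu nu} J^nu
K_dn = eta @ K_up

# J^mu K_mu equals J_mu K^mu: the metric just slides between the two factors.
print(np.isclose(J_up @ K_dn, J_dn @ K_up))  # True
```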
For your second question: simply do the same thing. Note that
$$\frac{\partial J^\nu}{\partial J_\mu} = \frac{\partial (\eta^{\rho\nu} J_\rho)}{\partial J_\mu} = \eta^{\rho\nu}\frac{\partial J_\rho}{\partial J_\mu} = \eta^{\rho \nu} \delta^\mu_\rho = \eta^{\mu\nu}$$
where you can adapt this reasoning to your own example. After you do this a couple times, it becomes completely second nature, and you won't have to write out the steps. Everything works out exactly how you would expect, just "lining up the indices",
$$\frac{\partial J^\nu}{\partial J_\mu} = \eta^{\mu\nu}, \quad \frac{\partial J^\nu}{\partial J^\mu} = \eta^{\nu}_\mu, \quad \frac{\partial J_\nu}{\partial J_\mu} = \eta_\nu^\mu, \quad \frac{\partial J_\nu}{\partial J^\mu} = \eta_{\mu\nu}$$
where, in order to write all four results the same way, I defined $\eta^\mu_\nu = \delta^\mu_\nu$. | {
"domain": "physics.stackexchange",
"id": 49283,
"tags": "homework-and-exercises, electromagnetism, lagrangian-formalism, maxwell-equations, variational-calculus"
} |
How can I get the right sample in RRT star Dubins? | Question: I am trying to find a solution in S(1)*R^2 (x,y, orientation) with obstacles (refer to image) using RRT star and Dubins Model.
The code takes a long time to find a suitable random sample (x, y, theta) such that a successful Dubins path can be connected between the two points without the vehicle (a rectangle) colliding with any of the obstacles. A sample at the correct angle, so that the vehicle's path is collision-free, turns up only about 1 out of 100,000 random samples. This makes the code very slow even when my computer is at its full processing power. None of my internal routines take much time; I timed all of them, and only the search for that 1-in-100,000 sample makes the code so slow. I tried halving my discretization space, but the problem persists.
Answer: Apart from the fact that the collision-free region looks pretty tight, one of the main reasons why you got very few usable samples is that a connection attempt is made directly between a node in the tree and the sampled configuration. Due to that, in order for a sampled configuration to have any chance at all of a successful connection, it has to have its x and y coordinates being on the gray area, which is already relatively small compared to the whole map. Even when you have a sampled configuration falling in the gray area, you still need to take care of the rotation as well as making sure that those configurations can be connected by a relatively constrained Dubin path.
A better approach would be making use of a steering function. (Actually the RRT$^*$ paper also describes this a bit.) To make the point, let's consider when we use a straight line to connect two configurations. Let's say a sampled configuration is $q_\text{rand}$ and its nearest neighbor on the tree is $q_\text{near}$. Instead of connecting $q_\text{near}$ directly to $q_\text{rand}$, you try to connect $q_\text{near}$ to some $q'$ that lies on the straight line connecting $q_\text{near}$ and $q_\text{rand}$ but is closer to $q_\text{near}$. This is kind of steering a configuration from $q_\text{near}$ towards $q_\text{rand}$. It will increase the chance of success as it is not too greedy to always try to go all the way to a sampled configuration.
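In the straight-line case, the steering step can be sketched like this (hypothetical helper, with the configuration reduced to (x, y) for brevity):

```python
import math

def steer(q_near, q_rand, step):
    """Move from q_near toward q_rand, but at most `step` away."""
    dx, dy = q_rand[0] - q_near[0], q_rand[1] - q_near[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return q_rand  # already close enough: go all the way
    t = step / dist
    return (q_near[0] + t * dx, q_near[1] + t * dy)

print(steer((0.0, 0.0), (10.0, 0.0), 1.0))  # (1.0, 0.0)
```

The tree then grows toward the sample in small increments, so even samples that cannot be reached directly still pull the tree in useful directions.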
Of course, with Dubin paths, the actual implementation details are definitely going to be different. But the core idea stays the same: limit the RRT extension to be some step size from existing nodes on the tree. | {
"domain": "robotics.stackexchange",
"id": 1741,
"tags": "motion-planning, matlab, algorithm, path-planning, rrt"
} |
RTABMap Ground Plane Off | Question:
Hi,
I'm having problems getting the ground plane in rtabmap to show correctly. The picture below shows the ground plane beneath the grid floor. How can I correct this, please? I've tried some static transforms but they haven't worked.
https://www.dropbox.com/s/8rng8qk9c8kemln/Capture.PNG?dl=0
Incorrect Camera Optical Frame??
https://www.dropbox.com/s/xgminqnsig67a05/TF%20Camera%20Optical%20Frame.PNG?dl=0
Full TF tree:
https://www.dropbox.com/s/s8en0mc5wvlus1s/TF%20Tree.PNG?dl=0
RTABMap Startup Parameters
PARAMETERS
* /rosdistro: noetic
* /rosversion: 1.16.0
* /rtabmap/rgbd_odometry/approx_sync: False
* /rtabmap/rgbd_odometry/approx_sync_max_interval: 0.0
* /rtabmap/rgbd_odometry/config_path:
* /rtabmap/rgbd_odometry/expected_update_rate: 0.0
* /rtabmap/rgbd_odometry/frame_id: camera_link
* /rtabmap/rgbd_odometry/ground_truth_base_frame_id:
* /rtabmap/rgbd_odometry/ground_truth_frame_id:
* /rtabmap/rgbd_odometry/guess_frame_id:
* /rtabmap/rgbd_odometry/guess_min_rotation: 0.0
* /rtabmap/rgbd_odometry/guess_min_translation: 0.0
* /rtabmap/rgbd_odometry/keep_color: False
* /rtabmap/rgbd_odometry/max_update_rate: 0.0
* /rtabmap/rgbd_odometry/odom_frame_id: odom
* /rtabmap/rgbd_odometry/publish_tf: True
* /rtabmap/rgbd_odometry/queue_size: 10
* /rtabmap/rgbd_odometry/subscribe_rgbd: False
* /rtabmap/rgbd_odometry/wait_for_transform_duration: 0.2
* /rtabmap/rgbd_odometry/wait_imu_to_init: False
* /rtabmap/rtabmap/Mem/IncrementalMemory: true
* /rtabmap/rtabmap/Mem/InitWMWithAllNodes: false
* /rtabmap/rtabmap/approx_sync: False
* /rtabmap/rtabmap/config_path:
* /rtabmap/rtabmap/database_path: ~/.ros/rtabmap.db
* /rtabmap/rtabmap/frame_id: camera_link
* /rtabmap/rtabmap/gen_depth: False
* /rtabmap/rtabmap/gen_depth_decimation: 1
* /rtabmap/rtabmap/gen_depth_fill_holes_error: 0.1
* /rtabmap/rtabmap/gen_depth_fill_holes_size: 0
* /rtabmap/rtabmap/gen_depth_fill_iterations: 1
* /rtabmap/rtabmap/gen_scan: False
* /rtabmap/rtabmap/ground_truth_base_frame_id:
* /rtabmap/rtabmap/ground_truth_frame_id:
* /rtabmap/rtabmap/initial_pose:
* /rtabmap/rtabmap/landmark_angular_variance: 9999.0
* /rtabmap/rtabmap/landmark_linear_variance: 0.0001
* /rtabmap/rtabmap/map_frame_id: map
* /rtabmap/rtabmap/odom_frame_id:
* /rtabmap/rtabmap/odom_frame_id_init:
* /rtabmap/rtabmap/odom_sensor_sync: False
* /rtabmap/rtabmap/odom_tf_angular_variance: 0.001
* /rtabmap/rtabmap/odom_tf_linear_variance: 0.001
* /rtabmap/rtabmap/publish_tf: True
* /rtabmap/rtabmap/queue_size: 10
* /rtabmap/rtabmap/scan_cloud_max_points: 0
* /rtabmap/rtabmap/subscribe_depth: True
* /rtabmap/rtabmap/subscribe_odom_info: True
* /rtabmap/rtabmap/subscribe_rgb: True
* /rtabmap/rtabmap/subscribe_rgbd: False
* /rtabmap/rtabmap/subscribe_scan: False
* /rtabmap/rtabmap/subscribe_scan_cloud: False
* /rtabmap/rtabmap/subscribe_scan_descriptor: False
* /rtabmap/rtabmap/subscribe_stereo: False
* /rtabmap/rtabmap/subscribe_user_data: False
* /rtabmap/rtabmap/wait_for_transform_duration: 0.2
Realsense Startup Parameters
PARAMETERS
* /camera/realsense2_camera/accel_fps: 250
* /camera/realsense2_camera/accel_frame_id: camera_accel_frame
* /camera/realsense2_camera/accel_optical_frame_id: camera_accel_opti...
* /camera/realsense2_camera/align_depth: True
* /camera/realsense2_camera/aligned_depth_to_color_frame_id: camera_aligned_de...
* /camera/realsense2_camera/aligned_depth_to_fisheye1_frame_id: camera_aligned_de...
* /camera/realsense2_camera/aligned_depth_to_fisheye2_frame_id: camera_aligned_de...
* /camera/realsense2_camera/aligned_depth_to_fisheye_frame_id: camera_aligned_de...
* /camera/realsense2_camera/aligned_depth_to_infra1_frame_id: camera_aligned_de...
* /camera/realsense2_camera/aligned_depth_to_infra2_frame_id: camera_aligned_de...
* /camera/realsense2_camera/allow_no_texture_points: False
* /camera/realsense2_camera/base_frame_id: camera_link*
* /camera/realsense2_camera/calib_odom_file:
* /camera/realsense2_camera/clip_distance: -1.0
* /camera/realsense2_camera/color_fps: 15
* /camera/realsense2_camera/color_frame_id: camera_color_frame
* /camera/realsense2_camera/color_height: 480
* /camera/realsense2_camera/color_optical_frame_id: camera_color_opti...
* /camera/realsense2_camera/color_width: 640
* /camera/realsense2_camera/confidence_fps: 30
* /camera/realsense2_camera/confidence_height: 480
* /camera/realsense2_camera/confidence_width: 640
* /camera/realsense2_camera/depth_fps: 15
* /camera/realsense2_camera/depth_frame_id: camera_depth_frame
* /camera/realsense2_camera/depth_height: 480
* /camera/realsense2_camera/depth_optical_frame_id: camera_depth_opti...
* /camera/realsense2_camera/depth_width: 640
* /camera/realsense2_camera/device_type:
* /camera/realsense2_camera/enable_accel: True
* /camera/realsense2_camera/enable_color: True
* /camera/realsense2_camera/enable_confidence: True
* /camera/realsense2_camera/enable_depth: True
* /camera/realsense2_camera/enable_fisheye1: False
* /camera/realsense2_camera/enable_fisheye2: False
* /camera/realsense2_camera/enable_fisheye: True
* /camera/realsense2_camera/enable_gyro: True
* /camera/realsense2_camera/enable_infra1: True
* /camera/realsense2_camera/enable_infra2: True
* /camera/realsense2_camera/enable_infra: False
* /camera/realsense2_camera/enable_pointcloud: False
* /camera/realsense2_camera/enable_pose: False
* /camera/realsense2_camera/enable_sync: True
* /camera/realsense2_camera/filters:
* /camera/realsense2_camera/fisheye1_frame_id: camera_fisheye1_f...
* /camera/realsense2_camera/fisheye1_optical_frame_id: camera_fisheye1_o...
* /camera/realsense2_camera/fisheye2_frame_id: camera_fisheye2_f...
* /camera/realsense2_camera/fisheye2_optical_frame_id: camera_fisheye2_o...
* /camera/realsense2_camera/fisheye_fps: 15
* /camera/realsense2_camera/fisheye_frame_id: camera_fisheye_frame
* /camera/realsense2_camera/fisheye_height: 480
* /camera/realsense2_camera/fisheye_optical_frame_id: camera_fisheye_op...
* /camera/realsense2_camera/fisheye_width: 640
* /camera/realsense2_camera/gyro_fps: 400
* /camera/realsense2_camera/gyro_frame_id: camera_gyro_frame
* /camera/realsense2_camera/gyro_optical_frame_id: camera_gyro_optic...
* /camera/realsense2_camera/imu_optical_frame_id: camera_imu_optica...
* /camera/realsense2_camera/infra1_frame_id: camera_infra1_frame
* /camera/realsense2_camera/infra1_optical_frame_id: camera_infra1_opt...
* /camera/realsense2_camera/infra2_frame_id: camera_infra2_frame
* /camera/realsense2_camera/infra2_optical_frame_id: camera_infra2_opt...
* /camera/realsense2_camera/infra_fps: 15
* /camera/realsense2_camera/infra_height: 480
* /camera/realsense2_camera/infra_rgb: False
* /camera/realsense2_camera/infra_width: 640
* /camera/realsense2_camera/initial_reset: False
* /camera/realsense2_camera/json_file_path:
* /camera/realsense2_camera/linear_accel_cov: 0.01
* /camera/realsense2_camera/odom_frame_id: camera_odom_frame
* /camera/realsense2_camera/ordered_pc: False
* /camera/realsense2_camera/pointcloud_texture_index: 0
* /camera/realsense2_camera/pointcloud_texture_stream: RS2_STREAM_COLOR
* /camera/realsense2_camera/pose_frame_id: camera_pose_frame
* /camera/realsense2_camera/pose_optical_frame_id: camera_pose_optic...
* /camera/realsense2_camera/publish_odom_tf: True
* /camera/realsense2_camera/publish_tf: True
* /camera/realsense2_camera/reconnect_timeout: 6.0
* /camera/realsense2_camera/rosbag_filename:
* /camera/realsense2_camera/serial_no:
* /camera/realsense2_camera/stereo_module/exposure/1: 7500
* /camera/realsense2_camera/stereo_module/exposure/2: 1
* /camera/realsense2_camera/stereo_module/gain/1: 16
* /camera/realsense2_camera/stereo_module/gain/2: 16
* /camera/realsense2_camera/tf_publish_rate: 0.0
* /camera/realsense2_camera/topic_odom_in: camera/odom_in
* /camera/realsense2_camera/unite_imu_method: none
* /camera/realsense2_camera/usb_port_id:
* /camera/realsense2_camera/wait_for_device_timeout: -1.0
* /rosdistro: noetic
* /rosversion: 1.16.0
Cheers
Mark
Originally posted by MarkRobotics on ROS Answers with karma: 3 on 2023-03-20
Post score: 0
Original comments
Comment by matlabbe on 2023-03-22:
You may not enough karma to upload an image, can you add a link to image uploaded somewhere else?
Comment by MarkRobotics on 2023-03-24:
Thanks - I've added a link
Answer:
When the bottom of the point cloud doesn't align with the XY map plane, it means either rtabmap is not using the base frame of your robot, or the Z value of TF between base frame and camera frame is wrong.
Is this used on a robot? Set rtabmap's frame_id to base_footprint or base_link (the base frame of your robot directly on the floor) if it is not already done. If the base frame is already the right one, adjust TF or URDF between the base frame of the robot and the camera base frame.
Originally posted by matlabbe with karma: 6409 on 2023-03-24
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by MarkRobotics on 2023-03-26:
Hi Thanks for this - I've updated the question with more information re: your comments here. At the moment the camera isn't connected to the robot. Well not in the TF sense. Just realsense and rtabmap running.
I was looking at the camera_*_optical_frame - which might be the cause - though I'm not sure why given I'm using the scripts out of the box without any modification to try and understand where this problem is coming from. I've added a screen shot of this too.
Thanks
Mark
Comment by matlabbe on 2023-03-27:
In TF tree, the base frame is camera_frame, so it is normal that half of the image is below the XY plane. When you will be on a robot, you may have a frame base_link->camera_frame with z=0.5 meters for example, which will make the bottom of the cloud matching the XY plane if rtabmap is using base_link. You can simulate this with only the camera, by publishing:
rosrun tf static_transform_publisher 0 0 0.5 0 0 0 base_link camera_link 100
then set frame_id of rtabmap to base_link instead of camera_link.
Comment by MarkRobotics on 2023-03-27:
Thanks - That's done the job. Thanks for your help | {
"domain": "robotics.stackexchange",
"id": 38322,
"tags": "ros, slam, navigation, rtabmap"
} |
Why is 2's complement used for representing negative numbers? | Question: So I was reading about the complement operation in digital logic and found that it is used to produce the additive inverse of a number, since subtracting a number (the way we do by hand) is difficult for a computer. Instead, it adds the negative representation of the number: $a - b = a + (-b)$.
Now, both complements can be used to represent the negative of a number, so why is 2's complement preferred? This is my main question.
#include <iostream>
#include <bitset>
int main() {
std::cout << std::bitset<8>(7) << std::endl; // 00000111
std::cout << std::bitset<8>(-7) << std::endl; // 11111001
return 0;
}
Fyi, I know that in 1's complement there can be two zeros $+0$ or $-0$.
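For reference, a small Python sketch contrasting the two encodings, mirroring the C++ output above (not part of the original post):

```python
def ones_complement(n, bits=8):
    """One's complement encoding of -n: flip every bit of n."""
    return (~n) & ((1 << bits) - 1)

def twos_complement(n, bits=8):
    """Two's complement encoding of -n: flip every bit, then add one."""
    return (ones_complement(n, bits) + 1) & ((1 << bits) - 1)

print(format(twos_complement(7), "08b"))  # 11111001, matching bitset<8>(-7)
print(format(ones_complement(0), "08b"))  # 11111111: the "second zero" of one's complement
```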
Answer: As you said, one's complement wastes one configuration of bits.
Moreover, it requires additional logic to handle the two representations of zero. It also requires a more complex (hardware) implementation of addition
when the operands have different signs (in two's complement the addition implementation is exactly the same as the one for "unsigned" numbers). | {
"domain": "cs.stackexchange",
"id": 21538,
"tags": "binary, digital-circuits, bit-manipulation"
} |
Python: Astar algorithm implementation | Question: I have implemented the A* algorithm for a problem on an online judge relating to a maze, given start and end positions along with a grid representing the maze. I output the length of the path along with the path itself. The following is the implementation in Python using the Euclidean distance:
import heapq, math, sys
infinity = float('inf')
class AStar():
def __init__(self, start, grid, height, width):
self.start, self.grid, self.height, self.width = start, grid, height, width
class Node():
def __init__(self, position, fscore=infinity, gscore=infinity, parent = None):
self.fscore, self.gscore, self.position, self.parent = fscore, gscore, position, parent
def __lt__(self, comparator):
return self.fscore < comparator.fscore
def heuristic(self, end, distance = "Euclidean"):
(x1, y1), (x2, y2) = self.start, end
if (distance == "Manhattan"):
return abs(x1 - x2) + abs(y1 - y2)
return math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
def nodeNeighbours(self, pos):
(x, y) = pos
return [(dx, dy) for (dx, dy) in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)] if 0 <= dx < self.width and 0 <= dy < self.height and self.grid[dy][dx] == 0]
def getPath(self, endPoint):
current, path = endPoint, []
while current.position != self.start:
path.append(current.position)
current = current.parent
path.append(self.start)
return list(reversed(path))
def computePath(self, end):
openList, closedList, nodeDict = [], [], {}
currentNode = AStar.Node(self.start, fscore=self.heuristic(end), gscore = 0)
heapq.heappush(openList, currentNode)
while openList:
currentNode = heapq.heappop(openList)
if currentNode.position == end:
return self.getPath(currentNode)
else:
closedList.append(currentNode)
neighbours = []
for toCheck in self.nodeNeighbours(currentNode.position):
if toCheck not in nodeDict.keys():
nodeDict[toCheck] = AStar.Node(toCheck)
neighbours.append(nodeDict[toCheck])
for neighbour in neighbours:
newGscore = currentNode.gscore + 1
if neighbour in openList and newGscore < neighbour.gscore:
openList.remove(neighbour)
if newGscore < neighbour.gscore and neighbour in closedList:
closedList.remove(neighbour)
if neighbour not in openList and neighbour not in closedList:
neighbour.gscore = newGscore
neighbour.fscore = neighbour.gscore + self.heuristic(neighbour.position)
neighbour.parent = currentNode
heapq.heappush(openList, neighbour)
heapq.heapify(openList)
return None
if __name__ == '__main__':
sys.stdin = open('input.txt', 'r')
sys.stdout = open('output.txt', 'w')
matrix = [[int(num) for num in line.split()] for line in sys.stdin]
size = matrix.pop(0)
coordinates = matrix.pop(0)
n, m = size[0], size[1]
x1, y1, y2, x2 = coordinates[0], coordinates[1], coordinates[2], coordinates[3]
path = AStar((x1-1, y1-1), matrix, n, m).computePath((y2-1, x2-1))
print(len(path))
for pos in path:
print(pos[0] + 1, pos[1] + 1)
Answer: self.start, self.grid, self.height, self.width = start, grid, height, width
I would not put these all on the same line like that. I think it would be much easier to read spread over multiple lines:
self.start = start
self.grid = grid
self.height = height
self.width = width
I would probably have the Node class as toplevel instead of nested. I don't think you're gaining much by having it inside AStar. You could name it _Node to make it "module-private" so that attempting to import it to another file will potentially raise warnings.
In Node's __lt__ implementation, I wouldn't call the second parameter comparator. A comparator is something that compares, whereas in this case, that's just another node. other_node or something would be more appropriate.
In heuristic, I'd personally make use of an else there:
if (distance == "Manhattan"):
return abs(x1 - x2) + abs(y1 - y2)
else:
return math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
It makes it clearer that only one of the lines will be executed. Personally, I only neglect the else in a case like that if the if was an "early exit" precondition check, and I want to avoid nesting the entire rest of the function inside a block. That's not a problem here though.
nodeNeighbours (which should be node_neighbours) would be cleaner broken over several lines:
def nodeNeighbours(self, pos):
(x, y) = pos
return [(dx, dy)
for (dx, dy) in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
if 0 <= dx < self.width and 0 <= dy < self.height and self.grid[dy][dx] == 0]
I think that makes it a lot easier to see what's going on in it.
Again, in many places you're assigning two or more variables on one line:
(x1, y1), (x2, y2) = self.start, end
current, path = endPoint, []
openList, closedList, nodeDict = [], [], {}
x1, y1, y2, x2 = coordinates[0], coordinates[1], coordinates[2], coordinates[3]
I would break those up. Especially once you get to 3+ on a line, for the reader to see what variable matches up with what value, they'll need to count from the left instead of just checking what's on each side of a =.
In computePath, it seems like closedList should be a set. It doesn't appear as though order matters with it, and neighbour in closedList will be faster with a set than it will with a list. It looks like openList is required to be a list, though, due to it being passed to heapify.
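The usual workaround is a heap for the open list plus a set for membership tests, with "lazy deletion" of stale heap entries (a sketch, not the poster's code):

```python
import heapq

open_heap = [(3, "c"), (1, "a"), (2, "b")]  # (fscore, node) pairs
heapq.heapify(open_heap)
closed = set()  # O(1) membership checks, unlike `node in some_list`

visit_order = []
while open_heap:
    fscore, node = heapq.heappop(open_heap)
    if node in closed:
        continue  # stale duplicate left in the heap; skip instead of removing
    closed.add(node)
    visit_order.append(node)

print(visit_order)  # ['a', 'b', 'c']
```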
I don't think I'd reassign stdin and stdout. The reassignment of stdin seems completely unnecessary, and changing stdout will make it harder to debug later using print statements. You don't necessarily want all printed text to be sent to the file.
If need-be, you can specify what file you want printed to when printing:
with open('output.txt', 'w') as out_f:
print("To file!", file=out_f) | {
"domain": "codereview.stackexchange",
"id": 39300,
"tags": "python, algorithm, game, pathfinding, a-star"
} |
Are uniform acceleration and uniform motion the same? | Question: Any answers are appreciated, thanks. :D
Answer: No!
Uniform motion is motion in which velocity is constant and acceleration is 0.
Uniform acceleration: motion in which the acceleration is constant, so the velocity is increasing or decreasing at a steady rate, or even just changing direction, as in uniform circular motion (where the acceleration is constant in magnitude).
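Numerically, the distinction is just whether the acceleration (the rate of change of velocity) is zero or a non-zero constant (illustration only):

```python
def v_uniform(t, v=2.0):
    return v  # uniform motion: velocity never changes

def v_uniform_accel(t, v0=1.0, a=3.0):
    return v0 + a * t  # uniform acceleration: velocity changes at a steady rate

dt = 1e-6  # finite-difference step for estimating acceleration
accel_uniform = (v_uniform(1 + dt) - v_uniform(1)) / dt
accel_accel = (v_uniform_accel(1 + dt) - v_uniform_accel(1)) / dt
print(round(accel_uniform, 3), round(accel_accel, 3))  # 0.0 3.0
```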
Note: Every uniform motion trivially has uniform (zero) acceleration, but the converse does not always hold. | {
"domain": "physics.stackexchange",
"id": 69854,
"tags": "kinematics, acceleration, terminology, velocity"
} |
Deck of Cards written in C++ | Question: I created a deck of cards that I used in a blackjack game. I stripped out my blackjack logic and left just the Deck, Card, and a minimal Dealer.
The Deck class is there because I want to create additional decks such as Pinochle or Skip-Bo.
I thought about using an enum class for card objects or a switch statement for creating cards.
I wasn't sure about shuffle(), so I just use randomizeDeck(). Almost all tutorials use rand() with time(); I read that it's a weak generator and another one should be used. I've used my random setup in all kinds of projects.
I'm mainly worried about size and speed. Is the deck as small as it can be, and is it using RAM as efficiently as possible?
#include<iostream>
#include<string>
#include<vector>
#include<random>
#include<cstdlib> // for exit()
using namespace std;
class Card {
public:
Card() {}
~Card() {}
unsigned value=0;
string suit="";
string name="";
string rank="";
};
class Deck {
public:
Deck() {}
void makeDeck();
vector<Card> cards;
vector<Card> suit;
Card card;
};
void Deck::makeDeck() {
for(int i=0; i<4; i++) {
for(int j=1; j<14; j++) {
if(i==0)card.suit="HEARTS";
if(i==1)card.suit="DIAMONDS";
if(i==2)card.suit="CLUBS";
if(i==3)card.suit="SPADES";
if(j==1) {
card.name="ACE";
card.value=j;
card.rank=j;
cards.push_back(card);
}
else if(j==2) {
card.name="TWO";
card.value=j;
card.rank=j;
cards.push_back(card);
}
else if(j==3) {
card.name="THREE";
card.value=j;
card.rank=j;
cards.push_back(card);
}
else if(j==4) {
card.name="FOUR";
card.value=j;
card.rank=j;
cards.push_back(card);
}
else if(j==5) {
card.name="FIVE";
card.value=j;
card.rank=j;
cards.push_back(card);
}
else if(j==6) {
card.name="SIX";
card.value=j;
card.rank=j;
cards.push_back(card);
}
else if(j==7) {
card.name="SEVEN";
card.value=j;
card.rank=j;
cards.push_back(card);
}
else if(j==8) {
card.name="EIGHT";
card.value=j;
card.rank=j;
cards.push_back(card);
}
else if(j==9) {
card.name="NINE";
card.value=j;
card.rank=j;
cards.push_back(card);
}
else if(j==10) {
card.name="TEN";
card.value=10;
card.rank=j;
cards.push_back(card);
}
else if(j==11) {
card.name="JACK";
card.value=10;
card.rank=j;
cards.push_back(card);
}
else if(j==12) {
card.name="QUEEN";
card.value=10;
card.rank=j;
cards.push_back(card);
}
else if(j==13) {
card.name="KING";
card.value=10;
card.rank=j;
cards.push_back(card);
}
}
}
}
//end of deck class
class Dealer {
public:
Dealer() {}
void playAgain();
private:
void getCard(vector<Card> &hand);
void printHand(vector<Card> &hand);
int randomizeDeck(vector<Card> &cards);
Deck deck;
vector<Card> player;
vector<Card> house;
int credits=100;
int bet=0;
};
void Dealer::playAgain() {
deck.makeDeck();
printHand(deck.cards);
char choice='y';
cout<<endl<<endl<<"============================="<<endl<<endl;
cout<<endl<<"Would you like to play again?(y/n): ";
cin>>choice;
if(choice=='y') {
//deal21();
deck.makeDeck();
printHand(deck.cards);
} else {
cout<<"goodbye";
exit(0);
}
}
void Dealer::getCard(vector<Card> &hand) {
int pick=randomizeDeck(deck.cards);
hand.push_back(deck.cards[pick]);
deck.cards[pick]=deck.cards.back();
deck.cards.pop_back();
}
void Dealer::printHand(vector<Card> &hand) {
for(int i=0; i<hand.size(); i++) {
cout<<hand[i].name<<" of "<< hand[i].suit<<endl;
}
}
int Dealer::randomizeDeck(vector<Card> &cards) {
int max=cards.size()-1;
random_device rd;
mt19937 numGen(rd());
static uniform_int_distribution<int> roll(0,max);
return roll(numGen);
}
int main() {
Dealer dealer;
dealer.playAgain();
return 0;
}
Answer: Design review
Card
Let’s start by looking at the Card class. As designed, Card has 4 data members:
value
suit
name
rank
value is an unsigned int, while the others are all std::strings.
This is enormously wasteful. A card really only has two properties: rank and suit. If you know the rank, you know the value and the name, so duplicating that data in the class is just wasting space.
I don’t even see how your code compiles, because rank is a string, but in Deck::makeDeck() you try to shove integer values into it.
In addition, using the wrong types also wastes space. The suit doesn’t need to be a string. It can have only one of 4 possible values. You can fit the suit in a single byte. Using a string is not only ~20–50 times bigger, it might also require memory allocations, and every operation becomes more expensive (for example, comparing suits could be a simple byte value compare, rather than a much more expensive string comparison).
I think the first thing you need to look at is enumerations. Enumerations are perfect for when you have something that has a fixed, small number of values. Like a card suit, for example. You could do:
enum class suit_t
{
hearts,
diamonds,
clubs,
spades
};
That’s it! To make it even better, you can specify that it has to be a single byte in size:
enum class suit_t : unsigned char
{
hearts,
diamonds,
clubs,
spades
};
Then if you need things like conversion to a string for printing, you can add functions or stream inserters as needed:
enum class suit_t : unsigned char
{
hearts,
diamonds,
clubs,
spades
};
// a to_string() function:
auto to_string(suit_t s) -> std::string
{
switch (s)
{
case suit_t::hearts:
return "hearts";
case suit_t::diamonds:
return "diamonds";
case suit_t::clubs:
return "clubs";
case suit_t::spades:
return "spades";
}
}
// a stream inserter:
template <typename CharT, typename Traits>
auto operator<<(std::basic_ostream<CharT, Traits>& out, suit_t s)
-> std::basic_ostream<CharT, Traits>&
{
switch (s)
{
case suit_t::hearts:
out << "hearts";
break;
case suit_t::diamonds:
out << "diamonds";
break;
case suit_t::clubs:
out << "clubs";
break;
case suit_t::spades:
out << "spades";
break;
}
return out;
}
You can do the same thing for the rank:
enum class rank_t : unsigned char
{
ace = 1,
two = 2,
three = 3,
four = 4,
five = 5,
six = 6,
seven = 7,
eight = 8,
nine = 9,
ten = 10,
jack = 11,
queen = 12,
king = 13
};
Then you can add whatever functions you need:
constexpr auto to_value(rank_t r) noexcept -> int
{
switch (r)
{
case rank_t::jack:
[[fallthrough]];
case rank_t::queen:
[[fallthrough]];
case rank_t::king:
return 10;
default:
return static_cast<int>(r);
}
}
And with those, making a card type is trivial:
struct card_t
{
rank_t rank;
suit_t suit;
};
That’s about all you need! Then of course you can add useful functions, like a stream inserter:
template <typename CharT, typename Traits>
auto operator<<(std::basic_ostream<CharT, Traits>& out, card_t c)
-> std::basic_ostream<CharT, Traits>&
{
if (out)
{
auto oss = std::basic_ostringstream<CharT, Traits>{};
oss << c.rank << " of " << c.suit;
out << oss.view();
}
return out;
}
And to really illustrate how big a difference this makes, compare the sizes of your Card and my card_t. Using Clang and libc++:
Card : 80
card_t: 2
Using GCC:
Card : 104
card_t: 2
You see? By using enumerations, the card class is ~40–50 times smaller. Assuming a cache line size of 64 bytes, you can actually fit an entire deck of card_t in just two cache lines, whereas not even a single Card would fit. (And if you were really clever, you could get the size of card_t down to a single byte, so a whole deck could fit in a cache line.) And operations like comparing two cards will be thousands of times faster.
If I were designing a card class, I would make both the suit and the rank inner types of that class, to keep everything clean:
struct card_t
{
enum class rank_t : unsigned char
{
ace = 1,
two = 2,
three = 3,
four = 4,
five = 5,
six = 6,
seven = 7,
eight = 8,
nine = 9,
ten = 10,
jack = 11,
queen = 12,
king = 13
};
enum class suit_t : unsigned char
{
hearts,
diamonds,
clubs,
spades
};
rank_t rank = rank_t::ace;
suit_t suit = suit_t::spades;
constexpr auto operator==(card_t const&) const noexcept -> bool = default;
};
If you want to support more deck types, like including the knight rank, you can just add that into the rank_t enumeration (you could adjust the number values so the knight rank is 12, the queen is 13, and the king is 14). If you want more suits, like stars or waves, no problem, just add those to suit_t. If you want to add support for jokers, trumps, or the fool, things get a little more complicated, but not ridiculously so. You could add a “none” suit… or maybe special “white” and “black” suits for the jokers. Or you could use a variant to distinguish between suited cards, trumps, jokers, and the fool.
Deck
Once you’ve shrunk the size of the card class, the deck class will also be much more efficient.
Your deck class has 3 data members, but I don’t see the point of suit, and card seems to be a misguided way to have a local variable in your makeDeck() function. The only data member you actually need is cards.
Now, cards is a vector, and that’s not bad… but consider that you don’t really need the full power of a vector. You know in advance that there can never be more than 52 cards in your deck. Given that, you could use something like Boost’s static_vector, and avoid any dynamic allocation at all.
But for a start, vector is fine.
However, what the deck class really needs is some member functions, because if you’re just going to expose the cards data member, you might as well not have a deck class at all: just use a vector of cards. Designing a good interface is an art and a science, but a decent deck class might look something like this:
class deck_t
{
public:
// creates a full deck of 52 cards
static auto full_deck() -> deck_t;
// creates an empty deck
deck_t();
// creates a deck with the cards given
template <std::input_iterator It, std::sentinel_for<It> Sen>
deck_t(It, Sen);
template <std::ranges::input_range Rng>
deck_t(Rng&&);
// sort the deck
auto sort() -> void;
template <typename Cmp>
auto sort(Cmp) -> void;
// shuffle the deck
template <typename Gen>
requires std::uniform_random_bit_generator<std::remove_reference_t<Gen>>
auto shuffle(Gen&&) -> void;
// draw cards from the top of the deck
auto draw() -> card_t;
auto draw(std::size_t) -> std::vector<card_t>;
template <std::output_iterator<card_t> O>
auto draw(std::size_t, O) -> O;
// draw cards randomly from the deck
template <typename Gen>
requires std::uniform_random_bit_generator<std::remove_reference_t<Gen>>
auto draw_randomly(Gen&&) -> card_t;
template <typename Gen>
requires std::uniform_random_bit_generator<std::remove_reference_t<Gen>>
auto draw_randomly(std::size_t, Gen&&) -> std::vector<card_t>;
template <std::output_iterator<card_t> O, typename Gen>
requires std::uniform_random_bit_generator<std::remove_reference_t<Gen>>
auto draw_randomly(std::size_t, O, Gen&&) -> O;
// place cards onto the top of deck
auto replace(card_t) -> void;
template <std::input_iterator It, std::sentinel_for<It> Sen>
auto replace(It, Sen) -> It;
template <std::ranges::input_range Rng>
auto replace(Rng&&) -> std::ranges::borrowed_iterator_t<Rng>;
// place cards at the bottom of the deck
auto place_at_bottom(card_t) -> void;
template <std::input_iterator It, std::sentinel_for<It> Sen>
auto place_at_bottom(It, Sen) -> It;
template <std::ranges::input_range Rng>
auto place_at_bottom(Rng&&) -> std::ranges::borrowed_iterator_t<Rng>;
// place cards randomly into the deck
template <typename Gen>
requires std::uniform_random_bit_generator<std::remove_reference_t<Gen>>
auto place_randomly(card_t, Gen&&) -> void;
template <std::input_iterator It, std::sentinel_for<It> Sen, typename Gen>
requires std::uniform_random_bit_generator<std::remove_reference_t<Gen>>
auto place_randomly(It, Sen, Gen&&) -> It;
template <std::ranges::input_range Rng, typename Gen>
requires std::uniform_random_bit_generator<std::remove_reference_t<Gen>>
auto place_randomly(Rng&&, Gen&&) -> std::ranges::borrowed_iterator_t<Rng>;
// move some cards from the top of this deck to the top of another
auto transfer_card_to(deck_t&) -> void;
auto transfer_cards_to(deck_t&, std::size_t) -> void;
auto transfer_all_cards_to(deck_t&) -> void;
// create a new deck from some part of the top of this deck
auto draw_deck(std::size_t) -> deck_t;
// create a new deck by drawing randomly from this deck
template <typename Gen>
requires std::uniform_random_bit_generator<std::remove_reference_t<Gen>>
auto draw_deck_randomly(std::size_t, Gen&&) -> deck_t;
// standard container functions //////////////////////////////////////////
auto begin() const noexcept -> card_t const*;
auto end() const noexcept -> card_t const*;
auto cbegin() const noexcept -> card_t const*;
auto cend() const noexcept -> card_t const*;
auto rbegin() const noexcept -> std::reverse_iterator<card_t const*>;
auto rend() const noexcept -> std::reverse_iterator<card_t const*>;
auto crbegin() const noexcept -> std::reverse_iterator<card_t const*>;
auto crend() const noexcept -> std::reverse_iterator<card_t const*>;
auto front() const -> card_t const&;
auto back() const -> card_t const&;
auto operator[](std::size_t) const -> card_t const&;
auto at(std::size_t) const -> card_t const&;
auto empty() const noexcept -> bool;
auto size() const noexcept -> std::size_t;
auto max_size() const noexcept -> std::size_t; // should return 52
};
You may not need all the functions in that interface, but some of them will sure come in handy. This is what a simple game might look like with this interface:
// start with a full deck, and shuffle it
auto deck = deck_t::full_deck();
// start a new game
while (playing)
{
deck.shuffle(rng);
// also need a discard pile
// (though we could just "discard" by putting cards at the bottom of the deck)
auto discard_pile = deck_t{};
// let's say there are 4 players, each with their own hand
// (we can reuse the deck class for a hand)
constexpr auto player_count = std::size_t{4};
auto player_hands = std::array<deck_t, player_count>{};
// now each player draws 5 cards
for (auto&& player_hand : player_hands)
player_hand = deck.draw_deck(5);
// now each player examines their hand
std::ranges::for_each(player_hands, [](auto&& hand) { hand.sort(); });
// now give each player a chance to exchange cards to get a better hand
for (auto player = std::size_t{0}; player < player_count; ++player)
{
auto exchange_count = // ask player how many cards to exchange
player_hands[player].transfer_cards_to(discard_pile, exchange_count);
deck.transfer_cards_to(player_hands[player], exchange_count);
}
// now sort by the score of the hand
std::ranges::sort(player_hands, hand_score_func);
// and print the results
std::cout << "the results in order are:\n";
std::ranges::for_each(player_hands, [](auto&& hand) { std::cout << '\t' << hand << '\n'; });
// restore all the cards back to the deck for another game
discard_pile.transfer_all_cards_to(deck);
std::ranges::for_each(player_hands, [&deck](auto&& hand) { hand.transfer_all_cards_to(deck); });
}
Dealer
The dealer class structure isn’t bad, but you could use the deck class for both the player and house hands. They are, in a way, decks, after all.
The real problem with the dealer class is that it just does too much. It’s not just a dealer… it plays the whole game… and even handles asking if the player wants to replay… and shuffles the deck… and draws the cards….
Strip it down. First of all, the proper name for this class probably isn’t “dealer”, but rather “game”. All the class should do is handle setting up and then playing through a game. Shuffling? Let the deck class handle that. Randomly drawing cards? Again, let the deck class handle that. Asking the user if they want to replay? Let something else handle that. The game class should be just about the game itself.
class game
{
public:
// play a game
auto play() -> void;
private:
std::mt19937 _rng;
};
And the play function could be something like:
auto game::play() -> void
{
auto deck = deck_t::full_deck();
deck.shuffle(_rng);
auto player = deck.draw_deck(N); // N is how many cards each player starts with
auto house = deck.draw_deck(N);
auto credits = 100;
auto done = false;
while (not done)
{
// player moves first
auto choice = get_player_choice();
switch (choice)
{
case 'd': // player wants to draw cards
{
auto count = get_player_draw_count();
deck.transfer_cards_to(player, count);
}
break;
case 'b': // player wants to make a bet
// and so on
}
// now the house makes its move
// check for whether someone won, and if so, print the result and
// set done to true
}
}
After the game is done and play() returns, then you can have an external loop—in main(), for example—that asks “do you want to play again?”, and if the answer is yes, it will just call play() again.
Think like a game programmer
All games fundamentally have the same structure:
initialize();
while (game_is_not_over())
{
input(); // get input from the user
update(); // update the game state
render(); // display the current state
}
Even a game as simple as a console-only card game uses the same pattern. After initializing the game (setting up the deck and dealing hands to all the players), you start the loop where you first ask the user what they want to do… and then you update the game state, which includes giving the AI its turn and checking to see if there are any winners… then you output the current situation to the player, telling them what cards they’re holding, or whether the game is over (and who won).
If you want to allow replaying, then you just need to wrap all that in a loop:
do
{
auto game = game_t{};
while (game.running())
{
game.get_input();
game.update();
game.render(std::cout);
}
std::cout << "\n\n=============================\n\n\n";
} while (wants_to_play_again());
std::cout << "goodbye\n";
Before starting any games, you could set up your random number generator:
auto main() -> int
{
auto prng = std::mt19937{std::random_device{}()};
do
{
auto game = game_t{prng};
while (game.running())
{
game.get_input();
game.update();
game.render(std::cout);
}
std::cout << "\n\n=============================\n\n\n";
} while (wants_to_play_again());
std::cout << "goodbye\n";
}
Starting from that high-level design, you can fill in the functions you need. And by designing it this way, you will already have the general structure in place to convert to a graphical game if you like. (The differences would mostly be that you’re no longer getting input from std::cin or rendering text output to std::cout; the actual game logic in the update function should be unchanged.)
General issues
Don’t use std::endl
If you just want a newline, use '\n'. std::endl doesn’t just print a newline, it flushes the stream, which can be expensive.
So if you want a spacing of 2 lines, and you do something like std::cout << std::endl << std::endl << "========" << std::endl << std::endl;, you are flushing the stream at least 4 times. That’s ridiculous.
All you need is: std::cout << "\n\n========\n\n";.
Don’t shuffle the deck every time a card is drawn
Shuffling over and over doesn’t actually improve randomness, and it costs.
Instead, just shuffle once at the beginning of the game. That’s how it works in real life, after all.
Shuffle once, and then just keep picking cards off the top of the deck.
Code review
using namespace std;
Never, ever do this. Just put std:: where it’s needed.
Card() {}
~Card() {}
You don’t need to explicitly spell out these functions. Classes get default constructors and destructors by default. Actually writing out these functions causes three problems:
It confuses developers.
An experienced C++ coder who sees these functions will immediately be suspicious, because why would anyone waste time writing code they don’t need to? If someone sat down and wrote these out, that must mean there’s some reason for it… right? Now the coder will waste time trying to hunt down the reason.
It creates a maintenance burden.
Every line of code you write adds to the maintenance debt. It makes the code harder to read (because more code is harder to read than less code), harder to spot problems (because if you have a bunch of useless, noise code around the important stuff, the important stuff will be harder to see, and so will bugs), and harder to work with (because every time you want to make a change, you have to check more code to see if there will be any conflicts).
It actually slows down your program.
When you define the default constructor or destructor like this, you are creating a non-trivial default constructor and non-trivial destructor. That’s bad. Trivial default constructors and trivial destructors are a good thing, because if they’re trivial, the compiler can optimize them away in most cases. This can give you huge performance gains.
So just don’t write functions you don’t need.
unsigned value=0;
Don’t use unsigned types for regular numbers. They cause surprises when doing calculations and comparisons. Just use an int.
Other than that, I already showed how you can optimize the card class down to something only 2 bytes large, that will be blazingly fast.
Deck() {}
Again, you don’t need this, so don’t write it.
vector<Card> cards;
vector<Card> suit;
Card card;
I don’t understand the point of all these members. A deck shouldn’t be anything other than a bunch of cards. Why do you also need a bunch of suits, and a separate, single card?
void Deck::makeDeck() {
for(int i=0; i<4; i++) {
for(int j=1; j<14; j++) {
There are a lot of magic numbers hidden in there. Magic numbers are a bad idea. You should always use named constants that explain what’s going on:
void Deck::makeDeck() {
for (int i = suit_min; i < suit_max; ++i) {
for (int j = rank_min; j < rank_max; ++j) {
You should also give your variables meaningful names. Names like i and j make sense when the loop counter is just an index, and serves no other purpose. But these values do have another meaning:
void Deck::makeDeck() {
for (int suit = suit_min; suit < suit_max; ++suit) {
for (int rank = rank_min; rank < rank_max; ++rank) {
The next problem with this function is that you use an outside variable for the card. That makes no sense. The card should be a local variable.
And of course, the biggest and most obvious problem with the function is that it’s incredibly repetitive. You could simplify it considerably like this:
void Deck::makeDeck() {
// you should really clear the deck first
cards.clear();
// to speed things up, you should reserve the memory needed beforehand
cards.reserve(number_of_cards_in_deck);
for (int suit = suit_min; suit < suit_max; ++suit) {
for (int rank = rank_min; rank < rank_max; ++rank) {
auto card = Card{};
// set up some default values
card.value = rank;
// card.rank = rank; // this won't compile because your types are wrong!
switch (suit)
{
case suit_hearts:
card.suit = "HEARTS";
break;
case suit_diamonds:
card.suit = "DIAMONDS";
break;
// and so on...
}
switch (rank)
{
case rank_ace:
card.name = "ACE";
break;
case rank_two:
card.name = "TWO";
break;
// and so on...
// face cards are special because their value isn't the same as
// their rank
case rank_jack:
card.name = "JACK";
card.value = 10;
break;
case rank_queen:
card.name = "QUEEN";
card.value = 10;
break;
case rank_king:
card.name = "KING";
card.value = 10;
break;
}
cards.push_back(std::move(card));
}
}
}
But this is still an ugly function. A much better solution is to make a proper card class, with proper suit and rank classes. Then you can just do something like:
void Deck::makeDeck()
{
constexpr auto all_ranks = std::array{
card_t::rank_t::ace,
card_t::rank_t::two,
// and so on...
};
constexpr auto all_suits = std::array{
card_t::suit_t::hearts,
card_t::suit_t::diamonds,
card_t::suit_t::clubs,
card_t::suit_t::spades
};
cards.clear();
cards.reserve(all_ranks.size() * all_suits.size());
for (auto&& rank : all_ranks)
for (auto&& suit : all_suits)
cards.emplace_back(rank, suit);
}
For a different game type, with different suits or ranks, or with the addition of jokers or trumps, you just need to modify those arrays, and/or add a third array for specials.
That’s how clean and simple C++ code gets when you make proper types. Get the types right, and everything else just magically falls into place.
Dealer() {}
Yet again, unnecessary code.
void Dealer::playAgain() {
deck.makeDeck();
printHand(deck.cards);
char choice='y';
cout<<endl<<endl<<"============================="<<endl<<endl;
cout<<endl<<"Would you like to play again?(y/n): ";
cin>>choice;
if(choice=='y') {
//deal21();
deck.makeDeck();
printHand(deck.cards);
} else {
cout<<"goodbye";
exit(0);
}
}
The way this function is currently written, you only get two games and then the program quits. That’s because you’ve embedded the “play again” option in the middle of the same function that actually plays the game.
I’d say before you consider the notion of “play again”, you first need to consider “play”. If you have a “play” function, then the “play again” function is just:
auto play_again()
{
auto done = false;
while (not done)
{
play();
std::cout << "\n\n=============================\n\n"
"\nWould you like to play again?(y/n): ";
auto choice = '\0';
while (not (std::cin >> choice) or (choice != 'y' and choice != 'n'))
{
if (std::cin.eof())
{
// reached the end of the input, which pretty much means we're done
done = true;
break;
}
// clear any errors on cin
std::cin.clear();
if (not std::cin)
throw std::runtime_error{"input is completely borked"};
// note you could also ignore everything on cin up to the next newline
std::cout << "Sorry, didn’t understand your answer.\n"
"Play again? (y/n): ";
}
if (choice == 'n')
done = true;
}
std::cout << "goodbye\n";
}
Or, even better, separate all that query logic into its own function:
template <std::ranges::forward_range Rng>
auto get_choice(std::istream& in, std::ostream& out, std::string_view query, Rng const& choices)
{
while (true)
{
out << query;
out.flush();
if (auto choice = '\0'; in >> choice)
{
if (std::ranges::find(choices, choice) != std::ranges::end(choices))
return choice;
out << "Sorry, '" << choice << "' is not a valid choice.\n";
}
else
{
throw std::runtime_error{"bad input"};
}
}
}
auto play_again()
{
while (true)
{
play();
std::cout << "\n\n=============================\n\n\n";
if (auto const choice = get_choice(std::cin, std::cout, "Would you like to play again?(y/n): ", "yYnN"); choice == 'n' or choice == 'N')
break;
}
std::cout << "goodbye\n";
}
With the “play” function separated, all the actual game logic can go there.
} else {
cout<<"goodbye";
exit(0);
}
Never, ever use std::exit() in C++. It’s a C function; it doesn’t work properly in C++.
void Dealer::getCard(vector<Card> &hand) {
int pick=randomizeDeck(deck.cards);
hand.push_back(deck.cards[pick]);
deck.cards[pick]=deck.cards.back();
deck.cards.pop_back();
}
This is a rather peculiar way to draw a card from a deck. First of all, it doesn’t make sense to pull a card randomly from out of the middle of the deck. That’s not how people play cards in real life. They shuffle the deck once at the beginning of the game, and then just draw from the top of the deck.
But then you do something even more peculiar: you copy the card from the middle of the deck… then copy the card at the top of the deck over that first card… then remove the card at the top of the deck. That’s a lot of gymnastics just to draw a card from a deck. All that wouldn’t be necessary if the deck were shuffled.
void Dealer::printHand(vector<Card> &hand) {
for(int i=0; i<hand.size(); i++) {
cout<<hand[i].name<<" of "<< hand[i].suit<<endl;
}
}
You can take the hand as a const reference here, because you’re not modifying it. Also, I don’t see the logic in making this a member of Dealer. This is a useful function, generally, so why not make it a free function?
In any case, you have a bug. i is an int… but hand.size() is a vector<Card>::size_type, which is an unsigned integer of some kind. Don’t use old-fashioned for-loops with ranges… but if you really must, get the types right. That loop should be:
for (auto i = decltype(hand.size())(0); i < hand.size(); ++i) {
std::cout << hand[i].name << " of " << hand[i].suit << '\n';
}
But again, don’t use old-school for loops for this kind of thing. Use a range-for:
for (auto&& card : hand) {
std::cout << card.name << " of " << card.suit << '\n';
}
or an algorithm:
std::ranges::for_each(hand, [](auto&& card) {
std::cout << card.name << " of " << card.suit << '\n';
});
And even better would be if cards knew how to print themselves. Again, get the types right, and everything else just works. If cards did know how to print themselves, then this function could just be:
auto print_hand(std::ostream& out, std::vector<Card> const& hand)
{
std::ranges::for_each(hand, [&out](auto&& card) { out << card << '\n'; });
}
Even better still would be if you had a class for hands (which could also just reuse the deck class), because then you wouldn’t need this function at all. You could just do std::cout << hand;. As always, spend a little time getting the low-level types right, and the high-level logic becomes trivial.
int Dealer::randomizeDeck(vector<Card> &cards) {
int max=cards.size()-1;
random_device rd;
mt19937 numGen(rd());
static uniform_int_distribution<int> roll(0,max);
return roll(numGen);
}
What you’re doing here is clever, but there are a lot of problems with how you’re doing it.
First, int is the wrong type. If you are indexing a vector, the correct type to use is the vector’s size_type. vector’s size_type is going to be unsigned (which, yes, is not good, but it was a decision made long ago, before people realized the problems it would cause), and may also be much bigger than int. The net result is that if you try to cram the result of cards.size() - 1 into an int… you could end up triggering undefined behaviour. That’s very bad.
The fix here is so trivial it’s almost laughable. Just don’t say int. Just use auto (or nothing at all):
auto get_random_card_index(std::vector<Card> const& cards)
{
auto const max = cards.size() - 1;
std::random_device rd;
std::mt19937 numGen(rd());
std::uniform_int_distribution<std::size_t> roll(0, max);
return roll(numGen);
}
Bug fixed.
The next major problem with this function is that every time you ask for a random index into a set of cards, you’re constructing a new random number generator. That makes no sense, and it is remarkably wasteful. Much better would be just pass a random number generator into the function:
template <typename Gen>
requires std::uniform_random_bit_generator<std::remove_reference_t<Gen>>
auto get_random_card_index(std::vector<Card> const& cards, Gen&& gen)
{
auto dist = std::uniform_int_distribution<std::size_t>{0, cards.size() - 1};
return dist(gen);
}
But this still isn't a great function, because it doesn’t take into account the case where the set of cards is empty.
All these problems would evaporate if you just shuffled the deck once, and then picked from the top of the deck. | {
"domain": "codereview.stackexchange",
"id": 40488,
"tags": "c++, c++11, playing-cards"
} |
What is the colloidal osmotic pressure? | Question: The oncotic pressure or colloidal osmotic pressure is the osmotic pressure developed due to the presence of colloids in a solution. But since colloids are not a true solution, why should they be termed solutes dissolved in the solvent and consequently capable of producing any osmotic pressure? Should they not behave like insoluble substances, which do not affect the osmolarity of the solution? And this colloidal osmotic pressure plays a very important role in trans-capillary transfer dynamics.
If the colloids do affect the osmotic properties, is the expression to measure the colloidal osmotic pressure the same as that for true solutions (the van't Hoff equation)?
Answer: I think, first I should clarify what causes the osmotic pressure:
Osmosis occurs when two solutions of different concentrations are separated by a membrane which will selectively allow some species, e.g. the solvent, through it but not others, e.g. the solute.
So, there is a concentration gradient between the two solutions which would lead to a diffusion from the side with the higher to the side with the lower concentration.
But since the solute cannot pass through the membrane the diffusive flow cannot even the concentration differences out and thus constantly presses against the membrane exerting a force on it which leads to a pressure.
The pressure originating from this tendency for osmotic flow to occur is called the osmotic pressure $\Pi$ of the solution.
So, the question is: Is the theory behind diffusion applicable to colloidal solutions?
To reason about this, I will first address the matter of whether colloids can be considered "real" solutes or not. Colloidal particles are usually some polymer particle coated with a surfactant layer. So, it is the molecules of those surfactant layers that interact with the solvent molecules, and they are chosen in such a way that the colloidal particles neither lump together (coalesce) nor sink to the ground or float to the top - that keeps the colloidal solution stable. So, a colloidal particle is surrounded by solvent molecules which solvate the surfactant layer. Of course, the situation is a bit different from a "normal" solution, since a colloidal particle is a lot bigger than, say, an alcohol molecule, and thus the solvation sphere is much smaller relative to the particle. But, in essence, a colloidal particle and an alcohol molecule are solvated in pretty much the same way.
Furthermore it can also be assumed that interactions (e.g. electrostatic forces) between the colloidal particles are weak.
This means that colloidal particles are not subject to external forces which influence their movement. They simply move through the solution by Brownian motion. This is important because their motion can then be described mathematically by a stochastic process, which is the basis for the thermodynamic description of diffusion.
And, as I described above, the reason for diffusion to happen is the same as the origin of the osmotic pressure.
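As a rough worked example of the van't Hoff estimate (illustrative numbers, treating the colloid as an ideal solute): serum albumin has a molar mass of roughly $66.5\ \mathrm{kg/mol}$ and a plasma concentration of about $45\ \mathrm{g/L}$, so
$$
c \approx \frac{45\ \mathrm{g/L}}{66{,}500\ \mathrm{g/mol}} \approx 6.8 \times 10^{-4}\ \mathrm{mol/L} = 0.68\ \mathrm{mol/m^3},
$$
and at body temperature ($T = 310\ \mathrm{K}$)
$$
\Pi = cRT \approx 0.68 \cdot 8.314 \cdot 310\ \mathrm{Pa} \approx 1.7\ \mathrm{kPa} \approx 13\ \mathrm{mmHg},
$$
which is the right order of magnitude for albumin's oncotic contribution (the measured value is higher, partly because of the Gibbs-Donnan effect of the small ions retained by the charged protein).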
So, to answer your last question: Yes, I think, the description for osmotic pressure caused by colloidal particles should be the same as for "normal" solutes. | {
"domain": "chemistry.stackexchange",
"id": 563,
"tags": "physical-chemistry, aqueous-solution, osmosis"
} |
Find a weighted median for unsorted array in linear time | Question: For days, I'm trying to figure out, whether it is possible to find an item in array which would be kind of weighted median in linear time.
It is very simple to do that in exponential time.
So let's say that we have an array, each item $x$ of this array has two attributes - price $c(x)$ and weight $w(x)$. The goal is to find an item $x$ such that
$$
\sum_{y\colon c(y) < c(x)} w(y) \leq \frac{1}{2} \sum_y w(y) \text{ and }
\sum_{y\colon c(y) > c(x)} w(y) \leq \frac{1}{2} \sum_y w(y).
$$
If the array was sorted by price, it would be simple:
Go from the first item one by one, count sum and if the sum becomes greater that half the total weight, then you found the desired item.
Could you give me some hint how to find such an item in linear time?
Answer: Let $A$ be an input array containing $n$ elements, $a_i$ the $i$-th element and $w_i$ its corresponding weight. You can determine the weighted median in worst case linear time as follows. If the array length is $\leq 2$, find the weighted median by exhaustive search. Otherwise, find the (lower) median element $a_x$ using the worst case $O(n)$ selection algorithm and then partition the array around it (using the worst case $O(n)$ partition algorithm from QuickSort). Now determine the weight of each partition. If weight of the left partition is $< \frac{1}{2}$ and weight of the right partition is $\leq \frac{1}{2}$ then the weighted (lower) median is $a_x$. If not, then the weighted (lower) median must necessarily lie in the partition with the larger weight. So, you add the weight of the "lighter" partition to the weight of $a_x$ and recursively continue searching into the "heavier" partition. Here is the algorithm's pseudocode (written in LaTeX).
FIND-WEIGHTED_LOWER_MEDIAN(A)
if $n$ == 1
return $a_1$
elseif $n$ == 2
if $w_1 \geq w_2$
return $a_1$
else
return $a_2$
else
determine $a_x$, the (lower) median of A
partition A around $a_x$
$W_{low}$ = $\sum\limits_{{a_i} < {a_x}} {{w_i}}$
$W_{high}$ = $\sum\limits_{{a_i} > {a_x}} {{w_i}}$
if $W_{low} < \frac{1}{2}$ AND $W_{high} \leq \frac{1}{2}$
return $a_x$
else if $W_{low} \geq \frac{1}{2}$
$w_x = w_x + W_{high}$
$B = \{ a_i \in A : a_i \leq a_x \}$
FIND-WEIGHTED_LOWER_MEDIAN(B)
else
$w_x = w_x + W_{low}$
$B = \{ a_i \in A : a_i \geq a_x \}$
FIND-WEIGHTED_LOWER_MEDIAN(B)
Now let's analyze the algorithm and derive its complexity in the worst-case. The recurrence of this recursive algorithm is $T(n) = T(\frac{n}{2} + 1) + \Theta(n)$. Indeed, we have no more than a recursive call on half the elements plus the (lower) median. The initial exhaustive search on up to 2 elements costs $O(1)$, determining the (lower) median using the select algorithm requires $O(n)$, and partitioning around the (lower) median requires $O(n)$ as well. Computing $W_{low}$ or $W_{high}$ is again $O(n)$. Solving the recurrence, we get the complexity of the algorithm in the worst case, which is $O(n)$.
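As an illustrative sketch (not a worst-case-linear implementation), the recursion can be written in Python. The selection step below simply sorts for clarity, so swapping in a randomised select is what would restore expected linear time; values are assumed distinct and weights normalised to sum to 1, and all names are my own:

```python
def weighted_lower_median(items):
    """items: list of (value, weight) pairs with distinct values and
    weights normalised so that they sum to 1."""
    if len(items) == 1:
        return items[0][0]
    if len(items) == 2:
        (a1, w1), (a2, w2) = sorted(items)
        return a1 if w1 >= w2 else a2
    # Selection step: lower median by value.  Sorting is used here only
    # for clarity; a linear-time version would use randomised select.
    vals = sorted(v for v, _ in items)
    ax = vals[(len(vals) - 1) // 2]
    w_low = sum(w for v, w in items if v < ax)
    w_high = sum(w for v, w in items if v > ax)
    if w_low < 0.5 and w_high <= 0.5:
        return ax
    if w_low >= 0.5:  # answer lies in the "heavier" left partition
        sub = [(v, w + w_high if v == ax else w) for v, w in items if v <= ax]
    else:             # answer lies in the "heavier" right partition
        sub = [(v, w + w_low if v == ax else w) for v, w in items if v >= ax]
    return weighted_lower_median(sub)
```

Note how the discarded partition's weight is folded into $a_x$ before recursing, exactly as in the pseudocode above.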
Of course, a real implementation should not use the worst case $O(n)$ selection algorithm, since it is well known that the algorithm is not practical (it is of theoretical interest only). However, the randomized $O(n)$ selection algorithm works pretty well in practice and can be used since it's really fast. | {
"domain": "cs.stackexchange",
"id": 15662,
"tags": "algorithms, time-complexity, arrays"
} |
Existence of a perfect blackbody | Question: Why is a perfect blackbody not possible? On an electronic level, what is the reason for the non-existence of an ideal blackbody?
Answer:
On an electronic (sic) level, what is the reason for the non-existence of an ideal blackbody?
This is based on Black Body Wikipedia
A widely used model of a black surface is a small hole in a cavity with walls that are opaque to radiation. Radiation incident on the hole will pass into the cavity, and is very unlikely to be re-emitted if the cavity is large. The hole is not quite a perfect black surface — in particular, if the wavelength of the incident radiation is longer than the diameter of the hole, part will be reflected. Similarly, even in perfect thermal equilibrium, the radiation inside a finite-sized cavity will not have an ideal Planck spectrum for wavelengths comparable to or larger than the size of the cavity.
In my comment above, I mentioned the Cosmic Microwave Background Radiation spectrum and I would like to explain why I used that particular example:
Why is a perfect blackbody not possible:
In nature, it is possible to observe an almost perfect blackbody spectrum. In the picture below the extract, the error bars indicate just how close the CMB radiation is to what Planck's law predicts.
The big bang theory is based upon the cosmological principle, which states that on large scales the Universe is homogeneous and isotropic. According to theory, the Universe approximately a second after its formation was a near-ideal black body in thermal equilibrium at a temperature above $10^{10}$ K. The temperature decreased as the Universe expanded and the matter and radiation in it cooled. The cosmic microwave background radiation observed today is "the most perfect black body ever measured in nature". It has a nearly ideal Planck spectrum at a temperature of about 2.7 K. [My emphasis]. It departs from the perfect isotropy of true black-body radiation by an observed anisotropy that varies with angle on the sky only to about one part in 100,000.
Image Source: Big Bang Duke University | {
"domain": "physics.stackexchange",
"id": 35235,
"tags": "thermal-radiation"
} |
Explaining boiling using internal energy | Question: Internal energy is summation of potential and kinetic energies of a substance
Using this interpretation of internal energy, how can I explain boiling?
Kinetic remains constant.
Potential should decrease because of an increase in average spacing.
But internal energy should increase.
Can somebody explain what flaw I am making?
Answer: You are wrong when you say the potential energy of particles decreases when they move apart.
The forces between the particles are attractive not repulsive.
Recall the definition of potential energy. It is the work done by an external force on a charged particle under the influence of an electric field.
Therefore, the potential energy for particles, which attract each other, increases when they move apart.
So when two particles get close, their potential energy decreases, and hence their stability increases. This also happens to be one of the reasons why they come closer. | {
"domain": "chemistry.stackexchange",
"id": 8135,
"tags": "transition-state-theory"
} |
Video compression - why not just sending the changed parts? | Question: I'm trying to understand how video compression through dense optical flow works (I'm a newbie with video compression).
I easily understood what optical flow is; the thing I can't understand is why a simple video compression scheme could not just send the parts of a frame marked with a "motion vector" (i.e. send only the parts that have moved from one position to another between frames).
I mean: MPEG is way more complex than that, what's the catch with sending parts of a frame that have changed to "stick them over it to get the next one"? Is this a technical flaw (i.e. too much data to be transferred) or is there something else involved that invalidates this simple reasoning?
Answer: You could do that, in theory, if you knew "what are the parts" (or objects) frame-to-frame in a video. But:
objects move, change scale, rotate, undergo color changes, and can even be occluded.
the segmentation is a complicated task in general.
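Because segmentation is hard, codecs instead fall back on matching fixed-size pixel blocks between frames. A minimal, hypothetical sketch of that exhaustive search (function and parameter names are mine, not from any codec): for one block, try every displacement in a small window and keep the one minimising the sum of absolute differences (SAD).

```python
import numpy as np

def best_match(ref, block, top, left, search=4):
    """Exhaustive block matching: try every displacement (dy, dx) in a
    small window around (top, left) in the reference frame and keep the
    one minimising the sum of absolute differences (SAD)."""
    h, w = block.shape
    best_sad, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # candidate window falls outside the frame
            sad = np.abs(ref[y:y + h, x:x + w].astype(int) - block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

The encoder then transmits only the motion vector plus a (cheap-to-code) residual, which is why visually-sufficient matches beat exact object tracking.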
MPEG-4, at least, allows manipulating objects instead of pixels; see MPEG-4 Natural Video Coding - An overview. If objects and backgrounds are already decoupled, as when filmed with a green screen:
then the object contents can be processed separately. Otherwise, what is more important is that, visually, the representation is accurate enough. Thus, a block from some object in a frame can be cast to another block of a different object in a different frame, as long as the result is satisfactory. Thus, in a lot of cases, the block matching is sufficient, and more importantly, relatively fast to compute. | {
"domain": "dsp.stackexchange",
"id": 5313,
"tags": "image-processing, video-processing, video-compression, video"
} |
Swinging Tic-Tac-Toe | Question: This is just a simple PvP tic tac toe program, I'm learning and just looking for maybe some feedback on how to improve it?
There are mainly two things I can see room for improvement in. One is the action listener, where I could do something like: "a button has been pressed, get the name of the button, pass the name of the button, and change the button to X or O based on that". It would shorten the code from needing one branch for each button to just one for the whole button array.
And also the gameWin check might be able to be improved? I don't know, maybe like the tie or using another way to compare ordered numbers in an array?
I'm not experienced and simply doing this for fun, any feedback would be awesome XD
package ttt;
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
public class Ttt {
static JFrame mainFrame = new JFrame("Tic Tac Toe");
static JButton[] gameButtons = new JButton[9];
public static void main (String args[]) {
getGui();
}
public static void getGui() {
mainFrame.setVisible(true);
mainFrame.setResizable(false);
mainFrame.setSize(600, 620);
mainFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
JPanel mainPanel = new JPanel(new GridLayout(3, 3));
mainFrame.add(mainPanel);
for (int i = 0; i < gameButtons.length; i++) {
gameButtons[i] = new JButton(String.valueOf(i));
gameButtons[i].setForeground(Color.WHITE);
gameButtons[i].setBackground(Color.WHITE);
gameButtons[i].setFont(new Font("Times New Roman", Font.BOLD, 60));
gameButtons[i].addActionListener(new Response());
mainPanel.add(gameButtons[i]);
}
}
private static class Response implements ActionListener {
static int rounds = 0;
static JButton winMessage = new JButton(), tieMessage = new JButton();
@Override
public void actionPerformed(ActionEvent e) {
if (e.getSource() == gameButtons[0]) {
roundJudge(0);
} else if (e.getSource() == gameButtons[1]) {
roundJudge(1);
} else if (e.getSource() == gameButtons[2]) {
roundJudge(2);
} else if (e.getSource() == gameButtons[3]) {
roundJudge(3);
} else if (e.getSource() == gameButtons[4]) {
roundJudge(4);
} else if (e.getSource() == gameButtons[5]) {
roundJudge(5);
} else if (e.getSource() == gameButtons[6]) {
roundJudge(6);
} else if (e.getSource() == gameButtons[7]) {
roundJudge(7);
} else if (e.getSource() == gameButtons[8]) {
roundJudge(8);
} else if (e.getSource() == winMessage) {
System.exit(0);
} else if (e.getSource() == tieMessage) {
System.exit(0);
}
gameJudge();
}
public void roundJudge(int buttonNumber) {
if (rounds %2 == 0) {
gameButtons[buttonNumber].setText("X");
} else if (rounds %2 == 1) {
gameButtons[buttonNumber].setText("O");
}
gameButtons[buttonNumber].setEnabled(false);
rounds += 1;
}
public void gameJudge() {
if (gameButtons[0].getText().equals(gameButtons[1].getText()) && gameButtons[1].getText().equals(gameButtons[2].getText())) {
gameWin(gameButtons[0].getText());
} else if (gameButtons[3].getText().equals(gameButtons[4].getText()) && gameButtons[4].getText().equals(gameButtons[5].getText())) {
gameWin(gameButtons[3].getText());
} else if (gameButtons[6].getText().equals(gameButtons[7].getText()) && gameButtons[7].getText().equals(gameButtons[8].getText())) {
gameWin(gameButtons[6].getText());
} else if (gameButtons[0].getText().equals(gameButtons[3].getText()) && gameButtons[3].getText().equals(gameButtons[6].getText())) {
gameWin(gameButtons[0].getText());
} else if (gameButtons[1].getText().equals(gameButtons[4].getText()) && gameButtons[4].getText().equals(gameButtons[7].getText())) {
gameWin(gameButtons[1].getText());
} else if (gameButtons[2].getText().equals(gameButtons[5].getText()) && gameButtons[5].getText().equals(gameButtons[8].getText())) {
gameWin(gameButtons[2].getText());
} else if (gameButtons[0].getText().equals(gameButtons[4].getText()) && gameButtons[4].getText().equals(gameButtons[8].getText())) {
gameWin(gameButtons[0].getText());
} else if (gameButtons[2].getText().equals(gameButtons[4].getText()) && gameButtons[4].getText().equals(gameButtons[6].getText())) {
gameWin(gameButtons[2].getText());
} else if (gameButtons[0].isEnabled() == false && gameButtons[1].isEnabled() == false && gameButtons[2].isEnabled() == false && gameButtons[3].isEnabled() == false && gameButtons[4].isEnabled() == false && gameButtons[5].isEnabled() == false && gameButtons[6].isEnabled() == false && gameButtons[7].isEnabled() == false && gameButtons[8].isEnabled() == false) {
gameTie();
}
}
public static void endFormat(JFrame name1, JButton name2) {
for(JButton gameButton : gameButtons) {
gameButton.setVisible(false);
gameButton.setEnabled(false);
}
mainFrame.setVisible(false);
name1.setLayout(new GridLayout(1, 1));
name1.setVisible(true);
name1.setResizable(false);
name1.setSize(600, 620);
name1.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
name2.setToolTipText("Click anywhere to exit the program.");
name2.setFont(new Font("Times New Roman", Font.BOLD, 60));
name2.addActionListener(new Response());
name2.setOpaque(false);
name2.setContentAreaFilled(false);
name2.setBorderPainted(false);
}
public static void gameWin(String team) {
JFrame winFrame = new JFrame("Tic Tac Toe");
endFormat(winFrame, winMessage);
winMessage.setText("Player " + team + " Won!");
winFrame.add(winMessage);
}
public static void gameTie() {
JFrame tieFrame = new JFrame("Tic Tac Toe");
endFormat(tieFrame,tieMessage);
tieMessage.setText("Tie!");
tieFrame.add(tieMessage);
}
}
}
Answer:
actionPerformed
Since you create a new Response() for each button, you may as well make it have an identification information.
However, a dedicated per-button Response() is overkill. You can uniquely identify each button with gameButtons[i].setActionCommand("" + i);. Then e.getActionCommand() in the listener will tell you which button was pressed.
roundJudge
Is a bit more complicated then necessary. Consider
private String[] labels = { "X", "O" };
public void roundJudge(int i) {
    gameButtons[i].setText(labels[rounds % 2]);
    gameButtons[i].setEnabled(false);
    rounds++;
} | {
"domain": "codereview.stackexchange",
"id": 22411,
"tags": "java, beginner, game, swing, tic-tac-toe"
} |
How Ants know about Earthquake? | Question: How does an ant know about an earthquake? Is it because of an organ, or due to other factors?
Answer: No, they do not seem to sense earthquakes with a dedicated organ; the cause may instead be other physical factors that can be measured.
"...There is therefore little reason to believe that these ants react to earthquake precursors other, perhaps, than those that may affect colonies directly, by altering physical variables that can be directly measured by other means..."
Fascinating though the behavior and physiology of ants may be, they cannot be employed as reliable predictors or even sensors of earthquakes.
Source: Shaken, not stirred: a serendipitous study of ants and earthquakes | {
"domain": "biology.stackexchange",
"id": 2474,
"tags": "physiology, biophysics, ant"
} |
Why are we forced to choose a specific value for $\pi$ field in Nambu-Goldstone phenomenon? | Question: In the sigma-model of spontaneous symmetry breaking, we have degenerate vacuum states. But if we don't pick a particular value of the VEV, we won't have any symmetry breaking. As I read from a book, in field theory at finite volume there would be no problem; but at infinite volume, we are forced to choose a value of the $\pi$ field. But why?
Answer: This is the superselection phenomenon. If you have a particle in a potential well which has a symmetric minimum, like a particle in 2d with the potential
$$ V(x,y) = a(x^2+y^2)^2 - b(x^2+y^2) $$
With $a>0$ and $b>0$, the ground state is symmetric under rotations--- the ground state wavefunction will be rotationally invariant. This can be proved rigorously, but it is also clear from the relation of the path integral to a biased random walk--- the random walk will wander around the sphere.
When you have a field theory in finite volume (on a lattice), the field value center of mass will also random walk around the circle. If you have two scalar fields, X and Y, the values are X(p), Y(p), where p ranges over the points of the lattice. There are a large but finite number of points p, the field operators are like the X,Y position of the point p, and the particles can all move their center of mass, which will fluctuate until it is all around the circle.
But if you take the infinite lattice limit, the center of mass degree of freedom is infinitely massive, and has no fluctuation. This means that no local operator will have matrix elements between states with different values for the center of mass position--- the local operators just shift the value at a few nearby lattice sites, and the mean value is defined by infinitely many lattice sites far away.
The result is that every different value of the center of mass position is a different vacuum, completely unreachable from any other. Under this condition, there is no sense in saying that the vacuum is superposed over all orientations, because any local observer will never see more than one orientation--- the vacuum is frozen at some center of mass value which can be determined by local measurements, and the value never quantum fluctuates over all space.
Such a situation was called a "superselection sector" by Wigner. The terminology is unfortunate, because it is not much like a selection rule, and this was the analogy. A better name might be a "macroscopic variable".
The same thing happens in statistical mechanics. If you have a solid crystal on a table, the canonical ensemble sums over all possible positions of the solid on the table--- but once you see where the solid is, it is never going to move, because a coordinated center of mass motion that moves all the points of a solid in the same direction is infinitely improbable in the large system limit.
There are two limits involved in a field theory--- the infrared infinite lattice limit, and the ultraviolet fine-lattice limit. In the ultraviolet limit, the m term goes to zero by scaling, so if you make the lattice too fine, you will find that the field fluctuations do not pick out a direction at small distances, the field values are random from point to point. But over a coarse lattice, the same theory on this coarse lattice has a negative mass. This means that the fields X,Y will tend to point in some direction, and nearby lattice points will tend to point in the same direction.
The vanishing of the m term at short distances means that any finite volume field theory is effectively like a finite number of particles--- no superselection rule. In the infinite volume limit, the superselection rule emerges, and the field vacuum is one or another direction, not all of them in coherent superposition. The reason is that the local operators cannot knock you from one vacuum to another. | {
"domain": "physics.stackexchange",
"id": 2784,
"tags": "quantum-field-theory, symmetry-breaking"
} |
Laserscan tf to odom frame (RVIZ) | Question:
Hi at all out there,
have a problem with a point that is displayed in RVIZ, but seems not to exist in our data, because all filtering efforts fail.
We do the following steps:
A laserscanner subscriber delivers scan data
We transform the data in our odometry frame with the tf package
We throw away some points that are too far away for our calculation
We apply an algorithm on the remaining data. We tried different algorithms but the problem still remains.
The point lies on the odometry frame origin.
Originally posted by Karl_Auer on ROS Answers with karma: 3 on 2015-05-29
Post score: 0
Answer:
Found the problem. I'm splitting my laserscan into left and right and somehow i had some points in both vectors. Now I checked the vectors against each other for duplicates and erased them - problem solved.
Originally posted by Karl_Auer with karma: 3 on 2015-05-29
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 21796,
"tags": "rviz, laserscan, transform"
} |
Computing the set of points nearest to a point in a polygon boundary | Question: Let $P$ be a polygon. For each point $x$ on the boundary of $P$, denote by $N_P(x)$ the set of points in $P$, that are nearer to $x$ than to any other point on the boundary of $P$.
Given a subset $X$ of the boundary (e.g. an interval contained in one of the sides of $P$), what is an algorithm for computing $N_P(X)$?
Here is an example: $X$ is the blue interval at the bottom, and $N_P(X)$ is the blue polygon.
Answer: The following ideas might help.
The set $X$ can be partitioned into a set of disjoint intervals such that each interval is continuous and lies on one of the sides of polygon. For each interval $I$, we can find $N_{P}(I)$ and union of all $N_{P}(I)'s$ would give the required set $N_{P}(X)$. Now, let us find $N_{P}(I)$ for an interval $I$:
For ease of simplification, orient the polygon such that interval $I$ lies on the $x$-axis; similar to the one shown in your figure.
Let $a$ and $b$ be the two endpoints of $I$ such that $a\leq b$. It is easy to see that the polygon region that lies within the region $y \in (-\infty,\infty)$ and $x \in (-\infty,a) \cup (b,\infty)$ does not belongs to $N_{P}(I)$.
This leaves us with the region $R$ defined as $y \in (-\infty,\infty)$ and $x \in [a,b]$. Find all the segments of the polygon that lies in this region. You can do this by taking each side of the polygon and keeping its that part that lies in $R$. Suppose $S$ be this set of segments that we obtain.
For every $q \in [a,b]$, we will find the region $N_{P}(q)$. Let $S_{q} \subseteq S$ be the set of segments that intersects with the line $x = q$. The segment in $S_{q}$ that is closest to the $x$-axis will be the main competitor. Let this segment be $S_{q}^{c}$ and suppose it has height $h$. If the point $(q,h/2)$ lies in the polygon, then all the points $(q,y)$ with $|y| \leq |h|/2$ lie in $N_{P}(q)$; otherwise not. If we follow this process for every $q \in [a,b]$, then the union of $N_{P}(q)$'s will give $N_{P}(I)$.
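A brute-force, hypothetical Python sketch of steps 4–5 — it samples $q$ at discrete points rather than sweeping, so it runs in $O(n \cdot |S|)$ instead of the sweep line's near-linear bound, and it omits the point-in-polygon test for brevity; all names are illustrative:

```python
def region_heights(segments, a, b, n=100):
    """Brute-force version of steps 4-5: for n+1 sample points q in [a, b],
    find the segment crossing the line x = q that is closest to the x-axis
    and report the half-height |h|/2, i.e. the extent of N_P(q) along x = q.
    (The point-in-polygon test from step 5 is omitted for brevity.)"""
    out = []
    for i in range(n + 1):
        q = a + (b - a) * i / n
        h = None
        for (x1, y1), (x2, y2) in segments:
            if x1 != x2 and min(x1, x2) <= q <= max(x1, x2):
                y = y1 + (y2 - y1) * (q - x1) / (x2 - x1)  # height at x = q
                if h is None or abs(y) < abs(h):
                    h = y
        out.append((q, None if h is None else abs(h) / 2))
    return out
```

The sweep-line refinement described next replaces the inner loop over all segments with an ordered status structure updated at segment endpoints.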
Now, how can we perform step $4$ efficiently? In other words, how can we obtain the segment $S_{q}^{c}$ for every $q \in [a,b]$? It is simple: run the sweep line algorithm on $S$ with event points being the start and end points of the segments, keeping track of the line segment that is closest to the $x$-axis. In that way, you will find $N_{P}(I)$. | {
"domain": "cs.stackexchange",
"id": 18498,
"tags": "computational-geometry"
} |
In rocky planets, does fast rotation cause flattening, or does low flattening imply slow rotation? | Question: As far as I know, Venus and Mercury have zero flattening, but Mars and Earth have detectable flattening, and Venus and Mercury both rotate slowly. I'm confused as to the relation between rotational speed and flattening.
Does rotation cause a rocky planet to flatten? Or does zero flattening imply that Venus and Mercury rotate slowly? Which is correct?
Answer: The flattening of a planet is a function of both its spin rate and its structure. But for a series of planets of homologous structure, the flattening depends on the spin rate: the faster a planet spins, the greater the flattening (for spin rates typical of planets, anyway). So if a planet is spinning slowly it will display little polar flattening; similarly, if it does not show flattening, it should be spinning slowly. | {
"domain": "astronomy.stackexchange",
"id": 1500,
"tags": "rotation, terrestrial-planets"
} |
Confirmation of Dijsktra application explanation | Question: Please can someone confirm that my description of the application of Dijkstra's algorithm is correct for this graph?
1 3
A -- B -- C
\ /
2 \ / 1
\ /
D
Step  Current    B      C      D
1     A         1(A)    -     2(A)
2     B         1(A)   4(B)   2(A)
3     D         1(A)   3(D)   2(A)
4     C         1(A)   3(D)   2(A)
Description:
Tentative distances to B and D determined. C not yet visible.
Shortest distance from previous step taken. No new information on shortest path to B. Tentative distance to C now known. No new information on D.
We choose to backtrack to A, and then move to D because the distance to D (2) is smaller than that to C (4). We update the tentative distance to C because via D it is shorter than via B.
We choose to move to C directly from D because it is shorter (3) than backtracking and going via A (4). Moving to C gives us no new shorter distances. All nodes visited, so end.
Answer: It's more traditional to describe dijkstra with the open and closed sets:
You start with an empty closed set and an open set containing $ \{ (A,0) \}$
You select the node with the lowest cost and remove it from the open set, then you expand its neighbours and add them to the open set $\{ (B,1), (D,2) \}$ (or update the elements already in the open set), and add the selected node to the closed set
step1:
open = $ \{ (B,1, A), (D,2, A) \}$
closed = $\{(A)\}$
step2:
open = $ \{ (C,4, B), (D,2, A) \}$
closed = $\{(A),(B,A)\}$
step3:
open = $ \{ (C,3, D) \}$
closed = $\{(A),(B,A),(D,A)\}$
There is no actual sense of "backtracking" because you are considering all the open nodes in parallel.
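The open/closed bookkeeping above can be sketched in Python with a binary heap holding the open set (the names and the dict-based graph encoding are illustrative, not canonical); run on the question's graph it reproduces the final distances in the table:

```python
import heapq

def dijkstra(graph, start):
    """graph: {node: {neighbour: weight}}.  Returns (distance, parent) maps."""
    dist, parent, closed = {start: 0}, {start: None}, set()
    open_heap = [(0, start)]              # the open set, ordered by cost
    while open_heap:
        d, u = heapq.heappop(open_heap)   # select the lowest-cost open node
        if u in closed:
            continue                      # stale entry, already finalised
        closed.add(u)
        for v, w in graph[u].items():     # expand the neighbours
            if v not in dist or d + w < dist[v]:
                dist[v], parent[v] = d + w, u
                heapq.heappush(open_heap, (d + w, v))
    return dist, parent

# the graph from the question: A-B = 1, B-C = 3, A-D = 2, D-C = 1
g = {"A": {"B": 1, "D": 2}, "B": {"A": 1, "C": 3},
     "C": {"B": 3, "D": 1}, "D": {"A": 2, "C": 1}}
```

Note that C is first reached through B at cost 4 and then relaxed through D to cost 3, with no "backtracking" step anywhere.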
Many other search algorithms work in the same way, with the only difference being how a node is selected from the open set. Breadth-first takes the oldest node in the set, depth-first takes the youngest, Dijkstra takes the node with the lowest cost, and A* takes the node with the lowest cost+heuristic. | {
"domain": "cs.stackexchange",
"id": 9096,
"tags": "shortest-path"
} |
Safely using android 'Context' inside threads | Question: Oftentimes I find myself using context inside threads, multiple times. The problem is that I do not want to hold a long-lived strong reference of it in my threads to avoid leaking it. I keep a weakReference to it in global space and when my activity dies, it will obviously not prevent any GC events or such.
I have made the following approach to using context in threads:
public interface UseContext<Result, Param extends Context> {
Result work(Param context);
}
public static <T extends Context, Result> Optional<Result> useContext(final WeakReference<T> reference,
final UseContext<Result, T> task) {
final T context;
final boolean contextLost = (context = reference.get()) == null;
final boolean isActivity = context instanceof Activity;
if (contextLost)
return Optional.absent();
final boolean activityIsFinishing = (isActivity && ((Activity) context).isFinishing());
if (activityIsFinishing)
return Optional.absent();
return Optional.fromNullable(task.work(context));
}
Whenever I have to use context inside a thread, I do this (example) :
MiscUtils.useContext(reference, activity -> {
activity.runOnUiThread(() -> Toast.makeText(activity, message, Toast.LENGTH_SHORT).show());
return null;
});
Is this a correct approach? Also, I'm not entirely convinced that holding a strong reference to Context in global space and using it in runnable is a bad idea. I understand that it was something to do with anonymous inner classes causing memory leaks.
Answer: I think it's a bit of overhead.
Depending on what you need to do with the Context, you can use the Application Context instead of the Activity itself. This context is available as long as the process runs, so no leakage is possible.
In my experience, Activities being leaked have been caused by singletons initialized by passing the Activity instance (as a Context). Be careful also with Views, as they hold a reference to the Context.
Using the StrictMode with the setClassInstanceLimit and a bit of logging has helped me catch these leaks quite fast. | {
"domain": "codereview.stackexchange",
"id": 15102,
"tags": "java, android, thread-safety"
} |
Understanding Permeation Units | Question: I have encountered a type of unit that I cannot understand. The unit describes the rate of permeation through a substance: (cm³·mm)/(m²·d·atm)
Could someone please explain how to interpret these units? Please attempt to leave the answer in simple - to - understand terms as chemistry, not engineering, is my primary field.
Answer: $$
\frac{\left(\mbox{amount of gas}\right)\left(\mbox{thickness of membrane}\right)}
{\left(\mbox{membrane area}\right)\left(\mbox{time}\right)\left(\mbox{differential pressure of gas}\right)}
$$
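To see how the unit composes in practice, here is a tiny hypothetical helper (the function name and example numbers are mine): multiplying the coefficient by membrane area, pressure difference and elapsed time, and dividing by thickness, cancels every unit except cm³ of gas.

```python
def permeated_volume(P, area_m2, dp_atm, thickness_mm, days):
    """Gas volume in cm^3 transmitted through a membrane, given a
    permeability coefficient P in cm^3·mm / (m^2·d·atm).  All other
    units cancel, leaving pure cm^3 of gas."""
    return P * area_m2 * dp_atm * days / thickness_mm
```

For instance, a film with P = 10 cm³·mm/(m²·d·atm), 2 m² of area, 0.5 atm across it, and 1 mm thickness passes 30 cm³ of gas in 3 days.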
Units of gas permeability constants (source of the formula)
Gas Barrier Introduction
Introduction to Material Permeability Indexes | {
"domain": "engineering.stackexchange",
"id": 999,
"tags": "pressure, measurements, gas"
} |
Chemical potential as a function of temperature | Question: I have considered an ideal Fermi gas. Then, we can obtain an expression for chemical potential as a function of temperature. I want to understand the physical significance to it or what it really means. Isn't chemical potential generally a function of temperature for all kinds of gases?
Answer: I think the best way to think about it is in terms of entropy ($S$). The chemical potential $\mu$ is related to the entropy $S$ by $\mu = -T \left(\frac{\partial S}{\partial N}\right)_{E,V}.$
The entropy $S$=$S(N,V,T)$ (or (N,V,E), or etc...) is a function of N. Chemical potential is a useful concept because it tells you how the entropy changes due to changes in N, the number of particles or whatever in your system.
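For the ideal Fermi gas the question asks about, the standard low-temperature Sommerfeld expansion (a textbook result, quoted here rather than derived) makes the temperature dependence explicit:

$$\mu(T) \approx E_F\left[1 - \frac{\pi^2}{12}\left(\frac{k_B T}{E_F}\right)^2\right],$$

so the chemical potential starts at the Fermi energy $E_F$ at $T=0$ and decreases quadratically with temperature.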
Then statements like "particles will go from high chemical potential to low chemical potential" are just code for "the entropy isn't maximized right now, so the particles will move around so that the entropy is maximized". | {
"domain": "physics.stackexchange",
"id": 69145,
"tags": "statistical-mechanics, temperature, fermions, chemical-potential"
} |
Stepwise $SU(2)$ Rotations on the Bloch Sphere from $\pi$ to $2\pi$ | Question: Based on the useful straight-forward answers to both of my former questions, on multiple rotations of a qubit and bloch sphere subplots, I was able to implement the following $SU(2)$ rotations:
At this point it is worth mentioning that (as a learner) I am really grateful for the high-quality support. The code looks as follows (I mainly used the sources "A Lie Group: Rotations in Quantum Mechanics", p. 67 from Jean-Marie Normand or equivalently A Representations of SU(2) and Lecture notes: Qubit representations and rotations, p. 3):
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import colors
from qiskit.visualization.bloch import Bloch
from qiskit.visualization import plot_bloch_vector
from sympy.physics.matrices import msigma
from sympy.physics.quantum.dagger import Dagger
from sympy import Matrix
from sympy import I, N, re, exp, sin, cos, pi, eye
import numpy as np
def to_spherical(vec):
x = np.real(vec[0])
y = np.real(vec[1])
z = np.real(vec[2])
hxy = np.hypot(x, y)
r = np.hypot(hxy, z)
ϕ = np.arctan2(y, x) #az
θ = np.arctan2(hxy, z) #el
return [r, ϕ, θ]
def to_cartesian(polar):
r = polar[0]
ϕ = polar[1]
θ = polar[2]
x = r * np.sin(θ) * np.cos(ϕ)
y = r * np.sin(θ) * np.sin(ϕ)
z = r * np.cos(θ)
return [np.real(x), np.real(y), np.real(z)]
def rn_su2_euler(vec, rx, ry, rz):
spherical_vec = to_spherical(vec)
ϕ = spherical_vec[1]
θ = spherical_vec[2]
sx = msigma(1)
sy = msigma(2)
sz = msigma(3)
M_q = (np.sin(θ)*np.cos(ϕ)*sx + np.sin(θ)*np.sin(ϕ)*sy + np.cos(θ)*sz)
U_n = Matrix([[exp(-I*(rx+rz)/2)*cos(ry/2), -exp(-I*(rx-rz)/2)*sin(ry/2)], [exp(I*(rx-rz)/2)*sin(ry/2), exp(I*(rx+rz)/2)*cos(ry/2)]])
M_q_rotated = U_n*M_q*Dagger(U_n)
return M_q_rotated
def extract_angles(M_q_rotated):
cos_θ_rotated = float(N(re(M_q_rotated[0,0])))
θ_rotated = np.arccos(cos_θ_rotated)
temp = float(N(re(M_q_rotated[1,0])))
temp = temp/np.sin(θ_rotated)
ϕ_rotated = np.arccos(temp)
return (ϕ_rotated, θ_rotated)
def get_gradient_colors(rgb, n):
red=rgb[0]
yel=rgb[1]
blu=rgb[2]
result = [colors.to_hex([red,yel,blu])]
cr = red/n
cy = yel/n
cb = blu/n
for i in range(n):
if(red!=0):
red -= cr
if(yel!=0):
yel -= cy
if(blu!=0):
blu -= cb
result.append(colors.to_hex([red,yel,blu]))
return result
fig, ax = plt.subplots(figsize = [8, 12], nrows=3, ncols=2)
fig.patch.set_facecolor('white')
[axis.set_axis_off() for axis in ax.ravel()]
rotations = [[0, 0, pi/8], [0, 0, pi/8], [0, pi/8, 0], [0.00001, -pi/8, 0], [0, pi/8, pi/8], [0, pi/8, -pi/8]]
start_vec = [1, 0, 0]
num_iterations = 8
for m, rotation in enumerate(rotations):
ax = fig.add_subplot(320+(m+1), axes_class = Axes3D)
rot_x = rotation[0]
rot_y = rotation[1]
rot_z = rotation[2]
_bloch = Bloch(axes=ax)
_bloch.vector_color = get_gradient_colors([0, 0, 1], num_iterations)
_bloch.vector_width = 1
sv = []
vec = start_vec
sv.append(vec)
for i in range(num_iterations):
M_q_rotated = rn_su2_euler(vec, rot_x, rot_y, rot_z)
(ϕ_rotated, θ_rotated) = extract_angles(M_q_rotated)
vec = np.array(to_cartesian([1, ϕ_rotated, θ_rotated]))
sv.append(vec)
_bloch.add_vectors(sv)
_bloch.render()
My question: how can I make the (mirrored) vectors also cover the second half of the hemisphere, i.e. how can I make the vectors in the lower right image sweep out the other half? What I understood from the sources is that in the function rn_su2_euler the Euler-angle parameters can take values rx from $0$ to $2\pi$, ry from $0$ to $\pi$, and rz from $0$ to $4\pi$ (see for example A Representations of SU(2)). But it seems there is still a minor bug somewhere.
For the sake of completeness, I uploaded the notebook here to GitHub.
Answer: The function rn_su2_euler needs some adjustments.
def rn_su2_euler(vec, rx, ry, rz, plot=False):
stabilized_vec = np.array(vec)+0.000001
spherical_vec = to_spherical(stabilized_vec)
ϕ = spherical_vec[1]
θ = spherical_vec[2]
if plot: print(f'Initial vector: \t{vec}.')
if plot: print(f'Spherical coords: \t{spherical_vec}.')
# https://www.phys.hawaii.edu/~yepez/Spring2013/lectures/Lecture1_Qubits_Notes.pdf (p. 3)
sx = msigma(1)
sy = msigma(2)
sz = msigma(3)
M_q = (np.sin(θ)*np.cos(ϕ)*sx + np.sin(θ)*np.sin(ϕ)*sy + np.cos(θ)*sz)
if plot: print(f'M_q={M_q}')
r_hat = np.array([rx, ry, rz])
r = np.sqrt(float(np.tensordot(r_hat, r_hat, axes=1)))
if plot: print(f'Rotation angle = {r}')
n_hat = r_hat/(r)
sigma_hat = np.array([sx, sy, sz])
n_sigma_product = Matrix(np.tensordot(n_hat,sigma_hat, axes=1))
U_n = N(exp(-1j*n_sigma_product*r/2))
#U_n_syntetic = [[np.cos(r/2), -1j*np.sin(r/2)],[-1j*np.sin(r/2), np.cos(r/2)]]
M_q_rotated = N(U_n*M_q*Dagger(U_n))
# Source: https://en.wikipedia.org/wiki/Pauli_matrices#Pauli_vector
q_1 = re((M_q_rotated[0,1]+M_q_rotated[1,0])/2)
q_2 = re((M_q_rotated[1,0]-M_q_rotated[0,1])/2j)
q_3 = re(M_q_rotated[0,0])
q_rotated = [np.real(N(q_1)), np.real(N(q_2)), np.real(N(q_3))]
return q_rotated
Thus, the rotation works in both directions and reaches all parts of the circle.
The call of this would have to be revised accordingly.
The output at the end can then look like this:
I have provided a possible and working implementation on this GitHub page. | {
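As an extra sanity check, independent of sympy and qiskit, here is a small numpy-only sketch (the function names are my own, not from the notebook) confirming that conjugating $\sigma\cdot m$ by the axis-angle SU(2) element reproduces the ordinary SO(3) Rodrigues rotation of $m$, so the rotated vectors can indeed sweep out the full sphere as the angles run over their ranges:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def su2_rotate(m, axis, angle):
    """Conjugate sigma.m with U = exp(-i*angle*(sigma.n)/2) and read the
    rotated vector back off via m'_k = Tr(sigma_k M') / 2."""
    n = np.asarray(axis, float) / np.linalg.norm(axis)
    n_sigma = sum(ni * si for ni, si in zip(n, paulis))
    U = np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * n_sigma
    M = sum(mi * si for mi, si in zip(m, paulis))
    M_rot = U @ M @ U.conj().T
    return np.real([np.trace(s @ M_rot) / 2 for s in paulis])

def rodrigues(m, axis, angle):
    """Plain SO(3) rotation of m about the given axis by angle."""
    n = np.asarray(axis, float) / np.linalg.norm(axis)
    m = np.asarray(m, float)
    return (m * np.cos(angle) + np.cross(n, m) * np.sin(angle)
            + n * np.dot(n, m) * (1 - np.cos(angle)))

v_su2 = su2_rotate([1, 0, 0], [0, 0, 1], np.pi / 2)   # quarter turn about z
v_so3 = rodrigues([1, 0, 0], [0, 0, 1], np.pi / 2)
```

The two results agree for any axis and angle, which is the SU(2) double cover of SO(3) in action.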
"domain": "quantumcomputing.stackexchange",
"id": 3913,
"tags": "qiskit, bloch-sphere, experimental-realization"
} |
Is this covariant derivative identity true? | Question: Trying to work through a textbook derivation of the geodesic deviation equation, I've calculated this identity:$$u_{;\beta}^{\alpha}u_{\alpha}=u_{\alpha;\beta}u^{\alpha}.$$
If this is true, I'm making progress. If it isn't, it's back to the drawing board. Does anyone know if it is correct? I've tried writing it out as$$\left(\frac{\partial u^{\alpha}}{\partial x^{\beta}}+u{}^{\gamma}\Gamma_{\gamma\beta}^{\alpha}\right)u_{\alpha}=\left(\frac{\partial u_{\alpha}}{\partial x^{\beta}}-u_{\gamma}\Gamma_{\alpha\beta}^{\gamma}\right)u^{\alpha},$$
but I'm none the wiser.
Answer: The identity in question is given as,
$$(\nabla_\beta u^\alpha) u_\alpha=(\nabla_\beta u_\alpha)u^\alpha$$
Expanding the left-hand side, we find,
$$(\nabla_{\beta}u^\alpha)u_\alpha = (\nabla_\beta g^{\alpha \delta}u_\delta)u_\alpha = (u_\delta \nabla_\beta g^{\alpha\delta} + g^{\alpha\delta}\nabla_\beta u_\delta)u_\alpha$$
The Levi-Civita connection is precisely the connection which is compatible with the metric structure; as such parallel transporting two tangent vectors preserves the inner product, so we have,
$$\nabla_\beta g^{\alpha\delta} = 0$$
Hence, for the left-hand side, we are left with a term $u^\alpha \nabla_\beta u_\alpha$. Just a general tip regarding these computations: try to avoid expanding the covariant derivative explicitly in terms of the Christoffel symbols as you have done - there are usually better paths to take, just as one is usually advised not to fully expand commutators most of the time. | {
"domain": "physics.stackexchange",
"id": 17747,
"tags": "homework-and-exercises, differential-geometry, metric-tensor, differentiation"
} |
Gravitational wave detection time difference between LIGO Livingston and LIGO Hanford | Question: Quote from LIGO's news release:
By looking at the time of arrival of the signals—the detector in
Livingston recorded the event 7 milliseconds before the detector in
Hanford—scientists can say that the source was located in the Southern
Hemisphere.
GPS coordinates of LIGO Livingston: 30.5630018,-90.7763949
GPS coordinates of LIGO Hanford: 46.4554032,-119.4092701
Distance: 3030.13 km
Now if the gravitational waves propagate outwards at the speed of light a difference of 7ms implies the following maximum possible detector distance:
299792458 * 0.007 / 1000 = 2099 km
Where am I wrong ?
Update:
As pointed out by james-k in the comments below, the calculated 2099 km is the minimum distance and not the maximum ... these waves seem to have caused some mental derangement ;)
Answer: Here's how my non-scientist mind envisions it. I draw a straight line between the two LIGO sites on a map. Then I take another straight line (like a straight edge/ruler) that represents the GW coming in. If the GW line is coming from the south exactly parallel with the line I drew on the map, then both sites would detect the "chirp" at exactly the same time (0 milliseconds difference). If the GW line comes from the south exactly perpendicular to my drawn line, hitting the LA site first, then chirp would be detected at each site at approx 10 milliseconds (?) apart (assuming that's how long it takes for light to travel between the two sites in a straight line).
So I envision the GW "line" came in from the south somewhere between parallel and perpendicular (closer to perpendicular) in relation to my drawn line. Someone else smarter than I can figure out the maths and correct angles, numbers, etc.
Does that make any sense? | {
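To make the geometry quantitative, here is a short Python sketch (my own arithmetic, using the figures quoted in the question and treating the 7 ms delay as exact): the arrival-time difference fixes the angle between the wave's propagation direction and the Livingston-Hanford baseline.

```python
import math

c = 299_792_458.0        # speed of light, m/s
baseline = 3030.13e3     # Livingston-Hanford separation, m
dt = 0.007               # arrival-time difference, s

path_difference = c * dt                  # extra distance to the later site
cos_angle = path_difference / baseline    # angle between wave direction and baseline
angle_deg = math.degrees(math.acos(cos_angle))

# dt = 0 would mean the wave arrives broadside (90 deg, the "parallel" case above);
# dt = baseline / c would mean it travels right along the baseline (0 deg).
max_dt_ms = baseline / c * 1000
```

So 7 ms corresponds to roughly a 46-degree angle between the baseline and the propagation direction, and the largest possible delay is about 10.1 ms.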
"domain": "astronomy.stackexchange",
"id": 4647,
"tags": "gravitational-waves, ligo"
} |
Is this railway carriage side likely made of single metal sheet? | Question: A neat image from Wikipedia:
Here the railway carriage side looks like it is crafted from a single sheet of metal (if we ignore the doors and the windows and some minor parts like that small thing near the carriage number) - no seams, no rivets, no anything just a 20+ meters wide and about 3 meters tall single sheet of metal.
Is it indeed possible and reasonable to craft a carriage side of such large single sheet or should I expect that there're neatly hidden seams somewhere?
Answer:
Is it indeed possible
Yes
and reasonable to craft a carriage side of such large single sheet
It can be reasonable, but typically isn't. To form an entire side of the carriage from one sheet of steel would require a forming press that is gargantuan. However, one could instead have three more reasonably sized presses and then weld the sections together. Further, this could be more flexible - changing out the press plates is easier and faster, storing smaller press plates is cheaper and easier, if the machine breaks down you can still use the other presses, whereas if one piece of the gargantuan press breaks down, the entire operation might be waiting for the repair.
There are a whole host of issues that follow the same reasoning - if a panel is damaged, it's expensive and time consuming to rework, and affects a much larger part of the carriage than a single damaged panel would. The pieces are much harder to work with, move, and fasten, etc, etc.
So it's very unlikely that it's a single seamless sheet of steel or aluminum.
or should I expect that there're neatly hidden seams somewhere?
You should expect seams, and in fact, as ratchet freak pointed out:
"The bodyshell is [...] of full monocoque construction with an all-welded mild steel stressed skin,"
Monocoque is a technique where the skin or surface of the construction serves an integral support purpose.
The seams, therefore, may also provide support as ribs, and so you may find that having seams provides an added benefit, or perhaps reduces the need for internal supports.
The seams, of course, are not visible for both aerodynamic and visual appeal purposes, and it's relatively easy to hide weld lines with grinding, sanding, and polishing.
So without actual proof of manufacturing process, I believe you can safely assume smaller sheets with invisible seams are more likely than single seamless formed sheets. | {
"domain": "engineering.stackexchange",
"id": 220,
"tags": "materials, rail, metals"
} |
Is Hartmanis-Stearns conjecture settled by this article? | Question: The paper
"On the computational complexity of algebraic numbers: the Hartmanis--Stearns problem revisited"
by Boris Adamczewski, Julien Cassaigne, Marion Le Gonidec
https://arxiv.org/abs/1601.02771
claims in Theorem 2.3: "
An algebraic irrational real number cannot be generated by a one-stack machine, or equivalently, by a deterministic pushdown automaton."
I have read and checked the article and have not found any gap in it; is the Hartmanis-Stearns conjecture therefore settled by this theorem? Since the conjecture is considered hard to prove, I suspect that I have misunderstood the article or that there is some fault in it.
Answer: First, the name of the conjecture is "Hartmanis-Stearns", not "Hartmanis-Stearn".
Second, the Hartmanis-Stearns conjecture concerns those real numbers computable by a multi-tape Turing machine in real time; in other words, the TM must compute the n'th digit in n time.
Third, the result of Adamczewski et al. is only about finite automata and deterministic pushdown automata, both of which are weaker models than real-time Turing machines. | {
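For a concrete feel of how weak those models are, automatic sequences (the kind of object central to this line of work) are generated digit-by-digit by a finite automaton reading the base-2 digits of $n$. Here is an illustrative Python sketch (of the automatic-sequence setting, not of the real-time Turing machine model itself) producing the Thue-Morse sequence, the classic example, whose $n$-th digit is just the parity of the number of 1-bits of $n$:

```python
def thue_morse_digit(n: int) -> int:
    """n-th digit of the Thue-Morse sequence: the parity of the number of
    1-bits in the binary expansion of n. A finite automaton reading the
    base-2 digits of n computes exactly this."""
    return bin(n).count("1") % 2

prefix = "".join(str(thue_morse_digit(n)) for n in range(16))
# The sequence begins 0110 1001 1001 0110 ...
```

The real number with this binary expansion is transcendental, in line with the theme that such weak machines cannot produce algebraic irrationals.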
"domain": "cstheory.stackexchange",
"id": 4155,
"tags": "cc.complexity-theory, co.combinatorics, computability, automata-theory, nt.number-theory"
} |
what are best features | Question: I am doing OCR for Kannada and English. What are the best features in image? Please also tell me some tools to perform OCR before implementing it as code (like Rapiedminer for machine learning).
Answer: Tesseract is an often-used OCR tool, but before feeding it an image you should pre-process the image with some basic operations such as binarization, morphological operations, illumination equalization, and letter extraction.
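As an illustration of the binarization step, here is a dependency-free Python sketch (my own, rather than a call to OpenCV's cv2.threshold) implementing Otsu's method, which picks the threshold maximizing the between-class variance of the grayscale histogram:

```python
def otsu_threshold(pixels):
    """Pick the threshold that maximizes between-class variance over an
    8-bit grayscale image given as a flat list of ints in 0..255."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(256):
        w_b += hist[t]                 # background weight
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels, threshold):
    return [1 if p > threshold else 0 for p in pixels]

# Clearly bimodal toy "image": dark ink around 40, bright paper around 200.
img = [38, 41, 40, 39, 42] * 10 + [198, 201, 200, 199, 202] * 10
t = otsu_threshold(img)
binary = binarize(img, t)
```

On real scans you would run this per image (or per tile, for uneven illumination) before handing the result to Tesseract.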
"domain": "dsp.stackexchange",
"id": 1518,
"tags": "image-processing, computer-vision, preprocessing, ocr"
} |
Genetic Optimization, Heuristic regarding choosing the number of generations and population size | Question: I have a simple model with some fitness function that I'm trying to max out.
This model has ~20 variables, each with ~15 options.
Is there a heuristic formula or a study of some sort that can guide me regarding the best:
Population size
Generations
There are other options of course such as Mutation rate / Crossover rate but I mainly interested in the 2 above.
Answer:
Is there a heuristic formula or a study of some sort that can guide me regarding the best
Maybe, but I don't know any. My approach is to run a few experiments and observe the speed of convergence with different values for the size of the population, typically in the range 50 to 500. My experience is that usually this parameter doesn't have a big impact (especially compared to rate of mutation/crossover) so I tend to stay on the low side for efficiency reasons.
It's not technically required to specify the number of generations provided there is a criterion to check convergence, either manually or programmatically. I think it's more important to have such a criterion, because there's a always a risk that the predefined number of generations won't be sufficient. | {
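To make both points concrete, here is a minimal, dependency-free genetic algorithm sketch (all names and parameter values are illustrative, not a recommendation), sized roughly like the question's model (~20 variables with ~15 options each) and using a patience-based convergence criterion instead of a fixed generation count:

```python
import random

random.seed(0)

N_VARS, N_OPTIONS = 20, 15          # ~20 variables, ~15 options each
TARGET = [random.randrange(N_OPTIONS) for _ in range(N_VARS)]

def fitness(ind):
    # toy fitness: number of genes matching a hidden target
    return sum(a == b for a, b in zip(ind, TARGET))

def evolve(pop_size=100, mut_rate=0.05, patience=30, max_gens=2000):
    pop = [[random.randrange(N_OPTIONS) for _ in range(N_VARS)]
           for _ in range(pop_size)]
    best, stale, gens = max(map(fitness, pop)), 0, 0
    while gens < max_gens and stale < patience and best < N_VARS:
        def pick():                          # tournament selection, size 3
            return max(random.sample(pop, 3), key=fitness)
        nxt = []
        for _ in range(pop_size):
            a, b = pick(), pick()
            cut = random.randrange(1, N_VARS)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.randrange(N_OPTIONS)  # per-gene mutation
                     if random.random() < mut_rate else g
                     for g in child]
            nxt.append(child)
        pop = nxt
        gens += 1
        new_best = max(map(fitness, pop))
        stale = stale + 1 if new_best <= best else 0
        best = max(best, new_best)
    return best, gens

best, gens = evolve()
```

Rerunning evolve with pop_size set to 50, 100, 200, ... and comparing the generations needed is exactly the kind of small experiment described above.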
"domain": "datascience.stackexchange",
"id": 7189,
"tags": "genetic-algorithms"
} |
Refactoring repetitive GUI components in Java | Question: I have written a GUI map application using the JSwing design interface. After I finished the coding phase and started to refactor, I found that there are a lot of repeated components in the code.
This is mainly caused by 20 toggle buttons and their action listeners. Since each button has its own property, it's pretty hard to refactor them to another class and they are also a part of GUI components so this makes the refactoring even harder...
So my question is: how can I manage this "Frankenstein", reduce its complexity, and make it easy to maintain? Any advice on programming practices that I could apply to this code is also appreciated.
So here it goes:
package mainWindow.views;
import java.awt.EventQueue;
import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JFrame;
import javax.swing.JOptionPane;
import javax.swing.JPanel;
import javax.swing.border.EmptyBorder;
//...
public class mainWindowGUI extends JFrame {
/**
*
*/
private static final long serialVersionUID = -18439880310575154L;
private JPanel mainMenu;
private JLabel lblSearchMethod;
private JToggleButton tglbtnBfs;
private JToggleButton tglbtnDfs;
private JToggleButton tglbtnIds;
private JButton btnReset;
private JToggleButton loc_oradea;
private JToggleButton loc_zerind;
private JToggleButton loc_arad;
//...
private ActionListener locReceiver0;
private ActionListener locReceiver1;
//...
and other initial values
This is the main method:
public static void main(String[] args) {
EventQueue.invokeLater(new Runnable() {
public void run() {
try {
mainWindowGUI frame = new mainWindowGUI();
frame.setVisible(true);
} catch (Exception e) {
e.printStackTrace();
}
}
});
}
Outside the main method there are constructors, action listeners, GUI generation methods, all the toggle buttons, and methods for control flow...
public mainWindowGUI() {
createEvents();
layouts();
locationBtns();
buttons();
labels();
eventFlag = 0;
startNode = 0;
graph.print();
}
public void eventFork(int eventFlag, int startNode){
if(eventFlag == 1) {
graph.bfs(startNode);
graph.bfsCost();
graph.bfsPath();
System.out.println("\n---------------- Search Terminated ----------------");
}
else if(eventFlag == 2){
graph.dfs(startNode);
graph.dfsCost();
graph.dfsPath();
System.out.println("\n---------------- Search Terminated ----------------");
}
else if(eventFlag == 3){
graph.ids(startNode);
graph.idsCost();
System.out.println("\n---------------- Search Terminated ----------------");
}
else{
JOptionPane.showMessageDialog(null, "No searching method has been selected, please reset and select a searching method...");
}
}
ActionListener methods:
private void createEvents() {
resetListener = new ActionListener() {
@Override
public void actionPerformed(ActionEvent actionEvent) {
AbstractButton abstractButton = (AbstractButton) actionEvent.getSource();
boolean selected = abstractButton.getModel().isSelected();
tglbtnBfs.setSelected(selected);
tglbtnDfs.setSelected(selected);
tglbtnIds.setSelected(selected);
tglbtnIds.setSelected(selected);
loc_oradea.setSelected(selected);
loc_zerind.setSelected(selected);
loc_arad.setSelected(selected);
loc_timisoara.setSelected(selected);
loc_lugoj.setSelected(selected);
loc_mehadia.setSelected(selected);
loc_drobeta.setSelected(selected);
loc_craiova.setSelected(selected);
loc_giurgiu.setSelected(selected);
loc_bucharest.setSelected(selected);
loc_urziceni.setSelected(selected);
loc_hirsova.setSelected(selected);
loc_eforie.setSelected(selected);
loc_vaslui.setSelected(selected);
loc_iasi.setSelected(selected);
loc_neamt.setSelected(selected);
loc_fagaras.setSelected(selected);
loc_rimnicuVilcea.setSelected(selected);
loc_sibiu.setSelected(selected);
loc_pitesti.setSelected(selected);
loc_random.setSelected(selected);
}
};
dialogListener = new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
JOptionPane.showMessageDialog(null, "All reset!");
}
};
bfsBeginptn = new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
//BFS function
JOptionPane.showMessageDialog(null, "BFS search selected, Please select a starting point...");
eventFlag = 1;
}
};
dfsBeginptn = new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
//BFS function
JOptionPane.showMessageDialog(null, "DFS search selected, Please select a starting point...");
eventFlag = 2;
}
};
idsBeginptn = new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
//BFS function
JOptionPane.showMessageDialog(null, "IDS search selected, Please select a starting point...");
eventFlag = 3;
}
};
//and 17 other action listeners...
}
These are the toggle buttons:
private void locationBtns(){
loc_bucharest = new JToggleButton("");
loc_bucharest.setForeground(Color.BLACK);
loc_bucharest.setBounds(582, 374, 15, 15);
mainMenu.add(loc_bucharest);
loc_bucharest.addActionListener(locReceiver9);
loc_oradea = new JToggleButton("");
loc_oradea.setBounds(228, 42, 15, 15);
mainMenu.add(loc_oradea);
loc_oradea.addActionListener(locReceiver0);
loc_zerind = new JToggleButton("");
loc_zerind.setBounds(199, 100, 15, 15);
mainMenu.add(loc_zerind);
loc_zerind.addActionListener(locReceiver1);
//and 17 other buttons...
}
This method is for the GUI layout:
private void layouts() {
setTitle("HiveMap");
setIconImage(Toolkit.getDefaultToolkit().getImage(mainWindowGUI.class.getResource("/mainWindow/assets/icon.jpg")));
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
setBounds(100, 100, 930, 590);
mainMenu = new JPanel();
mainMenu.setBorder(new EmptyBorder(5, 5, 5, 5));
setContentPane(mainMenu);
mainMenu.setLayout(null);
}
This is the paint method that draws the map:
public void paint(Graphics g) {
super.paint(g);
Point2D.Float oradea = new Point2D.Float(245, 88);
Point2D.Float zerind = new Point2D.Float(216, 146);
Point2D.Float arad = new Point2D.Float(193, 202);
Point2D.Float timisoara = new Point2D.Float(193, 310);
Point2D.Float Lugoj = new Point2D.Float(285, 352);
Point2D.Float mehadia = new Point2D.Float(289, 403);
Point2D.Float drobeta = new Point2D.Float(285, 454);
Point2D.Float craiova = new Point2D.Float(403, 469);
Point2D.Float giurgiu = new Point2D.Float(565, 493);
Point2D.Float bucharest = new Point2D.Float(599, 420);
Point2D.Float urziceni = new Point2D.Float(663,390);
Point2D.Float hirsova = new Point2D.Float(777, 403);
Point2D.Float eforie = new Point2D.Float(815, 469);
Point2D.Float vaslui = new Point2D.Float(747, 268);
Point2D.Float iasi = new Point2D.Float(697, 177);
Point2D.Float neamt = new Point2D.Float(613, 146);
Point2D.Float fagaras = new Point2D.Float(479, 257);
Point2D.Float rimnicuVilcea = new Point2D.Float(382, 310);
Point2D.Float sibiu = new Point2D.Float(350, 245);
Point2D.Float pitesti = new Point2D.Float(492, 368);
Graphics2D g2 = (Graphics2D) g;
Line2D lin1 = new Line2D.Float(oradea, zerind);
Line2D lin2 = new Line2D.Float(zerind, arad);
Line2D lin3 = new Line2D.Float(arad, timisoara);
Line2D lin4 = new Line2D.Float(timisoara, Lugoj);
Line2D lin5 = new Line2D.Float(Lugoj, mehadia);
Line2D lin6 = new Line2D.Float(mehadia, drobeta);
Line2D lin7 = new Line2D.Float(drobeta, craiova);
Line2D lin8 = new Line2D.Float(craiova, pitesti);
Line2D lin9 = new Line2D.Float(pitesti, bucharest);
Line2D lin10 = new Line2D.Float(bucharest, giurgiu);
Line2D lin11 = new Line2D.Float(bucharest, urziceni);
Line2D lin12 = new Line2D.Float(urziceni, hirsova);
Line2D lin13 = new Line2D.Float(hirsova, eforie);
Line2D lin14 = new Line2D.Float(urziceni, vaslui);
Line2D lin15 = new Line2D.Float(vaslui, iasi);
Line2D lin16 = new Line2D.Float(iasi, neamt);
Line2D lin17 = new Line2D.Float(bucharest, fagaras);
Line2D lin18 = new Line2D.Float(fagaras, sibiu);
Line2D lin19 = new Line2D.Float(sibiu, rimnicuVilcea);
Line2D lin20 = new Line2D.Float(rimnicuVilcea, craiova);
Line2D lin21 = new Line2D.Float(rimnicuVilcea, pitesti);
Line2D lin22 = new Line2D.Float(sibiu, oradea);
Line2D lin23 = new Line2D.Float(sibiu, arad);
g2.draw(lin1);
g2.draw(lin2);
g2.draw(lin3);
g2.draw(lin4);
g2.draw(lin5);
g2.draw(lin6);
g2.draw(lin7);
g2.draw(lin8);
g2.draw(lin9);
g2.draw(lin10);
g2.draw(lin11);
g2.draw(lin12);
g2.draw(lin13);
g2.draw(lin14);
g2.draw(lin15);
g2.draw(lin16);
g2.draw(lin17);
g2.draw(lin18);
g2.draw(lin19);
g2.draw(lin20);
g2.draw(lin21);
g2.draw(lin22);
g2.draw(lin23);
}//end of paint()
}//end of class
Answer: Arrays and loops are your friend.
g2.draw(lin1);
g2.draw(lin2);
g2.draw(lin3);
g2.draw(lin4);
g2.draw(lin5);
g2.draw(lin6);
g2.draw(lin7);
g2.draw(lin8);
g2.draw(lin9);
g2.draw(lin10);
g2.draw(lin11);
g2.draw(lin12);
g2.draw(lin13);
g2.draw(lin14);
g2.draw(lin15);
g2.draw(lin16);
g2.draw(lin17);
g2.draw(lin18);
g2.draw(lin19);
g2.draw(lin20);
g2.draw(lin21);
g2.draw(lin22);
g2.draw(lin23);
If you held all these Line2D as an array named lines could refactor the above as:
for (Line2D line : lines) {
g2.draw(line);
}
Similarly, if you declare your Point2D.Float objects as an array, you could simply match the right points by index, e.g.
pointfloats[0] = new Point2D.Float(245, 88);
// etc...
for (int i = 0, j = 0; i < lines.length; i++, j += 2) {
lines[i] = new Line2D.Float(pointfloats[j], pointfloats[j + 1]);
}
Do note that we're still declaring the points line by line. This is undesirable, and you have 'magic numbers' all over the place. A better approach would be to externalize this data and read it all into a data array, so you can just have 3 arrays and loops to connect them. This also has the added benefit of not needing to recompile for a data change.
This really goes for most of this code, as all that needs to be done is to hold the objects in an array; e.g. your ActionListener's setSelected procedure should just be one loop.
"domain": "codereview.stackexchange",
"id": 26560,
"tags": "java, gui"
} |
Confusion about the Isobaric Process Heat Equation | Question: I was reading the Wikipedia article on the isobaric process (link at the bottom of the post) and am confused as to why, in their derivation of the heat of an isobaric system, they use the specific heat capacity at constant volume when the volume is changing and the pressure is constant. Why do they relate internal energy to a heat capacity at constant volume and not constant pressure, and how is $C_p=C_v+R$? My textbook, 'Physics for Scientists and Engineers' by Tipler, states that $C_p-C_v=nR$, so where has the $n$ gone in the wiki equation?
$\underline{Screen \: Capture \: Of \: Wiki}$
https://en.wikipedia.org/wiki/Isobaric_process
Answer: The reason why the specific heat at constant volume is used even though it is not a constant volume process is because it happens to be that change in internal energy for an ideal gas, for ANY process, is given by
$$\Delta U = C_v\Delta T$$
This can be derived as follows:
For a constant pressure process:
$$\Delta U=Q-W$$
$$\Delta U=C_p\Delta T - P\Delta V$$
For one mole of an ideal gas
$$P\Delta V=R\Delta T$$
Therefore
$$ \Delta U=C_p\Delta T - R\Delta T$$
Also, for any ideal gas
$$R=C_p-C_v$$
Substituting
$$ \Delta U=C_p\Delta T - (C_p-C_v)\Delta T$$
$$\Delta U=C_v\Delta T$$
Concerning the equation for R, $C_p-C_v=R$ for one mole of an ideal gas (n=1)
We can prove that $C_p-C_v=R$ by using the basic definitions of the specific heats and enthalpy, combined with the ideal gas law.
Specific heat definitions, ideal gas (they are actually partial derivatives holding P and V constant, respectively):
$$C_p = \frac {dH}{dT}$$
$$C_v = \frac {dU}{dT}$$
Definition of enthalpy (H)
$$H = U + PV$$
For one mole of an ideal gas, ideal gas law
$$PV=RT$$
Therefore
$$H = U+RT$$
Taking the derivative of the last equation with respect to temperature:
$$\frac {dH}{dT} =\frac {dU}{dT}+R$$
Substituting the specific heat definitions into the last equation, we get
$$C_p - C_v = R$$
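A quick numerical cross-check of the above (my own sketch, taking one mole of a monatomic ideal gas, for which $C_v = \tfrac{3}{2}R$):

```python
R = 8.314          # J/(mol K); everything below is for one mole (n = 1)

Cv = 1.5 * R       # monatomic ideal gas
Cp = Cv + R        # the relation derived above

dT = 10.0          # heat the gas at constant pressure through +10 K
Q = Cp * dT        # heat added in the isobaric process
W = R * dT         # work done by the gas: P*dV = R*dT for one mole

dU_first_law = Q - W      # first law of thermodynamics
dU_shortcut = Cv * dT     # the result derived above: dU = Cv*dT for ANY process
```

The two values of the internal-energy change agree, which is the whole point: $C_v$ shows up even though the volume is not constant.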
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 56942,
"tags": "thermodynamics"
} |
Solvation of uranyl formate/acetate | Question: Uranyl formate has a structure something like:
$\ce{UO2-(OCHO)2}$
So the structural formulae I've seen have two double-bonded oxygens directly attached to the U and each formate group is attached by single bonds to the oxygens. Like this.
The empirical formula for the dry powder is written as a monohydrate.
Can someone explain how this thing gets solvated in water and what the solvated ions are, exactly? I just realized I was assuming it dissociates into a 'uranyl' ion (whatever that looks like) and two formate ions$\ldots$but then I'm confused about where the formate's oxygen goes.
Answer: This is a uranyl ion, one of the most common forms of dissolved uranium:
Because the two U=O bonds can form a single delocalised region, there's a strong tendency towards linearity (c.f. carbon dioxide). It tends to dissolve in water by gaining equatorial water ligands (4 or 5), some of which it may then hydrolyse, depending on the pH:
The axial oxygen atoms are also valid targets for hydrogen bonding, typically gaining one hydrogen bond each.
The difficulty in dissolving uranyl formate in particular suggests that the formate ion is quite well-bound, and it seems that in this case you get a combination of water and formate ligands in the complex around the uranyl. I was expecting the formate to be a bi-dentate ligand, but according to the results of a combined computational and experimental study 1, it's uni-dentate (which incidentally allows it to form polymeric chains in solution).
Here's a picture from that study showing a computationally obtained structure of a mixed water/formate-ligated uranyl structure.
1. C. Lucks, A. Rossberg, S. Tsushima, H. Foerstendorf, K. Fahmy and G. Bernhard, Dalton Trans., 2013, DOI: 10.1039/C3DT51711J
"domain": "chemistry.stackexchange",
"id": 575,
"tags": "inorganic-chemistry, reaction-mechanism, structural-formula"
} |
Categorical Variables - Classification | Question: I have a categorical variable, country, which takes on values like India, US, Pakistan, etc. I am currently using a linear SVM for a classification task.
So my country variable takes values from 1-20. How should this become a feature for the classification task? Should I use a one-hot vector like (1,0,0,...) for US and assign this vector 20 weights, or should I use an integer from 1-20 and assign a single weight? I am using scikit-learn. Does the answer depend on the classifier?
Answer: The answer depends less on the classifier and more on the nature of the variable. In your case One Hot Encoding might be the best answer.
Label Encoding (Replacing categorical variables with integers) is useful when the variable is ordinal, i.e. it has a sense of order. For example the days of the week or the months of the year. Since they follow a fixed order, you can encode January as 1 and February as 2 and so on. The classifier would interpret Feb as being greater than Jan in some way (which is okay for a task like weather prediction and so on).
Can your countries be considered to be ordinal? If not, One Hot Encode them. | {
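For concreteness, here is a pure-Python sketch of the two encodings (scikit-learn's LabelEncoder/OneHotEncoder, or pandas.get_dummies, do the same thing in practice):

```python
countries = ["India", "US", "Pakistan", "US", "India"]

# Label encoding: one integer per category (implies a spurious order)
labels = sorted(set(countries))                  # deterministic category order
label_of = {c: i for i, c in enumerate(labels)}  # {'India': 0, 'Pakistan': 1, 'US': 2}
label_encoded = [label_of[c] for c in countries]

# One-hot encoding: one indicator column per category, one weight each
def one_hot(c):
    return [1 if c == cat else 0 for cat in labels]

one_hot_encoded = [one_hot(c) for c in countries]
```

With the one-hot version, a linear model learns an independent weight per country instead of treating "US" as somehow twice "Pakistan".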
"domain": "datascience.stackexchange",
"id": 1794,
"tags": "machine-learning, scikit-learn, categorical-data"
} |
Does Earth's umbra reach Sun-Earth L2? | Question: The Moon can be fully eclipsed by virtue of being fairly close to Earth. Any body distant enough will not be fully eclipsed, since Earth's disc would not fully cover the Sun's disc. How is it for anything at the L2 point? Will it be lit by the Sun's corona, or will it bask in the deep shadow of Earth?
Bonus question: if the umbra does reach there, how large a body could remain fully shaded at L2? What is the umbra's radius there?
Answer: The Lagrangian point $L_2$ is very close to the most distant point from Earth with an umbra.
The distance to $L_2$ is approximately the Hill-sphere radius $r=a\sqrt[3]{\frac{m}{3M}}$ for circular orbits, with $m$ the mass of Earth, $M$ the mass of the Sun, and $a$ the Earth-Sun distance. The ratio $\frac{m}{3M}$ of Earth's mass to triple the Sun's mass is almost exactly $10^{-6}$, and its cube root is hence $0.01$.
The diameter ratio of Earth and Sun is about $1/109$. Therefore the umbra of Earth ends near $92\%$ of the distance to $L_2$.
The answer to another bonus question would then be: if Earth were $9\%$ larger in diameter, but with the same mass, its umbra would end almost exactly at $L_2$.
Earth's orbit isn't perfectly circular, but the aphelion/perihelion ratio of about $1.03$ is too small to change the result qualitatively.
The error of the implicit assumptions $\tan x=x=\sin x$ is negligible at the considered level of accuracy.
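The arithmetic above is short enough to script (my own sketch, with round numbers): by similar triangles the umbra extends $a/(k-1)$ behind Earth for a diameter ratio $k$, while the Hill-sphere estimate puts $L_2$ at $a\sqrt[3]{m/3M}$.

```python
a = 1.496e8                  # Earth-Sun distance, km
k = 109                      # solar diameter / terrestrial diameter
umbra_length = a / (k - 1)   # distance from Earth to the umbra's tip

m_over_M = 3.003e-6          # Earth/Sun mass ratio
r_L2 = a * (m_over_M / 3) ** (1 / 3)   # Hill-sphere estimate of the L2 distance

fraction = umbra_length / r_L2         # how far toward L2 the umbra reaches
```

The fraction comes out close to 0.93, i.e. the umbra falls short of L2 by several percent, matching the statement above.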
"domain": "astronomy.stackexchange",
"id": 1373,
"tags": "lunar-eclipse, lagrange-point"
} |
Basic questions on Majorana fermions | Question: Why can any fermion be written as a combination of two Majorana fermions? Is there any physical meaning to this? Why can Majorana fermions be used for topological quantum computation?
Answer: I'm adding an extra answer, since I believe Jeremy's first question is still unanswered. The previous answer is clear, pedagogical and correct, and the discussion is really interesting, too. Thanks to Nanophys and Heidar for this.
To answer Jeremy's question directly: you can ALWAYS construct a representation of your favorite fermion modes in terms of Majorana modes. I use the word "modes" because I'm a condensed matter physicist: I never work with particles, only with quasi-particles, so it is perhaps better to talk about modes.
So the unitary transformation from fermion modes created by $c^{\dagger}$ and destroyed by the operator $c$ to Majorana modes is
$$
c=\dfrac{\gamma_{1}+\mathbf{i}\gamma_{2}}{\sqrt{2}}\;\text{and}\;c{}^{\dagger}=\dfrac{\gamma_{1}-\mathbf{i}\gamma_{2}}{\sqrt{2}}
$$
or equivalently
$$
\gamma_{1}=\dfrac{c+c{}^{\dagger}}{\sqrt{2}}\;\text{and}\;\gamma_{2}=\dfrac{c-c{}^{\dagger}}{\mathbf{i}\sqrt{2}}
$$
and this transformation is always allowed, being unitary. Having done this, you have just changed the basis of your Hamiltonian. The quasi-particles associated with the $\gamma_{i}$ modes verify $\gamma_{i}^{\dagger}=\gamma_{i}$ and a fermionic anticommutation relation $\left\{ \gamma_{i},\gamma_{j}\right\} =\delta_{ij}$, but they are not particles at all. A simple way to see this is to try to construct a number operator with them (if we cannot count the particles, are they particles? I guess not). We would guess $\gamma^{\dagger}\gamma$ is a good one. This is not true, since $\gamma^{\dagger}\gamma=\gamma^{2}=1/2$ is always a constant... The only correct number operator is $c^{\dagger}c=\tfrac{1}{2}+\mathbf{i}\gamma_{1}\gamma_{2}$.
To verify that the Majorana modes are anyons, you should braid them (know their exchange statistics) -- I do not want to say much about that; Heidar made all the interesting remarks about this point. I will come back later to the fact that there are always $2$ Majorana modes associated with $1$ fermionic ($c^{\dagger}c$) one. Most has already been said by Nanophys, except an important point I will discuss later, concerning the delocalization of the Majorana mode.
I would like to finish this paragraph by saying that the Majorana construction is no more than the usual construction for bosons: $x=\left(a+a^{\dagger}\right)/\sqrt{2}$ and $p=\left(a-a^{\dagger}\right)/\mathbf{i}\sqrt{2}$: only $x^{2}+p^{2}\propto a^{\dagger}a$ (with proper dimensional constants) is an excitation number. Majorana modes share a lot of properties with the $p$ and $x$ representation of quantum mechanics (symplectic structure among others).
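These algebraic statements are easy to check numerically. Here is a small numpy sketch (my own illustration, for a single fermionic mode on its two-dimensional Fock space) verifying hermiticity, the anticommutation relations in the $1/\sqrt{2}$ normalization used above, that $\gamma^{\dagger}\gamma=\gamma^{2}$ is a constant (here $I/2$) rather than a counter, and that the number operator works out to $c^{\dagger}c=\tfrac{1}{2}+\mathbf{i}\gamma_{1}\gamma_{2}$ with these conventions:

```python
import numpy as np

# One fermionic mode on its two-dimensional Fock space {|0>, |1>}
c = np.array([[0, 1], [0, 0]], dtype=complex)   # annihilation operator
cd = c.conj().T                                 # creation operator
I = np.eye(2)

# Majorana combinations in the 1/sqrt(2) normalization used above
g1 = (c + cd) / np.sqrt(2)
g2 = (c - cd) / (1j * np.sqrt(2))

def anticomm(a, b):
    return a @ b + b @ a

# Hermiticity and {g_i, g_j} = delta_ij
herm_ok = np.allclose(g1, g1.conj().T) and np.allclose(g2, g2.conj().T)
acomm_ok = (np.allclose(anticomm(g1, g1), I) and
            np.allclose(anticomm(g2, g2), I) and
            np.allclose(anticomm(g1, g2), 0 * I))

# g^dag g = g^2 is a constant (I/2 here), so it cannot count anything;
# the genuine number operator in this normalization is c^dag c = 1/2 + i g1 g2
const_ok = np.allclose(g1 @ g1, I / 2)
number_ok = np.allclose(cd @ c, I / 2 + 1j * (g1 @ g2))
```

All four checks pass, with the exact constants depending only on the chosen normalization.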
The next question is the following: are there some situations when
the $\gamma_{1}$ and $\gamma_{2}$ are the natural excitations of
the system ? Well, the answer is complicated, both yes and no.
Yes, because Majorana operators describe the correct excitations of
some topological condensed matter realisation, like the $p$-wave
superconductivity (among a lot of others, but let me concentrate on this specific one, that I know better).
No, because these modes are not excitations at all! They are zero
energy modes, which is not the definition of an excitation. Indeed,
they describe the different possible vacuum realisations of an emergent
vacuum (emergent in the sense that superconductivity is not a natural
situation, it's a condensate of interacting electrons (say)).
As pointed out in the discussion associated to the previous answer,
the normal terminology for these pseudo-excitations are zero-energy-mode.
That's what they are: energy modes at zero energy, in the middle of
the (superconducting) gap. Note also that in condensed matter, the
gap provides the entire protection of the Majorana-mode, there is
no other protection in a sense. Some people believe there is a kind
of delocalization of the Majorana, which is true (I will come to that
in a moment). But the delocalization comes along with the gap in fact:
there is not allowed propagation below the gap energy. So the Majorana
modes are necessarily localized because they lie at zero energy, in
the middle of the gap.
More words about the delocalization now -- as I promised. Because
one needs two Majorana modes $\gamma_{1}$ and $\gamma_{2}$ to each
regular fermionic $c{}^{\dagger}c$ one, any two associated Majorana
modes combine to create a regular fermion. So the most important challenge
is to find delocalized Majorana modes ! That's the famous
Kitaev proposal arXiv:cond-mat/0010440 -- he said unpaired Majorana instead of delocalised, since delocalization comes for free once again. At the
end of a topological wire (for me, a $p$-wave superconducting wire)
there will be two zero-energy modes, exponentially decaying in space
since they lie at the middle of the gap. These zero-energy modes can
be written as $\gamma_{1}$ and $\gamma_{2}$ and they verify $\gamma{}_{i}^{\dagger}=\gamma_{i}$
each !
To conclude, an actual vivid question, still open: there are a lot
of pseudo-excitations at zero-energy (in the middle of the gap). The
only difference between Majorana modes and the other pseudo-excitations
is the definition of the Majorana $\gamma^{\dagger}=\gamma$, the
other ones are regular fermions. How to detect for sure the Majorana
pseudo-excitation (zero-energy mode) in the jungle of the other ones
? | {
"domain": "physics.stackexchange",
"id": 6963,
"tags": "condensed-matter, quantum-computer, topological-order, anyons, majorana-fermions"
} |
Why is my training accuracy decreasing with higher degrees of polynomial features? | Question: I am new to Machine Learning and started solving the Titanic Survivor problem on Kaggle.
While solving the problem using Logistic Regression I used various models having polynomial features with degree $2,3,4,5,6$. Theoretically the accuracy on the training set should increase with degree; however, it started decreasing past degree $2$. The graph is as per below
Answer: Higher polynomial degrees correspond to more parameters. Typically, a model with more parameters will fit the data better, as it will have higher likelihood (and the goal is to maximize the log-likelihood of the parameters). Yes it will overfit, but overfitting should still mean higher accuracy on the training data. So why would more parameters stop fitting the training data? That's because of the Bayesian Occam's razor effect.
Models with more parameters do not necessarily have higher marginal likelihood. Think about it as follows; as your parameters increase, the model needs to spread the probability mass over the solution space ever more thinly, so the model will tend to be flat (i.e. counter intuitively it will underfit). This is referred to as the conservation of probability mass principle.
For more on this, refer to Kevin P. Murphy's "Machine Learning: A Probabilistic Perspective" 5.3.1
Another suspect would be the curse of dimensionality, as the solution space will grow exponentially as you add more parameters. So although it might seem that the model suddenly underfits the training data, the solution space has grown too much after increasing the degree beyond a certain threshold. | {
"domain": "datascience.stackexchange",
"id": 10883,
"tags": "scikit-learn, logistic-regression, accuracy, classifier"
} |
Chaitin's constant is normal? | Question: According to this source, Chaitin's constant $\Omega$ is normal.
Each halting probability is a normal and transcendental real number that is not computable, which means that there is no algorithm to compute its digits. Indeed, each halting probability is Martin-Löf random, meaning there is not even any algorithm which can reliably guess its digits.
Source (Wikipedia)
Furthermore, the definition of normal is that each digit occurs with equal probability $1/b$. And that each duets of digits occur with $1/b^2$ probability and every triplets occurs with probability $1/b^3$ and so on.
Chaitin's omega is calculated via
$\Omega = \sum_{p \in halts} 2^{-|p|}$
Writing $\Omega$ in binary, we obtain a list of 0 and 1. For example,
2^-1=0.1 +
2^-2=0.01 +
2^-3=0.001 +
~skip 2^-4 as it does not halt
2^-5=0.00001 +
...
=\Omega
=0.11101...
Clearly, we can see that the position of each bit corresponds to the halting state of the program whose length corresponds to that position.
Here is what I am struggling with
If $\Omega$ is indeed normal, then it means that exactly 50% of programs halt and exactly 50% do not. This seems very counter intuitive.
For example, suppose I generate java programs by randomly concatenating single characters. The majority of them, I would guess more than 99.99% would not even compile. Would this not imply that at least 99.99% of them will not halt? How do we justify that exactly half will halt and exactly half will not, by virtue of $\Omega$ being normal.
Or is wikipedia incorrect about $\Omega$ being normal?
Answer: In contrast to your example, Chaitin's constant is not defined as follows:
$$ \Omega = \sum_{n\colon \text{$n$th program halts}} 2^{-n}. $$
Instead, there is a set $\Pi \subseteq \{0,1\}^*$ of allowed programs which is prefix-free (no string is a prefix of another string). Each of the programs in $\Pi$ is legal (this negates your Java example). If the programs are encoded in unary then it is indeed the case that the $n$th program has length $n$, and then your definition of $\Omega$ works. But for other encodings, the definition of $\Omega$ is
$$ \Omega = \sum_{ p \in \Pi\colon p \text{ halts} } 2^{-|p|}, $$
where $|p|$ is the length of the binary string $p$. Kraft's inequality shows that $\sum_{p \in \Pi} 2^{-|p|} \leq 1$.
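To make the prefix-free condition and Kraft's inequality concrete, here is a small Python sketch with a toy code (the code words are illustrative stand-ins, not any real machine encoding):

```python
# Kraft's inequality for a toy prefix-free set of "programs".
from fractions import Fraction

programs = ["0", "10", "110", "1110", "1111"]

def is_prefix_free(words):
    # no word may be a proper prefix of another
    return not any(a != b and b.startswith(a) for a in words for b in words)

assert is_prefix_free(programs)

kraft_sum = sum(Fraction(1, 2 ** len(p)) for p in programs)
assert kraft_sum <= 1   # Kraft: sum of 2^-|p| over a prefix-free set is <= 1
print(kraft_sum)        # this particular code is complete: the sum is exactly 1
```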
Chaitin's constant is algorithmically random: the (prefix) Kolmogorov complexity of the first $n$ bits is $n - O(1)$. To show this, note first that $\Omega_n$, the first $n$ bits of $\Omega$, suffice to determine whether a program of length $n$ (under the encoding $\Pi$) halts or not. Indeed, as a fraction, $\Omega_n \leq \Omega < \Omega_n + 2^{-n}$. Run all programs in parallel, and whenever $p$ stops, add $2^{-|p|}$ to some counter $C$ (initialized at zero). Eventually $C \geq \Omega_n$ (since $C \to \Omega$ from below). At this point, if the input program of length $n$ didn't halt, then you know that it doesn't halt, since otherwise $\Omega \geq C + 2^{-n} \geq \Omega_n + 2^{-n}$.
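The dovetailing argument in the previous paragraph can be sketched on a toy "machine" where halting is decidable by construction. The programs and halting times below are made up for illustration; on a real universal machine $\Omega$ is uncomputable, and the first $n$ bits would have to be supplied as an oracle:

```python
# Toy sketch of deciding halting from the first n bits of Omega by dovetailing.
from fractions import Fraction
from math import floor

# hypothetical prefix-free programs with made-up halting times (None = never halts)
halting_time = {"0": 3, "10": None, "110": 5, "111": None}

# Omega = sum of 2^-|p| over halting p; here 1/2 + 1/8 = 0.101 in binary
omega = sum(Fraction(1, 2 ** len(p))
            for p, t in halting_time.items() if t is not None)

def omega_first_bits(n):
    """Omega truncated to its first n binary digits, as a fraction."""
    return Fraction(floor(omega * 2 ** n), 2 ** n)

def halts(p):
    """Decide whether p halts, given Omega_n for n = |p|, by dovetailing."""
    target = omega_first_bits(len(p))
    c = Fraction(0)                        # mass of programs seen to halt so far
    step = 0
    while c < target:
        step += 1
        for q, t in halting_time.items():  # run all programs one more step
            if t == step:
                if q == p:
                    return True
                c += Fraction(1, 2 ** len(q))
    # c >= Omega_n and p has not halted: if it halted later, Omega would reach
    # Omega_n + 2^-n, a contradiction -- so p never halts.
    return False

assert halts("0") and halts("110")
assert not halts("10") and not halts("111")
```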
Given this, suppose that for some $K>0$ and infinitely many $n$, you could compute $\Omega_n$ using $n - K$ bits. For each such $n$, you can find a string whose Kolmogorov complexity is larger than $n$, by considering the output of all halting programs of length at most $n$. For large enough $K$, the result is a program of length at most $n$ for computing a string whose Kolmogorov complexity is more than $n$. This contradiction proves that for some $K$, the Kolmogorov complexity of $\Omega_n$ is at least $n-K$.
Algorithmic randomness of $\Omega$ implies, in particular, that the frequency of 0s and 1s in its binary expansion tends to 1/2. Indeed, suppose that for some (rational) $\epsilon > 0$ there exist infinitely many $n$ such that the fraction of 1s in $\Omega_n$ is at most $1/2-\epsilon$. Since there are only at most $2^{h(1/2-\epsilon)n}$ strings with at most $1/2-\epsilon$ many 1s, we can compress $\Omega_n$ to size $h(1/2-\epsilon)n + 2\log n + C_\epsilon$ (the constant $C_\epsilon$ depends on $\epsilon$ since the program needs to know $\epsilon$). However, this is $n - \omega(1)$, contradicting the algorithmic randomness of $\Omega$. | {
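The counting step in this compression argument can be checked numerically; a short Python sketch (the values of $n$ and $\epsilon$ are arbitrary illustrative choices):

```python
# Entropy bound used above: the number of n-bit strings with at most
# (1/2 - eps)*n ones is at most 2^{h(1/2 - eps) * n}, so they are compressible.
from math import comb, log2

def h(p):
    """Binary entropy function, 0 < p < 1."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

n, eps = 200, 0.1
k = int((0.5 - eps) * n)                        # at most k ones
count = sum(comb(n, i) for i in range(k + 1))

assert count <= 2 ** (h(0.5 - eps) * n)         # standard bound, valid for k <= n/2
print(log2(count), h(0.5 - eps) * n, n)         # description length well below n bits
```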
"domain": "cs.stackexchange",
"id": 19531,
"tags": "halting-problem"
} |
Has Chandra Varma explained cuprate superconductivity? | Question: Chandra Varma is a theoretical physicist at University of California, Riverside. A couple years ago, he gave a talk at my institution purporting to explain superconductivity in the cuprates. It all sounded so great and convincing, and I want to know what the status is with that.
Here (link) (arxiv version link) is a 2006 paper explaining his theory in detail, and here (link) is a more recent one. Supposedly, there is a spontaneous breaking of time-reversal symmetry, resulting in microscopic currents with long-range order, running in loops between the copper and oxygen atoms. These currents can function as the glue for electron-electron pairing. These currents, which are very hard to detect, have actually been seen by spin-polarized ARPES and by spin-polarized neutron diffraction. Here (link) and here (link) is the spin-polarized neutron diffraction study, and Here (link) is a more recent 2010 neutron-scattering experiment confirming the same results, published in Nature. Here (link) is a report of him discussing his theory with other experts in superconductivity theory.
So, as far as I can tell, this is a simple, elegant, experimentally-proven theory explaining cuprate superconductivity. The theory and supporting experiments are at least five years old. But everyone still says that cuprate superconductivity is a mystery. What's the deal? Is there a problem or controversy in this theory? Does the theory explain only a small part of the mystery of superconductivity? Am I misunderstanding something?
(I am a condensed-matter physicist but not a superconductivity specialist.)
Answer: OK - taking the questions one at a time. Full disclosure: I'm a member of the phonon tribe, but I'm trying not to let that cloud my response here.
"So, as far as I can tell, this is a simple, elegant, experimentally-proven theory explaining cuprate superconductivity. The theory and supporting experiments are at least five years old. But everyone still says that cuprate superconductivity is a mystery. What's the deal? Is there a problem or controversy in this theory?"
Sure: So why does [your field] need a whole journal, anyway?
Joking aside, yes, there are problems with this theory. From the experimental side, for the most recent evidence, the presence of orbital currents was excluded in the pseudogap phase using NQR [dx.doi.org/10.1103/PhysRevLett.106.097003], and a muon spectroscopy study done on the very same sample that was used for the first polarized diffraction study shows that there is a substantial amount of magnetic impurity phase [dx.doi.org/10.1103/PhysRevLett.103.167002], making the clear-cut interpretation of orbital currents ambiguous. The predicted fluctuations indicate one should see a strong quasi-elastic response in bulk probes like neutron scattering upon entry into the pseudogap phase, but as far as I know, there is no evidence to this point. This theory seemingly lacks an explanation for the observations from neutron scattering about spin resonances (gapped excitations that are observed at the AF ordering vector). It may be a red herring, but as the resonance is universal in unconventional SC and scales with Tc, some explanation is warranted. Further, this theory is silent on the matter of electron doping, which also induces SC in some copper-oxide materials.
Does the theory explain only a small part of the mystery of superconductivity?
It provides correct order of magnitude estimates for the Tc and size of the superconducting gap. It explains some of the exotic behavior exhibited in the pseudogap phase.
Am I misunderstanding something?
Probably. There are hundreds (thousands?) of theories of HTSC, all of different merit, and all interpreting the evidence very carefully. As is pointed out in the comments, these things are pretty contentious, and I'll reiterate that the evidence has been refuted in several cases.
Now, none of this answers your primary question, which is whether or not this theory solves HTSC. The answer to that question is definitely maybe. It does get the order of Tc and the energy gap correct, and has supporting evidence. However, the data that these papers provide as evidence present some ambiguities in the interpretation of the experimental result in light of sample preparation issues and choice of probe. Fundamentally, a theory of high Tc must have some predictive power of the Tc, the energy gap, be supported by most of the experimental evidence, and to some extent, what the sample makers could put into a crucible and have pop out as HTSC. It is not clear that this theory has done the last of these two sufficiently, and some reconciliation needs to be done. | {
"domain": "physics.stackexchange",
"id": 2054,
"tags": "condensed-matter, superconductivity"
} |
Why do we use water pipes instead of air pipes to warm our houses? | Question: In France, most houses use water pipes. I wonder why this choice was originally made.
Answer: It all boils down to energy and heat capacity.
Water has a specific heat of 4.186 J/(g·°C), versus air, which has a specific heat of 1.005 J/(g·°C).
To keep a radiator at a temperature designed to heat a room, 70 °C or more, it would take many times the amount of air blown through, since not only the specific heat per gram but also the density of air is much smaller than that of water. Much larger pipes would have to be designed.
Air is good for convective heating, i.e. displacing the air in the room with hot air, from ducts large enough to be able to transfer the energy needed to heat a room. | {
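The rough numbers behind this can be put in a short sketch (the specific heats are the ones quoted above; the densities are illustrative textbook values near room temperature, my assumption, not from the answer):

```python
# Mass and volume of air vs. water needed to carry the same heat
# for the same temperature drop (back-of-the-envelope illustration).
c_water, c_air = 4.186, 1.005        # specific heat, J/(g*K), from the answer
rho_water, rho_air = 1000.0, 1.2     # density, kg/m^3 (assumed textbook values)

mass_ratio = c_water / c_air                     # ~4.2x more air mass needed
volume_ratio = mass_ratio * rho_water / rho_air  # thousands of times more volume
print(round(mass_ratio, 1), round(volume_ratio))
```

The volume ratio of roughly three and a half thousand is why hot-air distribution needs large ducts while hot-water distribution gets by with narrow pipes.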
"domain": "physics.stackexchange",
"id": 18264,
"tags": "thermodynamics, everyday-life"
} |
Deal with partial messages using SocketAsyncEventArgs | Question: As you know, when dealing with SocketAsyncEventArgs, it is possible to receive partial messages and, depending on the protocol used, you have to deal with it. In my case, this is real-time market data and every message is separated with a \r\n pattern. In order to reduce memory allocation, I decided to build a handler for this purpose and I wish to have your feedback. Basically, every time I receive data, I write to the handler and check if I have at least one complete message. If yes, I push it to the upper layer. If not, I wait for a second transmission and so on.
Handler Tests
public class MessageHandlerTests
{
private MessageHandler _messageHandler;
[SetUp]
public void SetUp()
{
_messageHandler = new MessageHandler(8192, '\n');
}
[Test]
public void TryRead_Should_Return_Positive_Count_After_Receiving_Delimeter_On_First_Write()
{
// Arrange
var msg = "2008-09-30 16:29:56,26.6000,100,104865900,26.6000,26.6100,2836662,0,0,E,\r\n";
var msgBytes = Encoding.ASCII.GetBytes(msg);
// Act
_messageHandler.Write(msgBytes, 0, msgBytes.Length);
var count = _messageHandler.TryRead(out var readBytes);
// Assert
Assert.AreEqual(count, msg.Length);
Assert.AreEqual(Encoding.ASCII.GetString(readBytes, 0, count), msg);
}
[Test]
public void TryRead_Should_Return_Positive_Count_After_Receiving_Delimeter_On_Second_Write()
{
// Arrange
var msg1 = "2008-09-30 16:29:56,26.6000,100,104865900";
var msg2 = ",26.6000,26.6100,2836662,0,0,E,\r\n";
var msg1Bytes = Encoding.ASCII.GetBytes(msg1);
var msg2Bytes = Encoding.ASCII.GetBytes(msg2);
// Act
_messageHandler.Write(msg1Bytes, 0, msg1Bytes.Length);
_messageHandler.Write(msg2Bytes, 0, msg2Bytes.Length);
var count = _messageHandler.TryRead(out var readBytes);
// Assert
Assert.AreEqual(count, msg1.Length + msg2.Length);
Assert.AreEqual(Encoding.ASCII.GetString(readBytes, 0, count), msg1 + msg2);
}
[Test]
public void TryRead_Should_Return_Positive_Count_After_Receiving_Delimeter_On_First_Write_With_Remainder()
{
// Arrange
var msg1 = "2008-09-30 16:29:56,26.6000,100,104865900,26.6000,26.6100,2836662,0,0,E,\r\n";
var msg2 = "2008-09-30 ";
var msgBytes = Encoding.ASCII.GetBytes(msg1 + msg2);
// Act
_messageHandler.Write(msgBytes, 0, msgBytes.Length);
var count = _messageHandler.TryRead(out var readBytes);
// Assert
Assert.AreEqual(count, msg1.Length);
Assert.AreEqual(Encoding.ASCII.GetString(readBytes, 0, count), msg1);
}
[Test]
public void TryRead_Should_Return_Positive_Count_After_Receiving_Delimeter_On_Second_Write_With_Remainder()
{
// Arrange
var msg1 = "2008-09-30 16:29:56,26.6000,100,104865900";
var msg2 = ",26.6000,26.6100,2836662,0,0,E,\r\n";
var msg3 = "2008-09-30 ";
var msg1Bytes = Encoding.ASCII.GetBytes(msg1);
var msg2Bytes = Encoding.ASCII.GetBytes(msg2 + msg3);
// Act
_messageHandler.Write(msg1Bytes, 0, msg1Bytes.Length);
_messageHandler.Write(msg2Bytes, 0, msg2Bytes.Length);
var count = _messageHandler.TryRead(out var readBytes);
// Assert
Assert.AreEqual(count, msg1.Length + msg2.Length);
Assert.AreEqual(Encoding.ASCII.GetString(readBytes, 0, count), msg1 + msg2);
}
[Test]
public void Should_Return_Zero_After_One_Succesful_TryRead()
{
// Arrange
var msg = "2008-09-30 16:29:56,26.6000,100,104865900,26.6000,26.6100,2836662,0,0,E,\r\n";
var msgBytes = Encoding.ASCII.GetBytes(msg);
// Act
byte[] readBytes;
_messageHandler.Write(msgBytes, 0, msgBytes.Length);
var count1 = _messageHandler.TryRead(out readBytes);
var count2 = _messageHandler.TryRead(out readBytes);
// Assert
Assert.Greater(count1, 0);
Assert.AreEqual(count2, 0);
}
}
Handler
public class MessageHandler
{
private readonly char _delimeter;
private readonly MemoryStream _completeStream;
private readonly MemoryStream _remainderStream;
private readonly byte[] _readBytes;
public MessageHandler(int bufferSize, char delimeter)
{
_delimeter = delimeter;
_completeStream = new MemoryStream(bufferSize);
_completeStream.Seek(0, SeekOrigin.Begin);
_remainderStream = new MemoryStream(bufferSize);
_remainderStream.Seek(0, SeekOrigin.Begin);
_readBytes = new byte[bufferSize];
}
public void Write(byte[] message, int offset, int count)
{
// check if delimeter is found
var delimeterIndex = message.GetLastDelimeterIndex(offset, count, _delimeter);
// if not found, simply copy bytes into the remainder and return
if (delimeterIndex == -1)
{
_remainderStream.Write(message, offset, count);
return;
}
// if remainder exists, copy bytes into complete
if (_remainderStream.Position > 0) {
_remainderStream.WriteTo(_completeStream);
_remainderStream.SetLength(0);
}
// copy received bytes with last delimeter into complete
_completeStream.Write(message, offset, delimeterIndex + 1);
// delimeter found at the end of the message
if (delimeterIndex == count - 1)
return;
_remainderStream.Write(message, delimeterIndex + 1, count - delimeterIndex - 1);
}
public int TryRead(out byte[] output)
{
output = null;
if (_completeStream.Position == 0)
return 0;
var length = (int)_completeStream.Length;
_completeStream.Position = 0;
_completeStream.Read(_readBytes, 0, length);
_completeStream.SetLength(0);
output = _readBytes;
return length;
}
}
public static class ByteExtensions
{
public static int GetLastDelimeterIndex(this byte[] buffer, int offset, int length, char delimeter)
{
for (var i = offset + length - 1; i >= offset; i--)
{
if (buffer[i] == delimeter)
return i;
}
return -1;
}
}
Answer: Style is good, comments are useful, and the API is simple and general-purpose.
I'd appreciate a guard clause in the constructor, providing a meaningful error message if I try to misuse it.
if (bufferSize <= 0)
throw new ArgumentOutOfRangeException(nameof(bufferSize), "Buffer Size must be greater than 0");
Similarly, they could be added to Write.
From an API perspective, I kind of expected this to return a single message every time TryRead is called. It doesn't, and that's fine, but inline documentation (///) would not go amiss in explaining the expected behaviour.
Reusing _readBytes and passing it out from TryRead is a bit surprising and perhaps risky.
An alternative design with lower memory requirements would be to pass the MessageHandler a stream, and just have it write to said stream when data comes in. This absolves the class of the responsibility of maintaining a buffer (indeed, two buffers) and significantly simplifies the API. The TryRead code can then be implemented independently and work with any MemoryStream. Alternatively, construct said memory stream from a byte array, and pass that out directly.
This would have fairly serious implications for threading, but I judge you aren't worried about that given your commentary and code. More importantly, holding onto the stream/buffer would allow a consumer to mess up future results before calling TryRead (currently they can only lose the data they have just received); but if memory is a big problem then this would help.
One other alternative (which may or may not be applicable) would be to call an event whenever a complete message is read. You could do this all with a single MemoryStream if you trust the callbacks not to mess with it (which you already do to an extent in TryRead).
GetLastDelimeterIndex is not specific to Delimeters; I'd call it something more generic, like LastIndexOfChar (after Array.LastIndexOf).
Your tests are good, but every test of MessageHandler.Write expects it to consume the whole array. This means that a chunk of intricate logic is virtually untested, and is exactly the kind of code that results in hard-to-find bugs.
... indeed, there are multiple related bugs.
// copy received bytes with last delimeter into complete
_completeStream.Write(message, offset, delimeterIndex + 1); // should be delimeterIndex - offset + 1
if (delimeterIndex == count - 1) // should be count + offset - 1
return;
_remainderStream.Write(message, delimeterIndex + 1, count + offset - delimeterIndex - 1); // should be count + offset - delimeterIndex - 1
I found them while trying to write this test:
public void Partial_Writes_Are_Not_Completely_Broken()
{
// Arrange
var msg = "fish fish fish\r\nfox fox fox";
var msgBytes = Encoding.ASCII.GetBytes(msg);
int startCrop = 5;
int endCrop = 4;
int delimiterIndex = msg.LastIndexOf('\n');
// Act
byte[] readBytes;
_messageHandler.Write(msgBytes, startCrop, msgBytes.Length - startCrop - endCrop);
var count1 = _messageHandler.TryRead(out readBytes);
_messageHandler.Write(msgBytes, delimiterIndex, 1);
var count2 = _messageHandler.TryRead(out readBytes);
// Assert
Assert.AreEqual(count1, delimiterIndex - startCrop + 1);
Assert.AreEqual(count2, msg.Length - delimiterIndex - endCrop);
}
Which testing framework are you using? I'm too lazy to try to work it out, but usually the expected value comes first in Assert calls, and the actual value second. This tripped me up while I was debugging the test. | {
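As a language-neutral illustration of the offset arithmetic those bugs get wrong, here is a hypothetical Python port of the same framing idea (not the author's C# API; Python slicing makes the offset/count bookkeeping explicit — all indices into the incoming buffer must be taken relative to the offset, not the start of the array):

```python
# Minimal delimiter-framing buffer: write(message, offset, count) accumulates
# complete messages; try_read() drains everything up to the last delimiter seen.
class MessageBuffer:
    def __init__(self, delimiter=b"\n"):
        self.delimiter = delimiter
        self.complete = bytearray()
        self.remainder = bytearray()

    def write(self, message, offset, count):
        chunk = message[offset:offset + count]   # honor offset/count exactly
        idx = chunk.rfind(self.delimiter)        # last delimiter, chunk-relative
        if idx == -1:
            self.remainder += chunk              # no complete message yet
            return
        self.complete += self.remainder          # flush any pending partial
        self.remainder.clear()
        self.complete += chunk[:idx + 1]         # up to and including delimiter
        self.remainder += chunk[idx + 1:]        # leftover partial message

    def try_read(self):
        out = bytes(self.complete)
        self.complete.clear()
        return out

buf = MessageBuffer()
data = b"XXfish fish\nfox"
buf.write(data, 2, len(data) - 2)        # partial write skipping a 2-byte prefix
assert buf.try_read() == b"fish fish\n"
buf.write(b" fox\n", 0, 5)
assert buf.try_read() == b"fox fox\n"    # remainder correctly stitched on
```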
"domain": "codereview.stackexchange",
"id": 30992,
"tags": "c#, asynchronous, socket"
} |
About what 'time' in the Universe's history did the r-process and s-process begin respectively? | Question: I was reading about this but there is something for which I haven't found a reliable source yet.
When did each process begin and is there any estimation of the abundances of the elements throughout the history of the Universe?
Thanks in advance.
Answer:
Q: "About what 'time' in the Universe's history did the r-process and s-process begin respectively?"
The big bang occurred 13.799 ± 0.021 Gya. Approximately 9.8 Gya the s-process started, during the formation of the first population stars. The r-process began earlier, roughly 13 billion years ago, with the deaths of the first massive stars.
Q: "... estimation of the abundances of the elements throughout the history of the Universe?"
An image included at the end of this answer shows how the stars are divided into "populations" and the elements in each category, with Pop III (the oldest) containing the lightest elements and Pop I the heavier metals.
The birth and death of the earliest stars lead to the next generation and after the second population the r and s processes started.
The paper "Nucleosynthesis of Heavy Elements by Neutron Capture" linked below lists "estimation of the abundances of the elements" but obviously does not show which star exploded when, nor how much of each element was released at a particular moment in history.
Once the stars started to form so did some of the heavier elements, it's an ongoing process of creation and destruction.
Supporting information and links for the answer:
Four basic processes can be identified by which heavy nuclei can be built by the continuous addition of protons or neutrons [mine: to seed isotopes]:
• p-process (proton)
• rp-process (rapid proton capture process)
• s-process (slow neutron)
• r-process (rapid neutron)
Capture of protons on light nuclei tend to produce only proton-rich nuclei. Capture of neutrons on light nuclei produce neutron-rich nuclei, but which nuclei are produced depends upon the rate at which neutrons are added. Slow capture produces nuclei near the valley of beta stability, while rapid capture (i.e., rapid compared to typical beta-decay timescales) initially produces very neutron-rich radioactive nuclei that eventually beta-decay towards the valley of beta stability. Some nuclei can be built by more than one process.
These are the only two ways nature can assemble heavy elements. We should not be surprised then that there are two major distributions of heavy nuclei: the r-process and the s-process distributions. We shall see that the r-nuclei in our Solar System likely formed in an environment that experienced a freezeout from equilibrium while the s-nuclei must have formed in an environment that was striving for, but never reached, NSE. The differing character of these scenarios results in the different character of the r- and s-process abundance distributions.
Once an abundance of heavy elements is available, nature may make modifications to it by exposing it to a flux of photons, neutrinos, or nucleons. Such events are probably responsible for the production of the majority of p-nuclei.
[ The above introduction was sourced from "Nucleosynthesis: the s-, r- and p- processes" and the following paper. ]
There are various papers upon the subject, the difficulty of this subject is well explained in the paper "The r-, s-, and p-processes in Nucleosynthesis" by Bradley S. Meyer where he writes: "It is impossible for a single paper to cover all relevant aspects of the r-, s-, and p-processes ...".
The paper "Nucleosynthesis of Heavy Elements by Neutron Capture" attempts to analyse some of the previous work and determine when the r- and s-processes began; due to the variety of means to initiate these processes, it's difficult to nail down an $\underline{exact}$ time.
On page 3 they write:
"... The remainder of the paper will be concerned with the implications of these abundances for ideas concerning nucleosynthesis. The elements iron, cobalt, and nickel, excluded from Table 1, provide a different class of abundance problem, both experimentally and theoretically. Although they are synthesized primarily by a different process (e-process) than the heavy elements whose abundances constitute the main burden of this paper, they have significance as seed nuclei for the s- and r-processes. The s-process certainly has built upon seed-iron nuclei, and the r-process may have done so as well. The fraction of iron-group nuclei exposed to neutron fluxes of various intensities has important implications for nucleosynthesis and galactic history."
Thus the paper concludes on page 45:
"This result is in substantial agreement with the conclusions of Hoyle and Fowler (1963), to which the reader is referred for the detailed consequences of assigning the r-process to massive stars which evolve rapidly and are thus capable of producing r- process elements early in the history of the Galaxy as well as subsequently.
The uncertainty indicated in equation (46) is an estimate based on the range of conditions which were found in Section Va to be suitable for production of r-process material in good agreement with the observed abundances of the r-nuclei. It is of interest to note that stars with masses in the range given by equation (46) are just those which Iben (1963) and Fowler (1964) suggest may well be disintegrated by explosive nuclear burning after the onset of general relativistic instability. If the third r-process peak was synthesized in the same object, the relation $\rho \sim T_{9}^{3}$ from equation (44) can be compared with Figure 14 to find the conditions corresponding to a solution with cycle time 3 sec; the results are $T_9 = 1.0$, $\log n_n = 25.5$ as shown in Figure 18.
It becomes increasingly important to have some independent evidence for locating the r-process site, whether massive objects, conventional supernovae, or both.".
The theory of Big Bang Nucleosynthesis states:
"It is now known that the elements observed in the Universe were created in either of two ways. Light elements (namely deuterium, helium, and lithium) were produced in the first few minutes of the Big Bang, while elements heavier than helium are thought to have their origins in the interiors of stars which formed much later in the history of the Universe. Both theory and observation lead astronomers to believe this to be the case.
Recent Work:
In the paper "Origin of the heavy elements in binary neutron-star mergers from a gravitational wave event" by Daniel Kasen, Brian Metzger, Jennifer Barnes, Eliot Quataert, Enrico Ramirez-Ruiz (Submitted on 16 Oct 2017) they write:
"Here we report models that predict the detailed electromagnetic emission of kilonovae and enable the mass, velocity and composition of ejecta to be derived from the observations. We compare the models to the optical and infrared radiation associated with GW170817 event to argue that the observed source is a kilonova. We infer the presence of two distinct components of ejecta, one composed primarily of light (atomic mass number less than 140) and one of heavy (atomic mass number greater than 140) r-process elements. Inferring the ejected mass and a merger rate from GW170817 implies that such mergers are a dominant mode of r-process production in the Universe.".
Elements heavier than iron (Fe), and where they are thought to have been formed, are shown in a chart on the Wikipedia Stellar Nucleosynthesis webpage.
"Big Bang nucleosynthesis produced no elements heavier than lithium, due to a bottleneck: the absence of a stable nucleus with 8 or 5 nucleons. This deficit of larger atoms also limited the amounts of lithium-7 produced during BBN. In stars, the bottleneck is passed by triple collisions of helium-4 nuclei, producing carbon (the triple-alpha process). However, this process is very slow and requires much higher densities, taking tens of thousands of years to convert a significant amount of helium to carbon in stars, and therefore it made a negligible contribution in the minutes following the Big Bang.
Thus the reasoning for my answer: having established where heavy elements came from (excluding man-made production), your question is when. The answer is: the r-process and s-process first started during the formation of the first stars.
References: (Thanks @Thomas)
"Observing the r-Process Signature in the Oldest Stars" by Frebel, Anna.
"R-process enrichment from a single event in an ancient dwarf galaxy" by Ji, Alexander P., Frebel, Anna, Chiti, Anirudh, Simon, Joshua D.
"ACS Imaging of the Ultra-Faint Dwarf Galaxy Reticulum II: Age-Dating a Unique Nucleosynthetic Event" by Simon, Josh.
From Wikipedia: "The s-process in stars"
"The s-process is believed to occur mostly in asymptotic giant branch stars, seeded by iron nuclei left by a supernova during a previous generation of stars. In contrast to the r-process which is believed to occur over time scales of seconds in explosive environments, the s-process is believed to occur over time scales of thousands of years, passing decades between neutron captures. The extent to which the s-process moves up the elements in the chart of isotopes to higher mass numbers is essentially determined by the degree to which the star in question is able to produce neutrons. The quantitative yield is also proportional to the amount of iron in the star's initial abundance distribution".
"Stars observed in galaxies were originally divided into two populations by Walter Baade in the 1940s. Although a more refined means of classifying stellar populations has since been established (according to whether they are found in the thin disk, thick disk, halo or bulge of the galaxy), astronomers have continued to coarsely classify stars as either Population I (Pop I) or Population II (Pop II). They have even postulated a third population (Population III; Pop III), though stars of this type have yet to be observed.
The classification system is based on the metal content of the stars (their metallicity, usually given the symbol [Z/H]). Pop I stars are the most metal-rich, with metallicities ranging from approximately 1/10th to three times that of the Sun (i.e. from [Z/H]=-1.0 up to [Z/H]=+0.5). This means that the gas from which Pop I stars formed must have been recycled (incorporated into, and then expelled) from previous generations of stars a number of times, and that Pop I stars are relatively young compared to Pop II and Pop III stars.
The Sun ([Z/H]~1.6) is a fairly typical Pop I star, as are most of the stars in the immediate solar neighbourhood. In fact, the majority of the stars contained within the thin disks of galaxies are Pop I stars, but Pop I stars can also be found in the bulge.". [Source: http://astronomy.swin.edu.au/cosmos/P/Population+I]
"Figure 1. Simple illustration of chemical enrichment of the universe: massive Population III stars form out of primordial gas, explode as supernovae, and enrich the interstellar medium with products of stellar nucleosynthesis. Subsequent cycles of star formation and death (Population II) steadily enrich the universe with metals over time. The first low-mass stars to form in the universe are still observable today. Two main contributors to chemical enrichment after the first stars are 8 − 10M⊙ stars that explode as core-collapse supernovae, and less massive stars that enrich the interstellar medium via strong mass loss and stellar winds (AGB stars). Their nucleosynthetic products, the r-process and s-process elements, are the subject of this review". [Source: "Observational nuclear astrophysics: neutron-capture element abundances in old, metal-poor stars"]
"domain": "physics.stackexchange",
"id": 46254,
"tags": "cosmology, nuclear-physics, nucleosynthesis"
} |
How do you find probabilites of carrying an X linked recessive disease in a blank pedigree given only one phenotype? | Question:
In the above pedigree, the black square indicates the male
affected with hemophilia, which is an X-linked recessive trait. What is the
probability that the proposed child, which is a male (indicated as ?), will carry the disease?
I started off by trying to figure out the probability that the sister has an X chromosome carrying the disease, which I got as 10/16 (Since the mother must be XcXc or XcX and the father either XY or XcY; possible genotypes are XcX, XcX, XcXc, XcXc, XcX, XX, XcXc, XXc)
I then repeated the procedure for each generation and got the final answer as 9/16. However, the correct answer is 1/8. How do you obtain this? Are my assumptions flawed?
Source: https://biolympiads.com/wp-content/uploads/2016/08/inbo2009-Q.pdf
Answer: Assume the disease is rare. The parents of the affected male are thus probably X/Y and Xc/X. Their daughter (the sister of the affected male) has a probability of 1/2 of being a carrier (Xc/X). Her daughter also has a 1/2 probability of being a carrier. Her son has a 1/2 probability receiving the mutant X chromosome. Therefore, the probability that the son is affected is 1/2 * 1/2 * 1/2 = 1/8.
I can’t right now, but I’ll try and make it clearer with a figure tomorrow. | {
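In the meantime, the chain of 1/2 factors can be sanity-checked with a quick Monte Carlo sketch (my own illustration; it hard-codes the rare-disease assumption above, i.e. in-marrying partners carry no mutant allele):

```python
import random

def simulate(trials=200_000):
    """Estimate P(proposed son is affected), following the pedigree reasoning."""
    affected = 0
    for _ in range(trials):
        # The affected male's mother is an obligate carrier (Xc X), so his
        # sister inherits Xc with probability 1/2; each later transmission
        # of the mutant X is again a coin flip.
        sister_carrier = random.random() < 0.5
        daughter_carrier = sister_carrier and random.random() < 0.5
        son_affected = daughter_carrier and random.random() < 0.5
        affected += son_affected
    return affected / trials

p = simulate()
print(p)  # close to 1/8 = 0.125
```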
"domain": "biology.stackexchange",
"id": 8236,
"tags": "genetics, homework, pedigree"
} |
Length of tube used in barometer is less than 76cm | Question: In barometer the height of mercury in the inverted tube above the open surface level will be 76cm, if mercury is used. Now what if we take a tube filled with mercury but of length less than 76cm and invert it into the tub? (Experiment being done at the sea level)
Answer: Although the question already has an answer, as pointed out by @Farcher and @Kyle Kanos, I'll repeat my comment here.
The mercury will rise to the very top of the tube. Then, if you measure its height to get the pressure, your measurement will be invalid, as the mercury cannot rise as high as it wants to.
As long as the tube is strong enough, the glass will exert a downward pressure on the mercury, which will counteract the excess pressure from the atmosphere. If the glass is not strong enough, then the pressure will shatter it, after which the mercury will drop back into the bath. | {
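For reference, the 76 cm figure quoted in the question follows directly from $h = P/(\rho g)$ with standard-atmosphere values (a quick check, not part of the original answer):

```python
# Height of mercury column supported by one standard atmosphere: h = P / (rho * g)
P = 101325.0       # Pa, standard atmosphere
rho = 13595.1      # kg/m^3, density of mercury at 0 degrees C
g = 9.80665        # m/s^2, standard gravity
h = P / (rho * g)
print(round(h * 100, 1))  # -> 76.0 (cm)
```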
"domain": "physics.stackexchange",
"id": 53870,
"tags": "pressure, fluid-statics"
} |
Reference for checking primitive recursiveness | Question: There is a theorem that states a function $f$ can be computed with a Turing-machine in time $O(g)$ with primitive recursive $g$ (of the length of input) iff $f$ is primitive recursive.
Where can I find a reference for this theorem? Wikipedia seems to state it but gives no references, nor does my books.
Answer: This is known as a Ritchie-Cobham property, or a honesty property of primitive-recursive functions. See for instance Theorem VIII.8.8, page 297 in P.G. Odifreddi, Classical Recursion Theory, vol. 2, 1999.
Odifreddi refers to
Kleene, General recursive functions of natural numbers, Math. Ann. 112:727--742, 1936,
Cobham, The intrinsic computational difficulty of functions, Log. Meth. Phil. Sci. 2:24--30, 1964,
Meyer, Depth of nesting and the Grzegorczyk hierarchy, Not. Am. Math. Soc. 12, 1965. | {
"domain": "cstheory.stackexchange",
"id": 614,
"tags": "reference-request, computability, lo.logic, turing-machines"
} |
Heuristic for sokoban puzzle problem | Question: I am trying to write IDA* for the Sokoban puzzle problem (http://en.wikipedia.org/wiki/Sokoban), but it seems that my heuristic is not good enough to make the algorithm fast.
My heuristic is to consider $h(s)$ as minimum distance between the Player and Boxes plus minimum distance between a Box and a Target in state $s$.
Can you please provide some better heuristics for this problem?
Edit: consider that the number of Boxes (and Targets) is at most 3!
Answer: Hi there CoderInNetwork,
That ain't an easy question and any advances regarding a good heuristic function would be very welcome. Indeed, I will refer in my answer to Andreas Junghanns' PhD written in 1999 (yeap, 16 years ago and still the current state of the art in the field). You can find it in citeseer:
Andreas Junghanns' PhD
Go to Section 4 and you will see a discussion about a heuristic function. The easiest one comes from a simple observation: every stone has to go to one and only one target location so you have to solve a minimum cost perfect matching in a bipartite graph where one set of vertices is made of the stones and the other one is made of the target locations ---so that there are obviously edges going only from the first set to the second and the goal is to find the minimum assignment of stones to goals. The cost of every edge would be equal to the estimated distance to get to the goal assigned from every stone.
This can be done in $O(n^3\log_{2+\frac{n}{4}}n)$ using minimum cost augmentation ---again, refer to Andreas Junghanns' PhD, Section 4.3.3 on page 52.
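For the tiny instances in the question (at most 3 boxes), this matching lower bound can even be brute-forced over all assignments; here is a short sketch (the coordinates and the plain Manhattan metric are my own illustrative choices):

```python
from itertools import permutations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def matching_heuristic(boxes, targets):
    """Minimum-cost perfect matching of boxes to targets (an admissible lower
    bound). Brute force over permutations -- fine for <= 3 boxes; use the
    Hungarian algorithm for larger instances."""
    return min(sum(manhattan(b, t) for b, t in zip(boxes, perm))
               for perm in permutations(targets))

boxes = [(1, 1), (4, 2), (2, 5)]
targets = [(1, 4), (5, 5), (4, 1)]
print(matching_heuristic(boxes, targets))  # -> 7
```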
Even considering that this heuristic can be computed in an acceptable time (after making a good number of code optimizations), note that it does not account for deadlocks. That is, it will naively move the stones without considering that either the walls or other stones can lead to unsolvable configurations.
From this observation, Andreas Junghanns introduced the idea of using Pattern Databases (PDBs) for improving this lower bound (see a discussion of their implementation in Chapter 5). At the time of writing his PhD, PDBs were not very well known but they are quite well known nowadays and a lot of progress has been made.
Finally, regarding your question: even if there are only 3 stones, problems can be really difficult to solve (or, at least, to solve optimally if that's what you are aiming for). See Appendix B, page 158, The 61 Kids Problems and you'll see lots of challenging problems. Many of those contain only 3 stones.
Hope this helps, | {
"domain": "cs.stackexchange",
"id": 4046,
"tags": "search-algorithms, heuristics"
} |
E: Unable to locate package ros-dashing-desktop while installing ROS 2 via Debian Packages | Question:
Hello, I am new to ROS, I am trying to install ROS2 for gym_gazebo2 but getting this error message even after following the exact procedure.
I am following instructions from: https://index.ros.org/doc/ros2/Installation/Dashing/Linux-Install-Debians/
The codes are as follows:
root@ubuntu:/home/priyam# sudo apt update && sudo apt install curl gnupg2 lsb-release
Hit:1 http://us.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:3 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
272 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree
Reading state information... Done
lsb-release is already the newest version (11.1.0ubuntu2).
lsb-release set to manually installed.
The following NEW packages will be installed:
curl gnupg2
The following packages will be upgraded:
libcurl4
1 upgraded, 2 newly installed, 0 to remove and 271 not upgraded.
Need to get 401 kB of archives.
After this operation, 462 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 libcurl4 amd64 7.68.0-1ubuntu2.4 [234 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 curl amd64 7.68.0-1ubuntu2.4 [161 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu focal/universe amd64 gnupg2 all 2.2.19-3ubuntu2 [5,316 B]
Fetched 401 kB in 4s (111 kB/s)
(Reading database ... 185904 files and directories currently installed.)
Preparing to unpack .../libcurl4_7.68.0-1ubuntu2.4_amd64.deb ...
Unpacking libcurl4:amd64 (7.68.0-1ubuntu2.4) over (7.68.0-1ubuntu2.1) ...
Selecting previously unselected package curl.
Preparing to unpack .../curl_7.68.0-1ubuntu2.4_amd64.deb ...
Unpacking curl (7.68.0-1ubuntu2.4) ...
Selecting previously unselected package gnupg2.
Preparing to unpack .../gnupg2_2.2.19-3ubuntu2_all.deb ...
Unpacking gnupg2 (2.2.19-3ubuntu2) ...
Setting up gnupg2 (2.2.19-3ubuntu2) ...
Setting up libcurl4:amd64 (7.68.0-1ubuntu2.4) ...
Setting up curl (7.68.0-1ubuntu2.4) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9) ...
root@ubuntu:/home/priyam# curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
OK
root@ubuntu:/home/priyam# sudo sh -c 'echo "deb [arch=$(dpkg --print-architecture)] http://packages.ros.org/ros2/ubuntu $(lsb_release -cs) main" > /etc/apt/sources.list.d/ros2-latest.list'
root@ubuntu:/home/priyam# sudo apt update
Hit:1 http://us.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://packages.ros.org/ros/ubuntu focal InRelease
Hit:3 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:5 http://packages.ros.org/ros2/ubuntu focal InRelease
Hit:6 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
271 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@ubuntu:/home/priyam# sudo apt install ros-dashing-desktop
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package ros-dashing-desktop
Originally posted by duttonide on ROS Answers with karma: 3 on 2020-12-14
Post score: 0
Answer:
Hit:1 http://us.archive.ubuntu.com/ubuntu focal InRelease
you appear to be running Ubuntu Focal (ie: 20.04).
ROS 2 Dashing is not supported on Focal.
See also Installing ROS 2 Foxy Fitzroy (which is supported on Focal) and Installing ROS 2 Dashing Diademata which states the supported OS version: Ubuntu Bionic (or 18.04).
And the page you link (Installing ROS 2 via Debian Packages) has this right after the TOC:
Debian packages for ROS 2 Dashing Diademata are available for Ubuntu Bionic.
Finally: REP 2000: ROS 2 Releases and Target Platforms also lists supported OS for Dashing.
Originally posted by gvdhoorn with karma: 86574 on 2020-12-14
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by duttonide on 2020-12-15:
Thanks it worked fine but this problem seems to appear as follows:
bash: /usr/share/gazebo-9/setup.sh: No such file or directory
root@ubuntu:/home/duttonide# source ~/ros2learn/environments/gym-gazebo2/provision/mara_setup.sh
bash: /usr/share/gazebo-9/setup.sh: No such file or directory
root@ubuntu:/home/duttonide# cd ~/gym-gazebo2/examples/MARA
bash: cd: /root/gym-gazebo2/examples/MARA: No such file or directory
root@ubuntu:/home/duttonide# python3 gg_random.py -g
python3: can't open file 'gg_random.py': [Errno 2] No such file or directory
root@ubuntu:/home/duttonide#
Comment by gvdhoorn on 2020-12-15:
I'm not sure what those errors mean.
I would suggest you mark this question as answered (by clicking on the checkmark to the left of the answer), and then post a new question for your new problems.
Comment by gvdhoorn on 2020-12-15:
Also: I would advise you to not log into your system as root.
That's very much discouraged. | {
"domain": "robotics.stackexchange",
"id": 35872,
"tags": "ros2"
} |
Classical limit of non-interacting, relativistic quantum gas (Kapusta/Gale p.8) | Question: I want to understand two equations in "Finite temperature field theory" by Kapusta and Gale on page 8. The partition function is
$$
\ln Z = V\int \frac{d^3 p}{(2\pi)^3}\;\ln\left(1\pm e^{-\beta(\omega-\mu)}\right)^{\pm 1},
$$
where the upper sign is for fermions and the lower sign for bosons.
For the dispersion relation $\omega=\sqrt{p^2+m^2}$ and in the classical limit $T\ll\omega-\mu$, I want to show
$$
P=\frac{T}{V}\ln Z=\frac{m^2 T^2}{2\pi^2}e^{\mu/T}K_2\left(\frac{m}{T}\right),
$$
where $K_2$ is a modified Bessel function, which has (as one possible form) the integral representation (see NIST DLFM 10.32.8)
$$
K_2(z)=\frac{z^2}{3}\int_1^\infty dt\;e^{-zt}(t^2-1)^{3/2}.
$$
In the limit $T\ll\omega-\mu$ we can use
$$
\ln\left(1+e^{-\beta(\omega-\mu)}\right)\approx e^{-\beta(\omega-\mu)}
$$ and
$$
\ln\left(\frac{1}{1-e^{-\beta(\omega-\mu)}}\right)\approx e^{-\beta(\omega-\mu)},
$$
i.e. fermions and bosons look the same in this limit.
I find something similar to the expression in the book, but not quite the right thing:
$$P=T\int \frac{d^3 p}{(2\pi)^3}\;\ln\left(1\pm e^{-\beta(\omega-\mu)}\right)^{\pm 1}\\
=\frac{4\pi T}{(2\pi)^3}e^{\beta\mu}\int_0^\infty dp\;p^2 e^{-\beta\sqrt{p^2+m^2}}
$$
Use the substitution $x=\sqrt{\frac{p^2}{m^2}+1} \Rightarrow p\,dp=m^2 x\, dx$ and $p=m\sqrt{x^2-1}$:
$$
P=\frac{Tm^3}{2\pi^2}e^{\frac{\mu}{T}}\int_1^\infty dx\;x\sqrt{x^2-1}e^{-\frac{m}{T}x}.
$$
This does not look too far off, but it is not correct. Can someone help me?
Thanks!
Answer: Ok, I think I figured it out:
Take the fermionic expression, first perform an integration by parts, then the substitution that I tried above and only then use the classical approximation:
$$
P=T\int\frac{d^3p}{(2\pi)^3}\ln\left[1+e^{-\beta(\omega-\mu)}\right]\\
=\frac{T}{2\pi^2}\int_0^\infty dp\;p^2\ln\left[1+e^{-\beta(\sqrt{p^2+m^2}-\mu)}\right]\\
=\frac{T}{2\pi^2}\left(\left.\frac{p^3}{3}\ln\left[1+e^{-\beta(\sqrt{p^2+m^2}-\mu)}\right]\right|_{p=0}^{p=\infty} - \int_0^{\infty}dp\;\frac{p^3}{3}\frac{e^{-\beta(\sqrt{p^2+m^2}-\mu)}}{1+e^{-\beta(\sqrt{p^2+m^2}-\mu)}}\left(-\beta\frac{p}{\sqrt{p^2+m^2}}\right)\right)\\
=\frac{1}{6\pi^2}\int_0^{\infty}dp\;\frac{p^4}{\sqrt{p^2+m^2}}\frac{1}{1+e^{\beta(\sqrt{p^2+m^2}-\mu)}}\\
=\frac{1}{6\pi^2}\int_1^\infty \frac{dx\;m^2 x\;m^3(x^2-1)^{3/2}}{mx}\frac{1}{e^{\frac{m}{T}x-\frac{\mu}{T}}+1}\\
\approx\frac{m^4}{6\pi^2}\int_1^\infty dx\;(x^2-1)^{3/2}\; e^{-\frac{m}{T}x+\frac{\mu}{T}}\\
=\frac{m^2 T^2}{2\pi^2}e^{\frac{\mu}{T}}\frac{1}{3}\left(\frac{m}{T}\right)^2\int_1^\infty dx\;(x^2-1)^{3/2}\; e^{-\frac{m}{T}x}\\
=\frac{m^2 T^2}{2\pi^2}e^{\frac{\mu}{T}}K_2\left(\frac{m}{T}\right)
$$
The bosonic case is analogous. | {
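As a numerical sanity check (my own, not from the book): the questioner's integral $\int_1^\infty x\sqrt{x^2-1}\,e^{-zx}\,dx$ and the integrated-by-parts form $\frac{z}{3}\int_1^\infty (x^2-1)^{3/2}e^{-zx}\,dx$ used above do agree, so both routes lead to the same Bessel function:

```python
import math

def simpson(f, a, b, n=100_000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

z = 2.0          # plays the role of m/T
upper = 40.0     # e^{-z x} makes the tail beyond this negligible

lhs = simpson(lambda x: x * math.sqrt(x * x - 1) * math.exp(-z * x), 1.0, upper)
rhs = (z / 3) * simpson(lambda x: (x * x - 1) ** 1.5 * math.exp(-z * x), 1.0, upper)

print(lhs, rhs)  # both equal K_2(2)/2 ~ 0.12688
```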
"domain": "physics.stackexchange",
"id": 16673,
"tags": "thermodynamics, statistical-mechanics"
} |
Sum of Independent Exponential Random Variables | Question: Can we prove a sharp concentration result on the sum of independent exponential random variables? I.e., let $X_1, \ldots X_r$ be independent random variables such that $Pr(X_i < x) = 1 - e^{-x/\lambda_i}$. Let $Z = \sum X_i$. Can we prove bounds of the form $Pr(|Z-\mu_Z|>t) < e^{-t^2/\sum (\lambda_i)^2}$? This follows directly if we use the variance form of Chernoff bounds, and hence I believe it is true, but the bounds that I have read require boundedness or have some dependence on the boundedness of the variables. Could someone point me to a proof of the above?
Answer: For concreteness, say that the pdf of the r.v. $X_i$ is
$$p(X_i = x) = \frac{1}{2} \lambda_i e^{-\lambda_i|x|}.$$
This is the Laplace distribution, or the double exponential distribution. Its variance is $\frac{2}{\lambda_i^2}$. The cdf is
$$
\Pr[X_i \leq x] = 1 - \frac{1}{2}e^{-\lambda_i x}
$$
for $x \geq 0$.
The moment generating function of $X_i$ is
$$
\mathbf{E}\ e^{uX_i} = \frac{1}{1 - u^2/\lambda_i^2},
$$
for $|u| < \lambda_i$. Using this fact and the exponential moment method which is standard in the proof of Chernoff bounds, you get that for $X = \sum_i X_i$ and $\sigma^2 = 2\sum_i \lambda_i^{-2}$, the following inequality holds
$$
\Pr[X > t\sigma] < e^{-t^2/4},
$$
as long as $t \leq 2\sigma \min_{i}{\lambda_i}$. You can find a detailed derivation in the proof of Lemma 2.8 of this paper. | {
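The moment generating function above is easy to verify numerically (an illustrative check, with arbitrary $\lambda$ and $u$):

```python
import math

def simpson(f, a, b, n=20_000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

lam, u = 2.0, 1.0   # the formula requires |u| < lam
# E[e^{uX}] for the Laplace density (lam/2) e^{-lam |x|}
mgf = simpson(lambda x: (lam / 2) * math.exp(-lam * abs(x) + u * x), -30.0, 30.0)
closed_form = 1 / (1 - u ** 2 / lam ** 2)
print(mgf, closed_form)  # both ~ 4/3
```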
"domain": "cstheory.stackexchange",
"id": 4098,
"tags": "pr.probability, randomness, chernoff-bound"
} |
Why is crystalline graphite black yet shiny? | Question: I am unable to find images of pure crystalline graphite with high confidence, but based on various sources I believe that it should actually be both black and shiny, in the sense that it reflects much less visible light than a white piece of paper, and yet has a much more metallic sheen than paper. For example, this webpage has this image, which I guess is of pure crystalline graphite:
From my understanding, the absorption/emission spectrum of a material in the visible range is determined mostly by the permissible energy levels of the 'valence' electrons within the material. In a metal crystal, the atoms belong to a single macromolecule with macromolecular orbitals holding valence electrons, and so the electrons can easily absorb/emit photons across a wide continuous spectrum, hence contributing to the shiny lustre of metals. But this explanation also applies to crystalline graphite, where in each sheet we have macromolecular orbitals spanning the entire sheet (which also explains its conductivity along sheets). However, crystalline graphite seems to be significantly 'blacker' than crystalline silicon as shown in the below image from wikimedia:
Why is it so? What is the most significant reason contributing to the 'dark' colour of graphite? I also noticed that if you take a graphite pencil and shade an oval completely (yes MCQ), it looks black under usual lighting unless it is at an angle to catch the light from an overhead lamp at which point it appears to be very shiny.
I guess that crystalline graphite is in fact metallic just like silicon, and that its apparent black colour is merely an illusion in the sense that it is just a matter of having a higher ratio of absorption to emission of visible photons, meaning that we simply need a brighter light to observe its metallic lustre. Based on this, I believe that a polished crystal of graphite will have less reflectance than a polished crystal of silicon, which in turn will have less reflectance than a polished crystal of silver, but that if we ignore reflectance then they should all have qualitatively the same sheen.
If my guess is right, what I am missing are the key factors that determine reflectance at visible wavelengths that correspond to energy levels in the valence band. Is it that in some crystals the valence electrons that absorb incident photons lose a significant fraction of the gained energy via phonons, whereas in other crystals they cannot easily lose that energy via phonons? Can anyone give specific details of how graphite, silicon and silver differ in this regard?
Answer: Crystalline silicon is, with its diamond-like cubic crystal structure, optically isotropic. Its refractive index at $555\,\mathrm{nm}$ is $4.070+0.0376i$, which results in reflectance at normal incidence $R=36.7\%$ [1].
Crystalline graphite, having a hexagonal crystal structure, is birefringent. At normal incidence its ordinary ray, with $n=2.724+1.493i$, is reflected slightly less than a ray incident on silicon, $R=32.3\%$ [2]. But the extraordinary ray, with $n=1.504+0.008i$, has much smaller reflectance: $R=4.05\%$ [3], which results in roughly half as much unpolarized incident light being reflected from graphite as from silicon at normal incidence.
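These percentages are just the normal-incidence Fresnel reflectance $R=\left|\frac{n-1}{n+1}\right|^2$ evaluated at the complex refractive indices quoted above; a quick verification:

```python
def reflectance(n):
    """Normal-incidence Fresnel reflectance for a complex refractive index n."""
    r = (n - 1) / (n + 1)
    return abs(r) ** 2

R_si     = reflectance(4.070 + 0.0376j)  # silicon at 555 nm
R_gr_ord = reflectance(2.724 + 1.493j)   # graphite, ordinary ray
R_gr_ext = reflectance(1.504 + 0.008j)   # graphite, extraordinary ray

print(f"{R_si:.3f} {R_gr_ord:.3f} {R_gr_ext:.4f}")  # -> 0.367 0.323 0.0405
```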
But that's not the whole story. If total reflectance were the only factor influencing the look of these crystals, we'd have just smaller brightness of graphite than silicon, but the same lustre. In actuality, in graphite the extraordinary ray, having small reflectance, is not absorbed as strongly as the ordinary ray: due to the relatively small extinction coefficient, at $555\,\mathrm{nm}$ about $18\%$ of it is transmitted through a $10\text{-}\mathrm{\mu m}$ layer of graphite[4].
In non-ideal, polycrystalline graphite with a fair amount of subsurface defects, this results in scattering of the extraordinary ray on the defects, which leads to it being partially transformed into bunches of ordinary and extraordinary rays, some of the secondary rays being rapidly absorbed, others getting to exit the material—generally, at points shifted from the points of initial entry. The end result is that graphite looks, albeit still shiny, somewhat duller than silicon: the specular reflections are smeared by the additional diffuse reflections.
"domain": "physics.stackexchange",
"id": 71067,
"tags": "visible-light, reflection, crystals, metals, phonons"
} |
What is the purpose of the Maxwell Stress Tensor? | Question: In the calculation of the forces acting on a charge/current distribution, one arrives at the Maxwell stress tensor:
$$\sigma_{ij}=\epsilon_0 E_iE_j + \frac{1}{\mu_0} B_iB_j -\frac{1}{2}\delta_{ij}\left(\epsilon_0E^2+\frac{1}{\mu_0}B^2\right)$$
In the case of electrostatics, this element of the stress tensor denotes the electromagnetic pressure acting in the $i$ direction with respect to a differential area element with its normal pointing in the $j$ direction. Equivalently, we can replace "electromagnetic pressure" with "electromagnetic momentum flux density" in order to "make sense". With this mathematical construction, assuming a static configuration, the total force acting on a bounded charge distribution $E$ is given by
$$(\mathbf{F})_i=\oint_{\partial E} \sum_{j}\sigma_{ij} da_j $$
Where $da_j$ is the area element pointing in the $j$ direction (e.g. $da_{3}=da_z=dxdy$).
What I would like to know is, what is the advantage of introducing such an object? I have yet to see a problem where this has any real utility. Sure, we can now relate the net force on a charge distribution to the E&M fields on the surface, but are there any problems where that is really better than just straight up calculating it? In an experiment, does one ever really measure the E&M fields on the boundary of an apparatus to calculate the net force?
Answer:
but are there any problems where that is really better than just straight up calculating it?
In the first place, it allows us to formulate general theorem of conservation of momentum in macroscopic electromagnetic theory. Suppose some amount matter is enclosed inside an imaginary surface $\Sigma$ in vacuum, no matter is present on the boundary itself but field may be non-zero anywhere. In a simplified terms, the theorem is
rate of change of total momentum in a region of space bounded by imaginary surface $\Sigma$ in vacuum = surface integral of Maxwell tensor $\sigma$ over $\Sigma$
or formally
$$
\frac{d}{dt}\bigg(\mathbf P_{matter} + \mathbf P_{field} \bigg) = \oint_{\Sigma}d\Sigma_{i} \sigma_{ij}.
$$
but are there any problems where that is really better than just straight up calculating it?
The Maxwell tensor is also useful for calculating the total EM force on a solid object, for example the force on an electrically polarized body in an external electric field, or the force on a magnet near a metal body or another magnet. Such calculations can be done with pencil and paper for highly symmetrical systems like a dielectric cylinder in a charged parallel-plate capacitor or two magnet cylinders facing each other with their bases.
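As a concrete illustration (my own sketch, not from the original answer): for a uniform electrostatic field $E_z$ between capacitor plates, $\sigma_{zz}$ reproduces the familiar electrostatic pressure $\epsilon_0 E^2/2$ pulling the plates together:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def maxwell_stress(E, B=(0.0, 0.0, 0.0), mu0=4e-7 * math.pi):
    """sigma_ij = eps0 E_i E_j + B_i B_j / mu0 - delta_ij (eps0 E^2 + B^2/mu0)/2"""
    E2 = sum(c * c for c in E)
    B2 = sum(c * c for c in B)
    return [[EPS0 * E[i] * E[j] + B[i] * B[j] / mu0
             - (0.5 * (EPS0 * E2 + B2 / mu0) if i == j else 0.0)
             for j in range(3)] for i in range(3)]

sigma = maxwell_stress((0.0, 0.0, 1.0e6))   # uniform E_z = 1 MV/m, no B field
print(sigma[2][2])  # +eps0 E^2 / 2 ~ 4.43 Pa: tension along the field lines
print(sigma[0][0])  # -eps0 E^2 / 2: equal sideways pressure
```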
"domain": "physics.stackexchange",
"id": 97160,
"tags": "electromagnetism, stress-energy-momentum-tensor, stress-strain"
} |
Using slam_gmapping for an external simulator | Question:
Hi,
I have my own simulator and I would like to use slam_gmapping package. I understood that I have to publish to two topics /scan and /tf.
I understood the data sent to /scan, but /tf is a transformation; I understood how tf works from the ROS tutorial, but what data/transformations are expected? A frame_id and a child_frame_id are required for each message. How can I decide which frames I need? How do I build the tree?
I saw that a basic tree may be map -> odom -> base_link, but which frame corresponds to what?
I tried to publish to /scan and /tf but the gmapping doesn't initialize at all and answers me with a warning :
[ WARN] [1433385884.267084755, 109.725132801]: MessageFilter [target=odom]: Dropped 100.00% of messages so far. Please turn the [ros.gmapping.message_notifier] rosconsole logger to DEBUG for more information.
I tried to figure it out, but I am a little desperate now. What did I miss ?
Thank you,
Originally posted by GG31 on ROS Answers with karma: 16 on 2015-06-12
Post score: 0
Answer:
To answer myself: slam_gmapping works with the clock, which means you have to publish to /clock; if not, nothing happens.
Now, my /tf for /map -> /odom publishes (0, 0, 0) for the translation and (0, 0, 0, 1) for the rotation, but when I use slam_gmapping, it publishes nan values only. I think it is related to the first frame, which has nan values:
update ld=-nan ad=nan
Laser Pose= -nan -nan -nan
But I don't know why.
What is ld? What is Laser Pose? How can I change these values?
Originally posted by GG31 with karma: 16 on 2015-06-16
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by dornhege on 2015-06-16:
You do not have to publish the clock! This is supposed to be the real time of the system and only published by simulation. | {
"domain": "robotics.stackexchange",
"id": 21896,
"tags": "ros, slam, navigation, gmapping, tf-tree"
} |
How to change map origin of rtabmap? | Question:
Hi.
I'm using rtabmap and rtabmap_ros as localization mode to recognize robot's global pose.
To make my map simplify, I want to change map origin from pose which odometry started to desired pose. Is there any way to do this?
Originally posted by tilt on ROS Answers with karma: 11 on 2018-07-02
Post score: 1
Answer:
You could add a static_transform_publisher to add a transform above /map in the TF tree. For example, if you want to shift the map origin by 5 meters, you may publish /world -> /map with such a transform.
Originally posted by matlabbe with karma: 6409 on 2018-07-06
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 31148,
"tags": "slam, localization, navigation, ros-kinetic, rtabmap"
} |
If turning a perfectly monochromatic laser on for a finite time gives a frequency spread, where did the other frequency photons come from? | Question: I changed again the mind experiment to avoid any explanation involving doppler effect or nonlinear interaction
I expressed the same problem from a different angle, which will probably make the situation clearer, in: Creating light pulse via: turning on and off coherent state VS putting photons in many mode, different in the quantum regime? I hope it helps.
There is something that always confused me to link the laws of physics and signal processing.
Consider you have a monochromatic laser of frequency $\omega_0$, for example, made with hydrogen transition. Consider the beam has a given width of $w$.
At $t=-\infty$ you start your laser. You never turn it off.
You have at your disposal a device that measures the intensity of the electric field. I call A the point representing this device.
For $t<0$ the device is outside of the beam. I start to move it so that it crosses the light beam in an orthogonal direction. At $t=0$ it is in the beam, and it stays like this until $t=T$; then it is outside of the beam.
If you analyse the electric field in A you will have something like (I work with complex signal for simplicity but replace my $exp$ by $cos$ or $sin$ if you wish):
$$ E(t<t_0)=0 $$
$$ E(t_0<t<t_0+T)=e^{j \omega_0 t} $$
$$ E(t_0+T<t)=0 $$
If I do the spectral analysis of such signal I won't have a single mode $\omega_0$.
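This spread is easy to see numerically: take the DFT of a tone that is on for only part of the sampling window (a plain-Python sketch; the sample count and bin choice are arbitrary illustrative values):

```python
import cmath
import math

N = 256                      # total samples; the tone is on for the first quarter
w0 = 2 * math.pi * 8 / N     # tone frequency sits exactly on DFT bin 8
x = [cmath.exp(1j * w0 * n) if n < N // 4 else 0.0 for n in range(N)]

# Naive O(N^2) DFT -- fine at this size
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

mags = [abs(v) for v in X]
peak = mags.index(max(mags))
spread = sum(1 for m in mags if m > 0.05 * max(mags))
print(peak, spread)  # the peak is still at bin 8, but many bins carry power
```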
But from a physical point of view the only frequency of light I used in all this is $\omega_0$, I didn't excite any other mode than this one.
What does that physically mean?
In the classical regime there is no paradox for me, it is just that I can represent my wave from two points of view: I turn on a monochromatic source for a finite amount of time, or I send many modes on different frequencies with Fourier amplitude corresponding to the Fourier transform of $E(t)$. Both approaches are mathematically and physically totally equivalent here.
The paradox comes in the quantum regime.
I used a device that produces photons at a single frequency of $\omega_0$. But if I study the classical signal on A, I find many modes occupied. It is a kind of paradox because I never created other photons than the ones in $\omega_0$ initially.
Then there are two possible answers: either photons have been created in the other modes afterward, or the only photons in the whole experiment are indeed the ones in $\omega_0$.
I am not convinced by the first possibility: if I reason in the frame associated with A, it will see the signal I described as having many frequencies, and a mere change of frame cannot create photons in another mode.
Nor can it be explained by the Doppler effect, since in my thought experiment the device moves at a constant speed in a direction orthogonal to the light beam. Furthermore, the Doppler effect is a shift: here I don't have a shift in frequency, but a "creation" of many other frequencies, which is different.
In addition, the laser is based on hydrogen transition which emits photons of a very specific frequency: it couldn't know that a device will cross the beam later on. The emission frequency is independent of what I will do in the lab.
As I proposed in an answer, I think it means that there are different ways to describe the signal, but only one of them corresponds to how it was physically produced?
In practice, in this example, the interpretation is that photons only occupy the mode $\omega_0$, no other photons exist in all this experiment.
The thing is that in A I will see those photons only for $0<t<T$ and not after. By doing a Fourier transform I can describe the light I see as if other light modes were excited, which is not the case.
Would you agree with this and my proposed answer? People proposed other answers but, as I explained in the comments, I am not convinced by them. I justified my point. If I made a mistake in my answer, where is it? I think I am convinced by it, but I would like some external point of view.
A different way to ask the question:
The laser being monomode and always turned on, the quantum state inside of its cavity or of the light it radiates can be described using coherent states of frequency $\omega_0$. If photons exist at other frequencies, could you write down the interaction that created them?
(I don't think those other-frequency photons exist, but if they do I would like to see the mechanism that created them written down precisely in the answer.)
Note: A kinda similar question has been asked here Fourier transform paradox(?) of a wave packet but I am not very convinced by the answers. Also, to avoid explanations like nonlinearity induced by some shutter I would put in the laser path, I chose to take an example with a laser rotation.
Answer: Reviewing the mode expansion
The laser being monomode and always turned on, the quantum state inside of its cavity or of the light it radiates can be described using coherent states of frequency $\omega_0$. If photons exist at other frequencies could you write down the interaction that created them?
The first issue here is that you're assuming that modes must have definite frequencies, and hence that photons in those modes have definite frequencies. This is an idealization, which doesn't hold in the real world. Let's review where it comes from in the textbooks:
Consider the classical electromagnetic field, either in complete vacuum, or in a perfectly closed system with time-independent non-dissipative boundary conditions, and without any matter present to absorb energy.
Under these unrealistic assumptions, the field has solutions that oscillate periodically in time forever, with absolutely no decay, which we'll call modes.
Upon quantizing the system, we find that the quantum state of the field is described by a quantum harmonic oscillator for each mode, and we call the excitations within each mode photons.
Thus, modes of the classical field only have definite frequencies under idealized assumptions. In the real world, modes don't need to have definite frequencies, and neither do photons. In fact, in the real world, there are often cases where there are multiple valid sets of modes to use, which correspond to multiple valid definitions of photons; this will resolve your paradox below.
A toy example
Here's a toy model to illustrate subtleties with the mode expansion. (It actually won't be relevant to the final answer, but it might help build intuition.)
In free space, we can describe the evolution of a single degree of freedom of a field by a quantum harmonic oscillator. So more generally, consider a degree of freedom evolving under the Hamiltonian
$$H(t) = \frac{p^2}{2m} + \frac12 m \, \omega(t)^2 x^2.$$
The time dependence of $\omega(t)$ could represent, e.g. the effect of fluctuations of the cavity walls. The classical solutions to the equations of motion are not sinusoids, and hence don't have a definite frequency.
The same remains true when we quantize. At every time, we can define instantaneous raising and lowering operators in the usual way, along with an instantaneous vacuum, corresponding to an instantaneous mode which oscillates sinusoidally at the instantaneous frequency. Similarly, at every time, we can define a ladder of instantaneous energy eigenstates,
$$|n(t) \rangle = \frac{(a^\dagger(t))^n}{\sqrt{n!}} |0(t) \rangle$$
In the case where $\omega(t)$ changes slowly, the adiabatic theorem applies, so $|n(t) \rangle$ at time $t$ evolves into the state $|n(t') \rangle$ at a later time $t'$. Similarly, you can define instantaneous coherent states,
$$|z(t) \rangle \propto e^{z a^\dagger(t)} |0(t) \rangle$$
which in the adiabatic limit evolve into other instantaneous coherent states.
The adiabatic limit demonstrates that coherent states do not necessarily have definite frequency. Recall that for the electromagnetic field, the "position" variable is the vector potential $\mathbf{A}$, and the conjugate momentum is $\mathbf{E}$. A reasonable physical definition of "definite frequency" is that the observed electric field is sinusoidal, i.e. $\langle p(t) \rangle$ is sinusoidal for this coherent state. But it isn't, because Ehrenfest's theorem tells us that
$$\frac{d \langle p(t) \rangle}{dt} = - m \, \omega(t)^2 \langle x(t) \rangle$$
or, differentiating again,
$$\frac{d^2 \langle p(t) \rangle}{dt^2} = - \omega(t)^2 \, \langle p(t) \rangle $$
which does not have sinusoidal solutions when $\omega(t)$ varies. (This isn't actually related to your paradox, but it illustrates how you can get frequency spread inside a cavity even if only "one mode" is excited.)
In the non-adiabatic case, we can get even weirder behavior. For example, suppose that $\omega(t)$ suddenly changes at $t = 0$,
$$\omega(t) = \begin{cases} \omega_< & t < 0, \\ \omega_> & t > 0. \end{cases}$$
We can define two sets of ladder operators before and after $t = 0$ corresponding to frequencies $\omega_<$ and $\omega_>$, and thereby define two independent sets of states, $|n_< \rangle$ and $|n_>\rangle$. In particular, if you start in the state $|0_< \rangle$, you won't end up in $|0_> \rangle$. Instead, you end up with some "$t > 0$" photons, not because there was an explicit source term, but because the natural definition of photons changed at $t = 0$.
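To make that last claim quantitative: the two instantaneous ground states are Gaussians of different widths, and their overlap satisfies $|\langle 0_> | 0_< \rangle|^2 = \frac{2\sqrt{\omega_< \omega_>}}{\omega_< + \omega_>} < 1$ whenever $\omega_< \neq \omega_>$. Here is a quick numerical sketch of this check (the units $m = \hbar = 1$ and the two frequencies are arbitrary choices):

```python
import numpy as np

# Ground-state wavefunction of a harmonic oscillator with frequency w
# (units m = hbar = 1): psi_w(x) = (w/pi)^(1/4) * exp(-w x^2 / 2)
def psi(w, x):
    return (w / np.pi) ** 0.25 * np.exp(-w * x**2 / 2)

w_before, w_after = 1.0, 4.0              # omega_< and omega_>
x = np.linspace(-20, 20, 400001)
dx = x[1] - x[0]

# |<0_> | 0_<>|^2 by direct numerical integration
overlap_sq = (np.sum(psi(w_before, x) * psi(w_after, x)) * dx) ** 2

closed_form = 2 * np.sqrt(w_before * w_after) / (w_before + w_after)
# overlap_sq < 1: the old vacuum has nonzero amplitude on excited
# "t > 0" states, i.e. it contains "t > 0" photons.
```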
Addressing the paradox
Let me boil down your paradox to the following:
Start with a monochromatic plane wave in free space, containing only photons of frequency $\omega$.
Couple a detector to this plane wave for a finite time $T$.
The detector "sees" photons of frequency $\omega'$ in a width $\sim 1/T$ about $\omega$. In other words, to the detector, it's as if the laser pulse were only a time $T$ long, even though it really is infinite.
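This frequency spread is easy to see numerically in the classical signal: the spectrum of a cosine observed only for a time $T$ has a main lobe whose width scales as $1/T$. A sketch (the carrier frequency, sampling rate, and zero-padding below are arbitrary choices):

```python
import numpy as np

def spectral_fwhm(T, omega0=40.0, fs=200.0, pad=2**18):
    # Magnitude spectrum of cos(omega0 * t) observed for a time T only,
    # zero-padded so the FFT grid resolves the main lobe finely.
    n = int(T * fs)
    t = np.arange(n) / fs
    spec = np.abs(np.fft.rfft(np.cos(omega0 * t), n=pad))
    omega = 2 * np.pi * np.fft.rfftfreq(pad, d=1 / fs)
    above = omega[spec >= spec.max() / 2]
    return above.max() - above.min()     # full width at half maximum, rad/s

# Doubling the observation window halves the apparent frequency spread
w1, w2 = spectral_fwhm(1.0), spectral_fwhm(2.0)
```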
There's really no problem here, you just have to be careful with what it means for a detector to "see photons". In your situation, the state of the electromagnetic field is perfectly well-defined. Your detector can't perfectly capture that state, but no detector can see literally everything, nor should we expect any to.
For example, if I were color blind, a red photon and a green photon would look the same to me. That doesn't mean that my eyes are converting red photons to green, or a mixture of red and green, it just means they can't tell the difference. If your detector just measures the electric field for a short time, it's effectively color blind, so that's it.
Refining the paradox
This might not be satisfying, so let's consider an alternative detector which explicitly measures photons, following the question you linked. Suppose the detector works as follows: at a prescribed time, two perfectly conducting metal plates suddenly sweep down. The plates are separated by a distance $L = c T$, so they effectively "cut out" a time $T$ of the pulse. Then, the detector just counts up the photons inside it, along with their frequencies. The paradox is that the detector sees photons of frequency $\omega'$ in a width $\sim 1/T$ about $\omega$.
You can probably now see the trick, given the first section. The detector plates have changed the boundary conditions of the electromagnetic field. That means the photons the detector measures correspond to a different set of modes than the free space photons. The free space modes look like $e^{ik x}$ with no boundary conditions, while the detector modes look like $\sin(k' x)$ with the $k'$ defined by hard wall boundary conditions.
Upon quantizing each set of modes separately, we find that a state of the electromagnetic field corresponding to only photons in one free space mode also generally corresponds to photons in multiple detector modes. The standard mathematical tool used to swap between the equivalent mode descriptions is the Bogoliubov transformation.
This appeared in a simple form in the previous section, where $|0_< \rangle \neq |0_> \rangle$. It is also the reason behind the Unruh effect, the fact that an accelerating detector sees a thermal bath of photons, even in vacuum: this is due to the mismatch between detector-defined photons, and the plane wave photons defined in inertial frames in free space. Hawking radiation also runs on the same principle.
So in some sense, the resolution to your paradox is quite "exotic". But really, this ambiguity of modes was always built into the formalism of quantum field theory. Most textbooks ignore it only because there is a unique set of modes if you stay in inertial frames in free space, but this breaks down quickly. | {
"domain": "physics.stackexchange",
"id": 100226,
"tags": "laser, fourier-transform"
} |
Does GC content determine codon bias or does codon bias determine GC content | Question: I was wondering if someone knows the answer to this question, because I can't find a clear answer; maybe there isn't a clear answer :).
question
Which of these two is right? or maybe both influence each other
GC% determines codon bias
codon bias determines GC%
Thank you,
Answer: Think about it this way, the G-C content is averaged over the entire genome, and varies between different species. Whether you are dealing with prokaryotes, with relatively compact genomes, or with eukaryotes, with lots of non-coding regions, the open reading frames will, in general, be influenced by the average G-C content across the genome. Therefore we would predict that in species with lower G-C their open reading frames would be biased to using the codons in the genetic code that are higher in A-T. N.B. this also means that their tRNAs should also favour anticodons higher in A-T. In contrast, we would predict that in species with high G-C content, the open reading frames would be biased, in general, to using codons that are higher in G-C.
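To make the codon-level picture concrete, here is a small sketch (standard genetic code; leucine chosen as an example) showing how much the G-C content of synonymous codons can differ, leaving room for genome-wide G-C content to bias codon usage without changing the protein:

```python
# G-C fraction of the six synonymous leucine codons (standard genetic code)
leu_codons = ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"]

def gc_fraction(codon):
    return sum(base in "GC" for base in codon) / len(codon)

gc = {codon: gc_fraction(codon) for codon in leu_codons}
# Same amino acid, G-C content ranging from 0/3 (TTA) up to 2/3 (CTG):
# a low G-C genome can favour TTA, a high G-C genome CTG.
```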
That is all G-C bias in codons is; for some amino acids like Met and Trp, there is only a single codon, but for other amino acids there can be up to six codons that each encode the same amino acid side chain. It is in the ones with a higher number of degenerate codons that you are going to see the effect of codon bias. | {
"domain": "biology.stackexchange",
"id": 5382,
"tags": "genomes, human-genome, codon"
} |
What causes the difference in magnetic permeability between materials? | Question: Why do some materials (like iron) have greater magnetic permeability than others (like aluminum)? We don't need to consider negative permeability here.
Is it that more of the atoms have electrons that polarize their spin and some atoms don't polarize at all? If so, why would that be?
Or is it that within each atom, there is a greater level of polarization of electron spin? That is, the electrons don't get quite straight.
Answer: The point is that spins are interacting with each other as well as with the external field. Plus, they get disturbed by the statistical motion of the particles. That is the reason for example, why ferromagnetism breaks down above a certain temperature. Furthermore, due to the Pauli principle, spins tend to pair with respect to opposite axes, which cancels magnetic moments.
So both of your own answers are kind of right: 1) fewer or more atoms tend to participate in the mutual alignment of spins, depending on temperature, and 2) spins contribute to varying degrees to the magnetic moment of each atom, depending on the Pauli principle/the electron configuration. | {
"domain": "physics.stackexchange",
"id": 80843,
"tags": "magnetic-fields"
} |
Which galaxy is receding from the Milky Way the fastest? What is known of the mechanism behind its recession? | Question: Galaxies are always in motion relative to the Milky Way. My question is, which galaxy is receding the fastest from our viewpoint?
What is the theorised mechanism that causes this?
Answer: This question may be a duplicate to: On what scale does the universe expand?
The mechanism for the expansion of the universe is gravity, manifested through what's known as Dark Energy. Dark Energy is a component of the universe which today composes 68.3% of the energy density of the universe. You may be confused as to how gravity could be responsible for the expansion of the universe, but the equation of state of DE,
w = pressure/density
is negative (w_DE = -1), which means that it has anti-gravitational properties. Additionally, its density remains constant as the coordinates of the universe change, which is the reason why it dominates later on in the universe's history (while all other components, matter (dark or otherwise) and radiation/relativistic matter, become diluted with the expansion of the universe).
To answer your question: because the expansion of the universe is happening at all points in space, it looks like the "center" of expansion is at every point in space (see: Space expansion in layman terms), so the galaxies furthest away (highest redshift) are the ones moving quickest away from the Milky Way Galaxy. | {
"domain": "astronomy.stackexchange",
"id": 670,
"tags": "galaxy, redshift"
} |
What's the mathematical proof of "Net velocity"? | Question: We often calculate the velocity of a boat moving upstream as the (velocity of boat in still water$-$velocity of stream), and add them for downstream. My teacher told me that it's "net velocity" of a body, but can someone prove mathematically that net velocity is the sum of all the velocities acting on the body? I get it intuitively but I want to know the maths clearly behind arriving at such a conclusion.
Edit: some of the answerers think that I'm talking about relative velocity. I'm not talking of the relative velocity, I'm talking about the "final" velocity of the boat which a stationary observer from, say, the shore of the stream will observe the boat to travel with.
Answer: I'm answering my own question as I think I came up right with this:
When a boat is travelling up a stream, say the velocity of the boat is $v_b$ (in still water) and the velocity of the stream is $-v_s$. First treat the water as still: in a time $t$ the displacement of the boat is $d_b= v_bt$. After the boat has travelled this distance, the stream moves backwards, and since the boat is on the stream, it moves back the same distance the stream travels, so the additional displacement of the boat is the displacement of the stream, which is $d_s=-v_st$. So the final displacement of the boat from its initial position is $$d_b+d_s$$$$=v_bt - v_st$$$$=t(v_b-v_s)$$ and so the average velocity of the boat is $(v_b-v_s)$.
We observe this as the final velocity because the time interval between the boat covering the distance $d_b$ and then moving back by $d_s$ is $0$, i.e., the "moving forward" and "coming back" occur simultaneously, and what we observe is the final displacement divided by the time taken. | {
"domain": "physics.stackexchange",
"id": 84415,
"tags": "homework-and-exercises, forces, kinematics, velocity"
} |
Does an enumerator print the first occurrence of a word in finite time? | Question: For a proof I need to use the fact that every word in the language of an enumerator occurs on the output paper in finite time. Is that true?
For example, the language of the natural numbers in decimal representation. Can the enumerator print the odd numbers first and then the even numbers? (if yes, I am wrong)
$1, 3, 5, 7, ..., 2, 4, 6 ,8, ...$
As far as I know, we get to the even numbers but not in finite time (maybe via transfinite induction based on this).
Answer: Your interpretation is correct, and the easiest way to look at it is to know both definitions of recursively enumerable sets:
There is an algorithm (potentially running forever) that enumerates the members of $S$.
There is an algorithm that halts only on elements of $S$.
These two definitions are equivalent, and if you use the second one then you can easily answer your own question.
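The equivalence can be made concrete with the standard dovetailing construction, sketched below. Here `step(w, k)` is a hypothetical step-bounded simulator, assumed given, that reports whether the second-style algorithm halts on input $w$ within $k$ steps; the toy `toy_step` at the end is just for illustration:

```python
from itertools import count, islice

def enumerate_members(step):
    # Dovetailing: in round k, give every word w < k a budget of k steps.
    # Any w the machine halts on is eventually found and yielded once.
    seen = set()
    for k in count(1):
        for w in range(k):
            if w not in seen and step(w, k):
                seen.add(w)
                yield w

# Toy "machine": halts exactly on even w, and only after about w steps
toy_step = lambda w, k: w % 2 == 0 and k >= w

first_five = list(islice(enumerate_members(toy_step), 5))
```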
However, do note an important caveat. For any given word $w \in S$ there is some time $T_w$ such that after $T_w$ steps, $w$ will have been enumerated. However, there is no way to bound $T_w$ from above as a function of $w$. It can grow faster than any computable function. | {
"domain": "cs.stackexchange",
"id": 2522,
"tags": "turing-machines"
} |
It seems that the Euler equation in thermodynamics and The first law of thermodynamics are in contradiction | Question: The Euler equation in thermodynamics is as follows:
$U=TS-PV+\mu N$
But The first law of thermodynamics states that
$dU=TdS-PdV+\mu dN$
But I think that the Euler equation can be differentiated to give
$dU=TdS+SdT-PdV-VdP+\mu dN+Nd\mu$
Then, $SdT-VdP+Nd\mu=0$
But I don't think that this is always correct.
Edit
I see that it's only correct if the system is homogeneous. Can you give me an example of a homogeneous system and a nonhomogeneous system?
Answer: This is the Gibbs–Duhem equation. It's related to extensiveness of energy, entropy, volume and number of particles.
The assumption is that the edge effects are much smaller than the volume effects, in particular, when combining two systems the total energy is the sum of energies of the smaller systems, without "interaction terms". | {
"domain": "physics.stackexchange",
"id": 54660,
"tags": "thermodynamics"
} |
Measuring mass of an object in outer space | Question: How do (or will) people measure the masses of objects in outer space? (Like in ISS).
Answer: Apply a known force to the object, measure its acceleration, then use $m = \frac F a$ to find its mass.
Or swing the object in a circle with radius $r$ at constant speed $v$, measure the centripetal force, use $m = \frac {Fr}{v^2}$ to find its mass.
Or hit the object with a known mass $M$ travelling at relative speed $u$, measure the velocities $v_M$ and $v_m$ after the collision, and then by conservation of momentum $m = M\frac{u - v_M}{v_m}$.
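As a small sketch of the three recipes (the numbers below are made up for illustration):

```python
def mass_from_force(F, a):
    return F / a                    # m = F / a

def mass_from_circular(F, r, v):
    return F * r / v**2             # m = F r / v^2

def mass_from_collision(M, u, v_M, v_m):
    # momentum conservation: M u = M v_M + m v_m  =>  m = M (u - v_M) / v_m
    return M * (u - v_M) / v_m

# Example: M = 2 kg at u = 3 m/s hits a 1 kg mass at rest, elastically;
# afterwards v_M = 1 m/s and v_m = 4 m/s, so the formula recovers m = 1 kg.
m = mass_from_collision(2.0, 3.0, 1.0, 4.0)
```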
Edit:
A calibrated spring scale (a.k.a. a newton meter) lets you apply a known force or measure an unknown force. | {
"domain": "physics.stackexchange",
"id": 76323,
"tags": "experimental-physics, technology"
} |
Clusterize Spectrum | Question: I have a pandas table which contains data about different observations, each one measured at different wavelengths. These observations differ from each other in the treatment they have received.
The table looks something like this:
>>>name treatment 410.1 423.2 445.6 477.1 485.2 ....
0 A1 0 0.01 0.02 0.04 0.05 0.87
1 A2 1 0.04 0.05 0.05 0.06 0.04
2 A3 2 0.03 0.02 0.03 0.01 0.03
3 A4 0 0.02 0.02 0.04 0.05 0.91
4 A5 1 0.05 0.06 0.04 0.05 0.02
...
I would like to classify the different observations based on their spectrum (the numerical columns).
I have tried to run PCA and to paint it according to the treatment the observations got, and to compare it to the results of classifications like k-means and spectral clustering, but I'm not sure that I chose the right methods because it seems all the time like the clusters are based too much on Euclidean distance, and I'm not sure that they take the spectrum into account (I have used all the numerical columns for the prediction).
This is, for example, the comparison between the PCA+colors and the spectral classification:
PCA:
Classification (the points are located according to PCA1 and PCA2, but the colors are according to the classification):
As you can see here, it seems like the classification is based on real distance, and I would like something that takes into account all the numerical values.
So, I'm looking for any insights regarding other classification methods that could give me better results, or other ideas for how I can check if there are clusters inside my data based on the measurements in the different columns, e.g. whether I could predict the treatment from the clusters.
Answer: This sounds like a normal supervised classification task.
Have you tried other standard methods like Support Vector Machines, Random Forests, Gradient Boosting, kNN, Neural Networks etc. as well, or is there a particular reason why you only tried clustering methods?
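As a minimal illustration that a supervised method can recover treatment labels from spectra, here is a from-scratch 1-nearest-neighbour classifier evaluated leave-one-out on synthetic data (the wavelength grid, group shapes, and noise level are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 50)

def make_group(center, n=20):
    # One treatment group: a spectral band around `center` plus noise
    band = np.exp(-((wavelengths - center) / 30.0) ** 2)
    return band + rng.normal(0.0, 0.05, size=(n, wavelengths.size))

X = np.vstack([make_group(c) for c in (450.0, 550.0, 650.0)])
y = np.repeat([0, 1, 2], 20)

def loo_1nn_accuracy(X, y):
    # Leave-one-out 1-NN: predict each sample from its nearest other sample
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)      # exclude the sample itself
    pred = y[d.argmin(axis=1)]
    return float((pred == y).mean())

acc = loo_1nn_accuracy(X, y)         # close to 1.0 on this synthetic data
```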
Clustering methods like kmeans or spectral clustering are usually used in an unsupervised setting where class memberships are not available. Often they make certain assumptions about the data which might be violated, e.g. kmeans assumes spherical clusters, which is clearly not the case for your data. | {
"domain": "datascience.stackexchange",
"id": 8204,
"tags": "classification, clustering, k-means, pca, spectral-clustering"
} |
Influence of a variable in composition of Boolean functions | Question: Suppose $f$ and $g$ are Boolean functions without a constant term, and where every variable has the same influence. How to show every variable will have the same influence in $f \circ g$?
To me it seems like the influence of a variable in $f \circ g$ is the product of the influence of the outer variable in $f$ with the influence of the variable in $g$, but I'm not sure.
Answer: Suppose that $f\colon \{\pm1\}^n \to \{\pm1\}$ and that $g\colon \{\pm1\}^m \to \{\pm1\}$ is balanced. The composed function $f \circ g\colon \{\pm1\}^{nm} \to \{\pm1\}$ is given by
$$ (f \circ g)(x) = f\bigl(g(x_{1,1},\ldots,x_{1,m}),\ldots g(x_{n,1},\ldots,x_{n,m})\bigr). $$
The influence of $x_{i,j}$ is the probability that if we sample $x \in \{\pm1\}^{nm}$ and construct $x'$ by flipping $x_{i,j}$ then $(f \circ g)(x) \neq (f \circ g)(x')$. This happens if:
$g(x_{i,1},\ldots,x_{i,m}) \neq g(x'_{i,1},\ldots,x'_{i,m})$.
$f(y_1,\ldots,y_n) \neq f(y'_1,\ldots,y'_n)$, where $y_i = g(x_{i,1},\ldots,x_{i,m})$ and $y'_i = g(x'_{i,1},\ldots,x'_{i,m})$.
The first property happens with probability $\operatorname{Inf}_j[g]$.
Since $g$ is balanced, the vector $(y_1,\ldots,y_n)$ is uniformly random. Therefore, given that the first property happens, the second property happens with probability $\operatorname{Inf}_i[f]$. In total,
$$ \operatorname{Inf}_{i,j}[f \circ g] = \operatorname{Inf}_i[f] \operatorname{Inf}_j[g]. $$
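This product rule is easy to check by brute force on a small example. Below, $f = g = \mathrm{Maj}_3$ (majority of three $\pm 1$ bits, which is balanced and has $\operatorname{Inf}_i[\mathrm{Maj}_3] = \tfrac12$ for every $i$), so every variable of the nine-bit composition should have influence $\tfrac14$ (a Python sketch):

```python
from itertools import product

def influence(h, n, i):
    # Pr over uniform x in {+1,-1}^n that flipping bit i changes h(x)
    flips = 0
    for x in product((1, -1), repeat=n):
        x_flipped = x[:i] + (-x[i],) + x[i + 1:]
        if h(x) != h(x_flipped):
            flips += 1
    return flips / 2**n

maj3 = lambda x: 1 if sum(x) > 0 else -1           # balanced for odd n

def composed(x):                                   # (Maj3 o Maj3) on 9 bits
    return maj3(tuple(maj3(x[3 * i:3 * i + 3]) for i in range(3)))

inf_f = influence(maj3, 3, 0)                      # 1/2
inf_composed = influence(composed, 9, 0)           # should be (1/2) * (1/2)
```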
Here is a calculational proof. Recall that
$$ \operatorname{Inf}_i[f] = \sum_{i \in S} \hat{f}(S)^2. $$
The Fourier expansion of $f \circ g$ is
$$
f \circ g = \sum_{S \subseteq [n]} \sum_{\substack{T_i \subseteq [m] \\ \text{for all } i \in S}} \hat{f}(S) \prod_{i \in S} \hat{g}(T_i) \prod_{i \in S} \prod_{j \in T_i} x_{i,j}.
$$
Since $g$ is balanced, every monomial appears exactly once: $S$ needs to be the set of $i$ indices that appear in the monomial, and for each $i$, $T_i$ needs to be the set of $j$ indices such that $x_{i,j}$ appears in the monomial. (If $g$ were unbalanced, then $S$ could be any superset of the set of $i$ indices appearing in the monomial, with $T_i = \emptyset$ for any $i$ not appearing in the monomial.) Therefore
\begin{align}
\operatorname{Inf}_{i,j}[f \circ g] &= \sum_{i \in S} \sum_{\substack{T_k \, \forall k \in S \\ j \in T_i}} \hat{f}(S)^2 \prod_{k \in S} \hat{g}(T_k)^2 \\ &=
\sum_{i \in S} \hat{f}(S)^2 \cdot \sum_{j \in T_i} \hat{g}(T_i)^2 \cdot \prod_{\substack{k \in S \\ k \neq i}} \sum_{T_k} \hat{g}(T_k)^2 \\ &=
\sum_{i \in S} \hat{f}(S)^2 \cdot \operatorname{Inf}_j[g] \\ &= \operatorname{Inf}_i[f] \cdot \operatorname{Inf}_j[g],
\end{align}
using
$$
\sum_T \hat{g}(T)^2 = 1,
$$
since $g^2 = 1$. | {
"domain": "cs.stackexchange",
"id": 19997,
"tags": "probability-theory, boolean-algebra"
} |
Why does a thermal memory need a thermal bath? | Question: In the article "Thermal Memory: A Storage of Phononic Information" (Phys. Rev. Lett. 101, 267203, published 29 December 2008) it's said that a thermal memory needs a thermal bath, similar to a power supply in electronic memory, to avoid that thermal fluctuations destroy the memory.
How does a thermal bath prevent thermal fluctuations from destroying the thermal memory?
Addition. The paragraph of the mentioned article is:
Like an electronic memory that records data by maintaining voltage in a capacitance, a thermal memory stores data by keeping temperature somewhere. Therefore, any thermally insulated system might be a candidate for thermal memory since it keeps its temperature (thus data) for a very long time. However, perturbation is unavoidable in such a thermal system, especially when the data are read out, namely, the local temperature is measured. Generally, without external energy source or sink, any thermally insulated system will not be able to recover its original state after the data reading (temperature measuring) process, because there exists energy exchange between the system and the reader (thermometer) during this process. We thus have to turn to a thermal-circuit with power supply, i.e., driven by external heat bath.
Answer: My interpretation (based on reading the paragraph - no deep background in the physics of these devices so keep that in mind as you take this in):
What this is saying is that if you store information as "heat", then the process of measuring the temperature of a "bit" requires some heat to flow - and this will in turn lower the temperature of the element you are measuring. In order to prevent the process of repeated measurement from zeroing out the heat differences, you need to pump back the heat you lost by measurement. Hence the need for a heat source. | {
"domain": "physics.stackexchange",
"id": 19562,
"tags": "thermodynamics, temperature, phonons, computer"
} |
Trie Data Structure Implementation in C++11 using Smart Pointers -- follow-up | Question: Link to my first question.
Link to my latest source code.
I followed @JDługosz's recommendations. How does it look? Do you have any further recommendations? Is it better (if possible) to replace shared_ptr with unique_ptr? How could someone extend it to use the Unicode character set?
#pragma once
#include <iostream>
#include <memory>
#include <string>
namespace forest {
class trie {
private:
struct Node {
std::shared_ptr<Node> children[26];
bool end = false;
};
std::shared_ptr<Node> root = std::make_shared<Node>();
public:
void insert(const std::string & key) {
std::shared_ptr<Node> n = root;
for (auto c : key) {
int index = c - 'a';
auto& slot = n->children[index];
if (!slot) slot = std::make_shared<Node>();
n = slot;
}
n->end = true;
}
bool search(const std::string & key) {
std::shared_ptr<Node> n = root;
for (auto c : key) {
int index = c - 'a';
auto& slot = n->children[index];
if (!slot) return false;
n = slot;
}
return n && n->end;
}
};
}
Answer: pointers
I supposed the reason for using shared pointers was the possibility of sharing tree representation across instances, "persistence", and transactions. I’ve just watched some presentations on persistent data structures (and on Google’s trie, for that matter) so that was in my mind.
I agree with Frank about pointers. When calling code that operates on an object, it doesn’t care how the object is owned, so making it take an argument of type shared_ptr means it can’t take objects owned by unique_ptr, or that are direct members of larger structures, or on the stack, etc. So these arguments are passed as a reference.
In the Standard Guidelines, pointers are always non-owning. You mark them with owner<> to indicate otherwise.
I agree that the root does not need to be dynamically allocated. But you need to avoid having an ugly special case for the first node.
Node* n = &root;
for (auto c : key) {
int index = to_index(c);
auto& slot = n->children[index];
if (!slot) slot = make_unique<Node>();
n = slot.get();
}
duplicated traversal code
I note that both functions have the same logic to traverse the tree, in their core. Normally, like in standard containers, there will be a single function that does this, and it is used by all the other functions.
If these are the only two functions you have, it’s probably not worth the effort. But if you have more (remove, find the closest match, etc.) then you should do that.
26
The first thing I noticed on your update was that you replaced the evil macro with a magic number, rather than a better way of defining a constant.
static constexpr size_t nodesize = 1+'z'-'a';
std::unique_ptr<Node> children[nodesize];
bad key
int index = c - 'a'; // note signed result wanted
if (index<0 || index>=nodesize) throw invalid_argument("oops");
Both functions go over a string in the same manner, so make that a common function.
int index = to_index(c);
Portability of character encoding
It’s been noted that the letters are not necessarily contiguous in the source character set. However, if you are writing in (original) EBCDIC you have worse problems and would not be able to type the { } characters into the source file. (I’ve discussed C++ on a primitive type of forum software running on an EBCDIC system that was lacking [ ] and a few others, and it is not simple.)
The execution character set is distinct from the source character set, and depends on the locale. More generally, you can see that it depends on the source of the strings such as a saved file — if the file uses a character set that doesn’t use the same codes for letters as we expected, then things will go bad.
So, part of the specification is that the input strings will always be in UTF-8, or (sufficient for our purposes) is compatible with ASCII.
What about at compile-time though? The standard says that the value of a character literal 'a' is in the execution character set, not the source character set, which is good. Except that the execution character set is not known until run time, so how can it do that?
However, you can specify that a character is using UTF-8, regardless of any locale or whatnot going on in the compiler or target system.
static constexpr size_t nodesize = 1+u8'z'-u8'a'; | {
"domain": "codereview.stackexchange",
"id": 30109,
"tags": "c++, performance, c++11, pointers, trie"
} |
Why waves in superposition pass through each other without interference in same medium? | Question: Waves can interact constructively (add up) or destructively (cancel), but when they are in superposition, why is there no lasting interference when they meet in the same medium? Imagine 2 pulses of different amplitudes that approach and pass each other in the same medium: they should interfere, yet they pass through each other undisturbed.
Answer: When the wave equation in the medium is linear, waves will pass right through each other.
A linear differential equation means that when $\Psi(\vec{r}, t)$ and $\Phi(\vec{r}, t)$ are solutions, then also any linear combination (for example their sum) is a solution.
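Here is a quick numerical illustration (a sketch; the grid, pulse shapes, and amplitudes are arbitrary choices): evolving the linear 1D wave equation $u_{tt} = c^2 u_{xx}$ with finite differences, two counter-propagating pulses of different amplitudes pass through each other and emerge with their shapes intact, exactly as the superposition $f(x-ct)+g(x+ct)$ predicts.

```python
import numpy as np

# Leapfrog finite differences for the linear wave equation u_tt = c^2 u_xx
L, N, c = 40.0, 2000, 1.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
dt = dx / c                              # Courant number 1: scheme is exact in 1D

# d'Alembert: u(x, t) = f(x - c t) + g(x + c t)
f = lambda s: np.exp(-((s + 10) ** 2))        # right-mover starting at x = -10
g = lambda s: 0.5 * np.exp(-((s - 10) ** 2))  # left-mover starting at x = +10

u_prev = f(x) + g(x)                     # u at t = 0
u_now = f(x - c * dt) + g(x + c * dt)    # u at t = dt (exact, starts the leapfrog)

steps = int(20.0 / dt)                   # run until the pulses have swapped sides
for _ in range(steps):
    u_next = np.roll(u_now, -1) + np.roll(u_now, 1) - u_prev
    u_prev, u_now = u_now, u_next

t = (steps + 1) * dt
expected = f(x - c * t) + g(x + c * t)   # superposition: pulses pass through
max_err = float(np.max(np.abs(u_now - expected)))
```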
This is true to good approximation for light and sound in air. | {
"domain": "physics.stackexchange",
"id": 57388,
"tags": "quantum-mechanics, waves, interference, superposition"
} |
Evaluating limits of action angle problems | Question: I am really troubled with finding the limits in "action-angle integral" problems. It is said that the limit is taken over the generalised coordinate $q$ such that we have a complete libration or rotation in the $p$ vs $q$ space. But how can we get this limit?
Considering a particular problem, let's say $V(x)=F|x|$ is given.
Then the variable $J$ is defined as $J= \int_a^b dx\,({2mE-2mF|x|})^{1/2}$, where $E$ is a constant.
How do I evaluate $a$ and $b$ now? Is there a general scheme that we can use for such problems?
Answer: In general start with
$$
E=\frac{p^2}{2m}+V(x)\, . \tag{1}
$$
For a given $E$ the turning points of the motion $x_\pm$ are found when $V(x_\pm)=E$ since, at the turning points, there is no kinetic energy (the momentum $p=0$). The turning points define the boundaries of your motion and thus your integration limits.
Reorganize (1) into
$$
p=\pm\sqrt{2m(E-V(x))}\,
$$
and integrate. Because of the sign change in $p$ the integration over a full cycle ought to be broken into a part where $p>0$ and a part where $p<0$. It shouldn’t be too hard to justify that
$$
J=2\int_{x_-}^{x_+} \sqrt{2m(E-V(x))}dx\, ,
$$
so it’s just a job of finding $x_\pm$ for your specific potential.
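For the $V(x) = F\vert x \vert$ potential from the question, the turning points are $x_\pm = \pm E/F$ and the integral comes out in closed form as $J = \frac{8\sqrt{2m}}{3F}\,E^{3/2}$. A quick numerical sketch of this (the values of $m$, $F$, $E$ are arbitrary):

```python
import numpy as np

m, F, E = 1.0, 2.0, 3.0
x_plus = E / F                            # turning point: V(x_pm) = E
x = np.linspace(-x_plus, x_plus, 200001)
p = np.sqrt(2 * m * (E - F * np.abs(x)))  # p(x) on the upper branch

# J = 2 * integral of p dx between the turning points (trapezoid rule)
dx = x[1] - x[0]
J_numeric = 2 * np.sum((p[1:] + p[:-1]) / 2) * dx

J_closed = 8 * np.sqrt(2 * m) * E**1.5 / (3 * F)
```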
[Nota: your potential is $k\vert x\vert$ but your integral has instead $F\vert x\vert$. I presume there’s a typo somewhere] | {
"domain": "physics.stackexchange",
"id": 44207,
"tags": "hamiltonian-formalism, phase-space"
} |
Isentropic flow equation derivation | Question: I am trying to derive the isentropic flow equations for a compressible gas by myself, and at the end I have a different formulation than the one in the literature. Can you please tell me what I am doing wrong?
So we have a nozzle:
If we do an energy balance and consider that the kinetic energy and the potential energy in point 1 is negligible, one ends up with this relation:
$h_1=h_2+\dfrac{v_2^2}{2}$
By using the ideal gas relation $h = CpT$, and if we then divide the equation by Cp and then by $T_2$, we end up with something like this:
$\dfrac{T_1}{T_2}=1 + \dfrac{v_2^2}{2T_2Cp}$
And finally using $Cp = Cv + R$, denoting $Cp/Cv = k$, using the formula for the mach number $M = v/c$ and the speed of sound in gas $c = \sqrt{(T_2Rk)}$, we end up with this relation:
$\dfrac{T_1}{T_2}=1 + \dfrac{k-1}{2}M^2$
But from the literature one can find that this formula is written like this:
$\dfrac{T_t}{T}=1 + \dfrac{k-1}{2}M^2$
The question is... Is the total temperature $T_t$ in their equation the same as $T_1$ in our case? And if the nozzle discharges to the ambient, is $T$ in their formulation the same as $T_2$ in ours? I'm a bit confused with the symbols and the meanings and I want to learn how this works.
P.S. This is a copy of a question I originally posted on another site. I was advised to try asking it here.
Answer:
Is the total temperature $T_t$ in their equation the same as $T_1$ in our case?
$T_t$ is called the stagnation temperature: the temperature of a flow when it is brought to rest ($v=0$). The $T_1$ you defined in the first equation is exactly this stagnation temperature, since you neglected the kinetic energy at point 1.
$$C_pT_t = C_pT + \frac{v^2}{2}$$
$$T_t = T + \frac{v^2}{2C_p}$$
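The two forms of this relation (energy balance vs. Mach number) are algebraically identical, since $C_p = kR/(k-1)$; a quick numerical cross-check (all values are my own illustrative choices for air, not from the post):

```python
# Cross-check of the two stagnation-temperature forms for air
# (illustrative values): T_t = T + v^2/(2 Cp)  versus
# T_t/T = 1 + (k-1)/2 * M^2  with  M = v / sqrt(k R T).
import math

k, R = 1.4, 287.0            # air; Cp follows from Cp = k R / (k - 1)
Cp = k * R / (k - 1)
T, v = 300.0, 300.0          # static temperature [K], flow speed [m/s]

Tt_energy = T + v**2 / (2 * Cp)
M = v / math.sqrt(k * R * T)
Tt_mach = T * (1 + 0.5 * (k - 1) * M**2)

print(Tt_energy, Tt_mach)    # identical: the Mach form is the energy balance rewritten
```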
And if the nozzle discharges to the ambient, is $T$ in their formulation the same as $T_2$ in ours?
Yes, the arbitrary $T$ in the equation is for any location in the nozzle, including the discharge. | {
"domain": "engineering.stackexchange",
"id": 1103,
"tags": "fluid-mechanics, thermodynamics, compressed-gases, compressible-flow"
} |
generic flyweighting function | Question: I made this little function for flyweighting any type that has operator<. I followed the convention of make_unique and make_shared with make_one. Not sure if that's the best name.
// Create a single instance of an object, using
// operator< to determine equality.
// Note that we take a value instead of a constructor
// arguments because often we'll want to mutate a field,
// then create a new object, without passing in everything.
template<class T>
std::shared_ptr<const T> make_one(const T& value) {
struct cmp {
bool operator()(const T* a, const T* b) const {
return *a < *b;
}
};
static std::set<const T*, cmp > values;
auto iter = values.find(&value);
if(iter != values.end()) {
return (*iter)->shared_from_this();
}
auto ptr = std::shared_ptr<T>(new T(value), [](T* p) {
values.erase(p); delete p;
});
values.insert(ptr.get());
return ptr;
}
Test code:
struct Foo : std::enable_shared_from_this<Foo> {
int x = 0;
Foo(int x) : x(x) { }
bool operator<(Foo rhs) const { return x < rhs.x; }
};
auto p0 = make_one(Foo{0});
auto p1 = make_one(Foo{0});
assert(p0 == p1);
auto p2 = make_one(Foo{42});
assert(p2 != p0);
Review goals:
Can I avoid requiring enable_shared_from_this?
Can I make it thread safe without too much trouble?
Any other ideas for improvement.
Answer: The first thing I would note is that you should point out non-obvious features of your implementation. The nice thing that is not immediately obvious is that when all external references are gone, the object is automatically deleted from the make_one()::values set.
This is not really a requirement of a flyweight.
But a nice little trick here.
So if the automatic deletion is required then the answers to the question are:
Can I avoid requiring enable_shared_from_this?
No. I don't think so.
You could re-invent some reference counting model but that seems a lot of work in comparison to simply using the standard libraries.
As an alternative, you can get around it slightly by using std::set<std::weak_ptr<T>>; the trouble here is that even though the resource would be cleaned up, the actual space for the resource would be maintained after the object is destroyed.
If on the other hand automatic deletion is not required.
Can I avoid requiring enable_shared_from_this?
Yes. There are simpler ways this could be written if there is no requirement to delete the object once all external references go away. Now, if you want it again, it will still be there.
template<class T>
T const& make_one(const T& value) {
static std::set<T> values;
auto result = values.insert(value);
return *(result.first);
}
Can I make it thread safe without too much trouble?
Yes: simply add a lock inside make_one().
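A minimal sketch of that locking, applied to the simplified version above (the mutex and the function name are my own; this is one way to do it, not the only one):

```cpp
#include <mutex>
#include <set>

// Thread-safe variant of the simplified make_one(): a function-local
// mutex serialises lookup/insert on the shared static set.  Returned
// references stay valid because std::set nodes are never relocated
// and this version never erases.
template<class T>
const T& make_one_locked(const T& value) {
    static std::set<T> values;
    static std::mutex m;
    std::lock_guard<std::mutex> lock(m);
    return *values.insert(value).first;   // insert is a no-op if an equal value exists
}
```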
Any other ideas for improvement.
You could add the ability to move an object into your make_one() rather than copy it each time. Even better would be to allow the construction of T from its parameters.
// Move an object in your set
template<typename T>
std::shared_ptr<const T> make_one(T&& value)
// Emplace a value in your set
template<typename T, typename... Args>
std::shared_ptr<const T> make_one(Args&&... value) | {
"domain": "codereview.stackexchange",
"id": 35314,
"tags": "c++, c++11"
} |
What is the relation between observables (as defined in the measure-theoretic framework) and POVMs? | Question: A POVM is typically defined as a collection of operators $\{\mu(a)\}_{a\in\Sigma}$ with $\mu(a)\in\mathrm{Pos}(\mathcal X)$ positive operators such that $\sum_{a\in\Sigma}\mu(a)=I$, where I take here $\Sigma$ to be some finite set and $\mathcal X$ some vector space (using the notation from Watrous' TQI).
On the other hand, when discussing observables in a measure-theoretic framework, given a nonempty set $\Omega$, a $\sigma$-algebra $\mathcal F$ on $\Omega$ (in other words, given a measurable space $(\Omega,\mathcal F)$), and denoting with $\mathcal E(\mathcal X)$ the set of effects on the space $\mathcal X$, that is, the set of Hermitian operators $E$ such that $0\le E\le I$, we say that a mapping $\mathrm A:\mathcal F\to\mathcal E(\mathcal X)$ is an observable if, for any state $\psi\in\mathcal X$, the function
$$\mathcal F\ni X\mapsto \langle \psi|\mathrm A(X)\psi\rangle\in\mathbb R$$
is a probability measure. Here I'm taking definition and notation from Heinosaari et al. (2008).
In other words, $\mathrm A$ is an observable iff, for all $\psi\in\mathcal X$, the triple $(\Omega,\mathcal F,\mathrm A_\psi)$ is a probability space, with $\mathrm A_\psi$ defined as $\mathrm A_\psi(X)=\langle \psi|\mathrm A(X)\psi\rangle$.
I'm trying to get a better understanding of how these two different formalisms match.
In particular, an observable as thus defined is closer to a POVM than an "observable" as usually defined in physics (which is just a Hermitian operator), right?
Are these observables equivalent to POVMs? That is, does any such observable correspond to a POVM, and vice versa?
I can see that a POVM can be thought of as the map $\mu:\Sigma\to\mathrm{Pos}(\mathcal X)$, which then extends to a map $\tilde\mu:2^{\Sigma}\to\mathrm{Pos}(\mathcal X)$ such that $\tilde\mu(\Sigma)=I$, which is then an observable.
However, I'm not sure whether any observable also corresponds to such a POVM.
Answer: These two definitions define the same concept: the POVM measurement. The observable definition is how POVM is defined for use in the case of infinite index set and dimension (see e.g. POVM) and POVM definition in the question is how it is simplified for use in the finite case. If you are working in finite dimensions, the two constructions are equivalent.
POVM from observable
We will construct a POVM $\{\mu(a)\}_{a\in\Sigma}$ from the observable $A: \mathcal{F} \to \mathcal{E}(\mathcal{X})$. Begin by choosing a finite $\Sigma \subset \mathcal{F}$ such that $\cup_{a\in\Sigma}a = \Omega$ and $a \cap b = \emptyset$ for every $a, b\in\Sigma$. This serves as the index set for the POVM and establishes the connection between the two definitions. Define
$$
\mu(a) := A(a).
$$
Note that this defines valid POVM elements because $0 \le A(X) \le I$ for every $X\in\mathcal{F}$ and
$$
\sum_{a\in\Sigma} \mu(a) = \sum_{a\in\Sigma} A(a) = A\left(\bigcup_{a\in\Sigma} a\right) = A(\Omega) = I
$$
where in the second equality we used $\sigma$-additivity of $A_\psi$ for every $|\psi\rangle$ and in the last step we used the fact that $A_\psi$ is a probability measure for every $|\psi\rangle$.
Observable from POVM
(This merely fills in some details in the construction you sketched in the question.)
We will construct an observable from the POVM $\{\mu(a)\}_{a\in\Sigma}$. Let $\Omega = \Sigma$ and $\mathcal{F} = \mathcal{P}(\Sigma)$. For $X\in\mathcal{F}$ define
$$
A(X) := \sum_{a\in X}\mu(a)
$$
where we are using the fact that $X$ is finite. Note that the right-hand side is a valid effect. We need to show that for every $|\psi\rangle$, $A_\psi$ is a probability measure. It is clear that for every $X\in\mathcal{F}$, $A_\psi(X) \in [0, 1]$ and $A_\psi(\emptyset) = 0$. Now, $\mathcal{F}$ is finite, because $\Omega$ is finite. Thus, to show $\sigma$-additivity we just need to show additivity. Let $X_{i=1,\dots,k}$ be a collection of disjoint subsets of $\Omega$, then
$$
A_\psi\left(\bigcup_{i=1}^k X_i\right) = \sum_{a\in\cup_{i=1}^k X_i} \mu_\psi(a) = \sum_{i=1}^k \sum_{a\in X_i} \mu_\psi(a) = \sum_{i=1}^k A_\psi(X_i)
$$
where $\mu_\psi(a) = \langle\psi|\mu(a)\psi\rangle$ and in the second equality we used the fact that $X_i$ are disjoint.
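As a concrete finite-dimensional check (the "trine" POVM on a qubit is my own example, not from the post), one can verify numerically that the grouped map $A(X) = \sum_{a\in X}\mu(a)$ behaves as a probability measure:

```python
# Trine POVM on a qubit: mu_k = (2/3)|psi_k><psi_k| with real |psi_k> at
# angles 2*pi*k/3.  Verifies completeness (sum mu_k = I) and that
# A_psi(X) = <psi| sum_{a in X} mu_a |psi> is additive and normalised.
import math

def outer(v):  # |v><v| for a real 2-vector
    return [[v[0] * v[0], v[0] * v[1]], [v[1] * v[0], v[1] * v[1]]]

angles = [2 * math.pi * k / 3 for k in range(3)]
mu = [[[2 / 3 * x for x in row] for row in outer((math.cos(t), math.sin(t)))]
      for t in angles]

# Completeness: mu_0 + mu_1 + mu_2 = I
S = [[sum(mu[k][i][j] for k in range(3)) for j in range(2)] for i in range(2)]
assert all(abs(S[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))

def A_psi(X, psi):
    """Probability that the outcome lands in the subset X, for state psi."""
    return sum(psi[i] * mu[k][i][j] * psi[j]
               for k in X for i in range(2) for j in range(2))

psi = (1.0, 0.0)
assert abs(A_psi({0, 1, 2}, psi) - 1.0) < 1e-12                             # A_psi(Omega) = 1
assert abs(A_psi({0, 1}, psi) - A_psi({0}, psi) - A_psi({1}, psi)) < 1e-12  # additivity
```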
Remark: Note that the definition of observable appears to be more flexible: it doesn't just give rise to a single POVM. Instead, every partitioning of $\Omega$ using the elements of the $\sigma$-algebra gives rise to a potentially different POVM. However, this fact corresponds to the trivial observation that you can obtain a new POVM by grouping and adding elements of a given POVM. | {
"domain": "quantumcomputing.stackexchange",
"id": 2340,
"tags": "mathematics, measurement, terminology-and-notation, povm"
} |
How to find the set of edges for the directed graph associated with a partial order? | Question: I have a set $S$, and a partial order relation $\preceq$ defined on $S$. The way this partial order is given to me is through a function $f:S\times S \to \{true, false\}$, where $f(a,b) = true$ if and only if $a\preceq b$. Given this setup, I can construct a directed graph $D = (S, E)$, where $E= \{(a,b) \in S\times S | f(a,b) = true\}$. I can find all the elements of $E$ in time $|S|^2/2$ by examining all the possibilities. I am looking for an algorithm that can take advantage of the properties of partial order (in particular, transitivity), to reduce the expected time to find all the elements of $E$ to a linear order function of $|S|$.
Answer: As Yury already mentioned, the output size can be too large to hope for subquadratic time, when measured as a function of the input size $n$. But even when the output size is small, very little can be done. In particular, suppose that the input is a partial order with a single comparable pair, chosen uniformly at random among all such partial orders. Then the output size is $O(1)$ but nevertheless it takes $\Omega(n^2)$ queries to find the comparable pair. This is true regardless of whether you're considering only deterministic algorithms or whether you're doing expected case analysis of randomized algorithms. | {
"domain": "cstheory.stackexchange",
"id": 2283,
"tags": "graph-algorithms, directed-acyclic-graph, partial-order"
} |
Which is the branch of engineering that deals with making gadgets? | Question: Background
I am currently preparing for the entrance examination for engineering college, and I noticed that I don't even know which branch I want to take or what kind of education I am opting for. I have always wanted to make cool stuff, not robots in particular, but something similarly useful. I know there are a few branches for this, but which one can lead me to the knowledge needed to make devices like a TV or a phone, things dealing with semiconductors and transistors?
Main
So what is the branch that can enable me to build modern-day gadgets? In particular, I want to be involved in the creation of gadgets that act as an interface between the user and the digital world.
Thats all what I want to know.
Note: I couldn't find the required tag for the need so maybe it wont be good enough to rely on that.
Answer: This question is leading. It leads people to see that you want to be some sort of an electrical engineer. However, none of the answers actually addresses your not knowing.
Making gadgets is not an engineering branch. It is an interdisciplinary pursuit combining many different fields. At the end of the day, whether you're an industrial designer or an engineer matters little. You get a suitable foundation studying almost any of the engineering branches, be it EE, mechatronics, electrical engineering, mechanical engineering, etc.
What matters after this is doing the work. If you want to make innovative designs, then by necessity you will need to work on what you want to do in your own time, and sadly, at least partially, with your own money.
"domain": "engineering.stackexchange",
"id": 1867,
"tags": "building-design, circuits"
} |
How to calculate the integral of a loop? | Question: A certain field has a singularity at the origin, and the divergence of its curl is zero at any point outside the origin, but the surface integral of the curl is not zero over any closed surface containing the origin. So how should the Stokes theorem related to this field be expressed in this case?
Answer: The vector field you're considering is
$$
\vec{v} = \frac{\sin^2(\theta/2)}{r \sin \theta}\,\hat{\varphi}
$$
If you look carefully, you will notice that this vector field is singular when $\theta = \pi$ (i.e., along the negative $z$-axis.) This means that the curl of this vector field is not defined there either (though its limiting value as you approach the negative $z$-axis is in fact $\hat{r}/2r^2$). And you cannot enclose the origin with a closed surface to perform the integrals required in Stokes' theorem, since your surface cannot include the negative $z$-axis, where the field is undefined.
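For completeness, the quoted limiting value follows from the $\hat{r}$-component of the curl in spherical coordinates (this intermediate step is my addition, not part of the original answer):
$$
(\nabla\times\vec v)_r = \frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}\big(\sin\theta\, v_\varphi\big) = \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\sin^2(\theta/2) = \frac{\sin(\theta/2)\cos(\theta/2)}{r^2\sin\theta} = \frac{1}{2r^2},
$$
which is valid away from $\theta = \pi$, where the field itself is undefined.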
This vector field, by the way, is better known as the vector potential for the Dirac monopole. This vector field and its nuances have been previously discussed on this site here and here, among other places. | {
"domain": "physics.stackexchange",
"id": 78884,
"tags": "vector-fields, berry-pancharatnam-phase"
} |
How to keep track of changes done to concurrent stacks? | Question: I am writing a small RPN calculator that works on some stacks concurrently. For example, if I have two functions:
add-int(a: int, b: int): a b +
add-float(a: float, b: float): a b +
and I have two stacks with the following:
Int [1 2 3]
Float [4.0 5.5]
I can just do:
add-int add-float
and the new stacks will be:
Int [1 5]
Float [9.5]
Everything works well there, but if I were to add parametric polymorphism with this function:
add-num(a: num, b: num): a b +
Then it would not work as one would expect. Is it going to add the two integers or the two floats? or perhaps one from each?
To solve the issue, I added a new stack Num that keeps the address of every number pushed into the other stacks to keep a sense of history, but it created two new issues.
First, memory addresses take a lot of space, sometimes more than the data itself, and this would only get worse if I were to add new number types like ratios and complex.
Second, what if I wanted to pop a number that is on top of the stack for its type but way behind in the Num stack.
For example:
> 1 2 3.14 4.0 5/5 -- This is the expression.
Int [1 2] -- These are the stacks.
Float [3.14 4.0]
Ratio [5/5]
Num [&Int[0] &Int[1] &Float[0] &Float[1] &Ratio[0]]
> add-int -- Then I do this.
If I call the function above, there is no issue popping the two integers, but what about deleting the two pointers in the Num stack? It can get pretty expensive to look for the element and remove it every single time a number is popped.
I have realized that this approach is not very effective in terms of space consumption and performance, so is there an alternative, more efficient way of keeping track of the changes done to these stacks?
Answer: I would use doubly-linked lists instead of opaque stacks for this.
Node = {
type : "INT" or "FLOAT"
value : int or float
prev-generic : pointer to Node
next-generic : pointer to Node
prev-specific : pointer to Node
next-specific : pointer to Node
}
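This node layout can be rendered concretely; the following is my own hypothetical Python sketch of the answer's idea, with illustrative names:

```python
# Intrusive two-list node: every value lives in its type-specific stack and
# in the generic "Num" stack at the same time, so popping from either
# unlinks it from both in O(1) -- no scan of the Num stack is needed.
class Node:
    __slots__ = ("kind", "value", "prev_g", "next_g", "prev_s", "next_s")

    def __init__(self, kind, value):
        self.kind, self.value = kind, value
        self.prev_g = self.next_g = None   # links for the generic list
        self.prev_s = self.next_s = None   # links for the type-specific list

class Stacks:
    def __init__(self):
        self.generic_top = None
        self.specific_top = {}             # kind -> top node of that type

    def push(self, kind, value):
        n = Node(kind, value)
        n.next_g, self.generic_top = self.generic_top, n
        if n.next_g:
            n.next_g.prev_g = n
        n.next_s = self.specific_top.get(kind)
        if n.next_s:
            n.next_s.prev_s = n
        self.specific_top[kind] = n

    def _unlink(self, n):                  # O(1) removal from both lists
        if n.prev_g: n.prev_g.next_g = n.next_g
        else:        self.generic_top = n.next_g
        if n.next_g: n.next_g.prev_g = n.prev_g
        if n.prev_s: n.prev_s.next_s = n.next_s
        else:        self.specific_top[n.kind] = n.next_s
        if n.next_s: n.next_s.prev_s = n.prev_s

    def pop_specific(self, kind):          # what add-int would use
        n = self.specific_top[kind]
        self._unlink(n)
        return n.value

    def pop_generic(self):                 # what a polymorphic add-num would use
        n = self.generic_top
        self._unlink(n)
        return n.kind, n.value
```

Pushing 1, 2 as INT and 3.14, 4.0 as FLOAT, pop_specific("INT") yields 2 while pop_generic() yields the FLOAT 4.0, each in constant time.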
Now you can keep three lists: a list of ints (using the specific pointers), a list of floats (using the specific pointers), and a list of both (using the generic pointers). If you keep track of the head of each list, you can treat them as stacks (pushing and popping in $O(1)$), and a popped node can also be deleted from both stacks in $O(1)$ just like in a normal doubly-linked list. | {
"domain": "cs.stackexchange",
"id": 11526,
"tags": "concurrency, memory-management, stacks"
} |
Confusion in half filled or full filled electronic configuration | Question: At the end of electronic configuration, we were taught that, electron orbitals are most stable when they are either fully filled or half filled.
E.g., the final valence configuration of chromium is $\ce{(4s)^1 (3d)^5}$ and not $\ce{(4s)^2 (3d)^4}$.
But the final electronic configuration of chlorine is $\ce{(3s)^2 (3p)^5}$ and not $\ce{(3s)^1 (3p)^6}$.
Why doesn't chlorine follow the half filled or full filled theory?
Answer: Having a half-filled or filled subshell is stabilising, as you say. But for Cl, the difference in energy between 3s and 3p is greater than the additional stabilisation from filling the 3p subshell. 3d and 4s are closer in energy, so for Cr it is favourable to promote a 4s electron and adopt the $\ce{(4s)^1 (3d)^5}$ configuration.
"domain": "chemistry.stackexchange",
"id": 16394,
"tags": "quantum-chemistry, orbitals, electronic-configuration"
} |
In QM we have position and momentum space, what about energy space? | Question: In quantum mechanics we can express our wave function in terms of position space where the position operator, position eigenstates and wave function have the forms $ \hat{x} = x $, $|x \rangle = \delta(x-x') $ and $ \psi(x) $ respectively.
Now I have learned that a wave function can be thought of as existing independent to any basis we use which gives us the freedom to use any basis depending on the problem at hand, I.e. the momentum basis of momentum eigenstates to name one. This happens to be the Fourier transform of the position space wave function.
Well what if I decide to use energy eigenstates as my basis, is there such thing as an energy wave function $\psi(E)$ with the interpretation that $\psi(E)dE$ is the probability to find the system in the energy range $E$ to $E+ dE$? This space would have eigenstates of $|E \rangle = \delta(E-E')$. How would I transform from position to energy space?
Answer: Yes, you can use eigenstates of the energy operator (the Hamiltonian) to express any state, but
i) The spectrum of the Hamiltonian can be discrete, so that instead of $|\psi\rangle = \int \psi(E)|E\rangle \, dE$, you have $|\psi\rangle = \sum \psi_E |E\rangle$. In fact, in introductory quantum mechanics when introducing the time independent Schrödinger equation, we use that any state can be decomposed as such.
ii) The spectrum of the Hamiltonian can be degenerate, so there is more than one state with the same energy. For example, for a free particle, the energy is $\mathbf p^2/2m$. It depends only on the magnitude of $\mathbf p$, not the direction. Therefore, a formula like $|\psi\rangle = \int \psi(E) |E\rangle$ isn't general enough. Which state with energy $E$ does $|E\rangle$ refer to? You need something like $|\psi\rangle = \int \psi(E, \hat p) |E, \hat p\rangle$ where $\hat p$ is a unit vector and signifies that the state has momentum in the $\hat p$ direction.
Since $E$ is, in this case, a function only of the magnitude of the momentum, we are in effect using the magnitude and direction of the momentum, so this is basically the same as momentum space. It is changing to spherical coordinates in momentum space followed by the change of variable $p \mapsto E = p^2/2m$. This is convenient for example when doing statistical physics because the Bose-Einstein and Fermi-Dirac distributions are functions of the energy, so it's convenient to use the energy as a variable.
iii) The momentum operator is always the same, so its eigenfunctions are always the same, and they are simple; they are plane waves.
On the other hand, the energy, the Hamiltonian operator, is different for each system -- a hydrogen-like atom, a harmonic oscillator, molecules, a crystal, a superconductor... In the general case, finding the eigenstates of the energy is hard and amounts to completely solving the dynamics of the system. You can see this from that if $$|\psi\rangle = \sum \tilde{c}_n(t) |E_n\rangle $$
and $\hat H|E_n\rangle = E_n |E_n\rangle$, then the Schrödinger equation gives $\tilde c_n(t) = c_n \exp(-iE_nt/\hbar)$ (the $|E_n\rangle$ should be time-independent). In the case of degeneracy, the $|E_n\rangle$ can be chosen orthogonal, so that $c_n = \langle E_n |\psi(t=0)\rangle = \int \psi_{E_n}^* \psi$. Hence you can get the time evolution for any initial state, that is, completely solve the dynamics.
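As a toy illustration of this point (an infinite square well with $\hbar=m=L=1$; the example and all values are mine, not from the answer):

```python
# Expand a state on the energy eigenbasis of the infinite square well
# (hbar = m = L = 1): phi_n(x) = sqrt(2) sin(n pi x), E_n = (n pi)^2 / 2.
# Time evolution is then just a phase exp(-i E_n t) on each coefficient.
import cmath, math

coeffs = {1: 1 / math.sqrt(2), 2: 1 / math.sqrt(2)}   # equal mix of n = 1, 2

def psi(x, t):
    """sum_n c_n phi_n(x) exp(-i E_n t)"""
    return sum(c * math.sqrt(2) * math.sin(n * math.pi * x)
               * cmath.exp(-0.5j * (n * math.pi) ** 2 * t)
               for n, c in coeffs.items())

def norm(t, n_grid=2000):                              # midpoint rule on [0, 1]
    dx = 1.0 / n_grid
    return sum(abs(psi((i + 0.5) * dx, t)) ** 2 for i in range(n_grid)) * dx

# The energy probabilities |c_n|^2 never change; only relative phases evolve,
# so the norm is conserved even though |psi(x, t)|^2 sloshes around in x.
print(norm(0.0), norm(1.3))   # both ~ 1.0
```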
iv) In relativistic theories, energy is the time component of the 4-momentum. Since the split between time and space is observer-dependent, in a relativistic theory, going over to momentum space one must also Fourier transform with respect to time. However, because it holds that $E^2 - p^2 = m^2$, only three components in momentum space are actually independent.
To summarize, yes, you can use eigenstates of the energy to expand the state (accounting for discrete and degenerate spectra), but finding the transition to that eigenbasis is equivalent to solving the Schrödinger equation. If you can do that, go ahead. If you cannot, you can still get somewhere with an energy eigenbasis if you only need general properties of the energy. This can be the case in thermodynamics, e.g. superconductors and Landau's theory of Fermi liquids. | {
"domain": "physics.stackexchange",
"id": 41846,
"tags": "quantum-mechanics"
} |
How to measure and verify ultrasound frequency and intensity from a cheap ultrasonic body contouring device? | Question: Research shows that applying 1 MHz non-focused ultrasound at 3 W/cm2 may reduce subcutaneous fat thickness. In fact, high-end and expensive (costing over $5000) HIFU ultrasonic transducers are now being used in cosmetic clinics; they allow the user to regulate ultrasound frequency, pulsing, power and time in order to target body fat.
To my surprise, I noticed that one can also buy inexpensive (costs only $50), portable 1Mhz Ultrasound cavitation device from Ebay or Aliexpress that also claims to reduce body fat. Scam or not, one way to justify its claim will be to measure and verify the emitting frequency and intensity.
I bought such a device from Aliexpress (costed $130) to play around with it. My skin can feel a definite vibration when I turn on the device, however, it is very hard to tell if ultrasound is actually exciting my tissue lesions.
I want to know how to set up an inexpensive testing setup and:
1) Find out if the device is really emitting ultrasound. If yes, how would I measure its frequency? The device manual says 1 MHz, but I suspect it might be bs.
2) How to measure the intensity and penetration depth. Finding one value should lead me to calculate another.
3) Find out if the device's ultrasound is focused or non-focused. My gut feeling is the ultrasound is non-focused or weakly focused.
One way to find more clues would be to rip through the device and look at the transducer circuit. The transducer hardware might have a model number that can be used to find it's spec sheet online. However, I haven't unscrewed the device yet and I doubt I would find anything helpful on the circuit board as it probably came directly from a questionable Chinese factory.
Further reading:
There has been plenty of research conducted so far regarding effects of non-focused ultrasound on animal contouring:
1) The effects of nonfocused external ultrasound on tissue temperature and adipocyte morphology.
2) Ultrasound dose calculation
Answer: You can measure this with a piezoelectric transducer and an oscilloscope. Several companies sell thin-film piezo transducers that you can just stick onto your device under test. The transducer has leads that you would connect to a piezo pre-amplifier, which will amplify and buffer the signal coming from your DUT. Then attach a scope lead to the output of the pre-amp and you can see the signal on your scope.
"domain": "engineering.stackexchange",
"id": 2692,
"tags": "experimental-physics, biomedical-engineering, ultrasound, waves"
} |
Can we resolve tension force into components for net force? | Question: When solving certain questions, I noticed that in some places we take the component of gravity along the string, while in others we take the component of tension along gravity. I think that both of these methods should give the same result, but they don't. Some people also told me: assume you have a car being pulled by two ropes; will the acceleration be double? But why is tension force so special? What am I doing wrong by taking the component of tension along another force? Thanks in advance.
Answer: If the distance between the pulleys is $2x$ and the weight is a distance $y$ below the pulleys then the distance $z$ from either pulley satisfies
$z^2 = x^2 + y^2$
Differentiating and using the fact that $x$ is constant gives
$\displaystyle 2z \frac {dz}{dt} = 2y \frac{dy}{dt}
\\ \displaystyle \Rightarrow \frac {dy}{dt} = \frac z y \frac {dz}{dt} = \sec \theta \frac {dz}{dt}$
Your method assumes that $y = z \cos \theta$ implies $\frac {dy}{dt} = \frac {dz}{dt}\cos \theta$ but this would only be true if $\theta$ were constant, which it is not. | {
"domain": "physics.stackexchange",
"id": 81030,
"tags": "newtonian-mechanics, forces"
} |
Finding the electric field for a shell of charge | Question: Suppose we have charge density defined
$$
\rho(x,y,z) =
\begin{cases}
0 & 0 \leq r < a \\
K & a \leq r\leq b\\
0 & b< r
\end{cases}
$$
For some constants $K,a,b$
How would we find the electric field for all points in space? Any help I would greatly appreciate! I imagine we argue that the field must be symmetric, but I was having difficulty proceeding from there.
Answer: The electric field due to a spherical shell is zero inside the sphere and equal to the field of an equivalent charge at the centre outside the sphere. Thus the field is directed outward everywhere, and its strength is zero for $r\lt a$, $K\frac43\pi(r^3-a^3)/r^2$ for $a\le r\le b$ and $K\frac43\pi(b^3-a^3)/r^2$ for $b\lt r$. | {
"domain": "physics.stackexchange",
"id": 5678,
"tags": "homework-and-exercises, electrostatics, electric-fields"
} |
How can policy gradients be applied in the case of multiple continuous actions? | Question: Trusted Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO) are two cutting edge policy gradients algorithms.
When using a single continuous action, normally, you would use some probability distribution (for example, Gaussian) for the loss function. The rough version is:
$$L(\theta) = \log(P(a_1)) A,$$
where $A$ is the advantage of rewards, $P(a_1)$ is characterized by $\mu$ and $\sigma^2$ that comes out of neural network like in the Pendulum environment here: https://github.com/leomzhong/DeepReinforcementLearningCourse/blob/69e573cd88faec7e9cf900da8eeef08c57dec0f0/hw4/main.py.
The problem is that I cannot find any paper on 2+ continuous actions using policy gradients (not actor-critic methods that use a different approach by transferring gradient from Q-function).
Do you know how to do this using TRPO for 2 continuous actions in LunarLander environment?
Is following approach correct for policy gradient loss function?
$$L(\theta) = (\log P(a_1) + \log P(a_2) )*A$$
Answer: As you said, actions chosen by Actor-Critic typically come from a normal distribution and it is the agent's job to find the appropriate mean and standard deviation based on the current state. In many cases this one distribution is enough because only 1 continuous action is required. However, as domains such as robotics become more integrated with AI, situations where 2 or more continuous actions are required are a growing problem.
There are 2 solutions to this problem:
The first and most common is that for every continuous action, there is a separate agent learning its own 1-dimensional mean and standard deviation. Part of its state includes the actions of the other agents as well to give context of what the entire system is doing. We commonly do this in my lab and here is a paper which describes this approach with 3 actor-critic agents working together to move a robotic arm.
The second approach is to have one agent find a multivariate (usually normal) distribution of a policy. Although in theory, this approach could have a more concise policy distribution by "rotating" the distribution based on the co-variance matrix, it means that all of the values of the co-variance matrix must be learned as well. This increases the number of values that must be learned to have $n$ continuous outputs from $2n$ (mean and stddev), to $n+n^2$ ($n$ means and an $n \times n$ co-variance matrix). This drawback has made this approach not as popular in the literature.
This is a more general answer but should help you and others on their related problems. | {
"domain": "ai.stackexchange",
"id": 319,
"tags": "deep-learning, reinforcement-learning, policy-gradients, proximal-policy-optimization, trust-region-policy-optimization"
} |
Product of operators, eigenvalues | Question: If I have two hermitian operators, let's say A and B, then their eigenfunctions (vectors) form a basis... If I now take a product of both of them and create a new operator AB (composition of both), that operator will be hermitian as well (if A and B commute), but what about its basis?
Question: Is it simply all possible combinations of $|A\rangle$$|B\rangle$, thats the product of eigenfunctions of the A and B operator?
Question: Can all functions be written in the A or B basis?
Example: I have two particles 1 and 2, their angular momentum operators for the "z direction" are $ L_{z1}$ and $L_{z2}$ , they are both hermitian, so I can calculate their eigenfunctions, that form a basis(this are the spherical harmonics$ |l_1,m_1\rangle $ and $|l_2 ,m_2\rangle$ ), I now create the $ L_{z1}$$L_{z2}$ operator. If I now calculate the eigenfunctions of this new operator, they will form a basis, is this basis composed of function in the form of $|l_1,m_1\rangle |l_2,m_2\rangle$, so just a product of the eigenfunctions of both operators?
Answer: As " ACuriousMind " said the product of AB is Hermitian iff A and B commutes .
Given that they commute, one can obtain a common eigen-basis for both A and B. The eigen-basis of either A or B can also be used to diagonalize the AB operator.
Here goes the proof (I'll treat only the non-degenerate case, as the degenerate one requires a more detailed proof).
Consider $|a\rangle$ and $|b\rangle$ be the eigen-vectors of A and B with eigen-values $\lambda_a$ and $\lambda_b$ respectively.
so we have the following,
$[A,B] = AB-BA = 0$ and,
$A|a\rangle = \lambda_a|a\rangle$ , $B|b\rangle = \lambda_b|b\rangle$.
as the operators are hermitian, they have real eigen-values.
Part 1 :
If sets G1 and G2 are both eigen-bases of the linear operator A, then any vector from G2 can be written as a linear combination of vectors of G1. This in turn leads to the conclusion that G1 and G2 are essentially the same except for multiplication by a complex constant, i.e.
$A|a_{G2}\rangle = \lambda_a^{G2}|a_{G2}\rangle = \lambda_a^{G2}c_a^{G1,G2}|a_{G1}\rangle$, where $|a_{G2}\rangle$ and $|a_{G1}\rangle$ are from sets G2 and G1 respectively and $c_a^{G1,G2}$ is a constant
Part 2:
$AB|b\rangle = A(\lambda_b|b\rangle) = \lambda_b A|b\rangle$, and since $AB = BA$, also $B(A|b\rangle) = \lambda_b (A|b\rangle)$.
This says that $A|b\rangle$ is also an eigenvector of operator B with $\lambda_b$ as its eigenvalue.
Using the result from Part 1 (non-degeneracy), $A|b\rangle$ must be proportional to $|b\rangle$: $A|b\rangle = c_a|b\rangle$.
Hence the set of vectors $|b\rangle$ can be taken as a common eigenvector set of both operators A and B.
As for your question, these $|b\rangle$ are also eigenvectors of the operator AB:
$AB|b\rangle = \lambda_b c_a|b\rangle$.
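As a numerical sanity check (not part of the original answer; here B is built as a polynomial in a random Hermitian A, purely as an illustrative way to get two commuting Hermitian operators), one can verify with NumPy that the eigenvectors of A simultaneously diagonalize B and the product AB:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian A; B is a polynomial in A, so B is Hermitian and [A, B] = 0.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.conj().T
B = A @ A + 3 * A + 2 * np.eye(4)

assert np.allclose(A @ B, B @ A)  # the operators commute

# Eigenvectors of A (generically non-degenerate for a random matrix) ...
w, V = np.linalg.eigh(A)

# ... simultaneously diagonalize B and the product AB, as claimed above.
for X in (B, A @ B):
    D = V.conj().T @ X @ V
    assert np.allclose(D, np.diag(np.diag(D)), atol=1e-8)

print("eigenbasis of A diagonalizes A, B and AB")
```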
Hope this answers your query (though the question itself was not entirely clear).
"domain": "physics.stackexchange",
"id": 40287,
"tags": "quantum-mechanics, operators, linear-algebra"
} |
catkin_make install fails to install the executable correctly | Question:
Hi there,
we are using the below attached CMakeLists.txt to build a simple executable. Building works OK and we can also successfully run it from the devel folder (devel/lib/lawnmower_intrudor_detector/intrudor_detector_node).
However after we install the executable using the "catkin_make install" command into the install/lib/lawnmower_intrudor_detector/intrudor_detector_node, the shared libraries the executable is linking against are not found anymore:
./intrudor_detector_node: error while loading shared libraries: libpcl_common.so.1.7: cannot open shared object file: No such file or directory
We verified and the 2 executables (in devel and install) indeed have different checksums - which to the best of our knowledge should not happen, right?
Using: ROS Groovy, Ubuntu 12.04
CMakeLists.txt is below. Please note that the install macro does not work when passed the name of the package, hence only the executable is listed.
cmake_minimum_required(VERSION 2.8.3)
project(lawnmower_intrudor_detector)
find_package(catkin REQUIRED COMPONENTS laser_geometry roscpp sensor_msgs tf)
## System dependencies are found with CMake's conventions
#find_package(PCL 1.6.1 REQUIRED)
find_package(PCL 1.7.0 REQUIRED)
catkin_package(
INCLUDE_DIRS include
CATKIN_DEPENDS laser_geometry roscpp sensor_msgs tf
DEPENDS eigen
)
## Specify additional locations of header files
include_directories(include
${catkin_INCLUDE_DIRS}
${PCL_INCLUDE_DIRS}
)
## Declare a cpp executable
add_executable(intrudor_detector_node src/intrudor_detector.cpp)
## Add dependencies to the executable
# add_dependencies(lawnmower_intrudor_detector_node ${PROJECT_NAME})
target_link_libraries(intrudor_detector_node
${catkin_LIBRARIES}
${PCL_LIBRARIES}
)
install(TARGETS intrudor_detector_node
ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
Update: PCL_LIBRARIES are as follows: http://pastebin.com/cWTf0cjr .
The weird thing about it is that there are no semicolons delimiting the respective libraries.
Update 2:
ldd output for both executables in devel and install workspaces as follows: http://pastebin.com/ZpQBhUdJ . In install workspace pcl_common* is not found anymore.
Originally posted by dejanpan on ROS Answers with karma: 1420 on 2013-03-10
Post score: 3
Original comments
Comment by jbohren on 2013-03-10:
Does it also not work if you cd into the build directory and run make install ?
Comment by William on 2013-03-11:
To clarify, catkin_make doesn't install the file, catkin_make just runs CMake and Make together as a convenience. You can reproduce this without catkin_make with 'mkdir build && cd build && cmake ../src -DCMAKE_INSTALL_PREFIX=../install -DCATKIN_DEVEL_PREFIX=../devel && make '
Comment by William on 2013-03-11:
These are unrelated to your question, but your CMakeLists.txt find_package(...)'s PCL but does not export it in catkin_package(...) (this may or may not be desired), and you DEPEND on eigen, but never find_package(...) it, which is likely not desired behavior.
Comment by dejanpan on 2013-03-11:
@Jon: yes, it does NOT work with cd-ing into build and running make install as well.
Comment by dejanpan on 2013-03-11:
Jon, we found out that it works if append the path to the PCL libraries before: "export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}". Note that we compilled PCL ourselves and make installed it into /usr/local.
Comment by William on 2013-03-11:
@dejanpan can you output PCL_LIBRARIES and post it here? It seems like the only way the above comment fixes it is if PCL does not return absolute path libraries.
Comment by dejanpan on 2013-03-11:
@William: I updated the question with the PCL_LIBRARIES message status. Weird thing is that the libraries are not delimited by the semicolon. Is that a problem?
Comment by William on 2013-03-11:
@dejanpan No, that is a side effect of printing a list in CMake: http://stackoverflow.com/questions/7172670/best-shortest-way-to-join-a-list-in-cmake
Comment by William on 2013-03-11:
@dejanpan Can you post the output of ldd /path/to/your/executable? (In both install and devel space for comparison)
Comment by dejanpan on 2013-03-11:
@William: See update 2. In fact the executable in install workspace does not find pcl_common library anymore.
Comment by William on 2013-03-11:
@dejanpan I don't know why that is, let me play with it for a bit, maybe I can come up with something.
Comment by dejanpan on 2013-03-11:
@William: It seems that we found the solution, which is running the "sudo ldconfig" command. Now pcl_common is found and the executable runs correctly.
Comment by William on 2013-03-11:
@dejanpan That makes sense, that will update the ld cache. Can you answer your own question here and accept it? Thanks
Comment by dejanpan on 2013-03-11:
@William: I did. But do you happen to know if you really have to update the ldconfig manually or should there be a cronjob or anything else?
Comment by William on 2013-03-11:
@dejanpan I believe you would need to run ldconfig anytime you install libraries to /usr/lib or /usr/local/lib, this is run automatically by apt usually. So, since you manually installed PCL (I assume) then you would need to manually update the ld cache.
Comment by William on 2013-03-11:
But I am not 100% certain about that.
Answer:
The solution: run the "sudo ldconfig" command. After that, pcl_common is found and the executable runs correctly. Though we are still not sure why ldconfig needs to be run manually.
Originally posted by dejanpan with karma: 1420 on 2013-03-11
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 13294,
"tags": "catkin-make, catkin"
} |
The case of an initial velocity greater than final velocity with a drag force proportional to the velocity squared | Question: Consider an object with mass $m$ falling in a fluid with a drag force proportional to its velocity squared, $f=kv^2$.
The governing differential equation can be found using Newton's second law of motion as
$$ A \frac{dv}{dt} + v^2 = v_{lim}^2 ~,~v(0)=v_0$$
where $A=\frac{m}{k}$ and $v_{lim}=\sqrt{\frac{mg}{k}}$ is the final velocity with the initial condition $v(0)=v_0$.
The solution of the equation is
$$v(t)=v_{lim} \tanh \left(\frac{v_{lim}t}{A}+ \tanh^{-1} \left(\frac{v_0}{v_{lim}} \right) \right)$$
where $\tanh^{-1}$ is the inverse hyperbolic tangent.
The given solution does reflect the physical behavior in the case $v_0<v_{lim}$, i.e. the velocity increases from $v_0$ to $v_{lim}$.
In the case $v_0>v_{lim}$, the expected physical behavior is that the velocity decreases over time (from $v_0$ to $v_{lim}$), and yet we never see such a decrease when plotting the function, because the real $\tanh^{-1}(x)$ function is only defined for $|x|<1$, i.e. for $v_0<v_{lim}$.
So, is there maybe another formula that better describe the second case?
Answer: Using the software Maple 2022 to solve the differential equation, I end up with the solution
$$v(t) = v_\mathrm{lim} \frac{\frac{v_0}{v_\mathrm{lim}} \cosh(x) + \sinh(x)}{\frac{v_0}{v_\mathrm{lim}} \sinh(x) + \cosh(x)},$$
where $x = t \frac{v_\mathrm{lim}}{A}$.
This solution agrees with yours for $v_0 < v_\mathrm{lim}$ but is also valid when $v_0 \geq v_\mathrm{lim}$.
It seems to me that the solution given to you is simply only valid in the case $v_0 < v_\mathrm{lim}$. | {
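A quick numerical check (not from the original answer; the parameter values below are purely illustrative) confirms that the cosh/sinh form satisfies the initial condition and decays monotonically toward $v_\mathrm{lim}$ when $v_0 > v_\mathrm{lim}$, matching a direct RK4 integration of the ODE:

```python
import numpy as np

# Illustrative parameters with v0 > v_lim (the case in question).
A, v_lim, v0 = 2.0, 5.0, 8.0

def v_closed(t):
    """cosh/sinh form of the solution, valid for any v0 >= 0."""
    x = t * v_lim / A
    r = v0 / v_lim
    return v_lim * (r * np.cosh(x) + np.sinh(x)) / (r * np.sinh(x) + np.cosh(x))

def v_numeric(t_end, n=10000):
    """RK4 integration of A dv/dt = v_lim^2 - v^2 from v(0) = v0."""
    f = lambda v: (v_lim**2 - v**2) / A
    v, dt = v0, t_end / n
    for _ in range(n):
        k1 = f(v); k2 = f(v + 0.5*dt*k1); k3 = f(v + 0.5*dt*k2); k4 = f(v + dt*k3)
        v += dt * (k1 + 2*k2 + 2*k3 + k4) / 6
    return v

# Closed form matches the ODE, and the velocity decays from v0 toward v_lim.
assert abs(v_closed(0.0) - v0) < 1e-12
assert v_lim < v_closed(1.0) < v0
assert abs(v_closed(1.0) - v_numeric(1.0)) < 1e-6
```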
"domain": "physics.stackexchange",
"id": 93417,
"tags": "newtonian-mechanics, fluid-dynamics, velocity, drag"
} |