anchor | positive | source |
|---|---|---|
Generating random email addresses | Question: The code below generates only 10 email domains. To me, this is bruteforce programming. Is there another random integer library? Could I use something like random.seed() in C++?
What is a more elegant way to generate 100 email names using some Python libs/APIs?
import random
domains = ["hotmail.com", "gmail.com", "aol.com", "mail.com", "mail.kz", "yahoo.com"]
letters = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l"]

def get_one_random_domain(domains):
    return domains[random.randint(0, len(domains)-1)]

def get_one_random_name(letters):
    email_name = ""
    for i in range(7):
        email_name = email_name + letters[random.randint(0, 11)]
    return email_name

def generate_random_emails():
    for i in range(0, 10):
        one_name = str(get_one_random_name(letters))
        one_domain = str(get_one_random_domain(domains))
        print(one_name + "@" + one_domain)

def main():
    generate_random_emails()
In get_one_random_name(), how can I make this email_name string grow with random letters without using "+ "?
How can I choose letters randomly and put them together without +?
Answer: From this Stack Overflow question:
import random
foo = ['a', 'b', 'c', 'd', 'e']
print(random.choice(foo))
In your case,
def get_one_random_domain(domains):
    return domains[random.randint(0, len(domains)-1)]
becomes:
def get_one_random_domain(domains):
    return random.choice(domains)
and maybe removes the need for a function in the first place.
Your letters list can easily be defined with a list comprehension and some manipulation of ord and chr:
>>> [chr(ord('a')+i) for i in range(12)]
['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l']
As suggested in the comments, you can also use string.ascii_lowercase[:12]
Now, your get_one_random_name could be easily improved.
def get_one_random_name(letters):
    email_name = ""
    for i in range(7):
        email_name = email_name + letters[random.randint(0, 11)]
    return email_name
using the previous trick, it becomes:
def get_one_random_name(letters):
    email_name = ""
    for i in range(7):
        email_name = email_name + random.choice(letters)
    return email_name
Also, here is a recommendation from PEP 8:
For example, do not rely on CPython's efficient implementation of
in-place string concatenation for statements in the form a += b or a =
a + b. This optimization is fragile even in CPython (it only works for
some types) and isn't present at all in implementations that don't use
refcounting. In performance sensitive parts of the library, the
''.join() form should be used instead. This will ensure that
concatenation occurs in linear time across various implementations.
In your case, you can easily build a list of random letters and call join.
def get_one_random_name(letters):
    return ''.join(random.choice(letters) for i in range(7))
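As a further aside (not in the original answer), on Python 3.6+ the standard library also provides `random.choices`, which draws several elements with replacement in a single call:

```python
import random
import string

letters = string.ascii_lowercase[:12]

def get_one_random_name(letters, length=7):
    # random.choices draws `length` elements with replacement in one call
    return ''.join(random.choices(letters, k=length))

name = get_one_random_name(letters)
```

This avoids the explicit generator expression entirely and makes the name length an easy parameter to change.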
In generate_random_emails:
you don't need conversions.
you don't need temporary variables
you don't need the first argument of range if it is 0.
so you get:
def generate_random_emails():
    for i in range(10):
        print(get_one_random_name(letters) + '@' + get_one_random_domain(domains))
Also, it might be a good idea to return a list instead of printing values to make code easier to reuse.
At this stage, the code looks like:
import random
import string

domains = ["hotmail.com", "gmail.com", "aol.com", "mail.com", "mail.kz", "yahoo.com"]
letters = string.ascii_lowercase[:12]

def get_one_random_domain(domains):
    return random.choice(domains)

def get_one_random_name(letters):
    return ''.join(random.choice(letters) for i in range(7))

def generate_random_emails():
    return [get_one_random_name(letters) + '@' + get_one_random_domain(domains) for i in range(10)]

def main():
    print(generate_random_emails())

if __name__ == "__main__":
    main()
A good idea would be to change the function names and add arguments instead of hard-coded values:
import random
import string

domains = ["hotmail.com", "gmail.com", "aol.com", "mail.com", "mail.kz", "yahoo.com"]
letters = string.ascii_lowercase[:12]

def get_random_domain(domains):
    return random.choice(domains)

def get_random_name(letters, length):
    return ''.join(random.choice(letters) for i in range(length))

def generate_random_emails(nb, length):
    return [get_random_name(letters, length) + '@' + get_random_domain(domains) for i in range(nb)]

def main():
    print(generate_random_emails(10, 7))

if __name__ == "__main__":
    main() | {
"domain": "codereview.stackexchange",
"id": 8756,
"tags": "python, python-3.x, random"
} |
Representation of the Poincare algebra on the space of smooth functions | Question: The following representation describes how a field $\varphi$ transforms under the Poincaré group $\mathcal{P}$.
$$\mathsf{S} : \left\lbrace \begin{aligned}
\mathcal{P} \times C^{\infty}(\mathcal{M}) &\longrightarrow C^{\infty}(\mathcal{M})\\ \big((\mathbf{a},\mathbf{\Lambda}), \varphi \big)\ \ &\longmapsto \mathsf{S}_{(\mathbf{a},\mathbf{\Lambda})}\cdot \varphi (x^{\mu}) := \varphi\big((\Lambda^{-1})^{\mu}{}_{\nu}\, (x- a)^{\nu}\big)
\end{aligned} \right.$$
Remarks:
This is equivalent to giving a map $\mathcal{P} \longrightarrow \mathrm{End}\big( C^{\infty}(\mathcal{M}) \big)$. ("Module"="Representation"). The reason one has $(\Lambda^{-1})^{\mu}{}_{\nu}$ and $- a^{\mu}$ is: 1) geometrically, in analogy with the simple case of translations, if the graph of a function is to be moved to the right by $a$, the argument is $(x-a)$; 2) algebraically, to have a left action as opposed to a right action.
There are other ways a field may transform, mathematically: there are other representations of $\mathcal{P}$ on $C^{\infty}(\mathcal{M})$, the present one is what physicists call scalar, as in "scalar field".
For a Lorentz transformation $\mathbf{\Lambda} \in SO(1,3)\subset\mathcal{P}$ close to the identity one usually writes the following expansion
$$\Lambda^{\mu}{}_{\nu} = \delta^{\mu}{}_{\nu} + \omega^{\mu}{}_{\nu} + o(\omega)\qquad \qquad (Eq. 1)$$
(from which one obtains $\mathbf{\Lambda} \in SO(1,3)\ \Leftrightarrow \omega^{\mu\nu}= - \omega^{\nu\mu}$. $\eta^{\rho\sigma}$ involved) but for the representation above one writes
$$\mathsf{S}_{\mathbf{\Lambda}} = \Big( \mathsf{Id} - \frac{i}{2\hbar} \omega^{\mu\nu} \boldsymbol{\mathsf{J}}_{\mu\nu} \Big) + o(\omega) \qquad\qquad (Eq. 2)$$
The generators $\boldsymbol{\mathsf{J}}_{\mu\nu}$ of $\mathcal{P}$ then act on $C^{\infty}(\mathcal{M})$ as differential operators:
$$\mathsf{S}_{\mathbf{\Lambda}} \cdot \varphi (x^{\mu}) =\varphi\big((\Lambda^{-1})^{\mu}{}_{\nu}\, x^{\nu} \big) $$
Using $(\Lambda^{-1})^{\mu}{}_{\nu} = \delta^{\mu}{}_{\nu} - \omega^{\mu}{}_{\nu} + o(\omega)$ on the r.h.s. one obtains
$$ \Big(\mathsf{Id} - \frac{i}{2\hbar} \omega^{\mu\nu} \boldsymbol{\mathsf{J}}_{\mu\nu}\Big)\cdot \varphi (x^{\mu}) = \varphi \Big( (\delta^{\mu}{}_{\nu} - \omega^{\mu}{}_{\nu} + o(\omega))\, x^{\nu} \Big) \\
= \varphi (x^{\mu})\, -\, \omega^{\mu}{}_{\nu}\, x^{\nu}\, \frac{\partial \varphi}{\partial x^{\mu} } = \varphi (x^{\mu})\, -\, \omega^{\mu\nu}\, x_{\nu}\, \partial_{\mu} \varphi
$$
and thus
$$ \frac{1}{2i \hbar} \omega^{\mu\nu} \boldsymbol{\mathsf{J}}_{\mu\nu}\cdot \varphi = -\, \omega^{\mu\nu}\, x_{\nu}\, \partial_{\mu} \varphi$$
Of course there is a sum over all $\mu \nu$. As presented here $\boldsymbol{\mathsf{J}}_{\mu\nu}$ cannot be totally determined, but one can choose it antisymmetric under exchange $\mu \leftrightarrow \nu$ (for a fixed choice $\mu_0\nu_0$, $\boldsymbol{\mathsf{J}}_{\mu_0\nu_0}$ is a differential operator, for which antisymmetry makes no sense) because a symmetric part would not contribute, by antisymmetry of $\omega^{\mu\nu}$. Hence (antisymmetrizing the r.h.s.)
$$ \frac{1}{2i \hbar} \boldsymbol{\mathsf{J}}_{\mu\nu} = \frac{1}{2} \Big( - x_{\nu}\, \partial_{\mu} + x_{\mu}\, \partial_{\nu}\Big)\quad \Longleftrightarrow\quad \boldsymbol{\mathsf{J}}_{\mu\nu} = i\hbar \Big(x_{\mu}\, \partial_{\nu} - x_{\nu}\, \partial_{\mu}\Big) \quad (Eq. 3)$$
Similarly the generator of translations is found to be $\ \boldsymbol{\mathsf{P}}_{\mu}= - i\hbar \partial_{\mu}$
Question: I was double checking that the commutation relations of this representation of the Poincaré algebra did coincide with those of the defining representation (up to the $i\hbar$ factor) but THEY DIFFER BY A MINUS SIGN!! Where does it come from?
Relation in the defining rep.:
$$\begin{gather}
[\mathbf{P}_{\mu}, \mathbf{P}_{\nu}] = 0\\
\left[\mathbf{J}_{\mu\nu}, \mathbf{P}_{\rho} \right] = \eta_{\nu\rho}\, \mathbf{P}_{\mu} - \eta_{\mu\rho}\, \mathbf{P}_{\nu}\\
[\mathbf{J}_{\mu\nu}, \mathbf{J}_{\rho\sigma}] = \eta_{\mu\sigma}\, \mathbf{J}_{\nu\rho} - \eta_{\nu\sigma}\, \mathbf{J}_{\mu\rho} + \eta_{\nu\rho}\, \mathbf{J}_{\mu\sigma} - \eta_{\mu\rho}\, \mathbf{J}_{\nu\sigma}
\end{gather}$$
($(Eq .1)\ \Leftrightarrow\ \mathbf{\Lambda}= \mathbf{Id}_4 + \omega^{\mu}{}_{\nu}\, \mathbf{E}_{\mu}{}^{\nu} = \mathbf{Id}_4 + \omega^{\mu\nu}\, \eta_{\nu\rho} \mathbf{E}_{\mu}{}^{\rho} = \mathbf{Id}_4 + \frac{1}{2}\omega^{\mu\nu}\, \mathbf{J}_{\mu\nu}$ $\ \Rightarrow\ \mathbf{J}_{\mu\nu} = \eta_{\nu\rho} \mathbf{E}_{\mu}{}^{\rho} - \eta_{\mu\rho} \mathbf{E}_{\nu}{}^{\rho} $ with $\mathbf{E}_{\alpha}{}^{\beta}$ the $4\times 4$ matrix with $1$ on line $\alpha$, column $\beta$ and $0$ elsewhere.)
I also double checked these relations because they depend on conventions (Calculate using the inclusion $\mathcal{P}\subset GL_5(\mathbb{R})$)! These are associated to the following parametrization
$$\mathbf{R}_{z}(\epsilon) = \begin{pmatrix}
1 & 0 & 0 & 0\\
0 & \cos \epsilon & - \sin \epsilon & 0\\
0 & \sin \epsilon & \cos \epsilon & 0\\
0 & 0 & 0 & 1
\end{pmatrix}$$
$$ \mathbf{\Lambda}_x \Big(\epsilon =\beta = \frac{v}{c}\Big) = \begin{pmatrix}
\gamma=(1-\epsilon^2)^{-1/2} & -\gamma\beta & 0 & 0\\
-\gamma\beta & \gamma & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}$$
of rotations and boosts whose generators $\mathbf{J}^z, \mathbf{K}^x$ are related to the $\mathbf{J}_{\mu\nu}$ by
$$\mathbf{J}^i := - \frac{1}{2}\, \epsilon^{ijk}\, \mathbf{J}_{jk}\ ,\ \mathbf{K}^i := \mathbf{J}_{0i} \quad \Longleftrightarrow\quad \mathbf{J}_{ij} = \epsilon_{ijk}\, \mathbf{J}^k\ , \ \mathbf{J}_{0i} = \mathbf{K}^i$$
with the convention that $\epsilon^{ijk}= - \epsilon_{ijk}$ (because I use $\eta =\mathrm{diag}(1,-1,-1,-1)$)!!! Formally, if one pretends the generators are numbers then
$$\boldsymbol{\mathsf{J}}_{\mu\nu} = \begin{pmatrix}
0 & \boldsymbol{\mathsf{K}}^1 & \boldsymbol{\mathsf{K}}^2 & \boldsymbol{\mathsf{K}}^3 \\
-\boldsymbol{\mathsf{K}}^1 & 0 & \boldsymbol{\mathsf{J}}^3 & -\boldsymbol{\mathsf{J}}^2 \\
-\boldsymbol{\mathsf{K}}^2 & -\boldsymbol{\mathsf{J}}^3 & 0 & \boldsymbol{\mathsf{J}}^1 \\
-\boldsymbol{\mathsf{K}}^3 & \boldsymbol{\mathsf{J}}^2 & -\boldsymbol{\mathsf{J}}^1 & 0
\end{pmatrix} $$
(Remark: I again "triple" checked, not the same as between $F_{\mu\nu}$ and $\mathbf{E}, \mathbf{B}$. Rather $-\mathbf{B}$...)
Cancelling the $i\hbar$, one finds for example
$$\left[\frac{1}{i\hbar}\mathbf{J}^1, \frac{1}{i\hbar}\mathbf{P}_2\right] = \left[\big(x_2\, \partial_3 - x_3\, \partial_2\big) , -\partial_2\right] = \partial_3 = -\frac{1}{i\hbar}\mathbf{P}_3$$
THERE IS AN EXTRA MINUS SIGN
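The commutator computed above can be reproduced with a quick symbolic sketch (mine, not part of the original post). The symbols stand for the coordinates appearing in the differential operators; the calculation is insensitive to the upper/lower index bookkeeping:

```python
import sympy as sp

x2, x3 = sp.symbols('x2 x3')
f = sp.Function('f')(x2, x3)

def J1(g):
    # J^1 / (i*hbar) acting as x_2 d_3 - x_3 d_2
    return x2 * sp.diff(g, x3) - x3 * sp.diff(g, x2)

def P2(g):
    # P_2 / (i*hbar) acting as -d_2
    return -sp.diff(g, x2)

commutator = sp.simplify(J1(P2(f)) - P2(J1(f)))
# commutator comes out as d_3 f, i.e. -(1/(i*hbar)) P_3 acting on f:
# exactly the extra minus sign relative to the defining representation
```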
Answer: I finally got it!! Here is the detail I missed
$$x_i := \eta_{i\mu}\, x^{\mu} = - x^i\quad \text{and}\quad \partial_{\mu}:= \frac{\partial}{\partial x^{\mu}} $$
(again, convention $\eta= \mathrm{diag}(1,-1,-1,-1)$) so that
$$ \partial_0\, x_0=1 \quad \text{but}\quad \partial_i\, x_i = - \partial_i\, x^i = -1$$
Anyway I checked it on a simple case, $\varphi\in C^{\infty}(\mathbb{R}^2)$:
$$\mathbf{R}_{\epsilon}^{-1}:=\begin{pmatrix}
\cos \epsilon & -\sin\epsilon \\ \sin \epsilon & \cos\epsilon
\end{pmatrix}^{-1}= \begin{pmatrix}
\cos \epsilon & \sin\epsilon \\ -\sin \epsilon & \cos\epsilon
\end{pmatrix} =\mathbf{Id} - \epsilon\, \mathbf{J} + o(\epsilon)$$
with $\ \mathbf{J} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$. Now for the representation on $C^{\infty}(\mathbb{R}^2)$:
$$ \boldsymbol{\mathsf{S}}_{\epsilon}\cdot \varphi \begin{pmatrix} x\\ y \end{pmatrix} = \varphi \left(\mathbf{R}_{\epsilon}^{-1}\cdot \begin{pmatrix} x\\ y \end{pmatrix} \right) =\varphi \begin{pmatrix} x + \epsilon\, y + o(\epsilon)\\ -\epsilon\, x +y + o(\epsilon)\end{pmatrix} $$
$$\Longleftrightarrow\quad\left(\boldsymbol{\mathsf{Id}} + \epsilon\, \boldsymbol{\mathsf{J}} \right)\cdot \varphi \begin{pmatrix} x\\ y \end{pmatrix} = \varphi \begin{pmatrix} x\\ y \end{pmatrix} + \epsilon\, y\, \partial_x \varphi \begin{pmatrix} x\\ y \end{pmatrix} - \epsilon\, x\, \partial_y \varphi \begin{pmatrix} x\\ y \end{pmatrix}$$
$$\Longleftrightarrow\quad \boldsymbol{\mathsf{J}} = y\, \partial_x\ -\, x\, \partial_y $$
which is supposed to correspond to $\boldsymbol{\mathsf{J}}^3 = \boldsymbol{\mathsf{J}}_{12}= x_1\, \partial_2\ -\, x_2\, \partial_1 $. It does because of the first eq. of the answer! | {
"domain": "physics.stackexchange",
"id": 40699,
"tags": "representation-theory, lie-algebra, calculus, poincare-symmetry, matrix-elements"
} |
Why isn't $\mu$ differentiated here? | Question: McIntyre Quantum mechanics while describing Stern-Gerlach experiment
The results of the experiment suggest an interaction between a neutral particle and a magnetic field. We expect such an interaction if the particle possesses a magnetic moment $\boldsymbol{\mu}$. The potential energy of this interaction is $E=-\mu \cdot \mathbf{B}$, which results in a force $\mathbf{F}=\nabla(\boldsymbol{\mu} \cdot \mathbf{B})$. In the Stern-Gerlach experiment, the magnetic field gradient is primarily in the z-direction, and the resulting z-component of the force is
$$
\begin{aligned}
F_{z} &=\frac{\partial}{\partial z}(\boldsymbol{\mu} \cdot \mathbf{B}) \\
& \cong\mu_{z} \frac{\partial B_{z}}{\partial z}
\end{aligned}
$$
Why wasn't $\boldsymbol{\mu}$ differentiated in $$
\begin{aligned}
F_{z} &=\frac{\partial}{\partial z}(\boldsymbol{\mu} \cdot \mathbf{B}) \\
& \cong \mu_{z} \frac{\partial B_{z}}{\partial z}~?
\end{aligned}
$$
Won't $\boldsymbol{\mu}$ depend on $z$ as the particle travels through space?
Answer: $$\mathbf{F}=\nabla(\boldsymbol{\mu} \cdot \mathbf{B})$$
is a slightly facetious way of putting it. The electron's magnetic moment $\mu$ is a constant, independent of the other quantities in the Stern-Gerlach experiment, so it's the same as writing:
$$F_z=\mu \frac{\partial B_z}{\partial z}$$ | {
"domain": "physics.stackexchange",
"id": 85093,
"tags": "quantum-mechanics, vectors, calculus"
} |
Parametric and covariant expressions for the acceleration vector | Question: I am reading S. Neil Rasband book about Classical Dynamics. In the first chapter, there are two different forms of the acceleration:
What he calls the "intrinsic". Given a trajectory with parameter
$s(t)$, considers $x(s)$ and $\dot{x}(s)$, then:
$$\boldsymbol{a}(t)=
\ddot{s}\hat{\boldsymbol{\tau}}+\frac{v^2}{R}\hat{\boldsymbol{n}}$$
where $\hat{\boldsymbol{\tau}}$ is the tangent vector to the
trajectory, $v=\dot{s}$ the speed along the trajectory, $R$ is the
radius of curvature and $\hat{\boldsymbol{n}}$ is the normal vector.
And the covariant form:
$$a^{i}=\frac{d^{2}x^{i}}{dt^{2}}+\Gamma^{i}_{jk}\frac{dx^{j}}{dt}\frac{dx^{k}}{dt}$$
where $\Gamma^{i}_{jk}$ is a Christoffel Symbol.
I know these are two ways of describing the same thing, one does not deal with coordinates but a trajectory, and the other one is valid for any coordinate system.
My question is the following: Is there a way of going from one description to the other? What is the explicit relation between them (if there is any)?
Answer: The "Instrinsic" acceleration:
\begin{align*}
\vec{a} & =\ddot{s}\,\vec{\hat{\tau}}+\frac{v^2}{|R|}\vec{\hat{n}}\\
\text{with:}\quad\\
\frac{1}{|R|}&=|k|\,\,,\text{$k$ curvature}\,,\\
\vec{\hat{n}}&=\frac{\vec{k}}{|k|}\,,\\
v&=\frac{ds}{dt}\,,\\
\tau&=\tau(s)\,,\Rightarrow\quad |\tau|=1\\\\
\vec{a} & =\ddot{s}\,\vec{{\tau}}+v^2\,\vec{{k}}\\\\
&\text{with:}\\
\vec{\tau}&=\frac{d\vec{r}}{ds}\\\vec{k}&=\frac{d^2\vec{r}}{ds^2}\,
\Rightarrow\\\\
\quad\vec{a}&=\frac{d\vec{r}}{ds}\,\ddot{s}+\frac{d^2\vec{r}}{ds^2}
\,\dot{s}^2 \tag{1}
\end{align*}
The position vector to the geodetic line is :
\begin{align*}
\vec{r}&=
\begin{bmatrix}
x(s) \\
y(s) \\
z(s) \\
\end{bmatrix}\,\Rightarrow\\
\vec{v}&=\frac{d\vec{r}}{dt}=\frac{d\vec{r}}{ds}\,\frac{ds}{dt}\\
\vec{a}&=\frac{d^2\vec{r}}{dt^2}=\frac{d}{dt}
\left(\frac{d\vec{r}}{ds}\,\dot{s}\right)= \frac{d\vec{r}}{ds}\ddot{s}+\frac{d^2\vec{r}}{ds^2}
\,\dot{s}^2 \tag{2}
\end{align*}
So equation (2) is identical to equation (1) $\checkmark$
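The equality of (1) and (2) is just the chain rule, and it can be spot-checked symbolically with sympy on a concrete component; $r(s)=s^3$ below is an arbitrary illustrative choice, not from the original answer:

```python
import sympy as sp

t = sp.symbols('t')
s = sp.Function('s')(t)

# A concrete, arbitrarily chosen component r(s) = s**3 for illustration
r = s**3

lhs = sp.diff(r, t, 2)  # d^2/dt^2 of r(s(t)), computed directly
# dr/ds * sddot + d^2r/ds^2 * sdot^2, with dr/ds = 3 s^2 and d^2r/ds^2 = 6 s
rhs = 3*s**2 * sp.diff(s, t, 2) + 6*s * sp.diff(s, t)**2
```

Both sides agree identically, which is the content of equations (1) and (2).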
But the covariant equation is the ODE for the geodetic line $s(t)$
with:
\begin{align*}
&\frac{d\vec{r}}{ds}\ddot{s}+\frac{d^2\vec{r}}{ds^2}
\,\dot{s}^2=0\,\Rightarrow\\
&\left(\frac{d\vec{r}}{ds}\right)^T\left(\frac{d\vec{r}}{ds}\ddot{s}+\frac{d^2\vec{r}}{ds^2}
\,\dot{s}^2\right)=0\,,\text{solve for $\ddot{s}$}\\
&\ddot{s}+\underbrace{g^{-1}\left(\frac{d\vec{r}}{ds}\right)^T\frac{d^2\vec{r}}{ds^2}}_{\Gamma^i_{jk}}
\,\dot{s}^2=0\quad,g=\left(\frac{d\vec{r}}{ds}\right)^T\,\left(\frac{d\vec{r}}{ds}\right)\,,\text{$g$ is the Metric}
\end{align*} | {
"domain": "physics.stackexchange",
"id": 49339,
"tags": "classical-mechanics, acceleration, curvature, covariance"
} |
Is the quiescent centre only found in monocot roots? | Question: I read that the quiescent centre is present between the dermatogen and calyptrogen. As calyptrogen is only present in monocot root, does that mean quiescent centre is only found in monocot roots?
Answer: Note that quiescent centers are typically defined by cell division activity, not anatomical placement, and are not confined to monocots.
For example, this paper discusses the quiescent center in Arabidopsis roots1.
Reference:
1: Doerner, P. (1998). Root development: quiescent center not so mute after all. Current Biology, 8(2), R42-R44. | {
"domain": "biology.stackexchange",
"id": 9990,
"tags": "botany, plant-anatomy"
} |
Projectile with air resistance from a given height | Question: Objects take the same time to fall a given height independent of their horizontal speed when air resistance is ignored; is this also true when air resistance is not ignored? In the presence of air resistance, the drag coefficient needs to be considered, and also the fact that air resistance increases with increasing speed. So, if three identical objects are shot with three different initial speeds, won't they reach the ground at slightly different times? Can anyone please explain?
Answer: I am still not sure why there won't be any drag in this case, can anyone explain in layman's terms?
I will venture to give it try. There will be air drag both horizontally and vertically, but that air drag force is always in the direction opposing the direction of motion.
There are basically two forces acting on a projectile in air: Gravity and air drag (air resistance).
The air drag force is directed so as to oppose the direction of motion. Gravity is always directed downward.
The thing that affects air time (time to fall from the initial height to the ground) is the downward component of the acceleration of the projectile. The things that affect the downward acceleration are the downward force of gravity, $mg$ (the object's weight, which we will consider constant for the height involved), and the upward air drag force opposing its downward motion, which we can call $D$. So since $F=ma$ or $a=\frac{F}{m}$, the downward vertical acceleration of the object will be
$$a_{vertical}=-\frac{mg-D}{m}$$
Where $D$ is the upward air drag force. I need to emphasize that $D$ is not a constant. It involves many variables. Perhaps most importantly, it is a function of velocity and can even vary as the square of the velocity. All other factors being equal, the drag force increases with increasing velocity until the drag force equals the gravitational force and the velocity becomes constant (the so-called terminal velocity).
There are no horizontal forces acting on the projectile other than the horizontal air drag force, which opposes its horizontal motion, causing it to decelerate. Therefore the horizontal acceleration is
$$a_{horizontal}=\frac{-D}{m}$$
Bottom line. The horizontal and vertical accelerations, and therefore velocities, are basically independent of one another. The different initial horizontal velocities of the cannon balls should not affect the air time (falling time) of the cannon balls. They will, however, affect the distance travelled by the different balls.
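This independence can be illustrated numerically. The sketch below (mine, not from the original answer) assumes a *linear* drag model, with drag proportional to velocity, under which the vertical equation indeed never sees the horizontal velocity; the drag constant, height, and time step are arbitrary illustrative values:

```python
def fall_time(v0x, height=10.0, b_over_m=0.1, g=9.81, dt=1e-4):
    # Euler integration of a falling projectile with linear drag
    # (drag force proportional to velocity); parameter values are
    # illustrative assumptions, not taken from the question.
    vx, vy, y, t = v0x, 0.0, height, 0.0
    while y > 0.0:
        vx += -b_over_m * vx * dt          # horizontal drag slows vx
        vy += (-g - b_over_m * vy) * dt    # gravity plus vertical drag
        y += vy * dt
        t += dt
    return t

t_slow = fall_time(v0x=1.0)
t_fast = fall_time(v0x=100.0)
# Under linear drag the vertical update never involves vx, so the two
# fall times come out identical; with quadratic drag (D ~ v^2 directed
# along the velocity vector) the components couple and the times differ.
```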
In closing I suspect based on the comments going back and forth others may disagree with me. I have an open mind and am amenable to improving this answer based on sound technical input from others
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 58700,
"tags": "newtonian-mechanics, projectile, drag"
} |
rawlog_mrpt_to_bagfile | Question:
Hi all,
I need 2D laser data of horizontal terrain, but I am unable to get it, so I am using data from (mrpt.org) in rawlog format. Can anyone please tell me how to convert the rawlog file into a bag file or input data?
Originally posted by ravi on ROS Answers with karma: 11 on 2011-06-09
Post score: 0
Original comments
Comment by KoenBuys on 2011-06-09:
Could you also tell what kind of data you are searching after? Perhaps some users have this laying around.
Answer:
There are conversions between ROS data types and some mrpt data types in the mrpt-ros-pkg repo
The documentation is quite sparse, but I do remember seeing conversion methods which you could use in a custom program.
Originally posted by tfoote with karma: 58457 on 2011-06-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 5804,
"tags": "laser, rosbag, laserscan"
} |
Rotational Motion (Axe and Grindstone) | Question: You have a grindstone that is 90.0 kg, has a radius of 0.34 m, and is turning at 90 rpm. You press a steel axe against it with a radial force of 20.0 N. Assuming that the kinetic coefficient of friction between steel and stone is 0.2, (a) calculate the angular acceleration of the grindstone, (b) calculate the turns made by the stone before it comes to a stop.
I can’t find the mistake in my working. My answer for (a) is half what it should be which in turn affected my answer for (b) by making it twice what it should be.
Answer: You cannot use $F=ma$ when working with angular acceleration. You have to use $\tau=I \alpha$. $\text{ } \tau$ is torque, tangent-force times radius. Alpha, $\alpha$, is angular acceleration, $\frac{d \omega}{dt}$. $\text{ } I$ is moment of inertia. Look for the formula for the $I$ of a disc.
That’s all the help I’m allowed to give. | {
"domain": "physics.stackexchange",
"id": 81875,
"tags": "homework-and-exercises, friction, rotational-kinematics"
} |
Is the Klein bottle a good analogy to the relation between the T-tubule and sarcolemma? | Question: I am not quite seeing how the T-tubule and sarcolemma are connected. It says that the T-tubule is an "invagination" of the sarcolemma, which is sarcolemma folded inward to form a T-tubule
Can someone verify whether the following picture is a good presentation of this relationship?
Answer: If we're looking at diagrams for the t-tubule with respect to the sarcolemma, we see the two are actually fused:
The function therein is to allow depolarization to penetrate the interior of the cell quickly. The surface isn't transected anywhere, it simply folds inward. You can visualize this by sort of pushing a hole through some putty with a finger, and making a hole to the other side. Really no need to upvote me here, just easier to paste images in answers. | {
"domain": "biology.stackexchange",
"id": 3657,
"tags": "muscles"
} |
Why is the colour-index measurement used? | Question: So I'm learning about the colour-index currently and what is confusing me is why this is useful. The colour of a star is dependent on the temperature, so wouldn't astronomers have a natural idea of what colour a star would be based on its temperature, making the colour index useless? My thought is that perhaps it provides more information about the contents of the surface of the star than the temperature does, but once again wouldn't we automatically know this based on the temperature alone?
Answer: You can't measure the temperature of a star.
What you can measure is the properties of the light coming from the star. One way to do this is to view the star in different coloured filters, and compare the brightness of the star. This gives the colour index.
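As a toy illustration (not from the original answer), a colour index is literally a difference of magnitudes measured through two filters; the solar values below are approximate textbook numbers, assumed here only for illustration:

```python
def colour_index(m_blue, m_visual):
    # B - V: the difference of magnitudes through two filters;
    # smaller (or negative) values mean a bluer, hence hotter, star
    return m_blue - m_visual

# Approximate textbook values, assumed for illustration:
# the Sun has B ~ 5.48 and V ~ 4.83 (absolute magnitudes), so B - V ~ 0.65
sun_bv = colour_index(5.48, 4.83)
```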
So the colour index is what is actually measured - and the temperature is deduced from that. | {
"domain": "astronomy.stackexchange",
"id": 6631,
"tags": "color-index"
} |
How to use hydro and groovy Both together? | Question:
I want to use both the Hydro and Groovy ROS desktop installs, but when I use the command below to switch from ROS Hydro to ROS Groovy, nothing happens. What's the problem?
#source /opt/ros/groovy/setup.bash
Originally posted by programmer on ROS Answers with karma: 61 on 2014-01-21
Post score: 0
Original comments
Comment by bchr on 2014-01-22:
Could you clarify your problem?
Comment by programmer on 2014-01-22:
I wrote my code in Groovy; now I have installed Hydro and I can't run my code anymore.
I want to run my code on Groovy but, by default, ROS runs it on Hydro...
What's the problem?
What can I do?
Comment by bchr on 2014-01-22:
Try to explain exactly what you are running, how, and what the errors are. If you do not give any detail, we cannot help you.
Comment by programmer on 2014-01-22:
I want to run hector_slam on Groovy but ROS always runs it on Hydro.
Answer:
Hello!
Note that you might not necessarily be able to run any Groovy program on Hydro (or any Hydro program on Groovy), but assuming it does run, you'd need to build it with your target installation sourced. To clarify, an example where your package is in the directory ~/catkin_workspace:
Every time you want to switch installations you'll need to do all of these:
source /opt/ros/groovy/setup.bash or source /opt/ros/hydro/setup.bash
cd ~/catkin_workspace
rm -r devel
rm -r build
catkin_make
source devel/setup.bash
rosrun my_package do_cool_stuff
Note that you need to remove devel and build (actually probably just devel, but I'm not sure about that one) because they contain references to the current installation, i.e. they are Groovy or Hydro specific.
-Tim
Originally posted by Tim Sweet with karma: 267 on 2014-01-22
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by programmer on 2014-01-22:
Thank Tim for your good answer... :-)
but I never use catkin_make; I always use rosmake to build anything in ROS. Does that make a difference?
Comment by Tim Sweet on 2014-01-24:
I've personally never used rosmake, but you should be able to just substitute it in there | {
"domain": "robotics.stackexchange",
"id": 16721,
"tags": "ros"
} |
Why aren't all conductors always charged? | Question: If you place a conductor beside an insulator, the insulator will become negatively charged and the conductor will become positively charged. Air is an insulator. So why don't all conductors placed in air automatically become positively charged and the air around it become negatively charged?
See, for example, the "Cause" section on the Triboelectric effect Wikipedia page.
Answer: The triboelectric effect isn't a result of placing two materials next to each other, it comes as a result of rubbing them together. This is important because the two materials do not naturally want to become charged, you have to add energy, typically in the form of the friction that comes from rubbing. This overcomes the activation energy, so to speak, that is required for the electrons to jump from one material to the other. Everyday interactions between objects and air molecules are not energetic enough to cause this effect. | {
"domain": "physics.stackexchange",
"id": 11650,
"tags": "electromagnetism"
} |
What is the evolutionary advantage of menstruation? | Question: I was wondering why women have fertile periods. Based on some simple reasoning I would expect that if women were always fertile this would increase the chance of reproduction (one of the important things in life). But having menstrual periods would decrease this chance because only the period close to ovulation would result in fertilization.
So why did evolution result in women having menstrual cycles?
Answer: Human female ovaries only have a certain number of ova; none are manufactured in adulthood. Once they're gone (that's when menopause occurs), they're gone.
You can also ask, why did evolution result in a limited number of oocytes then? This kind of second guessing of physiology is endless.
In your scenario, an ovum would need to be released every few days in order to be fertile continuously. That alone would decrease the number of years that a woman could be fertile. Since a fetus needs about nine months to gestate, I think once-a-month fertility is more than adequate. The earth's population is already too large, which also testifies to our reproductive success.
If a fetus could gestate in a month, I could see an advantage to constant fertility. But the disadvantages would also be great. How could a woman take care of so many babies? How could she possibly feed, say, six to nine babies a year with only two breasts? (Mammals that frequently have larger litters have more mammary glands.) How could her own body compensate nutritionally not only for the burden of feeding the fetuses but also of the babies?
Asking why didn't something evolve from a teleological standpoint rarely results in a satisfying answer. If constant fertility were advantageous, maybe it would be present. | {
"domain": "biology.stackexchange",
"id": 6275,
"tags": "human-biology, evolution, development"
} |
Angular momentum in string theory | Question: Since strings are extended objects, is all angular momentum in string theory essentially "orbital" angular momentum? Or is there still a kind of intrinsic angular momentum assigned to a string?
Either way, is there anything that prevents the "intrinsic spin" of a particle represented by a string from being arbitrarily large?
Answer: The orbital angular momentum of a string may be arbitrarily large. Whether it should be called "orbital" or "intrinsic" depends on the perspective. The right answer is the formula, such as
$$J_{ij} = \int d\sigma [x_i (\sigma) p_j (\sigma) - p_i(\sigma) x_j(\sigma) + \gamma_{ij}^{ab} \theta_a(\sigma) \theta_b(\sigma)]$$
I added some superstring term, too. The $xp$ terms may be viewed as a density of the orbital angular momentum on the string; the fermionic term is its most direct fermionic generalization. However, both of these terms, and especially the latter, become "intrinsic angular momentum" when you expand the fields $x,p,\theta$ to Fourier modes and interpret the string as a particle with internal oscillations. In particular, the intrinsic spin-1/2 always comes from the quantization and/or excitations of the fermionic degrees of freedom.
The formula says much more than just some confusing dogmatic word "intrinsic" vs "orbital", at least to a person who wants to understand the terms accurately enough - at the level of maths. | {
"domain": "physics.stackexchange",
"id": 772,
"tags": "string-theory, angular-momentum"
} |
What is $\phi '$ in orbital mechanics? | Question: For the last week or so, I have been teaching myself orbital mechanics within the context of Braeunig's Rocket and Space Technology.
I noticed a symbol, $\phi '$, and was wondering what context that was used in? I think it is an angle measurement, but I know that it is almost equal to the semi-major axis on earth.
From the box labeled Geodetic Latitude, Geocentric Latitude, and Declination, found about halfway down after selecting Orbital Mechanics here.
Answer: This appears to be the geocentric latitude ($\psi$), which is the angle between the equatorial plane and the line from the center to the point on the surface of the ellipse. It can be calculated from the geodetic latitude (also known as the geographic latitude) ($\phi$) and the eccentricity of the ellipse ($e$):
$$\psi(\phi)=\tan^{-1}\left(\left(1-e^2\right)\tan\phi\right)$$
If $e=0$, then the two latitudes are the same, because the focus is at the center of the ellipse.
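As a quick numerical illustration of the formula (a hypothetical helper, not part of the original answer):

```python
import math

def geocentric_latitude(phi, e):
    """Geocentric latitude psi from geodetic latitude phi (radians)
    and eccentricity e, via psi = atan((1 - e**2) * tan(phi))."""
    return math.atan((1.0 - e * e) * math.tan(phi))

# With e = 0 (a sphere) the two latitudes coincide
phi = math.radians(45.0)
assert math.isclose(geocentric_latitude(phi, 0.0), phi)

# For an oblate body (e > 0) the geocentric latitude is slightly smaller
e_earth = 0.0818  # approximate first eccentricity of the Earth ellipsoid
psi = geocentric_latitude(phi, e_earth)
print(math.degrees(psi))  # a bit less than 45 degrees
```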
Image courtesy of Wikipedia user Wmc824 under the Creative Commons Attribution 3.0 Unported license.
It might make more sense to use this latitude to describe the position of an orbiting object, but the vector from the focus to the object's position $(r,\theta)$ is perpendicular to the tangential component of the object's velocity vector, meaning that the geodetic latitude may be better. | {
"domain": "astronomy.stackexchange",
"id": 2097,
"tags": "orbital-mechanics"
} |
Why do we write "Ignore the input" when describing an Enumerator? (Sipser Chapter 3) | Question: The theorem and its proof are given below:
But I am wondering why we write "Ignore the input" when describing an Enumerator. Could anyone explain this for me, please?
Answer: It's just an explicit statement that the behaviour of the machine doesn't depend on its input in any way. This could be inferred from the rest of the machine's description, but explicitly saying it makes it clear that it's not a mistake. | {
"domain": "cs.stackexchange",
"id": 11542,
"tags": "complexity-theory, turing-machines, computability"
} |
Problems with "How to use a PCL tutorial in ROS" | Question:
Hello! I'm new to Ubuntu, ROS, and PCL. This is my problem: I followed the tutorial from the ROS page and everything looks good, but when I write rosrun my_pcl_tutorial example input:=/camera/depth/points, nothing happens (I'm using a Kinect, so I changed the input). I then used rviz to see the output and the input, and I can see something: I added a Camera and a PointCloud2 display in rviz, but I'm not sure about the images that rviz shows in them, and I don't know whether the image in PointCloud2 has the voxel filter applied. I hope that someone can help me, please. Thanks.
Originally posted by App on ROS Answers with karma: 16 on 2013-09-26
Post score: 0
Original comments
Comment by georgebrindeiro on 2013-09-26:
Could you organize your question a little better? I don't understand what your question is. Please cite specifically what behavior you expected to happen, what behavior is being observed, and what question you want answered.
Answer:
Sorry. I could not view the tutorials, but I was finally able to use rviz and run the program that publishes in the sensor_msgs/PointCloud2 format. To view it, we only have to add a "PointCloud2" display in rviz and then select the topic to subscribe to. Now I have another problem: I cannot see anything if I use the pcl/PointCloud format. I run the program with rosrun my_pcl_tutorial ..., then open rviz and add a PointCloud2 display, but the "output" topic that is supposed to appear is not listed. What is the problem?
Thanks =)
Originally posted by App with karma: 16 on 2013-10-01
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 15664,
"tags": "ros, pcl"
} |
How to get the external (cluster) job id and not internal snakemake job id in cluster log files? | Question: Background I am using snakemake and I want to add the external (cluster) jobid to my cluster output and error log files. Below is my shell script.
snakemake --configfiles config/config_resources.yml config/config_parameters.yml --latency-wait 60 --use-conda --printshellcmds --jobname PipeName_{name}.jobid{jobid} --jobs 300 --cluster "bsub -W 480 -n {threads} -M {resources.mem_gb}G -R 'rusage[mem={resources.mem_gb}G]' -o logs/{name}_{jobid}.out -e logs/{name}_{jobid}.err".
But I only get the internal one from snakemake in the log name (e.g. 1 to 5), instead of the one from the cluster.
Question How do I get the external job id in there and not the local one made by snakemake? Or where in my shell script did I go wrong?
Cheers and thanks in advance.
Answer: bsub supports some filename placeholders using a percent sign, like %J for the LSF job ID, so you can use that:
snakemake ... --cluster "bsub ... -o logs/{name}_%J.out -e logs/{name}_%J.err"
That way snakemake will format the cluster command string with its wildcards (like name) as it constructs the bsub command, and then bsub will put the job ID in place in those filenames. Since Snakemake just sees that as one big command to run for the cluster and doesn't know the details other than formatting the text, I think the two shouldn't clash. | {
"domain": "bioinformatics.stackexchange",
"id": 2275,
"tags": "snakemake, hpc"
} |
Estimating angular speed from position for control purposes | Question: I am new to robotics; however, I am designing a PID to control the angular velocity (azimuth, elevation) of a couple of Faulhaber motors.
The input to the PID controller is the actual angular velocity, which is not directly observed, though, since it is derived from the position of each motor at time $t$.
The PID sample period is approx. 30 ms, whereas the input data rate from the joystick is approx. 300 samples/s, corresponding to a sample period of 3.33 ms. The joystick input gets transformed into the desired angular speed, which the PID will control.
I was initially filtering position data using a normal 2D linear Kalman Filter, but the angular velocity is not linear (by formula), hence I switched to Extended Kalman filtering.
My questions are the following:
Is this latter approach that makes use of EKF correct?
Which are the parameters that I have to check in order to properly set the update rate of the PID loop?
Thx in advance!
Answer: Why did you switch to EKF? A KF is standard for estimating the angular rate of an electrical motor, whose transfer function is indeed linear. You might want to consider exploring also the Savitzky-Golay filter as a valid alternative, which is not model based.
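To illustrate the non-model-based route: at polynomial order 1, the Savitzky-Golay idea reduces to a sliding-window least-squares line fit whose slope is the rate estimate. The sketch below is a hypothetical pure-Python illustration (the window length and sample period are assumed values, not from the question):

```python
def velocity_estimate(positions, dt, window=5):
    """Estimate angular velocity from position samples by fitting a line
    (least squares) over a sliding window; the slope is the rate."""
    n = len(positions)
    half = window // 2
    # Closed-form least-squares slope over sample indices x = 0..window-1
    xs = list(range(window))
    x_mean = sum(xs) / window
    denom = sum((x - x_mean) ** 2 for x in xs)
    rates = []
    for i in range(half, n - half):
        ys = positions[i - half:i + half + 1]
        y_mean = sum(ys) / window
        slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / denom
        rates.append(slope / dt)  # convert per-sample slope to per-second rate
    return rates

# Constant 2.0 rad/s rotation sampled every 30 ms (the PID period in the question)
dt = 0.030
pos = [2.0 * dt * k for k in range(40)]
rates = velocity_estimate(pos, dt)
# every estimate recovers the true rate of 2.0 rad/s
```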
Concerning the sample period of the controller, it is normally a good rule to take a frequency separation of at least one decade between the rate of the system and the bandwidth of the input signal. Therefore, the actual sample rate of the joystick input is overkill.
A sample period for the PID of 5-10 ms is usually fine, given the low mechanical cut-off frequency of the motor (higher rates are better, but not required). Then, acquire the joystick at >50 ms. | {
"domain": "robotics.stackexchange",
"id": 668,
"tags": "pid, ekf"
} |
Can we observe the shapes of MOs? | Question: Condensed question formulation:
Is there an experimental method to directly visualise the 3D form of a MO wave function, or at least the electron density associated with it?
Full statement:
Molecular orbital theory is a crowning achievement of modern Chemistry: it is by far the most accurate description of chemical bonding we possess and accounts for the vast majority of related phenomena.
An array of evidence exists to confirm predictions from MO theory. Ab initio computational methods such as Hartree-Fock are able to produce a very accurate description of orbital coefficients and energies, which match experimental data for the energies (via photoelectron spectroscopy etc.) or predicted reaction mechanisms/FMO interactions (in kinetic runs).
But do we have any experimental evidence for the shape of MOs?
I'd presume the most feasible way of observing some measure related to the MO wave function $\Psi$ would be to follow a probabilistic approach and try imaging the electron density assumed to be given by $|\Psi|^2$.
Of course, X-ray diffraction first springs to mind; the notorious issue with that being the phase problem. In my superficial understanding: due to the loss of phase information, in going from the XRD pattern to the electron density isosurface (via ab initio methods), essentially we model some initial guess phase and try to fit it to the intensity data.
Since usually the location of nuclei is of far greater practical interest, the formalisms developed to model the phases (e.g. Hansen-Coppens) tend to focus on charge densities localised on each atom (though they are not necessarily of spherical symmetry). This is useful in actually getting the form of the molecule, but produces a picture different to the expected electronic occupation of delocalised MOs.
Is it possible to instead produce the delocalised picture if our ab initio phase guess is based on a RHF calculation? But even if possible, does this have any more legitimacy than the localised picture? I.e. it seems, as far as XRD is concerned, these are equally possible interpretations of the same intensity data where we just forced some phase information as convenient.
Sorry for the long description, this is an in depth presentation of my thoughts on the issue thus far. Any experimental technique/fundamental aspect of MO behaviour I'm missing would be appreciated. But the bottom line is, as stated in the beginning, can we experimentally visualise the shape of MOs or at least their modulus squared?
Answer: The very question you pose is addressed thoroughly in this open access work:
https://www.nature.com/articles/ncomms9287 .
The short answer to your question is that the electron density can be mapped using a technique akin to diffraction as described in their work. You mentioned a distinction between the electron density and the orbitals, and the orbital is not itself observable; but as you suggested, the electron density can be mapped. | {
"domain": "chemistry.stackexchange",
"id": 14787,
"tags": "quantum-chemistry, molecular-orbital-theory, theoretical-chemistry, orbitals, x-ray-diffraction"
} |
How valid are Koestler’s criticisms of evolutionary theory? | Question: I recently read Arthur Koestler's 1967 book The Ghost in the Machine. In it, Koestler criticises the neo-Darwinian theory of evolution—beneficial random mutations preserved by natural selection—as insufficient to explain the formation of complex forms like eyes and eggs. The issues Koestler has with the theory are ones that I've been trying to wrap my head around since before I read the book, but I'm aware that:
a) the book is half a century old;
b) Koestler was not a biologist or scientist; and
c) neo-Darwinian theory/the modern synthesis seems to have stood the test of time
so I'm wondering how accurate Koestler's account of the theory is, and if he is wrong what the retorts to his claims are. Here are two examples he gives of complex forms:
[The giant panda] has on its forelimbs an added sixth finger, which comes in very ‘handy’ for manipulating the bamboo-shoots which are its principal food [but] that added finger would be a useless appendage without the proper muscles and nerves [and t]he chances that among all possible mutations those which produced the additional bones, muscles and nerves should have occurred independently are of course infinitesimally small.
and
The decisive novelty of the reptiles was that, unlike amphibians, they laid their eggs on dry land...[b]ut the unborn reptile inside the egg still needed an aquatic environment...[i]t also needed a lot of food...[s]o the reptilian egg had to be provided with a large mass of yolk for food, and also with albumen—the white of egg—to provide the water. Neither the yolk by itself, nor the egg-white itself, would have had any selective value...[e]ach change, taken in isolation, would be harmful, and work against survival.
Instead of random mutations and external selection, he suggests that ‘internal selection’ works at all levels, from chemical upwards, to correct ‘misprints’ long before the developed organism is exposed to any sort of external selection. The implication that there must therefore be some plan towards which embryonic development works is supported by two examples:
the growing eye-bud of the embryo is an autonomous holon, which, if part of its tissue is taken away, will nevertheless develop into a normal eye
and
[the fruit fly has a recessive gene that when paired with another in a fertilised egg will produce an eyeless fly.] If now a pure stock of eyeless flies is made to inbreed, then the whole stock will have only the ‘eyeless’ mutant gene, because no normal gene can enter the stock to bring light into their darkness...within a few generations, flies appear in the inbred ‘eyeless’ stock with eyes that are perfectly normal.
His other main point is that evolution takes a zig-zag path, evolving down until reaching an evolutionary dead-ends before retracting to ‘an earlier or more primitive, but also more plastic and less committed stage—followed by a sudden advance in a new direction’. For example:
[A]mphibian...ancestry...goes back to the most primitive type of lung-breathing fish; whereas the apparently more successful later lines of highly specialised gill-breathing fishes all came to a dead end.
and
...the human adult resembles more the embryo of an ape than an adult ape
Is Koestler's science just faulty, or are these valid criticisms that've been resolved since?
Answer: Maybe both. Certainly his understanding of limb development doesn't match modern understanding. The quote you provide seems to indicate that he thought the appearance of an additional digit would require a multitude of coordinated mutations. Our current understanding seems to indicate that it's all about changes in the regulation of genes. For example, in fruit flies mutations in the antennapedia gene can cause antennae to grow where legs should be, or vice versa. Another example is polydactyly (extra toes) in cats, which can be caused by mutations to a single regulatory region.
within a few generations, flies appear in the inbred ‘eyeless’ stock with eyes that are perfectly normal.
I'm not a fruit fly expert, but this seems highly dubious to me. There is a well-studied eyeless mutation, but it's dominant, not recessive. Furthermore, getting two copies of this mutant eyeless gene is lethal at the embryonic stage. This means that a colony of adult eyeless flies is necessarily all heterozygous. That is, they have one copy of the mutated eyeless gene and one copy of the normal eyeless gene. By Mendel's laws, a quarter of the offspring from the completely eyeless generation are going to have two copies of the normal eyeless gene, and so will have normal eyes. I wouldn't be surprised if the flies with normal eyes outcompete the flies without eyes, so eventually the eyeless flies will disappear from the colony. I can't tell you whether Koestler has mis-characterized this gene, or is talking about some other gene altogether. Citation needed.
"domain": "biology.stackexchange",
"id": 10969,
"tags": "evolution, human-evolution"
} |
How to convert time-domain signal to complex envelope? | Question: Matlab and Simulink Communications Toolbox digital demodulators are defined to only work on the complex envelope representation of a baseband signal.
To obtain the time-domain representation of this signal, I believe one takes the real part of the complex envelope.
Given just this real part, how does one convert the time-domain representation of modulated data back into a complex envelope such that a Matlab demodulator will demodulate it?
Application: I'm trying to simulate a simple audio FSK system in Simulink.
Answer: These are the main ideas:
Consider a receiver that picks up a signal $r(t)$. This signal has bandwidth $W$ and is centered on carrier frequency $f_c$.
Using the Hilbert transform, eliminate the negative frequencies of $r(t)$. The resulting signal, $r_+(t)$, is called an analytic signal.
Now, downconvert the analytic signal using a complex exponential, so that the spectrum is centered around 0. This signal, called $\tilde r(t)$, is the complex envelope of $r(t)$. Its spectrum goes from $-W/2$ to $W/2$, and its bandwidth is $B=W/2$.
The complex envelope is almost always complex, but if the spectrum of $r(t)$ is symmetrical around $f_c$, the CE is real. Examples of this are AM DSB and BPSK.
In the transmitter you would do the opposite:
Start with a complex envelope $\tilde s(t)$ that represents the information you want to transmit. Its spectrum should go from $-W/2$ to $W/2$. An example would be a QAM signal with complex symbols.
Upconvert the CE to frequency $f_c$, multiplying by a complex exponential. The result is an analytic signal with only positive frequencies. The signal will be centered around $f_c$ and cover the frequency range from $f_c-W/2$ to $f_c+W/2$.
Take the real part of the analytic signal to convert it to a real signal that can be physically generated and transmitted.
All of these steps can be accomplished in Matlab. Be sure to read the documentation for the hilbert command. I can't comment on the corresponding Simulink blocks, since I don't use them.
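The receiver steps can also be sketched numerically. The snippet below is a self-contained Python/NumPy illustration (not Matlab: it builds the analytic signal via the FFT rather than a hilbert call, and the signal parameters are made-up values for a simple AM example):

```python
import numpy as np

def analytic_signal(x):
    # FFT-based analytic signal: zero the negative frequencies,
    # double the positive ones (equivalent to x + j*Hilbert(x))
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs, fc = 1000.0, 100.0                      # sample rate and carrier (made-up)
t = np.arange(0, 1, 1 / fs)
m = 1.0 + 0.5 * np.cos(2 * np.pi * 5 * t)   # slowly varying real envelope
r = m * np.cos(2 * np.pi * fc * t)          # received passband signal

r_plus = analytic_signal(r)                  # step 1: analytic signal r_+(t)
ce = r_plus * np.exp(-2j * np.pi * fc * t)   # step 2: downconvert -> complex envelope
# For this AM-DSB example the complex envelope is (numerically) the real m(t),
# consistent with the symmetry remark above
```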
Also note that it is very likely that you don't have to do all this. If you can perform all your simulations using the complex envelope, you'll save a lot of computing time. The reason is that the transmitted and received signals need to be sampled at least at a rate $2(f_c+W/2)$, whereas the complex envelope can be sampled at $W$ real samples per second -- a huge difference for large $f_c$. | {
"domain": "dsp.stackexchange",
"id": 3132,
"tags": "matlab, digital-communications"
} |
Why does the heaviest point on a spinning ball tend to become the topmost pole? | Question: I have a mostly hollow, small clear plastic ball. Inside the ball is a weight stuck to the edge but the weight is smaller than the ball so the ball will always come to rest with the weight at the bottom when placed on a smooth surface.
I've been spinning it on my desk with the weight at the top; it spins well despite being top-heavy, I assume like a gyroscope. If I spin it with the weight at the bottom, however, it loses stability and very quickly slows down until it regains stability with the weight at the top again. With further experimentation I found that wherever the weight is, once the ball starts spinning it stabilises with the weight at the top.
What causes the rotational energy to transform into gravitational potential energy?
Any why is the ball more stable with the weight at the top axis than with it at the bottom axis?
Answer: Veritasium has made a video about this, with this as a solution video. Someone also posted this video response, which also has a good explanation.
Even though they use a disk with an unbalanced center of mass, the same still applies to your ball. The rotating ball will try to rotate around its center of mass. However, if this does not align with the point of contact with the floor, then the point of contact will be dragged along in a circle. If there is friction between the floor and the ball, then this will cause the rotating ball to precess.
The reason why this "pushes" the center of mass to the top is better explained in the videos. Although it still seems counter intuitive to me, we have to remember that our intuition is not always right. | {
"domain": "physics.stackexchange",
"id": 13359,
"tags": "quantum-spin, stability, gyroscopes, weight"
} |
Check digital input signals that are sent as an integer using bitwise checking | Question: I have a program that works with hardware via an ActiveX API. The API has events that are triggered by the hardware.
In the hardware there is an IO card. There are 16 inputs and 16 outputs. Changes in the input signals trigger the event handler below.
The event uses an int to show which signal is set/unset. It does this by using a binary mask for each signal:
Signal Mask
INPUT_0 0x01;
INPUT_1 0x02;
INPUT_2 0x04;
INPUT_3 0x08;
...
INPUT_11 0x800
(there are some signals that are used for something else and can't be used as Input)
The signals can be set/unset in any order, but the event will trigger every time a change occurs!
The value, p_nBits, includes the state of all signals. This means that if INPUT_0 is set and then INPUT_1 is set, then p_nBits will be 3, i.e. both signals! But INPUT_0 has already been handled and should not be handled again until it is unset.
I have an integer currentBits that is the current state of the input signals.
Any error in logic?
What can be improved?
Is it possible to make it faster?
// Current state of Input signals
int currentBits = 0;
public void IoPort_sigInputChange(int p_nBits)
{
lock (lockObj)
{
if ((p_nBits & INPUT_0) == INPUT_0)
{
if ((currentBits & INPUT_0) != INPUT_0)
{
currentBits |= INPUT_0;
RaiseRunModeEvent(true);
}
}
else
{
if ((currentBits & INPUT_0) == INPUT_0)
{
currentBits &= ~INPUT_0;
RaiseRunModeEvent(false);
}
}
if ((p_nBits & INPUT_1) == INPUT_1)
{
if ((currentBits & INPUT_1) != INPUT_1)
{
currentBits |= INPUT_1;
RaiseAllPositionsSetEvent(true);
}
}
else
{
if ((currentBits & INPUT_1) == INPUT_1)
{
currentBits &= ~INPUT_1;
RaiseAllPositionsSetEvent(false);
}
}
if ((p_nBits & INPUT_2) == INPUT_2)
{
if ((currentBits & INPUT_2) != INPUT_2)
{
currentBits |= INPUT_2;
RaiseOrderEndedEvent(true);
}
}
else
{
if ((currentBits & INPUT_2) == INPUT_2)
{
currentBits &= ~INPUT_2;
RaiseOrderEndedEvent(false);
}
}
// more checks may occur
}
}
Answer: Firstly, you can simplify the outer else statements:
...
else
{
if ((currentBits & INPUT_0) == INPUT_0)
{
...
}
}
...
to
...
else if ((currentBits & INPUT_0) == INPUT_0)
{
} ...
Secondly, if the check of all masks follows the same pattern as showed, you can create a mask checker method like this one (in C# 7.0):
private void HandleMask(int bitPattern, (int mask, Action<bool> action) maskItem)
{
if ((bitPattern & maskItem.mask) == maskItem.mask)
{
if ((currentBits & maskItem.mask) != maskItem.mask)
{
currentBits |= maskItem.mask;
maskItem.action(true);
}
}
else if ((currentBits & maskItem.mask) == maskItem.mask)
{
currentBits &= ~maskItem.mask;
maskItem.action(false);
}
}
public void IoPort_sigInputChange(int p_nBits)
{
lock (lockObj)
{
var maskItems = new(int mask, Action<bool> action)[]
{
(mask: INPUT_0, action: RaiseRunModeEvent),
(mask: INPUT_1, action: RaiseAllPositionsSetEvent),
(mask: INPUT_2, action: RaiseOrderEndedEvent),
};
foreach (var maskItem in maskItems)
{
HandleMask(p_nBits, maskItem);
}
}
}
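A further micro-improvement, not part of the original answer: XOR the incoming bits with the stored state first, so only the inputs that actually changed are dispatched at all. The idea is language-agnostic; it is sketched here in Python with hypothetical callback names:

```python
INPUT_0, INPUT_1, INPUT_2 = 0x01, 0x02, 0x04

class IoHandler:
    def __init__(self, actions):
        self.current_bits = 0
        self.actions = actions  # maps a signal mask to a callback(bool)

    def on_input_change(self, new_bits):
        changed = new_bits ^ self.current_bits   # only the bits that differ
        self.current_bits = new_bits
        for mask, action in self.actions.items():
            if changed & mask:
                action(bool(new_bits & mask))

events = []
handler = IoHandler({
    INPUT_0: lambda s: events.append(("run_mode", s)),
    INPUT_1: lambda s: events.append(("all_positions", s)),
    INPUT_2: lambda s: events.append(("order_ended", s)),
})

handler.on_input_change(0b001)  # INPUT_0 set -> one event raised
handler.on_input_change(0b011)  # INPUT_1 set, INPUT_0 unchanged -> one event
handler.on_input_change(0b010)  # INPUT_0 unset -> one event
# events == [("run_mode", True), ("all_positions", True), ("run_mode", False)]
```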
Update
In .NET 3.5 you could do it like this:
private void HandleMask(int bitPattern, int mask, Action<bool> action)
{
if ((bitPattern & mask) == mask)
{
if ((currentBits & mask) != mask)
{
currentBits |= mask;
action(true);
}
}
else if ((currentBits & mask) == mask)
{
currentBits &= ~mask;
action(false);
}
}
public void IoPort_sigInputChange(int p_nBits)
{
lock (lockObj)
{
var maskItems = new[]
{
new { mask = INPUT_0, action = (Action<bool>)RaiseRunModeEvent },
new { mask = INPUT_1, action = (Action<bool>)RaiseAllPositionsSetEvent },
new { mask = INPUT_2, action = (Action<bool>)RaiseOrderEndedEvent },
};
foreach (var maskItem in maskItems)
{
HandleMask(p_nBits, maskItem.mask, maskItem.action);
}
}
} | {
"domain": "codereview.stackexchange",
"id": 30024,
"tags": "c#, bitwise"
} |
Construction of a collection of subsets of $\{1,2,\ldots,n\}$ with certain properties | Question: Let $n$ be a large positive integer. Given a collection $\mathfrak S$ of subsets of $[n] := \{1,2,\ldots,n\}$, and a vector $z=(z_1,\ldots,z_n)\in \{\pm 1\}^n$, define
$$
f_{\mathfrak S}(z) := \sum_{\sigma \in \mathfrak S} z_\sigma,
$$
where $z_\sigma := \prod_{j \in \sigma} z_j$. For any $j \in [n]$, let $d_{\mathfrak S}(j) = |\partial_j \mathfrak S|$ be the "degree" of $j$ w.r.t $\mathfrak S$, where
$$
\partial_j \mathfrak S := \{\sigma\setminus\{j\} \mid \sigma \in \mathfrak S, \,j \in \sigma\} = \{s \subseteq [n] \mid j \not \in s\text{ and }s \cup \{j\} \in \mathfrak S\}.
$$
Also define the minimal degree w.r.t $\mathfrak S$ as $d^\star_{\mathfrak S} := \min_{j \in [n]} d_{\mathfrak S}(j)$.
Examples
Pairs. If $\mathfrak S$ is the collection of distinct unordered pairs (i.e doubletons) of elements of $[n]$, then it is easy to see that $\partial_j \mathfrak S$ is the collection of all singletons of $[n]$ except $\{j\}$, and so $d_{\mathfrak S}^\star = n-1$ for all $j \in [n]$.
Simplicial Complex. On the other hand, if $\mathfrak S$ is a simplicial complex, then $\partial_j S$ is the collection of elements of $\mathfrak S$ which don't contain $j$. For example, if $\mathfrak S = K_{n,d}$ is the collection of all subsets of $[n]$ with $d$ or fewer elements (where $d \in [n]$), then $\partial_j \mathfrak S$ is isomorphic to $K_{n-1,d-1}$, and so
$$
d_{\mathfrak S}^\star = |K_{n-1,d-1}| = \sum_{i=0}^{d-1} {n-1 \choose i}.
$$
Monster. Finally, if $1 \le k \le n$, and $G_1,\ldots,G_k$ is a partition of $[n]$, and $\mathfrak S$ is the collection of subsets of $[n]$ which contain exactly one element from each $G_i$, then $d_{\mathfrak S}^\star = p^{k-1}$, where $p := n/k$.
Let us say that $\mathfrak S$ is "nice" if the following three conditions are satisfied:
(1) Exponential Degree: $d_{\mathfrak S}^\star \ge 2^{\alpha n}$ for some constant $\alpha \in (0, 1]$.
(2) Efficiency: $f_{\mathfrak S}(z)$ can be computed in linear time $O(n)$, for any $z \in \{\pm 1\}^n$.
(3) Invariance: $f_{\mathfrak S}(\overline z) = f_{\mathfrak S}(z)$ for any $z,\overline z \in \{\pm 1\}^n$ such that the components of $\overline z$ are a permutation of the components of $z$.
It is clear that the "pairs" example verifies conditions (2) and (3), but not (1). The "simplicial complex" example verifies conditions (1 and 3) or (2 and 3). This is because, thanks to https://cstheory.stackexchange.com/a/52570/44644, $f_{\mathfrak S}$ can be computed in time of order $O(nd)$; if the time complexity is really $\Theta(nd)$ (see Question 2 below), then it can only be linear if $d=O(1)$, which forces $d_{\mathfrak S}^\star$ to be of polynomial order $n^{d-1}$.
Finally, still thanks to the previous reference, for appropriate choice of $k$ (for example assume WLOG that $n$ is even, and take $k=n/2$) the "monster" example verifies conditions (1) and (2), but not (3). Condition (1) is clear to see. For condition (2), simply note that by construction $f_{\mathfrak S}(z) := \sum_{\sigma \in \mathfrak S} z_\sigma = \sum_{i=1}^k \prod_{j \in G_i} z_j$, which definitely takes $O(n)$ time to compute. Condition (3) fails because every partitioning of $[n]$ induces a partial order on $[n]$, and so $f_{\mathfrak S}(\overline z)$ will be different from $f_{\mathfrak S}(z)$ in general.
Question 1
What is an explicit construction of a "nice" $\mathfrak S$ ?
N.B: I'm fine with randomized constructions, which verify the above 3 conditions in an appropriate probabilistic sense.
Question 2
Is $\mathfrak S = K_{n,d}$ efficient ?
Answer: Here's an elaboration on my comment, assuming I didn't get lost in all the notation above:
For $\mathfrak{S}=\mathcal{P}([n])$, the power set of $[n]$, it holds that
\begin{equation*}
f_{\mathfrak{S}}(z) = \sum_{S\subseteq [n]} z^S = 2^n\prod_{i=1}^n \left(\frac{1+z_i}{2}\right),
\end{equation*}
which implies that $f_{\mathfrak{S}}(z)$ is $2^n$ if $z_i=1$ for all $i\in [n]$ and otherwise is zero. As best I can tell, this example trivially satisfies all the criteria above.
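The identity is easy to sanity-check by brute force for small $n$ (a hypothetical Python check, added purely for illustration):

```python
from itertools import combinations
from math import prod

def f_powerset_bruteforce(z):
    # Sum z^sigma over every subset sigma of [n]
    n = len(z)
    total = 0
    for size in range(n + 1):
        for sigma in combinations(range(n), size):
            total += prod(z[j] for j in sigma)  # empty product is 1
    return total

def f_powerset_closed(z):
    # 2^n * prod((1 + z_i)/2) = prod(1 + z_i): 2^n if all z_i = +1, else 0
    return prod(1 + zi for zi in z)

for z in [(1, 1, 1, 1), (1, -1, 1, 1), (-1, -1, -1, -1)]:
    assert f_powerset_bruteforce(z) == f_powerset_closed(z)
```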
The case of $\mathfrak{S}=K_{n,d}$ is slightly trickier, but it can be computed using $O(d)$ arithmetic operations (which seemed to be the measure in the other link as well); in particular, this is without considering the time it takes to read and store large numbers (which seems to be the case in the linked answer as well, so hopefully this is okay). If $z$ has $k$ entries with $-1$, we may assume the first $k$ entries are $-1$ and the rest are $+1$ by invariance. Then looking at each monomial separately, we find
\begin{equation*}
f_{K_{n,d}}(z) = A_{n,d,k,\mathsf{even}}-A_{n,d,k,\mathsf{odd}}
\end{equation*}
where $A_{n,d,k,\mathsf{even}}$ is the number of subsets of $[n]$ of size at most $d$ with even intersection with $\{1,\ldots,k\}$ and analogously for $A_{n,d,k,\mathsf{odd}}$.
To calculate $A_{n,d,k,\mathsf{even}}$, standard combinatorics implies this is just
\begin{equation}
\label{eq:even}
A_{n,d,k,\mathsf{even}} = \sum_{j=0, \text{$j$ even}}^d {k \choose j}\left(\sum_{\ell=0}^{d-j} {n-k \choose \ell}\right) = \sum_{j=0, \text{$j$ even}}^d {k \choose j} S_{n,k,d-j},
\end{equation}
where
\begin{equation*}
S_{n,k,r}\triangleq \sum_{\ell=0}^{r} {n-k \choose \ell}.
\end{equation*}
The key now is to calculate the $S_{n,k,r}$ carefully, using few arithmetic operations. The trick is the recurrence
\begin{equation*}
{m \choose i} = {m \choose i-1}\cdot \frac{m-i+1}{i},
\end{equation*}
one can compute all ${n-k \choose 0},\ldots, {n-k \choose d}$ using $O(d)$ arithmetic operations total. Then, using the recurrence
\begin{equation*}
S_{n,k,r+1} = S_{n,k,r} + {n-k \choose r+1},
\end{equation*}
one can compute each of the $S_{n,k,0},\ldots,S_{n,k,d}$ using $O(d)$ more arithmetic operations. After similarly computing all of ${k \choose 0}, \ldots, {k \choose d}$ using $O(d)$ arithmetic operations, the identity for $A_{n,d,k,\mathsf{even}}$ requires $O(d)$ more arithmetic operations given what has already been calculated. The same thing can be done for $A_{n,d,k,\mathsf{odd}}$. Of course, if one cares about time, it takes $O(n)$ to count the number of $-1$ entries, and if you're concerned about bit complexities, which should all be like $O(d\log n)$, you'll probably get something like $n+\tilde{O}(d^2\log n)$ for total time complexity using fancy multiplication algorithms. | {
"domain": "cstheory.stackexchange",
"id": 5655,
"tags": "ds.algorithms, circuit-complexity, ds.data-structures, linear-algebra, randomized-algorithms"
} |
Can you see yourself in space? | Question: Can one see oneself in space if no nearby star was in view? Is there enough starlight to read once your eyes adjust?
Answer: No.
If you go to a random spot in the visible universe, you will usually be far from any galaxies because the separation between galaxies is large compared to the size of the galaxies themselves.
Since distant galaxies are so dim that we can't even see them, you certainly cannot see your reflection by them. | {
"domain": "physics.stackexchange",
"id": 31011,
"tags": "visible-light, astronomy, space"
} |
Entropy of a single point (TV diagram) | Question: In my thermodynamics class, in a problem regarding a $TV$ diagram where the cycle is a closed rectangle (image at the end), I came across this question:
Calculate the point along the path $d\to a$ where the gas has the same entropy as in point $b$, justifying accordingly.
Now, this was very confusing to me, because what I've learned is that, since entropy is a state function:
it can't be measured at a single point
in a closed cycle (such as this), $\Delta S_{gas}=0$
Could you please tell me if these suppositions are wrong? And if there is a point at which the entropy of the gas equals its entropy at $b$, how do I calculate either one?
Answer:
Now, this was very confusing to me, because what I've learned is that,
since entropy is a state function:
That is correct. Entropy is a system property.
it can't be measured at a single point
While it is not usually given specific values at a given point (one exception is in the steam tables for water), you do not need to calculate a specific value of entropy to solve this problem, so the supposition is irrelevant.
in a closed cycle (such as this), $\Delta S_{gas}=0$
That is correct. Start at any point and if you return to that point $\Delta S=0$.
Could you please tell me if these suppositions are wrong? And if there
is a point at which the entropy of the gas equals its entropy at $b$,
how do I calculate either one?
Supposition 1 is not needed to solve the problem. Supposition 2 is correct. But you don't need to "calculate" the entropy at $b$ to solve your problem. You need to understand what the processes are and how they affect entropy change. We don't solve homework and exercise problems on this site, but as guidance I suggest you ask yourself the following:
What type of process does each segment of the path (d-a, a-b, b-c, c-d) describe?
Which processes involve heat into the system and which involve heat out, and how does that affect entropy change?
At which of the four points (a,b,c, or d) in the cycle is the entropy a maximum? A minimum?
What does that tell you about the entropy of the remaining two points?
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 63972,
"tags": "homework-and-exercises, thermodynamics, entropy"
} |
Electromagnetic Waves in a Dielectric | Question: I read that when an EM wave enters a dielectric material, it keeps the same frequency, while its wavelength is reduced, so that their product is a quantity less than c = 3 × 10^8 m/s.
Why does it keep the frequency and not the wavelength?
Answer: Very briefly, because of boundary conditions.
At the boundary between the first and second media, we know that the tangential component of the E field will be equal on the two sides of the boundary. So if the E field on one side is oscillating at some frequency $\omega$, then the E field on the other side must also be oscillating with frequency $\omega$. | {
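In symbols (standard textbook relations added here for completeness; $n$ is the refractive index and $\lambda_0$ the vacuum wavelength): matching the oscillation on both sides of the boundary fixes the frequency, so the wavelength absorbs all of the change,

$$\omega_1 = \omega_2 \equiv \omega, \qquad v = \frac{c}{n} < c, \qquad \lambda = \frac{v}{f} = \frac{\lambda_0}{n}$$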
"domain": "physics.stackexchange",
"id": 61486,
"tags": "electromagnetism, waves, electromagnetic-radiation, dielectric, electronics"
} |
Quantum Computing - Relationship between Hamiltonian and Unitary model | Question: When developing algorithms in quantum computing, I've noticed that there are two primary models in which this is done. Some algorithms - such as for the Hamiltonian NAND tree problem (Farhi, Goldstone, Gutmann) - work by designing a Hamiltonian and some initial state, and then letting the system evolve according to the Schrödinger equation for some time $t$ before performing a measurement.
Other algorithms - such as Shor's Algorithm for factoring - work by designing a sequence of Unitary transformations (analogous to gates) and applying these transformations one at a time to some initial state before performing a measurement.
My question is, as a novice in quantum computing, what is the relationship between the Hamiltonian model and the Unitary transformation model? Some algorithms, like for the NAND tree problem, have since been adapted to work with a sequence of Unitary transformations (Childs, Cleve, Jordan, Yonge-Mallo). Can every algorithm in one model be transformed into a corresponding algorithm in the other? For example, given a sequence of Unitary transformations to solve a particular problem, is it possible to design a Hamiltonian and solve the problem in that model instead? What about the other direction? If so, what is the relationship between the time in which the system must evolve and the number of unitary transformations (gates) required to solve the problem?
I have found several other problems for which this seems to be the case, but no clear cut argument or proof that would indicate that this is always possible or even true. Perhaps it's because I don't know what this problem is called, so I am unsure what to search for.
Answer: To show that Hamiltonian evolution can simulate the circuit model, one can use the paper Universal computation by multi-particle quantum walk, which shows that a very specific kind of Hamiltonian evolution (multi-particle quantum walks) is BQP complete, and thus can simulate the circuit model.
Here is a survey paper on simulating quantum evolution on a quantum computer. One can use the techniques in this paper to simulate the Hamiltonian evolution model of quantum computers. To do this, one needs to use "Trotterization", which substantially decreases the efficiency of the simulation (although it only introduces a polynomial blowup in computation time). | {
"domain": "cs.stackexchange",
"id": 3073,
"tags": "algorithms, quantum-computing, computation-models"
} |
Clarification wanted for make_step function of Google's deep dream script | Question: From https://github.com/google/deepdream/blob/master/dream.ipynb
import numpy as np  # needed below; implied by the original notebook

def objective_L2(dst):      # Our training objective. Google has since released a way to load
    dst.diff[:] = dst.data  # arbitrary objectives from other images. We'll go into this later.

def make_step(net, step_size=1.5, end='inception_4c/output',
              jitter=32, clip=True, objective=objective_L2):
    '''Basic gradient ascent step.'''
    src = net.blobs['data'] # input image is stored in Net's 'data' blob
    dst = net.blobs[end]

    ox, oy = np.random.randint(-jitter, jitter+1, 2)
    src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift

    net.forward(end=end)
    objective(dst)  # specify the optimization objective
    net.backward(start=end)
    g = src.diff[0]
    # apply normalized ascent step to the input image
    src.data[:] += step_size/np.abs(g).mean() * g

    src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image

    if clip:
        bias = net.transformer.mean['data']
        src.data[:] = np.clip(src.data, -bias, 255-bias)
If I understand what is going on correctly, the input image in net.blobs['data'] is fed forward through the NN up to the layer end. Once the forward pass is complete up to end, it calculates how "off" the blob at the end is from "something".
Questions
What is this "something"? Is it dst.data? I stepped through a debugger and found that dst.data was just a matrix of zeros right after the assignment and then filled with values after the backward pass.
Anyways, assuming it finds how "off" the result of the forward pass is, why does it try to do a backwards propagation? I thought the point of deep dream wasn't to further train the model but "morph" the input image into whatever the original model's layer represents.
What exactly does src.data[:] += step_size/np.abs(g).mean() * g do? It seems like applying whatever calculation was done above to the original image. Is this line what actually "morphs" the image?
Links that I have already read through
https://stackoverflow.com/a/31028871/2750819
I would be interested in what the author of the accepted answer meant by
we take the original layer blob and "enchance" signals in it. What
does it mean, I don't know. Maybe they just multiply the values by
coefficient, maybe something else.
http://www.kpkaiser.com/machine-learning/diving-deeper-into-deep-dreams/
In this blog post the author comments next to src.data[:] += step_size/np.abs(g).mean() * g: "get closer to our target data." I'm not too clear what "target data" means here.
Note I'm cross posting this from https://stackoverflow.com/q/40690099/2750819 as I was recommended to in a comment.
Answer:
What is this "something"? Is it dst.data? I stepped through a debugger and found that dst.data was just a matrix of zeros right after the assignment and then filled with values after the backward pass.
Yes. dst.data is the working contents of the layer inside the CNN that you are trying to maximise. The idea is that you want to generate an image that has a high neuron activation in this layer by making changes to the input. If I understand this correctly though, it should be populated immediately after the forward pass here: net.forward(end=end)
Anyways, assuming it finds how "off" the result of the forward pass is, why does it try to do a backwards propagation? I thought the point of deep dream wasn't to further train the model but "morph" the input image into whatever the original model's layer represents.
It is not training. However, like with training we cannot directly measure how "off" the source that we want to change is from an ideal value, so instead we calculate how to move toward a better value by taking gradients. Back propagation is the usual method for figuring out gradients to parameters in the CNN. There are some main differences with training:
Instead of trying to minimise a cost, we want to increase a metric which summarises how excited the target layer is - we are not trying to find any stationary point (e.g. the maximum possible value), and instead Deep Dream usually just stops arbitrarily after a fixed number of iterations.
We back propagate further than usual, all the way to the input layer. That's not something you do normally during training.
We don't use any of the gradients to the weights. The weights in the neural network are never changed. We are only interested in the gradients at the input - but to get them we need to calculate all the others first.
What exactly does src.data[:] += step_size/np.abs(g).mean() * g do? It seems like applying whatever calculation was done above to the original image. Is this line what actually "morphs" the image?
It takes a single step in the image data along the gradients that we have calculated will trigger more activity in the target layer. Yes this alters the input image. It should be repeated a few times, according to how extreme you want the Deep Dream effect to be. | {
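To make those three answers concrete, here is a self-contained toy sketch (my assumed setup, not the Caffe API: the "layer" is just a fixed linear map `W`, and the L2 objective is the squared norm of its output). The weights never change; only the input is nudged along the input gradient, which is exactly the role of `src.data[:] += step_size/np.abs(g).mean() * g`:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))    # stands in for the frozen network weights
x = rng.standard_normal(16) * 0.1   # stands in for the input image

def layer_activation(x):
    # L2 objective on the "layer" output, analogous to objective_L2
    return 0.5 * np.sum((W @ x) ** 2)

before = layer_activation(x)
for _ in range(10):
    g = W.T @ (W @ x)                 # gradient of the objective w.r.t. the INPUT
    x += 1.5 / np.abs(g).mean() * g   # normalized ascent step, as in make_step
after = layer_activation(x)
# after > before: the input has been "morphed"; W was never touched.
```

The point of the sketch is that backpropagation here is only a means of getting the gradient at the input, not a training update.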
"domain": "datascience.stackexchange",
"id": 1465,
"tags": "machine-learning, python, neural-network, deep-learning"
} |
Effects of selection on effective population size | Question: Background
The effective population size ($N_e$) is a central concept of evolutionary biology and is influenced by several parameters. For example: sex ratio bias affects $N_e$ $\left(N_e = \frac{4N_mN_f}{N_m+N_f}\right)$ and varying population size over time influences $N_e$ $\left(N_e = \frac{n}{\sum_{i=1}^n\frac{1}{N_i}}\right)$. There is a post on how overlapping generations influences population size.
Question
Selection also influences the effective population size. Intuitively, I'd expect that the higher the fitness variance, the lower the effective population size, as fewer individuals contribute to the next generation. Am I right? How (what is the mathematical formulation) does fitness variance influence the effective population size?
Answer: From Conner and Hartl's A primer of ecological genetics:
"Any variance in reproductive success among individuals greater than
random expectations, a commonplace occurrence in natural populations,
reduces effective population size."
So yes, selection does reduce the effective population size and for the reason you suggest - it removes some individuals from the mating pool/ reduces their contribution to the next generation. Mathematically it can be derived as
$N_e \approx \frac{ 8N_a}{V_m + V_f + 4 }$
where $V_m$ and $V_f$ are sex-specific variances of offspring production (males are often more variable in reproductive success than females).
They give an example of selection affecting $N_e$, in a population of deer 33 males had four times the variance ($V_m$ = 41.9) in reproductive success of 35 females where variance was $V_f$ = 9.1. Thus effective population size was
$N_e \approx \frac{ 8 * (33 + 35)}{41.9 + 9.1 + 4 } = 9.9$
The notation $N_a$ is the actual number of individuals in the population. This can be seen in the calculation of effective population size when the number of males and females are not equal
$N_e = \frac{4 N_m N_f}{N_m + N_f} = \frac{4 N_m N_f}{N_a}$
An example is a population of 80 males and 80 females compared to a population of 70 males and 90 females. Both populations have equal sizes but effective population size is reduced by unequal sex ratios
$N_e = \frac{4 * 80 * 80}{160} = 160 \neq \frac{4 * 70 * 90}{160} = 157.5$ | {
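The arithmetic in the two worked examples is easy to sanity-check in a few lines of plain Python (illustrative only; the function names are mine):

```python
def ne_variance(n_a, v_m, v_f):
    """Effective size given sex-specific variances in offspring number."""
    return 8 * n_a / (v_m + v_f + 4)

def ne_sex_ratio(n_m, n_f):
    """Effective size given unequal numbers of breeding males and females."""
    return 4 * n_m * n_f / (n_m + n_f)

deer   = ne_variance(33 + 35, 41.9, 9.1)  # ~9.9
equal  = ne_sex_ratio(80, 80)             # 160.0
skewed = ne_sex_ratio(70, 90)             # 157.5
```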
"domain": "biology.stackexchange",
"id": 2904,
"tags": "evolution, ecology, natural-selection, theoretical-biology, population-dynamics"
} |
Batch Gradient Descent running too slowly | Question: Following Data Science from Scratch by Joel Grus, I wrote a simple batch gradient descent solver in Python 2.7. I know this isn't the most efficient way to solve this problem, but this code should be running faster. How can I speed it up? My gut is telling me that mse_grad2 is the problem...
from __future__ import division
import random

x = [49,41,40,25,21,21,19,19,18,18,16,15,15,15,15,14,14,13,13,13,13,12,12,11,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,8,8,8,8,8,8,8,8,8,8,8,8,8,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]
y = [68.77,51.25,52.08,38.36,44.54,57.13,51.4,41.42,31.22,34.76,54.01,38.79,47.59,49.1,27.66,41.03,36.73,48.65,28.12,46.62,35.57,32.98,35,26.07,23.77,39.73,40.57,31.65,31.21,36.32,20.45,21.93,26.02,27.34,23.49,46.94,30.5,33.8,24.23,21.4,27.94,32.24,40.57,25.07,19.42,22.39,18.42,46.96,23.72,26.41,26.97,36.76,40.32,35.02,29.47,30.2,31,38.11,38.18,36.31,21.03,30.86,36.07,28.66,29.08,37.28,15.28,24.17,22.31,30.17,25.53,19.85,35.37,44.6,17.23,13.47,26.33,35.02,32.09,24.81,19.33,28.77,24.26,31.98,25.73,24.86,16.28,34.51,15.23,39.72,40.8,26.06,35.76,34.76,16.13,44.04,18.03,19.65,32.62,35.59,39.43,14.18,35.24,40.13,41.82,35.45,36.07,43.67,24.61,20.9,21.9,18.79,27.61,27.21,26.61,29.77,20.59,27.53,13.82,33.2,25,33.1,36.65,18.63,14.87,22.2,36.81,25.53,24.62,26.25,18.21,28.08,19.42,29.79,32.8,35.99,28.32,27.79,35.88,29.06,36.28,14.1,36.63,37.49,26.9,18.58,38.48,24.48,18.95,33.55,14.24,29.04,32.51,25.63,22.22,19,32.73,15.16,13.9,27.2,32.01,29.27,33,13.74,20.42,27.32,18.23,35.35,28.48,9.08,24.62,20.12,35.26,19.92,31.02,16.49,12.16,30.7,31.22,34.65,13.13,27.51,33.2,31.57,14.1,33.42,17.44,10.12,24.42,9.82,23.39,30.93,15.03,21.67,31.09,33.29,22.61,26.89,23.48,8.38,27.81,32.35,23.84]

x_data = [[1] + [ind_var_i] for ind_var_i in x]
y_data = y
theta_0 = [0,0]

def dot(v,w):
    return sum(v_i * w_i for v_i,w_i in zip(v,w))

def step(v, direction, step_size):
    return [v_i + step_size * direction_i for v_i,direction_i in zip(v,direction)]

def mse_cost(theta):
    return sum((y_data_i - dot(theta,x_data_i))**2 for x_data_i,y_data_i in zip(x_data,y_data))

def mse_grad2(theta):
    new_theta = [0,0]
    for i,_ in enumerate(theta):
        new_theta[i] = -2 * sum((y_data_i - dot(theta,x_data_i)) * x_data_i[i] for x_data_i,y_data_i in zip(x_data,y_data))
    return new_theta

def safe(f):
    def safe_f(*args,**kwargs):
        try:
            return f(*args,**kwargs)
        except:
            return float('inf')
    return safe_f

def minimize_batch(mse_cost, mse_grad2, theta_0, tolerance=0.00000001):
    step_sizes = [100,10,1,0.1,0.01,0.001,0.0001,0.00001]
    theta = theta_0
    mse_cost = safe(mse_cost)
    value = mse_cost(theta)
    while True:
        gradient2 = mse_grad2(theta)
        next_thetas = [step(theta, gradient2, -step_size) for step_size in step_sizes]
        next_theta = min(next_thetas,key=mse_cost)
        next_value = mse_cost(next_theta)
        if abs(value - next_value) < tolerance:
            return theta
        else:
            theta, value = next_theta, next_value

new_theta = minimize_batch(mse_cost, mse_grad2, theta_0, tolerance = 0.000001)
print new_theta
Answer: dot() gets called three million times.
I changed it from:
def dot(v,w):
    return sum(v_i * w_i for v_i, w_i in zip(v,w))
to:
def dot(v,w):
    return (v[0] * w[0]) + (v[1] * w[1])
And the runtime dropped from 7.09 seconds to 2.89 seconds, with the same results. | {
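A hedged alternative (my sketch, not the book's code): rather than micro-optimizing dot(), NumPy can vectorize the entire gradient, removing the Python-level loops altogether. With theta = [0, 0] the gradient reduces to -2 Xᵀy; shown here on the first three data points from the question:

```python
import numpy as np

# First three (x, y) pairs from the question, as a design matrix with a bias column.
X = np.array([[1.0, 49.0],
              [1.0, 41.0],
              [1.0, 40.0]])
y = np.array([68.77, 51.25, 52.08])

def mse_grad2_np(theta):
    residual = y - X @ theta          # all residuals computed at once
    return -2.0 * X.T @ residual      # replaces the nested generator expressions

grad = mse_grad2_np(np.zeros(2))      # approximately [-344.2, -15108.36]
```

On the full 203-point dataset the speedup is far larger than the 2.5x from specializing dot(), since every call is a single BLAS operation.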
"domain": "codereview.stackexchange",
"id": 14337,
"tags": "python, algorithm, python-2.x, machine-learning"
} |
Given a vaccine, can you deduce that a disease exists? | Question: For example, if a scientist from 2018 found one of today's Covid vaccines, can they tell that it is a vaccine to a hitherto unknown disease, and therefore infer that Covid exists?
I'm guessing that the answer depends on the type of vaccine, in which case I'm interested in all of them.
Answer: The main giveaway that it's designed to produce an immune response would be the presence of an immunological-adjuvant:
a substance that increases or modulates the immune response to a vaccine.
Now, of course, adjuvants are used in cancer treatment, but the difference with inoculations for pathogens is that there'd be an additional molecule or set of them which triggers a specific immune response.
There are nearly a dozen types of "vaccine" for COVID and its variants; the majority focus on making use of the spike protein of the virus to create a response, but some are designed to create a response to the capsid proteins, i.e. the viral shell. Neither the spike protein nor the capsid would be expected to be found natively in the human body.
This doesn't mean they all contain the spike protein (or any other target protein), the mRNA version for example is designed to induce the body's own systems to produce the spike.
Some contain an adenovirus - a familiar companion, usually found in the mucous membranes of human populations, causing a number of familiar respiratory diseases. This, too, is designed to induce the body to produce the spike protein and only that; the adenovirus itself is otherwise reproductively inert.
Some contain artificially produced virus-like particles which have some characteristic in common with the target virus.
The one producing arguably the broadest response would be the virus particles themselves, artificially cultured and then inactivated before administration. This would straight-up identify the virus that the vaccine is designed to combat. At the time of writing, these seem to have been primarily developed and deployed in China.
"domain": "biology.stackexchange",
"id": 12055,
"tags": "vaccine"
} |
Why is there a constant in the ideal gas law? | Question: Why do we have constants?
Consider, for example, the ideal gas law,
$$PV = nRT \, . \tag{ideal gas law}$$
Sometimes I believe that the constant is there in order to make the equation work (make the units line up per se), but other times I feel like such assumptions are unnecessary.
I don't entirely understand why that constant is used, besides the fact that it is necessary for the units.
NB/ This is not intended to stir philosophical debate. I am purely curious about the nature of constants in cases such as $R$ (not $c$, as I understand the speed of light is uniformly constant). I am simply asking whether these constants are necessary for our equations and understanding, or if they are universally constant.
Answer: Constants in physics are not just unit-matching devices. They are actually very fundamental. Yes, it is a heuristic and easy way to explain constants as unit keepers, and I have nothing against that; but constants represent a sort of privileged group in nature. They are like symmetry points: everything moving around them must do so in a way that keeps their values the same.
Now for gas constant ($R$): it is an experimental constant.
Imagine that you have a thermos bottle filled with a gas having a piston at its top which you can pull/push, an electric resistance inside that you can use to heat the gas, a thermometer and a barometer. The thermometer and the barometer are placed in such a way they can give the temperature and the pressure of the gas inside the bottle.
At a certain moment you make a measurement of all these three parameters $p, V$ and $T$. Let’s say you get the values $p_0, V_0, T_0$. Now do any of the following:
Heat up the gas or pull/push the piston up/down. You can do all of that at once. After that perform a new measurement of the above parameters. Let’s say you get $p_1, V_1, T_1$.
You will realize that, no matter what you do, in an isolated system the values of the parameters $p, V$ and $T$ will always change in such a way that the ratio of the product $pV$ to $T$ is constant, i.e.,
$$φ=\frac{p_0 V_0}{T_0}=\frac{p_1 V_1}{T_1}=\frac{pV}{T}=constant \tag{1}$$
This means that, once you make an initial measurement and get a value for $φ$, in the future you’ll be required to measure just 2 of the parameters, and the third will be established using an equation of the form
$$pV=φT \tag{2}$$
The problem is, you cannot make any assumption about the general validity of equation (2). At this point it is just an ad hoc equation which serves the purpose of your current setup or experiment. What if you increase/reduce the amount of gas inside the bottle? Or change the gas type?
In the case of increasing/reducing the amount of gas inside, just as expected, the value of $φ$ will increase/reduce by the same proportion $n$ as the amount of gas added/removed. Or
$$φ =\frac{pV}{T}= nφ_0 \tag{3}$$
where $φ_0$ is the value of $φ$ for a unit amount of gas.
The big leap here is a discovery by Amedeo Avogadro known as Avogadro's law, which, in other words, says that if one expresses the amount of substance $n$ in moles instead of $\mathrm{kg}$ or $\mathrm{lbs}$, then under the same conditions of $p$ and $T$ all gases occupy the same volume, i.e., the values of the $φ$'s are the same. He discovered that 1 mole of any gas under $1 \, \mathrm{atm}=1.01325\times10^5 \, \mathrm{ \frac{N}{m^2}}$ and $0 \, \mathrm{°C}= 273.15 \, \mathrm{K}$ occupies $V_0=22.4\times10^{-3} \, \mathrm{m^3}$.
Now we can generate a universal value for $φ_0$ as
$$φ_0=R=\frac{p_0 V_0}{T_0}=\frac{1.01325\times10^5 \times 22.4\times10^{-3} \, \mathrm{\frac{N}{m^2}\times m^3}}{273.15 \, \mathrm{K}}=8.31 \, \mathrm{J/(mol\,K)} \tag{4}$$
Now (2) can be written as
$$pV=nRT \tag{5}$$
and if we do so, we get a compact and universal form to describe the thermodynamic system.
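A quick numerical check of (4), purely illustrative:

```python
# Avogadro's reference conditions for one mole of any (ideal) gas.
p0 = 1.01325e5   # Pa, i.e. 1 atm
V0 = 22.4e-3     # m^3 occupied by 1 mol at 0 degC and 1 atm
T0 = 273.15      # K

R = p0 * V0 / T0  # ~8.31 J/(mol K)
```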
But there is more in (5) than just a compact way of describing the thermodynamic system. As you can see in (4), the units of $pV$ turn out to be $\mathrm{J}$. It actually represents the total work done by an isolated thermodynamic system. Differentiating (5) for a fixed amount of substance, we get
$$p \mathrm{d} V+V \mathrm{d} p=nR \mathrm{d}T \tag{6}$$
$p \mathrm{d} V$ is the so-called reversible expansion work and $V \mathrm{d} p$ is the so-called shaft work. Since on the right side of (6) the only variable is $T$, this gives a new meaning to temperature as some form of energy (or energy potential), and we can understand heat as energy and not some kind of substance, as was thought in the past.
"domain": "physics.stackexchange",
"id": 48369,
"tags": "thermodynamics, ideal-gas, physical-constants"
} |
Finding the optimal policy from a set of fixed policies in reinforcement learning | Question: This is an open-ended question.Suppose I have a reinforcement learning task that is being solved using many different fixed policies, one of which is optimal. The goal of the agent is not to figure out what the optimal policy is but rather which policy (from a set of predefined fixed policies) is the optimal one.
Are there any algorithms/methods that handle this?
I was wondering if meta learning is the right area to look into?
Answer: The quickest way to do this would be to use policy evaluation methods. Most of the standard optimal control algorithms consist of policy evaluation plus a rule for updating the policy.
It may not be possible to rank arbitrary policies by performance when considering all states. So you will want to rank them according to some fixed distribution of state values. The usual distribution of start states would be a natural choice (this is also the objective when learning via policy gradients in e.g. Actor-Critic).
One simple method would be to run multiple times for each policy, starting each time according to the distribution of start states, and calculate the return (discounted sum of rewards) from each one. A simple Monte Carlo run from each start state would be fine, and is very simple to code. Take the mean value as your estimate, and measure the variance too so you can establish a confidence for your selection.
Then simply select the policy with the best average value in start states. You can use the variance to calculate a standard error for this, so you will have a feel for how robust your selection is.
If you have a large number of policies to select between, you could do a first pass with a relatively low number of samples, and try to rule out policies that perform badly enough that even adding, say, 3 standard errors to the estimated value would not cause them to be preferred. Other than that, the more samples you can take, the more accurate your estimates of mean starting value for each policy will be, and the more likely you will be to select the right policy.
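Here is a minimal sketch of that procedure on a made-up toy problem (the chain MDP, the policy names, and all parameters are my own illustration, not a standard benchmark): roll each fixed policy out many times, average the discounted returns, and keep the best.

```python
import random
import statistics

random.seed(0)

def rollout(policy, gamma=0.9, horizon=20):
    """One Monte Carlo episode on a toy chain: states 0..10, reward = current state."""
    s, ret, discount = 5, 0.0, 1.0
    for _ in range(horizon):
        a = policy(s)                               # action in {-1, +1}
        move = a if random.random() < 0.8 else -a   # noisy transitions
        s = min(10, max(0, s + move))
        ret += discount * s
        discount *= gamma
    return ret

policies = {
    "always-left":  lambda s: -1,
    "always-right": lambda s: +1,
    "alternate":    lambda s: +1 if s % 2 == 0 else -1,
}

scores = {name: [rollout(pi) for _ in range(200)] for name, pi in policies.items()}
means = {name: statistics.mean(rs) for name, rs in scores.items()}
stderrs = {name: statistics.stdev(rs) / len(rs) ** 0.5 for name, rs in scores.items()}
best = max(means, key=means.get)   # "always-right" on this toy chain
```

The standard errors give you the confidence measure discussed above: if the best mean exceeds the runner-up by several standard errors, the selection is robust.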
I was wondering if meta learning is the right area to look into?
In general no, but you might want to consider meta learning if:
You have too many policies to select between by testing them all thoroughly.
The policies have some meaningful low dimension representation that is driving their behaviour. The policy function itself would normally be too high dimensional.
You could then use some form of meta-learning to predict policy performance directly from the representation, and start to skip evaluations from non-promising policies. You may need your fixed policies to number in the thousands or millions before this works though (depending on the number of parameters in the representation and complexity of mapping between parameters and policy function), plus you will still want to thoroughly estimate performance of candidates selected as worth evaluating by the meta-learning.
In comments you suggest treating the list of policies as context-free bandits, using a bandit solver to pick the policy that scores the best on average. This might offer some efficiency over evaluating each policy multiple times in sequence. A good solver will try to find best item in the list using a minimal number of samples, and you could use something like UCB or Gibbs distribution to focus more on the most promising policies. I think the main problem with this will be finding the right hyperparameters for the bandit algorithm. I would suggest if you do that to seed the initial estimates with an exhaustive test of each policy multiple times, so you can get a handle on variance and scale of the mean values. | {
"domain": "ai.stackexchange",
"id": 2243,
"tags": "neural-networks, machine-learning, reinforcement-learning, meta-learning"
} |
In DNA repair, how is it determined which strand contains the error? | Question: DNA replication is more accurate than transcription (or RNA replication) because mechanisms exist for proof-reading and repair of DNA, but not for RNA.
Consider a segment of a DNA strand, AGTC. Its complement is GACT. Now suppose its complement is mutated to TACT — the DNA repair system will replace the wrong T by G. Why isn’t A in the original strand replaced by C? How does the system determine that the first strand is correct and its complement is incorrect?
Answer: The fact that your example isn't realistic is exactly the point. DNA repair machinery works by repairing common errors that occur through common mutational pathways, and the enzymes are specific to those errors. For example, one particular enzyme targets mutations caused by UV and is itself activated by sunlight; this specificity makes it repair the correct strand. Also, during replication, newly synthesised DNA lacks methylation, so the repair enzymes know which strand is the original, correct one.
"domain": "biology.stackexchange",
"id": 5988,
"tags": "molecular-biology, dna, molecular-genetics, dna-repair"
} |
What is meant by thermal penetration depth? | Question: What is meant by thermal penetration depth? I am doing a project on Thermoacoustics. while researching I came across about thermal penetration depth.I searched over the net but i didn't get a clear idea so please explain me about this and also give me an insight about what are the other applications of this.
Answer: As the term itself suggests, it has to do with the penetration of heat into a material.
Suppose you have a sufficiently thick material (size $D$) of uniform temperature ($T_0$), where you apply a constant (different) temperature ($T_1$) at one side. Eventually, your whole material will be at this new temperature $T_1$. But before this happens, that is, as long as the temperature of the other side of the block is still $T_0$, we can talk about penetration.
The penetration depth is the depth to which the temperature has significantly changed, often, this is approximated with
$$\sqrt{\pi a t}$$
where $a$ is the thermal diffusivity coefficient, and $t$ is time.
In this context, the Fourier number is also relevant, as it relates the penetration depth to the domain length scale, i.e.
$$ Fo=\frac{a t}{D^2}$$
For penetration theory to be applicable (initial stage), $Fo<1$. | {
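As a worked example (the property value is assumed for illustration; water's thermal diffusivity is roughly $1.4\times10^{-7}\,\mathrm{m^2/s}$):

```python
import math

a = 1.4e-7   # thermal diffusivity of water, m^2/s (assumed value)
D = 0.05     # slab thickness, m
t = 60.0     # elapsed time, s

depth = math.sqrt(math.pi * a * t)  # penetration depth: ~5 mm after one minute
Fo = a * t / D**2                   # Fourier number << 1, so penetration theory applies
```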
"domain": "physics.stackexchange",
"id": 36462,
"tags": "thermodynamics, acoustics, terminology"
} |
How long will norovirus live? | Question: I hope this is the right place to ask. I subscribe to several Stack Exchange threads, and regard them as the best internet forum for genuine and credible answers.
I spent last evening talking to a friend, at an Amateur Radio Club, who told me he has recently recovered from the norovirus. I was demonstrating an oscilloscope to him, and he also handled the handbook. All this is now in the holdall I used to carry it to the Club. I took all the precautions I could, like washing my hands and not getting them near my lips, nose etc. So, my question is: how long should I wait before opening my holdall, to be sure any infection there is dead?
Answer: Well, for him the virus is already likely at minimum levels within his body. If he's recovered (past the fever phase) then his immune system has already dealt with the majority of the viral bodies in his system, since the peak of the immune response comes after the peak of the viral infection density (ignore B):
How much of the virus was left in his system to actually spread is hard to guess. Sometimes infections never truly go away, but merely become undetectable by conventional testing. However, if we're to assume that there's at least some norovirus left to transfer to you then you should realize that the greatest chance for exposure has already come and gone.
The moment you were in the same room together, talking in close proximity, shaking hands (if you did), or generally being close was the most likely source of any infection you could have picked up. However, while most viruses don't survive well outside of the body, the norovirus notably bucks that trend. The Hawaiian Department of Health (PDF) notes that the virus can definitely survive outside the body for several days on hard surfaces (though it will not grow), can survive being frozen, and that it only takes about 100 viral units to cause an infection (which could fit a few thousand times over on a grain of salt). Scientific American also notes that norovirus can survive for months or years in sources of drinking water.
What you've already done is pretty much what you should do to avoid infection. You've washed your hands, refraining from mucous membrane contact, etc.
Ultimately how long you should wait depends on where the norovirus currently resides. A week should be sufficient if all of the norovirus is contained on the book or on other surfaces that you're not making contact with. However, if it's already had contact with your skin or mucous membrane, then you're either already fighting it off or you're about to become ill. Unfortunately it's a lot easier to expose yourself than contain something you can't see, as the Mythbusters have done a good job of showing (Video).
If he was still infectious enough to transmit the virus to other people by the time you two started handling the same equipment, and you didn't use gloves, you've probably been exposed. However, exposure doesn't automatically mean you get sick. If you're not sick within a week afterwards, you've successfully fought it off. So, congratulations, perhaps!
"domain": "biology.stackexchange",
"id": 753,
"tags": "virology"
} |
Decoupled Temperature for photons: why is it 0.25 $\rm eV$ rather than 13.6 $\rm eV$? | Question: When calculating the decoupled temperature of photons using Saha' equation for the following process:
\begin{equation}
e^- p\longleftrightarrow H\gamma
\end{equation}
we find that $T_{dec}=3000$ K$=0.25$ eV.
From my understanding, this phenomenon happens when it becomes thermodynamically favourable for protons and electrons to combine into neutral atoms. I was expecting it to be 13.6 eV (the Rydberg energy) in this case, which is hydrogen's binding energy. Why is it less than that?
Answer: This is because there are vastly more photons than charge carriers per unit volume: roughly 10 billion photons for every electron in the universe.
As an example, consider the state of affairs when the universe had cooled to a temperature of 1 eV, or around 10,000 K. At this temperature, electrons are no longer relativistic and their density follows the Boltzmann distribution,
$$n_e = 2\left(\frac{m_e T}{2\pi}\right)^{3/2} \exp \left(\frac{\mu_e - m_e}{T}\right).$$
At $T = 10^4$ K, the electron density is $n_e \approx 10^4 \,{\rm cm}^{-3}$.
Meanwhile, the number density of photons with an energy in excess of 13.6 eV can be found by integrating the Planck spectrum,
$$n_\gamma = \frac{1}{\pi^2}\int^\infty_{13.6}\frac{E^2}{\exp(E/T)-1}\, {\rm d}E,$$
giving $n_\gamma \approx 3 \times 10^9 \, {\rm cm}^{-3}$ at $T = 10^4$ K. In other words, there are around $3\times 10^5$ times more photons with energy greater than 13.6 eV than electrons per unit volume! At these temperatures, there is no shortage of energetic photons available to re-ionize neutral hydrogen once it forms.
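The quoted photon density can be checked numerically. Below is a quick sketch (my own check, not part of the original answer): it evaluates the Planck integral above in natural units with $T = 1$ eV, as in the answer's setup, and converts eV$^3$ to cm$^{-3}$ via $(\hbar c)^{-3}$. The function name and step counts are arbitrary choices.

```python
import math

HBARC_EV_CM = 1.9732e-5  # hbar*c in eV*cm, used to convert eV^3 -> cm^-3

def n_photons_above(E0_eV, T_eV, n_steps=200_000, E_max_factor=50.0):
    """Midpoint-rule estimate of (1/pi^2) * int_{E0}^{inf} E^2/(e^{E/T}-1) dE,
    returned as a number density in cm^-3 (the tail above E_max is negligible)."""
    E_max = E_max_factor * T_eV
    dE = (E_max - E0_eV) / n_steps
    total = 0.0
    for i in range(n_steps):
        E = E0_eV + (i + 0.5) * dE
        total += E * E / math.expm1(E / T_eV) * dE
    return total / math.pi**2 / HBARC_EV_CM**3

print(f"{n_photons_above(13.6, 1.0):.1e}")  # on the order of 3e9 photons per cm^3
```

With $T$ taken at exactly $10^4$ K ($\approx 0.86$ eV) instead, the result is about an order of magnitude smaller; either way the photons vastly outnumber the electrons.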
| {
"domain": "physics.stackexchange",
"id": 66909,
"tags": "statistical-mechanics, cosmology, atomic-physics"
} |
How is the pion related to spontaneous symmetry breaking in QCD? | Question: In chapter 19 of An Introduction to Quantum Field Theory by Peskin & Schroeder, they discuss spontaneous symmetry breaking (SSB) at low energies in massless (or nearly massless) QCD, given by
$$\mathcal{L}_{\text{QCD}} = \bar{Q} (i\gamma^{\mu}D_{\mu}) Q - m\bar{Q}Q,$$
where $Q = \left( \begin{matrix} u \\ d \end{matrix}\right)$ is an $SU(2)$ quark doublet.
Due to quark-antiquark pairs being produced from the vacuum at low energies (the $q\bar{q}$ condensate), we have a non-zero vacuum expectation value (vev), which results in SSB of the global $SU(2)_A$ symmetry. For massless QCD, this is an exact symmetry so 3 goldstone bosons [one for each generator of $SU(2)$] appear. For massive QCD (where the quark masses are small), the mass term in $\mathcal{L}_{\text{QCD}}$ demotes this to an approximate symmetry, and so we obtain 3 massive pseudo-Goldstone bosons.
If we take the quark masses to be small (and so we have an approximate symmetry), they show (eq 19.92) that we have a partially conserved axial current (PCAC relation) from pion production from the vev, via
$$ \langle 0 | \partial_{\mu} j^{\mu,a}_5 | \pi^b(p) \rangle = - p^2 f_{\pi} \delta^{ab} \quad \Rightarrow \quad \partial_{\mu} j^{\mu,a}_5 \propto m_{\pi}^2,$$
where $m_{\pi}$ is the mass of the pseudo-Goldstone bosons produced from SSB.
So if we have massive quarks in $\mathcal{L}_{\text{QCD}}$, then we have an approximate symmetry and obtain 3 massive pseudo-Goldstone bosons through SSB. If we have massless quarks in $\mathcal{L}_{\text{QCD}}$, then we have an exact symmetry and obtain 3 massless Goldstone bosons.
Now, here is the problem I am struggling to answer. In the text, they refer to these goldstones as being the three pions $\pi^+$, $\pi^-$, and $\pi^0$. From measurements, we know pions are massive particles. From the above reasoning, if the goldstones from SSB of QCD are in fact pions, there must be non-zero quark masses in the QCD Lagrangian. However, the usual Dirac mass terms violate the electroweak gauge symmetry $SU(2)_L \times U(1)_Y$, so it's not possible to have these mass terms appear in the Standard Model, and quarks must obtain their mass via the Higgs mechanism from SSB of the electroweak gauge group.
So, to preserve gauge invariance in the Standard Model $SU(3)_c \times SU(2)_L \times U(1)_Y$, we must then have that there are no quark mass terms and the global axial symmetry is exact, meaning our Goldstone bosons are massless and cannot be the same particles as pions. This seems to be a complete contradiction, and I really don't understand how we can have both massive pions that correspond to pseudo-goldstones and not break gauge invariance in the SM Lagrangian.
As a side question, I don't understand the logical jump made associating pions with the Goldstone fields in the first place. Pions are composite particles consisting of $u$ and $d$ quarks, so why should/can they even be candidates to Goldstone/pseudo-Goldstones bosons? It makes sense that they are related to the $q\bar{q}$ condensate in QCD due to the nature of their composition, but other than this I fail to see the connection.
Answer: In a world without EW SSB, pions would, indeed, be perfectly massless! Pion masses reflect two different SSBs.
There are two SSB scales involved in the SM: the electroweak spontaneous breaking of $SU(2)_L \times U(1)_Y$, with an order parameter of about 1/4 TeV; and the chiral spontaneous breaking of the three axial generators (not closing into an $SU(2)$, of course) with the quantum numbers of the three pions, with order parameter about 1/4–1/5 GeV, close to $\Lambda_{QCD}$; two completely different phenomena, understood very differently.
There is no sense in talking about pions above the higher scale; but below it, you do have effective quark masses, generated from the Yukawa couplings: the heart of the SM, regardless of how poorly they teach it to you.
So, below that upper scale, and around the lower one's χSSB that P&S detail, there are quark masses, for all practical purposes, and all they say makes eminent sense! Check that all statements are consistent and reasonable in all three regions demarcated by these two scales.
This includes your puzzlement about (pseudo)Goldstone bosons: the Goldstone property refers to the non-linear transformation law of the particles under the SSBroken symmetries. Pions, composite or not, are interpolated by operators realizing the symmetry either in the Wigner-Weyl mode or in the Nambu-Goldstone mode: our real-world pions choose the latter.
This is why, as you are taught, it is very impractical to think of pions as ordinary hadron bound states, which they also are. Primarily, they are Goldstone bosons. In fact, there is a small energy region in-between the $\Lambda_{QCD}$/confinement scale and the above χSSB scale, where you may consider Goldstone degrees of freedom and constituent massive quarks (the chiral bag models), but angels fear to tread there... You really need not get interested in it. | {
"domain": "physics.stackexchange",
"id": 96431,
"tags": "quantum-field-theory, quantum-chromodynamics, symmetry-breaking, quantum-anomalies, pions"
} |
Implementation of LRU cache | Question: What do you think of this implementation of an LRU cache? I'm trying to migrate to modern C++, so it would be fantastic if you could give me any tips here. I've heard it's not desired to use raw pointers, right? What about only internally, as in the example?
Also, since I can't return a nullptr value in the get() method, what is the best alternative for this? Default value as in the example?
#ifndef __LRU_HPP__
#define __LRU_HPP__
#include <map>
#include<iostream>
#include<assert.h>
template<class T>
struct Default {
T value;
Default():value(){}
};
template<class K, class V>
struct node {
K key;
V value;
node<K,V>* next;
node<K,V>* prev;
explicit node(K key, V value);
};
template<class K, class V>
class lru {
private:
node<K,V>* head;
node<K,V>* tail;
std::map<K,node<K,V>*> map;
std::size_t capacity;
void replace(node<K,V>*);
public:
explicit lru(std::size_t);
void put(K key, V value);
V get(K key);
};
template<class K, class V>
inline node<K,V>::node(K key, V value) {
this->key = key;
this->value = value;
}
template<class K, class V>
inline lru<K,V>::lru(std::size_t capacity) {
assert(capacity > 1);
this->capacity = capacity;
this->head = tail = nullptr;
}
template<class K, class V>
void lru<K,V>::put(K key, V value) {
auto it = map.find(key);
if (it != map.end()) {
node<K,V>* temp = it->second;
temp->value = value;
replace(temp);
return;
}
if(capacity == map.size()) {
tail->prev->next = tail->next;
node<K,V>* target = tail;
map.erase(target->key);
tail = tail->prev;
delete tail;
}
node<K,V>* temp = new node<K,V>(key, value);
temp->prev = nullptr;
temp->next = head;
if(head != nullptr)
head->prev = temp;
else
tail = temp;
head = temp;
map.insert(std::pair<K,node<K,V>*> {key, temp});
}
template<class K, class V>
V lru<K,V>::get(K key) {
auto it = map.find(key);
if(it != map.end()) {
replace(it->second);
return it->second->value;
}
return Default<V>().value;
}
template<class K, class V>
void lru<K,V>::replace(node<K,V>* temp) {
if (temp->prev != nullptr)
temp->prev->next = temp->next;
if (temp->next != nullptr)
temp->next->prev = temp->prev;
if (tail == temp)
tail = temp->prev != nullptr ? temp->prev : temp;
if (head != temp) {
head->prev = temp;
temp->next = head;
head = temp;
}
}
#endif
Answer: Namespace Usage
I'd put all of this into some namespace, so it won't accidentally collide with other usage of the names it defines.
I'd probably also work even a little harder to do something to hide your node and Default templates. For example, node doesn't need to be visible to the outside world, so it would probably be better off defined inside of lru.
Prefer Initialization to Assignment
A constructor like this:
template<class K, class V>
inline node<K, V>::node(K key, V value) {
this->key = key;
this->value = value;
}
...is typically better written something like this instead:
template <class K, class V>
inline node<K, V>::node(K key, V value)
: key(key)
, value(value)
{}
In some cases, you must use initialization (like this) instead of assignment. In this case, it's not mandatory, but I still consider it preferable.
Consider passing by reference
Passing your keys and values by value generally requires copying. You might want to consider whether it's worth passing by reference instead. This generally accomplishes little (or nothing) if the type being passed is "small", but can improve speed substantially if the type is large.
Encapsulate knowledge of a type's internals in that type
Right now, your lru class "knows" (depends upon) the internals of the node class. I'd generally prefer if only node knew about its internals. For one possibility, you might pass a next and prev to the ctor, and let it put those values into the node itself:
template <class K, class V>
inline node<K, V>::node(K key, V value, node *prev = nullptr, node *next = nullptr)
: key(key)
, value(value)
, prev(prev)
, next(next)
{}
In this case, your code like this:
node<K, V>* temp = new node<K, V>(key, value);
temp->prev = nullptr;
temp->next = head;
...would become something like:
node<K, V> *temp = new node<K, V>(key, value, nullptr, head);
Consider other Data Structures
Although a doubly-linked list does work for the task at hand, it's often fairly wasteful, using a couple of pointers per node in addition to the data you actually care about storing.
I'd consider using a singly linked list instead. Although it may not immediately be obvious how you can do this, I'm reasonably certain you can.
Consider pre-allocating your storage
Given that the total maximum number of objects is fixed when you create the cache, I'd also consider pre-allocating storage for both the objects and the linked-list nodes when you create the cache. An allocator for a fixed-size set of fixed-size blocks can be really trivial, which can improve speed considerably over using a couple of allocations on the free store for every item you insert in the cache. | {
"domain": "codereview.stackexchange",
"id": 29906,
"tags": "c++, c++11, cache"
} |
Phenyl or benzene when naming compounds | Question: These two molecules are both aromatic:
However, the first molecule has a halogen substituent and is named "bromobenzene". The second molecule has a chlorine and a hydroxyl group and is named "4-chlorophenol". Assuming there was no alcohol group, how would we know when to use benzene or phenyl?
I thought the following:
Therefore, shouldn't the first molecule have been written as "bromophenyl" and not as "bromobenzene"?
Answer: You are confused about several overlapping but mostly unrelated issues.
First, you are right that "benzyl" is the $\ce{C7H7\bond{~}}$ radical and that "phenyl" is the $\ce{C6H5\bond{~}}$ radical.
Second, although "phenol" and "phenyl" sound almost the same, you would probably do well to think of them as entirely separate naming roots. "Phenol" is the simplest aromatic alcohol. Simple aromatic alcohols with other substitutions (e.g. 4-chlorophenol) can thus be named as substituted phenols.
Third, "benzyl" and "benzene" similarly sound the same but should be treated as distinct concepts. Substituted aromatic rings can be named as benzene derivatives, such as chlorobenzene.
Fourth, for shorthand and convenience, "phenyl" and "benzyl" are sometimes used to name substituents. Note that IUPAC has detailed priority rules for what is a substituent and what is the base molecule, but chemists don't always follow these. "chlorobenzene" is a correct IUPAC name, and it views the molecule as a substituted benzene. That is, benzene is the base molecule and "chloro" is a substitution. Chemists sometimes find it convenient to reverse their perspective, even though the resulting names would not be IUPAC-approved. From a reverse perspective, the chlorine atom is the "base molecule", and it is substituted by a phenyl group. Thus you could call the chlorobenzene "phenyl chloride" if you wanted to take this reverse perspective.
Lastly, "bromophenyl" ends in "-yl" and so is the name of a "radical", i.e. a substituent or fragment, not of a complete molecule. Without any other context, I would think that "bromophenyl" was a $\ce{C6H4Br\bond{~}}$ substituent. In contrast, "bromobenzene" is a complete molecule, derived from benzene by substitution. | {
"domain": "chemistry.stackexchange",
"id": 3176,
"tags": "organic-chemistry, nomenclature, aromatic-compounds"
} |
Avoiding use of an initialized variable that will either be changed or ignored | Question: Follow on from this question
This is a follow on question from one I asked a few days ago. I am making (or rather have made) a recursive method that takes a list of ArrayList objects as its parameters. There can be any number of them, but let's pretend there are five. They are numbered from 1 to 5, so the Array contains: [1 2 3 4 5].
The Array is then passed through a recursive method which returns only the numbers at odd indices in the Array, i.e. [2 4] (as they are at indices 1 and 3).
This is the code for the recursive method:
public static ArrayList<Integer> oddList (ArrayList<Integer> tList) {
ArrayList<Integer> oddList = ListMethods.deepClone(tList);
int tempStorage = -1;
if (oddList.size() <= 0)
return oddList;
else
{
if (oddList.size()%2==0)
tempStorage = oddList.remove(oddList.size()-1);
oddList.remove(oddList.size()-1);
oddList = ListMethods.oddList(oddList);
if (tempStorage >= 0)
oddList.add(tempStorage);
}
return oddList;
}
There are two things I am curious about:
Firstly, is there an alternative to that if statement at the very end, to check that tempStorage >= 0.
Secondly, and probably more importantly, should I care? In other words, is what I've done there a cheap, easy fix, or is that common coding practice? It just seems strange having an initialised variable that will always be either changed or ignored.
Any feedback would be fantastic, thanks (and to the guys who posted on my previous post, I'm still reviewing all the information - not allowed to use most of it in this assignment, but thanks anyway!)
Answer: I already explained in my other post not to use this tempStorage variable in this way. At least (compared to the other version) you do not change the meaning of it in this version.
If you rename it to something like removedItemFromEvenIndex it could be fine. If, and really only if, the requirement is that all values inside the lists are >= 0.
If all values are allowed, I would suggest sticking to the solution with the boolean I suggested in the other post, because then you cannot find any "check" value to see whether you removed something or not.
From where comes this problem? You try to do 2 things together with the tempStorage:
Transport the removed value
Status flag if you removed something or not
You cannot address both things with the same Integer, because the first part already needs the Integer's full value range.
Rest is the same as for the other question.
To handle this one, same approach as other question:
What we want to do here? Add every second element to the new list, starting with the second.
So a simplified description could be:
function oddList(list)
if list is empty or has only 1 element
return empty list
return new list(second element of list, oddList(all elements from the third to the end))
translate to algorithm in Java:
public static List<Integer> oddList(final List<Integer> list) {
if (list.size() <= 1)
return new ArrayList<>();
final List<Integer> newList = new ArrayList<>(Arrays.asList(list.get(1)));
newList.addAll(oddList(list.subList(2, list.size())));
return newList;
}
If we take into account the other question, this could be a clever way (you should always try to reuse existing and working solutions):
public static List<Integer> oddList(final List<Integer> list) {
final List<Integer> newList = new ArrayList<>(list); //we create a copy, otherwise list is modified for the caller
newList.remove(0); //all indices shifted one to the left
return even(newList);
} | {
"domain": "codereview.stackexchange",
"id": 3205,
"tags": "java, recursion"
} |
Finding a signal's complementary sound / frequency | Question: I know about complementary colors and how to get them, since colors are basically frequencies / wavelengths. Is it possible to find a signal's complementary, additive, or subtractive frequencies in an audio file / from a signal? I know how to use the FFT to get the phase, amplitude, and frequency of a signal. I'm just looking for a general idea of how I would go about creating similar color charts but for sound: we have color charts of complementary, additive, or subtractive colors; do we have one for the sound / audio range of humans?
Answer: Yes and no. Only monochromatic colors consist of a single wavelength, and those do not really "wrap around" in the form you describe. Color addition and subtraction have more to do with the three types of cone cells in our eyes than with wavelengths. Since there is no audio analogue to these cone cells, the concept of complementary colors doesn't exist in the same way.
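If one nevertheless wants a pitch-based analogue, octave equivalence supplies it: in equal temperament the point "opposite" a note within its octave is six semitones away, a frequency ratio of $\sqrt{2}$. A minimal sketch (my own illustration, not a standard chart):

```python
def tritone(freq_hz):
    """Frequency a tritone (6 of 12 equal-tempered semitones) above freq_hz."""
    return freq_hz * 2 ** (6 / 12)  # i.e. freq_hz * sqrt(2)

# A4 = 440 Hz maps to ~622.25 Hz (D#5/Eb5); applying the map twice lands on
# the octave, so the relation is symmetric up to octave equivalence.
print(round(tritone(440.0), 2))        # 622.25
print(round(tritone(tritone(440.0))))  # 880
```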
However, if you consider notes instead of frequencies, in the sense that two frequencies that are one or more octaves apart constitute the same note, then you can say that two notes are complementary if they are a tritone apart. | {
"domain": "dsp.stackexchange",
"id": 3708,
"tags": "fft, audio, ifft, color, theory"
} |
Can RL still learn if part of my actions are only used once, at the beginning of the episode? | Question: I am working in an environment with 3-dimensional action space. The first two actions are only used at the first timestep and never again. The third action is used at every timestep.
Say, the action is $a = (a_1, a_2, a_3)$. At the start of an episode, the agent uses actions $a_1, a_2$ only at timestep 1. Action $a_3$ is used at every timestep in the episode, from 1 until the horizon H. The agent receives a reward $r_t$ at each timestep $t$ until the end of the episode.
I am using SAC. Since the actions $a_1, a_2$ only affect the agent's behavior at timestep 1 and are not used at any of the later timesteps, I am not sure if the RL policy will get better at choosing "good" values for $a_1, a_2$.
Will the RL be able to learn a good policy even if it doesn't quite see the effects of the first two actions in the episode data except at the first timestep?
Answer: You should be able to learn a good policy even if you use the first two actions only at the first timestep.
Using this OpenAI reference, the loss for the state action value function (from which the policy loss is later derived) is:
$$L(\phi) = \mathbb{E}_{(s, a, r, s') \sim D}\left[\left(Q(s,a|\phi) - (r + \gamma Q(s', a'|\phi_{target}))\right)^2\right]$$
where $D$ is a set of transitions, $\phi_{target}$ are "old" parameters for the action state value function which are left unchanged in the parameter update, and $a' \sim \pi(.|s,\theta)$.
Note that I've simplified the equation for clarity.
The expectation in the loss is replaced in the actual algorithm with an average on a batch of transitions.
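To make the credit-assignment point concrete, here is how the scalar target is formed for a single transition. This is an illustrative sketch only, not SAC's full critic target (which also takes the minimum of two target critics and subtracts an entropy term):

```python
def td_target(reward, next_q, gamma=0.99, done=False):
    """One-step bootstrap target r + gamma * Q(s', a'); `done` masks the tail."""
    return reward + (0.0 if done else gamma * next_q)

# Even at the first timestep the target bootstraps from Q(s', a'), so the
# one-shot choice of (a1, a2) is credited through everything that follows it.
print(td_target(1.0, 5.0))  # 1.0 + 0.99 * 5.0 = 5.95
```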
At timestep 0, the target $r + \gamma Q(s', a'|\phi_{target})$ for $Q(s_{t_0},a_{t_0}|\phi)$ (with $a' \sim \pi(\cdot|s',\theta)$) in the loss will be non-zero, because $Q(s', a'|\phi_{target})$ will be non-zero and will reflect the value of $(s',a')$ accurately (e.g. thanks to transitions which happen at later timesteps). | {
"domain": "ai.stackexchange",
"id": 3449,
"tags": "reinforcement-learning, deep-rl, soft-actor-critic"
} |
Derivation of $T(z)(TT)(w)$ in CFT | Question: I am trying to derive eq. (6.213) in Di Francesco's CFT book,
$$T(z)(TT)(w) \sim \frac{3c}{(z-w)^6}+\frac{(8+c)T(w)}{(z-w)^4}+\frac{3 \partial T(w)}{(z-w)^3}+\frac{4 (TT)(w)}{(z-w)^2}+\frac{\partial(TT)(w)}{z-w}.\tag{6.213}$$
I must have made an error, since my answer was
$$T(z)(TT)(w) \sim\frac{-3 c}{(w-z)^6}-\frac{8T(w)}{(w-z)^4}+\frac{5\partial T(w)}{(w-z)^3}-\frac{\partial^2 T(w)}{(w-z)}.$$
I have two confusions:
In his OPE for $\partial T(x) T(w)$ (eq. 6.212) he has a term $\partial(TT)(w).$ Where does this come from? If you differentiate the $TT$ OPE's from earlier in the book you don't get this term because the expansion is truncated at the non-singular terms. Why this time do we keep going?
He does not write the OPE for $T(x)\partial T(w)$ but just says you can get it from differentiation. However when I differentiate I get a $\partial^2 T(w)$ term that he must not have obtained. Why is this? What rule of differentiating OPEs am I breaking to get that term?
Answer: Some introductory sources are too quick to say that only singular terms matter in chiral OPEs. Yes, meromorphic functions are determined by their singularities but it may well be the case that one needs a finite number of regular terms in order to use the OPEs during intermediate steps. It is convenient to start by finding out what this number is.
Looking at equation (6.159), singular terms in the final answer will arise in three ways.
Going up to the first regular term in the $\partial T(x) T(w)$ OPE.
Going up to the second regular term in the $T(x) T(w)$ OPE in the first line.
Keeping only the first regular term in all OPEs of the second line.
The regular term in $A(x)B(w)$ is by definition $(AB)(w)$ so the second line yields
\begin{equation}
\frac{\frac{c}{2} T(w)}{(z - w)^4} + \frac{2 (TT)(w)}{(z - w)^2} + \frac{(T\partial T)(w)}{z - w}. \quad\quad (1)
\end{equation}
To tackle the first line, we plug in the known OPEs to get
\begin{align}
&\frac{\frac{c}{2} T(w)}{(x - w) (z - x)^4} + \frac{c}{(x - w)^5 (z - x)^2} + \frac{4T(w)}{(x - w)^3 (z - x)^2} + \frac{2\partial T(w)}{(x - w)^2 (z - x)^2} + \frac{2(TT)(w) + (x - w) Y}{(x - w) (z - x)^2} \\
&\quad - \frac{2c}{(x - w)^6 (z - x)} - \frac{4T(w)}{(x - w)^4 (z - x)} - \frac{\partial T(w)}{(x - w)^3 (z - x)} + \frac{(\partial TT)(w)}{(x - w) (z - x)}. \quad\quad (2)
\end{align}
I am being lazy with the $Y$ because I don't want to figure out what the second regular term in $T(x) T(w)$ is but we will come back to that.
Integrating (2) and adding it to (1), we get exactly the desired answer up to the $\frac{1}{z - w}$ term because we haven't solved for $Y$ yet. But this term needs to be the derivative of whatever $T(z)$ is acting on because the stress tensor generates translations. For more about how certain terms in stress tensor OPEs are universal (even when nothing is a Virasoro primary), see a previous question. | {
"domain": "physics.stackexchange",
"id": 84256,
"tags": "operators, conformal-field-theory, stress-energy-momentum-tensor"
} |
How to run tests on catkin installed packages? | Question:
I'm pretty new to ROS, so my knowledge of ROS, catkin, and cmake is pretty basic. I have some questions on how I should proceed with the tests of my packages. The goal is to allow execution of each specific test, but right now they're grouped (i.e., group-test-1, group-test-2, group-test-3, etc).
Here's the current setup:
Rostest is usable only in devel mode. I need to use install mode since it's the artifact that we're going to use in production.
I'm currently using catkin run_tests (i.e., catkin run_tests -Dgroup-test-1=on) to execute all tests for that group. I'm using options to group it, then have a foreach loop to create a test using add_rostest_gtest
It's not possible to run --gtest-filter since it also runs some scripts
So far, here's what I think are the possible solutions:
Is it possible to use rostest in install mode? I haven't found a way to do it.
Does it make sense to add options in my foreach loop so that I can use it this way: "catkin run_test -Dgroup-test-1-specific-test-1=on"? I'm not sure if this will work.
ROS: Melodic
Catkin tools: 0.6.1
Python: 2.7.17
Cmake: 3.10.2
Gtest
I'm not sure if I'm in the right path. Any tips or advice will be really helpful.
Originally posted by achluomalus on ROS Answers with karma: 11 on 2020-11-02
Post score: 1
Answer:
This #q307598 says to make the tests into normal executables outside of the rostest infrastructure, but I think you can do this with install (I'll update after I get it working or not)
install(DIRECTORY test/ DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/test)
also
install(PROGRAMS
test/my_test.py
DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
This installs the test file into two places, though only the one in the BIN destination executes; this avoids the .test files being split up from the .py files they are coupled with into different directories.
Originally posted by lucasw with karma: 8729 on 2021-10-25
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by 130s on 2022-08-26:
I marked this as an answer, as I've done similar/the same things and it worked out.
"domain": "robotics.stackexchange",
"id": 35698,
"tags": "ros, ros-melodic, catkin"
} |
What is "silanol activity"? | Question: In a previous post about how pyridine failed to move much from the baseline of a TLC, I was told to look into "silanol activity". However, I was unable to find a textbook-style description. All the articles I've seen where silanol activity was discussed assumed it was a known prerequisite.
Could anyone explain silanol activity, or direct me to an online explanation please?
Furthermore, if it is referring to an acid–base reaction, would that imply that what I'm seeing on the TLC is no longer pyridine, but rather $\ce{C5H5NH+}$?
Answer: Silanol activity is perhaps an informal term which is quite frequently used in the chromatographic literature to explain the poor band shape of primary amines. Other amines show this too but to a lesser extent. That is why you will not find a formal textbook definition of silanol activity. One might also see the term silanophilic interactions used occasionally. If you look at the surface of silica gel, you will see that a silanol group, is like an alcohol analogue, except that silanol is slightly a stronger acid.
If we disperse silica particles in water, and let it stay there for sometime, the pH of water is < 7 (around 3–4, if I remember correctly). So the point to keep in mind is that silica is acidic due to its ionizable silanols.
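To put rough numbers on the question about how much $\ce{C5H5NH+}$ to expect at such acidic pH: the protonated fraction of a base follows from the Henderson–Hasselbalch relation. A quick sketch (assuming pyridinium's $\mathrm{p}K_a \approx 5.2$; treating the local pH near the silica surface as ~4 is purely illustrative):

```python
def fraction_protonated(pH, pKa=5.2):
    """Fraction of a base B present as BH+ at a given pH (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

# Near an acidic silica surface the equilibrium lies well toward pyridinium:
print(round(fraction_protonated(4.0), 2))  # ~0.94
```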
There are different types of silanols as drawn in this paper: Sunseri, J.; Cooper, W. T.; Dorsey, J. G. Reducing residual silanol interactions in reversed-phase liquid chromatography: thermal treatment of silica before derivatization. J. Chromatogr. A 2003, 1011 (1-2), 23–29. DOI: 10.1016/S0021-9673(03)01070-7.
Now if you are doing chromatography of relatively strong organic bases, the silanol may be deprotonated by the amine (acid–base). The locally ionized silanol Si–O− groups can now act as a cation exchanger, which is a pretty strong interaction. The reason for seeing the streaking or tailing, as it is called in the formal literature, arises from those dual modes of retention. One mode could be regular partitioning or adsorption, and the other one is this additional cation-exchange behavior.
There are dozens of chromatographic tests for assessing the silanol activity of a given silica gel. Not all silica gels were created equal!
As to your question:
would that imply that what I'm seeing on the TLC is no longer
pyridine, but rather $\ce{C5H5NH+}$
Remember that ionization is an equilibrium process and hence dynamic. You are seeing the combined effect of both, pyridine and protonated pyridine. | {
"domain": "chemistry.stackexchange",
"id": 15041,
"tags": "organic-chemistry, experimental-chemistry, chromatography"
} |
RosActivity vs RosAppActivity | Question:
What is the different between RosActivity and RosAppActivity in android application?
Originally posted by Alexandr Buyval on ROS Answers with karma: 641 on 2012-02-16
Post score: 0
Answer:
If you're writing an Android application, you probably want your main activity to extend RosAppActivity. RosActivity contains the functionality for how intents are handled for starting activities. RosAppActivity extends RosActivity and it handles making the top bar on the screen with the dashboard.
Originally posted by selliott with karma: 51 on 2012-02-17
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 8265,
"tags": "rosjava, android"
} |
In what sense is the set of density matrices compact in infinite dimensions? | Question: Consider a complex, infinite-dimensional and separable Hilbert space $H$ and let $\mathcal I(H)$ denote the space of trace-class operators. The set of density operators $$\mathcal S(H):= \{\rho\in \mathcal I(H)\,|\, \mathrm{Tr}\rho=1\, , \, \rho \geq 0 \}$$
enjoys certain nice mathematical and physical features, some explained here. Moreover, it is well-known that $\mathcal S(H)$ is compact for a finite-dimensional Hilbert space, as discussed here.
Question: Is there a notion in which the set of density matrices is compact for an infinite-dimensional $H$? For example, in Ref. 1 it is stated (right after discussing the finite-dimensional case) that
$[\ldots]$ by using a physically natural topology (weak topology for $\rho$) one can still show that $\mathcal S(H)$ is compact.
In which topology is $\mathcal S(H)$ compact and how is it proved? Further, what makes this topology physically natural?
Edit: In Ref. 2, there is a theorem: $\mathcal S(H)$ is a compact convex subset of the real vector space $\mathcal I(H)$, and a proof which goes like this:
Now $\mathcal S(H)\subset \mathcal I(H)=\mathcal B_*(H)$ is a closed unit ball (because $||\rho||_1=\mathrm{Tr}\rho=1$ for all $\rho \in \mathcal S(H)$). Therefore, $\mathcal S(H)$ is weakly compact in the space $\mathcal I(H)$ by the Banach-Alaoglu theorem $[\ldots]$.
But I don't really understand the proof.
References:
Decoherence Suppression in Quantum Systems 2008. World Scientific Publishing. Editors: Mikio Nakahara, Robabeh Rahimi, Akira SaiToh. Page 12
Theory of Quantum Information with Memory. M. H. Chang. 2022. De Gruyter. Section 2.4. Page 54, Proposition 2.4.2
Answer: The assertion is false, if I understand it well.
First of all some general information on the notion of weak * topology.
One has a (complex) normed space $B$ and its topological dual $B'$, which is, by definition, the vector space of continuous linear maps $f: B \to \mathbb{C}$.
This space has a natural structure of normed space as well with the norm
$$||f||' := \sup\{|f(x)| \:|\: x \in B \:, ||x||=1\}\:.$$
However we can equip $B'$ with another seminormed topology induced by $B$ called weak * topology, it is characterized by the fact that
$$B' \ni f_n \to f \in B'\quad iff \quad f_n(x)-f(x)\to 0 \quad \forall x\in B\:.$$
The Banach-Alaoglu theorem proves that every norm-closed ball in $B'$
$$\{f \in B' \:|\: ||f|| \leq r\}$$
is compact with respect to the weak * topology provided $B$ is complete (i.e. is a Banach space).
Sometimes it happens that two normed spaces $B_1$ and $B_2$ are in topological duality, that is, one is (normed-space) isomorphic to the topological dual of the other. If $B_2$ is isomorphic through $F$ to $B_1' = F(B_2)$, the action of the elements of $B_2$ on the elements of $B_1$ is indicated by means of a pairing:
$$\langle b_1, b_2\rangle := (F(b_2))(b_1)\:.$$
Let us pass to the discussed assertion, that the space of the mixed states $S(H)$ is compact with respect to some weak * topology on that. The space $S(H)$ is not linear and it is not a closed ball of
the normed (Banach) space of the trace class operators $B_1(H)$ equipped with its natural norm topology induced by the notion of trace.
Therefore, one should try to prove that $S(H)$ is a compact subset of $B_1(H)$.
From the sketch of proof, it seems that the authors use the fact that $B_1(H)$ is the topological dual space of some other space. All that in order to apply the Banach-Alaoglu theorem as written above.(*)
As is well known, $B_1(H)$ is in fact (isometrically isomorphic to) the topological dual space of $B_\infty(H)$ (the Banach space of compact operators with the natural operator norm topology).
The duality is represented in terms of the trace operation (which makes sense since $B_1(H)$ is a two-sided * ideal), through the (left-continuous) pairing
$$\langle A, B \rangle := tr(AB)\:,\:\quad A\in B_1(H)\:, \quad B \in B_\infty(H)$$
The Banach-Alaoglu theorem proves that every closed ball in $B_1(H)$ is therefore compact with respect to the weak * topology induced by the topological duality considered above. In particular, according to this weak* topology,
$$B_1(H) \ni \rho_n \to \rho \in B_1(H)\quad iff \quad tr((\rho_n-\rho)A)\to 0 \quad \forall A\in B_\infty(H)$$
To conclude the proof it would be sufficient to prove that $S(H)$ is closed with respect to that topology, since closed subsets of compact sets are compact as well.
But it is not the case as proved here!
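A standard way to see this failure of closedness (my own sketch, assuming $H$ is infinite-dimensional with an orthonormal basis $\{e_n\}$): pure states along the basis converge weakly* to the zero operator, which is not a state.

```latex
% Take \rho_n := |e_n\rangle\langle e_n| \in S(H). Since e_n \to 0 weakly
% and compact operators map weakly null sequences to norm null ones,
\operatorname{tr}(\rho_n A) = \langle e_n, A e_n\rangle
  \;\xrightarrow[n\to\infty]{}\; 0 = \operatorname{tr}(0\cdot A)
  \quad \text{for all } A \in B_\infty(H),
% so \rho_n \to 0 in the weak* topology, but 0 \notin S(H) because
\operatorname{tr}(0) = 0 \neq 1\,.
```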
On the other hand, since the weak * topology is Hausdorff, compact sets are closed. This proves that the assertion, referred to the said weak * topology, is false, as $S(H)$ is not closed with respect to that weak * topology.
Maybe the assertion is true using another weak (natural?) topology.
($*$) Another possibility, not followed by the authors, is to see $B_1(H) \subset B(H)$ and to view $B(H)$ as the dual of $B_1(H)$, thus equipping the latter with its weak* topology, also known as the ultraweak topology.
Yet another way is to see $B_1(H)$ as a subset of $B_2(H)$, the Hilbert space of Hilbert–Schmidt operators, and to use the weak * topology on $B_2(H)$ induced by the Riesz anti-isomorphism.
"domain": "physics.stackexchange",
"id": 97914,
"tags": "quantum-mechanics, hilbert-space, mathematical-physics, density-operator, quantum-states"
} |
Identifying this spider | Question: I think this is a spider. 8 legs, 2 body sections, and 2 really long Palos? Anyways, can anyone identify this bug, or point me to a resource where I could classify it?
Answer: It's a camel spider; they have large pedipalps, big chelicerae, and their body is covered with little "hairs". As far as I know, there are only 2 families of that order in North America.
"domain": "biology.stackexchange",
"id": 7333,
"tags": "species-identification, arthropod"
} |
Refactor a sequence of functions | Question: I have an http library in Ruby with a Response class, I have broken it down into smaller methods to avoid having a big intialize method, but then I end up with this:
def initialize(res)
@res = res
@headers = Hash.new
if res.empty?
raise InvalidHttpResponse
end
split_data
parse_headers
set_size
check_encoding
set_code
end
I don't feel well about calling function after function like that, any suggestions on how to improve this? Full code is at github.
Answer: Looking through your whole module it's apparent you've only had experience with imperative programming (it's full of side effects, variable re-bindings, in-place updates, ...). I've already written on the topic (see this and this) so I won't expand here about the goodness of functional programming.
I'd write:
def initialize(response_string)
@code = parse_code(response_string)
raw_headers, @body = parse_data(response_string)
@headers = parse_headers(raw_headers)
end
You get the idea: create parsing methods (class methods, actually, since they won't need instance variables; they are pure functions) with inputs (arguments) and outputs that get assigned/bound to instance variables just once (not incidentally, that makes those methods easier to test).
On the other hand, there are already good network libraries for Ruby, what's the point of re-implementing them?
[EDIT] You asked for further advice. Ok, let's get a couple of methods of Response and refactor them:
def parse_data(raw_data)
all_headers, body = raw_data.split("\r\n\r\n", 2)
raise InvalidHttpResponse unless body && raw_data.start_with?("HTTP")
raw_headers = all_headers.split("\r").drop(1)
[raw_headers, body]
end
def decode_data(body)
# You can use condition ? value1 : value2 for compactness
data = if headers["Transfer-Encoding"] == "chunked"
decode_chunked(body)
else
body
end
if headers["Content-Encoding"] == "gzip" && body.length > 0
Zlib::GzipReader.new(StringIO.new(data)).read
else
data
end
end
Ideas behind the refactor:
Put a space after a comma (more on style).
Don't reuse variables names (different values must have different names).
Don't update variables in-place.
Don't write explicit return.
Use if conditionals as expressions.
Use || && for boolean logic.
Put parentheses on calls (except on DSL code). | {
"domain": "codereview.stackexchange",
"id": 3511,
"tags": "ruby"
} |
unable to create a ros package in pr2_controller tutorial | Question:
I am following the tutorial, Using the low-level robot base controllers to drive the robot.
I got a pr2 in gazebo and rosran the pr2 controller manager. When I try to
roscreate-pkg drive_base_tutorial roscpp geometry_msgs this is what the terminal looks like:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
WARNING: current working directory is not on ROS_PACKAGE_PATH!
Please update your ROS_PACKAGE_PATH environment variable.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Created package directory /home/roger/catkin_ws/src/drive_base_tutorial
Created include directory /home/roger/catkin_ws/src/drive_base_tutorial/include/drive_base_tutorial
Created cpp source directory /home/roger/catkin_ws/src/drive_base_tutorial/src
Created package file /home/roger/catkin_ws/src/drive_base_tutorial/Makefile
Created package file /home/roger/catkin_ws/src/drive_base_tutorial/manifest.xml
Created package file /home/roger/catkin_ws/src/drive_base_tutorial/CMakeLists.txt
Created package file /home/roger/catkin_ws/src/drive_base_tutorial/mainpage.dox
Please edit drive_base_tutorial/manifest.xml and mainpage.dox to finish creating your package
I am using Ubuntu 12.04 and Groovy. The output of export is:
declare -x ROSLISP_PACKAGE_DIRECTORIES=""
declare -x ROS_DISTRO="groovy"
declare -x ROS_ETC_DIR="/opt/ros/groovy/etc/ros"
declare -x ROS_PACKAGE_PATH="/opt/ros/groovy/share:/opt/ros/groovy/stacks"
declare -x ROS_ROOT="/opt/ros/groovy/share/ros"
and cd is in catkin_ws/src.
Please help. I do not know how to proceed
Originally posted by Ros_newbie on ROS Answers with karma: 17 on 2014-04-08
Post score: 0
Answer:
It sounds like you're trying to follow a rosbuild tutorial and create a rosbuild package in a catkin workspace.
The PR2 packages are not catkinized in Groovy. You should instead set up an overlay rosbuild workspace for working with the PR2 controllers on Groovy.
Originally posted by ahendrix with karma: 47576 on 2014-04-08
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Ros_newbie on 2014-04-09:
Thanks. It helped. | {
"domain": "robotics.stackexchange",
"id": 17587,
"tags": "ros, package, rosmake, pr2-simulator"
} |
Using the Turtlebot gyro without ROS | Question:
Hi,
We're using the iRobot Create for a project with a laptop mounted on board, like the turtlebot. However, we're using Windows rather than ROS. After some testing, we saw that the dead reckoning of the iRobot is pretty poor (most problems coming from the angle rotated), and found the turtlebot's gyro sensor and power board, which look very convenient!
Is it possible to access and make use of this gyro sensor through the serial port to improve on the navigation within our C# program?
Is there any documentation or anywhere someone can point me to that would show me how to interface with the sensor and perform any necessary navigation-correcting calculations?
Thanks!
Originally posted by Royal2000H on ROS Answers with karma: 1 on 2012-03-10
Post score: 0
Answer:
The gyro on the power board provides input to the "user analog input" pin on the Create. Your Windows API will likely provide a method for reading this. If it doesn't, check out this document.
The next step you'll have is to correct for sensor error. There are two primary types you should worry about:
Scale error. The gyro may provide an analog value for a given rotational rate which doesn't quite match up with the spec. The turtlebot_calibration node can automatically determine this error by comparing a prediction based on the gyro with an actual observation of motion from the TurtleBot's Kinect sensor.
Drift. The analog value corresponding to a zero rotational rate drifts over time. The TurtleBot is constantly recalibrating what the "zero value" is by averaging gyro sensor values when both wheels are not moving, and using that value as an offset.
After you do those two corrections, you're left with incorporating that data into your position estimates. You can start by integrating it to determine chassis rotation in your current dead-reckoning approach, and improve it further by adding Kinect data via an extended Kalman filter. I'm not sure what direction you want to go in for this latter point, but hopefully the above helps!
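As a rough sketch of the two corrections plus the integration step (function names, the scale parameter, and the sample values are illustrative assumptions, not TurtleBot constants):

```python
# Hypothetical sketch: estimate the zero-rate offset from stationary samples,
# apply a scale correction, then integrate the corrected rate into a heading.
def calibrate_zero(stationary_samples):
    """Average raw gyro readings taken while both wheels are stopped."""
    return sum(stationary_samples) / len(stationary_samples)

def integrate_heading(samples, dt, zero_offset, scale=1.0):
    """Integrate scale- and offset-corrected rates into a heading (radians)."""
    heading = 0.0
    for raw in samples:
        rate = (raw - zero_offset) * scale  # corrected rotational rate, rad/s
        heading += rate * dt
    return heading
```

Re-running `calibrate_zero` whenever the wheels stop gives the drift correction; `scale` is where a calibration factor (e.g., from comparing against Kinect observations) would go.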
Originally posted by Ryan with karma: 3248 on 2012-03-10
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by Mac on 2012-03-10:
I second all of the above; at this point, though, you've just reinvented a bunch of stuff that ROS does for you, in a well-debugged, well-integrated manner. I'm told there's some rudimentary ROS support on windows; probably enough to let a linux machine do the work, and a windows do the controls. | {
"domain": "robotics.stackexchange",
"id": 8552,
"tags": "ros, irobot-create, turtlebot, gyro"
} |
Radioactive Decay | Question:
Problem: Nuclei of a radioactive element $\Bbb X$ having decay constant $\lambda$ (which decays into another stable nucleus $\Bbb Y$) are being produced by some external process at a constant rate $\Lambda$. Calculate the number of nuclei of $\Bbb X$ and $\Bbb Y$ at $t_{1/2}$.
I tried to create an equation for the rate of change of the number of nuclei:
$$\dfrac{dN_{X}}{dt}=\Lambda-N_X\lambda $$
I did that because in simple decay $\dfrac{dN}{dt}=-\lambda N$ holds, and here the element is also being produced at a constant rate. But after integration, should we write $$ln\Bigg(\dfrac{\lambda N_X-\Lambda}{\lambda N_0-\Lambda}\Bigg)=-\lambda t$$
or $$ln\Bigg(\dfrac{\lambda N_X-\Lambda}{N_0}\Bigg)=-\lambda t$$ The first one, because the integration limits on $N$ ran from $N_0$ to $N$?
And next what to substitute for $t$ (ie. what is $t_{1/2}$? $ln2/\lambda$ or something else?)
Also how to do it for $\Bbb Y$?
Just write $$\dfrac{dN_Y}{dt}=\lambda N_x $$?
Answer: The first of your equations is correct. You can see this in two ways. First, just look at the dimensions. In general, the argument of a logarithm should be dimensionless; only your first option is. Second, and maybe more convincingly, look at what you get when you take $\Lambda \to0$. You should be able to reproduce the standard decay equation:
\begin{equation}
N_X(t) = N_0\, e^{-\lambda\, t}~.
\end{equation}
In your first equation, the factors of $\lambda$ on the left-hand side cancel, and you get this result. With your second equation, you would get $N_X(t) = \frac{N_0}{\lambda}\, e^{-\lambda\, t}$. So that must be wrong.
As for what $t_{1/2}$ is, surely it must just be the half-life of $\mathbb{X}$ (with no creation). In particular, if $\Lambda$ is large enough, $N_X$ will actually grow, so there is no time at which half of the material is left. Since $\mathbb{Y}$ is stable, you can assume there's no relevant half-life there.
Also, your expression for $N_Y$ is correct. It's a slightly harder integration, but not too bad. | {
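For completeness, here is a sketch of the resulting expressions (my own working, taking $t_{1/2} = \ln 2/\lambda$ as discussed above and assuming $N_Y(0) = 0$):

```latex
% Solving dN_X/dt = \Lambda - \lambda N_X with N_X(0) = N_0:
N_X(t) = \frac{\Lambda}{\lambda}
       + \left(N_0 - \frac{\Lambda}{\lambda}\right) e^{-\lambda t}

% At t_{1/2} = \ln 2/\lambda, where e^{-\lambda t_{1/2}} = 1/2:
N_X(t_{1/2}) = \frac{N_0}{2} + \frac{\Lambda}{2\lambda}

% Every X ever present has either survived or decayed into Y, so
N_Y(t) = N_0 + \Lambda t - N_X(t)
\quad\Rightarrow\quad
N_Y(t_{1/2}) = \frac{N_0}{2} + \frac{\Lambda \ln 2}{\lambda}
             - \frac{\Lambda}{2\lambda}\,.
```

One can check the first line against the differential equation directly, and that taking $\Lambda\to0$ recovers $N_X(t) = N_0 e^{-\lambda t}$.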
"domain": "physics.stackexchange",
"id": 7636,
"tags": "homework-and-exercises, radioactivity, nuclear-physics"
} |
Tapes, Trees, Trunks & Tallies | Question: I've written these two command-line tools to be used to facilitate forestry surveys.
For a little background, the idea here is that often a landowner needs to know precisely how much wood is coming out of his dirt. Not only that, but he has to account for the fact that you can't turn a cylinder (trees) into rectangular prisms (boards) without a bit of loss.
So what the landowner does is he pays my client to come down and run around the woods with a bit of measuring tape, taking the circumferences of trees. He then apparently works out the radius in his head (I hope that's what he does, otherwise everything he's ever done is completely wrong) and shouts it into a radio along with the species and how tall he believes the tree is (in 16-foot logs). Sometimes, such as in the case of a tree that has two trunks, he'll call out more than one of the same sort of tree at one time.
The computer operator - that's me - then types a three-part statement into the computer containing a short arbitrary string that identifies the tree species ("WO" for White Oak, "ASH" for Ash, "YGG" for Yggdrasil, etc...), the diameter, and the length in 16-foot logs or halves thereof. If the forester called in more than one of the same kind of tree, I can also add a number on the end that tells the program how many of the specified tree I want to add.
This is all saved to disk. Once we're back at home base, we can take the data file the tallying program produced and feed it into the analysis program, which computes (using a formula derived from the original table the client used to do this process by hand, itself derived from arcane magics) the usable board footage of each variety of tree and lists the total board footage, total number of trees, board footage by species, board footage by species and size (one log that the sawmill can produce two boards from is worth more than two boards the 'mill can produce one from), etc. etc.
The client was very (indeed, quite irrationally) concerned about the reliability of the program, so I decided the best way to store the data was to keep it in memory for as short a time as possible, and instead save the tree data to disk every time a new tree was entered. That way, if there's a program crash or power failure or the computer's thermal safety goes off, nothing is lost (with the cost of a slight increase in the risk that the power will go out while the machine is writing to the disk, thus blanking/trashing the file). Eventually I might switch to a journaling system or Python's sqlite implementation, but at design-time I decided that I'd be better off with my own simple system, with the option of converting later on if it proves to be a good idea.
I've never really showed my code to anyone before, certainly never anyone who knew what they were doing, so I'm excited to see what my comrades think of my first "professional" bit of work!
This is the tallying program that actually takes in the tree data:
'''
Created on Nov 2, 2013
@author: Schilcote
'''
def load_trees():
trees=dict()
try:
with open("trees.txt","r") as treefil:
while True:
lin=treefil.readline()
if lin=="":
break
species, count = lin.split()
trees[species]=int(count)
except FileNotFoundError:
return dict()
return trees
def save_trees(trees):
with open("trees.txt","w") as treefil:
for thetree in trees:
treefil.write(thetree+" "+str(trees[thetree])+"\n")
return
def main():
print("Autotally V0")
print("Ready")
while True:
cmd=input(">").upper()
cmdtup=list(cmd.split())
print (cmdtup)
if cmdtup[0]=="LIST":
print(load_trees())
else:
trees=load_trees()
if len(cmdtup) >= 3:
cmdtup.append(1)
key=cmdtup[0]+"_"+cmdtup[1]+"_"+cmdtup[2]
try:
trees[key]=int(trees[key])+int(cmdtup[3])
except KeyError:
trees[key]=int(cmdtup[3])
try:
print("Inserted "+str(cmdtup[3])+" "+str(cmdtup[0])+" of diameter "+str(cmdtup[1])+" with "+str(cmdtup[2])+" logs.")
except IndexError:
print("Inserted "+str(cmdtup[0])+" of diameter "+str(cmdtup[1])+" with "+str(cmdtup[2])+" logs.")
save_trees(trees)
if __name__ == '__main__':
main()
And this is the analysis side of it:
'''
Created on May 31, 2014
@author: Schilcote
'''
import collections
import pprint
def scribner(diameter,length):
"Convert length and diameter to board-feet via the Scribner method. WARNING: Only valid for measurements at top of tree. Do not use."
a = (0.0494 * diameter * diameter * length)
b = (0.124 * diameter * length)
c = (0.269 * length)
return a-b-c
def regressive_scribner(d,l):
"Convert length (in feet) and diameter (in inches) to board-feet via an equation derived from the Scribner tables. Accurate to around 10 board feet."
v = 0.0942919095863512*d**2 + 0.0231348479474668*l*d**2 - 16.494587251523 - 0.119077488412871*l - 0.00210681861605682*d*l**2
return v
def tree_to_dl(tree):
"Takes a string tree representation and turns it into a tuple of (species, diameter, logs)"
species, diameter, logs = tree.split("_")
diameter=int(diameter)
logs=float(logs)
return species, diameter, logs
from treetally import load_trees
if __name__ == '__main__':
trees=load_trees()
totaltrees=sum(trees.values())
print("# of trees:",totaltrees)
#We want all these dicts to start out with values of zero for every type of tree;
#so we set their default_factory to a lambda that simply returns 0
retzero=lambda: 0
species_counts=collections.defaultdict(retzero)
spec_diam_counts=collections.defaultdict(retzero)
species_bf=collections.defaultdict(retzero)
spec_diam_bf=collections.defaultdict(retzero)
for tree, count in trees.items():
spec, diam, logs = tree_to_dl(tree)
species_counts[spec]+=count
spec_diam_counts[(spec,diam)]+=count
bf=scribner(diam,int(logs*16)) #Scribner wants feet; 16 feet to a log
species_bf[spec]+=int(bf)*count
spec_diam_bf[(spec, diam)]+=int(bf)*count
totalbf=int(sum(species_bf.values()))
print("Total board feet:",totalbf)
print("Avg. BF per tree: ",(totalbf//totaltrees))
print("Board feet by species:")
pprint.pprint(species_bf)
print("Board feet by species & diameter:")
pprint.pprint(spec_diam_bf)
print("Counts by species:")
pprint.pprint(species_counts)
print("Counts by species and diameter:")
pprint.pprint(spec_diam_counts)
A few notes on the criticisms I anticipate:
Yes, it's not compliant with PEP8. I don't find PEP8 to be particularly pleasant to write or read, however, so pleh. This code isn't intended to be worked on by anyone but myself, so I just did what I find the most readable.
I probably shouldn't leave disused functions just lying around. The failed scribner() was the product of hours of research, however, so I decided I ought to keep it around in the conceivable case I worked out a more elegant method than my regression-based formula.
I realize that hardcoding the filename is unpythonic and generally unwise, but the impetus here was to get something the client could look at out the door as fast as possible, and as far as I know Python doesn't have that one-line command to invoke the Windows file selector that Blitz Plus has.
Thoughts?
Answer: def load_trees():
trees=dict()
You only use this variable in the case that you successfully opened the file. I'd move it after the open.
try:
with open("trees.txt","r") as treefil:
I recommend not shortening names like this. Call it tree_file; it's not really any serious amount of extra typing, but it's way easier to see what it is.
while True:
lin=treefil.readline()
if lin=="":
break
This loop can be replaced by for lin in treefil:. It'll give you each line in the file one at a time.
species, count = lin.split()
trees[species]=int(count)
What if the file is messed up and count isn't a number here? You may very well simply not care.
except FileNotFoundError:
return dict()
return trees
I recommend moving return trees into the try: block.
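Putting those suggestions together, a rewritten load_trees might look like this (a sketch, keeping the same hardcoded filename and line format as the original):

```python
# Rewritten load_trees: iterate the file directly, return inside the try block.
def load_trees():
    try:
        with open("trees.txt", "r") as tree_file:
            trees = {}
            for line in tree_file:
                if not line.strip():
                    continue  # skip blank lines instead of breaking
                species, count = line.split()
                trees[species] = int(count)
            return trees
    except FileNotFoundError:
        return {}
```

Note this still raises ValueError on a malformed count, which you may or may not care about, as mentioned above.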
def save_trees(trees):
with open("trees.txt","w") as treefil:
for thetree in trees:
treefil.write(thetree+" "+str(trees[thetree])+"\n")
return
This line does nothing.
def main():
print("Autotally V0")
print("Ready")
while True:
cmd=input(">").upper()
cmdtup=list(cmd.split())
print (cmdtup)
You note that you aren't following PEP8. That's okay. But you should probably be consistent. Here you've got an extra space after print, but not for any other function.
if cmdtup[0]=="LIST":
print(load_trees())
The output of printing a Python dict would seem rather programmer friendly, not user friendly. I can't imagine that the output of this statement is very useful.
else:
trees=load_trees()
if len(cmdtup) >= 3:
cmdtup.append(1)
Why >=? If the user types 5 parameters, you want to add a sixth? It seems to me that what you really want is ==. That is, if the user only passes three parameters, add a fourth.
key=cmdtup[0]+"_"+cmdtup[1]+"_"+cmdtup[2]
For keys in internal data structures, it's probably best to use a tuple, not a string. It's just less awkward to work with.
try:
trees[key]=int(trees[key])+int(cmdtup[3])
except KeyError:
trees[key]=int(cmdtup[3])
Why don't you use a defaultdict here?
What happens when the user passes 2 or 4 parameters? Your program will either die or do the wrong thing.
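As a sketch of the defaultdict/Counter idea (the tuple key and counts here are illustrative, not the tool's real data):

```python
# collections.Counter gives missing keys a default of 0, so the
# try/except KeyError above disappears entirely.
from collections import Counter

def add_tree(tally, key, count=1):
    tally[key] += count  # missing keys start at 0
    return tally

tally = Counter()
add_tree(tally, ("WO", 12, 1.0))      # first insertion: 0 + 1
add_tree(tally, ("WO", 12, 1.0), 4)   # later insertion: 1 + 4
```

This also pairs naturally with the tuple-key suggestion above.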
try:
print("Inserted "+str(cmdtup[3])+" "+str(cmdtup[0])+" of diameter "+str(cmdtup[1])+" with "+str(cmdtup[2])+" logs.")
except IndexError:
print("Inserted "+str(cmdtup[0])+" of diameter "+str(cmdtup[1])+" with "+str(cmdtup[2])+" logs.")
So, cmdtup already contains strings; there is no need to pass its elements to str again. In what situations would this throw an IndexError?
save_trees(trees)
if __name__ == '__main__':
main()
def scribner(diameter,length):
"Convert length and diameter to board-feet via the Scribner method. WARNING: Only valid for measurements at top of tree. Do not use."
a = (0.0494 * diameter * diameter * length)
b = (0.124 * diameter * length)
c = (0.269 * length)
return a-b-c
def regressive_scribner(d,l):
"Convert length (in feet) and diameter (in inches) to board-feet via an equation derived from the Scribner tables. Accurate to around 10 board feet."
v = 0.0942919095863512*d**2 + 0.0231348479474668*l*d**2 - 16.494587251523 - 0.119077488412871*l - 0.00210681861605682*d*l**2
return v
def tree_to_dl(tree):
"Takes a string tree representation and turns it into a tuple of (species, diameter, logs)"
species, diameter, logs = tree.split("_")
diameter=int(diameter)
logs=float(logs)
return species, diameter, logs
from treetally import load_trees
if __name__ == '__main__':
trees=load_trees()
totaltrees=sum(trees.values())
print("# of trees:",totaltrees)
#We want all these dicts to start out with values of zero for every type of tree;
#so we set their default_factory to a lambda that simply returns 0
retzero=lambda: 0
Actually you can use int for this purpose. int() returns 0. Furthermore, collections.Counter() is actually even better.
species_counts=collections.defaultdict(retzero)
spec_diam_counts=collections.defaultdict(retzero)
species_bf=collections.defaultdict(retzero)
spec_diam_bf=collections.defaultdict(retzero)
for tree, count in trees.items():
spec, diam, logs = tree_to_dl(tree)
species_counts[spec]+=count
spec_diam_counts[(spec,diam)]+=count
bf=scribner(diam,int(logs*16)) #Scribner wants feet; 16 feet to a log
species_bf[spec]+=int(bf)*count
spec_diam_bf[(spec, diam)]+=int(bf)*count
totalbf=int(sum(species_bf.values()))
print("Total board feet:",totalbf)
print("Avg. BF per tree: ",(totalbf//totaltrees))
Why round for the average?
print("Board feet by species:")
pprint.pprint(species_bf)
print("Board feet by species & diameter:")
pprint.pprint(spec_diam_bf)
print("Counts by species:")
pprint.pprint(species_counts)
print("Counts by species and diameter:")
pprint.pprint(spec_diam_counts)
General thoughts:
Storing your data in a one-off format is not generally the best idea. Instead, I'd suggest using the csv or json modules to write the data out in a standard format. This should reduce the amount of code you have to write, and make it easier to work with other data. For example, you can easily import csv into a spreadsheet.
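For instance, a json-based version of the save/load pair could be as small as this (a sketch; the filename parameter is my own choice, and JSON object keys must be strings, which matches your current string keys):

```python
# Same data shape as the one-off format, but in a standard format that
# other tools (and future you) can read directly.
import json

def save_trees_json(trees, path="trees.json"):
    with open(path, "w") as f:
        json.dump(trees, f)

def load_trees_json(path="trees.json"):
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}
```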
It's a little odd to make this a command line tool rather than a GUI or web tool. Of course, that's more work, so maybe it's not worthwhile.
Your code was pretty straightforward to follow. That's good. | {
"domain": "codereview.stackexchange",
"id": 7817,
"tags": "python, python-3.x"
} |
Order of Shielding Effect for orbitals | Question: From what I know Shielding effect is the ability of inner electrons to repel outer electrons and reduce the Nuclear charge felt by the outer electrons and this is caused by electron-electron repulsion.
When I read the explanation from a site it said that
Shielding refers to the core electrons repelling the outer rings and thus lowering the 1:1 ratio. Hence, the nucleus has "less grip" on the outer electrons and are shielded from them. Electrons that have greater penetration can get closer to the nucleus and effectively block out the charge from electrons that have less proximity.
Because the order of electron penetration from greatest to least is s, p, d, f; the order of the amount of shielding done is also in the order s, p, d, f.
But I don't understand why inner orbitals would have a greater shielding effect than outer orbitals: the closer two electrons are, the greater their mutual repulsion, following the inverse-square law. So, since the outer electrons are closer to one another, shouldn't they show more repulsive force than the inner electrons?
One explanation I thought of was that, since the electrons are not stationary and move at great speeds, the charge could be considered symmetrically distributed over the orbital's volume; as the charge density would be low for outer orbitals, they would have less shielding effect.
Is this explanation correct or am I going wrong somewhere?
Source:https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Quantum_Mechanics/10%3A_Multi-electron_Atoms/Multi-Electron_Atoms/Penetration_and_Shielding
Answer: I suggest looking at the data instead of the kind of explanation you expect; such explanations don't say much about the magnitude of the effect.
https://en.m.wikipedia.org/wiki/Ionization_energies_of_the_elements_(data_page)
Here you can see that hydrogen and francium differ only by about a factor of 3.4 in the energy needed to remove one outer electron. Now let's look at the atom sizes:
https://chem.libretexts.org/Courses/Mount_Royal_University/Chem_1201/Unit_2._Periodic_Properties_of_the_Elements/2.08%3A_Sizes_of_Atoms_and_Ions
(Figure 3.7, about 70% of the way down the page.)
So, if we assume the outer electron is just orbiting a charge of +1, only at a different distance, the result is quite similar to what we expect naively: the charge of the inner electrons and the protons in the nucleus just cancels out, as if the outer electron orbits a single proton. Another angle to look at this problem from is this trick:
https://en.m.wikipedia.org/wiki/Shell_theorem
Not sure it applies exactly (giving a solution exactly as if the inner electrons and $n-1$ protons are cancelled), but I assume it is a good starting point for digging deeper into this effect.
I don't think the way the effect is described in your link is a good description; it is a bit too vague.
"domain": "chemistry.stackexchange",
"id": 15852,
"tags": "inorganic-chemistry, orbitals, electrons"
} |
Applying measurement postulate to a continuous sum of eigenvectors (by analogy) | Question: Measurement postulate:
If we measure the Hermitian operator $\hat Q$ in the state $Ψ$, the possible
outcomes for the measurement are the eigenvalues $q_1$, $q_2$, . . .. The probability $p_i$ to measure $q_i$ is given by
$$p_i = |\alpha_i|^2$$
where $\Psi(x) = \sum_i \alpha_i\,\psi_i(x)$.
Trying to apply it to a continuous sum of eigenstates:
I am trying to apply the measurement postulate to a continuous sum of eigenstates.
In the continuous sum
$$ \Psi(x,0)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\Phi(k)e^{ikx}dk$$
$e^{ik_0x}$ plays the role of the eigenstate $ψ_i(x)$, for a particular $k_0$
$\frac{\Phi (k_0) dk_0}{\sqrt{2\pi}}$ plays the role of the coefficient $\alpha_i$
the operator $\hat Q$, for example, is $\frac{\hbar}{i} \frac{\partial}{\partial x} $
the eigenvalue associated to $e^{ik_0 x}$ is $p_0$
As a result of this analogy, it should follow that the probability to measure $p_0$ in the state $Ψ$ is:
$$ \frac{|\Phi (k_0)|^2 |dk_0|^2}{{2\pi}} $$
I know however it is not correct because:
of this weird $|dk_0|^2$,
I know that $\int |\Phi(k)|^2dk = 1$ (from Parseval theorem) so the $2\pi$ looks suspicious
Edit:
I realize that if I've had taken the other Fourier transform (the one with the $2\pi$ in the exponential), the problem with the $2\pi$ would not have arised!
Answer: I've changed your bullet points: In the continuous sum
$$ \Psi(x,0)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\Phi(k)e^{ikx}dk\,,$$
$\frac{e^{ik_0x}}{\sqrt{2\pi}}$ plays the role of the eigenstate $ψ_i(x)$ for a particular $k_0$, rather than just ${e^{ik_0x}}$, because the state needs to be normalized (in a sense to be discussed below)
${\Phi (k_0)}$ plays the role of the coefficient $\alpha_i$; we don't include the $dk$ for reasons explained below, and we've already moved off the factor of $\sqrt{2\pi}$
etc.
In moving from the discrete case to the continuous case, we can no longer talk about the probability of a getting a particular value upon measuring an observable. Instead, we proceed as follows.
First of all, imagine we're measuring a continuous variable, say the position of a particle, but we can only measure the position to be in certain regions rather than at particular places (e.g., we lack perfect precision in our measurement). In other words, we have bins of size, say, $\Delta x$, so that we measure the position to be in $\dots,[-2\Delta x, -\Delta x]$, $[-\Delta x, 0]$, $[0, \Delta x]$, $[\Delta x, 2\Delta x],\dots$ and so on. Then, we can talk about the probability $P_j$ of measuring the position to be in bin $j$, i.e., in the bin $[j\Delta x, (j+1)\Delta x]$, so that the probability of finding the position being anywhere is of course 1, i.e.
$$
1= \sum_{j=-\infty}^{\infty} P_j\,.
$$
Now, imagine that we want to make $\Delta x$ smaller and smaller, so that we are refining the precision of our measurement. In fact, let's take the limit as $\Delta x\to0$. How do we do this? Well, write
$$
1= \sum_{j=-\infty}^{\infty} {P_j}
=\sum_{j=-\infty}^{\infty} \frac{P\left(j\Delta x\leq x \leq(j+1)\Delta x\right)}{\Delta x}\Delta x\,.
$$
In order for the limit to make sense, the quantity $P_j/\Delta x$ should approach a finite limit $p(x)$ as $\Delta x \to0$. We can then interpret $p(x)$ as a probability per unit length; that is, it is a probability density. The assumption that the limit exists is a reasonable assumption, because it essentially means that as long as $\Delta x$ is small enough, doubling the interval implies that we've doubled the probability that we can find the particle in the interval.
Now, accordingly, $p(x)\Delta x$ is approximately the probability that the particle is found between $x$ and $x+\Delta x$, and hence the probability of finding the particle at $x$ is zero. This is just the way that probability distributions for continuous variables work. Finally, then, the sum above becomes the limit, i.e.
$$
1=\int_{-\infty}^{\infty} p(x)\,dx\,.
$$
Moving back to the quantum discussion, then, $|\Phi(k)|^2$ plays the role of a probability density function, and therefore $|\Phi(k)|^2dk$ is the probability that the momentum of the particle is measured to be in the interval $[k, k+dk]$. We can verify that this makes sense at least mathematically by using the (distributional) fact that
$$
\delta (k-k_0) = \int_{-\infty}^{\infty}\frac{e^{i(k-k_0)x}}{2\pi}dx
$$
to verify that
$$
1 = \int_{-\infty}^{\infty} |\Phi(k)|^2dk\,.
$$
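For a concrete wavepacket this normalization can also be checked numerically. The sketch below is an illustration of my own, not part of the original answer: it replaces the continuum Fourier transform with a Riemann sum on a truncated grid, takes a Gaussian $\psi(x)$ with $\int|\psi|^2\,dx = 1$, and confirms $\int|\Phi(k)|^2\,dk \approx 1$.

```python
import cmath
import math

# position-space Gaussian wavepacket, normalized so that ∫ |psi|^2 dx = 1
def psi(x):
    return (1.0 / math.pi) ** 0.25 * math.exp(-x * x / 2.0)

# discretized grid replacing the real line (truncation and step are assumptions)
L, N = 8.0, 400
dx = 2 * L / N
xs = [-L + (i + 0.5) * dx for i in range(N)]

# momentum amplitude Phi(k) = (1/sqrt(2*pi)) ∫ psi(x) e^{-ikx} dx,
# approximated by a midpoint Riemann sum
def phi(k):
    return sum(psi(x) * cmath.exp(-1j * k * x) for x in xs) * dx / math.sqrt(2 * math.pi)

# ∫ |Phi(k)|^2 dk should come out to 1 as well (Plancherel's theorem)
dk = 2 * L / N
ks = [-L + (i + 0.5) * dk for i in range(N)]
total = sum(abs(phi(k)) ** 2 for k in ks) * dk
```

The agreement is limited only by the grid truncation, since the Gaussian tails beyond $|x|,|k|>8$ are negligible.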
(This is why $\sqrt{2\pi}$ is attached to the basis function: it is essentially the normalization factor of the basis function. Re the last comment in the OP about the factor of $2\pi$ in the exponent of the exponential: this would indeed render the normalization factors equal to 1. However, note that
$$
\frac{\hbar}{i} \frac{d}{dx}e^{i2\pi k x} = (\hbar 2\pi k) e^{i2\pi k x}\,,
$$
so that the eigenvalue of the momentum operator is $p = \hbar 2\pi k$. This means that you have to re-interpret the value $k$: it is no longer the wave number; instead it is the reciprocal of the de Broglie wavelength, directly. This is fine, but it's not the convention normally chosen.) | {
"domain": "physics.stackexchange",
"id": 90872,
"tags": "quantum-mechanics, wavefunction, fourier-transform, probability, quantum-measurements"
} |
How to measure processing power of a quantum computer? | Question: Is there any way to get an equivalent of computing power of quantum computers in terms of computing power of common computers? I mean, how many teraflops (or so) can a quantum computer compute? How can I calculate that equivalence? I think it depends on the architecture, like a x64 in common computer, and a 512-qubit.
Answer: This is a very difficult/subtle/controversial question at the heart of current research that even top-notch scientists are having a lot of trouble answering precisely. There are two main lines of quantum computer development:
Adiabatic quantum computing: this is the Dwave computer that is up to 512 qubits, but it doesn't operate using "qubit transport", which is the main line of scientific research.
"Standard" quantum computing transports qubits so their spins can interact as laid out in "quantum circuits". There is a lot of theoretical research on this topic, but physics researchers are apparently still far from actually implementing (physically building/realizing) this type of computing.
Unfortunately, in both cases there are further issues:
Theoretically, can one get a speedup based on the mathematics/physics of quantum computing? See e.g. proof of speedup with [adiabatic] qm computing tcs.se
In practice, after one builds the computer and applies various systems (error correction being one of the main ones, and decoherence being one of the main challenges to overcome), how efficient will it be? Will there be any speedup over conventional computing?
Quantum algorithms run completely differently; they are not based on binary logic! So there is not yet any way to determine what the "equivalent" or "corresponding" quantum algorithm is for a "classical" algorithm under consideration. The best we can do is try to optimize the performance of both and see what happens, but that is obviously not satisfactory and somewhat dependent on human factors.
So the best answer right now is "nobody can really say". Or: the best quantum computer in the world, Dwave (apparently costing \$ millions per unit, with over $100M in research so far), has now been shown in scientific papers to be slower than desktop computers when the algorithms on those computers are optimized. Or: "it's an open question subject to cutting-edge international research". There are many posts on Aaronson's blog on the subject. See also
Dwaves dream machine
Will Bourne, Inc magazine 1/9/2014
Dwave & the inception of the QM computing dream collection of lots of refs, papers, se questions etc | {
"domain": "cs.stackexchange",
"id": 2289,
"tags": "terminology, quantum-computing"
} |
Getting three numbers input interactively and process them | Question: I'm new to Python and programming in general.
Let's imagine I have a simple task:
Given 3 numbers from user input, I need to count negative ones and print the result.
Considering what I know about Python at this moment, I can solve it like this:
class ListTooShort(Exception):
pass
def input_check():
temp_list = []
while True:
try:
temp_list = [float(x) for x in input('Input 3 numbers, separated by space: ').split()]
if (len(temp_list) < 3):
raise ListTooShort
if (len(temp_list) > 3):
temp_list = temp_list[:3]
except ValueError:
print('One of given values is not a number, try again.')
except ListTooShort:
print('Not enough values, try again.')
else: return temp_list
def count_negative_numbers(value_list):
counter = 0
for i in value_list:
if (i < 0):
counter += 1
return counter
print('If theres more than 3 numbers,\nlist will be trimmed to 3 first values.')
numbers = input_check()
result = count_negative_numbers(numbers)
print(f'Theres {result} negative numbers in this 3 value list')
or like this:
counter = 0
numbers = [float(x) for x in input('Input 3 numbers, seperated by space: ').split()]
if (len(numbers) >= 3):
numbers = numbers[:3]
for i in numbers:
if (i < 0):
counter += 1
print(f'Theres {counter} negative numbers in this value(s) list')
Both do the trick, but the main questions I have is:
Would it be better to not complicate the code too much when there's a simple task at hand?
Is it even worth making functions for simple things like that or should I just go straight to solving an issue?
Should I even pay attention to things like that or is it better to just "freeroam" without caring that much?
Answer: Getting exactly 3 values
I think that since you are requesting exactly 3 values, I'd abandon inputting an arbitrary number of values separated by a space. Instead, in your while loop, just enforce the constraint that your list has no more than 3 values:
temp_list = []
while len(temp_list) < 3:
try:
temp_list.append(
float(input(f"Input number {len(temp_list) + 1}: "))
)
except ValueError:
print("Error processing your number, try again!")
Defining custom exceptions
While we're at it, rarely do you need to create your own exceptions. A ValueError, if needed, would suffice:
from typing import List

def some_function(mylist: List[int]):
if len(mylist) != 3:
raise ValueError("Expected 'mylist' to have exactly 3 values")
print(len(mylist))
bool and int can be used the same way
When you are checking for a condition and counting True values, True behaves just like 1:
>>> int(True)
1
>>> int(False)
0
So you can add the boolean expression if you felt so inclined:
def count_negative_numbers(value_list):
counter = 0
for i in value_list:
counter += (i < 0)
return counter
Your for loop here is perfectly readable and doesn't really need refactoring, but a more advanced technique would be to feed a generator expression to sum:
def count_negative_numbers(value_list):
return sum(i < 0 for i in value_list)
Other Questions
Would it be better to not complicate the code too much when there's a simple task at hand?
To borrow a quote:
Simple is better than complex, complex is better than complicated.
Make it as simple as reasonably possible to solve the problem at hand. As a note, less code (one-liners) does not always mean simpler.
Is it even worth making functions for simple things like that or should I just go straight to solving an issue?
It's good practice (both in terms of standards and practice in general) to keep up good code organization. Personally, I have found myself sometimes going back to small scripts I wrote where I know I solved a certain problem before. Writing good functions with helpful names and docstrings makes finding them much faster.
I'd keep writing functions where you find it helps readability and organization of your scripts. It will pay dividends when it comes time to write much larger applications.
Should I even pay attention to things like that or is it better to just "freeroam" without caring that much?
Freeroaming is great for just getting ideas out there, but there's a balance to be struck. Writing nothing but "freeroam" code will enforce habits that will need to be broken when writing code for larger problems/applications. It's an iterative process:
Solve
Refactor
Repeat | {
"domain": "codereview.stackexchange",
"id": 44187,
"tags": "python, beginner, algorithm, comparative-review"
} |
How is linear momentum conserved in case of a freely falling body? | Question: When an object is experiencing free fall, it has a constant acceleration and hence an increasing velocity (neglecting friction). Thus its momentum is increasing. But according to law of conservation of momentum, shouldn't there be a corresponding decrease in momentum somewhere else ?
Where is it ?
Answer: Linear momentum is conserved only in systems with zero net external force. A body falling toward Earth experiences Earth's gravitational force, so its linear momentum increases.
But if you include the Earth in your system then momentum is definitely conserved, as an equal amount of momentum is imparted to the Earth in the upward direction. Individually, momentum is not conserved for either body, since there is an external force of gravity on each.
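As a back-of-the-envelope illustration (the 1 kg ball and the round-number constants are my own example values, not from the question), the Earth's recoil momentum exactly cancels the ball's, while the Earth's resulting velocity is immeasurably small:

```python
# momentum balance for a ball in free fall, with the Earth included in the system
g = 9.8            # m/s^2, gravitational acceleration at the surface
m_ball = 1.0       # kg, example ball
m_earth = 5.97e24  # kg, mass of the Earth

t = 1.0                      # after one second of free fall
p_ball = m_ball * g * t      # downward momentum gained by the ball (kg m/s)
p_earth = -p_ball            # Newton's third law: equal and opposite impulse on Earth

v_earth = p_earth / m_earth  # the Earth's upward speed: utterly negligible
total_p = p_ball + p_earth   # total momentum of the closed system stays zero
```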
"domain": "physics.stackexchange",
"id": 68788,
"tags": "newtonian-mechanics, momentum, conservation-laws, free-fall"
} |
Deciding whether an interval contains a prime number | Question: What is the complexity of deciding whether an interval of the natural numbers contains a prime? A variant of the Sieve of Eratosthenes gives an $\tilde O(L)$ algorithm, where $L$ is the length of the interval and $\sim$ hides poly-logarithmic factors in the starting point of the interval; can we do better (in terms of $L$ alone)?
Answer: Disclaimer: I'm not an expert in number theory.
Short answer: If you're willing to assume "reasonable number-theoretic conjectures", then we can tell whether there is a prime in the interval $[n, n+\Delta]$ in time $\mathrm{polylog}(n)$. If you're not willing to make such an assumption, then there is a beautiful algorithm due to Odlyzko that achieves $n^{1/2 + o(1)}$, and I believe that this is the best known.
Very helpful link with lots of great information about a closely related problem: PolyMath project on deterministic algorithms for finding primes.
Long answer:
This is a difficult problem, an active area of research, and seems to be intimately connected to the difficult question of bounding gaps between the primes. Your problem is morally very similar to the problem of finding a prime between $n$ and $2n$ deterministically, which was recently the subject of a PolyMath project. (If you want to really dive into these questions, that link is a great place to start.) In particular, our best algorithms for both problems are essentially the same.
In both cases, the best algorithm depends heavily on the size of gaps between the prime. In particular, if $f(n)$ is such that there is always a prime between $n$ and $n + f(n)$ (and $f(n)$ can be computed efficiently), then we can always find a prime in time $\mathrm{polylog}(n) \cdot f(n)$ as follows. To determine whether there is a prime between $n$ and $n + \Delta$, first check if $\Delta \geq f(n)$. If so, output yes. Otherwise, just iterate through the integers between $n$ and $n + \Delta$ and test each for primality and answer yes if you find a prime and no otherwise. (This can be done deterministically, which is why deterministically finding a prime between $n$ and $2n$ is so closely related to determining whether there is a prime in a certain interval.)
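The reduction described above can be sketched in code. To be clear about the assumptions: this sketch substitutes a deterministic Miller-Rabin test (with a fixed base set known to be correct for all $n < 3.3\cdot 10^{24}$) for a genuine $\mathrm{polylog}(n)$ primality test, and uses a Cramér-style conjectural gap bound $f(n) \approx 2\ln^2 n$ whose constant is arbitrary and unproven:

```python
import math

def is_prime(n):
    # deterministic Miller-Rabin; this base set is correct for all n < 3.3e24
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def f(n):
    # conjectured prime-gap bound (Cramer-style); the constant 2 is arbitrary
    return 2 * max(math.log(n), 2) ** 2

def interval_contains_prime(n, delta):
    # if the interval is longer than the assumed maximal gap, a prime must exist;
    # otherwise the interval is short enough to test each candidate directly
    if delta >= f(n):
        return True
    return any(is_prime(m) for m in range(n, n + delta + 1))
```

Note that the correctness of the `delta >= f(n)` shortcut rests entirely on the conjectured gap bound; only the direct scan is unconditional.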
If the primes behave like we think they do, then this is the end of the story (up to $\mathrm{polylog}(n)$ factors). In particular, we expect to be able to take $f(n) = O(\log^2 n)$. This is known as Cramér's conjecture after Harald Cramér, and proving it seems very far out of reach at the moment. But, as far as I know, it is widely believed. (One arrives at this conjecture, e.g., from the heuristic that the primes behave like the random set of integers obtained by including each integer $n \geq 3$ independently at random with probability $1/\log n$.)
There are many conjectures that imply the much, much weaker bound $f(n) \leq O(\sqrt{n})$, such as Legendre's conjecture. (I'm not aware of any conjectures that are known to imply an intermediate bound, though I imagine that they exist.) And, the Riemann hypothesis is known to imply the similar bound $f(n) \leq O(\sqrt{n} \log n)$. So, if you assume these conjectures, you essentially match Odlyzko's algorithm (up to a factor of $n^{o(1)}$) with a much simpler algorithm.
I believe that the best unconditional bound right now is $\widetilde{O}(n^{0.525})$ due to Baker, Harman, and Pintz. So, if you assume nothing, then Odlyzko's algorithm beats the obvious algorithm by roughly a factor of $n^{0.025}$. | {
"domain": "cstheory.stackexchange",
"id": 4468,
"tags": "cc.complexity-theory, ds.algorithms, nt.number-theory, comp-number-theory"
} |
Creating many Android cardViews with different background colors | Question: I have a cardView in my Android app with three child components. I set the background color for each component. Based on the card INDEX I reset the background colors individually.
If it were just one cardView then it's not a problem, but I have to include 16 more cardViews (16 more card indexes) with exactly the same structure; only the ids of the view and child components will differ. How can this be done?
public void cardView_type_Colors(int card_idx, String colors){
if (card_idx == 0){
if (colors.equals("YELLOW")){
color_cardView1 =
color_cardViewDisplayName1 =
color_cardViewDisplayNumber1 =
color_cardViewDisplayTown1 =
color_btncardView1 = R.color.LimeYellow;
setCardView_text_colors(0,"BLACK");
} else if (colors.equals("PALEGREEN")){
color_cardView1 =
color_cardViewDisplayName1 =
color_cardViewDisplayNumber1 =
color_cardViewDisplayTown1 =
color_btncardView1 = R.color.PaleGreen;
setCardView_text_colors(0,"WHITE");
} else if (colors.equals("ORANGE")){
color_cardView1 =
color_cardViewDisplayName1 =
color_cardViewDisplayNumber1 =
color_cardViewDisplayTown1 =
color_btncardView1 = R.color.Orange;
setCardView_text_colors(0,"WHITE");
} else if (colors.equals("RED")){
color_cardView1 =
color_cardViewDisplayName1 =
color_cardViewDisplayNumber1 =
color_cardViewDisplayTown1 =
color_btncardView1 = R.color.Red;
setCardView_text_colors(0,"WHITE");
} else if (colors.equals("DEFAULT")){
color_cardView1 =
color_cardViewDisplayName1 =
color_cardViewDisplayNumber1 =
color_cardViewDisplayTown1 =
color_btncardView1 = R.color.LightGrey;
setCardView_text_colors(0,"WHITE");
} else if (colors.equals("DISABLE")){
color_cardView1 =
color_cardViewDisplayName1 =
color_cardViewDisplayNumber1 =
color_cardViewDisplayTown1 =
color_btncardView1 = R.color.DisableColor;
setCardView_text_colors(0,"BLACK");
}
} else if (card_idx == 1){
if (colors.equals("YELLOW")){
color_cardView2 =
color_cardViewDisplayName2 =
color_cardViewDisplayNumber2 =
color_cardViewDisplayTown2 =
color_btncardView2 = R.color.LimeYellow;
setCardView_text_colors(1,"BLACK");
} else if (colors.equals("PALEGREEN")){
color_cardView2 =
color_cardViewDisplayName2 =
color_cardViewDisplayNumber2 =
color_cardViewDisplayTown2 =
color_btncardView2 = R.color.PaleGreen;
setCardView_text_colors(1, "WHITE");
} else if (colors.equals("ORANGE")){
color_cardView2 =
color_cardViewDisplayName2 =
color_cardViewDisplayNumber2 =
color_cardViewDisplayTown2 =
color_btncardView2 = R.color.Orange;
setCardView_text_colors(1,"WHITE");
} else if (colors.equals("RED")){
color_cardView2 =
color_cardViewDisplayName2 =
color_cardViewDisplayNumber2 =
color_cardViewDisplayTown2 =
color_btncardView2 = R.color.Red;
setCardView_text_colors(1,"WHITE");
} else if (colors.equals("DEFAULT")){
color_cardView2 =
color_cardViewDisplayName2 =
color_cardViewDisplayNumber2 =
color_cardViewDisplayTown2 =
color_btncardView2 = R.color.LightGrey;
setCardView_text_colors(1,"WHITE");
} else if (colors.equals("DISABLE")){
color_cardView2 =
color_cardViewDisplayName2 =
color_cardViewDisplayNumber2 =
color_cardViewDisplayTown2 =
color_btncardView2 = R.color.DisableColor;
setCardView_text_colors(1,"BLACK");
}
}
}
Answer: I'll start this post off with a notice that my knowledge of the Java language isn't very high. However, I have a lot of experience with development in languages like C# and JavaScript so there are a few things I can help with. Most of my code below will be pseudo-code, but I did research more proper syntax and import statements to clean it up a bit.
Based on your post alone, it is hard to tell what is really going on, which is mostly due to the fact that you've got chained assignments, and your variable names are all very similar (this makes it hard to understand what each variable really is). I'm not quite sure why you have chained assignments here, but this is due to the lack of context. It does look very odd (and it could be my lack of knowledge of the Java language), because it looks like you're assigning a Color to a String to an int to a String to what appears to be some sort of Control.
Create a Class
Without clarification, one suggestion I will make is to create a class for your Card objects. This will help to clarify what is going on a little better, and makes my upcoming suggestions much more feasible.
Card.java:
import java.awt.Color;
public class Card {
public int id = 0;
public int displayNumber = 0;
public String displayName = "CARD";
public String town = "Liverpool";
public Color backgroundColor = Color.black;
public Color foreColor = Color.white;
public CardView view = null;
public Card(int cardID, int num, String name, String town, Color bgColor) {
id = cardID;
displayNumber = num;
displayName = name;
this.town = town;
backgroundColor = bgColor;
}
}
I had to create an additional class called CardView since I'm not sure what that is, but assume it's a control. I just created an empty class to compile my initial testing.
CardView.java:
public class CardView { }
Again, without truly knowing what each of your variables are this is the best I can come up with and is purely based on your existing naming conventions. This class will help to make your code more readable since you now have consistent names for your variables.
Use a List
Lists are very useful objects, no matter the language you're in (I'd be really interested to see a language with a bad implementation of them). I heavily recommend using a list for this task since you can add all of your cards to it and then loop over it later when doing your assignments. For example, to create a list of Card objects:
List<Card> cards = new ArrayList<Card>();
cards.add(new Card(0, 0, "Card 1", "London", Color.black));
cards.add(new Card(0, 0, "Card 2", "Paris", Color.black));
cards.add(new Card(0, 0, "Card 3", "Dubai", Color.black));
Create Methods for Assigning Colors
This will help clean up a little since each method will do something very specific. For example, I'd recommend the following four methods:
applyCardColors(Card c, String backgroundColor)
Used to apply all relevant colors to a card.
applyBackgroundColor(Card c, String color)
Used to apply the background color.
determineAndApplyForeColor(Card c, String backgroundColor)
Used to determine the proper fore color and then apply it with the next method.
applyForeColor(Card c, String color)
Used to apply the proper fore color.
To some it may seem a little redundant, but it cleans up code and ensures better readability.
Use switch instead of if-else if-else
This one may not be recommended by all, but I believe it will help clean up the code since we've now broken your code down into individual methods. The applyBackgroundColor method has the largest switch structure, but this is due to the number of colors you can have.
public static void applyBackgroundColor(Card c, String color) {
switch (color) {
case "YELLOW": c.backgroundColor = Color.yellow; break;
case "PALEGREEN": c.backgroundColor = Color.green; break;
case "ORANGE": c.backgroundColor = Color.orange; break;
//...
default: c.backgroundColor = Color.black; break;
}
}
public static void determineAndApplyForeColor(Card c, String backgroundColor) {
switch (backgroundColor) {
case "YELLOW":
case "DISABLE": applyForeColor(c, "BLACK"); break;
default: applyForeColor(c, "WHITE"); break;
}
}
public static void applyForeColor(Card c, String color) {
switch (color) {
case "WHITE": c.foreColor = Color.white; break;
default: c.foreColor = Color.black;
}
}
This is easy to expand on in the future and prevents you from having incredibly large logical structures.
Loop over your Cards
Now that all of that ground work has been laid out, you can loop over your Card collection and apply colors that way:
for (int i = 0; i < cards.size(); i++) {
Card c = cards.get(i);
applyCardColors(c, cardColors[i]);
System.out.print(c.displayName + ": " +
c.backgroundColor + ", " +
c.foreColor + "\n");
}
Of course you'll need that other method I mentioned above to keep things separated:
public static void applyCardColors(Card c, String backgroundColor) {
applyBackgroundColor(c, backgroundColor);
determineAndApplyForeColor(c, backgroundColor);
}
Use the enum Values
My heaviest recommendation is to use the predefined enum values that you're assigning so that you can prevent the switch structure and clean this code up even more:
Color[] cardColors = new Color[] { Color.yellow, Color.green, Color.red };
List<Card> cards = new ArrayList<Card>();
cards.add(new Card(0, 0, "Card 1", "London", Color.black));
cards.add(new Card(0, 0, "Card 2", "Paris", Color.black));
cards.add(new Card(0, 0, "Card 3", "Dubai", Color.black));
for (int i = 0; i < cards.size(); i++) {
Card c = cards.get(i);
c.backgroundColor = cardColors[i];
applyForeColor(c, cardColors[i]);
System.out.print(c.displayName + ": " +
c.backgroundColor + ", " +
c.foreColor + "\n");
}
The only part I had trouble with here was using the switch structure on the color, but this is due to my lack of knowledge in Java. I can do it as below in C#, but the online compiler I have didn't like it (Java's switch only accepts primitives, Strings, and enums, and java.awt.Color is an ordinary class, not an enum):
public static void applyForeColor(Card c, Color backgroundColor) {
switch (backgroundColor) {
case Color.yellow: c.foreColor = Color.black; break;
default: c.foreColor = Color.white; break;
}
}
Best of luck with your future endeavors! | {
"domain": "codereview.stackexchange",
"id": 35965,
"tags": "java, android"
} |
problems in hector slam | Question:
there are 3 problem that are confirmed in my tests about hectorslam:
1, the corridor problem: hector slam thinks the robot is not moving when laser scan data are almost the same in a long corridor. This problem is stated clearly on ROS Answers, and because hector slam does not use odometry (excluding roll & pitch), the problem cannot be solved
2, the quake problem: hector slam suffers when the robot's body shakes (probably when overcoming an obstacle or uneven surface); it has a great chance of messing up the map (drift in yaw, so the map direction is changed) and can't recover from then on. Roll and pitch data were introduced to solve this problem; it improves but is not solved, and drift still happens sometimes;
maybe there is something wrong with my system settings: the IMU updates at 200 Hz, while imu_attitude_to_tf_node runs at 10 Hz
3, the "angle too large" problem: hector slam raises an "angle too large" warning when the robot turns too much in a short time. However, the warning itself is unstable; I ran some tests to determine how large an angle will result in this, but found no pattern. And every time after this warning, the map drifts.
EDIT:
here are my answers for now:
1, the corridor problem is always a tricky area in laser slam; to solve this, a simple way is to replace your laser with a long-range laser; a more complex way is to switch to using odometry in corridors, or to fuse visual data;
2&3, because Team Hector developed hector slam for a USAR environment, the robot should move as slowly as their prototype robot; I slowed down my robot and both problems disappeared.
any help will be appreciated
Originally posted by panda on ROS Answers with karma: 63 on 2016-05-26
Post score: 2
Original comments
Comment by Francis Dom on 2017-05-29:
Are you using the Hector slam in a ground robot or UAV
Answer:
I've used Hector SLAM a lot myself and all of your issues are common issues with Hector SLAM. It is important to remember what environment(video) TeamHectorDarmstadt used the robot in. The particular environment is rich in features and there are literally no long corridors, so for them, the Hector SLAM mapping and navigation algorithms were perfect.
Now using the algorithm in areas with great laser ambiguity that's another story. I'm going to try and give a short explanation to each .. question? And give you a couple of ideas to code a solution.
The corridor problem occurs when one scan looks almost exactly like the next: imagine that a lot of laser scanner points are registering the walls next to the robot, but none of the laser scanner points ahead of or behind the robot reach any surface (assuming the robot is travelling forward). You can actually enable odometry to use the estimated position w.r.t. odometry to help the laser scanner. But make sure you have some awesome odometry; I combined odometry and a cheap IMU to first get a better odometry estimate and then enabled it in Hector SLAM, and it helped some.
I've never experienced this, the robot here travels on very flat surfaces. Sudden turns will ruin the mapping though.
This can be resolved by tuning the parameters in the mapping_default.launch file; specifically, tune your angle update threshold and the number of map resolution levels. I've actually run into this problem mostly because of memory issues.. not sure why, but it really helped to tune the parameters!
There are different things you can do to make HSLAM more robust: one is loop closure, and another is the awesome method Modeling Laser Intensities For Simultaneous Localization and Mapping, which would be really cool to implement!
Hope this is useful; if all else fails you could look into mapping with vision-based systems like LSD-SLAM, ORB-SLAM, RTAB, etc.
Originally posted by JamesWell with karma: 136 on 2016-06-03
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by panda on 2016-06-05:
thank you a lot, I have tuned the params, and figure out the robot should turn slowly when building map in few feature environment. could you give me any idea about loop closure in hector slam?
Comment by jacobperron on 2016-10-21:
How did you "enable" your odometry in hector_slam? AFAIK hector_slam does not make use of odometry at all. | {
"domain": "robotics.stackexchange",
"id": 24743,
"tags": "slam, imu, navigation, hector"
} |
Resistive forces on Simple Harmonic motion | Question: How is a simple harmonic motion affected by resistive forces? In this case, a spring block system is placed on rough horizontal surface. How to derive the block's displacement equation?
I couldn't figure out how to proceed after applying Newton's second law as friction's direction kept changing.
Answer: The problem with a block on a surface is somewhat challenging, since a) the friction force abruptly changes when the velocity changes sign (i.e. the direction of motion changes), and b) one needs to distinguish the regimes where the restoring force is greater or less than the maximum value of the friction force $\mu N$. This results in a non-linear problem that needs to be solved by sewing piecewise solutions.
An easier and more frequently treated problem is the case of a friction force proportional to velocity, which, e.g., would be the case of a pendulum slowed by the air:
$$\mathbf{F} = -\gamma \mathbf{v},$$
where $\gamma$ is the friction coefficient. With the usual approximations on the pendulum displacement (i.e., after linearizing the trigonometric functions) one obtains equation
$$m\ddot{x} +\gamma \dot{x} +m\omega^2x = 0,$$
which is a solvable linear differential equation, resulting in damped oscillations.
Update
Let us consider a block on a surface, under the action of a restoring force $-kx$ and a static-sliding friction force. For simplicity we consider the case where the block is initially at rest, i.e. its initial velocity is zero, $\dot{x} = 0$.
First of all, if $|x|<\mu N/k$ no motion will occur, since the static friction force balances the restoring force. If $|x|>\mu N/k$ the motion will occur, governed by the Newton's equation
$$m\ddot{x} = \pm \mu N - kx,$$
where the sign in front of the friction force depends on the direction of the block's motion. Formally, this could be written as
$$m\ddot{x} = -\text{sign}(\dot{x}) \mu N - kx,$$
where
\begin{equation}\text{sign}(\dot{x}) = \begin{cases} +1, \text{ if }\dot{x}>0,\\ -1,\text{ if }\dot{x}<0\end{cases}.\end{equation}
As was mentioned in the beginning, it is easier to solve this problem in piecewise manner:
if $x_0>\mu N/k$, the motion is governed by equation $$m\ddot{x} = \mu N - kx,$$ which is the equation of an oscillator under the action of a constant force, with equilibrium position $x_{eq} = \mu N/k$, and the amplitude $A=x_0 -\mu N/k$. It will swing through its equilibrium position and stop at the point $x_1 = x_{eq} - A = 2\mu N/k - x_0$.
if $x_1 < -\mu N/k$, the oscillator will swing back. Its velocity is now positive, and motion is now governed by equation $$m\ddot{x} = -\mu N - kx,$$ which is the equation of an oscillator under the action of a constant force $-\mu N$, with equilibrium position $x_{eq} = -\mu N/k$, and amplitude $A = |x_1 - x_{eq}| = -\mu N/k - x_1$ The oscillator will thus stop at $x_2 = -\mu N/k + A = -2\mu N/k -x_1 = x_0 - 4\mu N/k$.
We can continue reasoning in this way and arrive at the following recursive solution:
$$x_{2n+1} = 2\mu N/k - x_{2n},\\
x_{2n+2} = - x_{2n + 1} -2\mu N/k.$$
The solution of this equations for the stopping points is
$$x_{2n} = x_0 -\frac{4n\mu N}{k},\\
x_{2n+1} = \frac{2(2n+1)\mu N}{k} - x_0,$$
valid as long as $|x_{i}|> \mu N/k$!
With some patience this solution could be generalized to the case of arbitrary initial conditions. | {
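The recursion for the stopping points is easy to sanity-check numerically. The sketch below uses illustrative values $\mu N/k = 0.1$ and $x_0 = 1$ (arbitrary choices of my own, not from the problem), iterates the half-swing map, and compares the result with the closed-form expressions:

```python
def next_stop(x, c):
    # one half-swing of the block: the oscillator swings about the shifted
    # equilibrium +c (moving left) or -c (moving right), where c = mu*N/k,
    # and each half-period mirrors the block about that equilibrium
    if x > c:
        return 2 * c - x      # swing toward negative x
    if x < -c:
        return -2 * c - x     # swing back toward positive x
    return x                  # |x| <= c: static friction holds the block

c = 0.1        # mu*N / k, illustrative value
stops = [1.0]  # x0, chosen so |x0| > c and the motion actually starts
while abs(stops[-1]) > c:
    stops.append(next_stop(stops[-1], c))

# closed-form check: x_{2n} = x0 - 4n*c and x_{2n+1} = 2(2n+1)*c - x0
```

With these numbers the amplitude shrinks by $4\mu N/k = 0.4$ per full swing, and the block stops once it lands inside the static-friction dead zone $|x| \le \mu N/k$.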
"domain": "physics.stackexchange",
"id": 66575,
"tags": "newtonian-mechanics, friction, spring, oscillators, non-linear-systems"
} |
Extracting Copper from Copper Carbonate | Question: In a crucible, we added copper carbonate, and the total weight of both was 37.04g. Then we heated the crucible with the copper carbonate. It was easy to judge when the reaction had taken place, as the colour of the substance changed from the light green of copper carbonate to the black of copper oxide. After we observed this, we weighed the copper oxide and noted the result, which was 36.42g. We should have heated and weighed the crucible again, but we weren't able to because it was broken (due to the temperature change). But theoretically, what would approximately be the value of the second weighing? I'm curious because, after searching on the Internet, I have seen both an increase and a reduction of the remaining mass reported.
Answer: The repeated heatings were just to ensure that you had oxidized all of the copper carbonate to copper oxide. If you heated it a second time and the mass decreased, that would just mean that some of the copper carbonated had not been oxidized by the first heating. Then you would just keep heating until the mass quit decreasing, meaning that you had definitely oxidized all of it.
So the answer to you question is simply that the mass theoretically should not have decreased at all with a second heating, unless some of the copper carbonate had not been oxidized the first time. | {
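Without the empty crucible's mass, the absolute value of a second weighing can't be pinned down from the numbers given, but the mass balance of the decomposition CuCO3 -> CuO + CO2 fixes the fractions involved. The sketch below is my own bookkeeping with approximate molar masses; the 0.62 g figure is the observed loss (37.04 g - 36.42 g) from the question:

```python
# approximate molar masses in g/mol
CU, C, O = 63.55, 12.01, 16.00

m_cuco3 = CU + C + 3 * O   # CuCO3
m_cuo = CU + O             # CuO
m_co2 = C + 2 * O          # CO2, driven off as gas

# per gram of carbonate that reacts, this fraction remains as black CuO ...
residue_fraction = m_cuo / m_cuco3
# ... and this fraction leaves as CO2, which is the only mass that is lost
loss_fraction = m_co2 / m_cuco3

# the observed 0.62 g loss corresponds to this much fully reacted CuCO3
reacted_mass = 0.62 / loss_fraction
```

If that first heating had already converted all of the carbonate, a second heating would change nothing; any further loss would signal unreacted carbonate.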
"domain": "chemistry.stackexchange",
"id": 7725,
"tags": "inorganic-chemistry"
} |
get gene lines from gtf file | Question: I would like to retrieve gene lines from a GTF file for which I only have exons & transcripts lines (output from Cufflinks) and alternative splicing possible. I need gene lines for compatibility with a pipeline dealing with GTF in Ensembl format.
Ideally, I would have gene lines representing the longest transcript - but I am open to discussion about that. The aim would be to separate the genome in genic & intergenic portions - boundaries being defined by start-end coordinates of the genes.
Sample input (e.g. gene "CUFF.105" with 5 transcripts):
cat subset_105.test | awk '$3=="transcript"{print $0}'
tig00000046 Cufflinks transcript 26170 42766 202 + . gene_id "CUFF.105"; transcript_id "CUFF.105.3"; FPKM "0.2320304094"; frac "0.197788"; conf_lo "0.164885"; conf_hi "0.299176"; cov "5.188855";
tig00000046 Cufflinks transcript 26170 42766 266 + . gene_id "CUFF.105"; transcript_id "CUFF.105.1"; FPKM "0.3061260755"; frac "0.249779"; conf_lo "0.228860"; conf_hi "0.383392"; cov "6.845843";
tig00000046 Cufflinks transcript 26170 39469 470 + . gene_id "CUFF.105"; transcript_id "CUFF.105.2"; FPKM "0.5403628578"; frac "0.161731"; conf_lo "0.372512"; conf_hi "0.708214"; cov "12.084038";
tig00000046 Cufflinks transcript 28928 39469 1000 + . gene_id "CUFF.105"; transcript_id "CUFF.105.4"; FPKM "1.1485378983"; frac "0.233911"; conf_lo "0.853387"; conf_hi "1.443688"; cov "25.684547";
tig00000046 Cufflinks transcript 29614 42766 181 + . gene_id "CUFF.105"; transcript_id "CUFF.105.5"; FPKM "0.2087076495"; frac "0.156792"; conf_lo "0.137112"; conf_hi "0.280304"; cov "4.667292";
What it came to my mind is:
sort the file by $4 (numeric: n) and $5 (reverse: nr) (sort -k4,4n -k5,5nr)
take the 1st line
remove unnecessary fields from $9
But this would fail for transcripts annotated on the reverse strand, and I wouldn't know how to fill $6 (the score, a floating-point value).
Any ideas of an existing method that could help me retrieve those gene lines?
Options to retrieve those lines directly from Cufflinks would also be accepted (I didn't find a suitable parameter in the manual).
Answer: Assuming none of the genes have duplicate entries across chromosomes (or strand or other biologically implausible things) in R one can:
library(rtracklayer)
library(GenomicRanges)
gtf = import.gff("file.gtf")
grl = split(gtf, gtf$gene_id) # This produces a GRangesList
grl = endoapply(grl, function(x) {
foo = x[1]
foo$type = 'gene'
start(foo) = min(start(x))
end(foo) = max(end(x))
return(c(foo, x))
})
gr = unlist(grl)
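The endoapply step above just takes, per gene, the minimum start and maximum end over all of its transcripts; the same aggregation can be sketched outside R in plain Python (field positions taken from the sample GTF above; an illustration, not a full GTF parser):

```python
import re
from collections import defaultdict

# three of the sample transcript lines, tab-separated GTF fields
gtf_lines = [
    'tig00000046\tCufflinks\ttranscript\t26170\t42766\t202\t+\t.\tgene_id "CUFF.105";',
    'tig00000046\tCufflinks\ttranscript\t28928\t39469\t1000\t+\t.\tgene_id "CUFF.105";',
    'tig00000046\tCufflinks\ttranscript\t29614\t42766\t181\t+\t.\tgene_id "CUFF.105";',
]

bounds = defaultdict(lambda: [float("inf"), 0])  # gene_id -> [min_start, max_end]
for line in gtf_lines:
    fields = line.split("\t")
    start, end = int(fields[3]), int(fields[4])
    gene_id = re.search(r'gene_id "([^"]+)"', fields[8]).group(1)
    bounds[gene_id][0] = min(bounds[gene_id][0], start)
    bounds[gene_id][1] = max(bounds[gene_id][1], end)

for gene, (start, end) in bounds.items():
    print(gene, start, end)  # CUFF.105 26170 42766
```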
You can then write gr out to a file (either with export() or by just printing the rows). | {
"domain": "bioinformatics.stackexchange",
"id": 549,
"tags": "sequence-annotation, file-formats, gtf"
} |
Script to store, search, and delete IPFS data with description in JSON file | Question: Here is the script that I would like reviewed for the following:
Best practices and design pattern usage
Correctness in unanticipated cases
Better dictionary access
The script is made to store (IPFS) Interplanetary File System hash objects. If you would like to know more check out https://ipfs.io
Script:
#!/usr/bin/python
# PHT - Personal Hash Tracker
#
import json
import os
hashjson_global = "/Users/troywilson/testing/pht/hash.json"
choice = raw_input("What do you want to do? \n a)Add a new IPFS hash\n s)Seach stored hashes\n d)Delete stored hash\n >>")
if choice == 'a':
# Add a new hash.
description = raw_input('Enter hash description: ')
new_hash_val = raw_input('Enter IPFS hash: ')
new_url_val = raw_input('Enter URL: ')
entry = {new_hash_val: {'description': description, 'url': new_url_val}}
# search existing hash listings here
if new_hash_val not in data['hashlist']:
# append JSON file with new entry
data['hashlist'].update(entry) #must do update since it's a dictionary
with open(hashjson_global, 'w') as file:
json.dump(data, file, sort_keys = True, indent = 4, ensure_ascii = False)
print('IPFS Hash Added.')
pass
else:
print('Hash exist!')
elif choice == 's':
# Search the current desciptions.
searchTerm = raw_input('Enter search term: ')
with open(hashjson_global, 'r') as file:
data = json.load(file)
hashlist = data['hashlist']
# build dictionary map and search for description value
d = {v['description']: h for h, v in hashlist.items()}
print d.get(searchTerm, 'Not Found')
elif choice == 'd':
# Search the current descriptions and delete entry.
del_hash = raw_input('Hash to delete: ')
with open(hashjson_global, 'r') as file:
data = json.load(file)
del data['hashlist'][del_hash]
with open('hashjson', 'w') as file:
json.dump(data, file, sort_keys = True, indent = 4, ensure_ascii = False)
print ('Hash removed')
Example JSON file:
{
"hashlist": {
"QmVZATT8jWo6ncQM3kwBrGXBjuKfifvrE": {
"description": "Test Video",
"url": ""
},
"QmVqpEomPZU8cpNezxZHG2oc3xQi61P2n": {
"description": "Cat Photo",
"url": ""
},
"QmYdWb4CdFqWGYnPA7V12bX7hf2zxv64AG": {
"description": "test.co",
"url": ""
}
}
}
Answer: I would make an object that acts like a dict and that automatically writes any changes to disk, similar to the one I wrote in this answer.
import os
import json
class PersonalHashTracker(dict):
def __init__(self, filename):
self.filename = filename
if os.path.isfile(filename):
with open(filename) as f:
# use super here to avoid unnecessary write
super(PersonalHashTracker, self).update(json.load(f) or {})
def write(self):
with open(self.filename, "w") as f:
json.dump(self, f)
def __setitem__(self, key, value):
super(PersonalHashTracker, self).__setitem__(key, value)
self.write()
def __delitem__(self, key):
super(PersonalHashTracker, self).__delitem__(key)
self.write()
def update(self, d, **kwargs):
super(PersonalHashTracker, self).update(d, **kwargs)
self.write()
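Since raw_input and the print statements make this Python 2, here is a hedged Python 3 sketch of the same write-through idea, with a quick check that mutations really hit the disk (the file path is a throwaway temp file; the stored hash is hypothetical):

```python
import json
import os
import tempfile

class PersistentDict(dict):
    """Write-through dict: every mutation is immediately flushed to disk."""
    def __init__(self, filename):
        super().__init__()
        self.filename = filename
        if os.path.isfile(filename):
            with open(filename) as f:
                # use the plain dict update to avoid an unnecessary write
                super().update(json.load(f) or {})

    def _write(self):
        with open(self.filename, "w") as f:
            json.dump(self, f)

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._write()

    def __delitem__(self, key):
        super().__delitem__(key)
        self._write()

    def update(self, *args, **kwargs):
        super().update(*args, **kwargs)
        self._write()

path = os.path.join(tempfile.mkdtemp(), "hash.json")  # throwaway location
store = PersistentDict(path)
store["Qm123"] = {"description": "demo", "url": ""}   # hypothetical hash

reloaded = PersistentDict(path)          # a fresh instance re-reads the file
print(reloaded["Qm123"]["description"])  # demo
```

Back to the original Python 2 version: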
You can then use it like this:
MENU = """What do you want to do?
a)Add a new IPFS hash
 s)Search stored hashes
d)Delete stored hash
q)Quit
>>"""
if __name__ == "__main__":
hash_store = PersonalHashTracker("/Users/troywilson/testing/pht/hash.json")
while True:
choice = raw_input(MENU)
if choice == 'a':
# Add a new hash.
description = raw_input('Enter hash description: ')
new_hash_val = raw_input('Enter IPFS hash: ')
new_url_val = raw_input('Enter URL: ')
if new_hash_val not in hash_store:
hash_store[new_hash_val] = {'description': description,
'url': new_url_val}
else:
print 'Hash exists!'
elif choice == 's':
# Search the current descriptions.
search_term = raw_input('Enter search term: ')
descriptions = {v['description']: h for h, v in hash_store.items()}
print descriptions.get(search_term, 'Not Found')
elif choice == 'd':
# Search the current descriptions and delete entry.
del_hash = raw_input('Hash to delete: ')
del hash_store[del_hash]
print 'Hash removed'
else:
print 'Exiting'
break | {
"domain": "codereview.stackexchange",
"id": 29599,
"tags": "python, python-2.x, json, file, database"
} |
Helmholtz decomposition of $\vec{E}$ field | Question: Edit: Can someone check my answer and possibly complete my task at the end?
The Helmholtz theorem states that any vector field can be decomposed into a purely divergent part and a purely solenoidal part.
What is this decomposition for $\vec{E}$, in order to find the field produced by its divergence, and the induced $\vec{E}$ field caused by changing magnetic fields?
The Potential Formulation:
$$\vec{E} = -\nabla V - \frac{\partial \vec{A}}{\partial t}$$
Is often transformed as $$\vec{E} = - \frac{\partial \vec{A}}{\partial t}$$
For showing induced fields, where charge density is not important (and subsequently the scalar potential is zero).
There are a number of issues with this, however: using current density without including charge density violates $\vec{J} = \rho \vec{v}$ in general, although it is a good approximation for the induced part of the field.
When not modeling $\rho$ as zero, from the Lorenz gauge condition
$\nabla \cdot \vec{A} = - \mu_0 \epsilon_0 \frac{\partial V }{\partial t}$
We know the divergence of
$- \frac{\partial \vec{A}}{\partial t}$
Is non zero.
And thus that component of the $\vec{E}$ field cannot "just" be caused by the induced part; it is caused by the $\vec{E}$ field's divergence.
So what is an expression for the purely solenoidal part of the E field?
Edit:
Artificially removing $V$ and then choosing the Coulomb gauge to show its zero divergence is also incorrect, as artificially removing $V$ with approximations should remove $A$ as well. Instead, do the same with the full equation. Also, the component each term represents is gauge dependent as well.
Perhaps the Helmholtz decomposition is:
$$\vec{E} = -\nabla V - \frac{\partial \vec{A}}{\partial t}$$
only valid under the Coulomb gauge? As in the Coulomb gauge, the first term has zero curl but nonzero divergence, while the second term has curl but zero divergence.
What each part represents is gauge dependent, the full E field, however, being gauge independent.
Answer:
For showing induced fields, where charge density is not important ( and subsequently the scalar potential is zero).
This seems confused. We may choose a gauge in which $V=0$, in which case $\vec E = -\frac{\partial}{\partial t} \vec A$. This is called the Weyl gauge, though this is a bit of a misnomer because there is still some residual gauge freedom left; as a result, this defines a family of gauges.
For example, when we have a single point charge $q$ at the coordinate origin, the electric field is given by
$$\vec E = \frac{q \vec r}{4\pi \epsilon_0 |\vec r|^3}$$
In electrostatics, we typically choose the potentials $V= \frac{q}{4\pi \epsilon_0 |\vec r|}, \vec A=0$. However, we can perform a change of gauge
$$\matrix{V \mapsto V- \frac{\partial}{\partial t}\chi\\ \vec A \mapsto \vec A +\nabla \chi}, \qquad \chi(\vec r,t) := \frac{q t}{4\pi \epsilon_0 |\vec r|}$$ in which case we obtain the potentials $V=0$, $\vec A = -\frac{qt \vec r}{4\pi \epsilon_0 |\vec r|^3}$. This can always be done, and is not a statement about the presence or absence of charge density.
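As a concrete check of this gauge change (a SymPy sketch; the symbol and variable names are my own), one can verify symbolically that the transformed scalar potential vanishes while the electric field is unchanged:

```python
import sympy as sp

t, q, eps0 = sp.symbols('t q epsilon_0', positive=True)
x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

V = q / (4 * sp.pi * eps0 * r)        # electrostatic potential of a point charge
chi = q * t / (4 * sp.pi * eps0 * r)  # gauge function from the transformation above

V_new = V - sp.diff(chi, t)                              # V -> V - d(chi)/dt
A_new = sp.Matrix([sp.diff(chi, v) for v in (x, y, z)])  # A -> 0 + grad(chi)

# field from the new potentials: E = -grad(V_new) - dA_new/dt
E_new = sp.Matrix([-sp.diff(V_new, v) for v in (x, y, z)]) - sp.diff(A_new, t)
# field from the old potentials: E = -grad(V)
E_old = sp.Matrix([-sp.diff(V, v) for v in (x, y, z)])

print(sp.simplify(V_new))             # 0 -> the Weyl gauge is achieved
print(sp.simplify(E_new - E_old))     # zero vector: the physical field is unchanged
```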
There are a number of issues with this however as using current density without including charge density violates $\vec J = \rho \vec V$.
This is only true if there's only one species of particle in your system. In general, one has that $\rho = \sum_i \rho_i$ and $\vec J = \sum_i \rho_i \vec v_i$. It's entirely possible that $\rho=0$ and $\vec J\neq 0$; for example, if you have electrons moving through a background of stationary positively-charged ions.
From the lorenz gauge condition $\nabla \cdot \vec{A} = - \mu_0 \epsilon_0 \frac{\partial V }{\partial t}$, we know the divergence of $-\frac{\partial \vec{A}}{\partial t}$ is non zero.
Unless of course $\frac{\partial V}{\partial t}=0$. But note that in the Lorenz gauge the electric field is generally not given by $\vec E = -\frac{\partial}{\partial t} \vec A$, because the Lorenz gauge is not generally a Weyl gauge.
The helmholtz theorem states that any vector field can be decomposed into a purely divergent part, and a purely solenoidal part [...] So what is an expression for the purely solenoidal part of the E field?
The Helmholtz decomposition theorem applies to (twice-differentiable) vector fields which are defined on a simply-connected bounded subset $V\subset \mathbb R^3$. In that case, we may define
$$ \vec Q(\vec r,t) \equiv \frac{1}{4\pi} \int_V \frac{\nabla' \times \vec E(\vec r',t)}{|\vec r-\vec r'|} \mathrm d^3r' - \frac{1}{4\pi} \int_{\partial V} \hat n \times \frac{\vec E(\vec r',t)}{|\vec r-\vec r'|} \mathrm dS$$
where $\nabla'$ refers to differentiation with respect to $\vec r'$, $\partial V$ is the boundary of $V$, and $\vec n$ is the normal vector to $\partial V$. From there, $\nabla \times \vec Q$ is the purely solenoidal part of $\vec E$. If $\vec E$ goes to zero as $r\rightarrow \infty$, we may remove the requirement that $V$ be bounded and obtain
$$ \vec Q(\vec r,t) \equiv \frac{1}{4\pi}\int_{\mathbb R^3} \frac{\nabla ' \times \vec E(\vec r',t)}{|\vec r-\vec r'|} \mathrm d^3 r'$$
Note also that if we choose the Coulomb gauge, then by definition $-\frac{\partial}{\partial t}\vec A$ is solenoidal. If the Helmholtz decomposition theorem applies (e.g. on the domain $\mathbb R^3$ where $\vec E\rightarrow 0$ as $r\rightarrow \infty$), then $\nabla \times \vec Q=-\frac{\partial }{\partial t} \vec A $ is the unique purely solenoidal component of $\vec E$ (by which I mean, the vector field obtained when the irrotational component has been subtracted).
My assertion with this modified formula was in relation to Griffiths stating, in the Lorenz gauge, that because a wire was electrically neutral, $\rho=0$, and subsequently the scalar potential was zero.
If $\rho=0$, then the electric field is solenoidal as per Gauss's law. In the Coulomb gauge, this implies that $V=0$ and that
$$\nabla \times \vec B = -\nabla^2 \vec A = \mu_0 \vec J - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \vec A$$
$$\implies \left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2\right)\vec A = \mu_0 \vec J$$
The solution to this equation is given by
$$\vec A = \int \frac{\mu_0 \vec J(\vec r', t_r)}{4\pi |\vec r-\vec r'|} \mathrm d^3 r'$$
as per the solution given in Griffiths' text. Note that this choice of gauge also satisfies the Lorenz gauge condition in this particular case. | {
"domain": "physics.stackexchange",
"id": 87681,
"tags": "electromagnetism, electric-fields, maxwell-equations"
} |
Uniform Circular Motion | Question: I am so confused about what approach to take when faced with questions like finding the angular velocity when I am given radius and time, or radius and revolutions per minute. Are there any crucial key points to know? Or any key equations that I can use? Sorry, I tried learning on my own about the subject because my professor did a really bad job of explaining, but I still have trouble. Any and all comments highly appreciated!
Answer: I'll answer this as more of a broad conceptual understanding that is good to have to do well with these problems. The key is the fact that $\pi$ is the tie between the angular and the linear motion.
Let's say you're given radius and time for a revolution, you're trying to find the angular and linear velocity. Here's how I would approach it:
Let's say you run a lap around a circular track with a radius of 50 meters in 120 seconds. When you're done running that one lap, how far did you travel, both angularly and linearly?
Well, angularly speaking, if you ran around the full circular track in 120 seconds, that means that you traveled 360 degrees around the track (angularly speaking) in 120 seconds. So we divide. $360\ \deg / 120\ s$ = $3\deg/s$
But a lot of times we work in radians. 360 degrees is just $2\pi$ radians. So instead of 360 degrees, we write $2\pi$. So your angular speed would be (in rads/sec):
$$2\pi /120$$
Boom! That's already the formula $\omega = \frac{2\pi}{T}$ that we just showed intuitively! $\omega$ is angular velocity, $T$ is your period (how long to go one revolution).
But what if I wanted to figure out how far I actually ran (distance-wise) on that track? Well, in that 120 seconds, how far did I travel? Going back to geometry, I ran the full circumference of a circle of radius $50m$. The formula for that is $2\pi r$.
So like before, we'll divide.
$$ v= \frac{2\pi * 50}{120} = 2.617 m / s$$
That gives us our other very important formula!
$$ v = \frac{2\pi r}{T}$$
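The track example can be checked numerically with a quick sketch (the 2.617 above is just this value truncated rather than rounded):

```python
import math

r = 50.0   # track radius, m
T = 120.0  # time for one lap, s

omega = 2 * math.pi / T   # angular speed, rad/s
v = 2 * math.pi * r / T   # linear speed, m/s

print(round(omega, 4))  # 0.0524 rad/s
print(round(v, 3))      # 2.618 m/s
print(math.isclose(v, r * omega))  # True: v = r * omega
```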
Because $ v = \frac{2\pi r}{T}$ and $\omega = \frac{2\pi}{T}$, this means that $v = r\omega$. And there you have it, pretty much all the angular equations you need to know. | {
"domain": "physics.stackexchange",
"id": 40056,
"tags": "velocity"
} |
"Utopian Tree" challenge optimization | Question: Problem statement
The Utopian tree goes through 2 cycles of growth every year. The first growth cycle occurs during the monsoon, when it doubles in height. The second growth cycle occurs during the summer, when its height increases by 1 meter.
Now, a new Utopian tree sapling is planted at the onset of the monsoon. Its height is 1 meter. Can you find the height of the tree after \$N\$ growth cycles?
Input Format
The first line contains an integer, \$T\$, the number of test cases.
\$T\$ lines follow. Each line contains an integer, \$N\$, that denotes the number of cycles for that test case.
Constraints
1 <= T <= 10
0 <= N <= 60
Sample Input: #01:
2
3
4
Sample Output: #01:
6
7
Explanation: #01:
There are 2 testcases.
N = 3:
the height of the tree at the end of the 1st cycle = 2
the height of the tree at the end of the 2nd cycle = 3
the height of the tree at the end of the 3rd cycle = 6
N = 4:
the height of the tree at the end of the 4th cycle = 7
My solution:
'use strict';
function processData(input) {
var parse_fun = function (s) {
return parseInt(s, 10);
};
var lines = input.split('\n');
var T = parse_fun(lines.shift());
var data = lines.splice(0, T).map(parse_fun);
for (var i = 0; i < data.length; i++) {
var value = 1;
for (var j = 0; j < data[i]; j++) {
if (j % 2 == 0)
value *= 2;
else value += 1;
}
process.stdout.write(value + '\n');
}
}
process.stdin.resume();
process.stdin.setEncoding("ascii");
var _input = "";
process.stdin.on("data", function (input) {
_input += input;
});
process.stdin.on("end", function () {
processData(_input);
});
The running time is \$O(n^2)\$. Can anyone tell me how to optimize this code to decrease the runtime?
Answer: A lot of your code seems to be Hackerrank's boilerplate code, so I'll ignore that (though it could use a review), and focus on the "meat" which is:
var value = 1;
for (var j = 0; j < data[i]; j++) {
if (j % 2 == 0)
value *= 2;
else value += 1;
}
process.stdout.write(value + '\n');
(It's run T times of course, but that's irrelevant for now.)
There's not a lot of code there, so there's little to review. Still, the if..else is not great, if you ask me. For one, I'd advise that you always use braces - even for one-liners. But if you don't, at least use linebreaks:
if (j % 2 == 0)
value *= 2;
else
value += 1;
However, you can do it all in a ternary, which I would find more appropriate here:
value += (j % 2 === 0 ? value : 1);
The parentheses aren't required, but I find they make it more readable.
Of course, you could also rely on zero being false'y in JavaScript, and just do
value += (j % 2 ? 1 : value);
A slightly faster solution to the even/odd branching would be
value += (j & 1 ? 1 : value);
In other words: If the least-significant bit is 1, the number is odd.
In all, you get:
var value = 1;
for (var j = 0; j < data[i]; j++) {
value += (j & 1 ? 1 : value);
}
process.stdout.write(value + '\n');
Of course, there may be a purely mathematical, loop-less solution to this. But that's unfortunately not my strong suit.
Update: As mjolka points out in the comments, there is a very simple pattern to this. (And I feel pretty dumb for not realizing it.) I can't really explain it more succinctly than mjolka already has, so I'll just quote the comment here:
Look at the first few terms: 1, 2, 3, 6, 7, 14, 15, 30, 31, ... and compare that to powers of two: 2, 4, 8, 16, 32, ...
As is obvious, the output values are all equivalent to a power of 2, minus 1 or minus 2. In code, that can be expressed as a function like so:
// calculate tree height after n cycles
function utopiaTreeHeight(n) {
var exp = Math.ceil(n / 2) + 1, // calculate the exponent
value = Math.pow(2, exp) - 1; // power of 2, minus 1
return value - (n & 1); // subtract another 1 if n is odd, and return
}
Which means that the rest of the code is just:
for(var i = 0 ; i < data.length ; i++) {
var value = utopiaTreeHeight(data[i]);
process.stdout.write(value + '\n');
}
Now, that's pretty clean, I'd say. Big tip of the hat to mjolka!
Below are my previous (iterative and naïve) solutions. I'll leave them in my answer, only because there's probably something of value in there, even if this particular problem has a much more elegant solution.
(End of update.)
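The closed form and the iterative loop can be cross-checked against each other over the problem's full constraint range (a sketch in Python here, since the formula itself is language-agnostic):

```python
import math

def iterative(n):
    """Direct simulation: double on even cycles, add 1 on odd cycles."""
    value = 1
    for j in range(n):
        value = value * 2 if j % 2 == 0 else value + 1
    return value

def closed_form(n):
    """2**(ceil(n/2) + 1) - 1, minus another 1 when n is odd."""
    exp = math.ceil(n / 2) + 1
    return 2 ** exp - 1 - (n & 1)

# problem constraint: 0 <= N <= 60
for n in range(61):
    assert iterative(n) == closed_form(n)

print([closed_form(n) for n in range(6)])  # [1, 2, 3, 6, 7, 14]
```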
Now, about repeating it T times: If you're processing many test cases, it might be worth it to do some pre-processing. All you really need is to iterate to the highest cycle-count. So if you're asked to solve for data = [3, 6, 5], what you really want is to just solve for N = 6 but store the intermediate values for N = 3 and N = 5 along the way. You only need to loop once; from zero to Nmax.
Of course, doing so will require extra setup, which may be less efficient than simply doing what you're doing now, if there's only one or two test cases.
Just for fun, though, one solution might be:
var cycles = data.length, // cache this for later
    sorted = data.slice(0).sort(function (a, b) { return a - b; }), // copy and sort the input values numerically (the default sort compares elements as strings)
limit = sorted[cycles - 1], // get the max
target = sorted.shift(), // grab the lowest value (our first target cycle)
value = 1, // initial tree height
values = {}; // a place to store values
// loop to the highest cycle-count (note the range is 0..limit)
for(var n = 0 ; n <= limit ; n++) {
while( n === target ) {
// we reached our target cycle, so store the current value
values[n] = value;
// and grab the next target
target = sorted.shift();
}
value += (n & 1 ? 1 : value);
}
// print results in the correct order
for(var i = 0 ; i < cycles ; i++) {
process.stdout.write(values[data[i]] + '\n');
}
The while loop is there to handle duplicates in the input.
A simpler solution would be to simply store every value for N in 0..Nmax in an array. That tradeoff would be memory consumption. If data = [1, 923123] you'd end up storing 923121 values (~7MB at worst) you're not going to use. Crazy example, but if you don't know your input, well...
Still, such an approach could look like:
var limit = Math.max.apply(null, data),
values = [],
value = 1;
for(var n = 0 ; n <= limit ; n++) {
values.push(value);
value += (n & 1 ? 1 : value);
}
for(var i = 0 ; i < data.length ; i++) {
process.stdout.write(values[data[i]] + '\n');
} | {
"domain": "codereview.stackexchange",
"id": 34212,
"tags": "javascript, performance, programming-challenge"
} |
Small text adventure | Question: I made this small text adventure just to see what I could do with Python, and I'm basically wondering how it could be optimized, especially the combat, as right now I'd have to rewrite it every time I want combat.
import time
import random
spider_damage = 10
inventory = []
iron_dagger_damage = random.randint(5,12)
player_damage = iron_dagger_damage
def intro():
print("You are lost in the woods. You know that if you follow the right path, you will get to the nearest village.")
time.sleep(1)
print("You see that there are 2 paths ahead. In which one do you want to go(left or right)")
def left_or_right():
choice = ""
print("Where do you want to go, left or right?")
choice = input()
return choice
def intro_end(left_or_right):
if choice == "left":
print("You go down the path to the left and...")
time.sleep(1)
print("You die a horrible death")
input()
quit()
elif choice == "right":
print("You go down the path to the right and...")
time.sleep(1)
print("Notice that the trees are thinning out")
time.sleep(1)
def iron_dagger():
iron_dagger = ''
print("You see a flash of light in the forest. Do you want to risk the forest to go see what it was?(yes or no)")
iron_dagger = input()
return iron_dagger
def choice1_end(iron_dagger):
if iron_dagger == "yes":
print("You find an iron dagger!")
inventory.append("iron dagger")
print(inventory)
elif iron_dagger == "no":
print("You continue on")
else:
iron_dagger
def part_1():
print("You finally get out of the forest")
time.sleep(1)
print("You see a giant frost spider in the distance")
print("""
(
)
(
/\ .-" "-. /\
//\\/ ,,, \//\\
|/\| ,;;;;;, |/\|
//\\\;-" "-;///\\
// \/ . \/ \\
(| ,-_| \ | / |_-, |)
//`__\.-.-./__`\\
// /.-(() ())-.\ \\
(\ |) '---' (| /)
` (| |) `
\) (/)""")
def attack_or_run():
choice2 = ""
while choice2 != "1" and choice2 != "2":
print("Do you want to attack it or run?(1 = attack, 2 = run)")
choice2 = input()
return choice2
def part_1_1(attack_or_run):
if choice2 == "2":
print("You start to run away")
time.sleep(1)
print("You trip on a rock and...")
time.sleep(1)
safe = random.randint(1, 10)
if safe == 1:
print("You continue on towards the nearest village")
else:
print("You fall into a ravine and die")
input()
quit()
if choice2 == "1":
if "iron dagger" in inventory:
print("You run towards the spider with your iron dagger")
time.sleep(1)
print("You start to attack it and...")
time.sleep(1)
else:
print("You desperatly try to kill the spider with your fists but fail miserably")
input()
quit()
def encounter_1(part_1_1):
if choice2 == "1":
John_Smith_hp = 50
Giant_spider_hp = 30
while True:
Giant_spider_hp = Giant_spider_hp - player_damage
John_Smith_hp = John_Smith_hp - spider_damage
if Giant_spider_hp < 1:
print("You kill the spider")
break;
print ("Spider health:", Giant_spider_hp, "/30")
time.sleep(0.5)
if John_Smith_hp < 1:
print("You got killed by the spider")
break;
print("Your health", John_Smith_hp, "/50")
time.sleep(0.5)
def part_1_2(attack_or_run, part_1_1):
if choice2 == "1":
bottle = ""
while bottle != "yes" and bottle != "no":
print("You find a strange bottle with some kind of potion in it. Do you want to take it with you?")
bottle = input()
print("You spot a giant on the road")
time.sleep(1)
if bottle == "yes":
print("You know you cannot deafeat him, but perhaps the bottle could help you?")
time.sleep(1)
inventory.append("bottle")
elif bottle == "no":
print("You know you cannot deafeat him...")
time.sleep(1)
print("You try to run away but realize that it's hopeless as the giant closes in on you")
time.sleep(1)
print("Perhaps that bottle could have saved you...?")
input()
quit()
else:
print("You know you cannot deafeat him...")
time.sleep(1)
print("You try to run away but realize that it's hopeless as the giant closes in on you...")
input()
quit()
def part_1_3():
if "bottle" in inventory:
choice3 = ""
print("What do you want to do?")
print("Print HELP if you don't know what to do")
print("1.drink, 2.run, 3.fight, 4.apply to weapon")
choice3 = input()
if choice3 == "drink" or choice3 == "1":
print("You drink the potion and collapse on the ground dead. Maybe that wasn't such a good idea")
input()
quit()
elif choice3 in["fight", "run", "3", "2"]:
print("You get destroyed by the giant")
input()
quit()
elif choice3 == "apply" or choice3 == "4":
print("You apply the potion to your shortsword and charge the giant")
time.sleep(1)
print("You manage to cut the giant and he collapses to the ground.")
time.sleep(1)
else:
part_1_3()
def part_1_4():
print("You finally arrive to the village")
playagain = "yes"
if playagain == "yes":
intro()
choice = left_or_right()
intro_end(left_or_right)
iron_dagger()
iron_dagger = iron_dagger()
choice1_end(iron_dagger)
part_1()
choice2 = attack_or_run()
part_1_1(attack_or_run)
encounter_1(part_1_1)
part_1_2(part_1_1, attack_or_run)
part_1_3()
part_1_4()
Answer: The most obvious problem is that parts of your code don't make any sense. For example:
iron_dagger()
iron_dagger = iron_dagger()
choice1_end(iron_dagger)
What?! This:
Calls the function and ignores the returned value;
Calls the function again and assigns the return value to the name of the function, preventing further access to the function; and
Passes the value to another function.
It's not much better inside iron_dagger where, in some sort of homage to Inception, there is a variable iron_dagger. You can simplify those lines to:
choice1_end(iron_dagger())
And should strongly consider renaming the variable.
It seems odd generally to split the program up the way you do. Why separate the function where the choice is made from the function for making the choice and the function where the choice has impact? Instead of:
intro()
choice = left_or_right()
intro_end(left_or_right)
Just have:
intro()
and handle getting and dealing with the choice inside that function (still potentially with sub-functions):
def section():
print("introductory blah")
choice = make_choice()
if choice == "whatever":
do_the_thing
...
This keeps all appropriate logic together.
Next is your reliance on scoping for variable access. Instead of e.g.
def intro_end(left_or_right):
...
choice = left_or_right()
intro_end(left_or_right)
where you pass the input function as an argument, make the return value from that function the argument:
def intro_end(choice):
...
intro_end(left_or_right())
Generally, prefer explicit arguments and return values to scoping or globals. It makes development and testing much, much easier. As another example, you should pass inventory around explicitly.
You have little in the way of input validation. There is a good community wiki on SO for this, I recommend modifying e.g. left_or_right to only ever return 'left' or 'right'. Then consider abstracting that to a def input_str(prompt, choices): function.
Your encounter suggests the use of OOP to me. Rather than two variables John_Smith_hp and Giant_spider_hp, this could be e.g.
class Character:
def __init__(self, name, hp):
self.name = name
self.hp = hp
player = Character("John Smith", 50)
enemy = Character("Giant spider", 30)
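To address the asker's worry about rewriting combat each time, the loop can be factored into one reusable function (a sketch; the class is restated with a damage attribute, and the turn order - player strikes first - is my own choice):

```python
import random

class Character:
    def __init__(self, name, hp, damage):
        self.name = name
        self.hp = hp
        self.damage = damage

def fight(player, enemy):
    """Run one combat to completion; return True if the player survives."""
    while player.hp > 0 and enemy.hp > 0:
        enemy.hp -= player.damage   # player strikes first
        if enemy.hp <= 0:
            print("You kill the " + enemy.name)
            return True
        player.hp -= enemy.damage   # enemy strikes back
    print("You got killed by the " + enemy.name)
    return False

# any encounter is now a single call instead of a hand-written loop
won = fight(Character("John Smith", 50, random.randint(5, 12)),
            Character("giant spider", 30, 10))
```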
The Characters could also have attributes like damage or inventory...
if choice == 'yes':
player.inventory.append('iron dagger')
player.damage += random.randint(5, 12) | {
"domain": "codereview.stackexchange",
"id": 7974,
"tags": "python, game, adventure-game"
} |
Why doesn't the IAU definition of "Planet" disqualify Mercury and Venus as planets? | Question: Here's the IAU definition of a planet (source):
A celestial body that (a) is in orbit around the Sun, (b) has
sufficient mass for its self-gravity to overcome rigid body forces so
that it assumes a hydrostatic equilibrium (nearly round) shape, and
(c) has cleared the neighbourhood around its orbit. (p. 1)
Part b) is the sticking point. What qualifies whether something is assuming hydrostatic equilibrium? M. Burša gives the criterion in his 1984 "Secular Love Numbers and Hydrostatic Equilibrium of Planets":
(screenshot source). On the next page, Burša tabulates the k_s Love numbers for the major bodies:
Importantly, note the values of k_s = 237 and 293 for Mercury and Venus respectively. Burša concludes:
The secular Love numbers k_s computed (Table II) demonstrate that the actual state of Venus, Mercury and of the Moon is far from hydrostatic equilibrium.
The oblateness of these bodies is incompatible with their rotation rates under pure hydrostatic equilibrium.
Please correct me if I have this wrong. It appears that the IAU definition of a planet excludes Mercury and Venus due to the hydrostatic equilibrium requirement, and that this has been clear since 1984.
Answer: You are citing a paper that has been cited only six times in the peer reviewed scientific literature since it was published in 1984, which was almost 40 years ago. One of those six citations was a self-citation. Papers that are as resoundingly under-cited as that are not definitive.
With that, the "hydrostatic equilibrium" aspect of what makes a planet a "planet" simply is not well-defined. The cited paper definitely is not definitive. The bottom line from the cited paper should not be that Mercury and Venus are far from hydrostatic equilibrium. The bottom line one should deduce from that paper is that the metric used in that paper is not a good metric for hydrostatic equilibrium, and hence the low citation rate.
It is hard to find any paper that is definitively accepted as defining a good parameter regarding hydrostatic equilibrium. Mercury and Venus are very slow rotators and are close to the Sun, and hence subject to tidal forces. These get in the way of establishing a good metric. The Earth is still recovering from the glaciation that ended about 12000 years ago. Moreover, there are signs that parts of former tectonic plates have dived almost to the core mantle boundary. The Earth is not in hydrostatic equilibrium. The Moon and Mars also are not in hydrostatic equilibrium. There are fast rotators such as Haumea that are triaxial in shape. This makes little sense from a naive hydrostatic equilibrium point of view. As an aside, Mike Brown, the discoverer of Haumea, was one of the key killers of Pluto as a planet. Mike Brown proudly uses @plutokiller as his Twitter username. "Hydrostatic equilibrium" is not a good metric unless one uses "approximately in hydrostatic equilibrium" as a rather fuzzy qualifier.
Regarding the other two attributes:
Orbiting the Sun is well-defined, okay, but wow. That means there are eight planets in the entire universe. All of the exoplanets that have been discovered to date are not "planets." However, this part of the definition completely bypasses several potential problems:
The brown dwarf / super-Jupiter problem. There's no clear dividing line between a brown dwarf and a super-Jupiter.
The newly forming star system problem. Things that might eventually become planets are not quite yet planets in those newly forming star systems.
The rogue planet problem. Whether planet-sized objects ejected from a star system still count as planets is debated, and that perhaps includes the hypothetical fifth giant planet that some posit was ejected from our solar system early in its formation.
The "clearing the neighborhood" concept also is well-defined; there are multiple metrics that agree that the gap between the eight planets and the myriad non-planets is a huge gap of multiple orders of magnitude. We don't know whether this applies outside the solar system. It probably doesn't apply for newly formed star systems, but it probably does apply for star systems more than a few hundred million years old. Almost all of the exoplanets orbiting stars other than the Sun would most likely qualify as planets were it not for the "planets orbit the Sun" clause.
One of the chief proponents of the "clearing the neighborhood" qualification, Mike Brown (mentioned above), used as evidence for the proposed demotion of Pluto's status a paper previously written by one of the key opponents of that qualification, Alan Stern, who is the chief scientist for the New Horizons spacecraft that flew by Pluto and whose mission continues to this day. Two other papers were also used, all showing a huge gap between Mars and Pluto.
That paper by Stern found a parameter with a vast six-order-of-magnitude gap between Mars and Pluto. In that paper, Stern proposed that the eight objects in the solar system that have "cleared their neighborhood" according to his own parameter be called überplanets, while the lesser objects that still appear to be round-ish be called unterplanets. The IAU decided to call them planets and dwarf planets, with the exception that moons did not qualify as either a planet or a dwarf planet. Dwarf planets must be objects that orbit the Sun as opposed to orbiting a planet or dwarf planet. Stern's proposal would have designated some of the larger moons as unterplanets. | {
"domain": "astronomy.stackexchange",
"id": 6854,
"tags": "planet, venus, mercury, definition"
} |
Why is the alpha particle called a particle when it is made of four particles? | Question: We know the alpha particle is the nucleus of helium. It contains four subatomic particles - two protons and two neutrons. The protons and neutrons are further made of particles called up and down quarks. So why call an alpha particle a particle? And similarly, why do we say electrons, protons, neutrons etc. are particles, even though they are made of particles?
I can understand the case of electrons, protons, and neutrons as when they were discovered and named, quarks were not discovered. But the helium nucleus is a different case.
I reckon an English word's meaning changes with the frame of reference. We do not call a quark an object, although an object could be any real or virtual thing. So in chemistry, I don't think calling an alpha particle a particle is a good practice. I do understand there are dust particles and all, but we mention those only while talking about the macroscopic world. Not so in the case of chemistry. I hope you get my point.
Answer: Actually, the alpha particle contains two protons and two neutrons. It is what is emitted from the nucleus under alpha decay, giving off what was first classified as alpha radiation by Rutherford at the end of the 19th century (the very late 1800s).
In 1932 the neutron was discovered. Only then did it become clear that the nuclei of atoms contain more than just some positively charged particles (which we know as protons).
So there is this historic reason for the naming of the alpha particle.
"Particle"
The word particle does not only pertain to elementary particles (and in [quantum] chemistry, we stop at the level of nuclei as particles, everything that goes deeper is nuclear physics). There are smoke particles, dust particles, etc.
There is also the wave-particle duality. In principle, since we can observe this "alpha radiation" from radioactive processes, there must be a particle that belongs to the wave. With electromagnetic radiation, this is the photon, and in alpha radiation it is the alpha particle. | {
"domain": "chemistry.stackexchange",
"id": 16980,
"tags": "physical-chemistry, quantum-chemistry"
} |
Where does the unused generated electricity go? | Question: Assume there is a nuclear reactor consuming nuclear energy of one megawatt. The energy is converted to electricity by a generator with some energy loss.
For the sake of simplicity, let's assume the generated electricity is about 0.9 megawatts. If the load only consumes 0.1 megawatts in total, for example, where do the other 0.8 megawatts go? I don't think there is a giant battery to keep this unused energy.
Note that I have no idea how the electric power is generated and distributed. I only know that nuclear reaction releases energy to heat water, which in turn, rotates the generator to produce electricity. The power is distributed like "live" video streaming (with negligible delay). Please correct my understanding.
Answer: Although your question seems to have changed since I began writing this reply, I think it still applies:
The answer is that the generator only generates 0.1MW. Regardless of how much steam you supply to it, the generator doesn't actually generate more power than the load needs.
You have a nuclear reactor which is generating steam. The steam drives a steam turbine which drives the generator which, for the sake of discussion, we usually say is spinning at 3600 RPM. If you attach a 0.1MW load to the generator -- which is not really the proper way to describe it -- then you draw 0.1MW from the generator.
The reactor may have produced enough steam to provide 0.9MW, but if you don't have a 0.9MW load on the generator then the turbine will not consume that "extra" steam. It will only consume steam at a rate which will drive that generator to provide that 0.1MW.
If you increase the load on the generator then the turbine will consume steam at an increased rate, up to that 0.9MW capability of the reactor.
That's a pretty loosey-goosey explanation but should get the picture across. Just remember that the 1MW represents how much steam the reactor can provide to drive the generator, not how much energy the turbine is actually consuming or how much power the generator is actually producing.
If you have a 0.1MW load and you are producing the maximum amount of steam all the time then your reactor will soon explode from the pressure.
As an addendum I want to say that the purpose of the steam is to keep the generator spinning at 3600 (or 3000) RPM. When the load increases it drags the RPMs of the generator down and the turbine must increase its steam consumption to maintain the generator's speed of 3600 (or 3000) RPM. When the load decreases then the turbine must reduce its consumption of steam in order to keep the generator from spinning faster than 3600 (or 3000) RPM. If the turbine can't maintain the generator speed within its limits then it "trips", the regulator bypasses, and the turbine cycles down to idle until the operators can spin it back up and re-sync it with the grid.
The generator only produces enough electricity to satisfy the load, and the turbine only consumes enough steam to satisfy the demands of the generator.
Hope that helps.
(Thank you for the correction Solomon Slow.) | {
"domain": "physics.stackexchange",
"id": 84668,
"tags": "electric-circuits, electricity, energy-conservation"
} |
Intuition behind the max-flow min-cut theorem | Question: What is the intuition behind the max-flow min-cut theorem?
I know that the min-cut is the dual of max-flow when formulated as a linear program, but the result seems artificial to me.
Answer: So you have a flow through the network. If you want the maximal flow, your network should not have any bottlenecks. And if you partition the network into two parts, with the source and the sink in different partitions, you won't be able to push more through the network than this cut allows - i.e., the sum of the capacities of the edges crossing the cut.
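This bottleneck picture can be checked directly with code. Below is a minimal Edmonds-Karp sketch on an invented toy network (the node names and capacities are made up for illustration); it also extracts the source side of a minimum cut from the final residual graph so the two quantities can be compared:

```python
from collections import deque

def max_flow_min_cut(capacity, source, sink):
    """Edmonds-Karp max flow; also returns the source side of a minimum cut."""
    # Build residual capacities, adding zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break
        # Trace the path, find its bottleneck capacity, and augment.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
    # Min cut: nodes still reachable from the source in the residual graph.
    reachable, stack = {source}, [source]
    while stack:
        u = stack.pop()
        for v, cap in residual[u].items():
            if cap > 0 and v not in reachable:
                reachable.add(v)
                stack.append(v)
    return flow, reachable

# Invented toy network: capacities on directed edges.
capacity = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
flow, cut_side = max_flow_min_cut(capacity, 's', 't')
cut_value = sum(c for u in cut_side for v, c in capacity[u].items() if v not in cut_side)
print(flow, cut_value)  # equal, as the theorem promises
```

Running this, the value of the cut found from the residual graph always equals the max flow value, which is exactly the theorem.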
Now the minimum cut will be the worst bottleneck in the network, so it corresponds to the max flow. | {
"domain": "cs.stackexchange",
"id": 4522,
"tags": "graphs, network-flow, intuition"
} |
Causes of entropy change | Question: My professor told me that entropy changes due to two reasons:
entropy created
entropy due to heat exchange
The professor also said that the entropy produced is zero for a reversible process but not for an irreversible process.
But I do not understand how the two are different. I am getting confused.
Can anybody explain this to me with an example of each?
Answer: Your professor is using a framework in which the entropy change of a closed system is equal to the sum of the entropy created within the system (by irreversibilities, such as viscous dissipation) plus the entropy entering and leaving the system through its boundaries. The entropy entering through each boundary of the system is given by $Q/T_\textrm{boundary}$ where $Q$ is the heat passing through that part of the boundary and $T_\textrm{boundary}$ is equal to temperature at the boundary through which the heat is flowing. So, in this framework, $$\Delta S=S_\textrm{created}+\sum{\frac{Q}{T_\textrm{boundary}}}$$ If the process is reversible, then $S_\textrm{created}=0$ and $T_\textrm{boundary}=T$, where T is the (uniform) temperature of the system.
For more on this powerful approach, see Fundamentals of Engineering Thermodynamics by Moran et al. | {
"domain": "physics.stackexchange",
"id": 35952,
"tags": "thermodynamics, statistical-mechanics, entropy, reversibility"
} |
How many centrioles/basal bodies are there in multi-ciliated cells throughout the cell cycle? | Question: I thought there were only two centrioles per cell, that convert to the basal body at some point during the cell cycle. I also thought there's one basal body per cilium, so I'm not clear on where the other basal bodies are coming from. I'd like to know the distribution of basal bodies and centrioles throughout the cell cycle in multi-ciliated organisms/cells.
Answer: There is a single basal body per cilium. During cell division the centrosome has two centrioles; however, during the differentiation of ciliated cells, there is an amplification of basal bodies that nucleate from the centrioles.
In multiciliated cells the basal bodies arise from two pathways: (1) the de novo / deuterosomal / acentriolar pathway and (2) the centriolar pathway.
In work carried out by Al Jord et al. (2014), the interplay of these two pathways was studied in brain ependymal cells. The study shows that after division, when the cell begins to differentiate, the daughter centriole serves as a site for the creation of deuterosomes. During this stage, deuterosome-independent procentrioles also nucleate (the centriolar pathway) on both the mother and daughter centrioles. However, the contribution to basal bodies is predominantly from the deuterosomes.
more than 90% of the centrioles were generated via deuterosomes and
less than 10% directly from centrosomal centriole
Following figure explains the process nicely.
I am not really sure about multiciliated unicellular organisms. My guess is that they retain old basal bodies and new ones are created by a similar mechanism. Most ciliates can also undergo sexual reproduction. | {
"domain": "biology.stackexchange",
"id": 3146,
"tags": "cell-biology, cell-cycle, cilia"
} |
Central Forces: Newtonian/Coulomb force vs. Hooke's law | Question: We know that a body under the action of a Newtonian/Coulomb potential $1/r$ can describe an elliptic orbit. On the other hand, we also know that a body under the action of two perpendicular Simple Harmonic Motions can also have an elliptic orbit. Hence I was wondering if we can differentiate between a body under the influence of a central potential $1/r$ and a body under the action of two perpendicular SHM’s just by observing the orbits without prior knowledge of the potential they are under. So my question is how can we differentiate between these two potentials?
Answer: Your two examples are both central forces. For gravity the potential is:
$$ U_g = -\frac{k}{r} $$
while for the simple harmonic motion the potential is:
$$ U_s = kr^2 $$
Both of these allow circular orbits, and for a circular orbit you cannot tell which is which. However, for an elliptical orbit you can, because with gravity the origin of the force is at one focus of the ellipse, while for SHM the origin of the force is at the centre of the ellipse.
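A quick numerical sketch of this distinction (the spring constant, step size, and initial conditions are arbitrary choices for illustration): integrating the SHM force $F = -k\vec{r}$ yields an ellipse whose extremes are symmetric about the origin, i.e. the force centre sits at the centre of the ellipse, which would not be the case for a Kepler ellipse with the attracting body at a focus:

```python
# Crude symplectic-Euler integration of a 2D orbit under the SHM force F = -k r.
# The spring constant, step size, and initial conditions are arbitrary choices.
k, dt, steps = 1.0, 1e-3, 20000
x, y = 1.0, 0.0
vx, vy = 0.0, 0.5
xs, ys = [], []
for _ in range(steps):
    vx += -k * x * dt   # kick
    vy += -k * y * dt
    x += vx * dt        # drift
    y += vy * dt
    xs.append(x)
    ys.append(y)

# For SHM the orbit extremes are symmetric about the force centre at the origin,
# so max + min of each coordinate is ~0.  A Kepler ellipse with the attracting
# body at the origin (a focus) would not show this symmetry.
print(round(max(xs) + min(xs), 2), round(max(ys) + min(ys), 2))
```

Both printed sums come out essentially zero, confirming that the force centre coincides with the ellipse centre for the SHM potential.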
As a side note: these are the only two central potentials for which all bounded orbits are closed. This is Bertrand's theorem. The behaviour is also different with respect to the virial theorem. For the gravitational potential the average values of the kinetic energy $T$ and the potential energy $V$ are linked by:
$$ 2T = -V $$
while for the SHM potential we get:
$$ T = V $$ | {
"domain": "physics.stackexchange",
"id": 55651,
"tags": "newtonian-mechanics, forces, classical-mechanics, orbital-motion, potential"
} |
compatibility cturtle - electric | Question:
Hi all,
I'm a new ROS user. I'm working on a project written for ROS's cturtle version. I'm having a lot of problems; in fact, cturtle is supported on Ubuntu 10.10 at the latest, and I cannot keep my PC on an outdated OS.
I've tried to import the project into ROS Electric, but I get too many compilation errors (I think they are library problems).
Is there a way to install cturtle on a newer OS? Or,
is there a way to get compatibility between the two versions?
Thank you
Originally posted by pacifica on ROS Answers with karma: 136 on 2011-10-06
Post score: 1
Original comments
Comment by tfoote on 2011-10-10:
Can you be a little more specific about what you're trying to do? Both electric and cturtle work on Maverick. Are you trying to get cturtle to run on a newer version of Ubuntu? The list of supported distros is in REP 3 http://www.ros.org/reps/rep-0003.html
Comment by Lorenzo Riano on 2011-10-07:
can you edit your post and add a sample (not the whole list!) of the errors you are receiving?
Answer:
I agree that upgrading to Electric is probably best. While there are differences, compatibility between C-turtle and Electric is quite good.
I know of no central list of all the changes introduced in each distribution. Given the federated nature of ROS development, that would not be practical. Instead, each stack publishes a ChangeList for every version released. If an API has changed, refer to the change list for the containing stack.
For example, camera driver changes are listed here:
http://www.ros.org/wiki/camera_drivers/ChangeList.
Originally posted by joq with karma: 25443 on 2011-10-07
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 6897,
"tags": "ros, cturtle, ubuntu, ros-electric"
} |
What are some real world applications of graphs? | Question: Can you give some real-world examples of what graph algorithms people are actually using in applications?
Given a complicated graph, say a social network, what properties/quantities do people want to know about it?
It would be great if you can give some references. Thanks.
Answer: Graphs are definitely one of the most important data structures, and are used very broadly
Optimization problems
Algorithms like Dijkstra's enable your navigation system / GPS to decide which roads you should drive on to reach a destination.
The Hungarian Algorithm can assign each Uber car to people looking for a ride (an assignment problem)
Chess, Checkers, Go and Tic-Tac-Toe are formulated as a game tree (a degenerate graph) and can be "solved" using brute-force depth or breadth first search, or using heuristics with minimax or A*
Flow networks and algorithms like maximum flow can be used in modelling utilities networks (water, gas, electricity), roads, flight scheduling, supply chains.
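As a concrete sketch of the first item above, here is a minimal Dijkstra implementation on an invented toy road network (node names and edge weights are made up for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; edge weights must be non-negative."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry, a shorter route was already found
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Invented toy road network: distances between junctions A..D.
roads = {'A': {'B': 4, 'C': 2}, 'B': {'D': 5}, 'C': {'B': 1, 'D': 8}, 'D': {}}
print(dijkstra(roads, 'A'))  # shortest distances from A to every junction
```

Note how the shortest A-to-B route goes through C (cost 3) rather than using the direct edge (cost 4) -- exactly the kind of decision a navigation system makes.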
Network Topology
The minimum spanning tree ensures that your internet traffic gets delivered even when cables break.
Topological sort is used in project planning to decide which tasks should be executed first.
Disjoint sets help you efficiently calculate currency conversions between NxN currencies in linear time
Graph coloring can in theory be used to decide which seats in a cinema should remain free during an infectious disease outbreak.
Detecting strongly connected components helps uncover bot networks spreading misinformation on Facebook and Twitter.
DAGs are used to perform very large computations distributed over thousands of machines in software like Apache Spark and Tensorflow
Specialized types of graphs
Bayesian networks were used by NASA to select an operating system for the space shuttle
Neural networks are used in language translation, image synthesis (such as fake face generation), color recovery of black-and-white images, speech synthesis | {
"domain": "cs.stackexchange",
"id": 16269,
"tags": "graphs"
} |
How are time resolution and signal bandwidth related? | Question: I am confused by the dual concepts of time resolution and bandwidth. Often I will hear that a pulse-compressed radar application "doesn't have enough BW" for some specific time resolution that is sought.
Isn't the maximum time resolution simply the reciprocal of your sampling rate?
How are those concepts related?
Answer: Dilip's points in his answer are correct. Speaking more to the context that you referenced of pulse-compression radar, I think you're getting confused by differing meanings of the oft-used word "resolution." In a broad signal processing sense, your time resolution is defined to some extent by your sample rate. But in the specific problem domain of building a radar receiver, you are concerned with being able to identify multiple echoes from distant objects and precisely observe their arrival time. "Resolution" in this context refers to resolving and separating multiple received echoes so that they can be processed independently.
A typical radar receiver uses a sliding cross-correlator to locate echoes from objects that have reflected the transmitted radar signal. The receiver knows the format of the transmitted pulse, so cross-correlation between the RF receiver's output and the transmitted pulse waveform is the optimum scheme for detecting the presence of reflected pulses in AWGN. The correlator output will contain copies of the transmitted pulse waveform's autocorrelation function (which typically has a sinc-like shape) for each received echo, shifted in time based on the range to the target that caused the reflection. In order to discriminate between the targets, their corresponding lobes at the correlator output must be sufficiently separated in time.
A "high-resolution" radar is able to finely discriminate between multiple targets in the range dimension. If your radar has multiple targets at approximately the same range, then their echoes will reach the receiver at nearly the same time. Therefore, their autocorrelation lobes will appear at the output of the correlator at nearly the same time. The ability of the radar to discriminate between the echoes depends upon the time duration of the waveform's autocorrelation lobes; a narrower autocorrelation function (ideally one that looks like an impulse) is better.
This long-winded introduction brings us to the idea of pulse compression. Pulse-compression radar waveforms are typically implemented using linear frequency modulation (also known as "chirping"); instead of transmitting a constant-frequency pulse, the transmitted frequency is swept linearly over the course of the pulse. In practice, the sweep can be done over tens or even hundreds of MHz of spectrum. What's the benefit? An autocorrelation function with nice properties:
$$
\langle s_{c'}, s_{c'}\rangle(t) = T \Lambda \left(\frac{t}{T} \right) \mathrm{sinc} \left[ \pi \Delta f t \Lambda \left( \frac{t}{T}\right) \right] e^{2 i \pi f_0 t}
$$
The equation above is borrowed from the Wikipedia article; I'll defer the full explanation to that source. What's important here is the $\Delta f$ term; it refers to the amount of frequency covered by the linear frequency chirp. Since $\Delta f$ is a factor in the argument of the $\mathrm{sinc}$ function, it can easily be seen that by chirping over a larger bandwidth, the main lobe of the pulse's autocorrelation function will be narrower. Narrower lobes are more easily discriminated by the radar receiver, giving such a radar high "resolution" in terms of differentiating between similarly-ranged targets.
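This can be checked numerically. The toy sketch below (all parameters invented for illustration) generates two sampled complex baseband chirps, measures the lag at which the autocorrelation magnitude first falls to half its peak, and shows that the larger sweep bandwidth $\Delta f$ produces the narrower main lobe:

```python
import cmath
import math

def chirp(delta_f, T, fs):
    """Samples of a complex baseband linear-FM pulse sweeping delta_f Hz in T s."""
    n = int(T * fs)
    k = delta_f / T  # chirp rate, Hz per second
    return [cmath.exp(2j * math.pi * 0.5 * k * (i / fs) ** 2) for i in range(n)]

def half_peak_width(sig):
    """First lag (in samples) where |autocorrelation| falls below half its peak."""
    n = len(sig)
    half = None
    for lag in range(n):
        mag = abs(sum(sig[i] * sig[i + lag].conjugate() for i in range(n - lag)))
        if lag == 0:
            half = mag / 2
        elif mag < half:
            return lag
    return n

fs, T = 400.0, 1.0
lobe_20 = half_peak_width(chirp(20.0, T, fs))    # main-lobe width, 20 Hz sweep
lobe_100 = half_peak_width(chirp(100.0, T, fs))  # main-lobe width, 100 Hz sweep
print(lobe_20, lobe_100)  # the wider sweep gives the narrower correlation lobe
```

With these parameters the 100 Hz sweep's correlation lobe is several times narrower than the 20 Hz sweep's, which is precisely why pulse-compression radars chirp over as much bandwidth as they can.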
Just to wrap up a bit, this sort of finding should make sense. Recall that a wide-sense stationary signal's power spectral density can be defined as the Fourier transform of its autocorrelation function. The ideal autocorrelation function for a radar pulse would be an impulse; separating a bunch of echoes with "zero" width is easier than separating a bunch of more broad lobes. The Fourier transform of an impulse has infinite frequency extent. Qualitatively, it follows that autocorrelation functions with very short time extent would be comparatively wideband in the frequency domain. This is the basis for the oft-used rule of thumb in detection and estimation theory that you need a high-bandwidth signal in order to make high-resolution time-of-arrival measurements. | {
"domain": "dsp.stackexchange",
"id": 56,
"tags": "resolution, bandwidth"
} |
Why does the kink have the following vector direction? | Question: I imagine the kink to look like the following image in the EM wave.
On the image, we see the charge movement 2 times and each one is very small movement. Even though it is small, it is easy to see why each movement produces the electric field a little bit shorter(due to finite speed of light).
What I don't understand is why the kink has the direction shown in the image.
I understand that electric field lines can't break, but why does the field join up as in the following image, and what joins it?
Answer: The picture of the electric field associated with a moving charge shows what happens when a charge starts at rest and suddenly jerks to a new position and stops. This isn't very realistic, but it is easier to understand the field before and after; during is a little confusing. It would make more sense if the particle accelerated instead of instantly moving.
This link has an animation that gives a hint of why there are kinks. Electric field of a moving particle. Here is a screenshot from it.
This charge only behaves half as unreasonably. It starts out at rest and suddenly begins moving at a constant velocity of 80% of the speed of light. If you drew the field lines, you would see field lines pointing outward from the blue vectors, and then a field in different directions inside. There would be a kink at the bubble. We can use the animation to see what is going on.
The blue arrows show the field before it started. The yellow circle expands at the speed of light around the charge's original position.
At each new position, the field points away from the charge. The "news" of each new position spreads outward at the speed of light. Points far away haven't heard about it yet. Even though the charge is moving, the news can't pass the original yellow bubble.
Points near the charge point away from it current position. Points farther away but inside the bubble point away from an older position. Inside the bubble, changes are smooth.
Changes are abrupt at the edge of the bubble because of the sudden start to the motion. It is a big change at the leading edge because the news very quickly changes from original position to very close. The field changes from medium strength to strong in a new direction. At the trailing edge, it is smaller because the change is from medium to weak in pretty much the same direction.
Here are some lecture notes that show in more detail how the kink works for the original case. Look at the diagrams near the bottom. It also shows field lines for a more reasonable acceleration. | {
"domain": "physics.stackexchange",
"id": 95678,
"tags": "electromagnetism, electrostatics, electromagnetic-radiation"
} |
Invalid point cloud from Kinect by using openni_camera | Question:
I recently upgraded my Ubuntu from Natty to Oneiric. Since we were developing a project under ros-electric, I thus upgraded the original ros-electric for Natty to ros-electric for Oneiric too. Unfortunately, now by using "roslaunch openni_camera openni_node.launch", the point cloud data from Kinect all become "NaN" and no point cloud can be displayed in rviz, although rgb image display is fine. If I use "roslaunch openni_launch openni.launch", normal point cloud can be displayed in rviz. Since we have to use openni_camera instead of openni_launch, I am wondering why openni_camera does not produce the right point cloud.
Originally posted by Ching on ROS Answers with karma: 1 on 2013-03-23
Post score: 0
Answer:
It's probably a bug in the ROS OpenNI driver. If you search here you will find multiple people with the same problem.
I recommend opening an issue ticket on the openni_camera GitHub repo, like this one (or adding to this one): https://github.com/ros-drivers/openni_camera/issues/13
Originally posted by jcc with karma: 200 on 2013-04-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13502,
"tags": "kinect, openni-launch, openni-camera, pointcloud"
} |
How do I run and control more than one Turtlebot in a single simulation gazebo / Ros | Question:
I found no .cpp or .h file. I've tried every way but cannot find a smooth and efficient way to give each TurtleBot its own odometry.
I have lost months trying to edit the XML / launch files in several ways, but in the end I get nowhere. And I see that this is a very common question.
Please be clear. In case my question is not understood: How do I run and control more than one TurtleBot in a single Gazebo / ROS simulation?
(Ros hydro)
Originally posted by ThiagoHMBezerra on ROS Answers with karma: 13 on 2015-01-23
Post score: 1
Answer:
I'm not sure what the real solution is (such that gazebo will directly publish odometry and tf for multiple robots), but I put together a hacky solution which was usable for me. Check out the two launch files in rosh_turtlebot_demo for how to startup gazebo and spawn two robots, and publish_odom.py for publishing odometry and tf.
The key is subscribing to gazebo/model_states and republishing the odom/tf data.
Originally posted by Dan Lazewatsky with karma: 9115 on 2015-01-26
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by ThiagoHMBezerra on 2015-01-27:
Perfect, really was that I needed. I made some changes to be able to work with my project. Thank you very much.
PS: I will not need it, but in one script "r = Rand()" (I think that's it) is returning an error.
Solved! | {
"domain": "robotics.stackexchange",
"id": 20674,
"tags": "kobuki"
} |
Why is a differentiator unstable from a pole-zero viewpoint? | Question: A differentiator with frequency response $j2 \pi f$ is unstable because as frequency increases its response grows without bound.
But from a pole-zero point of view a differentiator has only zeros and no poles, so I believe it is an FIR filter, but then it should be stable (see this).
I am confused about how one can talk about the stability of a differentiator and an integrator from a pole-zero point of view. Can anybody help?
Answer: I think you may confusing a continuous and a discrete differentiator. A continuous differentiator has indeed a transfer function $H(\omega) = j\cdot \omega$ but FIR implies a time discrete filter.
Discretizing a differentiator is difficult. The transfer function is NOT bandlimited. In fact, it's almost the opposite: it gets worse the higher the frequency becomes. That means you can't sample without a significant amount of aliasing.
There are various approximations, the simplest one being $h_d= [1,-1]$. This has a pole at $z = 0$ and a zero at $z = 1$, so it's perfectly stable. The transfer function is
$$H_d(\omega) = 1 - e^{-j\omega} = e^{-j\omega/2} \cdot (e^{j\omega/2}- e^{-j\omega/2}) = e^{-j\omega/2} \cdot 2j \sin(\omega/2) $$
For $ \omega \ll 1$ this becomes
$$ H_d(\omega) \approx e^{-j\omega/2} \cdot j\omega $$
So for low frequencies this does look a differentiator cascaded with a half-sample delay. That delay is caused by the fact that the impulse response is not centered around n=0 but (more or less) n = 0.5.
However, at the Nyquist frequency ($\omega = \pi$) the magnitude error is substantial. It's $|H_d(\pi)| = 2$ as compared to $\pi$ for a continuous differentiator.
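Both facts about $H_d(\omega) = 1 - e^{-j\omega}$ are easy to verify numerically (a small sketch):

```python
import cmath
import math

def H_d(w):
    """Frequency response of the first-difference filter h = [1, -1]."""
    return 1 - cmath.exp(-1j * w)

# At low frequencies the magnitude tracks the ideal differentiator |H| = omega:
print(abs(H_d(0.01)))       # ~ 0.01
# At the Nyquist frequency it is 2 instead of the ideal value pi:
print(abs(H_d(math.pi)))    # 2.0
# Stability is immediate: h = [1, -1] is FIR, with its only pole at z = 0.
```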
I am confused about how one can talk about the stability of a differentiator and an integrator from a pole-zero point of view.
If you are referring to poles and zeros in the Z-plane, you have implied discretization. That involves either aggressive low pass filtering or substantial aliasing. Either (or both) of these will ensure stability. | {
"domain": "dsp.stackexchange",
"id": 12244,
"tags": "fourier-transform, control-systems, poles-zeros"
} |
Proof for Upper Bound of Sum of Square Roots Problem | Question: In [1], Garey et al. identify what would later be known as the Sum of Square Roots Problem in the course of working out the NP-completeness of Euclidean TSP.
Given integers $a_1, a_2, \ldots, a_n$ and $L$, determine if $\sqrt{a_1} + \sqrt{a_2} + \cdots + \sqrt{a_n} < L$
They observe that it is not even apparent that this problem is in NP, since it is not clear how many digits of precision are required in the computation of the square roots to sufficiently compare the sum to $L$. However, they do cite a best known upper bound of $O(m2^n)$ where $m$ is "the number of digits in the original symbolic expression". Unfortunately, this upper bound is attributed merely to a personal communication from A. M. Odlyzko.
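As an illustration of why the precision bound is the heart of the problem: the comparison itself is trivial to code at some fixed working precision; what a bound like the one above supplies is a guarantee on how many digits are enough. A sketch using Python's decimal module (the 50-digit working precision is an arbitrary choice, not a guaranteed-correct one):

```python
from decimal import Decimal, getcontext

def sum_sqrt_less_than(a, L, digits=50):
    """Decide sum(sqrt(a_i)) < L at a hand-picked working precision.

    The hard part of the problem is exactly that 'digits' must, in the worst
    case, grow exponentially in n for the answer to be guaranteed correct;
    here we just pick 50 digits, which happens to be fine for this toy input.
    """
    getcontext().prec = digits  # note: mutates the global decimal context
    s = sum(Decimal(x).sqrt() for x in a)
    return s < Decimal(L)

# sqrt(2) + sqrt(3) = 3.1462... ; compare against L = 4 and L = 3
print(sum_sqrt_less_than([2, 3], 4), sum_sqrt_less_than([2, 3], 3))  # True False
```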
Does anyone have a proper reference to this upper bound? Or, in the absence of a published reference, a proof or proof sketch would also be helpful.
Note: I believe that this bound might be inferred as a consequence of more general results by Burnikel et al. [2] from around 2000 on separation bounds for a larger class of arithmetic expressions. I'm mostly interested in more contemporaneous references (i.e., what was known circa 1976) and/or proofs specialized to just the case of the sum of square roots.
Garey, Michael R., Ronald L. Graham, and David S. Johnson. "Some NP-complete geometric problems." Proceedings of the eighth annual ACM symposium on Theory of computing. ACM, 1976.
Burnikel, Christoph, et al. "A strong and easily computable separation bound for arithmetic expressions involving radicals." Algorithmica 27.1 (2000): 87-99.
Answer: Here is a rather sloppy proof sketch. Let $S = \sum_{i=1}^n \delta_i \sqrt{a_i}$ where $\delta_i \in \{\pm 1\}$. This is an algebraic number of degree at most $2^n$ and height at most $H = (\max(a_i))^{n}$. Now it is easy to check if $S = 0$ (can be done even in $TC^0$ -- see this). If $S \neq 0$ then it is bounded away from $0$ by a quantity (because it is an algebraic number and hence is a non-zero root of a univariate polynomial) that is a function of the degree and height of the minimal polynomial of $S$. Unfortunately, the dependence on the degree is exponential in the number of square roots (and if the $a_i$'s are distinct primes, this degree bound is even tight, though that case of sign evaluation is easy to handle). The precision needed is hence exponential in the number of square roots, which is $2^n$ bits for $S$. It now suffices to truncate each of the $\sqrt{a_i}$ to, say, $2^{10n}$ bits to ensure the sign is guaranteed to be correct (this is easily done via polynomially many steps of Newton iteration). Now it is down to checking if the sum is positive, which is just addition and hence linear in the number of bits in the summands. Notice that this computation is in polynomial time on a BSS machine. Also notice that we are not doing any computation directly with the minimal polynomial of $S$ itself, which could have huge coefficients and look ugly; we just use it to reason about the precision to which we need to truncate the square roots. For more details, check Tiwari's paper. | {
"domain": "cstheory.stackexchange",
"id": 4775,
"tags": "reference-request, cg.comp-geom, na.numerical-analysis"
} |
Normal Contact Force Acting on Two Posts | Question: $\textbf{Question A.}$
A goal frame has a mass of $100\,\textrm{kg}$, and has two identical posts $p_1$ and $p_2$ and a uniform crossbar. Respective contact forces $r_1$ and $r_2$ act vertically on the two posts. Find $r_1$.
$r_1=r_2$ since the crossbar is uniform.
Since the frame is in equilibrium $r_1+r_2=2r=100\,\mathrm{g};\, r=50\times 9.81=490.5\,\textrm{N}$
$\textbf{Question B.}$ If a mass of $75\,\textrm{kg}$ hangs $2\,\textrm{m}$ from $p_1$ and the crossbar length is $7\,\textrm{m}$ what are the values of $r_1$ and $r_2$?
Using moments
$\begin{align}
\tau_1&=2\times75\times9.81+\frac{7}{2}\times100\times9.81-r_1\\
&=4905-r_1
\end{align}$
and
$
\begin{align}
\tau_2&=5\times75\times9.81+\frac{7}{2}\times100\times9.81-r_2\\
&=7112.25-r_2
\end{align}
$
Equilibrium implies
$4905-r_1=7112.25-r_2;\quad r_2-r_1=2207.25$
and
$75\times9.81+100\times9.81=1716.75=r_1+r_2$
which solving gives
$r_2=1962, \quad r_1=-245.25$
Are my answers correct?
I feel part B is wrong since I am unsure whether using moments here is allowed, and if the normal contact force is a moment.
I have tried searching Google with terms such as "two pivots mechanics" and "goal post normal contact force", but have not found anything which may help.
Answer: You should take moments about the points where $r_1$ and $r_2$ act, which gives you two separate equations; that is what you have done, and that's fine. But take another look at both of your equations: $r_1$ and $r_2$ are forces, and since you are taking moments about those points, you need to multiply $r_1$ and $r_2$ by their distances from the pivot. What you have essentially done is equate moments with forces, which is incorrect. | {
"domain": "physics.stackexchange",
"id": 30416,
"tags": "homework-and-exercises, newtonian-mechanics"
} |
Does anyone know of any resources that detail an extensive number of receptor types, their effects, and signalling pathways? | Question: In a similar manner to this Wikipedia page, although I am not too concerned about the localisation of the receptor, or any known ligands, as I can already access this knowledge easily.
https://en.m.wikipedia.org/wiki/Alpha-2_adrenergic_receptor
Answer: From the comment section:
The IUPHAR database is one of the most extensive databases for receptors and ligands. It also contains a lot of additional information and direct references to the literature. | {
"domain": "biology.stackexchange",
"id": 8845,
"tags": "receptor, signalling"
} |
Mocking IElasticClient in unit tests | Question: I'm trying to unit test how my class responds to a NEST 1.7 IElasticClient's return values, using NUnit 2.6.4 and Moq 4.2. I feel mocking the Get function requires very "heavy" It.IsAny calls, and am mainly looking for feedback on how to mock IElasticClient in a way that my tests remain readable.
My code is mostly 100% actual code, with the exception of:
The Repository and Person classes are simplified (I'm mainly looking for feedback on the Test method);
I'm expecting a generic Exception but in reality I've got a custom exception class there;
Here's the code:
public class Person
{
public string Id { get; set; }
public string FullName { get; set; }
}
public class PeopleRepository
{
private IElasticClient client;
public PeopleRepository(IElasticClient Client)
{
this.client = Client;
}
public Person Get(string id)
{
var getResponse = client.Get<Person>(p => p.Id(id));
if (getResponse.Source == null)
{
throw new Exception("Person was not found for id: " + id);
}
return getResponse.Source;
}
}
[TestFixture]
public class PeopleRepositoryTests
{
[Test]
public void RetrieveProduct_WhenDocNotFoundInElastic_ThrowsException()
{
var clientMock = new Mock<IElasticClient>();
var getRetvalMock = new Mock<IGetResponse<Person>>();
getRetvalMock
.Setup(r => r.Source)
.Returns((Person)null);
clientMock
.Setup(c => c.Get<Person>(It.IsAny<Func<GetDescriptor<Person>, GetDescriptor<Person>>>()))
.Returns(getRetvalMock.Object);
var repo = new PeopleRepository(clientMock.Object);
Assert.Throws<Exception>(() => repo.Get("invalid-id"));
}
}
Any way I can improve the way I'm mocking IElasticClient?
Answer: It's fine. Yes, in an ideal world, we'd verify that the id we passed into the PeopleRepository.Get() method is the same one that's getting passed to the client.Get() method, but that's also getting down into testing the internal implementation of the method. What you really care about here is that when the Person isn't found, a specific exception is thrown, so in this particular case it's fine.
Remember, we want to test for specific results, not the internal implementation.
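As an aside not in the original answer: for readers who think better in Python, the same behavior-over-implementation idea can be sketched with unittest.mock and a hypothetical repository mirroring the C# one:

```python
from unittest.mock import Mock

class PersonNotFoundError(Exception):
    pass

class PeopleRepository:
    """Hypothetical Python analogue of the C# PeopleRepository."""
    def __init__(self, client):
        self._client = client

    def get(self, person_id):
        response = self._client.get(person_id)
        if response.source is None:
            raise PersonNotFoundError(f"Person was not found for id: {person_id}")
        return response.source

# Test the observable behavior: a missing document raises an exception,
# regardless of how the client call is constructed internally.
client = Mock()
client.get.return_value = Mock(source=None)
repo = PeopleRepository(client)
try:
    repo.get("invalid-id")
    raised = False
except PersonNotFoundError:
    raised = True
print(raised)  # True
```

The mock setup is much lighter here because Python's duck typing removes the need for the `It.IsAny<Func<GetDescriptor<Person>, GetDescriptor<Person>>>` machinery, but the assertion target is the same: the thrown exception, not the call shape.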
I'm going to quote Jon Skeet's comment to a related SO question.
I would test that the usage of those functions do the right thing, rather than testing that the exact same function is used. – Jon Skeet Oct 8 '12 at 9:33 | {
"domain": "codereview.stackexchange",
"id": 18126,
"tags": "c#, nunit, moq"
} |
The differences of R parity and $U(1)_R$ symmetry | Question: I know that we introduce R-parity to avoid proton decay.
But some papers introduce a $U(1)_R$ Lepton Number, e.g. claudia, thomas.
I have questions
1. What is the difference between $R$ parity and $U(1)_R$?
2. What is the meaning of $U(1)_R$ Lepton Number?
Thank you
Answer: $R$ parity is a discrete $Z_2$ symmetry, while an $R$ symmetry is a global continuous symmetry. If you use a $Z_2$ symmetry to build your model then each field can just be either odd or even; that's it. If you impose a continuous symmetry then there is an infinite number of possible choices of $R$ charges. From a model-building perspective, a continuous symmetry is equivalent to a discrete symmetry with an infinite number of group elements, i.e. a $Z_{\infty}$ (up to some topological inequivalence between $U(1)$ and $Z_{\infty}$).
It's been shown that supersymmetry cannot be broken without an $R$ symmetry, making it a very natural assumption, while $R$ parity is not as well motivated.
Regarding your second question (this should really be a separate question): the $U(1)_R$ lepton number is a term used in the paper you link to denote a choice of $R$ charges that simulates lepton number. | {
"domain": "physics.stackexchange",
"id": 17065,
"tags": "particle-physics, supersymmetry, gauge-theory"
} |
Did relativity make Newtonian mechanics obsolete? | Question: Did Einstein completely prove Newton wrong? If so, why do we apply Newtonian mechanics even today? Newton said that time is absolute, while Einstein suggested it is relative.
So, if the fundamentals conflict, how can both of them be true at the same time?
Answer: Einstein extended the rules of Newton to high speeds. For applications of mechanics at low speeds, Newtonian ideas are an excellent approximation to reality. That is the reason we use Newtonian mechanics in practice at low speeds.
On a conceptual level, Einstein did prove Newtonian ideas quite wrong in some cases, e.g. the relativity of simultaneity. But again, in calculations, Newtonian ideas give answers very close to correct in low-speed regimes. So the numerical validity of Newtonian laws in those regimes is something that no one can ever prove completely wrong - they have been verified experimentally to a good approximation. | {
"domain": "physics.stackexchange",
"id": 14197,
"tags": "newtonian-mechanics, time, relativity"
} |
Why does the Strange Quark have Strangeness -1? | Question: I have been trying to find an explanation for the strange quark's negative strangeness value. I understand the term strangeness predates the quark model, but I'm unsure if terminology carry-over is the reason for the naming convention.
Apparently, it is also convention to give quantum numbers a positive or negative value depending on the charge of the particle. Does the strange quark's negative charge give it a negative strangeness?
Answer: From the Wikipedia article Strangeness:
The terms strange and strangeness predate the discovery of the quark,
and were adopted after its discovery in order to preserve the
continuity of the phrase; strangeness of anti-particles being referred
to as +1, and particles as −1 as per the original definition. For all
the quark flavour quantum numbers (strangeness, charm, topness and
bottomness) the convention is that the flavour charge and the electric
charge of a quark have the same sign. With this, any flavour carried
by a charged meson has the same sign as its charge. | {
"domain": "physics.stackexchange",
"id": 88119,
"tags": "particle-physics, standard-model, definition, conventions, quarks"
} |
How to import keras model (version 1.2.0) using keras (version 2.4.3) | Question: I want to load the model which was trained using an old keras framework (1.2.0). Currently I am using 2.4.3 version of keras. Is this even possible?
Answer: You can downgrade your keras installation using:
pip install keras==1.2.0 | {
"domain": "datascience.stackexchange",
"id": 9939,
"tags": "keras"
} |
Why not build a swarm of space telescopes? | Question: James Webb Space Telescope (JWST) has not yet started doing science, yet its successor LUVOIR is being discussed already. However I am curious: some countries have invested billions of dollars in developing technologies for JWST. Building a second copy would be many times cheaper - it's just a matter of production now, not new research.
Why don't we build and launch a swarm of JWSTs and combine their results?
Such a method was used for 'photographing the black hole' - telescopes, set apart by thousands of kilometers, all functioned as a single Earth-sized telescope, providing immense resolution. Imagine what the resolution would be if those space telescopes were placed at different Lagrange points! Wouldn't that be a greater leap in capability than the 15-meter mirror of LUVOIR?
PS: I set aside the different wavelengths LUVOIR would cover (nothing personal ;)) - the same logic can be applied to that project as well once it's launched into orbit.
Answer: There are a mixture of factors here.
Firstly the telescopes used to photograph the black hole were radio telescopes. Radio-waves are at a low enough frequency that we can process them directly as electromagnetic oscillations and use a technique called "interferometry" to combine the signals detected at different telescopes.
There's less benefit in putting radio telescopes into space. The main reason for having a space telescope is to get above atmospheric distortion. But radio waves aren't distorted much by the atmosphere. You might get a longer baseline (more resolution) but the size of each telescope would have to be smaller (weight restrictions on rockets). There have been space based radio interferometers (https://arxiv.org/pdf/1303.5200.pdf), but at the moment they don't really deliver more science than could be achieved with less money on Earth.
You can't do interferometry in the same way at optical or even infra-red frequencies (it's possible only if you have a direct optical connection). So each telescope would have to be a separate observatory. And if you have separate observatories, one big one is probably better than 10 little ones. Now, sure, it would be nice to have 10 big space telescopes, but the cost-benefit of such a plan needs to be considered. Is the extra science that could be done with 10 telescopes really worth the extra cost? Probably not, especially if the capabilities of the telescopes overlap.
So we do have a swarm of telescopes, each doing something different: Chandra does X-ray astronomy. The Fermi telescope observes in gamma rays. The Swift Observatory searches for gamma ray bursts. Gaia measures the position of stars with exquisite accuracy. Kepler and later TESS look for transiting exoplanets. Planck mapped the cosmic microwave background, etc etc.
But a swarm of identical telescopes is not effective. If they operate in the visible light wavelengths, they could not be linked up in the way you imagine, and if they are radio telescopes you can get most of the same results from Earth for far less cost. | {
"domain": "astronomy.stackexchange",
"id": 6366,
"tags": "telescope, space, angular-resolution, james-webb-space-telescope"
} |
In what sense does a pure spinor represent the orientation of a unique spacelike codimension-2 plane? | Question: References 1 and 2 define a pure spinor $\psi$ to be a solution of the Cartan-Penrose equation
$$
\newcommand{\opsi}{{\overline\psi}}
v^\mu\gamma_\mu\psi=0
\hspace{1cm}
\text{with}
\hspace{1cm}
v^\mu\equiv \opsi\gamma^\mu\psi,
\tag{1}
$$
where $\gamma^0,\gamma^1,...,\gamma^{D-1}$ are Dirac matrices for a $D$-dimensional spacetime with lorentzian signature, and presumably $\opsi\equiv\psi^\dagger\gamma^0$. References 1 and 2 consider real-valued solutions of this equation, so that $v^\mu$ is an ordinary spacetime vector. Equation (1) clearly implies that $v^\mu$ is a null vector (proof: multiply equation (1) by $\opsi$), but reference 1 makes this stronger statement on page 4:
The null direction, and the orientation of ... a space-like $D - 2$ plane orthogonal to the null direction, are both encoded in a pure spinor...
In this context, "plane" means the linear span of a set of $D-2$ linearly independent vectors in spacetime. The null vector $v^\mu$ itself does not uniquely determine the orientation of such a plane. To see why it's not unique, start with any set of $D-2$ linearly independent vectors orthogonal to $v^\mu$. This defines one plane through the origin. Now add some nonzero multiple of $v^\mu$ to one of those vectors. This gives another set of $D-2$ linearly independent vectors that define a different plane through the origin, still orthogonal to $v^\mu$.
In what way does a real solution $\psi$ of equation (1) select a unique orientation for a spacelike $(D-2)$-dimensional plane orthogonal to the null vector $v^\mu\equiv \opsi\gamma^\mu\psi$?
References and notes:
Banks, Fiol, and Morisse (2006), Towards a quantum theory of de Sitter space (https://arxiv.org/abs/hep-th/0609062)
Banks, Fischler, and Mannelli (2004), Microscopic Quantum Mechanics of the $p=\rho$ Universe" (https://arxiv.org/abs/hep-th/0408076)
I posted this question on Physics SE instead of Math SE partly because reference 2 cites Misner, Thorne, and Wheeler's giant book Gravitation (1973). Maybe that book answers my question, but I don't have access to it right now.
Related: Conflicting definitions of a spinor
Answer: I hadn't considered the possibility that the authors of the cited papers are using incorrect or lazy terminology. The issue is resolved by correcting their terminology in either of two equivalent ways:
If I change their word "space-like" to "null" and change $D-2$ to $D-1$, then everything makes sense.
If I leave their words as-is (space-like and $D-2$) but add the caveat "modulo the null direction $v^\mu$", then everything makes sense. This is equivalent to the previous option.
If this is what they meant, then their claim is consistent with the "flag plane" construction in Penrose's Spinors and Spacetime (which was called to my attention in a comment by @Lightcone). I'm betting that's the same as in Misner, Thorne, and Wheeler, which Banks et al cite (but I can't check this because I don't have access to that book right now). So in hindsight, I'm pretty much convinced that this is just another case of good physicists using incorrect/lazy terminology. The authors know what they're doing, of course, and sometimes that's the problem. When we really know what we're doing, we don't always realize that what we're saying doesn't accurately convey what we're thinking.
Example
Here's an easy example to illustrate the difference between what the authors said and what I think they actually meant. Work in three-dimensional flat spacetime ($D=3$) with metric diag$(1,-1,-1)$, and consider these three vectors:
\begin{align}
v &= (1,1,0)\\
a &= (0,0,1)\\
b &= (1,1,1).
\end{align}
The vector $v$ is null (lightlike). The vectors $a$ and $b$ are both spacelike and are both orthogonal to $v$. The pair $\{v,a\}$ spans a plane $P$, and the pair $\{v,b\}$ spans the same plane $P$. There are infinitely many other spacelike codimension-2 "surfaces" (lines) that have those same properties: orthogonal to $v$, and span the plane $P$ when combined with $v$.
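These statements are easy to verify mechanically. Here is a small Python sketch (a quick check, not part of the original answer) using the Minkowski inner product with the signature diag$(1,-1,-1)$ from above:

```python
# Minkowski inner product for the 3D example, signature diag(+1, -1, -1).
def mdot(x, y):
    return x[0]*y[0] - x[1]*y[1] - x[2]*y[2]

v = (1, 1, 0)   # null vector
a = (0, 0, 1)   # spacelike, orthogonal to v
b = (1, 1, 1)   # spacelike, orthogonal to v

print(mdot(v, v))              # 0  -> v is null
print(mdot(v, a), mdot(v, b))  # 0 0 -> both orthogonal to v
print(mdot(a, a), mdot(b, b))  # -1 -1 -> both spacelike
# b - a is exactly v, so {v, a} and {v, b} span the same plane P:
print(tuple(bi - ai for bi, ai in zip(b, a)))  # (1, 1, 0)
```

Since $b - a = v$, the pairs $\{v,a\}$ and $\{v,b\}$ indeed span the same plane $P$ even though $a$ and $b$ point in different spacelike directions.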
Now here's the issue. The authors said that a real solution of the Cartan-Penrose equation encodes both the null vector $v$ and a spacelike direction (codimension-2 surface) orthogonal to $v$. That's problematic because lots of different spacelike directions are all orthogonal to $v$, and their wording suggests that the Cartan-Penrose equation selects one of them. But I think they meant that a real solution of the Cartan-Penrose equation encodes the plane $P$, or equivalently the null vector $v$ and an orthogonal spacelike direction modulo $v$, without intending to single out just one such spacelike direction. | {
"domain": "physics.stackexchange",
"id": 84088,
"tags": "string-theory, mathematical-physics, spinors, geometry"
} |
Uses of the 'Golden Ratio' in Physics | Question: What are some physics applications of the golden ratio?
$$\varphi~=~ \frac{1+\sqrt{5}}{2}~\approx~ 1.6180339887\ldots$$
Does it ever function specifically as a constant in any formulas or theorems?
EDIT: Original title said Golden Radio... facepalm. I originally asked this question at math.stackexchange but the answers there were all too abstract or useless for me.
Answer: A model that has attracted some interest in recent years is the golden chain. In the golden chain you deal with a one-dimensional chain of spin-like particles, similar to the Heisenberg (or Ising) model. But in this model the spin degrees of freedom are replaced by (non-Abelian) anyons (see e.g. this thread). The type of anyon used in this model is the Fibonacci anyon.
To see how the golden ratio enters this model we have to look at the Hilbert space, specifically its dimension. In an ordinary spin chain each spin carries a degree of freedom to which we can assign a Hilbert space $\mathcal{H}$ of dimension 2. The tensor product of two spins is spanned by the singlet and triplet combinations. The total Hilbert space of a chain with $n$ spins is the $n$-fold tensor product $\mathcal{H}^{\otimes n}$, and it has dimension $2^n$.
Non-Abelian anyons on the other hand carry a different kind of spin. When two anyons combine they will form what is known as a fusion product. The fusion product of two anyons depends on the type of anyon you are dealing with. Fibonacci anyons satisfy the relation
$\tau \times \tau = 1 + \tau$
We can think of this analogous to spin (with a fundamental difference). When we bring two anyons together they will fuse together and form a composite-like particle. This is similar to the spin of two (s=1/2) particles which combine into a singlet (S=0) or doublet (S=1). In the case of the Fibonacci anyons the particle can form two type of composites: the vacuum particle $1$ ("zero spin") and the Fibonacci particle $\tau$.
What happens when we bring another $\tau$ \particle to this composite? It will fuse with the composite particle to form another composite. However, the allowed particles which can be formed depend on the fusion channel of the first two anyons:
If $\tau_1 \times \tau_2 \rightarrow 1$, then $(\tau_1 \times \tau_2)\times \tau_3 \rightarrow \tau$.
If $\tau_1 \times \tau_2 \rightarrow \tau$, then $(\tau_1 \times \tau_2)\times \tau_3 \rightarrow 1+\tau$
There are two ways in which a $\tau$ particle is formed in the end, while there is only one way in which a vacuum particle is formed. All in all:
$\tau \times \tau \times \tau = 1+ 2\tau$
where the factor of two on the right-hand side refers to the number of ways in which a $\tau$ particle can be formed. The Hilbert space of three $\tau$ particles is therefore 3-dimensional.
This gives the following conclusion:
We set the dimension of the Hilbert space of zero particles equal to 1.
The dimension of H for 1 Fib. anyon is also 1.
The dimension of H for 2 Fib. anyons is 2.
The dimension of H for 3 Fib. anyons is 3.
Any guess what the dimension of the 4 anyons will be? It's 5 dimensional. You can derive it yourself: just count the number of ways in which you can fuse the anyons together.
The sequence of the dimension of the Hilbert spaces for $n$ anyons is:
$1,1,2,3,5,8,13,\ldots$
Yes, this is the Fibonacci sequence! And the Fibonacci sequence has a very nice feature: It grows roughly as $\phi^n$ where $\phi$ is the golden ratio! The dimension of the Hilbert space of $n$ Fibonacci anyons grows roughly as $\phi^n$!
One way to think of this is that the Fibonacci anyons carry a spin of dimension $\phi$. This statement is wrong though: the Hilbert space always has an integer dimension. It is therefore not referred to as a spin but rather as the quantum dimension of the anyons. The rule is that on average the Hilbert space grows by a factor of $\phi$ every time you add an anyon to the chain (just like the H-space for an Ising chain grows with a factor of two every time you add a spin to the system).
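The counting above is easy to check numerically. The following Python sketch (an illustration, not from the original answer) builds the dimension sequence from the fusion recurrence dim(n) = dim(n-1) + dim(n-2) and compares the growth ratio to $\phi$:

```python
# Hilbert-space dimensions for n Fibonacci anyons follow the Fibonacci
# recurrence: each extra anyon contributes one new fusion channel per
# existing tau channel, plus the channels already present.
dims = [1, 1]  # dimensions for 0 and 1 anyons
for n in range(2, 30):
    dims.append(dims[-1] + dims[-2])

print(dims[:7])             # [1, 1, 2, 3, 5, 8, 13]
phi = (1 + 5 ** 0.5) / 2
print(dims[-1] / dims[-2])  # ~1.618..., the quantum dimension phi
print(phi)
```

The ratio of consecutive dimensions converges quadratically fast, so even a chain of ~30 anyons reproduces the golden ratio to machine precision.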
On a last note: Fibonacci anyons might be realized in certain quantum Hall systems and they are useful for topological quantum computing, if they are ever found. | {
"domain": "physics.stackexchange",
"id": 671,
"tags": "soft-question"
} |
Toroidal Planets | Question: I have read online in numerous places about the possible existence of toroidal planets, and most people seem to believe that they could exist, but they also have no evidence to support this claim. I recently did a project in which I calculated the gravitational field around one such planet, and I would like to continue working on it to either prove or disprove the possibility of these planets.
Before I continue, I would like to know if any mathematically rigorous investigations into this question have already been conducted. I have searched, but I haven't found anything on planets, only black holes.
Answer: I posted an answer to a similar question here:
https://physics.stackexchange.com/a/97978/1255
The wording there was pretty much asking for a broad description of the physics (which come down to bifurcation possibilities) up to, and including, the topology changes of the planet.
The paper I linked to seems to be pretty much the state of the art on this problem. Calculations producing some of the shapes within this set were reproduced by the io9 link someone gave:
As you can see, it's not exactly a torus (although upon later inspection, it looks like this might be a simple rotated ellipse, in which case I'd file it away in "wrong approaches"). This is expected, because the field distribution for a single ring of wire or gravitational mass in Newtonian gravity is extremely well understood. With certain limit-case assumptions, you can also divide the problem up into 2 components - one analogous to an infinite line, and one for the exclusively radial field from the global dynamics. But that method is approximate by definition. An infinite line of mass would obviously be a perfect infinite cylinder under the assumptions for how planetary mass behaves. It's that radial field that causes the shape to be somewhat egg-shaped.
The author says that he used a Monte Carlo method of randomly placing ring sources of mass within the interior of the thing. The equation for the ring is well-known, so this is a fairly obvious method. What's not obvious is how you perform an iteration between the surface and the field. How do you even define the curve of the planet's surface in the first place? I worked on this problem a little bit myself, and I've seen multiple wrong solutions online, so I'm quite skeptical if there isn't obvious voluminous work behind it. I have asked the author of that article about these details. So far, I have not received any response.
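For reference, the "well understood" single-ring field mentioned above has a closed form in terms of a complete elliptic integral. Here is a self-contained Python sketch (my own illustration; unit $G$, $M$, and ring radius $a$ are assumptions) that evaluates the ring potential via the arithmetic-geometric mean:

```python
import math

def agm(x, y, tol=1e-15):
    # Arithmetic-geometric mean, used to evaluate the complete
    # elliptic integral K(k) = pi / (2 * agm(1, sqrt(1 - k^2))).
    while abs(x - y) > tol * x:
        x, y = (x + y) / 2, math.sqrt(x * y)
    return x

def ring_potential(r, z, M=1.0, a=1.0, G=1.0):
    # Newtonian potential of a thin uniform ring, cylindrical coords (r, z):
    # Phi = -2 G M K(k) / (pi * sqrt((r + a)^2 + z^2)),
    # with k^2 = 4 r a / ((r + a)^2 + z^2).
    d2 = (r + a) ** 2 + z ** 2
    k2 = 4 * r * a / d2
    K = math.pi / (2 * agm(1.0, math.sqrt(1 - k2)))
    return -2 * G * M * K / (math.pi * math.sqrt(d2))

# At the ring's center the potential reduces to -GM/a:
print(ring_potential(0.0, 0.0))  # -1.0
```

On the axis the formula collapses to $-GM/\sqrt{a^2+z^2}$, which gives a handy correctness check before using it as the kernel of the Monte Carlo superposition described above.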
There's no better metric for the scope of the problem than the question on this site, Why is the Earth so fat? This was only concerned with a tiny tiny limit case problem. It involved a lot of numerical attempts, and the solutions weren't actually all that good in terms of accuracy.
In fact, the Monte Carlo approach has some serious issues. If you're testing the field/potential close to the surface of the planet, you'll almost certainly blow up numerically. The problem is that the field/potential could easily be infinite if a point just happens to be randomly selected very very close to the test-point on the surface. Maybe you'll just de-tune it a little bit. But then you need this very value in order to get to do the iteration on the surface profile!
I recently did a project in which I calculated the gravitational field around one such planet, and I would like to continue working on it to either prove or disprove the possibility of these planets.
At the risk of being condescending, I'll venture a guess that you did a numerical approximation of the thin ring case? We have an explicit solution for that, and it's the bread-and-butter of the more advanced investigations. Maybe I'm wrong, I honestly have no idea what you've done. But in terms of your interests:
I have read online in numerous places about the possible existence of toroidal planets, and most people seem to believe that they could exist, but they also have no evidence to support this claim.
Are you interested in:
Possible natural formation of the very flat shapes, or the dimpled flat shapes, or a full hole in the middle?
The possible artificial construction of such bodies?
The first case hasn't been clearly ruled out yet. It's like the search for life on Mars. They might be out there, but I wouldn't hold my breath for discovery. Within the next 30 years, with the way that planet hunting has been progressing, we'll probably settle the question. So that's a good problem to get involved in. Maybe you could use it toward a degree, but it's certainly not a career.
But for the latter case, it's obviously possible. The stability requirements aren't nearly as restrictive as what some people make them out to be. Large torus planets are unstable in the most obvious sense. They're still locally stable, or at least that's what my money is on. But it's extremely tenuous. Even if you could construct it, it wouldn't be smart.
For small torus planets, the outlook is good, as long as we're not talking about serendipitous natural occurrence. Nature will probably find some other way to shed the angular momentum before it takes on the extreme shapes, because planet forming environments are extremely nasty. But if you're intentionally spinning up a planet, things should go exactly according to the evolution of shapes that the research already illustrates. The real question is - if it breaks up, what does it break up into? For a large torus, this is obvious (lots of tiny planets), but if the mass and angular momentum can barely support 2 separate bodies, then you should have a very robust global stability condition. Note that you can establish some conservative limits easily, but in reality, 2 closely orbiting tidally locked bodies is a very very difficult problem, since the tidal forces deform them. That becomes 3D, none of these nice 2D simplifications. | {
"domain": "physics.stackexchange",
"id": 27835,
"tags": "newtonian-gravity, planets"
} |
Questionable Taylor expansion for Peierls substitution | Question: In this paper, on page 3, the authors go from the tight-binding model with the Peierls substitution
$$ H = \sum_{i,j} \sum_{a,b} t_{a,b} \exp\left(i \int_{\textbf{R}_{j,b}}^{\textbf{R}_{i,a}} dr'_{\mu} A_{\mu} (\textbf{r}',t ) \right) c_{i,a}^\dagger c_{j,b} $$
to $$ H= H_0 + \sum_{i,j} \sum_{a,b} t_{a,b} (L_\mu^A A_\mu + 1/2 L_{\mu v}^{AA} A_\mu A_v +..) c_{i,a}^\dagger c_{j,b} $$
where $H_0 $ is $H$ without the exponential, and then define $ L_{\mu}^A = ( \partial_{A_\mu} H)|_{A=0} $ and so on.
However, shouldn't this second line just be
$$ H= H_0 + L_\mu^A A_\mu + 1/2 L_{\mu v}^{AA} A_\mu A_v +... $$
since the partial derivative already acts on the full $H$? Or am I missing something, since we're taking a derivative over $A$ instead of $r$?
Answer:
In this paper, on page 3, the authors go from the tight-binding model with the Peierls substitution
$$ H = \sum_{i,j} \sum_{a,b} t_{a,b} \exp\left(i \int_{\textbf{R}_{j,b}}^{\textbf{R}_{i,a}} dr'_{\mu} A_{\mu} (\textbf{r}',t ) \right) c_{i,a}^\dagger c_{j,b} $$
I note here, for completeness, that in their expression $t$ also depends on $i$ and $j$, and they use the notation $t_{ab}(i,j)$.
to $$ H= H_0 + \sum_{i,j} \sum_{a,b} t_{a,b} (L_\mu^A A_\mu + 1/2 L_{\mu v}^{AA} A_\mu A_v +..) c_{i,a}^\dagger c_{j,b} $$
where $H_0 $ is $H$ without the exponential, and then define $ L_{\mu}^A = ( \partial_{A_\mu} H)|_{A=0} $ and so on.
However, shouldn't this second line just be
$$ H= H_0 + L_\mu^A A_\mu + 1/2 L_{\mu v}^{AA} A_\mu A_v +... $$
Yes, their expression is wrong, for the reason you already know. This is probably just a typo.
The expansion is obtained (as they say in the paper) by using the Taylor series expansion of the exponential:
$$
H = \sum_{i,j} \sum_{a,b} t_{a,b}(i,j) \left(1
+ i \int_{\textbf{R}_{j,b}}^{\textbf{R}_{i,a}} dr'_{\mu} A_{\mu} (\textbf{r}',t )
-
\frac{1}{2}\int_{\textbf{R}_{j,b}}^{\textbf{R}_{i,a}} dr'_{\mu} A_{\mu} (\textbf{r}',t )
\int_{\textbf{R}_{j,b}}^{\textbf{R}_{i,a}} dr''_{\mu} A_{\mu} (\textbf{r}'',t ) +\ldots
\right) c_{i,a}^\dagger c_{j,b}
$$
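As a quick numerical sanity check of this truncation (my own illustration, with a small real $x$ standing in for the line integral of $A$ along a bond), the second-order series $1 + ix - x^2/2$ matches $e^{ix}$ up to an $O(x^3)$ error:

```python
import cmath

x = 0.1  # stands in for the (small) line integral of A along the bond
exact = cmath.exp(1j * x)
second_order = 1 + 1j * x - x ** 2 / 2

err = abs(exact - second_order)
print(err)     # ~1.7e-4, i.e. roughly x**3 / 6, the first dropped term
print(x ** 3)  # 1e-3, the expected error scale
```

This is just the exponential's Taylor remainder, but it makes concrete why keeping terms through second order in $A$ is enough for the linear and quadratic response coefficients $L^A_\mu$ and $L^{AA}_{\mu\nu}$.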
Your expression is more correct, but also looks a little questionable to me, probably because the notation is not super clear. You are dotting $L_\mu$ with $A_\mu$, but there is no free $A_\mu(\vec r,t)$ so I suppose your expression probably should have (or implicitly has) a convolution as well as a dot product. I suppose this is the case, since I also suppose the $\partial_{A_\mu}$ means a functional derivative. And so, it must be convolved with the $A$ field to make sense. | {
"domain": "physics.stackexchange",
"id": 88368,
"tags": "hamiltonian, tight-binding"
} |
ros::Time::now() very different in callback | Question:
Hi,
I am getting very different results when calling ros::Time::now() in the main function and in a Timer's callback function.
I want to get the elapsed time since the Timer began, so I have something like this:
ros::Time t_start;
void my_callback(const ros::TimerEvent& e)
{
// In this function, ros::Time::now() returns values greater than 16000
ros::Duration d_elapsed = ros::Time::now() - t_start; // d_elapsed is huge, e.g. 17108.61000
}
int main(int argc, char** argv)
{
// Initialization
ros::init(argc, argv, "my_node");
ros::NodeHandle handle;
t_start = ros::Time::now(); // This is always 0.0
ros::Timer r = handle.createTimer(ros::Duration(1.0f/20.0f), my_callback);
ros::spin();
}
When I do rostopic echo /clock, I get the large values:
---
clock:
secs: 20614
nsecs: 480000000
I tried to add a ros::spinOnce() before initializing t_start, but that did not change anything.
The rosparam /use_sim_time is true because I'm using Gazebo.
Why are the values returned from ros::Time::now() so different?
Originally posted by sterlingm on ROS Answers with karma: 380 on 2017-06-06
Post score: 1
Original comments
Comment by lucasw on 2017-06-06:
Can you include the initialization? now() shouldn't be zero.
Comment by sterlingm on 2017-06-06:
Sure, although it is just an init call and declaring a NodeHandle. I do create some service clients after the initialization, but I don't think those should matter. I edited the question to include a couple more details too.
Comment by lucasw on 2017-06-06:
If gazebo is publishing /clock then it will start at zero when it launches, then count up as long as it is running. It looks like you have been running it for more than 5 hours- is that true? Also do you print or publish the time difference every callback, otherwise how did you know d_elapsed?
Comment by sterlingm on 2017-06-06:
Yes, Gazebo was running for more than 5 hours. I just leave it on in between tests (resetting positions and odometry). I print the elapsed time at each callback for testing purposes. I need to use the elapsed time to get the index of trajectory points.
Comment by ufr3c_tjc on 2017-06-06:
t_start is likely always zero because the node hasn't yet had a chance to receive a message on /clock, so the node believes the time is zero. If you add in a sleep and spinOnce() before giving t_start a value it should be initialized with the value of the message received on /clock.
Comment by lucasw on 2017-06-07:
@ufr3c_tjc you should put that in an answer
Comment by sterlingm on 2017-06-07:
@ufr3c_tjc yep that was it. Thanks! I only needed to add a sleep before calling Time::now(). I'm glad it was a simple solution.
Answer:
From earlier comment:
t_start is likely always zero because the node hasn't yet had a chance to receive a message on /clock, so the node believes the time is zero. If you add in a sleep and spinOnce() before giving t_start a value it should be initialized with the value of the message received on /clock.
Originally posted by ufr3c_tjc with karma: 885 on 2017-06-07
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 28069,
"tags": "ros-kinetic, roscpp, timer"
} |