anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Amplitudes for state of a photon | Question: $\newcommand{\bra}[1]{\left< #1 \right|}
\newcommand{\ket}[1]{\left| #1 \right>}
\newcommand{\bk}[2]{\left< #1 \middle| #2 \right>}
\newcommand{\bke}[3]{\left< #1 \middle| #2 \middle| #3 \right>}$
In this lecture video (jump to 10:15), Prof. Binney discusses passing photons through a polaroid. At 10:50, he writes down the state of the incoming photon $\ket{\psi}$ as $$\ket{\psi} = \cos{\theta}\ket{\rightarrow} + \sin{\theta}\ket{\uparrow}.$$
$\ket{\uparrow}$ represents the state when the photon is always blocked and $\ket{\rightarrow}$ represents the state when the photon always passes.
How can one write the amplitudes of $\ket{\rightarrow}$ and $\ket{\uparrow}$ as $\cos{\theta}$ and $\sin{\theta}$ respectively? I understand the classical explanation of taking the components of the electric field but how can we obtain those amplitudes quantum mechanically?
Ref: Binney, James; Skinner, David The Physics of Quantum Mechanics, Oxford University Press, 2014, pp. 20-22. Google books link
Answer: $\newcommand{\bra}[1]{\left< #1 \right|}
\newcommand{\ket}[1]{\left| #1 \right>}
\newcommand{\bk}[2]{\left< #1 \middle| #2 \right>}
\newcommand{\bke}[3]{\left< #1 \middle| #2 \middle| #3 \right>}$
It comes from the normalisation condition. A general state is given by:
$$\ket{\psi} = a\ket{\rightarrow} + b\ket{\uparrow}~~~~~~~~~~~~~~~\text{where }|a|^2+|b|^2=1$$
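Spelled out, restricting to real amplitudes (which suffices for linear polarisation): the constraint $|a|^2+|b|^2=1$ is the equation of the unit circle, so $(a,b)=(\cos\theta,\sin\theta)$ for some angle $\theta$, and the Born rule then gives the transmission probability $$P(\text{pass})=\left|\bk{\rightarrow}{\psi}\right|^2=\cos^2\theta,$$ recovering Malus' law quantum mechanically.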
Then we can parametrise it in terms of a single variable $\theta$, which gives: $$\ket{\psi} = \cos{\theta}\ket{\rightarrow} + \sin{\theta}\ket{\uparrow}$$ | {
"domain": "physics.stackexchange",
"id": 65836,
"tags": "quantum-mechanics, polarization"
} |
Project Euler problem 50 | Question: I was just trying Project Euler problem 50.
The prime 41, can be written as the sum of six consecutive primes: 41
= 2 + 3 + 5 + 7 + 11 + 13
This is the longest sum of consecutive primes that adds to a prime
below one-hundred.
The longest sum of consecutive primes below one-thousand that adds to
a prime, contains 21 terms, and is equal to 953.
Which prime, below one-million, can be written as the sum of the most
consecutive primes?
Here's my code for the same in Python:
LIMIT = 1000000
x = [1] * (LIMIT + 1)
x[0] = x[1] = 0
primes = []
length = 0
for i in range(2, LIMIT):
    if x[i]:
        primes.append(i)
        length += 1
        for j in range(i * i, LIMIT + 1, i):
            x[j] = 0
s = 0
prev = -1
cnt = -1
for i in range(length):
    s = 0
    for j in range(i, length):
        s += primes[j]
        if s > LIMIT:
            break
        if x[s] and cnt < j - i + 1:
            cnt = j - i + 1
            prev = s
print(prev)
I just create a list of primes using the Sieve of Eratosthenes. For each prime, I'm just finding all the runs of consecutive primes starting there and their sums to get the answer.
This currently takes about one second to run.
Is there a faster, better, and neater way to solve this?
Thank you for your help!
EDIT:
I see there are 2 votes to close my program as it Lacks concrete context. Can you please explain what I should do to add concrete context?
Answer:
for i in range(2, LIMIT):
    if x[i]:
        primes.append(i)
        length += 1
        for j in range(i * i, LIMIT + 1, i):
            x[j] = 0
This is inconsistent: either i should range up to LIMIT inclusive, or there's no need for j to range up to LIMIT inclusive.
Also, updating length in the loop is overkill. len(primes) outside the loop works just as well and doesn't distract.
There are more advanced ways of doing the sieve with range updates, but that's already covered in dozens if not thousands of answers on this site, so I'll leave it as an exercise to search for one of them.
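For reference, a sketch of one such variant, which clears all multiples of each prime in bulk via slice assignment instead of a Python-level inner loop:

```python
def sieve(limit):
    # Sieve of Eratosthenes; slice assignment replaces the inner loop.
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            # zero out i*i, i*i+i, ... in one C-level operation
            is_prime[i * i::i] = bytearray(len(is_prime[i * i::i]))
    return [i for i, flag in enumerate(is_prime) if flag]
```

Sieving only odd candidates would be a further refinement, but even this version moves the hot loop into C.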
s = 0
prev = -1
cnt = -1
It's not immediately clear what any of those variable names mean.
if x[s] and cnt < j - i + 1:
    cnt = j - i + 1
    prev = s
This could be done more elegantly by folding cnt and prev into a tuple:
width_sum = (-1, -1)
...
if x[s]:
    width_sum = max(width_sum, (j - i + 1, s))
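To see why this works: Python orders tuples lexicographically, so max prefers the longer run and only falls back to comparing sums on a tie:

```python
# Tuples compare element by element, left to right.
best = max((-1, -1), (3, 41), (2, 100))
assert best == (3, 41)                    # longer run wins even though 100 > 41
assert max((3, 5), (3, 41)) == (3, 41)    # equal length: larger sum wins
```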
You want the longest, so the fastest approach will usually be to search in decreasing order of length. To keep the bookkeeping simple it may be worth aggregating partial sums:
primes = []
accum = [0]
for i in range(2, LIMIT):
    if x[i]:
        primes.append(i)
        accum.append(accum[-1] + i)
        for j in range(i * i, LIMIT + 1, i):
            x[j] = 0
primes = set(primes)
for width in range(len(primes), 0, -1):
    for off in range(width, len(accum)):
        partial_sum = accum[off] - accum[off - width]
        if partial_sum > LIMIT:
            break
        if partial_sum in primes:
            print(partial_sum)
            exit(0) | {
"domain": "codereview.stackexchange",
"id": 36650,
"tags": "python, python-3.x, programming-challenge, primes, sieve-of-eratosthenes"
} |
Baker-Campbell-Hausdorff for Many Operators | Question: I am trying to show that
$$e^{A_n}e^{A_{n-1}}...e^{A_2}e^{A_1}=e^{\sum_iA_i}e^{\frac{1}{2}\sum_{i>j}[A_i,A_j]} \tag{1}$$
is true for a set of operators $A_1,A_2,...,A_n$ such that $[[A_i,A_j],A_k]=0$ for all $(i,j,k)$. I am having trouble obtaining the second exponent on the RHS of the equation above.
In my attempt am using the Baker Campbell Hausdorff (BCH) Theorem, which states that
$$ e^{A_i}e^{A_j}=e^{A_i+A_j}e^{\frac{1}{2}[A_i,A_j]}$$
is valid for operators satisfying $[[A_i,A_j],A_j]=[[A_i,A_j],A_i]=0$.
We can then write
$$e^{A_n}e^{A_{n-1}}...e^{A_3}e^{A_2}e^{A_1}= e^{A_n}e^{A_{n-1}}...e^{A_3}e^{A_2+A_1}e^{\frac{1}{2}[A_2,A_1]}\\\hspace{60mm}= e^{A_n}e^{A_{n-1}}...e^{A_3+A_2+A_1}e^{\frac{1}{2}[A_3,A_2+A_1]}e^{\frac{1}{2}[A_2,A_1]}\\ \hspace{84mm}= e^{A_n}...e^{A_4+A_3+A_2+A_1}e^{\frac{1}{2}[A_4,A_3+A_2+A_1]}e^{\frac{1}{2}[A_3,A_2+A_1]}e^{\frac{1}{2}[A_2,A_1]}\tag{2}$$
Then, I want to show that
$$ e^{\frac{1}{2}[A_4,A_3+A_2+A_1]}e^{\frac{1}{2}[A_3,A_2+A_1]}e^{\frac{1}{2}[A_2,A_1]}=e^{\frac{1}{2}([A_4,A_3+A_2+A_1]+[A_3,A_2+A_1]+[A_2,A_1])} $$
and so on, which I am assuming is only true if terms like $[A_4,A_3]$ commute with $[A_2,A_1]$. Is this assumption correct?
Assuming it is correct, how do I then show that they commute, using our initial condition that $[[A_i,A_j],A_k]=0$ for all $(i,j,k)$? I have tried, but obtain the following
$$[[A_4,A_3],[A_2,A_1]]=A_4A_3A_2A_1-A_4A_3A_1A_2-A_3A_4A_2A_1+A_3A_4A_1A_2 \\ -A_2A_1A_4A_3+A_2A_1A_3A_4-A_1A_2A_4A_3+A_1A_2A_3A_4 \tag{3}$$
but I haven't found a way to regroup it and show this is $0$. Is there an easier way to show Eq. $(1)$ is true? Or to show that Eq. $(3)$ is zero?
Answer: You have that every commutator of the $A$s commutes with all of the $A$s (they are central Lie algebra elements). You wish to probe whether $[[A_4,A_3],[A_2,A_1]]=0$.
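As a numerical sanity check (a sketch; the concrete Heisenberg-type matrix realisation below is my own choice of example), one can represent the $A_i$ by strictly upper-triangular matrices, for which every commutator is central and every double commutator vanishes, and verify both Eq. (1) and the vanishing of Eq. (3):

```python
import numpy as np

# Heisenberg-type realisation: [E12, E23] = E13, and E13 commutes with both,
# so [[A_i, A_j], A_k] = 0 identically for A_i in span{E12, E23}.
E12 = np.zeros((3, 3)); E12[0, 1] = 1.0
E23 = np.zeros((3, 3)); E23[1, 2] = 1.0

def expm3(A):
    # exact matrix exponential: A is strictly upper triangular, so A @ A @ A = 0
    return np.eye(3) + A + A @ A / 2

def comm(A, B):
    return A @ B - B @ A

rng = np.random.default_rng(1)
As = [c[0] * E12 + c[1] * E23 for c in rng.normal(size=(4, 2))]  # A_1 ... A_4

lhs = expm3(As[3]) @ expm3(As[2]) @ expm3(As[1]) @ expm3(As[0])
rhs = expm3(sum(As)) @ expm3(0.5 * sum(comm(As[i], As[j])
                                       for i in range(4) for j in range(i)))
assert np.allclose(lhs, rhs)                                         # Eq. (1)
assert np.allclose(comm(comm(As[3], As[2]), comm(As[1], As[0])), 0)  # Eq. (3)
```

Since these exponentials are exact polynomials (each matrix cubes to zero), the check holds to floating-point precision rather than to some truncation order.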
Consider $[A_4 A_3, [A_2,A_1]]= A_4[A_3, [A_2,A_1]] + [A_4 , [A_2,A_1]] A_3 = 0+ 0=0$, by your postulate. So, subtracting $[A_3 A_4, [A_2,A_1]]=0$, likewise null, you derive $[[A_4,A_3],[A_2,A_1]]=0$. | {
"domain": "physics.stackexchange",
"id": 87540,
"tags": "homework-and-exercises, operators, commutator"
} |
Einstein summation convention | Question: Suppose that $M$ is a differentiable manifold which represents spacetime. Located at every point $p$ in spacetime there exists a tangent space at that point. For any tangent space there also exists a metric that is a member of that tangent space. The metric follows as $$g_{\mu\nu}(x) dx^\mu \otimes dx^\nu,$$ where $x$ is really $$x=x^\alpha=(x^0,x^1,x^2,x^3),$$ so the metric is a function of the spacetime. This is commonly summarized as just $g_{\mu\nu}$. It is common in general relativity that something called the Einstein summation convention is used. For example, with the Christoffel symbol $$\Gamma^{\rho}_{\alpha\beta}A_\rho \implies \sum_{\rho=0}^3\Gamma^{\rho}_{\alpha\beta}A_\rho.$$ This is known as the Einstein convention, where the $\rho$ index is referred to as a "dummy index". The question is: does this same logic apply for the metric $$g_{\mu\nu}(x) dx^\mu \otimes dx^\nu,$$ or do the indices here not contract?
Answer: In short, yes, the convention does apply, but let me make a few remarks before I get to the main point:
the tangent space at a point is not an element of the manifold, nor a subset. More specifically, the expression $T_p \in M$ is wrong. $T_p$ can be seen, from an extrinsic point of view, as the hyperplane tangent to the manifold at $p$
the metric is not an element of the tangent space. It is a $(0,2)$-type tensor-field defined on the tangent space, meaning that for each point $p \in M$ it is an element of $T_p^* \otimes T_p^*$. Equivalently, it is a bilinear map $g \colon T_p \times T_p \to \mathbb{R}$. It does depend on spacetime by means of its dependency on the point $p$
$x = x^{\alpha} = (x^0, x^1, x^2, x^3)$ in a coordinate chart. In general, notice that the metric depends on the spacetime point, which can't be split in this manner without a choice of coordinates. This is a subtle remark, but it is important to recall that coordinates are a choice, not a fundamental structure on the manifold
Finally, the expression $g_{\mu\nu}(x) \textrm{d}x^\mu \otimes \textrm{d}x^\nu$ does employ Einstein's convention. The object $g = g_{\mu\nu}(x) \textrm{d}x^\mu \otimes \textrm{d}x^\nu$ is the metric itself, the tensor, the bilinear map I described above. $g_{\mu\nu}(x)$ are its components on the coordinate chart $x^\alpha$. The idea is pretty similar to defining the vectors
$$e_1 = \begin{pmatrix}1 \\ 0 \\ 0\end{pmatrix}, \quad e_2 = \begin{pmatrix}0 \\ 1 \\ 0\end{pmatrix}, \quad e_3 = \begin{pmatrix}0 \\ 0 \\ 1\end{pmatrix}$$
and then writing the vector
$$v = \begin{pmatrix}v^1 \\ v^2 \\ v^3\end{pmatrix}$$
as
$$v = v^1 e_1 + v^2 e_2 + v^3 e_3 = \sum_{i=1}^{3} v^i e_i = v^i e_i,$$
the only difference being that the basis for the metric is provided by the objects $\textrm{d}x^\mu \otimes \textrm{d}x^\nu$ instead of the $e_i$. The $\textrm{d}x^\mu$ are the basis for the cotangent space at $p$, $T_p^*$, and hence $\textrm{d}x^\mu \otimes \textrm{d}x^\nu$ provide the basis for $T_p^* \otimes T_p^*$ which, as mentioned above, is the space where the metric "lives". | {
"domain": "physics.stackexchange",
"id": 85002,
"tags": "general-relativity, differential-geometry, metric-tensor, tensor-calculus, notation"
} |
An engine's power is 650 kW and the shuttle's speed increases from 120 m/s to 160 m/s in t seconds | Question: Not sure if I'm missing something obvious, but I can't seem to find a good way to tackle this question.
A space shuttle of mass 400 kg is moving in a straight line where there is no resistance. The engine is working with a constant power of 650 kW; the shuttle's speed increases from 120 m/s to 160 m/s in a time of t seconds. Find the value of t.
Any help would be appreciated.
Answer: Take the equation: power = force times velocity. You can check this dimensionally: cancelling the $\mathrm{s}^{-1}$ on both sides leaves joules = force times distance, which is work done, which is indeed energy.
From there, you can convert that power to a force. Yes, this force will change with time because it is velocity dependent, but it's a step in the right direction.
A simpler method that I didn't think of earlier: calculate it straight from kinetic energy, $E_k = \frac{1}{2}mv^2$. Find the kinetic energy at 120 m/s and at 160 m/s, then divide the difference by the power to get your answer: $\Delta E_k = \frac{1}{2}(400)(160^2-120^2) = 2.24\times 10^6\ \mathrm{J}$, so $t = 2.24\times 10^6 / 6.5\times 10^5 \approx 3.4\ \mathrm{s}$. | {
"domain": "physics.stackexchange",
"id": 48158,
"tags": "homework-and-exercises, newtonian-mechanics, power"
} |
Dynamic object creation in Python | Question: I want to abstract away differences between the zipfile and rarfile modules. In my code I want to call the ZipFile or RarFile constructor depending on the file format. Currently I do it like this:
def extract(dir, f, archive_type):
    '''
    archive_type should be 'rar' or 'zip'
    '''
    images = []
    if archive_type == 'zip':
        constructor = zipfile.ZipFile
    else:
        constructor = rarfile.RarFile
    with directory(dir):
        with constructor(encode(f)) as archive:
            archive.extractall()
        os.remove(f)
        images = glob_recursive('*')
    return images
Is there any more elegant way for dynamic object calling?
Answer: Classes are first-class objects in Python.
amap = {
    'zip': zipfile.ZipFile,
    'rar': rarfile.RarFile
}
...
with amap[archive_type](encode(f)) as archive:
    ...
or
with amap.get(archive_type, rarfile.RarFile)(encode(f)) as archive:
    ... | {
"domain": "codereview.stackexchange",
"id": 4113,
"tags": "python, constructor"
} |
Determine if Class B uses Class A ( Java Reflection) | Question: Below is a function that determines if Class B uses Class A.
Currently, it tests for:
Fields
Superclass
Constructors
Methods
private boolean uses(Class<?> b, Class<?> a){
    // Test for Declared field
    for(Field f:b.getDeclaredFields()){
        if(f.getGenericType().equals(a))
            return true;
    }
    // Test for Inheritance
    if(b.getSuperclass().getName().equals(a.getName()))
        return true;
    // Test for constructors
    for(Constructor<?> c: b.getDeclaredConstructors()){
        for(Class<?> p : c.getParameterTypes())
            if(p.getName().equals(a.getName()))
                return true;
    }
    // Test for methods
    for(Method m:b.getDeclaredMethods()){
        for(Class<?> p:m.getParameterTypes())
            if(p.getName().equals(a.getName()))
                return true;
    }
    return false;
}
Is there a better way to write this function?
Is the name of the function terrible?
Are there any bugs in the code?
Is my test holistic, that is, did I test for everything?
Answer:
Is my test holistic, that is, did I test for everything?
Currently, this is what you're testing for:
Declared fields on the class with getDeclaredFields();
A superclass with getSuperclass();
The parameters of the declared constructors with getDeclaredConstructors();
The parameters of the declared methods with getDeclaredMethods().
This is what you are not testing for:
Inherited fields: they are not returned by the call to getDeclaredFields(), so fields inherited from superclasses are not considered;
The method return type: you are just checking for the type of the parameters but not the return type with getGenericReturnType();
Annotations on the method (getAnnotations()), on the method parameters (getParameterAnnotations()) or on the method return type (getAnnotatedReturnType());
Annotations on the class itself with getAnnotations();
Exceptions declared to be thrown by the methods with getGenericExceptionTypes(), in the same way, the exceptions declared to be thrown by the declared constructors are also not considered;
All the hierarchy of the class: you are only testing for the potential superclass, but not for the potential implemented interfaces or the super-superclasses;
If the given class is an array type, you are not testing its component type with getComponentType().
For the generic types returned by getDeclaredFields() or getGenericReturnType() for example, you can access the actual type argument by casting the type to ParameterizedType and call getActualTypeArguments(). This will return an array of all the type argument of the parameterized type. For example, if a declared field is a Map<String, Integer>, it will return the array [class java.lang.String, class java.lang.Integer].
Are there any bugs in the code?
Yes, there is a potential bug: getSuperclass can return null:
If this Class represents either the Object class, an interface, a primitive type, or void, then null is returned
so you need to take care about handling that. Your current code is
if(b.getSuperclass().getName().equals(a.getName()))
which could throw a NullPointerException.
Other comments:
Make sure to always use curly braces, even if it is not strictly needed here. It will make the code less error-prone for the future.
This only uses the Reflection API. Determining whether a given class B truly uses a given class A or not would require going into the byte-code to see if there are any references through chained method calls, import statements or local variables. | {
"domain": "codereview.stackexchange",
"id": 19741,
"tags": "java, reflection"
} |
Derivation of $v=u+at$ | Question: I read the derivation of $v=u+at$ using integration. The steps are as follows -
$$\frac{dv}{dt}=a$$
$$dv=adt$$
$$\int dv=\int a dt$$
$$\int dv=a\int dt$$
$$v=at+c$$
My questions are as follows -
1. On integrating $dt$ on the RHS we get a $+c$ (constant of integration), but why is there no $+c$ on the LHS while integrating $dv$?
2. Can we take something out of the integration symbol, as $a$ is taken out in the second last step?
3. Why in the last step is it $at+c$ and not $at+ac$?
Answer: 1) On integrating dt on the RHS we get a +c(constant of integration) but why is there no +c on the LHS while integrating dv?
If we start with:
$$ dv = adt $$
and integrate both sides then we can indeed have a constant of integration on both sides:
$$ v + C_1 = at + C_2 $$
but we can just subtract $C_1$ from both sides to get:
$$ v = at + (C_2 - C_1) = at + C $$
where $C = C_2 - C_1$. So we only need to write a constant of integration on one side.
2) Can we take out something of the integration symbol as a is taken out is the second last step?
I assume you mean "Why can we take etc". The answer is that $a$ is a constant and doesn't depend on $t$ or $x$. If we are integrating a product of two terms, like $a \times t$, then we have to do it using integration by parts. If you look at the equation at the top of the Wikipedia article you'll see that:
$$ \int at\,dt = a\int t\,dt - \int \frac{da}{dt}\left(\int t\,dt\right)dt $$
But since $a$ is a constant $da/dt = 0$ and the second term on the right hand side is zero. That means the equation simplifies to;
$$ \int at\,dt = a\int t\,dt $$
In everyday life we wouldn't bother going through the integration by parts. We just know from experience that you can take constants outside the integral. NB: if the acceleration is not constant, i.e. it's a function of time $a(t)$, then $da/dt \ne 0$, we cannot take it outside the integral, and that tends to make the integration harder to do.
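A quick numerical illustration of this point, a minimal sketch using a midpoint Riemann sum as a stand-in for the definite integral (the numbers are arbitrarily chosen):

```python
# Midpoint Riemann sum as a stand-in for the definite integral.
def integral(f, t0, t1, n=100_000):
    h = (t1 - t0) / n
    return sum(f(t0 + (k + 0.5) * h) for k in range(n)) * h

a = 9.81                                   # a constant acceleration
lhs = integral(lambda t: a * t, 0.0, 2.0)  # integral of a*t dt
rhs = a * integral(lambda t: t, 0.0, 2.0)  # a times integral of t dt
assert abs(lhs - rhs) < 1e-6               # the constant factors out
assert abs(lhs - a * 2.0 ** 2 / 2) < 1e-6  # both equal a*t^2/2 at t = 2
```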
3) Why in the last step is it at+c and not at+ac?
Suppose we had:
$$ v = at + Ct $$
Differentiating should give us back what we started with, but differentiating the above equation gives:
$$ \frac{dv}{dt} = a + C $$
which is wrong. On the other hand differentiating $v = at + C$ does give us back $dv/dt = a$. | {
"domain": "physics.stackexchange",
"id": 22272,
"tags": "homework-and-exercises, integration, kinematics"
} |
How to write the covariance matrix of a quantum gaussian state as a function of photon numbers? | Question: Assume having a one-mode quantum Gaussian state with quadrature observable vector $\hat r = [\hat q , \hat p ] $ and covariance matrix $\sigma$. According to definition [1]:
\begin{equation}
\sigma = \text{tr}\left(\begin{bmatrix} \hat q^2 & \frac{1}{2}\{\hat q, \hat p\}\\
\frac{1}{2}\{\hat p, \hat q\} & \hat p^2 \end{bmatrix} \rho \right)
\end{equation}
My question is: how can we express the covariance matrix as a function of the average photon number $N = \text{tr}(\hat a^\dagger \hat a \rho)$? I have found an answer in [2] section III.B. (gauge-invariant states), which gives the covariance matrix as:
\begin{equation}
\alpha = \begin{bmatrix} \text{Re}N + I/2 & -\text{Im}N \\ \text{Im}N & \text{Re}N + I/2 \end{bmatrix}
\end{equation}
But it is confusing to me as these two cannot be equal to each other as the off-diagonal elements in the second one have opposite signs while the off-diagonal elements of the first one are the same.
EDIT: Would you please also explain the feasibility and meaning of $\text{Im}N$? I thought $N$ is a physical observable, thus it can only take real values, representing the average number of photons.
Any help or reference is highly appreciated. Thanks.
[1]
C. Weedbrook et al., “Gaussian quantum information,” Rev. Mod. Phys., vol. 84, no. 2, pp. 621–669, May 2012, doi: 10.1103/RevModPhys.84.621.
[2] A. Holevo and R. Werner, “Evaluating capacities of bosonic Gaussian channels,” Phys. Rev. A, vol. 63, no. 3, p. 032312, Feb. 2001, doi: 10.1103/PhysRevA.63.032312.
Answer: Recall that $a=(q+ip)/\sqrt{2}$ in some dimensionless units (Weedbrook might change the units because I think they like $\hbar=2$; I'm using $[q,p]=i$ and $[a,a^\dagger]=1$). We can thus find the equality
$$a^\dagger a=(q-ip)(q+ip)/2=[q^2+p^2+i(qp-pq)]/2=\frac{q^2+p^2-1}{2}.$$ This relates the total number of photons to the trace of your matrix $\sigma$.
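One can sanity-check this operator identity numerically with truncated Fock-space matrices (a sketch; the truncation level $n=8$ is an arbitrary choice, and truncation corrupts the identity only in the last row and column):

```python
import numpy as np

n = 8                                       # arbitrary Fock-space truncation
a = np.diag(np.sqrt(np.arange(1.0, n)), 1)  # truncated annihilation operator
ad = a.conj().T
q = (a + ad) / np.sqrt(2)                   # units with [q, p] = i
p = -1j * (a - ad) / np.sqrt(2)

lhs = ad @ a                                # the number operator
rhs = (q @ q + p @ p - np.eye(n)) / 2
# truncation spoils [a, a^dagger] = 1 only in the last diagonal entry,
# so compare the identity on the leading (n-1) x (n-1) block
assert np.allclose(lhs[:n - 1, :n - 1], rhs[:n - 1, :n - 1])
```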
Your definition of $N$, which is standard, must necessarily be real. Still, it cannot fully characterize the state, because there are multiple Gaussian states with the same average photon number $N$. These possibilities are arranged in the relative sizes of the diagonal elements of $\sigma$ and the magnitude and phase of its two off-diagonal elements (the two off-diagonal elements are equal, because $\{A,B\}=\{B,A\}$). I presume the latter source is using some other definition to encode the other parameters in the "phase" of some other variable $N$. Now that I check, it indeed does; it assumes $a$ and $a^\dagger$ are vectors of operators; when they are scalar, their definition of $N$ should coincide with yours. When we use vectors, it is indeed possible that $N_{ij}=\langle a_i^\dagger a_j\rangle$ be complex for $i\neq j$.
The two resources use different definitions for their covariance matrices but [2] doesn't seem to realize it, so I would rely on [1] or trace through the mistakes in [2]. Both define the same vector $$x^{[1]}=R^{[2]}=(q_1,p_1,\cdots,q_n,p_n)^\top,$$ from which they both have
$$\sigma_{ij}^{[1]}=\alpha_{ij}^{[2]}=\frac{1}{2}\langle \{R_i-\langle R_i\rangle,R_j-\langle R_j\rangle\}\rangle,$$ so the two definitions should be the same. However, [2] quotes their Ref. [13] to define $\alpha$, and in that reference they arrange their parameters in a different order: $$R^{[13]}=(q_1,\cdots,q_n,p_1,\cdots,p_n)^\top.$$ This means that for $n>1$ the definitions will not match up.
Okay, so what about the covariance matrix not being symmetric? In the case of $n=1$ we don't have to worry, because $\mathrm{Im}N=0$ and the two formulas match up because Ref. [2] is specifically looking at states with $\langle qp+pq\rangle=\langle a^2 - a^{\dagger 2}\rangle/2i=0$. In the case of $n>1$, all of the elements of $V$ are real, because they all correspond to expectation values of Hermitian operators, and $V$ is explicitly symmetric. Why isn't $\alpha$ symmetric? If we go back to Ref. [2]'s Ref. [13], they indeed define this asymmetric $\alpha$ with some $-\mathrm{Im}N$, but then they go on to treat $\alpha$ as being symmetric, saying "For arbitrary real symmetric matrix $\alpha$" and giving the $n=1$ example with $$\alpha=\begin{pmatrix}\alpha^{qq}&\alpha^{qp}\\\alpha^{qp}&\alpha^{pp}\end{pmatrix};$$ notably, they do not say $\alpha^{qp}=-\alpha^{pq}$, so I'm even less inclined to trust the details of [2]. | {
"domain": "quantumcomputing.stackexchange",
"id": 3193,
"tags": "quantum-state, continuous-variable, quantum-optics, gaussian-states"
} |
Bash script that helps in opening documents when using terminal | Question: I just finished creating my first bash script, which helps me to launch documents from the terminal using a PDF viewer. This is very helpful and fast (at least I think so) for people who don't use a file manager (and have lots of documents, of course); they therefore launch their PDF viewer and documents from the terminal.
I wish to upload the script to GitHub, so any feedback, review, or tips on how it is or how to improve it before the upload are really appreciated.
Here is the script:
rdoc
#!/bin/env bash
declare -r CONF_DIR_PATH=~/.config/rdoc
declare -r CONF_DOC_DIR=$CONF_DIR_PATH/doc_dir
declare -r CONF_PDF_VIEWER=$CONF_DIR_PATH/pdf_viewer
declare -r TMP_FILE=/tmp/.rdoc_tmp
fn_generate_configs() {
    local doc_dir_path
    local pdf_viewer_name
    mkdir -p $CONF_DIR_PATH
    echo -n "Please enter your documents directorie's full path: "
    read doc_dir_path
    echo $doc_dir_path > $CONF_DOC_DIR
    echo -ne "\nPlease enter your pdf's viewer name: "
    read pdf_viewer_name
    echo $pdf_viewer_name > $CONF_PDF_VIEWER
    echo
    echo Your configurations were generated succesfully.
}
fn_read_configs() {
    doc_dir=$(cat $CONF_DOC_DIR 2> /dev/null)
    if [ ! $? -eq 0 ]; then
        echo Error: one or all of your configuration files are missing.
        echo Try -h for help.
        exit -1
    fi
    pdf_viewer=$(cat $CONF_PDF_VIEWER 2> /dev/null)
    if [ ! $? -eq 0 ]; then
        echo Error: one or all of your configuration files are missing.
        echo Try -h for help.
        exit -1
    fi
}
fn_search_for_book() {
    local path
    local grep_opt=""
    local string_to_exclude=$1/
    if [ $i_status -eq 1 ]; then
        grep_opt=-i
    fi
    if [ $r_status -eq 1 ]; then #Search recursively
        for path in $1/*; do
            if [ -d $path ]; then
                fn_search_for_book $path
            elif [ -f $path ]; then
                if echo $path | grep -q $grep_opt $book_name; then
                    echo $path | sed "s|$string_to_exclude||" >> $TMP_FILE
                fi
            fi
        done
    else
        for path in $1/*; do
            if [ -f $path ]; then
                if echo $path | grep -q $grep_opt $book_name; then
                    echo $path | sed "s|$string_to_exclude||" >> $TMP_FILE
                fi
            fi
        done
    fi
}
fn_display_books() {
    local doc
    local founded_docs
    #Make sure a book was founded and TMP_FILE was generated
    founded_docs=$(cat $TMP_FILE 2> /dev/null)
    if [ ! $? -eq 0 ]; then
        echo Error: no document was found with \'$book_name\' in it.
        exit -1
    fi
    echo -e "These are the documents that were found:\n"
    #Set output's color to red
    tput setaf 1
    for doc in $founded_docs; do
        echo $doc
    done
    #Reset output's color
    tput sgr0
}
fn_count_books() {
    local doc
    local cnt=0
    local founded_docs
    founded_docs=$(cat $TMP_FILE 2> /dev/null)
    if [ ! $? -eq 0 ]; then
        echo Error: \'$TMP_FILE\' manipulation while the program is running are disallowed.
        exit -1
    fi
    for doc in $founded_docs; do
        (( cnt++ ))
    done
    return $cnt
}
fn_final_book_name() {
    echo -ne "\nWhich one of them would you like to open: "
    read book_name
}
fn_generate_books_paths() {
    local path
    if [ $r_status -eq 1 ]; then
        for path in $1/*; do
            if [ -d $path ]; then
                fn_generate_books_paths $path
            elif [ -f $path ]; then
                echo $path >> $TMP_FILE
            fi
        done
    else
        for path in $1/*; do
            if [ -f $path ]; then
                echo $path >> $TMP_FILE
            fi
        done
    fi
}
fn_get_book_path() {
    local founded_paths
    local path
    local grep_opt=""
    founded_paths=$(cat $TMP_FILE 2> /dev/null)
    if [ ! $? -eq 0 ]; then
        echo Error: \'$TMP_FILE\' manipulation while the program is running are disallowed.
        exit -1
    fi
    if [ $i_status -eq 1 ]; then
        grep_opt=-i
    fi
    for path in $founded_paths; do
        if ! echo $path | grep -q $grep_opt $book_name; then
            continue
        fi
        book_path=${path}
        break
    done
}
fn_open_book() {
    $pdf_viewer $book_path 2> /dev/null
    if [ ! $? -eq 0 ]; then
        echo
        echo Error: \'$book_path\' can\'t be opened.
        exit -1
    fi
    echo -e "\nOpening: $book_path"
}
fn_help_message() {
    echo Usage: rdoc \<options\> [argument]
    echo
    echo Available options:
    echo " -h Display this help message. "
    echo " -g Generate new configuration files. "
    echo " -r Allow recursive searching for the document. "
    echo " -i Ignore case distinctions while searching for the document. "
    echo " -s Search for the document and display results. "
    echo " This option takes a document name or a part of it as an argument. "
    echo " -o Search for the document, display results then open it using your pdf viewer. "
    echo " This option takes a document name or a part of it as an argument. "
    echo " (Default) "
    echo "NOTE: "
    echo " When using '-s' or '-o' option in a combination of other options like this: "
    echo " "
    echo " $ rdoc -ris document_name "
    echo " "
    echo " Please make sure that it's the last option; to avoid unexpected behaviour. "
}
doc_dir=""
pdf_viewer=""
book_path=""
book_name=${BASH_ARGV[0]} #book_name equals to the last arg by defualt so the default option ('-o') will work.
#Options status
r_status=0
i_status=0
s_status=0
o_status=1 #Make -o the default option
#Display help message if no options were passed
if [ $# -eq 0 ]; then
    fn_help_message
    exit 0
fi
while getopts ":hgris:o:" opt; do
    case $opt in
        h)
            fn_help_message
            exit 0
            ;;
        g)
            fn_generate_configs
            o_status=0
            ;;
        r)
            r_status=1
            ;;
        i)
            i_status=1
            ;;
        s)
            book_name=$OPTARG
            s_status=1
            o_status=0
            ;;
        o)
            book_name=$OPTARG
            ;;
        :)
            echo Error: an argument is required for \'-$OPTARG\' option.
            echo Try -h for help.
            exit -1
            ;;
        *)
            echo Error: unknown option \'-$OPTARG\'.
            echo Try -h for help.
            exit -1
            ;;
    esac
done
if [ $s_status -eq 1 ] || [ $o_status -eq 1 ]; then
    #Make sure there isn't $TMP_FILE already generated from previous runs.
    rm $TMP_FILE 2> /dev/null
    fn_read_configs
fi
if [ $s_status -eq 1 ]; then
    fn_search_for_book $doc_dir
    fn_display_books
elif [ $o_status -eq 1 ]; then
    fn_search_for_book $doc_dir
    fn_display_books
    fn_count_books
    if [ $? -gt 1 ]; then #If more than 1 book were found with $book_name in it
        fn_final_book_name
        #Clean any leftovers of $TMP_FILE to search properly
        rm $TMP_FILE 2> /dev/null
        #Make sure that the user chose an available document
        fn_search_for_book $doc_dir
        if [ ! -f $TMP_FILE ]; then
            echo
            echo Error: no document was found with \'$book_name\' in it.
            exit -1
        fi
        #Make sure that the user is specific enough about the book name
        fn_count_books
        if [ $? -gt 1 ]; then
            echo
            echo Error: More than 1 book was found with the name \'$book_name\' in it.
            exit -1
        fi
    fi
    echo -n "" > $TMP_FILE #Make sure $TMP_FILE is empty so it'll be usable in fn_generate_books_paths
    fn_generate_books_paths $doc_dir
    fn_get_book_path
    fn_open_book
fi
exit 0
Thanks for your time guys.
Answer:
#!/bin/env bash
Prefer #!/usr/bin/env bash
CONF_DOC_DIR and CONF_PDF_VIEWER can be removed in favour of a single $CONF_DIR_PATH/config with contents:
CONF_DOC_DIR=...
CONF_PDF_VIEWER=...
Reading the config would become: source $CONF_DIR_PATH/config
TMP_FILE should be generated with mktemp so that we are guaranteed that the file does not already exist.
mkdir -p $CONF_DIR_PATH
If value of CONF_DIR_PATH has one or more spaces, then multiple directories would be created. Quote the variable to prevent splitting: mkdir -p "$CONF_DIR_PATH"
echo -n "Please enter your documents directorie's full path: "
Typo: directories
fn_ prefix for function names is not necessary
for path in $1/*; do
Prefer find -type f -exec ...
echo Error: no document was found with '$book_name' in it.
Quote escaping can be avoided with echo "Error: no document was found with '$book_name' in it. " | {
"domain": "codereview.stackexchange",
"id": 41302,
"tags": "bash, linux"
} |
Nuclear accident near Uranium mine | Question: What if someone, without knowing, detonates a very small nuclear bomb underground within a rich mine (rich in uranium, but not the enriched type) that has ~500 tons scattered through 1 km³? Would the neutron density be enough to start a nuclear explosion just before everything evaporates and is pushed far away? (I think not, but someone less lazy could prove it.)
Answer: Making a bomb is not a simple matter: a lot of isolating of the correct isotope with centrifuges is necessary, and the result is called weapons-grade uranium. Natural deposits do not contain weapons-grade uranium.
Interestingly enough there are signs that fission of the type happening in fission reactors has happened once on earth naturally:
Oklo is the only known location for this in the world and consists of 16 sites at which self-sustaining nuclear fission reactions took place approximately 1.7 billion years ago, and ran for a few hundred thousand years, averaging 100 kW of thermal power during that time.
So, at worst, a bomb close by might start such a reaction:
In 1953 George W. Wetherill of the University of California at Los Angeles and Mark G. Inghram of the University of Chicago pointed out that some uranium deposits might have once operated as natural versions of the nuclear fission reactors that were then becoming popular. Shortly thereafter, Paul K. Kuroda, a chemist from the University of Arkansas, calculated what it would take for a uranium ore body spontaneously to undergo self-sustained fission. In this process, a stray neutron causes a uranium 235 nucleus to split, which gives off more neutrons, causing others of these atoms to break apart in a nuclear chain reaction.
A very slow process, as the Oklo mines prove. | {
"domain": "physics.stackexchange",
"id": 28010,
"tags": "nuclear-physics"
} |
NCAA Pool - Generate random teams for a list of players | Question: Anyways, I am running a pool where 64 entrants will receive a random team from a field of 64 teams. I'd like for it to be randomized to a decent extent, and although I understand the limitations of RNGs, I think this should be good enough. See my code below - are there any issues with this? Could this be done in an easier way? Or alternatively, is this better asked in Cross Validated?
import random
regions = ["East", "West", "Midwest", "South"]
playerlist = []
teamlist = []
# this will be an actual list of player names when completed
for i in range(0, 64):
playerlist.append("Player " + str(i))
for i in range(0, 7):
random.shuffle(playerlist)
for i in range(0, 4):
for j in range(1, 17):
teamlist.append(playerlist[i*16+j-1] + ": " + regions[i] + " " + str(j))
# edited: this is unnecessary
#for i in range(0, 7):
# random.shuffle(teamlist)
print(teamlist)
Answer: Randomly shuffling a list seven times is not any better than doing it once. So random.shuffle(playerlist) is enough.
In Python you almost never want to iterate over the indices of a list, rather iterate over the list itself. Here you could do:
import random
import itertools
regions = "East", "West", "Midwest", "South"
players = ["Player {}".format(i) for i in range(64)]
teams = range(1, len(players) // len(regions) + 1)  # integer division, so this also runs under Python 3
random.shuffle(players)
teamlist = ["{}: {} {}".format(player, *team) for player, team in
zip(players, itertools.product(regions, teams))]
print(teamlist)
Here I used str.format to make formatting the different strings easier and itertools.product to construct a list (actually an iterable) of region, team:
>>> list(itertools.product(regions, teams))
[('East', 1), ('East', 2), ('East', 3), ('East', 4), ('East', 5), ('East', 6), ('East', 7), ('East', 8), ('East', 9), ('East', 10), ('East', 11), ('East', 12), ('East', 13), ('East', 14), ('East', 15), ('East', 16), ('West', 1), ('West', 2), ('West', 3), ('West', 4), ('West', 5), ('West', 6), ('West', 7), ('West', 8), ('West', 9), ('West', 10), ('West', 11), ('West', 12), ('West', 13), ('West', 14), ('West', 15), ('West', 16), ('Midwest', 1), ('Midwest', 2), ('Midwest', 3), ('Midwest', 4), ('Midwest', 5), ('Midwest', 6), ('Midwest', 7), ('Midwest', 8), ('Midwest', 9), ('Midwest', 10), ('Midwest', 11), ('Midwest', 12), ('Midwest', 13), ('Midwest', 14), ('Midwest', 15), ('Midwest', 16), ('South', 1), ('South', 2), ('South', 3), ('South', 4), ('South', 5), ('South', 6), ('South', 7), ('South', 8), ('South', 9), ('South', 10), ('South', 11), ('South', 12), ('South', 13), ('South', 14), ('South', 15), ('South', 16)]
Note that 0 is the default start argument for a range.
If your script becomes longer, you should probably add a if __name__ == "__main__": guard to allow importing your code from other scripts. | {
"domain": "codereview.stackexchange",
"id": 24724,
"tags": "python, random"
} |
What do you see when your eyes are closed? | Question: If you are in pitch black and you close your eyes, you sometimes can see strange shapes of various colors. A lot of the time these shapes and colors change as you observe them. This phenomenon still occurs if you are not in complete darkness.
I am wondering what this is that you are `seeing'. Is this phenomenon classified as seeing? How can we be looking at something if our eyes are closed?
If you are not seeing these shapes and colors, then what you observe is pure darkness. What is this pure darkness that you are seeing? Since you are seeing it, is something stimulating the vision parts of your brain? Is it the back of your eyelid?
Answer: This is called a phosphene — the experience of perceiving light in the visual cortex without light actually entering the eye. This commonly happens due to stimulation of the retinal ganglion cells by something else. The most frequent source in normal individuals is pressure to the retina (e.g. rubbing a closed eye.) It is also possible for phosphenes to occur due to neuronal stimulation at the cortical level. Most people have had the experience of “seeing stars” when you stand up too quickly. This happens because orthostatic hypotension results in transiently decreased cerebral perfusion; it is the metabolic shift in oxygenation and/or glucose delivery to cortical neurons that causes the perception of light. Something similar occurs with brief rises in intracranial pressure that occurs with sneezing or coughing.* Magnetic stimuli can also induce this phenomenon.
Phosphenes can also be pathologic (i.e. disease-associated). Migraine headache sufferers may experience a visual “aura” prior to a headache, which is a type of phosphene, likely mediated at the cortical level. There is some evidence to suggest that preventing the phosphene may also reduce headache frequency. Other conditions associated with phosphenes include abnormalities of the optic nerve** and retina — just as stimulation of a peripheral somatic sensory nerve might cause a pain signal to arrive in the brain in the absence of true tactile stimulation, disorders of the optic nerve can generate disorganized signals that the brain perceives as light.
*Increased intracranial pressure leads to decreased perfusion due to the differential between peripheral and cerebral pressures.
**If you read nothing else linked to this question, take a look at that paper (free in PMC). It describes optic neuropathy associated with auditory-evoked phosphenes. The patients perceived sound as light! | {
"domain": "biology.stackexchange",
"id": 2820,
"tags": "human-biology, neuroscience, vision, human-eye, visualization"
} |
Why won't a Turing machine halt? | Question: I am reading Sipser. The book introduces the halting problem and proves that it is a Turing-recognisable language but not a Turing-decidable language, thus giving a Turing machine which does not halt on some inputs. To be precise, the language is
$L=\{\langle M,w \rangle \mid M \text{ is a Turing machine and } w \text{ is a string } M \text{ accepts} \}$. Let $D$ be the Turing machine that recognises $L$. The inability of $D$ to halt on some inputs is due to the fact that there exist Turing machines $M$ which do not halt on some inputs. Thus the reason for not halting is kind of recursive (if I consider only this example).
I am still not able to understand, at its crux, why a Turing machine won't halt, and I can't come up with a language over $\{0,1\}$ which is Turing-recognisable but not decidable. What are all the reasons a Turing machine won't halt?
Answer: A TM is just a program. It does whatever you program it to do. If, for instance, you program it to perform the following:
while (true)
{
do_nothing
}
, then it will never halt!
The language $L$ goes over all possible machines $M$, and therefore it must encounter some machines that don't halt, for instance, the one stated above.
There is a deep difference between the reason the above machine doesn't halt (it is just programmed to do so), and the reason that any machine for $L$ will not halt: no halting machine (decider) for $L$ exists, since that language is undecidable.
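A toy illustration of that second point (my own sketch, not part of the original answer): hand any claimed halting decider a program built to do the opposite of its own prediction.

```python
def make_paradox(halts):
    # `halts(prog, inp)` is a claimed total decider for the halting problem.
    def paradox():
        if halts(paradox, paradox):
            while True:      # predicted to halt -> loop forever
                pass
        # predicted to loop forever -> halt immediately
    return paradox

# Whatever concrete guesser you plug in is wrong about its own paradox program.
p = make_paradox(lambda prog, inp: False)  # a guesser that always answers "loops"
p()  # ...halts immediately, contradicting the prediction
```

So no `halts` can be correct on every input, which is exactly why $L$ is undecidable.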
Regarding your question about finding a more "natural" language that is undecidable, see Is there a “natural” undecidable language?. You may get some more intuition about undecidable languages here. | {
"domain": "cs.stackexchange",
"id": 5384,
"tags": "turing-machines, halting-problem"
} |
Why do the Feynman-Heaviside formulae for $\vec E, \vec B$ fields differ from the Lienard-Wiechert fields? | Question: Why do the expressions for the Feynman-Heaviside fields look completely different from the Lienard-Wiechert fields, even though both are fields due to a point charge moving along a specified trajectory? I don't understand this. For a reference, refer to eq. (23.31), (23.32), (23.49) and (23.50) of Andrew Zangwill's Modern Electrodynamics.
I give them below for ready reference:
Answer: Firstly: they are equivalent. Feynman claims in the Feynman Lectures that you can easily check this by doing the differentiation in his formula and comparing with the fields derived from the L-W potentials. It's taken me a lifetime of trying and giving up because it's so difficult before I worked out how to do it reasonably efficiently. I ought probably to write this up and publish it!
So secondly, I think the real reason is that Feynman was very proud of his formula, which does indeed give real insight into what's going on and how it relates to the static charge case, but wanted to cover his tracks so everybody would think 'Wow how ever did he find that?' He references the formula very early in Chapter 1 of Volume 1, and again in its more natural position in the electrodynamics volume, II, and then purports to derive it without actually doing so. There has to be an explanation, and I think this is the most natural one. | {
"domain": "physics.stackexchange",
"id": 90167,
"tags": "electromagnetism"
} |
Optimization using Quantum Logics | Question: Is it possible to solve the following kind of optimization using Quantum Computing?
Minimize
5*x1 - 7*x2
binary
x1
x2
If yes, is it possible to have a sample code using QISKit?
Answer: Qiskit has an optimization module and you can find tutorials that illustrate its functionality here.
To solve the example you posted, e.g., with the Quantum Approximate Optimization Algorithm (QAOA), you can do the following:
from qiskit import Aer
from qiskit.optimization import QuadraticProgram
from qiskit.aqua.algorithms import QAOA
from qiskit.optimization.algorithms import MinimumEigenOptimizer
# construct optimization problem
qp = QuadraticProgram()
qp.binary_var('x1')
qp.binary_var('x2')
qp.minimize(linear=[5, -7])
# initialize optimizer
qaoa_mes = QAOA(quantum_instance=Aer.get_backend('statevector_simulator'))
qaoa = MinimumEigenOptimizer(qaoa_mes)
# solve problem
result = qaoa.solve(qp)
print(result)
which prints:
optimal function value: -7.0
optimal value: [0. 1.]
status: SUCCESS
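Since the problem has only two binary variables, the search space has just four points, so the QAOA result can be sanity-checked classically with plain Python (no Qiskit needed):

```python
from itertools import product

# Enumerate all assignments of (x1, x2) and minimise 5*x1 - 7*x2.
best = min(product([0, 1], repeat=2), key=lambda x: 5 * x[0] - 7 * x[1])
print(best, 5 * best[0] - 7 * best[1])  # (0, 1) -7, matching the QAOA result
```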
Qiskit's optimization module also provides other quantum optimization algorithms for quadratic programs and you can find a more detailed description here. | {
"domain": "quantumcomputing.stackexchange",
"id": 1867,
"tags": "qiskit, programming, resource-request, optimization"
} |
unable to use custom messages on arduino | Question:
Hi Guys,
I'm trying to write a ROS publisher on an Arduino Uno using a custom message type and am running into compile errors for the Arduino code. I followed the instructions here: link:Rosserial_arduino custom messages and I'm able to include the generated header file. I then get the following error when I try to instantiate a variable of the message type:
I2C_Send_and_Receive.cpp:21:25: error: cannot declare variable ‘encoders’ to be of abstract type ‘GCRobotics::Encoder_msg’
In file included from I2C_Send_and_Receive.cpp:7:0:
/home/josh/sketchbook/libraries/ros_lib/GCRobotics/Encoder_msg.h:12:9: note: because the following virtual functions are pure within ‘GCRobotics::Encoder_msg’:
In file included from /home/josh/sketchbook/libraries/ros_lib/std_msgs/Time.h:7:0,
from /home/josh/sketchbook/libraries/ros_lib/ros/node_handle.h:38,
from /home/josh/sketchbook/libraries/ros_lib/ros.h:38,
from I2C_Send_and_Receive.cpp:5:
/home/josh/sketchbook/libraries/ros_lib/ros/msg.h:44:19: note: virtual int ros::Msg::serialize(unsigned char*) const
/home/josh/sketchbook/libraries/ros_lib/ros/msg.h:47:28: note: virtual const char* ros::Msg::getMD5()
My code for creating it looks like this:
#include <ros.h>
#include <Wire.h>
#include <GCRobotics/Encoder_msg.h>
#include <GCRobotics/i2cData.h>
#include <std_msgs/String.h>
#include <TimerOne.h>
ros::NodeHandle n;
//std_msgs::String encoders;
GCRobotics::Encoder_msg encoders;
ros::Publisher pub("EncoderData", &encoders);
I'm using Groovy and Arduino 1.0.1. Any ideas on what I'm doing wrong or missing?
Thanks!
Originally posted by hd271 on ROS Answers with karma: 38 on 2013-03-07
Post score: 0
Answer:
Under groovy (and even under Fuerte to a lesser extent) the older rosserial package is pretty busted (as it depended on internal roslib functions that have changed/disappeared). I would suggest trying out the new catkin-based version hosted here: https://github.com/ros-drivers/rosserial -- it will automatically build all messages for all installed packages found by rospack when running the make_libraries script. Please do read the README as the workflow has changed (but is probably simpler) from the past.
Originally posted by fergs with karma: 13902 on 2013-03-07
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by hd271 on 2013-03-08:
Awesome, that worked for one of my machines, but the other machine i'm running gives me this error when running make_libraries.py: rospkg.common.ResourceNotFound: kobuki_qtestsuite. But as long as one of them generates them, I guess I can just copy to the other. Thanks for your help!
Comment by fergs on 2013-03-08:
I've patched rosserial temporarily to blacklist kobuki_testsuite. There is a ticket for it as well for a future, proper, patch (https://github.com/ros-drivers/rosserial/issues/30) | {
"domain": "robotics.stackexchange",
"id": 13251,
"tags": "arduino, rosserial, ros-groovy"
} |
Counting length-2 substrings that are common to two strings at the same offset -- Python | Question: I solved the CodingBat task:
Given 2 strings, a and b, return the number of the positions where they contain the same length 2 substring. So "xxcaazz" and "xxbaaz"
yields 3, since the "xx", "aa", and "az" substrings appear in the same
place in both strings.
stringMatch("xxcaazz", "xxbaaz") → 3
stringMatch("abc", "abc") → 2
stringMatch("abc", "axc") → 0
import doctest
def all_two_chars_occurencies(string):
"""
>>> list(all_two_chars_occurencies('abcd'))
['ab', 'bc', 'cd']
>>> list(all_two_chars_occurencies('xxcaazz'))
['xx', 'xc', 'ca', 'aa', 'az', 'zz']
"""
for index, char in enumerate(string[:-1]):
yield char + string[index + 1]
def common_two_chars_occurences(a, b):
"""
Given 2 strings, a and b, return the number of the positions where
they contain the same length 2 substring.
>>> common_two_chars_occurences('xxcaazz', 'xxbaaz')
3
>>> common_two_chars_occurences('abc', 'abc')
2
>>> common_two_chars_occurences('abc', 'axc')
0
"""
equal_duets = 0
for a_duet, b_duet in zip(all_two_chars_occurencies(a),
all_two_chars_occurencies(b)):
if a_duet == b_duet:
equal_duets += 1
return equal_duets
if __name__ == "__main__":
doctest.testmod()
Answer: There is no such word as "occurencies". In any case, all_two_chars_occurencies is a very long name. I suggest pairwise_chars. Note that it could also be written in a style similar to the pairwise() recipe in Python's documentation.
In Python, explicit looping is slightly cumbersome. Fortunately, Python offers many ways to just do what you want as a "one-liner", without looping. Here's one approach, which uses a generator expression, and this technique to find its length.
def common_two_chars_occurences(a, b):
"""
Given 2 strings, a and b, return the number of the positions where
they contain the same length 2 substring.
>>> common_two_chars_occurences('xxcaazz', 'xxbaaz')
3
>>> common_two_chars_occurences('abc', 'abc')
2
>>> common_two_chars_occurences('abc', 'axc')
0
"""
return sum(1 for pair
in zip(pairwise_chars(a), pairwise_chars(b))
if pair[0] == pair[1]) | {
"domain": "codereview.stackexchange",
"id": 12788,
"tags": "python, beginner, strings, programming-challenge, unit-testing"
} |
Calculating silence in a WAV file | Question: I made this script to calculate the amount of silence in seconds in an audio file. Is this a good method to do this, and if yes, can it be improved? It takes about 4 seconds for 20 audio files of 100-200 seconds of audio each, which seems pretty long to me.
The audio file is split in windows of win_size seconds. Every window that is below sil_threshold is considered 'silent'. By multiplying the number of silent windows by the window size, the number of silent seconds is obtained.
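To make the windowing idea concrete, here is a tiny self-contained toy (my own synthetic signal, independent of the function below): one second of tone followed by one second of silence should come out as 1.0 silent seconds.

```python
import numpy as np

rate, win_size = 8000, 0.25
t = np.arange(0, 2.0, 1 / rate)                             # 2 s of samples
data = np.where(t < 1.0, np.sin(2 * np.pi * 440 * t), 0.0)  # 1 s tone, 1 s silence
win = int(rate * win_size)
amps = [np.abs(data[i:i + win]).mean() for i in range(0, len(data), win)]
threshold = 0.03 * max(amps)
silent_seconds = sum(a <= threshold for a in amps) * win_size
print(silent_seconds)  # 1.0
```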
import numpy as np
from scipy.io import wavfile
def calc_silence(audio, sil_threshold=0.03, win_size=0.25, ret="sec"):
"""Calculates total silence length in this audio file.
Keywords:
audio: location of the audiofile OR a (samplerate, audiodata) tuple
sil_threshold: percentage of the maximum window-averaged amplitude
below which audio is considered silent
win_size: length of window in sec (audio is cut into windows)
ret: the return type; can be one of the following:
- "sec": return the number of silent seconds
- "frac": return the frac of silence (between 0 and 1)
- "amp": return a list of amplitude values (one for each window)
- "issil": return an np-array of 0s and 1s (1=silent) for each window
- "chunk": return a list of (start, end, is_silent) tuples,
with start and end in seconds and is_silent is a boolean
"""
if isinstance(audio, tuple):
samplerate, data = audio
else:
# read the sample rate and data from the wave file
samplerate, data = wavfile.read(audio)
win_frames = int(samplerate * win_size) # number of samples in a window
win_amps = [] # windows in which to measure amplitude
for win_start in np.arange(0, len(data), win_frames):
# Find the end of the window
win_end = min(win_start + win_frames, len(data))
# Add the mean amplitude for this frame to the list of window amplitudes
win_amps.append(np.nanmean(np.abs(data[win_start:win_end])))
# Calculate the minimum threshold for a window to be silent
threshold = sil_threshold * max(win_amps)
win_amps = np.asarray(win_amps)  # as an array, so the comparisons below broadcast
# Find the windows that are silent
sils, = np.where(win_amps <= threshold)
# The silence length is the number of silent windows times the window length
sil = len(sils) * win_size
if ret == "sec":
return sil
elif ret == "frac":
return len(sils) / len(win_amps)
elif ret == "amp":
return win_amps
elif ret == "issil":
return (win_amps <= threshold).astype(int)
elif ret == "chunk":
chunks = []
t0, t1 = 0, 0
is_sil = win_amps[0] <= threshold
for wi, amp in enumerate(win_amps):
winsil = amp <= threshold
if winsil == is_sil:
t1 = wi + 1  # window index; converted to seconds when appending
else:
chunks.append((t0*win_size, t1*win_size, is_sil))
t0 = wi + 1
is_sil = not is_sil
chunks.append((t0*win_size, len(win_amps)*win_size, is_sil))
return chunks
else:
raise ValueError("Unknown return format: {}".format(ret))
Answer: Your implementation is simple, which isn't bad, but it suffers from an accuracy issue. What if there is a single above-threshold sample at the middle of your window, preventing it from being detected as silent, and a similar subsequent window with one above-threshold sample in the middle? That constitutes one window's worth of silent samples that have not been detected. Instead, you should adopt a moving aggregate - absolute max, or maybe absolute median. Moving max requires that you maintain a queue and a sorted list. When you get a new sample, do a sorted insert of the absolute value of the sample to the list, a push to the queue, a pop to the queue, and drop the popped value from the list. The item at the end of the list will always be the max, and the item in the middle of the list will always be the median. As soon as the threshold is detected, silence has started and should not be considered to end until the threshold is exceeded. | {
"domain": "codereview.stackexchange",
"id": 36226,
"tags": "python, python-3.x, signal-processing"
} |
Energy conservation in Kapitza-Dirac diffraction? | Question: In Kapitza-Dirac diffraction, a standing wave of light (wavevector of single wave $k$) is pulsed on for a very short period of time ($\sim \mu s$) onto a bunch of cold atoms. This results in the atoms receiving momentum kicks of $2n\hbar k$.
How does energy conservation work here? Where does the energy come from?
Answer: Kapitza-Dirac diffraction can be described in two different pictures:
Photon picture
Describe the pulsed lattice as two strong, counter-propagating lasers with photon momentum $\hbar k_{\text{lat}}$ and $-\hbar k_{\text{lat}}$, and energy $\hbar\omega$. The lattice is off-resonant so there are no one-photon transitions, but two-photon (= Bragg) transitions are possible. As pointed out by you, the atoms acquire $2n\hbar k_{\text{lat}}$ and corresponding kinetic energies $(2n\hbar k_{\text{lat}})^2/(2m)$. But since all photons have the same energy $\hbar\omega$, the final energy states are energetically off-resonant. Nevertheless, the photons will drive an off-resonant Rabi oscillation to higher momentum states. You need a quite high intensity to see the oscillations.
Lattice picture
Starting from a stationary atom at momentum $p=0$ we quench on an optical lattice for a certain duration. The initial state of the atom (for example a gaussian) is therefore projected onto the lattice eigenstates (Bloch states), leading to a population of a number of Bloch states, including states of the second, third, and higher Bloch bands. The subsequent unitary evolution again gives a Rabi oscillation at quasimomentum $q=0$ between the different Bloch states until the ToF image is taken and Bloch states are mapped again onto real momentum. These now have contributions from $2n\hbar k_{\text{lat}}$ momenta.
In real space, the initial state (e.g. a gaussian or even a plane wave with momentum zero) is rather smooth, corresponding to a low curvature of the wave function and low kinetic energy $-\frac{(\hbar\nabla)^2}{2m}$. Contrary, the Bloch states have many 'wiggles' in real-space which reflect the periodicity of the lattice. The lattice basically imprints its wiggles on the initial smooth wave function, thus increasing its energy. One could say, the energy 'comes from the wiggles of the lattice'. | {
"domain": "physics.stackexchange",
"id": 70996,
"tags": "diffraction, cold-atoms, optical-lattices"
} |
Electric potential of dipole | Question: We have a proton and an electron, and the distance between them is $d$. If we choose the $z$ axis along this dipole, the net electric potential created by the dipole at point A is the sum of the potentials created by the electron and the proton. Let the distance between point A and the electron be $r_1$, and the distance between point A and the proton be $r_2$. Then we choose the middle point of the dipole; the distance between this point and A is $r$, and the angle between the $z$ axis and $r$ is called $\alpha$. At this step, we can approximate $r_2$ minus $r_1$. I could not understand this approximation. How is the expression below obtained?
$$ r_2-r_1 \approx d\cos \alpha$$
If my question is not clear please inform me.
Answer: I tried to depict the general idea behind the approximation. I hope it makes sense. In essence, it relies on the segment from $P$ to $+q$ and from $P$ to the perpendicular line intersecting $r_2$ being about equal, so the difference is the labeled leg of the triangle shown in the upper corner of the diagram, where the angle $\alpha$ is adjacent to the side $r_2-r_1$, and has a hypotenuse of $d$, so using some simple trig, that side is also $d \cdot \cos{\alpha}$. | {
"domain": "physics.stackexchange",
"id": 29510,
"tags": "homework-and-exercises, electrostatics, dipole"
} |
Problems compiling nodes using pcl in Ubuntu 11.10 | Question:
There seems to be an issue with compiling ROS nodes that use pcl on Ubuntu 11.10. When I try to compile my node on 10.04 I don't have any problems. Both installations were installed from the debian packages ros-electric-desktop-full and ros-electric-perception-pcl-addons.
Ubuntu 10.04, 32-bit, ROS Electric: Works
Ubuntu 11.10, 32-bit, ROS Electric: Doesn't compile
My node is pretty simple, mainly just visualizing a static point cloud that I create.
In the second case, I get the following compiler error during linking.
...
Linking CXX executable ../bin/publish_primitives
/usr/bin/ld: CMakeFiles/publish_primitives.dir/src/publish_primitives.o: undefined reference to symbol 'vtkSmartPointerBase::operator=(vtkObjectBase*)'
/usr/bin/ld: note: 'vtkSmartPointerBase::operator=(vtkObjectBase*)' is defined in DSO /usr/lib/libvtkCommon.so.5.6 so try adding it to the linker command line
/usr/lib/libvtkCommon.so.5.6: could not read symbols: Invalid operation
collect2: ld returned 1 exit status
...
I manually added the required libraries in CMakeLists.txt (below) and it fixes the compiler problem.
rosbuild_add_executable(publish_primitives src/publish_primitives.cpp)
target_link_libraries(publish_primitives libvtkCommon.so libvtkFiltering.so )
Messing with the compiler flags for pcl is beyond me, so I'll leave it to the higher ups to figure out what is going on.
Originally posted by Kyle Strabala on ROS Answers with karma: 186 on 2012-01-30
Post score: 3
Answer:
It looks like there was a change in linker behavior. thread with links
Fedora documentation of the change
Originally posted by tfoote with karma: 58457 on 2012-06-09
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 8049,
"tags": "pcl, ubuntu-oneiric, ubuntu, ros-electric"
} |
Interpretation of the wave function in quantum mechanics | Question: I just started watching the coursera lectures on the basics of quantum mechanics and one of the first lectures was on deriving Schrodinger's equation and interpreting it under Born's interpretation. What I want to ask is what the wave function \begin{equation} \psi({\bf r},t) \end{equation} returns and represents. Now I know that
\begin{equation} |\psi({\bf r},t)|^2dxdydz \end{equation} is the probability of finding the quantum particle described by \begin{equation} \psi({\bf r},t) \end{equation} in the volume element \begin{equation} dV = dxdydz \end{equation} at time $t$. But I'm not sure what the wave function $\psi$ returns.
Could someone please explain in layman's terms the return type of the $\psi$ function and what the lone $\psi$ function represents?
Answer: If this were computer science, we might say $\psi$ takes a $d$-tuple of reals ($r$) and another real ($t$) and returns a complex number with the attached unit of $L^{-d/2}$ in $d$ dimensions (with $L$ being the unit of length).1
If you want any more of an interpretation, well then you've already given it: $\psi(r,t)$ is the thing such that $\int_R\ \lvert \psi(r,t) \rvert^2\ \mathrm{d}V$ is the probability of the particle being observed in the region $R$ at time $t$. You can loosely think of it as a "square root" of a probability distribution.
The reason the "square root" interpretation is not quite right, and probably the reason you aren't satisfied with the $\int_R\ \lvert \psi(r,t) \rvert^2\ \mathrm{d}V$ definition, is that any particular instance of $\psi(r,t)$ carries extraneous information beyond what is needed to fully specify the physics. In particular, if we have $\psi_1$ describing a situation, then the wavefunction defined by $\psi_2(r,t) = \mathrm{e}^{i\phi} \psi_1(r,t)$ gives identical physics for any real phase $\phi$.
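A quick numerical illustration of this global-phase redundancy (a toy two-component state of my own, not from the lecture):

```python
import numpy as np

psi1 = np.array([0.6, 0.8j])     # normalised: |0.6|^2 + |0.8|^2 = 1
psi2 = np.exp(1j * 0.37) * psi1  # same physical state, different global phase

assert not np.allclose(psi1, psi2)                    # distinct functions...
assert np.allclose(np.abs(psi1)**2, np.abs(psi2)**2)  # ...identical probabilities
```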
So the return value of the wavefunction itself is not a physical observable -- one always takes a square magnitude or does some other such thing that projects many mathematically distinct functions onto the same physical state. Even once you've taken the square magnitude, $\lvert \psi(r,t) \rvert^2$ arguably isn't directly observable, as all we can measure is $\int_R\ \lvert \psi(r,t) \rvert^2\ \mathrm{d}V$ (though admittedly for arbitrary regions $R$).
1You can check that $-d/2$ is necessarily the exponent. We need some unit such that squaring it and multiplying by the $d$-dimensional volume becomes a probability (i.e. is unitless). That is, we are solving $X^2 L^d = 1$, from which we conclude $X = L^{-d/2}$. | {
"domain": "physics.stackexchange",
"id": 15403,
"tags": "quantum-mechanics, wavefunction"
} |
Why are plasmids called plasmids? | Question: I learned from this website (https://www.etymonline.com/word/plasmid) that the word "plasmid" is a combination of "plasma" + "id", where "id" means: belonging to or connected to.
But I don't understand how plasmids are connected to plasma, given that they exist inside the bacterial cell's cytoplasm, not in blood plasma.
Answer:
even though they exist inside the bacteria cell cytoplasm
Plasma has several meanings in biology. Cytoplasm is a type of plasma.
A quick trip to Wikipedia tells us that the term plasmid was first used by Joshua Lederberg in Cell Genetics and Hereditary Symbiosis. It seems that he coined the term plasmid to reconcile the many terms used to describe extrachromosomal DNA, at the time.
These discussions have left a plethora of terms adrift: pangenes, bioblasts,
plasmagenes, plastogenes, chondriogenes, cytogenes and proviruses, which have lost
their original utility owing to the accretion of vague or contradictory connotations. At the risk of adding to this list, I propose plasmid as a generic term for any extrachromosomal hereditary determinant. The plasmid itself may be genetically simple or complex. On occasion, the nuclear reference of the general term gene will be emphasized as chromogene... | {
"domain": "biology.stackexchange",
"id": 10310,
"tags": "etymology"
} |
Is there a mapping reduction for every two languages $A$ and $B$ to some language $C$? | Question: One of my friends told me that for every two languages $A$ and $B$ there is a language $C$ s.t. $A \leq_{m} C$ and $B \leq_{m} C$. He simply defines two languages $A’=\{0w \mid w \in A\}$ and $B’=\{1w \mid w \in B\}$, so $C=A’ \cup B’$, and he defines the function $f(w)=0w$ if $w \in A$ and $f(w)=1w$ otherwise. I doubt this function is Turing-computable: suppose $A$ is $\overline{HP}$, which is not RE. How can we decide if $w \in A$? I even ask myself: can we really prove that such a mapping reduction always exists?
Answer: The definition of the reduction is wrong.
We actually need to define two reductions: from $A$ to $C$ and from $B$ to $C$. The reduction from $A$ to $C$ maps $w$ to $0w$. The reduction from $B$ to $C$ maps $w$ to $1w$. | {
"domain": "cs.stackexchange",
"id": 18416,
"tags": "complexity-theory, turing-machines, computability, reductions"
} |
Best way to remove useless features (non-English words) when there are more than 100,000 features? | Question: I am in a situation where I have more than 100,000 features, and I need to select the top features to feed to my final neural network model.
So far I have been using RandomForestClassifier in sklearn: first I call fit and then use feature_importances_ to select the top n features. (I also use StandardScaler and transform to normalize the data.)
Now I have two questions:
Am I approaching this right? Is this a correct way to remove useless features?
Is there a better way to select, for example, the top 200 features to give to my final model when there are more than 100,000 features? My task is sentence classification and these features are actually BOW features of non-English words, so basically I want to learn the top most important "words" in my corpus that can be used to classify sentences.
My final model is a neural network, but I cannot give all the features to it and let it decide, because of performance issues; I need to filter some of the features and then give them to my neural network model. Also, my final model is written in PyTorch, so currently I am using sklearn to select the top n features and then use PyTorch to train the final model. If there is an easier approach for this please let me know.
Answer: Since you have text data, you can remove frequently occurring words. This is typically done through sorting by tf-idf score. Frequently occurring words often have less signal than rare words. | {
"domain": "datascience.stackexchange",
"id": 11062,
"tags": "neural-network, scikit-learn, feature-selection"
} |
How to play a /joint_state bag file | Question:
I recorded a joint_state bag file and I want to replay the same motion as the one recorded. Is it possible? and how would you do that?
Originally posted by mamoun on ROS Answers with karma: 1 on 2012-06-26
Post score: 0
Original comments
Comment by Lorenz on 2012-06-26:
What do you mean with replay? Do you just want to play back the bag (rosbag play) or do you want your robot to execute the same trajectory?
Comment by mamoun on 2012-06-26:
to execute the same trajectory
Answer:
I don't know of any way to re-play a trajectory that has been recorded out of the box. That problem is actually highly robot-specific anyway since at least on the PR2, the joint_state topic contains basically all encoder values of the robot. This includes the position of both arms, the head and the wheels.
However, it should be doable with a little bit of work. First, do a rosbag record /joint_states. Then, I would use the rosbag C++ or Python API to read the data and generate goals for the different robot controllers. For the arms, you can generate joint_trajectory action goals and send them to the corresponding arm controllers. See this tutorial. For the head, you probably have to define your own joint_trajectory_controller that just controls the pan and tilt joints. After building up the joint_trajectory messages, just send the goals at the same time to the action servers. Subsequent calls to the different action clients' send_goal methods should be fine there.
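The middle step can be sketched library-free: turn the recorded (time, positions) samples into one trajectory goal whose points are stamped relative to the first sample. In a real node the samples would come from `rosbag.Bag(...).read_messages('/joint_states')` and the dict would be a `trajectory_msgs/JointTrajectory` message; the function name and dict shape below are my own stand-ins.

```python
def bag_samples_to_trajectory(samples, joint_names):
    """samples: list of (t_seconds, [positions]) tuples as recorded in the bag.
    Returns a JointTrajectory-like dict with time_from_start per point."""
    if not samples:
        return {"joint_names": joint_names, "points": []}
    t0 = samples[0][0]  # replay timing is preserved relative to the first sample
    points = [{"positions": list(pos), "time_from_start": t - t0}
              for t, pos in samples]
    return {"joint_names": joint_names, "points": points}

samples = [(100.0, [0.0, 0.1]), (100.5, [0.2, 0.1]), (101.0, [0.4, 0.2])]
goal = bag_samples_to_trajectory(samples, ["shoulder", "elbow"])
print(goal["points"][-1])  # last point arrives 1.0 s after the start
```

The resulting goal would then be handed to the arm controller's action client with `send_goal`, as the answer describes.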
Originally posted by Lorenz with karma: 22731 on 2012-06-27
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 9948,
"tags": "rosbag"
} |
Shouldn't the addition of angular momentum be commutative? | Question: I have angular momenta $S=\frac{1}{2}$ for spin, and $I=\frac{1}{2}$ for nuclear angular momentum, which I want to add using the Clebsch-Gordan basis, so the conversion looks like:
$$
\begin{align}
\lvert 1,1\rangle &= \lvert\bigl(\tfrac{1}{2}\tfrac{1}{2}\bigr)\tfrac{1}{2}\tfrac{1}{2}\rangle,\tag{4.21a} \\
\lvert 1,0\rangle &= \frac{1}{\sqrt{2}}\biggl(\lvert\bigl(\tfrac{1}{2}\tfrac{1}{2}\bigr)\tfrac{1}{2},-\tfrac{1}{2}\rangle + \lvert\bigl(\tfrac{1}{2}\tfrac{1}{2}\bigr),-\tfrac{1}{2}\tfrac{1}{2}\rangle\biggr),\tag{4.21b} \\
\lvert 1,-1\rangle &= \lvert\bigl(\tfrac{1}{2}\tfrac{1}{2}\bigr),-\tfrac{1}{2},-\tfrac{1}{2}\rangle,\tag{4.21c} \\
\lvert 0,0\rangle &= \frac{1}{\sqrt{2}}\biggl(\lvert\bigl(\tfrac{1}{2}\tfrac{1}{2}\bigr)\tfrac{1}{2},-\tfrac{1}{2}\rangle - \lvert\bigl(\tfrac{1}{2}\tfrac{1}{2}\bigr),-\tfrac{1}{2}\tfrac{1}{2}\rangle\biggr),\tag{4.21d}
\end{align}
$$
where $F=I+S$, so this is the basis $\lvert F m_F \rangle = \sum_m \lvert\bigl(I S\bigr),m_I m_S\rangle $.
Now since adding angular momenta is commutative, the exchange between $I$ and $S$ shouldn't mathematically introduce any kind of difference.
In other words, in the basis described in those equations, it shouldn't matter whether I write it as $\lvert\bigl(I S\bigr),m_I m_S\rangle$ or $\lvert\bigl(S I\bigr),m_S m_I\rangle$, right?
Now the problem is the following: I have created the Hamiltonian matrix $H=-\vec{\mu}\cdot \vec{B} = -2 \mu B_z S_z/\hbar$ in the $\lvert F m_F \rangle$ representation, and actually the result depends on how you call those angular momenta, so the result could be
$$H = \begin{pmatrix}
\mu_B B & 0 & 0 & 0 \\
0 & - \mu_B B & 0 & 0 \\
0 & 0 & 0 &\mu_B B \\
0 & 0 & \mu_B B & 0
\end{pmatrix}$$
Or could be
$$H = \begin{pmatrix}
\mu_B B & 0 & 0 & 0 \\
0 & - \mu_B B & 0 & 0 \\
0 & 0 & 0 &-\mu_B B \\
0 & 0 & -\mu_B B & 0
\end{pmatrix}$$
Depending on how you "label" them, $I$ or $S$... which is very confusing!
This happens because the off-diagonal terms
$$\left\langle 1 0 \right| S_z \left| 0 0 \right\rangle = \frac{1}{2} \left( \left\langle (\frac{1}{2} \frac{1}{2}) \frac{1}{2} -\frac{1}{2} \right| + \left\langle (\frac{1}{2} \frac{1}{2}) -\frac{1}{2} \frac{1}{2} \right| \right) S_z \left( \left| (\frac{1}{2} \frac{1}{2}) \frac{1}{2} -\frac{1}{2} \right\rangle - \left| (\frac{1}{2} \frac{1}{2}) -\frac{1}{2} \frac{1}{2} \right\rangle \right)$$
will be either $\hbar/2$ or $-\hbar/2$ depending on your convention whether it's $\lvert\bigl(I S\bigr),m_I m_S\rangle$ or $\lvert\bigl(S I\bigr),m_S m_I\rangle$.
How can I understand this physically and mathematically? Shouldn't the addition be commutative and the process be blind to which labels I use?
Answer: It is just a basis redefinition.
If you exchange $I$ and $S$, you change the last basis vector $e_4=|00\rangle$ into : $e'_4=-|00\rangle$. The new basis $e'$ is expressed from the old basis $e$ with the matrix $M= M^t = M^{-1} = (M^{-1})^t = Diag (1,1,1,-1)$, with $e' = M e$, and so it explains the new expression of the hamiltonian relatively to the new basis $e'$ , you have $H' = (M^{-1})^t H M^{-1}$. | {
"domain": "physics.stackexchange",
"id": 10013,
"tags": "quantum-mechanics, angular-momentum, hamiltonian-formalism, hilbert-space, representation-theory"
} |
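The sign flip claimed in the answer above can be checked numerically. This is a plain-Python sketch with μ_B B set to 1; note M = diag(1,1,1,−1) is its own inverse and its own transpose, so H' = M H M.

```python
def matmul(A, B):
    """Naive square-matrix product over nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

mu = 1.0  # mu_B * B in arbitrary units
H = [[mu, 0, 0, 0], [0, -mu, 0, 0], [0, 0, 0, mu], [0, 0, mu, 0]]
M = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]  # M = M^{-1} = M^t

Hp = matmul(matmul(M, H), M)  # H' = M H M^{-1}
print(Hp)  # off-diagonal block now carries -mu, matching the second Hamiltonian
```

Only the entries coupling |1,0⟩ and |0,0⟩ change sign, exactly the ambiguity described in the question.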
Automatic Downcasting by Inferring the Type | Question: In java, you must explicitly cast in order to downcast a variable
public class Fruit{} // parent class
public class Apple extends Fruit{} // child class
public static void main(String args[]) {
// An implicit upcast
Fruit parent = new Apple();
// An explicit downcast to Apple
Apple child = (Apple)parent;
}
Is there any reason for this requirement, aside from the fact that java doesn't do any type inference?
Are there any "gotchas" with implementing automatic downcasting in a new language?
For instance:
Apple child = parent; // no cast required
Answer: Upcasts always succeed.
Downcasts can result in a runtime error, when the object runtime type is not a subtype of the type used in the cast.
Since the second is a dangerous operation, most typed programming languages require the programmer to explicitly ask for it. Essentially, the programmer is telling the compiler "trust me, I know better -- this will be OK at runtime".
When type systems are concerned, upcasts put the burden of the proof on the compiler (which has to check it statically), downcasts put the burden of the proof on the programmer (which has to think hard about it).
One could argue that a properly designed programming language would forbid downcasts completely, or provide safe cast alternatives, e.g. returning an optional type Option<T>. Many widespread languages, though, chose the simpler and more pragmatic approach of simply returning T and raising an error otherwise.
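The Option-style safe cast can be sketched in Python (the example language is mine, not the question's): the runtime check that a Java cast performs implicitly is made explicit, and failure yields None instead of a ClassCastException.

```python
class Fruit:
    pass

class Apple(Fruit):
    def bite(self):
        return "crunch"

def downcast(obj, cls):
    """Checked downcast: returns obj viewed as cls, or None (an Option-style result)."""
    return obj if isinstance(obj, cls) else None

parent = Apple()                 # implicit upcast: a Fruit-typed reference to an Apple
child = downcast(parent, Apple)  # succeeds: runtime type really is Apple
print(child.bite() if child is not None else "not an Apple")
```

The caller is forced to handle the None branch, which is exactly the "burden of proof" the answer describes shifting onto the programmer.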
In your specific example, the compiler could have been designed to deduce that parent is actually an Apple through a simple static analysis, and allow the implicit cast. However, in general the problem is undecidable, so we can't expect the compiler to perform too much magic. | {
"domain": "cs.stackexchange",
"id": 7504,
"tags": "object-oriented, type-inference, language-design"
} |
rosserial too big for Arduino | Question:
I am trying to add a magnetometer to my robot by plugging it into an Arduino and publishing the data using a MagneticField message. I can't compile my code for Arduino Uno or Leonardo while at the same time using the Adafruit Magnetometer driver. I end up with this error message:
Global variables use 2,675 bytes
(104%) of dynamic memory, leaving -115
bytes for local variables. Maximum is
2,560 bytes.
I only defined the necessary variables as global (ROS handle, publisher, message to publish, and sensor-lib instance). Did anybody figure out a way to work with such sensors on an Arduino and publish their data using ROS?
Originally posted by Mehdi. on ROS Answers with karma: 3339 on 2015-12-22
Post score: 1
Answer:
Late response, but I guess this is a common problem. This solved my problem: http://answers.ros.org/question/28890/using-rosserial-for-a-atmega168arduino-based-motorcontroller/
Basically you can change the assigned buffer for serial communication by limiting the number of publishers/subscribers and also their length.
Originally posted by lordricky with karma: 16 on 2017-01-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 23282,
"tags": "ros, arduino, rosserial, memory"
} |
Model class for networking that uses notifications | Question: I just finished a model class that represents an Instagram Networking client for my iOS app.
I am specifically looking for a review of my use of static constants. These are strings that represent notification names and notification userInfo dictionary keys.
Here is my networking class:
POPInstagramNetworkingClient.h:
#import "AFHTTPSessionManager.h"
@interface POPInstagramNetworkingClient : AFHTTPSessionManager
+ (id)sharedPOPInstagramNetworkingClient;
- (instancetype)initWithBaseURL:(NSURL *)url;
- (void)requestPopularMedia;
- (void)requestMediaWithTag:(NSString *)tag;
@end
POPInstagramNetworkingClient.m:
#import "POPInstagramNetworkingClient.h"
#import <AFNetworking.h>
@implementation POPInstagramNetworkingClient
#pragma mark - Singleton
+ (instancetype)sharedPOPInstagramNetworkingClient
{
static POPInstagramNetworkingClient *sharedPOPInstagramNetworkingClient = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
//Define base URL string
static NSString * const BaseURLString = @"https://api.instagram.com/v1/";
//Create our shared networking client for Instagram with base URL
sharedPOPInstagramNetworkingClient = [[POPInstagramNetworkingClient alloc]initWithBaseURL:[NSURL URLWithString:BaseURLString]];
});
return sharedPOPInstagramNetworkingClient;
}
#pragma mark - Initializer Methods
- (instancetype)initWithBaseURL:(NSURL *)url
{
self = [super initWithBaseURL:url];
if (self) {
//Set the serializers
self.requestSerializer = [AFJSONRequestSerializer serializer];
self.responseSerializer = [AFJSONResponseSerializer serializer];
}
return self;
}
#pragma mark - Instance Methods
- (void)requestPopularMedia
{
//Define notification names
static NSString * const kRequestForPopularMediaSuccessful = @"RequestForPopularMediaSuccessful";
static NSString * const kRequestForPopularMediaUnsuccessful = @"RequestForPopularMediaUnsuccessful";
//Create manager and execute GET method to retreive popular media
AFHTTPRequestOperationManager *manager = [AFHTTPRequestOperationManager manager];
[manager GET:[NSString stringWithFormat:@"%@media/popular?client_id=76566d0e6d5a41069ea5e8c86fbbd509", self.baseURL] parameters:nil success:^(AFHTTPRequestOperation *operation, id responseObject) {
//Post success notification
[[NSNotificationCenter defaultCenter]postNotificationName:kRequestForPopularMediaSuccessful object:nil userInfo:@{@"requestForPopularMediaResults": responseObject}];
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
NSLog(@"Error: %@", error);
//Post failure notificaton
[[NSNotificationCenter defaultCenter]postNotificationName:kRequestForPopularMediaUnsuccessful object:nil userInfo:@{@"requestForPopularMediaResults": error}];
}];
}
- (void)requestMediaWithTag:(NSString *)tag
{
NSLog(@"requesting media with tag");
//Define notification names
static NSString * const kRequestForMediaWithTagSuccessful = @"RequestForMediaWithTagSuccessful";
static NSString * const kRequestForMediaWithTagUnsuccessful = @"RequestForMediaWithTagUnsuccessful";
//new - @"%@tags/%@/media/recent?client_id=76566d0e6d5a41069ea5e8c86fbbd509"
//old - @"%@tags/search?q=%@&client_id=76566d0e6d5a41069ea5e8c86fbbd509"
//Create manager and execute GET method to retreive media with tag
AFHTTPRequestOperationManager *manager = [AFHTTPRequestOperationManager manager];
[manager GET:[NSString stringWithFormat:@"%@tags/%@/media/recent?client_id=76566d0e6d5a41069ea5e8c86fbbd509", self.baseURL, tag] parameters:nil success:^(AFHTTPRequestOperation *operation, id responseObject) {
NSLog(@"Success!");
NSLog(@"Response Object: %@", responseObject);
//Post success notification
[[NSNotificationCenter defaultCenter]postNotificationName:kRequestForMediaWithTagSuccessful object:nil userInfo:@{@"requestForTaggedMediaResults": responseObject}];
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
NSLog(@"Error: %@", error);
//Post failure notification
[[NSNotificationCenter defaultCenter]postNotificationName:kRequestForMediaWithTagUnsuccessful object:nil userInfo:@{@"requestForTaggedMediaResults": error}];
}];
}
@end
I was told previously that when defining static variables I should keep their scope as limited as possible, so you will notice that I defined these inside of specific methods instead of at the top of this file.
One thing that is bothering me though is the fact that I have other classes observing for these notifications and also accessing the passed userInfo dictionary when the notification posts. I basically just copy/pasted the notification name's static string and userInfo dictionary key string.
This is definitely wrong, and I feel like these variables might need to be placed into a single file that would be included in the precompiled header, but I'm not sure if this is the correct approach so any advice on this would be greatly appreciated.
I want to make sure I am following best practices.
Answer: //Define notification names
static NSString * const kRequestForPopularMediaSuccessful = @"RequestForPopularMediaSuccessful";
static NSString * const kRequestForPopularMediaUnsuccessful = @"RequestForPopularMediaUnsuccessful";
The scope of a notification name should never be limited to a single method or function.
The whole point of notification is that one object posts the notification, and as many objects who are registered can receive the notification.
Moreover, you've defined these identical constants in multiple methods.
So, first of all, if you use it in more than one method, the variable should at least be scoped to the class or file you're using it in. Redefining it every time you use it absolutely defeats the purpose of having a variable at all.
Second, this variable actually needs to be defined in the .h file. It's a notification name. If other objects are going to register for this notification, they need to know the name of the notification, and the best way to let them know the notification name is by giving them a variable in the .h file to use rather than having them look up the notification string.
//Define base URL string
static NSString * const BaseURLString = @"https://api.instagram.com/v1/";
//Create our shared networking client for Instagram with base URL
sharedPOPInstagramNetworkingClient = [[POPInstagramNetworkingClient alloc]initWithBaseURL:[NSURL URLWithString:BaseURLString]];
In this case, since we're only ever using this string once, it's fine to just use the literal string directly in the method call.
The point of declaring a variable like this would be if we reused it multiple times within the method--as it stands, we don't re-use it, so we don't need to create the variable.
Though... if we do create a variable, it should follow proper camelCase naming conventions. | {
"domain": "codereview.stackexchange",
"id": 9261,
"tags": "mvc, objective-c, ios, static, constants"
} |
What determines the colors and patterns of a clam shell? | Question: Earlier this week I was looking at some bivalve shells that had ornate patterns which ranged in color from a light orange-pink to a deep orange-red. Here is an image I found online that seems to be of the same type of shell:
The friend who was with me said, "I wonder where the color comes from." We were trying to look for two shells with similar colors, patterns, and approximately equal sizes, and it was hard, even though we were looking through a large collection of similar shells.
What determines this ornate pigmentation? Is it influenced by environmental factors such as the minerals in the water? Does it have anything to do with the age or health of the organism? Or is it purely genetic? Is this determined by the same basic biological principle that determines the color of human hair, or is there something different at work in shells?
Answer: This post is pretty well written and seems to say that the evolutionary forces that produce shell color like this is not known. There are suggestions that the color is camoflage or the result of metabolic byproducts or that the pigments serve to strengthen the shell. Its hard to believe some of these theories given that some shells do fine without coloring (at least that humans can see).
Structurally, shells are living things. They consist of many layers of thin irregular calcium carbonate hexagonal plates that make up the mineral portion of the shell. The plates are held together by a protein glue and renewed by cellular action. Whatever their evolutionary motivations, the colors are biomolecules which are in the biomatrix binding the plates together. This is a reason that shells that are left out to dry in the sun can become bleached and brittle.
I wish I had some better references about the shell structure, but I'm remembering a lot of this from a one afternoon book report given in a class, not from any intensive study on my part.
This has influenced some ideas about things like tank armor, by the way. At least there is funding for biomaterials research by agencies interested in tank armor. Lamellar armor made of segmented plates can withstand tremendous impacts at a single point like a shell being dropped from a great height by a seabird or a shell hitting a tank. Plates just a few microns wide and a fraction of a micron deep shaped like irregular hexagons are hard to make if you aren't a mollusc so far though. | {
"domain": "biology.stackexchange",
"id": 9347,
"tags": "genetics, marine-biology, pigmentation, molluscs"
} |
Difference between run, measure, transpile, execute? | Question: quite new to quantum computing and I have to do a small presentation on quantum gates using python code(notebook).Also, please review the small code I have written for its correctness. I have some questions also as below:
import matplotlib.pyplot as plt
import numpy as np
from math import pi
from qiskit.quantum_info import Statevector
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, transpile
from qiskit.tools.visualization import circuit_drawer
from qiskit.quantum_info import state_fidelity
from qiskit import BasicAer
from qiskit.visualization import plot_bloch_multivector
backend = BasicAer.get_backend('unitary_simulator')
q = QuantumRegister(1)
qc = QuantumCircuit(q)
qc.u(pi/2,pi/4,pi/8,q)
qc.draw(output='mpl')
U gate has a lambda argument. What does it mean by lambda and what effect it have on the bloch sphere?
state = Statevector(qc)
plot_bloch_multivector(state) # argument is a statevector
What is this Statevector function? I am not getting it at all.
Also, is the bloch sphere showing the effect different angles applied to the gate and state achieved due to effect of lambda, theta and phi? How to know the effect of lambda?
transpiled_circuit= transpile(qc, backend)
transpiled_circuit.draw(output= "mpl")
What is the effect of transpile function? Does it break the gate into its individual components? Not able to understand its effect?
job = backend.run(transpiled_circuit)
job.result().get_unitary(qc, decimals=3)
What is the function of run? Also, what is the get_unitary giving? Is this get_unitary same as statevector(discussed above)?
Now, I do not know where to put the measure and execute calls, or how to see the histogram and the counts.
Also, in this example, I am using the "unitary simulator". Is this simulator the same as the qasm simulator?
Thank you in advance for resolving my doubts.
Answer: So many questions in your question! Not sure I'll be able to answer them all, but I'll hopefully get you started!
1)U gate has a lambda argument. What does it mean by lambda and what effect it have on the bloch sphere?
This is a link to the documentation for the gate that you used. From this I can see that the their u gate is the universal gate that uses three sequential rotations (Rz,Ry and Rz) to to change the state. The lamda you are referring to is that final rotation around z.
For additional information on the u gate, see this SE post.
2)What is this Statevector function? I am not getting it at all. Also, is the bloch sphere showing the effect different angles applied to the gate and state achieved due to effect of lambda, theta and phi? How to know the effect of lambda?
Yes, your Bloch vector should show the resulting state after putting your state, $|0\rangle$, through the u gate with your given rotation angles of $\pi/2$, $\pi/4$, and $\pi/8$. The $\theta$ relates to the rotation around the y axis; the $\phi$ and $\lambda$ are around the z axis.
As for the Statevector function, what it does is take the output of your circuit and convert it into a statevector, which you have then displayed on the Bloch Sphere. You could also display it in Dirac notation using state.draw() in another cell.
3)What is the effect of transpile function? Does it break the gate into its individual components? Not able to understand its effect?
Transpile is a function that is used to convert the circuit you have built, to one readable by actual quantum computers at IBMQ. Every time a complex circuit is transpiled it may be done slightly differently. This is not an effect you tend to see, but if you get to the point where you are worried about circuit depth, then the different transpile results become important.
4)What is the function of run? Also, what is the get_unitary giving? Is this get_unitary same as statevector(discussed above)?
.run tells the backend to actually run the circuit rather than just looking at it. (You are no longer building it, you want to run it to see the result) Find more information on .run here
get_unitary will output the matrix that evolved your statevector. Click here for the documentation.
5) Now, I do not know where to put the measure and execute calls, or how to see the histogram and the counts. Also, in this example, I am using the "unitary simulator". Is this simulator the same as the qasm simulator? Thank you in advance for resolving my doubts.
Measure will go at the end of your circuit. You then use `execute` the same way you used `run`, except you will execute your circuit on a different backend, as the unitary simulator is used for generating unitaries, while the qasm_simulator can simulate the measurement. After you have executed your circuit you will need `get_counts` in order to use `plot_histogram`.
I'm deliberately not showing you what this code would look like in full, because I believe that is probably a part of the assignment, but I will say that if you look at this Qiskit tutorial page, you should be able to find some examples that may help you.
Best of luck! | {
"domain": "quantumcomputing.stackexchange",
"id": 4168,
"tags": "qiskit, programming"
} |
Scott's stochastic lambda calculi | Question: Recently, Dana Scott proposed stochastic lambda calculus, an attempt to introduce probabilistic elements into (untyped) lambda calculus based on a semantics called graph model. You can find his slides on line for example here and his paper in Journal of Applied
Logic, vol. 12 (2014).
However, by a quick search on the Web, I found similar previous research, for example, that for Hindley-Milner type system. The way they introduce probabilistic semantics is similar to Scott's (in the former, they use monads while in the latter Scott uses continuation-passing style).
In which way is the Scott's work different from previous work available, in terms of theories themselves or their possible applications?
Answer: One apparent strength of his approach is that it allows higher-order functions (i.e. lambda terms) to be observable outcomes, which measure theory generally makes quite tricky. (The basic problem is that spaces of measurable functions generally have no Borel $\sigma$-algebra for which the application function - sometimes called "eval" - is measurable; see the intro to the paper Borel structures for function spaces.) Scott does this using a Gödel encoding from lambda terms to natural numbers, and working directly with the encoded terms. One weakness to this approach may be that the encoding could be difficult to extend with real numbers as program values. (Edit: This is not a weakness - see Andrej's comment below.)
Using CPS seems to be primarily for imposing a total order on computations, to impose a total order on access to the random source. The state monad should do just as well.
Scott's "random variables" seem to be the same as Park's "sampling functions" in his operational semantics. The technique of transforming standard-uniform values into values with any distribution is more widely known as inverse transform sampling.
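Inverse transform sampling fits in one line: push a standard-uniform draw on [0, 1) through the inverse CDF of the target distribution. The exponential distribution below, with $F^{-1}(u) = -\ln(1-u)/\lambda$, is purely an illustration.

```python
import math
import random

def sample_exponential(rate, u=None):
    """Inverse transform sampling for Exp(rate): F^{-1}(u) = -ln(1 - u) / rate."""
    if u is None:
        u = random.random()  # the "random source" value in [0, 1)
    return -math.log(1.0 - u) / rate

random.seed(0)
draws = [sample_exponential(2.0) for _ in range(100_000)]
print(sum(draws) / len(draws))  # should be close to 1/rate = 0.5
```

This is exactly the role the fixed uniform probability space plays in Scott's semantics: the program is a deterministic transformation of the uniform inputs.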
I believe there's just one fundamental difference between Ramsey's and Scott's semantics. Ramsey's interprets programs as computations that build a measure on program outputs. Scott's assumes an existing uniform measure on inputs, and interprets programs as transformations of those inputs. (The output measure can in principle be computed using preimages.) Scott's is analogous to using the Random monad in Haskell.
In its overall approach, Scott's semantics seems most similar to the second half of my dissertation on probabilistic languages - except I stuck with first-order values instead of using a clever encoding, used infinite trees of random numbers instead of streams, and interpreted programs as arrow computations. (One of the arrows computes the transformation from the fixed probability space to program outputs; the others compute preimages and approximate preimages.) My dissertation's chapter 7 explains why I think interpreting programs as transformations of a fixed probability space is better than interpreting them as computations that build a measure. It basically comes down to "fixpoints of measures are way complicated, but we understand fixpoints of programs pretty well." | {
"domain": "cstheory.stackexchange",
"id": 2907,
"tags": "lo.logic, pl.programming-languages, lambda-calculus"
} |
Check if Two Arrays Sorted in Decreasing and Increasing Order Satisfy a Condition | Question: Given 2 arrays $A, B$ each of size $n$. If we want to find if the two arrays satisfy the condition that $A[i] + B[i] \ge k$ for all indices $i$, it's sufficient to fulfill 3 conditions below:
Sum of all elements of $A$ and $B$ is $\ge nk$
Sum of max element of $A$ and min element of $B$ is $\ge k$ (edge case)
Sum of min element of $A$ and max element of $B$ is $\ge k$ (edge case)
Edit: Based on the answers, these 3 conditions are not sufficient for all cases, since they only constrain the edge cases and don't cover the elements in between. Is that right?
Answer: No, these 3 conditions are not sufficient.
For example, $n=4$, $A=[6,5,2,1]$, $B=[1,3,4,6]$, $k=7$.
The sum of all elements in $A$ and $B$ is $14+14=28=4\cdot7$.
The sum of the max in $A$ and the min in $B$ is $7$.
The sum of the min in $A$ and the max in $B$ is $7$.
However, $A[2]+B[2]=2+4=6<7$. | {
"domain": "cs.stackexchange",
"id": 21139,
"tags": "arrays, permutations"
} |
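The counterexample above can be checked mechanically (a quick sketch):

```python
def conditions_hold(A, B, k):
    """The three proposed conditions: total sum, and the two edge-case sums."""
    n = len(A)
    return (sum(A) + sum(B) >= n * k
            and max(A) + min(B) >= k
            and min(A) + max(B) >= k)

def actually_satisfies(A, B, k):
    """The real requirement: A[i] + B[i] >= k for every index i."""
    return all(a + b >= k for a, b in zip(A, B))

A, B, k = [6, 5, 2, 1], [1, 3, 4, 6], 7
print(conditions_hold(A, B, k), actually_satisfies(A, B, k))  # True False
```

All three conditions pass, yet index 2 gives 2 + 4 = 6 < 7, so the conditions are not sufficient.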
Should I include overlapping (input) Data in my training data | Question: If I have time dependent data and want to predict the relative change for a future time. Should I separate the data so that the input times don't overlap?
With an example:
I have hourly temperature readings and want to predict the temperature change 4 hours after the last input hour. Now I could split my data so that I have
(overlapping)
1st data point: 1h, 2h, 3h, ... 10h -> predict for 14h
2nd data point: 2h, 3h, 4h ... 11h -> predict for 15h
(not overlapping)
1st data point: 1h, 2h, 3h, ... 10h -> predict for 14h
2nd data point: 15h, 16h, 17h ... 25h -> predict for 29h
Further questions:
Does it depend on the Model/Training algorithm.
Does it depend on the Data?
Does it depend on the number of data points I have available? (I have hourly data for 6 years)
Answer: In general, both methods are valid to train temporal models. The only thing you need to check is that validation and test-set don't overlap with any of your training samples.
Using the overlapping variant can sometimes increase performance, because your model might benefit from the increased number of different starting points. This can be very relevant for certain datasets. For example, traffic data has periodic characteristics (little traffic at nighttimes / high traffic at daytimes). Let's say you have hourly traffic data and a sequence length of 24. Now your sequences will always start at the exact same day time. Therefore the model might not learn to properly forecast from other starting times.
I cannot think of any drawback the overlapping variant might have, so I'd opt for the overlapping variant. To be sure which one works best for you, I think you have to find out empirically.
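The two splitting schemes from the question can be sketched as a single windowing function (indices stand in for hours; stride 1 gives the overlapping variant, stride = length + horizon the disjoint one):

```python
def make_windows(series, length, horizon, stride):
    """Slice a series into (input_window, target) samples.
    The target is the value `horizon` steps after the window's last element."""
    samples = []
    i = 0
    while i + length + horizon <= len(series):
        samples.append((series[i:i + length], series[i + length + horizon - 1]))
        i += stride
    return samples

hourly = list(range(30))  # stand-in for 30 hourly temperature readings
overlapping = make_windows(hourly, length=10, horizon=4, stride=1)
disjoint = make_windows(hourly, length=10, horizon=4, stride=14)
print(len(overlapping), len(disjoint))  # 17 2
```

The sample counts make the trade-off concrete: on the same 30 readings the overlapping variant yields 17 training samples versus 2, which is why it tends to help when data is scarce.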
Does it depend on the Model/Training algorithm.
In general it shouldn't, after all, in both cases you are always feeding the model valid sequences - any model and any training algorithm should deal well with either one.
Does it depend on the Data?
Yes, if the sequences have distinct starting points the non-overlapping variant might be better suited. The traffic-data example from above applies here.
Does it depend on the number of data points I have available? (I have hourly data for 6 years)
I'd say yes, you effectively increase the number of data points by using overlapping sequences, this is good if your dataset is small. Your dataset is comparably large. | {
"domain": "ai.stackexchange",
"id": 3416,
"tags": "neural-networks, datasets, training-datasets"
} |
What technologies prevent drones from being as efficient as birds? | Question: We have large scale aircraft with long endurance and much higher speed than birds. But it seems that aircraft of comparable size to birds (i.e. drones) have much lower endurance, top speed and flight range. What technologies are currently limiting us from achieving this?
Answer: Somebody needs to design a chocolate-powered drone. Yes, seriously.
The total energy stored in a 40AH 12V battery is about the same as the calorific value of five 100g chocolate bars from your nearest supermarket.
Source: from https://www.tesco.com/groceries/product/details/?id=254381873, "Tesco Everyday Value Milk Chocolate" provides 3840 kJ/kg, and the fully charged battery holds 40x12x3600/1000 = 1728 kJ
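The arithmetic behind the comparison:

```python
battery_kj = 40 * 12 * 3600 / 1000      # 40 Ah x 12 V battery, in kJ
chocolate_kj = 5 * 100 * 3840 / 1000    # five 100 g bars at 3840 kJ/kg
print(battery_kj, chocolate_kj)         # 1728.0 1920.0
```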
Find a way to pack that much energy density into drone (and also consider that unlike LIPO batteries, chocolate bars don't spontaneously combust if they are mishandled!) and the comparison between drones and birds would come out rather differently.
Of course, soaring birds (and sailplanes) can stooge around all day and night, getting their energy from thermals and the wave airflows over mountain ridges - but they still need some external power to reposition themselves to take advantage of those "free" energy sources. | {
"domain": "engineering.stackexchange",
"id": 1510,
"tags": "mechanical-engineering, aircraft-design"
} |
tf transforms high latency | Question:
I'm experiencing really high latency looking up transforms.
Using ROS Groovy on the PR2
Was going through the tutorial at http://www.ros.org/wiki/image_geometry/Tutorials/ProjectTfFrameToImage and using it to modify my own code.
Problems around line 52 and 54 of "the code" when trying to lookup the transform between /r_wrist_roll_link and /wide_stereo_optical_frame.
With the default timeout and acquisition time, I get only the exception "Frame id /wide_stereo_optical_frame does not exist! Frames (1):"
In order to get it to work at all, I have to set the timeout very high, around a second, and also ask for the most recent transform with ros::Time(0) instead of the camera acquisition time.
rosrun tf tf_monitor shows that /r_wrist_roll_link and /wide_stereo_optical_frame are both publishing at 50Hz, so it is unclear why looking up a (any) transform is taking so long.
The plain tutorial code works fine, it's just when I try to do the same thing in my own project that problems happen. Clearly something is wrong, but I have made as few changes as possible.
[EDIT]
Here are the errors
[ WARN] [1376332000.619905274]: [imageCallback] TF exception:
Lookup would require extrapolation into the past. Requested time 1376332000.588570505 but the earliest data is at time 1376332000.612975642, when looking up transform from frame [/r_wrist_roll_link] to frame [/wide_stereo_optical_frame]
And originally I got (without increasing timeout)
[ WARN] [1376330015.046842618]: [imageCallback] TF exception:
Frame id /wide_stereo_optical_frame does not exist! Frames (1):
Here is my code.
#include <opencv2/opencv.hpp>
#include <vector>
#include <ros/ros.h>
#include <ros/callback_queue.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <image_transport/image_transport.h>
#include <image_geometry/pinhole_camera_model.h>
#include <tf/transform_listener.h>
#include <cv_bridge/cv_bridge.h>
#include <dynamic_reconfigure/server.h>
#include <opencv_test/show_imageConfig.h>
#include "target.h"
namespace enc = sensor_msgs::image_encodings;
const char *WINDOW_NAME = "OpenCV ROS Demo";
cv::Mat frame;
Target target;
void drCallback(opencv_test::show_imageConfig &config, uint32_t level) {
target.setBinaryThresh(config.binaryThresh);
}
void drawMarkers(std::vector<Marker> markers, cv::Mat &output) {
cv::Mat_<cv::Vec3b> output_ = output;
// Show them
//drawClusters(horizMatches, output_, Scalar(0,255,0));
//drawClusters(vertMatches, output_, Scalar(255,0,255));
//circle(output_, isect.pos, 10, Scalar(0,0,255));
for (const Marker &m : markers) {
cv::circle(output_, m.pos, m.h_len, cv::Scalar(255,255,0));
}
}
image_geometry::PinholeCameraModel cam_model;
std::string frameId = "/r_wrist_roll_link";
//std::string frameId = "/high_def_optical_frame";
void imageCallback (const sensor_msgs::Image::ConstPtr &srcImg,
const sensor_msgs::CameraInfo::ConstPtr &srcInfo) {
cv_bridge::CvImagePtr cv_ptr;
try {
cv_ptr = cv_bridge::toCvCopy(srcImg, enc::BGR8);
} catch (cv_bridge::Exception &e) {
ROS_ERROR("cv_bridge exception: %s", e.what());
return;
}
frame = cv_ptr->image;
std::vector<Marker> markers = target.processFrame(frame);
drawMarkers(markers, frame);
cam_model.fromCameraInfo(srcInfo);
tf::TransformListener tf_listener;
tf::StampedTransform transform;
ros::Time acq_time = srcInfo->header.stamp;
ros::Duration timeout(20.0/30);
try {
tf_listener.waitForTransform(cam_model.tfFrame(), frameId,
ros::Time(0), timeout);
tf_listener.lookupTransform(cam_model.tfFrame(), frameId, ros::Time(0), transform);
// == Doesn't work ==
// tf_listener.waitForTransform(cam_model.tfFrame(), frameId,
// acq_time, timeout);
// tf_listener.lookupTransform(cam_model.tfFrame(), frameId, acq_time, transform);
} catch (tf::TransformException &ex) {
ROS_WARN("[imageCallback] TF exception:\n %s", ex.what());
return;
}
tf::Point pt = transform.getOrigin();
cv::Point3d pt_cv(pt.x(), pt.y(), pt.z());
cv::Point2d uv;
uv = cam_model.project3dToPixel(pt_cv);
static const int RADIUS = 3;
cv::circle(frame, uv, RADIUS, CV_RGB(255,0,0), -1);
}
int main (int argc, char **argv) {
ros::init(argc, argv, "show_image");
dynamic_reconfigure::Server<opencv_test::show_imageConfig> server;
decltype(server)::CallbackType f = boost::bind(&drCallback, _1, _2);
server.setCallback(f);
ros::NodeHandle n;
image_transport::ImageTransport it(n);
image_transport::CameraSubscriber sub = it.subscribeCamera("/camera/image_raw", 1, imageCallback);
cv::namedWindow(WINDOW_NAME, 1);
while (ros::ok()) {
ros::getGlobalCallbackQueue()->callAvailable(ros::WallDuration(0.5));
if (frame.rows > 0)
cv::imshow(WINDOW_NAME, frame);
cv::waitKey(1);
}
return 0;
}
Originally posted by jy on ROS Answers with karma: 3 on 2013-08-12
Post score: 0
Original comments
Comment by Tully on 2013-08-12:
Please provide more information for us to be able to help you better. What are you running? Do you get the error once or continuously? What is the actual error output? What are the code snippets you are running and what are the variations.
Answer:
Answer: Your problem is that you are constructing the listener inside the callback. This means that you do not give it time to build up its buffer. That's why if you query it quickly, it reports that the frame does not exist. And with a timeout, it builds up a buffer of new information, but it will never hear about the past where you queried it.
I recommend rereading the tf and time tutorial linked from the tutorial you linked to.
Originally posted by tfoote with karma: 58457 on 2013-08-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jy on 2013-08-12:
Thanks so much! | {
"domain": "robotics.stackexchange",
"id": 15234,
"tags": "ros, tutorial, pr2, transform"
} |
Ether vs. Quantum Field Theory | Question: We were asked a question to differentiate the difference between the idea of an Ether and the idea of Quantum Fields. When I really began to think about it I concluded that the ideas are the same. The two are essentially the same idea. They both consist of the idea that 'something' permeates all of space and acts on everything. Why is this wrong?
Answer: A different point of view to Jamal's answer: I think what distinguishes quantum field theory, where each elementary particle in the particle table defines a field all over spacetime, from the luminiferous aether is Lorentz invariance.
The luminiferous aether theory was falsified by the Michelson-Morley experiment because it was not Lorentz invariant.
In quantum field theory an electron traversing spacetime is described by a quantum mechanical wave packet (which means that what "waves" is the probability of existing at (x, y, z, t)), manifested by creation and annihilation operators acting on the electron field, and the expectation value defines the location of the electron as a function of (x, y, z, t). The same for a photon, riding on the photon field. The quantum fields though are by construction Lorentz invariant and thus cannot be identified with the luminiferous aether. | {
"domain": "physics.stackexchange",
"id": 37452,
"tags": "quantum-field-theory, spacetime, vacuum, aether"
} |
Convert from Relative Magnitude to Mass | Question: I have data which gives me the magnitude density (${\rm mag}\,{\rm arcsec}^{-2}$) of M31 as a function of radius. How can I convert these data to the (enclosed) mass at a given radius (for velocity curve analysis)?
Here's a chart of the data. The odd thing about the magnitude profile is that it looks exactly like a mass profile which leads me to believe there's a simple way to relate the two.
Answer: A magnitude is a somewhat convoluted measurement of luminosity. You probably have relative magnitude $m$ per $\rm arcsec^2$.
You can start by using the distance modulus $m-M$ to calculate the absolute magnitude $M$:
$$m-M=5\left(\log_{10}\left(\frac{d}{\rm pc}\right) - 1\right)$$
where $d$ is the distance.
Once you have the absolute magnitude you can convert that to a luminosity using:
$$M-M_\odot=-2.5\log_{10}\left(\frac{L}{\rm L_\odot}\right)$$
$\rm L_\odot$ is the solar luminosity. $M_\odot$ is the absolute magnitude of the Sun (not to be confused with $\rm M_\odot$, the solar mass, notice the italic/upright characters). Note that this formula is only technically correct for the bolometric luminosity, that is integrated over all wavelengths, but if you have a sufficiently broad filter, or make some correction to your magnitude, it can still be used.
And finally a luminosity can be related to a (stellar) mass $M_*$ as:
$$\frac{L}{\rm L_\odot}=\Upsilon\frac{M_*}{\rm M_\odot}$$
The mass to light ratio $\Upsilon$ is its own art form. Crudely you can set it to $1$ (a solar mass of stars emits a solar luminosity of light, makes sense as a rough guess), but to get beyond that you'll have to dig through the literature as choosing accurate $\Upsilon$ is quite involved (you'll need, at a minimum, measurements in at least two filters).
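The chain above ($m \to M \to L/{\rm L_\odot} \to M_*/{\rm M_\odot}$) can be sketched in a few lines of Python. This is an illustration only: $M_\odot \approx 4.83$ is taken as a fiducial solar absolute magnitude, $\Upsilon = 1$ is the crude guess from above, and ~780 kpc is an assumed distance to M31.

```python
import math

def apparent_to_absolute(m, d_pc):
    # Distance modulus: m - M = 5*(log10(d/pc) - 1)
    return m - 5.0 * (math.log10(d_pc) - 1.0)

def absolute_to_luminosity(M, M_sun=4.83):
    # M - M_sun = -2.5*log10(L/L_sun), solved for L/L_sun
    return 10.0 ** (-0.4 * (M - M_sun))

def luminosity_to_stellar_mass(L_over_Lsun, upsilon=1.0):
    # Crude mass-to-light ratio: M*/M_sun = upsilon * (L/L_sun)
    return upsilon * L_over_Lsun

# e.g. one surface-brightness sample of m = 20 mag/arcsec^2 at ~780 kpc
M = apparent_to_absolute(20.0, 780e3)
mass_per_arcsec2 = luminosity_to_stellar_mass(absolute_to_luminosity(M))
```

Each surface-brightness sample then becomes a surface mass density in ${\rm M_\odot\,arcsec^{-2}}$.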
Putting all that together you should now have converted your ${\rm mag}\,{\rm arcsec}^{-2}$ measurements to ${\rm M}_\odot\,{\rm arcsec}^{-2}$. From there you can just integrate over solid angle to get a radial mass profile. | {
"domain": "physics.stackexchange",
"id": 24909,
"tags": "mass, astrophysics, galaxies, luminosity, galaxy-rotation-curve"
} |
Can someone tell a formal definition for this problem: k-disjoint triangle? | Question: k-disjoint triangle
"We consider the NP-complete problem of deciding whether an input graph on n vertices has k vertex-disjoint copies of a fixed graph H. "
The above definition is the best I could find online. Nowhere could I find a clear formal definition.
What does it mean to be vertex disjoint exactly?
How to prove the k-disjoint triangle problem is NP-hard?
Answer: Let's consider the case in which $H$ is a triangle; the general case is very similar.
A triangle in a graph $G$ is a set $\{x,y,z\}$ of three vertices of $G$ such that $G$ contains the edges $\{x,y\},\{y,z\},\{z,x\}$. Two triangles $T_1,T_2$ are (vertex) disjoint if they do not share a vertex: $T_1 \cap T_2 = \varnothing$. A graph contains $k$ disjoint triangles if it contains $k$ triangles $T_1,\ldots,T_k$ which are pairwise disjoint: $T_i \cap T_j = \varnothing$ for all $i \neq j$.
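The definitions translate directly into a brute-force check; the sketch below is exponential time and for illustration only, not an algorithm you would actually run on large graphs.

```python
from itertools import combinations

def triangles(n, edges):
    # All vertex triples {x, y, z} whose three pairwise edges are present
    E = {frozenset(e) for e in edges}
    return [t for t in combinations(range(n), 3)
            if all(frozenset(p) in E for p in combinations(t, 2))]

def has_k_disjoint_triangles(n, edges, k):
    # Try every k-subset of triangles and test pairwise vertex-disjointness
    ts = triangles(n, edges)
    return any(all(set(a).isdisjoint(b) for a, b in combinations(sub, 2))
               for sub in combinations(ts, k))
```

For example, the complete graph $K_6$ contains two vertex-disjoint triangles (say $\{0,1,2\}$ and $\{3,4,5\}$) but not three, since three disjoint triangles would need nine vertices.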
Your quote is taken from the paper Exact algorithms for finding $k$ disjoint triangles in an arbitrary graph by Fellows, Heggernes, Rosamond, Sloper, and Telle. Here is the last sentence of the first paragraph:
On the other hand, the $K_3$-packing problem, which is our main concern in this paper, is NP-hard [HK78].
The $K_3$-packing problem asks, given a graph $G$ and an integer $k$, whether $G$ contains $k$ disjoint triangles. The reference [HK78], which can be found in the bibliography of the paper, presumably contains a proof that this problem is NP-complete. | {
"domain": "cs.stackexchange",
"id": 17878,
"tags": "data-structures, parameterized-complexity"
} |
Can the Helmholtz equation be applied for a system with multiple frequencies? | Question: Currently I am working on a problem whereby I need to solve numerically the propagation of waves in water. My past experience with this problem is to use the Helmholtz equation to model the standing wave formation in the system using frequency domain simulations. However, this new problem involves waves of more than two frequencies propagating simultaneously. From what I currently know, one way to derive the Helmholtz equation is to simplify a wave equation by assuming that the system is harmonic (1 frequency only). So here comes my question:
Will it work if I solve for the standing wave for each frequency using the Helmholtz equation and superimpose them to form a composite standing wave?
I suppose it could work if I assumed that the waves are independent of each other.
Am I stuck with resorting to solving the time-dependent wave equation until steady state is achieved if I want more realistic results? As the time-step required may be too large, are there any other methods or alternatives to simplify the problem?
Any further insight into this problem would be much appreciated. Many thanks.
Answer: In brief: if you're looking at solutions more complex than a standing wave, you may have to resort to the time-dependent wave equation. However, we can use to our advantage that any general solution can be expressed as the weighted sum of all mode shapes, where the weights are functions of time.
I will assume that your problem is an initial conditions problem, where you know what the displacement and velocity of the system is at $t=0$, and you want to find how the system evolves over time. If not, please let me know.
Solving for propagating waves
Note that the Helmholtz equation
$$\nabla^2 u = -\lambda u$$
will have multiple (if not infinite) solutions, where each solution corresponds to each standing wave. Each solution consists of a pair made up of an eigenfunction/mode shape $u_n(\mathbf{x})$ and an eigenvalue $\lambda_n$, where $n$ denotes the $n^\mathrm{th}$ standing wave. The eigenvalue $\lambda_n$ is related to resonant frequency $\omega_n$ by
$$\lambda_n = \frac{\omega_n^2}{c^2}$$
If we want to understand how waves other than standing waves - such as propagating waves - behave, we do need to resort to using the time-dependent wave equation again:
$$\ddot u = c^2 \nabla^2 u$$
Trying to numerically solve the wave equation from scratch is very cumbersome, involving solving over both space and time domains.
However, we can greatly simplify things if we know all the mode shapes and resonant frequencies. A general solution of the wave equation can be expressed as the weighted sum of the mode shapes:
$$u(\mathbf{x},t) = \sum_{n=1}^{N} q_n(t) \, u_n(\mathbf{x})$$
$N$ is the number of modes, which will be infinite for continuous systems like strings, water waves, etc.
Now, to solve for the general solution, we just need to determine these "weighting functions"/modal coordinates $q_n(t)$. Note that the case of a standing wave is when all but one of the modal coordinates are zero.
Provided that the mode shapes are normalised, say, such that
$$\int_\Omega |u_n(\mathbf{x})|^2 \, \mathrm{d}m = 1$$
then we can transform our more complicated time-and-space-dependent wave equation into individual simpler time-dependent ODEs:
$$\ddot{q}_n + \omega_n^2 q_n = 0$$
This is the ODE for undamped harmonic motion, at frequency $\omega_n$.
There will exist an ODE for each modal coordinate, and these are much simpler to solve numerically than a PDE over space and time. In fact, for this simple case, analytical expressions can be obtained for the general solution of each modal coordinate, and you would only have to solve for the constants of the general solution.
We need the values $q_n(0)$ and $\dot{q}_n(0)$ (the initial conditions) of each ODE to be able to solve, and these can be determined from $u(x,0)$ and $\dot{u}(x,0)$ using the following expressions by setting $t=0$:
$$q_n(t) = \int_\Omega u(\mathbf{x},t) \, u_n(\mathbf{x}) \, \mathrm{d}m$$
$$\dot{q}_n(t) = \int_\Omega \dot{u}(\mathbf{x},t) \, u_n(\mathbf{x}) \, \mathrm{d}m$$
Depending on whether the mode shapes are known as analytical or numerical expression, these expressions may need to be evaluated by analytical or numerical integration.
In summary
In summary, to find the solution for propagating waves:
Find ALL the mode shapes and resonant frequencies using the Helmholtz equation
Determine the initial conditions for all the ODEs for each mode
Solve each ODE for $q_n(t)$
Construct the general solution using $u(\mathbf{x},t) = \sum_{n=1}^{N} q_n(t) \, u_n(\mathbf{x})$
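As a concrete sketch of the four steps for a string fixed at both ends, where the mode shapes $\sqrt{2/L}\sin(n\pi x/L)$ and frequencies $n\pi c/L$ are known analytically (assumptions: $L = c = 1$, unit linear density, zero initial velocity):

```python
import numpy as np

L_len, c, N = 1.0, 1.0, 8          # string length, wave speed, modes kept
x = np.linspace(0.0, L_len, 2001)
dx = x[1] - x[0]

def mode(n):
    # Mass-normalized mode shape of a fixed-fixed string (unit density)
    return np.sqrt(2.0 / L_len) * np.sin(n * np.pi * x / L_len)

omega = np.pi * c / L_len * np.arange(1, N + 1)

# Initial displacement: a known mix of modes 1 and 3
u0 = 0.7 * mode(1) + 0.2 * mode(3)

# Step 2: project u(x, 0) onto each mode to recover q_n(0)
q0 = np.array([np.sum(u0 * mode(n)) * dx for n in range(1, N + 1)])

# Steps 3-4: q_n(t) = q_n(0) cos(omega_n t) for zero initial velocity,
# then superpose the modes to reconstruct u(x, t)
def u_at(t):
    q = q0 * np.cos(omega * t)
    return sum(qn * mode(n) for n, qn in zip(range(1, N + 1), q))
```

The projection recovers $q_1(0) \approx 0.7$ and $q_3(0) \approx 0.2$, and the reconstruction repeats with the fundamental period $2L/c$.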
Big caveat
Note that we require knowledge of ALL mode shapes and resonant frequencies. For continuous systems like water waves, there are infinitely many modes, and so this task is infeasible. However, a practical solution is to approximate using finitely many modes by excluding modes whose resonant frequencies are well above the frequency range of interest, effectively applying a low pass filter to the true solution. | {
"domain": "physics.stackexchange",
"id": 75350,
"tags": "homework-and-exercises, waves, acoustics"
} |
Transition Amplitude in Free Field Theory | Question: In my course notes for quantum field theory, there is a section on calculating the probability of a 2 particle state making a transition to the 1 particle state:
$= \langle\vec{k}_{out} | \vec{k}_{1} \vec{k}_{2}\rangle$
$= \langle 0|a^{\dagger}_{k_{out}} a_{k_{1}} a_{k_{2}}|0\rangle$
$= \langle 0|[a^{\dagger}_{k_{out}}, a_{k_{1}}] a_{k_{2}}|0\rangle + \langle 0|a_{k_{1}} a^{\dagger}_{k_{out}} a_{k_{2}}|0\rangle$
$= 0$
I can see how the 3rd step leads to $0$, but I do not understand how the 2nd step leads to the 3rd step.
Answer: Edit: The convention I used is apparently different than yours, although the method is still the same. What notes are you using?
We use the commutation relations
$$
[a_p,a^\dagger_q] = (2\pi)^3\delta^{(3)}(\vec{p}-\vec{q}),\qquad [a_p,a_q]=[a^\dagger_p,a^\dagger_q]=0
$$
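As an aside, these relations and the vanishing of the amplitude below can be checked numerically on a truncated Fock space. This is only a sketch: with discrete modes the delta function becomes a Kronecker delta, i.e. $[a_1, a_1^\dagger] = 1$.

```python
import numpy as np

def lowering(dim):
    # Truncated annihilation operator: a|n> = sqrt(n)|n-1>
    return np.diag(np.sqrt(np.arange(1.0, dim)), k=1)

dim = 5
a = lowering(dim)
I = np.eye(dim)

# Two distinct modes k1, k2 on a tensor-product Fock space
a1, a2 = np.kron(a, I), np.kron(I, a)
vac = np.zeros(dim * dim)
vac[0] = 1.0                      # the vacuum |0, 0>

# <0| a_1 a_1^dag a_2^dag |0>: one annihilator, two creators -> vanishes
amp = vac @ a1 @ a1.T @ a2.T @ vac

# <0| a_1 a_1^dag |0> = <0| ([a_1, a_1^dag] + a_1^dag a_1) |0> = 1
norm = vac @ a1 @ a1.T @ vac
```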
Now we start with $$
\langle k|k_1k_2\rangle=\langle 0|a_k a^\dagger_{k_1}a^\dagger_{k_2}|0\rangle
$$
Now we use the fact that $a_k a^\dagger_{k_1}=a^\dagger_{k_1}a_k+(2\pi)^3\delta^{(3)}({k}-{k_1})$.
This gives
$$
\langle 0|a_k a^\dagger_{k_1}a^\dagger_{k_2}|0\rangle=\langle 0|\left(a^\dagger_{k_1}a_k+(2\pi)^3\delta^{(3)}({k}-{k_1})\right)a^\dagger_{k_2}|0\rangle
$$
$$
=\langle0|a^\dagger_{k_1}a_ka^\dagger_{k_2}|0\rangle+(2\pi)^3\delta^{(3)}(k-k_1)\langle0|a^\dagger_{k_2}|0\rangle
$$
we use the commutation relations again
$$
=\langle0|a^\dagger_{k_1}\left(a^\dagger_{k_2}a_k+(2\pi)^3\delta^{(3)}(k-k_2)\right)|0\rangle+(2\pi)^3\delta^{(3)}(k-k_1)\langle0|a^\dagger_{k_2}|0\rangle
$$
$$
=0+(2\pi)^3\delta^{(3)}(k-k_2)\langle0|a^\dagger_{k_1}|0\rangle
+(2\pi)^3\delta^{(3)}(k-k_1)\langle0|a^\dagger_{k_2}|0\rangle=0$$ | {
"domain": "physics.stackexchange",
"id": 64130,
"tags": "homework-and-exercises, quantum-field-theory, field-theory"
} |
Determining the product which is not possible in the reaction | Question: This is an objective-type question I encountered in an objective test.
NBS (N-bromosuccinimide) is used for the allylic substitution of $\ce{Br}$ in a reaction. In this reaction, since the allylic carbon is the carbon adjacent to the carbon having double bonds, I thought that options B), C) and D) can all be the products but not A). However, the answer key gives the answer as D) and I can't understand why.
Answer: D is a product, but only one product. Not two products. Those two are the same molecule flipped on a mirror plane. Were the molecule chiral (having a non-superimposable asymmetric center), then it could be two products. Not as it sits now.
Notice D1 is the same as B1. D2 is also B1, but flipped. Six products.
D2 is an enantiomer of D1, but not identical to it. I cannot puzzle out how D is supposed to be the correct answer. | {
"domain": "chemistry.stackexchange",
"id": 8005,
"tags": "reaction-mechanism, halides, radicals"
} |
Minkowski metric and definition of coordinate differentials? | Question: This is probably a really silly confusion I have about the definition of “coordinate differentials”, which I thought were things like $dx,dy,dz$
etc. The Minkowski line element $$ds^{2}=c^{2}dt^{2}-dx^{2}-dy^{2}-dz^{2}$$
defines the Minkowski metric $$\left[\eta_{\mu\nu}\right]=\left(\begin{array}{cccc}
c^2 & 0 & 0 & 0\\
0 & -1 & 0 & 0\\
0 & 0 & -1 & 0\\
0 & 0 & 0 & -1
\end{array}\right).$$
Using index notation, the line element can be written as $ds^{2}=\eta_{\mu\nu}dx^{\mu}dx^{\nu}$.
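As a quick numerical aside (a sketch with arbitrary values): contracting $\mathrm{diag}(c^2,-1,-1,-1)$ with $(dt,dx,dy,dz)$ gives the same $ds^2$ as contracting $\mathrm{diag}(1,-1,-1,-1)$ with $(c\,dt,dx,dy,dz)$, so the two bookkeeping conventions agree.

```python
import numpy as np

c = 3.0e8
dt, dx, dy, dz = 1e-9, 0.1, 0.2, 0.05

# Metric written against the coordinate t: eta = diag(c^2, -1, -1, -1)
eta_t = np.diag([c**2, -1.0, -1.0, -1.0])
d_t = np.array([dt, dx, dy, dz])
ds2_t = d_t @ eta_t @ d_t

# Metric written against the coordinate ct: eta = diag(1, -1, -1, -1)
eta_ct = np.diag([1.0, -1.0, -1.0, -1.0])
d_ct = np.array([c * dt, dx, dy, dz])
ds2_ct = d_ct @ eta_ct @ d_ct
```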
In textbooks I have seen the terms $dx^{\mu},dx^{\nu}$
called “coordinate differentials”, which seems OK except $dx^{0}=cdt$.
I realise this is trivial, but is it correct to call $cdt$
a “coordinate differential”? To me it looks like a coordinate differential $dt$
multiplied by $c$.
Answer: I think this is largely just terminology. Strictly speaking the coordinate is $ct$ not $t$, and of course $d(ct) = cdt$. In any case we usually choose units where $c = 1$ and just ignore it. | {
"domain": "physics.stackexchange",
"id": 14224,
"tags": "relativity, metric-tensor, notation"
} |
Correct semantic html5 | Question: I would be thankful for feedback on the following HTML5 markup.
The outer div (class articles) has no more meaning than setting flex for its child elements. Is a div the semantically correct choice in this case?
.articles {
display: flex;
}
.articles__article {
flex: 1;
}
<main class="main">
<div class="articles">
<article class="articles__article">
<h1 class="article__headline">Headline</h1>
<p class="article__teaser">Text Text Text</p>
</article>
<article class="articles__article">
<h1 class="article__headline">Headline</h1>
<p class="article__teaser">Text Text Text</p>
</article>
<article class="articles__article">
<img class="article__image" src="https://via.placeholder.com/150">
</article>
</div>
</main>
Answer: Yes, in this case it is. As you stated yourself it has "no more meaning". It only exists for the purpose of styling.
This is what the MDN has to say about div:
The HTML Content Division element (<div>) is the generic container for
flow content. It has no effect on the content or layout until styled
using CSS.
As a "pure" container, the element does not inherently represent
anything. Instead, it's used to group content so it can be easily
styled using the class or id attributes, marking a section of a
document as being written in a different language (using the lang
attribute), and so on.
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/div | {
"domain": "codereview.stackexchange",
"id": 34637,
"tags": "html5"
} |
Autoware CUDA build failed, required packages error about colcon, invalid syntax | Question:
Hi,
I am using Ubuntu 16.04. I was trying to build the docker image of autoware.ai using CUDA and colcon, but got errors about exception loading extension related to colcon. My environment:
Ubuntu 16.04
CUDA 10
I have tried different versions of Autoware.ai from 1.12.0 to the latest master branch, no one is working.
Below is part of the error log. The errors keep repeating for building each package:
13:13:48 dev@lambda-quad ~/autoware.ai $ AUTOWARE_COMPILE_WITH_CUDA=1 colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release
ERROR:colcon.colcon_core.entry_point:Exception loading extension 'colcon_core.task.build.python': invalid syntax (dist.py, line 585)
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 120, in load_entry_points
extension_type = load_entry_point(entry_point)
File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 168, in load_entry_point
return entry_point.load()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 2450, in load
return self.resolve()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 2456, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python3/dist-packages/colcon_core/task/python/build.py", line 8, in <module>
import setuptools # noqa: F401
File "/usr/local/lib/python3.5/dist-packages/setuptools/__init__.py", line 18, in <module>
from setuptools.dist import Distribution
File "/usr/local/lib/python3.5/dist-packages/setuptools/dist.py", line 585
license_files: Optional[List[str]] = self.metadata.license_files
^
SyntaxError: invalid syntax
ERROR:colcon.colcon_core.entry_point:Exception loading extension 'colcon_core.task.build.ros.ament_python': invalid syntax (dist.py, line 585)
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 120, in load_entry_points
extension_type = load_entry_point(entry_point)
File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 168, in load_entry_point
return entry_point.load()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 2450, in load
return self.resolve()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 2456, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python3/dist-packages/colcon_ros/task/ament_python/build.py", line 17, in <module>
from colcon_core.task.python.build import PythonBuildTask
File "/usr/lib/python3/dist-packages/colcon_core/task/python/build.py", line 8, in <module>
import setuptools # noqa: F401
File "/usr/local/lib/python3.5/dist-packages/setuptools/__init__.py", line 18, in <module>
from setuptools.dist import Distribution
File "/usr/local/lib/python3.5/dist-packages/setuptools/dist.py", line 585
license_files: Optional[List[str]] = self.metadata.license_files
^
SyntaxError: invalid syntax
[0.132s] ERROR:colcon.colcon_core.entry_point:Exception loading extension 'colcon_core.package_identification.python_setup_py': invalid syntax (dist.py, line 585)
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 120, in load_entry_points
extension_type = load_entry_point(entry_point)
File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 168, in load_entry_point
return entry_point.load()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 2450, in load
return self.resolve()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 2456, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 9, in <module>
import setuptools
File "/usr/local/lib/python3.5/dist-packages/setuptools/__init__.py", line 18, in <module>
from setuptools.dist import Distribution
File "/usr/local/lib/python3.5/dist-packages/setuptools/dist.py", line 585
license_files: Optional[List[str]] = self.metadata.license_files
^
SyntaxError: invalid syntax
[0.135s] ERROR:colcon.colcon_core.entry_point:Exception loading extension 'colcon_core.package_identification.ros': invalid syntax (dist.py, line 585)
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 120, in load_entry_points
extension_type = load_entry_point(entry_point)
File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 168, in load_entry_point
return entry_point.load()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 2450, in load
return self.resolve()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 2456, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python3/dist-packages/colcon_ros/package_identification/ros.py", line 16, in <module>
from colcon_python_setup_py.package_identification.python_setup_py \
File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 9, in <module>
import setuptools
File "/usr/local/lib/python3.5/dist-packages/setuptools/__init__.py", line 18, in <module>
from setuptools.dist import Distribution
File "/usr/local/lib/python3.5/dist-packages/setuptools/dist.py", line 585
license_files: Optional[List[str]] = self.metadata.license_files
^
SyntaxError: invalid syntax
[0.279s] ERROR:colcon.colcon_core.entry_point:Exception loading extension 'colcon_core.package_augmentation.python_setup_py': invalid syntax (dist.py, line 585)
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 120, in load_entry_points
extension_type = load_entry_point(entry_point)
File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 168, in load_entry_point
return entry_point.load()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 2450, in load
return self.resolve()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 2456, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_augmentation/python_setup_py.py", line 9, in <module>
from colcon_python_setup_py.package_identification.python_setup_py import \
File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 9, in <module>
import setuptools
File "/usr/local/lib/python3.5/dist-packages/setuptools/__init__.py", line 18, in <module>
from setuptools.dist import Distribution
File "/usr/local/lib/python3.5/dist-packages/setuptools/dist.py", line 585
license_files: Optional[List[str]] = self.metadata.license_files
^
SyntaxError: invalid syntax
[0.282s] ERROR:colcon.colcon_core.entry_point:Exception loading extension 'colcon_core.package_augmentation.ros': invalid syntax (dist.py, line 585)
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 120, in load_entry_points
extension_type = load_entry_point(entry_point)
File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 168, in load_entry_point
return entry_point.load()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 2450, in load
return self.resolve()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 2456, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python3/dist-packages/colcon_ros/package_identification/ros.py", line 16, in <module>
from colcon_python_setup_py.package_identification.python_setup_py \
File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 9, in <module>
import setuptools
File "/usr/local/lib/python3.5/dist-packages/setuptools/__init__.py", line 18, in <module>
from setuptools.dist import Distribution
File "/usr/local/lib/python3.5/dist-packages/setuptools/dist.py", line 585
license_files: Optional[List[str]] = self.metadata.license_files
^
SyntaxError: invalid syntax
There was no error before the EOL of ROS Kinetic. I am not sure whether this build error is related to the recent ROS Kinetic end-of-life. I have updated my ROS GPG key. Anyone know why this error happened?
Originally posted by fangzhou on ROS Answers with karma: 13 on 2021-06-25
Post score: 1
Original comments
Comment by christophebedard on 2021-06-28:
It looks like you might have an older version of Python. It's complaining about type annotation, which is only supported from Python 3.5+. The traceback shows /usr/local/lib/python3.5/, so it would be weird if you didn't actually have Python 3.5, but I don't see any other explanation for that error. If you could provide all the commands you ran, the tutorial you followed, etc., we might be able to help.
Comment by christophebedard on 2021-06-28:
If you can, maybe try to modify any command you run/dockerfile you build to add the --include-eol-distros option for rosdep update. Kinetic is EOL, and it could cause some problems as you say, but this error seems unrelated.
Comment by fangzhou on 2021-07-01:
@christophebedard I have updated details of my question and the errors. I have python 3.5 installed. --include-eol-distros has been added before the build. I am not sure why this error happened. All I guess maybe related to the recent ROS Kinetic EOL.
Comment by christophebedard on 2021-07-02:
If you could provide all the commands you ran, the tutorial you followed, etc., we might be able to help, otherwise it's quite hard. How did you install colcon? Has colcon ever worked before on this machine/setup? Also, you say you're trying to build a dockerfile, but this looks like straight command-line.
Comment by fangzhou on 2021-07-02:
@christophebedard Thanks for the reply. The tutorial I followed is https://github.com/Autoware-AI/autoware.ai/wiki/Source-Build. I am using Ubuntu 16.04 and CUDA 10. The only difference is I use rosdep update --include-eol-distros for the ros update.
I was trying to build a docker image of a project based on Autoware, but got the build error. So I switched to build the Autoware.ai to see if it works, then I got the same build error.
I was able to build the Autoware.ai about 3 months ago. I use the same the same desktop and environment.
Answer:
I was able to reproduce by following the tutorial (https://github.com/Autoware-AI/autoware.ai/wiki/Source-Build).
I don't know why exactly, but it seems that the last version of setuptools that is compatible with Python 3.5 (which is what we have on Ubuntu 16.04) is 51.3.3. However, by default with the pip3 install -U setuptools command (from the tutorial), we get version 57.0.0. The solution should therefore be to uninstall it and install 51.3.3:
pip3 uninstall setuptools && pip3 install setuptools==51.3.3
See this GitHub issue: https://github.com/pypa/setuptools/issues/2541#issuecomment-763275516. This fixes the SyntaxError with colcon build for me, but I haven't done any other validation.
Here are all the commands I ran, including the solution at the end: https://gist.github.com/christophebedard/349bff22f18285c6ebd9ecb1a2e79590
Originally posted by christophebedard with karma: 641 on 2021-07-02
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by fangzhou on 2021-07-06:
Thanks! I use "pip3 install setuptools==51.3.3" instead of "pip3 install -U setuptools" and it works. There are still some packages that cannot be built, but they belong to different issues. Appreciate! | {
"domain": "robotics.stackexchange",
"id": 36581,
"tags": "ros, colcon, ros-kinetic, ubuntu, cuda"
} |
No. of meiotic divisions to produce specific no. of seeds | Question: If I want to produce 100 seeds, then the no. of meiotic divisions needed is 125, which can be calculated by the formula x + x / 4.
x = no. of seeds produced.
How is this formula derived?
Answer: Assuming that you have studied megasporogenesis and microsporogenesis: to produce a seed, you require the production of a pollen grain (n) and an egg (n) and their fusion.
Let's start with pollen grain(n):
4 pollen grains are produced after 1 meiotic division in the pollen sacs.
$$4~\text{pollen grains} = 1~\text{meiotic division}$$
To produce 1 pollen grain.
$$ 1~\text{pollen grain} = 1/4~\text{meiotic division} $$
For production of $x$ pollen grains.
$$ x~\text{pollen grains}=x/4~ \text{meiotic divisions}\quad(1) $$
Now coming to egg(n):
Only 1 egg is produced after 1 meiotic division in the megasporangium.
$$ 1~\text{egg}= 1~\text{meiotic division}$$
For production of $x$ no. of eggs.
$$ x~\text{eggs}=x~\text{meiotic divisions}\quad(2) $$
For production of $x$ no. of seeds (2n):
$x$ pollen grains and eggs are required. No. of meiotic divisions to produce $x$ pollen grains and eggs.
$$x~\text{seeds}=x + x/4~\text{meiotic divisions} \quad\text{From}~(1)~\text{and}~(2) $$ | {
"domain": "biology.stackexchange",
"id": 6892,
"tags": "meiosis, sexual-reproduction"
} |
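The derivation above is simple arithmetic, so it can be sanity-checked with a short sketch (the function name is mine, not from the answer):

```python
def meiotic_divisions(seeds):
    """Meiotic divisions needed to produce `seeds` seeds:
    one division per egg, plus one division per four pollen grains (x + x/4)."""
    return seeds + seeds / 4

# For 100 seeds: 100 egg divisions + 25 pollen divisions = 125.
```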
An Add/Minus Operator For Boost.MultiArray in C++ | Question: This is a follow-up question for An element_wise_add Function For Boost.MultiArray in C++. The following code is the improved version based on G. Sliepen's answer. In addition, the built-in function .num_dimensions() is used here for array dimension mismatch detection.
template<typename T>
concept is_multi_array = requires(T x)
{
x.num_dimensions();
x.shape();
boost::multi_array(x);
};
// Add operator
template<class T1, class T2> requires (is_multi_array<T1>&& is_multi_array<T2>)
auto operator+(const T1& input1, const T2& input2)
{
if (input1.num_dimensions() != input2.num_dimensions()) // Dimensions are different, unable to perform element-wise add operation
{
throw std::logic_error("Array dimensions are different");
}
if (*input1.shape() != *input2.shape()) // Shapes are different, unable to perform element-wise add operation
{
throw std::logic_error("Array shapes are different");
}
boost::multi_array output(input1); // [ToDo] drawback to be improved: avoiding copying whole input1 array into output, but with appropriate memory allocation
for (decltype(+input1.shape()[0]) i = 0; i < input1.shape()[0]; i++)
{
output[i] = input1[i] + input2[i];
}
return output;
}
// Minus operator
template<class T1, class T2> requires (is_multi_array<T1>&& is_multi_array<T2>)
auto operator-(const T1& input1, const T2& input2)
{
if (input1.num_dimensions() != input2.num_dimensions()) // Dimensions are different, unable to perform element-wise minus operation
{
throw std::logic_error("Array dimensions are different");
}
if (*input1.shape() != *input2.shape()) // Shapes are different, unable to perform element-wise minus operation
{
throw std::logic_error("Array shapes are different");
}
boost::multi_array output(input1); // [ToDo] drawback to be improved: avoiding copying whole input1 array into output, but with appropriate memory allocation
for (decltype(+input1.shape()[0]) i = 0; i < input1.shape()[0]; i++)
{
output[i] = input1[i] - input2[i];
}
return output;
}
The test of the add/minus operator:
int main()
{
// Create a 3D array that is 3 x 4 x 2
typedef boost::multi_array<double, 3> array_type;
typedef array_type::index index;
array_type A(boost::extents[3][4][2]);
// Assign values to the elements
int values = 0;
for (index i = 0; i != 3; ++i)
for (index j = 0; j != 4; ++j)
for (index k = 0; k != 2; ++k)
A[i][j][k] = values++;
for (index i = 0; i != 3; ++i)
for (index j = 0; j != 4; ++j)
for (index k = 0; k != 2; ++k)
std::cout << A[i][j][k] << std::endl;
auto sum_result = A;
for (size_t i = 0; i < 3; i++)
{
sum_result = sum_result + A;
}
sum_result = sum_result - A;
for (index i = 0; i != 3; ++i)
for (index j = 0; j != 4; ++j)
for (index k = 0; k != 2; ++k)
std::cout << sum_result[i][j][k] << std::endl;
return 0;
}
All suggestions are welcome.
Which question it is a follow-up to?
An element_wise_add Function For Boost.MultiArray in C++
What changes has been made in the code since last question?
The previous question is the implementation of an element_wise_add function for Boost.MultiArray. Based on G. Sliepen's suggestion, this kind of element-wise operation could be implemented with C++ operator overloading. As a result, here are the operator+ and operator- overload functions for Boost.MultiArray.
Why a new review is being asked for?
The operator+ and operator- overload functions here include the exception handling for the situation of different dimensions and shapes. Please check if this usage is appropriate. Moreover, I am still seeking another smarter method to improve the construction of output object. If there is any better idea for deducing the output structure from input without copying process, please let me know.
Answer:
if (input1.num_dimensions() != input2.num_dimensions())
{
throw std::logic_error("Array dimensions are different");
}
If I'm not mistaken, the dimensions are static, and can be accessed through the array class's dimensionality constant, so I think this check could be part of the function requirements, rather than a run-time error.
if (*input1.shape() != *input2.shape())
{
throw std::logic_error("Array shapes are different");
}
Is it possible to create an array with num_dimensions() == 0? I think we could use a (probably static_) assertion to prove to ourselves and later maintainers that dereferencing the pointer returned by shape() is safe.
I believe *input1.shape() could be written as input1.size(), which is simpler.
for (decltype(+input1.shape()[0]) i = 0; i < input1.shape()[0]; i++)
Isn't input1.shape()[0] simply *input1.shape()? i.e. input1.size() again.
boost::multi_array output(input1);
So... the type of the output array is always the same as the type of input1? This seems very arbitrary, and could lead to unexpected type differences when simply changing a + b to b + a.
Perhaps we could use decltype(input1[0] + input2[0]) to get a more appropriate "common" element type for the output?
It might be neater to implement operator+= instead, and then implement operator+ using operator+=.
(And similarly for operator-= and operator-). | {
"domain": "codereview.stackexchange",
"id": 43149,
"tags": "c++, recursion, c++20, boost, constrained-templates"
} |
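The last suggestion — implement the compound-assignment operator first and define operator+ on top of it — can be sketched as follows. For brevity the sketch uses std::vector rather than boost::multi_array (so the dimension/shape checks collapse to a single size check), but the shape of the solution carries over:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// In-place element-wise addition: no temporary array is created.
template <class T>
std::vector<T>& operator+=(std::vector<T>& lhs, const std::vector<T>& rhs)
{
    if (lhs.size() != rhs.size())
        throw std::logic_error("Array shapes are different");
    for (std::size_t i = 0; i < lhs.size(); ++i)
        lhs[i] += rhs[i];
    return lhs;
}

// operator+ takes its left operand by value, so the one unavoidable copy
// doubles as the output object, and the element loop is written only once.
template <class T>
std::vector<T> operator+(std::vector<T> lhs, const std::vector<T>& rhs)
{
    lhs += rhs;
    return lhs;
}
```

The same pattern applied to boost::multi_array would keep the shape checks from the question inside operator+=, and operator- / operator-= follow identically.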
Is there a way to prevent the VLP-16 from disconnecting after it starts delivering information? | Question:
When I connect the VLP-16, the connection works fine, but after a few minutes Ubuntu indicates that the connection has been disconnected.
These are the commands I input into the terminal to get a data feed.
sudo ifconfig eth0 up
sudo ip addr add 192.168.1.200 dev eth0
sudo route add -net 192.168.1.0 netmask 255.255.255.0 dev eth0
roslaunch velodyne_pointcloud VLP16_points.launch calibration:=/home/user/PUCK_Calibration.yaml
I need the connection to be stable between the laptop and the LIDAR for a very long time. Is there a stable way to establish a connection? Because when it disconnects, I need to rerun those commands to get data from the lidar again.
Am I doing something wrong?
I am using Ubuntu 14.04 LTS with ROS Indigo.
Originally posted by kparikh on ROS Answers with karma: 53 on 2016-11-17
Post score: 0
Original comments
Comment by gvdhoorn on 2016-11-18:
Are you on a wireless network? Or a network with a DHCP server that has a very short lease-time?
Comment by kparikh on 2016-11-18:
I am on a wireless network. Do I need to turn off wifi while connected to the LIDAR?
Comment by gvdhoorn on 2016-11-18:
No, as your lidar is on a wired connection it should not matter. I was just wondering if frequent disconnects on your wireless might trigger a DHCP release/renew cycle. That could result in loss of routes, leading to what you described.
Comment by gvdhoorn on 2016-11-18:
In general though: unless you need your eth0 for something else, why not just configure it with a static IP? No need to fiddle with ip addr and adding routes.
Comment by kparikh on 2016-11-18:
I'll give that a shot. I was having trouble before when I wasn't fiddling with the ip addr but I'll see if I can make it work with just a static IP.
I was following this answer.
Comment by gvdhoorn on 2016-11-19:
Please just accept your own answer instead of closing the question with just some comments. That will more clearly mark this question as answered. Thanks.
Comment by kparikh on 2016-11-19:
If you don't mind, could you write the answer out. I don't have enough points to accept my own answer yet. I'll accept yours and close it.
Comment by gvdhoorn on 2016-11-19:
I've accepted it for you. Good to hear you got things to work.
Comment by kparikh on 2016-11-19:
Thanks for the help.
Answer:
I checked the wired connection settings and it seems like it was set to Automatic (DHCP). It works now that I set it to manual and input the ip and mask. Thanks.
Originally posted by kparikh with karma: 53 on 2016-11-18
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 26271,
"tags": "ros, vlp-16, velodyne"
} |
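For reference, the equivalent persistent setup on Ubuntu 14.04 without NetworkManager would be a static stanza in /etc/network/interfaces — a sketch, assuming the lidar is still on eth0 (adapt the interface name and addresses to your setup):

```
# /etc/network/interfaces -- static address for the VLP-16 link
auto eth0
iface eth0 inet static
    address 192.168.1.200
    netmask 255.255.255.0
```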
ROS Install on Pandaboard trouble linking ASSIMP 3 collada_urdf | Question:
I have followed this guide to install ROS on my Pandaboard: ros.org/wiki/groovy/Installation/PandaBoard/Source
I have gotten to package 100/103 in the build (collada_urdf) and it fails to build giving me this error:
/opt/ros/groovy/ros_catkin_ws/devel_isolated/collada_urdf/lib/libcollada_urdf.so: undefined reference to `aiScene::aiScene()'
I added set(IS_ASSIMP3 1) and
add_definitions(-DIS_ASSIMP3) to the CMakeLists.txt for collada_urdf but with no luck.
Also, I followed the build instructions to build Assimp 3.
If anyone can provide any suggestions I would really appreciate it.
Full out put of sudo ./src/catkin/bin/catkin_make_isolated --pkg collada_urdf:
pastebin.com/3zLmKYJZ
Originally posted by swillis11 on ROS Answers with karma: 11 on 2013-04-21
Post score: 1
Original comments
Comment by ahendrix on 2013-04-22:
If you don't need collada_urdf, you can try removing it from your src folder and rebuilding.
I've been using the Ubuntu ARM debs with good success: http://www.ros.org/wiki/groovy/Installation/UbuntuARM . No collada_urdf build there either, but I haven't needed it yet.
Comment by William on 2013-04-22:
I think this is related to: https://github.com/ros/robot_model/pull/15#issuecomment-16760508
Answer:
I think the problem is that you have Assimp 2.0 also on the system. collada_urdf then gets confused because it finds Assimp 2.0 and does not bother to check for Assimp 3.0 which is needed by collada_urdf.
You must remove Assimp 2.0:
sudo apt-get remove libassimp2 libassimp-dev
Make sure Assimp 3.0 is installed and the path to the assimp.pc file is listed in your PKG_CONFIG_PATH.
Recompile everything and it should work.
Originally posted by Raptor with karma: 377 on 2013-04-22
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 13899,
"tags": "build, collada-urdf, pandaboard, rosinstall, linking"
} |
Does L have a definition in terms of circuits? | Question: Many complexity classes defined with Turing machines have definitions in terms of uniform circuits. For example, P can also be defined using uniform polynomial size circuits, and similarly BPP, NP, BQP, etc. can be defined with uniform circuits.
So is there a circuit-based definition of L?
An obvious idea would be to allow polynomial size circuits with some depth limitation, but this turns out to define the NC hierarchy.
I was thinking about this question a long time ago, but didn't find an answer. If I remember correctly, my motivation was to understand what the quantum analog of L would look like.
Answer: Well, $L = SC^1$, where $SC^1$ is the class of languages computed by polynomial size circuits of $O(\log n)$ width.
As for $NL$, it could be characterized as the class of languages computed by polynomial size skew circuits (which in some sense is just another way of saying nondeterministic branching programs).
"domain": "cstheory.stackexchange",
"id": 445,
"tags": "cc.complexity-theory, complexity-classes, circuit-complexity"
} |
A hash function with predicted collisions | Question: As far as I know, the more collision-resistant a hash function is, the better. But is there any way to define a hash function with predicted collisions? In other words, a hash function that collides for some known set of possible inputs and avoids collisions for other input values. To state the problem simpler:
Let $A$ be some set of strings.
Define a function $f$ such that $f(x_i) \rightarrow y_i$ with $y_i \neq y_j$ for all $i,j \notin A$ with $i \neq j$, and otherwise $f(x_i) \rightarrow y$ where $y \notin \{y_i \mid i \notin A\}$ is constant.
Is it possible? In addition, is it possible for performance of such a hash function not to depend on the size of $A$?
Answer: Generally the collisions in a hash function are, in a way, what winds up improving the running time.
This is because running time (or size, or insertion time) is generally defined in terms of worst-case input. If you use an algorithm without any randomness or hash, you need to deal with the worst case. For example, finding duplicate elements in an array has an $\Omega(n\log n)$ in the comparison model, same as sorting. But if you just hash the values, you can determine whether or not there's a duplicate in $O(n)$ time and space with a small probability of error. Determining a hash function that wouldn't collide, eliminating the possibility of error, has to take $\Omega(n\log n)$ time. (You may notice that these are, in a way, the same problem--though of course hashed values generally belong to a smaller universe and the comparison model may not be appropriate).
That said, there are some ways to take advantage of the ways hash functions collide. One is locally sensitive hash functions, in which elements with small differences (i.e. normal distance functions for points) hash to the same value. Recently I've been reading this paper(1), which I found really easy to read, and shows some good, intuitive applications for how a locally sensitive hash function can be used to solve seemingly difficult problems--in this case the nearest neighbor problem with small probability of error.
Incidentally, the space advantage of possible collisions is how bloom filters work, a data structure that allows you to store elements of a set, each in space smaller than their actual size. They are a constant factor away from optimal, and the upper and lower bounds demonstrate the tradeoff between error and space.
(1) Neylon, T. A locally sensitive hash for real vectors. SODA2010. | {
"domain": "cs.stackexchange",
"id": 407,
"tags": "cryptography, hash"
} |
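A concrete sketch of the construction asked for (the names and the sentinel choice are mine): store $A$ in a hash set, map everything in $A$ to one reserved output, and fall back to an ordinary collision-resistant hash otherwise. Set membership is expected $O(1)$, so per-query cost does not grow with $|A|$ — though building the set is of course a one-time $O(|A|)$ cost.

```python
import hashlib

def make_colliding_hash(collide_set):
    """Return f such that every input in collide_set hashes to one shared
    sentinel value, while all other inputs get SHA-256 digests that are
    distinct with overwhelming probability."""
    frozen = frozenset(collide_set)
    # 64 zero hex digits: a reserved output that SHA-256 will not produce
    # in practice (finding a preimage for it would break SHA-256).
    sentinel = "0" * 64

    def f(x: str) -> str:
        if x in frozen:  # expected O(1), independent of len(collide_set)
            return sentinel
        return hashlib.sha256(x.encode()).hexdigest()

    return f
```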
Why is the curl of electric field vector inside a hollow waveguide zero for TEM waves? | Question: I have been studying David Griffiths 'Introduction to Electrodynamics', Electromagnetic Waves chapter. For hollow wave guide, the Maxwell's equations
$$\begin{align}
\nabla\cdot\mathbf E &= 0 \\
\nabla\times\mathbf E &= - \frac{\partial \mathbf B}{\partial t}
\end{align}$$
From these we can obtain,
$$\begin{align}
\frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y} &= i\omega B_z \\
\frac{\partial E_z}{\partial y} - ikE_y &= i\omega B_x \\
ikE_x - \frac{\partial E_z}{\partial x} &= i\omega B_y
\end{align}$$
The left hand side of these equations are the components of the curl of the electric field vector. To show that TEM waves (where the z component of both the electric and magnetic fields are zero) cannot exist in a hollow wave-guide, it's stated that, the divergence and curl of the electric field vector are both zero. My question is, how can we show that the curl is zero for this electric field?
Answer: I assume that Griffiths does not mean the curl of the total electric field, just the transverse part of the electric field.
If we assume a wave of the following kind $$\mathbf{E} = \mathbf{E}_T(x,y)e^{j(\omega t - kz)},$$
and we define a tranverse del operator as $\nabla_T = \hat{x}(\frac{\partial}{\partial x}) + \hat{y}(\frac{\partial}{\partial y}),$
then we can state that $\nabla_T \cdot \mathbf{E}_T = 0 $ and $\nabla_T \times \mathbf{E}_T = 0$.
The tranverse field can thus be written in terms of a potential function, similar to the electrostatics case. It is solved by formulating Laplace's equation with the appropriate boundary equations. | {
"domain": "physics.stackexchange",
"id": 86741,
"tags": "electromagnetism, waveguide"
} |
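To make the last point explicit: substituting $E_z = B_z = 0$ (the TEM condition) into the three component equations quoted in the question gives

```latex
\frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y} = i\omega B_z = 0,
\qquad
-ikE_y = i\omega B_x,
\qquad
ikE_x = i\omega B_y .
```

The first equation says exactly that the transverse curl $\nabla_T \times \mathbf{E}_T$ vanishes; the other two merely relate $\mathbf{E}_T$ to $\mathbf{B}_T$. Together with $\nabla_T \cdot \mathbf{E}_T = 0$, this is what lets $\mathbf{E}_T$ be written as the gradient of a potential obeying Laplace's equation, as stated in the answer.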
Communication in ROS and ROS2 | Question:
ROS uses either TCP/ IP or UDP communication protocol. Different ROS nodes can be run on different machines with different network (not same subnet mask) only with one master provided that each machine can communicate bidirectionally.
Now ROS2 uses DDS for communication. In DDS language, different ROS2 nodes can interact with each other only if they belong to the same domain participant i.e. they are in the same network.
Does that mean, that ROS nodes can communicate with each other even if they are in different network whereas ROS2 nodes can only communicate if they are in the same network ?
Originally posted by aks on ROS Answers with karma: 667 on 2018-09-21
Post score: 0
Answer:
The "domain participant ID" is just a number that allows you to configure a sort of "virtual network" or "virtual overlay" which makes DDS participants that do not use the same number ignore each other, it does not influence the networking component of DDS. See Part 2: Core Concepts > Working with DDS Domains > DomainParticipants > Choosing a Domain ID and Creating Multiple DDS Domains for the RTI explanation of this.
What you are referring to ("only in the same network") is actually due to the fact that in the default configuration many (all?) DDS implementations use (UDP) broadcast traffic for peer discovery.
By default, many routers and gateways are configured to not pass on broadcast traffic from their own network to other networks they are connected to and additionally, the DDS implementation may only broadcast to the local subnet (ie: all IPs in the same range).
But that is not an inherent issue with DDS, nor with ROS2. It's a configuration issue (and many DDS implementations support either discovery with a pre-defined list of participants or a unicast discovery, or even both), and even if the specific middleware doesn't support changing it, there are various networking techniques that can be used to bridge two (or more) network segments in such a way that broadcast traffic is allowed to flow between them.
Originally posted by gvdhoorn with karma: 86574 on 2018-09-21
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by aks on 2018-09-21:
Does that mean, if I use only unicasting for peer discovery, I can connect to a compatible DDS participant in any network?
And that the settings for multicasting can also be configured for the same behaviour?
Comment by gvdhoorn on 2018-09-21:
In theory: yes, that should work.
In practice: internetworking is never easy and can give you interesting problems..
But lots of other techniques can be used as well. There is no need to resort to unicast exclusively.
Comment by aks on 2018-09-21:
Yes, but multicasting is not allowed by most of the Work networks + there is additional overhead of firewalls which could get very tricky.
Comment by gvdhoorn on 2018-09-21:
Note: I wrote broadcast, not multicast. Those are two different things.
If broadcasting is prohibited in a network then a lot of things will stop working. | {
"domain": "robotics.stackexchange",
"id": 31806,
"tags": "ros, ros2, dds, tcpros, udpros"
} |
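As a concrete illustration of the "virtual network" point: in ROS 2 the DDS domain ID is exposed through an environment variable, and two shells using different values will not discover each other even on the same physical network (the value 42 below is arbitrary):

```shell
# ROS 2 maps its domain onto the DDS domain ID via this variable.
export ROS_DOMAIN_ID=42
# Run a node in this shell; a second shell with ROS_DOMAIN_ID=43 on the
# same subnet will not see it:
#   ros2 run demo_nodes_cpp talker
```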
Incompatibility of the Lorentz force with electromagnetic radiation | Question: Let's consider a charged particle with a fixed speed along the x axis (for convenience). There is no external electric field $E$. If this particle enters a region of space with a constant magnetic $B$ in the z axis (for convenience), then the particle's trajectory will become (semi-)circular. This is can be easily retrieved from the Lorentz force law.
Furthermore, the modulus of the velocity of the particle won't change even if it's direction will. Thus, the energy of the particle won't change. This can be derived, for example, using the fact that for a chaged particle inside an electromagnetic field, the energy of the particle is
$\dfrac{d U}{d t} = q \mathbf{v} \cdot \mathbf{E}=0$.
All of this holds even if the speed of the particle is comparable to that of light.
Now, we also know that an accelerating charged particle radiates electromagnetic energy. In this case, we would have synchrotron radiation. This means that the particle must be losing energy, even if it isn't much. And this contradicts the conservation of energy deduced from Lorentz force law!
Could someone please explain to me where I have gone wrong in my reasoning?
Now, I have been thinking about how to resolve this incompatibility, and here is the best I have been able to piece: Because of radiation $U$ would no longer be the energy of the particle, but instead would be the energy of a new system: the particle + the radiated field (let's say "a collection of photons"). Now, the energy of this system would be conserved, as Lorentz law says.
But I am not sure if this idea works or is just pure BS. What do you think?
Thank you!!!
Answer: This question concerns an aspect of electromagnetism called self force (it is also called radiation reaction), and it also concerns the fact that you can't have a truly point-like particle of non-zero charge. This is because such a particle would source a field with infinite field energy.
If you take the limit of a particle in which the charge tends to zero, then the radiated energy tends to zero faster than the Lorentz force and in the limit you get a circular orbit and no radiation.
If we consider a charged body which is not point-like and therefore can have a finite charge, then we have to consider the fact that each part of the charged body produces a field and experiences the field produced by the other parts, in addition to any externally applied field. When the body accelerates these various internal contributions do not sum to zero, and they result in a force called the self-force. The self-force provides just the right amount of force to conserve momentum and energy overall. That is, the emitted radiation carries away energy and momentum, and the self-force makes the radiating body lose that amount of energy and momentum.
Note, all the above concerns classical electromagnetism. In quantum theory a single electron is often said to be 'point-like' but in fact its wavefunction is never confined to an infinitesimally small volume (because that would require infinite energy and there would be pair production). The self-force relates to the aspect of field theory called renormalization, and that is a whole further story. | {
"domain": "physics.stackexchange",
"id": 97385,
"tags": "electromagnetism, special-relativity, electromagnetic-radiation"
} |
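The small-charge limit mentioned in the answer's first paragraph can be made quantitative with the (nonrelativistic) Larmor formula — a side calculation, not part of the answer itself. On the circular orbit $a = qvB/m$, so

```latex
F = qvB \;\propto\; q,
\qquad
P_{\mathrm{rad}} = \frac{q^2 a^2}{6\pi\varepsilon_0 c^3}
                 = \frac{q^4 v^2 B^2}{6\pi\varepsilon_0 c^3 m^2}
  \;\propto\; q^4 .
```

With the cyclotron period $T = 2\pi m/(qB)$, the energy radiated per orbit scales as $P\,T \propto q^3$, which indeed vanishes faster than the $\propto q$ magnetic force as $q \to 0$.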
How to prove that Gravity happens in one plane? | Question: How can we prove that an object which is orbiting another object under its gravity always moves in one plane over a full period, not in different planes (two-body system)?
Answer: This is only true in a two body system. In many body systems like the Solar system the planets continually change their orbital plane (slightly) due to perturbations from other bodies.
In a two body system the orbital plane is constant because the Lagrangian is axially symmetric and that means angular momentum is conserved. This is a consequence of Noether's theorem. Since angular momentum is constant the orbital plane cannot change. | {
"domain": "physics.stackexchange",
"id": 36292,
"tags": "newtonian-gravity, angular-momentum, conservation-laws, orbital-motion, planets"
} |
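Written out, the angular-momentum argument for the two-body case is short. For a central force $\mathbf{F} \parallel \mathbf{r}$:

```latex
\frac{d\mathbf{L}}{dt}
  = \frac{d}{dt}\left(\mathbf{r}\times m\mathbf{v}\right)
  = \mathbf{v}\times m\mathbf{v} + \mathbf{r}\times\mathbf{F}
  = \mathbf{0},
```

so $\mathbf{L}$ is a constant vector; and since $\mathbf{r}\cdot\mathbf{L} = \mathbf{r}\cdot(\mathbf{r}\times m\mathbf{v}) = 0$ at every instant, the orbit stays in the fixed plane through the center that is perpendicular to $\mathbf{L}$.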
What kind of big bird is this seen walking around Taipei? | Question: I've seen this humorous sign warning drivers to keep an eye out for birds many times, but today I finally realized what it's talking about!
This is a large bird and I've seen it all around Taipei in gardens and green areas. It's not shy of people unless you get too close.
I'd guess it's about 30 cm tall when standing upright, but it is often bent down looking for things to eat. It takes one step at a time, sometimes very slowly, and its head moves out of sync with its body as it hunts for food. I am not sure if it can fly, I've only seen it on the ground, and I've never heard it make a sound.
It's various shades of brown and tan, and just like the cartoon on the sign it has long feathers projecting backwards from its head. That can be seen in the center top of the photo montage.
Any idea what kind of bird this is?
These are cell-phone photos from the closest I could get without it getting nervous.
"Life imitates Art far more than Art imitates Life" Oscar Wilde
Answer: I am fairly certain that this is a Malayan Night Heron (Gorsachius melanolophus).
Taiwan is listed as one of the countries where the Malayan night heron can be found. Its diet consists of small reptiles, snails, chilopods, arachnids, crabs and insects. This would explain why you have observed it bend down a lot to look for food.
Needless to say you can find more information in the corresponding Wikipedia article.
Here a photo with better resolution:
Photo by: Pete Morris/Birdquest
In addition, here is a link to a video taken in Taiwan. Does this remind you of the bird that you saw? | {
"domain": "biology.stackexchange",
"id": 8784,
"tags": "species-identification, ornithology"
} |
How much mass would have to be added to the Sun to significantly alter any of its characteristics? | Question: While this might at first sound like an XKCD What If? question, let's proceed with it because astronomical objects do occasionally merge.
How much mass would have to be added to the Sun to significantly alter its characteristics (lifetime, spectral type, size, etc.)?
As an example, from Wikipedia's Rogue planet:
A rogue planet (also termed an interstellar planet, nomad planet, free-floating planet, unbound planet, orphan planet, wandering planet, starless planet, or sunless planet) is a planetary-mass object that does not orbit a star directly. Such objects have been ejected from the planetary system in which they formed or have never been gravitationally bound to any star or brown dwarf. The Milky Way alone may have billions to trillions of rogue planets, a range which the upcoming Nancy Grace Roman Space Telescope will likely be able to narrow down.
...if one doesn't fall into the Sun first!
Just in the last few years alone we've seen both a comet and an asteroid with extra-solar velocities enter the inner solar system, compare that to the 4.6 billion year age of the Sun.
2I/Borisov 2.01 AU perihelion
ʻOumuamua 0.255 AU perihelion
Answer:
How much mass would have to be added to the Sun to significantly alter
its characteristics
Asking how much would be significant is inexact. The Sun is classified as a G2V main sequence star. Though the chart lists one solar mass as G4V, so there's some variation in there. The classifications seem to relate to temperature. To go 1 step up (and using the sun or any G star), the increase of one ranking (decrease in 1 number after G) alternates between a 2% and 5% increase in mass. I'm going to round that to 3%-4%, so for our sun, that's about 30-40 Jupiters (31.4-41.9 if you care about significant digits), but I'm going with 30-40 Jupiter masses added for each step up the classification ladder, ignoring the effects of impact, which would be considerable.
That increase of one in classification level viewed from Earth would be quite significant. 3% more mass corresponds to about the 4th power of that or 12.5% more luminosity. Earth would be very different if the Sun was 12.5% more luminous. I would think that would be enough to turn Earth at least into a steam-bath, perhaps a full blown runaway greenhouse effect due to the abundance of water and feedback mechanisms. Adding mass would also change Earth's orbit, drawing it in slightly and shortening the year, so the increase from Earth would be a bit more than 12.5%.
The stellar mass math goes something like this.
Lifespan decreases by about the 3rd power of an increase in mass.
Energy output or luminosity increases by about the 4th power of the mass, around 1 solar mass. This increase slows down to about the 3.5th power with more massive main sequence stars.
Surface temperature increases slightly with mass, perhaps 10-20 K per 1% increase in mass; about a 1% increase in temperature for every 3% increase in mass is a reasonable ballpark.
Radius increases with added mass too, so the heavier sun would be both brighter and larger. The radius increases enough that the surface gravity decreases. A 3% increase in mass corresponds to around a 2% increase in radius, so the size wouldn't appear very different on these scales.
If the Sun became just 0.5% to 1% more luminous (about 1.5-3 Jupiters in mass) that would be in the ballpark of man-made climate change so far. Enough to melt surface ice over time and cause oceans to rise, so, significant depends a lot on point of view.
These numbers should be taken with a grain of salt because other factors like metallicity and age matter too. Stars like our Sun also change in size and luminosity during their lifetime, growing both larger and more luminous over time. There are more massive main sequence stars than our Sun that are less luminous because they're younger and the reverse as well, though off hand, I can't name any.
Errors fixed. Thank you for pointing them out.
I want to add, since you mention XKCD in the question, if Randall Munroe was answering this question, he would take it to the extreme, answering what happens if you add million Jupiters, or a billion. Fun times. Without his clever diagrams, it wouldn't be the same, so I'll stop here. | {
"domain": "astronomy.stackexchange",
"id": 5020,
"tags": "the-sun, mass, collision"
} |
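The scaling relations used in this answer ($L \propto M^4$ near one solar mass, lifetime $\propto M^{-3}$) are easy to sanity-check numerically; the exponents below are the answer's rough values, not precise stellar-model outputs:

```python
JUPITER_MASSES_PER_SUN = 1047.0  # approximate

def luminosity_ratio(mass_ratio, exponent=4.0):
    """Main-sequence luminosity change for a fractional mass change
    near one solar mass (L ~ M^4, as assumed above)."""
    return mass_ratio ** exponent

def lifespan_ratio(mass_ratio, exponent=-3.0):
    """Main-sequence lifetime change (t ~ M^-3, as assumed above)."""
    return mass_ratio ** exponent

# Adding ~31 Jupiters is a ~3% mass increase:
mass_ratio = 1.0 + 31.4 / JUPITER_MASSES_PER_SUN  # about 1.03
```

With these rough exponents, a 3% heavier Sun comes out roughly 12.5% more luminous and about 9% shorter-lived, matching the figures quoted above.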
Normalizing data in Excel table column with multiple values | Question: I have a table in excel which has 11 columns and 50,000 rows of data.
One of the columns contains email addresses and in some instances multiple addresses, separated with a ; character. E.g. a cell in that column might look like a@gmail.com;b@gmail.com;x@gmail.com.
In those instances where I have multiple addresses, I need to delete the row with the multiple addresses and create new rows in the table with each one of those addresses, also copying and pasting the rest of the columns.
An example of how originally the data might look like and how it should look like after processing:
Before:
After:
In order to accomplish that I wrote this piece of code:
Sub Fix_Table()
i = 1
table_size = Range("Table1").Rows.Count
'Goes through Table1 in "To" column and fixes the recipients
Do While i <= table_size
cell_value = Range("Table1[To]")(i)
If InStr(Range("Table1[To]")(i), ";") Then
from_table = Range("Table1[From]")(i)
subject_table = Range("Table1[Subject]")(i)
receivedDate_table = Range("Table1[Received_Date]")(i)
infolder_table = Range("Table1[In_Folder]")(i)
size_table = Range("Table1[Size]")(i)
weekday_table = Range("Table1[Weekday]")(i)
date_table = Range("Table1[Date]")(i)
month_table = Range("Table1[Month]")(i)
year_table = Range("Table1[Year]")(i)
time_table = Range("Table1[Time]")(i)
inout_table = Range("Table1[In/out]")(i)
recipients_table = Split(cell_value, ";")
number_of_recipients = UBound(recipients_table, 1) - LBound(recipients_table, 1)
Range("Table1[To]")(i).EntireRow.Delete
For k = 1 To number_of_recipients + 1
Range("Table1[To]")(i).EntireRow.Insert
Range("Table1[From]")(i) = from_table
Range("Table1[Subject]")(i) = subject_table
Range("Table1[Received_Date]")(i) = receivedDate_table
Range("Table1[In_Folder]")(i) = infolder_table
Range("Table1[Size]")(i) = size_table
Range("Table1[Weekday]")(i) = weekday_table
Range("Table1[Date]")(i) = date_table
Range("Table1[Month]")(i) = month_table
Range("Table1[Year]")(i) = year_table
Range("Table1[Time]")(i) = time_table
Range("Table1[In/out]")(i) = inout_table
Range("Table1[To]")(i) = recipients_table(k - 1)
i = i + 1
table_size = table_size + 1
Next k
Else
i = i + 1
End If
Loop
End Sub
The above code works; however, it is extremely slow! Is there any faster way that I could do that for that size of data?
Answer: Ok, so this ended up being a little bit longer/more complicated than I hoped. I will do my best to explain so that you can follow along. Please ask questions if you get lost or confused!
Code first, then explanations:
Option Explicit
Private Type TRecord
To As String
From As String
Subject As String
ReceivedDate As Date
InFolder As String
Size As String
Weekday As String
RecordDate As Date
Month As String
Year As String
Time As String
InOut As String
End Type
Sub New_Fix_Table()
' Be sure to add 'Option Explicit' at the top of your modules. This prevents undeclared variables from slipping through.
' Never use underscores in names. They have special meaning to the interpreter.
' table_size = Range("Table1").Rows.Count
' ## Not Needed due to UBound/LBound ##
' Dim tableSize As Long
' Be sure to also properly qualify your range references.
' TableSize = ActiveWorkbook.Range("Table1").Rows.Count - Without proper qualification, your Range is really ActiveWorkbook.Range
' tableSize = ThisWorkbook.Range("Table1").Rows.Count
' ## ##
'Goes through Table1 in "To" column and fixes the recipients
Dim i As Long
' For loops such as this, I prefer a For...Next loop over Do While:
' Do While i <= table_size
' I strongly prefer arrays for this purpose. If it was my own project, I even would use classes, but one step at a time for now.
' Change this to point to the correct worksheet.
Dim inputSheet As Worksheet
Set inputSheet = ThisWorkbook.Worksheets("TargetSheet")
' If your data is in a table, then you can use this method instead of referring to the range.
Dim tableData As Variant
tableData = inputSheet.ListObjects(1).Range.value
' Now, here is a trick I use when processing table data in a much more efficient manner.
' This does require a reference to Microsoft Scripting Runtime
Dim headerIndices As Scripting.Dictionary
Set headerIndices = GetHeaderIndices(tableData)
' Now we have a dictionary where we can use a key and return the index position of that key
' This is where it gets a little bit tricky. If we encounter a row with multiple emails, we need to duplicate the records.
' Otherwise, we want to keep the records as is. For this task, collections to the rescue!
' Having declared a Record Type, I can now use the Type as a container for my data (without needing a class)
Dim record As TRecord
' The records collection will contain the created records
Dim records As Collection
Set records = New Collection
' We loop through arrays using LBound and Ubound (lower bound, upper bound). The '1' denotes rows, whereas '2' would denote columns.
' I add 1 to the lower bound so I can skip the header row.
For i = LBound(tableData, 1) + 1 To UBound(tableData, 1)
' Set all the properties of the record.
record.From = tableData(i, headerIndices("From"))
record.Subject = tableData(i, headerIndices("Subject"))
record.ReceivedDate = tableData(i, headerIndices("Received_Date"))
record.InFolder = tableData(i, headerIndices("In_Folder"))
record.Size = tableData(i, headerIndices("Size"))
record.Weekday = tableData(i, headerIndices("Weekday"))
record.RecordDate = tableData(i, headerIndices("Date"))
record.Month = tableData(i, headerIndices("Month"))
record.Year = tableData(i, headerIndices("Year"))
record.Time = tableData(i, headerIndices("Time"))
record.InOut = tableData(i, headerIndices("In/out"))
' Split the addresses. If there are multiple addresses we dont need to rewrite the record, we just need to adjust the To field.
Dim splitAddresses As Variant
If InStr(tableData(i, headerIndices("To")), ";") > 0 Then
splitAddresses = Split(tableData(i, headerIndices("To")), ";")
Dim j As Long
For j = LBound(splitAddresses) To UBound(splitAddresses)
If Len(splitAddresses(j)) > 1 Then
record.To = splitAddresses(j)
records.Add record
End If
Next
Else
record.To = tableData(i, headerIndices("To"))
records.Add record
End If
Next
' Now we have a collection of all of the records we need. Now, we need to translate those back into an array.
Dim outputData As Variant
' The row is 0 based so we can add headers, but the headerIndices dictionary is already 1-based, so we leave the columns 1 based.
' Admittedly, I would avoid a mis-match of bases for re-dimming an array, I am only doing it this way to prevent confusion.
ReDim outputData(0 To records.Count, 1 To headerIndices.Count)
' An array with the same base-dimensions would be one of the two following:
' ReDim outputData(0 To records.Count, 0 To headerIndices.Count - 1)
' ReDim outputData(1 To records.Count + 1, 1 To headerIndices.Count)
' You would then need to adjust the actual filling of the array as well.
i = LBound(outputData, 2)
Dim header As Variant
' Loop through all of the stored headers
For Each header In headerIndices.Keys
' The LBound here dynamically points to the header row.
outputData(LBound(outputData, 1), i) = header
i = i + 1
Next
' This way we can dynamically fill in the array.
Set headerIndices = GetHeaderIndices(outputData)
i = LBound(outputData, 1) + 1
For Each record In records
outputData(i, headerIndices("To")) = record.To
outputData(i, headerIndices("From")) = record.From
outputData(i, headerIndices("Subject")) = record.Subject
outputData(i, headerIndices("Received_Date")) = record.ReceivedDate
outputData(i, headerIndices("In_Folder")) = record.InFolder
outputData(i, headerIndices("Size")) = record.Size
outputData(i, headerIndices("Weekday")) = record.Weekday
outputData(i, headerIndices("Date")) = record.RecordDate
outputData(i, headerIndices("Month")) = record.Month
outputData(i, headerIndices("Year")) = record.Year
outputData(i, headerIndices("Time")) = record.Time
outputData(i, headerIndices("In/out")) = record.InOut
i = i + 1
Next
' Now we just have to put the output data somewhere. Let's reuse the sheet we pulled from.
OutputArray outputData, inputSheet, "Output_Data"
End Sub
Public Function GetHeaderIndices(ByVal InputData As Variant) As Scripting.Dictionary
If IsEmpty(InputData) Then Exit Function
Dim headerIndices As Scripting.Dictionary
Set headerIndices = New Scripting.Dictionary
headerIndices.CompareMode = TextCompare
Dim i As Long
For i = LBound(InputData, 2) To UBound(InputData, 2)
If Not headerIndices.Exists(Trim(InputData(LBound(InputData, 1), i))) Then _
headerIndices.Add Trim(InputData(LBound(InputData, 1), i)), i
Next
Set GetHeaderIndices = headerIndices
End Function
Public Sub OutputArray(ByVal InputArray As Variant, ByVal InputWorksheet As Worksheet, ByVal TableName As String)
Dim AddLengthH As Long
Dim AddLengthW As Long
If NumberOfArrayDimensions(InputArray) = 2 Then
If LBound(InputArray, 1) = 0 Then AddLengthH = 1
If LBound(InputArray, 2) = 0 Then AddLengthW = 1
Dim r As Range
If Not InputWorksheet Is Nothing Then
With InputWorksheet
.Cells.Clear
Set r = .Range("A1").Resize(UBound(InputArray, 1) + AddLengthH, UBound(InputArray, 2) + AddLengthW)
r.value = InputArray
.ListObjects.Add(xlSrcRange, r, , xlYes).Name = TableName
With .ListObjects(1).Sort
.header = xlYes
.MatchCase = False
.Orientation = xlTopToBottom
.SortMethod = xlPinYin
.Apply
End With
End With
End If
End If
End Sub
Initial Notes
The first thing that struck me about your code was that you literally have no variable declarations. Lines like :
cell_value = ...
Are pretty much the same as :
Dim cell_value as Variant
cell_value = ...
The only difference between the two is that the second is at least explicit about wanting a variant. The first is implicit.
First bit of advice: avoid implicit commands as much as possible. The reason for this is quite simple: there is a tendency to think that the computer is magically doing something it shouldn't be, but really you told it to do exactly what it is doing, and as a result you have a bug that can be nearly invisible.
Consider for example :
myRange = Range("SomeRange")
This declares a myRange (which the reader expects to be a Range) and converts the range to an array. This in itself is confusing, but, even worse, I can still do :
Set myRange = Range("SomeRange")
And it is now a range reference (the only difference being the Set keyword). While it is easy to read the code and determine what is happening for us, you will inevitably lose a bug in there that you will have to search for.
Option Explicit to the Rescue!
Option Explicit is one of the best things in VBA. It is truly simple, but it makes the simplest of bugs super simple to prevent (and even simpler to find). With Option Explicit at the top of a module, the compiler will throw an error when a variable isn't declared.
' This won't compile. Note the minor (but potentially difficult to find) spelling error between Very and Vary.
Dim SomeVeryLongVariableName as Long
SomeVaryLongVariableName = 10
To make Option Explicit a breeze to use:
Open the Developer Window
Press CTRL+T and then CTRL+O.
Check the box for Require Variable Declaration.
While you're in there, I recommend going to the general tab and selecting Break in Class Module under error trapping.
Qualifying References
One of the most common mistakes someone new to VBA will make is doing:
SomeVariable = Range("SomeRange")
' or
SomeVariable = Range("SomeRange").Value
There are two problems with the first version. The problem that is solved by the second version is that we specify the property we are accessing. The default property of a Range is .Value so we don't need .Value, but it is discouraged to implicitly access .Value.
The second issue is that we rely implicitly on ActiveSheet.Range("SomeRange"). This is a silent killer. I refuse to work with the Active Anything unless I absolutely must and even then, I prefer not to. It is always best to specifically call the object you are working with.
' This is literally better than not using a worksheet reference at all
Dim Foo as Worksheet
Set Foo = ActiveSheet
...many lines later...
DoSomething Foo
Why is this little change better? It is using the active sheet! While not ideal, it at least ensures that Foo points to the same worksheet unless we explicitly change the worksheet it is pointing to. It is stronger than the ActiveSheet reference.
Even better would be:
Dim Foo as Worksheet
Set Foo = ThisWorkbook.WorkSheets("SomeFoo")
The ThisWorkbook object is the workbook that is running the code, and by using a string argument as a call to the collection we can return the specific worksheet back.
Working on the Worksheet
Now let's get into the meat of the problem. Your code is slow because you are operating on the worksheet. It's that simple. A hundred calls to Worksheet.Range will be slower than a hundred calls to Data(x, y). An array is faster and easier to use.
Even worse is when you not only access the cells by a range reference, but when you do :
EntireRow.Delete
Now you've really upset the worksheet. It has to update calculations, it has to resize stuff, fix formatting (if in a table), check number formatting, etc. It is a costly operation. If you're deleting a lot of rows...avoid it at all cost.
Enter the world of arrays. Not only are they fast but they are easy. Worksheet.Cells(1, 1) becomes Data(1, 1). Once you load the data into the array (Data) you can manipulate, access, or delete the values all you want. The worksheet doesn't care. It doesn't see what is happening to the same data it was previously responsible for.
Putting It Together
I am not going to go through the code line by line, particularly because I already provided in-line comments to help make the code a bit easier to read. This will be a broad explanation of what the code does.
First, it loads in all the data from a ListObject (excel Table)
into an array. That's the easy part.
Once we have the data, we need to know the indices of the headers.
This makes manipulating the data much easier. It also allows you to
move columns around all you want without breaking the code (just
ensure the names are still there).
Using a custom Type we can store all of the data in a defined
structure. A Type is similar to a Struct in other languages. In
essence, it is a variable that has properties, but that isn't an
object. Thus, it cannot be New'ed up.
Loop through all of the rows, and create new records for cells with
multiple addresses. Since the Type cannot be New'ed, it will
retain old values. This means we don't need to re-create the entire
record for each new row. We just need to change the new values,
then add it to the collection.
Once the collection is loaded with records, we can translate them
into a new array that is appropriately sized. No need to add/remove
rows. It is just the right size at creation.
The OutputArray method will take an array and a worksheet and it
will clear the cells on that worksheet, put the array onto the
worksheet, and then turn that output into a table. Point it where
you want the output to go, and it will do the rest.
I didn't test the code on my end (I didn't want to bother creating a test table), but I imagine it should run in a matter of seconds.
Note: Microsoft Scripting Runtime
To use the code as is, you will need to add a reference to the Microsoft Scripting Runtime Library. There are plenty of resources for adding references, but ask if you get lost.
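If adding the reference is not an option (for example, when a workbook is shared widely), a late-bound variant also works. The trade-off is that you lose IntelliSense and must declare the dictionary variables As Object, using the built-in vbTextCompare constant (which has the same value as Scripting's TextCompare). A minimal sketch:

```vba
' Late-bound equivalent -- no reference to Microsoft Scripting Runtime required.
Dim headerIndices As Object
Set headerIndices = CreateObject("Scripting.Dictionary")
headerIndices.CompareMode = vbTextCompare ' same value (1) as Scripting.TextCompare
```

You would also change the declared types in GetHeaderIndices accordingly (Object instead of Scripting.Dictionary).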
RubberDuck. Use It. Love It. Prof-It
I tend to be shameless about my plugs for Rubberduck, largely because the head of the tool, Mat's Mug, is unabashed in his attempts to convert everyone on SO to using RD. That said, it is an amazing tool, especially for beginners, and it would make most of the above comments stupidly easy to implement. Honestly, it would.
Check it out here: http://rubberduckvba.com/.
Wrapping Up
Let me know how the code above works for you, and do your best to use it as an example for future projects. If you manage to implement even half of those suggestions you'll save yourself potentially months of costly learning experiences through failed projects. Best of luck!
EDIT: Use this code instead to fix the error from above. The error is caused by adding a custom type to a collection (I have never used Types outside of a class before, so I didn't think of the error in advance). This approach is slightly more advanced, but it shouldn't be too complex.
In a Class Module named 'Record'
Option Explicit
Private Type TRecord
ToField As String
FromField As String
Subject As String
ReceivedDate As Date
InFolder As String
Size As String
WeekDay As String
RecordDate As Date
Month As String
Year As String
Time As String
InOut As String
End Type
Private this As TRecord
Public Property Get ToField() As String
ToField = this.ToField
End Property
Public Property Get FromField() As String
FromField = this.FromField
End Property
Public Property Get Subject() As String
Subject = this.Subject
End Property
Public Property Get ReceivedDate() As Date
ReceivedDate = this.ReceivedDate
End Property
Public Property Get InFolder() As String
InFolder = this.InFolder
End Property
Public Property Get Size() As String
Size = this.Size
End Property
Public Property Get WeekDay() As String
WeekDay = this.WeekDay
End Property
Public Property Get RecordDate() As Date
RecordDate = this.RecordDate
End Property
Public Property Get Month() As String
Month = this.Month
End Property
Public Property Get Year() As String
Year = this.Year
End Property
Public Property Get Time() As String
Time = this.Time
End Property
Public Property Get InOut() As String
InOut = this.InOut
End Property
Public Property Let ToField(value As String)
this.ToField = value
End Property
Public Property Let FromField(value As String)
this.FromField = value
End Property
Public Property Let Subject(value As String)
this.Subject = value
End Property
Public Property Let ReceivedDate(value As Date)
this.ReceivedDate = value
End Property
Public Property Let InFolder(value As String)
this.InFolder = value
End Property
Public Property Let Size(value As String)
this.Size = value
End Property
Public Property Let WeekDay(value As String)
this.WeekDay = value
End Property
Public Property Let RecordDate(value As Date)
this.RecordDate = value
End Property
Public Property Let Month(value As String)
this.Month = value
End Property
Public Property Let Year(value As String)
this.Year = value
End Property
Public Property Let Time(value As String)
this.Time = value
End Property
Public Property Let InOut(value As String)
this.InOut = value
End Property
This class uses a code pattern I learned from Mat's Mug. Declare the Type for the class as a Private Type, then declare a private this that refers to that type. As a result, you have an organized Type to hold your variables, and you get intellisense.
Once you do that, you just need to open up the property accessors. In this case, I made everything public. This isn't good practice, but I am avoiding teaching you too much at once (I would prefer not to use a class as is, but it is the best approach at this point).
This Code Goes in Your Module
Option Explicit
Sub New_Fix_Table()
' Be sure to add 'Option Explicit' at the top of your modules. This prevents undeclared variables from slipping through.
' Never use underscores in names. They have special meaning to the interpreter.
' table_size = Range("Table1").Rows.Count
' ## Not Needed due to UBound/LBound ##
' Dim tableSize As Long
' Be sure to also properly qualify your range references.
' TableSize = ActiveWorkbook.Range("Table1").Rows.Count - Without proper qualification, your Range is really ActiveWorkbook.Range
' tableSize = ThisWorkbook.Range("Table1").Rows.Count
' ## ##
'Goes through Table1 in "To" column and fixes the recipients
' For loops such as this, I prefer a For...Next loop over Do While:
' Do While i <= table_size
' I strongly prefer arrays for this purpose. If it was my own project, I even would use classes, but one step at a time for now.
' Change this to point to the correct worksheet.
Dim inputSheet As Worksheet
Set inputSheet = ThisWorkbook.Worksheets("TargetSheet")
' If your data is in a table, then you can use this method instead of referring to the range.
Dim tableData As Variant
tableData = inputSheet.ListObjects(1).Range.value
' Now, here is a trick I use when processing table data in a much more efficient manner.
' This does require a reference to Microsoft Scripting Runtime
Dim headerIndices As Scripting.Dictionary
Set headerIndices = GetHeaderIndices(tableData)
' Now we have a dictionary where we can use a key and return the index position of that key
' This is where it gets a little bit tricky. If we encounter a row with multiple emails, we need to duplicate the records.
' Otherwise, we want to keep the records as is. For this task, collections to the rescue!
' Having declared a Record Type, I can now use the Type as a container for my data (without needing a class)
Dim initialRecord As record
' The records collection will contain the created records
Dim records As Collection
Set records = New Collection
Dim i As Long
' We loop through arrays using LBound and Ubound (lower bound, upper bound). The '1' denotes rows, whereas '2' would denote columns.
' I add 1 to the lower bound so I can skip the header row.
For i = LBound(tableData, 1) + 1 To UBound(tableData, 1)
Set initialRecord = New record
' Set all the properties of the record.
initialRecord.FromField = tableData(i, headerIndices("From"))
initialRecord.Subject = tableData(i, headerIndices("Subject"))
initialRecord.ReceivedDate = tableData(i, headerIndices("Received_Date"))
initialRecord.InFolder = tableData(i, headerIndices("In_Folder"))
initialRecord.Size = tableData(i, headerIndices("Size"))
initialRecord.WeekDay = tableData(i, headerIndices("Weekday"))
initialRecord.RecordDate = tableData(i, headerIndices("Date"))
initialRecord.Month = tableData(i, headerIndices("Month"))
initialRecord.Year = tableData(i, headerIndices("Year"))
initialRecord.Time = tableData(i, headerIndices("Time"))
initialRecord.InOut = tableData(i, headerIndices("In/out"))
' Split the addresses. If there are multiple addresses we dont need to rewrite the record, we just need to adjust the To field.
Dim splitAddresses As Variant
If InStr(tableData(i, headerIndices("To")), ";") > 0 Then
splitAddresses = Split(tableData(i, headerIndices("To")), ";")
Dim j As Long
For j = LBound(splitAddresses) To UBound(splitAddresses)
If Len(splitAddresses(j)) > 1 Then
Dim splitRecord As record
Set splitRecord = New record
' Because of how objects are passed around, you cant copy a class through assignment. You must duplicate the properties manually into a new class.
splitRecord.FromField = initialRecord.FromField
splitRecord.Subject = initialRecord.Subject
splitRecord.ReceivedDate = initialRecord.ReceivedDate
splitRecord.InFolder = initialRecord.InFolder
splitRecord.Size = initialRecord.Size
splitRecord.WeekDay = initialRecord.WeekDay
splitRecord.RecordDate = initialRecord.RecordDate
splitRecord.Month = initialRecord.Month
splitRecord.Year = initialRecord.Year
splitRecord.Time = initialRecord.Time
splitRecord.InOut = initialRecord.InOut
splitRecord.ToField = splitAddresses(j)
records.Add splitRecord
End If
Next
Else
initialRecord.ToField = tableData(i, headerIndices("To"))
records.Add initialRecord
End If
Next
' Now we have a collection of all of the records we need. Now, we need to translate those back into an array.
Dim outputData As Variant
' The row is 0 based so we can add headers, but the headerIndices dictionary is already 1-based, so we leave the columns 1 based.
' Admittedly, I would avoid a mis-match of bases for re-dimming an array, I am only doing it this way to prevent confusion.
ReDim outputData(0 To records.Count, 1 To headerIndices.Count)
' An array with the same base-dimensions would be one of the two following:
' ReDim outputData(0 To records.Count, 0 To headerIndices.Count - 1)
' ReDim outputData(1 To records.Count + 1, 1 To headerIndices.Count)
' You would then need to adjust the actual filling of the array as well.
i = LBound(outputData, 2)
Dim header As Variant
' Loop through all of the stored headers
For Each header In headerIndices.Keys
' The LBound here dynamically points to the header row.
outputData(LBound(outputData, 1), i) = header
i = i + 1
Next
' This way we can dynamically fill in the array.
Set headerIndices = GetHeaderIndices(outputData)
i = LBound(outputData, 1) + 1
Dim outputRecord As record
For Each outputRecord In records
outputData(i, headerIndices("To")) = outputRecord.ToField
outputData(i, headerIndices("From")) = outputRecord.FromField
outputData(i, headerIndices("Subject")) = outputRecord.Subject
outputData(i, headerIndices("Received_Date")) = outputRecord.ReceivedDate
outputData(i, headerIndices("In_Folder")) = outputRecord.InFolder
outputData(i, headerIndices("Size")) = outputRecord.Size
outputData(i, headerIndices("Weekday")) = outputRecord.WeekDay
outputData(i, headerIndices("Date")) = outputRecord.RecordDate
outputData(i, headerIndices("Month")) = outputRecord.Month
outputData(i, headerIndices("Year")) = outputRecord.Year
outputData(i, headerIndices("Time")) = outputRecord.Time
outputData(i, headerIndices("In/out")) = outputRecord.InOut
i = i + 1
Next
' Now we just have to put the output data somewhere. Let's reuse the sheet we pulled from.
OutputArray outputData, inputSheet, "Output_Data"
End Sub
Public Function GetHeaderIndices(ByVal InputData As Variant) As Scripting.Dictionary
If IsEmpty(InputData) Then Exit Function
Dim headerIndices As Scripting.Dictionary
Set headerIndices = New Scripting.Dictionary
headerIndices.CompareMode = TextCompare
Dim i As Long
For i = LBound(InputData, 2) To UBound(InputData, 2)
If Not headerIndices.Exists(Trim(InputData(LBound(InputData, 1), i))) Then _
headerIndices.Add Trim(InputData(LBound(InputData, 1), i)), i
Next
Set GetHeaderIndices = headerIndices
End Function
Public Sub OutputArray(ByVal InputArray As Variant, ByVal InputWorksheet As Worksheet, ByVal TableName As String)
Dim AddLengthH As Long
Dim AddLengthW As Long
If NumberOfArrayDimensions(InputArray) = 2 Then
If LBound(InputArray, 1) = 0 Then AddLengthH = 1
If LBound(InputArray, 2) = 0 Then AddLengthW = 1
Dim r As Range
If Not InputWorksheet Is Nothing Then
With InputWorksheet
.Cells.Clear
Set r = .Range("A1").Resize(UBound(InputArray, 1) + AddLengthH, UBound(InputArray, 2) + AddLengthW)
r.value = InputArray
.ListObjects.Add(xlSrcRange, r, , xlYes).Name = TableName
With .ListObjects(1).Sort
.header = xlYes
.MatchCase = False
.Orientation = xlTopToBottom
.SortMethod = xlPinYin
.Apply
End With
End With
End If
End If
End Sub
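One gap to note: both versions of OutputArray call a NumberOfArrayDimensions helper that is never defined in the answer, so the code will not compile as-is. A minimal sketch of such a helper (the standard probe-until-error idiom; the name is chosen to match the call sites, and it would go in the same module):

```vba
Public Function NumberOfArrayDimensions(ByVal InputArray As Variant) As Long
    ' Probe UBound for successive dimensions until it raises an error.
    Dim i As Long
    Dim bound As Long
    On Error Resume Next
    Do
        i = i + 1
        bound = UBound(InputArray, i)
    Loop While Err.Number = 0
    On Error GoTo 0
    ' The last probe failed, so the array has i - 1 dimensions.
    ' A non-array Variant fails on the first probe and returns 0.
    NumberOfArrayDimensions = i - 1
End Function
```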
The only main difference is that now we are using an object instead of a type, and we must manually copy the object any time we want to create a new one (whereas, with the Type, we just changed the To field). | {
"domain": "codereview.stackexchange",
"id": 27178,
"tags": "vba, excel"
} |
How are these marbles being accelerated? | Question: This question refers to an effect visible starting at around 5m45s in this video1. (The question will make little sense if one has not first watched the clip.)
The observation
At around 5m45s we see 5 marbles beginning to move, single-file, in a straight line, slanting down from right to left, at an approximately 45º angle. During this diagonal traverse the marbles seem to be almost floating through mid-air. Their motion does not look like what one would expect of marbles falling under gravity. (For one thing, the trajectory is rectilinear, not parabolic.)
This is pretty cool, but it's what happens next that really puzzles me.
Once each marble reaches the horizontal channel-like piece at the bottom of its diagonal traverse, it shoots to the left with much greater speed than I would have expected. IOW, there is an apparent discontinuous jump in kinetic energy that I'm having a hard time accounting for.
The easy part
OK, first, here's the explanation for the odd-looking diagonal motion: the marbles are moving between two rigid transparent plates. These plates are positioned relative to each other at a slight angular deviation from a parallel configuration, so that at any horizontal position the marble cannot fit below a certain vertical position: it gets stuck there. The locus of all these points where the marble just fits between the plates determines the diagonal trajectory observed in the video.
The not-so-easy part
But what about the subsequent motion along the horizontal channel?
Let $\frac{1}{2}mv_-^2$ be the kinetic energy of, say, the first marble right before it touches the horizontal channel, and let $\frac{1}{2}mv_+^2$ be its kinetic energy shortly after the marble starts moving along this channel.
If when I first saw this sequence the video had been stopped right before the first marble starts moving along the horizontal channel, I would have expected that $\frac{1}{2}mv_+^2 \approx \frac{1}{2}mv_-^2$, and therefore $v_+ \approx v_-$.2
But this naive expectation appears to be wrong: it sure looks to me like $v_+ \gg v_-$, and therefore $\frac{1}{2}mv_+^2 \gg \frac{1}{2}mv_-^2$.
(It is possible that what I'm calling a "horizontal channel" is not so, and that its leftmost end is lower than its rightmost one, but any effect due to gravity that such an inclination could have would not be apparent right at the beginning of the marble's motion along the channel. IOW, it could not account for the visible jump from $v_-$ to $v_+$. It is also possible that the marbles are getting accelerated through some hidden trickery, but this would be very much against the style and the spirit of these educational videos.)
The only candidate for an explanation that I can come up with is that during the diagonal traverse between the clear plates each marble picks up (somehow) a great deal of angular momentum. IOW, it gets "revved up", as it were. Once it touches the horizontal channel, the friction between the two results in the conversion of some of this high angular momentum into a fast leftward rolling by the marble.
One objection to this hypothesis is that a light smooth marble is more likely to dissipate most of its angular momentum through sliding friction, spinning in place.
But putting aside this objection, how exactly would the marble pick up so much angular momentum during the diagonal traverse?
Alternatively, is there some other explanation for the apparent jump in kinetic energy?
1 The video is a collection of short clips from the Japanese educational children's TV show Pitagora Suitchi (ピタゴラスイッチ or ピタゴラ装置), which means literally "Pythagorean device(s)". (In the US such contraptions are usually referred to as "Rube Goldberg machines".) The phenomenon that motivated this question happens in the clip that begins at around 5m35s. FWIW, this clip is titled "5 marbles" (5つのビー玉), and it dates from 2003. All these clips were produced by the group of Satō Masahiko at Keiō University.
2 Here, I am using $v_-$ and $v_+$ to represent scalar "speeds" rather than vectorial "velocities".
Answer: OK, I get it.
What I had to realize is that if the marbles had rolled down a normal inclined plane with the same vertical drop, their speed along the horizontal channel would be comparable to what's seen in the video. IOW, the kinetic energy at the end is consistent with the potential energy at the beginning.
What made the motion seen in the video confusing to me is that during the diagonal traverse a much greater fraction of the marble's initial potential energy gets converted into rotational kinetic energy, compared to the more familiar case of a marble rolling down an incline, in which only 2/7 of the potential energy goes into rotational kinetic energy, the rest going into linear kinetic energy. This is why, in the video, the linear motion during the diagonal traverse looks so unnaturally slow.
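For a uniform sphere rolling without slipping, $v = R\omega$ and $I = \frac{2}{5}mR^2$, so the 2/7 figure follows directly:

$$\frac{KE_r}{KE_l + KE_r} = \frac{\frac{1}{2}I\omega^2}{\frac{1}{2}mv^2 + \frac{1}{2}I\omega^2} = \frac{\frac{1}{5}mR^2\omega^2}{\frac{1}{2}mR^2\omega^2 + \frac{1}{5}mR^2\omega^2} = \frac{2}{7}.$$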
In fairness to myself, as I said in my original post, I hypothesized that the marbles probably had picked up a substantial angular velocity by the time they touched the horizontal channel.
Many of the comments in essence reiterated this, which I sort of knew already.
My question was in fact asking why this gain in angular velocity.
The following thought experiment was what finally clarified the situation for me.
Imagine first a sphere $\mathcal{S}$ of radius $R$, and a cylinder $\mathcal{C}$ of radius $r < R$ and length $L > 2 R$. IOW, the cylinder $\mathcal{C}$ is both skinnier and longer than the sphere $\mathcal{S}$.
Now imagine the rigid solid $\mathcal{S}^\prime$ obtained by superposing $\mathcal{C}$ and $\mathcal{S}$ so that their centers of mass coincide. IOW, $\mathcal{S}^\prime$ is similar to a sphere of radius $R$, but it has a couple of cylindrical bits sticking out antipodally along one of its axes.
Finally imagine two parallel rigid rails that are far enough apart that the cylindrical bits of $\mathcal{S}^\prime$ can be rested on them, and $\mathcal{S}^\prime$ can be rolled along the two rails while hanging, as it were, between them.
The whole set-up should look something like this:
If the two rails are tilted by some angle, say $0 \lt \theta \lt \pi/4$, so that they describe an inclined plane, then $\mathcal{S}^\prime$ will roll downhill on the rails, as its potential energy gets converted to kinetic energy.
Key assumption: all rolling happens without slipping.
Let $KE_l(t)$ and $KE_r(t)$ be the linear and rotational kinetic energies of $\mathcal{S}^\prime$ as it rolls down the inclined rails.
The fraction of the total kinetic energy coming from the linear motion is approximately1
$$\frac{KE_l(t)}{KE_l(t) + KE_r(t)} \approx \frac{\frac{1}{2}m(r\omega(t))^2}{\frac{1}{2}m(r\omega(t))^2 + \frac{1}{5}m(R\omega(t))^2}=\frac{5 \left(\frac{r}{R}\right)^2}{5 \left(\frac{r}{R}\right)^2 + 2}.
$$
...where $m$ and $\omega(t)$ are the mass and angular velocity of $\mathcal{S}^\prime$, respectively.
Similarly, the fraction of the total kinetic energy coming from the rotational motion is approximately
$$\frac{KE_r(t)}{KE_l(t) + KE_r(t)} \approx \frac{2}{5 \left(\frac{r}{R}\right)^2 + 2}.
$$
IOW, for a fixed $R$, the relative contributions of the linear and rotational motions to the total kinetic energy rapidly approach 0% and 100%, respectively, as $r \to 0$.
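For concreteness (the ratio here is just an illustrative choice), take $r/R = 1/10$. The linear share is then

$$\frac{KE_l}{KE_l + KE_r} \approx \frac{5(1/10)^2}{5(1/10)^2 + 2} = \frac{0.05}{2.05} \approx 2.4\%,$$

so roughly 97.6% of the kinetic energy is rotational — versus a linear share of $5/7 \approx 71\%$ for a sphere rolling down an ordinary incline.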
The phenomenon shown in the video can be thought of as the limiting case where $r \to 0$. The points of contact between the marbles and the clear plates play the role of $\mathcal{S}^\prime$'s two cylindrical bits rolling down the inclined rails.
BTW, I must say that I think this video segment is the prettiest demonstration I recall ever seeing of the principle of energy conservation.
1 The expression for $KE_r$ above is actually the rotational kinetic energy for the original sphere $\mathcal{S}$; this approximation becomes better as $r/R$ becomes smaller. | {
"domain": "physics.stackexchange",
"id": 27015,
"tags": "classical-mechanics, energy, angular-momentum, rotation, rotational-kinematics"
} |
Shannon entropy and information function | Question: I'm trying to solve an exercise about Shannon entropy. I've tried some things but I can't solve it, and I couldn't find information on the internet.
Let's see. We have the Shannon entropy, and a system defined by a probability distribution $p(n)$:
$$S = \sum _{n\in N} p(n) \log (p(n)) $$
Where $N$ can be a finite set of probabilites or a infinite one, and $n$ is integer.
Now let's define the generating function of information (not sure if this is its name in English):
$$G(u) = \sum _{n \in N} p^u(n)$$
The question is, how can I write $S$ in terms of $G(u)$? Also it would be great to have a physical interpretation of this function (but the exercise doesn't ask for it)
My attempts to get the solution: I tried to write $G(1)$ and insert it into the expression of $S$, but I cannot get rid of the logarithm. Also I've tried to make a Taylor expansion of the logarithm, but I cannot write the series around $x=0$ because the logarithm is not analytic in this point. Around $x=1$,
$$\log(x) = - \sum _{k=1} ^{+\infty} \dfrac{(-1)^k (x-1)^k}{k}$$
But it looks like I can't substitute the expression of $G(u)$ inside this using $x\equiv p(n)$.
Any advice or hint is well received. As I've said, I haven't found any information about the function $G(u)$ on the Internet.
Answer: It wasn't. After recalculating, I've realized that if you take the derivative of $G(u)$:
$$\dfrac{dG(u)}{du} = \sum _{n\in N} p^u(n)\log(p(n))$$
using that the derivative of $a^x$ is $a^x \log(a)$. Now, taking $u=1$ in the above expression, you find the Shannon entropy, so
$$S=\left [\dfrac{dG(u)}{du} \right ]_{u=1}$$ | {
"domain": "physics.stackexchange",
"id": 29456,
"tags": "homework-and-exercises, information"
} |
How are Majorana zero-modes protected against fermionic operators? | Question: I am learning about Majorana fermions in topological quantum computation, and more particularly about the Kitaev chain, described by
$$ H = -\mu \sum_{i=1}^N c_i^\dagger c_i - \sum_{i=1}^{N-1} \left(t c_i^\dagger c_{i+1} + \Delta c_i c_{i+1} + h.c.\right) $$
where $c_i = (\gamma_{2i-1} + i\gamma_{2i})/2$ is the annihilation operator written as a sum of two Majorana fermions $\gamma_{2i-1}$ and $\gamma_{2i}$.
In the special case where $t=|\Delta|$ and $\mu = 0$, this Hamiltonian simplifies to
$$ H = 2t \sum_{i=1}^{N-1} \left[ d_i^\dagger d_i - \frac{1}{2} \right] $$
where $d_i = (\gamma_{2i+1} + i\gamma_{2i})/2$ is a new annihilation operator defined "in between" two fermions. This leaves a Majorana zero-mode that can be defined from both ends of the chain through $d_0 = (\gamma_{1} + i\gamma_{2N})/2$. Using this operator, we can now define a computational basis $\{|0\rangle, |1\rangle \}$ through $d_0 |0\rangle = 0$ and $|1\rangle = d_0^\dagger |0\rangle$, which are the two degenerate ground states of the Hamiltonian above.
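This sweet-spot structure is easy to check numerically. Here is a minimal sketch of my own (using NumPy): writing the sweet-spot Hamiltonian in the Majorana basis as an antisymmetric matrix, the single-particle spectrum shows exactly two zero modes, coming from the unpaired end Majoranas:

```python
import numpy as np

N = 8          # number of lattice sites
t = 1.0        # hopping = pairing amplitude at the sweet spot (mu = 0)

# Majorana basis gamma_0 .. gamma_{2N-1}; at the sweet spot the Hamiltonian
# couples only gamma_{2j+1} with gamma_{2j+2}, leaving gamma_0 and
# gamma_{2N-1} (the two ends of the chain) out of the Hamiltonian entirely.
M = np.zeros((2 * N, 2 * N))
for j in range(N - 1):
    M[2 * j + 1, 2 * j + 2] = t
    M[2 * j + 2, 2 * j + 1] = -t

# i*M is Hermitian; its eigenvalues give the single-particle spectrum.
energies = np.linalg.eigvalsh(1j * M)
n_zero = int(np.sum(np.abs(energies) < 1e-9))
print(n_zero)   # two zero modes, combining into the nonlocal fermion d_0
```

The $2(N-1)$ nonzero eigenvalues come in $\pm t$ pairs (up to an overall normalization convention), while the two exact zeros are the unpaired $\gamma_1$ and $\gamma_{2N}$ that combine into $d_0$.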
My question is the following. How are these two states $\{|0\rangle, |1\rangle \}$ protected against errors that can physically happen in the system? For instance, if the first element of the Kitaev chain loses (or gains) a fermion, the state $|0\rangle$ will collapse into $c_1^{(\dagger)}|0\rangle \neq |0\rangle$. Wouldn't we then lose information?
Answer: The OP is right: if the system is open to an environment allowing for single-particle tunneling into the system, then the Majorana edge mode is, in fact, not stable. This was studied by Budich, Walter and Trauzettel in Phys. Rev. B 85, 121405(R) (2012) (or check out the freely-accessible preprint: arXiv:1111.1734).
Kitaev's claim about the absolute stability---as quoted by AccidentalFourierTransform in his/her answer---presumes a closed system. In that case, one can indeed argue that fermion parity symmetry cannot be broken in a local system, so that the edge mode becomes absolutely stable (for energy scales below the bulk gap). | {
"domain": "physics.stackexchange",
"id": 69011,
"tags": "quantum-mechanics, condensed-matter, solid-state-physics, topological-insulators, majorana-fermions"
} |
Difference: Replicator Neural Network vs. Autoencoder | Question: I'm currently studying papers about outlier detection using RNNs (Replicator Neural Networks) and wonder what the particular difference to autoencoders is. RNNs seem to be treated by many as the holy grail of outlier/anomaly detection; however, the idea seems to be pretty old too, as autoencoders have been around for a long while.
Answer: Both types of networks try to reconstruct the input after feeding it through some kind of compression / decompression mechanism. For outlier detection the reconstruction error between input and output is measured - outliers are expected to have a higher reconstruction error.
The main difference seems to be the way how the input is compressed:
Plain autoencoders squeeze the input through a hidden layer that has fewer neurons than the input/output layers. That way the network has to learn a compressed representation of the data.
Replicator neural networks squeeze the data through a hidden layer that uses a staircase-like activation function. The staircase-like activation function makes the network compress the data by assigning it to a certain number of clusters (depending on the number of neurons and number of steps).
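To make the staircase idea concrete, here is a small sketch. The form (a sum of steep tanh steps) follows the construction used by Hawkins et al., but the exact constants here are illustrative choices of my own, not taken from the paper:

```python
import numpy as np

def staircase(x, n_steps=4, k=100.0):
    """Staircase-like activation: a sum of steep tanh 'steps' that maps
    inputs in [0, 1] onto roughly n_steps discrete output levels."""
    x = np.asarray(x, dtype=float)
    j = np.arange(1, n_steps)                       # positions of the steps
    steps = np.tanh(k * (x[..., None] - j / n_steps))
    return 0.5 + steps.sum(axis=-1) / (2 * (n_steps - 1))

# Inputs get snapped to one of n_steps levels (0, 1/3, 2/3, 1 here),
# effectively assigning each data point to a cluster.
print(staircase([0.05, 0.40, 0.60, 0.95]))
```

The steepness `k` controls how sharp the quantization is; in the limit of large `k` the hidden representation becomes purely discrete.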
From Replicator Neural Networks for Outlier Modeling in Segmental Speech Recognition:

RNNs were originally introduced in the field of data compression [5]. Hawkins et al. proposed it for outlier modeling [4]. In both papers a 5-layer structure is recommended, with a linear output layer and a special staircase-like activation function in the middle layer (see Fig. 2). The role of this activation function is to quantize the vector of middle hidden layer outputs into grid points and so arrange the data points into a number of clusters. | {
"domain": "datascience.stackexchange",
"id": 900,
"tags": "neural-network, anomaly-detection, autoencoder, outlier"
} |
Can DNA be considered as a fractal antenna? | Question: I was reading about the consequences of using mobile phones and came across the statement that mobile phones are a potential source of cancers because DNA is a kind of a fractal antenna. The reason put forward for this was that it shares with mobile phone antennae the properties of self-similarity and electric conductivity.
The scientific paper which is the source of this statement appears to be: Blank and Goodman, “DNA is a fractal antenna in electromagnetic fields” Int J. Radiation Biology, 87:4, 409–415, DOI:10.3109/09553002.2011.538130. Is anyone able to clarify this assertion and explain whether or not the arguments in the paper support it?
Answer: I looked at the original paper, Blank and Goodman "DNA is a fractal antenna in electromagnetic fields" Int J Radiation Biology, 87:4, 409-415, DOI:10.3109/09553002.2011.538130.
To be honest I can't give concrete reasons to doubt it, but here are a few things that raise my suspicions:
it is a review article that relies heavily on self-citation (15/50 citations are authored or co-authored by the two authors of this paper, including most of the primary experimental references)
in my judgement "responding to many different frequencies" and "having structures on several different scales" (Table 1) is not particularly strong evidence to support the conjecture that DNA acts as a fractal antenna; I would be much more convinced by an analysis based on physics rather than analogy (this comment on the paper by Foster reviews the physics of fractal antennas and its application to DNA and concludes that "Loose and implausible conjectures about DNA as a fractal antenna do not substitute for careful discussion of these matters"; I don't find the authors' rebuttal particularly convincing)
the authors cite epidemiological evidence that non-ionizing/low-frequency radiation causes cancer: according to the US NIH's National Cancer Institute, this evidence is weak. (However, one of the authors of this paper is a contrarian on this subject and believes that the mainstream view is wrong, for a variety of reasons.) (See Are low-intensity radio-waves carcinogenic? for more discussion on Biology.SE.)
the paper uses very disparate lines of evidence (not necessarily a bad thing, but in this case it feels haphazard): in particular, the authors discuss both low-frequency and ionizing radiation (this is part of their "DNA is sensitive to many different frequencies" argument, but ionizing radiation operates in a very different way)
it is highly speculative in places (e.g. "EMF is believed to have been an important driving force in evolution", p. 413, no reference; the authors go on to attribute the faster evolution of eukaryotes to the fact that their DNA structures are more fractal)
at least one of the references cited (de Pomerai et al. Nature 2000) was retracted in 2006 (5 years before the current paper was published)
What I can say in favor of the authors is that exploring the mechanisms by which low-frequency electromagnetic fields/radiation (ELF) affect cellular biology is indeed interesting; many of the epidemiological studies, while finding very weak effects, have also downplayed risk because so little is known about mechanisms of operation. If we could come up with a rigorous mechanistic understanding of ELF effects, that would be scientifically worthwhile. | {
"domain": "biology.stackexchange",
"id": 6991,
"tags": "dna, literature, radiation"
} |
Can we see the famous black hole pair or its effect on other stars in any other means but LIGO? | Question: If the source of LIGO's detection is a pair of black holes, can we see them using a traditional electromagnetic/neutrino/some other kind of telescope? Or can we see their effect on other stars in their vicinity?
related to How did LIGO detect the source location of the black holes mentioned to be the cause of today's announcement?
Answer: More devices are needed to localize the sources of the signals. It's a matter of months, but until they become operational it's impossible to scan the whole suspected region of the sky.
You may find some numerical details in Emilio Pisanty's question:
How many galaxies could be the source of the recent LIGO detection?
From the official LIGO site:
Independent and widely separated observatories are necessary to
determine the direction of the event causing the gravitational waves,
and also to verify that the signals come from space and are not from
some other local phenomenon.
Toward this end, the LIGO Laboratory is working closely with
scientists in India at the Inter-University Centre for Astronomy and
Astrophysics, the Raja Ramanna Centre for Advanced Technology, and the
Institute for Plasma to establish a third Advanced LIGO detector on
the Indian subcontinent. Awaiting approval by the government of India,
it could be operational early in the next decade. The additional
detector will greatly improve the ability of the global detector
network to localize gravitational-wave sources.
An Australian interferometer would fulfil the requirement of a complete network able to triangulate the location of the sources. It would also provide data to properly measure the speed of gravitational waves through the Earth. | {
"domain": "physics.stackexchange",
"id": 28475,
"tags": "black-holes, gravitational-waves, telescopes, ligo"
} |
Could dark matter consist of the supermassive black holes at the centers of galaxies? | Question: Inspired by this question about whether dark matter is matter, noting that dark matter tends to be clumped in galaxies near the center and less so on the edges, accepting that many (most?) galaxies have large black holes at their center, and theoretically, black holes have 'infinite' gravity, could black holes actually be the dark matter that holds galaxies together?
Put another way, how do we model the 'infinite' gravity of a black hole when considering the dynamics of a galaxy?
Answer: Do the black holes at the center of galaxies account for the experimental results that prompted the introduction of dark matter? No
The primary piece of evidence that originally sparked the idea of dark matter is the rotation curve of galaxies. We found that galaxies don't rotate the way the luminous matter suggests they should. Specifically, given estimates of the enclosed mass within some radius from the center of a given galaxy, galaxies were found to rotate faster at given radii than expected. In other words, the luminous matter didn't seem to account for all the mass within any given radius. This led to the idea that galaxies are permeated by a "dark matter" that isn't luminous. For this idea to work, though, the dark matter needs to permeate the galaxy; it can't all be concentrated at the center of the galaxy like the central black hole (to really explain why, one would have to go into more detail about rotation curves and how the expectations differ from observations).
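To put rough numbers on this, here is a back-of-the-envelope sketch (the values are my own illustrative choices: a Sgr A*-like central black hole of $4\times10^6$ solar masses, compared against disk speeds of order 200 km/s observed out to tens of kpc in a Milky-Way-like galaxy):

```python
import numpy as np

G = 4.30091e-6            # gravitational constant, kpc * (km/s)^2 / Msun
M_bh = 4.0e6              # central black-hole mass, Msun (Sgr A*-like)

# Keplerian circular speed around a point mass: v = sqrt(G * M / r).
for r in (1.0, 5.0, 10.0):            # radius in kpc
    v = np.sqrt(G * M_bh / r)
    print(f"r = {r:4.1f} kpc: v = {v:5.2f} km/s")
```

The central black hole alone gives only a few km/s at kpc scales, negligible compared with the ~200 km/s observed, and it falls off as $1/\sqrt{r}$ instead of staying flat.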
Other pieces of evidence, like the dynamics which occurs when two galactic clusters collide also wouldn't be accounted for by galactic central black holes. See e.g. the bullet cluster.
Is it possible that the dark matter is made up of many smaller black holes? Possible, but not likely.
At one point in time, there was conjecture that the dark matter consisted of (moderate sized) black holes and other compact objects which have low luminosity. This was the MACHO theory (MAssive Compact Halo Objects). But this theory has largely fallen out of favor.
As Ben points out in a comment, another candidate might be primordial black holes, but their abundance appears to be too low for them to be good candidates at this time. | {
"domain": "physics.stackexchange",
"id": 52957,
"tags": "black-holes, dark-matter, galaxies, galaxy-rotation-curve"
} |
[Talos] I am trying to launch talos in simulation but gazebo Segmentation fault (core dumped) | Question:
Hello,
I was trying to play with the Talos robot in simulation. I have installed the simulation using this tutorial with ROS Melodic on Ubuntu 18.04. I got some errors but was able to solve them; I now have an error-free build, and all the files have their missing-argument errors (that I was facing) resolved.
When I launch the robot in RViz, it works like a charm, so I thought of launching it in Gazebo to test the physics. There were many errors which I was able to solve, but one error still remains at the end.
[ INFO] [1648586058.460195637, 30.401000000]: gazebo_ros_control plugin is waiting for model URDF in parameter [robot_description] on the ROS param server.
Segmentation fault (core dumped)
================================================================================REQUIRED process [gazebo-2] has died!
process has died [pid 16089, exit code 139, cmd /home/aarsh/henrique_ws/src/gazebo_ros_pkgs/gazebo_ros/scripts/gzserver -e ode worlds/empty.world __name:=gazebo __log:=/home/aarsh/.ros/log/83708ae4-af9f-11ec-87e7-ac2b6e45babe/gazebo-2.log].
log file: /home/aarsh/.ros/log/83708ae4-af9f-11ec-87e7-ac2b6e45babe/gazebo-2*.log
Initiating shutdown!
================================================================================
[rgbd/rgb/image_proc-5] killing on exit
[rgbd/rgb/high_res/image_proc_high_res-4] killing on exit
[gazebo-2] killing on exit
[gazebo_gui-3] killing on exit
Segmentation fault (core dumped)
NOTE: The /robot_description parameter is present in the param list.
So I removed the libgazebo_ros_control.so plugin from this file and tried to launch again; it works without libgazebo_ros_control.so. It is not an issue with the tags inside the plugin:
<ns></ns>
<robotSimType>pal_hardware_gazebo/PalHardwareGazebo</robotSimType>
<robotNamespace></robotNamespace>
<controlPeriod>0.001</controlPeriod>
I have tried removing all of them and I still get the same error.
When I remove the whole plugin, it works perfectly; the robot just falls down due to gravity.
Let me know if you need any further info.
Thank you for helping.
Originally posted by aarsh_t on ROS Answers with karma: 328 on 2022-03-29
Post score: 0
Original comments
Comment by saikishor on 2022-03-29:
Hello, this could be because the gazebo_ros_control plugin provided by PAL Robotics is not properly compiled in your workspace. I recommend you clean your workspace and build again, and also check for any running Gazebo servers in the background and kill them before launching the simulation. You can also open an issue in the GitHub repository: https://github.com/pal-robotics/talos_tutorials with the issues you faced and resolved on your own, so we can take a look.
Comment by aarsh_t on 2022-03-30:
Hello sir, Thank you for your inputs, I will try to do the suggested and will report the updates.
Answer:
I figured out the issue: PAL uses modified ros_controllers and transmission_interface packages, hence they use boost shared pointers and not std shared pointers (which are used in ROS Kinetic and above). I thought the ros_controllers that comes with the Talos simulation was the same one that comes with Melodic, so I did not build it, but it's not the same; it is a modified one.
Thanks!
Originally posted by aarsh_t with karma: 328 on 2022-04-01
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by saikishor on 2022-04-01:
@aarsh_t at PAL Robotics we use ROS Control with some custom fixes and features on our end. Unless you compile the ros_control packages as per the .rosinstall, you might have trouble testing some controllers. If not, change the gazebo plugin and things might work.
Comment by aarsh_t on 2022-04-01:
Aha, I see! I tried to fix the shared pointer issues for one of the packages and it built successfully, but I didn't know you also had modifications on your side. Well, nice to learn something new! Now I might dig into the modifications from the original package as well haha. Thanks for the info ;)
"domain": "robotics.stackexchange",
"id": 37538,
"tags": "ros, gazebo, ros-melodic, rosparam, robot-description"
} |
String manipulations: transform "a-b-c" into "a(b(c))" | Question: function dashesToParentheses(str) {
var list = str.split('-');
return str.replace(/-/g, '(') + repeatString(')', list.length - 1);
}
function repeatString(str, times) {
if (times == 1)
return str;
return new Array(times + 1).join(str);
}
dashesToParentheses('a-b-c') // "a(b(c))"
dashesToParentheses('a-b') // "a(b)"
dashesToParentheses('a') // "a"
dashesToParentheses('') // ""
dashesToParentheses works correct. Can I make it simpler or/and faster?
Answer: Having split the string you can join it with brackets instead of replacing them. You could optionally choose to remove the repeatString function and your +/- 1, but it does make a lot of sense the way you have it.
function dashesToParentheses(str) {
var list = str.split('-');
return list.join('(') + Array(list.length).join(')');
} | {
"domain": "codereview.stackexchange",
"id": 13518,
"tags": "javascript, strings, regex"
} |
What is the Complexity Status of Arbitrarily Weighted Planar Max Cut? | Question: If you search on the internet for the complexity status of arbitrarily weighted planar max cut, you seem to get conflicting answers.
On one hand, there are references that Barahona solved this problem in the 1980's.
On the other there is this paper:
NP-completeness of maximum-cut and cycle-covering problems for planar graphs
My question is: what is the complexity status of arbitrarily weighted planar max cut?
Answer: After reading the paper you mentioned, it seems that the reduction stated in the paper is incomplete, therefore the result may not be correct.
In the paper the author gave a reduction from 3SAT to a problem called planar max-cycle-covering (although it is not consistent with the usual sense of a covering in graph theory; it is more like a max-edge-disjoint-cycles problem), proving that the max-cycle-covering problem and the max-cut problem are dual to each other on planar graphs. While the second implication is correct, the reduction part mimics a proof of the NP-hardness of the planar Hamiltonian cycle problem, and there may be a flaw in it.
In that proof one needs a gadget for XOR-in-series (see Fig. 3 in the Hamiltonian paper); although the construction works in the Hamiltonian case, I see no evidence that a similar construction is possible for cycle-covering, and there are no explanations in the paper. (And we need such a gadget in the NP-hardness proof of cycle-covering; see Fig. 6 in the max-cut paper.) The main problem is that for Hamiltonian cycle we can always be sure that the middle edge C must be passed through, hence we can "transfer" the XOR by the presence of the middle edge; but in cycle-covering we cannot guarantee that edge C must be passed through unless it is within a cycle passing through a heavy-weighted edge, and this is not the case here. (In fact we cannot enforce the edge being passed through, since in Fig. 6 if an edge C is passed through then the corresponding literal in the clause is true.)
My guess is that planar max-cycle-covering cannot simulate a 3-fan-in OR function, which is required in 3SAT. Together with the answer by Mohammad Al-Turkistany, one should believe that the planar max-cut problem is polynomial-time solvable. | {
"domain": "cstheory.stackexchange",
"id": 538,
"tags": "cc.complexity-theory, ds.algorithms"
} |
“Emergent magnetic monopole” discovered (5th December 2023) | Question: It looks like a few days ago (5th December 2023) a research team discovered “emergent magnetic charge” in antiferromagnetic materials, and the result was published in Nature Materials.
The finding was also covered on other sites, often with less reserved titles, such as this one on the website of Cambridge University (the institution with which the researchers are associated) that says
Researchers have discovered magnetic monopoles – isolated magnetic charges – in a material closely related to rust…
What does “emergent magnetic monopole” mean here? Does this finding really show the existence of isolated magnetic monopoles?
Answer: To put it simply, an emergent property is a property that stems from many parts of a system all working in some collaboration that produces a whole greater than the sum of its parts, as it were. The second law of thermodynamics is an example of an emergent property of large particle ensembles.
The monopoles discovered in the recent research are not real particles like those hypothesized by Dirac to explain the quantization of charge; they are "quasi-particles": an emergent phenomenon produced by ordinary matter. So, no need to revise Maxwell's equations just yet! | {
"domain": "physics.stackexchange",
"id": 99033,
"tags": "electromagnetism, condensed-matter, solid-state-physics, material-science, magnetic-monopoles"
} |
run "rosdep install -r --from-paths ." error when I install iai_kinect2 | Question:
When I ran rosdep install -r --from-paths ., I got the following errors:
turtlebot@turtlebot-HP-ZBook-14-G2:~/catkin_ws/src/iai_kinect2$ rosdep install -r --from-paths .
ERROR: the following packages/stacks could not have their rosdep keys resolved
to system dependencies:
kinect2_viewer: Cannot locate rosdep definition for [kinect2_bridge]
kinect2_calibration: Cannot locate rosdep definition for [kinect2_bridge]
kinect2_bridge: Cannot locate rosdep definition for [kinect2_registration]
Continuing to install resolvable dependencies...
All required rosdeps installed successfully
The author provides the following note about this:
rosdep will output errors on not being able to locate [kinect2_bridge] and [depth_registration]. That is fine because they are all part of the iai_kinect2 package and rosdep does not know these packages.
What should I do? My rosdep just does not know these packages.
Thank you very much!
Originally posted by Ziwen Qin on ROS Answers with karma: 136 on 2016-06-18
Post score: 2
Original comments
Comment by gvdhoorn on 2016-06-20:
Please don't post updates as answers. Only post answers if you are actually answering your own question. I've merged your answer as an update into your OP.
Answer:
I would think this is what --ignore-src is meant for.
Edit:
following you, Should I run the command which is " rosdep install --ignore-src" ?
No, the full command should be:
rosdep install --from-paths ~/catkin_ws/src/iai_kinect2 --ignore-src -r
That will make rosdep look for packages in the ~/catkin_ws/src/iai_kinect2 directory, gather all the dependencies, check which you have installed, which are missing, then subtract all packages that it finds in ~/catkin_ws/src/iai_kinect2 (because of --ignore-src), and then install everything that is still missing.
I am running, but terminal show some errors to me:
rosdep: error: no packages or stacks specified
Well, yes. You give it no path and no package name, so it can't do anything for you.
Please also check the output of rosdep --help.
Originally posted by gvdhoorn with karma: 86574 on 2016-06-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Ziwen Qin on 2016-06-20:
following you, Should I run the command which is " rosdep install --ignore-src" ?
I am running, but terminal show some errors to me:
rosdep: error: no packages or stacks specified
Comment by Ziwen Qin on 2016-06-20:
I find that these errors can be ignored; however, OpenCL is not optional but required when installing iai_kinect2.
Comment by gvdhoorn on 2016-06-21:
This is a new question, unrelated to your earlier one. I advise you to open a new one.
Also (and for the last time): please don't post answers if you are not answering your question. ROS Answers is not a normal forum, but an askbot site. It works differently.
Comment by pallavbakshi on 2017-01-17:
Hi, openCL is actually optional as you can choose to either use openGL, openCL or CPU. You may read the FAQs on https://github.com/code-iai/iai_kinect2#dependencies.
Check the website: If one of them works, try out the one that worked with kinect2_bridge: | {
"domain": "robotics.stackexchange",
"id": 24985,
"tags": "ros"
} |
Sensor fusion of IMU and ASUS for RTAB-MAP with robot_localization | Question:
Hi,
@matlabbe, I have an ASUS Xtion Pro Live and a VN-100T IMU, set up as in the following picture:
I tried to fuse the data from these two sensors similar to your launch file here with robot_localization. The difference in my launch file is that I added the tf relationship as:
<node pkg="tf" type="static_transform_publisher" name="base_link_to_camera_link_rgb" args="0.0 0 0.0 0.0 0 0.0 base_link camera_link 20" />
<node pkg="tf" type="static_transform_publisher" name="base_link_to_imu_link" args="0.0 0.0 0.5 0 0 0 base_link imu 20" />
After I launch it, I get the following result for the map cloud. In this picture I am not moving the sensors at all, yet the view starts to rotate and the information is not correct.
Also, when I move the sensors, the coordinate frames do not move correctly. I read "Preparing your sensor data", but I am not sure if something is wrong with the placement of the IMU or not!
Do you think there is a problem with the usage of the tf package? I think the relationship is correct, but I don't know how to remove this drift (or motion) when I don't move the sensors at all.
The rqt_graph and tf_frames are like the following:
I also read these two question (1) and (2), but I didn't understand how to solve it.
Thank you.
Edit:
When I do rostopic echo /imu/imu, I get the following output:
---
header:
seq: 1912
stamp:
secs: 1472737002
nsecs: 470006463
frame_id: imu
orientation:
x: -0.989729464054
y: -0.141599491239
z: 0.0163822248578
w: 0.0108099048957
orientation_covariance: [0.0005, 0.0, 0.0, 0.0, 0.0005, 0.0, 0.0, 0.0, 0.0005]
angular_velocity:
x: 0.0242815092206
y: -0.000830390374176
z: -0.0146896829829
angular_velocity_covariance: [0.00025, 0.0, 0.0, 0.0, 0.00025, 0.0, 0.0, 0.0, 0.00025]
linear_acceleration:
x: 0.298014938831
y: 0.262553811073
z: 9.89032745361
linear_acceleration_covariance: [0.1, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.1]
---
Originally posted by MahsaP on ROS Answers with karma: 79 on 2016-08-25
Post score: 1
Original comments
Comment by MahsaP on 2016-09-08:
@Tom Moore would you please give me some hints on my question. Thank you.
Comment by Tom Moore on 2016-09-13:
Can you please post your launch file and sample messages for all inputs? Which IMU driver are you using? Is it reporting data in ENU frame? It looks like it must be, given your linear acceleration, but I want to be sure.
Comment by MahsaP on 2016-09-13:
@Tom Moore For the IMU I am using this driver on Ubuntu 14.04 with ROS indigo. Similar to the picture in the question, I put the IMU on top of the vision sensor. I tried to use ENU frame, but the positive direction for the IMU is NED
Comment by MahsaP on 2016-09-13:
If I use NED, is it making a problem?
Comment by MahsaP on 2016-09-13:
I put the launch files here. I will add some plots to the question for the outputs.
Comment by matthewlefort on 2016-09-13:
@MahsaP
I am using your same IMU and Driver, and with unedited code my linear accelerations are the about the negative versions of yours. Did you have to modify the driver code so that z acceleration = +g when in the orientation of your picture? This is not a correction, but me trying to understand
Comment by MahsaP on 2016-09-14:
@matthewlefort I just modified the code to add the covariance matrix.
Comment by Tom Moore on 2016-09-20:
If it's NED frame, then yes, that is definitely a problem. r_l assumes ROS standards (see REP-103), and an IMU that measures motion around the Z axis in the wrong direction is not going to work.
Comment by Chubba_07 on 2019-11-09:
could you please tell me how you computed the covariance matrices for imu?
Answer:
Looking at the driver source, I think it's just using the data straight off the IMU. If it is, then the data is in NED frame, which does not follow the ROS standards, and is therefore not compatible with robot_localization. To fix it, you'll need to negate the Y and Z axes for everything the IMU produces, including the quaternion.
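As a minimal sketch of that axis flip (my own illustration: plain dicts stand in for the `sensor_msgs/Imu` fields; in a real node you would apply the same sign changes to the message before feeding it to robot_localization):

```python
def ned_to_enu(msg):
    """Negate the Y and Z components of every vector quantity the IMU
    reports, including the orientation quaternion's y and z parts."""
    out = {k: dict(v) for k, v in msg.items()}
    for key in ('angular_velocity', 'linear_acceleration', 'orientation'):
        out[key]['y'] = -out[key]['y']
        out[key]['z'] = -out[key]['z']
    return out

# Values loosely based on the rostopic echo in the question (illustrative).
imu = {'orientation':         {'x': -0.99, 'y': -0.14, 'z': 0.016, 'w': 0.011},
       'angular_velocity':    {'x': 0.024, 'y': -0.0008, 'z': -0.015},
       'linear_acceleration': {'x': 0.30, 'y': 0.26, 'z': 9.89}}
flipped = ned_to_enu(imu)
print(flipped['linear_acceleration'])   # y and z signs reversed
```

The covariance matrices in the message are diagonal here, so they are unchanged by this sign flip.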
Originally posted by Tom Moore with karma: 13689 on 2016-10-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 25610,
"tags": "slam, navigation, sensor-fusion, robot-localization, rtabmap"
} |
How to calculate rock permeability in m^2 from gas flux in m/s? | Question: I am studying rock caverns lined with concrete for compressed air storage. I would like to know if it's possible to convert the permeability coefficient from m/s to m^2 or Darcy.
Thank you.
Answer: The word "convert" is reserved for units that measure the same quantity (like distance in inches or cm). In this case, however, you need to use the word "calculate" because the units are dissimilar. Yes, the Darcy's Law equation is what you are looking for.
In SI units, permeability "k" is measured in m2.
In SI units, porous medium gas flux "q" or "Q/A" is measured in (m3/s) / (m2) which reduces to m/s. Q is volumetric flowrate (m3/s) and A is area (m2). Algebraically rearrange q= Q/A as necessary such as Q= q × A.
So to calculate you also need the dynamic viscosity of air "µ" in kg/m-s. Note that this value is temperature and pressure dependent as you can see in this table.
You will also need the differential pressure "Δp" in pascals (Pa, i.e. kg/(m·s²)): the difference in pressure between the inside of your cavern and atmospheric pressure.
And finally you will need the length "L" in meters. This is the distance the gas must travel through the porous medium, or in your case the thickness of your concrete layer.
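Putting the pieces together, here is a small numerical sketch of the rearranged Darcy's law (all values below are illustrative, not taken from the question):

```python
# Darcy's law, q = (k / mu) * (dp / L), rearranged for permeability:
# k = q * mu * L / dp.  All values are illustrative.
q  = 1e-8     # gas flux Q/A, m/s
mu = 1.8e-5   # dynamic viscosity of air, kg/(m*s)
L  = 0.5      # flow path length (e.g. lining thickness), m
dp = 1e5      # pressure difference across the lining, Pa

k_m2 = q * mu * L / dp          # permeability in m^2
k_darcy = k_m2 / 9.869233e-13   # 1 darcy = 9.869233e-13 m^2
print(k_m2, k_darcy)
```

For these numbers the result is about 9e-19 m², i.e. roughly a microdarcy.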
When you solve the equation, you can confirm you used the correct units by canceling them out. | {
"domain": "engineering.stackexchange",
"id": 3982,
"tags": "airflow, compressed-air"
} |
How many atoms are in a piece of paper? | Question: How many atoms are there in a common sheet of paper?
The paper is A4, i.e. $210 \, \mathrm{mm} {\times} 297 \, \mathrm{mm}$ $\left(8.27 \, \mathrm{in} {\times} 11.7 \, \mathrm{in}\right).$
Answer: You can estimate number of atoms by finding out average molar mass of paper and mass of one sheet of paper.
If we assume that paper is mainly composed of cellulose, we can neglect other components as insignificant. Then we find out that cellulose's molar mass is approximately $162.14 \, \mathrm{g}/\mathrm{mol}$. Mass of one sheet of paper is about $4.5 \, \mathrm{g}$.
Doing the math you get that one sheet of paper is approximately $0.02775 \, \mathrm{mol}$. With Avogadro's constant, we can calculate that there are$$
0.02775 \, \mathrm{mol} \times 6.02214 \cdot {10}^{23} \, \frac{\mathrm{molecule}}{\mathrm{mol}} ~=~ 1.67 \cdot {10}^{22} \, \mathrm{molecule}.$$
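The arithmetic is easy to reproduce in a few lines of Python; the 4.5 g sheet mass and the 21 atoms per C6H10O5 monomer unit (6 C + 10 H + 5 O) match the estimates in this answer:

```python
AVOGADRO = 6.02214e23  # monomer units per mole
MOLAR_MASS = 162.14    # g/mol for a C6H10O5 cellulose monomer unit
SHEET_MASS = 4.5       # g, rough mass of one A4 sheet
ATOMS_PER_UNIT = 21    # 6 C + 10 H + 5 O

moles = SHEET_MASS / MOLAR_MASS   # ≈ 0.02775 mol
units = moles * AVOGADRO          # ≈ 1.67e22 monomer units
atoms = units * ATOMS_PER_UNIT    # ≈ 3.51e23 atoms
print(f"{atoms:.2e}")  # → 3.51e+23
```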
As in one cellulose molecule there are 21 atoms, the number of atoms is $3.509 \cdot {10}^{23}$ atoms. | {
"domain": "physics.stackexchange",
"id": 21001,
"tags": "everyday-life, atoms, estimation"
} |
A Rubber Band in a Rocket Ship | Question: If I hang a mass onto a rubber band, the band gets stretched. The equation for this situation is $ma\ =\ F_{r}-mg\ =\ 0$. Where $F_{r}\ =\ kx$. Where $x$ is the amount the rubber band stretches and the $k$ is the rubber band constant. Is my thought process correct so far? I then get into a rocket ship on Earth which starts accelerating upwards at $a_{2}$. For some reason, the rubber band now stretches more, what causes that? And how can one mathematically describe this situation?
Answer: You used Newton's second law, $ma = F_{r}-m\,g = k\,x-mg =m\, 0\Rightarrow x=\frac{mg}{k}$ when the acceleration of the mass was zero.
When the mass is accelerating with acceleration $a_2$, the equation of motion is now $ma_2 = k\,x'-mg \Rightarrow x'=\frac{mg+ma_2}{k}$.
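Plugging in some made-up numbers (all four values below are hypothetical) makes the comparison concrete:

```python
m, k, g, a2 = 0.1, 20.0, 9.81, 5.0  # kg, N/m, m/s^2, m/s^2 (all hypothetical)

x = m * g / k                # extension while hanging at rest
x_prime = m * (g + a2) / k   # extension while accelerating upward at a2

print(round(x, 5), round(x_prime, 5))  # → 0.04905 0.07405
assert x_prime > x  # the band stretches more in the accelerating rocket
```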
Thus the extension of the rubber band when the body is accelerating, $x'$, is greater than when the body is not accelerating, $x$. | {
"domain": "physics.stackexchange",
"id": 96135,
"tags": "newtonian-mechanics, harmonic-oscillator"
} |
Derivative with respect to a spinor of the free Dirac lagrangian | Question: When we derived Dirac Equation starting form the lagrangian, our QFT professor said:
"let's take the free lagrangian $$\mathscr L = i\bar\Psi\gamma^\mu\partial_\mu\Psi - m\bar\Psi\Psi$$ and perform
$$ \frac{\partial\mathscr L}{\partial (\partial_\mu\Psi)} = \frac{\partial (i\bar\Psi\gamma^\mu\partial_\mu\Psi)}{\partial (\partial_\mu\Psi)} = - i\bar\Psi\gamma^\mu ,$$
where the extra minus sign come from the fact that when we perform the derivative with respect to $\partial_\mu\Psi$ we 'pass through' $\bar\Psi$ and the exchange of two spinors give raise to a minus sign".
This doesn't change anything in computing Dirac equation, but when I tried to compute the stress energy tensor $T^{\mu\nu}$ I obtained (I'm using $\eta^{\mu\nu} = \mathrm{diag}(+1, -1, -1, -1$))
$$T^{\mu\nu} = \frac{\partial\mathscr L}{\partial (\partial_\mu\Psi)}\partial^\nu\Psi + \frac{\partial\mathscr L}{\partial (\partial_\mu\bar\Psi)}\partial^\nu\bar\Psi - \eta^{\mu\nu}\mathscr L = -i\bar\Psi\gamma^\mu\partial^\nu\Psi $$
since the lagrangian is zero on-shell.
Now I take the zero-zero component which is nothing but the energy density
$$\mathscr H = T^{00} = -i\bar\Psi\gamma^0\partial^0\Psi $$
but this energy not only is different from the one I found in every book, it is also negative which means it is certainly wrong. My question now is where did I go wrong?
Answer: If $\theta_1,\theta_2$ are a pair of Grassmann variables, then
$$
\frac{\partial}{\partial\theta_2}(\theta_1\theta_2)=-\theta_1\qquad\tag{left derivative}
$$
where the negative sign is due to the fact that partial derivatives anti-commute with odd variables.
Moreover, Taylor's theorem reads
$$
f(\theta_1,\theta_2+\delta)=f(\theta_1,\theta_2)+\delta\frac{\partial f}{\partial\theta_2}+\cdots
$$
where here the ordering of factors is important ($f$ is an even function).
Therefore, the correct expression of the energy-momentum tensor is
$$
T^{\mu\nu} = \color{red}{-}\frac{\partial\mathscr L}{\partial (\partial_\mu\Psi)}\partial^\nu\Psi + \partial^\nu\bar\Psi\frac{\partial\mathscr L}{\partial (\partial_\mu\bar\Psi)} - \eta^{\mu\nu}\mathscr L = \color{red}{+}i\bar\Psi\gamma^\mu\partial^\nu\Psi
$$
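With the corrected sign, the zero-zero component now comes out with the expected sign. As a quick check, using $\bar\Psi\gamma^0 = \Psi^\dagger(\gamma^0)^2 = \Psi^\dagger$ and, in this signature, $\partial^0 = \partial_0 = \partial_t$:

```latex
\mathscr H = T^{00}
           = i\bar\Psi\gamma^0\partial^0\Psi
           = i\Psi^\dagger\partial_t\Psi
           = \Psi^\dagger\left(i\partial_t\right)\Psi
```

Inserting the Dirac equation $i\partial_t\Psi = \gamma^0(-i\gamma^j\partial_j + m)\Psi$ then gives $\mathscr H = \Psi^\dagger H_D\Psi$, which is the standard textbook energy density.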
Note that most books use right derivatives, which makes this analysis simpler. | {
"domain": "physics.stackexchange",
"id": 36548,
"tags": "lagrangian-formalism, fermions, differentiation, dirac-equation, grassmann-numbers"
} |
What does the second solution in an elastic collision represent? | Question: A ball of mass 2 collides with a stationary ball of mass 1 elastically!
In finding the velocities I end up with two solutions and am not sure how to understand the second one.
Simplified equation of energy:
1) $2v^2 = 2v'^2 + v_b^2$
Simplified equation of momentum:
2) $2v = 2v' + v_b$
Now solving for $v_b$:
$v_b = 2v - 2v'$
Squaring and inserting into eq. 1:
$2v^2 = 2v'^2 + 4v^2 - 8vv' + 4v'^2$
Solving:
$0 = 2v'^2 + 2v^2 - 8vv' + 4v'^2$
$0 = 2v^2 - 8vv' + 6v'^2$
$0 = v^2 - 4vv' + 3v'^2$
$0 = (v - 3v')(v - v')$
Now the dilemma: it makes sense that ball 1 would end up with a third of its original speed, but what is the second answer trying to say?
$v' = v/3$
$v' = v$
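Both roots really do satisfy equations 1 and 2; a quick numerical check in Python (with an arbitrarily chosen $v$):

```python
v = 3.0  # arbitrary initial speed of the mass-2 ball

def vb(v1):
    return 2 * v - 2 * v1  # from momentum: 2v = 2v' + v_b

for v1 in (v / 3, v):  # the two roots of the quadratic
    assert abs(2 * v - (2 * v1 + vb(v1))) < 1e-12           # momentum
    assert abs(2 * v**2 - (2 * v1**2 + vb(v1)**2)) < 1e-12  # energy

print(vb(v / 3), vb(v))  # → 4.0 0.0
```

Note that the second root gives $v_b = 0$.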
Answer: The second solution will mathematically satisfy the conservation equations, but corresponds to the objects not actually colliding. Or they "ghost" and fly right through each other. :) | {
"domain": "physics.stackexchange",
"id": 23463,
"tags": "homework-and-exercises, newtonian-mechanics, momentum, conservation-laws, collision"
} |
Accelerated C++, exercise 3-5: student grade calculator | Question: I've started learning C++, coming from a PHP/JS background. I've got a copy of the Accelerated C++ book and I'm working through it, doing each chapter and exercise. Having done a few, I'd like to get some feedback to make sure I'm not doing anything too wrong.
The chapter is introduced with the premise of
Imagine a course in which each student's final exam counts for 40% of the final grade, the midterm exam counts for 20%, and the average homework grade makes up the remaining 40%.
The particular exercise I'm doing then says
Write a program that will keep track of grades for several students at once. The program could keep two vectors in sync: the first should hold the students' names and the second the final grades that can be computed as input is read. For now, you should assume a fixed number of homework grades.
They suggest using two vectors for this; one of student names and one of grades, although I expanded slightly on that to take into account the different grades (midterms, finals and homework). I've also opted to calculate the grades after receiving all input in order to separate the code that gets and stores input from the code that performs calculations.
Here's a full copy of the program as it stands (110 lines). I'm looking for any feedback on style, correctness, language traps I may have accidentally fallen into, and really anything in general that you think may be of use.
#include <iostream>
#include <iomanip>
#include <string>
#include <vector>
using std::cin;
using std::cout;
using std::endl;
using std::setprecision;
using std::streamsize;
using std::string;
using std::vector;
/*****
* This program is designed to take a list of students, then ask for their
* midterm, final and homework grades. It then calculates the final grade
* for each student, with the following weights:
* Midterm: 20%
* Final: 40%
* Homework: 40% of average grade
*/
int main()
{
// How many homework grades each student requires
int assignments = 3;
// Get all student names and store them in the first vector
vector<string> students;
{
string student;
cout << "Please enter all students forenames, or an empty line when done."
<< endl
<< "Student name: ";
while (getline(cin, student) && !student.empty()) {
students.push_back(student);
cout << "Student name: ";
}
cout << endl
<< students.size() << " students entered."
<< endl << endl;
}
// Ask for the midterm, final and `assignments` homework grades for students
vector<double> midterms;
vector<double> finals;
vector<double> homework;
{
double grade;
for (int s = 0; s < students.size(); s++) {
// invariant: we've received grades for `s` students
cout << "Grades for " << students[s] << endl;
cout << " - Midterm: ";
cin >> grade;
midterms.push_back(grade);
cout << " - Final: ";
cin >> grade;
finals.push_back(grade);
for (int a = 0; a < assignments; a++) {
// invariant: we've received the grade for `a` assignments
cout << " - Assignment " << a + 1 << ": ";
cin >> grade;
homework.push_back(grade);
}
cout << endl;
}
}
// Calculate and print the overall grades
cout << "Overall Grades:" << endl;
streamsize precision = cout.precision();
{
typedef vector<string>::size_type vec_size;
double overallGrade,
midtermGrade,
finalGrade,
homeworkGrades;
for (vec_size s = 0; s < students.size(); s++) {
midtermGrade = midterms[s];
finalGrade = finals[s];
homeworkGrades = 0;
// Pull out the homework grades
int start = s * assignments,
end = (s + 1) * assignments - 1;
for (vec_size h = 0; h < assignments; h++) {
// invariant: we've summed the grades for
homeworkGrades += homework[start + h];
}
// Calculate the overall grade
overallGrade = 0.2 * midtermGrade
+ 0.4 * finalGrade
+ 0.4 * homeworkGrades / assignments;
cout << " - " << students[s] << ": "
<< setprecision(3) << overallGrade << setprecision(precision)
<< endl;
}
}
return 0;
}
Answer: Overall, not bad for a beginner in my opinion. Some things to note:
Use compiler warnings
There are several (minor) warnings that you probably missed. One of them about an unused variable.
endl vs \n
I'm sure your book mentions this but endl will flush the buffer while \n won't.
In most cases using \n suffices.
Group logic together
cout << "Student name: "; This is repeated and could be eliminated if you restructure the loop slightly.
You should also declare variables as late as possible. So move the declaration of student right in front of the loop.
string student;
for (;;)
{
cout << Student name: ";
getline(cin, student);
if (student.empty())
{
break;
}
students.push_back(student);
}
Use range-based for loops
You can use the new range based for loops instead of the old style loops e.g.
for (const auto &s : students) { instead of for (int s = 0; s < students.size(); s++) {
Adjust the next line accordingly: cout << "Grades for " << s << endl;
Make constants constant
The assignments variable is never changed and should therefore be declared as a constant.
Use functions
You are already scoping the code which is good but as @RichN pointed out you might as well use functions instead.
Prefer using over typedefs
As Scott Meyers suggests in his book Effective Modern C++, you should prefer the new using directive over typedefs. For example:
using vec_size = vector<string>::size_type; | {
"domain": "codereview.stackexchange",
"id": 25965,
"tags": "c++, beginner, calculator"
} |
What experiments show that the nucleus is spherical in shape? | Question: I know that experiments like electron scattering can give the nuclear charge radius and proton scattering can give the nuclear matter radius. However, these experiments seem to first assume that the nucleus is spherical so as to calculate its radius. How do we know that the nucleus is spherical in the first place?
Answer: We have a variety of methods for detecting the fact that many nuclei are not spherical, and for measuring their deformations.
Classically a rotating rigid object has an energy given by $L^2/2I$, where $L$ is the angular momentum and $I$ is the moment of inertia. We also find that many nuclei have bands of energy levels with energies that go approximately like $L^2$ (or, usually more accurately, $L(L+1)$). Quantum mechanically, a sphere can't undergo this sort of collective rotation, because quantum mechanics can only describe motion as a change in the state of a system, and a sphere doesn't change when you rotate it. Therefore, when these bands are observed, the interpretation is that the nucleus cannot be spherical.
If you want to measure the deformation quantitatively, there is a variety of techniques. For nuclei that you can gather in macroscopic quantities, here are some techniques that can be used:
Coulomb excitation measures the transition quadrupole moment from the ground state to excited states. (And sometimes multiple Coulomb excitation is possible, so you get access to excited states as well.)
The nuclear hyperfine splitting depends on the static quadrupole moment of the ground state, if the spin is $\ge 1$.
Electron scattering can give information about deformation.
Most nuclides can't be produced in bulk quantities, so in most cases, a much easier technique is to measure the lifetimes of the excited states in a rotational band. If you think of the nucleus semiclassically as a rotating electrically charged ellipsoid, then the rate of radiation is proportional to the square of the quadrupole moment. For nuclei created in-beam in an accelerator experiment and studied through gamma-ray spectroscopy, the lifetime can often be inferred from the statistical distribution of the Doppler shifts of the recoiling nuclei, which are recoiling when they are formed by fusion and then slow down inside the target.
Sometimes one might know only the ground-state spin and parity. In many cases, this can be compared with theory to give a good idea of the deformation.
For a nucleus that is deformed, you might think that you could infer the shape by looking at the energy as a function of $L^2$ and getting the moment of inertia from the slope of the graph. This is a great method for molecules, but for nuclei it doesn't work very well, because there is a lot of model-dependence in the moment of inertia. Nuclei act like superfluids, so their moment of inertia is much smaller than the rigid-body value.
Most nuclei are either spheres or prolate ellipsoids, or unstable shapes that vibrate back and forth between such shapes. There are basically no known cases of stably oblate nuclei. Most prolate deformations are about as much as a chicken egg, but in some cases, e.g., fission isomers, the ratio of the lengths of the axes can be as much as 2 to 1. There is some evidence for pear shapes, but this is rare and most such cases are actually unstable shapes that vibrate between sphere and pear. | {
"domain": "physics.stackexchange",
"id": 56466,
"tags": "nuclear-physics"
} |
Could you view yourself in high gravity situations? | Question: I'm trying to understand what effects gravity can have on light. First of all, I don't understand how gravity can even affect it, since it doesn't have mass, right? That is probably a separate question though.
When gravity is strong enough, it bends light towards the source of the gravity. So if you were on a small planet and gravity were to gradually increase, would the horizon rise as well, allowing you to see further? If so, at some point, could you look up at some angle and have the light go all the way around the planet and back to yourself, in which case you would essentially be looking up at yourself?
Also, as the gravity increases, is there a point at which light could orbit the planet indefinitely?
Is this the right place to ask questions like this?
Answer:
When gravity is strong enough, it bends light towards the source of the gravity.
Roughly true
So if you were on a small planet and gravity were to gradually
increase, would the horizon rise as well, allowing you to see further?
Yes!
If so, at some point, could you look up at some angle and have the
light go all the way around the planet and back to yourself, in which
case you would essentially be looking up at yourself? Also, as the
gravity increases, is there a point at which light could orbit the
planet indefinitely.
These two are essentially the same question, and the answer to both is yes.
Here is a nice picture illustrating a few different light-like geodesics; the lines indicate paths that light might take near a massive, compact spherical body:
(Image credit)
For a non-rotating spherical object, there is a sphere in space on which light has a circular orbit: ideally, light emitted tangentially at exactly that radius would orbit forever, though the orbit is unstable and any perturbation sends the photon spiraling inward or outward. This sphere is known as the photon sphere, and it has a radius of
$$R_p = \frac 3 2 r_s = \frac{3 G M}{c^2}$$
Interesting point of reference: For the mass of the Earth, this radius is about $1.3 ~\rm cm$. Thus if the mass of the Earth were compressed into a marble with a radius of $1.3 ~\rm cm$, there would be a circular photon orbit directly on its surface. If you were an ant standing on this Earth-marble, you would (ignoring other inconvenient realities of such a situation) be able to see the back of your head. Or thorax, or whatever you have.
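That figure is easy to verify; a quick sketch in Python using round-number values for the constants:

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of the Earth, kg
c = 2.998e8         # speed of light, m/s

R_p = 3 * G * M_EARTH / c**2  # photon-sphere radius for Earth's mass
print(f"{R_p * 100:.2f} cm")  # → 1.33 cm
```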
For further research, there's a nice overview of Schwarzschild geodesics, the name for paths that free bodies (including light) take in the vicinity of a non-rotating spherical object. Of particular interest to this question is the "Bending of light by gravity" section. | {
"domain": "physics.stackexchange",
"id": 24065,
"tags": "general-relativity, gravity, visible-light, black-holes"
} |
How to install an individual package in ROS hydro | Question:
I want to install the hector_slam package in ROS. I downloaded the file from the URL, but I don't know where to put it and how to install the package. Instead of having package.xml in the root folder, it has a stack.xml. I don't know the difference and "rosmake" doesn't work here. If I put the files under the /src folder in my catkin workspace, when I run "catkin_make" it doesn't make the files in this package. I found a few methods online for the earlier versions of ROS, one of them is using "rosws", but it gave a lot of errors and I don't really know what to do when specifying a target workspace. It's my first time installing a package from source, could anyone kindly help me? If possible, I wish to know the general way to install packages in ROS.
Many thanks in advance!
Originally posted by CathIAS on ROS Answers with karma: 15 on 2014-01-06
Post score: 1
Answer:
Below is one of multiple options. This is assuming you have a default hydro setup running, with "source /opt/ros/hydro/setup.bash" in your .bashrc file. It will also use the catkin version of hector_slam from source as it appears you want to use catkin (?). If you don't absolutely require installation from source, "sudo apt-get install ros-hydro-hector-slam" should also work.
Create a folder somewhere:
mkdir hector_slam_test
Enter folder, init workspace:
cd hector_slam_test
wstool init
Copy the following lines to a file named "hector_slam_catkin.rosinstall":
- git:
uri: https://github.com/tu-darmstadt-ros-pkg/hector_slam.git
local-name: src/hector_slam
version: catkin
/edit: It appears the link gets screwed up by ROS Answers. You can see the correct formatting here: http://answers.ros.org/answers/115386/revisions/
Merge the contents of this rosinstall file into your workspace:
wstool merge hector_slam_catkin.rosinstall
Update workspace (this updates/pulls all repos belonging to the workspace):
wstool update
Build contents of workspace:
catkin_make
Source the setup file in the workspace´s devel folder:
source devel/setup.bash
You should now be able to use hector_slam as expected. For example
roslaunch hector_slam_launch tutorial.launch
should start without errors.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-01-06
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by CathIAS on 2014-01-08:
Yes. In fact I used sudo apt-get ros-hydro-hector-slam, and it also worked. To install from source and in a customized workspace, your method is better.
Comment by mudassar on 2014-10-11:
When I run: wstool merge hector_slam_catkin.rosinstall
There is an error: ERROR in config: Yaml for each element must be in YAML dict form
Kindly help me to resolve!
mudassar | {
"domain": "robotics.stackexchange",
"id": 16586,
"tags": "ros, source, metapackage"
} |
Convex polygons inclusion relation | Question: I have the following problem which came as a subproblem in some work I was doing and I am completely stuck.
Note that I am interested in it only in terms of worst case time complexity (not heuristics or anything else).
Given is a set $\mathcal{P}$ of $m$ convex polygons with $n$ overall vertices.
PROB: Find the set $Z \subset \mathcal{P}$, such that for any $Q \in Z$, there exists a $P \in \mathcal{P}\setminus \{Q\}$ with $P$ contained in $Q$. (i.e. "Find the polygons which contain at least one polygon")
In the following example, the set Z would consist of the 4 highlighted polygons
Some thoughts I had were:
A first idea was to plane sweep with all vertices as events. Every time an event would come we would check the polygons it belongs to and mark them. When the event would be the end of a polygon we could verify that if this belongs to any polygon or not.
The problem is that a query event could take $O(m)$ as it could be inside $O(m)$ polygons (assuming an interval tree DS where an "all_overlap" operation takes $O(\min\{m,c\cdot \log m\})$ time, where $c$ is the number of overlaps - containing polygons). Moreover, I believe we can create a worst case instance where $O(n)$ events have $O(m)$ overlaps. So, with this approach it seems that we could end up with an $O(mn + n\log n)$ time complexity.
I started having some thoughts for a more elaborate plane sweep to use the pairwise polygon intersections but since this can be $\Theta(m^2)$ in the worst case, I didn't further look on that.
Another thought was to make an algorithm using range search queries for the convex polygons. If we triangulated every polygon, having $O(n)$ triangles, we could check the containment. Unfortunately, taking a "brief look", I didn't find very "positive" results for answering fast range queries with anything different than rectangles, e.g. triangles. Although I didn't yet delve very deep into it.
I would be extremely happy to have an $O((1+|Z|) n\log^2 n)$ time algorithm. I am not positive about that anymore. So, I would be happy with any algorithm or ideas how to further proceed.
Finally, and perhaps as a starting point, one could consider the simplified decision problem.
PROB*: Does there exist $Q \in \mathcal{P}$ such that there exists a $P \in \mathcal{P}\setminus \{Q\}$ with $P$ contained in $Q$.
What would be an efficient algorithm for that?
Answer: Here's an argument that you need time quadratic in the number of polygons. More precisely,
you should not be able to find containing pairs among $n$ $k$-sided polygons in time $O(n^{2-\epsilon})$, for any $k=O(n^{\delta})$ and any $\epsilon,\delta>0$. It's a reduction from the orthogonal vectors problem, the problem of finding two disjoint binary vectors among a set of $n$ $k$-dimensional binary vectors, for which the non-existence of an algorithm with time $O(n^{2-\epsilon})$ is a standard conjecture in fine-grained complexity (see e.g. Abboud et al, "More consequences of falsifying SETH and the orthogonal vectors conjecture"). Here orthogonality is measured in $\mathbb{Z}^k$, i.e. it means that the vectors have disjoint subsets of nonzero coordinates.
Given a set of vectors among which we wish to find an orthogonal pair, consider a regular $k$-gon $P$, whose vertices correspond to the coefficients of our vectors. For each vector $x$ construct a polygon $p(x)$ as the convex hull of the $k$-gon vertices for which $x$ has a nonzero coefficient. To make sure that none of these polygons contains another, shrink each $k$-gon by a factor $1-\gamma |x|$ for some very small number $\gamma$, so that when one vector is dominated by another its polygon is shrunk more.
Next, form two more regular $k$-gons $Q$ and $R$, concentric with $P$, where $Q$ is slightly smaller than $P$ (so that all its vertices are inside the shrunken copies of vertices of $P$) and $R$ is slightly bigger (both close enough that the construction below produces convex polygons). For each vector $x$, construct a second polygon $q(x)$, a convex $k$-gon that, on each ray from the origin through a vertex of $P$, $Q$, and $R$, chooses either a vertex of $Q$ or $R$: a vertex of $Q$ when the corresponding coefficient of $x$ is nonzero, and a vertex of $R$ when it is zero.
We can make sure that all vectors have at least one zero coefficient in order to ensure that no $q(x)$ is contained inside a $p(y)$, and we can do the same variable-shrinking trick to ensure that no two $q$'s are nested.
With this construction, the only remaining containments of polygons are that a polygon $q(x)$ contains another polygon $p(y)$. This happens if and only if $x$ and $y$ are orthogonal. So if we could solve your polygon question in sub-quadratic time, we could also solve the orthogonal vectors problem in sub-quadratic time, conjectured to be impossible. | {
"domain": "cstheory.stackexchange",
"id": 4893,
"tags": "ds.algorithms, computational-geometry, polygon, convex-hull"
} |
Parsing pipe delimited lines | Question: I am parsing the following line of code via a specific format.
Line is:
S|111111|87654321|Bar UK|BCreace UK|GBP|24/08/2010|
The Format is:
Index Field Length
S0 - 1
S1 - 6
S2 - 34
....
...
S6 - 10
I am validating using many if statements. Could anyone suggest a better approach?
private static StatementLineResult Validate(string delimitedRecord)
{
if (delimitedRecord == null)
throw new ArgumentNullException("delimitedRecord");
var items = delimitedRecord.Split('|');
if(items.Length != 19)
throw new Exception("Improper line format");
var errorMessage = new Dictionary<string, string>();
var bankIdentifierCodes = new List<string> {"ABCDGB2L", "EFGHGB2L"};
if (items[0].Length != 1 || items[0] != "S")
errorMessage.Add("Record Identifier","Invalid Record Identifier");
var sortCode = 0;
if (!int.TryParse(items[1], out sortCode) || items[1].Length > 6)
errorMessage.Add("Sort Code", "Invalid sort code");
if (string.IsNullOrEmpty(items[2]) || items[1].Length > 34)
errorMessage.Add("Account Number", "Invalid account number");
if (string.IsNullOrEmpty(items[3]) || items[1].Length > 35)
errorMessage.Add("Account Alias", "Invalid account alias");
if (string.IsNullOrEmpty(items[4]) || items[1].Length > 35)
errorMessage.Add("Account Name", "Invalid account name");
if(string.IsNullOrEmpty(items[5]) || items[5].Length != 3)
errorMessage.Add("Account Currency", "Invalid account currency");
if (!string.IsNullOrEmpty(items[6]) && items[6].Length > 20)
errorMessage.Add("Account Type", "Invalid account type");
if(string.IsNullOrEmpty(items[7]) || items[7].Length != 8 || !bankIdentifierCodes.Contains(items[7],StringComparer.OrdinalIgnoreCase))
errorMessage.Add("Bank Identifier Code", "Invalid bank identifier code");
if (!string.IsNullOrEmpty(items[8]) && items[8].Length > 35)
errorMessage.Add("Bank Name", "Invalid bank name");
if (!string.IsNullOrEmpty(items[9]) && items[9].Length > 27)
errorMessage.Add("Bank Branch Name", "Invalid bank branch name");
DateTime transactionDate;
if (!DateTime.TryParse(items[10], out transactionDate))
errorMessage.Add("Transaction Date", "Invalid date");
if (!string.IsNullOrEmpty(items[11]) && items[11].Length > 25)
errorMessage.Add("Narrative Line 1", "Invalid narrative line 1");
if (!string.IsNullOrEmpty(items[12]) && items[12].Length > 25)
errorMessage.Add("Narrative Line 2", "Invalid narrative line 2");
if (!string.IsNullOrEmpty(items[13]) && items[13].Length > 25)
errorMessage.Add("Narrative Line 13", "Invalid narrative line 13");
if (!string.IsNullOrEmpty(items[14]) && items[14].Length > 25)
errorMessage.Add("Narrative Line 14", "Invalid narrative line 14");
if (!string.IsNullOrEmpty(items[15]) && items[15].Length > 25)
errorMessage.Add("Narrative Line 15", "Invalid narrative line 15");
if(_transactionTypes.First(item=>item.TransactionType==items[16]) == null)
errorMessage.Add("Transaction Type", "Invalid transaction type");
decimal debitValue;
if(items[17] != "" && !Decimal.TryParse(items[17], out debitValue))
errorMessage.Add("Debit Value", "Invalid debit amount");
decimal creditValue;
if (items[18] != "" && !Decimal.TryParse(items[18], out creditValue))
errorMessage.Add("Credit Value", "Invalid credit amount");
return new StatementLineResult()
{
ErrorMessages = errorMessage,
Data = BankLineData(delimitedRecord),
IsValid = errorMessage.Count==0
};
}
Answer: You are using the following if quite a lot:
if (string.IsNullOrEmpty(items[index]) || items[index].Length > value)
errorMessage.Add("Field Name", "Invalid field value");
now, there is not much you can do about this, but I do notice that you are making some mistakes in copy-pasting the index values, which could be avoided by making the conditional statement into a separate method:
private static bool FieldLengthIsInvalid(string field, int maxLength)
{
    return string.IsNullOrEmpty(field) || field.Length > maxLength;
}
And then call it like this (with maxLength being the allowed length for that field):
if (FieldLengthIsInvalid(items[index], maxLength))
    errorMessage.Add("Field Name", "Invalid field value");
You can then combine this with the recommendation from t3chb0t to add an Attribute annotation to the class you're parsing the line to. Another advantage of this is that it's easier to add a secondary validation check, because adding extra validations doesn't mean struggling with whether you need to use || or &&, which can get confusing in your case because you sometimes use && and sometime || for what is essentially the same comparison. | {
"domain": "codereview.stackexchange",
"id": 15753,
"tags": "c#, parsing, asp.net, csv"
} |
Determine whether there is a winner in tic-tac-toe game | Question: Below a function which answers: is there a winner in tic-tac-toe game?
Also, there is a test suite I wrote for it.
def is_win(field):
    # check horizontal
    N = len(field)
    for i in range(N):
        if field[i][0] != 0 and len(set(field[i])) == 1:
            return True
        vertical = [field[j][i] for j in range(N)]
        if vertical[0] != 0 and len(set(vertical)) == 1:
            return True
    # check diagonal
    diagonal = [field[i][i] for i in range(N)]
    if diagonal[0] != 0 and len(set(diagonal)) == 1:
        return True
    o_diagonal = [field[N-i-1][i] for i in range(N)]
    if o_diagonal[0] != 0 and len(set(o_diagonal)) == 1:
        return True
    return False
assert is_win(
[
[0, 0, 0],
[2, 2, 2],
[0, 0, 0],
]) == True
print(1)
assert is_win(
[
[0, 2, 0],
[0, 2, 0],
[0, 2, 0],
]) == True
print(2)
assert is_win(
[
[0, 0, 2],
[2, 2, 1],
[0, 0, 1],
]) == False
print(3)
assert is_win(
[
[2, 0, 0],
[0, 2, 0],
[0, 0, 2],
]) == True
print(4)
assert is_win(
[
[0, 0, 2],
[0, 2, 0],
[2, 0, 0],
]) == True
print(5)
Answer:
Your code is WET.
var[0] != 0 and len(set(var)) == 1
Is repeated four times.
field[len(field)-i-1] can be simplified to field[~i].
I'd personally just use a couple of any and a couple of ors.
def is_unique_player(values):
    return values[0] != 0 and len(set(values)) == 1

def is_win(field):
    N = len(field)
    return (
        any(is_unique_player(row) for row in field)
        or any(is_unique_player(column) for column in zip(*field))
        or is_unique_player([field[i][i] for i in range(N)])
        or is_unique_player([field[~i][i] for i in range(N)])
    ) | {
"domain": "codereview.stackexchange",
"id": 34995,
"tags": "python, tic-tac-toe"
} |
Centre of mass, integral | Question: I was answering a question on proving the parallel axis theorem for angular momentum and came across this:
$$\int Yy'dm=Y\int y' dm=0$$
Where the position of the center of mass of an object is given by $(X,Y,Z)$, $(x',y',z')$ is a position relative to the centre of mass and m is the mass of the object.
My text book (Introduction to classical mechanics) says that this is due to the definition of the centre of mass. There are two things that I don't understand :
Firstly, why is $Y$ independent of mass whilst $y'$ is not?
Secondly, can you please explain what definition of the centre of mass they are using to get the reslut above?
I really have no idea to the answer for either of these questions.
Answer: The center of mass, $\vec R$, is just a vector. It's not a function of anything, it is just the result of the integral $\frac{1}{m}\int \vec r dm = \frac{1}{m}\int \rho(\vec r)\vec r dV$ ($\rho(\vec r)$ is the volume density) over all the region delimited by the body, it's a definite integral, that gives you a vector.
The fact that the integral you posted is zero is simply because you are measuring distances relative to the CM. What is the position of the CM relative to the CM? Zero: since $\vec r' = \vec r - \vec R$, we have $\int \vec r'\, dm = \int \vec r\, dm - \vec R \int dm = m\vec R - m\vec R = \vec 0$. So:
$\frac{1}{m}\int \vec r' dm = 0$, this is the integral of a vector, so the integral of every component must be zero. So, $\frac{1}{m}\int y' dm = 0$. | {
"domain": "physics.stackexchange",
"id": 15753,
"tags": "newtonian-mechanics, moment-of-inertia, moment"
} |
Calculating kills and deaths in a game | Question: I'm trying to make a program that calculates your k/d ratio (kill/death, which is used in FPS games to make people believe that it's skill) and how many kills you need without dying once to reach a goalKD. It also has a part that calculates how many battles you need if you give the program your average battle kills and battle deaths.
Is there an efficient way to program it? Currently, the way I'm doing it is getting me into programmer-efficiency hell.
For i = 1 to 100000 {
    While GoalKD > KDratio {
        Kills = BattleKills + Kill
        Deaths = BattleDeaths + Death
        KDratio = Kills / Deaths
        i++
    }
}
I'm programming this in SmallBasic because I'm most familiar with it, and I think it's easily readable. Also, if possible, don't give the answer right away; give me some hints for me to practice my mind. Add the answer in a spoiler box.
Answer: For the kills needed part, you are trying to solve this equation:
k{current} + n
-------------- = r
d{current}
Where r is the target rate and n is the number you're looking for. Some basic algebra:
n = rd - k
For the battles part, you need to solve
k{current} + b * k{battle} // current kills + additional kills
-------------------------- = r
d{current} + b * d{battle} // current deaths + additional deaths
k{current} + b * k{battle} = r * (d{current} + b * d{battle}) // multiply both sides by d{current} + b * d{battle}
k{current} + b * k{battle} = r * d{current} + r * b * d{battle} // just showing multiplication of r
b * (k{battle} - r * d{battle}) = r * d{current} - k{current} // subtracting terms from each side
    r * d{current} - k{current}
b = ---------------------------   // dividing by k{battle} - r * d{battle}
    k{battle} - r * d{battle}
In your code above, this becomes
need = Math.ceiling( GoalKD * Deaths - Kills )
battles = Math.ceiling( ( GoalKD * Deaths - Kills ) / (BattleKills - GoalKD * BattleDeaths) )
[I'm not a SmallBasic guy, so please forgive any syntax errors above.]
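To illustrate the same arithmetic (a Python sketch, not SmallBasic — the current totals, goal, and per-battle averages below are made-up example numbers):

```python
import math

kills, deaths = 120, 100               # current totals (example values)
goal_kd = 1.5                          # target ratio r
battle_kills, battle_deaths = 10, 4    # per-battle averages (example values)

# n = r*d - k : kills needed without dying once
need = math.ceil(goal_kd * deaths - kills)

# b = (r*d - k) / (k_battle - r*d_battle) : battles needed at the average rate
battles = math.ceil((goal_kd * deaths - kills)
                    / (battle_kills - goal_kd * battle_deaths))

print(need, battles)  # 30 8
```

Note this only makes sense when `battle_kills > goal_kd * battle_deaths`; otherwise each battle moves the ratio away from the goal and it is unreachable.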
(Edited to fix math error re battles required and add comments to the math.) | {
"domain": "codereview.stackexchange",
"id": 5415,
"tags": "performance, homework, basic-lang"
} |
How can I fit categorical data types for random forest classification? | Question: I need to find the accuracy of a training dataset by applying the Random Forest algorithm. But my data set contains both categorical and numeric types. When I tried to fit those data, I got an error.
'Input contains NaN, infinity or a value too large for dtype('float32')'.
Maybe the problem is with object data types. How can I fit categorical data without transforming it before applying RF?
Here's my code.
Answer: You need to convert the categorical features into numeric attributes. A common approach is to use one-hot encoding, but that's definitely not the only option. If you have a variable with a high number of categorical levels, you should consider combining levels or using the hashing trick. Sklearn comes equipped with several approaches (check the "see also" section): One Hot Encoder and Hashing Trick
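To see what one-hot encoding does without pulling in a library, here is a minimal stdlib-only sketch (the function name and toy data are my own, not sklearn's API):

```python
def one_hot(values):
    """Map each categorical value to a 0/1 indicator vector."""
    levels = sorted(set(values))  # fix a column order for the levels
    return [[1 if v == level else 0 for level in levels] for v in values]

# "blue" < "red", so the columns are [blue, red]
print(one_hot(["red", "blue", "red"]))  # [[0, 1], [1, 0], [0, 1]]
```

In practice you would use sklearn's `OneHotEncoder` or `pandas.get_dummies`, which also handle unseen levels, sparse output, and keeping the numeric columns alongside.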
If you're not committed to sklearn, the h2o random forest implementation handles categorical features directly. | {
"domain": "datascience.stackexchange",
"id": 7707,
"tags": "python, scikit-learn, data-mining, random-forest"
} |
What are the dangers of AI applications for nuclear industry? | Question: I've found this old scientific paper from 1988 about introduction of AI into nuclear power fields.
Were or still are there any dangers by application of such algorithm? Are nuclear power plants or human life in risk if the algorithm will fail?
Especially applications to the core, like cooling systems and other components which can be affected in negative way.
Answer: Any technology in the nuclear industry represents variance--it may be an improvement in safety or efficiency, or it may contain some unseen defect that allows a catastrophe to happen.
But the simple possibility of harm isn't enough to swing the decision one way or the other. The application of AI methods--whether to the real-time control of plant variables, or the early detection of problems, or to the design of plants and their components--seems likely to be as beneficial as in other realms.
For example, check out the publication list of a lab active in this area. Their paper I'm most familiar with is one in which they build a fault detector paired with a fault library classifier, so that the operators can be alerted not just that something is abnormal but what fault has probably occurred. This is done in such a way that standardized plants (such as, say, the French nuclear system) can share records with each other, meaning that any plant has the experience of every plant at their fingertips. | {
"domain": "ai.stackexchange",
"id": 24,
"tags": "applications"
} |
Quantizing Wave Vectors in 2D Electron Gas: Periodic vs Hard Wall Boundary Conditions? | Question: I'm studying the behavior of a two-dimensional electron gas in a magnetic field, focusing on the quantization of wave vectors and the resulting energy levels, specifically Landau levels. However, I've come across a point of confusion regarding the application of boundary conditions and the consequent quantization of the wave vector component $k_y$.
In the absence of a magnetic field, I understand that a particle in a box would have its wave vector quantized in units of $\pi/L$ due to the hard wall boundary conditions, where the wave function must be zero at the walls.
Conversely, in the presence of a magnetic field, as in the case of Landau levels, I've seen in most textbooks that the wave vector component $k_y$ is quantized in units of $2\pi/L$ instead, implying periodic boundary conditions: $\psi(x,y)=\psi(x,y+L_y)$.
Could someone help clarify the following points?
Why is $k_y$ quantized as $2\pi n/L_y$ under periodic boundary conditions for a 2DEG in a magnetic field, rather than $\pi n/L_y$ as one might expect from a "box" analogy?
How does "having a finite size $L_y$" reconcile with the use of periodic boundary conditions in this context?
Is there a physical intuition or a solid-state physics convention that dictates the choice of periodic boundary conditions over hard wall conditions when discussing Landau levels and the quantization of $k_y$?
Any insights or explanations would be greatly appreciated, especially those that can bridge my understanding of quantum mechanics in confined systems with the behaviors observed in solid-state physics.
Answer:
Why is $k_y$ quantized as $2πn/L_y$ under periodic boundary conditions for a 2DEG in a magnetic field, rather than $πn/L_y$ as one might expect from a "box" analogy?
About this point, Buzz correctly points out that
For a system of finite size containing a thermodynamically large number of particles, it should not matter precisely what boundary conditions are used at the edges.
There is a particular name for this Periodic Boundary Condition:
https://en.wikipedia.org/wiki/Born%E2%80%93von_Karman_boundary_condition
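The factor of 2 can be seen directly from the allowed solutions (a short derivation, not from the original answer): periodic boundary conditions admit travelling waves, while hard walls force standing waves.

```latex
% Periodic (BvK) boundary conditions: travelling waves survive
\psi \propto e^{i k_y y}, \qquad \psi(y + L_y) = \psi(y)
\;\Rightarrow\; e^{i k_y L_y} = 1
\;\Rightarrow\; k_y = \frac{2\pi n}{L_y}, \quad n \in \mathbb{Z}
% Hard walls: only standing waves, and n \le 0 gives no new state
\psi \propto \sin(k_y y), \qquad \psi(0) = \psi(L_y) = 0
\;\Rightarrow\; k_y = \frac{\pi n}{L_y}, \quad n = 1, 2, \dots
```

Note the density of distinct states is the same either way: spacing $2\pi/L_y$ with both signs of $n$ allowed, versus $\pi/L_y$ with only positive $n$.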
BvK PBC is very useful particularly for your question
Is there a physical intuition or a solid-state physics convention that dictates the choice of periodic boundary conditions over hard wall conditions when discussing Landau levels and the quantization of $k_y$?
Because in solid state physics we are often interested in conduction, which hard wall standing wave boundary conditions cannot treat at all, whereas BvK PBC allows us to pretend to treat it. It is also almost always the case that BvK PBC makes for perfectly unperturbed energy levels (of equal spacing, say), where the perturbation I am talking about comes from boundary conditions. Remember that when you were using standing wave boundary conditions, the normal mode decomposition gave almost analytic energy eigenvalues, with small deviations at the end of the spectrum. BvK PBC ensures that there will be no small deviations at all, and the normal mode decomposition is trivial to state.
How does "having a finite size $L_y$" reconcile with the use of periodic boundary conditions in this context?
This is actually irreconcilable. If $L_y$ is small, then there is no way that BvK PBC can be used. You would actually have to consider the boundary conditions exactly.
This is because the use of BvK PBC is to imagine that you have an infinitely large system, from which you are mentally chopping out a box of width $L_y$, and hoping that such a boundary-less mental copy is a good approximation of the actual system you want to study. When $L_y$ is actually small, then you cannot assert that this approximation is tolerable. | {
"domain": "physics.stackexchange",
"id": 99511,
"tags": "quantum-mechanics, wavefunction, solid-state-physics, boundary-conditions, approximations"
} |
Why does the Higgs field need hypercharge to work? | Question: I know that the photon and gluon don't have any hypercharge, so they can zip through the Higgs field at the speed of light, while left- and right-handed particles seem to couple through this hypercharge. So what does the Higgs boson really do?
Answer: Hypercharge isn’t needed for the Higgs mechanism to work, it’s needed for Electroweak theory to match observations.
The Higgs mechanism can generate mass terms in a wide variety of theories, but the $\mathrm{SU(2)_L \times U(1)_Y}$ theory of the Standard Model is the simplest one so far that explains all the observations. As a specific example, if you know the ratio of the weak coupling to the electromagnetic coupling you can predict the ratio of the W and Z boson masses. As these ratios are consistent with the predictions of Electroweak theory containing hypercharge, it would be an astonishing coincidence if this were by chance.
Furthermore, it’s not quite true that all particles without hypercharge don’t have mass — the W and Z bosons don’t have hypercharge but interact with the Higgs via isospin (the SU(2) charge). | {
"domain": "physics.stackexchange",
"id": 63376,
"tags": "mass, standard-model, higgs, symmetry-breaking, electroweak"
} |
repeated service calls cause strange behavior, publication of joint angle updates compromised? | Question:
Hi.
Using Linux 14.04, ROS Indigo. I am implementing a force controller loop that also consists of an inner position control loop.
My set-up is as follows:
I have tried asyncspinner/multithreaded spinner and just spinner to subscribe to Baxter's Joint States to get joint angle positions, velocities, torques, and gravitational torques. Optionally I subscribe to the endpoint wrench (and may also re-publish a low-pass filtered endpoint wrench). I also publish new joint angle reference positions to /robot/limb/side/JointCommand.
The force controller computes the error either in the force or the moment of the endpoint and uses the Jacobian Transpose to compute a joint angle update. This update is added to the current angles and then fed into the position controller. The position controller runs until the update is achieved.
The force controller is activated through a service call that can be done manually or in code.
Up to this point everything works fine. However, I am looking for reactivity in the force controller. I am seeking a behavior, whereby if the endpoint is pushed the wrist should respond quickly and adjust to achieve the set-point.
To do this, I can either put a while loop around the force_controller and ask it to continue to run as long as the error has not been minimized beyond a threshold. Or equivalently, I could make cyclical calls with the client service through a while loop.
However, in my experience as soon as I do this, the arm does not move to the reference joint angles computed by the position_controller and invariably (no matter what output, technique, or setup options) always moves towards a straight arm configuration bit-by-bit.
I should clarify: if I issue service calls manually, the force controller moves the end-effector in the expected direction. So this manual method works. But as soon as I put it in a while loop, whether around the service client or around the server side, I get the strange behavior. I tried different sleep times and nothing changes.
I have debugged over and over to make sure that the correct joint angle references are published to joint command (which for the baxter robot are fed directly in the control boards and moves the robot) and I believe they are. In fact, when the while loop is removed around the force control, and I do this manually, the program behaves well, it just lacks reactivity.
Update:
I have further tested by creating a wrapper node around a client instantiation and call. That is, I run a while loop around a function in which ros::init, a new ros node, a new service client object, and a new client call are all made, and then upon the function's exit they are killed. I too have tried changing sleep times around the call of the function in the while loop but same effect.
The client code can be seen here:
https://github.com/birlrobotics/birl_baxter/blob/master/controllers/force_controller/src/force_control_client.cpp
The server code can be seen here:
https://github.com/birlrobotics/birl_baxter/blob/master/controllers/force_controller/src/controller.cpp
This is as close I think as it gets to calling the service manually, but still the same behavior.
I think there is definitely something strange in the way that the service behaves.
I wonder if anyone else has come up with this kind of problem or if you have any insights into it.
Update 2: I tried a publisher-subscribe method and I got the same behavior. This led me to believe that the problem did not lie with the communication type but with other timing.
After further examination in my code I found a logic error between the way the force and position control loops communicate with each other!!
Thanks
Originally posted by Juan on ROS Answers with karma: 208 on 2015-11-16
Post score: 0
Original comments
Comment by gvdhoorn on 2015-11-17:
From your description it sounds like this could be Baxter related. Have you sent a message to the Baxter research google group?
Comment by Juan on 2015-11-17:
Yes. I wonder if this is a service related question.
How does the service communication work when multiple client calls are made? There is no queue, it seems, in case of network latency.
Or any problems publishing from within a serviced call through a multithreaded spinner?
Answer:
After further examination in my code I found a logic error between the way the force and position control loops communicate with each other!! Thanks
Originally posted by Juan with karma: 208 on 2015-11-19
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 22996,
"tags": "robotic-arm, ros, joint, service, update"
} |
Circular shift string | Question: So, basically I am working on an algorithm for circularly shifting a string up to a position. There are two parameters required here.
String of text to shift
Number of shifts required.
For example:
Given string: "Hello"
Shifts: 3
Final: "ollHe"
Ignore the limited validation here.
import java.lang.*;
import java.util.*;

public class Shift
{
    public String shifting(String s,Integer sh)
    {
        Integer shifts=sh;
        String string=s;
        char[] shifted=new char[50];
        Integer pos=0;
        for(int i=string.length();i>shifts;i--)
        {
            System.out.println(""+i);
            pos++;
            shifted[pos]=string.charAt(i-1);
        }
        System.out.println("Shifting complete");
        for(int j=0;j<shifts;j++)
        {
            System.out.println(""+j);
            pos++;
            shifted[pos]=string.charAt(j);
        }
        return new String(shifted);
    }

    public static void main(String[] args) {
        System.out.println("Enter the string ");
        Scanner sc=new Scanner(System.in);
        String string=sc.nextLine();
        System.out.println("So you have entered:"+string);
        System.out.println("Enter the number of shifts:");
        Integer shifts=sc.nextInt();
        System.out.println("number of shifts is:"+shifts.toString());
        System.out.println("Shifted string:"+new Shift().shifting(string,shifts));
    }
}
Give your views on the code here.
Answer: Do you have any constraints?
Is your example wrong? I would have expected lloHe
If you are concerned about modifying your method parameters, make them final. Copying them to local variables that you then don't change is kind of useless.
Don't use short variables names (exception: the loop counter)
If you use Integer instead of int, you should handle null
Maybe a simple static utility method would be fine here, unless you have some inheritance scenario in mind.
Maybe you should also indicate that you are shifting left for positive numbers
You should use the length of your string for the char array.
Assuming your example is wrong, what about:
import java.util.Objects;
import javax.annotation.Nonnull;

public final class Shift
{
    @Nonnull
    public static String left( @Nonnull final String string, final int shift )
    {
        final int length = string.length();
        if( length == 0 ) return "";
        final int offset = ((shift % length) + length) % length; // get a positive offset
        return string.substring( offset, length ) + string.substring( 0, offset );
    }

    public static void main( String... args )
    {
        assertEquals( "loHel", Shift.left( "Hello", -2 ) );
        assertEquals( "oHell", Shift.left( "Hello", -1 ) );
        assertEquals( "Hello", Shift.left( "Hello", 0 ) );
        assertEquals( "elloH", Shift.left( "Hello", 1 ) );
        assertEquals( "lloHe", Shift.left( "Hello", 2 ) );
        assertEquals( "loHel", Shift.left( "Hello", 3 ) );
        assertEquals( "oHell", Shift.left( "Hello", 4 ) );
        assertEquals( "Hello", Shift.left( "Hello", 5 ) );
        assertEquals( "elloH", Shift.left( "Hello", 6 ) );
        assertEquals( "", Shift.left( "", 3 ) );
    }

    private static void assertEquals( String expected, String actual )
    {
        if( !Objects.equals( expected, actual ) ) throw new AssertionError( "Expected: >" + expected + "< was: >" + actual + "<" );
    }
} | {
"domain": "codereview.stackexchange",
"id": 23390,
"tags": "java, algorithm, stack"
} |
No message recieved from pcl/NormalEstimation nodelet | Question:
Hello,
I'm trying to run simple example with kinect and normal estimation, I'm running a simple launch file:
<launch>
  <!-- Launch openni grabber -->
  <include file="$(find openni_launch)/launch/openni.launch"/>
  <!-- Launch nodelet manager -->
  <node pkg="nodelet" type="nodelet" name="nodelet_manager" args="manager" output="screen"/>
  <!-- Normal Estimation -->
  <node pkg="nodelet" type="nodelet" name="normal_estimation" args="load pcl/NormalEstimation nodelet_manager" output="screen">
    <remap from="~input" to="/camera/depth_registered/points"/>
    <rosparam>
      # -[ Mandatory parameters
      k_search: 0
      radius_search: 0.015
      spatial_locator: 0
    </rosparam>
  </node>
</launch>
but I'm getting "No message received" if I try to subscribe /normal_estimation/output topic in rviz, even though rostopic echo /normal_estimation/output show that messages are published.
Originally posted by liborw on ROS Answers with karma: 801 on 2012-05-03
Post score: 0
Original comments
Comment by karthik on 2012-05-03:
can you please share the screenshot of the rxgraph
Answer:
I think the problem is that pcl/NormalEstimation publishes pcl::Normal, which does not contain coordinates.
Originally posted by liborw with karma: 801 on 2012-05-03
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 9231,
"tags": "rviz, openni, pcl"
} |
Physical interpretation of the Maxwell stress tensor | Question: In 'Introduction to Electrodynamics' by D. Griffiths, shortly after introducing the Maxwell stress tensor there is a paragraph concerning the physical interpretation of the stress tensor $\boldsymbol{T}$
Physically, $\boldsymbol{T}$ is the force per unit area (or stress) acting on the surface. More precisely, $T_{ij}$ is the force (per unit area) in the $i$th direction acting on an element of the surface oriented in the $j$th direction - "diagonal" elements ($T_{xx}$, $T_{yy}$, $T_{zz}$) represent pressures, and "off-diagonal" elements ($T_{xy}$,$T_{xz}$, etc.) are shears.
I understand where all this comes from mathematically, but I fail to grasp how this translates into an actual force, and specifically what is meant by "an element of the surface oriented in the $j$th direction". Any clarification would be greatly appreciated.
Answer: It means a surface element whose tangent plane has a normal in the $j$th direction. For a flat surface, we can shorten that to the normal to the surface pointing in the $j$th direction. For a curved surface, each infinitesimal patch has its own tangent plane. In terms of an actual force: the force on an infinitesimal patch with area element $d\vec a = \hat n\, da$ is $dF_i = \sum_j T_{ij}\, da_j$, so each column of $\boldsymbol T$ picks out the force transmitted across a patch oriented along that axis. | {
"domain": "physics.stackexchange",
"id": 49530,
"tags": "electromagnetism"
} |
Understanding The Fluctuations In The CMB Maps | Question: If I'm understanding this correctly, the fluctuations in the CMB are a result of the "last scattering" of photons, which happened when electrons joined together with nuclei that had until then formed a plasma. The CMB is, among other things, used to see the matter density fluctuations of the early universe, right?
So, how is this seen? Are the warmer parts in the maps where the matter is most dense? And are these spots warmer because more photons are coming from that spot, or because of some other reason?
Answer: The anisotropies in the CMB are caused by four effects; three at the surface of last scattering (SoLS), and one after:
Temperature differences
Denser regions will be more compressed and thus hotter, on average. Hence, an overdensity will result in a hotter spot, with a fractional fluctuation $\Delta T/T_0$.
Gravitational redshift
Photons climbing up (or falling down) the gravitational potential $\phi$ of their SoLS will lose (or gain) energy. Hence, an overdensity will result in a colder spot, and an underdensity in a hotter spot. This effect is also called the non-integrated Sachs-Wolfe effect and changes the temperature by $\Delta T/T_0 = \phi/3c^2$, where $c$ is the speed of light.
Doppler shift
The SoLS has a bulk motion $\mathbf{v}$. Generally, overdensities have gas infalling, and hence result in a colder spot. In the direction $\hat{\mathbf{n}}$, the temperature difference will be $\Delta T/T_0 = \hat{\mathbf{n}}\cdot\mathbf{v}/c$.
The integrated Sachs-Wolfe effect
If a potential well is getting shallower in time (which happens during dark-energy domination), photons receive a net blueshift in crossing the well and the CMB appears hotter. This gives a temperature difference of $\Delta T/T_0 = 2\Delta\phi/c^2$, where $\Delta\phi$ is the change in the potential while a photon traversed the potential well.
The total temperature fluctuation is the sum of all these terms. In general, the gravitational redshift will dominate over the temperature differences, which in turn dominates over the Doppler shift.
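Schematically, collecting the four contributions into one expression (a sketch only — the sign conventions for the potential and velocity terms vary between references):

```latex
\frac{\Delta T}{T_0}(\hat{\mathbf{n}})
= \left.\frac{\Delta T}{T_0}\right|_{\mathrm{intrinsic}}
+ \frac{\phi}{3c^2}
- \frac{\hat{\mathbf{n}} \cdot \mathbf{v}}{c}
+ \frac{2\,\Delta\phi}{c^2}
```

Here the first three terms are evaluated at the surface of last scattering, and the last is accumulated along the line of sight.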
A review of the CMB fluctuations is given by Challinor (2013). | {
"domain": "astronomy.stackexchange",
"id": 4016,
"tags": "big-bang-theory, cosmic-microwave-background, photons"
} |
gpg: keyserver receive failed: No name | Question:
Hello.
I am following the installation guide for Debian Buster of ROS Noetic and in the second step, executing the comand sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654 I got the following error:
Executing: /tmp/apt-key-gpghome.nAyAX9Bj38/gpg.1.sh --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
gpg: keyserver receive failed: No name
I read that apt-key is deprecated or something like that. How can I continue the installation?
Best regards.
Alessandro
Originally posted by Alessandro Melino on ROS Answers with karma: 113 on 2021-11-17
Post score: 0
Original comments
Comment by osilva on 2021-11-17:
There is a recent similar question, please take a look: https://answers.ros.org/question/307081/unable-to-install-keys-for-ros-kinetic-using-ubuntu-1604/
Comment by Alessandro Melino on 2021-11-17:
Thank you a lot for your comment. I think I have resolved it another way.
Comment by osilva on 2021-11-17:
Glad you found an answer.
Answer:
Hello.
Trying other ways to solve it, I finally solved it using the command:
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
Then, following the rest of the steps, it works fine, but for example this is not available:
sudo apt install ros-noetic-desktop-full
Instead I have used:
sudo apt install ros-noetic-desktop
And it is installing.
Best regards.
Alessandro
Originally posted by Alessandro Melino with karma: 113 on 2021-11-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 37145,
"tags": "ros, installation"
} |
how do we navigate space without knowing the position of everything? wouldn't gravity affect the satellite/ship? | Question: I tried looking for other questions but I couldn't find any. (if this is a duplicate, then I'm sorry, I just signed up, so I'm not sure what to search for)
I was wondering, how do we navigate into deep space without knowing the position of everything in space? Doesn't all matter have a gravitational effect on all other matter, even if the effect is very small? So wouldn't that mean that to successfully land a probe/ship/satellite on an asteroid we would have to compute the velocity of the spacecraft based on all the positions of every star, planet, moon, and asteroid in the universe, as well as random particles?
please note: I am not very experienced in physics. I am in 9th grade and am currently taking algebra 2
Answer: There are two key effects which help deal with this.
The first is that, were we to want to land precisely on a point on an asteroid, we would indeed need to account for every last bit of matter in the universe. Fortunately, in most situations we don't mind being nanometers off. In fact, we often don't mind being meters off. With this in mind, we can calculate how much of an effect this unknown mass would have to have on us before we failed our objective. It turns out that, for most missions we care about, the universe can be simplified dramatically. Most often we will see:
Model the Earth's gravitational pull
Model the Earth and the sun
Model the Earth, sun, and moon
Model the Earth, sun, moon, and Jupiter
Obviously each mission is different, but generally speaking the effect of all other players is so miniscule that it doesn't have a large effect.
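To put rough numbers on how small the other players are, here is a back-of-the-envelope comparison of the gravitational acceleration various bodies exert on a craft in low Earth orbit (the masses and distances are approximate, order-of-magnitude values I chose for illustration):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# (mass in kg, rough distance in m from a craft in low Earth orbit)
bodies = {
    "Earth":   (5.97e24, 7.0e6),
    "Sun":     (1.99e30, 1.5e11),
    "Moon":    (7.35e22, 3.8e8),
    "Jupiter": (1.90e27, 6.3e11),  # near closest approach
}

# Newtonian acceleration a = G*M / r^2 from each body
accel = {name: G * m / r**2 for name, (m, r) in bodies.items()}
for name, a in accel.items():
    print(f"{name:8s} {a:.1e} m/s^2")
```

Each step down the list is orders of magnitude weaker than the one before it, which is why mission models only keep a handful of bodies.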
The second key to this is guidance. We rarely send a probe careening through the solar system without the ability to accelerate slightly. Over time, as we see that the probe is falling off course, we issue commands to tell it to burn fuel to get back on track. One major challenge for spacecraft designers is to size the fuel containers required to do this. Too much, and you waste a lot of money lobbing a heavy object into space. Too little, and you can't do the corrections you need to go where you need to go. | {
"domain": "physics.stackexchange",
"id": 56880,
"tags": "gravity, space, space-travel, interstellar-travel"
} |
Anomaly, symmetries, and Ward identity | Question: I'm trying to bring together and understand the concepts of anomaly, quantum symmetries, and Ward (or Ward-Takahashi, or Slavnov-Taylor) identity in QFT. I think I know what the ideas mean, but I'm not sure if my unified understanding of the subject is correct. To avoid a mess, I will first lay out what I think I know, highlighting the focal points, and then explicitly ask the questions.
From what I understand, a symmetry (global or gauge) is said to be anomalous if it doesn't hold after renormalization, and a current $J$ that was conserved classically isn't conserved after quantization. The anomaly $\mathcal{A}$ is how much this current isn't conserved: $\partial_\mu J^\mu=\mathcal{A}$. If the said symmetry is a gauge one, then one requires its anomaly to be zero. Also, the anomaly is absent at the 0-loop level but is exact at 1-loop (meaning that 2+ loop calculations will yield the same result).
The Ward (or W-T or S-T) identity is an identity between correlation functions that holds iff a certain symmetry (global or gauge) holds. The identity is there because even after gauge fixing, the observables can't be gauge dependent, but the correlation functions (that can be gauge dependent) are linked to the observables and therefore can't be arbitrary (1). They remain valid after renormalization, meaning that if they hold in the classical case they also hold in the quantum one (2). The Ward identity in QED can be obtained directly from $\partial_\mu J_{EM}^\mu=0$, and more generally a Ward identity can be obtained directly from Noether's theorem (3, 4).
Questions:
(1): this definition of the Ward identity obviously relies on the fact that the symmetry is gauge. What is the definition of the Ward identity for global symmetry (provided that there's one)?
(2): do they also remain valid at every perturbation order? It seems to me that they should because if the anomaly is an exact result and the anomaly measures how much the current is not conserved, the Ward identity should hold at every order too.
(3): is the "Noether's theorem" here meant to be the first one? So, global symmetry, and physical conserved current?
(4): what is the relationship between Ward identity and anomaly? It seems to me they are somehow linked since they are both related to the quantum version of the equation of the conservation of current $\partial_\mu J^\mu=...$ but I can't grasp how they are connected.
Answer: Background
Here we work in the Euclidean theory throughout. I also preface this with a disclaimer that I have been a bit lax with indices, but hopefully the message remains clear.
The Ward identities in their broadest form possible are actually a statement about any infinitesimal shift in the fields, not necessarily symmetries of the action or otherwise:
$$
\delta\phi=\epsilon\cdot f\qquad[\delta\phi_i=\epsilon^r(x^\mu) f_r^i(\phi, \partial\phi)]
\\Z=\int\mathcal D\phi'\exp\left(-S'+\int J\cdot\phi'\right)
\\=\int\mathcal D\phi\left(1+\epsilon^r\mathrm{Tr}\frac{\delta f_r}{\delta\phi_j}\right)\exp\left(-S-\delta_\epsilon S+\int J\cdot(\phi+\epsilon\cdot f)\right)
$$
$$
\\\Rightarrow\left\langle-\delta_\epsilon S+\int \epsilon J\cdot f\right\rangle=-\left\langle\epsilon^r\mathrm{Tr}\frac{\delta f_r}{\delta\phi_j}\right\rangle\tag{1}
$$
Some people call this the Ward identity, but it's far too general to be of any immediate use. Note that $J$ is not the analogue of the classical Noether current: it's simply a background source. Let me stress again: this holds for all transformations $\delta_\epsilon$.
As you have probably seen in the derivation of Noether's first theorem, any symmetry of the action must have
$$
\delta_\epsilon S=\int\delta_\epsilon\mathcal L=-\int\epsilon\cdot\ \partial_\mu\left(f\cdot\frac{\partial\mathcal L}{\partial(\partial_\mu\phi)}\right)=-\int\epsilon\cdot\ \partial_\mu j^\mu
$$
with the appropriate boundary conditions on $\epsilon(x)$. Here $j^\mu(x^\mu)$ is the classically conserved Noether current.
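As a concrete illustration (my own example, not from the original answer): for a complex scalar with Euclidean Lagrangian $\mathcal L=\partial_\mu\phi^*\partial^\mu\phi$ and the global U(1) transformation $\delta\phi=i\epsilon\phi$, $\delta\phi^*=-i\epsilon\phi^*$ (so $f=i\phi$ and $f^*=-i\phi^*$), the recipe above gives

```latex
j^\mu = f\,\frac{\partial\mathcal L}{\partial(\partial_\mu\phi)}
      + f^*\,\frac{\partial\mathcal L}{\partial(\partial_\mu\phi^*)}
      = i\left(\phi\,\partial^\mu\phi^* - \phi^*\,\partial^\mu\phi\right)
```

which is the usual U(1) current, conserved on the equations of motion.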
Now for a symmetry in the quantum theory, this means that
$$
Z=\int\mathcal D\phi'\exp\left(-S'+\int J\cdot\phi'\right)
\\=\int\mathcal D\phi\left(1+\epsilon^r\mathrm{Tr}\frac{\delta f_r}{\delta\phi_j}\right)\exp\left(-S+\int J\cdot\phi\right)\left(1-\int\epsilon\cdot(\partial_\mu j^\mu-J\cdot f)\right)
$$
$$
\Rightarrow\partial_\mu\langle j^\mu\rangle-\left\langle J\cdot f\right\rangle=\left\langle\mathrm{Tr}\frac{\delta f^i}{\delta\phi_j}\right\rangle\tag{2}
$$
This is what is usually called the Ward identity. The right-hand side of the equation denotes the anomaly from the path integral measure, and is zero in case of a non-anomalous transformation (it is also a very formal expression, and actually requires quite a large toolset to evaluate in most cases). Also recall that the source $J$ is literally there for us to play around with, so we can take derivatives with respect to it before setting it to zero in order to derive the Ward identities for correlators: taking a non-anomalous transformation for simplicity,
$$
0=\int\mathcal D\phi\exp\left(-S+\int J\cdot\phi\right)\left(\partial_\mu j^\mu-J\cdot f\right)
\\ J=0\Rightarrow\partial_\mu\langle j^\mu\rangle=0
\\-i\frac{\delta}{\delta J(x_1)}Z\bigg|_{J=0}=0\Rightarrow\partial_\mu \langle j^\mu(x)\phi(x_1)\rangle=\delta(x-x_1)\langle f(\phi)\rangle
$$
and so on, to derive the relations between the correlators.
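To see identity $(1)$ in action, here is a toy check of my own (not part of the original answer): in a zero-dimensional "path integral" $Z=\int\mathrm d\phi\,e^{-S(\phi)}$, the scaling transformation $\delta\phi=\epsilon\phi$ has $f=\phi$ and $\mathrm{Tr}\,\delta f/\delta\phi=1$, so at $J=0$ equation $(1)$ reduces to $\langle\phi\, S'(\phi)\rangle=1$, which simple quadrature confirms:

```python
# Toy check of the general Ward identity (1) in zero dimensions,
# for S(phi) = phi^2/2 + g*phi^4/4 and the scaling variation delta phi = eps*phi.
import math

g = 0.3
S = lambda p: p*p/2 + g*p**4/4
Sp = lambda p: p + g*p**3          # S'(phi)

# Midpoint-rule integration on [-10, 10]; the integrand decays very fast.
N, a, b = 40_000, -10.0, 10.0
h = (b - a) / N
xs = [a + (i + 0.5)*h for i in range(N)]
Z = sum(math.exp(-S(x)) for x in xs) * h
lhs = sum(x * Sp(x) * math.exp(-S(x)) for x in xs) * h

# <phi S'(phi)> = <Tr delta f / delta phi> = 1
assert abs(lhs / Z - 1) < 1e-5
```

This is just integration by parts on $\int\mathrm d\phi\,\partial_\phi(\phi\,e^{-S})=0$, which is exactly how the measure term in $(1)$ arises.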
Another important observation is that for linear global symmetries, equation $(2)$ is equivalent to
$$
\epsilon^r\partial_\mu\langle j^\mu\rangle=\delta_\epsilon\Gamma[\varphi]
$$
where $\Gamma[\varphi]$ is the 1PI effective action. This also formalises the idea of "classical symmetries carrying over to the quantum theory": the 1PI effective action will normally have the same symmetries as the classical theory, and any violation of this is measured by the anomaly. This, however, does not work for gauge theories, since we need to gauge-fix the action under the path integral. You could of course take the BRST route, but a simpler method is to consider a different sort of effective action - one where the gauge fields are integrated out into the background, say $W[A]$, which I will discuss qualitatively. The variation of this object under $\delta_\epsilon$ correctly gives the anomaly. Finally, one writes the conserved current in the presence of the background fields as
$$
\frac{\delta W[A]}{\delta A^a_\mu}=\langle J_a^\mu(x)\rangle
$$
whereupon
$$
D_\mu\langle J_a^\mu(x)\rangle=\left\langle\mathrm{Tr}\frac{\delta f^i}{\delta\phi_j}\right\rangle
$$
is the Slavnov-Taylor identity for non-abelian gauge symmetries with an anomaly.
Redux
As mentioned, the Ward identities hold for all symmetries of the action. So the Ward-Takahashi identities hold for global symmetries too, although the trick of "promoting" a global symmetry to a local one (as is done for using Noether's theorem in a classical field theory) and dropping the spacetime dependence right at the end is a quick and easy way to derive the identity for a given theory - this does not qualify as a gauge transformation because we do not take the gauge field itself to transform. Also note that in the case when the operators in the theory only change via a replacement $\phi\to\phi'$, the global Ward identities are particularly simple: $$\langle\mathcal O_1(\phi(x_1))\dots\mathcal O_n(\phi(x_n))\rangle=\langle\mathcal O_1(\phi'(x_1))\dots\mathcal O_n(\phi'(x_n))\rangle$$
Since the Ward identities are in terms of exact correlators, they hold to all orders in perturbation theory: you can also write out the identities in terms of the effective action $W[J,\{\Phi\}]$ with background gauge fields $\{\Phi\}$, which generates relations between all of the connected Green's functions.
Yes, this uses Noether's first theorem, as shown. Of course, $\partial_\mu j^\mu=0$ does not imply $\partial_\mu\langle j^\mu\rangle=0$ unless the path integral measure is invariant. However, recall that even local transformations of the fields that are symmetries of the action generate an on-shell conserved current when the corresponding infinitesimal parameters $\epsilon^r(x^\mu)$ are made constant.
OP has summarised this well themselves: the anomaly measures the extent of the violation of the naïvely expected Ward identity, as also shown above. For example, we would naïvely expect $\partial_\mu \langle j^\mu_A\rangle=0$ for the axial current, but the path integral measure transforms as
$$
\int\mathcal D\psi\mathcal D\bar\psi\rightarrow\int\mathcal D\psi\mathcal D\bar\psi\exp\left(-\frac{ie^2}{16\pi^2}\int\epsilon \ F_{\mu\nu}\tilde F^{\mu\nu}\right)
$$
and so
$$
\partial_\mu \langle j^\mu_A\rangle=\frac{e^2}{16\pi^2} F_{\mu\nu}\tilde F^{\mu\nu}
$$
which is the chiral anomaly. This is an example of a global anomaly, which is entirely harmless, but not entirely useless: it helps in e.g. predicting the $\pi^0\to\gamma\gamma$ decay rate. On the other hand, gauge anomalies ought to be cancelled: otherwise, roughly, the lack of gauge invariance will prevent us from removing the negative-norm states while trying to restrict to a well-defined Hilbert space for our states. These cancellation conditions lead to very important consistency requirements on the theory - a wild example is determining the gauge groups for the type I and heterotic string theories. | {
"domain": "physics.stackexchange",
"id": 78280,
"tags": "quantum-field-theory, gauge-theory, quantum-anomalies, ward-identity"
} |
Is there any notion of pH out of solution? | Question: For example, could one define a $\mathrm{pH}$ for pure acetic acid? It's a weak acid in water, but if someone handed you $1~\mathrm L$ of pure acetic acid, what would its $\mathrm{pH}$ be?
Answer: The IUPAC definition of pH is:
The quantity pH is defined in terms of the activity of hydrogen(1+) ions (hydrogen ions) in solution:
$\mathrm{pH} = -\lg a(\ce{H+}) = -\lg\left[m(\ce{H+})\,\gamma_m(\ce{H+})/m^\circ\right]$
where $a(\ce{H+})$ is the activity of the hydrogen ion (hydrogen 1+) in aqueous solution, $\ce{H+}$(aq), $\gamma_m(\ce{H+})$ is the activity coefficient of $\ce{H+}$(aq) (molality basis) at molality $m(\ce{H+})$, and $m^\circ = 1~\mathrm{mol~kg^{-1}}$ is the standard molality.
So since the definition specifically refers to "aqueous solution", pH is undefined unless an aqueous solution is being considered.
There would be a p[H+] in pure acetic acid based upon the self-dissociation of acetic acid, where [H+] is the concentration of hydrogen ions in acetic acid solution, but this is not pH according to the IUPAC definition.
According to Acid-Base Equilibria in Glacial Acetic Acid. III. Acidity Scale. Potentiometric Determination of Dissociation Constants of Acids, Bases and Salts J. Am. Chem. Soc., 1956, vol. 78, pages 2974–2979:
the autoprotolysis constant of acetic acid is calculated to be $3.5 \times 10^{-15}$ (pK = 14.45)
Therefore p[H+] = 7.2 for pure acetic acid.
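The arithmetic is just $\mathrm{p[H^+]} = \mathrm pK/2$, since the autoprotolysis $\ce{2 AcOH <=> AcOH2+ + AcO-}$ produces equal concentrations of the two ions in the pure solvent; a quick sketch (variable names are mine):

```python
import math

K_ap = 3.5e-15              # autoprotolysis constant from the cited paper
m_h = math.sqrt(K_ap)       # [AcOH2+] = [AcO-] in the pure solvent
p_h = -math.log10(m_h)      # p[H+] = pK/2, about half of 14.45

assert abs(p_h - 7.2) < 0.05
```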
This value is not a measure of acidity, but simply the concentration of solvated hydrogen ions in pure acetic acid. | {
"domain": "chemistry.stackexchange",
"id": 5821,
"tags": "acid-base, ph"
} |