anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
How to use move_base? | Question:
I found this one very useful http://wiki.ros.org/move_base, but I have no idea how to use it in my project: how to install it, what commands to use, and so on. Could anyone help me please?
Originally posted by newcastle on ROS Answers with karma: 1 on 2015-01-04
Post score: 0
Answer:
I suggest you do the tutorials.
Originally posted by Procópio with karma: 4402 on 2015-01-05
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 20478,
"tags": "navigation, move-base"
} |
Permutations of a list in Haskell | Question: I have this piece of code which, given a list, returns another list which combines every element with every other element (without repetitions) using a given function as a combiner. I came up with three different versions (two of them listed below, fastest to slowest), and now I'm asking for help to optimize it/them even further. Currently, my perms3 does not utilize multithreading at all, and since I'm quite new to Haskell I can't seem to figure out why, but I'm thinking that in its current state my algorithm isn't parallelizable, even though it can be.
I've also concluded that the length of the output list will always be $\binom{x}{2} + x = x(x-1)/2 + x$ where $x$ is the length of the input list, but I don't know how to use that in my algorithm either to minimize the allocations.
The reason why I'm asking is that I want to know more about the inner workings of Haskell, how lazy evaluation affects performance and what is considered good/bad Haskell code.
My two permutation functions:
perms3 c l = perms3' c l l
perms3' _ [] _ = []
perms3' c (a:b) [x] = c a x:perms3' c b b -- the outer loop
perms3' c l@(a:_) (h:t) = c a h:perms3' c l t -- the inner loop
perms2 _ [] = [] --this can probably be 2 folds, the inner fold might be parallelizable?
perms2 c l@(h:t)= foldr (\x acc -> c h x:acc) [] l ++ perms2 c t
Input: (,) ['a'..'c']
Output: [('a','a'),('a','b'),('a','c'),('b','b'),('b','c'),('c','c')]
Input (+) [1..5]
Output: [2,3,4,5,6,4,5,6,7,6,7,8,8,9,10]
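For a cross-check in Python (an equivalent I'm assuming, not code taken from the post), `itertools.combinations_with_replacement` reproduces the same outputs and confirms the $\binom{x}{2} + x$ length:

```python
from itertools import combinations_with_replacement

def pair_combine(f, xs):
    # every element combined with every other (and itself), no repetitions
    return [f(a, b) for a, b in combinations_with_replacement(xs, 2)]

pairs = pair_combine(lambda a, b: (a, b), "abc")
sums = pair_combine(lambda a, b: a + b, range(1, 6))

n = 5
assert len(sums) == n * (n - 1) // 2 + n   # C(n, 2) + n = n(n + 1)/2
```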
Unsurprisingly, perms3 is a fair bit faster than perms2.
Testing perms3:
import Control.Monad
import System.Environment
main :: IO ()
main = liftM (length . perms3 (,) . enumFromTo 1 . read . head) getArgs >>= print
perms3 :: (a -> a -> t) -> [a] -> [t]
perms3 c l = perms3' c l l
perms3' _ [] _ = []
perms3' c (a:b) [x] = c a x:perms3' c b b
perms3' c l@(a:_) (h:t) = c a h:perms3' c l t
Running it:
$ ghc -O3 main && time ./main 10000
50005000
real 0m0.615s
$ ghc -O3 main && time ./main 30000
450015000
real 0m5.347s
Question(s):
Is this a good way to write Haskell code?
Can I squeeze any more performance out of my function?
Should I prefer the imperative do-notation instead of arrows in my main function?
Answer: 3. Monadic notation
Let's start with your last question, as it has a clichéd answer: Do whatever you prefer. I often find the bind operator >>= nicer to read, but switch to do-notation as soon as the code gets complicated. Another rule of thumb is to use do-notation if the code is easier to express imperatively. Both notations are equivalent, there is no performance impact.
1. Idiomatic Haskell
Haskell has a few conventions about variable naming. Lists are commonly referred to as xs or ys (think plural of x), while function parameters are named f or g. Type parameters are almost always taken from the first letters of the alphabet.
perms3 :: (a -> a -> b) -> [a] -> [b]
perms3 f xs = perms3' f xs xs
perms3' _ [] _ = []
perms3' f (x:xs) [y] = f x y : perms3' f xs xs
perms3' f xs'@(x:_) (y:ys) = f x y : perms3' f xs' ys
Since perms3' is only used within perms3, it would make sense to define it as a local function. This has the additional benefit that we can drop f from its parameter list, since it's already in scope and won't change.
perms3 :: (a -> a -> b) -> [a] -> [b]
perms3 f xs = perms3' xs xs
where
perms3' [] _ = []
perms3' (x:xs) [y] = f x y : perms3' xs xs
perms3' xs'@(x:_) (y:ys) = f x y : perms3' xs' ys
2. Performance improvement
Pattern matching on lists usually takes two cases: empty and non-empty lists. This can reduce code (DRY etc) and actually improve performance, as tests for empty lists are faster than tests for singleton lists.
perms4 :: (a -> a -> b) -> [a] -> [b]
perms4 f xs = perms4' xs xs
where
perms4' [] _ = []
perms4' (_:ys) [] = perms4' ys ys
perms4' ys'@(y:_) (z:zs) = f y z : perms4' ys' zs
A quick test gave me a 20% improvement compared to perms3.
Notes on perms2
The fold in perms2 is just a map. Using partial application, it could be written as
perms5 _ [] = []
perms5 f xs'@(x:xs) = map (f x) xs' ++ perms5 f xs
-- faster:
-- perms5 f xs'@(x:xs) = foldr ((:) . f x) [] xs' ++ perms5 f xs
However, this is (somewhat surprisingly) slower than the fold. AFAICT, this is due to the fact that ghc won't inline map, but will do so with foldr.
Finally, here is how I (naïvely) would have written the code. Looks nice, performs terribly.
import Data.List (concatMap, tails)
perms6 f = concatMap pairs . tails
where
pairs [] = []
pairs (ys'@(y:_)) = map (f y) ys'
or
perms7 f xs = concat $ zipWith (map . f) xs (tails xs) | {
"domain": "codereview.stackexchange",
"id": 14891,
"tags": "performance, algorithm, haskell, combinatorics"
} |
Where is the Backward function defined in PyTorch? | Question: This might sound a little basic but while running the code below, I wanted to see the source code of the backward function:
import torch.nn as nn
[...]
criterion = nn.CrossEntropyLoss()
loss = criterion(output, target)
loss.backward()
So I went to the PyTorch GitHub and found the CrossEntropyLoss class, but without any backward function defined. Moving up, CrossEntropyLoss extends _WeightedLoss >> _Loss >> Module then still nothing.
So, where is the backward function defined?
Answer: The backward function is the same for all types of layers.
Look at line 155 here.
https://github.com/pytorch/pytorch/blob/35bd2b3c8b64d594d85fc740e94c30aa67892a34/torch/tensor.py
and it will forward you to here
https://github.com/pytorch/pytorch/blob/35bd2b3c8b64d594d85fc740e94c30aa67892a34/torch/autograd/__init__.py | {
"domain": "datascience.stackexchange",
"id": 7878,
"tags": "pytorch"
} |
What is the actual size of point cloud from a kinect | Question:
Hello, all.
As far as I know a kinect provides 640x480x3 size of 2D image data (widthxheightxRGB) and
640x480x(sizeof a point in 3D).
Is there anyone who knows the actual size of a point cloud?
640x480x4x3 ? widthxheightx(float)x(x,y,z) ?
Cheers.
Originally posted by enddl22 on ROS Answers with karma: 177 on 2011-05-08
Post score: 5
Answer:
The data payload is 640 x 480 x 8 x sizeof(float) bytes = 9830400 bytes
Plus some bytes for auxiliary information like origin, timestamp etc.
A good way to check out the bandwidth required for transmission is rostopic bw
$ rostopic bw /camera/rgb/points
subscribed to [/camera/rgb/points]
(...)
average: 295.37MB/s
mean: 9.83MB min: 9.83MB max: 9.83MB window: 100
To find the size in memory consider the following analysis:
I store them as pcl::pointcloud. The data for one point is described in this header file, although it is somewhat cryptic. PointXYZRGB is described as follows:
struct PointXYZRGB
{
  PCL_ADD_POINT4D; // (...)
  union
  {
    struct
    {
      float rgb;
    };
    float data_c[4];
  };
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
} EIGEN_ALIGN16;
PCL_ADD_POINT4D and the union below will add 4 floats each. While that is overkill memorywise, it is required to benefit from some cpu optimizations. Therefore for a pcl::pointcloud<pcl::PointXYZRGB> from the kinect, you store 640 x 480 x 8 x sizeof(float) bytes.
I wonder why rgb is in a separate struct though. Does anybody know?
For a PointCloud2 you can look at the message with rostopic echo -n 1 /camera/rgb/points| less.
The interesting parts are:
height: 480
width: 640
point_step: 32
Therefore, also here you have 640 x 480 x 32, where 32 is 8 x sizeof(float)
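Pulling the numbers above together (my own sketch, assuming 4-byte floats):

```python
width, height = 640, 480
floats_per_point = 8                  # PCL_ADD_POINT4D (4 floats) + data_c[4]
point_step = floats_per_point * 4     # sizeof(float) == 4  ->  32 bytes/point
payload = width * height * point_step

assert point_step == 32
assert payload == 9830400             # ~9.83 MB per frame, matching rostopic bw
```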
Optimization Hack
A colleague of mine modified the openni_camera driver and rgbdslam to communicate the color value via the unused fourth dimension of the pointXYZ. This essentially cuts the required memory to half the size. It is a little hackish, but if anyone wants the patch, you are welcome to contact me. Keep in mind though, that linear algebra operations with Eigen might be affected by the color value or overwrite it. Haven't experienced that yet though.
The bandwidth reduction shown by rostopic bw
average: 148.35MB/s
mean: 4.92MB min: 4.92MB max: 4.92MB window: 100
Originally posted by Felix Endres with karma: 6468 on 2011-05-09
This answer was ACCEPTED on the original site
Post score: 13 | {
"domain": "robotics.stackexchange",
"id": 5540,
"tags": "kinect, openni-camera, pointcloud"
} |
Is the density matrix corresponding to a state $|\alpha\rangle$ simply $\rho =| \alpha \rangle \langle \alpha \mid$? | Question: The eigenstate of the annihilation operator $a$ is given by the state $a\mid \alpha \rangle = \alpha \mid \alpha \rangle$. In the Fock state basis, we can expand this state as
$$\mid \alpha \rangle = e^{-\frac{1}{2}\vert \alpha\vert^2}\sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}}\mid n \rangle$$
I am a bit confused about density matrices; I am not sure if $\rho = \mid \alpha \rangle \langle \alpha \mid$ in this case, as I do not know if $\mid \alpha \rangle$ is a pure state (I am still working on understanding these definitions).
Is the density matrix corresponding to the state $\alpha$ simply $\rho = \mid \alpha \rangle \langle \alpha \mid$? If so, is this true in general?
If I am trying to calculate $Tr(\rho \hat{O})$ for some observable $\hat{O}$, then would it be valid (by the cyclic property of trace) to say that $$Tr(\rho \hat{O}) = Tr(\langle \alpha \mid \hat{O} \mid \alpha \rangle) = \langle \alpha \mid \hat{O} \mid \alpha \rangle~?$$
Answer: The density matrix is indeed $|\alpha\rangle\langle\alpha|$. Remember that $\alpha \in \mathbb{C}$ so when you form $\langle\alpha|$ you must conjugate $\alpha$ (I always forget).
So, $\rho_\alpha := |\alpha\rangle\langle\alpha| = e^{-\left|\alpha\right|^2} \sum_{n, m}^\infty \frac{\alpha^n \alpha^{*m}}{\sqrt{n! m!}} |n\rangle\langle m |$.
For a pure state $|\phi\rangle$ the density matrix is always formed as $|\phi\rangle\langle\phi|$.
$|\alpha\rangle$ is a pure state. Indeed, all bra/ket vectors are pure, and this is a key motivation for including the density matrix in our mathematical toolbox: it allows us to use the same notation for both pure and mixed states.
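As a numerical aside (my own sketch, assuming a real $\alpha$ and a truncation of the Fock basis at $N=40$), purity and the expectation of the number operator can be checked directly from the expansion above:

```python
import math

alpha, N = 0.5, 40    # real alpha for simplicity; truncate the Fock basis at N
c = [math.exp(-abs(alpha) ** 2 / 2) * alpha ** n / math.sqrt(math.factorial(n))
     for n in range(N)]                        # amplitudes <n|alpha>
rho = [[c[n] * c[m] for m in range(N)] for n in range(N)]   # |alpha><alpha|

purity = sum(rho[n][m] * rho[m][n] for n in range(N) for m in range(N))
mean_n = sum(n * rho[n][n] for n in range(N))  # Tr(rho N_hat), N_hat diagonal

assert abs(purity - 1.0) < 1e-9                # Tr(rho^2) = 1: a pure state
assert abs(mean_n - abs(alpha) ** 2) < 1e-9    # <n> = |alpha|^2 for coherent states
```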
Yes it is fine to use the cyclical property of the trace in this way. In my undergrad courses, $Tr\left(\rho\hat{O}\right)$ was referred to as the "trace", while $\langle\alpha|\hat{O}|\alpha\rangle$ was referred to as the "overlap" (in some contexts the "overlap integral"). It took me a while to work out that these are doing the same thing. | {
"domain": "physics.stackexchange",
"id": 69648,
"tags": "quantum-mechanics, hilbert-space, density-operator, observables, coherent-states"
} |
Displaying nucleotide at a single position from RNA-seq reads in a BAM file | Question: How do I display a single nucleotide position from reads in a BAM file? I have been looking at variation using samtools mpileup, but I want to actually just display the nucleotide at the position I am interested in. This seems like you should be able to do it but I can't figure out how.
To be clear I have a BAM file of a bunch of reads. I'm looking to do something like samtools magic reads.bam chr3:10000 where I get back something like:
T T T T T T T T T T T T T T T T T T T A A A A A A A A A A A A A.
I just want to sanity check the output of bcftools by actually looking at the base calls.
Answer: Your best option is to create a tab delimited position list file:
chr3 1000
You can then use the samtools mpileup -l pos.txt option
-l FILE list of positions (chr pos) or regions (BED)
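As an aside (a sketch of mine, not from the original answer, and it only handles the simplest pileup symbols), the read-base column of the default six-column mpileup output can be decoded into the per-read base calls the question asks for:

```python
def bases_at_position(pileup_line):
    """Extract called bases from one `samtools mpileup` output line.

    Assumes the default six-column format:
    chrom  pos  ref  depth  read_bases  qualities
    '.' and ',' mean the read matches the reference (fwd/rev strand).
    """
    chrom, pos, ref, depth, read_bases, quals = pileup_line.split("\t")
    calls = []
    for ch in read_bases:
        if ch in ".,":
            calls.append(ref.upper())
        elif ch.upper() in "ACGT":
            calls.append(ch.upper())
        # indels, read starts (^) and ends ($) are not handled in this sketch
    return calls

line = "chr3\t10000\tT\t5\t..,aA\tIIIII"
```

Feeding the sketch the hypothetical line above yields the `T T T A A`-style list from the question.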
This file can also be in BED format if you are interested in specific regions. | {
"domain": "biology.stackexchange",
"id": 193,
"tags": "bioinformatics"
} |
Propagator Causality with commutators all the way | Question: We know that two fields commute - by locality and causality - iff there is spacelike separation
$\left[\phi_l^k(x) , \phi_m^{k'}(y)\right] = 0$ for $(x-y)^2<0$
In the canonical quantization of the Dirac field, if $b_\alpha(k)$ is the annihilation operator and $b^\dagger_\alpha(k)$ is the creation operator for a particle of 4-momentum $k$ with
$\left[b_\alpha(k), b^\dagger_\beta(q)\right] = (2\pi)^3\frac{\omega_\mathbf k}{m} \delta^{(3)}(\mathbf{k}-\mathbf{q})\delta_{\alpha\beta}$
and $\psi^{(+)}(x) = e ^{-ikx}u(k) $ is a solution with positive energy while $\psi^{(-)}$ is negative, when we use commutators all the way, the following
$\left[\psi_\xi(x) , \overline\psi_\eta(y)\right] = (i\not\partial_x+m)\int{\frac{d^3k}{(2\pi)^3}\frac{1}{2\omega_\mathbf k}\left[e^{-ik(x-y)} + e^{+ik(x-y)}\right]|_{k=(\omega_k,\mathbf k )}}$
does not vanish for spacelike separations and results in a violation of causality. How is this problem overcome or why isn't it a problem?
Answer: I've noted in comments that:
If you write fermions $f_1,\,f_2$ as $f_i=\eta_ic_i$ with $\{\eta_i,\,\eta_j\}=[\eta_i,\,c_j]=0$, $\{f_1,\,f_2\}=\eta_1\eta_2[c_1,\,c_2]$;
This works whether these objects are operators or not, but if they are operators, all matrix elements will be complex;
The operators in question will have uncountable dimension.
Let's also discuss what happens when you make a boson out of two fermions (if you use a larger even number of them, all but one of them makes a fermion, so this is the only case we need to consider). In particular write$$b_1=f_3f_4,\,b_2=f_5f_6\implies[b_1,\,b_2]=f_3\{f_4,\,f_5\}f_6-\{f_3,\,f_5\}f_4f_6+f_5f_3\{f_4,\,f_6\}-f_5\{f_3,\,f_6\}f_4$$in analogy with 9 here. If the anticommutators vanish at spacelike separations, so does $[b_1,\,b_2]$. | {
"domain": "physics.stackexchange",
"id": 62874,
"tags": "quantum-field-theory, commutator, causality"
} |
ROS Answers SE migration: rqt_shell tmux | Question:
Thank you for the contribution of rqt_shell. I am implementing a GUI in ROS Indigo, Ubuntu 14.04. However, the integrated rqt_shell cannot use 'tmux' functionality by right clicking and adding a tab, like Guake. Could you please tell me how to implement the terminal multiplexer functionality in the rqt_shell plugin? Thank you.
I look forward to your reply.
Originally posted by Helen HOU on ROS Answers with karma: 1 on 2016-10-27
Post score: 0
Answer:
Seems you already reported this over at ros-visualization/rqt_common_plugins#404.
Originally posted by gvdhoorn with karma: 86574 on 2016-10-28
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26074,
"tags": "ros"
} |
Does the canonical commutation relation fix the form of the momentum operator? | Question: For one dimensional quantum mechanics $$[\hat{x},\hat{p}]=i\hbar. $$
Does this uniquely fix the form of the $\hat{p}$ operator? My bet is no, because $\hat{p}$ depends on whether we are in the coordinate or momentum representation, but I don't know if that statement constitutes a proof. Moreover, if we choose $\hat{x}\psi=x\psi$, is the answer to the following question yes?
For the second one
$$(\hat{x}\hat{p}-\hat{p}\hat{x})\psi=x\hat{p}\psi-\hat{p}x\psi=i\hbar\psi, $$
but I don't see how I can say that $\hat{p}$ must be proportional to $\frac{\partial}{\partial x}$. I don't know whether it would help to show that $\hat{p}$ must satisfy the Leibniz rule (and is thus proportional to the $x$ derivative), or to use the fact that $\hat{x}$ and $\hat{p}$ must be Hermitian.
Any hint will be appreciated.
Answer: You have already got "practical" answers, so I intend to answer form another point of view.
There is a quite famous theorem due to Stone and von Neumann, later improved by Mackay, and finally by Dixmier and Nelson, roughly speaking establishing the following result within the most elementary version.
(Another version of the theorem focuses on the unitary groups generated by $X$ and $P$ avoiding problems with domains, however I stick here to the self-adjoint operator version.)
THEOREM. (rough statement "for physicists")
If you have a couple of self-adjoint operators $X$ and $P$ defined on a Hilbert space $H$ that are conjugate to each other:
\begin{equation}
[X,P] = i \hbar I \quad\quad\quad (1)
\end{equation}
and there is a cyclic vector for $X$ and $P$, then there exists a unitary operator $U : L^2(\mathbb R, dx)\to H$ such that:
$$(U^{-1} X U )\psi (x)= x\psi(x)\quad \mbox{and}\quad (U^{-1} P U )\psi (x)= -i\hbar \frac{d\psi(x)}{dx}\:.\quad (2)$$
(The rigorous statement, in this Nelson-like version, reads as follows:
THEOREM. Let $X$ and $P$ be a pair of self-adjoint operators on a complex Hilbert space $H$ such that (a) they verify (1) on a common invariant dense subspace $S\subset H$, (b) $X^2+P^2$ is essentially self-adjoint on $S$ and (c) all vectors in $S$ are cyclic for $X$ and $P$. Then there exists a unitary operator $U : L^2(\mathbb R, dx)\to H$ such that (2) are valid for $\psi \in C_0^{\infty}(\mathbb R)$.
Notice that the operators defined in the right-hand sides of (2) admit unique self-adjoint extensions, so they completely fix the operators representing the respective observables. We can equally replace $C_0^\infty(\mathbb R)$ with the Schwartz space ${\cal S}(\mathbb R)$ in the last statement.)
Barring technicalities, all that means that commutation relations actually fix position and momentum observables as well as the Hilbert space. For instance, referring to Murod Abdukhakimov's answer, if the addition of $\partial f$ to the standard expressions of $X$ and $P$ gives rise to truly self-adjoint operators, then a unitary transformation (just that connecting $\psi$ to $\psi'$ in Murod Abdukhakimov's answer) gets rid of the deformation restoring the standard expression. Remember that unitary transformations do not alter all physical objects of QM.
The result extends to $\mathbb R^n$, i.e., concerning particles in space for $n=3$.
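As an aside (my own illustrative sketch, with $\hbar = 1$): one elementary reason the representation space must be infinite-dimensional is the trace of (1). For finite matrices $\mathrm{Tr}[X,P] = 0$ always, while $\mathrm{Tr}(i \hbar I) = i\hbar \dim H \neq 0$. Truncated ladder operators show exactly where the relation breaks:

```python
import math

N = 6  # truncation dimension

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
ad = [[a[j][i] for j in range(N)] for i in range(N)]        # a^dagger
aad, ada = matmul(a, ad), matmul(ad, a)

# With X = (a + ad)/sqrt(2) and P = i(ad - a)/sqrt(2), [X, P] = i [a, ad]:
comm = [[1j * (aad[i][j] - ada[i][j]) for j in range(N)] for i in range(N)]
trace = sum(comm[i][i] for i in range(N))

assert abs(trace) < 1e-12                      # Tr[X, P] = 0, never i*hbar*N
assert abs(comm[0][0] - 1j) < 1e-12            # = i*hbar away from the cutoff
assert abs(comm[N - 1][N - 1] + 1j * (N - 1)) < 1e-12   # cutoff artifact
```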
Dropping the irreducibility requirement, the thesis holds anyway, but $H$ decomposes into a direct sum (not a direct integral!) of closed subspaces where the strong statement is valid.
There are important consequences of this fundamental theorem.
First of all, $H$ must be separable, as $L^2(\mathbb R,dx)$ is. Moreover, no time operator $T$ (conjugated to the Hamiltonian operator $H$) exists if the Hamiltonian operator is bounded below, as physics requires. The latter statement is due to the fact that the theorem fixes the spectra of $X$ and $P$ as the whole real axis in both cases, so that the spectrum of $H$ would not be bounded below if $T,H$ were a conjugate pair of operators.
A similar no-go theorem arises concerning quantization of a particle on a circle when one tries to define position and impulse self-adjoint operators.
The attempt to get around these no-go results gave rise to a more general formulation of quantum mechanics based on the notion of POVM, which eventually turned out to be very useful in other contexts such as quantum information theory.
An important observation is that Stone-von Neumann - MacKay - Dixmier -Nelson's result fails when dealing with infinite dimensional systems.
That is, roughly speaking, passing from the (symplectic space of a) finite number of particles to the (symplectic space of a) field. In that case the canonical commutation relations of $X_i$ and $P_j$ are replaced by those of the quantum fields. E.g.:
$$[\phi(t, x), \pi(t, y)] = i \hbar \delta(x,y) I$$
or more sophisticated versions of them. In this juncture, there exist infinitely many representations of the algebra of observables that cannot be connected by unitary operators. This is a well-known phenomenon in QFT or quantum statistical mechanics (in the thermodynamic limit).
For instance the free theory and the interacting theory of a given quantum field cannot be represented in the same Hilbert space once one assumes standard requirements on states and observables (the so called Haag's theorem and this is the deep reason why LSZ formalism uses the weak topology instead of the strong one as in standard quantum theory of the scattering).
If one includes superselections charges in the algebra of observables, non unitarily equivalent representations of the algebra arise automatically giving rise to sectors.
In QFT in curved spacetime the appearance of inequivalent representations of the algebra of observables is a quite common phenomenon due to the presence of curvature of the spacetime. | {
"domain": "physics.stackexchange",
"id": 10756,
"tags": "quantum-mechanics, operators, momentum, commutator"
} |
String encryption using keys | Question: I wrote a program that encrypts a users string by increasing every char's position by a random number. I'm looking for any and all criticism on it.
using System;
using System.IO;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Encryption2
{
class Program
{
public static void Main()
{
Console.Clear();
bool valid = true;
while (valid) //while loop is here for error handling in case user inputs something other than "e" or "d"
{
Console.WriteLine("Enter d for decrypt and e for encrypt");
string uInput = Console.ReadLine().ToLower();
if (uInput == "e")
{
encrypt();
valid = false;
}
else if (uInput == "d")
{
decrypt();
valid = false;
}
else
{
Console.Clear();
}
}
}
public static void encrypt()
{
Console.Clear();
Console.WriteLine("Enter a string for encryption");
string input = Console.ReadLine(); //string that will be encrypted
int[] keyVals = new int[input.Length]; //decryption key
Random generateKeyVals = new Random();
for (int i = 0; i < input.Length; i++) //generates input.Length amount of numbers between 1 and 9 that are used to increase
{ //each char in in the input string and then decrease each char when you decrypt
keyVals[i] = generateKeyVals.Next(1, 9);
Console.Write(keyVals[i]);
}
Console.WriteLine(" : This is your decryption key");
for (int i = 0; i < input.Length; i++) //encrypts string by adding first char in the encrypted string with the first value in the key and so on
{
char encrypted = Convert.ToChar(input[i] + keyVals[i]);
Console.Write(encrypted);
}
Console.WriteLine(" : This is your encrypted string");
Console.ReadKey();
Main();
}
public static void decrypt() //decrypts an encrypted message using a decryption key
{
Console.Clear();
Console.WriteLine("Enter a string for decryption");
string input = Console.ReadLine(); //user inputs the encrypted message
int[] keyVals = new int[input.Length]; //stores all values entered from the for loop below. This is the decryption key used to decrypt the string
for (int i = 0; i < input.Length; i++) //using a for loop to get all numbers from the key one at a time.
{
Console.WriteLine("Enter the #" + (i + 1) + " of the key");
keyVals[i] = Convert.ToInt32(Console.ReadLine());
}
for (int i = 0; i < input.Length; i++) //decrypts string by subtracting first char in the encrypted string by the first value in the key and so on
{
char decrypted = Convert.ToChar(input[i] - keyVals[i]);
Console.Write(decrypted);
}
Console.WriteLine(System.Environment.NewLine);
Console.WriteLine("Press any key to return to the main menu");
Console.ReadKey();
Main();
}
}
}
Answer: I know it's obvious but I really have to say this: outside a programming exercise do not write your own encryption function.
Let's now talk about code...
The first thing to do is to separate responsibilities. Encrypt() and Decrypt() methods (note PascalCase, you should follow C# naming guidelines) should do only one thing: encrypt and decrypt a string. To get user input and to provide an output is something else responsibility.
If you strip out all the console I/O your Encrypt() will be much simpler and you may note you can make it even simpler (to read):
static (IEnumerable<int> key, string encrypted) Encrypt(string text)
{
    // materialize the key: a lazy Select over Random would yield a
    // different key each time it is enumerated
    var key = GenerateEncryptionKey(text.Length).ToArray();
    return (key, new String(text.Zip(key, (v, k) => (char)(v + k)).ToArray()));
}
static IEnumerable<int> GenerateEncryptionKey(int length)
    => Enumerable.Range(0, length).Select(x => _rnd.Next(1, 9));
static readonly Random _rnd = new Random();
What I did:
I moved the generation of the encryption key to a separate function. In real world it will be a service you set separately. This gives you the ability to test the Encrypt() code with edge-cases using well-known and repeatable encryption keys (what will happen if v + k - which is Int32 - is higher than UInt16.MaxValue when casting back to char? Try and see).
I used Enumerable.Zip() function instead of manually looping through the input + key. Whenever possible you should use available functions to perform a task because they make your code faster to write, easier to read and...less buggy (they already been tested!) Take the time to explore LINQ functions, when working with collections they provide many things you may need.
I moved everything not directly related to encryption out of this function. Now I can test encryption in isolation!
It's an exercise for the reader to do the same thing for the Decrypt() function and to move magic values (like 0 and 9) to appropriate const int fields. Also, as exercise, think that adding a random value to a character does not necessarily produce another valid character (Unicode surrogates) and the string resulting from concatenation may not be a valid or representable string (you'd better encode the IEnumerable<int> produced by encryption instead of simply creating a string out of it).
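For what it's worth, the whole scheme reduces to a Vigenère-style shift with a random per-character key, which is compact to express and round-trip test (a sketch of mine, not part of the review). Note that `Random.Next(1, 9)` in the original excludes the upper bound, so the keys are actually 1-8 despite the comment saying "between 1 and 9":

```python
import random

def encrypt(text, rng=random):
    # mirror of the C# scheme: shift each code point by a random 1..8
    key = [rng.randint(1, 8) for _ in text]
    cipher = "".join(chr(ord(ch) + k) for ch, k in zip(text, key))
    return key, cipher

def decrypt(cipher, key):
    return "".join(chr(ord(ch) - k) for ch, k in zip(cipher, key))

key, cipher = encrypt("hello world")
assert decrypt(cipher, key) == "hello world"   # round-trips for ASCII input
```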
Your logic for looping in the Main() method is little bit weird. You have a loop, you exit the loop and you recursively (from the encryption/decryption functions...) call Main() again. I will skip the StackOverflowException because you probably won't ever use this app long enough but it makes the program flow really hard to follow.
There are many ways to implement this but let's start with something easy:
static void Main()
{
while (true)
{
var actionToPerform = Prompt("(E)ncrypt, (D)ecrypt or (Q)uit?");
if (actionToPerform.Equals("e", StringComparison.CurrentCultureIgnoreCase))
{
ReadTextAndEncrypt();
}
else if (actionToPerform.Equals("d", StringComparison.CurrentCultureIgnoreCase))
{
ReadTextAndDecrypt();
}
else if (actionToPerform.Equals("q", StringComparison.CurrentCultureIgnoreCase))
{
return;
}
else
{
Console.WriteLine("Unknown option. Try again.");
}
}
}
The same can be done using a dictionary (<string, Action>) but for now let's keep it simple. For string comparison I do not use ToLower() but the appropriate overload of String.Equals() where I can specify a StringComparison value. The schoolbook example about case insensitive comparison is the i in Turkish (ToLower() uses the current culture) where "i" != "I".ToLower() (and similarly "i".ToUpper() != "I").
Note the Prompt() function which is also reused elsewhere:
static string Prompt(string prompt)
{
Console.Write(prompt + ' ');
return Console.ReadLine();
}
We can now write our ReadTextAndEncrypt() function putting the things together:
static void ReadTextAndEncrypt()
{
    var result = Encrypt(Prompt("Enter the text to encrypt:"));
    var key = new String(result.key.Select(x => (char)('0' + x)).ToArray());
    Console.WriteLine("Encryption key is: {0}", key);
    Console.WriteLine("Encrypted string is: {0}", result.encrypted);
}
That's all, now, as exercise, write ReadTextAndDecrypt() with the same logic. You may also add Console.Clear() and Console.ReadKey() as appropriate (also that conversion for result.key isn't something to be proud of...)
In this example where I wrote (T1 key, T2 encrypted) I used value tuples (a C# 7 feature), it's similar to Tuple<T1, T2> but with an attached compile-time name. Function may be well rewritten as:
static Tuple<IEnumerable<int>, string> Encrypt(string text)
{
    var key = GenerateEncryptionKey(text.Length).ToArray();
    var encryptedText = new String(text.Zip(key, (v, k) => (char)(v + k)).ToArray());
    return Tuple.Create<IEnumerable<int>, string>(key, encryptedText);
} | {
"domain": "codereview.stackexchange",
"id": 27910,
"tags": "c#, beginner"
} |
Why does a color video compress better than a black and white video? | Question: It was asked in an exam why a color video compress better than a black and white (grayscale) video using MPEG but can't find anything explaining it.
In other words, we would apparently get a better compression ratio when compressing color videos than when compressing BW videos.
Answer: The human eye is very sensitive to luminance changes and an order of magnitude less sensitive to chrominance changes.
MPEG under the hood is based on the JPEG transform, so you have 8x8 blocks of DCT.
This blurs the whole block a bit, approximating it. The colour space is changed to YUV or YCbCr, to encode two channels of colour and one of luminance.
Luminance (the grayscale, if it were colour) is not compressed as much as the two newly created colour channels. These colours are fitted more loosely than the luminance.
The heaviest tricks are in the colour space; luminance is preserved with lower compression (and fewer stages).
If you have a BW frame it cannot be compressed as aggressively, because this degrades quality faster. MPEG was not created to deal with BW data. The origins of this codec are from the times when grayscale TV had colour appended on top of the existing frame.
Here I mean the origins, not the same scheme, as bandwidth was smaller at that time; I am referring to the colour space and the compression applied to chrominance.
A popular MPEG scheme is 4:2:2, which encodes four parts of luminance (BW) against two times two colour parts. Colour is encoded as differences, because there are two channels for three colours.
So, simply put, 50% of the data is well-preserved luminance and 50% is the three colours at a higher compression ratio. "Compressed better" means a greater ratio of original to processed channel size.
Luminance takes (keeping it simple, as luminance is part of these colours) roughly the same space as the colour info.
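The 50/50 split described above can be checked with simple arithmetic (my own sketch; the exact numbers assume 8 bits per sample and a 640x480 frame):

```python
w, h = 640, 480
luma = w * h                               # Y samples per 8-bit frame
size_444 = luma * 3                        # no chroma subsampling
size_422 = luma + 2 * (w // 2) * h         # chroma halved horizontally
size_420 = luma + 2 * (w // 2) * (h // 2)  # halved in both directions

assert size_422 == 2 * luma                # 50% luminance / 50% chrominance
assert size_420 == luma * 3 // 2           # chroma down to a third of the data
```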
Different codecs use different strategies; I referred to MPEG, not some newer mixed schemes. | {
"domain": "cs.stackexchange",
"id": 5087,
"tags": "algorithms, image-processing, data-compression, video"
} |
Looking for study/genome data for HIV in different organs | Question: I am looking for a research study or data base that has HIV genome data available in fasta or similar format. Specifically I need genome data of HIV taken from different organs in the same subject. I am interested in studying the way HIV diversifies and changes to infect different organs and in comparing the process across different patients.
If you are familiar with such a study, or if you have general pointers as to where I may find similar data, I would appreciate it very much. I have done a few hours of searching on NCBI GeneBank, but have had no luck.
Thanks!
Answer: This website comes to my mind
https://www.hiv.lanl.gov/content/index
It is a HIV databases containing HIV sequence data and related immunologic information | {
"domain": "biology.stackexchange",
"id": 11015,
"tags": "dna-sequencing, database, sequence-analysis, hiv"
} |
What is the Asymptotic Equipartition Property (AEP)? | Question: I am currently studying about Polar Codes in 5G standard and while reading my paper I found something called AEP which is required for channel coding. I surfed the web but didn't found a satisfying answer. Can someone explain what it is, clearly?
Answer: When you are sampling a stochastic process $n$ times, the larger you make $n$, the higher the probability that the series of samples is contained in the so called strongly typical set of outcomes of length $n$, where all members have roughly the same probability to be realized. If you let $n\rightarrow \infty$ it holds that
$$-\frac{1}{n}\log p(X_1,X_2,...,X_n)\rightarrow H(X)$$
with $H(X)$ being the entropy rate of the process.
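A quick simulation (my own sketch, for an i.i.d. Bernoulli source with $p = 0.2$) shows the convergence:

```python
import math
import random

random.seed(0)
p, n = 0.2, 200_000
entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))   # H(X) in bits

xs = [1 if random.random() < p else 0 for _ in range(n)]
log_prob = sum(math.log2(p if x else 1 - p) for x in xs)     # log p(X1..Xn)
empirical = -log_prob / n

assert abs(empirical - entropy) < 0.05    # -(1/n) log p(X^n) -> H(X)
```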
This should be intuitively clear: the longer the sequence, the more possible outcomes, each with probability near zero but positive. The number of typical outcomes with $p\approx\epsilon$ grows faster than the number of non-typical outcomes as $n$ grows. As $n\rightarrow \infty$, the "leftover probability" for non-typical outcomes gets smaller and smaller, thus making the above equation valid. | {
"domain": "dsp.stackexchange",
"id": 7401,
"tags": "digital-communications, information-theory, coding"
} |
Particle Physics Decay Question - Eta Prime Decay Parity/Angular Momentum Conservation | Question: I was hoping someone could clarify why the following decay does not occur:
$ \eta ^{'0} \rightarrow \pi ^{0} + \rho ^{0}$
The quark compositions and spin parity are as follows:
$ \eta ^{'0} : (u\bar{u}+d\bar{d}+s\bar{s}) / \sqrt{3} ;J^{P} = 0^{-} $
$ \pi ^{0} : (u\bar{u}-d\bar{d}) / \sqrt{2} ;J^{P} = 0^{-} $
$ \rho ^{0} : (u\bar{u}-d\bar{d}) / \sqrt{2} ;J^{P} = 1^{-} $
In order to conserve parity and angular momentum I thought that the two final particle states would have to be produced with angular momentum $l = 1$ between them (as the parity of the angular-momentum 'part' is $(-1)^{l}$ this would conserve parity, and we can couple 0, 1 and 1 to give 0, which conserves angular momentum). Does anyone know what is wrong with this approach, or alternatively a more straightforward reason why this does not occur.
Answer: This decay (occurring via the strong interaction) violates the charge conjugation since $J^{PC}(\pi^0) = 0^{-+}, J^{PC}(\rho^0) = 1^{--}, J^{PC}(\eta'^0) = 0^{-+}$.
The charge conjugation transforms a particle into its anti-particle. The 3 particles involved in this decay are all their own anti-particles, and the effect of the charge conjugation operator $C$ is therefore (taking the pion as an example) $C|\pi^0\rangle = \eta_C |\pi^0\rangle$, meaning that the $\pi^0$ is an eigenstate of charge conjugation with eigenvalue $\eta_C = +1$. The $\rho^0$ has $\eta_C=-1$ and the $\eta'^0$, +1 (remark: $\eta_C$ is necessarily $\pm 1$ because when you apply the charge conjugation twice you should recover the initial state). The requirement that the strong interaction conserve charge conjugation imposes:
$\eta_C(\eta'^0) = \eta_C(\pi^0) \times \eta_C(\rho^0)$, which is not the case: $+1 \ne (+1) \times (-1)$. Thus this reaction is forbidden. | {
"domain": "physics.stackexchange",
"id": 22261,
"tags": "particle-physics, angular-momentum, conservation-laws, feynman-diagrams, quarks"
} |
Error reading a .PCM file | Question: I want to convert a .wav file with a sampling frequency of 44100 Hz to a 16-bit depth .pcm.
I don't know why I'm getting those peaks at the beginning of the .pcm plot (third subplot in the figure below). If you could explain and tell me how I can correct it, I would very much appreciate it.
This is my attempt:
% determine the least common multiple (lcm) of fsin and fsout
fsin = fs;
fsout = 22050;
m = lcm(fsin, fsout);
% determine the up and down sampling rates
up = m/fsin;
down = m/fsout;
% resample the input using the computed up/down rates
x_22 = resample(x, up, down);
audiowrite([a_filename(1:11),'_22050','.wav'], x_22, fsout);
precision = 'int16';
fidr = fopen([a_filename(1:11), '_22050','.wav'], 'r'); % open .wav file to read
fidw = fopen([a_filename(1:11), '_22050','.pcm'], 'wb'); % open .pcm file to write
w = int16(fread(fidr, inf, precision));% read wav file
fwrite(fidw, w, precision);
fclose(fidr);
fclose(fidw);
fidr2 = fopen([a_filename(1:11),'_22050','.pcm'], 'r');
[data, ~] = fread(fidr2, 'short');
fclose(fidr2);
% plot
figure(1)
set(1, 'color', 'w')
subplot(311),plot(x)
grid on, box on, axis tight, title('.wav (f_s = 44100 Hz)')
subplot(312),plot(x_22)
grid on, box on, axis tight, title('.wav (f_s = 22050 Hz)')
subplot(313), plot(data)
grid on, box on, axis tight, title('.pcm (16 bit depth)')
These are the results:
Here are all the steps of the conversion:
This is a zoom-in on the third subplot to see the peaks:
Answer: That sure looks like you are including the .WAV file header as part of your data.
When you read your file in, and write it out, using the file operations, you are not accounting for the header. The header can vary in size (it's actually three RIFF headers), but is usually 44 bytes long.
These are the headers in my own (C++) words:
struct FirstHeader
{
char RiffID[4];
int RiffLength;
char WaveID[4];
char FormatID[4];
int FormatLength;
};
struct SecondHeader
{
short Always0x01;
short TrackCount;
int SamplesPerSecond;
int BytesPerSecond;
short BytesPerSample;
short BitsPerReading;
char Filler[16];
};
struct ThirdHeader
{
char DataID[4];
int DataLength;
};
Do a search on "WAV file header" and you can get lots of details. If you don't need any of the parameters, you could just search for the "data" tag in the header; the next 4 bytes are the UINT32 length of the data, so your samples start at the byte after that.
Otherwise, follow the RiffLength and FormatLength to get to the DataID field.
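To illustrate the chunk walk (a Python sketch of my own, not MATLAB, purely for demonstration): build a tiny 16-bit WAV in memory, locate the data chunk by walking the RIFF structure, and confirm that no header bytes end up in the samples.

```python
import io
import struct
import wave

def find_data_chunk(wav_bytes):
    """Walk the RIFF chunks and return (offset, length) of the raw PCM samples."""
    assert wav_bytes[0:4] == b"RIFF" and wav_bytes[8:12] == b"WAVE"
    pos = 12
    while pos + 8 <= len(wav_bytes):
        chunk_id = wav_bytes[pos:pos + 4]
        (chunk_len,) = struct.unpack("<I", wav_bytes[pos + 4:pos + 8])
        if chunk_id == b"data":
            return pos + 8, chunk_len
        pos += 8 + chunk_len + (chunk_len & 1)  # RIFF chunks are word-aligned
    raise ValueError("no 'data' chunk found")

# Build a tiny 16-bit mono WAV in memory, then strip its header properly.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)          # 16-bit samples
    w.setframerate(22050)
    w.writeframes(struct.pack("<4h", 0, 1000, -1000, 0))

raw = buf.getvalue()
offset, length = find_data_chunk(raw)
samples = struct.unpack("<%dh" % (length // 2), raw[offset:offset + length])
print(samples)  # (0, 1000, -1000, 0): no header bytes masquerading as audio
```

Reading the file from byte 0 as int16, as in the question's MATLAB code, would instead interpret those 44 header bytes as audio — which is exactly the spike at the start of the plot.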
The file can be mono or stereo, 8, 16 or 24 bit, etc. | {
"domain": "dsp.stackexchange",
"id": 7246,
"tags": "matlab, audio, pcm"
} |
Why does Co(I) have a 3d8 configuration? | Question: Why is the electron configuration of $\ce{Co^+}$ $[\ce{Ar}](\mathrm{3d})^8$?
Since neutral $\ce{Co}$ itself has a $[\ce{Ar}](\mathrm{4s})^2(\mathrm{3d})^7$ configuration, wouldn't the ionised electron be lost from the $\mathrm{4s}$ orbital, leading to an electron configuration of $[\ce{Ar}](\mathrm{4s})^1(\mathrm{3d})^7$?
Answer: For lighter elements, the shells fill in order. Starting at the transition metals, an outer s orbital may fill before an inner d orbital, so the electron configuration of unioninzed cobalt is written $\ce{[Ar]}4\mathrm s^1\,3\mathrm d^7$, rather than $\ce{[Ar]}3\mathrm d^7\,4\mathrm s^1$.
There is a video diagramming the electron configurations of $\ce{Co}$, $\ce{Co^{2+}}$ and $\ce{Co^{3+}}$, though it does not explain the reasoning, nor does it cover the less common $\ce{Co^{+}}$ ion, produced by photoionization or found in some esoteric metal-organic compounds, or in the theoretical $\ce{CoCl}$.
That said, the situation becomes murky for transition elements, and downright turbid for lanthanides and actinides, where f orbitals are added. For example, the outermost shells of $\ce{La}$, $\ce{Ce}$ and $\ce{Pr}$ are $5\mathrm d^1$, then $4\mathrm f^1\,5\mathrm d^1$, and then $4\mathrm f^3$. What happened to the Pr d electron? My understanding is that the energy levels are quite close for the larger outer shells, and it is not intuitive which orbital fills first. | {
"domain": "chemistry.stackexchange",
"id": 15473,
"tags": "electronic-configuration, transition-metals"
} |
Localizing using AR tags and wheel odometry | Question:
Hello everyone,
My team and I are working on a project where we need to localize our robots using AR tags (ar_track_alvar). Our robots will not always see tags, so we are using wheel odometry to track motion during these periods. We are trying to accomplish this through sensor fusion using the robot_pose_ekf or robot_localization package. We have tried both, but neither have given us the desired effect.
We have multiple tags in our environment which all have static transforms with respect to the map frame. The robot looks for tags and finds a transform from the tag to the robot. By extension, this yields a transform from the map to the robot. This map->robot transform is converted to an odometry message using only pose measurements. The covariance for the pose is set to:
|0.1 100 100 100 100 100 |
|100 0.1 100 100 100 100 |
|100 100 100 100 100 100 |
|100 100 100 100 100 100 |
|100 100 100 100 100 100 |
|100 100 100 100 100 0.1 |
The twist covariances are all set to 10000. The frame_id for this tag based odom message is map.
The wheel odometry is very standard, it is based off of the turtlebot odometry. One note is that when the first tag estimate is received, a transform is broadcasted which places the wheel odom frame at the initial location of the robot with respect to the map.
Using both of the aforementioned packages and setups, and keeping the robot motionless, it is observed that the filter estimates become increasingly erratic and tend to infinity in no particular direction. Removing the wheel odometry and keeping only the ar tag odometry yields the same effect. The ar tag odometry messages remain completely constant within three significant figures.
Thus we conclude that somehow our consistent tag odometry measurements are causing the kalman filters to behave erratically and tend to infinity. This happens even with the wheel odometry measurements being fused as well. Can anyone explain why this might be, and offer any suggestions to fix this? I would be happy to provide any extra information which I haven't provided.
Side note: As a naive test, we also set all of the covariances to large values and observed that no matter how we moved the robot the differences in the filter outputs were tiny (+/- 0.01), more likely drift than an actual reading.
EDIT:
Thank you for your advice Tom. We have already combed through all of the wiki, and we did see the part about inflating covariances being unhelpful. We were confused though, because we didn't know what to set them to if not an arbitrarily large number. We did use the config to turn them off. You can see this in the posted config file.
We do not have an IMU. We are using a very ragtag group of robots (its what our school had lying around) and we are lucky we even have wheel odometry!
Our AR tag readings give full 6DOF pose, but no velocities. We have already set the 2d param.
Below is our config file for robot_localization. As you can see, we have set world_frame = odom_frame that way we get an estimate of the robot's position with respect to the odom_combined frame. We then transform that into the map frame using a static_transform based on the first observation of a tag. In practice, this usually places the odom_combined frame exactly where we want it. We take only 2D velocity data from odometry and 2D position data from the AR tags.
<node name="odom_filter" pkg="robot_localization" type="ekf_localization_node" respawn="true" ns="hermes">
<param name="map_frame" value="map" />
<param name="odom_frame" value="hermes/odom_combined"/>
<param name="base_link_frame" value="hermes_footprint" />
<param name="world_frame" value="hermes/odom_combined" />
<param name="frequency" value="10.0"/>
<param name="sensor_timeout" value=".5"/>
<param name="two_d_mode" value="true" />
<param name="odom0" value="odom" />
<param name="odom1" value="ar_tag_odom" />
<rosparam param="odom0_config">[false, false, false,
false, false, false,
true, true, false,
false, false, true,
false, false, false]
</rosparam>
<rosparam param="odom1_config">[true, true, false,
false, false, true,
false, false, false,
false, false, false,
false, false, false]
</rosparam>
<param name="debug" value="true"/>
<param name="debug_out_file" value="/home/hermes/localization_debug2.txt"/>
<param name="odom1_differential" value="true" />
<remap from="odometry/filtered" to="odom_filter/odom_combined" />
</node>
Thanks for your help!
Originally posted by Robocop87 on ROS Answers with karma: 255 on 2015-04-15
Post score: 3
Original comments
Comment by Tom Moore on 2015-04-15:
Just a word of caution: I'd turn off debug mode if you value disk space. :)
Comment by Tom Moore on 2015-04-15:
Also, can you post a sample odom and ar_tag_odom message?
Answer:
My response is for robot_localization. Have a read through the wiki, but pay particular attention to this page. In particular, note that inflating covariances for variables you want to ignore is not going to produce good results for the state estimation nodes in robot_localization. Instead, use the configuration parameters to fuse only the variables you want.
Having said that, you need to make sure that you are accounting for all of your pose (position and orientation) variables. If you don't account for any one of them, you'll see the covariances explode, and your state estimate will act unpredictably. For position, you can either have absolute measurements (X, Y, and Z) or velocity measurements (X velocity, Y velocity, and Z velocity). For orientation, you will want to have at least one source of absolute measurement data.
Some questions for you:
Do you have an IMU on your robot?
Do your AR code-based measurements yield position only, or do they also produce orientation?
Is this robot only operating in 2D? If it is, then you should either set two_d_mode to true, or let your AR code measurements produce estimates for Z, roll, and pitch. Make them 0, and make the covariances very small.
Your setup is pretty much identical to a system with odometry and GPS, and that setup works well with robot_localization. Without knowing more, what I would do is this:
Run one instance of ekf_localization_node with the world_frame parameter set to the same value as the odom_frame parameter. In this instance, fuse your odometry and IMU if it's available. If you only have odometry, then don't run this instance. Instead, make sure that your odometry source is also producing an odom->base_link (or whatever your frames are) transform.
Run a second instance of ekf_localization_node, and fuse the odometry, IMU (again, if available), and AR measurements. Set the world_frame parameter to the same value as the map_frame parameter.
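For concreteness, a sketch of that second (map-frame) instance — frame and topic names are copied from the launch file in the question, and every value here is illustrative rather than tested:

```xml
<node name="map_filter" pkg="robot_localization" type="ekf_localization_node" ns="hermes">
  <param name="map_frame" value="map" />
  <param name="odom_frame" value="hermes/odom_combined" />
  <param name="base_link_frame" value="hermes_footprint" />
  <!-- world_frame = map_frame: this instance produces the map-level estimate -->
  <param name="world_frame" value="map" />
  <param name="two_d_mode" value="true" />
  <param name="odom0" value="odom" />
  <param name="odom1" value="ar_tag_odom" />
  <!-- fuse x/y velocity and yaw velocity from the wheels, x/y/yaw pose from the tags -->
  <rosparam param="odom0_config">[false, false, false,
                                  false, false, false,
                                  true,  true,  false,
                                  false, false, true,
                                  false, false, false]</rosparam>
  <rosparam param="odom1_config">[true,  true,  false,
                                  false, false, true,
                                  false, false, false,
                                  false, false, false,
                                  false, false, false]</rosparam>
</node>
```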
How you deal with orientations will be important here, so I'll need more information before I can offer up any more advice. Please post a sample message from each input source, and also please post your ekf_localization_node launch file.
Originally posted by Tom Moore with karma: 13689 on 2015-04-15
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 21441,
"tags": "localization, navigation, ekf, robot-pose-ekf, robot-localization"
} |
Expectation value - Zetilli vs Griffith | Question: I know that an inner product between two vectors is defined like:
$$\langle a | b\rangle = {a_1}^\dagger b_1+{a_2}^\dagger b_2+\dots$$
but because the transpose of a scalar component such as $a_1$ is just $a_1$ itself, the dagger reduces to complex conjugation and the above simplifies to:
$$\langle a | b\rangle = \overline{a_1} b_1+\overline{a_2} b_2+\dots$$
Where $\overline{a_1}$ is a complex conjugate of $a_1$. Furthermore we can similarly define an inner product for two complex functions like this:
$$\langle f | g \rangle = \int\limits_{-\infty}^\infty \overline{f} g\, dx$$
In Griffith's book (page 96) there is an equation which describes the expectation value, and we can write this as an inner product of a function $\Psi$ with $\widehat{x} \Psi$:
\begin{align*}
\langle x \rangle = \int\limits_{-\infty}^{\infty}\overline{\Psi}\,\,\widehat{x}\Psi\,\,dx = \int\limits_{-\infty}^{\infty} \overline{\Psi}\,\,(\widehat{x}\Psi)\,\, dx \equiv \underbrace{\langle\Psi |\widehat{x} \Psi \rangle}_{\rlap{\text{expressed as an inner product}}}
\end{align*}
In Zettili's book (page 173) the expectation value is defined as a fraction:
\begin{align*}
\langle \widehat{x} \rangle = \frac{\langle\Psi | \widehat{x} | \Psi \rangle}{\langle \Psi | \Psi \rangle}
\end{align*}
Main question: I know the meaning of the definition in Griffith's book but I simply have no clue what Zettili is talking about. What does this fraction mean and how is it connected to the definition in Griffith's book?
Sub question: I noticed that in Zettili's book they write the expectation value as $\langle \widehat{x}\rangle$ while Griffith writes it as $\langle x \rangle$. Who is right and who is wrong? Does it matter? I think Griffith is right, but please express your opinion.
Answer: If the wave function $\Psi$ is normalized, then $\langle\Psi|\Psi\rangle$ should equal 1. Griffiths' definition assumes the wave function is already normalized, while Zettili accounts for all possibilities by dividing out the normalization constant. So if the wave function $\Psi$ is normalized, Zettili's definition will reduce to Griffiths' definition.
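As a quick numerical illustration (my own sketch, not from either book): take a deliberately unnormalized real Gaussian wave packet and compute $\langle\widehat{x}\rangle$ both ways. Zettili's ratio gives the same answer as Griffiths' formula applied to the normalized wave function, independent of the arbitrary amplitude.

```python
import math

def trapz(f, a, b, n=20001):
    """Composite trapezoidal rule for integrating f over [a, b]."""
    h = (b - a) / (n - 1)
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n - 1):
        s += f(a + i * h)
    return s * h

x0 = 1.5                                  # the packet's center, so <x> should be 1.5
A = 3.7                                   # deliberately NOT a normalization constant
psi = lambda x: A * math.exp(-(x - x0) ** 2 / 2.0)   # real, so conjugation is trivial

norm = trapz(lambda x: psi(x) ** 2, -10.0, 13.0)                   # <Psi|Psi>
x_zettili = trapz(lambda x: psi(x) * x * psi(x), -10.0, 13.0) / norm

phi = lambda x: psi(x) / math.sqrt(norm)                           # normalized version
x_griffiths = trapz(lambda x: phi(x) * x * phi(x), -10.0, 13.0)

print(x_zettili, x_griffiths)  # both ~1.5: the definitions coincide once normalized
```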
As for the sub question, that's just a matter of notation, which is just a matter of convention, which doesn't really have an objective "right" and "wrong." They're both right as long as they're consistent with the rest of the notation in their respective books. | {
"domain": "physics.stackexchange",
"id": 7913,
"tags": "quantum-mechanics"
} |
To what extent are lepton and quark generations tied in the Standard Model? | Question: The Standard Model of particle physics splits both the leptons and the quarks into three generations, with mass and instability going up from the first to the third generation. These are normally displayed together, on the same rows or columns of the table of fundamental particles:
This makes some sense: each charged lepton is tied to its neutrino in most Feynman vertices it appears in, and the quarks are linked to each other by their charges, if nothing else. However, I can't think of any way in which the SM formally links, say, muons, and strange quarks. Is there some explicit link with generation-specific interactions? Or is it just a coincidence that there are three rungs in both ladders with increasing mass on both?
Answer:
Is there some explicit link with generation-specific interactions? Or is it just a coincidence that there are three rungs in both ladders with increasing mass on both?
In the SM, there are no generation-specific interactions, beyond color, which factors out. The generation tabulation system simply arrays states according to increasing mass, a still mysterious pattern; which is why, to date, the neutrino mass eigenstates $\nu_{1,2,3}$ have not been assigned to generations, yet, pending complete experimental confirmation of the normal hierarchy/ordering (the $\nu_{e,\mu,\tau}$ are not mass eigenstates, and were on that chart to simply confuse and abuse; mercifully, they are going away). The increasing mass pattern is not a coincidence: it is the construction principle.
The weak interactions, through weak mixing, hop across all generations, and neutrinos mix a lot, unlike quarks, whose mixing angles are small. So scrambling green columns will do nothing to physics, as long as you take care to rewrite your PMNS matrix to reflect the labelling change. Of course you do need the number of states you see on such tables to be matched, 3 rungs to 3 rungs, whatever their order, so as to prevent gauge anomalies, invalidating gauge invariance, but the assignment of states in generations, the order of the rungs, is basically arbitrary.
In speculative GUTs (like the Georgi-Glashow SU(5)) one tries to link fermion masses (cf. (22)), since leptons and quarks are put in common representations, 5 and 10; but, again, alternate inequivalent models wiggle the leptons around, associating them to different quarks, to adapt to negative proton-decay results. So, indeed, such alternative speculative models take advantage of the SM freedom to scramble green columns. | {
"domain": "physics.stackexchange",
"id": 70512,
"tags": "particle-physics, standard-model, quarks, leptons"
} |
A data structure for sets of trees. | Question: Tries allow for efficient storage of lists of elements. The prefixes are shared so it is space efficient.
I am looking for a similar way to efficiently store trees. I would like to be able to check for membership and to add elements, knowing if a given tree is a subtree of some stored trees or if there exists a stored tree being a subtree of the given tree is also desirable.
I would typically store about 500 unbalanced binary trees of height less than 50.
EDIT
My application is some kind of model checker using some sort of memoization. Imagine I have a state $s$ and the following formulae: $f = \phi$ and $g = (\phi \vee \psi)$ with $\phi$ being a complex subformula, and imagine I first want to know if $f$ holds in $s$. I check if $\phi$ holds and after a lengthy process I obtain that it is the case. Now, I want to know if $g$ holds in $s$. I would like to remember the fact that $f$ holds and to notice that $f \Rightarrow g$, so that I can derive $g$ in $s$ almost instantly.
Conversely, if I have proved that $g$ does not hold in $t$, then I want to tell that $f$ does not hold in $t$ almost instantly.
We can build a partial order on formulae, and have $g \geq f$ iff $g \Rightarrow f$. For each state $s$, we store two sets of formulae; $L(s)$ stores the maximal formulae that hold and $l(s)$ stores the minimal formulae that do not hold. Now given a state $s$ and a formula $g$, I can see if $\exists f \in L(s), f \Rightarrow g$, or if $\exists f \in l(s), g \Rightarrow f$ in which case I am done and I know directly whether $g$ holds in $s$.
Currently, $L$ and $l$ are implemented as lists and this is clearly not optimal because I need to iterate through all stored formulae individually. If my formulae were sequences, and if the partial order was "is a prefix of" then a trie could prove much faster. Unfortunately my formulae have a tree like structure based on $\neg, \wedge$, a modal operator, and atomic propositions.
As @Raphael and @Jack point out, I could sequentialise the trees, but I fear it would not solve the problem because the partial order I am interested in would not correspond to "is a prefix of".
Answer: You might want to check out g-tries. This is essentially the data structure you're looking for, but designed for use with general graphs instead of just trees. As such, I'm not sure that g-tries have good theoretical guarantees -- I think they use a graph canonization algorithm as a subroutine -- but in practice they seem to work well.
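For the ordered binary trees in your setting there is also a much simpler baseline than full graph canonization (my own illustrative sketch, not part of the g-trie work): store a canonical serialization of every stored tree, and of each of its subtrees, in hash sets. Note this only answers *syntactic* subtree containment, not the logical-implication order from the edit.

```python
class Node:
    def __init__(self, label, left=None, right=None):
        self.label, self.left, self.right = label, left, right

def serialize(t):
    """Canonical string for an ordered binary tree (None -> '#')."""
    if t is None:
        return "#"
    return "(%s %s %s)" % (t.label, serialize(t.left), serialize(t.right))

class TreeSet:
    """Exact membership, plus 'is t a subtree of any stored tree'."""
    def __init__(self):
        self.roots = set()      # serializations of whole stored trees
        self.subtrees = set()   # serializations of every subtree of every stored tree

    def add(self, t):
        self.roots.add(serialize(t))
        stack = [t]
        while stack:
            n = stack.pop()
            if n is None:
                continue
            self.subtrees.add(serialize(n))
            stack.extend((n.left, n.right))

    def contains(self, t):
        return serialize(t) in self.roots

    def is_subtree_of_stored(self, t):
        return serialize(t) in self.subtrees

# phi = (p AND q); g = NOT (p AND q)
phi = Node("and", Node("p"), Node("q"))
g = Node("not", phi)
ts = TreeSet()
ts.add(g)
print(ts.contains(g), ts.is_subtree_of_stored(phi))  # True True
```

With ~500 trees of height under 50 the repeated serialization cost is modest; memoizing the hashes per node would remove even that.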
(Don't be scared that the linked paper is about "network motifs in biological networks": the g-trie is a perfectly good abstract data structure for graphs.) | {
"domain": "cstheory.stackexchange",
"id": 862,
"tags": "ds.data-structures, tree, model-checking"
} |
Doubt in the expression of partition function of a general canonical ensemble | Question: Suppose we have a system $S$ connected to a bath $B$. The combined system forms a microcanonical ensemble.
Suppose the energy of the combined system is $E_T$.
So, $E_S+E_B=E_T$.
The probability of finding the system $S$ in energy $E$ (or probability of finding a microstate of system corresponding to energy E) is
$\rho(E)=$ Probability of finding the bath at energy $(E_T-E)\;\propto\;\Omega_B(E_T-E)$
$\displaystyle\implies \rho(E)\;\propto\;\Omega_B(E_T-E)\tag{1}$
We consider the bath to be huge and the system $S$ forms a tiny part of the combined system. So, $E\ll E_B$. This means $E_T\approx E_B$
So, $(1)$ becomes
$\displaystyle\rho(E)\;\propto\;\Omega_B(E_B-E)\tag{2}$
As $E$ is small, so we can do the Taylor expansion of the above expression around $E_B$ and retain only the first order terms.
Finally we get, $\displaystyle\rho(E)=Ce^{-\frac{E}{k_BT}}\tag{3}$
Here $C$ is the normalization constant.
We can find $C$ as follows
$\displaystyle\sum_E\rho(E)=1=C\sum_{\text{(All possible microstates)}}e^{-\frac{E}{k_BT}}=C\sum_E\Big(\sum_{\text{(All possible microstates with energy E)}}e^{-\frac{E}{k_BT}}\Big)=C\sum_E\Omega(E)e^{-\frac{E}{k_BT}}\tag{4}$
We can define the partition function as
$\displaystyle Z=\sum_E\Omega(E)e^{-\frac{E}{k_BT}}\tag{5}$
So, $\displaystyle\rho(E)=\frac{e^{-\frac{E}{k_BT}}}{Z}\tag{6}$
Doubt
$\rho(E)$ by definition is $\frac{\Omega(E)}{\sum_E\Omega(E)}$
This is the probability of finding the system in macrostate $E$; in other words, it is the probability of finding the system in a microstate corresponding to energy $E$ out of all the possible microstates that correspond to different energies.
In $(4)$, on the RHS, why are we summing over all the possible microstates? I think that by the definition it should be just a sum over all the energies $E$ (macrostates).
Then $\displaystyle Z=\sum_{E}e^{-\frac{E}{k_BT}}$
I don't know what I am missing, because in all the books the partition function is written as a sum over all the possible microstates.
Answer: The probability of finding the system $S$ in a particular microstate $\mu$ which has energy $\epsilon(\mu)$ is
$$\mathrm{Prob}(\mu) = \frac{\Omega_B\big(E-\epsilon(\mu)\big)}{\Omega_{tot}}$$
where $\Omega_B(E)$ is the number of microstates of the bath with energy $E$ and $\Omega_{tot}$ is the total number of microstates of the system+bath. We then have
$$\log\big[\mathrm{Prob}(\mu)\big] = \log\big[\Omega_B\big(E-\epsilon(\mu)\big)\big] - \log\big[\Omega_{tot}\big]$$
$$\approx \log\big[\Omega_B(E)/\Omega_{tot}\big] - \underbrace{\frac{\Omega_B'(E)}{\Omega_B(E)}}_{\equiv 1/kT} \epsilon(\mu)$$
$$\implies \mathrm{Prob}(\mu) = C e^{-\epsilon(\mu)/kT}$$
That is, the probability of the microstate $\mu$ is equal to some constant $C$ times the usual Boltzmann factor. To determine the constant $C$, we require that the sum of the probabilities of all of the microstates is equal to $1$, yielding
$$C = 1/Z, \qquad Z= \sum_{\mu} e^{-\epsilon(\mu)/kT}$$
It is true that for many of those microstates, $\epsilon(\mu)$ will be the same. As a result, we could re-write this by summing over all system energies $\mathcal E$ and weighting each factor by the number of microstates with that energy $g(\mathcal E)$:
$$Z = \underbrace{\sum_\mu e^{-\epsilon(\mu)/kT}}_{\text{sum over microstates}} = \underbrace{\sum_\mathcal E g(\mathcal E) e^{-\mathcal E/kT}}_{\text{sum over energies}}$$ | {
"domain": "physics.stackexchange",
"id": 91475,
"tags": "thermodynamics, statistical-mechanics, entropy, partition-function"
} |
Relativity and "light years" | Question: As I understand relativity, time is relative to your velocity, meaning your watch moves slower relative to those who are stationary when moving at great speeds.
So if that's true, then when we talk about "light years", is that a distance based on a year at some average Earth velocity?
Furthermore if we got in a spacecraft and traveled at near light speeds, for a journey say 5 light years away, would it not seem much shorter than 5 years for those traveling?
Answer: There are some misconceptions in your question, and you are really asking two different things.
First, remember that if you are on a spaceship moving at high speed, Earth people will measure your watch as running slower than theirs and you will measure their watches as running slower than yours! Relativity works both ways; that's why it's called relativity.
A light year is a unit of distance, just like a meter or a mile. It is defined as the distance that light travels in a year; this definition implicitly assumes that the same observer will measure both distance and time, and that this observer is inertial. One important point is that you do not actually have to go out and measure a light year to know what it is! Earth's velocity doesn't matter, because a light year is defined assuming you are in some inertial frame, it doesn't matter which one.
If distance and time are measured by different observers, things will change: if one of the observers gets on a rocket ship and travels at $0.9999c$ for a year of proper time, someone standing on Earth will measure the travelled distance to be much more than $0.9999$ light years (around 70 light years, in fact).
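A quick numerical check of these figures (my own sketch, in units where $c=1$, distances in light years and times in years):

```python
import math

v = 0.9999                       # speed as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - v * v)   # Lorentz factor, ~70.7

# One year of proper time on the ship covers gamma * v light years in Earth's frame:
earth_distance = gamma * v       # ~70.7 light years

# A trip of 5 light years (Earth-frame distance, as in the question):
earth_time = 5.0 / v             # ~5.0005 years on Earth clocks
ship_time = earth_time / gamma   # ~0.071 years (about 26 days) on the ship's clock

print(gamma, earth_distance, earth_time, ship_time)
```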
This is also the answer for the last part of your question. If you travel 5 light years (as measured from Earth!) at a speed practically equal to $c$, when you get there your clocks will read much less than 5 years. | {
"domain": "physics.stackexchange",
"id": 25700,
"tags": "special-relativity, reference-frames, relativity, observers, metric-space"
} |
Reference for Supersampling | Question: I want to downsample images to arbitrary sizes using supersampling to avoid aliasing effect.
The only two good explanations I found were on Wikipedia and everything2.com, but there are still gaps. For example:
The image should be upsampled before supersampling. But how much? Is there any special filter to be used for upsampling prior to supersampling? Is just taking interpolated color values on sub-pixel positions in source image sufficient?
What if I want to resize image to e.g. 80% of its size with supersampling? How much I have to upsample it and how to treat non-integral size ratio (e.g. downsampling from 379 pixels to 377 pixels)?
Are the multiple samples taken from the single pixel, multiple pixels or around sample point (both cases can occur)?
UPDATE:
I have tested the "Super Sampling" method in Paint.NET with the test target on this page. Surprisingly, the result looked just like the "Photoshop Bicubic" filter:
The only advantage can be achieved by pre-blurring the input image and then using some conventional method.
I've implemented and tried ordinary "Box" and "Cubic" convolution filters with excellent results!
So now I don't see any benefit of using supersampling over the convolution-based filters. Or is there any?
Answer: EDIT: I took this answer down for a time because I realized a lot depends on whether there are any idiosyncrasies in how images are mapped to pixels by actual devices, and, not being an image guy, I don't know a lot about that. I decided to bring it back, with the caveat that the answer may be insufficient given said idiosyncrasies.
The image should be upsampled before supersampling. But how much?
You are just trying to soften quantization (pixel resolution) error by collecting information from neighboring pixels. Thus, oversampling by a factor of 2 should be plenty. In other words, once you oversample until your sample resolution is twice that of the pixel resolution, nothing can be gained by oversampling more.
Is there any special filter to be used for upsampling prior to supersampling?
I might be misunderstanding the terms, but it appears to me that upsampling is part of supersampling. No, you don't need to use any special filters. Insert 0's in between your samples and then low-pass filter. You'll probably need to adjust the gain of your low-pass filter by your upsampling rate to get the pixel intensity to come out right.
What if I want to resize image to e.g. 80% of its size with supersampling?
Then factor that into your upsample/decimation calculations. After you upsample you have to decimate down to the pixel resolution. By resizing the image you are, in effect, changing the pixel resolution.
How much I have to upsample it and how to treat non-integral size ratio?
How much you have to upsample it depends on the details of the image, pixel resolution, etc.
How much I have to upsample it and how to treat non-integral size ratio?
You can deal with non-integral ratios with fractional resampling. Essentially you upsample by an integer and decimate by another integer, giving you a fractional sample change. | {
"domain": "dsp.stackexchange",
"id": 261,
"tags": "sampling, downsampling, supersampling"
} |
Help with a supersymmetry problem 3.5b in Peskin and Schroeder | Question: I am self-studying Quantum Field Theory and I am using the book Introduction to Quantum Field Theory by Peskin and Schroeder along with the solution manual by Zhong Zhi Xianyu. I am currently working on problem 3.5b, which is on supersymmetry. In the solutions for (b), the first line (which I was able to successfully understand) says
$$\delta (\Delta L) = -im\epsilon^T \sigma^2 \chi F - im\phi \epsilon^T \bar{\sigma}^{\mu}\partial_\mu\chi + \frac{1}{2}im(\epsilon^T F + \epsilon^\dagger(\sigma^2)^T (\sigma^\mu)^T\partial_\mu \phi)\sigma^2 \chi + \frac{1}{2}im\chi^T \sigma^2(\epsilon F + \sigma^\mu \partial_\mu\phi \sigma^2\epsilon^*).$$
The next line says,
$$-\frac{1}{2}imF\epsilon^T \sigma^2 \chi + \frac{1}{2}imF\chi^T \sigma^2 \epsilon - im\phi \epsilon^T \bar{\sigma}^{\mu}\partial_\mu\chi - \frac{1}{2}im(\partial_\mu \phi)\epsilon^\dagger \bar{\sigma}^\mu \chi + \frac{1}{2}im(\partial_\mu \phi) \chi^T (\bar{\sigma}^\mu)^T\epsilon^*$$
I am completely lost on how the second line follows from the first. I see that the $- im\phi \epsilon^T \bar{\sigma}^{\mu}\partial_\mu\chi$ cancel. I also tried using the identities written beneath the derivation in the solutions manual, but I still cannot get the second line's results. I found that I often got the same letters but in a different order, and order matters here, so I am lost. Can anyone explain how the manual author got from the first line to the second?
Answer: I just did that problem two weeks ago to review for my qualification exam. Let me first explain his purpose. He wants to prove that the following Lagrangian
$$\Delta\mathcal{L}=[m\phi F+\frac{1}{2}im\chi^T\sigma^2\chi]+c.c.$$
is invariant under the supersymmetry transformation
\begin{align}
\delta\phi&=-i\varepsilon^T\sigma^2\chi\\
\delta\chi&=\varepsilon F+\sigma\cdot\partial\phi\sigma^2\varepsilon^*\\
\delta F&=-i\varepsilon^\dagger\bar{\sigma}\cdot\partial\chi.
\end{align}
I think the complicated part is $\frac{1}{2}im\delta\chi^T\sigma^2\chi$ and $\frac{1}{2}im\chi^T\sigma^2\delta\chi$. Let's do it step by step. Firstly,
$$\frac{1}{2}im\delta\chi^T\sigma^2\chi=\frac{1}{2}im(\varepsilon F+\sigma\cdot\partial\phi\sigma^2\varepsilon^*)^T\sigma^2\chi.$$
As we have seen, the first term of the above equation, $\frac{1}{2}im(\varepsilon F)^T\sigma^2\chi$, combines with $m(\delta\phi)F$ to give $-\frac{1}{2}im\varepsilon^T F\sigma^2\chi$. Meanwhile, the second part of it,
$$\frac{1}{2}im(\sigma\cdot\partial\phi\sigma^2\varepsilon^*)^T\sigma^2\chi=
\frac{1}{2}im(\varepsilon^\dagger(-\sigma^2)\sigma^T\cdot\partial\phi)\sigma^2\chi=
-\frac{1}{2}im\varepsilon^\dagger\bar{\sigma}\cdot\partial\phi\chi,$$
where at the second equal sign $\sigma^2\sigma^T\sigma^2=\bar{\sigma}$ is used. And this is exactly the fourth term in "the next line". Taking the transpose of both sides of $\sigma^2\sigma^T\sigma^2=\bar{\sigma}$ gives $\sigma^2\sigma\sigma^2=\bar{\sigma}^T$. Then do the same thing to $\frac{1}{2}im\chi^T\sigma^2\delta\chi$ and you will get the last term in "the next line".
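The identity $\sigma^2\sigma^T\sigma^2=\bar{\sigma}$ (componentwise, $\sigma^2(\sigma^\mu)^T\sigma^2=\bar{\sigma}^\mu$ with $\sigma^\mu=(1,\sigma^i)$ and $\bar{\sigma}^\mu=(1,-\sigma^i)$) is easy to verify numerically. A quick Python check using plain 2x2 complex tuples, no external libraries:

```python
# Verify sigma^2 (sigma^mu)^T sigma^2 = bar{sigma}^mu for mu = 0..3.

def mul(X, Y):  # 2x2 complex matrix product
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def T(X):       # transpose
    return tuple(tuple(X[j][i] for j in range(2)) for i in range(2))

def neg(X):
    return tuple(tuple(-x for x in row) for row in X)

I2 = ((1, 0), (0, 1))
s1 = ((0, 1), (1, 0))
s2 = ((0, -1j), (1j, 0))
s3 = ((1, 0), (0, -1))

sigma     = [I2, s1, s2, s3]      # sigma^mu = (1, sigma_i)
sigma_bar = [I2, neg(s1), neg(s2), neg(s3)]  # bar{sigma}^mu = (1, -sigma_i)

for mu in range(4):
    assert mul(mul(s2, T(sigma[mu])), s2) == sigma_bar[mu]
```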
There is a typo in "the next line". The third term should be $-im\phi\varepsilon^\dagger\bar{\sigma}\cdot\partial\chi.$ In "the next line", the first two terms cancel and the last three terms give a total differential. | {
"domain": "physics.stackexchange",
"id": 79447,
"tags": "homework-and-exercises, lagrangian-formalism, field-theory, supersymmetry"
} |
Separating $DSPACE(f)$ and $DTIME(2^f)$: is there any function computable in time $2^{O(f)}$ but not in space $O(f)$? | Question: For $f(n)\ge n$,
$$\mathsf{DSPACE}(f(n)) \subseteq \mathsf{DTIME}(2^{O(f(n))}).$$
Is there any function $f$ for which this containment is known to be proper?
Answer: If $\mathsf{L} = \mathsf{P}$ then $\mathsf{DSPACE}(O(f(n))) = \mathsf{DTIME}(2^{O(f(n))})$ for all time-constructible functions $f(n) \geq \log n$ using standard padding arguments. I am not sure what happens if $f(n)$ is not time-constructible.
If $\mathsf{L} \neq \mathsf{P}$ then $f(n) = \log n$ will be an example. Therefore there is a time-constructible function $f(n)$ for which the containment is proper iff the containment is proper for $f(n) = \log n$.
"domain": "cstheory.stackexchange",
"id": 2464,
"tags": "cc.complexity-theory, complexity-classes"
} |
Is 'grapheme' a substance or a typo? | Question: While reading Ref. 1 I came across the sentence
Below we focus on the physics of ideal (single layer) grapheme.
I did google search 'grapheme' but the results tended towards a completely unrelated use of the word.
When I googled it alongside the word physics I got a number of hits, none of which could exclude the possibility of it being a typo of 'graphene'. Is 'grapheme' a real word?
Bonus points if you can tell me what it is. An explanation aimed at the undergraduate physics level would be most helpful.
References:
K.S. Novoselov et al, Two-Dimensional Gas of Massless Dirac Fermions in Graphene, Nature 438 (2005) 197, arXiv:cond-mat/0509330.
Answer: This answer won't be very long, because there's not all too much to say: It's a typo. This is clear from the context:
The sentence describes graphene, as witnessed by the words "single layer", which is the characteristic property of graphene.
The sentence occurs in a paper on graphene.
The 'n' is found next to the 'm' on most keyboards. | {
"domain": "physics.stackexchange",
"id": 18976,
"tags": "terminology, definition, graphene"
} |
Rate constants for water exchange | Question: Are rate constants for water exchange of transition metal aqua-complexes tabulated anywhere ? I haven't been able to find a resource which is anywhere near comprehensive.
Answer: Yes, see Fig. 1 and Tables VI-VIII of "Solvent Exchange on Metal Ions", Advances in Inorganic Chemistry, vol. 54, pp. 1-69.
Values vary over almost 20 orders of magnitude.
To make this not a link-only answer, roughly, per second exchange rates are:
$\ce{Ir^3+}$ $10^{-10}$
$\ce{Rh^3+}$ $2.2 \times 10^{-9}$
$\ce{Cr^3+}$ $10^{-6}$
$\ce{Ru^3+}$ $10^{-5}$
$\ce{Pt^2+}$ $10^{-3}$
$\ce{Ru^2+}$ $10^{-2}$
$\ce{V^2+}$, $\ce{Fe^3+}$ $10^2$
$\ce{Pd^2+}$, $\ce{V^3+}$ $10^3$
$\ce{Ni^2+}$ $10^4$
$\ce{Ti^3+}$ $10^5$
$\ce{Co^2+}$ $3.2 \times 10^6$
$\ce{Fe^2+}$ $4.4 \times 10^6$
$\ce{Mn^2+}$ $2.1 \times 10^7$
$\ce{Zn^2+}$ great uncertainty $5 \times 10^7 - 10^{10}$
$\ce{Cd^2+}$ $10^8$
$\ce{Cr^2+}$, $\ce{Hg^2+}$ $10^9$
$\ce{Cu^2+}$ $5.7 \times 10^9$ | {
"domain": "chemistry.stackexchange",
"id": 3399,
"tags": "kinetics, coordination-compounds, reference-request"
} |
Factoring assuming smoothness of some numbers | Question: I have came across a lot of factorization methods and most of them seem to assume smoothness of some numbers.
For example
When $p-1$ is smooth
When $|E(\mathbb{F}_p)|$ is smooth. (Elliptic curve factorization)
Smoothness of prime ideals in Number field sieves.
I want to know whether any other notions are known to yield factoring methods, like smoothness of $p+1$ or $p^2+1$?
Answer: See my paper with Eric Bach, "Factoring with cyclotomic polynomials", where we show that if the cyclotomic polynomial $\Phi_k(p)$ is $B$-smooth for any $p$ dividing $N$, then we can factor $N$ in time polynomial in $\log N$ and $k$ and $B$. In particular this gives a $(p+1)$-method (see the earlier work of Williams) and $(p^2+1)$ method.
http://www.ams.org/journals/mcom/1989-52-185/S0025-5718-1989-0947467-1/S0025-5718-1989-0947467-1.pdf | {
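For concreteness, here is a minimal Python sketch of the simplest member of this family, Pollard's $p-1$ method (the $\Phi_1(p)=p-1$ case); the smoothness bound and the base $a=2$ are arbitrary illustrative choices:

```python
from math import gcd

def pollard_p_minus_1(n, bound=10000):
    # Succeeds when some prime p | n has p - 1 built from factors < bound:
    # then a raised to a product of small numbers is 1 (mod p),
    # so gcd(a - 1, n) reveals p.
    a = 2
    for j in range(2, bound):
        a = pow(a, j, n)
        d = gcd(a - 1, n)
        if 1 < d < n:
            return d
    return None

print(pollard_p_minus_1(299))  # 299 = 13 * 23; 13 - 1 = 2^2 * 3 is smooth -> 13
```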
"domain": "cstheory.stackexchange",
"id": 3741,
"tags": "nt.number-theory, factoring, cryptographic-attack"
} |
Non-re-indexing List with defragmentation | Question: I'm currently coding something where I have Objects that take on an ID, and are stored in a List. For easy retrieval, the ID of the object is it's index in the list. However, that means that the object cannot be removed from the list without everything going to hell, since the list will be re-indexed and thus I won't be able to retrieve the right object from the list by knowing its ID anymore.
I also did not want to use a Map, since that just looks ugly and makes a big part of my code look more complicated than it should.
Thus, I tried to simply use a list, but modify it in order to be able to get an index that won't change as long as the object is alive.
@SuppressWarnings("serial")
public class SafeIndexList<E> extends ArrayList<E> {

    Stack<Integer> safeIndexes = new Stack<Integer>();

    @Override
    public E remove(int index) {
        if (this.size() - 1 == index) return super.remove(index);
        else {
            E e = this.get(index);
            this.set(index, null);
            safeIndexes.push(index);
            return e;
        }
    }

    public int getSafeIndex() {
        return safeIndexes.size() > 0 ? safeIndexes.pop() : size();
    }
}
Is this "good" or just complete, sorry for the wording, bullshit?
Answer: It's not complete BS. Your use-case is somewhat surprising, and your disregard for using a Map is also surprising, but given those, you're left with little option other than a completely custom implementation.... but, let's look at the issues you currently have...
People need to do a round-trip query-before-inserting operation - they need to call getSafeIndex() before setting a new value.
People can't use all the methods available to them. For example, they can't call:
clear() (See code)
add(int, obj) (See code)
..... none of the iterator, or stream-related methods (foreach, etc.) because they will return null values mixed in the wrong places.
size() will lie to them.
... in fact, the only methods that can be used reliably are set(...), get(...), and similar ones.
Extending ArrayList seems to introduce a lot of ways to break the code/data integrity.
I recommend encapsulation instead. In fact, this is a common "debate": Inheritance vs. Composition/Encapsulation (search for those topics on google).
In this case, composition/encapsulation would remove a lot of your issues, and reduce the class to just what you need.
Underlying your class you could still have the ArrayList and Stack, (except Stack class is deprecated in favour of Deque interfaces) but it would look something like:
public class SafeIndexList<E> {

    List<E> data = new ArrayList<>();
    Deque<Integer> safeIndexes = new ArrayDeque<>();

    public E remove(int index) {
        if (index < data.size()) {
            // check double-removes.
            if (!safeIndexes.contains(index)) {
                safeIndexes.push(index);
            }
            // null out the slot, keeping later indexes stable
            return data.set(index, null);
        }
        return null;
    }

    public int add(E value) {
        if (safeIndexes.isEmpty()) {
            data.add(value);
            return data.size() - 1;
        }
        int idx = safeIndexes.pop();
        data.set(idx, value);
        return idx;
    }
}
You can add methods for other features you need, as needed. Note, that having abstracted/encapsulated the underlying data store mechanism, you can easily replace the array/deque with a Map now, and not notice.
Edit: to add in the combined constructor/index problem, a solution like this may help, adding the method:
public int construct(IntFunction<E> factory) {
    if (safeIndexes.isEmpty()) {
        data.add(factory.apply(data.size()));
        return data.size() - 1;
    }
    int idx = safeIndexes.pop();
    data.set(idx, factory.apply(idx));
    return idx;
}
Now, you can call that with something like:
int id = mylist.construct(i -> new MyObject(..., i, ...));
The above constructs the inner object with the index, but "internal" to the list too. Alternatively, you can return the constructed object if that makes more sense....
MyObject obj = mylist.construct(i -> new MyObject(..., i, ...)); | {
"domain": "codereview.stackexchange",
"id": 24475,
"tags": "java, array, collections"
} |
Rosserial_xbee fails to connect xbees | Question:
My XBee works fine with X-CTU in Windows.
When I try to transfer data from an XBee end device with an Arduino to an XBee coordinator connected to the PC using
rosrun rosserial_xbee xbee_network.py /dev/ttyUSB0 1
I am getting following error :
Unable to sync with device; possible link problem or link software version mismatch such as hydro rosserial_python with groovy Arduino
I followed the http://wiki.ros.org/rosserial_xbee/Tutorials/Example%20Network tutorial which works fine for me the first time.
But now it's not working. I didn't change any version after that.
I am using ROS Hydro on Ubuntu 12.04.
Sparkfun Xbee Kit series 1.
Can anyone please help me to solve this problem.
Originally posted by RKV on ROS Answers with karma: 26 on 2016-03-11
Post score: 0
Original comments
Comment by RKV on 2016-03-13:
The device is actually connected to "/dev/ttyUSB0". I even verified it earlier.
I am trying to narrow down the problem to find the root cause.
The problem is not with the XBee, because the XBee can communicate in X-CTU using Windows.
The XBee configuration doesn't have any problem.
Comment by RKV on 2016-03-13:
The XBee end device's RSSI is blinking, which means it can receive data from the coordinator.
But the coordinator's RSSI never turns up, meaning it's not receiving the sync from the end device. So what might be the problem in the end device?
Comment by RKV on 2016-03-13:
Is the problem in the software (serialnode.py) or the hardware? I hope there won't be a problem in the hardware connection, because the same hardware was tested in X-CTU. So what can I do in software?
Answer:
Hi,
I checked the device connection before, as well as now with your command.
Still the same problem.
I am able to check the rostopic, which is published by the XBee as /diagnostics:
header:
seq: 14
stamp:
secs: 1457920159
nsecs: 757045984
frame_id: ''
status:
level: 2
name: rosserial_python
message: no sync with device
hardware_id: ''
values:
-
key: last sync
value: Mon Mar 14 09:49:04 2016
-
key: last sync lost
value: Mon Mar 14 09:49:19 2016
Any idea now? What I think is that the coordinator is able to communicate with the end device board.
It seems the end device is not able to connect with the master. I don't know what the problem is.
Originally posted by RKV with karma: 26 on 2016-03-13
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by RKV on 2016-03-18:
Problem got solved.
Try to re-configure the XBee and re-establish the connection when it's not working.
"domain": "robotics.stackexchange",
"id": 24075,
"tags": "ros"
} |
Property testing of a complete multipartite graph | Question:
Propose and prove an $\epsilon$-test for the following property in the dense graph model: $G=(V,E)$ is a complete multipartite graph. That is, there exists a partition $V=V_1\cup\ldots\cup V_\ell$ such that $uw\in E$ if and only if there are $i\neq j$ such that $u\in V_i$ and $w\in V_j$.
I have been stuck trying to solve this question for a few days. I tried to devise an algorithm along the lines of GGR98 for bipartiteness testing (such as here), where we sample sets $U$ and $S$ of size $\mathrm{poly}(1/\epsilon)$ and try to "self-correct" $S$ based on a small number of "partitions" induced by $U$. In particular, I am not sure how to define "violating edges with respect to $U$" for the above-mentioned property. Any help?
EDIT: The number $\ell$ is not given to the algorithm as input. In other words, the task is to determine if there exists an $\ell$ such that the graph is complete $\ell$-partite.
Answer: The following answer constructs a tester for a graph being complete $\ell$-partite for a fixed value of $\ell$.
Consider the following tester:
Choose $\ell+1$ vertices at random.
Verify that the edges between them are consistent with a complete $\ell$-partite graph, that is, there is a way to color the vertices using $\ell$ colors such that there is an edge between two vertices iff they have different colors.
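Step 2 amounts to checking that non-adjacency is an equivalence relation on the sampled vertices (two vertices lie in the same part exactly when they are non-adjacent). A possible Python sketch, where `adj(u, v)` is an assumed adjacency oracle for the input graph:

```python
from itertools import combinations

def consistent_with_multipartite(sample, adj):
    # A graph is complete multipartite iff non-adjacency is transitive;
    # reject any a, b, c with a,b and b,c non-adjacent but a,c adjacent.
    for u, v, w in combinations(sample, 3):
        for a, b, c in ((u, v, w), (v, u, w), (u, w, v)):
            if not adj(a, b) and not adj(b, c) and adj(a, c):
                return False
    return True
```

On a sample consistent with some complete multipartite coloring this returns True; any violating triple certifies a rejection.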
We claim that for every $\epsilon>0$ there exists $\delta>0$ such that if the test succeeds with probability at least $1-\delta$, then the graph is $\epsilon$-close to complete $\ell$-partite. To show this, we assume that the test succeeds with probability at least $1-\delta$, and show that the graph is $\epsilon(\delta)$-close to complete $\ell$-partite, where $\epsilon(\delta) \to 0$ as $\delta \to 0$. Moreover, the proof is by induction on $\ell$, the case $\ell = 1$ being trivial.
Let $\gamma$ be the probability that $\ell-1$ randomly chosen points form a clique. If $\gamma \leq \sqrt{\delta}$ then the graph passes the test for being complete $(\ell-2)$-partite with probability at least $1-\sqrt{\delta}-\delta$, and we complete the proof by induction. Assume, therefore, that $\gamma \ge \sqrt{\delta}$.
The expected failure probability of the test given that the first $\ell-1$ points form a clique is at most $\delta/\gamma \leq \sqrt{\delta}$. In particular, we can find a clique $x_1,\ldots,x_{\ell-1}$ such that with probability $1-\sqrt{\delta}$ over the choice of $x_\ell,x_{\ell+1}$, the graph induced by $x_1,\ldots,x_{\ell+1}$ is consistent with a complete $\ell$-partite graph.
Define the color $c(x)$ of a point $x \neq x_1,\ldots,x_{\ell-1}$ as follows. If $x$ is connected to all but $x_i$, then $c(x) = i$. If $x$ is connected to all, then $c(x) = \ell$. Otherwise, $c(x) = \bot$. Then with probability $1-\sqrt{\delta}$, the following holds:
Either $c(x_\ell) = c(x_{\ell+1}) \neq \bot$ and $(x_\ell,x_{\ell+1})$ is not an edge,
Or $c(x_\ell) \neq c(x_{\ell+1})$, $c(x_\ell),c(x_{\ell+1}) \neq \bot$, and $(x_\ell,x_{\ell+1})$ is an edge.
This shows that we can partition all but $\sqrt{\delta} n$ of the vertices into $\ell$ sets, such that the induced graph differs from the corresponding complete $\ell$-partite one in an $O(\sqrt{\delta})$-fraction of edges. | {
"domain": "cs.stackexchange",
"id": 18412,
"tags": "graphs, probabilistic-algorithms"
} |
Two 60W-lightbulbs are connected on 220 V AC-voltage - how much electric power is spent by each lightbulb? | Question:
Two 60W-lightbulbs are connected on 220 V AC-voltage - how much electric power is spent by each lightbulb?
There are two scenarios:
In one, the two lightbulbs are connected in series, and in the other, in parallel. The trouble is, no resistance value is given, so I don't know how the electric power can be calculated for each bulb. No amperage is given either.
How do I proceed?
Edit: Thanks to the answer I did figure out the power consumption for each connection.
In series:
$P=\dfrac{U^2}{R} \Rightarrow R=\dfrac{U^2}{P}$
At 220 Volt and 60 W the resistance is $806 \dfrac{2}{3}$ ohm for each bulb. In a serial connection the resistances add up, meaning, the two bulbs together have $1613 \dfrac{1}{3}$ ohm. Repeating the above formula, we get a total power consumption of $30$ watt. So each bulb consumes $15$ watt.
In parallel:
The total resistance in a parallel connection can be calculated as
$R_T=\dfrac{R_1*R_2}{R_1+R_2}$
which is $403 \dfrac{1}{3}$ ohm. Thus, again repeating the formula for power consumption $\big(P=\dfrac{U^2}{R}\big)$, the total power consumption is $120$ watt. Which is $60$ watt for each bulb.
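The arithmetic above, under the same constant-resistance assumption, can be checked with a few lines of Python:

```python
U, P_rated = 220.0, 60.0
R = U**2 / P_rated              # resistance of one bulb, ~806.67 ohm

P_series = U**2 / (2 * R)       # series: resistances add up to 2R
P_parallel = U**2 / (R / 2)     # parallel: combined resistance is R/2

print(R, P_series, P_parallel)  # ~806.67 ohm, 30 W total, 120 W total
```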
Answer: Use this formula:
Watt = (Voltage x Voltage)/Resistance
and hence the resistance of the bulb would be (220x220)/60 = 806.67 ohm
Hope it helps and you can proceed now very easily. | {
"domain": "physics.stackexchange",
"id": 14435,
"tags": "homework-and-exercises, electricity, electric-circuits, power"
} |
Find the error: If $L_x$ and $L_y$ are zero, then $L_z$ is conserved | Question: From Goldstein's Classical Mechanics (2nd ed.), problem 38 of chapter 9 basically says the following:
It's been shown that the Poisson bracket of two constants of the motion is also a constant of the motion (by the Jacobi identity). Applied to the angular momentum of a particle/system, this says that if the components of angular momentum $L_x$ and $L_y$ are conserved, $L_z$ is also conserved because
$$\{L_x,L_y\}=L_z$$
This seems to imply that any system confined to move in a plane automatically has its angular momentum $L_z$ conserved, since $L_x$ and $L_y$ are identically zero. Immediately we can think of systems confined to a plane where $L_z$ isn't conserved (e.g. the angular momentum of the spring of a watch, or that of a plane disk rolling down an incline). What objections can be made to this implication? Does the theorem above require any restrictions?
I can't find any solutions to this online, but here's my guess:
I think the problem is that this theorem relies on $L_x$ and $L_y$ being constants of the motion with respect to a system (certain phase-space with underlying hamiltonian) described by Hamilton's principle, and the cited examples where angular momentum wasn't conserved required constraint equations which would alter the form of the Lagrangian formalism (which the Hamiltonian formalism is based on). The theorem could then be formulated as follows: the Poisson bracket of two constants of the motion, with respect to a system described by the typical Hamilton's principle, is also a constant of the motion (by the Jacobi identity).
What do you think? Is my guess on the right track? If so, how could it be refined? If not, why is my guess invalid?
Answer: When you say that $L_x$ and $L_y$ vanish for a point confined to move in the plane $z=0$, you mean that the solution $\vec{x}=\vec{x}(t)$, $\vec{p}=\vec{p}(t)$ describes a curve in the given plane with tangent vector parallel to that plane. So that, exactly along that curve,
$$L_x(\vec{x}(t),\vec{p}(t))= L_y(\vec{x}(t),\vec{p}(t))=0\quad \forall t \in \mathbb R$$
When you instead compute $\{L_x,L_y\}$, you actually compute derivatives in $x$ and $p$ of the involved functions, regardless of the particular curve you finally use:
$$\{L_x,L_y\} = \sum_{i=1}^3 \frac{\partial L_x}{\partial x^i}
\frac{\partial L_y}{\partial p_i}- \frac{\partial L_y}{\partial x^i}
\frac{\partial L_x}{\partial p_i}\:.$$
You evaluate the derivatives on the given curve just at the end of the computation.
The point is that these derivatives may not vanish on the given curve even if the function $L_x$ and $L_y$ vanish on it.
This fact is general. For instance $F(x,y)= x-y^2$
vanishes if evaluated on the curve $x=t^2$, $y=t$:
$$F(x(t),y(t))= t^2-t^2=0 \quad \forall t \in \mathbb R\:.$$
However $\frac{\partial F}{\partial x}$ does not vanish on that curve:
$$\frac{\partial F}{\partial x}(x(t),y(t))= 1 \neq 0\:.$$
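A quick numerical check that $F$ vanishes along the curve while its partial derivative with respect to $x$ does not (using a central-difference approximation):

```python
F = lambda x, y: x - y**2

def dF_dx(x, y, h=1e-6):
    # central-difference approximation of the partial derivative in x
    return (F(x + h, y) - F(x - h, y)) / (2 * h)

for t in (0.0, 0.5, 2.0):
    assert F(t**2, t) == 0.0          # F vanishes along the curve
    assert abs(dF_dx(t**2, t)) > 0.5  # but dF/dx does not
```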
Also the general statement that if $L_x$ and $L_y$ are conserved then $L_z$ is does not follow so straightforwardly from $$\{L_x,L_y\}=L_z\tag{1}$$ as it may seem at first glance. You need the so-called Jacobi identity of Poisson bracket and the fact that $$\frac{df(\vec{x}(t),\vec{p}(t))}{dt} = \{H,f\}(\vec{x}(t),\vec{p}(t))$$ to prove it. | {
"domain": "physics.stackexchange",
"id": 27801,
"tags": "homework-and-exercises, classical-mechanics, angular-momentum, hamiltonian-formalism, poisson-brackets"
} |
Hamming weight of powers | Question: Given positive integers $b$ and $e$, what is known about the space and time complexity of finding the Hamming weight (number of binary 1s) of $b^e$?
If $e\log b$ bits are available, the number can simply be calculated by standard techniques and the 1s counted. But what techniques are possible when less memory can be used?
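That direct approach is one line of Python, at the cost of materializing all $e\log b$ bits:

```python
def naive_hamming_weight(b, e):
    # Holds b**e explicitly: Theta(e * log b) bits of memory.
    return bin(b ** e).count("1")

print(naive_hamming_weight(3, 5))  # 3^5 = 243 = 0b11110011 -> 6
```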
Answer: This answer expands my comment above.
You can do it with $O(\log e + \log \log b)$ space as follows:
1) First compute $b^e$ in Chinese remainder representation modulo sufficiently many primes.
2) Then use the Chiu-Davida-Litow algorithm to convert the Chinese remainder representation into binary representation. (Informatique Théorique et Applications, Vol 35(3), pages 259-275, 2001)
3) Finally, just count the number of $1$'s.
This is a composition of a finite number of log-space computable functions, which is itself log-space computable. | {
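Steps 1 and 3 are straightforward in Python; the sketch below illustrates the pipeline, but note that its recombination step materializes the full number, so it only stands in for, and is not, the log-space Chiu-Davida-Litow conversion of step 2:

```python
from math import prod

def crt_pow(b, e, primes):
    # Step 1: Chinese remainder representation of b**e (one residue per prime).
    residues = [pow(b, e, p) for p in primes]
    # Recombination (NOT space-efficient; the primes must multiply to
    # more than b**e for the reconstruction to be exact).
    m = prod(primes)
    x = 0
    for r, p in zip(residues, primes):
        mp = m // p
        x = (x + r * mp * pow(mp, -1, p)) % m
    return x

# Step 3: count the 1s.
print(bin(crt_pow(3, 5, [7, 11, 13])).count("1"))  # 243 = 0b11110011 -> 6
```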
"domain": "cstheory.stackexchange",
"id": 1128,
"tags": "ds.algorithms, co.combinatorics, space-bounded, space-time-tradeoff"
} |
Is $L = \{ \langle \langle \ M\ \rangle \rangle \ | \ M \ \text{does not accept}\ 010 \} $ Turing recognizeable? | Question: I'm working on the following problem:
Is the following language Turing recognizable (recursively enumerable)?
$$L = \{ \langle \langle \ M\ \rangle \rangle \ | \ M \ \text{does not accept}\ 010 \} $$
The way I see it: Suppose that a machine $M$ loops forever on $010$. If a $TM$ recognizes $L$, it should accept $M$ in that case. But that means that it should know if $M$ loops forever or not, which is not possible. So, $L$ is not Turing recognizable.
Is my proof correct, and can it be more formal?
Answer: You are presenting an argument which falls short of being a proof. In particular, it is not clear why a Turing machine recognizing $L$ should know whether $M$ loops forever or not; indeed, it is not so clear what do you mean by know in this context.
Here is one way a proof could go. Suppose that $L$ were r.e. The language of Turing machines which do accept $010$ is also r.e. By running both machines in parallel, we can decide whether a given Turing machine accepts $010$, i.e., we could solve the halting problem, which we know is undecidable. Therefore $L$ cannot be r.e. | {
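The "running both machines in parallel" step can be sketched concretely; here `accepts_within(x, s)` and `co_accepts_within(x, s)` are assumed step-bounded simulators (run the machine for `s` steps and report whether it has accepted), which is how dovetailing is usually implemented:

```python
def decide_by_dovetailing(accepts_within, co_accepts_within, x, max_steps=10**6):
    # Interleave two semi-deciders with growing step budgets: exactly one
    # of them eventually halts, so this yields a decision procedure.
    for steps in range(1, max_steps):
        if accepts_within(x, steps):
            return True
        if co_accepts_within(x, steps):
            return False
    raise RuntimeError("step budget exhausted")

# Toy stand-ins: "accepts after 5 steps iff x == 1", "rejects after 2 iff x == 0"
acc = lambda x, s: x == 1 and s >= 5
rej = lambda x, s: x == 0 and s >= 2
print(decide_by_dovetailing(acc, rej, 1), decide_by_dovetailing(acc, rej, 0))  # True False
```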
"domain": "cs.stackexchange",
"id": 13989,
"tags": "turing-machines, undecidability"
} |
RL Invertible Value Function approach - why it prevents rewards from exploding? | Question: Authors of "Recurrent Experience Replay in Destributed RL", page 3, use the function $h$ to prevent rewards from exploding:
$$h(x) = \operatorname{sign}(x)(\sqrt{|x|+1} -1) + \epsilon x$$
where $\epsilon$ is a very small number, for example $0.0001$, and $x$ is the score (aka reward). $h$ re-scales the rewards the network receives.
This is different to a more common "reward-clipping" technique.
Question 1 (most important):
The Q-value formula looks like this:
$$y = h \bigg( nstepReward + \gamma^n \cdot h^{-1}(Q(s_{t+n}, a^*)) \bigg)$$
Why do the authors pass the Q-scores through the inverse (exponential) function and then through the squashing function $h$? The effect of this is that the Q-target-scores remain unchanged, because the composition of a function and its inverse is the identity. So, to me, $Q$ is still a big source of the "exploding rewards" threat, because it's left untouched. Why do we keep it untouched?
It looks like the whole formula merely squashes $nStepReward$, but why not just squash everything, including $Q$, to prevent exploding rewards? In other words, why not have:
$$y = h \bigg( nstepReward + \gamma^n Q(s_{t+n}, a^*) \bigg)$$
Question 2:
Why use $\epsilon x$ in the formula for $h$?
Answer: I got a response from one of the authors:
A bit more intuitively, think of the neural network being asked to
produce a scaled variant of the q-function, namely $P(s,a) = h(Q(s,a))$
The Bellman equation reads:
$$Q(s,a) = R + \gamma * Q(s', a')$$
and by applying function $h$ to both sides you get, equivalently,
$$P(s,a) = h(Q(s,a)) $$
$$= h ( R + \gamma * Q(s', a') ) $$
$$= h ( R + \gamma * h^{-1}( P (s', a') ) )$$
So hopefully this makes it clear that the target for the network's
output at (s,a) should be $$h ( R + \gamma * h^{-1} ( P (s', a') ) )$$ where
$P (s', a')$ is the same network's output at the next state.
From my original post: "these Q is a still big source of "exploding rewards" threat ... these destination Q are free to grow as large as they wish."
With the author's help I realised that they are actually not "entirely free to grow" in the way I thought originally:
We should not forget that these "destination scores" were downscaled themselves - as the yellow-quote formula tells, they will be $h(P(s',a'))$ themselves.
So we need to "unpack" them before we can add the currently observed reward. Thus, we won't have exploding rewards in the long run, and we also don't have
to truncate any reward-ratios by using old-school reward clipping... That's a very clever way to do it! :)
The yellow-quote formula tells us to:
quickly restore (un-shrink) the downscaled destination $h(P)$,
apply discount and reward,
then downscale it back. It now can be the "downscaled destination", for any states earlier in time. Stepping back in time, we begin from 1) and process earlier state in the same fashion.
The goal is to make the network learn to predict these "downscaled destinations", and it's a lot nicer - because these destination scores are smaller due to being downscaled.
There will be no huge gradient entering the network, since the differences are a lot smaller.
P.S.
Don't forget that $h^{-1}$ is NOT $\frac{1}{h}$
Instead, google for what an inverse function is. | {
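For reference, a numerical sketch of $h$, its closed-form inverse (obtained by solving the quadratic in $\sqrt{|x|+1}$), and the target computation; the helper names are mine:

```python
import math

EPS = 1e-2  # the paper's small epsilon

def h(x, eps=EPS):
    return math.copysign(math.sqrt(abs(x) + 1.0) - 1.0, x) + eps * x

def h_inv(x, eps=EPS):
    # Solve eps*u**2 + u - (|x| + 1 + eps) = 0 for u = sqrt(|orig| + 1).
    u = (math.sqrt(1.0 + 4.0 * eps * (abs(x) + 1.0 + eps)) - 1.0) / (2.0 * eps)
    return math.copysign(u * u - 1.0, x)

# h and h_inv really are inverses of each other ...
for q in (-50.0, -1.0, 0.0, 3.0, 1000.0):
    assert abs(h_inv(h(q)) - q) < 1e-6 * max(1.0, abs(q))

# ... so the target y = h(r + gamma^n * h_inv(P(s', a*))) un-squashes the
# stored bootstrap value, adds the reward, and re-squashes the sum.
def target(n_step_reward, gamma_n, p_next):
    return h(n_step_reward + gamma_n * h_inv(p_next))
```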
"domain": "datascience.stackexchange",
"id": 5396,
"tags": "reinforcement-learning"
} |
Electrode potentials at interfaces? | Question: My questions relates to the fundamental concept of electrochemistry, more specifically the electrode potentials.
1) First, why is there a potential difference at the interface of two phases? Considering the simplest case of a metal rod dipped in a solution of its ions, why should there be a potential difference? One of the many ways of understanding this, provided by this site, is in terms of equilibrium between the metal atoms and ions. But then, why is the presence of the other phase (ions, for the metal piece) required to effect this equilibrium? For example, for the equilibrium of ammonia with nitrogen/hydrogen, even if we take pure ammonia, the equilibrium is set up according to the surrounding conditions. Next, can this equilibrium be described in terms of the equation $\Delta G=-RT\ln(K)$? Lastly, if this is an equilibrium, then there must also be an equilibrium between the metal and the hydrogen or hydroxyl ions in solution, as these might also take up electrons from the metal?
2) When we measure the difference in electrode potentials of two different half cells, we are just measuring the potential difference between the two metal electrodes. How can we get an idea of the potential difference between each metal piece and its solution, since the absolute potentials of the two solutions might also differ? That is, if the two solutions are at different electrostatic potentials (though not measurable), the potential difference between the two electrodes will not be the difference of their electrode potentials (which are the potential differences between the electrode and the electrolyte in each case). So, in order for the potential difference of the electrodes to equal the difference in electrode potentials of the two half-cells, the solutions should be at the same potential?
Answer: There are a bunch of of sub parts to these questions; let me see if I can tease some of them out.
Why does a potential difference occur at an interface?
The easiest way to answer this question is through thermodynamics. If we consider the situation of a reactive metal rod (e.g. magnesium) being exposed to water, the reaction $$\ce{Mg -> Mg^{2+} + 2 e-}$$ is favorable. In other words, the $\Delta G$ for this reaction will be less than zero. I've done nothing more than summarize the website you reference in your question, which does a nice job at describing what happens when a piece of metal comes in contact with water, so I don't feel the need to repeat that work. We then get to your next question:
Why is a second phase required to generate a potential difference?
If we have a universe that consists just of magnesium, then there would be no place for the above reaction to occur. In this trivialized case, there is no driving force to make magnesium give up its electrons. That doesn't mean it would not happen, only that any dynamic equilibrium that is established would highly favor the pure metal. The meat of question one, then, is (may be?):
Why doesn't a potential difference between a metal and gas occur?
Actually it does. There's an interesting paper (which is behind a paywall if you are not at a university that subscribes to this journal) that describes the measurement of potentials across metal/gas interfaces. The challenge in studying metal/gas potentials is the ability to control this potential externally. With the metal/liquid scenario, an electrochemist (or budding electrochemist) can add another electrode to the solution and apply a potential between the electrodes. It's much harder to do the same thing with a gas. (Well, technically it's not harder to connect two electrodes to one another and hold them in air, but the redox work functions, resistance and capacitance issues that arise in gas-phase electrochemistry make this a really challenging feat.)
The paper I reference above uses chemisorbed species that have active IR or Raman vibration modes to measure the surface potential using spectroscopy and something called Stark tuning (not the best reference, but it's also not behind a paywall) which states that the vibrational frequency of an harmonic oscillator is proportional to an applied electric field. Not many metal/gas interfaces have been explored using this method, but those that have display surface potentials fairly close to 0 vs. SCE or 5 eV vs. vacuum.
What about the rest of the stuff in solution?
Back to the metal/liquid scenario, the last part of your first question asks about other ions in solution. They most definitely play a role in defining the potential of a metal in solution. In order to accurately predict electrode potentials, one must know concentrations and possible reactions (including states of matter) for all species present.
What about question 2?
I'm afraid question 2 might be based on the false premise that we can measure the potential of an electrode in the absence of anything else (liquid, gas, ions or molecules). When electrochemists refer to a 'half cell', they have in mind both the reduced (in this case, metal electrode) and oxidized (metal ions) forms of a redox couple contained within the same system. (Think about a beaker with a copper wire placed in it, where the beaker is filled with a solution containing 0.1 M $\ce{CuCl2}$. That half-cell or system has a given potential based upon (primarily) the standard reduction potential of $\ce{Cu^{2+}}$ to $\ce{Cu}$ and the concentration of $\ce{CuCl2}$.) If I have misread your second question, please edit and I will update my answer as needed.
"domain": "chemistry.stackexchange",
"id": 567,
"tags": "electrochemistry, thermodynamics, redox, theoretical-chemistry, electrons"
} |
Uncertainty Principle on System of particles | Question: I am new to Quantum Mechanics. I read the uncertainty principle - it says there are pairs of physical quantities which can't both be determined with certainty for a particle.
My question is: does the same apply to a system of particles, for example the nucleus? Can we determine both the position and momentum of a nucleus (containing more than one proton) with certainty?
Any help would be appreciated.
Answer: The uncertainty principle applies to any quantum system, and is way more general than just single particle examples. It is defined for any pair of operators (physical quantities) $A$ and $B$, with the system in a state $|\psi\rangle$
$$ \Delta A \; \Delta B \geq \frac{1}{2} \left| \langle \psi| [A, B] |\psi \rangle \right| $$
Note: The constant factor ($\frac{1}{2}$ here) varies in different derivations, depending on how exactly you define $\Delta A$ and $\Delta B$, but the essence is the same.
In the case of simple quantum systems, you could take $A$ to be the position operator and $B$ to be the momentum operator. In your case, it seems like you would like to consider the whole nucleus as an effective particle and apply these operators on its wavefunction/state. Sure, you could do that, and you'll get an uncertainty relation from that.
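As a quick numerical illustration (my addition, not part of the original answer), the relation can be checked for a concrete pair of operators — Pauli matrices acting on a spin-1/2 state, with $\hbar = 1$ and the bound written as $\Delta A\,\Delta B \geq \frac{1}{2}|\langle[A,B]\rangle|$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 0], dtype=complex)        # spin-up state

def expval(op):
    return (psi.conj() @ op @ psi).real

def spread(op):
    # Standard deviation of the operator in the state psi
    return np.sqrt(expval(op @ op) - expval(op) ** 2)

comm = sx @ sy - sy @ sx                     # [sx, sy] = 2i * sz
lhs = spread(sx) * spread(sy)
rhs = 0.5 * abs(psi.conj() @ comm @ psi)
print(lhs, rhs)                              # Robertson bound: lhs >= rhs
```

For this particular state the bound is saturated (equality), which is a nice sanity check that the relation holds for systems other than position/momentum.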
"domain": "physics.stackexchange",
"id": 7829,
"tags": "quantum-mechanics, heisenberg-uncertainty-principle, particle-physics, nuclear-physics"
} |
How can I optimize this hash string function? | Question: I'm writing a function which gets two strings and a number k as an input, and outputs a string with length k which is present in both strings. I want to do this using the function hash_sequence().
I need my code to be as efficient as possible because my inputs are very large strings.
def hash_sequence(string, k):
    dictionary={}
    for i in range(len(string)-(k-1)):
        triplet = string[i:i+k]
        dictionary.setdefault(triplet,[]).append(i)
    return dictionary

def intersects(string1, string2, k):
    itr=min(len(hash_sequence(string1,k)),len(hash_sequence(string2,k)))
    for i in range(itr):
        if list(hash_sequence(string1,k))[i] in hash_sequence(string2,k):
            return list(hash_sequence(string1,k))[i]
    return None
Answer: Your hash_sequence() function is quite good. The only thing I would suggest about that function is to rename your triplet variable to something more suitable, like sequence. Currently, I expect triplet to contain strings of length 3. A name like sequence is more general and is a better representation of what will be stored in it.
Your intersects() function is where the vast improvement can be found. Each iteration of the for loop you are re-running the hash_sequence() function. If your strings are really long and k really small (i.e. 1), hash_sequence() would have to scan through the entirety of both strings each iteration of the loop!
Instead, we only have to hash one of the strings! Before the loop, save the returned dict to a variable (this way the hash_sequence function is only run once):
def intersects(string1, string2, k):
    dictionary = hash_sequence(string1, k)
Next, instead of iterating over the length of the dicts we can iterate over the keys in the dict, then we can check if that key is 'in' the other string:
for key in dictionary:
    # Check if the key (string of length k) is in the other string
    if key in string_two:
        return [key]
Here is my full version of your code:
def hash_sequence(string, length):
    dictionary = {}
    for i in range(len(string) - (length-1)):
        sequence = string[i:i+length]
        dictionary.setdefault(sequence, []).append(i)
    return dictionary

def intersects(string_one, string_two, length):
    dictionary = hash_sequence(string_one, length)
    for key in dictionary:
        if key in string_two:
            return [key]
    return None
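For example, exercising the refactored functions on a couple of toy inputs (the definitions are reproduced here only so the snippet runs standalone):

```python
def hash_sequence(string, length):
    """Map each substring of the given length to the indices where it starts."""
    dictionary = {}
    for i in range(len(string) - (length - 1)):
        sequence = string[i:i + length]
        dictionary.setdefault(sequence, []).append(i)
    return dictionary

def intersects(string_one, string_two, length):
    """Return a length-k substring shared by both strings, or None."""
    dictionary = hash_sequence(string_one, length)
    for key in dictionary:
        if key in string_two:
            return [key]
    return None

print(intersects("ACGTAC", "TTGTAC", 3))   # -> ['GTA']
print(intersects("AAAA", "CCCC", 2))       # -> None
```

Note that it returns the first shared key in insertion order ("GTA" here, even though "TAC" is also shared), which matches the original early-return behavior.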
As a final note, take a look at PEP8, the official Python style conventions. It will help make your code more Pythonic. | {
"domain": "codereview.stackexchange",
"id": 7557,
"tags": "python, strings, python-3.x, hash-map"
} |
Bertrand's ballot theorem | Question: I want to understand the dynamic programming equation of https://en.wikipedia.org/wiki/Bertrand%27s_ballot_theorem theorem.
it is this
If $i$ people voted for A and $j$ people voted for B, then dp[i][j] counts the number of ways the voting can happen.
dp[i][j] = dp[i][j-1] + dp[i-1][j].
basically, I want to find the number of ways candidate A is in the winning position throughout.
Can anyone explain the logic behind the dp equation ?
I think it works like this.
We have a sequence of A's and B's in which A wins throughout. Now we add one more A to that sequence, or add one more B to it.
Answer: Let $N(p,q)$ be the number of sequences containing $p$ many A's and $q$ many B's, such that in every non-empty prefix, the number of A's strictly exceeds the number of B's. Clearly $N(p,q) = 0$ if $q \geq p$ and $q > 0$. When $p = q = 0$ there are no non-empty prefixes, and so $N(p,q) = 1$.
Suppose therefore that $p > q$, and consider any sequence satisfying the condition. If we remove the last element, we still get a sequence satisfying the condition. Conversely, whatever element we add, the resulting sequence will satisfy the condition, since $p > q$. We conclude that
$$
N(p,q) = \begin{cases}
1 & \text{ if } p=q=0, \\
0 & \text{ if } q \geq p \text{ and } q > 0, \\
N(p-1,q) + N(p,q-1) & \text{ if } p > q > 0, \\
N(p-1,q) & \text{ if } p > q = 0.
\end{cases}
$$ | {
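The recurrence translates directly into a memoized function. As a sketch (my addition; the closed form $(p-q)/(p+q)\binom{p+q}{p}$ from the ballot theorem is used only as a cross-check):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def N(p, q):
    """Sequences of p A's and q B's where every non-empty prefix has strictly more A's."""
    if p == q == 0:
        return 1
    if q >= p and q > 0:
        return 0
    if q == 0:
        return N(p - 1, 0)
    return N(p - 1, q) + N(p, q - 1)   # last element was A or B

# Cross-check against the ballot theorem's closed form: (p-q)/(p+q) * C(p+q, p)
p, q = 7, 3
print(N(p, q), (p - q) * comb(p + q, p) // (p + q))   # -> 48 48
```

The two numbers agree, confirming that the recurrence above counts exactly the sequences where A leads throughout.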
"domain": "cs.stackexchange",
"id": 13368,
"tags": "algorithms, dynamic-programming, probability-theory, number-theory, game-theory"
} |
Germ cells vs. gametes | Question: Naively, I thought that germ cells are diploid (in diploid species like human/mouse at least). Then, germ cells undergo meiosis and become haploid. I thought this was the critical change that defined these derived cells being named gametes (spermatozoon/ovum) rather than germ cells. However, I read that while XX mammalian germ cells undergo meiosis during embryonic development, XY germ cells don't go through meiosis until well after birth (puberty). This would mean that female gametes arise over a decade before male ones in humans. Is there an accepted definition of when germ cells differentiate into gametes and the terminology?
Answer: Your observation is right.
In females (having XX mammalian germ cells), oogenesis starts with the process of developing primary oocytes, which occurs via the transformation of oogonia into primary oocytes, a process called oocytogenesis. Oocytogenesis is complete either before or shortly after birth. (1)
The "germ cells" are called oogonia. The ovum is not a direct product of meiosis I: the primary oocyte forms a secondary oocyte in meiosis I, which undergoes meiosis II after fertilization to give an ovum.
The ootid is the immature ovum formed shortly after fertilization, but before complete maturation into an ovum. Thus, the time spent as an ootid is measured in minutes. (2)
(In oogenesis, the ootid doesn't really have any significance in itself, since it is very similar to the ovum. It matures into an ovum.)
This diagram sums it up very well:
Oogenesis in Eukaryotic Cells. (A) oogonium where the mitotic division occurs (B) differentiation and meiosis I begins (C) primary oocyte (D) meiosis I is completed and meiosis II begins (E) secondary oocyte (F) first polar body (G) ovulation must occur and the presence of the sperm penetration (fertilization) induces meiosis II to completion (H) ovum (I) second polar body (3)
It is commonly believed that, when oocytogenesis is complete, no additional primary oocytes are created, in contrast to the male process of spermatogenesis, where gametocytes are continuously created.
In males, again, as you say, Spermatogenesis is the process by which haploid spermatozoa develop from germ cells in the seminiferous tubules of the testis. The germ cells here are the spermatogonia. But here, initiation of spermatogenesis occurs at puberty. So, males start producing sperm when they reach puberty, which is usually from 10-16 years old.(4)
(The process is almost similar to oogenesis, I would recommend you to start with the wikipedia page if you want to understand this in detail.)
Take away points:
The first haploid cell formed from a diploid germ cell is not an ovum/spermatozoon.
The ovum is formed after fertilization, whereas sperm are formed at puberty.
Oogenesis starts after birth, whereas spermatogenesis starts at puberty.
1- https://en.wikipedia.org/wiki/Oogenesis
2- https://en.wikipedia.org/wiki/Immature_ovum#Ootid
3-https://upload.wikimedia.org/wikipedia/commons/1/1a/Oogenesis.svg
4- https://en.wikipedia.org/wiki/Spermatogenesis | {
"domain": "biology.stackexchange",
"id": 10810,
"tags": "reproduction, development, sex, meiosis, definitions"
} |
Preferred fluid flow | Question: As I’ve read in the book “Fluid Dynamics” by Yunus Cengel, The Pressure Drag decreases and the Skin Friction Drag increases when fluid flow over body transitions from laminar to turbulent thus, resulting in overall decrease in Drag Coefficient.
The pressure drag is reduced during the transition, implying that the normal pressure force on the body is reduced; as the lift force is mostly provided by the normal pressure force, this implies a decrease in lift with the transition from laminar to turbulent flow.
So which flow is preferred in the case of aeroplanes?
Answer: Laminar flow is clearly preferred. But a turbulent one has its uses, too.
Not only will a laminar boundary layer result in much less friction drag (the velocity gradient at the wall is much less steep than with turbulent flow), but for the same reason it will extract much less energy from the flow, so its ability to endure the pressure rise later is better preserved. However, once the flow encounters this pressure rise, the boundary layer becomes unstable and will transition into a turbulent one if that had not already happened before. Because the energy transfer from the far-wall to the near-wall layers is much greater in a turbulent boundary layer, it can sustain much steeper pressure rises, allowing a higher angle of attack to be reached without separation.
Much depends on the speed and physical size of the craft: Even though a negative pressure gradient is stabilizing the boundary layer, large and fast airplanes have rarely any laminar flow left while slow and small gliders sport it on most of their wing's lower and half of its upper side. | {
"domain": "physics.stackexchange",
"id": 48422,
"tags": "fluid-dynamics, aerodynamics"
} |
Why is the dephasing time T2* gaussian? | Question: In MRI dephasing is characterized by $T_2^*$.
$T_2^*$ is measured by performing a Ramsey experiment and is defined as the 1/e decay time of the measured oscillation.
I have read, in
Universal dynamical decoupling of a single solid-state spin from a spin bath. G. de Lange et al. Science 330, p.60 (2010), arXiv:1008.2119
that when there is a slow spin bath the decay will be a Gaussian but cannot find any good sources describing why.
Could someone explain to me why the decay has a Gaussian ($e^{-(t/\tau)^2}$) profile?
Answer: I have found some articles that might be of interest to you:
An analytical semiclassical treatment (doi)
A simulation of the spin decoherence due to a randomly fluctuating spin bath (doi)
A comprehensive review (doi)
What I've managed to understand, from briefly reading these papers, is that the gaussian decoherence is a result of the broadening of the energy of the spins from inhomogeneity in the magnetic field (due to thermal fluctuations of the spin bath).
This broadening causes the spins to go out of sync, such that their sum decays as a Gaussian.
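This mechanism is easy to reproduce numerically (my sketch, in arbitrary units): averaging the precession of many spins whose detunings are drawn from a static Gaussian distribution of width $\sigma$ gives the envelope $e^{-\sigma^2 t^2/2}$, i.e. $e^{-(t/\tau)^2}$ with $\tau = \sqrt{2}/\sigma$:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0                        # width of the static Gaussian detuning distribution
detunings = rng.normal(0.0, sigma, 100_000)

t = np.linspace(0.0, 2.0, 50)
# Free-induction signal: average over many spins precessing at fixed random detunings
signal = np.cos(np.outer(t, detunings)).mean(axis=1)
analytic = np.exp(-(sigma * t) ** 2 / 2)    # e^{-(t/tau)^2}, tau = sqrt(2)/sigma
print(float(np.max(np.abs(signal - analytic))))   # small Monte Carlo residual
```

The key assumption is that the bath is slow (static on the timescale of one Ramsey sequence), so each spin keeps a fixed random detuning; a fast bath would instead give the familiar exponential decay.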
You may also find this paper (doi) helpful. | {
"domain": "physics.stackexchange",
"id": 15852,
"tags": "decoherence"
} |
How to Calculate Frequency "X values" from a FFT/DFT Transformation | Question: First of all I am totally new to the DSP field and I have no background in it whatsoever, but my work in biology has led me to data that would greatly benefit from DSP. Any answers devoid of specific DSP math notation, verbage or symbols would be greatly appreciated.
My question is: how do I calculate the frequency values or "X axis" values that go with the DFT peaks' "Y values" that were output from a DSP program? I have used a program that output the following DFT and I can also access/print out the actual "Y values"/peak heights themselves, but not the "X values"/frequency values:
I have heard it mentioned that the sampling frequency/sampling rate and the number of data points that make up the above graph/plot are the only necessary ingredients needed to calculate the "X values" that go with the "Y values" (peak heights) but I don't know what this math equation/formula is. I have also heard mention of something called "frequency bins" but I am not sure if that has any relevance to my question on this post (since the word "bin" reminds me of the bins mentioned in statistics for histograms, but those bins are ranges of values, not the discrete "X values" I am asking about here). Any help/direction or hints would be greatly appreciated.
LM
Answer: What you have heard about the sampling frequency/rate and the number of samples/data points is correct. When you take $n$ samples in the time-domain representation of the signal and "Fourier Transform" them (I use the term loosely without getting into details about DFT, DTFT, FFT etc.), you end up with an equal number of samples ($n$, that is). Those samples now represent the "amplitude" (I also use this term quite loosely) of some frequencies.
Your data after the transformation to the frequency-domain will now contain all the frequency information from $0 Hz$ (DC that is) up to your sampling frequency/rate, which I will denote with $f_{s}$. The spacing of the frequencies on the $x$-axis is linear so the whole spectrum is divided into equal intervals. This interval, denoted here as $\Delta f$, is given by
$$\Delta f = \frac{f_{s}}{N}, \qquad f_{n} = n \, \Delta f, ~~~~~~~~~~~~ n = 0, 1, 2, \ldots, N - 1 \tag{1}$$
where $N$ represents the number of samples used.
Bins is indeed the term (or at least one of the terms) used in this case. So, each interval is denoted as one bin. It has indeed direct connection to the term as used in statistics when creating histograms because in each bin is included the energy in the band
$$\left[n \Delta f - \frac{\Delta f}{2}, n \Delta f + \frac{\Delta f}{2}\right], ~~~~~~~~~~~~ n = 1, 2, \ldots, N - 1 \tag{2}$$
where for $n = 0$ the bin corresponds to DC and is the mean of all the values.
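In code (a sketch with a hypothetical sampling rate; NumPy's fftfreq helper computes the same axis, so it serves as a cross-check):

```python
import numpy as np

fs = 1000.0   # hypothetical sampling rate, Hz
N = 500       # number of samples
freqs = np.arange(N) * fs / N   # the "X value" of bin n is n * fs / N

# Same axis as NumPy's helper (first half shown; the rest is the mirrored part)
assert np.allclose(freqs[:N // 2], np.fft.fftfreq(N, d=1 / fs)[:N // 2])

# A 50 Hz tone should peak in bin n = 50 / (fs / N) = 25
t = np.arange(N) / fs
spectrum = np.abs(np.fft.fft(np.sin(2 * np.pi * 50.0 * t)))
print(freqs[np.argmax(spectrum[:N // 2])])   # -> 50.0
```

So to label your plot you only need fs and N: multiply each bin index by fs/N.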
One comment to make here is that, in case of a real-valued signal/function, the magnitude spectrum will be an even function (the second half of the data is mirrored with respect to the y-axis) and the phase information an odd function (the second half of the data is mirrored with respect to the y-axis and negated with respect to the x-axis). Thus, for analysis, one could use only the first half of the data (or the second half if the first half represents negative frequencies, but this is implementation dependent).
"domain": "dsp.stackexchange",
"id": 11282,
"tags": "fft, sampling, frequency"
} |
Filtering Arrays by values in a specific Column | Question: The following function, as evidenced by its name KeepOrRemoveArrayRowsWhereComparisonIsTrue is probably trying to do too much.
How can I better structure/refactor/split it to be cleaner whilst retaining the original intent: "Here's an array, filter it based on the values in this column"?
Public Function KeepOrRemoveArrayRowsWhereComparisonIsTrue(ByRef sourceArray As Variant, ByVal colIndex As Long, ByVal operator As ComparisonOperator, ByVal comparisonValue As Variant, ByVal hasHeaders As Boolean, ByVal keepOrRemoveOnTrue As KeepOrRemove, Optional ByRef arrayOfRemovedRows As Variant) As Variant
    Dim LB1 As Long, UB1 As Long
    AssignArrayBounds sourceArray, LB1, UB1
    Dim rowsToBeRemoved As Variant, removeCounter As Long
    rowsToBeRemoved = Array()
    ReDim rowsToBeRemoved(1 To 1)
    Dim ix As Long, startRow As Long
    If hasHeaders Then startRow = LB1 + 1 Else startRow = LB1
    Dim sourceValue As Variant
    Select Case keepOrRemoveOnTrue
        Case KeepOrRemove.keep
            removeCounter = 0
            For ix = startRow To UB1
                sourceValue = sourceArray(ix, colIndex)
                If IsNull(sourceValue) Then sourceValue = 0
                If IsNull(comparisonValue) Then comparisonValue = 0
                If Not ComparisonIsTrue(sourceValue, operator, comparisonValue) Then
                    removeCounter = removeCounter + 1
                    ReDim Preserve rowsToBeRemoved(1 To removeCounter)
                    rowsToBeRemoved(removeCounter) = ix
                End If
            Next ix
        Case KeepOrRemove.Remove
            removeCounter = 0
            For ix = startRow To UB1
                sourceValue = sourceArray(ix, colIndex)
                If IsNull(sourceValue) Then sourceValue = 0
                If IsNull(comparisonValue) Then comparisonValue = 0
                If ComparisonIsTrue(sourceValue, operator, comparisonValue) Then
                    removeCounter = removeCounter + 1
                    ReDim Preserve rowsToBeRemoved(1 To removeCounter)
                    rowsToBeRemoved(removeCounter) = ix
                End If
            Next ix
    End Select
    sourceArray = Remove2DArrayRows(sourceArray, rowsToBeRemoved, arrayOfRemovedRows)
    KeepOrRemoveArrayRowsWhereComparisonIsTrue = sourceArray
End Function
Answer: As mentioned in a comment (and in chat) already, your method does too many things at once
Do one thing and do that well. Any method that takes more than three arguments stinks to the heavens. At two arguments you should already be thinking whether you actually need them all.
Let's just look at the name for a second: KeepOrRemoveArrayRowsWhereComparisonIsTrue ...
Why is there an Or inside?
Why are you talking about the Comparison?
Why are you talking about ArrayRows?
The latter two are easily fixed. Remove that unneeded fluff:
KeepOrRemoveWhere( ... )
wow.. that's... significantly shorter. It also opens the door for the next change. Split the method at it's responsibilities.
Flags are dangerous, because they're magical and they allow you to pack state-machines into arguments where they don't belong.
Instead of passing in a boolean telling you whether you remove or keep, and which becomes a lie as soon as you change the variable name, you should separate these two methods into one responsible for removing, and one for keeping.
This is often known as Remove and Retain (though Keep works just as well).
Now we have the following signatures:
Public Function KeepWhere(ByRef sourceArray As Variant, ByVal colIndex As Long, ByVal operator As ComparisonOperator, ByVal comparisonValue As Variant, ByVal hasHeaders As Boolean, Optional ByRef arrayOfRemovedRows As Variant) As Variant
and
Public Function RemoveWhere(ByRef sourceArray As Variant, ByVal colIndex As Long, ByVal operator As ComparisonOperator, ByVal comparisonValue As Variant, ByVal hasHeaders As Boolean, Optional ByRef arrayOfRemovedRows As Variant) As Variant
now we include the changes to your ComparisonOperator construction from here, to replace operator and comparisonValue by a single predicate.
The last thing that bugs me is hasHeaders. This is something the calling code should've taken care of by skipping the first row.
I daresay that this is not something this function should take care of. It's yet another responsibility, and one that shouldn't be handed down the callchain, but processed early.
We end up with:
Public Function KeepWhere(ByRef sourceArray As Variant, ByVal colIndex As Long, ByVal predicate As Predicate, Optional ByRef arrayOfRemovedRows As Variant) As Variant
Additionally (and lastly) I'm more used to returning the removed elements from such a function (since you still have a reference to the sourceArray anyways). As such the last parameter falls away and we're left with the following implementation
Public Function KeepWhere(ByRef sourceArray As Variant, ByVal colIndex As Long, ByVal predicate As Predicate) As Variant
    Dim LB1 As Long, UB1 As Long
    AssignArrayBounds sourceArray, LB1, UB1
    Dim rowsToBeRemoved As Variant, removeCounter As Long
    rowsToBeRemoved = Array()
    ReDim rowsToBeRemoved(1 To 1)
    Dim sourceValue As Variant
    Dim removedRows As Variant
    Dim ix As Long
    removeCounter = 0
    For ix = LB1 To UB1
        sourceValue = sourceArray(ix, colIndex)
        If IsNull(sourceValue) Then sourceValue = 0
        If Not predicate.Test(sourceValue) Then
            removeCounter = removeCounter + 1
            ReDim Preserve rowsToBeRemoved(1 To removeCounter)
            rowsToBeRemoved(removeCounter) = ix
        End If
    Next ix
    sourceArray = Remove2DArrayRows(sourceArray, rowsToBeRemoved, removedRows)
    KeepWhere = removedRows
End Function
and similarly for RemoveWhere...
It may be interesting to talk about a way to "Mark" things for rows. then you could actually make this even cleaner by using some flow like the following stub code:
Public Function KeepWhere (sourceArray, colIndex, predicate)
rowsToBeRemoved = MarkRows (sourceArray, colIndex, predicate.Invert)
KeepWhere = Remove2DArrayRows (sourceArray, rowsToBeRemoved)
End Function | {
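The Keep/Remove split and the predicate idea are language-independent; here is the same shape sketched in Python rather than VBA, purely for brevity (the names are mine, not the author's):

```python
def keep_where(rows, col, predicate):
    """Split rows into (kept, removed): kept rows satisfy the predicate on column `col`."""
    kept = [row for row in rows if predicate(row[col])]
    removed = [row for row in rows if not predicate(row[col])]
    return kept, removed

def remove_where(rows, col, predicate):
    """The mirror operation falls out of keep_where by swapping the two halves."""
    matching, rest = keep_where(rows, col, predicate)
    return rest, matching

rows = [("a", 3), ("b", 10), ("c", 7)]
kept, removed = keep_where(rows, 1, lambda v: v >= 5)
print(kept)      # -> [('b', 10), ('c', 7)]
print(removed)   # -> [('a', 3)]
```

Note how, once the predicate is a first-class argument, RemoveWhere needs no loop of its own — exactly the point the review is making.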
"domain": "codereview.stackexchange",
"id": 18957,
"tags": "vba, excel"
} |
Echo cancelling using autocorrelation function | Question: I was given a problem, but I couldn't solve. I did some researches but I still didn't figure it out. Here is the problem:
An audio signal $ s(t) $ generated by a speaker reflects off a wall with reflection coefficient $ \mu~(\mu < 1) $. The signal $ x(t) $ recorded by a microphone close to the speaker and far from the wall, after sampling, is given by:
$ x(t) = s(t) + \mu s(t - k) $
Where $ k $ is the delay given in samples due to the echo.
So, my goal here is to estimate $ \mu $ and $ k $, observing the autocorrelation function of $ x(t) $, $ r_{xx}(l) $, that can be writen in function of the autocorrelation function of $ s(t) $, $ r_{ss}(l) $. I was able to compute this autocorrelation function and I got:
$ r_{xx}(l) = (1 + \mu^{2})r_{ss}(l) + \mu r_{ss}(l-k) + \mu r_{ss}(l+k) $
So it obviously has three peaks: at $ l = 0 $, $ l = k $ and $ l = -k $.
Assuming that $ r_{ss}(l) = 0 $ for $ |l| > k/10 $, I think that I can obtain the value of $k$ by observing the side peaks, and the value of $ \mu $ by observing the central peak, but I wasn't able to formalize it mathematically. Could anyone help me do so?
Thanks in advance.
Answer: Haven't you already got it there?
$$
\hat{k} = \arg \max_{l > \frac{k}{10}} r_{xx}(l)
$$
and
$$
\hat{\mu} = \sqrt{\frac{r_{xx}(0) }{ r_{ss}(0) } - 1}
$$
The R code below outputs the figure and:
"k Estimate: 613 vs 613 mu Estimate : 0.747619585689531 vs 0.768768"
R Code
# 26617
T <- 10000
mu <- 0.768768
k <- 613
s <- filter(runif(T,-1,1), rep(10/k,k/10), circular = TRUE)
x <- s + mu*c(rep(0,k),s[1:(T-k)])
par(mfrow=c(2,1))
r_xx = acf(x, lag.max=1000, type="covariance")
min_idx <- floor(k/10)
mx <- which.max(r_xx$acf[(min_idx+1):10000])
khat <- min_idx+mx-1 # R indices start from 1, not 0
points(min_idx+mx,r_xx$acf[khat], col="red", pch=19)
r_ss = acf(s, lag.max=1000, type="covariance")
muhat <- sqrt( r_xx$acf[1] / r_ss$acf[1] - 1 )
print(paste("k Estimate:" , khat, " vs " , k, " mu Estimate : ", muhat, " vs ", mu )) | {
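For readers more comfortable with Python, the same two estimators can be sketched there as well (my translation of the idea, with a short moving-average filter standing in for a source whose autocorrelation vanishes beyond $k/10$):

```python
import numpy as np

rng = np.random.default_rng(0)
T, mu, k = 20000, 0.7, 40
# Band-limited source so that r_ss(l) ~ 0 for |l| > k/10 = 4
s = np.convolve(rng.uniform(-1, 1, T), np.ones(4) / 4, mode="same")
x = s + mu * np.concatenate([np.zeros(k), s[:-k]])

def autocov(y, max_lag):
    """Biased sample autocovariance for lags 0..max_lag."""
    y = y - y.mean()
    return np.array([y[:len(y) - l] @ y[l:] for l in range(max_lag + 1)])

r_xx = autocov(x, 200)
k_hat = int(np.argmax(r_xx[k // 10 + 1:])) + k // 10 + 1   # skip the central peak
mu_hat = float(np.sqrt(r_xx[0] / autocov(s, 0)[0] - 1))    # from r_xx(0) = (1+mu^2) r_ss(0)
print(k_hat, round(mu_hat, 2))
```

As with the R version, the side peak pins down the delay and the ratio of zero-lag autocovariances recovers the reflection coefficient (note that in practice $r_{ss}(0)$ would itself have to be estimated from $r_{xx}$ near $l=0$).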
"domain": "dsp.stackexchange",
"id": 3220,
"tags": "discrete-signals, autocorrelation, correlation"
} |
Draw a function by only using two input NOR gates | Question: T = (BD + A'BC'+A'CD)
How can i draw this by only using NOR-Gates? This is what i have done so far:
https://i.stack.imgur.com/nIujl.jpg
Have I understood this right?
Answer: Yes, looks like you understood the question correctly and are implementing only using NOR-gates.
I also see correct usage of DeMorgan's Theorem.
Your answer seems to be correct. Good luck! | {
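One way to sanity-check a two-input-NOR-only decomposition like this is to brute-force the truth table against the original expression. A sketch (my addition), using NOT(x) = NOR(x, x), OR = NOT of NOR, and AND via DeMorgan's theorem:

```python
from itertools import product

def NOR(a, b):
    """The only primitive gate allowed: two-input NOR."""
    return int(not (a or b))

def NOT(a):
    return NOR(a, a)            # NOR with both inputs tied together

def OR(a, b):
    return NOT(NOR(a, b))       # OR = NOT(NOR)

def AND(a, b):
    return NOR(NOT(a), NOT(b))  # DeMorgan: a AND b = NOR(a', b')

def T_gates(A, B, C, D):
    t1 = AND(B, D)                         # BD
    t2 = AND(AND(NOT(A), B), NOT(C))       # A'BC'
    t3 = AND(AND(NOT(A), C), D)            # A'CD
    return OR(OR(t1, t2), t3)

def T_direct(A, B, C, D):
    return int((B and D) or (not A and B and not C) or (not A and C and D))

print(all(T_gates(*v) == T_direct(*v) for v in product((0, 1), repeat=4)))   # -> True
```

If all 16 input combinations agree, the NOR-only network implements T correctly (this checks logic only, not gate count or fan-in).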
"domain": "cs.stackexchange",
"id": 10222,
"tags": "logic"
} |
Can't subscribe and check the content on micro-ROS | Question: I'm trying to set up micro-ROS on Ubuntu 22.04 following the steps on this website, and connect ESP32 to ROS2.
However, though the connection is established (Step 8 on this website), I can't subscribe and check the content.
(When executing the command ros2 topic list, I can't see an additional topic /freertos_int32_publisher.)
Is there any solution?
Answer: I solved it myself.
The reason why it couldn't subscribe and check the content was that ROS_DOMAIN_ID had been set to a non-default value.
After I set ROS_DOMAIN_ID back to 0, it became able to subscribe and check the content. If you're faced with the same problem, please refer to this.
"domain": "robotics.stackexchange",
"id": 38663,
"tags": "ros2, microcontroller, micro-ros"
} |
Parabolic flow of fluid inside tube | Question:
I came across a fact on the web that when a fluid flows through a cylinder the shape of its flow is parabolic. But according to me, if we have a steady state then the velocity of each of the concentric fluid layers (elements) shown in the figure must be constant, and therefore the net force on them must be zero, which implies that the force on each element must be the same.
Applying the force due to viscosity
$$F= n A dv/dy$$
where $n$-coefficient of viscosity, A-area of cross section, dv/dy-velocity gradient
If the shape is parabolic then $v$ will be proportional to $y^2$ and also $A$ will be proportional to $y$, so overall, after writing the force equation, the force will come out as a function of $y$, which means it is not constant.
Can someone please tell me how the shape is parabolic.
Answer: The issue is with your starting point: why would every fluid layer have the same velocity in steady flow? Since you have a no-slip boundary condition, if your fluid is actually moving it is impossible for this assumption to be satisfied. This implies that the layers have different speeds, and therefore a non-zero and, more generally, non-constant viscous force.
Check out Poiseuille Flow for more information.
Hope this helps. | {
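As a small numeric illustration of the Poiseuille profile mentioned above (my sketch, with hypothetical values for the pressure drop, viscosity, pipe length and radius):

```python
import numpy as np

# Hagen-Poiseuille profile: v(r) = dP / (4 * eta * L) * (R**2 - r**2)
dP, eta, L, R = 100.0, 1.0e-3, 1.0, 0.01   # assumed pressure drop, viscosity, length, radius (SI)
r = np.linspace(0.0, R, 101)
v = dP / (4 * eta * L) * (R**2 - r**2)

print(v[-1])              # no-slip at the wall: v(R) = 0
print(int(v.argmax()))    # the fastest layer is the centreline, r = 0
```

The profile is zero at the wall and maximal on the axis — exactly the parabola the question asks about, and visibly incompatible with all layers sharing one velocity.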
"domain": "physics.stackexchange",
"id": 89659,
"tags": "fluid-dynamics, velocity, flow, viscosity"
} |
CtCI 16.8: Integer to English phrase in C++ | Question: I'm a C++ beginner working through Cracking the Coding Interview. This is question 16.8:
Given any integer, print an English phrase that describes the integer (e.g., "One Thousand, Two Hundred Thirty Four").
I have written a simple program, below, that I have confirmed works correctly to answer the question (note that I am ignoring the comma after "Thousand" in the example).
#include <iostream>
#include <vector>
// English Int: Given any integer, print an English phrase that describes the integer (e.g., "One Thousand, Two Hundred Thirty Four").
std::vector<int> const magnitudes = {1000000000, 1000000, 1000, 1};
std::vector<std::string> const magnitude_names = {"Billion", "Million", "Thousand", ""};
std::vector<std::string> const number_names = {"Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine"};
std::vector<std::string> const tens_group_names = {"", "", "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty", "Ninety"};
std::vector<std::string> const teens_names = {"Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen", "Sixteen", "Seventeen", "Eighteen", "Nineteen"};
std::string name_for_group_of_3(int group);
std::string join_vector(std::vector<std::string> vector, std::string joiner);
// -1 -> "Negative One"
// 0 -> "Zero"
int main() {
    int input;
    while (1) {
        std::cout << "Input: ";
        std::cin >> input;
        if (input == 0) {
            std::cout << "Zero" << "\n";
            continue;
        }
        std::vector<std::string> result;
        if (input < 0) {
            input *= -1;
            result.push_back("Negative");
        }
        for (int i = 0; i < magnitudes.size(); i++) {
            int magnitude = magnitudes[i];
            if (input / magnitude > 0) {
                result.push_back(name_for_group_of_3(input / magnitude));
                result.push_back(magnitude_names[i]);
            }
            input %= magnitude;
        }
        std::cout << join_vector(result, " ") << "\n";
    }
}
// 0 -> ""
// 1 -> "One"
// 10 -> "Ten"
// 15 -> "Fifteen"
// 34 -> "Thirty Four"
// 456 -> "Four Hundred Fifty Six"
std::string name_for_group_of_3(int group) {
    std::vector<std::string> result;
    // group should be 0...999
    if (group < 0 || group > 999) {
        throw "Bad grouping provided!";
    }
    // Handle hundreds
    if (group / 100 > 0) {
        result.push_back(number_names[group / 100] + " Hundred");
    }
    int double_digits = group % 100;
    // Handle special case for 11...19
    if (double_digits >= 10 && double_digits < 20) {
        result.push_back(teens_names[double_digits % 10]);
    }
    else {
        // Handle tens group
        if (double_digits / 10 > 0) {
            result.push_back(tens_group_names[double_digits / 10]);
        }
        // Handle ones
        if (double_digits % 10 > 0) {
            result.push_back(number_names[double_digits % 10]);
        }
    }
    return join_vector(result, " ");
}
std::string join_vector(std::vector<std::string> vector, std::string joiner) {
    std::string str_result;
    for (int i = 0; i < vector.size(); i++) {
        str_result += vector[i];
        if (i < vector.size()-1) {
            str_result += joiner;
        }
    }
    return str_result;
}
Notes & specific questions:
Compiling with g++ --std=c++11 main.cpp
I can run this on the first 10000 integers in 0.15s on my '15 MacBook Pro, roughly 15µs per run. Is this reasonable performance?
I opted to build vectors of strings, then join them together later, mostly to simplify dealing with the details of space placement. This feels cleaner to me, do you agree? Am I losing significant perf by using vectors vs. string concat?
Is it evil to be modifying input as I do during main, with input %= magnitude;? It feels a bit odd to modify the original user input.
Is vector a terrible argument name in join_vector?
I, of course, don't really know what else to ask - appreciate any & all pointers!
Answer: General
Compile with -Wall -Wextra -pedantic-errors. Warnings are preferable to runtime problems. Sometimes I also compile with -ftrapv to catch signed overflow.
You are doing much of the work inside main. This violates the one responsibility principle. Consider extracting a function to_English to do the actual work. Also, your program becomes clueless after reaching EOF or invalid input. A better main function looks like:
int main()
{
    for (int num; std::cout << "Input: ", std::cin >> num;)
        std::cout << to_English(num) << "\n";
}
In this case, the input process is simple, so I put it in the loop condition. You may want to extract a separate get_number function for more sophisticated input.
You do not need to store the strings in vectors. Just concatenate them in place.
Code
#include <iostream>
#include <vector>
You are missing #include <string>.
std::vector<int> const magnitudes = {1000000000, 1000000, 1000, 1};
Using int to hold values outside of the range \$-32\,768 \le n < 32\,768\$ is nonportable. Use, for instance, int_least32_t, instead. (You need to #include <cstdint>) Write an alias for this to express the intent more explicitly: using number_t = std::int_least32_t, and use it consistently throughout the code.
std::string name_for_group_of_3(int group);
The name of the function is not entirely clear, but I cannot think of a better name either. Maybe add a comment.
std::string join_vector(std::vector<std::string> vector, std::string joiner);
Don't pass vectors and strings by value. Pass by const reference instead. Also, join_strings may be a better name in my opinion.
for (int i = 0; i < vector.size(); i++) {
for (int i = 0; i < vector.size(); i++) {
if (i < vector.size()-1) {
These lines trigger -Wsign-compare. Use std::size_t or std::vector<std::string>::size_type instead of int. Use ++i instead of i++.
// group should be 0...999
if (group < 0 || group > 999) {
throw "Bad grouping provided!";
}
Never throw a string literal. Throw a std::invalid_argument instead. Also, an assertion may be better for logical errors. | {
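For comparison, the same grouping logic can be sketched in a few lines of Python (my sketch mirroring the structure under review, not a replacement for the C++):

```python
MAGNITUDES = [(10**9, "Billion"), (10**6, "Million"), (10**3, "Thousand"), (1, "")]
ONES = ["Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine"]
TENS = ["", "", "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty", "Ninety"]
TEENS = ["Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen", "Sixteen",
         "Seventeen", "Eighteen", "Nineteen"]

def group_name(g):
    """Name a group of three digits, 0..999 (empty string for 0)."""
    parts = []
    if g >= 100:
        parts.append(ONES[g // 100] + " Hundred")
    d = g % 100
    if 10 <= d <= 19:          # the teens are irregular, so handle them first
        parts.append(TEENS[d - 10])
    else:
        if d >= 20:
            parts.append(TENS[d // 10])
        if d % 10:
            parts.append(ONES[d % 10])
    return " ".join(parts)

def to_english(n):
    if n == 0:
        return "Zero"
    parts = ["Negative"] if n < 0 else []
    n = abs(n)
    for magnitude, name in MAGNITUDES:
        if n // magnitude:
            parts.append(group_name(n // magnitude))
            if name:
                parts.append(name)
        n %= magnitude
    return " ".join(parts)

print(to_english(1234))   # -> One Thousand Two Hundred Thirty Four
```

Collecting words in a list and joining once at the end — the same design choice as the C++ — keeps the space handling trivial.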
"domain": "codereview.stackexchange",
"id": 35432,
"tags": "c++, beginner, programming-challenge, numbers-to-words"
} |
Costmap wagon ruts Layer | Question:
I m trying to make some preference highways path.
So, in order for the robot to go from point a to point b, the costmap will be such that the robot prefers to travel to the nearest highway and leaves it only next to point b.
Reading about layered costmaps, I found a presentation from David V. Lu, Dave Hershberger, and William D. Smart, named "Layered Costmaps for Context-Sensitive Navigation" (http://wustl.probablydavid.com/publications/IROS2014.pdf). One layer described there is called wagon ruts. I thought I could use an implementation of this type and load preferred paths onto this layer, where the cost is negative or multiplies the total cost by a small factor.
Someone has any idea if there something like this already developed?
Thanks,
Henrique
Originally posted by Henrique on ROS Answers with karma: 68 on 2016-09-27
Post score: 0
Answer:
The wagon ruts layer was never released, since if I recall correctly, it was for a beta version of layered costmaps.
The easiest way to implement it now would be a static layer with some added cost on the nonhighway cells. You cannot set a value in a costmap to a negative value, although you could (theoretically) make a layer that subtracted from whatever values were already in a previous layer.
Originally posted by David Lu with karma: 10932 on 2016-09-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Henrique on 2016-09-29:
what a pity... I will try to do it =D Thanks. | {
"domain": "robotics.stackexchange",
"id": 25849,
"tags": "ros, navigation, path, costmap"
} |
How to relate sunlight's incidence to its E and B amplitudes? | Question: If I understand this correctly sunlight (as any light) is an EM wave. This means it has, for most media, two perpendicular plane waves, an electric and a magnetic one, right?
We know both are linearly polarized and the incidence on earth is $x~\mathrm{W/m^2}$. How can we calculate the amplitudes of E and B? I know
$$E_{Re}=E_{0a}e^{i(kz-\omega t)}$$
$$B_{Re}=B_{0a}e^{i(kz-\omega t)}$$
Where $E_{0a}$ and $B_{0a}$ are the real wave amplitudes.
My 1st thought was to calculate the electric and magnetic energies using
$$U_e=\frac{\epsilon_0}{2}\int_{\text{all space}}E^2\,dv$$
$$U_m=\frac{1}{2\mu_0}\int_{\text{all space}}B^2\,dv$$
Where $U_e$ and $U_m$ are respectively the electric and magnetic energies, but these are volumetric not surface results and they would relate the waves to the energies not to the power.
Any ideas on how to find the amplitudes?
Answer: Looks like homework (indeed I have set this question as homework).
Clues:
what are the units of the Poynting vector and how does it relate to the E-field amplitude?
what is the relationship between E- and B-field amplitudes for a plane wave in vacuum?
is the light from the Sun polarised? If not, how can you represent it with plane polarised waves? | {
"domain": "physics.stackexchange",
"id": 29569,
"tags": "homework-and-exercises, electromagnetism"
} |
If I jump from space using a spacesuit filled with hydrogen could I survive the fall? | Question: If I fill a spacesuit to its maximum volume with hydrogen while wearing it (who cares about the breathing part anyway) and jump from space (let's say, from 100km above sea level, a.k.a. Karman Line), could I survive the fall? Or the hydrogen buoyancy/body weight ratio would be so little that it would make no difference for the speed of the fall?
Bonus questions: would I blow up when hitting the surface? Should I use helium instead?
Please help, I wanna jump from space!
Answer: The hydrogen would have an infinitesimal effect. It is highly unlikely that any scientific instrument could detect it's effect.
The buoyancy of hydrogen comes from the mass of the air it displaces. So right away, we realize there's a problem. If you're in 1/1000th of an atmosphere, even a full spacesuit of hydrogen is denser than the air. It doesn't provide any buoyancy at all!
So let's give you the benefit of the doubt. Let's fill the space with pure vacuum instead. You said breathing was optional, right? Now consider your own mass. Perhaps it's around 100kg. That means your vacuum bubble needs to displace around 100kg of air. The density of air is roughly 1.225kg/m^3, meaning you'd need about 82 cubic meters of air ($V = m/\rho$). That's not the volume of a space suit. That's more like the volume of a room.
At a mere 10km of altitude, the atmospheric density is about a third of that, so you need around 245 cubic meters of air. At a mere 50km, atmospheric density is 0.002 kg/m^3, so you would need 50,000 cubic meters of air! That's getting into the volume of a supertanker!
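Redoing the arithmetic as a quick sketch (remember $V = m/\rho$, mass divided by density; the density figures here are rough):

```python
# How much air must a vacuum bubble displace to float a 100 kg load?
# V = m / rho_air(h), with rough air densities at each altitude.
mass = 100.0                 # kg (person plus suit)
densities = {                # approximate air density, kg/m^3
    "sea level": 1.225,
    "10 km": 0.41,
    "50 km": 0.002,
}
volumes = {label: mass / rho for label, rho in densities.items()}
for label, v in volumes.items():
    print(f"{label}: {v:,.0f} m^3")
# sea level: 82 m^3; 10 km: 244 m^3; 50 km: 50,000 m^3
```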
Far better to use more typical means of arresting your fall, like parachutes. | {
"domain": "physics.stackexchange",
"id": 50638,
"tags": "newtonian-mechanics, buoyancy, hydrogen, free-fall"
} |
Detecting Arabic verses spanning multiple lines | Question: I need to build a Quran app and I want to read out the verses when a user touches them. The problem I'm facing is that some verses may expand to one and a half lines (highlighted red verse) or fit in just a quarter of a line (highlighted green verse). So adding each verse to a TextView or some other view won't work, it seems.
I want to detect verses like the red ones in the second image. I have audio files for the verses, so there is no need for text-to-speech conversion.
Answer: This can be solved fairly straightforwardly with simple template matching. I don't know exactly how you have it set up, so I'll just describe the algorithm generally and use illustrations.
Observe that the verse numbers have a distinctive border that can easily be used to detect the start and end of a verse. So create a binarized template for that pattern and store it. Something like this:
Since the number of lines in a screen are known in advance (you're formatting the page) and each verse has a constant height, you can easily infer (algorithmically) where the Y coordinates for the centerlines of the verses should be on the screen. This demonstrates the idea:
When the user touches a verse, get the X-Y coordinates and snap the Y coordinate to the nearest verse center.
Then starting with the X coordinate, perform a simple template matching (cross-correlation) across that row. The first match (peak in the cross-correlation) in the forward direction (to the left), will be the end point for the verse. If there are no matches in the reverse direction (to the right), then move up one verse (which you can do, because you know the Y coordinate of the centerline) and repeat. The first match from the left end will be the start point of the verse. Similarly, if there is no forward match on the line, move down one line and repeat.
Here's a short illustration of the idea. The yellow box is where the user touches the verse. You then do the cross-correlation with your template and the blue circles will be the match.
I also use template matching in this answer, if you're interested in seeing it in action.
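As a rough sketch of the 1-D matching step described above (the binary template, the toy pixel row, and the 0.9 threshold are all invented for illustration), one could slide a normalized cross-correlation along a row and report the peak positions:

```python
import numpy as np

def match_template_row(row, template, threshold=0.9):
    """Slide a 1-D template along a pixel row; return the indices where the
    normalized cross-correlation exceeds the threshold."""
    n, m = len(row), len(template)
    t = template - template.mean()
    hits = []
    for x in range(n - m + 1):
        w = row[x:x + m] - row[x:x + m].mean()
        denom = np.linalg.norm(w) * np.linalg.norm(t)
        if denom > 0 and np.dot(w, t) / denom >= threshold:
            hits.append(x)
    return hits

# Toy row: the "verse marker" pattern appears at positions 5 and 22.
template = np.array([0, 1, 1, 0, 1, 1, 0], dtype=float)
row = np.concatenate([np.zeros(5), template, np.zeros(10), template, np.zeros(5)])
print(match_template_row(row, template))   # [5, 22]
```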
Once you've determined the start point for the verse, then use an Arabic text recognizer to infer the verse number inside that border and play the corresponding audio file.
Simpler solution:
A simpler solution, if you don't want to go through this is to store the X-Y coordinates of the verse starting points (keep it simple and use the center points) and once you get the coordinates of the user input, you can again snap it to the centerline and then walk backwards to see where the verse starts. This might have the advantage of being faster.
I didn't put this forward as the first solution because you seemed to reject a similar idea in the comments. In the end, it depends on your constraints — would you rather do computational work (template matching — which, by the way, also requires you to store the template) or use memory (storing coordinates)?
If I were you, I'd probably go with this one, but the image processing solution can be fun to try. | {
"domain": "dsp.stackexchange",
"id": 258,
"tags": "image-processing, text-recognition"
} |
$SU(3)$ adjoint representation and irreducibility | Question: Consider the Gell-Mann matrices, with
$$
\lambda_3 = \operatorname{diag}(1, -1, 0), \quad \lambda_8 = \frac{1}{\sqrt{3}}\operatorname{diag}(1, 1, -2), \quad \ldots \ ,
$$
they span the Lie algebra $\mathfrak{su}(3)$. The Lie algebra transforms under the Lie group (not Lie algebra) $SU(3)$ in the adjoint representation, which is irreducible. By irreducibility, I would like to make the following statement:
Take a non-zero vector $T$ in $\mathfrak{su}(3)$, and consider the adjoint action $Ad_g$ (where $g$ is a group element in $SU(3)$) on $T$ and define the linear space $V_T:=\operatorname{span} \{Ad_g T | \forall g \in SU(3)\}$. $V_T$ is a non-trivial subspace of $\mathfrak{su}(3)$ and also closed under the adjoint action, thereby forming a representation of $SU(3)$, hence $V_T$ must be equal to $\mathfrak{su}(3)$ by irreducibility.
Now, I would like to confirm the above statement by explicitly playing with the Gell-Mann matrices. I can take $T = \lambda_8$, and hope to find a $g$ such that $Ad_g \lambda_8 \propto \lambda_3$ with a non-zero proportionality constant. The adjoint action can be written as matrix conjugation $g \lambda_8 g^\dagger$, hence
$$
g \lambda_8 \propto \lambda_3 g \ .
$$
But this proportionality is impossible, since it forces
$$
\frac{1}{\sqrt{3}}\begin{bmatrix}
g_{11} & g_{12} & - 2 g_{13}\\
g_{21} & g_{22} & -2 g_{23}\\
g_{31} & g_{32} & -2 g_{33}
\end{bmatrix} \propto
\begin{bmatrix}
g_{11} & g_{12} & g_{13}\\
-g_{21} & - g_{22} & -g_{23} \\
0 & 0 & 0
\end{bmatrix} \ .
$$
This forces $g_{3i} = 0$ for all $i$, which is impossible for $g \in SU(3)$ (an invertible matrix cannot have a row of zeros).
I wonder which part of the above reasoning is wrong?
Answer: The question fails to interpret its own definition correctly: It is correct that every vector $v$ in an irreducible representation is cyclic, which is the technical term for the span of the orbit being the full representation.
However, this does not mean that for any two vectors $v,w$ in the representation you can find some $g$ such that $w = \rho(g)v$ - it only means that there exist finitely many elements $g_i$ in the group and constants $c_i$ such that
$$w = \sum_{i=1}^N c_i \rho(g_i) v,$$
whereas the question makes the mistake of assuming $N=1$. | {
"domain": "physics.stackexchange",
"id": 100169,
"tags": "gauge-theory, group-theory, representation-theory, lie-algebra, linear-algebra"
} |
DL vs dl notation | Question:
As the L-isomer of glucose, it is the enantiomer of the more common D-glucose.
Source: Wikipedia
As far as I know enantiomers or Optical isomers are non superimposable mirror image structures with chiral centres and they are represented by d and l isomers, not D and L forms. Is there a difference between the capital and lower case designations?
Answer: The notation d, l, and dl (for dextrorotatory and laevorotatory, respectively) is used to designate the sign of optical activity. According to Basic terminology of stereochemistry (IUPAC Recommendations 1996), this notation is obsolete and its usage is strongly discouraged. The recommended notation uses the prefixes (+), (−), and (±), respectively.
Since optical rotation is a distinguishing characteristic of enantiomers, the notation using d, l, and dl as well as the notation using (+), (−), and (±) can be used to distinguish the enantiomers of a chiral molecule. However, without further information, the absolute configuration (R or S) of the chiral molecule is indeterminable. Thus, it remains unknown whether the dextrorotatory enantiomer (d and (+), respectively) is the R enantiomer or the S enantiomer.
The notation D, L, and DL is based on the arbitrary convention according to which d-glyceraldehyde was named D-glyceraldehyde, l-glyceraldehyde was named L-glyceraldehyde, and the racemate dl-glyceraldehyde was named DL-glyceraldehyde. (Note that the stereodescriptors D, L, and DL shall actually be written in small capitals, which cannot be correctly typeset here.) The enantiomers were taken to have the absolute configuration represented by the Fischer projection.
Today, we know that D-(+)-glyceraldehyde actually is (2R)-2,3-dihydroxypropanal. | {
"domain": "chemistry.stackexchange",
"id": 9199,
"tags": "stereochemistry, notation"
} |
How is cross-correlation related with orthogonality? | Question: In linear prediction we can say that in the case of the optimum linear predictor the error will be orthogonal to the data. And when we derive the minimum mean square error for the $\underline{y} = \mathbf{a}\underline{x} + \underline{b}$ case we find the following relation
$\xi = \sigma^2_y (1 - \rho_{xy}^2)$
Where $\rho_{xy}$ is the correlation coefficient. And if we talk about the cross-correlation $r_{xy}(k,l)$ of zero-mean and uncorrelated random variables $x$ and $y$, then how does this lead to the conclusion that if $r_{xy}(k,l)$ is zero then the signals are orthogonal?
Answer: I suppose you mean the cross-correlation at lag zero. Well, take a Hilbert space $H$ (i.e. a complete vector space on which you can define a scalar product $\langle\cdot ,\cdot\rangle$). Then $x,y\in H$ are orthogonal if $\langle x,y\rangle=0$, by definition.
If your Hilbert Space is $L_2(\mathbb{R})$ (the space of real square integrable functions) then the scalar product of $f,g\in L_2(\mathbb{R})$ is defined as
$$
\langle f, g\rangle = \int_{-\infty}^\infty \overline{f(t)}g(t) dt
$$
If the signals are assumed to be ergodic, which is actually a customary assumption, then the cross-correlation is simply a sliding scalar product; you have
$$
\text{corr}(f,g)(\tau) = \int_{-\infty}^\infty\overline{f(t)}g(t+\tau) dt
$$
hence
$$
\text{corr}(f,g)(0) = \langle f ,g \rangle
$$
and uncorrelation means orthogonality.
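A small numerical check of the lag-zero identity (using sampled sine and cosine, which are orthogonal over a full period):

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
f = np.sin(2 * np.pi * 3 * t)
g = np.cos(2 * np.pi * 3 * t)

r0 = np.dot(f, g)      # zero-lag cross-correlation = scalar product <f, g>
print(abs(r0) < 1e-9)  # True: uncorrelated at lag 0 <=> orthogonal
print(np.allclose(np.correlate(f, g, mode="valid"), r0))  # True, same number
```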
The same applies for discrete time signals in $l_2$ | {
"domain": "dsp.stackexchange",
"id": 3798,
"tags": "cross-correlation, statistics, linear-prediction"
} |
How to calculate thrust from mass flowrate and velocity | Question: How can I calculate thrust in Newton from mass flowrate and velocity?
Please note, I have the mathematical skills of a 7 year old; the most complex task I can do is multiply and divide. I don't understand any equations.
Answer: Well the best learning happens when there is a need, and the internet can answer almost any question. Easiest way to understand physics is to use SI derived units and always carry your units through your equation.
velocity = 5 m/s
mass flowrate = ?kg/s
volumetric flowrate = 5m^3/s
Thrust: ? Newtons = kg*m/s^2
I asked google what the mass of a cubic meter of air was:
density = 1.293 kg/m^3
The density lets us convert your volumetric flow into mass flow:
(looks like Google's calculator carries units now, which is cool)
5m^3/s * 1.293 kg/m^3 = 6.465 kg/s
mass flow = 6.465 kg/s
Now we just multiply mass flow and velocity and confirm our units work out:
6.465 kg/s * 5 m/s = 32.325N
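The same calculation as a few lines of Python, carrying the units through in the comments (numbers taken from above):

```python
# Thrust = (mass flow rate) x (flow velocity); carry the units through.
rho_air = 1.293      # kg/m^3, density of air (rough sea-level figure)
vol_flow = 5.0       # m^3/s, volumetric flow rate
velocity = 5.0       # m/s

mass_flow = vol_flow * rho_air     # (m^3/s)(kg/m^3) = kg/s
thrust_N = mass_flow * velocity    # (kg/s)(m/s) = kg*m/s^2 = Newtons

print(round(mass_flow, 3), "kg/s")        # 6.465 kg/s
print(round(thrust_N, 3), "N")            # 32.325 N
print(round(thrust_N / 4.448, 2), "lbf")  # 7.27 lbf
```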
So the thrust is 32.325N or 7.27 pounds force | {
"domain": "engineering.stackexchange",
"id": 1942,
"tags": "airflow, thrust"
} |
Takes in two sorted lists and outputs a sorted list that is their union | Question:
Write a function that takes in two sorted lists and outputs a sorted list that is their union
Could anyone help me to improve this code and make it more user-friendly? We are not supposed to use any built-in Python functions, but rather develop our own efficient algorithms.
def answer(list1, list2):
len1 = len(list1)
len2 = len(list2)
final_list = []
j = 0
k = 0
for i in range(len1+len2):
if k == len1:
final_list.append(list2[j])
j += 1
continue
if j == len2:
final_list.append(list1[k])
k += 1
continue
if list1[k] < list2[j]:
final_list.append(list1[k])
k += 1
else:
final_list.append(list2[j])
j += 1
return final_list
print answer([1, 2 , 17, 18, 100], [3, 3, 4, 5, 15, 19, 20, 101])
Answer: You could pick better names for your variables. Definitely answer should be something else, like sorted_union. result would be better than final_list. And list1 and len1 aren't great, though you have limited options there.
You should also add a docstring. They're basically string literals that appear at the start of a class or function. They're programmatically accessible comments that will allow you to see what the function does, and you already have one in your brief, so just add the syntax to include it:
def answer(list1, list2):
"""Takes two sorted lists and returns their union as a sorted list."""
Also since you call for i in range but don't need i, a commonly accepted Python style is to actually use _ instead of i. Using an underscore signals that you don't need the value, it's just there to fit the for loop syntax.
You could actually short-circuit the loop when j or k has reached the end. You know you'll no longer need anything but the rest of list2 in this case:
if k == len1:
final_list.append(list2[j])
j += 1
continue
So why not just use extend to add the rest of list2 on to final_list and break the loop?
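Pulling these suggestions together (renamed function, docstring, and extending once either list is exhausted), the whole function might look like this sketch:

```python
def sorted_union(list1, list2):
    """Takes two sorted lists and returns their union as a sorted list."""
    result = []
    j = k = 0
    while k < len(list1) and j < len(list2):
        if list1[k] < list2[j]:
            result.append(list1[k])
            k += 1
        else:
            result.append(list2[j])
            j += 1
    # One list is exhausted; the remainder of the other is already sorted.
    result.extend(list1[k:])
    result.extend(list2[j:])
    return result

print(sorted_union([1, 2, 17, 18, 100], [3, 3, 4, 5, 15, 19, 20, 101]))
# [1, 2, 3, 3, 4, 5, 15, 17, 18, 19, 20, 100, 101]
```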
if k == len1:
final_list.extend(list2[j:])
break | {
"domain": "codereview.stackexchange",
"id": 41698,
"tags": "python, performance, sorting, mergesort"
} |
TF2 migration for Hydro | Question:
The Hydro release page indicates that TF has been deprecated, but I still see many ROS packages using TF (robot_state_publisher, navigation, etc.).
Is there a plan for migrating these packages? Are patches being accepted for hydro-devel or is the idea to update packages wholesale for Indigo and continue using TF for Hydro?
Looking at the code for TF in Hydro, it appears as if many of the functions are simply wrappers for TF2. So do packages even need to be migrated to use the TF2 package directly? Is there any benefit of doing so?
Originally posted by Dereck on ROS Answers with karma: 1070 on 2013-09-07
Post score: 1
Answer:
http://wiki.ros.org/hydro/Migration#tf2.2BAC8-Migration.General_Backwards_compatability
In short, it should be safe to migrate any package to TF2 at any time, but there's little rush since TF provides TF2 wrappers for existing functions.
Originally posted by Dereck with karma: 1070 on 2013-09-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by 130s on 2016-09-11:
I opened a pr to put more stress about this in tf doc. | {
"domain": "robotics.stackexchange",
"id": 15451,
"tags": "ros, transform, ros-hydro, tf2"
} |
Requiring query field to not be empty on search api | Question: we have a RESTful API that includes an endpoint for search
apiserver/v5/search?q=[search text]
for any query, it passes it off to solr and returns the result like
{code:200,results:[{..},{...}]}
if q parameter is omitted, it returns:
{code:412,message:"parameter q is required"}
if q parameter is EMPTY, eg ?q= then it also returns 412:
{code:412,message:"parameter q is empty"}
The design is that the search PAGE, e.g. example.com/search?q=, will show a default view and not make the query if it is empty, but instead just show a search box or default information.
My question is, Is it poor design or very non-standard to have the API return error on empty query? ie, does it make the API very unexpected/uncomfortable, or is it quite reasonable, to enforce that "you won't get results so you should be handling this in front end behavior"
Below is the relevant code.
NOTE: search function handles all the validation and sanitation and other stuff and is outside the scope of the relevant code and question
<?php
require('search.php');
if (!isset($_GET['q'])) {
    echo '{"code":412,"message":"parameter q is required"}';
    exit;
}
// should we do this to enforce they should handle front end?
if (empty($_GET['q'])) {
    echo '{"code":412,"message":"parameter q is empty"}';
    exit;
}
echo search($_GET['q']);
Answer:
do search api endpoints typically allow you to search for nothing and just return empty
It is up to the implementors (and perhaps influenced by the stakeholders) of the API. Of the APIs I've looked at, many disallow searching without a query/keyword. Below are a few examples:
accuweather location autocomplete
owler company /basicsearch
does it make the API very unexpected/uncomfortable, or is it quite reasonable, to enforce that "you won't get results so you should be handling this in front end behavior"
It is quite reasonable. For a REST API "meaningful HTTP status codes are a must". If there is a front-end then it should ensure that the request is well-formed - i.e. has all required parameters.
Take for example the StackExchange API endpoint /search. If the query is missing either the tagged or intitle parameter it will return the following response:
{
"error_id": 400,
"error_message": "one of tagged or intitle must be set",
"error_name": "bad_parameter"
}
And note that the response has HTTP status 400:
HTTP status 412 Precondition Failed likely isn't the best code to be used in such a case.
The HyperText Transfer Protocol (HTTP) 412 Precondition Failed client error response code indicates that access to the target resource has been denied. This happens with conditional requests on methods other than GET or HEAD when the condition defined by the If-Unmodified-Since or If-None-Match headers is not fulfilled. In that case, the request, usually an upload or a modification of a resource, cannot be made and this error response is sent back.
(emphasis: mine)
400 Bad Request would likely be more appropriate. If a form was submitted incorrectly then 422 Unprocessable Entity could be used but that might likely be more appropriate for a POST request.
The status code can be set with an HTTP header using header() and the response_code parameter
header('Content-Type: application/json', true, 400);
If the response is JSON then it would be appropriate to set the Content-Type header to application/json - as is done in the example line above. | {
"domain": "codereview.stackexchange",
"id": 42303,
"tags": "php, error-handling, api, search, rest"
} |
Meaning of the term $V(x)$ in the Schrodinger equation | Question: I'm new to quantum mechanics and I am currently trying to understand the finite potential well (although my question is not specific to the finite potential well). In the Schrodinger equation, many texts and internet sources say that $V(x)$ is the potential. I have the following queries regarding this claim.
a) When they say $V(x)$ is the potential, is it just a way of "meaning" potential energy instead? Because, after all, the Schrodinger equation is an energy conservation equation with kinetic and potential energies.
b) Secondly, if $V(x)$ means potential energy, then shouldn't the potential energy be dependent on the particle under consideration (I know a similar question has been asked before on this platform, but I am not convinced by the answer)?
c) And if $V(x)$ is the potential energy, is it the potential energy that the particle under discussion would have had it been classically placed at the point $x$ or does $V(x)$ have the same meaning as it is traditionally defined, where a positive unit charge is placed at the point and it is the energy of that positively charged particle?
d) Lastly, when we are solving the finite potential well, we find the wave function for the two cases where $E<V_0$ and $E>V_0$. Are these two cases for the same particle under discussion, or are we saying "if there is a particle whose energy $E$ is greater than $V_0$ (or if there is a particle whose energy $E$ is less than $V_0$)"?
Answer: Yes, calling $V(x)$ the 'potential' is just a shorthand to 'potential energy'. And $V$ may depend on the particle we're investigating: if $V$ is the potential energy due to an electrical interaction then it must depend on the particle's charge. It's just that the potential well is a case where we may abstract away any dependence on the particle's properties. That's the answer to a) and b).
Now about c): we don't know how the interaction goes at this scale. We often 'discover' the appropriate $V$ by experiments or by looking at a similar classical interaction. If there's a similar classical case, we just take the classical hamiltonian and replace the $x$'s and $p$'s by the quantum mechanical operators $X$ and $P$ (being careful to keep the QM hamiltonian hermitian). In a nutshell: you can think of $V(x)$ as the potential a classical particle would have at $x$ only if $V$ has a classical counterpart; otherwise it's better to think that the potential is set up in space and the particle responds to it at every position (since, anyway, a particle doesn't have a definite position in QM).
And for d): usually we treat them as separate experiments (where in one of them we used a particle with energy smaller than the 'potential wall' and in the other a particle with higher energy). But the most general solution for a QM particle is a linear combination of its eigenfunctions, so you could have a particle prepared in such a way that both the $E < V_0$ and $E> V_0$ solutions make up the particle's state (an example: a particle has $E<V_0$, but we now make the particle interact with a photon; the particle may have absorbed the photon and increased its energy or didn't - we can't really know before measuring! The best we can do is talk about probabilities and build the final wavefunction as a linear combination of states). | {
"domain": "physics.stackexchange",
"id": 55503,
"tags": "quantum-mechanics, potential, schroedinger-equation, terminology, potential-energy"
} |
Evidence of why the Standard Model is a successful theory of particle physics | Question: When discussing physics with laypersons, I'm often in the situation where I have to explain what the Standard Model is, and why it's a successful theory of particle physics.
To help in such situations, I'm looking for explicit examples of where the Standard Model has been successful - i.e. the prediction of certain particles, incredible agreement between experimental/theoretical results and so on.
Conversely, what, if any, are the failures of the Standard Model (apart from the obvious not-including-gravity one)? Are there theoretical predictions which differ significantly from experimental results?
Answer: The lists will end up being huge, therefore I will only mention a few of each. This is my attempt of an answer:
Successes of the Standard Model:
Perhaps the biggest success of the Standard Model is the prediction of the Higgs boson. The particle was experimentally verified in 2012 (if my memory serves me well), after having been theorised for almost 50 years.
Other successes of the Standard Model include the prediction of the W and Z bosons, the gluon, and the top and charm quarks, before they had even been observed.
Another prediction is the anomalous magnetic dipole moment of the electron, which is given by $a = 0.001 159 652 180 73(28)$ and results in our most precise value of the fine structure constant: $α^{−1} = 137.035 999 070 (98)$, a precision of better than one part in a billion!
Wikipedia has a table with the prediction of the masses of the W and Z boson compared with experimental data. It is evident that those are extremely accurate predictions:
$$
\begin{align*}
&\textrm{Quantity}&&\textrm{Measured (GeV)}&&\textrm{SM prediction (GeV)}\\
\hline
&\textrm{Mass of W boson}&&80.387\phantom0\pm0.019\phantom0&&80.390\phantom0\pm0.018\phantom0\\
&\textrm{Mass of Z boson}&&91.1876\pm0.0021&&91.1874\pm0.0021
\end{align*}
$$
Failures of the Standard Model:
The biggest one in my opinion is the complete absence of gravity in the SM. As you mentioned in your question though, you are interested in other failures, perhaps less known.
These include:
The SM predicts neutrinos to be massless. We have observed neutrino oscillations, which implies that neutrinos are massive (by massive I mean they have mass; their actual mass is tiny!).
The Hierarchy Problem. In a nutshell, the SM cannot explain the enormous gap between the electroweak scale and the Planck scale (equivalently, why gravity is so much weaker than the other forces).
The contribution to Dark Energy arising from the SM is many, many orders of magnitude higher than observed.
CP violation: the SM does not provide enough of it to explain the matter-antimatter asymmetry seen in cosmology. | {
"domain": "physics.stackexchange",
"id": 15242,
"tags": "experimental-physics, standard-model"
} |
What is "surface fluid adhesion energy"? | Question: This is related to my previous question. Pardon me for asking so many questions recently. My physics knowledge is not that good, and some answers are hard to find.
In the question in the link, I asked what was the correct interpretation for $\beta_i$ in the energy formulation:
$$\mathcal{F}(S_1,S_2,S_3)= \sum_{1\leq i<j\leq 3}\sigma_{ij}P_\Omega(S_i \cap S_j)+\sum_{i=1}^3 \beta_i P_{\partial \Omega}(S_i)+\sum_{i=1}^3 g \rho_i\int_{S_i} z dV$$
I got an answer which said that the correct interpretation for $\beta_i$ is surface fluid adhesion energy. I tried googling the term, but I did not find anything clear. My question is
What is surface fluid adhesion energy and how is this related to the fluid-solid interfacial tension? What references are there on this subject?
Here is the context: So, in my hypothesis the container $\Omega$ is partitioned in $S_1,S_2,S_3$, the three fluids, with prescribed volumes $v_i$ and prescribed densities $\rho_i$. I took into account in the formulation of the energy the interfacial tensions, the gravity, and the contact of the fluids with the container $\Omega$ (these are not my ideas; they are taken from other similar mathematical articles). I will denote $P_\Omega(S)$ the perimeter of $S$ situated in the interior of $\Omega$ and $P_{\partial \Omega}(S)$ the perimeter of $S$ which is situated on the boundary of $\Omega$. I will not be very rigorous in what I'm about to write: I will write, for example $P_\Omega(S_i\cap S_j)$ the perimeter of the intersection $S_i\cap S_j$ even if as set theory intersection, this is void. Still, I think that the idea will be clear.
Answer: Surface fluid adhesion energy is the free energy per unit area of a fluid in contact with a surface. It can be defined by having a given bulk of fluid in contact with a container, and asking how much work does it take to add surface, for example by tilting a non-symmetric container and measuring how much work it takes (very precisely).
You can understand this in a microscopic model by having a two-state lattice (an Ising model) with a box boundary, with extra energy for lattice sites which are "1" near the edge. The surface fluid adhesion energy at any temperature is the difference between the free energy of the lattice with an edge from the free energy of the lattice with periodic boundaries. | {
"domain": "physics.stackexchange",
"id": 2789,
"tags": "fluid-dynamics, surface-tension"
} |
Force Transmitted to the support (Vibrations) | Question: Consider the spring mass dashpot system shown below which is acted upon by a harmonic excitation force F(t). The reference is taken at the equilibrium position of the system when no F(t) was present. At time t=0, F(t) acts and displaces the system from its equilibrium.
I'm interested in knowing the force transmitted to the support. I proceeded in the following manner: at time $t$, the spring would be stretched by $x-x_0$ and the end of the damper will have a velocity equal to $\dot{x}$.
The resulting expression I get for transmitted force has an mg in it. However all the sources I'm referring to state the expression with no mg. Where am I wrong in the analysis?
Answer: In the analysis you are referring to, the authors are only concerned with the amplification of the excitation force (they are not concerned about the total force). I.e. they only care about the additional force that the excitation causes on the supports.
(keep in mind that the vibration would still occur even in a zero-g environment if you added a harmonic excitation).
So, what they are doing is that they separate the static part (which is the mg) and the dynamic part.
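To see the separation explicitly, a short symbolic sketch (here $u$ is the displacement measured from the static equilibrium, and $k\,\delta_{st} = mg$ defines the static deflection):

```python
import sympy as sp

t = sp.symbols("t")
m, k, c, g = sp.symbols("m k c g", positive=True)
u = sp.Function("u")(t)          # displacement from static equilibrium

delta_st = m * g / k             # static spring deflection under gravity
x = delta_st + u                 # total stretch from the spring's free length

F_support = k * x + c * sp.diff(x, t)        # spring + damper force on the base
F_dynamic = sp.simplify(F_support - m * g)   # subtract the static part
print(F_dynamic)   # the m*g term has cancelled; only k*u + c*u' remains
```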
It is easy to find the total force at the end by just adding mg to the total transmitted load (just like your equation). | {
"domain": "engineering.stackexchange",
"id": 4333,
"tags": "mechanical-engineering, vibration, machine-elements"
} |
Whats at the center of a neutron star? | Question: What is nuclear pasta? Somebody told me that the inside of a neutron star was made of nuclear pasta. Also is the inside of a neutron star some sort of liquid?
When you have a massive glacier the ice at the bottom is under so much pressure it stays in its liquid form. So is the inside of a neutron star also a liquid? What is the star even made of any more?
Answer: The structure of a neutron star can be summarised as follows.
An outer crust, consisting of a solid lattice of nuclei in a degenerate gas of ultrarelativistic electrons. At densities $>4\times10^{14}$ kg/m$^3$, there is an inner crust where it becomes energetically feasible for neutrons to drip out of the nuclei, but the (increasingly n-rich) nuclei maintain their identity in a solid lattice. At densities $>10^{17}$ kg/m$^{3}$ the nuclei lose their identity and "dissolve" into a (super)fluid of degenerate neutrons with a small fraction (1%) of protons and electrons. Then at densities approaching $10^{18}$ kg/m$^3$ there may be some other phase change - either into a solid neutron core or quark matter, or through the creation of additional hadronic degrees of freedom.
Nuclear pasta fits into the region between the inner crust and the n,p,e fluid, at densities between about $3\times 10^{16}$ kg/m$^3$ and $10^{17}$ kg/m$^3$. The basic idea is that the equilibrium state of the gas is found by minimising the overall energy density.
$$ u = n_N \left(M(A,Z)c^2 + L\right) + u_n + u_e \, ,$$
where $n_N$ is the number density of nuclei, $M(A,Z)$ is the rest mass of the equilibrium nucleus of atomic mass $A$ and atomic number $Z$ (inverse beta decay drives the equilibrium towards n-rich nuclei with large $A$ and high $A/Z$), $u_n$ and $u_e$ are the energy densities of the degenerate neutron and electron gases, which depend only on their number density. $L$ is a (negative) energy density associated with the lattice of nuclei - i.e. some sort of crystal lattice has a lower energy.
The key thing here is the $(M(A,Z) + L)$ term. At lower densities it can be assumed that the nuclei are relatively isolated and pseudo-spherical, so that a semi-empirical mass formula will yield an estimate of $M$. But at densities above $3\times 10^{16}$ kg/m$^3$, the nuclei fill more than 10% of the volume, they are surrounded by neutrons that reduce the surface energy term, and they are becoming so large ($A>300$) that they become susceptible to fission (c.f. the Bohr-Wheeler condition for spontaneous fission).
What this means is that the equilibrium structure of the nuclear matter is no longer in the form of pseudo-spherical, individual nuclei. The nuclei distort and join together in various density-dependent forms; first spaghetti - long strings of nuclear matter with a neutron sauce; then lasagna - planes of nuclear matter with a neutron sauce. At even higher densities the roles reverse - the neutrons are in strings and planes surrounded by nuclear matter.
At densities above $\sim 10^{17}$ kg/m$^3$ the binding energy of the nuclear matter becomes so low that it is more favourable for the nuclei to dissolve into free neutrons (plus a few protons and electrons). | {
"domain": "physics.stackexchange",
"id": 33203,
"tags": "pressure, density, neutron-stars"
} |
Confusion in discrete transform to solve Kronig-Penney matrix equation in Fourier space | Question: I have a periodic potential $$V(x) =\sum_{K}e^{iKx}V_{K} =\sum_{n}e^{i2\pi nx/a}V_{n} $$ where $K =\frac{2\pi n}{a}$ is the reciprocal lattice vector, $a$ is the lattice constant and $n = 0,\pm 1,\pm 2,\pm 3,\ldots$ and so on. I would like to find the Fourier coefficients $V_{K}=V_{n}$ corresponding to a particular $K$ or $n$. Suppose I have a vector for $V(x)$ having 10000 points for $$x = 0,0.01a,0.02a,\ldots,a,1.01a,\ldots,2a,\ldots,99.99a$$ such that the size of my lattice is $100a$. Now $n$ will also go from $-50$ to $+49$. Thus I have defined the potential for 10000 points on a 1D lattice of 100 atoms. FFT on this vector gives 10000 Fourier coefficients. By the theory of the discrete Fourier transform (http://www.robots.ox.ac.uk/~sjrob/Teaching/SP/l7.pdf), the $K$ values corresponding to these Fourier coefficients are $\frac{2\pi n}{NX}$ where $N$ is the number of readings = 10000, each separated by spacing $X=0.01a$, with $n=0,1,2,3,\ldots,9999$.
But I started with $K$ that had the form $K =\frac{2\pi n}a$. What am I missing ? How do I correctly find the fourier coefficients $V_{K}$ numerically using DFT (Fast Fourier transform method in Matlab can be used) ?
For reference on the Kronig Penney fourier space matrix equation, look here http://www.physics.buffalo.edu/phy410-505/topic5/index.html
Answer: First, a somewhat minor point is that $x = 0,0.01a,0.02a,...a,1.01a,....2a....100a$ actually gives a list of 10001 points, not 10000 points. I will assume that you actually meant to say $x = 0,0.01a,...a,1.01a,....2a....99.99a$.
Second, you say that
$$V(x)=\sum_{K}e^{iKx}V_{K}$$
where $K =\frac{2\pi n}a$ and $n=0,1,2,3$, but this gives a non-Hermitian Hamiltonian, so I will instead assume that you actually meant to say $n=0,\pm1,\pm2,\pm3$ and where $V_{-K}=V_K^*$.
So the easiest way to interpret the DFT is as follows. Note that the $k$th entry (where $k=1,2,...,10000$) of the vector $\mathbf{V}$ can be written as
$$\mathbf{V}[k]=\sum_{n=-3}^3V_ne^{2\pi i 100 n (k-1)/10000}$$
where I have specifically not canceled the factor $100/10000$ in order to make the frequency domain form of the coefficients clear. From this, it's clear that each coefficient $V_n$ can be determined from the DFT as
$$V_n=\sqrt{\frac{1}{10000}}\text{DFT}(\mathbf{V})[100n+1\text{ mod }10000]$$
where $\text{DFT}(\mathbf{V})$ is the unitary FFT of $\mathbf{V}$.
As a numerical example in Mathematica:
V0 = 10;
V1 = 2 + I;
V2 = 3 + 2 I;
V3 = 4;
Vm1 = 2 - I;
Vm2 = 3 - 2 I;
Vm3 = 4;
V = Chop@Table[
Exp[I 2 \[Pi] 1/a x] V1 + Exp[I 2 \[Pi] 2/a x] V2 +
Exp[I 2 \[Pi] 3/a x] V3 + Exp[-I 2 \[Pi] 1/a x] Vm1 +
Exp[-I 2 \[Pi] 2/a x] Vm2 + Exp[-I 2 \[Pi] 3/a x] Vm3 + V0, {x,
0, 99.99 a, 0.01 a}];
DFTV = InverseFourier[V];
Here is the first unit cell:
ListLinePlot[V[[1 ;; 100]]]
Here is the DFT:
ListLinePlot[Abs[DFTV], PlotRange -> All, Frame -> True,
Axes -> False]
And here are the coefficients returned from the DFT:
Chop@Table[1/Sqrt[10000] DFTV[[Mod[100 n + 1, 10000]]], {n, -3, 3}]
(*Out: {4., 3. - 2. I, 2. - 1. I, 10., 2. + 1. I, 3. + 2. I, 4.} *)
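The same extraction can be sketched in Python/NumPy (my translation, not from the original answer; the coefficient values here are assumed examples). Note that NumPy's unnormalized `fft` convention puts $V_n$ at index $100n \bmod 10000$ after dividing by $N$:

```python
import numpy as np

a = 1.0                       # lattice constant (arbitrary units)
N = 10000                     # samples over 100 unit cells
x = np.arange(N) * 0.01 * a   # x = 0, 0.01a, ..., 99.99a

# assumed example coefficients; V_{-n} = conj(V_n) makes V(x) real
coeffs = {0: 10.0, 1: 2 + 1j, 2: 3 + 2j, 3: 4.0}

V = np.zeros(N, dtype=complex)
for n, Vn in coeffs.items():
    V += Vn * np.exp(2j * np.pi * n * x / a)
    if n != 0:
        V += np.conj(Vn) * np.exp(-2j * np.pi * n * x / a)

# np.fft.fft computes sum_k V[k] exp(-2*pi*i*k*m/N) with no 1/N factor,
# so V_n sits at index (100 n) mod N after dividing by N
Vk = np.fft.fft(V) / N
recovered = {n: Vk[(100 * n) % N] for n in range(-3, 4)}
print(recovered[1], recovered[-2])   # (2+1j) and (3-2j), up to rounding
```

All other DFT bins come out zero to machine precision, matching the Mathematica result above.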
You can verify yourself that all the other DFT coefficients are zero up to machine precision. Minor note: I used InverseFourier rather than Fourier because Mathematica's definition of Fourier and InverseFourier are reversed compared to how people commonly define Fourier transforms. | {
"domain": "physics.stackexchange",
"id": 12149,
"tags": "quantum-mechanics, condensed-matter, atomic-physics, computational-physics, fourier-transform"
} |
Why is companding used? | Question: Which one is the best answer?
Companding is used []
a. to overcome quantizing noise in PCM
b. in PCM transmitters, to allow amplitude limiting in the receivers
c. to protect small signals in PCM for quantizing distortion.
d. in PCM receivers, to overcome impulse noise.
I think c is the best answer. Am I right?
Answer: Some thoughts about the other possible answers:
a. in general, it can mitigate quantization noise, but cannot completely overcome it
b. the limited range is over the whole channel, not only the receivers.
d. impulse noise is not suppressed
Answer c. is apparently the closest match.
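As a toy numerical illustration of point c. (my own sketch, using an assumed μ-law compressor and an 8-bit uniform quantizer): a small-amplitude signal quantized directly is almost lost in quantizing distortion, while companding it first preserves a much higher SNR.

```python
import numpy as np

mu = 255.0
levels = 256                      # 8-bit quantizer
step = 2.0 / levels

def quantize(x):                  # uniform quantizer on [-1, 1]
    return np.round(x / step) * step

def compress(x):                  # mu-law compressor
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def expand(y):                    # mu-law expander (inverse of compress)
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = 0.01 * np.sin(2 * np.pi * 5 * t)          # small-amplitude signal

err_plain = x - quantize(x)
err_comp = x - expand(quantize(compress(x)))

def snr_db(err):
    return 10 * np.log10(np.mean(x**2) / np.mean(err**2))

print(snr_db(err_plain), snr_db(err_comp))    # companded SNR is far higher
```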
Did you check the definition from the companding tag at SE.DSP? Could you propose some improvement? | {
"domain": "dsp.stackexchange",
"id": 5511,
"tags": "pcm, companding"
} |
Charging current of capacitor | Question: In one of my books there is a figure
where G is a neon lamp. Basically, once the switch is closed, the capacitor gets charged up to a certain breakdown voltage $U_Z$ at which the neon lamp switches on, so that the capacitor can discharge down to a certain extinction voltage $U_L$. Further it says that from the charging voltage
$U(t)=U_0(1-\exp(-t/RC))$
of the capacitor it follows that the periodicity is
$\displaystyle T=RC\cdot\log\frac{U_0-U_L}{U_0-U_Z}$.
How exactly does this equation follow?
I am not familiar with the proper english terms in electrical engineering so I might have mixed up voltage, current, etc. I hope, it's still clear what I mean.
Answer: The time to get from zero to $U_L$ is obtained by solving (putting $RC=\tau$)
$$U_L=U_0\left(1-e^{-t/\tau}\right)\\
U_0-U_L = U_0 e^{-t/\tau}\\
\log(U_0-U_L) = \log(U_0)-t/\tau\\
t = \tau \log\frac{U_0}{U_0-U_L}$$
The time to get from zero to $U_Z$ is similarly obtained. When you take the difference between these two times and rearrange, you get the expression from your book.
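A quick numerical sanity check (all component and threshold values below are assumed, for illustration only): compute the times at which $U_L$ and $U_Z$ are reached on the charging curve and compare their difference with the book's closed form.

```python
import math

U0, UL, UZ = 10.0, 2.0, 8.0      # supply and threshold voltages (assumed)
R, C = 1.0e3, 1.0e-6             # assumed component values
tau = R * C

t_L = tau * math.log(U0 / (U0 - UL))   # time from 0 V up to U_L
t_Z = tau * math.log(U0 / (U0 - UZ))   # time from 0 V up to U_Z
T = t_Z - t_L                          # one period of the relaxation cycle

T_book = tau * math.log((U0 - UL) / (U0 - UZ))
print(T, T_book)                       # identical up to rounding
```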
Do you think you can do the rest of the derivation yourself now? | {
"domain": "physics.stackexchange",
"id": 41024,
"tags": "homework-and-exercises, electric-circuits, electrical-resistance, capacitance, electrical-engineering"
} |
Is there a basis for naming these empirical fluid model coefficients in terms of flow regime? | Question: Given any flow restrictive device (e.g. pipe, orifice, screen, etc.) one can measure data as the pressure drop across the device relative to the flow rate through the device. And from this data one can generally closely fit the polynomial
$$\Delta P = K_2Q^2+K_1Q $$
where $\Delta P$ is the pressure drop and $Q$ is the volumetric flow rate
Furthermore, there have been people who (I believe) inappropriately attribute these factors to concepts involving the Reynolds number, calling, for example, $K_2$ the 'turbulent' flow factor and $K_1$ the 'laminar' flow factor.
While it is true that $K_1$ dominates the pressure drop at low flow, and $K_2$ dominates at high flow, I don't believe there is any basis to attribute these empirical factors to laminar and turbulent flow characteristics.
Am I maybe missing something? Is there anything in the analysis of fluid dynamics that might support such a naming convention or otherwise refute it?
Answer: Suppose geometry of your flow restrictive device is fixed, so that we can take any one geometric dimension of the device (which is relevant to the flow) as your length scale, call it $d$. Assuming fluid properties remain constant, changing flow $Q$ is equivalent to changing average speed $U$ (at any particular cross-section), which we shall take as our velocity scale. Now pressure difference $\Delta p$ required to create this flow depends on fluid properties ($\rho,\mu$), geometric parameters (characterized by $d$), and flow rate (characterized by $U$). Dimensional analysis then gives
$\frac{\Delta p}{\rho U^2}=f(Re)$, where $f$ is some function of Reynolds number.
In a turbulent flow it so happens that pressure drop is determined primarily by processes other than viscous dissipation (mixing of momentum by advection of fluid elements rather than by molecular diffusion), so to a good approximation viscosity is unimportant, in which case there cannot be dependence on $Re$, i.e. $\frac{\Delta p}{\rho U^2} \approx$ constant, which means for a turbulent flow, $\Delta p \propto U^2 \propto Q^2$. I think those who call $K_2$ the turbulent flow factor are drawing upon this resemblance. But it is only a resemblance; maybe it helps the person remember which factor is which.
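Incidentally, the polynomial fit mentioned in the question is linear in the two coefficients, so $K_1$ and $K_2$ can be extracted from measured $(Q, \Delta p)$ data by ordinary least squares — a sketch with synthetic, assumed values:

```python
import numpy as np

K1_true, K2_true = 50.0, 8.0            # assumed "true" coefficients
Q = np.linspace(0.1, 5.0, 40)           # flow rates
dP = K2_true * Q**2 + K1_true * Q       # noise-free synthetic pressure drops

# Delta p = K2*Q^2 + K1*Q is linear in (K2, K1): ordinary least squares
A = np.column_stack([Q**2, Q])
(K2_fit, K1_fit), *_ = np.linalg.lstsq(A, dP, rcond=None)
print(K1_fit, K2_fit)                   # recovers 50.0 and 8.0
```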
If we write out $\Delta p$ as a polynomial series in $Q$, then for small enough $Q$ we may neglect higher powers of $Q$, and we thus have $\Delta p \propto Q$. This is the laminar flow regime, and calling $K_1$ the laminar flow factor is again only by way of resemblance.
"domain": "physics.stackexchange",
"id": 32835,
"tags": "fluid-dynamics, conventions, flow, turbulence, models"
} |
Clearpath Husky A200 model in gazebo - wrong turnspeed? | Question:
Hello community!
I have a question concerning the clearpath husky A200 gazebo model. I set up a simulation for our husky in gazebo. After I got it running, I was able to control the husky model the same way as our "real" husky, running the same nodes in the simulation as on the real husky.
But there is a strange behaviour in gazebo during steering the model. It seems as if the steering/turning speed in gazebo is very slow compared to the "real" husky.
To get some numbers, I ran the "robot_pose_ekf" package to see some odometry data in rviz. When moving forward and backward on a straight line the data seems to be OK, but the turning speed is very different. For example, a 360° spot turn in gazebo is nearly 1080° in RVIZ according to the odometry data.
Does anyone have some experience with this issue?
Regards
psei
Originally posted by psei on ROS Answers with karma: 318 on 2013-08-18
Post score: 0
Original comments
Comment by pmukherj on 2013-08-21:
Hello!
This maybe because the "real" husky is running in open loop mode while the simulated one is not. I can verify this but just so I understand the problem well. I am assuming the "real" husky has an IMU? So the 1080 degree you turn you saw in rviz is something you observed physically as well?
Answer:
A bugfix has been released to the ROS repo and should be available via apt-get in a week or so.
In the mean time, please check out https://github.com/husky/husky_simulator.git for the fixed version.
Originally posted by yanma with karma: 16 on 2013-11-01
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 15303,
"tags": "ros"
} |
Orbits of Trojan Asteroids | Question: I understand that the Trojan points are located 60 degrees ahead and behind a planet in its orbit. However, since there are quite a number of Trojans in Jupiter's orbit, they cannot all be exactly at that point. Presumably they orbit around it.
So what sort of orbits do they have? Are they circular or anywhere near so, or are they elongated along the orbit of Jupiter? If the latter how far can they get from the 60 degree point?
Answer: Trojan asteroids are in roughly circular orbits around the Sun at roughly the same distance as Jupiter, that are in 1:1 resonance with Jupiter and stay very roughly 60 degrees away from it.
Scott Manley's video below shows two classes of asteroids in resonance with Jupiter. The first one shown is confusing because it is in a 3:2 resonance and in the rotating frame it looks like they are cycling between L3, L4 and L5. Skip ahead to 33 seconds and you can see what "normal Trojan asteroids" do. Most of them stay within +/-20 degrees of L4 or L5, only a few exotic stragglers go farther than that away from their Lagrange points. There is some out-of-plane motion as well, as there is for all asteroids.
@JamesK's answer showing a rather exotic asteroid in 1:1 resonance with Earth is an extreme case, but the GIF does help to give some illustration of the back-and-forthness, even though it's pretty exaggerated compared to what normally happens.
After watching, go back to the beginning and see the more confusing 3:2 resonance orbits.
update: There's this! | {
"domain": "astronomy.stackexchange",
"id": 5439,
"tags": "asteroids, trojan-asteroids"
} |
Velocity of a photon according to a stationary observer in Schwarzschild metric | Question: Consider the Schwarzschild metric
$$ \left(1-\frac{2M}{r}\right)dt^2-\left(1-\frac{2M}{r}\right)^{-1}dr^2-r^2\left(d\theta^2+\sin^2\theta\, d\phi^2\right)$$
Final goal: calculate the observed velocity of a photon on a radial geodesic according to an observer that's stationary at some $r>2M$ above the singularity.
Such a stationary observer must have a wordline $$\gamma(\tau)=\left(t,r,0,0\right)$$
such that $\dot r=0$ and $\dot\gamma^\mu\dot\gamma_\mu=1$, this yields
$$\frac{\mathrm{d}t}{\mathrm{d}\tau}=\frac{1}{\sqrt{1-\frac{2M}{r}}} $$
The wordline for an observer stationary at some height $h$ is thus
$$ \gamma(\tau)=\left(\frac{\tau}{\sqrt{1-\frac{2M}{h}}},h,0,0\right)$$
Now, a photon geodesic must obey
$$ \frac{\mathrm{d}r}{\mathrm{d}\lambda}=\left(1-\frac{2M}{r}\right)\frac{\mathrm{d}t}{\mathrm{d}\lambda}\implies\frac{\mathrm{d}r}{\mathrm{d}t}=1-\frac{2M}{r}$$
The velocity of the photon according to the observer is
$$ \frac{\mathrm{d}r}{\mathrm{d}\tau}=\frac{\mathrm{d}r}{\mathrm{d}t}\frac{\mathrm{d}t}{\mathrm{d}\tau}=\frac{1-\frac{2M}{r}}{\sqrt{1-\frac{2M}{h}}}$$
The problem: if I plug in $r=h$ I would expect to obtain $1$, as the photon is supposed to move at the speed of light near the observer. What happened?
Answer: If we want to be precise, your calculations seem correct; it is just that the coordinates you are using are not locally inertial, and that is why you are not getting what you would like. Just a comment: do not set $\theta = 0$, since that is a coordinate singularity of spherical coordinates and therefore of the metric in these coordinates. Instead, take $\theta = \frac{\pi}{2}$, which is the equator.
In general relativity, locally inertial coordinates means the following: given some coordinates in a neighborhood of a world-line $\gamma(\lambda)$, these coordinates are called locally inertial if the metric written in these coordinates at each point $\gamma(\lambda)$ of the world-line is the Minkowski metric. However, away from the world-line the metric in these coordinates is not Minkowski!
Your coordinates are $(t, r, \phi, \theta)$ and the world-line (a non-geodesic one by the way) is let's say $$\gamma(\tau) = \left(\, \frac{\tau}{\sqrt{1-\frac{2M}{h}\,}},\, \, h,\,\, 0,\,\, \frac{\pi}{2} \,\right)$$ parametrized with respect to proper time.
The tangent coordinate vectors along the coordinate $t-$lines, $r-$lines, $\phi-$lines and $\theta-$lines are
$$\frac{\partial}{\partial t} = \begin{bmatrix}1\\0\\0\\0 \end{bmatrix}, \,\,\,
\frac{\partial}{\partial r} = \begin{bmatrix}0\\1\\0\\0 \end{bmatrix}, \,\,\,
\frac{\partial}{\partial \phi} = \begin{bmatrix}0\\0\\1\\0 \end{bmatrix} \,\,\, \text{ and } \,\,\,
\frac{\partial}{\partial \theta} = \begin{bmatrix}0\\0\\0\\1 \end{bmatrix} \,\,\,$$ respectively. They are Schwarzschild orthogonal, but their Schwarzschild length is not unit ($+1$ for the time-like vector and $-1$ for the three space-like vectors) along the world line $\gamma(\tau)$. If you rescale these vectors, however, you can achieve normalization
$$\frac{\partial}{\partial \tau} = \frac{1}{\sqrt{1-\frac{2M}{h}}}\, \frac{\partial}{\partial t} = \begin{bmatrix}\frac{1}{\sqrt{1-\frac{2M}{h}}}\\0\\0\\0\end{bmatrix} , \,\,\,
\frac{\partial}{\partial \rho} = {\sqrt{1-\frac{2M}{h}}}\, \frac{\partial}{\partial r} = \begin{bmatrix}0\\ \sqrt{1-\frac{2M}{h}} \\0\\0 \end{bmatrix}, \,\,\,$$
$$
\frac{\partial}{\partial u} = \frac{1}{h\,\sin(\frac{\pi}{2})}\,\frac{\partial}{\partial \phi} =\frac{1}{h}\,\frac{\partial}{\partial \phi} = \begin{bmatrix}0\\0\\\frac{1}{h}\\0 \end{bmatrix}, \,\,\,\,\,
\frac{\partial}{\partial w} = \frac{1}{h}\,\frac{\partial}{\partial \theta} = \begin{bmatrix}0\\0\\0\\\frac{1}{h} \end{bmatrix} \,\,\,$$
Now, you can perform the linear change of variables from $(t, r, \phi, \theta)$ to $(\tau, \rho, u, w)$ as follows
\begin{align}
&t = \frac{\tau}{\sqrt{\, 1 - \frac{2M}{h}\,}}\\
&r = \left(\sqrt{\, 1 - \frac{2M}{h}\,} \right)\, \rho \,+ \, h\\
&\phi = \frac{u}{h}\\
&\theta = \frac{w}{h} + \frac{\pi}{2}
\end{align}
and the the inverse relations are
\begin{align}
&\tau = \left({\sqrt{\, 1 - \frac{2M}{h}\,}}\right)\, t \\
&\rho = \frac{r \, - \, h}{\sqrt{1 - \frac{2M}{h}}} \\
&u = h\,\phi\\
&w = h\,\theta \, - \, \frac{h\,\pi}{2}
\end{align}
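As a quick symbolic cross-check (my sketch in SymPy, not part of the original answer) that the two coordinate maps above really are inverses of each other:

```python
import sympy as sp

tau, rho, u, w, h, M = sp.symbols('tau rho u w h M', positive=True)
s = sp.sqrt(1 - 2*M/h)

# forward map (tau, rho, u, w) -> (t, r, phi, theta)
t     = tau / s
r     = s * rho + h
phi   = u / h
theta = w / h + sp.pi / 2

# the stated inverse relations, applied to the forward map
tau_back = s * t
rho_back = (r - h) / s
u_back   = h * phi
w_back   = h * theta - h * sp.pi / 2

print([sp.simplify(e) for e in
       (tau_back - tau, rho_back - rho, u_back - u, w_back - w)])
# -> [0, 0, 0, 0]: the two maps are indeed inverse to each other
```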
The world line $\gamma(\tau)$ in the new coordinates is
$$\gamma(\tau) = \big(\tau,\, 0, \, 0, 0 \big)$$
The light-like radial geodesic in the original $(t, r, \phi, \theta)$ coordinates is
$$g(t) = \Big(\,t, \, r(t), \, 0, \, \frac{\pi}{2}\,\Big)$$ where
$$\frac{dr}{dt} = 1 - \frac{2M}{r}$$ In the new coordinates it is
$$g(\tau) = \Big(\,\tau, \, \rho(\tau), \, 0, \, 0\,\Big)$$ where we can find the function $\rho(\tau)$ by changing the coordinates in the differential equation for $r(t)$, i.e.
\begin{align}
\frac{d\rho}{d\tau} &= \frac{\frac{dr}{\sqrt{1-\frac{2M}{h}}}}{\sqrt{1-\frac{2M}{h}}\, dt} = \left(1 - \frac{2M}{h}\right)^{-1}\, \frac{dr}{dt} = \left(1 - \frac{2M}{h}\right)^{-1}\, \left(1 - \frac{2M}{r}\right)\\
& = \left(1 - \frac{2M}{h}\right)^{-1}\, \left(1 - \frac{2M}{h + \rho \, \sqrt{1-\frac{2M}{h}}}\right)
\end{align} Observe that in the new coordinates the Schwarzschild metric is the Minkowski metric for each point on the world-line $\gamma(\tau)$. Now, when the light-like geodesic $g(\tau)$ intersects the world-line $\gamma(\tau)$, i.e. the photon reaches the observer, then $\rho = 0$ and from the differential equation for $\rho(\tau)$ we see that
$$\frac{d\rho}{d\tau} = 1$$ i.e. in this locally inertial coordinate system, the speed of light is still $1$ as expected. | {
"domain": "physics.stackexchange",
"id": 55513,
"tags": "general-relativity, black-holes, speed-of-light"
} |
Why "Couldn't find an AF_INET address for [TTBA]" always come out? | Question:
Hi, I am trying to learn and run multimaster in ROCON with two turtlebots.
I launched amcl on two turtlebots and tried to flip amcl_pose to a server computer, TTBA. But several lines with "Couldn't find an AF_INET address for [TTBA]" come out in some terminals on the turtlebots, such as the terminal running amcl_demo.launch and the terminal flipping the amcl_pose topic to server TTBA.
I thought it was a wireless network problem, but I pinged server TTBA from the turtlebots and the results were normal.
So has anyone met this problem, or is it just a transient break of the wireless link?
Originally posted by scopus on ROS Answers with karma: 279 on 2014-04-21
Post score: 2
Original comments
Comment by Cyril Jourdan on 2014-10-20:
I also have the same problem. I have one machine running roscore linked via ethernet to a router. A laptop connects to that router via wifi, and has ROS_MASTER_URI defined. Both machines can ping each other. The laptop publishes on /chatter; the other one sees it but can't echo it, and displays that same error.
Answer:
Hi, I have found the solution on this post : http://answers.ros.org/question/163556/how-to-solve-couldnt-find-an-af_inet-address-for-problem/
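For readers who cannot follow the link: this error generally means the hostname the remote node advertises (here TTBA) cannot be resolved locally. Based on standard ROS network configuration practice (not taken from the linked post, so treat this as a hint rather than the accepted fix), the usual remedies are an /etc/hosts entry or setting ROS_IP/ROS_HOSTNAME on each machine, e.g.:

```shell
# on each machine, make the peer's hostname resolvable
# (192.168.1.42 is a placeholder for TTBA's actual IP)
echo "192.168.1.42  TTBA" | sudo tee -a /etc/hosts

# or advertise an address the other machines can actually reach
export ROS_IP=192.168.1.43        # this machine's own reachable IP
# alternatively: export ROS_HOSTNAME=<a name resolvable by the peers>
```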
Originally posted by Cyril Jourdan with karma: 157 on 2014-10-22
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 17730,
"tags": "ros, multimaster, rocon"
} |
IK controlling library for Schunk 7-DOF manipulator | Question:
Hi all,
Since I have initialized my Schunk 7-DOF arm, I would like to ask: is there any inverse kinematics library to control this manipulator?
PS: Ubuntu 12.04 LTS/FUERTE
Thanks!
Originally posted by Qt_Yeung on ROS Answers with karma: 90 on 2014-05-30
Post score: 0
Original comments
Comment by gracecopplestone on 2016-02-26:
Hi there, I'm just starting to work with 2 Schunk 7-DOF arms. Which library did you end up using?
Comment by greenfield on 2016-03-09:
Hi I am interested as well.
Comment by Qt_Yeung on 2016-04-22:
I used MoveIt! library, you can try it http://moveit.ros.org.
Comment by Qt_Yeung on 2016-04-22:
I used MoveIt! library, you can try it link text.
Comment by PatFGarrett on 2018-07-30:
Hello, I am working with a similar arm. I am having trouble with connecting MoveIt! to the COB software. Did you have issues there?
Answer:
See the http://wiki.ros.org/care-o-bot stack
Originally posted by davinci with karma: 2573 on 2014-05-30
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Qt_Yeung on 2014-05-30:
Do you mean the stack cob_manipulation? I also find ikfast from OpenRAVE and KDL solver on the internet. Can I use either of them to do inverse kinematics controlling? Sorry for this simple question, i'm a beginner.
Comment by davinci on 2014-05-31:
I don't have experience with that stack. Download and test it to find it out :) | {
"domain": "robotics.stackexchange",
"id": 18113,
"tags": "ros, kinematics, inverse"
} |
Does light change color on its way through a window? | Question: Looking at the refractive index of glass, it's around $1.6$.
Then the speed of light $x$ through glass should be given by
$$ 1.6 = \frac{3.0\times10^8}{x}, $$
so $x$ is about $2\times10^8~\mathrm{m}~\mathrm{s}^{-1}$
The frequency is kept constant, so the wavelength must adapt to suit the slower speed, giving a wavelength of $2/3$ the original.
Does this mean that when passing through glass, say red light (wavelength $650~\mathrm{nm}$) changes to indigo ($445~\mathrm{nm}$), as $650 \times 2/3 = 433~\mathrm{nm}$, or is my logic flawed somewhere?
Answer: What do you take to define "red" light: a wavelength of $650~\mathrm{nm}$ or a frequency of $460~\mathrm{THz}$? On the one hand, this borders on being an ill-defined question, but I suppose it can be massaged into something answerable.
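Running the question's numbers without the intermediate rounding (the $2\times10^8~\mathrm{m/s}$ step) gives a wavelength in glass of about $406~\mathrm{nm}$ rather than $433~\mathrm{nm}$, while the frequency stays fixed:

```python
c = 3.0e8              # m/s, as in the question
n = 1.6                # refractive index used in the question
lam_vac = 650e-9       # "red" wavelength in vacuum, m

f = c / lam_vac        # frequency: the quantity that stays fixed in glass
v = c / n              # speed in glass, = 1.875e8 m/s before rounding
lam_glass = v / f      # = lam_vac / n = 406.25 nm
print(f, lam_glass)
```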
I would argue that frequency is more fundamental to describing the light. After all, it is the frequency that is constant throughout all this, as you noted. When a photon strikes a receptor in your eye, it doesn't matter whether it did so after just passing through glass or through vacuum - the biochemical response is dictated by the frequency/energy of the photon. Thus it would be more appropriate to say red light stays red, but the wavelength corresponding to red shifts in glass. | {
"domain": "physics.stackexchange",
"id": 6197,
"tags": "visible-light, refraction, wavelength"
} |
Cannot define robot poses in MoveIt setup assistant | Question: I am using ROS Noetic Ninjemys with Ubuntu 20.04. I made a URDF of an end effector. I tried to configure it with MoveIt setup assistant. However, I am unable to adjust the prismatic joint "flange_vertical" while I try to define the robot poses.
My URDF is given below.
<?xml version="1.0" encoding="utf-8"?>
<!-- This URDF was automatically created by SolidWorks to URDF Exporter! Originally created by Stephen Brawner (brawner@gmail.com)
Commit Version: 1.6.0-4-g7f85cfe Build Version: 1.6.7995.38578
For more information, please see http://wiki.ros.org/sw_urdf_exporter -->
<robot
name="endeffector4">
<link name="world"/>
<link
name="base_link">
<inertial>
<origin
xyz="0.050387 0.039686 0.048716"
rpy="0 0 0" />
<mass
value="0.1149" />
<inertia
ixx="4.1367E-05"
ixy="-6.866E-14"
ixz="2.7746E-21"
iyy="4.3009E-05"
iyz="1.4022E-22"
izz="5.6702E-05" />
</inertial>
<visual>
<origin
xyz="0 0 0"
rpy="0 0 0" />
<geometry>
<mesh
filename="package://endeffector4/meshes/base_link.STL" />
</geometry>
<material
name="">
<color
rgba="0.79216 0.81961 0.93333 1" />
</material>
</visual>
<collision>
<origin
xyz="0 0 0"
rpy="0 0 0" />
<geometry>
<mesh
filename="package://endeffector4/meshes/base_link.STL" />
</geometry>
</collision>
</link>
<joint name="world_base_link" type="fixed">
<parent link="world"/>
<child link="base_link"/>
<origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/>
</joint>
<link
name="vertical">
<inertial>
<origin
xyz="0 0.0502311372932116 0.00491665820356758"
rpy="0 0 0" />
<mass
value="0.00953772572267691" />
<inertia
ixx="7.5511917482841E-06"
ixy="7.3731952992275E-22"
ixz="2.38714677766247E-23"
iyy="1.58301414257507E-07"
iyz="-2.70631615917454E-08"
izz="7.55042505209247E-06" />
</inertial>
<visual>
<origin
xyz="0 0 0"
rpy="0 0 0" />
<geometry>
<mesh
filename="package://endeffector4/meshes/vertical.STL" />
</geometry>
<material
name="">
<color
rgba="0.792156862745098 0.819607843137255 0.933333333333333 1" />
</material>
</visual>
<collision>
<origin
xyz="0 0 0"
rpy="0 0 0" />
<geometry>
<mesh
filename="package://endeffector4/meshes/vertical.STL" />
</geometry>
</collision>
</link>
<joint
name="flange_vertical"
type="prismatic">
<origin
xyz="0.050387 -0.060076 0.0086904"
rpy="1.5708 0 0" />
<parent
link="base_link" />
<child
link="vertical" />
<axis
xyz="0 1 0" />
<limit
lower="0.04"
upper="0"
effort="1"
velocity="0.01" />
<calibration
rising="100"
falling="0" />
<safety_controller
k_position="20"
k_velocity="20"
soft_lower_limit="0.03"
soft_upper_limit="0.0" />
</joint>
<link
name="horizontal">
<inertial>
<origin
xyz="-0.014525 -1.569E-08 0.028652"
rpy="0 0 0" />
<mass
value="0.0063234" />
<inertia
ixx="3.8486E-06"
ixy="1.1544E-12"
ixz="8.5999E-08"
iyy="3.8887E-06"
iyz="-2.2122E-12"
izz="1.1115E-07" />
</inertial>
<visual>
<origin
xyz="0 0 0"
rpy="0 0 0" />
<geometry>
<mesh
filename="package://endeffector4/meshes/horizontal.STL" />
</geometry>
<material
name="">
<color
rgba="0.79216 0.81961 0.93333 1" />
</material>
</visual>
<collision>
<origin
xyz="0 0 0"
rpy="0 0 0" />
<geometry>
<mesh
filename="package://endeffector4/meshes/horizontal.STL" />
</geometry>
</collision>
</link>
<joint
name="vertical_horizontal"
type="revolute">
<origin
xyz="0 0.09 0.005"
rpy="-0.0056389 0 3.1416" />
<parent
link="vertical" />
<child
link="horizontal" />
<axis
xyz="1 0 0" />
<limit
lower="-3.14"
upper="1.5708"
effort="30"
velocity="3.14" />
</joint>
</robot>
There is no error. When I try to visualise the URDF with rviz, the joint moves as expected. I tried changing the limits of the prismatic joint, but that didn't work for me.
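One thing worth noticing in the URDF above before reading the accepted answer (my own observation, not confirmed in the thread): the flange_vertical joint declares lower="0.04" and upper="0", i.e. lower > upper, which gives the joint an inverted, effectively empty range. URDF expects lower <= upper, so a minimal ordered version of that limit would be:

```xml
<limit lower="0.0" upper="0.04" effort="1" velocity="0.01" />
```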
Answer: When I adjusted the upper/lower limits, I was able to control the Prismatic joint:
<joint
name="flange_vertical"
type="prismatic">
<origin
xyz="0.050387 -0.060076 0.0086904"
rpy="1.5708 0 0" />
<parent
link="base_link" />
<child
link="vertical" />
<axis
xyz="0 1 0" />
<limit
lower="0.04"
upper="100.0"
effort="1"
velocity="0.01" />
<calibration
rising="100"
falling="0" />
<safety_controller
k_position="20"
k_velocity="20"
soft_lower_limit="0.05"
soft_upper_limit="90.0" />
</joint> | {
"domain": "robotics.stackexchange",
"id": 2734,
"tags": "ros, ros-noetic, urdf, moveit"
} |
Complexity of deciding whether a family is a Sperner family | Question: We are given a family $\mathcal{F}$ of $m$ subsets of $\{1, \ldots, n\}$. Is it possible to find a non-trivial lower bound on the complexity of deciding whether $\mathcal{F}$ is a Sperner family? The trivial lower bound is $\Omega(nm)$ and I strongly suspect that it is not tight.
Recall that a set $\mathcal{S}$ is a Sperner family
if for $X$ and $Y$ in $\mathcal{S}$; $X \ne Y$ implies that $X \nsubseteq Y$ and $Y \nsubseteq X$.
Answer: Can't you solve this by matrix multiplication? Let the sets be $S_1$, $S_2$, $\ldots$, $S_m$. Take matrix $A$ to be the $m \times n$ matrix where $A_{ij}=1$ if $j \in S_i$ and 0 otherwise, and $B$ to be the $m \times n$ matrix where $B_{ij}=1$ if $j \notin S_i$ and 0 otherwise. Then $(AB^T)_{ik} = |S_i \setminus S_k|$, so $AB^T$ has a $0$ entry off the diagonal if and only if one set of $\mathcal{F}$ is contained in another (the diagonal entries are always $0$).
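A sketch of this reduction in Python/NumPy (naive dense multiplication, with names of my own choosing; sets are 0-indexed here, and the diagonal must be masked out since $|S_i \setminus S_i| = 0$ trivially):

```python
import numpy as np

def is_sperner(sets, n):
    """sets: list of m distinct subsets of {0, ..., n-1}, as Python sets."""
    m = len(sets)
    A = np.zeros((m, n), dtype=int)
    for i, S in enumerate(sets):
        A[i, list(S)] = 1
    B = 1 - A                      # B[i, j] = 1 iff j not in S_i

    P = A @ B.T                    # P[i, k] = |S_i \ S_k|
    np.fill_diagonal(P, 1)         # ignore the trivial S_i <= S_i
    return not np.any(P == 0)      # off-diagonal zero <=> some S_i <= S_k

print(is_sperner([{0, 1}, {1, 2}, {0, 2}], 3))   # True  (an antichain)
print(is_sperner([{0}, {0, 1}], 2))              # False ({0} is inside {0,1})
```

Replacing the `@` with a fast matrix-multiplication routine gives the sub-cubic algorithms mentioned at the end of the answer.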
So if you prove a lower bound of $\Omega(n^{2+\epsilon})$ for the case where $m = \Theta(n)$, you have proven the same lower bound for matrix multiplication. This is a famous open problem.
I haven't thought much about it, but I don't see any way you could prove that this particular case of matrix multiplication is essentially as hard as the general case; if you really need a lower bound, this would seem to be the only hope you have of proving one without solving the matrix multiplication problem.
On the plus side, this gives algorithms for this problem that are better than the naive algorithm, which takes $\Theta(m^2 n)$.
"domain": "cstheory.stackexchange",
"id": 404,
"tags": "ds.algorithms, co.combinatorics"
} |
Can someone clarify how many metrics exist that satisfy the EFEs? | Question: As I currently understand it, there are two ways to work with the Einstein Field Equations: (1) exact solutions and (2) approximations that work under certain conditions.
I also understand that there are two basic ways to go about discovering exact solutions.
1. Propose a matter distribution and see if there exists a metric out there in nature that matches the results you get around such matter distributions.
2. Propose a metric and see if there is a matter distribution that matches it.
My Question:
In Newtonian gravity, we basically only had one formula, Newton's gravity equation. But it seems like for Einstein gravity we have to do everything on a case by case basis, so we actually end up with many different tools that are useful but not universally applicable to all circumstances. Is that a misimpression? If not, how many different kinds of metrics do we expect are needed to describe the most common gravitational phenomena?
Answer: There are an infinite number of solutions. Given one solution, we may always perturb it to get a new solution.
See Wald (1984) sections 7.4, 7.5 and references therein.
Note that the perturbed solutions may not be physically significant. In particular, asymptotic flatness is not always preserved. | {
"domain": "physics.stackexchange",
"id": 19443,
"tags": "general-relativity"
} |
Does the mass lost by merging black holes depend on how they merged? | Question: We've all heard the news about the detection by gravitational waves of two black holes, one 29 solar masses and the other 36 solar masses, spiraling into each other to create a single black hole of 62 solar masses.
For me, the loss of three solar masses into the gravitational waves over a fraction of a second is the most amazing aspect of this. I can understand how the waves were created (and mass changed into broadcast energy) by the spiraling motion of the black holes as they did their final do-si-do, and it's clear that losing that energy was a necessary part of their joining; otherwise they would have orbited forever.
However, does the amount of mass lost depend on just how the two holes merged? For instance, if they had rammed each other head-on, would they still have somehow lost that amount of energy? I expect that less energy would have been diverted into the resultant black hole's spinning, but I don't know how that would have changed the resulting mass.
Answer: Yes. The amount of energy radiated as gravitational waves will depend on the details of the two black holes before the merger. The answers to questions like:
Was the orbit circular or elliptical?
Were they spinning?
Were the spins of the black holes aligned with the orbital plane?
will affect the energy of the gravitational waves. The most important detail in terms of energy radiated is the mass ratio of the two precursor black holes.
The final mass of the system $M_\mathrm{fin}$ is always less than the original total mass ($m_1 + m_2$), since some of the mass energy gets converted to gravitational waves, $M_\mathrm{rad}$.
$$M_\mathrm{fin} + M_\mathrm{rad} = m_1 + m_2 $$
Because of something akin to the second law of thermodynamics the final black hole must be bigger than the biggest original black hole. Basically, you can't radiate so much energy in gravitational waves that a black hole shrinks.
$$M_\mathrm{fin} > m_1 \quad \mathrm{and} \quad M_\mathrm{fin} > m_2$$
We can define the fraction of mass radiated aways as:
$$ e = \frac{M_\mathrm{rad}}{m_1 + m_2} $$
This is sometimes called the efficiency of radiation.
If the black holes have about the same mass (as they did in the LIGO detection), about 5% of the total mass will be radiated away. This is the most efficient possibility.
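Plugging in the numbers quoted in the question (29 and 36 solar masses merging into 62) gives a concrete feel for this efficiency; a quick Python sketch:

```python
# Radiated-mass fraction for the LIGO merger quoted in the question:
# two black holes of 29 and 36 solar masses merging into one of 62.
m1, m2 = 29.0, 36.0   # precursor masses (solar masses)
m_fin = 62.0          # final black hole mass (solar masses)

m_rad = m1 + m2 - m_fin   # mass radiated as gravitational waves
e = m_rad / (m1 + m2)     # efficiency of radiation

print(f"M_rad = {m_rad} solar masses, e = {e:.3f}")  # 3.0 solar masses, e = 0.046
```

which reproduces the roughly 5% figure for near-equal masses.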
On the other extreme imagine the case where one black hole is way more massive than the other: maybe 1 million solar masses and 1 solar mass. In order to follow the two rules stated above, $M_\mathrm{fin}$ is less than 1 million and one solar masses and greater than 1 million solar masses. In this case the efficiency would be about $e=10^{-6}$ or 0.0001%. Extreme mass ratios produce the weakest gravitational waves. | {
"domain": "physics.stackexchange",
"id": 28699,
"tags": "black-holes, energy-conservation, mass-energy, gravitational-waves, ligo"
} |
In what forms does fire energy transfer in common situations? | Question: Yesterday I was standing by a campfire. I used to think that campfire heat was carried to me only by the air. It was heating my face too much, so I blocked it with my hand, just like blocking the sun. Then the area of my face shadowed by my hand stopped getting heat from the campfire. I thought that if the heat were being transferred only by air, it would pass around my hand, so something that travels in a straight line must be transferring most of the heat. And yet the campfire does not look that bright. Is it some kind of radiation, invisible light?
Answer: Yes, infrared radiation, which is invisible to human eyes but still carries heat. This is how thermal cameras work: they detect your body's infrared radiation and convert it into an image rendered in visible light. | {
"domain": "physics.stackexchange",
"id": 13035,
"tags": "thermodynamics, radiation"
} |
Implementing C++ code into ROS using roscpp | Question:
If I write a program in C++, is there a way for ROS to recognize it and run the code through roscpp? I am writing a C++ program to recognize a sensor (connected via USB), read in the data, track the data, etc. Will roscpp allow me to run that C++ program through ROS? (I will be using diamondback and the TurtleBot).
Originally posted by scheithauer on ROS Answers with karma: 1 on 2012-02-01
Post score: 0
Answer:
You should have a look at the tutorials for roscpp. They will show you how to do what you're trying to do.
Originally posted by DimitriProsser with karma: 11163 on 2012-02-01
This answer was ACCEPTED on the original site
Post score: 6 | {
"domain": "robotics.stackexchange",
"id": 8071,
"tags": "c++, roscpp"
} |
When do Hamilton's equations and the Heisenberg equation give different solutions? | Question: I'm self-studying QM and have become fascinated by the Heisenberg picture. I have a question about the relationship between the Heisenberg picture and classical mechanics. Consider a Hamiltonian of the simple form:
\begin{equation*}
H=\frac{p^2}{2m}+V(x),
\end{equation*}
where $V(x)$ is a sufficiently smooth function (or even polynomial if that makes things easier).
The Hamilton's equation
\begin{equation*}
\frac{dx}{dt}=\{x,H \}; \quad \frac{dp}{dt}=\{p,H\}
\end{equation*}
and the Heisenberg equation (of course here $x$ and $p$ are understood as operators)
\begin{equation*}
\frac{dx}{dt}=\frac{1}{i\hbar}[x,H]; \quad \frac{dp}{dt}=\frac{1}{i\hbar}[p,H]
\end{equation*}
reduce to the same form
\begin{equation*}
\frac{dx}{dt}=\frac{p}{m}; \quad \frac{dp}{dt}=-\frac{\partial V}{\partial x}.
\end{equation*}
The difference is in QM we have $[x,p]=i\hbar$.
Griffiths's book discusses the solution of the Heisenberg equation for the free particle and the harmonic oscillator. The solutions in both cases are exactly the same as their classical counterparts. The former is
\begin{equation*}
x(t)=x(0)+\frac{p(0)}{m}t; \quad p(t)=p(0)
\end{equation*}
and the latter is
\begin{equation*}
x(t)=x(0)\cos(\omega t)+\frac{p(0)}{m\omega} \sin(\omega t); \\
p(t)=p(0)\cos(\omega t)-m\omega x(0) \sin(\omega t).
\end{equation*}
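As a numerical sanity check (a Python sketch; the mass, frequency, and initial conditions are arbitrary choices), the closed-form oscillator solution above satisfies $\dot{x}=p/m$ and $\dot{p}=-m\omega^2 x$ to finite-difference accuracy:

```python
import math

# Harmonic-oscillator solution from above, with V(x) = (1/2) m w^2 x^2.
m, w = 1.3, 2.1          # arbitrary mass and angular frequency
x0, p0 = 0.7, -0.4       # arbitrary initial conditions x(0), p(0)

def x(t):
    return x0 * math.cos(w * t) + p0 / (m * w) * math.sin(w * t)

def p(t):
    return p0 * math.cos(w * t) - m * w * x0 * math.sin(w * t)

# Central finite differences approximate the time derivatives.
h, t = 1e-6, 0.9
dx_dt = (x(t + h) - x(t - h)) / (2 * h)
dp_dt = (p(t + h) - p(t - h)) / (2 * h)

assert abs(dx_dt - p(t) / m) < 1e-6         # dx/dt = p/m
assert abs(dp_dt + m * w**2 * x(t)) < 1e-6  # dp/dt = -dV/dx = -m w^2 x
```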
My question is: is it generally true that the Hamilton's equation in the classical mechanics and Heisenberg equation in QM always give the same form of solutions (assume $H=p^2/(2m)+V(x))$? Here I'm only interested in the apparent similarity of the solutions, and I understand despite their similar looking, their physical meanings are completely different (for example, free particles have a nontrivial dispersion relation, or QHO has discrete energy levels). I'm just curious why $[x,p]=i\hbar$ does not affect the form of the solutions.
My guess is the solutions only have the same form for these two simple cases. They are special because the right hand side is linear with respect to $x$ and $p$. I appreciate if someone can explain the general case.
Answer: It is possible to prove $[p, f(x)] = \frac {\hbar}{i} \frac{d}{dx} f(x)$ for any analytical $f(x)$. That's actually not that difficult; you can prove it, for example, by induction:
$$[p, x^0] = 0 = \frac {\hbar}{i} \frac{d}{dx} (x^0)$$
$$[p, x^{n+1}] = x^n [p,x] + [p, x^n] x $$
$$= - i \hbar x^n + \frac {\hbar}{i} n \cdot x^{n-1} x $$ $$= \frac {\hbar}{i} (n+1) x^n$$ $\rightarrow$ the statement is true for all $x^n$ $\rightarrow$ since the commutator is linear, the statement is true for all analytical functions. That proves that this is true in all pictures (and not only in the Schrödinger picture where $p = \frac{\hbar}{i} \frac{d}{dx}$ in position-space is the definition of p).
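In the position representation, where $p = \frac{\hbar}{i}\frac{d}{dx}$, the identity $[p, f(x)] = \frac{\hbar}{i} f'(x)$ acting on a wavefunction is just the product rule. A numerical sketch of that statement (the constant $\hbar/i$ appears on both sides and is dropped; $f$ and the test function $g$ are arbitrary choices):

```python
import math

# Check D(f*g) - f*D(g) = f'(x) * g numerically for f(x) = x**3, which is
# the position-representation content of [p, f(x)] = (hbar/i) f'(x)
# applied to a wavefunction g (the factor hbar/i cancels on both sides).
def deriv(func, x, h=1e-5):
    # central finite difference, error O(h^2)
    return (func(x + h) - func(x - h)) / (2 * h)

f = lambda x: x**3
g = lambda x: math.exp(-x**2)   # arbitrary smooth test function

x0 = 0.8
lhs = deriv(lambda x: f(x) * g(x), x0) - f(x0) * deriv(g, x0)
rhs = 3 * x0**2 * g(x0)         # f'(x) * g, with f'(x) = 3 x^2
assert abs(lhs - rhs) < 1e-6
```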
Hence, if $H = \frac{p^2}{2 m} + V(x)$ with analytical V, Heisenberg's equations give:
$$\frac{dx}{dt} = \frac{p}{m}$$ $$\frac{dp}{dt} = \frac{1}{i \hbar} \frac{\hbar}{i} \frac{dV}{dx} = - \frac{dV}{dx}.$$
So they are exactly the same as the classical equations of motion.
Knowing about Newton, everybody will probably agree that the equations above are actually the classical equations. However, for completeness, here is a proof that they are the same as Hamilton's equations:
Since the proof of $[p, f(x)] = \frac {\hbar}{i} \frac{d}{dx} f(x)$
only uses the commutation relations and the algebra of Poisson
brackets is the same as the algebra of commutators, $\{p, f(x)\} = -\frac {d}{dx} f(x)$ is also true for every analytical f. So, a
Hamiltonian $H = \frac{p^2}{2 m} + V(x)$ with analytical V will always
give the following Hamilton equations: $$\frac{dx}{dt} = \frac{p}{m}$$
and $$\frac{dp}{dt} = - \frac{dV}{dx}$$
Therefore, as long as $H = \frac{p^2}{2 m} + V(x)$ with analytical V,
Heisenberg's equations should give the same result as Hamilton's
equations. But I should note that I've never read that anywhere, those
are just my own thoughts, so I could be mistaken.
Edit: By now, I've actually found it in the book "Heisenberg's QM" by Razavy (though only the result without proof).
Edit: But, as Qmechanic pointed out, the order ambiguity of operators like $p \cdot x$ still remains, even if H is completely seperated in x and p. | {
"domain": "physics.stackexchange",
"id": 95546,
"tags": "quantum-mechanics, operators, hamiltonian-formalism, commutator, poisson-brackets"
} |
Understanding basics of tensors | Question: I am trying to understand tensors to learn General relativity.
In the book that I am reading they claim that if the basis of a vector space undergoes a linear transformation $T$ then the components of the vector undergo the linear transformation $(T^{-1})^T$ (transpose of inverse of $T$).
This is how they prove it.
If the initial basis of a vector space is $\{e_i\}_1^n$ and the final basis $\{f_i\}_1^n$ is related by $$f_i=T_i{}^je_j$$ then for an arbitrary vector $v=x^ie_i=y^jf_j$ we have, $$y^k=x^j(T^{-1})_j{}^k=((T^{-1})^T)^k{}_jx^j$$
I am not able to understand the last equality at all.
I believe it is false.
Can somebody clarify if it is correct or wrong?
Answer: I'll use a notation where I write vectors as $\vec{v}$ (because that's how \vec renders and I have no idea how to turn it into boldface in MathJax) and use primes to indicate the different basis: $v'^i$ is the components of $\vec{v}$ in the primed basis (some people use primes on the indices, which I find confusing).
So given some basis $\left\{\vec{e}_i\right\}$ we can write any vector $\vec{v}$ as
$$\vec{v} = v^i\vec{e}_i$$
Now consider a new basis $\left\{\vec{e'}_i\right\}$ which is related to the original basis by some nonsingular transformation matrix $T$:
$$\vec{e'}_j = T_j{}^i\vec{e}_i$$
(Note that this is just matrix multiplication using the Einstein summation convention, and in particular $T$ is not a tensor.)
Well, we can express $\vec{v}$ in the new basis:
$$
\begin{align}
\vec{v} &= v'^j\vec{e'}_j\\
&= v'^jT_j{}^i\vec{e}_i\\
&= v^i\vec{e}_i
\end{align}
$$
And comparing the second-last and last lines of this we get
$$v^i = v'^jT_j{}^i$$
OK, well we know $T$ is non-singular so there exists a $T^{-1}$, such that, in components
$$T_j{}^i\left(T^{-1}\right)_i{}^k = \delta_j{}^k$$
So let's multiply the expression for $v^i$ above by this on the right:
$$
\begin{align}
v^i\left(T^{-1}\right)_i{}^k
&= v'^j T_j{}^i\left(T^{-1}\right)_i{}^k\\
&= v'^j\delta_j{}^k\\
&= v'^k
\end{align}
$$
or (big fanfare, and renaming indices gratuitously)
$$v'^i = v^j \left(T^{-1}\right)_j{}^i$$
And finally, we want to make this look like matrix multiplication on the left, so we need to diddle the indices of $T$:
$$v'^i = \left(\left(T^{-1}\right)^T\right)^i{}_j v^j$$
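As a quick numerical check of this result (a pure-Python sketch with a hypothetical 2×2 transformation; the old basis is taken to be the standard basis, so $\vec{e'}_j$ is simply row $j$ of $T$):

```python
# Verify v'^i = ((T^{-1})^T)^i_j v^j by checking that v'^j e'_j reproduces v,
# using a concrete nonsingular 2x2 T and the standard basis for e_i.
T = [[2.0, 1.0],
     [1.0, 1.0]]                      # nonsingular transformation, det = 1
v = [3.0, 5.0]                        # components of v in the old basis

det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
T_inv = [[ T[1][1] / det, -T[0][1] / det],
         [-T[1][0] / det,  T[0][0] / det]]

# v'^i = sum_j ((T^{-1})^T)^i_j v^j = sum_j (T^{-1})_j^i v^j
v_new = [sum(T_inv[j][i] * v[j] for j in range(2)) for i in range(2)]

# Reconstruct v from the new components and new basis e'_j = T_j^i e_i:
# component i of v'^j e'_j is sum_j v'^j T[j][i].
v_back = [sum(v_new[j] * T[j][i] for j in range(2)) for i in range(2)]
assert all(abs(a - b) < 1e-12 for a, b in zip(v_back, v))
```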
And we're done. | {
"domain": "physics.stackexchange",
"id": 30857,
"tags": "tensor-calculus, linear-algebra"
} |
All balanced parentheses for a given N | Question:
Write a function (in Haskell) which makes a list of strings
representing all of the ways you can balance N pairs of parentheses:
Example:
balancedParens 0
[""]
balancedParens 1
["()"]
balancedParens 2
["()()","(())"]
balancedParens 3
["()()()","(())()","()(())","(()())","((()))"]
My code produces the right answer but is very slow.
I would appreciate any help on how to speed it up (I have little Haskell experience, so the main struggle for me was implementing the proper algorithm).
module Balanced.Parens where
import Data.List
import Data.List.Split
mutateBP :: String -> [String]
mutateBP "()" = ["(())", "()()"]
mutateBP xs = [xs ++ "()", "()" ++ xs, "(" ++ xs ++ ")"]
continuousSubSeqs :: String -> [String]
continuousSubSeqs = filter (not . null) . concatMap inits . tails
indicesOfSubStr :: String -> String -> [Int]
indicesOfSubStr [] _ = []
indicesOfSubStr sub str = filter (\i -> sub `isPrefixOf` drop i str) $ head sub `elemIndices` str
splitBP :: String -> String -> [[String]]
splitBP s xs = chunksOf 2 (concat (map (\i -> [take (i + 1) xs, drop (i + 1) xs]) (indicesOfSubStr s xs)))
get1st :: (a,b,c) -> a
get1st (a,_,_) = a
get2st :: (a,b,c) -> b
get2st (_,b,_) = b
get3st :: (a,b,c) -> c
get3st (_,_,c) = c
zipWithNums' :: String -> [(Char, Int, Int)]
zipWithNums' xs = zip3 xs (parensNum xs) [0..]
findMatch "" = (0, "", "", 0)
findMatch xs = (length xs, group, tail', matchIdx)
where zipWithNums = zip3 xs (parensNum xs) [0..]
matchNum = 1 + (get2st $ head zipWithNums)
matchPar = if '(' == (get1st . head) zipWithNums then ')' else '('
matchAndTail = dropWhile (\x -> (get1st x /= matchPar) || (get2st x /= matchNum) ) (tail zipWithNums)
matchIdx = if 0 == length matchAndTail then 0 else (get3st $ head matchAndTail) + 1
group = take matchIdx xs
tail' = drop matchIdx xs
groups xs = nub $ filter (\x -> (length $ get2st x) /= 0) [(take (xs_l - l) xs, g, t) | (l, g, t, idx) <- map findMatch (tails xs)]
where xs_l = length xs
parensNum :: String -> [Int]
parensNum xs = scanl (\acc x -> if '(' == x then acc + 1 else acc - 1) 1 xs
isValidBP :: String -> Bool
isValidBP "" = True
isValidBP xs = (length valid_parens_num) - 1 == length xs
where parens_num = parensNum xs
valid_parens_num = takeWhile (>0) parens_num
balancedParens :: Int -> [String]
balancedParens 0 = [""]
balancedParens 1 = ["()"]
balancedParens n = nub $ concat $ map (\x -> [p ++ z ++ s| p <- [get1st x], z <- mutateBP $ get2st x, s <- [get3st x]]) gr
where gr = concat $ map groups (balancedParens (n - 1))
Answer: The code can be sped up, and it can also be simplified. For n > 0, there is a unique way to write a balanced string as (a)b where a and b are balanced strings.
a and b must have n-1 pairs of parentheses, total. This leads us to the following solution:
balancedParens 0 = [""]
balancedParens n = ["(" ++ a ++ ")" ++ b | i <- [0 .. n - 1], a <- balancedParens i, b <- balancedParens (n - i - 1)] | {
"domain": "codereview.stackexchange",
"id": 11613,
"tags": "algorithm, strings, haskell, balanced-delimiters"
} |
How to interpret the IMU raw data? | Question:
Hello
I want to check my IMU raw data, so I want to find out the range of the orientation yaw and pitch angles. When I look at the IMU data I have something like this:
orientation:
x: 0.00726589653641
y: 0.0109150372446
z: 0.452039599419
w: 0.891901493073
orientation_covariance: [0.017453292519943295, 0.0, 0.0, 0.0, 0.017453292519943295, 0.0, 0.0, 0.0, 0.15707963267948966]
angular_velocity:
x: 0.0101893069223
y: -0.0161255002022
z: 0.0226749740541
angular_velocity_covariance: [0.0004363323129985824, 0.0, 0.0, 0.0, 0.0004363323129985824, 0.0, 0.0, 0.0, 0.0004363323129985824]
linear_acceleration:
x: -0.535392701626
y: 0.325684845448
z: 9.78638076782
linear_acceleration_covariance: [0.00040000000000000002, 0.0, 0.0, 0.0, 0.00040000000000000002, 0.0, 0.0, 0.0, 0.00040000000000000002]
So what are the pitch and yaw angles?
Any help?
Originally posted by Astronaut on ROS Answers with karma: 330 on 2013-07-25
Post score: 0
Answer:
The orientation given here is in quaternion representation and not roll-pitch-yaw (RPY) representation.
This quaternion representation can be converted to RPY representation either by using a function in ROS or by calculating it manually using the formula, which can be found on the web.
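For reference, one common convention (ZYX Euler angles) applied to the quaternion from the question, sketched in plain Python; in ROS one would typically call tf.transformations.euler_from_quaternion instead:

```python
import math

# Convert an orientation quaternion to roll/pitch/yaw using the standard
# ZYX (yaw-pitch-roll) conversion formula. Other Euler conventions exist,
# so check which one your application expects.
def quaternion_to_rpy(x, y, z, w):
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

# Quaternion from the IMU message above.
roll, pitch, yaw = quaternion_to_rpy(
    0.00726589653641, 0.0109150372446, 0.452039599419, 0.891901493073)
print(math.degrees(yaw))  # roughly 53.8 degrees of yaw; roll and pitch are small
```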
Originally posted by sai with karma: 1935 on 2013-07-25
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by P.Naughton on 2013-07-25:
There is code to do this conversion posted in this ros answers page http://answers.ros.org/question/11545/plotprint-rpy-from-quaternion/#17106 | {
"domain": "robotics.stackexchange",
"id": 15059,
"tags": "ros, yaw"
} |
How should one model a structural model with both line elements and area elements? | Question: I am modelling my structure using FEM, and my structure contains line elements (1D) such as beams and columns, and area elements (2D) such as walls and slabs.
My question is: how should I proceed with the modelling, and how can I interpret the results for stress and strain, especially at the places where the 1D elements meet the 2D elements? Take for example:
A beam's Center of Gravity might not be located within the slab, but in reality the beam and slab are still touching each other. In other words the beam is offset from the slab
A wall's center of gravity might not be located within the slab, but in reality the wall and slab are still touching each other. In other words the wall is offset from the slab
A beam and a column are supposed to be connected together, except that now the beam is offset in left and column in right. If going by line element, they are no longer touching, although in reality, they still are.
Is this a solved problem? Any research literature that I can refer to, or any software packages that you can recommend that handle this gracefully?
Answer: Say we have a slab supported on a beam.
The centroid of the slab and the centroid of the beam are not coincident. Fortunately, in FEM software packages the geometric centroid of the element can be offset from the nodes that define the element.
The sketch below shows a case where the shells have been offset such that the nodes are at the bottom face and the beams have been offset such that the nodes are at the top face. This way, the section properties accurately reflect the true geometry but the shell and beam elements are able to share nodes (which is essential).
Now for implementation. Most FEM packages should be able to handle this. I'm familiar with LARSA so my notes below are specific to that software package, but SAP, RISA, MIDAS, LUSAS, ABAQUS, etc. etc. etc. should have equivalent capabilities. Furthermore, there's likely quite a few different ways of solving this problem. What follows is just one of them -- just because it seems like some pictures might help.
Here's a wireframe and extruded view of a quick beam-slab model I put together. The plate elements and beam elements are shown at their centroids. Note that the plate and beam elements share nodes but their centroids are offset from one another.
In this case, this offset was accomplished in two steps. The beam itself was defined using Section Composer with a geometry that places the node at the top face of the beam. Then, member end offsets were used to offset the top of the beam half the plate thickness downward. The plate elements are defined using nodes at the plate centroid so these two steps shift the beam geometry down so that the top of the beam is 'in contact' with the bottom of the plate.
Basically, look for something like "member end offsets" in your FEM package. | {
"domain": "engineering.stackexchange",
"id": 553,
"tags": "structural-engineering, finite-element-method, structural-analysis"
} |
Will the ball fall down? | Question:
I have just started learning physics.
The picture shows the ISS orbiting Earth.
Suppose now we (or rather, God) put a baseball behind the ISS. The ball is not moving with the ISS, we just put it on the track of the ISS and let go of it.
Will this ball fall directly to Earth while things in the ISS are floating?
Answer: It is all about the sideways speed of the object.
The ISS and also your baseball are falling towards Earth constantly. If you just let go of the ball up there, then it will fall down and crash on the ground underneath.
Push it a bit sideways while letting go and it still crashes but a bit more to the side.
Push it even more, and it still crashes but this time far to the side.
Now push so hard that it reaches such a great sideways speed that although it still falls, it misses! It misses Earth entirely! It falls past Earth and is now flying away from earth on the other side. Earth's gravity slows it down until it eventually comes back again - and it will due to symmetry miss again, this time from the other side. It will repeat this forever. Your baseball is now tracing out an elliptic orbit.
Give it an even greater sideways speed, and the ellipse widens. At some specific speed, the ellipse becomes just as wide as it is tall - you now have a circular orbit. The distance to the ground is now constant, the same at any moment.
Adjust your altitude, and you will be able to find a place where the sideways speed necessary for circular orbits perfectly matches the sideways speed of Earth's surface due to its rotation. Meaning, a place where the orbital period is 24 hours. Then you have what is called a geosynchronous orbit.
This might be the most peculiar of all orbits, because now your ball looks stationary when we look at it from the ground. It looks as though it is just hanging there by an invisible thread, just floating weightlessly. This is typically the positioning of satellites that must cover the globe in precise and somewhat fixed patterns.
But in reality it is still falling. Every satellite and space station and astronaut and baseball in geosynchronous orbit, is still falling. Your rotational speed while standing on Earth's surface just happens to match this motion, so that the relative speed* seems to be zero. Only an object in deep outer space away from any gravitational field can truly be called weightless. Anywhere else it would be constantly falling.
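To put a number on the geosynchronous case (a back-of-the-envelope Python sketch; the gravitational parameter and sidereal day length are standard values not given in the answer): balancing gravity against the centripetal acceleration of a circular one-day orbit gives the familiar altitude of roughly 36,000 km.

```python
import math

# Circular orbit: GM/r^2 = (2*pi/T)^2 * r  =>  r = (GM * T^2 / (4*pi^2))^(1/3)
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2 (standard value)
T = 86164.1           # one sidereal day in seconds (standard value)
R_EARTH = 6.371e6     # mean Earth radius, m

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
print(f"orbit radius ~ {r/1000:.0f} km, altitude ~ {altitude_km:.0f} km")
```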
Whenever you see astronauts in orbit free-floating, then they are actually not stationary. They just move (fall) at exactly the same speed* as the cameraman.
* To be precise, it is the rotational or angular relative speed that is zero, since it is the angular speeds of you and of the geosynchonous space object that are equal (you both sweep through the same angle per second, hence both covering 360 degrees per 24 hours). Your linear or translational speeds are still very different, since the space object farther away has a longer orbit to move through during those same 24 hours than you standing on Earth's surface farther down (closer to the circle centre). | {
"domain": "physics.stackexchange",
"id": 97513,
"tags": "newtonian-mechanics, orbital-motion, free-body-diagram"
} |
High-resolution georeferenced population data | Question: Can anyone recommend a good source for high-resolution georeferenced population data? I know that Natural Earth Data's Cultural Vectors e.g. contain the population per country. However, I am looking for georeferenced global population data with a higher resolution than on a country level. Obviously, Wikipedia and other sources feature data for some country's states or cities but is there a database (ideally) georeferenced which features such data on a global level?
Answer: For "historical" population data, you can refer to the Gridded Population of the World (GPW), version 4. This dataset makes available estimates of population at ~1 km resolution (30 arc second) for the years 2000, 2005, 2010, 2015, 2020.
The GPWv4 was also used by the European Commission's Joint Research Centre for the Global Human Settlement project. This dataset makes available residential population estimates for target years 1975, 1990, 2000 and 2015 at 250m and 1km spatial resolution.
If you also need future projections, you can refer to the Shared Socioeconomic Pathways (SSP) scenarios downscaled by Murakami and Yamagata (2019), available here, or by Jones and O'Neill (2016), made available through the ISIMIP project portal. | {
"domain": "earthscience.stackexchange",
"id": 1961,
"tags": "open-data, gis"
} |
Word frequency counter | Question: I recently took a self-assessment question to assess my Python ability for an online class. The problem was to return the frequency of a word occurring, as part of a tuple.
Implement a function count_words() in Python that takes as input a
string word_string and a number number_of_words, and returns the n most frequently-occurring words in word_string. The return value should be a list of tuples - the top n
words paired with their respective counts [(word1, count1), (word2,
count2), ...], sorted in descending count order.
You can assume that all input will be in lowercase and that there will
be no punctuations or other characters (only letters and single
separating spaces). In case of a tie (equal count), order the tied
words alphabetically.
E.g.: print count_words("this is an example sentence with a repeated word example",3) Output: [('example', 2), ('a', 1), ('an', 1)]
def count_words(word_string, number_of_words):
"""
take in a word string and return a tuple of the
most frequently counted words
word_string = "This is an example sentence with a repeated word example",
number_of_words = 3
return [('example', 2), ('This', 1), ('a', 1)]
"""
word_array = word_string.split(' ')
word_occurence_array = []
for word in word_array:
if word in word_string:
occurence_count = word_array.count(word)
word_occurence_array.append((word, occurence_count))
else:
# no occurences, count = 0
word_occurence_array.append((word, 0))
# dedupe
word_occurence_array = list(set(word_occurence_array))
# reorder
# can also pass, reverse=True, but cannot apply `-` to string
word_occurence_array.sort(key=lambda tup: (-tup[1], tup[0]))
# only return the Nth number of pairs
return word_occurence_array[:number_of_words]
You can then call this function:
count_words(word_string="this is an example sentence with a repeated word example", number_of_words=3)
which returns [('example', 2), ('a', 1), ('an', 1)]
I found the process of tuple sorting quite tricky, and achieved it using word_occurence_array.sort(key=lambda tup: (-tup[1], tup[0])). I was wondering if there are any other improvements that I can make to my overall code.
I hope this is a reasonable question - I've tweaked the description and example so that I hope it's not too easily identifiable.
Answer: The suggestion made by 200_success is a good one if you don't care about the returned values in the case of a tie, however the question seems to indicate that after sorting by count, you should sort alphabetically. You'll need to add post-processing to that with a Counter (or any mapping). You could also do this with a collections.defaultdict.
from collections import defaultdict, Counter
def count_words2(word_string, number_of_words):
words = word_string.split()
word_dict = defaultdict(int)
for word in words:
word_dict[word] += 1
return sorted(word_dict.iteritems(), key=lambda tup: (-tup[1], tup[0]))[:number_of_words]
def count_words3(word_string, number_of_words):
words = word_string.split()
word_dict = Counter(words)
return sorted(word_dict.iteritems(), key=lambda tup:(-tup[1], tup[0]))[:number_of_words]
My original answer was a bit hasty/knee-jerk reaction to the suggestion to use most_common, which gives no guarantees about ordering in the case of ties, nor can it be passed a function to handle the sorting. You can still use a Counter, you just can't use most_common without somewhat more complex post-processing. As seen above, you should be able to use any mapping to actually get the frequency table, as the post-processing step will be the same. Given the lower complexity of Counter, that is still probably the best solution.
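For completeness, a Python 3 sketch of the same approach (iteritems() was removed in Python 3; items() replaces it), with the tie-break behaviour the question asks for:

```python
from collections import Counter

def count_words(word_string, number_of_words):
    # Count word frequencies, then sort by descending count and
    # break ties alphabetically, as the problem statement requires.
    word_dict = Counter(word_string.split())
    ranked = sorted(word_dict.items(), key=lambda kv: (-kv[1], kv[0]))
    return ranked[:number_of_words]

print(count_words("this is an example sentence with a repeated word example", 3))
# [('example', 2), ('a', 1), ('an', 1)]
```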
As suggested by Mathias Ettinger in the comments, you could also do something like this
class OrderedCounter(Counter, OrderedDict): pass
def count_words4(word_string, n):
words = OrderedCounter(sorted(word_string.split()))
return words.most_common(n)
In general I prefer to avoid multiple inheritance unless it is clearly a much cleaner and simpler solution than anything else - I don't think that is accurate in this case, but you might decide that it works for you. | {
"domain": "codereview.stackexchange",
"id": 18337,
"tags": "python, python-2.x"
} |
Flexible Team Orienteering Problem | Question: In the Team Orienteering Problem (TOP) we have a graph $G=(V,E)$ and $K$ participants. We are supposed to find $K$ paths with total edge cost less than a threshold while maximising the total vertex reward. The constraint on reward collecting is that once a reward has been collected by some participant, it cannot be collected again by another participant; i.e., each vertex has only one piece of reward.
But consider a graph like this, with two participants, where $s,t$ are the start and terminating points. I want to have two pieces of reward at $d$ but only one piece of reward at each of $b$ and $c$.
Does this problem have a name? Is it proven to be NP-complete?
Answer: I don't know if it has a name but it's NP-hard. Just take $K=1$, give every edge and vertex weight $1$ and ask if there's a solution with vertex reward $|V|$ and edge cost at most $|V|-1$. This happens if, and only if, there's an $s$–$t$ Hamiltonian path. The decision version "Can I get reward at least $R$ with cost at most $C$" is NP-complete. | {
"domain": "cs.stackexchange",
"id": 10566,
"tags": "np-complete, traveling-salesman"
} |
rostopic pub message with UInt16 fields etc | Question:
I have a message type as follows:
# LegoNXTROS_Status message
# Defines the protocol for publishing the NXT robot's status.
#
uint16 leftMotorEncoder
uint16 rightMotorEncoder
uint8 sonarSensor
bool leftTouchSensor
bool rightTouchSensor
When I use rostopic pub -1 /LegoNXTROS_Status nxt/LegoNXTROS_Status 1, 1, 120, false, false
roscore responds repeatedly with: [WARN] [WallTime: 1415504044.156096] Inbound TCP/IP connection failed: field leftMotorEncoder must be unsigned integer type and the message doesn't get through.
Can someone clarify what I'm doing wrong?
Cheers,
Nap
@ahendrix: Thanks. I will add that to my 'cheat' sheet.
Originally posted by Nap on ROS Answers with karma: 302 on 2014-11-08
Post score: 0
Answer:
I suspect rostopic pub is confused by the commas in your command. Maybe try without them?
rostopic pub -1 /LegoNXTROS_Status nxt/LegoNXTROS_Status 1 1 120 false false
Originally posted by ahendrix with karma: 47576 on 2014-11-09
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 20001,
"tags": "ros"
} |
Expected empirical entropy | Question: I'm thinking about some properties of the empirical entropy for binary strings of length $n$ when the following question crossed my mind:
$\underbrace{\large\frac{1}{2^{n}}\normalsize\sum\limits_{w\in\left\{0,1\right\}^{n}}\normalsize nH_{0}(w)}_{\large\#}\;\overset{?}{=}\;n-\varepsilon_{n}\;\;\;$
with $\;\;\lim\limits_{n\rightarrow\infty}\varepsilon_{n}=c\;\;\;$ and $\;\;\;\forall n:\;\varepsilon_{n}>0$
where $c$ is a constant.
Is that equation true? For which function $\varepsilon_{n}$ respectively which constant $c$?
$ $
$n=2\;\;\;\;\;\;\;\rightarrow\;\#=1 $
$n=3\;\;\;\;\;\;\;\rightarrow\;\#\approx 2.066 $
$n=6\;\;\;\;\;\;\;\rightarrow\;\#\approx 5.189 $
$n=100\;\;\;\rightarrow\;\#\approx 99.275 $
$n=5000\;\rightarrow\;\#\approx 4999.278580 $
$n=6000\;\rightarrow\;\#\approx 5999.278592 $
$ $
Backround
$ $
$H_{0}(w)$ is the zeroth-order empirical entropy for strings over $\Sigma=\left\{0,1\right\}$:
$H_{0}(w)=\frac{|w|_{0}}{n}\log\frac{n}{|w|_{0}}+\frac{n-|w|_{0}}{n}\log\frac{n}{n-|w|_{0}}$
where $|w|_{0}$ is the number of occurrences of $0$ in $w\in\Sigma^{n}$.
The term $nH_{0}(w)$ corresponds to the Shannon-entropy of the empirical distribution of binary words with respect to the number of occurrences of $0$ and $1$ in $w\in\Sigma^{n}$.
More precise:
Let the words in $\left\{0,1\right\}^{n}$ be possible outcomes of a Bernoulli process. If the probability of $0$ is equal to the relative frequency of $0$ in a word $w\in\left\{0,1\right\}^{n}$, then the Shannon-entropy of this Bernoulli process is equal to $nH_{0}(w)$.
At this point, my question should be more reasonable since the first term normalizes the Shannon-entropies for all empirical distributions of words $w\in\left\{0,1\right\}^{n}$.
Intuitively I thought about getting something close to the Shannon-entropy of the uniform distribution of $\left\{0,1\right\}^{n}$, which is $n$.
By computing and observing some values I've got the conjecture above, but I'm not able to prove it or to get the exact term $\varepsilon_{n}$.
It is easy to get the following equalities:
$\large\frac{1}{2^{n}}\normalsize\sum\limits_{w\in\left\{0,1\right\}^{n}}\normalsize nH_{0}(w)\;\;=\large\frac{1}{2^{n}}\normalsize\sum\limits_{w\in\left\{0,1\right\}^{n}}\normalsize |w|_{0}\log\frac{n}{|w|_{0}}+(n-|w|_{0})\log\frac{n}{n-|w|_{0}}$
$=\frac{1}{2^{n}}\sum\limits_{k=1}^{n-1}\binom{n}{k}\left(k\log\frac{n}{k}+(n-k)\log\frac{n}{n-k}\right)$
and it is possible to apply some logarithmic identities but I'm still in a dead point.
(the words $0^{n}$ and $1^{n}$ are ignored, because the Shannon-entropy of their empirical distributions is zero)
Any help is welcome.
Answer: Here is another approach, based on information theory and heavily inspired by @usul's answer. It shows that $\epsilon_n=O(1)$ with very few calculations, and can be used to prove that $\epsilon_n \rightarrow \log_2 \sqrt{e}$ and to derive good estimates on the rate of convergence with less calculations than @usul's approach. In fact, I find a closed-form expression for $\epsilon_n$ : $$(1) \; \; \; \; \epsilon_n = n \left( H(Binom(n,1/2)) - H \left( Binom \left( n-1, 1/2 \right) \right) \right) \ .$$
Details:
Let $X$ be a uniform random variable in $\{0,1\}^n$. Let $K$ be a random variable equal to the number of 1's in $X$. The expression $\#$ that @Danny wants to analyze is exactly equal to $n \cdot \mathbb{E}_{k} [H(X_1 | K=k)]$. (Here $X_1$ is the first bit of $X$.) By the basic properties of the entropy operator,
$$(2) \; \; \; \; \# = n\mathbb{E}_{k} [H(X_1 | K=k)]=nH(X_1|K)=n(H(X_1K)-H(K))=n(H(K|X_1)+H(X_1)-H(K)) = n(1-H(Binom(n,1/2)) + H(Binom(n-1,1/2))) \ .$$
The last equality follows from the fact that $X_1$ is just a uniformly random bit, $K$ is the binomial distribution, and $K|(X_1=x_1)$ is distributed either as $Binom(n-1,1/2)$ or as $Binom(n-1,1/2)+1$, depending on the value of $x_1$, both of which have the same entropy.
This already gave us equation (1). Now we just need a calculation to get the value of $\lim_{n \rightarrow \infty} \epsilon_n$.
We use any known estimation for the entropy of a binomial RV, such as here. We see that $$ (3) \; \; \; \; H(K)=H(Binom(n,1/2))=\frac12 \log_2 ( \pi en / 2) + O(1/n) \ ,$$ and, similarly, that $$H(K|X_1)=H(Binom(n-1,1/2))=\frac12 \log_2 ( \pi e(n-1)/2) + O(1/n) \ .$$ Canceling out terms and substituting into (1), we get $$ (4) \; \; \; \; \epsilon_n = n \cdot (H(K)-H(K|X_1)) = n \cdot \frac12 (\log_2 (n/(n-1)) + O(1/n)) = \\ \frac12 \log_2 ((n/(n-1))^n) + O(1) \rightarrow \log_2(\sqrt{e}) + O(1) \ . $$
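The closed form (1) is easy to check numerically. Below is a short Python sketch (my own addition, not part of the original answer) using exact binomial probabilities via `math.comb`, so there is no sampling error:

```python
import math

def binom_entropy(n):
    """Shannon entropy, in bits, of Binomial(n, 1/2)."""
    h = 0.0
    for k in range(n + 1):
        p = math.comb(n, k) / 2**n  # exact big-int ratio, converted to float
        h -= p * math.log2(p)
    return h

def eps(n):
    """epsilon_n = n * (H(Binom(n,1/2)) - H(Binom(n-1,1/2))), i.e. equation (1)."""
    return n * (binom_entropy(n) - binom_entropy(n - 1))

for n in (2, 100, 1000):
    print(n, eps(n))  # eps(2) = 1.0; values decrease toward log2(sqrt(e))
print("log2(sqrt(e)) =", math.log2(math.e) / 2)
```

The printed values match the data points in the question (e.g. $n=100$ gives $\epsilon_{100}\approx 100-99.275=0.725$) and visibly approach $\log_2\sqrt{e}\approx 0.7213$.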
By slightly improving the approximation (3) we should be able to replace the $O(1)$ term in (4) by $O(1/n)$ and therefore get that, indeed, $\lim_{n \rightarrow \infty} \epsilon_n = \log_2 \sqrt{e}$. To get this better estimation it should be enough to check that $H(Binom(n,1/2))=\frac12 \log_2 ( \pi en / 2) + err(n)$ where $err(n)=O(1/n)$ and $err$ is a monotone function. | {
"domain": "cstheory.stackexchange",
"id": 2649,
"tags": "it.information-theory, shannon-entropy"
} |
Solving wave equations with heuristic-like, analytic methods | Question: Take a Klein-Gordon (KG) equation for a model exercise: \begin{equation}\frac{\partial^2 u}{\partial t^2}=c^2\frac{\partial^2 u}{\partial x^2 } - \Omega^2 u,\end{equation}
with boundary and initial conditions:
\begin{equation}(x,t)\in[0,a]\times[0,\infty)\end{equation}
\begin{equation}u(x,0)=\alpha(x)\end{equation}
\begin{equation}\frac{\partial u}{\partial t}(x,0)=\beta(x)\end{equation}
\begin{equation}u(0,t)=0\end{equation}
\begin{equation}u(a,t)=0\end{equation} So nothing special until here.
I've learnt two methods for this; one of them is separation of variables, which nicely gives the dispersion of the wave as $\omega^2 = c^2 k^2 + \Omega^2$ and its solution as a Fourier series:
\begin{equation}u(x,t)=\sum_n\sin (k_n x)\left[A_n\sin(\omega_n t) + B_n\cos(\omega_nt)\right]\end{equation}
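As a sanity check on the separated solution above, each mode satisfies the KG equation exactly when $\omega^2=c^2k^2+\Omega^2$. A short symbolic verification (a sympy sketch, my own addition, not part of the original question):

```python
import sympy as sp

x, t, c, Omega, k, A, B = sp.symbols('x t c Omega k A B', positive=True)
omega = sp.sqrt(c**2 * k**2 + Omega**2)  # dispersion relation
u = sp.sin(k * x) * (A * sp.sin(omega * t) + B * sp.cos(omega * t))

# Residual of u_tt = c^2 u_xx - Omega^2 u; it should simplify to zero.
residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2) + Omega**2 * u
print(sp.simplify(residual))  # 0
```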
The second method starts by supposing the solution can be written in the form $e^{-i\omega t}e^{ikx}$, the minus sign being there by convention.
Substituting this into the KG equation, I get $(-i\omega)^2 = c^2(ik)^2 - \Omega^2$, which gives the dispersion of the wave, but then I just don't know how to proceed.
If I express $\omega_{\pm}=\pm\sqrt{c^2 k^2 + \Omega^2}$ and write the solution as $u(x,t)=\int dk\, e^{ikx}\left(A(k)e^{i\omega_+t}+B(k)e^{i\omega_-t}\right)$, I reach a dead end and cannot go on to write the solution as a Fourier series. Furthermore, I get (from the initial conditions):
\begin{equation}u(x,0)=\int dk e^{ikx}(A(k)+B(k)) = \alpha(x) \rightarrow A(k)+ B(k) = \frac{1}{2\pi}\int dx \alpha(x)e^{-ikx}\end{equation}
\begin{equation}\dot{u}(x,0)=\int dk e^{ikx}\omega(A(k)+B(k)) = \beta(x) \rightarrow \omega[A(k)+ B(k)] = \frac{1}{2\pi}\int dx \beta(x)e^{-ikx}\end{equation} Thus $\omega = \frac{\int dx \beta(x)e^{-ikx}}{\int dx \alpha(x)e^{-ikx}}$, which is rather strange. Anyway, I don't know how to proceed from here.
If I go the other way around and express $k_{\pm}=\pm\sqrt{\frac{\omega^2-\Omega^2}{c^2}}$, then this will be $e^{i\omega t}(Ae^{ik_+x}+Be^{ik_-x})$ (I don't know why I don't integrate here, but if I do, nothing will come out of it). Using the boundary conditions, I get $u=\sum_n 2iA_n\sin(k_nx)e^{i\omega t}$ with $k_n=n\pi/a$ as expected, but again I don't know what to do from here, because this gives a completely different Fourier series than the form of the solution (not to mention it is complex).
Please point out what I am doing wrong.
Answer: I believe you are forgetting the dispersion relation i.e. that $\omega$ is a function of $k$ when you work out $\dot{u}(x,0)$ as well as a sign. I make the calculation to be as follows. Since:
$$\begin{array}{lcl}u(x,t)&=&\int_0^\infty \,\mathrm{d} k\, e^{i\,k\,x}\left(A(k)e^{i\omega_+t}+B(k)e^{i\omega_-t}\right)\\&=&\int_0^\infty \,\mathrm{d} k\, e^{i\,k\,x}\left(A(k)\,\exp(i\,\sqrt{c^2 k^2 + \Omega^2}\,t)+B(k)\,\exp(-i\,\sqrt{c^2 k^2 + \Omega^2}\,t)\right)\end{array}\tag{1}$$
whence:
$$\dot{u}(x,t)=\int_0^\infty \,\mathrm{d} k\, e^{i\,k\,x}\,\sqrt{c^2 k^2 + \Omega^2}\,\left(A(k)-B(k)\right)\tag{2}$$
so you don't get your "strange" $\omega$ formula: there is no one, unique $\omega$ and instead you have a non sinusoidal superposition.
You're also forgetting the two $k$-branches in your last paragraph, as well as the fact that $\omega$ must be allowed to vary over positive and negative values: the general solution for a given $\omega$ is $e^{i\,\omega\,t}\,(A_n\,e^{i\,k(\omega)\,x} + B_n\,e^{-i\,k(\omega)\,x})$, where $k(\omega)= \sqrt{\frac{\omega^2-\Omega^2}{c^2}}$. To fulfil the boundary conditions, you must have $B_n=-A_n$ and of course $A_n$ can be generally complex, so write $B_n=-A_n = i\,C_n$ and you will get your $u=\sum_n 2\,i\,A_n\,\sin(k_n\,x)\,e^{i\,\omega\,t}$. Now you recall that there are two $\omega$-branches, so for every $k_n=n\pi/a,\,n\in\mathbb{N}$ there are two $\omega$s: $\omega = \pm\sqrt{c^2 \left(\frac{n\pi}{a}\right)^2 + \Omega^2}$. If you like, you can think of the negative $\omega$ solutions as being associated with $-n$ values: hence you get your most general possible superposition:
$$\begin{array}{lcl}u(x,\,t)&=&\sum\limits_{n=-\infty}^\infty\,D_n\,\sin(k_n\,x)\,e^{i\,\omega(k_n)\,t}\\&=&\sum\limits_{n=0}^\infty\,\sin(k_n\,x)\,\left(D_n\,e^{i\,\omega(k_n)\,t}+D_{-n}\,e^{-i\,\omega(k_n)\,t}\right)\end{array}\tag{3}$$
for arbitrary $D_n\in\mathbb{C}$, whence you can derive the first Fourier series you cited, and indeed you can find complex $D_{\pm n}$ to make the series real-valued. | {
"domain": "physics.stackexchange",
"id": 22330,
"tags": "waves"
} |
ethzasl_icp_mapper service Minimal Example | Question:
I'm trying to use the ethzasl_icp_mapper service to match some very simple point clouds. However, I get errors like:
ICP failed to converge: no outlier to filter
I create two simple point clouds and try to line them up using the MatchClouds service.
http://pastebin.com/2seS1jUq
Is this example too simple? Am I missing something?
(currently using groovy_released, but have tried other versions with similar results)
Originally posted by David Lu on ROS Answers with karma: 10932 on 2015-03-11
Post score: 0
Answer:
It all depends on the YAML file used when launching the node ethzasl_icp_mapper. The default launch files in the repository are made for large point clouds with a lot of redundant information. The default ICP solution (triggered if no YAML file is given) also assumes many points.
I suspect that in your case, the chosen configuration is too aggressive in its downsampling and its outlier filtering. We've made the registration solution highly configurable to suit a large range of applications. The downside of that is that you need to tune it for your needs.
Here are some suggestions to start:
remove all data filters
remove all outlier filters
use point-to-point error
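Concretely, a stripped-down configuration along those lines might look like the sketch below. This is a hypothetical file I have not tested; module names and list syntax should be verified against the output of `pmicp -l` and the libpointmatcher configuration tutorial for your version:

```yaml
# Hypothetical minimal ICP chain for small, simple clouds.
# Verify module names with `pmicp -l` before use.
readingDataPointsFilters: []     # no data filters on the reading cloud
referenceDataPointsFilters: []   # no data filters on the reference cloud
matcher:
  KDTreeMatcher:
    knn: 1
outlierFilters: []               # no outlier rejection
errorMinimizer:
  PointToPointErrorMinimizer     # point-to-point error
transformationCheckers:
  - CounterTransformationChecker:
      maxIterationCount: 40
inspector:
  NullInspector
logger:
  NullLogger
```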
For more information on all the modules available, you can use `pmicp -l`. You should find that executable in the bin folder of the node. It can also take two point clouds from disk plus a YAML configuration and register them, which might be useful for your tests.
This node relies on libpointmatcher, a library independent of ROS. Lately, I'm putting more effort into maintaining that part. The node ethzasl_icp_mapper is mainly a showcase for it. Don't hesitate to read our tutorial page.
Originally posted by Francois with karma: 101 on 2015-03-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by David Lu on 2015-03-12:
Excellent. Thanks Francois. The tutorial page you linked to was very helpful. I had previously only looked at the documentation on this page which doesn't specify exactly how to build the yaml file. Also, the point-to-point reference was super-useful. | {
"domain": "robotics.stackexchange",
"id": 21116,
"tags": "ros, icp, ethzasl-icp-mapping, pointcloud"
} |
Lie group symmetry in Weinberg's QFT book | Question: In Weinberg's QFT volume 1, section 2.2 and appendix 2.B discuss the Lie group symmetry in quantum mechanics and projective representation. In particular, it's shown in the appendix 2.B how a representation of the Lie algebra extends to a representation of the group in the neighborhood of the identity. Unfortunately I get stuck in one line of the derivation.
Loosely following the notation of the book, group elements are represented by the coordinates $\theta$ (identity element corresponds to $\theta=0$) and group multiplication is represented by the function $f$
\begin{align}
g(\theta_1) g(\theta_2) = g(f(\theta_1,\theta_2))
\end{align}
The function $f$ encodes the information of the group, and near the identity it expands to
\begin{align}
f(\theta_1,\theta_2)^a = \theta_1^a + \theta_2^a + f^a_{bc} \theta_1^b \theta_2^c + \ldots
\end{align}
Suppose we have the representation of the Lie algebra
\begin{align}
U(\theta) = 1 + i t_a \theta^a + \ldots
\end{align}
Representation of the Lie group can be obtained by exponentiation, namely by flow along a curve $\theta(s)$ via the differential equation (obtained by taking the $\Delta\theta\to0$ limit of $U(\Delta\theta)U(\theta) = U(f(\Delta\theta,\theta))$)
\begin{align}
\frac{d}{ds} U(\theta(s)) = i t_a U(\theta(s)) h^a_b(\theta(s)) \frac{d\theta^b(s)}{ds}
\end{align}
where ${h^{-1}}^a_b(\theta)=\frac{\partial f^a(\bar{\theta},\theta)}{\partial \bar{\theta}^b}|_{\bar{\theta}=0}$. It remains to prove the definition of $U$ doesn't depend on the path $\theta(s)$ chosen. That depends on the properties of the function $h^a_b$ (ultimately properties of $f$), in particular we need equation (2.B.10)
\begin{equation}
\partial_{\theta^c} h^a_b(\theta) = - f^a_{de} h^d_b(\theta) h^e_c(\theta)
\end{equation}
This is where I got stuck. Presumably this is derived from the group associativity condition
\begin{align}
f^a(f(\theta_3,\theta_2),\theta_1) = f^a(\theta_3,f(\theta_2,\theta_1))
\end{align}
Differentiating with respect to $\theta_3^c$ we find
\begin{align}
\frac{\partial f^a(f(\theta_3,\theta_2),\theta_1)}{\partial f^d(\theta_3,\theta_2)} \frac{\partial f^d(\theta_3,\theta_2)}{\partial \theta_3^c} = \frac{\partial f^a(\theta_3,f(\theta_2,\theta_1))}{\partial \theta_3^c}
\end{align}
then differentiating with respect to $\theta_2^b$
\begin{align}
\frac{\partial^2 f^a(f(\theta_3,\theta_2),\theta_1)}{\partial f^d(\theta_3,\theta_2)\partial f^e(\theta_3,\theta_2)} \frac{\partial f^d(\theta_3,\theta_2)}{\partial \theta_3^c}\frac{\partial f^e(\theta_3,\theta_2)}{\partial \theta_2^b}+ \frac{\partial f^a(f(\theta_3,\theta_2),\theta_1)}{\partial f^d(\theta_3,\theta_2)} \frac{\partial^2 f^d(\theta_3,\theta_2)}{\partial \theta_3^c \partial \theta_2^b} = \frac{\partial^2 f^a(\theta_3,f(\theta_2,\theta_1))}{\partial \theta_3^c \partial f^d(\theta_2,\theta_1)} \frac{\partial f^d(\theta_2,\theta_1)}{\partial \theta_2^b}
\end{align}
and setting $\theta_2=\theta_3=0$, we find
\begin{align}
\frac{\partial^2 f^a(\theta,\theta_1)}{\partial \theta^b \partial \theta^c}|_{\theta=0} + {h^{-1}}^a_d(\theta_1)f^d_{cb} = \partial_{\theta_1^d} {h^{-1}}^a_c(\theta_1) {h^{-1}}^d_b(\theta_1)
\end{align}
It seems I need to drop the first term $\frac{\partial^2 f^a(\theta,\theta_1)}{\partial \theta^b \partial \theta^c}|_{\theta=0}$ to get to the equation (2.B.10). I don't know why it should vanish. In addition, (2.B.10) enables us to compute the function $f$; it seems everything is encoded in the coefficients $f^a_{bc}$ (related to the Lie algebra structure constants by $C^a_{bc} = -f^a_{bc} + f^a_{cb}$).
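Whether that first term vanishes can be probed on a toy example. Take the affine group of the line, $g(a,b):x\mapsto e^{a}x+b$, whose composition law is $f\big((a_1,b_1),(a_2,b_2)\big)=(a_1+a_2,\;b_1+e^{a_1}b_2)$ (my own choice of example, not from Weinberg). The sympy sketch below shows the second derivative is symmetric in $(b,c)$ but not zero:

```python
import sympy as sp

a1, b1, a2, b2 = sp.symbols('a1 b1 a2 b2')

# Affine group of the line: composition law f(theta, theta1)
f = sp.Matrix([a1 + a2, b1 + sp.exp(a1) * b2])
theta = (a1, b1)
at_zero = {a1: 0, b1: 0}

# d^2 f^a(theta, theta1) / d theta^b d theta^c evaluated at theta = 0
D2 = [[[sp.simplify(sp.diff(f[A], theta[b], theta[c]).subs(at_zero))
        for c in range(2)] for b in range(2)] for A in range(2)]

# Symmetric in (b, c) by smoothness of f ...
assert all(D2[A][b][c] == D2[A][c][b]
           for A in range(2) for b in range(2) for c in range(2))
# ... but not identically zero, so the term cannot simply be dropped:
print(D2[1][0][0])  # b2
```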
Answer: You are correct, (2.B.10) is actually not true. But it's only used through (2.B.11), and (2.B.11) does hold, because the derivative $\frac{\partial^2 f^a(\theta, \theta_1)}{\partial\theta^b \partial\theta^c}$ is symmetric in $b$ and $c$. | {
"domain": "physics.stackexchange",
"id": 100022,
"tags": "quantum-field-theory, symmetry, group-theory, lie-algebra"
} |
Why do we perceive colour? (i.e., distinct values?) | Question: This is a question I often asked myself, but never really found a satisfactory answer to. It may be that I always went about it from a wrong starting point because the question is induced by physics. But I think an evolutionary perspective may be the most satisfactory one to explain this.
Humans and other animals had much to gain from differentiating not only dark and light but also colors, to identify edible things, threats, etc. For this purpose we developed multiple types of cone cells. But what reason is there not to perceive light as we perceive sound, in a spectrum? We could easily distinguish said things just by knowing how high or low the light frequency is (or, for that matter, what the spectral 'fingerprint' of specific objects is). Why differentiate between an, albeit big, array of discrete colors?
Or, to go one step further, why did we need an extra type of cone in the middle, at the yellow-green wavelengths? Is there an evolutionary reason? I'd settle for the three more or less distinct cone sensitivities as an explanation: the brain simply has enough information to differentiate these into fairly discrete cognitive signals.
But evolutionarily, there has to be a purpose, so one of these questions must be the right one. The colors must bring something to the table. Or not?
Thanks in advance. I am also grateful for anyone nitpicking my arguments, because it always helps.
Answer:
"But what reason is there to not perceive light as we perceive sound, in a spectrum?"
We do perceive light in a spectrum. Not just in a spectrum, but in a combination of spectral components (because we rarely in nature observe true single-wavelength light). If you are thinking about names of colors, those are just words we made up to describe colors to other people, and the range and spacing of those colors differs across language groups. We do the same for sound in the context of music: B-flat versus an E, for example.
Or, to go one step further, why did we need an extra type of cone in the middle, the yellow-green wavelength?
Colors are perceived in vision by the ratio of activity of different cones, and intensity and spectrum can be confused. If you have no "green" cones, there are big stretches of wavelengths that only activate one of the other two cones. In that situation, there is no way to discriminate between intensity and color.
If, on the other hand, you are wondering about transduction, it's because sound and light have different physical properties. The cochlea effectively "maps" sounds onto a physical arrangement of hair cells. You could do something similar with a prism, but you would lose spatial acuity (effectively, you would need a whole spectrum analyzer for the receptive field of each photoreceptor: your retina would have to be absolutely massive). Instead, color vision works by differential chemical sensitivity to different light wavelengths.
Your question doesn't seem to be much about evolution, besides the title, so I left off that part. | {
"domain": "biology.stackexchange",
"id": 7559,
"tags": "human-biology, evolution, neuroscience"
} |
Generalization of straight line motion under constant acceleration | Question: My question is that, we all know the three equations of straight line motion under constant acceleration,
\begin{align}
x & =x_{\rm o}+v_{\rm o}\,t+\tfrac12 \mathrm a\,t^2
\tag{1d-a}\label{1d-a}\\
v & =v_{\rm o}+\mathrm a\,t
\tag{1d-b}\label{1d-b}\\
v^2 & =v_{\rm o}^2+2\,\mathrm a\left(x-x_{\rm o}\right)
\tag{1d-c}\label{1d-c}
\end{align}
Is my generalization correct?
\begin{align}
\mathbf r & =\mathbf r_{\rm o}+\boldsymbol v_{\rm o}\,t+\tfrac12\mathbf a\,t^2 \quad \text{(no difference with that)}
\tag{3d-a}\label{3d-a}\\
\boldsymbol v & =\boldsymbol v_{\rm o}+\mathbf a\,t
\tag{3d-b}\label{3d-b}\\
\vert\boldsymbol v\vert^2 & =\vert\boldsymbol v_{\rm o}\vert^2+2\,\mathbf a\boldsymbol \cdot\left(\mathbf r-\mathbf r_{\rm o}\right)
\tag{3d-c}\label{3d-c}
\end{align}
Please explain the general principle of generalization of 1-dimension formulas into 3-dimensions.
And I must add that I am very sorry for not writing this question in LaTeX; I really know nothing about it, so I wrote it like that. Hopefully you will be patient with me.
Answer: Though the above generalisations are correct, what we do in most cases is resolve (or break) any given motion along mutually perpendicular axes (namely the x, y and z axes) and then apply these formulas separately along each axis, as: $$v_x=v_{0x}+a_{0x}t$$ $$x=v_{0x}t+\frac{1}{2}a_{0x}t^2$$ $$v_x^2=v_{0x}^2+2a_{0x}x$$ along the x-axis. Similarly, $$v_y=v_{0y}+a_{0y}t$$ $$y=v_{0y}t+\frac{1}{2}a_{0y}t^2$$ $$v_y^2=v_{0y}^2+2a_{0y}y$$ and $$v_z=v_{0z}+a_{0z}t$$ $$z=v_{0z}t+\frac{1}{2}a_{0z}t^2$$ $$v_z^2=v_{0z}^2+2a_{0z}z$$ along the y and z axes respectively. The reason why we do this is that motions along mutually perpendicular axes are independent of each other, and hence we can apply these formulas separately along each axis.
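The vector identity $\vert\boldsymbol v\vert^2=\vert\boldsymbol v_{\rm o}\vert^2+2\,\mathbf a\boldsymbol\cdot(\mathbf r-\mathbf r_{\rm o})$ can also be checked numerically for arbitrary vectors. Here is a quick NumPy sketch (my own addition; the identity is exact, so the check passes for any inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
r0, v0, a = rng.normal(size=(3, 3))  # random initial position, velocity, acceleration
t = 2.5

r = r0 + v0 * t + 0.5 * a * t**2     # 3d-a
v = v0 + a * t                       # 3d-b

lhs = v @ v
rhs = v0 @ v0 + 2 * a @ (r - r0)     # 3d-c
print(np.isclose(lhs, rhs))  # True
```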
Hope it helps. | {
"domain": "physics.stackexchange",
"id": 78846,
"tags": "kinematics, acceleration, velocity, differentiation, calculus"
} |
Meaning / proof of these regex | Question: I came across the following excerpts while reading about regular-expression identities:
The regex associative laws are:
$$(L+M)+N=L+(M+N)$$
$$(LM)N=L(MN)$$
Some important implications out of associative laws are:
$$r(sr)^*=(rs)^*r$$
$$(rs+r)^*r=r(sr+r)^*$$
$$s(rs+s)^*r=(sr+s)^*sr$$
$$(LM)^*N^*\neq L^*(MN)^*$$
The issue is that
I don't find the implications as intuitive as the identities themselves. How can I understand the implications intuitively? I can always form a string belonging to the left-hand-side regex and check whether it is accepted by the other regex; the first implication is very simple to test this way. However, how can I make them more intuitive?
Are these implications simply made-up expressions that have been rigorously verified to hold, with no special significance, since we can form many such expressions?
I am unable to get the point of stating these implications. I can't think of any problem in which I could use these regexes straight away. It may be because I lack the intuition behind these implications, so it doesn't strike me immediately when to use them.
Answer: For all three of your statements, the answer is that, generally speaking, the implications simply aren't as intuitive as, say, the associative laws. Look at the analogous problem with algebraic expressions: we have associative laws for addition and multiplication and we also have a distributive law that states that for any expressions $p,q,r$ we have
$$
p\cdot(q+r)=(p\cdot q)+(p\cdot r)
$$
Eventually, these rules should be intuitively obvious. However, one implication of these rules, that
$$
(p+q)^3=p^3+3p^2q+3pq^2+q^3
$$
is probably not as intuitively obvious as the rules used to derive that identity. The utility of the result above is that it can be used as a tool to simplify other more complicated problems.
It's the same for regular expressions: the fact that, say, $r(sr)^*=(rs)^*r$, is correct can be proven rigorously, but having done that you can use it as a tool to show that
$$
aa+(aab)^*aa=aa+aa(baa)^*=aa(\epsilon+(baa)^*)
$$
should you ever need to. For example, there is a handy technique, involving what's known as Arden's lemma that can be used to produce a regular expression describing the language accepted by a finite automaton. Depending on how it's applied, this can produce several regular expressions from the same FA, so it might fall to you to show that the expressions are indeed equivalent, in which case the "implications" you listed might be handy.
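For instance, the equivalence $r(sr)^*=(rs)^*r$ can be sanity-checked by brute force with $r=a$, $s=b$. The Python sketch below (my own addition) exhaustively compares the two patterns on all short strings; of course, agreement on bounded lengths is evidence, not a proof:

```python
import itertools
import re

lhs = re.compile(r"a(ba)*\Z")   # r(sr)* with r = 'a', s = 'b'
rhs = re.compile(r"(ab)*a\Z")   # (rs)*r

# Compare full-match behaviour on every string over {a, b} up to length 10.
for n in range(11):
    for w in map("".join, itertools.product("ab", repeat=n)):
        assert bool(lhs.match(w)) == bool(rhs.match(w))
print("lhs and rhs agree on all strings up to length 10")
```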
The upshot (here's the tl;dr part), is that the implications you mentioned are simply tools you can use when needed: they've beed proven to hold, but there's no reason why they should be intuitively obvious. | {
"domain": "cs.stackexchange",
"id": 7698,
"tags": "regular-expressions"
} |
D'Alembertian and Laplacian Green's Functions | Question: Is there a way to obtain the Green's function for the Laplacian as a limit of the Green's function of the D'Alembertian?
For the Laplacian ($-\nabla^2$) we have
$$ G_1(\vec X) = \frac{1}{4\pi X}$$
And for the D'Alembertian ($\Box$) using the retarded prescription we have
$$ G_2(T,\vec X) = \frac{1}{4\pi X} \delta(c T-|\vec X|) $$
I suppose that a way to go from $G_2$ to $G_1$ would be take the limit $c\rightarrow \infty$. That supposition is based on:
$$ \left. \Box \right|_{c\rightarrow \infty}= \left. \frac{1}{c^2} \partial_t^2\right|_{c\rightarrow \infty} - \nabla^2 = - \nabla^2$$
The problem is that I can't see a way to have something like
$$ \lim_{c\rightarrow \infty} \delta(c T-|\vec X|) = 1 $$
What's going on?
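As an aside, one can check symbolically that the stated $G_1$ is indeed annihilated by the Laplacian away from the origin (a sympy sketch, my own addition, not part of the original question):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
G1 = 1 / (4 * sp.pi * r)

# Laplacian of G1; vanishes identically for r != 0.
laplacian = sum(sp.diff(G1, s, 2) for s in (x, y, z))
print(sp.simplify(laplacian))  # 0
```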
Answer: The wave equation is a partial differential equation for a field in space and time; the Laplace equation is an equation for functions of space only. The natures of these equations are really different. However, if you have a Green's function of the wave equation that vanishes at infinite time, and whose derivative also vanishes, you may integrate from $t=0$ to $t=\infty$ and get a relation that could be what you want.
Note that this would not work in dimensions 1 or 2 because the Green's functions of the wave equation do not satisfy these prerequisites. | {
"domain": "physics.stackexchange",
"id": 13246,
"tags": "greens-functions"
} |