| anchor | positive | source |
|---|---|---|
How do entangled qubits pass through a single-qubit gate? | Question: How do entangled qubits pass through a single-qubit gate?
For example, I initialize two qubits $|0\rangle\otimes|0\rangle$, then the first qubit passes through an $H$ gate to put it in the superposition state $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$, and then a CNOT gate is applied. The state finally becomes $\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$
Then I put the first qubit through an $X$ gate. What will happen? Will it become $\frac{1}{\sqrt{2}}(|10\rangle+|01\rangle)$? If so, how is it possible for the two entangled qubits to pass through a single-qubit gate? If not, what will happen in this situation?
Answer: Here are the steps in details:
$$H\otimes I |0\rangle \otimes |0\rangle = H |0\rangle \otimes I|0\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle \right) \otimes |0\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |10\rangle \right)$$
$$CNOT \frac{1}{\sqrt{2}}\left(|00\rangle + |10\rangle \right) = \frac{1}{\sqrt{2}}\left(CNOT|00\rangle + CNOT|10\rangle \right) = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle \right)$$
$$X \otimes I \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle \right) = \frac{1}{\sqrt{2}}\left(X |0\rangle \otimes I|0\rangle + X |1\rangle \otimes I|1\rangle \right) = \frac{1}{\sqrt{2}}\left(|10\rangle + |01\rangle \right)$$
So, we don't apply an $X$ operator to the two-qubit state; instead, we actually apply the $X \otimes I$ operator or $I \otimes X$, depending on which qubit $X$ is applied to. | {
"domain": "quantumcomputing.stackexchange",
"id": 1503,
"tags": "quantum-gate, quantum-state, entanglement"
} |
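The algebra in the answer above can be checked numerically. A minimal NumPy sketch (the variable names are my own; the matrices are the standard gate definitions, with qubit 0 as the CNOT control):

```python
import numpy as np

# Standard single-qubit gates
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
# CNOT with qubit 0 as control, qubit 1 as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket00 = np.array([1, 0, 0, 0], dtype=float)  # |00>

# Follow the answer's steps: (H x I), then CNOT, then (X x I)
bell = CNOT @ (np.kron(H, I) @ ket00)  # (|00> + |11>)/sqrt(2)
flipped = np.kron(X, I) @ bell         # (|10> + |01>)/sqrt(2)

print(np.round(bell, 3))     # amplitudes ~0.707 on |00> and |11>
print(np.round(flipped, 3))  # amplitudes ~0.707 on |01> and |10>
```

Note that the two-qubit operator is built with the Kronecker product, exactly as the answer says: the single-qubit gate never acts "alone" on an entangled state.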
When sunlight bounces off the Earth, why isn't the entire spectrum reflected rather than just the infrared portion? | Question: I've read that greenhouse gases absorb and reemit sunlight, and that the infrared portion is what bounces off Earth back to space. When sunlight bounces off the Earth, why isn't the entire spectrum reflected rather than just the infrared portion?
Answer: The reflectivity of the atmosphere, and of the surface itself, is strongly wavelength-sensitive. So while some percentage of any given wavelength is reflected -- and some percentage is absorbed rather than transmitted -- the variation over wavelength is what leads to the somewhat misleading statement you refer to. An example curve of atmospheric absorption can be seen at Wikipedia.
There are also curves of reflectance. $transmittance+absorptance+reflectance = 1$, in case you were wondering :-) .
The reason all this matters is that shorter-wave energy, e.g. visible and some UV, that is absorbed either in the atmosphere or by the ground, is re-emitted at different wavelengths in accordance with black-body theory. In general this leads to a lot of IR-radiation, so if the atmosphere is reflective at these wavelengths, the energy is retained rather than re-emitted to space. | {
"domain": "physics.stackexchange",
"id": 27940,
"tags": "thermodynamics, electromagnetic-radiation, earth, climate-science"
} |
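The black-body point in the answer can be made quantitative with Wien's displacement law. A short sketch (the temperatures, roughly those of the Sun's photosphere and Earth's surface, are illustrative values I am supplying, not figures from the answer):

```python
# Wien's displacement law: lambda_peak = b / T, with b ~ 2.898e-3 m*K.
WIEN_B = 2.898e-3  # m*K

def peak_wavelength_um(T_kelvin):
    """Black-body peak emission wavelength in micrometres."""
    return WIEN_B / T_kelvin * 1e6

sun = peak_wavelength_um(5778)   # ~0.5 um: visible light
earth = peak_wavelength_um(288)  # ~10 um: thermal infrared
print(f"Sun peaks near {sun:.2f} um, Earth near {earth:.1f} um")
```

This is why absorbed shortwave (visible/UV) energy comes back out as longwave infrared, where the atmosphere behaves very differently.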
Projection of spin | Question: I have a naive and stupid question.
In university we learn that particles such as electrons have spin $1/2$. This spin can have 2 projections on some axis that we choose, with the values $1/2$ and $-1/2$. We also learn at the beginning of QM that if a particle can be in some states, it can also be in a linear combination of those states.
So my first question is: can the projection of a spin-$1/2$ on an axis be any number between $-1/2$ and $1/2$? My second question: if the particle has momentum in the $z$ direction, can the spin projection on the $z$-axis be $0$?
Answer: A particle can be in a superposition of two spin states (with respect to a chosen quantization axis),
$$|\psi\rangle = c_1|\uparrow\rangle + c_2|\downarrow\rangle.$$
In every measurement we obtain either $+1/2$ or $-1/2$. However, the mean of these measurements (which approaches the expectation value) can be anywhere in between.
In the non-relativistic case the momentum is not coupled to the spin; that is, the momentum can be zero, but the spin will still behave as described above. In the relativistic case (or if we include spin-orbit coupling, which is usually due to relativistic effects) the spin and the momentum are not independent, so one usually chooses the direction of momentum as the quantization axis. | {
"domain": "physics.stackexchange",
"id": 86037,
"tags": "quantum-mechanics, quantum-spin"
} |
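The point about the expectation value can be illustrated numerically. A sketch in units of $\hbar$ (the function and variable names are my own):

```python
import numpy as np

Sz = 0.5 * np.array([[1, 0], [0, -1]])  # S_z in units of hbar

def expectation_sz(c1, c2):
    """<psi|S_z|psi> for psi = c1|up> + c2|down>, normalized first."""
    psi = np.array([c1, c2], dtype=complex)
    psi /= np.linalg.norm(psi)
    return np.real(np.conj(psi) @ Sz @ psi)

print(expectation_sz(1, 0))  # 0.5: pure spin-up
print(expectation_sz(1, 1))  # 0.0: equal superposition
print(expectation_sz(1, 2))  # -0.3: anywhere in [-0.5, 0.5] is possible
```

Individual measurements still yield only $\pm 1/2$; only the average takes intermediate values.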
Do stars tend to fuse all hydrogen even when the mass is small? | Question: Suppose a brown dwarf requires only one more hydrogen atom to become massive enough to start fusion, what will happen if we add a hydrogen atom to it?
It fuses until the mass drops below the threshold and then fusion suddenly stops
It starts to fuse until all the hydrogen is used up
Which one is correct?
Answer: The difference between a brown dwarf and a star is not a sharp boundary. A brown dwarf is simply a ball of gas where the (small) fusion rate is incapable of providing a significant fraction of the luminosity (which is mainly provided by gravitational contraction).
A star will contract and reach a minimum luminosity, whilst a brown dwarf's luminosity will monotonically decrease throughout its life. The core becomes degenerate and it is this that provides the pressure that supports a brown dwarf, even though it becomes colder.
If you add some mass to a brown dwarf, then the result depends on when you add it. If you add it early, during the strong contraction phase, then you might get a low-mass star. If it is added after the brown dwarf is degenerate, then it could contract further, but do so without increasing the temperature enough to ignite hydrogen strongly. This is an interesting problem that deserves a model simulation!
If fusion can start to any degree, then convective mixing in these very low mass objects will very gradually (trillions of years), turn nearly all their H to He. | {
"domain": "astronomy.stackexchange",
"id": 1566,
"tags": "star"
} |
What are the wave functions for benzene intermediate energy states? | Question: I have a problem finding wave function solutions for the 6-carbon ring system, benzene. To get the energy levels it is necessary to solve the equation that sets the secular determinant to zero. Skipping some steps, the latter is
\begin{array}{|cccccc|} x & 1 & 0 & 0 & 0 & 1 \\ 1 & x & 1 & 0 & 0 & 0 \\ 0 & 1 & x & 1 & 0 & 0 \\ 0 & 0 & 1 & x & 1 & 0 \\ 0 & 0 & 0 & 1 & x & 1 \\ 1 & 0 & 0 & 0 & 1 & x \end{array} = 0
where $x = (a-E)/b$,
$a$ denotes the diagonal elements of the Hamiltonian, $b$ the elements adjacent to the diagonal together with the $H_{1,6}$ and $H_{6,1}$ elements, and $E$ is the energy.
The equation has solutions $x = -2$, $x = 2$, $x = -1$ (doubly degenerate) and $x = 1$ (doubly degenerate).
The values $x = -2$ and $x = 2$ correspond to the maximum and minimum energy states.
My question arises for the $x = -1$ and $x = +1$ states. Since they are doubly degenerate, if one wants to find the wave function, one should use the same secular matrix to do so, and the solution will have two parameters. If one supposes that the wave function is normalized, then there is still one parameter left.
So the question is: where is the catch? Why is there no exact solution for the wave function? Why is there a parameter left?
Answer: When there are degeneracies (as in the energy here), a linear combination of the vectors with the same eigenvalue is also an eigenvector. There is no reason why any one linear combination should occur. In general, this is resolved by using a second operator that commutes with the Hamiltonian, and choosing the eigenvectors to be eigenvectors of both operators.
In the hydrogen atom, for instance, there are 4 states with energy $-13.6/4$ eV. To refine the labelling of the states and pinpoint the $4$ states completely, one uses the total angular momentum $L^2$ and the $z$-projection $L_z$ to supply additional quantum numbers $\ell$ and $m$ so the states can be fully labelled. Both $L^2$ and $L_z$ commute with the Hamiltonian.
In your example, you need another operator to "split" the degeneracy of energy. This operator must commute with the Hamiltonian. One possible operator to investigate is as follows.
Since the ring benzene system is unchanged by "rotating" the ring by one atom to the right, the transformation $X_i\to X_{i+1}$, where $X_i$ is the position of atom $i$ in the ring, may supply the commuting operator you need.
The matrix representation $P$ of this operation is a band matrix with $1$ just below the diagonal and a $1$ in the upper right position, where the cyclicity condition is enforced. By definition it will commute with the Hamiltonian matrix, and so will share common eigenvectors with it. Since no eigenvalue of $P$ is degenerate, the eigenvectors of $P$ are uniquely determined and will also be eigenvectors of $H$.
Physically, this amounts to enforcing the condition that your solutions should be invariant (up to an overall phase) when all atoms are "rotated" to their immediate neighbors. The eigenvalues of $P$ come in conjugate pairs: you could redo this using the symmetry $X_i\to X_{i-1}$ and the appearance of conjugate pairs of eigenvalues of $P$ can be intuitively understood as a consequence of the two possible $X_{i}\to X_{i\pm 1}$ symmetry transformations. | {
"domain": "physics.stackexchange",
"id": 38101,
"tags": "quantum-mechanics, homework-and-exercises, quantum-chemistry"
} |
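The spectrum and the commutation claim are easy to verify numerically. A sketch assuming $a = 0$ and $b = 1$, so that the secular roots $x$ are just the negatives of the eigenvalues of the 6-ring adjacency matrix (the variable names are my own):

```python
import numpy as np

# Hueckel matrix of the 6-ring in units where a = 0, b = 1
n = 6
Hm = np.zeros((n, n))
for i in range(n):
    Hm[i, (i + 1) % n] = 1
    Hm[(i + 1) % n, i] = 1

# Cyclic shift X_i -> X_{i+1}: ones just below the diagonal
# plus the corner entry enforcing cyclicity
P = np.roll(np.eye(n), 1, axis=0)

assert np.allclose(P @ Hm, Hm @ P)  # P commutes with H

eig = np.sort(np.linalg.eigvalsh(Hm))
print(np.round(eig, 6))  # eigenvalues: -2, -1, -1, 1, 1, 2
```

The doubly degenerate pairs at $\pm 1$ appear exactly as the question describes, and diagonalizing the non-degenerate $P$ instead picks out unique common eigenvectors.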
Convert depth image to sensor_msgs::PointCloud2 | Question:
Hi all,
I have an Asus Xtion Pro RGB-D camera, I'm using this node to acquire images:
https://github.com/xqms/xtion
which has tremendous performance improvements w.r.t. OpenNI2.
To convert the depth image into a point cloud I'm following the draft tutorial in:
http://docs.ros.org/jade/api/sensor_msgs/html/namespacesensor__msgs.html#details
So far, that's what I wrote:
void frameCallback(const sensor_msgs::ImageConstPtr& msg) {
if (!gotInfo) return;
vector<float> points;
cv_bridge::CvImageConstPtr image = cv_bridge::toCvShare(msg);
for(int i =0;i<image->image.rows;i++){
const ushort* row_ptr = image->image.ptr<ushort>(i);
for(int j=0;j<image->image.cols;j++){
ushort id=row_ptr[j];
if(id!=0){
float d=depth_scale*id;
Eigen::Vector3f image_point(j*d,i*d,d);
Eigen::Vector3f camera_point=invK*image_point;
points.push_back(camera_point.x());
points.push_back(camera_point.y());
points.push_back(camera_point.z());
}
}
}
int n_points = points.size();
// Create a PointCloud2
sensor_msgs::PointCloud2 cloud_msg;
sensor_msgs::PointCloud2Modifier modifier(cloud_msg);
modifier.setPointCloud2Fields(3, "x", 1, sensor_msgs::PointField::FLOAT32,
"y", 1, sensor_msgs::PointField::FLOAT32,
"z", 1, sensor_msgs::PointField::FLOAT32);
modifier.setPointCloud2FieldsByString(1, "xyz");
modifier.resize(n_points);
sensor_msgs::PointCloud2Iterator<float> iter_x(cloud_msg, "x");
sensor_msgs::PointCloud2Iterator<float> iter_y(cloud_msg, "y");
sensor_msgs::PointCloud2Iterator<float> iter_z(cloud_msg, "z");
cloud_msg.height = 1;
cloud_msg.width = n_points;
cloud_msg.header.frame_id = msg->header.frame_id;
cloud_msg.header.seq = msg->header.seq;
cloud_msg.header.stamp = msg->header.stamp;
for(size_t i=0; i<n_points; ++i, ++iter_x, ++iter_y, ++iter_z){
*iter_x = points[3*i+0];
*iter_y = points[3*i+1];
*iter_z = points[3*i+2];
cerr << *iter_x << " " << *iter_y << " " << *iter_z << endl;
}
cloud_pub.publish(cloud_msg);
std::cerr << ".";
}
but in RViz I see three point clouds, and if I print the created points I see that they are triplicated.
I'm quite sure that the bug is in the use of the sensor_msgs::PointCloud2Iterator, but since the tutorial I mentioned doesn't give better clues on how to use them I'm stuck.
Does anyone know the correct way to populate the sensor_msgs::PointCloud2?
Thanks
Originally posted by schizzz8 on ROS Answers with karma: 183 on 2017-05-15
Post score: 0
Original comments
Comment by Overseer on 2019-04-15:
mind me asking what does the depth_scale variable refer to?
Comment by schizzz8 on 2019-05-02:
It's a conversion factor; it's needed when depth images are stored as CV_16U OpenCV matrices.
It should be depth_scale = 1e-3
Comment by Brolin on 2020-09-29:
Hi, can someone tell me what is invK. I am trying to understand the calculation and how the code works.
Comment by schizzz8 on 2020-10-05:
@Brolin invK is the inverse of the camera matrix K: http://ksimek.github.io/2013/08/13/intrinsic/
Comment by Brolin on 2020-10-05:
@schizzz8 so invK is the inverse of the camera matrix right !!
Answer:
I know this is really late and I'm sure you would have figured this out, but maybe this will benefit someone else in the future.
The error in your code is where you iterate over the points vector. Now since n_points = points.size(), you should only iterate until i < n_points/3 not until i < n_points since you are later multiplying i with 3 during the access.
Originally posted by thesidjway with karma: 26 on 2018-03-01
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 27903,
"tags": "ros, sensor-msgs, xtion, depth-image, pointcloud"
} |
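The off-by-three indexing that the accepted answer describes is language-independent. A minimal Python sketch (not ROS code; the data is made up) of why a flat x,y,z buffer holds three values per point, so the loop bound must be the length divided by three:

```python
# `points` stores flat x,y,z triples: 3 values per point.
points = [0.0, 0.1, 0.2,   # point 0
          1.0, 1.1, 1.2,   # point 1
          2.0, 2.1, 2.2]   # point 2

n_values = len(points)    # 9 -- what the original code used as "n_points"
n_points = n_values // 3  # 3 -- the actual number of points

# Iterating to n_values while indexing 3*i would run past the end
# (or, with saturating iterators, triplicate the cloud).
cloud = [(points[3*i], points[3*i + 1], points[3*i + 2])
         for i in range(n_points)]
print(cloud)  # [(0.0, 0.1, 0.2), (1.0, 1.1, 1.2), (2.0, 2.1, 2.2)]
```

The same count (`n_values // 3`, not `n_values`) should be used wherever the number of points is needed, e.g. for the cloud's width.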
Listing a number's prime factors | Question: I wrote a little code to list a number's prime factors:
import java.util.Scanner;
import java.util.Vector;
public class Factorise2
{
public static Vector<Integer> get_prime_factors(int number)
{
//Get the absolute value so that the algorithm works for negative numbers
int absoluteNumber = Math.abs(number);
Vector<Integer> primefactors = new Vector<Integer>();
//Get the square root so that we can break earlier if it's prime
for (int j = 2; j <= absoluteNumber;)
{
//Test for divisibility by j
if (absoluteNumber % j == 0)
{
primefactors.add(j);
absoluteNumber /= j;
if (j > (int)Math.sqrt(absoluteNumber))
{
break;
}
}
else j++;
}
return primefactors;
}
public static void main(String[] args)
{
//Declare and initialise variables
int number;
int count = 1;
Scanner scan = new Scanner(System.in);
//Get a number to work with
System.out.println("Enter integer to analyse:");
number = scan.nextInt();
//Get the prime factors of the number
Vector<Integer> primefactors = get_prime_factors(number);
//Group the factors together and display them on the screen
System.out.print("Prime factors of " + number + " are ");
primefactors.add(0);
for (int a = 0; a < primefactors.size() - 1; a++)
{
if (primefactors.elementAt(a) == primefactors.elementAt(a+1))
{
count++;
}
else
{
System.out.print(primefactors.elementAt(a) + " (" + count + ") ");
count = 1;
}
}
}
}
I decided that I would try to optimise the algorithm, by skipping testing for divisibility with composite numbers.
import java.util.Scanner;
import java.util.Vector;
public class Factorise2
{
public static Vector<Integer> get_prime_factors(int number)
{
//Get the absolute value so that the algorithm works for negative numbers
int absoluteNumber = Math.abs(number);
Vector<Integer> primefactors = new Vector<Integer>();
Vector<Integer> newprimes = new Vector<Integer>();
boolean newprime = true;
int b;
//Get the square root so that we can break earlier if it's prime
for (int j = 2; j <= absoluteNumber;)
{
//Test for divisibility by j, and add to the list of prime factors if it's divisible.
if (absoluteNumber % j == 0)
{
primefactors.add(j);
absoluteNumber /= j;
if (newprime && j > (int)Math.sqrt(absoluteNumber))
{
break;
}
newprime = false;
}
else
{
for (int a = 0; a < newprimes.size();)
{
//Change j to the next prime
b = newprimes.elementAt(a);
if (j % b == 0)
{
j++;
a = 0;
}
else
{
a++;
}
}
//Add j as a new known prime;
newprimes.add(j);
newprime = true;
}
}
return primefactors;
}
public static void main(String[] args)
{
//Declare and initialise variables
int number;
int count = 1;
Scanner scan = new Scanner(System.in);
//Get a number to work with
System.out.println("Enter integer to analyse:");
number = scan.nextInt();
//Get the prime factors of the number
Vector<Integer> primefactors = get_prime_factors(number);
//Group the factors together and display them on the screen
System.out.print("Prime factors of " + number + " are ");
primefactors.add(0);
for (int a = 0; a < primefactors.size() - 1; a++)
{
if (primefactors.elementAt(a) == primefactors.elementAt(a+1))
{
count++;
}
else
{
System.out.print(primefactors.elementAt(a) + " (" + count + ") ");
count = 1;
}
}
}
}
I can't see anything that I have done wrong, but it is much slower. On 9876103, for example, it takes too long to wait for it to report back that its only prime factor is itself. Can anyone see why it is eating CPU cycles?
Answer:
I decided that I would try to optimise the algorithm, by skipping testing for divisibility with composite numbers.
That is only worthwhile if you factorise a lot of numbers. And then you need to remember the list of known primes between different factorisations.
In your case, the change is a massive pessimisation, because now you check each potential divisor for primality, which in the best case takes one division, and in the worst case about 2*sqrt(j)/log(j) divisions. The worst case, which is common enough, takes much much more time than a simple division by j to check whether j is a divisor.
You have changed the algorithm from O(sqrt(n)) complexity for the simple trial division to about O(n^0.75) (ignoring logarithmic factors) in good cases, and about O(n^1.5) in the worst case (when n is a prime). | {
"domain": "codereview.stackexchange",
"id": 2742,
"tags": "java, performance, beginner, primes, factors"
} |
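For reference, the simple $O(\sqrt n)$ trial division that the answer recommends keeping can be sketched as follows (a Python sketch, not the poster's Java; the function name is my own):

```python
def prime_factors(n):
    """Plain O(sqrt(n)) trial division."""
    n = abs(n)  # handle negative input like the original code
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains above sqrt(n) is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))      # [2, 2, 2, 3, 3, 5]
print(prime_factors(9876103))  # the number from the question
```

The divisor `d` only needs to run up to $\sqrt n$, and testing each candidate divisor for primality first (as the second listing does) costs far more than the single division it tries to avoid.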
The task of recognizing game units in the screenshot | Question: I'm new to computer vision and I want to solve the task of recognizing the game units of the game Clash Royale in the screenshot.
Briefly, there are about 70 different types of game units belonging to two teams (the teams differ a little in color, and some units are seen from the front, others from the back). I want to find the game units in the screenshot and classify each one's unit type (and then its health and team).
What are the best tools for a task like this? Which libraries would let me train a model simply? How many training examples do I need? The quality of the screenshots is quite good and the images are clear. To what size should I reduce the screenshot to get a good model and good speed? Maybe someone has had a similar experience?
I am thinking about a CNN or many Haar cascades, one for each unit type, but I would like some advice.
Answer: Machine learning doesn't seem needed, since the image of each unit is always identical. One approach is to obtain a clean image of each unit, and then use template matching to find all locations where the template occurs in the screenshot. You might be able to check, for every location in the screenshot, whether it exactly matches the template image (you will probably want to mask out the part where the health appears, as that will obscure the image). This can be done very efficiently.
To recover the health, you might also be able to do template matching on the numbers 0-10.
The way to tell what resolution you can downsize to is empirically: you experiment with different resolutions and measure the accuracy for each. There's probably no way to predict without trying it.
If you want to try machine learning, you could try a CNN, or better yet, you could try retraining the YOLOv3 object detector for your particular images (see, e.g., the YOLO project page). You will probably need a larger training set this way, and you'll have to do a bunch of manual annotation of sample screenshots.
Tool and library recommendations are off-topic here. | {
"domain": "cs.stackexchange",
"id": 11805,
"tags": "machine-learning, artificial-intelligence, computer-vision, image-processing, computer-games"
} |
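The exact-match idea can be sketched without any computer-vision library. A naive pure-NumPy illustration (a real implementation would use an optimized routine such as OpenCV's template matching; all names and data here are made up):

```python
import numpy as np

def find_exact_matches(image, template):
    """Return (row, col) of every location where `template` occurs exactly."""
    ih, iw = image.shape
    th, tw = template.shape
    hits = []
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            if np.array_equal(image[r:r+th, c:c+tw], template):
                hits.append((r, c))
    return hits

# Build a fake "screenshot" and paste a fake "unit sprite" into it
rng = np.random.default_rng(0)
screenshot = rng.integers(0, 255, size=(40, 60))
unit = rng.integers(0, 255, size=(5, 5))
screenshot[10:15, 20:25] = unit

matches = find_exact_matches(screenshot, unit)
print(matches)  # the pasted location, (10, 20)
```

Masking out the health-bar region would amount to comparing only a boolean subset of the template pixels instead of the full window.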
Geometrical Optics: Infinite Rays | Question: Normally in ray optics, we draw a ray parallel to the axis from the top of the object to the lens and stop where this ray intersects another, angled ray (also drawn from the top of the object). However, why do we stop? We can draw infinite rays from this object and they should be able to go out for infinitely far distances. Thus, we should be able to get a huge image (albeit a bit darker).
Answer: An image is formed when there is a one to one correspondence between a point on the object and a point in space where all light rays emitted from that point on the object meet up. In other words, if we find that different light rays from the same point of the object end up meeting up at different points in space after passing through the lens, then we don't have an image there (or at least it will be pretty blurry).
This is why you typically see one arrow representing the object and then we look at light rays coming from the top. We pick rays that are easy to use and find where those rays intersect after going through the lens. This then means that the rest of the object must be in focus since it is represented by just a line that is perpendicular to the optical axis.
The problem with going out farther than where the image is formed is that all of the rays from that point on the object won't all go to that farther point. Hence no image will be out there. | {
"domain": "physics.stackexchange",
"id": 57316,
"tags": "newtonian-mechanics, electromagnetism, optics, geometric-optics"
} |
What is the oxidation state of oxygen in hydrogen peroxide? | Question: In Hydrogen peroxide, the oxidation number of oxygen is "-1" instead of "-2".
But it seems to me that, the oxygen atoms have '-2' as their oxidation number as each oxygen atom here is connected to a hydrogen atom and an oxygen atom.
So, what's actually happening here?
Answer: Each oxygen atom is connected to a hydrogen atom (which develops a $-1$ charge on the oxygen and $+1$ on the hydrogen) and to another $\textbf{oxygen}$ atom, which contributes no charge to either (0 and 0). The same holds for the other oxygen atom. Hence the oxidation state of each oxygen atom is $-1$ and of each hydrogen atom is $+1$.
In general, in a peroxide linkage, oxygen has a $-1$ oxidation state | {
"domain": "chemistry.stackexchange",
"id": 10914,
"tags": "oxidation-state, hydrogen"
} |
Are Kaons and Pions (mesons) made up of quarks? | Question: I have tried to research this and Google says yes.
But I learnt that pions decay into muons and neutrinos (and the antiparticle versions), which are basically electrons and neutrinos
Which are fundamental particles!
So how are pions and kaons made up of quarks?
Answer: It is true that kaons and pions are made up of quarks, however composite particles that are unstable do not necessarily decay into their constituent particles. The weak interaction is responsible for radioactive decay, and it can change particle flavor. The muon, for instance, decays into an electron and some neutrinos, however the muon is not made of electrons and neutrinos, it isn't really made of anything, but it decays into these particles due to weak interactions. | {
"domain": "physics.stackexchange",
"id": 91784,
"tags": "quarks, mesons, pions"
} |
Minimum Spanning Tree over Vertices Proof | Question: This is the problem:
$d_{T}(v)$ denotes the degree of a vertex in a spanning tree $T$ and $w: V \rightarrow R^+$ is a weight function defined on vertices.
The goal is an algorithm that finds a spanning tree which minimizes the value $\sum_{v \in V}{d_{T}(v)*w(v)}$.
My idea is to define a new weight function on edges in the following way: $m(e_{ij})=w(v_i)+w(v_j)$, i.e. the weight of each edge is the weight of both of its vertices.
Then we run Kruskal on a graph with the given $m$ weight function. The problem is that I have no idea how to prove that it works.
I thought about starting with the expression $\min \sum_{e \in T} m(e)$, which Kruskal yields, and then somehow changing the summation to be over vertices. How can it be done?
Thanks!
Answer: It doesn't matter which minimum spanning tree algorithm you use. All you need to notice is that for a tree $T$, each vertex $i$ contributes $w(v_i)$ once for every tree edge incident to it, so
$$
\begin{align*}
\sum_{\{i,j\} \in T} m(e_{ij}) &= \sum_{\{i,j\} \in T} \bigl( w(v_i) + w(v_j) \bigr) \\ &=
\sum_i d_T(i)\, w(v_i).
\end{align*}
$$
You take it from here. | {
"domain": "cs.stackexchange",
"id": 5434,
"tags": "algorithms, algorithm-analysis, spanning-trees"
} |
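The reduction can be checked on a small instance. A Python sketch of Kruskal on the derived edge weights $m(e_{ij}) = w(v_i) + w(v_j)$, verifying that the tree's edge-weight sum equals $\sum_v d_T(v)\,w(v)$, since each vertex weight is counted once per incident tree edge (the instance is made up):

```python
import itertools

# Hypothetical instance: vertex weights on a complete graph
w = {0: 3.0, 1: 1.0, 2: 4.0, 3: 1.0, 4: 5.0}
edges = [(u, v, w[u] + w[v]) for u, v in itertools.combinations(w, 2)]

# Kruskal with a tiny union-find
parent = {v: v for v in w}
def find(v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]  # path halving
        v = parent[v]
    return v

tree = []
for u, v, m in sorted(edges, key=lambda e: e[2]):
    ru, rv = find(u), find(v)
    if ru != rv:
        parent[ru] = rv
        tree.append((u, v, m))

# Compare sum of m(e) over tree edges with sum_v d_T(v) * w(v)
deg = {v: 0 for v in w}
for u, v, _ in tree:
    deg[u] += 1
    deg[v] += 1
lhs = sum(m for _, _, m in tree)
rhs = sum(deg[v] * w[v] for v in w)
print(lhs, rhs)  # both 17.0 on this instance
```

Because the two objectives agree on every spanning tree, minimizing one minimizes the other.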
How much dark energy will fit in an average cup of coffee? | Question: I am looking for the answer in Joules for obvious reasons.
Answer: Dark energy as expressed by the cosmological constant is, as the name implies, a constant of space. Therefore, in a cup of coffee, we get, for some static observer $t$, and a spacelike hypersurface $\Sigma$ (I'm assuming that in our universe, there exists a neighbourhood that can be foliated in spacelike hypersurfaces large enough to accommodate a coffee cup) on which we do the actual volume integration,
\begin{eqnarray}
E &=& \int_☕ T_{\mu\nu} t^\mu t^\nu d\mu[g_\Sigma]\\
&=& \int_☕ - \frac{c^4}{8\pi G} \Lambda g(t,t) d\mu[g_\Sigma]
\end{eqnarray}
If we consider the cosmological constant as part of the stress-energy tensor, we have $T'_{\mu\nu} = T_{\mu\nu} - \frac{c^4}{8\pi G} \Lambda g_{\mu\nu}$. A cup of coffee is fairly small.
We can without much loss of experimental precision consider some Riemann normal coordinates around the center of the cup, so that $g \approx \eta$ (and, on $\Sigma$, that it is just the Euclidian metric) in the neighbourhood of the cup (Any extra term will be $\approx \mathcal{O}(l^3)$ here, with $l$ the characteristic dimension of the cup). Therefore, picking the canonical static observer $t^\mu = (1,0,0,0)$, this gives us
$$E = \frac{c^4}{8\pi G} \Lambda \int_☕ d^3x = \frac{c^4}{8\pi G} V \Lambda$$
In other words, we just have the volume by the cosmological constant. Given the current Lambda-CDM model of our universe, $\Lambda$ is estimated at
$$\Lambda = 1.1056 \times 10^{-52}\ \text{m}^{-2} $$
Unfortunately, the cosmological constant doesn't seem to have the uncertainty written down. This is due to the fact that in actual cosmology work, people generally use the dark energy density instead, $\Omega_\Lambda$, which we have as (cf particle data group)
$$\Omega_\Lambda = 0.692 \pm 0.012$$
The general formula relating the density parameter to its density, in the $\Lambda$CDM model, is
$$\Omega_\Lambda = \frac{8\pi G \rho_\Lambda(t = t_0)}{3 H_0^2}$$
So
$$\rho_\Lambda(t = t_0) = \frac{3 \Omega_\Lambda H_0^2}{8\pi G}$$
Where we have
\begin{eqnarray}
\pi &=& 3.141592653 \pm 0.0000000005\\
G &=& (6.67408 \pm 0.00031) \times 10^{-11}\ \text{m}^3 \cdot \text{kg}^{-1}\cdot \text{s}^{-2}\\
H_0 &=& (0.2197 \pm 0.027) \times 10^{-17}\ \text{s}^{-1}
\end{eqnarray}
Using rough uncertainty propagation, this gives us
\begin{equation}
(\Delta \rho_\Lambda)^2 = \rho_\Lambda^2 \left[(\frac{\Delta \pi}{\pi})^2 + 4 (\frac{\Delta H_0}{H_0})^2 + (\frac{\Delta \Omega_\Lambda}{\Omega_\Lambda})^2\right]
\end{equation}
so that
\begin{equation}
\rho_\Lambda = (0.59739 \pm 0.0734) \times 10^{-26} \text{m}^{-3} \cdot \text{kg}
\end{equation}
Note that this formula gives a mass density rather than an energy density (the two differ by a factor of $c^2$), so we get
\begin{eqnarray}
\frac{c^4}{8\pi G} \Lambda &=& c^2 \rho_\Lambda \\
&=& (5.36907 \pm 0.65968) \times 10^{-10}\ \text{J}\cdot\text{m}^{-3}
\end{eqnarray}
That's roughly the same value we'd get from our value of $\Lambda$, but with uncertainty.
A medium coffee cup, as shown here, is about $(0.34 \pm 0.0015)\ \text{L}$ (assuming an error of about $0.5\ \text{mm}$ in every dimension), or $(0.34 \pm 0.0015)\times 10^{-3}\ \text{m}^3$, so this gives us
$$E_{\Lambda ☕} = (1.810220805 \pm 0.22255)\times 10^{-13}\ \text{J}$$
We've dragged around a lot of digits for the calculations, now let's cut them off to significant figures : the smallest number of significant figures in our values is the dark energy density, at 3 significant figures. Therefore, we can cut off everything at that point.
$$E_{\Lambda ☕} = (1.81 \pm 0.22)\times 10^{-13}\ \text{J}$$
As an exercise left to the reader, compute the energy as measured by an observer running to a coffee cup with speed $\beta = 0.1$ | {
"domain": "physics.stackexchange",
"id": 60239,
"tags": "homework-and-exercises, dark-energy"
} |
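The headline number can be reproduced directly from the rounded constants quoted above (a sketch; uncertainty propagation omitted):

```python
import math

# Constants as quoted in the answer (rounded)
c = 2.998e8          # m/s
G = 6.67408e-11      # m^3 kg^-1 s^-2
Lam = 1.1056e-52     # m^-2, cosmological constant
V = 0.34e-3          # m^3, coffee cup volume

# E = (c^4 / 8 pi G) * Lambda * V
rho_E = c**4 * Lam / (8 * math.pi * G)  # dark-energy density, J/m^3
E = rho_E * V
print(f"{rho_E:.3e} J/m^3, E = {E:.3e} J")  # ~5.3e-10 J/m^3, ~1.8e-13 J
```

About $10^{-13}$ joules: vastly less energy than the coffee's heat.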
How do you describe a language that is generated by a Context Free Grammar | Question: I am familiar with describing regular expressions, but when it comes to describing CFGs I get confused. Do you describe it in words like you would a regular expression, or do you do something like this?
this is the CFG I am trying to describe
S -> SS
S -> XXX
X -> aX| Xa| b
I was thinking something like this:
S -> SS
-> XXXS
-> aXXXS
-> abXXS
-> abXaXS
-> abbaXS
-> abbabS
-> abbabXXX
-> abbabbXX
-> abbabbbX
-> abbabbbb
Answer: Hint: How many $b$s are there in each word generated by this grammar?
Guidance: What is the language generated by $X$? What is the language generated by $XXX$? What is the language generated by $S$? | {
"domain": "cs.stackexchange",
"id": 2669,
"tags": "context-free"
} |
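The hint can be turned into a concrete conjecture to check. An assumption of mine (not stated in the answer, so verify it yourself): since $X$ generates exactly the strings with one $b$, $S$ generates exactly the strings over $\{a,b\}$ whose number of $b$'s is a positive multiple of $3$. A sketch of the corresponding membership test:

```python
def in_language(s):
    """Conjectured characterization: #b(s) is a positive multiple of 3."""
    return set(s) <= {"a", "b"} and s.count("b") > 0 and s.count("b") % 3 == 0

print(in_language("abbabbbb"))  # True  -- the string derived in the question
print(in_language("ab"))        # False -- only one b
print(in_language("bbb"))       # True  -- S -> XXX -> bbb
```

Describing the language in words like this ("all strings whose number of $b$'s is a positive multiple of 3") is exactly the kind of answer usually expected, rather than a sample derivation.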
Reversing a singly-linked List | Question: Is this the best way to reverse a singly-linked list? Can it be done with two, or fewer pointers? Any other comments?
public class ReverseLL {
Node start;
ReverseLL()
{
start=null;
}
class Node
{
Node next;
int data;
Node(int newData)
{
next=null;
data=newData;
}
public void setData(int newData)
{
data=newData;
}
public int getData()
{
return data;
}
public void setNext(Node n)
{
next=n;
}
public Node getNext()
{
return next;
}
}
public void insert(int newData)
{
Node p=new Node(newData);
if(start==null)
{
start=p;
}
else
{
Node temp=start;
while(temp.getNext()!=null)
{
temp=temp.getNext();
}
temp.setNext(p);
}
}
public void reverse()
{
Node temp=start;
Node previous=null;
Node previous1=null;
while(temp.getNext()!=null)
{
if(temp==start)
{
previous=temp;
temp=temp.getNext();
previous.setNext(null);
}
else
{
previous1=temp;
temp=temp.getNext();
previous1.setNext(previous);
previous=previous1;
}
}
temp.setNext(previous);
start=temp;
}
public void display() {
int count = 0;
if(start == null) {
System.out.println("\n List is empty !!");
} else {
Node temp = start;
while(temp.getNext() != null) {
System.out.println("count("+count+") , node value="+temp.getData());
count++;
temp = temp.getNext();
}
System.out.println("count("+count+") , node value="+temp.getData());
}
}
public static void main(String args[])
{
ReverseLL ll=new ReverseLL();
ll.insert(1);
ll.insert(2);
ll.insert(3);
ll.insert(4);
ll.insert(5);
ll.insert(6);
ll.insert(7);
ll.insert(8);
ll.display();
System.out.println();
ll.reverse();
ll.display();
}
}
Answer: This is a nice, clean implementation of a Linked list... Generally a good job.
You have a bug in your reverse method, a NullPointerException when the list is empty. There is an easy fix, but you should be aware.
I also had a look at your reverse method. I cannot see a way to do it with fewer than 3 variables, while still keeping the logic readable. I am not particularly fond of your implementation... The distinct if/else condition makes the internal logic cumbersome. It makes things easier if you consider the process to be closer to a swap... we want to swap the direction of the pointer between nodes.
So, the logic is, for three nodes A->B->C, we want to make B point to A, but, we have to remember that C comes after B before we reverse the pointer. Then we have to make C point to B, becoming A<-B<-C
But, we have a couple of loose ends (pun is intended)... we have the start pointer which points at A, and A is pointing at B still, So, we need to remove the now-redundant A->B pointer, and also move start to point at C..... All so complicated, but it boils down to a simple loop:
public void reverse() {
if (start == null) {
return;
}
Node current = start;
Node after = start.next;
while (after != null) {
Node tmp = after.next; // preserve what will come later.
after.next = current; // reverse the pointer
current = after; // advance the cursor
after = tmp; // the node after is the one preserved earlier.
}
start.next = null; // null-out next on what was the start element
start = current; // move the start to what was the end.
}
This, to me, is much more readable than the conditional logic you had. It does use three pointers in addition to the start.
If you want to, you can probably find a way to do it with one less pointer, but that is by hacking the start pointer and using it as a tracker in the loop (probably instead of current); the readability and simplicity will suffer if you do that.
Note also that Java coding convention puts the { open brace at the end of the line containing the conditional block.
Finally, at the risk of adding a little complexity to your code, most general-purpose Linked Lists in 'real' applications have an O(1) mechanism for getting the List size. If you have a custom purpose for the list where the size is not important, you can skip that, but, you should otherwise consider adding a size field so you can avoid doing a full iteration to get the size.
Another Finally, The Java Iterator concept is a very common idiom. It is surprisingly complicated though to get your implementation to match the specification. I strongly recommend that you take it upon yourself to make your List iterable, and to make sure your Iterator implementation conforms to the specification (especially the conditions under which the iterator throws exceptions).
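To make that Iterator contract concrete, here is a hedged, illustrative sketch in Python (not the Java implementation itself; names are hypothetical). The key points that carry over to Java are that the iterator holds nothing but a cursor, and that exhaustion is signalled explicitly (StopIteration here plays the role of Java's NoSuchElementException):

```python
class Node:
    """Minimal singly linked node, mirroring the list under review."""
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node

class LinkedListIterator:
    """Walks nodes from a given start node; raises on exhaustion
    instead of returning a sentinel value."""
    def __init__(self, start):
        self._cursor = start

    def __iter__(self):
        return self

    def __next__(self):
        if self._cursor is None:
            raise StopIteration  # Java analogue: throw NoSuchElementException
        data = self._cursor.data
        self._cursor = self._cursor.next
        return data

# build 1 -> 2 -> 3 and iterate
head = Node(1, Node(2, Node(3)))
values = list(LinkedListIterator(head))
```

The same shape works for an empty list: a None start yields nothing rather than crashing.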
I also extended your main method to do a few more tests than you have:
public static void main(String args[]) {
ReverseLL ll=new ReverseLL();
ll.reverse();
ll.display();
System.out.println();
ll.insert(1);
ll.reverse();
ll.display();
System.out.println();
ll.insert(2);
ll.reverse();
ll.display();
System.out.println();
ll.reverse();
ll.display();
System.out.println();
ll.insert(3);
ll.insert(4);
ll.insert(5);
ll.insert(6);
ll.insert(7);
ll.insert(8);
ll.display();
System.out.println();
ll.reverse();
ll.display();
System.out.println();
} | {
"domain": "codereview.stackexchange",
"id": 34736,
"tags": "java, linked-list"
} |
test topic with gtest | Question:
hi,
I want to test a topic: what it actually publishes, and, when something changes, what the topic then publishes. My problem is that I need to declare a callback function in the subscribe() member function of the NodeHandle.
What is the "best" way to handle that? I mean, should the main TEST() function loop over a global variable until the callback has changed that global var, and then end?
I know ros.org/wiki/rostest/Nodes, but it's written in Python and I want to write it in C++. Perhaps someone knows a tutorial?
thanks
flo
Originally posted by inflo on ROS Answers with karma: 406 on 2013-03-21
Post score: 1
Answer:
That's effectively what you need. You shouldn't need to do it in a global variable. You can always use another container class and pass content that way.
Originally posted by tfoote with karma: 58457 on 2013-06-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13477,
"tags": "ros, topic, gtest"
} |
roscore: libconsole_bridge.so.0.2: cannot open shared object file | Question:
Hi,
I have ROS (Ros_Comm) installed on my beaglebone black with debian wheezy.
While running roscore I get:
/opt/ros/indigo/lib/rosout/rosout: error while loading shared libraries: libconsole_bridge.so.0.2: cannot open shared object file: No such file or directory
While I have libconsole-bridge-dev installed.
While installing that I got the message:
-- Installing: /usr/local/lib/arm-linux-gnueabihf/libconsole_bridge.so.0.2.
And nothing failed.
But in the rosout file :
libconsole_bridge.so.0.2 => not found is stated.
Does someone know how I can fix this? I think it is because libconsole is located at /usr/local/lib.
I tried editing LD_LIBRARY_PATH to /opt/ros/indigo/lib:/opt/ros/indigo/lib/arm-linux-gnueabihf:/usr/local/lib/arm-linux-gnueabihf, but that doesn't seem to help.
And I can't seem to edit the rosout file.
Greetings
Kenavera
Originally posted by Kenavera on ROS Answers with karma: 56 on 2016-01-14
Post score: 0
Answer:
Well the
LD_LIBRARY_PATH=/opt/ros/indigo/lib:/opt/ros/indigo/lib/arm-linux-gnueabihf:/usr/local/lib/arm-linux-gnueabihf
did fix it. The system was just acting up for a bit; now it works perfectly.
Originally posted by Kenavera with karma: 56 on 2016-01-14
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 23438,
"tags": "ros, beagleboneblack, wheezy"
} |
Forward List C++ | Question: I have written a Forward List in C++ for learning purposes. There are a few things still to be added, but do you have any thoughts on the code as it stands?
ForwardList.h
#pragma once
#include <assert.h>
#include <iterator>
#include <type_traits>
namespace PrimLibrary
{
// base node - used as before head node
template <class T>
struct ForwardList_NodeBase
{
ForwardList_NodeBase* next;
ForwardList_NodeBase(ForwardList_NodeBase* _next = nullptr) :
next{ _next }
{
}
};
template <class T>
struct ForwardList_Node : public ForwardList_NodeBase<T>
{
T data;
ForwardList_Node(const T& _data, ForwardList_Node* _next = nullptr) :
ForwardList_NodeBase{ _next },
data{ _data }
{
}
};
template <class T, class UnqualifiedType = std::remove_cv_t<T>>
class ForwardListIterator : public std::iterator<std::forward_iterator_tag, UnqualifiedType, std::ptrdiff_t, T*, T&>
{
public:
explicit ForwardListIterator(ForwardList_NodeBase<UnqualifiedType>* node) :
_itr{ node }
{
}
ForwardListIterator(const ForwardListIterator&) = default;
ForwardListIterator(const ForwardListIterator&& rhs) noexcept :
_itr{ rhs._itr }
{
rhs._itr = nullptr;
}
ForwardListIterator& operator=(const ForwardListIterator& rhs)
{
_itr = rhs._itr;
return *this;
}
ForwardListIterator& operator=(const ForwardListIterator&& rhs) noexcept
{
std::swap(_itr, rhs._itr);
return *this;
}
ForwardListIterator& operator++()
{
assert(_itr != nullptr && "Iterator out-of-bounds.");
_itr = static_cast<ForwardList_Node<UnqualifiedType>*>(_itr)->next;
return *this;
}
ForwardListIterator& operator++(int)
{
assert(_itr != nullptr && "Iterator out-of-bounds.");
auto tmp(*this);
_itr = static_cast<ForwardList_Node<UnqualifiedType>*>(_itr)->next;
return *this;
}
T& operator*() const
{
assert(_itr != nullptr && "Iterator out-of-bounds.");
return static_cast<ForwardList_Node<UnqualifiedType>*>(_itr)->data;
}
T& operator->() const
{
assert(_itr != nullptr && "Iterator out-of-bounds.");
return static_cast<ForwardList_Node<UnqualifiedType>*>(_itr)->data;
}
bool operator==(const ForwardListIterator& rhs) const
{
return _itr == rhs._itr;
}
bool operator!=(const ForwardListIterator& rhs) const
{
return !(*this == rhs);
}
ForwardList_NodeBase<UnqualifiedType>* getNode() const
{
return _itr;
}
private:
ForwardList_NodeBase<UnqualifiedType>* _itr;
};
template <class T>
class ForwardList
{
public:
using iterator = ForwardListIterator<T>;
using const_iterator = ForwardListIterator<const T>;
ForwardList();
~ForwardList();
ForwardList(std::initializer_list<T> il);
template<class InputIterator>
ForwardList(InputIterator begin, InputIterator end);
ForwardList(const ForwardList& rhs);
ForwardList(ForwardList&& rhs) noexcept;
ForwardList& operator=(const ForwardList& rhs);
ForwardList& operator=(ForwardList&& rhs) noexcept;
// TODO: operator override;
// TODO: additional modifiers
T& front() { assert(_beforeBegin.next != nullptr && "No data to get - empty list"); return static_cast<ForwardList_Node<T>*>(_beforeBegin.next)->data; }
const T& front() const { assert(_beforeBegin.next != nullptr && "No data to get - empty list"); return static_cast<ForwardList_Node<T>*>(_beforeBegin.next)->data; }
// Iterators
iterator begin() { return iterator{ _beforeBegin.next }; }
iterator end() { return iterator{ nullptr }; }
const_iterator begin() const { return const_iterator{ _beforeBegin.next }; }
const_iterator end() const { return const_iterator{ nullptr }; }
const_iterator cbegin() const { return begin(); }
const_iterator cend() const { return end(); }
iterator before_begin() { return iterator{ &_beforeBegin }; }
const_iterator before_begin() const { return const_iterator{ &_beforeBegin }; }
const_iterator cbefore_begin() const { return before_begin(); }
bool empty() const noexcept { return _beforeBegin.next == nullptr; }
// Modifiers
void push_front(const T& value);
void push_front(std::initializer_list<T> il);
void push_back(const T& value);
void push_back(std::initializer_list<T> il);
void push_after(const T& value, iterator itr);
void push_after(std::initializer_list<T> il, iterator itr);
template <class InputIterator>
void push_after(iterator itr, InputIterator begin, InputIterator end);
void pop_front();
void remove(const T& value);
template<class Comparator>
void remove_if(Comparator cmp);
void erase_after(iterator itr); // (itr, itr+1]
void erase_after(iterator begin, iterator end); // (begin, end)
void clear();
void swap(ForwardList& other);
void splice_after(iterator position, ForwardList& other);
void splice_after(iterator position, ForwardList& other, iterator otherIt);
void splice_after(iterator position, ForwardList& other, iterator otherBegin, iterator otherEnd);
private:
ForwardList_NodeBase<T> _beforeBegin;
ForwardList_NodeBase<T>* _back; // for quick pushback
};
template<class T>
bool operator==(const ForwardList<T>& lhs, const ForwardList<T>& rhs)
{
auto lhsIt = lhs.begin();
auto rhsIt = rhs.begin();
const auto lhsEnd = lhs.end();
const auto rhsEnd = rhs.end();
for (;lhsIt != lhsEnd && rhsIt != rhsEnd; ++lhsIt, ++rhsIt)
{
if (*lhsIt != *rhsIt)
{
return false;
}
}
// lists have different sizes
if (lhsIt != lhsEnd || rhsIt != rhsEnd)
{
return false;
}
return true;
}
template<class T>
bool operator!=(const ForwardList<T>& lhs, const ForwardList<T>& rhs)
{
return !(lhs == rhs);
}
}
ForwardList.cpp
#include "ForwardList.h"
namespace PrimLibrary
{
template <class T>
ForwardList<T>::ForwardList() :
_back{ nullptr }
{
}
template<class T>
ForwardList<T>::~ForwardList()
{
clear();
}
template<class T>
ForwardList<T>::ForwardList(std::initializer_list<T> il) :
_back{ nullptr }
{
ForwardList_NodeBase<T>* lastNewNode = &_beforeBegin;
for (const T& value : il)
{
ForwardList_NodeBase<T>* newNode = new ForwardList_Node<T>{ value };
lastNewNode->next = newNode;
lastNewNode = newNode;
}
_back = lastNewNode;
}
template<class T>
template<class InputIterator>
ForwardList<T>::ForwardList(InputIterator begin, InputIterator end) :
_back{ nullptr }
{
ForwardList_NodeBase<T>* lastNewNode = &_beforeBegin;
while (begin != end)
{
ForwardList_NodeBase<T>* newNode = new ForwardList_Node<T>{ *begin };
lastNewNode->next = newNode;
lastNewNode = newNode;
++begin;
}
_back = lastNewNode;
}
template<class T>
ForwardList<T>::ForwardList(const ForwardList& rhs) :
_back{ nullptr }
{
for (const T& nodeData : rhs)
{
push_back(nodeData);
}
}
template<class T>
ForwardList<T>::ForwardList(ForwardList &&rhs) noexcept :
_back{ nullptr }
{
std::swap(_beforeBegin, rhs._beforeBegin);
std::swap(_back, rhs._back);
}
template<class T>
ForwardList<T>& ForwardList<T>::operator=(const ForwardList & rhs)
{
ForwardList safeCopy{ rhs };
std::swap(*this, safeCopy);
return *this;
}
template<class T>
ForwardList<T>& ForwardList<T>::operator=(ForwardList && rhs) noexcept
{
std::swap(_beforeBegin, rhs._beforeBegin);
std::swap(_back, rhs._back);
return *this;
}
template<class T>
void ForwardList<T>::push_front(const T& value)
{
ForwardList_NodeBase<T>* newFront = new ForwardList_Node<T>{ value };
if (_beforeBegin.next == nullptr)
{
_beforeBegin.next = _back = newFront;
}
else
{
newFront->next = _beforeBegin.next;
_beforeBegin.next = newFront;
}
}
template<class T>
void ForwardList<T>::push_front(std::initializer_list<T> il)
{
ForwardList_NodeBase<T>* firstAfterPush = _beforeBegin.next;
ForwardList_NodeBase<T>* lastNewNode = &_beforeBegin;
for (const T& value : il)
{
ForwardList_NodeBase<T>* newNode = new ForwardList_Node<T>{ value };
lastNewNode->next = newNode;
lastNewNode = newNode;
}
if (firstAfterPush == nullptr)
{
_back = lastNewNode;
}
else
{
lastNewNode->next = firstAfterPush;
}
}
template<class T>
void ForwardList<T>::push_back(const T& value)
{
ForwardList_NodeBase<T>* newBack = new ForwardList_Node<T>{ value, nullptr };
if (_beforeBegin.next == nullptr)
{
_beforeBegin.next = _back = newBack;
}
else
{
_back->next = newBack;
_back = newBack;
}
}
template<class T>
void ForwardList<T>::push_back(std::initializer_list<T> il)
{
ForwardList_NodeBase<T>* lastNewNode = _back ? _back : &_beforeBegin;
for (const T& value : il)
{
ForwardList_NodeBase<T>* newNode = new ForwardList_Node<T>{ value };
lastNewNode->next = newNode;
lastNewNode = newNode;
}
_back = lastNewNode;
}
template<class T>
void ForwardList<T>::push_after(const T& value, iterator itr)
{
ForwardList_NodeBase<T>* newBack = new ForwardList_Node<T>(value);
newBack->next = itr.getNode()->next;
itr.getNode()->next = newBack;
}
template<class T>
void ForwardList<T>::push_after(std::initializer_list<T> il, iterator itr)
{
assert(itr.getNode() != nullptr && "Iterator out of bounds");
ForwardList_NodeBase<T>* firstAfterPush = itr.getNode()->next;
ForwardList_NodeBase<T>* lastNewNode = itr.getNode();
for (const T& value : il)
{
ForwardList_NodeBase<T>* newNode = new ForwardList_Node<T>{ value };
lastNewNode->next = newNode;
lastNewNode = newNode;
}
if (firstAfterPush == nullptr)
{
_back = firstAfterPush;
}
else
{
lastNewNode->next = firstAfterPush;
}
}
template<class T>
template<class InputIterator>
void ForwardList<T>::push_after(iterator itr, InputIterator begin, InputIterator end)
{
ForwardList_NodeBase<T>* firstAfterPush = itr.getNode()->next;
ForwardList_NodeBase<T>* lastNewNode = itr.getNode();
while (begin != end)
{
ForwardList_NodeBase<T>* newNode = new ForwardList_Node<T>{ *begin };
lastNewNode->next = newNode;
lastNewNode = newNode;
++begin;
}
if (firstAfterPush == nullptr)
{
_back = firstAfterPush;
}
else
{
lastNewNode->next = firstAfterPush;
}
}
template<class T>
void ForwardList<T>::pop_front()
{
if (_beforeBegin.next)
{
ForwardList_NodeBase<T>* oldFront = _beforeBegin.next;
_beforeBegin.next = oldFront->next;
delete oldFront;
// do i need this? we wont be accessing back when front is nullptr
if (_beforeBegin.next == nullptr)
{
_back = nullptr;
}
}
}
template<class T>
void ForwardList<T>::remove(const T& value)
{
ForwardList_NodeBase<T>* previous = &_beforeBegin;
while (previous->next != nullptr)
{
ForwardList_Node<T>* current = static_cast<ForwardList_Node<T>*>(previous->next);
if (current->data == value)
{
previous->next = current->next;
delete current;
current = nullptr;
}
else
{
previous = current;
}
}
// in case last element was removed
_back = previous;
}
template<class T>
template<class Comparator>
void ForwardList<T>::remove_if(Comparator cmp)
{
ForwardList_NodeBase<T>* previous = &_beforeBegin;
while (previous->next != nullptr)
{
ForwardList_Node<T>* current = static_cast<ForwardList_Node<T>*>(previous->next);
if (cmp(current->data))
{
previous->next = current->next;
delete current;
current = nullptr;
}
else
{
previous = current;
}
}
// in case last element was removed
_back = previous;
}
template<class T>
void ForwardList<T>::erase_after(iterator itr)
{
assert(itr.getNode()->next != nullptr);
ForwardList_NodeBase<T>* toPop = itr.getNode()->next;
itr.getNode()->next = toPop->next;
delete toPop;
if (itr.getNode()->next == nullptr)
{
_back = itr.getNode();
}
}
template<class T>
void ForwardList<T>::erase_after(iterator begin, iterator end)
{
ForwardList_NodeBase<T>* current = begin.getNode()->next;
while (current != end.getNode())
{
auto tmp = current->next;
delete current;
current = tmp;
}
begin.getNode()->next = end.getNode();
}
template<class T>
void ForwardList<T>::clear()
{
while (!empty())
{
pop_front();
}
}
template<class T>
void ForwardList<T>::swap(ForwardList &other)
{
std::swap(_beforeBegin.next, other._beforeBegin.next);
std::swap(_back, other._back);
}
template<class T>
void ForwardList<T>::splice_after(iterator position, ForwardList& other)
{
ForwardList_NodeBase<T>* firstAfterPush = position.getNode()->next;
position.getNode()->next = other._beforeBegin.next;
other._back->next = firstAfterPush;
if (firstAfterPush == nullptr)
{
_back = other._back;
}
other._beforeBegin.next = other._back = nullptr;
}
template<class T>
void ForwardList<T>::splice_after(iterator position, ForwardList & other, iterator otherIt) // range (otherIt, ++otherIt]
{
ForwardList_NodeBase<T>* firstAfterPush = position.getNode()->next;
ForwardList_NodeBase<T>* nodeToMove = otherIt.getNode()->next;
otherIt.getNode()->next = nodeToMove->next;
position.getNode()->next = nodeToMove;
nodeToMove->next = firstAfterPush;
if (firstAfterPush == nullptr)
{
_back = nodeToMove;
}
}
template<class T>
void ForwardList<T>::splice_after(iterator position, ForwardList & other, iterator otherBegin, iterator otherEnd) // (otherBegin, otherLast) range
{
ForwardList_NodeBase<T>* firstAfterPush = position.getNode()->next;
position.getNode()->next = otherBegin.getNode()->next;
iterator lastNodeToMove{ otherBegin };
while (lastNodeToMove.getNode()->next != otherEnd.getNode())
{
++lastNodeToMove;
}
lastNodeToMove.getNode()->next = firstAfterPush;
if(firstAfterPush == nullptr)
{
_back = otherBegin.getNode();
}
otherBegin.getNode()->next = otherEnd.getNode();
}
}
Answer: A few comments (on top of the obvious file management issue mentioned in the comments):
I would switch to std::unique_ptr<> instead of raw pointers and new/delete.
I'm not sure I see the purpose of ForwardList_NodeBase. Why not just have a pointer to ForwardList_Node as the head? You could also get rid of all these nasty static_cast<> if you did that.
Since you have a push_back(), I would have added a pop_back() for API consistency, even if it would be slow.
You have a bunch of _beforeBegin.next == nullptr in the code, yet you use empty() in a few places. Be consistent! (empty() is much better)
Avoid putting multiple statements on the same line (see T& front()), it makes debugging a pain. It also makes your code lines too long.
I would be in favor of nulling out _back when the list becomes empty, even if you have an invariant protecting it. Ask yourself: "Why is it initialized to null at construction?". Keeping dangling pointers around is never good.
ForwardList_Node and ForwardListIterator should be in a sub-namespace to avoid polluting the user-facing one. It's also nice for people who use IDEs with code completion.
"domain": "codereview.stackexchange",
"id": 27330,
"tags": "c++, linked-list"
} |
Why do we search for square roots of 1 in Shor's algorithm unlike the quadratic sieve? | Question: In the quadratic sieve algorithm, the idea is to find $a$ and $b$ such that $a^2 \equiv b^2 \bmod n$. We need that $a\not\equiv \pm b \bmod n$. However, there $b$ is not necessarily $1$; $\gcd(a \pm b,n)$ returns non-trivial factors.
However, in Shor's algorithm, we specifically need to find square roots of $1$ (in modulo $n$) i.e. we look for $a$ such that $a^2 \equiv 1 \bmod n$. That is, $b$ is specifically $1$. Why is this choice necessary?
Related: Math SE: Chinese Remainder Theorem: four square roots of 1 modulo N
Answer: The two problems are equivalent.
If you give me a pair $a, b$ such that $a^2 = b^2$ with $a \neq \pm b$ then $c = a b^{-1}$ satisfies $c \neq \pm 1$ and $c^2 = 1$.
If you give me a $c$ such that $c^2 = 1$ and $c \neq \pm 1$, and some target value $a$, then $b = a \cdot c$ satisfies $a^2 = b^2$ and $a \neq \pm b$.
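Both reductions are easy to exercise numerically. As a small illustration (the modulus is chosen purely for demonstration), a nontrivial square root of 1 immediately splits $n$ through gcds:

```python
from math import gcd

n = 15
c = 4                      # c^2 = 16 ≡ 1 (mod 15), and c ≢ ±1 (mod 15)
assert (c * c) % n == 1

# A nontrivial square root of 1 splits n via gcd(c ± 1, n):
# n divides (c - 1)(c + 1) but neither factor alone.
p, q = gcd(c - 1, n), gcd(c + 1, n)
```

Here p and q come out as the nontrivial factors of 15.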
You can derive a square root of 1 from an $a,b$ pair. You can derive an $a,b$ pair with desired $a$ from a square root of 1. They're reducible to each other. | {
"domain": "quantumcomputing.stackexchange",
"id": 639,
"tags": "quantum-algorithms, mathematics, shors-algorithm"
} |
Knuth-Morris-Pratt over a source of indeterminate length | Question: Please review my generic implementation of the Knuth-Morris-Pratt algorithm. It's modified to search a source of indeterminate length in a memory-efficient fashion.
namespace Code
{
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
/// <summary>
/// A generic implementation of the Knuth-Morris-Pratt algorithm that searches,
/// in a memory efficient way, over a given <see cref="IEnumerator"/>.
/// </summary>
public static class KMP
{
/// <summary>
/// Determines whether the Enumerator contains the specified pattern.
/// </summary>
/// <typeparam name="T">The type of an item.</typeparam>
/// <param name="source">
/// The source, the <see cref="IEnumerator"/> must yield
/// objects of <typeparamref name="T"/>.
/// </param>
/// <param name="pattern">The pattern.</param>
/// <param name="equalityComparer">The equality comparer.</param>
/// <returns>
/// <c>true</c> if the source contains the specified pattern;
/// otherwise, <c>false</c>.
/// </returns>
/// <exception cref="ArgumentNullException">pattern</exception>
public static bool Contains<T>(
this IEnumerator source,
IEnumerable<T> pattern,
IEqualityComparer<T> equalityComparer = null)
{
if (pattern == null)
{
throw new ArgumentNullException(nameof(pattern));
}
equalityComparer = equalityComparer ?? EqualityComparer<T>.Default;
return SearchImplementation(pattern, source, equalityComparer).Any();
}
/// <summary>
/// Identifies indices of a pattern string in source.
/// </summary>
/// <typeparam name="T">The type of an item.</typeparam>
/// <param name="patternString">The pattern string.</param>
/// <param name="source">
/// The source, the <see cref="IEnumerator"/> must yield
/// objects of <typeparamref name="T"/>.
/// </param>
/// <param name="equalityComparer">The equality comparer.</param>
/// <returns>
/// A sequence of indices where the pattern can be found
/// in the source.
/// </returns>
/// <exception cref="ArgumentOutOfRangeException">
/// patternSequence - The pattern must contain 1 or more elements.
/// </exception>
private static IEnumerable<long> SearchImplementation<T>(
IEnumerable<T> patternString,
IEnumerator source,
IEqualityComparer<T> equalityComparer)
{
// Pre-process the pattern
var preResult = GetSlide(patternString, equalityComparer);
var pattern = preResult.Pattern;
var slide = preResult.Slide;
var patternLength = pattern.Count;
if (pattern.Count == 0)
{
throw new ArgumentOutOfRangeException(
nameof(patternString),
"The pattern must contain 1 or more elements.");
}
var buffer = new Dictionary<long, T>(patternLength);
var more = true;
long i = 0; // index for source
int j = 0; // index for pattern
while (more)
{
more = FillBuffer(
buffer,
source,
i,
patternLength,
out T t);
if (equalityComparer.Equals(pattern[j], t))
{
j++;
i++;
}
more = FillBuffer(
buffer,
source,
i,
patternLength,
out t);
if (j == patternLength)
{
yield return i - j;
j = slide[j - 1];
}
else if (more && !equalityComparer.Equals(pattern[j], t))
{
if (j != 0)
{
j = slide[j - 1];
}
else
{
i = i + 1;
}
}
}
}
/// <summary>
/// Fills the buffer.
/// </summary>
/// <remarks>
/// The buffer is used so that it is not necessary to hold the
/// entire source in memory.
/// </remarks>
/// <typeparam name="T">The type of an item.</typeparam>
/// <param name="buffer">The buffer.</param>
/// <param name="s">The source enumerator.</param>
/// <param name="i">The current index.</param>
/// <param name="patternLength">Length of the pattern.</param>
/// <param name="value">The value retrieved from the source.</param>
/// <returns>
/// <c>true</c> if there is potentially more data to process;
/// otherwise <c>false</c>.
/// </returns>
private static bool FillBuffer<T>(
IDictionary<long, T> buffer,
IEnumerator s,
long i,
int patternLength,
out T value)
{
bool more = true;
if (!buffer.TryGetValue(i, out value))
{
more = s.MoveNext();
if (more)
{
value = (T)s.Current;
buffer.Remove(i - patternLength);
buffer.Add(i, value);
}
}
return more;
}
/// <summary>
/// Gets the offset array which acts as a slide rule for the KMP algorithm.
/// </summary>
/// <typeparam name="T">The type of an item.</typeparam>
/// <param name="pattern">The pattern.</param>
/// <param name="equalityComparer">The equality comparer.</param>
/// <returns>A tuple of the offsets and the enumerated pattern.</returns>
private static (IReadOnlyList<int> Slide, IReadOnlyList<T> Pattern) GetSlide<T>(
IEnumerable<T> pattern,
IEqualityComparer<T> equalityComparer)
{
var patternList = pattern.ToList();
var slide = new int[patternList.Count];
int length = 0;
int i = 1;
while (i < patternList.Count)
{
if (equalityComparer.Equals(patternList[i], patternList[length]))
{
length++;
slide[i] = length;
i++;
}
else
{
if (length != 0)
{
length = slide[length - 1];
}
else
{
slide[i] = length;
i++;
}
}
}
return (slide, patternList);
}
}
}
I've used the non-generic IEnumerator for representing the source as it allows a wider breadth of enumerators to represent data, including the TextElementEnumerator. This enables the generic implementation to be used trivially to search Unicode strings with different normalizations, e.g.
namespace Code
{
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
class Program
{
static void Main(string[] args)
{
var testData = new List<(string Source, string Pattern)>
{
(string.Empty, "x"),
("y", "x"),
("x", "x"),
("yx", "x"),
("xy", "x"),
("aababccba", "abc"),
("1x2x3x4", "x"),
("x1x2x3x4x", "x"),
("1aababcabcd2aababcabcd3aababcabcd4", "aababcabcd"),
("ssstring", "sstring")
};
foreach(var d in testData)
{
var contains = Ext.Contains(d.Source, d.Pattern);
Console.WriteLine(
$"Source:\"{d.Source}\", Pattern:\"{d.Pattern}\", Contains:{contains}");
}
Console.ReadKey();
}
}
public static class Ext
{
public static bool Contains(
this string source,
string value,
CultureInfo culture = null,
StringComparer comparer = null)
{
comparer = comparer ?? StringComparer.Ordinal;
var sourceEnumerator = StringInfo.GetTextElementEnumerator(source);
var sequenceEnumerator = StringInfo.GetTextElementEnumerator(value);
var pattern = new List<string>();
while (sequenceEnumerator.MoveNext())
{
pattern.Add((string)sequenceEnumerator.Current);
}
return sourceEnumerator.Contains(pattern, comparer);
}
}
}
This question was improved following the answer here.
Answer: Using an IEnumerator as first argument is quite unexpected, and makes this method more cumbersome to use than it needs to be. I'd stick to IEnumerable<T>.
It looks like StringInfo.GetTextElementEnumerator was an important motivation for this decision, but that's an old method that predates the introduction of generics. Why not write a wrapper method for that instead, one that returns IEnumerable<string>?
A few other points:
You can use tuple deconstruction when calling Slide: var (slide, pattern) = GetSlide(patternString, equalityComparer);.
It's good to see documentation, but some of it isn't very useful. If parameter documentation is just a repetition of the (already properly descriptive) parameter name then I would leave it out.
I'd rename i to sourceIndex and j to patternIndex. Those comments already indicate that those names aren't sufficiently descriptive.
The culture parameter in Ext.Contains is not used.
TextElementEnumerator has a GetTextElement method, which gives you Current as a string. | {
"domain": "codereview.stackexchange",
"id": 33250,
"tags": "c#, algorithm, strings, search"
} |
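A note on the entry above: the slide table that GetSlide builds is the classic KMP prefix (failure) function. A compact Python transcription of the same loop (assuming identical table semantics to the C# code) is:

```python
def prefix_function(pattern):
    """KMP failure table: slide[i] is the length of the longest proper
    prefix of pattern[:i+1] that is also a suffix of it (the same table
    the C# GetSlide builds)."""
    slide = [0] * len(pattern)
    length = 0  # length of the current matched prefix
    i = 1
    while i < len(pattern):
        if pattern[i] == pattern[length]:
            length += 1
            slide[i] = length
            i += 1
        elif length != 0:
            length = slide[length - 1]  # fall back, do not advance i
        else:
            slide[i] = 0
            i += 1
    return slide

table = prefix_function("aababcabcd")  # one of the test patterns above
```

On a mismatch the search resumes at slide[j - 1] instead of restarting from the pattern's beginning, which is what makes the streaming buffer in FillBuffer viable.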
Exact solution for the perturbation of the inverse metric | Question: So when we usually linearize general relativity with respect to metric perturbations $g_{\mu\nu}\rightarrow g_{\mu\nu}+h_{\mu\nu}$, we compute the correction to the inverse of the metric to first order in $h$:$$g^{\mu\nu}\rightarrow g^{\mu\nu}-g^{\mu\rho}g^{\nu\tau}h_{\rho\tau}:=g^{\mu\nu}+h^{\mu\nu}$$
where we define $h^{\mu\nu}$ to be $h_{\mu\nu}$ with indexes lifted using the inverse of the background metric.
To get this result we ask that to the first order $$(g_{\mu\tau}+h_{\mu\tau})(g^{\tau\nu}+h^{\tau\nu})=\delta_{\mu}^{\nu}$$
Imposing that this holds exactly we get:
$$(g_{\mu\tau}+h_{\mu\tau})h^{\tau\nu}=-h_{\mu\tau}g^{\tau\nu}$$
Inverting the first factor we have
$$h^{\rho\nu}+h^{\rho\mu}h_{\mu\tau}g^{\tau\nu}=h^{\rho\mu}(\delta^{\nu}_{\mu}+h_{\mu\tau}g^{\tau\nu})=-g^{\rho\mu}g^{\nu\tau}h_{\mu\tau}$$
but I don't know how to solve this. I should invert $(\delta^{\nu}_{\mu}+h_{\mu\tau}g^{\tau\nu})$; is there a symbolic way to get to the result without using the explicit formula of the inversion of a matrix? (or equivalently: is perhaps the resulting expression simple?)
Even better: is there any other (more or less physical) reasoning to get to the exact correction to the inverse metric?
Answer: Notice that you are asking for the general form of the inverse matrix of some $A+B$ only under the assumption that we know the inverse $A^{-1}$ and that $A+B$ is non-degenerate. In dimension 4 there is no simple exact formula for the inverse in such a general case.
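That said, when the perturbation is small in matrix norm there is one general exact statement: the inverse is given by a convergent Neumann (geometric) series. Sketching this in index-free notation (with the assumption $\lVert A^{-1}B\rVert < 1$ for convergence):

```latex
(A+B)^{-1} = \left[A\,(1 + A^{-1}B)\right]^{-1}
           = (1 + A^{-1}B)^{-1} A^{-1}
           = \sum_{k=0}^{\infty} \left(-A^{-1}B\right)^{k} A^{-1}
           = A^{-1} - A^{-1}B\,A^{-1} + A^{-1}B\,A^{-1}B\,A^{-1} - \cdots
```

With $A = g$ and $B = h$, truncating after the second term reproduces the linearized result $g^{\mu\nu} - g^{\mu\rho}g^{\nu\tau}h_{\rho\tau}$ from the question; the full series is the exact correction, though not a closed form.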
However, it often happens that linearized perturbations have a special form. For instance, perturbations sourced by quasi-static (Newtonian) gravitational sources in most commonly used coordinates (Cartesian, spherical) induce only perturbations on the diagonal. Then, of course, you invert by taking every term $a+b$ on the diagonal and replacing it by $1/(a+b)$.
For stationary sources such as the exterior of rotating objects the total linearized metric has a $2\times2$ block in a $t-\varphi$ sector and is diagonal in the rest in typical axial systems of coordinates. These also have very simple exact inversion formulas. | {
"domain": "physics.stackexchange",
"id": 57201,
"tags": "general-relativity, mathematical-physics, metric-tensor, perturbation-theory, linear-algebra"
} |
How can I denoise this signal? | Question: I have data captured by a wireless sensor that is noisy. It randomly jumps in value frequently, and I want to know what this signal will look like without these jumps. I am looking for an elegant signal processing technique to do this, if one exists. Below is the time-series signal:
I initially thought to do a Fourier Transform to see whether there is some frequency I can filter out. The FT looks like this:
Applying a LPF with a cutoff frequency of 2.5 Hz to try getting rid of the 3 Hz signal doesn't yield what I want. It just smooths out the signal and I lose most of the important underlying information that I care about. Using a 10th order Butterworth LPF, with fc = 2 Hz, I get the following signal:
As you can tell, I'm not very well-versed in signal processing, which is why I've come to you.
How can I denoise this signal and get rid of the random spikes?
Answer: Note that it appears that the interference is isolated to 1 Hz and it's higher harmonics. We can implement a multiband notch filter easily to reject those specific frequencies while minimizing impact to the desired signal.
A harmonic notch filter is simplified (elegant) when the sampling rate is also a harmonic (integer multiple of 1 Hz in this case) and given by the transfer function:
$$H(z) = \frac{1+\alpha^N}{2}\frac{1-z^{-N}}{1-\alpha^Nz^{-N}}$$
Where the closer $\alpha$ is to $1$, the higher the "Q" of the filter meaning tighter notches. This will produce periodic notches at $f_s/N$ where $f_s$ is the sampling rate. A possible implementation is shown below, where $z^{-N}$ indicates a delay of $N$ samples.
Note that for $\alpha$ close to $1$, $(1+\alpha)/2 \approx 1$ and the first multiplier can be eliminated with minimum consequence. (That is not the case with the second multiplier, and has implications on the importance of precision used in this "leaky accumulator" section.)
Here is an example frequency response for $\alpha=0.99$, $f_s=10$, and $N=10$:
For sampling rates that are not conveniently a multiple of the notch frequency, the integer sampling delays given by $z^{-N}$ with $N$ as an integer can be replaced with fractional delay all-pass filter elements (so $z^{-\tau}$ with $\tau$ as any positive real number).
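That frequency response can be reproduced numerically straight from the transfer function, with nothing beyond the standard library; here is a sketch (parameter values copied from the example above):

```python
import cmath

def notch_mag(f, fs=10.0, N=10, alpha=0.99):
    """|H(e^{j 2*pi*f/fs})| for
    H(z) = (1 + alpha^N)/2 * (1 - z^-N) / (1 - alpha^N * z^-N)."""
    zmN = cmath.exp(-2j * cmath.pi * f * N / fs)  # z^{-N} on the unit circle
    H = (1 + alpha**N) / 2 * (1 - zmN) / (1 - alpha**N * zmN)
    return abs(H)

# Deep rejection at the 1 Hz harmonics, near-unity gain mid-band:
at_notch = notch_mag(1.0)   # at a notch frequency
mid_band = notch_mag(0.5)   # halfway between notches
```

Evaluating at f = 1 Hz and f = 2 Hz lands exactly on the periodic notches (z^{-N} = 1, so the numerator vanishes), while halfway between notches the gain is essentially 1.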
I detail the derivation of the harmonic notch filter at this other post with further details and other options on harmonic rejection filters:
https://dsp.stackexchange.com/a/52728/21048 | {
"domain": "dsp.stackexchange",
"id": 12532,
"tags": "discrete-signals, fourier-transform, noise, denoising"
} |
Lowering the spacetime index of a Dirac matrix | Question: $\gamma_\mu\partial^\mu = \gamma^\nu\partial_\nu$
Does the above equation hold for Gamma matrices? If so, why?
Answer: Yes it does. You simply define $\gamma_\nu = \eta_{\mu\nu} \gamma^\mu$, $\eta_{\mu\nu}$ being the Minkowski metric, and you have $\gamma^\mu \partial_\mu = \gamma^\mu \eta_{\mu\nu} \eta^{\nu\sigma} \partial_\sigma = \gamma_\nu \partial^\nu$, where I employed $\delta_\mu {}^\sigma = \eta_{\mu\nu} \eta^{\nu\sigma}$. | {
"domain": "physics.stackexchange",
"id": 85937,
"tags": "special-relativity, metric-tensor, notation, dirac-equation, dirac-matrices"
} |
Parser for gleaning data from twitter | Question: I've written a script in Python to parse the name, tweets, following and followers of the people shown in the "view all" section of my Twitter profile page. My scraper is able to parse these fields flawlessly. Any input on improving my parser will be highly appreciated. Here is what I've written:
from selenium import webdriver
import time
def browsing_pages():
driver = webdriver.Chrome()
driver.get('https://twitter.com/?lang=en')
driver.find_element_by_xpath('//input[@id="signin-email"]').send_keys('username')
driver.find_element_by_xpath('//input[@id="signin-password"]').send_keys('password')
driver.find_element_by_xpath('//button[@type="submit"]').click()
time.sleep(5)
#Clicking the viewall link
driver.find_element_by_xpath("//small[@class='view-all']//a[contains(@class,'js-view-all-link')]").click()
time.sleep(5)
for links in driver.find_elements_by_xpath("//div[@class='stream-item-header']//a[contains(@class,'js-user-profile-link')]"):
scraping_docs(links.get_attribute("href"))
#tracking down each profile links under viewall section
def scraping_docs(item_link):
driver = webdriver.Chrome()
driver.get(item_link)
# gleaning information of each profile holder
for prof in driver.find_elements_by_xpath("//div[@class='route-profile']"):
name = prof.find_elements_by_xpath(".//h1[@class='ProfileHeaderCard-name']//a[contains(@class,'ProfileHeaderCard-nameLink')]")[0]
tweet = prof.find_elements_by_xpath(".//span[@class='ProfileNav-value']")[0]
following = prof.find_elements_by_xpath(".//span[@class='ProfileNav-value']")[1]
follower = prof.find_elements_by_xpath(".//span[@class='ProfileNav-value']")[2]
print(name.text, tweet.text, following.text, follower.text)
driver.quit()
browsing_pages()
Answer: I'd focus on 3 main things:
don't use time.sleep() to wait for elements on a page. With hardcoded time delays there is a tendency to wait more than actually needed most of the time and less than needed sometimes - not reliable at all. Instead, use Explicit Waits with WebDriverWait class and a set of Expected Conditions
remove the overhead of firing up a separate Chrome instance - collect the links into a list a reuse the same WebDriver instance - you should expect improvements in page load times as well
improve your locators - XPath locators are generally the slowest - use "by id" locators whenever possible; handling class attributes with CSS selectors is more reliable (raw contains() in XPath may generate false positives - it can be a bit better with concat())
In the end, you should have something like this:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
def browsing_pages(driver):
driver.get('https://twitter.com/login')
wait = WebDriverWait(driver, 10)
driver.find_element_by_css_selector('form.signin input.email-input').send_keys('username')
driver.find_element_by_css_selector('form.signin input.js-password-field').send_keys('password')
driver.find_element_by_css_selector('form.signin button[type=submit]').click()
view_all = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".view-all a.js-view-all-link")))
view_all.click()
# wait for a profile link to become visible
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".stream-item-header a.js-user-profile-link")))
links = [link.get_attribute("href") for link in driver.find_elements_by_css_selector(".stream-item-header a.js-user-profile-link")]
for link in links:
scraping_docs(driver, link)
def scraping_docs(driver, item_link):
driver.get(item_link)
# gleaning information of each profile holder
for prof in driver.find_elements_by_css_selector(".route-profile"):
name = prof.find_element_by_css_selector("h1.ProfileHeaderCard-name a.ProfileHeaderCard-nameLink")
tweet = prof.find_element_by_css_selector(".ProfileNav-value")
_, following, follower = prof.find_elements_by_css_selector(".ProfileNav-value")[:3]
print(name.text, tweet.text, following.text, follower.text)
if __name__ == '__main__':
driver = webdriver.Chrome()
try:
browsing_pages(driver)
finally:
driver.quit()
You may also go for a class with the driver instance kept at self.driver, but I think it's pretty much okay to do it this "functional" way considering that you have only two functions. Good read on the subject:
Stop Writing Classes
Start Writing More Classes | {
"domain": "codereview.stackexchange",
"id": 26021,
"tags": "python, python-3.x, web-scraping, twitter, selenium"
} |
Anderson localization in the continuum case | Question: I haven't looked into Anderson localization before. A quick review of the available information gives the impression that this phenomenon has mainly been studied for the case of a discrete random Schrödinger equation. Therefore, I have the following question: does localization of the wave function take place for the one-dimensional random continuum Schrödinger equation? In particular, in the case of a bounded potential $V(x)$, such that $0 \leq V(x) \leq V_0$, which can be obtained by randomly placing potential barriers of finite random width and height on the line. If the answer to this question is “yes,” then are localized states with energies $E>V_0$ possible?
Answer:
Yes, Anderson localization persists to a continuum model.
The statement is actually much stronger than this. For any value of $V_0$, all states will be localized in 1d (and, marginally, in 2d as well).
The first experimental observation of Anderson localization in matter waves, by the group of Nobel laureate Alain Aspect, was in an effective 1d continuum system: https://arxiv.org/abs/0804.1621 . The disorder potential was somewhat different than the one you are envisioning, but that paper and its references should have useful information. The one caveat of which I am aware is that all states will only be localized as long as the power spectrum of the disorder is unbounded. In other words, for the potential you suggested, there must be no minimum width of the boxes. | {
"domain": "physics.stackexchange",
"id": 99927,
"tags": "wavefunction, schroedinger-equation, randomness, anderson-localization"
} |
Reflection optimization for export CSV on large scale | Question: So, I'm building an export/import CSV helper. I have some performance issues in the code below: it takes 7 seconds to parse a CSV of 25,000 rows.
If someone can help, it will be awesome!
public System.IO.Stream ParseContent<T>(IEnumerable<T> entities) where T : class
{
if (entities == null)
throw new ArgumentException(nameof(entities), "List accepted is empty.");
Type type = entities.First().GetType();
PropertyInfo[] properties = type.GetProperties();
string headers = GenerateTemplate(properties);
//No headers accepted - cannot export the content
if (string.IsNullOrEmpty(headers))
return null;
string contentToExport = $"{headers}{NewLineDelimiter}";
foreach (T entity in entities)
{
if (entity == null)
continue;
string template = this.ExportLine(entity, properties);
contentToExport += $"{template}{NewLineDelimiter}";
}
byte[] bytes = System.Text.Encoding.UTF8.GetBytes(contentToExport);
System.IO.MemoryStream memoryStream = new System.IO.MemoryStream(bytes);
return memoryStream;
}
private string ExportLine<T>(T entity, PropertyInfo[] properties) where T : class
{
if (entity == null || properties == null)
return string.Empty;
string template = "";
foreach (PropertyInfo property in properties)
{
string value = null;
if (property.PropertyType.IsGenericType && property.PropertyType.GetGenericTypeDefinition() == typeof(IEnumerable<>))
{
Type underlyingType = property.PropertyType.GetGenericArguments()[0];
if (underlyingType.IsValueType || underlyingType == typeof(string))
{
System.Collections.IEnumerable list = (System.Collections.IEnumerable)property.GetValue(entity);
value = string.Join(EnumerableValueDelimiter, list.Cast<string>());
}
}
else if (property.PropertyType.IsClass && (!property.PropertyType.IsPrimitive && !property.PropertyType.IsEnum) && property.PropertyType != typeof(string))
{
//Object type. need to be serialized
object propertyValue = property.GetValue(entity);
if (propertyValue != null)
value = JsonConvert.SerializeObject(propertyValue);
else
value = "null";
}
else
{
value = property.GetValue(entity)?.ToString();
}
if (string.IsNullOrEmpty(value))
value = "";
template += $"{value}{LineValuesDelimiter}";
}
//Removing the last delimiter at the row.
if (template.Length > 0)
template = template.Remove(template.Length - 1, 1);
return template;
}
Answer: ParseContent()
Type type = entities.First().GetType(); can throw an exception if entities doesn't contain any items. I may be wrong but you could use the T as well like Type type = typeof(T);.
If entities is null an ArgumentNullException should be thrown instead of an ArgumentException.
The foreach could be simplified and you should use a StringBuilder instead of concatenating strings in a loop. That's because strings are immutable, and for each contentToExport += $"{template}{NewLineDelimiter}"; you create a new string object.
If the right hand side of an assignment makes the type clear one should use var instead of the concrete type.
Omitting braces {} although they might be optional can lead to hidden and therefore hard-to-find bugs. I would like to encourage you to always use them.
Having a variable memoryStream doesn't buy you anything. Just return the new memorystream.
Applying these points will lead to
public System.IO.Stream ParseContent<T>(IEnumerable<T> entities) where T : class
{
if (entities == null)
{
throw new ArgumentNullException(nameof(entities), "List accepted is empty.");
}
if (!entities.Any())
{
//assuming thats the desired behaviour
return null;
}
Type type = typeof(T);
PropertyInfo[] properties = type.GetProperties();
string headers = GenerateTemplate(properties);
//No headers accepted - cannot export the content
if (string.IsNullOrEmpty(headers))
{
return null;
}
StringBuilder contentToExport = new StringBuilder( $"{headers}{NewLineDelimiter}");
foreach (T entity in entities.Where(e=>e!=null))
{
string template = this.ExportLine(entity, properties);
contentToExport.Append($"{template}{NewLineDelimiter}");
}
byte[] bytes = System.Text.Encoding.UTF8.GetBytes(contentToExport.ToString());
return new System.IO.MemoryStream(bytes);
} | {
"domain": "codereview.stackexchange",
"id": 32920,
"tags": "c#, csv, reflection"
} |
Compressing string in Scala, how to do this immutably? | Question: I have a string "abbbbccdddd" and the function should return "a1b4c2d4".
This is what I have written in Scala.
Iterative-version
def compress(str: String) = {
val chars = List(str).flatten.map(_.toString) ++ List(null)
var result: String = ""
var lookBack: String = chars.head
var occurance: Int = 0
chars.foreach { c =>
if (c != lookBack) {
result = result + lookBack.toString + occurance
occurance = 0
}
occurance = occurance + 1
lookBack = c
}
result
}
Recursive-version
def compress(str: String): String = {
def compressHandler(str: String, lookBack: String, occurance: Int, result: String): String = {
if(str.isEmpty) {
result
} else if(str.head.toString == lookBack) {
compressHandler(str.drop(1), str.head.toString, occurance + 1, result)
} else {
compressHandler(str, str.head.toString, 0, result + lookBack + occurance)
}
}
compressHandler(str + "0", str.head.toString, 0, "")
}
Scala, being a functional language, should have much better solutions!
How can I improve the second version (by somehow using map/reduce/fold), and how can the first be done immutably (purely functionally)?
Answer: Here's my take using foldLeft:
def compress(s: String) = {
val a : List[(Char,Int)] = List()
s.toCharArray.foldLeft(a)((acc, elem) => acc match {
case Nil => (elem, 1) :: Nil
case (a, b) :: tail =>
if (a == elem) (elem, b + 1) :: tail else (elem, 1) :: acc
}).reverse
.map{ case (a, b) => a.toString + b }
.mkString("")
}
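For comparison (offered only as a cross-language sketch, not Scala), the same order-preserving run-length encoding is a one-liner with Python's itertools.groupby:

```python
from itertools import groupby

def compress(s: str) -> str:
    # consecutive runs only: "abbbbccdddd" -> "a1b4c2d4", "aabbbaa" -> "a2b3a2"
    return "".join("%s%d" % (ch, sum(1 for _ in run)) for ch, run in groupby(s))
```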
Note: I have assumed that order matters. That is, aabbbaa will reduce to a2b3a2 | {
"domain": "codereview.stackexchange",
"id": 38860,
"tags": "scala"
} |
Concept of the working of friction? | Question: I have been trying to solve the exercises of HC Verma- Concepts of Physics. Can't understand the solution of one of the questions on friction.
Question
The friction coefficient between a pair of shoes and the ground is 0.90. Suppose a superman wears these shoes and races for 50 m. There is no upper limit on his capacity of running at high speeds. Find the minimum time he will have to take in completing the 50 m starting from rest.
This is the solution I found from the web.
Answer
So my question is, why is $ma - \mu mg = 0$?
If $ ma $ is balanced by the force of friction, why is the man accelerating at all?
Could you clear this up for me? Thanks a lot in advance.
Answer: This is a variation of a misconception that happens over and over again. The force that the ground can exert on the (super)man is equal and opposite to the force of friction of the man on the ground. It is precisely because the man is accelerating that the force of friction is even experienced. The force of the ground on his shoes is the force that accelerates his mass.
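As a quick numeric check of the textbook result (my own sketch, assuming g = 9.8 m/s²): the largest acceleration friction can transmit is a = μg, and from rest d = at²/2 gives the minimum time.

```python
import math

mu, g, d = 0.90, 9.8, 50.0   # friction coefficient, gravity (m/s^2), distance (m)
a = mu * g                   # maximum acceleration the shoes can transmit
t = math.sqrt(2.0 * d / a)   # minimum time over d metres, starting from rest
```

This comes out to about 3.4 s.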
There are many similar questions on this site. | {
"domain": "physics.stackexchange",
"id": 19501,
"tags": "homework-and-exercises"
} |
Is it Possible to Determine Radiation Levels Using Satellites? | Question: Given recent events in Japan, this got me wondering. Is it possible to determine radiation levels reliably without having Geiger counters near the possibly contaminated zone? According to Wikipedia, the Chernobyl disaster was first detected (outside the Soviet Union, naturally) in Sweden, via radioactive particles found on the clothes of nuclear plant workers. Surely more efficient ways should have been developed by now.
Answer: Short answer: no.
Longer answer: No, excepting neutrinos none of the products of radioactive decay has the penetrating power to pass through the atmosphere, and neutrino detection is not something we can do from satellites.
To elaborate, the immediate products of radioactive decay are (some set of, depending on the decay in question) fission fragments, electrons, positrons, alphas, neutrons, photons (gamma rays) and neutrinos. Plus the remnant nucleus. The only secondary product which might be interesting is Cerenkov light.
The electrons and positrons will travel a number of cm in air (at ground level). The gamma might go a few meters. The heavy stuff has no penetrating power at all.
Even if lofted to the top of the troposphere, there is just too much air in the way.
Cerenkov light will, of course, go through a lot of atmosphere, but you'd be looking for a pale blue glow against the general light background. For dispersed radionucleides (i.e. contamination), the intensity will be awfully low.
N.B. I too have seen various TV shows and movies where some character from some agency says "We can track the radiation with satellites!". I believe this to be the misinformed babbling of desperate script writers. | {
"domain": "physics.stackexchange",
"id": 10933,
"tags": "soft-question, radiation"
} |
Identification of this plant? | Question: Photographed in Rocky Point, Mexico. Any ideas about this plant? Thank you!
Answer: [Not really a precise answer (and I'm not a botanist after all).]
It's likely a lycopodiophyte. Looks like a Huperzia species, but it's impossible to tell exactly from the provided photo [if the species is not familiar, of course]. Of great help would be general appearance of the plant and form of sporangia.
You can try this key: Lycopodiaceae of North America. | {
"domain": "biology.stackexchange",
"id": 1560,
"tags": "botany, species-identification"
} |
Closable BlockingQueue | Question: I am working on legacy code, specifically a sort of BoundedBlockingQueue (mainly used as a pipe between different threads). As it is heavily used in the system and the current implementation features fully synchronized methods and a wait/notify mechanism, I attempted to rewrite it using the Java 5 concurrency utilities. Below is my result, which is considerably faster in (naive) testing, and I haven't hit obvious threading issues (yet... (: ).
As this is legacy code I cannot simply switch to a BlockingQueue implementation, but must support blocking read, write and peek methods. An additional complication is that closing the pipe is required, i.e. the writer or reader may decide to close it. The reader should then be able to empty the pipe, while the writer should not write more.
I would appreciate any constructive critique, especially regarding the correctness of my approach and hints at optimizations.
public class ConcurrentBufferedPipe implements Pipe {
/** Possible states of a pipe. ERROR and CLOSED are final states. */
private enum State {
OPEN, CLOSED, ERROR;
}
/*
* Relies on the thread-safety of the used BlockingQueue, the volatile
* semantics on the state variable and state invariants of the Pipe
* Interface, namely:
* - a closed or erroneous pipe will never be reopened
* - as long as blocks are available, readers are permitted to continue
* reading - even if the pipe was closed or set to error state
* - it is acceptable that a write happens while another thread closes the
* pipe
* Access to the blocking queue is controlled by two semaphores, one for
* writers and one for readers. They essentially represent the currently
* available blocks or space.
*/
/* waiting times above this timeout are unlikely and indicate starvation */
private static final long TIMEOUT = 60;
private static final TimeUnit UNIT = TimeUnit.SECONDS;
private final String name;
private final BlockingQueue buffer;
private final int size;
/* concurrency tools */
private volatile State state;
private final Semaphore availableBlocks;
private final Semaphore availableSpace;
public ConcurrentBufferedPipe(final String name, final int size) {
super();
this.name = name;
this.size = size;
this.buffer = new LinkedBlockingQueue(size);
this.state = State.OPEN;
this.availableBlocks = new Semaphore(size);
this.availableBlocks.drainPermits();
this.availableSpace = new Semaphore(size);
}
@Override
public Object read() throws PipeIOException, PipeTerminatedException,
DataError {
aquireOrFail(this.availableBlocks);
final Object head = buffer.poll();
if (head == null) { // indicates a closed or error state
assert state != State.OPEN;
this.availableBlocks.release();
return closedMarkerOrError();
} else {
this.availableSpace.release();
}
assert head != null;
return head;
}
/**
* {@inheritDoc}
*
* @throws DataError
* if the pipe is empty and was closed due to an error
*/
@Override
public Object peek() throws PipeIOException, PipeTerminatedException,
DataError {
aquireOrFail(this.availableBlocks);
final Object head = buffer.peek();
this.availableBlocks.release();
if (head == null) {
assert state != State.OPEN;
return closedMarkerOrError();
}
assert head != null;
return head;
}
/**
* {@inheritDoc}
*
* This implementation will also fail with a {@link PipeClosedException} if
* the pipe was closed by a writer.
*
*/
@Override
public void write(final Object block) throws PipeClosedException,
PipeIOException, PipeTerminatedException {
aquireOrFail(this.availableSpace);
boolean hasWroteBlock = false;
if (state == State.OPEN) {
hasWroteBlock = buffer.offer(block);
} else {
this.availableSpace.release();
throw new PipeClosedException();
}
this.availableBlocks.release();
assert hasWroteBlock;
}
@Override
public void closeForReading() {
state = State.CLOSED;
wakeAll();
buffer.clear();
}
@Override
public void closeForWriting() {
state = State.CLOSED;
wakeAll();
}
@Override
public void closeForWritingDueToError() {
state = State.ERROR;
wakeAll();
}
/**
* Safely tries to acquire a permission from a semaphore.
*
* @param resource
* holds permissions
* @throws PipeTerminatedException
* if the current thread is interrupted before or while
* acquiring the permission or acquisition times out
*/
private void aquireOrFail(final Semaphore resource)
throws PipeTerminatedException {
try {
final boolean aquired = resource.tryAcquire(TIMEOUT, UNIT);
if (!aquired) { // indicates time out
throw new PipeTerminatedException(name);
}
} catch (final InterruptedException e) {
throw new PipeTerminatedException(name);
}
}
/**
* Depending on final state of pipe returns appropriate marker value. May
* only be called if this pipe is NOT open.
*
* @return NO_MORE_DATA marker if pipe is closed
* @throws DataError
* if pipe is in error
*/
private Object closedMarkerOrError() throws DataError {
final State state = this.state;
if (state == State.ERROR) {
throw new DataError();
}
assert state == State.CLOSED;
return ControlBlock.NO_MORE_DATA;
}
/**
* Releases all reader / writer limits. May only be called after setting the
* pipe to a final state (ERROR or CLOSED), as it ultimately corrupts the
* invariants guarded by the used semaphores.
*/
private void wakeAll() {
assert this.state != State.OPEN;
this.availableBlocks.release(size);
this.availableSpace.release(size);
}
}
Thank you for your input !
Answer:
read or write methods require 3 synchronization operations (2 semaphore operations and one access to the LinkedBlockingQueue). If you use synchronized methods (or locking with ReentrantLock) and a non-synchronized underlying queue, then only one synchronization operation is needed per read/write/peek method.
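The single-lock idea can be sketched concretely. Here is a minimal close-able bounded pipe built around one lock/condition pair and a plain deque (Python rather than Java, purely as an illustration of the structure; names and error handling are mine, not the reviewer's):

```python
import threading
from collections import deque

NO_MORE_DATA = object()   # marker returned once the pipe is closed and drained

class ClosablePipe:
    """Bounded blocking pipe guarded by a single lock/condition pair."""

    def __init__(self, size):
        self._buf = deque()
        self._size = size
        self._closed = False
        self._cond = threading.Condition()

    def write(self, item):
        with self._cond:
            while len(self._buf) >= self._size and not self._closed:
                self._cond.wait()
            if self._closed:
                raise OSError("pipe closed")
            self._buf.append(item)
            self._cond.notify_all()

    def read(self):
        with self._cond:
            while not self._buf and not self._closed:
                self._cond.wait()
            if self._buf:                  # drain remaining items after close
                item = self._buf.popleft()
                self._cond.notify_all()
                return item
            return NO_MORE_DATA

    def close(self):
        with self._cond:
            self._closed = True
            self._cond.notify_all()
```

Every read/write/close takes the one lock exactly once, and a closed pipe can still be drained by readers before the marker is returned.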
LinkedBlockingQueue creates additional wrapper object (link) for each item put into the queue. Use java.util.ArrayDeque(size) to avoid redundant object creation. | {
"domain": "codereview.stackexchange",
"id": 2690,
"tags": "java, optimization, multithreading"
} |
Exact relative position between two links | Question:
Hi! I'll be blunt...
I used LDD and lxf2urdf to build a robot and I want to change the parent of some of the links because I need certain parts to be independent of the rest of the robot (e.g. the back wheel) if I intend to make some of the joints continuous.
I tried using rviz to select the links and see their relative position, but I found that those numbers aren't precise enough. So how do I know the relative position between two links with the maximum possible precision?
I didn't find any questions regarding this, excuse me if I just couldn't find them! :(
Update:
For example, let's take a random joint from my .urdf file:
<joint name="ref_70_joint" type="fixed">
<parent link="ref_58_link_hub"/>
<child link="ref_60_link"/>
<origin xyz="-0.0135999953902 6.27180865763e-08 0.00020025516928" rpy="3.14159265343 1.5396542947e-05 -1.04458704296e-05" />
<axis xyz="0 0 0" />
</joint>
Now let's see what rosrun tf tf_echo /ref_58_link_hub /ref_60_link returns:
At time 1332949971.511
- Translation: [0.014, 0.000, -0.000]
- Rotation: in Quaternion [0.000, -0.000, 1.000, -0.000]
in RPY [-3.142, -0.000, -0.000]
As you can see I only get three decimal digits, just like rviz.
Originally posted by Capelare on ROS Answers with karma: 202 on 2012-03-27
Post score: 0
Answer:
I've got a better idea. Just write a simple node that uses a TransformListener and print out that information directly. Ideally, someone would write a patch for tf_echo so that precision can be specified...
#!/usr/bin/python
import roslib; roslib.load_manifest('package_name')
import rospy
import tf
if __name__ == '__main__':
rospy.init_node('tf_a')
listener = tf.TransformListener()
rate = rospy.Rate(10.0)
while not rospy.is_shutdown():
try:
(trans,rot) = listener.lookupTransform('/ref_58_link_hub', '/ref_60_link', rospy.Time(0))
print trans, rot
except (tf.LookupException, tf.ConnectivityException):
continue
rate.sleep()
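If the default print's precision is ever a concern, the transform can be formatted explicitly (a small standalone illustration using the URDF numbers from the question rather than a live ROS lookup):

```python
# tf stores full double precision; only tf_echo's display truncates to 3 digits.
trans = [-0.0135999953902, 6.27180865763e-08, 0.00020025516928]
line = " ".join("%.12f" % v for v in trans)
```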
Originally posted by David Lu with karma: 10932 on 2012-03-28
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Capelare on 2012-03-28:
That's a great idea, thx! :)
Comment by Reza Ch on 2013-03-21:
What about printing this data in C++. How to write this code in C++ ? | {
"domain": "robotics.stackexchange",
"id": 8758,
"tags": "ros, joint, urdf, position"
} |
Why is the LHC circular and 27km long? | Question: The LHC in Geneva is a circular accelerator, 27 km long - why is it like that ?
Answer: The LHC is a synchrotron, that is, an accelerator that uses a magnetic field to confine the orbit to a circular path and RF accelerating cavities to accelerate the particles.
The voltage provided by the cavities is limited (of the order of MV), so a linear accelerator cannot reach such high energies (of the order of the TeV) without being extremely long (although some TeV linear-collider projects, CLIC and the ILC, are in development). The idea is thus to have a circular path, the particles going through the cavities at each turn and gaining a small amount of energy on each turn.
To obtain this circular path we use a magnetic field. It does not accelerate the particles, but it provides a force perpendicular to the motion, bending the trajectory into a circular orbit.
Why does it have to be that long? A fundamental relation for synchrotrons is:
$$p = q B r$$
where p is the particle momentum, q is the charge of the particle, B is the magnetic field and r is the radius of curvature.
We can then see that to have a high momentum (and energy) we need a high magnetic field and a large radius.
In the LHC, the magnetic field is already at the limit of what a superconducting magnet can achieve (almost 8.5 T).
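A rough numeric check of $p = qBr$ (my own sketch; the field and bending radius are approximate public figures, and the ring is not 100% dipoles, so treat the result as an order-of-magnitude estimate):

```python
q = 1.602e-19   # proton charge (C)
B = 8.33        # dipole field (T), approximate design value
r = 2804.0      # dipole bending radius (m), approximate
c = 2.998e8     # speed of light (m/s)

p = q * B * r           # momentum from p = qBr (kg m/s)
E_eV = p * c / q        # ultrarelativistic: E ~ pc, expressed in eV
```

This lands at roughly 7 TeV per beam, the LHC design energy.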
The LHC then needs a very large tunnel. For that it reuses the tunnel of the LEP which was also a synchrotron, but for electrons.
In that case the size of the tunnel is not really given by the same reasoning. We need to take into account the synchrotron radiation: any accelerated charge radiates energy in the form of a EM radiation: "light".
But the amount of radiation goes as the inverse fourth power of the mass; the electron being very light, it emits a large amount of radiation. For protons this effect is almost negligible, which is why it doesn't count for the LHC.
But for LEP (and LEP gave its tunnel to the LHC) this was the main limitation on the achieved energy. And to obtain a high energy, the larger the tunnel the better, because the amount of radiation decreases with the bending angle of the dipole magnets, meaning that a large radius leads to lower radiation.
Finally, the size of precisely 27 km was chosen for geographic consideration: the tunnel is between the Jura Mountains and the Leman lake, this implies strict constraints in the civil engineering. | {
"domain": "physics.stackexchange",
"id": 1,
"tags": "accelerator-physics, large-hadron-collider"
} |
Is velocity vector also invariant (independent) under coordinate change? | Question: Vectors are said to be independent of the coordinate system. They remain the same object; it is the description of them that changes with the coordinate system. But velocity, which is a vector, does change with the frame of reference, and since each frame of reference has a coordinate system in which vectors are measured, the velocity vector does change between these different coordinate systems. How is this possible?
Answer: You're confusing different kinds of coordinate transformations. If I rotate the Cartesian axes or translate the origin, I'll preserve velocity but change its components. But a reference frame shift can change the velocity. | {
"domain": "physics.stackexchange",
"id": 49818,
"tags": "reference-frames, vectors, coordinate-systems"
} |
How can the 5-photon absorption coefficient be estimated? | Question: Imagine a large bandgap material which is irradiated by an intense laser beam.
If the photon energy is only high enough for 1/5 of the bandgap, is there a way to approximate the absorption by 5-photon excitation, i.e. the ratio of transmitted to initial Intensity?
All I found relates to 2- or 3-photon absorption, but not to higher orders in the perturbation series ...
Answer: The best approximation I found so far is in Bloembergen: Nonlinear Optics.
It is stated that successive orders of Polarisation are reduced by a factor
$|E|/|E_{at}|$,
with the applied electric field $E$ and the atomic electric field $E_{at}$.
This will be sufficient for my case. | {
"domain": "physics.stackexchange",
"id": 6140,
"tags": "research-level, photons, quantum-optics"
} |
Are laws of reflection valid in all cases? | Question: Imagine a light ray incident on a plane mirror along a vector i+j-k. The normal on incidence point is along i+j.
In this case the incident ray, reflected ray and the normal do not lie in the same plane. How valid are laws of reflection here?
Should we apply the laws of reflection by considering only the components that get reflected, thereby ignoring the 'k' component?
Where am I going wrong?
Answer: Step one is rotate to a new coordinates system with
$$\hat x = \frac 1 {\sqrt 2}[\hat i + \hat j] $$
and
$$ \hat z = \hat k $$
as "k" is assigned to wave vectors.
The in-going wave has a wave vector (normalized);
$$ \hat k_{in} = \frac 1 {\sqrt 2}[\hat x - \hat z]$$
The mirror is in the x-y plane:
$$ \hat n = \hat z $$,
so the reflected wave is:
$$ \hat k_{out} = \frac 1 {\sqrt 2}[\hat x + \hat z]$$
Now take the scalar triple product to see the volume of the parallelepiped spanned:
$$ V = \hat n \cdot (\hat k_{in} \times \hat k_{out})$$
$$ V = \hat z \cdot [(\hat x-\hat z) \times (\hat x + \hat z)]/2$$
$$ V = \hat z \cdot (-\hat y) = 0$$
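The same coplanarity check can be run numerically on the question's original (unrotated) vectors, reflecting i+j-k in the mirror whose normal is i+j (a small sketch of mine):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

d = (1, 1, -1)   # incident direction i + j - k
n = (1, 1, 0)    # (unnormalized) mirror normal i + j
# reflection: r = d - 2 (d.n / n.n) n
f = 2 * dot(d, n) / dot(n, n)
r = tuple(di - f * ni for di, ni in zip(d, n))
V = dot(n, cross(d, r))   # scalar triple product; 0 means coplanar
```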
Zero means they are coplanar. | {
"domain": "physics.stackexchange",
"id": 55700,
"tags": "optics, reflection"
} |
Hot water freezing faster than cold water | Question: This question has puzzled me for a long time. There is already a question like this on Physics.SE. John's answer to that question seems quite satisfying. But when I googled the cause I found this and this explanation. They may be wrong, but I think How Stuff Works is a reliable source.
And here's the original paper.
I am quite confused now reading the different explanations. Can anyone please shed some light on the issue?
Answer: To start with, "water freezes faster when it starts out hot" is not terribly precise. There are lots of different experiments you could try, over a huge range of initial conditions, that could all give different results. Wikipedia quotes an article Hot Water Can Freeze Faster Than Cold by Jeng which reviews approaches to the problem up to 2006 and proposes a more precise definition of the problem:
There exists a set of initial parameters, and a pair of temperatures, such that given two bodies of water identical in these parameters, and differing only in initial uniform temperatures, the hot one will freeze sooner.
However, even that definition still has problems, which Jeng recognizes: first, there's the question of what "freeze" means (some ice forms, or the water freezes solid all the way through); second, the hypothesis is completely unfalsifiable. Even if you restrict the hypothesis to the range of conditions reasonably attainable in everyday life, to explain why the effect is so frequently noted anecdotally, there's literally an infinite number of possible experimental conditions to test, and you can always claim that the correct conditions just haven't been tested yet.
So, the fact that the internet is awash in a variety of different explanations makes perfect sense: there really are a bunch of different reasons why initially hotter water may freeze faster than initially colder water, depending on the precise situation and the definition of "freeze" that you use.
The paper you link to, O:H-O Bond Anomalous Relaxation Resolving Mpemba Paradox by Zhang et al., with results echoed by the HowStuffWorks video, attempts to solve the problem for a very specific sub-hypothesis. They eliminate the problem of defining freezing by treating freezing as a proxy for cooling in general, and directly measuring cooling rates instead. That experimental design implicitly eliminates one internet-provided explanation right off the bat: it can't possibly be supercooling, because whether the water supercools or solidifies when it gets to freezing temperature is an entirely different question from how quickly it cools to a temperature where it could freeze.
They also further constrain the problem by looking for explanations that cannot apply to any other liquid. After all, the Mpemba effect is about why hot water freezes faster; nobody is reporting anomalous freezing of, say, hot alcohol. That might just be because people freeze water a lot, and we don't tend to work with a lot of other exotic chemicals in day-to-day life, but choosing to focus on that restriction makes the problem more well-defined, and again implicitly rules out a lot of potential explanations ahead of time: it can't have anything to do with evaporation (because lots of liquids undergo evaporative cooling, and that's cheating anyway 'cause it changes the mass of the liquid under consideration) or conduction coupling to the freezer shelf (because that has nothing to do with the physical properties of the liquid, and everything to do with an uncontrolled experimental environment, as explained by John Rennie).
So, there really isn't just one answer to "why does hot water freeze faster than cold water", because the question is ill-posed. If you give someone a specific experimental set-up, then you can get a specific answer, and there are a lot of different answers for different set-ups. But, if you want to know "why does initially-hotter water cool faster through a range of lower temperatures than water that started out at those lower temperatures, while no other known liquid appears to behave this way" (thus contributing to it freezing first if it doesn't supercool), Zhang has your answer, and it's because of the weird interplay between water's intra- and inter-molecular bond energies. As far as I can tell, that paper has not yet been replicated, so you may consider it unconfirmed, but it's a pretty well-reasoned explanation for a very specific question, which is probably an influencing factor in a lot of other cooling-down-hot-water situations. There is a follow-up article, Mpemba Paradox Revisited -- Numerical Reinforcement, which provides additional simulation evidence for the bond-energy explanation, but it can't really be considered independent confirmation because it's by the same four authors. | {
"domain": "physics.stackexchange",
"id": 66763,
"tags": "thermodynamics, water, temperature, molecules, atomic-physics"
} |
Do my Image Widths Exceed Limit? | Question: I needed to find whether any image's width at or higher than a certain directory folder exceeds a limit. Is this code a good way of doing this, or is there a way in which it can be improved? I ran it over at least a couple hundred images and several hundred other files contained in the main directory (not C:\Users, but I don't want to show my user name), and it completed so fast the time is irrelevant.
public static void Main()
{
string[] files = Directory.GetFiles("C:\\Users", "*.png", SearchOption.AllDirectories);
foreach (string file in files)
{
Image img = Image.FromFile(file);
if (img.Width > 1100)
{
Console.WriteLine(file);
Console.WriteLine(img.Width);
}
img.Dispose();
}
}
Answer: Rather than opening a resource and then remembering to close it with a .Dispose() call,
I think it's good to use the using (...) { ... } idiom as much as possible.
Some questions (and again) on Stack Overflow suggest some issues with releasing image resources when opened with Image.FromFile.
I propose this alternative:
public static void Main()
{
string[] files = Directory.GetFiles("C:\\Users", "*.png", SearchOption.AllDirectories);
foreach (string file in files)
{
using (FileStream fs = new FileStream(file, FileMode.Open, FileAccess.Read))
using (Image img = Image.FromStream(fs))
{
if (img.Width > 1100)
{
Console.WriteLine(file);
Console.WriteLine(img.Width);
}
}
}
} | {
"domain": "codereview.stackexchange",
"id": 12525,
"tags": "c#, image, file-system, console"
} |
Some questions about Normal Ordering in QFT | Question: I have some questions about normal ordering in quantum field theory: I already read this very good question with very very good answers and this other question with other very good answers (I read also this one and many others, but without understanding much).
From what I understand, normal ordering is more a symbolic operation than an operator, so if for example I have
\begin{equation*}
\left[\hat{a},\hat{a}^\dagger\right]
=
1
\end{equation*}
Then I'm not authorized to say that $:\hat{a}\hat{a}^\dagger:=:\hat{a}^\dagger\hat{a}:+:1:$ (where I think that $:1:=1$ demonstrated through unitary operators). What I don't understand here is
Is the main fact that this operation is non-linear? So (even if the answer from Sebastiano Peotta here seems to say the opposite)
\begin{equation*}
:\hat{a}^\dagger\hat{a}:+:1:
\neq
:\hat{a}^\dagger\hat{a}+1:
\,?
\end{equation*}
Or is the main fact that this operation doesn't care about operator equalities? In that case I would just have
\begin{equation*}
:\hat{a}\hat{a}^\dagger:
\neq
:\hat{a}^\dagger\hat{a}+1:
\,?
\end{equation*}
At the same time I wasn't able to find this kind of question, that is the main doubt that I have:
what if I rename the operator with the following substitution $\hat{b}^\dagger=\hat{a}$? In that case $:\hat{a}\hat{a}^\dagger:=\hat{a}^\dagger\hat{a}$, but $:\hat{b}^\dagger\hat{b}:=\hat{b}^\dagger\hat{b}=\hat{a}\hat{a}^\dagger$!
Is this crazy or am I doing something wrong (very likely)?
Answer: @Qmechanic 's impeccable answer above may be illustrated by the basic rule of normal ordering, which is not a functor. That is, normal orderings of commutators vanish; the objects inside : : are commutative entities, "(Weyl) symbols", and not operators, so are reprieved through commutative rules, as Sebastiano Peotta's answer reviews. So linearity of : : applies to its arguments, which are commutative symbols, and not operators.
Consequently, inside : : , you indeed "don't care about operator equalities", as you say, such as $\left[\hat{a},\hat{a}^\dagger\right]=1$,
$$
:\left[\hat{a},\hat{a}^\dagger\right]:~~ = ~~:0:~~=0 \neq 1= ~~:1:~.
$$
Thus your second bullet is correct,
$$
:\hat{a}\hat{a}^\dagger: -:\hat{a}^\dagger\hat{a}:~~\neq ~~ :1: ~,
$$
because inside : : you only care about (trivial) symbol commutators, instead of operator commutator equalities.
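A small numerical illustration (mine, not part of the original answer) makes this tangible: truncate the Fock space and represent $\hat{a}$, $\hat{a}^\dagger$ as matrices. The normal-ordered symbol $:\hat{a}\hat{a}^\dagger: = \hat{a}^\dagger\hat{a}$ is then visibly different from the operator product $\hat{a}\hat{a}^\dagger$:

```python
import numpy as np

N = 6  # truncated Fock-space dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator: a|n> = sqrt(n)|n-1>
ad = a.T                                    # creation operator

# The operator identity [a, a†] = 1 holds away from the truncation edge:
comm = a @ ad - ad @ a
print(np.allclose(comm[:N-1, :N-1], np.eye(N-1)))  # True

# Normal ordering the *symbol* a a† just moves the dagger to the left,
# giving the operator a†a, which differs from the product a a† by 1:
normal_ordered = ad @ a
print(np.allclose(normal_ordered, a @ ad))  # False
```

The truncation only corrupts the commutator in the last row and column, which is why the first check restricts to the top-left block.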
I don't want to confuse you, but you might take off the operator hats inside : :, even if only in your mind, to remind you they represent symbols, playing by their own rules; that is, think of the symbols as semiclassical objects, not operators, obeying a trivial commutation relation, instead of the usual quantum one. A gonzo quantization rule, yielding inconsistent alternate answers. | {
"domain": "physics.stackexchange",
"id": 72224,
"tags": "quantum-mechanics, quantum-field-theory, operators, notation"
} |
Moth that resembles a leaf. What species is this? | Question:
Location: Urban area near the Western Ghats, Kerala, South India
Date: Sometime in September, 2016
Climatic Conditions: Humid, frequent rains.
Brief description:
Head isn't prominent from the dorsal view. Wings outstretched even when at rest. Antennae largely concealed under the fore wings. Also it...um...does a pretty good impression of a leaf.
Answer: It resembles Pelagodes antiquadraria, which belongs to the moth genus Pelagodes in the family Geometridae.
Image Source: FlickRiver | {
"domain": "biology.stackexchange",
"id": 7028,
"tags": "species-identification, entomology, lepidoptera"
} |
How does the electric field behave at the surface of a realistic conductor? | Question: So I am taking my University Electromagnetism course and we are currently learning about gauss's law and gaussian surfaces.
A common question involves point charges inside of a hollow sphere that has a net charge.
When we graph the electric field [there is a mild error in the graph: the y-axis says Q, but the trend is the same, $Q \propto E$] vs. the radius from the centre, with the model I was taught we get something like this:
Noting the vertical dotted lines, which occur at $r_A$ and $r_B$. This preamble brings me to the meat of my question:
Outside of this simplified model (I suppose in the real world), what does this vertical line more accurately look like, what factors control it, and what happens on the electron scale?
I am a first year engineering student, so please can your answers be a bit simpler for me, thank you very much.
Answer: What you ask is essentially how the transition occurs from a field outside a conductor to the field-free region inside the conductor (at least in the drawings it seems that the vertical junps are at the inner and outer surface of a conductive spherical wall).
First of all: the field inside a real-world conductor is not zero at all because you are there between positive cores of atoms, with negative electron wave mixtures all around you, and the resulting fields can fluctuate strongly. But on average they are zero, that much is true.
If you then come close to the surface, interestingly, there are two things that can happen. Let's assume that we have negative free flowing charge, like the electrons in most metals (for holes as the free charges the modifications are straightforward). We then have either:
A depletion layer, if the external $E$-field points outward from the surface. In this layer the electrons are pushed away to the inner parts of the conductor by the field, which penetrates into that part of the conductor. Only the atom cores are left, creating a positively charged layer with finite thickness. You can calculate how the field will now transition in a nice continuous way, and you can calculate how much thickness you actually need (if you look up the atom density for the metal involved).
An accumulation layer if the external $E$-field points inwards to the surface. In that layer the electrons are crowding up, so they create a negatively charged layer with finite thickness. Again the $E$-field will gradually decrease now, since here again we have no infinitely thin charge layers. But here it is less trivial to compute how thick the accumulation layer will become to create the required amount of charge! Unlike the atom cores which are fixed, the electrons are governed by the field equations of quantum mechanics (at least that is the distinction in the semi-classical description that is usually applied in solid state physics). This often results in accumulation layers being thinner than depletion layers, for situations where the external fields are equal but opposite.
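As a rough sketch of the thickness calculation suggested for the depletion layer (a purely classical estimate with assumed numbers: copper's conduction-electron density and a strong laboratory-scale field):

```python
# Surface charge needed to terminate an external field E is sigma = eps0 * E.
# If only the fixed positive cores (number density n) remain in the depleted
# layer, its thickness is t = eps0 * E / (e * n).
eps0 = 8.854e-12  # vacuum permittivity, F/m
e = 1.602e-19     # elementary charge, C
n = 8.5e28        # conduction-electron density of copper, m^-3 (assumed)
E = 1.0e6         # a strong laboratory-scale field, V/m (assumed)

t = eps0 * E / (e * n)
print(f"{t:.1e} m")  # 6.5e-16 m
```

The result is far below one atomic spacing, so for ordinary metals and fields the transition region is at most of the order of a single atomic layer.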
I leave it to you to look up the details and especially the interesting implications for semiconductor devices (like diodes, bipolar transistors, and MOSFETs).
"domain": "physics.stackexchange",
"id": 100339,
"tags": "electromagnetism, gauss-law, conductors"
} |
How to see $\mathbf{E}\cdot\mathbf{B}$ is a total derivative? | Question: Since $\mathbf{E}\cdot\mathbf{B}$ is a Lorentz invariant of the electromagnetic fields it seems like an interesting thing to plug into a Lagrangian to see what happens. However, this ends up disappearing, and I'm told this should be obvious because it is a total derivative.
This however is not obvious to me. Is there an easy way to see that
$$ \frac{1}{2}\epsilon_{\alpha\beta\gamma\delta}F^{\alpha\beta} F^{\gamma\delta} = - \frac{4}{c}\left( \mathbf{B} \cdot \mathbf{E} \right) $$
is actually a total derivative?
I'd also appreciate if someone can show what it is the derivative of, so that I can work out the derivative to help it sink in.
Answer: First note that you can rewrite $\mathbf{E}\cdot\mathbf{B}$ as
$$
\mathbf{E}\cdot\mathbf{B}\propto F\wedge F
$$
using a field strength $2$-form $F$ where $\mathbf{E}$ and $\mathbf{B}$ are defined as
\begin{align}
F_{0i}&=E_i ,\\
F_{ij}&=\epsilon_{ijk}B_k.
\end{align}
More specifically,
\begin{align}
F\wedge F&=\frac{1}{4}F_{\mu\nu}F_{\rho\sigma}\, dx^\mu\wedge dx^\nu\wedge dx^\rho\wedge dx^\sigma \\
&=-\frac{1}{4}\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}\,\text{vol.}\\
&\propto \epsilon^{ijk}F_{0i}F_{jk}=\mathbf{E}\cdot\mathbf{B}.
\end{align}
Then it is easy to show that $\mathbf{E}\cdot\mathbf{B}$ is a total derivative using $F=dA$, i.e.,
$$
\mathbf{E}\cdot\mathbf{B}\propto F\wedge F=d(A\wedge F).
$$
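As a numerical sanity check (a sketch under the conventions $F_{0i}=E_i$, $F_{ij}=\epsilon_{ijk}B_k$, $\epsilon^{0123}=+1$ and $c=1$; the overall factor depends on these conventions), one can verify the contraction directly:

```python
import itertools, random

def eps(p):
    """Sign of the permutation p of (0,1,2,3); 0 if an index repeats."""
    if len(set(p)) < 4:
        return 0
    p, sign = list(p), 1
    for i in range(4):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

random.seed(0)
E = [random.uniform(-1, 1) for _ in range(3)]
B = [random.uniform(-1, 1) for _ in range(3)]

# F_{mu nu} with F_{0i} = E_i and F_{ij} = eps_{ijk} B_k:
F = [[0.0,   E[0],  E[1],  E[2]],
     [-E[0], 0.0,   B[2], -B[1]],
     [-E[1], -B[2], 0.0,   B[0]],
     [-E[2], B[1], -B[0],  0.0]]

s = sum(eps(idx) * F[idx[0]][idx[1]] * F[idx[2]][idx[3]]
        for idx in itertools.product(range(4), repeat=4))
EdotB = sum(e * b for e, b in zip(E, B))
print(abs(s - 8 * EdotB) < 1e-12)  # True: the contraction equals 8 E.B here
```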
As a side comment, $F\wedge F$ contains volume form but it is absent in $\mathbf{E}\cdot\mathbf{B}$. So the correct way to write is
$$
\int d^4x \,\mathbf{E}\cdot\mathbf{B}\propto\int F\wedge F.
$$ | {
"domain": "physics.stackexchange",
"id": 43705,
"tags": "homework-and-exercises, electromagnetism, lagrangian-formalism, field-theory"
} |
turtlebot 2: mass production | Question:
hi. we really love turtlebot 2 and ros. we plan to build our robot based on turtlebot 2 and ros. our first production run will be around 300 - 500 units.
what would be a good way to source the parts effectively? buying kobuki base and kinect at single unit retail prices doesn't seem to make sense. they also seem like products for research and prototyping. we also want to keep our product cost as low as possible by buying wholesale/direct rather than paying retail prices.
where should we look for parts that are similar to kobuki and kinect? where do you usually go to source for robot parts in bulk? or do you have to deal with kobuki and microsoft to get bulk pricing?
any advice/comment/feedback is much appreciated.
Originally posted by d on ROS Answers with karma: 121 on 2014-07-03
Post score: 1
Answer:
For the Kobuki, going to Yujin would probably be the best option. They are a major manufacturer of robotic vacuums and so you are unlikely to find any source that can offer a better price. However, be aware that 300-500 units isn't all that much when you consider that hundreds of thousands of robotic vacuums are sold each year.
As for the Kinect, you might want to actually investigate their EULA as to commercial uses. Then, I would suggest finding out about availability, seeing as Kinect 2 is already out, and is very different (and not very robotics friendly in terms of openness).
Originally posted by fergs with karma: 13902 on 2014-07-04
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by d on 2014-07-04:
much appreciated for the answer. what would be a good alternative to kinect (in the context of buying in bulk and hopefully direct from the manufacturer)?
Comment by ccapriotti on 2014-07-04:
This is ALMOST pertinent to what you want. Look into http://g.co/ProjectTango. As a bonus, meet our friend Tully in one of their videos. The idea is, if you are a developer, Google wants to talk to you. Seems to fit the bill here. | {
"domain": "robotics.stackexchange",
"id": 18513,
"tags": "kinect, kobuki"
} |
Is there actually a one-to-one correspondence between a given central charge $c<1$ and a given universality class? | Question: I'm just starting to learn about conformal field theory, with an aim to understand critical exponents in terms of conformal fixed points. I see, in multiple locations, the claim that a central charge $c<1$, $c=1-\frac{6}{m(m+1)}$ for $m=3,4,5,...$, dictates the universality class. However, I also see claims of the complete opposite! Because I am new to this field, I will share examples of conflicting claims, bolding what I view as the important parts, in case I am misunderstanding something fundamental.
For example, In 1d criticality, what is the relation between the universality class and central charge? says that "I want to know how to obtain the universality class of the phase transition from the central charge "c" in one dimensional model. If c is less than 1, there is a one-to-one correspondence." The paper 1D Fermi Liquids by Johannes Voit notes that "The critical exponents are the scaling dimensions of the various operators in a conformally invariant theory and, generically, are fully determined by c."
However, on nLab, there's the following quote of Cardy's:
"Shortly thereafter Friedan, Qiu and Shenker showed that unitary CFTs (corresponding to local, positive definite Boltzmann weights) are a subset of this list, with $c=1−\frac{6}{m(m+1)}$ and $m$ an integer $\geq 3$. This gives rise to what might be termed the "conformal periodic table". The first few examples may be identified with well-known universality classes. The "hydrogen atom" of CFT is the scaling limit of the critical Ising model, "helium" is the tricritical Ising model, and so on. Note, however, that at the next value of $c=4/5$ two possible "isotopes" arise. In the second, corresponding to the critical 3-state Potts model, not all the scaling dimensions allowed by BPZ in fact occur, but some of those that do actually appear twice. In fact the constraint of unitarity is not sufficient to determine exactly which representations actually occur in a given CFT."
Cardy's quote seems to say that $c<1$ does not fix the universality class, as there are two different CFTs for $m=5$.
Is it correct that $c<1$ does not fix the universality class? If it doesn't, is it known how many universality classes there are for a given $c<1$?
Answer: Cardy's statement is correct, as you can see by the explicit $c=4/5$ case. In this case, there are two different modular-invariant choices for the field content, one of which corresponds to the three-state Potts model and the other of which describes a generic tetra-critical point.
Once you have the central charge, you can write down the allowed scaling dimensions of all primary operators, but this isn't the end of the story. It turns out that modular invariance constrains which of these operators can appear in your theory. A complete classification of the field content allowed in the Virasoro minimal models is known as the ADE classification: http://www.scholarpedia.org/article/A-D-E_Classification_of_Conformal_Field_Theories . Directly quoting the linked article by Cappelli and Zuber:
The results in Table 3 amount to a classification of the operator contents of all rational conformal theories with c<1; we see that more than one consistent set of primary fields is possible for the same central charge (14), leading to quite different theories. These correspond to independent universality classes of critical phenomena in statistical mechanics, because the (relevant) primary fields characterize the manifold of perturbations around the fixed point. | {
"domain": "physics.stackexchange",
"id": 80863,
"tags": "conformal-field-theory, critical-phenomena"
} |
Why do electric field lines curve at the edges of a uniform electric field? | Question: I see a lot of images, including one in my textbook, like this one, where at the ends of a uniform field, field lines curve.
However, I know that field lines are perpendicular to the surface. The only case where I see them curving is when drawing field lines to connect two points which aren't collinear (like with a charged sphere or opposite charges), and each point of the rod is collinear with its opposite pair, so why are they curved here?
Answer: I have taken your image and created a few additional field lines at one end of the plates in the first diagram below.
When you come to the ends of the plates, the field starts to resemble that associated with two point charges instead of a sheet of charge. The second diagram below shows the field lines between two point charges. Note that as you move away from the two point charges an equal distance apart, the lines look like those at the ends of your parallel plate capacitor (curved lines). Towards the center between the charges, the field lines start to look straight and evenly spaced (parallel lines).
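This fringing behaviour is easy to check numerically (an illustrative model, not from the original answer): approximate each plate by a row of point charges and superpose their Coulomb fields. By left-right symmetry the horizontal field component vanishes in the middle, but not near an edge:

```python
import numpy as np

# Approximate each plate by a row of 201 point charges (constants absorbed):
xs = np.linspace(-1.0, 1.0, 201)
top = np.stack([xs, np.full_like(xs, 0.1)], axis=1)   # +q plate at y = +0.1
bot = np.stack([xs, np.full_like(xs, -0.1)], axis=1)  # -q plate at y = -0.1

def E_field(p):
    """Superposed 2D Coulomb field of both charge rows at point p."""
    E = np.zeros(2)
    for plate, q in ((top, +1.0), (bot, -1.0)):
        r = p - plate
        d3 = (r[:, 0] ** 2 + r[:, 1] ** 2) ** 1.5
        E += q * (r / d3[:, None]).sum(axis=0)
    return E

E_center = E_field(np.array([0.00, 0.05]))  # between the plates, middle
E_edge = E_field(np.array([0.98, 0.05]))    # between the plates, near an edge

# In the middle the horizontal component is (numerically) zero; near the
# edge it is not, so the field line there has to curve:
print(abs(E_center[0]) < 1e-6, abs(E_edge[0]) > 1.0)  # True True
```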
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 71310,
"tags": "electric-fields"
} |
Determining a molecular formula of the catalyst for polymerisation of butadiene | Question: I've got the following question in a workbook which I need to solve:
A $\pu{50 g}$ sample of an experimental catalyst used in the polymerisation of butadiene is made up of $\pu{11.65 g}$ of $\ce{Co}$ and $\pu{25.7 g}$ of $\ce{Cl}$. If the molar mass of the compound is $\pu{759 g/mol},$ find the molecular formula of the catalyst.
I attempted to first find the percentage by mass of each given element by dividing their masses ($\pu{11.65g}$ and $\pu{25.7 g},$ respectively) by the mass of the entire catalyst $(\pu{50 g}).$ From there, I divided both percentages ($23.3$ and $51.4)$ by the smallest one, and multiplied the results $(1$ and $2.2)$ by $5$ to get two whole numbers. The formula I ended up with was $\ce{Co5Cl11}.$
Now that made sense to me, but somehow the correct answer is $\ce{Co3Mo2Cl11}$. I don't understand how I'm supposed to figure out that $\ce{Mo}$ is required? Could somebody please shed some light on this for me?
Answer: Despite the absurd lack of data, this problem attracted my attention, probably because it closer resembles a real-life challenge rather than a textbook problem.
It looks like it's an adaptation of the problem from Schaum’s Outline of Theory and Problems of College Chemistry [1, p. 39]:
3.37 What is the empirical formula of a catalyst that can be used in the polymerization of butadiene if its composition is $23.3\%$ $\ce{Co},$ $25.3\%$ $\ce{Mo},$ and $51.4\%$ $\ce{Cl}.$
Ans. $\ce{Co3Mo2Cl11}$
Here we are dealing with zero knowledge as to what the remaining element(s) are, but with a couple of assumptions we can actually come up with the answer from the textbook, and a justification to go with it.
Let's dive in and assume we are dealing with an unknown compound with the following formula:
$$\ce{Co_xCl_y}\sum_{i=1}^N \ce{El}_{i,z_i}$$
where we account for $i$ other elements $\ce{El_$i$}$ with the respective coefficients $z_i.$
The first assumption required to advance with the solution is to treat the compound as stoichiometric, i.e. $x, y, z_i \in\mathbb{N}.$
This would immediately allow us to find the exact values for both $x$ and $y$:
$$y : x = \frac{m(\ce{Cl})}{M(\ce{Cl})} : \frac{m(\ce{Co})}{M(\ce{Co})} = \frac{\pu{25.7 g}}{\pu{35.45 g mol-1}} : \frac{\pu{11.65 g}}{\pu{58.93 g mol-1}} = 3.67\tag{1}$$
To satisfy $x, y \in\mathbb{N},$ and taking into account the "hint" decimal part $.67$ (which suggests multiplying by 3 to reach an integer):
$$x = 3, 6, 9, \ldots, 3n~(n\in\mathbb{N})\tag{2}$$
To pinpoint the exact allowed value of $x$ (and $y,$ as $y = 3.67x),$ we can use the total molar mass:
$$M = x × M(\ce{Co}) + 3.67x × M(\ce{Cl}) + \sum_{i=1}^N z_iM_i\tag{3}$$
$$\pu{759 g mol-1} = x × \pu{58.93 g mol-1} + 3.67x × \pu{35.45 g mol-1} + \sum_{i=1}^N z_iM_i\tag{3a}$$
$$\sum_{i=1}^N z_iM_i = (759 - 189x)~\pu{g mol-1}\tag{3b}$$
Since molar mass is a positive number, $759 - 189x > 0$ and the only $x$ to satisfy this criterion would be $x = 3.$
Accordingly, $y = 3.67 × 3 = 11,$ and at this point we are left with the following formula:
$$\ce{Co3Cl11}\sum_{i=1}^N \ce{El}_{i,z_i}$$
and the remaining sum
$$\sum_{i=1}^N z_iM_i = (759 - 189 × 3)~\pu{g mol-1} = \pu{192 g mol-1} \tag{3c}$$
The second assumption is that we are dealing with the common oxidation numbers of the elements, no exotic stuff.
With this in mind, and knowing that cobalt catalyst used in polymerization are vastly only $\ce{Co^{II}}$ and $\ce{Co^{III}}$ species, we can deduce the possible charge of the remaining sum:
$$(\ce{Co3Cl11})^{q-}\left(\sum_{i=1}^N \ce{El}_{i,z_i}\right)^{q+}$$
Since it's likely only $\ce{Cl-},$ the $q$ can adopt only the values defined by $\ce{Co^{II}}:\ce{Co^{III}}$ ratio.
For the total of three cobalt(II,III) atoms $q = 2,3,4,5.$
This will assist us in proposing missing element(s), and one can start to iterate over numbers.
The third assumption is that $i = 1$ and there is only one extra element with single coefficient $z_1$.
The table below summarizes possible outcomes:
$$
\begin{array}{ccccc}
\hline
z_1 & M_1/\pu{g mol-1} & \ce{El_1} & M(\ce{El_1})/\pu{g mol-1} & \text{Formula Example} \\
\hline
1 & 192 & \ce{Ir} & 192.22 & \ce{Co^{II}2Co^{III}Ir^{IV}Cl11} \\
& & & & \ce{Co^{II}Co^{III}2Ir^{III}Cl11} \\
2 & 96 & \ce{Mo} & 95.95 & \ce{Co^{II}3Mo^{II}Mo^{III}Cl11} \\
& & & & \ce{Co^{II}2Co^{III}Mo^{II}2Cl11} \\
3 & 64 & \ce{Cu} & 63.55 & \ce{Co^{II}Co^{III}2Cu^I3Cl11} \\
& & & & \ce{Co^{II}3Cu^{I}Cu^{II}2Cl11} \\
4 & 48 & \ce{Ti} & 47.87 & \ce{Co^{?}3Ti^{?}4Cl11} \\
\hline
\end{array}
$$
Starting with $z = 4,$ there doesn't seem to be an appropriate set of oxidation numbers for the element to comply with the charge balance, so I'd say the candidates for the third elements are only iridium, molybdenum and copper.
Judging from the application (catalysis) as well as from the typical oxidation states, I'd actually propose iridium cobalt chloride $\ce{Co3IrCl11}$ as the answer, yet $\ce{Co3Mo2Cl11}$ would also be possible (AFAIK, molybdenum(II,III) isn't a common composition for halide salts), and $\ce{Co3Cu3Cl11}$ would be a stretch both due to higher deviation from the declared molar mass and presence of copper(I).
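The arithmetic above is easy to reproduce in a few lines (a sketch using the same atomic masses as in the answer):

```python
# Atomic masses used in the answer (g/mol):
M_Co, M_Cl, M_total = 58.93, 35.45, 759.0

# Step 1: the Cl : Co mole ratio, equation (1)
ratio = (25.7 / M_Cl) / (11.65 / M_Co)
print(round(ratio, 2))  # 3.67, i.e. about 11/3, so x must be a multiple of 3

# Step 2: residual molar mass for the unknown element(s); only x = 3
# keeps it positive (equations (3b)-(3c)):
for x in (3, 6, 9):
    print(x, round(M_total - x * M_Co - (11 * x // 3) * M_Cl, 2))

# Step 3: divide the x = 3 residual by candidate coefficients z,
# reproducing the table (Ir 192.22, Mo 95.95, Cu 63.55, Ti 47.87):
residual = M_total - 3 * M_Co - 11 * M_Cl
for z in (1, 2, 3, 4):
    print(z, round(residual / z, 1))
```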
References
Rosenberg, J. L.; Epstein, L. M. Schaum’s Outline of Theory and Problems of College Chemistry, 8th ed.; Schaum’s outline series; McGraw-Hill: New York, 1997. ISBN 978-0-07-053709-5. | {
"domain": "chemistry.stackexchange",
"id": 13026,
"tags": "inorganic-chemistry, catalysis, mole"
} |
Is this exponential-sized vertex cover problem in P? | Question:
Suppose P $\neq$ NP. Prove or disprove whether the language is in P, using a reduction or an algorithm:
$$ \left\{ \left(G = (V,E), k, 0^{2^{|V|}} \right) \mid (G,k) \in VC \right\} $$
Suppose I have this input: $00000000$. I can construct a TM which calculates the number of vertices by applying a log operation. So for this example $\log_2 8 = 3$: I have a graph with 3 vertices.
For this case I need to check if $k$ applies to these vertices.
This can be done in $v\choose k$ operations. I think because it can be done in polynomial time the language is in P.
Answer: More generally, if $L \in \mathsf{TIME}(f(n))$ and $f(n)$ is a reasonable function then the following language is in $\mathsf{P}$:
$$
L' = \{ (x,0^{f(|x|)}) : x \in L \}.
$$
Indeed, given an input $(x,0^m)$, we first check that $m = f(|x|)$ (that's why we need $f$ to be reasonable, an informal notion which can be formalized if need be), and then we run the algorithm for $L$ witnessing that it is in $\mathsf{TIME}(f(n))$. Since the input has length larger than $f(n)$, the latter part actually runs in linear time.
In your case, $L$ is vertex cover, which can be solved in time $2^n$, where $n$ is the number of vertices in the graph (in this regard, your example doesn't quite correspond to the general case above, but the idea is the same). So your $L'$ is in $\mathsf{P}$.
Note that the assumption $\mathsf{P}\neq\mathsf{NP}$ makes no difference here – you can prove that your language is in $\mathsf{P}$ unconditionally.
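A sketch of the resulting decider (an illustration with made-up names, assuming the graph is given as a vertex count plus an edge list): first validate the padding length, then run a brute-force vertex-cover search, which is exponential in $n$ but linear in the padded input length:

```python
import itertools

def has_vertex_cover(n, edges, k):
    """Brute-force vertex-cover check; at most 2^n subsets are examined."""
    for size in range(k + 1):
        for cover in itertools.combinations(range(n), size):
            s = set(cover)
            if all(u in s or v in s for (u, v) in edges):
                return True
    return False

def in_L_prime(n, edges, k, padding):
    # Reject unless the padding has length exactly 2^n; after that, the
    # 2^n-time brute force is linear in the (padded) input length.
    if len(padding) != 2 ** n:
        return False
    return has_vertex_cover(n, edges, k)

triangle = [(0, 1), (1, 2), (0, 2)]
print(in_L_prime(3, triangle, 2, "0" * 8))  # True: {0, 1} covers all edges
print(in_L_prime(3, triangle, 1, "0" * 8))  # False: one vertex is not enough
print(in_L_prime(3, triangle, 2, "0" * 7))  # False: padding is not 2^3 long
```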
In the comments, you are worried that since vertex cover is NP-complete, you have just shown that P=NP. But your language is not vertex cover, and (assuming P≠NP), it is not NP-hard. You can reduce vertex cover to your language by sending an instance $(G,k)$ of vertex cover to an instance $(G,k,0^{2^{|V|}})$ of your problem, but this reduction doesn't run in polynomial time, since the output is exponentially long. | {
"domain": "cs.stackexchange",
"id": 17737,
"tags": "np-complete, vertex-cover"
} |
Does the existence of hydrogen in the universe create an obscuration effect similar to the way air does at great distances? | Question: I've had this question for a while. I understand the universe is full of "dust". I am also aware of the fact that there is an average measure of particle density in the universe.
I am assuming for this question that these are actually separate, meaning that where there is dust, there is dust; everywhere else, there is the background "1 proton per cubic meter" (or whatever, not important to be exact here). It could be that this is a false dichotomy and therefore the question falls away.
If not, I ask on the latter: this "1 proton per cubic meter", on the scale of billions of light years - does it produce a dimming/colouring of what we see that we have to take into account? In the same way distant mountains become blue because air isn't 100% transparent?
Answer: One way of thinking about this is in terms of the physics of the cosmic microwave background. The cosmic microwave background occurs as a phenomenon when a nearly homogeneous universe transitions between being hot and ionised and opaque to electromagnetic radiation to being slightly cooler, mainly neutral and transparent to visible and longer wavelength light.
Given that the universe has expanded and become rarefied (on average) by a factor of $(1100)^3$ since then, it should be obvious that absorption by atomic hydrogen at most wavelengths must be negligible. The exception to that is short wavelength radiation which may be absorbed by atomic transitions in hydrogen and at observed wavelengths that depend on the redshift of the absorbing gas. This leads to phenomena like Damped Lyman Alpha systems, which are broad absorption lines caused by discrete clouds of neutral hydrogen along the line of sight. The amount of absorption at observed wavelengths shortward of the rest wavelength of the Lyman Alpha transition ($121.6$ nm) can be 50% or more at redshifts of $>3$ (Thomas et al. 2020).
In terms of ionised hydrogen, we could think about Thomson scattering from electrons, which has a cross-section of $\sigma = 6.6\times 10^{-29}$ m$^2$ and is wavelength-independent. A reasonable fraction of the Universe's hydrogen is ionised and given that the ionised early universe was opaque it requires a calculation to see what the opacity of those electrons might be now. This does require an estimate of the density, and the number $n$ of a few (say 3) electrons per cubic metre is not a bad estimate. The mean free path of a photon before it is scattered is given by $(n\sigma)^{-1} = 5\times 10^{27}$ m, or 500 billion light years. This is a lot bigger than the observable universe and so can be neglected.
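The mean-free-path arithmetic in the previous paragraph can be reproduced directly (same numbers as above):

```python
sigma_T = 6.6e-29  # Thomson cross-section, m^2 (from the answer)
n_e = 3.0          # assumed free-electron density, m^-3
ly = 9.46e15       # one light year in metres

mfp = 1.0 / (n_e * sigma_T)  # photon mean free path before scattering
print(f"{mfp:.2e} m, about {mfp / ly:.1e} light years")
```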
In terms of dust, this requires heavy elements (carbon, silicon, oxygen) and these are produced inside stars. Most star formation in the universe occurs at redshifts of around 3-5 or less. Some of that dust is expelled from galaxies and pollutes the intergalactic medium. This should produce a small extinction/reddening effect which is bigger for shorter wavelength light. There have been estimates of this - Thomas et al. (2020) look at various datasets and review various work which seems to suggest a reddening parameter of just $E(B-V) \sim 0.1$ out to redshifts of 3 or 4, which equates to an extinction of just 20-30% at visible wavelengths.
So what effects does this have? In the case of absorption shortward of Lyman alpha it means there are dramatic changes in the colours of the galaxies at wavelengths corresponding to redshifted Lyman alpha, i.e., the extinction is very wavelength-dependent; the galaxies' light can be completely absorbed at shorter wavelengths. The effects of dust are much more subtle. The extinction occurs across a broad range of wavelengths and there is a small reddening effect; but it is so small that it has been very difficult to measure with any precision.
"domain": "astronomy.stackexchange",
"id": 6701,
"tags": "dust, deep-sky-observing, hydrogen, vacuum, proton"
} |
What was the lowest temperature ever recorded on the surface of Earth? | Question: This article Lowest recorded temperatures lists a -89 Degrees Celsius in Vostok, Antartica as the lowest recorded temperature. However, this other article List of weather records lists a −93.2 °C (−135.8 °F) on 10 August 2010, at 81.8°S 59.3°E. measured by Satellite. And this other article Coldest temperature on Earth in Antarctica lists a -97,8 °C , also using satellite measurements. Which one is the lowest temperature ever recorded on the surface of Earth then? Are satellite measurements reliable and do they count?
Answer: The wikipedia article List of weather records explicitly states
This list does not include remotely sensed observations such as satellite measurements, since those values are not considered official records.[1]
Reference 1 in that article, a web archive link to a page on World Meteorological Organization Global Weather & Climate Extremes, adds that (emphasis mine)
Although claims of a "world-record coldest temperature" have been made for a remote-sensed location in Antarctica, the WMO official coldest temperature remains -89.2°C (-128.5°F) recorded on 21/7/1983 at Vostok, Antarctica. Official weather measurements are made at using standard equipment at a fixed height of between 1.25 m (4 ft 1 in) and 2 m (6 ft 7 in) above the ground for a fixed location over a specific length-of-record. Remoted-sensed values of temperature are not at this time regarded as official weather measurements.
There are many reasons why remote sensed temperature values (especially extreme ones) should not be considered as official with regard to record temperatures. First and foremost, those remotely sensed values are not measuring temperature. They instead measure proxies for temperature in the form of intensities of electromagnetic radiation in various frequency bands in the thermal infrared. The surface temperature is derived from these observations by a complex set of algorithms that make various assumptions regarding the nature of the sensed data.
These algorithms need to be calibrated against ground truth. There is no ground-truth data for temperatures that low; the interpreted values were well outside the sensors' calibrated range. Moreover, the "ground truth" data is actually the recorded temperature at 1.25 meters to 2 meters above the ground. Official weather stations are supposed to be slightly elevated to avoid boundary layer effects. The sensed data, on the other hand, are primarily a result of surface temperatures, confounded by the fact that snow is a lousy thermal infrared emitter ("a good reflector is a lousy emitter") and by contributions from the atmosphere between the surface and the satellite.
That said, those satellite readings almost certainly do indicate temperatures well below the reading at Vostok that officially counts as the lowest temperature recorded on the Earth. | {
"domain": "earthscience.stackexchange",
"id": 1804,
"tags": "climate, temperature, measurements, extreme-weather, field-measurements"
} |
C++14 Lock-free Multi-producer, Multi-Consumer Queue | Question: Introduction
This is a follow-up to a previous question of mine, where I presented another queue of the same type to get some feedback on it. Some people pointed out some fundamental errors I had made, and I came to learn that I was very naive when it comes to how I was padding variables. So this is an updated version, created using the feedback I got in the previous thread.
The new queue is 32bit and is intended to be portable and light-weight
Bounded Circular Buffer & Two Cursors
I'm still using a bounded circular buffer to store/load the data. And it still uses two cursors which indicate to the next producer/consumer which index on the buffer they should be working with. When a producer/consumer wishes to increment their respective cursor, both cursors are loaded at the same time, as they are both contained within an aligned structure. This is intended to ensure true-sharing of access to the cursors because they are never loaded individually. Once the cursors have been loaded, a fullness/emptiness check is performed before a CAS on the object holding both cursors. If either of the cursors has changed in the interim, the CAS will fail and another attempt is made. When the CAS succeeds, the data can be put into or removed from the buffer. In order to calculate the index on the buffer, an index-mask is used which will be one less than a power of two because we can use a bitwise-and instead of modulo to have the index wrap back to zero. For this to work the queue size must be a power of two, so the size specified by the user is raised up to the next power of two.
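The power-of-two masking trick described above can be sketched in isolation (illustrative only; the real queue derives its mask from the template size parameter):

```cpp
#include <cstdint>

// For a power-of-two capacity, masking with (capacity - 1) is equivalent
// to taking the cursor modulo the capacity, but avoids a division.
constexpr std::uint32_t wrap_index(std::uint32_t cursor, std::uint32_t capacity)
{
    // Precondition: capacity is a power of two, so capacity - 1 has all
    // low bits set (e.g. 8 - 1 == 0b0111).
    return cursor & (capacity - 1);
}
```

This is why the queue rounds the user-supplied size up to the next power of two: the equivalence only holds for power-of-two capacities.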
The Circular Buffer Nodes & Spin-locks
While the cursors may be protected by a CAS operation, each node on the buffer is protected by a spin-lock. This is to prevent the case where a consumer may try to read some data before a producer has finished putting it in, or the opposite case, where a producer tries to add some data before a consumer has finished reading it.
The Code
Here's the full source, with many Doxygen style comments removed for clarity.
// SPDX-License-Identifier: GPL-2.0-or-later
/**
* C++14 32bit Lockless Bounded Circular MPMC Queue type.
* Author: Primrose Taylor
*/
#ifndef BOUNDED_CIRCULAR_MPMC_QUEUE_H
#define BOUNDED_CIRCULAR_MPMC_QUEUE_H
#include "stdio.h"
#include "stdlib.h"
#include <atomic>
#include <stdint.h>
#include <functional>
#include <thread>
#define CACHE_LINE_SIZE 64U
#if defined(_MSC_VER)
#define HARDWARE_PAUSE() _mm_pause();
#define _ENABLE_ATOMIC_ALIGNMENT_FIX 1 // MSVC atomic alignment fix.
#define ATOMIC_ALIGNMENT 4
#else
#define ATOMIC_ALIGNMENT 16
#if defined(__clang__) || defined(__GNUC__)
#define HARDWARE_PAUSE() __builtin_ia32_pause();
#endif
#endif
/**
* Lockless, Multi-Producer, Multi-Consumer, Bounded Circular Queue type.
 * The type is intended to be lightweight & portable.
 * The sub-types are all padded to fit within cache lines. Padding may be put
 * in between member variables if the variables are accessed separately.
*/
template <typename T, uint_least32_t queue_size, bool should_yield_not_pause = false>
class bounded_circular_mpmc_queue final
{
/**
* Simple, efficient spin-lock implementation.
* A function that takes a void lambda function can be used to
 * conveniently do something which will be protected by the lock.
* @cite Credit to Erik Rigtorp https://rigtorp.se/spinlock/
*/
class spin_lock
{
std::atomic<bool> lock_flag;
public:
spin_lock()
: lock_flag{false}
{
}
void do_work_through_lock(const std::function<void()> functor)
{
lock();
functor();
unlock();
}
void lock()
{
while (true)
{
if (!lock_flag.exchange(true, std::memory_order_acquire))
{
break;
}
while (lock_flag.load(std::memory_order_relaxed))
{
should_yield_not_pause ? std::this_thread::yield() : HARDWARE_PAUSE();
}
}
}
void unlock()
{
lock_flag.store(false, std::memory_order_release);
}
};
/**
* Structure that holds the two cursors.
* The cursors are held together because we'll only ever be accessing
* them both at the same time.
* We don't directly align the struct because we need to use it as an
* atomic variable, so we must align the atomic variable instead.
*/
struct cursor_data
{
uint_fast32_t producer_cursor;
uint_fast32_t consumer_cursor;
uint8_t padding_bytes[CACHE_LINE_SIZE -
sizeof(uint_fast32_t) -
sizeof(uint_fast32_t)
% CACHE_LINE_SIZE];
cursor_data(const uint_fast32_t in_producer_cursor = 0,
const uint_fast32_t in_consumer_cursor = 0)
: producer_cursor(in_producer_cursor),
consumer_cursor(in_consumer_cursor),
padding_bytes{0}
{
}
};
/**
* Structure that represents each node in the circular buffer.
* Access to the data is protected by a spin lock.
* Contention on the spin lock should be minimal, as it's only there
* to prevent the case where a producer/consumer may try work with an element before
 * someone else has finished working with it. The data and the spin lock are separated by
 * padding to put them in different cache lines, since they are not accessed
* together in the case mentioned previously. The problem with this is
* that in low contention cases, they will be accessed together, and thus
* should be in the same cache line. More testing is needed here.
*/
struct buffer_node
{
T data;
uint8_t padding_bytes_0[CACHE_LINE_SIZE -
sizeof(T) % CACHE_LINE_SIZE];
spin_lock spin_lock_;
uint8_t padding_bytes_1[CACHE_LINE_SIZE -
sizeof(spin_lock)
% CACHE_LINE_SIZE];
buffer_node()
: spin_lock_(),
padding_bytes_0{0},
padding_bytes_1{0}
{
}
void get_data(T& out_data) const
{
spin_lock_.do_work_through_lock([&]()
{
out_data = data;
});
}
void set_data(const T& in_data)
{
spin_lock_.do_work_through_lock([&]()
{
data = in_data;
});
}
};
/**
 * Structure that contains the index mask, and the circular buffer.
 * Both are accessed at the same time, so they are not separated by padding.
*/
struct alignas(CACHE_LINE_SIZE) circular_buffer_data
{
const uint_fast32_t index_mask;
buffer_node* circular_buffer;
uint8_t padding_bytes[CACHE_LINE_SIZE -
sizeof(const uint_fast32_t) -
sizeof(buffer_node*)
% CACHE_LINE_SIZE];
circular_buffer_data()
: index_mask(get_next_power_of_two()),
padding_bytes{0}
{
static_assert(queue_size > 0, "Can't have a queue size <= 0!");
static_assert(queue_size <= 0xffffffffU,
"Can't have a queue length above 32bits!");
static_assert(
std::is_copy_constructible_v<T> ||
std::is_copy_assignable_v<T> ||
std::is_move_assignable_v<T> ||
std::is_move_constructible_v<T>,
"Can't use non-copyable, non-assignable, non-movable, or non-constructible type!"
);
/** Contiguously allocate the buffer.
 * The theory behind using calloc and not aligned_alloc
 * or equivalent, is that the memory should still be aligned,
* since calloc will align by the type size, which in this case
* is a multiple of the cache line size.
*/
circular_buffer = (buffer_node*)calloc(
index_mask + 1, sizeof(buffer_node));
}
~circular_buffer_data()
{
if(circular_buffer != nullptr)
{
free(circular_buffer);
}
}
private:
/**
* @cite https://graphics.stanford.edu/~seander/bithacks.html#RoundUpPowerOf2
*/
uint_least32_t get_next_power_of_two()
{
uint_least32_t v = queue_size;
v--;
v |= v >> 1;
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
v++;
return v;
}
};
public:
bounded_circular_mpmc_queue()
: cursor_data_(cursor_data{}),
circular_buffer_data_()
{
}
bool push(const T& in_data)
{
cursor_data current_cursor_data;
// An infinite while-loop is used instead of a do-while, to avoid
// the yield/pause happening before the CAS operation.
while(true)
{
current_cursor_data = cursor_data_.load(std::memory_order_acquire);
// Check if the buffer is full..
if (current_cursor_data.producer_cursor + 1 == current_cursor_data.consumer_cursor)
{
return false;
}
// CAS operation used to make sure the cursors have not been incremented
// by another producer/consumer before we got to this point, and to then increment
// the cursor by 1 if it hasn't been changed.
if (cursor_data_.compare_exchange_weak(current_cursor_data,
{current_cursor_data.producer_cursor + 1,
current_cursor_data.consumer_cursor},
std::memory_order_release, std::memory_order_relaxed))
{
break;
}
should_yield_not_pause ? std::this_thread::yield() : HARDWARE_PAUSE();
}
// Set the data
circular_buffer_data_.circular_buffer[
current_cursor_data.producer_cursor & circular_buffer_data_.index_mask
].set_data(in_data);
return true;
}
bool pop(T& out_data)
{
cursor_data current_cursor_data;
while(true)
{
current_cursor_data = cursor_data_.load(std::memory_order_acquire);
// Check if the queue is empty..
if (current_cursor_data.consumer_cursor == current_cursor_data.producer_cursor)
{
return false;
}
if (cursor_data_.compare_exchange_weak(current_cursor_data,
{current_cursor_data.producer_cursor,
current_cursor_data.consumer_cursor + 1},
std::memory_order_release, std::memory_order_relaxed))
{
break;
}
should_yield_not_pause ? std::this_thread::yield() : HARDWARE_PAUSE();
}
// Get the data
circular_buffer_data_.circular_buffer[
current_cursor_data.consumer_cursor & circular_buffer_data_.index_mask
].get_data(out_data);
return true;
}
uint_fast32_t size() const
{
const cursor_data cursors = cursor_data_.load(std::memory_order_acquire);
return cursors.producer_cursor - cursors.consumer_cursor;
}
bool empty() const
{
return size() == 0;
}
bool full() const
{
return size() == circular_buffer_data_.index_mask + 1;
}
private:
alignas(CACHE_LINE_SIZE) std::atomic<cursor_data> cursor_data_;
circular_buffer_data circular_buffer_data_;
private:
bounded_circular_mpmc_queue(
const bounded_circular_mpmc_queue&) = delete;
bounded_circular_mpmc_queue& operator=(
const bounded_circular_mpmc_queue&) = delete;
};
#endif
I'm wondering if my push/pop methods work as I think they do? Is there any chance of the ABA problem? And is the use of a spin-lock to guard each node the best way of doing it? I'm using one because in theory it shouldn't need to be contended very often, since in the vast majority of cases no one else will still be in the middle of working with the node.
Any help would be greatly appreciated! Cheers.
Answer: Remove do_work_through_lock()
The intention behind this function is good, and the implementation looks reasonable for its intended use. However, since you added lock() and unlock() member functions, you can use a std::lock_guard to lock your spin_lock. This means that you can write:
void set_data(const T& in_data)
{
std::lock_guard lg(spin_lock_);
data = in_data;
}
If the function you would pass to do_work_through_lock() would be more complicated and could potentially throw exceptions, you cannot guarantee that unlock() would be called. std::lock_guard however takes care of that.
Use an enum class for should_yield_not_pause
Suppose you want to declare a queue that should yield, you have to write something like:
bounded_circular_mpmc_queue<int, 10, true> queue;
While it's very normal to see a value type and a size being passed as template parameters for a container, that true says very little. It's not only hard for someone reading this code to understand what it means, it might also be unclear for someone writing this code whether true means yield or pause. You can make it much more explicit by passing it as an enum class type template parameter, like so:
enum class wait_method {
YIELD,
PAUSE,
};
template <typename T, uint_least32_t queue_size, wait_method yield_or_pause = wait_method::YIELD>
class bounded_circular_mpmc_queue final
{
...
};
And when you need to decide whether to yield or pause write:
yield_or_pause == wait_method::YIELD ? std::this_thread::yield() : HARDWARE_PAUSE();
It might also help to make a private member function that yields-or-pauses, so you only have to write this logic once. Finally, while I wouldn't recommend it over the enum class solution, you could consider passing a function pointer as a template parameter:
template <typename T, uint_least32_t queue_size, void (*wait_method)() = std::this_thread::yield>
class bounded_circular_mpmc_queue final
{
...
};
And then just call wait_method() whenever you need to wait. This allows the user to pass an arbitrary non-member function.
Yet another solution is to take away the choice from the user, and instead do something like pausing for the first 10 iterations or so, and if you still haven't got a lock by then, start yielding.
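That last idea — pause for the first few iterations, then fall back to yielding — could look something like the sketch below (the threshold of 10 is an arbitrary starting point, not anything from the original code, and would want tuning):

```cpp
#include <thread>
#if defined(_MSC_VER)
#include <intrin.h>
#endif

// Hybrid wait: spin with a cheap CPU pause for the first few attempts,
// then yield to the scheduler so we stop burning a core under contention.
class hybrid_backoff
{
public:
    void wait()
    {
        if (attempts_++ < pause_limit)
        {
#if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
            __builtin_ia32_pause();
#elif defined(_MSC_VER)
            _mm_pause();
#endif
        }
        else
        {
            std::this_thread::yield();
        }
    }

    void reset() { attempts_ = 0; }
    int attempts() const { return attempts_; }

private:
    static constexpr int pause_limit = 10;
    int attempts_ = 0;
};
```

A caller would construct one `hybrid_backoff` per acquisition attempt and call `wait()` in the retry loop, calling `reset()` once the lock is taken.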
Proper way to align things
Maybe CACHE_LINE_SIZE is set correctly for the CPU you are running your code on, but it might be wrong on another CPU. Since your code only compiles with C++17 and up, consider using std::hardware_destructive_interference_size to get the size objects need to be apart to avoid cache line sharing. (Note that it might not be implemented in the C++ standard library you are using, so use the fallback shown in the example.)
Furthermore, there is no need to add padding bytes to structs you want to align. Your calculation of the size is incorrect anyway, as % has a higher operator precedence than -, and it would fail to compile if CACHE_LINE_SIZE is smaller than the size of the data you want to align, since taking the remainder of a negative number might be negative in C++.
So consider writing:
struct buffer_node
{
alignas(std::hardware_destructive_interference_size) T data;
alignas(std::hardware_destructive_interference_size) spin_lock spin_lock_;
...
};
std::atomic<T> does not guarantee it is lock-free
While most built-in types will be lock-free on most platforms when used atomically, the use of std::atomic does not automatically guarantee that. Check with is_lock_free() that, for example, std::atomic<cursor_data> is lock-free, otherwise your whole queue will no longer be lock-free. Note that if you pad it to be the size of a cache line, it will very likely not be lock-free.
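One way to perform that check (a sketch, not code from the queue above): pack the two 32-bit cursors into an 8-byte trivially-copyable struct and ask the atomic directly. On common 64-bit targets this fits a single machine word and typically stays lock-free; padding the struct out to a full cache line would usually break that.

```cpp
#include <atomic>
#include <cstdint>

// Two 32-bit cursors packed into 8 bytes: small enough for a
// single-word CAS on typical 64-bit hardware.
struct packed_cursors
{
    std::uint32_t producer;
    std::uint32_t consumer;
};

// Compile-time check (C++17): fail the build if std::atomic would
// fall back to an internal lock for this type on this target.
static_assert(std::atomic<packed_cursors>::is_always_lock_free,
              "cursor CAS is not lock-free on this target");
```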
Avoid memory allocations
The theory behind using calloc() and not aligned_alloc()
or equivalent, is that the memory should still be aligned,
since calloc will align by the type size, which in this case
is a multiple of the cache line size.
Unfortunately, that is not the case. It will return a pointer which is suitably aligned for any built-in type (this will probably be smaller than the cache line size), but it will not align it to the size parameter you pass to calloc(). Also, a buffer_node can be larger than a cache line, given a large enough T.
new will actually see the type of the object you are trying to allocate, including its alignment restrictions. So just by adding alignas attributes to the member variables of buffer_node, new buffer_node[index_mask + 1] will allocate a suitably aligned array.
Even better than new/delete would be to use a std::unique_ptr. But even better than that would be not to have to allocate memory at all. Consider writing:
/* Note: outside circular_buffer_data */
static constexpr uint_least32_t get_next_power_of_two()
{
    uint_least32_t v = queue_size; // queue_size is a value template parameter, not a function
...
return v;
}
struct circular_buffer_data
{
static constexpr uint_fast32_t index_mask = get_next_power_of_two();
buffer_node circular_buffer[index_mask + 1];
};
Now this struct just has a single member variable, consider removing it entirely and just declare this directly in bounded_circular_mpmc_queue:
static constexpr uint_fast32_t index_mask = get_next_power_of_two();
buffer_node circular_buffer_data_[index_mask + 1];
Move the static_assert()s to the top of bounded_circular_mpmc_queue
The static_assert()s you had in the constructor of circular_buffer_data don't depend on any member of that struct itself. So they should just be in bounded_circular_mpmc_queue directly. Also note that unlike assert(), static_assert() is a declaration, which means you don't need to put it inside a function. You can write it directly at the top of bounded_circular_mpmc_queue:
template <typename T, uint_least32_t queue_size, ...>
class bounded_circular_mpmc_queue final
{
static_assert(queue_size > 0, "Can't have a queue size <= 0!");
static_assert(queue_size <= 0xffffffffU, "Can't have a queue length above 32bits!");
...
};
Incorrect check for full queue
Your cursors are 32-bit integers that you increment indefinitely. Only when using it to index an item in circular_buffer[] do you AND it with the index_mask. This is fast and avoids the ABA problem for small queues. However, your check for whether the queue is full looks like this:
if (current_cursor_data.producer_cursor + 1 == current_cursor_data.consumer_cursor)
This is however incorrect. Consider what happens if the producers adds more than queue_size items to the queue before any consumer finishes consuming a single item. You should apply the mask on both sides of the equality operator:
if (((current_cursor_data.producer_cursor + 1) & circular_buffer_data_.index_mask) ==
    (current_cursor_data.consumer_cursor & circular_buffer_data_.index_mask))
Producers can still overwrite data a consumer is working on
You still have the same problem as in the first iteration of your code. Again, simplifying your push() and pop() function:
push(const T& in_data)
{
auto producer_cursor = claim_cursor_for_push();
circular_buffer_data_[producer_cursor].set_data(in_data);
}
pop(T& out_data)
{
auto consumer_cursor = claim_cursor_for_pop();
circular_buffer_data_[consumer_cursor].get_data(out_data);
}
In both cases, the act of claiming the cursor itself is an atomic operation, but getting/setting the data is a separate operation. This means that by calling pop(), consumer_cursor might be claimed, but another thread might then do a push() operation, which might overwrite the data at the consumer_cursor, since as far as cursor_data_ is concerned, the consumer just freed that index, so it is free for the taking of a producer. | {
"domain": "codereview.stackexchange",
"id": 42883,
"tags": "c++, multithreading, concurrency, queue, producer-consumer"
} |
Probability of Photoelectric Interactions | Question: I am currently reading Radiation Detection and Measurement by Glenn F. Knoll, and in chapter 2, page 49, he gives the probability of photoelectric interaction as: $$\tau \approx C \frac{Z^n}{E_{\gamma}^{3.5}}$$ and says that 'exponent n varies between 4 and 5 over the gamma-ray energy region of interest'.
But what does that actually mean? I can't quite picture what he is saying; can someone put this into context if possible.
Answer:
exponent n varies between 4 and 5 over the gamma-ray energy region of
interest
It says that, for gamma rays (high energy EM radiation), the photoelectric interaction (interaction of a gamma ray photon with an atom resulting in the emission of an electron) is more likely to occur in materials with high atomic number, Z (number of protons in the nucleus).
More specifically, the probability of photoelectric interaction is proportional to $Z^n$, where $n$, the power or exponent, varies between $4$ and $5$.
As an example, according to this formula, the probability of the photoelectric interaction of gamma rays with lead (atomic number $82$) should be $16 (2^4)$ to $32 (2^5)$ times greater than the probability of photoelectric interaction of the same gamma rays with niobium (atomic number $41$). | {
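Written out, with the energy factor cancelling because the same gamma rays are assumed in both materials:

```latex
\frac{\tau_{\mathrm{Pb}}}{\tau_{\mathrm{Nb}}}
  = \left(\frac{Z_{\mathrm{Pb}}}{Z_{\mathrm{Nb}}}\right)^{n}
  = \left(\frac{82}{41}\right)^{n}
  = 2^{n},
\qquad 2^{4} = 16, \quad 2^{5} = 32 .
```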
"domain": "physics.stackexchange",
"id": 52279,
"tags": "radiation, photoelectric-effect, gamma-rays"
} |
Actionlib reconnect problem (c++ , with test case) | Question:
I'm unable to get my action clients to reconnect to their servers for more than a single request when the connection is temporarily lost (e.g. when the server node is restarted). It may be that I'm doing something wrong with threading. I created a very simple test case [1] (server.cpp and client.cpp) to illustrate the problem.
To see the problem, simply start the client, and leave it running. It will report its connection status every second, and attempt to send a goal if it's connected. In another terminal, run the server, then stop it, and then start it again. The following is the full output of the client terminal, with comments denoting when I start and stop the server.
$ rosrun actionlib_cpp_disconnect client # in first terminal
client not connected, skipping goal=0!
client not connected, skipping goal=1!
#### here i started the server, second terminal
client connected, sending goal=2!
client connected, sending goal=3!
#### here i stopped the server, second terminal
client not connected, skipping goal=4!
client not connected, skipping goal=5!
#### here i started the server again, second terminal
[ WARN] [1375230239.830277763]: goalConnectCallback: Trying to add [/server] to goalSubscribers, but it is already in the goalSubscribers list
[ WARN] [1375230239.830734224]: cancelConnectCallback: Trying to add [/server] to cancelSubscribers, but it is already in the cancelSubscribers list
client connected, sending goal=6! # the first one after the reconnect always works ...
client not connected, skipping goal=7! # wtf? the server is running ...
client not connected, skipping goal=8!
client not connected, skipping goal=9!
Is this standard behaviour? So far, the only workaround I've found is to manually delete the action client whenever it reports that it's disconnected, and re-construct a new one in its place.
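That manual workaround can be sketched roughly as follows. The `FakeActionClient` type here is a hypothetical stand-in of my own, not part of actionlib; real code would hold a `std::unique_ptr` to the actual `actionlib::ActionClient<...>` and reset it in the same way:

```cpp
#include <memory>
#include <string>

// Hypothetical stand-in for actionlib::ActionClient<...>; only the
// parts the workaround touches are modelled here.
struct FakeActionClient
{
    explicit FakeActionClient(std::string name) : action_name(std::move(name)) {}
    bool isServerConnected() const { return connected; }

    std::string action_name;
    bool connected = false; // a freshly built client starts disconnected
};

// Workaround: whenever the client reports a lost connection, destroy it
// and construct a replacement so it redoes the connection handshake.
void recreate_if_disconnected(std::unique_ptr<FakeActionClient>& client,
                              const std::string& action_name)
{
    if (!client || !client->isServerConnected())
        client = std::make_unique<FakeActionClient>(action_name);
}
```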
I'm using ROS Groovy on Ubuntu Precise 12.04 with actionlib version 1.9.11-0precise-20130325-2034-+0000.
Here's the server code:
void cb_goal(actionlib::ServerGoalHandle srv_handle)
{
printf("server received new goal, goal=%d!\n", srv_handle.getGoal()->goal);
srv_handle.setRejected();
}
int main(int argc, char **argv)
{
ros::init(argc, argv, "server");
ros::NodeHandle nh;
actionlib::ActionServer server(nh, "action", cb_goal, false); /* auto_start = false */
server.start();
ros::spin();
return 0;
}
Here's the client code:
void ros_spin()
{
ros::spin();
}
int main(int argc, char** argv)
{
ros::init(argc, argv, "client");
ros::NodeHandle nh;
actionlib::ActionClient client("action");
boost::thread mythd(ros_spin);
actionlib::TestGoal goal;
goal.goal = 0;
ros::Rate loop_rate(1.0);
while (ros::ok())
{
if (client.isServerConnected())
{
printf("client connected, sending goal=%d!\n", goal.goal);
client.sendGoal(goal);
}
else
{
printf("client not connected, skipping goal=%d!\n", goal.goal);
}
loop_rate.sleep();
goal.goal++;
}
return 0;
}
[1] http://dellin.net/box/actionlib_cpp_disconnect.tar
Originally posted by cdellin on ROS Answers with karma: 462 on 2013-07-30
Post score: 7
Original comments
Comment by Martin Günther on 2013-08-01:
This is an extremely good question. Tried to figure out for an hour what's going on, but gave up. I started out by replacing Client and Server by their Simple* versions, just to make sure you got all state transitions right (see https://gist.github.com/mintar/6132716, FWIW). Sounds like a bug to me.
Comment by cdellin on 2013-08-04:
I submitted a bug against actionlib here: https://github.com/ros/actionlib/issues/7. I'll keep this question updated.
Answer:
This issue [0] was fixed in actionlib upstream [1], and the patched version is included in Hydro. If you're stuck with Groovy or earlier, I think you might be out of luck.
Thanks to Dirk Thomas for finding the bug and writing the patch!
[0] https://github.com/ros/actionlib/issues/7
[1] https://github.com/ros/actionlib/pull/13
Originally posted by cdellin with karma: 462 on 2014-01-23
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by Martin Günther on 2014-01-23:
Good work! :) | {
"domain": "robotics.stackexchange",
"id": 15107,
"tags": "actionlib"
} |
Origins of the principle of least time in classical mechanics | Question: Is it possible to derive the principle of least time from the principle of least action in lagrangian or hamiltonian mechanics? Or is Fermat's principle more fundamental than the principle of least action?
Answer: Yes, but note that the principle of least time is a special case of the principle of least action. Fermat's principle only applies to geometrical optics, i.e., where we are considering the trajectory of a ray of light, which always travels « in a straight line » at a speed determined by the index of refraction at the particular point it is at.
Therefore the following discussion is only about classical mechanics, with point particles and definite trajectories. There, it will be seen that ultimately both principles are equivalent. But in the larger context of quantum physics, it is the principle of least action which generalises and not, as far as I know, Fermat's principle.
The principle of least action was, historically, a generalisation of Fermat's principle, motivated by the rival corpuscular theory of light. Then the new principle was later seen to apply in greater generality, to all particle dynamics.
Gauss and Hertz found a re-formulation of the principle of least action: they were able to define a notion of curvature in the abstract configuration space of a system of particles, see Whittaker, Analytical Dynamics, p. 254, and found that the laws of dynamics followed from a « principle of least curvature »: the system will follow the trajectory which has at each point the least curvature. Felix Klein generalised this further. He put a non-Euclidean geometry on this abstract space describing the trajectory of the system and formulated the law that the actual path is a geodesic in this geometry. This is not the same as Einstein's theory of general relativity since it is an abstract space with a high number of dimensions, as always in Hamiltonian Mechanics.
Using Klein's point of view, the principle of least action can be deduced back from the principle of least time: abstractly, the paths in this non-Euclidean space can be regarded as the quickest paths given an artificially defined « index of refraction » in this space, defined at each point. So perhaps neither is more fundamental than the other. But from a physically intuitive point of view, perhaps one could justify saying that the principle of least action is more fundamental, since Klein's construction could be thought of as rather artificial. See also http://math.ucr.edu/home/baez/classical/ and also Klein's book on the history of mathematics in the 19th century, a wonderful book which should be every mathematical physicist's bedside reading... but I cannot find my copy right this instant...
"domain": "physics.stackexchange",
"id": 2248,
"tags": "classical-mechanics, optics, lagrangian-formalism, hamiltonian-formalism"
} |
Data requirement to determine proportionality | Question: A common result of theoretical analysis in physics is some sort of relation derived from physical parameters and typically expressed in the form of a non-dimensional parameter. These scale relations are not equalities but proportional relationships. For instance, in turbulence you end up with
$$ \frac{\eta}{l} \sim Re^{-3/4}$$
Assuming that some derivation results in:
$$f(\Pi_1,\Pi_2,\dots,\Pi_i) \sim C$$
where $C$ is the experimental measure, $f$ is a function of non-dimensional parameters, and $\Pi_i$ are the independent non-dimensional parameters, is there a minimum number of experiments to determine the constant of proportionality $a$ such that:
$$f(\Pi_1,\Pi_2,\dots,\Pi_i) = aC$$
sufficiently? I would expect that the number of experiments required is in some way related to the combination of all possible parameters. For instance, if $i = 2$ then I would expect a minimum of 4 experiments would be required. But is the minimum based on combinations sufficient or are more values needed?
Answer: Dimensional analysis allows us to write the solution to any physical system in the form
$$ f(\Pi_0, \Pi_1, \Pi_2, \dots) = 0 $$
where the $\Pi$s are independent dimensionless constants formed from our dimensionful physical parameters.
Usually, there is a particular physical parameter we are interested in computing, and so, by rescaling our dimensionless parameters against one another we can arrange that our physical parameter of interest appears in only one of the dimensionless constants. At that point, we can imagine solving this equation for that dimensionless parameter obtaining a relation of the form
$$ \Pi_0 = g(\Pi_1, \Pi_2, \dots) $$
where $g$ is an unspecified function that you would have to determine by means of experiment.
Sometimes, the problems we consider are simple enough that only a single independent dimensionless parameter can be formed, so that we have $ f(\Pi) = 0 $ or equivalently
$$ \Pi = C $$
where $C$ is some constant.
As a somewhat trivial example, let's say we wanted to figure out the area of a circle, and we had forgotten how. There are two physical parameters of interest, the area $A$ with dimensions $[L^2]$ and the radius $r$ with dimensions $[L]$. Given two dimensionful parameters in one dimension there is only a single independent dimensionless parameter we can form, so we know the physical law has to take the form
$$ f(\Pi) = 0 \qquad \Pi = \frac{A}{r^2} $$
which is equivalent to
$$ A = C r^2 $$
And we need only do a single experiment to determine the constant of proportionality. (In this case it's $\pi$)
But, it is rare that we have only a single dimensionless parameter and so can reduce the problem to one of direct proportionality. Take for example the problem of determining the period of a pendulum. First let's collect the physical parameters we have think are important, the period itself $T$, the mass $m$, the length $l$ the initial angle $\theta_0$, and gravity $g$. They have their respective dimensions:
$$ T : [T] \quad m : [M] \quad l : [L] \quad \theta_0 : [1] \quad g : [ L T^{-2} ] $$
These are 5 physical parameters in 3 dimensions ($[M], [L], [T]$), so we have only two independent dimensionless parameters that we can form, let's use the decomposition
$$ \Pi_0 = \frac{T^2 g}{l} \qquad \Pi_1 = \theta_0 $$
and so the best we can say with dimensional analysis is
$$ f(\Pi_0 , \Pi_1 ) = 0 $$
or equivalently
$$ \Pi_0 = g(\Pi_1) $$
$$ T = \sqrt{\frac{l}{g}} g(\theta_0) $$
but that is as far as we can go. In principle $g$ can be an arbitrary function and we cannot know how many measurements it would take to specify it. In fact, in this particular example we can solve for $g$ numerically and plot the resulting curve.
Notice that this is a fully general relation, one that we could not determine in principle with any finite set of experiments. (Notice also that in the limit of low angle, the function is approximately flat, and nearly $2\pi$, which is the answer for a linear pendulum).
But there are two things that can help keep dimensional analysis sane and useful. The first is that when we can reduce a problem to a single dimensionless parameter, the value of that dimensionless parameter (i.e. the constant of proportionality) is typically of order 1 (probably because as a pure number it doesn't have any reason not to be). In our area example above, for instance, the missing constant of proportionality was $\pi \sim 3.14$, which is of order 1.
And the second thing, which is particularly useful, is that the physical world tends to have sane and nearly constant solutions in the extremes. That is to say, one of our physical solutions of the form
$$ f(\Pi_0, \Pi_1, \Pi_2, \dots ) = 0 $$ will tend to a constant (usually of order one) in the limit that one of the $\Pi$s is either very small ($\ll1$) or very big ($\gg 1$).
This is precisely the behavior we see for the period of our pendulum. In the limit of small angles, the function is well behaved and approaches a constant of order 1 roughly (in this case $2 \pi \sim 6.28$).
This has broader implications that aren't usually shown to students when they are first introduced to dimensional analysis.
For instance, let's rewind and imagine we had done a poor job when constructing our list of possible physical parameters for the pendulum, for instance let's say we were just in our first quantum class and so thought $\hbar$ would be important. This would have introduced another dimensional parameter, so we would have
$$ \Pi_2 = \frac{ \hbar^2 }{ m^2 g l^3} $$
and we would have been left with as our most general solution
$$ T = \sqrt{ \frac{l}{g} } h( \theta_0, \Pi_2 ) $$
and it would seem we are worse off than we were to begin with. How cruelly we are rewarded for trying to get a more accurate answer. But, let's evaluate the actual value of this $\Pi_2$ for a reasonable pendulum, say one that is 1 kg and 1 meter long in Earth's gravity; in this case $\Pi_2 \sim 10^{-69}$, so its functional dependence is as good as a constant and we can take the approximation
$$ h(\theta_0 , \Pi_2 ) \sim g(\theta_0) $$
that is, a general function $h$ whose second argument is vanishingly small looks like some other function of its first variable alone.
In fact, if we wanted to get all philosophical, a real pendulum ought to be best described as a quantum object to begin with, so surely there ought to be some actual dependence on $\Pi_2$, though forgive me if I don't find it analytically. But, since we are interested in a classical pendulum, where classical in this case means the precise statement
$$ m^2 g l^3 \gg \hbar^2 $$
we are completely justified in ignoring the quantum contribution to our pendulum, much like in introductory courses, if you are only interested in small angle excitation of a pendulum, you are apt to ignore the full functional dependence on $\theta_0$ and instead say it contributes only a multiplicative constant.
Appendix: Period of pendulum
We have for a real pendulum
$$ \frac 12 l^2 \dot \theta^2 - g l \cos \theta = -gl \cos \theta_0 $$
We manipulate it into the form
$$ dt = \sqrt{\frac{l}{g}} \frac{1}{\sqrt 2} \frac{ d\theta }{ \sqrt{ \cos \theta - \cos \theta_0 } } $$
which we can integrate for a quarter period to get the full period
$$ T \sqrt{ \frac{g}{l} } = 2 \sqrt 2 \int_0^{\theta_0} \frac{ d\theta }{ \sqrt{ \cos \theta - \cos \theta_0 } } $$
which is the equation I graphed above. | {
"domain": "physics.stackexchange",
"id": 14634,
"tags": "experimental-physics, dimensional-analysis, data-analysis, scaling, order-of-magnitude"
} |
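The dimensionless period function $g(\theta_0)$ from the pendulum discussion above can be evaluated numerically. A small Python sketch (not from the original answer; it assumes the standard closed form of the appendix integral in terms of the complete elliptic integral of the first kind, $T\sqrt{g/l} = 4K(m)$ with $m=\sin^2(\theta_0/2)$, and the standard AGM evaluation of $K$):

```python
from math import pi, sin, sqrt

def agm(a, b):
    """Arithmetic-geometric mean, used to evaluate the elliptic integral K."""
    while abs(a - b) > 1e-15 * a:
        a, b = (a + b) / 2.0, sqrt(a * b)
    return a

def period_factor(theta0):
    """g(theta0) = T*sqrt(g/l), via T*sqrt(g/l) = 4*K(m), m = sin^2(theta0/2),
    and K(m) = pi / (2*AGM(1, sqrt(1 - m)))."""
    m = sin(theta0 / 2.0) ** 2
    return 4.0 * pi / (2.0 * agm(1.0, sqrt(1.0 - m)))

# Small angles: flat and close to 2*pi, as the answer notes; large angles: grows.
print(period_factor(0.01), period_factor(2.0))
```

At small $\theta_0$ the value sits at $2\pi \approx 6.283$, reproducing the linear-pendulum constant, and it grows as $\theta_0$ increases, matching the shape of the curve the answer describes.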
The notion of PAC in approximation algorithms | Question: In computational machine learning, the notion of Probably Approximately Correct means that (generally speaking) we can find (or "learn") with a high probability a function which has "low error".
Is there a way to generalize this idea for approximation algorithms? What I thought of:
Given a language $L \subseteq \{0,1\}^*$, a confidence parameter $\delta\in[0,1]$, an error parameter $\epsilon > 0$, and a distribution $D$ over $L$, the algorithm $A$ is a PAC algorithm for $L$ if with probability at least $1-\delta$ it holds that $\text{ERR}(A,D)\leq \epsilon$, where $\text{ERR}(A,D) := \Pr_{x\sim D}(A \text{ answers wrong on } x)$.
One drawback of this definition is the following: suppose $L$ is a countable infinite set, and let $l_1, l_2,...,l_k,...$ be some enumeration of the elements of $L$. Since $D$ is a probability distribution over $L$, $\sum_{i}D(l_i) = 1$, and hence there must be some index $k$ such that $\sum_{j>k} D(l_j) <\epsilon$.
The algorithm $A$ can be defined as: remember the answers of the first $k$ words. When receiving $w \in \{0,1\}^*$ check if $w$ is saved (and if so answer correctly). Otherwise, answer "0".
Clearly this algorithm is a PAC by the above definition, but not a clever one.
Is there a way to define PAC algorithm for a formal language? Or: can we use the theory of machine learning in the theory of approximation algorithms?
Answer: Approximation algorithms don't make sense here, because here the output of the algorithm is 0 or 1. (0 represents "the input is not in $L$", 1 represents "the input is in $L$".) Approximation algorithms are useful when the output is a continuous variable, as then we can ask for the algorithm to output something that is close to the correct answer, but the concept is not useful when talking about algorithms that output a boolean.
Also, in your proposed definition, once you have fixed an algorithm $A$ and a distribution $D$ over $L$, the condition $ERR(A,D) \le \epsilon$ is either true or false. It's not an event that you can assign a probability to. So, it doesn't make sense to ask it to hold with probability at least $1-\delta$. | {
"domain": "cs.stackexchange",
"id": 12131,
"tags": "complexity-theory, machine-learning, approximation"
} |
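The "memorize a high-probability prefix" algorithm described in the question is easy to make concrete. A Python sketch (the enumeration, the geometric-style distribution, and the membership rule are all made up for the illustration):

```python
def build_memorizer(enumeration, membership, weights, eps):
    """enumeration: words l_1, l_2, ...; weights: their probabilities (sum to 1);
    membership(w): the true answer.  Finds k with tail mass < eps, memorizes
    the first k answers, and answers "0" (False) for everything else."""
    total, k = 0.0, 0
    while 1.0 - total >= eps:          # find k such that sum_{j>k} D(l_j) < eps
        total += weights[k]
        k += 1
    table = {w: membership(w) for w in enumeration[:k]}
    return (lambda w: table.get(w, False)), k

# toy language over a hypothetical enumeration l1, l2, ...
words = [f"l{i}" for i in range(1, 101)]
probs = [2.0 ** -i for i in range(1, 100)] + [2.0 ** -99]   # sums to exactly 1
in_L = lambda w: int(w[1:]) % 2 == 0                        # arbitrary rule
A, k = build_memorizer(words, in_L, probs, eps=0.01)

err = sum(p for w, p in zip(words, probs) if A(w) != in_L(w))
print(k, err)   # err < eps by construction, yet A is "not a clever" algorithm
```

This makes the drawback tangible: the error is below $\epsilon$ purely because the unmemorized tail has small mass, not because anything about $L$ was learned.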
OOP simple contact form | Question: Here is my Code. But I feel that my code isn't really that much object oriented.
There is a small piece of "styling" in it as well, the "message error" thing at the end.
Should that be in my class or should it go somewhere else?
<?php
require 'contact.class.php';
?>
<form action="" method="post">
<?php
if (isset($_POST["submit"])) {
$sendMail = new gw2Mail();
$sendMail->senderName = $_POST['senderName'];
$sendMail->senderEmail = $_POST['senderEmail'];
$sendMail->recipient = $_POST['recipient'];
$sendMail->copy = $_POST['copy'];
$sendMail->subject = $_POST['subject'];
$sendMail->message = $_POST['message'];
$sendMail->sendMail();
}
?>
<table class='ipb_table ipsMemberList'>
<tr class='header' colspan='4'>
<th scope='col' colspan='5'>Send E-mail</th>
</tr>
<tr class='row1'>
<td style='width: 50%'><strong>From:</strong><br />Enter the sender's name in this field.</td>
<td><input type="text" class="input_text" name="senderName" id="senderName" value="<?php if (isset($_POST['senderName'])) { echo $_POST['senderName']; } ?>" size="50" maxlength="125" /></td>
</tr>
<tr class='row2'>
<td style='width: 50%'><strong>From E-mail:</strong><br />Enter the sender's e-mail address in this field.</td>
<td><input type="text" class="input_text" name="senderEmail" id="senderEmail" value="<?php if (isset($_POST['senderEmail'])) { echo $_POST['senderEmail']; } ?>" size="50" maxlength="125" /></td>
</tr>
<tr class='row1'>
<td style='width: 50%'><strong>Recipient:</strong><br />Enter the recipient's e-mail address in this field.</td>
<td><input type="text" class="input_text" name="recipient" id="recipient" value="<?php if (isset($_POST['recipient'])) { echo $_POST['recipient']; } ?>" size="50" maxlength="125" /></td>
</tr>
<tr class='row2'>
<td style='width: 50%'><strong>Carbon Copy:</strong><br />Send a copy to someone else? Enter another e-mail address here. Leave blank for no copy.</td>
<td><input type="text" class="input_text" name="copy" id="copy" value="<?php if (isset($_POST['copy'])) { echo $_POST['copy']; } ?>" size="50" maxlength="125" /></td>
</tr>
<tr class='row1'>
<td style='width: 50%'><strong>Subject:</strong><br />Enter a subject in this field.</td>
<td><input type="text" class="input_text" name="subject" id="subject" value="<?php if (isset($_POST['subject'])) { echo $_POST['subject']; } ?>" size="50" maxlength="50" /></td>
</tr>
<tr class='row2'>
<td colspan='2' style='width: 100%'><textarea style="height: 250px; width: 99%;" name="message" id="message" cols="30" rows="14" virtual wrap="on"><?php if (isset($_POST['message'])) { echo $_POST['message']; } ?></textarea></td>
</tr>
</table>
<input type="submit" name="submit" value="Submit" tabindex="50" class="input_submit" accesskey="s" />
</form>
And this is the contact.class.php:
<?php
class gw2Mail {
var $senderName;
var $senderEmail;
var $recipient;
var $copy;
var $subject;
var $message;
var $bcc;
public function sendMail()
{
if ($this->senderName != "") {
$this->senderName = filter_var($this->senderName, FILTER_SANITIZE_STRING);
if ($this->senderName == "") {
$errors .= '- Please enter a valid name!';
}
} else {
$errors .= '- You forgot to enter a name!<br />';
}
if ($this->senderEmail != "") {
$this->senderEmail = filter_var($this->senderEmail, FILTER_SANITIZE_STRING);
if ($this->senderEmail == "") {
$errors .= '- Please enter a valid Email!';
}
} else {
$errors .= '- You forgot to enter an email!<br />';
}
if ($this->recipient != "") {
$this->recipient = filter_var($this->recipient, FILTER_SANITIZE_STRING);
if ($this->recipient == "") {
$errors .= '- Please enter a valid recipient email!';
}
} else {
$errors .= '- You forgot to enter a recipient email!<br />';
}
if ($this->subject != "") {
$this->subject = filter_var($this->subject, FILTER_SANITIZE_STRING);
if ($this->subject == "") {
$errors .= '- Please enter a valid subject!';
}
} else {
$errors .= '- You forgot to enter a subject!<br />';
}
if ($this->message != "") {
$this->message = filter_var($this->message, FILTER_SANITIZE_STRING);
if ($this->message == "") {
$errors .= '- Please enter a valid message!';
}
} else {
$errors .= '- You forgot to enter a message!<br />';
}
if (!$errors) {
$this->bcc="";
$headers = "From: $this->senderName <$this->senderEmail>";
$headers .= "\r\nCc: $this->copy";
$headers .= "\r\nBcc: $this->bcc\r\n\r\n";
$send_contact=mail("$this->recipient","$this->subject","$this->message","$headers");
} else {
echo '<p class=\'message error\'>';
echo '<font color="#FFFFFF">' . $errors . '</font>';
echo '</p><br />';
}
}
}
?>
Answer: Here are some observations:
For something to be OOP it needs to be somewhat reusable. As always use SOLID principles to achieve this.
Because you are mixing in styling, it breaks some of the rules. Consider returning outputs such as errors as pure text without styling, and let the caller handle presentation. Reason: what happens if you want to log the failure message and send it somewhere internally or to a log file?
Your sendMail() is a jack of all trades: it builds the header, stores the emails, AND checks for errors - this is procedural (start, middle, end) - consider separating it into different functions of the class (your original one, plus a separate class to do validations, which you pass as arguments).
Consider using a constructor. You initialize the object with settings and then reuse the function.
Another point to add: attributes (variables of a class that pertain to the class) should never be accessible from the "main" code or elsewhere, only from the class itself. Use accessor functions like get/set or the magic functions __get/__set instead. Classes are supposed to be encapsulated so that outside code cannot affect them without going through a checker. I'm aware that it's easier to just access them directly, but you defeat the purpose of OOP in that sense.
Lastly, too many if/else blocks make the code too rigid, which is why I suggested the validation class - let that be the class that checks the arguments, rather than the mailer itself. | {
"domain": "codereview.stackexchange",
"id": 6626,
"tags": "php, object-oriented"
} |
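The separation the answer suggests — validation in one class, sending in another, presentation left to the caller — can be sketched language-neutrally. A Python outline rather than PHP, with hypothetical names:

```python
class ContactValidator:
    """Checks the form fields; knows nothing about mail headers or HTML styling."""
    REQUIRED = ("sender_name", "sender_email", "recipient", "subject", "message")

    def errors(self, fields):
        # plain-text messages only; the caller decides how to display or log them
        return ["You forgot to enter a " + name.replace("_", " ") + "!"
                for name in self.REQUIRED if not fields.get(name)]


class Mailer:
    """Only builds headers and sends; fields arrive pre-validated via the constructor."""
    def __init__(self, fields):
        self.fields = fields

    def headers(self):
        return "From: {sender_name} <{sender_email}>".format(**self.fields)


form = {"sender_name": "Ann", "sender_email": "ann@example.com",
        "recipient": "bob@example.com", "subject": "Hi", "message": "Hello"}
problems = ContactValidator().errors(form)
print(problems or Mailer(form).headers())
```

Each class now has a single responsibility, and the styled `message error` markup can live entirely in the page that calls them.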
Formatted output of a phone number from an array of int | Question: I wrote a function that takes an array of 10 integers (from 0 to 9) that returns a string of those numbers as a phone number.
Example:
Kata.CreatePhoneNumber(new int[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 0})
// => returns "(123) 456-7890".
The program is fully working, but I want to make this code shorter and clearer.
class Program
{
static void Main()
{
int[] numbers = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 };
CreatePhoneNumber(numbers); // => returns "(123) 456-7890"
}
public static string CreatePhoneNumber(int[] numbers)
{
return ($"({numbers[0]}{numbers[1]}{numbers[2]}) {numbers[3]}{numbers[4]}{numbers[5]}-{numbers[6]}{numbers[7]}{numbers[8]}{numbers[9]}");
}
}
Answer: Two things I'd do there:
Pull out the parts into local vars
use the range operator
So more like this:
public static string CreatePhoneNumber(int[] numbers)
{
var areaCode = string.Concat(numbers[0..3]);
var middlePart = string.Concat(numbers[3..6]);
var lastPart = string.Concat(numbers[6..]);
return $"({areaCode}) {middlePart}-{lastPart}";
}
This makes the code not shorter (more lines) but much clearer in my eyes. The line that adds the formatting is short, and one can easily see the parentheses and dash getting added. | {
"domain": "codereview.stackexchange",
"id": 43015,
"tags": "c#, formatting"
} |
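The same decomposition reads almost identically in Python, where slicing plays the role of the range operator (illustration only, not part of the original answer):

```python
def create_phone_number(numbers):
    # pull the parts out, then format -- mirrors the C# version above
    digits = "".join(str(n) for n in numbers)
    area_code, middle, last = digits[:3], digits[3:6], digits[6:]
    return f"({area_code}) {middle}-{last}"

print(create_phone_number([1, 2, 3, 4, 5, 6, 7, 8, 9, 0]))  # (123) 456-7890
```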
History/Etymology- Map/[Hash]Map vs map() | Question: Forgive me/delete this question if it doesn't qualify as Computer Science:
I've always wondered about the relationship between Map the data structure and map() the function. I know they are two different things. Why share the name?
Answer: Well, I am only a beginning MSc student in computer science, but if I followed the classes correctly, the purpose of a hash is to address some memory space; let's say this memory space is 10 addresses big. With a hashmap, every value you put in the hashmap gets coupled with a key. That key will first be mapped by a hash function to produce a number within the range of possible addresses in the vector: $ f: A \mapsto B $, with $A$ the collection of possible keys and $B$ the collection of possible addresses. Mathematically speaking, this is a mapping. The value then gets saved at that address. Now it is possible that you want to save more values than the vector allows, or that the hash function is not one-to-one (that is, several keys map to the same address). In that case you will need to make the vector longer and remap every value within the vector, or you rehash the value returned by the hash function, using that same value (usually slightly edited, to avoid all values landing at the same place in the vector) as the parameter, to find a new address that wasn't used before. There are a number of other techniques and methods involved, but that is beside the point.
Long story short: this mapping of keys to addresses is, I believe, the reason it is called a hashmap. | {
"domain": "cs.stackexchange",
"id": 7292,
"tags": "data-structures"
} |
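The answer's picture — a key sent through a hash function to an address, with collisions handled separately — in a toy Python sketch (illustrative only; real hash tables differ in many details):

```python
class ToyHashMap:
    """Maps keys to one of a fixed number of addresses: f: keys -> {0..size-1}."""
    def __init__(self, size=10):
        self.slots = [[] for _ in range(size)]   # buckets absorb collisions

    def _address(self, key):
        return hash(key) % len(self.slots)       # the "mapping" f: A -> B

    def put(self, key, value):
        bucket = self.slots[self._address(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                         # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self.slots[self._address(key)]:
            if k == key:
                return v
        raise KeyError(key)

m = ToyHashMap(size=4)
m.put("a", 1); m.put("b", 2); m.put("a", 3)
print(m.get("a"), m.get("b"))  # 3 2
```

The `_address` method is exactly the mathematical mapping the answer describes — which is one plausible reading of why the structure shares a name with `map()`.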
How to Combine Low and High Frequencies of Two Images in MATLAB? | Question: I would like to combine two images A and B in the following way:
1) I want to take a Fourier transform of both of them
2) For image A I want to apply a weighted filter, which gives more emphasis for low frequencies
3) For image B I want to apply a weighted filter, which gives more emphasis for high frequencies
4) I want to combine these frequencies and take the inverse Fourier transform
Can someone give me any guidelines where I should start (which functions etc.) to do this in Matlab? =) I'm kinda learning about Fourier transform and I want to play around with images. I was hoping if someone could give an example of code how this could be done etc.
Please note that I'm new at this stuff and I'm not very familiar with all the terminology yet. I'm an amateur trying to learn about Fourier transform by doing an experiment with it :)
Thank you for any help! =)
P.S. I would appreciate if someone could give me a minimal code snippet showing me what I need to do =)
Answer: The easiest way doing so would be Laplacian Pyramid.
Yet, it can be done just by using simple Addition operator.
Just add the High Frequency of one image to the Low Frequency of the other.
Keep in mind few things:
Dimensions must be the same.
Otherwise, interpolate to the same dimensions.
It is better to use an HPF which is built from the same LPF used (i.e. $HPF = 1 - LPF$).
At the least, they must have the same cut-off frequency.
Use floating point arithmetic for this procedure. | {
"domain": "dsp.stackexchange",
"id": 3439,
"tags": "image-processing, matlab, discrete-signals, fourier-transform, multi-scale-analysis"
} |
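The "low frequencies of A plus high frequencies of B" recipe can be sketched with NumPy's FFT instead of MATLAB (the same functions exist in MATLAB as `fft2`/`ifft2`; the Gaussian weighting and the cut-off value here are illustrative choices, not from the answer):

```python
import numpy as np

def lowpass_weights(shape, sigma=0.05):
    # Gaussian weight on spatial frequencies: ~1 at low frequency, ~0 at high
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-(fx ** 2 + fy ** 2) / (2.0 * sigma ** 2))

def combine(img_a, img_b, sigma=0.05):
    """Low frequencies of img_a + high frequencies of img_b."""
    w = lowpass_weights(img_a.shape, sigma)   # complementary: w + (1 - w) = 1
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    return np.real(np.fft.ifft2(fa * w + fb * (1.0 - w)))

# sanity check: combining an image with itself must give the image back,
# because the two filters sum to 1 at every frequency
a = np.random.default_rng(0).random((32, 32))
assert np.allclose(combine(a, a), a)
```

Note the dimension requirement from the answer: both inputs must have the same shape before their spectra can be added.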
Relationship between surface area and vapor pressure | Question: How is vapor pressure independent of the surface area and volume of a liquid, although rate of evaporation depends on surface area?
Can someone please help me? I am totally confused.
Answer: The rate of evaporation is proportional to surface area. The rate of condensation is proportional to surface area. The vapor pressure is the equilibrium pressure where the rate of evaporation is equal to the rate of condensation. Since the scaling factor is the same, the vapor pressure is independent of the surface area. | {
"domain": "chemistry.stackexchange",
"id": 6830,
"tags": "vapor-pressure"
} |
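The answer's argument can be illustrated with a toy rate model (purely illustrative constants, not real chemistry): evaporation adds vapor at a rate proportional to area, condensation removes it at a rate proportional to area times pressure, so the equilibrium pressure $k_e/k_c$ does not depend on the area.

```python
def equilibrium_pressure(area, k_e=2.0, k_c=0.5, dt=1e-3, steps=200_000):
    """Integrate dP/dt = k_e*area - k_c*area*P until it settles.
    The area sets how FAST equilibrium is reached, not WHERE it is."""
    p = 0.0
    for _ in range(steps):
        p += dt * area * (k_e - k_c * p)
    return p

print(equilibrium_pressure(1.0), equilibrium_pressure(5.0))  # both -> k_e/k_c = 4.0
```

A larger surface evaporates faster, but it also condenses faster by exactly the same factor, so both runs settle at the same pressure.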
A question about Kruskal extension of Schwarzschild solution | Question: For the well-known Schwarzschild solution, if we do a series of transformation(the details are not important here), we get
$$
ds^2=-\frac{32M^3e^{-r/2M}}{r}dUdV+r^2(d\theta^2+\sin^2\theta\,d\phi^2).\qquad (1)
$$
By making the final transformation $T=(U+V)/2, X=(V-U)/2$, we have
$$
ds^2=\frac{32M^3e^{-r/2M}}{r}(-dT^2+dX^2)+r^2(d\theta^2+\sin^2\theta\,d\phi^2).\qquad (2)
$$
The relation between the old coordinates $(t,r)$ and the new coordinates $(T,X)$ is given by
$$
\left(\frac{r}{2M}-1\right)e^{r/2M}=X^2-T^2,\qquad (3)\\
\frac{t}{2M}=\ln\frac{T+X}{X-T}=2\tanh^{-1}(T/X),\qquad (4)
$$
and in equation $(2)$, $r$ is to be viewed as the function of $X$ and $T$ difined by equation $(3)$.
In Wald's GR, page 154, he commented that
From equation $(3)$ we see that $\nabla_a r=0$ at $X=T=0$, and it is not difficult to verify that the static Killing field $\xi^a$ vanishes there also. Note also that $\nabla_a r$ and $\xi^a$ become collinear along the null lines $X=\pm T$.
I don't understand why we have $\nabla_a r=0$ at $X=T=0$, and also don't understand why $\nabla_a r$ and $\xi^a$ become collinear along the null lines $X=\pm T$. Can someone give an explanation?
Answer: You just compute! Wald is just skipping these steps because they’re “elementary” to work out, and also a good exercise.
First, take the exterior derivative of both sides of equation (3) (use the product rule on the LHS, and simplify… some cancellation will happen) to get
\begin{align}
\frac{re^{r/2M}}{4M^2}dr&=2XdX-2TdT.\tag{@}
\end{align}
So, if we plug in $X=T=0$ then the RHS vanishes, while on the LHS, we note that (3) implies $r=2M$. This is a non-zero coefficient of $\frac{e}{2M}$ on the left, and thus $(dr)|_{X=T=0}=0$.
Next, (4) gives you an equation for $dt$ in terms of $dX$ and $dT$. So, combined with the previous equation for $dr$ in terms of $dX,dT$, you have two equations with two “unknowns”, and you can invert the system of equations to rewrite $dT,dX$ in terms of $dt,dr$. Once you do this calculation, you should be able to prove that in terms of the $(T,X,\theta,\phi)$ coordinate system, the static Killing field $\xi$ is given by
\begin{align}
\xi&=dT(\xi)\frac{\partial}{\partial T}+dX(\xi)\frac{\partial}{\partial X}\\
&=\frac{1}{4M}\left(X\frac{\partial}{\partial T}+ T\frac{\partial}{\partial X}\right).\tag{$*$}
\end{align}
Ok very strictly speaking, we only know that this equality holds in region I of the spacetime, which is the common domain of definition of the two coordinate systems $(T,X,\theta,\phi),(t,r,\theta,\phi)$. But now, we observe that the vector field on the right of $(*)$ is well-defined and actually analytic with respect to the global (up to the usual caveats with spherical coordinates) $(T,X,\theta,\phi)$ coordinates. So, we shall use $(*)$ as our definition of the vector field $\xi$ on the maximally extended spacetime. Since the metric $g$ is also analytic, and we know for sure that $\xi$ is a Killing vector field in region I, it follows by uniqueness of analytic continuation that $\xi$ is a Killing field on the entire maximally extended spacetime (which vanishes if and only if $X=T=0$). I’m sure you could try to argue directly that the RHS of $(*)$ is indeed Killing everywhere, but using this analysis fact is much quicker.
Next, to prove collinearity, we need to use the metric-induced isomorphism. We have
\begin{align}
\begin{cases}
g^{\flat}\left(\frac{\partial}{\partial T}\right)&=g_{TT}\,dT+g_{TX}\,dX+g_{T\theta}\,d\theta
+g_{T\phi}\,d\phi=-\alpha(r)\,dT\\
g^{\flat}\left(\frac{\partial}{\partial X}\right)&=g_{XT}\,dT+g_{XX}\,dX+g_{X\theta}\,d\theta+g_{X\phi}\,d\phi=\alpha(r)\,dX,
\end{cases}
\end{align}
where $\alpha(r):=\frac{32M^3e^{-r/2M}}{r}$. Therefore, applying $g^{\flat}$ to $(*)$ above (i.e calculating $\xi_a$), we get
\begin{align}
g^{\flat}(\xi)&=\frac{\alpha(r)}{4M}\left(-X\,dT+T\,dX\right)\equiv A(r)\left(-X\,dT+T\,dX\right).
\end{align}
Compare this with (@) above, which says $dr=B(r)\left(X\,dX-T\,dT\right)$, for some positive function $B(r)$. Hence, we see that on the null lines $X=\pm T$ (which corresponds to $r=2M$), we have $g^{\flat}(\xi)=\pm\frac{A(2M)}{B(2M)}\,dr$. | {
"domain": "physics.stackexchange",
"id": 92761,
"tags": "general-relativity, black-holes, metric-tensor"
} |
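The differentiation that produces (@) is easy to spot-check numerically (a quick sanity check, not part of Wald's argument): differentiating $(r/2M-1)e^{r/2M}$ with respect to $r$ should give $r e^{r/2M}/4M^2$, which at $r=2M$ is the non-zero coefficient $e/2M$ mentioned in the answer.

```python
from math import exp

def lhs(r, M):
    """(r/2M - 1) * exp(r/2M): the r-dependent side of equation (3)."""
    return (r / (2 * M) - 1) * exp(r / (2 * M))

def claimed(r, M):
    """r * exp(r/2M) / (4 M^2): the coefficient of dr in equation (@)."""
    return r * exp(r / (2 * M)) / (4 * M ** 2)

M, h = 1.0, 1e-6
for r in (0.5, 2.0, 5.0):
    numeric = (lhs(r + h, M) - lhs(r - h, M)) / (2 * h)  # central difference
    print(r, numeric, claimed(r, M))  # at r = 2M = 2.0 both are e/2 ~ 1.359
```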
What's the difference between VCF spec versions 4.1 and 4.2? | Question: What are the key differences between VCF versions 4.1 and 4.2?
It looks like v4.3 contains a changelog (specs available here) but earlier specifications do not.
This biostar post points out one difference: the introduction of Number=R for fields with one value per allele including REF — can anyone enumerate the other changes between these two versions?
Answer: This is easy to check, you can download both specs in .tex format and do diff.
Changes to the v4.2 compared to v4.1:
Information field format: adding source and version as recommended fields.
INFO field can have one value for each possible allele (code R).
For all of the ##INFO, ##FORMAT, ##FILTER, and ##ALT metainformation, extra fields can be included after the default fields.
Alternate base (ALT) can include *: missing due to an upstream deletion.
Quality scores, a sentence removed: High QUAL scores indicate high confidence calls. Although traditionally people use integer phred scores, this field is permitted to be a floating point to enable higher resolution for low confidence calls if desired.
Examples changed a bit. | {
"domain": "bioinformatics.stackexchange",
"id": 42,
"tags": "vcf, htslib, file-formats"
} |
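What `Number=R` means in practice — one value per allele *including* REF — can be shown with a tiny hand-rolled check (the header and record are made up; real code should use a proper parser such as pysam/htslib rather than string splitting):

```python
header = '##INFO=<ID=AD,Number=R,Type=Integer,Description="Allelic depths">'
record = "chr1\t100\t.\tA\tG,T\t50\tPASS\tAD=12,7,3"

# pull Number= out of the meta-information line
fields = dict(f.split("=", 1) for f in header[len("##INFO=<"):-1].split(","))
alts = record.split("\t")[4].split(",")                      # ["G", "T"]
values = record.split("\t")[7].split("=", 1)[1].split(",")   # ["12", "7", "3"]

if fields["Number"] == "R":
    expected = 1 + len(alts)      # one value for REF plus one per ALT
    print(expected, len(values))  # 3 3
```

Under v4.1 there was no code for this pattern, so such fields had to fall back to `Number=.`.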
Remove code duplication inside of a loop preserving performance | Question: I'm coding a 2D collision engine, and I need to merge adjacent axis-aligned bounding boxes depending on a direction (left, right, top, bottom).
The four cases are very similar, except for the if condition, and the push_back argument.
Is there any way I can refactor this code without compromising performance?
vector<AABB> getMergedAABBSLeft(vector<AABB> mSource)
{
vector<AABB> result;
while(!mSource.empty())
{
bool merged{false}; AABB a{mSource.back()}; mSource.pop_back();
for(auto& b : mSource)
if(a.getRight() == b.getRight())
{
result.push_back(getMergedAABBVertically(a, b));
eraseRemove(mSource, b); merged = true; break;
}
if(!merged) result.push_back(a);
}
return result;
}
vector<AABB> getMergedAABBSRight(vector<AABB> mSource)
{
vector<AABB> result;
while(!mSource.empty())
{
bool merged{false}; AABB a{mSource.back()}; mSource.pop_back();
for(auto& b : mSource)
if(a.getLeft() == b.getLeft())
{
result.push_back(getMergedAABBVertically(a, b));
eraseRemove(mSource, b); merged = true; break;
}
if(!merged) result.push_back(a);
}
return result;
}
vector<AABB> getMergedAABBSTop(vector<AABB> mSource)
{
vector<AABB> result;
while(!mSource.empty())
{
bool merged{false}; AABB a{mSource.back()}; mSource.pop_back();
for(auto& b : mSource)
if(a.getBottom() == b.getBottom())
{
result.push_back(getMergedAABBHorizontally(a, b));
eraseRemove(mSource, b); merged = true; break;
}
if(!merged) result.push_back(a);
}
return result;
}
vector<AABB> getMergedAABBSBottom(vector<AABB> mSource)
{
vector<AABB> result;
while(!mSource.empty())
{
bool merged{false}; AABB a{mSource.back()}; mSource.pop_back();
for(auto& b : mSource)
if(a.getTop() == b.getTop())
{
result.push_back(getMergedAABBHorizontally(a, b));
eraseRemove(mSource, b); merged = true; break;
}
if(!merged) result.push_back(a);
}
return result;
}
Answer: I find this line hard to read:
bool merged{false}; AABB a{mSource.back()}; mSource.pop_back();
Please split variables up 1 per line.
Yes the new syntax allows {} for list initialization. But these are not lists so it seems confusing to me (this one is more personal bias so feel free to ignore).
bool merged(false);
AABB a(mSource.back());
mSource.pop_back();
Pass large parameters by const reference if you can.
It saves a copy and you don't seem to need the copy (since you are not modifying the value; note the eraseRemove() call has no effect externally and does not affect the rest of the code).
vector<AABB> getMergedAABBSLeft(vector<AABB> const& mSource)
// ^^^^^^^^
Since your four functions are identical apart from one method call you could write a generic version and pass that method as a parameter:
vector<AABB> getMergedAABBSLeft(vector<AABB> mSource)
{
    return getMergedAABBSGeneric(mSource, &AABB::getRight);
} | {
"domain": "codereview.stackexchange",
"id": 3218,
"tags": "c++, performance, c++11"
} |
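The shape of that generic version, with the comparison factored out, is perhaps easiest to see in a Python sketch (tuple boxes and names are made up for the illustration; the C++ version would take a member-function pointer or a lambda instead of `key`):

```python
def merged_by_key(boxes, key, merge):
    """Mirror of the four C++ loops with the comparison factored out:
    pop a box, look for a partner with the same key, merge or keep it."""
    source, result = list(boxes), []
    while source:
        a = source.pop()
        partner = next((b for b in source if key(a) == key(b)), None)
        if partner is None:
            result.append(a)
        else:
            source.remove(partner)
            result.append(merge(a, partner))
    return result

# boxes as (left, top, right, bottom); merge vertically when right edges match,
# which corresponds to the getMergedAABBSLeft case above
def merge_vertically(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

boxes = [(0, 0, 2, 1), (0, 1, 2, 2), (3, 0, 4, 1)]
out = merged_by_key(boxes, key=lambda box: box[2], merge=merge_vertically)
print(out)  # [(3, 0, 4, 1), (0, 0, 2, 2)]
```

Each of the four direction-specific functions then collapses to a one-line call with a different key and merge function.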
Bound on space for selection algorithm? | Question: There is a well known worst case $O(n)$ selection algorithm to find the $k$'th largest element in an array of integers. It uses a median-of-medians approach to find a good enough pivot, partitions the input array in place and then recursively continues in it's search for the $k$'th largest element.
What if we weren't allowed to touch the input array, how much extra space would be needed in order to find the $k$'th largest element in $O(n)$ time? Could we find the $k$'th largest element in $O(1)$ extra space and still keep the runtime $O(n)$? For example, finding the maximum or minimum element takes $O(n)$ time and $O(1)$ space.
Intuitively, I cannot imagine that we could do better than $O(n)$ space but is there a proof of this?
Can someone point to a reference or come up with an argument why the $\lfloor n/2 \rfloor$'th element would require $O(n)$ space to be found in $O(n)$ time?
Answer: It is an open problem if you can do selection with $O(n)$ time and $O(1)$ extra memory cells without changing the input (see here). But you can come pretty close to this.
Munro and Raman proposed an algorithm for selection that runs in $O(n^{1+\varepsilon})$ time while using only $O(1/\varepsilon)$ extra storage (cells). This algorithm leaves the input unchanged. You can pick any small $\varepsilon>0$.
At its core, Munro and Raman's algorithm works like the classical $O(n)$ algorithm: It maintains a left and a right bound (called filters), which are two elements with known rank. The requested element is contained between the two filters (rank-wise). By picking a good pivot element $p$ we can check all numbers against the filters and $p$. This makes it possible to update the filters and decreases the number of elements left to check (rank-wise). We repeat until we have found the requested element.
What is different to the classical algorithm is the choice of $p$. Let $A(k)$ be the algorithm that solves selection for $\varepsilon=1/k$. The algorithm $A(k)$ divides the array in equally-sized blocks and identifies a block where many elements are, whose ranks are in between the filters (existence by pigeon-hole principle). This block will then be scanned for a good pivot element with help of the algorithm $A(k-1)$. The recursion anchor is the trivial $A(1)$ algorithm. The right block size (and doing the math) gives you running time and space requirements as stated above.
Btw, the algorithms you are looking for, were recently named constant-work-space algorithms.
I am not aware of any lower bound. | {
"domain": "cs.stackexchange",
"id": 379,
"tags": "algorithms, algorithm-analysis, space-complexity, lower-bounds"
} |
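The filter mechanism can be illustrated in miniature. A Python sketch that never modifies the input and keeps only a constant number of extra values — note that its running time is worse than the $O(n^{1+\varepsilon})$ of Munro and Raman's algorithm; it only shows how rank filters shrink the search:

```python
def kth_smallest_readonly(a, k):
    """k-th smallest (0-indexed) of a, read-only, O(1) extra cells.
    lo/hi play the role of the left and right filters: they bound the value
    still being searched for, and each pass tightens one of them."""
    lo, hi = min(a), max(a)
    while True:
        p = next(x for x in a if lo <= x <= hi)      # some pivot between filters
        less = sum(1 for x in a if x < p)
        equal = sum(1 for x in a if x == p)
        if less <= k < less + equal:
            return p
        if k < less:
            hi = max(x for x in a if lo <= x < p)    # tighten the right filter
        else:
            lo = min(x for x in a if p < x <= hi)    # tighten the left filter

data = [9, 1, 8, 1, 5, 7, 3]
results = [kth_smallest_readonly(data, k) for k in range(len(data))]
print(results)  # [1, 1, 3, 5, 7, 8, 9]
```

All counting is done with generators, so only a handful of scalars are held besides the input itself.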
Lagrangian of a 2D double pendulum system with a spring | Question:
In the figure above (please excuse my Picasso drawing skills), we have the general 2D double pendulum system with a slight modification, there's a spring connecting the masses instead of the usual wire.
A few statements on the system:
The wire connecting $m_1$ to the pivot point is massless and has a fixed length $l$.
The spring connecting $m_1$ and $m_2$ is massless, has a constant $k$, an unstretched length $l_0$ and can only extend/contract in the $m_1$-$m_2$ direction, here called $r$.
The angles of $m_1$ and $m_2$ with respect to the $y$-axis are $\theta$ and $\phi$, respectively.
There's no friction involved.
Now, I've considered using $(r, \theta, \phi)$ as my generalized coordinates, however, I'm not so sure if $r$ should really be one of them. The cartesian coordinates would relate to them as
$$ \left \{ \begin{array}{lcl}x_1 = l \sin \theta \\
x_2 = l \sin \theta + r \sin \phi \\
y_1 = l \cos \theta \\
y_2 = l \cos \theta + r \cos \phi
\end{array} \right.$$
I believe that is pretty straight forward. Now, the lagrangian would be
$$ L = T - U = \left \{ \frac{1}{2}m_1 \left(\dot{x}_1^2 + \dot{y}_1^2 \right) - m_1gy_1 \right \} + \left \{ \frac{1}{2}m_2 \left(\dot{x}_2^2 + \dot{y}_2^2 \right) - m_2gy_2 \right \} - \frac{1}{2}k r^2$$
And if we rewrite this lagrangian in terms of our generalized coordinates $(r,\theta, \phi)$ we get, after some algebra,
$$L = \Bigg\{ \frac{1}{2}l^2 \dot \theta^2 \left(m_1 + m_2 \right) + \frac{1}{2}m_2 \dot r^2 + \frac{1}{2}m_2 r^2 \dot \phi^2 \\
- m_2 l \dot r \dot \theta \sin \left( \theta - \phi \right) + m_2 l r \dot \theta \dot \phi \cos \left( \theta - \phi \right) \\
-gl \cos \theta \left( m_1 + m_2\right) - m_2gr \cos \phi - \frac{1}{2} kr^2 \Bigg\}$$
Which is precisely reminiscent of the lagrangian for the general case, as seen in (9) here, with two modifications:
there's a spring of length $r$ connecting the masses instead of another fixed length wire.
there's an additional potential energy $\frac{1}{2}kr^2$ due to the spring.
My question is:
Is $r$ really a generalized coordinate or can it be expressed in terms of the angles alone, and if so how?
Answer: I haven't done the calculation by myself but as far as I can tell you have forgotten to take the derivative of $r$ when you wrote $\dot x_2^2$ since $r$ is a variable (it contracts and stretches) you have to have something like:
$$ \dot x_2= \dot r \sin \phi + \cdots $$
I don't know whether these terms cancel out but my intuition is that they shouldn't.
Think of it this way. If I gave you the $\phi,\theta$ as initial conditions can you tell me what $r$ should be? You obviously cannot because you don't know how much I've stretched the spring and since this is the initial state I can do whatever I want. Thus you must have $r$ as a generalised coordinate. | {
"domain": "physics.stackexchange",
"id": 34598,
"tags": "classical-mechanics, lagrangian-formalism, coordinate-systems, spring"
} |
Separate compilation C++ ROS | Question:
Hello !
I have a main_test.cpp file which include the functions_test.h file whose the definitions are contained in the functions_test.cpp file.
Here the procedure that I followed:
I placed functions_test.h to this location: include/my_package_name/
I placed functions_test.cpp and main_test.cpp to this location: src/
I added in the CMakeLists.txt the following lines:
include_directories(include ${catkin_INCLUDE_DIRS})
add_executable(main_test src/main_test.cpp src/functions_test.cpp)
target_link_libraries(main_test ${catkin_LIBRARIES})
My problem is when I run catkin_make, the prototypes of the functions in the functions_test.h file are successfully seen but not the definitions of the functions inside the functions_test.cpp file and therefore I get an undefined reference error.
I don't know what I missed. If someone can help me, I will be grateful,
lfr
I did some changes, but I still have the same error.
Here my new CMakeLists.txt:
cmake_minimum_required(VERSION 2.8.3)
project(my_package_name)
find_package(catkin REQUIRED COMPONENTS
actionlib
move_base_msgs
roscpp
std_msgs
tf
)
catkin_package(
INCLUDE_DIRS include
LIBRARIES functions_test
)
include_directories(include ${catkin_INCLUDE_DIRS})
add_library(functions_test src/functions_test.cpp)
target_link_libraries(functions_test ${catkin_LIBRARIES})
add_executable(main_test src/main_test.cpp)
target_link_libraries(main_test functions_test)
Here the complete error message:
CMakeFiles/main_test.dir/src/main_test.cpp.o: In function `main':
main_test.cpp:(.text+0x22d): undefined reference to `void nav_api::execute_checkpoints<double, bool>(std::vector<move_base_msgs::MoveBaseGoal_<std::allocator<void> >, std::allocator<move_base_msgs::MoveBaseGoal_<std::allocator<void> > > > (*)(double, bool), double, bool)'
main_test.cpp:(.text+0x2da): undefined reference to `void nav_api::execute_checkpoints<double, double, bool>(std::vector<move_base_msgs::MoveBaseGoal_<std::allocator<void> >, std::allocator<move_base_msgs::MoveBaseGoal_<std::allocator<void> > > > (*)(double, double, bool), double, double, bool)'
main_test.cpp:(.text+0x383): undefined reference to `void nav_api::execute_checkpoints<double, bool>(std::vector<move_base_msgs::MoveBaseGoal_<std::allocator<void> >, std::allocator<move_base_msgs::MoveBaseGoal_<std::allocator<void> > > > (*)(double, bool), double, bool)'
collect2: error: ld returned 1 exit status
make[2]: *** [/home/********/catkin_ws/devel/lib/my_package_name/main_test] Error 1
make[1]: *** [my_package_name/CMakeFiles/main_test.dir/all] Error 2
make: *** [all] Error 2
Invoking "make -j2 -l2" failed
The prototypes of the concerned functions is below:
template <typename P0>
void execute_checkpoints(std::vector <move_base_msgs::MoveBaseGoal>(*order)(P0), P0 p0);
template <typename P1, typename P2>
void execute_checkpoints(std::vector <move_base_msgs::MoveBaseGoal>(*order)(P1, P2), P1 p1, P2 p2);
template <typename P1, typename P2, typename P3>
void execute_checkpoints(std::vector <move_base_msgs::MoveBaseGoal>(*order)(P1, P2, P3), P1 p1, P2 p2, P3 p3);
I want to note that the code works correctly when I directly include functions_test.cpp at the beginning of main_test.cpp. But I know that is a bad solution.
Originally posted by lfr on ROS Answers with karma: 201 on 2016-05-30
Post score: 0
Original comments
Comment by BennyRe on 2016-05-30:
Everything looks good to me. Can you please post your complete CMakeLists.txt?
Comment by lfr on 2016-05-31:
I updated the question (with the new CMakeLists.txt)
Comment by BennyRe on 2016-06-06:
In your question you write that you do add_executable(main_test src/main_test.cpp src/functions_test.cpp) which is correct. In your CMakeLists.txt you don't do this. You write that you updated your file, was it this what you updated?
Comment by lfr on 2016-06-06:
Yes, that is what I updated, because it didn't work and I wanted to try what spmaniato advises in his answer.
Answer:
Hello !
I found my mistake. I forgot that template functions must be defined inside the header file. It works properly now.
Thank you to all of you for helping me.
lfr.
Originally posted by lfr with karma: 201 on 2016-06-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 24760,
"tags": "ros, c++"
} |
Average of logCPM | Question: I am a bit in doubt about how to correctly calculate the mean (average) on the logarithmic scale. I know, e.g., that a division on the normal scale becomes a subtraction on the logarithmic scale, and so forth; that's why I was wondering what the correct way is to calculate the mean (since the mean is calculated with additions and divisions on the normal scale).
So for example, if I have in RNAseq data a group with 3 samples, and have log2 CPM values for each sample. How can I calculate the mean of the group correctly?
Do I first need to get CPM (without log2), calculate the mean and then apply log2? Or can I just ignore the fact that the log2 CPM values are in the log scale, and just calculate the mean as if it was the normal scale (add all up and divide by n)?
Answer: The mean of log-transformed values is the log-transformed geometric mean of the untransformed values (i.e., $log_{2}(geometric~mean)$). This can be useful when the values you want to summarize have vastly different ranges (so one doesn't dominate the average). I would expect that to generally NOT be the case for CPMs of the same gene, so it's unclear to me what the point of that would be. It can also be difficult to interpret these results due to the geometric mean being dependent upon the variance:
> originalValues = rnorm(1000, mean=10, sd=c(rep(1, 500), rep(2, 500)))
> mean(originalValues[1:500])
[1] 9.991464
> mean(originalValues[501:1000])
[1] 10.04407
> log2(mean(originalValues[1:500]))
[1] 3.320696
> log2(mean(originalValues[501:1000]))
[1] 3.328272
> mean(log2(originalValues[1:500]))
[1] 3.313263
> mean(log2(originalValues[501:1000]))
[1] 3.300103
Above I created two groups of 500 values each. Each group has approximately the same mean (10) but with different variance (1 vs 2). As you can see, the second group of 500 values randomly has a slightly higher average. The log2(mean) preserves this difference, but the log2(geometric mean) inverts the relationship such that group 2 now appears to have a lower mean value. This isn't really wrong, it's the difference between mean and geometric mean, but it's something that needs to be kept in mind.
This has implications if you want to compare values between genes, since your comparison metric is then not just dependent upon the mean, but also the variability of the data (such that more variable genes will have systematically lower geometric means).
In short, the geometric mean is useful when values are on different scales (such as CPM values of all genes in a sample, but probably not CPM values of the same gene between samples). If that's not the case, then you're mostly removing interpretability from the results. | {
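The identity the answer relies on, that the arithmetic mean of log2 values equals the log2 of the geometric mean, is easy to verify numerically (a Python sketch with made-up values, mirroring the R session above):

```python
import numpy as np

x = np.array([4.0, 16.0, 64.0])                # toy CPM-like values

geometric_mean = x.prod() ** (1.0 / len(x))    # (4*16*64)^(1/3) = 16
mean_of_logs = np.log2(x).mean()               # (2 + 4 + 6) / 3 = 4

# mean of the log2 values == log2 of the geometric mean
assert np.isclose(mean_of_logs, np.log2(geometric_mean))
# ...and it differs from the log2 of the arithmetic mean (log2(28) ~ 4.81)
assert not np.isclose(mean_of_logs, np.log2(x.mean()))
```

So averaging logCPM values directly yields a (log-scaled) geometric mean, while un-logging, averaging and re-logging yields the arithmetic mean; the two generally differ.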
"domain": "bioinformatics.stackexchange",
"id": 548,
"tags": "rna-seq"
} |
Is acceleration relative? | Question: A while back in my Dynamics & Relativity lectures my lecturer mentioned that an object need not be accelerating relative to anything - he said it makes sense for an object to just be accelerating. Now, to me (and to my supervisor for this course), this sounds a little weird. An object's velocity is relative to the frame you're observing it in/from (right?), so where does this 'relativeness' go when we differentiate?
I am pretty sure that I'm just confused here or that I've somehow misheard/misunderstood the lecturer, so can someone please explain this to me.
Answer: I find the phrase "acceleration need not be relative anything" to be awkward, but I can see where it comes from.
For the moment, let's restrict our consideration to the Galilean relativity (just to keep the math simple). Consider two frames of reference, one ($S$) in which the body is at rest and another ($S'$) in which it moves with velocity $\vec{v'_i} = \vec{u} = u \hat{z}$.
So we have the initial velocity of the body in frame $S$ as $v_i = 0$, and $v' = v + u \hat{z}$
Now assume that the body accelerates from time $t$ at acceleration $\vec{a} = a \hat{z}$, resulting in a velocity in frame $S$ of $\vec{v_f} = a t \hat{z}$.
Compute the final velocity in frame $S'$ as $v'_f = v_f + u \hat{z} = (u + a t)\hat{z}$, and from that the acceleration in the primed frame as $a' = a$.
So the acceleration is the same in all frames (you can check the cases for $a \not\parallel u$ yourself), and it is reasonable to say that accelerations are not relative to anything.
All of this is a consequence of the simple form of the transformation between frames:
$$ \vec{x'} = \vec{x_0} + \vec{u} t $$
$$ t' = t $$
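A quick numerical check of this frame-independence (a sketch that is not part of the original answer; the values are arbitrary):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
u = 5.0                        # relative velocity of frame S' (m/s)
a = 2.0                        # acceleration of the body (m/s^2)

z = 0.5 * a * t**2             # trajectory seen in frame S
zp = z + u * t                 # same trajectory seen in frame S'

# numerical second derivatives (central differences in the interior)
a_S = np.gradient(np.gradient(z, dt), dt)
a_Sp = np.gradient(np.gradient(zp, dt), dt)

# both frames measure the same acceleration at interior points
assert np.allclose(a_S[2:-2], a_Sp[2:-2])
assert np.allclose(a_S[2:-2], a)
```

The boost only adds a term linear in $t$, which vanishes under the second derivative; this is the numerical counterpart of $a' = a$.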
So what about Einsteinian relativity?
Here the transformation between frames is more complicated, and the math is much more complicated resulting in observers in different frames seeing different accelerations, but they will all agree on the acceleration as measured in the body's own frame. In my opinion "the acceleration need not be relative" risks causing unnecessary confusion on these points. The magnitude and direction measured will depend on the frame of the observer, which is often what is meant when people say "it's relative". | {
"domain": "physics.stackexchange",
"id": 100604,
"tags": "special-relativity, acceleration, relativity, inertial-frames, relative-motion"
} |
Complex values for the dispersion relation obtained through an $s$-band only tight binding model for diamond cubic crystal | Question: Any given atom in a diamond cubic lattice (like Si or Ge) has four nearest neighbours at a distance $\sqrt{3}a/4$, $a$ being the lattice constant. The translation vectors to these neighbours can be chosen as
\begin{equation}
(1,1,1)\frac{a}{4}, {\phantom{x}} (1,-1,-1)\frac{a}{4}, {\phantom{x}} (-1,1,-1)\frac{a}{4}, {\phantom{x}} (-1,-1,1)\frac{a}{4},
\end{equation}
To obtain the vectors above I considered the left face-centred site, taking the x-axis positive to the right, the y-axis positive upwards and the z-axis coming out of the screen.
Now, the s-band only dispersion relation given by the tight-binding method is
\begin{equation}
E(\vec{k}) = e_{s} + \Sigma_{\vec{\tau}}\gamma(|\tau|)e^{i\vec{k}\cdot\vec{\tau}}
\end{equation}
where $\tau$ is the vectors described above to the nearest neighbours. This summation will result in
\begin{equation}
E(\vec{k}) = e_{s} +\gamma(|\tau|)(e^{i(k_{x}+k_{y}+k_{z})a/4}+e^{i(k_{x}-k_{y}-k_{z})a/4}+e^{i(-k_{x}+k_{y}-k_{z})a/4}+e^{i(-k_{x}-k_{y}+k_{z})a/4})
\end{equation}
My issue is the following: When you do the same procedure for the face-centred cubic crystal you have 12 nearest neighbours, but the summation can be simplified to an explicitly real expression, depending only on combinations of cosines. In this case the expression seems to always have an imaginary component. Where have I made a mistake?
edit.: the expression multiplying the $\gamma$ factor can be reduced to $$2\cos(k_x\frac{a}{4})\cos(k_y\frac{a}{4})\cos(k_z\frac{a}{4})-2i\sin(k_x\frac{a}{4})\sin(k_y\frac{a}{4})\sin(k_z\frac{a}{4}).$$
Answer: The formula you have given works only if there is a single orbital in the basis of the lattice. Diamond has zinc blende structure, meaning a face-centred cubic (FCC) lattice with a basis of atoms at $[0,0,0]$ and $\left[\frac{1}{4},\frac{1}{4},\frac{1}{4}\right]$ (in Cartesian coordinates and units of the lattice constant). Thus there are two atoms in the basis, and the strategy is instead to solve the matrix equation
$$E(\boldsymbol{k})c_i=\sum_j c_j \sum_{\boldsymbol{R}}\gamma_{ij} \,e^{i\boldsymbol{k}\cdot\boldsymbol{R}}$$
where the sum is over atoms in the basis $i$ and lattice vectors $\boldsymbol{R}$, and $\gamma_{ij}$ is the coupling matrix element between orbital $i$ and $j$ (any solid state physics book should derive something like this). The tight binding approximation then assumes that only the nearest neighbour couplings are non-negligible, in our case given by a constant $\gamma$.
Importantly, note that we use the lattice vectors whose basis contains the nearest neighbours, not the coordinates of the nearest neighbours themselves. The nearest neighbours to the atom at $[0,0,0]$ are correctly as you listed, but we must find the lattice vectors to which they belong. In the FCC lattice, these are
$$[0,0,0],\quad \left[0,-\frac{1}{2},-\frac{1}{2}\right], \quad \left[-\frac{1}{2},0,-\frac{1}{2}\right], \quad \left[-\frac{1}{2},-\frac{1}{2},0\right].$$ In other words, the above lattice vectors plus $\left[\frac{1}{4},\frac{1}{4},\frac{1}{4}\right]$ give back the coordinates you listed. Similarly, we must find the lattice vectors to which the nearest neighbours of the atom at $\left[\frac{1}{4},\frac{1}{4},\frac{1}{4}\right]$ belong; a bit of thought shows that these are minus the above.
Substituting these into the matrix equation shows that the energies of the bands are given by the eigenvalues of the matrix
$$\gamma\begin{pmatrix}
0 & 1 + e^{-i(k_y+k_z)a/2} + e^{-i(k_x+k_z)a/2}+e^{-i(k_x+k_y)a/2} \\ 1 + e^{i(k_y+k_z)a/2} + e^{i(k_x+k_z)a/2}+e^{i(k_x+k_y)a/2} & 0
\end{pmatrix}
$$
where we have ignored any diagonal term (just giving an overall constant). I won't do this, but note that it is Hermitian so the eigenvalues must be real. Note also that there are two different bands deriving from the two atoms in the basis. | {
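As a numerical sanity check (a sketch added here, not derived in the original answer), diagonalising this matrix at an arbitrary $\boldsymbol{k}$ gives two real bands of the form $E = \pm\gamma|f(\boldsymbol{k})|$:

```python
import numpy as np

gamma, a = 1.0, 1.0                     # coupling and lattice constant (arbitrary units)

def bands(kx, ky, kz):
    # off-diagonal element built from the four nearest-neighbour lattice vectors
    f = (1
         + np.exp(-1j * (ky + kz) * a / 2)
         + np.exp(-1j * (kx + kz) * a / 2)
         + np.exp(-1j * (kx + ky) * a / 2))
    H = gamma * np.array([[0.0, f], [np.conj(f), 0.0]])
    return np.linalg.eigvalsh(H)        # Hermitian matrix -> real eigenvalues

E = bands(0.3, -1.1, 0.7)
assert np.isrealobj(E)                  # no spurious imaginary part
assert np.isclose(E[0], -E[1])          # symmetric pair +-gamma*|f(k)|
assert np.allclose(bands(0.0, 0.0, 0.0), [-4.0, 4.0])  # Gamma point: |f| = 4
```

The imaginary part the question worried about ends up only in the phase of the off-diagonal element $f(\boldsymbol{k})$, not in the energies.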
"domain": "physics.stackexchange",
"id": 96754,
"tags": "energy, solid-state-physics, crystals, dispersion, tight-binding"
} |
Index notation with Navier-Stokes equations | Question: This is an index-notation question rather then the NS one:
For incompressible flow and Newtonian fluid, the continuity equation is denoted with:
$$\frac{\partial u_i}{\partial x_i} = 0, $$
which means ${\rm div} u = 0$. Which is fine.
But then in the momentum equation, the divergence in the convection is described via
$$
\frac{\partial u_i}{\partial x_j}u_i,
$$
Which means ${\rm div} uu$. Which is also fine. But why is
$$
{\rm div}\neq\frac{\partial u_i}{\partial x_i}?
$$
So I would be very grateful if somebody could explain this simply (meaning-using simple words and things). Or am I a lost cause?
Answer: The divergence is a vector operator. This simply means that it is a differential operator that acts only on vectors. In this particular case,
$$
{\rm div}\equiv\frac{\partial}{\partial x}\hat{x}+\frac{\partial}{\partial y}\hat{y}+\frac{\partial}{\partial z}\hat{z}
$$
Which, assuming an implicit summation,
$$
{\rm div}\equiv\frac{\partial}{\partial x_i}\hat{x}_i\tag{1}
$$
Since the velocity field is a vector,
$$
\mathbf{u}:=(u,\,v,\,w)=u\hat{x}+v\hat{y}+w\hat{z}\tag{2}
$$
then the divergence of this is the dot product of the velocity and the vector operator:
\begin{align}
{\rm div}\mathbf u&=\left(\frac{\partial}{\partial x}\hat{x}+\frac{\partial}{\partial y}\hat{y}+\frac{\partial}{\partial z}\hat{z}\right)\cdot\left(u\hat{x}+v\hat{y}+w\hat{z}\right)\\
&=\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+\frac{\partial w}{\partial z}\\
&\equiv\frac{\partial u_i}{\partial x_i}
\end{align}
where $i$ is an index (1, 2, 3) that is implicitly summed over.
In the case of the Navier-Stokes equations, we have
$$
\left(\frac{\partial}{\partial t}+\mathbf u\cdot{\rm div}\right)\mathbf u=\frac1\rho\nabla p+\nu\nabla^2\mathbf u + \frac1\rho\mathbf f
$$
where the second term on the left is the one you are concerned with. First, you must take the dot-product of the velocity with the divergence operator (e.g., (2) dotted with (1)):
$$
\mathbf u\cdot{\rm div}=u_j\frac{\partial}{\partial x_j}
$$
which is a new operator. Then you can apply the vector $\mathbf u=u_i$ to this operator to get
$$
\left(\mathbf u\cdot{\rm div}\right)\mathbf u=u_j\frac{\partial u_i}{\partial x_j}
$$ | {
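To make the index bookkeeping concrete, here is a numerical sketch (not part of the original answer; the velocity field is invented for illustration). For $\mathbf u = (xy,\, yz,\, zx)$, the convective term $u_j\,\partial u_i/\partial x_j$ evaluated at $(1, 2, 3)$ should equal the hand-computed $(10, 24, 9)$:

```python
import numpy as np

def u(p):
    x, y, z = p
    return np.array([x * y, y * z, z * x])       # toy velocity field

def convective(f, p, h=1e-6):
    # (u . div) u in index notation: sum over j of u_j * d u_i / d x_j
    J = np.zeros((3, 3))                         # J[i, j] = d u_i / d x_j
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (f(p + e) - f(p - e)) / (2 * h)  # central differences
    return J @ f(p)                              # matrix product does the j-sum

p = np.array([1.0, 2.0, 3.0])
assert np.allclose(convective(u, p), [10.0, 24.0, 9.0], atol=1e-5)
```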
"domain": "physics.stackexchange",
"id": 55053,
"tags": "notation, navier-stokes"
} |
Are benzyl alcohol and o-cresol functional isomers or position isomers? | Question: My textbook categorizes them as functional isomers (phenol and aromatic alcohol) whereas
The above mentioned isomers are given as position isomers on 'chemguide.co.uk'
Answer: The chloride example is interesting because it highlights a fine line separating both.
The distinction between functional and position isomers isn't as simple as the typical examples make it to be - sometimes, what a substituent is attached to is critical to its functional group, so the same sequence of atoms with the same connectivity can behave, chemically speaking, very differently depending on what they are bonded to. And, conversely, position isomers in general don't have exactly the same chemistry either - you know, for instance, that n-butanol, 2-butanol and t-butanol will behave similarly but not identically.
So where do we draw the line? Technically, it's true that both are position isomers. In both cases the same substituent is attached to different positions (unlike, say, acetone and propenol, where the connectivity is clearly different). But it's also true that their chemistry is distinct - aryl compounds, in general, have different reactivity than their equivalent alkyl compounds, which is why we talk about the reactivity of "phenols" and "alcohols" as separate ideas.
So, from your examples, I'd say benzyl alcohol and o-cresol are clearly functional isomers, as their chemistry is quite different. For instance, compare pKa values:
o-cresol: $\mathrm pK_\mathrm a=10.3$
benzyl alcohol: $\mathrm pK_\mathrm a=15.4$
2-ethylphenol (similar to o-cresol): $\mathrm pK_\mathrm a=10.2$
2-phenylethanol (similar to benzyl alcohol): $\mathrm pK_\mathrm a=15.9$
Chloride, on the other hand, is less sensitive to the substrate; but I'd still classify chlorotoluenes and benzyl chloride as functional isomers, since I think they are different enough to merit that distinction, but it's in the twilight zone of vague definitions. | {
"domain": "chemistry.stackexchange",
"id": 8364,
"tags": "organic-chemistry, isomers"
} |
Jumping insect acceleration question | Question: From my book, verbatim:
The froghopper, Philaenus spumarius, is supposedly the best jumper in the animal kingdom. To start a jump, this insect can accelerate at 4.00 $\mathrm{km/s^2}$ over a distance of 2.00 $\mathrm{mm}$ as it straightens its specially adapted "jumping legs." Assume the acceleration is constant. (a) Find the upward velocity with which the insect takes off. (b) In what time interval does it reach this velocity? (c) How high would the insect jump if air resistance were negligible? The actual height it reaches is 70 $\mathrm{cm}$, so air resistance must be a noticeable force on the leaping froghopper.
I don't feel like the book has given me enough information, though I am asked to derive $v_0$, $\Delta t$, and $\Delta y$. I don't think I'm supposed to use the $70\ \mathrm{cm}$ height given at the end of the question, because that is an experimental height and I am looking for the theoretical one. In every kinematic equation I look at I'm missing a variable; the equations can be quickly found here:
http://physics.info/equations/
I don't know if I should assume the bug jumps straight up, but since it has been seen to jump 70 cm high only to move 2 mm over I would imagine so.
I also think that I should include the acceleration due to gravity on the Earth, $-9.81\ \mathrm{m/s^2}$, so the accelerations added would total $4000 + (-9.81) = 3990.19\ \mathrm{m/s^2}$, but that doesn't seem to help.
Any ideas or the solution would be helpful. Thanks in advance
Answer: I believe you already stated all you need to solve the problem. You just need to write it out.
One would normally assume the insect jumps at an angle, but given none is specified, I think it is safe to say it jumps vertically. Given the assumption that the acceleration is constant you can find the final velocity with the following formula (with null initial velocity) found on the webpage you linked:
$$
v_{f}^{2}=2\cdot a\cdot\Delta x
$$
The value of $\Delta x$ is the 2 mm specified (convert it to meters so that you end up with SI units). Just as you crouch a little and then push against the floor, rising as your legs extend before finally leaving the ground, so does the insect, but over only 2 mm.
Take care to use the correct units. Just make sure that they're the same on both sides of the equation.
With the final velocity calculated you can find the time it takes the insect to reach it knowing it grows linearly with time (since the acceleration is constant), therefore:
$$v_f=(2a\Delta x)^{\frac{1}{2}}=0+a\cdot t$$
To find the maximum theoretical height you can use kinematics equations, or you can just use the conservation of energy since we ignore the air drag. If we consider the potential energy to be 0 at the ground level then:
$$\frac{1}{2}mv^{2} + 0 = 0 + mgh_{max}$$
You can solve for the height by yourself. Note that the final velocity before is the initial velocity now.
On the question of what acceleration you should use: you're not wrong going with the difference between the given acceleration and gravity, but it's not entirely wrong to neglect gravity either. If you compare the two you'll see that $4000\ \mathrm{m/s^2}$ is much greater than $9.81\ \mathrm{m/s^2}$ (gravity), so you will get very similar values either way.
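Plugging the given numbers into these formulas (a worked sketch; these values are not spelled out in the original answer):

```python
import math

a = 4000.0     # take-off acceleration (m/s^2), i.e. 4.00 km/s^2
dx = 2.0e-3    # distance over which the legs straighten (m)
g = 9.81       # gravitational acceleration (m/s^2)

v_f = math.sqrt(2 * a * dx)    # (a) take-off speed: 4.00 m/s
t = v_f / a                    # (b) time to reach it: 1.00 ms
h = v_f ** 2 / (2 * g)         # (c) height with no air drag: ~0.815 m

assert math.isclose(v_f, 4.0)
assert math.isclose(t, 1.0e-3)
assert round(h, 3) == 0.815    # vs the observed 0.70 m, so drag is noticeable
```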
I hope you found this helpful. | {
"domain": "physics.stackexchange",
"id": 19112,
"tags": "homework-and-exercises, kinematics"
} |
getMesh from gazebo | Question:
Hi,
We are interested in retrieving the vertices and triangles of a loaded model directly out of Gazebo (without loading the mesh from a file separately). In fact, we need to get the mesh data knowing the GeomID (for instance the GeomIDs that are in contact).
What we did is to start a simple_world.launch and run a basic package to access the model data. Here is the code :
int main(int argc, char **argv)
{
ros::init(argc,argv,"gazebo_ros_getmesh",ros::init_options::NoSigintHandler);
std::vector<gazebo::Model*> models;
models = gazebo::World::Instance()->GetModels();
if(models.size()>0)
ROS_INFO("ModelName:%s",models[0]->GetName().c_str());
else
ROS_INFO("NoModel");
}
We always get "NoModel".
We thought it would be possible to access the mesh data via some service that first retrieves the models (gazebo::World::Instance()->GetModels()) and then the Body, the Geom and finally the mesh data.
It appears to us that any call to such functions must be made inside the gazebo_ros node and cannot be made from an external package of our own.
Is there another means of accessing the mesh data?
Originally posted by GuiHome on ROS Answers with karma: 242 on 2011-10-06
Post score: 0
Answer:
We finally found our way through this, by creating a gazebo_plugin and attaching it to the world to get access to the models->body->geom->ogrevisual->mesh
this helped us a lot :
http://answers.ros.org/question/1919/gazebo-tutorials-for-creating-a-plugin
Any other, simpler means of accessing the triangles of the mesh (not the geoms) that are in contact during a collision would be appreciated.
Originally posted by GuiHome with karma: 242 on 2011-10-07
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by CR7 on 2020-06-22:
Can you please help me with the plugin. unable to proceed after fetching the models. | {
"domain": "robotics.stackexchange",
"id": 6886,
"tags": "ros, gazebo, service, mesh"
} |
Moving WordPress database and files to new host | Question: I've written a script in an attempt to automate moving WordPress sites from our old hosts to our new Docker containers.
All feedback and input appreciated!
#!/bin/bash
# Generate private public ssh-keys
if [ ! -f "$HOME/.ssh/id_rsa" ] && [ ! -f "$HOME/.ssh/id_rsa.pub" ]; then
ssh-keygen -b 4096
fi
read -p "Enter your ssh host:" ssh_host
read -p "Enter your ssh username:" ssh_user
read -p "Enter your remote ssh port:" ssh_port
# REMOTE SSH
if [ -z "$ssh_user" ]; then
echo -e "Config: SSH Username missing"
echo $ssh_user
exit
fi
if [ -z "$ssh_host" ]; then
echo -e "Config: SSH Host is missing"
exit
fi
if [ -z "$ssh_port" ]; then
echo -e "Config: SSH Port missing"
exit
fi
ssh-copy-id $ssh_user@$ssh_host -p $ssh_port -i "$HOME/.ssh/id_rsa" &>/dev/null
# REMOTE DATABASE
read -p "Enter path for wp-config.php:" wp_config_path
# INIT
db_details="cat $wp_config_path/wp-config.php"
scp -P $ssh_port -r $ssh_user@$ssh_host:"$wp_config_path/*" .
# Might be dangerous if file contains malicious input in the values?
eval $(awk -F "[()']" '/^define\(/{printf "%s='\''%s'\''\n", $3, $5;}' < wp-config.php | grep DB_*)
if [ -z "$DB_USER" ]; then
echo -e "Config: RDB user missing"
exit
fi
if [ -z "$DB_PASSWORD" ]; then
echo -e "Config: RDB password missing"
exit
fi
if [ -z "$DB_NAME" ]; then
echo -e "Config: RDB name missing"
exit
fi
dump="mysqldump -u $DB_USER --password='$DB_PASSWORD' $DB_NAME"
read -p "Enter your LOCAL database user:" local_db_user
read -p "Enter your LOCAL database name:" local_db_name
read -s -p "Enter your local database password:" local_db_password
printf "\033c"
ssh $ssh_user@$ssh_host -p $ssh_port $dump | mysql -u $local_db_user --password=$local_db_password $local_db_name
sed -i -e "s;\(define([[:space:]]*'DB_USER',[[:space:]]*\)\(.*\)\()\;\);\1'$local_db_user'\3;g" wp-config.php
sed -i -e "s;\(define([[:space:]]*'DB_PASSWORD',[[:space:]]*\)\(.*\)\()\;\);\1'$local_db_password'\3;g" wp-config.php
sed -i -e "s;\(define([[:space:]]*'DB_NAME',[[:space:]]*\)\(.*\)\()\;\);\1'$local_db_name'\3;g" wp-config.php
echo -e "Database imported to: $local_db_name"
echo -e "Cleaning up..."
# CLEANING UP
# This is bad, and should NEVER be used on hosts with active pub/private key authentication.
ssh $ssh_user@$ssh_host -p $ssh_port 'echo "" > $HOME/.ssh/authorized_keys'
ssh_user=
ssh_host=
ssh_port=
DB_USER=
DB_PASSWORD=
DB_NAME=
local_db_name=
local_db_password=
local_db_user=
wp_config_path=
echo -e "Exiting"
Answer: Make input validation more user-friendly
You ask to enter host, user and port, and if one of them is empty you exit.
If I entered a host and a user and accidentally pressed enter for port (empty), I would be very unhappy with the script.
It would be better to put the prompting logic in a loop,
and repeat forever until a non-empty value is entered.
Ideally in a function, to reduce code duplication.
Exit with non-zero code on error
When exiting with error,
it's recommended to use a non-zero exit code.
~ is the same as $HOME
I prefer writing ~ because it's shorter.
Use printf instead of echo -e
The flags of echo are not portable.
To get the behavior of echo -e,
I suggest to make it a habit to use printf instead.
Replace awk + grep with just awk
In this command you use a combination of awk and grep:
eval $(awk -F "[()']" '/^define\(/{printf "%s='\''%s'\''\n", $3, $5;}' < wp-config.php | grep DB_*)
You can do what grep does here in awk,
which will be more efficient.
The difference will be negligible in this example,
but it's good to make it a habit to avoid additional processes when easily possible.
Replace multiple sed calls when one is enough
This code rewrites the same file 3 times:
sed -i -e "s;\(define([[:space:]]*'DB_USER',[[:space:]]*\)\(.*\)\()\;\);\1'$local_db_user'\3;g" wp-config.php
sed -i -e "s;\(define([[:space:]]*'DB_PASSWORD',[[:space:]]*\)\(.*\)\()\;\);\1'$local_db_password'\3;g" wp-config.php
sed -i -e "s;\(define([[:space:]]*'DB_NAME',[[:space:]]*\)\(.*\)\()\;\);\1'$local_db_name'\3;g" wp-config.php
One sed would be enough, using multiple -e parameters.
Pointless cleaning up
Resetting the variables at the end of the script seem pointless.
This is only useful if you source this script, which seems unlikely.
If you run this script (as opposed to sourcing it),
then the variables you defined or modified will not be visible after the script has completed. | {
"domain": "codereview.stackexchange",
"id": 21614,
"tags": "mysql, bash, wordpress, ssh"
} |
How to merge more than two samples in Seurat? | Question: I would like to merge more than two samples in Seurat, and MergeSeurat can only merge two samples. So what should I do now? The screenshot shows my script.
Answer: You have fed arguments to the MergeSeurat() function that it does not expect. In terms of objects, MergeSeurat() accepts only 2 arguments, object1 and object2; please run ?MergeSeurat and see for yourself.
Seurat is probably the best documented single cell package, your exact question has its own dedicated vignette: https://satijalab.org/seurat/v3.0/merge_vignette.html
As the vignette above is for Seurat v3 and in case you would like to use Seurat v2:
You can create a named list of Seurat objects to be merged and then use reduce() of the purrr package.
# create a named list of seurat objects to be merged
# code not shown
# optional but probably a good idea
# rename cells using object names as prefix
for (i in names(seurat_object_list)) {
seurat_object_list[[i]] <- RenameCells(seurat_object_list[[i]],
add.cell.id = i)
}
# merge all the objects in the list
merged_combined <- reduce(seurat_object_list,
MergeSeurat,
do.normalize = FALSE) | {
"domain": "bioinformatics.stackexchange",
"id": 2533,
"tags": "rna-seq, seurat, single-cell"
} |
How to learn Machine Learning | Question: I want to get into machine learning. I've been in information security for the last 10 years, so I have an IT background.
Where is the best place to start:
Can anyone recommend a good book? And also a platform I can use to practice (preferably free)?
Also if there are any online courses someone could recommend that would be great.
I looked into AWS's offering of machine learning but that is not included in the free tier.
Any help/advice would be much appreciated.
Thank You.
Answer:
Online Course: Andrew Ng, Machine Learning Course from Coursera.
Book: Tom Mitchell, Machine Learning, McGraw-Hill, 1997. | {
"domain": "datascience.stackexchange",
"id": 3202,
"tags": "machine-learning"
} |
Meaning of negative emf in the context of Kirchhoff’s Voltage Law | Question: I’m wondering what it means when the emf of a battery is calculated to be a negative value through Kirchhoff’s Voltage Law. This is the problem:
As you can see, we’re given the currents, and we can also see that the two batteries are in series. Therefore, the emf of the center battery should be positive as well. However, the calculations depicted lead to a negative value.
Why does this happen, and what does this mean? Normally, I’d assume it is because this battery’s voltage opposes that of the other battery, but that is evidently not the case here.
Answer: As @Steeven points out, the battery polarity is actually the reverse of that shown.
Just like loop currents in loop analysis are generally initially unknown, the center battery emf here is an unknown. For loop currents a direction is initially assumed. If after solving the loop equations a loop current turns out to be negative, it simply meant the assumed direction of the current was the reverse of the actual direction.
The same concept applies here, but in this case the polarity of the center battery was assumed to be as drawn. After doing the loop equation you found the assumed polarity of the center battery to be the opposite of what it actually is.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 61838,
"tags": "electric-circuits, electric-current, voltage, batteries"
} |
Voltage and Current in transformers | Question: In transformers, the ratio of the voltages equals the ratio of the turns - so double the output coil's turns and the output voltage doubles. Then, in order to conserve energy, current halves.
This makes perfect sense in terms of $\mathrm{P=VI}$, but what happened to $\mathrm{V=IR}$? Doubling voltage and halving the current seems to completely contradict this basic law. That is, of course, unless the resistance in the output circuit changes, with R proportional to $\mathrm{V^2}$ - but I don't see how this is possible.
So how can a transformer obey both laws? Can resistance change or am I missing something else?
Answer: There is a well-known transformation law for the effective load seen through a transformer.
Let $R_o$ be the load in the output circuit.
$V_o = I_o R_o$
Assuming all power is transferred into the output circuit,
$V_o I_o = V_i I_i$
It then follows simply that
$V_i / I_i = (V_i / V_o)^2 R_o$
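Numerically (a quick sketch with invented values), both $P=VI$ and $V=IR$ hold at once, because the load the source "sees" is transformed:

```python
# Ideal transformer, turns ratio n = N_out / N_in = 2 (invented values).
n = 2.0
R_o = 8.0              # ohm, actual load in the output circuit
V_i = 10.0             # V, input voltage

V_o = n * V_i          # voltage doubles with the turns
I_o = V_o / R_o        # V = I R still holds in the output circuit
I_i = V_o * I_o / V_i  # energy conservation: V_i I_i = V_o I_o

R_eff = V_i / I_i      # the load as seen from the input side
print(V_o, I_o, I_i, R_eff)   # 20.0 2.5 5.0 2.0

# Matches the transformation law derived above: R_eff = (V_i/V_o)^2 R_o
assert abs(R_eff - (V_i / V_o) ** 2 * R_o) < 1e-12
```

Note that the output current (2.5 A) is indeed half the input current (5 A), yet Ohm's law is satisfied separately in each circuit, because the input side sees 2 ohm, not the physical 8 ohm load.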
This is the effective load seen by the input circuit. | {
"domain": "physics.stackexchange",
"id": 29516,
"tags": "electric-circuits, electric-current, electrical-resistance, voltage, power"
} |
Resilient & Stable TCP Server Polling | Question: I am looking for feedback to perfect my code developed for WPF in terms of speed, stability and resiliency. My code is supposed to handle synchronous status polling as well as asynchronous Commands to numerous TCP servers (more than 20). I am using a BlockingCollection to funnel these Polls & Commands and using the APM Pattern for Socket Communication. Please review.
ManualResetEvent connect = new ManualResetEvent(false);
ManualResetEvent send = new ManualResetEvent(false);
ManualResetEvent receive = new ManualResetEvent(false);
BlockingCollection<string[]> cmd_Queue = new BlockingCollection<string[]>();
...
Task.Run(() => TCP_Comms());
...
void TCP_Comms()
{
while (true)
{
while (wifi_state)
{
string[] frame = cmd_Queue.Take(); // string[] { IP, Command }
Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
try
{
client.BeginConnect(frame[0], port, new AsyncCallback(Connect_Callback), client);
if (!connect.WaitOne(timeout_Millis)) //1000ms
{
Debug("Connection timeout");
}
else
{
data = Encoding.ASCII.GetBytes(frame[1]);
client.BeginSend(data, 0, data.Length, SocketFlags.None, new AsyncCallback(Send_Callback), client);
if (!send.WaitOne(timeout_Millis))
{
Debug("Send timeout");
}
else
{
var response = new byte[client.ReceiveBufferSize];
client.BeginReceive(response, 0, response.Length, SocketFlags.None, new AsyncCallback(Receive_Callback), client);
if (!receive.WaitOne(timeout_Millis))
{
Debug("Receive timeout");
}
else
{
string response_data = Encoding.ASCII.GetString(response, 0, response.Length);
Debug("Response Received: " + response_data);
}
}
}
}
catch (SocketException ex)
{
Debug("Comms Error: " + ex.Message);
}
connect.Reset();
send.Reset();
receive.Reset();
client.Close();
}
}
}
void Connect_Callback(IAsyncResult ar)
{
try
{
var client = (Socket)ar.AsyncState;
client.EndConnect(ar);
connect.Set();
}
catch (Exception ex)
{
Debug("Connect Error: " + ex.Message);
}
}
void Send_Callback(IAsyncResult ar)
{
try
{
var client = (Socket)ar.AsyncState;
client.EndSend(ar);
Debug("Send Success");
send.Set();
}
catch (Exception ex)
{
Debug("Send Error: " + ex.Message);
}
}
void Receive_Callback(IAsyncResult ar)
{
try
{
var client = (Socket)ar.AsyncState;
client.EndReceive(ar);
Debug("Receive Success");
receive.Set();
}
catch (Exception ex)
{
Debug("Receive Error: " + ex.Message);
}
}
Answer: The APM Pattern is old school. I would suggest using the TAP extensions that are built in now. Can still use Task.Run as it takes a Func Task as one of the method overloads.
I would also change the timeout to be a TimeSpan; it is easier to read and change than having to think in milliseconds.
private TimeSpan timeout = TimeSpan.FromSeconds(1);
while (true) is always a code smell for me. I don't know how this is hosted, but if it is ever converted to run as a service, services have a shutdown event. What I would recommend is changing the method signature of TCP_Comms to
async Task TCP_Comms(CancellationToken cancellationToken)
and instead of while (true) have the loop check the CancellationToken:
while (!cancellationToken.IsCancellationRequested)
If right now you don't need a way to break out of the loop, then when calling Task.Run you can just pass in a default CancellationToken:
Task.Run(() => TCP_Comms(CancellationToken.None));
This leaves the option of later replacing CancellationToken.None with a token that does get cancelled when the app is shutting down.
There is a lot of code in the loop. I would recommend breaking it out into smaller methods or local functions, depending on the version of C# you are using.
For example, the connect/send/receive logic can each be broken out into its own block of code, like the following. These are local functions, so they have access to the cancellationToken and client socket variables; if you want methods instead, you would need to pass those along.
async Task<bool> Connect(string host, int port)
{
// using Task.Delay to have a timeout on the Connection
var completedTask = await Task.WhenAny(
client.ConnectAsync(host, port),
Task.Delay(timeout, cancellationToken));
// await here to throw any exception this task might have been completed with
try
{
await completedTask;
}
catch (Exception ex)
{
Debug("Connect Error: " + ex.Message);
return false;
}
// timeout hit first
if (!client.Connected)
{
Debug("Connection timeout");
client.Close();
return false;
}
return true;
}
async Task<bool> Send(byte[] command)
{
// Create a CancellationTokenSource to timeout the sendasync
using var tokenSource = new CancellationTokenSource(timeout);
try
{
await client.SendAsync(command, SocketFlags.None, tokenSource.Token);
}
catch (TaskCanceledException)
{
Debug("Send timeout");
return false;
}
catch (Exception ex)
{
Debug("Send Error: " + ex.Message);
return false;
}
return true;
}
async Task<string> Receive(int bufferSize)
{
var buffer = new byte[bufferSize];
// Create a CancellationTokenSource to timeout the receiveasync
using var tokenSource = new CancellationTokenSource(timeout);
try
{
await client.ReceiveAsync(buffer, SocketFlags.None, tokenSource.Token);
var response = Encoding.ASCII.GetString(buffer, 0, buffer.Length);
Debug("Response Received: " + response);
return response;
}
catch (TaskCanceledException)
{
Debug("Receive timeout");
return null;
}
catch (Exception ex)
{
Debug("Receive Error: " + ex.Message);
return null;
}
}
Now each step that happens in the code has its own set of code and logging.
That makes the main inner loop look similar to:
while (wifi_state)
{
string[] frame = cmd_Queue.Take(); // string[] { IP, Command }
Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
try
{
if (await Connect(frame[0], port))
{
var data = Encoding.ASCII.GetBytes(frame[1]);
if (await Send(data))
{
var response = await Receive(client.ReceiveBufferSize);
}
}
}
catch (SocketException ex)
{
Debug("Comms Error: " + ex.Message);
}
client.Close();
}
Warning: I didn't run or test this code, as I don't have all your code or a server set up to send and receive messages. It is just an example of how using TAP and restructuring the code into smaller functions/methods will make it easier to read, more up-to-date, and, I personally feel, easier to maintain by anyone coming afterwards.
"domain": "codereview.stackexchange",
"id": 42713,
"tags": "c#, asynchronous, wpf, socket, tcp"
} |
In variational autoencoders, why do people use MSE for the loss? | Question: In VAEs, we try to maximize the ELBO = $\mathbb{E}_q [\log\ p(x|z)] + D_{KL}(q(z \mid x), p(z))$, but I see that many implement the first term as the MSE of the image and its reconstruction. Here's a paper (section 5) that seems to do that: Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse (2019) by James Lucas et al. Is this mathematically sound?
Answer: If $p(x|z) \sim \mathcal{N}(f(z), I)$, then
\begin{align}
\log\ p(x|z)
&\sim \log\ \exp(-(x-f(z))^2) \\
&\sim -(x-f(z))^2 \\
&= -(x-\hat{x})^2,
\end{align}
where $\hat{x}$, the reconstructed image, is just the distribution mean $f(z)$.
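This can be checked numerically: for a unit-variance Gaussian decoder, the log-likelihood and the (negated, halved) squared error differ only by a constant that does not depend on the reconstruction, so maximizing one is minimizing the other (a sketch using only the standard library):

```python
import math
import random

random.seed(0)
d = 5                                           # dimensionality of x
x = [random.gauss(0, 1) for _ in range(d)]      # "image"
x_hat = [random.gauss(0, 1) for _ in range(d)]  # decoder mean f(z)

sse = sum((a - b) ** 2 for a, b in zip(x, x_hat))

# log N(x; x_hat, I) summed over the d dimensions
log_p = -0.5 * sse - 0.5 * d * math.log(2 * math.pi)

# The gap between the log-likelihood and -SSE/2 is a constant,
# independent of x_hat, so it can be dropped from the training loss.
constant = -0.5 * d * math.log(2 * math.pi)
assert math.isclose(log_p - (-0.5 * sse), constant)
```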
It also makes sense to use the distribution mean when using the decoder (vs. just when training), as it is the one with the highest pdf value. So, the decoder produces a distribution from which we take the mean as our result. | {
"domain": "ai.stackexchange",
"id": 2776,
"tags": "objective-functions, autoencoders, variational-autoencoder, mean-squared-error, evidence-lower-bound"
} |
Ros NXT question | Question:
Hi, I am very new to ROS and I want to connect an NXT robot.
My question is: do I need to install ROS and ROS-NXT, or is ROS-NXT-Electric enough?
I'm using Ubuntu 12.04 on a VM.
Thanks for your time.
Originally posted by Kzr on ROS Answers with karma: 1 on 2015-09-24
Post score: 0
Answer:
There hasn't been NXT support for a long time, but you can certainly use it with Electric. I would recommend, however, that you first try to use some simulated robots in Gazebo. The PR2 robot's tutorials are rather helpful.
http://wiki.ros.org/pr2_gazebo
Originally posted by allenh1 with karma: 3055 on 2015-09-24
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 22704,
"tags": "nxt, ros-electric"
} |
prove that the unique language $A$ such that $AB$ is context free for all languages B is the empty set | Question:
Prove that the unique language $A\subseteq \Sigma^*$ such that $AB$ is context-free for all languages $B \subseteq \Sigma^*$ is the empty set.
If $A$ is not the empty set, there should be a way to construct a language $B$ so that $AB$ is not context-free. I'm not sure how to generalize this for arbitrary nonempty languages $A$, however. The language $\{0^n 1^n 0^n : n\in\mathbb{N}\}$ is known to be non-context-free. Also, removing finitely many strings from a non-context-free language results in a non-context-free language.
Edit: perhaps modifying the language given will work.
Answer: If $AB$ is context-free, so should be the set of all its suffixes. Those include the suffixes of $B$, and if $B$ is complicated enough prefixing $A$ does not change the complexity of the set of suffixes.
Your solution $B = \{0^n1^n0^n \mid \dots \}$ seems to work, but if $A$ ends in $0^*$ the reasoning becomes a little complicated. I would suggest $B' = \{10^n1^n0^n \mid \dots \}$. | {
"domain": "cs.stackexchange",
"id": 20299,
"tags": "formal-languages, context-free, formal-grammars"
} |
Refactoring a FTPHelper Class | Question: I've written a FTPHelper class which should help to teach me more about code structure.
I don't know what I don't know so I would really value feedback on how I can be laying out and thinking about my programming better!
public class FtpHelper : BaseHelper
{
public FtpHelper(string ftpHostname, string ftpUsername, string ftpPassword)
{
Hostname = ftpHostname;
Username = ftpUsername;
Password = ftpPassword;
}
private string Hostname { get; set; }
private string Username { get; set; }
private string Password { get; set; }
public void UploadFilesinFolder(string sourcePath, string destinationPath, string fileType = "*.*")
{
PostEvent("Destination Path: " + destinationPath, FtpEventArgs.ExceptionLevel.Debug);
if (String.IsNullOrEmpty(destinationPath)) throw new Exception("No files in destination folder or destination folder was not specified");
foreach (var file in Directory.GetFiles(sourcePath, fileType))
{
UploadFile(file, destinationPath);
}
}
/// <summary>
/// Check if a directory exists.
/// </summary>
/// <param name="directory"></param>
/// <returns></returns>
public bool DirectoryExists(string directory)
{
// todo: Check if directory var has a trailing '/', if not, add it. Otherwise false positives are thrown
if (String.IsNullOrEmpty(directory))
throw new Exception("No directory was specified to check for");
var request = (FtpWebRequest)WebRequest.Create(directory);
request.Method = WebRequestMethods.Ftp.ListDirectory;
request.Credentials = new NetworkCredential(Username, Password);
try
{
using (request.GetResponse())
{
return true;
}
}
catch (WebException)
{
return false;
}
}
public FtpStatusCode CreateDirectory(string destination)
{
var folderRequest = WebRequest.Create(destination);
folderRequest.Credentials = new NetworkCredential(Username, Password);
folderRequest.Method = WebRequestMethods.Ftp.MakeDirectory;
try
{
using (var resp = (FtpWebResponse)folderRequest.GetResponse())
{
return resp.StatusCode;
}
}
catch (Exception ex)
{
throw new Exception("Unable to create directory " + destination + " Details:" + ex.Message);
}
}
private FtpStatusCode ProcessFile(string source, string destination)
{
if (String.IsNullOrEmpty(source))
throw new Exception("No source specified, cannot process file");
if(String.IsNullOrEmpty(destination))
throw new Exception("No destination specified, cannot process source: " + source);
var sourceFile = WebHelper.AppendPaths(destination, Path.GetFileName(source));
PostEvent("Attempting to upload: " + sourceFile, BaseExceptionEventArgs.ExceptionLevel.Debug);
var request = WebRequest.Create(sourceFile);
request.Credentials = new NetworkCredential(Username, Password);
request.Method = WebRequestMethods.Ftp.UploadFile;
// todo: split create directory and upload file into two areas
using (var resp = (FtpWebResponse)request.GetResponse())
{
return resp.StatusCode;
}
}
public void UploadFile(string sourceFile, string destinationPath)
{
if (String.IsNullOrEmpty(destinationPath))
throw new Exception("Empty Destination Path");
if(string.IsNullOrEmpty(sourceFile))
throw new Exception("No source file specified");
try
{
PostEvent("Destination Path: " + destinationPath, BaseExceptionEventArgs.ExceptionLevel.Debug);
var destination = "ftp://" + Hostname + destinationPath;
PostEvent("Checking if exists: " + destination, BaseExceptionEventArgs.ExceptionLevel.Debug);
// check if destination directory exists and if not create it
if (!DirectoryExists(destination))
{
PostEvent("Attempting to create directory: " + destination, BaseExceptionEventArgs.ExceptionLevel.Debug);
var directoryStatus = CreateDirectory(destination);
PostEvent("FTP Response: " + directoryStatus, BaseExceptionEventArgs.ExceptionLevel.Debug);
}
else
{
PostEvent("Directory already exists: " + destination, BaseExceptionEventArgs.ExceptionLevel.Debug);
}
// upload file
PostEvent("Attempting to upload " + sourceFile + " to " + destinationPath, BaseExceptionEventArgs.ExceptionLevel.Debug);
var fileStatus = ProcessFile(sourceFile, destinationPath);
PostEvent("FTP Response: " + fileStatus, BaseExceptionEventArgs.ExceptionLevel.Info);
}
catch (Exception ex)
{
PostEvent("Error when uploading file: " + sourceFile, BaseExceptionEventArgs.ExceptionLevel.Error, ex);
}
}
}
Base Helper Class
public class BaseHelper
{
private EventHandler<BaseExceptionEventArgs> _onEvent;
public event EventHandler<BaseExceptionEventArgs> OnEventHandler
{
add { _onEvent += value; }
remove { _onEvent -= value; }
}
public void PostEvent(string message, BaseExceptionEventArgs.ExceptionLevel exceptionLevel, Exception exception = null)
{
if (_onEvent == null) return;
if (exception == null)
{
var e = new BaseExceptionEventArgs(message, exceptionLevel);
_onEvent(this, e);
}
else
{
var e = new FtpEventArgs(message, exceptionLevel, exception);
_onEvent(this, e);
}
}
}
Answer: I would put the guard clause first - before calling the PostEvent base class method:
if (String.IsNullOrEmpty(destinationPath)) throw new Exception("No files in destination folder or destination folder was not specified");
Now that would involve less horizontal scrolling with proper bracing:
if (String.IsNullOrEmpty(destinationPath))
{
throw new Exception("No files in destination folder or desintation folder was not specified");
}
And now the exception type and its message become more apparent.
Don't throw System.Exception - here an ArgumentException would be a much better fit. Always try to throw meaningful exception types; create your own if you have to.
The cake message is a lie. I don't see how that method can have any clue whatsoever as to how many files are in the specified destination folder. If the caller is assumed to have verified that, then this method must not make such an assumption. Misleading exception messages can make code harder to debug than it needs to be.
Same thing here - I'll just add that in no-brace code, I prefer this to the above, but consistency in style is always the better choice:
if (String.IsNullOrEmpty(directory))
throw new Exception("No directory was specified to check for");
The message is better though, only missing punctuation.
catch (Exception ex)
{
throw new Exception("Unable to create directory " + destination + " Details:" + ex.Message);
}
This is a very specific message, for the widest possible exception. By re-throwing (again, don't throw System.Exception) like this, you are also losing the stack trace information from the original exception - not good. A better practice would be to throw a custom exception type, and embedding the original exception as an InnerException. | {
"domain": "codereview.stackexchange",
"id": 11118,
"tags": "c#, ftp"
} |
Is there selection against long proteins and long genes? | Question: Background thought
Titin and TTN
Titin is the largest protein encoded in the human genome, with 33423 amino acids. Titin is coded by the gene TTN, which must be at least $3 \cdot 33423 \approx 100\,\mathrm{kb}$ long. Looking at the NCBI entry for the gene TTN indicates that TTN is actually about 240 kb long.
Transcription rate
The average transcription rate (Ref.) is around 1.5 kb per minute. It therefore takes about $\frac{240\mathrm{k}}{1.5\mathrm{k} \cdot 60} \approx 2.5$ hours to transcribe TTN into mRNA. This mRNA then needs to be spliced before it is available for translation. As a consequence, I don't think it is possible for translation to happen at the same time as transcription, but I might well be wrong.
Translation rate
The translation rate is about 8.4 amino acids per second (Ref.). It therefore takes about $\frac{33423}{8.4 \cdot 3600} \approx 1$ hour to translate the protein. Sure, several ribosomes can translate the mRNA at the same time, but it still remains that it takes 1 hour to synthesize at least one protein.
Transcription + Translation rate
Assuming translation does not occur at the same time as transcription, the total time to create the first titin protein is about 3.5 hours.
Half-life
The half-life of a typical human protein is 6.9 hours (Ref.). Intuitively I would expect a negative correlation between mRNA size and mRNA half-life.
Half-life and Transcription + Translation rate
Because the time to produce the first protein is about half the half-life, roughly a quarter of all the mRNA molecules being produced would never give rise to even a single protein, because they would degrade either before or soon after translation has started.
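The back-of-the-envelope numbers above can be reproduced in a few lines (using exactly the rates quoted in this question; the exact results come out slightly larger than the rounded figures in the text):

```python
gene_length_bp = 240_000             # TTN, from the NCBI entry
transcription_bp_per_s = 1.5e3 / 60  # 1.5 kb per minute
protein_length_aa = 33_423           # titin
translation_aa_per_s = 8.4
half_life_h = 6.9                    # typical protein half-life quoted above

t_transcribe_h = gene_length_bp / transcription_bp_per_s / 3600
t_translate_h = protein_length_aa / translation_aa_per_s / 3600
t_total_h = t_transcribe_h + t_translate_h
print(round(t_transcribe_h, 2), round(t_translate_h, 2))  # 2.67 1.11

# Fraction of messages lost to exponential decay before the first
# protein is finished (if the mRNA half-life matched 6.9 h):
lost = 1 - 2 ** (-t_total_h / half_life_h)
print(round(lost, 2))  # 0.32 -- roughly the "quarter" estimated above
```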
It sounds like an important cost, and I would be surprised if a gene or a protein could be any longer.
Question
Is there evidence of selection against long proteins and long genes?
Are there proteins that are much longer than Titin in other species?
Do I exaggerate the cost it represents, either by not considering that an average rate (such as transcription rate) is not representative of the actual rate for typically long gene/protein or by assuming that it is costly to create tons of mRNAs that won't never be translated?
Answer:
Is there evidence of selection against long proteins and long genes?
I am not aware of any such evidence and cursory googling did not reveal studies that researched a correlation between gene selection and gene size. However, the larger a gene, the larger the probability of a deleterious mutation within said gene so I expect that there is some limit to the size genes can reach and be stable through evolution.
Are there proteins that are much longer than Titin in other species?
To date, titin is the largest known protein.
Do I exaggerate the cost it represents, either by not considering that
an average rate (such as transcription rate) is not representative of
the actual rate for typically long gene/protein or by assuming that it
is costly to create tons of mRNAs that won't never be translated?
I really like how you estimated the time cost of producing titin. However, as you already suspect, I believe that you have several flaws in your assumptions.
First of all, the stability of mRNA and proteins varies a lot and depends strongly on their sequences. The half-life of proteins can vary from minutes to years. The titin protein has a half-life of ~70 h.
Similarly, mRNA stability varies from minutes to >12 h. In particular, housekeeping and structural genes were identified as having mRNAs with long half-lives.
Neither protein nor mRNA stability is simply governed by random decay; both are subject to tightly regulated degradation. For proteins, an example is ubiquitinylation, a process in which certain amino acid sequences are recognized and cause the protein to be ubiquitinylated, which in turn triggers degradation via the proteasome. For mRNA, the secondary structure is crucial, since certain loop structures can be recognized by RNases. Thus average protein/mRNA lifetimes do not help to estimate the actual turnover of a specific protein.
"domain": "biology.stackexchange",
"id": 5972,
"tags": "genetics, evolution, molecular-genetics, transcription, molecular-evolution"
} |
Is there a database of the geologic/stratigraphic units of France? | Question: Is there a page where I can look up rock unit definitions for France similar to:
https://www.bgs.ac.uk/lexicon/home.cfm
http://www.bgr.de/app/litholex/index.php
I did some searching for this, but because of my poor French I couldn't find anything.
Answer: The best I have been able to turn up is the paid BD-logs service, which provides standardised lithology for borehole data. Open Geospatial Consortium data all appears to be listed on the Geoservices page, and does not appear to include a lithological database/lexicon. The French Geological Reference Platform does not appear to include a standalone database. The BRGM's list of resources also does not appear to provide pointers to any relevant information. Given that the FRGM is in active development, it seems plausible that this is not in fact a dataset that currently exists. | {
"domain": "earthscience.stackexchange",
"id": 72,
"tags": "regional-geology, stratigraphy"
} |
Why is $L^2$ norm of the gradient called kinetic energy? | Question: I'm reading Lieb-Loss's book 'Analysis', chapter 7. The authors refer to the following integral:
$$\tag{1} \lVert \nabla f\rVert_2^2=\int_{\Omega}\lvert \nabla f(x)\rvert^2\, d^nx $$
as the kinetic energy (see page 172), without explanation. To what physical system are they referring? My intuition says that (1) would be more appropriately called potential energy, as we have discussed here.
(Note: In a later paragraph the authors introduce a magnetic potential $\mathbf{A}$ and the so-called covariant derivative $\nabla+i\mathbf{A}$, remarking that after this introduction the kinetic energy integral must be replaced with
$$\int_{\mathbb{R}^n}\lvert (\nabla + i \mathbf{A})f(x)\rvert^2\, d^n x.$$
This induces me to think that they take as a model some kind of electromagnetic system.)
Thank you for your attention.
Answer: As Greg P mentions in a comment, this terminology is inspired by quantum mechanics.
$f(x)$ plays the role of the wave function $\psi(x)=\langle x|\psi\rangle$ in the position space representation.
The conjugate/canonical momentum operator $\hat{\bf p}$ is replaced by $\frac{\hbar}{i}{\bf \nabla}$ in the Schrödinger representation.
The kinetic/mechanical momentum operator $m\hat{\bf v}$ is replaced by $\hat{\bf p}+{\bf A}$ (if we absorb certain constants, such as the charge of the particles, into the definition of the magnetic potential ${\bf A}$.)
The expectation value $\langle\psi| \hat{K}|\psi\rangle$ of the non-relativistic kinetic energy operator $\hat{K}=\frac{m}{2}\hat{\bf v}^2$ is
$$\langle\psi| \hat{K}|\psi\rangle
~=~\frac{m}{2} \int d^n x~ |\hat{\bf v}\psi(x)|^2
~=~\frac{1}{2m} \int d^n x~ |(\hbar{\bf \nabla}-i{\bf A})\psi(x)|^2. $$ | {
"domain": "physics.stackexchange",
"id": 2749,
"tags": "energy, notation"
} |
Why do NEBNext indexing primers have sequence between the p5 oligo and index? | Question: In a previous post I asked Why do NEB adapters have non-complementary sequence?
Since then, I realized that there is some other sequence in the p5 indexing primer, as well as in the p7 indexing primer.
Here is a diagram of the NEBNext protocol. The parts that I am confused about are in step 5.
My first question is about the p5 index. Why is there a sequence 5'GATCTACAC 3' between the p5 flowcell oligo and the index?
My second question is about both indices. Why is there no gap between the p7 index and the flowcell annealing oligo?
For reference, I fetched these indexing primer sequences from NEB on page 18 and 19 of NEBNext Multiplex Oligos for Illumina instruction manual.
Answer: Question 1: The additional sequence is needed because it is complementary to the P5 sequence anchored to the flow cell. It is also the site for priming the Index 2 read on a MiSeq. The additional sequence is also needed for the read 1 primer in the cartridge to anneal to the correct place on the molecules.
Question 2: The "GT" sequence you're referring to is necessary for the Index 1 and Read 2 primer to anneal to the correct place. | {
"domain": "bioinformatics.stackexchange",
"id": 828,
"tags": "phylogenetics, illumina, adapter, pcr, library"
} |
Is angular resolution important when we want the spectra of an Earth-like exoplanet? | Question: Right now, our resolution and light-gathering power are still far too low to take direct images of exoplanets, so we're limited to subtracting the planet's spectrum from the parent star's spectrum when the planet undergoes a transit (and this isn't going to be possible for decades, according to Jim Kasting's latest book). So in this case, it seems that light-gathering power is more important.
So, is angular resolution more important when we want to measure the spectrum of an Earth-like exoplanet?
Answer: To study the spectra of Earth-like planets in transit across their stars, we'll need an observatory in space so that we won't be affected by spectral lines from the Earth's atmosphere. Unfortunately, we won't then be able to use large apertures on Earth to smooth down the noise. We'll also want to go to wavelengths of several microns in the infrared to get deep absorption lines, and also get to a weaker portion of the spectrum of the host star. Then it's simply a matter of integrating for a long time during transits and subtracting the spectrum of the star during transit from the spectrum when there is no transit. This will give the absorption spectrum due to the ring of atmosphere visible around the disk of the planet. But transits are few and far between so it takes a long time to build up integration time.
One way around needing an observatory in space may be to choose a star with a high radial velocity so that a planet's spectral lines will be red or blue-shifted relative to absorption from the Earth's atmosphere. Then the largest ground-based observatory could be used, or even a consortium of all the largest telescopes be employed at the same time to get the best possible signal.
Using interferometric techniques in space to localize on only a portion of the star being transited, would one day offer a good way to improve the signal-to-noise ratio. | {
"domain": "physics.stackexchange",
"id": 3075,
"tags": "astronomy, exoplanets"
} |
What is the cause of the normal force? | Question: I've been wondering, what causes the normal force to exist?
In class the teacher never actually explains it, he just says "It has to be there because something has to counter gravity." While I understand this is true, it never explains why.
Whenever I ask anyone else they always respond in a similar way, saying "It has to be there, because the object is not accelerating", and this has become very frustrating.
So what is the cause of the normal force? From my reasoning, it has to be one of the four fundamental forces (gravity, electromagnetism, the weak force, or the strong force). It would seem to me that electromagnetism makes the most sense (electrons in the outer shells of atoms repelling each other).
However, just as I thought this had to be right, I read a thing online about "certain fundamental particles repelling each other when their wave functions overlap". I haven't studied quantum mechanics yet so I'm not really sure what to make of that.
If anyone could shed some light on this for me it would be much appreciated.
Answer: The normal force is not really due to any of the four forces of nature. The forces of nature are not all the forces in the macroscopic sense; they are just the fundamental bosonic particles in a modern quantum field theory description.
The normal force is due to the Pauli exclusion principle almost exclusively. This is because electrons have the property that two electrons cannot be in the same quantum state. Two electrons can't be at exactly the same point.
But you might be thinking, "two point particles in three dimensions can't ever be at the same point, it's infinitely improbable!" In quantum mechanics, the particles are spread out in a wavefunction, and the condition that they can't be at the same point means that wherever their spread-out-ness overlaps, the wavefunction is zero. The wavefunction is in 6 dimensions for 2 particles, so it is hard to visualize, but the zeros appear on the diagonal part, where the two positions for the particle coincide.
When you bring two objects to touch, the electron wavefunctions are squeezed together, and the average scale of variation increases slightly, because of the exclusion. The rate of change of the wavefunction is the momentum of the electron, and as you push them closer, it costs energy. This is the source of the normal force. It would not exist if electrons were elementary bosons. | {
"domain": "physics.stackexchange",
"id": 1653,
"tags": "electromagnetism, newtonian-mechanics, forces, pauli-exclusion-principle"
} |
Use of conservation of energy | Question:
A ball starts rolling down a slope as shown in the figure below. All
units shown are in meters. The floor has very little friction. Which
of the following is true after a long time?
options
A. The ball will come to rest at P
B. The ball will come to rest at Q
C. The ball will come to rest at R
D. The ball will continue to be in motion at point S
My approach was to break the problem into two parts and use conservation of energy. The first part is from the peak to point P, the second from point P over the small peak. If the energy stored in the ball is greater than the energy required to climb the small peak, it will pass. So, from the peak to point P, $\frac{1}{2}mv^2 = mgh$, giving $v^2 = 20$ (taking $g \approx 10$). But I am unable to proceed further. What should the equation be for the next part (point P to the small peak)?
Answer: Let's assume that the ball's path is not obstructed (see @NarcosisGF's answer).
You can ignore the peak because it is lower than the starting height.
At start, some amount of potential energy is stored. It is all converted into kinetic energy at P.
Then the ball starts to rise. It can rise just as high as it started. Since the peak clearly is lower than the starting height, there will be no trouble overcoming the peak.
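As a numeric sketch (the two heights are assumed for illustration, since the figure is not reproduced here; the starting height of 1 m is chosen to match the question's $v^2 = 20$ with $g = 10$):

```python
g = 10.0        # m/s^2, as in the question
h_start = 1.0   # m, starting height (assumed; reproduces v^2 = 20 at P)
h_peak = 0.5    # m, height of the small peak (assumed, lower than start)

v_sq_at_P = 2 * g * h_start   # energy conservation: (1/2)mv^2 = mgh
print(v_sq_at_P)              # 20.0, matching the question

# Kinetic energy (per unit mass, as v^2) needed at P just to top the peak:
v_sq_needed = 2 * g * h_peak
print(v_sq_needed)            # 10.0

# More than enough energy is left over, so the ball clears the peak.
assert v_sq_at_P > v_sq_needed
```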
That being said, this drawing is severely flawed. @NarcosisGF points out one issue, namely that the vertical face of the small peak will deflect the ball's path to be perfectly vertical. Thus it will never pass the peak but will just drop straight back down into the valley again.
Secondly, even if we assume this is a drawing mistake and the path should be sloped, the ball will never be able to come to rest at point Q, as it will fly over the peak and land far away.
Thirdly, when landing it will bounce. If not, then energy is sucked out of it by the soft ground and you can't be sure of the speed (since the kinetic energy is then altered). The arrow at S is thus not a good representation of the ball's speed.
"domain": "physics.stackexchange",
"id": 53240,
"tags": "homework-and-exercises, newtonian-mechanics, energy-conservation"
} |
How to determine the laser duration in high harmonic generation? | Question: In high harmonic generation, how can we determine the laser width (duration)? Say, I have a Gaussian laser pulse with intensity $10^{14} \:\rm W/cm^2$ centered at 800 nm wavelength, interacting with an atom of ionization potential = 16 eV. What is the appropriate laser duration and why? I assume that one needs to use uncertainty relation but what is $\delta \omega$ in this case?
Answer: You can't ─ it's a free parameter, and it can take a bunch of different values: from the single- or few-cycle regime, say, if you want to do pulse-length gating, to the 100fs regime if you want to work with your HHG's spectral properties and you don't care so much about having an isolated attosecond pulse.
There are some limits, of course (you can't go shorter than one or two cycles of your IR, and if you go much longer than 100fs you will struggle to get the required intensity unless your pulse energy is very high) but within those limits, you have a broad spectrum of pulse-length choices that will affect the characteristics of the emitted radiation in many ways, and it is up to the experimental design to choose a driving-pulse length that is suitable for the science you want to do (and for your budget! few-cycle pulses don't come cheap in equipment, personnel, or time).
And, that said, there are some hard requirements on the bandwidth of the laser, in the sense that if you want to support a pulse length $\Delta \tau$, then your IR driver must have a bandwidth no smaller than
$$
\Delta \omega \gtrsim \frac{2\pi}{\Delta\tau}
$$
to support that pulse. As an example, if your laser oscillator and (CPA) amplifier are running on Ti:Sa gain media, then you will be restricted to a bandwidth of around 150-200 nm, i.e. some 600 THz, which then puts a hard limit of some 10 fs on your pulse length, i.e. some four cycles of full-width at half-maximum.
That's an immovable physical constraint, and that means that if you want to go lower than that and get to truly few- or single-cycle IR driving pulses, then you need more bandwidth. This is normally done using self-phase modulation in a gas-filled hollow-core optical fiber, where third-order nonlinear optical processes in the fiber are used to increase the bandwidth to the point where it can support the few-cycle pulses that you want, followed by extremely careful compensation of the added chirp to compress the pulse down to its Fourier-limited pulse length. | {
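The numbers in the previous paragraphs can be reproduced in a couple of lines (a sketch; I take the 200 nm Ti:Sa bandwidth around 800 nm quoted above, and everything is order-of-magnitude):

```python
import math

c = 3e8            # speed of light, m/s
lam0 = 800e-9      # central wavelength, m
dlam = 200e-9      # assumed Ti:Sa bandwidth, m (upper end of the 150-200 nm quoted above)

# Wavelength bandwidth -> angular-frequency bandwidth: d(omega) = 2*pi*c*dlam/lam^2
domega = 2 * math.pi * c * dlam / lam0**2     # rad/s

# Fourier limit: the shortest supported pulse is dt ~ 2*pi/domega
dt_min = 2 * math.pi / domega
print(dt_min * 1e15)        # ~10.7 fs: the "hard limit of some 10 fs"

# One optical cycle at 800 nm, for comparison
T_cycle = lam0 / c
print(dt_min / T_cycle)     # ~4 cycles of full-width at half-maximum
```

The ratio comes out as simply lam0/dlam, which is why a broader bandwidth (e.g. from self-phase modulation) is the only way down to few- or single-cycle pulses.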
"domain": "physics.stackexchange",
"id": 46846,
"tags": "non-linear-optics"
} |
LRU Cache with a static variable for garbage value | Question: I have tried to implement a Least Recently Used cache with only C++ STL containers. The main thing that I keep asking my self the static variable called garbage_val. Is it a good practice to have such a static variable just for garbage values?
#include <iostream>
#include <list>
#include <unordered_map>

template<typename Key, typename Value>
class LRU_Cache
{
struct Node
{
Key k;
Value v;
};
public:
static Value garbage_val;
LRU_Cache(unsigned int capacity)
{
capacity_ = capacity;
}
Value get(Key key)
{
auto node = cache_.find(key);
if(node == cache_.end())
{
return garbage_val;
}
Value val = (*(node->second)).v;
recentlyList_.erase(node->second);
Node n; n.v = val; n.k = key;
recentlyList_.push_back(n);
cache_[key] = --recentlyList_.end();
return val;
}
void set(Key key, Value val)
{
auto node = cache_.find(key);
if(node != cache_.end())
{
recentlyList_.erase(node->second);
}
else
{
evict_if_needed();
}
Node n; n.v = val; n.k = key;
recentlyList_.push_back(n);
cache_[key] = --recentlyList_.end();
}
void evict_if_needed()
{
if(cache_.size() >= capacity_)
{
auto node = cache_.find(recentlyList_.begin()->k);
recentlyList_.pop_front();
cache_.erase(node);
}
}
virtual ~LRU_Cache(void)
{
}
void print()
{
std::cout << "Objects in Memory:" << std::endl;
for(auto& c : cache_)
{
std::cout << "(" << c.first << "," << (*(c.second)).v << ")" << std::endl;
}
std::cout << "Recently used:" << std::endl;
for(auto& r : recentlyList_)
{
std::cout << "(" << r.k << "," << r.v << ")" << std::endl;
}
}
private:
LRU_Cache(void){}
std::unordered_map<Key, typename std::list<Node>::iterator> cache_;
std::list<Node> recentlyList_;
unsigned int capacity_;
};
template<typename Key,typename Value> Value LRU_Cache<Key,Value> ::garbage_val;
Answer:
Is it a good practice to have such a static variable just for garbage values?
Maybe not:
garbage_val is a Value, so if you return it, then the caller doesn't know whether the get succeeded.
If garbage_val is supposed to be a magic number (an impossible Value), different users of the class might want different magic numbers ... so why isn't garbage_val an instance member instead of a static member?
Normally container classes expect the caller to check that iterator != end() before dereferencing the iterator.
Instead, the following methods might be appropriate:
// throws an exception if key does not exist.
Value get(Key key) { ... }
// tests whether key exists: use this before calling get.
bool contains(Key key) { ... }
// returns true and initializes valueOut iff key exists.
// if key doesn't exist then valueOut is not initialized.
bool tryGet(Key key, Value& valueOut) { ... } | {
"domain": "codereview.stackexchange",
"id": 6830,
"tags": "c++, c++11, cache, stl"
} |
Relationship between velocity and pressure of a fluid in motion | Question: I'm confused about the relationship between the velocity and the pressure of a fluid in motion. According to Bernoulli's equation, mathematically when the velocity increases the pressure has to decrease and vice versa because of the conservation of energy. But why doesn't the pressure law (P = F / A) prove this? I mean, for fluids in motion, once the area of a cross section decreases the velocity increases, thus the pressure should increase. Unless the force changes too (decreases).
Any rational explanation about it?
Answer: I believe the confusion can be resolved first by realizing that directions are important in your question. And second by understanding the microscopic origin of pressure.
The first observation then concerns the fact that force is a vectorial quantity and the definition of pressure you give is not general enough. So the force that should be taken into account is the one done perpendicularly to the surface $A$. Alternatively you can consider an area $A$ as a vector too, then the general definition would include $\cos\theta$, where $\theta$ would be the angle between the force and area vector.
For the second point, intuitively, think that pressure is produced by particle collisions which exert a force on a surface (by delivering momentum). Having said that, Bernoulli's equation mostly concerns pressure due to height differences and velocities along the flow, with no regard to external forces or directions. So a way to think about it is the following. Think about a flat horizontal surface (perhaps a house roof); under normal conditions particles in the air fly in all directions and collide with the roof producing a certain pressure (let's say atmospheric pressure). When the wind starts blowing strongly then most particles in the air will have a horizontal velocity, so vertical collisions will be reduced, namely fewer particles will exert force on the roof since most of them are flying horizontally. The consequence is that the force is reduced, hence the pressure on the roof is reduced (so if the wind is hard enough to reduce the pressure to a value below the pressure within the house (ignoring how it is attached), the roof will get blown off). So indeed more velocity is less pressure at a macroscopic level for directions that are perpendicular.
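To put a rough number on the roof example (a sketch; along a streamline Bernoulli gives a pressure drop of about (1/2)*rho*v^2, and the air density, wind speed, and roof area here are all assumed values):

```python
rho_air = 1.2    # kg/m^3, assumed air density
v_wind = 30.0    # m/s, assumed strong wind (~108 km/h)
area = 100.0     # m^2, assumed roof area

# Fast air above the roof, still air inside the house:
# the static pressure above drops by roughly the dynamic pressure
dp = 0.5 * rho_air * v_wind**2
print(dp)            # 540.0 Pa

# Net upward force on the roof from that pressure difference
print(dp * area)     # 54000.0 N, roughly the weight of a 5.5-tonne mass
```

Even a pressure difference of well under 1% of atmospheric pressure, applied over a whole roof, produces a very large net force.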
"domain": "physics.stackexchange",
"id": 62833,
"tags": "fluid-dynamics, pressure"
} |
The interpretation and maths behind the many-worlds interpretation of quantum mechanics | Question: Recently I started reading the book "Something Deeply Hidden" by Sean Carroll. In the book he talks about the many-worlds interpretation of quantum mechanics as a more elegant way of thinking about quantum mechanics instead of the usual, as he would like to call it, textbook recipe of quantum mechanics where the wavefunction collapses when a quantum system is observed.
I found this many-worlds interpretation very interesting, but I have a bit of trouble understanding or visualizing the mathematics behind it because he doesn't really talk about the maths in his book. My problem goes as follows: let's take as an example a quantum particle whose location we, as the observer, want to measure/observe. It is known that the particle can only be in 2 locations; let's call them location 1 and location 2.
Let's say, for the sake of simplicity, that its wave function takes on the following form
$$\psi = \frac{1}{\sqrt{2}}(\psi_1 + \psi_2)$$
where $\psi_1$ stands for the particle being in location 1 and $\psi_2$ stands for the particle being in location 2. If we believe the interpretation of quantum mechanics where the wave function collapses upon measurement, we now know that if we measure the location of the particle it has a 50% chance of being in location 1 or 2. After the measurement the wave function collapses. So let's say that we have observed the particle to be in location 1; then right after the measurement the wavefunction takes the form
$$\psi = \psi_1$$
Now I understand this interpretation, but I now want to approach this same problem following the many-worlds interpretation. This many-worlds interpretation says that measurement, unlike in the textbook recipe, isn't something fundamental. When the observer measures the location of the particle it is just two quantum systems interacting with each other and thus getting entangled with each other. What happens is that if we measure the particle to be in, let's say, location 1, then there is also another version of ourselves, with which we don't interact, that has measured the particle to be in location 2. I understand the logic behind it when it is said in words, but I want to translate it into maths. So before the measurement we still have a wave function of the form
$$\psi = \frac{1}{\sqrt{2}}(\psi_1 + \psi_2)$$
Now with the many-worlds approach we can't say the wavefunction collapses upon measurement, because that simply doesn't happen following this approach. So let's say in our world we observe the particle to be in location 1; then we now know that there must be another world in which the other version of ourselves observed the particle in location 2. Now how does the wavefunction of the particle look in our world? It can't be $\psi_1$ since this would imply a collapse of the wavefunction. Is it right to say that it still has the above form, but that our world just lives inside the $\psi_1$ part of the total wavefunction?
So in short I just have problems with mathematically formulating the above example following the many-worlds approach. Any help with this would be greatly appreciated :))
P.s: I'm sorry if my way of writing isn't clear
Answer: One of the key elements of the many-worlds approach is that you need to consider the observer as a quantum system, with their own wavefunction. Let us say that the observer initially has a wavefunction $\Phi_i$, and if they measure the system to be in state 1 (respectively 2) they will have a wavefunction of $\Phi_1$ (respectively $\Phi_2$). The combined state of the original system with the observer is then initially
$$
\Phi_i \psi = \frac{1}{\sqrt{2}}\Phi_i\left(\psi_1 + \psi_2\right)\;.
$$
After measurement the observer will have become entangled with the system and we will have a state
$$
\frac{1}{\sqrt{2}}\left(\Phi_1\psi_1 + \Phi_2\psi_2\right)\;.
$$
That is the wavefunction has two terms ("worlds") in one of which the observer found the system in state 1 and in the other they found it in state 2. This state is entangled, so we cannot factor the state to obtain a wavefunction for the system alone.
If we wish to describe the system after measurement without direct reference to the observer we must move to a density matrix formalism. The resulting density matrix for the system, after tracing out the observer, is identical to the one obtained from a projective measurement in the standard formalism.
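The pre- and post-measurement states above can be written out numerically (a minimal sketch with NumPy; for concreteness the observer is modeled with three orthonormal states, "ready" / "saw 1" / "saw 2", and the basis ordering is my choice):

```python
import numpy as np

# Observer states: "ready", "saw 1", "saw 2"
Phi_i = np.array([1.0, 0.0, 0.0])
Phi_1 = np.array([0.0, 1.0, 0.0])
Phi_2 = np.array([0.0, 0.0, 1.0])

# System states: "in location 1", "in location 2"
psi_1 = np.array([1.0, 0.0])
psi_2 = np.array([0.0, 1.0])

# Before measurement: product state Phi_i (x) (psi_1 + psi_2)/sqrt(2)
before = np.kron(Phi_i, (psi_1 + psi_2) / np.sqrt(2))

# After measurement: entangled state (Phi_1 psi_1 + Phi_2 psi_2)/sqrt(2)
after = (np.kron(Phi_1, psi_1) + np.kron(Phi_2, psi_2)) / np.sqrt(2)

# Reduced density matrix of the system: trace out the observer index
rho_full = np.outer(after, after)
rho_sys = rho_full.reshape(3, 2, 3, 2).trace(axis1=0, axis2=2)
print(rho_sys)   # diag(0.5, 0.5): a 50/50 mixture with no off-diagonal coherence
```

The entangled state cannot be factored into (observer state) × (system state), and the reduced density matrix is exactly the mixed state a projective measurement would give.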
"domain": "physics.stackexchange",
"id": 90717,
"tags": "quantum-mechanics, wavefunction, quantum-entanglement, quantum-interpretations"
} |
Path of free fall in Schwarzschild coordinates (superior at release speed) | Question: I'm a new member on StackExchange. I'm French and my English is awful, so I beg you to excuse me and I hope you can understand my question anyway…
I'm looking for a very precise equation for free fall in Schwarzschild coordinates (r;t) but with an initial speed superior at release speed.
I already have the formula of the speed with a parameter K
$$dr/dt=(1-\frac{Rs}{r})\frac{\sqrt{\frac{1}{r}-K}}{\sqrt{\frac{1}{Rs}-K}}$$
where Rs is the Schwarzschild radius. This is not a local speed but the "slope" of the path I'm looking for.
For example, if K=0 it gives the speed at r for free fall from infinity, and if K=-∞ it gives the speed of light (Shapiro's one). I'm interested in the case where K belongs to ]-∞;0[, which means that a particle could follow a geodesic from an initial r=Ro with an initial speed superior at release speed.
The subject :
This formula can be written v(r). If I had v(t) I could integrate it and find the equation of the path r(t), but this is not the case. Is anyone here able to write the formula t(r) or r(t) I need to draw the path of this kind of particle in Schwarzschild coordinates (r;t)?
Either you can find a way to integrate this, or you already have a "ready made" formula without the K parameter. I just want the result, not the whole explanation… a waste of time because I can't understand it. I'm not a good mathematician but I'm interested in black holes and I like to "draw formulas" because it's the only way for me to understand something about relativistic equations.
Thank you for your help and sorry for my English…
Mailou
Answer: I haven't checked whether your equation is correct, but I can tell you how to integrate it to get $t(r)$. (Sorry, you can't get $r(t)$.) All you need to do is write your differential equation in the separated form
$$dt=\sqrt{\frac{1}{R}-K}\,\frac{dr}{\sqrt{\frac{1}{r}-K}(1-\frac{R}{r})}$$
where $t$-stuff is on one side and $r$-stuff is on the other, and then integrate both sides:
$$\int dt=\sqrt{\frac{1}{R}-K}\int\frac{dr}{\sqrt{\frac{1}{r}-K}(1-\frac{R}{r})}.$$
The $t$ integral is trivial, and Mathematica can do the messy $r$ integral. The result, slightly rewritten for easy calculation in the case $K<0$, is
$$t=\sqrt{\frac{1}{R}-K} \left(-\frac{(2 K R+1) \tanh
^{-1}\left(\frac{\sqrt{-K}}{\sqrt{\frac{1}{r}-K}}\right)}{(-K)^{3/2}}-\frac{2 R^{3/2} \tanh
^{-1}\left(\frac{\sqrt{R} \sqrt{\frac{1}{r}-K}}{\sqrt{1-K R}}\right)}{\sqrt{1-K
R}}-\frac{r \sqrt{\frac{1}{r}-K}}{K}\right)+C$$
for $r>R$ and
$$t=\sqrt{\frac{1}{R}-K} \left(-\frac{(2 K R+1) \tanh
^{-1}\left(\frac{\sqrt{-K}}{\sqrt{\frac{1}{r}-K}}\right)}{(-K)^{3/2}}-\frac{2 R^{3/2} \tanh
^{-1}\left(\frac{\sqrt{1-K R}}{\sqrt{R} \sqrt{\frac{1}{r}-K}}\right)}{\sqrt{1-K
R}}-\frac{r \sqrt{\frac{1}{r}-K}}{K}\right)+C$$
for $r<R$.
Here $C$ is a constant of integration determined by the initial conditions.
You should be able to verify that this solution satisfies your differential equation by differentiating this to compute $dt/dr$; the reciprocal $dr/dt$ should simplify to the right-hand-side of your equation.
By the way, if it had been impossible to obtain an analytic solution, it is always possible to solve a differential equation numerically.
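For instance, one can check the closed form for $r>R$ against a direct numerical integration of $dt/dr$ (a sketch with assumed values $R=1$, $K=-0.5$ over $3\le r\le 10$; the integration constant drops out of the difference):

```python
import numpy as np

R, K = 1.0, -0.5   # assumed Schwarzschild radius (in some units) and K < 0

def dt_dr(r):
    # Reciprocal of the question's dr/dt, i.e. the slope dt/dr for r > R
    return np.sqrt(1.0 / R - K) / ((1.0 - R / r) * np.sqrt(1.0 / r - K))

def t_closed(r):
    # The closed-form t(r) quoted above, valid for r > R (constant C dropped)
    u = np.sqrt(1.0 / r - K)
    return np.sqrt(1.0 / R - K) * (
        -(2 * K * R + 1) * np.arctanh(np.sqrt(-K) / u) / (-K) ** 1.5
        - 2 * R**1.5 * np.arctanh(np.sqrt(R) * u / np.sqrt(1 - K * R)) / np.sqrt(1 - K * R)
        - r * u / K
    )

# Trapezoid-rule integral of dt/dr from r = 3 to r = 10 vs. the closed form
r = np.linspace(3.0, 10.0, 200_001)
y = dt_dr(r)
numeric = (r[1] - r[0]) * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)
print(abs(numeric - (t_closed(10.0) - t_closed(3.0))))  # tiny: the formula checks out
```

The same check with other values of K < 0 and other ranges r > R works too, which is the practical way to verify an integral like this.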
I suspect that your equation has at least one mistake: shouldn’t $dr/dt$ be negative? If so, simply negate my solution.
Two final comments: This is called radial free-fall, and what you mean by “superior at release speed” is unclear because this is not a phrase used in English. | {
"domain": "physics.stackexchange",
"id": 62205,
"tags": "general-relativity, black-holes, differential-geometry, coordinate-systems"
} |
How to properly add some new arguments in a subclass? | Question: I am trying to subclass list in order to implement a callback function that would be called whenever a list item is being set.
I have this:
class CustomList(list):
def __init__(self, *args, **kwargs):
self.setitem_callback = kwargs['setitem_callback']
del kwargs['setitem_callback']
super().__init__(*args, **kwargs)
def __setitem__(self, *args, **kwargs):
super().__setitem__(*args, **kwargs)
self.setitem_callback()
def callback():
print('callback')
l = ['a', 'b', 'c']
custom_list = CustomList(l, setitem_callback=callback)
custom_list[1] = 'd' # prints 'callback'
It works as expected, but I am concerned about having to do del kwargs['setitem_callback'] in order for the superclass' __init__ to work. Is it a bad practice? Are there any other ways to achieve the same?
Answer: As @Peilonrayz said in the comments: You can just write
class CustomList(list):
def __init__(self, *args, setitem_callback, **kwargs):
self.setitem_callback = setitem_callback
super().__init__(*args, **kwargs)
def __setitem__(self, *args, **kwargs):
super().__setitem__(*args, **kwargs)
self.setitem_callback()
def callback():
print('callback')
l = ['a', 'b', 'c']
custom_list = CustomList(l, setitem_callback=callback)
custom_list[1] = 'd' # prints 'callback'
as long as you're okay with slightly different behavior when the caller forgets to pass setitem_callback= at all. With your original code, you get a KeyError:
Traceback (most recent call last):
File "test.py", line 16, in <module>
custom_list = CustomList(l)
File "test.py", line 3, in __init__
self.setitem_callback = kwargs['setitem_callback']
KeyError: 'setitem_callback'
With a proper keyword argument, you get a TypeError with a more tailored message:
Traceback (most recent call last):
File "test.py", line 16, in <module>
custom_list = CustomList(l)
TypeError: __init__() missing 1 required keyword-only argument: 'setitem_callback' | {
"domain": "codereview.stackexchange",
"id": 37105,
"tags": "python"
} |
Looking for clarification on superposition | Question: I have always had a hard time accepting the concept of superposition from quantum mechanics. I know that the leading physicists say that the cat is both alive and dead until it is observed and that an electron is in multiple places at once (is a wave) until it is observed, but I sadly have never been convinced. I've taken a basic course on the subject and watched countless documentaries that explain this, but I have a hard time accepting that we can change the state of the world just by observing. Questions that have popped into my head are things like:
What if a brain dead person looks and doesn't comprehend?
What if I take a picture and don't look at that picture for years later? Is the cat both alive and dead for years all because I didn't look at the picture?
What if a friend and I flip a coin, and only I look? When asked which came up, I would say "Heads" and he would say "Both". Are we both right? Doesn't my absolute knowledge make him wrong?
It seems like the trend in science has always been to show us how small and irrelevant we are. From finding out that we are not at the center of the universe, to finding out that we are just one planet of many, in one solar system of many, in one galaxy of many, and possibly even in one universe of many, to finding out that all life on earth came from the same place (making us less special than we thought), to finding out that our genetic code is filled with genetic bloat and the little left over is nearly identical to a banana's. After all that, I find out that I am in fact so special that all matter in the universe ceases to exist when I close my eyes!
Currently I am one of those people who think that the moon is there even when I'm not looking. Can someone point me to a convincing argument for all this so that I can finally be convinced?
EDIT:
What I gathered from the replies so far is that superposition is a way of representing our uncertainty of an answer. By saying that a cat is in a state between life and death, we are actually saying that we don't know in a mathematically describable way. This would imply that our observation does not change reality, it only changes our certainty.
But what about electrons acting like waves (going through 2 slits at once) when we're not observing and like particles (being visible electrons) when we are observing in the double-slit experiment? This seems to show that both behaviors are real and indicates that our observation does change reality (which of the two it is at the moment).
This is the contradiction that I struggle with. (And if I am wrong about my interpretation, please correct me.)
RE-EDIT:
I think I'm understanding this now. The only remaining issue is this:
The electrons in the double slit experiment that act like waves before we look and like particles after we look are exposed to the environment (gas, photons, etc.) the whole time. Shouldn't it be one or the other the whole time due to this exposure?
Answer: I think the fundamental misunderstanding of superposition has a lot to do with the popular interpretation of quantum mechanics. That is, how Schrödinger's cat is portrayed in popular science. When a quantum system is in a state of superposition, it means that the outcome of a measurement of some property of that system is uncertain. The wacky thing about quantum mechanics is that it behaves as if it were in both states at the same time. However, this is only true while the system is still coherent. What the popular science explanation calls "observations" has nothing to do with consciousness or an observer making a measurement. It has to do with a phenomenon called quantum decoherence. Basically, the system exists in a state of superposition until that system interacts with the environment in a thermodynamically irreversible way. This interaction is what causes the wave function to collapse and Schrödinger's cat to be dead or alive. That interaction could be an electron striking a detector, or it could be random neutrinos passing through the system, or a photon interacting with it, or just about anything. This is why it's so hard to build a quantum computer; you have to ensure that the system doesn't interact with the environment in any way to ensure that it stays coherent (in a state of superposition). So to answer your questions:
1. What if a brain dead person looks at it and doesn't comprehend? Observation by a human consciousness is not a requirement of decoherence. The wavefunction was already collapsed by the photons hitting it that allowed the brain dead person to see.
2. What if I take a picture and don't look at that picture for years later? Is the cat both alive and dead for years all because I didn't look at the picture? Same as above, the wavefunction collapsed into a determined state the moment that it interacts with photons of light.
3. What if a friend and I flip a coin, and only I look? When asked which came up, I would say "Heads" and he would say "Both". Are we both right? Doesn't my absolute knowledge make him wrong? First thing, a coin flip is not an example of quantum superposition. Let's suppose you "flipped" an electron so that it lands in either a spin-up or spin-down state, with 50% probability. When you make a measurement of the spin, the wavefunction collapses due to the interaction with whatever tool you are using to make the measurement. Then you could say "up" or "down" but your friend would say "I don't know". Your friend could then measure the spin of the same electron and he would get the same answer that you got. Once the wavefunction collapses the spin is no longer in a state of superposition. There isn't any way you could get "both up and down" with a single measurement.
4. Does matter continue to exist after you close your eyes? Yes. Again, having a human observer is not a requirement for the wavefunction to collapse. Also, quantum effects are generally not observable at the macroscopic level (with a few exceptions). Don't ask this question of any philosophers, though.
The whole point of the Schrödinger's cat thought experiment was to show how absurd quantum effects would be on a macroscopic scale. To make it actually work in practice, the cat in the box would have to be completely isolated from the outside environment, or else the wavefunction would not stay in a state of superposition. | {
"domain": "physics.stackexchange",
"id": 26234,
"tags": "quantum-mechanics, hilbert-space, measurement-problem, superposition"
} |
are explanatory variables in multiple linear regression slopes? | Question: In simple linear regression the formula is: $y = m \cdot x + b$.
When there are multiple independent variables: $y = b_0 + b_1 \cdot x_1 + b_2 \cdot x_2 + \ldots $
I know that $x_1,x_2,\ldots $ represents different independent variables but what exactly is $b_1,b_2,\ldots$?
Answer: They are the coefficients.
Suppose you keep all the variables besides $x_1$ fixed. How much would increasing $x_1$ by a single unit affect $y$? According to the formula, it would change by $b_1$. Similarly for the other variables.
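A tiny numerical illustration of this "hold the others fixed" reading (a sketch with fabricated, noise-free data, so the fitted coefficients recover the known ones exactly):

```python
import numpy as np

# Fabricated data generated with known coefficients: y = 2 + 3*x1 - 1*x2
rng = np.random.default_rng(0)
x1 = rng.uniform(0, 10, 100)
x2 = rng.uniform(0, 10, 100)
y = 2 + 3 * x1 - 1 * x2

# Least-squares fit of y = b0 + b1*x1 + b2*x2
X = np.column_stack([np.ones_like(x1), x1, x2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(b1, 6), round(b2, 6))   # 3.0 -1.0

# Holding x2 fixed, increasing x1 by one unit changes the prediction by b1:
print(round((b0 + b1 * 5 + b2 * 4) - (b0 + b1 * 4 + b2 * 4), 6))  # 3.0
```

With real (noisy) data the fitted coefficients would only approximate the true ones, but the interpretation is the same.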
"domain": "datascience.stackexchange",
"id": 7343,
"tags": "statistics"
} |
What is meant precisely, when a term in the Lagrangian is "chirally invariant"? | Question: I am reading this paper, where eq. (2):
$$ m_0(\bar{\phi}_{-L}\phi_{+R}-\bar{\phi}_{-R}\phi_{+L}-\bar{\phi}_{+L}\phi_{-R}+\bar{\phi}_{+R}\phi_{-L}) \tag{2} $$
is said to be chirally invariant. Here, the $\phi$'s represent Weyl left- and right-handed spinors (I believe) with the indexed parity.
At first I thought chirally invariant would mean performing the transformation $L\leftrightarrow R$ would leave the term invariant, but this is not the case (you pick up an extra minus sign). Does it have something to do with parity? If so, why exactly?
Answer: Here's a link to a free version on arxiv: https://arxiv.org/abs/1108.2596
Eq (1) defines what the authors mean by a chiral transformation
\begin{eqnarray}
\varphi_{+R}' &=& R\varphi_{+R} \\
\varphi_{+L}' &=& L\varphi_{+L} \\
\varphi_{-R}' &=& L\varphi_{-R} \\
\varphi_{-L}' &=& R\varphi_{-L}
\end{eqnarray}
where $R$ and $L$ are rotations in the right- and left- handed subspaces.
Using these transformations in the first mass term, we see
\begin{equation}
\bar\varphi'_{-L} \varphi'_{+R} = \bar\varphi_{-L} R^\dagger R \varphi_{+R} = \bar\varphi_{-L} \varphi_{+R}
\end{equation}
since for rotations, $R^\dagger R = 1$.
The other terms in Eq (2) are similarly invariant under the transformation in Eq (1). | {
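Since the argument only uses $R^\dagger R = L^\dagger L = 1$, it can be checked numerically with random unitaries standing in for the rotations (a sketch; the "spinors" here are just random complex vectors and the Dirac bar is replaced by a plain conjugate transpose, which is all the invariance argument needs):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    # QR decomposition of a random complex matrix yields a unitary Q
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

n = 2
R, L = random_unitary(n), random_unitary(n)
# Stand-in spinor components: phi_{+R}, phi_{+L}, phi_{-R}, phi_{-L}
pR, pL, mR, mL = (rng.normal(size=n) + 1j * rng.normal(size=n) for _ in range(4))

def mass_term(pR, pL, mR, mL):
    bar = lambda a, b: np.vdot(a, b)   # a-bar b -> a^dagger b (stand-in for the Dirac bar)
    return bar(mL, pR) - bar(mR, pL) - bar(pL, mR) + bar(pR, mL)

# Chiral transformation of Eq (1): pR -> R pR, pL -> L pL, mR -> L mR, mL -> R mL
term_before = mass_term(pR, pL, mR, mL)
term_after = mass_term(R @ pR, L @ pL, L @ mR, R @ mL)
print(np.isclose(term_before, term_after))   # True: the term is chirally invariant
```

Each of the four terms pairs a field transforming with R (or L) against one transforming with the same matrix, so the unitaries cancel term by term.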
"domain": "physics.stackexchange",
"id": 91969,
"tags": "lagrangian-formalism, field-theory, spinors, chirality, invariants"
} |
How to handle optional date parameters? | Question: When creating a function to be reusable throughout the project I came across something strange. After looking at this function it looks very much like it could be refactored.
The function should only return the short month and day, in that order. If it receives a parameter, it will be a Firestore Timestamp with the following format:
{
"seconds":1667420699,
"nanoseconds":394000000
}
If it does not receive a parameter, it must return today's month and day. So I did this:
function shortMonthDay(timestamp = undefined) {
let date
if (timestamp) {
date = new Date(timestamp.seconds * 1000 + timestamp.nanoseconds / 1000000)
} else {
date = new Date()
}
return date.toLocaleDateString('en-US', {
month: 'short',
day: 'numeric'
}) // Nov 2
}
The function works as it should, it doesn't return any errors. But I wonder if it could be improved and how.
Answer: Minor points
Default parameters, e.g. shortMonthDay(timestamp = undefined), only kick in when the argument is missing or undefined. Explicitly setting the default to undefined therefore does nothing and is just code noise.
The timestamp object is not vetted. If it contains seconds, it may not have nanoseconds (and vice versa), or either or both may not hold a number. You need to ensure that you do not pass NaN to the Date constructor when parsing the timestamp.
Function roles
shortMonthDay would be better as two functions.
timestampToDate to convert the timestamp, if given, to a date.
formatDate to format the date object, which you can curry with the locale format. Thus you don't need to touch the timestampToDate to change the output format
Rewrite
We can rewrite the function as follows
const formatDate = (loc = "en-US", format = {month: "short", day: "numeric"}) =>
date => date.toLocaleDateString(loc, format);
const timestampToDate = (timestamp = {}) => new Date(
(isNaN(timestamp.seconds) ? Date.now() : Number(timestamp.seconds) * 1000) +
(isNaN(timestamp.nanoseconds) ? 0 : timestamp.nanoseconds * 1e-6)
);
const toMonthDay = formatDate();
// basic tests
const timestamps = [undefined, // current local date
{seconds: 1670263910, nanoseconds: 394000000}, // ~ Dec 6 GMT+0800 for rest
{seconds: 1670263910, nanoseconds: "blah"},
{seconds: 1670263910},
{seconds: "foo", nanoseconds: 1667420699000 + 394}
].forEach(timestamp => console.log(toMonthDay(timestampToDate(timestamp))));
Note: As a habit I avoid divides, as divides are generally slower than multiplies. It's just a habit; in this case the difference between * 1e-6 and / 1e6 is inconsequential.
Note In timestampToDate the names are a little noisy and because the variable timestamp is being used so often one could also write it as
const timestampToDate = (stamp = {}) => new Date(
(isNaN(stamp.seconds) ? Date.now() : Number(stamp.seconds) * 1000) +
(isNaN(stamp.nanoseconds) ? 0 : stamp.nanoseconds * 1e-6)
); | {
"domain": "codereview.stackexchange",
"id": 44186,
"tags": "javascript, datetime"
} |