anchor | positive | source |
|---|---|---|
Reconstructing a wave which interferes with a second known one | Question: Is it possible to calculate one of two interfering waves, if the interference (the sum) $v$ and the other wave (summand) $v_1$ are known? The waves have the same frequency but may differ in phase and amplitude.
I have found the formulas for the interference (the sum), but my algebra/analytical skills are not good enough to tell whether there is an explicit solution. There are two unknown variables and four known variables in the formula.
If there is an explicit solution, could someone please show me the formula for it?
Edit:
I look for the amplitude $A_2(t)$ and phase $\varphi_2(t)$ of the following equation in a time discrete, embedded system.
$$
v(t)=v_1(t)+v_2(t)
=Acos(\omega t+\varphi)=A_1cos(\omega t+\varphi_1)+A_2cos(\omega t+\varphi_2)
$$
When subtracting $ v_1(t) \text{ from } v(t)$
I get a term with amplitude $A_2$ and phase $\varphi_2$ not separable because both are unknown.
$$ v_2(t)=v(t)-v_1(t)=A_2cos(\omega t+\varphi_2)=Acos(\omega t+\varphi)-A_1cos(\omega t+\varphi_1) $$
Answer: Let us define $f:
\begin{bmatrix}
a_2\\
\phi_2
\end{bmatrix}\mapsto
\begin{bmatrix}
f_1(a_2, \phi_2)\\
f_2(a_2, \phi_2)
\end{bmatrix} =
\begin{bmatrix}
\sqrt{a_1^{2}+a_2^{2}+2a_1 a_2 cos(\phi_1 - \phi_2)}-A\\
\frac{a_1 sin(\phi_1)+a_2 sin(\phi_2)}{a_1 cos(\phi_1)+a_2 cos(\phi_2)}-tan(\phi)
\end{bmatrix}$
where $a_1, \phi_1, A, \phi$ are supposed to be known.
Then I'd go for numerical tools such as the Newton method.
First you need to find a way to compute the gradient matrix, either analytically or numerically; here
$G(a_2, \phi_2) =
\begin{bmatrix}
\frac{\partial f_1}{\partial a_2}(a_2, \phi_2) & \frac{\partial f_1}{\partial \phi_2}(a_2, \phi_2)\\
\frac{\partial f_2}{\partial a_2}(a_2, \phi_2) & \frac{\partial f_2}{\partial \phi_2}(a_2, \phi_2)\\
\end{bmatrix}$
Then you enter the algorithm. You need to find an initial seed as close as you can to the solution ($X_0 = \begin{bmatrix}a_1\\ \phi_1\end{bmatrix}$ in our case for instance) and do in a loop :
$X_{k+1} = X_{k} - G(X_k)^{-1}f(X_k)$
and if you just want the result, this might do the trick for you.
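For instance, here is a minimal numerical sketch of that Newton iteration (hypothetical test values; the Jacobian $G$ is approximated by forward differences, and the amplitude residual $f_1$ is written with the usual factor of $2$ on the cross term):

```python
import numpy as np

# Hypothetical test values: the known summand (a1, phi1) and the known sum (A, phi).
a1, phi1 = 1.0, 0.3
a2_true, phi2_true = 0.7, 1.1                      # values we hope to recover
z = a1*np.exp(1j*phi1) + a2_true*np.exp(1j*phi2_true)
A, phi = np.abs(z), np.angle(z)

def f(x):
    # the two residuals: amplitude of the sum (factor 2 on the cross term)
    # and tangent of its phase
    a2, p2 = x
    r1 = np.sqrt(a1**2 + a2**2 + 2*a1*a2*np.cos(phi1 - p2)) - A
    r2 = (a1*np.sin(phi1) + a2*np.sin(p2)) / (a1*np.cos(phi1) + a2*np.cos(p2)) - np.tan(phi)
    return np.array([r1, r2])

def G(x, h=1e-7):
    # forward-difference approximation of the 2x2 gradient matrix
    fx, J = f(x), np.zeros((2, 2))
    for j in range(2):
        xh = x.copy()
        xh[j] += h
        J[:, j] = (f(xh) - fx) / h
    return J

x = np.array([a1, phi1])                           # seed X0 = [a1, phi1]
for _ in range(50):
    x = x - np.linalg.solve(G(x), f(x))

print(x)   # approaches [0.7, 1.1]
```

With these values the iteration converges in a handful of steps; a different seed may land on one of the other sign/branch solutions.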
EDIT :
apparently you want to go full analytical. Let us use the second of your equations. We have, after some regrouping:
$$a_1(tan(\phi)cos(\phi_1)-sin(\phi_1)) = a_2(sin(\phi_2)-tan(\phi)cos(\phi_2))$$
by remarking that:
$$sin(\phi_2)-tan(\phi)cos(\phi_2) = -\sqrt{1+tan^{2}(\phi)}[\frac{tan(\phi)}{\sqrt{1+tan^{2}(\phi)}}cos(\phi_2)-\frac{1}{\sqrt{1+tan^{2}(\phi)}}sin(\phi_2)]$$
we can, by setting $\theta = arctan(1/tan(\phi))$, get $cos(\theta) = \frac{tan(\phi)}{\sqrt{1+tan^{2}(\phi)}}$ and $sin(\theta) = \frac{1}{\sqrt{1+tan^{2}(\phi)}}$. Then by using the $cos(\phi_2+\theta)$ expansion formula backwards we get:
$$a_2cos(\phi_2+\theta) = -\frac{a_1(tan(\phi)cos(\phi_1)-sin(\phi_1))}{\sqrt{1+tan^{2}(\phi)}}$$ We will call the right-hand side $m$ for the sake of brevity. Note that it does not depend on $\phi_2$ nor $a_2$. So we have
$$a_2cos(\phi_2+\theta) = m \tag{1}\label{eq1}$$
Note how $m$ does not depend on $\phi_2$ nor $a_2$ (that was the point).
The other equation of your post can be squared and put in the form:
$$a_2^{2}+2a_1a_2cos(\phi_2-\phi_1) = A^{2}-a_1^{2}$$
Now if we develop the cosine and use \eqref{eq1}, we get
$$2a_1a_2cos(\phi_2-\phi_1)=2a_1a_2cos((\phi_2+\theta)-(\phi_1+\theta))$$ $$=2a_1a_2[cos(\phi_2+\theta)cos(\phi_1+\theta)+sin(\phi_2+\theta)sin(\phi_1+\theta)]$$ $$=2a_1mcos(\phi_1+\theta)+[2a_1sin(\phi_1+\theta)]a_2sin(\phi_2+\theta)$$
Note how $2a_1mcos(\phi_1+\theta)$ does not depend on the parameters we seek. Therefore, if we reinject this in the original equation we get :
$$a_2^{2}+[2a_1sin(\phi_1+\theta)]a_2sin(\phi_2+\theta) = A^{2}-a_1^{2}-2a_1mcos(\phi_1+\theta)$$
If we name the right hand $\gamma$ and we set $\alpha = 2a_1sin(\phi_1+\theta)$ we get the nicer equation
$$a_2^{2}+\alpha a_2sin(\phi_2+\theta) = \gamma\tag{2}\label{eq2}$$
so we also have, by passing $a_2^{2}$ on the right side and squaring equation \eqref{eq2}:
$$\alpha^{2}a_2^{2}sin^{2}(\phi_2+\theta) = \gamma^{2}-2\gamma a_2^{2}+a_2^{4}$$
Now we remember we also have \eqref{eq1}. We multiply it by $\alpha$ and square it:
$$\alpha^{2}a_2^{2}cos^{2}(\phi_2+\theta)=\alpha^{2}m^{2}$$
then, by summing both equations you eliminate the $\phi_2$ dependence. You finally get:
$$\alpha^{2}a_2^{2} = \gamma^{2}+\alpha^{2}m^{2}-2\gamma a_2^{2}+a_2^{4}$$
so
$$a_2^{4} -(\alpha^{2}+2\gamma)a_2^{2}+\gamma^{2}+\alpha^{2}m^{2}=0\tag{3}\label{eq3}$$
I believe \eqref{eq3} is called a biquadratic equation. It means that you can solve it with $a_2^{2}$ as the unknown (you just have to find the roots of the polynomial $X^{2}-(\alpha^{2}+2\gamma)X+\alpha^{2}m^{2}+\gamma^{2}$) and then take $a_2$ as the generalized square roots of whatever you found for $a_2^{2}$ (with j or not, with + or -). You should get four roots and hope that only one makes sense.
Once you have $a_2$, for $\phi_2$ I suggest you use equations \eqref{eq2} and \eqref{eq1} :
$$ a_2sin(\phi_2+\theta) = \frac{\gamma-a_2^{2}}{\alpha}$$
$$a_2cos(\phi_2+\theta) = m$$
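Putting the whole closed-form recipe together as a sketch (hypothetical test values; note the constant term of the biquadratic is $\gamma^{2}+\alpha^{2}m^{2}$, since \eqref{eq1} was multiplied by $\alpha$ before squaring, and the spurious roots introduced by the squarings are filtered by checking the reconstructed sum):

```python
import numpy as np

# Hypothetical known values: summand (a1, phi1) and sum (A, phi).
a1, phi1 = 1.0, 0.3
a2_true, phi2_true = 0.7, 1.1                      # values we hope to recover
z = a1*np.exp(1j*phi1) + a2_true*np.exp(1j*phi2_true)
A, phi = np.abs(z), np.angle(z)

t = np.tan(phi)
theta = np.arctan(1.0 / t)
m = -a1*(t*np.cos(phi1) - np.sin(phi1)) / np.sqrt(1 + t**2)
alpha = 2*a1*np.sin(phi1 + theta)
gamma = A**2 - a1**2 - 2*a1*m*np.cos(phi1 + theta)

# biquadratic: X^2 - (alpha^2 + 2*gamma) X + (gamma^2 + alpha^2 m^2) = 0, X = a2^2
solutions = []
for X in np.roots([1.0, -(alpha**2 + 2*gamma), gamma**2 + alpha**2 * m**2]):
    if X.imag != 0 or X.real <= 0:
        continue
    a2 = np.sqrt(X.real)
    # phi2 from the sine/cosine pair: sin(phi2+theta), cos(phi2+theta)
    phi2 = np.arctan2((gamma - X.real) / (alpha * a2), m / a2) - theta
    # squaring introduced extra roots: keep only those reproducing the sum
    if np.isclose(a1*np.exp(1j*phi1) + a2*np.exp(1j*phi2), z):
        solutions.append((a2, phi2))

print(solutions)   # one surviving pair, close to (0.7, 1.1)
```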
Take the ratio of both equations and solve with arctan, but you can also go with arccos on \eqref{eq1} or arcsine on \eqref{eq2}; it should depend on which hypothesis you make on $\phi_2$. | {
"domain": "dsp.stackexchange",
"id": 11769,
"tags": "wave, interference"
} |
Atomic clock in kitchen microwave or other random electronics? | Question: Is it possible for atomic clocks to be put in anything, say a kitchen microwave? Or a regular wall clock?
If so, why are they not in all clocks?
Is it cost? How much does this tech cost to add to something?
Answer: The cheapest and smallest atomic clocks are usually optically pumped rubidium cell clocks, and you can get some of these models for a few thousand euros, maybe less. Are you willing to spend that much for a wall clock?
In any case, consider the following points:
Rubidium cell clocks are not primary standards (that is, they do not realize the second according to the International System of Units) and, moreover, due to their principle of operation, they are subject to drift (unlike beam- or fountain-type atomic clocks). Indeed, they drift much less than a quartz clock but, yes, they do drift.
They are complex devices and their average lifetime is less than that of a quartz oscillator.
Nowadays there are cheap ways to synchronize an oscillator through the GPS navigation system or through the network. | {
"domain": "physics.stackexchange",
"id": 48325,
"tags": "atomic-physics, electronics, technology, atomic-clocks"
} |
Overlapping between two intervals: reasoning / algorithm to find the set of disjoint and overlapping intervals | Question: Consider the positive integers {1, 2, 3, 4, ...} and the corresponding Integer Number Line.
Suppose we have four integer numbers, A, B, C and D.
For example:
_________________A___________________________B__________________
___________________________C__________D_________________________
or
_________________A___________________________B__________________
___________________________C____________________________D_______
Consider the notation AB as the set of all consecutive numbers A, ...., B, and the interval CD as the set of all consecutive numbers C, ..., D.
I would like to know a simple yet abstract and generalized reasoning (a mathematical demonstration would be really appreciated) that we can apply if we want to find all intervals consisting of the "disjoint" numbers and the "overlapping" numbers between both sets of numbers, AB and CD.
Example 1: If AB = {2, 3, 4, 5, 6, 7, 8, 9} and CD = {4, 5, 6, 7}, we have the following subsets for each set AB and CD:
for AB: {2, 3}, {4, 5, 6, 7} and {8, 9}
for CD: {4, 5, 6, 7}
Example 2: If AB = {2, 3, 4, 5, 6, 7, 8, 9} and CD = {4, 5, 6, 7, 8, 9, 10, 11, 12}, we have the following subsets for each set AB and CD:
for AB: {2, 3}, {4, 5, 6, 7, 8, 9}
for CD: {4, 5, 6, 7, 8, 9} and {10, 11, 12}
But, which one would be a great and yet simple reasoning behind disjoint and overlapping sets of numbers between two given set of integer numbers?
Thanks very much for any clue!!
Answer: For two integers $A$ and $B$ where $A\le B$, let $[A,B)$ denote the half-closed half-open interval that consists of the numbers $A, A+1, \cdots, B-1$. In particular, if $A<B$, then there are $B-A$ numbers in $[A,B)$. If $A=B$, then there are no numbers in $[A,B)$. For example,
$[0,1)$ means the number 0.
Both $[0,0)$ and $[3, 3)$ mean no numbers.
$[5,11)$ means the numbers $5,6,7,8,9,10.$
To denote the numbers $-2, -1, 0, 1, 2$, we can use $[-2, 3)$.
A nice property of half-closed half-open interval: If $L\le M \le R$, then $[L,R) = [L,M) \sqcup [M,R)$.
Proof: I am afraid it is obvious by definition. Done.
Suppose we are given $[A, B)$ and $[C, D)$, where $A\lt B$ and $C\lt D$. Then $[A,B)$ overlaps $[C,D)$ if and only if $\max(A,C)\lt \min(B,D)$.
Proof:
"$\Rightarrow$": Let $n$ be a number in both $[A,B)$ and $[C,D)$, i.e.,
$A\le n\lt B$ and $C\le n \lt D$. Then $\max(A,C)\le n\lt \min(B,D)$.
"$\Leftarrow$": $A\le \max(A,C)\lt \min(B,D)\le B$, i.e., $\max(A,C)\in [A,B)$. Similarly, $\max(A,C)\in [C,D)$. So $[A,B)$ overlaps $[C,D)$.
Done.
Suppose $[A,B)$ and $[C,D)$ overlap. Then we have the following set decompositions.
$[A,B) = [A, \max(A,C))\sqcup[\max(A,C), \min(B,D))\sqcup[\min(B,D), B)$
$[C,D) = [C, \max(A,C))\sqcup[\max(A,C), \min(B,D))\sqcup[\min(B,D), D)$
Proof:
We have $A\le \max(A,C) \le \min(B,D)\le B$. So,
$$\begin{align}
[A,B) &= [A, \max(A,C))\sqcup[\max(A,C),B)\\
&= [A, \max(A,C))\sqcup[\max(A,C), \min(B,D))\sqcup[\min(B,D), B).
\end{align}$$
Example 1: If $I = \{2, 3, 4, 5, 6, 7, 8, 9\}$ and $J = \{4, 5, 6, 7\}$, then $I=[2,10)$ and $J=[4,8)$. We have the following set decompositions.
$[2,10) = [2, 4)\sqcup [4, 8) \sqcup [8,10)$.
$[4,8) = [4, 4)\sqcup [4, 8) \sqcup [8,8) = [4,8)$.
Example 2: If $I = \{2, 3, 4, 5, 6, 7, 8, 9\}$ and $J = \{4, 5, 6, 7, 8, 9, 10, 11, 12\}$, then $I=[2,10)$ and $J=[4,13)$. We have the following set decompositions.
$[2,10) = [2, 4)\sqcup [4, 10) \sqcup [10,10)= [2, 4)\sqcup [4, 10)$.
$[4,13) = [4, 4)\sqcup [4, 10) \sqcup [10,13)= [4, 10) \sqcup [10,13)$.
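A direct sketch of the resulting algorithm (hypothetical Python helper; intervals are `(lo, hi)` pairs with the half-open convention, and empty pieces $[x,x)$ are dropped):

```python
def decompose(a, b, c, d):
    """Split overlapping half-open intervals [a,b) and [c,d) at the
    common part [max(a,c), min(b,d))."""
    assert max(a, c) < min(b, d), "intervals must overlap"
    lo, hi = max(a, c), min(b, d)
    # drop empty pieces [x, x)
    ab = [p for p in [(a, lo), (lo, hi), (hi, b)] if p[0] < p[1]]
    cd = [p for p in [(c, lo), (lo, hi), (hi, d)] if p[0] < p[1]]
    return ab, cd

print(decompose(2, 10, 4, 8))    # Example 1: ([(2, 4), (4, 8), (8, 10)], [(4, 8)])
print(decompose(2, 10, 4, 13))   # Example 2: ([(2, 4), (4, 10)], [(4, 10), (10, 13)])
```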
The above examples should indicate clearly enough the algorithm that decomposes two overlapping intervals into disjoint intervals. | {
"domain": "cs.stackexchange",
"id": 15648,
"tags": "algorithms, sets, integers"
} |
Let's (path) find A Star | Question: Yesterday I found myself struggling with creating an A * algorithm in Java, but this morning I finally figured it out, so I would love to hear what Code Review has to say about it!
The goal here is not performance, since it was a learning exercise. I tried to make the code and algorithm as readable and simple as possible. My understanding is that using a binary heap in some fashion can improve the speed of the algorithm quite a bit. However, I don't know how to implement this, and my intended usage at this time is for map generation, so the speed is not especially critical.
For this implementation I do not need to consider diagonals. I haven't implemented it yet, but the next step will be to use the type of terrain of each tile on the map in order to affect the movement cost of the heuristic, so instead of a list of walls, the entire map data will be passed into the object.
Here is some proof that the algorithm is working:
The Node
public class AStarNode {
public final MapPoint point;
public AStarNode parent;
public int gValue; //points from start
public int hValue; //distance from target
public boolean isWall = false;
private final int MOVEMENT_COST = 10;
public AStarNode(MapPoint point) {
this.point = point;
}
/**
* Used for setting the starting node value to 0
*/
public void setGValue(int amount) {
this.gValue = amount;
}
public void calculateHValue(AStarNode destPoint) {
this.hValue = (Math.abs(point.x - destPoint.point.x) + Math.abs(point.y - destPoint.point.y)) * this.MOVEMENT_COST;
}
public void calculateGValue(AStarNode point) {
this.gValue = point.gValue + this.MOVEMENT_COST;
}
public int getFValue() {
return this.gValue + this.hValue;
}
}
The Algorithm
public class BZAstar {
private final int width;
private final int height;
private final Map<MapPoint, AStarNode> nodes = new HashMap<MapPoint, AStarNode>();
@SuppressWarnings("rawtypes")
private final Comparator fComparator = new Comparator<AStarNode>() {
public int compare(AStarNode a, AStarNode b) {
return Integer.compare(a.getFValue(), b.getFValue()); //ascending to get the lowest
}
};
public BZAstar(int width, int height, List<MapPoint> wallPositions) {
this.width = width;
this.height = height;
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
MapPoint point = new MapPoint(x, y);
this.nodes.put(point, new AStarNode(point));
}
}
for (MapPoint point : wallPositions) {
AStarNode node = this.nodes.get(point);
node.isWall = true;
}
}
@SuppressWarnings("unchecked")
public ArrayList<MapPoint> calculateAStarNoTerrain(MapPoint p1, MapPoint p2) {
List<AStarNode> openList = new ArrayList<AStarNode>();
List<AStarNode> closedList = new ArrayList<AStarNode>();
AStarNode destNode = this.nodes.get(p2);
AStarNode currentNode = this.nodes.get(p1);
currentNode.parent = null;
currentNode.setGValue(0);
openList.add(currentNode);
while(!openList.isEmpty()) {
Collections.sort(openList, this.fComparator);
currentNode = openList.get(0);
if (currentNode.point.equals(destNode.point)) {
return this.calculatePath(destNode);
}
openList.remove(currentNode);
closedList.add(currentNode);
for (MapDirection direction : MapDirection.values()) {
MapPoint adjPoint = direction.getPointForDirection(currentNode.point);
if (!this.isInsideBounds(adjPoint)) {
continue;
}
AStarNode adjNode = this.nodes.get(adjPoint);
if (adjNode.isWall) {
continue;
}
if (!closedList.contains(adjNode)) {
if (!openList.contains(adjNode)) {
adjNode.parent = currentNode;
adjNode.calculateGValue(currentNode);
adjNode.calculateHValue(destNode);
openList.add(adjNode);
} else {
if (adjNode.gValue < currentNode.gValue) {
adjNode.calculateGValue(currentNode);
currentNode = adjNode;
}
}
}
}
}
return null;
}
private ArrayList<MapPoint> calculatePath(AStarNode destinationNode) {
ArrayList<MapPoint> path = new ArrayList<MapPoint>();
AStarNode node = destinationNode;
while (node.parent != null) {
path.add(node.point);
node = node.parent;
}
return path;
}
private boolean isInsideBounds(MapPoint point) {
return point.x >= 0 &&
point.x < this.width &&
point.y >= 0 &&
point.y < this.height;
}
}
MapPoint is a simple point class with X and Y as integer values. I have overridden the equals and hashCode() methods so that two map points will be equal if they have the same X and Y values, even if they are not actually the same object.
MapDirection is pretty simple as well. For each direction, getPointForDirection(point) will return the delta of the input point and the direction X and Y values. Let me know if I need to post this class as well for you to review the code.
Answer: PriorityQueue
For openList, you could use a PriorityQueue instead of an ArrayList for better performance. With the ArrayList, every time you insert new elements, you need to sort the whole list, taking \$O(n\log n)\$ time. With the PriorityQueue, inserting a new element should only take \$O(\log n)\$ time. If you need to change an existing element, you should remove, modify, and reinsert the element to make sure that the element is correctly updated in the PriorityQueue.
HashSet
Similarly, for closedList, you could use a HashSet instead of an ArrayList. Your two operations on closedList are add() and contains(). With an ArrayList, contains() takes \$O(n)\$ time, but with a HashSet, contains() takes \$O(1)\$ time.
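To make the two suggestions concrete, here is a language-agnostic sketch of the pattern in Python (a binary heap via heapq for the open list, a hash set for the closed list; in Java the direct analogues are PriorityQueue and HashSet). The stale-entry check stands in for remove-and-reinsert: a fresh entry is pushed and outdated ones are skipped when popped.

```python
import heapq
import itertools

def a_star(start, goal, neighbors, h):
    """Generic A* sketch: neighbors(n) yields (neighbor, step_cost),
    h(n) is the heuristic. Open list = binary heap, closed list = set."""
    counter = itertools.count()          # tie-breaker so tuples never compare nodes
    open_heap = [(h(start), next(counter), 0, start, None)]  # (f, tie, g, node, parent)
    parents, closed = {}, set()
    while open_heap:
        f, _, g, node, parent = heapq.heappop(open_heap)
        if node in closed:
            continue                     # stale heap entry, skip it
        closed.add(node)
        parents[node] = parent
        if node == goal:                 # reconstruct the path via the parent map
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nbr, cost in neighbors(node):
            if nbr not in closed:
                heapq.heappush(open_heap, (g + cost + h(nbr), next(counter), g + cost, nbr, node))
    return None

# 1-D toy example: walk from 0 to 5 with unit step cost
path = a_star(0, 5, lambda n: [(n - 1, 1), (n + 1, 1)], lambda n: abs(5 - n))
print(path)   # [0, 1, 2, 3, 4, 5]
```

The 1-D toy search at the bottom only shows the call shape; `neighbors` and `h` are hypothetical stand-ins for the grid logic in the question.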
Example
If you want an example of how to use the PriorityQueue and HashSet, you could look at this other code review question. Be aware, that example did not update the PriorityQueue correctly, as it needed to remove and reinsert the node (see @mzivi's answer to that question). | {
"domain": "codereview.stackexchange",
"id": 15253,
"tags": "java, algorithm, game, pathfinding"
} |
Project Euler #5 - Lowest Multiple of 1 through 20 | Question: I was trying to solve problem 5 of Project Euler, and this was I needed to do:
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible by all
of the numbers from 1 to 20?
I came up with a solution, and I get the correct answer in about 7 minutes. Calculation is incredibly slow; how could I optimize it?
for (long x = 1; x < 1000000000; x++) {
bool dividable = true;
for (int y = 20; y > 0; y--) {
if (x % y != 0) {
dividable = false;
}
}
if (dividable == true) {
Console.WriteLine (x);
}
}
}
Answer: There are a few obvious things:
You loop from 20 to 0 every time. You only need to loop from 20 to 2 -- every number is evenly divisible by 1.
You divide every number from everything from 20 to 0. If it's not divisible by 20, why check if it's divisible by 19, 18, 17, etc? It already failed.
Only numbers ending in 0 are divisible by 10. You can eliminate a lot of possibilities right off the bat by checking for that. In fact, you can just increment by 10 every time. You've just cut the amount of numbers you're checking by an order of magnitude.
Only numbers ending in 0 or 5 are divisible by 5.
There's a whole list of divisibility rules: http://en.wikipedia.org/wiki/Divisibility_rule
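To illustrate, here is a sketch (in Python, with a hypothetical helper name) applying two of those eliminations: stepping only through multiples of the largest divisor, and short-circuiting at the first failed division. It is demonstrated on the 1-to-10 case from the problem statement:

```python
def smallest_divisible(n):
    # only multiples of n can be divisible by n, so step by n;
    # all() short-circuits at the first failed division
    x = n
    while not all(x % y == 0 for y in range(n - 1, 1, -1)):
        x += n
    return x

print(smallest_divisible(10))   # 2520, as in the problem statement
```

The same loop with `n = 20` finishes in a fraction of the original seven minutes, and the divisibility rules above can cut it down further.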
I think just fixing those obvious things will be enough to see a significant increase in overall performance. | {
"domain": "codereview.stackexchange",
"id": 23291,
"tags": "c#, performance, programming-challenge"
} |
Why does menthol (e.g. peppermint) feel cool to the tongue? | Question: Especially when drinking water after the fact, mint can give a sharp cold sensation inside one's mouth. What process causes the sensation to occur?
Answer: Menthol itself gives a cold feeling in the mouth because it is active at the same receptor (an ion channel) on the tongue that cold temperature triggers. Interestingly, although they act at the same receptor, they act at different sites, so that provides the intensified response when eating a mint and then drinking water. This reference gives an excellent detailed answer, with references to the original papers, which I'll summarize here.
Menthol acts at the TRPM8 protein which forms an ion channel that allows $\ce{Na+}$ and $\ce{Ca^2+}$ ions to flow into cells and this sends a signal saying "cool" to the brain. (As an aside, this protein monitors temperature across the body and not just on the tongue.) Cold temperatures actually change the conformation of this protein, which allows the ions to flow more freely, and sends the signal to the brain. Menthol, on the other hand, stabilizes the open channel (allowing ions to flow even more freely) and also...
“menthol shifts the voltage dependence of channel activation to more negative values by slowing channel deactivation”. This is very significant to my question because it supports a claim made by the first web page I visited, which stated that menthol acts on the receptors, leaving them sensitized for when the second stimulus is applied (i.e. cold water), resulting in the enhanced sensation. This mechanism of binding is very clearly different from the mechanism of cold affecting the TRP channels. This is why the sensation is increased when both stimuli are applied, yet is not affected by additional stimulation from the same stimulus (i.e. eating another mint).
All in all, pretty cool. | {
"domain": "chemistry.stackexchange",
"id": 80,
"tags": "biochemistry"
} |
Processing a binary file with buffer length tags | Question: I am trying to process a very large binary file using MappedByteBuffer from java.nio package.
This is what the data looks like in the file:
[0, 12, 83, 0, 0, 0, 0, 9, -11, -66, -116, -91, 100, 79,
39, 82, 0, 1, 0, 0, 10, 52, 126, -35, 45, -75, 65, 32, 32, 32, 32, 32, 32, 32, 78,
32,0,0, 0, 100, 78, 67, 90, 32, 80, 78, 32, 49, 78, 0, 0, 0, 0, 78....
....10 GB of more data]
We skip the first byte (0), the next byte(12) tells us how many bytes to process next. After processing 12 bytes, we see 39 and then we process next 39 bytes and so on.
Here is the code that works, but it's not the most efficient. MappedByteBuffers are expensive to create and there is no way to explicitly release them; we let the GC reclaim them whenever it runs. I am creating 2 MappedByteBuffers, one to read the first byte and then another to read the number of bytes the first buffer told me to. I could read, let's say, 1024 bytes into the first buffer itself, but I am not sure how to handle the case where there are not enough bytes left to read at the end of the buffer.
I am open to using 3rd party libraries, I am already using bytes-java to process my messages later. I tried using Chronicle-Bytes but couldn't understand how to use it in my use case.
public void parse(String filename) throws IOException {
try (RandomAccessFile file = new RandomAccessFile(new File(filename), "r")) {
FileChannel inChannel = file.getChannel();
//skip the first byte, start at 1
long position = 1;
long length = inChannel.size();
//get the number of bytes to read by reading the first byte
int bytesToRead = 1;
while(position < length) {
//first mapped buffer to read the number of bytes to read ahead
MappedByteBuffer msgTypeBuffer = inChannel.map(FileChannel.MapMode.READ_ONLY, position, bytesToRead);
bytesToRead = (int) (msgTypeBuffer.get()); //we have the number of bytes to read
position++; //move cursor to the next position
//second bytebuffer - reads bytesToRead
MappedByteBuffer msgBuffer = inChannel.map(FileChannel.MapMode.READ_ONLY, position, bytesToRead);
byte[] payBytes = new byte[bytesToRead];
msgBuffer.get(payBytes, 0, bytesToRead);
Message m = parsers.messageIn(payBytes, stock);
if (!m.isEmpty()) {
//process the message
}
//move the cursor after the bytes that have been read
position += bytesToRead + 1;
}
}
}
Answer: You seem to have taken a very odd starting position here. The data file is in a simple stream format, so I really can't see why you'd bother with RandomAccessFiles and MappedByteBuffers - they add no value.
The processing can be boiled down to this (add error handling as needed) :
Open a java.io.BufferedInputStream for the file
Read first byte (perhaps check it's actually zero) and discard it
Read a length byte
Allocate an appropriately sized byte array
Read into the array (different form of the read() method)
Process the array
Repeat from step 3 until finished (let GC deal with the arrays you don't need)
A sketch of the code would be :
public void parse(String filename) throws IOException {
try (BufferedInputStream inStream = new BufferedInputStream(new FileInputStream(filename))) {
//skip the first byte
inStream.read();
int bytesToRead = 0;
while((bytesToRead = inStream.read())!= -1) {
byte[] payBytes = new byte[bytesToRead];
// inStream.read(payBytes);
// May not read all bytes requested - prefer the loop below
int offset = 0;
while(bytesToRead > 0) {
int bytesRead = inStream.read(payBytes, offset, bytesToRead);
offset += bytesRead;
bytesToRead -= bytesRead;
}
Message m = parsers.messageIn(payBytes, stock);
if (!m.isEmpty()) {
//process the message
}
}
}
} | {
"domain": "codereview.stackexchange",
"id": 43193,
"tags": "java, parsing, file, serialization"
} |
What should be the angle from horizontal (in anticlockwise direction) for which range is maximum for a projectile projected from a height? | Question: I was studying projectiles in the section on 2D kinematics, where I came to know about ground-to-ground projectiles. I learned that for maximum range in ground-to-ground projectiles (such as when throwing a javelin), the launch angle should be 45° with the ground. But I also came to know that if the same thing is projected from a height, the angle for reaching maximum range will not be 45°. I tried finding that angle, but it got very complex and I could not get any result. Can you please tell me how to derive the result?
Answer: Juhi!
First, consider the motion equations by axis:
$x(t)=v_0 t \cos{\theta}$
$y(t)=h+v_0 t \sin{\theta}-gt^2/2$
being $h$ the initial height, $v_0$ the launch velocity, $\theta$ the launch angle, $t$ the time and, $(x;y)$ the coordinate of the projectile.
If you separate $t$ from first equation and substitute it in the second, you obtain the trajectory equation.
$y(x) = h + x \tan{\theta} - gx^2 (1+\tan^2{\theta})/(2v_0^2)$
Since you study the range $R$: at that point $y(R)=0$, so you must solve a quadratic equation.
$0=h + R \tan{\theta} - gR^2(1+\tan^2{\theta})/(2v_0^2)$
When you solve the above, you obtain two solutions (positive and negative); obviously you take the positive one.
$R=v_0^2\sin{2\theta}/(2g)\left(1+\sqrt{1+2gh/(v_0^2\sin^2{\theta})}\right)$
Note if $h=0$, the above equation is reduced to the known formula for horizontal ground launch.
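Before differentiating, it is easy to sanity-check this range formula numerically; a sketch with hypothetical launch values that locates the maximizing angle on a grid:

```python
import numpy as np

g, v0, h = 9.81, 10.0, 5.0                 # hypothetical launch speed and height

def R(theta):
    # range formula above, vectorized over theta
    s = np.sin(theta)
    return v0**2 * np.sin(2*theta) / (2*g) * (1 + np.sqrt(1 + 2*g*h / (v0**2 * s**2)))

thetas = np.linspace(0.01, np.pi/2 - 0.01, 200001)
best = thetas[np.argmax(R(thetas))]
print(np.degrees(best))                    # noticeably less than 45 when h > 0
```

For $h>0$ the optimum lands below 45°, consistent with the question's observation; as $h \to 0$ it moves back toward 45°.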
Now, you can apply the derivative $\frac{dR}{d\theta}=0$ for obtaining the optimal angle for maximal or minimal range. Obviously, for minimal range ($R=0$), the angle is $\theta=\pi/2$. | {
"domain": "physics.stackexchange",
"id": 95027,
"tags": "homework-and-exercises, kinematics, projectile"
} |
Linker error for geometry_msgs | Question:
Hi,
I have been exploring ROS 2 recently: differences with ROS 1, creating custom messages, pub/sub examples for C++, etc.
I just wanted to demonstrate that another package can be used in a custom message.
I generated a message for simple pub-sub example.
Message and its CMakefile in autodrive_msgs package :
AccelCommand.msg
float32 accel
geometry_msgs/Vector3 v
CMakefile:
cmake_minimum_required(VERSION 3.5)
project(autodrive_msgs)
# Default to C99
if(NOT CMAKE_C_STANDARD)
set(CMAKE_C_STANDARD 99)
endif()
# Default to C++14
if(NOT CMAKE_CXX_STANDARD)
set(CMAKE_CXX_STANDARD 14)
endif()
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
# find dependencies
find_package(ament_cmake REQUIRED)
find_package(rclcpp)
find_package(autodrive_msgs)
find_package(rosidl_default_generators REQUIRED)
rosidl_generate_interfaces(${PROJECT_NAME}
"msg/AccelCommand.msg"
)
# uncomment the following section in order to fill in
# further dependencies manually.
# find_package(<dependency> REQUIRED)
if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
# the following line skips the linter which checks for copyrights
# uncomment the line when a copyright and license is not present in all source files
#set(ament_cmake_copyright_FOUND TRUE)
# the following line skips cpplint (only works in a git repo)
# uncomment the line when this package is not in a git repo
#set(ament_cmake_cpplint_FOUND TRUE)
ament_lint_auto_find_test_dependencies()
endif()
ament_package()
I can generate the message. However, it gives me a linker error when I want to use the message.
Codes and CMakefile in pub_sub package:
pub.cpp:
#include <chrono>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "autodrive_msgs/msg/accel_command.hpp" // CHANGE
using namespace std::chrono_literals;
class MinimalPublisher : public rclcpp::Node
{
public:
MinimalPublisher()
: Node("minimal_publisher"), count_(0)
{
publisher_ = this->create_publisher<autodrive_msgs::msg::AccelCommand>("topic", 10); // CHANGE
timer_ = this->create_wall_timer(
500ms, std::bind(&MinimalPublisher::timer_callback, this));
}
private:
void timer_callback()
{
auto message = autodrive_msgs::msg::AccelCommand(); // CHANGE
message.accel = this->count_++; // CHANGE
RCLCPP_INFO(this->get_logger(), "Publishing: '%d'", message.accel); // CHANGE
publisher_->publish(message);
}
rclcpp::TimerBase::SharedPtr timer_;
rclcpp::Publisher<autodrive_msgs::msg::AccelCommand>::SharedPtr publisher_; // CHANGE
size_t count_;
};
int main(int argc, char * argv[])
{
rclcpp::init(argc, argv);
rclcpp::spin(std::make_shared<MinimalPublisher>());
rclcpp::shutdown();
return 0;
}
sub.cpp:
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "autodrive_msgs/msg/accel_command.hpp" // CHANGE
using std::placeholders::_1;
class MinimalSubscriber : public rclcpp::Node
{
public:
MinimalSubscriber()
: Node("minimal_subscriber")
{
subscription_ = this->create_subscription<autodrive_msgs::msg::AccelCommand>( // CHANGE
"topic", 10, std::bind(&MinimalSubscriber::topic_callback, this, _1));
}
private:
void topic_callback(const autodrive_msgs::msg::AccelCommand::SharedPtr msg) const // CHANGE
{
RCLCPP_INFO(this->get_logger(), "I heard: '%d'", msg->accel); // CHANGE
}
rclcpp::Subscription<autodrive_msgs::msg::AccelCommand>::SharedPtr subscription_; // CHANGE
};
int main(int argc, char * argv[])
{
rclcpp::init(argc, argv);
rclcpp::spin(std::make_shared<MinimalSubscriber>());
rclcpp::shutdown();
return 0;
}
CMakefile:
cmake_minimum_required(VERSION 3.5)
project(pub_sub)
# Default to C99
if(NOT CMAKE_C_STANDARD)
set(CMAKE_C_STANDARD 99)
endif()
# Default to C++14
if(NOT CMAKE_CXX_STANDARD)
set(CMAKE_CXX_STANDARD 14)
endif()
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
# find dependencies
find_package(ament_cmake REQUIRED)
find_package(rclcpp)
find_package(autodrive_msgs)
find_package(geometry_msgs)
# uncomment the following section in order to fill in
# further dependencies manually.
# find_package(<dependency> REQUIRED)
add_executable(talker src/pub.cpp)
ament_target_dependencies(talker rclcpp autodrive_msgs geometry_msgs) # CHANGE
add_executable(listener src/sub.cpp)
ament_target_dependencies(listener rclcpp autodrive_msgs geometry_msgs) # CHANGE
install(TARGETS
talker
listener
DESTINATION lib/${PROJECT_NAME})
if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
# the following line skips the linter which checks for copyrights
# uncomment the line when a copyright and license is not present in all source files
#set(ament_cmake_copyright_FOUND TRUE)
# the following line skips cpplint (only works in a git repo)
# uncomment the line when this package is not in a git repo
#set(ament_cmake_cpplint_FOUND TRUE)
ament_lint_auto_find_test_dependencies()
endif()
ament_package()
I get this error message:
clepz@clepz-PC:~/workspace/autodrivemessages_ros2$ colcon build --packages-select pub_sub
Starting >>> pub_sub
--- stderr: pub_sub
/home/clepz/workspace/autodrivemessages_ros2/install/autodrive_msgs/lib/libautodrive_msgs__rosidl_typesupport_fastrtps_cpp.so: undefined reference to `geometry_msgs::msg::typesupport_fastrtps_cpp::get_serialized_size(geometry_msgs::msg::Vector3_<std::allocator<void> > const&, unsigned long)'
/home/clepz/workspace/autodrivemessages_ros2/install/autodrive_msgs/lib/libautodrive_msgs__rosidl_typesupport_fastrtps_cpp.so: undefined reference to `geometry_msgs::msg::typesupport_fastrtps_cpp::max_serialized_size_Vector3(bool&, unsigned long)'
/home/clepz/workspace/autodrivemessages_ros2/install/autodrive_msgs/lib/libautodrive_msgs__rosidl_typesupport_fastrtps_cpp.so: undefined reference to `geometry_msgs::msg::typesupport_fastrtps_cpp::cdr_deserialize(eprosima::fastcdr::Cdr&, geometry_msgs::msg::Vector3_<std::allocator<void> >&)'
/home/clepz/workspace/autodrivemessages_ros2/install/autodrive_msgs/lib/libautodrive_msgs__rosidl_typesupport_fastrtps_cpp.so: undefined reference to `geometry_msgs::msg::typesupport_fastrtps_cpp::cdr_serialize(geometry_msgs::msg::Vector3_<std::allocator<void> > const&, eprosima::fastcdr::Cdr&)'
collect2: error: ld returned 1 exit status
make[2]: *** [talker] Error 1
make[1]: *** [CMakeFiles/talker.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
/home/clepz/workspace/autodrivemessages_ros2/install/autodrive_msgs/lib/libautodrive_msgs__rosidl_typesupport_fastrtps_cpp.so: undefined reference to `geometry_msgs::msg::typesupport_fastrtps_cpp::get_serialized_size(geometry_msgs::msg::Vector3_<std::allocator<void> > const&, unsigned long)'
/home/clepz/workspace/autodrivemessages_ros2/install/autodrive_msgs/lib/libautodrive_msgs__rosidl_typesupport_fastrtps_cpp.so: undefined reference to `geometry_msgs::msg::typesupport_fastrtps_cpp::max_serialized_size_Vector3(bool&, unsigned long)'
/home/clepz/workspace/autodrivemessages_ros2/install/autodrive_msgs/lib/libautodrive_msgs__rosidl_typesupport_fastrtps_cpp.so: undefined reference to `geometry_msgs::msg::typesupport_fastrtps_cpp::cdr_deserialize(eprosima::fastcdr::Cdr&, geometry_msgs::msg::Vector3_<std::allocator<void> >&)'
/home/clepz/workspace/autodrivemessages_ros2/install/autodrive_msgs/lib/libautodrive_msgs__rosidl_typesupport_fastrtps_cpp.so: undefined reference to `geometry_msgs::msg::typesupport_fastrtps_cpp::cdr_serialize(geometry_msgs::msg::Vector3_<std::allocator<void> > const&, eprosima::fastcdr::Cdr&)'
collect2: error: ld returned 1 exit status
make[2]: *** [listener] Error 1
make[1]: *** [CMakeFiles/listener.dir/all] Error 2
make: *** [all] Error 2
---
Failed <<< pub_sub [2.17s, exited with code 2]
Summary: 0 packages finished [2.29s]
1 package failed: pub_sub
1 package had stderr output: pub_sub
If I delete geometry_msgs/vector v and its dependency from the CMakeLists file, the pub and sub examples run fine.
I tried many things, such as building without colcon and supplying all the libraries manually. The result is the same undefined reference to geometry_msgs.
I tried to build in ros2 docker with same packages. Result is the same.
I can get this from the same terminal where I tried to build:
clepz@clepz-PC:~/workspace/autodrivemessages_ros2$ ros2 msg show geometry_msgs/msg/Vector3
# This represents a vector in free space.
float64 x
float64 y
float64 z
Originally posted by clepz on ROS Answers with karma: 15 on 2021-04-05
Post score: 0
Answer:
I copied your code to give it a try and it did compile for me. I however then get the following error when trying to run a node:
$ ros2 run pubsub talker
/home/sander/install/pubsub/lib/pubsub/talker: symbol lookup error: /home/sander/install/autodrive_msgs/lib/libautodrive_msgs__rosidl_typesupport_fastrtps_cpp.so: undefined symbol: _ZN13geometry_msgs3msg24typesupport_fastrtps_cpp27max_serialized_size_Vector3ERbm
I could get it working with the following pointers:
Have you sourced the whole of the ROS 2 environment? If you only do source install/local_setup.bash, only the packages in your local workspace will be available, but not the base ROS 2 packages. So autodrive_msgs can be found, but not geometry_msgs, which matches what you are seeing. If this is the case, you need to either source install/setup.bash, or source the base environment first. See also: https://answers.ros.org/question/292566/what-is-the-difference-between-local_setupbash-and-setupbash/
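As a concrete sketch of the two options (the distro name and workspace path here are illustrative, adjust them to your setup):

```shell
# Option A: source the base environment first, then the workspace overlay
source /opt/ros/foxy/setup.bash
source ~/workspace/autodrivemessages_ros2/install/local_setup.bash

# Option B: install/setup.bash chains to the environment the workspace
# was built against, so this alone is usually enough
source ~/workspace/autodrivemessages_ros2/install/setup.bash
```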
Your autodrive_msgs package is the one with the direct dependency on geometry_msgs, so you should specify that in that package. I am slightly surprised the autodrive_msgs package built without doing so and thought that was your issue, but I guess the generation process doesn't actually need to work with the base messages. You should still specify the dependency, though, to enable easier use of your msgs package. For this, you should add this to autodrive_msgs/CMakeLists.txt:
find_package(geometry_msgs)
And then specify the dependency in the rosidl_generate_interface call:
rosidl_generate_interfaces(${PROJECT_NAME}
"msg/AccelCommand.msg"
DEPENDENCIES geometry_msgs
)
Finally, you should also have a <depend>geometry_msgs</depend> in autodrive_msgs/package.xml. This way any package that depends on your autodrive_msgs package will know it transitively depends on geometry_msgs, and you don't have to specify that in pub_sub/CMakeLists.txt.
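For reference, here is a sketch of what the relevant autodrive_msgs/package.xml entries might look like (these tag names follow the standard ROS 2 interface-package layout; keep your existing entries):

```xml
<package format="3">
  <name>autodrive_msgs</name>
  <!-- ... existing entries ... -->
  <buildtool_depend>rosidl_default_generators</buildtool_depend>
  <depend>geometry_msgs</depend>
  <exec_depend>rosidl_default_runtime</exec_depend>
  <member_of_group>rosidl_interface_packages</member_of_group>
</package>
```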
You have find_package(autodrive_msgs) in autodrive_msgs/CMakeLists.txt, implying the package depends on itself, which can't be (you should get a warning about that when building the package), so that line can/should go.
Originally posted by sgvandijk with karma: 649 on 2021-04-05
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by clepz on 2021-04-05:
Thank you a lot. It builds now. I get this warning:
WARNING:colcon.colcon_ros.prefix_path.ament:The path '/home/clepz/workspace/autodrivemessages_ros2/install/pub_sub' in the environment variable AMENT_PREFIX_PATH doesn't exist
Should I ignore that?
Comment by sgvandijk on 2021-04-05:
Good to hear you got it building! The warning you get usually indicates you removed a previously built package from the install directory or maybe renamed and rebuilt the package, after you had sourced [local_]setup.bash, causing the old path still to be in the AMENT_PREFIX_PATH environment variable even though it is not there on disk any more. You can indeed ignore it. Opening a new terminal and sourcing the workspace again may resolve it, or else a completely fresh start by doing rm -r install build and build again. In general it is advisable not to source install/[local_]setup.bash in the same terminal where you run Colcon (but to just source the base ROS 2 installation), to prevent this kind of issue and hidden issues with how dependencies are specified in your packages. | {
"domain": "robotics.stackexchange",
"id": 36281,
"tags": "ros, ros2, rclcpp, build"
} |
How does a plant grow before photosynthesis is possible? | Question: During photosynthesis, a plant translates CO2, water and light into O2. I assume the carbon C is further used for the growing process. I wonder how the plant grows before the time where photosynthesis is possible, i.e. before there are even leaves, in which photosynthesis occurs.
To what extent does the plant grow from the seed/the minerals in the ground?
How much carbon is relevant for which parts of the plant, and at what stages of the plant's development?
I don't know if these are the only life periods of the plant, i.e. whether there are other major biochemistry-of-growth changes besides the difference before and after the plant gets leaves. So are there other relevant aspects to this? Once leaves are present, does the rigid structure of the plant come only from the CO2 in the air and no longer from the ground?
The answer will probably depend on the plant. So here is another formulation of the question:
What are typical characteristics of different plants in this regard? I.e., how do common species of plants manage their C consumption before (and after) the development of leaves?
Answer: There are quite a few questions and thoughts in there, I'll try to cover them all:
First, to correct your initial word equation: During photosynthesis, a plant translates CO2 and water into O2 and carbon compounds using energy from light (photons).
You are correct to assume the C is further used for the growing process; it is used to make sugars which store energy in their bonds. That energy is then released when required to power other reactions, which is how a plant lives and grows. C is also incorporated into all the organic molecules in the plant.
Plants require several things to live: CO2, light, water and minerals. If any of those things is missing for a sustained period, growth will suffer. Most molecules in a plant require some carbon, which comes originally from CO2, and also an assortment of other elements which come from the mineral nutrients in the soil. So the plant is completely reliant on minerals.
Most plants, before a leaf is established or roots develop, grow using energy and nutrients stored in the endosperm and cotyledons of the seed. I whipped up a rough diagram below. Cotyledons are primitive leaves inside the seed. The endosperm is a starchy tissue used only for storage of nutrients and energy. The radicle is the juvenile root. The embryo is the baby plant.
When the seed germinates the embryo elongates, the endosperm depletes, the testa ruptures, and the cotyledons emerge from the seed. The cotyledons are green, and like leaves can photosynthesise, so as soon as they are in the light the plant is able to make carbon compounds. The radicle elongates at the same time, and becomes the root, so the plant is very quickly able to obtain fresh nutrients from the soil (or whatever it's growing in).
At all stages of a plant's life it is using both energy stored in carbon compounds (from CO2) and nutrients which it took up via its roots. At no point does the plant start to depend solely on the CO2 in the air for its growth.
You are right that the way in which plants acquire energy and nutrients prior to leaves and roots being established varies between plants. Above I outlined the way most plants use. But there are lots of variations. For example, orchids have very tiny seeds, some barely visible to the naked eye, like specks of dust. They have no endosperm or storage tissue, so they have to rely on a symbiotic mycorrhizal fungus to get carbon and nutrients. The fungus grows through the coat of the orchid seed, then provides everything the growing plant needs until it has its own leaves. Then the orchid repays the fungus by providing sugars.
There are lots of other examples, but we could go on all day! | {
"domain": "biology.stackexchange",
"id": 5474,
"tags": "biochemistry, botany, plant-physiology, photosynthesis"
} |
What is meant by ‘local structure’ of proteins? | Question: The EBI/EMBL training course includes the following definition of Secondary structure of proteins:
Secondary structure refers to the regular, local structure of the protein backbone, stabilized by intramolecular and sometimes intermolecular hydrogen bonding of amide groups.
What does the word “local” mean in this case?
Is there anything that is not local in the case of protein structures?
Answer: ‘Local’ appears to be a term that is used in relation to the structure of proteins to distinguish specific small parts of it from the overall structure, which in this context appears to be termed the ‘global’ structure.
I write ‘appears’, because I had not met it before, but I draw this conclusion from a section of a review in the journal, Computers, entitled Local Structure Comparison of Proteins:
Starting from the 3D coordinates of the atoms in a protein…
global structure comparison can determine the similarity of two complete protein structures.… However, a protein's global structure does not always determine its function.… For this reason there has been increased interest in local structure comparison to identify structural similarity between parts of proteins.
One does not, in my experience, encounter this term in definitions of secondary structure in general biochemistry texts, which might use something more generally comprehensible like “sections of the overall structure”. Although EBI/EMBL is an authoritative institution, its staff tend to have expertise in research and computational work rather than education. | {
"domain": "biology.stackexchange",
"id": 11895,
"tags": "proteins, protein-structure"
} |
Which material makes best radiator in Cherenkov detector? | Question: What qualities of a radiator are important if I want to build a Cherenkov detector to detect muons? I don't know how to choose the most appropriate. Possibilities would include ice, water, maybe aerogel, anything I can easily get my hands on. I know water is most typically used, but is it worth exploring other options?
Answer: The parameters that matter are
Index of refraction. This controls which speeds generate light ($v \gt \frac{c}{n}$), the opening angle of the cone of light generated, and the geometry for possible total internal reflection (TIR) of the light so generated (TIR can be necessary or a problem depending on the geometry you intend for your detector).
Keep in mind that index of refraction is a frequency dependent quality. We are most interested in the value in the blue end of the optical and the near UV.
The path length through the material (yes, length rather than areal density, which dominates for other processes), which affects how much light is generated.
The transparency of the medium to the Cerenkov light. Especially if your geometry creates long optical paths in the radiator medium (designs where you take advantage of TIR to transmit the light out of the beam path can have this 'feature').
And the thing about this is that the values that you want are all a matter of design decisions for a particular case.
We can't give you hard and fast rules that apply to all designs. You really need to understand what your detector is designed to do.
But the easiest decision is usually the threshold velocity and that feeds into selecting the desired index of refraction range.
Most solids have $n \gtrsim 1.4$ which implies $v_\text{threshold} \ge 0.7c$
Most liquids have $n \gtrsim 1.2$ which implies $v_\text{threshold} \ge 0.8c$
Gasses and aerogels usually have low index of refraction meaning threshold velocities near $c$ and are often used for $0.9c < v_\text{threshold} < c$. | {
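To put rough numbers on the choice, here is a quick calculation (my sketch; the indices are typical textbook figures, not measured values):

```python
import math

# Threshold speed is beta = 1/n; the Cherenkov cone half-angle for an
# ultra-relativistic particle (beta ~ 1) satisfies cos(theta) = 1/n.
radiators = {"water": 1.33, "ice": 1.31, "aerogel": 1.05}

for name, n in radiators.items():
    beta_threshold = 1.0 / n
    theta_deg = math.degrees(math.acos(1.0 / n))
    print(f"{name:8s} n={n:.2f}  beta_threshold={beta_threshold:.3f}  "
          f"cone half-angle ~ {theta_deg:.1f} deg")
```

Water and ice come out nearly identical, while an aerogel raises the threshold toward $c$, which is exactly the velocity-selection knob discussed above.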
"domain": "physics.stackexchange",
"id": 35081,
"tags": "experimental-physics, cosmic-rays"
} |
Determining the probability of outcomes in a measurement | Question: While showing us the Schrodinger's cat experiment, my physics teacher defined:
$$\varphi_\text{alive} = \begin{bmatrix}1\\0\end{bmatrix},\qquad \varphi_\text{dead} = \begin{bmatrix}0\\1\end{bmatrix}, \qquad \hat{O}\varphi = \begin{bmatrix}1&0\\0&-1\end{bmatrix}\varphi,$$ and $$\Phi = \frac{1}{\sqrt{2}}\varphi_\text{alive} + \frac{1}{\sqrt{2}}\varphi_\text{dead},$$
such that $\hat{O}\varphi_\text{alive} = \varphi_\text{alive}$ and $\hat{O}\varphi_\text{dead} = -\varphi_\text{dead}$.
He later claims that there's a 50% chance that $\hat{O}\Phi = \varphi_\text{alive}$ and a 50% chance that $\hat{O}\Phi = \varphi_\text{dead}$. But according to the definitions, $\hat{O}\Phi$ is just a matrix multiplication whose result is $\begin{bmatrix}\frac{1}{\sqrt{2}}\\-\frac{1}{\sqrt{2}}\end{bmatrix}$...
How could doing the same operation to the same vector two times give two different results, both of which are wrong?
Answer: Let me rephrase what your teacher did. They first defined an operator $\hat{O}$ given by the matrix you have. They then noticed that the operator has two eigenvectors with eigenvalues $+1$ and $-1$, given by the two vectors that you have. Notice that these eigenvectors are orthonormal.
They then interpreted the two eigenvectors as two states of a quantum system, one corresponding to "alive" and the other one to "dead". Then they supposed that you have a system which is in a specific linear combination of these eigenvectors, that is their wavefunction is the $\Phi$ you have.
They then applied the following two axioms of quantum mechanics, things which we believe are rules of nature and cannot be derived from anything else:
The value of the measurement of an observable (operator) is one of its eigenvalues; the system then "collapses" to the corresponding eigenvector.
The probability of obtaining a certain eigenvalue is given by the modulus squared of the coefficient of the orthonormal eigenvector corresponding to the eigenvalue in the expansion of the wavefunction.
Your wavefunction is already expanded in terms of orthonormal eigenvectors of $\hat{O}$. By the first axiom, a measurement of $\hat{O}$ can give either $+1$ ("alive") or $-1$ ("dead"). Before the measurement, the wavefunction is given by $\Phi$. Say the measurement yields $+1$. After the measurement, the wavefunction of the system is given by $\Phi'=\varphi_{alive}$. The wavefunction has "collapsed".
By the second axiom, the probability of getting $+1$ is given by $|1/\sqrt{2}|^2=1/2=50\%$ and is the same as the probability of getting $-1$.
In particular, the measurement of an operator is not given by applying the operator to the wavefunction. This works only if the system is in a state which is an eigenvector of the operator. In this case the system has a definite value for that particular operator (which can be for instance, position or momentum). Otherwise the system does not have a definite value for that operator, and when you measure it, you get different outcomes with different probabilities. | {
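As a tiny numeric illustration of this distinction (a sketch of mine, not part of the original answer):

```python
import math

# Applying the operator is mere matrix multiplication; measurement
# probabilities instead come from the Born rule.
alive = [1.0, 0.0]
dead = [0.0, 1.0]
inv_sqrt2 = 1.0 / math.sqrt(2.0)
Phi = [inv_sqrt2, inv_sqrt2]   # (alive + dead) / sqrt(2)

O = [[1.0, 0.0], [0.0, -1.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

# O applied to Phi is just another vector, not a measurement outcome:
print(matvec(O, Phi))

# Born rule: P(eigenvalue) = |<eigenvector, Phi>|^2
p_alive = inner(alive, Phi) ** 2
p_dead = inner(dead, Phi) ** 2
print(p_alive, p_dead)   # both ~ 0.5
```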
"domain": "physics.stackexchange",
"id": 44850,
"tags": "quantum-mechanics, wavefunction, hilbert-space, superposition, schroedingers-cat"
} |
Do quantum jumps define the color of objects? | Question: When an electron jumps to a higher energy level due to the absorption of energy, let's say light, and then later jumps to a lower energy level, is the frequency of the photon that is released, presuming that frequency is of the visible light spectrum, the color that we see?
Answer:
When an electron jumps to a higher energy level due to the absorption of energy,
A solid is matter in bulk, usually in some form of lattice. If it is opaque, as your question implies, it elastically scatters (reflects) the photons that are not absorbed, while the energy of the absorbed photons is dissipated in lattice transitions.
let's say light, and then once it jumps to a lower energy level, is the photon that is released, presuming its frequency is of the visible light spectrum, the color that we see?
The absorption of photons that interact with the lattice or the surface molecules removes those frequencies from the light band. The photons that are reflected elastically give the perceived color. When the excited states fall back to lower energies, the emission is distributed over the full 4pi solid angle and generally does not reflect back toward the observer, since there are many lattice levels to relax into, generating lower-frequency photons in all directions.
"domain": "physics.stackexchange",
"id": 39484,
"tags": "quantum-mechanics"
} |
Verify loop_rate | Question:
I have set a loop rate in the following way:
ros::Rate r(100);
while (ros::ok()) {
    ros::spinOnce();
    function();
    r.sleep();
}
Is there any way to verify whether I am achieving this rate, or whether the function is taking longer than that?
Thank you.
Originally posted by Carollour on ROS Answers with karma: 33 on 2015-03-24
Post score: 0
Answer:
Hi!
If you publish a topic you can verify the loop rate with:
rostopic hz /your_topic_name
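If you would rather check from inside the node, note that roscpp's ros::Rate::sleep() returns false when the desired rate was not met for that cycle, so you can count misses. The underlying timing check can be sketched in plain Python (not ROS-specific; names are mine):

```python
import time

# Count how often the loop body alone exceeds the period budget; this is
# the condition under which ros::Rate::sleep() would return false.
def count_overruns(work, period, n):
    overruns = 0
    for _ in range(n):
        start = time.monotonic()
        work()
        elapsed = time.monotonic() - start
        if elapsed > period:
            overruns += 1
        # sleep off any leftover time in the cycle, like Rate::sleep()
        if period > elapsed:
            time.sleep(period - elapsed)
    return overruns

fast = lambda: None               # finishes well inside a 10 ms budget
slow = lambda: time.sleep(0.02)   # always exceeds a 10 ms budget
print(count_overruns(fast, 0.01, 5), count_overruns(slow, 0.01, 5))
```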
Originally posted by Chaos with karma: 396 on 2015-03-24
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Carollour on 2015-03-24:
thank you, that was it. | {
"domain": "robotics.stackexchange",
"id": 21218,
"tags": "ros"
} |
Can we use relational parametricity to simplify the type $\forall a. ( (a \to r) \to r ) \to (a \to r) \to r$? | Question: This question is about using relational parametricity to resolve practical questions in pure functional programming in System F.
Consider the following types of polymorphic functions:
$$ \forall a.\, ((a \rightarrow r) \rightarrow r) \rightarrow ((a \rightarrow r) \rightarrow r) $$
and
$$ \forall a.\, ((a \rightarrow r) \rightarrow s) \rightarrow ((a \rightarrow r) \rightarrow s) $$
where $s$ and $r$ are free type variables, i.e., fixed arbitrary types. Here we only consider pure lambda terms from System F; there are no side effects, all code is fully parametric, and so the parametricity theorems apply. In particular, all functions of the above types will be natural transformations.
It appears that the last type is equivalent to a pair of functions $ (r \rightarrow r, s \rightarrow s) $. But I don't know how to derive that type equivalence rigorously, and it is not clear that this is the correct type equivalence to expect here.
As an example where a similar question can be answered straightforwardly, consider the type $\forall a.\, (a \rightarrow r) \rightarrow (a \rightarrow s)$.
The type $\forall a.\, (a \rightarrow r) \rightarrow (a \rightarrow s)$ can be simplified to $r \rightarrow s$ by using the (contravariant) Yoneda lemma:
$$ \forall a. (a \rightarrow t) \rightarrow (F\, a) \cong F\, t \quad \textrm{where } F \textrm { is a contravariant functor} $$
if we set $F\,x \overset{def}{=} x\rightarrow s $.
The Yoneda lemma may be used here because we assume that all values are pure lambda terms from System F. Then, by parametricity theorems of Reynolds and Wadler, all functions of type $\forall a. (a \rightarrow r) \rightarrow (a \rightarrow s)$ are natural transformations (their naturality law is a "free theorem" in the terminology of Wadler). So, the Yoneda lemma applies.
We may also use the relational parametricity theorems directly to derive this type equivalence:
$$ \forall a.\, (a \rightarrow r) \rightarrow (a \rightarrow s) \cong r \rightarrow s $$
Without the Yoneda lemma, we need a longer and more complicated proof, but we will still need to begin by deriving the naturality law and to proceed from there, in order to show that each function of type $\forall a.\, (a \rightarrow r) \rightarrow (a \rightarrow s)$ is expressed via a unique function of type $r \rightarrow s$.
However, the Yoneda lemma does not apply directly to the types shown at the beginning. (See solution below.)
Relational parametricity theorems can be used to simplify certain types containing universal quantifiers. However, it is never obvious how to use relational parametricity when the Yoneda lemma cannot be used. I don't seem to find a good trick to resolve this and similar questions about quantified types of the form $\forall a. (a \rightarrow r) \rightarrow s$.
Answer: The solution was suggested in a comment by @DanDoel.
Flip the first two curried arguments in the type:
$$\forall a.\,((a\rightarrow r)\rightarrow s) \rightarrow(a\rightarrow r)\rightarrow s $$
and obtain an isomorphic type:
$$ \forall a. (a\rightarrow r)\rightarrow ((a \rightarrow r)\rightarrow s)\rightarrow s $$
Now this type is in the form $\forall a.(a\rightarrow r)\rightarrow F\,a$ where $F$ is a contravariant functor:
$$ F\,a \overset{\textrm{def}}{=} ((a \rightarrow r)\rightarrow s)\rightarrow s $$
By the contravariant Yoneda lemma, the type is equivalent to $F\,r$.
So, we obtain the type equivalence:
$$ \forall a.\,((a\rightarrow r)\rightarrow s) \rightarrow(a\rightarrow r)\rightarrow s \cong ((r \rightarrow r)\rightarrow s)\rightarrow s $$ | {
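The two directions of this equivalence can be written down as ordinary higher-order functions. Here is a sketch in Python (types erased, names mine) whose data flow mirrors the System F terms:

```python
# big   : forall a. ((a -> r) -> s) -> (a -> r) -> s
# small : ((r -> r) -> s) -> s

def fwd(big):
    # instantiate a := r and feed the identity continuation
    return lambda k: big(k)(lambda x: x)

def bwd(small):
    # rebuild the polymorphic function from its value at a := r
    return lambda f: lambda g: small(lambda h: f(lambda a: h(g(a))))

# round trip on a sample inhabitant, with r = int, s = str
small = lambda k: k(lambda x: x + 1)
probe = lambda h: str(h(41))
print(fwd(bwd(small))(probe), small(probe))   # both print "42"
```

That the round trips are identities in general is exactly what the (contravariant) Yoneda lemma guarantees, given parametricity.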
"domain": "cstheory.stackexchange",
"id": 5552,
"tags": "functional-programming, typed-lambda-calculus, parametricity"
} |
Cause and description of 'secondary' probability peaks in Above Threshold Ionization | Question: While reading about above-threshold ionization, I found this graph on the wikipedia page about ATI:
The $x$ axis represents the kinetic energy of the electron and the $y$ axis shows the differential probability.
I could understand why the average probability drops for higher electron kinetic energies. I also understand the cause of the peaks, 3 of which are visible in the picture. However, in the region between the peaks, there are several local maxima. Why are these maxima visible?
They do not seem to follow a constant count: there are 5 of them between the first ATI peak and the second; there are 4 of them between the second ATI peak and the third. After the third, they seem to become indistinct.
Answer: They're an experimental artifact:
The theoretical angle integrated photoelectron spectrum (PES) of Hydrogen from a fully ab-initio TDSE calculation. The three larger peaks are the Above threshold ionization peaks. The smaller oscillations are caused by interference between cycles. Laser parameters: 10 cycle Sine pulse, 95eV photons, 1x10^15 Wcm^-2.
(WikiCommons tag on the image) | {
"domain": "physics.stackexchange",
"id": 50780,
"tags": "atomic-physics, quantum-optics, absorption"
} |
Why does a spring lose its energy when compressed for a long time? | Question: Why does a spring lose a part of its energy when compressed for a long period of time? Is it because the material gets bent?
Answer: Yes. Some of the elastic energy stored in the spring does work by moving lattice dislocations through the metal - this is the physical mechanism responsible for the plastic deformation of the metal spring - and is the reason the spring may be permanently deformed when unloaded, even when the grip position applied to the spring has remained fixed. Plastic deformation generates heat, which can be lost to the environment as the deformed spring cools.
If it seems difficult to understand how work is being done when the ends of the spring are fixed, remember that the stress field inside the spring is inhomogeneous. Where the local stress exceeds the yield stress there are plastic strains that correspond to real displacements within the metal lattice, which are doing work against the local stress field. You could use calipers to measure the diameter of the wire comprising the spring to demonstrate that the material has changed shape. | {
"domain": "physics.stackexchange",
"id": 24493,
"tags": "material-science, spring"
} |
How to set GAZEBO_PLUGIN_PATH correctly and add the plugin into Gazebo_ros? | Question:
Hi, I am using Gazebo to simulate a sonar, so I will use the hector sonar plugin. After I installed the hector plugins from source and referenced the plugin in my SDF, errors appeared.
At first, an error appeared asking me to add libhector_sonar_plugin.so to the gazebo_ros package.
I did not understand it and thought Gazebo could not find the plugin. Thus, I used
export GAZEBO_PLUGIN_PATH=`pwd`:$GAZEBO_PLUGIN_PATH
to add the hector plugin folder to the path, but then more errors came:
Error [Plugin.hh:156] Failed to load plugin libRayPlugin.so: libRayPlugin.so: cannot open shared object file: No such file or directory
[FATAL] [1464103865.820981859]: A ROS node for Gazebo has not been initialized, unable to load plugin. Load the Gazebo system plugin 'libgazebo_ros_api_plugin.so' in the gazebo_ros package)
Could someone tell me how to set (or reset) GAZEBO_PLUGIN_PATH correctly? And if I want to use plugins such as hector's, what should I do? (Just copy the .so to the model folder, or ...?)
I am using the Indigo full desktop install, so the version should be Gazebo 2.
Thank you.
Originally posted by wyxf007 on Gazebo Answers with karma: 5 on 2016-05-25
Post score: 0
Original comments
Comment by Aiven92 on 2016-05-27:
Can you show line of SDF file, where you include plugin ? Frequent problem is a relative path to the plugin.
Comment by Weiwei on 2016-05-28:
Can you check out the current GAZEBO_PLUGIN_PATH? Run $echo $GAZEBO_PLUGIN_PATH to see whether the path is right.
Comment by wyxf007 on 2016-05-28:
I just use
<plugin name="gazebo_ros_sonar_data" filename="libhector_gazebo_ros_sonar.so">
<gaussianNoise>0.005</gaussianNoise>
<topicName>radiofence</topicName>
<frameId>radiofence1</frameId>
</plugin>
</sensor>
to call the plugin in my sensor sdf file.
Comment by wyxf007 on 2016-05-28:
After echo GAZEBO_PLUGIN_PATH in Terminal, I just get the result
GAZEBO_PLUGIN_PATH
Comment by Aiven92 on 2016-05-28:
I have zero string result of echo $GAZEBO_PLUGIN_PATH, but I'm use ABSOLUTE path to the plugin *.so file and all work fine.
Comment by wyxf007 on 2016-05-30:
Could you tell me how to use the ABSOLUTE path? do changes in URDF file ?
Answer:
As already mentioned, you can see your current GAZEBO_PLUGIN_PATH with the command echo $GAZEBO_PLUGIN_PATH. The path should point directly to the folder where the .so file is located (for example the build folder).
The export GAZEBO_PLUGIN_PATH=´pwd´:$GAZEBO_PLUGIN_PATH doesn't overwrite the path; it just appends the current directory (pwd) to the variable. (You can also use $PWD, which is handier.) As a result you can end up with a mess after several tries. In addition, export is only temporary and applies to the current terminal. (Don't confuse yourself with too many tabs / terminals.)
To clean the variable use unset GAZEBO_PLUGIN_PATH
Then check the variable again: echo $GAZEBO_PLUGIN_PATH (nothing should be returned now).
Now navigate in the terminal to the folder where the .so file is located (since pwd is the current folder). After export GAZEBO_PLUGIN_PATH=$PWD your variable is set correctly (this overwrites any existing value).
If you want to append your path to the variable use export GAZEBO_PLUGIN_PATH=$PWD:$GAZEBO_PLUGIN_PATH.
(Hint: you can also set the path directly for example: export GAZEBO_PLUGIN_PATH=/home/#USERNAME#/Documents/testplugin/build)
To make GAZEBO_PLUGIN_PATH permanent, you can include the export line in your ~/.bashrc file.
Edit/Remark:
The default gazebo plugins are loading since gazebo is sourcing its default environment variables before the start. This is carried out using the file /usr/share/gazebo/setup.sh or /usr/share/gazebo-X.X/setup.sh(the X.X is your respective gazebo version)
To be aware of this behavior and be able to see its default values, you can use source /usr/share/gazebo/setup.sh
Again, to have this permanent in every console instance, add the line to your .bashrc file.
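Putting the steps together (the plugin build path below is just an example; substitute your own):

```shell
# start from a clean slate
unset GAZEBO_PLUGIN_PATH
# pull in Gazebo's own defaults (use /usr/share/gazebo-X.X/setup.sh
# for your version if needed)
source /usr/share/gazebo/setup.sh
# append your plugin build folder
export GAZEBO_PLUGIN_PATH=$HOME/hector_gazebo/build:$GAZEBO_PLUGIN_PATH
echo $GAZEBO_PLUGIN_PATH
```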
Originally posted by m4k with karma: 61 on 2016-05-31
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by wyxf007 on 2016-06-02:
Thank you! I was quite confused why echo GAZEBO_PLUGIN_PATH just return a same name a same name, even if the gazebo default plugin setting still works. But at least I can use your method now. | {
"domain": "robotics.stackexchange",
"id": 38869,
"tags": "gazebo-plugin"
} |
Second law of Newton for variable mass systems | Question: Frequently I see the expression
$$F = \frac{dp}{dt} = \frac{d}{dt}(mv) = \frac{dm}{dt}v + ma,$$
which can be applied to variable mass systems.
But I'm wondering if this derivation is correct, because the second law of Newton is formulated for point masses.
Furthermore, if I change the inertial frame of reference, only $v$ on the right side of the formula $F = \frac{dm}{dt}v+ma$ will change, meaning that $F$ would depend on the frame of reference, which (according to me) can't be true.
I realize there exists a formula for varying-mass systems that looks quite similar to this one, but isn't exactly the same, because the $v$ on its right side is the relative velocity of the expelled/accreted mass. The derivation of that formula is also rather different from this one.
So my question is: is this formula, that I frequently encounter in syllabi and books, correct? Where lies my mistake.
Answer:
So my question is: is this formula, that I frequently encounter in syllabi and books, correct? Where lies my mistake.
The equation and the subsequent expression of the derivative
$$\mathbf F_{ext} = \frac{d\mathbf p}{dt} = \frac{d}{dt}(m\mathbf v) = \frac{dm}{dt}\mathbf v + m\mathbf a,~~~(1)$$
are based on the erroneous idea that the equation
$$
\mathbf F_{ext} = \frac{d\mathbf p}{dt}~~~(2)
$$
is valid for systems that lose or gain mass (where $\mathbf F_{ext}$ is the external force on the system and $\mathbf p$ is the momentum of the system).
The system in this context is often a rocket without the expelled gases far away. Obviously such system has variable mass. More generally, we can consider any mass system in a specified well-delimited control volume as the subject for which we seek the equation of motion.
The above idea persists (probably) in part because some special relativity texts say $\mathbf F_{ext} = d\mathbf p/dt$ is more general than $\mathbf F_{ext} = m\mathbf a$, since the former is a valid equation for relativistic particles.
But in non-relativistic mechanics this equation is valid only when the system does not lose or gain parts.
In the subtler case where the system of interest (inside the control volume) does acquire or lose material parts (like a rocket), the equation (2) is no longer valid. An easy way to see this is that the equation is not Galilei-invariant: when written in a different frame, the external force does not change, but the right-hand side does.
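To spell out the non-invariance: under a Galilean boost $\mathbf v \to \mathbf v + \mathbf u$ with constant $\mathbf u$, the acceleration $\mathbf a$ and the external force are unchanged, but the right-hand side of equation (1) picks up an extra term:

$$
\frac{dm}{dt}(\mathbf v + \mathbf u) + m\mathbf a = \left(\frac{dm}{dt}\mathbf v + m\mathbf a\right) + \frac{dm}{dt}\mathbf u,
$$

so the two frames would demand different external forces whenever $dm/dt \neq 0$.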
However, a different and correct equation for $\mathbf p$ (a variable-mass system's momentum) may be derived from Newton's laws when applied to each particle the system is made of 1:
$$
\mathbf F_{ext} + \mathbf F_{parts~outside} = \frac{d\mathbf p}{dt} + \frac{d\mathbf p_{lost}}{dt}~~~(4).
$$
1)
This can be done because each particle obeys Newton's law $\mathbf F= m\mathbf a$, as it does not lose or gain parts. One way to derive this equation goes like this.
Let us use convention where $\mathbf F_{a}$ means force due to body $a$ on something, $\mathbf F_{-b}$ means force acting on body $b$ due to something, and $\mathbf F_{a,-b}$ means force due to body $a$ acting on the body $b$.
It is easy to see that at time $t$
$$
\sum_{i\in V(t)} \mathbf{F}_{ext,-i} + \sum_{i \in V(t)} \mathbf F_{parts~outside,-i} = \sum_{i\in V(t)} m_i \mathbf a_i.~~~(a)
$$
We would like to express this without summing over index $i$, using only quantities referring to the system inside and outside as a whole.
Let $\mathbf p$ be momentum inside the control volume $V$. This changes in time for two reasons:
1. particles that stay inside change their momentum
2. some particles leave or enter the control volume
So we can write
$$
\frac{d\mathbf p}{dt} = \sum_{i\in V} m_i \mathbf a_i - \frac{d\mathbf p_{lost}}{dt}~~~(b)
$$
where $d\mathbf p_{lost}$ is the momentum lost from the control volume, due to particles leaving, per time $dt$.
Comparing $(a)$ and $(b)$, we see that the sums over $i$ can be removed, and the resulting equation of motion for the system (in the control volume) can be written more concisely as
$$
\mathbf F_{ext} + \mathbf F_{parts~outside} = \frac{d\mathbf p}{dt} + \frac{d\mathbf p_{lost}}{dt}
$$
which is the equation (4).
In case of mass leaving the system (a rocket), we can write this in an easier-to-remember way
$$
\mathbf F_{ext} + \mathbf F_{parts~outside} - \frac{d\mathbf p_{lost}}{dt} = \frac{d\mathbf p}{dt}~~~(5).
$$
When we apply this to a rocket, we can see that momentum of the rocket changes due to 1) external force, 2) force of the exhaust gases acting back on the rocket, but is decreased by the momentum lost from the rocket per unit time (due to exhaust gas leaving the system).
Although more general, this is somewhat foreign to the engineering viewpoint on rockets, because for the purpose of travel we are interested in the rocket's velocity rather than its momentum.
In the simplest case where the lost particles all leave in the direction same as or opposite to the body velocity $\mathbf v$ (idealized rocket), this can be further simplified, as is common in textbooks. Let the boundary of the control volume be far from the rocket, so that the velocity of particles crossing the boundary (relative to the rocket) is a constant $\mathbf{c}$, and $\mathbf{F}_{parts}$ acting back on the system in the control volume is negligible (the exhaust gas is rarefied). Then the lost momentum per unit time is
$$
\frac{d\mathbf{p}_{lost}}{dt} = - \frac{dm}{dt}(\mathbf v+\mathbf c)~~~(6)
$$
and the equation of motion simplifies into
$$
\mathbf{F}_{ext} + \frac{dm}{dt} \mathbf c = m\frac{d\mathbf{v}}{dt}.~~~(7)
$$
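As a sanity check (my own addition, not part of the original answer), eq. (7) with $\mathbf F_{ext}=0$ can be integrated numerically and compared with the Tsiolkovsky result $\Delta v = |\mathbf c|\,\ln(m_0/m_f)$ that follows from it analytically; all numbers below are made up for illustration:

```python
import math

def delta_v_numeric(m0, mf, mdot, c_exhaust, dt=1e-3):
    # Euler-integrate eq. (7) with F_ext = 0: m dv/dt = -(dm/dt) |c|,
    # where dm/dt = -mdot < 0, i.e. dv/dt = mdot * |c| / m.
    m, v = m0, 0.0
    while m > mf:
        v += (mdot * c_exhaust / m) * dt
        m -= mdot * dt
    return v

m0, mf, mdot, c_exhaust = 1000.0, 400.0, 5.0, 2500.0   # kg, kg, kg/s, m/s
dv_num = delta_v_numeric(m0, mf, mdot, c_exhaust)
dv_exact = c_exhaust * math.log(m0 / mf)               # Tsiolkovsky formula
```

The two numbers agree to well under a meter per second for this step size, which is a useful consistency check on the signs in eq. (7).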
It is important to realize also that $\mathbf v$ is not the velocity of the center of mass of the system, but is defined as $\mathbf p/m$ where $\mathbf p$ and $m$ are the momentum and mass inside the control volume. The necessity of this distinction is best seen from this example: let the body have constant velocity $\mathbf v$, but let the control volume shrink so that less and less of the body is inside. The center of mass of the control volume has a different velocity from $\mathbf v$; in fact it accelerates due to the moving boundary of the control volume. However, the velocity of the material particles does not change at all. | {
"domain": "physics.stackexchange",
"id": 97969,
"tags": "newtonian-mechanics, mass, conservation-laws, inertial-frames, rocket-science"
} |
Is the fine-structure constant related to the size of the observable universe? | Question: The fine-structure constant $\alpha \approx 1/137$. In Planck units, this is also the charge of the electron squared, $e^2 = \alpha$ ($e \approx 0.085$). In Planck units the size of the observable universe is about $10^{60}$ along the spatial and time axes. My question is, is the logarithm of the size of the observable universe, $\log(10^{60}) \approx 138$ related to the fine-structure constant (electron charge)? I suppose there is some relation since the observable mass is observed by measuring electromagnetic radiation, which is generated by charged particles like the electron.
Answer: It would be difficult even to construct a theory in which those quantities are related in that way, for a number of reasons.
The immediate problems have been pointed out in other responses: ~1/137 is just the infrared (zero energy) value of the fine structure constant, the high energy value is usually regarded as fundamental (@knzhou); the current size and age of the universe is just a temporary quantity, and the fine structure constant does not exhibit the time dependency one would expect if there was such a relationship (@knzhou); and - in my opinion the severest difficulty - the proposed relationship depends on measuring the universe's size and age in Planck units, whereas one normally looks for fundamental quantities to be dimensionless, e.g. a ratio of sizes or a ratio of ages, since such ratios remain the same regardless of the system of units (@Dale).
For these reasons, if one really were trying to motivate the value of 1/α as being approximately ln 10^60, it might make more sense to obtain "~10^60" as 1/sqrt(10^-122), where 10^-122 is a dimensionless parameter characterizing the cosmological constant.
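For reference, the numbers behind the proposed coincidence are easy to check (using the low-energy value of $\alpha$ and the question's round figure of $10^{60}$):

```python
import math

alpha = 1 / 137.035999    # low-energy (infrared) fine-structure constant
size_planck = 1e60        # question's figure for size/age in Planck units

one_over_alpha = 1 / alpha                 # ~137.04
log_size = math.log(size_planck)           # ln(10^60) = 60 ln 10 ~ 138.16
gap = abs(log_size - one_over_alpha) / one_over_alpha   # relative mismatch
```

The mismatch is a bit under one percent, so the coincidence is real at that level; the arguments above are about why it is nonetheless hard to take as anything more.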
Nonetheless, if one were determined to try to construct a theory in which the OP's relationship was not a coincidence - a few pointers.
Counting time in Planck units
Supposing that the exact age of the universe in units of Planck time could be of fundamental significance is a speculation one would associate more with attempts to create a "digital physics", e.g. universe as cellular automaton, than with the role that Planck units actually play in mainstream physics. There, Planck units have more to do with reasoning about "naturalness", by telling you what the expected size of various properties of the universe would be if all the fundamental constants were of order 1 in size. But this is only used for order-of-magnitude estimates.
However, I can think of two theoretical frameworks in which Planck-time time-steps are actually supposed to play a role. There are a number of papers by Paola Zizzi in which there's a kind of quantum-circuit model of inflation; and then in 2017 a group including Sean Carroll came out with a similar-looking story.
Meanwhile, there is at least a precedent for the idea that "time variation of the fine structure constant [could be] driven by quintessence", quintessence being a hypothetical scalar field responsible for dark energy. So all you need is a quantum-information model of holographic quintessence, in which UV/IR mixing allows quantum gravity to play a role in determining the infrared value of the electromagnetic coupling constant...
Current size and age of the universe
However, even if you could construct such a theory, you would run up against the evidence that the fine structure constant simply hasn't been varying with time. So you could instead try to revive a static or a steady-state cosmology, in which "10^60 Planck units" is somehow a fixed property of the universe, rather than just how big and how old it happens to be right now. I am aware that there are specific proposals that are still out there - we live in an eternal rotating "Godel universe", the CMB is the thermal equilibrium of some ubiquitous astrophysical source rather than a fading big-bang remnant - but these ideas must be heavily at odds with modern cosmological data and paradigms, if I am to judge by their fringe status.
There is some mainstream work on "cosmic coincidences" (numerology involving cosmological parameters) in which the current age of the universe does appear, e.g. this paper from 2000. This is called the "why now" question. However, again, this only involves orders of magnitude - the argument is considered a success if it can give a reason why the coincidence occurs after 10 billion years, rather than 1 million years or 1 trillion years.
There have been questions asked on this site, regarding relations between the cosmological constant, and the current age and current size of the universe. Conceivably they lead to theoretical avenues whereby a relationship between fine structure constant and cosmological constant could mimic the proposed relationship between fine structure constant and size/age of the universe.
Conclusion
I think that concludes my attempt to make this work. I never even got around to contemplating exactly how quantum electrodynamics might be modified in order to introduce a dependency on any of these cosmological parameters, beyond a handwave in the direction of quintessence. Quantum field theory contains many ways in which one quantity may be made dependent on another; but the basic problem here is that it is hard to see how these particular quantities (current age/size of the universe, measured in Planck units) can enter into that interplay at all. It would seem to require some quantum-gravity magic and perhaps even an abandonment of big-bang cosmology. Hopefully I have at least conveyed something of how a real theorist might proceed, if they put aside the professional knowledge which tells them this idea leads nowhere, and spent five minutes trying to make it work.
Incidentally, the answer by Sean Lake touches on something much more real, namely, the likely existence of order-of-magnitude relationships between the size of the fine structure constant and the size of the universe in the era with galaxies, atoms, and life. It would be good to have someone who really knows this material, spell that out quantitatively. | {
"domain": "physics.stackexchange",
"id": 56053,
"tags": "cosmology, dimensional-analysis, physical-constants"
} |
construct a TM from a PDA | Question: Given a PDA $P=(Q,\Sigma,\delta,q_0,F)$, formally construct a TM that accepts $L(P)$.
My idea is to construct a Turing machine with 2 tapes, one for the input and the other for the stack. Also to add $q_a$ for accept and $q_r$ for reject and to send to $q_a$ if the TM stops on states in $F$ and send to $q_r$ otherwise.
But I am having trouble defining the new transition function for the TM: $\delta_M$.
Answer: Your idea is correct (or at least one way of doing it), though it may be "easier" to add a third tape, where you just write the current PDA state. Your transition function just needs to simulate the PDA. The alphabet for the input tape is just the PDA's input alphabet plus the blank, the alphabet for the stack tape is the stack alphabet plus the blank symbol. You read a symbol on the input tape, the PDA state and the current symbol on the stack, move the input head right and maybe write a symbol on the stack tape and stay put, write a symbol and move right, or erase a symbol and move left. The state tape head just stays put and writes the new state on the tape. If you read a blank on the input tape, and the current PDA state is accepting, move to the TM's accept state, if you otherwise read a blank on the input tape, then move to the TM's reject state.
Given a PDA transition $\delta_{PDA}(q_{i}, \sigma, \tau) = (q_{j}, \rho) $ (so moving from state $q_{i}$ to $q_{j}$, with $\sigma$ in the input, popping $\tau$ and pushing $\rho$), the TM transition looks something like $\delta_{TM}(q_{p}, \sigma, \tau, q_{i}) = (q_{p}, (\sigma, R), (\rho, S), (q_{j},S))$ - the notation here is somewhat abused, mostly to group each tape's actions together in order. Transitions that have one or more $\varepsilon$/$\lambda$ in them complicate things a little; the input tape head would not move on an $\varepsilon$, and the stack tape head might move left or right, depending on whether it was a pop or push that had the $\varepsilon$.
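The bookkeeping is perhaps easier to see in running code. Below is a sketch (my own, not part of the original answer) of just the PDA-step logic the TM has to reproduce, for a toy DPDA accepting balanced parentheses; state names and stack symbols are invented:

```python
# delta maps (state, input_symbol, stack_top) -> (new_state, push_string),
# where push_string replaces the popped top ('' is a pure pop). This is the
# same table the TM transition function consults across its three tapes.
delta = {
    ('q0', '(', 'Z'): ('q0', 'AZ'),   # push a counter symbol
    ('q0', '(', 'A'): ('q0', 'AA'),
    ('q0', ')', 'A'): ('q0', ''),     # pop a counter symbol
}

def accepts(word):
    state, stack = 'q0', 'Z'          # leftmost character is the stack top
    for ch in word:
        key = (state, ch, stack[0]) if stack else None
        if key not in delta:
            return False              # stuck: the simulated PDA rejects
        state, push = delta[key]
        stack = push + stack[1:]      # pop the top, push the replacement
    # reading a blank on the "input tape": accept iff the state is
    # accepting and only the stack bottom marker remains
    return state == 'q0' and stack == 'Z'
```

The TM version just spreads `state`, `word` and `stack` across three tapes and performs the same lookup.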
This gives you a TM with a single state that does all the real work, and the accept and reject states. | {
"domain": "cs.stackexchange",
"id": 1446,
"tags": "machine-learning, turing-machines, machine-models"
} |
How do you derive the formula of density of mixed liquids with the same mass? | Question: You know this equation : $$ρ_{mix} =\frac{2 × ρ_1 × ρ_2} {ρ_1 + ρ_2} $$ But where is it derived from?
Answer: Suppose you have two masses M1(=M) and M2(=M) with volumes V1 and V2, respectively. Then the total density is the total mass divided by the total volume. So $\rho_{mix}$=2M/(V1+V2). V1=M/$\rho_1$ and V2=M/$\rho_2$ so $\rho_{mix}$=2M/(M/$\rho_1$+M/$\rho_2$), which after canceling the M's and simplifying the expression is equal to what you wrote above.
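A quick numeric sanity check of that algebra (a sketch of my own; the densities are example values roughly matching water and glycerine, and M is arbitrary):

```python
def rho_mix_equal_masses(rho1, rho2):
    # the harmonic-mean form derived above: 2M / (M/rho1 + M/rho2)
    return 2 * rho1 * rho2 / (rho1 + rho2)

# brute-force check against the mass/volume bookkeeping
rho1, rho2, M = 1000.0, 1260.0, 5.0     # kg/m^3, kg/m^3, kg
total_mass = 2 * M
total_volume = M / rho1 + M / rho2
direct = total_mass / total_volume      # definition of mixture density

assert abs(rho_mix_equal_masses(rho1, rho2) - direct) < 1e-9
```

Note the result is independent of M, as the cancellation in the derivation shows.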
Additional Note
By the way, if instead of considering mixing two equal masses of different densities, you consider mixing two equal volumes of different densities, then the resulting equation for the total density is much simpler. It's just the average, $\rho_{mix}=(\rho_1+\rho_2)/2$. | {
"domain": "physics.stackexchange",
"id": 26397,
"tags": "homework-and-exercises, density"
} |
How can i save data into bag file so that i can perform gmapping? | Question:
I am trying to save data into a bag file to run gmapping later. I don't want to do it through the command line using rosbag; rather, I want to write a python script to save the data into a bag file.
I tried saving just LaserScan messages into a bag file by following this tutorial and it worked like a charm. :)
But in order to simulate gmapping I have to save data from multiple topics, i.e. /tf and /scan, so I tried using message filters to read data from both tf and scan, but it kept throwing errors.
I think most people have saved data into bag files before, to simulate gmapping. Can someone tell me how they did it?
Thanks.
Originally posted by krishna43 on ROS Answers with karma: 63 on 2016-07-15
Post score: 0
Answer:
It would help if you'd post the error and give a short code snippet. However, I think you are trying to reinvent the wheel, and I am pretty sure that the vast majority of people used the command-line tool to save data for gmapping. rosbag does exactly what you want to do when you start it from the command line and is well tested, even though it might have its flaws. If you want to start recording bag files from a python script, have a look at the subprocess package in python. This allows you to run rosbag record from the python script instead of the command line and you do not have to reimplement its functionality.
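As a concrete sketch of that subprocess approach (not part of the original answer; the helper names and topic list are illustrative):

```python
import subprocess

def rosbag_record_cmd(topics, out_name="session"):
    # Build the same command you would type in a terminal:
    #   rosbag record -O <out_name> <topic> [<topic> ...]
    return ["rosbag", "record", "-O", out_name] + list(topics)

def start_rosbag_record(topics, out_name="session"):
    # Run rosbag in the background; keep the handle to stop it later.
    return subprocess.Popen(rosbag_record_cmd(topics, out_name))

# e.g. proc = start_rosbag_record(["/scan", "/tf"], out_name="gmapping_input")
# ... drive the robot around ...
# proc.send_signal(signal.SIGINT)  # SIGINT is like Ctrl-C: rosbag closes
#                                  # the bag file cleanly before exiting
```

Stopping with SIGINT rather than kill matters, because it lets rosbag finish writing the bag index.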
Originally posted by Chrissi with karma: 1642 on 2016-07-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by krishna43 on 2016-07-15:
Thank you. I found how i can include rosbag record commands in launch file.
Comment by Chrissi on 2016-07-15:
If that also works for you, even better ;) | {
"domain": "robotics.stackexchange",
"id": 25256,
"tags": "navigation, message-filters, gmapping, laserscan, bagfile"
} |
Is there a way to calculate the heat capacity of a solid or volumetric heat capacity of a solid oxide fuel cell? | Question: I am working on a personal project and wanted to use a solid oxide fuel cell to power up a load (I have not decided what yet), so I am trying to find a way to estimate the power I would need to heat up my fuel cell to a working temperature, so I was considering calculating the heat capacity of the fuel cell. But as it turns out the fuel cell is composed of different materials. Should it be enough to obtain the heat capacity from each material and then add each value to compute the overall energy needed?
Thanks!
Answer: This is a case where you want to determine a property in a composite using some form of a rule of mixtures.
$$ P^n = \sum_j f_j P_j^n $$
Here, $P$ is the composite property, $f_j$ is typically the volume fraction, and $P_j$ is the property of the component $j$ in the mixture. When $n = 1$, this is the simple rule of mixtures. When $n = -1$, this is the inverse rule of mixtures.
An example of this rule applied to fiber composites is given at the link below.
https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Rule_of_mixtures.html
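A short sketch of this rule in code (the component fractions and heat capacities below are invented placeholders, not real SOFC material data; look up actual values for your electrolyte, anode and cathode materials):

```python
def mixture_property(fractions, props, n=1):
    """General rule of mixtures: P = (sum_j f_j * P_j**n) ** (1/n).
    n = 1 gives the simple rule, n = -1 the inverse rule."""
    assert abs(sum(fractions) - 1.0) < 1e-9   # fractions must sum to 1
    return sum(f * p ** n for f, p in zip(fractions, props)) ** (1.0 / n)

# mass fractions and mass-specific heat capacities in J/(kg K), made up
mass_fracs = [0.5, 0.3, 0.2]
cps = [600.0, 450.0, 500.0]

cp_simple = mixture_property(mass_fracs, cps, n=1)     # upper estimate
cp_inverse = mixture_property(mass_fracs, cps, n=-1)   # lower estimate
```

For the heating-energy estimate you would then multiply by total mass and temperature rise, Q ≈ m·c_p·ΔT.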
For heat capacity, the formulation is written using mass specific heat capacity and mass fraction. The link below gives an explanation and a calculator.
https://thermtest.com/rule-of-mixtures-calculator | {
"domain": "engineering.stackexchange",
"id": 2768,
"tags": "thermodynamics"
} |
Character class for an RPG game | Question: I have an RPG-like program that uses a Character class to specify and return information about an individual player/character. I was curious about a way to best set and return an inventory of strings in terms of memory and performance for this purpose. I gave some thought to creating a struct or using something similar to the following:
char *inventory[] = { "Item", "Item I", "Item II", "Item III" };
for (int i = 0; i < 4; ++i) {
char *pos = inventory[i];
while (*pos != '\0') {
printf("%c", *(pos++));
}
}
And setting *inventory[] in the above from a single array containing all strings in the program. My goal would be to return all the inventory from a player in the most efficient way possible, while still retaining the ability to choose individual items (through the same or another function); though this does seem problematic for scaling above 4 items, or when the size is changed.
How else can this be improved?
#include <iostream>
#include <limits>
#include <string>
using std::cerr;
using std::cin;
using std::cout;
using std::endl;
using std::string;
class Character {
private:
string name, classType;
int experience;
string inventory[4];
public:
// Should be separate?
struct Position {
int x; int y;
};
Position location;
public:
void Character::setName(string x) { name = x; }
string Character::getName() { return name; }
void Character::setClassType(string x) { classType = x; }
string Character::getClassType() { return classType; }
void Character::setExperience(int x) { experience = x; }
int Character::getExperience() { return experience; }
void Character::setPosition(int x, int y) {
location.x = x; location.y = y;
}
Position& Character::getPosition() { return location; }
void Character::setInventory(string(&x)[4]) { for (int i = 0; i < 4; ++i) { inventory[i] = x[i]; } }
string& Character::getInventory(int slot) { return inventory[slot]; }
};
void showUser(Character player);
int main() {
bool running(true);
while (running) {
try {
string itemsWizard[4] = { "Scroll of Invisibility", "Mana Potion", "Enchanted Staff", "Mage Robe" };
string itemsRanger[4] = { "Longbow", "Quiver", "Dagger", "Jerkin" };
string itemsWarrior[4] = { "Broadsword", "Shield", "Steel Breastplate", "Throwing Axe" };
Character characterI, characterII, characterIII;
characterI.setName("Jax");
characterI.setClassType("Ranger");
characterI.setExperience(4750);
characterI.setInventory(itemsRanger);
characterI.setPosition(50, 20);
characterII.setName("Brand");
characterII.setClassType("Warrior");
characterII.setExperience(9999);
characterII.setInventory(itemsWarrior);
characterII.setPosition(1, 45);
characterIII.setName("Paige");
characterIII.setClassType("Sorcerer");
characterIII.setExperience(5900);
characterIII.setInventory(itemsWizard);
characterIII.setPosition(65, 120);
cout << "\n" << "Retrieving Character Info..." << "\n" << endl;
showUser(characterI);
showUser(characterII);
showUser(characterIII);
} // Close try
// Generalized Catch
catch (std::exception & e) {
cerr << "\nERROR : " << e.what() << '\n';
}
char choice('c');
bool invalidChoice(true);
// Option to run again
while (cout << "\nRun Again? (y/n) \n\n" && (!(cin >> choice)) || (!(choice == 'n')) && (!(choice == 'y'))) {
cin.clear();
cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
cerr << "\nERROR: The only valid answers are y/n. \n" << endl;
}
if (choice == 'y') {
invalidChoice = false;
cout << "\nSelected: \"" << choice << "\" *** RUNNING AGAIN. ***\n" << endl;
}
else if (choice == 'n') {
invalidChoice = false;
running = false;
cout << "\nSelected: \"" << choice << "\" *** EXITING. ***\n" << endl;
}
}
// Pause before returning
do {
cin.ignore();
cout << '\n' << "Press enter to continue...";
} while (cin.get() != '\n');
return 0;
}// End Main
void showUser(Character player) {
cout << "----------------------------------------" << endl;
cout << "Name : " << player.getName() << endl;
cout << "Class : " << player.getClassType() << endl;
cout << "Experience : " << player.getExperience() << endl;
Character::Position p = player.getPosition();
cout << "Location : (" << p.x << "," << p.y << ")" << endl;
for (int i = 0; i < 4; ++i) {
cout << "Inventory " << i + 1 << " : " << player.getInventory(i) << endl;
}
}
Answer: Use a vector instead of a raw array
Your C-like code would be unlikely to be reviewed by any modern C++ expert. The issue is too many beginners focus on trivialities and premature optimization rather than actually understanding the language. Here's how to turn your suggestion into a one-liner:
// a and b are both std::vector<std::string>'s
a = b;
You'll quickly realize that the implementers of your standard library spent a lot more time thinking about it than you have, and whatever ad-hoc approach you think is "faster" will be beaten by it.
Getters and setters
Getters and setters used in this fashion are an anti-pattern. It creates a bulky interface, invites mistakes1 and gives the impression of somebody who looked at Java or C# code without understanding the implications. Your class can be cut down to:
struct Character
{
std::string name, classType;
int experience;
std::vector<std::string> inventory;
Position location;
};
Now it can benefit from aggregate initialization:
Character character{"Test", "Test", 42, {"a", "b", "c"}, {1, 2}};
See how much simpler that is?
This also simplifies assignment:
// a and b are both Character's
a = b;
Following the "Rule of Zero", memberwise assignment is handled automagically.
Footnote 1
void Character::setInventory(string(&x)[4]) { for (int i = 0; i < 4; ++i) { inventory[i] = x[i]; } }
string& Character::getInventory(int slot) { return inventory[slot]; }
What's the purpose of having a get that also acts like a set? Perhaps you intended to overload operator[] instead? Not only does it do the wrong thing (it returns a slot, rather than entire inventory) but it's inconsistent with the rest of your interface. Even if you performed some operations other than "get"ing, it's implicit that "set" mutates and that "get" inspects (it doesn't modify any member variables).
Therein lies the danger of blindly applying patterns you don't understand: a bulky and incorrect interface.
Note that your "get" method doesn't do any bounds checking. std::vector's at() does do bounds checking, and throws an std::out_of_range exception.
Pointless try..catch
Is there anything that you put in the try..catch block that can throw an exception? If it did, how would you handle it? What kind of exceptions do you expect? Again, please think more carefully about what you're trying to achieve rather than blindly inserting code. | {
"domain": "codereview.stackexchange",
"id": 15790,
"tags": "c++, performance, role-playing-game"
} |
Can you tell if a corpse was male or female by only examining its skull? | Question: There are articles that say women have more rounded corner faces than men, that women's noses are usually shorter, etc.
But are those (and other features) tendencies, or deterministic features that make you really able to tell if a corpse was male or female, by only examining its skull?
Answer: According to textbook of forensic medicine and toxicology by Ks Narayan reddy:
Qualitative differences are:
The male mandible has an everted ramus, but the female mandible has an inverted ramus.
Males have a U-shaped chin, but it's rounded in females.
Quantitative differences:
Supraorbital ridges are prominent in males but often absent in females.
Mastoid process is wider, longer and blunt in males but is narrow and pointed in females.
And there are many more differences, see the pics below: | {
"domain": "biology.stackexchange",
"id": 9412,
"tags": "human-anatomy, anatomy, sex"
} |
Best way to check if ROS is running | Question:
I am writing a script, which relies on ROS currently running. I would like to check if ROS is currently running on the script's machine. What's the best way to do this?
A couple ways I am considering are pinging Master URI and checking the processes. However, I'm asking to see if there's a better way. Perhaps an environment variable is set.
Originally posted by baalexander on ROS Answers with karma: 233 on 2011-07-06
Post score: 12
Answer:
If you want to do it in a script, run a regex over the output of "rostopic list" for "/rosout". If it's not in the output, then the roscore is not running.
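Here is one way to script that check (a sketch, not from the original answer; it relies on rostopic's exit status rather than parsing the output, since rostopic exits with a non-zero status and prints an error when no master is running):

```python
import subprocess

def ros_master_running(timeout=5.0, probe_cmd=("rostopic", "list")):
    """Return True if the probe command exits cleanly, i.e. a master is up."""
    try:
        result = subprocess.run(probe_cmd, capture_output=True,
                                timeout=timeout)
    except FileNotFoundError:
        return False          # rostopic itself is not installed
    except subprocess.TimeoutExpired:
        return False          # master unreachable and the probe hung
    return result.returncode == 0
```

The timeout matters: with a misconfigured ROS_MASTER_URI the probe can hang instead of failing fast.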
Originally posted by Nash with karma: 207 on 2011-07-06
This answer was ACCEPTED on the original site
Post score: 8
Original comments
Comment by baalexander on 2011-07-07:
Thanks @Nash. rostopic list does print out an error if roscore is not running, which I can use to check if ROS is running. However, it seems a bit hackish still.
Comment by 130s on 2013-05-05:
@Nash thx this works with rostopic python API. My example can be seen here. | {
"domain": "robotics.stackexchange",
"id": 6052,
"tags": "roslaunch, roscore"
} |
How massive should a body be in order to significantly affect the passage of time? | Question: I have read that the effects of time dilation due to relative motion become noticeable after a speed of about 10% of the speed of light.
The Wikipedia article on time dilation says this:
It is only when an object approaches speeds on the order of 30,000
km/s (1/10 the speed of light) that time dilation becomes important.
On a similar note, beyond what mass does gravitational time dilation become noticeable?
Answer: Wikipedia's statement is wrong: relativistic effects happen at all speeds; they're just harder to measure at slower speeds. In fact, Chu et al at NIST have measured time dilation at relative speeds of 10 m/s.
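To put rough numbers on both effects (a sketch of my own, not from the original answer; constants are rounded and only the weak-field formulas are used):

```python
import math

c = 2.998e8            # speed of light, m/s

# kinematic: fractional slowdown at v = 10 m/s, the regime measured at NIST
v = 10.0
kinematic = 1.0 - math.sqrt(1.0 - (v / c) ** 2)        # ~5.6e-16

# gravitational: fractional slowdown at the surface of a body
G = 6.674e-11          # m^3 kg^-1 s^-2
def surface_dilation(mass_kg, radius_m):
    return 1.0 - math.sqrt(1.0 - 2.0 * G * mass_kg / (radius_m * c * c))

earth = surface_dilation(5.972e24, 6.371e6)            # ~7e-10
```

Both numbers are fractional clock-rate offsets, which is why clocks with fractional uncertainty around 10^-17 or better can resolve even the tiny 10 m/s effect.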
Similarly, gravitational time dilation always happens, and when it becomes "noticeable" depends on how good your clock is. For many purposes this would probably mean planet-sized bodies, but state-of-the-art clocks are very, very good and could no doubt observe effects of much smaller objects like asteroids. For example, the NIST clock mentioned above should be able to detect gravitational time dilation due to a body with an escape velocity on the order of 10 m/s, which corresponds to a smallish asteroid or moon (e.g. like Phobos). | {
"domain": "physics.stackexchange",
"id": 96379,
"tags": "general-relativity, time-dilation"
} |
Download pictures (or videos) from Instagram using Selenium | Question: Python script that can download images and videos from public and private profiles, like a gallery of photos or videos. It saves the data in a folder.
How it works:
Logs in to Instagram using Selenium and navigates to the profile
Checks the availability of the Instagram profile, i.e. whether it exists and whether it is private
Creates a folder with the name of your choice
Gathers URLs of images and videos
Uses threads and multiprocessing to improve the execution speed
My code:
from pathlib import Path
import requests
import time
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException, TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from multiprocessing.dummy import Pool
import urllib.parse
from concurrent.futures import ThreadPoolExecutor
from typing import *
import argparse
class PrivateException(Exception):
pass
class InstagramPV:
def __init__(self, username: str, password: str, folder: Path, profile_name: str):
"""
:param username: Username or E-mail for Log-in in Instagram
:param password: Password for Log-in in Instagram
:param folder: Folder name that will save the posts
:param profile_name: The profile name that will search
"""
self.username = username
self.password = password
self.folder = folder
self.http_base = requests.Session()
self.profile_name = profile_name
self.links: List[str] = []
self.pictures: List[str] = []
self.videos: List[str] = []
self.url: str = 'https://www.instagram.com/{name}/'
self.posts: int = 0
self.MAX_WORKERS: int = 8
self.N_PROCESSES: int = 8
self.driver = webdriver.Chrome()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.http_base.close()
self.driver.close()
def check_availability(self) -> None:
"""
Checking Status code, Taking number of posts, Privacy and followed by viewer
Raise Error if the Profile is private and not following by viewer
:return: None
"""
search = self.http_base.get(self.url.format(name=self.profile_name), params={'__a': 1})
search.raise_for_status()
load_and_check = search.json()
self.posts = load_and_check.get('graphql').get('user').get('edge_owner_to_timeline_media').get('count')
privacy = load_and_check.get('graphql').get('user').get('is_private')
followed_by_viewer = load_and_check.get('graphql').get('user').get('followed_by_viewer')
if privacy and not followed_by_viewer:
raise PrivateException('[!] Account is private')
def create_folder(self) -> None:
"""Create the folder name"""
self.folder.mkdir(exist_ok=True)
def login(self) -> None:
"""Login To Instagram"""
self.driver.get('https://www.instagram.com/accounts/login')
WebDriverWait(self.driver, 10).until(EC.presence_of_element_located((By.TAG_NAME, 'form')))
self.driver.find_element_by_name('username').send_keys(self.username)
self.driver.find_element_by_name('password').send_keys(self.password)
submit = self.driver.find_element_by_tag_name('form')
submit.submit()
"""Check For Invalid Credentials"""
try:
var_error = WebDriverWait(self.driver, 4).until(EC.presence_of_element_located((By.CLASS_NAME, 'eiCW-')))
raise ValueError(var_error.text)
except TimeoutException:
pass
try:
"""Close Notifications"""
notifications = WebDriverWait(self.driver, 20).until(
EC.presence_of_element_located((By.XPATH, '//button[text()="Not Now"]')))
notifications.click()
except NoSuchElementException:
pass
"""Taking cookies"""
cookies = {
cookie['name']: cookie['value']
for cookie in self.driver.get_cookies()
}
self.http_base.cookies.update(cookies)
"""Check for availability"""
self.check_availability()
self.driver.get(self.url.format(name=self.profile_name))
self.scroll_down()
def posts_urls(self) -> None:
"""Taking the URLs from posts and appending in self.links"""
elements = self.driver.find_elements_by_xpath('//a[@href]')
for elem in elements:
urls = elem.get_attribute('href')
if 'p' in urls.split('/'):
if urls not in self.links:
self.links.append(urls)
def scroll_down(self) -> None:
"""Scrolling down the page and taking the URLs"""
last_height = self.driver.execute_script('return document.body.scrollHeight')
while True:
self.driver.execute_script('window.scrollTo(0, document.body.scrollHeight);')
time.sleep(1)
self.posts_urls()
time.sleep(1)
new_height = self.driver.execute_script("return document.body.scrollHeight")
if new_height == last_height:
break
last_height = new_height
self.submit_links()
def submit_links(self) -> None:
"""Gathering Images and Videos and pass to function <fetch_url> Using ThreadPoolExecutor"""
self.create_folder()
print('[!] Ready for video - images'.title())
print(f'[*] extracting {len(self.links)} posts , please wait...'.title())
new_links = (urllib.parse.urljoin(link, '?__a=1') for link in self.links)
with ThreadPoolExecutor(max_workers=self.MAX_WORKERS) as executor:
for link in new_links:
executor.submit(self.fetch_url, link)
def get_fields(self, nodes: Dict, *keys) -> Any:
"""
:param nodes: The json data from the link using only the first two keys 'graphql' and 'shortcode_media'
:param keys: Keys that will be add to the nodes and will have the results of 'type' or 'URL'
:return: The value of the key <fields>
"""
fields = nodes['graphql']['shortcode_media']
for key in keys:
fields = fields[key]
return fields
def fetch_url(self, url: str) -> None:
"""
This function extracts images and videos
:param url: Taking the url
:return None
"""
logging_page_id = self.http_base.get(url.split()[0]).json()
if self.get_fields(logging_page_id, '__typename') == 'GraphImage':
image_url = self.get_fields(logging_page_id, 'display_url')
self.pictures.append(image_url)
elif self.get_fields(logging_page_id, '__typename') == 'GraphVideo':
video_url = self.get_fields(logging_page_id, 'video_url')
self.videos.append(video_url)
elif self.get_fields(logging_page_id, '__typename') == 'GraphSidecar':
for sidecar in self.get_fields(logging_page_id, 'edge_sidecar_to_children', 'edges'):
if sidecar['node']['__typename'] == 'GraphImage':
image_url = sidecar['node']['display_url']
self.pictures.append(image_url)
else:
video_url = sidecar['node']['video_url']
self.videos.append(video_url)
else:
print(f'Warning {url}: has unknown type of {self.get_fields(logging_page_id,"__typename")}')
def download_video(self, new_videos: Tuple[int, str]) -> None:
"""
Saving the video content
:param new_videos: Tuple[int,str]
:return: None
"""
number, link = new_videos
with open(self.folder / f'Video{number}.mp4', 'wb') as f:
content_of_video = self.http_base.get(link).content
f.write(content_of_video)
def images_download(self, new_pictures: Tuple[int, str]) -> None:
"""
Saving the picture content
:param new_pictures: Tuple[int, str]
:return: None
"""
number, link = new_pictures
with open(self.folder / f'Image{number}.jpg', 'wb') as f:
content_of_picture = self.http_base.get(link).content
f.write(content_of_picture)
def downloading_video_images(self) -> None:
"""Using multiprocessing for Saving Images and Videos"""
print('[*] ready for saving images and videos!'.title())
picture_data = enumerate(self.pictures)
video_data = enumerate(self.videos)
pool = Pool(self.N_PROCESSES)
pool.map(self.images_download, picture_data)
pool.map(self.download_video, video_data)
print('[+] Done')
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-U', '--username', help='Username or your email of your account', action='store',
required=True)
parser.add_argument('-P', '--password', help='Password of your account', action='store', required=True)
parser.add_argument('-F', '--filename', help='Filename for storing data', action='store', required=True)
parser.add_argument('-T', '--target', help='Profile name to search', action='store', required=True)
args = parser.parse_args()
with InstagramPV(args.username, args.password, Path(args.filename), args.target) as pv:
pv.login()
pv.downloading_video_images()
if __name__ == '__main__':
main()
Usage:
myfile.py -U myemail@hotmail.com -P mypassword -F Mynamefile -T stackoverjoke
Changes:
1) Changed the function of scroll_down
2) Added get_fields
My previous comparative review tag: Instagram Scraping Posts Using Selenium
Answer: Class constants
These:
self.MAX_WORKERS: int = 8
self.N_PROCESSES: int = 8
should not be set as instance members; they should be static members, which is done by setting them in the class outside of function scope; i.e.
class InstagramPV:
    MAX_WORKERS: int = 8
    N_PROCESSES: int = 8
Nested if
if 'p' in urls.split('/'):
    if urls not in self.links:
can be
if urls not in self.links and 'p' in urls.split('/'):
Direct import
urllib.parse.urljoin could use a from urllib.parse import urljoin.
URL passing
You pass this into submit - urllib.parse.urljoin(link, '?__a=1') - and then fetch url.split()[0]. Why call split at all? Does the original string actually have spaces in it? If so, that should be taken care of before it's passed into submit. Also, don't call urljoin for a query parameter - instead, pass that QP into get's params argument.
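To illustrate that suggestion, here is a sketch of the suggested call shape (`http_base` is assumed to be a `requests.Session`, as elsewhere in the review; the helper name is illustrative):

```python
# Instead of urllib.parse.urljoin(link, '?__a=1'), let the HTTP client
# build the query string itself via the params argument:
def fetch_json(http_base, link):
    response = http_base.get(link, params={'__a': 1})
    return response.json()
```

requests then takes care of encoding and appending `?__a=1` to the URL, so there is no manual string surgery on the link.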
Streamed downloads
Regarding this:
with open(self.folder / f'Image{number}.jpg', 'wb') as f:
    content_of_picture = self.http_base.get(link).content
    f.write(content_of_picture)
The problem with using content is that it loads everything into memory before being able to write it to a file. Instead, pass stream=True to get, and then pass response.raw to shutil.copyfileobj. | {
"domain": "codereview.stackexchange",
"id": 37919,
"tags": "python, python-3.x, web-scraping, selenium, instagram"
} |
ROS in Docker: messaging with host PC | Question:
Hi!
I would like to know how to set up connection between ROS in Docker with ROS on host PC correctly.
Following some answers like these [1][2], I understand that connecting the container to the host network and exporting the proper environment variables should be enough.
However, when I try pub/echo between container and host PC, I get no response
Here is how I start everything:
On host PC I have the following in my .bashrc:
ROS_HOSTNAME=192.168.1.1
ROS_MASTER_URI=http://192.168.1.1:11311/
ROS_IP=192.168.1.1
Then I start roscore in the first terminal. In the second terminal I run:
docker run --net=host --name ros-test -it --rm osrf/ros:noetic-desktop
And in the resulting second terminal I export the same environment variables as shown above. Then I inspect available topics with rostopic list and can see the default topics while roscore is running:
/rosout
/rosout_agg
Then in the third terminal from host PC I try simple publish command:
rostopic pub -r 1 /test std_msgs/String "halo"
Then try to receive messages in the container which is the second terminal: rostopic echo /test
I expect to receive messages, but there is nothing coming. When I do rostopic hz /test, it says "no new messages"
When I check rostopic info /test from the fourth terminal I get the following:
Type: std_msgs/String
Publishers:
/rostopic_178825_1670943581897 (http://risc-SYS-5049A-TR:33815/)
Subscribers:
/rostopic_111_1670945011845 (http://docker-desktop:57739/)
With the publisher being my third terminal and the subscriber being my second terminal, which is the container
Is there something wrong with how I am doing things? Please advise or help
Originally posted by akim on ROS Answers with karma: 11 on 2022-12-13
Post score: 1
Answer:
So I figured out it was my dumb mistake
I was following these instructions, but, apparently, I somehow skipped step (3) and didn't activate group changes. Now when I did, everything seems to work
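For reference, the skipped group-activation step is presumably Docker's standard Linux post-install setup (the commands below are the usual ones from that setup and are shown here as an assumption; a full logout/login also re-evaluates group membership instead of newgrp):

```shell
sudo groupadd docker            # the group usually exists already
sudo usermod -aG docker $USER   # add the current user to the docker group
newgrp docker                   # activate the new group in this shell
docker run hello-world          # verify docker works without sudo
```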
Originally posted by akim with karma: 11 on 2022-12-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 38177,
"tags": "ros, communication, docker"
} |
Binary Conversion in Java | Question: Before implementing a GUI I tried performing it at CLI first and I also tried to implement it by using calling methods to another class.
BinaryConversion class:
import java.io.IOException;
import static java.lang.System.in;
import java.util.InputMismatchException;
import java.util.Scanner;

public class BinaryConversion {
    public static void main(String[] args) throws IOException {
        try (Scanner in = new Scanner(System.in)) {
            System.out.print("Enter given: ");
            int givenNum = in.nextInt();
            System.out.println("Binary: " + convert.getBinary(givenNum));
            System.out.println("Octal: " + convert.getOctal(givenNum));
            System.out.println("Hex: " + convert.getHex(givenNum));
        } catch (InputMismatchException e) {
            System.out.println("Looks like you entered a non integer value.");
        } finally {
            in.close();
        }
    }
}
Convert class:
public final class convert {
    private convert() {
        // removes the default constructor
    }

    public static String getBinary(int given) {
        char binNumbers[] = {'0','1'};
        String str = "";
        int rem;
        while (given > 0) {
            rem = given % 2;
            str = binNumbers[rem] + str;
            given /= 2;
        }
        return str;
    }

    public static String getOctal(int given) {
        char octalNumbers[] = {'0','1','2','3','4','5','6','7'};
        String str = "";
        int rem;
        while (given > 0) {
            rem = given % 8;
            str = octalNumbers[rem] + str;
            given /= 8;
        }
        return str;
    }

    public static String getHex(int given) {
        char hexNumbers[] = {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'};
        String str = "";
        int rem;
        while (given > 0) {
            rem = given % 16;
            str = hexNumbers[rem] + str;
            given /= 16;
        }
        return str;
    }
}
My questions are:
Did I miss something?
Is there a more efficient implementation of this program?
Answer: First: Please format your code more consistently.
The indentation of the while loops in your original post is very unusual. Better (and more common):
public static String getBinary(int given) {
    char binNumbers[] = {'0','1'};
    String str = "";
    int rem;
    while (given > 0) {
        rem = given % 2;
        str = binNumbers[rem] + str;
        given /= 2;
    }
    return str;
}
Next: Please use the standard naming conventions for identifiers. Classes should start upper case (Convert instead of convert).
Also use StringBuilder instead of String-concatenation:
public static String getBinary(int given) {
    char binNumbers[] = {'0','1'};
    StringBuilder sb = new StringBuilder();
    int rem;
    while (given > 0) {
        rem = given % 2;
        sb.append(binNumbers[rem]);
        given /= 2;
    }
    // digits were appended least-significant first, so reverse them
    return sb.reverse().toString();
}
This will be a large performance gain. Your code basically translates to:
public static String getBinary(int given) {
    char binNumbers[] = {'0','1'};
    String str = "";
    int rem;
    while (given > 0) {
        rem = given % 2;
        //str = binNumbers[rem] + str;
        // Code generated by compiler for the above line of code
        StringBuffer sb = new StringBuffer();
        sb.append(binNumbers[rem]);
        sb.append(str);
        str = sb.toString();
        // End of generated code
        given /= 2;
    }
    return str;
}
As you can see, the generated code creates many objects which have to be cleaned up afterwards by the garbage collector.
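One detail to watch with an append-based rewrite: the loop emits the least-significant digit first, so a StringBuilder that appends must be reversed before returning. A standalone illustrative check (not from the original review):

```java
public class BinaryCheck {
    static String getBinary(int given) {
        StringBuilder sb = new StringBuilder();
        while (given > 0) {
            sb.append(given % 2); // least-significant digit first...
            given /= 2;
        }
        return sb.reverse().toString(); // ...so reverse before returning
    }

    public static void main(String[] args) {
        // cross-check against the built-in radix conversion
        for (int n = 1; n < 1000; n++) {
            if (!getBinary(n).equals(Integer.toBinaryString(n))) {
                throw new AssertionError("mismatch at " + n);
            }
        }
        System.out.println(getBinary(42)); // 101010
    }
}
```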
Hint:
You can use Integer.toString(given, 2) instead of your Convert.getBinary(given) for the same result. | {
"domain": "codereview.stackexchange",
"id": 13645,
"tags": "java, object-oriented, converting, reinventing-the-wheel"
} |
Question regarding Lorentz Transformation and Space Contraction- Contradiction | Question: I stumbled upon this question regarding Special relativity- and have seemed to reach a contradiction.
I am trying to find the distance that the ball travels
I am obviously not looking for the numerical answer, but I'm trying to understand what should be my intuition when looking at these types of problems.
The train is moving at a velocity of $c/2$ relative to earth, while the ball is moving at a velocity of $c/3$ relative to the train.
The train has a proper length of $L_0$.
Now, when trying to find the distance that the ball travels regarding earth, I see two approaches:
If we set $t=t'=0$ as the time where the ball is at the back of the train ($x=x'=0$), we find that the time it travels in the train's frame of reference is $3L_0/c$ and the distance is $L_0$. Using the Lorentz transformation we find the distance traveled is $5L_0/\sqrt{3}$ in the earth frame of reference.
In the train's frame, the ball is moving at a speed of $c/3$ a distance of $L_0$. Using the proper length we are able to calculate the train's length in the earth's frame as $\sqrt{3}L_0/2$. Seeing as the ball starts at the back of the train (in both frames) and reaches the front of the train, this is the balls distance traveled.
What am I missing here? Which approach should I use and where does this contradiction come from?
Thanks a lot in advance!
Answer: When dealing with problems in Special Relativity its best to deal with individual events. So let's consider two events here: the ball leaves the "back end" of the train, and the ball arrives at the "front end" of the train.
Now let's see what we know: The ball leaves the back end ($x=0=x'$) at $t=0=t'$, and the front end of the train is at $x'=L_0$, the proper length. Using just this information, we can figure out everything else. For starters, in the train's rest frame, the ball covers a distance of $L_0$ with a speed of $c/3$ so, as you point out, the ball will hit the front end at $$t' = \frac{3L_0}{c}.$$ To make this clearer, let's make a table:
\begin{array} {|c|c|}\hline \textbf{Event} & \text{Train Frame} & \text{Earth Frame} \\ \hline \text{Ball leaves the back} &\,\,\quad t'=0,\quad\quad\,\, x'=0 \quad\quad & t=0, \quad x=0 \\
\hline \text{Ball arrives at the front} & t'=3L_0/c, \quad x'= L_0 & t = {\color{red}?}, \quad x = {\color{red}?} \\ \hline \end{array}
We can now use the information we have to find both $t$ and $x$, the coordinates of the second event ("Ball arrives at the front of the train") in the Earth Frame. Let's write the Lorentz Transformations in terms of the difference of the events: \begin{aligned}\Delta x &= \gamma (\Delta x' + v \Delta t')\\ ~\\\Delta t &= \gamma \left( \Delta t' + \frac{v}{c^2}\Delta x'\right)\end{aligned}
From the table, you should be able to see that $\Delta t' = 3L_0/c$ and $\Delta x' = L_0$. Given that $v=c/2$, you can show that \begin{aligned}\Delta x &= \frac{5}{\sqrt{3}}L_0\\ \Delta t &= \frac{7}{\sqrt{3}}\frac{L_0}{c} \end{aligned}
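These numbers are easy to verify with a short script (an illustrative check, not part of the original answer, working in units where $c = L_0 = 1$):

```python
import math

c = L0 = 1.0
v = c / 2
gamma = 1 / math.sqrt(1 - (v / c) ** 2)   # 2/sqrt(3)

# event separations in the train frame
dx_train = L0           # ball travels the proper length of the train
dt_train = 3 * L0 / c   # at speed c/3

# Lorentz transformation to the Earth frame
dx_earth = gamma * (dx_train + v * dt_train)
dt_earth = gamma * (dt_train + v * dx_train / c ** 2)

print(dx_earth, 5 * L0 / math.sqrt(3))        # both ~2.8868
print(dt_earth, 7 * L0 / (math.sqrt(3) * c))  # both ~4.0415
```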
So your first approach is correct, the distance between the events in the Earth frame is indeed $5L_0/\sqrt{3}$. However, we can now see why the second approach is wrong: the ball doesn't just cover the length of the train. The front wall of the train is also moving! Therefore, the distance covered by the ball in the Earth frame is given by $$\text{Length of the train (in Earth frame)} + \text{Distance covered by the front wall (in Earth frame)}$$
Clearly (as you pointed out) $$\text{Length of the train (in Earth frame)} = \frac{L_0}{\gamma} = \frac{\sqrt{3}}{2}L_0,$$ and the distance the front wall covers in the time $\Delta t$ is $$\text{Distance covered by the front wall (in Earth frame)} = v\Delta t = \frac{7}{2\sqrt{3}}L_0.$$
Adding them up, you can see that $$\text{Distance covered by ball in Earth frame} = \frac{L_0}{2}\left(\sqrt{3} + \frac{7}{\sqrt{3}}\right) = \frac{5}{\sqrt{3}}L_0,$$ as you'd imagine, since it's exactly what we calculated above. | {
"domain": "physics.stackexchange",
"id": 75726,
"tags": "special-relativity"
} |
moveit_setup_assistant loading pr2.urdf.xacro from official tutorial fails | Question:
I tried to load pr2.urdf.xacro with moveit_setup_assistant following the official tutorial
But after the loading bar reached 100%, the console became dark and finally stopped.
Note that moveit_setup_assistant can work well with other files, like the MoveIt tutorial in the book ROS By Example Vol 2.
Here is the error output from the terminal.
SUMMARY
========
PARAMETERS
* /rosdistro: indigo
* /rosversion: 1.11.20
NODES
/
moveit_setup_assistant (moveit_setup_assistant/moveit_setup_assistant)
auto-starting new master
process[master]: started with pid [66948]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to e1e4b9ae-b11e-11e6-b242-000c290a3fdb
process[rosout-1]: started with pid [66961]
started core service [/rosout]
process[moveit_setup_assistant-2]: started with pid [66964]
[rospack] Error: no package given
[librospack]: error while executing command
[ INFO] [1479865759.144299071]: Running 'rosrun xacro xacro.py /opt/ros/indigo/share/pr2_description/robots/pr2.urdf.xacro'...
[ INFO] [1479865763.768824299]: Loaded pr2 robot model.
[ INFO] [1479865763.768900516]: Setting Param Server with Robot Description
[ INFO] [1479865763.788140448]: Robot semantic model successfully loaded.
[ INFO] [1479865763.788221240]: Setting Param Server with Robot Semantic Description
[ INFO] [1479865763.809109524]: Loading robot model 'pr2'...
[ INFO] [1479865763.809186007]: No root joint specified. Assuming fixed joint
[ INFO] [1479865763.952783314]: Stereo is NOT SUPPORTED
[ INFO] [1479865763.952884243]: OpenGl version: 3 (GLSL 1.3).
[ INFO] [1479865764.130981952]: Loading robot model 'pr2'...
[ INFO] [1479865764.131149135]: No root joint specified. Assuming fixed joint
TIFFFetchNormalTag: Warning, Incompatible type for "RichTIFFIPTC"; tag ignored.
TIFFFetchNormalTag: Warning, Incompatible type for "RichTIFFIPTC"; tag ignored.
TIFFFetchNormalTag: Warning, Incompatible type for "RichTIFFIPTC"; tag ignored.
TIFFFetchNormalTag: Warning, Incompatible type for "RichTIFFIPTC"; tag ignored.
TIFFFetchNormalTag: Warning, Incompatible type for "RichTIFFIPTC"; tag ignored.
TIFFFetchNormalTag: Warning, Incompatible type for "RichTIFFIPTC"; tag ignored.
[ INFO] [1479865765.214522606]: Loading robot model 'pr2'...
[ INFO] [1479865765.214625559]: No root joint specified. Assuming fixed joint
================================================================================REQUIRED process [moveit_setup_assistant-2] has died!
process has died [pid 66964, exit code -11, cmd /opt/ros/indigo/lib/moveit_setup_assistant/moveit_setup_assistant __name:=moveit_setup_assistant __log:=/home/shawn/.ros/log/e1e4b9ae-b11e-11e6-b242-000c290a3fdb/moveit_setup_assistant-2.log].
log file: /home/shawn/.ros/log/e1e4b9ae-b11e-11e6-b242-000c290a3fdb/moveit_setup_assistant-2*.log
Initiating shutdown!
================================================================================
[moveit_setup_assistant-2] killing on exit
[rosout-1] killing on exit
[master] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done
NOTE:
version indigo
ubuntu 14.04
I cannot deal with this problem yet.
I installed Ubuntu as a dual boot alongside Windows 10, instead of using Ubuntu under VMware. It works now.
Originally posted by shawnysh on ROS Answers with karma: 339 on 2016-11-22
Post score: 0
Original comments
Comment by gvdhoorn on 2016-11-23:\
OpenGl version: 3 (GLSL 1.3).
Are you running this in a Virtual Machine? If yes: the PR2 model is quite resource intensive, and might cause virtual graphics hw/sw to fail, even if other (simpler) models do work.
Comment by shawnysh on 2016-11-23:
Yes, I run on vmware station.
memory 5.6 GiB
processor intel i7-6600U 2.6GHz,
graphics Gallium 0.4 on llvmpipe (LLVM 3.8, 256 bits),
OS type 64-bit
So, it there method to deal with it? /(ㄒoㄒ)/~~
Answer:
I installed Ubuntu as a dual boot alongside Windows, and it works now. The VM was killing it....
Originally posted by shawnysh with karma: 339 on 2016-11-29
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2016-11-29:
Good to hear you got it to work.
RViz with complex scenes in VM just doesn't seem to work too well. | {
"domain": "robotics.stackexchange",
"id": 26308,
"tags": "ros, moveit, moveit-setup-assistant"
} |
How data augmentation like rotation affects the quality of detection? | Question: I'm using an object detection neural network and I employ data augmentation to increase a little my small dataset. More specifically I do rotation, translation, mirroring and rescaling.
I notice that rotating an image (and thus its bounding box) changes its shape. This implies an erroneous box for elongated boxes; for instance, on the augmented image (right image below) the box is not tightly packed around the left player as it was on the original image.
The problem is that this kind of data augmentation seems (in theory) to hamper the network to gain precision on bounding boxes location as it loosens the frame.
Are there some studies dealing with the effect of data augmentation on the precision of detection networks? Are there systems that prevent this kind of thing?
Thank you in advance!
(Obviously, it seems advisable to use small rotation angles)
Answer:
The problem is that this kind of data augmentation seems (in theory) to hamper the network to gain precision on bounding boxes location as it loosens the frame.
Yes, it is clear from your examples that the bounding boxes become wider. Generally, including large amounts of data like this in your training data will mean that your network will also have a tendency to learn slightly larger bounding boxes. Of course, if the majority of your training data still has tight boxes, it should still tend towards learning those... but likely slightly wider ones than if the training data did not include these kinds of rotations.
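This widening can be quantified (an illustrative calculation, not from the original answer): rotating a $w \times h$ box by $\theta$ and re-fitting an axis-aligned box gives $w' = w|\cos\theta| + h|\sin\theta|$ and $h' = w|\sin\theta| + h|\cos\theta|$, so the box is unchanged (or merely swapped) at multiples of $90^\circ$ and widest near $45^\circ$:

```python
import math

def rotated_aabb(w, h, theta_deg):
    """Axis-aligned bounding box of a w x h box after rotation by theta."""
    t = math.radians(theta_deg)
    c, s = abs(math.cos(t)), abs(math.sin(t))
    return w * c + h * s, w * s + h * c

# an elongated, player-like box (made-up dimensions): 30 px wide, 120 px tall
for angle in (0, 5, 45, 90):
    w2, h2 = rotated_aabb(30, 120, angle)
    print(f"{angle:2d} deg -> {w2:6.1f} x {h2:6.1f}")
# 0 and 90 degrees keep the box tight (dimensions unchanged or swapped);
# already at 5 degrees the width grows noticeably, and 45 degrees is worst.
```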
Are there some studies dealing with the effect of data augmentation on the precision of detection networks? Are there systems that prevent this kind of thing?
(Obviously, it seems advisable to use small rotation angles)
I do not personally work directly in the area of computer vision really, so I'm not sufficiently familiar with the literature to point you to any references on this particular issue. Based on my own intuition, I can recommend:
Using relatively small rotation angles, as you also already suggested yourself. The bounding boxes will become a little bit wider than in the original dataset, but not by too much.
Using rotation angles that are a multiple of $90^\circ$. Note that if you rotate a bounding box by a multiple of $90^\circ$, the rotated bounding boxes become axis-aligned and your problem disappears again, they'll become just as tight as the bounding boxes in the unrotated image. Of course, you can also combine this suggestion with the previous one, and use rotation angles in, for example, $[85^\circ, 95^\circ]$.
Apply larger rotations primarily in images that only have bounding boxes that are approximately "square". From looking at your image, I get the impression that the problem of bounding boxes becoming wider after rotations is much more severe when you have extremely wide or thin bounding boxes (with one dimension much greater than the other). When the original bounding box is square, there still will be some widening after rotation, but not nearly as much, so the problem may be more acceptable in such cases. | {
"domain": "ai.stackexchange",
"id": 900,
"tags": "convolutional-neural-networks, object-recognition"
} |
How are the vitreous humour and aqueous humour of the eye, connected? | Question: My question is regarding the biological nature of the separation between the vitreous humour and aqueous humour of the human (or mammal) eye. What connects the two in terms of the passive transport of proteins between the two? Is there a single membrane?
If so what is the name of this membrane and is it the only thing separating the aqueous from the vitreous? What is the anatomical difference between the aqueous and the vitreous?
Apologies, I am far from a biologist.
Specifically what sort of transport is arrow 9 in the figure below representing? And is backward transport (from the aqueous to the vitreous) possible?
Any links to papers detailing this mechanism would be highly appreciated, I can only find references to experimental readings of concentrations, but nothing about the transport process itself.
Answer: Don't be confused by the word "humour": the vitreous body is present at birth and has a very low "exchange" rate of its components, while the aqueous is in constant turnover.
Secondly, the vitreous is an organ-like structure and is separated from other eye structures by its membrane, while the aqueous is a fluid produced by the ciliary body processes into the posterior chamber that moves anteriorly through the pupil. Aqueous can move posteriorly in cases of trauma, operations and other non-physiologic states.
ADD (after attachment of the image)
I'd correct some things in the scheme: label the vitreous as "Vitreous", not "Vitreous humour"; change the arrow pointing at the ciliary body as on the following image; and depict the vitreous body membrane as I did.
In addition learn two terms: Cloquet canal and Berger's space.
Number 9 shows a vitreo-aqueous route, which blood, drugs injected into the vitreous, etc. will follow (it is not the only route). | {
"domain": "biology.stackexchange",
"id": 4266,
"tags": "eyes, human-eye"
} |
Emulating a Variable Delay | Question: I am trying to emulate a satellite link where the delay time between the satellite and ground station changes over time due to the satellite's motion. I plan to do this by converting to the frequency domain via FFT, multiplying by $e^{-j 2\pi f \tau_{delay}}$, and converting back to the time domain via IFFT.
However, when I implement this in GNURadio, it fails to work as I expect.
I generate a signal, pass it through the delay, and then match filter the original signal with the delayed signal. I then change the delay with a GUI slider. When I use the frequency-domain-phase-shifting method that I implemented, the correlation peak does not move when I change the delay. However, if I substitute in gr-baz's variable delay block, the peak does move when I change the delay.
Why does my implementation of the shifting method not work? Is it because my delay block only works with a finite window of the signal?
Answer: Please see the answer to this question How to implement Polyphase filter?
on how to implement a polyphase interpolator, and then note that the same structure without actually commutating the output can be used as a discrete variable delay; each output is the input within the passband of the filter at a different delay in uniform discrete steps. By stepping through the outputs continuously as we do in the interpolator, we achieve a higher-sampled version of the input. By staying at one output we get the input at a specific delay based on that filter; by changing to another output we get another delay. The design of the filter as given in the link will properly create the filters such that each is a different delay of the other (with the same amplitude response) in uniform steps.
Continuing with the example shown of "Resampling" further shows how a variable delay device results:
Here is shown a IS95 CDMA waveform with 2 samples per symbol (as indicated by the red dots on this eye diagram). The analog waveform that is represented by these samples is shown in blue.
If we used the polyphase structure as an interpolator, we would get all the samples in between the 2 samples per symbol we have, evenly spaced, given by the number of filter banks in the polyphase structure.
Therefore it should be clear that we can select any filter output that is closest to the desired delay we want to achieve (in this case within one symbol duration).
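A minimal windowed-sinc sketch of such a discrete fractional delay (illustrative NumPy, not GNURadio code; each value of `d` plays the role of one branch of the polyphase bank, and stepping `d` over time yields the time-varying satellite delay):

```python
import numpy as np

def frac_delay_fir(delay, num_taps=51):
    """Windowed-sinc FIR delaying by `delay` samples, plus a bulk delay
    of (num_taps - 1) / 2 that np.convolve(..., mode='same') removes."""
    n = np.arange(num_taps)
    h = np.sinc(n - (num_taps - 1) / 2 - delay)
    h *= np.hamming(num_taps)
    return h / h.sum()  # normalize to unity DC gain

fs = 100.0
n = np.arange(400)
x = np.sin(2 * np.pi * 3.0 * n / fs)   # a slow test tone

for d in (0.0, 0.25, 0.5):             # delays in fractions of a sample
    y = np.convolve(x, frac_delay_fir(d), mode='same')
    # away from the edges, y approximates x delayed by d samples
```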
As mentioned in the comments a Farrow Filter is another approach to implementing a variable fractional delay in an FIR filter. | {
"domain": "dsp.stackexchange",
"id": 6602,
"tags": "fft, delay, gnuradio"
} |
Rust shortest way to find the maximum product of a fixed length substring | Question: I'm trying to code in a Rust a function, that gives the maximum product of adjacent digits.
So given the string 123456789, with 3 adjacent digits, the maximum is 7 * 8 * 9 = 504.
I've tried to come up with this current functional solution (see previous edit for longer solutions). Could it be shortened? Comments are welcome.
The use of logarithms is intentional and desired, so that the code runs quickly.
use std::iter::Map;
use std::str::Chars;

fn iter_log_digits(str: &str) -> Map<Chars<'_>, fn(char) -> f64> {
    str.chars().map(|c: char| (c.to_digit(10).unwrap() as f64).ln())
}

/// Returns the maximum log sum of adjacent digit characters.
/// # Arguments
/// - `str`: Assumes there are no 0s.
/// - `adj_count`: The number of adjacent digit characters.
fn max_adj_log_sum_no_zeros(str: &str, adj_count: usize) -> f64 {
    if str.len() < adj_count {
        return 0.;
    }
    let log_first: f64 = iter_log_digits(&str[..adj_count]).sum();
    iter_log_digits(&str).zip(iter_log_digits(&str[adj_count..])).fold(
        (log_first, log_first), // `(log_cur, log_max)`
        |(log_cur, log_max), (x_left, x_right)| {
            let log_next = log_cur - x_left + x_right;
            (log_next, log_max.max(log_next))
        }
    ).1
}

fn max_product_no_zeros(str: &str, adj_count: usize) -> u64 {
    max_adj_log_sum_no_zeros(str, adj_count).exp().round() as u64
}
max_product_no_zeros("123456789", 3) should return 504.
Answer: General observations:
An indentation of two spaces is quite small. The standard formatting uses four spaces.
Names like Map and Chars read better with their modules: iter::Map, str::Chars.
str: &str — don't do that; naming a variable str shadows the primitive type str.
When str is not entirely made of decimal digits, your function panics by unwrapping. Reporting an error via Result is generally preferred. Magic also happens when str contains zeros.
Using fold here makes the semantics harder to understand. A simple for loop might be more suitable.
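For the error-reporting point, one possible sketch (illustrative, not from the original review) collects the digits into a Result so that a bad character is reported instead of panicking:

```rust
// Report the first non-digit character via Err instead of panicking.
fn digits(s: &str) -> Result<Vec<u64>, char> {
    s.chars()
        .map(|c| c.to_digit(10).map(u64::from).ok_or(c))
        .collect() // an Iterator of Results collects into Result<Vec<_>, _>
}

fn main() {
    assert_eq!(digits("123"), Ok(vec![1, 2, 3]));
    assert_eq!(digits("12x"), Err('x'));
}
```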
Now, my intuition tells me floating point calculations are less efficient than just doing the operations on integers, so I wrote a benchmark. Here's the result that I got:
running 4 tests
test l_f::tests::test ... ignored
test simonzack::tests::test ... ignored
test l_f::tests::bench ... bench: 10,189,230 ns/iter (+/- 278,958)
test simonzack::tests::bench ... bench: 16,865,990 ns/iter (+/- 440,350)
test result: ok. 0 passed; 0 failed; 2 ignored; 2 measured; 0 filtered out
Using integers resulted in a ~1.7x speedup. Do note that this benchmark is very crude, and you might want to measure your own use cases.
For reference, here's the code for the benchmark. (Error checking is omitted for simplicity.)
#![feature(test)]
extern crate test;

mod l_f {
    fn to_digits(digits: &str) -> impl Iterator<Item = u64> + '_ {
        digits.chars().map(|c| c.to_digit(10).unwrap().into())
    }

    pub fn max_product_no_zeros(digits: &str, length: usize) -> u64 {
        if digits.len() < length {
            return 0;
        }
        let mut current_product: u64 = to_digits(&digits[..length]).product();
        let mut max_product = current_product;
        for (left, right) in to_digits(digits).zip(to_digits(&digits[length..])) {
            current_product = current_product / left * right;
            max_product = max_product.max(current_product);
        }
        max_product
    }

    #[cfg(test)]
    mod tests {
        use test::Bencher;

        #[test]
        fn test() {
            assert_eq!(super::max_product_no_zeros("31415926", 3), 108);
        }

        #[bench]
        fn bench(b: &mut Bencher) {
            let digits = test::black_box("9".repeat(1_000_000));
            b.iter(|| super::max_product_no_zeros(&digits, 10));
        }
    }
}

mod simonzack {
    use std::iter::Map;
    use std::str::Chars;

    fn iter_log_digits(str: &str) -> Map<Chars<'_>, fn(char) -> f64> {
        str.chars()
            .map(|c: char| (c.to_digit(10).unwrap() as f64).ln())
    }

    /// Returns the maximum log sum of adjacent digit characters.
    /// # Arguments
    /// - `str`: Assumes there are no 0s.
    /// - `adj_count`: The number of adjacent digit characters.
    fn max_adj_log_sum_no_zeros(str: &str, adj_count: usize) -> f64 {
        if str.len() < adj_count {
            return 0.;
        }
        let log_first: f64 = iter_log_digits(&str[..adj_count]).sum();
        iter_log_digits(&str)
            .zip(iter_log_digits(&str[adj_count..]))
            .fold(
                (log_first, log_first), // `(log_cur, log_max)`
                |(log_cur, log_max), (x_left, x_right)| {
                    let log_next = log_cur - x_left + x_right;
                    (log_next, log_max.max(log_next))
                },
            )
            .1
    }

    pub fn max_product_no_zeros(str: &str, adj_count: usize) -> u64 {
        max_adj_log_sum_no_zeros(str, adj_count).exp().round() as u64
    }

    #[cfg(test)]
    mod tests {
        use test::Bencher;

        #[test]
        fn test() {
            assert_eq!(super::max_product_no_zeros("31415926", 3), 108);
        }

        #[bench]
        fn bench(b: &mut Bencher) {
            let digits = test::black_box("9".repeat(1_000_000));
            b.iter(|| super::max_product_no_zeros(&digits, 10));
        }
    }
} | {
"domain": "codereview.stackexchange",
"id": 39505,
"tags": "rust, iterator"
} |
Displaying results of image filters | Question:
I would like a way to display images and results of several image filters and data extracted from images in a interactive GUI. The GUI would need to display at least 6 panels, some of which would be 2 or 3 dimensional plots, and then a couple panels to show results of image filters. One of the plots should also be interactive, allowing for a data point selection tool.
Finally, I would want displaying this data to be integrated with ROS. For example, one of the panels should show the result of listening to a topic that is publishing sensor_msgs/Image. Other panels that show filtered images should show the data being published by my ROS nodes or the results of calling my ROS services.
I'm a little new to python, C++, and ROS, so I apologize for my ignorance. Is there a tool or set of tools that can help me do this?
Originally posted by robzz on ROS Answers with karma: 328 on 2013-01-27
Post score: 0
Answer:
You can do all the image processing with ROS using CV_Bridge, and rxplot for the graphs.
Originally posted by jd with karma: 62 on 2013-01-27
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by facesad on 2013-09-04:
of course you can find a lot of online image sdk that is able to display image the data and the image filters vb.net. change a few lines then they will be able to integrate with ROS. | {
"domain": "robotics.stackexchange",
"id": 12593,
"tags": "ros, gui, images"
} |
Why isn't this a valid derivation of the formula for capacitors in series? | Question: I had to derive the formula for capacitors
(I decided to use m capacitors in my derivation) in series, and this is what I did.
The formula for a capacitor is
$$Q=CV,$$
which is the same as saying
$$\int I \, dt = C_n V_n.$$
Since they are all in series, $I_1=I_2=...=I_n=I$, thus $Q_1=Q_2....=Q$.
We also know that
$$V=IR \Longrightarrow \sum \frac{Q}{C_n} = V=\frac{Q}{C_{eq}}.$$
Since they are in series, we can apply Kirchhoff's voltage law, obtaining
$$ \frac{Q}{C_1}+\frac{Q}{C_2}+\dots+\frac{Q}{C_m}= \frac{Q}{C_{eq}} \Longrightarrow Q \sum_{n=1}^{m} C_n^{-1} = \frac{Q}{C_{eq}}.$$
Thus we conclude that
$$ \sum_{n=1}^{m} C_n^{-1} = \frac{1}{C_{eq}}.$$
Is this a proper derivation of the result?
Answer: You bring up an $R$, for no known reason, and immediately after that write down something equivalent to what you want to show that also comes out of nowhere, so it's very very confusing as written.
In series, the voltage of the unit is the sum of the voltages of the parts.
In series, the charge of each one of the plates has to be the same because in the region between two plates, the charge from one side/plate comes from the other side (the one before/after in series). Thus, the $Q$ of the unit is the $Q$ of the parts.
These two facts give you what you want, write the voltage as the sum of voltages, express those voltages in terms of $Q$ and $C_i$, then divide by the $Q$ of the whole unit (which is the regular $Q$ of the parts), to get an expression for the total capacitance. | {
"domain": "physics.stackexchange",
"id": 19467,
"tags": "homework-and-exercises, electric-circuits, capacitance"
} |
How to use a dataset with only one category of data | Question: I am performing a classification task, to try to detect an object. A picture of the environment is taken, candidates are generated of this possible object using vision algorithms, and once isolated, these candidates will be passed through a CNN for the final decision on whether the object has been detected or not. I am attempting to use transfer learning on InceptionV3 but am having difficulty training it, as I only have one set/class of images.
The dilemma is that I only have one class of data and when I pass it through the network, I get a 100% accuracy (because there is nothing to compare it to). How should I overcome this? Should I find more categories online to add to my dataset? What should these categories be?
Just to clarify, as an example, I have class "cat".
Not "cat" and "dog".
Not "cat" and "no cat".
Just "cat". That is what my dataset consists of at the moment.
Answer: The model learns its weights from the images and from the feedback given by the label data.
If you feed it a few image classes as "Not Cat", it will learn to classify similar features as "Not Cat", but it might fail for a new class.
E.g. if it is trained on "Car/Furniture/Dog" as "Not Cat", then chances are high that a wild cat will be classified as "Cat".
Dumping the whole ImageNet dataset into the "Not Cat" class will definitely provide quite a good variance and may work most of the time, but that is not the appropriate solution to the problem.
Such type of problem will fall under One-Class-Classification.
The core idea is to use a CNN to extract features, then use some specialized model, e.g. a one-class SVM, Gaussian mixtures, etc., to define a boundary for "Cat".
This problem, as defined by the one-class SVM approach, consists of identifying a sphere enclosing all (or the most) of the data. The classical strategy to solve the problem considers a simultaneous estimation of both the center and the radius of the sphere.
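The "center and radius of a sphere" idea above can be illustrated with a deliberately minimal sketch (this is not the full one-class SVM optimization, just the geometric intuition; the feature vectors are made-up stand-ins for CNN features):

```python
import numpy as np

def fit_sphere(features, quantile=0.95):
    """Enclose most of the one-class training features in a sphere."""
    center = features.mean(axis=0)
    dists = np.linalg.norm(features - center, axis=1)
    radius = np.quantile(dists, quantile)  # ignore the most extreme 5%
    return center, radius

def is_inlier(x, center, radius):
    return np.linalg.norm(x - center) <= radius

rng = np.random.default_rng(1)
cat_features = rng.normal(size=(500, 8))  # stand-in for CNN "cat" features
center, radius = fit_sphere(cat_features)
print(is_inlier(np.zeros(8), center, radius))       # near the center -> True
print(is_inlier(np.full(8, 10.0), center, radius))  # far away -> False
```

A real one-class SVM additionally optimizes the center/radius jointly and works in a kernel-induced feature space, but the decision rule has the same flavor.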
You may start with these links(In the specified order) -
Hackernoon blog
Arxiv
Researchgate
There are other approaches too, e.g. based on an autoencoder: there, we put a threshold on the reconstruction error.
References-
Quora
SO
Keras blog
Also, you may look here to check an idea which generates random images for the "Not Cat" class: Here | {
"domain": "datascience.stackexchange",
"id": 8231,
"tags": "keras, cnn"
} |
Can a nucleus be decayed to only half of it? | Question: I was wondering while studying radioactivity: out of millions of nuclei, can only half of a nucleus decay? For example, say at time t = 60 sec the decay rate is 2.5 nuclei/second. That means 2 nuclei and "0.5" of a nucleus decayed! What about the other 0.5? Any help would be appreciated.
Answer: Decay is inherently probabilistic. That means after X time, a nucleus has Y% chance of decaying (through some mechanism) into other particles. This translates into what is called an expectation value for the number of nuclei left in a sample after some time. This is a function of time. Differentiating this function with respect to time (i.e. finding the rate of change at that instant), we get 2.5 nuclei/second.
This doesn't mean we lose half a nucleus, because that's simply not how decay in this context works. (Nuclear fission is a thing, but it's not relevant to this discussion.) It means we expect to lose 2.5 nuclei, because there is some chance that over a second, we lose 1 nucleus, there is some more chance we lose 2 nuclei, there is a chance we lose 3 nuclei, there is even a really, really, really (and I mean astronomically) low chance that we lose ALL the nuclei (this is possible but we will never see it happen).
All these chances mathematically average out to 2.5 nuclei/second, even though there is zero chance that we will lose 2.5 nuclei in one second (simply because losing 0.5 nuclei to decay doesn't really make sense in this scenario).
This is a very common thing in probability, because working with expectation values (even if they don't precisely make physical sense, like "the average person has 2.75 children") makes it very easy to estimate things for large data sets. For example if we have 300,000,000 families, and we know that the average family has 2.75 children, then our estimate of how many children there are (300,000,000*2.75 = 825,000,000) will be pretty close to the actual number of children—even though 0.75 children doesn't make sense.
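A quick simulation makes this concrete (a sketch, assuming the number of decays in a fixed interval is Poisson-distributed with mean 2.5, which is the standard model for independent rare decays):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 2.5  # expected decays per one-second interval
counts = rng.poisson(rate, size=100_000)  # decays observed in each interval

# every individual observation is a whole number of nuclei,
# yet the average over many intervals comes out close to 2.5
print(counts.min(), counts.max(), counts.mean())
```

No single interval ever loses 2.5 nuclei; only the long-run average does.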
Hope this clears things up for you! | {
"domain": "physics.stackexchange",
"id": 45763,
"tags": "nuclear-physics, radioactivity"
} |
DFS/BFS implementation of Cormen's pseudo code | Question: This is DFS/BFS C++ code for Cormen pseudo code. Please comment on this.
#include "iostream"
#include "vector"
#include "list"
#include "queue"
enum color{
WHITE,
GREY,
BLACK
};
struct edge{
int destination_vertex;
edge(int ver){
destination_vertex = ver;
}
};
struct vertex{
int id;
color visited;
std::list<edge> list;
vertex(int _id){
id = _id;
}
};
class graph
{
private:
std::vector<vertex> vertexes;
void dfs_visits(vertex& source);
int next;
public:
graph(void){
next = 0;
}
~graph(void){}
void dfs();
void bfs();
};
void graph::dfs(void)
{
for(std::vector<vertex>::iterator iter =vertexes.begin();iter < vertexes.end();iter++ ){
iter->visited = WHITE;
}
for(std::vector<vertex>::iterator iter =vertexes.begin();iter < vertexes.end();iter++ ){
if(iter->visited == WHITE){
dfs_visits(*iter);
}
}
}
void graph::dfs_visits(vertex& source){
source.visited = GREY;
for(std::list<edge>::iterator iter = source.list.begin();iter != source.list.end();iter++){
if(vertexes[iter->destination_vertex].visited == WHITE){
dfs_visits(vertexes[iter->destination_vertex]);
}
}
source.visited = BLACK;
std::cout<< source.id <<std::endl;
}
void graph::bfs(){
for(std::vector<vertex>::iterator iter=vertexes.begin();iter != vertexes.end();iter++){
iter->visited = WHITE;
}
std::queue<vertex*> bsf_q;
bsf_q.push(&vertexes[0]);
while(!bsf_q.empty()){
vertex * v = bsf_q.front();
bsf_q.pop();
for(std::list<edge>::iterator iter = v->list.begin() ;iter != v->list.end();iter++ ){
if(vertexes[iter->destination_vertex].visited == WHITE){
vertexes[iter->destination_vertex].visited = GREY;
bsf_q.push(& vertexes[iter->destination_vertex]);
}
}
v->visited = BLACK;
std::cout << v->id <<std::endl;
}
}
Answer: Preliminaries
It's conventional to include system headers with brackets instead of quotes, and to prefer std::vector over std::list unless proven wrong by a benchmark (note that the word "list" in CLRS is not used in the same sense as std::list).
#include <vector>
#include <limits>
Data structures
Separate your algorithms and data structures and use a common naming convention where types have capitalized initial letters
enum {
INFINITY = std::numeric_limits<int>::max()
};
enum Color {
WHITE,
GREY,
BLACK
};
struct Vertex {
int id;
// BFS properties
Color color;
int discovery;
Vertex* parent;
// DFS properties
int finish;
};
// adjacency list representation (Figure 22.1 (b) of CLRS 3rd ed.)
struct Graph {
std::vector<Vertex> vertices;
std::vector< std::vector<Vertex*> > adjacent;
};
Several things can be said about my choice of data structures. For the purpose of illustrating BFS and DFS I wouldn't make them full-fledged classes with data hiding. You also need to set up some scaffolding code to initialize a Graph object, and make sure that the Vertex* inside adjacent are actually only pointing to Vertex objects inside vertices of the same Graph object (so that in fact adjacent only holds non-owning pointers). Real-world code would encapsulate the enforcement of such invariants inside the constructor and modifying member functions. For brevity this is not being shown.
I also would remove the Edge class because it is implicitly represented as an Vertex* in the adjacency list graph representation. Finally, because the id in Vertex is an int, you get away with using a std::vector as a container. For std::string vertex labels, one would need to use a std::map<std::string, std::vector<Vertex*>> for the adjacent data inside Graph.
Note that the properties of BFS and DFS are put inside Vertex. This is to accommodate the style of notation of CLRS. They write on p592 that you can put that information in separate data structures as well (e.g. a std::map<Vertex*, SearchProperties>). Boost.Graph (the closest thing in C++ to a standardized graph implementation) uses such property maps to achieve the same effect more elegantly.
Breadth-first search
Document your algorithms pre- and post-conditions, as well as the runtime complexity as a function of the input size (look it up in CLRS!). I would give the function breadth_first_search the same signature as in CLRS, except that they are not very careful about whether they pass by value, reference or pointer.
#include <queue>
// pre-condition: graph of vertices with unitialized BFS properties
void breadth_first_search(Graph& g, Vertex* s)
{
for (auto& v: g.vertices) {
if (v.id == s->id) continue;
v.color = WHITE;
v.discovery = INFINITY;
v.parent = nullptr;
}
s->color = GREY;
s->discovery = 0;
s->parent = nullptr;
std::queue<Vertex*> q;
q.push(s);
while (!q.empty()) {
auto u = q.front();
q.pop();
for (auto v: g.adjacent[u->id]) {
if (v->color == WHITE) {
v->color = GREY;
v->discovery = u->discovery + 1;
v->parent = u;
q.push(v);
}
}
u->color = BLACK;
}
}
// post-condition: graph with initialized vertex color and discovery times
Notice that I would strongly recommend using the C++11 features auto type deduction and the range-based for loop. These enormously improve the readability of your code: the code above is in almost exactly the same notation as the listing of BFS(G,s) on p595 of CLRS (except that I use color, discovery and parent instead of the terser c, d and pi).
Except for debugging purposes, I would eliminate the std::cout calls from the BFS algorithm. Because the algorithm also keeps track of the parent information, you can completely retrace the algorithms steps after it finishes, should you want to, and print whatever information necessary.
Depth-first search
For the depth-first search, I again would follow the CLRS conventions as much as possible:
void depth_first_search(Graph& g)
{
for (auto& u: g.vertices) {
u.color = WHITE;
u.parent = nullptr;
}
for (auto& u: g.vertices) {
if (u.color == WHITE)
depth_first_search_visit(g, &u, 1);
}
}
void depth_first_search_visit(Graph& g, Vertex* u, int time)
{
u->color = GREY;
u->discovery = time;
for (auto v: g.adjacent[u->id]) {
if (v->color == WHITE) {
v->parent = u;
depth_first_search_visit(g, v, time + 1);
}
}
u->color = BLACK;
u->finish = time + 1;
}
Note that I would pass the time variable by value along the recursive calls to depth_first_search_visit, which makes it slightly clearer to reason about the algorithm (and also avoids a few explicit increments here and there). Again note that the code above is in almost exactly the same notation as the listing of DFS(G) on p604 of CLRS.
Extra: note that BFS is written as a single function with an init phase + infinite loop over a std::queue, and DFS as an init function + a recursive fuction. It is a nice exercise to rewrite DFS as a single function with an init phase + infinite loop over a std::stack (instead of the implicit stack used by the recursion).
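A rough sketch of that exercise (in Python for brevity; a C++ version would swap the list for a std::stack). A vertex is popped, and hence finished, only once all of its neighbours have been exhausted, which reproduces the recursive finish order:

```python
WHITE, GREY, BLACK = range(3)

def depth_first_search(adjacent):
    """adjacent: dict mapping node -> list of neighbours.
    Returns nodes in order of their DFS finish times."""
    color = {u: WHITE for u in adjacent}
    order = []
    for s in adjacent:                       # init phase: try every root
        if color[s] != WHITE:
            continue
        stack = [s]                          # explicit stack replaces recursion
        while stack:
            u = stack[-1]
            if color[u] == WHITE:
                color[u] = GREY              # "discovered"
            # find the next undiscovered neighbour, if any
            nxt = next((v for v in adjacent[u] if color[v] == WHITE), None)
            if nxt is None:
                stack.pop()
                color[u] = BLACK             # "finished"
                order.append(u)
            else:
                stack.append(nxt)
    return order

print(depth_first_search({0: [1, 2], 1: [2], 2: [], 3: [0]}))  # -> [2, 1, 0, 3]
```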
Further reading
If you want to know how the pros are doing it: familiarize yourself with Boost.Graph. | {
"domain": "codereview.stackexchange",
"id": 3956,
"tags": "c++, graph, breadth-first-search, depth-first-search"
} |
Programming challenge “Greatest Odd Divisor” | Question: I am trying to solve a programming problem on a coding platform. When I submit the code on the coding platform, it throws a "Time Limit Exceeded" error. Can someone check my solution and help optimize it?
A food delivery company X gets a lot of orders every day. It charges some
commission from the restaurants on these orders. More formally, if an order
value is K, X charges a commission which is the greatest odd divisor of K. You
can assume that an order value will always be an integer. Given an order value
N, and let C(N) be the greatest odd-divisor of N, output the value of C(1) +
C(2) + C(3) + … + C(N).
INPUT : Input will be an integer, N, the order value. 1 <= N <= 10^9
OUTPUT : Single integer which is the answer
import java.util.*;
public class Solution {
static double GOD(double num)
{
if(num%2!=0)
{
return num;
}
else
{
for (double i = num / 2; i > 0 ;i--)
{
if (num % i == 0 && i % 2 != 0)
{
return i;
}
}
return 0;
}
}
public static void main(String args[] ) throws Exception {
Scanner sc = new Scanner(System.in);
double num = sc.nextDouble();
double sum = 0;
for(double i = 1; i<=num; i++)
{
sum = sum + GOD(i);
}
System.out.println((int)sum);
}
}
Answer: You should not be using doubles to perform integer arithmetic. Stick to int or long.
This is a brute-force search that follows the instructions very literally. That is the wrong approach. Similar to Project Euler Problem 3, the fast way to find the largest factor is by finding the smallest factors. Just divide by 2 as many times as possible.
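That repeated halving can be sketched in a few lines (Python here for brevity, though the poster's code is Java; `greatest_odd_divisor` is just an illustrative name):

```python
def greatest_odd_divisor(k):
    # strip all factors of two; what remains is the largest odd divisor
    while k % 2 == 0:
        k //= 2
    return k

print(greatest_odd_divisor(360))  # -> 45
```

This runs in O(log k) divisions instead of scanning down from k/2.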
Example:
$$360 = 2^3 \cdot 45$$
The largest odd divisor is therefore 45. | {
"domain": "codereview.stackexchange",
"id": 21510,
"tags": "java, programming-challenge, time-limit-exceeded"
} |
Header usage among packages | Question:
I'm having a problem including a header file from another package.
In package package_a I have created an header file and I want to include it in package_b. I thought this should be possible using
#include <package_a/header.hpp>
I also have the dependency included in the manifest. I'm getting the error that there is no such file or directory.
It's probably a linking/library issue but I'm not familiar with linking.
I'm using this header file to share some definitions (constants) among packages, so if there is a (better) alternative solution, feel free to share.
Originally posted by Jeroen on ROS Answers with karma: 13 on 2011-07-07
Post score: 0
Answer:
You need to export the include and possibly linker flags of the header in its manifest.
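For a rosbuild-era package, that export block in package_a's manifest.xml typically looks something like this (a sketch; the cflags path assumes the headers live under package_a's include directory):

```xml
<export>
  <cpp cflags="-I${prefix}/include"/>
</export>
```

With that export in place (plus the `<depend package="package_a"/>` entry in package_b's manifest), `#include <package_a/header.hpp>` should resolve after rebuilding.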
Originally posted by dornhege with karma: 31395 on 2011-07-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by seanarm on 2011-07-08:
Right. Here's an example. It goes in package_a's manifest file and assumes you have your includes in package_a's include directory: | {
"domain": "robotics.stackexchange",
"id": 6073,
"tags": "ros, rosmake, header"
} |
What do $\nabla$ and $\frac{d }{d t}$ mean when they are by themselves? | Question: In QM and QFT, I have seen some equations where they have just the derivative and/or the gradient without specifying what it is acting on.
Taken from wiki.
This does not make sense to me since I learned that gradients and derivatives can only act on functions and not exist by themselves (unless they are vector components). This seems to be going against the grain of what I learned.
Can someone please help me understand how can we just put a derivative without specifying what it is acting on?
Answer: I am ignoring $\frac{\partial}{\partial t}$, since it does not involve the issue that confuses you.
You are right to be puzzled by the abuse of notation, $\hat p \sim -i\hbar \frac{d}{dx}$, but your teacher should have made this very clear right from the start. What it means is that this is a representation in the coordinate picture, which is to say
$$
\hat p = -i\hbar\int dx ~~ |x\rangle \frac{d}{dx}\langle x|.
$$
Acting on any state in Hilbert space, it yields, e.g.,
$$
\hat p |\psi\rangle= -i\hbar\int dx~~ |x\rangle \frac{d}{dx} \psi(x).
$$
(Sometimes, informally, people summarize the above in code, abusively, as $ \hat p_x \psi(x)= -i\hbar \frac{d}{dx} \psi(x)$, which apparently confused you.)
From the definition above, you may also easily derive
$$
\hat p = \int d p ~~ |p\rangle p \langle p|.
$$
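Not part of the original answer, but a quick numerical sanity check of the coordinate-space rule: applying $-i\hbar\,d/dx$ to the plane wave $\psi(x)=e^{ikx}$ should return $\hbar k\,\psi(x)$. A sketch with $\hbar=1$, using a finite-difference derivative on a grid:

```python
import numpy as np

hbar, k = 1.0, 3.0
x = np.linspace(0.0, 2.0 * np.pi, 20_001)
psi = np.exp(1j * k * x)                    # plane wave, momentum eigenstate
p_psi = -1j * hbar * np.gradient(psi, x)    # -i hbar d/dx acting on psi(x)

# away from the grid boundaries, p_psi should equal hbar*k*psi
err = np.max(np.abs(p_psi[1:-1] - hbar * k * psi[1:-1]))
print(err)  # tiny discretization error
```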
This is a reminder of the logical power of the Dirac notation, and underscores the conceptual service he's offered the theory. | {
"domain": "physics.stackexchange",
"id": 82349,
"tags": "quantum-mechanics, operators, differentiation, notation, mathematics"
} |
The role of the root switch after Spanning Tree Protocol has established a tree network in a LAN? | Question: In Spanning Tree Protocol, a root switch is selected at first, and then somehow, the shortest path from each other switch to the root is obtained. Thus we established a tree network.
My questions are:
After the tree network is established, does the root switch act as a center in the network? In other words, all the traffic originating from other switches would first flow into the root switch along the shortest path and then be routed to its destination by the root switch, also along the shortest path.
If it's not the case, why does the protocol calculate a tree such that the path between each non-root switch and the root is minimized? As far as I can see, a minimum spanning tree would work better, which is not difficult to calculate.
Answer: I can find no requirement that all traffic is routed through the root switch. For example, in A<-->B<-->C, with C the root switch, traffic from A to B does not pass through the root switch.
The spanning tree protocol first selects a root switch. In Cisco, this is the switch with the lowest bridge ID. So the selection of the root is arbitrary. Now a tree is built that connects all switches, without creating cycles. Even if initially a minimum spanning tree is created because links with maximum bandwidth (minimum cost) are selected, at some future point this may no longer be the case when a link fails and a redundant link or path is enabled. This new link may result in the total tree not being a minimum spanning tree anymore. I can imagine that reconfiguring the whole tree to be minimum again can cause unwanted overhead.
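For contrast with STP's shortest-path-to-root tree, the minimum spanning tree the asker mentions is what Kruskal's algorithm builds: sort the links by cost and greedily add any link that does not create a cycle. A rough union-find sketch:

```python
def kruskal(num_nodes, edges):
    """edges: iterable of (cost, u, v); returns total cost of an MST."""
    parent = list(range(num_nodes))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # joining two components cannot create a cycle
            parent[ru] = rv
            total += cost
    return total

# triangle of links: the cost-3 link is redundant and left out of the tree
print(kruskal(3, [(1, 0, 1), (2, 1, 2), (3, 0, 2)]))  # -> 3
```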
I found this information on Wikipedia and Cisco. I am not a network guru so my answer may not be complete. I found no information that the initial tree created is minimum, though it seems to meet the requirements and seems to follow e.g. Kruskal's algorithm. | {
"domain": "cs.stackexchange",
"id": 12870,
"tags": "computer-networks, spanning-trees, communication-protocols"
} |
Absorption HOMO-LUMO, canonical? | Question: So I already asked yesterday about canonical and localized orbitals together with Koopmans' theorem, where we came to the conclusion that, for example for ionization potentials, localized orbitals cannot be used. So I assume an ionization would be from the HOMO, which leaves me with the question: whenever we say an electron is excited from the HOMO into the LUMO of a system, are we talking about the canonical forms then? Because before I knew about those two, I always imagined that there is a local position somewhere in the molecule, like a double bond, where one electron is excited into an antibonding MO, creating a single or 1.5 bond, and that you can really tell from where to where the transition is. But canonical orbitals do not really look like how we imagine orbitals; they are distributed over the whole molecule.
Answer: Short answer is yes, HOMO and LUMO refer to canonical orbitals, since only those have at least some physical meaning.
Two more remarks about that:
Koopmans' theorem only works if you add or remove a single electron.
If you change two or more electrons of your system (e.g. exciting one electron is first removing one, then adding one in another orbital), you get additional terms in the energy difference between both states. Those terms come from the electron-electron interaction of the two changed orbitals/electrons.
In case of changing only one orbital, those additional terms do not appear since there is no electron-electron self-interaction.
Furthermore, the approximation of Koopmans' Theorem, that orbitals do not change (do not relax, remain stationary) on removing/adding electrons, gets worse the more electrons change.
Orbitals themselves are only an approximation. Factorizing the total electronic wave function into orbitals is mathematically only possible for non-interacting electrons, or, as in Hartree-Fock, if you assume some mean field for the electron-electron interaction.
The resulting canonical orbitals do not always match the simplified picture based on our chemical intuition. | {
"domain": "chemistry.stackexchange",
"id": 8507,
"tags": "computational-chemistry, spectroscopy"
} |
Calculate query coverage from BLAST output | Question: I have a BLAST output file and want to calculate query coverage, appending the query lengths as an additional column to the output. Let's say I have
2 7 15
f=open('file.txt', 'r')
lines=f.readlines()
import re
for line in lines:
new_list=re.split(r'\t+',line.strip())
q_start=new_list[0]
q_end=new_list[1]
q_len=new_list[3]
q_cov=((float(q_end)-float(q_start))/float(q_len))*100
q_cov=round(q_cov,1)
q_cov=str(q_cov)
new_list.append(q_cov)
r=open('results.txt', 'a')
x='\t'.join(new_list)
x=x+'\n'
r.writelines(x)
f.close()
r.close()
Answer: One serious bug is that you open results.txt for each line of input. It's almost always better to open files in a with block. Then, you won't have to worry about closing your filehandles, even if the code exits abnormally. The with block would have made your results.txt mistake obvious as well.
Since you want to treat your q_start, q_end, and q_len as numbers, I wouldn't even bother to assign their string representations to a variable. Just convert them to a float as soon as possible. Similarly, q_cov should be a float; I would just stringify it at the last moment. I would also postpone rounding just for the purposes of formatting the output, preferring to preserve precision in q_cov itself.
Put your import statements at the beginning of the program.
import re
with open('file.txt') as input, open('results.txt', 'a') as output:
for line in input.readlines():
fields = re.split(r'\t+', line.strip())
q_start, q_end, q_len = map(float, (fields[0], fields[1], fields[3]))
q_cov = 100 * (q_end - q_start) / q_len
print >>output, '\t'.join(fields + [str(round(q_cov, 1))]) | {
"domain": "codereview.stackexchange",
"id": 5790,
"tags": "python, regex, linux, csv, bioinformatics"
} |
Poincare invariance of Dirichlet and Neumann boundary conditions | Question: The action which describes a string propagating in a $D$ dimensional spacetime, with given metric $g_{\mu\nu}$, is given by the Polyakov action
$$S_{\text{P}}=-\frac{T}{2}\int \mathrm{d}\sigma\mathrm{d}\tau\sqrt{-h}\,h^{\alpha\beta}\partial_{\alpha}X^{\mu}\partial_{\beta}X^{\nu}g_{\mu\nu}\tag{1}$$
where the symbols have their usual meaning. It is not hard to check that action is invariant under Poincare transformations
$$\delta X^{\mu}(\sigma,\tau)=a^{\mu}_{~~~\nu}X^{\nu}(\sigma,\tau)+b^{\mu}\tag{2}.$$
When all the dust is settled (i.e. after gauge fixing and Weyl transformations) the Polyakov action becomes
$$S_{\text{P}}=\frac{T}{2}\int \mathrm{d}\sigma\mathrm{d}\tau \left((\dot{X})^{2}-(X')^{2}\right)\tag{3}$$
where $\dot{X}^{\mu}=\partial_{\tau}X^{\mu}$ and $X'^{\mu}=\partial_{\sigma}X^{\mu}$. Variation with respect to $X^{\mu}$ yields
$$\delta S_{\text{P}}=T\int \mathrm{d}\sigma\mathrm{d}\tau\,(-\partial^{2}_{\tau}+\partial^{2}_{\sigma})X_{\mu}\,\delta X^{\mu}-T\int\mathrm{d}\tau\left[X'_{\mu}\delta X^{\mu}\big|_{\sigma=n}-X'_{\mu}\delta X^{\mu}\big|_{\sigma=0}\right]=0.\tag{4}$$
The $\sigma$ boundary terms tell us what type of strings we have, either closed or open strings.
For open string equation (4) becomes
$(-\partial^{2}_{\tau}+\partial^{2}_{\sigma})X^{\mu}=0$
where we assume that the end points of the string follow the Neumann boundary conditions
$$\partial_{\sigma}X^{\mu}(\tau,\sigma)\big|_{\sigma=0,n}=0.\tag{5}$$
One interesting feature is that the Neumann boundary conditions remain invariant under a global Poincare transformation since, writing the transformed field as $\tilde{X}^{\mu}$,
\begin{eqnarray} \partial_{\sigma}\tilde{X}^{\mu}|_{\sigma=0,n} & = & \partial_{\sigma}\left(a^{\mu}_{~~~\nu}X^{\nu}(\sigma,\tau)+b^{\mu}\right)|_{\sigma=0,n} \\ & = & a^{\mu}_{~~~\nu}~\partial_{\sigma}X^{\nu}|_{\sigma=0,n}\\ & = & 0 \\ \end{eqnarray}
Whereas the Dirichlet boundary conditions
$$X^{\mu}(\tau,\sigma=0)=X^{\mu}_{0}\qquad\qquad X^{\mu}(\tau,\sigma=n)=X^{\mu}_{n}$$
break the Poincare invariance, as the transformed field $\tilde{X}^{\mu}$ satisfies
$$\tilde{X}^{\mu}|_{\sigma=0,n}=\left(a^{\mu}_{~~~\nu}X^{\nu}(\sigma,\tau)+b^{\mu}\right)|_{\sigma=0,n}\neq X^{\mu}_{0,n}$$
which simply means that under a Poincare transformation the ends of the string actually change.
Does the spectrum of string excitations keep any signature
of this (non) invariance under Poincare transformations? If so, how can
that result be interpreted?
Answer: The Dirichlet boundary condition, in particular, breaks spacetime translation invariance. This is reflected in the string spectrum, that is, a Goldstone boson state appears in the massless spectrum of the string. This state corresponds to the collective coordinate which parametrizes small oscillations of the Dirichlet 'brane' or D-brane on which the string is constrained to move on.
For the SUPER-string, a similar effect occurs whereby a Goldstino state appears in the massless spectrum, and this indicates the breaking of some amount of spacetime supersymmetry by the D-brane.
References: String Theory, Polchinski, Volume I (equation (8.6.18) and pages 268-269), Volume II (pages 138-140). | {
"domain": "physics.stackexchange",
"id": 36430,
"tags": "string-theory, boundary-conditions, poincare-symmetry"
} |
How to connect Green function to propagator? | Question: I know that there have already been many questions related to this one, such as in Differentiating Propagator, Green's function, Correlation function, etc. However, those questions mainly distinguish the Green function from the kernel and only briefly discuss the propagator as we often know it. I don't mean to duplicate other related questions; if you find one, please inform me and I will delete this. I just haven't found a satisfying answer. To be more specific, what I mean by propagator is the following:
$$ \Delta (x,t;x’,t’) = \langle x | U(t, t’) | x’ \rangle $$
Or in QFT settings
$$ \Delta (x,t;x’,t’) = \langle 0| \mathcal{T} [\phi^{(H)}(x’,t’) \phi ^{\dagger(H)} (x,t)]| 0 \rangle. $$
I want to know how to connect this to the green function or correlation function, which is defined to be (two-point)
$$G(x1,x2) = \langle \phi (x1) \phi (x2) \rangle = \frac{\int D \phi e^{-S[\phi]}\phi(x1) \phi(x2)}{Z}.$$
In my own try to understand this, we could try to write the green function as the following. (In QFT settings)
$$G(x1,t1;x2,t2) = \langle \mathcal{T} [\phi ^{(H)}(x1,t1) \phi^{\dagger (H)} (x2,t2)] \rangle = \langle \mathcal{T} [e^{i H t_1}\phi (x1) e^{-i H(t_1-t_2)} \phi^{\dagger} (x2)e^{-i H t_2}] \rangle. $$
Now it seems to feel like the evolution function in the propagator, but how can one deal with the “expectation value” part of the green function definition, which is missing in the propagator definition?
I also know that partition function $Z$ could be related to the integral of imaginary time propagator, but couldn’t really get all these fuzzy things in place at once.
Answer: All right, so after days of looking through textbooks I finally have a feel for how things are arranged. I'll try to put everything together to give a clear distinction for the people who are also confused by this.
So basically it is the difference between the operator language and the path-integral language, and it uses the fact that the real-time Green function is defined at zero temperature.
In the path integral formulation, we tend to talk about the expectation value, so in this language, we write the green function in terms of expectation value of “pure function” or “correlation function”, there is no operator anymore:
$ G( x_1,x_2) = \langle \phi(x_1) \phi(x_2) \rangle $
In the operator formulation, we tend to care how operator operates on the states and what is its outcome. In this language, we write green function in expectation value of operators’ matrix elements.
$ G(x_1,x_2) = \langle \mathcal{T} [\phi(x_1,t_1) \phi^{\dagger} (x_2,t_2) ]\rangle $
While doing this expectation value calculation, we actually face two situations, finite temperature or zero-temperature. In the zero-temperature scenario, the ground state contributions dominate and we could write the operators expectation value as:
$ G(x_1,x_2) = \langle 0| \mathcal{T} [\phi(x_1, t_1) \phi^{\dagger} (x_2,t_2) ]| 0 \rangle $
And that is what we usually call “propagator”. | {
"domain": "physics.stackexchange",
"id": 65681,
"tags": "path-integral, greens-functions, correlation-functions, propagator, partition-function"
} |
How to encode a sentence using an attention mechanism? | Question: Recently, I read about one of the state-of-the-art methods, called attention models. This method uses an Encoder-Decoder model. It can find a better encoding for each word in a sentence. But how can I encode a full sentence?
For example, I have a sentence "I love reading".
After embedding, this sentence will be converted to a list of three vectors (or a matrix of dimensions number-of-words × embedding-dimension).
After several layers of the attention mechanism, I will still have a matrix of the same shape.
How can I convert this matrix to a single vector that contains an encoded representation of the full sentence?
Answer: A standard way of obtaining a sentence representation with attention models is using BERT or any other of its derivations, like RoBERTa. In these models, the sentence tokens passed as input to the model are prefixed with a special token [CLS]. The output of the model at that first position is the sentence representation.
To use these models, you may use sentence-transformers library, e.g.:
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('paraphrase-distilroberta-base-v1')
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
for sentence, embedding in zip(sentences, sentence_embeddings):
print("Sentence:", sentence)
print("Embedding:", embedding)
print("") | {
"domain": "datascience.stackexchange",
"id": 9639,
"tags": "nlp, encoding, attention-mechanism"
} |
Comoving Volume Calculation | Question: Suppose I have data from an astronomical survey at redshifts in the range $z = [2,3]$. Suppose that, on average in this range, the data covers an area on the sky of $A=1$ $\mathrm{Mpc}^2$. How would I calculate the comoving volume covered by the survey? Is it as simple as calculating the comoving distance from $z=2$ to $z=3$ and multiplying by the average area? That is: $V_C = A(D_C(3)-D_C(2))$?
Answer: That depends on how accurate you want your answer.
The reason is that the angle $\theta$ spanned by a length $L = 1\,\mathrm{Mpc}$ depends on the distance $d$ of that length — in comoving coordinates, $\theta$ keeps decreasing with $d$, just like a normal item, say a bicycle, looks smaller the farther away it is (curiously, in physical coordinates, this is not the case. Due to the finite speed of light and the expansion of the Universe, galaxies only look smaller out a certain distance, after which they start to look larger).
An $(L = 1\,\mathrm{Mpc})^2$ square spans $\theta_2 = 39''$ at a redshift of $z=2$, and $\theta_3 = 32''$ at $z=3$. The comoving distances are $d_2 = 5.3\,\mathrm{Gpc}$ and $d_3 = 6.5\,\mathrm{Gpc}$, respectively.
So yes, roughly you can say that the comoving volume of an $A = 1\,\mathrm{Mpc}^2$ square between $z=2$ and $z=3$ is $V = A(d_3-d_2)$. But read on.
The calculation
You mention a survey, so I assume you're not actually given an area, but a field of view (FOV). The problem is the same, just the other way; your FOV doesn't cover the same area at different redshifts.
So, here's the rigorous way to calculate it:
Let's say your FOV spans a solid angle $\Omega=\theta_\mathrm{RA}\times\theta_\mathrm{dec}$ which, with both angles measured in radians, comprises a fraction $\Omega/4\pi$ of the whole sphere. The total comoving volume out to a comoving distance $d$ is just $V = 4\pi d^3/3$, so the volume of the shell between $z=2$ and $z=3$ is
$$
V_\mathrm{2\rightarrow3} = V_3 - V_2 = \frac{4\pi}{3} \big(d_3^3 - d_2^3\big).
$$
The comoving volume spanned by your FOV is thus
$$
V = \frac{\Omega}{4\pi} V_\mathrm{2\rightarrow3},
$$
or
$$
V = \frac{\Omega}{3} \big(d_3^3 - d_2^3\big).
$$
The error
The difference between the two approaches increases with the difference between the two redshifts. With a FOV of, say, $\Omega = (\theta=32'')^2 = 1024\,\mathrm{arcsec}^2=2.4\times10^{-8}\,\mathrm{sr}$, if you say that it spans an area of $A = 1\,\mathrm{Mpc}^2$ (which is only correct at $z=3$), you'll get
$$
V_\mathrm{approx.} = 1\,\mathrm{Mpc}^2 \times (6508 - 5312)\,\mathrm{Mpc} = 1196\,\mathrm{Mpc}^3,
$$
whereas with the correct formula you get
$$
V_\mathrm{true} = \frac{2.4\times10^{-8}}{3} \left( 6508^3 - 5312^3\right)\,\mathrm{Mpc}^3 = 1011\,\mathrm{Mpc}^3,
$$
that is, 16% lower.
If, on the other hand, your FOV is $\Omega = (\mathbf{39}'')^2$, and if you say that it spans an area of $A = 1\,\mathrm{Mpc}^2$ (which is only correct at $z=\mathbf{2}$), then the true volume would be $1500\,\mathrm{Mpc}^3$, i.e. 25% larger than your $\sim1200\,\mathrm{Mpc}^3$.
Since the correct calculation isn't much harder than the approximation, I suggest you stick to the correct one.
The Python way
With the astropy module, just type
from astropy.cosmology import Planck15
from astropy import units as u
theta_RA = 32 * u.arcsec
theta_dec = 32 * u.arcsec
Omega = (theta_RA * theta_dec).to(u.steradian).value # get rid of unit
d2 = Planck15.comoving_distance(2)
d3 = Planck15.comoving_distance(3)
V = Omega/3 * (d3**3 - d2**3)
print(V)
1011.0148201494444 Mpc3
or
from math import pi  # pi is needed for the solid-angle fraction below
V23 = Planck15.comoving_volume(3) - Planck15.comoving_volume(2)
V = Omega/(4*pi) * V23
which gives the same result. | {
"domain": "astronomy.stackexchange",
"id": 5641,
"tags": "observational-astronomy, cosmology, distances, redshift"
} |
How does the lack of information increase as temperature increases? | Question: Suppose one knows nothing about the concept of entropy. How can we argue that the lack of information/ignorance about the system typically increases with the increase in the temperature using the formula of the canonical probability $p_i=e^{-\beta E_i}/Z$ where $Z$ is the canonical partition function? Assume that a system has a fixed volume and a fixed number of particles.
Here is the objective. If I can argue that lack of information typically increases with temperature, I can use that to argue that entropy typically increases with temperature by equating lack of information with entropy.
Answer: At absolute 0, assuming the ground state is a crystal, there is no information encoded in the state, so there is no lack of information about the state.
When we heat up the system, the amount of information encoded in the state (in terms of the positions and motion of all the atoms in the material) increases. But we are learning barely any of this information. Therefore, our lack of information increases, not because we're forgetting anything but because there's more information in the system to lack.
Where is this extra information coming from? From the way we heat up the system. Suppose we shine microwaves on it. We don't know which atoms the microwaves are interacting with, so we don't know the resulting motion of the atoms.
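This can also be checked directly from the canonical probability in the question: computing the Gibbs-Shannon entropy $-\sum_i p_i \ln p_i$ of $p_i = e^{-\beta E_i}/Z$ for a toy spectrum (my own invented energy levels, with $k_B = 1$) shows the missing information growing with temperature:

```python
# Entropy of the canonical distribution p_i = exp(-E_i/T)/Z for a toy
# spectrum, evaluated at increasing temperatures.
import math

energies = [0.0, 1.0, 2.0, 3.0]  # arbitrary energy levels (k_B = 1)

def canonical_entropy(T):
    weights = [math.exp(-E / T) for E in energies]
    Z = sum(weights)                  # partition function
    probs = [w / Z for w in weights]
    return -sum(p * math.log(p) for p in probs if p > 0)

entropy_values = [canonical_entropy(T) for T in (0.5, 1.0, 2.0, 4.0)]
print(entropy_values)  # strictly increasing with T
```

For any fixed spectrum with positive heat capacity this growth is guaranteed, since $dS/dT = C/T \ge 0$.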
So entropy increases as temperature increases. | {
"domain": "physics.stackexchange",
"id": 50680,
"tags": "statistical-mechanics, temperature, entropy, information"
} |
missing libraries/dependencies since latest gazebo/ros/drcsim updates? (liburdfdom_model.so...) | Question:
Hi,
I've been running gazebo with drcsim on Ubuntu 12.04 successfully for some months.
However since the latest package updates in the past few days, gazebo now crashes and reports missing libraries.
Are possibly some dependencies missing from the gazebo 1.3.1 package?
The error I get is:
$ gazebo
gazebo: error while loading shared libraries: liburdfdom_model.so: cannot open shared object file: No such file or directory
I see the following unresolved libs:
$ ldd $(which gazebo) | grep 'not found' | sort | uniq
libOgreMain.so.1.7.3 => not found
libOgreRTShaderSystem.so.1.7.3 => not found
libOgreTerrain.so.1.7.3 => not found
liburdfdom_model.so => not found
liburdfdom_world.so => not found
I've tried uninstalling/purging/re-installing drcsim and gazebo with no success.
Any suggestions on how to resolve this would be appreciated.
Originally posted by andyw on Gazebo Answers with karma: 16 on 2013-01-30
Post score: 0
Answer:
I just realised my mistake:
It turns out I wasn't running the setup.sh file.
Sourcing either the gazebo or drcsim setup.sh file resolves the issue for me.
Thanks for the prompt response though! and I'm running from apt packages.
Originally posted by andyw with karma: 16 on 2013-01-30
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 2992,
"tags": "gazebo"
} |
Entropy reducing power generator | Question: An 8.3 EER 1200 W air conditioner will move 10000 BTU of heat per hour. 1 BTU is 1055 J of heat. Per second this is moving 2930 W of heat energy. It is possible to get even more efficient units, e.g. 12 EER.
This means 4130 W of heating on hot side, and 2930 W of cooling on cold side.
The heating side of the air conditioner is used to boil water to power a steam turbine. The cooling side is used to produce coolant.
A condensing steam turbine is approx 40% efficient, so the 4130 W of heat will produce 1652 W of electricity, and 2478 W of heat.
The 2478 W of heat is cancelled by the coolant, with 452 W of cooling left over.
The 1200 W of power needed by the air conditioner is cancelled by the turbine, with 452 W of electricity left over.
The excess cold can be released into the environment or used some other way, and the excess power put into the power grid.
I was thinking about this while playing the computer game Oxygen not Included, and wondered if such a machine could exist in reality?
Answer: No, such a machine cannot exist in reality. One critical thing that you neglected in your analysis is the temperature. Every heat engine or heat pump has a hot side and a cold side. The efficiency depends strongly on the temperature difference. The heat pump operates with a much lower heat difference than the steam engine. If you run it at the larger difference as you described then it will be nowhere near as efficient as you quoted. | {
"domain": "physics.stackexchange",
"id": 74414,
"tags": "thermodynamics, entropy"
} |
Why holes are better in storing both valley and pseudospin information? | Question: Recently I attended a class about transition-metal dichalcogenides (TMDC) and, during the lecture, the professor said that holes are better than electrons in storing both pseudospin and valley information. I know that it’s somehow related to transitions due to phonons and the fact that they are efficient in transferring momentum but not spin, but I don’t know much more. Could you help me? Thanks and have a good day!
Answer: In this paper by Kumar et al. from 2021 it is shown that the lifetime of holes is orders of magnitude longer than that of electrons. The paper explains that this is due to the differing energy landscapes seen by the two species, meaning that it is much easier for electrons to be scattered. Excited electrons "live" in the conduction band, which is only slightly spin-orbit split, while holes "live" in the valence band, which is spin-orbit split much more.
It could be in this sense that "holes are better carriers of valley information": because they live longer. | {
"domain": "physics.stackexchange",
"id": 89487,
"tags": "momentum, metals, phonons"
} |
Get string truncated to max length | Question: This is an extension method to get a string that is truncated to a maximum length.
Any comments?
public static class StringExtensions
{
public static string WithMaxLength(this string value, int maxLength)
{
if (value == null)
{
return null;
}
return value.Substring(0, Math.Min(value.Length, maxLength));
}
}
Answer: If you're using C# 6.0, I believe you can drop the null-check:
public static string WithMaxLength(this string value, int maxLength)
{
return value?.Substring(0, Math.Min(value.Length, maxLength));
}
Other than that, it's like Jeroen said: that's pretty much as good as it gets.
The class and parameter naming is exactly as I'd have it, and the name of the extension method is decent, although I'd try to find a name that better indicates that truncation will occur when value is longer than maxLength... but WithMaxLength isn't a bad name. | {
"domain": "codereview.stackexchange",
"id": 15246,
"tags": "c#, strings, extension-methods"
} |
Confusion regarding usage of MATLAB for Z domain? | Question: How can we use MATLAB for the z domain, especially in scenarios where we have two different expressions of the Z transform (one has negative powers of z and the other has positive powers of z)?
I have added a link and snapshot of that and relevant MATLAB code where I get different poles and zeros when I switch from the z^-1 form to the z form.
So which form is correct, and why do we get this difference?
My Updated Matlab code:
clc;clear;close all
%using negative power representation
num1=[2 3 4]
den1=[1 3 3 1]
[z1,p1,k1]=tf2zp(num1,den1)
zplane(z1,p1)
title('pole zero plot using negative power representation')
%using positive power representation
num2=[2 3 4 0]
den2=[1 3 3 1]
[z2,p2,k2]=tf2zp(num2,den2)
figure
zplane(z2,p2)
title('pole zero plot using positive power representation')
http://www.ece.northwestern.edu/local-apps/matlabhelp/toolbox/signal/basics27.html
Answer: This answer has been updated to be consistent with the modified question. My earlier response remains further below to the question
originally posted.
Transfer functions can be described using either positive powers or negative powers for the polynomials. Either can be used at the convenience of the user. It is more common for continuous time systems using $s$ for the variable in the Laplace Transform to be described with positive powers of $s$, while negative powers of $z$ are quite common for discrete-time systems using $z$ for the variable in the Z Transform. A big reason for this is the z transform of the unit sample delay is $z^{-1}$, so using negative power descriptions leads more readily to implementation block diagrams.
That said the OP is using the Matlab tf2zp function, and the help for this function states clearly that the polynomials are entered in positive powers in decreasing order (regardless if we use $s$ or $z$). So the OP's numerator polynomial given as [2,3, 4] would describe the polynomial as:
$$2z^2 + 3z + 4$$
The OP has then modified this polynomial by simply adding a zero to the end of it. This results in the following polynomial:
$$2z^3 + 3z^2 + 4z + 0.$$
The result is multiplying the original polynomial by $z$, which is identical to adding a zero at the origin, since $z=0$ is now a root of the numerator. This is consistent with the results provided in the graphic at the bottom of the question posted.
That said, the comment for the following is not correct:
%using negative power representation
num1=[2 3 4]
den1=[1 3 3 1]
[z1,p1,k1]=tf2zp(num1,den1)
The above uses positive power representation given it is using the 'tf2zp' function. This representation is describing the following polynomial if we assume the variable is $z$:
$$H_1(z) = \frac{2z^2+ 3z+4}{z^3+3z^2+3z+1}$$
The next lines are correct: it is positive power representation, but for a different polynomial:
%using positive power representation
num2=[2 3 4 0]
den2=[1 3 3 1]
[z2,p2,k2]=tf2zp(num2,den2)
This representation is describing the following polynomial in comparison:
$$H_2(z) = zH_1(z) = \frac{2z^3+ 3z^2+4z}{z^3+3z^2+3z+1}$$
As described earlier we simply multiplied the original transfer function by $z$.
I don't know which transfer function the OP really wants, but let's assume the OP's intention is to describe the first polynomial with negative powers of $z$; namely, that the numerator given as [2,3,4] and denominator given as [1,3,3,1] correspond to the following transfer function:
$$H_3(z) = \frac{2+ 3z^{-1}+4z^{-2}}{1+3z^{-1}+3z^{-2}+z^{-3}}$$
Then we have the following two options:
Option 1: Use Matlab's alternate function tf2zpk, where the help states the polynomials are entered directly with negative powers, so:
tf2zpk([2 3 4], [1 3 3 1])
Option 2: (When we don't have the luxury of a companion function that can take negative powers directly, namely we want a $H_3(z)$ that can be used with a function such as tfzp that only allows a positive power format. The details of this are given in my original answer below with the following result):
Convert $H_3(z)$ to be the same transfer function, described with positive powers of $z$ in the format required by the function:
$$H_3(z) = \frac{z^3}{z^3}\frac{2+ 3z^{-1}+4z^{-2}}{1+3z^{-1}+3z^{-2}+z^{-3}} = \frac{2z^3+3z^2+4z}{z^3+3z^2+3z+1}$$
This results in the following MATLAB command:
tf2zp([2 3 4 0], [1 3 3 1])
Both Option 1 and Option 2 should provide the identical result.
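For what it's worth, the same check can be run outside MATLAB; scipy's signal.tf2zpk uses the same decreasing-positive-power convention as tf2zp. Below is a dependency-free Python sketch (my own, using Horner evaluation, not part of the original answer) confirming that appending a zero coefficient multiplies the polynomial by $z$, i.e. adds a zero at the origin without touching the poles:

```python
# Pure-Python sketch: appending a zero coefficient to a polynomial given
# in decreasing powers (MATLAB order) multiplies the polynomial by z.
def polyval(coeffs, z):
    """Evaluate a polynomial given in decreasing powers via Horner's scheme."""
    result = 0
    for c in coeffs:
        result = result * z + c
    return result

num1 = [2, 3, 4]        # 2z^2 + 3z + 4
num2 = [2, 3, 4, 0]     # 2z^3 + 3z^2 + 4z  (the OP's second numerator)

for z in (0.5, -1.3, 2.0):
    # num2 evaluates to z * num1 everywhere
    assert abs(polyval(num2, z) - z * polyval(num1, z)) < 1e-12

print(polyval(num2, 0))  # 0 -> z = 0 is a root: the extra zero at the origin
```

The denominator [1 3 3 1] is untouched in both calls, which is why the poles do not move.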
Answer to question as originally posted
The OP's intended transfer function written with negative power representation is given as:
$$H(z) = \frac{2+3 z^{-1}+ 4 z^{-2}}{1+3 z^{-1}+ 3 z^{-2} + z^{-3}}$$
The function the OP is using requires positive power representation.
If we wanted instead $H(z)$ to be expressed with all positive powers of $z$, in lowest order, we can multiply numerator and denominator by $z^3$ resulting in:
$$ H(z) = \bigg(\frac{z^3}{z^3}\bigg)\frac{2+3 z^{-1}+ 4 z^{-2}}{1+3 z^{-1}+ 3 z^{-2} + z^{-3}}$$
If we multiply this out we get:
$$ H(z) = \frac{2z^3+3z^2+4z + 0}{z^3 +3z^2+3 z+ 1}$$
The OP has added an extra zero on both the numerator and denominator polynomials, instead of on the numerator only as demonstrated above. What the OP has done does not achieve the result detailed above and changes the poles represented, as the OP has found, since the denominator order has increased. However, a simple solution in this particular case is to just use the alternate MATLAB function 'tf2zpk', which uses negative powers of z directly: the help for the MATLAB function used, 'tf2zp', states that it uses a positive power representation and recommends using 'tf2zpk' when a negative power representation is desired. MATLAB, Octave and Python functions used for signal processing and discrete-time processing in the z-domain typically use polynomials expressed with negative powers, while MATLAB, Octave and Python functions used for control systems and continuous-time processing typically use polynomials expressed with positive powers. This isn't a hard and fast rule, so it is best to refer to the help documentation for any function used to determine which convention applies.
"domain": "dsp.stackexchange",
"id": 12021,
"tags": "matlab, z-transform"
} |
What is causal signal? | Question: I'm using Digital Signal Processing Principles, Algorithms and Applications, 4th Edition, written by Proakis and Manolakis. In Chapter 3, subtopic 3.3.2, it mentions the term "causal signal", for which I can't find any definition. I know what a causal system means: a system whose output depends entirely on past and present input, not future input. But here it's a causal signal, not a system.
So what is the definition of a causal signal?
I tried to look at the index and it says that the term "causal signal" is mentioned on page 85, but in fact I find nothing on page 85 mentioning it.
Answer: A system is causal if its output depends only on the current input and past inputs (and not on future inputs). As a consequence, if the system is a linear, time-invariant (LTI) system (whose input/output relationship can be completely characterized by an impulse response), that impulse response, $h(t)$, must be zero for all time $t \lt 0$.
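As a concrete discrete-time sketch (my own example, not from the book): an impulse response such as $h[n] = (1/2)^n$ for $n \ge 0$, and zero otherwise, is causal.

```python
# A causal impulse response: zero for all n < 0.
def h(n):
    return 0.5 ** n if n >= 0 else 0.0

samples = [h(n) for n in range(-3, 4)]
print(samples)  # [0.0, 0.0, 0.0, 1.0, 0.5, 0.25, 0.125]
assert all(h(n) == 0.0 for n in range(-10, 0))
```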
Some people define a causal signal, $x(t)$, to be one that can be the impulse response of a causal system: it is zero for all time $t \lt 0$. | {
"domain": "dsp.stackexchange",
"id": 9752,
"tags": "discrete-signals"
} |
What does "number of inputs to each neuron" mean in Neural Network terms? | Question: I am reading about a Neural Networks project that has some data like this
I am new to this, and though I think I understand what a 3:1 network means, I do not understand what number of inputs (to each neuron) means.
I think this is what a 3:1 network would look like (please correct me if I am wrong). Does 3 inputs per neuron mean that we will have 3 inputs to each of nodes A, B and C? In that case, what would the line connecting A to Z indicate? As in out of three inputs, how is a resultant chosen?
Answer: You are bypassing the "hidden layer": A neural network consists of three layers: Input layer, hidden layer, and output layer:
You may also have more than one hidden layer:
Though, in your case, since the number of hidden layers is not specified, it is safe to assume that there is just 1 hidden layer.
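To make the wiring concrete, here is a minimal forward pass for such a network: 3 inputs feeding each of the 3 hidden neurons (A, B, C), whose outputs feed the single output neuron Z. All weights below are invented purely for illustration:

```python
# Forward pass through a 3-input, 3-hidden-neuron, 1-output network.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

inputs = [0.5, -1.0, 2.0]            # the 3 inputs to each hidden neuron
hidden_weights = [[0.1, 0.2, 0.3],   # neuron A
                  [-0.4, 0.5, 0.6],  # neuron B
                  [0.7, -0.8, 0.9]]  # neuron C
output_weights = [0.3, -0.2, 0.5]    # the lines A->Z, B->Z, C->Z

hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
          for ws in hidden_weights]
z = sigmoid(sum(w * a for w, a in zip(output_weights, hidden)))
print(z)  # a single value in (0, 1)
```

The line connecting A to Z simply carries A's activation, scaled by a weight, into the output neuron's sum; no single input is "chosen".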
So, for example, in a 3:1 network, you have 1 output neuron, and 3 hidden neurons. The number of inputs shows the number of neurons in the input layer. | {
"domain": "cstheory.stackexchange",
"id": 163,
"tags": "ds.algorithms, ne.neural-evol, ai.artificial-intel"
} |
Projections in Polar coordinate system | Question: I really understand what projections in Cartesian coordinate system, I can imagine this, but I absolutely do not understand projection in polar system. For example, I have a speed, $U$, and I must find projections $U_r ; U_{\phi}$ in polar system $(r,\phi)$
Google didn't help me.
Answer: This answer assumes you want to find the projections of a vector onto the polar basis at some point away from the origin. This is equivalent to rmhleo's advice and differs from that given by Kyle in the comments to the question which address a different (and simpler) problem.
The notion of projection is the same.
Given an arbitrary vector $\vec{v}$, the components of $\vec{v}$ in terms of the coordinates $\{a,b,c\}$ ($v_a$, $v_b$, and $v_c$) are the amounts by which $\vec{v}$ points in the direction of the unit vectors associated with each coordinate: $\{\vec{e}_a,\vec{e}_b,\vec{e}_c\}$.
These can be computed with the inner (or dot) product $v_a = \vec{v} \cdot \vec{e}_a = |v|\,|e_a| \cos\theta_{va}$ (where $\theta_{va}$ is the angle between $\vec{v}$ and $\vec{e}_a$).
Of course if two vectors $\vec{u}$ and $\vec{v}$ are already known in terms of a common set of coordinates (say $\{i,j,k\}$) then we can also write the dot product as $$\vec{u} \cdot \vec{v} = \sum_{c \in \{i,j,k\}} u_c v_c \,.$$
Now comes the stuff that depends on the coordinate system you are using.
While the unit vectors in Cartesian coordinates are constant (that is $\vec{e}_x$ always points in the same direction), the unit vectors for polar coordinates depend on where they are evaluated. The radial unit vector $\vec{e}_r$ always points directly away from the origin, and the polar unit vector always points tangent to the circle of constant $r$ in the direction of increasing $\phi$, which means we can find the polar unit vectors at point $\vec{p} = (r,\phi)$ in terms of the Cartesian unit vectors as
$$
\begin{align*}
\vec{e}_r(\vec{p}) &= +\vec{e}_x \cos \phi + \vec{e}_y \sin \phi \\
\vec{e}_\phi(\vec{p}) &= -\vec{e}_x \sin \phi + \vec{e}_y \cos \phi \,.
\end{align*}
$$
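These relations are easy to verify numerically; the following sketch (my own, with arbitrary numbers) checks that $\vec{e}_r$ and $\vec{e}_\phi$ are orthonormal and that the projections preserve the vector's magnitude:

```python
# Polar basis vectors at angle phi, and projections of a vector U onto them.
import math

phi = 0.7         # arbitrary polar angle of the point p
U = (3.0, -2.0)   # arbitrary vector, Cartesian components (U_x, U_y)

e_r   = (math.cos(phi), math.sin(phi))
e_phi = (-math.sin(phi), math.cos(phi))

U_r   = U[0] * e_r[0]   + U[1] * e_r[1]     # U . e_r
U_phi = U[0] * e_phi[0] + U[1] * e_phi[1]   # U . e_phi

# The magnitude is preserved: U_r^2 + U_phi^2 == U_x^2 + U_y^2
print(U_r**2 + U_phi**2, U[0]**2 + U[1]**2)
```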
From this we can deduce the general projection formula in terms of the point $\vec{p}$ where it is evaluated.
$$
\begin{align*}
v_r(\vec{p})
&= \vec{v} \cdot \vec{e}_r(\vec{p}) \\
&= v_x \cos\phi + v_y \sin\phi \\
v_\phi(\vec{p})
&= \vec{v} \cdot \vec{e}_\phi(\vec{p}) \\
&= -v_x \sin\phi + v_y \cos\phi \,.
\end{align*}
$$ | {
"domain": "physics.stackexchange",
"id": 16432,
"tags": "homework-and-exercises, vectors, coordinate-systems"
} |
Do atoms and molecules affect light rays | Question: Molecules of air are all around us all the time. If so, during daylight do rays from the sun diffract as it passes through molecules in the air? and if so is this diffraction negligible to be noticed? plus does this affect anything?
Molecules move at high speeds and in random directions, so their diffraction effect must be low; however, as there are an enormous number of various air molecules in the atmosphere, there certainly must be some sort of effect.
Also how and why is the skies blue fixed? (molecules don't stay at the same place they move randomly) and why isn't the air below the stratosphere seem blue? are there any required criteria's for the diffraction by molecules to be noticeable?
Answer:
Molecules of air are all around us all the time. If so, during daylight do rays from the sun diffract as it passes through molecules in the air? and if so is this diffraction negligible to be noticed? plus does this affect anything?
While both diffraction and scattering refer to redirection, I think scattering is the better term here. The molecules in the air are sufficient to scatter some of the light as it travels through the atmosphere. Aerosols also perform some absorption and additional scattering. I'm not sure what you might mean by "does this affect anything?".
why isn't the air below the stratosphere seem blue?
There's not enough of it between you and an obvious background. Water is blue as well. But in a tall glass, you can't see that. In a white bathtub, you can probably tell when the water is high. In a lake or ocean, you can tell easily. | {
"domain": "physics.stackexchange",
"id": 17802,
"tags": "visible-light, diffraction"
} |
I do not understand why finding the minimum of elements take O(n) time | Question: I know that sorting the array takes O(log n) time. But why does finding the minimum of the elements take O(n) time, which is more expensive? If I sort the array, I simply output the first element and this would be my minimum.
Answer: Sorting the array using a comparison-based method takes $\Omega(n \log n)$. So sorting is obviously more expensive than finding the minimum in $\mathcal O(n)$.
In fact, finding the minimum or sorting takes $\Omega(n)$ just to read the whole array. For finding the minimum it is actually $\Theta(n)$.
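Concretely, the $\Theta(n)$ bound is achieved by a single pass with $n-1$ comparisons, e.g.:

```python
# Single-pass minimum: every element is examined exactly once.
def find_min(arr):
    smallest = arr[0]
    for x in arr[1:]:          # n - 1 comparisons in total
        if x < smallest:
            smallest = x
    return smallest

print(find_min([7, 3, 9, 1, 4]))  # 1
```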
If it were true that sorting takes $\mathcal O(\log n)$ then you would be right to use sorting instead. | {
"domain": "cs.stackexchange",
"id": 7325,
"tags": "algorithms"
} |
A small but expandable Exam system | Question: I'm trying to practice OO design as well as OOP in Java so I've created an Exam system that tries to be Object Oriented and expandable.
This is what I haven't learned from Java so far: Java 8, interfaces, Collections, IO, Swing, exceptions
public abstract class Question {
protected int score;
protected boolean isAnswerRight;
public Question(int puntuacion) {
this.score = puntuacion;
}
public int getScore() {
return score;
}
public abstract void ask();
public abstract void showClue();
}
This class has two children
import javax.swing.*;
public class MultipleOptionQuestion extends Question {
private static final int MULTIPLE_OPTION_SCORE = 1;
private String statement;
private String[] options;
private String clue;
private int rightOption;
public MultipleOptionQuestion(int score, String statement, String[] options, int rightOption, String clue) {
super(MULTIPLE_OPTION_SCORE);
this.statement = statement;
this.options = options;
this.clue = clue;
this.rightOption = rightOption;
}
@Override
public void ask() {
String answer = (String) JOptionPane.showInputDialog(null, statement, "Question Test", JOptionPane.QUESTION_MESSAGE, null, options, options[0]);
if (answer.equals(options[rightOption])) {
this.isAnswerRight = true;
}
}
@Override
public void showClue() {
JOptionPane.showMessageDialog(null, clue);
}
}
And an abstract class to represent simple math operations (+,-,*,/)
import javax.swing.*;
public abstract class ArithmeticQuestion extends Question {
//additions may have a wider interval in order to make them harder
// than multiplications, for example, and we want divisions to be exact so
//child classes provide operands
protected int first;
protected int second;
protected int result;
public int getFirstOperand() {
return first;
}
public int getSecondOperand() {
return second;
}
public int getResult() {
return result;
}
public abstract char getOperator();
public ArithmeticQuestion(int score) {
super(score);
}
public String stringQuestion() {
return "How much is it? " +
getFirstOperand() +
getOperator() +
getSecondOperand();
}
@Override
public void ask() {
String answer = null;
while (answer == null || answer.equals("")) {
answer = JOptionPane.showInputDialog(null, this.stringQuestion());
if (answer != null) {
if (Integer.parseInt(answer) == getResult()) {
isAnswerRight = true;
}
}
}
System.out.println(isAnswerRight ? "Right" : "Wrong"); //For debugging
}
}
Here is where I'm not sure I have made a good design (I don't know whether I have too many classes)
import javax.swing.*;
public class AdditionQuestion extends ArithmeticQuestion {
private static final int ADDITION_MINIMUM = 100;
private static final int ADDITION_INTERVAL = 200;
private static final int ADDITION_SCORE = 1;
public AdditionQuestion() {
super(ADDITION_SCORE);
this.first = (int) (Math.random() * ADDITION_INTERVAL) + ADDITION_MINIMUM;
this.second = (int) (Math.random() * ADDITION_INTERVAL) + ADDITION_MINIMUM;
this.result = first + second;
}
@Override
public char getOperator() {
return '+';
}
@Override
public void showClue() {
JOptionPane.showMessageDialog(null, "Be careful when you add more than 10 units");
}
}
The class for Subtraction
import javax.swing.*;
public class SubtractionQuestion extends ArithmeticQuestion {
private static final int SUBTRACTION_MINIMUM = 0;
private static final int SUBTRACTION_INTERVAL = 100;
private static final int SUBTRACTION_SCORE = 1;
public SubtractionQuestion() {
super(SUBTRACTION_SCORE);
this.first = (int) (Math.random() * SUBTRACTION_INTERVAL) + SUBTRACTION_MINIMUM;
this.second = (int) (Math.random() * SUBTRACTION_INTERVAL) + SUBTRACTION_MINIMUM;
this.result = first - second;
}
@Override
public char getOperator() {
return '-';
}
@Override
public void showClue() {
JOptionPane.showMessageDialog(null, "The answer may be a negative number");
}
}
For multiplication
import javax.swing.*;
public class MultiplicationQuestion extends ArithmeticQuestion {
private static final int MULTIPLICATION_MINIMUM = 1;
private static final int MULTIPLICATION_INTERVAL = 10;
private static final int MULTIPLICATION_SCORE = 3;
public MultiplicationQuestion() {
super(MULTIPLICATION_SCORE);
this.first = (int) (Math.random() * MULTIPLICATION_INTERVAL) + MULTIPLICATION_MINIMUM;
this.second = (int) (Math.random() * MULTIPLICATION_INTERVAL) + MULTIPLICATION_MINIMUM;
this.result = first * second;
}
@Override
public char getOperator() {
return '*';
}
@Override
public void showClue() {
String output = "RECALL THE MULTIPLICATION TABLE\n";
for (int i = 1; i < 10; i++) {
output += i + "x" + getFirstOperand() + "=" + i * getSecondOperand() + "\n";
}
JOptionPane.showMessageDialog(null, output);
}
}
And finally, the exams:
import javax.swing.*;
public abstract class Exam {
protected Question[] questions = new Question[50];
protected int currentNumberOfQuestions = 0;
protected int totalScore = 0;
public void addQuestion(Question p) {
questions[currentNumberOfQuestions++] = p;
}
public void increaseScore(int score) {
this.totalScore += score;
}
public int maximumPossibleScore() {
int total = 0;
for (int i = 0; i < currentNumberOfQuestions; i++) {
total += questions[i].getScore();
}
return total;
}
public void examResult() {
JOptionPane.showMessageDialog(null, "You've got " + this.totalScore + " points out of: " + maximumPossibleScore());
}
public abstract void doExam();
}
A mock exam (makes the same question until you are right)
public class mockExam extends Exam {
//The student only scores if they are right in their first attempt
//If they are wrong, a clue is shown
@Override
public void doExam() {
for (int i = 0; i < this.currentNumberOfQuestions; i++) {
Question p = this.questions[i];
p.ask();
if (p.isAnswerRight) {
this.increaseScore(p.score);
}
while (!p.isAnswerRight) {
p.showClue();
p.ask();
}
}
examResult();
}
}
And a real Exam
public class RealExamen extends Exam {
//Only asks each question once
@Override
public void doExam() {
for (int i = 0; i < this.currentNumberOfQuestions; i++) {
Question p = this.questions[i];
p.ask();
if (p.isAnswerRight) {
this.increaseScore(p.score);
}
}
examResult();
}
}
Looking to hear your feedback on how I can improve and manage my code.
Answer: Thank for sharing your code.
OOP doesn't mean to "split up" code into random classes.
The ultimate goal of OOP is to reduce code duplication, improve readability and support reuse as well as extending the code.
Doing OOP means that you follow certain principles which are (among others):
information hiding / encapsulation
single responsibility
separation of concerns
KISS (Keep it simple (and) stupid.)
DRY (Don't repeat yourself.)
"Tell! Don't ask."
Law of demeter ("Don't talk to strangers!")
replace branching with polymorphism
Information hiding
This is a major principle (not only in OOP). In OOP this means that no other class (not even sub classes) know the inner structure of a certain class.
You violate this principle by giving the sub classes of Question direct access to its member variables score and isAnswerRight.
This also violates the Tell, don't ask! principle.
The better approach would be to add a method to class Question to manage the score value itself:
public abstract class Question {
private final int score; /* hopefully the score never changes during runtime */
public Question(int puntuacion) {
this.score = puntuacion;
}
/** public entry point, do not override */
public final int ask(){
boolean isAnsweredRight = askUser();
if(!isAnsweredRight) {// may fail once
showClue();
isAnsweredRight = askUser();
}
return isAnsweredRight? score : 0; // no score if failed
}
/** ask the question and report success/failure */
protected abstract boolean askUser();
public abstract void showClue();
}
class design
As mentioned in the comments some Subclasses of Question raise doubt:
In OOP we create new (sub)classes when we need to change behavior. That is: we overwrite a method of the super class to do some different or additional calculation. (returning a value is no calculation...)
I argue against the reasoning in the comment: the fact that a bad design is done somewhere else should not be an excuse to do the same.
So I would have only two subclasses of Question:
public class MultipleOptionQuestion extends Question {
and
public class ArithmeticQuestion extends Question {
I would introduce another interface Operation like this:
interface Operation{
int calculate(int first, int second);
}
And the ArithmeticQuestion would look like this:
public class ArithmeticQuestion extends Question {
private final int first;
private final int second;
private final Operation operation;
private final String operator;
private final String clue;
public ArithmeticQuestion(String operator, Operation operation, int first, int second, int score, String clue) {
super(score);
// constructors do no work, they just assign values to members
this.operation=operation;
this.operator=operator;
this.first=first;
this.second=second;
this.clue=clue;
}
public String stringQuestion() {
return "How much is it? " +
first +
operator +
second;
}
public void showClue() {
JOptionPane.showMessageDialog(null, clue);
}
@Override
public boolean askUser() {
int result = operation.calculate(first,second);
String answer = null;
while (answer == null || answer.equals("")) {
answer = JOptionPane.showInputDialog(null, this.stringQuestion());
if (answer != null) {
return Integer.parseInt(answer) == result;
}
}
return false; // maybe marked as unreachable code...
}
}
This would lead to this Exam class:
public class Exam {
private static final int MULTIPLE_OPTION_SCORE = 1;
private static final int MULTIPLICATION_MINIMUM = 1;
private static final int MULTIPLICATION_INTERVAL = 10;
private static final int MULTIPLICATION_SCORE = 3;
private static final int SUBTRACTION_MINIMUM = 0;
private static final int SUBTRACTION_INTERVAL = 100;
private static final int SUBTRACTION_SCORE = 1;
private static final int ADDITION_MINIMUM = 100;
private static final int ADDITION_INTERVAL = 200;
private static final int ADDITION_SCORE = 1;
private static final int QUESTION_TYPE_COUNT = 4;
public void doExam(int numberOfQuestions) {
Random random = new Random();
int maxScore =0;
int userScore =0;
for (int i = 0; i < numberOfQuestions; i++) {
int questionType =random.nextInt(QUESTION_TYPE_COUNT);
Question question;
switch(questionType){
case 0: // Addition
question = new ArithmeticQuestion("+",
(first,second)->first+second,
random.nextInt(ADDITION_INTERVAL)+ADDITION_MINIMUM,
random.nextInt(ADDITION_INTERVAL)+ADDITION_MINIMUM,
ADDITION_SCORE,
"Be careful when you add more than 10 units");
maxScore+=ADDITION_SCORE;
break;
case 1: // Subtraction
question = new ArithmeticQuestion("-",
(first,second)->first-second,
random.nextInt(SUBTRACTION_INTERVAL)+SUBTRACTION_MINIMUM,
random.nextInt(SUBTRACTION_INTERVAL)+SUBTRACTION_MINIMUM,
SUBTRACTION_SCORE,
"The answer may be a negative number");
maxScore+=SUBTRACTION_SCORE;
break;
case 2: // Multiplication
// ...
default: // MultiOption
// ...
}
userScore+= question.ask();
}
JOptionPane.showMessageDialog(null, "You reached "+userScore+ " of "+maxScore+" possible points!");
}
} | {
"domain": "codereview.stackexchange",
"id": 29203,
"tags": "java, object-oriented, swing, quiz"
} |
Potential energy $= mgh$, what is $h$? | Question: NOTE: when I say potential energy I mean gravitational PE
The formula for potential energy is P.E = mgh.
What is h referring to? Height, obviously.
Consider the example: What is the potential energy of a 1kg mass lifted 2 metres off the ground?
m=1, g=9.8, h=2 => P.E=19.6J
My problem is this: why is h the height off the ground, this seems rather arbitrary. Would it not make more sense to have h as the height from the centre of gravity?
Suppose we repeat the experiment on top of a mountain, does the mass still have the same potential energy? I am pretty sure it doesn't.
I am foreseeing one or both of the following answers, so which one will it be?
It doesn't make sense to talk about potential energy in absolute terms, only in terms of gain in potential energy
Potential energy is defined in a way where h is the height off the ground (I don't buy this)
I am leaning towards the first one, but I am still generally uncomfortable with the idea that you can't have an absolute P.E.
Answer: It's the first one. This is a really excellent observation! It's a fascinating fact of physics.
Absolute potential energy is a silly idea. If you take a bunch of different objects, list their potential energies, and then add $100$ to each one, nothing will change about how the system behaves. We only talk about relative potential energy.
The kinetic energy an object gains in falling a certain height is equal to the potential energy it has lost. If we let an object fall from $h_1$ to $h_2$, we find that its change in kinetic energy is $\Delta KE = m g h_1 - m g h_2.$ If we add any arbitrary number $C$ to each of these potential energies, the difference is the same: $\Delta KE = (m g h_1 + C) - (m g h_2 + C) = m g h_1 - m g h_2.$
We often use the height off of the ground because it means that at ground level, $PE = mgh = mg \cdot 0 = 0$ for all objects, and since nothing can be lower than ground level in a simple system it makes sense to say that $0$ is the lowest possible potential energy anything can attain.
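The invariance under adding a constant $C$ can also be checked numerically; here is a minimal sketch (all values are arbitrary illustrations):

```python
# Numeric check: adding a constant C to every potential energy
# leaves the kinetic-energy change of a fall unchanged.
m, g = 1.0, 9.8          # kg, m/s^2 (illustrative values)
h1, h2 = 2.0, 0.5        # start and end heights in metres
C = 100.0                # arbitrary constant offset in joules

dKE_plain = m * g * h1 - m * g * h2
dKE_shift = (m * g * h1 + C) - (m * g * h2 + C)

print(dKE_plain, dKE_shift)  # both ~14.7 J: the offset cancels
```

Any other choice of $C$ gives the same two numbers, which is the whole point: only the difference in potential energy is physical.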
EDIT:
To add to this, let's look at a little extra mathematical formalism. It turns out that in classical mechanics, the nonexistence of "absolute potential energy" is a special case of something called gauge invariance.
For simplicity, let's talk about a one dimensional system - we have a ball that can only move back and forth along a single line. Let $x$ be the position of the ball.
Let $U(x)$ be the potential energy of the system as a function of the position of the ball. This could, for example, be a simple gravitational problem -- $x$ is the height of the ball above the ground, and $U(x) = m g x.$ But for generality, we won't specify what the form of $U$ is.
What we will say is that potential energy is the result of some force acting on an object. We know that the potential energy of an object at a certain position resulting from a given force is the work required to bring that object to that position. So if an object is acting under a force $F(x)$, the potential energy at position $x$ is $U(x) = W = - \int F(x)\, dx$ (the negative sign comes from the fact that we must work in the opposite direction of the force).
So from the fundamental theorem of calculus, if $U(x) = - \int F(x)\, dx$, then
$F(x) = - \frac{d U(x)}{d x}.$
Okay, this is interesting. In classical mechanics, we can completely describe the motion of a system if we know the forces acting on it (since we can then use Newton's law $F = ma$). But since we know $F(x) = - \frac{d U(x)}{d x}$, we can describe the motion of the system completely by knowing the potential energy.
Here's the payoff:
If the forces are the same for two different potential energy functions, then those potential energy functions result in the same physical behavior.
Mathematically:
If $U_1(x)$ and $U_2(x)$ are two potential energy functions such that $- \frac{d U_1(x)}{dx} = - \frac{d U_2(x)}{dx}$, then the potential energy functions result in the same physical behavior.
What does it mean if two functions have the same derivative? Well, it means that they differ by a constant.
Oh! That's where we wanted to get to, isn't it? If two potential energy functions differ by a constant, then they result in the same physical behavior. So it doesn't make sense to talk about "absolute potential energy", because no matter what we can add any constant we want and we'll obtain the same forces and thus the same physical behavior.
Hence it only makes sense to talk about changes in potential energy, not absolute potential energy.
(I said earlier that this is an example of a gauge invariance -- choosing a different constant to add to your potential energy function may be referred to as choosing a different "gauge" [which is a physical term]. The principle of gauge invariance states that the physical behavior of the system is the same regardless of which gauge you choose. In physics we often choose the gauge that makes our calculations the simplest -- which is why we choose the potential energy function $m g h$, where the potential energy is zero at ground level. This is an example of picking a useful gauge) | {
"domain": "physics.stackexchange",
"id": 36656,
"tags": "newtonian-gravity, potential-energy, definition, conventions"
} |
Ranking bond types from strongest to weakest | Question: Note: I've already handed this in for homework and got the question wrong but don't understand why. Not looking for someone to do my homework for me, just trying to flesh out an area where I'm not yet proficient.
This is problem 4.10 from the book "Nanotechnology: Understanding Small Systems" 2nd ed. by Rogers, Pennarthur, and Adams.
The exact wording of the problem states:
"Rank the following bonds from strongest to weakest and provide the bond energy: the bond between hydrogen and oxygen in a water molecule; the bond between sodium and chloride in the NaCl molecule; the bond between atoms in a metal; the van der Waals bond between adjacent hydrogen atoms."
I've found the exact bond strength of 3 of 4 of these.
Na+ - Cl- bond = 830 zJ or 8.3E-19 J
H-O bond = 760 zJ or 7.6E-19 J
H-H bond = 0.14 zJ or 1.4E-22 J
What I cannot find is the bond strength for metal-to-metal atoms. I tried specifically looking for copper, silver, and iron and couldn't find the bond strength between atoms.
To complicate things further, this question has been asked numerous times in various iterations and other answers have stated that covalent bonds are stronger than ionic bonds, which are in turn stronger than metallic bonds. Everyone agrees H-H bonding is weakest.
So is it just the case that Na-Cl is a particularly strong ionic bond and H-O is a particularly weak covalent bond such that this particular ionic bond is stronger than this particular covalent bond? Or are the other answers incorrect?
I should probably also note that based off of copper's heat of vaporization of 3630 J/g and its molar mass of 63.546 g/mol I calculated a bond strength of 383 zJ and WRONGLY concluded:
ionic > covalent > metallic > H-H (van der Waals)
So I got the question marked incorrect which probably means I didn't do the calculation for copper's bond strength correctly.
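For reference, the copper estimate described in the question can be reproduced in a few lines (this only reproduces the asker's arithmetic; whether heat of vaporization per atom is the right proxy for a single metallic bond is exactly what is in question):

```python
# Rough per-atom cohesive energy of copper from its heat of vaporization.
N_A = 6.022e23            # Avogadro's number, 1/mol
H_vap = 3630.0            # J/g, copper's heat of vaporization (from the question)
M = 63.546                # g/mol, molar mass of copper

E_per_atom = H_vap * M / N_A          # joules per atom
print(E_per_atom * 1e21)              # ~383 zJ, matching the question's figure
```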
Answer: The lattice energies of ionic compounds are relatively large. The lattice energy of NaCl, for example, is 787.3 kJ/mol, which is only slightly less than the energy given off when natural gas burns. The bond between ions of opposite charge is strongest when the ions are small.
For example, an HO–H bond of a water molecule (H–O–H) has 493.4 kJ/mol of bond-dissociation energy, and 424.4 kJ/mol is needed to cleave the remaining O–H bond. The bond energy of the covalent O–H bonds in water is 458.9 kJ/mol, which is the average of the two values.
Total for water: 493.4 + 424.4 = 917.8 kJ/mol.
Clearly then, comparing these totals, H2O > NaCl.
For metals, you need to add the bond energy along with the energy for vaporisation.
"domain": "physics.stackexchange",
"id": 29914,
"tags": "homework-and-exercises, physical-chemistry"
} |
nerode equivalence classes q - prefix? | Question: Lets consider the following language : $L = \{1w |w \in \Sigma^*\}$ (Alphabet is 0 and 1).
I know this language is regular, I just have to prove it now, the Problem here is the number of equivalence classes, I thought it would be:
$[1] = \{x|x $ starts with 1$\}$ and $[\epsilon] = \{x| x$ doesn't start with 1$\}$.
But now I am dubious: isn't it possible for L to have 3 equivalence classes, one containing words that start with 1, one containing words that start with 0, and one class that has epsilon as its only element?
Thanks in advance
Answer: There are three equivalence classes, and this can be seen from the minimal DFA, which has three states. The equivalence classes are $\epsilon,0\Sigma^*,1\Sigma^*$. Indeed:
$\epsilon$ and $0$ are in different classes since $\epsilon 1 \in L$ whereas $01 \notin L$.
$\epsilon$ and $1$ are in different classes since $\epsilon \epsilon \notin L$ and $1\epsilon \in L$.
$0$ and $1$ are in different classes since $0\epsilon \notin L$ whereas $1\epsilon \in L$.
All words in $\{\epsilon\}$ are clearly equivalent.
All words in $0\Sigma^*$ are equivalent since $wz \notin L$ for all words $w \in 0\Sigma^*$ and all words $z$.
All words in $1\Sigma^*$ are equivalent since $wz \in L$ for all words $w \in 1\Sigma^*$ and all words $z$. | {
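The three classes correspond exactly to the three states of the minimal DFA; here is a minimal sketch (state names are invented for illustration) that exercises the distinguishing extensions listed above:

```python
# Minimal DFA for L = { 1w : w in {0,1}* }: three states, matching
# the three Nerode classes {eps}, 0*Sigma^*, 1*Sigma^*.
def run(word):
    state = "eps"                      # start state, class {eps}
    for ch in word:
        if state == "eps":
            state = "one" if ch == "1" else "zero"
        # "zero" and "one" are absorbing: the first symbol decides.
    return state == "one"              # only "one" is accepting

# The distinguishing extensions from the answer:
assert run("" + "1") and not run("0" + "1")   # eps vs 0: suffix "1" separates them
assert not run("" + "") and run("1" + "")     # eps vs 1: suffix "" separates them
assert not run("0" + "") and run("1" + "")    # 0 vs 1: suffix "" separates them
```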
"domain": "cs.stackexchange",
"id": 9018,
"tags": "formal-languages, regular-languages"
} |
Can you express the Feynman propagator as a limit? | Question: At first I thought that the Feynman propagator was the limit of:
$$ G(x) = \frac{1}{x^2 + i \varepsilon} $$
But if you apply the wave equation to this you get:
$$ \Box G(x) = \frac{\varepsilon}{(x^2 + i \varepsilon)^3} $$
So it seems this not the solution to
$$\Box G(x) = \delta_4(x) $$
(Where $\delta_4(x)=\delta(x_0)\delta(x_1)\delta(x_2)\delta(x_3)$). But is the solution to:
$$ \Box G(x) = \varepsilon G(x)^3 $$
(OK so in the limit the RHS is zero). The RHS is like $\delta(x^2)$ instead of $\delta_4(x)$.
So it looks like that my first assumption was wrong. I know if you can get it using the fourier integral:
$$ G_\varepsilon(x) = \int \frac{\exp(i x.k)}{ k^2 + i\varepsilon } dk^4 $$
but I would like to find it as a nice limit not involving integrals or delta functions. I believe the above integral would involve Bessel functions.
It would be OK still if you could show that:
$$\int G(x-y)\Box_y G(y-z)dy^4 = G(x-z) $$ for this function.
Edit:
Perhaps the solution is that we define
$$ \lim_{\varepsilon\rightarrow 0} \varepsilon G(x)^3 \equiv \delta_G(x) $$
Since it has all the properties of a delta function. Then the equation would be:
$$ \lim_{\varepsilon\rightarrow 0}\Box G(x) = \delta_G(x) $$
But should then we need to work out what is:
$$ \int G(x-y) \delta_G(y-z) dy^4 = ? $$
to see if it has all the correct properties.
Edit 2:
I think it must be that Lorentz invariance is only satisfied in the limit as $\varepsilon\rightarrow 0$. So
$$G(x) = \frac{1}{r^2-t^2} + O(\varepsilon,r,t) $$
and
$$\delta_4(x) = \delta(r^2+t^2) $$
So G(x) must be a non-Lorentz-invariant function of r, t and $\varepsilon$ which is only Lorentz invariant in the limit.
Although it says in this book that my first answer was correct!
Maybe is it wrong to use $\delta(r^2+t^2)$ ?
Answer: So that is correct:
$$ G(x) = \frac{1}{x^2+i\varepsilon^2} $$
The equation
$$ \Box G(x) = \delta_4(x) $$
is satisfied since
$$ \lim_{\varepsilon\rightarrow 0} \frac{\varepsilon^2}{(x^2+i\varepsilon^2)^3} = \delta(x^2) $$
Note that $\delta(0) = \frac{1}{\varepsilon^4} $
The issue with $\delta(x^2-x_0^2)$ not being the same as $\delta(x^2+x_0^2)$ is that because these always appear in an integral one can do a Wick rotation from one to the other. Hence one is as good as the other! Which, interestingly means a delta function for a single point is equivalent in an integral to a delta function over the entire light cone of a point.
Problem solved.
Also as an aside when normalising G(x) at time $x_0$ one gets a factor of $\varepsilon$:
$$ G(x) = \frac{\varepsilon}{x^2+i\varepsilon^2} \frac{1}{|x_0|^{1/2}} $$
so this behaves like a delta function and proves light always stays on the light cone. | {
"domain": "physics.stackexchange",
"id": 23646,
"tags": "quantum-field-theory, integration, propagator"
} |
Gravitation law paradox for very close objects? | Question: We all know that gravitation force between two small (not heavenly) bodies is negligible. We give a reason that their mass is VERY small. But according to inverse square law, as $r\to 0$, then $F\to \infty$. But in real life we observe that even if we bring two objects very close, no such force is seen.
Why is this so?
Answer: The inverse-square law holds for spherically symmetric objects, but in that case the main problem is that $r$ is the distance between their centers. So "very close" spheres are still quite a bit apart--$r$ would be at least the sum of their radii.
For two spheres of equal density and size just touching each other, the magnitude of the gravitational force between them is
$$F = G\frac{M^2}{(2r)^2} = \frac{4}{9}G\pi^2\rho^2r^4\text{,}$$
which definitely does not go to infinity as $r\to 0$ unless the density $\rho$ is increased, but ordinary matter has densities of only up to $\rho \sim 20\,\mathrm{g/cm^3}$ or so.
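The equality of the two expressions, and the smallness of the resulting force, can be verified numerically; the radius and density below are illustrative values, not taken from the answer:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
rho = 20000.0            # kg/m^3, about as dense as ordinary matter gets
r = 0.1                  # m, sphere radius (illustrative)

# Force between two equal touching spheres, two equivalent ways:
M = (4.0 / 3.0) * math.pi * rho * r**3
F_direct = G * M**2 / (2 * r)**2                      # Newton's law, centers 2r apart
F_closed = (4.0 / 9.0) * G * math.pi**2 * rho**2 * r**4  # closed form from the answer

print(F_direct, F_closed)   # the two agree, and both are tiny (~1e-5 N)
```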
Tests of Newton's law for small spheres began with the Cavendish experiment, and this paper has a collection of references to more modern $1/r^2$ tests. | {
"domain": "physics.stackexchange",
"id": 10969,
"tags": "forces, newtonian-gravity, symmetry"
} |
Python program to manipulate a list based on user input | Question: I have this Python code:
def main() -> None:
print("Welcome To UltiList!")
lst = []
len_of_list = int(input("Enter the len of the list: "))
while len(lst) <= len_of_list:
print(">> ", end="")
args = input().strip().split(" ")
if args[0] == "append":
lst.append(int(args[1]))
elif args[0] == "insert":
lst.insert(int(args[1]), int(args[2]))
elif args[0] == "remove":
lst.remove(int(args[1]))
elif args[0] == "pop":
lst.pop()
elif args[0] == "sort":
lst.sort()
elif args[0] == "reverse":
lst.reverse()
elif args[0] == "print":
print(lst)
elif args[0] == "exit":
print("Bye!")
exit()
else:
print("That's not a valid command!")
if __name__ == "__main__":
main()
But I think it is very repetitive. E.g., in JavaScript I could do something like:
const commands = {
append: (arg)=>lst.append(arg),
remove: (arg)=>lst.remove(arg),
...
}
// This would go in the while loop
commands[`${args[0]}`](args[1])
And I would no longer have to make any comparisons, which in my opinion makes it more readable and shorter.
What can be improved?
Answer: Your 'switch statement' is bad because you've chosen to not use the same algorithm as your JavaScript example:
const commands = {
append: (arg)=>lst.append(arg),
remove: (arg)=>lst.remove(arg),
...
}
Converted into Python:
commands = {
"append": lambda arg: lst.append(arg),
"remove": lambda arg: lst.remove(arg),
}
commands[f"{args[0]}"](args[1])
Python has no direct equivalent to JavaScript's multi-statement () => {...} lambdas (a Python lambda is limited to a single expression).
So you can abuse a class if you need them:
class Commands:
def append(arg):
lst.append(arg)
def exit(arg):
print("bye")
exit()
getattr(Commands, f"{args[0]}")(args[1]) | {
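Putting the dispatch-table idea together, here is a minimal runnable sketch; the command set and argument handling are illustrative, and error handling (bad integers, wrong arity) is deliberately omitted:

```python
lst = []

# Dispatch table: command name -> function taking the remaining args.
commands = {
    "append":  lambda args: lst.append(int(args[0])),
    "insert":  lambda args: lst.insert(int(args[0]), int(args[1])),
    "remove":  lambda args: lst.remove(int(args[0])),
    "pop":     lambda args: lst.pop(),
    "sort":    lambda args: lst.sort(),
    "reverse": lambda args: lst.reverse(),
    "print":   lambda args: print(lst),
}

def dispatch(line):
    name, *args = line.strip().split()
    action = commands.get(name)
    if action is None:
        print("That's not a valid command!")
    else:
        action(args)

dispatch("append 3")
dispatch("append 1")
dispatch("sort")
print(lst)   # [1, 3]
```

Using `commands.get` instead of indexing keeps the "invalid command" branch without a try/except, mirroring the else clause of the original if/elif chain.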
"domain": "codereview.stackexchange",
"id": 41834,
"tags": "python, python-3.x"
} |
Is there any better than (2/k)-approximation algorithm for Independent Set in Coloring graph? | Question: I would like to know any results in terms of approximation algorithm about Maximum weight Independent Set problem in coloring graph.
Input: Given a graph G with non-negative vertex weights and a valid (but not necessarily optimal) coloring of G.
Task: Find the Maximum weight Independent Set.
Note that this problem is NP-hard (for k>2, where k is the number of colors) [see here]
In Hochbaum's 1996 paper "Approximating Covering and Packing Problems: Set Cover, Vertex Cover, Independent Set, and related problems", Theorem 3.2 gives a (2/k)-approximation algorithm for this problem.
Now, my questions are: Are there any new results regarding this problem? Is there any lower bound for this problem? Of course, for general graphs without a given coloring, Independent Set has no constant-factor approximation unless P=NP. Is it true that we cannot have a constant factor for this coloring case either? Anything related to this problem would be nice to mention here.
Thank you!
Answer: If you have a graph with maximum degree $\Delta$, then the greedy algorithm finds a coloring with $\Delta+1$ colors, so for $k = \Delta+1$ the assumption that you are given a proper $k$-coloring does not change the complexity of the problem. It is known that independent set is NP-hard to approximate within factor $\Delta^{1 - o(1)}$, and Unique Games-hard to approximate within factor $\frac{\Delta}{O(\log^2\Delta)}$, which, for $k=\Delta$ is within a $\log \Delta$ factor from the upper bound you cite. (I prefer to write approximation factors as bigger than 1, so I am thinking of the upper bound you cite as $\frac{k}{2}$.)
However, it's still interesting to close these logarithmic gaps. This paper by Bansal may contain useful ideas. | {
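For intuition about where the factor $k$ enters, here is the trivial baseline (this is not Hochbaum's algorithm, which does better): return the heaviest color class. Each class is independent by construction, and the heaviest one carries at least a $1/k$ fraction of the total weight, hence of the optimum:

```python
from collections import defaultdict

def heaviest_color_class(weights, coloring):
    """Trivial 1/k-approximation: every color class is an independent
    set, and the heaviest class has >= 1/k of the total weight,
    which upper-bounds OPT."""
    by_color = defaultdict(list)
    for v, c in coloring.items():
        by_color[c].append(v)
    return max(by_color.values(),
               key=lambda vs: sum(weights[v] for v in vs))

# Toy instance: a path a-b-c, properly 2-colored.
weights = {"a": 2.0, "b": 1.0, "c": 3.0}
coloring = {"a": 0, "b": 1, "c": 0}
print(heaviest_color_class(weights, coloring))   # ['a', 'c'], weight 5
```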
"domain": "cstheory.stackexchange",
"id": 4334,
"tags": "graph-theory, approximation-algorithms, approximation-hardness"
} |
Received power for free space optic (FSO) | Question: I am facing a problem with the calculation of received power in FSO.
I have calculated the received power for free space optic (FSO) using the equation:
Lsystem (system loss) is set to 8dB. PTotal can be calculated as:
where Ntx (number of transmitters) = 1 and PTx (transmitted power) = 7.78 dBm. LGeo (geometric loss) can be calculated as:
where d2R (receiver diameter) = 0.07 m, l = 1, θ (divergence angle) = 0.05 mrad and Nr (number of receivers) = 1.
The problem is that I got received power = 10.72 dBm, which is an illogical value since I have set the transmitted power to 7.78 dBm. To my understanding, the received power must be lower than the transmitted power.
I hope that anyone may help me to understand this situation.
For your information, I refer this paper for the calculation: https://ieeexplore.ieee.org/document/6015903
Thank you.
EDIT
This is my calculation coded in Python code:
import math
import numpy as np

d1=0.035
d2=0.07
NRX= 1 # no. of receiver lenses, site B
ARX=((math.pi*(d2**2))/4)*NRX
beam=0.05 #beam divergence
L=1
geo=((4*(ARX))/(np.pi*((beam*L)**2))) # site A
geometric_loss=-(10*np.log10(geo))
print ("Geometric loss",geometric_loss,"\n")
PTX=7.78
Ptotal=PTX+10*np.log10(1)#change number of transmitter here
Lsystem=0
PRX=Ptotal-Lsystem-geometric_loss
print ("Received power",PRX)
Output:
Geometric loss -2.92256071356
Received power 10.7025607136
Answer:
$$L_{geo}=-10\log_{10} \frac{4A_R}{\pi(l\theta)^2}$$
This equation is valid when the area of the receiver is (substantially) smaller than the beam diameter.
With your values ($\theta=0.05\ {\rm mrad}$ at distance 1 m, and $A_R=0.015\ {\rm m}^2$), this is not true and this equation can't be used. You can probably just assume $L_{geo}=0$ with these values.
But 0.05 mrad is an extremely well collimated beam. Maybe the value should be 50 mrad?
Edit
Your $L_{geo}$ term is just accounting for the fact that the receiver is smaller than the optical beam. You could re-write it as
$$L_{geo}=-10\log_{10}\frac{A_R}{A_B}$$
where $A_B$ is the area of the beam spot in the plane of the receiver.
Even with 50 mrad (as used in your Python code), the beam spot at 1 meter will have a diameter of 0.05 m. Since your receiver diameter is 0.07 m, under this approximation you can assume you essentially capture the entire beam with your receiver, and $L_{geo} = 0\ {\rm dB}$.
In the real world of course you wouldn't get perfect coupling. To get the actual coupling loss, you'd need a better model for the beam intensity as a function of transverse position, you'd need to know how well centered the beam is on the receiver, how accurately the receiver is pointed toward the source, etc.
If you're just getting the loss at 1 m to be able to compare with what it would be at 10 m and 100 m, assuming $L_{geo}$ of 0 dB at 1 m is probably good enough. | {
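Following this reasoning, a hedged fix to the question's script would compare the beam-spot area with the receiver area and clamp the loss at 0 dB when the receiver captures the whole beam. The function below is this answer's suggestion (top-hat beam profile and perfect alignment assumed), not a formula from the cited paper:

```python
import math

def geometric_loss_db(theta, distance, rx_diameter):
    """Geometric loss in dB, clamped to 0 dB when the receiver is at
    least as large as the beam spot (top-hat beam, small-angle limit)."""
    beam_diameter = theta * distance           # spot diameter at the receiver
    a_beam = math.pi * beam_diameter**2 / 4.0  # beam spot area
    a_rx = math.pi * rx_diameter**2 / 4.0      # receiver aperture area
    if a_rx >= a_beam:
        return 0.0                             # receiver captures the whole beam
    return -10.0 * math.log10(a_rx / a_beam)

print(geometric_loss_db(0.05, 1.0, 0.07))      # 0.0 dB: 5 cm spot, 7 cm lens
print(geometric_loss_db(0.05, 10.0, 0.07))     # ~17 dB: 50 cm spot, 7 cm lens
```

With the clamp, the 1 m case gives a received power of PTx minus system loss only, which is below the transmitted power as expected.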
"domain": "physics.stackexchange",
"id": 54857,
"tags": "attenuation"
} |
How do I get warnings when using catkin tools? | Question:
I have been using catkin_make for a while and am now testing catkin_tools.
It seems like the default configuration hides the warnings from the user; how do I print the warnings?
catkin_make
dell@M6700:~/test_workspace$ catkin_make
Base path: /home/dell/test_workspace
Source space: /home/dell/test_workspace/src
Build space: /home/dell/test_workspace/build
Devel space: /home/dell/test_workspace/devel
Install space: /home/dell/test_workspace/install
####
#### Running command: "cmake /home/dell/test_workspace/src -DCATKIN_DEVEL_PREFIX=/home/dell/test_workspace/devel -DCMAKE_INSTALL_PREFIX=/home/dell/test_workspace/install -G Unix Makefiles" in "/home/dell/test_workspace/build"
####
-- The C compiler identification is GNU 4.9.3
-- The CXX compiler identification is GNU 4.9.3
-- Check for working C compiler: /usr/lib/ccache/cc
-- Check for working C compiler: /usr/lib/ccache/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/lib/ccache/c++
-- Check for working CXX compiler: /usr/lib/ccache/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Using CATKIN_DEVEL_PREFIX: /home/dell/test_workspace/devel
-- Using CMAKE_PREFIX_PATH: /home/dell/MEGAsync/catkin_workspace/devel;/opt/ros/indigo
-- This workspace overlays: /home/dell/MEGAsync/catkin_workspace/devel;/opt/ros/indigo
-- Found PythonInterp: /usr/bin/python (found version "2.7.6")
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/dell/test_workspace/build/test_results
-- Looking for include file pthread.h
-- Looking for include file pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.6.16
-- BUILD_SHARED_LIBS is on
WARNING: Package "ompl" does not follow the version conventions. It should not contain leading zeros (unless the number is 0).
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 1 packages in topological order:
-- ~~ - test_package
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'test_package'
-- ==> add_subdirectory(test_package)
-- Using these message generators: gencpp;genlisp;genpy
-- Configuring done
-- Generating done
-- Build files have been written to: /home/dell/test_workspace/build
####
#### Running command: "make -j8 -l8" in "/home/dell/test_workspace/build"
####
Scanning dependencies of target test_node
[100%] Building CXX object test_package/CMakeFiles/test_node.dir/src/test_node.cpp.o
/home/dell/test_workspace/src/test_package/src/test_node.cpp: In function ‘int main(int, char**)’:
/home/dell/test_workspace/src/test_package/src/test_node.cpp:81:12: warning: unused variable ‘z’ [-Wunused-variable]
double z;
^
Linking CXX executable /home/dell/test_workspace/devel/lib/test_package/test_node
[100%] Built target test_node
Cleaning the workspace
dell@M6700:~/test_workspace$ catkin clean -a
catkin build
dell@M6700:~/test_workspace$ catkin build
---------------------------------------------------------------------------------------
Profile: default
Extending: [env] /home/dell/MEGAsync/catkin_workspace/devel:/opt/ros/indigo
Workspace: /home/dell/test_workspace
Source Space: [exists] /home/dell/test_workspace/src
Build Space: [exists] /home/dell/test_workspace/build
Devel Space: [exists] /home/dell/test_workspace/devel
Install Space: [missing] /home/dell/test_workspace/install
DESTDIR: None
---------------------------------------------------------------------------------------
Isolate Develspaces: False
Install Packages: False
Isolate Installs: False
---------------------------------------------------------------------------------------
Additional CMake Args: None
Additional Make Args: None
Additional catkin Make Args: None
Internal Make Job Server: True
---------------------------------------------------------------------------------------
Whitelisted Packages: None
Blacklisted Packages: None
---------------------------------------------------------------------------------------
Workspace configuration appears valid.
---------------------------------------------------------------------------------------
Found '1' packages in 0.0 seconds.
Starting ==> test_package
Finished <== test_package [ 8.3 seconds ]
[build] Finished.
[build] Runtime: 8.3 seconds
Originally posted by VictorLamoine on ROS Answers with karma: 1505 on 2016-04-01
Post score: 0
Answer:
According to the following link (https://github.com/catkin/catkin_tools/issues/53), there is a "-v" option to use with catkin build that will print more information.
Originally posted by F.Brosseau with karma: 379 on 2016-04-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by VictorLamoine on 2016-04-02:
Note that newer versions of catkin_tools will print warnings without any switch (https://github.com/catkin/catkin_tools/issues/308)
Comment by F.Brosseau on 2016-04-04:
Good to know, I don't really use this tool for the moment. | {
"domain": "robotics.stackexchange",
"id": 24291,
"tags": "ros, catkin-tools"
} |
Skin depth and Mermin-Wagner theorem | Question: I recently became aware of the Coleman-Mermin-Wagner theorem presented in [1802.07747] for higher-form symmetries and I have a question about how it might be applied to electromagnetism.
The theorem states: continuous $p$-form symmetries in $D$ spacetime dimensions are never broken if $p ≥ D − 2$.
In 3+1 spacetime dimensions, we consider the photon to be the Goldstone associated with a continuous $U(1)$ one-form symmetry. However, as stated in [1802.07747], in 2+1 dimensions, this interpretation as a Goldstone is no longer allowed.
Questions:
Suppose I have a dielectric material at finite temperature in 3+1 spacetime dimensions. Effectively, I now have a 3 dimensional system. Does this mean that the $U(1)$ one-form symmetry cannot be spontaneously broken?
If the answer to the above question is "yes," then the photon field can no longer be viewed as a Goldstone and thus need not satisfy Goldstone's theorem. As a result, I would expect the low-frequency dispersion relation $\omega \propto k$ should not hold (in the absence of fine-tuning). Are there any circumstances in which this dispersion relation holds exactly at arbitrarily low frequency (again in the absence of fine-tuning)?
If the answer to the above question is "no," then can we interpret the Coleman-Mermin-Wagner theorem as mandating that all finite-temperature materials have a finite skin-depth beyond which electromagnetic radiation gets exponentially damped?
Answer: Great question! However, the question isn't really specific to 1-form symmetries. You can ask the same question about a system in (2+1)-D that spontaneously breaks a 0-form $\mathrm{U}(1)$ symmetry. At nonzero temperature the Coleman-Mermin-Wagner theorem forbids the $\mathrm{U}(1)$ symmetry to be spontaneously broken, so you can ask what happens to the Goldstone modes. The answer is, they are still present, with no frequency gap opening up, as long as you are below the Berezinskii–Kosterlitz–Thouless (BKT) transition temperature.
A nice explanation for this is provided in the following paper:
https://arxiv.org/abs/1908.06977
where it is argued that a sufficient requirement for Goldstone modes in $d$ spatial dimensions is the existence of an emergent $d$-form symmetry that has a mixed anomaly with the 0-form $\mathrm{U}(1)$ symmetry. Such an emergent symmetry is basically equivalent to the absence of vortices, and is present in the (2+1)-D system below the BKT temperature.
I imagine there is probably a similar statement that applies to (3+1)-D electromagnetism. It has two emergent $\mathrm{U}(1)$ 1-form symmetries that have mixed anomalies with each other. That statement is presumably robust even at nonzero temperature, even though the spontaneous symmetry breaking is not, and that would be sufficient to imply a gapless photon. | {
"domain": "physics.stackexchange",
"id": 80661,
"tags": "electromagnetic-radiation, symmetry, goldstone-mode"
} |
Did our Universe experience a curvature dominated phase? | Question: So, my question is simple: did our Universe experience a curvature dominated phase?
Or, rather, could our Universe have experienced a curvature dominated phase?
This seems quite shruggish, at first glance, as the Universe has been measured to be pretty locally (one horizon scale) flat. However, within the experimental error, the Universe could be slightly curved. So, I'm thinking if the Universe was positively curved then the curvature could have dominated between the matter dominated phase and the current $\Lambda$ dominated phase? Is that correct?
Answer: Let's analyse the evolution of the curvature in the $\Lambda\text{CDM}$ model. If $\rho_R$, $\rho_M$, and $\rho_\Lambda$ are the densities of radiation, matter and dark energy, and
$$
\rho_c = \frac{3H^2}{8\pi G}
$$
is the critical density, then we can define
$$
\Omega_{R} = \frac{\rho_{R}}{\rho_{c}},\quad
\Omega_{M} = \frac{\rho_{M}}{\rho_{c}},\quad
\Omega_{\Lambda} = \frac{\rho_{\Lambda}}{\rho_{c}},
$$
and the quantity
$$
\Omega_{K} = 1 - \Omega_{R} - \Omega_{M} - \Omega_{\Lambda},
$$
which can serve as a measure of the curvature: if $\Omega_{K} = 0$ the universe is flat, if $\Omega_{K} < 0$ the curvature is positive, and if $\Omega_{K} > 0$ the curvature is negative. We can write these quantities in terms of their present-day values (indicated by subscripts "$0$") as follows:
$$
\rho_R = \rho_{R,0}\, a^{-4},\quad
\rho_M = \rho_{M,0}\, a^{-3},\quad
\rho_\Lambda = \rho_{\Lambda,0},
$$
where $a$ is the scale factor with present-day value $a=1$, so that
$$
\Omega_{R} = \Omega_{R,0}\frac{H_0^2}{H^2}a^{-4},\quad
\Omega_{M} = \Omega_{M,0}\frac{H_0^2}{H^2}a^{-3},\quad
\Omega_{\Lambda} = \Omega_{\Lambda,0}\frac{H_0^2}{H^2}.
$$
From the Friedmann equations, we also find (see this post for details) that
$$
H^2 = H_0^2\left(\Omega_{R,0}\,a^{-4} + \Omega_{M,0}\,a^{-3} + \Omega_{K,0}\,a^{-2} + \Omega_{\Lambda,0}\right),
$$
so that
$$
\Omega_{K}(a) = \frac{\Omega_{K,0}\,a^{-2}}{\Omega_{R,0}\,a^{-4} + \Omega_{M,0}\,a^{-3} + \Omega_{K,0}\,a^{-2} + \Omega_{\Lambda,0}}.
$$
From this we learn the following:
If $\Omega_{K,0}=0$, then $\Omega_{K}\equiv 0$. That is, if the universe is exactly flat today, it has always been flat and always will be.
As $a\rightarrow\infty$, the term $\Omega_{\Lambda,0}$ dominates, so that $\Omega_{K}\rightarrow 0$. In other words, if $\Omega_{K,0}\ne 0$, the curvature of the universe will go to zero in the future under the influence of dark energy.
As $a\rightarrow 0$, the term $\Omega_{R,0}$ dominates, and again $\Omega_{K}\rightarrow 0$. So in the distant past, the curvature of the universe was also very close to zero; this is known as the flatness problem, and it is one of the motivations for the existence of an inflationary epoch.
Since $|\Omega_{K}|$ vanishes in the past and in the future, it must have had a maximum value at some intermediate time if it is nonzero today. This maximum occurs when the derivative of $\Omega_{K}(a)$ is zero. After some algebra, this reduces to solving
$$
2\,\Omega_{R,0}\,a^{-4} + \Omega_{M,0}\,a^{-3} - 2\,\Omega_{\Lambda,0} = 0.
$$
Incidentally, this is also the moment at which the expansion of the universe transitioned from deceleration to acceleration (i.e. when $\ddot{a}=0$, see the previous link for details). Using the values $\Omega_{R,0}\approx 0$, $\Omega_{M,0}\approx 0.3$ and
$\Omega_{\Lambda,0}\approx 0.7$, we find the solution
$$
a_m \approx \left(\frac{\Omega_{M,0}}{2\,\Omega_{\Lambda,0}}\right)^{1/3} \approx 0.6,
$$
and the corresponding curvature
$$
\Omega_{K,m} \approx \frac{\Omega_{K,0}\,a_m^{-2}}{\Omega_{M,0}\,a_m^{-3} + \Omega_{K,0}\,a_m^{-2} + \Omega_{\Lambda,0}} \approx \frac{\Omega_{K,0}}{\Omega_{K,0} + (3/2)\,\Omega_{M,0}\,a_m^{-1}} \approx \frac{\Omega_{K,0}}{\Omega_{K,0} + 0.75}.
$$
Observations indicate that the present-day curvature is
$$-0.02 < \Omega_{K,0} < 0.02,$$
so that the minimum/maximum curvature would have been
$$-0.027 < \Omega_{K,m} < 0.026.$$
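These numbers are easy to reproduce; here is a short script (a sketch of my own, using the round parameter values quoted above) that evaluates $\Omega_K(a)$ at the maximum:

```python
# Evaluate Omega_K(a) in LCDM, using the round values from the text
# (Omega_M0 = 0.3, Omega_L0 = 0.7, radiation neglected).
O_M0, O_L0 = 0.3, 0.7

def omega_k(a, O_K0):
    """Curvature density parameter as a function of the scale factor a."""
    return O_K0 * a**-2 / (O_M0 * a**-3 + O_K0 * a**-2 + O_L0)

# Scale factor at which |Omega_K| is maximal (also where deceleration ends)
a_m = (O_M0 / (2 * O_L0)) ** (1 / 3)

print(round(a_m, 2))                  # 0.6
print(round(omega_k(a_m, 0.02), 3))   # 0.026
print(round(omega_k(a_m, -0.02), 3))  # -0.027
```

Plugging in the observed bounds on $\Omega_{K,0}$ reproduces the interval quoted above.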
In other words, the curvature of the universe has always been small. | {
"domain": "physics.stackexchange",
"id": 29872,
"tags": "cosmology, spacetime, universe, space-expansion, curvature"
} |
Why is spacetime not Riemannian? | Question: I apologize if this is a naïve question. I'm a mathematician with, essentially, no upper-level physics knowledge.
From the little I've read, it seems that spacetime is Lorentzian. Unfortunately, the need for a metric that isn't positive-definite escapes my understanding. Could someone explain the reasoning?
Answer: Firstly, the Equivalence Principle reduces to the statement that in a freefall frame, the spacetime manifold is locally exactly as it is for special relativity (see my answer here for a fuller explanation of why this is so).
So you can swiftly reduce your question to "Why does Minkowski spacetime have a nontrivial, non-Euclidean signature?".
Since we're now doing special relativity, there are many approaches to answering this latter, equivalent question. My favourite is as follows. I expand on the following in my answer to the Physics SE question "What's so special about the speed of light?", but the following is a summary.
To get special relativity, you begin with Galilean Relativity and basic symmetry and homogeneity assumptions about the World. If you assume absolute time (i.e. that all observers agree on the time between two events) then these assumptions uniquely define the Galilean Group as the group of co-ordinate transformations between relatively moving observers. Time of course is not acted on by that group: $t=t^\prime$ for any two co-ordinate systems.
Now you relax the assumption of absolute time. You now find that there is a whole family of possible transformation groups, each parameterized by a constant $c$ defining the group. The Galilean group is the family member for $c\to\infty$. The only way time can enter these transformations and be consistent with our homogeneity and symmetry assumptions is if it is mixed with a Euclidean spatial co-ordinate along the direction of relative motion by either a Lorentz boost or a Euclidean rotation. Another way of saying this is that the whole transformation group must be either $\mathrm{SO}(1,\,3)$ or $\mathrm{SO}(4)$.
But we must also uphold causality. That is, even though relatively moving inertial observers may disagree on the time between two events at the same spatial point, they will not disagree on the sign - if one event happens before the other in one frame, it must do so in the other. So our transformations can't mix time and space by rotations, because one could then always find a boost which would switch the direction of the time order of any two events for some inertial observer. Hence our isometry group must be $\mathrm{SO}(1,\,3)$ and not $\mathrm{SO}(4)$, and spacetime must therefore be Lorentzian and not Riemannian.
This is what user bechira means when he makes the excellent, pithy but probably mysterious comment:
To describe causal structure. | {
"domain": "physics.stackexchange",
"id": 80853,
"tags": "general-relativity, spacetime, metric-tensor"
} |
Derivation of Fourier Transform in Quantum Mechanics | Question: I recently came across an expression for Fourier Transform in Quantum Mechanics given by:
$$
\psi(x)=\frac{1}{\sqrt{2 \pi \hbar}} \int_{-\infty}^{\infty} e^{\frac{ipx}{\hbar}} \phi(p) d p
$$
I tried to derive it, starting from the Fourier transform, and it went like this:
$$
F(x)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(p) e^{i x p} d p
$$
$$
F\left(\frac{x}{\sqrt{\hbar}}\right)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f\left(\frac{p}{\sqrt{\hbar}}\right) e^{i\left(\frac{x}{\sqrt{\hbar}}\right)\left(\frac{p}{\sqrt{\hbar}}\right)} d\left(\frac{p}{\sqrt{\hbar}}\right)
$$
$$
F\left(\frac{x}{\sqrt{\hbar}}\right)=\frac{1}{\sqrt{2 \pi \hbar}} \int_{-\infty}^{\infty} f\left(\frac{p}{\sqrt{\hbar}}\right) e^{\frac{i x p}{\hbar}} d p
$$
I'm not able to find a way to get the arguments of both the function and its Fourier transform from $\frac{x}{\sqrt{\hbar}}$ to x and $\frac{p}{\sqrt{\hbar}}$ to p.
Any clue (or a complete answer) is very much appreciated.
Answer: Hint: start with basic Fourier transform
$$
F(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(k)e^{ikx}\,dk
$$
and perform a change of integration variable:
$$p=\hbar k, \qquad dk=\frac{dp}{\hbar}.$$
This gives
$$
F(x)=\frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{\hbar}} f\!\left(\frac{p}{\hbar}\right) e^{\frac{ipx}{\hbar}} d p,
$$
so identifying $\phi(p)=\frac{1}{\sqrt{\hbar}}f\!\left(\frac{p}{\hbar}\right)$ yields the quoted transform with arguments $x$ and $p$ directly, rather than $\frac{x}{\sqrt{\hbar}}$ and $\frac{p}{\sqrt{\hbar}}$. | {
"domain": "physics.stackexchange",
"id": 88958,
"tags": "quantum-mechanics, definition, conventions, fourier-transform"
} |
Why are topologically non-trivial materials robust against external perturbations or defects? | Question: Topologically non-trivial materials are insensitive to perturbations or defects. How can I prove it mathematically?
I thought of making the first-order perturbation term zero.
$$\left< \psi \right|H'\left| \psi \right>=0$$ Where $H'$ is the perturbation applied.
But I am unaware of the starting assumptions or conditions to be applied. Can anyone help me with any hint or answer to prove why topological materials are insensitive to perturbations?
Answer: For the sake of this explanation, let's concentrate on systems that have a spectral gap (not the most general scenario but it shall do).
Let $P$ be the Fermi projection of some topological material $H$ such that its Fermi energy is placed inside of a spectral gap of $H$. We have the Riesz formula $$ P = -\frac{1}{2\pi\mathrm{i}}\oint(H-zI)^{-1}\mathrm{d}z $$ where the contour of the integral encloses the spectrum below the gap.
If we perturb $H\mapsto H'$ such that the two Hamiltonians share a common gap, we have the formula (using the same contour) \begin{align} P' &= -\frac{1}{2\pi\mathrm{i}}\oint(H'-zI)^{-1}\mathrm{d}z \\ &=P -\frac{1}{2\pi\mathrm{i}}\oint \left((H'-zI)^{-1}-(H-zI)^{-1}\right) \mathrm{d}z \\
&=P -\frac{1}{2\pi\mathrm{i}}\oint (H'-zI)^{-1}(H-H')(H-zI)^{-1}\mathrm{d}z
\end{align} and so, assuming that $H,H'$ are semibounded (from below), we get the estimate $$ \|P-P'\| \leq C\|H-H'\|\,, $$ where the constant $C$ depends on the size of both gaps and the lower bound of the spectrum (of $H$ and of $H'$).
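This operator-norm bound is easy to illustrate numerically. The following sketch (my own construction, not from any particular reference) builds a random Hermitian $H$ with a spectral gap around zero, perturbs it, and compares $\|P-P'\|$ with $\|H-H'\|$:

```python
import numpy as np

rng = np.random.default_rng(0)

def fermi_projection(H, mu=0.0):
    """Spectral projection onto the eigenstates of H below the level mu."""
    vals, vecs = np.linalg.eigh(H)
    below = vecs[:, vals < mu]
    return below @ below.conj().T

# Random Hermitian H, then push the spectrum away from 0 to open a gap of 2
n = 8
A = rng.normal(size=(n, n))
vals, vecs = np.linalg.eigh((A + A.T) / 2)
H = vecs @ np.diag(np.where(vals < 0, vals - 1.0, vals + 1.0)) @ vecs.T

# Small Hermitian perturbation, small enough to keep a common gap around 0
B = rng.normal(size=(n, n))
V = 0.05 * (B + B.T) / 2
P, Pp = fermi_projection(H), fermi_projection(H + V)

# ||P - P'|| is controlled by ||H - H'|| = ||V||; here a constant C = 2 works
print(np.linalg.norm(P - Pp, 2) <= 2 * np.linalg.norm(V, 2))  # True
```

The gap width and the perturbation strength here are arbitrary illustrative choices; the point is only that the change in the projection is of the same order as the change in the Hamiltonian.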
Since we can express the Chern number as a function of $P$ which is moreover continuous w.r.t. this operator norm, this shows the desired stability. | {
"domain": "physics.stackexchange",
"id": 73188,
"tags": "condensed-matter, topological-insulators, topological-order, topological-phase"
} |
Is the number of photons in an incident radiation proportional to its intensity? | Question: Does the number of photons emitted depend upon the frequency of the light source (or light color)?
I don't think so, but one of the questions I attempted today suggests so: when is the number of photons proportional to the frequency of the radiation?
I think it should only depend on the intensity of light. How can one relate the number of photons in an incident radiation to its intensity?
Answer: Intensity is the total amount of energy falling on (or going through) a surface/region per unit area $A$ per unit time $t$ and therefore measured in $\rm J/(m^2\,s)$.
For monochromatic radiation, the total energy emitted equals the number of photons $n$ times the energy of one photon, $h\nu$.
Hence intensity $I$ is given by $$I=\frac{hn\nu}{At}$$
For constant area and time, $$I \propto n\cdot\nu$$
This is a very important result. You can increase the intensity of the radiation by either increasing the number of photons in it, or increasing the energy of each photon, or both.
The number of photons does not necessarily increase when the frequency of the radiation increases; only the energy of each photon increases.
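As a concrete illustration of $I = \frac{hn\nu}{At}$, here is a rough calculation (illustrative numbers of my own, not from the question) of the photon flux carried by a 1 mW beam at two different frequencies:

```python
# Photon flux n/t = P / (h * nu) for a beam of fixed power (i.e. fixed
# intensity over a fixed area). Illustrative numbers only.
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
P = 1e-3        # beam power, W

def photons_per_second(wavelength_m):
    nu = c / wavelength_m        # frequency of the radiation
    return P / (h * nu)          # photons per second needed to carry power P

green = photons_per_second(532e-9)  # ~2.7e15 photons per second
uv = photons_per_second(266e-9)     # double the frequency...
print(round(green / uv, 1))         # 2.0: ...so half as many photons
```

The same power (and hence the same intensity over a given area) carries half as many photons when each photon is twice as energetic, which is exactly the $n \propto 1/\nu$ behaviour at constant intensity.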
However, for constant intensity, $$n \propto \frac{1}{\nu}$$ | {
"domain": "physics.stackexchange",
"id": 40496,
"tags": "photons"
} |
launching the image_view node of kinect camera | Question:
Dear all,
For Kinect, to view depth images on the topic /camera/depth/image, we use this command
rosrun image_view image_view image:=/camera/depth/image
Now my question is: how do I include the image_view node in the launch file of my package? Including a node in a launch file is straightforward, but I do not know how to include image:=/camera/depth/image in my launch file. Does anyone know how?
Thanks,
Originally posted by A.M Dynamics on ROS Answers with karma: 93 on 2015-01-06
Post score: 0
Answer:
Remapping is done with the remap tag. Running image_view from within a launch file would look something like this:
<node name="image_view" type="image_view" pkg="image_view">
<remap from="image" to="/camera/depth/image"/>
</node>
Originally posted by Dan Lazewatsky with karma: 9115 on 2015-01-06
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by A.M Dynamics on 2015-01-06:
Thanks. But there is still a problem. Not any window pops up to show the output of camera and there is no errors!! so what is the problem? | {
"domain": "robotics.stackexchange",
"id": 20493,
"tags": "ros, roslaunch, node, image-view"
} |
Why are Common Lisp, Python and Prolog used in artificial intelligence? | Question: What are the advantages/strengths and disadvantages/weaknesses of programming languages like Common Lisp, Python and Prolog? Why are these languages used in the domain of artificial intelligence? What types of problems related to AI are solved using these languages?
Please give me links to papers or books on the mentioned topic.
Answer: If we talk about applied AI, the choice of a programming language for an AI application involves the same considerations as in any other software area: speed of the generated code, expressiveness, reusability, etc.
For example, since training a neural net is very CPU-expensive, languages such as C/C++ that produce very optimized code are very convenient. Moreover, there are GPU libraries for C/C++ that allow the use of strong parallelism.
A system with some complexity will combine more than one language, in order to use the best of each language at the points where it is needed.
But returning to the list of languages that appears in the question: as all of them are Turing complete, comparing them means talking about their paradigms, features, syntax and available compilers/interpreters. Obviously, that exceeds the possibilities of a simple answer. Just to show some key points about the ones mentioned in the question:
Prolog is a programming paradigm by itself. Its main advantage was that Prolog sentences are independent from the remaining ones and close to the mathematical definitions of the concepts. Moreover, it is itself a database. Its drawbacks are also well known: very slow, lack of libraries for I/O, ... . Very interesting (even mandatory) to know a few examples of algorithms in Prolog, but I doubt anybody is using it nowadays, except in obsolete university courses (when you reach the "!", cut its study).
Lisp is also a zombie. Its functional paradigm has now been included in lots of far more modern languages, combined with the object-oriented paradigm: Scala, Haskell, OCaml/F#, ... . Being functional allows a syntax that makes it easier to express logic concepts, such as recursive definitions of logic or types, ... . Something very interesting in AI.
In the category of the object-oriented paradigm, valid for all applications, we have Python (easy to learn, fast prototyping, slow, ...), C/C++ (very optimized code), Java, ... . More or less, all of them are also adopting functional features in their latest standards.
In AI there are also a lot of very interesting language features to be considered: rule-based systems, ... . Libraries for them can be found in all the main languages.
Finally, some words about AGI (strong AI): you do not need a computer. At our best moments, we are at the stage of pencil and paper; the rest of the time, we are looking at the ceiling.
"domain": "ai.stackexchange",
"id": 461,
"tags": "programming-languages"
} |
What is the equation to update the weights in the perceptron algorithm? | Question: I'm trying to understand the solution to question 4 of this midterm paper.
The question and solution is as follows:
I thought that the process for updating weights was:
error = target - guess
new_weight = current_weight + (error)(input)
I do not understand for example, for number 2 below, how that sum is determined. For example, I want to understand whether to update the weight or not. The calculation is:
x1(w1) + x2(w2)
(10)(1) + (10)(1) = 20
20 > 0, therefore update.
But the equation to obtain the same answer in the solution is:
1(10 + 10) = 20
20 > 0, therefore update.
I understand that these two equations are essentially the same, but written differently. But for example, in step 5, what do the elements in g5 mean? What do the -8, -16 and -2 represent?
p.s. I know in a previous (now deleted) post of mine, I asked a question related to the use of LaTeX instead for maths equations. If someone can show me a simple way to convert these equations online, I'm more than happy to use it. However, I'm unfamiliar with this software, so I need some sort of converter.
Answer: I will tell you what I know; correct me if I am wrong.
Perceptron Learning Algorithm (PLA) is a simple method to solve the binary classification problem.
Define a function:
$$
f_w(x) = w^Tx + b
$$
where $x \in \mathbb{R}^n$ is an input vector that contains the data points and $w$ is a vector with the same dimension as $x$ which represents the parameters of our model.
Call $y=label(x)\in\{1,-1\}$ where $1$ and $-1$ are the labels of each vector $x$.
The PLA will predict a class like this:
$$
y=label(x)=sgn(f_w(x))=sgn(w^Tx+b)
$$
(The definition of the sgn function can be found in this wiki.)
We can understand that PLA tries to define a line (a line in 2D, a plane in 3D, a hyperplane in more than 3 dimensions; I will assume 2D from now on) which separates our data into two areas. So how can we find that line? Just like in every other machine learning problem: define a cost function, then optimize the parameters to get the smallest cost value.
Now, let us define the cost function first. You can see that if a data point lies in the correct area, $y$ and $f(x)$ have the same sign, which means $y(w^Tx+b) > 0$, and vice versa. Similar to your example, I will define:
$$
g(x)=y(w^Tx+b)
$$
We ignore all the points in the safe zone ($g(x)>0$), and only update to rotate or move the line to adapt to the misclassified points ($g(x)\le 0$). Here you can understand why we only update if $g(x)\le0$.
We need to define a cost function to minimize it, so our cost function will become:
$$
L(w)=\displaystyle\sum_{x_i\in U}(-y_i(w^Tx_i+b))
$$
where
$U$ is the set of the misclassified points
$y_i$ is the label of data point $i$-th
$x_i$ is the $i$-th data vector
$w$ and $b$ is parameters of our model
For each data point, the derivatives are
$$
\frac{\partial L}{\partial w} = -y_ix_i \\
\frac{\partial L}{\partial b} = -y_i
$$
Finally, updating them by stochastic gradient descent (SGD), we get:
$$
w = w - \frac{\partial L}{\partial w} = w + y_ix_i \\
b = b - \frac{\partial L}{\partial b} = b + y_i
$$
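Putting the update rule above into code, a minimal PLA sketch (my own illustration, with made-up toy data) looks like this:

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Train PLA: on each misclassified point, apply w += y*x, b += y."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:  # misclassified, i.e. g(x) <= 0
                w += yi * xi
                b += yi
                updated = True
        if not updated:                 # no misclassified points left
            break
    return w, b

# Toy linearly separable data
X = np.array([[2.0, 2.0], [1.0, 3.0], [-2.0, -1.0], [-1.0, -3.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron(X, y)
print(all(np.sign(X @ w + b) == y))  # True: all points classified correctly
```

Note how the inner `if` is exactly the "only update when $g(x)\le 0$" rule from the derivation.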
For your last question, notice that the weights and bias changed at the $4$-th update, so we have:
$$
y_5 = 1, x_5 = (4,8), w = (-2,-2), b = -2 \\
\Rightarrow g_5 = +1 \times (4\times(-2) + 8\times(-2)+ (-2)) = -8 -16 -2
$$ | {
"domain": "ai.stackexchange",
"id": 2516,
"tags": "machine-learning, perceptron"
} |
Extinction of non-mammal Synapsida | Question: I am following M.J. Benton's Vertebrate Palaeontology, which explains how many groups of Synapsida were already extinct in the Mesozoic Era, with one obvious exception being the group of Cynodontia, which includes modern day Mammalia.
Nevertheless, I cannot find when the last non-mammalian synapsids went extinct, and I would be very grateful to anybody giving more information about the topic.
Answer: OK, I don't know how I did it but I misread your question as being why all the non-mammalian Synapsids went extinct. I answered your question at the end, but I'm leaving the rest as-is until I figure out how to rewrite it (it was rather too much work to delete).
According to the Wikipedia page for Synapsids, the answer really depends on the synapsid, since there is quite a gap between the start of the synapsid clade and the appearance of cynodonts.
The first big "culling" of Synapsids (i.e. the extinction of all groups except one clade that led to mammals - Therapsids in this case) seems to have occurred over the Permian era. Some groups went extinct in the early Permian but most groups went extinct at various points during the Middle Permian. No clear reason is given for these various extinctions (though the page for Dinocephalia suggests "disease, sudden climatic change, or other factors of environmental stress may have brought about their end"... which basically covers all possible extinction reasons so that's helpful); it's probably various reasons specific to each. Most species do go extinct after all.
Among Therapsids, the only clade that survived the Middle Permian, most seem to have been wiped out in the Permian-Triassic extinction, the largest extinction event in Earth's history, of which only three Therapsid clades survived: Therocephalians, which went extinct soon after, Dicynodonts, which were extinct by the Early Cretaceous, and Cynodonts. Aside from the usual environmental factors, a reason for the extinction of those two clades could have been competition from the then-ascendant Dinosaurs, and Cynodonts themselves.
As for when the last non-mammalian synapsid went extinct, it appears to have been a member of the Tritylodonts, which survived into the Cretaceous (and arguably had a representative after the K/Pg extinction, emphasis on "arguably"). Mind you, these are very close cousins to mammals indeed:
Tritylodontids ("three knob teeth", named after the shape of animal's teeth) were small to medium-sized, highly specialized and extremely mammal-like cynodonts, bearing several mammalian hallmarks like erect limbs, endothermy and lactation. They were the last known family of the non-mammalian synapsids. | {
"domain": "biology.stackexchange",
"id": 6795,
"tags": "zoology, palaeontology, reptile, extinction"
} |
Understanding Work and the conservation of energy | Question: We have a car with a mass of $780\,\mathrm{kg}$ which travels at a speed of $50\,\mathrm{km/h}$. The car brakes and after $4.2\,\mathrm{m}$ it stops completely. Heat is created. Calculate the friction force.
I solved this easily, by simply filling in the data like a headless chicken (the solution I'm showing is from the correction model, I got the same answers but I did it without writing anything down, so):
$E_{total 1} = E_{total 2} $
$0.5mv_1^2 = 0.5mv_2^2 + Q = 0+ F_w . s $
$0.5 . 780 . (50/3.6)^2 = F_w . 4.2 $
$F_w = 1,8.10^4 N$
I didn't have any trouble with this, I got the same answer, but then I started thinking about it, and I am increasingly finding the solution illogical. The LHS is completely logical to me, but not the right-hand side. Normally, the formula for work is the resultant force times the distance $d$. But after you've travelled the 4.2 meters, your $F_w$ is $0$. So how can you say that $F_w \times d$ = LHS, when by the time the total $d$ (4.2) is reached, the resistance has already turned into 0? What is the logic behind this? I know this is high school level so it is simplified, but even then, knowing it's simplified a lot, I don't understand the logic. Can someone explain?
Answer: Centered dots for multiplication in LaTeX are put in with \cdot
The force of kinetic friction does not gradually draw down to zero. It has a specific value for any nonzero velocity, and only at zero velocity does the friction force exhibit discontinuity as we abruptly enter the regime of static friction. Discontinuities in general are hard to fathom, but this is the standard way of teaching friction.
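Because the kinetic friction force is constant for the whole braking distance, the work balance from the question can be checked in a couple of lines (a quick sketch using the question's numbers):

```python
# Energy balance: 0.5*m*v1^2 = F_w * s  =>  F_w = 0.5*m*v1^2 / s
m = 780.0        # mass, kg
v1 = 50 / 3.6    # 50 km/h converted to m/s
s = 4.2          # braking distance, m

F_w = 0.5 * m * v1**2 / s
print(round(F_w))  # 17912, i.e. about 1.8e4 N as in the question
```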
Think of it this way: the force on the car while it is moving is constant, so the work is easy to calculate. Once the car stops moving, the force may be different...but the car no longer moves, and so there is no work to account for. | {
"domain": "physics.stackexchange",
"id": 5233,
"tags": "homework-and-exercises, energy-conservation, friction, work"
} |
General form of a fermionic Hamiltonian in second quantization | Question: I am new to quantum chemistry. I've been reading this paper, and it seems that equation (1) in the paper, which is
$H = \sum_{pq} T_{pq} a_p^\dagger a_q + \sum_p U_pn_p + \sum_{pq} V_{pq} n_p n_q$ (where $a_p^\dagger$ and $a_p$ are fermionic creation and annihilation operators, and $n_i = a_i^\dagger a_i$ is the number operator),
represents a general form of any fermionic Hamiltonian in second quantization.
However, I thought the second quantization of molecular Hamiltonian is expressed in
$H = \sum_{pq}a_p^\dagger a_q + \frac{1}{2}\sum_{pqrs} h_{pqrs}a_p^\dagger a_q^\dagger a_r a_s $.
How does the first Hamiltonian express the second Hamiltonian? I get that the first term in the first Hamiltonian is equal to the first term in the second Hamiltonian. Also, I get that setting $U_p = 0$. However, I'm not sure how to connect the last term in the first Hamiltonian to the last term in the second Hamiltonian. Why does the second term involve $4$ variables whereas the first one (which claims to be the general form) only has $2$ variables?
Answer: @Matteo is right: the two-body term of the second Hamiltonian is more general than the first one. Consider the two-body term
$$h_{pqrs}a_p^+a_r^+a_qa_s
=h_{pqrs}a_p^+\big(\{a_r^+,a_q\}-a_qa_r^+\big)a_s$$
The summations over $p,q,r,s$ are implicit.
Using the anti-commutation relation $\{a_r^+,a_q\}=\delta_{r,q}$, we get
$$\eqalign{
h_{pqrs}a_p^+a_r^+a_qa_s
&=h_{pqrs}a_p^+\big(\delta_{r,q}-a_qa_r^+\big)a_s\cr
&=h_{pqqs}a_p^+a_s-h_{pqrs}a_p^+a_qa_r^+a_s
}$$
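This operator identity can be sanity-checked numerically by representing the $a_p$ as matrices via a Jordan-Wigner construction (a sketch of my own, not from the paper; the mode indices are arbitrary):

```python
import numpy as np
from functools import reduce

# Jordan-Wigner matrices: a_j is a tensor product of j Z factors, one
# lowering operator, and identities on the remaining modes.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # annihilates the occupied state

def a(j, n):
    """Annihilation operator for mode j out of n, as a 2^n x 2^n matrix."""
    return reduce(np.kron, [Z] * j + [sm] + [I2] * (n - j - 1))

n = 3
p, q, r, s = 0, 1, 1, 2   # arbitrary mode indices (here q = r, so delta = 1)
ap, aq, ar, as_ = (a(j, n) for j in (p, q, r, s))

# Check: a_p^+ a_r^+ a_q a_s = delta_{rq} a_p^+ a_s - a_p^+ a_q a_r^+ a_s
# (matrices are real here, so .T is the adjoint)
lhs = ap.T @ ar.T @ aq @ as_
rhs = (r == q) * ap.T @ as_ - ap.T @ aq @ ar.T @ as_
print(np.allclose(lhs, rhs))  # True
```

Any matrix representation satisfying the canonical anticommutation relations would do; Jordan-Wigner is simply the easiest one to write down.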
Now consider the special case
$${1\over 2}h_{pqrs}=-V_{pr}\delta_{p,q}\delta_{r,s}$$
The two-body Hamiltonian becomes
$${1\over 2}h_{pqrs}a_p^+a_r^+a_qa_s
=-V_{pr}a_p^+a_r+V_{pr}a_p^+a_pa_r^+a_r$$
Plugging $n_p=a_p^+a_p$, we get finally
$${1\over 2}h_{pqrs}a_p^+a_r^+a_qa_s
=-V_{pr}a_p^+a_r+V_{pr}n_pn_r$$
Conclusion: the two-body term of your first Hamiltonian is a special case of the two-body term of your second Hamiltonian.
The most general Hamiltonian with two-body interaction reads
$$H=-t_{pq}a_p^+a_q+{1\over 2} h_{pqrs}a_p^+a_r^+a_qa_s$$
Your second Hamiltonian corresponds to $t_{pq}=-\delta_{p,q}$ and your first Hamiltonian to ${1\over 2}h_{pqrs}=-V_{pr}\delta_{p,q}\delta_{r,s}$ and $T_{pq}+U\delta_{p,q}=-t_{pq}-V_{pq}$. | {
"domain": "physics.stackexchange",
"id": 92347,
"tags": "hamiltonian, fermions, quantum-chemistry"
} |
Planetary tails: How many have we observed so far? | Question: Just recently I learned that Mercury has a sodium tail, and it has actually been well studied for a long time:
As a result, the planet has the appearance of a comet, with a tail that's been observed streaming nearly 3.5 million kilometres away from the planet.
This made me wonder about exoplanets with tails, like WASP-69b. How many (exo)planets are known to have a tail?
References
Why would a planet form a cometlike tail under conditions of weaker rather than stronger solar wind?
Answer: Yes, apart from WASP-69b, there have been reports of other exoplanets with some form of planetary tail. Let's start with Wikipedia:
KIC 12557548 b is a small rocky planet, very close to its star, that is evaporating and leaving a trailing tail of cloud and dust like a comet. The dust could be ash erupting from volcanos and escaping due to the small planet's low surface-gravity, or it could be from metals that are vaporized by the high temperatures of being so close to the star with the metal vapor then condensing into dust.
In June 2015, scientists reported that the atmosphere of GJ 436 b was
evaporating, resulting in a giant cloud around the planet and, due to
radiation from the host star, a long trailing tail 14 million km (9
million mi) long.
More on the dust particles that are trailing from the exoplanets KIC 12557548 b as well as KOI-2700b:
The observed tail lengths are consistent with dust grains composed of
corundum (Al2O3) or iron-rich silicate minerals (e.g., fayalite,
Fe2SiO4). Pure iron and carbonaceous compositions are not favored. In
addition, we estimate dust mass loss rates of 1.7 ± 0.5 M⊕ Gyr-1 for
KIC 12557548b, and > 0.007 M⊕ Gyr-1 (1σ lower limit) for KOI-2700b.
Source: Dusty tails of evaporating exoplanets, I. Constraints on the dust composition, R. van Lieshout1, M. Min and C. Dominik, A&A, Volume 572, December 2014, DOI: 10.1051/0004-6361/201424876
Astronomers using NASA's Hubble Space Telescope have confirmed the
existence of a baked object that could be called a "cometary planet."
The gas giant planet, named HD 209458b, is orbiting so close to
its star that its heated atmosphere is escaping into space.
Source: https://www.nasa.gov/mission_pages/hubble/science/planet-tail.html
Astronomers have confirmed that the planet Gliese 436b seems to be
trailing a gigantic, comet-like cloud of hydrogen.
Source: https://skyandtelescope.org/astronomy-news/exoplanet-with-a-comet-tail-06302015234/
Disintegrating planets allow for the unique opportunity to study the
composition of the interiors of small, hot, rocky exoplanets because
the interior is evaporating and that material is condensing into dust,
which is being blown away and then transiting the star. Their transit
signal is dominated by dusty effluents forming a comet-like tail
trailing the host planet K2-22b, making these good candidates for
transmission spectroscopy.
Source: Eva H. L. Bodman et al 2018 AJ 156 173, DOI: 10.3847/1538-3881/aadc60 | {
"domain": "astronomy.stackexchange",
"id": 5530,
"tags": "exoplanet, mercury"
} |
Guess number game with mines | Question: I've made my first Java program, a Guess the Number game, and got some great code review, so I've decided to first implement those suggestions and to make the game a bit more complex/advanced.
So now, the user can select between 3 options, 3 different intervals (1-10, 1-20, 1-100), but also, in each interval there are a few mines he must avoid.
Eclipse suggested changing private int max; to private static int max;, and some other private ints as well; I wasn't sure why.
Let me know if something is not clear in the explanation of the game.
Here is the code:
Main.java
public class Main {
public static void main(String[] args) {
Game game = new Game();
game.options();
game.playGame();
}
}
Game.java
import java.util.Random;
import java.util.Scanner;
public class Game {
private static int min=1;
private static int max;
private static int numberOfMines;
private int randomNumber;
int mine[] = new int[20];
private int numGuesses;
private int userGuess;
public static int promptForInteger() {
Scanner input = new Scanner(System.in);
while(!input.hasNextInt()) {
System.out.println("Please enter valid number");
input.next();
}
return input.nextInt();
}
public void options(){
int option = 0;
do{
System.out.println("Please select your game: ");
System.out.println("1) Guess number between 1 and 10 with 1 mine.");
System.out.println("2) Guess number between 1 and 20 with 3 mines.");
System.out.println("3) Guess number between 1 and 100 with 20 mines.");
option = promptForInteger();
}while (option == 0 || option > 3);
switch (option) { // set max depending on user selection
case 1: max = 10;
numberOfMines = 0; // one mine
break;
case 2: max = 20;
numberOfMines = 2; // 3 mines
break;
case 3: max = 100;
numberOfMines = 19; // 20 mines
break;
}
createNumberAndMines();
}
public void createNumberAndMines(){
randomNumber = getRandom(); // get random number to guess
askForGuess(); // first user guess can't be mine, so I want to ask for first guess then set mines
for (int i = numberOfMines; i >= 0; i--){
do{ // we don't want to have same number for guessing and mine or first user guess to be a mine
mine[i] = getRandom();
for (int j = i+1; j<19; j++){ //check if generated number is duplicate mine
if (mine[i] == mine[j]){
i++; // if we find that this number is duplicated to previous one we want to do for loop again for this mine[i]
}
}
}while (mine[i] == randomNumber || mine[i] == userGuess);
}
}
public static int getRandom() {
Random rand = new Random();
return rand.nextInt((max - min) + 1) + min;
}
public void playGame() {
for (int numGuesses = 1; true; numGuesses++){
for (int i=numberOfMines; i>=0; i--){// check if user hits mine
if (mine[i] == userGuess){
System.out.println("You lose! You have hit the MINE");
return;
}
}
// tell user how close he is to right answer
if (userGuess < randomNumber){
System.out.println("Too low!");
}else if(userGuess > randomNumber){
System.out.println("Too high!");
}else if(userGuess == randomNumber){
System.out.println("You win! You tried " + numGuesses + " time(s) total");
break;
}
askForGuess();
}
}
public void askForGuess(){
System.out.println("You are guessing random number between 1 and " + this.max + ". Enter your guess: ");
Scanner inputGuess = new Scanner(System.in);
userGuess = Integer.parseInt(inputGuess.nextLine());
}
}
Answer: Potential bugs
When asking the user to select their option for the game, negative values aren't checked:
do{
    System.out.println("Please select your game: ");
    System.out.println("1) Guess number between 1 and 10 with 1 mine.");
    System.out.println("2) Guess number between 1 and 20 with 3 mines.");
    System.out.println("3) Guess number between 1 and 100 with 20 mines.");
    option = promptForInteger();
}while (option == 0 || option > 3);
If the user were to enter -7 for example, it would not match option == 0 || option > 3 so the while loop will exit. This will make the rest of the code crash because numberOfMines and max were not initialized.
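A minimal sketch of a stricter check (the class and helper names here are made up for illustration; the game would keep its do-while loop and only change the condition):

```java
public class OptionValidation {
    // Hypothetical extraction of the menu check: only options 1, 2 and 3 are
    // valid, so 0, negative values and anything above 3 are all rejected.
    static boolean isValidOption(int option) {
        return option >= 1 && option <= 3;
    }

    public static void main(String[] args) {
        // -7 slipped through the original `option == 0 || option > 3` test:
        System.out.println(isValidOption(-7)); // false
        System.out.println(isValidOption(2));  // true
        System.out.println(isValidOption(4));  // false
    }
}
```

In the game itself the loop condition would simply become `while (option < 1 || option > 3)`.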
Also, in the askForGuess() method, whose purpose is to ask the user for a guess, the code isn't checking that what the user entered is a valid integer. You already have a promptForInteger() method exactly for that! You can re-use it:
public void askForGuess(){
    System.out.println("You are guessing random number between 1 and " + this.max + ". Enter your guess: ");
    userGuess = promptForInteger();
}
Reuse Random objects
To generate a random number, your current method is:
public static int getRandom() {
    Random rand = new Random();
    return rand.nextInt((max - min) + 1) + min;
}
The issue with this is that, every time the method is called, a new Random object is created. This produces mediocre-quality random numbers. A Random object should be created just once and re-used. For example, you could make it a constant of the Game class.
Another possibility is to use the ThreadLocalRandom class, which makes it really easy to generate a random integer in a range:
public static int getRandom() {
    return ThreadLocalRandom.current().nextInt(min, max + 1);
}
(We need to have max + 1 since the upper bound is exclusive).
Use static only for global fields
A lot of the code is static:
private static int min=1;
private static int max;
private static int numberOfMines;
along with the methods getRandom() and promptForInteger(). Static fields are generally not a good idea. min, max and numberOfMines being static means that if we create two games, they will have the same value. This is probably not what you want: each game should have its own number of mines, etc.
As such, consider making them instance fields. You will also need to change getRandom() and make it an instance method since it operates on those instance fields.
promptForInteger() can also be made an instance method.
Unused code
private int numGuesses;
is unused in your code. Either remove it or use it; unused code shouldn't be left lying around.
Hard-coded bounds, off-by-one checks
The maximum number of mines is hard-coded at multiple places of the code:
int mine[] = new int[20];
// ...
for (int j = i+1; j<19; j++)
This shows a small design issue: you are always creating a mine array of length 20 because that is the maximum the game can have. But when the game has fewer than 20 mines, the array is storing elements that will be of no use. Instead, this array should be instantiated with the correct number of elements, which is the number of mines in the game.
Additionally, numberOfMines, despite its name, does not represent the number of mines:
switch (option) { // set max depending on user selection
    case 1: max = 10;
            numberOfMines = 0; // one mine
            break;
    case 2: max = 20;
            numberOfMines = 2; // 3 mines
            break;
    case 3: max = 100;
            numberOfMines = 19; // 20 mines
            break;
}
This is awkward, why would numberOfMines be equal to 2 when there are 3 mines? Consider changing that so that numberOfMines accurately represents the number of mines in the game, instead of the number minus 1.
Your game has 3 options, each of them with a specific minimum value, maximum value and number of mines. Those values are hard-coded in the options printed to the user and in the switch.
You could create an enum for those options, which would look like
enum Options {
    EASY(1, 10, 1),
    MEDIUM(1, 20, 3),
    HARD(1, 100, 20);

    private final int min, max, numberOfMines;

    Options(int min, int max, int numberOfMines) {
        this.min = min;
        this.max = max;
        this.numberOfMines = numberOfMines;
    }
}
This would centralize all the options in one place. You can then loop over the options to print them and select the option chosen by the user (for example taking its ordinal as a first step).
Using the Stream API
The mine array is populated with random integers from min to max, with the exception of the first guess and the number to guess. Instead of having a while loop, you can do it easily using the Stream API:
mine = ThreadLocalRandom.current()
        .ints(min, max + 1)
        .filter(i -> i != userGuess && i != randomNumber)
        .distinct()
        .limit(numberOfMines)
        .toArray();
This creates a stream of random integers between the bounds of the game, filters out the first guess and the number to find, only keeps numberOfMines of them and makes an array.
Checking if the user hit a mine is also easy to do:
if (Arrays.stream(mine).anyMatch(i -> i == userGuess)) {
    System.out.println("You lose! You have hit the MINE");
    return;
}
This checks if the array contains userGuess.
Lastly, in the following:
if (userGuess < randomNumber) {
    System.out.println("Too low!");
} else if (userGuess > randomNumber) {
    System.out.println("Too high!");
} else if (userGuess == randomNumber) {
    System.out.println("You win! You tried " + numGuesses + " time(s) total");
    break;
}
you don't need the else if (userGuess == randomNumber): if the number isn't strictly greater and strictly lower, then it must be equal. You can just have an else. | {
"domain": "codereview.stackexchange",
"id": 20495,
"tags": "java, game, number-guessing-game"
} |
Mixed symmetry of rank $3$ tensor | Question: I have a rank 3 tensor $T_{ijk}$ with the following properties:
$T_{ijk}=T_{jik}$
$T_{ijk}=-T_{kji}$
Is it true that there is only one tensor of rank 3 with those properties, namely $T_{ijk}=0$?
I'm starting from the following
$T_{ijk}=-T_{kij}=-T_{ikj}=T_{jki}=T_{kji}=-T_{ijk}$
$\Rightarrow 2T_{ijk}=0$
$\Rightarrow T_{ijk}=0$
Where am I making a mistake? Because the result is strange enough. Does this mean that from the point of view of Young diagrams, the hook is not interesting?
Answer: The question and accepted answer here miss the main point, which is that for tensors of mixed symmetry, the indices themselves do not obey that symmetry, which is somewhat counterintuitive. This is explained in the comments by Michael Seifert here.
Tensors of mixed symmetry are generated by applying Young symmetrizers. In your case, you want to generate a tensor with symmetry
1 2
3
which means that to an arbitrary tensor $T_{ijk}$, you apply the symmetrizers
$$[e-(13)][e+(12)]T$$
where $e$ is the identity permutation, so that you get the desired tensor
$$T_{ijk}+T_{jik}-T_{kji}-T_{jki}$$
which is non-trivial.
For your method, you correctly arrived at the conclusion that the only tensor with $T_{ijk}=T_{jik}$ and $T_{ijk}=-T_{kji}$ is the zero tensor. It is not the right way to generate tensors of mixed symmetry, for the reasons mentioned at the beginning of this answer. | {
"domain": "physics.stackexchange",
"id": 82980,
"tags": "tensor-calculus, representation-theory"
} |
Given the "programs as proofs" isomorphism, how do we know that the program isn't lying? | Question: I've been studying constructive type theory (CTT) and one of the things that I'm not clear on is the proof part: Proving the correctness of a program in a form of a proof that's nothing but the program itself (Curry-Howard Correspondence)
Most examples that I've seen in books (e.g., Type Theory and Functional Programming - Thomson and TaPL) show the "proofs" on $\lambda$-abstractions and applications on terms (i.e., literally $a, b, e...$). The proofs seem to mostly rely on the type signatures of the functions, under the assumption that the function does what it claims. Not much is discussed about the "how" of the function's correctness.
For example, when writing a real program (e.g., in Haskell or another [pure] functional language), the function under consideration could do any arbitrary computation and return a term of the correct type for the proof to go through (statically speaking). So how do we know that the program is computationally doing the "right" thing (dynamically speaking) and not just faking it to get past a proof system?
From what I understand, here's how things should go (and probably are, but I'm not sure if I'm right), crudely speaking:
Given a program's specification in something like predicate logic we "convert" it into an equivalent "typed representation"
Using backward inference we substitute the "functions" with their appropriate values, which themselves could be other functions (i.e., we replace the functions with their computation rules, but I'm thinking more on the lines of replacing the function with its body, from a programming point of view, for the sake of argument. Assuming that they're "returning" the correct type this seems like a believable substitution)
We continue doing #2 above till we hit primitive operations (again, crudely speaking) which we can trivially prove (or if not, maybe the proof is "simple" enough).
Once we've hit all the "axioms" (or trivial proofs) along all the branches of backward inferences, we stop.
QED
Two questions:
Is my understanding/intuition of "how" the proof of correctness in CTT works correct? It looks like it won't be possible for the program to "cheat" this, or can it?
And secondly, is this what proof assistants like Coq help you prove/analyze (at a high level)?
Answer:
Proving the correctness of a program in a form of a proof that's nothing but the program itself
This is not quite how the Curry-Howard-Correspondence works.
First one has to show that the language of choice actually corresponds to some consistent logic. Different languages correspond to different logics, and many languages correspond to inconsistent logics which are not good for proving things. Haskell doesn't correspond to a consistent logic, because in Haskell, we can write nonterminating programs which would correspond to infinite proofs. But infinite proofs are invalid, so not every Haskell program corresponds to a valid proof.
Now what does it mean that a language corresponds to a logic? It means that:
a type in the language corresponds to a formula in the logic
a program in the language corresponds to a description of a proof in the logic
a well-typed program in the language corresponds to a description of a valid proof in the logic.
a program of a specific type in the language corresponds to a proof of a specific formula in the logic
a value in the language correspond to truth in the logic
evaluation of programs to values corresponds to soundness of the proving rules
reification of values to programs corresponds to completeness of the proving rules
...
No need to understand all aspects of the correspondence at once, but it is good to keep in mind that the correspondence relates not just programs and proofs, but many (all?) aspects of the languages and the logic. For the question here, only the first four aspects are relevant.
A minor issue is that only well-typed programs correspond to valid proofs. That's why type signatures of functions are so relevant to the Curry-Howard-Correspondence.
The key issue is that a well-typed program doesn't correspond to a valid proof of its own correctness, but to a valid proof of whatever formula corresponds to the program's type.
For example, let us consider the following two programs (using Haskell syntax, but assuming a version of Haskell that corresponds to a consistent logic):
f :: a -> a -> a
f x y = x
g :: a -> a -> a
g x y = y
The functions f and g are well-typed programs of the same type, so they correspond to valid proofs of the same formula. The formula is "forall propositions p, p implies that p implies p".
But clearly, f and g are different programs. And of course, they correspond to different proofs of the same formula. The program f corresponds to the proof "We know that p implies p, so also p implies (whatever implies p)". And the program g corresponds to the proof "We know that p implies p, so also whatever implies (p implies p)."
Usually, with programming languages, we care about the difference between programs like f and g, and with logics, we don't care about the difference between the two proofs that correspond to f and g. But when thinking about the Curry-Howard-Correspondence, it is important not to forget that there can be multiple different valid proofs of the same formula.
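For readers more comfortable outside Haskell, the same pair can be sketched in Python with type hints (the hints only mimic the Haskell signature; Python does not enforce them):

```python
from typing import TypeVar

A = TypeVar("A")

def f(x: A, y: A) -> A:  # one "proof" of  a -> a -> a
    return x

def g(x: A, y: A) -> A:  # a different "proof" of the same formula
    return y

# Same type signature (same formula), observably different programs (different proofs):
print(f(1, 2), g(1, 2))  # 1 2
```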
So how do we know that the program is computationally doing the "right" thing (dynamically speaking) and not just faking it to get past a proofing system?
We don't. We only know that the program proves the formula we have encoded in the program's type. So how can we use the Curry-Howard-Correspondence to prove a program correct? We have to encode a statement about program correctness into the type of the program. This requires a very expressive type system, of course, which is exactly what the languages inside tools like Coq or Agda provide. | {
"domain": "cs.stackexchange",
"id": 4277,
"tags": "type-theory, correctness-proof, intuition, proof-assistants, curry-howard"
} |
Confusion with length contraction | Question: Why is it called length "contraction" when an approaching rod can appear elongated?
http://www.spacetimetravel.org/bewegung/bewegung3.html
Answer: What a moving object may look like depends on light travelling to the eye from different points on the object. To reach the eye simultaneously the light must have left different points on the moving object at different times. This can all get complicated quite rapidly!
Length contraction is a much simpler phenomenon (if you've grasped the basics of Special Relativity as a theory about the nature of space-time!). In a frame of reference in which a rod is moving we must make simultaneous measurements of the positions of the ends of the rod against our stationary ruler. The length we get for the rod is less than the rod's length in a frame of reference in which it is stationary (so in which our measurements of the positions of its ends need not be simultaneous). | {
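As a concrete sketch (with $c=1$ units, so $\beta=v/c$): the measured length is the rest length times $\sqrt{1-\beta^2}$, regardless of whether the rod approaches or recedes; the apparent elongation in the linked animations is purely a light-travel-time effect on what the eye sees.

```python
import math

def contracted_length(rest_length, beta):
    """Length measured (simultaneously at both ends) in the frame
    where the rod moves with speed beta = v/c."""
    return rest_length * math.sqrt(1.0 - beta**2)

print(contracted_length(1.0, 0.6))  # 0.8
```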
"domain": "physics.stackexchange",
"id": 39859,
"tags": "special-relativity"
} |
Filter Implementation | Question: I am rather new to the world of signal processing, and am struggling to understand a fundamental concept: How are filters actually implemented?
I have read a significant portion of this online book, and scrounged the internet, finding snippets of useful information here and there, but I cannot quite tie it all together.
In brief, I have a vibration profile (~200,000 samples) in the time domain, which I would like to analyze using a particular frequency weighting. This involves a four-step filter, using a (Butterworth) high-pass and low-pass, followed by an a-v transition and an upward step. The analog equations are shown below.
Yet I am still struggling on how to implement them. I have seen mentions of the bilinear transform, used in programs in MATLAB/Python, but such implementations seem to omit the 's' or 'p' variable from the filters shown, typically creating the numerator 'B' and the denominator 'A' from the coefficients of the analog filters.
Other sources suggest taking the Fourier of the filter equations and the signal, multiplying, then applying the inverse Fourier to bring it back into the time domain. Yet I cannot discern how to take the Fourier of an equation.
My same problem occurs with using convolution in the time domain, as I have not understood how to convolve an equation yet.
I am sure I have come near the answer, and I know this is a basic principle, but I cannot seem to comprehend it, and my Googling is off the mark. Any help on this would be appreciated. General advice is also welcome, as I have a lot to learn.
** If important, I am currently working in Python, using the scipy.signal package, with access to MATLAB for testing and evaluation purposes.
Currently I am looking at substituting j2πf for p, then simply running the data through the equations and taking the inverse Fourier to bring it back into the time domain.
Alternatively, my eyes have just noticed what was staring me in the face: the second equation given for each filter is in terms of 'f', which would suggest I am able to simply run the frequency-domain data through this equation before taking the inverse Fourier to recover the time-domain. If this is a correct realization, this question can be closed (or I can 'answer' my own, now self-evident, question).
Answer: There are a few different ways to proceed. They involve doing some algebra but it is more an issue of tedium. The notation of your analog filters is a little non standard, essentially your $p$ is $s$ in most books.
Traditionally, phase wasn't considered important in hearing so standards tended to be specified by magnitude, so there is some ambiguity with respect to the digital implementation's phase response. You have 2 basic simple choices, a digital IIR filter or FIR filter. FIR is likely to be easier but easy and better are typically not the same. FIR can be direct or FFT based. IIR has a number of choices such as biquads or direct. Floating point math has its own easy/better issues.
One path way is to focus on the terms inside the magnitude brackets, i.e. the complex analog transfer functions. The other path is to focus on the analog magnitude.
So, given the analog transfer functions, you can either use a table of Laplace transforms and deduce the impulse response of each component you listed, or use the bilinear transform to express each component that is in $s$ in terms of $z$.
If you derive the impulse responses, you can sample points of the impulse response and provided a generally exponential decay, use those samples as coefficients of a FIR filter of sufficient length. The FIR filters are then applied sequentially. One nice property of ideal LTI filters, the order that you apply the filters are ideally interchangeable.
If you choose the bilinear transform and do the algebra to derive the discrete time numerators and denominators, you need to be aware that the bilinear transform squeezes the response as it approaches the Nyquist frequency. One needs to plot the discrete time response to assess how the corresponding analog response is distorted. If some modest tweaking results in an acceptable response, then the numerators and denominators need to be factored so that they are compatible with the IIR implementation, such as a cascade of biquads.
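Since the asker mentioned scipy.signal, here is a minimal sketch of the bilinear-transform path. The sample rate and corner frequencies below are made up; substitute the values from the weighting standard. `scipy.signal.butter` designs the digital Butterworth sections (applying the bilinear transform with frequency pre-warping internally), and the stages are applied in cascade:

```python
import numpy as np
from scipy import signal

fs = 1000.0                       # assumed sample rate, Hz
rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)  # stand-in for the vibration record

# Hypothetical Butterworth band edges (Hz); replace with the standard's values.
b_hp, a_hp = signal.butter(2, 0.4, btype="highpass", fs=fs)
b_lp, a_lp = signal.butter(2, 100.0, btype="lowpass", fs=fs)

# Cascade: LTI stages can be applied sequentially in any order.
y = signal.lfilter(b_lp, a_lp, signal.lfilter(b_hp, a_hp, x))

# Plot this against the analog magnitude to assess warping near Nyquist:
w, h = signal.freqz(b_lp, a_lp, fs=fs)
```

For the remaining a-v transition and upward-step stages, `signal.bilinear` converts your own analog numerator/denominator coefficients to discrete time directly.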
If you go with the magnitudes, the task becomes a direct sampling from the analog frequency domain to the discrete time periodic frequency domain. One can choose a linear phase response, and appropriately impose symmetry and define the frequency domain FIR filter. This might not sound like an analog implementation.
The essential challenge is mapping a frequency response in $-\infty < \Omega < \infty$ to $-\pi \le \omega \le \pi$
There are other approaches like state variables, so this is not the entire story. | {
"domain": "dsp.stackexchange",
"id": 5464,
"tags": "algorithms, filtering"
} |
Algorithm for counting neighbors | Question: I need an algorithm to process protein chains. The volume size would probably not exceed 30x30x30 angstrom^3.
Say, I have a point cloud of 50,000 points in a 3D space.
I want to count the number of neighbors each point {(x1,y1,z1), (x2,y2,z2), (x3,y3,z3), ... ...,(xN,yN,zN)} has at radii {r1, r2, r3, ..., rM}.
Which algorithm should I use?
Additional information:
Do you need to do this count online/sequentially or all at once?
All once.
What restrictions might there be on the radii?
The radii will be 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0 angstrom.
Do you need to do this just once in this particular case or do you need a general algorithm?
I will use this to process protein sequences. Not DNA/RNA, though.
For a general algorithm, what's the likely range of the number of points?
The range would be 50,000 at max (although, the maximum length of available protein is 38,000 residues).
Are you expecting there to be relatively few or relatively many neighbors?
I am expecting relatively few neighbors, in the range of 0-50.
NOTE: I prefer a hash-based algorithm as it doesn't require additional data structures like a KD-tree or octree.
Answer: The usual method is dividing the space into cells, in your case 7Å cubes, assigning atoms to cells, and searching for neighbors only in 3x3x3=27 cells. This method is described in Wikipedia as cell lists, although that name is not as widely used as the method itself.
If you analyze experimental protein structures note that they may have alternative conformations (alternative locations of the same atom), and that the structure in the file represents an asymmetric unit. | {
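A minimal Python sketch of the cell-list idea (pure NumPy plus the stdlib, no tree structures): bin points into radius-sized cubes, then compare each point only against points in the 27 surrounding cells. For several radii you can run it once per radius, or bin once at the largest radius and filter by distance:

```python
import numpy as np
from collections import defaultdict

def neighbor_counts(points, radius):
    """For each point, count the other points within `radius` (cell-list method)."""
    cells = np.floor(points / radius).astype(int)
    grid = defaultdict(list)
    for idx, c in enumerate(map(tuple, cells)):
        grid[c].append(idx)

    counts = np.zeros(len(points), dtype=int)
    r2 = radius * radius
    shifts = [(dx, dy, dz) for dx in (-1, 0, 1)
                           for dy in (-1, 0, 1)
                           for dz in (-1, 0, 1)]
    for i, c in enumerate(map(tuple, cells)):
        for dx, dy, dz in shifts:  # scan the 3x3x3 block of cells around point i
            for j in grid.get((c[0] + dx, c[1] + dy, c[2] + dz), ()):
                if j != i:
                    d = points[i] - points[j]
                    if d @ d <= r2:
                        counts[i] += 1
    return counts

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
print(neighbor_counts(pts, 2.0))  # [1 1 0]
```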
"domain": "bioinformatics.stackexchange",
"id": 2525,
"tags": "proteins, protein-structure, algorithms, 3d-structure"
} |
What kind bottom feeder is this? | Question:
It’s about 2 1/2 inches long and has a little tail on the end. It also has short feelers that almost look like a mole's.
Answer: It's a kind of loach. You have to include the location in your query.
e.g. a black kuhli loach:
a weather loach:
http://badmanstropicalfish.com/profiles/profile103.html | {
"domain": "biology.stackexchange",
"id": 9809,
"tags": "species-identification, ichthyology"
} |
Solution of the coupled non-linear oscillators by using perturbation theory | Question: The integral shown here, $$\int_{-\infty}^{+\infty}x^r\exp[-x^2]\,\mathrm{H}_n^2[x]\,\mathrm{d}x,$$ appears when we try to calculate the spectrum of perturbed non-linear oscillators using perturbation theory in quantum mechanics. Is there any direct way to perform this definite integration? I want the solution of the integral for $r\geq 4$. I hope there exist some techniques which can be used to calculate it. Suggestions from this forum would be highly appreciated!
Answer: $\textbf{Note :}$ Below a generating function is derived for calculating the following matrix element (useful in perturbative computations) :
$$O_{m n}^{k}=\frac{2_{}^{k}}{\sqrt{\pi}}\int_{-\infty}^{+\infty}dx_{}^{} H_{m}^{}(x) x_{}^{k} H_{n}^{}(x) e_{}^{-x_{}^{2}}.$$
where $\{H_{r}^{}(x)\}$ are hermite polynamials and $k$ is a non-negative integer.
(i) Consider the generating function of the Hermite polynomials :
$$G[z,x]=\sum_{n=0}^{\infty}\frac{z_{}^{n}}{n!}H_{n}^{}(x)=e_{}^{2 x_{}^{} z_{}^{}-z_{}^{2}}.$$
(ii) Define a generating function $Z[z_{1}^{},z_{2}^{},z_{3}^{}]$ :
$$Z[z_{1}^{},z_{2}^{},z_{3}^{}]=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{+\infty}dx_{}^{}e_{}^{-x_{}^{2}}G[z_{1}^{},x]G[z_{2}^{},x]e_{}^{2x z_{3}^{}}=\frac{e_{}^{-[z_{1}^{2}+z_{2}^{2}]}}{\sqrt{\pi}}\int_{-\infty}^{+\infty}dx_{}^{}e_{}^{-x_{}^{2}+2x[z_{1}^{}+z_{2}^{}+z_{3}^{}]}$$
$$\Rightarrow Z[z_{1}^{},z_{2}^{},z_{3}^{}]=e_{}^{-[z_{1}^{2}+z_{2}^{2}]+[z_{1}^{}+z_{2}^{}+z_{3}^{}]_{}^{2}}.$$
(iii) Now notice :
$$O_{m n}^{k}=\left(\frac{\partial}{\partial z_{1}^{}}\right)_{}^{m}\left(\frac{\partial}{\partial z_{2}^{}}\right)_{}^{n}\left(\frac{\partial}{\partial z_{3}^{}}\right)_{}^{k}Z[z_{1}^{},z_{2}^{},z_{3}^{}]\Big|_{(z_{1}^{},z_{2}^{},z_{3}^{})=(0,0,0)}^{}.$$
(iv) Hence :
$$O_{m n}^{k}=\left(\frac{\partial}{\partial z_{1}^{}}\right)_{}^{m}\left(\frac{\partial}{\partial z_{2}^{}}\right)_{}^{n}\left(\frac{\partial}{\partial z_{3}^{}}\right)_{}^{k}e_{}^{-[z_{1}^{2}+z_{2}^{2}]+[z_{1}^{}+z_{2}^{}+z_{3}^{}]_{}^{2}}\Big|_{(z_{1}^{},z_{2}^{},z_{3}^{})=(0,0,0)}^{}$$
$$\therefore \int_{-\infty}^{+\infty}dx_{}^{} H_{m}^{}(x) x_{}^{k} H_{n}^{}(x) e_{}^{-x_{}^{2}}=\frac{\sqrt{\pi}}{2_{}^{k}}\left(\frac{\partial}{\partial z_{1}^{}}\right)_{}^{m}\left(\frac{\partial}{\partial z_{2}^{}}\right)_{}^{n}\left(\frac{\partial}{\partial z_{3}^{}}\right)_{}^{k}e_{}^{-[z_{1}^{2}+z_{2}^{2}]+[z_{1}^{}+z_{2}^{}+z_{3}^{}]_{}^{2}}\Big|_{(z_{1}^{},z_{2}^{},z_{3}^{})=(0,0,0)}^{}.$$ | {
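As a sanity check of the final identity, here is a small SymPy sketch comparing the direct integral with the generating-function derivative for one case, $m=n=2$ and $k=4$ (i.e. the asker's $r=4$):

```python
import sympy as sp

x, z1, z2, z3 = sp.symbols('x z1 z2 z3')
m, n, k = 2, 2, 4

# Direct evaluation of the integral on the left-hand side
direct = sp.integrate(
    sp.hermite(m, x) * x**k * sp.hermite(n, x) * sp.exp(-x**2),
    (x, -sp.oo, sp.oo))

# Right-hand side: derivatives of the generating function at the origin
Z = sp.exp(-(z1**2 + z2**2) + (z1 + z2 + z3)**2)
via_gf = (sp.sqrt(sp.pi) / 2**k
          * sp.diff(Z, z1, m, z2, n, z3, k).subs({z1: 0, z2: 0, z3: 0}))

print(direct, sp.simplify(via_gf))  # both equal 78*sqrt(pi)
```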
"domain": "physics.stackexchange",
"id": 55886,
"tags": "quantum-mechanics, perturbation-theory, non-linear-systems, coupled-oscillators"
} |
value of complex multiplier in DFT/FFT | Question: I am studying the Proakis book Digital Signal Processing Using MATLAB, 3rd ed.,
but I am a bit confused about the calculation of the value of the complex multiplier $W$.
A figure is attached; I am not able to understand how the values of $W$ in the red enclosure in this figure are found/calculated.
How/why do we know that $W_4^1$ ($W$ with base 4 raised to the power 1) is equal to $-j$?
Answer: Using Euler's formula you can see that
$$W_4=e^{-j2\pi/4}=e^{-j\pi/2}=\cos(\pi/2)-j\sin(\pi/2)=-j$$
With $j^2=-1$ it should be easy to verify that
$$(-j)^0=(-j)^4=1\quad\text{and}\quad(-j)^2=(-j)^6=-1$$
It's important to develop a geometric intuition which allows you to immediately see these equivalences. Multiplication with $j$ corresponds to a rotation by $\pi/2$ in the complex plane, and multiplication with $-j$ is a rotation by $-\pi/2$. So in general for $k\in\mathbb{Z}$ you have
$$(-j)^{4k}=1\\(-j)^{4k+1}=-j\\(-j)^{4k+2}=-1\\(-j)^{4k+3}=j$$ | {
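These equalities are quick to confirm numerically in Python:

```python
import cmath

W4 = cmath.exp(-2j * cmath.pi / 4)  # the N = 4 twiddle factor
print(W4)  # numerically -1j, up to floating-point rounding

# Powers cycle with period 4: 1, -j, -1, j, 1, ...
for p in range(4):
    print(p, W4**p)
```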
"domain": "dsp.stackexchange",
"id": 7009,
"tags": "fft"
} |
Writing a Discrete Fourier Transform program | Question: I would like to write a DFT program using FFT. This is actually used for very large matrix-vector multiplication (10^8 * 10^8), which is simplified to a vector-to-vector convolution, and further reduced to a Discrete Fourier Transform.
May I ask whether DFT is accurate? Because the matrix has all discrete binary elements, and the multiplication process would not tolerate any non-zero error probability in result. However from the things I currently learnt about DFT it seems to be an approximation algorithm?
Also, may I ask roughly how long would the code be? i.e. would this be something I could start from scratch and compose in C++ in perhaps one or two hundred lines? Cause actually this is for a paper...and all I need is that the complexity analysis is $\mathcal{O}(n \log n)$ and the coefficient in front of it doesn't really matter :) So the simplest implementation would be best. (Although I did see some packages like kissfft and FFTW, but they are very lengthy and probably an overkill for my purpose...)
Answer: When applied correctly, the DFT yields exactly the same result as convolution, it is no approximation (neglecting rounding errors due to limited precision of floating point calculations, of course).
To calculate the circular convolution of vectors $x$ and $y$ (both of length $N$):
$$
x \circledast y = \operatorname{IDFT}_N\left[\operatorname{DFT}_N[x]\cdot\operatorname{DFT}_N[y]\right]
$$
To calculate the linear convolution first append $N-1$ zeros to both vectors yielding the zero-padded vectors $\tilde x$ and $\tilde y$. Then:
$$
x * y = \operatorname{IDFT}_{2N-1}\left[\operatorname{DFT}_{2N-1}[\tilde x]\cdot\operatorname{DFT}_{2N-1}[\tilde y]\right]
$$
To implement the [I]DFT I highly recommend the FFTW library. I promise that it will take you less time to learn its usage than to write your own implementation. For the special case of $x$ and $y$ containing only $1$ and $0$ there might be a more efficient algorithm, though. You would have to look into the Cooley-Tukey algorithm that is used for the FFT and see if you can simplify it for your needs. Whether it gets simpler will also depend on how you define computational complexity - if you define it in terms of additions and multiplications a custom FFT implementation probably won't buy you much. But if you go down to the hardware implementation level you can save some multipliers in the first butterfly stage by replacing them with multiplexers. Anyway, it is well known that the FFT algorithm complexity is of order $N\log N$ as you say. I would like to think that you don't have to show this again.
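The two identities above are easy to verify exactly with NumPy. For 0/1 (or any integer) inputs the floating-point FFT round-trip lands close enough that rounding recovers the exact integer result:

```python
import numpy as np

x = np.array([1, 0, 1, 1])
y = np.array([1, 1, 0, 0])
N = len(x)

# Circular convolution: IDFT_N(DFT_N(x) * DFT_N(y))
circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real.round().astype(int)

# Linear convolution: zero-pad both vectors to length 2N-1 first
# (np.fft.fft(x, L) zero-pads x to length L before transforming)
L = 2 * N - 1
lin = np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(y, L)).real.round().astype(int)

print(circ)                                    # [2 1 1 2]
print(lin)                                     # [1 1 1 2 1 0 0]
print(np.array_equal(lin, np.convolve(x, y)))  # True
```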
If you'd like to play around with IFFT and FFT I recommend Octave (an open source Matlab clone). | {
"domain": "dsp.stackexchange",
"id": 2006,
"tags": "fft, dft, convolution"
} |
Why to implement Open list in Heuristic search with priority queue | Question: I don't understand why do we need to implement Open list using priority queue? There could be other data structure.
I understand the usage of priority queues from this example clearly. The newly discovered node would always be inserted at the head of the Open List. Then why do we need a priority queue?
This article has mentioned everything about open list.
Usually the Open Set contains the states. It's more convenient to think of the Open Set as a list. However, when we implement it in some programming language, we use a priority queue as the data structure.
By states, I mean: assume a Rubik's cube with 6 faces. When we rotate it, we have 3 possibilities: 90, 180 and 270 degree rotations.
Total states 6x3 = 18
Answer: Your question is not perfectly clear to me, so maybe I answer a different question from what you intended. The article you linked talks about A* as a search algorithm that uses a heuristic to speed up searching, so I'll talk about this.
In a naive search you explore a graph for example in breadth first order: You start at some node, then visit its neighbors, then the neighbors of the neighbors and so on until you reach your goal state (or there is nothing more to explore).
Now suppose you had some function (the "heuristic") that can tell you more or less accurately how close a particular node is to your goal node. Then you can speed up your search by visiting nodes that are closer to your goal first. Now it depends on the quality of your heuristic how you should do that.
If your heuristic is perfect, that is, it tells you the distance to the goal accurately for every node, then you can just look at the neighbors of your start node and proceed to the node with the smallest distance to the goal. From there again you look at the neighbors and proceed to the one with the smallest distance to the goal. You don't need to keep any memory of possible alternative routes, because your heuristic exactly tells you which node you have to go to.
If the heuristic is not perfect however, the picture changes. Say your heuristic is optimistic, that is, it always underestimates the distance to the goal. Then you can't assume that the neighbor with the smallest heuristic distance to the goal is always the correct node to which to proceed, because it might turn out that the true distance is much larger than what your heuristic told you. So you have to remember some alternative routes.
You can do that as follows: Maintain a set of nodes to which you could proceed (the neighbors of all nodes that you have already seen but not visited). This is your open set. In each step you continue searching from the node in the open set where the distance from the start plus the heuristic distance to the goal is smallest.
Now how do you find this node in the open set? You could store the open set as a list and look at all its nodes in each step to find the minimum, but that is inefficient. Instead you could maintain the open set as a heap, where finding the minimum is cheap.
So this is why you want to use a heap in a heuristic search. It speeds up finding the next node from which to proceed with your search. | {
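A minimal Python sketch of this, with the open set as a binary heap (`heapq`) ordered by g + h; the graph and heuristic below are made up for illustration:

```python
import heapq

def a_star_cost(start, goal, neighbors, h):
    """Return the cost of a cheapest path, always expanding the open-set
    node with the smallest f = g + h."""
    open_heap = [(h(start), 0, start)]  # entries are (f, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):  # found a cheaper route to nxt
                best_g[nxt] = g2
                heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt))
    return None  # goal unreachable

# Tiny weighted graph: 0 -> 1 (cost 1), 0 -> 2 (cost 4), 1 -> 2 (cost 1)
graph = {0: [(1, 1), (2, 4)], 1: [(2, 1)], 2: []}
print(a_star_cost(0, 2, lambda n: graph[n], lambda n: 0))  # 2
```

With h = 0 everywhere this reduces to Dijkstra's algorithm; an admissible (optimistic) heuristic only reorders the heap so the goal is reached after fewer expansions.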
"domain": "cs.stackexchange",
"id": 6292,
"tags": "algorithms, search-algorithms, heuristics"
} |
Why so discrepancy between ARIMA and LSTM in time series forecasting? | Question: I have this time series below, that I divided into train, val and test:
Basically, I trained an ARIMA and an LSTM on those data, and results are completely different, in terms of prediction:
ARIMA:
LSTM:
Now, maybe I am passing, in some way, the test set to LSTM in order to perform better? Or LSTM is simply (lot) better than ARIMA?
Below there is some code. Note that in order to do prediction in future days, I am adding the new and last predicted value to my series, before training and predicting:
ARIMA code:
# Create list of x train values
history = [x for x in x_train]
# establish list for predictions
model_predictions = []
# Count number of test data points
N_test_observations = len(x_test)
# loop through every data point
for time_point in list(x_test.index[-N_test_observations:]):
model = sm.tsa.arima.ARIMA(history, order=(3,1,3), seasonal_order=(0,0,0,7))
model_fit = model.fit()
output = model_fit.forecast()
yhat = output[0]
model_predictions.append(yhat)
true_test_value = x_test[time_point]
#history.append(true_test_value)
history.append(yhat)
MAE_error = mean_absolute_error(x_test, model_predictions)
print('Testing Mean Absolute Error is {}'.format(MAE_error))
Testing Mean Absolute Error is 86.71141520892097
LSTM code:
def sequential_window_dataset(series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=window_size, drop_remainder=True)
ds = ds.flat_map(lambda window: window.batch(window_size + 1))
ds = ds.map(lambda window: (window[:-1], window[1:]))
return ds.batch(1).prefetch(1)
# reset any stored data
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
# set window size and create input batch sequence
window_size = 30
train_set = sequential_window_dataset(normalized_x_train, window_size)
valid_set = sequential_window_dataset(normalized_x_valid, window_size)
# create model
model = keras.models.Sequential([
keras.layers.LSTM(100, return_sequences=True, stateful=True,
batch_input_shape=[1, None, 1]),
keras.layers.LSTM(100, return_sequences=True, stateful=True),
keras.layers.Dense(1),
])
# set optimizer
optimizer = keras.optimizers.Nadam(learning_rate=0.00033)
# compile model
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
# reset states
reset_states = ResetStatesCallback()
#set up save best only checkpoint
model_checkpoint = keras.callbacks.ModelCheckpoint(
"my_checkpoint", save_best_only=True)
early_stopping = keras.callbacks.EarlyStopping(patience=50)
# fit model
model.fit(train_set, epochs=500,
validation_data=valid_set,
callbacks=[early_stopping, model_checkpoint, reset_states])
# recall best model
model = keras.models.load_model("my_checkpoint")
# make predictions
rnn_forecast = model.predict(normalized_x_test[np.newaxis,:])
rnn_forecast = rnn_forecast.flatten()
# Example of how to inverse the scaling
rnn_unscaled_forecast = x_train_scaler.inverse_transform(rnn_forecast.reshape(-1,1)).flatten()
rnn_unscaled_forecast.shape
'LSTM': 9.964744041030935
Maybe there is something wrong with that window size of the LSTM? Or maybe something in how I do predictions for the LSTM (rnn_forecast = model.predict(normalized_x_test[np.newaxis,:]))?
Answer: ARIMA and LSTM are very different, and there are some tips that could improve results.
Have you tried relative values instead of raw values?
For instance:
#Raw values:
raw=[1200, 1300, 1250, 1370]
#Relative (or differential) values:
diff=[+100,-50,+120]
Sometimes, raw values like 1400 can affect the results of ARIMA and LSTM in different ways.
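The raw-vs-relative idea above can be sketched with numpy, reusing the hypothetical values from the example; differencing loses no information, since the raw series is recoverable from the first value plus the cumulative sum:

```python
import numpy as np

# The same hypothetical values as in the example above.
raw = np.array([1200, 1300, 1250, 1370])

# First difference: each entry is the change from the previous step.
diff = np.diff(raw)  # [100, -50, 120]

# Recover the raw series: first value plus the cumulative sum of the diffs.
restored = np.concatenate(([raw[0]], raw[0] + np.cumsum(diff)))
```

A model can therefore be trained on `diff` and its predictions mapped back to the raw scale afterwards.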
On the other hand, LSTM could have bad predictions with noisy data. Some smoothing could improve results, but it depends on the kind of data.
Finally, are you trying to forecast 30 days in a single shot? Most predictions focus on a 1-day forecast and measure their precision on the sequential one-step-ahead results across the 30 days of validation data.
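One-step-ahead evaluation can be sketched as a rolling loop that, unlike the recursive loop in the question, appends the true observed value (not the prediction) to the history after each step. `fit_and_forecast` is a placeholder for any model, e.g. a wrapper around the ARIMA fit/forecast from the question:

```python
# Rolling one-step-ahead evaluation: after each forecast, append the TRUE
# observed value to the history, so every forecast is only one day ahead.
def rolling_one_step(train, test, fit_and_forecast):
    history = list(train)
    predictions = []
    for true_value in test:
        predictions.append(fit_and_forecast(history))  # forecast next point only
        history.append(true_value)                     # feed back the observation
    return predictions
```

With a naive last-value model, `rolling_one_step([1, 2, 3], [4, 5, 6], lambda h: h[-1])` yields `[3, 4, 5]`. Note this corresponds to the commented-out `history.append(true_test_value)` line in the question's ARIMA loop.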
If your aim is accurate long-term forecasting, ARIMA and LSTM might not be the best solutions (especially ARIMA), because they have their own structural limitations. This could also explain why the LSTM results have a gap with the real results: some internal mechanisms have limited memory and wrongly predict important increases or decreases of values.
The shape of the LSTM result seems correct, but there is a small shift of 10 in Y because it initially predicted a smaller decrease. LSTM is quite difficult to interpret: all I can say is that the weights are connected to each other, and peaks are more difficult to predict because of those dependencies. I recommend reading the initial paper; it's very interesting:
https://www.researchgate.net/publication/13853244_Long_Short-term_Memory
My advice is to trade some accuracy for robustness by grouping values (e.g. predict weeks instead of days), or to use long-term models like these:
https://towardsdatascience.com/extreme-event-forecasting-with-lstm-autoencoders-297492485037
https://thuijskens.github.io/2016/08/03/time-series-forecasting/
https://arxiv.org/pdf/2210.08244.pdf | {
"domain": "datascience.stackexchange",
"id": 11246,
"tags": "deep-learning, time-series, lstm, forecasting, arima"
} |
Convolution of Haar filter | Question: In "Conceptual Wavelets" (2009) by D. Lee Fugal, on pg 47 he writes
Conv ([-1, 1], [1, 1]) = [-1, 2,-1]
When I do it I get
0  -1   1   0
1   1           > -1
    1   1       >  0
        1   1   >  1
So I get [-1, 0, 1]. What am I not understanding?
The definition Fugal is using is the Matlab function conv which I assume is just traditional convolution....
Answer: If the book is really using Matlab's conv then it must be a typo.
Bring up Matlab and try it. (Or if you don't have Matlab then go to octave-online.net and try it.)
The Matlab definition of conv over finite sequences also agrees with your answer. | {
"domain": "cs.stackexchange",
"id": 3251,
"tags": "signal-processing"
} |
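A quick numerical check supports the answer: numpy's `convolve` computes the same full linear convolution as Matlab's `conv` over finite sequences, and it also suggests what the typo likely was, since the book's stated result corresponds to convolving with [1, -1] instead of [1, 1]:

```python
import numpy as np

# Full linear convolution, same definition as Matlab's conv.
assert np.convolve([-1, 1], [1, 1]).tolist() == [-1, 0, 1]    # the hand computation
assert np.convolve([-1, 1], [1, -1]).tolist() == [-1, 2, -1]  # the book's stated result
```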