| anchor | positive | source |
|---|---|---|
Is it valid to use Spark's StandardScaler on sparse input? | Question: While I know it's possible to use StandardScaler on a SparseVector column, I wonder now if this is a valid transformation. My reason is that the output (most likely) will not be sparse. For example, if feature values are strictly positive, then all 0's in your input should transform to some negative value; thus you no longer have a sparse vector. So why is this allowed in Spark, and is it a bad idea to use StandardScaler when you want sparse features?
Answer: The default for the parameter withMean is False, so that data won't be centered, just scaled by the standard deviation(s). | {
"domain": "datascience.stackexchange",
"id": 11346,
"tags": "feature-scaling, pyspark, sparse"
} |
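The sparsity point in the question and answer above can be illustrated without Spark: dividing by the standard deviation alone (the `withMean=False` default) maps zeros to zeros, while centering turns them into nonzero values. A minimal pure-Python sketch of the arithmetic (not Spark's actual implementation):

```python
import statistics

def standard_scale(values, with_mean=False, with_std=True):
    """Scale a feature column the way Spark's StandardScaler would:
    optionally center by the mean, optionally divide by the sample stdev."""
    mean = statistics.mean(values) if with_mean else 0.0
    std = statistics.stdev(values) if with_std else 1.0
    return [(v - mean) / std for v in values]

feature = [0.0, 0.0, 0.0, 4.0, 8.0]  # mostly zeros, i.e. "sparse"

scaled = standard_scale(feature)                   # withMean=False (default)
centered = standard_scale(feature, with_mean=True)

print(scaled)    # zeros stay zero: sparsity preserved
print(centered)  # zeros become negative: sparsity lost
```

With `with_mean=True` (the analogue of centering), every zero becomes a negative number and the vector is dense, which is why leaving centering off by default makes the transformation safe for sparse input.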
Simple wildcard pattern matcher in Java (follow-up) | Question: This is a follow-up review of Simple wildcard pattern matcher in Java
I have built a simple wildcard pattern matcher algorithm.
I am concerned about where it can be improved further.
My main concern is the algorithm.
Any other improvements are welcome too.
What it should do:
Match * -> one or more of any character
Match ? -> any single character, similar to the file-search wildcard pattern (e.g., Windows file search)
Code:
public class SimpleMatch {
//state enums
private static enum State {
JUST_STARTED, NORMAL, EAGER, END
}
//constants
private static final char MATCH_ALL = '*';
private static final char MATCH_ONE = '?';
private final int ptnOutBound; // pattern out bound
private final int strOutBound; // string out bound
private final String pattern; // pattern
private final String matchString; // string to match
private int ptnPosition; // position of pattern
private int strPosition; // position of string
private State state = State.JUST_STARTED; // state
private boolean matchFound = false; // is match
public SimpleMatch(String pattern, String matchStr) {
if (pattern == null || matchStr == null) {
throw new IllegalArgumentException(
"Pattern and String must not be null");
}
this.pattern = pattern;
this.matchString = matchStr;
int pl = pattern.length();
int sl = matchStr.length();
if (pl == 0 || sl == 0) {
throw new IllegalArgumentException(
"Pattern and String must have at least one character");
}
ptnOutBound = pl - 1;
strOutBound = sl - 1;
ptnPosition = 0;
strPosition = 0;
}
private void calcState() {
//calculate state
if (state == State.END) {
return;
}
if (!patternCheckBound() || !matchStrCheckBound()) {
state = State.END;
} else if (patternChar() == MATCH_ALL) {
if (!patternNextCheckBound()) {
state = State.END;
matchFound = true;
} else {
state = State.EAGER;
}
} else {
state = State.NORMAL;
}
}
private void eat() {
//eat a character
if (state == State.END) {
return;
}
matchFound = false;
if (state == State.EAGER) {
int curStrPosition = strPosition;
int curPtnPosition = ptnPosition;
strPosition++;
ptnPosition++;
if (match()) {
state = State.END;
matchFound = true;
return;
} else {
strPosition = curStrPosition;
ptnPosition = curPtnPosition;
state = State.EAGER;
}
strPosition++;
} else if (state == State.NORMAL) {
if (matchOne()) {
strPosition++;
ptnPosition++;
matchFound = true;
} else {
state = State.END;
}
}
}
private boolean matchOne() {
// match one
char pc = patternChar();
return (pc == MATCH_ONE || pc == matchStrChar());
}
private char patternChar() {
// pattern current char
return pattern.charAt(ptnPosition);
}
private char matchStrChar() {
// str current char
return matchString.charAt(strPosition);
}
private boolean patternCheckBound() {
//pattern position bound check
return ptnPosition <= ptnOutBound;
}
private boolean patternNextCheckBound() {
//pattern next position bound check
return (ptnPosition + 1) <= ptnOutBound;
}
private boolean matchStrCheckBound() {
//string bound check
return strPosition <= strOutBound;
}
/**
* Match and return result
*
* @return true if match
*/
public boolean match() {
if (ptnOutBound > strOutBound) {
return false;
}
while (state != State.END) {
calcState();
eat();
}
return matchFound;
}
/**
* Match and return result
*
* @param pattern pattern
* @param matchStr string to match
* @return true if match
* @throws IllegalArgumentException
*/
public static boolean match(String pattern, String matchStr) throws
IllegalArgumentException {
return new SimpleMatch(pattern, matchStr).match();
}
}
Unit Test:
This code currently passes the following JUnit test cases
import org.junit.Assert;
import org.junit.Test;
/**
* Unit Test for SimpleMatch
*
* @author Bhathiya
*/
public class SimpleMatchTest {
@Test
public void test1() {
SimpleMatch m = new SimpleMatch("a*", "bb");
Assert.assertFalse(m.match());
}
@Test
public void test2() {
SimpleMatch m = new SimpleMatch("a*b", "anj");
Assert.assertFalse(m.match());
}
@Test
public void test3() {
SimpleMatch m = new SimpleMatch("a?b", "acd");
Assert.assertFalse(m.match());
}
@Test
public void test4() {
SimpleMatch m = new SimpleMatch("a*", "abcdefg");
Assert.assertTrue(m.match());
}
@Test
public void test5() {
SimpleMatch m = new SimpleMatch("a*ba", "abba");
Assert.assertTrue(m.match());
}
@Test
public void test6() {
SimpleMatch m = new SimpleMatch("bhathiya", "bhathiya");
Assert.assertTrue(m.match());
}
@Test
public void test7() {
SimpleMatch m = new SimpleMatch("a?", "a1");
Assert.assertTrue(m.match());
}
@Test
public void test8() {
SimpleMatch m = new SimpleMatch("bhathiya", "blah");
Assert.assertFalse(m.match());
}
@Test
public void test9() {
SimpleMatch m = new SimpleMatch("/img/abc.jpg", "/img/abc.jpg");
Assert.assertTrue(m.match());
}
@Test
public void test11() {
SimpleMatch m = new SimpleMatch("/x/*/z/abc.jpg", "/x/a/z/abc.jpg");
Assert.assertTrue(m.match());
}
@Test
public void test12() {
SimpleMatch m = new SimpleMatch("/x/*/z/abc.jpg", "/x/a/j/abc.jpg");
Assert.assertFalse(m.match());
}
@Test
public void test13() {
SimpleMatch m = new SimpleMatch("a", "a");
Assert.assertTrue(m.match());
}
@Test
public void test14() {
SimpleMatch m = new SimpleMatch("a", "b");
Assert.assertFalse(m.match());
}
@Test
public void test15() {
SimpleMatch m = new SimpleMatch("aa", "ab");
Assert.assertFalse(m.match());
}
@Test
public void test16() {
SimpleMatch m = new SimpleMatch("aaaa", "a");
Assert.assertFalse(m.match());
m = new SimpleMatch("aaaa", "aaa");
Assert.assertFalse(m.match());
m = new SimpleMatch("aaaa", "aa");
Assert.assertFalse(m.match());
}
@Test
public void test17() {
SimpleMatch m = new SimpleMatch("a*", "a");
Assert.assertFalse(m.match());
}
@Test
public void test18() {
SimpleMatch m = new SimpleMatch("a*xxxxxxxxxxb", "axxxxxxxxxxxb");
Assert.assertTrue(m.match());
}
@Test
public void test19() {
SimpleMatch m = new SimpleMatch("a*xxxxxxxxxxb", "axxxxxxxxxxxc");
Assert.assertFalse(m.match());
}
@Test
public void test20() {
SimpleMatch m = new SimpleMatch("a*b*c*d", "abbccdd");
Assert.assertTrue(m.match());
}
@Test
public void test21() {
SimpleMatch m = new SimpleMatch("a*xxxxxxx*xxb", "axxxxxxxxxhelloxxb");
Assert.assertTrue(m.match());
}
@Test
public void test22() {
SimpleMatch m = new SimpleMatch("a*xxxxxxxxxx*", "axxxxxxxxxxxhello");
Assert.assertTrue(m.match());
}
@Test
public void test23() {
SimpleMatch m = new SimpleMatch(
"start*in-part-1*in-part-2*in-part-3*end",
"start[because]in-part-1[I'm]in-part-2[Batman]in-part-3[!]end");
Assert.assertTrue(m.match());
}
}
Related Links:
travis-ci: https://travis-ci.org/JaDogg/SimpleMatch
repository: https://github.com/JaDogg/SimpleMatch
Answer: Using mutation with recursion
You've written a lot of code to solve a mildly difficult problem. I think that this excerpt from eat() is indicative of the complexity:
int curStrPosition = strPosition;
int curPtnPosition = ptnPosition;
strPosition++;
ptnPosition++;
if (match()) {
state = State.END;
matchFound = true;
return;
} else {
strPosition = curStrPosition;
ptnPosition = curPtnPosition;
state = State.EAGER;
}
strPosition++;
Your solution relies on mutual recursion: match() calls eat(), which calls match(). Recursion is a powerful technique because…
Local reasoning can be extended to solve a larger complex problem.
Backtracking state can be stored in the call stack.
However, as soon as you introduce mutation, those benefits disappear. The reason for the else clause of the excerpt above is to undo side-effects of match(), and the only way to know what those side-effects may be is to read the source code of match(). To understand any part of your solution, you must understand all of it. That's a huge mental burden for any code maintainer, and it also means that your code is fragile.
Interface
The SimpleMatch(String pattern, String matchStr) interface is cumbersome, in my opinion. A programmer using your library might get the order of the arguments confused, unless he/she has good documentation or a good IDE.
Instead, I suggest dividing the responsibilities:
new SimplePattern(pattern).match(matchStr);
This makes the parameters foolproof, and also simplifies the constructor.
All you need from the pattern and matchStr is .length() and .charAt(). Therefore you shouldn't require a String; any CharSequence will do.
Validation
Why shouldn't an empty pattern legally match an empty string?
Dubious optimization
The following optimization relies on the fact that every character in the pattern must consume at least one character of the string.
if (ptnOutBound > strOutBound) {
return false;
}
Even if it is accurate for your current behaviour of * in the pattern, it would fail if * meant "zero or more characters", as it customarily does with shell globs. Therefore, if you use that optimization, it's worth an explanatory comment.
Suggested implementation
Notice that the object barely carries state:
public class SimplePattern {
//constants
private static final char MATCH_ALL = '*';
private static final char MATCH_ONE = '?';
private final CharSequence pattern;
private final int ptnPosition;
public SimplePattern(CharSequence pattern) {
this(pattern, 0);
}
private SimplePattern(CharSequence pattern, int ptnPosition) {
if (pattern == null) {
throw new IllegalArgumentException("Pattern must not be null");
}
this.pattern = pattern;
this.ptnPosition = ptnPosition;
}
/**
* Match and return result
*
* @return true if match
*/
public boolean match(CharSequence string) {
return this.match(string, 0);
}
public boolean match(CharSequence string, int startPosition) {
if (ptnPosition == this.pattern.length()) {
return startPosition == string.length();
}
if (startPosition >= string.length()) {
return false;
}
SimplePattern nextPattern = new SimplePattern(pattern, ptnPosition + 1);
char patternChar = this.pattern.charAt(this.ptnPosition);
switch (patternChar) {
case MATCH_ONE:
return nextPattern.match(string, startPosition + 1);
case MATCH_ALL:
for (int i = startPosition + 1; i <= string.length(); i++) {
if (nextPattern.match(string, i)) {
return true;
}
}
return false;
default:
return string.charAt(startPosition) == patternChar &&
nextPattern.match(string, startPosition + 1);
}
}
/**
* Match and return result
*
* @param pattern pattern
* @param matchStr string to match
* @return true if match
* @throws IllegalArgumentException
*/
public static boolean match(CharSequence pattern, CharSequence matchStr) {
return new SimplePattern(pattern).match(matchStr);
}
}
Acceptable use of mutation with recursion
Recursion is discouraged in Java, due to performance concerns and stack size limitations. Therefore, you may want to reintroduce some mutation (looping) in the solution. Here is a replacement for the match(string, startPosition) method above that illustrates how you can do so responsibly:
public boolean match(CharSequence string, int startPosition) {
int ptnPos;
for (ptnPos = this.ptnPosition; ptnPos < this.pattern.length(); ptnPos++) {
if (startPosition >= string.length()) {
return ptnPos == this.pattern.length();
}
char patternChar = this.pattern.charAt(ptnPos);
switch (patternChar) {
case MATCH_ONE:
startPosition++;
break;
case MATCH_ALL:
SimplePattern nextPattern = new SimplePattern(pattern, ptnPos + 1);
for (int i = startPosition + 1; i <= string.length(); i++) {
if (nextPattern.match(string, i)) {
return true;
}
}
return false;
default:
if (string.charAt(startPosition++) != patternChar) {
return false;
}
}
}
assert ptnPos == this.pattern.length();
return startPosition == string.length();
}
Now, instead of making a recursive call for each character in the pattern, you only need to make a recursive call for each * in the pattern.
Notice that this version only mutates local variables, so that it has no side-effects outside the current stack frame. The use of final in the declarations of instance variables pattern and ptnPosition helps enforce that discipline. | {
"domain": "codereview.stackexchange",
"id": 8901,
"tags": "java, algorithm"
} |
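As a language-neutral illustration of the reviewer's recursive approach in the code review above, here is a hypothetical Python transliteration (same semantics as the post: `*` matches one or more characters, `?` exactly one):

```python
def simple_match(pattern: str, s: str, p: int = 0, i: int = 0) -> bool:
    """Recursive wildcard match; '*' consumes one or more characters,
    '?' consumes exactly one, as in the original Java post."""
    if p == len(pattern):
        return i == len(s)
    if i >= len(s):
        return False
    c = pattern[p]
    if c == '?':
        return simple_match(pattern, s, p + 1, i + 1)
    if c == '*':
        # try every non-empty amount of input the '*' could swallow
        return any(simple_match(pattern, s, p + 1, j)
                   for j in range(i + 1, len(s) + 1))
    return c == s[i] and simple_match(pattern, s, p + 1, i + 1)

print(simple_match("a*b*c*d", "abbccdd"))  # True
print(simple_match("a*", "a"))             # False: '*' needs >= 1 char
```

As in the Java version, each stack frame reasons only about one pattern position, and backtracking state lives entirely in the call stack, with no mutation to undo.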
What part of the milky way do we see from earth? | Question: When looking up at the night sky, we can sometimes observe a bright band of stars stretching across it. I know this is our galaxy but exactly what are we looking at?
Could we look at the centre of the galaxy or is there another spiral arm blocking our view?
How does our view change as earth rotates around the sun? (E.g. What part of the milky way do we see in a winter night sky)
Answer: Fraser Cain:
We’re seeing the galaxy edge on, from the inside, and so we see the
galactic disk as a band that forms a complete circle around the sky.
Which parts you can see depend on your location on Earth and the time
of year, but you can always see some part of the disk.
The galactic core of the Milky Way is located in the constellation
Sagittarius, which is located to the South of me in Canada, and only
really visible during the Summer. In really faint skies, the Milky Way
is clearly thicker and brighter in that region.
If you want to know more, watch this video from Fraser Cain which explains it in detail:
https://www.youtube.com/watch?v=pdFWbEwsOmA
You can find your answer in the first minute | {
"domain": "astronomy.stackexchange",
"id": 1289,
"tags": "galaxy, milky-way, coordinate"
} |
cannot specify link libraries for target | Question:
Hello everyone. I don't normally run to the forums for help, but I have been dealing with an issue for two days and I am getting really frustrated. I am trying to interface with a LabJack from within a ROS node. The developers have published a .c file full of functions for native TCP operation. I have compiled it into a shared object library but I am unable to link to it (upon linking, I get undefined reference errors on the lines where I call functions from the library).
I found a post on this site that says after adding the /lib directory (where my shared object library lives) to the link directories in CMakeLists, given my lib as libfoo, I add to the CMakeLists file
"target_link_libraries(myNode foo)"
When I do this, I get the following error
CMake Error at CMakeLists.txt:36 (target_link_libraries):
Cannot specify link libraries for target "myNode" which is not
built by this project.
I am completely stumped. I am pasting my CMakeLists.txt file just in case it helps. My node is for a diesel generator, called 'dieselGenerator', and my library (for the LabJack) is called libue9.so. It looks like comments get put in bold here; I don't mean to yell. I really appreciate your help.
Cheers
cmake_minimum_required (VERSION 2.4.6)
include ($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
# Set the build type. Options are:
# Coverage : w/ debug symbols, w/o optimization, w/ code-coverage
# Debug : w/ debug symbols, w/o optimization
# Release : w/o debug symbols, w/ optimization
# RelWithDebInfo : w/ debug symbols, w/ optimization
# MinSizeRel : w/o debug symbols, w/ optimization, stripped binaries
#set (ROS_BUILD_TYPE RelWithDebInfo)
# Initialize the ROS build system.
rosbuild_init ()
# Set the default path for built executables to the "bin" directory.
set (EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
#set the default path for built libraries to the "lib" directory
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)
# Set the name to use for the executable.
set (BINNAME dieselGenerator)
# Set the source files to use with the executable.
#set (SRCS ${SRCS} src/ue9.c)
set (SRCS ${SRCS} src/dieselGenerator.cpp)
# Have ROS autogenerate code used in messages.
rosbuild_genmsg ()
# Set the directories where include files can be found.
include_directories (${PROJECT_SOURCE_DIR}/include)
# Make sure the compiler can find the libraries.
link_directories (${PROJECT_SOURCE_DIR}/lib)
target_link_libraries (dieselGenerator ue9)
# Build the executable that will be used to run this node.
rosbuild_add_executable (${BINNAME} ${SRCS})
# List the libraries here.
set (LIBS ${LIBS} ue9)
Originally posted by Gideon on ROS Answers with karma: 239 on 2011-08-12
Post score: 3
Answer:
In principle your file should work. The only thing is that you need the target (rosbuild_add_executable) before the target_link_libraries.
Originally posted by dornhege with karma: 31395 on 2011-08-12
This answer was ACCEPTED on the original site
Post score: 5 | {
"domain": "robotics.stackexchange",
"id": 6405,
"tags": "ros, libraries, cmake, linking"
} |
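Applied to the CMakeLists.txt in the question above, the accepted fix amounts to moving the target_link_libraries call below the line that creates the target (names exactly as in the question; this is a sketch of the reordering, not a full verified build file):

```cmake
# Build the executable first: this is what creates the "dieselGenerator" target.
rosbuild_add_executable (${BINNAME} ${SRCS})

# Only now does the target exist, so linking against libue9.so is legal.
target_link_libraries (${BINNAME} ue9)
```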
Definition of complex permittivity | Question: I'm not sure if this is the appropriate forum for my question as I actually am studying this as part of electrical engineering and I don't actually study physics. Nonetheless, I shall ask and if need be, move my question to another venue.
My question is with regard to how complex permittivity is defined. According to my book
$$
\begin{align*}
\nabla \times \mathbf{\tilde{H}} &= \sigma \mathbf{\tilde{E}} + \jmath\omega\varepsilon \mathbf{\tilde{E}} \\
&= (\sigma + \jmath\omega\varepsilon)\mathbf{\tilde{E}} \\
&= \jmath\omega\underbrace{\left(\varepsilon - \jmath\frac{\sigma}{\omega}\right)}_{\varepsilon_c}\mathbf{\tilde{E}} \\
&= \jmath\omega\varepsilon_c\mathbf{\tilde{E}}
\end{align*}
$$
($\mathbf{\tilde{E}}$ and $\mathbf{\tilde{H}}$ are phasors.)
I really do not understand why $\varepsilon_c \equiv \varepsilon - \jmath\frac{\sigma}{\omega}$ and not $\varepsilon_c \equiv \sigma + \jmath\omega\varepsilon$. What is the sense in creating a complex value, $\varepsilon_c$, and then multiplying by $\jmath\omega$ when you could just modify the definition of $\varepsilon_c$ such that $\nabla \times \mathbf{\tilde{H}} = \varepsilon_c \mathbf{\tilde{E}}$?
I also have a conceptual question: From what I understand, $\epsilon$ determines the phase delay between the H and E fields. This phase delay, as far as I know, comes from the finite speed involved in 'rotating' the dipoles in the medium. When these dipoles are 'rotated' though, since they take a finite time to rotate, that implies to me that there are some sort of losses involved in rotating the dipoles. These losses, though, as far as I can tell, are not accounted for in $\varepsilon_c \equiv \varepsilon - \jmath\frac{\sigma}{\omega}$ (I figure the loss due to rotating the dipoles should be part of $\Im{\{\varepsilon_c\}}$ (from what I can tell, $\Im{\{\varepsilon_c\}}$ accounts for the loss and $\Re{\{\varepsilon_c\}}$ accounts for the phase delay); however, $\Im{\{\varepsilon_c\}}$ only seems to take into account frequency and loss from electrons crashing into atoms).
Similarly, I would've thought that the loss that comes from electrons crashing into atoms (which $\sigma$ looks after) would also have the effect of at least somewhat slowing down the wave and causing lag.
Basically, what I'm saying is, why aren't $\varepsilon$ and $\sigma$ also complex numbers? Or maybe they are...
Thank you.
Answer: Have a look at the reference cited in this answer: Can the Kramers–Kronig relation be used to correct transfer function measurements?
In a nutshell here is the point: because of causality, in the time-domain, the permittivity, conductivity, etc must be real but non-local. In the Fourier domain, they must be complex. The nonlocality expresses itself as a convolution integral in the time domain. This is easier to write in the Fourier domain as a multiplication; as a result this often leads to misleading abuses of notation involving time-domain fields in the same equation as Fourier domain optical constants. The answer indicated above has a link to a tutorial on this subject. Oh and the phase delay applies to all material, not just dipoles, this is causality. | {
"domain": "physics.stackexchange",
"id": 43451,
"tags": "electromagnetism, plane-wave"
} |
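The definition the book uses is just an algebraic regrouping: $\jmath\omega\left(\varepsilon - \jmath\frac{\sigma}{\omega}\right) = \sigma + \jmath\omega\varepsilon$. A quick numerical sanity check of that identity with Python's complex arithmetic (the values of sigma, eps, and omega are arbitrary illustrations):

```python
# Check that jw*(eps - j*sigma/w) equals sigma + jw*eps for sample values.
sigma, eps, omega = 3.5, 8.854e-12, 2 * 3.141592653589793 * 1e9

j = 1j
eps_c = eps - j * sigma / omega   # complex permittivity as defined in the book
lhs = j * omega * eps_c           # jw * eps_c
rhs = sigma + j * omega * eps     # the undisguised right-hand side

print(abs(lhs - rhs))  # ~0: the two forms are identical
```

So nothing physical changes; the grouping just keeps $\varepsilon_c$ in the same units as $\varepsilon$ (farads per meter), which the asker's alternative $\sigma + \jmath\omega\varepsilon$ would not.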
Why is invariant mass preferred over conserved one? | Question: In Special Relativity, there are two kinds of mass one can define for a system, right?:
$$m_{rel} = \frac{E_{tot}}{c^2}$$
and
$$m_0 = \frac{\sqrt{E_{tot}^2 - p_{tot}^2c^2}}{c^2}$$
Both have their pros and cons. The first one is conserved (because $E_{tot}$ is) but not invariant. The second one is invariant (because $E_{tot}^2 - p_{tot}^2c^2$ is) but not conserved.
Usually, the second one is preferred over the first. Why is that? Why is invariance preferred over conservation?
Answer: To an extent this is a matter of opinion since both invariant and relativistic mass can be used in calculations. However I suspect the opinion of most physicists these days would be that the invariant mass is a more useful concept.
If we start with Newtonian physics then we know that the basic equation of motion is Newton's second law:
$$ F = ma = \frac{dp}{dt} $$
If we now consider relativistic velocities then we find this no longer correctly describes the motion. However, if we use the relativistic mass $m_r$ we find the equations do work. This I suspect was why the concept caught on originally. If you regard Newton's laws as fundamental then it seems natural to redefine the mass to keep Newton's laws working.
But the force, acceleration, momentum, etc used in Newtonian mechanics are three-vectors and we know that special relativity is most naturally formulated using four-vectors. Indeed if you use four vectors you can simply write:
$$ \mathbf F = m\mathbf a = \frac{d\mathbf p}{d\tau} $$
and now $m$ is the invariant mass i.e. the same as the mass of the object in the objects rest frame. So if you regard four-vectors as fundamental then it seems natural to redefine Newton's laws and keep the mass constant.
As to which is better ultimately that is a matter of opinion, though I doubt you will find many physicists today who consider the concept of relativistic mass better. The invariant mass has a natural physical interpretation as the norm of the four-momentum (which immediately explains why it is an invariant since the norm of any four-vector is a scalar invariant).
Using this formalism also gives us the relativistic equation for the total energy:
$$ E^2 = p^2c^2 + m^2c^4 $$
that applies to everything - massless particles as well as massive ones. Unless you subscribe to this view you have a problem explaining why massless particles can have energy without any mass. | {
"domain": "physics.stackexchange",
"id": 51415,
"tags": "special-relativity, mass"
} |
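The trade-off described in the answer above can be seen numerically: boost a particle at rest into another frame, and the "relativistic mass" E/c^2 changes while sqrt(E^2 - p^2) does not (units with c = 1; the mass and boost speed are arbitrary sample values):

```python
import math

def boost(E, p, beta):
    """Lorentz-boost a (1+1)-dimensional four-momentum (E, p), with c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (E - beta * p), gamma * (p - beta * E)

m0 = 2.0                   # invariant (rest) mass
E, p = m0, 0.0             # particle at rest in its own frame

E2, p2 = boost(E, p, 0.8)  # view it from a frame moving at 0.8c

print(E2)                        # frame-dependent: the "relativistic mass"
print(math.sqrt(E2**2 - p2**2))  # frame-independent: still 2.0
```

This is the sense in which the invariant mass is the norm of the four-momentum: it comes out the same in every frame, while $E_{tot}/c^2$ does not.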
In the equation for the magnetic field caused by a single moving point charge, is the degree of $|r|$ in the denominator 2 or 3? | Question: I just wanted to ask what the correct formula for the Magnetic Field around a single moving point charge is. In most places I have seen it written as $$\vec{B} = \frac{\mu_0}{4\pi} \frac{q\vec{v} \times \vec{r}}{r^2}$$ in many places however in some places such as this and this they say it is $$\vec{B} = \frac{\mu_0}{4\pi} \frac{q\vec{v} \times \vec{r}}{r^3}.$$ Could someone please clarify which one is the right one?
Answer: You've written the first equation wrong, it's actually $\hat{\mathbf{r}}$, not $\vec{\mathbf{r}},$ i.e. it's the unit vector in the $\mathbf{r}$ direction. Clearly, in this case, both equations are the same, if you multiply the first by $|\mathbf{r}|$, since $r\hat{\mathbf{r}} = \vec{\mathbf{r}}$.
If you have trouble remembering which of the two is "right", dimensional analysis is your friend. Let's say all you remember are Maxwell's Equations (a very good practice, in my opinion). Then you'd know that $$\nabla \times \mathbf{B} = \mu_0 \mathbf{j}.$$ Looking at the above equation dimensionally, you should see that the dimensions of $\mathbf{B}$ are:
$$L^{-1}[B] = [\mu_0] [j] = [\mu_0] [I] L^{-2},$$
where I've used the fact that the curl is a derivative with respect to position, and that $j$ is the current ($I$) density. Now, with the definition of current as being charge per unit time, I'll leave it to you to show that: $$[B] = [\mu_0]\,[qv]\,L^{-2},$$
meaning that of the two equations as you've written them in your question, only the second could be dimensionally consistent. (Of course, dimensional consistency is no guarantee that it's the right equation! It just helps filter obviously wrong ones.) | {
"domain": "physics.stackexchange",
"id": 74981,
"tags": "electromagnetism, magnetic-fields, vectors, notation"
} |
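Since $\hat{\mathbf{r}} = \vec{\mathbf{r}}/r$, the two formulas in the question are the same equation written two ways: the unit vector with $r^2$ in the denominator, or the full vector with $r^3$. A small numeric check with explicit vectors (all values are arbitrary samples):

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

mu0_4pi = 1e-7          # mu_0 / 4pi in SI units
q = 1.6e-19             # sample charge
v = (2e6, 0.0, 0.0)     # sample velocity vector
r = (0.0, 0.03, 0.04)   # sample displacement vector, |r| = 0.05

rmag = math.sqrt(sum(c * c for c in r))
r_hat = tuple(c / rmag for c in r)

B_hat_form = tuple(mu0_4pi * q * c / rmag**2 for c in cross(v, r_hat))
B_vec_form = tuple(mu0_4pi * q * c / rmag**3 for c in cross(v, r))

print(B_hat_form)  # identical components
print(B_vec_form)
```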
Neutrino Hypothesis for Beta Decay | Question: Neutrino was discovered from the seemingly violation of conservation laws.
Supposedly, it was surprising to scientists when they found electrons were emitted at various energies during beta decay. Hence, the continuous range of energies was accounted for using this new particle, the neutrino.
However, it is unclear to me why other forms of decay must be discrete. Why was it surprising that electrons were emitted at various energies? Consider alpha decay. A parent nucleus decays into a daughter nucleus and an alpha particle. Couldn't the decay energy be variably shared across the daughter nucleus and the alpha particle, such that the alpha particle is emitted with a range of energies as well?
Watching a video, I found the following explanation.
When a nuclear reaction takes place, some amount of energy is released. This released energy can be predicted using the mass defect. From this energy, we can then predict the kinetic energy of the emitted particle, as the daughter nucleus is so massive compared to the emitted particle. Hence, the vast majority of the decay energy is carried off by the emitted particle, so we can theoretically predict its kinetic energy.
What is not clear to me is why the emitted particle carries nearly all the kinetic energy. Kinetic energy is $\frac{1}{2}mv^2$. As the daughter nucleus has much more mass, wouldn't the kinetic energy be roughly the same between the daughter nucleus and the emitted particle?
It is unclear to me why the decay energy cannot be shared in variable proportions with the daughter nucleus and the alpha particle, therefore giving a continuous range of energies for the alpha particle. What is wrong with my intuition?
Answer:
It is unclear to me why the decay energy cannot be shared in variable proportions with the daughter nuclei and the alpha particles.
The decay must conserve both energy and momentum. For a two-body decay, there's only one way to accomplish this.
Let's consider the decay of a heavy nucleus into an alpha particle with mass $m=4\rm\,u$ and a heavy daughter with mass $M\gg m$. In the rest frame of the parent nucleus, the decay products have momenta obeying
$$
\vec p_\text{nucleus} + \vec p_\alpha = 0
$$
and kinetic energies obeying
$$
\frac{p_\text{nucleus}^2}{2M} + \frac{p_\alpha^2}{2m} = Q,
$$
where the value $Q$ (sometimes called the "$Q$-value") is the total energy liberated in the decay. The momentum equation tells us that we need consider only the magnitude of the momentum $p_\text{nucleus} = p_\alpha = p$, and the energy equation tells us that $p = \sqrt{2Q \left(M^{-1} + m^{-1}\right)^{-1}}$ is uniquely determined.
You can see from the energy relation $p^2/2m$ that, for a given momentum, the lower-mass particle will carry more of the kinetic energy.
To conserve energy and momentum in a three-body decay, you have two equations but three unknowns. The third degree of freedom is convenient in different places, depending on exactly what you're trying to compute, but a nice way to think of it is the angle between the electron and neutrino momenta.
The comments about the ratio of masses are a red herring in a question about why the energies are discrete instead of continuous. For another example, consider the disintegration of lithium as induced by thermal neutrons,
$$
\rm n + {^6Li} \to {^3H} + {^4He}
$$
This reaction is quite efficient with thermal neutrons, whose energies are best measured in milli-eV, while its $Q$-value is better measured in mega-eV. The relatively tiny initial momentum in thermal neutron capture makes it reasonable to think of the neutron-lithium system as being initially at rest. But when you neglect the incident neutron's energy, the two-body final state is discrete by the same arguments as above.
"domain": "physics.stackexchange",
"id": 97860,
"tags": "particle-physics, proton-decay"
} |
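The two-body formulas in the answer above can be put to numbers. A rough alpha-decay example (Q and the masses are illustrative round numbers, approximately U-238 decaying to Th-234):

```python
def two_body_kinetic_split(Q, M, m):
    """Non-relativistic two-body decay from rest: return (T_heavy, T_light).
    Momenta are equal and opposite, so p^2 = 2*Q / (1/M + 1/m)."""
    p_sq = 2.0 * Q / (1.0 / M + 1.0 / m)
    return p_sq / (2.0 * M), p_sq / (2.0 * m)

Q = 4.27           # MeV, roughly the U-238 alpha-decay Q-value
M, m = 234.0, 4.0  # daughter and alpha masses in u (round numbers)

T_heavy, T_light = two_body_kinetic_split(Q, M, m)
print(T_light / Q)  # the alpha carries ~98% of the released energy
```

For a fixed momentum magnitude, $T = p^2/2m$ is larger for the lighter particle; in fact the split works out to $T_\alpha/Q = M/(M+m)$, which is why the alpha energy is sharp and close to $Q$.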
Are Colors Emitted at Specific Temperatures? | Question: There are quite a few nagging questions I have been having over the years, I do not require a full explanation, just some guidance in my assumptions and pointers if I am very wrong.
My basic knowledge of a lightbulb is that the power dissipated by the tungsten filament heats it to around 2,700–3,300K (inefficient!), emitting a yellow light. Something supporting my current (and next) claims is this image, see the bottom chart on temperatures:
With this, can I assert that fire is red (and yellow) due to the fact it is simply at those temperatures where those colours are emitted, and it happens to emit those frequency photons? What is this called? I recall "thermionic emission"; however, I am unsure if it relates as closely to visible light as it does to "heat".
Of course, I've heard the phrase "white hot", iron (and other similar metals) melt at around 1800°K (tungsten at 3422°K), while that near-white temperature is below 10M°K! Not to mention it would glow green first before that blueish point. I will assert that metal cannot glow that hot - of course searching images of "white-hot metal" come up with red-ish metal, however "white hot" is used often enough to cloud my memory.
"Colour temperature" is another confusing topic I have looked at, does it directly relate to the colour emitted at the above temperatures? I do not see "5000K" (often referenced as a blueish white) being white unless other colours are combined, however how that is a "temperature" confuses me. Are they just separate from each other?
I can also as a sanity check assume that some gas generated flames being blue, is due to the gas burning and not the temperature. Are all of my assumptions correct? Can you clarify or add anything?
Answer: Just to add to Vineet's answer.
Wien's displacement law only tells you the peak wavelength of a hot object's spectrum. The actual emission spectrum is a broad function, so a hot object will emit both longer and shorter wavelengths.
Remember also that a hot object will emit overall more power, and finally you have to include your eye's sensitivity to various wavelengths.
So a piece of hot metal can appear white because, although its peak emission will be in the infrared, it is still emitting a lot of power at shorter wavelengths, and your eye is more sensitive to blue than red. | {
"domain": "physics.stackexchange",
"id": 2670,
"tags": "electromagnetic-radiation, temperature, visible-light"
} |
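The answer's point about peak wavelength can be made concrete with Wien's displacement law, $\lambda_\text{peak} = b/T$ with $b \approx 2.898 \times 10^{-3}$ m·K:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(T):
    """Peak of the blackbody spectrum at temperature T (kelvin), in nm."""
    return WIEN_B / T * 1e9

print(peak_wavelength_nm(3000))  # ~966 nm: a filament peaks in the infrared
print(peak_wavelength_nm(5778))  # ~502 nm: the Sun peaks in the visible
```

So a ~3000 K filament's peak is not even visible; the yellow-white appearance comes from the visible tail of the broad spectrum, weighted by the eye's sensitivity.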
Difference Between Feature Engineering and Feature Learning | Question: I am playing with features (input data) to improve my model's accuracy.
If I have a raw time-series dataframe, does feature engineering mean extracting properties or characteristics of my raw data and feeding them as input? Or will the algorithm learn these from the time-series itself?
In other words, should I create a column that is comprised of the moving average, or will the algorithm pick up on moving average from the raw data?
Is feature engineering just the munging of independent variables? Or is it extracting features that are dependent on other raw data?
EDIT:
Here's another question: If I have a categorical feature, would it be better to have it as a one-hot vector (say, 5 binary inputs), or to have it as one input with range [0,4]?
How does one intuitively know the answer to these questions??
Answer: Feature engineering refers to creating new information that was not there previously, often by using domain specific knowledge or by creating new features that are transformations of others you already have, such as adding interaction terms or as you state, moving averages.
A model generally cannot 'pick up' on information it doesn't have, and that is where finesse and creativity comes into play.
Whether you should one-hot or leave a feature as categorical depends on the modeling approach. Some, like randomForest will do fine with categorical predictors; others prefer recoding.
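To make the categorical and moving-average points concrete, here is a minimal pure-Python sketch (illustrative only; the category names and window size are made up, not from the original answer):

```python
# Hypothetical categorical feature with 3 levels
categories = ["red", "green", "blue"]
samples = ["green", "red", "blue", "green"]
index = {c: i for i, c in enumerate(categories)}

# One-hot: each sample becomes a binary vector, one slot per category.
one_hot = [[1 if index[s] == i else 0 for i in range(len(categories))]
           for s in samples]          # green -> [0, 1, 0], etc.

# Ordinal: a single integer in [0, k-1]. This imposes an artificial
# ordering (red < green < blue) that linear models may wrongly exploit;
# tree-based models usually tolerate it.
ordinal = [index[s] for s in samples]

# Engineered time-series feature: a trailing moving average the model
# would not otherwise "see" as a single input column.
def moving_average(xs, window):
    return [sum(xs[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(xs))]
```

Which encoding works better is exactly the kind of question best settled by the quick comparison experiments suggested above.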
Intuition on these questions comes with practice and experience. There's no substitute for trying out and comparing toy examples to see how your choices affect outcomes. You should take the time to do that, and intuition will follow. | {
"domain": "datascience.stackexchange",
"id": 1934,
"tags": "machine-learning, data, feature-engineering"
} |
Prove that there is no computability reduction HP $\le$ $\Sigma$* | Question: I tried to prove, by contradiction, that there is no computability reduction HP $\le$ $\Sigma$*, because HP $\in$ RE $\setminus$ R while $\Sigma$* $\in$ R, but it feels like that is not a strong argument.
There is a more solid way to prove it?
Answer: Let us actually prove more: If $L$ is a language and $L \not\in \mathsf{R}$, then $L \not\le_\mathrm{T} L'$ for any $L' \in \mathsf{R}$. (Here, $\le_\mathrm{T}$ indicates a Turing reduction; this is synonymous with your notion of a "computability" reduction.) In other words, $\mathsf{R}$ is closed under Turing reductions.
Suppose towards a contradiction that such a reduction exists and is computed by a Turing machine $M$. Furthermore, let $M'$ be a decider for $L'$. Then there is a Turing machine which solves $L$, namely by directly simulating $M$, answering all of its oracle queries by simulating $M'$ on them, and, at the end, answering what $M$ does. This contradicts $L \not\in \textsf{R}$.
The case of the halting problem follows from it not being in $\mathsf{R}$ (it is in $\mathsf{RE}$, but its complement is not, so it is undecidable) and, naturally, $\Sigma^\ast \in \mathsf{R}$.
An aside: If you find the idea of classes closed under Turing reductions interesting, I suggest you look up the computation-theoretical notion of Turing degree. $\mathsf{R}$ would be the "base case" (i.e., degree zero) in that context. | {
"domain": "cs.stackexchange",
"id": 14352,
"tags": "computability, reductions"
} |
How long does it take an electron to emit (or absorb) a photon? | Question: A photon is emitted (or absorbed) by a transitioning electron. How fast is this process?
Answer: The emission process always takes some time. How much depends on the kind of transition.
If the transition is a spontaneous dipole transition, as when an excited electronic state of an atom decays, the rate of spontaneous emission found using quantum electrodynamics is
$$
\Gamma = \frac{\omega_{12}^3|\mu_{12}|^2}{3\pi\varepsilon_{0}\hbar c^3}
$$
where $\omega_{12}$ is the emission frequency ($(E_1-E_2)/\hbar$) and $\mu_{12}$ is the transition dipole matrix element magnitude $|\langle 1 |\boldsymbol{\mu}|2\rangle|$. So the mean time needed for one transition is
$$
1/\Gamma = \frac{3\pi\varepsilon_{0}\hbar c^3}{\omega_{12}^3|\mu_{12}|^2}.
$$
For hydrogen transition $2P \to 1S$, this turns out to be around 1.6 nanoseconds [1].
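Plugging numbers into the rate formula reproduces this figure (a quick sketch; the matrix element value $|\langle 100|z|210\rangle| = (128\sqrt{2}/243)\,a_0$ is the standard textbook result for hydrogen):

```python
import math

# Physical constants (SI)
hbar = 1.054571817e-34     # J*s
eps0 = 8.8541878128e-12    # F/m
c = 2.99792458e8           # m/s
e = 1.602176634e-19        # C
a0 = 5.29177210903e-11     # m (Bohr radius)

# Lyman-alpha transition energy E(2P) - E(1S) ~ 10.2 eV
omega = 10.2 * e / hbar

# |<100|z|210>| = (128*sqrt(2)/243) * a0, standard hydrogen result
mu = e * (128 * math.sqrt(2) / 243) * a0

gamma = omega**3 * mu**2 / (3 * math.pi * eps0 * hbar * c**3)
tau = 1 / gamma            # mean lifetime, ~1.6e-9 s
```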
[1] http://farside.ph.utexas.edu/teaching/qmech/Quantum/node122.html | {
"domain": "physics.stackexchange",
"id": 76074,
"tags": "electromagnetic-radiation, time"
} |
Do certain cumulated dienes present geometric isomerism? | Question: Let's take, for example: 2,3-pentadiene.
Can't this compound present geometrical isomers, like 2-butene?
Answer: Properly speaking, these sorts of allenes/cumulated dienes do not exhibit geometric isomerism in the way that alkenes can. Rather, when each of the two respective carbons bears two different substituents, the molecule is chiral and will exhibit enantiomerism. More specifically, these asymmetric allenes exhibit axial chirality. This is a consequence of the two consecutive $\pi$-bonds being mutually perpendicular, which results in the molecule lacking any mirror plane of symmetry.
Generally, an even number of successive double bonds in a cumulene produces a potential axis of chirality (again, assuming each carbon at the end of the double-bond system bears two different substituents), while an odd number results in the substituents at the ends of the double-bond system being coplanar, which permits geometric isomerism akin to that seen in alkenes. | {
"domain": "chemistry.stackexchange",
"id": 1604,
"tags": "organic-chemistry, cis-trans-isomerism"
} |
How can the momentum be conserved in $y$ direction here? | Question: In this question,
A circus acrobat of mass $M$ leaps straight up with initial velocity $v_0$ from a trampoline. As he rises up, he takes a trained monkey of mass $m$ off a perch at a height $h$ above the trampoline. What is the maximum height attained by the pair? [Source: Introduction to Mechanics, D.Kleppner and R.Kolenkow, Chapter 3, Exercise 3.5]
Here, in the solution, the momentum is conserved along the Y-direction by finding out the velocity at height h and then equating the momenta at highest point with it.
However, my doubt is that, momentum can be conserved only when there is no external force acting along that particular direction ; but then here, the weight acts downwards always, so how is momentum conserved?
Please help me understand where I'm going wrong.
Answer: The conservation of momentum is used to find the velocity of the monkey + acrobat pair just after they join. The momentum immediately after the pair is formed must be equal to the momentum just before the acrobat takes the monkey.
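Concretely, the three steps can be sketched numerically (the sample values below are hypothetical, not from the problem statement):

```python
import math

# Hypothetical sample values (not from the problem statement)
M, m = 60.0, 5.0       # kg: acrobat, monkey
v0, h = 10.0, 2.0      # m/s launch speed, pickup height in m
g = 9.8                # m/s^2

# 1. Gravity acts throughout, so use kinematics up to height h ...
v_at_h = math.sqrt(v0**2 - 2 * g * h)

# 2. ... conserve momentum only across the instant the monkey is grabbed ...
v_pair = M * v_at_h / (M + m)

# 3. ... then kinematics again for the pair's remaining rise.
h_max = h + v_pair**2 / (2 * g)
```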
The rest of the problem is solved by uniformly accelerated movement. | {
"domain": "physics.stackexchange",
"id": 82250,
"tags": "newtonian-mechanics, momentum, conservation-laws, collision, free-body-diagram"
} |
What mathematical tools exist for understanding modulated noise? | Question: Suppose we have a signal $n$ that consists of Gaussian white noise. If we modulate this signal by multiplying it by $\sin 2\omega t$, the resulting signal still has a white power spectrum, but clearly the noise is now "bunched" in time. This is an example of a cyclostationary process.
$$x(t) = n(t) \sin2\omega t$$
Suppose that we now demodulate this signal at a frequency $\omega$ by mixing with sine and cosine local oscillators, forming I and Q signals:
$$I = x(t) \times \sin\omega t$$
$$Q = x(t) \times \cos\omega t$$
Naively observing that the power spectrum of $x(t)$ (taken over a time interval much greater than $1/f$) is white, we would expect $I$ and $Q$ to both contain white Gaussian noise of the same amplitude. However, what really happens is that the $I$ quadrature selectively samples the portions of the timeseries $x(t)$ with high variance, while $Q$, ninety degrees out of phase, samples the lower variance portions:
The result is that the noise spectral density in I is $\sqrt{3}$ times that of $Q$.
Clearly there must be something beyond the power spectrum that is useful in describing modulated noise. The literature of my field has a number of accessible papers describing the above process, but I would like to learn how it is treated more generally by the signal processing / EE communities.
What are some useful mathematical tools for understanding and manipulating cyclostationary noise? Any references to literature would also be appreciated.
References:
Niebauer et al, "Nonstationary shot noise and its effect on the sensitivity of interferometers". Phys. Rev. A 43, 5022–5029.
Answer: I'm not sure specifically what you're looking for here. Noise is typically described via its power spectral density, or equivalently its autocorrelation function; the autocorrelation function of a random process and its PSD are a Fourier transform pair. White noise, for example, has an impulsive autocorrelation; this transforms to a flat power spectrum in the Fourier domain.
Your example (while somewhat impractical) is analogous to a communication receiver that observes carrier-modulated white noise at a carrier frequency of $ 2 \omega $. The example receiver is quite fortunate, as it has an oscillator that is coherent with that of the transmitter; there is no phase offset between the sinusoids generated at the modulator and demodulator, allowing for the possibility of "perfect" downconversion to baseband. This isn't impractical on its own; there are numerous structures for coherent communications receivers. However, noise is typically modeled as an additive element of the communication channel that is uncorrelated with the modulated signal that the receiver seeks to recover; it would be rare for a transmitter to actually transmit noise as part of its modulated output signal.
With that out of the way, though, a look at the mathematics behind your example can explain your observation. In order to get the results that you describe (at least in the original question), the modulator and demodulator have oscillators that operate at an identical reference frequency and phase. The modulator outputs the following:
$$
\begin{align}
n(t) &\sim \mathcal{N}(0, \sigma^2) \\
x(t) & = n(t) \sin(2\omega t)
\end{align}
$$
The receiver generates the downconverted I and Q signals as follows:
$$
\begin{align}
I(t) &= x(t) \sin(2 \omega t) = n(t) \sin^2(2 \omega t)\\
Q(t) &= x(t) \cos(2 \omega t) = n(t) \sin(2 \omega t) \cos(2 \omega t)
\end{align}
$$
Some trigonometric identities can help flesh out $ I(t) $ and $ Q(t) $ some more:
$$
\begin{align}
\sin^2(2 \omega t) &= \frac{1 - \cos(4 \omega t)}{2}\\
\sin(2 \omega t) \cos(2 \omega t) &= \frac{\sin(4 \omega t) + \sin(0)}{2} = \frac{1}{2} \sin(4 \omega t)
\end{align}
$$
Now we can rewrite the downconverted signal pair as:
$$
\begin{align}
I(t) &= n(t) \frac{1 - \cos(4 \omega t)}{2}\\
Q(t) &= \frac{1}{2} n(t) \sin(4 \omega t)
\end{align}
$$
The input noise is zero-mean, so $ I(t) $ and $ Q(t) $ are also zero-mean. This means that their variances are:
$$
\begin{align}
\sigma^{2}_{I(t)} &= \mathbb{E}(I^2(t)) = \mathbb{E}\left(n^2(t) \left[\frac{1 - \cos(4 \omega t)}{2}\right]^2\right) = \mathbb{E}\left(n^2(t)\right) \mathbb{E}\left(\left[\frac{1 - \cos(4 \omega t)}{2}\right]^2\right) \\
\sigma^{2}_{Q(t)} &= \mathbb{E}(Q^2(t)) = \mathbb{E}\left(\tfrac{1}{4} n^2(t) \sin^2(4 \omega t)\right) = \mathbb{E}\left(n^2(t)\right) \mathbb{E}\left(\tfrac{1}{4}\sin^2(4 \omega t)\right)
\end{align}
$$
You noted the ratio between the variances of $ I(t) $ and $ Q(t) $ in your question. It can be simplified to:
$$
\frac{\sigma^{2}_{I(t)}}{\sigma^{2}_{Q(t)}} = \frac{\mathbb{E}\left(\left[\frac{1 - \cos(4 \omega t)}{2}\right]^2\right)}{\mathbb{E}\left(\frac{1}{4}\sin^2(4 \omega t)\right)}
$$
The expectations are taken over the random process $ n(t) $ 's time variable $ t $. Since the functions are deterministic and periodic, this is really just equivalent to the mean-squared value of each sinusoidal function over one period; for the values shown here, you get a power ratio of $3$, i.e. the factor of $ \sqrt 3 $ in noise amplitude that you noted. The fact that you get more noise power in the I channel is an artifact of noise being modulated coherently (i.e. in phase) with the demodulator's own sinusoidal reference. Based on the underlying mathematics, this result is to be expected. As I stated before, however, this type of situation is not typical.
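This variance ratio is easy to confirm by direct simulation (a quick sketch, not from the original answer; the sample rate and carrier frequency are arbitrary choices):

```python
import math
import random

random.seed(0)
fs = 1000.0                  # sample rate, Hz
f_carrier = 20.0             # this is 2*omega/(2*pi) in the notation above
N = 200_000                  # an integer number of carrier periods

i_power = q_power = 0.0
for k in range(N):
    t = k / fs
    carrier = 2 * math.pi * f_carrier * t
    n = random.gauss(0.0, 1.0)      # white Gaussian noise sample
    x = n * math.sin(carrier)       # modulated noise
    i_power += (x * math.sin(carrier)) ** 2   # I = n * sin^2
    q_power += (x * math.cos(carrier)) ** 2   # Q = (n/2) * sin(2*carrier)

ratio = i_power / q_power    # expect E[sin^4] / E[sin^2 cos^2] = 3
```

The simulated power ratio comes out near 3, i.e. $\sqrt 3$ in amplitude.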
Although you didn't directly ask about it, I wanted to note that this type of operation (modulation by a sinusoidal carrier followed by demodulation of an identical or nearly-identical reproduction of the carrier) is a fundamental building block in communication systems. A real communication receiver, however, would include an additional step after the carrier demodulation: a lowpass filter to remove the I and Q signal components at frequency $ 4 \omega $. If we eliminate the double-carrier-frequency components, the ratio of I energy to Q energy looks like:
$$
\frac{\sigma^{2}_{I(t)}}{\sigma^{2}_{Q(t)}} = \frac{\mathbb{E}\left((\frac{1}{2})^2\right)}{\mathbb{E}(0)} = \infty
$$
This is the goal of a coherent quadrature modulation receiver: signal that is placed in the in-phase (I) channel is carried into the receiver's I signal with no leakage into the quadrature (Q) signal.
Edit: I wanted to address your comments below. For a quadrature receiver, the carrier frequency would in most cases be at the center of the transmitted signal bandwidth, so instead of being bandlimited to the carrier frequency $ \omega\ $, a typical communications signal would be bandpass over the interval $ [\omega - \frac{B}{2}, \omega + \frac{B}{2}] $, where $ B $ is its modulated bandwidth. A quadrature receiver aims to downconvert the signal to baseband as an initial step; this can be done by treating the I and Q channels as the real and imaginary components of a complex-valued signal for subsequent analysis steps.
With regard to your comment on the second-order statistics of the cyclostationary $ x(t) $, you have an error. The cyclostationary nature of the signal is captured in its autocorrelation function. Let the function be $ R(t, \tau) $:
$$
R(t, \tau) = \mathbb{E}(x(t)x(t - \tau))
$$
$$
R(t, \tau) = \mathbb{E}(n(t)n(t - \tau) \sin(2 \omega t) \sin(2 \omega(t - \tau)))
$$
$$
R(t, \tau) = \mathbb{E}(n(t)n(t - \tau)) \sin(2 \omega t) \sin(2 \omega(t - \tau))
$$
Because of the whiteness of the original noise process $ n(t) $, the expectation (and therefore the entire right-hand side of the equation) is zero for all nonzero values of $ \tau $.
$$
R(t, \tau) = \sigma^2 \delta(\tau) \sin^2(2 \omega t)
$$
The autocorrelation is no longer just a simple impulse at zero lag; instead, it is time-variant and periodic because of the sinusoidal scaling factor. This causes the phenomenon that you originally observed, in that there are periods of "high variance" in $ x(t) $ and other periods where the variance is lower. The "high variance" periods are selected by demodulating by a sinusoid that is coherent with the one used to modulate it, which stands to reason. | {
"domain": "dsp.stackexchange",
"id": 7,
"tags": "noise, modulation, cyclostationary-random-process"
} |
Star brightness data to study exoplanets with the transit method? | Question: Can someone tell me how I can find the star brightness data to study exoplanet using transit method? The file should be in comma separated value (CSV) format or any other formats that can be latter converted to that format.
The following sample data shows the star brightness along with time. By using this data we can plot the transit curve. I want this kind of data for an actual star:
Answer: I don't know of a source for the CSV data directly but if you are OK with a little bit of Python, this can be done with the NASA Exoplanet Archive. Looking at one famous example (HD 189733b), if you do a search for this object on the front page it should bring you to a page of detailed information and links to datasets. Expanding the 'Ancillary Information' section will show the links for the data (this link should be equivalent)
The data files are in IPAC table format but this is easily readable by AstroPy's Table class. A short example of this is below:
from astropy.table import Table
import matplotlib.pyplot as plt
# Read photometric table
phot_table = Table.read("https://exoplanetarchive.ipac.caltech.edu/data/ExoData/0098/0098505/data/UID_0098505_PLC_025.tbl", format="ipac")
# Subtract off integer part of first JD to make plotting easier
t0 = int(phot_table['HJD'][0])
plt.figure()
plt.errorbar(phot_table['HJD']-t0, phot_table['Relative_Flux'], yerr=phot_table['Relative_Flux_Uncertainty'], color='r', fmt="+", capsize=3)
plt.minorticks_on()
plt.xlabel('HJD-{:.1f} [days]'.format(t0))
plt.ylabel("Relative Flux")
plt.title("HD 189733b Transit Light Curve")
plt.savefig("transit_lc.png")
# Read radial velocity data
rv_table = Table.read("https://exoplanetarchive.ipac.caltech.edu/data/ExoData/0098/0098505/data/UID_0098505_RVC_001.tbl", format="ipac")
t0 = int(rv_table['JD'][0])
plt.clf()
plt.errorbar(rv_table['JD']-t0, rv_table['Radial_Velocity'], yerr=rv_table['Radial_Velocity_Uncertainty'], color='r', fmt='+')
# Zoom in on one of the nights where Rossiter_McLaughlin effect was being measured
plt.xlim(395.45, 395.70)
plt.minorticks_on()
plt.xlabel('JD-{:.1f} [days]'.format(t0))
plt.ylabel('Radial Velocity [m/s]')
plt.title("HD 189733b Radial Velocity Curve")
plt.savefig("transit_rv.png")
# Example export to CSV format
rv_table.write("transit_RV.csv", format='csv')
The top part of the exported CSV file looks like:
JD,Radial_Velocity,Radial_Velocity_Uncertainty
2453946.612661,-2167.93,0.89
2453946.620219,-2155.63,0.85
2453946.627291,-2142.88,0.78
if this is easier for further analysis. | {
"domain": "astronomy.stackexchange",
"id": 5647,
"tags": "exoplanet, planetary-transits, radial-velocity"
} |
Data file read/write from within IBM Q-Experience Jupyter notebook | Question: In IBM Q-Experience how can I upload a data file that I intend to read from the python code?
Answer: If you go to the Qiskit Notebooks section, you will see this button
If you click import you can choose to upload the file from your computer into IQX. | {
"domain": "quantumcomputing.stackexchange",
"id": 1088,
"tags": "ibm-q-experience"
} |
How to design a digital filter with a frequency dependent phase delay? | Question: I want to design a filter with a custom phase delay related to frequency. As frequency increases, phase delay should increase.
The time delay as a function of frequency can be expressed as:
$t_d = \frac{L}{C_{ph}} - \frac{1}{f}$
$L$ and $C_{ph}$ are constants of $0.0017$ and $2628$ respectively.
The range of $f$ is $500\mathrm{kHz}$ to $1 \mathrm{MHz}$, the sampling frequency, $f_s$ is $78.39\mathrm{MHz}$.
Thus, when the frequency is $500 \mathrm{kHz}$, the delay should be $1.37\mathrm{\mu s}$ or $107$ samples.
For the pass band, $f$, I have used a filter response of $e^{-i2\pi fd}$, where $d$ is the delay in samples required.
I have designed stop bands with a filter response of zero between $0 \mathrm{Hz}-100\mathrm{kHz}$ and $1.4 \mathrm{MHz}-f_s$.
In MATLAB my code looks like this:
n = 50;
fs = double(fs);
L = double(L);
Cph = 2628;
f1 = linspace(500e3/fs,1e6/fs,100);
f1d = L/Cph - (1./(f1*fs));
%Our array is backwards though innit
f1d = f1d * -1;
f1dz = f1d * fs;
h1 = exp(-1i*2*pi*f1.*f1dz);
fstop1 = linspace(0,100e3/fs, 10);
hstop1 = zeros(size(fstop1));
fstop2 = linspace(1.4e6/fs, 1, 10);
hstop2 = zeros(size(fstop2));
d=fdesign.arbmagnphase('n,b,f,h', n, 3, fstop1, hstop1, f1, h1, fstop2, hstop2);
%d=fdesign.arbmagnphase('n,b,f,h', n, 1, f1, h1);
D = design(d,'equiripple');
fvtool(D,'Analysis','phasedelay');
This is what I get:
Markers are at 500k and 1M.
What am I doing wrong?
Answer: I went down the ifft route. Based off this post: What effect does a delay in the time domain have in the frequency domain?
My code looks like this:
N = length(h1_data(:,1));
%N = (2^nextpow2(N));
L = h1_cord(18,1) - h1_cord(2,1);
Cph = 2628;
%Sampling rate and Nyquist frequency (needed before building the frequency vector)
fs = 1/(h1_time(2) - h1_time(1));
nyq = fs/2;
%Generate real frequency vectors (negative and positive)
freal = linspace(nyq, -nyq, N);
fd = (L/Cph) - (1./freal);
fd = fs * fd;
fd = fd * -1;
%fd(1) = fd(2);
f = exp((-1i*2*pi*(1:N).*fd)/N);
%f(N/2+1:end) = fliplr(f(1:N/2));
%f = [f fliplr(f)];
%Overwrite with freqs we don't care about
fs = 1/(h1_time(2) - h1_time(1));
nyq = fs/2;
bin = nyq/(N/2);
f0bin = round(250e3/bin);
f1bin = round(1.25e6/bin);
f(1:f0bin) = 1;
f(f1bin:N/2) = 1;
f(N/2+1:f1bin+N/2) = 1;
f(N-f0bin:end) = 1;
f(1) = 1;
g = real(ifft(f));
b = [g(N/2+1:end) g(1:N/2)];
b = b .* hamming(N)';
fvtool(b,1); | {
"domain": "dsp.stackexchange",
"id": 4061,
"tags": "matlab, filters, phase, finite-impulse-response, delay"
} |
PCL::MomentOfInertiaEstimation in ROS | Question:
Hello, I would like to use the MomentOfInertiaEstimation object from PCL. As I saw in pcl_ros this package only uses 1.7.0 of the pcl library. The object MomentOfInertiaEstimation is available from 1.7.2.
Is there any easy way to get the 1.7.2 version of pcl?
I'm using ROS Hydro
Thanks Johannes
Originally posted by Johannes on ROS Answers with karma: 75 on 2014-10-14
Post score: 0
Answer:
If you want to use a different version of pcl you will need to install it from source as well as any package which depends on it.
Originally posted by tfoote with karma: 58457 on 2014-11-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 19723,
"tags": "pcl"
} |
What is the difference of the gap between superconductor and insulator? | Question: This is what I learned from my textbook. An insulator is insulating because there is a gap between the valence band and the conduction band, and the Fermi level lies in the gap. A superconductor conducts without resistance because there is a gap between the BCS ground state and the first excited state. This gap prevents electrons from being backscattered. I'm trying to understand why both have gaps but one is an insulator and the other is a superconductor.
Answer: The difference is that in a normal conductor the current is carried by fermions (i.e. electrons) while in a superconductor the current is carried by bosons (i.e. Cooper pairs).
Have a read through my answer to What is it about the "conduction band" of a material that is distinct from the valence band? where I explain why a full energy band cannot carry a current. In a conventional conductor any momentum eigenstate in the band can be occupied by at most two electrons (with opposite spins) so in a full band the net momentum of the electrons in the band is zero i.e. there is no net drift velocity and hence no current.
In a superconductor the electrons pair up into Cooper pairs that obey Bose-Einstein statistics, so any number of Cooper pairs can occupy the same momentum state. That means the electrons joined into Cooper pairs can have a net momentum, and hence a net drift velocity, so they can carry a current. | {
"domain": "physics.stackexchange",
"id": 48153,
"tags": "superconductivity, insulators"
} |
Which is faster, (n==0 or n==1) or (n*(n-1)==0) | Question: This is the first time for me to ask a question in this site. Recently, I was learning python 3.x and I got a question...
Which is faster ?
if n==0 or n==1:
or
if n*(n-1)==0:
Similarly, for a,b,c,d are numbers, which is faster?
if n==a or n==b or n==c or n==d:
or
if (n-a)*(n-b)*(n-c)*(n-d)==0:
I have been confused about this question for some time. I asked my friends but they didn't know either. An answer would be appreciated. Thank you!
Answer: Besides searching for "what is faster", always consider code readability and maintainability.
Due to the short-circuit nature of the or operator, most of the sub-checks can simply be skipped on an "earlier" match, whereas with the 2nd approach (n-a)*(n-b)*(n-c)*(n-d)==0, whatever the value of n, the long sequence of arithmetic operations (n-a)*(n-b)*(n-c)*(n-d) will inevitably be performed, which makes the 2nd approach less efficient. Moreover, it looks more confusing for simple comparisons.
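The short-circuit behaviour is easy to observe directly (an illustrative sketch; the helper function is made up for the demonstration):

```python
calls = []

def eq(label, result):
    """Record that this comparison was evaluated, then return its result."""
    calls.append(label)
    return result

n = 2
# The second check matches, so the third and fourth are never evaluated.
matched = eq("a", n == 1) or eq("b", n == 2) or eq("c", n == 3) or eq("d", n == 4)
```

Only "a" and "b" are recorded; the arithmetic form has no such early exit.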
As for time performance, consider the following tests:
In [121]: a,b,c,d = range(1,5)
In [122]: n=4
In [123]: %timeit n==a or n==b or n==c or n==d
176 ns ± 8.52 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [124]: %timeit (n-a)*(n-b)*(n-c)*(n-d)==0
213 ns ± 6.86 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [125]: n=2
In [126]: %timeit n==a or n==b or n==c or n==d
108 ns ± 1.9 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
In [127]: %timeit (n-a)*(n-b)*(n-c)*(n-d)==0
241 ns ± 10.4 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) | {
"domain": "codereview.stackexchange",
"id": 36614,
"tags": "python, python-3.x"
} |
Does quantum mechanics somehow generalize the concept of affine tensor? | Question: From https://mathworld.wolfram.com/AffineTensor.html
and
https://encyclopediaofmath.org/wiki/Affine_tensor
It seems affine tensor transforms via orthogonal matrices:
$$A^{T} A = 1 $$
But, in quantum mechanics, the transformations of operator and basis are unitary:
$$U^{\dagger} U = 1 $$
My question is, does quantum mechanics adopt a general/modified version of an affine tensor?
Answer: Firstly, an affine tensor is usually just called a tensor. The qualifier 'affine' here is to distinguish this from a tensor field which is a field of tensors and which is also usually called a tensor, particularly in GR. To define an (affine) tensor, we use the tensor product. For example:
$V^3 := V \otimes V \otimes V$
The quantum mechanics of composite systems uses tensors, however, the tensor product here is usually on different vector spaces. So for example:
$U \otimes V$
Thus it is not the same as an (affine) tensor. However, if it is a composite of one or more identical systems, then this of course reduces to the concept of an (affine) tensor.
So, yes. The concept of tensor space does generalise that of an (affine) tensor - with a proviso - dual spaces aren't used. | {
"domain": "physics.stackexchange",
"id": 86153,
"tags": "quantum-mechanics, hilbert-space, operators, tensor-calculus, linear-algebra"
} |
Detect When Someone Has Stopped Talking | Question: What's an extremely efficient way of detecting when a user has stopped talking into a microphone? When I use systems like Siri it can detect when I've stopped talking almost immediately, even when there's background noise. My initial guess was to get the average volume of the background noise first but it seems there isn't much time for that as Siri starts listening straight after the button is pressed. The method doesn't have to be accurate in what is being said, just recognize that something is being said. What algorithms should I look into?
Answer: Assume noise is not a serious issue in your problem; I guess you can get a pretty clear speech signal. If you have the speech recognition part implemented in your system, I think you should be able to take advantage of the language model in your recognition system. From the transition probabilities, you should get some confidence about the moment at which someone stopped talking.
In case you do not know what a language model is, here is a simple example. Suppose I want to introduce myself. I will say "Hello [verb] everyone [object]. My name [subject] is [verb] Bob [object]. I [subject] came [verb] from ... [object]". The probability of someone stopping talking right after a subject or verb is much lower than after an object. If you can recognize speech, then you should be able to take advantage of these language models.
The other thing worth trying is to see whether you can find some stop patterns when someone is talking. What I mean is that, from word to word and sentence to sentence, everyone might have his/her own stop patterns. If you could successfully retrieve some patterns, for example durations, then they should help you differentiate whether the current stop is like previous stops or not.
"domain": "dsp.stackexchange",
"id": 818,
"tags": "algorithms, signal-detection, speech"
} |
Avoiding "side effects" in recursive functions | Question: I am writing a function to find the intersection between two sets.
The non-functional requirements of the assignment include avoiding "side effects".
function intersection(a, b) {
return helper(a, b, []);
}
function helper(a, b, c) {
if (a.length == 0) {
return c;
}
if (b.includes(a[0])) {
return helper(a.slice(1, a.length), b, c.concat([a[0]]));
}
return helper(a.slice(1, a.length), b, c);
}
Would mutating the argument c in the above code be considered a side effect?
The specific assignment is written in OCaml, so, though the above example is in an imperative language, I want to stay true to the spirit of functional programming.
Please don't provide any alternate solutions to the problem.
Answer: If the helper function mutated the object c (which concat doesn't do in JS), then helper would have that side effect. But overall your intersection function wouldn't have any side effect that I can see, as the c array doesn't exist outside of it.
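That concat does not mutate its receiver is easy to check directly (a quick sketch):

```javascript
const c = [1, 2];
const d = c.concat([3]);

// c is untouched; concat built a fresh array containing 1, 2, 3
console.log(c);
console.log(d);
```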
PS: concat returns a new array every time. The original c array is unchanged. | {
"domain": "cs.stackexchange",
"id": 13704,
"tags": "recursion, functional-programming"
} |
Weird claim of graphclasses about complexity of domination | Question: EDIT this got 'fixed' on graphclasses, as per answers/comments, so you might not reproduce it, unless you have their earlier database, which is publicly available via sage - http://sagemath.org.
Likely this is technical error in graphclasses or in me.
Appears to me graphclasses claims P=NP.
In Domination perfect
we have $\gamma(G)=i(G)$ which implies the complexity of Dominating Set
(DS) and Independent Dominating Set (IDS) is the same in subclasses of
domination perfect.
In line graphs
we have DS NP-complete and IDS Polynomial.
According to the java application of graphclasses, line graphs
are domination perfect, one of the reasons is they are claw-free.
Appears to me this implies P=NP.
What is wrong?
Screenshot:
Answering vb le's question about polynomial IDS on line graphs:
Clicking details recursively, we get: Polynomial on claw-free
and the claim without reference on claw-free:
Polynomial On domination perfect graphs, the Independent set and Independent domininating set problems are equivalent.
Answer: The culprit is the claim that Weighted-IDS is polynomial on claw-free graphs, because the independent set problem is and for domination perfect graphs, IDS is equivalent to IS. But for domination perfect graphs, IDS is not equivalent to IS, but to dominating set. So the statement should be: IDS is NPC on line graphs, because domination is. | {
"domain": "cstheory.stackexchange",
"id": 3091,
"tags": "graph-theory, co.combinatorics, graph-classes"
} |
A mass is hung from the centre of an unstretched, horizontal wire. How do I work out the depression of the centre of the wire? | Question: A wire of unstretched length $l$ is extended by a distance $10^{-3}l$ when a certain mass is hung from its bottom end. If this same wire is connected between two points that are a distance $l$ apart on the same horizontal level, and the same mass is hung from the midpoint, what is the depression $y$ of the midpoint?
I have been struggling with this question for too long, and I am totally stuck. My textbook states that the answer is $l/20$, but doesn't explain how or why. I would greatly appreciate any help you could give, and have described my own attempt below.
My attempt so far:
I have attempted to balance forces on the mass (balancing the vertical component of the tension in each string with the weight of the mass) and then to solve for tension, and equate tension with $kx$, with $k$ being given by $1000mg/l$ (as is easily deducible from the given information) and $x$ being given by $2\sqrt{y^2+l^2/4}-l$, as is easily obtainable by simple trigonometry. For tension I found, by geometry, that $T=mg\sqrt{y^2+l^2/4}/2y$. Equating all of this gives the below equation:
$$\frac{mg\sqrt{y^2+l^2/4}}{2y}=\frac{1000mg}{l}(2\sqrt{y^2+l^2/4}-l)$$
This expression is hideous and practically impossible to solve by hand, and I can't believe that this is the only way to do it.
Answer: Your textbook is right; the key point is the stiffness of each half of the wire. A half-wire of natural length $l/2$ has spring constant $2k$, twice that of the full wire; equivalently, the tension equals $k$ times the extension of the whole wire, $2H-l$.
$$k=1000\frac{mg}{l}$$
With Newton:
$$\mathbf{T_1}+\mathbf{T_2}+m\mathbf{g}=0$$
Scalars:
$$2T\sin\theta =mg$$
$$\sin\theta=\frac{y}{H}$$
$$H=\sqrt{y^2+\frac{l^2}{4}}$$
$$T=2\Big(H-\frac{l}{2}\Big)\times 1000\frac{mg}{l}=1000\frac{mg}{l}\big(2H-l\big)$$
$$\frac{2000}{l}\big(2H-l\big)\times \frac{y}{H}=1$$
This need not be solved exactly: for $y\ll l$, $H\approx \frac{l}{2}+\frac{y^2}{l}$, so $2H-l\approx \frac{2y^2}{l}$ and $\frac{y}{H}\approx \frac{2y}{l}$, giving
$$\frac{8000\,y^3}{l^3}=1\quad\Rightarrow\quad y=\frac{l}{20},$$
which is your textbook's answer.
"domain": "physics.stackexchange",
"id": 57000,
"tags": "homework-and-exercises, newtonian-mechanics, spring, statics"
} |
Is the Wikipedia version of the Heisenberg equation of motion correct? | Question: Back in 2011, this question asked about the Wikipedia version of the Heisenberg equation of motion for an operator $A$:
\begin{equation*}
\frac{d}{dt} A(t) = \frac{i}{\hbar} \left[ H, A(t) \right] + \frac{\partial A}{\partial t}
\end{equation*}
The accepted response asserted that "there is no mistake on the Wikipedia page", and indeed the formula is the same today as it was then.
But it's not correct, I think.
The first two occurrences of $A$ refer to $A_H$, "in the Heisenberg picture", where
\begin{equation*}
A_H(t) = U^\dagger(t) A_S(t) U(t)
\end{equation*}
where $U(t)$ is the unitary time-evolution operator and $A_S(t)$ is the operator in the Schrodinger picture, which may or may not be time-dependent.
So far so good. However, the last term is not
\begin{equation*}
\frac{\partial A_H}{\partial t}
\end{equation*}
but rather
\begin{equation*}
\left(\frac{dA_S}{dt} \right)_H = U^\dagger(t) \frac{dA_S}{dt} U(t)
\end{equation*}
In fact, if one wanted an equation of motion using exclusively $A_H$, I think one would write:
\begin{equation*}
\frac{d}{dt} A_H(t) = \frac{i}{\hbar} \left[ H_H(t), A_H(t) \right]
+ U^\dagger(t) \frac{d}{dt} \left(U(t) A_H(t) U^\dagger(t) \right) U(t)
\end{equation*}
But no one ever does, as far as I can see. Is it wrong, or too ugly for prime time, or???
Answer: You may see the idea on why the formula is written like that by a comparison with the classical phase space formula.
In classical phase space you have observables of the type $a(x,p,t)$, or more precisely $a(x(t),p(t),t)$; where $x(t)$, $p(t)$ and $t$ are considered independent variables. The equation of motion for $a$ is given by:
$$\frac{d}{dt}a(x(t),p(t),t)=\{h,a\}+\partial_t a\; ;$$
where $h$ is the classical hamiltonian, $\{\cdot,\cdot\}$ the Poisson bracket and the partial derivative in time is intended as deriving the functional form $a(\cdot,\cdot,\cdot)$ above w.r.t the third variable only (i.e. disregarding the fact that the first two argument may depend on the third). This is quite usual when dealing with function of many variables, and is a widely accepted convention (only the explicit dependence is taken into account when taking partial derivatives).
In quantizing, a formula with the same flavour is obtained replacing functions with operators (with suitable ordering, and in their Heisenberg version)---for me it will be replacing small letters with capital ones---and Poisson brackets with $\frac{i}{\hbar}[\cdot,\cdot]$. This is just a naïve procedure, and not an exact derivation of the formula; nevertheless you get:
$$\frac{d}{dt}A(X(t),P(t),t)=\frac{i}{\hbar}[H,A]+\partial_t A\; .$$
Again the meaning remains unchanged: the partial derivative is intended as the derivative of the functional form $A(\cdot,\cdot,\cdot)$ with respect to the third variable only.
Note (related to the OP comment in the other answer): As a disclaimer, I will not consider in the following any problem of domains of operators, convergence of series and so on; nevertheless also these problems are to be taken into account. The statements have to be intended as true on suitable domains and with suitably regular functions.
Let $A(X,P,t)$ be a polynomial function (regardless of the ordering), with $[X,P]\neq 0$, and $t$ commutes with everything. Let also $U(t)=U(X,P,t)$ be a unitary operator (in particular we will use $U^*(t)U(t)=U(t)U^*(t)=id$, where $id$ is the identity operator) such that $X(t):= U^*(t)XU(t)$, and $P(t)=U^*(t)PU(t)$. It is then clear that, using the property of $U$ above,
$$U^*(t)A(X,P,t)U(t)=A(X(t),P(t),t)\; .$$
This can be extended, roughly speaking, to any function that admits a series expansion.
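A quick finite-dimensional sanity check of this identity (my own addition, not from the original answer; it deliberately ignores all the unbounded-operator subtleties discussed below), using a random unitary and the polynomial $A(X)=X^2+3X$:

```python
import numpy as np

# For a unitary U, conjugation commutes with taking polynomials:
# U* A(X) U = A(U* X U), because U U* = id cancels between factors.
rng = np.random.default_rng(1)
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(M)              # Q factor of a generic matrix is unitary
X = rng.normal(size=(n, n))
X = X + X.T                         # Hermitian, like a real observable

def A(Y):
    return Y @ Y + 3 * Y            # a simple polynomial "function" of Y

lhs = U.conj().T @ A(X) @ U         # conjugate the polynomial as a whole
rhs = A(U.conj().T @ X @ U)         # evaluate it on the conjugated X
print(np.allclose(lhs, rhs))        # → True
```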
Now given this equality, if you interpret the partial derivative as acting only explicitly on $t$, you see that again you obtain for polynomials/functions with series expansion
$$\partial_t A(X(t),P(t),t)=U^*(t)\partial_tA(X,P,t)U(t)\; ,$$
for the functional form of $\partial_t A(\cdot,\cdot,t)$ is always the same, and the "replacement" of $X,P$ with $X(t),P(t)$ (by means of $U$) can be done either before or after the derivative without changing the functional form (just changing $X$ in $X(t)$ and $P$ in $P(t)$).
As I said, this may not be true for more general functions; anyway, this type of Heisenberg evolution formula, written like that, is valid only in very special situations, if you want to be precise and consider domain problems and the proper definition of functions of unbounded operators. | {
"domain": "physics.stackexchange",
"id": 20773,
"tags": "quantum-mechanics"
} |
Can blood transfusion help to fight cancer cells? | Question: First of all, I'm not a biology student, nor do I have sufficient knowledge of biology, so I apologize if this question appears silly. Let's say patient A has cancer cells and a healthy person B has the same blood type as patient A. By using a blood-regulating device to circulate blood between A and B, will the white cells from person B recognize the normal cells in patient A as foreign and attack them? And is there a chance the white cells of person B recognize the cancer cells in patient A as foreign and attack them?
So far, this is what I understand from the internet:
At an early stage, white blood cells will detect cancer cells, but as the cells evolve the white blood cells no longer recognize them as foreign; the cancer enters the escape phase, in which it starts to grow without obstruction. However, I was not able to find articles on, for example, how cell-membrane composition differs between people. Logically, if it is similar, the white cells will not recognize normal cells in either person as foreign (correct me if I'm wrong), but a cancer cell that has entered the escape phase might not be treated as a normal cell by person B's white cells, due to differences in its membrane composition. If this is true, could this method be used to save cancer patients, especially for family members willing to use a blood-regulating device to constantly circulate blood between the two, thereby sending person B's white blood cells into A to fight the cancer cells? If this works, could it also be used to save Covid patients at a critical stage, with an immune person circulating blood between the two, as if lending their immune system to the infected person?
Answer: White blood cells have antibodies that target specific proteins on the surface of different cells; this is why blood transfusions need to occur with the same blood type in each patient. (If the types were mismatched, your body would produce antibodies to attack the foreign blood cells.) To address your first question: as long as the blood types are the same, the cells inside patient B's blood will not attack any of patient A's blood. Even if patient B had a different blood type from A, if his blood were introduced into A's bloodstream, a healthy amount of B lymphocytes (cells that stem from bone marrow and differentiate to create new antibodies) would need to be introduced along with it; otherwise, the cells would not have the capability to create antibodies that are compatible with A's blood. However, doing this wouldn't really be a good idea, as it would cause an indiscriminate attack: all of A's blood cells would be attacked.

To answer your second question: after cancer cells get past a certain stage in their development, they lose important proteins that the body uses to identify them. In other words, they sneak past the immune system and begin to spread. The escape phase is a direct result of this: the tumor begins to metastasize throughout the body and the body fails to stop it. To be frank, there is very little to do here; the issues with the blood-transfusion method are detailed above.

To address your final statement about Covid: from what I understand, this is mostly correct. As you may remember, people with antibodies often donate their plasma to be used in other patients. However, the key part of this is that the blood cells are not the useful part of the transfusion; the antibodies are. It isn't a matter of blood transfusion, it's a matter of antibody transfer. For the most part, however, your logic is correct. I could help you a little more if you linked the websites that you researched. Thanks. | {
"domain": "biology.stackexchange",
"id": 11711,
"tags": "immunology"
} |
Compute Similarity between Fourier Transforms | Question: I'm looking to compare the Fourier Transforms generated by accelerators and gyroscopes that collected data of people walking. I've looked to see if there is a standard form of comparison, but I have yet to find one.
Here's my specific use case: I'm looking to compare similarity between the dominant signals of two Fourier Transforms (i.e. how close are they to being the same frequency/magnitude). Is there a specific metric that would help identify that?
If not (or perhaps irrespective of my goal), what are some common ways of comparing two Fourier transforms? I've found surprisingly little on the subject.
Answer: Most simply, you can compute simple statistical measures on each FFT frame and compare those directly. The signal energy, its distribution, variance, etc across the frequency spectrum are interesting measures. You can also directly compute the correlation coefficient between accelerometer signal pairs (normalized to their length).
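A minimal sketch of this first suggestion (my own illustration; all function names and parameters here are made up): compare the dominant-peak frequency and magnitude of two one-sided spectra, plus a whole-spectrum correlation coefficient.

```python
import numpy as np

def spectrum(x, fs):
    # One-sided magnitude spectrum of a real-valued signal
    mag = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, mag

def compare_dominant(x, y, fs):
    fx, mx = spectrum(x, fs)
    fy, my = spectrum(y, fs)
    ix, iy = mx.argmax(), my.argmax()          # dominant bins
    return {
        "freq_diff_hz": abs(fx[ix] - fy[iy]),  # dominant-frequency gap
        "mag_ratio": mx[ix] / my[iy],          # dominant-magnitude ratio
        "corr": np.corrcoef(mx, my)[0, 1],     # whole-spectrum similarity
    }

fs = 100.0                                     # e.g. accelerometer sample rate
t = np.arange(0, 2, 1 / fs)
a = np.sin(2 * np.pi * 2.0 * t)                # ~2 Hz "gait" component
b = 0.9 * np.sin(2 * np.pi * 2.1 * t)          # slightly different walker
print(compare_dominant(a, b, fs))
```

Note that `corr` here is computed between magnitude spectra, so it ignores phase, which is usually what you want when comparing gait periodicity rather than alignment.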
It might be useful to look into research on “gait recognition”, the literature points to “low frequency-domain entropy” as feature that is useful in discriminating accel/gyro data that differ in “complexity”
Have you seen this thread on comparing 3d data from accelerometers. It is worth considering if using the raw 3d data, doing a dimensionality reduction (PCA, etc) and then using a simple distance metric may get you further than analyzing raw FFT output. Doing an FFT as you've proposed doing is only one way to simplify the data, but I think the Fourier transform is too coarse & noisy to give you directly comparable results, every time.
As an alternative, it might be easier/faster to use machine learning, i.e. training a "people walking" classifier using any modern supervised learning technique. | {
"domain": "dsp.stackexchange",
"id": 2490,
"tags": "signal-analysis, fourier-transform"
} |
What is this notation in the context of the radial wave equation: $R(r)=\frac{u(r]}{r}$? | Question: Here is the whole problem; I understand the context, but I have never encountered this notation before. It looks like set notation, but it is not defining a set of real numbers.
Problem: Writing the radial wave function, $R(r)=\frac{u(r]}{r}$, show that the radial equation becomes:
\begin{align}
-\frac{\hbar^2}{2m}\frac{d^2 u}{dr^2} + \left[ V(r) + \frac{\hbar^2 \ell (\ell + 1)}{2mr^2}\right]u = Eu
\end{align}
Compare this with the one dimensional Schrodinger equation. What's similar, what's different?
Question: How do I start this problem? Specifically, what is the notation in the prompt?
Answer:
How do I start this problem? Specifically, what is the notation in the prompt.
As noted in the comments, it's a typo, it should be $R (r) = \frac {u (r)}{r} $, or you might see it written as $R_{nl}(r) = \frac {1}{r}U_{nl}(r) $ which allows for the different combinations of energy level $n $ and angular momentum $l $.
Compare this with the one dimensional Schrodinger equation. What's similar, what's different?
$$\begin{align} -\frac{\hbar^2}{2m}\frac{d^2 u}{dr^2} + \left[ V(r) + \frac{\hbar^2 \ell (\ell + 1)}{2mr^2}\right]u = Eu \end{align}$$
What's different is the effective potential term $$V(r) +\frac{\hbar^2 \ell (\ell + 1)}{2mr^2}.$$
If your book is noticeably high on typos, you might need another source, as there are lots of substitutions and rearrangements to follow to solve this.
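As a starting sketch (my own addition, assuming the standard form of the radial equation before the substitution): with $R=u/r$, the radial derivative term collapses,

```latex
\frac{1}{r^{2}}\frac{d}{dr}\!\left(r^{2}\frac{dR}{dr}\right)
=\frac{1}{r^{2}}\frac{d}{dr}\!\left(r^{2}\left(\frac{u'}{r}-\frac{u}{r^{2}}\right)\right)
=\frac{1}{r^{2}}\frac{d}{dr}\left(ru'-u\right)
=\frac{u''}{r},
```

so multiplying $-\frac{\hbar^2}{2m}\frac{1}{r^2}\frac{d}{dr}\big(r^2\frac{dR}{dr}\big)+\big[V+\frac{\hbar^2\ell(\ell+1)}{2mr^2}\big]R=ER$ through by $r$ yields exactly the equation quoted in the problem.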
Related: Solving The Radial Equation | {
"domain": "physics.stackexchange",
"id": 35291,
"tags": "quantum-mechanics, notation, textbook-erratum"
} |
Kleinberg-consistency of spectral clustering | Question: Spectral clustering refers to a family of graph-based algorithms, which usually rely on a similarity function rather than a metric, though a metric $\rho(x,y)$ can always be converted to a similarity function, say $e^{-\rho(x,y)}$.
What I called Kleinberg-consistency is commonly referred to in the clustering literature as "Kleinberg's consistency axiom". A clustering algorithm takes a finite point set $S$ together with a metric $\rho$ as input, and returns a clustering $\mathcal{C}$ of $S$, which is just a nontrivial partition of $S$.
The clustering algorithm $A$ is consistent if it satisfies the following property: Run $A$ on point set $S$ with metric $\rho$, to obtain some clustering $\mathcal{C}$. Now take any $\rho'$ such that all $x,y\in S$ that fell into the same cluster $C\in\mathcal{C}$ satisfy $\rho'(x,y)\le\rho(x,y)$ while for all $x,y$ that fell into distinct clusters, we have $\rho'(x,y)\ge\rho(x,y)$. Consistency requires that if we run $A$ on $(S,\rho')$, we obtain the same clustering $\mathcal{C}$ as we did for $\rho$.
Question: Is there any specific natural spectral clustering algorithm that is known (not) to be Kleinberg-consistent?
NB: The question is explicitly not about the statistical consistency of spectral clustering, which has been addressed in the literature.
Answer: Answering my own question, with a hat tip to Misha Belkin (private correspondence). It is easy to see that the unnormalized mincut and the RatioCut algorithm (see here for precise definitions
http://www.kyb.mpg.de/fileadmin/user_upload/files/publications/attachments/Luxburg07_tutorial_4488%5b0%5d.pdf
) are Kleinberg-consistent. This is because the intra-cluster distances/similarity does not influence the objective function in any way.
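The RatioCut part of this can be checked numerically; the following is my own sketch (toy two-cluster data, with the $e^{-\rho}$ similarity from the question): shrinking intra-cluster distances changes the intra-cluster similarities but leaves the RatioCut value untouched, because only inter-cluster weights enter the cut.

```python
import numpy as np

def ratio_cut(W, labels):
    # RatioCut for a 2-way partition: inter-cluster weight, scaled by sizes
    a = labels == 0
    b = ~a
    return W[np.ix_(a, b)].sum() * (1.0 / a.sum() + 1.0 / b.sum())

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (5, 2)), rng.normal(5, 0.3, (5, 2))])
labels = np.array([0] * 5 + [1] * 5)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
W = np.exp(-d)                      # metric -> similarity, as in the question

# Kleinberg-style transformation: shrink intra-cluster distances only
d2 = d.copy()
d2[labels[:, None] == labels[None, :]] *= 0.5
W2 = np.exp(-d2)

print(ratio_cut(W, labels), ratio_cut(W2, labels))   # identical values
```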
The Ncut-based clustering algorithm, however, is not Kleinberg-consistent. This can be seen by making the similarity weights within some cluster sufficiently large that it is able to absorb nearby clusters with negligible extra charge. | {
"domain": "cstheory.stackexchange",
"id": 3948,
"tags": "machine-learning, lg.learning, clustering"
} |
Philosophy of perfect inter-sample interpolation | Question: How would you define perfect (inter-sample) interpolation, and is it possible?
To quote Armifinn's prior answer:
"I guess the most important result is that for signals with bandwidth limitation, you can have perfect reconstruction via sinc(⋅)sinc(⋅) convolution; the famous sampling theorem..."
If by perfect interpolation it is meant that an analog band-limited signal is recovered perfectly from digital samples, wouldn't a sinc interpolation lead to some problems? Considering that analog equipment is causal while sinc interpolation is non-causal, shouldn't a minimum-phase low-pass filter be used instead?
On the other hand, I understand that the above method would not preserve the phase information of the system. Sampling theorem states that band-limited signals can be captured perfectly. So indeed that would be a contradiction.
Finally, a quick thought experiment: consider an analog Dirac delta impulse that is low-pass filtered using a perfect causal analog filter so that it satisfies the sampling criterion. The signal is then digitally captured by a DSP engineer. Now, the DSP engineer wants to reconstruct the analog signal in his lab, and after running the perfect sinc interpolation he discovers that the signal in fact started BEFORE the analog Dirac delta even occurred; to be more precise, an infinite time before the analog signal started. This casts some doubt on how the interpolation should in fact be done, and on whether the band-limiting criterion is so easy to satisfy.
Answer: By the fundamental notions of signal processing, we can state that a perfect inter-sample interpolation of a perfectly bandlimited signal from its samples is not possible, due to the fact that a perfectly bandlimited signal would be infinitely long, and infinitely many of its samples would be required to compute any single interpolated inter-sample value; hence it is impossible.
That being said, i.e. given the impossibility of obtaining infinitely many samples, a practical answer would be yes, if you allow an upper bound on the acceptable interpolation error, such as lower than the round-off error within a digital system. This will be a truncation error whose spectral manifestation is spectral smearing, due to the convolution of the truncating window's frequency spectrum with the true spectrum of the bandlimited signal (i.e. multiplication in time is convolution in frequency).
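As an illustration of the truncation-error point (my own sketch, not part of the original answer): reconstructing an off-grid value of a sampled sinusoid with a truncated sinc kernel, the error typically shrinks as the window grows, but never reaches zero with finitely many samples.

```python
import numpy as np

def sinc_interp(samples, t, fs, taps):
    # Truncated-sinc reconstruction of x(t) from x[n] = x(n/fs), using only
    # `taps` samples each side of t instead of the (impossible) full series.
    n0 = int(round(t * fs))
    n = np.arange(max(n0 - taps, 0), min(n0 + taps + 1, len(samples)))
    return np.sum(samples[n] * np.sinc(t * fs - n))

fs = 10.0                                # sampling rate, Hz
n = np.arange(1000)
f0 = 1.3                                 # well below the 5 Hz Nyquist limit
x = np.sin(2 * np.pi * f0 * n / fs)

t = 50.37                                # an off-grid instant, in seconds
true = np.sin(2 * np.pi * f0 * t)
for taps in (5, 50, 400):
    print(taps, abs(sinc_interp(x, t, fs, taps) - true))
```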
Also, a tradeoff can be sought: instead of a perfectly bandlimited signal, you can work with a sufficiently bandlimited signal whose aliasing error is within acceptable bounds. Just as in the previous paragraph, in this case you won't need infinitely many samples, but there will be some spectral overlap. | {
"domain": "dsp.stackexchange",
"id": 4000,
"tags": "sampling, interpolation"
} |
Get sporadic messages from periodic publisher | Question:
Hi everyone,
I've just started working with ros, this is my first post here :-)
I'd like to ask a question: I have a publisher that publishes a kinect stream at a certain rate.
Until now I've always needed to process every snapshot, so I just subscribed to the topic and process each message in my callback.
I was thinking that sometimes it could be useful for a client to get single messages, sporadically, from a periodic publisher. Is there an easy way to do that?
I could skip messages in my callback until I want to grab one, but I'd like to avoid the overhead of the calls I don't need.
I guess I could play with subscribe/unsubscribe on demand, but I imagine it would be very slow.
The better way that I was able to think is to modify the publisher, adding a service that, when called, returns just the last message.
This way it could be possible to have a periodic publisher that could also be used as a "sporadic" publisher. Is this the best solution, or there's something simpler, that doesn't need to modify the publisher?
To simplify, I want to use the same publisher as sporadic for some clients, as periodic for some others.
Thank you :-)
Originally posted by Kilin on ROS Answers with karma: 56 on 2012-12-10
Post score: 0
Answer:
I think the easiest solution here really is to let the callback always execute and only use the value if required. Callbacks are not too expensive and I would consider trying to avoid them without facing actual issues premature optimization.
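The keep-the-latest-message pattern suggested here can be sketched independently of ROS (my own illustration; in actual rospy code the `callback` method below would be registered via `rospy.Subscriber`, and `rospy.wait_for_message` is the built-in one-shot alternative):

```python
import threading

class LatestMessage:
    """Caches the most recent message from a periodic callback so that
    consumers can read it sporadically without resubscribing."""
    def __init__(self):
        self._lock = threading.Lock()
        self._msg = None

    def callback(self, msg):          # register this as the topic callback
        with self._lock:
            self._msg = msg

    def get(self):                    # cheap sporadic read of the latest value
        with self._lock:
            return self._msg

cache = LatestMessage()
for i in range(5):                    # stand-in for the periodic publisher
    cache.callback({"seq": i})
print(cache.get())                    # → {'seq': 4}
```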
roscpp provides the function ros::topic::waitForMessage to just get a single message from a topic. But this method might first subscribe, then wait, and finally unsubscribe from the topic, which is much more expensive than executing a normal callback multiple times, given that the size of the received message is moderate.
Your proposed solution with a service sounds ok, too. If you can modify your publisher and if you really run into performance problems, I think it should be your preferred solution.
Originally posted by Lorenz with karma: 22731 on 2012-12-10
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 12039,
"tags": "ros, topics"
} |
Publishing arrays unsuccessful | Question:
Hi all,
I have been following this question very closely.
This is how mine looks like:
def talker(armj_acc) :
    pub10 = rospy.Publisher('/armj_acc', obs_arm_JointAccelerations)
    msg2send10 = obs_arm_JointAccelerations()
    msg2send10.header.stamp.secs = time_now
    msg2send10.accelerations = armj_acc
    pub10.publish(msg2send10)
I have my own custom message, by the name obs_arm_JointAccelerations which contains
Header header
float64[] accelerations
The variable armj_acc is actually a list whose values have been successfully calculated and passed into the function, which I print them on the screen. The calculation for the armj_acc is done via subscribing velocities from the /joint_states topic, every 4 seconds ((current velocity - previous velocity) /4).
The problem is, whenever I try to echo the topic, it gives me the following error:
(1) In the terminal where the topic is being echo-ed
WARNING: no messages received and simulated time is active.
Is /clock being published?
(2) In the terminal where the code is run:
[ERROR] [WallTime: 1367266510.640556] [5269.097000] bad callback: <function callback at 0x2f6e1b8>
Traceback (most recent call last):
File "/opt/ros/fuerte/lib/python2.7/dist-packages/rospy/topics.py", line 678, in _invoke_callback
cb(msg)
File "/home/path/to/my/package/script/obs_armj_all.py", line 47, in callback
talker(armj_names, armj_act_positions, armj_act_velocities, armj_effort_act, armj_effort_diff, armj_pos_rel_diff)#, armj_acc)#, armj_jerk)
File "/home/path/to/my/package/script/obs_armj_all.py", line 185, in talker
pub10.publish(msg2send10)
File "/opt/ros/fuerte/lib/python2.7/dist-packages/rospy/topics.py", line 796, in publish
raise ROSSerializationException(str(e))
ROSSerializationException: cannot convert argument to integer
Any ideas?
Thanks.
Originally posted by whiterose on ROS Answers with karma: 148 on 2013-04-29
Post score: 0
Answer:
If your posted code matches what you're running, it looks like you're publishing the wrong variable. You're trying to publish the armj_acc array directly, rather than the msg2send10 message. I'm also not sure that you're allowed to assign a time directly to the header field like you've shown.
Try this:
def talker(armj_acc) :
    pub10 = rospy.Publisher('/armj_acc', obs_arm_JointAccelerations)
    msg2send10 = obs_arm_JointAccelerations()
    msg2send10.header.stamp = rospy.Time.now()
    msg2send10.accelerations = armj_acc
    pub10.publish(msg2send10)
Originally posted by Jeremy Zoss with karma: 4976 on 2013-04-29
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by whiterose on 2013-04-29:
I edited that one, it is now: msg2send10.header.stamp.secs = time_now, where time_now = rospy.Time.now(), which is actually wrong. It is supposed to be: msg2send10.header.stamp.secs = time_now.secs... Thanks! | {
"domain": "robotics.stackexchange",
"id": 14000,
"tags": "ros, rostopic-echo, publish"
} |
Are pumps used in geothermal systems | Question: The water held underground in a geothermal system is under high temperature and pressure. Are pumps used to raise the very hot underground water to the surface or does the water rise under its own pressure? One source says that the water is pumped up: https://energy.gov/eere/geothermal/electricity-generation while another says that it rises on its own: https://www.youtube.com/watch?v=kjpp2MQffnw I am not sure if I am just misunderstanding the wording. Does anyone know which is correct?
Answer: They are both correct. There are actually a number of different types of geothermal power plant, with the most common being dry steam, flash steam, and binary cycle.
Dry steam plants require pumps to draw from underground resources of steam. This is piped directly from underground wells to the power plant.
Flash steam plants (the most common) use geothermal reservoirs that have naturally high temperatures (180 Celsius +). The hot water from the reservoir flows up through a well under its own pressure. As it reaches the surface the pressure decreases and the water begins to boil and turn into steam. Any water left over from this process is then pumped back into the reservoir.
Binary cycle power plants are more complex but are capable of operating at much lower temperatures (100-180 Celsius). These use hot water pumped from an underground reservoir to heat an organic fluid with a low boiling point. This organic fluid is vaporised in a heat exchanger, which turns a turbine. The leftover water is "injected" back into the reservoir.
If you want to read up more on the specific types of geothermal plant, this website has more information: https://www.conserve-energy-future.com/geothermalpowerplanttypes.php
Hope that helps! | {
"domain": "engineering.stackexchange",
"id": 1850,
"tags": "pressure, heat-transfer, pumps, geotechnical-engineering"
} |
Wormhole related experiments? | Question: Are there any simple "experiments" that can be done in a high school science lab that could demonstrate some sort of basic principals of wormholes or spacetime? Or sort of proving how long something would take to get through a wormhole or why you wouldn't be able to travel through them etc.
Answer: My favorite explanation of something similar to a wormhole is this, from Madeline L'Engle's A Wrinkle in Time
Mrs. Who took a portion of her white robe in her hands and held it tight.
"You see," Mrs. Whatsit said, "if a very small insect were to move from the section of skirt in Mrs. Who's right hand to that in her left, it would be quite a long walk for him if he had to walk straight across."
Swiftly Mrs. Who brought her hands, still holding the skirt, together.
“Now, you see," Mrs. Whatsit said, "he would be there, without that long trip. That is how we travel."
There's a bit more after this on dimensions, but it doesn't completely explain the issue (plus it uses some slightly wrong physics--it is, after all, a children's book)
You can do this explanation with a string. Then show them that a string (one-dimensional) had to be bent into a two-dimensional loopy thing.
Then, do the same thing with a piece of paper. Here, a 2D paper is bent into a 3D thing.
Now try to explain how this is possible in the real world--bending of 3D space into 4D spacetime in a loopy manner.
It would help if you explained the standard rock-in-a-rubber-sheet analogy of general relativity at first. Of course, I've seen this particular analogy lead to misconceptions/confusions frequently. But there's no way that I know of to explain it correctly without confusing the lot of them. | {
"domain": "physics.stackexchange",
"id": 2543,
"tags": "spacetime, wormholes"
} |
Possible error in Griffiths, Intro. to Electrodynamics, Problem 7.60 | Question: Problem 7.60 of Griffiths' Introduction to Electrodynamics, 4th ed, says:
Suppose $\mathbf J(\mathbf r)$ is constant in time but $\rho(\mathbf r, t)$ is not—conditions that
might prevail, for instance, during the charging of a capacitor.
(a) Show that the charge density at any particular point is a linear function of time:
$$
\rho(\mathbf r, t) = \rho(\mathbf r, 0) + \dot \rho(\mathbf r, 0)t,
$$
where $\dot \rho(\mathbf r, 0)$ is the time derivative of ρ at t = 0.
(b) Show that
$$
\mathbf B(\mathbf r) = \frac{\mu_0}{4\pi} \int \frac{\mathbf J(\mathbf r')\times \hat{\boldsymbol{\ell}}}{\ell^2} d \tau'
$$
obeys Ampère’s law with Maxwell’s displacement current term.
Where
$
\ell = |\boldsymbol{\ell}|, \quad
\boldsymbol{\ell} = \mathbf {r - r'}, \quad
\boldsymbol{\hat \ell} = \boldsymbol{\ell}/\ell, \quad
$ (since I don't know how to type his cursive $r$ in LaTeX.)
So part (a) is solved by invoking (as hinted) the continuity equation. But for (b) I have a different answer, namely
$$
\mathbf B(\mathbf r) = \frac{\mu_0}{4\pi} \int \frac{(\mathbf J(\mathbf r') + \varepsilon_0 \dot{\mathbf E}(\mathbf r',0) )\times \hat{\boldsymbol{\ell}}}{\ell^2} d \tau'
$$
and this is my justification:
Since the charge density is as above and $\mathbf E(\mathbf r,t)$ is given (as in the manual) by
$$
\mathbf E(\mathbf r,t) = \frac{1}{4\pi\varepsilon_0} \int \frac{\rho(\mathbf r',t) \hat{\boldsymbol{\ell}}}{\ell^2} d\tau' = \mathbf E(\mathbf r,0) + \dot{\mathbf E}(\mathbf r,0)t
$$
then the two Maxwell's equations
\begin{align*}
\nabla \cdot \mathbf B &= 0 & \nabla \times \mathbf B = \mu_0 \mathbf J + \mu_0\varepsilon_0 \partial_t \mathbf E,
\end{align*}
with $\mathbf J$ and $\partial_t \mathbf E$ functions only of position (but not of time) and with $\nabla\times \mathbf E = 0$ (by direct computation), are decoupled from the other two, and we have the solution for $\mathbf B$ as the Biot-Savart law with $\mathbf J(\mathbf r')$ replaced by $\mathbf J(\mathbf r') + \varepsilon_0 \dot{\mathbf E}(\mathbf r',0)$.
Otherwise where is my mistake ?
Answer: Griffiths wants you to realize that that integral of $\partial_t \mathbf E$, apparently contributing to magnetic field $\mathbf B$, is zero.
Any field $\mathbf B$ whose divergence vanishes everywhere can be expressed as
$$
\mathbf B = \nabla \times \mathbf A
$$
where one possible vector potential $\mathbf A$ is
$$
\mathbf A(\mathbf x) = \frac{1}{4\pi}\int \frac{\nabla_{\mathbf x'} \times \mathbf B(\mathbf x')}{|\mathbf x - \mathbf x'|}d^3 \mathbf x'.
$$
This follows from Helmholtz's theorem. We assume the integral exists.
Using Maxwell's equations, we can express the vector potential in this way:
$$
\mathbf A = \frac{1}{4\pi}\int \frac{\mu_0\mathbf j(\mathbf x')+ \mu_0\epsilon_0 \partial_t \mathbf E(\mathbf x')}{|\mathbf x - \mathbf x'|}d^3 \mathbf x'.
$$
There are two terms in the integrand. If the latter term contributes zero to magnetic field $\mathbf B$, it can be dropped and we arrive at the conclusion that magnetic field, obeying Maxwell's equations with displacement current, is given by the Biot-Savart formula.
We have to assume (in addition to what is stated as assumptions in the assignment) that magnetic field is constant in time, so electric field is a conservative field, so it can be expressed as
$$
\mathbf E = - \nabla \varphi
$$
for some function $\varphi$. We will show this means that contribution of $\partial_t \mathbf E$ to the vector potential is a conservative field, hence a gradient of something, hence has zero curl, hence does not contribute to magnetic field.
We start with the integral
$$
I = \int \frac{\partial_t \mathbf E(\mathbf x')}{|\mathbf x - \mathbf x'|}~d^3\mathbf x'
$$
and express it using electric potential:
$$
I = -\int \frac{\partial_t \nabla_{\mathbf x'} \varphi(\mathbf x')}{|\mathbf x - \mathbf x'|}~d^3\mathbf x'
$$
Using substitution of variables in the integral and switching the order of differentiation and integration, we can express this quantity also in this way:
$$
I = -\nabla_{\mathbf x} \int \frac{\partial_t \varphi(\mathbf x')}{|\mathbf x - \mathbf x'|}~d^3\mathbf x'
$$
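The change-of-variables step can be spelled out (an expansion of the original argument): substituting $\mathbf x'=\mathbf x-\mathbf u$ and noting $(\nabla\varphi)(\mathbf x-\mathbf u)=\nabla_{\mathbf x}\,\varphi(\mathbf x-\mathbf u)$,

```latex
\int \frac{(\nabla\varphi)(\mathbf x')}{|\mathbf x-\mathbf x'|}\,d^3\mathbf x'
=\int \frac{(\nabla\varphi)(\mathbf x-\mathbf u)}{|\mathbf u|}\,d^3\mathbf u
=\nabla_{\mathbf x}\int \frac{\varphi(\mathbf x-\mathbf u)}{|\mathbf u|}\,d^3\mathbf u
=\nabla_{\mathbf x}\int \frac{\varphi(\mathbf x')}{|\mathbf x-\mathbf x'|}\,d^3\mathbf x';
```

applying this with $\varphi$ replaced by $\partial_t\varphi$ gives the gradient form above.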
so this is a conservative field with zero curl. Hence it does not contribute to magnetic field.
Hence, under the assumption that the magnetic field is constant in time, the Biot-Savart formula correctly gives the magnetic field obeying the Maxwell equations with the displacement current term. | {
"domain": "physics.stackexchange",
"id": 88273,
"tags": "electromagnetism, maxwell-equations, textbook-erratum"
} |
Addressing possible strategic problems with LDAP module and unit testing code | Question: I'm a sysadmin writing a tool to perform administrative tasks on our upcoming new account management system that runs over LDAP. I want the tool to be highly flexible, configurable, and reliable, so I'm using automated test-driven development. I've started writing a module to perform LDAP connections and commands, but the problem is that writing my unit tests for this module takes 90% of my time. As it stands at the moment, with only a couple of class methods implemented so far with four more on the drawing board, here is the module that I am testing:
import ldap
import ldap.sasl

SCOPE=ldap.SCOPE_SUBTREE # will be moved to configuration at a later point

class auth():
    kerb, simple, noauth = range(3)

class LDAPObjectManager():

    def __init__(self, uri, authtype, user=None, password=None, **kwargs):
        self._ldo = ldap.initialize(uri)
        for key, value in kwargs.items():
            self._ldo.set_option(getattr(ldap, key), value)
        if authtype == auth.simple:
            self._ldo.simple_bind_s(user, password)
        elif authtype == auth.kerb:
            self._ldo.sasl_interactive_bind_s('', ldap.sasl.gssapi())

    def _stripReferences(self, ldif):
        return filter(lambda x: x[0] is not None, ldif)

    def gets(self, sbase, sfilter):
        ldif = self._ldo.search_ext_s(sbase, SCOPE, sfilter)
        result = self._stripReferences(ldif)
        if not result:
            raise RuntimeError("""No results found for single-object query:
base: %s
filter: %s""" %(sbase, sfilter))
        if len(result) > 1:
            raise RuntimeError("""Too many results found for single-object \
query:
base: %s
filter: %s
results: %s""" %(sbase, sfilter, [r[0] for r in result]))
        return result[0]

    def getm(self, sbase, sfilter):
        return self._stripReferences(self._ldo.search_ext_s(sbase, SCOPE,
            sfilter))
Here are my unit tests that I've written so far:
import mock
import unittest
import src.ldapobjectmanager
@mock.patch('src.ldapobjectmanager.ldap', autospec=True)
class TestLOMInitializationAndOptions(unittest.TestCase):
def testAuth(self, mock_ldap):
uri = 'ldaps://foo.bar:636'
def getNewLDOandLOM(auth, **kwargs):
ldo = mock_ldap.ldapobject.LDAPObject(uri)
mock_ldap.initialize.return_value = ldo
lom = src.ldapobjectmanager.LDAPObjectManager(uri, auth, **kwargs)
return ldo, lom
# no auth
ldo, lom = getNewLDOandLOM(src.ldapobjectmanager.auth.noauth)
self.assertEqual(ldo.simple_bind_s.call_args_list, [])
self.assertEqual(ldo.sasl_interactive_bind_s.call_args_list, [])
# simple auth
user = 'foo'
password = 'bar'
ldo, lom = getNewLDOandLOM(src.ldapobjectmanager.auth.simple,
user=user, password=password)
self.assertEqual(ldo.simple_bind_s.call_args_list,
[((user, password),)])
# kerb auth
sasl = mock.MagicMock()
mock_ldap.sasl.gssapi.return_value = sasl
ldo, lom = getNewLDOandLOM(src.ldapobjectmanager.auth.kerb)
self.assertEqual(ldo.sasl_interactive_bind_s.call_args_list,
[(('', sasl),)])
def testOptions(self, mock_ldap):
uri = 'ldaps://foo.bar:636'
def addOption(**kwargs):
ldo = mock_ldap.ldapobject.LDAPObject(uri)
mock_ldap.initialize.return_value = ldo
for key, value in kwargs.items():
if not hasattr(mock_ldap, key):
with self.assertRaises(AttributeError):
lom = src.ldapobjectmanager.LDAPObjectManager(uri,
src.ldapobjectmanager.auth.noauth, **{key:value})
else:
lom = src.ldapobjectmanager.LDAPObjectManager(uri,
src.ldapobjectmanager.auth.noauth, **{key:value})
self.assertEqual(ldo.set_option.call_args,
((getattr(mock_ldap, key), value),))
addOption(OPT_X_TLS=1, OPT_BOGUS=1, OPT_URI="ldaps://baz.bar")
@mock.patch('src.ldapobjectmanager.ldap', autospec=True)
class TestLOMGetMethods(unittest.TestCase):
def testGets(self, mock_ldap):
uri = 'ldaps://foo.bar:636'
ldo = mock_ldap.ldapobject.LDAPObject(uri)
mock_ldap.initialize.return_value = ldo
lom = src.ldapobjectmanager.LDAPObjectManager(uri,
src.ldapobjectmanager.auth.kerb)
# if gets() fails to find an object, it should throw an exception
ldo.search_ext_s.return_value = []
with self.assertRaises(RuntimeError) as err:
lom.gets("", "")
# sometimes references are included in the result
# these have no DN and should be discarded from the result
ldo.search_ext_s.return_value = [(None, ['ldaps://foo.bar/cn=ref'])]
with self.assertRaises(RuntimeError) as err:
lom.gets("", "")
# if gets() finds > 1 object, it should throw an exception
ldo.search_ext_s.return_value = [
('CN=fred,OU=People,DC=foo,DC=bar', {'name': ['fred']}),
('CN=george,OU=People,DC=foo,DC=bar', {'name': ['george']})
]
with self.assertRaises(RuntimeError) as err:
lom.gets("", "(|(name=fred)(name=george))")
# if gets() finds exactly 1 object, it should return that object
expectedresult = ('CN=alice,OU=People,DC=foo,DC=bar', {'name': ['alice']})
ldo.search_ext_s.return_value = [expectedresult]
actualresult = lom.gets("", "name=alice")
self.assertEqual(expectedresult, actualresult)
# repeat with reference in result
expectedresult = ('CN=alice,OU=People,DC=foo,DC=bar', {'name': ['alice']})
ldo.search_ext_s.return_value = [expectedresult,
(None, ['ldaps://foo.bar/cn=ref'])]
actualresult = lom.gets("", "name=alice")
self.assertEqual(expectedresult, actualresult)
def testGetm(self, mock_ldap):
uri = 'ldaps://foo.bar:636'
ldo = mock_ldap.ldapobject.LDAPObject(uri)
mock_ldap.initialize.return_value = ldo
lom = src.ldapobjectmanager.LDAPObjectManager(uri,
src.ldapobjectmanager.auth.kerb)
expectedresult = [
('CN=fred,OU=People,DC=foo,DC=bar', {'name': ['fred']}),
('CN=george,OU=People,DC=foo,DC=bar', {'name': ['george']})
]
ldo.search_ext_s.return_value = expectedresult
actualresult = lom.getm("", "(|(name=fred)(name=george))")
self.assertEqual(expectedresult, actualresult)
# repeat with reference in result
alice = ('CN=alice,OU=People,DC=foo,DC=bar', {'name': ['alice']})
reference = (None, ['ldaps://foo.bar/cn=ref'])
ldo.search_ext_s.return_value = [reference, alice, alice, reference,
reference, alice, alice, reference, alice, reference]
actualresult = lom.getm("", "name=alice")
self.assertEqual([alice, alice, alice, alice, alice], actualresult)
Here are the problems that I perceive with my testing code:
My unit tests are huge and cumbersome to work with
Because of the need to mock out the LDAP library, my unit tests are tightly coupled with the module implementation
The mock makes it difficult to factor out common code in my unit tests
Because of the above two bullet points, it takes much longer to write & debug my unit tests than it does to write my module, and I don't feel like I'm getting much useful coverage from my unit tests other than confirming the way I call the LDAP library.
How do I address these problems?
One potential cause for these problems is that I'm simply writing a wrapper around the LDAP library that does too little, but I haven't been able to sketch out my project in a way that avoids the need for such a wrapper or module.
Answer: Nice practical question! I've spent a bit of time refactoring the test
code; the actual class is good as it is, so only two comments there:
The auth enum is cool, although I'd say that simply using strings
(or an actual enum in Python 3) is better just because you can inspect
and understand them more easily than numbers. Even then, checking the
validity of the argument will help you catch very easily spotted
errors early. So something like if authtype not in [kerb, ...]: raise ...
is good enough.
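A minimal sketch of that validation (reusing the question's auth names; the helper name and error message are my own):

```python
class auth():
    kerb, simple, noauth = range(3)

def check_authtype(authtype):
    # Fail fast on an unknown authtype instead of silently skipping the bind.
    if authtype not in (auth.kerb, auth.simple, auth.noauth):
        raise ValueError("unknown authtype: %r" % (authtype,))

check_authtype(auth.simple)  # valid value: passes silently
```

Called at the top of __init__, this turns a silent no-bind into an immediate, easily spotted error.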
And the method names gets and getm don't convey much meaning.
This is a minor complaint, because in the context of LDAP
this might be okay.
Now to the tests. I'm attaching my version below so you can refer to
that if I explain it badly. In general this level of detail is good; if
you wanted to add a layer between your actual code and the library, then
that is fine, but I think that is overkill from what I can see (even
though I don't know how many other library options there would be).
Even then, you might be able to go the other way and implement the
ldap interface for another library instead and keep your actual code
the same.
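If you did want a thinner seam, another option (a hypothetical sketch of mine, not code from the post; the FakeConnection name is invented) is constructor injection: pass the connection factory in, so tests hand the class a plain fake instead of patching the import site:

```python
class LDAPObjectManager(object):
    # Stripped-down version for illustration; auth and option handling omitted.
    def __init__(self, uri, connect=None):
        if connect is None:
            import ldap                 # real library only in production
            connect = ldap.initialize
        self._ldo = connect(uri)

class FakeConnection(object):
    """Stand-in used by tests; records the uri it was built with."""
    def __init__(self, uri):
        self.uri = uri

# In a test, no mock.patch is needed at all:
lom = LDAPObjectManager('ldaps://foo.bar:636', connect=FakeConnection)
```

The trade-off is one extra constructor parameter in exchange for tests that no longer know where the ldap module is imported.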
Anyway, the test cases are good; I've split some of them up into
separate methods if it was useful to reuse some data. The names of the
test cases could also be changed so that they refer to the case you want
to test -- the comments could also be just docstrings, then you have a
bit of inspectable documentation as well.
And then just by aggressively reusing, looking up some helper methods,
extracting and creating some others, using a common base class and a bit
of meta magic you'll have reasonably short and expressive test cases.
Helpers include assert_no_calls (I don't see how that is not an
already existing method, oh well), assert_called_once_with and
assert_called_with. I've moved dummy values to person and
reference, since they are used very often and the result looks
cleaner. Oh yeah and I've added the longer imports so that the code is
shorter in general.
The setUpClass in TestLOMGetMethods is probably overkill, maybe
there's even an implementation of that already available somewhere, but
you used the same approach for mock_ldap I wanted to see if that was
possible for other very commonly used fields as well -- and it is. The
benefit is a few fewer selfs; YMMV though.
testOptions I don't quite fully understand. If you just want to test
separate options, then don't call it addOption, either use a better
name if you want the keyword argument syntax, or use a regular
dictionary instead. This way it looks like you're testing a single
method call, or option, so it's a bit confusing to call the function
with multiple options, but then iterate over them instead.
import inspect
import mock
import unittest
import src.ldapobjectmanager
from src.ldapobjectmanager import LDAPObjectManager, auth
uri = 'ldaps://foo.bar:636'
def person(name):
return ('CN={},OU=People,DC=foo,DC=bar'.format(name), {'name': [name]})
def reference():
return (None, ['ldaps://foo.bar/cn=ref'])
class MyUnitTestCase(unittest.TestCase):
def setUp(self):
patcher = mock.patch('src.ldapobjectmanager.ldap', autospec=True)
self.mock_ldap = patcher.start()
self.addCleanup(patcher.stop)
def getNewLDOandLOM(self, auth, **kwargs):
ldo = self.mock_ldap.ldapobject.LDAPObject(uri)
self.mock_ldap.initialize.return_value = ldo
lom = LDAPObjectManager(uri, auth, **kwargs)
return ldo, lom
def assert_no_calls(self, method):
self.assertEqual(method.call_args_list, [])
class TestLOMInitializationAndOptions(MyUnitTestCase):
def testAuth(self):
# no auth
ldo, lom = self.getNewLDOandLOM(auth.noauth)
self.assert_no_calls(ldo.simple_bind_s)
self.assert_no_calls(ldo.sasl_interactive_bind_s)
# simple auth
user = 'foo'
password = 'bar'
ldo, lom = self.getNewLDOandLOM(auth.simple, user=user, password=password)
ldo.simple_bind_s.assert_called_once_with(user, password)
# kerb auth
sasl = mock.MagicMock()
self.mock_ldap.sasl.gssapi.return_value = sasl
ldo, lom = self.getNewLDOandLOM(auth.kerb)
ldo.sasl_interactive_bind_s.assert_called_once_with('', sasl)
def testOptions(self):
def addOption(**kwargs):
for key, value in kwargs.items():
def newNoAuth():
return self.getNewLDOandLOM(auth.noauth, **{key: value})
if not hasattr(self.mock_ldap, key):
with self.assertRaises(AttributeError):
newNoAuth()
else:
ldo, lom = newNoAuth()
ldo.set_option.assert_called_with(getattr(self.mock_ldap, key), value)
addOption(OPT_X_TLS=1, OPT_BOGUS=1, OPT_URI="ldaps://baz.bar")
class TestLOMGetMethods(MyUnitTestCase):
@classmethod
def setUpClass(cls):
for (name, method) in filter(lambda x: x[0].startswith("test"),
inspect.getmembers(TestLOMGetMethods,
predicate=inspect.ismethod)):
setattr(cls, name, lambda instance, method=method: method(instance, instance.ldo, instance.lom))
def setUp(self):
super(TestLOMGetMethods, self).setUp()
self.ldo, self.lom = self.getNewLDOandLOM(auth.kerb)
def testGets(self, ldo, lom):
# if gets() fails to find an object, it should throw an exception
ldo.search_ext_s.return_value = []
with self.assertRaises(RuntimeError):
lom.gets("", "")
# sometimes references are included in the result
# these have no DN and should be discarded from the result
ldo.search_ext_s.return_value = [(None, ['ldaps://foo.bar/cn=ref'])]
with self.assertRaises(RuntimeError):
lom.gets("", "")
def testExactlyOneObject(self, ldo, lom):
alice = person('alice')
# if gets() finds exactly 1 object, it should return that object
ldo.search_ext_s.return_value = [alice]
self.assertEqual(alice, lom.gets("", "name=alice"))
# repeat with reference in result
ldo.search_ext_s.return_value = [alice, reference()]
self.assertEqual(alice, lom.gets("", "name=alice"))
def testNameQuery(self, ldo, lom):
query = "(|(name=fred)(name=george))"
expectedresult = [person('fred'), person('george')]
ldo.search_ext_s.return_value = expectedresult
# if gets() finds > 1 object, it should throw an exception
with self.assertRaises(RuntimeError):
lom.gets("", query)
self.assertEqual(expectedresult, lom.getm("", query))
def testReferenceInResult(self, ldo, lom):
# repeat with reference in result
alice = person('alice')
ref = reference()
ldo.search_ext_s.return_value = [
ref, alice, alice, ref,
ref, alice, alice, ref, alice, ref
]
actualresult = lom.getm("", "name=alice")
self.assertEqual([alice] * 5, actualresult) | {
"domain": "codereview.stackexchange",
"id": 10927,
"tags": "python, unit-testing, ldap"
} |
Are intermolecular forces a type of chemical bond? | Question: My chemistry teacher told me that chemical bonds are of two types: intramolecular and intermolecular. He said that intermolecular forces come under the category of intermolecular chemical bond.
I have never read such statement anywhere. Nor can I find anything on the Internet that would support this statement.
My understanding is that a chemical bond is a force that holds atoms together in a chemical species. Since intermolecular forces do not hold atoms together, they should not be termed chemical bonds.
So, are intermolecular forces a type of chemical bond?
Answer: The IUPAC definition of "chemical bond" is:
When forces acting between two atoms or groups of atoms lead to the formation of a stable independent molecular entity, a chemical bond is considered to exist between these atoms or groups. The principal characteristic of a bond in a molecule is the existence of a region between the nuclei of constant potential contours that allows the potential energy to improve substantially by atomic contraction at the expense of only a small increase in kinetic energy. Not only directed covalent bonds characteristic of organic compounds, but also bonds such as those existing between sodium cations and chloride anions in a crystal of sodium chloride or the bonds binding aluminium to six molecules of water in its environment, and even weak bonds that link two molecules of $\ce{O_2}$ into $\ce{O_4}$, are to be attributed to chemical bonds.
So the answer is "yes" in some cases.
See also the IUPAC definition of "molecule":
... must correspond to a depression on the potential energy surface that is deep enough to confine at least one vibrational state
So, for example, a water-water dimer, held together by hydrogen bonding, has a monomer-monomer potential energy surface that is deep enough to confine at least one vibrational state, and it would be appropriate to refer to the hydrogen bond as a chemical bond. | {
"domain": "chemistry.stackexchange",
"id": 9750,
"tags": "bond, intermolecular-forces"
} |
Using Barrier to Implement Go's Wait Group | Question: I have implemented a WaitGroup class to simulate Go's WaitGroup. I've since found that I can use Barrier in a different manner to achieve the same thing. My WaitGroup has some additional functionality, but in most scenarios Barrier would do.
Question: Do you spot any design flaws in this style of using Barrier?
Code:
I have an instance of Barrier:
static Barrier WaitGroupBarrier = new Barrier(0);
Then I define my tasks (or threads) this way in multiple places - so I don't have the actual number of tasks/threads in one place:
for (int i = 0; i < 1000; i++) // in multiple places
{
var t = new Task(() =>
{
try
{
WaitGroupBarrier.AddParticipant();
// body
}
finally
{
WaitGroupBarrier.RemoveParticipant();
}
});
t.Start();
}
And I wait for them to complete, this way:
if (WaitGroupBarrier.ParticipantsRemaining > 0)
{
Console.WriteLine("waiting...");
WaitGroupBarrier.SignalAndWait();
}
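(For comparison, a sketch of mine rather than code from the post: Go's WaitGroup semantics can be reproduced with a counter guarded by a condition variable, which makes the check and the wait a single atomic step; Python is used here for brevity.)

```python
import threading

class WaitGroup:
    """Go-style wait group: add/done adjust a counter; wait blocks until zero."""
    def __init__(self):
        self._count = 0
        self._cond = threading.Condition()

    def add(self, n=1):
        with self._cond:
            self._count += n

    def done(self):
        with self._cond:
            self._count -= 1
            if self._count <= 0:
                self._cond.notify_all()

    def wait(self):
        with self._cond:
            # Check and block happen under one lock, so there is no
            # check-then-act gap like the ParticipantsRemaining test.
            while self._count > 0:
                self._cond.wait()

wg = WaitGroup()
results = []
def worker(i):
    results.append(i)
    wg.done()

for i in range(5):
    wg.add()                # add before starting, as in Go
    threading.Thread(target=worker, args=(i,)).start()
wg.wait()                   # returns only after all five workers called done()
```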
Answer: if (WaitGroupBarrier.ParticipantsRemaining > 0) {
WaitGroupBarrier.SignalAndWait();
}
The above is a non-atomic check-then-act. The condition can evaluate true while one participant remains; if that participant then removes itself before SignalAndWait runs, the call will throw an exception. | {
"domain": "codereview.stackexchange",
"id": 4249,
"tags": "c#, multithreading, go, task-parallel-library"
} |
Keras BatchNormalization axis | Question: I use spectrograms as input to a convolutional neural network I have created with tensorflow.keras in Python.
Its shape is (time, frequency, 1).
The CNN's input shape is (None, time, frequency, n_channels) where n_channels=1, and the first layer is a Conv2D. After every convolutional layer I use a BatchNormalization layer before the Activation layer.
The default value for BatchNormalization is "axis=-1".
Should I leave it as it is or should I make it with "axis=2" which corresponds to the "frequency" axis?
The thought behind this is that the features of a spectrogram are represented in the frequency axis.
Answer: Interesting question :)
Using spectrograms means you are essentially using images (of frequencies varying over time). I understand the content is interpreted like the graph, i.e. with axes time and frequency, but as far as the network knows, you are giving it black and white images, assuming your last dimension (=1) is the channels dimension.
You normally want to take the batch-norm over the features, so it could depend on what you see as a feature. Do you care about the shape of the peaks/troughs of the spectra, or the values encoded in the final channel? The meaning of axis when used in terms of BatchNormalization might be a little confusing. Have a look at these explanations and some of the points here.
So as far as pure images go, I would recommend keeping the default axis=-1.
Remember that a 2d convolution operation is looking for spatial correlations over the image itself - the kernel slides from left to right, top to bottom. So mixing that idea with then taking batch norm over your second axis (frequency) is literally an orthogonal idea.
I don't know if it will work out, but I think it might make most sense if your spectrograms are aligned in time, that is they cover the same nominal time period and so for two of them with the same size, the x-axis has the same scale. Otherwise you would be taking batch-normalization over different time-frames and therefore stirring up temporal information between the samples. | {
"domain": "datascience.stackexchange",
"id": 6915,
"tags": "keras, cnn, batch-normalization"
} |
Question on how to make product rule for differentiation consistent with operators? | Question: By the product rule for differentiation:$$\frac{\partial(\hat A\psi)}{\partial x}=\left(\frac{\partial\hat A}{\partial x}\right)\psi+\hat A\left(\frac{\partial\psi}{\partial x}\right)\tag{1}$$
Where $\hat A$ is an operator and $\psi$ is a function depending on $x$, i.e. $\psi=\psi(x)$.
My question is: when $\hat A$ takes the form of the momentum operator, $\hat A=\hat P=-i\hbar \frac{\partial}{\partial x}$, it looks like the product rule for differentiation no longer works (I'm not sure whether this statement is right) since the correct answer should be:
$$\frac{\partial(\hat P\psi)}{\partial x}=-i\hbar\left(\frac{\partial^2}{\partial x^2}\right)\psi-i\hbar\left(\frac{\partial^2\psi}{\partial x^2}\right).$$
It doesn't equal to the answer given by $(1)$.
Another question:
$$\frac{\partial(\psi\hat A)}{\partial t}=\left(\frac{\partial\psi}{\partial t}\right)\hat A+\psi\left(\frac{\partial\hat A}{\partial t}\right)\tag{2}$$
Where $\hat A$ now takes the form $\hat A=\frac{\partial}{\partial t}$ and $\psi$ is a function depending on $t$, i.e. $\psi=\psi(t)$; then the product rule for differentiation does work. (I'm pretty sure this one is correct because it's a derivation from the book.)
I must have confused something. Can someone help me or give me some comments that may correct my question please?
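(An aside of my own, not part of the original question: the identity for the case where $\hat A$ acts by multiplication, $[\frac{\partial}{\partial x}, \psi] = \psi'(x)$, is easy to check numerically with finite differences:)

```python
import math

h = 1e-6
def d_dx(f):
    # Central finite-difference approximation of the derivative operator.
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

psi, phi = math.sin, math.exp   # psi acts by multiplication; phi is a test function

def commutator(x):
    # [d/dx, psi] phi = d/dx(psi * phi) - psi * d/dx(phi)
    return d_dx(lambda t: psi(t) * phi(t))(x) - psi(x) * d_dx(phi)(x)

x0 = 0.7
# Should match psi'(x0) * phi(x0) = cos(x0) * exp(x0), up to finite-difference noise
error = abs(commutator(x0) - math.cos(x0) * math.exp(x0))
```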
Answer: OP is essentially asking how to make sense of
$$\frac{\partial\hat{A}}{\partial x}\tag{i}$$
in OP's eq. (1), where $\hat{A}$ is a differential operator, in particular if $\hat{A}=\frac{\partial}{\partial x}$. The problem is partly that the notation (i) is ambiguous, cf. e.g. Do derivatives of operators act on the operator itself or are they "added to the tail" of operators? In OP's case, the derivative in (i) does not act further, so we can rewrite (i) via Leibniz rule as a commutator
$$ \left[\frac{\partial}{\partial x},\hat{A}\right]\tag{i'}$$
Then OP's eq. (1) turns into a well-known non-commutative Leibniz rule:
$$ \left[\frac{\partial}{\partial x},\hat{A}\psi\right]~=~\left[\frac{\partial}{\partial x},\hat{A}\right]\psi+\hat{A}\underbrace{\left[\frac{\partial}{\partial x},\psi\right]}_{=\psi^{\prime}(x)},
\tag{1'}$$
free of notational paradoxes. (Technically, the wavefunction $\psi\leftrightarrow \hat{M}_{\psi}$ is here identified with a zeroth-order differential operator, i.e. a left multiplication operator $\hat{M}_{\psi}\phi:=\psi\phi$.) | {
"domain": "physics.stackexchange",
"id": 91404,
"tags": "quantum-mechanics, operators, differentiation, notation, calculus"
} |
Why is there an energy gap in superconductors? | Question: I'm a little out of my depth here...
I'm trying to understand quasiparticle tunnelling in superconductor-insulator-superconductor junctions. Many books use the "semiconductor model" to explain this:
(source: wikimedia.org)
These diagrams show the available quasiparticle states (with a large band gap due to the formation of Cooper pairs), the filled states, and the empty states.
My question with these diagrams is: shouldn't all the electrons exist as Cooper pairs? I assume that the lower band is filled with quasiparticles, since Cooper pairs would all be at the same energy level and quasiparticles obey Fermi-Dirac statistics, but I don't know where they're coming from.
Also, why is there an energy gap in the quasiparticle energy states? I understand that this gap corresponds to the energy needed to break Cooper pairs, but I don't understand why you would need to break Cooper pairs to raise the energy of quasiparticles.
Or is this "semiconductor model" not fully representative of the physics?
Answer: The lower part is not filled with quasi-particles. At zero Kelvin, in zero magnetic field and with zero disorder, all free electrons condense and form the superconducting condensate. The semiconductor model now describes the breaking of a Cooper pair not as resulting in two electron-like excitations, but in one electron-like and one hole-like excitation. As you may be familiar with from semiconductors, one electron of energy $E$ above the Fermi level may be described by one hole with the same energy below.
Now, this is the equilibrium. Tunnelling processes, however, are very non-equilibrium processes! When you apply a bias across your junction you give rise to quasi-particle excitations, which result in a tunnelling current. That is why, even at 0 Kelvin, there is "electron" tunnelling... | {
"domain": "physics.stackexchange",
"id": 23891,
"tags": "quantum-mechanics, condensed-matter, superconductivity, quantum-tunneling"
} |
how to find distance and angle b/w each points in a point cloud and the Kinect sensor | Question:
Question: how to find the distance and angle between each point in a point cloud and the Kinect sensor?
Details: Using the code provided in the ROS PCL tutorial, I subscribed to the sensor message and generated a point cloud. Now I want to know how to find the distance and angle between a point in the point cloud and the camera. There is a common misconception that for a point (x,y,z), z gives the distance from the camera to the point. If anybody has a solution, please post the answer in detail.
Thanks
Originally posted by AbhijithNair on ROS Answers with karma: 41 on 2016-06-21
Post score: 1
Original comments
Comment by gvdhoorn on 2016-06-30:
If nobody answers your question, it's likely that either nobody knows, or that the people that do know haven't seen it. It's not like we're purposefully ignoring you.
In any case: could you please revert the edit to your question and post what you have now as an answer? Then accept your own answer.
Answer:
Answer:
Hi, guys.
I figured out a way to find the point in the point cloud which is closest to the camera.
I shall share the code at the following link, hoping that it might help you; I
shall also try to provide some insight into the code. Before reading my code I would recommend
getting a good understanding of the coordinate frame conventions for the camera
( http://www.ros.org/reps/rep-0103.html ). I'm sharing the code at the link below.
https://github.com/abhijithanil/ROS_PCL_Kinect_distance_obstacle
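On the geometry itself (an illustration of mine, not code from the repository above, assuming the optical-frame convention of REP 103: x right, y down, z forward): the camera-to-point distance is the Euclidean norm of (x, y, z), not z alone, and the angles off the optical axis follow from atan2:

```python
import math

def distance_and_angles(x, y, z):
    """Range and bearing of a point in the camera's optical frame.

    z alone is only the depth along the optical axis; the true
    camera-to-point distance is the Euclidean norm.
    """
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(x, z)      # horizontal angle off the optical axis
    elevation = math.atan2(-y, z)   # vertical angle (optical frame: y points down)
    return r, azimuth, elevation

r, az, el = distance_and_angles(1.0, 0.0, 1.0)
# r = sqrt(2), az = pi/4, el = 0
```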
Thanks
Originally posted by AbhijithNair with karma: 41 on 2016-06-30
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by ROSkinect on 2016-06-30:
Thanks for sharing your code
but it is better to share it in github.
Stepstorun.txt:
You don't need 7 shells to execute your commands:
Comment by ROSkinect on 2016-06-30:
Shell:
$ roslaunch openni_launch openni.launch
New shell:
$ cd ~/catkin_ws
$ catkin_make
$ source devel/setup.bash
$ rosrun test pub_kfc
New shell:
$ source devel/setup.bash
$ rosrun test sub_kfc
New shell:
$ rosrun rviz rviz
New shell:
$ roslaunch turtlebot_rviz_launchers view_rob
Comment by AbhijithNair on 2016-06-30:
Yeah, I shall upload the whole project to GitHub once I complete it. I know it is awkward to admit I have no experience working with GitHub. Anyway, thanks for your suggestion; I will consider it for my next upload. Hope my code helps, and if it does, please flag it for others. Thanks
Comment by GLV on 2019-04-16:
@Mr.AbhijithNair could you please tell me how you converted the Kinect data to a point cloud? Which packages did you use? Could you please elaborate, step by step, on how to find the distance from the Kinect sensor?
thanks in advance | {
"domain": "robotics.stackexchange",
"id": 25019,
"tags": "ros, navigation, pcl, avoid-obstacle"
} |
Reference paper for gazebo | Question:
Is Design and use paradigms for gazebo, an open-source multi-robot simulator (N Koenig, A Howard - Intelligent Robots and Systems, 2004) still the canonical reference for Gazebo?
Originally posted by SL Remy on Gazebo Answers with karma: 319 on 2013-06-17
Post score: 3
Original comments
Comment by Boris on 2013-06-17:
Not really canonical, but quite a recent one from ICRA'13 - N. Koenig and J. Hsu, “The many faces of simulation: Use cases for a general purpose simulator,” in ICRA’13 Workshop on Developments of Simulation Tools for Robotics & Biomechanics, 2013. Not sure will it be available online at any point since workshop papers tend to be not included to indices like IEEEXplore. May be Nate or John will upload it later.
Comment by Boris on 2013-06-17:
P.S. The link you have provided is broken, looks like a missed http://.
Comment by SL Remy on 2013-06-18:
sorry about that, corrected.
Comment by rahul on 2019-01-09:
It is not a good idea to cite a paper that is not available to read, and it is not good practice to cite a paper just for the sake of citation. Citation is done to inform the reader where to find more information on the cited statements.
Answer:
The 2004 IROS paper is the right one. They didn't write a paper for the ICRA'13 workshop; just an abstract and then the talk.
Originally posted by scpeters with karma: 2861 on 2013-06-18
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Boris on 2013-06-18:
Apparently they did write something for ICRA :) You can find it on the conference's CD/USB titled "ICRA'13 Workshops and Tutorials Proceedings". Although it has only 2 pages, there are title, abstract, 5 sections and references. So looks like a paper to me. Again I am not clear about its official status, i.e. can it be treated as a publication.
Comment by scpeters on 2013-06-18:
I asked Nate, and he said they submitted a 2 page abstract for the ICRA workshop. He also said that the 2004 IROS paper is the right one to reference.
Comment by Boris on 2013-06-19:
Thanks for clarifying!
Comment by ZdenekM on 2014-01-29:
Any news on this? Unfortunately, the 2004 paper is a bit out of date... | {
"domain": "robotics.stackexchange",
"id": 3342,
"tags": "gazebo"
} |
Callback-oriented Tic-Tac-Toe (follow-up) | Question: A followup to this question, thanks to @mdfst13 and @janos for the feedback.
I have made some changes to my code, mainly the following:
Replaced the next method with a loop and run method, as suggested by @mdfst13. Also removed redundant "null" checks.
Added a Cell enum as suggested by @janos, replacing the 2D array of players in the board class with a 2D array of cells.
Moved complex logic out of Board#toString() and into its own class.
Created a controller class to allow cleaner interfacing from agents - this also prevents agents from making any "unauthorised" moves.
I no longer have any major issues with my code personally; however, I would appreciate feedback on improving the code to be more robust and just generally "well written" - following the idioms of the language.
I know there are some "redundant" calls in the code (e.g. the controller copies the board even though the board getter in the engine copies it); however, I believe they should remain as they provide clarity over the intent of the actual code.
One or two things I've noticed that could be improved:
I'm not entirely sure on the way I've designed the board renderer - should it be a final class that cannot be instantiated? I'm not sure if there would be any advantages of making the static methods into instance methods and allowing the renderer to be instantiated, as there are no "settings" for the renderer and is no data for the renderer to store.
Copied from my previous question:
I don't like the "magic" going on in getWinValues. I understand why it works but I don't like the looks of the "magic" size - y - 1. There probably isn't much I can do about this.
Result.java
public enum Result {
LOSE,
DRAW,
WIN;
}
Player.java
public enum Player {
X,
O;
public Player getOpponent() {
return this == X ? O : X;
}
}
Cell.java
public enum Cell {
EMPTY(' ', null),
X('X', Player.X),
O('O', Player.O);
private char value;
private Player player;
Cell(char value, Player player) {
this.value = value;
this.player = player;
}
public char getValue() {
return value;
}
public Player getPlayer() {
return player;
}
public static Cell getCell(Player player) {
for (Cell cell : values()) {
if (cell.getPlayer() == player) {
return cell;
}
}
return EMPTY;
}
}
Board.java
import java.util.Arrays;
public class Board {
private int size;
private Cell[][] board;
public Board(int size) {
this.size = size;
board = new Cell[size][size];
for (Cell[] row : board) {
Arrays.fill(row, Cell.EMPTY);
}
}
public Board(Board source) {
this(source.size);
for (int y = 0; y < size; y++) {
System.arraycopy(source.board[y], 0, board[y], 0, size);
}
}
public int getSize() {
return size;
}
private int[][] getCellValues(int size) {
int[][] cells = new int[size][size];
int cellValue = 1;
for (int y = 0; y < size; y++) {
for (int x = 0; x < size; x++) {
cells[y][x] = cellValue;
cellValue *= 2;
}
}
return cells;
}
private int[] getWinValues(int size) {
int winCount = (size * 2) + 2;
int[] wins = new int[winCount];
int[][] cellValues = getCellValues(size);
for (int y = 0; y < size; y++) {
wins[winCount - 2] += cellValues[y][y];
wins[winCount - 1] += cellValues[y][size - y - 1];
for (int x = 0; x < size; x++) {
wins[y] += cellValues[y][x];
wins[y + size] += cellValues[x][y];
}
}
return wins;
}
private int getPlayerValue(Player player) {
int value = 0;
int[][] cellValues = getCellValues(size);
for (int y = 0; y < size; y++) {
for (int x = 0; x < size; x++) {
if (board[y][x].getPlayer() == player) {
value += cellValues[y][x];
}
}
}
return value;
}
public boolean isWin(Player player) {
int[] winValues = getWinValues(size);
int playerValue = getPlayerValue(player);
for (int winValue : winValues) {
if ((playerValue & winValue) == winValue) {
return true;
}
}
return false;
}
public boolean isFull() {
boolean full = true;
for (Cell[] row : board) {
for (Cell cell : row) {
full &= cell != Cell.EMPTY;
}
}
return full;
}
public Cell get(int x, int y) {
if (x < 0 || y < 0 || x >= size || y >= size) {
return null;
}
return board[y][x];
}
public boolean set(int x, int y, Cell cell) {
if (x < 0 || y < 0 || x >= size || y >= size) {
return false;
}
board[y][x] = cell;
return true;
}
public Board copy() {
return new Board(this);
}
}
BoardRenderer.java
public final class BoardRenderer {
private BoardRenderer() {
}
private static String getRow(Board board, int row) {
StringBuilder builder = new StringBuilder();
for (int x = 0; x < board.getSize(); x++) {
Cell cell = board.get(x, row);
if (x != 0) {
builder.append('|');
}
builder.append(' ').append(cell.getValue()).append(' ');
}
return builder.toString();
}
public static String render(Board board) {
int size = board.getSize();
StringBuilder builder = new StringBuilder();
for (int y = 0; y < size; y++) {
String row = getRow(board, y);
if (y != 0) {
builder.append('\n');
for (int x = 0; x < row.length(); x++) {
builder.append('-');
}
builder.append('\n');
}
builder.append(row);
}
return builder.toString();
}
}
Engine.java
public class Engine {
private Board board;
private Agent playerX;
private Agent playerO;
private Player startingPlayer;
public Engine(int size, Agent playerX, Agent playerO, Player startingPlayer) {
board = new Board(size);
this.playerX = playerX;
this.playerO = playerO;
this.startingPlayer = startingPlayer;
}
public Engine(int size, Agent playerX, Agent playerO) {
this(size, playerX, playerO, Player.X);
}
public Board getBoard() {
return board.copy();
}
public boolean hasEnded() {
return board.isFull() || board.isWin(startingPlayer) || board.isWin(startingPlayer.getOpponent());
}
public Result getResult(Player player) {
if (!hasEnded()) {
return null;
}
boolean winSame = board.isWin(player);
boolean winOpponent = board.isWin(player.getOpponent());
if ((winSame && winOpponent) || (!winSame && !winOpponent && board.isFull())) {
return Result.DRAW;
}
return winSame ? Result.WIN : Result.LOSE;
}
public Player getWinner() {
if (!hasEnded()) {
return null;
}
if (getResult(startingPlayer) == Result.WIN) {
return startingPlayer;
}
Player opponent = startingPlayer.getOpponent();
if (getResult(opponent) == Result.WIN) {
return opponent;
}
return null;
}
public boolean isDraw() {
return getResult(startingPlayer) == Result.DRAW && getResult(startingPlayer.getOpponent()) == Result.DRAW;
}
private Agent getAgent(Player player) {
if (player == Player.X) {
return playerX;
} else if (player == Player.O) {
return playerO;
}
throw new UnsupportedOperationException("invalid player " + player);
}
public void run() {
int moveCount = 0;
int maxMoveCount = board.getSize() * board.getSize();
Player player = startingPlayer;
Agent playerAgent = getAgent(player);
while (!board.isFull() && moveCount < maxMoveCount) {
playerAgent.makeMove(player, new Controller(this, player));
if (board.isWin(player)) {
break;
}
moveCount++;
player = player.getOpponent();
playerAgent = getAgent(player);
}
Player opponent = player.getOpponent();
Agent opponentAgent = getAgent(opponent);
Result playerResult = getResult(player);
Result opponentResult = getResult(opponent);
playerAgent.gameEnded(player, playerResult);
opponentAgent.gameEnded(opponent, opponentResult);
}
public boolean move(Player player, int x, int y) {
return !hasEnded() && board.get(x, y) == Cell.EMPTY && board.set(x, y, Cell.getCell(player));
}
}
Agent.java
public interface Agent {
void makeMove(Player player, Controller controller);
void gameEnded(Player player, Result result);
}
Controller.java
public class Controller {
private Engine engine;
private Player player;
private boolean moved = false;
public Controller(Engine engine, Player player) {
this.engine = engine;
this.player = player;
}
public Board getBoard() {
return engine.getBoard().copy();
}
public boolean move(int x, int y) {
if (moved) {
throw new UnsupportedOperationException("controller already moved");
}
if (engine.move(player, x, y)) {
moved = true;
}
return moved;
}
}
Main.java
public class Main {
private static class RandomAgent implements Agent {
private static int random(Controller controller) {
return (int) Math.floor(Math.random() * controller.getBoard().getSize());
}
@Override
public void makeMove(Player player, Controller controller) {
System.out.println("makeMove(player = " + player + ", controller)");
boolean success;
do {
success = controller.move(random(controller), random(controller));
} while (!success);
}
@Override
public void gameEnded(Player player, Result result) {
System.out.println("gameEnded(player = " + player + ", result = " + result + ")");
}
}
public static void main(String[] args) {
Engine engine = new Engine(3, new RandomAgent(), new RandomAgent());
long start = System.currentTimeMillis();
engine.run();
long end = System.currentTimeMillis();
System.out.println(BoardRenderer.render(engine.getBoard()));
System.out.println("The game took " + (end - start) + "ms");
System.out.println("The winner was: " + (engine.isDraw() ? "Nobody" : engine.getWinner()));
}
}
And some example output:
makeMove(player = X, controller)
makeMove(player = O, controller)
makeMove(player = X, controller)
makeMove(player = O, controller)
makeMove(player = X, controller)
makeMove(player = O, controller)
makeMove(player = X, controller)
makeMove(player = O, controller)
gameEnded(player = O, result = WIN)
gameEnded(player = X, result = LOSE)
X | X | O
-----------
| X | O
-----------
X | O | O
The game took 3ms
The winner was: O
Answer: Make final what you can
Make final what you can, for example the fields of Cell:
private final char value;
private final Player player;
And the fields of Board:
private final int size;
private final Cell[][] board;
And so on.
Refining Cell.getCell
Instead of looping over cell values, you could simplify as in Player:
public static Cell getCell(Player player) {
return player == Player.X ? X : O;
}
Or you could take a more conservative approach:
Use an assert to document that the player parameter must never be null
Crash with a big bang if the player parameter has an unexpected value
Something like this:
public static Cell getCell(Player player) {
assert player != null;
if (player == Player.X) {
return X;
}
if (player == Player.O) {
return O;
}
throw new IllegalArgumentException("player must be either X or O");
}
A bit ugly fill + overwrite
It's a bit ugly that when the copy constructor calls the original,
the cells are filled with empty, only to be overwritten immediately.
It would be slightly better to not call the other constructor from the copy constructor.
Another readability improvement would be using the .clone() method of arrays instead of the tedious System.arraycopy(...).
public Board(Board source) {
this.size = source.size;
this.board = new Cell[size][];
for (int y = 0; y < size; y++) {
board[y] = source.board[y].clone();
}
}
Naming
The Board class contains the cells in a field named board.
This is a bit confusing. It would be more natural to call the cells cells.
Repeated calculations
If I'm looking at it right, getCellValues and getWinValues will always return the same values. As these methods are called repeatedly, this is wasteful. It would be better to pre-calculate their returned values once, and simply reuse.
The size parameter of these methods also looks pointless.
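To make "pre-calculate once and reuse" concrete, here is a sketch of the idea (shown in Python for brevity, and with a guessed one-bit-per-cell layout, since getCellValues itself isn't shown above; in the Java code this would naturally be a static final field, or arrays computed once in the constructor, because size never changes):

```python
def compute_win_values(size):
    # hypothetical stand-in for getCellValues/getWinValues: one bit per cell,
    # rows and columns only (mirroring the loop quoted in the review)
    cell_values = [[1 << (y * size + x) for x in range(size)] for y in range(size)]
    wins = [0] * (2 * size)
    for y in range(size):
        for x in range(size):
            wins[y] += cell_values[y][x]          # row y
            wins[y + size] += cell_values[x][y]   # column y
    return wins

_win_values_cache = {}

def win_values(size):
    # compute at most once per board size, then reuse the cached result
    if size not in _win_values_cache:
        _win_values_cache[size] = compute_win_values(size)
    return _win_values_cache[size]

assert win_values(3) is win_values(3)    # second call is a cache hit
assert win_values(3)[0] == 0b111         # row 0 of a 3x3 board: bits 0, 1, 2
```

In Java, the equivalent would be computing the two arrays once in the Board constructor (or a static cache keyed by size) instead of rebuilding them on every isWin call.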
Flag variables
Flag variables are often unnecessary and cause more problems than they solve.
Consider full in Board.isFull:
public boolean isFull() {
boolean full = true;
for (Cell[] row : board) {
for (Cell cell : row) {
full &= cell != Cell.EMPTY;
}
}
return full;
}
As soon as you find an empty cell, you can simply return false.
The result will be simpler, shorter, and faster too:
public boolean isFull() {
for (Cell[] row : board) {
for (Cell cell : row) {
if (cell == Cell.EMPTY) {
return false;
}
}
}
return true;
}
Prefer to fail with a big bang
The Board.get and Board.set methods have this check:
if (x < 0 || y < 0 || x >= size || y >= size) {
In case of such invalid x or y parameters,
the methods return something,
quietly ignoring the fact that something's awfully wrong in the caller.
In such situations, in case of non-sense input, instead of returning normally (also known as "garbage in, garbage out"),
it's better to crash with a big bang, for example:
throw new IllegalArgumentException("invalid input: x and y must be within 0 and size");
Prefer one way to do things
Board has two ways to make a copy: using a copy constructor, and the copy method. This can cause problems:
Users of the class may question which method is preferred.
The more public methods, the more maintenance overhead.
If you give one way to do something, you preempt boring questions, and you have fewer methods to maintain.
"domain": "codereview.stackexchange",
"id": 19501,
"tags": "java, tic-tac-toe"
} |
The dot product integral in the proof of the Parallel axis theorem | Question:
The first picture is a question about proving the Parallel axis theorem and the second is the solution. I have no problem with the solution except for the part which says that
$$
\int 2\vec h \cdot \vec r_\text{cm} dm = 2\vec h \cdot \int \vec r_\text{cm} dm
$$
I don't understand how they got the dot product out of the integral. It seems mathematically illogical to me. Can anyone explain that to me? Thanks in advance.
Answer: Presumably you're okay with the intermediate step
$$
\int 2\vec h\cdot \vec r_\text{cm} dm
=
2\int \vec h\cdot \vec r_\text{cm} dm
\tag1
$$
since the number two is a constant, and integration is linear over a constant. The authors assert that $\vec h$ is also constant, so it comes out as well.
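If a concrete check helps, here is a small numerical sketch (mine, not part of the original answer): treat the body as a few discrete point masses so that the integral over $dm$ becomes a sum, with arbitrary made-up positions and masses, and verify that the constant $2\vec h$ passes through it.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# arbitrary point masses and positions relative to the centre of mass
masses = [1.0, 2.5, 0.7]
positions = [(0.3, -1.2, 0.5), (-0.1, 0.4, 2.0), (1.5, 0.0, -0.8)]
h = (2.0, -1.0, 0.5)  # fixed displacement vector, constant over the body

# left-hand side: "integrate" the dot product term by term
lhs = sum(2 * dot(h, r) * m for r, m in zip(positions, masses))

# right-hand side: pull 2*h out first, then "integrate" the vector r dm
r_integral = tuple(sum(r[i] * m for r, m in zip(positions, masses)) for i in range(3))
rhs = 2 * dot(h, r_integral)

assert abs(lhs - rhs) < 1e-12  # the constant vector passes through the sum
```

The same cancellation holds for any choice of masses and positions, which is exactly the linearity the answer relies on.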
If it's motion of the dot product across the integral sign that bothers you, expand the dot product in terms of vector components:
$$
\vec h \cdot \vec r = h_x r_x + h_y r_y + h_z r_z
$$
Now it's clear that the single dot-product integral in (1) is the sum of three ordinary scalar integrals.
The first is
$$
\int h_x r_x dm = h_x \int r_x dm
$$
where the extraction is totally okay because $h_x$ is a constant in this problem. | {
"domain": "physics.stackexchange",
"id": 28369,
"tags": "newtonian-mechanics, moment-of-inertia, calculus"
} |
Probability of finding a particle with a particular value of momentum | Question: Given a particle's wave function, what is the general method of finding the probability distribution of momentum (i.e., the probability of finding that particle with a particular value of momentum)?
For example, given
$$
\psi(x) = a e ^ { ikx } + b e ^ { -ikx },
$$
what is the probability of measuring $ p_x = \hbar k $ ?
Answer: First of all, I assume that you know that the probability (density) distribution is given by the squared amplitude of the wavefunction. In your example, you are given the wavefunction in the position basis, so it gives you the position probability density. If you want the momentum probability density, you have to change basis to get $\psi(p)$.
The standard way to change basis is to use a "resolution of the identity". The manipulation is easiest in bra-ket notation. You start out with
\begin{equation}
\psi(x) = \langle x | \psi \rangle = a\, e^{i k x} + b\, e^{-i k x}.
\end{equation}
But you want $\psi(p) = \langle p | \psi \rangle$. You can find it with this manipulation:
\begin{align}
\langle p | \psi \rangle
&= \int_{-\infty}^{\infty} \langle p | x \rangle \langle x| \psi \rangle\, d x \\
&= \int_{-\infty}^{\infty} \frac{1} {\sqrt{2\pi \hbar}} e^{-i p x / \hbar} \langle x| \psi \rangle\, d x \\
&= \int_{-\infty}^{\infty} \frac{1} {\sqrt{2\pi \hbar}} e^{-i p x / \hbar} \left( a\, e^{i k x} + b\, e^{-i k x} \right)\, d x.
\end{align}
Here, the resolution of the identity I used was $\int_{-\infty}^\infty |x\rangle \langle x |\, dx$. For a one-dimensional system, that's just the identity operator, so you should be able to just plonk it into any old expression you want.
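If you want to see the two momentum spikes appear, here is a rough numerical sketch (mine, not the answer's), with $\hbar = 1$, $k = 5$, hypothetical amplitudes $a, b$, and the integral truncated to a finite window and approximated by a trapezoidal sum. The true result involves Dirac deltas, so on a finite window you get tall, narrow peaks at $p = \pm\hbar k$ instead of true infinities.

```python
import cmath
import math

hbar = 1.0
k = 5.0
a, b = 0.6, 0.8  # hypothetical amplitudes with |a|^2 + |b|^2 = 1

def psi(x):
    # the position-space wavefunction from the question
    return a * cmath.exp(1j * k * x) + b * cmath.exp(-1j * k * x)

def psi_tilde(p, L=50.0, n=20000):
    # crude trapezoidal approximation of (1/sqrt(2*pi*hbar)) times the
    # integral of exp(-i p x / hbar) * psi(x) dx over the window [-L, L]
    dx = 2 * L / n
    total = 0j
    for i in range(n + 1):
        x = -L + i * dx
        w = 0.5 if i in (0, n) else 1.0
        total += w * cmath.exp(-1j * p * x / hbar) * psi(x) * dx
    return total / math.sqrt(2 * math.pi * hbar)

# tall peaks exactly at the two plane-wave momenta, almost nothing elsewhere
assert abs(psi_tilde(k)) > 10 and abs(psi_tilde(-k)) > 10
assert abs(psi_tilde(7.0)) < 1.0 and abs(psi_tilde(2.3)) < 1.0
```

As the window length $L$ grows, the peaks grow without bound while the off-peak values stay small, which is the finite-window shadow of the delta functions mentioned below.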
I'll leave it as a homework exercise to do the integral and take the squared magnitude of the result. But here are a couple hints. First, I happen to know that your solution $\psi(x)$ is a sum of two momentum eigenstates, so you should almost always get zero probability of measuring any given momentum — except for those two particular momentum eigenvalues. That is, your answer will be zero everywhere except for two values related to $k$ and $-k$. Second, you might want to read up on the Dirac $\delta$ function — specifically how it can be expressed as an integral. | {
"domain": "physics.stackexchange",
"id": 46294,
"tags": "quantum-mechanics, homework-and-exercises, measurement-problem"
} |
How is a portion of DNA selected and unwound from nucleosome? | Question: If I understand this correctly during interphase most of the DNA strand is tightly wound around histones in the form of nucleosomes, to conserve space in the nucleus. Yet RNA polymerase in order to work needs a part of DNA to be temporarily unwound. How does the polymerase find the particular part of the DNA (and especially the promoter for a gene coding the protein needed to be synthesised) in this mass of tightly wound DNA?
Answer: This answer will be a very broad overview and is based largely on information from the textbook "Molecular Biology of the Gene" by Watson et al. (which I highly recommend).
Nucleosomes are dynamic structures.
The interactions between DNA and histones are non-specific and dynamic. Regions of DNA can be transiently unwrapped from the nucleosome and thus recognized by DNA binding proteins. In fact, because of the non-specific nature of the interaction, DNA can slide relative to the histone core and expose different sections of DNA. This can be facilitated by nucleosome remodelling complexes which can also catalyze complete ejection of histone octamers to expose even more DNA.
Nucleosomes can be positioned specifically.
The association of DNA binding proteins with specific sequences can prevent or control nucleosome formation at specific loci and leave tracts of exposed DNA accessible to the transcription machinery. Additionally, certain DNA sequences that form bent helices are preferentially bound by nucleosomes.
Chromatin compaction is highly regulated.
Two broad classes of chromatin are present in the interphase nucleus: highly condensed, transcriptionally inactive heterochromatin and less condensed, transcriptionally active euchromatin. Chromatin compaction is regulated by post-translational modification of histone tails, and modifications such as acetylation and phosphorylation, especially of the N-terminal tails, can prevent higher order, repressive chromatin compaction. The specific pattern of histone modification is referred to as the histone code and certain modifications are associated with gene expression. These modifications can be recognized by remodelling complexes and even transcription factors.
Further Reading:
Becker PB, Workman JL. 2013. Nucleosome Remodeling and Epigenetics. Cold Spring Harb Perspect Biol 5(9):a017905. | {
"domain": "biology.stackexchange",
"id": 7551,
"tags": "cell-biology, dna, dna-replication"
} |
Maximum likelihood estimation vs calculating distribution parameters "manually" | Question: I'm sorry for asking probably elementary question, but I cannot understand how estimating probability distribution parameters using maximum likelihood estimation method differs from calculating these parameters from observed data manually.
For MLE we need to know the type of probability distribution anyway so why don't we just use the known formulas for calculating the corresponding parameters from observed data?
I believe that MLE is somehow a more general method, but I cannot see what the real advantage of MLE is compared to getting these parameters "manually".
Thanks for explanation.
Tomas
Answer: These "known formulas" that you are thinking about are precisely the ones that maximize the likelihood of the distribution (e.g. taking the mean of a sample gives you the maximum likelihood estimate of the $\mu$ parameter of a normal distribution fit to that sample).
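As a quick numerical sanity check of that claim (my sketch, with arbitrary synthetic data): draw a Gaussian sample, and verify that moving $\mu$ away from the sample mean in either direction only lowers the log-likelihood.

```python
import math
import random

random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(500)]  # synthetic sample

def normal_log_likelihood(mu, sigma, xs):
    # log of the product of N(mu, sigma^2) densities over the sample
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in xs)

mean = sum(data) / len(data)  # the "known formula" for mu
sigma = 2.0                   # hold sigma fixed; we only probe the mu direction

ll_at_mean = normal_log_likelihood(mean, sigma, data)
for delta in (-0.5, -0.01, 0.01, 0.5):
    assert normal_log_likelihood(mean + delta, sigma, data) < ll_at_mean
```

In the $\mu$ direction the log-likelihood is an exact downward parabola centred at the sample mean, which is why the closed formula and the maximum likelihood estimate coincide here.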
In many cases however, there aren't any closed formulas for the "best" parameters of distributions, in which case you have to follow an iterative optimization approach (e.g. generalized linear models), if that's what you meant to ask. | {
"domain": "datascience.stackexchange",
"id": 3898,
"tags": "probability, distribution, parameter-estimation"
} |
Are regular languages closed under sort (Parikh image)? | Question: Assume $L$ is a regular language over an ordered alphabet. Is the language built by taking every word in $L$ and sorting it always a regular language?
Answer: No. Counterexample: assuming $a < b$, we have $(ab)^\ast \xrightarrow{\;\;\text{sorted}\;\;} \{ a^n b^n \;|\; n \geqslant 0 \}$, which cannot be expressed by a regular expression, by the pumping lemma for regular languages. | {
"domain": "cs.stackexchange",
"id": 371,
"tags": "formal-languages, regular-languages"
} |
Why did we need relativity to derive $E=mc^2$? | Question: Okay, so the way I understand one of the "derivations" of $E=mc^2$ is roughly as follows:
We observe a light bulb floating in space. It appears motionless. It gives off a brief flash of light. We measure the frequency of this light, and conclude that the light bulb lost some amount of energy $E_1$.
Next, we imagine flying by this light bulb at 90% the speed of light (or something similar), and again observe the flash. Due to the Relativistic Doppler Effect, the frequency of the light is decreased, and so we observe that the bulb only lost a smaller amount of energy $E_2$.
Since the total amount of energy lost must be the same in both reference frames, we conclude that the bulb must have lost kinetic energy in the moving reference frame. Kinetic energy is $\frac{1}{2}mv^2$. Since $v$ did not decrease, $m$ must have decreased. Thus, when bodies lose energy, they also lose mass. QED.
So obviously it's a little more complicated than that, but my question is: why do we even need special relativity? Can't we replace "relativistic doppler effect" with "classical doppler effect" and still get a similar result? The exact relationship between $E$ and $m$ will change, but shouldn't the insight that a relationship exists be accessible even with only classical physics?
Answer: These thought experiments should always be made as simple as possible. Emitting in all directions is a needless complication. Consider Einstein's original derivation of E=mc^2 with only two light rays pointing in opposite directions (as Ron said above, this is a necessary assumption: it means the momentum of the light rays won't kick the bulb in one direction). You will find that, if you only put in the first term of the Relativistic Doppler Effect (the v/c term that, alone, is the classical Doppler effect), it cancels out. Only the v^2/c^2 and higher terms remain. Using only the classical Doppler effect produces no result. EDIT: The +/-v/c terms produce no net energy from the point of view of the moving frame, because adding the energies results in the cancellation of the +/-v/c terms (see Einstein's original paper), but these terms do produce a net change in the momentum of light in the right direction from the point of view of the moving system (see * below for how this net momentum is used in Rohrlich's derivation of E=mc^2).
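To see the cancellation numerically, here is a small sketch (mine, not from the original answer) of the two-ray bookkeeping, in units where the rest-frame flash carries total energy E = 1:

```python
import math

E = 1.0      # total energy of the two opposite light rays in the rest frame
beta = 0.3   # v/c of the moving observer

# relativistic Doppler factors for the forward- and backward-going rays
d_plus = math.sqrt((1 + beta) / (1 - beta))
d_minus = math.sqrt((1 - beta) / (1 + beta))
E_moving_rel = (E / 2) * (d_plus + d_minus)

# classical Doppler: keep only the first-order (1 +/- v/c) factors
E_moving_cls = (E / 2) * ((1 + beta) + (1 - beta))

gamma = 1 / math.sqrt(1 - beta**2)

assert abs(E_moving_cls - E) < 1e-12          # +/- v/c terms cancel: no effect
assert abs(E_moving_rel - gamma * E) < 1e-12  # relativistic: a factor of gamma
# the leftover (gamma - 1)*E is E*v^2/(2c^2) to leading order, the kinetic term
assert abs((E_moving_rel - E) - E * beta**2 / 2) < beta**4
```

So the classical Doppler effect alone gives zero energy difference between the frames, while the relativistic factor leaves the second-order piece that gets identified with the change in kinetic energy.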
On the cancellation of the v/c term: (http://www.science20.com/curious_inquirer/blog/was_einstein_wrong-90405)
On Einstein's original paper:(http://www.science20.com/curious_inquirer/most_famous_equation_physics_and_its_derivation-89948)
The Original Paper on E=mc^2:( http://www.fourmilab.ch/etexts/einstein/E_mc2/www/)
*BUT! There is a derivation of E=mc^2 using the classical Doppler effect:(http://en.wikipedia.org/wiki/Mass%E2%80%93energy_equivalence#Alternative_version)
[The last comment was EDITED due to Alfred Centauri's comment below.] | {
"domain": "physics.stackexchange",
"id": 3977,
"tags": "special-relativity, doppler-effect"
} |
Binomial And Trigonometric Approximations Give Different Answers? | Question: So I was solving a question that is based on Simple Harmonic Motion. The question is as follows: There is a spring of spring constant $k$ and natural length $2a$ (it is in its natural length initially). It is then pulled by a very small distance $y$ from the centre perpendicular to its length. We define that the new length is $2\ell$.
Then from Pythagorean theorem, $\ell = \sqrt{a^2 + y^2}$. Now we know that $y\ll a$, so $\sqrt{a^2 + y^2} = a\sqrt{1+\frac{y^2}{a^2}}$ and $\frac{y^2}{a^2}\ll 1,$ so we can use the binomial approximation for $(1+x)^n\approx1+nx$ (if $x\ll 1$). Here n = $\frac{1}{2}$, so we get $\ell = a\sqrt{1+\frac{y^2}{a^2}}\approx a(1+\frac{y^2}{2a^2}) = a + \frac{y^2}{2a}$.
So extension is final length - original length = $\ell - a \approx \frac{y^2}{2a}$
Now let's consider a different approach. Let us drop a perpendicular from the original centre $O$ to the final configuration of the spring and let the foot of perpendicular be $P$, i.e. :
Let $\angle COP = \theta$
So now in $\triangle OPB$ we have $OP = OB\sin\theta = a\sin\theta$.
We also have $PB = OB\cos\theta = a\cos\theta = \ell-x$
Now in $\triangle OPC$ we have $PC = OP \tan\theta = a\sin\theta\tan\theta = x$
Since $y\ll a,\ \tan\theta = \frac{y}{a}\approx\sin\theta$ and $\cos\theta\approx1$
Now, extension in spring $= \ell-a$ and $\ell = x + \ell-x,$ so $\ell - a = x + \ell-x - a$
extension $= a\sin\theta\tan\theta + a\cos\theta - a \approx a(\frac{y}{a})(\frac{y}{a}) + a - a \approx \frac{y^2}{a}$
This gives a difference of factor of 2 in the extension of the spring, but how is that possible?
Another example is a slight displacement of a pendulum.
Here in $\triangle CSP,\ \frac{CS}{SP} = \cos\theta$. Thus $CS = SP\cos\theta = \ell\cos\theta$.
So its vertical height rise $= OC = OS - CS = \ell - \ell\cos\theta = \ell(1-\cos\theta),$ but $1-2\sin^2\theta = \cos2\theta$. Thus $1-\cos\theta = 2\sin^2(\frac{\theta}{2})$ and as $\theta$ is very small, $\sin\theta\approx\theta$, thus $2\ell\sin^2(\frac{\theta}{2}) \approx 2\ell{(\frac{\theta}{2})}^2 \approx \ell\frac{\theta^2}{2}$
But we can try to get it by another way. $\angle SPO$ is $90^\circ$ because it is $\approx$ tangent at $P$ on the circle with centre $S$ and radius $\ell$. So, the arc $OP \approx$ chord $OP$ because $\theta$ is very small. By definition of radians, we know that the length of arc $OP$ is $\ell\theta$. Next, since $\angle CPO = \theta$, so in $\triangle OPC,\ \frac{OC}{OP} = \tan\theta$. Thus $OC = OP\tan\theta = \ell\theta\tan\theta$. But as $\theta$ is very small, $\tan\theta \approx \theta$. So $OC = \ell\theta\tan\theta \approx \ell\theta^2$.
We again get a difference of factor of 2. What exactly is the problem here and how do I rectify it?
Answer:
extension $=a\sin\theta\tan\theta+a\cos\theta−a\approx a(\frac ya)\frac ya+a−a\approx \frac{y^2}a$
That is simply too rough an approximation, akin to kricheli's comment. Instead, you were so close to having done it properly:
$$\ell-a=a\sin\theta\tan\theta+a\cos\theta-a=a\left(\frac{\sin^2\theta+\cos^2\theta}{\cos\theta}-1\right)=a\left(\frac1{\cos\theta}-1\right)\approx\frac{y^2}{2a}$$
as required. Of course, you can either choose to do $\frac1{\cos\theta}\approx\frac1{1-\frac{\theta^2}2}\approx1+\frac{\theta^2}2$ or $\frac{1-\cos\theta}{\cos\theta}=\frac{2\sin^2\frac\theta2}{\cos\theta}\approx2\sin^2\frac\theta2$ (this last approximation holds because the $\sin^2\frac\theta2$ is already correct to $2^\text{nd}$ order and thus we do not need to expand the cosine in the denominator).
kricheli's comment is that you should have done
$$\ell-a=a\sin\theta\tan\theta+a\cos\theta-a\approx a\theta^2+a(1-\frac{\theta^2}2)-a=a\frac{\theta^2}2\approx \frac{y^2}{2a}$$
$\angle SPO\neq90^\circ$ because $SO=SP=\ell$ makes $\triangle SPO$ an isosceles triangle, i.e. $\angle SPO=\perp-\frac\theta2$
This means $\angle CPO=\angle SPO-\angle SPC=(\perp-\frac\theta2)-(\perp-\theta)=\frac\theta2$
This fixes everything. You had a small mistake: in $\triangle OPC,\ OC=OP\sin\angle CPO$ instead of tangent. For how small everything is, chord $OP \approx$ arc $OP,$ and so $OC\approx\ell\theta\frac\theta2=\ell\frac{\theta^2}2$ as required.
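Since the whole argument is about which approximation is correct to second order, a quick numerical check (my addition) settles the factor of 2: compare the exact expressions against both candidate formulas as the displacement shrinks.

```python
import math

# spring: exact extension sqrt(a^2 + y^2) - a vs the two candidate formulas
a = 1.0
for y in (1e-2, 1e-3, 1e-4):
    exact = math.sqrt(a**2 + y**2) - a
    good = y**2 / (2 * a)   # binomial / careful-geometry result
    bad = y**2 / a          # the too-rough result, off by a factor of 2
    assert abs(exact - good) < abs(exact - bad)
    assert abs(exact - good) / exact < 1e-3  # relative error vanishes as y -> 0

# pendulum: exact rise l*(1 - cos(theta)) vs the two candidate formulas
l = 1.0
for theta in (1e-2, 1e-3):
    exact = l * (1 - math.cos(theta))
    good = l * theta**2 / 2
    bad = l * theta**2
    assert abs(exact - good) < abs(exact - bad)
```

In both setups the $\frac{y^2}{2a}$ and $\ell\frac{\theta^2}{2}$ formulas converge to the exact values, while the factor-of-2-larger versions stay a fixed 100% off.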
Good work, just slight mistakes here and there, which is expected because the manipulations become so long. There is a lot of smart trickery that you can learn; because we have had to deal with these manipulations for so many centuries, we have discovered cheaper methods to get the correct answers. | {
"domain": "physics.stackexchange",
"id": 95463,
"tags": "homework-and-exercises, newtonian-mechanics, spring, geometry, approximations"
} |
Why don't forces on two surfaces of a fluid need to balance? | Question: In my textbook (Resnick Halliday Krane), the derivation for pressure in a fluid at a given depth is done assuming that the fluid is homogeneous. Thus, the book concludes that the pressure in a fluid is the same at all levels given that the fluid is homogeneous and not otherwise. I had two problems with this:
Consider a vessel in the following shape, with two different area of cross-section.
I know, from experience, that the level of water must be the same in the two arms. Indeed, that is necessary if the pressure is to be the same at the points $A$ and $B$ at the same level. But if we consider the situation in the following manner, there is a contradiction. Consider the whole volume of fluid between $A$ and $B$ in the tube as the fluid element.
As the pressure at $A$ and $B$ is the same ($p$, say), the force exerted by the fluid above $A$ on our fluid element will be $pA_A$ where $A_A$ is the cross-sectional area of the larger arm. Likewise, the force exerted by the fluid above $B$ on the fluid element will be $pA_B$ where $A_B$ is the cross-sectional area of the smaller arm.
But clearly, $pA_A > pA_B$ since the areas are different. How is the fluid then in equilibrium?
As pointed out in the comments below by LDC3, this setup is similar to that of a hydraulic press. So the question may be rephrased: if we put a large weight on one arm of the hydraulic press (the one with the larger area), and a small weight on the other (the one with the smaller area), why does the fluid in the press stay in equilibrium? There are clearly two unbalanced forces acting on it.
The book also presents the following argument to prove that for the pressure to be the same at two points in a fluid, the fluid must be homogeneous. Consider a typical u-tube but filled with three fluids of different densities. The densities are of the order: blue > green > red.
Now, the book says that as the fluid is in equilibrium the pressure at the interface (of red and blue fluids and green and blue fluids) must be equal in both arms. Thus the pressure exerted by the column of red fluid equals the pressure exerted by the column of green fluid (the heights are different as the densities of the fluids vary).
Now, consider two points at the same level. One point is in the green fluid and one point in the red fluid. Thus the points are not in the same fluid. Also, since the height of the two points is the same above the interface, but the densities are different, the drop in pressure from the interface to $A$ is less than the drop in pressure from the interface to $B$. Thus the pressure is different at $A$ and $B$.
But if we again consider the fluid between $A$ and $B$ as the fluid element, we may follow a similar argument as in the previous point to arrive at a contradiction. The cross-sectional area of the two arms is the same ($A$, say). Thus the force exerted by the fluid above $A$ on the fluid element is $p_AA$ where $p_A$ is the pressure at $A$. Likewise, the force exerted by the fluid above $B$ on the fluid element is $p_BA$, where $p_B$ is the pressure at $B$.
But $p_AA < p_BA$. How is the fluid then in equilibrium?
Answer: Take A to be a cylindrical fluid element of height $h_A$ and cross sectional area $A_A$, as the entire portion of the fluid above the section marked $A$.
Take B to be another cylindrical element of height $h_B$, with cross sectional area $A_B$, as the entire portion of the fluid above the section marked $B$.
As you have noticed, if the sections marked A and B are at the same level, then $h_A = h_B = h$
Assuming the density of the fluid is $\rho$, the weights of the elements are:
$$
W_A = \rho A_Ahg \\
W_B = \rho A_Bhg
$$
Since both elements are in equilibrium, the force due to the pressure from the fluid under them should be equal to their respective weights
$$
\rho A_A h g = p_A A_A \\
\therefore p_A = \rho h g \\
\rho A_B h g = p_B A_B \\
\therefore p_B = \rho h g \\
\bbox[5px,border: 1pt solid black]{\therefore p_A = p_B}
$$
The error in your reasoning is this: The forces on both the fluid elements do not need to be equal to each other. For an element to be in equilibrium requires only that the forces acting on that element add up to zero. Both elements are individually in equilibrium.
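The same bookkeeping can be checked numerically. Here is a small sketch (mine, with made-up numbers for water) showing that each column's supporting pressure works out to $\rho g h$ regardless of cross-sectional area, even though the supporting forces differ:

```python
rho = 1000.0  # kg/m^3, roughly water
g = 9.81      # m/s^2
h = 0.5       # both free surfaces at the same height above A and B

for area in (1e-4, 5e-3, 0.2):       # wildly different cross sections
    weight = rho * area * h * g      # weight of that fluid column
    p_bottom = weight / area         # pressure needed to support it
    assert abs(p_bottom - rho * g * h) < 1e-9  # area cancels out

# so p_A == p_B even though the supporting forces p*A_A and p*A_B differ:
p = rho * g * h
A_A, A_B = 0.2, 1e-4
assert p * A_A != p * A_B  # unequal forces on unequal areas...
# ...but each column is separately in equilibrium (force == its own weight)
```

Each column only has to balance its own weight, so the forces on the two columns never needed to be equal to each other in the first place.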
If you consider the portion of the fluid I've colored blue to be your element, your error lies in forgetting to consider reaction forces between the element and the bottom of the vessel in your free-body diagram. As it happens, the weight of the element plus any fluid elements above it will be balanced by the normal reaction between the fluid and the vessel. | {
"domain": "physics.stackexchange",
"id": 17595,
"tags": "pressure, fluid-statics"
} |
Why are gluons believed to be massless? | Question: Earlier questions under a similar title referred to the short range of the strong force. My question is completely different. I'd like to know why gluons are considered massless in the Standard Model (regardless of the range of the strong force). For example, if mass is a result of the Higgs interactions, then why do gluons not interact with the Higgs boson? In addition to the theoretical reasons, is there any experimental evidence of gluons being actually massless? (I do realize that such experiments are not easy, because of the confinement.)
In the past, neutrinos were also considered massless, but not anymore. Is there a similar possibility for gluons to have perhaps a small invariant mass or is there an overriding reason for them to be definitely massless?
EDIT: The answer I've been looking for is deep in the comments below and not immediately apparent. To make it clear, I am repeating it here: To interact with a boson, a particle (the Higgs) must have a charge mediated by this boson. The Higgs has a weak charge and therefore interacts with the W and Z bosons thus giving them mass. The Higgs does not have the electric or color charge and therefore does not interact with the photon or gluons thus leaving them massless.
Answer: Simply put, the Higgs isn't charged under the strong force. It doesn't have standard electrical charge either. The $W^{\pm},Z$ bosons acquire a mass through the Higgs mechanism because the Higgs itself is charged under the weak force. Leptons acquire masses through the Higgs mechanism because they too interact with the Higgs.
No Higgs interaction means no effective mass.
You ask why the Higgs does not interact with gluons. It has to do with the quantum numbers (charges) of the fundamental particles in the standard model. It turns out that you aren't allowed to freely choose quantum numbers for the different particles. If you made a bad choice, you'd violate gauge invariance and have an inconsistent theory. This puts relatively strict constraints on the allowed quantum numbers. Check out Gauge Anomaly for more details.
Basically, the known quantum numbers of the other standard model particles constrain the allowed quantum numbers of the Higgs, specifically prohibiting a gluon-Higgs interaction. If you wanted to add that interaction, you would necessarily imply the existence of other particles in order to balance all the charges of the theory. I don't know if that would be possible, but it's a matter of simple algebra to figure it out.
"domain": "physics.stackexchange",
"id": 43229,
"tags": "mass, speed-of-light, standard-model, higgs, gluons"
} |
TF_NO_FRAME_ID: Ignoring transform with child_frame_id | Question:
Hi, I've tried the pi_tracker package. I have no errors running skeleton.launch and tracker.launch. However, when I rosrun rviz, I met the following errors
[ERROR] [1316763642.701371303]: TF_NO_FRAME_ID: Ignoring transform with child_frame_id "/right_shoulder" from authority "/skeleton_tracker" because frame_id not set
[ERROR] [1316763642.701422097]: TF_NO_FRAME_ID: Ignoring transform with child_frame_id "/right_elbow" from authority "/skeleton_tracker" because frame_id not set
[ERROR] [1316763642.701475147]: TF_NO_FRAME_ID: Ignoring transform with child_frame_id "/right_hand" from authority "/skeleton_tracker" because frame_id not set
Originally posted by ychua on ROS Answers with karma: 71 on 2011-09-22
Post score: 1
Original comments
Comment by ychua on 2011-09-22:
In rviz, in TF display. I got status: warning "/openni_camera: No transform from [/openni_camera] to frame[/world] /openni_depth_frame: No transform from [/openni_depth_frame to frame[/world]" etc....
Comment by ychua on 2011-09-22:
The robot model in the rviz has no errors. I've also changed the 'skel_to_joint_map' parameter in the tracker_params.yaml, to correspond with the joint names with my robot urdf model. I also changed the line in the 'self.cmd_joints.position[self.cmd_joints.name.index('myRobot_jointName')] = right_shoulder_lift_angle' in the tracker_joint_controller.py to my robot urdf's joint name.
Is there anywhere else I need to change in the code? Thanks a lot!
Comment by ychua on 2011-09-22:
[ERROR] [1316763642.701371303]: TF_NO_FRAME_ID: Ignoring transform with child_frame_id "/right_shoulder" from authority "/skeleton_tracker" because frame_id not set
[ERROR] [1316763642.701422097]: TF_NO_FRAME_ID: Ignoring transform with child_frame_id "/right_elbow" from authority "/skeleton_tracker" because frame_id not set
[ERROR] [1316763642.701475147]: TF_NO_FRAME_ID: Ignoring transform with child_frame_id "/right_hand" from authority "/skeleton_tracker" because frame_id not set
Answer:
You need to change the fixed frame and target frame to something connected in rviz. See the user guide
Originally posted by tfoote with karma: 58457 on 2011-11-27
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 6754,
"tags": "ros, pi-tracker, skeleton-tracker"
} |
In a state $\alpha \vert HL \rangle + \beta \vert VR \rangle$ can a particle be measured in the $\{|L\rangle,|R\rangle\}$ basis? | Question: Given a pair of entangled particles:
$\alpha \vert HL \rangle + \beta \vert VR \rangle$
where:
$\alpha^2 + \beta^2=1$
$H$ and $V$ are orthonormal vectors.
$L$ and $R$ are orthogonal vectors (but not necessarily orthonormal).
Is it possible to measure one of the particles in the $\{|L\rangle,|R\rangle\}$ basis using a polarizer (without any normalization etc.)? Or does quantum mechanics forbid it? And if yes, is the probability of it being $L$ or $R$ given by $\alpha^2$ and $\beta^2$ respectively?
Answer: The overall state must have unit norm, i.e. for $|\psi\rangle=\alpha |HL\rangle+\beta|VR\rangle$ we should have $\langle\psi|\psi\rangle=1$. If I understood the question, by your assumption $\langle H |V\rangle=0$, and $\langle L |R\rangle=0$, hence
$$\langle\psi|\psi\rangle=|\alpha|^2\langle H|H\rangle \langle L|L\rangle +|\beta|^2\langle V|V\rangle \langle R|R\rangle=1$$
Assuming further that $|H\rangle$ and $|V\rangle$ have unit norm and $|\beta|^2=1-|\alpha|^2$ this reduces to
$$|\alpha|^2\langle L|L\rangle +(1-|\alpha|^2)\langle R|R\rangle=1$$
Although this constraint is satisfied when $|L\rangle$ and $|R\rangle$ have unit norm, they do not need to have unit norm; they are only required to satisfy the constraint.
With all that said, defining normalized versions of $|R\rangle$, $|L\rangle$ and adjusting the coefficients $\alpha,\beta$ correspondingly would probably make the analysis much simpler.
----------(addition)
If the vectors $|L\rangle, |R\rangle$ are not normalized, then the coefficients $|\alpha|^2, |\beta|^2$ in the decomposition $|\psi\rangle=\alpha |L\rangle+\beta |R\rangle$ do not correspond to measurement probabilities. Consider a simple example $|L\rangle = \frac1{\sqrt{2}}|0\rangle, |R\rangle = \sqrt{\frac{3}{2}}|1\rangle$ and
$$|\psi\rangle=\frac{1}{\sqrt{2}}|L\rangle+\frac{1}{\sqrt{2}}|R\rangle=\frac12|0\rangle+\frac{\sqrt{3}}{2}|1\rangle$$
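As a quick numeric check of this example (pure Python; the variable names are ours):

```python
import math

# |L> = (1/sqrt(2))|0>,  |R> = sqrt(3/2)|1>, both non-unit-norm,
# expressed as components in the {|0>, |1>} basis.
L = [1 / math.sqrt(2), 0.0]
R = [0.0, math.sqrt(3 / 2)]

alpha = beta = 1 / math.sqrt(2)      # decomposition coefficients
psi = [alpha * L[i] + beta * R[i] for i in range(2)]

# Measurement probabilities come from the (normalized) components of psi,
# not from |alpha|^2 and |beta|^2.
norm_sq = sum(c * c for c in psi)    # here psi happens to be normalized already
p0 = psi[0] ** 2 / norm_sq
p1 = psi[1] ** 2 / norm_sq
print(p0, p1)                        # approximately 0.25 and 0.75
```

Note the decomposition coefficients stay at $|\alpha|^2=|\beta|^2=1/2$ even though the outcome probabilities are $1/4$ and $3/4$.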
The actual measurement probabilities are $\frac14,\frac34$. It is not clear to me, though, why go to the trouble of working with non-normalized states at all in this context. | {
"domain": "quantumcomputing.stackexchange",
"id": 4015,
"tags": "quantum-state, entanglement, measurement"
} |
Why do ostriches have wings? | Question: Wings serve most birds for flying, or (as in penguins) for swimming. But ostriches, which exclusively use only their legs for locomotion, still have wings. Why?
Answer: There is a common misconception that a selective process is needed to remove a feature from a population. However, the correct approach is quite the opposite: selective processes normally maintain a feature in a population: if a given feature stops promoting selective advantage, its corresponding genes start to "erode" and the feature disappears from the population (and that can happen in different rates, depending on several factors: have in mind that maladaptative features do exist).
That being said, the maintenance of wings in ostriches indicates us that there must be a reason for it.
It has been hypothesised that wings:
Improve balance when running;
Help regulate temperature;
Serve as a mating display.
(source: University of Washington) | {
"domain": "biology.stackexchange",
"id": 6430,
"tags": "evolution, zoology, ornithology"
} |
Approach for training multilingual NER | Question: I am working on multilingual (English, Arabic, Chinese) NER and I ran into a problem: how should I tokenize the data?
My train data provides sentence and list of spans for each named entity.
e.g.
[('The', 'DT'),
('company', 'NN'),
('said', 'VBD'),
('it', 'PRP'),
('believes', 'VBZ'),
('the', 'DT'),
('suit', 'NN'),
('is', 'VBZ'),
('without', 'IN'),
('merit', 'NN'),
('.', '.')]
[('你', 'PN'),
('有', '.'),
('没', 'AD'),
('有', '.'),
('用', 'VV'),
('过', '.'),
('其它', 'DT'),
('药品', 'NN'),
('?', 'PU')]
What is the best way to tokenize the input data? These are the main alternatives I am considering: word level, wordpiece level, BPE.
BPE does not work with Chinese and Arabic because of Unicode issues. I have doubts about the word level because I am not sure what counts as a word in Chinese.
What can you recommend?
Answer: First, some clarification: BPE does work with Chinese and Arabic.
The only problem with Chinese is that there are no blanks between words, and therefore there is no explicit word boundary. In order to address that problem, normally you would segment words before applying BPE. For that, the typical approach is to use Jieba or any of its multiple ports to other programming languages. Other languages without blanks, like Japanese, may have their own tools to perform word segmentation.
Now, the answer:
Subword vocabularies are the norm nowadays. The norm also consists of finetuning one of the many pre-trained neural models available (BERT, XLNet, RoBERTa, etc), and the tokenization is imposed by the model you choose.
BPE is, in general, a popular choice for tokenization, no matter the language. Lately, the unigram tokenization is becoming popular also.
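To make BPE concrete, here is a minimal sketch of the classic pair-count-and-merge loop in pure Python (the toy corpus, function names, and merge count are ours, not from any particular library):

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Learn BPE merges from a word-frequency dict; keys are tuples of symbols."""
    vocab = Counter(words)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Rewrite every word, fusing each occurrence of the best pair.
        merged = {}
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            merged[tuple(out)] = merged.get(tuple(out), 0) + freq
        vocab = merged
    return merges, vocab

# Toy corpus. BPE operates on any symbol sequence, so it applies to Chinese
# characters just as to Latin letters once word segmentation is done.
corpus = {tuple("lower"): 5, tuple("lowest"): 2, tuple("low"): 7}
merges, vocab = bpe_merges(corpus, 3)
print(merges)
```

Real implementations (e.g. SentencePiece or the Huggingface tokenizers library) add byte-level fallback, frequency thresholds, and much faster data structures, but the core idea is this iterated merge of the most frequent adjacent pair.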
I suggest you take a look at the recent tner library, which builds on top of Huggingface Transformers to make NER very easy. | {
"domain": "datascience.stackexchange",
"id": 9206,
"tags": "nlp, named-entity-recognition"
} |
Gluons pions and the strong nuclear force | Question: Within Baryons is the force holding them together purely gluons or are pions involved?
Between protons and neutrons in a nuclei is it purely the exchange of pions between baryons which hold the baryons together?
Is the force between gluons the strong force? and the force between baryons the residual nuclear force?
I feel sources online talk about each of these questions without being concise and clear about the overall strong force
Answer: It is the strong nuclear force that binds quarks together into protons and neutrons; it is stronger than any of the other known fundamental forces at 10^-15 m distances.
The strong force is observable at two distance scales, mediated by two force carriers.
On the larger scale, 1 to 3 fm, it is mediated by mesons and is the force that binds protons and neutrons; it is called the residual strong force or nuclear force.
On the shorter range, about 0.8 fm, it is the force, mediated by gluons, that binds quarks together to form protons, neutrons, and other hadrons. It acts between quarks, antiquarks, and other gluons.
Now, the residual strong force (between neutrons and protons), has a really interesting characteristic when it is about distance.
It is very strong at about 1 fm, but it decreases very fast, until it is no longer significant at 2.5 fm.
At distances less than 0.7 fm, it becomes repulsive. This is why nuclei have their physical size: neutrons and protons cannot come any closer than this distance.
This is very interesting because the strong force (color force) itself (between quarks and gluons) is attractive only.
This is why we call the nuclear force a residual effect of the fundamental strong force or color force.
Similarly, even though nucleons are made of quarks in combinations which cancel most gluon forces (they are "color neutral"), some combinations of quarks and gluons nevertheless leak away from nucleons, in the form of short-range nuclear force fields that extend from one nucleon to another nearby nucleon. These nuclear forces are very weak compared to direct gluon forces ("color forces" or strong forces) inside nucleons, and the nuclear forces extend only over a few nuclear diameters, falling exponentially with distance. Nevertheless, they are strong enough to bind neutrons and protons over short distances, and overcome the electrical repulsion between protons in the nucleus.
Now, you are asking about the mediators.
inside the neutron and proton, the color force (strong interaction) is mediated by gluons between quarks, antiquarks, and other gluons
the nuclear force (residual strong force) between neutrons and protons is mediated by virtual light mesons, pions, rho and omega. The difference between them is that the rho and omega are responsible for the spin dependence of the nuclear force. Let me explain that a little bit more.
You asked if the nuclear force is mediated by pions. Yes, but not exclusively. Neutrons and protons were thought to be identical other than the EM charge, but this is not completely true: neutrons are heavier, and the two differ in a property we call isospin.
Now let's clarify the mediator a little bit. You would think that these interactions are mediated by real, actual particles that we call the pion, rho, and omega. Unfortunately, it is a little bit more complicated. The residual strong force is mediated by virtual mesons. What this means is that these are just a mathematical artifact, a mathematical description of the interaction, and not actual real particles. We do not actually know physically how these interactions are mediated, so we do not know whether real pions are actually flying between the neutron and the proton. We do not know what, if anything, is actually flying between them. What we do is use virtual particles to describe the interaction between them, and fit them to the experimental data. Since the data match our theories of these virtual particles, we take them as an accurate description of the force.
Analogously to the EM force, where we say that virtual photons mediate the EM force: these virtual photons are not real, actual particles, but a mathematical description of the EM interaction.
This is the same inside the neutron and proton. The color force is mediated by virtual gluons between real quarks, antiquarks, and real gluons. Virtual gluons are a mathematical description of the color force interactions. | {
"domain": "physics.stackexchange",
"id": 57305,
"tags": "strong-force, gluons, baryons"
} |
What are dreams, biologically? | Question: Falling asleep or states of subconsciousness does not stop the mind from making its own fictional images. These seem like sensations just like those received from human eyes. But, how do we define these visual sensations biologically?
Answer: Short answer
In terms of visual function, the low-tier primary visual cortex and high-tier frontal cortex are inactivated. The activity of the intermediate ventral stream and limbic regions are increased, apparently uncoupling low- and high-level vision processing from the system.
Background
The sleep stage where visualizations (dreaming) occur is called the rapid-eye movement (REM) sleep.
Brain imaging studies have shown pronounced effects of REM sleep on the functioning of the visual system.
In the awake state, the primary visual cortex (V1, or striate cortex) is activated, which is mainly concerned with low-level visual processing such as contrast perception. This cortical region in turn projects to higher-level (extrastriate) regions for more complex processing of visual information.
During REM sleep, however, a selective activation of extrastriate visual cortices is observed. Particularly the ventral processing stream is activated, which is associated with perception and recognition in the awake state (Deubel et al., 1998). Also the limbic and paralimbic regions are active (the emotional brain centers).
Interestingly, activity in the primary visual cortex (V1) is attenuated, as well as the frontal association areas of the brain. V1 is the primary, most 'primitive' visual cortex, while the frontal association areas are the highest-tier cortex that integrates complex perceptual information across perceptual modalities (Braun et al., 1998).
Overall, this pattern suggests a model of REM sleep where visual association cortices and their paralimbic projections may operate as a closed system dissociated from the regions at either end of the visual hierarchy that mediate interactions with the external world (Braun et al., 1998).
References
- Braun et al., Science (1998); 279: 91-5
- Deubel et al., Vis Cognition (1998); 5(1/2), 81–107 | {
"domain": "biology.stackexchange",
"id": 4156,
"tags": "human-biology, neuroscience, vision, sensation, dreaming"
} |
Vacuum Manifold of $U(1)$ theory and Goldstone theorem | Question: I want to know if my understanding of the Goldstone theorem is correct.
What I know is that the number of Goldstone bosons is equal to the dimension of $G/H$, where $G$ is the symmetry of the Lagrangian before symmetry breaking and $H$ is the symmetry of the vacuum manifold
$$
\rho (h)\phi_0 = \phi_0 , \forall h\in H.
$$
Now as an example consider a potential
$$
V=(|\phi|^2-2r^2)^2 ~,~~~ \sqrt{2}\phi=\varphi+i\chi
$$
Where $\phi$ is in the fundamental representation of $U(1)$. Now solving $\delta V(\phi_0)=0$ gives
$$
\varphi^2_0 + \chi^2_0 = r^2 \implies (\varphi_0,\chi_0)\in S^1 \cong SO(2)
$$
So in this case do I have
$$
\color{red}{H=\mathbb{Z}~~?}
$$
Because
$$
e^{i\theta}\phi_0 = \phi_0 ~,~~~\forall \theta = 2n\pi ~~~?
$$
Answer: 1) You found the minimum of the potential:
$$
V^\prime (\phi_0)= 0 \Rightarrow |\phi_0|^2 = 2r^2
$$
2) You choose a minimum, for example:
$$
\phi_0 = \sqrt{2} r
$$
3) This minimum is not invariant under the action of any nontrivial subgroup of the initial $U(1)$ group. So $H = \{1\}$ is the trivial group, containing only the identity element.
4) So $dim(G/H) = dim (U(1)) = 1 $ and we have one Goldstone boson.
5) Actually, one can easily check this by considering $\phi$ in the following form:
$$
\phi = \varphi e^{i\chi}
$$
It is trivial to see that the potential doesn't depend on the field $\chi$, so it can't generate a mass for $\chi$ after expanding around the minimum of the potential. | {
"domain": "physics.stackexchange",
"id": 65158,
"tags": "quantum-field-theory, symmetry, field-theory, group-theory, symmetry-breaking"
} |
Excel VBA to Fetch email addresses from inbox and sent items folders | Question: I am using the following VBA to fetch multiple specified email addresses from the Inbox and Sent Items folders, also including CC and BCC.
For example, (gmail.com;yahoo.com) must return all mails having those domain names.
The problem is that it takes a whole lot of time: if a person has 2k emails (overall), he might have to wait for approx. 2 hours.
The internet speed isn't an issue, and it gives the desired output of specified email addresses.
Checking some sources on how to make code faster, I got to know about the Restrict function, applied through a DASL filter, to limit the number of items in a loop. I applied the same, but the result is still the same and fetching is still slow.
As someone new to VBA, I don't know much about optimization and am still learning.
Are there any other sources or ways to make the fetching and execution faster?
Code given for reference:
Option Explicit
Sub GetInboxItems()
'all vars declared
Dim ol As Outlook.Application
Dim ns As Outlook.Namespace
Dim fol As Outlook.Folder
Dim i As Object
Dim mi As Outlook.MailItem
Dim n As Long
Dim seemail As String
Dim seAddress As String
Dim varSenders As Variant
'for sent mails
Dim a As Integer
Dim b As Integer
Dim objitem As Object
Dim take As Outlook.Folder
Dim xi As Outlook.MailItem
Dim asd As String
Dim arr As Variant
Dim K As Long
Dim j As Long
Dim vcc As Variant
Dim seemail2 As String
Dim seAddress2 As String
Dim varSenders2 As Variant
Dim strFilter As String
Dim strFilter2 As String
'screen wont refresh untill this is turned true
Application.ScreenUpdating = False
'now assigning the variables and objects of outlook into this
Set ol = New Outlook.Application
Set ns = ol.GetNamespace("MAPI")
Set fol = ns.GetDefaultFolder(olFolderInbox)
Set take = ns.GetDefaultFolder(olFolderSentMail)
Range("A3", Range("A3").End(xlDown).End(xlToRight)).Clear
n = 2
strFilter = "@SQL=" & Chr(34) & "urn:schemas:httpmail:fromemail" & Chr(34) & " like '%" & seemail & "'"
strFilter2 = "@SQL=" & Chr(34) & "urn:schemas:httpmail:sentitems" & Chr(34) & " like '%" & seemail2 & "'"
'this one is for sent items folder where it fetches the emails from particular people
For Each objitem In take.Items.Restrict(strFilter2)
If objitem.Class = olMail Then
Set xi = objitem
n = n + 1
seemail2 = Worksheets("Inbox").Range("D1")
varSenders2 = Split(seemail2, ";")
For K = 0 To UBound(varSenders2)
'this is the same logic as the inbox one where if mail is found and if the mail is of similar kind then and only it will return the same
If xi.SenderEmailType = "EX" Then
seAddress2 = xi.Sender.GetExchangeUser.PrimarySmtpAddress
If InStr(1, seAddress2, varSenders2(K), vbTextCompare) Then
Cells(n, 1).Value = xi.Sender.GetExchangeUser().PrimarySmtpAddress
Cells(n, 2).Value = xi.SenderName
ActiveSheet.UsedRange.RemoveDuplicates Columns:=Array(1, 2), Header:=xlYes
On Error Resume Next
Range("A3:A9999").Select
Selection.SpecialCells(xlCellTypeBlanks).EntireRow.Delete
End If
'this is the smpt address (regular address)
ElseIf xi.SenderEmailType = "SMTP" Then
seAddress2 = xi.SenderEmailAddress
If InStr(1, seAddress2, varSenders2(K), vbTextCompare) Then
Cells(n, 1).Value = xi.SenderEmailAddress
Cells(n, 2).Value = xi.SenderName
ActiveSheet.UsedRange.RemoveDuplicates Columns:=Array(1, 2), Header:=xlYes
On Error Resume Next
Range("A3:A9999").Select
Selection.SpecialCells(xlCellTypeBlanks).EntireRow.Delete
End If
'this one fetches the cc part recipient denotes cc
For j = xi.Recipients.Count To 1 Step -1
If (xi.Recipients.Item(j).AddressEntry.Type = "EX") Then
vcc = xi.Recipients.Item(j).Address
If InStr(1, vcc, varSenders2(K), vbTextCompare) Then
Cells(n, 1).Value = xi.Recipients.Item(j).AddressEntry.GetExchangeUser.PrimarySmtpAddress
Cells(n, 2).Value = xi.Recipients.Item(j).Name
ActiveSheet.UsedRange.RemoveDuplicates Columns:=Array(1, 2), Header:=xlYes
On Error Resume Next
Range("A3:A9999").Select
Selection.SpecialCells(xlCellTypeBlanks).EntireRow.Delete
End If
Else
vcc = xi.Recipients.Item(j).Address
If InStr(1, vcc, varSenders2(K), vbTextCompare) Then
Cells(n, 1).Value = xi.Recipients.Item(j).Address
Cells(n, 2).Value = xi.Recipients.Item(j).Name
ActiveSheet.UsedRange.RemoveDuplicates Columns:=Array(1, 2), Header:=xlYes
On Error Resume Next
Range("A3:A9999").Select
Selection.SpecialCells(xlCellTypeBlanks).EntireRow.Delete
End If
End If
Next j
Else: seAddress2 = ""
End If
For a = 1 To take.Items.Count
n = 3
'this also fetches the recipient emails
If TypeName(take.Items(a)) = "MailItem" Then
For b = 1 To take.Items.Item(a).Recipients.Count
asd = take.Items.Item(a).Recipients(b).Address
If InStr(1, asd, varSenders2(K), vbTextCompare) Then
Cells(n, 1).Value = asd
Cells(n, 2).Value = take.Items.Item(a).Recipients(b).Name
n = n + 1
End If
Next b
End If
Next a
Next K
End If
Next objitem
For Each i In fol.Items.Restrict(strFilter)
If i.Class = olMail Then
Set mi = i
'objects have been assigned and can be used to fetch emails
seemail = Worksheets("Inbox").Range("D1")
varSenders = Split(seemail, ";")
n = n + 1
For K = 0 To UBound(varSenders)
'similar logic as above
If mi.SenderEmailType = "EX" Then
seAddress = mi.Sender.GetExchangeUser().PrimarySmtpAddress
If InStr(1, seAddress, varSenders(K), vbTextCompare) Then
Cells(n, 1).Value = mi.Sender.GetExchangeUser().PrimarySmtpAddress
Cells(n, 2).Value = mi.SenderName
ActiveSheet.UsedRange.RemoveDuplicates Columns:=Array(1, 2), Header:=xlYes
On Error Resume Next
Range("A3:A9999").Select
Selection.SpecialCells(xlCellTypeBlanks).EntireRow.Delete
End If
ElseIf mi.SenderEmailType = "SMTP" Then
seAddress = mi.SenderEmailAddress
If InStr(1, seAddress, varSenders(K), vbTextCompare) Then
Cells(n, 1).Value = mi.SenderEmailAddress
Cells(n, 2).Value = mi.SenderName
ActiveSheet.UsedRange.RemoveDuplicates Columns:=Array(1, 2), Header:=xlYes
On Error Resume Next
Range("A3:A9999").Select
Selection.SpecialCells(xlCellTypeBlanks).EntireRow.Delete
End If
For j = mi.Recipients.Count To 1 Step -1
If (mi.Recipients.Item(j).AddressEntry.Type = "EX") Then
vcc = mi.Recipients.Item(j).Address
If InStr(1, vcc, varSenders(K), vbTextCompare) Then
Cells(n, 1).Value = mi.Recipients.Item(j).AddressEntry.GetExchangeUser.PrimarySmtpAddress
Cells(n, 2).Value = mi.Recipients.Item(j).Name
ActiveSheet.UsedRange.RemoveDuplicates Columns:=Array(1, 2), Header:=xlYes
On Error Resume Next
Range("A3:A9999").Select
Selection.SpecialCells(xlCellTypeBlanks).EntireRow.Delete
End If
Else
vcc = mi.Recipients.Item(j).Address
If InStr(1, vcc, varSenders(K), vbTextCompare) Then
Cells(n, 1).Value = mi.Recipients.Item(j).Address
Cells(n, 2).Value = mi.Recipients.Item(j).Name
ActiveSheet.UsedRange.RemoveDuplicates Columns:=Array(1, 2), Header:=xlYes
On Error Resume Next
Range("A3:A9999").Select
Selection.SpecialCells(xlCellTypeBlanks).EntireRow.Delete
End If
End If
Next j
Else: seAddress = ""
End If
Next K
End If
Next i
ActiveSheet.UsedRange.RemoveDuplicates Columns:=Array(1, 2), Header:=xlYes
On Error Resume Next
Range("A3:A9999").Select
Selection.SpecialCells(xlCellTypeBlanks).EntireRow.Delete
Set take = Nothing
Set mi = Nothing
Application.ScreenUpdating = True
End Sub
Answer: Performance
Each statement that references ActiveSheet affects performance/speed. Rather than using the ActiveSheet repeatedly, create a named sheet and assign it to a sheet variable. Also create Range variables; all of the Select statements affect performance. Any Select statements should be outside loops if possible.
Use With statements to speed up internal operations.
DRY Code
There is a programming principle called the Don't Repeat Yourself Principle, sometimes referred to as DRY code. If you find yourself repeating the same code multiple times, it is better to encapsulate it in a function. If it is possible to loop through the code, that can reduce repetition as well.
Complexity
There is only one subroutine, and when I copied it, it was 241 lines long. The general rule in programming is that no function or subroutine should be larger than a single screen in an editor, because it is too difficult to understand large subroutines or functions. Break the subroutine up into smaller subroutines or functions that do exactly one thing. Localize the variables to the subroutines they are needed in. There should probably be one subroutine for the Inbox and one subroutine for the Sent mails.
Another reason to break up the function is that it is very difficult to identify where any bottlenecks are (things that slow down the code) in a large subroutine.
There is also a programming principle called the Single Responsibility Principle that applies here. The Single Responsibility Principle states:
that every module, class, or function should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by that module, class or function.
The art or science of programming is to break problems into smaller and smaller pieces until each piece is very simple to code. | {
"domain": "codereview.stackexchange",
"id": 43139,
"tags": "performance, vba, excel, outlook"
} |
Simulating Es/N0 by adding noise to analog signal | Question: I believe I am familiar with simulating $E_s/N_0$ by adding noise to the output of the matched filter, but what would be the correct way to simulate $E_s/N_0$ starting with the analog signal at the receive antenna and corresponding to a transmitted signal of the following form:
$$x(t)=\sum_k x_kp(t-kT)$$
where $x_k$ are the channel symbols and $p(t)$ is the pulse shaping waveform?
My understanding is that if I have AWGN with power spectral density $N_0$, the output of the matched filter is a Gaussian random variable with variance $N_0$. As for $E_s$, would this depend on the energy $E\{|x_k|^2\}$ of the channel symbols, the energy $\int |p(t)|^2 dt$ of the pulse shape, and path loss between the transmitter and the receiver?
Answer: $E_s$ would be the energy of the channel symbols at the output of the matched filter, as scaled by the path loss between transmitter and receiver, as long as all timing and carrier offsets have been corrected for and channel equalization is complete. Consider how, after matched filtering with carrier and timing offsets removed, we use just one sample per symbol at the symbol decision.
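As a concrete sketch of this energy bookkeeping (pure Python; the BPSK symbols, rectangular pulse, sample rate, and path-loss value are illustrative assumptions, not from the question):

```python
import math, random

random.seed(0)

# Estimate received symbol energy Es from a sampled waveform and compare it
# with the closed form g^2 * E[|x_k|^2] * integral(|p(t)|^2 dt).
fs = 8                     # samples per symbol
T = 1.0                    # symbol period
dt = T / fs
g = 0.5                    # path-loss amplitude scaling (assumed value)
p = [1.0] * fs             # rectangular pulse; energy = sum(v^2)*dt = 1.0

num_sym = 2000
symbols = [random.choice([-1.0, 1.0]) for _ in range(num_sym)]  # BPSK, E|x|^2 = 1

# Received (noiseless, ISI-free) waveform: non-overlapping pulses scaled by g.
rx = []
for x in symbols:
    rx.extend(g * x * v for v in p)

# Energy per symbol measured from the waveform vs. the formula.
Es_measured = sum(v * v for v in rx) * dt / num_sym
Es_formula = g**2 * 1.0 * sum(v * v for v in p) * dt
print(Es_measured, Es_formula)   # 0.25 0.25
```

So yes: $E_s$ picks up the symbol energy $E\{|x_k|^2\}$, the pulse energy $\int|p(t)|^2dt$, and the path-loss power scaling $g^2$, exactly the three factors listed in the question.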
If equalization for channel multipath is not yet corrected for and multipath distortion exists, this would not be the case since the symbol energy is still distributed across multiple samples within each symbol (and across symbols). | {
"domain": "dsp.stackexchange",
"id": 10769,
"tags": "digital-communications, noise, snr, matched-filter, channel"
} |
How to make sense of quantum fields differential equations? | Question: A quantum field is an operator valued function, that is, a function $\varphi(x)$ defined on spacetime which assigns operators on a Hilbert space to each event $x$. In a more rigorous approach a quantum field could be defined as an operator valued distribution on spacetime.
Anyway, it is quite common that these quantum fields obey differential equations, like the Klein-Gordon equation $$(\Box +m^2)\varphi=0$$ and Dirac's equation $$(i\gamma^\mu \partial_\mu - m)\psi=0.$$
In that sense, we need to understand what the derivative of a quantum field is, and this seems a little complicated.
Of course one can say: "a quantum field takes values in a Hilbert space so you can use the Frechet derivative", but it is not even clear what Hilbert space it is that the quantum fields takes values on. Also, as is clear from Quantum Mechanics, most of the operators we deal in QM are unbounded and hence discontinuous. I believe this would have a great impact on how should we deal with things like derivatives.
So, in order for quantum fields to satisfy differential equations, how is the correct way to define and understand the derivative of a quantum field? How can we make sense of quantum field differential equations?
Answer: The fields of a QFT are not functions of the spatial coordinates $\boldsymbol x\in\mathbb R^n$, but operator-valued distributions (borrowing Wightman's terminology). The notion of the fields being functions of time ("sharp-time fields") can be kept in general, but their dependence on $\boldsymbol x$ is "too singular", so that they become distributions; fields need be smeared out in space.
Therefore we should write $\phi[f]$ instead of $\phi(\boldsymbol x)$, in the same way we should write $\delta[f]$ for the Dirac delta. In this sense, the commutation relations
$$
[\phi(\boldsymbol x),\pi(\boldsymbol y)]=\delta(\boldsymbol x-\boldsymbol y)
$$
should be written as
$$
[\phi(f),\pi(g)]=(f,g)
$$
for a certain scalar product $(\cdot,\cdot)$ on your space of test functions (that depends on the spin of $\phi$).
Similarly, the field equations
$$
\dot\pi(\boldsymbol x)-\Delta \phi(\boldsymbol x)+m^2\phi(\boldsymbol x)=0
$$
should actually be written as
$$
\dot\pi[f]-\phi[\Delta f]+m^2\phi[f]=0
$$
More generally, the naïve field equations
$$
\mathscr D\phi(x)=0
$$
are nothing but a short-hand notation for
$$
\phi[\mathscr Df]=0
$$
for all $f$ in the domain of $\phi$. Therefore, the derivatives acting on fields are to be understood in the sense of distributional derivatives (if $T$ is a distribution, then we define $T'[f]\equiv -T[f']$, etc.).
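To see the definition $T'[f]\equiv -T[f']$ at work, here is a small numeric illustration, approximating the Dirac delta by a narrow Gaussian (the width, test function, and all names are our own choices):

```python
import math

EPS = 5e-3   # width of the nascent delta (assumed)

def nascent_delta(x):
    """Narrow normalized Gaussian approximating delta(x)."""
    return math.exp(-x * x / (2 * EPS * EPS)) / (EPS * math.sqrt(2 * math.pi))

def d_nascent_delta(x):
    """Classical derivative of the Gaussian above."""
    return -x / (EPS * EPS) * nascent_delta(x)

def integrate(h, a=-1.0, b=1.0, n=40001):
    """Simple Riemann sum; adequate here since the integrand decays fast."""
    dx = (b - a) / (n - 1)
    return sum(h(a + i * dx) for i in range(n)) * dx

f = lambda x: math.sin(3 * x) + 2      # smooth test function, f'(0) = 3
fp = lambda x: 3 * math.cos(3 * x)     # its derivative

lhs = integrate(lambda x: d_nascent_delta(x) * f(x))    # "delta'[f]"
rhs = -integrate(lambda x: nascent_delta(x) * fp(x))    # "-delta[f']"
print(lhs, rhs)   # both close to -f'(0) = -3
```

Both pairings agree with $-f'(0)$ up to small corrections from the finite width of the Gaussian, which is the content of the distributional definition above.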
For free theories, the whole framework of operator-valued distributions is perfectly well understood, and one may work with all mathematical rigour one may wish. For interacting theories though, we are far from a mathematically sound theory.
For more details see for example On Relativistic Irreducible Quantum Fields Fulfilling CCR or Haag's theorem in renormalised quantum field theories. Also, anything by Wightman (e.g., PCT, Spin and Statistics, and All That). | {
"domain": "physics.stackexchange",
"id": 39106,
"tags": "quantum-field-theory, mathematical-physics, differential-equations"
} |
Which parts of the cerebral cortex don't belong to the neocortex? | Question: In the Wikipedia article on the cerebral cortex one reads:
»Most of the cerebral cortex consists of the six-layered neocortex.«
Accordingly, in the Wikipedia list of regions in the human brain, one finds "cerebral cortex" and "neocortex" as almost synonyms.
But assuming that they are not perfect synonyms – but that there are some parts of the cerebral cortex that are not part of the neocortex – I wonder which parts this might specifically be.
To make my question specific, I list here all parts of the cerebral cortex as listed at Wikipedia:
Cerebral cortex (neocortex)
Frontal lobe
Primary motor cortex
Supplementary motor cortex
Premotor cortex
Prefrontal cortex
Orbitofrontal cortex
Dorsolateral prefrontal cortex
Superior frontal gyrus
Middle frontal gyrus
Inf. frontal gyrus
Brodmann areas
Parietal lobe
Primary somatosensory cortex (S1)
Secondary somatosensory cortex (S2)
Posterior parietal cortex
Postcentral gyrus (Primary somesthetic area)
Precuneus
Brodmann areas
Occipital lobe
Primary visual cortex (V1)
V2
V3
V4
V5/MT
Lateral occipital gyrus
Cuneus
Brodmann areas
Temporal lobe
Primary auditory cortex (A1)
secondary auditory cortex (A2)
Inf. temporal cortex
Posterior Inf. temporal cortex
Superior temporal gyrus
Middle temporal gyrus
Inf. temporal gyrus
Entorhinal cortex
Perirhinal cortex
Parahippocampal gyrus
Fusiform gyrus
Brodmann areas
Medial superior temporal area (MST)
Insular cortex
Cingulate cortex
Anterior cingulate
Posterior cingulate
Retrosplenial cortex
Indusium griseum
Subgenual area 25
Brodmann areas
If there are parts of the cerebral cortex missing here, a hint would be welcome!
Answer: The hippocampus is considered archicortex rather than neocortex. Sometimes hippocampus is considered "sub-cortical" but I think this is only really fair if you really mean "sub-neocortical"; it's common for "cortical" to be used as a shorthand for neocortex in human neurobiology. Writers will be more specific when necessary.
The piriform cortex and related olfactory structures are considered paleocortex.
Some also delineate an in-between periallocortex that includes entorhinal cortex and limbic structures around the hippocampus.
Neocortex is mostly a six-layered structure and is found only in mammals. The other, non-neocortical types of cortex have fewer layers, and were inherited from an older common ancestor shared with other vertebrates.
"domain": "biology.stackexchange",
"id": 10315,
"tags": "brain, terminology, neuroanatomy"
} |
Is there any faster way instead of using preg_match in the following code? | Question: This code finds whether the string 2004 occurs in <date_iso></date_iso>, and if so, I echo some data from the specific element where the search string was found.
I was wondering if this is the best/fastest approach, because my main concern is speed and the XML file is huge. Thank you for your ideas.
This is a sample of the XML:
<entry ID="4406">
<id>4406</id>
<title>Book Look Back at 2002</title>
<link>http://www.sebastian-bergmann.de/blog/archives/33_Book_Look_Back_at_2002.html</link>
<description></description>
<content_encoded></content_encoded>
<dc_date>20.1.2003, 07:11</dc_date>
<date_iso>2003-01-20T07:11</date_iso>
<blog_link/>
<blog_title/>
</entry>
This is the code:
<?php
$books = simplexml_load_file('planet.xml');
$search = '2004';
foreach ($books->entry as $entry) {
if (preg_match('/' . preg_quote($search) . '/i', $entry->date_iso)) {
echo $entry->dc_date;
}
}
?>
Answer: First of all, put up a timer so you know if things get better.
You're repeating '/' . preg_quote($search) . '/i' for each book. You should create the search string only once, or else you are wasting time:
<?php
$books = simplexml_load_file('planet.xml');
$search = '2004';
$regex = '/' . preg_quote($search) . '/i';
foreach ($books->entry as $entry) {
if (preg_match($regex, $entry->date_iso)) {
echo $entry->dc_date;
}
}
?>
If you are only looking for a literal string like 2004, you might check whether simpler functions such as strpos would be faster.
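As a hedged illustration (in Python rather than PHP, and not part of the original answer), the same two ideas carry over: build the pattern once outside the loop, and prefer a plain substring test when no regex features are needed:

```python
import re

# Sample ISO dates standing in for the <date_iso> values
dates = ["2003-01-20T07:11", "2004-06-01T12:00", "2004-12-31T23:59"]
search = "2004"

# Compile once, outside the loop (analogous to building the PHP
# regex string a single time instead of per iteration)
regex = re.compile(re.escape(search))
regex_hits = [d for d in dates if regex.search(d)]

# For a literal needle, plain containment (like PHP's strpos) suffices
substr_hits = [d for d in dates if search in d]

assert regex_hits == substr_hits
print(substr_hits)
```

For a fixed literal needle, both approaches give the same matches; the substring test just skips the regex machinery entirely.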
Also the /i modifier might be unnecessary. | {
"domain": "codereview.stackexchange",
"id": 511,
"tags": "php, xml"
} |
Does entanglement not immediately contradict the theory of special relativity? | Question: Does entanglement not immediately contradict the theory of special relativity? Why are people still so convinced nothing can travel faster than light when we are perfectly aware of something that does?
Answer: To answer this kind of question properly, it's important to clarify the foundational issues of why SR forbids superluminal speeds and what kind of superluminal speeds it forbids. There are several independent arguments of this kind that tell us several different things.
Superluminal transmission of information would violate causality, since it would allow a causal relationship between events that were spacelike in relation to one another, and the time-ordering of such events is different according to different observers. Since we never observe causality to be violated, we suspect that superluminal transmission of information is impossible. This leads us to interpret the metric in relativity as being fundamentally a statement of possible cause and effect relationships between events.
We observe the invariant mass defined by $m^2=E^2-p^2$ to be a fixed property of all objects. Therefore we suspect that it is not possible for an object to change from having $|E|>|p|$ to having $|E|<|p|$.
Composing a series of Lorentz boosts produces a velocity that approaches $c$ only as a limit. Therefore no continuous process of acceleration can bring an observer from $v<c$ to $v>c$. Since it's possible to build an observer out of material objects, it seems that it's impossible to get a material object past $c$ by a continuous process of acceleration.
If we could boost a material object past the speed of light, even by some discontinuous process, then we could do so for an observer. However, there is a no-go theorem, Gorini 1971, proving that this is impossible in 3+1 dimensions.
Entanglement doesn't violate any of these arguments. It doesn't violate #1, since it doesn't transmit information. It doesn't violate #2, #3, or #4, since it doesn't involve boosting any object past the speed of light.
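Argument 3 can also be checked numerically. In units where $c=1$, relativistic velocity composition is $w=(u+v)/(1+uv)$, and composing subluminal boosts never reaches $1$. A small Python sketch (my illustration, not part of the original answer):

```python
def add_velocities(u, v):
    """Relativistic velocity composition, in units where c = 1."""
    return (u + v) / (1 + u * v)

# Two boosts of 0.9c compose to roughly 0.9945c, still below c
w = add_velocities(0.9, 0.9)

# Repeated boosts of 0.5c only approach c as a limit
v = 0.0
for _ in range(20):
    v = add_velocities(v, 0.5)

print(w, v)
```

No finite sequence of boosts gets the result to $1$; it only converges toward it, which is the content of argument 3.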
V. Gorini, "Linear Kinematical Groups," Commun Math Phys 21 (1971) 150, open access at Link | {
"domain": "physics.stackexchange",
"id": 15011,
"tags": "quantum-mechanics, special-relativity, quantum-entanglement, faster-than-light"
} |
endothermic dissolution process | Question: Can anybody give me an example of an endothermic dissolution process, preferably one in which the names of the substances involved are easy to remember. I have searched the Web thoroughly but could not find an example, only definitions. Thanks in advance.
Answer: An example of an endothermic dissolution is dissolving potassium iodide in water.
Added: Here is a table of enthalpy of solution for various compounds. Mind the sign convention: a negative enthalpy of solution means energy is released on dissolving (exothermic), while a positive value means energy is absorbed (endothermic).
"domain": "chemistry.stackexchange",
"id": 1104,
"tags": "physical-chemistry, thermodynamics, everyday-chemistry"
} |
Can you increase data rate by increasing number of bits? | Question: My question is on data rate and baud rate.
data rate = baud rate * n
The question is: can you keep on increasing the data rate by increasing the baud rate or the number of bits?
I believe it can be increased by increasing the baud rate but what about the latter part?
Answer: Theoretically you can, but in reality you can not. The problem in raising the number of bits one symbol represents is that you have to space the symbol levels closer and closer together. Therefore even small amounts of random noise lead to misinterpreted symbols. A first attempt at calculating the maximum data rate is Nyquist's formula:
maximum data rate [bit/s] = $2H\cdot \log_{2}(V)$
Where $H$ is the bandwidth in $Hz$ and $V$ is the number of symbol values.
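As a quick illustration of Nyquist's formula (a sketch added here, not part of the original answer): with a fixed bandwidth, each doubling of the number of symbol levels $V$ adds only $2H$ bit/s.

```python
import math

def nyquist_rate(bandwidth_hz, levels):
    """Nyquist's noiseless maximum data rate: 2 * H * log2(V)."""
    return 2 * bandwidth_hz * math.log2(levels)

H = 3000  # a 3 kHz channel (illustrative value)
for V in (2, 4, 16, 256):
    print(V, nyquist_rate(H, V))
# The rate is linear in log2(V), which is why ever more
# symbol levels give diminishing returns.
```

Going from 2 to 256 levels multiplies the levels by 128 but the rate only by 8.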
This formula indicates that unlimited data rate can be achieved when enough symbol levels are used, but, as we have already seen, that is not possible because of random noise. Shannon's formula takes that into account:
maximum data rate [bit/s] = $H\cdot \log_{2}(1 + S/N)$
Where $S$ is the signal strength and $N$ is the noise level. | {
"domain": "cs.stackexchange",
"id": 4981,
"tags": "data-structures"
} |
Why are magnetic field lines closed? | Question: If there were two charges placed at a large distance, won't their (say) magnetic fields interact? What if that large distance is something like close to infinity? Even if they experience a feeble force from each other, doesn't that imply that the magnetic field lines are not closed, as the particle could have been placed anywhere around the source charge at any distance?
Would not this generally say that every electrically charged particle in the universe is part of an infinitely large magnetic field?
Answer: Magnetic monopoles don't exist: a north pole can never be extracted independent of a south pole. So independent sources and sinks of lines of force don't exist, unlike in electrostatics, where a positive charge can be isolated from a negative charge. Because sources and sinks don't exist, there is no concept of a beginning or termination of lines of force (unlike electrostatics, wherein lines of force begin on the positive charge and terminate on the negative charge). The only way a curve can have no beginning or ending is if the curve loops into itself. Hence magnetic lines of force form closed loops. Hope this helps. | {
"domain": "physics.stackexchange",
"id": 36643,
"tags": "magnetic-fields, magnetic-monopoles"
} |
Would wall sockets glow if human eye could see light at 50Hz? | Question: The title is self-explanatory. I know that the electrons at the tip of Live get pushed in and out with respect to Neutral. (You shouldn't say there is no current; since there is air, a poor conductor, but still a conductor)
Does this back and forth motion of electrons cause emittance of photons at 50/60Hz? If so, assuming we have gained some 50/60Hz light-sensitive cells on our retina due to pure chance, don't you think it would be fun to see a glow around wall sockets and wouldn't you use it as a sleeping lamp?
Answer: The problem isn't the socket, it's the eye. To answer directly, if the eye could see at 50 Hz, yep, you'd see it glowing. It appears that you can see single photons, on average, so the leakage from your wiring would be more than enough. But...
Very generally, to get reasonable "reception" you need an "antenna" that is resonant at the frequency in question. This is why old-school VHF TV antennas have multiple arms sticking out of them, each one is resonant at a different frequency so that it has broader frequency response. The shortest you can really go is about 1/4 the wavelength, although cell phones and such are often shorter.
Of course, human vision isn't based directly on little antennas but on visual receptor proteins like visual purple. These do have a wavelength-sensitive structure which makes them relatively highly tuned to specific frequencies. So in order to see 50 Hz, you would need cones with a new visual protein in them.
Assuming that protein has something like a quarter-wave dipole structure, to be sensitive to 50 Hz it would have to be about 1500 km long. The eyeball needed to hold that... well...
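A back-of-the-envelope check of that length (my calculation, not part of the original answer):

```python
c = 3.0e8  # speed of light in m/s (rounded)
f = 50.0   # mains frequency in Hz

wavelength = c / f             # 6.0e6 m, i.e. 6000 km
quarter_wave = wavelength / 4  # 1.5e6 m, i.e. 1500 km

print(quarter_wave / 1000, "km")
```

So a quarter-wave structure for 50 Hz is on the order of 1500 km, absurdly larger than any biological receptor.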
So, yes but no! | {
"domain": "physics.stackexchange",
"id": 61177,
"tags": "electromagnetic-radiation, photons, electrons"
} |
Why can't we run the laws of physics backwards and forwards in time infinitely? | Question: So assuming we know all the laws of physics in differential equation form, and I have an estimate for the current large scale state of the universe (whatever standard assumptions/data cosmologists use about the current large scale state of the universe in order to extrapolate the state of the universe on the large scale far into the future or far into the past... whatever standard assumptions are used to estimate that there was a big bang in the past)
It seems to me that I could plug these into my differential equations and find out the state of the universe infinitely far back or infinitely in the future.
So why couldn't I plug in a time 100 billion years before today (before the big bang) and find out the state of the universe far before the big bang?
Is there something in the theory/mathematics that forces the equations to begin at a certain time t(big bang)... and not allow us to extrapolate prior to that?
Answer: Let's not even talk about big bangs yet. Consider a simple non-linear ODE $\frac{dx}{dt}=-x^2$ with the condition $x(1)=1$. There is a unique maximal solution defined on a connected interval, which in this case is easily seen to be $x(t)=\frac{1}{t}$ for $t\in (0,\infty)$. Ouch. Even for such a simple looking ODE, a simple non-linearity already implies that our solution blows up in a finite amount of time, and we can't continue 'backwards' beyond $t=0$. You as an observer living in the 'future', i.e living in $(0,\infty)$ can no longer ask "what happened at $t=-1$?" The answer is that you can't say anything. Note that you can also cook up examples of ODEs for which solutions only exist for a finite interval of time $(t_1,t_2)$, and blowup as $t\to t_2^-$ or as $t\to t_1^+$.
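To see the blowup concretely, here is a small illustrative numerical integration (my addition, not from the original answer) of $\frac{dx}{dt}=-x^2$ backwards from $x(1)=1$. The numerical solution tracks the exact solution $x(t)=\frac{1}{t}$ and grows without bound as $t\to 0^+$:

```python
def integrate_backwards(t_end, x0=1.0, t0=1.0, h=1e-5):
    """Explicit Euler for dx/dt = -x**2, stepping from t0 down to t_end."""
    x, t = x0, t0
    while t > t_end:
        # Backwards step dt = -h: x(t - h) is approximately x(t) + h * x(t)**2
        x += h * x * x
        t -= h
    return x

x_half = integrate_backwards(0.5)   # exact value: 1/0.5 = 2
x_tiny = integrate_backwards(0.01)  # exact value: 1/0.01 = 100
print(x_half, x_tiny)
```

The closer `t_end` gets to 0, the larger the value, with no finite limit: the solution simply cannot be continued to $t=0$.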
The Einstein equations (which are PDEs, not merely ODEs) are a much bigger nonlinear mess. It is actually a general feature of nonlinear equations that solutions usually blow up in a finite amount of time. Of course, certain nonlinear equations have global-in-time existence of solutions, but a priori, there's no reason you should expect them to have that nice property. For instance, in the FRW solution of Einstein's equations, the scale factor $a(t)$ vanishes as $t\to t_0$ (if you plug in some simple matter models you can even see this analytically), and doing a bunch more calculations, you can show this implies some of the curvature components blow up. What this says is the Lorentzian metric cannot be extended in a $C^2$ sense. We can try to refine our notion of solution and singularity, but that would require a deep dive into the harshness of Sobolev spaces etc, and I don't want to open that can of worms here or now.
Anyway, my simple point is that it is very common to have ODEs which only have solutions that exist for a finite amount of time, so your central claim of
It seems to me that I could plug these into my differential equations and find out the state of the universe infinitely far back or infinitely in the future.
is just not true.
Edit:
@jensenpaull good point, and I was debating whether or not I should have elaborated on it originally, but since you asked, I'll do so now. Are there functions that satisfy the ODE $\frac{dx}{dt}=-x^2$ which are defined on a larger domain? Absolutely! Away from its pole, every solution of this ODE has the form $x(t)=\frac{1}{t-c}$, so our solution $x(t)=\frac{1}{t}$ on $(0,\infty)$ can be paired on $(-\infty,0)$ with $x(t)=\frac{1}{t-c}$ for any constant $c\geq 0$ (or even with $x\equiv 0$). So, we have completely lost uniqueness. But, why is this physically (and even mathematically in some regards) such a big deal?
In Physics, we do experiments, and that means we only have access to things 'here and now' (let's gloss over technical (but fundamental) issues and say we have the ability to gather perfect experimental data). One of the goals of Physics is to use this information, and predict what happens in the future/past. But if we lose uniqueness, then it means our perfect initial conditions are still insufficient to nail down what exactly happened/will happen, which is a sign that we don't know everything. We are talking about dynamics here, so our perfect knowledge 'initially' should be all that we require to talk about existence and uniqueness of solutions (otherwise, our theory is not well-posed). So, anything which is not uniquely predicted by our initial conditions cannot in any sense be considered physically relevant. Btw, such 'well-posedness' (in a certain class) questions are taken for granted in Physics, and occupy Mathematicians (heck, the Navier-Stokes Millennium problem is roughly speaking a question of well-posedness in a smooth setting). Dynamics is everywhere:
Newton's laws are 2nd-order ODEs and require two initial conditions (position, velocity). From there, we turn on our ODE solver, and see what the result is.
Maxwell’s electrodynamics: although in elementary E&M we simply solve various equations using symmetry, the fundamental idea is these are (linear, coupled) evolution equations for a pair of vector fields, which means we prescribe certain initial conditions (and boundary conditions) and then solve.
GR: initially, there was lots of confusion regarding what exactly a solution is. It wasn’t until the work of Choquet Bruhat (and Geroch) that we finally understood the dynamical formulation of Einstein’s equations, and that we had a good well-posedness statement and a firm understanding of how the initial conditions (a 3-manifold, a Riemannian metric, and a symmetric $(0,2)$-tensor field which is the to-be second-fundamental form of the embedding) give rise to a unique maximal solution (which is globally hyperbolic).
So, my first reason for why we don't continue past $t=0$ (though of course, the reasoning is not really specific to that ODE alone) has been that dynamics should be uniquely predicted by initial conditions. Hence, it makes no physical sense to go beyond $t=0$. The second reason is that in physics, nothing is 'truly infinite', and if it is, then our interpretation is that we don't yet have a complete understanding of what's going on. So, rather than trying to fix our solution, we should fix our equations (e.g. maybe the ODE isn't very physical). But before we throw out our equations, we may wonder: have we been too restrictive in our notion of solution? For instance, maybe it is too much to require solutions to be $C^1$. Could we for instance require only weaker regularity of $L^2=H^0$ or $H^1$? Well, $H^1$-regularity is indeed more natural for many physical purposes (because $H^1$-regularity means 'energy stays finite'). However, for this solution, we can see that $\frac{1}{2}\int_0^{\infty}|x(t)|^2+|\dot{x}(t)|^2\,dt=\infty$. In fact, this is so bad that for any $\epsilon>0$, $\int_0^{\epsilon}[\dots]\,dt=\infty$, so the origin is a truly singular point at which even the energy blows up. So, there's no physical sense in continuing past that point.
"domain": "physics.stackexchange",
"id": 93695,
"tags": "space-expansion, big-bang, differential-equations, determinism, laws-of-physics"
} |
Why isn't there an atom with an electron at the center and a proton on the outside? | Question: I can understand that the role of electrons and protons are not interchangeable for elements heavier than Hydrogen, because there is no nuclear force to keep the charged electrons together in a nucleus, but at least for the Hydrogen atom, it should be possible to exchange the proton and the electron? Is there a reason this element doesn't exist?
Answer: When we solve the basic (i.e., non-relativistic quantum mechanics) equation for the hydrogen atom, we have 2 coordinate vectors:
$$ \vec R_p,$$
$$ \vec r_e, $$
the positions of the proton and electron, respectively. Since:
$$ \vec R_{CM} =\frac{M_p\vec R_p + m_e\vec r_e}{M_p + m_e}$$
is just the center of mass, we ignore it.
We then solve for the remaining degrees-of-freedom with:
$$ \vec r = \vec r_e - \vec R_p $$
with a reduced mass:
$$ \mu = \frac{M_pm_e}{M_p+m_e} $$
If you want, you could solve for:
$$ \vec r' = \vec R_p - \vec r_e $$
and get the same results, but you would call it the proton orbiting the electron.
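Numerically (approximate masses, my addition rather than part of the original answer), the reduced mass is within about 0.05% of the electron mass, which is why the "electron orbits the proton" picture works so well even though the two roles are formally symmetric:

```python
m_e = 9.109e-31  # electron mass in kg (approximate)
M_p = 1.673e-27  # proton mass in kg (approximate)

# Reduced mass of the two-body problem
mu = (M_p * m_e) / (M_p + m_e)

ratio = mu / m_e
print(ratio)  # roughly 0.99946
```

Swapping the labels leaves $\mu$ unchanged, so the dynamics genuinely doesn't care which particle you call the "center".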
Once you go to reduced mass, there really is no distinction. | {
"domain": "physics.stackexchange",
"id": 71019,
"tags": "quantum-mechanics, reference-frames, atomic-physics"
} |
Earth's gravitational pull on ISS | Question: How do they make the International Space Station orbit the Earth, given Earth's gravity acting on it? We all know that the ISS is orbiting at an altitude of just 350 km. How could the ISS escape Earth's gravitational pull?
Answer: Your question presumes that the ISS is beyond Earth's gravity, that it has escaped earth's gravitational pull. This is not correct. All objects with mass in the universe affect all other things with mass in the universe, the effect just gets weaker with distance. So the ISS is feeling the effect of gravity from Earth significantly more than the moon is.
The reason the ISS doesn't just fall to the earth, either directly or gradually spiralling towards the earth, is that it is travelling fast enough around the earth that it is continually "missing" earth. It is sometimes described as 'falling' constantly around earth.
If I am to be properly correct though, the reality is that the ISS is in fact falling towards the Earth, getting closer and closer to Earth all the time. It needs occasional boosts to push it further back out in its orbit.
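To put a rough number on it (my estimate, not in the original answer): by the inverse-square law, gravity at the ISS's ~350 km altitude is still about 90% of its surface value.

```python
R_earth = 6371e3  # mean Earth radius in m (approximate)
h_iss = 350e3     # ISS altitude from the question, in m

# g(h) / g(0) = (R / (R + h))**2 by the inverse-square law
fraction = (R_earth / (R_earth + h_iss)) ** 2
print(fraction)  # about 0.90
```

Far from having "escaped" gravity, the station sits in a field only about 10% weaker than the one at the ground.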
Just to blow your mind a little bit: The ISS is pulling on the Earth with the same force that the Earth is pulling on ISS. | {
"domain": "astronomy.stackexchange",
"id": 523,
"tags": "orbit, gravity, astrophysics"
} |
Weather application data structures and design | Question: Relevant files are script.ts, WeatherData.ts, index.html.
I made a simple weather 5-day forecast application using mainly TypeScript. I was wondering how I can improve my code since I originally wanted to use a data structure that I created called a "WeatherData" to store my information about the weather for each day of the week.
GitHub project
WeatherData data structure
export class WeatherData {
date: string;
city: string;
country: string;
temperature: number;
minTemperature: number;
maxTemperature: number;
weather: any;
weatherIcon: any;
}
TypeScript file concerns
The main concerns I have about the TypeScript files (script.ts, WeatherData.ts) is the fact that halfway through making my project, I realized that I didn't actually need to use a WeatherData data structure. This led to me passing in WeatherData variables in functions as well as returning WeatherData variables from functions that don't actually need to return it.
Is there a more object-oriented way I should be approaching my script.ts class in order to use data structures?
Will using a WeatherData data structure be better?
How can I format my code (maybe different classes?) such that all the functions aren't all clumped together?
HTML file concerns
First time ever making a front-end UI using HTML and I don't know what areas I should refactor. I noticed I have a lot of repeated element tags such as:
<h4 id="wordDay0">Day 0</h4>
<h4 id="monthDay0">month-day 0</h4>
<h4 id="forecast0">Forecast</h4>
<img src="" id="icon0" alt="" height="60" width="60">
<h1 id="currentTemp0">Current</h1>
<h4 id="maxTemp0">Max</h4>
<h4 id="minTemp0">Min</h4>
I have 5 of these (one for each day).
Screenshot of what the UI looks like:
What type of suggestions are there to making my UI look better? I was thinking of making the jumbotron element a different color.
Answer: unor already covered pretty much everything I would say for the HTML structure, so I will focus on script.ts.
The most important advice I can give you is to get rid of any. By using any you are completely negating the advantages of using Typescript. Properly type everything possible. In this case, it looks like you are using any mostly for the response from the API. It would be far better to define an interface for the API response, with a proper type for each field you consume.
Avoid explicitly stating types. There is a place and time for stating what type a variable is, but it is often just more noise to read.
const daysShort: string[] = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'];
There is no need to state that daysShort is an array of strings as the compiler is smart enough to determine that itself. Personally, I try to only define types when using objects. Type inference for primitive types and arrays is generally very good.
Don't mix const/let and var. Be consistent. I recommend sticking with const/let.
Don't use functions that return promises as event handlers. This doesn't make sense as the event handler won't know what to do with them.
If using $.ajax don't append query parameters yourself, use the data property of the settings object. (next point)
customSearch doesn't make sense. Instantiating a promise will never throw an error. If you want to handle errors from the promise, you need to use .catch(error => {}) or use async/await. It would be better to write the function like this:
async function customSearch() {
const searchParameter: string = (<HTMLInputElement>document.getElementById("searchBox")).value;
if (searchParameter === "") {
alert("Please enter a city and its ISO country code.")
return
}
try {
const response: ApiResponse = await $.getJSON("https://api.openweathermap.org/data/2.5/forecast", {
q: searchParameter,
appid: appid
})
if (response.cod !== "200") {
throw new Error(response.message)
}
setGeneralInforation(response)
} catch (error) {
showError("API returned an error, invalid city / country code?");
}
}
(I would make similar comments for your other search functions, and I would also personally split out the error handling from the API request so that the function would end up looking more like the following)
async function customSearch(searchParameter: string): Promise<ApiOkResponse> {
if (searchParameter === "") {
throw new Error("Please enter a city and its ISO country code.");
}
const response: ApiResponse = await $.getJSON("https://api.openweathermap.org/data/2.5/forecast", {
q: searchParameter,
appid: appid
})
if (response.cod !== "200") {
throw new Error("Invalid request: " + response.message)
}
return response;
}
I suspect that determineMinTemp, determineMaxTemp, determineCurrentTemp, (or at least min/max) can be mostly combined by using a function which accepts a function as a parameter to get the correct key, but this is difficult to say without better typings, feel free to ask another question once you have incorporated suggestions from this round of reviews to see what other suggestions there may be.
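The combining idea (one helper parameterized by a selector function instead of three near-identical ones) can be sketched in Python for illustration; the field name temp is hypothetical, not from the project:

```python
def extreme_temperature(readings, pick):
    """Return the temperature chosen by `pick` (e.g. the built-in min or max)."""
    return pick(r["temp"] for r in readings)

readings = [{"temp": 14.0}, {"temp": 21.5}, {"temp": 9.3}]

lowest = extreme_temperature(readings, min)
highest = extreme_temperature(readings, max)
print(lowest, highest)
```

The same shape works in TypeScript by passing a reducer or key-selector function, so the min/max/current helpers collapse into one.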
There is certainly more that could be said, but I think for now I'll leave you with this so this doesn't become too overwhelming. Your project isn't a bad start!
A few opinionated statements that some may not agree with.
You probably don't need jQuery here. You essentially use it for document.querySelector, .addEventListener('click', fn), and fetch. I would probably remove it.
Add tslint to the project and configure it to warn about consistent spacing (if( vs if (), this can also help with autofixing missing semicolons, etc.
Add noUnusedLocals and noUnusedParameters to your tsconfig.json file to avoid importing code you don't need. You import parse from ts-node/dist, but don't use it. | {
"domain": "codereview.stackexchange",
"id": 28863,
"tags": "object-oriented, html, typescript"
} |
Keeping rpm stable with input of floating rpm | Question: How can I create a gear system which keeps, say, 60 rpm stable on output wheel, when I don't care about force? I am using a hand-turned crank as an input wheel.
I know a flywheel could be used, but it is weak to small accelerations and slowdowns over time.
Answer: Since this is fully analog, if you don't want to delve into extremely complex devices like a continuously variable transmission whose ratio is governed by the deviation of the output speed, your best bet would be to ballpark the input torque range and provide a strongly non-linear speed governor that creates extra friction once the output exceeds the rated value.
Go with a centrifugal governor that, upon exceeding a preset speed, activates a brake on the input shaft, with friction climbing rapidly as the rated output speed is exceeded; say, the weights of the governor move the jaws of a brake, engaging them as the speed reaches the limit.
A somewhat simpler mechanism, frequently used in music boxes, uses the viscous friction of air: a little fan, driven by a set of gears that increase its speed relative to the input, responds with an air-resistance torque proportional to the square of the rotary speed. This is less accurate, but simpler, and suffers less wear over time. | {
"domain": "engineering.stackexchange",
"id": 1709,
"tags": "mechanical-engineering, gears"
} |
Open Notify simple console application | Question: I made this code in Python using the Open Notify API; is there anything I could improve?
I'm new to coding, and I've spent almost 4 hours understanding and coding this program. It lets you interact through some inputs and gives you information about how many people are in space right now, where the ISS is now, and when it will pass over a given coordinate.
import requests, json, os, datetime
clear = lambda: os.system('cls')
clear ()
def peopleInSpace():
responseAstros = requests.get("http://api.open-notify.org/astros.json")
astros = responseAstros.json()
people = astros['people']
peopleInSpa = []
for d in people:
peopleInSpa.append(d['name'])
print('There are', astros['number'], 'people in space, and their names are:', (', '.join(peopleInSpa)))
def issNowF():
responseIssNow = requests.get('http://api.open-notify.org/iss-now.json')
issNow = responseIssNow.json()
issNowLat = issNow['iss_position']['latitude']
issNowLon = issNow['iss_position']['longitude']
issNowTime = issNow['timestamp']
print('The International Space Station is right now (', datetime.datetime.fromtimestamp(issNowTime), ' GMT -3) on these approximately coordinates:', issNowLat, issNowLon)
def issNextTime():
clear()
lat = input("What is the latitude?")
lon = input('What is the longitude?')
parameters = {"lat":lat, "lon":lon }
responseIssNextTime = requests.get('http://api.open-notify.org/iss-pass.json', params=parameters)
issNextTime = responseIssNextTime.json()['response']
risetimes = []
for d in issNextTime:
time = d['risetime']
risetimes.append(time)
times = []
print("The next 5 times ISS is going to go through the given coordinates is:")
for rt in risetimes:
time = datetime.datetime.fromtimestamp(rt)
times.append(time)
print(time)
clear()
print('Press 1 to see how many people are in space right now')
print('Press 2 for getting the coordinates of the ISS right now')
print('Press 3 for seeing when the ISS will go through certain coordinates')
r = input('')
if r == '1':
clear()
peopleInSpace()
elif r == '2':
clear()
issNowF()
elif r == '3':
clear()
issNextTime()
else:
clear()
print('Please open the program again and press only 1, 2 or 3')
Answer: Cross-platform console clear
Instead of clear = lambda: os.system('cls') try to use a more cross-platform solution:
import os
def cls():
os.system('cls' if os.name=='nt' else 'clear')
which you can then call as:
cls()
Imports
Try writing your imports on separate lines and don't forget to remove the ones you're not using:
import datetime
import os
import requests
As you can see, I've also added an extra new line between stdlib modules and 3rd party modules. It's usually a good idea to do this as it improves readability.
Naming
In Python, the name of the variables and functions should be snake_cased. Meaning, instead of def peopleInSpace() you'd have def people_in_space() and instead of responseAstros you'd have response_astros.
Spacing & line wrapping
Try to add two new-lines (instead of one) between each function and try to end your lines somewhere between 79 - 100 chars. (I usually prefer to have more than 79 chars as specified in PEP8)
Let's see how your code looks if we take into account all the above suggestions:
import datetime
import os
import requests
def cls():
os.system('cls' if os.name=='nt' else 'clear')
def people_in_space():
response_astros = requests.get("http://api.open-notify.org/astros.json")
astros = response_astros.json()
people = astros['people']
people_in_spa = []
for d in people:
people_in_spa.append(d['name'])
print(f'There are {astros["number"]} people in space, and '
f'their names are: {", ".join(people_in_spa)}')
def iss_now_f():
response_iss_now = requests.get('http://api.open-notify.org/iss-now.json')
iss_now = response_iss_now.json()
iss_now_lat = iss_now['iss_position']['latitude']
iss_now_lon = iss_now['iss_position']['longitude']
iss_now_time = iss_now['timestamp']
print(f'The International Space Station is right now '
f'({datetime.datetime.fromtimestamp(iss_now_time)} GMT -3) '
f'on these approximately coordinates: {iss_now_lat} {iss_now_lon}')
def iss_next_time():
cls()
lat = input("What is the latitude?")
lon = input('What is the longitude?')
parameters = {"lat":lat, "lon":lon }
response_iss_next_time = requests.get(
'http://api.open-notify.org/iss-pass.json', params=parameters
)
iss_next_time_resp = response_iss_next_time.json()['response']
rise_times = []
for d in iss_next_time_resp:
time = d['risetime']
rise_times.append(time)
times = []
print("The next 5 times ISS is going to go through the given "
"coordinates is:")
for rt in rise_times:
time = datetime.datetime.fromtimestamp(rt)
times.append(time)
print(time)
cls()
print('Press 1 to see how many people are in space right now')
print('Press 2 for getting the coordinates of the ISS right now')
print('Press 3 for seeing when the ISS will go through certain coordinates')
r = input('')
if r == '1':
cls()
people_in_space()
elif r == '2':
cls()
iss_now_f()
elif r == '3':
cls()
iss_next_time()
else:
cls()
print('Please open the program again and press only 1, 2 or 3')
Looks a bit nicer already, doesn't it? ^_^
Now, let's focus on improvements:
In people_in_space function, we can use list comprehensions and also wrap that request call into a try / except guard so we can catch any possible request exceptions:
def people_in_space():
try:
response_astros = requests.get(
"http://api.open-notify.org/astros.json"
)
except requests.exceptions.RequestException as req_exception:
# raise an exception here or do w/e you want
# I'm just going to exit (import sys module for this to work)
sys.exit(f'Could not fetch people in space: {str(req_exception)}')
astros = response_astros.json()
people_in_spa = [d['name'] for d in astros['people']]
print(f'There are {astros["number"]} people in space, and '
f'their names are: {", ".join(people_in_spa)}')
The above recommendations can also be applied to iss_next_time function:
def iss_next_time():
cls()
lat = input("What is the latitude?")
lon = input('What is the longitude?')
parameters = {"lat": lat, "lon": lon}
try:
response_iss_next_time = requests.get(
'http://api.open-notify.org/iss-pass.json', params=parameters
)
except requests.exceptions.RequestException as req_exception:
# raise an exception here or do w/e you want
# I'm just going to exit (import sys module for this to work)
sys.exit(f'Could not fetch next time: {str(req_exception)}')
iss_next_time_resp = response_iss_next_time.json()['response']
print("The next 5 times ISS is going to go through the given "
"coordinates is:")
for item in iss_next_time_resp:
print(datetime.datetime.fromtimestamp(item['risetime']))
I've removed the times list since you were appending things to it but didn't use it anywhere else. And I've also saved you some space complexity by directly iterating over the iss_next_time_resp.
// LE:
Even if your solution isn't bad at all, this is how I'd do it:
import datetime
import os
import sys
import requests
def clear_screen():
"""
Clear the console.
Warning: on specific OSs you might need to export the
TERM env variable.
"""
os.system('cls' if os.name == 'nt' else 'clear')
class OpenNotify:
"""
API client wrapper for open-notify.org.
"""
def __init__(self):
self.base_url = 'http://api.open-notify.org'
def _get(self, resource, params=None):
"""
Abstract get method for all subsequent GET requests.
Arguments:
resource (str): Name of the resource.
params (dict or None): If provided, dict of GET params.
Returns:
<requests.Response> object or raises exception
"""
url = f'{self.base_url}/{resource}'
try:
return requests.get(url, params=params) if params else requests.get(url)
except requests.exceptions.RequestException as req_exception:
print(f'Could not retrieve data at url {url} because: '
f'{str(req_exception)}')
sys.exit()
@staticmethod
def _get_json_from_response(response, resource_item=None):
"""
Decode response to json and get a specific resource
item if provided.
Arguments:
response (requests.Response): The response
resource_item (str or None): The desired resource item or None
Returns:
json object or raises an exception
"""
try:
json_data = response.json()
except ValueError as val_error:
print(str(val_error))
sys.exit()
if resource_item:
return json_data.get(resource_item)
return json_data
def _get_astronauts_info(self):
"""
Get astronauts information.
Returns:
json object
"""
response = self._get('astros.json')
return self._get_json_from_response(response)
def _get_iss_data(self):
"""
Get iss data (latitude, longitude and timestamp in GMT -3).
Returns:
dict
"""
response = self._get('iss-now.json')
timestamp = self._get_json_from_response(response, 'timestamp')
return {
'latitude': self._get_json_from_response(response, 'iss_position')['latitude'],
'longitude': self._get_json_from_response(response, 'iss_position')['longitude'],
'timestamp': datetime.datetime.fromtimestamp(timestamp),
}
def _get_iss_next_position_time(self, latitude, longitude):
"""
Get iss's next position time given the latitude
and longitude.
Arguments:
latitude (str): The desired latitude
longitude (str): The desired longitude
Returns:
json object
"""
response = self._get(
'iss-pass.json',
params={'lat': latitude, 'lon': longitude}
)
return self._get_json_from_response(response, 'response')
def list_astros_info(self):
astronauts_info = self._get_astronauts_info()
astronauts_names = ", ".join([item["name"] for item in astronauts_info["people"]])
print(f'There are {astronauts_info["number"]} people in space, and '
f'their names are: {astronauts_names}')
def list_iss_info(self):
iss_info = self._get_iss_data()
print(f'The International Space Station is right now '
f'{iss_info["timestamp"]} GMT-3 on these approximately '
f'coordinates: lat {iss_info["latitude"]} long {iss_info["longitude"]}')
def list_iss_next_position_time(self, latitude, longitude):
iss_next_position_info = self._get_iss_next_position_time(latitude, longitude)
print('The next 5 times ISS is going to go through the '
'given coordinates is:')
for item in iss_next_position_info:
print(datetime.datetime.fromtimestamp(item['risetime']))
def main():
print("""Choose from the options below:
1) See how many people are in space right now.
2) Get the coordinates of the ISS right now.
3) See when the ISS will go through certain coordinates.
q) Exit
""")
user_input = input('Please choose an option: ')
open_notify = OpenNotify()
if user_input == '1':
clear_screen()
open_notify.list_astros_info()
elif user_input == '2':
clear_screen()
open_notify.list_iss_info()
elif user_input == '3':
clear_screen()
latitude = input('Please enter the latitude: ')
longitude = input('Please enter the longitude: ')
open_notify.list_iss_next_position_time(latitude, longitude)
else:
print('Invalid option')
sys.exit()
if __name__ == '__main__':
main()
A few things to think about:
There's still room for improvement in regards to user input validation;
The retrieved API data can change and there are no sanity checks against this;
The menu can be dynamically created to avoid hardcoding future options;
The OpenNotify class might be a bit redundant since we're not keeping any specific state, but as the API grows and you want to add other functionality to it, this might become helpful.
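To illustrate the third bullet, here's one possible sketch of a dynamically built menu. The option labels and handler return values below are placeholders, not part of the original program:

```python
# Hypothetical dynamic menu: option key -> (label, handler). Adding a new
# option only needs a new entry here; rendering and dispatch stay unchanged.
MENU = {
    '1': ('See how many people are in space right now.', lambda: 'astros'),
    '2': ('Get the coordinates of the ISS right now.', lambda: 'iss_now'),
    '3': ('See when the ISS will go through certain coordinates.', lambda: 'iss_pass'),
}

def render_menu(menu):
    # Build the prompt text from the menu dict instead of hardcoding it
    lines = ['Choose from the options below:']
    lines += [f'{key}) {label}' for key, (label, _) in menu.items()]
    lines.append('q) Exit')
    return '\n'.join(lines)

def dispatch(menu, choice):
    # Return the handler's result, or None for an invalid option
    if choice in menu:
        return menu[choice][1]()
    return None
```

With this shape, main() reduces to printing render_menu(MENU) and calling dispatch(MENU, user_input).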
"domain": "codereview.stackexchange",
"id": 40317,
"tags": "python, python-3.x, api"
} |
Principle of physics used in the lift of skateboard | Question: What is the principle of physics used in this popular stunt?
Initially, I thought aerodynamics due to an increase in the angle of attack, but its magnitude is not sufficient to balance the whole body and skateboard. Please, can anyone help me understand it?
Animation:
Answer: The skateboard is able to lift off the ground because of the momentum imparted to it by the skateboarder pushing down on the kicktail. The skateboard acts as a lever around the rear wheels, so when the kicktail is pushed down, the center of mass of the skateboard rises up. If you do this fast enough, the skateboard's center of mass gets enough upward momentum to lift the entire skateboard off the ground.
To set up a similar experiment, lay a ruler or pencil so it hangs over the edge of a table a small amount, hit down on the free end, and watch it fly up into the air. You may notice that the object not only flies up but also across the room toward the end you hit. The impulse imparts both vertical and horizontal momentum, which you can see in the first part of the skateboard clip as the center of the board moves both upward and backward.
The skateboarder then uses their front foot to stop this horizontal/rotational motion of the board and keep it under their feet, which is possible because the skateboarder has much more mass/inertia than the board. Because the skateboarder is tens of times as massive as the board, they are easily able to manipulate its momentum with their body, while changing their own momentum relatively little (if you look closely, you can see that both the skateboard and skateboarder do, in fact, land slightly behind the point of liftoff). If they just stomped on the kicktail without doing anything else, the board would arc upwards and backward, flipping end over end through the air.
There is nothing related to aerodynamics at play here; this trick could be performed exactly the same way in a vacuum.
EDIT: There seem to be some other factors at play that I've missed here. In particular, the front foot can add some lift to the board as it slides forward to the nose. As the board leaves the ground and rotates up into the front foot, it produces a normal force, which allows the front foot to impart a frictional force parallel to the surface of the board. This won't get the board off the ground in the first place (since friction is always parallel to the board), but once the board is oriented somewhat upright, the board can be pulled further upward by the front foot. Thanks to @Todd Wilcox for pointing this out. | {
"domain": "physics.stackexchange",
"id": 71241,
"tags": "newtonian-mechanics, forces, rotational-dynamics, everyday-life, free-body-diagram"
} |
How to improve this SmartDownloader? | Question: This is a class that downloads files in a smart way: it tries to download from different sessions. This do speed up things. How can I Improve it?
import os
import urllib2
import time
import multiprocessing
import string, random
import socket
from ctypes import c_int
import dummy
from useful_util import retry
import config
from logger import log
"Smart Downloading Module."
shared_bytes_var = multiprocessing.Value(c_int, 0) # a ctypes var that counts the bytes already downloaded
@retry(socket.timeout, tries=2, delay=1, backoff=1, logger=log)
def DownloadFile(url, path, startByte=0, endByte=None, ShowProgress=True):
'''
Function downloads file.
@param url: File url address.
@param path: Destination file path.
@param startByte: Start byte.
@param endByte: End byte. Will work only if server supports HTTPRange headers.
@param ShowProgress: If true, shows textual progress bar.
@return path: Destination file path.
'''
url = url.replace(' ', '%20')
headers = {}
if endByte is not None:
headers['Range'] = 'bytes=%d-%d' % (startByte,endByte)
req = urllib2.Request(url, headers=headers)
try:
urlObj = urllib2.urlopen(req, timeout=4)
except urllib2.HTTPError, e:
if "HTTP Error 416" in str(e):
# HTTP 416 Error: Requested Range Not Satisfiable. Happens when we ask
# for a range that is not available on the server. It will happen when
# the server will try to send us a .html page that means something like
# "you opened too many connections to our server". If this happens, we
# will wait for the other threads to finish their connections and try again.
log.warning("Thread didn't got the file it was expecting. Retrying...")
time.sleep(5)
return DownloadFile(url, path, startByte, endByte, ShowProgress)
else:
raise Exception("urllib2.HTTPError: %s" % e)
f = open(path, 'wb')
meta = urlObj.info()
try:
filesize = int(meta.getheaders("Content-Length")[0])
except IndexError:
log.warning("Server did not send Content-Length.")
ShowProgress=False
filesize_dl = 0
block_sz = 8192
while True:
try:
buff = urlObj.read(block_sz)
except socket.timeout, e:
dummy.shared_bytes_var.value -= filesize_dl
raise e
if not buff:
break
filesize_dl += len(buff)
try:
dummy.shared_bytes_var.value += len(buff)
except AttributeError:
pass
f.write(buff)
if ShowProgress:
status = r"%.2f MB / %.2f MB %s [%3.2f%%]" % (filesize_dl / 1024.0 / 1024.0,
filesize / 1024.0 / 1024.0, ProgressBar(1.0*filesize_dl/filesize),
filesize_dl * 100.0 / filesize)
status += chr(8)*(len(status)+1)
print status,
if ShowProgress:
print "\n"
f.close()
return path
@retry(urllib2.URLError, tries=2, delay=1, backoff=1, logger=log)
def DownloadFile_Parall(url, path=None, processes=config.DownloadFile_Parall_processes,
minChunkFile=1024**2, nonBlocking=False):
'''
Function downloads file parally.
@param url: File url address.
@param path: Destination file path.
@param processes: Number of processes to use in the pool.
@param minChunkFile: Minimum chunk file in bytes.
@param nonBlocking: If true, returns (mapObj, pool). A list of file parts will be returned
from the mapObj results, and the developer must join them himself.
Developer also must close and join the pool.
@return mapObj: Only if nonBlocking is True. A multiprocessing.pool.AsyncResult object.
@return pool: Only if nonBlocking is True. A multiprocessing.pool object.
'''
from HTTPQuery import Is_ServerSupportHTTPRange
global shared_bytes_var
shared_bytes_var.value = 0
url = url.replace(' ', '%20')
if not path:
path = r"%s\%s.TMP" % (config.temp_dir, "".join(random.choice(string.letters + string.digits) for i in range(16)))
if not os.path.exists(os.path.dirname(path)):
os.makedirs(os.path.dirname(path))
log.debug("Downloading to %s..." % path)
urlObj = urllib2.urlopen(url)
meta = urlObj.info()
filesize = int(meta.getheaders("Content-Length")[0])
if filesize/processes > minChunkFile and Is_ServerSupportHTTPRange(url):
args = []
pos = 0
chunk = filesize/processes
for i in range(processes):
startByte = pos
endByte = pos + chunk
if endByte > filesize-1:
endByte = filesize-1
args.append([url, path+".%.3d" % i, startByte, endByte, False])
pos += chunk+1
else:
args = [[url, path+".000", None, None, False]]
log.debug("Running %d processes..." % processes)
pool = multiprocessing.Pool(processes, initializer=_initProcess,initargs=(shared_bytes_var,))
mapObj = pool.map_async(_DownloadFile_ArgsList, args)
if nonBlocking:
return mapObj, pool
while not mapObj.ready():
status = r"%.2f MB / %.2f MB %s [%3.2f%%]" % (shared_bytes_var.value / 1024.0 / 1024.0,
filesize / 1024.0 / 1024.0, ProgressBar(1.0*shared_bytes_var.value/filesize),
shared_bytes_var.value * 100.0 / filesize)
status = status + chr(8)*(len(status)+1)
print status,
time.sleep(0.1)
file_parts = mapObj.get()
pool.terminate()
pool.join()
combine_files(file_parts, path)
def combine_files(parts, path):
'''
Function combines file parts.
@param parts: List of file paths.
@param path: Destination path.
'''
with open(path,'wb') as output:
for part in parts:
with open(part,'rb') as f:
output.writelines(f.readlines())
os.remove(part)
def ProgressBar(progress, length=20):
'''
Function creates a textual progress bar.
@param progress: Float number between 0 and 1 describes the progress.
@param length: The length of the progress bar in chars. Default is 20.
'''
length -= 2 # The bracket are 2 chars long.
return "[" + "#"*int(progress*length) + "-"*(length-int(progress*length)) + "]"
def _DownloadFile_ArgsList(x):
"Function gets args for DownloadFile as a list and executes it"
return DownloadFile(*x)
def _initProcess(x):
dummy.shared_bytes_var = x
dummy is a blank module file.
from useful_util import retry is this function: http://www.saltycrane.com/blog/2009/11/trying-out-retry-decorator-python/
config holds configuration for the project. config.DownloadFile_Parall_processes is 6.
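For reference, here is a minimal sketch of what such a retry decorator typically looks like. This is a hypothetical reimplementation in modern Python, not the exact code from the linked post:

```python
import functools
import time

def retry(exceptions, tries=4, delay=3, backoff=2, logger=None):
    """Retry the wrapped function on `exceptions`, with exponential backoff."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            remaining, wait = tries, delay
            while remaining > 1:
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    if logger:
                        logger.warning('%s, retrying in %s seconds...', exc, wait)
                    time.sleep(wait)
                    remaining -= 1
                    wait *= backoff
            # Last attempt: let any exception propagate to the caller
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Demo: fails twice, then succeeds on the third attempt
calls = {'n': 0}

@retry(ValueError, tries=3, delay=0, backoff=1)
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise ValueError('boom')
    return 'ok'
```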
The Is_ServerSupportHTTPRange from HTTPQuery is:
@retry(urllib2.URLError, tries=4, delay=3, backoff=2, logger=log)
def Is_ServerSupportHTTPRange(url, timeout=config.Is_ServerSupportHTTPRange_timeout):
" Function checks if a server allows HTTP Range "
url = url.replace(' ', '%20')
fullsize = GetFileSize(url)
headers = {'Range': 'bytes=0-4'}
req = urllib2.Request(url, headers=headers)
try:
urlObj = urllib2.urlopen(req, timeout=timeout)
except urllib2.HTTPError as e:
log.error(e)
return False
except urllib2.URLError as e:
log.error(e)
return False
meta = urlObj.info()
filesize = int(meta.getheaders("Content-Length")[0])
return (filesize != fullsize)
Thanks!
Answer: url = url.replace(' ', '%20')
has codesmell. You probably want to use the general-case URL encoding method here.
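For instance, a sketch using Python 3's urllib.parse (in the Python 2 code above, urllib.quote plays the same role):

```python
from urllib.parse import quote, urlsplit, urlunsplit

def encode_url_path(url):
    # Percent-encode the path component properly instead of
    # hand-replacing spaces with '%20'
    parts = urlsplit(url)
    return urlunsplit(parts._replace(path=quote(parts.path)))
```

This handles every character that needs escaping, not just spaces.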
f = open(path, 'wb')
should probably be using the 'with' context manager to make sure the filehandle gets closed automatically in the case of exceptions or the like. | {
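Something along these lines — a minimal sketch, with a temporary file standing in for the download path:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'part.000')

# The file handle is closed automatically, even if an exception
# is raised somewhere inside the block.
with open(path, 'wb') as f:
    f.write(b'\x00' * 8192)

with open(path, 'rb') as f:
    data = f.read()
```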
"domain": "codereview.stackexchange",
"id": 1793,
"tags": "python, optimization"
} |
100 Locker Problem Expanded | Question: The locker problem for 100 lockers is simple:
A school has 100 lockers and 100 students. All lockers are closed on the first day of school. As the students enter, the first student, denoted S1, opens every locker. Then the second student, S2, begins with the second locker, denoted L2, and closes every other locker. Student S3 begins with the third locker and changes every third locker (closes it if it was open, and opens it if it was closed). Student S4 begins with locker L4 and changes every fourth locker. Student S5 starts with L5 and changes every fifth locker, and so on, until student S100 changes L100.
Which lockers will remain open?
Now my program in Python uses a trick hidden in this problem which is that all lockers whose numbers have an odd number of divisors will remain open. In other words, lockers whose numbers are perfect squares. This allowed me to expand my program to solve the problem for any \$x\$ amount of lockers, provided by the user:
# 100 Locker Problem
counter_open = 0
counter_close = 0
open_list = []
def locker(num_lockers):
global counter_open, counter_close
num = 1
while num**2 < num_lockers:
open_list.append(num**2)
num += 1
counter_open = len(open_list)
counter_close = num_lockers - len(open_list)
def run():
global open_list, counter_close, counter_open
open_list = []
counter_open = 0
counter_close = 0
endings = ["st", "nd", "rd", "th"]
try:
user_input = int(raw_input("How many lockers will there be: "))
except ValueError:
print "Please insert an integer"
else:
locker(user_input)
return_str = "The "
for index in range(0, len(open_list)):
if index == (len(open_list) - 1):
if str(open_list[index])[-1:] == "1":
return_str += ("and " + str(open_list[index]) + endings[0])
elif str(open_list[index])[-1:] == "2":
return_str += ("and " + str(open_list[index]) + endings[1])
elif str(open_list[index])[-1:] == "3":
return_str += ("and " + str(open_list[index]) + endings[2])
else:
return_str += ("and " + str(open_list[index]) + endings[3])
else:
if str(open_list[index])[-1:] == "1":
return_str += (str(open_list[index]) + endings[0] + ", ")
elif str(open_list[index])[-1:] == "2":
return_str += (str(open_list[index]) + endings[1] + ", ")
elif str(open_list[index])[-1:] == "3":
return_str += (str(open_list[index]) + endings[2] + ", ")
else:
return_str += (str(open_list[index]) + endings[3] + ", ")
return_str += " locker(s) will be open."
print return_str
while True:
run()
print "Open Lockers: " + str(counter_open)
print "Closed Locker: " + str(counter_close)
print "----------------------------"
Yes, I do realize that I am using global in my functions and I have 8 if/elif statements for grammar, but accuracy in the result (and PEP 8) is key to me (keep that in mind). How do I improve the efficiency of the code and/or shorten the code?
Answer: Algorithm
The beginning of the mathematical analysis is very nice and well explained. Maybe it would be nice to have it in your code as a comment.
Also, please note that if what you really care about is the number of lockers open (or closed), it is just a matter of computing a square root.
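For example (a sketch; note it mirrors the strict < comparison in your loop, which incidentally excludes locker 100 itself when there are exactly 100 lockers):

```python
import math

def count_open_lockers(num_lockers):
    # Number of perfect squares strictly below num_lockers,
    # matching the question's `num**2 < num_lockers` condition
    if num_lockers <= 0:
        return 0
    return math.isqrt(num_lockers - 1)
```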
Separation of concerns
The biggest issue in your code is that you have a function doing everything: handling user input, string formatting, updating globals... This makes your code hard to follow and hard to test. A better way would be to have smaller units (functions, or sometimes classes and modules), each with a single responsibility.
By doing so, you could get rid of the globals and write the code like this:
# 100 Locker Problem
def get_open_lockers(num_lockers):
open_list = []
num = 1
while num**2 < num_lockers:
open_list.append(num**2)
num += 1
return open_list
def format_output(nb_lockers, open_list):
endings = ["st", "nd", "rd", "th"]
counter_open = len(open_list)
counter_close = nb_lockers - counter_open
return_str = "The "
for index in range(0, len(open_list)):
if index == (len(open_list) - 1):
if str(open_list[index])[-1:] == "1":
return_str += ("and " + str(open_list[index]) + endings[0])
elif str(open_list[index])[-1:] == "2":
return_str += ("and " + str(open_list[index]) + endings[1])
elif str(open_list[index])[-1:] == "3":
return_str += ("and " + str(open_list[index]) + endings[2])
else:
return_str += ("and " + str(open_list[index]) + endings[3])
else:
if str(open_list[index])[-1:] == "1":
return_str += (str(open_list[index]) + endings[0] + ", ")
elif str(open_list[index])[-1:] == "2":
return_str += (str(open_list[index]) + endings[1] + ", ")
elif str(open_list[index])[-1:] == "3":
return_str += (str(open_list[index]) + endings[2] + ", ")
else:
return_str += (str(open_list[index]) + endings[3] + ", ")
return_str += " locker(s) will be open."
print return_str
print "Open Lockers: " + str(counter_open)
print "Closed Locker: " + str(counter_close)
print "----------------------------"
while True:
try:
nb_lockers = int(raw_input("How many lockers will there be: "))
except ValueError:
print "Please insert an integer"
else:
open_list = get_open_lockers(nb_lockers)
format_output(nb_lockers, open_list)
Look like a native
Almost every time you do range(len(list)), you are not doing things the right way. I highly recommend Ned Batchelder's excellent talk called "Loop like a native".
In your case, the code becomes (l stands for locker which is probably not the best name):
def format_output(nb_lockers, open_list):
endings = ["st", "nd", "rd", "th"]
counter_open = len(open_list)
counter_close = nb_lockers - counter_open
return_str = "The "
for i, l in enumerate(open_list):
if i == (len(open_list) - 1):
if str(l)[-1:] == "1":
return_str += ("and " + str(l) + endings[0])
elif str(l)[-1:] == "2":
return_str += ("and " + str(l) + endings[1])
elif str(l)[-1:] == "3":
return_str += ("and " + str(l) + endings[2])
else:
return_str += ("and " + str(l) + endings[3])
else:
if str(l)[-1:] == "1":
return_str += (str(l) + endings[0] + ", ")
elif str(l)[-1:] == "2":
return_str += (str(l) + endings[1] + ", ")
elif str(l)[-1:] == "3":
return_str += (str(l) + endings[2] + ", ")
else:
return_str += (str(open_list[i]) + endings[3] + ", ")
return_str += " locker(s) will be open."
print return_str
print "Open Lockers: " + str(counter_open)
print "Closed Locker: " + str(counter_close)
print "----------------------------"
Extra Operations (function calls and slicing)
You keep converting l to str. Maybe you could do it once and for all.
Similarly, you keep accessing string[-1:]: it would be nice to do it only once. By the way, the usual way to get the last element is just [-1] (no need for the colon).
def format_output(nb_lockers, open_list):
endings = ["st", "nd", "rd", "th"]
counter_open = len(open_list)
counter_close = nb_lockers - counter_open
return_str = "The "
for i, l in enumerate(open_list):
s = str(l)
last_dig = s[-1]
if i == (len(open_list) - 1):
if last_dig == "1":
return_str += ("and " + s + endings[0])
elif last_dig == "2":
return_str += ("and " + s + endings[1])
elif last_dig == "3":
return_str += ("and " + s + endings[2])
else:
return_str += ("and " + s + endings[3])
else:
if last_dig == "1":
return_str += (s + endings[0] + ", ")
elif last_dig == "2":
return_str += (s + endings[1] + ", ")
elif last_dig == "3":
return_str += (s + endings[2] + ", ")
else:
return_str += (s + endings[3] + ", ")
return_str += " locker(s) will be open."
print return_str
print "Open Lockers: " + str(counter_open)
print "Closed Locker: " + str(counter_close)
print "----------------------------"
Don't repeat yourself
You have similar logic in the then and else blocks in format_output. This could be factored out. Also, it would be a good idea to take this chance to extract it into a function of its own.
def int_to_ordinal(n):
endings = ["st", "nd", "rd", "th"]
s = str(n)
last_dig = s[-1]
if last_dig == "1":
return s + endings[0]
elif last_dig == "2":
return s + endings[1]
elif last_dig == "3":
return s + endings[2]
else:
return s + endings[3]
def format_output(nb_lockers, open_list):
counter_open = len(open_list)
counter_close = nb_lockers - counter_open
return_str = "The "
for i, l in enumerate(open_list):
if i == (len(open_list) - 1):
return_str += ("and " + int_to_ordinal(l))
else:
return_str += (int_to_ordinal(l) + ", ")
return_str += " locker(s) will be open."
print return_str
print "Open Lockers: " + str(counter_open)
print "Closed Locker: " + str(counter_close)
print "----------------------------"
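As an aside, the last-digit approach produces "11st", "12nd" and "13rd" for numbers ending in 11, 12 or 13. That never matters here (no perfect square ends in those digits), but a general-purpose variant would special-case the teens. A sketch (in Python 3 syntax, unlike the rest of the answer):

```python
def int_to_ordinal(n):
    # General-purpose variant: 11, 12 and 13 all take "th"
    s = str(n)
    if s[-2:] in ('11', '12', '13'):
        return s + 'th'
    return s + {'1': 'st', '2': 'nd', '3': 'rd'}.get(s[-1], 'th')
```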
Tiny bug
I've just discovered one bug as I was trying to go further. I imagine it was in the original code, but I might be wrong. When there is only one locker open (the first), the formatting gives: "The and 1st locker(s) will be open.", which looks a bit weird.
PEP8 and join
There is a style guide for Python code called PEP 8: it is worth reading (and worth following as much as possible). Your code seems to follow the PEP 8 style for the most part, but there is a section you'd be interested in:
For example, do not rely on CPython's efficient implementation of in-place string concatenation for statements in the form a += b or a = a + b . This optimization is fragile even in CPython (it only works for some types) and isn't present at all in implementations that don't use refcounting. In performance sensitive parts of the library, the ''.join() form should be used instead. This will ensure that concatenation occurs in linear time across various implementations.
Even though I can safely assume that your code performance is not that important, the join builtin, on top of making your code faster, can make it clearer. You could write something like:
def format_output(nb_lockers, open_list):
counter_open = len(open_list)
counter_close = nb_lockers - counter_open
print "The " + ", ".join(int_to_ordinal(l) for l in open_list) + " locker(s) will be open."
print "Open Lockers: " + str(counter_open)
print "Closed Locker: " + str(counter_close)
print "----------------------------"
(I removed the part with the "And" because it seemed buggy anyway).
If main
In Python, it is recommended to put the part of your code that actually does something (instead of merely defining something) behind an if __name__ == "__main__": so that you can easily run your code as a main program but also import it to be able to reuse the functions/classes you've defined.
if __name__ == "__main__":
while True:
try:
nb_lockers = int(raw_input("How many lockers will there be: "))
except ValueError:
print "Please insert an integer"
else:
open_list = get_open_lockers(nb_lockers)
format_output(nb_lockers, open_list)
Optimisation in get_open_lockers
In get_open_lockers, you can easily compute the number of iterations you'll need. Also, you could use a list comprehension to avoid calling append many times.
import math
def get_open_lockers(num_lockers):
if num_lockers <= 0:
return []
sqrt = int(math.sqrt(num_lockers-1))
return [i*i for i in range(1, sqrt+1)]
(It is easy to write tests to ensure that the function returns the same results before and after.)
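For example, a quick equivalence check comparing the original loop to the sqrt-based version over a range of inputs (written in Python 3 syntax for convenience):

```python
import math

def get_open_lockers_loop(num_lockers):
    # Original loop-based version from the question
    open_list = []
    num = 1
    while num ** 2 < num_lockers:
        open_list.append(num ** 2)
        num += 1
    return open_list

def get_open_lockers_sqrt(num_lockers):
    # sqrt-based version proposed above
    if num_lockers <= 0:
        return []
    root = int(math.sqrt(num_lockers - 1))
    return [i * i for i in range(1, root + 1)]
```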
Final version of the code
My final code looks like (it can still probably be improved):
import math
def get_open_lockers(num_lockers):
if num_lockers <= 0:
return []
sqrt = int(math.sqrt(num_lockers-1))
return [i*i for i in range(1, sqrt+1)]
def int_to_ordinal(n):
endings = ["st", "nd", "rd", "th"]
s = str(n)
last_dig = s[-1]
if last_dig == "1":
return s + endings[0]
elif last_dig == "2":
return s + endings[1]
elif last_dig == "3":
return s + endings[2]
else:
return s + endings[3]
def format_output(nb_lockers, open_list):
counter_open = len(open_list)
counter_close = nb_lockers - counter_open
print "The " + ", ".join(int_to_ordinal(l) for l in open_list) + " locker(s) will be open."
print "Open Lockers: " + str(counter_open)
print "Closed Locker: " + str(counter_close)
print "----------------------------"
if __name__ == "__main__":
while True:
try:
nb_lockers = int(raw_input("How many lockers will there be: "))
except ValueError:
print "Please insert an integer"
else:
open_list = get_open_lockers(nb_lockers)
format_output(nb_lockers, open_list) | {
"domain": "codereview.stackexchange",
"id": 23492,
"tags": "python, programming-challenge, python-2.x"
} |
Does a basis of maximally entangled states exist for two-qubit or two-qutrit system so that the density matrices of the basis states don't commute? | Question: I want to find a basis of maximally entangled states $|\Psi_i\rangle$, for $\mathcal{H}^{2} \otimes \mathcal{H}^{2}$ and, $\mathcal{H}^{3} \otimes \mathcal{H}^{3}$ such that the density matrices of those states don't commute with each other.
If $\rho_{i} = |\Psi_i\rangle \langle\Psi_i|$
I need the following condition to be met:
$[\rho_{i},\rho_{j}] \ne 0$ for at least one pair of indices $i, j$.
I have tried using quasi Bell states, and Bell states in $|+\rangle, |-\rangle$ basis etc. but haven't succeeded yet.
If one or more such bases exist, how should I go about constructing one? If such a basis doesn't exist, what would be the next best basis in terms of entanglement, given that non-commutativity and entangled states are my priority. The orthogonality requirement can be discarded if necessary.
Answer: No such (orthonormal) basis can exist. An orthonormal basis $\{|\psi_i\rangle\}$ requires $\langle \psi_i | \psi_j \rangle = 0$ for $i\neq j$, and so clearly
\begin{align}
[\rho_i, \rho_j] &= |\psi_i\rangle \langle \psi_i | \psi_j\rangle \langle \psi_j | - | \psi_j\rangle \langle \psi_j |\psi_i\rangle \langle \psi_i | \\
&= 0
\end{align}
So to get a basis whose elements don't commute, you have to sacrifice orthogonality. In that case it's hard to recommend what the "next best basis" would be without knowing what your goal is. You could start with a Bell basis and apply a local rotation to just one of the states; for example, replace the state $ \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ with $(I\otimes R_z(\theta)) \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$, which is still maximally entangled but no longer commutes with $\frac{1}{\sqrt{2}}(|00\rangle - |11 \rangle)$.
Also note that if you choose to relax the orthogonality constraint you can make your search a bit easier. Assume you've found any $\rho_i, \rho_j$ that do not commute. Then their commutator is still nonzero under an arbitrary unitary $U$:
\begin{align}
[\rho_i, \rho_j] &= A
\\\rightarrow U[\rho_i, \rho_j]U^\dagger &= U\rho_i U^\dagger U \rho_j U^\dagger - U\rho_j U^\dagger U \rho_i U^\dagger
\\&= [U\rho_i U^\dagger, U \rho_j U^\dagger]
\\&= UAU^\dagger
\end{align}
So regardless of how entangled the states that you found were, you can apply a transformation so that at least one of them becomes maximally entangled. | {
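A quick numerical check of both claims — a sketch using NumPy, where the choice of θ = π/3 is an arbitrary illustration:

```python
import numpy as np

# Projectors onto orthogonal Bell states commute (the identity above)
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
phi_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)  # (|00> - |11>)/sqrt(2)
rho_p = np.outer(phi_plus, phi_plus.conj())
rho_m = np.outer(phi_minus, phi_minus.conj())
comm_orth = rho_p @ rho_m - rho_m @ rho_p  # zero matrix

# Apply a local Rz rotation to one qubit of |Phi+>: the state stays
# maximally entangled, but its projector no longer commutes with rho_m
theta = np.pi / 3
rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
psi = np.kron(np.eye(2), rz) @ phi_plus
rho_psi = np.outer(psi, psi.conj())
comm_rot = rho_psi @ rho_m - rho_m @ rho_psi  # nonzero

# Maximal entanglement check: the reduced state of the first qubit is I/2
rho_A = rho_psi.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
```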
"domain": "quantumcomputing.stackexchange",
"id": 2887,
"tags": "entanglement, linear-algebra, bell-basis, povm"
} |
pandas dataframe of temps and numpy.linalg.lstsq | Question: The problem is this. Forecast model data indicates at 0700Z the 1000 milibar temps will be -4C°. The forecaster knows that is incorrect, so they choose to make a correction. This correction should have a diminishing affect across the axes. The forecaster can set the threshold of correction via 2 sliders. time and height axis_0_weight axis_1_weight.
data temps
data = [{'2022-02-19 06:00:00': -61, '2022-02-19 07:00:00': -61, '2022-02-19 08:00:00': -61, '2022-02-19 09:00:00': -62, '2022-02-19 10:00:00': -63, '2022-02-19 11:00:00': -63, '2022-02-19 12:00:00': -64, '2022-02-19 13:00:00': -63, '2022-02-19 14:00:00': -62, '2022-02-19 15:00:00': -63}, {'2022-02-19 06:00:00': -60, '2022-02-19 07:00:00': -60, '2022-02-19 08:00:00': -60, '2022-02-19 09:00:00': -61, '2022-02-19 10:00:00': -60, '2022-02-19 11:00:00': -61, '2022-02-19 12:00:00': -61, '2022-02-19 13:00:00': -61, '2022-02-19 14:00:00': -60, '2022-02-19 15:00:00': -60}, {'2022-02-19 06:00:00': -60, '2022-02-19 07:00:00': -60, '2022-02-19 08:00:00': -60, '2022-02-19 09:00:00': -59, '2022-02-19 10:00:00': -59, '2022-02-19 11:00:00': -60, '2022-02-19 12:00:00': -59, '2022-02-19 13:00:00': -59, '2022-02-19 14:00:00': -59, '2022-02-19 15:00:00': -58}, {'2022-02-19 06:00:00': -57, '2022-02-19 07:00:00': -57, '2022-02-19 08:00:00': -56, '2022-02-19 09:00:00': -55, '2022-02-19 10:00:00': -55, '2022-02-19 11:00:00': -55, '2022-02-19 12:00:00': -55, '2022-02-19 13:00:00': -55, '2022-02-19 14:00:00': -55, '2022-02-19 15:00:00': -55}, {'2022-02-19 06:00:00': -55, '2022-02-19 07:00:00': -54, '2022-02-19 08:00:00': -53, '2022-02-19 09:00:00': -52, '2022-02-19 10:00:00': -53, '2022-02-19 11:00:00': -52, '2022-02-19 12:00:00': -52, '2022-02-19 13:00:00': -52, '2022-02-19 14:00:00': -52, '2022-02-19 15:00:00': -52}, {'2022-02-19 06:00:00': -52, '2022-02-19 07:00:00': -52, '2022-02-19 08:00:00': -51, '2022-02-19 09:00:00': -51, '2022-02-19 10:00:00': -51, '2022-02-19 11:00:00': -50, '2022-02-19 12:00:00': -50, '2022-02-19 13:00:00': -50, '2022-02-19 14:00:00': -50, '2022-02-19 15:00:00': -49}, {'2022-02-19 06:00:00': -50, '2022-02-19 07:00:00': -50, '2022-02-19 08:00:00': -49, '2022-02-19 09:00:00': -49, '2022-02-19 10:00:00': -49, '2022-02-19 11:00:00': -49, '2022-02-19 12:00:00': -48, '2022-02-19 13:00:00': -48, '2022-02-19 14:00:00': -48, '2022-02-19 15:00:00': -47}, {'2022-02-19 
06:00:00': -50, '2022-02-19 07:00:00': -50, '2022-02-19 08:00:00': -50, '2022-02-19 09:00:00': -50, '2022-02-19 10:00:00': -50, '2022-02-19 11:00:00': -50, '2022-02-19 12:00:00': -50, '2022-02-19 13:00:00': -49, '2022-02-19 14:00:00': -49, '2022-02-19 15:00:00': -48}, {'2022-02-19 06:00:00': -50, '2022-02-19 07:00:00': -50, '2022-02-19 08:00:00': -50, '2022-02-19 09:00:00': -51, '2022-02-19 10:00:00': -52, '2022-02-19 11:00:00': -51, '2022-02-19 12:00:00': -51, '2022-02-19 13:00:00': -50, '2022-02-19 14:00:00': -49, '2022-02-19 15:00:00': -49}, {'2022-02-19 06:00:00': -51, '2022-02-19 07:00:00': -51, '2022-02-19 08:00:00': -52, '2022-02-19 09:00:00': -53, '2022-02-19 10:00:00': -53, '2022-02-19 11:00:00': -53, '2022-02-19 12:00:00': -53, '2022-02-19 13:00:00': -52, '2022-02-19 14:00:00': -50, '2022-02-19 15:00:00': -49}, {'2022-02-19 06:00:00': -51, '2022-02-19 07:00:00': -53, '2022-02-19 08:00:00': -54, '2022-02-19 09:00:00': -55, '2022-02-19 10:00:00': -54, '2022-02-19 11:00:00': -55, '2022-02-19 12:00:00': -55, '2022-02-19 13:00:00': -54, '2022-02-19 14:00:00': -51, '2022-02-19 15:00:00': -49}, {'2022-02-19 06:00:00': -51, '2022-02-19 07:00:00': -51, '2022-02-19 08:00:00': -51, '2022-02-19 09:00:00': -51, '2022-02-19 10:00:00': -51, '2022-02-19 11:00:00': -51, '2022-02-19 12:00:00': -51, '2022-02-19 13:00:00': -50, '2022-02-19 14:00:00': -49, '2022-02-19 15:00:00': -48}, {'2022-02-19 06:00:00': -50, '2022-02-19 07:00:00': -50, '2022-02-19 08:00:00': -48, '2022-02-19 09:00:00': -48, '2022-02-19 10:00:00': -47, '2022-02-19 11:00:00': -47, '2022-02-19 12:00:00': -47, '2022-02-19 13:00:00': -47, '2022-02-19 14:00:00': -48, '2022-02-19 15:00:00': -47}, {'2022-02-19 06:00:00': -47, '2022-02-19 07:00:00': -47, '2022-02-19 08:00:00': -45, '2022-02-19 09:00:00': -44, '2022-02-19 10:00:00': -44, '2022-02-19 11:00:00': -44, '2022-02-19 12:00:00': -44, '2022-02-19 13:00:00': -44, '2022-02-19 14:00:00': -44, '2022-02-19 15:00:00': -44}, {'2022-02-19 06:00:00': -44, 
'2022-02-19 07:00:00': -43, '2022-02-19 08:00:00': -43, '2022-02-19 09:00:00': -41, '2022-02-19 10:00:00': -41, '2022-02-19 11:00:00': -40, '2022-02-19 12:00:00': -40, '2022-02-19 13:00:00': -41, '2022-02-19 14:00:00': -41, '2022-02-19 15:00:00': -41}, {'2022-02-19 06:00:00': -40, '2022-02-19 07:00:00': -40, '2022-02-19 08:00:00': -40, '2022-02-19 09:00:00': -39, '2022-02-19 10:00:00': -38, '2022-02-19 11:00:00': -38, '2022-02-19 12:00:00': -37, '2022-02-19 13:00:00': -37, '2022-02-19 14:00:00': -38, '2022-02-19 15:00:00': -38}, {'2022-02-19 06:00:00': -37, '2022-02-19 07:00:00': -37, '2022-02-19 08:00:00': -37, '2022-02-19 09:00:00': -36, '2022-02-19 10:00:00': -36, '2022-02-19 11:00:00': -35, '2022-02-19 12:00:00': -34, '2022-02-19 13:00:00': -34, '2022-02-19 14:00:00': -35, '2022-02-19 15:00:00': -35}, {'2022-02-19 06:00:00': -34, '2022-02-19 07:00:00': -34, '2022-02-19 08:00:00': -34, '2022-02-19 09:00:00': -34, '2022-02-19 10:00:00': -34, '2022-02-19 11:00:00': -33, '2022-02-19 12:00:00': -32, '2022-02-19 13:00:00': -32, '2022-02-19 14:00:00': -32, '2022-02-19 15:00:00': -33}, {'2022-02-19 06:00:00': -31, '2022-02-19 07:00:00': -31, '2022-02-19 08:00:00': -31, '2022-02-19 09:00:00': -31, '2022-02-19 10:00:00': -31, '2022-02-19 11:00:00': -31, '2022-02-19 12:00:00': -30, '2022-02-19 13:00:00': -29, '2022-02-19 14:00:00': -29, '2022-02-19 15:00:00': -31}, {'2022-02-19 06:00:00': -28, '2022-02-19 07:00:00': -28, '2022-02-19 08:00:00': -28, '2022-02-19 09:00:00': -28, '2022-02-19 10:00:00': -28, '2022-02-19 11:00:00': -28, '2022-02-19 12:00:00': -28, '2022-02-19 13:00:00': -27, '2022-02-19 14:00:00': -28, '2022-02-19 15:00:00': -30}, {'2022-02-19 06:00:00': -26, '2022-02-19 07:00:00': -25, '2022-02-19 08:00:00': -25, '2022-02-19 09:00:00': -26, '2022-02-19 10:00:00': -26, '2022-02-19 11:00:00': -26, '2022-02-19 12:00:00': -27, '2022-02-19 13:00:00': -26, '2022-02-19 14:00:00': -27, '2022-02-19 15:00:00': -29}, {'2022-02-19 06:00:00': -23, '2022-02-19 07:00:00': 
-23, '2022-02-19 08:00:00': -23, '2022-02-19 09:00:00': -23, '2022-02-19 10:00:00': -23, '2022-02-19 11:00:00': -23, '2022-02-19 12:00:00': -24, '2022-02-19 13:00:00': -24, '2022-02-19 14:00:00': -27, '2022-02-19 15:00:00': -28}, {'2022-02-19 06:00:00': -21, '2022-02-19 07:00:00': -21, '2022-02-19 08:00:00': -21, '2022-02-19 09:00:00': -21, '2022-02-19 10:00:00': -21, '2022-02-19 11:00:00': -21, '2022-02-19 12:00:00': -22, '2022-02-19 13:00:00': -23, '2022-02-19 14:00:00': -27, '2022-02-19 15:00:00': -28}, {'2022-02-19 06:00:00': -19, '2022-02-19 07:00:00': -19, '2022-02-19 08:00:00': -19, '2022-02-19 09:00:00': -19, '2022-02-19 10:00:00': -19, '2022-02-19 11:00:00': -19, '2022-02-19 12:00:00': -21, '2022-02-19 13:00:00': -23, '2022-02-19 14:00:00': -26, '2022-02-19 15:00:00': -27}, {'2022-02-19 06:00:00': -17, '2022-02-19 07:00:00': -17, '2022-02-19 08:00:00': -17, '2022-02-19 09:00:00': -17, '2022-02-19 10:00:00': -17, '2022-02-19 11:00:00': -17, '2022-02-19 12:00:00': -19, '2022-02-19 13:00:00': -23, '2022-02-19 14:00:00': -25, '2022-02-19 15:00:00': -27}, {'2022-02-19 06:00:00': -16, '2022-02-19 07:00:00': -15, '2022-02-19 08:00:00': -15, '2022-02-19 09:00:00': -15, '2022-02-19 10:00:00': -15, '2022-02-19 11:00:00': -16, '2022-02-19 12:00:00': -18, '2022-02-19 13:00:00': -21, '2022-02-19 14:00:00': -23, '2022-02-19 15:00:00': -24}, {'2022-02-19 06:00:00': -15, '2022-02-19 07:00:00': -14, '2022-02-19 08:00:00': -14, '2022-02-19 09:00:00': -13, '2022-02-19 10:00:00': -13, '2022-02-19 11:00:00': -15, '2022-02-19 12:00:00': -17, '2022-02-19 13:00:00': -20, '2022-02-19 14:00:00': -21, '2022-02-19 15:00:00': -22}, {'2022-02-19 06:00:00': -14, '2022-02-19 07:00:00': -13, '2022-02-19 08:00:00': -13, '2022-02-19 09:00:00': -12, '2022-02-19 10:00:00': -13, '2022-02-19 11:00:00': -15, '2022-02-19 12:00:00': -17, '2022-02-19 13:00:00': -18, '2022-02-19 14:00:00': -19, '2022-02-19 15:00:00': -20}, {'2022-02-19 06:00:00': -12, '2022-02-19 07:00:00': -13, '2022-02-19 
08:00:00': -12, '2022-02-19 09:00:00': -12, '2022-02-19 10:00:00': -13, '2022-02-19 11:00:00': -15, '2022-02-19 12:00:00': -16, '2022-02-19 13:00:00': -16, '2022-02-19 14:00:00': -17, '2022-02-19 15:00:00': -18}, {'2022-02-19 06:00:00': -11, '2022-02-19 07:00:00': -12, '2022-02-19 08:00:00': -12, '2022-02-19 09:00:00': -13, '2022-02-19 10:00:00': -13, '2022-02-19 11:00:00': -15, '2022-02-19 12:00:00': -14, '2022-02-19 13:00:00': -15, '2022-02-19 14:00:00': -15, '2022-02-19 15:00:00': -16}, {'2022-02-19 06:00:00': -10, '2022-02-19 07:00:00': -10, '2022-02-19 08:00:00': -12, '2022-02-19 09:00:00': -13, '2022-02-19 10:00:00': -13, '2022-02-19 11:00:00': -14, '2022-02-19 12:00:00': -13, '2022-02-19 13:00:00': -13, '2022-02-19 14:00:00': -13, '2022-02-19 15:00:00': -15}, {'2022-02-19 06:00:00': -8, '2022-02-19 07:00:00': -9, '2022-02-19 08:00:00': -11, '2022-02-19 09:00:00': -12, '2022-02-19 10:00:00': -12, '2022-02-19 11:00:00': -12, '2022-02-19 12:00:00': -12, '2022-02-19 13:00:00': -11, '2022-02-19 14:00:00': -12, '2022-02-19 15:00:00': -15}, {'2022-02-19 06:00:00': -8, '2022-02-19 07:00:00': -8, '2022-02-19 08:00:00': -9, '2022-02-19 09:00:00': -11, '2022-02-19 10:00:00': -11, '2022-02-19 11:00:00': -11, '2022-02-19 12:00:00': -10, '2022-02-19 13:00:00': -10, '2022-02-19 14:00:00': -10, '2022-02-19 15:00:00': -14}, {'2022-02-19 06:00:00': -9, '2022-02-19 07:00:00': -8, '2022-02-19 08:00:00': -7, '2022-02-19 09:00:00': -9, '2022-02-19 10:00:00': -10, '2022-02-19 11:00:00': -9, '2022-02-19 12:00:00': -9, '2022-02-19 13:00:00': -8, '2022-02-19 14:00:00': -8, '2022-02-19 15:00:00': -12}, {'2022-02-19 06:00:00': -10, '2022-02-19 07:00:00': -8, '2022-02-19 08:00:00': -7, '2022-02-19 09:00:00': -7, '2022-02-19 10:00:00': -9, '2022-02-19 11:00:00': -8, '2022-02-19 12:00:00': -8, '2022-02-19 13:00:00': -7, '2022-02-19 14:00:00': -7, '2022-02-19 15:00:00': -10}, {'2022-02-19 06:00:00': -10, '2022-02-19 07:00:00': -8, '2022-02-19 08:00:00': -7, '2022-02-19 09:00:00': -6, 
'2022-02-19 10:00:00': -8, '2022-02-19 11:00:00': -7, '2022-02-19 12:00:00': -6, '2022-02-19 13:00:00': -6, '2022-02-19 14:00:00': -5, '2022-02-19 15:00:00': -8}, {'2022-02-19 06:00:00': -10, '2022-02-19 07:00:00': -9, '2022-02-19 08:00:00': -8, '2022-02-19 09:00:00': -6, '2022-02-19 10:00:00': -7, '2022-02-19 11:00:00': -6, '2022-02-19 12:00:00': -5, '2022-02-19 13:00:00': -4, '2022-02-19 14:00:00': -4, '2022-02-19 15:00:00': -6}, {'2022-02-19 06:00:00': -10, '2022-02-19 07:00:00': -9, '2022-02-19 08:00:00': -7, '2022-02-19 09:00:00': -6, '2022-02-19 10:00:00': -5, '2022-02-19 11:00:00': -4, '2022-02-19 12:00:00': -3, '2022-02-19 13:00:00': -3, '2022-02-19 14:00:00': -2, '2022-02-19 15:00:00': -4}, {'2022-02-19 06:00:00': -9, '2022-02-19 07:00:00': -8, '2022-02-19 08:00:00': -6, '2022-02-19 09:00:00': -5, '2022-02-19 10:00:00': -4, '2022-02-19 11:00:00': -3, '2022-02-19 12:00:00': -2, '2022-02-19 13:00:00': -1, '2022-02-19 14:00:00': -1, '2022-02-19 15:00:00': -3}]
index
index = [
'50mb', '75mb', '100mb', '125mb', '150mb', '175mb', '200mb', '225mb',
'250mb', '275mb', '300mb', '325mb', '350mb', '375mb', '400mb', '425mb',
'450mb', '475mb', '500mb', '525mb', '550mb', '575mb', '600mb', '625mb',
'650mb', '675mb', '700mb', '725mb', '750mb', '775mb', '800mb', '825mb',
'850mb', '875mb', '900mb', '925mb', '950mb', '975mb', '1000mb'
]
main code
import pandas as pd
import numpy as np
CORRECTION=+20
data=...
index=...
def make_matrix(df:pd.DataFrame)->np.ndarray:
x,y=df.shape
arr_x,arr_y= np.vstack(np.linspace(0,1,x)),np.linspace(1,0,y)
matrix = (arr_x+arr_y)-1
matrix[matrix<0]=0
return matrix
if __name__ == '__main__':
celcius_temps = pd.DataFrame.from_records(data,index=index)
celcius_temps.columns=pd.to_datetime(celcius_temps.columns)
mat = make_matrix(celcius_temps)
print(celcius_temps+np.around(mat*CORRECTION,2))
Results
original dataframe
2022-02-19 06:00:00 ...
50mb -61 -61 -61 -62 -63 -63 -64 -63 -62 -63
75mb -60 -60 -60 -61 -60 -61 -61 -61 -60 -60
100mb -60 -60 -60 -59 -59 -60 -59 -59 -59 -58
125mb -57 -57 -56 -55 -55 -55 -55 -55 -55 -55
150mb -55 -54 -53 -52 -53 -52 -52 -52 -52 -52
175mb -52 -52 -51 -51 -51 -50 -50 -50 -50 -49
200mb -50 -50 -49 -49 -49 -49 -48 -48 -48 -47
225mb -50 -50 -50 -50 -50 -50 -50 -49 -49 -48
250mb -50 -50 -50 -51 -52 -51 -51 -50 -49 -49
275mb -51 -51 -52 -53 -53 -53 -53 -52 -50 -49
300mb -51 -53 -54 -55 -54 -55 -55 -54 -51 -49
325mb -51 -51 -51 -51 -51 -51 -51 -50 -49 -48
350mb -50 -50 -48 -48 -47 -47 -47 -47 -48 -47
375mb -47 -47 -45 -44 -44 -44 -44 -44 -44 -44
400mb -44 -43 -43 -41 -41 -40 -40 -41 -41 -41
425mb -40 -40 -40 -39 -38 -38 -37 -37 -38 -38
450mb -37 -37 -37 -36 -36 -35 -34 -34 -35 -35
475mb -34 -34 -34 -34 -34 -33 -32 -32 -32 -33
500mb -31 -31 -31 -31 -31 -31 -30 -29 -29 -31
525mb -28 -28 -28 -28 -28 -28 -28 -27 -28 -30
550mb -26 -25 -25 -26 -26 -26 -27 -26 -27 -29
575mb -23 -23 -23 -23 -23 -23 -24 -24 -27 -28
600mb -21 -21 -21 -21 -21 -21 -22 -23 -27 -28
625mb -19 -19 -19 -19 -19 -19 -21 -23 -26 -27
650mb -17 -17 -17 -17 -17 -17 -19 -23 -25 -27
675mb -16 -15 -15 -15 -15 -16 -18 -21 -23 -24
700mb -15 -14 -14 -13 -13 -15 -17 -20 -21 -22
725mb -14 -13 -13 -12 -13 -15 -17 -18 -19 -20
750mb -12 -13 -12 -12 -13 -15 -16 -16 -17 -18
775mb -11 -12 -12 -13 -13 -15 -14 -15 -15 -16
800mb -10 -10 -12 -13 -13 -14 -13 -13 -13 -15
825mb -8 -9 -11 -12 -12 -12 -12 -11 -12 -15
850mb -8 -8 -9 -11 -11 -11 -10 -10 -10 -14
875mb -9 -8 -7 -9 -10 -9 -9 -8 -8 -12
900mb -10 -8 -7 -7 -9 -8 -8 -7 -7 -10
925mb -10 -8 -7 -6 -8 -7 -6 -6 -5 -8
950mb -10 -9 -8 -6 -7 -6 -5 -4 -4 -6
975mb -10 -9 -7 -6 -5 -4 -3 -3 -2 -4
1000mb -9 -8 -6 -5 -4 -3 -2 -1 -1 -3
The 1000mb 2022-02-19 06:00:00 receives 100% of the correction.
The 50mb row and the 2022-02-19 15:00:00 column remain unchanged.
with broadcast diminished correction
2022-02-19 06:00:00 ...
50mb -61.00 -61.00 -61.00 -62.00 -63.00 -63.00 -64.00 -63.00 -62.00 -63.0
75mb -59.47 -60.00 -60.00 -61.00 -60.00 -61.00 -61.00 -61.00 -60.00 -60.0
100mb -58.95 -60.00 -60.00 -59.00 -59.00 -60.00 -59.00 -59.00 -59.00 -58.0
125mb -55.42 -57.00 -56.00 -55.00 -55.00 -55.00 -55.00 -55.00 -55.00 -55.0
150mb -52.89 -54.00 -53.00 -52.00 -53.00 -52.00 -52.00 -52.00 -52.00 -52.0
175mb -49.37 -51.59 -51.00 -51.00 -51.00 -50.00 -50.00 -50.00 -50.00 -49.0
200mb -46.84 -49.06 -49.00 -49.00 -49.00 -49.00 -48.00 -48.00 -48.00 -47.0
225mb -46.32 -48.54 -50.00 -50.00 -50.00 -50.00 -50.00 -49.00 -49.00 -48.0
250mb -45.79 -48.01 -50.00 -51.00 -52.00 -51.00 -51.00 -50.00 -49.00 -49.0
275mb -46.26 -48.49 -51.71 -53.00 -53.00 -53.00 -53.00 -52.00 -50.00 -49.0
300mb -45.74 -49.96 -53.18 -55.00 -54.00 -55.00 -55.00 -54.00 -51.00 -49.0
325mb -45.21 -47.43 -49.65 -51.00 -51.00 -51.00 -51.00 -50.00 -49.00 -48.0
350mb -43.68 -45.91 -46.13 -48.00 -47.00 -47.00 -47.00 -47.00 -48.00 -47.0
375mb -40.16 -42.38 -42.60 -43.82 -44.00 -44.00 -44.00 -44.00 -44.00 -44.0
400mb -36.63 -37.85 -40.08 -40.30 -41.00 -40.00 -40.00 -41.00 -41.00 -41.0
425mb -32.11 -34.33 -36.55 -37.77 -38.00 -38.00 -37.00 -37.00 -38.00 -38.0
450mb -28.58 -30.80 -33.02 -34.25 -36.00 -35.00 -34.00 -34.00 -35.00 -35.0
475mb -25.05 -27.27 -29.50 -31.72 -33.94 -33.00 -32.00 -32.00 -32.00 -33.0
500mb -21.53 -23.75 -25.97 -28.19 -30.42 -31.00 -30.00 -29.00 -29.00 -31.0
525mb -18.00 -20.22 -22.44 -24.67 -26.89 -28.00 -28.00 -27.00 -28.00 -30.0
550mb -15.47 -16.70 -18.92 -22.14 -24.36 -26.00 -27.00 -26.00 -27.00 -29.0
575mb -11.95 -14.17 -16.39 -18.61 -20.84 -23.00 -24.00 -24.00 -27.00 -28.0
600mb -9.42 -11.64 -13.87 -16.09 -18.31 -20.53 -22.00 -23.00 -27.00 -28.0
625mb -6.89 -9.12 -11.34 -13.56 -15.78 -18.01 -21.00 -23.00 -26.00 -27.0
650mb -4.37 -6.59 -8.81 -11.04 -13.26 -15.48 -19.00 -23.00 -25.00 -27.0
675mb -2.84 -4.06 -6.29 -8.51 -10.73 -13.95 -18.00 -21.00 -23.00 -24.0
700mb -1.32 -2.54 -4.76 -5.98 -8.20 -12.43 -16.65 -20.00 -21.00 -22.0
725mb 0.21 -1.01 -3.23 -4.46 -7.68 -11.90 -16.12 -18.00 -19.00 -20.0
750mb 2.74 -0.49 -1.71 -3.93 -7.15 -11.37 -14.60 -16.00 -17.00 -18.0
775mb 4.26 1.04 -1.18 -4.40 -6.63 -10.85 -12.07 -15.00 -15.00 -16.0
800mb 5.79 3.57 -0.65 -3.88 -6.10 -9.32 -10.54 -12.77 -13.00 -15.0
825mb 8.32 5.09 0.87 -2.35 -4.57 -6.80 -9.02 -10.24 -12.00 -15.0
850mb 8.84 6.62 3.40 -0.82 -3.05 -5.27 -6.49 -8.71 -10.00 -14.0
875mb 8.37 7.15 5.92 1.70 -1.52 -2.74 -4.96 -6.19 -8.00 -12.0
900mb 7.89 7.67 6.45 4.23 0.01 -1.22 -3.44 -4.66 -6.88 -10.0
925mb 8.42 8.20 6.98 5.75 1.53 0.31 -0.91 -3.13 -4.36 -8.0
950mb 8.95 7.73 6.50 6.28 3.06 1.84 0.61 -0.61 -2.83 -6.0
975mb 9.47 8.25 8.03 6.81 5.58 4.36 3.14 0.92 -0.30 -4.0
1000mb 11.00 9.78 9.56 8.33 7.11 5.89 4.67 3.44 1.22 -3.0
I've looked into the numpy.linalg.lstsq method but I'm not sure how to implement it.
Answer: Passing df to make_matrix is not adequately loosely coupled. Instead you can just pass its size as two integers.
I do not trust your use of vstack. I would sooner add a new dimension via slice.
Rather than conditional zeroing of the matrix in place, you should probably use np.maximum.
celcius is spelled celsius.
Don't around where you've done it. If you want to round for print, there are better ways.
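As a quick equivalence check (the shapes here are arbitrary), the slice-plus-np.maximum approach reproduces the question's vstack formulation exactly once the correction factor is folded in:

```python
import numpy as np

def make_matrix(x, y, correction=20):
    # Column vector via a slice, row vector as-is; broadcasting does the rest
    arr_x = np.linspace(0, correction, x)[:, np.newaxis]
    arr_y = np.linspace(0, -correction, y)
    return np.maximum(arr_x + arr_y, 0)

def make_matrix_original(x, y, correction=20):
    # The question's formulation, with the final scaling folded in
    arr_x, arr_y = np.vstack(np.linspace(0, 1, x)), np.linspace(1, 0, y)
    matrix = (arr_x + arr_y) - 1
    matrix[matrix < 0] = 0
    return matrix * correction

print(np.allclose(make_matrix(39, 10), make_matrix_original(39, 10)))  # True
```

The np.maximum form also avoids mutating an intermediate array in place.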
Suggested
import pandas as pd
import numpy as np
def make_matrix(x: int, y: int, correction: float = 20) -> np.ndarray:
arr_x = np.linspace(0, correction, x)[:, np.newaxis]
arr_y = np.linspace(0, -correction, y)
matrix = np.maximum(arr_x + arr_y, 0)
return matrix
def main() -> None:
celsius_temps = pd.DataFrame.from_records(data, index=index)
celsius_temps.columns = pd.to_datetime(celsius_temps.columns)
mat = make_matrix(*celsius_temps.shape)
result = celsius_temps + mat
print(result)
if __name__ == '__main__':
main() | {
"domain": "codereview.stackexchange",
"id": 43107,
"tags": "python-3.x, numpy, pandas"
} |
If the torque caused by a force is cancelled out, is the force cancelled out as well? | Question: I calculated the value of $\theta$ to be $53^{\circ}$ for (ii) by using the principle of moments ($6\sin\theta \cdot \frac{r}{2} = 2.4r$), as the disc is in equilibrium. Nothing wrong so far.
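A quick arithmetic check of the quoted working (assuming, as the numbers suggest, that the 6 N force's moment arm is half that of the 2.4 N force; the arm lengths here are hypothetical, with r = 1):

```python
import math

# Hypothetical moment arms: 6 N force at r/2, 2.4 N force at r, with r = 1
theta = math.degrees(math.asin(2.4 * 1.0 / (6 * 0.5)))
print(round(theta))  # 53
# Horizontal component of the 6 N force, as computed in part (iii)
print(round(6 * math.cos(math.radians(theta)), 1))  # 3.6
```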
For (iii), I calculated the horizontal component of $\:\rm 6 N$ force ($6\cos 53^{\circ}= 3.6\:\rm N$) and since it is the only force that hasn't been cancelled, I deduced it must be the force (in opposite direction) of the pin on the disc.
But the answer is $6 \:\rm N.$
Isn't the vertical component of the force responsible for the anti-clockwise torque which is being cancelled by the clockwise torque? If so, why do I have to include it for my answer to (iii)?
Is it because, despite being the cause of the anti-clockwise torque (which has been cancelled), the force itself isn't cancelled and requires an opposing force? But again, if this force is cancelled, how can it exert any torque? I'm confused.
I've just been introduced to this topic so I'd like to apologize if I'm missing out something very simple. Thanks in advance :).
Answer: Don't think in terms of cancellation.
To answer (ii) you've used the Principle of Moments.
To answer (iii), use the fact that for equilibrium the net force is zero.
(The fact that the forces produce moments whose sum is zero is irrelevant here.) | {
"domain": "physics.stackexchange",
"id": 48489,
"tags": "homework-and-exercises, forces, torque, equilibrium"
} |
Mass Distribution to turn Hanging Chain into Parabola | Question: I've learned recently about how a uniform chain hanging between two points will form a catenary curve (of the form $a \cdot \cosh (\frac{x}{a})$), and I reflected on the fact that this is only because the chain has a uniform mass density. I then reasoned that a chain with a non-uniform mass density would have another shape when hung between two poles. Specifically, there must exist some mass distribution such that the hanging chain would form a parabola. However, I am unsure how to find this mass distribution. Does anyone know what this mass distribution is, or how I would go about finding it?
Answer: As discussed here, for example, one obtains a parabolic shape from a hanging chain if the vertical load is constant regardless of the slope of the chain. The reason is that a horizontal force balance anywhere gives $T\cos\theta=T_0$, where $T$ is the tension, $\theta$ is the angle with the horizontal, and $T_0$ is the tension at the (symmetric) center at the origin. A vertical force balance anywhere gives $T\sin\theta=wx$, where $w$ is the weight per horizontal distance. Dividing the latter by the former, we obtain $\tan\theta=\frac{dy}{dx}=\frac{wx}{T_0}$, which we can integrate to obtain a parabola $y\sim x^2$.
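That integration step is easy to sanity-check numerically (the values of $w$ and $T_0$ here are arbitrary): a forward-Euler pass over $\frac{dy}{dx}=\frac{wx}{T_0}$ reproduces $y=\frac{wx^2}{2T_0}$:

```python
w, T0, x_max, n = 2.0, 5.0, 3.0, 100_000
dx = x_max / n
y = 0.0
for i in range(n):
    y += (w * (i * dx) / T0) * dx  # Euler step on dy/dx = w x / T0
print(round(y, 3), round(w * x_max**2 / (2 * T0), 3))  # both 1.8
```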
Therefore, the reference mass density at the center of the chain should be reduced elsewhere by the factor $\cos\theta$ so that a sharply sloped chain doesn't weigh any more than a horizontal chain. You can verify that if the parabola is expressed by $y=ax^2$, where $a=\frac{y_0}{x_0^2}$ refers to the attachment point at $(x_0,y_0)$, then $\theta=\tan^{-1}\left(\frac{dy}{dx}\right)=\tan^{-1}(2ax)=\tan^{-1}\left(\frac{2y_0x}{x_0^2}\right)$. The mass density should therefore be adjusted by
$$\cos\left[\tan^{-1}\left(\frac{2y_0x}{x_0^2}\right)\right]=\frac{1}{\sqrt{\frac{4y_0^2x^2}{x_0^4}+1}}=\frac{x_0^2}{\sqrt{4y_0^2x^2+x_0^4}}$$ relative to the mass density at the center. | {
"domain": "physics.stackexchange",
"id": 85456,
"tags": "newtonian-mechanics, variational-calculus, statics"
} |
How to understand this form of writing the solution: (some salt • n H₂O)? | Question: For example — "high purity chloride salt of Zinc $(\ce{ZnCl2.2H2O})$" or "various concentrations of $\ce{FeCl2.4H2O}$". What does the number before $\ce{H2O}$ mean?
Answer: The salt's crystal lattice's repeating unit consists of n molecules of salt and m molecules of water. Such salts are called hydrates. Wikipedia even has a nice picture of a hydrated vs. non-hydrated salt.
You can have anhydrous ferrous chloride, as well as ferrous chloride tetrahydrate. The number indicates how many water molecules are present in the crystal lattice's unit cell.
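For instance (with approximate atomic masses; this is just a rough sketch), a mole of $\ce{FeCl2.4H2O}$ weighs about 1.57 times as much as a mole of anhydrous $\ce{FeCl2}$:

```python
# Approximate atomic masses in g/mol
Fe, Cl, H, O = 55.845, 35.45, 1.008, 15.999

anhydrous = Fe + 2 * Cl                      # FeCl2
tetrahydrate = anhydrous + 4 * (2 * H + O)   # FeCl2.4H2O

print(round(anhydrous, 1), round(tetrahydrate, 1))   # ~126.7 vs ~198.8
print(round(tetrahydrate / anhydrous, 2))            # ~1.57
```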
EDIT
This has just crossed my mind: be careful when preparing solutions of metal complexes to make sure you know what compound you're working with. One gram of anhydrous salt contains more equivalents than a gram of hydrated salt. This information should be obvious on the bottle. If not, ask a lab tech if they can identify the hydrated/anhydrous salt by memory. | {
"domain": "chemistry.stackexchange",
"id": 7510,
"tags": "inorganic-chemistry, coordination-compounds, aqueous-solution"
} |
Why doesn't current flow in a reverse-biased diode? | Question: Consider this reverse-biased diode:
I read that no current, or only a very small one, flows in a reverse-biased diode, as the depletion layer widens and offers a huge resistance, so no electrons can cross it. But why do the electrons or holes need to cross the depletion layer at all? In the diagram above, the positive charges (holes) are moving towards the left and the current due to electrons is also to the left, so won't the circuit be completed?
Answer: The current flows shown in the diagram are only temporary and flow only when the battery is first connected.
When you first connect the battery holes flow to the left (in your diagram) and electrons flow to the right, and the resulting charge separation creates a potential difference across the depletion layer. The flow stops when the potential difference across the depletion layer becomes equal and opposite to the battery potential. At this point the net potential difference is zero so the charges stop flowing. | {
"domain": "physics.stackexchange",
"id": 50271,
"tags": "semiconductor-physics"
} |
Addition of angular momentum for 4 spins | Question: I have a system of $4$ spin $\frac{1}{2}$ particles, whose Hamiltonian looks like this:
$$ H = \alpha (s_1 \cdot s_2 + s_1 \cdot s_4 +s_2 \cdot s_3 +s_3 \cdot s_4 )$$
In order to find its eigenvalues and respective degeneracies I tried the typical method:
$$ \hat{\overrightarrow{S}} = s_1+ s_2+s_3+s_4 $$
$$ \hat{S}^2 = (s_1+ s_2+s_3+s_4) \cdot (s_1+ s_2+s_3+s_4)$$
which leads me to:
$$ \frac{1}{2} \left( \hat{S}^2 - \sum_{i=1}^4 |s_i|^2\right) = s_1 \cdot s_2 + s_1 \cdot s_4 +s_2 \cdot s_3 +s_3 \cdot s_4 + s_1 \cdot s_3 +s_2 \cdot s_4 $$
There are two extra terms $s_1 \cdot s_3$ and $s_2 \cdot s_4$ that will still appear in the Hamiltonian. How do I handle these in order to get the possible eigenvalues and degeneracies of this particular system?
Answer: You are composing four doublets, hence you are inspecting a $2^4\times 2^4= 16\times 16$ matrix. This reduces to the familiar spin 2, spin 1, and spin 0 blocks, specifically
a quintet, three triplets, and two singlets, 16 states in all,
$$
1/2\otimes 1/2\otimes 1/2\otimes 1/2= 2\oplus (3) ~~1\oplus (2) ~~0.
$$
But your hamiltonian $\alpha(\vec s_1+\vec s_3)\cdot(\vec s_2+\vec s_4)$ is special: it does not care how spins 1 & 3 compose together, symmetrically or antisymmetrically, in a triplet or a singlet, and likewise for 2 & 4. These two options are degenerate in both cases, so you are really composing just
$$
\vec S_a = {\vec s_1+\vec s_3} ~~~\hbox{with} \\
\vec S_b = {\vec s_2+\vec s_4},
$$
where the spins of each $S_a$ and $S_b$ are 0 and 1; so, four combinations in all for $\vec S= \vec S_a+\vec S_b$,
$$
H= {\alpha\over 2} (S^2-S_a^2-S_b^2).
$$
Recalling that the Casimirs $S(S+1)$ for spin 0, 1, and 2 are 0, 2, and 6, respectively: the singlet-singlet combination (a singlet) has energy eigenvalue 0, and the two singlet-triplet combinations (triplets) have energy eigenvalue 0 as well.
The triplet-triplet combination reduces to the quintet, with eigenvalue α; a triplet with eigenvalue -α; and a singlet with eigenvalue -2α.
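This spectrum is easy to confirm by brute force; here is a sketch diagonalizing the $16\times 16$ Hamiltonian numerically (with $\alpha = 1$ and $\hbar = 1$):

```python
import numpy as np

# Spin-1/2 operators: s = sigma / 2 (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def site_op(s, i):
    """Embed the single-spin operator s at site i (0..3) of the 4-spin space."""
    mats = [I2] * 4
    mats[i] = s
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def dot(i, j):
    """s_i . s_j acting on the 16-dimensional Hilbert space."""
    return sum(site_op(s, i) @ site_op(s, j) for s in (sx, sy, sz))

alpha = 1.0
H = alpha * (dot(0, 1) + dot(0, 3) + dot(1, 2) + dot(2, 3))

vals, counts = np.unique(np.round(np.linalg.eigvalsh(H), 8), return_counts=True)
print(dict(zip(vals, counts)))  # degeneracies: -2α once, -α three times, 0 seven times, +α five times
```

The degeneracies (1 + 3 + 7 + 5 = 16) match the quintet, the triplets, and the singlets above.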
Not all products of the reduction with a common spin are treated identically by the (a)symmetric hamiltonian. | {
"domain": "physics.stackexchange",
"id": 84684,
"tags": "quantum-mechanics, angular-momentum, quantum-spin, eigenvalue"
} |
Including both Python and C++ Plugins in a single RQT Plugin | Question:
I'm writing a RQT plugin that includes some functionality of other plugins, as well as my own.
My plugin is in C++, because I'd like to add some rqt_rviz visualization. Mainly the OGRE window.
Along side the RViz visualization, I want to add some functionality of the rqt_bag plugin. I plan on branching this plugin and heavily modifying it.
I want to combine the two plugins in code and create my plugin. I'll call that rqt_curranw. The final product (assume rqt_bag will be much different) looks like the screenshot attached.
The issue is...rqt_bag is written in Python and rqt_rviz is written in C++. I can create two plugins and dock them next to each other, but I'd rather have my single plugin include all of the functionality I want. No extra steps. I can kind of get around this by using a perspective, but I don't see a way to include arguments to the plugins using perspectives.
So my question is, how can I include functionality from both plugins in my one plugin widget? In C++, I can use the pluginlib::ClassLoader to upload the rqt_rviz plugin, and use the functions associated with that class. However, I can't use it to load the rqt_bag plugin, since it's written in Python.
I've looked through rqt_gui for inspiration and it's a monster. I'm hoping to find some quick answers here before jumping down that rabbit hole.
Thoughts:
Is there a way to latch onto an existing QtApplication. For example, if the QtApplication is written in C++ can a simultaneous Python process access it and add widgets to it? Or vice-versa?
I assume there is a way to do this, since rqt_gui uploads both Python and C++ plugins, and is written in Python.
Is there a way to hack perspectives to use plugin arguments? Is this the right approach?
Am I barking up the wrong tree and over-complicating things?
Originally posted by curranw on ROS Answers with karma: 211 on 2016-03-30
Post score: 2
Original comments
Comment by 2ROS0 on 2016-08-15:
I'm looking to do something similar. Which path did you go down finally? Any suggestions based on your experience?
Answer:
Mixing two languages in a single rqt plugin is not supported. rqt does a lot of internal work to enable using different languages and I don't think you want to go down that rabbit hole in a single plugin.
I see two viable options to proceed:
Keep using two separate plugins, one in C++, one in Python and let them interact through ROS communication patterns with each other (messages, services). The "only" downside is that the user has to open two plugins. (It would be possible though to add API to rqt to enable plugins to start other plugins somehow automatically.)
Reimplement the rqt_bag user interface in C++ using the C++ API of rosbag. The significant downside of this is the effort since rqt_bag does quite a lot of stuff already.
Since option 2 is quite some effort I would suggest option 1.
Originally posted by Dirk Thomas with karma: 16276 on 2016-03-30
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by 2ROS0 on 2016-08-15:
I'm facing a similar situation - where I want to have my plugin (in C++) run along with a modified rqt_plot as a single app. How difficult would it be to implement? I know you mentioned it's a lot of work, but I was wondering how much is a lot. :)
Comment by curranw on 2016-08-18:
I ended up ditching the idea and using RViz and rqt_bag separately. | {
"domain": "robotics.stackexchange",
"id": 24277,
"tags": "ros, pluginlib, rqt, rqt-bag"
} |
Why is the magnetic moment of a proton weaker than the electron? | Question: I know that the nucleus of an atom does not contribute to the magnetization of an atom or material, and that the magnetic moments of protons are much weaker than electrons. Why is the magnetic moment of a proton weaker than the electron?
Note: I know that electrons don't actually spin (they're described by wave functions).
Answer: The nuclear magnetic moment is given by $\vec{m_N}=g_N\mu_N\vec{I}$, where $\vec{I}$ is the spin of the nucleus, $g_N$ is the $g$-factor and $\mu_N=\frac{e\hbar}{2m_p}$ is the nuclear magneton.
The orbital magnetic moment of the electron is given by $\vec{m}=-\mu_B\vec{l}$, where $\vec{l}$ is the angular momentum (more precisely, the angular momentum operator) and $\mu_B=\frac{e\hbar}{2m_e}$ is the Bohr magneton.
For proton $\langle\vec{m}_N\rangle\approx 2.793\mu_N $.
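Plugging in numbers (with the CODATA proton-electron mass ratio hard-coded here, since $\mu_B/\mu_N = m_p/m_e$):

```python
m_p_over_m_e = 1836.15267  # proton-electron mass ratio, equal to mu_B / mu_N
proton_moment_in_mu_N = 2.793
print(round(m_p_over_m_e / proton_moment_in_mu_N))  # 657, i.e. of order 700
```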
Hence the ratio $\frac{\langle\vec{m}\rangle}{\langle\vec{m}_N\rangle}\approx \frac{\mu_B}{\mu_N}\approx \frac{2000}{2.793}\approx 700$. | {
"domain": "physics.stackexchange",
"id": 28700,
"tags": "electromagnetism, electrons, magnetic-moment"
} |
Problems with efficient solution except for a small fraction of inputs | Question:
The halting problem for Turing machines is perhaps the canonical undecidable
set. Nevertheless, we prove that there is an algorithm deciding almost
all instances of it. The halting problem is therefore among the growing collection
of those exhibiting the “black hole” phenomenon of complexity theory,
by which the difficulty of an unfeasible or undecidable problem is confined
to a very small region, a black hole, outside of which the problem is easy.
[Joel David Hamkins and Alexei Miasnikov, "The halting problem is decidable on a set of asymptotic probability one", 2005]
Can anyone provide references to other “black holes” in complexity theory, or another place where this or related concepts are discussed?
Answer: I'm not sure whether this is what you're looking, but the phase transition in random SAT is an example. Let $\rho$ be the ratio of number of clauses to number of variables. Then a random SAT instance with parameter $\rho$ is very likely to be satisfiable if $\rho$ is less than a fixed constant (near 4.2) and is very likely to be unsatisfiable if $\rho$ is a little bit more than this constant. The "black hole" is the phase transition. | {
"domain": "cstheory.stackexchange",
"id": 2582,
"tags": "cc.complexity-theory, reference-request, computability, undecidability"
} |
Are there survey papers in theoretical computer science? | Question: Are there conferences or journals where we can publish surveys/literature review papers related to theoretical computer science problems? If so, please provide a list of such conferences and journals.
I know there are many options in applied areas of computer science, but I have not seen this trend in theoretical computer science.
I work in computational algebra and haven't seen any survey papers so far.
Answer: Yes!
These survey series come to mind:
Foundations and Trends in TCS (many authors put a free version on their web page)
Theory of Computing Graduate Surveys
SIGACT News Complexity Column (and also sometimes other technical columns etc in SIGACT News)
Bulletin EATCS regularly has surveys and tutorials
To your more specific question, can you be even more specific? "Computational algebra" is a pretty big field. I recall seeing surveys on computational algebraic geometry, computational real algebraic geometry, computational group theory (several links at that page). | {
"domain": "cstheory.stackexchange",
"id": 5491,
"tags": "reference-request, survey"
} |
Zwitterions/IEP of glycine at pH 6? (Paradox?) | Question: I am trying to understand an experiment: Some crystalline, solid glycine was dissolved in water and a pH of 6 was measured (also calculable with $\mathrm{pH} = \frac{1}{2}(\mathrm{p}K_\mathrm{a1} + \mathrm{p}K_\mathrm{a2}) = \frac{1}{2}(2.34 + 9.6) = 5.97$).
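A one-line check of that arithmetic:

```python
pKa1, pKa2 = 2.34, 9.6
print(round((pKa1 + pKa2) / 2, 2))  # 5.97
```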
I have tried to explain the acidic pH value: Solid glycine is in a crystalline structure, so all molecules are in a zwitterionic state. If it is dissolved in neutral water, the crystal structure is lost and $100~\%$ of the zwitterions are dissolved in the neutral solution. The carboxylic and amino groups now act as an acid and a base in aqueous solution. The acid reaction of the carboxylic group predominates ($\mathrm{p}K_\mathrm{a1} < \mathrm{p}K_\mathrm{a2}$). Some amino acid molecules are now negative ions, some are still zwitterions.
But now comes my question (the contradiction?): pH = 6 is also the isoelectric point (IEP), defined as the pH at which all amino acid molecules are in the zwitterionic state. But according to my theory above, some amino acids have already reacted "away" from the zwitterionic state.
Are there flaws in my reasoning — is my explained "paradox" not true?
Answer: You need to separate the two concepts.
When dissolving a neutral amino acid in water, a buffered acid-base reaction will take place as you highlighted, giving a certain pH value. In this case, the following equilibrium:
$$\ce{HOOC-CH2-NH3+ <=> ^{-}OOC-CH2-NH3+ <=> ^{-}OOC-CH2-NH2}$$
will be somewhere between the pure second and the pure third species. (The neutral, non-zwitterionic species is neglectable in aquaeous solution.) Superfluous protons liberated into the solution will end up as $\ce{H3O+}$ explaining the overall reduction in pH value when compared to a neutral solution. You won’t notice it unless you perform really accurate experiments, but the solution’s pH value will be a tad higher than the isoelectric point.
If you go on and add external protons, i.e. by carefully adding an acid, you can adjust the overall solution’s pH. Since the glycinide anions are the strongest base present, they will take up the protons first, followed by the zwitterions. At a certain pH value (corresponding to a certain external proton concentration added), the concentrations of glycinide anions and glycinium cations will be identical and we have reached the isoelectric point of glycine. It is, and has to be, slightly different from the pH value obtained by dissolving glycine. | {
"domain": "chemistry.stackexchange",
"id": 5714,
"tags": "organic-chemistry, acid-base, amino-acids"
} |
Pole and Barn Paradox w/ Spacetime Interval | Question: I'm having trouble with a pole and barn paradox problem. The problem is as follows:
A pole vaulter is running with a pole at $ v=\frac{\sqrt3}{2}c $. Her pole has a proper length of $L$. She runs into a barn with proper length $\frac{L}{2}$ with doors on the front and back. When the pole vaulter runs into the barn, a farmer tries to close both front and back doors at the same time, but only for an instant, and then reopens them.
What is the expression for the time interval of the door closings in the pole vaulter's frame?
So I know that $\gamma=2$, so in the pole vaulter's frame, the barn has length $\frac{L}{4}$, and in the farmer's frame, the pole has length $\frac{L}{2}$.
In order to find the time interval, I tried using the spacetime interval. In the farmer's frame, the time interval between the two doors closing is 0. This means that the spacetime interval between the two events is $$ (\Delta s) ^2=-\left(\frac{L}{2}\right)^2$$
Then, I equate this to the spacetime interval from the pole vaulter's frame. $$(\Delta s) ^2=(c\Delta t)^2-(\Delta x)^2 = (c\Delta t)^2 - \left(\frac{L}{4}\right)^2 = -\left(\frac{L}{2}\right)^2$$
But solving for $\Delta t$ gives a complex number, when I should be getting a real solution. Why am I getting this result? Does it have to do with how I am selecting my $\Delta x$ for the pole vaulter?
Answer: Often, I think a nice way to untangle the mess we find ourselves in is by appealing to the all-knowing God of special relativity, blessed be her name, Lorentztransformalia.
Let event $A$ denote the front door closing, and let event $B$ denote the rear door closing. Without loss of generality, we assume that we've chosen our coordinates so that in the farmer's frame, event $A$ has spacetime coordinates $(ct_A, x_A) = (0,0)$, and event $B$ has spacetime coordinates $(ct_B, x_B) = (0, L/2)$.
Using the standard Lorentz boost, we find that the coordinates of the first event in the vaulter's frame are again $(ct_A', x_A') = (0,0)$ because linear transformations always map the zero vector to itself, while
$$
\begin{pmatrix} ct_B' \\ x_B'\end{pmatrix}
= \begin{pmatrix} \gamma & -\gamma\beta \\ -\gamma\beta & \gamma\end{pmatrix}
\begin{pmatrix} 0 \\ L/2 \end{pmatrix}
= \begin{pmatrix} 2 & -\sqrt{3} \\ -\sqrt{3} & 2\end{pmatrix}
\begin{pmatrix} 0 \\ L/2 \end{pmatrix}
= \begin{pmatrix} -\sqrt{3}L/2 \\ L\end{pmatrix}
$$
In particular, the spatial separation of the events in the vaulter's frame is
$$
x_B' - x_A' = L \neq L/4
$$
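A quick numerical check of this boost (my own sketch, in units where $c = 1$ and the pole's proper length is $L = 1$):

```python
import numpy as np

beta = np.sqrt(3) / 2               # v = (sqrt(3)/2) c
gamma = 1 / np.sqrt(1 - beta**2)    # evaluates to 2

L = 1.0                             # proper length of the pole (units choice)
boost = gamma * np.array([[1.0, -beta],
                          [-beta, 1.0]])

# event B (rear door closing) in the farmer's frame: (ct, x) = (0, L/2)
ct_B, x_B = boost @ np.array([0.0, L / 2])
# ct_B = -sqrt(3)*L/2 and x_B = L, matching the matrix multiplication above
```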
hmmm...so it seems the original assumption that the spatial separation of these events equals $L/4$ in the vaulter's frame is not correct. | {
"domain": "physics.stackexchange",
"id": 30618,
"tags": "homework-and-exercises, special-relativity, inertial-frames, observers"
} |
Calculate the flux of a point charge with Gauss's law | Question: I know from my class that to calculate the flux of a point charge with Gauss's law, I have to make a surface that intersects with all of the flux lines resulting from the charge, and then make this integration on that surface
$$
\iint \vec{E}\cdot\vec{dA}
$$
and that surface will of course be a sphere. But I thought that if I only need to make a surface which intersects with all of the field lines, then I may use the surface bounded by a circle with infinite radius and then multiply the result by two because this surface will only intersect with the lines at the upper side of the charge. This image shows what I mean.
But when I tried to do this I got an infinite flux. Was this approach WRONG in the first place? If so, why?
Answer: You probably did some wrong calculation, because your reasoning works. Take a circle of radius $r$ a distance $a$ above the charge. The increase $d\Phi$ of the flux by increasing the radius by $dr$ is given by
$$d\Phi=\frac{q}{4\pi\epsilon_0}\frac{2\pi r \cos \theta dr}{a^2+r^2}=\frac{q}{2\epsilon_0}\frac{ar dr}{(a^2+r^2)^{3/2}},$$
where $\theta$ is the polar angle measured from the axis through the charge. Then
$$\Phi=\frac{q}{2\epsilon_0}\int_0^\infty\frac{ar dr}{(a^2+r^2)^{3/2}}=-\frac{q}{2\epsilon_0}\left.\frac{a}{\sqrt{a^2+r^2}}\right|_{r=0}^\infty=\frac{q}{2\epsilon_0},$$
which is the expected result, since the flux over a closed surface must be $\dfrac{q}{\epsilon_0}$. | {
"domain": "physics.stackexchange",
"id": 17176,
"tags": "electrostatics, electricity, gauss-law"
} |
Checks size of and number of files in each subdirectory | Question: The code snippet here checks every folder for subfolders, and stores the sub-subfolders.
This is my sample directory structure. I want to read through every sub-folder to find the number of files and the total size of each sub-folder.
--Parent
|---FolderA
| |__subFolder1
| |__subFolder2
|
|---FolderB
|__subFolder3
|__subFolder4
|__subFolder5
import glob
import os
from IPython import embed
import subprocess
import humanize
list_of_files = os.listdir(os.getcwd())
total_stats =[]
path = os.getcwd()
for files in list_of_files: #check FolderA, Folder B etc.,
print files
while (os.walk(path+"/"+files).next()):
curr_path ,dirs,_ = os.walk(path+"/"+files).next()
data_dict = {}
flag = "_hd"
for d in dirs: #check subFolder1,2,3,4...
sub_p,sub_d,_ = os.walk(curr_path+"/"+d).next()
for d in sub_d: #reads through files inside subfolder
p,_,f =os.walk(sub_p+"/"+d).next()
total_size = 0
for fn in f: #Finds total size of all files
total_size += os.path.getsize(p+"/"+fn)
                data_dict[d+flag] = [len(f),humanize.naturalsize(total_size)] #prints file size in human readable form Ex: '2.3MB'
flag = "_rd"
break
total_stats.append([files,data_dict]) #updates no. of files and sizes
How can I optimize this code to avoid this ridiculous number of for loops?
Answer: edit: I just realized that you're using Python 2. I'm very unfamiliar with it and unsure whether you'll be able to use yield from, pathlib and f-strings there. If you like the solution, I think I could rewrite it to be usable for Python 2.
Solution for Python 3.6 and up (I am very confident that this one will produce no errors):
Instead of creating all your information within one function, you can reduce the nesting depth by pulling up some of the functionality.
from pathlib import Path
def folders_in_path(path):
if not Path.is_dir(path):
raise ValueError("argument is not a directory")
yield from filter(Path.is_dir, path.iterdir())
def folders_in_depth(path, depth):
if 0 > depth:
raise ValueError("depth smaller 0")
if 0 == depth:
yield from folders_in_path(path)
else:
for folder in folders_in_path(path):
yield from folders_in_depth(folder, depth-1)
def files_in_path(path):
if not Path.is_dir(path):
raise ValueError("argument is not a directory")
yield from filter(Path.is_file, path.iterdir())
def sum_file_size(filepaths):
return sum([filep.stat().st_size for filep in filepaths])
if __name__ == '__main__':
for folder in folders_in_depth(Path.cwd(),1):
        # vvvv quick hack to use len(), does not perform well
files = list(files_in_path(folder))
total_size = sum_file_size(files)
print(f'{folder}: filecount:{len(files)}, total size:{total_size}')
which produces:
/tmp/Parent/FolderB/subfolder3: filecount:10, total size:50
/tmp/Parent/FolderB/subfolder2: filecount:10, total size:50
/tmp/Parent/FolderB/subfolder1: filecount:10, total size:50
/tmp/Parent/FolderA/subfolder3: filecount:10, total size:50
/tmp/Parent/FolderA/subfolder2: filecount:10, total size:50
/tmp/Parent/FolderA/subfolder1: filecount:10, total size:50
if your cwd is Parent.
This will only count files that are in a subdirectory at depth == 1, and nothing else.
There might be some neater solutions, that's just what I came up with in a few minutes.
edit2: I was curious, so I did a 2.7 version. os.walk uses depth-first search. From what I gather, it could be done like this:
import os
def folders_in_depth(path, depth, walk_iter=None):
if walk_iter is None:
walk_iter = os.walk(path)
if 0 > depth:
raise ValueError("depth smaller 0")
if 0 == depth:
dirpath, dirnames, filenames = next(walk_iter)
for i in range(len(dirnames)):
dirpath, dirnames, filenames = next(walk_iter)
yield dirpath, filenames
else:
dirpath, dirnames, filenames = next(walk_iter)
for i in range(len(dirnames)):
for result in folders_in_depth(path, depth-1, walk_iter=walk_iter):
yield result
def sum_file_size(folder, filepaths):
return sum([os.path.getsize(folder + '/' + filep) for filep in filepaths])
if __name__ == '__main__':
for folder in folders_in_depth(os.getcwd(),1):
foldername, files = folder
print foldername,': filecount:', len(files), ', total size:', sum_file_size(foldername,files)
which results in
/tmp/Parent/FolderB/subFolderB3 : filecount: 10 , total size: 50
/tmp/Parent/FolderB/subFolderB2 : filecount: 10 , total size: 50
/tmp/Parent/FolderB/subFolderB1 : filecount: 10 , total size: 50
/tmp/Parent/FolderA/subFolderA3 : filecount: 10 , total size: 50
/tmp/Parent/FolderA/subFolderA2 : filecount: 10 , total size: 50
/tmp/Parent/FolderA/subFolderA1 : filecount: 10 , total size: 50
The extra spaces come from Python 2's print statement, which inserts a space after each comma-separated item.
Here, the iterator is passed down in the recursion and has to be forwarded with next to produce the correct result.
Caveat: I think this 2.7 solution will not work if the 'subFolder*' contain more folders of their own. Judging from how os.walk is implemented, modifying that solution might be the best option, so you don't have to worry about that. (Sadly, I was looking at how 3.6 implements os.walk, which is a little more complicated.)
So for this to work correctly, you'd have to forward the iterator through the complete sub-tree, which is completely unnecessary.
os.walk allows to modify the returned dirname to control where it should recurse via side effects. So clearing that might lead to the wanted result, you'd need to test that. | {
"domain": "codereview.stackexchange",
"id": 27977,
"tags": "python, performance, file-system, iteration"
} |
Can someone please explain the "infrared catastrophe"? | Question: In my readings I've run into this idea of an "infrared catastrophe" associated with 1/f noise. As far as I can tell it is because when you graph the periodogram of the 1/f signal you see the PSD goes to infinity as frequency goes to 0. Not sure what that means practically though. If we are talking about a sound wave, does that mean the sound becomes infinitely loud at low frequencies? What is the "catastrophe"?
Answer: The infrared catastrophe seems to be named in analogy to the ultraviolet catastrophe of black-body radiation.
In physics, an infrared divergence or infrared catastrophe is a situation in which an integral, for example a Feynman diagram, diverges because of contributions of objects with very small energy approaching zero, or, equivalently, because of physical phenomena at very long distances.
The infrared (IR) divergence only appears in theories with massless particles (such as photons). They represent a legitimate effect that a complete theory often implies. One way to deal with it is to impose an infrared cutoff and take the limit as the cutoff approaches zero and/or refine the question. Another way is to assign the massless particle a fictitious mass, and then take the limit as the fictitious mass vanishes.
The divergence is usually in terms of particle number and not empirically troubling, in that all measurable quantities remain finite.
So it is a problem in calculating measurable quantities. | {
"domain": "physics.stackexchange",
"id": 12436,
"tags": "fourier-transform, frequency, acoustics"
} |
Equivalent RC circuit to a RRC circuit? | Question: I'm in doubt about a situation that I've seen sometimes: imagine we have a resistor in parallel with a resistor and a capacitor in series. Since I don't know how to generate figures of circuits to post here, the situation can be described as: a single resistor on the right, and on the left a resistor and a capacitor in series.
If there were no capacitor, I know I could replace the resistors by an equivalent one. My doubt is: does this continue to be true in this case? I mean, can I replace this configuration by one capacitor with one resistor in series such that this resistor is equivalent to the other two? If we can, what's the argument behind this?
Answer:
I mean, can I replace this configuration by one capacitor with one
resistor in series such that this resistor is equivalent to the other
two?
The answer is actually no.
For a single resistor and capacitor in series, the real part of the impedance is independent of frequency, i.e., the real part acts like a resistor.
$Z_s = R_s + \frac{1}{j \omega C}$
However, for the circuit you describe, the real part varies with frequency. This is easily seen by noting that at zero frequency, the impedance is real and equal to the value of the parallel resistance. At "infinite" frequency, the impedance is real and equal to the parallel combination of the two resistances.
The equivalent impedance of the circuit you describe is:
$Z_{eq} = R_p || (R_s + \frac{1}{j \omega C})$
This can be expressed as the sum of a real part and an imaginary part but the real part involves the radian frequency $\omega$.
So, although we can write the above as the sum of a real (resistive) part and an imaginary (reactive) part, the real part acts like a frequency dependent resistor.
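A small numerical illustration of this frequency dependence (component values are my own arbitrary choices, not from the question):

```python
import numpy as np

Rp, Rs, C = 1000.0, 500.0, 1e-6   # ohms, ohms, farads (assumed values)

def z_eq(omega):
    """Rp in parallel with (Rs in series with a capacitor C)."""
    z_branch = Rs + 1 / (1j * omega * C)
    return Rp * z_branch / (Rp + z_branch)

low = z_eq(1.0).real      # capacitor nearly open -> close to Rp
high = z_eq(1e9).real     # capacitor nearly a short -> close to Rp*Rs/(Rp+Rs)
```

The real part drifts from about 1000 Ω at low frequency to about 333 Ω at high frequency, so no single frequency-independent series resistor can stand in for the pair.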
If your circuit were to be operated at a single frequency, then, in that limited context, the answer is yes, one can replace the two resistors with an equivalent resistor for that particular frequency. | {
"domain": "physics.stackexchange",
"id": 7861,
"tags": "homework-and-exercises, electric-circuits, capacitance, electrical-resistance"
} |
Why are Alkali atoms used in many Cold Atom experiments? | Question: It seems that alkali atoms are often used in cold atom experiments. The first BEC was formed with alkali atoms, and many modern experiments used Alkalis. What is special about having a single electron in the outer most orbital that is useful for cold atom experiments?
Answer: Alkali atoms have several benefits!
The one outer electron makes them "hydrogen-like". Therefore, it is "easy" to calculate the energy levels, which makes predictions and calculations using these elements far easier!
Since the energy structure is very simple, you can find a closed cooling cycle. E.g. cooling rubidium requires the use of only one repumping laser beam to close the cooling cycle.
Why not just use hydrogen? Alkali atoms feature transition frequencies which are easily accessible (laser technology is very advanced for visible to near-infrared light). Hydrogen would require UV laser light, which is hard to produce, and air is not transparent to this light.
Feshbach resonances are certainly nice to have, but without the points I mentioned above, you could not even cool the atoms, rendering the study of Feshbach resonances impossible. | {
"domain": "physics.stackexchange",
"id": 99964,
"tags": "atomic-physics, bose-einstein-condensate, cold-atoms"
} |
Anticommutator relation in Bogoliubov-de Gennes Hamiltonian | Question: I almost solved the problem Equivalence of Bogoliubov-de Gennes Hamiltonian for nanowire. In the next steps I use the notation of arXiv:0707.1692:
$$
\Psi^{\dagger} = \left(\left(\psi_{\uparrow}^{\dagger}, \psi_{\downarrow}^{\dagger}\right), \left(\psi_{\downarrow}, -\psi_{\uparrow}\right)\right)
$$
and
$$
\Psi = \left(\left(\psi_{\uparrow}, \psi_{\downarrow}\right), \left(\psi_{\downarrow}^{\dagger}, -\psi_{\uparrow}^{\dagger}\right)\right)^{T}\text{.}
$$
I'm trying to show that the Hamiltonian for a nanowire with proximity-induced superconductivity
$$
\hat{H} = \int dx \text{ } \left[\sum_{\sigma\epsilon\{\uparrow,\downarrow\}}\psi_{\sigma}^{\dagger}\left(\xi_{p} + \alpha p\sigma_{y} + B\sigma_{z}\right)\psi_{\sigma} + \Delta\left(\psi_{\downarrow}^{\dagger}\psi_{\uparrow}^{\dagger} + \psi_{\uparrow}\psi_{\downarrow}\right)\right]\text{,}
$$
can be written as
$$
\hat{H} = \frac{1}{2}\int dx \text{ } \Psi^{\dagger}\mathcal{H}\Psi
$$
with $\mathcal{H} = \xi_{p} 1\otimes \tau_{z} + \alpha p \sigma_{y}\otimes\tau_{z} + B\sigma_{z}\otimes 1 + \Delta 1\otimes\tau_{x}$ (here $\tau_{i}$ are the Pauli matrix for the particle-hole space and $\otimes$ means the Kronecker product).
Here I calculate, as an example, the first and third terms of $\Psi^{\dagger}\mathcal{H}\Psi$.
$$
\tau_{z}\Psi =
\left(\left(\psi_{\uparrow}, \psi_{\downarrow}\right), -\left(\psi_{\downarrow}^{\dagger}, -\psi_{\uparrow}^{\dagger}\right)\right)^{T} = \left(\left(\psi_{\uparrow}, \psi_{\downarrow}\right), \left(-\psi_{\downarrow}^{\dagger}, \psi_{\uparrow}^{\dagger}\right)\right)^{T}
$$
$$
\Rightarrow \left(\left(\psi_{\uparrow}^{\dagger}, \psi_{\downarrow}^{\dagger}\right), \left(\psi_{\downarrow}, -\psi_{\uparrow}\right)\right)\xi_{p}\left(\left(\psi_{\uparrow}, \psi_{\downarrow}\right), \left(-\psi_{\downarrow}^{\dagger}, \psi_{\uparrow}^{\dagger}\right)\right)^{T} = \left(\psi_{\uparrow}^{\dagger}, \psi_{\downarrow}^{\dagger}\right)\xi_{p}\left(\psi_{\uparrow}, \psi_{\downarrow}\right)^{T} + \left(\psi_{\downarrow}, -\psi_{\uparrow}\right)\xi_{p}\left(-\psi_{\downarrow}^{\dagger}, \psi_{\uparrow}^{\dagger}\right)^{T} = \psi_{\uparrow}^{\dagger}\xi_{p}\psi_{\uparrow} + \psi_{\downarrow}^{\dagger}\xi_{p}\psi_{\downarrow} - \psi_{\downarrow}\xi_{p}\psi_{\downarrow}^{\dagger} -\psi_{\uparrow}\xi_{p}\psi_{\uparrow}^{\dagger}
$$
Now I use the anticommutator relation $\{\psi_{\sigma}, \psi^{\dagger}_{\sigma^{\prime}}\} = \delta_{\sigma,\sigma^{\prime}} \Leftrightarrow \psi_{\sigma}\psi^{\dagger}_{\sigma} = 1 - \psi_{\sigma}^{\dagger}\psi_{\sigma}$
$$
\Leftrightarrow 2\psi_{\uparrow}^{\dagger}\xi_{p}\psi_{\uparrow} + 2\psi_{\downarrow}^{\dagger}\xi_{p}\psi_{\downarrow} - 2\xi_{p}
$$
However, the term $-2\xi_{p}$ here is wrong.
For the third term I obtain
$$
\psi_{\uparrow}^{\dagger}B\sigma_{z}\psi_{\uparrow} + \psi_{\downarrow}^{\dagger}B\sigma_{z}\psi_{\downarrow} + \psi_{\downarrow}B\sigma_{z}\psi_{\downarrow}^{\dagger} +\psi_{\uparrow}B\sigma_{z}\psi_{\uparrow}^{\dagger} = \psi_{\uparrow}^{\dagger}B\sigma_{z}\psi_{\uparrow} + \psi_{\downarrow}^{\dagger}B\sigma_{z}\psi_{\downarrow} - \psi_{\downarrow}^{\dagger}B\sigma_{z}\psi_{\downarrow} -\psi_{\uparrow}^{\dagger}B\sigma_{z}\psi_{\uparrow} + 2B\sigma_{z} = 2B\sigma_{z}
$$
Does anybody see my mistake?
Answer: I define $H\sim\psi_{\alpha}^{\dagger}\xi_{\alpha\beta}\psi_{\beta}$,
where I omit the sums/integrals and all that boring stuff. I also
define $\xi_{\alpha\beta}\equiv\boldsymbol{\xi}\cdot\boldsymbol{\sigma}=\xi_{0}+\xi_{x}\sigma_{x}+\xi_{y}\sigma_{y}+\xi_{z}\sigma_{z}$
to have the most generic one-body Hamiltonian written in a compact
form. The one body Hamiltonian then reads, in matrix notation
$$
H\sim\left(\begin{array}{cc}
\psi_{\uparrow}^{\dagger} & \psi{}_{\downarrow}^{\dagger}\end{array}\right)\left(\begin{array}{cc}
\xi_{0}+\xi_{z} & \xi_{x}-\mathbf{i}\xi_{y}\\
\xi_{x}+\mathbf{i}\xi_{y} & \xi_{0}-\xi_{z}
\end{array}\right)\left(\begin{array}{c}
\psi_{\uparrow}\\
\psi_{\downarrow}
\end{array}\right)
$$
as can easily be checked.
One now wants to add the particle-hole double space (Nambu space).
One uses that (the anti-commutation relation)
$$
\psi_{\alpha}^{\dagger}\xi_{\alpha\beta}\psi_{\beta}=-\xi_{\alpha\beta}\psi_{\beta}\psi{}_{\alpha}^{\dagger}+\delta_{\alpha\beta}\xi_{\alpha\beta}=-\psi_{\beta}\left(\xi_{\alpha\beta}\right)^{T}\psi{}_{\alpha}^{\dagger}+\delta_{\alpha\beta}\xi_{\alpha\beta}
$$
and you get the unavoidable trace over the one-body energy. This nevertheless renormalizes your energy in a standard way, and one usually drops this extra term. We thus get
$$
H\sim\dfrac{1}{2}\left(\begin{array}{cccc}
\psi{}_{\uparrow}^{\dagger} & \psi{}_{\downarrow}^{\dagger} & \psi_{\uparrow} & \psi_{\downarrow}\end{array}\right)\left(\begin{array}{cc}
\boldsymbol{\xi}\cdot\boldsymbol{\sigma} & -\mathbf{i}\sigma_{y}\Delta\\
\mathbf{i}\sigma_{y}\Delta & -\left(\boldsymbol{\xi}\cdot\boldsymbol{\sigma}\right)^{T}
\end{array}\right)\left(\begin{array}{c}
\psi_{\uparrow}\\
\psi_{\downarrow}\\
\psi{}_{\uparrow}^{\dagger}\\
\psi{}_{\downarrow}^{\dagger}
\end{array}\right)-\dfrac{1}{2}\text{Tr}\left\{ \boldsymbol{\xi}\cdot\boldsymbol{\sigma}\right\}
$$
in a mixed notation (block matrix in the middle, full vectors on the
edge). Note the only important thing here: $\left(\boldsymbol{\xi}\cdot\boldsymbol{\sigma}\right)^{T}=\left(\boldsymbol{\xi}\cdot\boldsymbol{\sigma}\right)^{\ast}$, and only the $\sigma_{y}$ component changes sign (look at your $\alpha p\sigma_{y}\tau_{z}$ term in the BdG Hamiltonian).
Your ordering convention is found by an obvious change of basis from
mine. Then you choose a representation for the tensor product and
you're done. Once again, you cannot avoid the final trace term, but most people forget to discuss it. It plays almost no role, except when you want to describe some effects related to the superconducting phase transition (for instance, you need it to correctly write the free energy).
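As a numerical sanity check of the transposition identity and the tensor-product form above (a sketch of my own; the sample parameter values are arbitrary):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# only sigma_y changes sign under transposition (it is purely imaginary),
# which is the key fact used for the alpha*p*sigma_y (x) tau_z term
assert np.allclose(sx.T, sx) and np.allclose(sz.T, sz)
assert np.allclose(sy.T, -sy)

# BdG matrix: xi 1(x)tau_z + alpha*p sigma_y(x)tau_z + B sigma_z(x)1 + Delta 1(x)tau_x
xi, alpha_p, B, Delta = 0.3, 0.7, 0.2, 0.5   # arbitrary sample numbers
H = (xi * np.kron(s0, sz) + alpha_p * np.kron(sy, sz)
     + B * np.kron(sz, s0) + Delta * np.kron(s0, sx))
assert np.allclose(H, H.conj().T)   # Hermitian, as it must be
```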
One more thing: the Hamiltonian you gave is a bit famous at the moment
for hosting Majorana fermions. If you diagonalise the spin-part, you
end up with a $p$-wave effective superconductivity at low energy. | {
"domain": "physics.stackexchange",
"id": 9517,
"tags": "homework-and-exercises, condensed-matter, superconductivity, anticommutator"
} |