| anchor | positive | source |
|---|---|---|
For an NFA, can we always find a RAM? | Question: For an NFA, can we always find a RAM, which recognises the same language?
Answer: If RAM is a Random Access Machine (i.e., a rudimentary computer with registers, memory and assorted instructions), the answer is just "build a DFA that recognizes the same language, simulate that DFA in code". I.e., have a transition table that tells you the next state for each combination of state and input symbol; start in the start state, check if the state after consuming all input is final.
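The simulation described above can be sketched in a few lines. Here the DFA's states are sets of NFA states, computed on the fly (the subset construction); the example NFA is an invented illustration, not part of the original answer — it accepts binary strings whose second-to-last symbol is 1:

```python
# An example NFA (invented for illustration): accepts binary strings
# whose second-to-last symbol is 1.
nfa = {  # state -> input symbol -> set of possible next states
    "q0": {"0": {"q0"}, "1": {"q0", "q1"}},
    "q1": {"0": {"q2"}, "1": {"q2"}},
    "q2": {},  # dead end unless the word stops here
}
start, accepting = "q0", {"q2"}

def accepts(word):
    """Run the NFA deterministically by tracking the set of reachable
    states -- the subset construction, computed on the fly."""
    states = {start}
    for symbol in word:
        states = {nxt for s in states
                  for nxt in nfa.get(s, {}).get(symbol, set())}
    return bool(states & accepting)

print(accepts("0110"), accepts("0101"))  # True False
```

Tabulating `states` for every reachable set and input symbol would give the explicit DFA transition table the answer mentions.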
More abstractly: RAM is equivalent in computing power to a Turing machine. As regular languages are decidable, they can be decided by a (deterministic) Turing machine or a RAM. | {
"domain": "cs.stackexchange",
"id": 16748,
"tags": "complexity-theory, formal-languages, computability, finite-automata, discrete-mathematics"
} |
Invariance of Maxwell equations | Question: Is there an easy way to show that the Maxwell equations
$$
\partial_\alpha F_{\beta\gamma} + \partial_\gamma F_{\alpha\beta} + \partial_\beta F_{\gamma\alpha} = 0
$$
are invariant under a Lorentz transformation:
$$\overline x^\mu = \Lambda^\mu_{\phantom\mu\nu}x^\nu$$
$$\overline F^{\mu\nu} = \Lambda^\mu_{\phantom\mu\alpha}\Lambda^\nu_{\phantom\nu\beta}F^{\alpha\beta}$$
Unfortunately my lecture notes are incomplete and I haven't found any other source.
A straightforward attack does not help; I think some tricks are necessary. I am searching for a proof that uses only the transformation rules, nothing more... (please without the Hodge star operator)
Answer: As ACuriousMind said in a comment above, such an equation is manifestly covariant. Suppose the homogeneous Maxwell equations hold in the Lorentz transformed coordinates, i.e. that
$$\bar\partial_\alpha \bar F_{\beta\gamma} + \bar\partial_\gamma \bar F_{\alpha\beta} + \bar \partial_\beta \bar F_{\gamma\alpha} = 0$$
Lorentz transformations are global, so the representation matrices are constant and we may pull them out from under the derivatives. Then we have
$$\Lambda_\alpha{}^\mu\Lambda_\beta{}^\nu\Lambda_\gamma{}^\rho \partial_\mu F_{\nu\rho}+\text{cyclic permutations}=0$$
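As a side check (not part of the original answer), this transformation behaviour can be tested numerically: build $\partial_\mu F_{\nu\rho}$ from arbitrary second derivatives of a potential, so that the cyclic identity holds by construction, boost all three indices, and confirm the cyclic combination still vanishes. The boost speed and the random potential are arbitrary choices:

```python
import random

random.seed(0)
N = 4  # spacetime dimensions

# H[rho][mu][nu] plays the role of d_mu d_nu A_rho: symmetric in (mu, nu)
H = [[[0.0] * N for _ in range(N)] for _ in range(N)]
for r in range(N):
    for m in range(N):
        for n in range(m, N):
            H[r][m][n] = H[r][n][m] = random.uniform(-1, 1)

# T[mu][nu][rho] = d_mu F_{nu rho} with F_{nu rho} = d_nu A_rho - d_rho A_nu
T = [[[H[r][m][n] - H[n][m][r] for r in range(N)]
      for n in range(N)] for m in range(N)]

def max_cyclic(T):
    # Largest |T_{abc} + T_{cab} + T_{bca}| over all index triples
    return max(abs(T[a][b][c] + T[c][a][b] + T[b][c][a])
               for a in range(N) for b in range(N) for c in range(N))

# Boost along x with v = 0.6c, acting on the covariant indices
g, gb = 1.25, 0.75  # gamma and gamma*beta for beta = 0.6
L = [[g, -gb, 0, 0], [-gb, g, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# T'_{abc} = L_a^mu L_b^nu L_c^rho T_{mu nu rho}
Tp = [[[sum(L[a][m] * L[b][n] * L[c][r] * T[m][n][r]
            for m in range(N) for n in range(N) for r in range(N))
       for c in range(N)] for b in range(N)] for a in range(N)]

print(max_cyclic(T), max_cyclic(Tp))  # both vanish up to rounding
```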
Lorentz transformations are invertible, so now multiply by the three inverse matrices to obtain
$$\partial_\mu F_{\nu\rho}+\text{cyclic permutations}=0$$
which shows that the equation is Lorentz covariant. | {
"domain": "physics.stackexchange",
"id": 21033,
"tags": "homework-and-exercises, special-relativity, maxwell-equations"
} |
Simple console JSON formatter | Question: I am writing a simple formatter for valid JSON. It reads JSON data from stdin and writes formatted output to stdout.
Goals:
given valid input produce valid formatted JSON output
constant memory usage
minimal viable program (smallest program that solves a particular problem)
feature free
easy to read and understand
be C99 compatible
Non-goals:
validating JSON
handling arguments
I am not aiming at JSON validation right now (as said, a "formatter for valid JSON").
I tried adding arguments (like setting placeholder string with -p) but faced several issues:
unclear what arguments must be implemented
I don't feel comfortable yet writing argument parsing code.
So I skipped these for now.
Latest file in GitHub: https://github.com/sineemore/juu/blob/master/juu.c
#include <stdio.h>
#include <stdlib.h> /* EXIT_SUCCESS */

#define BUF_SIZE (1024 * 4)

int main(int argc, char **argv) {
    char buf[BUF_SIZE];
    const char placeholder[] = " ";
    unsigned int indent = 0;
    unsigned int i = 0;
    unsigned int k = 0;
    char is_string = 0;
    char escaped = 0;
    char ch;
    size_t n;

    while (0 < (n = fread(&buf, sizeof(char), BUF_SIZE, stdin))) {
        for (k = 0; k < n; k++) {
            ch = buf[k];

            if (is_string) {
                /* Inside quoted string */
                putchar(ch);
                if (! escaped) {
                    if (ch == '"') {
                        /* Unescaped quote, string just ended */
                        is_string = 0;
                    } else if (ch == '\\') {
                        escaped = 1;
                    }
                } else {
                    escaped = 0;
                }
                continue;
            }

            switch (ch) {
            case ' ':
            case '\t':
            case '\n':
            case '\r':
                /* Ignoring original formatting */
                break;
            case '{':
            case '[':
                putchar(ch);
                putchar('\n');
                i = ++indent;
                while (i-- > 0) fputs(placeholder, stdout);
                break;
            case '}':
            case ']':
                putchar('\n');
                i = --indent;
                while (i-- > 0) fputs(placeholder, stdout);
                putchar(ch);
                if (indent == 0) putchar('\n');
                break;
            case ',':
                putchar(',');
                putchar('\n');
                i = indent;
                while (i-- > 0) fputs(placeholder, stdout);
                break;
            case ':':
                putchar(':');
                putchar(' ');
                break;
            case '"':
                /* String/property key start, see if clause on top (line 20) */
                putchar('"');
                is_string = 1;
                break;
            default:
                /* Numbers, true, false, null */
                putchar(ch);
                break;
            }
        }
    }

    return EXIT_SUCCESS;
}
Example output:
$ wget -qO- 'https://xkcd.com/info.0.json' | juu
{
 "month": "4",
 "num": 1979,
 "link": "",
 "year": "2018",
 "news": "",
 "safe_title": "History",
 "transcript": "",
 "alt": "HISTORIANS: We've decided to trim the past down to make things more manageable. Using BCE/CE, would you rather we lose the odd-numbered or even-numbered years?",
 "img": "https://imgs.xkcd.com/comics/history.png",
 "title": "History",
 "day": "11"
}
UPDATE:
I missed ALL ferror() calls, fixing this part.
Also program doesn't handle SIGPIPE, therefore it may be killed. Just tested it. I don't see a clear solution, should I set SIG_IGN?
Answer: We'll have a look at your code from the top to the bottom. The proper indentation makes that easy.
Magic numbers and defines
First of all, it's great that you've used BUF_SIZE instead of magic numbers, e.g.
char buf[1024 * 4]; // bad!
However, #defines can be error prone. You've used parentheses, which are often necessary. Furthermore, BUF_SIZE doesn't exist in your compiled program anymore, which can lead to some confusion if you want to debug. So consider the possible alternatives. In this case, a #define is fine. But there is no need for abbreviation:
//! Buffer size for reading from stdin.
#define BUFFER_SIZE (1024 * 4)
While we're at it, add some documentation. BUFFER_SIZE and buf[BUFFER_SIZE] are only some lines away from each other, but that might change later. The ! after // is Doxygen specific, you can ignore it if you don't use Doxygen.
Declarations and initializations
You use C99 and therefore can declare variables as late as you want. Whenever you declare a variable but only set it much later, try to rewrite the declaration as an initialization at the right point. For example, ch isn't used until ch = buf[k]. We should keep its scope limited; that way we cannot accidentally reuse variables.
If we follow this suggestion then i, k and ch get limited in their scope. We will have a look at that later, though. And since we already renamed BUF_SIZE to BUFFER_SIZE, we could also rename buf to buffer. You can of course choose other names, but again: there is no need to abbreviate. Disk space isn't expensive anymore, so choose names that you still understand after several months or years when someone calls you in the middle of the night.
Input and sizeof usage
fread(..., ..., SIZE, ...) may not return SIZE. That can happen either when you're at the end of the file or when an error occurs. You should check the FILE* with feof or ferror in that case.
Staying with fread: while it's unlikely that you'll change the buffer's type, it's usually good practice to use sizeof(*buffer) or sizeof(buffer[0]). If you ever change char buffer[BUFFER_SIZE] to mychar buffer[BUFFER_SIZE], you don't have to remember to change sizeof(char) to sizeof(mychar).
State machines and repetition
Your loop is essentially a state machine. The state machine itself looks fine. However, there is a lot of repetition. We have
while (i-- > 0) fputs(placeholder, stdout);
four times. That really asks for a function:
/**
 * \brief Puts the given \c str \c count times on \c stream.
 * \param str null-terminated character string to be written
 * \param count number of times \c str shall be written
 * \param stream output stream
 * \returns a non-negative value on success
 * \returns EOF on error and sets the error indicator
 */
static inline int fputs_repeat(const char * str, size_t count, FILE * stream) {
    int value = 0;
    while (count-- > 0) {
        value = fputs(str, stream);
        if (value == EOF) {
            return EOF;
        }
    }
    return value;
}
Now we can just use fputs_repeat(placeholder, indent, stdout) wherever you've used while (i-- > 0) .... We would now end up with the following variant:
#include <stdio.h>
#include <stdlib.h> /* EXIT_SUCCESS */

#define BUFFER_SIZE (1024 * 4)

static inline int fputs_repeat(const char * str, size_t count, FILE * stream) {
    int value = 0;
    while (count-- > 0) {
        value = fputs(str, stream);
        if (value == EOF) {
            return EOF;
        }
    }
    return value;
}

int main(int argc, char **argv) {
    char buffer[BUFFER_SIZE] = {0};
    const char placeholder[] = " ";
    unsigned int indent = 0;
    char is_string = 0;
    char escaped = 0;
    size_t n;

    while (0 < (n = fread(&buffer, sizeof(buffer[0]), BUFFER_SIZE, stdin))) {
        // exercise: add error handling
        for (unsigned int k = 0; k < n; k++) {
            char ch = buffer[k];

            if (is_string) {
                /* Inside quoted string */
                putchar(ch);
                if (! escaped) {
                    if (ch == '"') {
                        /* Unescaped quote, string just ended */
                        is_string = 0;
                    } else if (ch == '\\') {
                        escaped = 1;
                    }
                } else {
                    escaped = 0;
                }
                continue;
            }

            switch (ch) {
            case ' ':
            case '\t':
            case '\n':
            case '\r':
                /* Ignoring original formatting */
                break;
            case '{':
            case '[':
                putchar(ch);
                putchar('\n');
                fputs_repeat(placeholder, ++indent, stdout);
                break;
            case '}':
            case ']':
                putchar('\n');
                fputs_repeat(placeholder, --indent, stdout);
                putchar(ch);
                if (indent == 0) putchar('\n');
                break;
            case ',':
                putchar(',');
                putchar('\n');
                fputs_repeat(placeholder, indent, stdout);
                break;
            case ':':
                putchar(':');
                putchar(' ');
                break;
            case '"':
                /* String/property key start, see if clause on top */
                putchar('"');
                is_string = 1;
                break;
            default:
                /* Numbers, true, false, null */
                putchar(ch);
                break;
            }
        }
    }

    return EXIT_SUCCESS;
}
Output
You use putchar quite often. In some instances multiple calls can get replaced by puts or fputs. For example, instead of
putchar(',');
putchar('\n');
you could just use
puts(",");
and instead of
putchar(':');
putchar(' ');
you could use
fputs(": ", stdout);
Either way, if you're striving for performance, you want to keep the number of function calls low, so consider an output buffer if your current variant isn't fast enough for your liking. But first measure your program before you change it.
Goals
Let's revisit your goals and check them now.
given valid input produce valid formatted JSON output
Since you don't introduce additional characters in strings and never remove characters from the original JSON except whitespace (outside of strings), you've reached that goal.
constant memory usage
As there is only a single buffer, you've reached that goal too, although fputs might buffer.
minimal viable program (smallest program that solves a particular problem)
Ah, that's a definition problem. What's minimal? What's "smallest"? Your program is short, and the single additional function removed some repetition that would otherwise become technical debt. That function won't noticeably increase the size of your program, either.
feature free
Check.
easy to read and understand
The ! escaped logic took a little bit, but apart from that, goal reached.
be C99 compatible
Yes, but use those features I've mentioned above (inline functions, late declarations).
Also program doesn't handle SIGPIPE, therefore it may be killed. Just tested it. I don't see a clear solution, should I set SIG_IGN?
If the input pipe is broken, the JSON input will suddenly end and you have invalid JSON. Do you need to handle invalid JSON at that point? It's a non-goal, as you said. | {
"domain": "codereview.stackexchange",
"id": 30197,
"tags": "c, json, formatting, c99"
} |
File format of substitution matrix in clustalw | Question: I need to set the substitution matrix used by command line CLUSTALW when comparing DNA sequences to:
0 -1 -1 -1
-1 0 -1 -1
-1 -1 0 -1
-1 -1 -1 0
from my understanding I need to pass the location of a file with the matrix to the -dnamatrix parameter, however, I cannot find out what format the file is supposed to be in. Any idea?
Answer: This example seems to have information on this; it looks like the file should have this format:
A G C T U *
A 1 -1 -1 -1 -1 -1
G -1 1 -1 -1 -1 -1
C -1 -1 1 -1 -1 -1
T -1 -1 -1 1 -1 -1
U -1 -1 -1 -1 1 -1
* -1 -1 -1 -1 -1 -1
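Generating the asker's 0/-1 matrix in that layout is easy to script. A sketch — the exact layout ClustalW expects is an assumption inferred from the example above, as is the choice to give U a 0 on the diagonal and score the `*` row as -1 everywhere:

```python
# Build the asker's 0/-1 DNA matrix in the row/column-labelled layout
# shown above (layout and the U/* scores are assumptions, not confirmed
# ClustalW requirements).
symbols = "AGCTU*"
lines = ["  " + "  ".join(symbols)]
for r in symbols:
    cells = [" 0" if (r == c and r != "*") else "-1" for c in symbols]
    lines.append(r + " " + " ".join(cells))
matrix_text = "\n".join(lines) + "\n"
print(matrix_text)
# e.g. save with open("dna.mat", "w").write(matrix_text),
# then pass the file via -dnamatrix (filename is arbitrary)
```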
However, the page I linked is specifically about a bug regarding ClustalW ignoring custom made matrices. It's a pretty old post so it might have been fixed since. However, if you encounter that bug, that same page thankfully does contain a workaround! | {
"domain": "bioinformatics.stackexchange",
"id": 1772,
"tags": "sequence-alignment, multiple-sequence-alignment, substitution-model"
} |
How to subscribe to published topics in Baxter SDK environment | Question:
I have a C++ program that publishes three topics.
The rostopic list command displays the list of ROS topics in the terminal. After running the code that publishes the topics, the command shows my topics when run in the workspace normally; but when I initialize my Baxter environment using the ./baxter.sh command and run the same command, it does not show the topics in the terminal.
The rostopic echo <name of topic> command shows the value of a topic. When I run it in the workspace normally, it displays the values of whichever topic I chose; but when I initialize my Baxter environment using the ./baxter.sh command and run the same command, it does not display the values of the topics in the terminal.
I also have a Python program that subscribes to the published topics, does some calculations on their values, and then displays the result in the terminal. When I run the program in my workspace normally, it displays the results as it is supposed to; but when I use the ./baxter.sh command to initialize the Baxter environment, just as above, the code doesn't display the values.
I don't get any error messages; it just does not display the published topics when I initialize my Baxter environment. I want to know why I can't see or subscribe to the published topics after I initialize my Baxter environment, and also how to fix this problem.
I am using ROS indigo on Ubuntu 14.04.
Also, after I execute the C++ program, if I use the rosnode list command in the terminal, the name of the node is shown; but when I initialize my Baxter environment using ./baxter.sh and try the same command, the node doesn't show up in the terminal.
Originally posted by fundamentals on ROS Answers with karma: 11 on 2017-03-09
Post score: 0
Original comments
Comment by gvdhoorn on 2017-03-09:
"Do not work" and "do not show" are too vague. Please tell us exactly what it is that you're doing. Include messages that you see on the console when you start your programs and perhaps show some samples of code that you're using. Also: which version of ROS, which OS, how installed, etc.
Comment by fundamentals on 2017-03-09:
@gvdhoorn I tried making some edits to the question
Comment by biorobotics on 2020-04-17:
Hello, I have the same problem as you. I'm using Python and also Ubuntu 14.04.
Did you solve the problem? Could you show me how you solved it? :)
Answer:
It seems that you're having trouble with your ROS networking and the associated Linux environment variables that allow bi-directional ROS communication. The ./baxter.sh script is a convenience script to help you set these variables. Please see this tutorial from Rethink on how to set up the baxter.sh script.
If you'd like a deeper dive into Baxter ROS networking, Rethink has a Networking Page that describes the ROS Naming conventions according to your networked environment. Finally, it might be useful to check out the ROS Environment variables page, looking at ROS_MASTER_URI and ROS_IP / ROS_HOSTNAME in particular.
Originally posted by imcmahon with karma: 790 on 2017-03-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by fundamentals on 2017-03-13:
@imcmahon I have tried the instructions in the links but none of them worked | {
"domain": "robotics.stackexchange",
"id": 27256,
"tags": "ros, baxter, topics"
} |
Significant figures | Question: I have a question about the significant figures topic:
Problem:
Find the volume of a cylinder having a diameter measured as 2.3 ft and length measured as 8.25 ft.
Solution:
V = π(D^2/4)h = π(2.3)^2(8.25)/4 = 34.276739 cu ft ≈ 34 cu ft
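The arithmetic can be checked directly (a sketch, not from the workbook; the `sig_round` helper is an ad-hoc name):

```python
from math import floor, log10, pi

V = pi * 2.3**2 * 8.25 / 4      # volume of the cylinder, cubic feet

def sig_round(x, n):
    """Round x to n significant figures (ad-hoc helper)."""
    return round(x, n - 1 - floor(log10(abs(x))))

print(round(V, 6))      # 34.276739
print(sig_round(V, 2))  # 34.0, reported as 34 cu ft
```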
Question:
In my workbook is explained that the final answer is rounded to two significant figures because the quantity 2.3 has two significant figures.
Why would I have to round to two significant figures if the quantity 8.25 has three significant figures?
Answer: You can only assume the accuracy of the least precise component of the input. So if you have elements with only 2 significant figures, you cannot assert a higher level of precision. | {
"domain": "engineering.stackexchange",
"id": 1758,
"tags": "civil-engineering"
} |
Force of photons from the Sun hitting a football field = weight of 1 dime? | Question: I read, I think, some time ago that the "weight" of photons from the Sun hitting an area the size of a football field at noon on a sunny day would be about the "weight" of a dime?
Would appreciate it if someone could flesh that out and verify whether it's correct or false.
Answer: Photons are massless so their weight is 0. However, photons do have momentum so they can exert force. This force is due to their momentum and would occur even in the absence of gravity, so it is not a weight.
The solar irradiance during peak hours is approximately $1000 \mathrm{ \ W \ m^{-2}}$ and the size of a football field is about $7200 \mathrm{ \ m^2}$ for a total radiant power of $7.2 \mathrm{ \ MW}$. Since $p=E/c$ and $F=\frac{dp}{dt}$ we get that the force from this energy is $(7.2 \mathrm{\ MW})/c = 0.024 \mathrm{\ N}$.
In comparison, a dime has a mass of $2.268 \mathrm{\ g}$ which on the earth turns into a gravitational force, or weight, of $0.022 \mathrm{\ N}$.
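Plugging in the answer's numbers confirms the comparison (a sketch; the constants are the rounded values quoted above):

```python
c = 2.998e8                 # speed of light, m/s
P = 1000 * 7200             # 1000 W/m^2 irradiance times 7200 m^2 field area
F_light = P / c             # from p = E/c, so F = dp/dt = P/c
F_dime = 2.268e-3 * 9.81    # weight of a 2.268 g dime on Earth, N
print(round(F_light, 4), round(F_dime, 4))  # 0.024 0.0222
```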
So the force of the sunlight on a football field during peak solar hours is close to the weight of a dime. | {
"domain": "physics.stackexchange",
"id": 85519,
"tags": "homework-and-exercises, electromagnetic-radiation, photons, momentum, thermal-radiation"
} |
Which water is easier to filter? Rain water or tap water | Question: I understand that water quality greatly varies depending on the region, and that filtration differs as well. Is it better to filter rain water or tap water from contamination like hormones, pesticides, and cloud seeding, which are hard to filter? Is there a better place/SE to ask this?
Answer: First thing: all surface water and high ground water started out as evaporated ocean water that condensed on a nanoparticle, scrubbed the air as the droplet formed and fell, landed, extracted whatever it flowed on, and eventually collected temporarily in streams, ponds, lakes, and rivers, picking up ions, biomass, and possibly suspended solids and exchanging gases on its way back to the oceans. This ignores industrial pollution, which can be pervasive. [Remember how everything cleared up at the start of the pandemic.]
Each phase has its own purification problems. First step is an analysis. The least complicated is rainwater collected and settled; then new surface water, Municipal sources [tap water] usually have some pretreatment and usually supply an analysis; then high-level ground water such as springs, then shallow wells, deeper wells; Finally sinks such as the Dead Sea, the Great Salt Lake, the ocean, and sewage [space station].
A rain water story: I taught high school chemistry for 2 years at a school 2 blocks from Kodak Park, Eastman Kodak's industrial complex. While in class, a sudden thunderstorm caused a deluge and everyone said, "Let's check it for acid rain". A girl sitting next to the window held out a bucket to collect water; after several seconds she screamed and pulled her arm in! There were black tendrils hanging all over it. We quickly wiped it off, got her to the girls' room to wash her arm, and had the nurse check her out; no irritation. We never checked for acid rain or had the substance analysed. | {
"domain": "chemistry.stackexchange",
"id": 17131,
"tags": "everyday-chemistry, bond, water, filtering"
} |
Set the initial state of the QuantumCircuit using the Statevector object | Question: Suppose we have a 4-qubit quantum state, state, which is a Statevector object:
>>> type(state)
qiskit.quantum_info.states.statevector.Statevector
I would like to initialize a 6-qubit QuantumCircuit, such that registers 0, 1, 3, 4 are initialized in state, something like this:
qc = QuantumCircuit(6)
qc.initialize(state, [0, 1, 3, 4])
Now, the initialize method takes a list of complex amplitudes as an input. However, is there a way to feed it state directly, without converting it into a list?
Answer: You can do this with QuantumCircuit.prepare_state or QuantumCircuit.initialize.
I'll show a simple example.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector
state = Statevector.from_label("+-rl")
qc = QuantumCircuit(4)
qc.prepare_state(state, [0, 1, 2, 3])
Here I've constructed my Statevector object using Statevector.from_label but you should be able to construct it however you like and plug it into QuantumCircuit.prepare_state.
QuantumCircuit.initialize resets all qubits to the $|0\rangle$ state before state preparation.
QuantumCircuit.prepare_state does not add resets. | {
"domain": "quantumcomputing.stackexchange",
"id": 5217,
"tags": "qiskit, programming"
} |
"long code test" and "dictatorship test" | Question: Why is "long code test" also called "dictatorship test"?
I got really confused when I read about it in Arora's survey.
Answer: "Long code" and "Dictator code" are two different names for the same code. Here's why:
Let's start with the natural definition of the long code: The message is an $n$-bit string $i$ (the reason I'm calling this $i$ and not $x$ becomes clear soon) and the encoding is a vector $y$ of length $2^{2^n}$ where the positions of $y$ are indexed by all possible Boolean functions that map $n$ bits to one bit. Moreover, $y$ at position $f$ is simply $f(i)$.
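For concreteness, here is the long code at n = 2 (a sketch, not from the original answer); it also exhibits the truth-table identification described next, since one enumeration of length-N bit strings serves as both the set of truth tables and the set of evaluation points:

```python
from itertools import product

n = 2
N = 2 ** n
# Each length-N tuple is both the truth table t of a Boolean function f
# on n bits and an evaluation point z for a function on N bits.
strings = list(product([0, 1], repeat=N))

def long_code(i):
    # position f (given by its truth table t) stores f(i) = t[i]
    return [t[i] for t in strings]

def dictator_code(i):
    # truth table of the i-th dictator on N bits: g(z) = z_i
    return [z[i] for z in strings]

for i in range(N):  # messages are n-bit strings, read as indices 0..N-1
    assert long_code(i) == dictator_code(i)
print("codeword length:", len(long_code(0)))  # 2^(2^n) = 16
```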
Now you can alternatively identify $f$ by its truth table. So you can also assume that the coordinate positions of $y$ are indexed by all possible strings of length $N := 2^n$ (i.e., all possible truth tables) and then $y$ at position $f=(f_1, \ldots, f_N)$ just encodes $f_i$. So the $i$th codeword is the truth table of the $i$th dictator function (among the dictator functions mapping $N$ bits to one bit). So the long code and the dictator code are the same. In the long code interpretation, codewords correspond to $n$-bit points and codeword positions correspond to $n$-bit predicates. In the dictator code interpretation, codewords correspond to dictator functions on $2^n$ bits and codeword positions correspond to evaluation points. | {
"domain": "cstheory.stackexchange",
"id": 1085,
"tags": "cc.complexity-theory, co.combinatorics, boolean-functions, coding-theory"
} |
The Significance of Electromotive Force Being Non-Conservative in Origin | Question: Taking the battery as an example. Intuitively, I understand that without the chemical forces that continuously maintains a voltage difference between the two terminals, the current will stop because, eventually, the two terminals of the battery will obtain the same voltage.
However, most textbooks I have read emphasize that the source of an EMF must be non-electrostatic (or non-conservative) in origin. From my understanding, as stated above, I don't see a connection between the chemical forces that maintain a voltage difference between two battery terminals and that they must be non-electrostatic (or non-conservative). It seems to me that the mechanism of maintaining a voltage difference (in this case, the chemical forces) still works just fine without the requirement of the forces being non-conservative.
My question is: Why is the fact that the source of an EMF being non-conservative important? I'd prefer an explanation in the context of my understanding of EMF, which is stated in the first paragraph. Also, please limit the explanation to classical EM only.
If my understanding is missing something, which might have rendered my question to appear "nonsense", please help me correct my misconception(s).
Answer: There is a lot of interesting stuff going on behind your textbooks' emphasis.
A conservative force is one which can be described as the gradient of a potential function. We speak of gravitational and electrostatic potential because gravity and static electricity produce conservative forces. This does mean that the total change in potential around any wire loop must be zero, but just because the total work done on a test charge going around a loop is zero does not mean that all of the forces on the test charge are conservative.
Within the battery, charged ions are flowing from a terminal where they have low potential energy to one where they have high potential energy. So they must be being pushed by some other force, one which is not associated with a potential--a "non-conservative force". This is the observation which your textbooks are emphasizing, and it is a fact that can be fairly hard to wrap your brain around at first. I know it was for me.
The non-conservative force at work inside an acid battery is the random jostling of the liquid molecules surrounding the charged ions. The positive ions are being electrostatically pushed toward the cathode, but for chemical reasons they are being produced faster than they are being absorbed at the cathode. At the anode, they are being consumed faster than produced, so the net effect of the jostling is to move positive ions "uphill" to the anode. Analogously, negative ions are jostled "uphill" to the cathode. So, in a very non-literal sense, the second law of thermodynamics is creating the EMF.
Pretty crazy, right?
PS. The resistance of the wire is a second non-conservative force, which exactly undoes the work done by the random jostling in the battery. This makes the total work done on a test charge going around the circuit zero, even though there are non-conservative forces at work in addition to the conservative electrostatic force. | {
"domain": "physics.stackexchange",
"id": 46965,
"tags": "electromagnetism"
} |
Correcting for bad eyesight on display monitors | Question: I think it would be neat if one could configure one's eyesight parameters (astigmatism and myopia in my case), viewing distance, and perhaps age into a special display driver, such that a computer would present its user interface distorted (from the view of a normal-sighted person) in such a way that the bad eye, by applying its own "distortion", would essentially un-distort it so that the brain would receive a sharp view without the need for physical glasses or lenses.
I suspect though that this is not possible, partly on grounds of limited information. If the error in eyesight were linear (e.g. a shift to the left or an enlargement), an appropriate distortion seems trivial. But a realistic error (hence a realistic distortion) would lead to a situation where several image points get mixed into the same "pixel" (in the physical eye this would mean: hit the same receptor), and disentangling those would require a-priori knowledge about the original image, which is not always available (it could be available for regular window shapes, or moving images -- this may lead to headache :) It's strange though, because my glasses are simple physical devices that a computer should be able to simulate, but perhaps not in the limited confines of the 2D surface of its display.
Now all of this is just a hunch and my question is this: is there a straightforward answer for what kinds of "invertible" distortions (in a mathematical sense) the imagined apparatus could work, and has the problem ever been formalized in a better way?
Answer: Unless you deal with holograms, all displays we are observing follow the laws of ray optics. So the display has some specific location – for example, rectangle between four points with certain coordinates – and all the light rays leaving from the points of the display carry the information about their origin because they are diverging from that point, and if one extrapolates several rays associated with the same "pixel" on the display, they actually intersect in the pixel that created the light.
The human eye, when properly focused, changes the direction of these divergent light rays by refraction so that they converge again and intersect on the retina. So a sharp pixel on the display produces a sharp pixel on the retina.
When it's not so, because of myopia, astigmatism, or any condition that prevents the eye from accurate focusing – and believe me, I also know something about it – it just means that the light rays emitted by one pixel on the display are attempting to "reconverge" but they don't intersect at one point of the retina.
Myopia means that the eye works a little too much in trying to "reconverge" the light rays. As a consequence, the intersection of the light rays from one pixel appears inside the liquid in the eye, before the rays reach the retina. And the signal on the retina is inevitably fuzzy – a disk of a sort – even though the source of the light is one pixel.
Hyperopia works in the other way around; the light rays try to "reconverge" but the refraction is too weak and the intersection would only occur "behind" the retina (in the brain) which doesn't physically occur because the photons are absorbed before they reach the intersection.
Astigmatism means that the light rays emitted into different directions horizontally and the light rays emitted into different directions vertically experience different amounts of refraction. So there's no "common intersection" of all the light rays coming from one display pixel at all. If you only consider light rays differing in the "vertical" direction, they intersect at one point; the light rays differing in the "horizontal" direction intersect at another distance from the retina (positive or negative). So one is more myopic for horizontal lines or more myopic for vertical lines or more farsighted for horizontal lines or... and so on.
If a display pixel emits light rays according to the laws of geometric optics – and it's true for CRT monitors, LCD panels, LED, OLED, Plasma TV, or anything else of this sort – the light rays simply carry the information about the actual distance and this information can't be fooled. The image on the retina is blurred and when it's blurred, it just can't ever be unblurred (without a modification of the eye's refraction, by glasses or lens etc.). This is a completely general claim (for generic images). Blurring is "irreversible" (in the same sense as the increasing-entropy phenomena in thermodynamics such as heat diffusion) because one is losing the detailed sharp information. Even if one tried to apply some "sharpening" tools, they would have to work within/near the eye, not on the display's side. What the unfocused eye sees is always "more blurry" (analogous to "higher entropy") and there's no way to produce sources of light from many pixels that would end up as one pixel on the retina, for example.
A normal display can't "fake" the actual point where the photons originate – in this respect, it's completely analogous to all other objects we routinely observe with our eyes. However, holograms can do it. You could construct holographic TVs based on the wave optics which can reconstruct the configuration of photons that is equivalent to an arbitrary arrangement of objects in 3D, between the plate and the human eye.
Note that holography isn't some cheap "3D effect" depending on the fact that we observe things with two eyes, from two different points. Instead, holography creates genuine 3D images, so the eyes must individually focus at various distances, depending on the distances of parts of the objects seen on the hologram, and one may actually see what the 3D object looks like from all directions (something certainly impossible for stereoscopic "3D" technologies that depend on having two images only).
In this way, you could create light rays that apparently originate from points that are closer to the human eye than the plate. This is still insufficient for astigmatism because astigmatism is an "inconsistency" in the required distance of the objects that the eye is able to see sharply.
However, in principle, it should be possible to produce holograms that may be seen sharply by an astigmatic eye, at least if the astigmatism is sufficiently "weak" or "perturbative". Some complicated calculation would have to be done to produce the right hologram that looks sharp to a general enough astigmatic eye. | {
"domain": "physics.stackexchange",
"id": 5682,
"tags": "optics, geometric-optics, vision"
} |
Phylogenetics and the Tree of Life | Question: As far as I understand, evolution is nowadays pretty much analyzed through phylogenetic trees, that is, cladograms. These are constructed from the available records by taking some key structures and deducing whether species acquired or lost them, thereby generating a node. In this approach, chronology is of utmost importance, but chronological time is not. Hence, cladograms are not 'Darwinian' evolutionary trees, as the latter are based on chronological time.
However, I cannot help myself when analyzing a cladogram and interpreting the accumulation of various nodes along a branch as moving through time. Isn't evolution all about time? At the risk of pitfalling myself into a 'this is a primarily opinion based question':
Isn't it just so that along a certain branch, each successive node has taken time to develop? Would it not be plausible to determine and assign a certain 'standard evolutionary time unit' (a SETU if you like) representing the average time it takes for any organism to lose or acquire a certain trait, thereby allowing each node to be interpreted as time? Of course, the SETU would be a gross estimate, but it could still potentially re-animate the Darwinian look on modern-day evolution.
Answer: Let's start with a short comment. You say [..]evolution is nowadays pretty much analyzed through phylogenetic trees[..]. Many evolutionary biologist make extensive use of phylogenetic methods but a good part of evolutionary biologist (most of them I would say) do not work directly with phylogenetic methods.
I am not sure I understand your question, but I hope the following may help you either to understand the issue or to edit your question. Note that I am NOT a phylogeneticist (not sure this word exists).
What is the definition of "Node" in Phylogenetics
I think your definition of a node might be unusual. In phylogeny, a node is the most recent common ancestor shared by two sister lineages.
Time in a phylogenetic tree
Also, when you look at a phylogenetic tree, the axis along which lineages diversify represent the time (real time in years). Now, we use different methods in order to make estimation of this real time such as the rate of neutral substitutions. You may want to read about molecular clock
each successive node has taken time to develop?
I am not sure what you mean by "develop" here. Develop usually means the process by which a given individual/organism changes phenotypically through its lifetime. See Developmental Biology.
average time it takes for any organism to loose or acquire a certain trait
Again, I am not sure whether you are talking about development or evolution here, as you talk about "any organism" and not about "any population". In any case, an average time for a population to acquire a given arbitrary trait is a concept that 1) is valid only for discrete traits, 2) is very dependent on the kind of trait you want to consider, 3) will vary a lot from one given trait to another and 4) is extremely dependent on what you consider to be the starting point of the evolutionary process toward the creation of the given trait. Therefore, it doesn't make much sense.
Currently used "standard evolutionary time unit"
However, you can use similar concepts for quantitative traits. The darwin (d) is a unit of evolutionary change (invented by Haldane and named after C. Darwin). It is defined as an e-fold change (e ≈ 2.718) in the mean of a trait in a population per million years.
Similarly, the haldane is a unit of evolutionary change (I don't know who invented it; it is obviously named after J.B.S. Haldane) and corresponds to the number of standard deviations (of the distribution of the trait among individuals in the population) by which the population mean changes per generation. The haldane unit is therefore dependent on the selection pressure, the additive and non-additive genetic variance and the environmental variance (therefore the heritability), and the mutational pressures.
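To make these units concrete, here is a small illustrative sketch (my addition, not part of the original answer; the numbers are invented):

```python
import math

def darwins(mean_before, mean_after, dt_myr):
    """Evolutionary rate in darwins: e-fold change in a trait mean per million years."""
    return (math.log(mean_after) - math.log(mean_before)) / dt_myr

def haldanes(mean_before, mean_after, pheno_sd, generations):
    """Evolutionary rate in haldanes: change in the trait mean, measured in
    phenotypic standard deviations, per generation (per the definition above)."""
    return (mean_after - mean_before) / (pheno_sd * generations)

# A (made-up) trait mean growing from 10 mm to 12 mm over 2 million years:
rate_d = darwins(10.0, 12.0, 2.0)            # ~0.09 darwins
# The same change with a phenotypic SD of 1.5 mm, over 100000 generations:
rate_h = haldanes(10.0, 12.0, 1.5, 100000)   # ~1.3e-5 haldanes
```

Note how strongly both rates depend on the chosen time window and on the trait's variance, which is exactly the caveat raised above.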
"domain": "biology.stackexchange",
"id": 3278,
"tags": "evolution, phylogenetics"
} |
Church Numerals | Question: Here is exercise 2.6 from SICP:
Exercise 2.6: In case representing pairs as procedures wasn’t
mind-boggling enough, consider that, in a language that can manipulate
procedures, we can get by without numbers (at least insofar as
nonnegative integers are concerned) by implementing 0 and the
operation of adding 1 as
(define zero (lambda (f) (lambda (x) x)))
(define (add-1 n)
(lambda (f) (lambda (x) (f ((n f) x)))))
This representation is known as Church numerals, after its inventor,
Alonzo Church, the logician who invented the λ-calculus.
Define one and two directly (not in terms of zero and add-1). (Hint:
Use substitution to evaluate (add-1 zero)). Give a direct definition
of the addition procedure + (not in terms of repeated application of
add-1).
Please review my code:
(define one (lambda (f) (lambda (x) (f x))))
(define two (lambda (f) (lambda (x) (f (f x)))))
;; I used an identity function to check the + procedure
(define (+ a b)
(lambda (f)
(lambda (x)
((((a f) b) f) x))))
How can I improve this code?
Answer: Your function + is not correct.
The definition of the sum of two Church numerals is the following:
(define (plus a b)
(lambda (f)
(lambda (x)
((a f) ((b f) x)))))
(see for instance wikipedia).
In fact, the Church numeral n can be defined as the functional that applies a given function f n times to a given value x. So in the above definition, the sum (plus a b) first applies f to x b times, and then applies f a further a times to that result. In your definition, instead, the types of the applications inside the body of the function are wrong.
How to test for the correctness of Church numerals and functions over them?
You simply apply a Church numeral to the function integer successor (i.e. (lambda(x)(+ x 1))) and the number 0 to find if it produces the corresponding “regular” numeral. So, for instance:
(define (succ x) (+ x 1)) ;; here `+` is the integer addition, not your function!
((zero succ) 0) ; produces 0
((one succ) 0) ; produces 1 etc.
So you can test if the sum is correct with:
(((plus one two) succ) 0) ; produces 3
If you try your function, you will find:
(((+ one two) succ) 0) ; raises an error | {
"domain": "codereview.stackexchange",
"id": 21929,
"tags": "functional-programming, lisp, scheme, lambda, sicp"
} |
How to arrive at the Bloch equation $H(k)u(k) = E(k)u(k)$? | Question: Bloch's theorem states that in the presence of a periodic potential solutions to the Schrödinger equation take the following form:
$$\Psi(k) = e^{ik\cdot r}\,u(k)$$
I am trying to show that using this ansatz results in the Bloch equation
$$H(k)u(k) = E(k)u(k)$$ but something is lost upon me...
I tried using the Bloch Hamiltonian in the following way:
$$H(k) = e^{ik\cdot r}\,H\,e^{-ik\cdot r}$$
Time independent Schrödinger equation is:
$$HΨ = EΨ$$
giving
$$\left[e^{ik\cdot r}\,H\,e^{-ik\cdot r}\right]\left[e^{ik\cdot r}\,u(k)\right] = E\left[e^{ik\cdot r}\,u(k)\right]$$
simplifying to
$$e^{ik\cdot r}\,H\,u(k) = E\left[e^{ik\cdot r}\,u(k)\right]$$
From here, can I "pull out" the $e^{ik\cdot r}$ terms to cancel them? It doesn't feel right. Please point me in the right direction... resources would be immensely helpful!
Answer: A couple of remarks are in order. The periodic potential is periodic in space typically, since $V \equiv V(r)$. In other words, we start with knowing that $V(r) = V(r + R)$ for some $R \in \mathbb{R}$. Then Bloch's Theorem states that,
$$
\begin{equation}
\psi_k(r) = e^{ikr}u_k(r)
\end{equation}
$$
are the solutions to the Schrödinger equation, with $u_k(r) = u_k(r+R)$. These solutions are called Bloch functions. Such a situation may be observed in a perfect crystal, where the periodicity corresponds to its atomic structure. I restate the theorem in position space rather than momentum space because the Bloch functions $\psi_k(r)$ aren't generally periodic in the reciprocal space. More details on that in the answer here.
Now, applying the Time-Independent Schrödinger Equation $H\psi_k(r) = E_k \psi_k(r)$ to the Bloch function, we get
$$
\begin{equation}
H_k u_k(r) = \left[ \frac{p^{2}_{eff}}{2m} + V(r) \right] u_k(r) = E_k u_k(r)
\end{equation}
$$
where, $p_{eff} := -i\hbar\partial_r + \hbar k$. In the context of a crystal, one may identify this effective momentum as the crystal momentum $\hbar k$ (a quasimomentum really) added to the usual momentum operator $-i\hbar\partial_r$. I leave the proof to you as an exercise.
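As a hint for that exercise (my addition, not part of the original answer), the key identity is

$$
-i\hbar\partial_r\left(e^{ikr}u_k(r)\right) = e^{ikr}\left(-i\hbar\partial_r + \hbar k\right)u_k(r),
$$

so each factor of $-i\hbar\partial_r$ acting on $\psi_k = e^{ikr}u_k$ turns into $p_{eff} = -i\hbar\partial_r + \hbar k$ acting on $u_k$; applying this twice in the kinetic term and cancelling the overall $e^{ikr}$ yields $H_k u_k = E_k u_k$.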
Side-note: There's nothing wrong with pushing the translation operator $T_r = e^{ikr}$ out to one side in your equations, so long as it commutes with the operators you're pushing it through (one must take care of negative signs if it anticommutes). In fact, for a perfect crystal with period $R$, we must have $[H, T_R] = 0$. As far as cancellations go, since $T_r$ is unitary, all you have to do is (pre/post-)multiply both sides by the adjoint $T^{\dagger}_{r}$ after pushing $T_r$ out to the same end of either side.
PS: My descriptions are for a 1D case. The generalisation to higher dimensions is straightforward if necessary. | {
"domain": "physics.stackexchange",
"id": 91207,
"tags": "quantum-mechanics, condensed-matter, wavefunction, schroedinger-equation, crystals"
} |
How can I tell if my model is overfitting from the distribution of predicted probabilities? | Question: Hi all,
I am training a LightGBM model and have used all of the necessary parameters to help with overfitting. I plot the distribution of the predicted probabilities (i.e. probability of having cancer) from the model (after calibrating using a calibrated classifier), i.e. their histogram or KDE. As you can see below, the probabilities for my class 1 are concentrated at the upper and lower ends.
I have tried playing around with the bandwidth to smooth this a little, and it doesn't smooth the bumps much. What do you think this shows about my model? Isn't it a good thing that the model assigns a greater probability to class 1 (which is 'has cancer')?
I am unsure how to interpret this or where I could be going wrong.
The red curve is the positive class (has cancer) and the blue curve is the negative class. Below is the code used to generate the plot.
import matplotlib.pyplot as plt

results = df[['label', 'predicted_prob']]
colors = ['b', 'r']
for label in [0, 1]:
    results[results['label'] == label]['predicted_prob'].plot.kde(bw_method=0.35, color=colors[label])
plt.xlim(0, 1)
Answer: Such a plot doesn't really tell you much about overfitting.
First, check that your calibration has worked well; it's possible that an incorrect calibration has pushed the probabilities to the extremes. Otherwise, the distribution of probabilities being so extreme suggests the data just naturally separates into a segment of easy-to-detect cancers and the rest. Among the latter, it looks like you get reasonably good but not great rank-ordering of cases. | {
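One quick way to run that calibration check (an illustrative pure-Python sketch of a reliability table, my addition rather than part of the original answer): bin the predicted probabilities and compare the mean predicted probability in each bin with the observed fraction of positives.

```python
def reliability_table(labels, probs, n_bins=10):
    """Bin predictions by probability; for each non-empty bin return
    (mean predicted probability, observed positive rate, count)."""
    bins = [[] for _ in range(n_bins)]
    for y, p in zip(labels, probs):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the last bin
        bins[i].append((y, p))
    rows = []
    for pairs in bins:
        if pairs:
            mean_pred = sum(p for _, p in pairs) / len(pairs)
            pos_rate = sum(y for y, _ in pairs) / len(pairs)
            rows.append((mean_pred, pos_rate, len(pairs)))
    return rows
```

For a well-calibrated model, the first two numbers in each row should roughly match; large gaps localize the miscalibration. (scikit-learn's calibration_curve computes the same curve.)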
"domain": "datascience.stackexchange",
"id": 7985,
"tags": "python, classification, probability, lightgbm, probability-calibration"
} |
Are there side-effects to having a generic Ninject factory? | Question: Consider this class:
using Ninject;
using Ninject.Syntax;
using Ninject.Parameters;
namespace MyApp.Dependencies.Factories
{
public abstract class FactoryBase<T> where T : class
{
private IResolutionRoot _resolutionRoot;
protected FactoryBase(IResolutionRoot resolutionRoot)
{
_resolutionRoot = resolutionRoot;
}
protected T Create(params IParameter[] parameters)
{
return _resolutionRoot.Get<T>(parameters);
}
}
}
What this class really abstracts away, is the _resolutionRoot.Get<T> part, allowing for concrete factory classes to look something like this:
//using Ninject; // not needed
using Ninject.Syntax;
using Ninject.Parameters;
public interface IUnicornFactory
{
Unicorn Create(string name, int hornLength, Color color);
}
public class UnicornFactory : FactoryBase<Unicorn>, IUnicornFactory
{
public UnicornFactory(IResolutionRoot resolutionRoot)
: base(resolutionRoot)
{ }
public Unicorn Create(string name, int hornLength, Color color)
{
return Create(new ConstructorArgument("name", name),
new ConstructorArgument("hornLength", hornLength),
new ConstructorArgument("color", color));
}
}
Is this overkill, or something that I'll thank myself for later, when I have dozens of factory classes? I find the base factory merely allows for synctactic sugar in the concrete factory implementations - are there any side-effects to this?
Answer: Line-by-line, I had exactly the same factory code. It smells, doesn't it?
The correct solution would be to use the Ninject Factory extension. I have been happy with it since.
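For reference, a sketch of what that looks like (from memory, so double-check the Ninject.Extensions.Factory documentation for your Ninject version): you keep only the factory interface and let the extension generate the implementation.

```csharp
// Sketch -- requires the Ninject.Extensions.Factory NuGet package.
using Ninject;
using Ninject.Extensions.Factory;

public interface IUnicornFactory
{
    Unicorn Create(string name, int hornLength, Color color);
}

// In the composition root -- no FactoryBase or UnicornFactory class needed;
// the extension generates a proxy behind the interface:
var kernel = new StandardKernel();
kernel.Bind<IUnicornFactory>().ToFactory();

// Arguments are passed to Unicorn's constructor, matched by parameter name:
var unicorn = kernel.Get<IUnicornFactory>().Create("Sparkle", 42, Color.White);
```

This removes the entire FactoryBase hierarchy from your code, which answers the "is this overkill" question rather directly.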
"domain": "codereview.stackexchange",
"id": 4240,
"tags": "c#, dependency-injection, factory-method"
} |
analog-to-discrete-to-analog system sampling problem | Question: This is not a homework problem, I am solving practice problems for my exam.
Consider the analog-to-discrete-to-analog system shown in figure 1. The CT signal $x_a(t)$
is sampled at a frequency of $F_s = 2000$ Hz ($T_s = 0.5$ msec). The resulting impulse train
is then converted to a discrete time sequence $x_d[n]$. The lowpass DT filter $H_d(e^{j\Omega})$ is
subsequently used to filter $x_d[n]$ giving $y_d[n]$. Finally, a CT version of the output $y_a(t)$ is
created, using an ideal DT-to-CT converter (at the same sampling frequency $F_s = 2000$ Hz).
Note: $H_d(e^{j\Omega})$, which is obviously periodic, is shown for only one period.
a) For the CTFT of $x_a(t)$ given by $X_a(\omega)$ in the figure with $B = 2000\pi$ rad/sec, sketch the
$X_d(e^{j\Omega})$, the DTFT of the DT sequence $x_d[n]$.
b) Sketch $Y_a(\omega)$, the CTFT of the CT signal $y_a(t)$. Again, assume that we are using the
same frequency as that of sampling $F_s = 2000$ Hz.
I followed the following steps:
Step 1: Multiplied $x_a(t)$ by an infinite pulse train and then switched to frequency domain so I can have an expression for $X_d(\omega)$ in CTFT.
Step 2: After conversion, the "envelope" (which in this case is the triangular shape) is not drawn; instead we have pulses, something like this:
(Excuse my very bad drawing skills) However, only the red lines exist in this case since we are now in discrete time.
Step 3: I believe in step 2 I have found how $X_d(e^{j\Omega})$ should look like. But then here I get stuck about how should I filter the signal. I get that the filter is periodic but how do I use this in here? Isn't the width of the filter much smaller than the width of the signal?
Answer: Your first step is correct. However, a signal which is discrete in the time domain does not per se have a discrete frequency-domain representation: in theory its Discrete-Time Fourier Transform (DTFT) can be calculated, and that is a continuous function of frequency. The spectrum becomes discrete only if a finite representation is required, in which case the Discrete Fourier Transform (DFT) is used.
The main consequence of time-domain discretisation is periodicity in the frequency domain, so your sketches should be periodic as well. For this, you may need to check whether the spectral replicas overlap at $\frac{F_s}{2}$ (i.e. whether aliasing occurs).
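That periodicity is easy to verify numerically (an illustrative pure-Python check, not part of the exercise): the DTFT $X(e^{j\Omega})=\sum_n x[n]e^{-j\Omega n}$ of any sequence satisfies $X(e^{j(\Omega+2\pi)})=X(e^{j\Omega})$, since $e^{-j2\pi n}=1$ for integer $n$.

```python
import cmath

def dtft(x, omega):
    """DTFT of a finite-length sequence x, evaluated at the
    normalized frequency omega (radians per sample)."""
    return sum(xn * cmath.exp(-1j * omega * n) for n, xn in enumerate(x))

x = [1.0, 0.5, -0.25, 0.125]   # an arbitrary short test sequence
omega = 0.73
# Periodicity: shifting omega by 2*pi leaves the DTFT unchanged.
assert abs(dtft(x, omega) - dtft(x, omega + 2 * cmath.pi)) < 1e-9
```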
Your assumption that the discrete filter's transfer function is periodic in the frequency domain is correct. Since the filter is applied in the frequency domain, you multiply it with the signal spectrum to get the output spectrum. If the filter's passband is narrower than the signal's bandwidth, the signal components outside the passband are attenuated (set to zero here, since this filter is ideal). The computation looks like this:
$$y_d[n]=x_d[n] * h_d[n] = \mathcal{F}_\text{DTFT}^{-1}\{X(e^{j\Omega}) \cdot H(e^{j\Omega})\}$$ | {
"domain": "dsp.stackexchange",
"id": 7730,
"tags": "filters, discrete-signals, continuous-signals, lowpass-filter, analog"
} |
Reading a pointer to a string | Question: What this code basically does is read a std::string from a given pointer.
I was told that using a StringBuilder as I am is not the best way to achieve this, so I would like to know how this would be best written.
As additional information, that is a pointer to a std::string, and I wrote it this way because I was originally curious whether there was a way to read a std::string in C# without having to create a bridge using C++/CLI.
How could I optimize this bit of code for best performance?
public static string ReadStdString(IntPtr address)
{
var capacity = Marshal.ReadInt32(address + 24);
var length = Marshal.ReadInt32(address + 20);
var startStr = address + 4;
if (capacity > 15)
startStr = (IntPtr)Marshal.ReadInt32(address + 4);
var result = new StringBuilder();
for (var i = 0; i < length; i++)
{
result.Append((char)Marshal.ReadByte(startStr, i));
}
return result.ToString();
}
Answer:
var result = new StringBuilder();
for (var i = 0; i < length; i++)
{
result.Append((char)Marshal.ReadByte(startStr, i));
}
You're working in a tight loop: a StringBuilder looks like a reasonable tool to use.
One thing I would change that could impact performance (depending on the length of the string involved), is the StringBuilder constructor being used:
var result = new StringBuilder(length);
There's no reason not to specify the length of the string you're building if you know it from the start; that will reduce the overhead, since the internals of the builder won't need to manage growth.
The loop is a one-liner. You have this one-liner just a few instructions above:
if (capacity > 15)
startStr = (IntPtr)Marshal.ReadInt32(address + 4);
Why does the loop have an explicit { } scope, but not the if block? It would be better to be consistent about scoping braces, and have them everywhere:
if (capacity > 15)
{
startStr = (IntPtr)Marshal.ReadInt32(address + 4);
}
An alternative to the StringBuilder could be to write the bytes into a MemoryStream:
using (var stream = new MemoryStream(length))
{
    for (var i = 0; i < length; i++)
    {
        stream.WriteByte(Marshal.ReadByte(startStr, i));
    }
    stream.Position = 0; // rewind, or the reader would start at the end and read nothing
    return new StreamReader(stream).ReadToEnd();
}
The nice thing with this approach is that you don't need to cast every single byte of the string into a char. On the other hand, you need 2 objects instead of one, and you need to cleanly Dispose of the stream, too - depending on the length of the string, the overhead might just not be worth it, although my guts tell me the stream could be faster... but I think you'd need to race your horses to find out. | {
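A third option worth mentioning (my addition, not from the original question; it assumes the bytes are single-byte characters, as the (char) cast in the original implies): copy the whole buffer in one native call and decode it, avoiding a managed/native transition per byte.

```csharp
// One bulk copy instead of `length` individual ReadByte calls.
using System.Runtime.InteropServices;
using System.Text;

public static string ReadStdStringBulk(IntPtr address)
{
    var length = Marshal.ReadInt32(address + 20);
    var startStr = address + 4;
    if (Marshal.ReadInt32(address + 24) > 15)       // capacity check, as in the original
        startStr = (IntPtr)Marshal.ReadInt32(address + 4);
    var buffer = new byte[length];
    Marshal.Copy(startStr, buffer, 0, length);      // single managed/native transition
    // Latin-1 maps each byte 0..255 to the same char, like the original (char) cast:
    return Encoding.GetEncoding("ISO-8859-1").GetString(buffer);
}
```

For long strings, this should beat both the StringBuilder and the MemoryStream variants, since the per-element Marshal calls are the expensive part.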
"domain": "codereview.stackexchange",
"id": 14758,
"tags": "c#, strings, .net"
} |
What does the notation $\Psi_k/(\Psi_k,\Psi_k)^{1/2} $ mean? | Question: I am currently reading the paper "Gravitation and quantum mechanics for macroscopic objects" by F. Karolyhazy (1966). In his paper, he uses certain notation that I haven't come across before (he also skips over some mathematics here and there, but that's another story). He is speaking of the development of initial states of a quantum mechanical system to one of the states that he denotes as follows:$$ \frac{1}{(\Psi_k, \Psi_k)^{1/2}} \Psi_k $$ where $k=1, 2$. What does this notation imply?
Answer: This is the same notation that you'll find in Weinberg's books.
$$(\psi, \chi)$$
is the inner product of the two states $\psi$ and $\chi$, and corresponds to
$$\langle \psi \mid \chi \rangle.$$
So, the above corresponds literally to
$$ \frac{1}{\sqrt{\langle \psi_k \mid \psi_k \rangle}} \left| \psi_k \right>$$
This new object is just the normalized version of the state $\psi_k$, whose inner product with itself is unity. | {
"domain": "physics.stackexchange",
"id": 15931,
"tags": "quantum-mechanics, wavefunction, hilbert-space, notation, normalization"
} |
Pledge: Promise-like entities design goals experiment | Question: Background
I was writing a promises/a+ implementation, just for fun. About halfway through I decided it wasn't that fun after all, so I decided to do something a little different.
Thinking of promises as a pattern (and not a contract), I figured they basically amount to the following (if I'm missing the point, please correct me):
They "flatten" async* code, avoiding callback hell.
They allow sensible error handling on async code.
They are composable, and can be chained.
If one promise resolves to another, that new promise must also resolve before the rest of the chain resolves (e.g. a then function returning a promise).
* I get that promises themselves don't care about async, I'm thinking more about what people use them for.
Design goals
Instead of trying to conform to the promises/a+ spec, I decided to try to create something that would allow for using a similar pattern without worrying about the promise contract. I thought it would be interesting to learn:
Can something as powerful as promises can be implemented in a simpler / more minimal way?
What improvements can be made on a promise-like pattern?
Is it reasonable to try to integrate promises with other promise-like things?
The first two points basically boil down to ditching the design contract idea (having a special interface for everything that returns a promise) and making a system that works better with the more common function-takes-a-callback style interfaces.
The third point is handled by having "thenable" entities that work with Promise.resolve, and by recognizing and resolving other "thenables" passed from one entity to another.
Promise-like entities
The Pledge API is similar to the Promises API.
Static methods
Use this instead of invoking the constructor.
Pledge.resolve(callback)
Create a new pledge, return it, and resolve it after the current script finishes.
Instance methods
Use these on objects returned from Pledge.create and Pledge.resolve.
pledge.and(callback)
Create a new pledge, set it as the next pledge to resolve after this instance, and return it.
Similar to promise.then.
pledge.or(callback)
Create a new pledge, set it as the error handler for this instance, and return this instance.
Similar to promise.catch.
pledge.then(resolve, reject)
This is not meant to be used directly. It's here so pledges can be converted to promises with Promise.resolve.
Callbacks
The pledge.or callback takes a single argument, an Error instance. All other callbacks behave as follows:
Callbacks take a variable number of arguments. The first argument, go, is a function to call to pass execution to the next pledge in the chain. The go function may be called with any number of arguments, which will be passed to the next pledge in the chain (similar to returning a value from promise.then). Any callback arguments after the first are passed in from the previous pledge in this manner.
If any arguments passed to go are Pledge instances, they will be resolved before the next pledge in the chain is resolved.
If any arguments passed to go are "thenable," they will be wrapped in pledges and resolved before the next pledge in the chain is resolved. This should allow pledges to integrate seamlessly with interfaces that return promises.
As syntactic sugar, a callback may return a truthy value rather than calling go. This is equivalent to calling go with that value as the sole argument.
Experiments
There are a few experimental features that aren't really the core focus of the library, but might stick. Most notably, if you pass multiple callbacks to and, the next item in the chain will only resolve when they have all resolved. When using multiple callbacks, the next item in the chain will receive go plus one argument for each callback, containing the list of all arguments passed to go from that callback.
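For illustration, that multi-callback form would look something like this (a hypothetical usage sketch based on the description above; loadImage is an assumed callback-style loader, and this won't run without the library itself):

```javascript
// Hypothetical: resolve two loads in parallel, then continue with both results.
Pledge.resolve(function (go) {
    go();
}).and(function (go) {
    loadImage("a.jpg", go);        // first callback
}, function (go) {
    loadImage("b.jpg", go);        // second callback, resolved alongside the first
}).and(function (go, aArgs, bArgs) {
    // aArgs and bArgs hold the argument lists each loader passed to its go.
});
```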
Examples
The go function is useful because it can pass multiple values to the next entity in the chain, while return can only pass a single value. It's also useful for using callback-style interfaces without much hassle. For example:
Pledge.resolve(function (go) {
go("whatever.jpg");
}).and(function (go, src) {
var image = new Image();
image.onload = go;
image.src = src;
}).and(function (go, event) {
// the event argument was passed in from image.onload, do something with it here.
})
With promises, this kind of thing gets a little more ugly. You end up wanting to use a promise-aware image loader. This is where promises work as a design contract... if you have a special promise interface for everything, it works well. If you just want to do ad-hoc stuff without using special interfaces, you have to do something awkward like this:
new Promise(function (resolve, reject) {
resolve("whatever.jpg");
}).then(function (src) {
return new Promise(function (resolve, reject) {
var image = new Image();
image.onload = resolve;
image.src = src;
});
}).then(function (event) {
// do something with the event argument.
})
However, this only works because onload passes the callback a single argument, as resolve only accepts a single argument. To use promises with something that passes the callback multiple arguments, you'd have to do even more work:
new Promise(function (resolve, reject) {
resolve("whatever.txt");
}).then(function (src) {
return new Promise(function (resolve, reject) {
loadFile(src, function (data, filesize, lastModified) {
resolve({
data: data,
filesize: filesize,
lastModified: lastModified
});
});
});
}).then(function (fileInfo) {
// do something with the file info.
})
With Pledge, we could write this instead:
Pledge.resolve(function (go) {
go("whatever.txt");
}).and(function (go, src) {
loadFile(src, go);
}).and(function (go, data, filesize, lastModified) {
// do something with the file info.
})
There is no need for a special file loader that returns pledges; we can use pledges with existing callback mechanisms easily.
Implementation
It's sort of rough in some parts as I'm still developing the idea, but here it is. I'm more interested in having the overall design of the library reviewed than the code itself, but comments on the code are of course welcome.
Note: go is now fulfill, and callbacks are "vows".
/** Fulfill function.
Passed in as the first argument to a vow function.
Accepts any number of arguments to pass to the next pledge.
@typedef function(...)
*/
Function.Fulfill;
/** Vow function.
Passed into vow list functions. Accepts a fulfill function
as the first argument, followed by any arguments passed from
the previous pledge.
@typedef function(Function.Fulfill, ...)
*/
Function.Vow;
/** Vow list function.
A function that accepts a variable length list of vow functions.
@typedef function(...[Function.Vow])
*/
Function.VowList;
( /** @type function(Window) **/ function(global) {
/** Pledge constructor.
Don't invoke this, use Pledge.create or Pledge.resolve instead
(for consistency; CJS-style exports don't expose this).
@constructor
@param {...Function.Vow} vows
*/
function Pledge(vows) {
/** Vows.
When resolving, a pledge will resolve each vow before
resolving the next pledge in the chain.
@type Arguments.<Function.Vow>
*/
this.vows = arguments;
/** Pending.
An array of functions to execute once this pledge is fulfilled.
@type Array.<function()>
*/
this.pending = [];
/** Resolution.
While resolving, a pledge will populate this array with
the arguments lists passed to each vow's fulfill function.
@type {Error|Array.<Arguments>}
*/
this.resolution = [];
/** Resolved count.
The number of vows resolved so far.
@type number
*/
this.resolvedCount = 0;
/** Fail.
A pledge to resolve when an error is thrown.
@type Pledge
*/
this.fail;
/** Locked.
As soon as a pledge starts resolving, it is locked.
If it was already locked, it will throw an error.
@type boolean
*/
this.locked;
/** Ready.
A pledge with all vows fulfilled is ready.
@type boolean
*/
this.ready;
}
//
// Public API.
//
/** Pledge.resolve
Create a new pledge, return it, and resolve it
after the current script finishes, if it's not
locked by then.
@type Function.VowList: Pledge
*/
Pledge['resolve'] = function() {
var /** @type Pledge */ pledge = create(arguments);
setTimeout(function() { resolve(pledge); }, 0);
return pledge;
};
/** pledge.and
Appends a new pledge to this one and returns it.
@type Function.VowList: Pledge
*/
Pledge.prototype['and'] = function() {
return resolveNext(this, create(arguments));
};
/** pledge.or
Sets a new pledge as this pledge's error handler,
and returns this pledge.
@type Function.VowList: Pledge
*/
Pledge.prototype['or'] = function() {
if (this.locked) {
throw new Error("locked");
}
this.fail = create(arguments);
return /** @type Pledge */ this;
};
/** pledge.then
Allows promises to resolve pledges.
Not meant to be called from user code.
*/
Pledge.prototype['then'] = function(fulfill, reject) {
var result = this;
function thenFulfill(go, value) { fulfill(value); }
function thenReject(error) { reject(error); }
if (fulfill) {
result = result['and'](thenFulfill);
}
if (reject) {
result = result['or'](thenReject);
}
return result;
};
//
// Internal stuff.
//
/** @type function(this:Array, number): Array */
var arraySlice = Array.prototype.slice;
/** @type function(Array=): Array */
function arrayCopy(list) { return arraySlice.call(list || [], 0); }
/** Create pledge from vow list.
@param {Arguments.<Function.Vow>} vows
@return {Pledge}
*/
function create(vows) {
var pledge = new Pledge();
pledge.vows = vows;
return pledge;
}
/** Argument value.
Get a pledge's resolution in a format suitable for
an argument passed to a vow function.
@param {Pledge} self
@return {Object}
*/
function argValue(self) {
var value = self.resolution;
if (value instanceof Error) {
return value;
}
if (value.length == 1) {
value = value[0];
if (value.length == 1) {
value = value[0];
}
}
return value;
}
/** List value.
Get a pledge's resolution in a format suitable for
a list of arguments passed to a vow function.
@param {Pledge} self
@return {Array|Error}
*/
function listValue(self) {
var value = self.resolution;
if (value instanceof Error) {
return value;
}
value = arrayCopy(value);
if (value.length == 1) {
return value[0];
}
for (var i = value.length; i--;) {
if (value[i].length == 1) {
value[i] = value[i][0];
}
}
return value;
}
/** Vow count.
@param {Pledge} self
@return {number}
*/
function vowCount(self) {
return self.vows && self.vows.length || 0;
}
/** Set ready state to true and run anything in pending queue.
@param {Pledge} self
*/
function setReady(self) {
var completion;
self.ready = true;
while ((completion = self.pending.pop())) {
completion();
}
}
/** Resolve arguments.
Resolve all pledges passed to a fulfill function.
@param {Pledge} self
@param {Arguments} args
*/
function resolveArgs(self, args) {
var argIndex = args.length,
pledgeCount = 0,
pledgesResolved = 0,
arg;
function ready() {
if (++self.resolvedCount == vowCount(self)) {
setReady(self);
}
}
/**
@param {Error} error
*/
function handleError(error) {
self.resolution = error;
setReady(self);
}
/**
@param {number} argIndex
@param {Pledge} pledge
*/
function finalize(argIndex, pledge) {
++pledgeCount;
// while (pledge.next) { pledge = pledge.next; }
pledge['and'](function(){
args[argIndex] = argValue(pledge);
if (++pledgesResolved == pledgeCount) {
ready();
}
})['or'](handleError);
// while (pledge.previous) { pledge = pledge.previous; }
// resolve(pledge);
}
// resolve any pledges and thenables passed to fulfill.
for (argIndex = args.length; argIndex--;) {
arg = args[argIndex];
// if it's a pledge, finalize it.
if (arg instanceof Pledge) {
finalize(argIndex, arg);
// if it's thenable, wrap it in a pledge and finalize it.
} else if (arg && arg['then'] && arg['then'].call) {
finalize(argIndex, Pledge['resolve'](function(fulfill) {
arg['then'](fulfill, handleError);
}));
}
// otherwise it's an immediate value.
}
if (!pledgeCount) {
ready();
}
}
/** Resolve vow.
@param {Pledge} self
@param {number} vowIndex
@param {Array|Error=} args
*/
function resolveVow(self, vowIndex, args) {
var /** @type Function.Vow */ vow = self.vows[vowIndex],
/** @type Object */ result,
/** @type Pledge */ pledge,
/** @type Array */ newArgs;
/** @type Function.Fulfill */
function fulfill() {
self.resolution[vowIndex] = arguments;
resolveArgs(self, arguments);
}
if (args instanceof Error) {
args.rethrow = true;
newArgs = [args];
} else {
newArgs = arrayCopy(args);
newArgs.unshift(fulfill);
}
try {
result = vow.apply(self, newArgs);
} catch(error) {
if (error.rethrow) {
throw error;
}
self.resolution = error;
setReady(self);
}
if (result) {
fulfill(result);
}
}
/** Resolve another pledge after this one.
@param {Pledge} self
@param {Pledge} next
*/
function resolveNext(self, next) {
function ready() { resolve(next, listValue(self)); }
if (self.ready) {
ready();
} else {
self.pending.push(ready);
}
return next;
}
/** Resolve pledge.
@param {Pledge} self
@param {Array|Error=} args
@param {boolean=} failure
*/
function resolve(self, args, failure) {
var len = vowCount(self);
if (args instanceof Error) {
if (failure) {
resolveVow(self, 0, args);
return;
}
self.resolution = args;
self.locked = true;
if (self.fail) {
resolve(self.fail, args, true);
}
return;
}
if (self.locked) {
throw new Error("locked");
}
self.locked = true;
for (var i = 0; i < len; i++) {
resolveVow(self, i, args);
}
}
//
// Exports.
//
if (global['define']) {
global['define'](Pledge);
} else if (global['exports']) {
global['exports']['create'] = Pledge['create'];
global['exports']['resolve'] = Pledge['resolve'];
} else {
global['Pledge'] = Pledge;
}
}(this));
Answer: Very interesting question.
From a high-level perspective, the code only gets a bit hairy from the Internal stuff onward, which means the API itself is quite clear.
I really only have nitpicks:
This code could have been shorter if the Pledge constructor could take the vows parameter:
function create(vows) {
var pledge = new Pledge();
pledge.vows = vows;
return pledge;
}
could be
function create(vows) {
return new Pledge( vows );
}
This: "in a format suitable for an argument passed to a vow function." and this: if (value.length == 1) { value = value[0]; } drive me nuts. I don't understand from the code why you need this, and the comments are not helping me understand it either. You have this array unwrapping in both argValue and listValue; you could have a common helper function here.
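A shared helper could look something like this (a sketch; the name unwrapSingle is my invention, not part of the library):

```javascript
// Hypothetical helper: collapse a one-element array-like to its single value,
// otherwise return the input unchanged.
function unwrapSingle(value) {
  return (value && value.length == 1) ? value[0] : value;
}

console.log(unwrapSingle([42]));   // 42
console.log(unwrapSingle([1, 2])); // [ 1, 2 ]
```

Both argValue and listValue could then delegate their single-element special case to it.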
I got stuck on this:
return self.vows && self.vows.length || 0;
consider writing this for readability
return ( self.vows && self.vows.length ) || 0;
especially since you run the code through a compiler afterwards
In setReady(self) the naming could be better: if self is a pledge, why not call the parameter pledge? Since a pledge has vows, why not rename completion to vow? Compare and contrast
function setReady(self) {
var completion;
self.ready = true;
while ((completion = self.pending.pop())) {
completion();
}
}
and
function setReady(pledge) {
var vow;
pledge.ready = true;
while ((vow = pledge.pending.pop())) {
vow();
}
}
In resolveArgs as well, I would replace self with pledge; things become far clearer.
All in all, I like the code, it is more readable than other Promise libraries that I tried to grok. | {
"domain": "codereview.stackexchange",
"id": 8285,
"tags": "javascript, callback, promise"
} |
Integration of tensor to find potential | Question: I have question given as:
$$\partial_k \varphi = -(C_k+ D_{jk}r_j)$$
where $C_k \,\&\, D_{jk}$ are constants and $D_{jk}$ is symmetric and traceless. I have to find $\varphi$.
I am getting : $\varphi = A -C_kr_k - D_{jk}r_jr_{k}$
but answer is: $\varphi = A -C_mr_m - \frac12 D_{sm}r_sr_{m}$
I am clueless about $\frac12$ term in the answer.
Answer: I think your answer and the official answer are basically the same, but they used the fact that the tensor D is symmetric, and there is one additional little problem in your solution. Let me start from the beginning:
\begin{equation}
\partial_{k}\phi=-D_{jk}r_{j}
\end{equation}
this expression can be written as:
\begin{equation}
\partial_{k}\phi=-\frac{1}{2}\sum_{j\neq k}\left(D_{jk}r_{j}+D_{kj}r_{j}\right)- D_{kk}r_{k}
\end{equation}
Where I just used symmetry.This is easily integrated as:
\begin{equation}
A-\frac{1}{2}\sum_{j\neq k}\left(D_{jk}r_{j}r_{k}+D_{kj}r_{j}r_{k}\right)- \frac{1}{2}D_{kk}r_{k}r_{k}
\end{equation}
Now, since any term that does not contain $r_{k}$ is just a constant at this point, you can add and subtract this quantity to the expression:
\begin{equation}
\pm\sum_{s\neq k \;\;m\neq k}D_{sm}r_{s}r_{m}
\end{equation}
where the plus term goes into A and the minus allows you to get the exact expression. | {
"domain": "physics.stackexchange",
"id": 81193,
"tags": "homework-and-exercises, electromagnetism, tensor-calculus, integration, calculus"
} |
safeguards against comets? | Question: I read one of the related questions here, but what I'm curious to know is how do we deflect comets on imminent collision courses if we detect them at a later stage? I read somewhere else that there are systems which use powerful lasers (radiation pressure) to deflect them, but honestly, I don't understand why something like radiation pressure will work on something travelling at about 40 km/s through space, and still manage to cause a large enough deflection. Are there other methods?
Answer: Difficulty is hard to say, but the reason a laser might work is because the laser is supposed to melt one side of the comet, which, as it melts, gas particles fly off at relatively high velocity, which redirects the momentum of the comet somewhat. It's not the pressure from the laser, it's the heat from the laser and the fact that comets are made largely from ice, which can be melted or "zapped" into fast moving jets of gas.
The approach wouldn't be a laser on or orbiting Earth shooting comets from a great distance, but spacecraft flying up close to the comet and using lasers to target one side. It would be difficult to fire a laser from Earth with such pinpoint accuracy as to target one side of the comet and create the localized heat that would produce the necessary directional out-gassing. This article proposes spacecraft, perhaps several, flying adjacent to the comet.
A nuclear weapon would be way more effective, however, depending on how fast we'd need to push it. This article mentions a pulse laser landing on the comet or asteroid as well as other methods that might work. | {
"domain": "astronomy.stackexchange",
"id": 1825,
"tags": "comets"
} |
Is work done by sound wave on air particles? | Question: Is it possible for sound wave to do net work on air particles?
As in, can a sound wave make the air move in one direction so that it could, for example, move a sailboat?
I think since molecules gyrate about a mean position even though they are in the direction of wave propagation no net work is done but I want to confirm this idea.
Answer:
actually i wanted to ask whether sound wave does work on the medium, air, itself. like raise its temperature or something ?
Yes. The main reason sound decreases in amplitude with distance is not due to absorption; it's because sound sources emit roughly spherical radiation and are subject to the inverse square law. So an ideal plane wave in an ideal atmosphere would not attenuate with distance, because it's not subject to inverse square law.
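That geometric falloff is easy to quantify (my own sketch, not part of the original answer): the pressure amplitude of a spherical wave scales as $1/r$, so the level drops about 6 dB for every doubling of distance, while an ideal plane wave stays constant.

```python
import math

# Spherical spreading: pressure ~ 1/r, so the level change in dB between
# distances r1 and r2 is 20*log10(r2/r1). A plane wave would give 0 dB.
def spherical_drop_db(r1, r2):
    return 20.0 * math.log10(r2 / r1)

print(round(spherical_drop_db(1.0, 2.0), 2))  # 6.02 dB per doubling of distance
```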
But sound is also absorbed by the air, so even a plane wave will slowly decrease in amplitude with distance. This means the sound energy is being turned into heat energy, and increasing the temperature of the air slightly. The amount of energy absorbed varies with humidity and affects high frequencies first:
Absorption of Sound in Air versus Humidity and Temperature
Damping of Air of High Frequencies
Damping of Air of High Frequencies (Dissipation) | {
"domain": "physics.stackexchange",
"id": 13495,
"tags": "acoustics"
} |
Gibbs Paradox - why should the change in entropy be zero? | Question: The Gibbs paradox deals with the fact that for an ideal gas with $N$ molecules in a volume $V$ separated by a diaphragm into two subvolumes $V_1,V_2$ with $N_1,N_2$ particles in each subvolume, removing the diaphragm gives a nonzero change in entropy, but the change should be zero.
I don't understand why (conceptually) the change of entropy in this situation is supposed to be zero. Why isn't it positive - after all, removing the diaphragm gives the particles more freedom and thus increases the 'disorder' of the system, and, with entropy being a measure of this 'disorder', it too should increase. Conversely, if I put more and more diaphragms into the container, I could potentially isolate each particle in its own subvolume, leaving the system very ordered, so the entropy should be very small.
What is wrong with this way of thinking?
Answer: The entropy change should be zero – and essentially is zero, in the correct theory that takes the indistinguishability into account – because the thin membrane doesn't materially change the system and carries a tiny entropy by itself. The first reason is enough: the removal of the membrane is a reversible process – one may add the membrane back – so the entropy has to be zero. An entropy can't increase during a reversible process because it would decrease when the process is reversed – and that would violate the second law of thermodynamics.
In other words, the self-evident reversibility of the unphysical membrane means that $\delta S = \delta Q/T$ where $\delta Q$ is the heat flowing to the system – but it's clearly zero.
The paradox is removed when the indistinguishability of the particles is appreciated. The calculable entropy change is zero, as expected. In some sense, we are implicitly assuming that the molecules are indistinguishable everywhere above. If the molecules carried some passports, they could have a Canadian and American passport in the volume $V_1,V_2$, respectively, which would be a very special state (none of the molecules is abroad) while the number of states would be increased because each molecule may be either in its own country/volume or abroad. This is indeed why the wrong classical calculation claims that the entropy would increase.
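The counting argument can be made concrete with a quick Stirling-approximation sketch (my own illustration, in units of $k_B$, keeping only the volume-dependent terms):

```python
import math

# (Added sketch, not from the original answer.) Volume-dependent part of the
# ideal-gas entropy in units of k_B. Distinguishable particles: S ~ N ln V.
# With the Gibbs 1/N! correction (Stirling: ln N! ~ N ln N - N): S ~ N ln(V/N) + N.
def S(N, V, gibbs_correction):
    if gibbs_correction:
        return N * math.log(V / N) + N
    return N * math.log(V)

N, V = 1000, 1.0
dS_classical = S(2 * N, 2 * V, False) - 2 * S(N, V, False)  # 2N ln 2 > 0
dS_corrected = S(2 * N, 2 * V, True) - 2 * S(N, V, True)    # exactly 0
print(dS_classical, dS_corrected)
```

With equal densities on both sides, the spurious $2N\ln 2$ of the classical count disappears once the $1/N!$ correction is included.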
However, this prediction may be extracted even if the initial total volume $V_1+V_2$ is actually perfectly mixed before the membrane is added. | {
"domain": "physics.stackexchange",
"id": 10418,
"tags": "thermodynamics, statistical-mechanics, entropy, identical-particles"
} |
Is there an analog to SQL's STRING_AGG (or FOR XML PATH) function in Python? | Question: I asked this on SE, but maybe it was too data-oriented, so I'm trying to post it here. I am trying to find the analog to the SQL function STRING_AGG so that I can concatenate across columns and rows of a table (or dataframe).
Here is an example of what I am trying to achieve:
input:
output:
With SQL I can easily group by the ID_No and also specify the order by via the RUN_No. The syntax for achieving what I want would be:
SELECT ID_NO,
STRING_AGG(CONCAT('(', RUN_No, ') ', Start, ' to ', Stop))
WITHIN GROUP (ORDER BY RUN_No ASC) AS "Sequence"
FROM X_TBL GROUP BY ID_NO
So what would be the way to achieve the same grouping, concatenating and ordering in Python? I do have my data stored as a dataframe. I am able to concatenate across columns using the following code, but then wasn't sure how to group by the "ID_No", or concatenate across the rows within each ID_No.
sample['Merge'] = sample['Start'].map(str) + ", " + sample['Stop']
Answer: Here is a versatile solution. As you can see, you can modify the aggregation function in order to format the data as you want.
#the original DataFrame
df=pd.DataFrame({'ID_NO': [20, 20, 30, 30, 30], 'RUN_NO': [1,2,1,2,3], 'START': ['F2','F3','F9','F11','F14'],
'STOP': ['F3','F2','F11','F14','F6',]})
#convert 'RUN_NO' to string. This will make the aggregation formula easier to read
df['RUN_NO']=df['RUN_NO'].apply(str)
#The aggregation function
def agg_f(x):
return pd.Series(dict(Sequence = "%s" % ' '.join('(' +x['RUN_NO']+ ') ' + x['START'] +' to ' + x['STOP'])))
df_agg=df.groupby('ID_NO').apply(agg_f)
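For comparison, the same grouping, ordering and concatenation can be done with just the standard library (a sketch using itertools.groupby; the rows mirror the example DataFrame above):

```python
# Standard-library equivalent of the pandas aggregation (a sketch).
from itertools import groupby
from operator import itemgetter

rows = [
    {'ID_NO': 20, 'RUN_NO': 1, 'START': 'F2',  'STOP': 'F3'},
    {'ID_NO': 20, 'RUN_NO': 2, 'START': 'F3',  'STOP': 'F2'},
    {'ID_NO': 30, 'RUN_NO': 1, 'START': 'F9',  'STOP': 'F11'},
    {'ID_NO': 30, 'RUN_NO': 2, 'START': 'F11', 'STOP': 'F14'},
    {'ID_NO': 30, 'RUN_NO': 3, 'START': 'F14', 'STOP': 'F6'},
]

# Sorting by (ID_NO, RUN_NO) provides both the GROUP BY key and the ORDER BY.
rows.sort(key=itemgetter('ID_NO', 'RUN_NO'))
agg = {
    id_no: ' '.join("({0}) {1} to {2}".format(r['RUN_NO'], r['START'], r['STOP'])
                    for r in grp)
    for id_no, grp in groupby(rows, key=itemgetter('ID_NO'))
}
print(agg[20])  # (1) F2 to F3 (2) F3 to F2
```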
The output will be a DataFrame indexed by ID_NO with a single "Sequence" column, e.g. "(1) F2 to F3 (2) F3 to F2" for ID 20. | {
"domain": "datascience.stackexchange",
"id": 3316,
"tags": "python, pandas, sql"
} |
Lock-free bounded stack atomics | Question: I was thinking about using a very basic bounded (preallocated) stack to keep track of my thread IDs in correct LIFO order. I was wondering if my implementation is thread-safe:
// we use maximum 8 workers
size_t idle_ids_stack[8];
// position current write happening at
std::atomic_uint_fast8_t idle_pos(0);
// this function is called by each thread when it is about to sleep
void register_idle(size_t thread_id)
{
std::atomic_thread_fence(std::memory_order_release);
idle_ids_stack[idle_pos.fetch_add(1, std::memory_order_relaxed)] = thread_id;
}
// this function can be called from anywhere at anytime
void wakeup_one()
{
uint_fast8_t old_pos(idle_pos.load(std::memory_order_relaxed));
std::atomic_thread_fence(std::memory_order_acquire);
size_t id;
do
{
if(old_pos == 0) return; // no idle threads in stack; exit;
id = idle_ids_stack[old_pos-1];
}
while (!idle_pos.compare_exchange_weak(old_pos, old_pos-1, std::memory_order_acquire, std::memory_order_relaxed));
// wakeup single thread
signal_thread(id);
}
Please see latest changes: Gist
Answer: Problem 1
Consider that register_idle() is equivalent to this slightly rewritten version:
// this function is called by each thread when it is about to sleep
void register_idle(size_t thread_id)
{
std::atomic_thread_fence(std::memory_order_release);
uint8_t pos = idle_pos.fetch_add(1, std::memory_order_relaxed);
// What happens when this thread stops right here?
idle_ids_stack[pos] = thread_id;
}
If the thread stops after incrementing idle_pos, but before writing the actual thread id to the stack, you will be in trouble when another thread tries to wake up the thread at the top of stack. It will try to read a thread id from the stack that has not yet been written.
Problem 2
(Credit @MikeMB for correcting me on my previous answer)
You have a problem in wakeup_one() as well. Suppose idle_pos is 2 and idle_ids_stack[1] is 5. Now consider the following sequence of events:
Thread 1 runs wakeup_one() and executes id = idle_ids_stack[old_pos-1], so at this point, id is 5. But now thread 1 sleeps for a little while.
Thread 2 runs wakeup_one(), grabs the same id (5), and returns it, setting idle_pos to 1 in the process.
Thread 3 runs register_idle(), puts its id (3) into idle_ids_stack[1] and increments idle_pos back to 2.
Now thread 1 resumes, and compare-exchanges idle_pos from 2 to 1 successfully. But it returns id as 5 (a duplicate thread id), instead of 3 (the proper one). | {
"domain": "codereview.stackexchange",
"id": 14199,
"tags": "c++, c++11, multithreading, stack, lock-free"
} |
Is there a name for this phenomenon? | Question: Imagine I have a cylindrical pipe closed on both ends with lids. I fill it with sand and compress the sand tightly. Now I hold the cylinder vertically and remove the bottom lid. The sand will counterintuitively not fall out, apart from a few loose grains. I understand the reason for this. Essentially the downward gravity force acting on each sand grain is counterbalanced by a net upward friction force. And the friction force arises due to the contact of each sand particle with its neighbors.
Now, is there a name for this phenomenon? Or, are there related experiments or applications of this phenomenon?
Answer: The phenomenon is called dilatancy.
In my former life as a colloid scientist I frequently encountered this in concentrated dispersions (the archetypal example of this is oobleck), though the mechanism is subtly different in dispersions since it arises from viscous drag in the medium rather than friction between the dispersed particles. However the phenomenon is basically the same.
In this particular case the volume is constrained by the tube hence the grains cannot flow. | {
"domain": "physics.stackexchange",
"id": 51168,
"tags": "friction, terminology, statics, stability, granulated-materials"
} |
How to correctly determine the stopping distance of a coasting bicycle when considering aerodynamic drag? | Question: Given values of $C_{rr}$, $C_dA$ and $\rho$ (air density), how can I correctly determine the distance and time taken to coast from some $v_1$ to $v_2$?
Answer: Assuming that your rolling resistance is independent of velocity, and that the force of rolling friction is $f_{rr} = -mgC_{rr}$, you can write the equation of motion as
$$F = m \frac{dv}{dt} = - \left(m g C_{rr} +\frac12 \rho v^2 C_d A\right)$$
From Wolfram Alpha we learn that the solution for
$$\frac{dv}{dt} = -\left(a + b v^2\right)$$
is
$$v(t) = -\sqrt{\frac{a}{b}} \tan\left(\sqrt{ab} (c_1 + t)\right)$$
You need to think about this a little bit to understand that the situation where you are decelerating happens when $c_1+t<0$. Getting the values of $a$ and $b$ right:
$$\begin{align} a&= gC_{rr}\\
b&= \frac{\rho C_dA}{2m}\end{align}$$
The result is
$$v(t) = -\sqrt{\frac{2mgC_{rr}}{\rho C_d A}}\tan\left(\sqrt{\frac{\rho C_d A ~g C_{rr}}{2m}}\left(c_1+t\right)\right)$$
You find the integration constant $c_1$ from the initial velocity (put $t=0$; you will find that $c_1$ must be negative), and the evolution of velocity with time follows. Interestingly, there is a finite time to come to a complete stop. That doesn't happen when you have "pure" quadratic drag - it's the rolling friction that dominates at low speeds.
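Since the question also asks for the coasting distance and time, here is a short added sketch (mine, not part of the original answer): setting $v=0$ in the solution above gives $t_{stop} = \arctan\left(v_0\sqrt{b/a}\right)/\sqrt{ab}$, and integrating $v\,\frac{dv}{dx} = -(a+bv^2)$ gives $x_{stop} = \frac{1}{2b}\ln\left(1+\frac{b}{a}v_0^2\right)$:

```python
import math

# Same a and b as defined above, flat road, demo parameters (assumed here):
# m = 1 kg, Crr = 0.1, CdA = 0.05 m^2, rho = 1.2 kg/m^3, v0 = 10 m/s.
m, g, crr, cda, rho, v0 = 1.0, 9.81, 0.1, 0.05, 1.2, 10.0
a = g * crr               # rolling-resistance deceleration, m/s^2
b = rho * cda / (2 * m)   # quadratic-drag coefficient, 1/m

t_stop = math.atan(v0 * math.sqrt(b / a)) / math.sqrt(a * b)
x_stop = math.log(1.0 + (b / a) * v0 ** 2) / (2.0 * b)
print("t_stop = %.2f s, x_stop = %.2f m" % (t_stop, x_stop))  # roughly 6.1 s, 23.3 m
```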
Update
Just to check that things work as expected, I wrote a quick Python program that computes the velocity according to the above expression, incorporating also the effect of slope (note - if the slope is such that the object would accelerate, you get a "math domain error". This is not a hard thing to fix, but it would make the code more complicated to read, so I left that out for now.)
Running the code with three values of slope (where negative slope = downhill) gave the following plot; you can see that the slope of -5° almost exactly cancels the rolling resistance of 0.1 (arcsin(0.1) = 5.7°), leaving just the quadratic drag; if you set the quadratic drag coefficient $C_dA$ to zero, the velocity ends up almost completely unchanged. So yes, this is believable.
And the code (this is not meant to show "good Python", just something I threw together for a quick demo):
# rolling resistance and quadratic drag
import math
import numpy as np
import matplotlib.pyplot as plt
# pick some values for mass etc:
# these have obvious meanings, and SI units
m = 1.
g = 9.81
crr = 0.1
cda = 0.05
rho = 1.2
v0 = 10.
# convert to numbers we use in the formula
b = rho*cda/(2*m)
# a function that allows me to use degrees for slope:
def sind(theta):
return math.sin(theta*math.pi/180.)
def vt(t,a,b,c1):
# implement the expression I derived
temp = -np.sqrt(a/b)*np.tan(np.sqrt(a*b)*(c1+t))
# if velocity goes negative, things go awry
stop = np.where(temp<0)
if np.prod(np.size(stop))>0:
# set all elements past the point where v first goes negative to zero
temp[stop[0][0]:]=0
return temp
# range of time for simulation:
t = np.linspace(0, 15, 500)
plt.figure()
# calculate for a range of slopes
for slope in np.arange(-5,6,5):
a = g*(crr + sind(slope))
c1 = math.atan(-v0*math.sqrt(b/a))/math.sqrt(a*b)
plt.plot(t, vt(t,a,b,c1), label='slope = %d'%slope)
plt.xlabel('time (s)')
plt.ylabel('velocity (m/s)')
plt.title('coasting down with rolling and quadratic friction')
plt.legend()
plt.show() | {
"domain": "physics.stackexchange",
"id": 40672,
"tags": "homework-and-exercises, newtonian-mechanics, drag"
} |
Do stars of a galaxy change their positions relatively to each other? | Question: Complete astronomy noob over here who would be happy to get a simple answer (and who is also aware that this may not be possible)...
I've learned from a TV documentary that the stars at the edge of the galaxy are not traveling more slowly than the ones closer to the center.
Does this also mean that all stars in a galaxy do not change their positions relative to each other?
To put it simply: If I note the relative position of a star compared to our sun, and then do it again 10 (100 / 1000) years later: Are the coordinates of that star identical or will the star be in a different relative position?
I would assume that the star is in a completely different (relative) place given the speed star systems are travelling because the galaxy is rotating AND the "fact" (read: I don't know if this is really a fact) that the solar system is oscillating through the galactic plane.
Answer: Stars do in fact move relative to one another within galaxies of all types. The orbital period of stars in a typical spiral galaxy (at around the same distance as the Sun is from the center of the Milky Way) is on the order of hundreds of millions of years. For the Sun it's something like 230 million years (source). A particularly old person (~100 years old) will have been around long enough to see something like one ten-thousandth of 1% of the orbit of stars in this region of the galaxy! Most of these stars simply do not move much relative to us in that amount of time.
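That 230-million-year figure is easy to sanity-check (a back-of-envelope sketch I've added, using round values of ~8 kpc for the Sun's galactocentric distance and ~220 km/s for its circular speed):

```python
import math

# (My own back-of-envelope, not from the answer.) Orbital period T = 2*pi*r/v.
r_km = 8.0 * 3.0857e16    # 8 kpc in km (1 kpc ~ 3.0857e16 km)
v_kms = 220.0             # circular speed in km/s
s_per_Myr = 3.156e13      # seconds in a million years

T_Myr = 2 * math.pi * r_km / v_kms / s_per_Myr
print("T = %.0f Myr" % T_Myr)  # roughly 220 Myr
```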
The exception to this view of the motions of the stars is Sagittarius A* (which is not a star at all, but is in fact a supermassive black hole at the center of the Milky Way). The periods of stars traveling around it are on the order of tens of years.
Also, to give you a rough idea of how galaxies look dynamically, here's a link to a particularly nice galaxy simulator implemented in the browser (created by Adrian Price-Whelan). | {
"domain": "astronomy.stackexchange",
"id": 251,
"tags": "star, orbit, galaxy, coordinate"
} |
ApproximateTime Policy of message filter doesn't filter right | Question:
Hi all,
I would like to synchronize some of my sensor data (a Kinect and a webcam) in terms of time. My webcam seems to put everything in an internal memory before transmitting it. Because of that I have a time difference of about 1 sec to the actual recorded scene. I tried to adjust the time-stamp in the header of the webcam message before publishing and then use the message filter package to synchronize them as well as possible to my Kinect images (openni_launch).
Here is my function which grabs an image from the webcam, fills in the header and publishes the data. (It's the source code of the usb_cam with some minor tweaks.)
bool take_and_send_image(){
usb_cam_camera_grab_image(camera_image_);
fillImage(img_, "rgb8", camera_image_->height, camera_image_->width, 3 *camera_image_->width, camera_image_->image);
img_.header.stamp = ros::Time::now();
ROS_INFO("Time now = %i.%i",img_.header.stamp.sec, img_.header.stamp.nsec );
ros::Duration lag(5.0);
ros::Time header_time = ros::Time::now()-lag;
img_.header.stamp = header_time;
info_.header.stamp = img_.header.stamp;
ROS_INFO("Time lag = %i.%i",img_.header.stamp.sec, img_.header.stamp.nsec );
image_pub_.publish(img_, info_);
return true;
}
The output of this code indicates that the minus operation does the right thing.
[ INFO] [1357672295.986516662]: Time now = 1357672295.986394591
[ INFO] [1357672295.986791322]: Time lag = 1357672290.986699769
This is the subscriber, which is pretty much the same as in the tutorial for the message filter: subscribing and calling the callback every time two messages have approximately the same time.
void sync_callback(const ImageConstPtr& webcam_image,const ImageConstPtr& rgb_image){
// save both images to disk
ROS_INFO("syncro callback");
ROS_INFO("Kinect Timestamp= %i.%i",rgb_image->header.stamp.sec,rgb_image->header.stamp.nsec);
ROS_INFO("Webcam Timestamp= %i.%i",webcam_image->header.stamp.sec,webcam_image->header.stamp.nsec);
}
int main(int argc, char **argv){
ros::init(argc, argv, "listener");
ros::NodeHandle nh;
image_transport::ImageTransport it(nh);
image_transport::SubscriberFilter webcam_sub (it, "/panda_cam/image_raw", 20);
image_transport::SubscriberFilter rgb_sub (it, "/camera/rgb/image", 20);
Synchronizer<MySyncPolicy> sync(MySyncPolicy(1000), webcam_sub, rgb_sub);
ROS_INFO("Ready to recieve messages");
sync.registerCallback(boost::bind(&sync_callback, _1, _2));
ros::spin();
return 0;
}
The output of the time is indeed indicating that the messages are 5 seconds behind the actual time.
[ INFO] [1357672295.859623746]: Kinect Timestamp= 1357672290.292021533
[ INFO] [1357672295.859715299]: Webcam Timestamp= 1357672290.301270495
But by observing the images (of a stopwatch) there is no real difference, even when I add 10 or more seconds. The Kinect header is still the original and pretty much on the original time and can be seen as my reference.
It would be great if someone could point me to the mistake.
Originally posted by dinamex on ROS Answers with karma: 447 on 2013-01-08
Post score: 0
Answer:
I answered the question on my own. I made the mistake of expecting that the synchronization takes the actual time into account, which is not the case. Therefore both images (Kinect and webcam) are now behind the time of the stopwatch with respect to my "trigger". By adding a third message as a trigger to the synchronization, the package works as expected.
Originally posted by dinamex with karma: 447 on 2013-01-09
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 12316,
"tags": "ros, message-filter, message-filters, approximatetime"
} |
Is music/sound similarity comparison feasible on neural networks? | Question: I wonder about the following concept:
A given neural network gets two audio inputs (preferably music) and gives a real number between 0 and 1 which describes the "similarity" between the second and the first track.
As far as my understanding of neural networks goes, the problem fits the concept of NNs, as pattern recognition in music can help determine similarities and discrepancies in audio; see voice recognition.
However, due to the nature of long and complex inputs, and the vague nature of learning datasets (how similar, for instance, Diana Ross "It's your move", and The Vaporwave legend "Floral Shoppe" exactly are? 0.9? 0.6? other?), such a network would be extremely slow and convoluted.
Is it possible today to build and train such a model? If yes, what would it look like?
Answer: Yes, it is possible, even if the best approach could be different from neural networks. Anyway, you should extract some significant features from the audio (energy, onsets, root frequencies, and others). Usually, more features than those really needed are extracted and afterwards the most significant are selected through some algorithm (e.g. PCA). In this way you will obtain an array of features (say between 10 and 100 features) with which you can train your NNs.
Note that NNs do not tell you why two audio tracks are similar but only whether they are or not. This is a big disadvantage. Instead, algorithms based on grey-box modeling such as rule- or case-based algorithms (maybe using fuzzy logic) could be more useful, provided that you have a deeper knowledge of the problem.
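The last step, turning a pair of feature arrays into a similarity score in [0, 1], need not involve a network at all. A minimal sketch (my own illustration; the three numbers per track are made-up placeholder features such as energy, onset rate and root-frequency statistics):

```python
import math

# Cosine of two feature vectors, remapped from [-1, 1] to [0, 1].
def similarity(f1, f2):
    dot = sum(a * b for a, b in zip(f1, f2))
    norm = math.sqrt(sum(a * a for a in f1)) * math.sqrt(sum(b * b for b in f2))
    if norm == 0:
        return 0.0
    return 0.5 * (1.0 + dot / norm)

track_a = [0.8, 0.1, 0.3]
track_b = [0.7, 0.2, 0.4]
print(round(similarity(track_a, track_b), 2))  # close to 1 for similar tracks
```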
References and deepening sources: SMC Lab from University of Padua education material | {
"domain": "ai.stackexchange",
"id": 299,
"tags": "neural-networks, deep-learning, pattern-recognition, voice-recognition, similarity"
} |
How is a state function a sum of path function and state function? | Question: We know
H=U+pV
Where
H is the enthalpy (State function)
U is the internal energy (State function)
pV is the work done (Path function)
How is it possible that a State function is the sum of a path function and a state function? An intuitive explanation would be helpful.
Thanks
Answer: $pV$ is not the work. It is an explicit state function.
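To see explicitly how the path-dependent pieces combine into something exact, one short line of bookkeeping (added here) helps: since $dU = \delta Q - p\,dV$ for reversible changes,

$$dH = dU + p\,dV + V\,dp = \delta Q + V\,dp$$

The inexact work term $-p\,dV$ inside $dU$ cancels against the $p\,dV$ from $d(pV)$; at constant pressure this also shows $\delta Q = dH$.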
The work done in a reversible transformation on the system is $-pdV$. Said in a more mathematical way, the problem of the exactness pertains to differential forms, not to functions. | {
"domain": "physics.stackexchange",
"id": 76227,
"tags": "thermodynamics, energy"
} |
pi_face_tracker installation problem | Question:
Hi, I followed the instructions to install the uvc_cam and pi_face_tracker but I have a problem installing the pi_vision package. What should I do? The problem, I think, is that rosmake doesn't build all the packages required:
tim@ubuntu:~/fuerte_workspace/sandbox$ rosmake pi_vision
[ rosmake ] rosmake starting...
[ rosmake ] Packages requested are: ['pi_vision']
[ rosmake ] Logging to directory /home/tim/.ros/rosmake/rosmake_output-20140205-151624
[ rosmake ] Expanded args ['pi_vision'] to:
['ros2opencv', 'pi_face_tracker_gui', 'pi_face_tracker']
[rosmake-0] Starting >>> geometry_msgs [ make ]
[rosmake-0] Finished <<< geometry_msgs No Makefile in package geometry_msgs
[rosmake-0] Starting >>> sensor_msgs [ make ]
[rosmake-0] Finished <<< sensor_msgs No Makefile in package sensor_msgs
[rosmake-0] Starting >>> rosbuild [ make ]
[rosmake-0] Finished <<< rosbuild No Makefile in package rosbuild
[rosmake-0] Starting >>> roslib [ make ]
[rosmake-1] Starting >>> rosconsole [ make ]
[rosmake-2] Starting >>> message_filters [ make ]
[rosmake-0] Finished <<< roslib No Makefile in package roslib
[rosmake-1] Finished <<< rosconsole No Makefile in package rosconsole
[rosmake-1] Starting >>> roslang [ make ]
[rosmake-1] Finished <<< roslang No Makefile in package roslang
[rosmake-3] Starting >>> common_rosdeps [ make ]
[rosmake-2] Finished <<< message_filters No Makefile in package message_filters
[rosmake-1] Starting >>> roscpp [ make ]
[rosmake-0] Starting >>> pluginlib [ make ]
[rosmake-3] Finished <<< common_rosdeps ROS_NOBUILD in package common_rosdeps
[rosmake-1] Finished <<< roscpp No Makefile in package roscpp
[rosmake-0] Finished <<< pluginlib ROS_NOBUILD in package pluginlib
[rosmake-3] Starting >>> ogre [ make ]
[rosmake-3] Finished <<< ogre ROS_NOBUILD in package ogre
[rosmake-3] Starting >>> ogre_tools [ make ]
[rosmake-2] Starting >>> rostest [ make ]
[rosmake-3] Finished <<< ogre_tools ROS_NOBUILD in package ogre_tools
[rosmake-3] Starting >>> orocos_kdl [ make ]
[rosmake-3] Finished <<< orocos_kdl ROS_NOBUILD in package orocos_kdl
[rosmake-2] Finished <<< rostest No Makefile in package rostest
[rosmake-0] Starting >>> image_transport [ make ]
[rosmake-0] Finished <<< image_transport ROS_NOBUILD in package image_transport
[rosmake-0] Starting >>> polled_camera [ make ]
[rosmake-0] Finished <<< polled_camera ROS_NOBUILD in package polled_camera
[rosmake-0] Starting >>> camera_calibration_parsers [ make ]
[rosmake-0] Finished <<< camera_calibration_parsers ROS_NOBUILD in package camera_calibration_parsers
[rosmake-0] Starting >>> bullet [ make ]
[rosmake-2] Starting >>> camera_info_manager [ make ]
[rosmake-2] Finished <<< camera_info_manager ROS_NOBUILD in package camera_info_manager
[rosmake-1] Starting >>> python_orocos_kdl [ make ]
[rosmake-1] Finished <<< python_orocos_kdl ROS_NOBUILD in package python_orocos_kdl
[rosmake-3] Starting >>> kdl [ make ]
[rosmake-0] Finished <<< bullet ROS_NOBUILD in package bullet
[rosmake-1] Starting >>> angles [ make ]
[rosmake-2] Starting >>> rospy [ make ]
[rosmake-2] Finished <<< rospy No Makefile in package rospy
[rosmake-2] Starting >>> mk [ make ]
[rosmake-1] Finished <<< angles ROS_NOBUILD in package angles
[rosmake-1] Starting >>> std_msgs [ make ]
[rosmake-1] Finished <<< std_msgs No Makefile in package std_msgs
[rosmake-1] Starting >>> rosservice [ make ]
[rosmake-0] Starting >>> roswtf [ make ]
[rosmake-3] Finished <<< kdl ROS_NOBUILD in package kdl
No Makefile in package kdl
[rosmake-3] Starting >>> diagnostic_msgs [ make ]
[rosmake-3] Finished <<< diagnostic_msgs No Makefile in package diagnostic_msgs
[rosmake-3] Starting >>> diagnostic_updater [ make ]
[rosmake-1] Finished <<< rosservice No Makefile in package rosservice
[rosmake-2] Finished <<< mk No Makefile in package mk
[rosmake-2] Starting >>> ros2opencv [ make ]
[rosmake-1] Starting >>> dynamic_reconfigure [ make ]
[rosmake-1] Finished <<< dynamic_reconfigure ROS_NOBUILD in package dynamic_reconfigure
[rosmake-0] Finished <<< roswtf No Makefile in package roswtf
[rosmake-3] Finished <<< diagnostic_updater ROS_NOBUILD in package diagnostic_updater
[rosmake-3] Starting >>> self_test [ make ]
[rosmake-3] Finished <<< self_test ROS_NOBUILD in package self_test
[rosmake-3] Starting >>> pi_face_tracker [ make ]
[rosmake-0] Starting >>> tf [ make ]
[rosmake-0] Finished <<< tf ROS_NOBUILD in package tf
[rosmake-0] Starting >>> eigen_conversions [ make ]
[rosmake-1] Starting >>> driver_base [ make ]
[rosmake-0] Finished <<< eigen_conversions ROS_NOBUILD in package eigen_conversions
[rosmake-0] Starting >>> pi_face_tracker_gui [ make ]
[rosmake-1] Finished <<< driver_base ROS_NOBUILD in package driver_base
[rosmake-1] Starting >>> uvc_cam [ make ]
[ rosmake ] All 21 linesos2opencv: 0.7 sec ] [ pi... [ 4 Active 34/39 Complete ]
{-------------------------------------------------------------------------------
mkdir -p bin
cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=`rospack find rosbuild`/rostoolchain.cmake ..
[rosbuild] Building package ros2opencv
Failed to invoke /opt/ros/fuerte/bin/rospack deps-manifests ros2opencv
[rospack] Error: package/stack ros2opencv depends on non-existent package opencv2
CMake Error at /opt/ros/fuerte/share/ros/core/rosbuild/public.cmake:129 (message):
Failed to invoke rospack to get compile flags for package 'ros2opencv'.
Look above for errors from rospack itself. Aborting. Please fix the
broken dependency!
Call Stack (most recent call first):
/opt/ros/fuerte/share/ros/core/rosbuild/public.cmake:203 (rosbuild_invoke_rospack)
CMakeLists.txt:12 (rosbuild_init)
-- Configuring incomplete, errors occurred!
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package ros2opencv written to:
[ rosmake ] /home/tim/.ros/rosmake/rosmake_output-20140205-151624/ros2opencv/build_output.log
[rosmake-2] Finished <<< ros2opencv [FAIL] [ 0.83 seconds ]
[ rosmake ] Halting due to failure in package ros2opencv.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] All 21 linesi_face_tracker: 0.8 sec ]... [ 3 Active 34/39 Complete ]
{-------------------------------------------------------------------------------
mkdir -p bin
cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=`rospack find rosbuild`/rostoolchain.cmake ..
[rosbuild] Building package pi_face_tracker
Failed to invoke /opt/ros/fuerte/bin/rospack deps-manifests pi_face_tracker
[rospack] Error: package/stack ros2opencv depends on non-existent package opencv2
CMake Error at /opt/ros/fuerte/share/ros/core/rosbuild/public.cmake:129 (message):
Failed to invoke rospack to get compile flags for package
'pi_face_tracker'. Look above for errors from rospack itself. Aborting.
Please fix the broken dependency!
Call Stack (most recent call first):
/opt/ros/fuerte/share/ros/core/rosbuild/public.cmake:203 (rosbuild_invoke_rospack)
CMakeLists.txt:12 (rosbuild_init)
-- Configuring incomplete, errors occurred!
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package pi_face_tracker written to:
[ rosmake ] /home/tim/.ros/rosmake/rosmake_output-20140205-151624/pi_face_tracker/build_output.log
[rosmake-3] Finished <<< pi_face_tracker [FAIL] [ 0.83 seconds ]
[ rosmake ] Halting due to failure in package pi_face_tracker.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] All 21 lines
{-------------------------------------------------------------------------------
mkdir -p bin
cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=`rospack find rosbuild`/rostoolchain.cmake ..
[rosbuild] Building package pi_face_tracker_gui
Failed to invoke /opt/ros/fuerte/bin/rospack deps-manifests pi_face_tracker_gui
[rospack] Error: package/stack pi_face_tracker_gui depends on non-existent package rosbridge
CMake Error at /opt/ros/fuerte/share/ros/core/rosbuild/public.cmake:129 (message):
Failed to invoke rospack to get compile flags for package
'pi_face_tracker_gui'. Look above for errors from rospack itself.
Aborting. Please fix the broken dependency!
Call Stack (most recent call first):
/opt/ros/fuerte/share/ros/core/rosbuild/public.cmake:203 (rosbuild_invoke_rospack)
CMakeLists.txt:12 (rosbuild_init)
-- Configuring incomplete, errors occurred!
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package pi_face_tracker_gui written to:
[ rosmake ] /home/tim/.ros/rosmake/rosmake_output-20140205-151624/pi_face_tracker_gui/build_output.log
[rosmake-0] Finished <<< pi_face_tracker_gui [FAIL] [ 0.83 seconds ]
[ rosmake ] Halting due to failure in package pi_face_tracker_gui.
[ rosmake ] Waiting for other threads to complete.
[rosmake-1] Finished <<< uvc_cam [PASS] [ 3.04 seconds ]
[ rosmake ] Results:
[ rosmake ] Built 38 packages with 3 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/tim/.ros/rosmake/rosmake_output-20140205-151624
Thanks
Originally posted by timgivois on ROS Answers with karma: 1 on 2014-02-05
Post score: 0
Answer:
It sounds like you're missing the opencv package. Try:
sudo apt-get install ros-fuerte-opencv
Originally posted by ahendrix with karma: 47576 on 2014-02-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 16895,
"tags": "ros"
} |
what is the meaning of hit time? | Question: Average memory access time = Hit time + Miss rate * miss penalty
Assume a computer with only one cache level. What is the exact meaning of hit time? Is it the number of clock cycles to access data from the cache, or the number of clock cycles to execute an instruction? How does the number of clock cycles per instruction come into this equation?
Answer: First of all, your equation is for a hierarchical cache, where the cache is consulted on every access, irrespective of hit or miss.
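Concretely, with assumed illustrative numbers (1-cycle hit time, 10% miss rate, 100-cycle miss penalty — hypothetical values, not from the question), the two conventions compare as:

```python
# Average memory access time (AMAT), in clock cycles, for one cache level.
hit_time = 1        # cycles to look up the cache (assumed)
miss_rate = 0.10    # fraction of accesses that miss (assumed)
miss_penalty = 100  # cycles to fetch from memory on a miss (assumed)

# Hierarchical access: the cache is always searched first,
# so every access pays the hit time.
amat_hierarchical = hit_time + miss_rate * miss_penalty

# Simultaneous access: cache and memory are probed in parallel,
# so the hit time is only paid on the fraction of accesses that hit.
amat_simultaneous = (1 - miss_rate) * hit_time + miss_rate * miss_penalty

print(amat_hierarchical, amat_simultaneous)  # roughly 11.0 and 10.9
```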
For a simultaneous (parallel-lookup) cache, the first term on the right-hand side additionally gets multiplied by the hit rate. Hit time is nothing but the time taken to sense whether the data is present in the cache. | {
"domain": "cs.stackexchange",
"id": 12637,
"tags": "cpu-cache, memory-access"
} |
Amplitude of E-M wave (e.g. laser) in meters? | Question: The wavelength of an electromagnetic wave is described in terms of distance (e.g. 633 nm). I understand this physically as the distance over which the repeating peak to peak oscillations occur.
I am trying to understand what the amplitude's distance is, e.g. what is the typical distance (in m) the oscillations travel from average to peak (or trough)? I see it generally referred to as intensity, but in the diagrams e.g.
I am trying to understand whether the amplitude has some physical 'height' as well. For a regular He-Ne laser, is it on the order of nanometers? Picometers?
I ask because I am considering light scattering from small particles of, say, 1 nm. I understand the particle can be thought of as a new source of light with the same wavelength and amplitude as the incident light.
What confuses me is what if the amplitude is 'taller' than the max height of the particle? Wouldn't the electrons be 'stuck' at the top of the particle until the electric field vectors come on their way back down... essentially making the scattered wave that looks something like this:
I hope this makes sense. Thanks
Answer: This isn't a wave on a rope. The thing that varies as the wave passes is not the position of something; it is the strength of the electric and magnetic fields. Those are coupled, so it is sufficient to give one, and we usually use the electric field strength. The units of amplitude for an EM wave are therefore volts-per-meter (or joules-per-coulomb, which is equivalent). | {
"domain": "physics.stackexchange",
"id": 39305,
"tags": "waves, electromagnetic-radiation, laser, scattering"
} |
Using dimensional analysis to find the expression for free energy | Question: On page 181 of Thomas Hartman's notes on Quantum Gravity and Black Holes, we have the following:
The thermodynamic free energy $F$ is given by
$$F = - T \log Z,$$
where $Z$ is the partition function.
This can be computed by the Euclidean path integral on $R^{d-1} \times S_{\beta}^{1}$.
At a fixed point, dimensional analysis fixes
$$F(\beta) = - c_{\text{therm}}V_{d-1}T^{d}$$
where $c_{\text{therm}}$ is a dimensionless number.
How does dimensional analysis fix the free energy to be
$$F(\beta) = - c_{\text{therm}}V_{d-1}T^{d}$$
at a fixed point?
Answer: The free energy is an extensive quantity, so it must scale with the volume. Since we are in $d-1$ spatial dimensions, it makes sense to label the volume as $V_{d-1}$. In natural units $\hbar=c=k_B=1$ the dimensions of length are the same as the inverse units of energy, $[L]=[E]^{-1}$. Therefore a $d-1$-dimensional volume has dimensions of $[V_{d-1}]=[L]^{d-1}=[E]^{1-d}$.
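Since in natural units $[T]=[E]$ as well, the two counts combine as
$$[\,c_{\text{therm}}V_{d-1}T^{d}\,] = [E]^{0}\,[E]^{1-d}\,[E]^{d} = [E],$$
which is exactly the dimension required of $F$.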
The free energy is an energy, so it must have dimensions of energy. The only dimensionful scale in the problem (other than the volume) is the temperature, $T$. In natural units $[T]=[E]$. Therefore we must have that $F=c V_{d-1}T^d$, where $c$ is some dimensionless number. | {
"domain": "physics.stackexchange",
"id": 40695,
"tags": "thermodynamics, energy, dimensional-analysis"
} |
Derivation of Schwinger's action principle from the Heisenberg equation and CCR - Why does it work with anticommuting variations? | Question: In the book "Quantum Field Theory I" by Manoukian, section 4.3, from what I understood, he derives Schwinger's quantum action principle only by using unitary time evolution of the field operators. At least I assume it to be like this, because for the proof he looked at the field operators $\hat{\Phi}(x)$ (which obey some unitary time evolution generated by $\hat{H}$), and defined a field $\Pi$ which is supposed to generate variations of $\hat{\Phi}(x)$ and whose time evolution is supposed to be the same. With the variations simply being c-numbers, which are proportional to unity, he then derives Schwinger's variational principle.
In short:
He states that any infinitesimal variation of fields $\Phi(\vec{x}) \rightarrow \Phi(\vec{x}) + \delta \Phi(\vec{x})$ (where $\delta \Phi(\vec{x})$ is a c-number) can be written with a generator
$$
G(t) = \int d^3\vec{x} \delta \Phi(x) \Pi(\vec{x})
$$
With $\Pi$ being the canonical momentum of the field. It should be explicitly noted that the proof only uses c-number variations (which I translate to "$\delta \Phi$ is an ordinary complex number").
The author then shows that you can write
$$
\frac{d}{dt} G(t) = \int d^3\vec{x} \delta ( \dot{\hat{\Phi}} \hat{\Pi} - \mathscr{H}(\hat{\Phi}, \hat{\Pi}))
$$
And following is the variational principle:
$$
G(t_2) - G(t_1) = \delta \int d^3\vec{x} dt ( \dot{\hat{\Phi}} \hat{\Pi} - \mathscr{H}(\hat{\Phi}, \hat{\Pi}))
$$
To be more specific (Manoukian doesn't write the steps down like that; I just assume them to be as follows, with the "generator of the full variation" being $\delta \Phi \hat{\Pi} - \delta \Pi \hat{\Phi}$). I denote operators with a hat to separate them from numerical variations:
$$
\int d^3\vec{x} \delta ( \dot{\hat{\Phi}} \hat{\Pi} - \mathscr{H}(\hat{\Phi}, \hat{\Pi})) = \int d^3\vec{x} \dot{(\delta \Phi )} \hat{\Pi} + \hat{\dot{\Phi}} \delta \Pi - \frac{i}{\hbar}[\delta \Phi \hat{\Pi} - \delta \Pi \hat{\Phi}, \mathscr{H}] = \int d^3\vec{x} \dot{(\delta \Phi )} \hat{\Pi} + \hat{\dot{\Phi}} \delta \Pi + \delta \Phi \hat{\dot{\Pi}} - \delta \Pi \hat{\dot{\Phi}} = \int d^3\vec{x} \dot{(\delta \Phi )} \hat{\Pi} + \delta \Phi \hat{\dot{\Pi}} = \frac{d}{dt} \int d^3\vec{x} \delta \Phi \hat{\Pi}
$$
Where $\delta$ is a simultaneous variation of the fields AND the canonical momenta. This proof uses that $\delta \Phi$ is just a c-number, since the 2nd equality uses that $\delta \Phi$ and $\delta \Pi$ can simply be pulled out of the commutator.
However, later on, the author talks about also using Grassmann variables as field variations, which leads to anticommutators instead of commutators for the field. My question here would be: why can we also use Grassmann-valued variations without breaking the derivation here?
To make it more explicit: Why would
$$\mathscr{H}(\hat{\Phi}+ \delta \Phi, \hat{\Pi} + \delta \Pi) - \mathscr{H}(\hat{\Phi}, \hat{\Pi}) = \hat{\dot{\Phi}} \delta \Pi + \delta \Phi \hat{\dot{\Pi}}$$
hold, even if the variation is not commuting, but anticommuting?
Answer: Even in the case of Grassmann variables, you will still be able to pull the variations to the left of the commutator. This is because the Hamiltonian - while being an operator - is necessarily a c-number-valued operator.
This implies that you can pull the Grassmann-valued variations through the Hamiltonian in the second term of the commutator. Remember, they are themselves not operator-valued. The important point is that c-numbers and Grassmann numbers commute. Therefore, a c-number-valued operator also commutes with the unit operator multiplied by a Grassmann number. | {
"domain": "physics.stackexchange",
"id": 44571,
"tags": "quantum-field-theory, fermions, variational-principle, commutator, grassmann-numbers"
} |
Eigenspinor of helicity of electrons | Question: I am reading the chapter in Griffth's introduction to elementary particle.
By solving the momentum space Dirac equation and requiring the solution of the spinor to be the eigenspinor of the helicity operator, see the questions below.
I worked out the math but I don't understand the solution completely. For example,
If $p_x = p_y = 0$ and $p_z = |\bf{p}|$, it implies that the eigenspinor $u^{(-)}$ does not exist; similarly for the case of $u^{(+)}$ with $p_z =-|\bf{p}|$.
Since helicity is the dot product of momentum and the spin operator, does it imply that the spin of an electron cannot point in the direction exactly opposite to its momentum?
Answer: It seems that you are taking the zero-divided-by-zero too literally as something undefined, whereas in this case it is just a form of writing the solution. You can start with $p_x=p_y=0$ and solve everything from the start or, more conveniently, take a limit where $p_x \to 0^+$ and see that everything behaves properly.
Let us assume that $p_z > 0$ and take $\vec{p} = p_z \hat{z} + p_x\hat{x}$. We can expand everything to leading order in $p_x$, for the case of $u_A^-$. We get that $|p| = \sqrt{p_z^2+p_x^2} \simeq p_z + \frac{p_x^2}{2p_z}$ and then
$$ N = \sqrt{\frac{E+mc^2}{2|p|c(|p|-p_z)}} \simeq \sqrt{\frac{E+mc^2}{2p_z c}} \sqrt{\frac{2p_z}{p_x^2}} = \sqrt{\frac{E+mc^2}{c}}\frac{1}{p_x}$$
where we omitted terms that diverge slower than $1/p_x$. And then we get
$$N u_A = N\begin{bmatrix}|p|-p_z\\p_x\end{bmatrix} \simeq \sqrt{\frac{E+mc^2}{c}}\frac{1}{p_x} \begin{bmatrix}p_x^2/2p_z\\p_x\end{bmatrix} \to \sqrt{\frac{E+mc^2}{c}} \begin{bmatrix}0\\1\end{bmatrix}$$
where we took the limit $p_x\to 0$ at the end. So indeed $u_A^-$ has a spinor representation that we associate with spin down, which makes sense for a particle going in the positive $z$-direction ($p_z>0$) and has negative helicity. | {
"domain": "physics.stackexchange",
"id": 68751,
"tags": "particle-physics, dirac-equation, spinors, helicity"
} |
Vanishing Skew-Hermitian Inner Product | Question: Context
In section 4.3 of "Statistical Mechanics of Nonequilibrium Liquids" by Evans and Morriss, the following identity is noted:
$$ \langle \dot{A} A^* \rangle = 0,$$
with
$$ \langle A B^* \rangle = \int \mathrm{d} \Gamma f_0 A(\Gamma) B^*(\Gamma) = (A,B),$$
and
$$ \dot{A} = iL(A). $$
Here $*$ denotes complex conjugation, and $L$ is Hermitian on the inner product: $(A, L(B)) = (L(A), B),$ so $iL$ is skew-Hermitian.
For completeness, the "true adjoint" on the phase space integral is $L^\dagger = - \mathcal{L}$, and the identity $\mathcal{L}(f_0 B) = f_0 L(B)$ holds for the equilibrium distribution $f_0$.
Attempts at a Solution
It's fairly straightforward to show that the object $ \langle \dot{A} A^* \rangle$ must be pure imaginary:
$$ (A, iLA) = -(iLA,A) = -(A, iLA)^*$$
Alternatively, from an integral point of view:
$$ \int \mathrm{d} \Gamma f_0 \dot{A}(\Gamma) A^*(\Gamma) = \int \mathrm{d} \Gamma f_0 iL({A}(\Gamma)) A^*(\Gamma) \\
=-\int \mathrm{d} \Gamma {A}(\Gamma) i \mathcal{L}(f_0 A^*(\Gamma)) \\
=-\int \mathrm{d} \Gamma {A}(\Gamma) f_0 iL(A^*(\Gamma))\\
=-\int \mathrm{d} \Gamma \,{A}(\Gamma) f_0 \,\dot{A}^*(\Gamma),$$
(noting that $iL$ is real.)
For real functions this solves it, however the identity is given for arbitrary phase functions.
I've searched for some other way of showing, for example, that the inner product is also equal to its conjugate, therefore rendering the expression zero. But I can't find any way of doing so - approaching from the physical properties of Liouvilleans only leads to agreement with the operator algebra. I don't think there's an answer here purely from the operator properties, but if someone with experience in this formulation of classical statistical mechanics can see what's going on, I'd be grateful.
Answer: I'll start with what it says in my go-to reference for this kind of stuff, Theory of Simple Liquids by J-P Hansen and IR McDonald, and then fill in some details. I only have the second edition, but hopefully the same material appears in later editions. In their chapter on "Time-dependent Correlation Functions and Response Functions", they say:
Equation (7.1.10) shows that autocorrelation functions are necessarily
even functions of time. If $A$ is a complex quantity, the
autocorrelation function is conventionally defined as
$$ C_{AA}(t) =
\langle A(t) A^* \rangle \tag{7.1.11}
$$
which ensures that
$C_{AA}(t)$ is a real function of $t$ for all times.
Their eqn (7.1.10) deals with the signatures of the dynamical variables under
time reversal, and applying it to an autocorrelation function gives
\begin{align*}
C_{AA}(t) &= \varepsilon_A \varepsilon_A C_{AA}(-t) = C_{AA}(-t) && \text{even}
\\
&= \langle A(-t)A^*(0) \rangle = \langle A(0) A^*(t) \rangle && \text{stationary}
\\
&= \langle A^*(t) A(0) \rangle
= C_{AA}^*(t) && \text{real}
\end{align*}
since $\varepsilon_A=\pm 1$ according to whether $A$ is an even or odd function of the momenta.
The two properties (being even in time, and being real) seem to be linked, as they should be, because quite generally the autocorrelation function of a complex variable is a Hermitian function
$$
C_{AA}(-t) = C_{AA}^*(t)
$$
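As a quick Monte Carlo illustration of this argument (my own sketch, not from the book), take a 1D oscillator with $H=(p^2+q^2)/2$ and a Gaussian equilibrium $f_0$. The variable $A = q + iq^3$ is even in $p$, so it has a definite time-reversal signature and $\langle \dot A A^*\rangle$ vanishes; $A = q + ip$ mixes signatures, and indeed $\dot A = -iA$ gives $\langle \dot A A^*\rangle = -i\langle |A|^2\rangle = -2i \neq 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
# Equilibrium samples for H = (p^2 + q^2)/2 at unit temperature:
# f_0 is a product of standard Gaussians in q and p.
q = rng.standard_normal(N)
p = rng.standard_normal(N)

def avg_Adot_Astar(A, Adot):
    """Monte Carlo estimate of <Adot A*> over f_0."""
    return np.mean(Adot * np.conj(A))

# Case 1: A = q + i q^3 is even in p (definite signature).
# Hamilton's equations give qdot = p, pdot = -q, so Adot = (1 + 3i q^2) p.
A1 = q + 1j * q**3
A1dot = (1 + 3j * q**2) * p
v1 = avg_Adot_Astar(A1, A1dot)   # consistent with zero, up to sampling noise

# Case 2: A = q + i p has no definite signature.
# Adot = p - i q = -i A, so <Adot A*> = -i <|A|^2> = -2i exactly.
A2 = q + 1j * p
A2dot = p - 1j * q
v2 = avg_Adot_Astar(A2, A2dot)   # close to -2i

print(v1, v2)
```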
The "extra" feature here is that we are discussing dynamical variables
which are functions of position and momentum,
which have a characteristic signature under time reversal.
So it follows that your average $\langle \dot{A} A^*\rangle$,
which is the linear coefficient in a Taylor expansion of $C_{AA}(t)$
about $t=0$,
being both imaginary (as you have shown)
and being an odd-order term in the expansion,
must vanish. | {
"domain": "physics.stackexchange",
"id": 52096,
"tags": "classical-mechanics, statistical-mechanics, operators, hilbert-space, complex-numbers"
} |
Can we diagonalize optomechanical hamiltonian? | Question: The optomechanical hamiltonian is given as
$$\hat{H}=\hbar\Delta_{a}a^{\dagger}a+\hbar\omega_{m}b^{\dagger}b+\hbar g_{a}a^{\dagger}a(b^{\dagger}+b)$$
$a$ and $b$ are photonic and phononic operators and others are numbers.
Can a Bogoliubov transformation be made to such a hamiltonian to find the normal frequencies?
Do we always have to linearize it?
Answer: Since the Hamiltonian in the question is not at most quadratic in bosonic creation and annihilation operators, it is not possible to reduce it to uncoupled form by just a combination of Bogoliubov and displacement transformations.
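It is, however, still exactly diagonalizable: since $[\hat H, a^\dagger a]=0$, each photon-number sector $n$ reduces to a displaced phonon oscillator with eigenvalues $\hbar(\Delta_a n - g_a^2 n^2/\omega_m + m\,\omega_m)$, $m=0,1,2,\dots$. Here is a quick numerical sanity check of that spectrum (my own sketch, with arbitrary assumed parameter values and $\hbar=1$):

```python
import numpy as np

# Assumed parameter values (hbar = 1); arbitrary, for illustration only.
delta_a, omega_m, g_a = 0.7, 1.0, 0.1
n_phonon = 60  # phonon Fock-space truncation

# Phonon annihilation operator b on the truncated Fock space.
b = np.diag(np.sqrt(np.arange(1, n_phonon)), 1)
bdag = b.T
I = np.eye(n_phonon)

errs = []
for n in range(4):  # photon-number sectors n = 0..3
    # In sector n the Hamiltonian is a displaced oscillator:
    # H_n = delta_a*n + omega_m*b^dag b + g_a*n*(b + b^dag)
    H_n = delta_a * n * I + omega_m * (bdag @ b) + g_a * n * (b + bdag)
    ground = np.linalg.eigvalsh(H_n)[0]
    analytic = delta_a * n - g_a**2 * n**2 / omega_m  # m = 0 level
    errs.append(abs(ground - analytic))
max_err = max(errs)
print("max deviation from analytic ground levels:", max_err)
```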
However it can be simplified by a generalized displacement transformation aka the Polaron transformation:
$$\hat{H} \rightarrow \hat{\tilde{H}}=\hat{U}^{\dagger}\hat{H} \hat{U}$$
where
$$\hat{U}=e^{-\frac{g_{a}}{\omega_{m}}\,a^{\dagger}a\left[b^{\dagger}-b\right]}.$$
Using this generalized Polaron transformation, the transformed Hamiltonian acquires a diagonal form as given below
$$e^{\frac{g_{a}}{\omega_{m}}a^{\dagger}a\left[b^{\dagger}-b\right]}\hat{H}\,e^{-\frac{g_{a}}{\omega_{m}}a^{\dagger}a\left[b^{\dagger}-b\right]}=\hbar\Delta_{a}a^{\dagger}a-\hbar\frac{g_{a}^{2}}{\omega_{m}}\left[a^{\dagger}a\right]^{2}+\hbar \omega_{m}b^{\dagger}b.$$ | {
"domain": "physics.stackexchange",
"id": 62848,
"tags": "quantum-mechanics, operators, hamiltonian, quantum-optics"
} |
Filtering out robot parts from pointcloud data | Question:
What is the recommended way of filtering out robot parts (of which URDF is available) in pointclouds? I am aware of the robot_self_filter package, but this still uses the old PointCloud format and has unstable ROS and C++ APIs per the docs, so I thought that with all the recent advances in RGB-D sensors/PCL etc. there might be alternatives.
Originally posted by Stefan Kohlbrecher on ROS Answers with karma: 24361 on 2011-04-05
Post score: 1
Original comments
Comment by Martin Günther on 2011-04-06:
Well, 24 hours without an answer. I would take that as confirmation that the robot_self_filter package is still the recommended way to do it.
Answer:
Hi Martin,
The documentation on the wiki is outdated. The robot_self_filter does subscribe to PointCloud2 messages now.
Originally posted by Sachin Chitta with karma: 1304 on 2011-04-08
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Stefan Kohlbrecher on 2011-04-08:
Oh ok, thanks. That'll teach me to look into the wiki ..better to directly look at source code hehe. | {
"domain": "robotics.stackexchange",
"id": 5293,
"tags": "urdf, pointcloud"
} |
Coverage processing on multiple Java projects with gmake | Question: As part of my data collection, I have to run multiple kinds of coverage processing on multiple Java projects. Below is my main Makefile, intended only for gmake. Portability is not a requirement for me (for this project), but DRY and following make best practices are. Please comment on my code and on any way to make it better.
.PHONY: checkout all clobber clean $(checkout.all)
.SECONDARY: $(addprefix projects/,$(names))
root:=$(CURDIR)
names=$(shell cat etc/projects.txt)
checkout.all=$(addprefix checkout-,$(names))
clean.all=$(addprefix clean-,$(names))
testgen.types=original randoop tpalus
all:
@echo use 'make <type>-<project>'
@echo projects = $(names)
@echo types = $(testgen.types)
checkout: $(checkout.all)
@echo $@ done.
checkout-%: projects/%/pom.xml
@echo $@ done.
projects build: ; mkdir -p $@
projects/%/pom.xml: | projects build
cd projects && $(root)/bin/checkout $*
touch $@
clean: $(clean.all)
@echo $@ done.
clean-%: | projects/%/pom.xml
cd projects/$* && git clean -xfd && git reset --hard
@echo $@ done.
clobber-%:
cd projects && rm -rf $*
#-----------------------------------------------------------------
root:=$(CURDIR)
define testgen =
$1 : $(addprefix $(1)-,$(names))
@echo $$(@) done.
$1-% : projects/%/.$(1).done
@echo $$(@) done.
%/.$1.done : | %/pom.xml
echo $$(MAKE) -C $$(*) root=$$(root) $(1)
endef
$(foreach var,$(testgen.types),$(eval $(call testgen,$(var))))
Some clarification about the intended purpose of this project. I am doing an analysis on a large number of projects from GitHub. My method is to clone each project and run multiple tests on it. Before running any tests, I do a git clean to make sure that the artifacts of previous tests are not present in the project directories. The types of tests and the project names vary frequently, hence I have kept them outside the main Makefile.
Answer: .PHONY: checkout all clobber clean $(checkout.all)
There's no clobber target, so I think that will be ignored.
The root variable is unnecessary (CURDIR should be well known by anyone using make), and is even defined twice.
names=$(shell cat etc/projects.txt)
In my experience it's more common to write out the list in the makefile. If this is so long as to clutter up the file, you could create a separate makefile for variables like this and include variables.mk or similar.
projects build: ; mkdir -p $@
Why not split this into two lines? Also, the -p doesn't make any difference since it's only creating a single directory under the current one.
touch $@
This looks like a hack to get around checkout not updating the modified date properly, or to rebuild a project when make doesn't think it needs to be rebuilt. You might be able to just set the relevant target .PHONY or change it so that rebuilding is only done when necessary.
cd projects/$* && git clean -xfd && git reset --hard
This is rather brutal. I'd rather advise to use non-recursive make (based on Recursive Make Considered Harmful) and enumerate the files to remove to make sure you never remove anything other than generated files. You could have a separate target with cd projects/$* && git clean -xnd to check whether there's anything you still need to clean.
clobber-%:
cd projects && rm -rf $*
This also looks dangerous. You're effectively saying that make clobber-whatever will delete whatever no matter what it is. I can't recall where I heard this advice, but I do believe it's good practice to enumerate the files which makefiles can and should work with, rather than adding general-purpose tactical nuclear missiles.
define testgen =
...
$(foreach var,$(testgen.types),$(eval $(call testgen,$(var))))
There's a lot of indirection going on here, so it's difficult to understand what will actually be run in the end. Are you sure this couldn't be done simpler?
This is a personal preference, but I tend to avoid any echo statements because the make output should make it clear what is actually happening, and comments like this often tend to get out of sync with the rest of the code. On the other hand, you can include special targets to get the value of all/any variables. | {
"domain": "codereview.stackexchange",
"id": 3303,
"tags": "java, git, make, maven"
} |
Logic behind location of shear centre | Question: When we apply a vertical shear force Sy and the structure is symmetric about x axis, then why is it logical to have the position of shear center at the location of intersection of line of action of shear force and x axis?
If we go by the basic definition, the moment about the shear center due to the shear forces should be 0, so placing it on the line of action of the external shear force seems logical; but when we consider the moments about the x-axis, rather than cancelling, I think they add up.
eg. This is the situation.
Now look at the final solution depicting the shear flows
So as I suggested, instead of the moments being equal and opposite, I think they are exactly equal in magnitude and direction. So where is my interpretation going wrong?
EDIT:- I tried to keep the question general and explained accordingly, to be specific the question is as follows:-
The thin-walled single cell beam shown in Fig. 20.11 has been idealized into a combination
of direct stress carrying booms and shear stress only carrying walls. If the
section supports a vertical shear load of 10 kN acting in a vertical plane through booms
3 and 6, calculate the distribution of shear flow around the section.
Boom areas: B1 =B8 =200mm2, B2 =B7 =250mm2, B3 =B6 =400mm2, B4 =
B5 =100mm2.
x is horizontal and y is vertical
It is a closed section beam. An idealized version of an airplane wing
Answer:
why is it logical to have the position of shear center at the location
of intersection of line of action of shear force and x axis?
That statement isn't logical. I think you have misunderstood how the shear centre is defined.
The shear centre is the point such that an applied force passing through the S.C. does not cause any rotation of the section. In other words, if you apply a shear force through the shear centre, it does not cause any torsion in the beam but only bending.
The position of the S.C. depends only on the geometry of the beam section, not on the applied loads.
You can apply a shear force at any point on the beam. If the force does not pass through the S.C., you need to replace it by an equal shear force through the S.C., plus a moment about the S.C. You can then find the deflections and stresses due to bending (caused by the force through the S.C.) and torsion (caused by the moment about the S.C.) separately, and add them together to get the total deflections and stresses.
In your aircraft wing example, the shear forces caused by the aerodynamic lift and drag will act through the centre of pressure of the wing, and the centre of pressure is usually not the same point as the shear centre. The aerodynamic forces will therefore cause a combination of bending and twisting in the wing. | {
"domain": "engineering.stackexchange",
"id": 158,
"tags": "mechanical-engineering, structural-engineering, aerospace-engineering, structures"
} |
Efficiently count distinct in large range | Question: I have a pubsub channel where an event is fired every time a user logs in, and I want to be able to query the unique users in a date range.
Solutions I have thought of:
Put the data in BigQuery and then use APPROX_COUNT_DISTINCT, but it's too expensive
Same as above, plus a cache. Past data doesn't change, so it's a good approach, but still very expensive because I would need to import the pubsub channel into BigQuery
Precompute daily uniques, and then do something very rough like max(range)^log(days)
I was also thinking about storing a 64-byte Bloom filter and a counter per day, then merging the filters in the range and doing some estimation on the count, but I couldn't find any paper on it.
Any better idea?
If it can be helpful we're speaking about 2/3 gb of data per day, around 6 months and growing.
Answer: I think the Google Scholar keywords you're looking for are "sliding window" and "hyperloglog". I found "Cardinalities estimation under sliding time window by sharing HyperLogLog Counter" and "Sliding HyperLogLog: Estimating cardinality in a data stream".
For a simple version, you can store a HyperLogLog sketch for each day and then merge the sketches to get an estimate, but this scales as $m$, the number of days you query.
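A minimal, illustrative sketch of that per-day scheme (my own toy implementation, not production-grade — register count and hash choice are arbitrary assumptions; use a real HLL library for actual workloads):

```python
import hashlib

class HLL:
    """Toy HyperLogLog: one sketch per day; a range query merges daily sketches."""
    def __init__(self, p=12):
        self.p, self.m = p, 1 << p          # m = 2^p registers
        self.regs = [0] * self.m

    def _hash(self, item):
        # 64-bit hash of the item.
        return int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")

    def add(self, item):
        h = self._hash(item)
        idx = h >> (64 - self.p)                       # top p bits -> register index
        rest = h & ((1 << (64 - self.p)) - 1)          # remaining bits
        rank = (64 - self.p) - rest.bit_length() + 1   # leading zeros + 1
        self.regs[idx] = max(self.regs[idx], rank)

    def merge(self, other):
        # Union of two sketches = register-wise max; merging is lossless.
        out = HLL(self.p)
        out.regs = [max(a, b) for a, b in zip(self.regs, other.regs)]
        return out

    def estimate(self):
        alpha = 0.7213 / (1 + 1.079 / self.m)          # bias constant for large m
        return alpha * self.m ** 2 / sum(2.0 ** -r for r in self.regs)

# One tiny sketch per day; querying a date range just merges the daily sketches.
day1, day2 = HLL(), HLL()
for i in range(10_000):
    day1.add(f"user{i}")             # users 0..9999 log in on day 1
for i in range(5_000, 15_000):
    day2.add(f"user{i}")             # overlapping users on day 2
union = day1.merge(day2).estimate()  # true distinct count over the range is 15000
print(round(union))
```

Each sketch is only a few KB, so storing one per day and merging over the query range is cheap; the relative error of the estimate is roughly $1.04/\sqrt{m}$.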
Finally, you could store a Merkle tree of merged HLL sketches, making counting take $\lg_2 m$ HLL merges and one HLL estimation. | {
"domain": "cs.stackexchange",
"id": 20319,
"tags": "counting, bloom-filters"
} |
Sample detection events with TableauSimulator | Question: I've been manually driving stim.TableauSimulator to simulate more complex noise models like in a previous post.
The way I've set up my code is to replace instances of DEPOLARIZE1 in an input circuit with my own arbitrary classical distribution of Pauli errors via the Tableau simulator. I can use TableauSimulator.current_measurement_record to obtain measurement samples, but what's the most efficient way to obtain detector samples (and hence apply error correction)?
I'm aware of stim.Circuit.compile_m2d_converter, but that would require compiling a new circuit with every shot. Is there a better solution?
Answer: You might want to try the new stim.FlipSimulator instead of the tableau simulator. It's much faster, it will compute detection events for you, and you can still do complicated python-driven noise with it. It just needs to be the case that you only care about whether results are flipped, instead of about what their absolute value is. It's still somewhat bare bones, so let me know if it's missing something you need.
To compute detection events with the tableau simulator is quite tricky, because it requires computing the expected value of each detector's measurement set, in order to check if it's flipped. Since you're only changing the noise, you should be able to use circuit.reference_sample() to get this expectation. More generally you may have some way to know a valid noiseless sample for your specific circuit, e.g. it's somewhat common for QEC circuits to have all-zeros as a valid sample. Anyways, you can do something like this:
def compute_detector(
sim: stim.TableauSimulator,
entire_circuit_we_are_running: stim.Circuit,
detector_instruction: stim.CircuitInstruction,
) -> bool:
assert detector_instruction.name == "DETECTOR"
# Don't actually do this for every detector. Cache things.
ref_sample = entire_circuit_we_are_running.reference_sample()
measurement_record = sim.current_measurement_record()
result = False
for t in detector_instruction.targets_copy():
assert t.is_measurement_record_target
k = len(measurement_record) + t.value
result ^= measurement_record[k]
result ^= ref_sample[k]
return result
The main issue with this method is that it calls sim.current_measurement_record() and circuit.reference_sample() for every single detector, which is very inefficient. The reference sample you can just store and pass in, instead of recomputing it each time. But the measurement record changes as you do measurements, so you have to keep pulling it. I should probably add some way to get specific record entries without pulling the whole thing. | {
"domain": "quantumcomputing.stackexchange",
"id": 5211,
"tags": "error-correction, stim"
} |
Why Does Planck's Relation $E=hf$ Imply a Linear Relationship Only for Sinusoidal Frequency Bases? | Question: I have been studying quantum mechanics and I came across Planck's relation which describes the energy $E$ of a photon as being directly proportional to its frequency $f$, with Planck's constant $h$ as the proportionality constant, i.e. $E=hf$.
My question arises from the observation that this linear relationship between energy and frequency seems to hold true specifically when considering sinusoidal bases for frequencies. Why is it that using a sinusoidal frequency base results in a linear relationship in Planck's relation? If we were to use a different basis for frequencies, would we not see a linear relation? What is it about the sinusoidal nature of frequencies that makes this linearity apparent? I'm trying to understand the fundamental reasons behind this and would appreciate any insights into why the energy-frequency relationship in Planck's formula would change, if at all, with a different frequency base.
Answer: For convenience I will be using the reduced Planck constant $\hbar=\frac{h}{2\pi}$ and angular frequency $\omega=2\pi f$ in this answer. Secondly, if you'll indulge me, it's easier to talk about complex exponentials (i.e. functions of the form $c(t)=e^{i\omega t}$ or $c(t)=e^{-i\omega t}$ ) instead of sines and cosines. One can show that these have a period $T=\frac{2\pi}{\omega}$. Of course, sines and cosines can be written in terms of these two functions via Euler's formula.
Thus the question becomes: Why does the Planck relation $E=\hbar \omega$ hold for particles (note: this relation holds for all particles, not just photons) with a $e^{-i\omega t}$ time dependence? Why doesn't a triangle or square wave with angular frequency $\omega$ have an energy $E=\hbar \omega$ as well?
In the Schrödinger picture, a quantum mechanical system with a Hamiltonian $\hat{H}$ and a state $|\psi(t)\rangle$ evolves over time according to the Schrödinger equation: $$\hat{H}|\psi(t)\rangle=i\hbar\frac{d}{dt}|\psi(t)\rangle $$
Furthermore, if a system has a well-defined energy $E$, then the state is an eigenstate of the Hamiltonian such that: $$\hat{H}|\psi (t)\rangle=E|\psi(t)\rangle = i\hbar\frac{d}{dt}|\psi(t)\rangle $$
This differential equation is solved by a state $|\psi(t)\rangle =e^{-i\frac{E}{\hbar}t}|\psi(0)\rangle$ (check for yourself!), where $|\psi(0)\rangle$ is the state of the system at time $t=0$. Thus, we can see that the state oscillates with an angular frequency $\omega=\frac{E}{\hbar}$, giving us Planck's famous relation. What happens when $|\psi(t)\rangle$ is not of this form? In that case $|\psi(t)\rangle$ is not an eigenstate: $$i\hbar\frac{d}{dt}|\psi(t)\rangle \neq E|\psi(t)\rangle$$
In other words, the system does not have a well defined energy. As you correctly point out in the comments, we can decompose any function into sines and cosines of different frequencies. In the same vein, we can decompose any state into eigenstates of the Hamiltonian:
$$|\psi(t)\rangle=A_1e^{-i\frac{E_1}{\hbar}t}|\psi_1(0)\rangle + A_2e^{-i\frac{E_2}{\hbar}t}|\psi_2(0)\rangle + A_3e^{-i\frac{E_3}{\hbar}t}|\psi_3(0)\rangle + \dots$$
In this sense, even though the state doesn't have a single definite energy, the relation between the energy and the frequency is still 'linear' as you describe it. If you double the frequency of every eigenstate, then the energy of every eigenstate will also be doubled. But to reiterate: these states do not have a definite energy. I hope this answers your question.
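As a small numerical check of this time dependence (a sketch, using units where $\hbar=1$ and a made-up two-level Hamiltonian; the numbers are not from the answer):

```python
import numpy as np

hbar = 1.0                         # natural units (an assumption for this sketch)
E1, E2 = 1.0, 2.5                  # toy energy eigenvalues
H = np.diag([E1, E2]).astype(complex)

def evolve(psi0, t):
    # exact propagator e^{-iHt/hbar} for a diagonal Hamiltonian
    return np.exp(-1j * np.diagonal(H) * t / hbar) * psi0

t = 0.7
# an energy eigenstate only picks up a phase, with angular frequency omega = E1/hbar
eig_phase = np.angle(evolve(np.array([1.0, 0.0]), t)[0])

# an equal superposition is not an eigenstate: H psi is not proportional to psi
psi = np.array([1.0, 1.0]) / np.sqrt(2)
Hpsi = H @ psi
```

The first quantity recovers $\omega = E/\hbar$ directly from the accumulated phase; the second shows that a superposition has no single well-defined energy.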
Some final notes:
It is conceivable to construct a universe in which the energy eigenstates don't depend on time according to $e^{-i\omega t}$. In that case, the Schrödinger equation would be of a different form, perhaps with triangle or square waves as solutions like you propose. However, this is not the universe we live in.
This of course does just kick the can further down the road: why is the Schrödinger equation the way it is? It is possible to motivate the form from certain symmetries (i.e. the Hamiltonian, and thus the energy, is related to time translation symmetry. Thus, it shouldn't be too surprising that there is a relation between energy and frequency), but that goes beyond the scope of this answer.
It is often convenient, especially when talking about photons specifically, to use the Heisenberg picture, in which the operators (the fields, the Hamiltonian, the momenta) change over time and the state remains constant. I used the Schrödinger picture mainly for clarity, but suffice it to say the physics are the same.
It should go without saying, but historically the relation $E=\hbar \omega$ was found before the Schrödinger equation. In this case, what we mean by $\omega$ is 'the angular frequency of a sine/cosine/complex exponential' and not 'the angular frequency of a triangle/square/other wave'. As it happens, this is consistent with reality (and thus subsequent developments in quantum mechanics). | {
"domain": "physics.stackexchange",
"id": 100348,
"tags": "quantum-mechanics, energy, frequency, wave-particle-duality"
} |
Can anyone clarify the origin of the radial current in this discussion on the quantum hall effect? | Question: I am trying to understand parts of this webpage on the quantum hall effect, and I am stuck on the part where they talk about Corbino geometry. So we have a conductive 2D annulus and a changing magnetic flux confined to the middle of the annulus.
We will also try to do the experiment in reverse i.e. apply an electric field along the circumference of the disk and measure the current $I$ in the radial direction, as shown in the figure. The radial current is easy to measure - we just measure the amount of charge $\Delta Q$ transferred between the inner and outer edges of the Corbino geometry and obtain the radial current $I = \Delta Q/\Delta T$, where $\Delta T$ is the time over which this is done.
But how do we apply an electric field in the tangential direction? The easiest way to do this is to apply a time-dependent magnetic field in the centre of the disc and use the Faraday effect.
We can calculate the electric field from the changing magnetic field using Faraday’s law as $\oint d{\bf{r}\cdot\bf{E}}=\partial_t \Phi$, where $\Phi$ is the magnetic flux resulting from the field in the center of the disk. Assuming that the electric field depends only on the radius $R$ we find that the resulting tangential electric field is given by
$$ E(R,t)=\frac{1}{2\pi R}\,\partial_t \Phi. $$
Given $I$, we can also calculate the other component of the measurement of the Hall conductance $\sigma_{H}$ i.e. the radial current density $j=I/(2\pi R)$ at the same radius $R$ as we calculated the electric field.
What I don't understand is, what is the origin of the radial current? The magnetic field is confined to the center of the disk, so the electrons are not affected by the Lorentz force due to this $B$-field. The electric field or the EMF due to the $B$-field will cause the electrons to go in one of the azimuthal directions, but not radially, so it's completely unclear why there would be any radial current here at all.
My understanding is that in the usual 2D strip scenario, the quantum Hall effect occurs when there is both an electric field in the longitudinal direction and a magnetic field perpendicular to the strip to provide the Lorentz force (as in the classical Hall effect). Based on the quote above, it doesn't seem like there is a magnetic field perpendicular to the annulus itself (it's only in the center).
Can anyone clarify this to me? Or is the claim that there is radial current false?
Answer: That webpage does not describe the Corbino disc experiment very well:
To observe the quantum Hall effect, the variable magnetic field should be superimposed to a high static magnetic field (of the order of 10 T with a GaAs structure if one wants to observe the $i=2$ plateaux).
The magnetic field should not be confined to the center of the disc but should cross it.
The picture of the Corbino disc setup in that webpage is misleading because to measure the current $I$ one should short the inner and the outer edges of the disc with an ammeter, whereas in the picture the edges are left open.
If you want to read an account of a real Corbino disc experiment, with all the details, you can have a look at the following paper (paywalled):
B. Jeanneret et al., "Observation of the integer quantum Hall effect by magnetic coupling to a Corbino ring", Phys. Rev. B 51 9752
In the above paper instead of measuring the radial short-circuit current, the authors measure the open-circuit voltage, but the principle is the same. | {
"domain": "physics.stackexchange",
"id": 95185,
"tags": "electromagnetism, condensed-matter, maxwell-equations, quantum-hall-effect"
} |
Mixing cold with hot water - How long does it take? | Question: Sometimes the bath for my baby is too hot, so I mix some cold water into it. I open a cold water stream and mix the water with my hand, waiting for the temperature to drop.
But here is something that I don't understand. After doing a really good mix with my hand, I notice that the hot water stays together. It's hard to explain, so check out the image.
Step 1: I fill the bath with hot water
Step 2: I open the cold water to stream into A
Step 3: I close the cold water
Step 4: I mix the water with my hand
After mixing the water, I can end up with the hot water in A and the cold water in B. Don't get me wrong, it's not as hot or as cold as it was at time 0. But still, there is a meaningful amount of hot water in A that is too hot for my baby, and a meaningful amount of water in B that is too cold for my baby.
And this is not the only strange thing. I also like adding color bath drops (they change the water color) to the bath to make it more fun for the baby. The color sinks/mixes with the water much faster than it takes for the cold water and the hot water to reach the same temperature.
So here is my question:
Is it expected?
Can you perform a theoretical experiment in which we mix hot water with cold water and some material that changes the water color, and calculate under what circumstances the material will get evenly distributed in the water before the water temperature will become the same in all the places?
Answer: The reason for this unexpected behavior is that the density of the hot water is lower than that of the cold water, so the hot water floats over the cold water. Even after the color has already been mixed, and even after I mix the water, a concentration of hot water will form in the top layer.
Check this video that proves the density point. | {
"domain": "physics.stackexchange",
"id": 60803,
"tags": "thermodynamics, statistical-mechanics, temperature, estimation"
} |
Are laws of gravity time symmetric? | Question: Time symmetry is often explained by the example of orbiting objects... What I can't find an explanation for is the moment when an object enters into orbit around another object. That clearly breaks time symmetry, since once an object is in orbit (and you reverse time), it will never leave on its own. Does this mean laws of gravitational motion are not time symmetric? Or is there some other explanation (e.g. entropy of the system)?
Answer: If we restrict ourselves to Newtonian gravity, then it is indeed temporally symmetric.
Orbits require the orbiting body to be gravitational bound to the central object---i.e. they must have negative energy (the magnitude of the potential energy is greater than the kinetic) relative to the central body. One way gravity can do this is by exchanging energy with a third body. This is still time-symmetric. (Thanks @MichaelBrown for reminding me).
In general, for something to 'enter' an orbit without such a three-body type interaction, there needs to be a mechanism of dissipating its initial (non-negative) energy. In practice, how this would happen depends on the context. For stellar systems, it could happen from tidal dissipation; for satellites, I guess it could be atmospheric drag; or for a spaceship it could be its engines. | {
"domain": "physics.stackexchange",
"id": 6806,
"tags": "gravity, arrow-of-time"
} |
What kind of string to use for the ice fishing experiment for kids? | Question: Classic ice fishing experiment for kids.
I used nylon because I saw it in a stationery shop, and apparently that doesn't work because I guess it's too slippery. I tried thread from my mom's sewing kit, but that doesn't work either.
I'm not sure I can find yarn or kite string in the stationery shop or nearby, so please suggest alternatives expected to be found in the average household, supermarket, grocery store or convenience store.
Also how long should we wait? The above video takes only 10 seconds while this video takes about 2 minutes (off screen). What factors influence waiting time? Amount of salt? Type of string?
Answer: This resource has an explanation of the behavior, which relies on the salt-dissolved water wicking into the string. This would mean that you need a type of string which will absorb water. It's surprising to me that your nylon string doesn't work (as long as it's thick/low-density enough to absorb water, probably unlike fishing line); is it possible it has a hydrophobic coating?
I would recommend any cotton-based string or twine. Cooking twine and sewing thread is often cotton, if you have access to that. I've had no problems with these.
Edit: I just thought, that you might want to try this not in a cup of water first (like in the link above). You might potentially run into not-string-related problems if your water isn't cold enough, or if your ice gets too submerged when you put the salt on it. It's possible your nylon string is fine. | {
"domain": "chemistry.stackexchange",
"id": 7045,
"tags": "everyday-chemistry, experimental-chemistry, water, home-experiment, melting-point"
} |
Does differential cross section assume no multiple collisions in a material? | Question: The differential cross section definition is often expressed as $\frac{d\sigma}{d\Omega} = \frac{n(\theta,\phi)}{N_{target} j_A}=\frac{n(\theta,\phi)}{p_N \Delta x \space j_A}$
My question concerns how we simply divide by $N_{target}$. I understand the logic of the incoming particle having 'more opportunities to scatter'. However, does this only work if we assume that once a particle has scattered off one target, it goes straight to a detector without any more scatters? This question is only relevant if we assume the target is thick, so we use the $\Delta x$ thickness notation (so multiple successive scatters are possible).
I would think that in cases where a back scatter is incredibly unlikely (say negligible) but a 'sideways' scatter is quite likely, if we account for a particle undergoing multiple scatters then having more target particles would increase the likelihood of 'two successive sideways scatters' and therefore skew the final distribution so that back scatters are no longer negligible.
Therefore am I correct in my assumption that we only include $N_{target}$ as 'more opportunities to scatter once' and not 'more opportunities to also scatter multiple times'?
Answer: The microscopic scattering cross section (and the differential microscopic scattering cross section) applies to one scattering event for one target atom/nucleus. The microscopic differential cross section is the fraction of the microscopic scattering cross section for scattering into a given solid angle.
The cross section for any reaction (fission, scattering, etc.) also is for one specific reaction event for one target atom/nucleus.
I like the following description of the microscopic cross section. Let $\Sigma$ be the macroscopic cross section for a specific interaction, i.e. the inverse of the mean free path. The microscopic cross section $\sigma$ is then defined as $\Sigma/N$ where $N$ is the density of atoms/nuclei in the target.
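As a numerical illustration of the relation between the microscopic cross section, the macroscopic cross section $\Sigma = N\sigma$ (whose inverse is the mean free path), and beam attenuation (all numbers here are hypothetical, not taken from the answer):

```python
import math

N = 6.0e22             # hypothetical atom density, atoms per cm^3
sigma = 2.0e-24        # hypothetical microscopic cross section, cm^2 (2 barns)
Sigma = N * sigma      # macroscopic cross section, 1/cm
mfp = 1.0 / Sigma      # mean free path, cm

def uncollided_fraction(x_cm):
    """Fraction of incident particles reaching depth x without any interaction."""
    return math.exp(-Sigma * x_cm)

frac_at_one_mfp = uncollided_fraction(mfp)
```

Note this exponential only tracks the uncollided beam; as the answer says, following scattered particles through a thick target requires transport or diffusion theory.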
The evaluation of the incident particles as they travel through and interact with a thick target is complicated since scattering does not remove incident particles, and multiple scattering events can occur that change the angular distribution of the incident particles as they move throughout the target. To model neutrons, the transport equation is used, and if the scattering cross section is much greater than the absorption cross sections (and if other conditions apply), diffusion theory (Fick's law) can be used as an approximation.
Other unusual reactions can change the intensity of the incident particles as they interact with a target. For example, for 14 MeV neutrons incident on lead, the dominant reaction is (n, 2n) that effectively doubles the number of incident neutrons. | {
"domain": "physics.stackexchange",
"id": 75099,
"tags": "particle-physics, nuclear-physics, scattering, scattering-cross-section"
} |
RTABMAP estimate normals with external odometry | Question:
Is it possible to use external odometry to estimate normals with RTABMAP?
Will the normals then be estimated based on the odometry from when the points were captured?
I know icp_odometry has some parameters for normal estimation, but I have not found any examples using them.
My goal is to use Intel RealSense D435 and T265 cameras to scan a box from five sides and use the normals to decide the TCP orientation for a robot so that its tool is perpendicular to the surface.
Originally posted by MRRobot on ROS Answers with karma: 60 on 2021-03-16
Post score: 0
Answer:
Normals are inverted if needed based on the current point of view.
Originally posted by matlabbe with karma: 6409 on 2021-03-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by MRRobot on 2021-03-22:
Do I understand it correctly that RTABMAP will orient all normals according to the current pov and not store the pov from when the point was captured?
If I were to scan one side of a box and then scan the opposite side, all surface normals would have the same direction and be oriented according to the last side, and not in opposite directions?
Comment by matlabbe on 2021-03-22:
/rtabmap/cloud_map doesn't have normals published. When doing File->Export Clouds, normals are adjusted depending on each camera point of view. See adjustNormalsToViewPoints(). For each point, we search the camera looking at it, then flip the normal if needed.
Comment by MRRobot on 2021-03-22:
I see. Thank you for the clarification! | {
"domain": "robotics.stackexchange",
"id": 36205,
"tags": "ros, slam, navigation, rtabmap"
} |
Scala: drop elements in Sequence | Question: Can I reduce the loop inside my method with map, or is there a shorter implementation of this code that drops elements from a sequence?
// Method that drops one element in sequence
def drop_Val_In_List[String](ls: Seq[String], value: String): Seq[String] = {
val index = ls.indexOf(value) //index is -1 if there is no match
if (index < 0) {
ls
} else if (index == 0) {
ls.tail
} else {
// splitAt keeps the matching element in the second group
val (a, b) = ls.splitAt(index)
a ++ b.tail
}
}
val KeepCols = Seq("id", "type", "month", "car", "road")
// Generalization of the above method to drop multiple elements
def drop_Val_List_In_List[String](ls: Seq[String], in_ls: Seq[String]): Seq[String] = {
var tmp_ = ls
//println(tmp.getClass)
for(x <- in_ls ){ // should work without var x
tmp_ = drop_Val_In_List(tmp_, x)
}
tmp_;
}
val tmp = drop_Val_List_In_List(KeepCols, List("id", "type", "month"))
Answer: Follow the naming convention of the language
For Scala, this is here. Spell methods and variables in lower camel case:
drop_Val_In_List
drop_Val_List_In_List
in_ls
Use the most appropriate data structure
To me, it seems like you want a set, not a sequence/list. Set also has the method you want:
val KeepCols = Set("id", "type", "month", "car", "road")
val tmp = KeepCols.diff(Set("id", "type", "month"))
If you don't want to use a set, Seq also has diff. | {
"domain": "codereview.stackexchange",
"id": 39790,
"tags": "scala"
} |
Application of ideas from graph theory in machine learning | Question: I work with neural networks (ConvNNs, DeepNNs, RNNs/LSTMs) for image segmentation and recognition and Genetic Algorithms for some optimization problems. Recently I started to learn some deep graph theory ideas (random graphs, chromatic numbers, graph coloring). I'm familiar with combinatorics at undergrad level. Are there any existing interesting applications and areas of research of graph theory and combinatorics in ML?
Answer: Graphs are a very flexible form of data representation, and therefore have been applied to machine learning in many different ways in the past. You can take a look at the papers submitted to specialized conferences like S+SSPR (The joint IAPR International Workshops on Structural and Syntactic Pattern Recognition and Statistical Techniques in Pattern Recognition) and GBR (Workshop on Graph-based Representations in Pattern Recognition) to start getting a good idea of potential applications. Some examples:
Within the Computer Vision field, graphs have been used to extract structure information that can later on be used on several applications, like for instance object recognition and detection, image segmentation and so on.
Spectral clustering is an example of clustering method based on graph theory. It makes use of the eigenvalues of the similarity matrix to combine clustering and dimensionality reduction.
Random walks may be used to predict and recommend links in social networks or to rank webpages by relevance. | {
"domain": "datascience.stackexchange",
"id": 724,
"tags": "neural-network, graphs, reference-request"
} |
Charge not in center of spherical cavity of a conductor | Question: I have seen the well known example of a charge $ Q $ placed in the center of a spherical cavity of radius $R $of a conductor. We can then say that the inner wall has a charge of $-Q $ and we can find the electric field $E$ by applying Gauss' law for a sphere of radius $r\lt R$.
So what if the charge Q is not in the center of the sphere but at some point of radius $a\lt R$ ? The charge of the inner wall will again be $-Q$ (we find this by applying Gauss' law for a sphere of radius $R' \gt R$ and noting that inside conductors $E=0$).
What is the best way to compute the electric field and the electric potential function though? Is it with respect to a position vector (x,y,z) or can we use other coordinates to simplify? I guess using the radius doesn't make sense as the problem is not symmetric anymore.
Assume that everything outside the cavity is from the same conducting material, with potential $V_0=0$
Answer: Ok, I think I found the answer using image charges.
For finding the potential, we can do it by using an image charge $q'$ placed appropriately, such that the potential from $Q$ and $q'$ is zero everywhere on the surface of the conducting spherical cavity.
From a well known example, if we have a conducting sphere of radius R and a point charge with distance d from the center, then we can achieve zero potential on the surface of the sphere by placing an image charge $Q'=-QR/d$ at a distance $d'= R^2/d$. So if we invert the charges and their positions we have what we're looking for:
We place an image charge $q'=-QR/a$ at distance $d=R^2/a$ from the center of the cavity. Now the potential on the surface is zero, as is the real potential everywhere outside the sphere. Now we can use the formula $$V(x,y,z)={1\over 4 \pi \epsilon_0 }({q\over r_q}+{q'\over r_q'})$$
to find the potential function inside the cavity, taking (0,0,0) to be the center:
$$ V(x,y,z)={1\over 4 \pi \epsilon_0 }({q\over \sqrt{(x-a)^2+y^2+z^2}}-{qR\over a \sqrt{(x-d)^2+y^2+z^2}})$$ for $\sqrt{x^2+y^2+z^2} \le R$
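A quick numerical check of this construction (sample values $Q=1$, $R=1$, $a=0.4$ and units with $4\pi\epsilon_0=1$ are assumptions for illustration): with the image charge $q'=-QR/a$ at $d=R^2/a$, matching the potential formula above, the potential should vanish everywhere on the cavity wall.

```python
import math

Q, R, a = 1.0, 1.0, 0.4     # real charge Q at distance a inside a cavity of radius R
q_img = -Q * R / a          # image charge
d = R ** 2 / a              # image charge position, outside the cavity

def V(x, y, z):
    # total potential of the real charge plus its image (units with 4*pi*eps0 = 1)
    r_q = math.dist((x, y, z), (a, 0.0, 0.0))
    r_i = math.dist((x, y, z), (d, 0.0, 0.0))
    return Q / r_q + q_img / r_i

# sample points on the cavity wall (radius R): the potential vanishes there
surface_vals = [V(R * math.cos(th), R * math.sin(th), 0.0)
                for th in (0.0, 1.0, 2.0, math.pi)]
```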
After that we easily find the field from $E=-\nabla V$ | {
"domain": "physics.stackexchange",
"id": 40198,
"tags": "electromagnetism, conductors"
} |
Debunking Tesla's argument against general relativity | Question: Nikola Tesla didn't believe in relativity. More historical context here. He made the following argument against general relativity in a 1931 interview with Hugo Gernsback:
Tesla contradicts a part of the relativity theory emphatically, holding that mass is unalterable; otherwise, energy could be produced from nothing, since the kinetic energy acquired in the fall of a body would be greater than that necessary to lift it at a small velocity.
Filling in some of the steps in the logic, it seems that he is assuming that the passive gravitational mass equals the relativistic mass, and therefore the force of the earth's gravity during the fall would be greater than the force during the slow lifting. That means the work around a closed path would be nonzero.
Is there a simple explanation for this? It's easy to pick holes in the argument, since GR doesn't describe gravity as a force, and gravitational interactions depend on the stress-energy tensor, not the mass-energy. But that doesn't feel like a complete resolution of the issue to me.
The relation $W=\int F dx$ is exact in special relativity if $F$ is the three-force, since $dE/dx=(dE/dp)(dp/dt)(dt/dx)$, and $dE/dp=p/E$. However, this seems ambiguous in the context of GR, since, e.g., the tension in a hanging piece of rope is subject to a correction equal to the gravitational redshift factor evaluated between the ends.
A possible way to get at this would be to imagine a slightly different scenario that may be simpler to reason about. We shoot a test particle straight down into a planet's gravity well with relativistic energy $E_1$. At the bottom, we reduce its speed to exactly escape velocity, extracting energy $E_2$ from it, and then reflect it back up, so that it rises back up with zero total energy. Then Tesla's argument would seem to be that $E_2>E_1$, which seems unlikely since we have a conserved energy for the geodesic motion of a test particle in this field.
Answer: For a stationary metric with an asymptotically timelike Killing field, the contraction of the Killing field and any geodesic four-velocity is a constant quantity $E$ that could reasonably be called the "mechanical energy", which is conserved in the absence of external forces. For simplicity, let's consider the outside of a spherically symmetric matter distribution, which is described by the Schwarzschild metric. A standard calculation gives that $E = (1 - 2 G M / r)\ dt/d\tau$ is conserved in this region.
When faced with an apparent violation of conservation of energy, the best way to see the problem is usually to try to come up with some kind of cycle that exploits the apparent violation to provide an infinite source of energy. If we shoot a particle in from infinity at zero velocity, the energy $E$ will equal the initial $\gamma := dt/d\tau$ factor $\gamma_0 = 1$, so that $\gamma = 1/(1 - 2 G M / r) = r/(r - r_*)$ along the fall, where $r_*$ is the Schwarzschild radius $2 G M$. As it falls inward, the $(1 - 2 G M / r)$ factor will decrease from $1$ and the $\gamma$ factor will increase accordingly. We can roughly think of $\gamma$ as being like the "rest plus kinetic energy" and $(1 - 2 G M / r)$ as being like the "potential energy" (except that their product rather than their sum is conserved). We see that, precisely as in the nonrelativistic situation, any energy we extract by bouncing the particle off a piston or something will decrease the maximum radius that it can bounce back to, so we will extract less and less energy with each bounce, with a finite limit on the total energy.
Carroll explains what's going on very well on pg. 208 of his textbook:
The energy of a particle with four-momentum $p^\mu$, as measured by an observer with four-velocity $U_\mu$, would be $-p_\mu U^\mu$. This is not equal, or even proportional, to [the conserved quantity E], even if the observer is taken to be static ($U_i = 0$) .... $-p_\mu U^\mu$ may be thought of as the inertial/kinetic energy of the particle, while $[E =\, ] p_\mu K^\mu$ is the total conserved energy, including the potential energy due to the gravitational field. The notion of gravitational potential energy is not always well-defined, but the total energy is well-defined in the presence of a timelike Killing vector.
Tesla was only considering the "rest plus kinetic energy" $\gamma$, which is indeed not conserved, even when added to any possible potential $V(r)$. But he was failing to consider the "gravitational potential energy" $(1 - 2 G M / r)$, because in GR you have the extremely non-Newtonian property that the product rather than the sum of the "kinetic energy" and "potential energy" is conserved. | {
"domain": "physics.stackexchange",
"id": 42463,
"tags": "general-relativity"
} |
What is the meaning of the anti-commutator term in the uncertainty principle? | Question: What is the meaning, mathematical or physical, of the anti-commutator term?
$$\langle ( \Delta A )^{2} \rangle \langle ( \Delta B )^{2} \rangle \geq \dfrac{1}{4} \vert \langle [ A,B ] \rangle \vert^{2} + \dfrac{1}{4} \vert \langle \{ \Delta A, \Delta B \} \rangle \vert^{2},$$
where $\Delta A, \Delta B, A$ and $ B$ are operators.
The inequality is still true, and the anti-commutator term "strengthens" the inequality, but why does it appear?
Answer: Dear Rodrigo, it's an interesting stronger version of the uncertainty principle for general operators $A,B$ that I've never seen before but I just verified it holds. Just to be sure, the anticommutator is simply
$$\{A,B\}\equiv AB+BA.$$
I like when the braces are only used for pairs of Grassmannian objects but people use it as a bookkeeping device to simplify $AB+BA$ in all situations. Nothing difficult about the notation. Note that the commutator and anticommutator appear totally symmetrically in the inequality, a fact we will derive.
To see why the stronger inequality holds, open Wikipedia here
http://en.wikipedia.org/wiki/Uncertainty_principle#Mathematical_derivations
where only the simpler version of the inequality (without the squared anticommutator) is proved by combining two inequalities. The first one,
$$ ||A\psi||^2 ||B\psi||^2 \geq |\langle A\psi|B\psi\rangle|^2 $$
remains unchanged. However, the second inequality from the Wikipedia article may be strengthened to a full-fledged equality
$$ |\langle A\psi|B\psi\rangle|^2 = \left| \frac{1}{2i} \langle \psi | AB-BA | \psi \rangle \right|^2 + \left| \frac{1}{2} \langle \psi | AB+BA | \psi \rangle \right|^2 $$
This identity simply says that the squared absolute value of a complex number is the sum of the squared real part and the squared imaginary part (which was omitted on Wikipedia). Combining the previous two inequalities, one gets your "stronger" uncertainty principle.
(Of course, the equation derived above is uselessly weak unless the expectation values of $A,B$ vanish themselves. It can be strengthened into yours by repeating the same procedure for $\Delta A = A-\langle A\rangle$ and similarly $\Delta B = B-\langle B\rangle$ instead of $A,B$.)
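A numerical sanity check of the strengthened inequality, using random Hermitian operators and a random normalized state (the dimension and seed are arbitrary choices for this sketch):

```python
import numpy as np

rng = np.random.default_rng(7)

def rand_herm(n):
    # random Hermitian matrix
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

n = 4
A, B = rand_herm(n), rand_herm(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

def ev(O):
    return np.vdot(psi, O @ psi)    # expectation value <psi|O|psi>

dA = A - ev(A).real * np.eye(n)     # Delta A = A - <A>
dB = B - ev(B).real * np.eye(n)

lhs = ev(dA @ dA).real * ev(dB @ dB).real
comm_term = 0.25 * abs(ev(dA @ dB - dB @ dA)) ** 2
anti_term = 0.25 * abs(ev(dA @ dB + dB @ dA)) ** 2
```

Since $\langle[\Delta A,\Delta B]\rangle$ is purely imaginary and $\langle\{\Delta A,\Delta B\}\rangle$ purely real for Hermitian operators, the two terms sum to exactly $|\langle\Delta A\,\Delta B\rangle|^2$, which the Cauchy-Schwarz step bounds by the left-hand side.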
I wrote what the anticommutator means mathematically and why the inequality is true. Now, what does the anticommutator term mean physically? I don't know what this question means. It's a term in an equation that I can read and explain for you again. The precise answers in physics are given by mathematics. So I guess that the answer you want to hear is that it means nothing physically, it's just pure mathematics. This fact doesn't mean that it can't be useful.
Well, in normal cases, the stronger version is not "terribly" useful because the anticommutator term is only nonzero if there is a "correlation" in the distributions of $A,B$ - i.e. if the distribution is "tilted" in the $A,B$ plane rather than similar to a vertical-horizontal ellipse, which is usually the case in simple wave packets etc. Maybe this is what you wanted to hear as the physical explanation of the anticommutator term: because $AB+BA$ is just twice the Hermitian part of $AB$, it measures the correlation of $A,B$ in the distribution given by the wave function, although the precise meaning of these words has to be determined by the formula.
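None of this is in the original answer, but the identity and the resulting inequality are easy to check numerically. The sketch below tests both on random Hermitian matrices and a random normalized state (NumPy assumed available):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    # Build a random Hermitian matrix as (X + X^dagger) / 2
    x = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (x + x.conj().T) / 2

n = 4
A = random_hermitian(n)
B = random_hermitian(n)

# Random normalized state vector |psi>
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

def expval(op):
    return psi.conj() @ op @ psi

# Shifted operators dA = A - <A>, dB = B - <B>
dA = A - expval(A) * np.eye(n)
dB = B - expval(B) * np.eye(n)

lhs = expval(dA @ dA).real * expval(dB @ dB).real        # (Delta A)^2 (Delta B)^2
comm = abs(expval(dA @ dB - dB @ dA) / 2j) ** 2          # commutator term
anti = abs(expval(dA @ dB + dB @ dA) / 2) ** 2           # anticommutator term
middle = abs(expval(dA @ dB)) ** 2                       # |<dA psi|dB psi>|^2

# The middle step is an exact identity (real^2 + imaginary^2) ...
assert np.isclose(middle, comm + anti)
# ... and Cauchy-Schwarz turns the whole chain into an inequality
assert lhs + 1e-9 >= comm + anti
print("stronger uncertainty relation holds:", lhs, ">=", comm + anti)
```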
"domain": "physics.stackexchange",
"id": 362,
"tags": "quantum-mechanics, operators, heisenberg-uncertainty-principle, anticommutator"
} |
Proving that the weak hypercharge gauge field is not A | Question: Under the electroweak gauge group $SU(2)_L \times U(1)_Y$ one identifies the 4 gauge fields $W^+, W^-, W^0, B$. After symmetry breaking, $W^0$ and $B$ mix to give the observed fields $Z^0$ and $A$. Is there an intuitive argument showing immediately that $A$ cannot be identified with $B$?
Answer: Yes--- the electric charge is unbroken, from the zero mass of the photon, so B would have to be unbroken. But B commutes with all the generators of the SU(2), so all the electroweak doublets would have the same electric charge. But this is impossible, as the only things in a family with a given electric charge are unique--- they can't make an SU(2) doublet with anything else. | {
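A compact way to restate this (using the standard charge formula, added here for clarity): the unbroken electric charge is
$$Q = T_3 + \frac{Y}{2},$$
so the two members of an $SU(2)_L$ doublet, which share the same hypercharge $Y$ but have $T_3 = \pm\tfrac{1}{2}$, always differ in electric charge by one unit. If $A$ were $B$, the unbroken generator would be $Q = Y/2$, which commutes with $SU(2)_L$, and both members of every doublet would carry equal charge - contradicted already by the $(\nu_e, e^-)_L$ doublet.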
"domain": "physics.stackexchange",
"id": 3478,
"tags": "quantum-field-theory"
} |
Palindrome checker program | Question: I've been studying Java for some days and I'm just now starting to make some simple programs. I'd like to know if there are any 'noob' mistakes or habits that I could avoid.
In other words: how could my code be more streamlined to Java?
My code:
import java.util.Scanner;
public class Main
{
private static String reverse_string(String my_string)
{
String reversed_string = "";
for(int j = my_string.length() - 1; j >= 0; j--)
{
reversed_string = reversed_string + my_string.charAt(j);
}
return reversed_string;
}
public static void main(String[] args)
{
System.out.print("Insert a 'String': ");
Scanner input = new Scanner(System.in);
String user_string = input.nextLine().toLowerCase().replace(" ", "");
if (user_string.equals(reverse_string(user_string)))
{
System.out.println("It is a palindrome.");
} else
{
System.out.println("It is not a palindrome.");
}
}
}
I believe there are already other alternatives to my reverse_string function, hence the 'reinventing the wheel' tag.
Answer: To answer your comment, yes there are equivalent of PEP8 in java. I suggest checkstyle, this plugin works in lots of IDE / text editors and can be configured with your code style.
Code review
reverse_string method
In Java, we use the upper-case version of snake case only for constants / enums. For methods and variables, I suggest that you use camel case.
Before
private static String reverse_string(String my_string)
{
//[..]
}
After
private static String reverseString(String myString)
{
//[..]
}
When concatenating Strings in a loop, it's generally recommended to use a java.lang.StringBuilder to improve performance: it avoids allocating a new String object on every iteration.
private static String reverseString(String myString)
{
StringBuilder reversedString = new StringBuilder();
for (int j = myString.length() - 1; j >= 0; j--)
{
reversedString.append(myString.charAt(j));
}
return reversedString.toString();
}
If you prefer not to use it, then I suggest the += operator instead of the explicit concatenation and assignment; it will give the same result and make the code shorter.
Other observations
The code was missing a bit of formatting, but nothing too important. I suggest that you pick a formatter for your style (Horstmann style) in your IDE / text editor.
Refactored code
private static String reverseString(String myString)
{
String reversedString = "";
for (int j = myString.length() - 1; j >= 0; j--)
{
reversedString += myString.charAt(j);
}
return reversedString;
}
public static void main(String[] args)
{
System.out.print("Insert a 'String': ");
Scanner input = new Scanner(System.in);
String userString = input.nextLine().toLowerCase().replace(" ", "");
if (userString.equals(reverseString(userString)))
{
System.out.println("It is a palindrome.");
}
else
{
System.out.println("It is not a palindrome.");
}
} | {
"domain": "codereview.stackexchange",
"id": 37467,
"tags": "java, beginner, reinventing-the-wheel, palindrome"
} |
White blood cells after dealing with an infection | Question: I just have a quick question. If there was an infection of a tissue within the body, white blood cells would leave the capillaries around the tissue and enter the tissues to help cure the infection. How would these cells leave the tissue after that? Would the white blood cells be broken down or would they manage to find their way back into the capillary that they came from?
Answer: White blood cells, especially neutrophils, rapidly leave the blood and migrate in tissues toward infections (or other forms of stimulation). Many of them do die at the location of an infection, but they can also migrate back. The simplest route for this is to follow lymph flow. Lymphatic fluid is picked up by lymphatic vessels, which work in parallel to blood vessels, and from there the cells can move back into blood. Just as an example of migration through lymphatics:
Thus, we provide in vivo evidence that neutrophils, like DCs or inflammatory monocytes, migrate via afferent lymphatics to lymphoid tissue and can shuttle live microorganisms.
--Neutrophils rapidly migrate via lymphatics after Mycobacterium bovis BCG intradermal vaccination and shuttle live bacilli to the draining lymph nodes | {
"domain": "biology.stackexchange",
"id": 9187,
"tags": "human-biology, immunology, tissue"
} |
Using ROS Melodic with Python 3 | Question:
I'm trying to install ROS Melodic following the installation instructions. I installed the desktop configuration, but when I tried step 1.6, the rosdep installation, it doesn't work for python3-rosdep. Is this the right way of doing it?
I'm using Ubuntu 18.04
$ sudo apt install python3-rosdep
[sudo] password for user:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
python3-rosdep : Depends: python3-catkin-pkg but it is not going to be installed
Depends: python3-rosdistro but it is not going to be installed
Depends: python3-rospkg but it is not going to be installed
Depends: python3-rosdep-modules (>= 0.18.0) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
EDIT:
If I try to install python3-rospkg it will attempt to remove a lot of ros-* packages I just installed
Originally posted by gustavo.velascoh on ROS Answers with karma: 756 on 2020-04-01
Post score: 0
Answer:
The binary packages provided by the buildfarm target Python 2 for ROS Melodic. Melodic does not support Python 3 out-of-the-box. See REP-3: Target platforms.
The first ROS 1 release with official support for Python 3 will be ROS Noetic Ninjemys (ros-infrastructure/rep#202). See also wiki/UsingPython3 and wiki/noetic/Migration.
If you need Python 3 with Melodic, you'll have to build it from source. See #q237613 for a Q&A with some example workflows.
Note: building (base) ROS from source means you cannot use apt any more to install additional packages (as those will only work with Python 2).
Originally posted by gvdhoorn with karma: 86574 on 2020-04-01
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by gustavo.velascoh on 2020-04-01:
@gvdhoorn Thanks for your explanation. It is clear now. | {
"domain": "robotics.stackexchange",
"id": 34678,
"tags": "ros, ros-melodic, rosdep, python3"
} |
MoveIt add collision box coordinates system understanding | Question:
Hey there,
Can anyone help me understand the MoveIt coordinate system when adding collisions? I am attaching the code and I want an explanation of one line.
Src of code : Moveit tutorials : moveit_tutorials/doc/collision_environments/scripts/collision_scene_example.py
from __future__ import print_function
import rospy
from moveit_commander import RobotCommander, PlanningSceneInterface
import geometry_msgs.msg
import time
import sys
class CollisionSceneExample(object):
def __init__(self):
self._scene = PlanningSceneInterface()
# clear the scene
self._scene.remove_world_object()
self.robot = RobotCommander()
# pause to wait for rviz to load
print("============ Waiting while RVIZ displays the scene with obstacles...")
# TODO: need to replace this sleep by explicitly waiting for the scene to be updated.
rospy.sleep(2)
def add_one_box(self):
box1_pose = [0.25, 0.25, 0.0, 0, 0, 0, 1]
box1_dimensions = [0.25, 0.25, 0.75]
self.add_box_object("box1", box1_dimensions, box1_pose)
I want the explanation of below line:
box1_pose = [0.25, 0.25, 0.0, 0, 0, 0, 1] # Over here arg_1 = X pos, arg_2 = Y pos, arg_3 = Z pos, What about others?
Thanks for helping!
Originally posted by Ranjit Kathiriya on ROS Answers with karma: 1622 on 2021-04-01
Post score: 0
Original comments
Comment by gvdhoorn on 2021-04-02:
Closed for the following reason spam or advertising
I doubt this is "spam or advertising".
In addition, we don't close questions here on ROS Answers when they've been answered.
Simply click on the checkmark of the answer you feel best answers your question. It will turn green, and will clearly mark the question as answered.
Comment by Ranjit Kathiriya on 2021-04-02:
Actually, this question was silly. I have not seen a proper code and posted a question, that was the reason for closing this question.
Comment by fvd on 2021-04-02:
I understand, but that's how Q&A sites work. It's rare that someone is the only one to have a question, and hopefully the next person who has this one will google it and find it answered for them.
Comment by Ranjit Kathiriya on 2021-04-02:
Okay! understood and thanks for the help! @gvdhroom and @fvd
Answer:
The function add_box_object is defined further down in the same file, and it shows that those are the quaternion components that define the orientation:
def add_box_object(self, name, dimensions, pose):
p = geometry_msgs.msg.PoseStamped()
p.header.frame_id = self.robot.get_planning_frame()
p.header.stamp = rospy.Time.now()
p.pose.position.x = pose[0]
p.pose.position.y = pose[1]
p.pose.position.z = pose[2]
p.pose.orientation.x = pose[3]
p.pose.orientation.y = pose[4]
p.pose.orientation.z = pose[5]
p.pose.orientation.w = pose[6]
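So elements 4 to 7 of the pose list are the quaternion $(x, y, z, w)$ encoding the box's orientation, and [0, 0, 0, 1] is the identity rotation. As a sketch (not part of the original tutorial; in a real ROS node you would typically call tf.transformations.quaternion_from_euler instead), here is how such a pose list could be built from roll/pitch/yaw angles using the plain ZYX conversion formula:

```python
import math

def quaternion_from_euler(roll, pitch, yaw):
    """Convert roll/pitch/yaw (radians) to a quaternion [x, y, z, w].

    Standard ZYX (yaw-pitch-roll) convention; check that it matches
    the convention your application expects before relying on it.
    """
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return [
        sr * cp * cy - cr * sp * sy,  # x
        cr * sp * cy + sr * cp * sy,  # y
        cr * cp * sy - sr * sp * cy,  # z
        cr * cp * cy + sr * sp * sy,  # w
    ]

# A box at (0.25, 0.25, 0.0) rotated 90 degrees about the z axis:
box_pose = [0.25, 0.25, 0.0] + quaternion_from_euler(0, 0, math.pi / 2)
print(box_pose)  # last four entries are the orientation quaternion
```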
Originally posted by fvd with karma: 2180 on 2021-04-01
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Ranjit Kathiriya on 2021-04-01:
Sorry! for the silly question. How can I forget that. Omg! Really have to take a rest. Haha
Comment by Ranjit Kathiriya on 2021-04-01:
Thanks for helping!
Comment by Ranjit Kathiriya on 2021-04-01:
Can I close this question? I don't think it will be helpful | {
"domain": "robotics.stackexchange",
"id": 36268,
"tags": "ros, moveit"
} |
Code organization for .NET solution | Question: I am going through one of my class library projects, and while nothing is wrong with it, I am finding myself being a bit anal, and wanting to organize things a bit differently.
The project in particular that I am looking at reorganizing, is a class library to hook into a multitude of databases, and do work on them. Now because of the differences in datatypes and methods for doing said work, each database has its own sub-folder in the main project, labelled after the database it will do the work for.
Would it be best to keep it like it is, or combine the similar methods into one class and simply throw a switch statement in, giving the option to "select" the database type the library should be working with?
Right now it's set up so that you simply use or import My.ClassLibrary.MSSQL and the like; however, all the methods in each type are so flippin similar that I am looking at it and saying to myself... why?
Each Database type has a folder in this project containing the database specific representation of the following code:
Access Class
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;
namespace kp.Class.Library.Data
{
internal class Access : IDisposable
{
#region "Properties"
// Set the type of query we are running
private CommandType _QT;
internal CommandType QueryType { set { _QT = value; } }
// Set the actual query text to run
private string _Qry;
internal string Query { set { _Qry = value; } }
// Set the parameter names if there are any
private string[] _PNs;
internal string[] ParameterNames { set { _PNs = value; } }
// Set the parameter values if there are any
private object[] _PVs;
internal object[] ParameterValues { set { _PVs = value; } }
// Set the actual Sql Data Types if there are any
private DataTypes[] _DTs;
internal DataTypes[] ParameterDataTypes { set { _DTs = value; } }
// Check to see if there are any parameters passed
private bool AreParams() {
// Check to see if the values and names are null first
if (_PVs != null && _PNs != null) {
try {
Type _t_pv = _PVs.GetType();
Type _t_pn = _PNs.GetType();
if (_t_pv.IsArray && _t_pn.IsArray) {
return (_PVs.Length > 0 && _PNs.Length > 0) ? true : false;
} else {
return false;
}
} catch {
// yes I meant to do this, we really don't need to get the exception here
return false;
}
} else {
return false;
}
}
// Get a return message if any
private string _Msg;
internal string Message { get { return _Msg; } }
// Set the official Sql Reader object
private SqlDataReader _Rdr;
// Set the official Sql Connection object
private SqlConnection _Conn;
// Set the official Sql Command object
private SqlCommand _Cmd;
// Hack for seeing if we're disposed already
private bool disposedValue;
#endregion
// Constructor
internal Access(string _connStr)
{
Invoke(_connStr);
}
// Official Constructor. We can thread these 2 because they are not being used yet, and it makes it slightly more efficient
internal void Invoke(string _connStr)
{
try {
Parallel.Invoke(() => {
_Conn = new SqlConnection(_connStr);
}, () =>
{
_Cmd = new SqlCommand();
});
}
catch (SqlException dEx)
{
// Catch an exception if any, an write it out to our logging mechanism, in addition to adding it our returnable message property
_Msg = "Access.Invoke Exception: " + dEx.Message;
ErrorReporting.WriteEm.WriteItem(dEx, "kp.Class.Library.Data.Access.Invoke", _Msg);
}
catch (Exception ex)
{
_Msg = "Access.Invoke Exception: " + ex.Message;
ErrorReporting.WriteEm.WriteItem(ex, "kp.Class.Library.Data.Access.Invoke", _Msg);
}
}
/// <summary>
/// Return a SqlDataReader based on the properties passed to this class
/// </summary>
/// <returns></returns>
internal SqlDataReader GetResults()
{
try {
// check for parameters
if (AreParams()) {
PrepareParams(_Cmd);
}
// set our connection
_Cmd.Connection = _Conn;
// set the type of query to run
_Cmd.CommandType = _QT;
// set the actual query to run
_Cmd.CommandText = _Qry;
// open the connection
_Cmd.Connection.Open();
// prepare the command with any parameters that may have gotten added
_Cmd.Prepare();
// Execute the SqlDataReader, and set the connection to close once returned
_Rdr = _Cmd.ExecuteReader(CommandBehavior.CloseConnection);
// clear out any parameters
_Cmd.Parameters.Clear();
// return our reader object
return (!_Rdr.HasRows) ? null : _Rdr;
}
catch (SqlException SqlEx)
{
_Msg += "Access.GetResults SqlException: " + SqlEx.Message;
ErrorReporting.WriteEm.WriteItem(SqlEx, "kp.Class.Library.Data.Access.GetResults", _Msg);
return null;
}
catch (Exception ex) {
_Msg += "Access.GetResults Exception: " + ex.Message;
ErrorReporting.WriteEm.WriteItem(ex, "kp.Class.Library.Data.Access.GetResults", _Msg);
return null;
}
}
/// <summary>
/// Execute a non-return query, and return the success
/// </summary>
/// <returns></returns>
internal bool Execute() {
try {
// check for parameters
if (AreParams()) {
PrepareParams(_Cmd);
}
// set our connection
_Cmd.Connection = _Conn;
// set the type of query to run
_Cmd.CommandType = _QT;
// set the actual query to run
_Cmd.CommandText = _Qry;
// open the connection
_Cmd.Connection.Open();
// prepare the command with any parameters that may have gotten added
_Cmd.Prepare();
// execute the non-returnable query against the database
_Cmd.ExecuteNonQuery();
// clear out any parameters
_Cmd.Parameters.Clear();
// executed successfully (otherwise would have thrown an exception)
return true;
}
catch (SqlException SqlEx)
{
_Msg += "Access.Execute SqlException: " + SqlEx.Message;
ErrorReporting.WriteEm.WriteItem(SqlEx, "kp.Class.Library.Data.Access.Execute", _Msg);
return false;
}
catch (Exception ex) {
_Msg += "Access.Execute Exception: " + ex.Message;
ErrorReporting.WriteEm.WriteItem(ex, "kp.Class.Library.Data.Access.Execute", _Msg);
return false;
}
}
/// <summary>
/// Execute a query with a return value. Used in Selecting the ID of the last inserted record.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="_DefVal"></param>
/// <returns></returns>
internal T ExecuteWithReturn<T>(T _DefVal) {
try {
T _Ret;
// check for parameters
if (AreParams()) {
PrepareParams(_Cmd);
}
// set our connection
_Cmd.Connection = _Conn;
// set the type of query to run
_Cmd.CommandType = _QT;
// set the actual query to run
_Cmd.CommandText = _Qry;
// open the connection
_Cmd.Connection.Open();
// prepare the command with any parameters that may have gotten added
_Cmd.Prepare();
T _T = (T)_Cmd.ExecuteScalar();
_Ret = (_T is DBNull) ? default(T) : _T;
// clear out _T
_T = default(T);
// clear out any parameters
_Cmd.Parameters.Clear();
// return the single return value from the query run
return _Ret;
}
catch (SqlException SqlEx)
{
_Msg += "Access.ExecuteWithReturn SqlException: " + SqlEx.Message;
ErrorReporting.WriteEm.WriteItem(SqlEx, "kp.Class.Library.Data.Access.ExecuteWithReturn", _Msg);
return default(T);
} catch (Exception ex) {
_Msg += "Access.ExecuteWithReturn Exception: " + ex.Message;
ErrorReporting.WriteEm.WriteItem(ex, "kp.Class.Library.Data.Access.ExecuteWithReturn", _Msg);
return default(T);
}
}
/// <summary>
/// Prepare our parameters, adding them and forcing a valid data length
/// </summary>
/// <param name="objCmd"></param>
protected void PrepareParams(SqlCommand objCmd) {
try {
// set our initial Data Size
int _DataSize = 0;
// get the number of Parameter Values passed in
int _PCt = _PVs.GetUpperBound(0);
// begin array check
Type _t_dt = _DTs.GetType();
// start looping over our parameters
for (int i = 0; i <= _PCt; ++i) {
// make sure that the data types are actually an array
if (_t_dt.IsArray) {
// select which datatype, and force the official size
switch ((int)_DTs[i]) {
case 0:
case 33:
case 6:
case 9:
case 13:
case 19:
_DataSize = 8;
break;
case 1:
case 3:
case 7:
case 10:
case 12:
case 21:
case 22:
case 23:
case 25:
_DataSize = _PVs[i].ToString().Length;
break;
case 2:
case 20:
_DataSize = 1;
break;
case 5:
_DataSize = 17;
break;
case 8:
case 17:
case 15:
_DataSize = 4;
break;
case 14:
_DataSize = 16;
break;
case 31:
_DataSize = 3;
break;
case 32:
_DataSize = 5;
break;
case 16:
_DataSize = 2;
break;
}
// add our parameter to the command object
objCmd.Parameters.Add(_PNs[i], (SqlDbType)_DTs[i], _DataSize).Value = _PVs[i];
} else {
// if the datatypes were not set, try to add them generically
objCmd.Parameters.AddWithValue(_PNs[i], _PVs[i]);
}
}
// clean up
_PNs = null;_PVs = null;_DTs = null;
} catch (Exception ex) {
_Msg += "Access.PrepareParams Exception: " + ex.Message;
ErrorReporting.WriteEm.WriteItem(ex, "kp.Class.Library.Data.Access.PrepareParams", _Msg);
}
}
#region "Dispose Support"
protected virtual void Dispose(bool disposing)
{
if (!disposedValue && disposing) {
try
{
_Qry = string.Empty;
_Rdr.Close();
_Rdr.Dispose();
_Cmd.Connection.Close();
_Cmd.Dispose();
if (_Conn.State == ConnectionState.Open)
{
SqlConnection.ClearAllPools();
_Conn.Close();
_Conn.Dispose();
}
_Msg = null;
}
catch(Exception ex) {
ErrorReporting.WriteEm.WriteItem(ex, "kp.Class.Library.Data.Access.Dispose", "");
}
}
disposedValue = true;
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
#endregion
}
}
Wrapper Class
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Common;
namespace kp.Class.Library.Data
{
/// <summary>
/// Wrapper class for our data access
/// </summary>
public class Wrapper {
/// <summary>
/// Setup our return message if any
/// </summary>
public static string Message { get { return _Msg; } }
private static string _Msg;
// Instantiate our caching methods
internal static Common.CustomCache _Cache = new Common.CustomCache();
// Map our datareader object to a strongly typed list
private static IList<T> Map<T>(DbDataReader dr) where T : new()
{
try
{
// initialize our returnable list
List<T> list = new List<T>();
// fire up the lamda mapping
var converter = new Converter<T>(dr);
while (dr.Read())
{
// read in each row, and properly map it to our T object
var obj = converter.CreateItemFromRow();
// add it to our list
list.Add(obj);
}
// return it
return list;
}
catch (Exception ex)
{
// Catch an exception if any, an write it out to our logging mechanism, in addition to adding it our returnable message property
_Msg += "Wrapper.Map Exception: " + ex.Message;
ErrorReporting.WriteEm.WriteItem(ex, "kp.Class.Library.Data.Wrapper.Map", _Msg);
// make sure this method returns a default List
return default(List<T>);
}
}
/// <summary>
/// Get the results of a strongly-typed IList Object
/// </summary>
/// <typeparam name="T">Strongly-Typed class of objects that should be returned</typeparam>
/// <param name="_connStr">The connection string to the database</param>
/// <param name="_Qry">The query to run</param>
/// <param name="_QryType">The Query Type to run</param>
/// <param name="_ParamNames">The Parameters' names to pass to the query, if any</param>
/// <param name="_ParamVals">The Parameters' values to pass to the query, if any</param>
/// <param name="_ParamDTs">The Parameters' data types to pass to the query, if any</param>
/// <param name="_ShouldCache">Should we cache the response</param>
/// <param name="_CacheID">Cache item name</param>
/// <returns>Strongly Typed ilist of objects</returns>
public static IList<T> GetResults<T>(string _connStr, string _Qry, CommandType _QryType,
string[] _ParamNames = null,
object[] _ParamVals = null,
DataTypes[] _ParamDTs = null,
bool _ShouldCache = false,
string _CacheID = "") where T : new()
{
// Create a reference to a potential already cached IList
IList<T> _CachedItem = _Cache.Get<IList<T>>(_CacheID);
// If we're already cached, there's no need to fire up the data access objects, so return the cached item instead
if (_CachedItem != null && _ShouldCache)
{
return _CachedItem;
}
else
{
// Fire up our data access object
using (Access db = new Access(_connStr))
{
try
{
// create a new ilist reference of our strongly typed class
IList<T> _Query = default(IList<T>);
// set the query type
db.QueryType = _QryType;
// set the query text
db.Query = _Qry;
// make sure we've got some parameters, if we do the set them to our db access object
if (_ParamNames != null)
{
// set the parameter names
db.ParameterNames = _ParamNames;
// set the parameter values
db.ParameterValues = _ParamVals;
// set the parameter data types
db.ParameterDataTypes = _ParamDTs;
}
// start using our db access :) Fire off the GetResults method and return back a SqlDataReader to work on
using (DbDataReader r = db.GetResults())
{
// make sure the data reader actually exists and contains some results
if (r != null)
{
// map the data reader to our strongly type(s)
_Query = Map<T>(r);
}
r.Close();
}
// check if we should cache the results
if (_ShouldCache)
{
// if so, set the query object to the cache
_Cache.Set<IList<T>>(_Query, _CacheID);
}
// return our strongly typed list
return _Query;
}
catch (DbException dEx)
{
// Catch an exception if any, an write it out to our logging mechanism, in addition to adding it our returnable message property
_Msg += "Wrapper.GetResults Exception: " + dEx.Message + db.Message;
ErrorReporting.WriteEm.WriteItem(dEx, "kp.Class.Library.Data.Wrapper.GetResults", _Msg);
// make sure this method returns a default List
return default(IList<T>);
}
catch (Exception ex)
{
// Catch an exception if any, an write it out to our logging mechanism, in addition to adding it our returnable message property
_Msg += "Wrapper.GetResults Exception: " + ex.Message + db.Message;
ErrorReporting.WriteEm.WriteItem(ex, "kp.Class.Library.Data.Wrapper.GetResults", _Msg);
// make sure this method returns a default List
return default(IList<T>);
}
}
}
}
/// <summary>
/// Execute a query against the database. Usually used for IUD Operations
/// </summary>
/// <param name="_connStr">The connection string to the database</param>
/// <param name="_Qry">The query to run</param>
/// <param name="_QryType">The Query Type to run</param>
/// <param name="_ParamNames">The Parameters' names to pass to the query, if any</param>
/// <param name="_ParamVals">The Parameters' values to pass to the query, if any</param>
/// <param name="_ParamDTs">The Parameters' data types to pass to the query, if any</param>
/// <returns>Boolean of success</returns>
public static bool Execute(string _connStr, string _Qry, CommandType _QryType,
string[] _ParamNames = null,
object[] _ParamVals = null,
DataTypes[] _ParamDTs = null)
{
// setup a reference for our success return
bool _T;
// Fire up our data access object
using (Access db = new Access(_connStr))
{
try {
// set the query type
db.QueryType = _QryType;
// set the query text
db.Query = _Qry;
// make sure we've got some parameters, if we do the set them to our db access object
if (_ParamNames != null)
{
// set the parameter names
db.ParameterNames = _ParamNames;
// set the parameter values
db.ParameterValues = _ParamVals;
// set the parameter data types
db.ParameterDataTypes = _ParamDTs;
}
// execute the query and return if it was successful or not
_T = db.Execute();
// return it
return _T;
}
catch (DbException dEx)
{
// Catch an exception if any, an write it out to our logging mechanism, in addition to adding it our returnable message property
_Msg += "Wrapper.Execute Exception: " + dEx.Message + db.Message;
ErrorReporting.WriteEm.WriteItem(dEx, "kp.Class.Library.Data.Wrapper.Execute", _Msg);
// make sure this method returns a default List
return false;
}
catch (Exception ex)
{
// Catch an exception if any, an write it out to our logging mechanism, in addition to adding it our returnable message property
_Msg += "Wrapper.Execute Exception: " + ex.Message + db.Message;
ErrorReporting.WriteEm.WriteItem(ex, "kp.Class.Library.Data.Wrapper.Execute", _Msg);
// make sure this method returns a default value of false
return false;
}
}
}
/// <summary>
/// Executes a query against the database, and returns a value
/// </summary>
/// <typeparam name="T">Strongly Typed Object for return</typeparam>
/// <param name="_connStr">The connection string to the database</param>
/// <param name="_Qry">The query to run</param>
/// <param name="_QryType">The Query Type to run</param>
/// <param name="_ParamNames">The Parameters' names to pass to the query, if any</param>
/// <param name="_ParamVals">The Parameters' values to pass to the query, if any</param>
/// <param name="_ParamDTs">The Parameters' data types to pass to the query, if any</param>
/// <param name="_DefVal">Default value that should get returned if none are</param>
/// <returns>Strongly Typed object from the query executed</returns>
public static T ExecuteWithReturn<T>(string _connStr, string _Qry, CommandType _QryType,
string[] _ParamNames = null,
object[] _ParamVals = null,
DataTypes[] _ParamDTs = null,
object _DefVal = null) where T : new() {
// setup a new reference to T
T _T;
// Fire up our data access object
using (Access db = new Access(_connStr))
{
try{
// set the query type
db.QueryType = _QryType;
// set the query text
db.Query = _Qry;
// make sure we've got some parameters, if we do the set them to our db access object
if (_ParamNames != null)
{
// set the parameter names
db.ParameterNames = _ParamNames;
// set the parameter values
db.ParameterValues = _ParamVals;
// set the parameter data types
db.ParameterDataTypes = _ParamDTs;
}
// execute the query and return the results back to _T
_T = db.ExecuteWithReturn<T>((T)_DefVal);
// return it
return (_T is DBNull) ? default(T) : _T;
}
catch (DbException dEx)
{
// Catch an exception if any, an write it out to our logging mechanism, in addition to adding it our returnable message property
_Msg += "Wrapper.ExecuteWithReturn Exception: " + dEx.Message + db.Message;
ErrorReporting.WriteEm.WriteItem(dEx, "kp.Class.Library.Data.Wrapper.ExecuteWithReturn", _Msg);
// make sure this method returns a default List
return default(T);
}
catch (Exception ex)
{
// Catch an exception if any, an write it out to our logging mechanism, in addition to adding it our returnable message property
_Msg += "Wrapper.ExecuteWithReturn Exception: " + ex.Message + db.Message;
ErrorReporting.WriteEm.WriteItem(ex, "kp.Class.Library.Data.Wrapper.ExecuteWithReturn", _Msg);
// return the default value for the strong typed object
return default(T);
}
}
}
}
}
Data Typing Class
namespace kp.Class.Library.Data.SqlServer
{
/// <summary>
/// Sql Data Type Enumeration
/// </summary>
public enum DataTypes : int
{
/// <summary>
/// BigInt
/// </summary>
BigInt = 0,
/// <summary>
/// Binary
/// </summary>
Binary = 1,
/// <summary>
/// Bit
/// </summary>
Bit = 2,
/// <summary>
/// Char
/// </summary>
Char = 3,
/// <summary>
/// Date
/// </summary>
Date = 31,
/// <summary>
/// DateTime
/// </summary>
DateTime = 4,
/// <summary>
/// DateTime2
/// </summary>
DateTime2 = 33,
/// <summary>
/// DateTimeOffset
/// </summary>
DateTimeOffset = 34,
/// <summary>
/// Decimal
/// </summary>
Decimal = 5,
/// <summary>
/// Float
/// </summary>
Float = 6,
/// <summary>
/// Image
/// </summary>
Image = 7,
/// <summary>
/// Int
/// </summary>
Int = 8,
/// <summary>
/// Money
/// </summary>
Money = 9,
/// <summary>
/// NChar
/// </summary>
NChar = 10,
/// <summary>
/// NText
/// </summary>
NText = 11,
/// <summary>
/// NVarChar
/// </summary>
NVarChar = 12,
/// <summary>
/// Real
/// </summary>
Real = 13,
/// <summary>
/// SmallDateTime
/// </summary>
SmallDateTime = 15,
/// <summary>
/// SmallInt
/// </summary>
SmallInt = 16,
/// <summary>
/// SmallMoney
/// </summary>
SmallMoney = 17,
/// <summary>
/// Structured
/// </summary>
Structured = 30,
/// <summary>
/// Text
/// </summary>
Text = 18,
/// <summary>
/// Time
/// </summary>
Time = 32,
/// <summary>
/// Timestamp
/// </summary>
Timestamp = 19,
/// <summary>
/// TinyInt
/// </summary>
TinyInt = 20,
/// <summary>
/// Udt
/// </summary>
Udt = 29,
/// <summary>
/// UniqueIdentifier
/// </summary>
UniqueIdentifier = 14,
/// <summary>
/// VarBinary
/// </summary>
VarBinary = 21,
/// <summary>
/// VarChar
/// </summary>
VarChar = 22,
/// <summary>
/// Variant
/// </summary>
Variant = 23,
/// <summary>
/// Xml
/// </summary>
Xml = 25
}
}
Answer:
Would it be best to keep it like it is, or combine the similar methods
into 1 class and simply through a switch statement in, giving the
option to "select" the database type the library should be working
with?
This is a two-part question, and I am only referring to the first part:
Would it be best to keep it like it is, or combine the similar methods
into 1 class ?
This is where the design principle DRY (don't repeat yourself) would come in handy. Also, I am not a big fan of inheritance (I prefer composition), but this would be a perfect candidate for an abstract (MustInherit) class, which should implement all the non-database-system-specific code.
So an Access class will inherit this superclass and add its own system-specific part. E.g., for Access or any OleDB-related operation, the parameters of a parameterized query just use ? as the placeholder, but you must ensure the order of the parameters.
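To make the shape concrete, here is a minimal sketch of that hierarchy (in Python for brevity; the class and method names are made up for illustration, and a C# version would use an abstract class with an abstract member instead of ABC):

```python
from abc import ABC, abstractmethod

class DbWrapper(ABC):
    """Shared, system-agnostic logic lives here (the DRY part)."""

    def build_query(self, table, columns):
        # Common SQL assembly shared by every backend.
        placeholders = ", ".join(self.placeholder(i) for i in range(len(columns)))
        cols = ", ".join(columns)
        return f"INSERT INTO {table} ({cols}) VALUES ({placeholders})"

    @abstractmethod
    def placeholder(self, index):
        """Each backend supplies only its own parameter style."""

class AccessDb(DbWrapper):
    # OleDB/Access uses positional '?' markers; parameter order matters.
    def placeholder(self, index):
        return "?"

class SqlServerDb(DbWrapper):
    # SQL Server allows named parameters.
    def placeholder(self, index):
        return f"@p{index}"
```

With this split, `AccessDb().build_query("t", ["a", "b"])` yields `INSERT INTO t (a, b) VALUES (?, ?)`, while the SQL Server subclass produces named placeholders; only the one-line difference lives in each subclass.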
General
Your code seems to be over-commented. For every setting/assignment you first write a comment saying what you are doing and then, on the next line, the code itself.
Some examples
// check if we should cache the results
if (_ShouldCache)
// set the parameter data types
db.ParameterDataTypes = _ParamDTs;
This is blowing your codebase up to 1000 lines. Renaming your parameters and fields to something more meaningful would reduce the comments to a minimum.
Comments should be used to explain why something is done. What is done by the code should be documented by the code itself.
So by renaming _ShouldCache to shouldResultsBeCached (I don't like the underscores) and ParamDTs to parameterDataTypes, you could remove the comments and Mr. Maintainer would still understand what your code does.
Access class
The // select which datatype, and force the official size part of the PrepareParams() method should be extracted to its own method.
This ternary expression/operator
return (_PVs.Length > 0 && _PNs.Length > 0) ? true : false;
can be expressed like
return (_PVs.Length > 0 && _PNs.Length > 0);
which is clearer. This came from the method AreParams()
private bool AreParams() {
// Check to see if the values and names are null first
if (_PVs != null && _PNs != null) {
try {
Type _t_pv = _PVs.GetType();
Type _t_pn = _PNs.GetType();
if (_t_pv.IsArray && _t_pn.IsArray) {
return (_PVs.Length > 0 && _PNs.Length > 0) ? true : false;
} else {
return false;
}
} catch {
// yes I meant to do this, we really don't need to get the exception here
return false;
}
} else {
return false;
}
}
If you use a guard condition, your code will be indented less. Adding up condition checks can remove if..else statements.
The method can be refactored and renamed to
private bool HasParameters() {
// Check to see if the values and names are null first
if (_PVs == null || _PNs == null) { return false;}
try {
Type _t_pv = _PVs.GetType();
Type _t_pn = _PNs.GetType();
return (_t_pv.IsArray && _t_pn.IsArray && _PVs.Length > 0 && _PNs.Length > 0);
} catch {
// yes I meant to do this, we really don't need to get the exception here
return false;
}
}
Using auto properties (where possible) can simplify your code too, as you name your properties meaningfully, but not your backing fields.
Example
private CommandType _QT;
internal CommandType QueryType { set { _QT = value; } }
could become
internal CommandType QueryType { private get; set; }
Wrapper class
Inside your ExecuteWithReturn() method you are catching 2 different exception types, which you treat exactly the same (only the exception variable name differs). If you treat exceptions the same, you don't need to distinguish between them.
Style
Be consistent with the style you use. Try to use the style the majority uses, so a new developer will be used to it.
Example
Sometimes you put an opening brace { on a new line (where it belongs), sometimes at the end of the line. | {
"domain": "codereview.stackexchange",
"id": 10419,
"tags": "c#, .net"
} |
Lorentz Transformations and Angular Momentum unclear derivation | Question: How do we get rid of $\omega^{\rho}_{\;\;\nu}$, and how do we derive the last equation from the previous one? Here is the URL for the source where I found this text, page 17 (marked at bottom).
Answer: Depending on the type of transformation, the infinitesimal parameter can be a scalar, vector, tensor etc.
We parameterize the change in the field as
Scalar: $\delta \phi = \alpha \Delta \phi$
Vector: $\delta \phi = \alpha^{\rho} (\Delta \phi)_{\rho}$
Tensor: $\delta \phi = \alpha^{\rho\sigma} (\Delta \phi)_{\rho\sigma}$
Correspondingly, more indices are involved in $F^{\mu}$ appearing in the change of the Lagrangian. For example (for the above three cases):
$\mathcal{L} \rightarrow \mathcal{L}+\alpha \partial_{\mu}F^{\mu}$
$\mathcal{L} \rightarrow \mathcal{L}+\alpha^{\rho} \partial_{\mu}\big((F^{\mu})_{\rho}\big)$
$\mathcal{L} \rightarrow \mathcal{L}+\alpha^{\rho\sigma} \partial_{\mu}\big((F^{\mu})_{\rho\sigma}\big)$
This simply means that the current defined in eq.(1.38) in your reference (https://www.damtp.cam.ac.uk/user/tong/qft/qft.pdf) will be $\mathcal{J}^{\mu}$, $(\mathcal{J}^{\mu})^{\rho}$ and $(\mathcal{J}^{\mu})^{\rho\sigma}$, respectively. It is written as:
1. $\big(\mathcal{J}^{\mu}\big)=\frac{\partial \mathcal{L}}{\partial (\partial_{\mu}\phi)}\Delta \phi-(F^{\mu})$
2. $\big(\mathcal{J}^{\mu}\big)_{\rho}=\frac{\partial \mathcal{L}}{\partial (\partial_{\mu}\phi)}(\Delta \phi)_{\rho}-(F^{\mu})_{\rho}$
3. $\big(\mathcal{J}^{\mu}\big)_{\rho\sigma}=\frac{\partial \mathcal{L}}{\partial (\partial_{\mu}\phi)}(\Delta \phi)_{\rho\sigma}-(F^{\mu})_{\rho\sigma}$
Since the question is about a tensor case, let's focus on the item 3 above.
To read off $(F^{\mu})_{\rho\sigma}$, the key point is to rewrite equation (1.53) as follows
$$\mathcal{L} \rightarrow \mathcal{L}-\omega^{\rho \sigma}\partial_{\mu}\big(\delta_{\rho}^{\mu} \delta_{\sigma \nu} x^{\nu} \mathcal{L}\big)=\mathcal{L}-\big(\underbrace{\frac{1}{2}\omega^{\rho \sigma}}_{\alpha^{\rho\sigma}}\big)\partial_{\mu}\big(\underbrace{\delta_{\rho}^{\mu} \delta_{\sigma \nu} x^{\nu} \mathcal{L}-\delta_{\sigma}^{\mu} \delta_{\rho \nu} x^{\nu} \mathcal{L}}_{\big(F^{\mu}\big)_{\rho\sigma}}\big)$$
where we have used the fact that $\omega_{\mu\nu}$ is anti-symmetric.
Note also that $(\Delta\phi)_{\rho\sigma}$ only contributes by its anti-symmetric part. According to equation (1.52)
$$\delta \phi=-\underbrace{\frac{1}{2}\omega^{\rho\sigma}}_{\alpha^{\rho \sigma}}\big(\underbrace{x_{\rho}\partial_{\sigma}\phi-x_{\sigma}\partial_{\rho}\phi}_{(\Delta\phi)_{\rho \sigma}}\big) $$
By the last two equations above you can compute $\big(\mathcal{J}^{\mu}\big)_{\rho\sigma}$ which turns out to be the same as (1.55).
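Explicitly, carrying out that computation for the scalar field packages the answer into the energy-momentum tensor (this matches the form quoted in (1.55), up to index-placement conventions):

$$\big(\mathcal{J}^{\mu}\big)^{\rho\sigma} = x^{\rho}T^{\mu\sigma}-x^{\sigma}T^{\mu\rho}, \qquad T^{\mu\nu}=\frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi)}\,\partial^{\nu}\phi-\eta^{\mu\nu}\mathcal{L}$$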
Hope this helps.
P.S. The above discussion is based on Peskin-Schroeder section (2.2). | {
"domain": "physics.stackexchange",
"id": 98518,
"tags": "special-relativity, lagrangian-formalism, angular-momentum, stress-energy-momentum-tensor, noethers-theorem"
} |
RVIZ trace robot path | Question:
Hi
I have a bagfile with tf transforms containing /body frames
Is it possible to trace the path of the body frame in RVIZ?
kind of like in this tutorial, but I want to see a line the turtle followed in RVIZ, rather than a line to the transform...
http://wiki.ros.org/tf/Tutorials/Introduction%20to%20tf
Originally posted by Sentinal_Bias on ROS Answers with karma: 418 on 2014-05-29
Post score: 0
Answer:
hector_trajectory_server does what you request. See also this other older Q/A.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-05-29
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 18104,
"tags": "transform"
} |
What happens when two strings collide? | Question: I have a question, that perhaps someone with a much better understanding of physics can help me answer.
Please correct me if I'm wrong. From what I understand, a string in string theory is basically a strand of energy that vibrates. The string can vibrate in a variety of different ways. It is thought that things such as quarks consist of a string vibrating in a very specific way.
The question is, if two of these vibrating strings collide, would not their vibration patterns change?
Is it possible for a string vibrating with one pattern to collide with a string vibrating in the exact opposite pattern in such a way that the two strings are essentially cancelled out?
Answer: Strings are not described very accurately in popular science, because much of the physics of strings was only understood long after the mathematical theory was somewhat advanced, and an accurate classical analog for the string wasn't available until relatively recently.
The classical analog people often use is a vibrating band of energy, but this is mostly wrong, because strings don't interact by bumping into each other, like rubber bands do. They can only interact in a strange way determined by the consistency of having infinitely many different particles, all conspiring to make a consistent theory. This conspiracy makes it that when strings merge, they do a complicated thing, and the only correct classical analog to this complicated thing is a black hole merger. This wasn't understood very well until at least the late 1990s, so pop-sci has not caught up.
Ordinary astrophysical black holes are big, and complicated, and black, and gooey. They are electrically resistive, their oscillations decay quickly, they are very irreversible in the statistical sense. The string describes the case that the black hole is extended and charged so much that it is in the non-viscous extremal limit, so that strings are not gooey and lossy like ordinary black holes, but shiny and reversible. The one dimensional shiny black holes are the strings, in the limit that these black holes are weakly interacting, so that the lightest ones are light, and consequently weakly charged (because extremality relates charge and mass).
When black holes collide, the oscillations do not combine in a simple way, they combine the way black holes combine, in a strange acausal way that only makes sense teleologically. So you can't say "this pattern made this pattern", at least not in a causal way precisely, you need to describe the whole thing at once. The ripples running along the string doesn't add to the ripple running along another string when they combine, but if the two combine at high energies to make a long string with many ripples running in a statistical way, so that it has a classical interpretation, they combine to an object which is gooey because it has a lot of junk running around on it, and this is no different than a viscous liquid. When the strings cool down again by shooting out other cold strings, the oscillations die down. The laws of combination are not like superposing waves on a pond, but they are described over the entire space-time world-sheet.
Strings are more elementary than black holes, they are small and simple. You shouldn't be intimidated by the above to thinking that the laws of string collisions require you to understand the classical collisions of black holes! This is no more true than saying that describing a collision in the Born approximation in quantum mechanics requires you to solve the much more difficult classical problem of particles scattering around in that potential. The quantum behavior is simpler than the classical behavior in many cases.
The laws of string collisions, for those strings which are lightest at the lowest energies, that is for those distance scales which reproduce our experience, are extremely simple: they just reproduce the laws of Feynman diagrams, so that the particles combine into other particles the same way they do in the standard model. On the string world sheet, these laws of particle combination are the laws of algebraic products of operators, they describe how to expand a product of two operators in an infinite series of a third operator. This process takes a limit where the points of merger of the incoming particles are smooshed close together by a conformal transformation, so that each of their oscillations is no longer distinct, but merged into a combined oscillation a long long time ago (the collision limit is not really a short-distance limit, but a long-time limit). The combined oscillation just looks like an infinite series of particles in the theory, an infinite series of operators on the world sheet which create the oscillation corresponding to sending in one of these particles from infinity.
This is just like a Taylor expansion, except for fluctuating quantities, and the operator product laws are the laws of string merger and vibration-adding. You can't make nothing, because you always have a world sheet, and the addition law is strange, not by the laws of superposition (like water waves, or oscillations on rubber bands) but by the rules of operator product expansion. | {
"domain": "physics.stackexchange",
"id": 2378,
"tags": "string-theory, string"
} |
Simple battle class | Question: I'm new to Java, and this code could likely be squashed down a whole heap. How can I make this code simpler and easier to read?
chance.base is a percentage. If it is 40, generate() has a 40% chance of returning true.
public class battleManager {
void battle(Army playerArmy,Army enemyArmy,int armyModifier,int army2Modifier){
if(armyModifier > 40| army2Modifier > 40){
System.out.println("note: modifiers were over maximum (40)");
}
chanceManager chance = new chanceManager();
chance.base = (50 + armyModifier) - army2Modifier;
int armyCount = playerArmy.units, army2Count = enemyArmy.units;
boolean armyAlive = true, army2Alive = true, playerWon = false;
while(armyAlive & army2Alive){
if(chance.generate()){
army2Count--;
}
else{
armyCount--;
}
if(armyCount == 0){
armyAlive = false;
playerWon = false;
}
if(army2Count == 0){
army2Alive = false;
playerWon = true;
}
}
chance.base = 25;
int armyKilled, army2Killed;
armyKilled = playerArmy.units - armyCount;
army2Killed = enemyArmy.units - army2Count;
for (int unit = 0; unit < armyKilled; unit++){
if(chance.generate()){
armyCount++;
}
}
for (int unit = 0; unit < army2Killed; unit++){
if(chance.generate()){
army2Count++;
}
}
// this is for debugging
System.out.println("army survivors:");
System.out.println(armyCount);
System.out.println("army2 survivors:");
System.out.println(army2Count);
if(playerWon){
System.out.println("player army won");
}else
System.out.println("Player army lost");
// end of debugging
playerArmy.units = armyCount;
}
Answer: Naming
Class names should start with an UpperCase letter by convention.
What is a battleManager? What is a chanceManager? Why not ColonelSanders or VicePresidentOfChance? Joking aside, the naming could be more descriptive. How about BattleSimulator and BernoulliTrial?
The naming of the parameters is inconsistent:
battle(Army playerArmy,Army enemyArmy,int armyModifier,int army2Modifier)
Any of these would be more logical:
battle(Army playerArmy, Army enemyArmy, int playerModifier, int enemyModifier)
battle(Army army1, Army army2, int army1Modifier, int army2Modifier)
battle(Army attacker, Army defender, int attackerModifier, int defenderModifier)
The last option would be particularly appealing if the situation being modelled is asymmetrical (i.e., the attacker army is attempting to move into the defender's territory). Typically, the rules would require the attacker to have at least n units, while the defender might have no units at all. (An assertion at the beginning of the battle method to enforce such a rule would be a good idea too.)
Object-oriented design
The battle method should be public, to be consistent with the class.
It would be useful for the function to return the winner of the battle.
Setting chance.base to modify the probability is not recommended, since you are allowing others to meddle with the internals of the chance object. Three better approaches are:
BernoulliTrial battleProb = new BernoulliTrial(50 + army1Modifier - army2Modifier);
battleProb.setProbability(50 + army1Modifier - army2Modifier);
battleProb.generate(50 + army1Modifier - army2Modifier);
See below for a more creative naming suggestion — I think it reads more nicely.
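As a concrete illustration of that interface, here is a minimal sketch (Python rather than Java, for brevity; the clamping behaviour is my own assumption, not something the reviewed code does):

```python
import random

class BernoulliTrial:
    """Succeeds with the given probability, expressed as a percentage."""

    def __init__(self, percent, rng=None):
        # Clamp, so out-of-range modifiers cannot produce nonsense
        # (the reviewed code instead just prints a warning over 40).
        self.percent = max(0, min(100, percent))
        self.rng = rng if rng is not None else random.Random()

    def success(self):
        # random() is uniform on [0, 1); scale to [0, 100).
        return self.rng.random() * 100 < self.percent
```

Injecting a seeded generator (e.g. `BernoulliTrial(60, random.Random(42))`) makes a whole battle reproducible in unit tests, which is hard to do when the probability source is buried inside the battle method.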
Algorithm
There is a fighting phase and a healing phase.
After healing, you resuscitate some of playerArmy's casualties. However, you do not do likewise for the enemy army. Either that is an oversight, or one of the healing loops is superfluous. I'll assume that only the winning army should be healed.
Rather than setting the unit count after the loop, you could manipulate the army's unit count directly within the loop.
You would be better off without the armyAlive and army2Alive flags.
assert(attacker.units > 0);
BernoulliTrial attack = new BernoulliTrial(50 + attackerModifier - defenderModifier);
int attackerCasualties = 0, defenderCasualties = 0;
// Begin fighting
while (attacker.units > 0 && defender.units > 0) {
if (attack.success()) {
defender.units--;
defenderCasualties++;
} else {
attacker.units--;
attackerCasualties++;
}
}
Army winner = (attacker.units > 0) ? attacker : defender;
// Heal the winning army
BernoulliTrial heal = new BernoulliTrial(25);
int winnerCasualties = (winner == attacker) ? attackerCasualties : defenderCasualties;
while (winnerCasualties-- > 0) {
if (heal.success()) {
winner.units++;
}
}
return winner; | {
"domain": "codereview.stackexchange",
"id": 11035,
"tags": "java, beginner, game, battle-simulation"
} |
Tokenizing string using strtok | Question: In this assignment, I'm supposed to split a string using strtok. (The assignment consists of the comment and the function definition; I fill in the function body.)
It works as far as I've tested. But it seems a little ugly to me. Perhaps it's my Java background, but somehow this just doesn't feel right. Specific concerns:
Is this easily readable to a more experienced C programmer?
Can this be made more idiomatic somewhere?
Is it feasible to split this into several functions? Where would you split it?
Any corner cases where it might break? Does it leak memory?
/* Parses a string into tokens (command line parameters) separated by space.
* Builds a dynamically allocated array of strings. Pointer to the array is
* stored in variable pointed by argv.
*
* Parameters:
* argv: pointer to the variable that will store the string array
* input: the string to be parsed (the original string can be modified, if needed)
*
* Returns:
* number of space-separated tokens in the string */
int parse_cmdline(char ***argv, char *input)
{
char *token;
int len, num;
len = 4;
*argv = malloc(len * sizeof(char *));
if (*argv == NULL) {
return 0;
}
for (num = 0;; num++, input = NULL) {
token = strtok(input, " ");
if (token == NULL) {
break;
}
if (num >= len) {
char **temp;
len *= 2;
temp = realloc(*argv, len * sizeof(char *));
if (temp == NULL) {
for (int i = 0; i < num; i++) {
free((*argv)[i]);
}
free(*argv);
*argv = NULL;
return 0;
}
*argv = temp;
}
(*argv)[num] = strdup(token);
if ((*argv)[num] == NULL) {
for (int i = 0; i < num; i++) {
free((*argv)[i]);
}
free(*argv);
*argv = NULL;
return 0;
}
}
return num;
}
Answer: The key point in tokenizing strings is that
The number of resulting tokens is not known in advance
One good approach is the one you have followed,
Allocate some memory
Use it
Allocate more if needed
I want to answer some of the questions that you have asked.
Is it feasible to split this into several functions? Where would you
split it?
As you see, there is the repetition of error handling (freeing up the argv array). Well this can be separated into another function.
Is this easily readable to a more experienced C programmer?
Not sure of the level of expertise you mean, but I can read and understand it without any problems. I am a C professional with 2 years of experience.
Any corner cases where it might break? Does it leak memory?
There is no obvious memory leak I could see in the program. But you can use tools like valgrind to detect such leaks.
I am just posting some code that I would use if I were given the same program.
int free_argv_q(struct queue *q)
{
int i;
int q_len = q_length(q);
for (i = 0; i < q_len; i++) {
char *arg = dequeue(q);
free(arg);
}
return 0;
}
int parse_cmdline(char ***argv, char *input)
{
int i;
int argc = 0;
char *arg = NULL;
char *token = NULL;
struct queue q;
token = strtok(input, " ");
while (token != NULL) {
/* strdup may fail due to low memory */
arg = strdup(token);
if (arg == NULL) {
free_argv_q(&q);
return 0;
}
/* add to queue */
enqueue(&q, arg);
token = strtok(NULL, " ");
}
/* length of the queue is the number of splits */
argc = q_length(&q);
*argv = malloc(argc * sizeof(char *));
if (*argv == NULL) {
free_argv_q(&q);
return 0;
}
/* now we have all memory allocated */
for (i = 0; i < argc; i++)
(*argv)[i] = dequeue(&q);
return argc;
}
The explanation is as follows.
I use a queue to split the tokens and store them. Well this would grow dynamically.
After parsing whole input, the argv array is allocated
argc is the length of the queue
Memory allocation failures are handled though
The cleanup function is separated
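One behavioural detail worth pinning down when testing either version: strtok treats runs of delimiters as a single separator and never produces empty tokens. A small reference model of that rule (Python, purely as a testing oracle):

```python
def strtok_split(s, delims=" "):
    """Reference model of strtok's token rule: runs of delimiters count
    as one separator, so leading/trailing delimiters yield no empty
    tokens."""
    tokens, current = [], []
    for ch in s:
        if ch in delims:
            if current:                      # close the token in progress
                tokens.append("".join(current))
                current = []
        else:
            current.append(ch)
    if current:                              # flush the final token
        tokens.append("".join(current))
    return tokens
```

So `strtok_split("  ls   -l  ")` gives `["ls", "-l"]` - handy as an expected-value generator when unit-testing parse_cmdline - whereas a naive split on every single space would also produce empty strings.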
Note: The implementation of queue is skipped for simplicity. Hope this program is self explanatory. | {
"domain": "codereview.stackexchange",
"id": 6358,
"tags": "c, strings, memory-management"
} |
Electric Field for Non-Symmetric Charge Distributions | Question: In real-world situations where charge distributions are asymmetric and continuous, how is the electric field calculated? Gauss's Law in integral form can't be practically applied for asymmetric distributions, and although I've heard it alluded to that the differential form can be used I lack the mathematical skill to understand how (i.e whether the PDE $\nabla \cdot {\bf E}=\frac{\rho}{\epsilon_0}$ can be numerically integrated to find ${\bf E}$). My only other thoughts are using Coulomb's Law or that there's some other method I haven't learned about.
Alternatively, if this has been asked before I'd appreciate a link to the question.
Answer: I am guessing from how you phrase the question that you have taken an introductory university-level course in electricity and magnetism; and so you have perhaps seen Gauss's Law and Coulomb's Law, but not much else in the way of problem-solving techniques. If this is not the case, let me know in the comments and I'll modify my answer accordingly.
As a general rule, it is not possible to directly use Gauss's Law to solve for the electric field of an arbitrary charge distribution. The only cases in which it's possible are those with a high degree of symmetry. In such instances, you're frequently stuck with Coulomb's Law.1
For an asymmetric, continuous charge distribution, the tactic is to:
View the continuous charge distribution as a bunch of little chunks with small volumes $\Delta V$.
Figure out the charge $\Delta q$ inside each little chunk by the relationship $\Delta q \approx \rho \Delta V$.
Figure out the electric field $\Delta \vec{E}$ due to each little chunk at the point $P$ where we want to find the field, using Coulomb's Law.
Add up the fields from all the chunks to get the total field $\vec{E} = \sum_\text{chunks} \Delta \vec{E}$.
Take the limit as these "chunks" go to infinitesimally small size. In this limit, the sum becomes a multiple integral over the volume occupied by the charge distribution.
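To make the recipe concrete, here is a minimal numerical sketch of steps 1-5 (Python; the cube-shaped distribution and the grid size are arbitrary choices for illustration):

```python
import itertools
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity (SI)

def e_field(chunks, p):
    """Steps 3-4: sum the Coulomb field dE = dq r_hat / (4 pi eps0 r^2)
    of every (position, charge) chunk at the field point p."""
    k = 1.0 / (4.0 * math.pi * EPS0)
    ex = ey = ez = 0.0
    for (x, y, z), dq in chunks:
        rx, ry, rz = p[0] - x, p[1] - y, p[2] - z
        r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
        ex += k * dq * rx / r3
        ey += k * dq * ry / r3
        ez += k * dq * rz / r3
    return ex, ey, ez

# Steps 1-2: chop a unit cube carrying 1 C of uniform charge into n^3 chunks.
n, total_q = 10, 1.0
dq = total_q / n ** 3
cube = [(((i + 0.5) / n, (j + 0.5) / n, (k + 0.5) / n), dq)
        for i, j, k in itertools.product(range(n), repeat=3)]
```

As a sanity check, far from the cube the sum should approach the point-charge value $q/4\pi\epsilon_0 r^2$: at $(100.5, 0.5, 0.5)$, i.e. 100 units from the cube's centre along $x$, the two agree to better than 0.1%, and the $y$, $z$ components cancel by symmetry. Refining the grid (step 5's limit) shrinks the residual discretization error.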
The process is tedious but straightforward. The problem is that turning the formal "sum" from step 4 into an integral in step 5 can be rather complicated; and even if you can write down such an integral, it often happens that you can't actually perform the integral to get a result in terms of elementary functions. Even for a situation as simple as the field at a general point in the plane of a ring of charge, you run into things like elliptic integrals; and encountering them in a problem usually requires "a very specific flavour of giving up" on solving your problem.
1 There are various more advanced techniques that sometimes be applied, which you'll learn about in upper-level E&M courses. As it happens, one of these techniques (the method of Green's functions) is secretly equivalent to Coulomb's Law in the case of a charge distribution for which the electric potential goes to zero at infinity. It's all related! | {
"domain": "physics.stackexchange",
"id": 95490,
"tags": "electrostatics, electric-fields, gauss-law"
} |
Efficent way to Query a DB and write to text file - Large Recordset | Question: I have a SQL query that returns a recordset with between 2 and 5 million records and I need to write that to a .csv file.
I wrote the following procedure and I'm curious to see if there is a better / more efficient way to do this.
Dim conn As New OracleConnection()
...
Sub DBExecuteQueryWriteToFile(ByVal SQLCommand As String, ByVal FileName As String, Optional ByVal Delimiter As String = ",")
Dim cmd = New OracleCommand(SQLCommand, conn)
Dim sb As New StringBuilder
Dim x As Integer = 0
Dim CountRow As Integer = 0
If File.Exists(FileName) Then
File.Delete(FileName)
End If
Using FileObject As New FileStream(FileName, FileMode.OpenOrCreate)
Using MStream As New MemoryStream()
Using StreamWriterObj As New StreamWriter(MStream)
Using Reader As OracleDataReader = cmd.ExecuteReader()
Dim FieldCount As Integer = Reader.FieldCount - 1
Do While Reader.Read()
sb.Append(Reader.Item(0))
For i = 1 To FieldCount
sb.Append(Delimiter)
sb.Append(Reader.Item(i))
Next
sb.Append(vbCrLf)
'Write every 25000 rows of data to the file from the buffer
If x = 25000 Then
StreamWriterObj.Write(sb.ToString().ToCharArray())
MStream.Seek(0, SeekOrigin.Begin)
MStream.WriteTo(FileObject)
sb = New StringBuilder()
CountRow = CountRow + x
x = 0
End If
x = x + 1
Loop
'Write any remaining data from the buffer to the file
StreamWriterObj.Write(sb.ToString().ToCharArray())
MStream.WriteTo(FileObject)
Reader.Close()
StreamWriterObj.Close()
MStream.Close()
FileObject.Close()
End Using
End Using
End Using
End Using
End Sub
Any and all comments and help are greatly appreciated and, also, even though this is written in VB, I'm equally as comfortable with C# solutions.
Answer: You can simplify the code greatly without losing any performance, more likely gaining instead.
There is no need to use StringBuilder - you can write directly to the StreamWriter - it will be faster since there will be no need to copy the whole data twice.
Next, there is no need to use MemoryStream - you are using it to buffer data before writing it to the disk. The same can be achieved by specifying the buffer size when creating FileStream. You should play around with the buffer size to see what is fastest on your environment - usually 4KB is used, in the sample below it is 1MB, but you have to try it on your hardware.
You should also play around with the options for opening the FileStream. You should try what impact Asynchronous and WriteThrough have on your solution.
Of course, as the other answer proposed, separating reading from the database and writing to the file into two threads might work as well. Try and measure three separate scenarios - just reading from the database (measure the query itself (ExecuteReader() and the first Read()) separately from reading the remaining rows, because it might be that most of the time is spent initially executing the query, not iterating through the data), just writing to the file (dummy data), and then the two combined. This way you will see if there is an actual reason to run the two operations in parallel.
Dim conn As New OracleConnection()
...
Sub DBExecuteQueryWriteToFile(ByVal SQLCommand As String, ByVal FileName As String, Optional ByVal Delimiter As String = ",")
Dim cmd = New OracleCommand(SQLCommand, conn)
Dim bufferSize = 1024 * 1024 ' 1 MB
If File.Exists(FileName) Then
File.Delete(FileName)
End If
Using FileObject As New FileStream(FileName, FileMode.OpenOrCreate, FileAccess.Write, FileShare.None, bufferSize)
Using StreamWriterObj As New StreamWriter(FileObject)
Using Reader As OracleDataReader = cmd.ExecuteReader()
Dim FieldCount As Integer = Reader.FieldCount - 1
Do While Reader.Read()
StreamWriterObj.Write(Reader.Item(0))
For i = 1 To FieldCount
StreamWriterObj.Write(Delimiter)
StreamWriterObj.Write(Reader.Item(i))
Next
StreamWriterObj.WriteLine()
Loop
End Using
End Using
End Using
End Sub
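The same "let the stream's buffer do the batching" pattern can be sketched in Python with sqlite3, so it can be tried without an Oracle instance (the table and file names below are made up for the demo):

```python
import csv
import os
import sqlite3
import tempfile

def export_query_to_csv(conn, sql, path, delimiter=","):
    """Stream a query result to a CSV file row by row; the file's
    1 MB buffer replaces the manual 25000-row batching."""
    cur = conn.execute(sql)
    with open(path, "w", newline="", buffering=1024 * 1024) as f:
        writer = csv.writer(f, delimiter=delimiter)
        for row in cur:  # the cursor fetches incrementally, not all at once
            writer.writerow(row)

# Demo data in an in-memory database (made-up table).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])
out_path = os.path.join(tempfile.gettempdir(), "demo_export.csv")
export_query_to_csv(conn, "SELECT * FROM t ORDER BY id", out_path)
```

The design point carries over to the VB version: keep the per-row writes cheap and let one appropriately sized buffer absorb them, rather than hand-rolling a second buffering layer.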
P.S. few other issues in your code:
StringBuilder().ToString().ToCharArray() - no need to convert to char array.
new StringBuilder() - instead of reusing the memory already allocated to the existing builder, you are throwing that away and creating new one.
MStream.Seek() - you are seeking to the beginning of the stream but are not setting Length to 0. So if the second block written there is smaller than first one, junk data will be written to the file. | {
"domain": "codereview.stackexchange",
"id": 6138,
"tags": "vb.net"
} |
Complexity of factoring products of distinct prime numbers | Question: Problem: Input is an integer number $x$ that we know factors as $p_{i_1}\cdot p_{i_2}\ldots p_{i_n}$, where the $p_{i_j}$'s are distinct prime numbers. Output is the above factorization of $x$.
Do you know any results/references for the time complexity of this factoring problem?
Note: If the $p_{i_j}$'s are not assumed distinct, then the problem is just integer factorization. This is a very special case.
Answer: As far as I know, there are no non-trivial lower bounds on factoring, and in particular, no variant is conjectured to be NP-hard. (While factoring is not a decision problem per se, you can make up a corresponding decision problem that gives the $k$th bit of the $\ell$th smallest factor.)
As far as algorithms go, there are a great many of them, including the following subexponential ones:
Quadratic sieve: this factors an integer $n$ in time $2^{O(\sqrt{\log n\log\log n})}$. There are many variants of this algorithm resulting in different constants for the big O, and one of them (due to Dixon) provably has the stated running time.
Elliptic curve method: this factors an integer $n$ with smallest prime factor $p$ in time $2^{O(\sqrt{\log p \log\log p})}$ (in general, this is the time it takes to find the factor $p$ rather than the complete factorization). No variant of this algorithm has a provable running time guarantee. The algorithm relies on the fact that random elliptic curves modulo $p$ have a certain order (number of points) distribution, and this empirically verifiable property isn't proven. (What you really need to know is the probability that the order is $\alpha$-smooth, whose conjectured behavior is satisfied for any reasonably smooth probability distribution.)
Number-field sieve: this factors an integer $n$ in time $2^{O(\sqrt[3]{\log n (\log \log n)^2})}$. No variant of this algorithm has a provable running time guarantee. The algorithm relies on the probability that a random element of a number field is smooth (in terms of its norm), a property which can be empirically verified but is unproven.
In practice, one first runs simpler algorithms which are able to find small prime factors more quickly. Some of the algorithms might run into problems if all prime factors are repeated, which is ruled out by your constraints, but I can't think of any concrete example.
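For a sense of scale, a complete pipeline for the distinct-prime case fits in a few dozen lines: a primality test to recognize when to stop, and a composite splitter. A rough Python sketch (deterministic Miller-Rabin to recognize primes, Pollard's rho to split composites; illustrative, not optimized):

```python
import math
import random

_WITNESSES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime(n):
    """Miller-Rabin; deterministic for n < 3.3e24 with this witness set."""
    if n < 2:
        return False
    for p in _WITNESSES:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in _WITNESSES:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def pollard_rho(n, rng=random.Random(0)):
    """Return a nontrivial factor of a composite n (Floyd cycle-finding)."""
    if n % 2 == 0:
        return 2
    while True:
        c = rng.randrange(1, n)
        x = y = rng.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n
            y = (y * y + c) % n
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:  # otherwise the cycle was degenerate; retry with a new c
            return d

def factor_squarefree(n):
    """Factor n = p1 * p2 * ... * pk, assuming the p_i are distinct primes."""
    if n == 1:
        return []
    if is_prime(n):
        return [n]
    d = pollard_rho(n)
    return sorted(factor_squarefree(d) + factor_squarefree(n // d))
```

Pollard's rho finds a factor $p$ in roughly $O(p^{1/2})$ expected steps, so like ECM it peels off small primes first; the subexponential methods above only become relevant once all remaining factors are large.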
There is no consensus on the conjectured complexity of the problem. Recent advances in the related discrete logarithm problem suggest that factoring might be in quasipolynomial time, and some people (Charles Rackoff, for example) wouldn't even rule out a polynomial algorithm for factoring. Others would concede that the number-field sieve algorithm might not be optimal, but that factoring should require time $2^{\Omega((\log n)^c)}$ for some $c > 0$. | {
"domain": "cs.stackexchange",
"id": 2784,
"tags": "complexity-theory, reference-request, time-complexity, factoring"
} |
Derivation of discontinuity of vector potential | Question: I'm trying to prove the equation from Griffiths' Introduction to Electrodynamics which tells us that the vector potential ${\bf A}$ is continuous across any boundary (both tangential and normal components) but its derivative somehow inherits the discontinuity already proved for ${\bf B}$, namely:
$$ \frac{\partial {\bf A}_{above} }{\partial n}- \frac{\partial {\bf A}_{below}}{\partial n} = -\mu_0 \bf{K} \tag{1}$$
Now I tried a very weird approach yesterday, but it was evidently flawed and did not take me anywhere so, this time, I tried following some of the author's suggestions.
Let us consider a cartesian coordinate system (${\bf \hat{i},\hat{j}, \hat{k} }, x,y,z$). Let us consider now a current sheet, namely a 2D-surface $\Sigma$ with surface current density $\bf{K}$ which lies in the $xy$-plane. Moreover, let's set the current parallel to the $x$-axis, therefore ${\bf K} = K \bf{\hat{i}}$. By doing so, the ${\bf\hat{k}}$ direction will be perpendicular to $\Sigma$ as well. Let us eventually rename ${\bf A}_{above/below} \equiv {\bf A}_{a,b}$.
Now, using ${\bf B = \nabla \times A }$ and computing the cross product in the RHS I can rewrite the equation:
$$ {\bf B}_{a}-{\bf B}_{b} = \mu_0 (\mathbf{K} \times \mathbf{\hat{n}}) \tag{2}$$
as
$$\nabla \times(\mathbf{A}_a - \mathbf{A}_b) = -\mu_0 K \bf{\hat{j}}$$
Well, I then compute the curls on the LHS and regroup the terms according to the Cartesian unit vectors, and by comparison with the RHS I finally get the following system of three equations:
$$ \begin{cases}
\frac{\partial A_{a,z}}{\partial y} + \frac{\partial A_{b,y}}{\partial z} = \frac{\partial A_{b,z}}{\partial y} + \frac{\partial A_{a,y}}{\partial z} \\
\frac{\partial A_{a,y}}{\partial x} + \frac{\partial A_{b,x}}{\partial y} = \frac{\partial A_{b,y}}{\partial x} + \frac{\partial A_{a,x}}{\partial y} \\
\frac{\partial A_{a,x}}{\partial z} - \frac{\partial A_{b,x}}{\partial z} + \frac{\partial A_{b,z}}{\partial x} - \frac{\partial A_{a,z}}{\partial x} = -\mu_0 K
\end{cases} \tag{3}$$
Now, if we ignore for a second the partial derivatives with respect to $x$, we can see that the equation tells us that the derivative of the components of the vector potential which are parallel to the current, taken with respect to the unit normal to the surface, does indeed inherit a discontinuity.
Now, I tried using the equation:
$$ {\bf A}_{above} = {\bf A}_{below} $$
which, expanded, gives us:
$$A_{a,i}-A_{b,i} = 0, \quad i \in \{x,y,z\} \tag{4}$$
Now going back to $(3)$ and using $(4)$, we can see that the first two equations vanish and so does the extra term in the third equation that was causing us problems. We are left with the sole equation:
$$ \frac{\partial A_{a,x}}{\partial z} - \frac{\partial A_{b,x}}{\partial z} = -\mu_0 K $$
Now, being totally honest, I can't see how this is equal to the author's result $\bf{(1)}$. Nevertheless, I cannot unsee the symmetry: in the case of $\mathbf{B}$, the component that presents the discontinuity is the one parallel to the surface but orthogonal to the current, while in the case of $\mathbf{A}$, I obtained that the component in question is the one parallel to the current. This has made me just a bit more confident in the result I got.
If somebody could tell me if my procedure is correct and, in that case, if the result is indeed equivalent to Griffiths', I'd really appreciate it. Moreover, if someone wants to propose a proof of his own, that would also be much appreciated, since my methods are far from rigorous (almost primitive, I'd say) and lack all of that physical intuition which makes those kinds of proofs much easier and which I'd like to possess myself.
Answer: Start with $$\mathbf {B} = \nabla \times \mathbf A \tag{1}\label{1}$$ then assuming the Coulomb gauge $$\nabla \cdot \mathbf A = 0 \tag{2}\label{2}$$ you get
$$\nabla \times (\nabla \times \mathbf A) = \nabla (\nabla \cdot \mathbf A) - \nabla ^2 \mathbf A \\
=- \nabla ^2 \mathbf A \tag{3}\label{3}.$$ Therefore
$$\nabla \times \mathbf B = - \nabla ^2 \mathbf A = \mu_0 \mathbf J \tag{4}\label{4}.$$
Eq. $\eqref{4}$ holds both above and below the interface.
For a surface current that is, say, one in the $xy$ plane $$\mathbf J = \mathbf K \delta (z) \tag{5}\label{5}$$ and then
$$\nabla ^2 \mathbf A = -\mu_0 \mathbf K \delta (z) \tag{6}\label{6}.$$
Now integrate from below to above over an infinitesimal segment along $\hat z$:
$$\frac{\partial}{\partial z} \left(\mathbf {A_a - A_b}\right)= -\mu_0 \mathbf K \tag{7}\label{7},$$
which is what you wanted to prove.
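Spelling out the integration step between Eqs. (6) and (7) (an intermediate line I am adding for clarity):

$$\int_{-\epsilon}^{+\epsilon} \frac{\partial^2 \mathbf A}{\partial z^2}\, dz = \left.\frac{\partial \mathbf A}{\partial z}\right|_{+\epsilon} - \left.\frac{\partial \mathbf A}{\partial z}\right|_{-\epsilon} = -\mu_0 \mathbf K \int_{-\epsilon}^{+\epsilon} \delta(z)\, dz = -\mu_0 \mathbf K$$

The $\partial^2/\partial x^2$ and $\partial^2/\partial y^2$ parts of $\nabla^2 \mathbf A$ are bounded across the interface, so their integrals vanish as $\epsilon \to 0$ and only the $z$ derivative survives.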
Added details:
$\mathbf A$ is continuous everywhere, and since the discontinuity is along the $\hat z$ direction, both $\frac{\partial \mathbf A}{\partial x}$ and $\frac{\partial^2 \mathbf A}{\partial x^2}$ are also continuous, similarly in the $\hat y$ direction.
Now let $\mathbf K = K\hat y$; this implies that $\nabla^2A_y=-\mu_0K\delta(z)$. Then, integrating along $\hat z$ across the interface, you only have to show that the contribution of $\frac{\partial^2 A_y}{\partial x^2}+\frac{\partial^2 A_y}{\partial y^2}$ vanishes, which is true because of its continuity (it is bounded, so its integral over an infinitesimal interval is zero). | {
"domain": "physics.stackexchange",
"id": 96899,
"tags": "electromagnetism, magnetostatics"
} |
Advantages of high-energy heavy-ion collisions over proton-proton collisions? | Question: Some high-energy experiments (RHIC, LHC) use ion-ion collisions instead of proton-proton collisions. Although the total center-of-mass energy is indeed higher than p-p collisions, it might happen that the total energy per nucleon is actually lower. What are the advantages of using ion-ion collisions (e.g. gold-gold or lead-lead) instead of proton-proton collisions, considering the same accelerator?
Answer: As a clarification, the energy per nucleon is always lower: for example, currently in the LHC the proton top energy is 3.5 TeV. The Pb energy is 3.5 TeV times Z, so the energy per nucleon is 3.5·Z/A TeV, and A is greater than Z for every nucleus (except the proton, where both are equal to one).
But the goal of ion-ion collision is not to increase the total energy or the energy per nucleon: it is to obtain a different type of collision.
It should be noted that in a proton-proton collision, the energy involved in the actual collision process is variable: each quark and gluon carries a fraction of the energy of the proton, and a hard collision involves a collision between a quark/gluon of one proton and a quark/gluon of the other.
In the case of ion-ion collision you have the same process: the energy is shared by the protons/neutrons and they can have different energies.
The goal of such collisions is also to obtain a volume (bigger than in a p-p collision) with a very high energy density. In such a volume, a "state of matter" called the quark-gluon plasma may be created. The study of this QGP is one of the main goals of the ALICE experiment at the LHC.
A few references:
The ALICE experiment at CERN
Live control screen of the LHC now running as a Pb-Pb collider at 574 TeV in the center of mass | {
"domain": "physics.stackexchange",
"id": 67988,
"tags": "particle-physics, large-hadron-collider, collision, heavy-ion"
} |
Is apparent competition a suitable term in situations where one species is not negatively affected? | Question: When two prey (or resource) species share a common predator (consumer), they can be in apparent competition. An increase in one prey species can increase the common predator density, which negatively influences the other prey species.
A textbook (Molles, Seventh edition, ISBN-10: 0077837282) gives an example of apparent competition that confuses me. It gives another definition "one species facilitating populations of a predator or herbivore of the second species." Thus, one of the species does not have to be consumed by the predator/herbivore.
The specific example is that there are two plant species. Species A becomes the habitat for the herbivore (but is not consumed by the herbivore). Species B is consumed by the herbivore. Therefore, an increase in species A will increase the herbivore density and has negative effects on species B. I thought that to form an apparent competition relationship, it has to be mutually negative relationship (-,-). In this example, there is no mutually negative relationship. Furthermore, the plants do not share the common herbivore.
This is simply a definition question. Different sources have variable definitions, but is the example really considered an apparent competition?
Answer: Competition-like interactions where only one species suffers ([0,-]) are usually labelled amensalism, so a suitable term for the situation you describe would be "apparent amensalism". I hadn't actually seen this term in use before, but a quick Google search reveals that it is in fact used. One example is in Jaworski et al. (2015):
Indirect interactions mediated by shared natural enemies are known as apparent competition, if negative interactions are reciprocal, or apparent amensalism, if one prey suffers from the presence of the other prey (Holt 1977; Holt and Lawton 1994). | {
"domain": "biology.stackexchange",
"id": 6349,
"tags": "ecology, terminology"
} |
Does common X-like picture represent one doubled chromosome or two homologous chromosomes? | Question: What is shown in the common X-like picture of a chromosome?
Here is the image from Wikipedia:
Are (for example) lower petals homologous to each other?
If yes, then why is (1) labelled a chromatid: one-half of two identical threadlike strands of a replicated chromosome? Homologous parts are similar but not identical.
If no, then why are human chromosomes often denoted by Xs:
Do we have 23 Xs (females) or 46 Xs?
Answer: The "X" you see is a chromosome. The two strands that forms the X-shaped chromosome are called chromatids. As they are bound together (by the centromer), they are called sister chromatids. Note that chromosomes look like that only during the metaphase. Note also, that not all chromosomes have a centromer that is in the middle of the chromosome as it is obvious from the picture in your question.
Many species (including humans) have two sets of each chromosome. If you add the other chromosome in the picture, you would see a pair of chromosomes. One chromosome of the pair is said to be homologous to the other one. Below is a karyogram from a human metaphase (I let you figure out the sex) showing each pair of chromosomes.
Sister chromatids are identical (except for mutation during the last chromosomal replication). Homologous chromosomes are similar but not identical. Crossover occurs in between chromatids of homologous chromosomes.
Further comment on your figure
There are 23 pairs of chromosomes, that is 46 chromosomes. Your picture shows only one chromosome per pair, that is your picture shows only 23 chromosomes. In other words, your picture shows the haploid genome and not the diploid genome. | {
"domain": "biology.stackexchange",
"id": 5654,
"tags": "cell-biology, chromosome"
} |
calc new pose according to twist | Question:
Is there any library to do this?
input geometry_msgs::Pose and geometry_msgs::Twist,
output new geometry_msgs::Pose.
I found tf::addDelta(), but it's a little bit different:
/// Starting from a Pose from A to B, apply a Twist with reference frame A and reference point B, during a time t.
geometry_msgs::Pose addDelta(const geometry_msgs::Pose &pose, const geometry_msgs::Twist &twist, const double &t) __attribute__((deprecated));
and it's deprecated.
so is there any other library to calc new pose according to twist?
Originally posted by rubick on ROS Answers with karma: 21 on 2015-07-26
Post score: 1
Answer:
Assuming that your Pose and Twist are already in the same reference frame, simply add the linear portion of the Twist to your pose's position.
Then, convert your Pose's orientation to a tf::Quaternion. Store the angular portion of the Twist as a second quaternion using setRPY() and multiply the two Quaternions together.
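Those two steps can be sketched in plain Python (tuples instead of geometry_msgs types; the RPY-to-quaternion helper and the choice of multiplication order are my assumptions, not from tf):

```python
import math

def quat_from_rpy(roll, pitch, yaw):
    """Quaternion (x, y, z, w) from roll/pitch/yaw, intended to match tf's setRPY (ZYX)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy,
            cr * cp * cy + sr * sp * sy)

def quat_mul(a, b):
    """Hamilton product of two quaternions in (x, y, z, w) order."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
            aw * bw - ax * bx - ay * by - az * bz)

def apply_twist(position, orientation, linear, angular, dt):
    """New (position, orientation) after applying a twist for dt seconds.
    position: (x, y, z); orientation: quaternion (x, y, z, w);
    linear/angular: the twist, assumed to be in the same frame as the pose."""
    new_position = tuple(p + v * dt for p, v in zip(position, linear))
    dq = quat_from_rpy(*(w * dt for w in angular))
    # Apply the delta rotation in the fixed frame (swap the order for body frame).
    new_orientation = quat_mul(dq, orientation)
    return new_position, new_orientation

# Move at 1 m/s along x while yawing at 90 deg/s, for one second:
print(apply_twist((0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0),
                  (1.0, 0.0, 0.0), (0.0, 0.0, math.pi / 2), 1.0))
```

This is only a first-order integration, which is what the answer above describes; for large angular rates or long time steps you would want something more careful.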
Originally posted by Adam Allevato with karma: 194 on 2015-07-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 22296,
"tags": "ros, pose, transform"
} |
If we feel it's hotter when humidity increases, then why do we feel it's colder when inside water? | Question: When the humidity in the air is high, we sweat more and feel it's hotter than when the humidity is lower.
So why don't we feel it's hotter when we go inside water, where the water content is much higher than in the air, than when we're not inside the water?
Is it just because it's liquid and not a gas?
Answer: You feel cold when heat is flowing from you to the surroundings, your body tries to burn more energy to keep up your temperature, so you shiver.
Water conducts heat much more effectively than air (more than 100× as effectively), so even with water at the same temperature as the air you will lose a lot more heat and feel cold.
When your body is too hot it loses energy most efficiently by sweating. It releases water which evaporates; the energy needed for the water to go from liquid to gas comes from your skin, which is then cooled.
In humid conditions it is harder for the water to evaporate (because there is already a lot of gaseous water in the air) so you can't cool as efficiently and so feel hotter. | {
"domain": "physics.stackexchange",
"id": 31075,
"tags": "thermodynamics, everyday-life, biology"
} |
Autoware.Auto with LGSVL: mpc acceleration to throttle | Question:
Hi, in Autoware.Auto the output of the MPC controller is acceleration. To control the LGSVL vehicle we need to input throttle, though. Usually this type of conversion is not trivial because the relation between acceleration and throttle is complex. But from the lgsvl_interface code it seems like there's a 1-to-1 conversion, with just scaling to account for different limits. Am I missing something?
Originally posted by soldierofhell on ROS Answers with karma: 13 on 2021-02-17
Post score: 0
Answer:
Using a "higher-level" control like acceleration to send messages to the vehicle interface level was an intentional design decision. We want the message API between the controller and the vehicle interface to be somewhat vehicle-agnostic. As for the complexity of acceleration to pedal position, yes, it usually is more complex than what is represented in lgsvl_interface. However, this vehicle interface layer is designed to communicate with a simulator that is capable of using a wide range of vehicle models and dynamics. We didn't want to try to represent all of the possible vehicle configurations in this single interface so we just implemented something generic that works with all vehicle to greater and lesser degrees depending on the vehicle model. A vehicle interface layer for a real-world vehicle would contain more complex mappings.
Originally posted by Josh Whitley with karma: 1766 on 2021-02-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by soldierofhell on 2021-02-18:
I agree that doing this for each and every vehicle, or even preparing some generic procedure (like in Apollo), is not necessarily the "duty" of the Autoware stack, but having at least some reference vehicle that is well "tuned" is IMO necessary to e.g. test the performance of different controllers. Nevertheless, at least I have a confirmation of my investigation :) Thank you
"domain": "robotics.stackexchange",
"id": 36102,
"tags": "ros2"
} |
Fibonacci sequence and nth number with reduce() | Question: const generateFibonnaciSequence = (n) => {
return [
(arr = new Array(n).fill(1).reduce((arr, _, i) => {
arr.push(i <= 1 ? 1 : arr[i - 2] + arr[i - 1]);
return arr;
}, [])),
arr[n - 1],
];
};
const [fibonacciSequence, fibonacciNthNumber] = generateFibonnaciSequence(n);
My idea is to return an array holding fibonacci sequence up to n in index 0 and fibonnaci's n-th value in index 1. Can someone help me with a prettier way of constructing this function. I'd like to avoid holding the array in temp arr variable if possible, but still use a single expression for return statement.
Answer: Quick review
Try to avoid very long names: "generateFibonnaciSequence" can be "fibonnaciSequence".
The new array arr is undeclared and thus will be in the global scope. Chrome has recently improved how it handles optional parameters so there is no penalty to declare the array as an argument scoped to the function.
It seems to me that the reducer can return the last Fibonacci number rather than an array, which will make the code a little cleaner.
You can also reduce the iteration by one because you are starting the sequence at 1 rather than 0.
Filling the array is a lot of extra work as you only need the first value set.
Putting all that together you can do it all with just the one array (ignoring the returned array)
const fibSeq = (n, arr = new Array(n - 1)) => [
arr, arr.reduce((fib, _, i) => arr[i + 1] = i ? fib + arr[i - 1]: 1, arr[0] = 1)
];
The reduction in overheads makes it run about 4 times quicker.
However there is a dangerous behavioral change. If n is 0 it will throw when trying to create an array with a negative value, and n = 1 will return [[1,1], 1] rather than [[1], 1].
If you need it to return for these values you can check for 0 and 1 returning the static values when needed.
const fibSeq = (n, a = --n > 0 && new Array(n)) =>
a? [a, a.reduce((f, _, i) => a[i+1] = i? f + a[i-1]: 1, a[0] = 1)]: n? [[],]: [[1], 1];
"domain": "codereview.stackexchange",
"id": 40789,
"tags": "javascript, fibonacci-sequence"
} |
Synthesis Golf I: Sodium Fluvastatin | Question: A full FAQ exists on meta.chem.SE explaining the premise of synthesis golf, why we're doing this, and the ground rules. Please read this before posting an answer.
The target for this first round of synthesis golf is sodium fluvastatin, an artificial statin drug used to treat hypercholesterolemia and in the prevention of cardiovascular disease. You must provide a synthetic route to the target molecule.
In addition, to narrow the scope of this question:
the synthesis must include a way of making the indole (i.e. no buying of the indole core and carrying out multiple functionalisation reactions).
all of the stereo centres must be set. Relative stereochemistry only is fine (the 1,3-syn relationship between the hydroxyls), but if you can think of a way to set them with absolute configuration, then great.
Answer: Edit: step count is 15, longest linear sequence of 9 steps
Just to (hopefully) get the ball rolling, here's something I scribbled down. I am sure people here will have better answers. And I have not done a lot of literature research to see whether the steps are plausible, so there might be fatal flaws... but hopefully not; as far as I know/can tell, everything here is relatively sensible.
Retrosynthesis first. I chose a Horner–Wadsworth–Emmons reaction to make the (E)-double bond, and something similar to the Reissert synthesis for the indole. It's not exactly the same because I didn't acylate the benzylic carbon, but the initial disconnection is similar. This leads to the key building blocks:
I chose the C(sp2)-C(sp3) bonds in 2 to disconnect. One can be made by SNAr with the amazing SNAr substrate 1-fluoro-2-nitrobenzene (5), and the other by Pd cross-coupling (I'm guessing cheaper ways probably exist). I've never actually seen SNAr reactions with enolates as nucleophiles, but I guess that if it doesn't work, SRN1-type conditions should still make that step possible:
As for 3, I figured it could be made from a symmetrical intermediate:
Forward synthesis
Synthesis of 3, starting from cyclopentadiene:
(i) $\ce{BH3.THF}$, then $\ce{H2O2, NaOH}$; (ii) TBSCl, imidazole; (iii) $\ce{O3}$, then $\ce{H2O2}$; (iv) $\ce{TMSCHN2}$; (v) enzymatic desymmetrisation (I did not look up specifically which, but I think it should be possible); (vi) 7, NaH
Next, the synthesis of 2, starting from methylglyoxal:
(vii) ethylene glycol, TsOH; (viii) Pd cat., ligand, $\ce{NaO^{t}Bu}$, 6; (ix) NaH, 5
Forming the indole:
(x) Zn, AcOH; (xi) $\ce{NaH, ^{i}PrI}$; (xii) aq. HCl
and finally the HWE, the TBS deprotection, and the last redox step, a diastereoselective boron reduction. I wasn't entirely sure if it would work on an α,β-unsaturated ketone, but Comp Org Synth II, section 8.01 has several examples of it being used on α,β-unsaturated ketones.
(xiii) LiHMDS, 3; (xiv) $\ce{TBAF, H2O}$; (xv) $\ce{Et2B(OMe), NaBH4}$ | {
"domain": "chemistry.stackexchange",
"id": 12018,
"tags": "synthesis-golf"
} |
Where do the electron and antineutrino come from in beta decay? | Question: I was studying nuclear reactions and similar stuff, but stumbled upon this doubt:
In the process of beta decay, where a neutron transforms into a proton, an electron and an antineutrino, where do the electron and antineutrino come from? (Do they already exist within the neutron?)
I do not believe that this is a duplicate of In nuclear chemistry, how does a neutron split to form a proton and an electron?
Answer: A down quark of the neutron emits a virtual W- boson which becomes the electron and antineutrino pair. | {
"domain": "chemistry.stackexchange",
"id": 7719,
"tags": "radioactivity"
} |
Questions about Maxwell capacitance matrix and reference point selection | Question: Let's consider a system of n conductors. If I'm not mistaken, the Maxwell capacitance matrix tells us that the following system of equations applies:
\begin{align}
Q_1 &= C_{11}V_1 + C_{12}V_{12} + \dots + C_{1n}V_{1n} \\ &\;\;\vdots \notag \\ Q_n &= C_{1n}V_{n1} + C_{2n}V_{n2} + \dots + C_{nn}V_{n}
\end{align} where $C_{ii}$ represents the self-capacitance of the $i$-th conductor, $C_{ij}$ represents the capacitance between the $i$-th and $j$-th conductors, $V_{i}$ represents the voltage between the $i$-th conductor and a reference point at infinity, and $V_{ij}$ represents the voltage between the $i$-th and $j$-th conductors.
If we now "move" each diagonal element of our system and we define $Q_{i}'$ as $Q_{i}' = Q_{i} - C_{ii}V_{i} $ we get the following equations:
\begin{align}
Q_1' &= C_{12}V_{12}+ C_{13}V_{13} + \dots + C_{1n}V_{1n} \\ &\;\;\vdots \notag \\ Q_n' &= C_{1n}V_{n1} + C_{2n}V_{n2} + \dots + C_{n-1n}V_{nn-1}
\end{align}
If we now select an arbitrary reference point R, then $V_{ij} = V_{iR} - V_{jR}$. If we now return to our system, then:
\begin{align}
Q_1' &= C_{12}(V_{1R} - V_{2R})+ C_{13}(V_{1R} - V_{3R}) + \dots + C_{1n}(V_{1R} - V_{nR}) \\ &\;\;\vdots \notag \\ Q_n' &= C_{1n}(V_{nR} - V_{1R}) + C_{2n}(V_{nR} - V_{2R}) + \dots + C_{n-1n}(V_{nR} - V_{n-1R})
\end{align} which means:
\begin{align}
Q_1' &= (C_{12} + \dots + C_{1n})V_{1R} - C_{12}V_{2R} - \dots - C_{1n}V_{nR} \\ &\;\;\vdots \notag \\ Q_n' &= -C_{1n}V_{1R} - C_{2n}V_{2R} - \dots + (C_{1n}+\dots +C_{n-1n})V_{nR}
\end{align}
Which means that:
$$
\begin{bmatrix}
Q_1' \\
\vdots \\
Q_n'
\end{bmatrix} =
\begin{bmatrix}
b_{11} & \dots & b_{1n} \\
\vdots & \ddots & \vdots \\
b_{1n} & \dots & b_{nn}
\end{bmatrix}
\begin{bmatrix}
V_{1R} \\
\vdots \\
V_{nR}
\end{bmatrix}
$$
where
\begin{align}
b_{ii} &= C_{1i} + \dots + C_{ni} \\
b_{ij} &= -C_{ij} \\
\end{align}
and:
\begin{align}
B_{n \times n} = \begin{bmatrix}
b_{11} & \dots & b_{1n} \\
\vdots & \ddots & \vdots \\
b_{1n} & \dots & b_{nn}
\end{bmatrix}
\end{align}
Finally we get the following equation:
$$
\begin{bmatrix}
b_{11} & \dots & b_{1n} \\
\vdots & \ddots & \vdots \\
b_{1n} & \dots & b_{nn}
\end{bmatrix}^{-1}
\begin{bmatrix}
Q_1' \\
\vdots \\
Q_n'
\end{bmatrix} =
\begin{bmatrix}
V_{1R} \\
\vdots \\
V_{nR}
\end{bmatrix}
$$
Now it seems to me that the left side of the equation is independent of the selection of the reference point R. If true, I guess this would mean that the right side of the equation is the same however we pick the point R, which would mean that the voltage between the $i$-th conductor and any point R is the same, which obviously doesn't seem to be true. Did I make an error? How can I interpret this result?
Answer: @HEMMI's comment already contained the essence of the answer.
Small caveat: you are mixing up the Maxwell capacitance matrix and the mutual capacitance matrix. Your matrix $B$ is what is usually called the Maxwell capacitance matrix, and $C$ is the mutual capacitance matrix.
Your error is in the inversion of $B$: you cannot do that, since $B$ is singular. You can see that the line of constant vectors, i.e. $\mathbb R (1, ..., 1)$, is in its kernel, which reflects the independence of the reference point. This is why, if you want to solve the equation $Q = BV$, the solutions are only ever given up to an additive constant vector.
Note that, conversely, for a solution to exist, $Q$ must be constrained. In fact, it is easy to see that the constraint is precisely the conservation of total charge $Q_1+...+Q_n$, which can be checked by direct calculation, or by noting that you have a closed capacitor network, so charge stays within it.
In fact, you can prove that the rank of $B$ is $n-1$, which shows that the additional constant vector is the only ambiguity and the net neutral charge is the only constraint.
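A quick numerical check of the singularity (a sketch; the mutual capacitances are random made-up values):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)

# Random symmetric mutual-capacitance matrix C (off-diagonal entries only).
C = rng.uniform(0.5, 2.0, size=(n, n))
C = (C + C.T) / 2
np.fill_diagonal(C, 0.0)

# Build B: b_ii = sum_j C_ij, and b_ij = -C_ij for i != j.
B = np.diag(C.sum(axis=1)) - C

print(B @ np.ones(n))            # ~ [0 0 0 0]: constant vectors are in the kernel
print(np.linalg.matrix_rank(B))  # 3, i.e. n - 1
```

Note that $B$ built this way is exactly a weighted graph Laplacian, which is why the rank-$(n-1)$ statement holds for any positive mutual capacitances.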
Hope this helps.
Answer to comment
Yes, your previous problem assumes you have a ground. You were implicitly saying that $C_{ii} = 0$, and when it isn't zero, you are effectively connecting the $i$th conductor to the ground via a capacitor. Note that now that charge can leak into the ground, you no longer have the constraint of conserved charge, so $C$ will not be singular in general; you'll now be able to invert it.
You might be worried as this brings us back to the original problem, since charge is affected by a constant shift of voltage. However, in this case, the voltages are unambiguously defined as differences with respect to the ground, so you should not expect such an invariance. You can retrieve the invariance by explicitly adding the ground. Mathematically, you are adding an extra line and column to $C$ to add the ground, and modifying accordingly the matrix. However, it isn't more practical, and having an invertible $C$ is pretty handy.
Let's look at an example of this. In practice, your conductors are bounded, so there is a natural reference point, which is infinity, taken as the ground. Take for example a conducting ball (1) of radius $a$ contained in a conducting, concentric, spherical shell (2) of inner radius $b$ and outer radius $c$ with $a<b<c$. You can calculate:
$$
\begin{pmatrix}
Q_1 \\
Q_2
\end{pmatrix}
= 4\pi\epsilon_0\begin{pmatrix}
\frac{ab}{b-a} & -\frac{ab}{b-a}\\
-\frac{ab}{b-a} & \frac{ab}{b-a}+c
\end{pmatrix}
\begin{pmatrix}
V_1 \\
V_2
\end{pmatrix}
$$
As you can see, only the shell has a self-capacitance, making $C$ invertible. Thus $V$ cannot give the same $Q$ after shifting its origin. The absence of $C_{11}$ is precisely due to shielding. In general, shielding can be identified from the block structure of $C$.
Note that it isn't trivial to prove that the capacity matrix of $n$ conductors can be obtained by mapping it to an effective capacitor network. This is possible iff $C$ is a diagonally dominant matrix with negative off diagonal entries and positive diagonal entries. It would be surprising if this were always the case in electrostatics. | {
"domain": "physics.stackexchange",
"id": 89882,
"tags": "electromagnetism, electrostatics, capacitance, conductors"
} |
Is it possible to overlay a 2D image over a pointcloud in RViz2 | Question:
Let's say I have a 2D image, taken from a video. Let's say I also have a Pointcloud, generated by a radar sensor. Would it be possible to project this 2D image over the Pointcloud, such that the Pointcloud acts like a 3D model (in which the points are vertices of said model) and the image acts as the texture for the model? If it is possible, is there anything online that could tell me how to do it?
EDIT: I also saw a question (https://answers.ros.org/question/218502/rviz-display-image-in-3d-scene/) and the top answer said something about being able to use an image to texture a Mesh object. Would it be possible to use the radar information I used to create a Pointcloud in order to create a Mesh? The radar data is represented as polar coordinates, which we then convert to cartesian to generate the Pointcloud)
Originally posted by NN_SS on ROS Answers with karma: 3 on 2019-04-23
Post score: 0
Answer:
If you want to try to project an RGB video onto the point cloud: I used to do that with an Asus Xtion, which provided a point cloud and an RGB image stream. I was able to visualize the video on the point cloud by adding a Camera display and choosing "Image Rendering" as overlay while also visualizing the point cloud. If you don't have a video stream and you want to project a constant image, you can convert your image to sensor_msgs/Image as in here, then do the same thing. Hope this helps.
Originally posted by ozzdemir with karma: 28 on 2019-04-23
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by NN_SS on 2019-04-23:
This sounds like what I want to do (I have a video stream and a set of radar data that is synced with the video) do you mind taking me through more specific steps, I'm kind of new to ROS/RViz.
Thank you so much for the quick response!
Comment by ozzdemir on 2019-04-23:
This is the configuration I was using, but its been a long time. I can't test it now since I don't have the hardware now but if you still need help and can share a bag file, I may check via it.
Comment by NN_SS on 2019-04-23:
I think I got it, thanks so much for the help!
Comment by ozzdemir on 2019-04-23:
You are welcome, don't forget to accept as answer.
Comment by akashbaskaran on 2022-08-05:
The link seems to be broken. Is there any other link for it?
I am facing a similar issue. I have a camera image selected to overlay and a lidar pointcloud. They seem to be in 2 different windows. How do i combine them? | {
"domain": "robotics.stackexchange",
"id": 32925,
"tags": "ros, 2d-image, overlay, ros-crystal, pointcloud"
} |
What is the accuracy majority class classifier? | Question: I have an SFrame and a model:
train_data,test_data = products.random_split(.8, seed=0)
selected_words_model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=selected_words,
validation_set=test_data)
After computing the accuracy of the model with `selected_words_model.evaluate(test_data)`, I'm asked "What is the accuracy majority class classifier on this task?" Yet I don't even know what this "accuracy majority class classifier" means; shouldn't it be "accuracy of the majority class classifier"?
Here is my attempt.
All these materials come from this Coursera ML Foundations course exercise.
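For concreteness, the majority-class baseline can be computed by hand (a sketch with hypothetical labels; the helper name is mine, not from the course):

```python
from collections import Counter

def majority_class_accuracy(labels):
    """Accuracy of the classifier that always predicts the most common label."""
    counts = Counter(labels)
    most_common_count = counts.most_common(1)[0][1]
    return most_common_count / len(labels)

# Hypothetical sentiment labels (1 = positive, 0 = negative).
y_test = [1, 1, 1, 0, 1, 0, 1, 1]
print(majority_class_accuracy(y_test))  # 0.75
```

With an imbalanced label distribution this baseline can be surprisingly high, which is why it matters as a point of comparison.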
Answer: I suspect you are right that there is a missing "of the," and that the "majority class classifier" is the classifier that predicts the majority class for every input. Such a classifier is useful as a baseline model, and is particularly important when using accuracy as your metric. This matches what your notebook comments in the next bullet, so that's likely what was intended. | {
"domain": "datascience.stackexchange",
"id": 11975,
"tags": "machine-learning, classification"
} |
Did we get near Pluto with our current technology? | Question: I was sent from Skeptic to here.
We all know that NASA has successfully carried out the mission for getting near Pluto.
More here.
It is very difficult for me to accept that we already have the technology necessary to transmit data over such distances. They transmit data over a billion miles, and still we haven't been able to cover most of the Earth with the internet? 4.4 billion people still don't have access to the internet.
Cellphone coverage is limited, and you don't get reception for your phone all the time. There are many other areas where transmitters have their limits here on Earth. So what is the catch?
Is NASA really able to transmit data over a billion miles (an unimaginable distance), yet we still struggle to provide internet/phone coverage on Earth?
Answer: The "catches" as you call them are at least threefold:
Cellphones require transmission and receiving hardware that fit in your hand / desktop computer and be almost omnidirectional, i.e. able to receive transmissions from any direction. In contrast, deep space probes are tracked with extremely directional, huge antennas of hundreds of meters diameter scale; often these are grouped into phased arrays yielding effective antenna sizes of the scale of continents. Even the transmission hardware on the spacecraft itself far exceeds the power and antenna size (therefore gain) of any cellphone.
Cellphones and internet require large datarates. However, if datarate is no object, you can transmit over arbitrary distances simply by slowing the transmission down and using error correcting coding. This is indeed so for New Horizons: the transmission rate is two kilobits per second - a 1980s telephone modem speed (remember Kermit?). The reason this works is the famous Shannon noisy channel coding theorem: each signal to noise ratio defines a quantity called the channel capacity in bits per second and, as long as the information transmission rate does not exceed this capacity, there exists an error correcting code that will reduce the error probability to as near as you like to nought if you're willing to use long enough data frames to implement the code. The capacity decreases along with the signal to noise ratio, but it is always positive (though small for low signal to noise ratios). This is why we can still receive transmissions from Voyager 1 and 2. Voyager 1 currently transmits at 160 bits per second: a 1960s teletype printer rate!
We have a clear line of sight to New Horizons. Not so if your transmission tower is behind a hill and you're on the lee side.
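The capacity argument in point 2 can be made concrete with a toy calculation (the bandwidth and SNR figures below are invented for illustration, not actual deep-space link parameters):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """AWGN channel capacity C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Capacity shrinks with SNR but never reaches zero, so arbitrarily weak
# links can still carry data if you are willing to slow down enough:
caps = [shannon_capacity(1e6, snr) for snr in (1.0, 1e-2, 1e-4)]
assert caps[0] > caps[1] > caps[2] > 0
```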
As user DJohnM notes:
Another example: the GPS Navigation Message is transmitted at around 60 bits per second! | {
"domain": "physics.stackexchange",
"id": 23561,
"tags": "universe, data"
} |
Transmitted Power and Poynting's theorem contradiction? | Question: I was reading Chapter 12.1 in Hayt & Buck "Engineering Electromagnetics" 8-th edition. Here they discuss the reflection of uniform plane waves at normal incidence.
They derived the following expressions for the reflection and transmission coefficients:
$\Gamma = \frac{E_{x10}^{-}}{E_{x10}^{+}} = \frac{\eta_2 - \eta_1}{\eta_2 + \eta_1}$
$\tau = \frac{E_{x20}^{+}}{E_{x10}^{+}} = \frac{2\eta_2}{\eta_1+\eta_2} = 1 + \Gamma$
where $\eta_1, \eta_2$ are the intrinsic impedances of the two materials (which may be complex), and the electric field is uniform in the x direction, parallel to the interface.
Then they consider the power reflected and the power transmitted.
They use Poynting's theorem in phasor form:
$\left<S\right> = \left|\frac{1}{2}\Re\left\{\mathbf{E}_s \times \mathbf{H}_s^*\right\}\right|$.
From here, they use the reflection and transmission coefficients and the intrinsic impedances to conclude that
$\left<S_{1r}\right> = \left|\Gamma\right|^2\left<S_{1i}\right>$
$\left<S_{2}\right> = \frac{\Re\left\{1/\eta_2^*\right\}}{\Re\left\{1/\eta_1^*\right\}}\left|\tau\right|^2\left<S_{1i}\right>$
On the other hand, conservation of energy implies that the transmitted power must be the incident power minus the reflected power, so another expression is
$\left<S_{2}\right> = \left(1 - \left|\Gamma\right|^2\right)\left<S_{1i}\right>$
But the two forms of the coefficients obtained for the transmitted power are not in general equal. From my calculations, they would be equal if and only if the two intrinsic impedances have a ratio which is a real number. But I don't see any reason for this to be the case in general, and yet none of the steps seem to make that assumption.
So, where does the contradiction arise?
Answer: I came back to this problem a few years later, and I finally figured out what was wrong.
First of all, the Poynting vectors are indeed conserved across the boundary (placed at $z = 0$), but you cannot use the $\vec{S}_{1i}$ and $\vec{S}_{1r}$ vectors independently in the energy conservation equation. This is because there is interference between the incident and reflected waves, which causes a standing wave pattern in medium 1, whose average intensity is position-dependent.
In general, interference effects cause the intensity of the sum of two waves to be different from the sum of the intensities of the two waves. The interference means that you cannot simply say $\langle S_{1i}\rangle = \langle S_{1r}\rangle + \langle S_2\rangle$ or $(1 - |\Gamma|^2)\langle S_{1i}\rangle = \langle S_2\rangle$
Let $\vec{S}_1$ be the total Poynting vector in medium 1, and $\vec{S}_2$ the same for medium 2.
By definition, $\vec{S}_1 = \vec{E}_1 \times \vec{H}_1$ where $\vec{E}_1$, $\vec{H}_1$ are the total electric and magnetic fields in medium 1. Assuming the interface between medium 1 and 2 has no current sheets ($\vec{K}_s = \vec{0}$), then the boundary conditions imply that $\vec{E}_1 = \vec{E}_2$ and $\vec{H}_1 = \vec{H}_2$. Therefore we clearly must also have $\vec{S}_2 = \vec{S}_1$, and intensity is conserved across the interface (at $z = 0$).
This means we should still be able to say that $\langle S_1\rangle = \langle S_2\rangle$ (at $z = 0$, again). So how do we correctly find $\langle S_1\rangle$ given $\langle S_{1i}\rangle$?
The incident wave in medium 1 propagates in the $\hat{z}$ direction, with $\vec{E}_{1i}$ in $\hat{x}$ and $\vec{H}_{1i}$ in $\hat{y}$. Moreover, in phasor form we know that $\mathbf{H}_{1i} = \frac{\mathbf{E}_{1i}}{\eta_1}$
The reflected wave in medium 1 propagates in the $-\hat{z}$ direction, with $\vec{E}_{1r}$ again in $\hat{x}$. This implies $\vec{H}_{1r}$ is in $-\hat{y}$, because the direction of the wave has been flipped. In phasor form, we thus have $\mathbf{H}_{1r} = -\frac{\mathbf{E}_{1r}}{\eta_1}$. In addition, we know that $\mathbf{E}_{1r} = \Gamma\mathbf{E}_{1i}$, and hence $\mathbf{H}_{1r} = -\Gamma\mathbf{H}_{1i}$.
The total phasors in medium 1 are then $\mathbf{E}_1 = \mathbf{E}_{1i} + \mathbf{E}_{1r} = (1 + \Gamma)\mathbf{E}_{1i}$ and $\mathbf{H}_1 = \mathbf{H}_{1i} + \mathbf{H}_{1r} = (1 - \Gamma)\mathbf{H}_{1i}$.
Therefore the Poynting vector in phasor form is $\mathbf{S}_1 = \frac{1}{2}\mathbf{E}_1\times\mathbf{H}_1^* = \frac{1}{2}(1 + \Gamma)(1 - \Gamma^*)\mathbf{E}_{1i}\times\mathbf{H}_{1i}^* = (1 + \Gamma)(1 - \Gamma^*)\mathbf{S}_{1i}$
Note that $\mathbf{S}_1 = (1 - |\Gamma|^2)\mathbf{S}_{1i} + 2j\Im\{\Gamma\}\mathbf{S}_{1i}$, so there is an additional term besides the $(1 - |\Gamma|^2)$ factor that comes from the interference effect. When $\Gamma$ and $\mathbf{S}_{1i}$ have imaginary components (i.e. when $\frac{\eta_2}{\eta_1}$ is not real and $\eta_1$ is not real) we get a nonzero contribution to the real part of the Poynting phasor. So we can only use $\langle S_{1i}\rangle = \langle S_{1r}\rangle + \langle S_2\rangle$ when $\eta_1$ is real or when $\frac{\eta_2}{\eta_1}$ is real.
We can substitute $\Gamma = \frac{\eta_2 - \eta_1}{\eta_2 + \eta_1}$ and $\mathbf{S}_{1i} = \frac{|\mathbf{E}_{1i}|^2}{2\eta_1^*}$ to obtain
$$\mathbf{S}_1 = \frac{2\eta_2}{\eta_1 + \eta_2}\frac{2\eta_1^*}{\eta_1^* + \eta_2^*}\frac{|\mathbf{E}_{1i}|^2}{2\eta_1^*} = \frac{2\eta_2}{|\eta_1 + \eta_2|^2}|\mathbf{E}_{1i}|^2 = |\tau|^2\frac{|\mathbf{E}_{1i}|^2}{2\eta_2^*} = \frac{|\mathbf{E}_2|^2}{2\eta_2^*} = \mathbf{S}_2$$
which confirms that $\vec{S}_1 = \vec{S}_2$.
Moreover, $\langle S_1\rangle = \langle S_2\rangle = \Re\left\{|\tau|^2\frac{|\mathbf{E}_{1i}|^2}{2\eta_2^*}\right\} = \frac{|\tau|^2|\mathbf{E}_{1i}|^2}{2}\Re\{1/\eta_2^*\} = \frac{\Re\{1/\eta_2^*\}}{\Re\{1/\eta_1^*\}}|\tau|^2\langle S_{1i}\rangle$ and so we confirmed the formula for $\langle S_2\rangle$.
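As an extra sanity check (not in the original derivation), the identity $(1+\Gamma)(1-\Gamma^*)/(2\eta_1^*) = |\tau|^2/(2\eta_2^*)$ behind this result can be verified numerically for arbitrary complex impedances; the impedance values below are invented for the test:

```python
# Arbitrary complex intrinsic impedances (made up for the check):
eta1 = 300 + 40j
eta2 = 150 - 80j
gamma = (eta2 - eta1) / (eta2 + eta1)
tau = 2 * eta2 / (eta1 + eta2)

# With |E_1i| = 1: S_1 = (1 + Gamma)(1 - Gamma*) / (2 eta_1*)
# and S_2 = |tau|^2 / (2 eta_2*); the two phasors should agree.
S1 = (1 + gamma) * (1 - gamma.conjugate()) / (2 * eta1.conjugate())
S2 = abs(tau) ** 2 / (2 * eta2.conjugate())
assert abs(S1 - S2) < 1e-12
```

The real parts agree as well, so the time-averaged power flow is indeed continuous across the interface even for lossy media.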
The comments about power dissipation due to $\vec{E}\cdot\vec{J}$ terms are valid, but they do not imply that $\vec{S}$ is discontinuous across the interface. The dissipative terms will produce an absorption coefficient which causes $\vec{S}$ to decay as the distance increases. But right across an interface, the enclosed volume is zero, so the volume integral of $\vec{E}\cdot\vec{J}$ will be nil (assuming the current densities are distributed over a volume, and there are no current sheets), and hence the inward flux equals the outward flux across the interface (i.e. the Poynting vector is continuous).
Additionally, absorption coefficients can arise even if the medium is non-conductive, i.e. even if $\vec{J}_f = \vec{0}$. This is because power can also be dissipated by bound currents and movement of bound charges which do not get accounted for explicitly in Poynting's theorem. Instead they show up by causing complex electric and magnetic permittivities.
Another way to understand this is that although $\vec{S}$ has generally nonzero divergence, the divergence is still finite, so fluxes of $\vec{S}$ can only vary continuously across a volume, and they cannot just change right across an interface. For this to happen you would need infinite volume current densities, or basically something like current sheets or current wires. | {
"domain": "physics.stackexchange",
"id": 52761,
"tags": "electromagnetism, waves, reflection, poynting-vector"
} |
Lock-free ringbuffer with multiple readers in C++11 | Question: Basic Info
I needed a lock-free ringbuffer with multiple readers (but one writer). However, I did not want the writer to check all readers every time in order to prevent an overrun. Thus, I decided to split the buffer in two halves. If the write pointer is in one half, it is only allowed to advance to the next half if all readers are already in the same half as the writer.
This is realized by using an atomic counter named readers_left, which counts the number of readers left in the previous half.
The code to this question can be found here and the most recent version here. In the following, I'll try to explain the code bit by bit.
Classes
This class is the base for a reader and writer:
class ringbuffer_common_t
{
private:
static std::size_t calc_size(std::size_t sz);
protected:
const std::size_t size; //!< buffer size (2^n for some n)
const std::size_t size_mask; //!< = size - 1
public:
ringbuffer_common_t(std::size_t sz);
};
Next comes the writer, just called ringbuffer_t. It contains the only two atomics:
class ringbuffer_t : protected ringbuffer_common_t
{
std::atomic<std::size_t> w_ptr; //!< writer at buf[w_ptr]
//! counts number of readers left in previous buffer half
std::atomic<std::size_t> readers_left;
std::size_t num_readers = 0; //!< to be const after initialisation
char* const buf;
friend class ringbuffer_reader_t;
//! version for preloaded write ptr
std::size_t write_space_preloaded(std::size_t w,
std::size_t rl) const;
public:
//! allocating constructor
//! @param sz size of buffer being allocated
ringbuffer_t(std::size_t sz);
~ringbuffer_t();
//! size that is guaranteed to be writable once all readers
//! are up to date
std::size_t maximum_eventual_write_space() const {
return size >> 1;
}
//! returns number of bytes that can be written at least
std::size_t write_space() const;
//! writes max(cnt, write_space) of src into the buffer
//! @return number of bytes successfully written
std::size_t write(const char *src, size_t cnt);
};
And finally, the reader class. It contains a helper class read_sequence_t. If you want to read using this reader, the reader returns such a read_sequence_t object. Only one read per reader at a time is allowed (this is not checked).
class ringbuffer_reader_t : protected ringbuffer_common_t
{
const char* const buf;
ringbuffer_t* const ref;
std::size_t read_ptr = 0; //!< reader at buf[read_ptr]
//! increases the @a read_ptr after reading from the buffer
void try_inc(std::size_t range);
class seq_base
{
const char* const buf;
std::size_t range;
protected:
ringbuffer_reader_t* reader_ref;
public:
//! requests a read sequence of size range
seq_base(ringbuffer_reader_t& rb, std::size_t range) :
buf(rb.buf),
range(range),
reader_ref(&rb)
{
}
//! single member access
const char& operator[](std::size_t idx) {
return *(buf + ((reader_ref->read_ptr + idx) &
reader_ref->size_mask));
}
std::size_t size() const { return range; }
};
class read_sequence_t : public seq_base {
public:
using seq_base::seq_base;
//! increases the read_ptr after reading
~read_sequence_t() { reader_ref->try_inc(size()); }
};
template<class Sequence>
Sequence _read(std::size_t range) {
std::size_t rs = read_space();
std::size_t rs2 = rs < range ? 0 : range;
return Sequence(*this, rs2);
}
public:
//! constructor. registers this reader at the ringbuffer
//! @note careful: this function is @a not thread-safe
ringbuffer_reader_t(ringbuffer_t& ref);
read_sequence_t read(std::size_t range) {
return _read<read_sequence_t>(range);
}
//! returns number of bytes that can be read at least
std::size_t read_space() const;
};
Implementation
The common interface simply sets the size to some power of two. I won't show this code here.
Ctor and Dtor of the writer are simple:
ringbuffer_t::ringbuffer_t(std::size_t sz) :
ringbuffer_common_t(sz),
buf(new char[ringbuffer_common_t::size])
{
w_ptr.store(0, std::memory_order_relaxed);
readers_left.store(0, std::memory_order_relaxed);
}
ringbuffer_t::~ringbuffer_t()
{
delete[] buf;
}
How much can the writer write?
std::size_t ringbuffer_t::write_space_preloaded(std::size_t w,
std::size_t rl) const
{
return (((size_mask - w) & (size_mask >> 1))) // = before next half
+ ((rl == false) * (size >> 1)) // one more block?
;
}
std::size_t ringbuffer_t::write_space() const
{
return write_space_preloaded(w_ptr.load(std::memory_order_relaxed),
readers_left.load(std::memory_order_relaxed));
}
Now, the write procedure of the writer:
std::size_t ringbuffer_t::write (const char *src, size_t cnt)
{
std::size_t w = w_ptr.load(std::memory_order_relaxed);
std::size_t rl = readers_left.load(std::memory_order_relaxed);
// size calculations
std::size_t free_cnt;
if ((free_cnt = write_space_preloaded(w, rl)) == 0) {
return 0;
}
const std::size_t to_write = cnt > free_cnt ? free_cnt : cnt;
const std::size_t cnt2 = w + to_write;
std::size_t n1, n2;
if (cnt2 > size) {
n1 = size - w_ptr.load(std::memory_order_relaxed);
n2 = cnt2 & size_mask;
} else {
n1 = to_write;
n2 = 0;
}
// reset reader_left
if((w ^ ((w + to_write) & size_mask)) & (size >> 1)) // msb flipped
{
if(rl) throw "impossible";
readers_left.store(num_readers, std::memory_order_relaxed);
}
// here starts the writing
std::copy_n(src, n1, &(buf[w]));
w = (w + n1) & size_mask;
// update so readers are already informed:
w_ptr.store(w, std::memory_order_relaxed);
if (n2) {
std::copy_n(src + n1, n2, &(buf[w]));
w = (w + n2) & size_mask;
w_ptr.store(w, std::memory_order_relaxed);
}
return to_write;
}
We're almost done. Now to the reader - the Ctor is simple:
ringbuffer_reader_t::ringbuffer_reader_t(ringbuffer_t &ref) :
ringbuffer_common_t(ref.size), buf(ref.buf), ref(&ref) {
++ref.num_readers; // register at the writer
}
How much can the reader read? It is much simpler here, since we know the exact position of the write pointer.
std::size_t ringbuffer_reader_t::read_space() const
{
const std::size_t
w = ref->w_ptr.load(std::memory_order_relaxed),
r = read_ptr;
if (w > r) {
return w - r;
} else {
return (w - r + ref->size) & ref->size_mask;
}
}
Finally, after we have read something (using read_sequence_t), we need to increase the read_ptr:
void ringbuffer_reader_t::try_inc(std::size_t range)
{
const std::size_t old_read_ptr = read_ptr;
read_ptr = (read_ptr + range) & size_mask;
// checks if highest bit flipped:
if((read_ptr ^ old_read_ptr) & (size >> 1))
{
--ref->readers_left;
}
}
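The half-crossing test used in try_inc (and in the writer's reset of readers_left) is compact enough to model in a few lines. This is a Python toy model of the bit trick, not the reviewed C++: with size a power of two, a pointer crosses into the other buffer half exactly when bit (size >> 1) flips.

```python
# Toy model of the "highest bit flipped" test on a power-of-two ring.
size = 16
size_mask = size - 1

def crossed_half(old_ptr, new_ptr):
    """True iff the pointer moved into the other half of the buffer."""
    return bool((old_ptr ^ new_ptr) & (size >> 1))

assert crossed_half(7, 8)                       # first half -> second half
assert not crossed_half(2, 5)                   # stayed in the first half
assert crossed_half(15, (15 + 1) & size_mask)   # wrap-around to first half
```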
What works and what not
As one can see on GitHub, I made sequential and parallel tests, all working. Valgrind saw no memory leaks. However, I have no tool to debug race conditions.
My questions are:
Is this code safe of any race conditions? (main question)
Is there anything you would improve concerning efficiency? (e.g. bit manipulation)
...
Answer: Barriers missing
I know this is an old question, but I started looking at the "lock-free" tag and came across this unanswered question. I believe that your code is unsafe because it is lacking the proper memory barriers. For example, in this code snippet:
std::copy_n(src, n1, &(buf[w]));
w = (w + n1) & size_mask;
// update so readers are already informed:
w_ptr.store(w, std::memory_order_relaxed);
The std::memory_order_relaxed doesn't provide any protection against store reordering. So the store to w_ptr can be reordered to be before the copy to the buffer. Later on, the reader could see an updated write pointer and try to read from the buffer before the buffer contents have been stored. You should use std::memory_order_release to prevent this.
You have similar problems on the reader side, where you should use std::memory_order_acquire instead of std::memory_order_relaxed.
Why do my tests work?
Most likely, you are testing this on an x86 based system. The x86 architecture has certain memory ordering guarantees that make most memory barriers unnecessary. So your code will in fact operate correctly when run on an x86 architecture. | {
"domain": "codereview.stackexchange",
"id": 14200,
"tags": "c++, c++11, circular-list, lock-free, atomic"
} |
Why can't $\psi(x) = \delta(x)$ in the case of Harmonic oscillator? | Question: In the analysis of Harmonic Oscillator, it is claimed that $\langle\hat H\rangle$ cannot be zero, why is it so?
I mean $\hat H = \frac{ \hat p^2 }{2m } + \frac12 k \hat x^2$, and
$$\left<x^2\right> = \int dx (x\psi(x))^\dagger (x\psi(x)) = 0$$
would imply that $x\psi(x) = 0 \quad \forall x.$
In particular, this is true when $x = 0$, so we have two options: either $\psi(x) = 0$ or $\psi(x) = \delta(x)$.
So, why can't $\psi(x) = \delta(x)$ in the case of the harmonic oscillator?
Note: $\hat H = \frac{ \hat p^2 }{2m } + \frac12 k \hat x^2 $
Answer: The state $\psi(x) = \delta(x)$ is a perfectly valid state for the harmonic oscillator to occupy. (With caveats, though: it is not normalizable, so it's not a physically-accessible state. Still, it's a perfectly reasonable thing for the mathematical formalism to handle.) As you note, it has a position uncertainty equal to zero, as well as a vanishing expectation value $⟨x^2⟩=0$.
However, it does not have a vanishing momentum uncertainty, and in fact if you expand it as a superposition of plane waves,
$$
\delta(x) = \frac{1}{2\pi\hbar} \int_{-\infty}^\infty e^{ipx/\hbar}\mathrm dp = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^\infty A(p) e^{ipx/\hbar}\mathrm dp,
$$
you require an even weight $A(p) \equiv 1/\sqrt{2\pi\hbar}$ for all momenta, which means that the momentum-squared expectation value
$$
⟨p^2⟩ = \int_{-\infty}^\infty |A(p)|^2 p^2\mathrm dp = \frac{1}{2\pi\hbar}\int_{-\infty}^\infty p^2\mathrm dp = \infty
$$
diverges to infinity. (This result is required by the uncertainty principle, but the derivation here does not rely on it - it's an independent proof of that fact. Still, you can see the consistency in that $\Delta x=0$ and $\Delta p \geq \hbar/2\Delta x$ can only be satisfied by having $\Delta p = \infty$.)
This then implies that the expectation value of the hamiltonian is also infinity:
$$
⟨H⟩ = \frac{1}{2m}⟨p^2⟩ + \frac12 k ⟨x^2⟩ = \infty.
$$
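One way to see the divergence concretely (a sketch not in the original answer) is to approximate $\delta(x)$ by a normalized Gaussian $\psi(x) \propto e^{-x^2/4\sigma^2}$, for which $⟨x^2⟩ = \sigma^2$ and $⟨p^2⟩ = \hbar^2/4\sigma^2$, and let $\sigma \to 0$:

```python
def mean_energy(sigma, m=1.0, k=1.0, hbar=1.0):
    """<H> for a Gaussian packet of width sigma in the harmonic oscillator:
    <H> = <p^2>/2m + k<x^2>/2 = hbar^2/(8 m sigma^2) + k sigma^2 / 2."""
    return hbar**2 / (8 * m * sigma**2) + 0.5 * k * sigma**2

# Squeezing the packet toward a delta function makes <H> blow up:
assert mean_energy(0.01) > mean_energy(0.1) > mean_energy(1.0)

# The minimum over sigma is the familiar zero-point energy hbar*omega/2
# (= 0.5 in these units), attained at sigma^2 = hbar / (2 sqrt(m k)):
assert abs(mean_energy(0.5**0.5) - 0.5) < 1e-9
```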
As for this,
In the analysis of Harmonic Oscillator, it is claimed that $\langle\hat H\rangle$ cannot be zero, why is it so?
this is the zero-point energy of the oscillator, which has been explored multiple times on this site. If you want to ask why this is, you should ask separately, with a good showing of the previous questions here and what it is about them you do not understand. | {
"domain": "physics.stackexchange",
"id": 55466,
"tags": "quantum-mechanics, operators, hilbert-space, harmonic-oscillator, dirac-delta-distributions"
} |
How to design shopping cart Java application which satisfy modular, extensible and maintainable | Question: I am new in Application Design. I have use-case as below
As per the above story, I have implemented the code below without any modularity, extensibility, or maintainability. Could someone share thoughts on how to redesign the bad code below into a production-grade application?
Based on this story context, there could be other stories which could build upon it later.
Example: Introduce new customer types like Gold, Diamond, Platinum, etc and their discounts slabs.
I would really appreciate it if someone could have a look at the code below and let me know how to design it before creating data members, interfaces, and classes of improvements.
public class ShoppingCartDiscount {
public static void main(String args[]) {
System.out.println("Price for the premium customer: " + calculatePrice("Premium", 20000));
System.out.println("Price for the regular customer: " + calculatePrice("Regular", 8000));
}
static float calculatePrice(String customerType, float purchaseAmount) {
float total = 0;
if (customerType.equalsIgnoreCase("Regular")) {
if (purchaseAmount > 5000 && purchaseAmount <= 10000) {
float firstSlab = purchaseAmount - 5000;
firstSlab = firstSlab - (float) (firstSlab * 0.1);
total = 5000 + firstSlab;
}
else if (purchaseAmount > 10000) {
float secondSlab = purchaseAmount - 10000;
secondSlab = secondSlab - (float) (secondSlab * 0.2);
float firstSlab = 10000 - 5000;
firstSlab = firstSlab - (float) (firstSlab * 0.1);
System.out.println("firstSlab:" + firstSlab);
total = 5000 + firstSlab;
total = total + secondSlab;
} else if (purchaseAmount <= 5000 && purchaseAmount >= 0) {
total = purchaseAmount;
} else {
return purchaseAmount;
}
} else if (customerType.equalsIgnoreCase("premium")) {
if (purchaseAmount <= 4000) {
total = purchaseAmount - (float) (purchaseAmount * 0.1);
}
if (purchaseAmount > 4000 && purchaseAmount <= 8000) {
float secondSlab = purchaseAmount - 4000;
secondSlab = secondSlab - (float) (secondSlab * 0.15);
float firstSlab = 8000 - 4000;
total = firstSlab - (float) (firstSlab * 0.1);
total = total + secondSlab;
}
if (purchaseAmount > 8000 && purchaseAmount <= 12000) {
float thirdSlab = purchaseAmount - 8000;
thirdSlab = thirdSlab - (float) (thirdSlab * 0.20);
float secondSlab = 8000 - 4000;
secondSlab = secondSlab - (float) (secondSlab * 0.15);
float firstSlab = 8000 - 4000;
total = firstSlab - (float) (firstSlab * 0.1);
total = total + secondSlab + thirdSlab;
}
if (purchaseAmount > 12000) {
float fourthSlab = purchaseAmount - 12000;
fourthSlab = fourthSlab - (float) (fourthSlab * 0.30);
float thirdSlab = 8000 - 4000;
thirdSlab = thirdSlab - (float) (thirdSlab * 0.20);
float secondSlab = 8000 - 4000;
secondSlab = secondSlab - (float) (secondSlab * 0.15);
float firstSlab = 8000 - 4000;
total = firstSlab - (float) (firstSlab * 0.1);
total = total + secondSlab + thirdSlab + fourthSlab;
}
}
return total;
}
}
Answer: I separated the calculatePrice method into calculatePremiumPrice and calculateRegularPrice methods. This made the code easier to visually inspect.
I also made the two blocks of if statements consistent, starting with the lower discounts and working my way up to the larger discounts. Again, this makes the code easier to visually inspect.
I was more measured in my use of blank lines. There's no need to put a blank line after every statement. Use blank lines to separate logical concepts within a method.
If the method code fits on one screen, it's easier to visually inspect.
Finally, I ran all the test cases and formatted the output into currency.
import java.text.NumberFormat;
public class ShoppingCartDiscount {
static NumberFormat formatter = NumberFormat.getCurrencyInstance();
public static void main(String args[]) {
System.out.println("Price for the regular customer: "
+ formatter.format(calculatePrice("Regular", 5000)));
System.out.println("Price for the regular customer: "
+ formatter.format(calculatePrice("Regular", 10000)));
System.out.println("Price for the regular customer: "
+ formatter.format(calculatePrice("Regular", 15000)));
System.out.println();
System.out.println("Price for the premium customer: "
+ formatter.format(calculatePrice("Premium", 4000)));
System.out.println("Price for the premium customer: "
+ formatter.format(calculatePrice("Premium", 8000)));
System.out.println("Price for the premium customer: "
+ formatter.format(calculatePrice("Premium", 12000)));
System.out.println("Price for the premium customer: "
+ formatter.format(calculatePrice("Premium", 20000)));
}
static float calculatePrice(String customerType, float purchaseAmount) {
float total = 0f;
if (customerType.equalsIgnoreCase("Regular")) {
total = calculateRegularPrice(purchaseAmount);
} else if (customerType.equalsIgnoreCase("Premium")) {
total = calculatePremiumPrice(purchaseAmount);
}
return total;
}
static float calculateRegularPrice(float purchaseAmount) {
float total = 0f;
if (purchaseAmount <= 5000) {
total = purchaseAmount;
} else if (purchaseAmount > 5000 && purchaseAmount <= 10000) {
float firstSlab = purchaseAmount - 5000;
firstSlab = firstSlab - (float) (firstSlab * 0.1);
total = 5000 + firstSlab;
} else {
float secondSlab = purchaseAmount - 10000;
secondSlab = secondSlab - (float) (secondSlab * 0.2);
float firstSlab = 10000 - 5000;
firstSlab = firstSlab - (float) (firstSlab * 0.1);
total = 5000 + firstSlab;
total = total + secondSlab;
}
return total;
}
static float calculatePremiumPrice(float purchaseAmount) {
float total = 0f;
if (purchaseAmount <= 4000) {
total = purchaseAmount - (float) (purchaseAmount * 0.1);
} else if (purchaseAmount > 4000 && purchaseAmount <= 8000) {
float secondSlab = purchaseAmount - 4000;
secondSlab = secondSlab - (float) (secondSlab * 0.15);
float firstSlab = 8000 - 4000;
total = firstSlab - (float) (firstSlab * 0.1);
total = total + secondSlab;
} else if (purchaseAmount > 8000 && purchaseAmount <= 12000) {
float thirdSlab = purchaseAmount - 8000;
thirdSlab = thirdSlab - (float) (thirdSlab * 0.20);
float secondSlab = 8000 - 4000;
secondSlab = secondSlab - (float) (secondSlab * 0.15);
float firstSlab = 8000 - 4000;
total = firstSlab - (float) (firstSlab * 0.1);
total = total + secondSlab + thirdSlab;
} else {
float fourthSlab = purchaseAmount - 12000;
fourthSlab = fourthSlab - (float) (fourthSlab * 0.30);
float thirdSlab = 8000 - 4000;
thirdSlab = thirdSlab - (float) (thirdSlab * 0.20);
float secondSlab = 8000 - 4000;
secondSlab = secondSlab - (float) (secondSlab * 0.15);
float firstSlab = 8000 - 4000;
total = firstSlab - (float) (firstSlab * 0.1);
total = total + secondSlab + thirdSlab + fourthSlab;
}
return total;
}
}
Based on the comment by the OP, I created more adaptable code. I'm not sure what pattern this is, except I used a factory to build the rewards.
I created a Tier class to hold a tier, a Reward class to hold a reward, and a RewardFactory to define the rewards. This should make it easier to change tiers or add new reward types.
If a new reward concept is created, then some code would have to be added.
Here's the revised code.
import java.text.NumberFormat;
import java.util.ArrayList;
import java.util.List;
public class ShoppingCartDiscount {
static NumberFormat formatter =
NumberFormat.getCurrencyInstance();
public static void main(String args[]) {
ShoppingCartDiscount scd = new ShoppingCartDiscount();
RewardFactory rewardFactory = scd.new RewardFactory();
String rewardType = "Regular";
float amount = 5_000f;
float discount = rewardFactory.calculateDiscount(
rewardType, amount);
displayDiscount(rewardType, amount, discount);
amount = 10_000f;
discount = rewardFactory.calculateDiscount(
rewardType, amount);
displayDiscount(rewardType, amount, discount);
amount = 15_000f;
discount = rewardFactory.calculateDiscount(
rewardType, amount);
displayDiscount(rewardType, amount, discount);
System.out.println();
rewardType = "Premium";
amount = 4_000f;
discount = rewardFactory.calculateDiscount(
rewardType, amount);
displayDiscount(rewardType, amount, discount);
amount = 8_000f;
discount = rewardFactory.calculateDiscount(
rewardType, amount);
displayDiscount(rewardType, amount, discount);
amount = 12_000f;
discount = rewardFactory.calculateDiscount(
rewardType, amount);
displayDiscount(rewardType, amount, discount);
amount = 20_000f;
discount = rewardFactory.calculateDiscount(
rewardType, amount);
displayDiscount(rewardType, amount, discount);
}
static void displayDiscount(String rewardType,
float amount, float discount) {
System.out.print(rewardType);
System.out.print(" customer spends ");
System.out.print(formatter.format(amount));
System.out.print(", so we discount ");
System.out.print(formatter.format(discount));
System.out.print(", so he owes ");
amount -= discount;
System.out.print(formatter.format(amount));
System.out.println(".");
}
public class RewardFactory {
private List<Reward> rewards;
public RewardFactory() {
this.rewards = new ArrayList<>();
createRewards();
}
private void createRewards() {
Reward reward = new Reward("Regular");
Tier tier = new Tier(0f, 5_000f, 0.00f);
reward.addTier(tier);
tier = new Tier(5_000f, 10_000f, 0.10f);
reward.addTier(tier);
tier = new Tier(10_000f, Float.MAX_VALUE, 0.20f);
reward.addTier(tier);
rewards.add(reward);
reward = new Reward("Premium");
tier = new Tier(0f, 4_000f, 0.10f);
reward.addTier(tier);
tier = new Tier(4_000f, 8_000f, 0.15f);
reward.addTier(tier);
tier = new Tier(8_000f, 12_000f, 0.20f);
reward.addTier(tier);
tier = new Tier(12_000f, Float.MAX_VALUE, 0.30f);
reward.addTier(tier);
rewards.add(reward);
}
public float calculateDiscount(String rewardType,
float amount) {
float discount = 0f;
for (Reward reward : rewards) {
if (reward.isDiscountApplied(rewardType)) {
discount += reward.calculateDiscount(amount);
}
}
return discount;
}
}
public class Reward {
private final String rewardType;
private List<Tier> tiers;
public Reward(String rewardType) {
this.rewardType = rewardType;
this.tiers = new ArrayList<>();
}
public void addTier(Tier tier) {
this.tiers.add(tier);
}
public boolean isDiscountApplied(String type) {
return (rewardType.equalsIgnoreCase(type));
}
public float calculateDiscount(float amount) {
float discount = 0f;
for (Tier tier : tiers) {
if (tier.isDiscountApplied(amount)) {
discount += tier.calculateDiscount(amount);
}
}
return discount;
}
}
public class Tier {
private final float lowerAmount;
private final float upperAmount;
private final float percentDiscount;
public Tier(float lowerAmount, float upperAmount,
float percentDiscount) {
this.lowerAmount = lowerAmount;
this.upperAmount = upperAmount;
this.percentDiscount = percentDiscount;
}
public boolean isDiscountApplied(float amount) {
return (lowerAmount < amount);
}
public float calculateDiscount(float amount) {
if (amount > upperAmount) {
return (upperAmount - lowerAmount) *
percentDiscount;
} else {
return (amount - lowerAmount) * percentDiscount;
}
}
}
} | {
"domain": "codereview.stackexchange",
"id": 38017,
"tags": "java, algorithm, object-oriented, programming-challenge, design-patterns"
} |
How the "precession of the ecliptic" can be distinguished from the "precession of the equator"? | Question: The precession of the equinoxes (AKA general precession) is composed from two components:
A dominant component, which is caused by the tilt of the Earth's axis - this is called "precession of the equator", or "lunisolar precession".
A minor component, which is caused by the movement of the ecliptic plane itself. This component is called "precession of the ecliptic" or "planetary precession".
I'm not sure I understand the nature of the minor component, but the impression I get is that it is equivalent observation-wise to the major component (in very much the same manner that a static Sun and a moving Earth are equivalent to a moving Sun and a static Earth). Is this the case?
If indeed this is so, can we somehow distinguish between the components physically by actual observation, or is it only by theoretical gravity calculation that we know the correct share of each component in the precession of the equinoxes?
Answer: "Precession of the equator" refers to long-term changes in the Earth's rotation axis relative to the mean (average) orbital plane of the Earth-Moon barycenter about the Sun, aka the ecliptic. "Precession of the ecliptic" refers to long-term changes in the mean orbital plane itself.
The Earth's orientation with respect to the ecliptic changes primarily because of torques exerted on the Earth by the Moon and the Sun, but also because of torques exerted by other planets. Because of those other torques, it is not quite correct to call this "lunisolar precession".
The mean orbital plane of the Earth-Moon barycenter about the Sun also changes slowly due to the influences of other planets (and also the Sun's not-quite spherical shape, and also relativity). The mean orbital plane of the Earth-Moon barycenter about the Sun is tilted a bit (about 1.57°) from the solar system invariable plane.
There are also short term effects, with frequencies less than 20 years. These short term effects are lumped into two groups. Nutation models the short term effects that can be predicted for hundreds of years. Like general precession, nutation also results from torques on the Earth's rotation and its orbit. Polar motion describes short term effects that cannot be modeled accurately beyond a year or so. Polar motion includes what physicists call torque-free precession. Polar motion also includes effects due to transfer of angular momentum between the Earth's core, mantle, oceans, and atmosphere. These small amplitude motions are observed. Predictive models don't cut it after a year.
From the perspective of physics, precession and nutation are aspects the same phenomenon. The frequency gap between the Earth's precession and nutation is so vast that it makes sense to model them separately.
In theory, other stars also exert an influence on both the Earth's orientation and its orbital plane, but these effects are so small that they are not modeled. | {
"domain": "astronomy.stackexchange",
"id": 6276,
"tags": "solar-system, ecliptic, precession, equinox"
} |
Is this formula I derived for net acceleration correct? | Question: I was thinking about acceleration due to gravity and I thought of deriving a formula that gives the net acceleration due to gravity between two bodies. Now, by net acceleration, I basically mean the effective acceleration. Please have a look :
Let $A$ and $B$ be two objects with masses $m_1$ and $m_2$ respectively
and the distance between them be $d$.
Let $F$ be the force of attraction between them and $g_{m_1}$ be the acceleration of $m_1$ due to the gravitational force of $m_2$ and let $g_{m_2}$ be the acceleration of $m_2$ due to the gravitational force of $m_1$.
Now, according to Newton's Law of Gravitation: $$F = G \dfrac{m_1m_2}{d^2}$$
We know that: $$F = m_1g_{m_1}$$
$$\text{and}$$
$$F = m_2g_{m_2}$$
This implies that: $$m_1g_{m_1} = G \dfrac {m_1m_2}{d^2} \implies g_{m_1} = G \dfrac {m_2}{d^2}$$
In a similar manner: $$g_{m_2} = G \dfrac {m_1}{d^2}$$
Now, this is the part where I think I'm making a mistake.
What I think is that here both the objects are accelerating towards each other with accelerations of $g_{m_1}$ and $g_{m_2}$ respectively.
So, I think the net acceleration between $A$ and $B$ would be $$g_{m_1}+g_{m_2} = G \dfrac{m_2}{d^2} + G \dfrac{m_1}{d^2}$$
$$\ \ = \dfrac {G}{d^2}(m_2+m_1)$$
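A quick numeric check (with purely illustrative masses and distance, not values from any real bodies) confirms that the two single-body accelerations sum to the combined expression:

```python
# Illustrative check that g_{m1} + g_{m2} = G * (m1 + m2) / d^2.
# The masses and distance below are made-up round numbers.
G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
m1, m2, d = 5.0e24, 7.0e22, 3.8e8  # kg, kg, m

g_m1 = G * m2 / d**2               # acceleration of m1 toward m2
g_m2 = G * m1 / d**2               # acceleration of m2 toward m1
relative = G * (m1 + m2) / d**2    # combined ("net") acceleration

print(g_m1 + g_m2, relative)
```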
Now, I think that if this formula is correctly derived, it gives the net acceleration by which two masses mutually attract each other. And wouldn't this formula imply that the acceleration due to gravity does depend on the mass of the object under consideration, which is actually not the case?
So, that would imply that this formula is wrong. So, please let me know where the error is
Thanks!
PS : All edits on formatting are welcome :)
Answer: Your formula is just fine. You can read about the "two-body problem" and solve it entirely.
You seem to be troubled that the relative acceleration depends on both masses, but it is correct.
Think of $m_2 \gg m_1$. You get $g_{m_1} \gg g_{m_2}$. You can then approximate the relative acceleration by $g_{m_1}$, and
the acceleration of $m_1$ towards $m_2$ does not depend on $m_1$. | {
"domain": "physics.stackexchange",
"id": 68126,
"tags": "newtonian-mechanics, newtonian-gravity, acceleration"
} |
Electromagnetic waves and photons | Question: In water waves, the wave is transmitted through vertically moving water particles which face no displacement nonetheless the wave is moving.
So does the same happen with photons in electromagnetic waves or does photons move along the wave?
Answer: Really, photons are the wave. What makes a wave in the classical sense is a large number of photons all averaging together.
Your question is an obvious guess to make -- other wave phenomena are the result of local interactions within some medium, so electromagnetic waves must be the same. For a while, people guessed that there was a medium that carried electromagnetic waves, and they called it the aether. Turns out, it doesn't exist. | {
"domain": "physics.stackexchange",
"id": 38035,
"tags": "waves, electromagnetic-radiation, photons"
} |
"Day-of-the-week-finder" | Question: I'm new to Python/Programming and wrote this simple "Day of the week/Holiday finder" script. Ideally I'd use something so I can have a little widget on my desktop, but right now I'm looking for any and all suggestions to improve my code.
I'm importing this holiday module
import datetime
import holidays
today = datetime.date.today()
today = today.strftime("%m/%d/%Y")
today = datetime.datetime.strptime(today, "%m/%d/%Y")
def days_between(d1):
d2 = today
return (d2 - d1).days
def find_dt(input_dt):
print(input_dt)
dt1 = datetime.datetime.strptime(input_dt, "%m/%d/%Y")
dt2 = dt1.strftime('%A')
if days_between(dt1) == 0:
print("Today IS {d}.".format(d=dt2))
elif days_between(dt1) <= 1:
print("It'll be a {d}.".format(d=dt2))
else:
print("It was a {d}.".format(d=dt2))
us_holidays = holidays.UnitedStates()
us_h = us_holidays.get(input_dt)
if us_h is None and days_between(dt1) == 0:
print("Today is not a holiday.")
elif us_h is None and days_between(dt1) <= 1:
print("It won't be a holiday.")
elif us_h is None and days_between(dt1) >= 1:
print("It wasn't a holiday.")
elif days_between(dt1) == 0:
print("Today is {h}.".format(h=us_h))
elif days_between(dt1) <= 1:
print("It'll be {h}.".format(h=us_h))
else:
print("It was {h}.".format(h=us_h))
def main():
d = input('Type a date using MM/DD/YYYY: ')
# d = '12/25/2015'
find_dt(d)
main()
Answer: today
First of all, I'm super confused as to why you're doing
today = datetime.date.today()
today = today.strftime("%m/%d/%Y")
today = datetime.datetime.strptime(today, "%m/%d/%Y")
It seems like you'd be fine if you just did
today = datetime.datetime.today()
Additionally, constants (such as today) are traditionally named in ALL_CAPS.
days_between
It is very counter-intuitive to me that this only takes a single argument. I think writing it this way
def days_between(d1, d2):
return (d2 - d1).days
def days_from_today(date):
return days_between(date, today)
makes much more sense.
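A minimal sketch of the two-argument version in action (the dates are picked arbitrarily):

```python
import datetime

# The refactor suggested above: days_between takes both endpoints,
# and days_from_today is a thin wrapper around it.
def days_between(d1, d2):
    return (d2 - d1).days

def days_from_today(date):
    return days_between(date, datetime.datetime.today())

christmas = datetime.datetime(2015, 12, 25)
new_years_eve = datetime.datetime(2015, 12, 31)
print(days_between(christmas, new_years_eve))  # 6
```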
find_dt
I don't like everything that is going on here. First of all, I would argue that it shouldn't be the job of find_dt to convert an input string to a datetime object - the caller should be responsible for that.
Next, you're doing two distinct things. One is displaying the day of the week for the given date, and the other is displaying information about that day being a US holiday. These should probably be distinct.
def day_of_week(date):
date_string = date.strftime("%A")
from_today = days_from_today(date)
if from_today == 0:
info_string = "Today IS {}"
elif from_today <= 1:
info_string = "It will be a {}"
else:
info_string = "It was a {}"
print(info_string.format(date_string))
A few notes about this. I didn't call days_from_today every time - repeated calls are wasteful. Additionally, because each branch does basically the same thing, instead of repeating the print call I build the appropriate format string and print it once at the end.
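As a side note, strftime("%A") is what maps the date to its weekday name here, for example (date chosen arbitrarily):

```python
import datetime

# "%A" produces the full weekday name for the given date
print(datetime.date(2015, 12, 25).strftime("%A"))  # Friday
```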
Next, holidays.
def holiday_status(date):
us_holidays = holidays.UnitedStates()
us_h = us_holidays.get(date)
from_today = days_from_today(date)
if us_h is None:
if from_today == 0:
print("Today is not a holiday")
elif from_today <= 1:
print("It won't be a holiday")
else:
print("It wasn't a holiday")
else:
if from_today == 0:
info_string = "Today is {}"
elif from_today <= 1:
info_string = "It'll be {}"
else:
info_string = "It was {}"
print(info_string.format(us_h))
Notice that I used some similar strategies as with day_of_week. I also broke up your if statements a little more - while some people think nesting is bad, I think in this case it makes it much more clear what the conditions are.
__main__
You should have a block like this
if __name__ == '__main__':
main()
at the bottom of your file. Then it is import safe.
Locale
While right now you assume the US, it is conceivable that your users may be from other locales. You probably want something like this (the specifics of this depend on your Python version, but this is a rough approximation)
LOCALE_ENUM = enum("US UK FR JP ...")
LOCALES = {
LOCALE_ENUM.US: holidays.UnitedStates(),
LOCALE_ENUM.UK: holidays.UnitedKingdom(),
...
}
and then to define both find_dt and holiday_status to take a LOCALE_ENUM instance and use it to determine what locale's holidays they should use. | {
"domain": "codereview.stackexchange",
"id": 16569,
"tags": "python, beginner, python-3.x, datetime"
} |
how to create a searchable tree on Persian text? | Question: I want to remove stop-words from my huge text data. I already have stop-word data, which is provided at the link below. It seems to me that if I have a pre-built tree of stop-words, I could save a lot of time. I want to search each word of the text in this pre-built tree; if the word is in the tree I delete it from the text, if not I keep it.
That should take the lookup cost from O(n * l) down to O(n * log(l)).
This is my stop-words
If you have better suggestions than the pre-built tree search, I would be grateful to share it with me.
Answer: Finally, I've found this answer using a trie tree, but I wonder if you have a better option:
Reading the data:
# reading stopword data
stopwords = pd.read_csv('STOPWORDS',header=None)
Trie tree:
# creating the trie tree
class TrieNode:
# Trie node class
def __init__(self):
self.children = [None]*15000
# isEndOfWord is True if node represent the end of the word
self.isEndOfWord = False
class Trie:
# Trie data structure class
def __init__(self):
self.root = self.getNode()
def getNode(self):
# Returns new trie node (initialized to NULLs)
return TrieNode()
def _charToIndex(self,ch):
# private helper function
# Converts the current character into a child index
# (offset from '!', to cover the character range used here)
return ord(ch)-ord('!')
def insert(self,key):
# If not present, inserts key into trie
# If the key is prefix of trie node,
# just marks leaf node
pCrawl = self.root
length = len(key)
for level in range(length):
index = self._charToIndex(key[level])
# if current character is not present
if not pCrawl.children[index]:
pCrawl.children[index] = self.getNode()
pCrawl = pCrawl.children[index]
# mark last node as leaf
pCrawl.isEndOfWord = True
def search(self, key):
# Search key in the trie
# Returns true if key presents
# in trie, else false
pCrawl = self.root
length = len(key)
for level in range(length):
index = self._charToIndex(key[level])
if not pCrawl.children[index]:
return False
pCrawl = pCrawl.children[index]
return pCrawl is not None and pCrawl.isEndOfWord
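As an aside, for plain membership tests a built-in set is often simpler and fast in practice (average O(1) lookups); the Persian words below are placeholder samples, not the full stopword list from the question:

```python
# Stopword removal with a plain set; sample stopwords only.
stopwords = {"از", "به", "که"}

words = ["از", "کتاب", "که"]
cleaned = [w for w in words if w not in stopwords]
print(cleaned)  # ['کتاب']
```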
Example of use:
# Input keys (the stopword list loaded above)
keys = list(stopwords.loc[:,0])
output = ["Not present in trie",
"Present in trie"]
# Trie object
t = Trie()
# Construct trie
for key in keys:
t.insert(key)
print("{} ---- {}".format("از",output[t.search("از")]))
Output:
از ---- Present in trie | {
"domain": "datascience.stackexchange",
"id": 7066,
"tags": "python, text-mining"
} |
Valid Alphanumeric Palindrome | Question: Valid Alphanumeric Palindrome
Problem (from Leetcode)
Given a string, determine if it is a palindrome, considering only alphanumeric characters and ignoring cases. For example, A man, a plan, a canal: Panama is a palindrome but race a car is not a palindrome.
Discussion
My approach was the following:
Start with first and last characters of the input string (keeping track of their respective indices).
While the index of the first character is less than the index of its "opposite" character...
a. For both characters, increment / decrement their index if they're not alphanumeric (and the first character index is
less than its opposite character index)
b. The invalid case occurs when
The first character index is less than its opposite
Both characters are not alphabetic or not numeric
Both characters are alphabetic and their lower case (or upper case) values are not equal
Both characters are numeric but their values are not equal
c. Increment the first character index and decrement the opposite character index
If able to exit the while loop, return true
Other discussion points
I could probably make the helper methods (like isAlphanumeric) private
Open to other (better) names
My if statement is pretty inelegant / hard to read - move to a helper method perhaps?
Convert the whiles to fors with conditionals?
Implementation
public class AlphanumericPalindromeValidator {
public static boolean isValid(String value) {
char[] chars = value.toCharArray();
int i = 0;
int j = value.length() - 1;
while (i < j) {
char character = chars[i];
while (!AlphanumericPalindromeValidator.isAlphanumeric(character) && i < j) {
i++;
character = chars[i];
}
char oppositeCharacter = chars[j];
while (!AlphanumericPalindromeValidator.isAlphanumeric(oppositeCharacter) && i < j) {
j--;
oppositeCharacter = chars[j];
}
if (i < j
&& !(AlphanumericPalindromeValidator.isAlphabeticCharacterPair(character, oppositeCharacter)
&& Character.toLowerCase(character) == Character.toLowerCase(oppositeCharacter))
&& !(AlphanumericPalindromeValidator.isNumericCharacterPair(character, oppositeCharacter)
&& character == oppositeCharacter)) {
return false;
}
i++;
j--;
}
return true;
}
public static boolean isAlphanumeric(char c) {
return Character.isAlphabetic(c) || Character.isDigit(c);
}
public static boolean isAlphabeticCharacterPair(char c1, char c2) {
return Character.isAlphabetic(c1) && Character.isAlphabetic(c2);
}
public static boolean isNumericCharacterPair(char c1, char c2) {
return Character.isDigit(c1) && Character.isDigit(c2);
    }
}
Answer: Code style
Definitely make the helper methods private;
There is no need to specify the class name when calling a static method in the same call. Instead of AlphanumericPalindromeValidator.isNumericCharacterPair(), just call isNumericCharacterPair();
Do a static import of the static methods of Character;
There is no need for the variables oppositeCharacter and character. Only keep them if it helps you make the code easier to read.
It's possible to make the inner while loop a one-liner with a for, but I actually think the while better expresses the intent in this case.
If condition
I don't see the point of the first check: i < j. At this point you know that i <= j. If i == j, the comparison will succeed. This is the case of palindromes with an odd length.
You don't need to check if the char is alphabetic to use Character.toLowerCase(). The method will simply return the same char if there is no mapping to lower case for a given char.
You can also use Character.isLetterOrDigit instead of your isAlphanumeric, unless you care about the subtle differences between "alphabetic" and "letter".
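For comparison only, here is the same two-pointer idea as a short Python sketch; this illustrates the algorithm and is not a replacement for the Java code under review:

```python
def is_alnum_palindrome(s):
    i, j = 0, len(s) - 1
    while i < j:
        # skip non-alphanumeric characters from both ends
        while i < j and not s[i].isalnum():
            i += 1
        while i < j and not s[j].isalnum():
            j -= 1
        if s[i].lower() != s[j].lower():
            return False
        i += 1
        j -= 1
    return True

print(is_alnum_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_alnum_palindrome("race a car"))  # False
```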
Suggested solution
No helper methods are needed.
public static boolean isValid(String value) {
char[] chars = value.toCharArray();
int i = 0;
int j = value.length() - 1;
while (i < j) {
while (!isLetterOrDigit(chars[i]) && i < j) {
i++;
}
while (!isLetterOrDigit(chars[j]) && i < j) {
j--;
}
if (toLowerCase(chars[i]) != toLowerCase(chars[j])) {
return false;
}
i++;
j--;
}
return true;
} | {
"domain": "codereview.stackexchange",
"id": 28096,
"tags": "java, programming-challenge, palindrome"
} |
First "Revealing Module" implementation | Question: I have a little js I call a "jqGrid Factory" to encapsulate common settings and functionality across my web app.
I just want to see what improvements I can make.
var jqGridReportFactory = (function () {
var config = {
datatype: 'json',
mtype: 'GET',
height: 'auto',
autowidth: true,
shrinkToFit: true,
gridview: true,
sortable: true,
rowNum: 50,
rowList: [50, 100, 200],
viewrecords: true,
loadonce: false,
sortorder: 'asc',
sortname: 'Affiliate',
subGridSortname: 'SubAffiliate'
},
subGridOptions = {
plusicon: "ui-icon-plus",
minusicon: "ui-icon-minus",
openicon: "ui-icon-carat-1-sw"
};
function createReport(gridOptions, optionalConfig) {
$.extend(config, optionalConfig);
//$.extend(gridOptions, gridOptions);
var jqGridObj = {
url: gridOptions.url,
datatype: config.datatype,
mtype: config.mtype,
postData: gridOptions.postData,
colNames: gridOptions.colNames,
colModel: gridOptions.colModel,
height: config.height,
autowidth: config.autowidth,
shrinkToFit: config.shrinkToFit,
gridview: config.gridview,
sortable: config.sortable,
rowNum: config.rowNum,
rowList: config.rowList,
viewrecords: config.viewrecords,
loadonce: config.loadonce,
sortorder: config.sortorder,
sortname: gridOptions.sortname,
pager: gridOptions.pager,
loadError: function (xhr, st, err) {
reportLoadError('onLoadConversionHistory', xhr, st, err);
unblockUI();
},
gridComplete: function () {
unblockUI();
goToScrollPosition($('#reportPlaceHolder'));
},
subGrid: gridOptions.subGrid,
onSelectRow: function (rowid) {
$(this).jqGrid("toggleSubGridRow", rowid);
}
};
if (gridOptions.subGrid) {
jqGridObj = addSubGrid(jqGridObj, gridOptions);
}
//jqGrid factory go!!
$("#" + gridOptions.id).jqGrid(jqGridObj);
}
function addSubGrid(jqGridObj, gridOptions) {
var subGridObj = {
subGridOptions: subGridOptions,
subGridRowExpanded: function (subgridId, rowId) {
var affiliate = $("#" + gridOptions.id).jqGrid("getCell", rowId, 'Affiliate');
var subgridTableId = subgridId + "_t";
var $subGrid;
$("#" + subgridId).html("<table id='" + subgridTableId + "' class='scroll'></table>");
$subGrid = $('#' + subgridTableId); //cache subgrid, more performant
var subGridColNames = jQuery.extend({}, gridOptions.colNames);
var subGridColModel = jQuery.extend({}, gridOptions.colModel);
//change parent names from Affiliate to Subaffiliate
//other than that subGrid model is exactly the same as parent Affiliate model for all reports so far
subGridColNames[0] = 'SubAffiliate';
subGridColModel[0].name = 'SubAffiliate';
subGridColModel[0].index = 'SubAffiliate';
//add affiliate to subGridPostData
var a = { Affiliate: affiliate };
$.extend(gridOptions.subGridPostdata, a);
$subGrid.jqGrid({
url: gridOptions.url,
datatype: gridOptions.datatype,
mtype: gridOptions.mtype,
postData: gridOptions.subGridPostdata,
colNames: subGridColNames,
colModel: subGridColModel,
height: config.height,
sortname: config.subGridSortname,
sortorder: config.sortorder,
loadonce: config.loadonce,
//these subgrid setting should not be overridden in my opinion - Brian Ogden
autowidth: true,
shrinkToFit: true,
gridview: false,
sortable: false,
viewrecords: true
///////////////////////
});
if (gridOptions.subGridHeadersHidden) {
//hide subgrid column headers
$subGrid.closest("div.ui-jqgrid-view")
.children("div.ui-jqgrid-hdiv")
.hide();
}
},
subGridRowColapsed: function (subgridId, rowId) {
// this function is called before removing the data
var subgridTableId;
subgridTableId = subgridId + "_t"; //
$("#" + subgridTableId).remove();
}
};
$.extend(true,jqGridObj, subGridObj);
return jqGridObj;
}
return {
createReport: createReport
};
})();
UPDATE
Just wanted to show the latest refactoring of my jqGrid factory. I didn't rename some of the options object members to camel case because I thought it was better for users of this factory to match the option names that jqGrid uses:
var jqGridReportFactory = (function () {
var constAffiliateStr = "Affiliate";
var constSubAffiliateStr = "SubAffiliate";
//default icons should be used for all reports so they not currently an option that can changed when using this jqgrid factory - Brian Ogden 1-24-2014
var subo = {
plusicon: "ui-icon-plus",
minusicon: "ui-icon-minus",
openicon: "ui-icon-carat-1-sw"
};
function createReport(o) {
o = $.extend({
//apply default properties
datatype: 'json',
mtype: 'GET',
height: 'auto',
autowidth: true,
shrinkToFit: true,
gridview: true,
sortable: true,
rowNum: -1,
rowList: [50, 100, 200, -1],
viewrecords: true,
loadonce: true,
footerrow: false,
sortorder: 'asc',
sortname: constAffiliateStr,
subGridSortname: constSubAffiliateStr,
subgrid: false,
subGridHeadersHidden: true
}, o);
var jqGridObj = {
url: o.url,
datatype: o.datatype,
mtype: o.mtype,
postData: o.postData,
colNames: o.colNames,
colModel: o.colModel,
height: o.height,
autowidth: o.autowidth,
shrinkToFit: o.shrinkToFit,
gridview: o.gridview,
sortable: o.sortable,
rowNum: o.rowNum,
rowList: o.rowList,
viewrecords: o.viewrecords,
loadonce: o.loadonce,
sortorder: o.sortorder,
sortname: o.sortname,
pager: o.pager,
footerrow: true,
loadError: function (xhr, st, err) {
reportLoadError('onLoad' + o.id, xhr, st, err);
unblockUI();
},
loadComplete: function () {
if (o.rowNum == -1) {
$(o.pager + ' option[value=-1]').text('All');
//if loadOnce is true displays -1
if (o.loadonce) $(o.pager + ' input.ui-pg-input').next().text('1');
}
if (o.loadComplete) o.loadComplete();
},
gridComplete: function () {
unblockUI();
goToScrollPosition($('#reportPlaceHolder'));
if (o.gridComplete) o.gridComplete();
},
subGrid: o.subGrid,
onSelectRow: function (rowid) {
$(this).jqGrid("toggleSubGridRow", rowid);
},
onSortCol: function (index, idxcol, sortorder) {
var $icons = $(this.grid.headers[idxcol].el).find(">div.ui-jqgrid-sortable>span.s-ico");
if (this.p.sortorder === 'asc') {
//$icons.find('>span.ui-icon-asc').show();
$icons.find('>span.ui-icon-asc')[0].style.display = "";
$icons.find('>span.ui-icon-asc')[0].style.marginTop = "0px";
$icons.find('>span.ui-icon-desc').hide();
} else {
//$icons.find('>span.ui-icon-desc').show();
$icons.find('>span.ui-icon-desc')[0].style.display = "";
$icons.find('>span.ui-icon-desc')[0].style.marginTop = "0px";
$icons.find('>span.ui-icon-asc').hide();
}
}
};
/*=================================================*/
//Build subGrid
/*=================================================*/
if (o.subGrid) {
/*=================================================*/
//Check to see if subGrid colModel and colNames passed in, if not use Affiliate colModel and colNames
/*=================================================*/
var temp;
if (!o.subGridColModel) {
temp = $.extend(true, [], o.colModel); //deep copy array of objects
temp[0].name = constSubAffiliateStr;
temp[0].index = constSubAffiliateStr;
o.subGridColModel = temp;
}
if (!o.subGridColNames) {
//temp = o.colNames.slice(0);
temp = $.extend(true, [], o.colNames); //deep copy not needed but better safe then sorry
temp[0] = constSubAffiliateStr;
o.subGridColNames = temp;
}
/*=================================================*/
var subGridObj = {
subo: subo,
subGridRowExpanded: function (subgridId, rowId) {
var affiliate = $("#" + o.id).jqGrid("getCell", rowId, 'Affiliate');
var subgridTableId = subgridId + "_t";
var $subGrid;
$("#" + subgridId).html("<table id='" + subgridTableId + "' class='scroll'></table>");
$subGrid = $('#' + subgridTableId); //cache subgrid, more performant
//add affiliate to subGridPostData
var a = { Affiliate: affiliate };
$.extend(o.subGridPostData, a);
$subGrid.jqGrid({
url: o.url,
datatype: o.datatype,
mtype: o.mtype,
postData: o.subGridPostData,
colNames: o.subGridColNames,
colModel: o.subGridColModel,
height: o.height,
sortname: o.subGridSortname,
sortorder: o.sortorder,
loadonce: o.loadonce,
sortable: o.sortable,
//these subgrid setting should not be overridden in my opinion - Brian Ogden
autowidth: true,
shrinkToFit: true,
gridview: false,
viewrecords: true
///////////////////////
});
if (o.subGridGroupHeaders) {
$subGrid.jqGrid('setGroupHeaders', {
useColSpanStyle: true,
groupHeaders: o.subGridGroupHeaders
});
}
if (o.subGridHeadersHidden) {
//hide subgrid column headers
$subGrid.closest("div.ui-jqgrid-view")
.children("div.ui-jqgrid-hdiv")
.hide();
}
},
subGridRowColapsed: function (subgridId, rowId) {
// this function is called before removing the data
var subgridTableId;
subgridTableId = subgridId + "_t"; //
$("#" + subgridTableId).remove();
}
};
$.extend(jqGridObj, subGridObj);
}
/*=================================================*/
//jqGrid factory go!!
return $("#" + o.id).jqGrid(jqGridObj);
}
return {
createReport: createReport
};
})();
Answer: I like your code, good use of the Revealing Module pattern.
I only have the following minor observations:
mtype could use a better name ( I know, jqGrid uses it )
autowidth -> autoWidth ( lowerCamelCasing )
gridview -> gridView
rowList: [50, 100, 200] could use a comment as to what it does
viewrecords -> viewRecords( lowerCamelCasing ) etc. etc.
sortorder -> Could use a comment with the possible values
subGridOptions -> could use a comment with this URL : http://api.jqueryui.com/theming/icons/
delete the commented-out code : //$.extend(gridOptions, gridOptions);
You should seriously consider building jqGridObj with $.extend out of gridOptions and config, cutting almost 20 lines.
It is considered better to have 1 var statement with comma-separated variables instead of multiple var statements. ( in subGridObj )
$subGrid.jqGrid could also probably be built with $.extend | {
"domain": "codereview.stackexchange",
"id": 5787,
"tags": "javascript, jquery, revealing-module-pattern"
} |
How to encourage the reinforcement-learning agent to reach the goal as quickly as possible, and what's the effect of discount factor? | Question: I am trying to use reinforcement learning to solve a task and compare its performance to humans.
The task is to find a single target in a fixed number of locations. At each step, the agent will pick one location, and check whether it contains the target. If the target is at this location, the agent will get a $+10$ reward and the trial ends; otherwise, the agent will get a hint at where the target is (with some stochastic noise), get a $-0.5$ reward, and it needs to pick another location in the next step. The trial will terminate if the agent cannot find the target within 40 steps (enough for humans). The goal is to solve the task as quickly and accurately as possible.
I am now trying to solve this problem by Deep Q-Network with prioritized experience replay. With the discount factor $\gamma=0.5$, the agent can learn quickly and solve the task with an accuracy close to 1.
My current questions are:
The accuracy level is already very high, but how to motivate the agent to find the target as quickly as possible?
What's the effect of $\gamma$ on the agent's task solving speed?
I am considering $\gamma$ because it relates to the time horizon of the agent's policy, but I now have two opposing ideas:
With $\gamma \rightarrow 0$, the agent is trying to maximize the immediate reward. Since the agent will only receive a positive reward when it finds the target, $\gamma \rightarrow 0$ motivates the agent to find the target in the immediate future, which means to solve the task quickly.
With $\gamma \rightarrow 1$, the agent is trying to maximize the discounted sum of reward in the long term. This means to reduce the negative rewards as much as possible, which also means to solve the task quickly.
Which one is correct?
I have tried training the network with $\gamma=0.1, 0.5, 0.9, 0.99$, but the network can only learn with $\gamma=0.1, 0.5$.
Answer:
The accuracy level is already very high, but how to motivate the agent to find the target as quickly as possible?
You already are, in two different ways:
A penalty (negative reward) for each time step taken.
A positive reward for completing a task, plus discounting.
Both of these choices are sufficient that action values will be maximised by taking the most direct route to complete a task, and minimise expected time steps from any starting point.
In theory you could completely lose one of the two approaches, and the reward system would still work.
For DQN I would lose the positive reward at the end and use a relatively high discount factor, e.g. $\gamma = 0.99$ - which is only required for numerical stability, and not really part of the problem definition. If your goal is to minimise number of time steps, then a simple count of number of remaining time steps is already a good cost function, and negating it to make a reinforcement learning return is close to ideal. It often works well with Q learning too because it will explore away from repetition, even if at the start of training it cannot reach the target state.
What's the effect of $\gamma$ on the agent's task solving speed?
This can be complex, but is a scenario with the only positive reward at the end and a discount factor, then expected reward will depend on expected future time steps in a geometric series. If in state $s_1$, action choice $a_1$ would lead to the end goal in 3 time steps and action choice $a_2$ would lead to the end goal in 4 time steps, with the only reward $r_T$ for reaching the goal state, then $q(s_1, a_1) = \gamma^3 r_T \gt q(s_1, a_2) = \gamma^4 r_T$, making $a_1$ the obvious choice.
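Plugging in illustrative numbers (the values of $\gamma$ and $r_T$ below are arbitrary choices, not from the question) makes the comparison concrete:

```python
# Discounted value of reaching the goal in 3 vs 4 steps,
# with only a terminal reward r_T.
gamma, r_T = 0.9, 10.0
q_a1 = gamma**3 * r_T  # goal reached in 3 steps
q_a2 = gamma**4 * r_T  # goal reached in 4 steps
print(q_a1, q_a2)      # q_a1 > q_a2, so the faster action is preferred
```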
A stochastic environment may muddy this a little, but in general the agent is going to prefer the lower number of timesteps to get to the goal. If distribution of number of timesteps can be very different, then different $\gamma$ values may cause slightly different gambling options made by the agent to complete faster. This is the reason why I suggest dropping the positive reward at the end, if your goal is to minimise expected number of timesteps to complete. That is because technically a change to $\gamma$ is a change to the problem definition - muddied slightly by usually needing $\gamma \lt 1$ when training a DQN to improve stability.
Which one is correct? I have tried training the network with $\gamma=0.1, 0.5, 0.9, 0.99$, but the network can only learn with $\gamma=0.1, 0.5$.
Both are correct, although you do have to be concerned about $\gamma$'s dual role as problem definition parameter and solution hyper-parameter when working with approximators.
I think (but without looking at your code) that you have a problem with your neural network hyperparameters. A discount factor of $0.9$ or $0.99$ should work with DQN, and is a very common choice. I suggest try a few different architecture choices, and also DQN hyperparameters such as experience replay size, time between copying learning network to target network etc.
Another thing that occurs to me, and may be specific to the environment that you are working with: If you are comparing the performance of your agent with a human, then a human may be able to apply their memory of previous attempts to this problem, and look for trends between time steps. If your state vector does not capture or summarise the history of guesses so far, and such "trend" information is actually useful for your problem, then you may need to add some kind of memory. You could modify the state to summarise attempts so far, or you could use an agent with memory, such as one based on a RNN.
Whether an agent with memory would help depends on the nature of the different "locations" that are being guessed at, and the hints. A very simple example of where memory would make a huge difference is game where the locations are all the numbers between 1 and 100, and the agent is told "higher" or "lower" when it makes an incorrect guess. Storing (or learning to store) the bounds implied by guesses so far would be critical to good performance of the agent. | {
"domain": "ai.stackexchange",
"id": 3050,
"tags": "reinforcement-learning, deep-rl, q-learning, reward-functions, discount-factor"
} |
What does $g_{tt}=0$ in the metric tensor mean? | Question: For example, in the Einstein-Rosen bridge metric, $g_{44}$ vanishes but it is pointed out that it is not a singularity.
Answer: As long as the metric is non-singular (i.e., its determinant is nonzero), all it means for a diagonal component to be zero is that the corresponding coordinate vector is a null vector. The closest that you can get to extracting any physical meaning from that is this: there exists at least one null vector in your spacetime. That's a pretty trivial result, though. This is because the metric relates physical things like distances to unphysical things like the units you use to measure those distances, or the particular basis you've chosen for a vector space — which means that no particular component of the metric has any physical significance.
[Note: I'll use the OP's apparent convention of the $t$ component being element 4 (starting from 1), as opposed to the usual element 0 (starting from 0).]
Even in Minkowski space with coordinates $(x, y, z, t)$ and metric
\begin{equation}
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1
\end{pmatrix},
\end{equation}
you can transform the coordinates by defining $u = (t-z)/\sqrt{2}$ and $v = (t+z)/\sqrt{2}$. Vectors in the $u$ and $v$ direction are just along the usual null cone, and are null vectors. But you can now transform the metric to the $(x, y, u, v)$ coordinate system, and you get
\begin{equation}
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & -1 \\
0 & 0 & -1 & 0
\end{pmatrix}.
\end{equation}
The determinant here is nonzero. In fact, the determinant is -1, just like the usual Minkowski metric — though any negative number could be physically identical. And since the $g_{uu}$ and $g_{vv}$ components are 0, we explicitly see that vectors in the $u$ and $v$ directions are null.
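These claims are easy to check numerically; the following plain-Python sketch contracts basis vectors with the transformed metric:

```python
# Metric in (x, y, u, v) coordinates, written out as a nested list.
g = [
    [1, 0,  0,  0],
    [0, 1,  0,  0],
    [0, 0,  0, -1],
    [0, 0, -1,  0],
]

def quadratic_form(g, a, b):
    # a . g . b for 4-vectors
    return sum(a[i] * g[i][j] * b[j] for i in range(4) for j in range(4))

u = [0, 0, 1, 0]
v = [0, 0, 0, 1]
print(quadratic_form(g, u, u))  # 0  -> u is null
print(quadratic_form(g, u, v))  # -1 (the off-diagonal component)

s = 2 ** -0.5
t = [0, 0, s, s]                # t = (v + u)/sqrt(2)
print(quadratic_form(g, t, t))  # approximately -1: t is timelike
```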
But we can still reconstruct the original $t$ and $z$ vectors as $t = (v+u) / \sqrt{2}$ and $z = (v-u) / \sqrt{2}$, and contracting those with the metric still gives us $-1$ and $+1$, respectively. So it's really the same physical content, just in a different coordinate system. | {
"domain": "physics.stackexchange",
"id": 60772,
"tags": "general-relativity, metric-tensor, coordinate-systems"
} |
Bottom camera problem | Question:
Hi,
I use Gazebo 2.2 with the ROS package tum_simulator, a simulator for the Parrot AR.Drone quadrotor.
I had thought the bottom camera wasn't working and only the front camera was OK, but then I noticed that the image from the bottom camera shows the sky instead of the ground. (strange)
Please check this screen: http://snag.gy/L5Riw.jpg
This is a world from Gazebo and two windows from rviz. The left image is the view from the bottom camera :( and the right image is the view from the front camera. The black shape near the window of the house is the quadrotor model.
Have you ever heard about this problem? Any ideas?
I noticed that when I run this node I get some errors; I suspect this is the reason for the camera problems.
Error [SDF.cc:788] Missing element description for [offset]
Error [SDF.cc:788] Missing element description for [drift]
Error [SDF.cc:788] Missing element description for [driftFrequency]
[ INFO] [1415319050.065958118, 659.736000000]: Camera Plugin (ns = /) <tf_prefix_>, set to ""
[ INFO] [1415319050.244706409, 659.736000000]: Camera Plugin (ns = /) <tf_prefix_>, set to ""
Error [SDF.cc:788] Missing element description for [accelOffset]
Error [SDF.cc:788] Missing element description for [accelDriftFrequency]
Error [SDF.cc:788] Missing element description for [rateOffset]
Error [SDF.cc:788] Missing element description for [rateDriftFrequency]
Error [SDF.cc:788] Missing element description for [headingOffset]
Error [SDF.cc:788] Missing element description for [headingDriftFrequency]
Error [SDF.cc:788] Missing element description for [driftFrequency]
Error [SDF.cc:788] Missing element description for [driftFrequency]
Error [SDF.cc:788] Missing element description for [offset]
Error [SDF.cc:788] Missing element description for [driftFrequency]
Error [SDF.cc:788] Missing element description for [velocityOffset]
Error [SDF.cc:788] Missing element description for [velocityDriftFrequency]
[ INFO] [1415319050.590175130, 659.736000000]: Using imu information on topic ardrone/imu as source of orientation and angular velocity.
Please help.
Originally posted by green96 on ROS Answers with karma: 115 on 2014-11-07
Post score: 0
Answer:
It was an issue with the orientation of the bottom camera in a URDF file (@green96 found the issue). You can look here for more information.
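For readers hitting the same symptom: a sky-facing or upside-down camera image in Gazebo usually traces back to the `rpy` of the camera joint in the robot's URDF. A hypothetical sketch of the kind of fix (joint name, offsets, and angles are illustrative, not the actual tum_simulator patch):

```xml
<joint name="bottom_camera_joint" type="fixed">
  <!-- A pitch of +pi/2 points the camera's optical axis down toward the
       ground; a flipped sign (-pi/2) would point it at the sky instead. -->
  <origin xyz="0 0 -0.05" rpy="0 1.570796 0"/>
  <parent link="base_link"/>
  <child link="bottom_camera_link"/>
</joint>
```

After editing the URDF, respawn the model (or restart the simulation) so Gazebo picks up the corrected joint orientation.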
Originally posted by Gary Servin with karma: 962 on 2014-11-08
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 19982,
"tags": "ros, tum-ardrone, ardrone-autonomy, tum-simulator, ardrone2.0"
} |