| anchor | positive | source |
|---|---|---|
Attraction and repulsion between electrons and protons | Question: I read somewhere that if we pack a proton and an electron together in a small volume, the combination does not attract or repel another electron or proton placed at a distance.
But my problem is that the force on a charged particle due to any other charged particle is not affected by other charges placed near the latter, i.e., the force between charged particles obeys the superposition principle. So why does the combination of an electron and a proton not attract or repel another charged particle?
Answer: The statement you're objecting to is describing the net force. You are correct that a far away electron will exert forces on both the electron and the proton, but those forces will be nearly equal and opposite, making them approximately cancel compared to the forces between the closely bound proton/electron. In reality, there will be a net attractive force because the electron will induce a polarization in the neutral hydrogen atom. If left alone, the electron and neutral hydrogen will eventually combine into a hydrogen anion. | {
"domain": "physics.stackexchange",
"id": 36450,
"tags": "forces, electrostatics, charge, coulombs-law"
} |
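The near-cancellation the answer describes can be put in numbers. This is an illustrative sketch (the distances are made-up values, not from the post): the attractive and repulsive Coulomb forces on a distant test electron differ only at order $d/R$.

```python
# Illustrative numbers (not from the post): Coulomb forces on a distant test
# electron from a proton/electron pair separated by d << R nearly cancel.
k = 8.988e9      # Coulomb constant, N m^2 / C^2
e = 1.602e-19    # elementary charge, C
R = 1.0e-9       # distance from the pair to the test electron, m
d = 1.0e-10      # proton-electron separation, m

f_proton = k * e ** 2 / R ** 2            # attraction toward the proton
f_electron = -k * e ** 2 / (R + d) ** 2   # repulsion from the bound electron
f_net = f_proton + f_electron             # superposition still holds

print(f"individual force: {f_proton:.2e} N, net force: {f_net:.2e} N")
# The net force survives only at order d/R -- here about 17% of either
# individual force -- and it vanishes as d/R -> 0.
```

Superposition is never violated; the two individual forces simply become equal and opposite as the pair shrinks to a point.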
Is an EMF more/same/less in an insulator than in a conductor? | Question: Is an EMF (electromotive force) more/same/less in an insulator than in a conductor?
For example: A loop of copper and a loop of plastic in a changing magnetic field.
In which will the emf be the greatest?
Answer: An emf exists around a closed path in space irrespective of the presence of a material medium, so it is the same for the copper loop and the plastic loop. The conductor merely provides a path for electrons to flow. | {
"domain": "physics.stackexchange",
"id": 61256,
"tags": "electromagnetism, conductors, insulators"
} |
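A minimal sketch of the point being made (all numbers are made up): Faraday's law gives the same emf for both loops, and only the induced current depends on the material.

```python
import math

# Faraday's law: |emf| = A * dB/dt for a loop of area A in a uniformly
# changing field -- identical for the copper and plastic loops of the same
# geometry. Only the resulting current differs. Illustrative numbers.
r = 0.05                   # loop radius, m
A = math.pi * r ** 2       # loop area, m^2
dB_dt = 0.2                # rate of change of B, T/s

emf = A * dB_dt            # same for both loops, V

R_copper = 1e-3            # resistance of the copper loop, ohms (made up)
I_copper = emf / R_copper  # a measurable current flows in copper
# For the plastic loop R is effectively infinite, so I ~ 0,
# even though the emf around the path is exactly the same.
print(f"emf = {emf:.3e} V, copper-loop current = {I_copper:.3e} A")
```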
Why does Earth have a dip in the CO2 absorption spectrum from 14 to 16 micron? | Question: This paper shows that there is a dip in the CO2 absorption spectrum of Earth. In essence, the trough of the CO2 absorption for Earth is cut into two separate troughs instead of one large trough. Why is that?
[image: original spectrum, without arrow]
As there have been some comments on whether this is planet related or molecule related, here is an additional image which shows the absence of the double trough in the CO2 absorption pattern for Venus whilst showing it for Earth:
Answer: Eric Jensen has already provided a nice link to a description of the basic structure of the ${\rm CO}_{2}$ spectrum, so I'll focus on the question of why there's a "spike" at 15.0 microns in the Earth spectrum, but not in the Venus or Mars spectra.
If you look at the link in Eric's answer, the very first image shows a high-resolution version of the ${\rm CO}_{2}$ absorption spectrum, including the main features to the right and left. There is also a very strong absorption peak in the middle (the "Q-branch") at about 15.0 microns (667 ${\rm cm}^{-1}$ in wavenumbers). The question then becomes: why does this feature show up as an extra dip in the Venus spectrum (and, barely visible, in the Mars spectrum) -- as one might expect -- but apparently as a peak in the Earth spectra?
The answer hinges on the fact that the spectra shown in the question are not, strictly speaking, absorption spectra. They are actually radiance/intensity measurements of infrared light emitted by different layers in the atmosphere of the planet in question, as seen from orbiting spacecraft. (In the first figure, it's narrowed to the atmosphere above certain locations on the Earth.) They thus show a combination of the molecular absorption spectra and the temperature structure of the atmosphere.
In simple terms, the spectra show thermal (blackbody) emission from layers in the atmosphere where upward-going photons can escape to space without being absorbed by higher layers in the atmosphere. The altitude of these layers is determined by the amount of the absorbing gas (e.g., ${\rm CO}_{2}$) and the strength of the absorption at a given wavelength. The stronger the absorption, the higher up is the layer that can emit freely to space.
(Note that for wavelengths with effectively zero absorption from any molecule, the "emitting layer" is actually the surface of the planet.)
How much infrared light is emitted at a given wavelength then depends on the temperature of the emitting layer, and it is here that the vertical temperature structure of the atmosphere becomes important. For Venus and Mars -- and for most of the emitting altitudes for Earth -- the temperature decreases with altitude, so that higher emitting layers have lower temperatures and thus emit less radiation. The net effect is to reproduce the absorption features, even though the whole thing is not, technically, an absorption spectrum.
But in the case of the Earth, the strongest absorption -- e.g., in the Q-branch part of the ${\rm CO}_{2}$ absorption, at 15.0 microns -- forces an emitting altitude that is so high it is actually in the stratosphere. And in the stratosphere, the temperature increases with altitude. So the Q-branch absorption peak forces the emitting layer to be at a higher temperature than is the case for the rest of the ${\rm CO}_{2}$ absorption regime.
This University of Chicago MODTRAN web page, which plots predicted emission from the Earth's atmosphere as seen from an altitude of 70 km (default setting), shows this rather nicely. The default parameters include a modern ${\rm CO}_{2}$ concentration of 400 ppm, which gives the feature seen in the Earth observations (i.e., the local emission peak in the middle of the 15 micron trough, which I've labeled "Q-branch reversal"):
But if you dial the ${\rm CO}_{2}$ concentration way down (e.g., to 10 ppm), the emitting altitudes are lower for all ${\rm CO}_{2}$-absorbing wavelengths, so much so that the emitting altitude for the Q-branch is down below the stratosphere, and so is at a lower temperature than the rest, and thus emits less flux, giving you the expected profile (i.e., with the weakest emission at the Q-branch wavelength, instead of a reversal):
(Finally, I believe the peculiar spectrum for the "Antarctic" case in the first figure is due to the unusual temperature profile of the atmosphere in polar regions, which can include temperature increasing with altitude in the first few km above the ground; the fact that the interior of Antarctica is at high altitude might also be relevant.) | {
"domain": "astronomy.stackexchange",
"id": 5455,
"tags": "earth, spectroscopy, venus"
} |
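The altitude-temperature argument can be sketched with the Planck function: at a fixed wavelength, an emitting layer in the warmer stratosphere radiates more than one near the cold tropopause. The temperatures below are rough illustrative values, not read off the spectra in the post.

```python
import math

# Planck spectral radiance, used to sketch why emission from a warmer layer
# gives more flux at the same wavelength. Temperatures are rough illustrative
# values (cold tropopause vs warmer stratosphere), not from the figures.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return (2 * h * c ** 2 / lam ** 5) / math.expm1(h * c / (lam * kB * T))

lam = 15e-6                   # the 15-micron CO2 Q-branch
B_cold = planck(lam, 215)     # emitting layer near the cold tropopause
B_warm = planck(lam, 245)     # emitting layer in the warmer stratosphere

# The very strong Q-branch absorption pushes the emitting layer up into the
# stratosphere, where it is warmer, so it radiates more: the local "peak".
print(B_warm > B_cold)   # True
```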
On the percentage of oxygen in the atmosphere | Question: The percentage of oxygen in the atmosphere is 21 percent, according to NASA.gov. I was trying to make a mathematical calculation about oxygen consumption of humans, when I realised that there is a gap in my understanding of the above mentioned fact. Is 21% of the MASS of the atmosphere that of oxygen, or is 21% of the VOLUME of the atmosphere that of oxygen? The mass is not evenly distributed in the atmosphere, and the volume of it is fixed. And for that matter, I have another question, how exactly is the percentage of oxygen in the atmosphere calculated?
Answer: The atmosphere is 20.95% oxygen by volume, on a dry basis (i.e. without any water vapor). See here for detailed composition information.
Your second question, how the oxygen percentage is calculated, rests on a misconception, I believe. The composition of the atmosphere is measured with gas analyzers, not calculated. | {
"domain": "earthscience.stackexchange",
"id": 2760,
"tags": "atmosphere, oxygen"
} |
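The volume-to-mass conversion the questioner is implicitly asking about can be sketched as follows (standard dry-air composition figures; for ideal gases, volume fraction equals mole fraction):

```python
# Converting the *volume* (molar) fractions of dry air to *mass* fractions.
# For ideal gases, volume fraction equals mole fraction.
composition = {          # mole fraction, molar mass (g/mol)
    "N2":  (0.7808, 28.014),
    "O2":  (0.2095, 31.998),
    "Ar":  (0.0093, 39.948),
    "CO2": (0.0004, 44.009),
}

mean_molar_mass = sum(x * M for x, M in composition.values())
mass_fraction_O2 = composition["O2"][0] * composition["O2"][1] / mean_molar_mass

print(f"mean molar mass of dry air: {mean_molar_mass:.2f} g/mol")  # ~28.97
print(f"O2 by mass: {100 * mass_fraction_O2:.1f}%")                # ~23.1%
```

So "21% oxygen" by volume corresponds to roughly 23% by mass, which matters for consumption calculations done in grams.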
Will there be a 'partially reflected ray' at critical angle of incidence? | Question: Will there be a 'partially reflected ray' at critical angle of incidence?
This diagram is given in my textbook. There are 'partially reflected rays' when the 'angle of incidence' is less than 'critical angle'. But when the 'angle of incidence' is equal to 'critical angle', there are no 'partially reflected rays'. Is this correct?
I checked youtube for a real demonstration and found this video: https://www.youtube.com/watch?v=NAaHPRsveJk
It shows a 'partially reflected ray' even at critical angle, which then becomes the 'totally reflected ray' for higher incidence angles.
Answer: You are right. The reflected beam is always there. It is just that when the angle of incidence reaches the critical angle, the intensity of the reflected beam becomes equal to that of the incident beam, and it stays that way as you increase the angle of incidence further.
The choice not to draw the reflected beam at the critical angle is purely a figure-design decision, probably made to avoid overcrowding the figure. | {
"domain": "physics.stackexchange",
"id": 92056,
"tags": "optics, reflection, refraction"
} |
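The Fresnel equations make the answer quantitative: the partially reflected beam exists at every angle, and its reflectance rises continuously to 1 at the critical angle. A sketch for s-polarized light going from glass to air (n = 1.5 is an assumed value):

```python
import math

# Fresnel reflectance (s-polarization) for glass -> air: the reflected beam
# exists at every angle, and its reflectance climbs continuously to 1 at
# the critical angle. Refractive indices are assumed, not from the post.
n1, n2 = 1.5, 1.0
theta_c = math.asin(n2 / n1)          # critical angle, ~41.8 degrees

def reflectance_s(theta_i):
    s = (n1 / n2) * math.sin(theta_i)
    if s >= 1.0:
        return 1.0                    # total internal reflection
    cos_t = math.sqrt(1.0 - s * s)
    cos_i = math.cos(theta_i)
    r = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    return r * r

for deg in (0, 20, 40, 41.8, 45):
    print(f"{deg:5.1f} deg  R = {reflectance_s(math.radians(deg)):.3f}")
```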
Compact QED in different dimensions | Question: I would like to find papers, reviews or books where I can read more about compact QED in different dimensions. I have read the chapter about compact QED in Polyakov's book, but it is very short and lacks details, derivations, etc.
My Google searches turned up nothing: I can't find more information about this topic.
Answer: Polyakov's analysis of compact QED (without fermions) is reviewed in detail in section 8.1 in the book
Greensite (2011), An Introduction to the Confinement Problem, https://www.springer.com/us/book/9783642143816
The book is well-written and engaging. The link given above includes free previews of each chapter, so you can get a feeling for the author's style. The book assumes familiarity with quantum field theory (QFT), specifically quantum chromodynamics (QCD). Lattice gauge theory is the starting point for most of the analyses. The focus is on understanding why confinement occurs in QCD and related models, often citing results of numerical (lattice) calculations as evidence for or against various ideas.
Compact QED is analyzed in section 8.1 as a mathematically-easier model with a confinement phase. This section can probably be read on its own, referring to previous chapters mainly for motivation and maybe a few notational conventions. Here are a couple of excerpts from section 8.1 ("Magnetic monopoles in compact QED") to confirm its relevance:
Compact QED in three and four dimensions has monopole excitations, and these excitations are responsible for the confinement of electric charge. The confinement property exists only at strong lattice couplings in $D=4$ dimensions, but it is found at all lattice couplings in $D=3$ dimensions. The word `compact' refers to the compactness of the $U(1)$ gauge group in the lattice (as opposed to the continuum) formulation of electrodynamics.
Polyakov's demonstration of confinement in compact QED$_3$ is quite beautiful... so the calculation is worth displaying... in a little more detail.
$D=3$ and $D=4$ are the only cases considered in section 8.1, but section 2.4 ("Possible phases of a gauge theory") mentions this about $D=2$:
In $D = 2$ dimensions it is easy to demonstrate that the only phase that exists is the magnetically disordered phase, for any lattice action of the form (2.24) [sum over plaquettes], and for any gauge group. Take the $Z_2$ gauge theory for simplicity... [details follow]
[Conclusion:] The underlying reason for magnetic disorder in $D = 2$ dimensions is the absence of a Bianchi constraint relating different components of the field strength tensor.
Compact QED is sometimes analyzed indirectly by regarding the $U(1)$ gauge group as a limiting case ($N\to\infty$) of the discrete $\mathbb{Z}_N$ gauge group, whose elements are $\exp(2\pi i n/N)$. The preceding excerpt refers to the case $N=2$.
Here's a summary of some of the context leading up to chapter 8 ("Monopoles, calorons, and dual superconductivity"): Chapter 3 goes into some depth comparing various possible definitions of "confinement" and explains why the author chooses to focus on a definition based on magnetic disorder. Section 3.4 introduces the idea of center symmetry, and chapter 4 concludes that center symmetry is closely associated with confinement in QCD. Compact QED is studied in chapter 8, where confinement mechanisms in various simpler models are compared and contrasted with the confinement mechanism in QCD. A major theme in chapter 8 is that the "abelian" confinement mechanism that operates in some simpler models may differ in important ways from the confinement mechanism in QCD. | {
"domain": "physics.stackexchange",
"id": 56714,
"tags": "resource-recommendations, quantum-electrodynamics"
} |
Does the wave nature of a particle refer to the wave function? | Question: In quantum mechanics when we talk about the wave nature of particles are we referring in fact to the wave function? Does the wave function describes the probability of finding a particle (ex: photons) at some location? So do the "waves" describe probabilities just the way in classical physics the electromagnetic waves describe the perturbations of the electric and magnetic fields?
Answer: No, because the wavefunctions are not waves in space. They are waves in enormous high-dimensional spaces of possibilities. If you have two particles, the wavefunction is waving in 6 dimensions (the two positions of the two particles make a six dimensional space of possibilities), if you have three particles, the wavefunction is in 9 dimensions. So it is always wrong to think of it as a wave in space, like a field.
There is a field which obeys the Schrodinger equation, but this classical field is a classical wave, like E and B, which describes many coherent bosons in the same quantum state all moving together, like a superfluid or a Bose-Einstein condensate. | {
"domain": "physics.stackexchange",
"id": 3909,
"tags": "quantum-mechanics, quantum-electrodynamics, wavefunction"
} |
Beat frequency when a bat flys towards a wall | Question: A bat is flying towards a wall while emitting an ultrasound of frequency 25kHz. Emitted sound and the sound that bounces off a wall form a beat frequency 1.65kHz, that the bat detects. With what speed does the bat approach the wall? Speed of sound is 340m/s.
This is the full exercise. The answer is:
The bat detects the frequency:
$$
f=f_0\left(\frac{1+\frac{v}{c}}{1-\frac{v}{c}}\right)
$$
coming from an image source behind the wall that is approaching the bat with speed $v$, which is also the speed with which the bat is approaching the wall.
Beat frequency:
$$
f_b=\frac{f-f_0}{2}
$$
and
$$
f=2\cdot f_b+f_0=f_0 \frac{1+ \frac{v}{c}}{1- \frac{v}{c}}
$$
From that we calculate:
$$
v=c \cdot \frac{f_b}{f_0+f_b}=20\,\text{m}/\text{s}
$$
I want to know why the beat frequency is $f_b=\frac{f-f_0}{2}$ and not $f_b=f-f_0$ . Any explanation would be much appreciated.
Answer: The beat frequency, mathematically, is indeed $\frac{f - f_0}{2}$, as can easily be shown using the factor formula in trigonometry. However, the perceived beat frequency is twice of that, which is $f - f_0$. This is because the beat frequency modulates the amplitude of the sound wave. In one full cycle of $2\pi$, the amplitude goes both positive and negative, so we hear beats twice as often. See this image for a better visualization. | {
"domain": "physics.stackexchange",
"id": 68398,
"tags": "homework-and-exercises, waves, acoustics, frequency, speed"
} |
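Plugging the exercise's numbers into the final formula is a quick check. Note it gives about 21 m/s; the quoted 20 m/s is evidently a rounded value.

```python
# Plugging the exercise's numbers into v = c * f_b / (f0 + f_b),
# obtained from f = f0 (1 + v/c)/(1 - v/c) with f - f0 = 2*f_b.
c = 340.0        # speed of sound, m/s
f0 = 25e3        # emitted frequency, Hz
f_b = 1.65e3     # beat frequency the bat detects, Hz

v = c * f_b / (f0 + f_b)
print(f"bat speed: v = {v:.2f} m/s")
```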
How can I use an anti-derivative (integral) to find the velocity, given the acceleration? | Question: How can I write the velocity as a function of time, given the acceleration? Say a particle has an acceleration $a$ and an initial velocity $v_0$ at time $t = 0$. What is the particle's velocity at a later time $t$?
Please provide an answer which uses integration to find the velocity.
Answer: You have to think about what is the definition of an acceleration. The acceleration tells you how the velocity changes in time.
For example, the gravitational acceleration at the surface of the earth is roughly $10 \frac{m}{s^2}$, which means that if you begin to fall at time $t=0\,s$, then at $t=1\,s$ you will have a velocity of $10\frac{m}{s}$, at $t=2\,s$ a velocity of $20\frac{m}{s}$, and so on.
To be more precise mathematically, the acceleration is the derivative of the velocity, since by definition it describes the change in velocity. Of course, if acceleration is the derivative of the velocity, then velocity is an antiderivative of acceleration.
To write all down, the acceleration is defined as $$a(t) = \frac{d v(t)}{dt}$$
which means that the velocity can be computed as $$v(t)=\int{a(t)dt}$$
For example if $$a(t) = 10$$ and the velocity at time $t = 0$ is $v_0$ then $$v(t) = \int{a(t) dt} = 10 \cdot t + C$$
Since $v(0) = C = v_0$ we get as final expression $$v(t) = 10t+v_0 $$ | {
"domain": "physics.stackexchange",
"id": 90554,
"tags": "homework-and-exercises, kinematics, acceleration, velocity"
} |
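The answer's recipe (integrate the acceleration, then fix the constant from $v(0)=v_0$) can be checked numerically with a simple midpoint Riemann sum (a sketch, not part of the original answer):

```python
# Numeric sketch of v(t) = v0 + integral of a(t') from 0 to t,
# approximated with a midpoint Riemann sum (pure Python, no libraries).
def velocity(a, v0, t, steps=10_000):
    """Integrate the acceleration a(t') from 0 to t, starting from v(0) = v0."""
    dt = t / steps
    v = v0
    for i in range(steps):
        v += a((i + 0.5) * dt) * dt   # midpoint rule
    return v

a = lambda t: 10.0                    # constant acceleration, as in the answer
print(velocity(a, v0=5.0, t=3.0))     # ~ 10*3 + 5 = 35
```

The same function handles time-varying acceleration, e.g. $a(t)=2t$ gives $v(t)=t^2+v_0$.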
Math in deriving Pauli Equation | Question: When deriving the Pauli Equation, it has the following step:
$$i\frac{e}{c}\hbar[\vec{A}\times\nabla+\nabla\times\vec{A}]\phi=i\frac{e}{c}\hbar\space curl\vec{A}\cdot \phi$$
$\phi$ is one of the spinors in the bispinor of the electron. $\vec{A}$ is the vector potential. How does it go from the LHS to the RHS? I thought $\nabla\times\vec{A}$ is just $\text{curl}\ \vec{A}$?
Answer: You need to show $\overbrace{\vec{A} \times \nabla \phi + \nabla \times (\vec{A}\phi)}^{LHS}=\overbrace{\phi \nabla\times \vec A}^{RHS} $
or equivalently $ \nabla \times (\vec{A}\phi)=\phi \nabla\times \vec A -\vec{A} \times \nabla \phi$
To show this we write
\begin{eqnarray}
\nabla \times (\vec{A}\phi) &=& \varepsilon_{ijk }\partial_i (A_j\,\phi)\;,
\\&=&\Big[\varepsilon_{ijk }\partial_i (A_j)\,\phi+\varepsilon_{ijk }A_j\,\partial_i (\phi)\Big]\;,\\
&=&\Big[\varepsilon_{ijk }\partial_i (A_j)\,\phi-\varepsilon_{jik }A_j\,\partial_i (\phi)\Big]\;,\\
&=&\Big[\varepsilon_{ijk }\partial_i (A_j)\,\phi-\varepsilon_{ijk }A_i\,\partial_j (\phi)\Big]\;,\\
&\equiv& (\nabla \times\vec A)\,\phi -\vec A \times \nabla \phi
\end{eqnarray}
Your $\cdot$ on the right-hand side may also cause confusion, since it looks like the dot product, but it is really just scalar multiplication. | {
"domain": "physics.stackexchange",
"id": 38207,
"tags": "homework-and-exercises, quantum-electrodynamics, differentiation"
} |
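The index gymnastics can be spot-checked numerically: the identity $\nabla \times (\vec A \phi) = \phi\,\nabla\times\vec A - \vec A \times \nabla\phi$ must hold for any smooth fields. A finite-difference sketch with arbitrarily chosen polynomial fields (not from the post):

```python
# Numeric spot-check of curl(phi*A) = phi*curl(A) - A x grad(phi),
# the identity proved by the index manipulation above. Fields are
# arbitrary smooth choices; derivatives use central differences.
h = 1e-5

def A(x, y, z):
    return (y * z, x * x, x + y * z)

def phi(x, y, z):
    return x * y + z * z

def grad(f, p):
    x, y, z = p
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

def curl(F, p):
    gx = grad(lambda *q: F(*q)[0], p)  # gradient of each component
    gy = grad(lambda *q: F(*q)[1], p)
    gz = grad(lambda *q: F(*q)[2], p)
    return (gz[1] - gy[2], gx[2] - gz[0], gy[0] - gx[1])

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

p = (0.3, -0.7, 1.2)
lhs = curl(lambda x, y, z: tuple(phi(x, y, z) * a for a in A(x, y, z)), p)
AxGrad = cross(A(*p), grad(phi, p))
rhs = tuple(phi(*p) * c - w for c, w in zip(curl(A, p), AxGrad))

print(max(abs(l - r) for l, r in zip(lhs, rhs)))  # tiny: finite-difference error
```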
What is the name of this color transformation? | Question: This is a question regarding color transformation in images. I have this color transformation matrix I am using to convert an RGB image to a color space whose name I do not know:
T = [(1/3) (1/3) (1/3); (1/2) 0 (-1/2); (-1/2) 1 (-1/2)]
What is the name of this color transform? I think I once saw somewhere that it is called KL transform because it was close to a KLT computed over a large collection of images, but I am not sure...
Answer: Just found the paper that referred to this transformation:
Y.I. Ohta, T. Kanade, T. Sakai. Color information for region segmentation, Comput. graphics Image Process. 13 (1980) 222—241.
It is indeed called the K-L space, as it is a static approximation of the (dynamic) image-specific KLT calculated over a large set of images. The main reason is that the eigenvectors remain approximately the same for a large set of natural color images. | {
"domain": "dsp.stackexchange",
"id": 207,
"tags": "image-processing"
} |
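For concreteness, here is the matrix from the question applied to a couple of RGB pixels (a sketch; note the third row is a scaled version of Ohta's $I_3 = (2G - R - B)/4$):

```python
# Applying the transform from the question to RGB pixels. The rows compute
# (R+G+B)/3, (R-B)/2 and (2G-R-B)/2 -- Ohta et al.'s I1/I2/I3 features
# (I3 appears here scaled; Ohta's paper uses (2G-R-B)/4).
T = [
    [ 1/3, 1/3,  1/3],
    [ 1/2, 0.0, -1/2],
    [-1/2, 1.0, -1/2],
]

def transform(rgb):
    return [sum(t * c for t, c in zip(row, rgb)) for row in T]

print(transform([255, 0, 0]))      # pure red
print(transform([128, 128, 128]))  # gray: I2 = I3 = 0
```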
What languages can be reduced to an NP-complete problem in polynomial time | Question: NP-complete: A language is NP-complete when it is in NP and every problem in NP is reducible to it in polynomial time. But what languages are reducible to an NP-complete problem (for example SAT) in polynomial time, other than the languages in NP?
NP-hard: A problem H is NP-hard when every problem L in NP can be reduced to H in polynomial time. This is reduction in the opposite direction, so I'm not sure it is a good candidate. We don't know whether NP-hard problems can be solved in polynomial time.
If P ≠ NP, then NP-hard problems cannot be solved in polynomial time. (NP-hardness wiki)
So, my question is: what class of languages can be reduced to an NP-complete problem in polynomial time, using Turing reductions and/or many-one reductions?
Answer: Under Karp reductions, the answer is exactly $\mathbf{NP}$: it is not hard to see that if a language is Karp-reducible to any $\mathbf{NP}$-language, then it is in $\mathbf{NP}$ too. On the other hand, all of $\mathbf{NP}$ reduces to $\mathbf{NP}$-complete languages by definition.
Under Turing reductions, the answer is the class $\mathbf{P}^\mathbf{NP}$: languages that are decidable by a polynomial-time algorithm that has an access to an $\mathbf{NP}$ oracle. Indeed, a Turing reduction of a language $L$ to an $\mathbf{NP}$-complete language is by definition a polynomial-time algorithm deciding $L$ with oracle access to $\mathbf{NP}$. On the other hand, any such algorithm can be simulated with oracle access to an $\mathbf{NP}$-complete language, thus obtaining a Turing reduction. | {
"domain": "cstheory.stackexchange",
"id": 4839,
"tags": "np-hardness, sat, reductions, np-complete"
} |
Problem with Energy Transfer Rate Conversion | Question: Given the question:
Calculate the energy transfer rate across a 6 in wall of firebrick with a temperature difference across the wall of 50 °C. The thermal conductivity of the firebrick is $0.65\ \frac{\text{Btu}}{\text{hr ft }^\circ\text{F}}$ at the temperature of interest.
The correct answer is $369\ \text{W/m}^2$.
I used the following approach:
$$x = 6\,\text{in} \Rightarrow 0.5\,\text{ft}$$
$$\triangle T = 50\,^\circ\text{C} \Rightarrow 122\,^\circ\text{F}$$
$$k = 0.65\ \text{Btu}/(\text{hr}\cdot\text{ft}\cdot{}^\circ\text{F})$$
$$\frac{Q}{A} = \frac{k\,\triangle T}{x} = 158.6\ \text{Btu}/(\text{hr}\cdot\text{ft}^2) \Rightarrow 500\ \text{W/m}^2$$
But this does not match the correct answer. Is there a step that I am missing?
Answer: Your °C-to-°F conversion would be correct for a different question.
The Celsius and Fahrenheit scales have their zero points offset by 32 °F, and this offset has to be allowed for when converting a temperature reading from one scale to the other.
However
In this case you do not want to convert between the two scales but to express the temperature difference in Fahrenheit degrees rather than Celsius degrees. So the ratio 9:5 is relevant and the 32-degree offset between the two scales is irrelevant: a 50 °C difference is a 90 °F difference.
Short version of answer is: Doh! :-) | {
"domain": "engineering.stackexchange",
"id": 3688,
"tags": "thermodynamics, heat-transfer"
} |
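Redoing the arithmetic with the temperature difference converted correctly (90 F degrees, not 122 °F) reproduces the expected answer:

```python
# Redoing the calculation with the *temperature difference* converted
# correctly: a 50 C-degree difference is 90 F degrees (factor 9/5, no
# +32 offset). This reproduces the expected 369 W/m^2.
k = 0.65           # Btu / (hr * ft * degF)
x = 6 / 12         # 6 in = 0.5 ft
dT = 50 * 9 / 5    # 50 C degrees = 90 F degrees (offset irrelevant)

q = k * dT / x                  # Btu / (hr * ft^2)
q_SI = q * 3.1546               # 1 Btu/(hr*ft^2) = 3.1546 W/m^2

print(f"{q:.0f} Btu/(hr*ft^2)  =  {q_SI:.0f} W/m^2")   # ~117 -> ~369
```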
roslaunch prevents node from creating/writing to file | Question:
For our current project, we created a launch file that initializes several nodes, one of which is supposed to create a log file, open it, write a header line at the beginning of the file, and then close the file.
When we run the nodes individually from terminal, this function works fine, but when the project is run from a launch file, the log file is never created. Do nodes run from a launch file not have permission to create new log files?
Function to create log file:
char Beacon::createLogFile(char *logDataFileName)
{
int status;
status = mkdir("Log", S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);
if (status == -1)
{
// error; it might be that the directory already exists
if (errno == EEXIST)
{
// no problem
}
else if (errno == EACCES)
{
printf("Permission\n");
return 1;
}
}
else
{
// Directory successfully created
}
FILE* logData_f_stream;
logData_f_stream = fopen(logDataFileName, "w");
if (logData_f_stream == NULL)
{
// fopen failed (e.g. unexpected working directory); report and bail out
printf("Could not open log file %s\n", logDataFileName);
return 1;
}
fprintf(logData_f_stream, "NodeID Latitude Longitude Time OWTT\n");
fclose(logData_f_stream);
return 0;
}
Originally posted by UW NDCL on ROS Answers with karma: 3 on 2012-07-05
Post score: 0
Answer:
You should look for your log file in your ~/.ros directory when starting nodes from launch files.
Originally posted by Thomas D with karma: 4347 on 2012-07-05
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by weiin on 2012-07-05:
or alternatively, set absolute path for the file name, so you know exactly where to look.
Comment by Thomas D on 2012-07-06:
Be careful with absolute paths though because this will make your code less portable if/when other people start using your work or if/when you try to use your own code on a different computer with a different directory structure. | {
"domain": "robotics.stackexchange",
"id": 10064,
"tags": "roslaunch"
} |
Is the height of the tree the number of edges or number of nodes? | Question: I'm so confused by some of the theorems online about tree heights. Does tree height mean the number of edges or nodes? if nodes, does it include the node it is counting from? Can the height of a tree start from 0?
Answer: As Yuval says, there's no standard definition. This is not because computer scientists are indecisive but because it's sometimes more convenient to use one definition and sometimes more convenient to use the other. For example, a full, balanced binary tree of height $h$ has $2^h$ leaves if you define height as number of edges and $2^h-1$ vertices in total if you define height as number of vertices. Each of these statements becomes less convenient if you use the other definition and have to keep writing $h-1$ or $h+1$.
The situation is exactly the same as the natural numbers: sometimes, it's more convenient to say that zero is a natural number (for example, the natural numbers are a semiring only if zero is included); other times, it's more convenient to omit zero (for example, if you always want to be able to divide by a natural number). In fact, similar things happen throughout mathematics. Another example is that it's common to insist that graphs have at least one edge (or at least one vertex) to avoid having to start all your theorems "If $G$ is not trivial, then..." | {
"domain": "cs.stackexchange",
"id": 3500,
"tags": "graphs, terminology, trees"
} |
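The two conventions side by side, for a full, balanced binary tree (a small sketch of the formulas in the answer):

```python
# The two height conventions for a full, balanced binary tree:
# counting edges vs counting nodes on the root-to-leaf path.
def leaves(h_edges):
    """Leaves of a full binary tree of height h_edges (edge-counting convention)."""
    return 2 ** h_edges

def total_nodes(h_nodes):
    """Total nodes of a full binary tree of height h_nodes (node-counting convention)."""
    return 2 ** h_nodes - 1

# A root with two children: height 1 by the edge convention,
# height 2 by the node convention.
print(leaves(1), total_nodes(2))   # 2 leaves, 3 nodes
```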
What happens when an alkylborane is treated with acetic acid? | Question: In the hydroboration reactions of alkenes I've seen so far, they were always followed by oxidation with $\ce{H2O2/NaOH}$ (to yield alcohols).
Today I found a peculiar reaction in which acetic acid-d1 was added after hydroboration to give a deuterated product:
I don't quite understand what exactly happens after the hydroboration part. Can someone enlighten me here?
Answer: Acetic acid effects protonolysis of the borane:
If AcOD is used then RD (instead of RH) is formed. | {
"domain": "chemistry.stackexchange",
"id": 9283,
"tags": "organic-chemistry, reaction-mechanism, organoboron-compounds"
} |
Help in proving L-Completeness | Question: I'm trying to prove that the following language is L-complete
A is a language where each word consists of 0s and 1s and the number of 0s is double the number of 1s.
So far I've managed to show that it can be solved in log space using a counter that adds 2 for every '1' and deducts 1 for every 0.
I now need to prove that every language in L is log-space reducible to A.
Answer: I can't make a comment, so I have to use an answer. It's a well-known fact that every non-trivial language in $L$ is L-complete under log-space reductions: since the reduction itself is allowed logarithmic space, it can simply decide its input and then output one of two fixed strings, a yes-instance of A if the input is in the language and a no-instance otherwise. So once you have shown that A is in $L$ and is non-trivial, completeness follows immediately. | {
"domain": "cs.stackexchange",
"id": 19466,
"tags": "complexity-theory, reductions, space-complexity"
} |
How to use two different datasets as train and test sets? | Question: Recently I started reading more about NLP and following tutorials in Python in order to learn more about the subject. The problem that I've encountered, now that I'm trying to make my own classification algorithm (the text sends a positive/negative message) regards the training and the testing datasets. In all the examples that I've found, only one dataset is used, a dataset that is later split into training/testing. I have two datasets, and my approach involved putting together, in the same corpus, all the texts in the two datasets (after preprocessing) and after, splitting the corpus into a test set and a training set.
datasetTrain = pd.read_csv('train.tsv', delimiter = '\t', quoting = 3)
datasetTrain['PN'].value_counts()
datasetTest = pd.read_csv('test.tsv', delimiter = '\t', quoting = 3)
datasetTest['PN'].value_counts()
corpus = []
y = []
# some preprocessing
y.append(posNeg)
corpus.append(text)
from sklearn.feature_extraction.text import TfidfVectorizer
transf = TfidfVectorizer(stop_words = stopwords, ngram_range = (1,1), min_df = 5, max_df = 0.65)
X = transf.fit_transform(corpus).toarray()
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.11, random_state = 0)
The reason why I've done this is because I'm working with the Bag of Words model and if I'm creating from the beginning X_train and X_test (y_train, y_test respectively) and not using the splitting function, I get an error when running the classification algorithm:
X_train = transf.fit_transform(corpustrain).toarray()
X_test = transf.fit_transform(corpustest).toarray()
...
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
ValueError: Number of features of the model must match the input. Model n_features is 2770 and input n_features is 585
I'm kind of new at this and I was wondering if anyone could please guide me in the right direction?
Answer: You may want to use a pipeline to do this operation. Specifically, you do NOT want to fit the TfidfVectorizer on the entire corpus: doing so gives your model hints about features in the test set that don't exist in the training set, a problem frequently referred to as "leakage" or "data snooping".
The correct pattern is:
transf = transf.fit(X_train)
X_train = transf.transform(X_train)
X_test = transf.transform(X_test)
Using a pipeline, you would fuse the TFIDFVectorizer with your model into a single object that does the transformation and prediction in a single step. It's easier to maintain a solid methodology within that pattern.
In the example code, you're both fitting and transforming in the same step fit_transform, which is creating different features each time and is the source of your error. | {
"domain": "datascience.stackexchange",
"id": 4042,
"tags": "python, nlp, dataset, training"
} |
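The "fit on train, transform both" pattern can be illustrated without scikit-learn using a toy bag-of-words vectorizer (a sketch; in practice you would call `TfidfVectorizer.fit`/`.transform`, or wrap vectorizer and classifier in a `Pipeline`):

```python
# Minimal illustration of "fit on train, transform both": a toy
# bag-of-words vectorizer whose vocabulary is fixed by the training set,
# so train and test matrices always have the same number of features.
class BowVectorizer:
    def fit(self, docs):
        vocab = sorted({w for d in docs for w in d.lower().split()})
        self.index = {w: i for i, w in enumerate(vocab)}
        return self

    def transform(self, docs):
        rows = []
        for d in docs:
            row = [0] * len(self.index)
            for w in d.lower().split():
                if w in self.index:          # unseen words are ignored,
                    row[self.index[w]] += 1  # keeping the feature count fixed
            rows.append(row)
        return rows

train = ["good movie", "bad movie", "good plot"]
test = ["good fun movie"]                # "fun" was never seen in training

vec = BowVectorizer().fit(train)         # fit on TRAIN only
X_train = vec.transform(train)
X_test = vec.transform(test)

print(len(X_train[0]), len(X_test[0]))   # same width: 4 4
```

Calling `fit` separately on the test corpus, as in the question, would build a different vocabulary and hence a different feature count, which is exactly the `ValueError` shown.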
mixture of maximally mixed and maximally entangled state | Question: Consider the quantum system $\mathcal{B}(\mathbb{C}^d\otimes\mathbb{C}^d)$ and $|\psi\rangle=\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|i,i\rangle$ be the (standard) maximally entangled state. Consider the state
$\rho_\lambda=\lambda \frac{\mathbb{I}_{d^2}}{d^2}+(1-\lambda)|\psi\rangle\langle\psi|.$
Now for some values of $\lambda$ this state is entangled (example $\lambda=0$ it is $|\psi\rangle\langle\psi|$) and hence its entanglement can be detected by partial transpose operation.
Can $\rho(\lambda)$ be an entangled state which is positive under partial transpose (known in the literature as a PPT entangled state) for some values of $\lambda$? My intuition tells me that this is the case. However, I was told (without a reference) that it is not, and that for all values of $\lambda$ we get only separable states or NPT entangled states (entangled states that are not positive under partial transpose). I could not find the corresponding paper; maybe I am not using the proper search string. Thanks in advance for any suggestion, reference or comment.
Answer: To answer this question, we will first compute the values of $\lambda$ for which $\rho(\lambda)$ is PPT and separately compute the values for which it is entangled.
Let $T$ be the transpose map, such that the partial transpose map may be written
as $(\mathbb{I}\otimes T)$, where $\mathbb{I}$ is the identity on $\mathbb{C}^d$. One can show that the partial transpose maps the standard maximally entangled state into the SWAP operator
$$(\mathbb{I}\otimes T)|\psi\rangle\langle\psi|=\frac{1}{d}W,$$
where $W=\sum_{i,j}|i\rangle\langle j|\otimes|j\rangle\langle i|$. For reference, you can take a look at John Watrous' excellent lecture notes. The SWAP operator has eigenvectors with eigenvalue $-1$; let's call one of them $|w\rangle$. We then have
\begin{align}
\langle w|(\mathbb{I}\otimes T)\rho(\lambda)|w\rangle&=\lambda\langle w|\frac{\mathbb{I}}{d^2}|w\rangle+(1-\lambda)\langle w|W|w\rangle\\
&=\frac{\lambda}{d^2}-\frac{(1-\lambda)}{d}.
\end{align}
We want this expression to be nonnegative, which gives us the condition $$\lambda\geq\frac{d}{1+d}.$$
On the other hand, we can calculate the maximum overlap $\langle\psi|\rho_s|\psi\rangle$ that a separable state $\rho_s$ can have with $|\psi\rangle$, such that if the overlap of $\rho(\lambda)$ is greater than this maximum, we know that $\rho(\lambda)$ is entangled. It can be shown (see for example this review) that in our case this maximum is precisely $\frac{1}{d}$. Therefore, $\rho(\lambda)$ is entangled whenever
\begin{align}
\langle\psi|\rho(\lambda)|\psi\rangle&>\frac{1}{d}\\
\Rightarrow\frac{\lambda}{d^2}+(1-\lambda)&>\frac{1}{d},
\end{align}
which gives the condition
$$\lambda<\frac{d^2-d}{d^2-1}=\frac{d}{d+1}.$$
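As a quick numerical sanity check of the PPT boundary for $d=2$ (an illustrative numpy sketch, not part of the original argument):

```python
import numpy as np

d = 2
psi = np.zeros(d * d)
psi[[0, 3]] = 1 / np.sqrt(2)          # |psi> = (|00> + |11>)/sqrt(2)
proj = np.outer(psi, psi)

def min_pt_eig(lam):
    """Smallest eigenvalue of the partial transpose of rho(lambda)."""
    rho = lam * np.eye(d * d) / d**2 + (1 - lam) * proj
    # partial transpose on the second subsystem: swap its row/column indices
    pt = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)
    return np.linalg.eigvalsh(pt).min()

# rho(lambda) is PPT exactly when lambda >= d/(d+1) = 2/3
assert min_pt_eig(0.67) > 0
assert min_pt_eig(0.5) < 0
```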
However, you can quickly check that both conditions cannot be met simultaneously, so there is no value of $\lambda$ for which $\rho(\lambda)$ is entangled and PPT. | {
"domain": "physics.stackexchange",
"id": 29143,
"tags": "quantum-mechanics, quantum-information, quantum-entanglement"
} |
Why is $L = \{ \langle M \rangle : L(M) = \{ \langle M \rangle \} \}$ not Turing-recognizable? | Question: In other words, $L$ is the language of Turing machines that recognize the language consisting of only themselves.
Why is $L$ not Turing-recognizable? $L$ is clearly not decidable by Rice's Theorem, but how do I go one step further and also prove that no machine can enumerate $L$?
Answer: Given a Turing machine $T$, use the recursion theorem to construct a Turing machine $T'$ that halts if its input $x$ is $\langle T' \rangle$, runs $T$ on $x$ if $x < \langle T' \rangle$, and runs $T$ on $x-1$ if $x > \langle T' \rangle$. If you could recursively enumerate $L$, then you could use this reduction to recursively enumerate all machines that never halt. | {
"domain": "cs.stackexchange",
"id": 817,
"tags": "formal-languages"
} |
Prime Numbers Store | Question: Let's say we need to create a store for selling prime numbers.
Users enter the store and ask to buy a number.
If the number asked is a prime number,
1.1. then it's either available for sale
1.2. or was purchased earlier, hence not available for sale.
If the number asked is not a prime number, then it doesn't exist.
Let us assume that the store contains the maximum possible number of primes that could be represented by a primitive type (in my implementation I assume the largest prime number isn't larger than Int32's MaxValue, and to actually run this in a reasonable time, I set a MAX_VALUE constant to 100000).
The store must handle concurrent calls.
Buying should be fast as possible!
When the store opens for business, it should already contain the numbers for sale.
Users cannot be blocked by shopping for numbers, nor by other buyers.
Example: UserA asks to purchase a number and then UserB arrives and asks to purchase a number, then UserB won't need to wait until the system's dealt with UserA's transaction. So when UserA's transaction finishes, a response will be delivered to UserA, then UserB's transaction will start and once finished a response will be delivered to UserB.
My Solution:
I define an enum called NumberType, that basically describes whats the buying state of each number.
According to the problem definition, every number could be either prime, then it may be available for sale or not available (because it was bought before), or the number isn't prime, so it doesn't exist.
NumberTypes.cs
namespace PrimeStore
{
public enum NumberType
{
SoldSuccessfully,
NotAvailable,
NotExist
}
}
Next, I define a Singleton class named Store, that is lazy-initiated on first-access to at most MAX_VALUE number of prime numbers in a Dictionary of {Int32, Boolean} pairs, initiating all the Boolean's to false since nothing was bought yet.
To calculate all those prime numbers on initialization, I use a well-known algorithm, the Sieve of Eratosthenes, in parallel, because it takes very long to calculate.
This method was taken from the .NET 4.0 examples article.
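For reference, the sieve idea itself fits in a few lines (an illustrative Python sketch, not the parallel C# from the linked article):

```python
def sieve(limit):
    """Sieve of Eratosthenes: list of all primes p with 2 <= p <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # strike out multiples, starting at p*p (smaller ones already handled)
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [p for p, flag in enumerate(is_prime) if flag]

assert sieve(30) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```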
Then the public method that is exposed outside is:
public void BuyNumber(Int32 number, Action<NumberType> callback)
The user may choose whatever number and supplies a callback method that takes a NumberType as a parameter and decides what to do whether the number was successfully bought, wasn't available or doesn't exist.
To verify that the number is prime (in the boundaries between 2..MAX_VALUE as defined in the source code) a simple ContainsKey check on the dictionary is enough.
But if the number does exist (it is prime) then a lock must be gained in order to exclusively buy the number by one user.
In each case, the user's callback is called with the right choice from the enum.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace PrimeStore
{
public sealed class Store
{
public const Int32 MAX_VALUE = 100000 - 2;
private static Store _instance;
private static object syncRoot = new Object();
private readonly Dictionary<Int32, Boolean> _primes;
private Store()
{
_primes = new Dictionary<int, bool>();
IEnumerable<int> numbers = Enumerable.Range(2, MAX_VALUE);
var parallelQuery =
from n in numbers.AsParallel()
where Enumerable.Range(2, (int)Math.Sqrt(n)).All(i => n % i > 0)
select n;
foreach (var number in parallelQuery)
{
_primes.Add(number, false);
}
}
public static Store Instance
{
get
{
if (_instance == null)
{
lock (syncRoot)
{
if (_instance == null)
_instance = new Store();
}
}
return _instance;
}
}
public void BuyNumber(Int32 number, Action<NumberType> callback)
{
// no lock is needed here since just checking if a number is prime
// and its a readonly operation.
if (!_primes.ContainsKey(number))
{
callback(NumberType.NotExist);
}
else
{
BuyPrime(number, callback);
}
}
private void BuyPrime(Int32 number, Action<NumberType> callback)
{
Boolean bought = false;
// the number is prime, then obtain exclusive access
// to the dictionary and try to buy it.
lock (_primes)
{
if (_primes[number] == false)
{
_primes[number] = true;
bought = true;
}
}
if (bought)
{
callback(NumberType.SoldSuccessfully);
}
else
{
callback(NumberType.NotAvailable);
}
}
}
}
Here's a simple program that makes concurrent buys (in parallel) of many numbers between 0..MAX_VALUE, then repeats itself for a few times in order to re-buy numbers who were already bought (that's the purpose of the outer for loop).
The callback simply writes the result of each case to the Console.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace PrimeStore
{
public class Program
{
public static void Main(string[] args)
{
for (int i = 0; i < 3; i++)
{
Parallel.For(0, Store.MAX_VALUE, (number) =>
{
Store.Instance.BuyNumber(number, (numberType) =>
{
switch (numberType)
{
case NumberType.NotAvailable:
Console.WriteLine("{0} is not available for sale.", number);
break;
case NumberType.NotExist:
Console.WriteLine("{0} is not a prime number.", number);
break;
case NumberType.SoldSuccessfully:
Console.WriteLine("Successfully bought {0}.", number);
break;
}
});
});
}
}
}
}
Can anyone think of a better way to store the numbers, assuming that all prime numbers must already be stored when opening the store (in my implementation - on the first call to Store's BuyNumber), without causing transactions to have a run-time worse than \$O(1)\$?
What if we'd tried to increase MAX_VALUE to almost Int32.MaxValue?
What do you think will happen first, an OutOfMemoryException or 3 hours of waiting?
Answer: foreach (var number in parallelQuery)
{
_primes.Add(number, false);
}
You could simplify this to:
_primes = parallelQuery.ToDictionary(n => n, n => false);
public void BuyNumber(Int32 number, Action<NumberType> callback)
I don't understand why you are using a callback here. The whole method is completely synchronous, so I think you should simply return the result from it.
// no lock is needed here since just checking if a number is prime
// and its a readonly operation.
if (!_primes.ContainsKey(number))
Read-only operation doesn't mean that you don't need a lock. The documentation for Dictionary says:
A Dictionary<TKey, TValue> can support multiple readers concurrently, as long as the collection is not modified.
The problem is, there could be a write occurring from another thread, so according to the documentation, you should use locking in this case (possibly using ReaderWriterLockSlim which allows multiple readers at the same time). | {
"domain": "codereview.stackexchange",
"id": 3794,
"tags": "c#, primes, singleton, callback, sieve-of-eratosthenes"
} |
C++ beginner exercises: bracketing search | Question: After receiving feedback for the grading exercise from this list I proceeded to the next one that looked interesting to me, namely implementing the bracketing search for a computer to guess a number from 1 to 100 in 7 or less guesses. I tried to keep what I was advised in my previous question but a few of these points were not relevant in the second exercise. I'm looking for any feedback, but possibly something about:
Variable scope and declaration. I've declared variables in the specific functions that use them. Is that how it should be?
This feels "heavy handed". All the switches and loops give me a whiff of code smell. Is this how C++ programs that loop over commands are structured? Should I be abstracting that further?
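As a side note on the algorithm itself: halving the interval [1, 100] needs at most 7 guesses, which a short simulation confirms (illustrative Python, mirroring the midpoint rule used in guessNumber below):

```python
def guesses_needed(secret, low=1, high=100):
    """Number of bracketing-search guesses needed to find `secret`."""
    count = 0
    while True:
        count += 1
        guess = (high - low) // 2 + low  # same midpoint rule as guessNumber()
        if guess == secret:
            return count
        elif secret > guess:
            low = guess + 1
        else:
            high = guess - 1

# worst case over 1..100 is exactly 7 guesses
assert max(guesses_needed(s) for s in range(1, 101)) == 7
```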
#include <iostream>
#include <chrono>
#include <random>
static const char QUIT = 'q';
static const char HUMAN_GUESSER = 'h';
static const char COMPUTER_GUESSER = 'c';
static const char CHOSEN_NUMBER_IS_HIGHER = 'h';
static const char CHOSEN_NUMBER_IS_LOWER = 'l';
static const char GUESS_IS_CORRECT = 'c';
static const int DEFAULT_MIN = 1;
static const int DEFAULT_MAX = 100;
int guessNumber(int min, int max) {
return ((max - min) / 2) + min;
}
bool computerGuesser() {
char guessInformation;
int min = DEFAULT_MIN;
int max = DEFAULT_MAX;
int guess;
bool abort = false;
std::cout
<< "Think of a number which the computer shall guess. (Please don't change the number during the task ;)\n";
for (int numberOfGuesses = 1; !abort; ++numberOfGuesses) {
guess = guessNumber(min, max);
std::cout << "Is the number " << guess << "? Answer with higher ("
<< CHOSEN_NUMBER_IS_HIGHER << ") if the chosen number is higher than the guess, lower ("
<< CHOSEN_NUMBER_IS_LOWER << ") if it is lower or correct (" << GUESS_IS_CORRECT
<< ") if computer got it right.\n";
if (std::cin >> guessInformation) {
switch (guessInformation) {
case CHOSEN_NUMBER_IS_HIGHER:
min = guess + 1;
break;
case CHOSEN_NUMBER_IS_LOWER:
max = guess - 1;
break;
case GUESS_IS_CORRECT:
std::cout << "Yeah! Computer got the correct number in " << numberOfGuesses << " guesses\n";
abort = true;
break;
default:
std::cout << "that's a wrong option\n";
abort = true;
}
}
}
return abort;
}
int makeRandomNumber(int min, int max) {
auto seed = std::chrono::system_clock::now().time_since_epoch().count();
std::mt19937_64 generator(seed);
std::uniform_int_distribution<int> distribution(min, max);
return distribution(generator);
}
bool humanGuesser() {
int userGuess;
int numberOfGuesses = 0;
int internalGuess = makeRandomNumber(DEFAULT_MIN, DEFAULT_MAX);
std::cout << "Enter your guess.\n";
while (std::cin >> userGuess) {
++numberOfGuesses;
if (userGuess == internalGuess) {
std::cout << "Correct. It took you " << numberOfGuesses << " guesses.\n";
break;
} else {
std::cout << "Incorrect. The internal guess is " << (userGuess < internalGuess ? "higher." : "lower.")
<< "\n";
}
}
return true;
}
bool dispatchCommand(char command) {
switch (command) {
case COMPUTER_GUESSER:
return computerGuesser();
case HUMAN_GUESSER:
return humanGuesser();
case QUIT:
default:
return false;
}
}
int main() {
char command;
std::cout << "Hello, World! Welcome to the super duper number guesser.\n";
while (true) {
std::cout << "Who will be playing, you (h) or the computer (c)? Press (q) to quit.\n";
if (std::cin >> command && dispatchCommand(command)) {
continue;
} else {
break;
}
}
}
cmake_minimum_required(VERSION 3.8)
project(bracketing)
set(CMAKE_CXX_STANDARD 17)
set(SOURCE_FILES main.cpp)
add_executable(bracketing ${SOURCE_FILES})
Answer: This code is really easy to read. It has good naming and avoids "magic numbers." Your variables all seem properly declared and scoped. I see some things that could be improved, but honestly, it's nothing major. Here are my ideas.
Smaller Functions
In your computerGuesser() function I think you could break out some of the functionality into separate functions to make it less "heavy handed" as you put it. For example, prompting the user could be a function, as could the switch statement that decides what to do next. Something like this:
char promptUser(const int guess)
{
char guessInformation = '\0';
do {
std::cout << "Is the number " << guess << "? Answer with higher ("
<< CHOSEN_NUMBER_IS_HIGHER << ") if the chosen number is higher than the guess, lower ("
<< CHOSEN_NUMBER_IS_LOWER << ") if it is lower or correct (" << GUESS_IS_CORRECT
<< ") if computer got it right.\n";
std::cin >> guessInformation;
if ((guessInformation != CHOSEN_NUMBER_IS_HIGHER) &&
(guessInformation != CHOSEN_NUMBER_IS_LOWER) &&
(guessInformation != GUESS_IS_CORRECT))
{
            guessInformation = '\0';
            std::cout << "that's a wrong option. Please try again.\n";
        }
    } while (guessInformation == '\0');
    return guessInformation;
}
Note that I changed the above a little bit. If the user enters an invalid entry, it now asks them to try again and gives them another chance instead of aborting.
Next, the function to decide what to do with the result:
bool handleGuess(const char guessInformation, const int guess,
                 const int numberOfGuesses, int& min, int& max)
{
bool abort = false;
switch (guessInformation) {
case CHOSEN_NUMBER_IS_HIGHER:
min = guess + 1;
break;
case CHOSEN_NUMBER_IS_LOWER:
max = guess - 1;
break;
case GUESS_IS_CORRECT:
std::cout << "Yeah! Computer got the correct number in " << numberOfGuesses << " guesses\n";
abort = true;
break;
}
return abort;
}
Then your computerGuesser() function becomes this:
bool computerGuesser() {
char guessInformation;
int min = DEFAULT_MIN;
int max = DEFAULT_MAX;
int guess;
bool abort = false;
std::cout
<< "Think of a number which the computer shall guess. (Please don't change the number during the task ;)\n";
for (int numberOfGuesses = 1; !abort; ++numberOfGuesses) {
guess = guessNumber(min, max);
guessInformation = promptUser(guess);
        abort = handleGuess(guessInformation, guess, numberOfGuesses, min, max);
}
return abort;
}
Simpler Logic
I generally dislike infinite loops when there's an obvious end condition. The while loop you have in main() has an end condition, so there's no reason to make it an infinite loop. I'd write it like this:
int main() {
char command;
std::cout << "Hello, World! Welcome to the super duper number guesser.\n";
do {
std::cout << "Who will be playing, you (h) or the computer (c)? Press (q) to quit.\n";
} while ((std::cin >> command) && dispatchCommand(command));
} | {
"domain": "codereview.stackexchange",
"id": 27734,
"tags": "c++, beginner, number-guessing-game"
} |
Is Hudson Bay part of the Arctic or the Atlantic Ocean? | Question: Years ago I learned that the Columbia Icefield was North America's triple divide point: water flowing from this ice field could drain into the Arctic, the Pacific, and the Atlantic Ocean. Something I never questioned. Today I came upon this map, and was surprised to see that the triple point was actually further south, close to the USA/Canada border:
I looked it up and found about Triple Divide Peak. The article has another, more precise map:
So it appears that there are actually two triple points. That led me to search which ocean Hudson Bay belongs to... and this is where it gets tricky. Citing various sources, Wikipedia says:
Hudson Bay is often considered part of the Arctic Ocean; the International Hydrographic Organization, in its 2002 working draft of Limits of Oceans and Seas defined the Hudson Bay, with its outlet extending from 62.5 to 66.5 degrees north (just a few miles south of the Arctic Circle) as being part of the Arctic Ocean, specifically "Arctic Ocean Subdivision 9.11." Other authorities include it in the Atlantic, in part because of its greater water budget connection with that ocean.
Some sources describe Hudson Bay as a marginal sea of the Atlantic Ocean, or the Arctic Ocean.
So, my questions are: Which objective criterion could be used to settle this dispute? And considering this criterion, which ocean does Hudson Bay belong to?
Answer: Water above the blue line flows to the arctic. No question there. Water below the green line flows to the Atlantic. Also no debate.
Water between the blue and green lines flows to Hudson Bay. But where does Hudson Bay flow? The answer, technically, is both. But which one does it primarily flow to? From this article, it seems like the vast majority of flow out of Hudson Bay is to the Atlantic. In fact, it's an enormous source of freshwater to the Atlantic. A lot of that water comes from the Arctic Ocean (and here is where the definition of 'ocean' becomes itself kind of fraught).
But if you were to simply ask: "at which point in the continental US would water FLOW to three different oceans?", Snow Dome has the better claim. The majority of water leaving the Montana Peak towards Hudson Bay will end up in the Atlantic Ocean, not the Arctic.
If you were to ask, "at which point(s) does any water flow to all three Oceans?", it's technically the entire locus of points along the great divide between these two "triple points".
But if you had to pick just one point, Snow Dome seems more appropriate. | {
"domain": "earthscience.stackexchange",
"id": 2450,
"tags": "oceanography, geography, watershed"
} |
Can an Earth-like planet survive if our Sun went Supernova? | Question: First, I would like to point out that, yes, our Sun does not have enough mass to be a supernova candidate; this is a scenario where it is, though. After the various life-cycle stages of the Sun, and whether or not life managed to survive on the planet, the Sun goes off as a supernova. It is 1 AU from an Earth-like planet (equivalently, the planet is 1 AU from a massive star). Would the planet still be standing, or would it be obliterated from the face of the Galaxy?
Here are some of the links that I have been looking through but could not find an answer for:
https://www.physicsforums.com/threads/effective-destructive-range-of-supernovae.312925/
https://earthsky.org/astronomy-essentials/supernove-distance
https://www.popularmechanics.com/space/deep-space/a26483/supernovas-deadly-twice-as-far-away/
Answer: One way of estimating this is to look at how much energy could be received by the planet. At distance $d$ the planet takes up a fraction $r^2/(4d^2)$ of the sky as seen by the supernova. So for a $10^{44}$ J blast that is about $4.5444\times 10^{34}$ J.
The gravitational binding energy of Earth is about $2\times 10^{32}$ J. So we have about 227 times as much energy as is needed to separate all pieces of the planet to infinity. It is also a few thousand times the energy needed to heat up an Earth-mass of iron from 0K to vaporisation. So, yes, it looks like it could well obliterate the planet.
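The arithmetic above is easy to reproduce (an illustrative sketch; the Earth radius and AU are standard values, and the $10^{44}$ J and $2\times 10^{32}$ J figures are the ones quoted above):

```python
r_earth = 6.371e6   # m, Earth radius
au = 1.496e11       # m, Earth-Sun distance
e_sn = 1e44         # J, supernova energy output
e_bind = 2e32       # J, gravitational binding energy of Earth

fraction = r_earth**2 / (4 * au**2)  # fraction of the sky the planet takes up
e_received = e_sn * fraction
print(e_received)           # ~4.5e34 J
print(e_received / e_bind)  # ~227 times the binding energy
```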
Whether it actually does so is a complex question of how protected the evaporating planet is by its plasma sheath during the explosion. Given that actual terrestrial exoplanets in close orbits at 2000K may lose an Earth mass per gigayear it looks plausible that at least slower scenarios can vaporise planets. | {
"domain": "astronomy.stackexchange",
"id": 3918,
"tags": "the-sun, earth, supernova"
} |
Optimal coverage of arbitrary mask by strided masks | Question: Say we have bit mask with some bits on and off:
1001110010101
We want to "deduce pattern", by covering this mask with as few strided masks as possible.
By "strided mask" I mean masks like:
11111111....
01010101....
00100100....
We can choose any length of strided masks and we need to cover initial mask completely.
Say for
1001110010101
Answer might be 3:
1001110010101
1001
    11
        10101
But I am not sure even in this case -- may be we can do better?
More tricky example:
11011101
covered by only 2 strided masks:
01010101
10001
Naive solution is 3: 11, 111 and 1
I have a feeling this problem may be well known, something like the regular Post embedding problem or Set Cover, but everything I know is not exactly this problem.
Do you think we can find a good algorithm here, or can we prove NP-hardness or even undecidability?
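For masks this small, minimality can be checked by brute force. The sketch below (illustrative Python, treating a strided mask as an arithmetic progression of 1-positions that must stay inside the original mask) confirms that 3 is in fact optimal for the first example and 2 for the second:

```python
from itertools import combinations

def min_ap_cover(bits):
    """Fewest strided masks (arithmetic progressions of 1-positions, each
    staying inside the original mask) whose union is all 1-positions."""
    S = {i for i, b in enumerate(bits) if b == "1"}
    if not S:
        return 0
    cands = {frozenset({a}) for a in S}   # singletons are always allowed
    for a in S:
        for s in range(1, max(S) - a + 1):
            run, x = [a], a + s
            while x in S:                  # extend the progression while inside S
                run.append(x)
                cands.add(frozenset(run))
                x += s
    cands = list(cands)
    for k in range(1, len(S) + 1):
        for combo in combinations(cands, k):
            if frozenset().union(*combo) == S:
                return k

assert min_ap_cover("11011101") == 2       # matches the 2-mask cover above
assert min_ap_cover("1001110010101") == 3  # so 3 is indeed optimal here
```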
Answer: Your problem is known as (exact) cover by arithmetic progressions (depending on whether you allow overlaps or not). Both variants were shown to be NP-complete by Heath, Covering a Set with Arithmetic Progressions is NP-Complete. | {
"domain": "cs.stackexchange",
"id": 19756,
"tags": "algorithms, np-complete, combinatorics"
} |
Why can't we use Gauss's law to find the electric field along an axis perpendicular to the wire's midpoint? | Question: I have seen this answer on why Gauss's law can't be used to find the electric field due to a finite wire;
however, I'm unable to understand why translational symmetry is lost if we're only concerned with the field along the axis perpendicular to the wire's midpoint.
Answer: Short answer: in the finite wire case, there's an extra electric flux term in the y-direction.
If you use Gauss's law,
$$\oint \vec{E} \cdot d\vec{S} = Q/\epsilon_0$$
The integral term on the LHS involves the total electric flux contained within that imaginary cylinder.
In the case of the infinite wire, there was no flux in the y-direction, since the wire runs on continuously and we excluded that term in our calculations.
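To make the contrast quantitative, one can integrate Coulomb's law directly for a finite wire on its perpendicular bisector (an illustrative Python sketch; the closed-form field below follows from the standard integral, and the charge density value is arbitrary):

```python
import math

k, lam = 8.9875e9, 1e-9   # Coulomb constant (N m^2/C^2), line charge density (C/m)
y = 0.1                    # distance from the wire's midpoint (m)

def e_finite(L):
    # field on the perpendicular bisector of a finite wire of length L:
    # E = (2*k*lam/y) * (L/2) / sqrt(y^2 + (L/2)^2)
    return (2 * k * lam / y) * (L / 2) / math.hypot(y, L / 2)

e_infinite = 2 * k * lam / y   # naive infinite-wire (Gauss's law) result

assert e_finite(1.0) < e_infinite                       # finite wire field is smaller
assert abs(e_finite(1000.0) / e_infinite - 1) < 1e-6    # long-wire limit recovers it
```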
However, in the finite wire case, we must include the flux term due to the electric field lines running in the y-direction. This is why calculating the electric field for a finite wire is more complex. | {
"domain": "physics.stackexchange",
"id": 90898,
"tags": "electrostatics, electric-fields, gauss-law"
} |
What is the initial state in the second register in Shor's algorithm? | Question: Following up on this question, can someone help me clear up the notation we're using for states in Shor's algorithm? It's very unclear what the sentence "where the second register is $|1\rangle$ made from $n$ qubits" in the Wikipedia article means. The input state is supposedly $|0\rangle^{\otimes 2n+1} \otimes |1\rangle$. The first register makes sense, and one could write $|0\rangle^{\otimes 2n+1}=|\underbrace{0 \ldots 0}_{2n+1}\rangle$. Would one say that's $|0\rangle$ made from $2n+1$ qubits?
It appears we have states for all integers $0 \le k < 2^n$, which makes sense because if our fundamental qubit states are $|0\rangle$ and $|1\rangle$ then we can write $k$ in binary and we have $2^n$ tensor product states. Isn't the state $|0\rangle = |\underbrace{0 \ldots 0}_{n}\rangle$ and $|1\rangle = |1\underbrace{0 \ldots 0}_{n-1}\rangle$?
Clearly there must be $n$ qubits in the second register as input since $U$ (multiplication by $a$ modulo $N$) is a $2^n \times 2^n$ matrix. Yes, $U$ has eigenvectors, and $|1\rangle$ is an equal sum of those eigenvectors, but that doesn't help me understand the fixed input state.
Answer: They mean that the second register uses $n$ qubits, and it's in the state $|1\rangle$.
This is using the standard way of enumerating the $2^n$ possible (computational basis) $n$-qubit states, so $|k\rangle$ denotes the $n$-qubit state corresponding to the binary string which in base-10 corresponds to $k\in\mathbb{N}$.
Consider for example $n=2$. Then
$$|0\rangle \simeq|0,0\rangle,
\quad
|1\rangle \simeq|0,1\rangle,
\quad
|2\rangle \simeq|1,0\rangle,
\quad
|3\rangle \simeq|1,1\rangle.$$
So more generally, you could say that $|1\rangle$ denotes the $n$-qubit state with binary representation $|0\rangle^{\otimes(n-1)}\otimes|1\rangle$. Although this expression is generally not very insightful in this context.
Note also that conventions may vary, and so you might have the equivalence $|1\rangle\simeq|1\rangle\otimes|0\rangle^{\otimes(n-1)}$ rather than the one above. This doesn't affect any result of course, it's only a notational detail.
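The eigenvector decomposition mentioned in the question can also be checked numerically (an illustrative numpy sketch; $a=7$, $N=15$ are chosen only for concreteness):

```python
import numpy as np

N, a = 15, 7
p = 1
while pow(a, p, N) != 1:   # period of a modulo N
    p += 1

dim = 2 ** 4               # n = 4 qubits suffice since N = 15 < 16
def ket(k):
    v = np.zeros(dim, dtype=complex)
    v[k] = 1.0
    return v

# eigenvectors |u_s> of multiplication by a mod N, eigenvalue exp(2*pi*i*s/p)
u = [sum(np.exp(-2j * np.pi * s * k / p) * ket(pow(a, k, N))
         for k in range(p)) / np.sqrt(p)
     for s in range(p)]

# their equal-weight superposition is exactly |1>
assert p == 4
assert np.allclose(sum(u) / np.sqrt(p), ket(1))
```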
The reason you use this particular state for the second register is that, as you might observe directly, the modular multiplication operator $U_a$ has eigenvectors $|u_s\rangle$, corresponding to eigenvalues $e^{2\pi i s/p}$ with $p$ period, which sum to precisely $|1\rangle$. Therefore using $|1\rangle$ as second register is a convenient way to perform a quantum phase estimation with a set of possible eigenstates of $U_a$ in superposition. | {
"domain": "quantumcomputing.stackexchange",
"id": 5167,
"tags": "quantum-algorithms, shors-algorithm"
} |
Visualization of odometry msgs | Question:
Hi,
my question is related to the code of odometry msgs at
http://www.ros.org/wiki/navigation/Tutorials/RobotSetup/Odom
How can I visualize a series of odometry msgs in rviz. How do I have to configure rviz to get a sequence of arrows illustrating path and velocity? If I start the small example together with rviz (fixed frame = /base_link) I see with target frame = one stationary arrow?
Best wishes
Poseidonius
Originally posted by Poseidonius on ROS Answers with karma: 427 on 2011-08-11
Post score: 2
Answer:
Try setting your fixed frame to /odom or /map (if you have a map). In my setup, you won't see any movement of the arrow when the fixed frame is set to /base_link. I've attached a snapshot from RViz after rotating the robot through 360 degrees. Note that this is using the latest Diamondback debian packages. The latest Electric seems to have a bug wherein the arrow points straight down instead of horizontally.
--patrick
odom_rviz_diamondback.png
Originally posted by Pi Robot with karma: 4046 on 2011-08-12
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Poseidonius on 2011-08-18:
Thanks for the ticket id ... I changed the line and enjoy correct visualization now !
Comment by Pi Robot on 2011-08-14:
It does seem to be a bug. I reported it here (https://code.ros.org/trac/ros-pkg/ticket/5120) and it has been confirmed by one other person (joq).
Comment by Poseidonius on 2011-08-14:
Thanks patrik,
Comment by Poseidonius on 2011-08-14:
I am using Electric and I see the odometry arrow pointing straight down. Is it really a bug, or do I write wrong code?
"domain": "robotics.stackexchange",
"id": 6398,
"tags": "navigation, odometry, rviz, transform"
} |
What is a positive epitope fragment | Question: What is a positive epitope fragment? I found one paper on the subject:
COBEpro: a novel system for predicting continuous B-cell epitopes by
Michael J. Sweredoski and Pierre Baldi
Answer: The group tries to predict possible epitope sequences for the generation of antibodies. This also includes non-linear epitopes, where the amino acids of the epitope are not lined up one after another in the sequence but come into close contact through the protein's 3D structure. To train their algorithm, they use data where the boundaries of the epitopes and the non-epitopes are known and clearly mapped.
The "Dataset and Preparation" section in this paper contains the following:
In this article, we used several different datasets to train and
benchmark COBEpro. These datasets were derived from several different
previously published sources: BciPep (Saha et al., 2005), Pellequer
(Pellequer et al., 1993) and HIV (Korber et al., 2003). The BciPep
datasets consist of epitope/non-epitope sequence fragments. The
Pellequer and HIV datasets consist of whole antigen proteins annotated
with precise epitope boundaries.
The "positive epitope fragments" are known and verified ("positive") epitopes from these datasets in the training database. "Fragment" refers to the length restriction of 20 amino acids which they chose to apply: shorter epitopes had amino acids added to reach 20 aa, and longer ones were truncated. | {
"domain": "biology.stackexchange",
"id": 3414,
"tags": "cell-biology, bioinformatics, immunology, antigen"
} |
Kinetic energy of different phases of water | Question:
In a closed flask at zero degrees Celsius, all three forms of water exist. Which form of water will have the maximum average kinetic energy?
The answer is that all forms of water will have the same average kinetic energy. Why?
Answer: You can consider "average kinetic energy" equivalent to "temperature". The question doesn't directly mention if the contents of the flask are all at the same temperature, but if they are at the same temperature then they all have the same average kinetic energy.
Considering that water has a triple point at 0 degrees Celsius and a certain pressure, I think that we can reasonably interpret the question as meaning that the three phases are in equilibrium at this triple point. So, in this case all three "forms" (more formally called "phases") of water are present and they all have the same average kinetic energy (temperature). None of them has a higher average kinetic energy than either of the others, so you can't say that any of them has a "maximum" average kinetic energy. | {
"domain": "physics.stackexchange",
"id": 20568,
"tags": "homework-and-exercises, thermodynamics"
} |
Simple validation and mathematical function in MATLAB | Question: I've made a script (Triangle.m) to calculate the area of a triangle in MATLAB:
function Triangle()
% Main function
clc;
clear variables;
close all;
% Set function values
base = force_positive_finite("Base");
height = force_positive_finite("Height");
area = tri_area(base,height);
fprintf("Area: %2.3f m²\n",area);
end
function output = force_positive_finite(var_name)
% Ask/force the user to enter a positive number
output = -1;
while true
output = input("Enter in "+var_name+" (m): ");
if input_error_response(output)
% If no error, leave loop
break
end
end
end
function response = input_error_response(in)
% Warns user if input is invalid
if isempty(in) || in < 0 || in==inf % Value must be a non-empty positive number
disp("Enter a value: {0 <= VALUE < inf}");
response = false; % This signals the loop to continue
else
response = true; % This means we can exit the loop
end
end
function area = tri_area(base,height)
% Calculate the area of a triangle
arguments
base {mustBeNumeric, mustBeFinite};
height {mustBeNumeric, mustBeFinite};
end
area = base*height*0.5;
end
It contains force_positive_finite to ensure the input is >= 0 and finite, then is fed into tri_area where it is checked to be numeric and finite, before calculating the area.
I am new to MATLAB, so any pointers on convention, optimisation and general improvements are appreciated.
Answer: The function Triangle() is clearly a script that you turned into a function. I personally don't like putting clc, clear and close at the top of my scripts, but I see this a lot, and it's not a terrible thing to do. In a function they are very much out of place.
A function has its own workspace (it cannot see variables defined outside of it), so there really is no need for a clear. This is the point of using functions instead of scripts: each function has its own workspace ("scope" in other programming languages), which is cleared the moment the function ends. The base workspace doesn't get cluttered with variables if you use functions instead of scripts, and the code inside the function will not accidentally be using variables from a different scope (which makes finding bugs easier).
Given that Triangle() asks for user input, a clc is not terrible, but you could accomplish the same effect (make the input prompt more visible) by first printing some empty lines.
close all closes all figure windows. Your program doesn't deal with figure windows, why attack any existing ones? Just let them be!
force_positive_finite() is fine. Of course there are other ways to implement it. Instead of break, for example, you could return. This makes it a bit easier (IMO) to read the function, because I don't need to look for additional code below the loop. I know this is where the function ends. But simpler would be:
function output = force_positive_finite(var_name)
% Ask/force the user to enter a positive number
output = -1;
while ~input_error_response(output)
output = input("Enter in "+var_name+" (m): ");
end
end
while tests a condition. Instead of invalidating that test by testing true, and adding an additional if statement, use that test to do what your if statement does.
Similarly, input_error_response() can be simplified a bit. The construct if <test>, v = true, else, v = false is a typical anti-pattern, better written as v = <test>. You do a bit more inside your if statement, still you can avoid assigning true or false to your response variable:
function response = input_error_response(in)
% Warns user if input is invalid
response = ~isempty(in) && in >= 0 && isfinite(in);
if ~response
disp("Enter a value: {0 <= VALUE < inf}");
end
end
Note that your comment "Value must be a non-empty positive number" comes next to a test for the number being non-negative, which is not the same as positive. Positive: in > 0. Non-negative: in >= 0. [Your original test, in < 0 is negated as in >= 0, as I put into my version of the function.]
I changed the in == inf test to ~isfinite(in). This catches also the case where in == -inf (which you already tested for with the non-negativity test), and the case where in is NaN.
Finally: What if the user inputs "foo"? What if the user inputs [4, 6]? You should probably also test that the input is a double array (isnumeric(in) or isfloat(in) or isa(in,'double')), and scalar (isscalar(in)). | {
"domain": "codereview.stackexchange",
"id": 44561,
"tags": "matlab"
} |
How does the smell of a compound come about, and is it possible to define a smell? | Question: Colour - and eyesight in general - arises because objects reflect/transmit certain wavelengths of light, which are detected by our eyes.
On the other hand, what gives rise to smell? Is there a branch of chemistry associated with this? Is it possible to define the smell of any substance, using some parameters analogous to wavelength in the case of sight?
We know that $\ce{H2S}$ has a foul odor, but distilled $\ce{H2O}$ does not have any strong smell. Does $\ce{H2Se}$ have a similar foul smell?
Answer: As a sensation, olfaction does not seem to possess the same status as, say, vision. Most biologists, indeed most people not directly involved with fragrances or flavours seem to think that odour sensation is “subjective” and not necessarily shared by others.
What makes an odourant?
The general requirements for an odourant are that it should be volatile,
hydrophobic and have a molecular weight less than approximately 300 daltons.
The first two requirements make physical sense, for the molecule has
to reach the nose and may need to cross membranes. The size
requirement appears to be a biological constraint. A further
indication that the size limit has something to do with the
chemoreception mechanism comes from the fact that specific anosmias
become more frequent as molecular size increases.
To be sure, vapor pressure (volatility) falls rapidly with molecular size, but
that cannot be the reason why larger molecules have no smell, since some of the
strongest odourants (e.g. some steroids) are large molecules. Additionally, the cut-off is
very sharp indeed e.g substitution of the slightly larger silicon atom for a
carbon in a benzenoid musk causes it to become odourless.
Comparison of molecular size between a benzenoid musk (left) derived from
acetophenone and its sila counterpart (right) in which the central carbon atom in the t-butyl
groups has been replaced with Si. The carbon musk is a strong odourant, the sila musk odourless.
Attempts have been made to accommodate discrepant structure-odour relations by a process known as conformational analysis. This involves exploring the space of conformations adopted by the odourant molecule when deformed away from its energy minimum.
Odour descriptors and odour profiles
Odour descriptors are the words that come to mind when smelling a substance.
The more generally understood the words are, the more useful they are as descriptors. In practice, it is easy for any observer, after a little training, to use the standard descriptors of fragrance chemistry. Example of descriptors include musky, camphoraceous etc
Smelling chemical groups
A fact that has, in our opinion, received too little attention from olfaction researchers is the ability of humans to detect the presence of functional groups with great reliability.
The case of thiols ($\ce{-SH}$) is familiar, but nitro groups ($\ce{NO2}$) and aldehydes ($\ce{C=O(H)}$) can be reliably identified once the odour character the functional group confers is known. When nitriles are used as a chemically stable replacement for aldehydes, they impart a metallic character to any smell: cumin nitrile smells like metallic cumin (cuminaldehyde), citronellyl nitrile smells like metallic lemongrass (citronellal), and nonadienylnitrile smells like metallic cucumber (nonadienal). Oximes give a green-camphoraceous character, isonitriles a flat metallic character of great power and unpleasantness, nitro groups a sweet-ethereal character, etc.
Here are some odour categories and their representative molecules, chosen to illustrate
structural diversity:
Musk
Musk odour descriptors might be “smooth clean, sweet and powdery”. The
molecules that possess this odour character are exceptionally diverse
in structure. Macrocyclic musks contain a 15-18 carbon cycle closed
either by a carbonyl or by a lactone and smell similar but fresher and
more natural, often with fruity overtones (cyclopentadecanolide,
ambrettolide). Nitro musks, discovered originally as a byproduct of
explosives chemistry, smell sweeter and are reminiscent of
old-fashioned barbershop smells.
Representatives from five chemical classes which yield musk odors. 1: androst-16-en-3a-ol, a steroid musk. 2: ambrettolide, a macrocyclic musk. 3: Musk Bauer, a nitro musk. 4: Tonalid, a
tetralin musk. 5: Traseolide, an indane musk.
Ambergris
Originally derived from concretions spat out by whales and aged in the
sun, ambergris odorants smell nothing like natural ambergris tincture,
which has a weak animalic marine smell. The smell of ambergris
odorants was once aptly described to us by a chemist-perfumer as
“glorified isopropanol”. Ambergris odourants provide an interesting
combination of very closely related smells with widely different
structures: amberketal, timberol, karanal and cedramber
Two ambergris odorants, timberol (left) and cedramber (right)
Bitter almonds
This easily-recognized category is interesting because it includes a small molecule (HCN) which, however, is perceived by a large fraction of observers to smell metallic rather than almond-like. Benzaldehyde, nitrobenzene, trans-2-hexenal (but see above) are good examples.
The complexity of structure-odour relations, and the fact that the three dimensional structure of the receptor site is unknown, make it very difficult to apply conventional quantitative structure activity relationships.
Plausible theories of odour
Many theories of Structure-odour relations (SORs) have been proposed
in the past (reviewed in Moncrieff, 1951) but advances in biological
understanding, not least the discovery of odourant receptors, have
gradually ruled them out. There appear to be two possible types of
SOR theory left standing:
Shape-based theories: Odotopes
Most enzyme-substrate and receptor-ligand binding relies on molecular
recognition between protein and ligand. Recognition depends on
interactions that can be either attractive or repulsive (Davies and
Timms 1998). All attractive chemical interactions are ultimately
electrostatic in nature whether they occur between fixed charges,
dipoles, induced dipoles or atoms able to form weak electron bonds
(e.g. hydrogen bonds).
Repulsive interactions can be electrostatic or
quantum-mechanical (electron shell exchange repulsion). Almost every
change in molecular structure (with some exceptions which will
described below) alters the set of surface features capable of forming
such attractive or repulsive interactions, and thus affects what we
loosely call molecular shape.
Recently, both in vivo and in vitro studies have shown that receptors generally respond to more than one odourant, suggesting that they detect the presence not of the whole molecule but of a partial structural feature thereof, hence odotopes.
According to odotope theory the smell of a molecule is then due to the pattern,
i.e. the relative excitation of a number N of receptors to which it binds.
Ethyl citronellyl oxalate, a molecule possessing a macrocyclic musk odour but linear in shape. Right: a macrocyclic musk, cyclopentadecanolide. Shape-based theories assume that the linear musk assumes a conformation close to that of the macrocycle when binding to the receptor, hence the similarity in odour.
Vibration theories
The idea that the nose operates as a vibrational spectroscope was
first proposed by Dyson (1938) and later taken up and refined by
Wright (1982). What makes it attractive in principle is that
vibrational spectra share three properties with human olfaction.
No two molecular spectra are exactly alike, particularly in the aptly named
“fingerprint region”.
Many functional groups are easily identified by their specific
vibrational frequencies.
A system utilizing a physical property as basic as vibration will be ready for never-before-smelt molecules, i.e. it does not depend on a repertory of existing or expected structures. In that sense, it does not rely on molecular recognition.
Remarkably, even bonds between atoms can be detected: the acetylenic C-C triple bond of –ynes imparts an isothiocyanate-like, mustard-like smell to molecules which is clearly recognizable, for example in acetylene and in methyl octynoate.
Functional groups as odotopes
An odotope theory can explain these regularities only by assuming that the
functional group is an odotope. In the older structure-odour literature, this used to be
described as electronic factors (as opposed to steric). The idea was that, given that many
functional groups were similar in size, the recognition mechanism must somehow be
sensitive to the fine structure of the electron distribution (orbital energies, charge
density, etc) of the functional group.
However, this proposition has some shortfalls:
Consider for instance the SH group in, say, methanethiol. Alcohols never smell of
sulfur, whereas thiols always do. What could make the SH infallibly distinctive as an
odotope, as compared to the OH group? Partial charge, bond length, bond angle and
atom size are somewhat different between $\ce{R–SH}$ and $\ce{R–OH}$, but it is hard to see how
these can be detected with absolute reliability by, say, an amino acid side chain in the
presence of thermal motion.
Replacing a C=C bond with a sulfur atom does not change odour character, suggesting that “electronic” properties of sulfur are not sufficient for molecular recognition.
Functional groups and vibrational theory
By contrast, the distinctive smell of functional groups is a natural feature of a
vibrational theory. Above 1800 wavenumbers, IR absorption lines are diagnostic of the
stretch frequencies of diatomic functional groups.
The clearest example so far is that of boranes. The terminal B-H bond
in boranes has a stretch frequency whose range overlaps with that of
thiols. Turin (1996) therefore predicted that boranes should smell
sulfuraceous, despite the complete absence of similarity, both
structurally and chemically, between boron and sulfur.
A comparison between borane and thiol smells is best made using decaborane. Decaborane smells strongly of boiled onion, a typical SH smell. Other, less stable boranes share this sulfuraceous smell character.
The dependence of the sulfuraceous character on molecular vibrations
and atomic partial charges, as predicted by a vibrational theory.
Decaborane (left) smells sulfuraceous, and its terminal B-H bonds have
a stretch frequency of ~2500 wavenumbers. In triethylamine-borane
(middle), the B-H stretch is shifted to 2300 wavenumbers and the
sulfuraceous smell is no longer present. In p-carborane (right) the
near-neutral partial charges make the B-H bonds odourless.
In summary, it could be said that more work is still needed in the study of structure-odour chemistry before there is conclusive evidence for the best theory. Currently, the vibrational theory is evidently successful at explaining the fact that we smell functional groups even when sterically hindered, and at accounting for differences in smell between isotopes, while the odotope theory explains little.
References
Structure-odour relations: a modern perspective: Luca Turin et al. [Available online: https://pubs.acs.org/doi/abs/10.1021/cr950068a] | {
"domain": "chemistry.stackexchange",
"id": 8823,
"tags": "biochemistry, smell"
} |
What does lowercase r-s notation mean? | Question: I came across a naming convention which I haven't seen before. I let ChemDraw name the following compound for me and got a name containing lowercase "r" and "s" configurations.
Can someone tell me what this means and something about the actual convention behind it? I think it must have something to do with rings and maybe with some kind of "pseudo-chirality" due to a certain conformation of the ethyl group which is attached to the ring, but I'm only guessing.
Answer: This notation is used to designate so-called "pseudoasymmetric carbons." This occurs in instances in which precisely two of the groups bonded to a tetrahedral carbon are structurally indistinguishable in terms of connectivity (i.e., the same atoms, bonded with the same multiplicity, and in the same order moving outward from the carbon whose absolute configuration is being assigned), but contain chiral centers which have opposite configurations. See the IUPAC Gold Book entry on pseudo-asymmetric carbons.
As for the molecule in your example, it is named incorrectly. There is an inherent symmetry present in 1,4-disubstituted cyclohexanes that prevents them from being chiral (except in cases where the substituents on the ring themselves contain chiral centers, which methyl and ethyl groups do not), and without any chiral centers in the molecule, there obviously can be no pseudoasymmetric carbons either. I would add, however, that cis/trans isomerism is, of course, possible for 1,4-disubstituted cyclohexanes, but that's entirely different from the property of chirality. The only stereochemical designator that could reasonably be added to the molecule from your example is "cis-". | {
"domain": "chemistry.stackexchange",
"id": 6274,
"tags": "nomenclature, stereochemistry"
} |
Do we know if there are asteroids leading or following Earth in Earth's orbit around the Sun? | Question: A search of the Internet is so clouded with discussions about the asteroid belt or asteroids orbiting the planet Earth that I couldn't see an answer to my question.
Do we know if there are asteroids orbiting the Sun in Earth's orbit (NOT orbiting the Earth).
To be clear: these would be asteroids located somewhere around the Sun at the same distance as Earth, either following it or leading it in the Earth's own orbit around the Sun.
I hope that was clear.
Answer: There are several known Earth co-orbital asteroids. The first to be discovered was (3753) Cruithne, which is often mistakenly described as "Earth's second moon". Cruithne has a bean-shaped orbit when viewed in a reference frame rotating with the Earth.
The only known Earth Trojan is 2010 TK7, which orbits the L4 point located 60° ahead of the Earth. | {
"domain": "astronomy.stackexchange",
"id": 3451,
"tags": "the-sun, orbit, earth, asteroids"
} |
Description of “Logistics Domain” in AI | Question: While reading some papers in AI (for a project I have to do), I see expressions "blocks world domain" and "logistics domain". I know what blocks world domain is, but I don't know the definition of logistics domain.
Any help would be appreciated.
Answer: Logistics and Blocksworld are domains that are often used as examples in the field of automated planning and scheduling.
Logistics was used in the first international planning competition IPC98. The description on this site says:
There are several cities, each containing several locations, some of which are airports. There are also trucks, which can drive within a single city, and airplanes, which can fly between airports. The goal is to get some packages from various locations to various new locations. This domain was created by Bart Selman and Henry Kautz, based on an earlier domain by Manuela Veloso.
It is described in more detail by McDermott:
D. McDermott, The 1998 AI Planning Systems competition, AI Magazine 21 (2), pp. 35-55. 2000.
(paper)
A complexity analysis for this domain is in the following paper by Helmert:
M. Helmert.
Complexity results for standard benchmark domains in planning.
Artificial Intelligence 143 (2), pp. 219-262. 2003. (paper) | {
"domain": "cs.stackexchange",
"id": 2447,
"tags": "terminology, artificial-intelligence"
} |
How to modify single phase fluid/solid coupled PDEs to account for a phase change? | Question: I have two PDEs that model both fluid and solid temperature change due to fluid flow through a packed bed. Schematic and equations here (where the f and s subscripts are for the fluid and solid respectively):
It seems these PDEs are limited to a single fluid phase (either gas or liquid) and do not account for any potential phase change (there is no phase change term). As far as I understand, this means that these equations can be used to model gas or liquid flow through a packed bed, but cannot be used where a phase change occurs.
My question is how could these equations be modified or used to account for a potential phase change occurring? Ideally rather than modelling a gas flow through a packed bed, I would like to model a gas flow through a cold packed bed causing liquefaction of the gas.
In addition, if there are any publications that model this I would love to see them. So far I've only found single phase models.
Link to source: https://www.sciencedirect.com/science/article/pii/S0306261921008138
Answer: I'm going to present the limiting solution for a much simpler case, and then you can see if you can modify it for the case you are interested in, in which there is a phase change.
At time zero, I have a bed containing a liquid, both of which are at $T_0$. After time zero, I start flowing liquid at temperature $T_1$ through the bed at pore velocity u. For this case, my basic starting equations are $$\epsilon \rho_fC_{pf}\left[\frac{\partial T_f}{\partial t}+u\frac{\partial T_f}{\partial x}\right]=ha_s(T_s-T_f)$$and$$(1-\epsilon) \rho_sC_{ps}\frac{\partial T_s}{\partial t}=ha_s(T_f-T_s)$$If we add these two equations together, we obtain:$$\epsilon \rho_fC_{pf}\frac{\partial T_f}{\partial t}+(1-\epsilon) \rho_sC_{ps}\frac{\partial T_s}{\partial t}+\epsilon \rho_fC_{pf}u\frac{\partial T_f}{\partial x}=0$$In our limiting situation, the liquid and solid bed temperatures will approach one another both behind- and ahead of the wave front. Therefore, in this limiting situation, we can write:$$\epsilon \rho_fC_{pf}\frac{\partial T}{\partial t}+(1-\epsilon) \rho_sC_{ps}\frac{\partial T}{\partial t}+\epsilon \rho_fC_{pf}u\frac{\partial T}{\partial x}=0$$or$$\frac{\partial T}{\partial t}+\frac{\epsilon \rho_fC_{pf}}{\epsilon \rho_fC_{pf}+(1-\epsilon) \rho_sC_{ps}}u\frac{\partial T}{\partial x}=0$$The solution to this is a sharp wave, for which $T=T_0$ for $x>Vt$ and $T=T_1$ for $x<Vt$, where the wave velocity V is given by $$V=\frac{\epsilon \rho_fC_{pf}}{\epsilon \rho_fC_{pf}+(1-\epsilon) \rho_sC_{ps}}u$$ Of course, as expected, according to this, because of the thermal inertia of the bed, the wave front is traveling much more slowly than the fluid. | {
"domain": "physics.stackexchange",
"id": 83426,
"tags": "thermodynamics, fluid-dynamics, diffusion, navier-stokes, convection"
} |
Debunking scientific paper "Has global warming already arrived?" | Question: I read the IPCC reports, I know global warming is happening and is man-made.
But as any skeptic should do periodically, I went to a few climate deniers websites and stumbled upon this peer-reviewed scientific paper, published in "Journal of Atmospheric and Solar-Terrestrial Physics":
https://www.sciencedirect.com/science/article/abs/pii/S1364682618305030
The paper is coming from Costas Varotsos, a Greek physicist known from his contribution to the global climate-dynamics research and remote sensing.
The full paper is here:
https://sci-hub.tw/https://doi.org/10.1016/j.jastp.2018.10.020
It's too technical for me to point what the mistakes could have been made in that paper.
The only counter-argument I found on the web is a comment about it on the skepticalscience website:
https://skepticalscience.com/news.php?p=3&t=103&&n=292#131401
Could anyone translate that comment in less technical terms for me ?
Answer: The comment is saying that the authors are making a fallacy as follows:
Temperature increase will be associated with an increase in tropopause height.
Using instrument/dataset X, we fail to measure a tropopause height increase.
Therefore, there is no temperature increase.
The fallacy here is that instrument/dataset X is not suitable to measure tropopause height, therefore one should not expect it can measure a tropopause height increase.
The comment is referring to the University of Alabama in Huntsville temperature dataset. This is derived from satellite microwave temperature sounders¹. From the Wikipedia article, this dataset provides:
UAH provide data on three broad levels of the atmosphere.
The Lower troposphere - TLT (originally called T2LT).
The mid troposphere - TMT
The lower stratosphere - TLS
The tropopause is the boundary between troposphere and stratosphere. It can be defined in various ways, but to measure tropopause height you generally need to measure the temperature in the vicinity to the tropopause. As this is apparently not provided by the dataset, it is impossible to use the dataset to measure tropopause height.
¹From personal experience, I can tell you that it is difficult to determine long-term temperature triends from satellite data. Polar-orbiting satellites in sun-synchronous orbits experience a drift over time, which means that their local time ascending node changes; in lay terms, this means that when they previously always passed at 14:00 over a certain spot, 2 years later this may be at 16:00. Even if that time wouldn't change, instruments degrade over time and get replaced by new instruments. You need to be very careful to make sure you're detecting a trend in the temperature and not in the instrument itself! I've worked on this in the FIDUCEO project, I was doing infrared temperature sounders and colleagues from Hamburg were doing microwave humidity sounders, but microwave temperature sounders were out of scope because none of us had the depth of expertise needed. Even with the expertise, we underestimated the difficulty of the problem and ran out of time before we could properly declare we had delivered what had been promised to the funding agency, although we learned a lot. | {
"domain": "earthscience.stackexchange",
"id": 2014,
"tags": "climate-change"
} |
LALR(1) grammar for simple math parser | Question: I am trying to write a simple parser for a small calculator project, that should be able to parse e.g. the following inputs:
5 + 3
5 + f(4)
5 + f(x)
x = 5
f(x) = 3*x
so basically, I want to be able to parse expressions (that may contain variables and function calls), variable assignments, and function definitions using the = operator.
The problem is that in function definitions, only identifiers must be allowed in the definition, e.g.
f(5+1) = 3*x
should not be legal in the grammar. Thus, I need to define two distinct cases for functions, where the first (bFunc, containing only ids) could occur on both sides of =, and the second (func, can also contain expressions) must not occur on the left side.
This is the grammar so far:
stmt -> expr.
stmt -> fdef.
stmt -> vdef.
fdef -> bFunc = expr.
vdef -> id = expr.
bFunc -> id ( idList ).
func -> id ( list ).
func -> id ( ).
list -> expr.
list -> list , expr.
idList -> id.
idList -> idList , id.
expr -> term.
expr -> expr + term.
expr -> expr - term.
term -> atom.
term -> term / atom.
term -> term * atom.
atom -> id.
atom -> num.
atom -> func.
atom -> ( expr ).
However, I was not able to figure out how to specify a valid LALR(1) grammar because of this problem.
The tool http://mdaines.github.io/grammophone/#/ reports reduce-reduce conflicts.
The problem is that the parser would not know whether id should be reduced to idList or atom.
Surely it would be possible to simply use func in both cases and catch invalid left sides later in the program.
But my question is now, is it even possible to write a LALR(1) grammar for this problem? How does one decide whether it is possible or not?
And may my problem be the reason why programming languages use keywords like def or function for function definitions?
Answer: As modified by your edit, your grammar is unambiguous. Unfortunately, it is not deterministic; no limited lookahead is sufficient to decide whether when the parser reaches a comma: $$\bf{ID}\;\bf{(}\;\bf{ID}\;\bullet\;\bf{,}\;\cdots$$ it should predict $\it{bFunc}$ by reducing $\bf{ID}$ to $\it{idList}$ or predict $\it{func}$ by reducing it to $\it{atom}$ (and thus eventually to $\it{expr}$ and $\it{list}$).
One reasonably straightforward solution is to differentiate between $\it{expr}$ and $\bf{ID}$ by creating the non-terminal $\it{expr_{!ID}}$ which matches $\tt{expr}\setminus\{\bf{ID}\}$ and using it in $\it{list}$. (Note that $\it{list}$ is not a simple list of $\it{expr_{!ID}}$; rather, it is a list in which at least one element is a $\it{expr_{!ID}}$. See below for sample grammar).
That makes $\it{list}$ and $\it{idList}$ disjoint and the grammar is deterministic. However, the distinction is not strictly related to the semantics of the language. It creates a curious hybrid parse tree node in which some of the nodes in the argument list's parse subtree are $\it{idList}$ and the rest are $\it{list}$. (Look at the parse tree for f(a,b,c,d+3), for example). Indeed, a complete function call might be parsed as a $\it{bFunc}$, which will need to be converted to a $\it{func}$ when it turns out that it is not followed by an $\bf{=}$.
A practical parser will need to repair the parse tree for $\it{func}$ by converting the $\it{idList}$ nodes to $\it{list}$. That could be done in a reduction action, or it could be done in a post-parse tree walk, but it will almost certainly need to be done, since the semantic distinction is real: the use of an $\bf{ID}$ in a parameter list is a binding, while the use of the same $\bf{ID}$ in a function call is a use.
So at the high level, what we end up with is:
$$\begin{align}
\tt{bFunc}&\to\;ID\;\tt{(}\;\tt{idList}\;\tt{)}\\
\tt{func}&\to\;ID\;\tt{(}\;\tt{list}\;\tt{)}\\
\tt{func}&\to\;ID\;\tt{(}\;\tt{)}\\
\tt{func}&\to\;ID\;\tt{(}\;\tt{idList}\;\tt{)}\\
\\
\tt{list}&\to\;\tt{expr_{!ID}}\\
\tt{list}&\to\;\tt{list}\;,\;\tt{expr}\\
\tt{list}&\to\;\tt{idList}\;,\;\tt{expr_{!ID}}\\
\\
\tt{idList}&\to\;\tt{ID}\\
\tt{idList}&\to\;\tt{idList}\;,\;\tt{ID}\\
\end{align}$$
We also need to define the non-terminal $\it{expr_{!ID}}$ which doesn't match $\bf{ID}$ (and its peers at the other chained precedence levels). The straightforward solution is to remove the production $\text{atom}\;\to\;ID$, thus removing the token $\bf{ID}$ from the direct unit-production chain. Then we add it back in the non-terminals in which it is permitted (i.e. the ones which are not restricted to $\it{!ID}$). So the expression grammar now looks like this:
$$\begin{align}
\tt{expr}&\to\;\tt{expr_{!ID}}\\
\tt{expr}&\to\;\tt{ID}\\
\tt{term}&\to\;\tt{term_{!ID}}\\
\tt{term}&\to\;\tt{ID}\\
\tt{atom}&\to\;\tt{atom_{!ID}}\\
\tt{atom}&\to\;\tt{ID}\\
\\
\tt{expr_{!ID}}&\to\;\tt{expr}\;\bf{+}\;\tt{term}\\
\tt{expr_{!ID}}&\to\;\tt{expr}\;\bf{-}\;\tt{term}\\
\tt{expr_{!ID}}&\to\;\tt{term_{!ID}}\\
\\
\tt{term_{!ID}}&\to\;\tt{term}\;\bf{/}\;\tt{atom}\\
\tt{term_{!ID}}&\to\;\tt{term}\;\bf{*}\;\tt{atom}\\
\tt{term_{!ID}}&\to\;\tt{atom_{!ID}}\\
\\
\tt{atom_{!ID}}&\to\;\tt{NUM}\\
\tt{atom_{!ID}}&\to\;\tt{func}\\
\tt{atom_{!ID}}&\to\;\bf{(}\;\tt{expr}\;\bf{)}\\
\end{align}$$
which is only a little longer than the equivalent lines in your original.
It's interesting to note that the syntax formalism used by ECMAScript (and some other semi-related technologies) uses a macro-enhanced form of BNF in which non-terminals can be given boolean parameters, like the $\tt{!ID}$ subscript I used above. With that feature, the grammar could be written even more compactly (although the macro expansion would be a bit bigger).
As another interesting note, your grammar will work perfectly without the definition of $\it{expr_{!ID}}$ if processed with Bison or almost any other Yacc-derivative parser generator. (You do need to add the extra productions in the list definitions to convert $\it{idList}$ to $\it{list}$, as indicated above.) The parser generator will signal four reduce-reduce conflicts, since there is an ambiguity. But as long as the list productions come before the expression productions, Yacc/Bison's automatic conflict resolution mechanism will produce the correct resolution. That's probably not the optimal solution, but it is certainly the shortest. | {
"domain": "cs.stackexchange",
"id": 19661,
"tags": "regular-languages, formal-grammars, parsers"
} |
What is rospy.sleep for? | Question:
Really confused here. I want to delay certain publishing of messages to a topic for about 1-2 seconds. Can rospy.sleep help? My code already works fine, but I need the delay since I cannot put any delays in the Arduino/rosserial because it gives a lost sync error. Can someone please help? I've been doing this for 2 weeks and no solution so far. Here is my code.
#!/usr/bin/env python
import rospy
import time
from std_msgs.msg import Int32
from sensor_msgs.msg import Range
flags = Int32()
class warning_flag():
def __init__(self):
self.aIR_FR = None
self.aIR_FL = None
self.dIR_front = None
self.dIR_BR = None
self.dIR_BL = None
self.sonic = None
def aIR_FR_callback(self, aIR_FR_msg):
print "Analog IR Right Range: %s" % aIR_FR_msg.range
self.aIR_FR = aIR_FR_msg.range
self.warn()
def aIR_FL_callback(self, aIR_FL_msg):
print "Analog IR Left Range: %s" % aIR_FL_msg.range
self.aIR_FL = aIR_FL_msg.range
self.warn()
def dIR_BL_callback(self, dIR_BL_msg):
print "Digital IR Left Range: %s" % dIR_BL_msg.range
self.dIR_BL = dIR_BL_msg.range
self.warn()
def dIR_BR_callback(self, dIR_BR_msg):
print "Digital IR Right Range: %s" % dIR_BR_msg.range
self.dIR_BR = dIR_BR_msg.range
self.warn()
def dIR_front_callback(self, dIR_front_msg):
print "Digital IR Range: %s" % dIR_front_msg.range
self.dIR_front = dIR_front_msg.range
self.warn()
def sonic_callback(self, sonic_msg):
print "Ultrasonic Range: %s" % sonic_msg.range
self.sonic = sonic_msg.range
self.warn()
def warn(self):
sonic_zone = 0.3
aIR_FR_zone = 0.2
aIR_FL_zone = 0.2
dIR_front_zone = 0
dIR_BR_zone = 1
dIR_BL_zone = 1
if (self.aIR_FR > aIR_FR_zone and self.aIR_FL <= aIR_FL_zone):
print "Turn right"
flags.data = 2
elif (self.aIR_FR <= aIR_FR_zone and self.aIR_FL > aIR_FL_zone):
print "Turn Left"
flags.data = 3
elif (self.aIR_FR <= aIR_FR_zone and self.aIR_FL <= aIR_FL_zone):
if ((self.sonic > sonic_zone and self.dIR_front != dIR_front_zone) or
(self.sonic <= sonic_zone and self.dIR_front != dIR_front_zone)):
if (self.dIR_BR != dIR_BR_zone and self.dIR_BL != dIR_BL_zone):
print "Stop"
flags.data = 5
else:
if (self.aIR_FR > self.aIR_FL):
print "Turn Right"
flags.data = 2
else:
print "Turn Left"
flags.data = 3
elif ((self.sonic > sonic_zone and self.dIR_front == dIR_front_zone) or
(self.sonic <= sonic_zone and self.dIR_front == dIR_front_zone)):
if (self.dIR_BR == dIR_BR_zone and self.dIR_BL == dIR_BL_zone):
print "Back off"
flags.data = 4
#put delay here
if (self.aIR_FR > self.aIR_FL):
print "Turn Right"
flags.data = 2
else:
print "Turn Left"
flags.data = 3
else:
if (self.aIR_FR > self.aIR_FL):
print "Turn Right"
flags.data = 2
else:
print "Turn Left"
flags.data = 3
elif (self.aIR_FR > aIR_FR_zone and self.aIR_FL > aIR_FL_zone):
if (self.sonic > sonic_zone and self.dIR_front == dIR_front_zone):
print "Advance"
flags.data = 1
else:
if (self.dIR_BR == dIR_BR_zone and self.dIR_BL == dIR_BL_zone):
print "Back off"
flags.data = 4
#put delay here
if (self.aIR_FR > self.aIR_FL):
print "Turn Right"
flags.data = 2
else:
print "Turn Left"
flags.data = 3
else:
if (self.aIR_FR > self.aIR_FL):
print "Turn Right"
flags.data = 2
else:
print "Turn Left"
flags.data = 3
def main():
rospy.init_node('sensor_pub_sub_node')
warn_pub=rospy.Publisher('Warnings', Int32, queue_size=1000)
warning = warning_flag()
aIR_FR_sub=rospy.Subscriber('aIR_FR', Range, warning.aIR_FR_callback)
aIR_FL_sub=rospy.Subscriber('aIR_FL', Range, warning.aIR_FL_callback)
dIR_BR_sub=rospy.Subscriber('dIR_BR', Range, warning.dIR_BR_callback)
dIR_BL_sub=rospy.Subscriber('dIR_BL', Range, warning.dIR_BL_callback)
dIR_front_sub=rospy.Subscriber('dIR_front', Range, warning.dIR_front_callback)
sonic_sub=rospy.Subscriber('ultrasound', Range, warning.sonic_callback)
rate = rospy.Rate(10)
while not rospy.is_shutdown():
warn_pub.publish(flags.data)
rate.sleep()
if __name__=='__main__':
try:
main()
except rospy.ROSInterruptException:
pass
Originally posted by Nelle on ROS Answers with karma: 21 on 2018-09-04
Post score: 0
Answer:
rospy.sleep is another way, like time.sleep, to pause a thread so that it does not run at its full possible rate and take up unnecessary resources. A bare while-true loop would otherwise consume a lot of CPU.
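The pattern behind rospy.Rate can be sketched in plain Python without ROS (a hypothetical Rate class for illustration, not rospy's actual implementation), showing why sleeping for the remainder of each cycle keeps a loop from spinning at full speed:

```python
import time

class Rate:
    """Minimal fixed-rate loop helper, loosely modeled on rospy.Rate."""
    def __init__(self, hz):
        self.period = 1.0 / hz
        self.last = time.monotonic()

    def sleep(self):
        # Sleep only for whatever is left of the current cycle, so the
        # loop body runs roughly `hz` times per second instead of spinning.
        remaining = self.period - (time.monotonic() - self.last)
        if remaining > 0:
            time.sleep(remaining)
        self.last = time.monotonic()

rate = Rate(50)                      # 50 Hz, like rospy.Rate(50)
start = time.monotonic()
for _ in range(5):
    pass                             # publish / sensor work would go here
    rate.sleep()
print(time.monotonic() - start)      # roughly 5 * 0.02 = 0.1 s
```

rospy.Rate additionally accounts for ROS simulated time, which this sketch ignores.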
Originally posted by stevemacenski with karma: 8272 on 2018-09-04
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Nelle on 2018-09-04:
Can I use rospy.sleep in my code then? Just to pause for a few seconds? I want to delay the publishing of a message for a topic.
Comment by stevemacenski on 2018-09-04:
Sure, its an option if you just want to pause
Comment by Nelle on 2018-09-04:
I tried to put it in my code and it doesn't seem to work. It keeps coming back to back off even if my sensors for other movements were triggered.
Comment by fvd on 2018-09-07:
I don't understand your problem exactly, you might have to be more precise (and possibly do it in another question) | {
"domain": "robotics.stackexchange",
"id": 31710,
"tags": "rosserial, ros-kinetic"
} |
Determining if two strings are anagrams of each other | Question: Is this code a good solution for the question, or is there a better way to do it?
package ArraysAndStrings;
import java.util.Arrays;
public class anagram{
private boolean isAnagram = false;
public boolean Anagrams(String str1, String str2){
if(str1.length() != str2.length()){
return isAnagram;
}
boolean [] char_set = new boolean[256];
boolean [] char_set1 = new boolean [256];
for(int i =0;i<str1.length();i++){
int val1 = str1.charAt(i);
int val2 = str2.charAt(i);
char_set[val1] = true;
char_set1[val2] = true;
}
if(Arrays.equals(char_set, char_set1)){
isAnagram = true;
}
return isAnagram;
}
public static void main(String [] args){
anagram ang = new anagram();
System.out.println(ang.Anagrams("mary","army"));
}
}
Answer: The code is technically broken: it only tells you that the strings are composed of the same set of letters. That is not enough for the strings to be anagrams: each letter must appear the same number of times in both strings.
By making your char_set array integer instead of boolean, you can get the correct result, still in linear time:
int[] counters = new int[256];
for (int i = 0; i < str1.length(); i++) {
    counters[str1.charAt(i)]++;
}
for (int i = 0; i < str2.length(); i++) {
    counters[str2.charAt(i)]--;
}
for (int count : counters) {
    if (count != 0) {
        return false;
    }
}
return true; | {
"domain": "codereview.stackexchange",
"id": 18014,
"tags": "java, strings"
} |
What is the limit of lossless data compression? (if there exists such a limit) | Question: Lately I've been dealing with compression-related algorithms, and I was wondering what the best compression ratio is that can be achieved by lossless data compression.
So far, the only source I could find on this topic was Wikipedia:
Lossless compression of digitized data such as video, digitized film, and audio preserves all the information, but can rarely do much better than 1:2 compression because of the intrinsic entropy of the data.
Unfortunately, Wikipedia's article doesn't contain a reference or citation to support this claim. I'm not a data-compression expert, so I'd appreciate any information you can provide on this subject, or if you could point me to a more reliable source than Wikipedia.
Answer: I am not sure if anyone has yet explained why the magical number seems to be exactly 1:2 and not, for example, 1:1.1 or 1:20.
One reason is that in many typical cases almost half of the digitised data is noise, and noise (by definition) cannot be compressed.
I did a very simple experiment:
I took a grey card. To a human eye, it looks like a plain, neutral piece of grey cardboard. In particular, there is no information.
And then I took a normal scanner – exactly the kind of device that people might use to digitise their photos.
I scanned the grey card. (Actually, I scanned the grey card together with a postcard. The postcard was there for sanity-checking so that I could make sure the scanner software does not do anything strange, such as automatically add contrast when it sees the featureless grey card.)
I cropped a 1000x1000 pixel part of the grey card, and converted it to greyscale (8 bits per pixel).
What we have now should be a fairly good example of what happens when you study a featureless part of a scanned black & white photo, for example, clear sky. In principle, there should be exactly nothing to see.
However, with a larger magnification, it actually looks like this:
There is no clearly visible pattern, but it does not have a uniform grey colour. Part of it is most likely caused by the imperfections of the grey card, but I would assume that most of it is simply noise produced by the scanner (thermal noise in the sensor cell, amplifier, A/D converter, etc.). Looks pretty much like Gaussian noise; here is the histogram (in logarithmic scale):
Now if we assume that each pixel has its shade picked i.i.d. from this distribution, how much entropy do we have? My Python script told me that we have as much as 3.3 bits of entropy per pixel. And that's a lot of noise.
If this really was the case, it would imply that no matter which compression algorithm we use, the 1000x1000 pixel bitmap would be compressed, in the best case, into a 412500-byte file. And what happens in practice: I got a 432018-byte PNG file, pretty close.
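The per-pixel entropy estimate mentioned above can be sketched as follows (not the author's actual script; the second histogram below is made up purely for illustration):

```python
import math
from collections import Counter

def entropy_bits(pixels):
    """Shannon entropy in bits per symbol, treating pixels as i.i.d. draws."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A flat histogram over all 256 grey levels hits the 8 bits/pixel ceiling:
print(entropy_bits(list(range(256))))                      # 8.0
# A narrow, noise-like spread over a few levels carries far less entropy:
print(entropy_bits([126] * 20 + [127] * 50 + [128] * 30))  # about 1.49
```

Multiplying such an estimate by the pixel count gives the best-case compressed size used in the text (3.3 bits/pixel times 10^6 pixels is 412500 bytes).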
If we over-generalise slightly, it seems that no matter which black & white photos I scan with this scanner, I will get the sum of the following:
"useful" information (if any),
noise, approx. 3 bits per pixel.
Now even if your compression algorithm squeezes the useful information into << 1 bits per pixel, you will still have as much as 3 bits per pixel of incompressible noise. And the uncompressed version is 8 bits per pixel. So the compression ratio will be in the ballpark of 1:2, no matter what you do.
Another example, with an attempt to find over-idealised conditions:
A modern DSLR camera, using the lowest sensitivity setting (least noise).
An out-of-focus shot of a grey card (even if there was some visible information in the grey card, it would be blurred away).
Conversion of the RAW file into a 8-bit greyscale image, without adding any contrast. I used typical settings in a commercial RAW converter. The converter tries to reduce noise by default. Moreover, we are saving the end result as an 8-bit file – we are, in essence, throwing away the lowest-order bits of the raw sensor readings!
And what was the end result? It looks much better than what I got from the scanner; the noise is less pronounced, and there is exactly nothing to be seen. Nevertheless, the Gaussian noise is there:
And the entropy? 2.7 bits per pixel. File size in practice? 344923 bytes for 1M pixels. In a truly best-case scenario, with some cheating, we pushed the compression ratio to 1:3.
Of course all of this has exactly nothing to do with TCS research, but I think it is good to keep in mind what really limits the compression of real-world digitised data. Advances in the design of fancier compression algorithms and raw CPU power is not going to help; if you want to save all the noise losslessly, you cannot do much better than 1:2. | {
"domain": "cstheory.stackexchange",
"id": 5682,
"tags": "it.information-theory, data-streams"
} |
What's the physics behind the jumps with seemingly non-conserved angular momentum? | Question: The #1 rule of sports biomechanics is the conservation of angular momentum. It dictates that whenever an athlete performs an acrobatic jump, the angular momentum that he has created on takeoff is to stay unchanged until he lands. He can control the speed of rotation by expanding or retracting his limbs, but he can't just randomly stop rotating in mid-air and then continue again out of nowhere.
For rotations around multiple axes (twists etc.), I understand that the conservation of momentum should hold for each individual axis.
Now take a look at this jump (starts at 0.51): https://youtu.be/sb82tVOq2dY?t=51
On takeoff, the diver initiates a flip with a twist (a spin around both vertical and horizontal axes at the same time). But then, in the middle of the jump, he somehow kills the vertical component of rotation and converts to a plain frontflip.
In another video, you can see the opposite: https://www.youtube.com/watch?v=fwDGrNKiTi8
Here, the athlete initiates a pure frontflip rotation on takeoff. However, before the last flip, he somehow initiates an additional rotation around the vertical axis, pulling that 180 at the end seemingly out of nowhere. And I've seen people pull even 360s like that out of nowhere.
So what's going on there? Is it possible for an athlete to initiate or kill angular momentum in mid-air somehow? Or is there some other effect at play?
Answer: To explain how orientation can change whilst angular momentum is conserved it is first best to look at a slightly simpler system - a cat in free fall!
Here is a series of photographs taken in $1894$ which shows a cat turning its body to ensure that it lands on its feet.
This gif file illustrates how a cat changes its shape to rotate and yet still to conserve angular momentum.
Finally here is a video of such an event with the cat suffering no harm.
So the key is changing body shape to achieve a rotation whilst conserving angular momentum.
This is shown using a selection of stills from the gymnast video.
First head on.
Arm movements starting in slide $\rm d$ initiate the twisting of the gymnast.
From the side.
Here is a dive executed in the video referenced by the OP.
The diver, when on the diving board, cannot use it to start a twisting rotation, as that rotation could not be removed towards the end of the dive, and I think that is also against competition rules.
The Physics of somersaulting and tumbling is explained in an article published in Scientific American.
By moving the arms a diver can start and stop a twist.
The somersault rotation continues from start to finish but before entering the water the diver increases the moment of inertia about a horizontal axis by stretching out thus reducing the speed of rotation.
By timing the entry to perfection and whilst still rotating the diver enter the water with the smallest horizontal profile.
Note the rotation continuing under the water.
This slow motion video of a twisting somersault shows clearly how the arms are used to initiate twisting. | {
"domain": "physics.stackexchange",
"id": 85431,
"tags": "newtonian-mechanics, rotational-dynamics, angular-momentum, conservation-laws, everyday-life"
} |
Enthalpy when isobaric and isothermal processes happen simultaneously | Question: Let's assume the system is undergoing an isobaric and an isothermal process simultaneously.
The gas of the system is assumed ideal, and its volume is changed from $V_1$ to $V_2$.
Internal energy of the system does not change, since the gas is ideal and the temperature is constant. ($\Delta U=0$)
Work done by/to system will be $w=-p_{const}(V_2-V_1)$, and the heat will be $q=-w$.
But how about enthalpy change $\Delta H$?
I know that by the definition of enthalpy, $H=U+pV$ changes to
$$\Delta H=\Delta U+\Delta(pV)$$
and since pressure is constant, the enthalpy is just
$$\Delta H=0+p_{const}\Delta V=p_{const}(V_2-V_1)$$
But another thought is, using the ideal gas law,
$$\Delta H=\Delta U+\Delta(pV)=\Delta U+nR\Delta T$$
since the process is also isothermal, $\Delta T=0$
Then, enthalpy is also
$$\Delta H=0$$
Which one is correct?
-Edit-
I should specify that this question came up when solving a textbook problem: Physical Chemistry by Atkins, 9th Edition, Chapter 2, exercise 2.3.
"A sample consisting of 1.00 mol Ar is expanded isothermally at 273.15K from 22.4$dm^3$ to 44.8$dm^3$ (a) reversibly, (b) against a constant external pressure equal to the final pressure of the gas and (c) freely. Calculate $q, w, \Delta U, \Delta H$."
I got above question when solving for part (b)
Answer: The processes described in part (b) and part (c) are irreversible, which you are not taking into account. More specifically, the expansion in part (b) is not isobaric - the pressure of the gas is not constant, it is only the external pressure which is fixed.
You know $(P,V,T)$ both before and after the expansion (via the given parameters and the ideal gas law), which means you can easily calculate the changes in the state variables $U$ and $H$. Calculating $q$ and $w$ is more subtle. Note that while the internal pressure of the gas is not constant, the external pressure is, which allows you to calculate the work that the environment does on the gas.
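A quick numeric sketch of part (b), using R = 8.314 J mol^-1 K^-1 (my own check, not the book's printed solution):

```python
R = 8.314                     # gas constant, J / (mol K)
n, T = 1.00, 273.15           # mol, K
V1, V2 = 22.4e-3, 44.8e-3     # m^3

p_ext = n * R * T / V2        # external pressure = final gas pressure (~50.7 kPa)
w = -p_ext * (V2 - V1)        # work done on the gas against constant p_ext (~ -1.14 kJ)
q = -w                        # first law, since dU = 0
dU = 0.0                      # isothermal ideal gas
dH = 0.0                      # H = U + nRT, and T is unchanged
print(p_ext, w, q)
```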
This, along with the first law of thermodynamics, should be all that you need. | {
"domain": "physics.stackexchange",
"id": 66302,
"tags": "homework-and-exercises, thermodynamics"
} |
Writing the overall cell reaction for the Calomel electrode | Question: I am studying electrochemical cells, and came across the calomel electrode. It turns out that the half-cell reactions and the overall reaction are as follows.
Anode half-cell: $\text{Hg}_{2}\text{Cl}_{2}\rightarrow \text{Hg}_{2}^{2+}+2\text{Cl}^{-}$
Cathode half-cell: $\text{Hg}_{2}^{2+}+2e^{-}\rightarrow 2\text{Hg}$
Overall cell reaction: $\text{Hg}_{2}\text{Cl}_{2}+2e^{-}\rightarrow 2\text{Hg}+2\text{Cl}^{-}$
I am not sure why, while writing the overall reaction, the electrons were not cancelled out. Besides, why is there no net exchange of electrons in the anode half-cell reaction? Clarification would be appreciated. Thanks.
Answer: The calomel electrode is not a cell. It can be part of a cell. It is a half-cell. A cell is made of two electrodes, one calomel electrode and another one. The reaction occurring in the calomel electrode is the reaction you present as "overall cell reaction". What you described as "anode half-cell" has nothing to do with electrochemistry. It is the equation describing how $\ce{Hg_2Cl_2}$ gets dissolved in water. | {
"domain": "chemistry.stackexchange",
"id": 14124,
"tags": "electrochemistry"
} |
Is there an infinite amount of conserved currents for a given finite symmetry? | Question: Let's say we have a field $\phi(x)$ that gets transformed to $\phi(x, \epsilon)$ under some finite transformation. We also define $\phi(x,0)=\phi(x)$. If we Taylor expand our transformation we get:
$$\phi(x,\epsilon) = \phi(x) + \frac{\partial \phi(x,\epsilon)}{\partial \epsilon}\Bigr|_{\epsilon = 0}\epsilon+\frac{1}{2}\frac{\partial^2 \phi(x,\epsilon)}{\partial \epsilon^2}\Bigr|_{\epsilon = 0}\epsilon^2+\cdots$$
As this is a symmetry that conserves the action we want $S(\epsilon)=S$, so we can also expand this out using a Taylor series
$$S(\epsilon)=S(0)+\frac{d S(\epsilon)}{d \epsilon}\Bigr|_{\epsilon = 0}\epsilon+\frac{1}{2}\frac{d^2 S(\epsilon)}{d\epsilon^2}\Bigr|_{\epsilon = 0}\epsilon^2+\cdots.$$ The definition of the coefficients is the different orders of variation of the function so
$$S(\epsilon)=S+\delta S\epsilon+\frac{1}{2}\delta^2 S \epsilon^2+\cdots$$
So for this to be true we can see that each coefficient of $\mathcal{O}(\epsilon)$ must be $0$. So the first-order expression gives the 'normal' Noether's theorem expression. Will the rest give new currents?
Answer:
Assuming that we are talking about a single finite 1-parameter global quasisymmetry, it is a flow, which is in 1-to-1 correspondence with a vector field, or equivalently, an infinitesimal 1-parameter global quasisymmetry.
In other words, it counts as one and the same quasisymmetry, so Noether's first theorem only yields 1 continuity equation, which in turn leads to just 1 conserved quantity.
Related:
Noether's Theorem: Lie groups vs. Lie algebras; finite vs. infinitesimal symmetries
Why are infinitesimal shifts sufficient to prove that a symmetry holds? | {
"domain": "physics.stackexchange",
"id": 89780,
"tags": "lagrangian-formalism, symmetry, conservation-laws, vector-fields, noethers-theorem"
} |
When diamonds "migrate" from deep underground to the surface, do they maintain pressure inside when there is no more pressure outside? If so, how? | Question: From Science News' A mineral found in a diamond’s flaws contains the source of some of Earth’s heat:
A tiny bit of rock trapped inside a diamond is now opening a brand-new window into what the planet’s lower mantle looks like. Inside the diamond is a newly identified silicate mineral dubbed davemaoite that can only have formed in Earth’s lower mantle, researchers report November 12 in Science. It’s the first time that scientists have managed to definitively prove that this type of lower mantle mineral — previously just predicted from laboratory experiments — actually exists in nature. The team named the mineral for well-known experimental high-pressure geophysicist Ho-kwang (Dave) Mao (SN: 3/16/04)
Scientists had previously estimated that about 5 percent to 7 percent of the lower mantle must be made up of this mineral, Tschauner says. But it’s fiendishly difficult to directly observe such deep-Earth minerals. That’s because minerals that are stable in the intense pressures of the lower mantle — which extends all the way to 2,700 kilometers below Earth’s surface — begin to rearrange their crystal structures as soon as the pressure lets up.
Even the planet’s most common mineral, a lower mantle magnesium iron silicate known as bridgmanite, was largely theoretical until 2014, when it was discovered to have naturally occurred within a meteorite that had slammed into Australia with a force that generated crushing, deep mantle-like pressures in the rock (SN: 11/27/14). To date, bridgmanite is the only other high-pressure silicate mineral confirmed to exist in nature.
Diamonds act like time capsules, locking in the original mineral forms on their journey to the surface. The discovery of davemaoite is not only a confirmation of its existence, but it also reveals the location of some sources of heat deep inside Earth.... By identifying the chemical makeup of davemaoite, researchers can now confirm where those elements reside.
That’s because the Botswana diamond also contained a high-pressure form of ice as well as another high-pressure mineral known as wüstite (SN: 3/8/18). The presence of those inclusions helped narrow down the rough pressures at which the davemaoite might have formed: somewhere between 24 billion pascals and 35 billion pascals, Tschauner says. It’s hard to say exactly what depth that corresponds to, he adds. But the discovery directly links heat generation (the radioactive materials), the water cycle (the ice) and the carbon cycle (represented by the formation of the diamond itself), all in the deep mantle, Tschauner says.
From the article I think that I'm being told that the diamond is preserving enough pressure to keep the davemaoite, the "high-pressure form of ice", and the wüstite stable as well.
Am I understanding this correctly?
Question: When diamonds "migrate" from deep underground to the surface, do they maintain pressure inside when there is no more pressure outside? If so, how?
I would think that as the diamond rises to the surface and the pressure relaxes outside it would relax and expand uniformly and the pressure would relax inside as well. If that's not the case, why not?
The tiny gray blobs of mineral embedded in this slice of clear diamond are the first samples of newly named davemaoite, a calcium silicate perovskite mineral that only forms in the lower mantle. AARON CELESTIAN/NATURAL HISTORY MUSEUM OF LOS ANGELES COUNTY
Answer: One of the more interesting examples of diamond maintaining high pressure in its lattice is discussed in this answer from Space Exploration SE. Put briefly, Ice VII inclusions have been found in diamonds at Earth's surface despite this phase of water requiring GPa pressure levels to form. In this case the required pressure must have been inherited within the diamond lattice within which the ice was found, and the calculated pressure from the lattice parameter is indeed between 8 and 11 GPa where Ice VII would be stable.
The tendency to maintain pressure internally is not entirely unique to diamond. Any solid formed under pressure can maintain such pressure internally in its crystal lattice. However, if the surrounding pressure is released then the material may also deform to relieve the internal pressure. So, roughly, only an amount of pressure similar to the yield strength (which is typically well below the bulk modulus) is expected to be retained. The mechanics behind this result are described below. For most solids this limit is so low that the inclusions end up in their "normal" low-pressure phases, which is not very interesting. What is unique about diamond is its much superior strength: [Ruoff1](https://doi.org/10.1063/1.326378) gives a yield strength of 35 GPa, enabling it to retain enough internal pressure (if it is formed under such pressure) to stabilize Ice VII, perovskite-structured silicates, etc.
Reference
Arthur L. Ruoff (1979). "On the yield strength of diamond".
Journal of Applied Physics 50, 3354. https://doi.org/10.1063/1.326378
The pressure's on: How a solid matrix retains pressure ... or not
Consider a spherical particle of radius $r_p$ exerting pressure $P$ on a surrounding solid matrix. In the absence of a counterbalancing pressure from the outside, the imposed pressure from within generates a compressive stress $\sigma_c$ in the radial direction and a tensile stress $\sigma_t$ in the two orthogonal directions (along spheres concentric with the particle) through the volume of the surrounding solid. As shown in the picture below, both components decrease with the cube of the distance from the particle, and so have maximum magnitude at the particle surface. There the negative compressive stress is $-P$ and the positive tensile stress is $+P/2$.
We can apply the Von Mises yield criterion which states that the surrounding matrix yields, thus reducing the retained pressure, when
$(\sigma_1-\sigma_2)^2+(\sigma_2-\sigma_3)^2+(\sigma_3-\sigma_1)^2\ge2(YS)^2$
where $\sigma_1,\sigma_2,\sigma_3$ are the three orthogonal principal components of the stress tensor. Here $\sigma_1=\sigma_c=-P$ and $\sigma_2=\sigma_3=\sigma_t=+P/2$, from which the yield criterion then becomes
$P\ge(2/3)(YS)$
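As a quick numeric check of this step, plugging $\sigma_1=-P$ and $\sigma_2=\sigma_3=+P/2$ into the Von Mises expression (the code only illustrates the algebra above):

```python
def von_mises_lhs(s1, s2, s3):
    """Left-hand side of the Von Mises criterion for principal stresses."""
    return (s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2

P = 1.0                                   # arbitrary pressure units
lhs = von_mises_lhs(-P, P / 2, P / 2)     # = 2 * (3P/2)^2 = 4.5 * P^2
print(lhs)                                # 4.5
# Yield when 4.5 P^2 >= 2 YS^2, i.e. P >= (2/3) YS:
YS_diamond = 35.0                         # GPa, from Ruoff (1979)
print(2 / 3 * YS_diamond)                 # ~23.3 GPa retainable by diamond
```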
For a diamond lattice with a yield strength of 35 GPa this means the diamond can sustain a pressure up to 23 GPa, as quoted in the main text, around an Ice VII inclusions, whereas most other minerals would have yield stresses well below 1 GPa and thus fail to retain enough pressure to sustain Ice VII or other GPa-pressure phase inclusions. | {
"domain": "earthscience.stackexchange",
"id": 2438,
"tags": "mantle, crystallography, crystals, pressure, diamond"
} |
How does atmospheric pressure influence dew point? | Question: I have a sensor that measures temperature, relative humidity and air pressure.
All the formulas I could find for both absolute humidity and the dew-point only use the temperature and humidity values, never the pressure.
I wonder if that is because:
A) The relatively small pressure differences from 1atm that occur naturally (outside of artificial pressure vessels) have effects that are too small to be relevant for the realistically achievable measurement accuracy in usual settings. Or:
B) The relative humidity changes proportionally to the air pressure in relation to the formulas for absolute humidity and dew point, so that its effect is already accounted for through the relative humidity value. Or:
C) Nobody is as pedantic of getting the highest accuracy possible as me ;)
I consider A or B to be the more likely explanations, but I'd love to hear the input of more knowledgeable people on the point.
This question seems related:
What law or formula discusses the relationship between pressure and dew point?
Reading this confirmed my suspicion that changing the pressure (and volume) of a given gas mixture containing water vapor will change its dew point.
But I'm unsure whether remeasuring temperature (which I believe would be raised by increased pressure) and relative humidity (I'm unsure if and how that would change) would result in values that, when used to calculate a fresh dew point, accurately reflect the change caused by the new air pressure.
UPDATE: So I found this calculator: https://airpack.nl/tools/dew-point-calculator/
Not as informative as a formula would be, but playing around with it showed me that a change of just 50 mbar to an air mixture with a dew point of 15°C will change the dew point by ~1°C, so option A) now seems much less likely to me.
UPDATE2: I thought I should add the formula I'm currently using to calculate the dew-point:
let a = (17.67 * self.temperature_c) / (243.5 + self.temperature_c);
let b = (self.rel_humidity_pct / 100.0).ln();
(243.5 * (b + a)) / (17.67 - b - a)
Answer: I believe I have found an answer; I'm still not 100% sure, but here it is:
https://en.wikipedia.org/wiki/Vapor_pressure#Meaning_in_meteorology
... Actually, as stated by Dalton's law (known since 1802), the partial pressure of water vapor or any substance does not depend on air at all, ...
Since the dew-point by definition is the temperature at which the current partial pressure of water vapor is equal to the saturation vapor pressure, I believe I can infer from above statement that the dew point also does not depend on any properties of the air except the partial pressure and temperature of the water vapor contained in it.
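For concreteness, here is the Magnus-type approximation from the question transcribed into Python with the same empirical constants; only temperature and relative humidity enter, and total air pressure never appears, which is consistent with the conclusion above:

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Magnus approximation, same constants as the snippet in the question."""
    a = 17.67 * temp_c / (243.5 + temp_c)
    b = math.log(rel_humidity_pct / 100.0)
    return 243.5 * (b + a) / (17.67 - b - a)

print(dew_point_c(20.0, 100.0))   # 20.0 -- saturated air: dew point equals temperature
print(dew_point_c(20.0, 50.0))    # about 9.3
```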
I'd love to read comments confirming or denying either my conclusion, or the quote from Wikipedia I'm basing it on. | {
"domain": "physics.stackexchange",
"id": 97476,
"tags": "pressure, temperature, humidity"
} |
Wave packet destructive interference | Question: In Cohen-Tannoudji's Quantum mechanics book, I was reading about an example of a wave packet.
The wave function is a superposition of 3 waves with different wave number:
$k_0$, $k_0 + \frac{\Delta k}{2}$, $k_0 - \frac{\Delta k}{2}$
And Amplitudes:
1, $\frac 12$,$\frac 12$.
The wave function is:
$$\psi(x)= Const. [e^{ik_0x} + \frac 12 e^{i(k_0 + \frac{\Delta k}{2})x} +\frac 12e^{i(k_0 - \frac{\Delta k}{2})x}]$$
$$\psi(x)= Const.e^{ik_0x} [1+cos(\frac {\Delta k}{2}x)]$$
One way to find the position of destructive interference is to set the bracketed expression containing the cos() term equal to zero. With this method you simply write
$cos(\frac {\Delta k}{2}x)=-1=cos\pi$ and from here we get $x=\frac{2\pi} {\Delta k}$.
But in the book the following is said:
As one moves away from x=0,the waves become more and more out of phase, and $|\psi(x)|$decreases.The interference becomes completely destructive when the phase shift between
$e^{ik_0x}$ and $e^{i(k_0 \pm \frac{\Delta k}{2})x}$ is equal to $\pm \pi$:$\psi(x)$ goes to zero when $x=\pm \frac{2\pi} {\Delta k}$.
How does this translate mathematically? How can one study the phase shift when the waves are given in a complex expression? As can be clearly seen, here you get $x=\pm\frac{2\pi} {\Delta k}$ instead of $x=\frac{2\pi} {\Delta k}$. One can argue that, initially, I could also write:
$cos(\frac {\Delta k}{2}x)=-1=cos(-\pi)$ and I would get the $x=-\frac{2\pi} {\Delta k}$ but this seems forced, knowing that $cos(-x)=cosx$.
So, to sum it up, I am interested in how I can investigate the phase shift of 2 waves (3 in this case) when the waves are given in a complex expression. I want to be able to cleanly write down the correct result.
Answer: In three waves, you may class them into two group of equal amplitude.
Group one:
$$
\frac{1}{2} e^{ik_0x}\,\, \text{ and } \,\,\frac{1}{2} e^{i\left(k_0+\frac{\Delta k}{2}\right) x};
$$
Group two:
$$
\frac{1}{2} e^{ik_0x}\,\, \text{ and } \,\,\frac{1}{2} e^{i\left(k_0-\frac{\Delta k}{2}\right) x};
$$
The phase difference (the difference of the exponents after $i$) between the two waves in group one is
$$\phi(x) = k_0x - \left(k_0+\frac{\Delta k}{2}\right) x = - \frac{\Delta k}{2} x$$
The destructive interference between the two equal-amplitude waves in the group one occurs at $$ \phi(x) =- \frac{\Delta k}{2} x = \pm \pi.$$
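The cancellation is easy to verify numerically; a quick check with arbitrary values $k_0=5$ and $\Delta k=1$ (chosen only for illustration):

```python
import cmath, math

k0, dk = 5.0, 1.0   # arbitrary wave numbers for the check

def psi(x):
    """Superposition from the question, with the overall constant set to 1."""
    return (cmath.exp(1j * k0 * x)
            + 0.5 * cmath.exp(1j * (k0 + dk / 2) * x)
            + 0.5 * cmath.exp(1j * (k0 - dk / 2) * x))

x_zero = 2 * math.pi / dk         # where cos(dk * x / 2) = -1
print(abs(psi(0.0)))              # 2.0 -- fully constructive at x = 0
print(abs(psi(x_zero)))           # ~0  -- fully destructive
print(abs(psi(-x_zero)))          # ~0  -- and symmetrically at x = -2*pi/dk
```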
A similar argument may apply to the group two. | {
"domain": "physics.stackexchange",
"id": 86805,
"tags": "quantum-mechanics, wavefunction, interference, superposition"
} |
Production: TensorFlow and Keras | Question: I always hear that TensorFlow is good because it is used for deployment and production. Does that mean that people don't use Keras for deploying models? If Keras is now integrated into TensorFlow, does that mean that it can also be used for deployment and production?
Answer: Once a model has been trained with Keras:
It can be exported as a TensorFlow model, OR
With tf.keras it can be served as an HTTP service
Examples:
https://towardsdatascience.com/deploying-keras-models-using-tensorflow-serving-and-flask-508ba00f1037
https://medium.com/tensorflow/training-and-serving-ml-models-with-tf-keras-fd975cc0fa27
https://medium.com/@mr.acle/exporting-deep-learning-models-from-keras-to-tensorflow-serving-7d4a6e49ce3 | {
"domain": "datascience.stackexchange",
"id": 4528,
"tags": "keras, tensorflow"
} |
Stored procedure to write to a file | Question: The following stored procedure allows me to write to a file stored on my SQL Server:
CREATE PROCEDURE [dbo].[spWriteToFile]
(
@PATH_TO_FILE nvarchar(MAX),
@TEXT_TO_WRITE nvarchar(MAX)
)
AS
BEGIN
DECLARE @OLE int
DECLARE @FileID int
EXECUTE sp_OACreate 'Scripting.FileSystemObject', @OLE OUT
EXECUTE sp_OAMethod @OLE, 'OpenTextFile', @FileID OUT, @PATH_TO_FILE, 8, 1
EXECUTE sp_OAMethod @FileID, 'WriteLine', NULL, @TEXT_TO_WRITE
EXECUTE sp_OADestroy @FileID
EXECUTE sp_OADestroy @OLE
END
In order to grant all the needed permissions, I had to execute the following query:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Ole Automation Procedures', 1;
GO
RECONFIGURE;
GO
Here is an example of executing the stored procedure:
-- NB: The folder "D:\test\" has to be created before
EXEC dbo.spWriteToFile N'D:\test\t1.txt', N'Hello World !'
My problem is that I have to run this stored procedure a lot, and therefore I have some performance issues. Is there something I can improve in my implementation to get better performance? Is there another way to write to a file using SQL Server that could be faster? If not, I think I'll have to try to call it less often.
NB: I'm using SQL Server Express Edition 2014 in case it could be relevant to my issue.
Answer: As discussed in comments with @scsimon and @Milney, I've created the following table:
CREATE TABLE [dbo].[file_write]
(
[ID] bigint NOT NULL IDENTITY(1,1) ,
[PATH_FILE] nvarchar(500) NOT NULL ,
[TEXT_FILE] nvarchar(500) NOT NULL ,
[DATE_WRITE] datetime NOT NULL ,
PRIMARY KEY ([ID])
)
After that, I've written a stored procedure in order to insert into this table using only PATH_FILE and TEXT_FILE as parameters. This procedure replaces the one I had written in my question.
CREATE PROCEDURE [dbo].[insertFileWrite]
(
@PATH_FILE nvarchar(500),
@TEXT_FILE nvarchar(500)
)
AS
BEGIN
DECLARE @NOW datetime = CURRENT_TIMESTAMP
INSERT INTO dbo.file_write(PATH_FILE, TEXT_FILE, DATE_WRITE)
VALUES (@PATH_FILE , @TEXT_FILE , @NOW)
END
Finally, I've written a second stored procedure in order to write the lines that are in my table. This stored procedure is executed every 5 minutes by a scheduled task on the server.
CREATE PROCEDURE [dbo].[writeAllIntoFile]
AS
BEGIN
DECLARE @PATH_FILE nvarchar(500)
DECLARE @PATH_FILE_PREV nvarchar(500)
DECLARE @TEXT_FILE nvarchar(500)
DECLARE @ID bigint
DECLARE @OLE int
DECLARE @FileID int
BEGIN TRANSACTION t_writeAllIntoFile
BEGIN TRY
DECLARE c_writeFile CURSOR FOR
SELECT PATH_FILE,
TEXT_FILE,
ID
FROM dbo.file_write
ORDER BY PATH_FILE ASC,
DATE_WRITE ASC
OPEN c_writeFile
FETCH NEXT FROM c_writeFile INTO @PATH_FILE, @TEXT_FILE, @ID
SET @PATH_FILE_PREV = @PATH_FILE
EXECUTE sp_OACreate 'Scripting.FileSystemObject', @OLE OUT
EXECUTE sp_OAMethod @OLE, 'OpenTextFile', @FileID OUT, @PATH_FILE, 8, 1
WHILE @@FETCH_STATUS = 0
BEGIN
IF (@PATH_FILE <> @PATH_FILE_PREV)
BEGIN
EXECUTE sp_OADestroy @FileID
EXECUTE sp_OADestroy @OLE
EXECUTE sp_OACreate 'Scripting.FileSystemObject', @OLE OUT
EXECUTE sp_OAMethod @OLE, 'OpenTextFile', @FileID OUT, @PATH_FILE, 8, 1
END
EXECUTE sp_OAMethod @FileID, 'WriteLine', NULL, @TEXT_FILE
DELETE FROM dbo.file_write WHERE ID = @ID
SET @PATH_FILE_PREV = @PATH_FILE
FETCH NEXT FROM c_writeFile INTO @PATH_FILE, @TEXT_FILE, @ID
END
CLOSE c_writeFile
DEALLOCATE c_writeFile
EXECUTE sp_OADestroy @FileID
EXECUTE sp_OADestroy @OLE
COMMIT TRANSACTION t_writeAllIntoFile
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION t_writeAllIntoFile
IF (SELECT CURSOR_STATUS('global','c_writeFile')) >= -1
BEGIN
IF (SELECT CURSOR_STATUS('global','c_writeFile')) > -1
BEGIN
CLOSE c_writeFile
END
DEALLOCATE c_writeFile
END
RETURN -999
END CATCH
END
With this solution, I don't have to open and close my file every time I want to write one line into it. Since the writing now runs on its own, performance has improved a lot. | {
"domain": "codereview.stackexchange",
"id": 26579,
"tags": "sql, sql-server"
} |
AutoComplete program using the n-gram model | Question: For my Advanced Data Mining class (undergrad) we were to design a program that would predict the next word a user is likely to type via automatic text classification using the n-gram model.
The following is what I came up with. The reason I am posting this is because I am about to graduate and I want to be aware of any bad habits, inefficiencies, or examples of poor implementation that I might be prone to before I have to go out and interview for Data Science Graduate programs. Unfortunately, my professors are too busy to give this kind of analysis and it seems like they just grade us on whether a program works or not (this one does).
Note: This program uses the AOL search data from here in a text file called searchterms.txt.
import java.io.*;
import java.util.*;
public class AutoComplete {
static LinkedList<String> sentences = new LinkedList<String>();
public static void main(String[] args) throws FileNotFoundException {
/* Define Variables */
int n = 3;
Hashtable<String, Hashtable<String, Double>> nGram = new Hashtable<String, Hashtable<String, Double>>();
Scanner inFile = new Scanner(new File("searchterms.txt"));
Scanner input = new Scanner(System.in);
/* Output for progress tracking */
System.out.println("Reading in search data...");
/* Populate the LinkedList sentences with data from AOL search dataset */
while(inFile.hasNext()) {
String unparsed = inFile.nextLine().intern();
String[] parsed = unparsed.split("\t");
sentences.add("<S> " + parsed[1] + " </S>");
}
inFile.close();
/* Output for progress tracking */
System.out.println("Successfully archived searches.");
System.out.println("Creating 3 grams...");
/* Split sentences into words */
for(String s : sentences) {
String[] words = s.split("[\\s]");
for(int i = 0; i <= words.length-n; i++) {
if(nGram.containsKey(words[i] + " " + words[i+1])) {
//Output for testing
//System.out.println("MATCH FOUND! ("+ words[i+2]+") Incrementing...");
if(nGram.get(words[i]+" "+words[i+1]).containsKey(words[i+2])) {
double v = nGram.get(words[i] + " " + words[i+1]).get(words[i+2]);
v++;
nGram.get(words[i] + " " + words[i+1]).put(words[i+2], v);
} else {
nGram.get(words[i] + " " + words[i+1]).put(words[i+2], 1.0);
}
} else {
//Output for testing
//System.out.println("No match found. Adding..." + words[i+2]);
nGram.put(words[i]+" "+words[i+1], createResult(words[i+2]));
}
}
}
/* Output for progress tracking */
System.out.println("Successfully created 3 grams.");
/* Loop so you can play with this forever */
String sTerm = "";
while(true) {
/* Request User Input */
System.out.println("Please enter your search terms (or type /q to quit):");
sTerm = input.nextLine();
if (sTerm.equalsIgnoreCase("/q")) break;
/* Format user input */
String[] terms = sTerm.split("[\\s]");
if (terms.length < 2) {
sTerm = "<S> " + terms[0];
//Output for testing
//System.out.println(sTerm);
} else {
sTerm = terms[terms.length-2] + " " + terms[terms.length-1];
//Output for testing
//System.out.println(sTerm);
}
/* Normalize to percent values */
double sum = 0;
try {
for(String s : nGram.get(sTerm).keySet()) {
sum += nGram.get(sTerm).get(s);
}
for(String s : nGram.get(sTerm).keySet()) {
nGram.get(sTerm).put(s, nGram.get(sTerm).get(s)/sum);
}
} catch (Exception NullPointerException) {
System.out.println("Search query not found in database.");
}
/* Give prediction */
try {
System.out.println("Prediction: " + prediction(nGram.get(sTerm)) + " ("+ Math.round(predValue(nGram.get(sTerm))*100) +"%)");
} catch (Exception NullPointerException) {
System.out.println("Cannot make a prediction.");
}
/* Testing block */
//System.out.println(nGram.get(sTerm).keySet());
//System.out.println(nGram.get(sTerm).values());
}
input.close();
}
/* Needed for scope */
static final Hashtable<String, Double> createResult(String s) {
Hashtable<String, Double> result = new Hashtable<String, Double>();
result.put(s, 1.0);
return result;
}
/* Prediction methods */
static final String prediction(Hashtable<String, Double> h) {
String key = "";
double max = 0;
for(String s : h.keySet()) {
if(h.get(s) > max) {
max = h.get(s);
key = s;
}
}
return key;
}
static final double predValue(Hashtable<String, Double> h) {
double max = 0;
for(String s : h.keySet()) {
if(h.get(s) > max) {
max = h.get(s);
}
}
return max;
}
}
Answer: Your code looks at first glance quite complete and professional, so onto the points. I hereby assume that you are using Java 7, since you have not made any restrictions and it is the most common version, though I may be wrong.
Consider changing your program's design. Currently you have an AutoComplete class, with almost everything in the main method. Now what happens if you want to run two AutoComplete instances simultaneously? You cannot do that in one program.
I would advise changing the following points:
Make an AutoComplete class that can operate on its own: you tell it what to do, what the inputs are, and you can call methods on it that give you output.
One candidate for refactoring is the input file, this should be an input argument.
Another point is that you request user input inside your processing, the user input should be asked beforehand and also be an input parameter.
The prediction which gets printed while processing, should be an output.
Use diamond inference where possible, this means that for example LinkedList<String> sentences = new LinkedList<String>(); can be written as LinkedList<String> sentences = new LinkedList<>();.
Code against interfaces instead of against classes. Take your LinkedList<String> sentences again. Nowhere do I see a requirement to use a LinkedList here; you just want to use a list, so only constrain yourself to writing: List<String> sentences = new LinkedList<>();. This allows you to change the exact type of List at a later point.
I see that you only loop over the LinkedList<String> sentences, and you have no special requirement to use a linked list, so consider using the more or less default ArrayList, which provides constant-time lookups and generally performs better. In your case the performance will be about equal, since all you do, underneath the enhanced for-loop, is use an Iterator.
Consider changing from using the File API to the Path API at some point, it offers more future-ready changes and will coöperate better with Java 8.
Prefer a class that receives print statements over directly printing to System.out during processing. In bigger projects this usually is a logger framework, to which you then attach System.out writers and also file writers for logfiles. In your case you may use a simplified version of this.
A Hashtable is old, very old, use the nowadays standard called Map, with as default implementation a HashMap. Some method names/semantics may have changed, but they both serve the same purpose.
Do not catch all exceptions with Exception NullPointerException, you may have confused yourself here, but this catches all exceptions of type Exception (so all), and gives the caught exception the name NullPointerException. But even then, catching NullPointerExceptions is not good and you should just let them fall through such that they terminate your program (or thread), so you can actually fix the issue, rather than a Cannot make prediction. message.
As a whole, the best advice I can give you is to consider more abstraction, make your methods smaller and give them a single responsibility. A very important second advice is to use language features that are the standard in this day. | {
"domain": "codereview.stackexchange",
"id": 7249,
"tags": "java, optimization, data-mining"
} |
Intuition behind field transfomations | Question: Consider a real field $V^{\mu}(x)$ defined on a 4-dimensional Minkowski space. Acted by a transformation $\Lambda = \Lambda^{\mu}{}_{\nu} $ it transforms like
$$V^{\mu}(x) \to V^{'\mu}(x) = \Lambda^{\mu}{}_{\nu} V^{\nu}(\Lambda^{-1}x)$$
My question is: what is the intuition behind this transformation? I can't wrap my head around it.
Answer: The components of the vector field transform the same way as the components of the position vector.
For a scalar field with the value of 7 at point (3,0), for another reference frame rotated $90 ^\circ$, the rotated point (0,-3) has the same value.
But for a vectorial field with value (0,1) at point (3,0), for the new frame, the rotated point (0,-3) now has the value (1,0).
The same rotation matrix
$\begin {bmatrix}
0 & 1\\
-1 & 0
\end{bmatrix}$
is applied to the vector position and to the vector field value itself.
It is easy to draw a picture and see what happens in this 2-D toy model.
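As a quick sanity check of the 2-D toy model, here is a small Python sketch (using the same numbers as the answer) applying the same rotation matrix to the position and to the field value:

```python
def rot90(v):
    """Apply the rotation matrix [[0, 1], [-1, 0]] to a 2-D vector (x, y)."""
    x, y = v
    return (y, -x)

# Scalar field: the value 7 at point (3, 0) simply moves to the rotated point
point = (3, 0)
rotated_point = rot90(point)        # (0, -3)

# Vector field: the field value itself is rotated by the same matrix
field_value = (0, 1)
rotated_value = rot90(field_value)  # (1, 0)

print(rotated_point, rotated_value)
```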
"domain": "physics.stackexchange",
"id": 76745,
"tags": "special-relativity, field-theory, vector-fields"
} |
Does refraction take place in plane mirror? | Question: I have read that reflection and refraction both occur simultaneously. My question is, does refraction also take place in plane mirror or is there only reflection?Why or why not?
Answer: In general when light strikes a surface, it can be transmitted, reflected, or absorbed. When light strikes at an angle, transmitted light changes direction if the index of refraction of the two media are different. This is called refraction.
At first glance, is sounds like a mirror reflects all light (except for a small amount of absorption), so there can be no refraction.
However, many mirrors are a piece of glass with a reflective coating on the back surface. So light can be refracted on the front surface, reflected from the back surface, and refracted again on the way out the front surface. | {
"domain": "physics.stackexchange",
"id": 69017,
"tags": "reflection, refraction"
} |
Resumption-based IO systems? | Question: I've been playing around with resumptions lately, mostly from Abramsky's classic paper Retracing Some Paths in Process Algebra. They are quite slick (basically solutions to the domain equation $R = I \to (O \times R)$), and very reminiscent of Kahn networks.
Of course, this observation is not original to me --- they form a traced monoidal category, and this fact was used by Abramsky and Jagadeesan to give semantics to linear logic. At any rate, note that if you feed a resumption $r$ an input of type $I$, you get an output of type $O$ and an updated resumption $r'$, which is what lets you model the fact that a dataflow node can change as it sees inputs come in.
As a result, it seems like they could give a nice API for building I/O transducers in a higher-order language like ML or Haskell, but I can't seem to find any papers describing such a thing. But they've been around for decades, and Gordon Plotkin invented them, so it's not like they've languished in obscurity. So I was wondering if anyone had seen them put to such use.
Answer: This looks a lot like the I/O API described by Felleisen et al in A Functional I/O System (or Fun for Freshman Kids). Basically, you write (in the simpler, non-distributed setting) a series of event handlers, each of which accepts the current state and returns an updated state. Finally, there's a to-draw handler, which produces the "output" for each state.
If we recast this API slightly, we can package up the handlers and the current state together, and each time a handler returns both a new state and a new set of handlers. We might call this package of state and operations an "object". :) If we then make the result a pair of this object and the "output", we have exactly the type of resumptions.
Interestingly, in the paper, Felleisen et al do exactly this when moving to the distributed setting -- every operation returns a pair of new state and "output" in the form of messages to be sent to the other participants in the system. | {
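To make the correspondence concrete, here is a minimal Python sketch of the resumption type $R = I \to (O \times R)$; the running-sum transducer is purely illustrative and not from any of the cited papers:

```python
def counter(state):
    """A resumption: given an input, return an (output, next resumption) pair."""
    def step(i):
        new_state = state + i
        return new_state, counter(new_state)  # output O, updated resumption R
    return step

r = counter(0)
out1, r = r(5)     # feed input 5
out2, r = r(3)     # the updated resumption remembers the earlier inputs
print(out1, out2)  # 5 8
```

Each call consumes one input and hands back a fresh resumption, which is exactly what lets the transducer change as inputs come in.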
"domain": "cstheory.stackexchange",
"id": 5680,
"tags": "reference-request, pl.programming-languages, functional-programming"
} |
Best tutorial for learning nav_stack | Question:
I am learning about and experimenting with the Navigation stack. We have a Robotis Turtlebot3 and soon will have a ClearPath Turtlebot2. But the class has 9 students.
I've been scouring Google for a nav stack tutorial with actual code that allows me to see the various move_base capabilities in simulation. I am running Kinetic.
Can someone refer me to a good tutorial that will work on Kinetic and that allows me to understand path planning and work with it. Believe me I searched and have tried to follow many of them (not just from ROS.org) and each one fails in a different way.
Originally posted by pitosalas on ROS Answers with karma: 628 on 2018-01-26
Post score: 0
Answer:
Hey, all the tutorials work well for all versions, they only need some modification. Here are the links for the TurtleBot learning tutorials:
http://wiki.ros.org/turtlebot_navigation/Tutorials
http://learn.turtlebot.com/
http://emanual.robotis.com/docs/en/platform/turtlebot3/overview/
Check these links, they will help you; I am sure about this.
Originally posted by lagankapoor with karma: 216 on 2018-01-29
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 29879,
"tags": "rviz, ros-kinetic"
} |
Experimental proof for the antineutron | Question: An answer to the question How detectors in particle accelerators can differentiate neutrons from antineutrons does not show that an antineutron measurement was successful. The answers are from 2014 and I'm curious about the current situation around the measurements of antineutrons.
Answer: Single interactions with antineutrons have been seen in bubble chambers.
The original paper in this link.
Please note that such events prove experimentally the existence of antineutrons. The difficulty with high energy experiments and detectors lies in the high energy showers created by all particles. One has to rely on tracking detectors to define charged particles, and the antineutron is not charged, then on calorimeters to get momentum and energy, but it is not possible in the calorimeters to identify the particle if it does not connect with a charged particle in the particle detector.
It is hard enough to make a neutron beam, as they cannot be controlled with electric and magnetic fields. Neutron antineutron oscillations are another story, which has not materialized experimentally, but has nothing to do with the existence or not of the antineutron. | {
"domain": "physics.stackexchange",
"id": 63632,
"tags": "antimatter, neutrons"
} |
Why is dark energy dominant between galaxies but not inside galaxies? | Question: The ideas of dark matter and dark energy are mind blowing.
Why is it said that dark matter overcomes dark energy in galaxies but it loses the battle in intergalactic space? In other words, why is dark energy dominant between galaxies but not inside galaxies?
Answer: These aspects of astronomy and cosmology are indeed very interesting and very significant, but don't allow the names to get in the way of your understanding. Dark matter is a form of matter made (most likely) of particles which don't interact very much with the matter we are more familiar with (i.e. protons, neutrons, electrons etc.). The evidence for it has several strands (rotation curves of galaxies, gravitational lensing, calculations of structure formation, calculations of matter content from nucleosynthesis in the early universe, etc.)
The evidence for dark energy is summarised here:
What is the evidence that dark energy exists? (as of 2020))
"Dark energy" is a rather confusing name, in my opinion. It refers to the behaviour of the expansion of the universe at the largest scales. Ordinary matter tends to pull things together by gravitational attraction and therefore always slows the expansion. But the equations of general relativity allow that there might be effects which accelerate the expansion. Such effects get the name "dark energy". I wish the cosmologists had settled on a better name. But there it is. The name arises because this contribution to the overall dynamics of the universe enters the equations in two places, one of which behaves like energy and the other of which behaves like stress, in fact a form of tension (the opposite of pressure). But in physics if something behaves like X then we say it is X. So it is called energy. Dark because it does not emit electromagnetic radiation.
The most significant thing about this contribution called dark energy is that it enters the equations of general relativity as a term which just gets added on, irrespective of where the matter in the universe may be. It is added on in exactly the same way everywhere. And most of the universe is vast empty voids between filaments of dark matter. Therefore the dark energy contribution adds up to a large total effect on average, even though it is tiny compared to the ordinary matter and dark matter at any given place where matter is present. The reason why the gravitational attraction of ordinary matter and of dark matter easily wins against the repulsive effects of this other term, wherever the matter is actually present, is simply that the dark energy per unit volume is so small. But after averaging over the whole volume of the universe it nevertheless makes the biggest contribution to the dynamics of the whole universe on average, because it is present throughout the otherwise empty voids, and those voids make up most of the volume. | {
"domain": "physics.stackexchange",
"id": 76169,
"tags": "cosmology, space-expansion, dark-matter, galaxies, dark-energy"
} |
CRF message passing as convolution operation | Question: I was reading the paper _Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials (Philipp Krähenbühl and Vladlen Koltun, in Proceedings of the 25th Annual Conference on Neural Information Processing Systems (NIPS), 2011 pdf), and I didn't understand this equation (eq 5) in the paper:
I understand the first equation, but not the second one. If $k(f_i,f_j) = e^{-(f_i-f_j)^{2}}$, then $k(f_i,f_j)$ would return a scalar value between 0 and 1; 0 if $f_i$ and $f_j$ are far apart in the feature space, 1 otherwise. So, $k(f_i,f_j)Q_j$ adds a fraction of $Q_j$ to $Q_i$. The first equation is clear. I don't understand how this leads to the 2nd equation. In the 2nd equation, the Gaussian kernel is now over $Q$ instead of being over $f_i,f_j$ and then multiplied by $f_i$. Where is the $f_j$? Can someone explain how the 2nd equation is derived from the first one?
$G \otimes Q$ would again return a value between 0 and 1, which is multiplied by $f_i$ (which is an n-dimensional feature vector). So the result of $(G \otimes Q)(f_i)$ would be an n-dimensional vector, and we subtract $Q_i$ (a scalar) from that. Am I misunderstanding something here?
Answer: I read the article. The meaning of the equation:
$$[G\otimes Q](f_i)$$ stands for the convolution of $G$ with $Q$ evaluated at the feature point $f_i$, not $G \otimes Q$ multiplied by $f_i$.
The $f_j$ is hidden inside the convolution operation.
The actual operation is done using a cross bilateral filter.
You can see more information on the project site of the article.
You can also find the source code on that site.
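To see how the $f_j$ is "hidden inside" the convolution, here is a brute-force 1-D Python toy; the paper's actual speed-up uses a high-dimensional (permutohedral lattice) filter, so this sketch only demonstrates the equivalence of the two forms, with made-up numbers:

```python
import math

Q = [0.1, 0.9, 0.4, 0.6, 0.2]   # toy marginals Q_j for one label
f = [0.0, 1.0, 2.0, 3.0, 4.0]   # toy 1-D features f_j

def message_passing(i):
    """First form: sum over j != i of k(f_i, f_j) * Q_j."""
    return sum(math.exp(-(f[i] - f[j]) ** 2) * Q[j]
               for j in range(len(Q)) if j != i)

def conv_minus_self(i):
    """Second form: [G (x) Q](f_i) - Q_i, i.e. the full Gaussian
    convolution minus the self term, which works since k(f_i, f_i) = 1."""
    full = sum(math.exp(-(f[i] - f[j]) ** 2) * Q[j] for j in range(len(Q)))
    return full - Q[i]

assert all(abs(message_passing(i) - conv_minus_self(i)) < 1e-12
           for i in range(len(Q)))
```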
"domain": "cs.stackexchange",
"id": 7597,
"tags": "machine-learning, computer-vision"
} |
How come an anti-reflective coating makes glass *more* transparent? | Question: The book I'm reading about optics says that an anti-reflective film applied on glass* makes the glass more transparent, because the air→film and film→glass reflected waves (originated from a paraxial incoming wave) interfere destructively with each other, resulting on virtually no reflected light; therefore the "extra" light that would normally get reflected, gets transmitted instead (to honor the principle of conservation of energy, I suppose?).
However, this answer states that "Superposition is the principle that the amplitudes due to two waves incident on the same point in space at the same time can be naively added together, but the waves do not affect each other."
So, how does this fit into this picture? If the reflected waves actually continue happily travelling back, where does the extra transmitted light come from?
* the film is described as (1) having an intermediate index of refraction between those of air and glass, so that both the air-film and film-glass reflections are "hard", i.e., produce a 180º inversion in the phase of the incoming wave, and (2) having a depth of 1/4 of the wavelength of the wave in the film, so that the film-glass reflection travels half its wavelength back and meets the air-film reflection in the opposite phase, thus cancelling it.
Answer: The thickness of the AR coating is chosen such that the reflections from the two interfaces cancel out (at the wavelength for which the AR coating was designed):
See Anti-reflective coating in Wikipedia.
As endolith points out in the comments, to explain how the transmission is enhanced, you have to draw a few more rays in the diagram. Here's another illustration, from the Wikipedia article for Fabry–Pérot interferometer, which shows a few higher-order reflections:
For the anti-reflective coating, you choose the thickness such that R1 and R2 cancel while T1 and T2 constructively interfere. Note that this is dependent on the wavelength, the angle of incidence, and the index of refraction of whatever is being coated. With other thicknesses, you can make a high-reflectivity coating, or a coating of whatever reflectivity you want. | {
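As a rough numerical illustration of the cancellation (two-beam approximation only, ignoring the higher-order reflections shown in the second figure; the indices and wavelength are assumed values, not from the answer):

```python
import cmath, math

n_air, n_glass = 1.0, 1.5
n_film = math.sqrt(n_air * n_glass)   # intermediate index; makes r1 and r2 equal
lam = 550e-9                          # design wavelength
d = lam / (4 * n_film)                # quarter-wave film thickness

# Fresnel amplitude reflection coefficients at normal incidence
r1 = (n_air - n_film) / (n_air + n_film)      # air -> film interface
r2 = (n_film - n_glass) / (n_film + n_glass)  # film -> glass interface

# Round-trip phase of the second reflection; equals pi at the design wavelength
delta = 4 * math.pi * n_film * d / lam

reflectance = abs(r1 + r2 * cmath.exp(1j * delta)) ** 2
bare = ((n_air - n_glass) / (n_air + n_glass)) ** 2   # uncoated glass: 0.04

print(reflectance, bare)   # ~0 with the coating vs 4% without it
```

The light no longer lost to reflection is what makes the coated glass more transparent.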
"domain": "physics.stackexchange",
"id": 1274,
"tags": "optics, interference, reflection"
} |
Chemical species equations in a kinetic mechanism of combustion | Question: In combustion, for a kinetic mechanism involving $m$ reactions and $n$ chemical species, of the form
\begin{align}
\begin{cases}
\nu_{11}' \mathcal{S}_1 + \nu_{12}' \mathcal{S}_2 + \dots + \nu_{1n}' \mathcal{S}_{n} &\ce{->[k_1]} \hspace{0.75cm} \nu_{11}'' \mathcal{S}_1 + \nu_{12}'' \mathcal{S}_2 + \dots + \nu_{1n}'' \mathcal{S}_{n} \\
\nu_{21}' \mathcal{S}_1 + \nu_{22}' \mathcal{S}_2 + \dots + \nu_{2n}' \mathcal{S}_{n} &\ce{->[k_2]} \hspace{0.75cm} \nu_{21}'' \mathcal{S}_1 + \nu_{22}'' \mathcal{S}_2 + \dots + \nu_{2n}'' \mathcal{S}_{n} \\
\hspace{2.63cm} \vdots & \hspace{0.45cm} \vdots \hspace{3.85cm} \vdots \\
\nu_{m1}' \mathcal{S}_1 + \nu_{m2}' \mathcal{S}_2 + \dots + \nu_{mn}' \mathcal{S}_{n} &\ce{->[k_{m}]} \hspace{0.75cm} \nu_{m1}'' \mathcal{S}_1 + \nu_{m2}'' \mathcal{S}_2 + \dots + \nu_{mn}'' \mathcal{S}_{n}
\end{cases},
\end{align}
we have the rate of the $i$-th reaction given by
\begin{equation}\label{eq1}
q_i = k_i[\mathcal{S}_1]^{\nu_{i1}'}[\mathcal{S}_2]^{\nu_{i2}'}\dots [\mathcal{S}_n]^{\nu_{in}'},
\end{equation}
where $[\mathcal{S}_j]$ corresponds to the molar concentration of the specie $\mathcal{S}_j$, and $k_i$ corresponds to the Arrhenius equation
\begin{equation*} %ARRHENIUS MODIFICADA
k_i = A_i T^{\beta} e^{-E_a / (RT)}.
\end{equation*}
The change in the concentrations of all species with time $t$ is, then, given by the
system of differential equations
\begin{equation}\label{eq2}
\frac{\mathrm{d}[\mathcal{S}_j]}{\mathrm{d}t} = \sum_{i=1}^{m} (\nu_{ij}''-\nu_{ij}') q_{i}, \hspace{3cm} j=1,\dots,n.
\end{equation}
I received a computational code from a combustion researcher, in which the molar concentrations of the species, $[\mathcal{S}_j]$, in the equations above are replaced by the mass fraction of the species, $Y_j$. Thus, the code solves the system
\begin{equation}
\frac{\mathrm{d} Y_j}{\mathrm{d}t} = \sum_{i=1}^{m} (\nu_{ij}''-\nu_{ij}') q_{i}, \hspace{3cm} j=1,\dots,n;
\end{equation}
with
\begin{equation}
q_i = k_iY_1^{\nu_{i1}'}Y_2^{\nu_{i2}'}\dots Y_n^{\nu_{in}'}.
\end{equation}
Can this substitution be made so that the equations still make sense?
Answer: The relation between molar concentration and mass fraction is
\begin{align}
[S_j] = \frac{N_j}{V} = \frac{m_j}{M_jV} = \frac{Y_jm}{M_jV} \rightarrow
[S_j] = \left(\frac{\rho}{M_j}\right)Y_j \tag{1}
\end{align}
where $M_j$ is the molar mass of chemical species $ j $ and $\rho$ is the mass density.
It seems that your hope of writing the rate law by analogy, i.e. changing $[S_j]$ for $Y_j$, is true under very restrictive conditions: (1) irreversible elementary reaction $A \rightarrow B $, and (2) constant volume process. We prove this by using Eq. (1) in a batch reactor, like your equations show, where there is no inlet or outlet of any chemical species
\begin{align}
\require{cancel}
\text{[Rate of acumulation of A within the system]}
=& \text{[Rate of flow of A into the system = 0]} \\
-& \text{[Rate of flow of A out of the system = 0]} \\
+& \text{[Rate of generation of A within the system]} \\
\frac{dN_A}{dt} &= \nu_A rV \\
\frac{d([S_A]V)}{dt} &= -k[S_A]V \\
\bcancel{V}\cancel{\left(\frac{\rho}{M_A}\right)}\frac{dY_A}{dt} &=
-k\cancel{\left(\frac{\rho}{M_A}\right)}Y_A \bcancel{V} \\
\frac{dY_A}{dt} &= -kY_A \therefore r' = k Y_A
\end{align}
The condition of constant volume, in your system, is satisfied by the way you have written the differential equations. The only way you can arrive to that form, is by taking out $V$ from the time derivative, and cancelling it on both sides. I will continue with this assumption, and since the mass is constant, then the mass density $\rho$ also is.
It is unusual to carry reactions, specially gas-phase reactions like combustions, in this manner. I will leave at the end the mathematical restriction that it imposes.
Nevertheless, lets try to obtain an expression according to your case.
1. Rate law in terms of $Y_j$
Combined with Eq. (1), the rate law gives
\begin{align}
q_i &= k_i \prod_{k = 1}^n [S_k]^{\nu_{ik}'} \\
q_i &= k_i \prod_{k = 1}^n \left(\frac{\rho Y_k}{M_k}\right)^{\nu_{ik}'} \\
q_i &= k_i \rho^{\sum_{k = 1}^n \nu_{ik}'} \prod_{k = 1}^n
\left(\frac{Y_k}{M_k}\right)^{\nu_{ik}'} \\
q_i &= k_i \rho^{\nu_{i}'} \prod_{k = 1}^n
\left(\frac{Y_k}{M_k}\right)^{\nu_{ik}'} \tag{3} \\
\end{align}
where I have defined $\nu_i' := \sum_{k = 1}^n \nu_{ik}'$. If we go back to the reaction scheme, and stare at row $i$, this guy is the sum of all the stoichiometric coefficients of the reactants for that row.
2. Mole balance in terms of $Y_j$
The rate of change of the concentration of species $j$, using Eqs. (1) and (3), yields
\begin{align}
\dfrac{d[S_j]}{dt} &= \sum_{i = 1}^m (\nu_{ij}'' - \nu_{ij}')q_i \\
\left(\frac{\rho}{M_j}\right)\frac{dY_j}{dt}
&= \sum_{i = 1}^m (\nu_{ij}'' - \nu_{ij}') k_i \rho^{\nu_{i}'}
\prod_{k = 1}^n \left(\frac{Y_k}{M_k}\right)^{\nu_{ik}'} \\
\end{align}
$$ \boxed{\frac{dY_j}{dt} = M_j
\sum_{i = 1}^m (\nu_{ij}'' - \nu_{ij}') k_i \rho^{\nu_{i}' - 1}
\prod_{k = 1}^n \left(\frac{Y_k}{M_k}\right)^{\nu_{ik}'}} \tag{4} $$
Eq. (4) is as far as we can get, but is the desired expression, as we have the ODE in terms of the mass fraction of species $j$.
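For completeness, here is a small Python sketch that integrates Eq. (4) with forward Euler for a hypothetical single isomerization A → B (so $M_A = M_B$ and the total mass fraction is conserved); all numerical values are made up for illustration:

```python
# Hypothetical mechanism: one irreversible reaction A -> B (m = 1, n = 2),
# at constant rho and T, so each k_i is just a number.
M     = [28.0, 28.0]   # molar masses; equal, since an isomerization conserves mass
nu_p  = [[1, 0]]       # nu'  (reactant coefficients), one row per reaction
nu_pp = [[0, 1]]       # nu'' (product coefficients)
k     = [2.0]          # rate constants
rho   = 1.2            # constant mass density

def dY_dt(Y):
    """Right-hand side of Eq. (4) for every species j."""
    out = []
    for j in range(len(Y)):
        s = 0.0
        for i in range(len(k)):
            order = sum(nu_p[i])           # nu_i' = total reactant order
            q = k[i] * rho ** (order - 1)  # Eq. (3) prefactor
            for l in range(len(Y)):
                q *= (Y[l] / M[l]) ** nu_p[i][l]
            s += (nu_pp[i][j] - nu_p[i][j]) * q
        out.append(M[j] * s)
    return out

# Forward Euler from Y = (1, 0); Y_A should decay roughly as exp(-k t)
Y, dt = [1.0, 0.0], 1e-3
for _ in range(1000):                      # integrate to t = 1
    f = dY_dt(Y)
    Y = [y + dt * fy for y, fy in zip(Y, f)]

print(Y[0], Y[0] + Y[1])   # ~ exp(-2) = 0.135..., and the sum stays 1
```

With equal molar masses the density factor drops out and Eq. (4) reduces to the familiar $dY_A/dt = -kY_A$, which the output confirms.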
3. Constant volume process
In the simplest of cases, the mixture of gases will obey the ideal gas law
\begin{align}
\rho &= \frac{P}{MRT} \\
\ln\rho &= \ln(P) - \ln(MRT) \\
\frac{d\ln\rho}{dt} &= \left(\frac{1}{P}\right)\frac{dP}{dt} - \left(\frac{1}{\cancel{MR}T}\right)
(\cancel{MR})\frac{dT}{dt} - \frac{1}{M\cancel{RT}}(\cancel{RT})
\frac{dM}{dt} \\
\frac{d\ln\rho}{dt} &= \left(\frac{1}{P}\right)\frac{dP}{dt} - \left(\frac{1}{T}\right)\frac{dT}{dt}
- \left(\frac{1}{M}\right)\frac{dM}{dt} \tag{5} \\
\end{align}
Where $M$ is the molar mass of the mixture. At any instant in time, it has a value of $ M = \sum_{k = 1}^n M_ky_k $, and continuing with Eq. (5)
\begin{align}
\frac{d\ln\rho}{dt} &= \left(\frac{1}{P}\right)\frac{dP}{dt} - \left(\frac{1}{T}\right)\frac{dT}{dt}
- \frac{1}{\sum_{k = 1}^n M_ky_k}
\frac{d}{dt}\left(\sum_{k = 1}^n M_ky_k\right) \\
\frac{d\ln\rho}{dt} &= \left(\frac{1}{P}\right)\frac{dP}{dt} - \left(\frac{1}{T}\right)\frac{dT}{dt} - \frac{1}{\sum_{k = 1}^n M_ky_k}
\sum_{k = 1}^n M_k\frac{dy_k}{dt} \tag{6} \\
\end{align}
In consequence, if the mass density is to remain constant, by Eq. (6)
$$ \boxed{\left(\frac{1}{P}\right)\frac{dP}{dt} = \left(\frac{1}{T}\right)\frac{dT}{dt}
+ \frac{1}{\sum_{k = 1}^n M_ky_k}
\sum_{k = 1}^n M_k\frac{dy_k}{dt}} \tag{7} $$
Eq. (7) must be satisfied at any instant of time while solving the system of differential equations. It requires the need of the energy balance, in order to obtain an expression of $dT/dt$. Of course, you can always sweep it under the carpet, and the pressure will "adjust" itself to meet the demands of the process. Pretty much like a normal force in a body diagram. However, if you can couple Eq. (7) to the system and solve it, then perhaps high values of pressure will be obtained (e.g. if the system of reactions is excessively exothermic). This is an indication that the isochoric operation may be unfeasable in reality. | {
"domain": "chemistry.stackexchange",
"id": 17409,
"tags": "reaction-mechanism, kinetics, concentration, combustion, chemical-engineering"
} |
Finding sessions associated with an IP address using PDO | Question: Originally Posted on Stack Overflow
First of all I want to say that I'm new to PDO. I did try it once, but since I found the OOP solution complicated and even impossible (SELECT * FROM table_name), I decided to use PDO.
But I'm not sure if I do it right, so I'd like to take criticism about what I've done. I'm "translating" the mysqli stmt to pdo. Here's an example from one thing I've "translated":
// Client IP has been defined previously,
// But for the example:
$ip = '3ffe:1900:4545:3:200:f8ff:fe21:67cf';
$sql_ip = inet_pton($ip); // IPV6
try {
$stmt = $pdo->prepare('SELECT * FROM sessions WHERE s_ipv4 = :s_ipv4 OR s_ipv6 = :s_ipv6');
$stmt->bindParam(':s_ipv4', $sql_ip);
$stmt->bindParam(':s_ipv6', $sql_ip);
$stmt->execute();
if ($stmt->rowCount() === 0) {
// No rows
} else {
// Do something
}
} catch (Exception $exception) {
die ($exception->getMessage());
}
unset($stmt);
Am I doing it right?
Answer: There are two main points of improvement:
First of all, never ever use die(error message) in your scripts. Nor should try/catch ever be used only to echo the error message out. You'd be surprised, but you will have a more informative error message if you just get rid of the whole try/catch/die stuff. Please read my article on PHP error reporting for the details
PDO code itself could be made more tidy, thanks to various helper functions PDO offers
So the whole code should be
$sql_ip = inet_pton($ip); // IPV6
$stmt = $pdo->prepare('SELECT 1 FROM sessions WHERE s_ipv4 = ? OR s_ipv6 = ?');
$stmt->execute([$sql_ip,$sql_ip]);
if ($stmt->fetchColumn()) {
// Do something
} else {
// No rows
}
Here I used positional placeholders for brevity, and sent your variables directly to execute for the same purpose.
Also, I recommend avoiding the rowCount() function, since in your code you are selecting some data but never using it. Instead, I would suggest using the actual data selected (just a literal "1" in your case, fetched directly using the fetchColumn() method). | {
"domain": "codereview.stackexchange",
"id": 28548,
"tags": "php, mysql, pdo, ip-address"
} |
Are all diffusion-like processes described as wave-like in relativity-compatible formulations? | Question: Citing from Wikipedia's article on relativistic heat conduction:
For most of the last century, it was recognized that Fourier equation
(and its more general Fick's law of diffusion) is in contradiction
with the theory of relativity, for at least one reason: it admits
infinite speed of propagation of heat signals within the continuum
field. [...] To overcome this contradiction, workers such as Cattaneo,
Vernotte, Chester, and others proposed that Fourier equation should be
upgraded from the parabolic to a hyperbolic form,
$$\frac{1}{C^2}\frac{\partial^2 \theta}{\partial t^2}
+\frac{1}{\alpha}\frac{\partial \theta}{\partial t}=\nabla^2\theta$$ also known as the Telegrapher's equation. Interestingly, the form of
this equation traces its origins to Maxwell’s equations of
electrodynamics; hence, the wave nature of heat is implied.
It appears to me that the PDEs describing any other diffusion process –for instance, the Fokker–Planck equation for Brownian motion– will also assume an infinite speed of propagation. Then, if my intuition is correct, they'll be incompatible with SR, and will have to be "upgraded" to hyperbolic, wave-like equations.
If this were a general rule, would we have, for instance, a relativistic wave equation for Brownian motion? It appears unlikely... Is there, then, any example of diffusion-like/dispersive equation whose form "survives" into a relativity-compatible description?
Edit:
I'll add a broader reformulation of the question, as suggested by a @CuriousOne comment:
Can we find a first order equation that models the finite velocity limits or are we automatically being thrown back to second order equations? Is there a general mathematical theorem at play here about the solutions of first vs. second order equations?
Answer: This is a subtle and somewhat complicated question, but I think the basic answer is "no".
1) The relativistic Boltzmann equation is
$$
p^\mu\partial_\mu f = C[f]
$$
which has the same structure as the non-relativistic Boltzmann equation. This equation can be used to derive relativistic Fokker-Planck equations. One example is the Landau collision term, which describes the scattering of charged particles in a relativistic plasma. The resulting FP equation has the same structure as the non-relativistic FP equation, see, for example http://www.sciencedirect.com/science/article/pii/0378437180901570 .
2) Also note that the Cattaneo equation (and similar equations for other diffusive problems) is not a "fundamental" equation. Take the equation of current conservation
$$
\partial_0 n +\vec\nabla\cdot\vec\jmath = 0 .
$$
Fick's law is that $\vec\jmath$ is instantaneously equal to the diffusive flux $-D\vec\nabla n$. This is incompatible with relativity. We can try to fix things by writing down a relaxation time model for the current,
$$
\tau\partial_0 \vec\jmath = -(\vec\jmath+D\vec\nabla n) ,
$$
which gives the Cattaneo equation
$$
\tau\partial_0^2 n + \partial_0 n - D\nabla^2 n = 0 \, .
$$
But, in general there could be a much more complicated memory kernel
$$
\vec\jmath (r,t) =\int dr' dt' \, G(r,t;r' ,t' )\nabla n(r' ,t' )
$$
and the relaxation time model is an approximation that follows from
simple kinetic models in the limit $\partial_0n \ll n/\tau$.
3) Also note that the issue is not just related to relativistic invariance and causality. In a non-relativistic gas it is also impossible for the current to be instantaneously equal to the diffusive flux. Take an ultracold gas in which the atoms move at speeds $\sim cm/s$. Then any diffusive front that moves at $m/s$ (nowhere near the speed of light) is clearly unphysical, and the Cattaneo equation is more appropriate than Fick's law. What is happening here is that we took Fick's law, which is a long-wavelength (coarse grained) approximation, and pushed it to distances that are too short. | {
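To make the finite propagation speed concrete, here is a small finite-difference sketch (my own illustration, not part of the original answer; the grid, pulse shape, and parameter values are arbitrary choices for the demo). It evolves the same Gaussian pulse under Fick's law ($\tau=0$) and under the Cattaneo/telegrapher equation, whose front travels at $c=\sqrt{D/\tau}$; probing a point well outside the causal front shows the parabolic equation has already populated it while the hyperbolic one has not.

```python
import numpy as np

def simulate(tau, D, T, dx=0.05):
    """Explicit finite differences for  tau*n_tt + n_t = D*n_xx  on [-10, 10].
    tau = 0 reduces to the parabolic Fick/Fourier equation."""
    x = np.arange(-10, 10 + dx, dx)
    n = np.exp(-x**2 / (2 * 0.5**2))          # Gaussian pulse, sigma = 0.5
    if tau == 0:
        dt = 0.4 * dx**2 / D                  # diffusion stability limit
        advance = lambda n, prev, lap: n + dt * D * lap
    else:
        c = np.sqrt(D / tau)                  # finite signal speed
        dt = 0.4 * dx / c                     # CFL condition
        a = tau / dt**2 + 1 / (2 * dt)
        b = tau / dt**2 - 1 / (2 * dt)
        advance = lambda n, prev, lap: (D * lap + 2 * tau * n / dt**2 - b * prev) / a
    prev = n.copy()                           # zero initial time derivative
    for _ in range(int(round(T / dt))):
        lap = np.zeros_like(n)
        lap[1:-1] = (n[2:] - 2 * n[1:-1] + n[:-2]) / dx**2
        prev, n = n, advance(n, prev, lap)
    return x, n

x, n_fick = simulate(tau=0.0, D=1.0, T=1.0)   # infinite-speed diffusion
x, n_catt = simulate(tau=1.0, D=1.0, T=1.0)   # Cattaneo, front speed c = 1
i = np.argmin(np.abs(x - 4.0))                # probe far outside the causal front
print(n_fick[i], n_catt[i])
```

At $t=1$ the causal front has only reached $|x|\approx 1$ plus the pulse width, so the Cattaneo solution at $x=4$ is essentially zero, while the Fick solution there is already of order $10^{-2}$.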
"domain": "physics.stackexchange",
"id": 34964,
"tags": "thermodynamics, special-relativity, waves, relativity, differential-equations"
} |
Transform NavSatFix to Tf | Question:
This is probably a dumb question.
I am working with the Kitti dataset. I am trying to express the content of the topic /kitti/oxts/gps/fix (NavSatFix - lat, long, alt) in the world reference system of Tf (x, y, z).
Do you know a way to do it? Thanks
Originally posted by Filippo Grazioli on ROS Answers with karma: 21 on 2020-05-27
Post score: 0
Original comments
Comment by Weasfas on 2020-06-09:
Hi @Filippo Grazioli,
Have you considered the robot_localization package and its navsat_transform_node?
Answer:
My personal solution was the following:
a) transform the initial GPS position (lat, long, alt) to an ECEF point - let's name this O;
b) center an ENU reference system at O;
c) express all other GPS positions (lat, long, alt) w.r.t. the ENU reference system centred in O.
Originally posted by Filippo Grazioli with karma: 21 on 2020-06-10
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 35019,
"tags": "ros, gps, transform"
} |
Why does stagnation pressure reduce across a normal shock? | Question: I am seeking an explanation for this graph where the subscript "1" refers to the supersonic region and the subscript "2" refers to the subsonic region present beyond a normal shock.
The static pressure curve shows an increasing trend. Shouldn't the same be applicable to the stagnation pressure Po?
Is the entropy generation associated with the stagnating of the kinetic energy term so high?
Answer: This can be concluded by reviewing Gibbs equation for upstream and downstream stagnation conditions. $$T_0ds_0=dh_0-\frac 1{\rho_0}dP_0$$
Because the process across the shock wave is adiabatic, $dh_0=0$
Then, using the ideal gas law $P_0=\rho_0 R T_0$, the Gibbs equation becomes $$ds_0=-\frac 1{\rho_0T_0}dP_0=-\frac {R}{P_0}dP_0$$
We know entropy increases across the shock. This leads to the conclusion that the stagnation pressure decreases. | {
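To put numbers on this, here is a sketch using the standard normal-shock stagnation-pressure relation for a calorically perfect gas (textbook material, not derived in the answer above); the entropy jump then follows from the Gibbs relation derived above.

```python
import math

def stagnation_pressure_ratio(M1, gamma=1.4):
    """P02/P01 across a normal shock, calorically perfect gas (standard relation)."""
    a = ((gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)) ** (gamma / (gamma - 1))
    b = ((gamma + 1) / (2 * gamma * M1**2 - (gamma - 1))) ** (1 / (gamma - 1))
    return a * b

R = 287.0  # J/(kg K), air
for M1 in (1.5, 2.0, 3.0):
    r = stagnation_pressure_ratio(M1)
    ds = -R * math.log(r)                  # entropy jump from ds0 = -R dP0/P0
    print(f"M1={M1}: P02/P01={r:.4f}, ds={ds:.1f} J/(kg K)")
```

The ratio is 1 at $M_1=1$ and falls monotonically with $M_1$ (e.g. about 0.72 at $M_1=2$), so $ds_0>0$ in every case, consistent with the argument above.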
"domain": "physics.stackexchange",
"id": 31003,
"tags": "thermodynamics, fluid-dynamics, entropy, ideal-gas, shock-waves"
} |
Would the following be an acceptable part of an algorithm if used for prime factorization | Question: Suppose I have some super fancy algorithm for prime factorization. I want to demonstrate its potential on a difficult case, like an RSA-sized number composed of two primes, $n=p_1p_2$. As far as I know, numbers with exactly two prime factors are considered to be the most difficult. I want to demonstrate that it performs in a good runtime. Would it be considered cheating to hard-code into the algorithm an expression that checks, immediately after finding $p_1$, whether $n$ contains a $p_2$ such that $p_2= \frac{n}{p_1}$, and terminates if so?
Would this be okay for demonstration purposes? Would it fly in an RSA challenge?
Is a provision for such difficult cases a faux-pas in algorithm design?
Answer: It's not cheating. The last step of an algorithm can certainly be: compute $n/p_1$ and check whether that is an integer and is prime. That's an allowable step in an algorithm and can be computed efficiently.
RSA challenges allow you to do whatever you want to obtain a factorization, as long as you can implement it and it finishes running and gives you a result. | {
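As a toy illustration of that last step (entirely my own sketch: trial division stands in for the "super fancy algorithm", and a Miller-Rabin test for the primality check), the early exit on a prime cofactor is just an ordinary terminating condition:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def factor_semiprime(n):
    """Toy factorizer: trial division finds p1, then the 'last step' from the
    answer checks that the cofactor n // p1 is itself prime and stops."""
    p1 = 2
    while p1 * p1 <= n:
        if n % p1 == 0:
            p2 = n // p1
            if is_probable_prime(p2):
                return p1, p2          # early exit: n = p1 * p2
            break
        p1 += 1
    return None

print(factor_semiprime(10403))         # 101 * 103
```

The cofactor check costs one division plus one primality test, both polynomial-time, so adding it never hurts the asymptotic runtime of whatever factoring method precedes it.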
"domain": "cstheory.stackexchange",
"id": 5523,
"tags": "ds.algorithms, factoring, primes"
} |
About comparing distances between different frames of reference, and meter sticks | Question: If a frame of reference $S'$ is moving with respect to a frame of reference $S$ with a velocity $v$ (along the $x$ axis of $S$), then an event $(x,t)$ in $S$ is viewed as an event $(x',t')$ in $S'$ such that
$$x'=\gamma (x-vt).$$
Let us say we have a rod stationary in $S$; the coordinates of its two ends are $x_{1}$ and $x_{2}$. The length of this rod as measured with meters sticks in $S$ is thus $l_{0}=x_{2}-x_{1}$ meters.
If we now turn to $S'$, the rod appears to be moving. At a given instant $t'$, the two ends will have coordinates $x_{1}'$ and $x_{2}'$, and they are related to $x_{1}$ and $x_{2}$ by
$$x'_{1}=\gamma (x_{1}-vt)$$
$$x'_{2}=\gamma (x_{2}-vt).$$
The length of this rod in $S'$ is $l'=x_{2}'-x_{1}'$ meters, and we have
$$x_{2}'-x_{1}'=\gamma (x_{2}-x_{1})$$
or $$\boxed{l' ~\mathrm{meters}=\gamma {l_{0}} ~~\mathrm{meters}.}$$
Questions: Are the meters sticks we use to measure distances in $S$ the same as those we use in $S'$? Aren't the meters sticks we used in $S'$ contracted? And if so, then what does the equation above mean if the meters sticks used in the LHS are different from the meter sticks used in the RHS?
Answer: The idea is as follows: suppose that two friends start at rest with identical sticks of length $l_0$. Then one of the two, call it $S^\prime$, starts moving with respect to the other friend $S$. Now both friends do the same experiment and measure the length of their own stick and the length of the friend's stick. What they both will find is that, while the length of their own stick (the one stationary in their frame) remains the same $l_0$, the length of the friend's stick has shortened. By how much? Exactly by a factor of $\gamma$.
So say that the friend in $S$ measures the length of the stick of the friend in $S^\prime$, moving with respect to him with velocity $v$, he'll measure
$$l^\prime = \frac{l_0}{\gamma}$$
Note that if the friend in the frame $S^\prime$ had measured the length of the stick of the friend in $S$, which for him moves with velocity $-v$, he would have obtained the same result. | {
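One step worth spelling out (my addition, following the standard textbook treatment): the boxed relation in the question compares the endpoint events at the same time $t$ in $S$, but a length measurement in $S^\prime$ requires the two endpoints at the same time $t^\prime$ in $S^\prime$. Using the inverse transformation $x=\gamma(x^\prime + vt^\prime)$ for two events with a common $t^\prime$:

$$x_2 - x_1 = \gamma\left(x_2^\prime - x_1^\prime\right) \quad\Longrightarrow\quad l^\prime = \frac{l_0}{\gamma},$$

i.e. the moving rod is measured to be contracted, consistent with "shortened by a factor of $\gamma$" above; the question's boxed result instead relates coordinate separations taken at a common $t$ in $S$.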
"domain": "physics.stackexchange",
"id": 65449,
"tags": "special-relativity, reference-frames, distance"
} |
Seurat VlnPlot presenting expression of multiple genes in a single cluster | Question: Seurat VlnPlots are most commonly used to visualize differences in any given gene expression across multiple clusters or cell types. For example:
VlnPlot(object = MouseCellAtlas, idents = c("T cell","Neutrophil","Erythroblast","Monocyte","Macrophage"), features = 'Fzd9')
But suppose I want to view the expression of several genes in just idents = "T cell". I can do this to generate three separate plots:
VlnPlot(object = mca, idents = 'T cell', features = c('Fzd9','Ctnnb1','Apc'), combine = TRUE, ncol = 3)
But ideally I'd like this second plot to look just like the first plot, except with only a single tissue on the legend ("T cell") and multiple genes in the various colored violins. Unfortunately, I am not aware of any simple way to combine these violins into a single plot area.
Is there built-in functionality in Seurat for generating a VlnPlot for multiple genes in a single tissue in a single plot? Or can anyone suggest a workaround?
Answer: To do so one workaround it to have your data in "long format" and then use the column that holds the "gene names" as the x variable while plotting.
You can use FetchData() to extract data from a Seurat object. VlnPlot's default is the data slot (of the active assay if using Seurat v3 I suppose). And you can specify which cells and genes to retrieve.
selected_cells <- names(panc8$celltype[panc8$celltype == "gamma"])
data <- FetchData(panc8,
vars = c('FZD9','CTNNB1','APC'),
cells = selected_cells ,
slot = "data")
> head(data)
FZD9 CTNNB1 APC
D101_5 0 0.000000 0.000000
D101_43 0 2.007853 1.001958
D101_93 0 0.000000 0.000000
melt() transforms your data into "long format". By not specifying any arguments, all of the info in the three variables is gathered into two columns:
long_data <- melt(data)
No id variables; using all as measure variables
> head(long_data)
variable value
1 FZD9 0
2 FZD9 0
3 FZD9 0
> tail(long_data)
variable value
1870 APC 0
1871 APC 0
1872 APC 0
ggplot2 is used to add "violin" and "jitter" layers. You can customize the output to look (exactly) like the VlnPlot() output.
ggplot(long_data,
aes(x = variable, y = value)) +
geom_violin() +
geom_jitter(size = 0.1)
And here is the same graph generated by VlnPlot():
There are no "violins" as the counts are almost entirely zeros. And see how different the ranges are in the y axes of the VlnPlot. So the "tweak" I have presented here would only work for genes that are expressed at similar levels / similar ranges. | {
"domain": "bioinformatics.stackexchange",
"id": 1177,
"tags": "r, scrnaseq, seurat, ggplot2"
} |
How is solving Proca equation equivalent to scalar field equation? | Question: My prof. told me that using differential forms proca equation reduces to solving for scalar field equation. How is that? I can’t see how does one relate to Scalar equation using differential forms.
Proca equation: $$\mathcal{L} = \frac{-1}{16}F^{\mu \nu}F_{\mu \nu} + \frac{1}{8\pi}m^2A_\mu A^\mu.$$
Equation of motion for Proca: $$\partial_\mu F^{\mu \nu} + m^2 A^\nu = 0.$$
Answer: The Lagrangian that you use for the Proca equation looks a bit unusual, I will factor out $\frac{1}{8}$ and change the factor ($\frac{m}{\sqrt{\pi}} \rightarrow \frac{m}{\sqrt{2}}$): Then written out in differential forms it looks like:
$$L = 8{\cal{L}} =-\frac{1}{2} dA \wedge \star dA + \frac{1}{2} m^2 A\wedge \star A$$
We will derive the L-E equations via variation (actually I did most of it already in another post, so I will shorten the derivation a little bit, the details can be looked up in the post https://physics.stackexchange.com/a/432941/30506 ):
$$\delta S =\int\delta L = -\frac{1}{2} \int (d\delta A \wedge \star dA + dA \wedge \star d\delta A) + \frac{1}{2} m^2 \int (\delta A \wedge \star A + A\wedge \star \delta A) = -\frac{1}{2} \int 2 d\delta A \wedge \star dA + m^2 \int \delta A \wedge \star A $$
In the next manipulation we will use the product rule:
$$d(\delta A\wedge \star d A) = d\delta A \wedge\star dA - \delta A \wedge d\star dA $$ therefore we have: $$-d\delta A\wedge \star dA = - d(\delta A\wedge \star dA ) -\delta A\wedge d\star dA $$.
We then substitute this into the first term of the varied action:
$$\delta S =\int - d(\delta A\wedge \star dA ) -\int \delta A\wedge d\star dA + m^2 \int \delta A \wedge \star A $$
Finally, the first term is an integral over a total derivative, which can be transformed into a surface integral on whose surface the variation $\delta A=0$. So we finally get:
$$0=\delta S = - \int \delta A \wedge (d\star dA - m^2 \star A )$$
As the last expression has to be zero for all variations $\delta A$, the result of the variation is:
$$d\star dA - m^2 \star A =0 $$ or written a bit more nicely (actually, at the moment I don't know if $\star \star =1$ or $\star \star =-1$, but I will check that):
$$ \star d\star dA = m^2 A$$
In some books $ \delta: = \star d\star$ (this $\delta$, however, has nothing to do with the variation) and with $F=dA$ we get:
$$\delta F =m^2 A $$
The definition of the hodge operator can be looked up in the other post https://physics.stackexchange.com/a/432941/30506 I mentioned at the beginning. | {
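For completeness (this argument is standard but not spelled out in the answer above), the component form of the equation of motion already shows the scalar-field connection the question asks about. Taking the divergence,

$$\partial_\nu\left(\partial_\mu F^{\mu\nu} + m^2 A^\nu\right) = m^2\,\partial_\nu A^\nu = 0,$$

since $\partial_\nu\partial_\mu F^{\mu\nu}=0$ by the antisymmetry of $F^{\mu\nu}$. Hence for $m\neq 0$ the Lorenz condition $\partial_\nu A^\nu = 0$ holds automatically, and substituting it back into the equation of motion gives

$$\left(\Box + m^2\right)A^\nu = 0,$$

i.e. each component of $A^\nu$ satisfies a Klein-Gordon (scalar field) equation, subject to the Lorenz constraint.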
"domain": "physics.stackexchange",
"id": 55765,
"tags": "homework-and-exercises, electromagnetism, lagrangian-formalism, mass, field-theory"
} |
Why is cooked food considered nutritious if proteins decompose at much lower temperatures? | Question: Food is cooked/baked at temperatures that are significantly higher than what's considered normal for proteins/amino acids (40°C). Why is it, then, that such food is still considered nutritious after cooking? (meat, cheesecake, lentils, quinoa, mushrooms, etc.)
Answer: Cooking is just a form of digestion.
What is digestion?
Digestion is the process of breaking down big molecules into smaller molecules. When you cook food you break down big molecules into its small components.
Why do we digest food?
Think about a long sequence of DNA, for example. You eat corn and you have in your body a long sequence of corn DNA. What can you do with that long molecule? Well, nothing. You need to break this long molecule down in order to be able to assimilate the constitutive nutrients. With those nutrients, you can now make up your own DNA. In other words, we need to break up the big Lego castle that is food to get the building blocks in order to rebuild a spaceship. Digestion refers to the first process (breaking up the Lego castle). We refer to the part of metabolism that breaks things down as catabolism and to the part of metabolism that builds things up as anabolism. Note that catabolism + anabolism = metabolism.
Why do we cook then instead of digesting by ourself
Digestion takes much energy and requires the organism to have the right organs and the right matter (enzymes and such). We can save energy (and the other stuff) by digesting the food outside our body. One could say that humans (just like spiders, for example) perform external digestion.
Why cooked food is considered nutritious
Cooked food doesn't have more nutrients than raw food. It is actually likely that in the process of cooking you would lose some nutrients to the water that you typically throw away.
The big difference is that nutrients in cooked food are easier to assimilate, which might end up being healthier depending on the specific food source. Cooking may also destroy potentially toxic products.
Note that in response to the cooking behaviour, the human gut has evolved to be shorter than it was in our ancestors. | {
"domain": "biology.stackexchange",
"id": 4163,
"tags": "proteins, food, nutrition, temperature"
} |
Task space to joint motion space conversion | Question: I am at the moment trying to read and understand this paper, Task Constrained Motion Planning in Robot Joint Space, but seem to have a hard time understanding the math.
The paper describes how to perform task constrained motion planning in cases where a frame is constrained to a specific task.
The problem the paper tackles is that, when sampling in joint space, randomized planners typically produce samples that lie outside the constraint manifold. The method they propose uses a specified motion constraint vector to formulate a distance metric in task space and projects samples to within a tolerance distance of the constraint.
Given this, I seem to be a bit confused about some simple terms they define in this paper.
For example: how is a task space coordinate defined? What information does it have?
They compute $$\Delta x = T_e^t(q_s)$$ which is the transformation matrix of the end effector with respect to the task frame.
What I don't get is: why the end effector? And why the end effector with respect to the task frame?
Secondly.
Later in the paper they write down an expression that relates the task space to the joint space motion. They do it using the Jacobian, but seem to miss explaining (in my opinion) what $E(q_s)$ actually does.
$$J(q_s) = E(q_s)J^t(q_s)$$
What is said about it in the paper is that
Given the configuration $q_s$, instantaneous velocities have a linear
relationship $E(q_s)$
Why the need for "instantaneous"? What is the definition of an instantaneous component? How does it differ from the information given by the Jacobian?
Basically, I don't understand how and why the mapping is as it is.
Answer: "Why the end effector?" Because a robot working on a task uses its end effector to interact with the objects that make up the task. For example, it is common to align the jaws ("fingers") of a parallel-jaw gripper with the sides of an object to be picked up. The simplest way to describe that object is by using a fixed coordinate system in which the task objectives are easy to visualize. Maybe, for example, $\hat x$ is pointed in the direction of travel of a conveyor belt. Since a robot can be placed with its base frame in an arbitrary relationship to the task space, it is usually easier to use the task coordinate system and make the transposes described.
Regarding $E$, look at the paper's appendix. The author uses this matrix to relate the Euler angles of the robot to the task system. | {
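To illustrate what such an $E$ matrix typically looks like (purely my own example; the paper's exact $E$ depends on its Euler-angle convention, which I am not reproducing here), here is the common map from ZYX (roll-pitch-yaw) Euler-angle rates to body-frame angular velocity. Euler rates are not themselves a vector of angular velocity, which is why a linear map like $E$ is needed at each configuration:

```python
import numpy as np

def euler_rate_matrix(roll, pitch):
    """E mapping ZYX Euler-angle rates [roll', pitch', yaw'] to the body-frame
    angular velocity: omega = E @ rates.  (Illustrative convention only; this
    is an assumption, not the paper's definition.)"""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    return np.array([
        [1.0, 0.0,     -sp],
        [0.0,  cr, sr * cp],
        [0.0, -sr, cr * cp],
    ])

E = euler_rate_matrix(0.0, 0.0)
print(np.linalg.det(E))   # det(E) = cos(pitch): the map degenerates at pitch = ±90°
```

The "instantaneous" wording just reflects that this relationship between velocities is linear at a given configuration $q_s$ but changes as the configuration changes; the determinant $\cos(\text{pitch})$ also shows where the representation becomes singular.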
"domain": "robotics.stackexchange",
"id": 1102,
"tags": "robotic-arm, motion-planning, jacobian"
} |
What is the relation between renormalization in physics and divergent series in mathematics? | Question: The theory of Divergent Series was developed by Hardy and other mathematicians in the first half of the past century, giving rigorous methods of summation to get unique and consistent results from divergent series. Or so.
In physics, it is said that the perturbative expansion for the calculation of QFT scattering amplitudes is a divergent series and that its summation is solved via the renormalization group.
Is there some explicit connection between the mathematical theory and the physics formalism?
Perhaps the question can have different answers for two different layers: the "renormalized series", which can be still divergent, and the structure of counter-terms doing the summation at a given order. If so, please make clear what layer are you addressing in the answer. Thanks!
Answer: You are conflating three conceptually different categories of "regularizations" of seemingly divergent series (and integrals).
The type of resummation that Hardy would talk about is similar to zeta-function regularization - the example that is most familiar to physicists. For example,
$$S=\sum_{n=1}^\infty n= -\frac{1}{12}$$
is the most famous sum. Note that this result is unique; it is a well-defined number. In particular, that allows one to calculate the critical dimension of bosonic string theory from $(D-2)S/2+1=0$ and the result is $D=26$. Fundamentally speaking, there is no real divergence in the sum. The "divergent pieces" may be subtracted "completely".
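As a quick numerical illustration of how well-defined this value is (my own sketch: it uses Hasse's globally convergent series for $\zeta(s)$, with no physics-specific machinery, and the term count is an arbitrary choice):

```python
from math import comb

def zeta(s, terms=40):
    """Hasse's globally convergent series for the Riemann zeta function,
    valid for all s != 1 -- a sketch, adequate for small arguments."""
    total = 0.0
    for n in range(terms):
        inner = sum((-1)**k * comb(n, k) * (k + 1)**(-s) for k in range(n + 1))
        total += inner / 2**(n + 1)
    return total / (1 - 2**(1 - s))

S = zeta(-1)        # the regularized value of 1 + 2 + 3 + ...
print(S)            # ≈ -1/12
D = 2 - 2 / S       # solve (D-2)*S/2 + 1 = 0
print(D)            # ≈ 26
```

For $s=-1$ the inner alternating sums vanish identically beyond $n=1$, so the series terminates and returns exactly $-1/12$; plugging that into $(D-2)S/2+1=0$ reproduces the critical dimension $D=26$.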
However, in the usual cases of renormalization - of a loop diagram - in quantum field theory, there are divergences. Renormalization removes the "infinite part" of these terms. A finite term is left but the magnitude of the term is not uniquely determined, like it was in the case of the sum of positive integers. Instead, every type of a divergence in a loop diagram produces one parameter - analogous to the coupling constant - that has to be adjusted. Because the finite results can be "anything", this is clearly something else than the zeta-regularization and, more generally, Hardy's procedures whose very goal was to produce unique, well-defined results for seemingly divergent expressions. Infinitesimally speaking, the Renormalization Group only mixes the lower-order contributions (by the number of loops) into a higher-order contribution.
So these are two different things that one should distinguish.
There is another category of problems that is different from both categories above: the summation of the perturbative expansions to all orders. It can be demonstrated that in almost all field theories - and perturbative string theories as well - the perturbative expansions diverge. For a small coupling, one can sum them up to the smallest term, before the factorial-like coefficients begin to increase the terms again, despite the $g^{2L}$ suppression. The smallest term is of the same order as the leading non-perturbative contributions.
At the very end, if the theory can be non-perturbatively well-defined - and both QCD-like theories and string theory can, at least in principle - the full function as a function of the coupling constant $g$ exists. But it just can't be fully obtained from the perturbative expansion. The Renormalization Group won't really help you because it only mixes the perturbative terms of another order to a perturbative diagram you want to calculate. If you don't know the non-perturbative physics, the equations of the Renormalization Group won't fill the gap because they will keep you in the perturbative realm.
So I have sketched three different things: in the Hardy/zeta problems, the answer to the divergent series was unique; in the particular $L$-loop diagrams in QFT, it wasn't unique but the infinite part was subtracted and the finite part was obtained by a comparison with the experiments; and in the perturbative expansion resummed to all orders, the sum actually didn't converge and indeed, it didn't know about all the information about the full result for a finite $g$.
The last statement may have some subtleties; at least for some theories, the non-perturbative physics is fully determined by the perturbative physics. But I think it is not quite general and we have counterexamples - e.g. for AdS/CFT with orthogonal groups and different discrete values of $B$ etc. So it means that the perturbative expansion doesn't uniquely determine the theory non-perturbatively.
Because the three examples differ at the level of "what can be calculated" and "what cannot", they are different. | {
"domain": "physics.stackexchange",
"id": 1263,
"tags": "mathematical-physics, renormalization"
} |
Kirchhoff's Current Law - Confusion | Question: While I come across some explanations on why KCL works, it is usually attributed to the Law of Conservation of Charge. But the statement of KCL says that the current entering and leaving a node are equal. For this to hold true, the charge entering and leaving the node must be equal (this is guaranteed by the Law of Conservation of Charge). My question is: "Why does the time taken by the charges to enter and leave the circuit need to be equal?"
Could anyone please provide me an intuitive explanation or proof for this? I'm unable to internalize the concept. If I'm clear on this, I could understand why the current is constant in a series circuit.
P.S.: I'm sorry if my Physics vocabulary is not good because I'm a high schooler.
Answer:
Why does the time taken by the charges to enter and leave the circuit need to be equal?
Those do not, in fact, need to be equal. However, if we have a situation where they are at least approximately equal, then we can simplify our analysis a lot.
For some background, circuit theory is a simplification of Maxwell’s equations. It relies on three assumptions. Of these three assumptions, the relevant one here is:
The net charge inside any circuit element is always 0.
So if a charge enters one terminal of a circuit element then the same charge must immediately leave another terminal, otherwise the net charge in the circuit element would be non-zero.
Not all possible circuit elements have that property. For instance, suppose that we wanted to consider each plate of a capacitor as its own separate circuit element. Then current would flow into a plate and not out and it would gain charge. Because of this, in order to use capacitors in circuit theory, we have to consider both plates together as part of one circuit element.
So there is nothing in nature that forces this assumption to be true, but there are many devices where it is true. When we design and build circuits we use those devices so that our circuits are easier to design and understand. So the only reason “why” this is true is because we deliberately construct circuits out of devices where it holds. | {
"domain": "physics.stackexchange",
"id": 94783,
"tags": "electric-circuits, electric-current, charge, conservation-laws"
} |
Nastran RBE2 coordinate change | Question: I am looking for a method of changing the coordinate system of RBE2 or RBE3 elements.
Below is the nastran description of RBE2 in Nastran user guide.
There is no field for coordinate configuration.
Now, I am wondering whether a coordinate change is possible or not.
Could anyone explain how to change the coordinate system of an RBE2?
Answer: The RBE2 element uses the output coordinate systems that are defined for the grid points (field CD on the GRID card).
If you want to use two different coordinate systems at the same point in your model, you can define two grid points with different output coordinate systems, and join them with a zero-length RBE2 or RBAR element.
Note that if a grid point is used in more than one rigid element, it can only be a dependent grid point in one of those elements. But if the rigid element connects all 6 degrees of freedom at each grid, it makes no difference which grid you choose as the independent one, so this isn't a "real" restriction on what you can do - it's only a feature of the way NASTRAN works internally! | {
"domain": "engineering.stackexchange",
"id": 1191,
"tags": "mechanical-engineering, modeling, simulation"
} |
Learning Rate based on error of the network | Question: I am not an expert and do not have theoretical justification for this, but it seems to me that the smaller the network error is, the smaller the learning rate should be.
Is there an algorithm to dynamically update the learning rate based on the total error of the network, without relying on any hyper-parameters?
Answer: Your intuition is on point, and shrinking the learning rate like this is often referred to as "annealing". But linking the learning rate to error magnitude neglects certain problematic error surface topologies. An excellent motivating example is the Rosenbrock "Banana" Function, which is often used as a test case for optimization algorithms. The "banana" is a low error valley which hides the global minimum. If an optimization path finds its way into this valley, the path to the global minimum is along a nearly flat gradient.
If you use an optimization algorithm that naively shrinks the learning rate relative to the error, you're going to get stuck as soon as you hit the valley. On the one hand: congrats! You've achieved a low error solution. But you're not necessarily anywhere near the global minimum. So how can we do better?
An approach used by modern gradient-based methods like Adagrad, RMSProp, and Adam is to separately assign learning rates to each parameter, and tie the learning rate to the magnitude of the respective parameter's update. The Stanford CS231n lecture notes explain:
Adagrad is an adaptive learning rate method originally proposed by Duchi et al.
# Assume the gradient dx and parameter vector x
cache += dx**2
x += - learning_rate * dx / (np.sqrt(cache) + eps)
Notice that the variable cache has size equal to the size of the gradient, and keeps track of per-parameter sum of squared gradients. This is then used to normalize the parameter update step, element-wise. Notice that the weights that receive high gradients will have their effective learning rate reduced, while weights that receive small or infrequent updates will have their effective learning rate increased. | {
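A self-contained toy run of the snippet above (my own illustration; the quadratic objective and the constants are arbitrary choices) shows the per-parameter normalization at work: both coordinates converge even though their raw gradients differ by a factor of 100.

```python
import numpy as np

# Adagrad on f(x) = x0^2 + 100*x1^2, whose two parameters see
# gradients of very different magnitude.
learning_rate, eps = 0.5, 1e-8
x = np.array([1.0, 1.0])
cache = np.zeros_like(x)
for _ in range(500):
    dx = np.array([2 * x[0], 200 * x[1]])      # gradient of f
    cache += dx**2                             # per-parameter sum of squared gradients
    x += -learning_rate * dx / (np.sqrt(cache) + eps)
print(x)   # both coordinates approach 0 despite the 100x gradient gap
```

Because each coordinate's step is divided by the root of its own accumulated squared gradients, the steep coordinate is automatically slowed down and the shallow one sped up, without hand-tuning separate rates.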
"domain": "datascience.stackexchange",
"id": 2755,
"tags": "machine-learning, neural-network, deep-learning, optimization, learning-rate"
} |
Must a strong reducing agent be a weak oxidising agent, and vice versa? | Question: As titled, must a strong reducing agent be a weak oxidising agent, and must a strong oxidising agent be a weak reducing agent? For instance, fluorine is a very strong oxidising agent, and it cannot act as a reducing agent. Is this true for all chemicals? If not, what would be the explanation to why it isn't the case?
Answer: What does it mean for a substance to be strong oxidizer? Well, an oxidizer acquires electrons. For a substance to do a good job of ripping out electrons from most other materials, it needs to have unoccupied electronic states of very low energy in order to coax the electrons to move towards it. For a strong reductant, the whole picture is flipped around - the substance needs occupied electronic states of very high energy, such that the electrons will readily jump out to anything even slightly willing to accept them.
So what would a compound that is a strong oxidizer and a strong reductant look like, conceptually? Well, it would simultaneously need both a very low energy unoccupied state, and a very high energy occupied state. You can roughly imagine taking all of the allowed electronic states in a substance, ordering them by energy, and "filling in" the electronic states from the bottom (lowest energy) and going up until you've assigned all the electrons the substance has.
The problem is that for simple/small molecules in their most stable condition, you can't really do this "filling in" procedure and be left simultaneously with a hole near the bottom and an electron way up high. If this happens, the molecule is going to do whatever it can to make that high energy electron fall into the hole. It can do this either by a direct electron transfer (the molecule self-oxidizes and self-reduces at the same time), or more likely, the molecule will rearrange itself somehow such that it forms new electronic states where there is no deep hole and sky-high electron. So in general, it is true that a substance cannot simultaneously be a strong oxidizer and a strong reductant; if it were, it would just react on its own and stop being at least one of them.
But there are ways around this.
If a molecule is not simple and small, it can be conceivably engineered and subdivided into regions which don't "communicate" electronically very well. In a sense, a strongly oxidizing end of the molecule may not "know" there is also a strongly reducing end. I don't really know of a realized example of this, but it would be a situation similar to intramolecular frustrated Lewis pairs, where molecules simultaneously contain strongly acidic and strongly basic segments which can't interact on their own, usually due to spatial constraints. An electrochemical version of this would likely be a more subtle matter, but is not physically impossible.
The real workaround though, is to realize that you don't always have to operate in the ground electronic state. For many perfectly ordinary substances, even ones with no significant oxidising or reducing power, it is possible to expose them to high energy photons (say, blue, violet or ultraviolet photons). A photon of the right energy can then excite an electron from a low energy state to a high energy one. Not only does the excited electron carry a lot more energy (potentially making it powerfully reducing), but it leaves a hole behind (which can be powerfully oxidizing) - the photon is forcefully breaking the "filling in" procedure. Now, it is possible for the excited molecule to be, in all respects, simultaneously a strong oxidizer and a strong reductant!
This is a delicate condition, which in most cases lasts around a nanosecond, but in certain situations can last substantially longer. Nevertheless, that is plenty of time to do chemical reactions, which generally happen within a few picoseconds. This is the field of photochemistry. Even here though, one typically chooses substances in order to explore only their increased oxidation or reduction power when exposed to light, not both simultaneously in a single substance. | {
"domain": "chemistry.stackexchange",
"id": 17486,
"tags": "redox"
} |
Binary Search Tree: Replace $k$ min elements with their average | Question: Given a valid binary search tree whose keys are unique real numbers, and a set of $k$ pointers to the $k$ minimum elements in the tree, will the BST property be maintained if I replace all $k$ elements with the average of the $k$ elements?
The BST property as given in Cormen:
Let $x$ be a node in a binary search tree. If $y$ is a node in the
left subtree of $x$, then $y.key \leq x.key$. If $y$ is a node in the
right subtree of $x$, then $y.key \geq x.key$.
I've tried this with a few test cases for $k=3$ and a few different trees, and it seems to hold, but I'm not sure if it actually does and how I could prove it.
Answer: The following is either a proof, or an argument that runs in circles.
Given a binary search tree the ordered sequence of keys can be retrieved using the symmetric inorder traversal. Conversely any ordered sequence of keys can be stored in a binary tree with the BST property only if the keys are mapped precisely in inorder. (Proof: the root has a unique value as all keys the the left must be smaller, to the right must be larger. Use recusion for the subtrees).
Now take a BST tree and retrieve its keys in order. Take any consecutive segment of the keys and replace these keys by any value between the first and last of the segment. The result is again an ordered sequence of keys and remapping to the original tree will give again a BST.
Taking the minimum $k$ values and replacing with the mean is a special case. | {
"domain": "cs.stackexchange",
"id": 1557,
"tags": "data-structures, proof-techniques, search-trees"
} |
Stress tensor of fluid in equilibrium, inertial frame | Question: There are some points in this wikipedia chapter. The main equation is:
$$ T^{\alpha \beta} \, = \left(\rho + {p \over c^2}\right)u^{\alpha}u^{\beta} + p g^{\alpha \beta} $$
where $c$ is explicit.
The one for the trace is:
$$T = 3p - \rho c^2$$
that seems contradictory with:
$$T^{\alpha\beta} = \left( \begin{matrix}
\rho & 0 & 0 & 0 \\
0 & p & 0 & 0 \\
0 & 0 & p & 0 \\
0 & 0 & 0 & p
\end{matrix} \right)$$
with trace $3p+\rho$ (difference in sign and value of last term).
The expression for the four-velocity:
$$u^{\alpha} = (1, 0, 0, 0)$$
is not the usual one $(c, 0, 0, 0)$.
Finally, the metric:
$$g^{\alpha\beta} \, = \left( \begin{matrix}
- c^{-2} & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{matrix} \right)$$
also with explicit $c$, it is also not the usual:
$$\left( \begin{matrix}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{matrix} \right)$$
Are the wikipedia equations in this chapter using a coherent notation? If yes, how can the previous points be explained?
Answer: The trace is ${T^\mu}_\mu$, not $T^{\mu\mu}$, so the minus sign from lowering the $\mu=0$ index accounts for the sign difference. Also, $c=1$ for most people, so the wiki may not look consistent at first glance, but getting it right is easy.
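Written out explicitly with the components quoted above, lowering one index with $g_{\mu\nu} = \mathrm{diag}(-c^2, 1, 1, 1)$ gives

$$T = {T^\mu}_\mu = g_{\mu\nu} T^{\mu\nu} = -c^2\,T^{00} + T^{11} + T^{22} + T^{33} = -\rho c^2 + 3p,$$

which matches the quoted trace $T = 3p - \rho c^2$; the naive sum $T^{00}+T^{11}+T^{22}+T^{33} = \rho + 3p$ is not a coordinate-invariant trace.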
Looking at the Wiki, it is $g^{\mu\nu}$ that has the $c^{-2}$, and to lower the index you need $g_{\mu\nu}$, which has the $-c^2$, so the wiki article is consistent.
"domain": "physics.stackexchange",
"id": 70761,
"tags": "general-relativity, stress-energy-momentum-tensor"
} |
How to place XGBoost in a full stack for ML? | Question: Is XGBoost complete by itself for prod-strength machine learning? If not, with which other tools or libs is it typically combined, and how?
(I recently read a description of a stack that included ca 5 pieces, including XGBoost and Keras.)
Answer: Yes, it is a full-strength Machine Learning paradigm.
XGBoost is basically Extreme Gradient Boosting.
It only takes in numeric matrix data. So, you might want to convert your data such that it is compatible with XGBoost.
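For instance, a categorical column has to be encoded numerically before XGBoost can consume it. One common approach is one-hot encoding, sketched here with NumPy (the column values are invented for illustration):

```python
import numpy as np

# A categorical feature column, as it might appear in raw data.
colors = np.array(["red", "green", "red"])

# Map each category to an integer code, then expand to one-hot rows.
categories, codes = np.unique(colors, return_inverse=True)
one_hot = np.eye(len(categories))[codes]  # numeric matrix XGBoost can consume
```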
The wide range of parameters of the xgboost paradigm is what makes it so diverse. Boosting can be done on trees and linear models, and then more parameters can be defined depending on the model you have selected.
So, yes, it is a complete paradigm in itself. But when you want more than xgboost's linear and tree models allow, you can use the concept of ensembling.
In the case of ensembles, the tools/libraries which can be used depends on the data scientist who is conducting the experiment. It can be Keras or Theano or TensorFlow, or anything which he/she is comfortable with. (opinion-based) | {
"domain": "datascience.stackexchange",
"id": 530,
"tags": "machine-learning, python, tools, xgboost"
} |
Entity Framework Core verify SaveChanges count | Question: I am using ASP.NET Core 3.1 and have been assigned a task to verify the count of changes done using SaveChanges().
It is expected that the developer should know beforehand how many records will be changed when SaveChanges() is called.
To implement it, I have created an extension method for DbContext called SaveChangesAndVerify(int expectedChangeCount) where I am using transaction and equating this parameter with the return value of SaveChanges().
If the values match, the transaction is committed and if it doesn't match, the transaction is rolled back.
Please check the code below and let me know if it would work and if there are any considerations that I need to make. Also, is there a better way to do this?
public static class DbContextExtensions
{
public static int SaveChangesAndVerify(this DbContext context, int expectedChangeCount)
{
context.Database.BeginTransaction();
var actualChangeCount = context.SaveChanges();
if (actualChangeCount == expectedChangeCount)
{
context.Database.CommitTransaction();
return actualChangeCount;
}
else
{
context.Database.RollbackTransaction();
throw new DbUpdateException($"Expected count {expectedChangeCount} did not match actual count {actualChangeCount} while saving the changes.");
}
}
public static async Task<int> SaveChangesAndVerifyAsync(this DbContext context, int expectedChangeCount, CancellationToken cancellationToken = default)
{
await context.Database.BeginTransactionAsync();
var actualChangeCount = await context.SaveChangesAsync();
if(actualChangeCount == expectedChangeCount)
{
context.Database.CommitTransaction();
return actualChangeCount;
}
else
{
context.Database.RollbackTransaction();
throw new DbUpdateException($"Expected count {expectedChangeCount} did not match actual count {actualChangeCount} while saving the changes.");
}
}
}
A sample usage would be like context.SaveChangesAndVerify(1) where a developer is expecting only 1 record to update.
Answer: I can't think of another way of doing this - only the database knows how many rows have been affected. That said, your code has some problems.
You need to dispose of your transaction when you're done with it. This has 2 additional benefits:
You don't need to rollback manually
You don't need to worry about exceptions in SaveChanges (you haven't handled that at all at the moment).
Let's look at how that changes things:
public static int SaveChangesAndVerify(
this DbContext context,
int expectedChangeCount)
{
using (var transaction = context.Database.BeginTransaction())
{
var actualChangeCount = context.SaveChanges();
if (actualChangeCount == expectedChangeCount)
{
transaction.Commit();
return actualChangeCount;
}
throw new DbUpdateException($"Expected count {expectedChangeCount} did not match actual count {actualChangeCount} while saving the changes.");
}
} | {
"domain": "codereview.stackexchange",
"id": 39435,
"tags": "c#, asp.net-core, entity-framework-core"
} |
Different formulas for magnetic flux | Question: I am familiar with the formula $\phi=BA\cos(\theta)$ for magnetic flux, where I know $\theta$ is the angle between $B$ and $A$. I came across another formula: $\phi=BA\cos(\omega t)$. I wanted to ask why $\theta$ is replaced by $\omega t$ here; are the two formulas used in different cases?
Answer: Maybe it's better to start from the "original" definition of magnetic flux, which is the amount of the magnetic field passing perpendicularly through some surface area, or, mathematically, the dot product of the magnetic field ($\bf B$) and that surface area ($\bf A$) (remember that area is a vector which is perpendicular to the surface):
$$ \phi = {\bf B \cdot A} = B A \cos\left(\theta\right)$$
That's why you get $\cos\left(\theta\right)$. Now imagine either the magnet or the surface is rotating; then the amount of the field passing perpendicularly through the surface will change as a function of time, and this variation (rotation) is expressed by $\theta$ as a function of time ($t$), namely $\theta(t) = \omega t$, where $\omega$ is the angular frequency of the rotation. Therefore the flux can now be written as:
$$ \phi = B A \cos\left( \theta(t) \right) = B A \cos\left( \omega t \right) $$ | {
"domain": "physics.stackexchange",
"id": 60132,
"tags": "electromagnetism, magnetic-fields"
} |
Tertiary carbanion in more alkylated alkene | Question: Saytzeff rule says that more alkylated alkene is more stable and the reason is hyper conjugation.
Going by this, 2,3-dimethylbut-2-ene should be the most stable alkene due to its 12 alpha-hydrogens.
But on drawing hyperconjugative structures of the molecule and of a less alkylated alkene, say 2-methylpropene, there was formation of a tertiary carbanion in 2,3-dimethylbut-2-ene and not in 2-methylpropene. Instead, in this molecule there is a primary carbanion, which is much more stable than a tertiary one. I know that there will be 12 hyperconjugative structures in 2,3-dimethylbut-2-ene rather than 6 structures, but isn't a tertiary carbanion poorly stable, and doesn't it make the overall molecule less stable than 2-methylpropene?
Can anyone please prove me wrong?
Thanks in advance
Answer: There are two types of description of hyperconjugation: (1) the valence bond approach & (2) the molecular orbital approach. Your question pertains to the valence bond approach, as you are interested in the stability of the canonical forms.
In Valence bond approach, hyperconjugation is said to happen due to conjugation of C—H bond electron pair with adjacent double bond. We call this phenomenon 'hyper'-conjugation because there is no real negative charge that is getting delocalized. Rather, the partial negative charge which is accumulated on the carbon due to electronegativity difference with hydrogen is getting delocalized.
In a real carbanion, the carbon has ~100% share of the negative charge. However, in this case, the carbon has only partial share of the negative charge. To be precise, about 7%, because 7% is the ionic character of a C—H bond (Calculated using Hannay-Smith's Relationship for Calculating Percentage Ionic Character [1]).
In the case of conjugation of a real carbanion, the ~100% negative charge is getting delocalized, and you can give importance to the stability of canonical forms to determine whether that conjugation is relevant. However, when ~7% negative charge is getting delocalized, it is more important to give importance to the number of such conjugations than to the relevance of each conjugation based on the stability of canonical forms, because a ~7% negative charge won't cause much destabilization even in a tertiary carbanion!
In the molecular orbital approach, hyperconjugation is said to happen due to overlap of bonding σ orbitals of the C—H bond with the adjacent antibonding π* orbital (in the case of a double bond). Owing to energy & orientation mismatch, the overlap is minimal, and thus stabilization arising from hyperconjugation becomes significant only when the number of hyperconjugations is greater. So, this explains why 2,3-dimethyl-but-2-ene assumes much greater stabilization from hyperconjugation compared to 2-methyl-propene.
"domain": "chemistry.stackexchange",
"id": 13361,
"tags": "organic-chemistry"
} |
Einstein's mirror in train thought experiment | Question: I'm a bit confused about one of Einstein's thought experiments.
In his experiment, he sits in a train travelling at the speed of light and holds up a mirror. From what I've read and researched, he will see his reflection in the mirror. I don't understand why or how he will. Any explanations would be appreciated.
Thanks in advance.
Answer: That is because the speed of light is a fundamental quantity and the frame of reference doesn't change its value. The speed of light doesn't follow Newtonian relative motion, and that is the very essence of this thought experiment. If the speed of light followed relative motion, then the person sitting on the train (travelling at light speed) could never have seen himself in the mirror, as the speed of the light would have become $0$ and never reached his/her eyes; but that is not the case.
"domain": "physics.stackexchange",
"id": 90674,
"tags": "special-relativity, inertial-frames, thought-experiment"
} |
std::vector of pointers with the rule of three | Question: I have a class that has a std::vector of pointers. I'm NOT going to give any of those pointers to objects outside of it; I mean, I'm not going to share the pointers. I was reading that smart pointers aren't used everywhere, and that a raw pointer isn't useless as long as someone is responsible for deleting it. The problem with this class is that it can be inherited, and if I don't use the rule of three, all the classes that inherit from it will have a copy of the vector with the pointers, and then the destructors of those objects will result in a double delete. Is it safe, or do I still need to use std::shared_ptr in this case?
class Keeper
{
public:
Keeper() {}
Keeper(const Keeper&) = delete;
Keeper& operator=(const Keeper&) = delete;
virtual ~Keeper()
{
for (int i = 0; i < pointers.size(); ++i)
{
delete pointers[i];
}
}
void push_back(Pointer * pointer)
{
pointers.push_back(pointer);
}
// erase method...
protected:
std::vector<Pointer *> pointers;
};
Answer: Is it safe? To a point.
Naming
I hope the name Pointer is just for illustration purposes as otherwise that is a pretty terrible name.
The changes I would make are:
Accepting pointers must indicate ownership.
Your current interface does not indicate that it is taking ownership of the pointers. So as a user of your class I need to dig inside and find out if you are taking ownership or not before I can use it.
void push_back(Pointer * pointer);
I would change the interface to specifically take a unique_ptr that way people know that you are taking ownership of the object.
void push_back(std::unique_ptr<Pointer> pointer);
No confusion here.
Storage of pointers.
I have no problem with you storing a vector of pointers or a vector of smart pointers. Either makes sense, as long as you define the Copy Constructor / Assignment Operator and Destructor if you use raw pointers (like you have). You could argue that keeping an array of smart pointers will save you some work in coding, at a tiny (probably insignificant; see the clarification below) cost of managing the data. Alternatively you can use a container specifically designed to hold pointers.
std::vector<Pointer*> data; // Add Rule of Three.
std::vector<std::unique_ptr<Pointer>> data; //
boost::ptr_vector<Pointer> data; // Takes ownership of pointers.
// Only exposes members as references
// to actual object (not pointer) and thus
// makes using it with standard algorithms
// much easier and intuitive.
To Clarify because of comments: I suspect for most compilers the cost of std::unique_ptr will be zero at runtime. But it is something worth validating with your compiler before assuming.
Access from derived types.
Personally I would not give direct access to the vector from derived class like that.
protected:
std::vector<Pointer *> pointers;
Even if you have a vector of unique_ptr it is still too easy for the derived class to break encapsulation and do something you don't intend. You should provide them with a safer interface. How you do that depends.
If you know none of the members will be NULL then I would provide them with a function that returns a reference.
protected:
// Anybody that wants to access the data gets a reference.
Pointer& data(std::size_t index) {return *pointers[index];}
// Note: the above code just uses the original code as a base line.
// If you change the storage medium I would probably still
// provide this as an interface layer so that I don't
// break encapsulation.
//
// Of course there are a lot of caveats that depend on how
// this will be used. But without further context this is
// the best I can do at the moment.
private:
std::vector<Pointer*> pointers;
If you want derived classes to modify the class you may need to think of something else. But you have not provided enough context for me to go further.
Iterator
Based on comments just showing how easy it would be to throw an iterator together that can be used without exposing implementations details. Note: this one is not complete but shows enough of how it would work (and I was slightly bored at the time and wanted to write some code).
class MyIterator
{
Keeper* parent;
    int index;
public:
MyIterator(Keeper& parent, int index)
: parent(&parent)
, index(index)
{}
MyIterator& operator++()
{
++index;
return *this;
}
MyIterator operator++(int)
{
MyIterator result(*this);
++(*this);
return result;
}
Pointer& operator*()
{
return parent->data(index);
}
}; | {
"domain": "codereview.stackexchange",
"id": 10560,
"tags": "c++, c++11, memory-management, c++14"
} |
Eigenvalues of Hamiltonian in Another Basis | Question: I am taking a quantum mechanics class and was assigned this problem:
Among other things, I am asked to find the eigenvalues of $H$ in terms of $a$, $b$ and $\sigma$. I'm sort of lost on even how to approach this.
Since $\hat{A}$ is Hermitian, I am assuming that $|a\rangle$ and $|b\rangle$ are orthonormal and complete. After that I'm pretty lost about even how to start. I have roughly written down that
$ \hat H | \lambda_n\rangle = \lambda_n|\lambda_n\rangle$ where $\lambda$ is an eigenvalue and $|\lambda_n\rangle$ is an eigenvector. What I don't grasp at all, is how to reconcile the outer product,
$|a\rangle \langle b| + |b\rangle \langle a| $ since that result is a matrix, right?
Answer: I think the easiest way to solve these problems when you first encounter them is to convert them into a matrix problem, where it is easier to proceed.
Since $|a\rangle$ and $|b\rangle$ form an orthonormal basis, one way of proceeding is to represent them as the column vectors:
$$
|a\rangle := \begin{bmatrix} 1 \\ 0 \end{bmatrix} \ \ \text{and} \ \ |b\rangle := \begin{bmatrix} 0 \\ 1 \end{bmatrix}
$$
You can pick any vectors you like (as long as $|a\rangle$ and $|b\rangle$ are orthonormal), so we may as well pick the above simple case.
You can look up the rules for outer products on wikipedia, what you get is:
$$
|a\rangle\langle b| = \begin{bmatrix} 1 \\ 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \\
|b\rangle\langle a| = \begin{bmatrix} 0 \\ 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \\
$$
which means that your Hamiltonian is the matrix:
$$
H = \sigma \big( |a\rangle\langle b|+|b\rangle\langle a| \big) = \begin{bmatrix} 0 & \sigma \\ \sigma & 0 \end{bmatrix}
$$
This is probably going to be useful for you, because it is less abstract and I assume you've diagonalized a $2\times 2$ matrix before. You should check this yourself, but the eigenvalues turn out to be $\lambda_{\pm} = \pm \sigma$ and the (normalized) eigenvectors are $| \lambda_{\pm} \rangle = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ \pm 1 \end{bmatrix}$.
What you do at the end of the day is notice that the eigenvectors can be writen in terms of $|a\rangle := \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $|b\rangle := \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, where we get:
$$
| \lambda_{\pm} \rangle = \frac{1}{\sqrt{2}}|a\rangle \pm \frac{1}{\sqrt{2}}|b\rangle
$$
You should double check that these satisfy $H |\lambda_{\pm}\rangle = \lambda_{\pm} |\lambda_{\pm}\rangle$ in terms of the abstract vectors (no longer written as column vectors).
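A quick numerical double check of this diagonalization (with an arbitrary illustrative value $\sigma = 2$):

```python
import numpy as np

sigma = 2.0
# H = sigma * (|a><b| + |b><a|) in the {|a>, |b>} basis.
H = sigma * np.array([[0.0, 1.0],
                      [1.0, 0.0]])

# eigh returns eigenvalues of a Hermitian matrix in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(H)
print(eigenvalues)           # -sigma and +sigma, as derived above
print(np.abs(eigenvectors))  # each column is (1, 1)/sqrt(2), up to signs
```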
The magic of the above is that you would have found exactly the same answer, even if you used a different representation of $|a\rangle$ and $|b\rangle$ (for example, if you want to give yourself a headache, try the same calculation with the different representation $|a\rangle := \frac{1}{\sqrt{17^2 + \pi^2}} \begin{bmatrix} 17 \\ \pi \end{bmatrix}$ and $|b\rangle := \frac{1}{\sqrt{17^2 + \pi^2}} \begin{bmatrix} \pi \\ - 17 \end{bmatrix}$. You'll get the same abstract answer $| \lambda_{\pm} \rangle = \frac{1}{\sqrt{2}}|a\rangle \pm \frac{1}{\sqrt{2}}|b\rangle$ with $\lambda_{\pm} = \pm \sigma$ at the end of the day!) | {
"domain": "physics.stackexchange",
"id": 86169,
"tags": "quantum-mechanics, homework-and-exercises, hilbert-space, hamiltonian, eigenvalue"
} |
Stream Partitioner | Question: Discussion
I've been trying to increase my knowledge of Java Streams, and for practice I devised the requirement of partitioning a Stream of values into a Stream of Lists of values.
Implementation
package data.structures.streams.utils;
import java.util.ArrayList;
import java.util.List;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.function.Consumer;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;
public class Partitioner {
static class PartitionSpliterator<T> extends Spliterators.AbstractSpliterator<List<T>> {
private final Spliterator<T> values;
private final int partitionSize;
private List<T> currentPartition;
public PartitionSpliterator(final Spliterator<T> values, final int partitionSize) {
super(
Double.valueOf(Math.ceil((double) (values.estimateSize() / partitionSize)))
.intValue(),
ORDERED | IMMUTABLE | SIZED | SUBSIZED
);
if (1 > partitionSize) {
throw new IllegalArgumentException("Size must be a positive integer");
}
this.values = values;
this.partitionSize = partitionSize;
this.currentPartition = null;
}
@Override
public boolean tryAdvance(Consumer<? super List<T>> action) {
if (null == currentPartition) {
currentPartition = new ArrayList<>();
}
while (partitionSize > currentPartition.size() && values.tryAdvance(currentPartition::add)) {
}
if (currentPartition.isEmpty()) {
return false;
}
action.accept(currentPartition);
currentPartition = null;
return true;
}
}
public static <Value> Stream<List<Value>> partition(final Stream<Value> values, final int size) {
if (1 > size) {
throw new IllegalArgumentException("Size must be a positive integer");
}
return StreamSupport.stream(
new PartitionSpliterator<>(values.spliterator(), size),
values.isParallel()
);
}
}
Answer: Avoid naming generic types in such a way that they look like classes. This is confusing to readers of the code. The official convention would suggest V rather than Value. Other answers to that question provide other conventions.
The code has every comparison between two values done in the opposite way from what the vast majority of programmers would expect. This makes the code harder to read.
When comparing a constant to a variable, by convention the variable is first.
When comparing two variables, if one is iterated upwards and one is the fixed upper bound, by convention the iterated value is first.
When comparing a variable to null, by convention the variable is first.
currentPartition is scoped incorrectly. It is only used in tryAdvance, and does not persist beyond a single call. It should be moved into that method. There is also no value in setting it to null.
The while loop in tryAdvance is technically correct, but it is harder for readers to understand that the important bit is the side effect of the tryAdvance call. Strongly consider rewriting it to just repeatedly call tryAdvance. This is very easy to read and should be highly performant. If, later, performance testing shows this is a bottleneck (it won't), put the tryAdvance call into the while block and break if the boolean comes back false.
Since you know the size of the partition, you can presize the ArrayList. This will usually not matter, but in some cases it might be somewhat more performant, and there's no readability impact. Note that you may still see one resize depending on the JDK's implementation.
PartitionSpliterator's size check calls the variable size, but the parameter name is partitionSize.
PartitionSpliterator should be private and final.
By convention, public members appear before private members in a class, and also methods appear before nested classes.
Partitioner is not designed for extension and should be final.
Partitioner is not designed to be instantiated, and should have a private constructor to prevent instantiation. (By convention, constructors appear before other methods, even if the constructor is private and the method is public).
The first argument (est) to super in PartitionSpliterator() is very hard to read and also incorrect. If the passed-in spliterator returns MAX_VALUE, you will assign a very large and probably very bad estimate. Either use MAX_VALUE always here, or add a private static method which correctly computes the estimate.
Then note that it only needs an estimated value, and being off by 1 is fine. Just divide the sizes and add 1. This will be right in every case except where they divide evenly, in which case it will be 1 too large.
If you made all these changes, your code might look more like:
public final class Partitioner {
private Partitioner() {}
public static <V> Stream<List<V>> partition(final Stream<V> values, final int size) {
if (size < 1) {
throw new IllegalArgumentException("Size must be a positive integer");
}
return StreamSupport.stream(
new PartitionSpliterator<>(values.spliterator(), size),
values.isParallel()
);
}
private static final class PartitionSpliterator<T> extends Spliterators.AbstractSpliterator<List<T>> {
private final Spliterator<T> values;
private final int partitionSize;
public PartitionSpliterator(final Spliterator<T> values, final int partitionSize) {
super(computeEstimatedSize(values.estimateSize(), partitionSize),
ORDERED | IMMUTABLE | SIZED | SUBSIZED
);
if (partitionSize < 1) {
throw new IllegalArgumentException("Size must be a positive integer");
}
this.values = values;
this.partitionSize = partitionSize;
}
@Override
public boolean tryAdvance(Consumer<? super List<T>> action) {
List<T> currentPartition = new ArrayList<>(partitionSize);
            while (currentPartition.size() < partitionSize) {
                boolean addedValue = values.tryAdvance(currentPartition::add);
                if (!addedValue) {
                    break;
                }
            }
if (currentPartition.isEmpty()) {
return false;
}
action.accept(currentPartition);
return true;
}
private static long computeEstimatedSize(long estimatedValuesSize, int partitionSize) {
if (estimatedValuesSize == Long.MAX_VALUE) {
return Long.MAX_VALUE;
}
return (estimatedValuesSize / partitionSize) + 1;
}
}
} | {
"domain": "codereview.stackexchange",
"id": 43199,
"tags": "java, functional-programming"
} |
Correct way to expand the generating functional | Question: Consider the following self-interacting real scalar field theory
$$
\mathcal{L} = \frac{1}{2}(\partial_\mu\phi)(\partial^\mu\phi) - \frac{1}{2}m^2\phi^2 - \frac{\lambda}{4!}\phi^4
$$
with $m^2 > 0$ and $\lambda > 0$. It is well-known that the generating functional of the full (i.e., interacting) theory can be written as
$$
Z[J] = \exp\Biggl\{-i\frac{\lambda}{4!} \int dx\ \frac{\delta^4}{\delta J(x)^4}\Biggr\}Z_0[J]
$$
where
$$
Z_0[J] = \mathcal{N} \exp\Biggl\{-\frac{1}{2} \int dz \int dw\ J(z) \Delta_F(z - w) J(w)\Biggr\}
$$
is the generating functional of the corresponding free theory. Here
$$
\Delta_F(z - w) = \int \frac{dp}{(2\pi)^d}\ \frac{i}{p^2 - m^2} e^{ip(z - w)}
$$
is Feynman's propagator of the scalar field $\phi$ and $\mathcal{N}$ is a normalization constant.
We can expand -in powers of $\lambda$- the exponential operator
$$
\exp\Biggl\{-i\frac{\lambda}{4!} \int dx\ \frac{\delta^4}{\delta J(x)^4}\Biggr\} = \sum^\infty_{\ell=0} \frac{(-i)^\ell \lambda^\ell}{(4!)^\ell\ell!} \int dx_1 \cdots \int dx_\ell\ \frac{\delta^{4\ell}}{\delta J(x_1)^4 \cdots \delta J(x_\ell)^4}.
$$
On the other hand, my problem is that:
I am not sure how to make a series expansion of the generating functional $Z_0[J]$ of the free theory.
I see two possibilities:
According to equation (1.49) from this document (here, the author is working with the $\phi^3$-real scalar theory and the generating functional is called $W[J]$ instead of $Z[J]$), I should be able to make the expansion
\begin{align}
Z_0[J] &= \mathcal{N} \exp\Biggl\{-\frac{1}{2} \int dz \int dw\ J(z) \Delta_F(z - w) J(w)\Biggr\} \\
&= \mathcal{N} \sum^\infty_{k=0} \frac{1}{k!} \Biggl(-\frac{1}{2} \int dz \int dw\ J(z) \Delta_F(z - w) J(w)\Biggr)^k \\
&= \mathcal{N} \sum^\infty_{k=0} \frac{(-1)^k}{2^kk!} \int dz_1 \int dw_1 \cdots \int dz_k \int dw_k\ J(z_1)J(w_1) \cdots J(z_k)J(w_k) \Delta_F(z_1 - w_1) \cdots \Delta_F(z_k - w_k).
\end{align}
If this is correct, then we would have the functional expansion
\begin{align}
Z[J] &= \mathcal{N} \sum^\infty_{\ell=0} \sum^\infty_{k=0} \frac{(-1)^k(-i)^\ell \lambda^\ell}{(4!)^\ell\ell!2^kk!} \int dx_1 \cdots \int dx_\ell\ \int dz_1 \int dw_1 \cdots \int dz_k \int dw_k \\
&\quad \times \Delta_F(z_1 - w_1) \cdots \Delta_F(z_k - w_k) \frac{\delta^{4\ell}}{\delta J(x_1)^4 \cdots \delta J(x_\ell)^4} J(z_1)J(w_1) \cdots J(z_k)J(w_k).
\end{align}
Consequently, some lowest-order terms are
\begin{align}
\frac{Z[J]}{\mathcal{N}} &= \lambda^0\Biggl\{1 - \frac{1}{2} \int dz_1 \int dw_1\ \Delta_F(z_1 - w_1)J(z_1)J(w_1) + \cdots\Biggr\} \\
&\quad + \lambda\Biggl\{- \frac{i}{8} \int dx_1\ \Delta_F(x_1 - x_1)\Delta_F(x_1 - x_1) + \cdots\Biggr\} + \mathcal{O}(\lambda^2).
\end{align}
According to equation (92) from this document, the correct series expansion of $Z_0[J]$ is given by a Volterra series
\begin{align}
Z_0[J] &= \mathcal{N} \exp\Biggl\{-\frac{1}{2} \int dz \int dw\ J(z) \Delta_F(z - w) J(w)\Biggr\} \\
&= \sum^\infty_{k=0} \frac{1}{k!} \int dz_1 \cdots \int dz_k\ J(z_1) \cdots J(z_k) \frac{\delta^kZ_0[J]}{\delta J(z_1) \cdots \delta J(z_k)}\Biggr|_{J=0}.\end{align}
If this is correct, then we would have the functional expansion
\begin{align}
Z[J] &= \sum^\infty_{\ell=0} \sum^\infty_{k=0} \frac{(-i)^\ell \lambda^\ell}{(4!)^\ell\ell!k!} \int dx_1 \cdots \int dx_\ell\ \int dz_1 \cdots \int dz_k \\
&\quad \times \Biggl[\frac{\delta^{4\ell}}{\delta J(x_1)^4 \cdots \delta J(x_\ell)^4} J(z_1) \cdots J(z_k)\Biggr] \frac{\delta^kZ_0[J]}{\delta J(z_1) \cdots \delta J(z_k)}\Biggr|_{J=0}.
\end{align}
Consequently, some lowest-order terms are
\begin{align}
\frac{Z[J]}{\mathcal{N}} &= \lambda^0\Biggl\{1 - \frac{1}{2} \int dz_1 \int dz_2\ J(z_1)J(z_2)\Delta_F(z_1 - z_2) \\
&\quad + \frac{1}{4!}\int dz_1 \int dz_2 \int dz_3 \int dz_4\ J(z_1)J(z_2)J(z_3)J(z_4) \\
&\quad \times \Bigl[\Delta_F(z_1 - z_2)\Delta_F(z_3 - z_4) + \Delta_F(z_1 - z_3)\Delta_F(z_2 - z_4) + \Delta_F(z_1 - z_4)\Delta_F(z_2 - z_3)\Bigr] + \cdots\Biggr\} \\
&\quad + \lambda\Biggl\{\cdots\Biggr\} + \mathcal{O}(\lambda^2).
\end{align}
Thus, as far as I can see, both expansions are not yielding the same result. I would like to know which one is the correct one and why the other is wrong; or maybe both are equivalent but I can't see it.
EDIT: if both expansions for the generating functional $Z_0[J]$ are equal, i.e.
$$
\sum^\infty_{k=0} \frac{1}{k!} \int dz_1 \cdots \int dz_k\ J(z_1) \cdots J(z_k) \frac{\delta^kZ_0[J]}{\delta J(z_1) \cdots \delta J(z_k)}\Biggr|_{J=0} = \mathcal{N} \sum^\infty_{k=0} \frac{(-1)^k}{2^kk!} \int dz_1 \int dw_1 \cdots \int dz_k \int dw_k\ J(z_1)J(w_1) \cdots J(z_k)J(w_k) \Delta_F(z_1 - w_1) \cdots \Delta_F(z_k - w_k)
$$
then, could we set the following identification?
$$
\frac{\delta^kZ_0[J]}{\delta J(z_1) \cdots \delta J(z_k)}\Biggr|_{J=0} = \mathcal{N} \frac{(-1)^k}{2^k} \int dw_1 \cdots \int dw_k\ J(w_1) \cdots J(w_k) \Delta_F(z_1 - w_1) \cdots \Delta_F(z_k - w_k)
$$
Answer: The two series for $Z_0[J]$ are equivalent. This generating functional is defined by :
\begin{align}
Z_0[J] &= \mathcal{N} \exp\Biggl\{-\frac{1}{2} \int dz \int dw\ J(z) \Delta_F(z - w) J(w)\Biggr\} \\
&= \mathcal{N} \sum^\infty_{k=0} \frac{1}{k!} \Biggl(-\frac{1}{2} \int dz \int dw\ J(z) \Delta_F(z - w) J(w)\Biggr)^k \\
\end{align}
Then, the second series expansion is Taylor's formula :
\begin{align}
Z_0[J]&= \sum^\infty_{k=0} \frac{1}{k!} \int dz_1 \cdots \int dz_k\ J(z_1) \cdots J(z_k) \frac{\delta^kZ_0[J]}{\delta J(z_1) \cdots \delta J(z_k)}\Biggr|_{J=0}\end{align}
This formula actually holds for any functional $F[J]$ (which admits a formal series expansion) :
\begin{align}
F[J]&= \sum^\infty_{k=0} \frac{1}{k!} \int dz_1 \cdots \int dz_k\ J(z_1) \cdots J(z_k) \frac{\delta^kF[J]}{\delta J(z_1) \cdots \delta J(z_k)}\Biggr|_{J=0}\end{align}
To prove Taylor's formula, we assume that we can expand $F$ as :
$$ F[J] = \sum_{k=0}^\infty \int dz_1\ldots\int dz_kF_k(z_1,\ldots,z_k)J(z_1)\ldots J(z_k)$$
where the $F_k$ are integral kernels. The calculations should work even if the $F_k$ are higher order distributions, but this is not needed here.
Then, we compute its $k$th functional derivative at $J = 0$. This vanishes except on the term which contains exactly $k$ insertions of $J$. Therefore :
$$\left.\frac{\delta^k F[J]}{\delta J(z_1)\ldots \delta J(z_k)}\right|_{J=0} = \sum_{\sigma \in \mathfrak S_k}F_k(z_{\sigma(1)},\ldots z_{\sigma(k)})$$
where the sum runs over all permutations of $\{1,\ldots,k\}$. When we integrate over $z_1,\ldots,z_k$, we can relabel the variables :
\begin{align}
\int dz_1 \cdots \int dz_k\ J(z_1) \cdots J(z_k) \frac{\delta^kF[J]}{\delta J(z_1) \cdots \delta J(z_k)}\Biggr|_{J=0} &=\sum_{\sigma \in \mathfrak S_k}\int dz_1 \cdots \int dz_k\ J(z_1) \cdots J(z_k)F_k(z_{\sigma(1)},\ldots z_{\sigma(k)}) \\
&= k! \int dz_1\ldots\int dz_kF_k(z_1,\ldots,z_k)J(z_1)\ldots J(z_k)
\end{align}
Dividing by $k!$ and summing over $k$, we see that Taylor's formula holds.
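The structure of this proof can be sanity-checked in a finite-dimensional analogue, where $Z_0[J]$ becomes an ordinary Gaussian of a few variables and functional derivatives become partial derivatives. A small numerical sketch (the matrix entries and step size below are illustrative, not from the question):

```python
import math

# finite-dimensional analogue of Z_0[J]: a Gaussian in two "source" variables
# with a symmetric 2x2 "propagator" matrix A
A = [[2.0, 0.5],
     [0.5, 1.0]]

def Z0(j1, j2):
    q = A[0][0] * j1 * j1 + 2 * A[0][1] * j1 * j2 + A[1][1] * j2 * j2
    return math.exp(-q / 2)

# mixed second derivative d^2 Z0 / dJ1 dJ2 at J = 0 by central differences;
# this is the analogue of delta^2 Z0 / delta J(z1) delta J(z2)|_{J=0} = -Delta_F(z1 - z2)
h = 1e-4
d2 = (Z0(h, h) - Z0(h, -h) - Z0(-h, h) + Z0(-h, -h)) / (4 * h * h)
print(d2)  # ≈ -A[0][1] = -0.5
```

The second Taylor coefficient indeed reproduces minus the "propagator" entry, matching the $n=2$ case of the formula below.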
NB : the same calculations would have worked directly on $Z_0[J]$ but the precise expression for the kernels $F_k(z_1,\ldots,z_k)$ is hard to write down formally (Wick's theorem). Since its precise form is not needed, it is easier to do the proof in a more general setting.
Edit : The expression for the functional derivatives of the free generating functional are given, for $n$ an even integer, by :
\begin{align}
\left.\frac{\delta^{n} Z_0[J]}{\delta J(z_1) \ldots \delta J(z_{n})}\right|_{J=0} &= \frac{1}{(n/2)!}\left(-\frac 12\right)^{n/2}\sum_{\sigma \in\mathfrak S_{n}} \Delta_F(z_{\sigma(1)}-z_{\sigma(2)})\ldots \Delta_F(z_{\sigma(n-1)} -z_{\sigma(n)})
\end{align}
For $n$ odd, the functional derivative vanishes. | {
"domain": "physics.stackexchange",
"id": 89334,
"tags": "quantum-field-theory, feynman-diagrams, path-integral, perturbation-theory, propagator"
} |
How does nuclear fuel reprocessing work? | Question: As far as I know, the spent fuel rods are crushed and dissolved in nitric acid.
What comes after that? This nitric acid solution should contain a very wide spectrum of different salts (practically all of the elements with atomic numbers between 35 and 65, plus a lot of transuranics, a lot of uranium (both 235 and 238), and plutonium).
To reach an efficient reprocessing, the uranium (or, at least, the actinide elements) should be somehow separated from this solution. But, AFAIK, they have very different chemical properties. How is it possible to separate only the transuranic materials?
Answer: Fission Products Extraction
The separation of plutonium and uranium from the other fission products is done with the organic molecule tributyl phosphate by liquid-liquid extraction. You have two phases, one organic and one aqueous. The fission products dissolve in the aqueous phase, and uranium/plutonium dissolve in the organic phase with the tributyl phosphate.
Uranium/Plutonium Separation
To separate uranium from plutonium you have to reduce the plutonium with uranium nitrate. Again, you have two streams: one with the uranium/plutonium from before and one aqueous stream with uranium nitrate (U4+). Plutonium will be reduced and dissolve in the aqueous solution.
To accomplish the aforementioned chemical processes (the liquid-liquid extractions), you can use the following designs:
Mixer-Settler
Pulse Column
The principle is always: the organic phase is lighter than the aqueous phase. Both phases are first separated, then mixed, and then again separated. During the mixing the chemical reactions occur.
Please see the mentioned source for more information. | {
"domain": "engineering.stackexchange",
"id": 5,
"tags": "nuclear-technology, nuclear-reprocessing"
} |
Do the two coils in electromagnetic induction repel each other? | Question: I have two coils, A and B. Coil A has an AC current going into it and is producing a magnetic field which is oscillating. Coil B gains an AC current from the fluctuating field due to electromagnetic induction. I wanted to verify: do the two coils A and B repel each other because of the magnetic field generated?
Answer: Yes. According to Lenz's law, the induced current always generates a field that opposes the cause. Therefore the coils repel each other.
"domain": "physics.stackexchange",
"id": 27187,
"tags": "electromagnetism, electromagnetic-induction"
} |
Parallel factorial algorithm using std::thread | Question: This code calculates the factorial of a number on multiple threads. My issue: it is only a little bit faster than the sequential version of it (and I think I know why, I just can't find a way to solve this).
I use boost::multiprecision::cpp_int so the limits of default integers are not a problem, the size of integers is only limited by memory.
Only showing the relevant parts:
// ... other includes ...
#include <boost/multiprecision/cpp_int.hpp>
#define THREAD_COUNT 4
std::atomic<int> thread_num(1); // global variable
// stuff...
void threaded_factorial(unsigned long long int num, boost::multiprecision::cpp_int& bigInt)
{
int threadid = thread_num++; // thread_num is atomic, so this is safe
boost::multiprecision::cpp_int N = 1;
for (unsigned long long int i = threadid; i <= num; i = i + THREAD_COUNT)
{
N *=(i);
}
std::lock_guard<std::mutex> lock(mu); // race condition --> mutex needed
bigInt *= N;
}
// more stuff ...
And the call of the function:
// ...
boost::multiprecision::cpp_int result = 1;
std::thread workers[THREAD_COUNT];
for (int i = 0; i < THREAD_COUNT; ++i)
{
workers[i] = std::thread(threaded_factorial, num, std::ref(result));
}
for (int i = 0; i < THREAD_COUNT; ++i)
{
workers[i].join();
}
// ...
The results seem correct, but as I said, this is not much faster than sequential code.
For example. The calculation of the factorial of 325253 took
67586 ms on 4 threads
76226 ms on a single thread
That is some really poor performance.
The reason, I think, is that the for loop in the threaded_factorial function takes roughly the same amount of time for each thread to complete, so when the std::mutex mu is locked, (THREAD_COUNT-1) threads have to wait for the one which locked the mutex.
This way, most of the work (by far the largest multiplications) is happening in a sequential manner, so the algorithm is really slow.
How can I work around this issue and make this work efficiently?
Answer: Firstly, some issues with API usage. Using raw std::threads is generally not the way you want to go with this sort of thing: prefer to use std::async. This also means you don't need to pass in by reference for updates, as it can return a value. The other big win from this is that you don't need to lock and perform an update in the thread that is running the calculation; this can be done independently.
Firstly, let's modify the threaded_factorial function:
constexpr static auto threads = 4U;
constexpr static auto test = 325253U;
namespace mp = boost::multiprecision;
mp::cpp_int thread_fact(unsigned num, int start)
{
mp::cpp_int n = start;
for (auto i = start + threads; i <= num; i += threads) {
n *= i;
}
return n;
}
To call this, we setup some arrays for the std::futures that will be returned, as well as an array of partial results.
std::array<std::future<mp::cpp_int>, threads> futures;
std::array<mp::cpp_int, threads> results;
for (auto i = 1; i <= threads; ++i) {
futures[i - 1] = std::async(std::launch::async, thread_fact, test, i);
}
for (auto i = 0; i < threads; ++i) {
results[i] = futures[i].get();
}
Now, the step where you combine these is actually pretty expensive. Multiplying two numbers that are of this magnitude will be time consuming; let's launch the multiplications in separate threads as well:
std::future<mp::cpp_int> x = std::async([&results]() -> mp::cpp_int { return results[0] * results[1]; });
std::future<mp::cpp_int> y = std::async([&results]() -> mp::cpp_int { return results[2] * results[3]; });
auto x_val = x.get();
auto y_val = y.get();
auto z = x_val * y_val;
Making these changes, this runs in a bit under 10 seconds for me. In fact, from the profile graph, most of that time is spent doing the combining.
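Language aside, the underlying decomposition — strided partial products recombined pairwise — is easy to sanity-check. A minimal Python sketch of the same scheme (not part of the original C++):

```python
import math

def strided_partials(n, threads=4):
    """One partial product per 'thread': start at 1..threads, step by threads."""
    parts = []
    for start in range(1, threads + 1):
        p = 1
        for i in range(start, n + 1, threads):
            p *= i
        parts.append(p)
    return parts

parts = strided_partials(25)
# combine pairwise, mirroring the two concurrent multiplications above
result = (parts[0] * parts[1]) * (parts[2] * parts[3])
assert result == math.factorial(25)
```

Every integer 1..n lands in exactly one strided class, so the recombined product is exactly n!.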
Others have pointed out that further algorithmic improvements are possible if you need more speed. | {
"domain": "codereview.stackexchange",
"id": 43733,
"tags": "c++, c++11, multithreading"
} |
How far out can one determine a program is halting? | Question: Suppose we have a finite set of programs, say, something like every Turing machine with 2 states and 7 symbols. After running all of them for a very long time, we've narrowed it down to a small subset thereof. Of those remaining, let's suppose that at least one will run forever and at least one will eventually halt, and that they fall into that category where it is impossible to tell which without running them for arbitrarily many more steps.
As I understand it, if we're determined enough and ignore physical considerations of time and memory, we'll theoretically reach the halting points for all those which are going to halt, although we'll never know that for sure. But before we reach that point, e.g., one step before the longest-running program halts, it will be obvious that that program is about to halt; we'll be able to look at its current symbol, state, and transform table, and it will be clear what's next.
Arguably, that's effectively computing that final step. But in reality, I have a very hard time imagining a scenario in which it isn't obvious that it's winding down 2, 3, or many steps earlier. In particular, if it's one of those recursion-based algorithms typical of BB-champions, or something like a Collatz sequence, various indicators that had been growing and growing for much of its run will suddenly be shrinking, or otherwise changing behavior.
There must be a limit to how far out such behavior can be detectable, or the halting problem wouldn't exist. On the other hand, it seems likely to me that there's probably also an upper bound on how obfuscated its behavior can be, or more concretely, how many steps out from a halt we could theoretically determine it will halt, without needing to directly compute all the remaining steps.
My question is whether this seems correct, and if so, whether anybody's figured out results along these lines already.
Answer: What you are describing is indistinguishable from:
making a (free) copy of the Turing machine (in its current state)
running it for $n$ steps
seeing if it halted.
I fail to see how this gives you any new capabilities. An analogy to your logic would be: "I can determine what's going to happen 10 minutes into the future if I just wait 10 minutes."
So to answer the question in your title: you can determine whether your Turing machine halts within $n$ steps by running a copy of it $n$ steps ahead.
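That procedure can be sketched concretely with a toy transition-table simulator (the machine and encoding below are illustrative, not a general TM framework):

```python
def halts_within(tm, state, n):
    """Run a copy of a tiny Turing machine for at most n steps.

    tm maps (state, symbol) -> (write, move, new_state); a missing key
    means the machine halts. Returns True iff it halts within n steps;
    False only means "no verdict yet", exactly as described above.
    """
    tape, head = {}, 0          # blank tape of 0s, head at the origin
    for _ in range(n):
        key = (state, tape.get(head, 0))
        if key not in tm:
            return True         # no applicable rule: the machine has halted
        write, move, state = tm[key]
        tape[head] = write
        head += move
    return False                # still running after n steps: no verdict

# a toy machine that writes three 1s and then halts (no rule for state 3)
tm = {(0, 0): (1, 1, 1), (1, 0): (1, 1, 2), (2, 0): (1, 1, 3)}
assert halts_within(tm, 0, 10) is True
assert halts_within(tm, 0, 3) is False   # three steps in, still no verdict
```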
Predicting the halting problem is only "obvious" for obvious cases, but can be incomprehensibly hard in other cases (see this answer). | {
"domain": "cs.stackexchange",
"id": 21677,
"tags": "turing-machines, halting-problem, heuristics, busy-beaver"
} |
Order of magnitude for the range of vision | Question: Let's say you're in the middle of a desert, with nothing but sand. Let's also assume that you have 20/20 vision. When you're just looking, there's a point beyond which your eye can't see; name it $M$, and let your position be modeled as a point $O$. In this case, what's the best approximation for the distance $OM$? Sorry if this seems unclear, but I tried my best to describe this case.
Edit:
Maybe this can be a better description. Let's suppose you put an object on the floor, and you start walking backwards until you no longer see it; what would be the distance walked for that to happen? We will assume the Earth is perfectly spherical.
Answer: Assuming your height is $2\,\mathrm{m}$ and the Earth is a perfect sphere of radius $6400\,\mathrm{km}$, $OM$ would be $\approx 5\,\mathrm{km}$. This can be worked out through basic trigonometry and approximations.
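A quick numeric check with the stated values (the exact tangent-line distance is $\sqrt{(R+h)^2 - R^2} = \sqrt{2Rh + h^2}$):

```python
import math

R = 6400e3   # Earth radius, m
h = 2.0      # eye height, m

# exact tangent-line distance from the right triangle described below
pm = math.sqrt((R + h) ** 2 - R ** 2)   # = sqrt(2*R*h + h**2)
approx = math.sqrt(2 * R * h)           # small-h approximation

print(pm, approx)  # both ≈ 5.06e3 m, i.e. about 5 km
```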
Here is a very rough diagram (sorry for my pathetic paint skills). $OC = MC$ is the radius of the earth $= 6400km$. $OP= 2m$ which is greatly exaggerated here for illustration purposes. Since the line PM is tangent to the circle at M, $\angle PMC =90^{\circ}$. So $PM = PC\sin\theta$, and also $\cos\theta = \frac{MC}{PC}$. From these two equations $PM$ can be evaluated. Since $\theta$ is so small, we can approximate $PM$ to $OM$.
"domain": "physics.stackexchange",
"id": 24813,
"tags": "homework-and-exercises, optics, geometry"
} |
How could emotional intelligence be implemented? | Question: I've seen emotional intelligence defined as the capacity to be aware of, control, and express one's emotions, and to handle interpersonal relationships judiciously and empathetically.
What are some strategies for artificial intelligence to begin to tackle this problem and develop emotional intelligence for computers?
Are there examples where this is already happening to a degree today?
Wouldn't a computer that passes a Turing test necessarily express emotional intelligence, or else be recognized as an obvious computer?
Perhaps that is why early programs that pass the test represented young people, who presumably have lower emotional intelligence.
Answer: Architectures for recognizing and generating emotion are typically somewhat complex and don't generally have short descriptions, so it's probably better to reference the literature rather than give a misleading soundbite:
Some of the early work in affective computing was done by Rosalind W. Picard. There is a research group at MIT specializing in this area.
Some of the more developed architectural ideas are due to Marvin Minsky.
A pre-publication draft of his book, The Emotion Machine, is available via Wikipedia.
Emotional intelligence would certainly seem to be a necessary component of passing the Turing test - indeed, the original essay, Computing Machinery and Intelligence, implied some degree of "Theory of Mind" about Mr. Pickwick's preferences:
Yet Christmas is a Winter’s day, and I do not think Mr. Pickwick would mind the comparison. | {
"domain": "ai.stackexchange",
"id": 98,
"tags": "emotional-intelligence, turing-test, affective-computing"
} |
I am not able to get the exact definition of a solution | Question: A substance which is in larger proportion by mass is called the solvent, and the one which is in lesser proportion is called the solute.
What if the volume of the substance with the lesser mass is greater? Will it still remain the same?
Answer: The definition given by the IUPAC Gold Book
A liquid or solid phase containing more than one substance, when for convenience one (or more) substance, which is called the solvent, is treated differently from the other substances, which are called solutes. When, as is often but not necessarily the case, the sum of the mole fractions of solutes is small compared with unity, the solution is called a dilute solution. A superscript attached to the ∞ symbol for a property of a solution denotes the property in the limit of infinite dilution. | {
"domain": "chemistry.stackexchange",
"id": 11349,
"tags": "solutions, terminology"
} |
Working out the mean velocity of particles in a gas | Question: I'm trying to answer the following question:
Air consists of molecules of oxygen (molecular mass = $32\ \mathrm{amu}$) and nitrogen (molecular mass = $28\ \mathrm{amu}$). Calculate the two mean translational kinetic energies of oxygen and nitrogen at $20\ ^\circ\mathrm{C}$.
To solve it I have done:
Use $E = \frac{3}{2}kT$
Energy = $\frac{3}{2} \times (1.38 \times 10^{-23}) \times (20+273) = 6.07 \times 10^{-21}$
Use $KE = \frac{1}{2}mv^2$:
For oxygen: $\sqrt{\frac{2 \times 6.07 \times 10^{-21}}{32 \div (6.02\times 10^{23})}} = 15.11$
However, 15.11 isn't the answer in the textbook (the answer is 480m/s)
For nitrogen: $\sqrt{\frac{2 \times 6.07 \times 10^{-21}}{28 \div (6.02\times 10^{23})}} = 16.12$
16.12 isn't the answer either (it's 510m/s)
I know that my answers are wrong (gas molecules don't move as slow as I calculated at room temperature) but I can't see why my method doesn't work. Any help?
Answer: It may be a good idea to add the appropriate units in your calculations. Doing so will help you to localize the mistake. In the end, your results are only wrong by a multiplicative factor of $\sqrt{1000}\approx 31.6$, by which each result must be multiplied to obtain the right one.
It is actually not hard to see where this wrong factor comes from.
http://en.wikipedia.org/wiki/Atomic_mass_unit
In your denominators, you used a value for the mass of the molecules that is based on the ratio of the type $32$ divided by Avogadro's constant (number of particles per mole). However, in this way, you obtain the value that assumes the natural conversion factor $1\,{\rm g/mole}$: Avogadro's constant was originally defined as the number of molecules in one gram-molecule. However, you want to get the masses in kilograms – and the atomic mass unit is about $1.66\times 10^{-27}\,{\rm kg}$, to proceed in the SI units.
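With the units fixed, the numbers come out as the textbook says. A quick check (standard constants, $T = 293\,\mathrm{K}$):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
amu = 1.66054e-27    # atomic mass unit, kg
T = 293.0            # 20 degrees Celsius in kelvin

def v_rms(mass_amu):
    """Root-mean-square speed from (3/2) k T = (1/2) m <v^2>."""
    return math.sqrt(3 * k_B * T / (mass_amu * amu))

v_o2 = v_rms(32)   # ≈ 478 m/s, matching the textbook's 480 m/s
v_n2 = v_rms(28)   # ≈ 511 m/s, matching the textbook's 510 m/s
```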
So effectively, the right easiest fix of your formulae is either to substitute the explicit masses in kilograms or to replace your Avogadro's constant by $6.023\times 10^{26}/{\rm mole}$ whose numerical value is the number of molecules in one kilogram-molecule (note the kilo) or, equivalently, keep Avogadro's and replace $32$ by $0.032$ etc. Then you get the right results within some tiny error margins (the masses 32 and 28 amu aren't quite accurate: proton and neutron masses differ and there are additional corrections from electrons and from nuclear binding energies). | {
"domain": "physics.stackexchange",
"id": 5808,
"tags": "homework-and-exercises, ideal-gas, kinetic-theory"
} |
Possibility of Nonzero or All Azimuthal $E$ Field Component of a Line Charge enclosed by a Gaussian Cylinder | Question: Source: https://openstax.org/books/university-physics-volume-2/pages/6-3-applying-gausss-law
For a line charge on an axis, why is all of the E field only pointed in the $\hat{s}$ direction? How do we know that some of the component of the E field is not pointed in the $\hat{\phi}$ direction? When would there be a case where there is all or some in the $\hat{\phi}$ component of the E field when looking at line charges in cylindrical coordinates? Is it not possible because we purposely choose the cylindrical gaussian shape for line charges to only have one component of the E field be nonzero?
Answer: Recall that the symmetry of the charge distribution must match the symmetry of the electric field. The charge distribution of an infinite, uniform line of charge is symmetrical under reflections across any axis that is perpendicular to it. For the symmetries to match, the electric field must always be perpendicular to the line of charge. For some visual aid, view the figures below:
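A brute-force superposition check supports this (illustrative numbers; a long-but-finite line stands in for the infinite one). Every segment's field at a point $(s, 0, 0)$ lies in the plane containing the wire and the point, so the $\hat{\phi}$ component is identically zero, the $\hat{z}$ parts cancel in symmetric pairs, and the $\hat{s}$ sum reproduces $\lambda/(2\pi\varepsilon_0 s)$:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
lam = 1e-9         # line charge density, C/m (illustrative)
s = 0.5            # radial distance of the field point, m
half = 100.0       # half-length of the line (>> s, so "nearly infinite")
n = 20001          # number of segments

dz = 2 * half / n
E_s = E_z = 0.0
for k in range(n):
    z = -half + (k + 0.5) * dz              # segment midpoint on the axis
    r2 = s * s + z * z
    dE = lam * dz / (4 * math.pi * EPS0 * r2)
    r = math.sqrt(r2)
    E_s += dE * (s / r)                     # radial components add up
    E_z += dE * (-z / r)                    # axial components cancel pairwise

E_inf = lam / (2 * math.pi * EPS0 * s)      # Gauss's-law result
print(E_s / E_inf, E_z)                     # ≈ 1.0 and ≈ 0
```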
Note: These figures are not my own and are from "Physics for Scientists and Engineeers: A Strategic Approach" by Randall Knight. | {
"domain": "physics.stackexchange",
"id": 76656,
"tags": "electrostatics, electric-fields, gauss-law"
} |
Why can't we perfectly focus light (aberrations aside)? | Question: I don't understand why there is necessarily a diffraction limitation on optical systems. Where does this limitation in focusing light come from?
Answer: That's a good question, and one that looks simple but has a complicated answer. Here's my attempt at an answer with no maths - as usual in physics you'll only really understand it by getting stuck into the mathematics.
It's commonly believed that lenses work by bending the light. You see diagrams like:
showing the light ray bending as it passes through the lens. This is one way of looking at it, but a more fundamental explanation is that the lens changes the phase of the plane wave passing through it. Specifically the phase change produced by the lens varies with distance away from the centre line. So on the left side of the lens we have a plane wave of constant phase, while on the right side we have a plane wave with the phase varying with distance. The result is that on the right side we get an interference pattern - we generally call the interference pattern the image, but it is an interference pattern.
Incidentally, this is why a Fresnel lens can focus light even though it's a completely different shape to your usual convex lens. The Fresnel lens produces the same phase changes as a convex lens, so it focuses light in the same way.
But back to your question: the reason that the image isn't perfect is that it's formed by interference of only a finite portion of the light wave, i.e. the portion passing through the lens. The bigger the lens, the more of the light wave forms the interference pattern and the better the image. But to get a perfect image you need all the light, i.e. an infinitely big lens, to contribute to the interference pattern.
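This aperture dependence is easy to demonstrate numerically: scan the (discretised) Fourier transform of a 1-D slit for the first zero of its far-field pattern — doubling the aperture halves the width of the central lobe. A pure-Python sketch with illustrative parameters:

```python
import cmath

def first_zero(aperture, n=400, du=0.01):
    """Scan the far-field amplitude of a 1-D slit of width `aperture`
    and return the spatial frequency of its first minimum."""
    xs = [(-0.5 + (k + 0.5) / n) * aperture for k in range(n)]
    prev, u = None, 0.0
    while True:
        amp = abs(sum(cmath.exp(-2j * cmath.pi * u * x) for x in xs)) / n
        if prev is not None and amp > prev:   # just passed the minimum
            return u - du
        prev, u = amp, u + du

w1 = first_zero(1.0)   # ≈ 1.0
w2 = first_zero(2.0)   # ≈ 0.5: twice the aperture, half the lobe width
```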
Mathematically, the light intensity at the focal plane is the Fourier transform of the incoming light. The integration limits of a Fourier transform are from $-\infty$ to $+\infty$, but a finite size lens restricts the integration limits and changes the intensity in the focal plane away from the perfect case. It actually convolves the light intensity in the focal plane with the Fourier transform of the aperture through which the light passes. For a round lens this means your image is convolved with an Airy disk, and this smears out the image slightly. | {
"domain": "physics.stackexchange",
"id": 9899,
"tags": "optics, diffraction"
} |
Which will give faster SN2 reaction | Question:
In $\ce{H2C=CH-Br}$ and $\ce{H3C-CH2-Br}$, which will react faster towards a $\mathrm{S_N2}$ reaction?
According to me, as double bond exhibit −I effect, hence the 1st should do a faster reaction. Am I right, or is there any other reason?
Answer: $\ce{CH3-CH2-Br}$ will give the faster $\mathrm{S_N2}$ reaction because when a nucleophile approaches $\ce{CH2=CH-Br}$ for an $\mathrm{S_N2}$ reaction, the double bond of $\ce{CH2=CH}$ hinders its approach (steric effect), but there is no such hindrance in the case of $\ce{CH3-CH2-Br}$.
To support the answer we can add one more point that in case of $\ce{CH3-CH2-Br}$ the charge $δ^+$ on the $\ce{C}$ atom of $\ce{CH2}$ will be greater in magnitude than that at the $\ce{C}$ atom of $\ce{CH}$ in case of $\ce{CH2=CH-Br}$ because the double bond has better −I effect than single bond, hence it will be easier for $\ce{-Br}$ to attract the shared electron pair towards it and develop a greater $δ^+$ charge on $\ce{C}$ in case of $\ce{CH3-CH2-Br}$, which will ultimately support the approach of the nucleophile for the $\mathrm{S_N2}$ reaction. | {
"domain": "chemistry.stackexchange",
"id": 10378,
"tags": "organic-chemistry, nucleophilic-substitution, nucleophilicity"
} |
What does "the magnesium salt of EDTA" mean? | Question: I am attempting to prepare an ammonium chloride/ammonium hydroxide buffer solution ($\ce{pH}=10 \pm 0.1$) for titrating water hardness with calgamite and EDTA.
In the 17th Edition of the Standard Methods for the Examination of Water and Wastewater, the reagent preparation section for titrating water hardness (Section 2340 C.) states:
Mix the $\ce{NH4Cl+NH4OH}$
Add $\pu{1.25 g}$ magnesium salt of EDTA (commercially available).
Dilute with distilled water.
After researching, I have assumed it to mean Magnesium Disodium EDTA (hydrate), CAS# 14402-88-1.
In reading the source material and searching online for synonyms of "magnesium salt of EDTA," I can't seem to find an exact hit on the term. Searches included PubChem, ChemSpider, Sigma-Aldrich, and Cole-Parmer, among others. Each seemed to point me toward "Magnesium Disodium EDTA Hydrate." I was hoping that with the amount of knowledge on this site, someone might recognize the term "salt of" and be able to help me out and clarify the term, or confirm that I am using the correct EDTA salt.
Answer: The authors have some strange kind of humor... it's a riddle.
Below you might see
2) If the magnesium salt of EDTA is unavailable, dissolve 1.179 g disodium salt of
ethylenediaminetetraacetic acid dihydrate (analytical reagent grade) and 780 mg magnesium
sulfate (MgSO4⋅7H2O) or 644 mg magnesium chloride (MgCl2⋅6H2O) in 50 mL distilled water.
Add this solution to 16.9 g NH4Cl and 143 mL conc NH4OH with mixing and dilute to 250 mL
with distilled water.
From the molar mass of $\ce{MgSO4 * 7 H2O}$ ($\pu{246.47 g/mol}$) you can calculate the number of moles of magnesium the "magnesium salt of EDTA" should contain. With the help of the data sheet of your purchased magnesium salt of EDTA you can calculate the required quantity in grams.
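In numbers (the MgSO4·7H2O figures are from the quoted recipe; the molar mass of the commercial salt is a placeholder — read the real value off your product's data sheet):

```python
M_MGSO4_7H2O = 246.47   # g/mol
mass_mgso4 = 0.780      # g, from the alternative recipe quoted above

mol_mg = mass_mgso4 / M_MGSO4_7H2O   # moles of Mg the buffer needs
print(mol_mg)                        # ≈ 3.16e-3 mol

# scale to the purchased salt; 358.5 g/mol is a placeholder value for an
# anhydrous MgNa2-EDTA salt -- substitute the figure from your data sheet
M_SALT = 358.5
print(mol_mg * M_SALT)               # grams of salt to weigh out, ≈ 1.13 g
```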
You could also calculate possible stoichiometries of the imaginary "magnesium salt of EDTA" if you like. But be aware that not all commercially available magnesium salts of EDTA have a well-defined stoichiometry. | {
"domain": "chemistry.stackexchange",
"id": 9952,
"tags": "analytical-chemistry, titration, terminology"
} |
What is a Sand Motor "for"? | Question: I've looked at/listened to:
NPR: Protecting The Netherlands' Vulnerable Coasts With A 'Sand Motor'
Zandmotor YouTube: The Sand Motor Five years of Building with Nature
ecoshape.org: Sand Motor Delfland Coast
dezandmotor.nl Introduction
Wikipedia Sand Engine
but I'm still not getting what this is for. Is this a "natural" way to form long-lasting sand dunes on the coast, or something more, or different?
Is it possible to summarize the primary goal of this project? What exactly is the desired outcome, or success criteria?
Answer: From the material presented, it appears the southern coast of the Netherlands is subject to coastal erosion which requires the importation of sand to restore the coastal profile and maintain sand dunes that ensure the integrity of the coast.
Previously, sand was imported every five years and deposited along the coast by mechanized equipment, in a series of restoration campaigns. This is a very disruptive process for natural systems and the humans along that part of the coast.
The sand motor is an artificial, sacrificial peninsula of sand that will be redistributed along the coast, continually replenishing the sand lost to erosion. The slower and continuous redistribution of sand by natural forces will allow for a consolidation of the coastal sand dunes in a seemingly natural manner that is not disruptive like the previous replenishment campaigns conducted every five years.
The continuous slow replenishment of coastal sand also allows flora to naturally establish itself and colonize the sand dunes. Their roots act as a mesh to hold sand, reinforcing the dunes in a natural and non-disruptive manner.
The project aims to produce long-lasting sand dunes, in a more natural way, to protect the integrity of the coast.
"domain": "earthscience.stackexchange",
"id": 1259,
"tags": "geoengineering"
} |
Determination of High-Accuracy Distances of Terrestrial Planets from the Sun | Question: I understand how in olden days the Sun-Planet distances were estimated using:-
(i) measures of planet orbital periods ($T$) from analysis of observations over several centuries;
(ii) Kepler's 3rd Law $(T^2 = k.a^3)$ applied to the planet orbital periods to determine the relative lengths of the Sun-Planet semi-major axis distances ($a$) expressed in terms of the AU (astronomical unit = distance from centre of Earth to centre of Sun);
(iii) particular Earth-Planet distances along Solar radials were measured using Parallax techniques such as that by Cassini for Mars, and Transits of Venus;
(iv) From one or more Earth-Planet radial distances the length of the AU could then be determined by simple algebra.
(v) Other Sun-Planet distances could then be determined once the AU length was found.
Modern ephemerides e.g. those issued by JPL use various techniques:-
The orbits of the inner planets are known to subkilometer accuracy through fitting radio tracking measurements of spacecraft in orbit about them. Very long baseline interferometry measurements of spacecraft at Mars allow the orientation of the ephemeris to be tied to the International Celestial Reference Frame with an accuracy of 0′′.0002. This orientation is the limiting error source for the orbits of the terrestrial planets, and corresponds to orbit uncertainties of a few hundred meters.
The orbits of Jupiter and Saturn are determined to accuracies of tens of kilometers as a result of fitting spacecraft tracking data.
The orbits of Uranus, Neptune, and Pluto are determined primarily from astrometric observations, for which measurement uncertainties due to the Earth’s atmosphere, combined with star catalog uncertainties, limit position accuracies to several thousand kilometers.
from Folkner et al 2014.
I understand how spacecraft telemetry has been used to obtain highly accurate distances between Earth and the terrestrial planets. But I am not clear about how highly accurate distances can be determined between those planets and the centre of the Sun.
Question
Do such (Sun-planet-distance) determinations still basically rely on (the more accurate Newtonian version of) Kepler's 3rd Law relating planet orbital period and Sun-planet semi-major axis distance?
$$\frac{T^2}{a^3} = \frac{4 \pi^2}{G(M+m)}$$
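As a quick numerical sanity check of the formula above, the sketch below recovers Earth's semi-major axis from its sidereal period using standard constants (SI units assumed throughout):

```python
import math

# Numerical check of the Newtonian form of Kepler's 3rd law quoted above:
# a^3 = G (M + m) T^2 / (4 pi^2)

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
m_earth = 5.972e24     # Earth mass, kg
T = 365.256 * 86400    # sidereal year, s

a = (G * (M_sun + m_earth) * T**2 / (4 * math.pi**2)) ** (1/3)
print(f"a = {a/1.496e11:.4f} AU")  # comes out very close to 1 AU
```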
What other methods or assumptions are used/involved?
EDIT - Afterthoughts
After further thought prompted by feedback from u/atmosphericprisonescape
my question boils down to "How are the positions (over time) of the Sun's centre tied in (with sub-kilometre accuracy) to the 4D space-time array of triangulated, high-accuracy, telemetry-derived terrestrial-planet position determinations?"
I guess that Kepler's 3rd Law is not invoked as such. Rather, high-accuracy (sub-kilometre) determinations of the position of the Solar centre presumably require (in addition to the high-accuracy planet positions) some specific deterministic "motivation model"; i.e. a model of motion-determining factors.
These factors would include Newtonian inertia, Newtonian gravitational forces, and non-Newtonian factors leading to additional orbital angular velocity (cf. the non-Newtonian perihelion precession predicted by General Relativity, as described at wikipedia/Apsidal_precession).
The "motivation model" would constrain the relative positions of the Sun and planets via their involment as motion generators, reactors and enactors.
Answer: The DE430 JPL ephemeris "memo" (https://ipnpr.jpl.nasa.gov/progress_report/42-196/196C.pdf) has details of how it is constructed and what from. Mercury's and Venus's positions are determined by tracking the spacecraft orbiting them (to sub-km accuracy). Mars, Jupiter and Saturn are also measured from orbiting spacecraft such as Galileo and Cassini. The positions of the outer planets plus Pluto are determined primarily from astrometry, i.e. measuring the positions of the planets relative to the background stars, in the past through photographic plates and with CCDs in more modern times.
Positions of stars and planets are all measured in something called the International Celestial Reference System (ICRS), which is based on measuring very distant objects called quasars, effectively fixed on the sky, using a precise radio technique called VLBI. | {
"domain": "astronomy.stackexchange",
"id": 3055,
"tags": "astrometry, ephemeris"
} |