anchor | positive | source |
|---|---|---|
Question about Principle of least action - Landau | Question: I have this conceptual question: In Landau's book on classical mechanics, regarding the principle of least action, it is written:
$$\left. \delta S =\frac{\partial L}{\partial v} \delta q \right\rvert_{t_1}^{t_2} + \ \int\limits_{t_1}^{t_2} \ dt \left(\frac{\partial L}{\partial q}\ - \frac d {dt}\frac {\partial L}{\partial v}\right) \delta q \ =0 ,$$
where $q=q(t)$ is the position function, $v=v(t)$ velocity function, $S$ the action, and L the Lagrangian of the system.
There is the condition $\delta q(t_1)=\delta q(t_2)=0$.
So the first term is zero, and then it says "there remains an integral which must vanish for all values of $\delta q$. This can be so only if the integrand is zero identically."
Well I can't understand why
$$\left(\frac{\partial L}{\partial q}\ - \frac d {dt}\frac {\partial L}{\partial v}\right) =0 \;.$$
Answer: Consider the following integral
$$\int_{t_1}^{t_2}F(t)\delta q(t) dt=0,$$
for every possible $\delta q$. Then choose a function $\delta q$ which has a large value but is different from zero only in an infinitesimal neighborhood of a point $t_0\in[t_1,t_2]$. Then,
$$0=\int_{t_1}^{t_2}F(t)\delta q(t) dt=\int_{t_0-\epsilon}^{t_0+\epsilon}F(t)\delta q(t) dt=F(t_0)\int_{t_0-\epsilon}^{t_0+\epsilon}\delta q(t) dt.$$
In the last step we took $F$ out of the integral because the function is approximately constant over the infinitesimal interval of width $2\epsilon$. Now, choosing $\delta q$ to be positive in that neighborhood, the last integral above is different from zero, so $F(t_0)$ has to vanish. Repeat this argument for all $t_0\in[t_1,t_2]$ and you obtain that $F(t)$ is identically zero. This is basically the fundamental lemma of variational calculus. | {
"domain": "physics.stackexchange",
"id": 39607,
"tags": "classical-mechanics, lagrangian-formalism, action, variational-calculus"
} |
Acceleration causing external force? | Question: Here is the original question:
A car of mass $m$ is initially at rest on a boat of mass $M$ tied to the wall of a dock by a massless inextensible string. The car accelerates from rest to velocity $v$ in time $t$. The car then applies its brakes and comes to rest in negligible time. Neglecting friction between the boat and the water, find the time at which the boat will strike the wall.
I approached the question as follows:
from time=$0$ to time=$t$
I assumed that the force of tension (due to the string) acts only on the centre of mass of the car-boat system. Then the acceleration of the car-boat system would be:
$$a_{car-boat}=\frac{m\left(a_\text{car with respect to water}\right)+M\left(a_{boat}\right)}{M+m}$$
Here is what I don't understand:
If the car and boat were not attached by the string, then the boat would move backwards (with respect to a reference frame attached to the water). However, the boat moves forward due to the tension force in the string. The tension force only arises because of the car's motion, and causes acceleration of the centre of mass of the car-boat system.
Why does the boat move forward? If it moves forward, shouldn't the rope become slack and not exert any force on the boat?
Answer: I'm first going to restate what I believe you meant to ask in your post... correct me if I misunderstood.
Scenario
Setup:
A car is at rest atop a boat
The boat is attached to a wall by a string
Situation:
The car accelerates towards the wall (in direction of string)
After some time $t$ the car stops quickly
Question:
Why does the boat move towards the wall?
Assumptions:
String cannot be stretched
No friction between boat and water
Understanding the problem
The acceleration of the car
When the car is resting on the boat, we can find where their center of mass lies. If we then accelerate the car (assuming no losses), the boat is pushed in the opposite direction such that the center of mass stays in a fixed position. Note that the boat cannot be pushed away from the wall in this situation because it is tethered by a string.
The stopping of the car
For the car to decelerate to rest (relative to the boat) it brakes, receiving a frictional force directed away from the wall and, as given by Newton's Third Law, exerting an equal force on the boat directed towards the wall.
The acceleration of the boat
The boat is then accelerated by the reaction force from the car's braking, and begins moving towards the wall.
The motion of the boat
The boat is then simply traveling with uniform velocity towards the wall, with which it will collide after some amount of time.
Calculation process
The acceleration of the car
The actual magnitude of the acceleration is not important for finding the final velocity of the boat, so all we are concerned with is the velocity of the car after the acceleration, which we already know to be:
$$v_{\text{car}}=v$$
The stopping of the car / the acceleration of the boat
Conservation of momentum allows us to calculate the final velocity of the boat simply enough:
$$p_f=p_i$$
$$(m_{\text{boat}}+m_{\text{car}})v_{\text{boat}_f}=m_{\text{car}}v_{\text{car}_i}$$
$$v_{\text{boat}_f}=\frac{m_{\text{car}}v_{\text{car}_i}}{m_{\text{boat}}+m_{\text{car}}}$$
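As a numerical sanity check, here is a quick sketch of this step; the masses and speed below are hypothetical, not taken from the problem statement:

```python
# Hypothetical values: a 1000 kg car on a 4000 kg boat, car reaching 5 m/s.
m_car, m_boat = 1000.0, 4000.0   # kg
v_car = 5.0                      # m/s, speed of the car just before braking

# Momentum conservation across the (near-instant) braking:
v_boat = m_car * v_car / (m_boat + m_car)
print(v_boat)   # 1.0 m/s, directed towards the wall
```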
The motion of the boat
This is now just a problem of finding the time given a uniform velocity:
$$t=\frac{d}{v_{\text{boat}}}$$
where $d$ is the distance to the wall. | {
"domain": "physics.stackexchange",
"id": 44493,
"tags": "newtonian-mechanics, forces"
} |
What is the minimal number of operations for intractability? | Question: If we have an algorithm that needs to run $n=2$ operations and then halt, I think we could say the problem it solves is tractable; but if $n=10^{120}$, although it could theoretically be solved, it seems to be intractable. And what about a problem that needs $n=10^{1000}$, or $n=10^{10^{1000}}$, operations? That seems an intractable problem for sure.
Then it would seem there is some $k$ such that problems requiring $n\ge k$ operations are intractable, and those requiring $n\lt k$ operations are tractable.
I doubt that such a $k$ exists. Where is the limit? Can a technological advance turn a problem that is intractable for a given $n$ into a tractable one?
I would like to read your opinion.
EDIT
I think this question is similar to asking whether the Church–Turing thesis is correct, because if the difference between solving a computable problem on a Turing machine and on any other Turing-complete machine is "only a constant" in the number of operations, then I think asking about computability is the same as asking about effective calculability. Now I see that tractable means polynomial time, and intractable means there is no polynomial-time solution. But the difference between two machines, for the same (even tractable) problem, is a matter of the Church–Turing thesis.
Answer: Intractability is a vivid metaphor for a notion rooted in asymptotic complexity.
Let me recall that, in abstract terms, a computational problem is: given an input $s$, find a solution $r$ to the instance of the problem specified by $s$.
A problem is tractable if for every input $s$ of size $n$ a solution can be found with a number of operations bounded by a certain slowly growing function $P(n)$. The definition of slow growth is not important at this stage. Let us agree that a linear function is slowly growing and an exponential $e^n$ is not slowly growing.
If no such slowly growing upper bound exists, we have to admit that the problem is intractable. That is, for any improvement in computer technology, we can slightly increase the size of the input to cancel the effect of this improvement.
Thus the precise meaning of the notion of intractability is that it stands against any technological advance.
It is important to note that there is no concrete integer number that characterizes intractability. It is the rate of growth that makes a problem intractable.
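To make the point about growth rates concrete, here is a small sketch; the cost models and operation budgets are arbitrary. A 1000-fold faster machine multiplies the feasible input size of a cubic-cost problem by ten, but only adds about ten to the feasible input size of an exponential-cost one:

```python
import math

def max_n_cubic(budget):
    """Largest n with n^3 <= budget (a 'tractable' cost model)."""
    return int(round(budget ** (1.0 / 3.0)))

def max_n_exponential(budget):
    """Largest n with 2^n <= budget (an 'intractable' cost model)."""
    return int(math.log2(budget))

budget = 10 ** 15                      # ops the current machine can afford
faster = budget * 1000                 # a 1000x technological improvement

print(max_n_cubic(budget), max_n_cubic(faster))             # 100000 1000000
print(max_n_exponential(budget), max_n_exponential(faster)) # 49 59
```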
If instead you look at a specific question, like solving the game of chess, you cannot say that it is intractable, since as soon as it is solved once the problem becomes trivial. Thus the notion of intractability does not apply to specific one-off problems, whatever number of operations a particular algorithm takes to solve them.
Here is a final remark to check your understanding.
An algorithm is called efficient if it efficiently solves every instance of some computational problem. If for some instance it takes this efficient algorithm $10^{20}$ operations to complete, a trivial algorithm that simply reproduces the solution once produced by the efficient algorithm will beat the efficient algorithm on this instance. Yet this trivial algorithm will fail at any instance of the problem that was not previously seen. | {
"domain": "cs.stackexchange",
"id": 291,
"tags": "complexity-theory, church-turing-thesis"
} |
Mass-spring system linear equations: I don't get the last term, shouldn't it be $V=\frac{1}{2}k_3x_{\text{wall}}^2-k_3x_{\text{wall}}x_2+\frac{1}{2}k_3x_2^2$? | Question: I don't understand the last term in setting up the linear system of equations for multiple mass-spring systems. It is about the last spring in the next example:
Source: https://math24.net/mass-spring-system.html
I understand the terms, but for the last spring the potential energy is
$$ V= \frac{1}{2} k_3 \Delta x^2 $$
Where I am confused about the $\Delta x$. Shouldn't this be just the length of the spring, in other words, the difference between the $x$ position of the wall and the $x_2$ position of the mass $m_2$?
$$ \Delta x = x_{\text{wall}} - x_2$$
Which means:
$$ V= \frac{1}{2} k_3 \Delta x^2 = \frac{1}{2} k_3 (x_{\text{wall}} - x_2)^2$$
$$ V=\frac{1}{2} k_3 (x_{\text{wall}}^2 -2x_{\text{wall}}x_2+ x_2^2)$$
$$ V=\frac{1}{2} k_3x_{\text{wall}}^2 -k_3x_{\text{wall}}x_2+ \frac{1}{2}k_3x_2^2$$
Which is completely different from $\frac{k_3x_2^2}{2}$.
Answer:
Shouldn't this be just the length of the spring, in other words, the difference between the $x$ position of the wall and the $x_2$ position of the mass $m_2$?
$\Delta x$ refers to the change in the spring's length, not the absolute length. In problems like this, one usually takes each block's coordinate to be zero at its equilibrium position. With that choice of coordinates, the position of the block $m_2$ equals the change in the $k_3$ spring's length. So, $\Delta x = x_2$. | {
"domain": "physics.stackexchange",
"id": 91844,
"tags": "homework-and-exercises, newtonian-mechanics, spring, linear-systems, coupled-oscillators"
} |
Publication Authorship Credits | Question: Many physics papers now have dozens of authors per paper. Experimental physics may have multi-organizational and multi-country contributing staffs, but I'd guess that most of the names don't contribute a word or equation to a paper, yet they get individual authorship credit.
My question is: who determines the author list, does everybody listed have editing privileges, and, perhaps most importantly, who decides on their listed order?
Answer: OK, this is for experimental high-energy physics, as I worked in the field for over 40 years.
There are groups in institutions, both universities and research institutes. There are many such groups in each country, and there are many countries. The group leaders decide who signs a paper, mainly by the person-hours put into the construction and running of the experiment, and also considering contributions to the analysis for the present and other papers. The order is alphabetical per author within a group, per name of institution. There have been long discussions about changing the credit attribution, but I see that the same holds for LHC papers.
Why are there so many names? In my career I worked on one large experiment (350 people; the previous one had about 50) from the inception of the idea to taking data and analysis. It took 10 years for hundreds of people to build the detector, years that yielded very few publications relative to the full work put in, certainly over 8 hours a day. Credit accumulated from analyses published or worked on for the previous experiment's papers. Then came another 15 years of data analysis, where there are also large numbers of working groups, made up of people from all groups, and many people working on the same subject with their own analysis. The final paper is decided by the working group, a joining of all individual analyses. The names are still the ones the individual institute group leader gives to the working group. The working group proposes a preprint to the collaboration editing board, and if the board approves of the paper it goes to the full collaboration meeting, to be passed by consensus. Every person who signs can comment and ask for changes. They usually do not, as there is trust that the working groups are doing their job well.
The system is completely open. Any group member can join in the analysis and comment. | {
"domain": "physics.stackexchange",
"id": 572,
"tags": "soft-question"
} |
How is Bernoulli's principle related to a campfire blowpipe? | Question: My uncle recently told me that a campfire blowpipe (it is used to blow air into the fire so that it continues to burn) works on Bernoulli's principle. How is that so?
Answer:
If by campfire blowpipe is meant something like the schematic, a tube that narrows towards the outlet, where the user blows into the left side and the right side is pointed at the fire, then your uncle is loosely correct.
Because of the continuity requirement, and assuming air is not very compressible at low pressures and speeds, the outlet and inlet airspeeds, resp. $v_2$ and $v_1$, are related by:
$$A_1v_1\approx A_2v_2$$
So:
$$v_2\approx \frac{A_1}{A_2}v_1$$
Thus $v_2 > v_1$.
So the campfire blowpipe allows the user, from a safe distance, to direct a high-speed jet of air onto the fire, in accordance with Bernoulli's principle, applied here:
$$\frac12 v_1^2+\frac{p_1}{\rho}+gz=\frac12 v_2^2+\frac{p_0}{\rho}+gz$$
Or:
$$\frac12(v_2^2-v_1^2)=\frac{p_1-p_0}{\rho}$$
Where $p_1$ is provided by the user and $p_0$ is the atmospheric pressure.
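As a numeric illustration, here is the combination spelled out; the areas and gauge pressure below are invented, not measured values:

```python
import math

rho = 1.2                 # kg/m^3, density of air (approximate)
A1, A2 = 3.0e-4, 0.5e-4   # m^2, inlet and outlet cross-sections (hypothetical)
dp = 525.0                # Pa, gauge pressure p1 - p0 supplied by the user (hypothetical)

# Continuity gives v1 = (A2/A1) * v2; substituting into Bernoulli:
#   0.5 * v2^2 * (1 - (A2/A1)^2) = dp / rho
v2 = math.sqrt(2 * dp / rho / (1 - (A2 / A1) ** 2))
v1 = (A2 / A1) * v2
print(v1, v2)   # roughly 5 m/s at the mouth, 30 m/s at the outlet
```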
Combined with the continuity equation higher up, $v_2$ can then be estimated. | {
"domain": "physics.stackexchange",
"id": 30117,
"tags": "bernoulli-equation"
} |
Orbiting body around a star | Question: Let us assume there's a body with mass $m$ and velocity $v$, at a distance $r$ from another body of mass $M$. The velocity vector is perpendicular to the radial vector. With these values, how do we find the apogee and perigee of the elliptical orbit of the body?
The problem I'm facing here is that we can assume $r$ to be any distance for the ellipse, i.e., apogee $r_a = r$, or perigee $r_p = r$. So: $$v=\sqrt{2GM\left(\frac{1}{r}-\frac{1}{r_p + r_a}\right)}$$
Assuming $r=r_a$ we can solve for $r_p$ and vice versa.
Therefore, in each case, we find different values. But in reality, there is only one set of values for a body with tangential velocity $v$.
So what's the concept here?
Answer: That equation relating the mass, orbital speed, radial distance, and semi-major axis is known as the vis-viva equation. Its standard form is
$$v^2 = \mu\left(\frac2r - \frac1a\right)$$
where $\mu$ is the standard gravitational parameter $GM$, where $M$ is the sum of the masses of the two bodies. It's common to neglect the mass of the smaller body when it's much smaller than the mass of the larger body.
I have a derivation of the vis-viva equation here
In your scenario, the initial velocity vector is perpendicular to the radial vector. Now in a Kepler ellipse, that can only happen at periapsis or apoapsis. That is, your initial $r$ is either $r_a$ or $r_p$. So we can rearrange vis-viva to calculate $a$, and then use $2a = r_p + r_a$ to calculate the other $r$. Then we just compare the two $r$ values to figure out which one's which.
Rearranging, we get
$$\frac1a = \frac2r - \frac{v^2}\mu$$
$$a = \frac{\mu r}{2\mu - rv^2}$$
With a little more algebra, we get the other radius,
$$r_2 = \frac{r^2v^2}{2\mu - rv^2}$$
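Here is a short numerical sketch of the whole procedure; the gravitational parameter and the initial conditions are hypothetical (roughly a low Earth orbit):

```python
import math

mu = 3.986004418e14   # m^3/s^2, GM of the central body (Earth's value, assumed)
r = 7.0e6             # m, initial radius (hypothetical)
v = 8.0e3             # m/s, initial speed, perpendicular to the radius (hypothetical)

denom = 2 * mu - r * v**2     # > 0: ellipse, = 0: parabola, < 0: hyperbola
a = mu * r / denom            # semi-major axis
r2 = r**2 * v**2 / denom      # the "other" apsis distance
r_peri, r_apo = min(r, r2), max(r, r2)

# Consistency checks against vis-viva and 2a = r_p + r_a:
assert math.isclose(v**2, mu * (2 / r - 1 / a))
assert math.isclose(r_peri + r_apo, 2 * a)
print(r_peri, r_apo)   # ~7.0e6 m perigee, ~8.98e6 m apogee
```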
Note that $a$ becomes infinite when $2\mu - rv^2 = 0$. That's the escape velocity, which gives a parabolic trajectory. And when $2\mu - rv^2 < 0$, $a$ becomes negative, and we have a hyperbolic trajectory. | {
"domain": "physics.stackexchange",
"id": 100587,
"tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, celestial-mechanics"
} |
Software for simulating NMR spectra | Question: I am wondering, is there any open-source software that predicts an NMR spectrum given the chemical shifts?
Something like the Spinach library http://spindynamics.org/group/?page_id=12 (MATLAB is not free, so...).
Answer: There is still the old WINDNMR programme developed by Hans Reich: https://www.chem.wisc.edu/areas/reich/plt/windnmr.htm which apparently should still work on modern versions of Windows.
It is not quite as sophisticated as Spinach, but if all you want is a 1D spectrum with various multiplets, then it is probably good enough.
If you have the computational resources, then there is software developed by Stefan Grimme's group https://xtb-docs.readthedocs.io/en/latest/enso_doc/enso.html, but you can't specify shifts manually, you need to generate them via a quantum chemical calculation using ORCA. Technically, all of these are free to use, but obviously it is not always practical. But in any case, if you are interested, the relevant publication is Angew. Chem. Int. Ed. 2017, 56 (46), 14763–14769. It should be possible now to run all the steps using just one ORCA input file; I suggest looking in the ORCA manual for more information.
For what it's worth, if you have the programming knowledge required, it's easy enough to write a (e.g.) Python script that is capable of simulating two- or three-spin (maybe four-spin) systems. At the end of the day, an NMR experiment is "just" a bunch of unitary operators acting on density matrices, i.e. lots and lots of matrix multiplication. For example, you could take a look at some of my MATLAB code here which simulates the first FID of a sensitivity-enhanced HSQC experiment. It is fairly trivial to port this to Python, since numpy provides you with pretty much every function you might need.
[Disclaimer: the code will not necessarily stay there forever.]
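To give a concrete flavour, here is a minimal two-spin density-matrix simulation in Python/numpy; every number in it is invented for illustration (offsets, coupling, relaxation, dwell time), and the conventions are deliberately simplified rather than matching any particular experiment:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
Ix = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
Iy = np.array([[0, -0.5j], [0.5j, 0]], dtype=complex)
Iz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
E = np.eye(2, dtype=complex)

# Two-spin product operators via Kronecker products (4x4 matrices)
I1x, I1y, I1z = np.kron(Ix, E), np.kron(Iy, E), np.kron(Iz, E)
I2x, I2y, I2z = np.kron(E, Ix), np.kron(E, Iy), np.kron(E, Iz)

# Weak-coupling Hamiltonian; offsets and J coupling in Hz (made-up values)
w1, w2, J = 100.0, -50.0, 10.0
H = 2 * np.pi * (w1 * I1z + w2 * I2z + J * (I1z @ I2z))

rho = I1x + I2x                           # state after an ideal 90-degree pulse
detect = (I1x + I2x) + 1j * (I1y + I2y)   # I+ detection operator

dt, npoints, T2 = 1e-3, 4096, 0.1
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T  # propagator

fid = np.empty(npoints, dtype=complex)
for k in range(npoints):
    fid[k] = np.trace(rho @ detect)
    rho = U @ rho @ U.conj().T            # evolve one dwell time

t = np.arange(npoints) * dt
fid *= np.exp(-t / T2)                    # ad hoc exponential line broadening

spectrum = np.abs(np.fft.fftshift(np.fft.fft(fid)))
freqs = np.fft.fftshift(np.fft.fftfreq(npoints, dt))
# Doublets appear near w1 +/- J/2 and w2 +/- J/2
```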
The only problem is that this scales exponentially with the number of spins, so unless you do some serious optimisation of the code (like that done in Spinach), it becomes intractable very quickly. But if your molecule can be "decomposed" into several spin systems which are small, then the overall spectrum is just the sum of the spectrum of each spin system, so you might be able to get away with simpler code. I've never thought about how you might do that, although my guess is that you would want to construct a Hamiltonian matrix, then pass it to a function which would call itself recursively if the matrix can be written in block-diagonal form. | {
"domain": "chemistry.stackexchange",
"id": 13384,
"tags": "computational-chemistry, nmr-spectroscopy, software"
} |
Is there anything to improve in this MergeSort code in Java? | Question: I have written an implementation of the merge sort algorithm in Java.
Here is the MergeSort class that I created.
public class MergeSort{
public static void sort(Comparable[] a){
Comparable[] b = new Comparable[a.length];
sort(a, b, 0, a.length-1);
}
private static void sort(Comparable[] a, Comparable[] b, int lo, int hi){
if(lo>=hi) return;
int mid = lo+(hi-lo)/2;
sort(a, b, lo, mid);
sort(a, b, mid+1, hi);
merge(a, b, lo, hi, mid);
}
private static void merge(Comparable[] a, Comparable[] b, int lo, int hi, int mid){
int i=lo;
int j=mid+1;
for(int k=lo;k<=hi;k++){
if(i<=mid && j<=hi){
if(a[i].compareTo(a[j])>0) {b[k]=a[j++];continue;}
else if(a[i].compareTo(a[j])<0) {b[k]=a[i++]; continue;}
else {
b[k]=a[i++];
j++;
continue;
}
}
if(j>hi && i<=mid) {b[k]=a[i++];continue;}
if(i>mid && j<=hi) {b[k]=a[j++];continue;}
}
for(int n=lo;n<=hi;n++){
a[n]=b[n];
}
}
}
The code passed some simple tests. I wonder, is there anything wrong or missing? Anywhere to improve?
Answer: Generics
In Java, it is not safe to create an array of a generic type. This is what you are doing...
Comparable is really Comparable<T>, and you are creating an array of Comparable[].
Because the generic types get removed when you compile the code (see Type Erasure), there is no safe way for Java to ensure that you have the same types of data in your array.
For example, you could have specified your input as:
Integer[] data = new Integer[10];
Then, even though it is an array of Integer, the Java compiler only knows it is an array of Comparable, so it can allow you to say data[0] = "broken";, but that will fail at runtime, because you cannot put a String into an array of Integer. Because Java can only tell at runtime whether the data is valid input to a generically typed array, it warns you whenever you use one.
This leads to ugly warnings. The right solution is to use an appropriately typed collection: List<Integer> data will be type-checkable at compile time, and Java can 'do things right'.
An alternate solution is to specify the full type of the Comparable as part of the method definition (a Generic Method... ):
public static <T extends Comparable<?>> void sort(T[] a){
....
}
... and then follow that same generic method into your inner methods.
It is complicated. There is some reference material here:
Cannot create arrays of Parameterized Types
Generic Methods
Style
You are not using white-space to help make things readable. Consider this line of code:
int mid = lo+(hi-lo)/2;
This should be:
int mid = lo + (hi - lo) / 2;
See the java Code-Style guidelines for this:
All binary operators except . should be separated from their operands by spaces. Blank spaces should never separate unary operators such as unary minus, increment ("++"), and decrement ("--") from their operands
Additionally, you have a number of complex 1-liners that should be split on to multiple lines:
if(i>mid && j<=hi) {b[k]=a[j++];continue;}
That should be:
if(i > mid && j <= hi) {
b[k]=a[j++];
continue;
}
Algorithm
MidPoint
You have also copied your code from some reference texts, or otherwise done some mid-point research:
int mid = lo+(hi-lo)/2
This mechanism for getting the mid point of the array is the right pattern to use, because the naive (lo + hi) / 2 can overflow for large arrays. Anyone who does it this way has either copied the code, run into the bug, or been well-educated. This is a good thing, but, in case you did not know why you were doing it this way, now you know.
Bug
I believe you have a bug in the code when you have two values the same, one in each part of the merge....
... part of the reason this is hard to see is because your code is often doing more than one thing on each line:
if(a[i].compareTo(a[j])>0) {b[k]=a[j++];continue;}
else if(a[i].compareTo(a[j])<0) {b[k]=a[i++]; continue;}
else {
b[k]=a[i++];
j++;
continue;
}
If I space out that code block, there are a number of things in there that are apparent:
if(a[i].compareTo(a[j]) > 0) {
b[k]=a[j++];
continue;
} else if(a[i].compareTo(a[j]) < 0) {
b[k]=a[i++];
continue;
} else {
b[k]=a[i++];
j++;
continue;
}
First up, for some Comparable classes the compareTo method can be expensive, and you will be doing it twice for many checks. You should do it once and save the result:
int comparison = a[i].compareTo(a[j])
if (comparison > 0) {
...
} else if (comparison < 0) {
...
} else {
...
}
OK, back to the bug.... in the final else block, you have:
} else {
b[k]=a[i++];
j++;
continue;
}
This block increments both the i and the j pointers, but it only adds the i value to the merged set, which means the equal value at j is 'lost'.
There are two ways to solve this, but I prefer treating the equals values as if the one is larger than the other.... if you convert the one comparison check to be >= 0 instead of just > 0, you will successfully merge results.... then you can get rid of the whole last block entirely....
it also means you only need one compareTo check:
if(a[i].compareTo(a[j]) >= 0) {
b[k]=a[j++];
continue;
} else {
b[k]=a[i++];
continue;
}
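To see the duplicate-loss bug and the fix side by side, here is a quick sketch that mimics the tie-handling (in Python for brevity, not the Java under review; it approximates the effect of the original pointer logic rather than reproducing it exactly):

```python
def merge_buggy(left, right):
    """Mimics the original tie-handling: on a tie it advances BOTH
    pointers but copies only one value, so one duplicate is lost."""
    out, i, j = [], 0, 0
    while i < len(left) or j < len(right):
        if i < len(left) and j < len(right):
            if left[i] > right[j]:
                out.append(right[j]); j += 1
            elif left[i] < right[j]:
                out.append(left[i]); i += 1
            else:                      # tie: both pointers move, one value kept
                out.append(left[i]); i += 1; j += 1
        elif i < len(left):
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out

def merge_fixed(left, right):
    """Treats a tie as 'take from the right run first' (the >= fix)."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] >= right[j]:
            out.append(right[j]); j += 1
        else:
            out.append(left[i]); i += 1
    return out + left[i:] + right[j:]

print(merge_buggy([1, 3], [3, 5]))  # [1, 3, 5] -- the duplicate 3 vanished
print(merge_fixed([1, 3], [3, 5]))  # [1, 3, 3, 5]
```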
ReWrite
I am not one of those people who think 'continue' is a bad thing. I like it for the job it can do, but your use of continue is excessive: every path through the loop ends in a continue. This is easily solved with a little planning. Consider this rewrite:
private static void merge(Comparable[] a, Comparable[] b, int lo, int hi,
int mid) {
int i = lo;
int j = mid + 1;
for (int k = lo; k <= hi; k++) {
if (i <= mid && j <= hi) {
if (a[i].compareTo(a[j]) >= 0) {
b[k] = a[j++];
} else {
b[k] = a[i++];
}
} else if (j > hi && i <= mid) {
b[k] = a[i++];
} else if (i > mid && j <= hi) {
b[k] = a[j++];
}
}
for (int n = lo; n <= hi; n++) {
a[n] = b[n];
}
} | {
"domain": "codereview.stackexchange",
"id": 6259,
"tags": "java, mergesort"
} |
Function to split either a string array or string into a string array | Question: I am programming a language interpreter in C# and have recently created a set of functions that receive either a string or a string[] and split it by a received string.
For example, with a string:
"Hey:123:hello:456"
":"
It will return this array:
{"Hey","123","hello","456"}
And with a string[]:
{"a:b","c","d:e:f"}
":"
It will return this array:
{"a","b","c","d","e","f"}
How can I improve efficiency and maybe make it tidier?
public string[] SplitRows(object thevar, string delimiter) {
if (delimiter == "<newline>") delimiter = "\n";
if (thevar.GetType() == typeof(string)) {
string temp = (string) thevar;
return temp.Split(new string[] {
delimiter
}, System.StringSplitOptions.None);
}
if (thevar.GetType() == typeof(string[])) {
return stringarraysplitter((string[]) thevar, delimiter);
}
return null;
}
public string[] stringarraysplitter(string[] arr, string delimiter) {
string[][] tempr = new string[arr.Length][];
for (int i = 0; i < arr.Length; i++) {
tempr[i] = arr[i].Split(new string[] {
delimiter
}, System.StringSplitOptions.None);
}
System.Collections.Generic.List < string > templist = new System.Collections.Generic.List < string > ();
for (int i = 0; i < tempr.Length; i++) {
for (int j = 0; j < tempr[i].Length; j++) {
templist.Add(tempr[i][j]);
}
}
return templist.ToArray();
}
Side notes:
SplitRows() is the only main function allowed to receive either a string array or a string. Even though I use two functions internally, I cannot create two overloads of it for the two variable types.
<newline> is a replacement string for \n in my language and can be ignored.
Answer: if (delimiter == "<newline>") delimiter = "\n";
Why do you need this? Why not use normal strings?
if (thevar.GetType() == typeof(string)) {
What's wrong with normal method overloading?
if (thevar.GetType() == typeof(string)) {...}
if (thevar.GetType() == typeof(string[])) {...}
return null;
If an argument is not valid, throw an ArgumentException. Don't say "I'll just return null, and see where it will go wrong."
temp.Split(new string[] {delimiter}, System.StringSplitOptions.None);
This bit repeats, and should be extracted into a method.
for (int i = 0; i < tempr.Length; i++) {
for (int j = 0; j < tempr[i].Length; j++) {
Use Linq to simplify loop operations.
Here is the cleaned-up version:
public IEnumerable<string> Split(string str, string delimiter)
{
return str.Split(new[]{delimiter}, StringSplitOptions.None);
}
public IEnumerable<string> Split(IEnumerable<string> arr, string delimiter) {
return arr.SelectMany(s => Split(s, delimiter));
} | {
"domain": "codereview.stackexchange",
"id": 17517,
"tags": "c#, performance, strings"
} |
Forcing a steel pipe to rust completely | Question: I understand rust. Fe + H₂O → OH⁻ → Fe(OH)₂ or Fe(OH)₃, and/or FeₙOₘ.
I understand rust prevention and steels that are resistant to rusting. Trying to google this question gives thousands of answers on prevention.
But what I'm interested in is how to force steel to rust. And not in an artsy way, so my steel stuff looks weathered and cool (these are the other thousands of search results for forcing iron/steel to rust). I want to dispose of steel by turning it into iron oxide. I don't actually really want to do this since it's a lot easier to just landfill or recycle, but I thought of it as an alternative and realized it's really not that easy.
The trouble, it seems to me, is that the passivation layer stops rust from going completely through the metal. Can this be forced by running an active corrosion circuit in reverse, or will the reaction still slow to a crawl once a layer of rust builds up on the surface?
Answer: As we in the trade say, "rust never sleeps". Rust develops on the surface, and rust is somewhat protective in some "weathering" steels such as Corten. Small additions of Cu, Cr, and P mostly provide limited slowing of rust. Otherwise there are many tables, based on conditions and location of exposure. The tables will give rust rates like 5 mpy (mils per year; a mil is a thousandth of an inch, pardon my US units). The loss rates will be much higher on seacoasts or where rain often occurs. So with no other information, you can guesstimate your steel will disappear in (THICKNESS divided by 0.005) years. Halve that time if both sides are exposed. | {
"domain": "chemistry.stackexchange",
"id": 15218,
"tags": "oxidation-state, iron"
} |
Why haven't Earth and Venus got any tiny moons? Or have they? | Question: Why haven't some meteoroids gotten caught in Earth's or Venus' orbit?
AFAIK most meteors are tiny fragments from comets. Shouldn't some comet tail at some point have passed through Earth's orbit at velocities suitable for our planet to capture such fragments? And hundreds of thousands of asteroids have been detected. Why haven't the inner planets got asteroid-like moons, as the outer planets have? The asteroid belt isn't too far away.
Earth's large Moon might sweep them away, but that won't explain Venus's lack of tiny moons.
Answer: The strength of the Earth's gravitational field compared to the Moon and the Sun is not enough to capture and hold satellites - there are too many disruptive forces that would rip them away over time.
However, there are some objects at the Lagrangian points: the points where the gravitational fields of the Earth and other objects balance, so that a (likely metastable) orbit is possible.
This gives some details about what might be found at the various Lagrangian points: http://en.wikipedia.org/wiki/List_of_objects_at_Lagrangian_points | {
"domain": "astronomy.stackexchange",
"id": 266,
"tags": "gravity, earth, asteroids, natural-satellites, venus"
} |
Finding subgraphs with high treewidth and constant degree | Question: I am given a graph $G$ with treewidth $k$ and arbitrary degree, and I would like to find a subgraph $H$ of $G$ (not necessarily an induced subgraph) such that $H$ has constant degree and its treewidth is as high as possible. Formally my problem is the following: having chosen a degree bound $d \in \mathbb{N}$, what is the "best" function $f : \mathbb{N} \to \mathbb{N}$ such that, in any graph $G$ with treewidth $k$, I can find (hopefully efficiently) a subgraph $H$ of $G$ with maximal degree $\leq d$ and treewidth $f(k)$.
Obviously we should take $d \geq 3$ as there are no high treewidth graphs with maximal degree $<3$. For $d = 3$ I know that you can take $f$ such that $f(k) = \Omega(k^{1/100})$ or so, by appealing to Chekuri and Chuzhoy's grid minor extraction result (and using it to extract a high-treewidth degree-3 graph, e.g., a wall, as a topological minor), with the computation of the subgraph being feasible (in RP). However, this is a very powerful result with an elaborate proof, so it feels wrong to use it for what looks like a much simpler problem: I would just like to find any constant-degree, high-treewidth subgraph, not a specific one like in their result. Further, the bound on $f$ is not as good as I would have hoped. Sure, it is known that it can be made $\Omega(k^{1/20})$ (up to giving up efficiency of the computation), but I would hope for something like $\Omega(k)$. So, is it possible to show that, given a graph $G$ of treewidth $k$, there is a subgraph of $G$ with constant degree and linear treewidth in $k$?
I'm also interested in the exact same question for pathwidth rather than treewidth. For pathwidth I don't know any analogue to grid minor extraction, so the problem seems even more mysterious...
Answer: See the paper by Julia Chuzhoy and myself on Treewidth sparsifiers.
We show that one can obtain a subgraph of degree at most 3 with treewidth $\Omega(k/\mathsf{polylog}(k))$ where $k$ is the treewidth of $G$.
https://arxiv.org/abs/1410.1016 The proof is shorter than the one for grid minors, but it is still not all that easy and builds on several previous tools.
Suppose you settle for an easier target - degree 4 and treewidth $\Omega(k^{1/4})$ - then you can get it much more easily via a result of Reed and Wood on grid-like minors. https://arxiv.org/abs/0809.0724
Another easy result you can obtain is the following, which is a starting point for some of the more involved proofs. You can get a subgraph of degree $\log^2(k)$
and treewidth $\Omega(k/\mathsf{polylog}(k))$. You can see the treewidth sparsifier paper for the argument to achieve this. | {
"domain": "cstheory.stackexchange",
"id": 5328,
"tags": "co.combinatorics, treewidth, graph-minor, bounded-degree"
} |
Torque angular momentum equation in case of pure rolling | Question: I was doing some problems in rotational mechanics.
In one such problem a solid sphere was kept on an inclined plane and pure rolling was taking place. In the solutions they have applied:
$$\tau = \frac{dL}{dt}$$
That is, the torque equals the rate of change of angular momentum, taken about the bottommost point.
But I know that the above-mentioned equation is valid only in an inertial frame of reference, and the bottommost point is accelerated towards the centre, so it should be a non-inertial frame of reference and hence a pseudo torque should be applied.
Any help will be appreciated.
PS
This is one of the given solutions in the book.
The question just asks the velocity of COM of sphere after it has descended through a height H.
I have already mentioned the necessary details.
But i am posting the whole question.
A uniform sphere of mass m and radius R rolls without slipping down an inclined plane set at an angle to the horizontal.
The question has three parts
(1) Magnitude of the friction coefficient when slipping is absent,
(2) kinetic energy of the sphere t seconds after the beginning of motion, and (3) the velocity of the COM at the moment it has descended through height H.
It is an easy question.
My book mentions five ways to solve the question of which i have posted the one that i did not understand.
Answer: The following analysis is performed in an inertial "lab/ground" reference frame.
Let our system be a collection of point particles labelled by index $i$: $\{m_i,\mathbf{x}_i,\mathbf{v}_i,\mathbf{a}_i\}$
and let $P$ be a general point that may or may not be in motion with respect to the inertial frame.
The angular momentum with respect to point $P$ is defined1 as follows.
$$\mathbf{L}_P=\sum_i m_i\; (\mathbf{x}_i -\mathbf{x}_P)\times(\mathbf{v}_i-\mathbf{v}_P)\tag{1}$$ Taking time derivative on both sides, we have:\begin{align}\frac{d\mathbf{L}_P}{dt}&=\sum_i m_i\;(\mathbf{x}_i -\mathbf{x}_P)\times(\mathbf{a}_i-\mathbf{a}_P) \\
&=\sum_i m_i\;(\mathbf{x}_i -\mathbf{x}_P)\times\mathbf{a}_i-\sum_im_i\;\mathbf{x}_i\times \mathbf{a}_P +\sum_im_i\;\mathbf{x}_P \times\mathbf{a}_P \\ &=\sum_i(\mathbf{x}_i -\mathbf{x}_P)\times \mathbf{F}_i+(\mathbf{x}_{cm}-\mathbf{x}_P)\times(-m\mathbf{a}_P) \\ &=\boldsymbol{\tau}_P+(\mathbf{x}_{cm}-\mathbf{x}_P)\times(-m\mathbf{a}_P) \tag{2}\end{align}
The second term on the RHS of $(2)$ is precisely the "fictitious/pseudo" torque associated with the fictitious force $-m\mathbf{a}_P$, which OP has mentioned.
Let's get back to OP's original question and consider the point of contact of the sphere with the incline to be point $P$. In this case, we observe:
$$\mathbf{a}_P \text{ is parallel to } \mathbf{x}_{cm}-\mathbf{x}_P \stackrel{\text{Using eq. $(2)$}}\Rightarrow \frac{d\mathbf{L}_P}{dt}=\boldsymbol{\tau}_P$$
Also, since the velocity of the point of contact is zero, we observe that $(1)$ can be rewritten as follows:
\begin{align}\mathbf{L}_P&=\sum_i m_i\;(\mathbf{x}_i-\mathbf{x}_{cm})\times (\mathbf{v}_i-\mathbf{v}_{cm})+\sum_i m_i \; (\mathbf{x}_i-\mathbf{x}_{P})\times \mathbf{v}_{cm} \\ &= \mathbf{L}_{cm}+m\;(\mathbf{x}_{cm}-\mathbf{x}_P)\times\mathbf{v}_{cm}\\ L_P &=\frac{2}{5}mR^2\omega+mRv_{cm}\end{align}
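For part (3) of the problem, this torque balance gives the standard result; here is a quick numeric cross-check (a sketch in Python; the incline angle and height are arbitrary example values):

```python
import math

# Arbitrary example values for illustration
g, theta, H = 9.81, math.radians(30), 2.0   # m/s^2, rad, m

# Torque about the contact point P: m g R sin(theta) = dL_P/dt = (7/5) m R^2 * (a/R),
# using L_P = (2/5) m R^2 omega + m R v_cm with v_cm = omega R.
# The mass and radius cancel:
a = (5 / 7) * g * math.sin(theta)

# Descending a height H means rolling a distance d = H / sin(theta) along the incline,
# so v_cm^2 = 2 a d, i.e. v_cm = sqrt(10 g H / 7).
d = H / math.sin(theta)
v_cm = math.sqrt(2 * a * d)

print(a, v_cm)  # v_cm equals sqrt(10*g*H/7), independent of the incline angle
```

Note that the angle drops out of the final speed, as energy conservation requires.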
References
Kirk T. McDonald, Comments on Torque Analyses; eq. $(14)$-$(15)$
1 One may also define angular momentum with respect to point $P$ as follows.
$$\mathbf{L}_P=\sum_i m_i\;(\mathbf{x}_i-\mathbf{x}_P)\times\mathbf{v}_i$$ The differences between the two definitions and detailed discussions are present in $[1]$. | {
"domain": "physics.stackexchange",
"id": 66021,
"tags": "homework-and-exercises, rotational-dynamics, rotational-kinematics"
} |
Can I get turned around backwards in time? | Question: On a surface embedded in Euclidian space, where the metric signature is all positive, it is possible for a particle traveling along a geodesic to encounter a curved bit and then get turned around so that it heads in the opposite direction.
Could there be a manifold that did the same turning-around trick, but in a timelike direction? Would this create particles for which increasing proper time corresponded to decreasing coordinate time?
Answer: Yes, this happens when you enter or exit the inner Cauchy horizon of a rotating Kerr black hole at the right angle, see Madore's article and Hamilton's penrose diagram; this is sometimes referred to as the Carter time machine.
Inside the inner horizon it is even possible to meet another observer who fell through at a different angle, so that his proper time runs in the same direction as the coordinate time while yours runs in the opposite direction. This would have weird effects: not only would you hear the other observer talk backwards, but he would also remember things that are in your future, while what's in your past is in his future.
It would be even weirder if you chose to blow yourself up: the other observer would then see an implosion that creates rather than destroys you, and a bunch of other paradoxical thought experiments would become possible.
The only problem is that when you go through the Cauchy horizon, you get hit by an infinitely blueshifted signal (at the moment you cross that horizon, you go to future infinity and back in an infinitesimal amount of proper time); see Dafermos's lecture.
"domain": "physics.stackexchange",
"id": 67029,
"tags": "spacetime, metric-tensor, geodesics"
} |
Finding the Force of two objects - by using Acceleration but only ONE of the given masses? | Question: I came across the following question in my physics textbook and wanted to try to solve it:
A 1700 kg car is towing a larger vehicle with mass 2400 kg. The two vehicles accelerate uniformly from a stoplight, reaching a speed of 15km/h in 11 s. Find the force needed to accelerate the connected vehicles, as well as the minimum strength of the rope between them.
I did get the correct answer after a while by finding the acceleration (which was 0.3788 m/s^2) and then multiplying it by the mass of the truck, which was 2400 kg. (The answer should be 910 N.)
The part that I don't understand (and I had trouble finding the answer because of this very reason) is: Why is the mass not the sum of both masses of the car AND the truck? Should it not be both, as it specifically mentions that they are tied together, and therefore they technically act as one whole system?
Answer: Forget about the towing vehicle, and focus just on the rope connected to the truck. If you were looking at the scene through a small window, and all you could see was the rope and the truck, what force should there be on that rope in order to accelerate that truck? You don't have to know what is on the other end of the rope - it could have been a winch attached to the earth itself...
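The numbers from the question bear this out; a quick sketch in Python:

```python
m_car, m_truck = 1700.0, 2400.0   # kg
v = 15 / 3.6                      # 15 km/h converted to m/s
t = 11.0                          # s

a = v / t                         # uniform acceleration, ~0.379 m/s^2

F_total = (m_car + m_truck) * a   # force needed to accelerate BOTH vehicles (~1553 N)
F_rope = m_truck * a              # the rope only accelerates the truck (~909 N)
print(F_total, F_rope)
```

The force acting through the car's wheels must accelerate the whole system; the rope only has to accelerate what hangs off its far end.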
Now if you were looking at the force between the wheels of the towing vehicle, and the road surface, that is a force that will be used to accelerate both the car and the truck. If both were of the same mass, and the force on the rope was $F$, then the force on the (sum of the four) wheels would be $2F$ - one $F$ to accelerate the truck, and one to accelerate the car. | {
"domain": "physics.stackexchange",
"id": 28819,
"tags": "newtonian-mechanics, kinematics"
} |
End-of-life date for ROS distributions? | Question:
In a recent mailing list announcement it was brought up that the ROS Electric buildfarm was going to be retired to make room for the ROS Hydro distribution. This led me to wonder: is there an end-of-life date for ROS distributions? I looked at the target platforms REP and the Distributions wiki but I do not see it mentioned. Is a ROS distribution life cycle tied to all of the Ubuntu versions that are supported, the LTS Ubuntu release, or some other metric?
--
Edit:
To be more clear, is the term retire the same as end-of-life? What I mean is, even though ROS Electric will be retired from the build farm, will changes to core ROS packages still be pushed into ROS Electric and just not tested? Is there a date for when changes to core ROS packages are not updated with fixes or improvements? Is there a date for when ROS Electric packages will no longer be available from the repositories?
Originally posted by Thomas D on ROS Answers with karma: 4347 on 2013-03-06
Post score: 6
Original comments
Comment by Thomas on 2013-03-06:
+1 I would like a clear answer on this one. My impression was that the ROS EOL was bound to the associated Ubuntu distributions EOL. Is this true?
Answer:
From what I understand from that email, it is implied that the lifetime of any rosdistro is three rosdistros (2 active + 1 development). So Fuerte will retire when I-ROS is in devel, and Groovy goes when J-ROS is in devel etc.
Originally posted by weiin with karma: 2268 on 2013-03-06
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Thomas D on 2013-03-06:
I should have been more clear -- edited original. Is retire the same as end-of-life?
Comment by tfoote on 2013-04-18:
When it's retired that means that there will be no new binary releases. We will keep hosting the existing software as long as possible. | {
"domain": "robotics.stackexchange",
"id": 13230,
"tags": "ros, rosdistro"
} |
Draw a frequency response graph from a simple equation | Question: How is it possible to draw an audio frequency response graph (x-axis : freq from 20 to 20 000hz, y-axis : dB) from a simple equation like
y[n] = x[n] - x[n-486]
(Example: with this equation I get a nice comb filter that removes the frequency 197.5 Hz and all its harmonics, for an audio file with sampling rate 96 kHz.)
How to turn such an equation into a graphical interpretation like this
http://downloadfreesamples.com/wp-content/uploads/2013/08/EQ-Four-CIS-DSP_3.jpg ?
Then I could see how deep / steep the filtering is.
Thank you
(PS : I continue to read references and online books about filter design that were given to me in previous questions, thanks for these references by the way.)
Answer: The tool of choice here is the z-transform, which is applied to your equation. It transforms sequences of complex numbers to functions in the complex plane. Its most useful properties are that it takes delays to powers of the argument z, and for z = exp(i omega) we get the Fourier transform of the transformed sequence. Look up the details, I'll just show you how it works here.
y[n] = x[n] - x[n-486] ---> Y(z) = X(z) - X(z)*z^(-486)
The equation in z-domain factors on the right hand side to give Y(z) = X(z) ( 1 - z^(-486) ).
Since Y(z) is the output and X(z) is the input, for linear systems we define the transfer function to be Y(z)/X(z) = H(z) and the above equation becomes
H(z) = Y(z)/X(z) = 1 - z^(-486)
Now if we substitute z = exp(i omega) we get the frequency response of your system, which is
H_f(omega) = 1 - exp(i omega)^(-486)
and with the properties of exp() it simplifies to
H_f(omega) = 1 - exp(- 486 i omega)
If you need actual frequency f instead angular frequency omega, just substitute omega = 2 pi f/fs, with sampling rate fs.
Finally, you'll probably want to know just the magnitude of the response, so look at abs(H_f).
In this specific case it's very simple to calculate because abs(H_f) = sqrt( H_f * conj(H_f) ) and conj(H_f) = 1 - exp(+ 486 i omega).
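As a numeric sanity check of these expressions (a sketch in Python with numpy, using the sample rate from the question), abs(H_f) should vanish at the comb's nulls, f = k*fs/486, and reach 2 halfway between them:

```python
import numpy as np

fs = 96000                                     # sampling rate from the question
D = 486                                        # delay in samples; nulls at k*fs/D ~ k*197.5 Hz

f = np.array([fs / D, 1.5 * fs / D])           # first null, and midway between two nulls
Hf = 1 - np.exp(-1j * D * 2 * np.pi * f / fs)  # H_f with omega = 2*pi*f/fs

print(np.abs(Hf))                              # ~[0, 2]: full rejection at the null, +6 dB between nulls
```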
As requested, some code. Here's what you can do in matlab/octave to get the magnitude frequency response with a logarithmic frequency axis and magnitude axis in dB.
% Create a logarithmically spaced frequency axis with range 20Hz to 20kHz
fAxis = logspace( log10( 20 ) , log10( 20000) , 10000);
% calculate the complex frequency response at these frequencies
% Set SampleRate before calling this, like SampleRate = 44100;
Hf = 1 - exp(-1i*486*2*pi*fAxis/SampleRate);
% plot the magnitude spectrum
semilogx(fAxis,10*log10(Hf.*conj(Hf))); | {
"domain": "dsp.stackexchange",
"id": 1234,
"tags": "filters, filter-design, signal-analysis, lowpass-filter, frequency-response"
} |
FFT - preferred way of tone detection | Question: I am using FFT to detect certain frequencies (DTMF). I can clearly see the frequencies in the spectrum. Is there a preferred / optimal way to decide if a certain frequency is present or not by looking at the spectrum?
I can think at least of:
Bin absolute value (ie. bin amplitude)
Difference between maximum bin and second highest
Ratio between maximum bin and second highest
Ratio between maximum bin and sum of all others
Answer: Number 4: Comparing narrow-band energy to the energy of the total spectrum is a commonly described method of DTMF detection. This may involve using more than one FFT result bin and/or interpolation between bins.
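A minimal sketch of that narrow-band-vs-total energy test, in Python with numpy (the tone, sample rate, FFT length, and threshold here are arbitrary example choices, not a tuned DTMF detector):

```python
import numpy as np

fs, N = 8000, 512                      # example sample rate and FFT length
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 770 * t)        # a 770 Hz DTMF row tone

X = np.fft.rfft(x * np.hanning(N))
power = np.abs(X) ** 2

k = int(round(770 * N / fs))           # FFT bin nearest the tone
band = power[k - 1:k + 2].sum()        # narrow-band energy: peak bin plus its neighbours
ratio = band / power.sum()             # compare to the total spectral energy

detected = ratio > 0.8                 # threshold is an arbitrary choice
print(ratio, detected)                 # ratio is close to 1 for a clean tone
```

Windowing before the FFT keeps the tone's energy concentrated in a few bins even when the frequency is not bin-centered.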
The other methods depend on whether one has an accurate estimate of the signal level, noise floor, and how well the tone frequency is centered in a single FFT bin (related to frequency, sample rate, FFT length and window). | {
"domain": "dsp.stackexchange",
"id": 6316,
"tags": "fft, frequency-spectrum"
} |
Why is Enceladus the only geologically active moon among its neighbours? | Question: My understanding of why Enceladus is geologically active is that tidal forces from Saturn and - to a lesser extent - from the nearby larger moon Dione provide heat to the moon's interior, just like Jupiter does for Io.
Should the other mid-to-large-sized moons Mimas, Tethys, Dione and Rhea not also exhibit similar activity from the same forces? Granted, Dione and Rhea are further from Saturn and thus less susceptible to tidal stresses, but what about Tethys and Mimas? What special attributes does Enceladus possess that has allowed it to be geologically active in modern times? Is it purely down to the fact that the interior of Enceladus is more ice than rock compared to the other moons, or are there more factors to consider?
Answer: This puzzle is something called the Mimas paradox. The two bodies are nearly twins, but Mimas is substantially less active than Enceladus. There are currently several proposed solutions:
Mimas cooled quickly, before it entered its resonance with Tethys (which contributes to tidal heating, just as Dione does for Enceladus). A "hot start" from rapid radioactive decay (of aluminum-26 and iron-60) shortly after Enceladus' formation could have allowed its subsurface ocean to stay liquid.
Enceladus is rockier and therefore has a higher density. This would have led to quicker cooling, and would also mean that tidal dissipation could be higher for Enceladus than for Mimas. It does seem a bit paradoxical - one would imagine that more rock would mean less water ice and therefore a smaller subsurface ocean - but this isn't a problem.
It's worth noting that Tethys is thought to be made almost entirely of ice. If the rocky-Enceladus hypothesis is true, then that could be a major factor in the lack of geological activity on Tethys. However, Tethys likely once had a subsurface ocean. Dione currently does, and experiences tidal heating; however, its semi-major axis is large enough that the effects are not as strong as on Enceladus. | {
"domain": "astronomy.stackexchange",
"id": 2522,
"tags": "natural-satellites, saturn, enceladus"
} |
Rough, easy DIY method of measuring magnetic field strength | Question: How to easily, using standard DIY equipment measure the strength of magnetic field generated by a permanent magnet?
Narrowing down the "loose language" of the above:
strength of magnetic field: either the flux density B at a given point relative to the magnet, or the magnetic flux ΦB over the area enclosed by a loop made of wire - whichever is easier to measure; either of those is fine.
standard DIY equipment: commonly found household items, rudimentary tinkering tools. Soldering tools, multimeter, simple electronic parts, or maybe an easy to make spring-based dynamometer - anything of this class of complexity.
The distance of measurement is such that the field is easily noticeable through simplest methods e.g. another magnet held in hand exerts perceptible force - distance of maybe 5cm away at most.
The measurement doesn't need to be very accurate - error of order of 50% is quite acceptable. Simplicity is preferred over accuracy.
Rationale: trying to estimate what coil I need to generate a sufficient amount of power to light an LED with a frictionless generator based on that magnet (knowing the speed of movement of the magnet and the location of the coil relative to the path of the magnet). If you know other simple methods of doing that (without the need to measure the field), they are most welcome too.
Answer: For what it's worth: http://www.coolmagnetman.com/magmeter.htm - a home-made device based on a Hall effect device - for about $40. | {
"domain": "physics.stackexchange",
"id": 62844,
"tags": "electromagnetism, magnetic-fields, measurements, home-experiment"
} |
How does electron know when to change into a wave? | Question: It is known that electron behaves as a wave also.
How does electron know that it has to change into a wave?
Are there any factors that influence the behavior of electron changing into wave?
Answer: The electron is always a wave.
The electron is a wave, as diffraction and interference experiments have shown.
Waves come in an infinity of "shapes". Some kinds of shapes have some properties, and others have other properties. Examples of properties are position and momentum. The two shapes of the electron's wave having these properties are
When the wave is concentrated at a point. In this case, it has a definite position. It doesn't have a definite momentum.
When the wave is a plane wave, having a definite frequency (more complex shapes can be obtained by superposing various frequencies). In this case, there is no definite position, since the plane wave extends in all space.
The problem is: how does the electron know whether to be a plane wave or to be concentrated at a point? The answer is strange. If you measure the frequency (or momentum), you will find that the electron is a plane wave. If you measure the position, you will find that the wave is concentrated at a point.
Yes, you understood well. The electron, and any other particle for that matter, has precisely the kind of shape for which the property you want to measure is defined. Measure another property, which is not compatible with the property previously measured, and you will find it has another shape. Now, this may look strange, but this is how it happens.
Wait, there is more, when more particles of the same kind are present. Then, saying it is a wave is not enough. But this is another story. | {
"domain": "physics.stackexchange",
"id": 9209,
"tags": "quantum-mechanics, electrons"
} |
Does the universe expand at the same rate everywhere in the universe? | Question: Specifically, I am wondering if some areas of the universe expand faster than other areas and whether the faster expanding areas diffuse the expansion through the slower expanding areas or does the expansion occur at a uniform rate throughout the entire universe.
Answer: What's outside the observable Universe, we can't say anything about, but averaged over large enough scales ($\gtrsim$ a billion lightyears), it does indeed seem to be expanding uniformly.
However, the presence of mass, or more generally energy, retards the expansion. This means that on the scale of clusters of galaxies, the Universe expands more slowly, and on the scale of galaxy groups, the galaxies' mutual gravitational attraction will prevent them from receding from each other. This is also why our galaxy, Solar system, planet, and bicycles will never get torn apart (unless the cosmological constant is not a constant).
Conversely, in mass underdensities, i.e. the huge voids between clusters and filaments of gas and galaxies, expansion is increased (relative to denser regions). In fact, it has been hypothesized that the observed accelerated expansion of the Universe is not due to dark energy, but could be an "illusion" caused by our accidentally living in the center of a huge underdensity (e.g. Zibin et al. 2008). More recent observations seem to rule out this possibility, though.
"domain": "astronomy.stackexchange",
"id": 1069,
"tags": "expansion, cosmological-inflation, redshift, dark-energy"
} |
Where does the law of conservation of momentum apply? | Question: Take the scenario of a snowball hitting a tree and stopping. Initially, the snowball had momentum but now neither the snowball nor tree have momentum, so momentum is lost (thus the law of conservation of momentum is violated?). Or since the tree has such a large mass, is the velocity of the tree is so small that it's hardly noticeable?
If the explanation is the latter, this wouldn't hold for a fixed object of smaller mass. So in that case, how would the law of conservation of momentum hold?
Answer: Momentum is conserved only if there is no net external force on the system.
Consider the snowball and the tree as the system. In your case, the earth provides an external force on the tree, so the momentum of the snowball/tree system is not conserved. If the tree is "suspended" (not attached to the ground) momentum would be conserved, but the final velocity of the tree would be very small and hardly noticeable due to its large mass.
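To see how small, here is the order-of-magnitude arithmetic (a sketch in Python; the snowball's mass and speed are made-up example values):

```python
m_snow, v_snow = 0.15, 10.0   # kg, m/s (made-up example values)
m_earth = 5.97e24             # kg (the tree's mass is negligible by comparison)

# Conservation of momentum for the snowball + tree + earth system:
#   m_snow * v_snow = (m_snow + m_earth) * v_final
v_final = m_snow * v_snow / (m_snow + m_earth)
print(v_final)                # ~2.5e-25 m/s: utterly unmeasurable
```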
If the system is taken to be the snowball, tree, and earth, momentum is conserved, but the final velocity of the tree and earth (assuming the tree stays attached to the earth) is infinitesimally small due to the very large mass of the earth. | {
"domain": "physics.stackexchange",
"id": 75142,
"tags": "newtonian-mechanics, momentum, conservation-laws, inertial-frames, collision"
} |
User and picture models | Question: I have the following models:
User
Picture
Variant
User has_many pictures, picture has_many variants.
Variant has a price value, and I am trying to find the total price of all pictures of the selected current user (e.g. for user id 15).
First I tried like this:
def value(user)
total = 0
user.pictures.each do |picture|
picture.variants.each do |variant|
total += variant.price
end
end
total.to_f
end
How could I improve this code so that I can avoid the N+1 problem?
Answer: You could do this with just one query. There are variations how to construct the query but I would probably do something like this:
picture_ids = user.pictures.select(:id) # Will be used as a subquery if you use select
Variant.where(picture_id: picture_ids).sum(:price)
This will only generate one query, and sum all the prices using sql, so you don't lose performance.
EDIT: As pointed out in the comment below, if you are using MySQL (or a SQL server which does not handle subqueries that well) you can use pluck instead of select. That will make a separate query to fetch all the picture_ids, and everything else will still work the same way. | {
"domain": "codereview.stackexchange",
"id": 26360,
"tags": "ruby, ruby-on-rails"
} |
It seems that expectation value of $H$ on coherent states is independent of time? But why? | Question: Let's say the particle is in the state $| \psi(0) \rangle = \exp(-i\alpha p/\hbar) |0 \rangle$, where $p$ is the momentum operator.
I have to show that $| \psi(0) \rangle$ is a coherent state and to compute the expectation value of $H = \hbar \omega (n+1/2)$.
The first question is simple and I obtain
$$| \psi(0) \rangle = \exp(-|\alpha|^2/2) \sum_{n=0}^{+\infty} \frac{\alpha^n}{\sqrt{n!}}| n \rangle$$
and, of course, it's easy to verify that:
$$a| \psi (0) \rangle = \alpha |\psi(0) \rangle$$
and this verify that $| \psi(0) \rangle$ is an eigenket of the annihilation operator, by definition of coherent state.
I don't understand entirely, instead, the physical mean of the second question.
Firstly, we have that $$| \psi(t) \rangle = \exp(-|\alpha|^2/2) \sum_{n=0}^{+\infty} \frac{\alpha^n}{\sqrt{n!}}\exp(-i\omega(n+1/2)t)| n \rangle$$
Noting that $a| \psi(t) \rangle = \alpha\cdot \exp(-i\omega t)|\psi(t) \rangle$, to compute the expectation value of the Hamiltonian we have $$\langle \psi(t)|H| \psi(t) \rangle = \langle \psi(t) | \hbar\omega(a^{\dagger}a+1/2)|\psi(t) \rangle = \hbar \omega(|\alpha|^2+1/2).$$
Is this correct? By the way, if it's is correct, it means that the expectation value of $H$ in a coherent state is independent of time. But why?
Coherent states do not lose their shape over time, and I think: is this the answer? Because the state remains the same over time?
Answer: As the Hamilton operator $H$ and the time-evolution operator $U(t)=e^{-iHt/\hbar}$ commute, the expectation value of $H$ in any state $|\psi(t)\rangle = U(t) | \psi(0)\rangle$ is independent of $t$. This has nothing to with $|\psi(0)\rangle$ being a coherent state or not. | {
"domain": "physics.stackexchange",
"id": 98303,
"tags": "quantum-mechanics, harmonic-oscillator, hamiltonian, time-evolution, coherent-states"
} |
Query about the evolution of the sun, from hydrogen shell burning to helium flashpoint | Question: My understanding of the transition from hydrogen shell burning to helium flash point:
The hydrogen shell burning occurs because the hydrogen in the sun's core has been used up during the main sequence; gravity wins at the core and it contracts. The temperature goes up, and there is a region around the core where the temperature becomes large enough to start fusion of the hydrogen in the shell. The core continues to contract because the increase in pressure and resistance to gravity from the shell isn't large enough to stop the contraction. The helium begins to fuse. There is a big flash because the helium fuses uncontrollably.
The question:
Why does the helium fuse uncontrollably? Also, as the core contracts, shouldn't larger and larger regions around the core start to fuse hydrogen? I.e., shouldn't the shell actually increase in size as the sun goes through the hydrogen shell burning phase?
Answer: Stars with an initial mass below about 2.1 solar masses will form a helium core with a high degree of electron degeneracy. Above this core is a shell of fusing hydrogen that deposits more helium onto the core. The core contracts as it becomes more massive; it is also heated by the fusing shell that surrounds it. Eventually a temperature is reached that is large enough to ignite the helium.
The heat capacity of the degenerate electrons in the core is very low, yet they contribute most of the gas pressure. Most of the fusion energy, at least initially, goes into raising the temperature of the helium ions in the core, and this does not (initially) increase the electron degeneracy pressure, which isn't very temperature sensitive. This allows the helium fusion rate to increase very rapidly; this is known as the "helium flash".
Your second question is puzzling. The approach to the helium flash is an ascent of the red giant branch. The luminosity of the star is increasing because more and more power is being produced in the hydrogen-burning shell as a result of increasing temperature and more hydrogen being brought up to fusion temperatures. | {
"domain": "physics.stackexchange",
"id": 57760,
"tags": "thermodynamics, astrophysics"
} |
Where do the Goldstone Boson degrees of freedom come from? | Question: The punchline of Goldstone's theorem is well known. When a continuous symmetry breaks
necessarily, new massless (or light, if the symmetry is not exact) scalar particles appear in the spectrum of possible excitations. There is one scalar particle—called a Nambu–Goldstone boson—for each generator of the symmetry that is broken, i.e., that does not preserve the ground state. The Nambu–Goldstone mode is a long-wavelength fluctuation of the corresponding order parameter.
Where do the degrees of freedom of the Goldstone bosons come from?
Answer: Well, Goldstone's 1961 sombrero potential model amply illustrates the basics. Let me vulgarize them.
In O(2) language, being cavalier about normalizations,
he thinks of a complex scalar field theory, whose real and imaginary components resolve to a two real scalar d.o.f. system, with prototype potential
$$
\frac{\lambda}{4} (A^2 + B^2 +\epsilon v^2)^2 .
$$
The O(2) isorotation symmetry sends A to $A\cos \theta- B \sin \theta$; and B to the orthogonal combination. The conserved current is $J_\mu = A\partial _\mu B - B\partial_\mu A $, i.e. $\partial \cdot J=0$, and
$
i\theta [Q, A]=\delta A ~,
$
and likewise for B, so then
$$
\delta A =- \theta B, ~~~~\delta B=\theta A ~.
$$
Slide the parameter ε from 1 (generic for non vanishing positive) ; to 0; to -1 (generic for non vanishing negative), and monitor the qualitative fate of the two fields along these three cases.
For ε=1, A and B are twins. They have the same mass, $\sqrt{\lambda} v$, as the minimum of the potential is at $\langle A\rangle =\langle B\rangle = 0$, and so
$\langle \delta A\rangle= \langle \delta B\rangle= 0 $, that is, $Q|0\rangle=0$, so the vacuum is invariant under isorotation, $e^{i\theta Q}|0\rangle=|0\rangle$. The potential is a single symmetric quartic well centered at the origin.
Decreasing ε to 0, the mass of A and B decreases to 0, but they remain twins, and their rotation into each other is still linear, and the vacuum is still symmetric—all the above relations (except for the vanished mass) are the same as above.
As soon as ε goes negative, something catastrophically, qualitatively different happens: SSB. Take ε to be -1 for simplicity. Now the quartic potential morphs into the iconic Goldstone sombrero, and the minimum is this entire flat circle on the A-B plane.
The symmetry slides you around that degenerate bottom (orbit) without resistance.
A choice is thus forced for the ground state: suppose you pick, arbitrarily, $\langle B\rangle=0$ and $\langle A\rangle =v$. Since you are interested in excitations around this vacuum, change to variables of convenience, $A\equiv v+h$, so h is the excitation around this vacuum with $\langle h\rangle=0$.
The quartic potential now expands to
$$
\frac{\lambda}{4} \left ( B^4+ h^4 +2h^2B^2 +4vh(B^2+h^2) +4v^2 h^2 \right).
$$
There is no mass term for B, but a hefty mass for h (formerly known as A ; you could take this mass to infinity, if inclined, by sending λ there), similar to the mass it used to have before the SSB.
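That expansion, and the absence of a B mass term, are easy to verify symbolically; a sketch in Python using sympy:

```python
import sympy as sp

lam, v, h, B = sp.symbols('lambda v h B', positive=True)

# The epsilon = -1 potential, expanded around the chosen vacuum <A> = v, <B> = 0
V = sp.expand(sp.Rational(1, 4) * lam * ((v + h)**2 + B**2 - v**2)**2)

# No B^2 term survives (set h = 0 to isolate the quadratic piece): B is massless
assert V.coeff(B, 2).subs(h, 0) == 0

# while h keeps a quadratic term (1/2) m_h^2 h^2 with m_h^2 = 2 lambda v^2
assert sp.simplify(V.coeff(h, 2).subs(B, 0) - lam * v**2) == 0
```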
The crucial part is $\langle B \rangle= \langle h \rangle= \langle \delta A \rangle=0$ , but $\langle \delta B \rangle= \theta v$ , non-vanishing : the hallmark of the Goldstone boson, since $\delta B= \theta v+ \theta h$ ; call v the order parameter. This shift nonlinearity in the goldston's transform precludes the existence of a mass term for it, as that term would not be invariant under the symmetry, still all-powerful, but a bit hidden (whom are we fooling? This is dubbed the Nambu-Goldstone realization).
The current $J_\mu = v\partial_\mu B+ h\partial _\mu B - B\partial_\mu h $ , is, of course, still conserved, but, check that now $Q|0\rangle\neq 0 $ : the symmetry shifts the vacuum around the bottom of the sombrero, exciting sloshing Bs out of it—it taps the bowl of jello with a spoon. $|B(p=0)\rangle$ is degenerate with $|0\rangle$, as $\langle B(p)| J_\mu (x)|0\rangle\sim e^{ip\cdot x} v p_\mu$ .
Check that $[Q,H]=0$ , so $H (Q|0\rangle)= Q(H|0\rangle)= QE_0|0\rangle= E_0(Q|0\rangle)$ .
By contrast, oscillations of the massive h (the σ, or "Higgs") correspond to rolling up and down the walls of the valley of the sombrero, transversely to the valley axis.
The takeaway: As ε slides from 1 to 0 to -1, the mass of the "higgs", A/h, decreases to 0 and then back to above its former value; by contrast, the mass of B decreases to 0, and stays there: but, suddenly, for ε<0, it morphs into a goldston. | {
"domain": "physics.stackexchange",
"id": 44753,
"tags": "quantum-field-theory, solid-state-physics, symmetry, symmetry-breaking"
} |
3d holograms - How are they created? | Question: How do 3d holograms work exactly? I read that there is a laser in use, but how are the multiple perspectives generated, and how is the light trapped in a certain area to create the effect?
Furthermore, how do you generate different colors? Is it still using multicolored projectors or is there something special? What kinds of lenses are used?
Answer: The Wikipedia article that Anna mentioned is an excellent description of holography and I'm not going to try and compete with it, but since I'm guessing that you're not a physicist the following might help make things clearer.
When you "see" things, you see them because light from them enters your eye, gets focussed by the lens and hits the retina. In addition you see in 3D because the images recorded by your left and right eyes are slightly different, and the brain can reconstruct a 3D image from the differences.
So if you're looking at a mouse (to use the example from the Wikipedia article) it's the light reflected off the mouse that the eye uses to "see" the mouse. Suppose you could come up with some clever trick to remove the mouse but still send the light to your eyes as if it had come from the mouse. Your brain couldn't tell the difference because your eyes are still receiving the same light as when the mouse was there. This is what a hologram does.
A hologram is a pattern of light and dark areas. When you shine a laser onto a hologram the light and dark areas scatter the light by a process called diffraction. The clever bit is that the light is scattered in exactly the same way as if there were a mouse there, so your brain sees light that looks as if it has come from a mouse, so you see a mouse. It appears in 3D because the hologram scatters light differently depending on the angle you're looking at it, so your left and right eye receive differently scattered light just as they would from a real mouse.
You might think it would be tremendously difficult to make a hologram to scatter light in just the right way to make it appear as a mouse, but actually you make a hologram from a real mouse i.e. it's just a type of photograph.
It's hard to make multicoloured holograms because to "see" the hologram you have to shine a laser on it, and lasers are just a single colour. You could use three lasers, e.g. a red, green and blue laser, but annoyingly the hologram scatters different coloured light in different ways and your multicoloured hologram would be very blurred.
I hope this helps - to get any further you'll need to work through the Wikipedia article, and also understand what diffraction is. | {
"domain": "physics.stackexchange",
"id": 18365,
"tags": "optics, visible-light, hologram"
} |
Complexity of cubic graph decomposition | Question: I am aware that deciding the existence of a decomposition of a cubic graph into edge-disjoint claws is polynomial-time solvable, since a cubic graph has a decomposition into edge-disjoint claws if and only if it is bipartite.
What is the complexity of deciding the existence of decomposition of a cubic graph into vertex disjoint claws? Is it NP-complete?
In the former problem, we partition the edge set into edge-disjoint claws, while in the latter we partition the vertex set into vertex-disjoint claws.
Answer: It turns out that the problem is equivalent to the 1-perfect code problem in cubic graphs. Therefore, deciding the existence of a decomposition of a cubic graph into vertex-disjoint claws is NP-complete.
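As an aside, the polynomial-time test behind the edge-disjoint case is just bipartiteness checking. A minimal BFS 2-colouring sketch (the adjacency-dict input format is an assumption for illustration):

```python
from collections import deque

def is_bipartite(adj):
    """BFS 2-colouring; adj maps each vertex to an iterable of neighbours."""
    colour = {}
    for start in adj:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in colour:
                    colour[w] = 1 - colour[u]
                    queue.append(w)
                elif colour[w] == colour[u]:
                    return False   # odd cycle found: not bipartite
    return True
```

For example, the cubic graph $K_{3,3}$ passes while $K_4$ fails.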
"domain": "cs.stackexchange",
"id": 5369,
"tags": "complexity-theory, graphs"
} |
Can the finite dimensional irreducible $(j_+,j_-)$ representations of the Lorentz group $SO(3,1)$ be unitary? | Question: Since the Lorentz group $SO(3,1)$ is non-compact, it doesn't have any finite dimensional unitary irreducible representation. Is this theorem really valid?
One can take complex linear combinations of the hermitian angular momentum generators $J_i^\dagger=J_i$ and the anti-hermitian boost generators $K_i^\dagger=-K_i$ to construct two hermitian generators $N_i^{\pm}=J_i\pm iK_i$. Then, it can be easily shown that the complexified Lie algebra of $SO(3,1)$ is isomorphic to that of $SU(2)\times SU(2)$. Since the generators are now hermitian, the exponentiation of $\{iN_i^+,iN_i^-\}$ with real coefficients should produce finite-dimensional unitary irreducible representations. The finite-dimensional representations labeled by $(j_+,j_-)$ are therefore unitary.
$\bullet$ Does it mean we have achieved finite dimensional unitary representations of $SO(3,1)$?
$\bullet$ If the $(j_+,j_-)$ representations are for some reason non-unitary (why, I do not understand), what is the need for considering such representations?
$\bullet$ Even if they are not unitary (for a reason I don't understand yet), they tell how classical fields transform such as Weyl fields, Dirac fields etc. So what is the problem even if they are non-unitary?
Answer: The statement "Non-compact groups don't have finite-dimensional unitary representations" is a heuristic, not a fact. $(\mathbb{R},+)$ is a non-compact Lie group that has non-trivial finite-dimensional unitary representations. However, the Poincaré group and the Lorentz group really don't have any finite-dimensional unitary representations.
Your construction fails because the complexification $\mathfrak{so}(1,3)_\mathbb{C}$ is only isomorphic to the complexification $(\mathfrak{su}(2)\oplus\mathfrak{su}(2))_\mathbb{C}$, not to the real Lie algebra $\mathfrak{su}(2)\oplus\mathfrak{su}(2)$. You found a unitary representation of $\mathfrak{su}(2)\oplus\mathfrak{su}(2)$ itself, but this doesn't give you a unitary representation of the complexification, nor of $\mathfrak{so}(1,3)$.
We care about those finite-dimensional representations of $\mathrm{SO}(1,3)$ even if they are not unitary because these are the representations on the target spaces of fields. The representation that needs to be unitary is the representation on the quantum space of states, but not on the target space of fields. Clearly, a vector field transforms in the "standard" representation of $\mathrm{SO}(1,3)$ and doesn't care that it's not unitary because the target space is $\mathbb{R}^{1,3}$ which isn't even a complex vector space to begin with! There is no "problem" with these representations, they just are not the representations we need on the Hilbert space of states, which are projective representations of $\mathrm{SO}(1,3)$, which are equivalent to unitary linear representations of $\mathrm{SL}(2,\mathbb{C})$, its universal cover. For more on the necessity of projective representation, see this Q&A of mine. | {
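A concrete way to see the non-unitarity: in the $(1/2,0)$ representation, with the common convention $J_z=\sigma_z/2$ and $K_z=-i\sigma_z/2$ (an assumed convention, matching the anti-hermitian boosts above), rotations exponentiate to unitary matrices while boosts do not:

```python
import numpy as np

theta, eta = 0.7, 0.7
# exp(-i*theta*J_z) with J_z = sigma_z/2: a phase on each component (unitary)
rotation = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
# exp(-i*eta*K_z) with K_z = -i*sigma_z/2: a real rescaling (not unitary)
boost = np.diag([np.exp(-eta / 2), np.exp(eta / 2)])

def is_unitary(M):
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]))

assert is_unitary(rotation)
assert not is_unitary(boost)
```

The boost matrix is Hermitian with eigenvalues $e^{\pm\eta/2}$, so it stretches spinor norms; no choice of finite-dimensional inner product fixes this for all boosts.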
"domain": "physics.stackexchange",
"id": 35501,
"tags": "group-theory, group-representations, lorentz-symmetry, classical-field-theory, poincare-symmetry"
} |
How can -G/T and -A/T be thought of as 'disguised' entropies? | Question: I understand that entropy is defined as $\mathrm{d}S=\mathrm{d}q/T$ for a reversible change but I fail to see why $-G/T$ and $-A/T$ can be thought of as 'disguised' entropies. This is a question on my problem sheet at university and my tutor has hinted: "You need to think about the total entropy change for a process".
Answer: The fundamental equation of (chemical) thermodynamics $\Delta G = \Delta H - T\Delta S$ may be divided by $(-T)$:
$$-\frac{\Delta G}{T} = -\frac{\Delta H_{sys}}{T} + \Delta S_{sys}$$
In this equation $\frac{\Delta H_{sys}}{T}$ is the entropy change of the system by “heat” exchange with the surroundings, hence $\frac{\Delta H_{sys}}{T} = -\Delta S_{surroundings}$. On this basis, the above equation may be identified as:
$$\Delta S_{total} = \Delta S_{surroundings} + \Delta S_{sys}$$
$\Delta S_{sys}$ and $\Delta S_{surroundings}$ strive mutually to a maximum of $\Delta S_{total}$.
$\Delta G$ is a disguised entropy change, because $\Delta H$ is intrinsically an entropy change too, as explained above.
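The identity $-\Delta G/T=\Delta S_{total}$ is exact and easy to verify with numbers; the values below are made up for illustration:

```python
# Illustrative numbers (assumed, not from the text): an exothermic process at 298.15 K.
dH_sys = -285.8e3      # J/mol
dS_sys = -163.0        # J/(mol K)
T = 298.15             # K

dS_surr = -dH_sys / T                  # entropy gained by the surroundings
dS_total = dS_sys + dS_surr
dG = dH_sys - T * dS_sys               # Delta G = Delta H - T * Delta S

# -Delta G / T reproduces the total entropy change exactly:
assert abs(-dG / T - dS_total) < 1e-9
```

Here $\Delta S_{total}>0$ even though $\Delta S_{sys}<0$: the entropy dumped into the surroundings (the $-\Delta H/T$ piece hiding inside $\Delta G$) dominates.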
"domain": "chemistry.stackexchange",
"id": 8612,
"tags": "thermodynamics, free-energy, entropy"
} |
Why does flux in closed surface remain constant if exterior charge is altered? | Question: Q. Charges $q_1$ and $q_2$ lie inside and outside respectively of a closed surface $S$. Let $E$ be the field at any point on $S$ and $\Phi$ be the flux of $E$ over $S$.
One of the answers is: if $q_2$ changes, $E$ will change but the flux will not change.
According to Gauss law,
$$\Phi = \oint_S E \cdot ds \, . \tag{1}$$
Also,
$$\Phi_\text{external} = q/\epsilon_0 \, . \tag{2}$$
But according to the given answer if the electric field $E$ is changing then the flux should also change since $ds$ is constant.
Also, by formula (2), the flux coming out of the surface should remain constant, as $q_1$ inside is constant.
In short, electric field changes, $q_2$ changes, $ds$ is constant. Then, by the given formulas how can the flux change?
The two formulas are contradictory. The answer given in the book is surely correct.
Answer:
But according to the given answer if the electric field $E$ is changing then the flux should also change since $ds$ is constant.
As you say, $E$ does change.
However, it changes both where it enters $S$ and where it exits $S$.
In the integral, the sign of $E \cdot ds$ depends on whether the field is pointing into or out of the surface.
To make this more clear, it is better to remember that $E$ and $ds$ are both vectors.
We should really write $\vec{E}$ and $d\vec{S}$, thus writing the integral as
$$\Phi = \oint_S \vec{E} \cdot d\vec{S} \, .$$
The vector $d\vec{S}$ has magnitude given by the size of the area element and by convention the direction points outward from the surface.
Now it is clear that if the field is pointing into the surface the contribution to the integral is negative, while if the field points out from the surface the contribution is positive.
This is illustrated in the diagram.
So you see that if we e.g. double the value of $q_2$ we double the amount of field entering and exiting $S$, so the total contribution is unchanged (and in particular it's still zero).
We could also move $q_2$ around, while keeping it outside of $S$.
However, even moving $q_2$ around does not change $\Phi$.
The reason for this is quite deep, but can be derived from the simple fact that the electric field of a charge has zero divergence,
$$\vec{\nabla} \cdot \vec{E} = 0 \, .$$
The divergence theorem says that given a vector field $\vec{U}$ and a surface $S$ which encloses a volume $V$,
$$\int_V \vec{\nabla}\cdot \vec{U} dV = \int_S \vec{U} \cdot d\vec{S} \, .$$
Since for the electric field of a charge $\vec{\nabla}\cdot \vec{E}$ is always zero, then the integral is zero no matter whether we move the charge around or change its amplitude.
Now, there is a very important detail I left out.
The divergence of $\vec{E}$ is not truly zero everywhere.
In fact, it is zero everywhere except right at the charge itself.
At the charge, the divergence is actually infinite.
The proper way to write the divergence of the electric field caused by a charge $q$ at position $\mathbf{x}_0$ is
$$\vec{\nabla} \cdot \vec{E} = \delta(\mathbf{x} - \mathbf{x}_0) q / \epsilon_0 \, .$$
If you plug this into the divergence theorem you find that the flux through a surface $S$ caused by this charge is zero if the charge is not inside the surface and $q/\epsilon_0$ if the charge is inside the surface.
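One can check all of this numerically; a small sketch (my own units, chosen so that $q/4\pi\epsilon_0 = 1$, which makes the expected flux for an enclosed charge $4\pi$):

```python
import math

def flux_through_sphere(cx, cy, cz, R=1.0, n=120):
    """Midpoint-rule estimate of the flux of a unit point charge's field
    (units with k*q = 1) through a sphere of radius R centred at the origin."""
    dtheta, dphi = math.pi / n, math.pi / n   # dphi = 2*pi / (2*n)
    flux = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta
        sin_t, cos_t = math.sin(theta), math.cos(theta)
        dA = R * R * sin_t * dtheta * dphi    # scalar area element
        for j in range(2 * n):
            phi = (j + 0.5) * dphi
            x = R * sin_t * math.cos(phi)
            y = R * sin_t * math.sin(phi)
            z = R * cos_t
            rx, ry, rz = x - cx, y - cy, z - cz
            r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
            # E . n dA, with outward unit normal n = (x, y, z)/R
            flux += (rx * x + ry * y + rz * z) / (R * r3) * dA
    return flux

four_pi = 4 * math.pi
assert abs(flux_through_sphere(0.0, 0.0, 0.0) - four_pi) < 0.05
assert abs(flux_through_sphere(0.3, 0.0, 0.0) - four_pi) < 0.05   # moving q1 inside: unchanged
assert abs(flux_through_sphere(0.0, 0.0, 3.0)) < 0.05             # q2 outside: zero flux
```

The inward (negative) and outward (positive) contributions from the exterior charge cancel to numerical precision, exactly as argued above.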
"domain": "physics.stackexchange",
"id": 23976,
"tags": "electrostatics, electricity, charge"
} |
Energy Density of Radiation | Question: What does the term energy density $u(\nu,T)$ actually denote? Is it the energy density of cavity radiation or of the radiation emitted by a blackbody? Is it uniform? While we derive an expression for it, we consider cavity radiation, and we use this expression in proving Stefan's law. I am confused. Please explain.
Answer: Imagine I build a device which will (quite suddenly) close a set of walls around a volume of space. And the walls are perfectly transparent except in a narrow frequency band near $\nu$ where there are perfectly reflecting.
If I stick the machine into a cavity and close it, I will capture some light (all with frequency $\nu$) inside the newly formed box, and I can transport it somewhere and then let it out and measure how much energy was carried therein.1
Now, I know the volume $V$ of the box and the total energy $E$ of the light trapped therein, so I can compute $u = u(\nu) = \frac{E}{V}$. With lots of boxes I could do that for many frequency bands.
Finally, if I experiment with the light grabber in cavities of different temperatures I will find that
The energy density is largely independent of the material of the cavity.2
The densities are strongly dependent on the band I select.
The densities are strongly dependent on the temperature of the cavity.
So it makes sense in this case to write the energy density as a function of frequency and temperature:
$$u = u(\nu,T)\,.$$
1 Or if I can make precise enough measurements I can even measure the change in the box's mass.
2Here I am assuming that the thermal spectrum overwhelms discrete excitations. | {
"domain": "physics.stackexchange",
"id": 51061,
"tags": "quantum-mechanics, thermal-radiation"
} |
Small text-based fight | Question: I plan to flesh this out into a hopefully full 2-3 minute text adventure game, and the most fun part to start with is the combat.
The fight function down at the bottom repeats itself quite a bit, the help() function call is in multiple places and I was wondering if there's any way to reduce that? Also general tips on how I can improve would be appreciated.
# -*- coding: utf-8 -*-
import random
from colorama import Fore, Back, Style
from colorama import init
from Tkinter import *
init()
'''
import pygame, sys
from pygame.locals import *
# set up pygame
pygame.init()
# set up the window
windowSurface = pygame.display.set_mode((500, 400), 0, 32)
pygame.display.set_caption('Hello world!')
# set up the colors
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)
# set up fonts
basicFont = pygame.font.SysFont(None, 48)
# set up the text
text = basicFont.render('Hello world!', True, WHITE, BLUE)
textRect = text.get_rect()
textRect.centerx = windowSurface.get_rect().centerx
textRect.centery = windowSurface.get_rect().centery
# draw the white background onto the surface
windowSurface.fill(WHITE)
# draw a green polygon onto the surface
pygame.draw.polygon(windowSurface, GREEN, ((146, 0), (291, 106), (236, 277), (56, 277), (0, 106)))
Attack: The amount of damage you do.
Health: Amount of damage you can take.
Armor: (not in yet)
Accuracy: The percentage something will hit.
'''
class Player(object):
def __init__(self, name):
self.name = name
self.attack = 3
self.health = 10
#not used
self.magic_attack = 2
self.armor = 1
def description(self):
print "%s the mighty hero!" % (self.name)
#A class full of the character's fighting abilities
class Ability(object):
def __init__(self, damage, accuracy):
self.damage = damage
self.accuracy = accuracy
def description(self, name):
#None of this is used yet, either
if name == "slash":
print "A very accurate attack with low damage."
elif name == "stab":
print "A high damaging attack with low accuracy."
elif name == "normal":
print "A normal attack."
########
#def defend
########
@staticmethod
def attack(attack_type):
while True:
damage = 0
is_hit = False
if attack_type == "normal":
if accuracy_calc (normal_attack.accuracy) == True:
damage = player.attack
is_hit = True
else:
print "You missed!"
break
elif attack_type == "blahblah":
damage = player.attack
is_hit = True
elif attack_type == "blahblah":
if accuracy_calc(stab_attack.accuracy) == True:
damage = player.attack * 2
is_hit = True
else:
print "You missed!"
break
else:
error("typo")
continue
if is_hit == True:
break
return damage
def accuracy_calc (accuracy):
rand = random.randint(0,100)
if rand <= accuracy:
#It DID hit
is_hit = True
else:
#It didn't hit
is_hit = False
return is_hit
class Enemy(object):
def __init__(self, name, attack, health, armor):
self.name = name
self. attack = attack
self.health = health
self.armor = armor
def description(self):
print "A %s." % (self.name)
def help():
print "Combat commands:"
print "Attack: Attacks the enemy."
print "Defend: Defends an attack."
print "Magic: Some advanced stuff you don't know yet"
#Defining a goblin. (att, def, ar)
goblin = Enemy("small goblin", 2, 8, 0)
#Prints a blank line, making things more readable.
print "\n"
#Defining the player. (name)
player = Player("Adam")
#Maybe make multiple characters?
#Not actually used, switch all this shiznit to magic.
normal_attack = Ability(1, 90)
#Defining abilities. It takes the percentage of an objects stat
#(dmg, acc)
slash_attack = Ability(1, 100)
#(dmg, acc)
stab_attack = Ability(2, 50)
#A list of abilities the player has. Abilities can't be used if they're set
#above in the class, but they can be if they're set up there AND put here.
#REDO THIS FOR MAGICAL ABILITIES, not added yet
ability_list = ["normal"]
#A random sentence every time someone makes a mistake. Type = "typo", "path", something else
def error (type):
#The list of errors that come up when someone typoes
typo_error = ["That's not a command!", "Could you say that again?",\
"Stop speaking gibberish!"]
if type == "typo":
error = random.choice(typo_error)
print error
def fight (player, enemy):
while player.health >= 0 and enemy.health >= 0:
#Splits combat into player and enemy turn
turn = "player"
#Repeats the last loop if a mistake is made
went_back = False
#Tracks if the players defends on his turn.
player_defend = False
while turn == "player":
input = raw_input("What would you like to do? (Use 'help' for commands.)")
if input == "help":
help()
continue
elif input == "attack":
player_damage = Ability.attack("normal")
enemy.health -= player_damage
#Add in accuracy
elif input == "defend":
player_defend = True
#CHANGE THIS TO A MAGIC SYSTEM. RENAME EVERYTHING AND REDO.
#remember to put in resistances, mayve a flat subtration or
#a percentage
#Add a running mechanic
elif input == "magic":
while True:
input = raw_input("Which ability would you like to use? (type 'back' there's nothing here yet")
if input == "help":
help()
continue
elif input in ability_list:
break
elif input == "back":
went_back = True
break
# player_damage = Ability.attack(input)
# enemy.health -= player_damage
# print enemy.health, "enemy hp"
# break
else:
error("typo")
continue
else:
error("typo")
continue
if went_back == True:
went_back = False
continue
print "You strike the enemy for ", (Fore.GREEN + str(player_damage) + Style.RESET_ALL), " damage!"
turn = "enemy"
#Prints if the player dies
if player.health <= 0:
print "You have been slain!"
break
#Checks if the enemy is dead, if it is then ends the fight
enemy_is_dead = False
if enemy.health <= 0:
print "You've defeated %s!" % (goblin.name)
enemy_is_dead = True
#Enemy's turn to attack.
if enemy_is_dead == False:
while turn == "enemy":
#If the play defends, do half damage
if player_defend == False:
enemy_damage = enemy.attack
else:
enemy_damage = enemy.attack / 2
player.health -= enemy_damage
text1 = str(enemy_damage)
text2 = str(player.health)
print "The monster did ", (Fore.RED + text1 + Style.RESET_ALL), "damage!"\
, "You have ", (Fore.RED + text2 + Style.RESET_ALL), "health left."
turn = "player"
# print(Style.RESET_ALL)
# print(Fore.RED + 'some red text')
fight (player, goblin)
Answer: There are multiple things you can improve/simplify.
Code Style
remove the extra newlines between the parts of the code, keeping 2 blank lines between the top-level class and function definitions, 1 blank line between the class methods (PEP8 reference)
the docstrings should be put into triple double-quotes. The module level docstring should be on top, before the import statements.
put the main execution code block into the if __name__ == '__main__':
use print() as a function instead of a statement for Python-3.x compatibility
when you put an inline comment, start with a space (PEP8 reference)
Code Simplifications
You can replace these multiple if/elif/elses:
def description(self, name):
#None of this is used yet, either
if name == "slash":
print "A very accurate attack with low damage."
elif name == "stab":
print "A high damaging attack with low accuracy."
elif name == "normal":
print "A normal attack."
with a dictionary lookup:
ABILITY_DESCRIPTIONS = {
"slash": "A very accurate attack with low damage.",
"stab": "A high damaging attack with low accuracy.",
"normal": "A normal attack."
}
def description(self, name):
print(ABILITY_DESCRIPTIONS.get(name, "Ability description not found"))
You can replace expressions like if is_hit == True: with if is_hit: - there is no need to explicitly check for equality with True. Same applies for other places when you compare with True or False.
accuracy_calc() function can be rewritten as:
def accuracy_calc(accuracy):
return random.randint(0, 100) <= accuracy | {
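To address the repeated help() calls from the question: one option is a single dispatch function that every menu shares, so "help" and typo handling live in exactly one place. A sketch (the handler names are illustrative, not taken from your code):

```python
def make_command_handler(handlers, on_help, on_unknown):
    """One place that knows about 'help' and typos; each menu (combat,
    magic, ...) just supplies its own dict of command -> action."""
    def handle(command):
        if command == "help":
            on_help()
            return False          # not a turn-consuming command: re-prompt
        action = handlers.get(command)
        if action is None:
            on_unknown()
            return False
        action()
        return True               # command handled: the loop can end
    return handle
```

The combat loop then shrinks to something like `while not handle(raw_input(prompt)): pass`, and adding a new command means adding one dict entry rather than another elif branch.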
"domain": "codereview.stackexchange",
"id": 25054,
"tags": "python, python-2.x, adventure-game, battle-simulation"
} |
Concatenating 20 pointclouds in one large cloud - 3d scanner | Question:
Hi,
I have about 20 pictures of an object, and there is a laser line on each one. It starts at the beginning of the object and moves about 0.5 cm further with every picture. I want to create a 3D point cloud of the object using these pictures. I can generate a single point cloud from one picture using sensor_msgs::PointCloud2 cloudOfPoints;, and I can do that with every picture, but how can I concatenate them to get a 3D object? I looked for similar problems, but they didn't help me with that. I tried solutions like pcl::concatenatePointCloud, but it combines the clouds in the same place, which has no effect. I must take their distance in the 3D world into account.
Can someone help me with that?
Originally posted by Wrt on ROS Answers with karma: 3 on 2019-01-12
Post score: 0
Answer:
You're correct when you found pcl::concatenatePointCloud, but that is only half the answer. You also need to use the pcl::transformPointCloud function, there is a good tutorial for this here.
The important question you'll need to answer is how you find the exact transform from one slice of points to the next. For example if you're rotating the object being scanned on a platform you could use an encoder to measure the angular offset and use that to generate the correct transformation matrix. If you could describe your system in a bit more detail then we could give you some specific advice about how to do this.
Hope this helps.
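To illustrate what transform-then-concatenate does, here is a plain numpy sketch (arrays stand in for PCL clouds; a purely translational 0.5 cm step per slice is assumed, as in a linear-rail setup — this is not the PCL API itself):

```python
import numpy as np

def transform_cloud(points, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]

def translation(dx, dy, dz):
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

# 20 slices, sensor advanced 0.005 m along x between shots
# (placeholder data standing in for the per-picture laser-line points).
slices = [np.zeros((100, 3)) for _ in range(20)]
merged = np.vstack([transform_cloud(s, translation(0.005 * i, 0.0, 0.0))
                    for i, s in enumerate(slices)])
assert merged.shape == (2000, 3)
```

In PCL terms, `transform_cloud` plays the role of pcl::transformPointCloud with the per-slice pose, and `np.vstack` plays the role of the concatenation step.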
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2019-01-13
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Wrt on 2019-01-13:
Aff, sorry I forgot to say something about my system, I have camera and laser mounted on one strip. This system is mounted on robotic arm and I only move it in one axis to scan object, so rotation is the same on all pictures, only translation is changed.
Comment by Wrt on 2019-01-13:
I was thinking about matrix transformation, but was not sure if it's a necessary to use it. I thought that different position of laser on pictures will be enough, so now I know it doesn't. I will check the tutorial. Thanks for your answer :)
Comment by PeteBlackerThe3rd on 2019-01-13:
Okay. You should be able to get the position of the sensor on the end of the robot arm (probably from the TF system). This will give you the transformation you need.
Comment by Wrt on 2019-01-14:
It works, thanks! | {
"domain": "robotics.stackexchange",
"id": 32265,
"tags": "pcl, ros-kinetic"
} |
Kuiper belt planet formation | Question: There are a lot of objects of different sizes in the Kuiper belt. Are objects in the Kuiper belt stable? Is there any chance in the future that one of the larger objects would accumulate enough critical mass to start the process of becoming a proper planet (i.e., clearing its neighbourhood)?
Answer: The Kuiper Belt is fairly large (with inner and outer radii $\sim30\text{ AU}$ and $\sim50\text{ AU}$), but it does not contain much mass. The total mass is likely somewhere between $0.01$ and $0.1$ Earth masses[1], [2], which is essentially a mass range from five times the mass of Pluto to the mass of Mars. Now, it was likely much more massive in the past - its origin and mass loss are tied to the evolution of the orbits of ice giants in the early Solar System (see in particular the Nice model) - but we're talking about the formation of planets in the future. At present, there simply isn't enough mass to go around.
Let's talk about "clearing the neighborhood". The IAU's definition of a planet merely states, as one of its three criteria, that a planet must have
cleared the neighbourhood around its orbit.
Wikipedia gives several quantitative measures of whether or not an object has cleared its neighborhood. The quantity $\Pi$ (Margot (2015)) is given by
$$\Pi(m,a)=\frac{m}{M^{5/2}a^{9/8}}k$$
where $m$ is the mass of the body (Earth masses), $M$ is the mass of the star (solar masses), $a$ is the radius of its semimajor axis (AU), and $k\simeq807$. A body is a planet if $\Pi>1$. If the mass of this hypothetical planet, $m_H$, is $0.01$ Earth masses, then no orbit in the range $30\text{ AU}<a<40\text{ AU}$ gives me $\Pi(.01M_{\oplus},a)>1$. If $m_H$ is $0.1$ Earth masses, then almost all $a$ in the same range give me $\Pi(0.1M_{\oplus},a)>1$. That said, it is extremely unlikely that a body could amass even a small fraction of the total mass in the Kuiper Belt - remember the inner and outer boundaries!
The quantity $\Lambda$ (Stern & Levison (2002)) is given by
$$\Lambda(m,a)=\frac{\mu^2}{a^{3/2}}\times\left[\text{Terms accounting for the properties of the body's orbit}\right]$$
where $\mu$ is the ratio of the mass of the body to the mass of the star. The authors list a value on the order of $\Lambda=3\times10^{-3}$ for Pluto; if we assume that the orbital terms are about the same for this body, we find $\Lambda(0.01M_{\oplus},a)\sim0.075$ and $\Lambda(0.1M_{\oplus},a)\sim7.5$, with the cutoff being $\Lambda=1$. Again, it is not possible for the body to accumulate the total mass of the Kuiper Belt, so we can safely assume that $\Lambda\ll1$.
"domain": "astronomy.stackexchange",
"id": 2402,
"tags": "planet, planetary-formation, kuiper-belt"
} |
Creating neural net for xor function | Question: It is a well known fact that a 1-layer network cannot predict the xor function, since it is not linearly separable. I attempted to create a 2-layer network, using the logistic sigmoid function and backprop, to predict xor. My network has 2 neurons (and one bias) on the input layer, 2 neurons and 1 bias in the hidden layer, and 1 output neuron.
To my surprise, this will not converge. if I add a new layer, so I have a 3-layer network with input (2+1), hidden1 (2+1), hidden2 (2+1), and output, it works. Also, if I keep a 2-layer network, but I increase the hidden layer size to 4 neurons + 1 bias, it also converges. Is there a reason why a 2-layer network with 3 or less hidden neurons won't be able to model the xor function?
Answer: Yes, there is a reason. It has to do with how you initialize your weights.
There are 16 local minima that have the highest probability of convergence with weights initialized between 0.5 and 1.
Here is a paper that analyses the XOR problem: Learning XOR: exploring the space of a classic problem (Bland, 1998).
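Note that the 2-2-1 architecture can represent XOR, so the failure is one of optimization (bad initialization landing in a local minimum), not of capacity. A sketch with hand-picked, not learned, weights:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def xor_net(x1, x2):
    """2-2-1 logistic network with hand-picked weights (one of many solutions):
    h1 ~ OR(x1, x2), h2 ~ NAND(x1, x2), output ~ AND(h1, h2) = XOR."""
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)
    h2 = sigmoid(-20 * x1 - 20 * x2 + 30)
    return sigmoid(20 * h1 + 20 * h2 - 30)

outputs = [round(xor_net(a, b)) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
assert outputs == [0, 1, 1, 0]
```

Gradient descent from an unlucky random start simply fails to find weight sets like this one.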
"domain": "datascience.stackexchange",
"id": 1898,
"tags": "neural-network, backpropagation"
} |
Why can't stars be multicolored like gas giants? | Question: Gas giants like Jupiter and Saturn have bands of different colors in their atmosphere. These are due to the rotation of the planets. Stars rotate too, so why do most stars have patches/blotches of color rather than having latitudinal stripes?
Answer: Just rotation is the wrong tree to bark up on.
You see color variations on gas giants due to differences in composition, i.e. ammonia vs. sulfuric acid clouds on Jupiter, which are transported differently on the rotating planet in the up/downwelling bands.
Spots on stars originate due to very different physics. At the temperatures which are prevalent on stellar surfaces, molecules are mostly dissociated and ionized, we call this state a plasma, so no more colour effects due to ammonia or others. Star spots are concentrations of the local magnetic field, which is coupled to the plasma dynamics. The magnetic field pushes the gas aside and lets the surface cool locally. This makes dark spots that you see on stars like our sun.
There is a transition region in terms of surface temperature of 2000-3000K, at the masses of brown dwarves. Those failed stars seem to exhibit dark bands on their surfaces, which are thought to be due to absorption from exotic, high-temperature molecules like TiO and VO (Titanium and Vanadium oxide). | {
"domain": "astronomy.stackexchange",
"id": 5205,
"tags": "the-sun, gas-giants, sunspots"
} |
Transforming to a rotating frame in the $x$-basis | Question: I was reading this paper on analytically solvable driven time-dependent two-level quantum systems. The Hamiltonian considered in the paper is the following: $$H=\sigma_z\cdot J(t)/2+\sigma_x\cdot h/2$$
$U$ is the unitary evolution operator with elements $u_{ij}$ (Equation (2) in the paper). It is given that after transforming to a rotating frame in the x-basis the following equation is obtained (equation (3) in the paper):
$$D_+=e^{+iht/2}(u_{11}+u_{21})\cdot \tfrac{1}{\sqrt{2}}$$ and $$D_-=e^{-iht/2}(u_{11}-u_{21})\cdot \tfrac{1}{\sqrt{2}}$$
What are these $D_+$ and $D_-$? I know that the general form of the rotation operator(which is also often denoted by $D$) for the spin system is given by:
$$
D(\hat n,\theta)=\mathrm e^{-i\theta\ \hat{n}\cdot\vec S}
$$
and when I computed it for rotation about the x-axis I got the rotation operator to be:$$D(\hat n,\theta)=\left[ {\begin{array}{cc}
\cos(\theta/2) & -i\sin(\theta/2) \\ -i\sin(\theta/2) & \cos(\theta/2) \ \end{array} } \right]$$
What are the $D$s obtained while transforming to a rotating frame in the attached picture? Am I doing anything wrong here?
The terminology is also quite confusing for me. Why is it called a "rotating" frame rather than a frame rotated by $\theta$? But then that also doesn't make sense, because in the original paper there is no $\theta$. So, then, what do they mean by "rotating"?
Answer: If $|\pm\rangle$ are the eigenvectors of ${\hat \sigma}_x$, ${\hat \sigma}_x |\pm\rangle = \pm |\pm\rangle$, then a rotating $x$-basis is defined as
$$
|+\rangle(t) = \exp\left(-i\frac{\omega t}{2} {\hat \sigma}_x\right)|+\rangle = e^{-i\omega t/2 } |+\rangle
$$
$$
|-\rangle(t) = \exp\left(-i\frac{\omega t}{2} {\hat \sigma}_x\right)|-\rangle = e^{i\omega t/2 }|-\rangle
$$
Notice that it is still an eigenbasis of ${\hat \sigma}_x$, ${\hat \sigma}_x |\pm\rangle(t) = \pm |\pm\rangle(t)$. Check why it is referred to as a rotating basis by looking at the transformation between $|\pm\rangle(t)$ and $|\uparrow\rangle$, $|\downarrow\rangle$.
In the present case take $\omega = h$ and let $d_\pm(t)$ be the expansion coefficients of the state vector in the rotating basis, that is,
$$
|\psi(t)\rangle = d_+(t) |+\rangle(t) + d_-(t)|-\rangle(t)
$$
Since ${\hat \sigma}_z |\pm\rangle(t) = e^{\mp i ht/2} |\mp \rangle$, the Schroedinger equation yields
$$
i \frac{d|\psi\rangle}{dt} = i {\dot d}_+ |+\rangle(t) + i d_+(t) \frac{d}{dt} |+\rangle(t) + i {\dot d}_-(t) |-\rangle(t) + i d_-(t) \frac{d}{dt} |-\rangle(t) =
$$
$$
= i {\dot d}_+ |+\rangle(t) + \frac{h}{2} d_+(t) |+\rangle(t) + i {\dot d}_-(t) |-\rangle(t) - \frac{h}{2} d_-(t) |-\rangle(t) =
$$
$$
\left[ \frac{J(t)}{2} d_-(t)e^{iht} + \frac{h}{2}d_+(t)\right] |+\rangle(t) + \left[ \frac{J(t)}{2} d_+(t)e^{-iht} - \frac{h}{2}d_-(t)\right] |-\rangle(t)
$$
wherefrom after simplification and identification follows
$$
{\dot d}_\pm(t) = - i \frac{J(t)}{2}e^{\pm iht} d_\mp(t)
$$
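This can be spot-checked numerically. The sketch below is my own: it assumes a constant $J$, sets $h=1$, and takes the Hamiltonian ${\hat H}=\frac{h}{2}{\hat\sigma}_x+\frac{J}{2}{\hat\sigma}_z$ implied by the derivation above, then compares a finite-difference derivative of $d_+$ with the right-hand side:

```python
import numpy as np

h, J = 1.0, 0.6  # assumed constants for the check
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * h * sx + 0.5 * J * sz  # assumed Hamiltonian, with J constant

def psi(t, psi0):
    """Exact evolution exp(-iHt) psi0 via eigendecomposition (H is Hermitian)."""
    vals, vecs = np.linalg.eigh(H)
    return vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ psi0))

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

def d_pm(t, psi0):
    """Rotating x-basis coefficients: d_pm = e^{+-iht/2} <pm|psi(t)>."""
    p = psi(t, psi0)
    return (np.exp(1j * h * t / 2) * (plus.conj() @ p),
            np.exp(-1j * h * t / 2) * (minus.conj() @ p))

t, dt, psi0 = 0.8, 1e-5, plus
dp_plus = (d_pm(t + dt, psi0)[0] - d_pm(t - dt, psi0)[0]) / (2 * dt)
rhs = -1j * (J / 2) * np.exp(1j * h * t) * d_pm(t, psi0)[1]
assert abs(dp_plus - rhs) < 1e-6
```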
This looks very much like eq.(4) in the paper, but coefficients $d_\pm(t)$ are not yet related to the matrix elements $u_{11}$ and $u_{21}$ of the evolution operator ${\hat U}$, as required by eq.(3). To obtain the latter, and with it the meaning of the functions $D_\pm(t)$, let us look at the alternative expression for $|\psi(t)\rangle$ obtained by applying ${\hat U}(t)$ to the initial state vector in the z-basis, $|\psi(0) \rangle = c_1(0)|\uparrow\rangle + c_2(0)|\downarrow\rangle$:
$$
|\psi(t)\rangle = {\hat U}(t)|\psi(0) \rangle = c_1(0) {\hat U}(t)|\uparrow\rangle + c_2(0) {\hat U}(t)|\downarrow\rangle =
$$
$$
= \left[ u_{11}(t)c_1(0) - u^*_{21}(t) c_2(0)\right] |\uparrow\rangle + \left[ u_{21}(t) c_1(0) + u^*_{11}(t)c_2(0)\right]|\downarrow\rangle
$$
If we switch now to the x-basis and then to the rotating x-basis this reads
$$
|\psi(t)\rangle = \left[ u_{11}(t)c_1(0) - u^*_{21}(t) c_2(0)\right] \frac{1}{\sqrt{2}}\left(|+\rangle + |-\rangle\right) + \left[ u_{21}(t) c_1(0) + u^*_{11}(t)c_2(0)\right]\frac{1}{\sqrt{2}}\left(|+\rangle - |-\rangle\right) =
$$
$$
= \left[ \left[\frac{1}{\sqrt{2}} \left(u_{11} + u_{21} \right) e^{iht/2}\right] c_1(0) + \left[ \frac{1}{\sqrt{2}} \left(u_{11} - u_{21} \right) e^{-iht/2}\right]^* c_2(0)\right] |+\rangle(t) +
$$
$$
+ \left[ \left[\frac{1}{\sqrt{2}} \left(u_{11} - u_{21} \right) e^{-iht/2}\right] c_1(0) - \left[ \frac{1}{\sqrt{2}} \left(u_{11} + u_{21} \right) e^{iht/2}\right]^* c_2(0)\right] |-\rangle(t) =
$$
$$
= \left[ D_+(t) c_1(0) + D^*_-(t) c_2(0)\right] |+\rangle(t) + \left[ D_-(t) c_1(0) - D^*_+(t) c_2(0)\right] |-\rangle(t)
$$
In other words, the functions $D_\pm(t)$ simply provide a convenient reparametrization of the evolution in the rotating x-basis.
I leave it as an exercise to derive eq.(4) from the identification
$$
d_\pm(t) = D_\pm(t) c_1(0) \pm D^*_\mp(t) c_2(0)
$$
(Hint: coefficients $c_1(0)$, $c_2(0)$ are arbitrary). | {
"domain": "physics.stackexchange",
"id": 30634,
"tags": "quantum-mechanics, homework-and-exercises, angular-momentum, operators, hilbert-space"
} |
Is there any problem with using a single stop codon with a single CDS in prokaryotes? | Question: All protein coding sequences in the iGEM Registry are supposed to end with a double stop codon. Presumably, this is to decrease the potential for read-through, which could be problematic if one is putting together a polycistronic design.
If the design is intended to have only one CDS, however, and is targeted at a prokaryote (which have "backup" ribosome release mechanisms), then is there any problem with using only a single stop codon?
Answer: The impact of any read-through from a leaky stop codon in an expression unit with only one CDS would probably depend on a few things, mainly (i) where is the next in-frame stop codon, (ii) what are you trying to express, and (iii) how leaky is the stop codon?
In cases where the next in-frame stop codon is only a few base pairs away, there would probably be little impact; however, in other cases the next stop codon could be far away. In these cases, there are two things which may cause an impact.
The first is that a long peptide sequence could be added to your protein, which depending on what you are expressing, may cause your protein to misfold or lose functionality.
The second is that you could get ribosome stalling, especially if any of the codons between your stop codon and a second stop codon require rare tRNAs. As you mentioned in your question, there are mechanisms for rescue in these scenarios; however, if your CDS is expressed under a strong promoter on a high copy number plasmid, this mechanism may need to be mounted much more often than usual and cause burden on the cell. I should note that this is speculation on my part as I can’t find any studies which have shown this.
The actual impact any of these scenarios may have on your system would likely be strongly dependent on how leaky the stop codon actually is. As there are many examples of constructs which use only a single TAA in their design with no apparent negative effects, presumably under ‘normal’ circumstances there is little risk to using a single stop codon. | {
"domain": "biology.stackexchange",
"id": 11269,
"tags": "synthetic-biology, codon"
} |
Is there any telescope on Earth that can see the lunar rovers on the Moon? | Question: If I have the right numbers, it seems to me that even the Hubble telescope might barely be able to make out a carcass of a blue whale on the surface of the Moon, which puts objects as small as the lunar rovers or the American flag left there during the Apollo lunar landings out of range.
The Hubble is often lauded as the best telescope we have built, partly because it is in space, where it's free from all of the interference of the atmosphere, but is there any ground-based observatory with better resolution? Is there any place on Earth where I could point a scope at the Sea of Tranquility and see what we left there over 40 years ago?
Answer:
The largest optical wavelength telescope that we have now is the Keck
Telescope in Hawaii which is 10 meters in diameter. The Hubble Space
Telescope is only 2.4 meters in diameter.
Resolving the larger lunar rover (which has a length of 3.1 meters)
would require a telescope 75 meters in diameter.
Information extracted from The Curious Team answers: Are there telescopes that can see the flag and lunar rover on the Moon?
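For reference, the quoted 75-meter figure is consistent with the Rayleigh criterion (a rough sketch; the 500 nm wavelength and the mean Earth-Moon distance are my assumed values):

```python
# Rayleigh criterion: theta ~ 1.22 * wavelength / D, and theta ~ x / L,
# so the required aperture is D ~ 1.22 * wavelength * L / x.
wavelength = 500e-9      # m, visible light (assumed)
moon_distance = 3.844e8  # m, mean Earth-Moon distance (assumed)
rover_length = 3.1       # m, length of the larger lunar rover

aperture = 1.22 * wavelength * moon_distance / rover_length
print(round(aperture))  # roughly 76 m, consistent with the quoted ~75 m
```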
Update: In a comment to my answer, @Envite mentions Astronomical Interferometers which are:
...an array of telescopes or mirror segments acting together to probe
structures with higher resolution by means of interferometry. The
benefit of the interferometer is that the angular resolution of the
instrument is nearly that of a telescope with the same aperture as a
single large instrument encompassing all of the individual
photon-collecting sub-components.
It must also be noted that this probably wouldn't be the best approach for viewing the moon rovers, since:
The drawback is that it does not
collect as many photons as a large instrument of that size. Thus it is
mainly useful for fine resolution of the more luminous astronomical
objects, such as close binary stars. | {
"domain": "astronomy.stackexchange",
"id": 153,
"tags": "the-moon, telescope, optics"
} |
Heap comparison based sorting question | Question: I need some help with this question from my problem set. I do not want a solution but some $\textbf{hints}$ on how to proceed.
Given $n$ records in a file that are so numerous, that just storing
the $n$ keys alone would require more RAM than is available on your
computer. Let $m$ be the maximum number of
records that you can store in RAM at once. Assuming that $m >$
$\sqrt{n}$, produce a $\mathcal{O}(n \log n)$ time algorithm for sorting
the $n$ records. The sorted records would then be written to a file, since
it will not be possible for them to all fit in memory at once.
$\textbf{My approach:}$ I figure I should use a heap based approach. Try to create a type of online algorithm to read $m$ records at a time, and then use heapsort or a type of comparison based algorithm to solve it?
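A minimal Python illustration of that chunk-then-heap-merge idea (a sketch only; the helper names and the line-per-record file format are my own, and it is not meant as the graded solution):

```python
import heapq
import tempfile

def external_sort(records, m):
    """Sort records while holding at most ~m of them in memory:
    sort m-sized chunks, spill each to disk, then k-way merge with a heap."""
    chunk_files, chunk = [], []
    for rec in records:
        chunk.append(rec)
        if len(chunk) == m:
            chunk_files.append(spill(sorted(chunk)))
            chunk = []
    if chunk:
        chunk_files.append(spill(sorted(chunk)))
    # heapq.merge keeps one record per chunk in memory: a heap of
    # n/m < m entries, which is where the m > sqrt(n) assumption enters.
    return heapq.merge(*(read_back(path) for path in chunk_files))

def spill(sorted_chunk):
    f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt")
    f.write("\n".join(str(r) for r in sorted_chunk))
    f.close()
    return f.name

def read_back(path):
    with open(path) as f:
        for line in f:
            yield int(line)

merged = list(external_sort(iter([5, 3, 9, 1, 7, 2, 8]), m=3))
assert merged == [1, 2, 3, 5, 7, 8, 9]
```

Each chunk sort is $\mathcal{O}(m \log m)$ over $n/m$ chunks, and the merge is $\mathcal{O}(n \log (n/m))$, for $\mathcal{O}(n \log n)$ overall.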
$\textbf{Related Questions:}$
1) I don't understand the underlying assumption of letting $m>\sqrt{n}$?
Answer: Hint: Convert the input into $n/m < m$ chunks of length $m$ and sort the chunks. Then use your heap idea. Note that you're using the assumption $m > \sqrt{n}$ so that the heap fits in memory. | {
"domain": "cs.stackexchange",
"id": 6402,
"tags": "data-structures"
} |
why willow can not login psql | Question:
Dear All.
when I use ubuntu 11.04 or 10.04 to install ros pr2 electric package, I try to use the psql database with the following link:
http://www.ros.org/wiki/sql_database/Tutorials/Installing%20a%20PostgreSQL%20Server
But unfortunately, the willow user cannot log in to psql.
1. sudo apt-get install postgresql
2. psql
3. CREATE ROLE willow LOGIN CREATEDB CREATEROLE PASSWORD 'willow';
4. Add the following to pg_hba.conf so the willow user can connect locally with a password:
local all willow md5
5. restart computer
6. psql --username willow --password --dbname postgres
The result is that:
root@ubuntu:~# psql --username willow --password --dbname postgres
Password for user willow:
psql: FATAL: Ident authentication failed for user "willow"
Whether I use postgres or another OS user to log in to Ubuntu, the result is the same.
It seems that I did not create the willow user.
What I did was just start the server and use psql or psql --username postgres.
Then I created the user and ran \q.
So I do not know how to solve this problem.
Or, where can I get good instructions for running the grasp-and-place demo on Ubuntu 11.04 or 10.04?
The problem is how to set up the database.
Best regards.
Zhenli
Originally posted by zhenli on ROS Answers with karma: 287 on 2012-02-15
Post score: 0
Answer:
Answer by myself:
Though the tutorial above does not work, if you carry on and follow the instructions below instead, you can still get the database you want to use:
http://www.ros.org/wiki/household_objects_database/Tutorials/Install%20the%20household_objects%20database%20on%20your%20local%20database%20server
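For readers hitting the same "Ident authentication failed" error: pg_hba.conf is matched top to bottom and the first matching line wins, so if the stock `ident` rule sits above the newly added rule, the `md5` rule is never reached. A sketch of the intended ordering (the exact path and stock lines vary by Ubuntu/PostgreSQL version):

```
# /etc/postgresql/<version>/main/pg_hba.conf  -- first matching line wins
# put the specific rule first, otherwise the stock "ident" line shadows it:
local   all   willow   md5
local   all   all      ident
```

After editing, reloading the server (e.g. `sudo /etc/init.d/postgresql reload`) is enough; a full reboot is not required.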
Originally posted by zhenli with karma: 287 on 2012-02-16
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 8254,
"tags": "ros"
} |
Milliliters of NaBr when changing molarity | Question: I was very confused by this problem. I was able to get the molarity, but I wasn’t certain how to go from there:
If $\pu{2.60 g}$ of $\ce{NaBr}$ is dissolved in enough water to make $\pu{160.0 mL}$ of solution, what is the molar concentration of $\ce{NaBr}$? How many milliliters of $\pu{0.10 M}$ $\ce{NaBr}$ would you need to supply $\pu{2.60 g}$ of $\ce{NaBr}$?
$\text{Molarity} = \pu{0.158 moles/L}$
I wasn’t certain how to answer the second part of the problem. I kept getting $\pu{159 mL}$, but the answer is $\pu{250 mL}$. What am I doing wrong?
Edit:
I am not entirely used to the M symbol for molarity, so I was mixing myself up. Instead, I equated $\pu{0.10 mol/L} = \frac{(\pu{2.60 g}\ /\ \text{molar mass NaBr})}{x \ \pu{L}}$ — I solved from there for $x$ and got $\pu{250 mL}$.
Answer: I decided to help here since the OP has put enough effort into solving this problem. First, the OP needs to know that the molarity of a solution is defined as the amount of solute (in this case $\ce{NaBr}$) in $\pu{mol}$ per $\pu{1.0 L}$ of solution. The first part of the problem states:
If $\pu{2.60 g}$ of $\ce{NaBr}$ is dissolved in enough water to make $\pu{160.0 mL}$ of solution, what is the molar concentration of $\ce{NaBr}$?
To solve this problem you may need to know the molar mass of $\ce{NaBr}$, which is not given. But you can find it by googling it. The molar mass of $\ce{NaBr}$ is found as $\pu{102.8 g mol-1}$. Now you can find the amount of $\ce{NaBr}$ in $\pu{160.0 mL}$ or $\pu{0.160 L}$ of solution (volume has to be in $\pu{L}$), which is:
$$\frac{\pu{2.60 g}}{\pu{102.8 g mol-1}} = \pu{0.0253 mol}$$
Thus,
$$\text{Molarity of the solution} = \frac{\text{amount of NaBr}}{\text{volume of solution}} = \frac{\pu{0.0253 mol}}{\pu{0.160 L}} = \pu{0.158 mol L-1}\\ = \pu{0.158 M}$$
The second part of the question is:
How many milliliters of $\pu{0.10 M}$ $\ce{NaBr}$ would you need to supply $\pu{2.60 g}$ of $\ce{NaBr}$?
This is supposed to be solved using the first part of the question. From that part, you now know that $\pu{0.160 L}$ of $\pu{0.158 M}$ $\ce{NaBr}$ contains exactly $\pu{2.60 g}$ of $\ce{NaBr}$. So, using the dilution principle, $M_1V_1 = M_2V_2$, you can find your unknown:
Here, $M_1 = \pu{0.158 M}$, $V_1 = \pu{0.160 L}$, and $M_2 = \pu{0.10 M}$. Your unknown is $V_2$, you can find using $M_1V_1 = M_2V_2$:
$$V_2 = \frac{M_1V_1}{M_2} = \frac{\pu{0.158 M} \times \pu{0.160 L}}{\pu{0.10 M}} = \pu{0.253 L} = \pu{253 mL}$$
Since the molarity of the solution is given as $\pu{0.10 M}$, the answer should be $\pu{250 mL}$ with correct significant figures.
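The arithmetic, as a quick check (the molar mass of $\ce{NaBr}$ is taken here as $\pu{102.89 g mol-1}$):

```python
mass = 2.60            # g of NaBr
molar_mass = 102.89    # g/mol for NaBr
volume = 0.160         # L of solution

moles = mass / molar_mass      # ~ 0.0253 mol
molarity = moles / volume      # part 1: ~ 0.158 M
volume_needed = moles / 0.10   # part 2: V = n / M ~ 0.253 L ~ 250 mL
```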
Note: @Entropy has shown the correct calculation, as you have done. However, it is a different method of finding the volume of solution. I'm sure the one who wrote the question intended the first part of the question to be used to solve the second part. | {
"domain": "chemistry.stackexchange",
"id": 15304,
"tags": "inorganic-chemistry, aqueous-solution, computational-chemistry, water, solubility"
} |
Does uniqueness of the triorthogonal decomposition make quantum measurement objective? | Question: Some books and articles on quantum measurement theory make use of a theorem (by Elby-Bub 1994) called the Triorthogonal Decomposition Theorem:
For three subsystems, a state vector $\lvert \Psi \rangle$ has a unique triorthogonal decomposition $$\lvert \Psi \rangle = \sum_j{c_j \lvert a_j \rangle \otimes \lvert b_j \rangle \otimes \lvert e_j \rangle}$$ even if some of the $\lvert c_j \rvert$ are equal.
Here is the basic idea as I understand it:
Suppose we model a quantum measurement as an interaction between a system (S) and measurement apparatus (A). Here because there are only two subsystems of the complete S+A system, it is possible to decompose into many different bases. We have basis degeneracy.
But now suppose we include the environment as a source of decoherence. Then apparently we now have three subsystems (S+A+E) so we can use this Triorthogonal Decomposition Theorem to argue that the measured system decomposes into a unique basis.
Auletta (Quantum Mechanics, 2009) describes this as follows:
The uniqueness of triorthogonal decomposition is a very important point. In fact while the tracing out [of the environment] is only relative to the system and the apparatus, the uniqueness of the triorthogonal decomposition introduces an objective character in the measurement theory that can account for irreversibility.
I find this intriguing yet a bit unbelievable. I appreciate that Auletta is giving a very idealised presentation - I don't object to that, but I wish to know if this idea really is such an important principle as claimed.
Answer: The triorthogonal monomial basis for a Hilbert space decomposed into a triple tensor product is unique, but the decomposition of the Hilbert space into such a triple product is not. So the "unique" basis into which a model decomposes is very much an artifact of the non-unique decomposition of the model into a system, an apparatus and an environment. A simple example of how non-unique that can be is the infamous "Heisenberg cut", moving which one can start the "observer" anywhere from the device's meter to the "consciousness" itself. Here is a passage from Donald's Instability, Isolation, and the Tridecompositional Uniqueness Theorem
"Theorem 3.4 is concerned with variations in the global wave function. It is also important, for the question of the physical relevance of the tridecompositional uniqueness theorem, to recognize that the assumption that a physical Hilbert space has a fundamental tensor product structure may well be incorrect. This recognition can be supported by consideration of the derived and phenomenological nature of localized particles according to relativistic quantum field theory, but, even at a less sophisticated level, it is hard to justify the idea that there are the sort of natural boundaries which would allow the universe to be divided without ambiguity into a system, a
measuring apparatus, and an environment. If the tensor product structure is not a fundamental aspect of reality, then the ultimate laws of nature cannot depend on it."
On the other hand, the usual hand-wringing over the non-objectivity and irreality of a preferred basis for decoherence in the bidecompositional case is itself a tempest in a teacup. Wallace in Everett and Structure criticizes Kent and Barret for what he calls the "fallacy of exactness". If the criterion for being real and objective is to be written into the axioms of a theory then, say, tigers would be neither real nor objective:
"In other words, a preferred basis must either be written into the quantum-
mechanical axioms, or no such basis can exist — the idea of some approximate,
emergent preferred basis is not acceptable... either there is some precise truth about transtemporal identity which must written into the basic formalism of quantum mechanics, or there are simply no facts at all about the
past of a given world, or a given observer. (This seems to be what motivates
Bell (1981) to say that in the Everett interpretation the past is an illusion.)..."
"To see why it is reasonable to reject the dichotomy of the previous section, consider that in science there are many examples of objects which are certainly real, but which are not directly represented in the axioms. A dramatic example of such an object is the tiger: tigers are unquestionably real in any reasonable sense of the word, but they are certainly not part of the basic ontology of any physical theory." | {
"domain": "physics.stackexchange",
"id": 34944,
"tags": "quantum-mechanics, decoherence"
} |
Acoustic Response of Tube with Liquid Around It | Question: Speaking generally, is it possible to measure the height of a fluid surrounding a tube by emitting a frequency from a speaker at the top of the tube, and reading the response using a microphone at the bottom?
Is there a more efficient way of measuring height acoustically?
I understand the generality of the question, just looking for ideas.
Answer: Yes it's possible. I'll draw the tube horizontally for convenience:
Suppose the velocity of the sound in air is $v_a$ and the velocity in the fluid is $v_f$, then the travel time, $t$, for the sound wave is:
$$ t = \frac{d-x}{v_a} + \frac{x}{v_f} $$
which rearranges to:
$$ x = \frac{v_fv_a}{v_f - v_a}\left(\frac{d}{v_a} - t\right) $$
So by measuring the travel time of the sound, $t$, you can calculate the height of the liquid, $x$.
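A small sketch of that inversion (the sound speeds and the tube length are my assumed values):

```python
v_air = 343.0     # m/s, sound in air at ~20 C (assumed)
v_fluid = 1480.0  # m/s, sound in water (assumed)
d = 2.0           # m, tube length (assumed)

def fluid_height(t):
    """Invert t = (d - x)/v_a + x/v_f for the height x of the fluid."""
    return (v_fluid * v_air / (v_fluid - v_air)) * (d / v_air - t)

# round trip: pick a height, compute the travel time, recover the height
x = 0.75
t = (d - x) / v_air + x / v_fluid
assert abs(fluid_height(t) - x) < 1e-9
```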
What problems you'd run into in practice I'm afraid I don't know. | {
"domain": "physics.stackexchange",
"id": 33449,
"tags": "acoustics, frequency"
} |
Catching the beat! | Question: I'm lately fascinated with the whole "beat" concept, and have been doing some experiments. I'm trying to capture the beat waveform on an oscilloscope in run mode, but am having no luck. I'm putting a 30MHz and 37MHz tone into a combiner, then into one channel on the scope. I can only capture the beat waveform when i do a single shot capture. It's exactly what i want to see, the nodes are 7MHz apart, but i want to see in in run mode so i can vary one of the frequencies and see the waveform react. None of the triggering methods i've tried will allow me to do this. Is it even possible? If not, why? At least that would give me a better fundamental understanding of the oscilloscope's capability.
Answer: The beat frequency is a half of the difference (see Wikipedia here), so in your case $3.5MHz$. This should be your triggering frequency to see the beat. Yet, in your case, neither $30$ nor $37$ is a multiple of $3.5$, so it is no surprise that you don't see a static picture. Try making one of the frequencies a multiple of half of the difference, feed this frequency to the second channel and trigger off it. For example, $30$ and $36MHz$ will beat at $3MHz$. Then $30MHz$ will be a multiple of $3MHz \,(3 \cdot 10 = 30)$ as well as $36MHz \,(3\cdot 12=36)$. Feed the beat to the first channel and also feed $30MHz$ (or $36MHz$) to the second channel and trigger off it. | {
"domain": "physics.stackexchange",
"id": 60709,
"tags": "frequency, signal-processing"
} |
Why are the axial and circumferential normal stresses in an axially loaded fluid cylinder equal? | Question: Some background: I posted this question on the shear rigidity of a fluid cylinder in a sloppier form earlier.
Thanks to some helpful feedback from Chet Miller I went back and reviewed tensors before reframing the question.
I can rephrase my question better now, but my original doubt remains unresolved.
Before I proceed, please note that I am not asking why fluids do not have shear rigidity. My last question was pooled into another question to that effect.
My questions are specific to a paper1 which addresses the stresses on a thick, hollow, liquid cylinder (page 392, equations 4,5,6) under an axial load (Figure 1).
I do not understand their argument for arriving at the conclusion that the circumferential and longitudinal/axial normal stresses in such a cylinder are equal. First of all they do not prove that
the normal stresses ($\sigma_{rr}$, $\sigma_{zz}$, $\sigma_{\phi\phi}$) are actually also the principal stresses.
They appear to be asserting that the normal stresses shown are the principal stresses and that therefore it follows that the shear stresses on the planes of the cylindrical coordinate system are zero. Which leads them to the conclusion that the circumferential and axial/longitudinal normal stresses are equal. A conclusion that would not hold for a solid cylindrical object with a finite shear rigidity then.
Maybe I am just misreading that paper segment. Do correct me if that is the case.
This is the page:
In summary then, why are the radial, circumferential and axial directions (i.e. the primary directions of the cylindrical coordinate system) the principal directions for a fluid cylinder under an axial load? Secondly, why are the circumferential and axial principal stresses equal? I have not come across a simple proof.
Edit: The earlier draft of this post was less concise. I found drafting this question a useful exercise and usually any feedback I get here is useful. I will try to work out the principal stresses for a generic cylinder under an axial load and then update this post.
References
1 Mechanical Equilibrium of Thick, Hollow, Liquid Membrane Cylinders. Waugh and Hochmuth. 1987.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1330003/pdf/biophysj00162-0035.pdf
Updated References with two new additions on Oct 6, 2023
2 The Mechanics of Axially Symmetric Liposomes. Pamplona and Calladine. 1993.
https://pubmed.ncbi.nlm.nih.gov/8326721/
3 Mechanics of tether formation in liposomes. Calladine and Greenwood. 2002.
https://pubmed.ncbi.nlm.nih.gov/12405601/
4 Mechanics and Thermodynamics of Biomembranes. Evans and Skalak. 1980.
Following on my dialogue with Chet below, I am updating my post with a schematic of the type of biophysical experiment I am modeling.
It is on this type of membrane:
https://en.m.wikipedia.org/wiki/Lipid_bilayer
I have come up with a data analysis model for an experiment like the one shown below and want to ensure that the basic physics behind my analysis is sound. The formation of membrane cylinders from bilayers subjected to pulling forces is a highly non trivial process. I am merely digging through some old papers in the area addressing already formed cylinders and reviewing the basic physics to be certain there are no glaring errors in my analytic model.
Of the cartoons below, the first one depicts the initiation of the formation of such a membrane cylinder and the second one a formed cylinder. Since it is a complicated problem I tried to trim it down to a simple question to keep it within the scope of the forum. I am not sure I succeeded.
Looking at your question guidelines, I suspect this question is indeed off-topic here. I will probably follow Chet's advice and try physicsforums instead.
Edited Update: I updated this post with my initial understanding of the model Chet very graciously provided below. However, it is still a work in progress.
Okay I have updated the post with how I figure Chet modeled it now. I had two of the stresses flipped ($\sigma_r$ and $\sigma_z$ and $\sigma_z$ is now $\sigma_n$).
I updated Chet's differential force equation with the third stress that is normal to membrane as shown. I am writing out the third equation (the first two being the resolution of the three stresses parallel to the meridian's tangent and normal to it) -that is the moment balance equation and brings in the membrane bending stiffness. Finally I have to set the boundary conditions and solve for the three stresses.
Not sure that diagram/the equation is correct. I always have a hard time visualizing these catenoids that have two different radii of curvature.
This is background physics work for an actual paper I am working on. My initial understanding of the physics behind this experiment was very rudimentary and I want to get the basics right this time.
Answer: As in your diagrams, let the membrane initially be horizontal at z = 0. Let $r_0$ be the radial location measured from the z axis in the initial flat membrane. Let the location of a material point in the deformed thin membrane be situated at $r=r(r_0,\theta)$ and $z=z(r_0,\theta)$.
Consider two closely neighboring material points in the initial undeformed membrane at ($r_0,\theta)$ and $(r_0+dr_0. \theta+d\theta)$. The length of the differential position vector joining these two material points initially is $\sqrt{(dr_0)^2+(r_0d\theta)^2}$. The differential position vector joining these same two material points in the deformed configuration is $$\hat{s}ds=\left(\hat{r}\frac{\partial r}{\partial r_0}+\hat{z}\frac{\partial z}{\partial r_0}\right)dr_0+\hat{\theta}\left(\frac{r}{r_0}\right)(r_0d\theta)$$where hatted quantities are unit vectors. The square of the length of this differential position vector is $$(ds)^2=\left[\left(\frac{\partial r}{\partial r_0}\right)^2+\left(\frac{\partial z}{\partial r_0}\right)^2\right](dr_0)^2+\left(\frac{r}{r_0}\right)^2(r_0d\theta)^2$$The ratio of the square of the deformed length to the square of the initial length is the square of the stretch ratio $\lambda$: $$\lambda^2=\lambda _r^2\cos^2{\alpha}+\lambda^2_{\theta}\sin^2{\alpha}$$with $$\lambda^2_r=\left[\left(\frac{\partial r}{\partial r_0}\right)^2+\left(\frac{\partial z}{\partial r_0}\right)^2\right]$$ $$\lambda^2_{\theta}=\left(\frac{r}{r_0}\right)^2$$ $$\cos{\alpha}=\frac{dr_0}{\sqrt{(dr_0)^2+(r_0d\theta)^2}}$$ and $$\sin{\alpha}=\frac{r_0d\theta}{\sqrt{(dr_0)^2+(r_0d\theta)^2}}$$
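A numerical spot check of the stretch-ratio decomposition above (my own sketch, using an assumed conical mapping $r = a r_0$, $z = b r_0$, which is axisymmetric so $\lambda_\theta = a$):

```python
import numpy as np

a, b = 1.3, 0.8                 # assumed mapping: r = a*r0, z = b*r0
r0, dr0, dtheta = 2.0, 1e-3, 4e-4

lam_r = np.hypot(a, b)          # sqrt((dr/dr0)^2 + (dz/dr0)^2)
lam_theta = a                   # r / r0

ds0_sq = dr0**2 + (r0 * dtheta)**2                           # undeformed length^2
ds_sq = (a * dr0)**2 + (b * dr0)**2 + (a * r0 * dtheta)**2   # deformed length^2

cos_alpha = dr0 / np.sqrt(ds0_sq)
sin_alpha = (r0 * dtheta) / np.sqrt(ds0_sq)

lam_sq = lam_r**2 * cos_alpha**2 + lam_theta**2 * sin_alpha**2
assert np.isclose(ds_sq / ds0_sq, lam_sq)
```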
Now for a differential force balance on the deformed membrane. Consider the force balance on the "window" shaped element of the deformed membrane between $r_0$ and $r_0+dr_0$, and between $\theta$ and $\theta+d\theta$: $$\left(\sigma_r\lambda_{\theta}r_0d\theta t\hat{s}\right)_{r_0+dr_0}-\left(\sigma_r\lambda_{\theta}r_0d\theta t\hat{s}\right)_{r_0}+(\sigma_{\theta}\lambda_r dr_0 t\hat{\theta})_{\theta +d\theta}-(\sigma_{\theta}\lambda_r dr_0 t\hat{\theta})_{\theta}=0$$Dividing by $dr_0d\theta$ then gives: $$\frac{\partial(\sigma_r\lambda_{\theta}r_0 t\hat{s})}{\partial r_0}+\frac{\partial(\sigma_{\theta}\lambda_r t\hat{\theta})}{\partial\theta}=0$$Next, since the membrane material is incompressible, we have $$t=\frac{t_0}{\lambda_r \lambda_{\theta}}$$ Substituting this into the differential force balance then gives: $$\frac{\partial[r_0(\sigma_r/\lambda_{r}) \hat{s}]}{\partial r_0}+\frac{\partial[r_0(\sigma_{\theta}/\lambda_{\theta}) \hat{\theta}]}{r_0\,\partial\theta}=0$$
Define the engineering stress function $\sigma_E$ for a transversely isotropic membrane as follows: $$\sigma_E(\lambda_1,\lambda_2)=\frac{\sigma(\lambda_1,\lambda_2)}{\lambda_1}$$Then, our differential force balance becomes: $$\frac{\partial[r_0\sigma_E(\lambda_{r},\lambda_{\theta}) \hat{s}]}{\partial r_0}+\frac{\partial[r_0\sigma_E(\lambda_{\theta},\lambda_{r}) \hat{\theta}]}{r_0\,\partial\theta}=0$$ | {
"domain": "physics.stackexchange",
"id": 97716,
"tags": "stress-energy-momentum-tensor, stress-strain"
} |
Pop-Quiz program | Question: !!Code contains profanity for the purposes of point deduction!!
Scope:
This quiz program is being wrote for a single use. Due to this the names and questions/answers will be set in stone within the Teams Class and the Answers class.
I have used the fisher-yates shuffle to randomise an array allowing the console to randomly assign either teams or team captains. I included a text-to-speech function just to try it out hence the option to turn it on or off (this works fine).
Issues:
I have left a few comments describing issues I am having. Though, as the code is operational, I used Code Review rather than Stack Overflow. Please let me know if this belongs elsewhere and I will be more than happy to sort that out.
There are a couple of if statements for string.IsNullOrEmpty() that seem to make me have to press Enter twice, which I can't seem to work around (I have read/tried so many posts).
Also looking for general advice on making my code neater or if I have any bad habits that should be addressed. Greatly appreciate all comments.
Program Class:
using System;
using System.Collections.Generic;
using System.Speech.Synthesis;
namespace PopQuiz
{
class Program
{
static Random _random = new Random();
static void Shuffle<T>(T[] array)
{
int n = array.Length;
for (int i = 0; i < n; i++)
{
int r = i + (int)(_random.NextDouble() * (n - i));
T t = array[r];
array[r] = array[i];
array[i] = t;
}
}
private static void FlushKeyboard()
{
while (Console.In.Read() != -1) ;
}
static void Main(string[] args)
{
Console.ForegroundColor = ConsoleColor.Cyan;
Console.TreatControlCAsInput = false;
Console.Title = "Quiz";
int[] array = { 1, 2, 3, 4, 5, 6, 7, 8 };
Shuffle(array);
//int space1 = 25;
string Team1 = "";
string Team2 = "";
string team;
string captains;
string players1 = null;
string text;
ConsoleKeyInfo cki;
Console.TreatControlCAsInput = true;
text = "Welcome to the pop quiz!\nWould you like me to continue voicing this quiz?\nPress Y for Yes or any other key to continue without:";
Console.WriteLine(text);
SpeechSynthesizer synth = new SpeechSynthesizer();
synth.SelectVoice("Microsoft Zira Desktop");
//synth.Speak(text);
cki = Console.ReadKey();
Console.Clear();
bool voiceYN = String.Equals(cki.Key.ToString(), "Y", StringComparison.InvariantCultureIgnoreCase);
text = "Thank you for your selection, would you like me to assign you a team at random or just pick team captains for you?\nPress T for a random Team or C for team Captains:";
Console.WriteLine(text);
if (voiceYN == true)
{
synth.Speak(text);
}
incorrect:
cki = Console.ReadKey();
bool tResult = String.Equals(cki.Key.ToString(), "T", StringComparison.InvariantCultureIgnoreCase);
bool cResult = String.Equals(cki.Key.ToString(), "C", StringComparison.InvariantCultureIgnoreCase);
var thePlayers = Team.GetPlayer();
{
if (tResult == false)
{
if (cResult == false)
{
text = "\nThat is not a valid selection, please enter either a T or a C";
Console.WriteLine(text);
if (voiceYN == true)
{
synth.Speak(text);
}
goto incorrect;
}
Console.Clear();
var myPlayer = thePlayers[array[7]];
captains = myPlayer.Name;
myPlayer = thePlayers[array[3]];
team = (" "+captains + myPlayer.Name);
captains = ($" Team 1: Team2:\n{team}");
text = ($"The selected team captains are:\n{captains}\n\nPress any key to continue when you are ready to proceed");
Console.WriteLine(text);
if (voiceYN == true)
{
synth.Speak(text);
}
Console.ReadKey();
Console.Clear();
}
else
{
Console.Clear();
for (int i = 2; i < 9; i += 2)
{
var myPlayer1 = thePlayers[array[i - 2]];
var myPlayer2 = thePlayers[array[i - 1]];
players1 = (players1 + " " + myPlayer1.Name + myPlayer2.Name + "\n");
}
team = players1;
players1 = (" Team 1: Team2:\n" + players1);
text = ("You have chosen to have your teams randomly assigned.\nYour teams are shown below:");
Console.WriteLine("{0}\n\n{1}", text, players1);
if (voiceYN == true)
{
synth.Speak(text);
}
}
}
text = "Now that the teams are set, it is time to decide your team names.\n\nTeam 1, what will your team name be?";
Console.WriteLine(text);
if (voiceYN == true)
{
synth.Speak(text);
}
Start:
Team1 = Console.ReadLine();
//This is the first if to give a true result initially when a false is expected, but it does not break the program
if (string.IsNullOrEmpty(Team1))
{
goto Start;
}
text = ("\nWelcome to the game " + Team1 + "!\n\nTeam 2, now it's your turn.\nPlease enter your team name...");
Console.WriteLine(text);
if (voiceYN == true)
{
synth.Speak(text);
}
same:
Team2 = Console.ReadLine();
//This is the second
if (string.IsNullOrEmpty(Team2))
{
goto same;
}
Console.Title = "Quiz";
bool result = String.Equals(Team1, Team2, StringComparison.InvariantCultureIgnoreCase);
if (result == true)
{
text = "\n\nYou can't have the same team names...\nStop being lazy and give me another one!";
Console.WriteLine(text);
if (voiceYN == true)
{
synth.Speak(text);
}
goto same;
}
text = ("\n\nWelcome to the game " + Team2 + "!\nTeam names are set... Let's get ready to begin.");
Console.WriteLine(text);
if (voiceYN == true)
{
synth.Speak(text);
}
int tl = Team1.Length;
int space2 = (25 - tl);
string space = null;
for (int i = 0; i <= space2; i++)
{
space = (space + " ");
}
team = ($" {Team1}{space}{Team2}\n{team}\n\n");
if (voiceYN == false)
{
Console.WriteLine("\n3");
System.Threading.Thread.Sleep(1000);
Console.WriteLine("2");
System.Threading.Thread.Sleep(1000);
Console.WriteLine("1");
System.Threading.Thread.Sleep(1000);
}
Console.Clear();
string teamIntro;
var theAnswer = Answers.GetAnswer();
bool IsOdd(int value)
{
return value % 2 != 0;
}
int t1Score = 0;
int t2Score = 0;
List<string> Profanity = new List<string>();
Profanity.Add("fuck");
Profanity.Add("shit");
Profanity.Add("bollocks");
Profanity.Add("wank");
Profanity.Add("cunt");
Profanity.Add("tosser");
Profanity.Add("bastard");
Profanity.Add("fanny");
Profanity.Add("faggot");
Profanity.Add("arse");
for (int i = 0; i < 10; i++)
{
Console.Clear();
var myQuestion = theAnswer[i+1];
if(IsOdd(i+1))
{
if (i == 8)
{
teamIntro = ($"{Team1} time for your final question.");
}
else
{
switch (array[i])
{
case 1:
teamIntro = ($"{Team1} you're up!");
break;
case 2:
teamIntro = ($"{Team1} it's your turn.");
break;
case 3:
teamIntro = ($"{Team1} this question's yours.");
break;
case 4:
teamIntro = ($"{Team1} let's see if you can get this one.");
break;
case 5:
teamIntro = ($"{Team1}! I've picked this one especially for you...");
break;
case 6:
teamIntro = ($"{Team1} this is a tough one!");
break;
case 7:
teamIntro = ($"{Team1} whenever you're ready, here's your question:");
break;
default:
teamIntro = ($"{Team1} try this one on for size:");
break;
}
}
}
else
{
if (i == 9)
{
teamIntro = ($"{Team2} time for your final question.");
}
else
{
switch (array[i])
{
case 1:
teamIntro = ($"{Team2} you're up!");
break;
case 2:
teamIntro = ($"{Team2} it's your turn.");
break;
case 3:
teamIntro = ($"{Team2} this question's yours.");
break;
case 4:
teamIntro = ($"{Team2} let's see if you can get this one.");
break;
case 5:
teamIntro = ($"{Team2}! I've picked this one especially for you...");
break;
case 6:
teamIntro = ($"{Team2} this is a tough one!");
break;
case 7:
teamIntro = ($"{Team2} whenever you're ready, here's your question:");
break;
default:
teamIntro = ($"{Team2} try this one on for size:");
break;
}
}
}
text = ($"{team}{myQuestion.NumText} Question:\n\n{teamIntro}\n\n{myQuestion.Question}");
Console.WriteLine(text);
if (voiceYN == true)
{
synth.Speak(text);
}
blank:
string myAnswer = Console.ReadLine();
//This is the third
if (string.IsNullOrEmpty(myAnswer))
{
goto blank;
}
result = string.Equals(myAnswer, myQuestion.Answer, StringComparison.InvariantCultureIgnoreCase);
if (result == true)
{
if (IsOdd(i + 1))
{
t1Score = t1Score + 1;
}
else
{
t2Score = t2Score + 1;
}
}
else
{
for (int p = 0; p < Profanity.Count; p++)
{
if (myAnswer.ToLower().Contains(Profanity[p]))
{
result = true;
break;
}
}
if (result == true)
{
Console.WriteLine("\nProfanity is not accepted! 1 point has been deducted from your team's score");
if (IsOdd(i))
{
t1Score = t1Score - 1;
}
else
{
t2Score = t2Score - 1;
}
System.Threading.Thread.Sleep(2000);
}
}
}
}
}
}
Teams Class:
using System.Collections.Generic;
namespace PopQuiz
{
public class Team
{
public string Name { get; set; }
public Team(string name)
{
Name = name;
}
public static Dictionary<int, Team> GetPlayer()
{
var player = new Dictionary<int, Team>();
var myPlayer = new Team("Name1 ");
player.Add(1, myPlayer);
myPlayer = new Team("Name2 ");
player.Add(2, myPlayer);
myPlayer = new Team("Name3 ");
player.Add(3, myPlayer);
myPlayer = new Team("Name4 ");
player.Add(4, myPlayer);
myPlayer = new Team("Name5 ");
player.Add(5, myPlayer);
myPlayer = new Team("Name6 ");
player.Add(6, myPlayer);
myPlayer = new Team("Name7 ");
player.Add(7, myPlayer);
myPlayer = new Team("Name8 ");
player.Add(8, myPlayer);
return player;
}
}
}
Answers Class:
using System.Collections.Generic;
namespace PopQuiz
{
public class Answers
{
public string Question { get; set; }
public string Answer { get; set; }
public string NumText { get; set; }
public Answers(string numText, string question, string answer)
{
NumText = numText;
Question = question;
Answer = answer;
}
public static Dictionary<int, Answers> GetAnswer()
{
var question = new Dictionary<int, Answers>();
var theAnswer = new Answers("First","Who?", "...");
question.Add(1, theAnswer);
theAnswer = new Answers("Second","What?", "...");
question.Add(2, theAnswer);
theAnswer = new Answers("Third","Where?", "...");
question.Add(3, theAnswer);
theAnswer = new Answers("Fourth","When?", "...");
question.Add(4, theAnswer);
theAnswer = new Answers("Fifth","How?", "...");
question.Add(5, theAnswer);
theAnswer = new Answers("Sixth", "Why?", "...");
question.Add(6, theAnswer);
theAnswer = new Answers("Seventh", "Which?", "...");
question.Add(7, theAnswer);
theAnswer = new Answers("Eighth", "With whom?", "...");
question.Add(8, theAnswer);
theAnswer = new Answers("Ninth", "What also?", "...");
question.Add(9, theAnswer);
theAnswer = new Answers("Tenth", "Done now?", "...");
question.Add(10, theAnswer);
return question;
}
}
}
Answer: I'm just going to point out things in whatever random order I see them in. Mind you, there is so much going on with this code that I'll just be scratching the surface.
All your if (x == true) or if (x == false) comparisons would be better written as if (x) or if (!x)
The whitespace in the names var myPlayer = new Team("Name1 ") shouldn't be added there, because the whitespace isn't part of the name. Do your formatting when you output it.
The code below creates a new string (myAnswer.ToLower()) for every profanity:
for (int p = 0; p < Profanity.Count; p++)
{
if (myAnswer.ToLower().Contains(Profanity[p]))
{
result = true;
break;
}
}
You could do the ToLower() just once and compare that to each profanity. Better yet, don't do ToLower() at all, and instead do a case-insensitive comparison:
if (myAnswer.IndexOf(Profanity[p], StringComparison.OrdinalIgnoreCase) >= 0)
Better yet again, don't do this loop at all and use linq: result = Profanity.Any(p => myAnswer.IndexOf(p, StringComparison.OrdinalIgnoreCase) >= 0);. And then go one step further and change the name result to something more useful: var containsProfanity = Profanity.Any(p => myAnswer.IndexOf(p, StringComparison.OrdinalIgnoreCase) >= 0);
Although I don't really like the way it checks in the first place because it will pick up legit words as well - for example, Arsenal will be considered profanity.
Way, way too much going on in Main(). Pretty much everything in there, except maybe setting up the colours and title, should be in separate methods, which in turn should probably be in other classes.
var thePlayers = Team.GetPlayer(); -- GetPlayer() suggests a single player (but which one?), variable name suggests it returns a collection. Inconsistent naming.
You don't need to declare all your variables at the top of the method. Declare them generally right before you use them, unless it's clearer to do otherwise. In a method this big (Main()), it's not clear to have them all at the top.
Surely there must be a better way to get a player:
var myPlayer = thePlayers[array[7]];
captains = myPlayer.Name;
myPlayer = thePlayers[array[3]];
I can't even understand what's going on there. You're accessing a player at a hard coded array index, you know that index 7 will always be the captain, and overwriting the myPlayer variable. Right, and who is player index 3? Why is captains plural when it's assigned a single string? You really need to encapsulate this logic in methods that explain what they're doing, for example:
var captain = team.GetCaptain();
var player = team.GetPlayer("Bob");
Etc. This goes for basically everything else in Main(), all this ugly array logic should be hidden behind methods, but this one in particular stood out to me. | {
"domain": "codereview.stackexchange",
"id": 25685,
"tags": "c#, beginner, console"
} |
Install from source hangs on 'pcl' | Question:
Attempting to install from source. Following the directions for installing groovy from source. (generic source installation)
OS: 64 bit Ubuntu - Quantal
Chose Desktop install (Recommended) core packages.
I have gotten as far as the step that says...
Invoke catkin_make_isolated:
$ ./src/catkin/bin/catkin_make_isolated --install
Everything is working through package 144 of 153, but when it gets to pcl it errors out.
error message follows.
==> Processing plain cmake package: 'pcl'
==> Building with env: '/home/brian/ros_catkin_ws/install_isolated/env.sh'
Makefile exists, skipping explicit cmake invocation...
==> make cmake_check_build_system in '/home/brian/ros_catkin_ws/build_isolated/pcl'
==> make -j2 -l2 in '/home/brian/ros_catkin_ws/build_isolated/pcl'
[ 0%] Built target pcl_octree
[ 0%] Built target pcl_io_ply
[ 1%] Built target pcl_pcd_convert_NaN_nan
[ 2%] Built target pcl_ply2obj
Linking CXX shared library ../lib/libpcl_common.so
[ 2%] Built target pcl_ply2ply
/usr/bin/ld: cannot find -lsensor_msgs
/usr/bin/ld: cannot find -lroscpp_serialization
/usr/bin/ld: cannot find -lrosconsole
/usr/bin/ld: cannot find -lrostime
collect2: error: ld returned 1 exit status
make[2]: *** [lib/libpcl_common.so.1.6.0] Error 1
make[1]: *** [common/CMakeFiles/pcl_common.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
Scanning dependencies of target pcl_ply2raw
[ 3%] Building CXX object io/tools/ply/CMakeFiles/pcl_ply2raw.dir/ply2raw.cpp.o
Linking CXX executable ../../../bin/pcl_ply2raw
[ 3%] Built target pcl_ply2raw
make: *** [all] Error 2
<== Failed to process package 'pcl':
Command '/home/brian/ros_catkin_ws/install_isolated/env.sh make -j2 -l2' returned non-zero exit status 2
Reproduce this error by running:
==> /home/brian/ros_catkin_ws/install_isolated/env.sh make -j2 -l2
I have tried to use option --force-cmake, and a few other things, but I have made no progress. Any suggestions on how to proceed?
Originally posted by walkerbrianpatrick on ROS Answers with karma: 21 on 2013-01-29
Post score: 2
Answer:
See http://answers.ros.org/question/53401/error-when-compiling-pcl-for-ros-groovy-install-from-source-on-ubuntu-12041/
Originally posted by KruseT with karma: 7848 on 2013-02-07
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 12626,
"tags": "pcl, catkin"
} |
Reflecting photons and conservation of energy | Question: Does light reflecting from a material add kinetic energy to the material or does light only add kinetic energy when absorbed?
And if reflection adds kinetic energy, where is this energy being subtracted from? The amplitude/frequency of light? Or is absorption statistically always possible, just highly unlikely for a material with high albedo?
EDIT: OK, so I have made a calculation for a blue light photon reflecting from a 1 kg body; this is what I got:
f' = E'/h
= (E - vBody * 1kg) / h
= (h * 650*10^12 - ( 2 * ((E / c^2 * c)/(1 + E / c^2)) ) ) / h
= (h * 650*10^12 - ( 2 * ((((4.29 * 10^-19) / c^2) * c)/(1 + (4.29 * 10^-19) / c^2)) ) ) / h
= ((6.62607004 × 10^-34) * 650*10^12 - ( 2 * ((((4.29 * 10^-19) / 299792458^2) * 299792458)/(1 + (4.29 * 10^-19) / 299792458^2)) ) ) / (6.62607004 × 10^-34)
= 6.499.. × 10^14 Hz
Is that correct?
Answer: With the assumption that the "material" is initially at rest and no complicating factors such as tricky coupling to other material you haven't yet mentioned - for example if it's a chunk of something floating in space and it is struck by a beam of light...
Does light reflecting from a material add kinetic energy to the material
yes
or does light only add kinetic energy when absorbed?
not only, it adds kinetic energy to that material in both cases.
And if reflection adds kinetic energy, where is this energy being subtracted from? The amplitude/frequency of light?
yep... the outgoing reflected photons will have a lower energy and therefore lower frequency/longer wavelength.
Or is absorption statistically always possible, just highly unlikeley when having a material with high albedo?
You can do the calculation without necessarily using statistics. Every photon of frequency $\nu$ carries a momentum $p=h \nu / c$. You can convert to wavelength $\lambda$ using $\nu = c/ \lambda$. If the photon is absorbed, the kinetic energy given to the material of mass $m$ is $p^2/2m$. If it is reflected, you calculate the same way you would for elastic scattering. Assuming the mass of the material is much larger than the mass-equivalent of the photon's energy (which holds for visible light and even a single atom), the energy loss of the photon is small and the kinetic energy added is $(2p)^2/2m = 2p^2/m$, four times the absorbed case, because the momentum transferred is $2p$ for 180° reflection.
You can read more about radiation pressure. | {
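To put numbers on this, here is a quick sketch in Python; the 650 THz photon and 1 kg mirror are taken from the question's edit, and the constants are the usual CODATA values:

```python
# Recoil kinetic energy given by one photon to a 1 kg body,
# comparing full absorption with 180-degree reflection.
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s

nu = 650e12          # blue photon frequency, Hz (from the question)
m = 1.0              # mass of the body, kg

p = h * nu / c                        # photon momentum
ke_absorbed = p**2 / (2 * m)          # momentum transfer p
ke_reflected = (2 * p)**2 / (2 * m)   # momentum transfer 2p -> exactly 4x the absorbed case

delta_nu = ke_reflected / h           # frequency lost by the reflected photon

print(p, ke_absorbed, ke_reflected, delta_nu)
```

The resulting red-shift of the reflected photon is on the order of $10^{-21}$ Hz, utterly negligible next to the $6.5\times10^{14}$ Hz of the photon itself, which is consistent with the "almost unchanged" frequency found in the question's edit.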
"domain": "physics.stackexchange",
"id": 34052,
"tags": "photons, energy-conservation"
} |
Measuring the Hamiltonian in the VQE | Question: I am trying to implement VQE in pyQuil and am dumbfounded by how to measure the expectation value of a general Hamiltonian on $\mathbb{C}^{2^n}$ i.e. determine $\langle\psi , H \psi\rangle$ on a Quantum computer. As far as I understand on a real Quantum Computer (not any quantum virtual machine) I can only measure in the computational basis, which is the basis of the Hamiltonian $H = X = \sum x \left|x\right>\left<x\right|$, but not for any Hamiltonian whose eigenvectors are not the computational basis. But how do I measure with any Hamiltonian that is not diagonal in the computational basis?
Sure, I can measure e.g. some of the qubits in the $X$-basis instead of the $Z$-basis by applying a Hadamard gate to them, but this surely doesn't help me if I want to measure something non-local, i.e. if the ground state of my Hamiltonian is an entangled state.
On a maybe related note: can I write any Hamiltonian (Hermitian matrix) as a Pauli decomposition? I know I can for a single qubit, but does this hold for multiple qubits as well?
Answer: Yes, you can decompose any Hamiltonian. For VQE purposes, any finite-dimensional Hamiltonian can be presented as a sum of terms which consist of tensor products of Pauli matrices (https://arxiv.org/abs/1304.3061):
$$
H = \sum_{\alpha, i} h^{\alpha}_{i} \sigma^{\alpha}_{i} +
\sum_{\alpha, \beta, i, j} h^{\alpha \beta}_{ij} \sigma^{\alpha}_{i} \sigma^{\beta}_{j} + \dots
$$
As such, the expected value $ \langle H \rangle$ can be estimated by measuring the expected values of such combinations of Pauli matrices. There may be trouble if the quantity of these terms grows exponentially in the size of the system, but many interesting Hamiltonians decompose into a polynomial number of operators.
The coefficients of the decomposition can be obtained by making a scalar product of the Hamiltonian with the basis term: $(H, A) = \frac1d \mathrm{Tr}(HA)$. | {
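As a one-qubit sketch of that last formula in NumPy (the sample Hermitian matrix is arbitrary, chosen just for illustration):

```python
import numpy as np

# Single-qubit Pauli basis
paulis = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

# An arbitrary Hermitian "Hamiltonian"
H = np.array([[2.0, 1 - 1j],
              [1 + 1j, -1.0]])

# h_A = Tr(H A) / d, with d = 2 for one qubit; the trace is real for Hermitian H
coeffs = {name: (np.trace(H @ A) / 2).real for name, A in paulis.items()}

# Rebuild H from the decomposition to check that it round-trips
H_rebuilt = sum(c * paulis[name] for name, c in coeffs.items())
```

For $n$ qubits the basis consists of the $4^n$ tensor products of these matrices and $d = 2^n$; the round-trip check is the same.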
"domain": "quantumcomputing.stackexchange",
"id": 521,
"tags": "programming, gate-synthesis, pyquil"
} |
Minor mode for blog writing in HTML | Question: This is a collection of reasonably useful functions I put together for writing my blog.
(require 'htmlize)
(defvar blog-mode-map nil
"Keymap for blog minor mode")
(unless blog-mode-map
(let ((map (make-sparse-keymap)))
(define-key map "\C-cl" 'insert-link)
(define-key map "\C-cp" 'insert-code-block)
(define-key map "\C-cc" 'insert-inline-code)
(define-key map "\C-cb" 'insert-bold)
(define-key map "\C-cq" 'insert-quote)
(define-key map "\C-cs" 'insert-sig)
(define-key map "\C-cf" 'insert-footnote)
(define-key map "\C-c\C-l" 'region-to-link)
(define-key map "\C-c\C-p" 'region-to-code-block)
(define-key map "\C-c\C-c" 'region-to-inline-code)
(define-key map "\C-c\C-b" 'region-to-bold)
(define-key map "\C-c\C-q" 'region-to-quote)
(define-key map "\C-c\C-s" 'region-to-sig)
(define-key map "\C-c\C-f" 'region-to-footnote)
(define-key map "/" 'smart-backslash)
(setq blog-mode-map map)))
(define-minor-mode blog-mode
"This is a collection of useful keyboard macros for editing Langnostic"
nil
" Blog"
(use-local-map blog-mode-map))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; simple definitions
(defun insert-tag (start-tag &optional end-tag)
"Inserts a tag at point"
(interactive)
(insert start-tag)
(save-excursion
(insert (or end-tag ""))))
(defun wrap-region (start end start-tag &optional end-tag)
"Inserts end tag at the end of the region, and start tag at point"
(goto-char end)
(insert (or end-tag ""))
(goto-char start)
(insert start-tag))
(defmacro definsert (tag-name start-tag end-tag)
"Defines insert function."
`(defun ,(make-symbol (concat "insert-" (symbol-name tag-name))) ()
(interactive)
(insert-tag ,start-tag ,end-tag)))
(defmacro defregion (tag-name start-tag end-tag)
"Defines region wrapper function."
`(defun ,(make-symbol (concat "region-to-" (symbol-name tag-name))) ()
(interactive)
(wrap-region (region-beginning) (region-end) ,start-tag ,end-tag)))
(definsert link (concat "<a href=\"" (x-get-clipboard) "\">") "</a>")
(defregion link (concat "<a href=\"" (x-get-clipboard) "\">") "</a>")
(definsert bold "<b>" "</b>")
(defregion bold "<b>" "</b>")
(definsert quote "<blockquote>" "</blockquote>")
(defregion quote "<blockquote>" "</blockquote>")
(definsert sig "<span class=\"sig\">" "</span>")
(defregion sig "<span class=\"sig\">" "</span>")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; <pre> and <code> definitions
(definsert code-block "<pre>" "</pre>")
(definsert inline-code "<code>" "</code>")
;; region versions are more complicated to accommodate htmlize
(defun region-to-inline-code ()
"HTMLize just the current region and wrap it in a <code> block"
(interactive)
(let((htmlified (substring (htmlize-region-for-paste (region-beginning) (region-end)) 6 -6)))
(delete-region (region-beginning) (region-end))
(insert-inline-code)
(insert htmlified)))
(defun region-to-code-block ()
"HTMLize the current region and wrap it in a <pre> block"
(interactive)
(let ((htmlified (htmlize-region-for-paste (region-beginning) (region-end))))
(delete-region (region-beginning) (region-end))
(insert (concat "<pre>" (substring htmlified 6)))))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; footnote definitions
(defun insert-footnote ()
"Inserts footnote, and a return link at the bottom of the file.
Moves point to footnote location."
(interactive)
(progn (footnotes-header)
(let ((footnote-name (format-time-string "%a-%b-%d-%H%M%S%Z-%Y" (current-time))))
(insert "<a href=\"#foot-" footnote-name "\" name=\"note-" footnote-name "\">[note]</a>")
(goto-char (point-max))
(insert "\n\n<a href=\"#note-" footnote-name "\" name=\"foot-" footnote-name "\">[back]</a> - "))))
(defun region-to-footnote ()
"Inserts a footnote at point and return link at the bottom. Moves the current region to the end of the file.
Leaves point where it is."
(interactive)
(save-excursion (kill-region (region-beginning) (region-end))
(insert-footnote)
(yank)))
(defun footnotes-header ()
"Inserts footnote header if not already present"
(unless (save-excursion (search-forward "<hr />\n<h5>Footnotes</h5>" nil t))
(save-excursion
(goto-char (point-max))
(insert "\n\n<hr />\n<h5>Footnotes</h5>"))))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; utility
(defun smart-backslash ()
"Backslash closes previous tag when used in the combination </. Self-inserts otherwise."
(interactive)
(if (equal (save-excursion (backward-char) (thing-at-point 'char)) "<")
(progn (backward-delete-char 1)
(sgml-close-tag))
(insert "/")))
(provide 'blog-mode)
I'd like some advice on how to remove duplication in a few obvious places. To my mind, I should be able to do something like (deftag bold "b" "<b>" "</b>") which would expand into a definsert, defregion and define-key. I'm not sure how to have a macro define multiple functions in Elisp though; progn (quoted or not) doesn't seem to help.
Feel free to point out any components I could replace with pre-built Emacs23 functions, or anything I could do more elegantly.
EDIT: Updated version of this code can be found here.
Answer: Not sure if this is relevant as the question is a year old, but you can look at https://github.com/Neil-Smithline/defassoclist to see how to create multiple functions in a macro.
You didn't include your failed attempts with progn, so I can't tell you what you were doing wrong, but progn does work.
"domain": "codereview.stackexchange",
"id": 1751,
"tags": "elisp"
} |
To stop or to go around? | Question: Let us we are moving in a car. There is a wall in front of us, we need to decide can we go around it or not. It is known that width of the wall is $w$ and our speed is $v = const$.
What is a simple approach to set formally conditions upon which it is safe to go around?
I understand that there are many possible details such as traction, weight of car etc. and will be glad even for the most simplistic analysis. If you can provide a source, that too is great. Thanks.
Answer: Each car, depending on its handling, has a maximum safe turning speed $v_{max}$ and a turning radius $r$. Assuming the current speed $v<v_{max}$, the turning angular velocity is $$\omega= v/r.$$ We need to sweep an arc of $\pi/2$, so the time it takes to turn is $$t=\frac{\pi/2}{\omega}= \frac{\pi r}{2v},$$ which gives the decision distance $x$ as $x=t\cdot v = \pi r/2$ (notably independent of the speed, as long as $v<v_{max}$).
Edit
The above was for a wall wider than the car's cornering circle; if it is narrower, the required arc is smaller and we have $\theta= \arccos\!\left(\frac{r-w}{r}\right)$, where $w$ is the wall width, from the lateral-displacement condition $r(1-\cos\theta)=w$.
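Putting the quarter-turn case into a quick Python sketch (the function name and sample numbers are mine, purely illustrative):

```python
import math

def decision_distance(v, r):
    """Distance from the wall at which you must commit to the turn,
    per the model above: sweep a quarter-circle arc at angular velocity v/r."""
    omega = v / r                # turning angular velocity
    t = (math.pi / 2) / omega    # time to cover a pi/2 arc
    return t * v                 # = pi * r / 2, independent of v

print(decision_distance(10.0, 8.0))  # ~12.57 m, the same for any speed with r = 8 m
```

The cancellation of $v$ is worth noticing: within this simplistic model, a faster car needs no more decision distance, because it also completes the arc proportionally faster.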
"domain": "engineering.stackexchange",
"id": 3905,
"tags": "control-engineering, applied-mechanics, kinematics, geometry"
} |
Reaction of borax with NaOH | Question: How does borax react with NaOH?
My textbook says that sodium peroxoborate is formed. I couldn't find the reaction anywhere else.
Can anyone verify as well as explain it?
Answer: The following reaction schemes are adapted from [1, pp. 76–77].
Amorphous borax $\ce{Na2B4O7}$ forms tetrahydroxoborate in concentrated sodium hydroxide solution:
$$\ce{Na2B4O7 + 7 H2O + 2 NaOH → 4 Na[B(OH)4]}$$
Upon fusion sodium metaborate is formed:
$$\ce{Na2B4O7 + 2 NaOH →[\pu{700-750 °C}] 4 NaBO2 + H2O}$$
In order to form peroxoborate, a peroxide should be introduced in the first place, e.g. by using hydrogen peroxide (see also Preparation of Sodium Peroxoborate):
$$\ce{Na2B4O7 + 2 NaOH + 4 H2O2 + 11 H2O → 2 Na2[B2(O2)2(OH)4] * 6H2O}$$
References
R. A. Lidin, V. A. Molochko, and L. L. Andreeva, Reactivity of Inorganic Substances, 3rd ed.; Khimia: Moscow, 2000. (in Russian) | {
"domain": "chemistry.stackexchange",
"id": 11979,
"tags": "inorganic-chemistry, acid-base, reference-request"
} |
Is there an equivalent to vertebrate endogenous retroviruses in invertebrates or microbes? | Question: I know jawed vertebrates have a lot of junk DNA floating around coming from ancient retroviruses. Some of the DNA is important to mammalian evolution. The DNA also provides a useful means to study the evolution of viruses.
I wonder if similar things happen for other lifeforms. I know parasitic wasps have a very interesting story even if I don't know the details.
I suppose that even if viruses inserted themselves into microbes, microbes are under much more pressure to keep their genomes small, so they may have gained genes while the evidence in the form of junk DNA would have been lost long ago.
I suppose I'm asking: did ancient viruses insert themselves into other life in a similar manner to the endogenous retroviruses found in vertebrates and would we have evidence of it today in a similar manner?
Answer: I would not say that they are “equivalents” of endogenous retroviruses, but there are two types of elements in bacteria that bear some similarity.
Prophages
As some bacterial viruses (phages) can undergo lysogeny as well as cause lysis of the host (e.g. phage lambda), it is not surprising that there are traces of phages in almost all sequenced bacterial genomes, although it is not always easy to detect and/or identify them unequivocally. This was reviewed by Casjens in 2003, who considered its significance in terms of the evolution of both phage and bacteria.
Of course, the structure of such phages differs from that of endogenous retroviruses and is generally much more complex.
Insertion Elements (IS)
To some extent endogenous retroviruses can be considered as retrotransposons elements, and this is certainly the case for the related LINE (L1) elements. So perhaps the bacterial insertion elements — the IS transposons — are worth mentioning. The structure of one class is shown below.
(Source: https://commons.wikimedia.org/wiki/File:Composite_transposon.svg) | {
"domain": "biology.stackexchange",
"id": 11847,
"tags": "microbiology, virology, retrovirus"
} |
Computer vision algorithms for binary classification of bird images | Question: I want to start a project to detect if an image is a crow or not a crow (crow as in the black bird). Is this referred to as "binary classification?" If I wanted to use open source Python libraries, what kind of algorithms/concepts/libraries should I be researching? Are there any datasets that could be useful for this classifier? Thank you!
Answer: Yes, this is binary classification, as you only have two classes. Generally, binary classification is simply referred to as "classification"; if you have more than two classes, it is called multi-class or multinomial classification. This Wikipedia article provides both a summary of what binary classification is and a short introduction to methods you can use to get started.
If you're looking for a specific python library to get started, I recommend Keras. It is quite easy to use and they provide good examples to get started. This is an example for binary classification of cats and dogs: Classification from scratch for cats and dogs
You can adapt it for your own dataset. | {
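If you first want to see the bare mechanics of binary classification before reaching for Keras, here is a minimal logistic-regression sketch in plain NumPy on separable toy data (illustrative only; real crow/not-crow images would need a convolutional model such as the Keras example above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-feature dataset: "crow" points around (2, 2), "not crow" around (-2, -2)
X = np.vstack([rng.normal( 2.0, 1.0, (50, 2)),
               rng.normal(-2.0, 1.0, (50, 2))])
y = np.array([1] * 50 + [0] * 50)

w, b = np.zeros(2), 0.0
for _ in range(500):                          # plain batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == y)
print(accuracy)
```

A real image classifier replaces the two hand-made features with features learned by convolutional layers, but the decision rule at the end (a sigmoid output thresholded at 0.5) is the same.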
"domain": "ai.stackexchange",
"id": 3877,
"tags": "computer-vision, binary-classification"
} |
First-order model checking on general graphs is intractable | Question: I read that the first-order model checking problem is intractable on general graphs.
How is this shown? I would be happy about a reference!
Thanks in advance
Answer: You can represent classical NP-complete problems as model checking of first-order formulas in the language of graphs, for example subgraph isomorphism, dominating set, clique, vertex cover, and so on. | {
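For example, $k$-clique: a graph $G$ (viewed as a structure with edge relation $E$) contains a clique of size $k$ exactly when $G \models \varphi_k$ with

$$\varphi_k \;=\; \exists x_1 \cdots \exists x_k \,\bigwedge_{1 \le i < j \le k} \bigl(x_i \neq x_j \;\wedge\; E(x_i, x_j)\bigr),$$

so a tractable model checker for arbitrary first-order sentences would decide CLIQUE in polynomial time. In terms of combined complexity (formula and graph both part of the input), first-order model checking is in fact PSPACE-complete; a standard reference is Vardi's 1982 paper "The complexity of relational query languages".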
"domain": "cs.stackexchange",
"id": 19868,
"tags": "graphs, model-checking, parameterized-complexity"
} |
Commutation relations in Gupta-Bleuler formalism | Question: When quantising the EM field thanks to the Gupta-Bleuler formalism, Itzykson and Zuber assume that the canonical commutation rules are
$$ [\hat{A}_\rho (t,\vec{x}), \hat{\pi}^\nu(t,\vec{y})]= i \, g_\rho^{\, \,\nu}\, \delta^3(\vec{x}-\vec{y})$$
I don't quite understand where the metric tensor comes from. When quantising the scalar field, we assume commutation rules between the field and its conjugate momentum to be
$$[\hat{\phi} (t,\vec{x}), \hat{\pi}(t,\vec{y})]= i \, \delta^3(\vec{x}-\vec{y})$$
because of the Poisson bracket to commutator classical to quantum correspondance
$$\{\phi (t,\vec{x}), \pi(t,\vec{y})\}_{Poisson} \rightarrow -i \, [\hat{\phi} (t,\vec{x}), \hat{\pi}(t,\vec{y})] $$
For what reason does $g$ appear in this case?
Answer: The fact is that $\hat{\pi}^{\nu}$ is the conjugate momentum to $\hat{A}_{\nu}\;$, so the presence of the tensor $g_{\rho}^{\nu}=g_{\rho\tau}g^{\tau\nu}$ ensures that the commutator is taken between the field (component) and its respective conjugate momentum. | {
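Spelled out, assuming the Fermi (Feynman-gauge) Lagrangian that Gupta–Bleuler quantization starts from:

$$\mathcal{L} = -\frac14 F_{\mu\nu}F^{\mu\nu} - \frac12 \left(\partial_\mu A^\mu\right)^2, \qquad \pi^\nu = \frac{\partial \mathcal{L}}{\partial (\partial_0 A_\nu)} = F^{\nu 0} - g^{\nu 0}\, \partial_\mu A^\mu,$$

so $\pi^\nu$ is conjugate to $A_\nu$ (lower index), and the canonical rule applied component by component reads

$$[\hat A_\rho(t,\vec x), \hat\pi^\nu(t,\vec y)] = i\,\delta_\rho^{\;\nu}\,\delta^3(\vec x - \vec y) = i\, g_\rho^{\;\nu}\, \delta^3(\vec x - \vec y),$$

since $g_\rho^{\;\nu} = g_{\rho\tau} g^{\tau\nu} = \delta_\rho^{\;\nu}$. Writing $g$ instead of $\delta$ simply keeps the expression looking covariant.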
"domain": "physics.stackexchange",
"id": 57368,
"tags": "quantum-field-theory, field-theory, commutator"
} |
What is the time complexity tf2 core library | Question:
Hello All,
Can anyone guide me on how to determine the time complexity of the tf2 core library?
Thanks
Originally posted by babu on ROS Answers with karma: 1 on 2017-02-27
Post score: 0
Answer:
There's a paper on tf here: http://wiki.ros.org/Papers/TePRA2013_Foote
tf2 has several optimizations but has most of the same structure. The majority of the changes were dependency cleanup etc. You can see the tf2 design here: http://wiki.ros.org/tf2/Design
Performance related changes from tf include:
Adding support for static transforms. This avoids maintaining a time history, and provides O(1) lookups.
There is support for remote caches, reducing the need for multiple caches.
Originally posted by tfoote with karma: 58457 on 2017-02-27
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 27147,
"tags": "ros, transform, tf2"
} |
Static Map corner at origin for Navigation Stack | Question:
Going through the tutorials for the ROS Navigation stack, using a static map generated through gmapping.
This is done on a simulation. When to robot is initialized, it is located at the origin. I perform gmapping and the map is generated, centered around origin.
However, after launching the navigation stack following the tutorial, the static map seems to have its corner located at the origin, which leads to
[ WARN] [1521163803.387421877, 5297.630000000]: The origin for the sensor at (-0.10, 0.03) is out of map bounds. So, the costmap cannot raytrace for it.
Note: If I add the map through map_server and visualize it through RViz, then the map is properly centered, with the center of the map at the origin.
I am vaguely aware of the ability to add offsets to the static map, but not too clear on how, and I feel as though static offsets aren't an ideal solution if the size of the map can change.
Originally posted by hni19 on ROS Answers with karma: 48 on 2018-03-15
Post score: 0
Answer:
Just solved the issue. The navigation tutorial directs you to use the pgm (image file) of the map. Using the YAML file instead provides resolution as well as the correct origin offsets.
Originally posted by hni19 with karma: 48 on 2018-03-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by serhat on 2021-10-15:
I had the same problem and solved the issue with this answer. Thank you! I did this change below in my move_base.launch(Refer to Section 2.5)
Previously:
<node name="map_server" pkg="map_server" type="map_server" args="$(find my_map_package)/my_map.pgm my_map_resolution"/>
For starting the map on origin of RvizGrid:
<node name="map_server" pkg="map_server" type="map_server" args="$(find my_map_package)/my_map.yaml"/> | {
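For reference, a typical map_server YAML looks like the sketch below (the numeric values are illustrative, not taken from the question):

```yaml
image: my_map.pgm
resolution: 0.050000          # meters per pixel
origin: [-25.0, -25.0, 0.0]   # (x, y, yaw) of the lower-left map pixel in the map frame
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196
```

It is the origin entry that shifts the map's lower-left corner away from (0, 0); given only the .pgm image, map_server has no way to know this offset, which is why the map ended up cornered at the origin.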
"domain": "robotics.stackexchange",
"id": 30336,
"tags": "gazebo, navigation, rviz, ros-kinetic, static-map"
} |
How I reformat this data | Question: I have this data all in one line; even the gene from the first sample is joined onto the second record. I want this data column-wise, with each field separated, like I have provided:
#Uploaded variation #Uploaded_variation 1 883477 G T 1_883477_G/TPRDM16 1 3125744 A G 1_3125744_A/GPRDM16
But I want
V1 V2 V3 V4 V5 V6
1_883477_G/T 1 883477 G T PRDM16
1_3125744_A/G 1 3125744 A G PRDM16
Answer: I am really curious what software or piece of code gave you this one-line data. I would think twice before using a tool that gives such output. Anyway, I think regular expressions are the way to go in this case (this solution uses R):
library(stringr)
library(magrittr) # provides the %>% pipe used below; stringr does not attach it
text <- "#Uploaded variation #Uploaded_variation 1 883477 G T 1_883477_G/TPRDM16 1 3125744 A G 1_3125744_A/GPRDM16"
my_pattern <- "[0-9]+\\s[0-9]+\\s[ATGC]+\\s[ATGC]+\\s[0-9]+_[0-9]+_[ATGC]+\\/[0-9A-Z]+"
info_as_list <- str_extract_all(text,
pattern = my_pattern) %>% unlist()
info_as_list <- sub("/",
" ",
info_as_list)
info_as_list <- str_split(info_as_list, " ")
as.data.frame(Reduce(rbind, info_as_list))
This is the output, I guess the rowname "init" comes from Reduce():
V1 V2 V3 V4 V5 V6
init 1 883477 G T 1_883477_G TPRDM16
1 3125744 A G 1_3125744_A GPRDM16
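As a hypothetical cross-check (not part of the original answer), the same extraction can be sketched with Python's standard re module. Note it reproduces the same caveat as the R output: the alternate allele stays fused to the gene symbol (e.g. TPRDM16).

```python
import re

text = ("#Uploaded variation #Uploaded_variation 1 883477 G T "
        "1_883477_G/TPRDM16 1 3125744 A G 1_3125744_A/GPRDM16")

# Same pattern as the R answer; a Python raw string needs only single backslashes.
pattern = r"[0-9]+\s[0-9]+\s[ATGC]+\s[ATGC]+\s[0-9]+_[0-9]+_[ATGC]+/[0-9A-Z]+"

rows = []
for hit in re.findall(pattern, text):
    # Mirror the R post-processing: replace the slash, then split on whitespace.
    rows.append(hit.replace("/", " ").split())

for row in rows:
    print(row)
```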
Here is what the regular expression does, you might need to alter this based on your data:
[0-9]+\\s : start position
[0-9]+\\s : end position
[ATGC]+\\s : ref nucleotide(s)
[ATGC]+\\s : alternate nucleotide(s)
[0-9]+ : start
_ : underscore
[0-9]+ : end
_ : underscore
[ATGC]+ : ref
\\/ : slash
[0-9A-Z]+ : gene symbol
+ means one or more of the preceding and R uses two backslashes, \\, as escape characters. | {
"domain": "bioinformatics.stackexchange",
"id": 1357,
"tags": "awk"
} |
Biological meaning of Lotka-Volterra Jacobian matrices | Question: In a book about community ecology, I learned about Lotka-Volterra models and dynamic food web models (I am not an ecologist in the first place).
In one of the chapters, a Jacobian matrix of partial derivatives of the Lotka-Volterra model at equilibrium densities is given:
\begin{bmatrix}-&-&0&0\\+&0&-&0\\0&+&0&-\\0&0&+&0\end{bmatrix}
where + and – represent positive or negative values.
This matrix is later interpreted in terms of the direction of trophic links in a food web, but the reasoning that allows one to jump from one to the other is not detailed.
So what is the biological meaning of these values?
Answer: The Jacobian tells how the system changes along different state variables (which can be, for instance, the concentrations of the predator and the prey).
The Jacobian matrix by itself doesn't give you a lot of intuitive information. However, the eigenvalues of the Jacobian matrix at the equilibrium point tell you the nature of the steady state. For example, if all the eigenvalues are negative (more precisely, have negative real parts) then the system is stable at that point and the equilibrium point is called a stable node. If some of them are positive and some are negative then the system is metastable at that point and the point is called a saddle point. If all eigenvalues are positive then the point is unstable. If the eigenvalues have a nonzero imaginary part then the system exhibits oscillations. All these properties are local, i.e. they just tell you the behaviour of the system near the equilibrium point.
I hope I am clear. This is not, in a strict sense, a biological question. If you want to know more then read a book on nonlinear dynamics. Nonlinear Dynamics and Chaos by Steven Strogatz is a good place to start.
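To make the eigenvalue criterion above concrete, here is a stdlib-only Python sketch for the classic two-species Lotka-Volterra model (all parameter values are invented for illustration): at the coexistence equilibrium the eigenvalues come out purely imaginary, which corresponds to oscillations around the equilibrium.

```python
import cmath

# Classic predator-prey Lotka-Volterra: dx/dt = a*x - b*x*y, dy/dt = d*x*y - g*y
# (parameter values chosen only for illustration)
a, b, d, g = 1.0, 0.5, 0.2, 0.8

# Coexistence equilibrium (x*, y*) = (g/d, a/b)
x_eq, y_eq = g / d, a / b

# Jacobian of the right-hand side, evaluated at the equilibrium
J = [[a - b * y_eq, -b * x_eq],
     [d * y_eq, d * x_eq - g]]

# Eigenvalues of a 2x2 matrix from its trace and determinant
tr = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

print(lam1, lam2)  # purely imaginary: the system oscillates around the equilibrium
```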
EDIT
When the system is locally linear (linearized at a given point) then the system dynamics at that point can be described as
$$\frac{dx}{dt}=J.x$$
Where $x$ is the state vector (a vector of the variables in the system) and $J$ is the Jacobian matrix. In this situation, the sign of each entry denotes the effect of one variable on another. However, I am not sure (I feel it is highly unlikely) if this can be used to make any inferences about the food web. | {
"domain": "biology.stackexchange",
"id": 9700,
"tags": "theoretical-biology, community-ecology"
} |
CodeFights: Snake Game | Question: Description
Your task is to imitate a turn-based variation of the popular "Snake" game.
You are given the initial configuration of the board and a list of commands which the snake follows one-by-one. The game ends if one of the following happens:
the snake tries to eat its tail
the snake tries to move out of the board
it executes all the given commands.
Output the board configuration after the game ends, with the snake replaced by 'X' if the game ends abnormally.
Examples
input:
snakeGame([['.', '.', '.', '.'],
['.', '.', '<', '*'],
['.', '.', '.', '*']],
"FFFFFRFFRRLLF")
output:
[['.', '.', '.', '.'],
['X', 'X', 'X', '.'],
['.', '.', '.', '.']]
input:
snakeGame([['.', '.', '^', '.', '.'],
['.', '.', '*', '*', '.'],
['.', '.', '.', '*', '*']],
"RFRF")
output:
[['.', '.', 'X', 'X', '.'],
['.', '.', 'X', 'X', '.'],
['.', '.', '.', 'X', '.']]
input:
snakeGame([['.', '.', '*', '>', '.'],
['.', '*', '*', '.', '.'],
['.', '.', '.', '.', '.']],
"FRFFRFFRFLFF")
output:
[['.', '.', '.', '.', '.'],
['<', '*', '*', '.', '.'],
['.', '.', '*', '.', '.']]
Code
def snakeGame(gameBoard, commands):
heads = {'v': (1, 0), '^': (-1, 0), '<': (0, -1), '>': (0, 1)}
new_direc = {'v': {'L': '>', 'R': '<'},
'^': {'L': '<', 'R': '>'},
'<': {'L': 'v', 'R': '^'},
'>': {'L': '^', 'R': 'v'}}
def find_snake():
def get_next(x, y):
for key, direc in heads.items():
new_x, new_y = x + direc[0], y + direc[1]
if new_x in range(len(gameBoard)) and new_y in range(len(gameBoard[0])):
if (new_x, new_y) not in snake:
if gameBoard[new_x][new_y] == '*':
return (new_x, new_y)
# Get the head and the next body opposite of snake's direction
snake = []
for i, row in enumerate(gameBoard):
for head in heads:
if head in row:
snake.append((i, row.index(head)))
snake.append((snake[0][0] + heads[head][0] * -1, snake[0][1] + heads[head][1] * -1))
# Append the rest of the body
while True:
n = get_next(snake[-1][0], snake[-1][1])
if n is None:
break
snake.append(n)
return snake
def move_snake(snake):
head = gameBoard[snake[0][0]][snake[0][1]]
new_x, new_y = snake[0][0] + heads[head][0], snake[0][1] + heads[head][1]
new_snake = []
if new_x in range(len(gameBoard)) and new_y in range(len(gameBoard[0])) and (new_x, new_y) not in snake:
new_snake.append((new_x, new_y))
for pos in snake[:-1]:
new_snake.append(pos)
return new_snake
# Find starting snake
snake = find_snake()
for command in commands:
if command in "LR":
# Change the head
gameBoard[snake[0][0]][snake[0][1]] = new_direc[gameBoard[snake[0][0]][snake[0][1]]][command]
else:
temp = move_snake(snake)
# if not valid move return dead snake
if temp is None:
for pos in snake:
x, y = pos
gameBoard[x][y] = 'X'
return gameBoard
# else move snake
for a, b in zip(snake, temp):
gameBoard[b[0]][b[1]] = gameBoard[a[0]][a[1]]
gameBoard[snake[-1][0]][snake[-1][1]] = '.'
snake = temp
return gameBoard
Answer: Your code seems to work and is split into small functions. Here are a few things to make it easier to understand.
Try to avoid nested functions
Nested functions can be really useful but in your case, it makes the code hard to understand because it is nested on many levels.
Also, because some parameters are not explicitly provided but somehow used as part of the functions context, it is hard to tell what is used/not used, input/output, updated/unchanged.
This is fairly easy to change to make the code more linear with the different functions defined one after the other (in order to do so, you need to add parameters in the function signatures and in the function calls). Also, you could take this chance to extract a few constants (and name them in UPPERCASE):
EMPTY = '.'
BODY = '*'
DEAD = 'X'
HEADS = {'v': (1, 0), '^': (-1, 0), '<': (0, -1), '>': (0, 1)}
def get_next(x, y, snake, gameBoard):
for key, direc in HEADS.items():
new_x, new_y = x + direc[0], y + direc[1]
if new_x in range(len(gameBoard)) and new_y in range(len(gameBoard[0])):
if (new_x, new_y) not in snake:
if gameBoard[new_x][new_y] == BODY:
return (new_x, new_y)
def find_snake(gameBoard):
# Get the head and the next body opposite of snake's direction
snake = []
for i, row in enumerate(gameBoard):
for head in HEADS:
if head in row:
snake.append((i, row.index(head)))
snake.append((snake[0][0] + HEADS[head][0] * -1, snake[0][1] + HEADS[head][1] * -1))
# Append the rest of the body
while True:
n = get_next(snake[-1][0], snake[-1][1], snake, gameBoard)
if n is None:
break
snake.append(n)
return snake
def move_snake(snake, gameBoard):
head = gameBoard[snake[0][0]][snake[0][1]]
new_x, new_y = snake[0][0] + HEADS[head][0], snake[0][1] + HEADS[head][1]
new_snake = []
if new_x in range(len(gameBoard)) and new_y in range(len(gameBoard[0])) and (new_x, new_y) not in snake:
new_snake.append((new_x, new_y))
for pos in snake[:-1]:
new_snake.append(pos)
return new_snake
def snakeGame(gameBoard, commands):
new_direc = {'v': {'L': '>', 'R': '<'},
'^': {'L': '<', 'R': '>'},
'<': {'L': 'v', 'R': '^'},
'>': {'L': '^', 'R': 'v'}}
# Find starting snake
snake = find_snake(gameBoard)
for command in commands:
if command in "LR":
# Change the head
gameBoard[snake[0][0]][snake[0][1]] = new_direc[gameBoard[snake[0][0]][snake[0][1]]][command]
else:
temp = move_snake(snake, gameBoard)
# if not valid move return dead snake
if temp is None:
for pos in snake:
x, y = pos
gameBoard[x][y] = DEAD
return gameBoard
# else move snake
for a, b in zip(snake, temp):
gameBoard[b[0]][b[1]] = gameBoard[a[0]][a[1]]
gameBoard[snake[-1][0]][snake[-1][1]] = EMPTY
snake = temp
return gameBoard
def test_snake(gameBoard, commands, expected):
out = snakeGame(gameBoard, commands)
if out != expected:
print(out, commands, expected)
test_snake([['.', '.', '.', '.'],
['.', '.', '<', '*'],
['.', '.', '.', '*']],
"FFFFFRFFRRLLF",
[['.', '.', '.', '.'],
['X', 'X', 'X', '.'],
['.', '.', '.', '.']])
test_snake([['.', '.', '^', '.', '.'],
['.', '.', '*', '*', '.'],
['.', '.', '.', '*', '*']],
"RFRF",
[['.', '.', 'X', 'X', '.'],
['.', '.', 'X', 'X', '.'],
['.', '.', '.', 'X', '.']])
test_snake([['.', '.', '*', '>', '.'],
['.', '*', '*', '.', '.'],
['.', '.', '.', '.', '.']],
"FRFFRFFRFLFF",
[['.', '.', '.', '.', '.'],
['<', '*', '*', '.', '.'],
['.', '.', '*', '.', '.']])
Getting rid of indices (almost) everywhere
In your code, element access (foo[i]) is used everywhere:
to get/set a particular cell of the grid
to get the x (or y) part of coordinates
to get the direction from a particular "head" character
It may make things confusing but you can use a few techniques to get rid of it:
When you iterate over a dictionary, you can choose whether you are interested in keys, values or both. Then you don't need to get my_dict[my_key].
You can use iterator unpacking when you know the number of elements in it. This is very convenient when you handle coordinates: x, y = my_coor. Also, this can be used in the for syntax: for head, (dx, dy) in HEADS.items().
Instead of using list access to get the element you've just added to the (empty) list, you can use temporary variables:
snake.append((i, row.index(head)))
snake.append((snake[0][0] + heads[head][0] * -1, snake[0][1]
becomes
beginx, beginy = i, row.index(head)
snake.append((beginx, beginy))
snake.append((beginx - dx, beginy - dy))
It "costs" an additional line but it makes things clearer to me. (Also I took this chance to replace a + b * -1 by a - b).
Combining all these techniques, you'd get something like:
EMPTY = '.'
BODY = '*'
DEAD = 'X'
HEADS = {'v': (1, 0), '^': (-1, 0), '<': (0, -1), '>': (0, 1)}
def get_next(x, y, snake, gameBoard):
for (dx, dy) in HEADS.values():
new_x, new_y = x + dx, y + dy
if new_x in range(len(gameBoard)) and \
new_y in range(len(gameBoard[0])) and \
(new_x, new_y) not in snake and \
gameBoard[new_x][new_y] == BODY:
return (new_x, new_y)
def find_snake(gameBoard):
# Get the head and the next body opposite of snake's direction
snake = []
for i, row in enumerate(gameBoard):
for head, (dx, dy) in HEADS.items():
if head in row:
beginx, beginy = i, row.index(head)
snake.append((beginx, beginy))
snake.append((beginx - dx, beginy - dy))
# Append the rest of the body
while True:
n = get_next(snake[-1][0], snake[-1][1], snake, gameBoard)
if n is None:
break
snake.append(n)
return snake
def move_snake(snake, gameBoard):
headx, heady = snake[0]
head = gameBoard[headx][heady]
dx, dy = HEADS[head]
new_x, new_y = headx + dx, heady + dy
new_coord = new_x, new_y
new_snake = []
if new_x in range(len(gameBoard)) and \
new_y in range(len(gameBoard[0])) and \
new_coord not in snake:
new_snake.append(new_coord)
for pos in snake[:-1]:
new_snake.append(pos)
return new_snake
def snakeGame(gameBoard, commands):
new_direc = {'v': {'L': '>', 'R': '<'},
'^': {'L': '<', 'R': '>'},
'<': {'L': 'v', 'R': '^'},
'>': {'L': '^', 'R': 'v'}}
# Find starting snake
snake = find_snake(gameBoard)
for command in commands:
if command in "LR":
# Change the head
headx, heady = snake[0]
gameBoard[headx][heady] = new_direc[gameBoard[headx][heady]][command]
else:
temp = move_snake(snake, gameBoard)
# if not valid move return dead snake
if temp is None:
for (x, y) in snake:
gameBoard[x][y] = DEAD
return gameBoard
# else move snake
for a, b in zip(snake, temp):
gameBoard[b[0]][b[1]] = gameBoard[a[0]][a[1]]
tailx, taily = snake[-1]
gameBoard[tailx][taily] = EMPTY
snake = temp
return gameBoard
Other code simplifications
Now that we provide the full snake to the get_next function, it doesn't really need the x and y values passed in separately.
def get_next(snake, gameBoard):
x, y = snake[-1]
....
In the move_snake function, you write a loop to add elements from snake[:-1]. You could use a simple + operation on lists.
Also, it is clearer to add return None at the end of a function whose return value is used. This is one of the more recent additions to PEP 8:
Be consistent in return statements. Either all return statements in a
function should return an expression, or none of them should. If any
return statement returns an expression, any return statements where no
value is returned should explicitly state this as return None, and an
explicit return statement should be present at the end of the function
(if reachable).
Then, the whole function becomes:
def move_snake(snake, gameBoard):
headx, heady = snake[0]
head = gameBoard[headx][heady]
dx, dy = HEADS[head]
new_x, new_y = headx + dx, heady + dy
new_coord = new_x, new_y
if new_x in range(len(gameBoard)) and \
new_y in range(len(gameBoard[0])) and \
new_coord not in snake:
return [new_coord] + snake[:-1]
return None
Other ideas
At the moment, we have the snake represented in two ways: as a list of body parts (which makes sense given how convenient it is to make the snake go forward) and as a drawing on a grid (which makes sense because it is our input and our output).
Also, for every step forward, we need to update both representations. Maybe, it would make sense to keep the listy-snake (and other data you might need) during the computations of the different steps and only generate the output grid at the very end.
As an exercise, I've tried to do this. I went for an object approach to store the different data needed and interact easily with them. I've tried to keep as much as possible from your code:
EMPTY = '.'
BODY = '*'
DEAD = 'X'
HEADS = {'v': (1, 0), '^': (-1, 0), '<': (0, -1), '>': (0, 1)}
class Snake(object):
# Heads and body parts are (x, y, char)
def __init__(self, head, dimx, dimy):
self.dimx = dimx
self.dimy = dimy
self.body = [head]
self.alive = True
def add_queue(self, body_part):
self.body.append(body_part)
def turn(self, direc):
new_direc = {'v': {'L': '>', 'R': '<'},
'^': {'L': '<', 'R': '>'},
'<': {'L': 'v', 'R': '^'},
'>': {'L': '^', 'R': 'v'}}
x, y, head = self.body[0]
new_head = new_direc[head][direc]
self.body[0] = (x, y, new_head)
def move_forward(self):
x, y, char = self.body[0]
dx, dy = HEADS[char]
new_x, new_y = x + dx, y + dy
if self.position_is_free(new_x, new_y):
self.body = [(new_x, new_y, char)] + [(x, y, BODY)] + self.body[1:-1]
else:
self.die()
def position_is_free(self, x, y):
return x in range(self.dimx) and \
y in range(self.dimy) and \
not any(x == x2 and y == y2 for (x2, y2, _) in self.body)
def die(self):
self.alive = False
self.body = [(x, y, DEAD) for (x, y, _) in self.body]
def get_as_grid(self):
g = [[EMPTY for i in range(self.dimy)] for j in range(self.dimx)]
for x, y, c in self.body:
g[x][y] = c
return g
def find_head(gameBoard):
for i, row in enumerate(gameBoard):
for head, (dx, dy) in HEADS.items():
if head in row:
return Snake((i, row.index(head), head), len(gameBoard), len(gameBoard[0]))
def get_next(snake, gameBoard):
x, y, _ = snake.body[-1]
for (dx, dy) in HEADS.values():
new_x, new_y = x + dx, y + dy
if snake.position_is_free(new_x, new_y) and \
gameBoard[new_x][new_y] == BODY:
return (new_x, new_y, BODY)
def find_snake(gameBoard):
# Get the head
s = find_head(gameBoard)
# Append the rest of the body
while True:
n = get_next(s, gameBoard)
if n is None:
break
s.add_queue(n)
return s
def snakeGame(gameBoard, commands):
# Find snake
s = find_snake(gameBoard)
for command in commands:
if command in "LR":
# Change the head
s.turn(command)
else:
s.move_forward()
if not s.alive:
break
return s.get_as_grid()
Something I had forgotten
if new_x in range(len(gameBoard)) can be rewritten as if 0 <= new_x < len(gameBoard).
On Python 2, the rewritten form is much faster, because range() builds a list and the in test then performs a linear search on it.
On Python 3, the performance impact is much smaller, since range objects support constant-time membership tests for integers. | {
"domain": "codereview.stackexchange",
"id": 28627,
"tags": "python, programming-challenge, python-3.x, snake-game"
} |
Uniformly random efficient sampling of shortest s-t paths, with optimal random bits | Question: Motivated by Efficiently sampling shortest s-t paths uniformly and independently at random,
The answers give methods of randomly sampling shortest $s\text{-}t$ paths. However, they use a lot of seemingly unnecessary random bits.
My question is:
Can the solution be improved to use a single random number in the interval $[0,w(t))$, where $w(t)$ is the total number of shortest $s\text{-}t$ paths?
Alternatively, can the solution be improved to use $\left\lceil \log_2 w(t)\right\rceil $ random bits?
Answer: D. W. computes for each node $v \in S$ the number of paths $n(v)$ from $v$ to $t$. Using this information, it is easy to decode a number in the range $[0,n(s))$ to a unique path from $s$ to $t$ in $S$ (and so in the original graph). More generally, at each node $v$ we will come up with a procedure to map $[0,n(v))$ to a unique path from $v$ to $t$. Let the children of $v$ be $v_1,\ldots,v_k$. The idea is to write
$$ [0,n(v)) = [0,n(v_1)) \cup [n(v_1),n(v_1)+n(v_2)) \cup \cdots [n(v_1)+\cdots + n(v_{k-1}), n(v_1)+\cdots+n(v_k)), $$
and use the $i$th component to encode paths going through $v_i$. By subtracting $n(v_1) + \cdots + n(v_{i-1})$, we then reduce the problem to decoding a number in $[0,n(v_i))$ to a unique path from $v_i$ to $t$. Pseudocode left to the reader. | {
"domain": "cs.stackexchange",
"id": 2011,
"tags": "graphs, shortest-path, sampling, randomness"
} |
About the fifth gamma matrix | Question: How can one prove that
$$\gamma^5=\frac{i}{4!}\varepsilon_{\mu\nu\alpha\beta}\gamma^{\mu}\gamma^{\nu}\gamma^{\alpha}\gamma^{\beta}$$
from the following:
$$\gamma^5:=i\gamma^0\gamma^1\gamma^2\gamma^3=
\begin{pmatrix}
0&0&1&0\\
0&0&0&1\\
1&0&0&0\\
0&1&0&0\\
\end{pmatrix}$$
Answer: You will also need to know the anti-commutation relation of the gamma matrices:
$$\{\gamma^\mu, \gamma^\nu \} = \gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu = 2 \eta^{\mu \nu} I_4$$
Since the metric is symmetric in its indices while $\varepsilon_{\mu\nu\alpha\beta}$ is totally antisymmetric, the $\eta^{\mu\nu}$ term will not contribute to the sum.
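As a numerical sanity check (not part of the original answer), the identity can be verified directly in the Dirac representation with plain Python; this sketch assumes the convention $\varepsilon_{0123}=+1$, which is what the stated identity requires.

```python
import itertools

def mul(A, B):
    """Product of two 4x4 complex matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def eps(p):
    """Levi-Civita symbol (convention eps_{0123} = +1) as a permutation parity."""
    s = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    return s

# Gamma matrices in the Dirac representation
g = [
    [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]],      # gamma^0
    [[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]],      # gamma^1
    [[0, 0, 0, -1j], [0, 0, 1j, 0], [0, 1j, 0, 0], [-1j, 0, 0, 0]],  # gamma^2
    [[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]],      # gamma^3
]

# Left-hand side: gamma^5 = i gamma^0 gamma^1 gamma^2 gamma^3
g5 = [[1j * x for x in row] for row in mul(mul(g[0], g[1]), mul(g[2], g[3]))]

# Right-hand side: (i/4!) eps_{mu nu alpha beta} gamma^mu gamma^nu gamma^alpha gamma^beta
rhs = [[0j] * 4 for _ in range(4)]
for p in itertools.permutations(range(4)):
    term = mul(mul(g[p[0]], g[p[1]]), mul(g[p[2]], g[p[3]]))
    for i in range(4):
        for j in range(4):
            rhs[i][j] += eps(p) * term[i][j]
rhs = [[1j / 24 * x for x in row] for row in rhs]

ok = all(abs(g5[i][j] - rhs[i][j]) < 1e-12 for i in range(4) for j in range(4))
print(ok)  # True
```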
Because the matrices anticommute, swapping two gamma matrices introduces a sign flip. This is exactly what $\epsilon_{\mu\nu\alpha\beta}$ does, and so every term contributes the same amount (cancelling the $1/4!$). | {
"domain": "physics.stackexchange",
"id": 38763,
"tags": "dirac-matrices"
} |
convert from int32 to int | Question:
How to compare int32 with int or convert int32 to int?
The following does not seem to work:
Int = (int) Int32;
Originally posted by chenchengshao@gmail.com on ROS Answers with karma: 11 on 2016-01-20
Post score: 1
Original comments
Comment by ahendrix on 2016-01-20:
This forum is primarily for questions about ROS (Robot Operating System), and your question does not appear to be about ROS. You will probably get a better answer by asking on a general programming site like Stack Overflow.
Answer:
If the Int32 comes from a std_msgs message, you can access the value with .data:
int var = yourMsg.data;
here is the documentation : http://docs.ros.org/api/std_msgs/html/msg/Int32.html
Originally posted by F.Brosseau with karma: 379 on 2016-01-21
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 23504,
"tags": "ros"
} |
Mass-energy equivalence and gravitational potential energy | Question: If mass and energy are equivalent, and if gravitational potential energy is energy, why doesn't an object have more mass when it is at a higher altitude? Does the mass-energy equivalence work for kinetic energy only?
Answer: An object doesn't increase mass when its energy increases. The rest mass is the rest mass. Even with kinetic energy. If an object is moving, its mass doesn't increase. You can associate a mass to the system, however. So let's say two particles are vibrating in a molecule, each with mass $m_0$, then the molecule's mass wouldn't just be $2m_0$, you have to take into consideration that the particles have energy. The same is with gravitational potential energy. The object's mass doesn't increase when it goes up similar to how it doesn't increase when it moves faster.
That said, let's find out how such a mass would work, similar to how we assumed the molecule isn't moving but its particles are. Consider a large box with a vertical extent of 1 km. If there is an object at the top of the box, it has higher potential energy, so you could say the box has more mass than if the object were at the bottom of the box. | {
"domain": "physics.stackexchange",
"id": 89178,
"tags": "general-relativity, energy, mass, potential-energy, mass-energy"
} |
What's the purpose of statistical analysis (statistically important features) vs feature elimination in machine learning | Question: I am developing a classification model for covid19 symptoms (after being ill) and I don't fully understand the importance of the statistical analysis (some parts of it).
1 Firstly:
Basically we perform statistical analysis to learn about the data. But what's the purpose of computing the mean and standard deviation, as shown here:
https://www.sciencedirect.com/science/article/pii/S0010482522000762#bib27
What insight will it give me?
2 Moreover:
They perform statistical tests like Chi-Square to find the statistically significant features. Suppose they have around 15 "blood parameters" and the tests tell them that only 10 of them are statistically important. Does it mean those 5 won't be used in the training and can be removed?
3 If they can be removed:
Would feature elimination prove the same? Suppose we used Recursive Feature Elimination / Random forest with 10-best features. Would results be the same?
Answer: Though not in the details, it looks like they took some of the continuous variables, ranked them, and then used Chi-square to determine feature set. No explanation given as to why they did that. Also regarding the features not found significant. You can certainly uses them in model. chi-square is a weak test, and there may be interactions found in the model which are meaningful.
In any case, the statistical tests were exploratory; they were not used for inference directly. It is always good practice to compute basic descriptive statistics before approaching any ML. For example, they could not have performed the missing-value imputation without first seeing how many missing values there were. Also note that the MVC variable has overlapping confidence intervals between COVID and non-COVID responses, which is sometimes a signal that there is no significant difference due to that variable.
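A minimal stdlib-only sketch of the kind of descriptive check described above (all numbers are invented for illustration, not taken from the paper): compute mean, standard deviation, and a rough 95% confidence interval per group, then see whether the intervals overlap.

```python
import statistics as st

# Invented example values for one blood parameter in two groups (not real data)
covid     = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7]
non_covid = [5.0, 5.4, 4.9, 5.2, 5.1, 4.8, 5.3, 5.0]

def describe(xs):
    m, s = st.mean(xs), st.stdev(xs)
    half = 1.96 * s / len(xs) ** 0.5          # rough 95% CI half-width
    return m, s, (m - half, m + half)

(m1, s1, ci1), (m2, s2, ci2) = describe(covid), describe(non_covid)
overlap = ci1[0] <= ci2[1] and ci2[0] <= ci1[1]
print(m1, m2, overlap)  # overlapping CIs hint the groups may not differ on this feature
```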
They selected four features: white blood cell count (WBC), monocyte count (MOT), age, and lymphocyte count (LYT), ran them through 8 machine learning algorithms for classification, and used a stacked ML model. | {
"domain": "datascience.stackexchange",
"id": 10794,
"tags": "machine-learning, classification, statistics"
} |
ROS navigation slam example | Question:
Hi,
Is there any navigation slam example(c,c++) with hokuyo node (only usb port,no serial port),
with rviz(show map,and setting goal),map(not static map)?
If it also includes gazebo and pictures(laser range finder location,robot),tutorials will be great~
If it totally follows the navigation tutorials will be the best!
Thank you~
Originally posted by sam on ROS Answers with karma: 2570 on 2011-06-07
Post score: 0
Answer:
I had a look at http://www.hessmer.org/blog/2011/04/24/using-the-ros-navigation-stack/ for a real robot that was set up using the navigation stack. I have also got the navigation stack working with the hokuyo_node and the SLAM package; if you need more information I'll post more here, like .launch files and such. Dr. Hessmer's blog is useful for understanding the navigation package as well, since he provides all of his code; just make sure to check out the right revision. If you are not using an Arduino then you will have to change how you get your odometry and the tf for odom, but other than that it's pretty straightforward. I also had issues following the tutorials and setting up the nav stack, and this helped a lot. It does not, however, integrate Gazebo. It does follow the navigation tutorials very well, though.
Enjoy,
-Drew
Originally posted by D_mangus with karma: 121 on 2011-06-08
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 5780,
"tags": "slam, navigation, hokuyo"
} |
Cleanly passing in a large number of mutable parameters through a python class | Question: I want to create two classes, one for attributes and one for functional behaviors. The point of the attributes class was to hold all of the attributes along with property methods (setters/getters), and I wanted to split the classes up this way so that everything related to the attributes would not fill up the entire other class.
Due to the way python handles mutable default arguments, I have to handle the attributes using None for default arguments, but I have a very large number of parameters. This has caused the code to become unnecessarily long when passing arguments around on the initialization:
class Attributes:
def __init__(self,
arg1,
arg2,
arg3,
arg4,
arg5):
self.arg1 = default_arg1 if arg1 is None else arg1
self.arg2 = default_arg2 if arg2 is None else arg2
self.arg3 = default_arg3 if arg3 is None else arg3
self.arg4 = default_arg4 if arg4 is None else arg4
self.arg5 = default_arg5 if arg5 is None else arg5
# attribute methods like getters and setters
class Functionality(Attributes):
def __init__(self,
arg1 = None,
arg2 = None,
arg3 = None,
arg4 = None,
arg5 = None):
super(Functionality, self).__init__(
arg1,
arg2,
arg3,
arg4,
arg5
)
# Methods that give functionality to the class
I want to be able to use this class as follows:
example_obj = Functionality(
arg1 = example_arg1,
arg4 = example_arg4
)
Is there any cleaner way to do this? I want to add more attributes (more than 20 attributes instead of 5), but the above code involves writing basically the same thing way too many times.
Answer: I'm going to address your existing code in a manner that generally goes from most minimal to most significant changes. I'd strongly recommend actually using the solution at the very bottom of this post; the minimal changes are more useful:
From a historical (pre-dataclasses) perspective
For understanding how classes and inheritance work
but there's no reason to torture yourself hand-writing all of this when dataclasses will do the work for you.
Delegating to defaults of super class
When you're inheriting __init__ from another class, you typically don't want to reproduce their parameters explicitly; it creates too much confusion and too much interdependency (making more opportunities for code to get out of sync). There are two standard solutions here:
Option 1: Explicitly delegate via **kwargs
When Functionality doesn't make direct use of any parameter, don't accept it by name. Just accept **kwargs (when you're talking about this many defaulted parameters, no one should be passing arguments positionally anyway; it's a nightmare for readability/maintainability) and pass it along. So Functionality would look like:
class Functionality(Attributes):
def __init__(self, **kwargs):
# Other stuff required for initialization
super().__init__(**kwargs) # Python 3.x doesn't require you to pass args to super in most cases
# Other stuff required for initialization
# Methods that give functionality to the class
Option 2: Inherit __init__ implicitly
In this case, it might be even simpler though; the __init__ of Functionality doesn't do anything beyond delegate to Attributes. If your real code is the same (nothing but a super().__init__ call), you'd just omit the definition of __init__ on Functionality and let it inherit Attributes's __init__ directly.
class Functionality(Attributes):
# No __init__ defined at all; uses Attributes.__init__ automatically
# Methods that give functionality to the class
Either way, you're no longer using separate defaults for each class.
Note: If the parent class must not accept default arguments, but the child class should, see the end of this answer for Preferred solution (especially if the child must not have defaults, and the parent class should): dataclasses everywhere (not described here since it relies on techniques for solving your mutable defaults problem, which I haven't gotten to yet).
Avoid problems with mutable defaults
There are two simple solutions for avoiding the problems with mutable defaults.
Option 1: Deep copy unconditionally
For the defaulted class itself, go ahead and use mutable arguments as defaults, but the safe way, making deep copies. This is often safer even when not passed as defaults, since without copying, you'd be aliasing values from the caller, and changes made by either you or the caller would affect the other.
Adding explicit mutable defaults, you end up with:
from copy import deepcopy
class Attributes:
def __init__(self, arg1=[], arg2={}, arg3=set(), arg4=MyMutable(), arg5=OtherMutable()):
self.arg1 = deepcopy(arg1)
self.arg2 = deepcopy(arg2)
self.arg3 = deepcopy(arg3)
self.arg4 = deepcopy(arg4)
self.arg5 = deepcopy(arg5)
# attribute methods like getters and setters
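A quick check (a hypothetical minimal version, trimmed to one field) that the deepcopy approach sidesteps the classic shared-mutable-default trap:

```python
from copy import deepcopy

class Attributes:
    def __init__(self, arg1=[]):   # mutable default is safe here because we copy it
        self.arg1 = deepcopy(arg1)

a, b = Attributes(), Attributes()
a.arg1.append(1)
print(a.arg1, b.arg1)  # [1] [] -- the instances do not share the default list
```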
Option 2: Let dataclasses do your work for you
As an alternative to hand-writing all of Attributes, I'd suggest making it a dataclass, which means you don't need to repeat the names, and allows you to define default_factorys for each field to generate default values on demand. That would allow you to write a simpler Attributes (with far less name repetition) matching what I wrote above like so:
from dataclasses import dataclass, field
@dataclass
class Attributes:
arg1: list = field(default_factory=list)
arg2: dict = field(default_factory=dict)
arg3: set = field(default_factory=set)
arg4: MyMutable = field(default_factory=MyMutable)
arg5: OtherMutable = field(default_factory=OtherMutable)
# attribute methods like getters and setters
and it will generate __init__ (as well as __repr__ and __eq__ for good measure; you can turn them off, or other features on, by parameterizing the @dataclass decorator) for you, including automatically calling your default_factory to initialize the field if and only if the caller did not provide the parameter.
Note: Unlike the other solution, this doesn't ensure caller-provided arguments are copied, so the risk of aliasing would remain. You could always define a __post_init__ to copy any fields you think this is likely to be a problem for.
Side-note: For my example, I just annotated the types as list, dict, and set; to annotate properly, you'd usually use the typing classes (List, Dict, Set) and annotate with the types the containers are expected to hold, e.g. arg1: List[int] = field(default_factory=list) if arg1 is expected to be a list of ints.
Preferred solution (especially if the child must not have defaults, and the parent class should): dataclasses everywhere
In your example, Attributes() fails for lack of arguments, while Functionality() does not (it substitutes defaults); dataclasses still cover that case adequately. You'd define both parent and child as dataclasses, but not provide defaults for the fields on the parent:
from dataclasses import dataclass, field
@dataclass
class Attributes:
arg1: list
arg2: dict
arg3: set
arg4: MyMutable
arg5: OtherMutable
# Optionally define __post_init__ to modify arguments, perform other work
# attribute methods like getters and setters
@dataclass
class Functionality(Attributes):
arg1: list = field(default_factory=list)
arg2: dict = field(default_factory=dict)
arg3: set = field(default_factory=set)
arg4: MyMutable = field(default_factory=MyMutable)
arg5: OtherMutable = field(default_factory=OtherMutable)
# Optionally define __post_init__ to modify arguments, perform other work
# Methods that give functionality to the class
Since no defaults are set on Attributes, direct use of Attributes will require all arguments to be provided. But dataclasses are helpful again here; the fields of a child class of a dataclass are:
All of the fields of the parent(s)
Plus all of the fields of the child
When a given field is defined in both, the child's definition wins
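A trimmed, runnable check of those three rules (two fields instead of five, since MyMutable and OtherMutable aren't defined here):

```python
from dataclasses import dataclass, field

@dataclass
class Attributes:           # parent: no defaults, so nothing may be omitted
    arg1: list
    arg2: dict

@dataclass
class Functionality(Attributes):   # child redefinitions win, adding defaults
    arg1: list = field(default_factory=list)
    arg2: dict = field(default_factory=dict)

f = Functionality()                 # fine: the defaults kick in
print(f)                            # Functionality(arg1=[], arg2={})
try:
    Attributes()                    # parent still demands every argument
except TypeError:
    print("Attributes() raised TypeError")
```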
So we can provide default_factorys solely on the child (and get defaulting behavior), but not on the parent (so it never defaults). | {
"domain": "codereview.stackexchange",
"id": 39835,
"tags": "python, python-3.x, object-oriented, template, classes"
} |
A confusion from Weinberg's QFT text (a vanishing term in Lippmann-Schwinger equation) | Question: I was reviewing the first few chapters of Weinberg Vol I and found a hole in my understanding on page 112, where he tried to show that in the asymptotic past $t=-\infty$, the in states coincide with a free state. In particular, he argued that the integral
$$\tag{1} \int d\alpha\frac{e^{-iE_{\alpha}t}g(\alpha)T_{\beta\alpha}^+\Phi_\beta} {E_\alpha-E_\beta+i\epsilon}$$
would vanish, where $d\alpha=d^3\mathbf{p}$ (also involves discrete indices like spin, but of no relevance here). In his argument, he used a contour integration in the complex $E_{\alpha}$ plane, in which the integral of central interest is the integration along real line
$$\tag{2} \int_{-\infty}^\infty dE_\alpha\frac{e^{-iE_{\alpha}t}g(\alpha)T_{\beta\alpha}^+\Phi_\beta} {E_\alpha-E_\beta+i\epsilon}$$
I don't see how to obtain (2) from (1), since the lower bound of energy is the rest mass; in the best case I could get something like $\int_{m}^\infty dE_\alpha\cdots$, but how could one extend this onto the whole real line?
Answer: 1) OP is basically wondering how Weinberg on the middle of p. 112 can extend the integration region from$^1$
$${\cal J}^{\pm}_{\beta}~=~ \int_{m_{\alpha}}^{\infty} \!dE_{\alpha}\frac{e^{-iE_{\alpha}t}g(E_{\alpha})T_{\beta\alpha}^{\pm}} {E_{\alpha}-E_{\beta}\pm i0^{+}}$$
to include the negative real axis
$${\cal J}^{\pm}_{\beta}~=~ \int_{-\infty}^{\infty} \!dE_{\alpha}\frac{e^{-iE_{\alpha}t}g(E_{\alpha})T_{\beta\alpha}^{\pm}} {E_{\alpha}-E_{\beta}\pm i0^{+}},$$
where $g:E_{\alpha}\mapsto g(E_{\alpha})$ is a meromorphic function?
2) That Weinberg (implicitly) assumes meromorphicity of the $g:D\subseteq \mathbb{C}\to \mathbb{C}$ function can be deduced further down on p. 112, where he writes that
[...] we can close the contour of integration for the integration variable $E_{\alpha}$ [...],
which is a clear reference to the residue theorem, which in turn assumes meromorphicity. Also Weinberg writes on the same page$^1$
[...] The functions $g(E_{\alpha})$ and $T_{\beta\alpha}^{\pm}$ may, in general, be expected to have some singularities at values of $E_{\alpha}$
with finite [...] imaginary parts [...]
So there is little doubt that Weinberg assumes meromorphicity of $g$.
3) On the other hand, on the bottom of p. 109, Weinberg writes$^1$
[...]Therefore, we must consider wave-packets, superpositions $\int\! dE_{\alpha}~g(E_{\alpha})\Psi_{\alpha}$ of states, with an amplitude $g(E_{\alpha})$ that is non-zero and smoothly varying over some finite range $\Delta E$ of energies.[...]
Now according to the identity theorem for holomorphic functions, if a function $g:D\subseteq \mathbb{C}\to \mathbb{C}$ is zero on a subset $S\subseteq D$ that has an accumulation point $c$ in the domain $D$, then $g\equiv 0$ is identically zero. However, any interval $I\subseteq \mathbb{R}$ on the real line of non-zero length has accumulation points. So if Weinberg in above quote literally means that $g$ is mathematically zero outside some finite interval $I\subseteq \mathbb{R}$, then $g\equiv 0$ would be identically zero in the whole complex plane.
Of course Weinberg doesn't mean that. He just means that $g$ outside some finite range takes such small values that, to the precision $\epsilon$ we are working to, it doesn't matter whether we include the integration region $\mathbb{R}\backslash I$ or not.
In particular, mathematically speaking, Weinberg has only proven the condition
$$\tag{3.1.12} \int_{m_{\alpha}}^{\infty} \!dE_{\alpha}~
e^{-iE_{\alpha}t} g(E_{\alpha}) \Psi^{\pm}_{\alpha}~\longrightarrow~
\int_{m_{\alpha}}^{\infty} \!dE_{\alpha}~
e^{-iE_{\alpha}t} g(E_{\alpha}) \Phi_{\alpha}
~\text{for}~ t\to\mp\infty \qquad $$
within some precision $\epsilon$. However, the precision $\epsilon$ can be made arbitrarily fine by preparing more and more sharply defined wavepackets $g$.
4) If one would like to have a concrete example of a $g$ function, one may think of a Lorentzian function (aka. Breit–Wigner or Cauchy distribution),
$$g(E_{\alpha}) ~=~ \frac{1}{\pi}\frac{\delta}{(E_{\alpha}-E_0)^2+\delta^2}, \qquad \int_{-\infty}^{\infty}\! dE_{\alpha}~g(E_{\alpha})~=~1,$$
for appropriate choices of constants $E_0$ and $\delta$.
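As a quick numerical sanity check of that normalization (the values of $E_0$ and $\delta$ below are arbitrary illustrative choices):

```python
import math

# Lorentzian (Cauchy/Breit-Wigner) wave-packet profile from point 4).
def g(E, E0, delta):
    return delta / math.pi / ((E - E0) ** 2 + delta ** 2)

E0, delta = 5.0, 0.1
lo, hi, n = E0 - 400 * delta, E0 + 400 * delta, 400_000
h = (hi - lo) / n
# Midpoint rule over a window many half-widths wide.
total = sum(g(lo + (i + 0.5) * h, E0, delta) for i in range(n)) * h
# Exact value over the same window, from the antiderivative arctan((E - E0)/delta)/pi.
exact = (math.atan((hi - E0) / delta) - math.atan((lo - E0) / delta)) / math.pi
print(round(total, 6), round(exact, 6))   # both close to 1; -> 1 as the window grows
```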
5) Finally, one should not lose sight of Weinberg's main goal in Section 3.1, namely to argue the $\pm i0^{+}$ prescription in the Lippmann-Schwinger equations
$$\tag{3.1.17}\Psi^{\pm}_{\alpha}
~=~\Phi_{\alpha}+\int\!d\beta\frac{T_{\beta\alpha}^{\pm}\Phi_{\beta}} {E_{\alpha}-E_{\beta}\pm i0^{+}}.$$
The Lippmann-Schwinger equations (3.1.17) are not an approximation, and they are independent of the choice of wavepacket $g$.
--
$^1$ To simplify the discussion, we have taken the liberty to replace Weinberg's more general $\alpha$-integration with just an $E_{\alpha}$-integration. Here
$$\tag{3.1.4} \int \!d\alpha \cdots \equiv \sum_{n_1\sigma_1n_2\sigma_2\cdots}\int d^3p_1 d^3p_2 \cdots$$
Changing integration variable from $E_{\alpha}$ to momenta does not solve OP's problem, essentially because we still have to pick the branch of the pertinent square root that has positive real part, so that it doesn't bring us any closer in understanding negative energies. | {
"domain": "physics.stackexchange",
"id": 6328,
"tags": "quantum-field-theory, mathematical-physics, integration, analyticity"
} |
What are the primal and dual planes in the context of the point-line duality? | Question: In computational geometry, we can define a duality between points and lines. The line is the primal (or dual) object of a point, or a point is the primal (or dual) object of a line. However, the exact definition of the primal and dual planes is often not given in the literature. It's a little bit confusing which one is which. What are exactly the primal and dual planes in this context?
Answer: In short, the original $(x, y)$-plane is the primal plane and the new $(a, b)$-plane the dual plane.
More specifically, in a $(x, y)$-plane (or coordinate system)1, a non-vertical line can be represented as $\ell \colon y = ax - b$, where $a \in \mathbb{R}$ is the slope and $b \in \mathbb{R}$ the $y$-intercept. Hence, to describe a line, we just need two scalars, $a$ and $b$. We can state that a line has two degrees of freedom.
The idea of the duality between lines and points is that lines in the $(x, y)$-plane can be thought of or represented as points in a new $2$-dimensional space (or coordinate system), in which the coordinate axes are labeled $(a, b)$, rather than $(x, y)$. This new plane is denoted or labeled as $(a, b)$ to emphasize the fact that the values that $a$ and $b$ may assume respectively refer to the slope and $y$-intercept of the line $\ell$ in the original $(x, y)$-plane.
For example, the line $\ell : y = 2x + 1$, in the $(x, y)$-plane, corresponds to the point $\ell^* = (2, −1)$ in this new $(a, b)$-space 2, 3; that is, $\ell^*$ is a $2$-dimensional point with $2$ as the value of the $a$-coordinate and $-1$ as the value of the $b$-coordinate. A point $p=(5, 2)$ in the $(x, y)$-plane corresponds to the line $p^* \colon b = 5 a - 2$ in the $(a, b)$-plane 4. Similarly, a point $q = (1, 6)$ in the $(a, b)$-space, corresponds to a non-vertical line, $q^* \colon y = 1x − 6$ in the "original" plane $(x, y)$. And a line $t \colon b = \frac{1}{2} a - 5$ in the $(a, b)$-plane corresponds to the point $t^*=(\frac{1}{2}, 5)$ in the $(x, y)$-plane.
A convention is to call the original $(x, y)$-plane the primal plane, and the new $(a, b)$-plane the dual plane.
Why is $(x, y)$ the "original plane"? What's "original" about it? It's simply the plane where our initial problem is formulated. For example, we may have a problem where we are given a set of points. Then, we say that those points are given in the primal plane. We may convert our points to lines in the dual plane, because maybe we can formulate our problem in terms of lines, and the new problem is easier to solve.
Of course, we could have labeled the "original" plane with other letters, e.g. $(g, h)$, but we often label it $(x, y)$, so the new system is labeled differently, e.g. $(a, b)$. We could have labeled the new plane or coordinate system $(o, z)$: it really does not matter, i.e. it's just notation!
Here's a picture which summarizes all these words
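The incidence-preserving property behind this duality (a point lies on a line exactly when the line's dual point lies on the point's dual line) can also be checked mechanically; a small Python sketch using the text's convention $y = ax - b$:

```python
# Convention from the text: a non-vertical line y = a*x - b is stored as
# the pair (a, b). The duality maps the line (a, b) to the point (a, b),
# and the point (x, y) to the line b = x*a - y, i.e. the pair (x, y).

def dual(pair):
    # self-inverse: the same pair serves as a line in one plane, a point in the other
    return pair

def on_line(point, line):
    (x, y), (a, b) = point, line
    return y == a * x - b          # incidence test for y = a*x - b

line = (2, -1)    # l : y = 2x + 1, the example from the text
p_on = (1, 3)     # lies on l: 2*1 + 1 == 3
p_off = (5, 2)    # does not lie on l

for p in (p_on, p_off):
    # incidence is preserved: p on l  iff  l* on p*
    assert on_line(p, line) == on_line(dual(line), dual(p))

print(on_line(p_on, line), on_line(p_off, line))   # True False
```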
Notes
Note that we label this plane $(x, y)$, but could have labeled it $(u, v)$ or any other tuple of two letters: it does not matter (as long as we are consistent).
Note the $-$ (i.e. the minus) in front of the $1$ in the point $\ell^*$.
We use an asteristik, $*$, to denote the dual of some object. For example, $u^*$ is the dual (in the dual plane) of $u$ (in the $(x, y)$ or primal plane). $u$ can either be a line or a point. If $u$ is a line, then $u^*$ is a point. If $u$ is a point, then $u^*$ is a line.
Note the $-$ (i.e. the minus) in front of the $-2$ in the equation of the line $p^*$. | {
"domain": "cs.stackexchange",
"id": 11343,
"tags": "terminology, computational-geometry"
} |
What geological mechanisms result in the great depth of the Mariana Trench? | Question: According to the Mariana Trench Oceanography page, it is at a maximum depth of
is 11,033 meters (36,201 feet)
The 'Challenger Deep' being the name of the deepest point.
I understand that it is a subduction trench, but my question is: what geological mechanisms result in the great depth of the Mariana Trench over other trenches?
Answer: As taken from Wikipedia:
There are several factors that control the depth of trenches. The most important control is the supply of sediment, which fills the trench so that there is no bathymetric expression. It is therefore not surprising that the deepest trenches (deeper than 8,000 m (26,000 ft)) are all nonaccretionary. In contrast, all trenches with growing accretionary prisms are shallower than 8,000 m (26,000 ft).
A second order control on trench depth is the age of the lithosphere at the time of subduction. Because oceanic lithosphere cools and thickens as it ages, it subsides. The older the seafloor, the deeper it lies and this determines a minimum depth from which seafloor begins its descent. This obvious correlation can be removed by looking at the relative depth, the difference between regional seafloor depth and maximum trench depth. Relative depth may be controlled by the age of the lithosphere at the trench, the convergence rate, and the dip of the subducted slab at intermediate depths.
Finally, narrow slabs can sink and roll back more rapidly than broad plates, because it is easier for underlying asthenosphere to flow around the edges of the sinking plate. Such slabs may have steep dips at relatively shallow depths and so may be associated with unusually deep trenches, such as the Challenger Deep.
A simple picture for anyone who doesn't understand how these trenches form:
[figure: diagram of trench formation at a subduction zone (source: wikispaces.com)]
Also, the Mariana Trench isn't the only super deep trench in the ocean... 10 deepest parts of the ocean | {
"domain": "earthscience.stackexchange",
"id": 287,
"tags": "geophysics, geodynamics, subduction"
} |
Why Does Coordination of Metal Ions Happen Anyways? | Question: I've been studying coordination chemistry, but I still have a fundamental question about the area: why do transition metals even form coordination compounds, while main group metals do not? Why can't they just form simple ionic compounds, treating their d electrons as valence electrons?
On that note about d-orbitals as valence electrons: when do they act as valence electrons and form "simple" ionic (such as $\ce{CrCl_3}$) or covalent ($\ce{MnO_4^-}$) bonding, and when does coordination happen instead? Lately, I've begun to think that transition metals don't even form simple ionic bonds or covalent bonds. It used to seem to me that in the chromate ion, $\ce{CrO_4^{2-}}$ (by drawing the Lewis structure), the d-orbitals in chromium participate as valence electrons in covalent bonding, but then how is it colored without d–d transitions? Sulfate, $\ce{SO4^2-}$, with $\ce{S}$ instead of $\ce{Cr}$, is colorless. Thus, there must be coordination chemistry involved in $\ce{CrO4^2-}$ (is $\ce{O^2-}$ even a good ligand)? And with $\ce{CrCl_3}$, is it indeed a simple ionic bond like I thought, or is there coordination going on with that, too?
Basically, why can transition metals exhibit coordination in addition to (at least what seems like) ionic and covalent bonds?
Answer: Your approach is fundamentally wrong. Main group elements form complexes all the time. It is just that these complexes are typically kinetically labile and quickly decompose, rearrange and so on.
Main group elements’ complexes are also typically colourless due to the lack of d-electrons or due to the fully populated d-subshell meaning that all electron transitions are in the UV range, so you cannot easily study them via colour changes.
The state of the d-subshell (full or empty, nothing in-between) also means that no additional stability of certain geometries exist; the s and p orbitals can more or less rearrange to whatever is required.
And finally note that ‘simple’ ionic compounds are, in fact, huge coordinative compounds. A single sodium ion in a $\ce{NaCl}$ crystal could be described as a $\ce{[Na(Cl)_{\frac{6}{6}}]}$ complex. | {
"domain": "chemistry.stackexchange",
"id": 5465,
"tags": "bond, coordination-compounds, transition-metals, ionic-compounds"
} |
How can a ligand be an integral membrane protein? | Question: My background is in mathematics, and not biology, so please bear with me. I am currently working on a project involving the effects of Epidermal growth factor treatment (EGF) on cell migration. I am reading a review of EGF signaling (Epidermal growth factor receptor targeting in cancer: A review of trends and strategies by Chetan Yewale, et al.), and it states that "Various ligands can activate EGFR ... These ligands are expressed as integral membrane proteins." This statement makes absolutely no sense to me, and makes me question my understanding of signal transduction. I think of ligands as freely floating molecules that may eventually come into contact with the cell membrane and attach to some receptor. But a ligand expressed as an integral membrane protein? This seems contradictory to my understanding of ligands, which (I thought) are released from the cell in order to signal to cells (be they the same, neighboring, or distant cells). Integral membrane protein ligands would only be useful for autocrine signaling, which I don't think is true of EGF.
Answer: In biology, ligand is a very broad term. Anything that has a receptor for it is called a ligand, regardless of whether it is free or membrane-bound. Membrane-bound ligands make a lot of sense, because many cells in our body are capable of actively moving around (for example, T-cells). Cells can use signal transduction by direct cell-to-cell contact - as in the activation of T-cells, or cytotoxic T-cell killing. This wiki page covers the basics quite well.
Also, a quite thorough wiki page on ligands.
From the comments under the question by @WYSIWYG:
These kind of ligands act via juxtacrine signalling because they
cannot diffuse. Ephrin is another example. | {
"domain": "biology.stackexchange",
"id": 4125,
"tags": "biochemistry, cell-biology, cell-signaling"
} |
Simple Comonads in Scala? | Question: trait Comonad[M[_]] {
// map
def >>[A,B](a: M[A])(f: A => B): M[B]
// extract | coeta
def counit[A](a:M[A]): A
// coflatten | comu
def cojoin[A](a: M[A]): M[M[A]]
}
object Comonad {
implicit def listComonad[A]: Comonad[List]
=
new Comonad[List] {
def counit[A](lsa: List[A])
=
lsa match { case List(a) => a }
def cojoin[A](lsa:List[A]): List[List[A]]
=
List(lsa)
def >>[A,B](lsa: List[A])(f: A => B): List[B]
=
lsa map f
}
}
So yeah I'm looking at this and I don't have that correct feeling...
Anyone mind correcting this and maybe offering one or two other simple comonads?
Answer: I believe what's bothering you is the non-total definition of counit, right? (For cojoin, one possible variation is lsa.tails.) Indeed, a List does not have a valid comonad instance specifically because of that. It does have a valid semicomonad instance though.
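For intuition, here is the non-empty-list comonad sketched in Python rather than Scala (mirroring a NonEmptyList with a tails-based cojoin): counit takes the head, which is total precisely because the list can never be empty, and cojoin is the list of tails.

```python
# The non-empty-list comonad: counit (extract) is the head -- total,
# because the list is never empty -- and cojoin (duplicate) is the
# non-empty list of tails.

def counit(xs):
    assert xs, "must be non-empty"
    return xs[0]

def cojoin(xs):
    return [xs[i:] for i in range(len(xs))]

def cmap(f, xs):
    return [f(x) for x in xs]

xs = [1, 2, 3]
print(cojoin(xs))   # [[1, 2, 3], [2, 3], [3]]

# The comonad laws hold:
assert counit(cojoin(xs)) == xs                        # extract . duplicate == id
assert cmap(counit, cojoin(xs)) == xs                  # map extract . duplicate == id
assert cojoin(cojoin(xs)) == cmap(cojoin, cojoin(xs))  # coassociativity
```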
Things that have a valid comonad instance are, for example: Identity, NonEmptyList, Zipper, Tuple. Here's a reddit question with more examples of comonads. | {
"domain": "codereview.stackexchange",
"id": 4023,
"tags": "functional-programming, scala"
} |
Simulation of a particle and chemical molecule | Question: I am currently studying the basics of simulation and want to try out some experiments. It is a known fact that gold nanoparticles can bind to chemical groups which contain thiol (R–SH) groups. I want to simulate the binding of a thiol group to a gold nanoparticle. How can I do this? Where do I have to start? Which software can be used for this purpose?
Answer: Density Functional Theory is worth investigating if you're looking for a fairly efficient way of investigating these interactions at an electronic level. The term 'nanoparticle' is wonderfully vague as it encompasses two to three orders of magnitude in size - the size of the nanoparticle will inform the specific choice of method:
Small (as in, only a couple of atoms) nanoparticles are amenable to computation with atom-centered basis sets in programs such as Gaussian, ADF, ORCA, GAMESS and so on. The latter two may be acquired free for academic use. If your particle/ligand complex is highly symmetric and/or you have supercomputer time you might be able to work with a moderately large particle. I recall a paper abstract where researchers did TDDFT on a 120 atom silver nanoparticle in ADF (here it is!), aided by high symmetry.
Large systems, where it's a) computationally unreasonable to model the whole particle and b) to a first-order approximation the ligand-particle interaction can be treated as occurring on an infinite plane or edge, may be more amenable to periodic DFT using plane-wave basis sets (PW-DFT). There are many codes available, ranging from free to very expensive. Off the top of my head, VASP, SIESTA, Quantum Espresso, Abinit and CASTEP are fairly popular.
There's also an atom-centered periodic franken-version of ADF called ADF-BAND which can model 1D to 3D periodic systems, which is worth mentioning. It is not particularly well documented, however. | {
"domain": "chemistry.stackexchange",
"id": 169,
"tags": "computational-chemistry, nanotechnology"
} |
What are sine and cosine modulated integrals, and why are they used? | Question: I have found the definition of the following formulas in a paper regarding active vibration control, where they are called sine and cosine modulated integrals.
$y$ is the measurement signal with a strong periodic component of frequency $N\Omega$
$$y^{(i)}_{Nc}=\frac{2}{T}\int^{T}_{0}y^{(i)}(ϕ)\cos(Nϕ)dϕ$$
$$y^{(i)}_{Ns}=\frac{2}{T}\int^{T}_{0}y^{(i)}(ϕ)\sin(Nϕ)dϕ$$
where $\phi=\Omega t$.
From these the vector $y_N^{(i)}$ is defined as
$y_N^{(i)} = \begin{bmatrix} y_{Nc}^{(1)}\\ y_{Ns}^{(1)}\\\vdots\end{bmatrix}$
The same is done for the control input(s) $u$.
Then a quadratic cost function to be minimized at each step is defined using these newly introduced signals in this way:
$J(k) = y^T_NQy_N+u^T_NRu_N $ where $Q$ and $R$ are just two weighting matrices.
Are they introduced to consider the sine and cosine components at the considered frequency of the signal?
Can someone explain their meaning and why are they introduced?
Another question: suppose I have the value $y_N$, how can I invert the relationship to get $y^{(i)}$?
Here is the link for the paper.
Answer: As someone made me notice in the comments, the integrals are just the coefficients of the Fourier Series for that frequency component (when the series is expressed with the cosine and sine parts separately).
The integral is performed over a period much longer than the period corresponding to the frequency of the disturbance, so that a better average is obtained.
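A discrete sketch of the idea (the test signal and sample count are my own illustrative choices, with $\phi$ sampled over one $2\pi$ period, so $T = 2\pi$ here): the modulated sums recover the cosine and sine parts of the chosen harmonic and reject the others.

```python
import math

# Discrete version of the modulated integrals over one period of phi.
def modulated_coeffs(y, N, M=4096):
    cs = ss = 0.0
    for i in range(M):
        phi = 2 * math.pi * i / M
        cs += y(phi) * math.cos(N * phi)
        ss += y(phi) * math.sin(N * phi)
    # (2/T) * integral with dphi = T/M  ->  (2/M) * sum
    return (2 / M) * cs, (2 / M) * ss

# A 3rd harmonic with known cosine/sine parts, plus a 5th-harmonic
# "disturbance" that the N = 3 integrals should reject.
y = lambda phi: 1.5 * math.cos(3 * phi) - 0.4 * math.sin(3 * phi) + 2.0 * math.sin(5 * phi)

yc, ys = modulated_coeffs(y, N=3)
print(round(yc, 6), round(ys, 6))   # recovers 1.5 and -0.4
```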
To get back the signal in the time domain it is sufficient to multiply the two components by the cosine and sine respectively. Of course in this way the signal has only one frequency component. | {
"domain": "dsp.stackexchange",
"id": 2026,
"tags": "modulation, integration"
} |
Does emergence or the second law create more degrees of freedom total? | Question: Here's my layman thought process:
By emergence and the second law, new "modes" or points in configuration space become "unlocked" with macroscopic systems. One is a brain, which can have an experience or memory on top of just being an assemblage of x microscopic particles.
The counter point might be that it takes x particles to make a brain with conscious experience, but x-1 particles to make a brain without. So we traded 1 degree of freedom in configuration space for another.
Is there anywhere to go with this idea? And where can I find out about if degrees of freedom are always there since the big bang, or being added (by this or by expansion).
Answer: Even when you consider 'emergent degrees of freedom', the macroscopic state of a brain has vastly fewer degrees of freedom than the microscopic state of its constituent particles.
Imagine picking a few hundred water molecules at random around the brain and giving them a small nudge. Will this have a noticeable effect on the state of the brain? Most likely the answer is no. Unless the molecule is in a particularly critical position, or the nudge pushes it somewhere water molecules normally shouldn't go, your nudge will most likely be lost in the random thermal noise.
We can in fact quantify exactly how much more information is present in the microstate than in the macrostate of the system. This is precisely the thermodynamic entropy of the system. So what you are proposing is essentially that the brain has a negative entropy. Now whilst a brain may not have the highest possible entropy for its constituent parts (at a given temperature), its entropy certainly isn't negative. | {
"domain": "physics.stackexchange",
"id": 71187,
"tags": "thermodynamics, degrees-of-freedom, emergent-properties"
} |
Crystallization from a solution | Question: I was wondering if there is any other way to crystallize materials from a solution other than using supersaturation?
Thanks,
Answer: No. By definition you need to have at least local supersaturation. You can achieve it by different means (cooling, evaporation of solvent, mixing with another solvent...) but in the end you need supersaturation. See: http://xray.chem.ufl.edu/growing%20tips.htm or http://web.mit.edu/x-ray/cystallize.html
Of course you can use techniques like sublimation, but your question was about solutions (by the way, in the case of sublimation you also have supersaturation; the only difference is that your solvent is a gas). | {
"domain": "chemistry.stackexchange",
"id": 502,
"tags": "organic-chemistry, purification, recrystallization"
} |
Context-free Grammar for a Context-free Language Intersecting a Regular Language (get the Maximum Number of Rules) | Question: It is well known that the intersection $L \cap R$ of a context-free language $L$ and a regular language $R$ is context-free. Each proof I have seen constructs an automaton (a PDA) that accepts $L$ and one (a DFA) that accepts $R$. Then a further automaton (a PDA) is constructed that accepts $L \cap R$. This automaton can be converted to a context-free grammar.
Is there some literature that gives a proof without constructing automata, and instead constructs a context-free grammar that generates $L \cap R$ directly?
What I want to know actually: Given a context-free Language $L$ and a regular Language $R$. How many rules at most does a context-free grammar need to generate $L \cap R$?
Answer: Answer to your first question http://www.cs.umd.edu/~gasarch/BLOGPAPERS/cfg.pdf.
A summary of the proof given in the link: first, it is shown that proving $L \cap R$ is CFL reduces to proving that $L \cap R'$ is CFL, where $R'$ is a regular language recognized by a DFA with exactly one final state. Then, from the grammar for $L$ (in Chomsky normal form) and the DFA for $R'$, a new grammar for $L \cap R'$ is constructed using a few straightforward rules. | {
"domain": "cstheory.stackexchange",
"id": 2121,
"tags": "regular-language, descriptive-complexity, context-free-languages"
} |
Can we "trivialize" the equivalence between canonical quantization of fields and second quantization of particles? | Question: As Weinberg exposited in his QFT Vol1, there are two equivalent ways of arriving at the same quantum field theories:
(1). Start with single-particle representations of Poincare group, and then make a multiparticle theory out of it, while preserving principles of causality etc. I would call this approach the second quantization of particles, since second quantization is usually used to emphasize the many-body nature of a theory.
(2). Start with field representations of Poincare group, canonically quantize it, while preserving principles of causality, positive definiteness of energies etc. I would call this approach the quantization of fields, just as everyone else would call it.
Weinberg showed the proof of the equivalence between the above two approaches using mathematics that, while not hard, is nontrivial. The equivalence seems like a sheer miracle to me, or a complete coincidence. I do not feel that I understand the equivalence with my current state of mind. Is there a way to trivialize the equivalence? Or, putting it another way, is there an a priori reasoning to argue that, given the two sets of starting points of (1)(2), we have to get the same theory in the end?
Just as a side remark, many have suggested the term "second quantization" should be totally dumped, because it is really just the first quantization of fields. To me however, it still serves some purposes since the equivalence is not transparent.
Answer: The presumed equivalence between the canonical quantization and the Fock space representation is only a particular case.
The canonical formalism provides us only with the canonical Poisson brackets. The first step according to Dirac's axioms is to replace the Poisson brackets by commutators, and since these commutators satisfy the Jacobi identity, they can be represented by linear operators on a Hilbert space.
Canonical quantization does not specify the Hilbert space.
Finding a Hilbert space where the operators acts linearly and satisfy the commutation relations is a problem in representation theory. This task is referred to as "quantization" in the modern literature.
The problem is that in the case of free fields, this problem does not have a unique solution (up to a unitary transformation in the Hilbert space). This situation is referred to as the existence of inequivalent quantizations or inequivalent representations. The Fock representation is only a special case. Some of the quantizations are called "non-Fock", because the Hilbert space does not have an underlying Fock space structure (i.e., cannot be interpreted as free particles), but there can even be inequivalent Fock representations.
Before proceeding, let me tell you that inequivalent quantizations may be the areas where "new physics" can emerge, because they can correspond to different quantum systems.
Also, let me emphasize that the situation is completely different in the finite dimensional case. This is because, due to the Stone-von Neumann theorem, any representation of the canonical commutation relations in quantum mechanics is unitarily equivalent to the harmonic oscillator representation. Thus the issue of inequivalent representations of the canonical commutation relations occurs only due to the infinite dimensionality.
For a few examples of inequivalent quantizations of the canonical commutation relations of a scalar field on a Minkowski space-time, please see the following article by Moschella and Schaeffer. In this article, they construct inequivalent representations by means of a Bogoliubov transformation which changes the vacuum, and they also present a thermofield representation. In all these representations the canonical operators are represented on a Hilbert space and the canonical commutation relations are satisfied. The Bogoliubov shifted vacuum cases correspond to broken Poincaré symmetries. One can argue that these solutions are unphysical, but the symmetry argument will not be enough in the case of quantization on a general curved nonhomogeneous manifold. In this case we will not have a "physical" argument to dismiss some of the inequivalent representations.
The phenomenon of inequivalent quantizations can be present even in the case of a finite number of degrees of freedom on non-flat phase spaces.
Having said all that, I want nevertheless to provide you a more direct answer to your question (although it will not be unique due to the reasons listed above). As I understand the question, it can be stated that there is an algorithm for passing from the single particle Hilbert space to the Fock space. This algorithm can be summarized by the Fock factorization:
$$ \mathcal{F} = e^{\otimes \mathcal{h}}$$
Where $\mathcal{h}$ is the single particle Hilbert space and $\mathcal{F}$ is the Fock space. As stated before, canonical quantization provides us only with the canonical commutation relations:
$$[a_{\mathbf{k}}, a^{\dagger}_{\mathbf{l}}] = \delta^3(\mathbf{k} - \mathbf{l}) \mathbf{1}$$
At this stage we have only a ($C^{*}$-)algebra of operators. The answer to the reverse question about the existence of an algorithm starting from the canonical commutation relations and ending with the Fock space (or equivalently, to the question: where is the Hilbert space?) is provided by the Gelfand-Naimark-Segal (GNS) construction, which provides representations of $C^{*}$ algebras in terms of bounded operators on a Hilbert space.
The GNS construction starts from a state $\omega$, which is a positive linear functional on the algebra $ \mathcal{A}$ (in our case the algebra is the completion of all possible products of any number of creation and annihilation operators).
The second step is to choose the whole algebra $\mathcal{A}$ as the initial linear space.
$$\omega (A^{\dagger}{A}) = 0$$
The Hilbert space is obtained by identifying elements differing by a null vector:
$$ \mathcal{H} = \mathcal{A} / \mathcal{N} $$
($\mathcal{N} $ is the space of null vectors).
The inner product on this Hilbert space is given by:
$$(A, B) = \omega (A^{\dagger}{B}) $$
It can be proved that the GNS construction is a cyclic representation where the Hilbert space is given by the action of operators on a cyclic "vacuum vector". The GNS construction gives all inequivalent representations of a given $C^{*}$ algebra (by bounded operators). In the case of a free scalar field, one choice is a Gaussian state defined by its characteristic function:
$$ \omega_{\mathcal{F} }(e^{\int\frac{d^3k}{E_k} z_{\mathbf{k}}a^{\dagger}_{\mathbf{k}} + \bar{z}_{\mathbf{k}}a^{\mathbf{k}} }) = e^{\int\frac{d^3k}{E_k} \bar{z}_{\mathbf{k}} z_{\mathbf{k}}}$$
Where $z_{\mathbf{k}}$ are indeterminates which can be differentiated with respect to, in order to obtain the result for any product of operators.
The null vectors of this construction will be just combinations vanishing due to the canonical commutation relations (like $a_1 a_2 - a_2 a_1$). Thus this choice has Bose statistics. Also subspaces spanned by a product of a given number of creation operators will be the number subspaces.
The state of this specific construction is denoted by:
$\omega_{\mathcal{F}}$, since it produces the usual Fock space. Different state choices may result in inequivalent quantizations.
"domain": "physics.stackexchange",
"id": 10442,
"tags": "quantum-field-theory, second-quantization"
} |
Calculate the latitudes of a Gaussian Grid | Question: I would like to calculate the latitude values of a Gaussian Grid of a size of my choosing. Unfortunately, I didn't find a method or a formula to do so. Where can I find this information? Alternatively, is there a function publicly available that can do the job?
Answer: I believe the NCL function gaus should be able to give you the solution you are looking for. From the API documentation, you provide the number of latitude points per hemisphere, and you get back the Gaussian latitudes and Gaussian weights.
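If you would rather compute the latitudes directly, the sines of the N Gaussian latitudes are the N Gauss-Legendre quadrature nodes, so any Gauss-Legendre routine will do. A Python sketch using NumPy (the function name is my own):

```python
import numpy as np

def gaussian_latitudes(nlat):
    # The sines of the Gaussian latitudes are the roots of the Legendre
    # polynomial of degree nlat; leggauss returns them (ascending) together
    # with the corresponding Gaussian quadrature weights.
    nodes, weights = np.polynomial.legendre.leggauss(nlat)
    lats = np.degrees(np.arcsin(nodes))
    return lats, weights

lats, weights = gaussian_latitudes(64)
# 64 latitudes, symmetric about the equator; the weights sum to 2.
```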
Here is a code sample from their website
nlat = 64 ; for globe
gau_info = gaus(nlat/2) ; divide by 2 to get "per hemisphere"
glat = gau_info(:,0) ; gaussian latitudes ( 1st dimension of gau_info)
gwgt = gau_info(:,1) ; gaussian weights ( 2nd dimension of gau_info) | {
"domain": "earthscience.stackexchange",
"id": 1171,
"tags": "meteorology, weather-forecasting, climate-models"
} |
Space-like and time-like: where do the names come from? | Question: Space-like separated events are events that, in a well-chosen reference frame, can take place at the same time but never happen at the same location.
On the other hand for time-like events, one can chose a reference frame such that they happen at the same place but never simultaneously.
I can't help thinking that the labels are therefore very ill-chosen... Is there another motivation for these names?
Answer: I admit that the terminology is not as self-explanatory as it should be; however, a source of confusion is the fact that you are actually looking at consequences of the definitions instead of at the original definitions, which hold in every connected time-oriented spacetime. The terminology turns out to be clearer if you use the original definitions, which refer to the nature of the curves connecting the events.
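Concretely, in Minkowski spacetime the three definitions below reduce to the sign of the invariant interval between the two events. A minimal sketch, with the conventions $c=1$ and signature $(-,+,+,+)$ assumed:

```python
def classify(dt, dx, dy, dz):
    # Invariant interval with signature (-,+,+,+), in units where c = 1.
    s2 = -dt**2 + dx**2 + dy**2 + dz**2
    if s2 < 0:
        return "timelike"   # some frame sees the events at the same place
    if s2 > 0:
        return "spacelike"  # some frame sees the events at the same time
    return "lightlike"      # the events can only be joined by a light ray

print(classify(2, 1, 0, 0))  # timelike
print(classify(1, 2, 0, 0))  # spacelike
```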
For a pair of events in a generic spacetime, time-like related means, by definition, that there is a future directed time-like curve joining the points. In Minkowski spacetime, it is equivalent to say that there is a time-like geodesic joining the events and it implies (it is equivalent in that spacetime) that there is a Minkowski reference frame where the events have the same location at different times.
For a pair of events in a generic spacetime, causally related means, by definition, that there is a future directed causal curve joining the events. Causal curve means that its tangent vector is not space-like. Causal curves are those curves describing the stories of physical points transmitting interactions.
Finally, for a pair of events in a generic spacetime, space-like separated (or also, equivalently, causally separated) means, by definition, that there is no future directed causal curve joining the events. In Minkowski spacetime, it is equivalent to say (and this justifies the name) that there is a space-like geodesic joining the events and, in turn, it implies (it is equivalent in Minkowski spacetime) that there is a Minkowski reference frame where both events occur at the same time in the rest frame of the reference frame. | {
"domain": "physics.stackexchange",
"id": 11125,
"tags": "terminology, relativity, metric-tensor, causality"
} |
Rock-Paper-Scissors-Lizard-Spock Challenge | Question:
"Scissors cuts paper, paper covers rock,
rock crushes lizard, lizard poisons Spock,
Spock smashes scissors, scissors decapitate lizard,
lizard eats paper, paper disproves Spock,
Spock vaporizes rock. And as it always has, rock crushes scissors."
-- Dr. Sheldon Cooper
So these are the rules.
Building blocks
The first thing I thought was "I need a way to compare the possible selections" - this sounded like IComparable<T>, so I started by implementing that interface in a SelectionBase class. Now because I knew I'd derive Rock, Paper, Scissors, Lizard and Spock classes from this, I decided to include a Name property that returns the type name, and since I also needed a way to display different verbs depending on the type of the opponent's selection, I also included a GetWinningVerb method:
public abstract class SelectionBase : IComparable<SelectionBase>
{
public abstract int CompareTo(SelectionBase other);
public string Name { get { return GetType().Name; } }
public abstract string GetWinningVerb(SelectionBase other);
}
The base class is implemented by each of the possible selections:
public class Rock : SelectionBase
{
public override string GetWinningVerb(SelectionBase other)
{
if (other is Scissors) return "crushes";
if (other is Lizard) return "crushes";
throw new InvalidOperationException("Are we playing the same game?");
}
public override int CompareTo(SelectionBase other)
{
if (other is Rock) return 0;
if (other is Paper) return -1;
if (other is Scissors) return 1;
if (other is Lizard) return 1;
if (other is Spock) return -1;
throw new InvalidOperationException("Are we playing the same game?");
}
}
public class Paper : SelectionBase
{
public override string GetWinningVerb(SelectionBase other)
{
if (other is Rock) return "covers";
if (other is Spock) return "disproves";
throw new InvalidOperationException("Are we playing the same game?");
}
public override int CompareTo(SelectionBase other)
{
if (other is Rock) return 1;
if (other is Paper) return 0;
if (other is Scissors) return -1;
if (other is Lizard) return -1;
if (other is Spock) return 1;
throw new InvalidOperationException("Are we playing the same game?");
}
}
public class Scissors : SelectionBase
{
public override string GetWinningVerb(SelectionBase other)
{
if (other is Paper) return "cuts";
if (other is Lizard) return "decapitates";
throw new InvalidOperationException("Are we playing the same game?");
}
public override int CompareTo(SelectionBase other)
{
if (other is Rock) return -1;
if (other is Paper) return 1;
if (other is Scissors) return 0;
if (other is Lizard) return 1;
if (other is Spock) return -1;
throw new InvalidOperationException("Are we playing the same game?");
}
}
public class Lizard : SelectionBase
{
public override string GetWinningVerb(SelectionBase other)
{
if (other is Paper) return "eats";
if (other is Spock) return "poisons";
throw new InvalidOperationException("Are we playing the same game?");
}
public override int CompareTo(SelectionBase other)
{
if (other is Rock) return -1;
if (other is Paper) return 1;
if (other is Scissors) return -1;
if (other is Lizard) return 0;
if (other is Spock) return 1;
throw new InvalidOperationException("Are we playing the same game?");
}
}
public class Spock : SelectionBase
{
public override string GetWinningVerb(SelectionBase other)
{
if (other is Rock) return "vaporizes";
if (other is Scissors) return "smashes";
throw new InvalidOperationException("Are we playing the same game?");
}
public override int CompareTo(SelectionBase other)
{
if (other is Rock) return 1;
if (other is Paper) return -1;
if (other is Scissors) return 1;
if (other is Lizard) return -1;
if (other is Spock) return 0;
throw new InvalidOperationException("Are we playing the same game?");
}
}
User Input
Then I needed a way to get user input. I knew I was going to make a simple console app, but just so I could run unit tests I decided to create an IUserInputProvider interface - the first pass had all 3 methods in the interface, but since I wasn't using them all, I only kept one; I don't think getting rid of GetUserInput(string) would hurt:
public interface IUserInputProvider
{
string GetValidUserInput(string prompt, IEnumerable<string> validValues);
}
public class ConsoleUserInputProvider : IUserInputProvider
{
private string GetUserInput(string prompt)
{
Console.WriteLine(prompt);
return Console.ReadLine();
}
private string GetUserInput(string prompt, IEnumerable<string> validValues)
{
var input = GetUserInput(prompt);
var isValid = validValues.Select(v => v.ToLower()).Contains(input.ToLower());
return isValid ? input : string.Empty;
}
public string GetValidUserInput(string prompt, IEnumerable<string> validValues)
{
var input = string.Empty;
var isValid = false;
while (!isValid)
{
input = GetUserInput(prompt, validValues);
isValid = !string.IsNullOrEmpty(input) || validValues.Contains(string.Empty);
}
return input;
}
}
The actual program
class Program
{
/*
"Scissors cuts paper, paper covers rock,
rock crushes lizard, lizard poisons Spock,
Spock smashes scissors, scissors decapitate lizard,
lizard eats paper, paper disproves Spock,
Spock vaporizes rock. And as it always has, rock crushes scissors."
-- Dr. Sheldon Cooper
*/
static void Main(string[] args)
{
var consoleReader = new ConsoleUserInputProvider();
var consoleWriter = new ConsoleResultWriter();
var game = new Game(consoleReader, consoleWriter);
game.Run();
}
}
IResultWriter
public interface IResultWriter
{
void OutputResult(int comparisonResult, SelectionBase player, SelectionBase sheldon);
}
public class ConsoleResultWriter : IResultWriter
{
public void OutputResult(int comparisonResult, SelectionBase player, SelectionBase sheldon)
{
var resultActions = new Dictionary<int, Action<SelectionBase, SelectionBase>>
{
{ 1, OutputPlayerWinsResult },
{ -1, OutputPlayerLosesResult },
{ 0, OutputTieResult }
};
resultActions[comparisonResult].Invoke(player, sheldon);
}
private void OutputPlayerLosesResult(SelectionBase player, SelectionBase sheldon)
{
Console.WriteLine("\n\tSheldon says: \"{0} {1} {2}. You lose!\"\n", sheldon.Name, sheldon.GetWinningVerb(player), player.Name);
}
private void OutputPlayerWinsResult(SelectionBase player, SelectionBase sheldon)
{
Console.WriteLine("\n\tSheldon says: \"{0} {1} {2}. You win!\"\n", player.Name, player.GetWinningVerb(sheldon), sheldon.Name);
}
private void OutputTieResult(SelectionBase player, SelectionBase sheldon)
{
Console.WriteLine("\n\tSheldon says: \"Meh. Tie!\"\n");
}
}
The actual game
I didn't bother with building a complex AI - we're playing against Sheldon Cooper here, and he systematically plays Spock.
public class Game
{
private readonly Dictionary<string, SelectionBase> _playable =
new Dictionary<string, SelectionBase>
{
{ "1", new Rock() },
{ "2", new Paper() },
{ "3", new Scissors() },
{ "4", new Lizard() },
{ "5", new Spock() }
};
private readonly IUserInputProvider _consoleInput;
private readonly IResultWriter _resultWriter;
public Game(IUserInputProvider console, IResultWriter resultWriter)
{
_consoleInput = console;
_resultWriter = resultWriter;
}
public void Run()
{
while (true)
{
LayoutGameScreen();
var player = GetUserSelection();
if (player == null) return;
var sheldon = new Spock();
var result = player.CompareTo(sheldon);
_resultWriter.OutputResult(result, player, sheldon);
Pause();
}
}
private void Pause()
{
Console.WriteLine("\nPress <ENTER> to continue.");
Console.ReadLine();
}
private void LayoutGameScreen()
{
Console.Clear();
Console.WriteLine("Rock-Paper-Scissors-Lizard-Spock 1.0\n{0}\n", new string('=', 40));
foreach (var item in _playable)
Console.WriteLine("\t[{0}] {1}", item.Key, item.Value.Name);
Console.WriteLine();
}
private SelectionBase GetUserSelection()
{
var values = _playable.Keys.ToList();
values.Add(string.Empty); // allows a non-selection
var input = _consoleInput.GetValidUserInput("Your selection? <ENTER> to quit.", values);
if (input == string.Empty) return null;
return _playable[input];
}
}
Output
Rock: "Spock vaporizes Rock. You lose!"
Paper: "Paper disproves Spock. You win!"
Scissors: "Spock smashes Scissors. You lose!"
Lizard: "Lizard poisons Spock. You win!"
Spock: "Meh. Tie!"
Answer: Let's look at your code from an extensibility point of view:
If Sheldon decides to add a new item to the game then you have to go to n classes to adjust the comparisons and winning verbs. I usually try to avoid such designs because whenever you require a developer to change stuff in n places when something new is added then he/she is bound to forget one place.
So how can we change the design? Well, a game seems to be suited for a rules approach especially since the rules are fairly simple and always of the same structure in this case:
enum Item
{
Rock, Paper, Scissors, Lizard, Spock
}
class Rule
{
public readonly Item Winner;
public readonly Item Loser;
public readonly string WinningPhrase;
public Rule(Item winner, string winningPhrase, Item loser)
{
Winner = winner;
Loser = loser;
WinningPhrase = winningPhrase;
}
public override string ToString()
{
return string.Format("{0} {1} {2}", Winner, WinningPhrase, Loser);
}
}
Assuming you have a list of rules somewhere:
static List<Rule> Rules = new List<Rule> {
new Rule(Item.Rock, "crushes", Item.Scissors),
new Rule(Item.Spock, "vaporizes", Item.Rock),
new Rule(Item.Paper, "disproves", Item.Spock),
new Rule(Item.Lizard, "eats", Item.Paper),
new Rule(Item.Scissors, "decapitate", Item.Lizard),
new Rule(Item.Spock, "smashes", Item.Scissors),
new Rule(Item.Lizard, "poisons", Item.Spock),
new Rule(Item.Rock, "crushes", Item.Lizard),
new Rule(Item.Paper, "covers", Item.Rock),
new Rule(Item.Scissors, "cut", Item.Paper)
};
You now can make a decision:
class Decision
{
private bool? _HasPlayerWon;
private Rule _WinningRule;
private Decision(bool? hasPlayerWon, Rule winningRule)
{
_HasPlayerWon = hasPlayerWon;
_WinningRule = winningRule;
}
public static Decision Decide(Item player, Item sheldon)
{
var rule = FindWinningRule(player, sheldon);
if (rule != null)
{
return new Decision(true, rule);
}
rule = FindWinningRule(sheldon, player);
if (rule != null)
{
return new Decision(false, rule);
}
return new Decision(null, null);
}
private static Rule FindWinningRule(Item player, Item opponent)
{
return Rules.FirstOrDefault(r => r.Winner == player && r.Loser == opponent);
}
public override string ToString()
{
if (_HasPlayerWon == null)
{
return "Meh. Tie!";
}
else if (_HasPlayerWon == true)
{
return string.Format("{0}. You win!", _WinningRule);
}
else
{
return string.Format("{0}. You lose!", _WinningRule);
}
}
}
If you want to add another item to the game then you add another entry into the enum and some additional rules and you are done.
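The same rules-as-data idea is language-agnostic. For comparison, here is a compact Python sketch of it (the names and phrasing are my own, not part of the reviewed C# code):

```python
# Each rule reads (winner, loser) -> winning verb. Extending the game means
# adding entries here and nowhere else.
RULES = {
    ("Rock", "Scissors"): "crushes",
    ("Spock", "Rock"): "vaporizes",
    ("Paper", "Spock"): "disproves",
    ("Lizard", "Paper"): "eats",
    ("Scissors", "Lizard"): "decapitate",
    ("Spock", "Scissors"): "smashes",
    ("Lizard", "Spock"): "poisons",
    ("Rock", "Lizard"): "crushes",
    ("Paper", "Rock"): "covers",
    ("Scissors", "Paper"): "cut",
}

def decide(player, sheldon):
    if player == sheldon:
        return "Meh. Tie!"
    if (player, sheldon) in RULES:
        return "{0} {1} {2}. You win!".format(player, RULES[(player, sheldon)], sheldon)
    return "{0} {1} {2}. You lose!".format(sheldon, RULES[(sheldon, player)], player)
```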
One thing to improve with my version is that rules just effectively define winning rules and all other cases are implicitly ties which in the context of this game makes sense but could be made more explicit. | {
"domain": "codereview.stackexchange",
"id": 5953,
"tags": "c#, game, dependency-injection, community-challenge, rock-paper-scissors"
} |
How can you prove generalised Jacobi identity? | Question: This
$$[A, [B,C]] + [C, [A,B]] + [B, [C,A]] = 0$$
This proof is not wanted, since there is an attachment of some variables.
How can you prove the equation?
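A quick numerical sanity check, with random matrices standing in for the abstract operators, shows the identity holding:

```python
import numpy as np

def comm(X, Y):
    # Commutator [X, Y] = XY - YX.
    return X @ Y - Y @ X

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

# [A,[B,C]] + [C,[A,B]] + [B,[C,A]]: after expansion, the twelve products
# cancel in pairs, so the sum is the zero matrix (up to rounding).
J = comm(A, comm(B, C)) + comm(C, comm(A, B)) + comm(B, comm(C, A))
```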
Answer: You can use the definition of the QM commutator $[X,Y]=XY-YX$, then expand all the commutators and simplify. | {
"domain": "physics.stackexchange",
"id": 9503,
"tags": "quantum-mechanics, homework-and-exercises"
} |
Where does the galactic electromagnetic radiation come from? | Question: What mechanisms or physical processes are responsible for the emission of electromagnetic radiation in a galaxy in the different wavelength bands?
Answer: Most of the non-dark-matter mass of the galaxy is in the form of plasma, neutron stars, and black holes. Almost all of the gamma ray photons originate from the nuclei of the various elements in the plasma as they fuse together to form new elements, from matter-antimatter annihilation, from gamma ray burst events, or from emissions of nucleons relaxing back to the ground quantum states of nuclear isomers.
X-rays can also originate from relaxing nucleons, but for lower quantum energy drops than gamma rays.
A good portion of the x-ray spectrum comes from plasma and electrons being accelerated into black holes or neutron stars but colliding with charged plasma on the way. In fact, these x-ray emissions from Cygnus x-1 led to the first evidence of black holes.
Although photons of almost every frequency are emitted by supernova explosions, a huge proportion of the spectrum emitted is in the x-ray range.
This is due, again, to the very great acceleration or deceleration (Bremsstrahlung) of plasma nuclei and electrons through the magnetic and electric fields in the plasma resulting from the supernova explosions. The study of distant supernovae was one of the main reasons for launching the Chandra x-ray space telescope.
The rest of the lower frequencies of photons which include UV, visible, IR, and radio mainly include the following sources:
photons leaving stellar surfaces from being emitted as electrons rejoin the nuclei of cooled plasma
cosmic ray and magnetic field interactions
black body radiations (brown dwarfs, dead stars, non-nuclear reaction bodies)
left over microwave cosmic background radiation from big bang
radio waves from pulsars | {
"domain": "physics.stackexchange",
"id": 46424,
"tags": "electromagnetic-radiation, astrophysics, astronomy, spectroscopy"
} |
Sequence to extend and forget | Question: After receiving some feetback about my previous attempt to create reusable, easily extendable sequence generators I try once again to create such a framework.
This time there is no inheritance. Just extension methods.
The new core is the Sequence class:
public class Sequence
{
public Sequence(int count) { Count = count; }
public int Count { get; }
public static Sequence Generate(int count)
{
return new Sequence(count);
}
public IEnumerable<T> Generate<T>(Func<IEnumerable<T>> generator) => generator().Take(Count);
}
By itself it doesn't do much. You need to extend it with your extensions that hook up to it and provide it with the actual generator for your sequence.
Here are some examples for the Fibonacci sequence:
static class FibonacciSequenceExtensions
{
public static IEnumerable<T> Fibonacci<T>(this Sequence sequence, T firstTwo, T firstStep, Func<T, T, T> sum)
{
return sequence.Generate(() => FibonacciGenerator(firstTwo, firstStep, sum));
}
public static IEnumerable<int> Fibonacci(this Sequence sequence, int firstTwo, int firstStep)
{
return sequence.Generate(() => FibonacciGenerator(firstTwo, firstStep, (x, y) => x + y));
}
public static IEnumerable<TimeSpan> Fibonacci(this Sequence sequence, TimeSpan firstTwo, TimeSpan firstStep)
{
return sequence.Generate(() => FibonacciGenerator(firstTwo, firstStep, (x, y) => x + y));
}
private static IEnumerable<T> FibonacciGenerator<T>(T firstTwo, T firstStep, Func<T, T, T> sum)
{
var preview = firstTwo;
var current = sum(firstTwo, firstStep);
yield return preview;
yield return preview;
yield return current;
while (true)
{
var newCurrent = sum(preview, current);
yield return newCurrent;
preview = current;
current = newCurrent;
}
}
}
Then you can use it like this:
var result = Sequence.Generate(10).Fibonacci(3, 4).ToList();
What's the purpose of this? To have only a single class for all sequences. You don't have to remember where you've implemented some sequence. You can use them all via a single class.
In case someone is interested in how the final version looks, I put it here under the previous question, because it's the inheritance edition that I decided to keep.
Answer: The API looks ok, IMHO, except for this syntax:
Sequence.Generate(10).Generate(...); // huh?
The downside is that for every sequence, I have to write at least two methods: first, a method to generate the actual sequence (FibonacciGenerator()), and second, a method to wire it up to sequence.Generate (Fibonacci<T>()). This looks like a lot of extra code for what essentially could have been:
SequenceExtensions.FibonacciGenerator(...).Take(count);
Some middle ground can be probably achieved if you use Range instead:
Enumerable.Range(0, count) //extend this
.Fibonacci(3, 4);
This way one method should be enough in most cases. And you can either drop the Sequence class altogether or leave it as a pretty wrapper around Enumerable.Range or similar loop abstraction:
Sequence.Create(10) //returns basic 0..9 sequence (IEnumerable<int>)
.Fibonacci(3, 4) //returns Fibonacci sequence, using initial sequence as loop
I am still not convinced that all this boiler plate code is worth the effort though. But oh well. :)
P.S. I think your Fibonacci sequence implementation is a bit unconventional. There is no "step" in the classical implementation:
every number after the first two is the sum of the two preceding ones
So if I were to see Fibonacci(3, 4) in code, I would assume, that 3 and 4 are the first two numbers of the sequence, and that it will go as 3, 4, 7, 11 and not as 3, 3, 7, 10, 17 (3+3 =/= 7). | {
"domain": "codereview.stackexchange",
"id": 22928,
"tags": "c#, generics, fibonacci-sequence, extension-methods"
} |
Can bromine water be used to compare the reactivity of liquid 1-heptene and liquid heptane? | Question:
Describe how you would carry out a lab test to compare the reactivity of liquid 1-heptene and liquid heptane. State clearly what the possible results of the test are and what they would indicate.
Would you carry out the lab test using bromine?
Add about 1 mL (about twenty drops) of 1-heptene to one
container and about 1 mL of heptane to the other container.
Add an equal volume of bromine water to each of the liquids then
shake each container to the same extent.
Leave the containers to settle, then observe.
Would this process work for heptane and 1-heptene? Would the colour of the solution change?
Answer: According to this Wikipedia article:
"In electrophilic halogenation the addition of elemental bromine or
chlorine to alkenes yields vicinal dibromo- and dichloroalkanes
(1,2-dihalides or ethylene dihalides), respectively. The decoloration
of a solution of bromine in water is an analytical test for the
presence of alkenes."
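Concretely, the two outcomes you would expect to see are (a sketch, assuming the test is done at room temperature and out of direct light, so that radical substitution in the alkane is suppressed):

```
1-heptene:  CH3(CH2)4CH=CH2 + Br2  ->  CH3(CH2)4CHBr-CH2Br
            (addition across the double bond; the orange bromine colour fades)

heptane:    CH3(CH2)5CH3 + Br2  ->  no visible reaction
            (the orange colour of the bromine layer persists)
```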
Alkanes tend to be much less reactive toward bromine, so yes, your bromine water test should allow you to distinguish between heptane and 1-heptene. | {
"domain": "chemistry.stackexchange",
"id": 7602,
"tags": "organic-chemistry, hydrocarbons, reactivity, halogenation"
} |
Will floating point code return the same arithmetical results on two different computers? | Question: Say I am using boost or the built-in float or double mathematical libraries of my C++ compiler.
I distribute the program.
Will the execution of my C++ program on different machines give different floating point results, given that the FPU is CPU specific and AMD may not return exactly the same thing as Intel or ARM?
Should I use fixed-point algorithms to circumvent this possibility?
Would the answer be different if my program was written in something other than C++?
Answer: As long as you execute the same machine code on the different machines and as long as the settings for the floating point unit are identical, you will get identical results. However, you cannot execute the same machine code on both Intel and ARM, so this answer is only hypothetical. Even on different Intel processors you have to take special care that exactly the same machine code gets executed. Some third party libraries like Intel-MKL use different machine-specific code on different processors, and other libraries like FFTW measure speed at runtime, and select different algorithms (and hence different code) based on the outcome of these measurements.
Sometimes the compiler is able to inline a certain function (for example your rounding logic), and then you can end up with different results for identical input to that function at different parts of your code. Another very real source of non-reproducibility for distributed floating point computations is the potential reduction at the end of the computation. Because the result here depends on the order of the reduction steps, it is very easy to end up with slightly non-reproducible results.
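The reduction-order point is easy to demonstrate: floating point addition is not associative, so summing the same numbers in a different order can change the result. A minimal Python sketch (IEEE-754 doubles, the same arithmetic a C++ double uses on these platforms):

```python
a, b, c = 1e16, 1.0, 1.0

left = (a + b) + c   # each 1.0 is absorbed by the large intermediate: 1e16
right = a + (b + c)  # the 2.0 survives rounding: 1.0000000000000002e16

assert left != right  # same inputs, different reduction order, different sum
```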
I have run into these issues in the past, but never considered this to be a reason to switch to fixed-point algorithms. For robustness problems of "computational geometry" related code, on the other hand, I consider integer or fixed point arithmetic to be an appropriate solution.
Should I use fixed-point algorithms to circumvent this possibility?
In conclusion, there are several issues related floating point numbers, and even some problems where floating point numbers are not the most appropriate solution, but in general floating point arithmetic is still the way to go. So no, you should not abandon floating arithmetic in general, but only in the specific cases where it causes unsurmountable robustness problems. | {
"domain": "cs.stackexchange",
"id": 2982,
"tags": "compilers, floating-point, numerical-analysis"
} |
ROS Answers SE migration: touch screen? | Question:
hi there. i'm working on a robot prototype that has a touch screen as a user interface. i have a few questions. i've done some research in the last few days but i couldn't find any reliable source of information. i would love to get feedback from everyone who's done this before.
hardware: what type of touch screen manufacturer & product would you recommend that works well with ros & ubuntu? we're using an intel nuc i5 for our robot.
software: how do you go about building beautiful touch UI with ros & ubuntu? different from tablets (which already have well defined sdk and api like in android and ios), i'm not sure what the equivalents are in ros & ubuntu.
ros support: are there any ros packages that support touch UI yet?
any feedback, comment, advice is much appreciated.
Originally posted by d on ROS Answers with karma: 121 on 2014-08-20
Post score: 1
Answer:
A common approach is to simply use a tablet as a programmable touch screen interacting with a ROS based robot using rosjava.
That gives you the nice touchscreen SDK/API/UI design guidelines which users are used to. And you don't need to constrain the hardware that runs on the rest of the machines.
There are a lot of computers out there which support touch screen displays. However always check for reviews about linux compatibility. If it's linux compatible you should be able to use it with ROS relatively easily.
The only ROS ui type packages I know of is ros_osc but that's probably not what you're looking for.
Originally posted by tfoote with karma: 58457 on 2014-08-21
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 19129,
"tags": "ros"
} |
class gazebo::physics::World’ has no member named ‘GetModel’; did you mean ‘Models’? | Question:
Hi All,
I am very new to Gazebo and ROS.
I have the following code snippet in my Ros World Plugin:
void Load(physics::WorldPtr _world, sdf::ElementPtr _sdf)
{
physics::ModelPtr ballmodel;
ballmodel = _world->GetModel("ball");
Any idea why I always get a error like
class gazebo::physics::World’ has no member named ‘GetModel’; did you mean ‘Models’?
I am using Ubuntu 18, Melodic and Gazebo 9.
Any idea?
Thanks in advance, Peter
EDIT
No idea what's going on, but the ball is not found :-(
class SimulatorPlugin : public WorldPlugin
{
void Load(physics::WorldPtr _world, sdf::ElementPtr _sdf)
{
sdf::SDF ballSDF;
ballSDF.SetFromString(
"<sdf version ='1.6'>\
<model name='ball'>\
<pose>0 0 0 0 0 0</pose>\
<include>\
<uri>/home/peter/gazebo_plugin/models/my_robocup_spl_ball</uri>\
</include>\
</model>\
</sdf>");
sdf::ElementPtr ballmodel = ballSDF.Root()->GetElement("model");
ballmodel->GetAttribute("name")->SetFromString("ball");
_world->InsertModelSDF(ballSDF);
physics::ModelPtr ballmodelPtr = _world->ModelByName("ball");
if (ballmodelPtr != NULL)
{
printf("ball found \n");
}
_world->LoadPlugin("librifball.so", "rifball", ballmodel);
}
}
Originally posted by PeterHer on Gazebo Answers with karma: 15 on 2019-08-20
Post score: 0
Answer:
This is the API documentation for Gazebo 9. The method you are looking for is ModelByName(name).
reaction to the EDIT
The use of the API looks right. One thing that comes to mind is that the retrieval of the ball model happens right after the call that inserts the model into the world. Is it possible the model is still being loaded when you want to get the pointer to it?
What happens if you wait until the model is fully loaded? Something like
class SimulatorPlugin : public WorldPlugin
{
void Load(physics::WorldPtr _world, sdf::ElementPtr _sdf)
{
sdf::SDF ballSDF;
ballSDF.SetFromString(
"<sdf version ='1.6'>\
<model name='ball'>\
<pose>0 0 0 0 0 0</pose>\
<include>\
<uri>/home/peter/gazebo_plugin/models/my_robocup_spl_ball</uri>\
</include>\
</model>\
</sdf>");
sdf::ElementPtr ballmodel = ballSDF.Root()->GetElement("model");
ballmodel->GetAttribute("name")->SetFromString("ball");
_world->InsertModelSDF(ballSDF);
physics::ModelPtr ballmodelPtr = _world->ModelByName("ball");
while(ballmodelPtr == NULL)
{
ballmodelPtr = _world->ModelByName("ball");
}
printf("ball found \n");
_world->LoadPlugin("librifball.so", "rifball", ballmodel);
}
}
EDIT 2
The solution shown above, waiting in a loop for the model to be loaded, cannot work. The while loop inside the Load function would block all the other processes, so the model (or anything else) would never be loaded.
See this answer for the correct solution: https://answers.gazebosim.org/question/24887/how-to-wait-until-model-is-loaded-while-not-blocking-the-other-processes/?answer=24890#post-id-24890
Originally posted by kumpakri with karma: 755 on 2019-08-21
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by PeterHer on 2019-08-21:
Thanks, I was looking at the wrong API
I have tried your code, but it falls into an endless loop. Skipping the while(), this code reports that the world is loaded and only has one Model, named: ground_plane
void Init()
{
printf("IsLoaded: %d \n", my_worldPtr->IsLoaded());
printf("ModelCount: %d \n", my_worldPtr->ModelCount());
}
Maybe this has something to do with it when I run the sim. [Err] [REST.cc:205] Error in REST request
Comment by PeterHer on 2019-08-26:
Or maybe this: libcurl: (51) SSL: no alternative certificate subject name matches target host name 'api.ignitionfuel.org'
Comment by PeterHer on 2019-08-26:
BTW, the ball is visible in Gazebo and listed under Models
Comment by kumpakri on 2019-08-26:
@PeterHer try this https://www.youtube.com/watch?v=ftDz_EVoatw for the REST error. Otherwise I'm out of ideas. I think it is a good idea to create a new question for this issue. You will get more attention and people will be more willing to help you, if they don't see any answers to your question. | {
"domain": "robotics.stackexchange",
"id": 4426,
"tags": "gazebo-9"
} |
Splitting plain text dictionary data to multiple files, round 2 | Question: This is a continuation of my earlier question.
I have a plain text file with the content of a dictionary (Webster's Unabridged Dictionary) in this format:
A
A (named a in the English, and most commonly ä in other languages).
Defn: The first letter of the English and of many other alphabets.
The capital A of the alphabets of Middle and Western Europe, as also
the small letter (a), besides the forms in Italic, black letter,
etc., are all descended from the old Latin A, which was borrowed from
the Greek Alpha, of the same form; and this was made from the first
letter (Aleph, and itself from the Egyptian origin. The Aleph was a
consonant letter, with a guttural breath sound that was not an
element of Greek articulation; and the Greeks took it to represent
their vowel Alpha with the ä sound, the Phoenician alphabet having no
vowel symbols. This letter, in English, is used for several different
vowel sounds. See Guide to pronunciation, §§ 43-74. The regular long
a, as in fate, etc., is a comparatively modern sound, and has taken
the place of what, till about the early part of the 17th century, was
a sound of the quality of ä (as in far).
2. (Mus.)
Defn: The name of the sixth tone in the model major scale (that in
C), or the first tone of the minor scale, which is named after it the
scale in A minor. The second string of the violin is tuned to the A
in the treble staff.
-- A sharp (A#) is the name of a musical tone intermediate between A
and B.
-- A flat (A) is the name of a tone intermediate between A and G.
A per se Etym: (L. per se by itself), one preëminent; a nonesuch.
[Obs.]
O fair Creseide, the flower and A per se Of Troy and Greece. Chaucer.
A
A, prep. Etym: [Abbreviated form of an (AS. on). See On.]
1. In; on; at; by. [Obs.] "A God's name." "Torn a pieces." "Stand a
tiptoe." "A Sundays" Shak. "Wit that men have now a days." Chaucer.
"Set them a work." Robynson (More's Utopia)
2. In process of; in the act of; into; to; -- used with verbal
substantives in -ing which begin with a consonant. This is a
shortened form of the preposition an (which was used before the vowel
sound); as in a hunting, a building, a begging. "Jacob, when he was a
dying" Heb. xi. 21. "We'll a birding together." " It was a doing."
Shak. "He burst out a laughing." Macaulay. The hyphen may be used to
connect a with the verbal substantive (as, a-hunting, a-building) or
the words may be written separately. This form of expression is now
for the most part obsolete, the a being omitted and the verbal
substantive treated as a participle.
MALAY
Ma*lay", n.
Defn: One of a race of a brown or copper complexion in the Malay
Peninsula and the western islands of the Indian Archipelago.
MALAY; MALAYAN
Ma*lay", Ma*lay"an, a.
Defn: Of or pertaining to the Malays or their country.
-- n.
Defn: The Malay language. Malay apple (Bot.), a myrtaceous tree
(Eugenia Malaccensis) common in India; also, its applelike fruit.
MALAYALAM
Ma"la*ya"lam, n.
Defn: The name given to one the cultivated Dravidian languages,
closely related to the Tamil. Yule.
MALBROUCK
Mal"brouck, n. Etym: [F.] (Zoöl.)
Defn: A West African arboreal monkey (Cercopithecus cynosurus).
I want to convert this file to a different format to make it easier and more efficient to search in it. This is the idea:
Split the file into a collection of entries
Save each entry in its own file:
Not all in the same directory (100k+ files), split to multiple subdirs
Create an index file
One line per entry, in the format: FILENAME:WORD
I implemented the suggestions in @jcollado's excellent previous answer, and made some other improvements:
Refactored the functions, use parse_content generator to return term, content pairs
Added a count for terms that appear multiple times, in the form of -1, -2, -3, ... appended (to display later by GUI apps as subscripts)
Changed the directory splitting logic, because there were still hundreds of files in most output directories. For example now the word greeting will go in g/gr/gre/greeting-NUM.txt instead of gr/greeting-NUM.txt
Replaced OptionParser with ArgumentParser
Made it easier to debug, with the --dry-run and --max-count options
Fixed some minor bugs
This is the script I'm using now:
#!/usr/bin/env python
import re
import os
import logging
from argparse import ArgumentParser
DATA_DIR = 'data'
INDEX_PATH = os.path.join(DATA_DIR, 'index.dat')
re_entry_start = re.compile(r'[A-Z][A-Z0-9 ;\'-.,]*$')
re_nonalpha = re.compile(r'[^a-z]')
def write_entry_file(dirname, filename, content):
basedir = os.path.join(DATA_DIR, dirname)
if not os.path.isdir(basedir):
os.makedirs(basedir)
path = os.path.join(basedir, filename)
with open(path, 'w') as fh:
fh.write('\n'.join(content) + '\n')
def is_new_term(line, prev_line_blank):
return re_entry_start.match(line) and prev_line_blank and ' ' not in line
def parse_content(arg):
prev_line_blank = True
term = None
term_count = 0
content = []
with open(arg) as fh:
for line0 in fh:
line = line0.strip()
if is_new_term(line, prev_line_blank):
if term:
for term in term.split('; '):
yield term, content
prev_term = term
term = line.lower()
if term == prev_term:
term_count += 1
subscript = '-' + str(term_count)
else:
term_count = 1
subscript = ''
content = [term + subscript]
else:
content.append(line)
prev_line_blank = not line
def get_split_path(term, count):
slug = re_nonalpha.sub('_', term.lower()).ljust(3, '_')
dirname = os.path.join(slug[0], slug[:2], slug[:3])
filename = '{}-{}.txt'.format(slug, count)
return dirname, filename
def parse_file(arg, dry_run=False, max_count=0):
def rebuild_index():
count = 0
for term, content in parse_content(arg):
count += 1
if max_count and count > max_count:
break
dirname, filename = get_split_path(term, count)
entry = '{}/{}:{}'.format(dirname, filename, term)
logging.info(entry)
if not dry_run:
fh.write(entry + '\n')
write_entry_file(dirname, filename, content)
if dry_run:
rebuild_index()
else:
if not os.path.isdir(DATA_DIR):
os.makedirs(DATA_DIR)
with open(INDEX_PATH, 'w') as fh:
rebuild_index()
def main():
parser = ArgumentParser(description='Generate index and entry files '
'from cleaned plain text file')
parser.add_argument('--dry-run', '-d', '-n', action='store_true',
help="Dry run, don't write to files")
parser.add_argument('--max-count', '-c', type=int,
help="Exit after processing N records")
parser.add_argument('files', help="File(s) to parse", nargs='+')
args = parser.parse_args()
logging.basicConfig(level=logging.INFO,
format='%(levelname)s: %(message)s')
for arg in args.files:
parse_file(arg, dry_run=args.dry_run, max_count=args.max_count)
if __name__ == '__main__':
main()
It creates an index file like this:
a/a_/a__/a__-1.txt:a
a/a_/a__/a__-2.txt:a
a/a_/a__/a__-3.txt:a
a/a_/a__/a__-4.txt:a
a/a_/a__/a__-5.txt:a
a/a_/a__/a__-6.txt:a
a/a_/a__/a__-7.txt:a-
a/a_/a__/a__-8.txt:a 1
a/aa/aam/aam-9.txt:aam
a/aa/aar/aard_vark-10.txt:aard-vark
a/aa/aar/aard_wolf-11.txt:aard-wolf
a/aa/aar/aaronic-12.txt:aaronic
a/aa/aar/aaronical-13.txt:aaronical
a/aa/aar/aaron_s_rod-14.txt:aaron's rod
a/ab/ab_/ab_-15.txt:ab-
a/ab/ab_/ab_-16.txt:ab
a/ab/aba/abaca-17.txt:abaca
a/ab/aba/abacinate-18.txt:abacinate
a/ab/aba/abacination-19.txt:abacination
a/ab/aba/abaciscus-20.txt:abaciscus
a/ab/aba/abacist-21.txt:abacist
I like it much better than the previous version, but as I've made significant changes, I'm wondering:
Any new mistakes I added?
Anything else I can still do better?
The full input file (cleaned data) is here (10 MB download, 27 MB unzipped).
The open-source project is on GitHub.
Answer: The code looks good and it's been improved in multiple ways (argparse, index file opened just once, dry-run flag instead of debug one, ...). Hence, this review is going to focus on small details:
Please add docstrings to document the code. This will make the code more readable and make it easier for potential collaborators on GitHub to join the project.
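For instance, a docstring for get_split_path might look like this (the body is copied verbatim from the posted script; only the docstring is new):

```python
import os
import re

re_nonalpha = re.compile(r'[^a-z]')

def get_split_path(term, count):
    """Return (dirname, filename) for one dictionary entry.

    The term is slugified (non-letters become '_', padded to at
    least three characters) and the first one, two and three
    characters of the slug pick the nested subdirectory, so
    ('greeting', 7) maps to ('g/gr/gre', 'greeting-7.txt').
    """
    slug = re_nonalpha.sub('_', term.lower()).ljust(3, '_')
    dirname = os.path.join(slug[0], slug[:2], slug[:3])
    filename = '{}-{}.txt'.format(slug, count)
    return dirname, filename
```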
Try to order the imports alphabetically. Not strictly needed, but it is a nice touch.
For a better separation of concerns, write_entry_file shouldn't take care of joining content. This should happen earlier in parse_file.
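A sketch of that split (keeping the DATA_DIR layout from the posted script): write_entry_file receives an already-joined string, and the '\n'.join moves to the caller:

```python
import os

DATA_DIR = 'data'  # same constant as in the posted script

def write_entry_file(dirname, filename, text):
    """Write the pre-joined entry text to DATA_DIR/dirname/filename."""
    basedir = os.path.join(DATA_DIR, dirname)
    if not os.path.isdir(basedir):
        os.makedirs(basedir)
    with open(os.path.join(basedir, filename), 'w') as fh:
        fh.write(text)

# caller side, e.g. in rebuild_index:
# write_entry_file(dirname, filename, '\n'.join(content) + '\n')
```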
The count variable in rebuild_index seems to be a good candidate for enumerate:
for count, (term, content) in enumerate(parse_content(arg)):
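One caveat worth flagging (my observation, not part of the original review): the original count starts at 1 and is compared against max_count, so enumerate's start parameter is needed to keep the numbering and the cut-off identical:

```python
def numbered_entries(pairs, max_count=0):
    """Yield (count, term, content) with 1-based counts, stopping
    after max_count entries when max_count is non-zero."""
    for count, (term, content) in enumerate(pairs, start=1):
        if max_count and count > max_count:
            break
        yield count, term, content
```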
The dry_run check is in a couple of places in parse_file and might be confusing at first. What about something like:
def parse_file(arg, dry_run=False, max_count=0):
def rebuild_index():
count = 0
for term, content in parse_content(arg):
count += 1
if max_count and count > max_count:
break
dirname, filename = get_split_path(term, count)
entry = '{}/{}:{}'.format(dirname, filename, term)
logging.info(entry)
yield entry, dirname, filename, content
if dry_run:
for _index_data in rebuild_index():
pass
else:
if not os.path.isdir(DATA_DIR):
os.makedirs(DATA_DIR)
with open(INDEX_PATH, 'w') as fh:
for entry, dirname, filename, content in rebuild_index():
fh.write(entry + '\n')
write_entry_file(dirname, filename, content)
The argument parsing code should be in a separate function and main should take the arguments. This way, main is testable in case you want to write test cases in the future:
if __name__ == '__main__':
args = parse_arguments()
main(args) | {
"domain": "codereview.stackexchange",
"id": 9043,
"tags": "python, algorithm, parsing"
} |
intractable problem example? | Question: I have been doing some research into the limits of computation and I have come across the terms intractable and undecidable. Are these two terms the same thing?
What does intractable mean and what is an example of an intractable and undecidable problem?
Answer: No, the two terms are not the same. Tractable usually means decidable in polynomial time, while undecidable means no algorithm can solve the problem at all, regardless of running time. Under that definition, NP-complete problems such as SAT appear to be intractable, but they're still decidable; the halting problem, by contrast, is undecidable. | {
"domain": "cs.stackexchange",
"id": 5209,
"tags": "complexity-theory, computability"
} |
Opening a list of URLs and splitting the queries into different files | Question: I recently made a program in Python to open a list of URLs and split the queries into different files. I want to make this code more generic and simple. I am open to suggestions.
def parse_file():
# Open the file for reading
infile = open("URLlist.txt", 'r')
# Read every single line of the file into an array of lines
lines = infile.readlines()
# For every line in the array of lines, do something with that line
for line in lines: # for every line in lines.
print line # print the URL
# parse the URL query string list
# if the URL has a query component, print out the first
# argument (k) and, if the URL has multiple arguments, print out
# the other arguments (v)
for k,v in urlparse.parse_qsl(urlparse.urlparse(line).query):
print k, v
if k == "blog": # if the printed out arguments have blog in it
# write it out to the apropriate file and do this
# to all the others.
with open('blog_file.txt'.format(),'a') as f:
f.write(v)
elif k == 'p':
with open('p_file.txt'.format(),'a') as f:
f.write(v)
elif k == 'attachment_id':
with open('attachment_id_file.txt'.format(),'a') as f:
f.write(v)
elif k == 'lang':
with open('lang_file.txt'.format(),'a') as f:
f.write(v)
elif k == 'portfolio':
with open('portfolio_file.txt'.format(),'a') as f:
f.write(v)
elif k == 'page_id':
with open('page_id_file.txt'.format(),'a') as f:
f.write(v)
elif k == 'comments_popup':
with open('comments_popup_file.txt'.format(),'a') as f:
f.write(v)
elif k == 'iframe':
with open('iframe_file.txt'.format(),'a') as f:
f.write(v)
elif k == 'width':
with open('width_file.txt'.format(),'a') as f:
f.write(v)
elif k == 'height':
with open('height_file.txt'.format(),'a') as f:
f.write(v)
print "--------------------"
infile.close()
f.close()
parse_file()
Answer: Before We Get Started
It would be good if you could provide some examples of the URLs that you are looking to parse; I did a cursory search and couldn't coerce a query result from any that I tried with:
urlparse.parse_qsl(urlparse.urlparse("some_url").query)
In the interest of brevity I'll assume you have some set of URLs that this is working for.
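For reference, a hypothetical WordPress-style URL of the shape the script targets does yield query pairs (shown here with urllib.parse, the Python 3 home of the urlparse module the script uses):

```python
from urllib.parse import parse_qsl, urlparse

# example.com and the query values are made up for illustration
url = 'http://example.com/?blog=7&p=123'
pairs = parse_qsl(urlparse(url).query)
print(pairs)  # [('blog', '7'), ('p', '123')]
```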
Code Review
Step one in making your code more generic will be to remove the hard-coded URL text file from your function body. I would suggest it be a parameter to the parse_file function.
Also, You don't need to do this:
lines = infile.readlines()
for line in lines:
.readlines() isn't necessary in how it is being used here and the file can be read without it. You obviously know about the With Statement and how to use it as a context manager - but you're still opening and closing your URL list file manually!
infile = open("URLlist.txt", 'r')
...
infile.close()
Instead, you can alias the opened file in your With Statement and it will be iterable without .readlines():
with open(input_file) as input:
for line in input:
Also, if you find yourself using multiple if/elif clauses, you might consider it time to look into some of Python's more interesting data structures; below I'm using a dictionary.
You can replace your predicate:
if k == 'blog':
and instead use a key in a dictionary with the value corresponding to the subsequent procedure.
{'blog' : "blog_file.txt"}
Revision
Below is a possible revision to your code:
import urlparse
def parse_file(input_file):
tags = {'blog' : "blog_file.txt",
'p' : "p_file.txt",
'attachment_id' : "attachment_id_file.txt",
'lang' : "lang_file.txt",
'portfolio' : "portfolio_file.txt",
'page_id' : "page_id_file.txt",
'comments_popup' : "comments_popup_file.txt",
'iframe' : "iframe_file.txt",
'width' : "width_file.txt",
'height' : "height_file.txt"}
with open(input_file) as input:
for line in input:
parsed_url = urlparse.parse_qsl(urlparse.urlparse(line).query)
if len(parsed_url) > 0:
for key, value in parsed_url:
if key in tags:
with open(tags[key], 'a') as output_file:
output_file.write(value)
else:
print key + " not in tags."
else:
print(line + " does not yield query.")
I think this should work as you describe, but without a set of example URLs it's difficult to say for sure. The line:
if len(parsed_url) > 0:
is used here because I couldn't coerce .query to return anything but an empty list on those URLs I attempted.
Final Thought
It might be a good idea to change the functionality for those keys not found in your tags dictionary. You might consider creating a miscellaneous, or catch-all file that unmatched values get written to. That way you don't totally lose work on those unmatched URLs.
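A minimal sketch of that catch-all idea (the misc_file.txt name is my invention) using dict.get with a default:

```python
def output_file_for(key, tags, default='misc_file.txt'):
    """Map a query key to its output file, falling back to a catch-all."""
    return tags.get(key, default)
```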
This is all based on the assumption that you have some domain that your code is currently working for. | {
"domain": "codereview.stackexchange",
"id": 4813,
"tags": "python, parsing, generics, url, file"
} |
Fitting multiple piecewise functions to data and return functions and derivatives as Fortran code | Question: Background
For a future workshop I'll have to fit arbitrary functions (independent variable is height z) to data from multiple sources (output of different numerical weather prediction models) in a yet unknown format (but basically gridded height/value pairs). The functions only have to interpolate the data and be differentiable. There should explicitly be no theoretical background for the type of function, but they should be smooth. The goal is to use the gridded (meaning discrete) output of the numerical weather prediction model in our pollutant dispersion model, which requires continuous functions.
Workflow
choose the input model
load input data
define list of variables (not necessarily always the same)
define height ranges (for the piecewise function)
define base functions like "a0 + a1*z" for each height range and variable
optionally define weights, because some parts are more important than others
fit the piecewise functions
save the fitted functions and their derivatives as Fortran 90 free form source code (to be included in our model)
I don't think 1.-6. can be automated, but the rest should be.
Code
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
from sympy import log, ln, Piecewise, lambdify, symbols, sympify, fcode, sqrt
def config(name):
"""Configuration of the piecewise function fitting, dependent on input name
Input:
name... name of experiment to fit data to, basically chooses settings
Output:
var_list... list of variables to fit
infunc_list_dict... dictionary with var_list as keys, each having a list as
value that contains strings with the sub-function to fit, from
the bottom up. Only the first (lowest) may have a constant value, all
others must be 0 at the height they "take over" (where their argument
is 0). There, the value of the lower, fitted function is added to
ensure continuity. The parameters for each function HAVE to be of the
pattern "aX", where "X" is numerically increasing (0, 1, 2...) within
each sub-function.
The arguments of aloft functions (not the bottom most) are usually
"z - t", unless there is some trickery with "s"
A constant, first sub-function is 'a0', while constant sub-function
aloft has to be '0' for technical reasons.
Variables replaced by values:
- t... current threshold height
- s... transition value at height t
- zi.. boundary layer height
thresh_list_dict... dictionary with var_list as keys, each having a list as
value that contains the height where the piecewise functions change.
for technical reasons the ground (0) and the top (np.inf) are also
included.
weight_list_dict... dictionary with var_list as keys, each having a list as
value that contains relative weights (to 1) that are used to force the
fitting to be closer to the real value at crucial points. This is
around the threshold heights, at the ground and at the ABL. To "turn
off" a weight, set it to 1. The first weight is at the ground and then
there are two around each threshold height and the last at the top.
i.e: [ground,
lower-of-thresh0, upper-of-thresh0,
lower-of-thresh1, upper-of-thresh1,
...
top]
the first function uses ground and lower-of-thresh0,
the second uses upper-of-thresh0 and lower-of-thresh1 until
the last uses lower-of-threshI and top
wefact_list_dict... analog to weight_list_dict, except that it contains
the relative distance where the weight in weight_list_dict is applied.
Relative distance means here: fraction of the total subrange. Typical
values are 0.1 or 0.2, meaning 10 or 20% of the total subrange take the
accompanying weight. If the corresponding weight equals 1, the value
has no influence.
teston... True: create plots; False: don't
saveon... True: don't show plots, save them as pdfs (only if teston==True).
printon... True: print output to console; False: don't
"""
teston = True
saveon = False
printon = False
# ========= TMP220 =========
if name == 'tmp220':
abl_height = 990
var_list = ['um', 'u2', 'v2', 'w2', 'w3', 'uw', 'eps']
infunc_list_dict = {
'um': ['a0*ln(z-t)**3 + a1*ln(z-t)**2 + a2*ln(z-t) + a3'],
'u2': ['a0 + a1*(z-t) + a2*(z-t)**2 + a3*(z-t)**3 + a4*(z-t)**4 + a5*(z-t)**5',
'a0*(z-t) + a1*(z-t)**2'],
'v2': ['a0 + a1*(z-t) + a2*(z-t)**2 + a3*(z-t)**3 + a4*(z-t)**4 + a5*(z-t)**5',
'a0*(z-t) + a1*(z-t)**2'],
'w2': ['a0 + a1*(z-t) + a2*(z-t)**2 + a3*(z-t)**3 + a4*(z-t)**4 + a5*(z-t)**5',
'a0*(z-t) + a1*(z-t)**2'],
'w3': ['a0',
'0'],
'uw': ['a0 + a1*(z-t) + a2*(z-t)**2 + a3*(z-t)**3 + a4*(z-t)**4 + a5*(z-t)**5',
'a0*(z-t) + a1*(z-t)**2 + a2*(z-t)**3 + a3*(z-t)**4'],
'eps': ['a0 + a1*(z-t) + a2*(z-t)**2 + a3*(z-t)**3 + a4*(z-t)**4 + a5*(z-t)**5',
'a0*(z-t)**a1 + a2*(z-t)**3 + a3*(z-t)**2 + a4*(z-t)**4 + a5*(z-t)**6'],
}
thresh_list_dict = {
'um': [0.0, np.inf],
'u2': [0.0, 12.5, np.inf],
'v2': [0.0, 12.5, np.inf],
'w2': [0.0, 12.5, np.inf],
'w3': [0.0, 12.5, np.inf],
'uw': [0.0, 12.5, np.inf],
'eps': [0.0, 12.5, np.inf],
}
weight_list_dict = {
'um': [100, 1],
'u2': [100, 5000, 1, 1],
'v2': [100, 5000, 1, 1],
'w2': [100, 5000, 1, 1],
'w3': [100, 5000, 1, 1],
'uw': [100, 5000, 1, 1],
'eps': [100, 5000, 1, 1],
}
wefact_list_dict = {
'um': [0.2, 0.1],
'u2': [0.2, 0.2, 0.1, 0.1],
'v2': [0.2, 0.2, 0.1, 0.1],
'w2': [0.2, 0.2, 0.1, 0.1],
'w3': [0.2, 0.2, 0.1, 0.1],
'uw': [0.2, 0.2, 0.1, 0.1],
'eps': [0.2, 0.2, 0.1, 0.1],
}
#elif name == 'SOMETHING ELSE': analog to above, omitted for brevity
else:
raise ValueError('Unsupported name, configure in config()')
return (var_list, abl_height, infunc_list_dict, thresh_list_dict,
weight_list_dict, wefact_list_dict, teston, saveon, printon)
def read_scm_data(name_str):
"""This routines reads in the profiles from the SCMs
Input: # TODO (depends on their format), for now dummy data
Output: dataframe: z, u2, v2, w2, w3, uw, um, eps
"""
# TODO: add actual read routine, this is just dummy input
if name_str == 'tmp220':
out = pd.read_csv('tmp220.csv', delimiter=',')
#elif name_str == 'SOMETHING ELSE': as above, omitted for brevity
else:
raise ValueError('Unknown name, configure in read_scm_data()')
return out
def test_fit(name, var_list, func_dict, data, saveon):
"""plot of data vs fitted functions
"""
# Omitted for brevity, not that relevant
def fit_func(var, abl_height, data_z, data_v, infunc_str_list,
thresh_list, weight_list, wefact_list):
"""Converts the piecewise defined functions in infunc_str_list with the
thresholds in thresh_list (and the weights defined by weight_list and
wefact_list) to a SymPy expression and fits it to (data_z, data_v), where
data_z is height and data_v are the values in each height. Returns the
piecewise SymPy function with substituted parameters.
"""
z = symbols('z')
y_list = [] # holds the subfunctions
niterations = 20000
# transition_value holds the value that is added to each sub-function
# to ensure a continuous function. this is obviously 0 for the first
# subfunction and equal to the value of the previous sub-function at the
# threshold height for each subsequent sub-function.
transition_value = 0
# for each piece of the function:
for i, func_str in enumerate(infunc_str_list):
# find number of parameters and create those SymPy objects
nparams = func_str.count('a')
a = symbols('a0:%d' % nparams)
t = symbols('t') # transition height
s = symbols('s') # transition value
zi = symbols('zi') # boundary layer height
# check the string and create the sympy expression
verify_func_str(var, func_str)
y_list.append(sympify(func_str))
# add the transition value and substitute the placeholder variables:
y_list[i] += transition_value
y_list[i] = y_list[i].subs(t, thresh_list[i])
y_list[i] = y_list[i].subs(s, transition_value)
y_list[i] = y_list[i].subs(zi, abl_height)
# lambdify the sympy-expression with a somewhat ugly hack:
t = [z]
for j in range(nparams):
t.append(a[j])
func = lambdify(tuple(t), y_list[i], modules=np)
# create the correct subset of the data
local_index = data_z > thresh_list[i] & data_z < thresh_list[i + 1]
local_z = data_z[local_index]
local_v = data_v[local_index]
# create the weight arrays. they have the same size as the local_z and
# are 1 everywhere except the range defined with wefact, where they
# are the specified weight. see config() for definitions.
weight = np.ones_like(local_z)
z_range = local_z[-1] - local_z[0]
lower_weight_lim = local_z[0] + wefact_list[2*i] * z_range
upper_weight_lim = local_z[-1] - wefact_list[2*i + 1] * z_range
weight[local_z < lower_weight_lim] = weight_list[2*i]
weight[local_z > upper_weight_lim] = weight_list[2*i + 1]
sigma = 1. / weight
# fit the function to the data, checking for constant function aloft:
if nparams > 0:
popt, pcov = curve_fit(func, local_z, local_v, sigma=sigma,
maxfev=niterations)
# substitute fitted parameters in sympy expression:
for j in range(nparams):
y_list[i] = y_list[i].subs(a[j], popt[j])
# calculate the new transition_value:
if nparams > 0:
transition_value = func(thresh_list[i + 1], *popt)
else:
transition_value = func(thresh_list[i + 1])
# After all sub-functions are fitted, combine them to a piecewise function.
# This is a terrible hack, but I couldn't find out how to create piecewise
# functions dynamically...
if len(y_list) == 1:
y = y_list[0]
elif len(y_list) == 2:
y = Piecewise((y_list[0], z <= thresh_list[1]),
(y_list[1], True))
elif len(y_list) == 3:
y = Piecewise((y_list[0], z <= thresh_list[1]),
(y_list[1], z <= thresh_list[2]),
(y_list[2], True))
elif len(y_list) == 4:
y = Piecewise((y_list[0], z <= thresh_list[1]),
(y_list[1], z <= thresh_list[2]),
(y_list[2], z <= thresh_list[3]),
(y_list[3], True))
elif len(y_list) == 5:
y = Piecewise((y_list[0], z <= thresh_list[1]),
(y_list[1], z <= thresh_list[2]),
(y_list[2], z <= thresh_list[3]),
(y_list[3], z <= thresh_list[4]),
(y_list[4], True))
else:
raise ValueError('More than five sub-functions not implemented yet')
return y
def create_deriv(funcname, func):
"""Creates the derivative of the function, taking into account that v2 has
two "derivatives".
careful: returns tuple of two functions if funcname==v2, else one function
"""
z = symbols('z')
if funcname != 'v2':
return func.diff(z)
else:
deriv = func.diff(z)
deriv_sig = sqrt(func).diff(z)
return (deriv, deriv_sig)
def verify_input(name, infunc_list, thresh_list,
weight_list, wefact_list):
"""rudimentary checks if the functions, weights and thresholds are faulty
"""
nfuncs = len(infunc_list)
if len(thresh_list) != nfuncs + 1:
raise ValueError('Number of functions and thresholds disagree in ' +
name)
if len(weight_list) != nfuncs * 2:
raise ValueError('Number of functions and weights disagree in ' +
name)
if len(wefact_list) != nfuncs * 2:
raise ValueError('Number of functions and weight factors disagree' +
' in ' + name)
def verify_func_str(var, func_str):
"""Checks if the function string has linearly increasing parameters,
starting with 0 (i.e. "a0, a1, a2..."), because otherwise there is only
a cryptic error in minpack.
"""
index_list = []
for c, char in enumerate(func_str):
if char == 'a':
index_list.append(int(func_str[c+1]))
if list(range(0, len(index_list))) != index_list:
raise ValueError(func_str + ' has non-monotonically increasing' +
'parameter indices or does not start with a0' +
' in variable ' + var)
def main(name, var_list, abl_height, infunc_list_dict, thresh_list_dict,
weight_list_dict, wefact_list_dict, teston, saveon, printon):
"""Start routines, print output (if printon), save Fortran functions in a
file and start testing (if teston)
"""
func_dict, deri_dict = {}, {}
# file_str stores everything that is written to file in one string
file_str = '!' + 78*'=' + '\n' + '! ' + name + '\n!' + 78*'=' + '\n'
if printon:
print(' ' + 78*'_' + ' ')
print('|' + 78*' ' + '|')
print('|' + 30*' ' + name + (48-len(name))*' ' + '|')
print('|' + 78*' ' + '|')
data = read_scm_data(name)
for var in var_list:
verify_input(name, infunc_list_dict[var], thresh_list_dict[var],
weight_list_dict[var], wefact_list_dict[var])
if printon:
print('! ----- ' + var)
file_str += '! ----- ' + var + '\n'
# use data.z.values to get rid of the pandas-overhead and because
# some stuff is apparently not possible otherwise (like data.z[-1])
func_dict[var] = fit_func(var, abl_height, data.z.values,
data[var].values, infunc_list_dict[var],
thresh_list_dict[var], weight_list_dict[var],
wefact_list_dict[var])
func_fstr = fcode(func_dict[var], source_format='free', assign_to=var,
standard=95)
if printon:
print(func_fstr)
file_str += func_fstr + '\n'
if var != 'v2':
deri_dict[var] = create_deriv(var, func_dict[var])
deri_fstr = fcode(deri_dict[var], source_format='free',
assign_to='d'+var)
if printon:
print(deri_fstr)
file_str += deri_fstr + '\n\n'
else:
deri_dict[var], deri_dict['sigv'] = create_deriv(var,
func_dict[var])
deri_fstr = fcode(deri_dict[var], source_format='free',
assign_to='d'+var, standard=95)
deri2_fstr = fcode(deri_dict['sigv'], source_format='free',
assign_to='dsigv', standard=95)
file_str += deri_fstr + '\n'
file_str += deri2_fstr + '\n\n'
if printon:
print(deri_fstr)
print(deri2_fstr)
if printon:
print('')
if printon:
print('|' + 78*'_' + '|\n')
file_str = file_str + '\n\n' # end with newlines
if teston:
test_fit(name, var_list, func_dict, data, saveon)
# save fortran functions in file:
filename = name + '_turbparas.inc'
with open(filename, 'w') as f:
f.write(file_str)
if __name__ == '__main__':
name = 'tmpBUB'
main(name, *config(name))
The head of the .csv being read:
z,eps,u2,v2,w2,w3,uw,um
1.00100005,0.15477078,1.99948072,1.29966235,0.55985457,0.00001,-0.1565989,0.87667412
1.55074883,0.17472017,1.99948072,1.29966235,0.55985457,0.00001,-0.22731581,0.91040421
2.10049748,0.18977416,1.99948072,1.29966235,0.55985457,0.00001,-0.29404435,1.09541452
2.65024614,0.20183319,1.99948072,1.29966235,0.60718876,0.00001,-0.35773316,1.31745398
3.19999504,0.21174671,2.0886364,1.29966235,0.71062374,0.00001,-0.41881028,1.54424596
3.74974346,0.21996136,2.38007522,1.29966235,0.80987662,0.00001,-0.47746179,1.76498318
4.29949236,0.22673394,2.6594243,1.29966235,0.90503877,0.00001,-0.53373927,1.97600245
4.84924126,0.23222093,2.92654347,1.29966235,0.99606091,0.00001,-0.58761126,2.1763401
5.39898968,0.23652065,3.18101335,1.29966235,1.08279908,0.00001,-0.63899046,2.36611438
The full (157 kB) file can be found at: Google Drive.
The code runs and does what I want, but I don't have any formal training in programming and I'm sure it could be improved. Speed is not a huge issue (really depends on how complicated the functions to be fit are), but reliability and adaptability are.
Some points I know are "wrong":
too many comments that explain what the code does (I like those, because I forget stuff)
the input strings for the base sub-functions are too long (>80) and lack spaces around *. It's a compromise.
if a height-range has less data points than the parameters of the corresponding subfunction, the code halts with a helpful minpack error message.
Some details I'd like to be different but also know to be impossible without changing sympy:
Fortran output with 4 instead of 3 spaces
Fortran output with 132 line length instead of 80.
A dynamic way to combine the pieces to one piecewise function to avoid those if conditions at the end of 'fit_func'. (maybe that is possible?)
Answer: Doesn't look too bad IMO, but there's definitely lots of duplication
going on that could be made shorter. Most of the comments are great,
though in some cases I'd recommend longer (not just single-letter)
variable names! I know it's common in scientific code though.
Now, first thing, I'd restructure the main block such that it's easier
to use interactively, e.g. have just main("tmp220") and let it figure
out the configuration itself - if you still want the ability to override
the variables, consider making them optional / use a dictionary or so.
Line 184 gives me an error because the grouping must be explicit -
Python's & operator binds more tightly than the comparison operators,
so the unparenthesized expression groups as
data_z > (thresh_list[i] & data_z) < thresh_list[i + 1].
I've added parentheses so it reads:
local_index = (data_z > thresh_list[i]) & (data_z < thresh_list[i + 1])
Apropos variable names, it took me a moment to decipher teston etc.,
perhaps at least add an underscore, or choose a different name that
makes the word boundaries clearer.
I'd make config return a dictionary, then the order of values doesn't
matter anymore, it's more self-explanatory interactively and finally if
you have a bunch of nested dictionaries it's also a bit more
straightforward than repeating variable names all over again. E.g.:
def config(name):
...
return {
"tmp220": {
"abl_height": 990,
...
},
...
}[name]
Perhaps just move the configurations out into a constant though. If the
test, save and print flags should be the same, have an
update({"test_on": ..., ...}) call in there to add the settings.
Also a lot of the values in the configuration are the same - if they're
not being modified, maybe reuse them?
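For instance (a sketch based on the tmp220 configuration; safe only as long as the lists are never mutated, since the keys share one list object):

```python
DEFAULT_WEIGHTS = [100, 5000, 1, 1]
DEFAULT_WEFACTS = [0.2, 0.2, 0.1, 0.1]

# all variables except 'um' use identical weights in tmp220
shared_vars = ['u2', 'v2', 'w2', 'w3', 'uw', 'eps']
weight_list_dict = {var: DEFAULT_WEIGHTS for var in shared_vars}
weight_list_dict['um'] = [100, 1]
wefact_list_dict = {var: DEFAULT_WEFACTS for var in shared_vars}
wefact_list_dict['um'] = [0.2, 0.1]
```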
For fit_func and main, I'd split things up more into separate
functions for separate functionality, e.g:
y_list.append(subs(sympify(func_str) + transition_value,
(t, thresh_list[i]),
(s, transition_value),
(zi, abl_height)))
with subs something like:
def subs(expression, *substitutions):
for substitution in substitutions:
expression = expression.subs(*substitution)
return expression
The print_on flag, well actually, checking for that all over the
place, I'd
redirect standard output instead.
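In Python 3 that redirection is a one-liner with contextlib (the posted script is Python 2, where temporarily reassigning sys.stdout achieves the same); a sketch:

```python
import contextlib
import io

def call_with_print_flag(func, print_on):
    """Run func(); when print_on is false, capture its output instead
    of letting it reach the console, so func needs no printon checks."""
    if print_on:
        return func(), None
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        result = func()
    return result, buffer.getvalue()
```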
Buffering to a string and then writing to a file, why? It's probably
not very harmful due to the length of the output, but still, the pattern
isn't that useful (unless I missed some reason here).
Constants like 78, 30 and 48 are probably, you know, constant, but
what if you decide to change the output formatting? I'd look for some
helper library for that, or, failing that, create function for
formatting that has these values as (default) parameters.
create_deriv could be slightly more compact:
def create_deriv(funcname, func):
"""..."""
z = symbols('z')
deriv = func.diff(z)
if funcname != 'v2':
return deriv
return deriv, sqrt(func).diff(z)
Instead of appending the values using a range, simply extend the
list (or concatenate the two in case the values allow for that):
t = [z]
t.extend(a)
# or perhaps
func = lambdify(tuple([z] + list(a)), y_list[i], modules=np)
The checks for nparams > 0 could be more compact (also pcov is
unused; and in Python 2 consider using xrange where possible; in the
following example I'd also take a look at zip/itertools.izip to make
it look nicer), e.g.:
popt = ()
# fit the function to the data, checking for constant function aloft:
if nparams > 0:
popt = curve_fit(func, local_z, local_v, sigma=sigma,
maxfev=niterations)[0]
# substitute fitted parameters in sympy expression:
for j in xrange(nparams):
y_list[i] = y_list[i].subs(a[j], popt[j])
# calculate the new transition_value:
transition_value = func(thresh_list[i + 1], *popt)
In general I'd probably make local variables out of things like
y_list[i] if it's always the same thing you're manipulating, but
that's just me.
For the Piecewise creation you can simply call the function with the
list of arguments spliced in like you're already doing for config,
e.g.:
if len(y_list) == 1:
y = y_list[0]
else:
y = Piecewise(*[(y_j, z <= thresh_j_1)
for y_j, thresh_j_1
in zip(y_list, thresh_list[1:])]
+ [(y_list[-1], True)])
If this is too unclear, split it up, or use a range again.
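The same zip-and-splice construction can be sketched without sympy; this is a plain-Python analogue of the Piecewise call above, with made-up segment values standing in for the fitted expressions:

```python
def make_piecewise(pieces):
    """pieces: (value, threshold) pairs; a threshold of None is the catch-all."""
    def evaluate(z):
        for value, thresh in pieces:
            if thresh is None or z <= thresh:
                return value
        raise ValueError("no piece matched %r" % (z,))
    return evaluate

thresh_list = [0.0, 10.0, 30.0]        # thresh_list[1:] are the upper bounds
y_list = ["segment_0", "segment_1"]    # stand-ins for the fitted expressions
# Same splicing idea as Piecewise(*pairs + [(y_list[-1], True)]):
pieces = list(zip(y_list, thresh_list[1:])) + [(y_list[-1], None)]
profile = make_piecewise(pieces)
```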
Hope that gives you some ideas around it. | {
"domain": "codereview.stackexchange",
"id": 25125,
"tags": "python, numerical-methods, scipy, sympy"
} |
How can the binary OR function be computed by a MOD3 gate of constant fan-in? | Question: I've been working on a problem and in order to prove the bigger picture, I need to understand how a binary OR function can be computed by a constant fan-in MOD3 gate. I would seem that the output would need to depend on every input bit as just looking at some of them would leave out the possibility that those that weren't looked at could change the answer. Any help is appreciated.
Answer:
A MOD3 gate outputs 1 iff the remainder after dividing the sum of the input bits by 3 is non-zero.
In that case an OR gate is simply a MOD3 gate with fan-in 2. Consider the truth table:
A  B  (A + B) % 3  OUT
0  0       0        0
0  1       1        1
1  0       1        1
1  1       2        1 | {
"domain": "cs.stackexchange",
"id": 11308,
"tags": "arithmetic, circuits, binary"
} |
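A short check of the fan-in-2 claim above (illustrative Python, not part of the original answer):

```python
def mod3_gate(*bits):
    # Outputs 1 iff the sum of the inputs is not divisible by 3.
    return 1 if sum(bits) % 3 != 0 else 0

def or_gate(a, b):
    # With fan-in 2 the sum is 0, 1 or 2; it is divisible by 3 only when
    # both bits are 0, which is exactly when OR outputs 0.
    return mod3_gate(a, b)

table = [(a, b, or_gate(a, b)) for a in (0, 1) for b in (0, 1)]
```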
How do we know the temperature on the planets? | Question: I was watching a show and they were saying that the temperature of Pluto (I know it is not a planet) is about $-300 {}^\circ F$. I know that depends where in the orbit Pluto is, but how do we determine the temperature?
Answer: There are a few ways the temperature can be measured remotely.
The easiest way is to measure the amount, and as a bonus the spectrum, of the radiated heat. All objects at temperatures above absolute zero radiate a certain amount of energy. The wavelength spectrum is given by Planck's law, and the radiated power per unit area by the Stefan-Boltzmann law, both of which are listed below.
$I'(\lambda,T) =\frac{2 hc^2}{\lambda^5}\frac{1}{ e^{\frac{hc}{\lambda kT}}-1}$
$j^{\star} = \sigma T^{4}$
So, to determine the temperature, all one needs to do is measure the energy over the waveband of interest, and one can get the temperature.
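Inverting the Stefan-Boltzmann law gives the simplest remote thermometer: measure the radiated power per unit area and solve for T. A sketch, using a rough textbook value for Earth's surface flux purely as a sanity check:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(flux):
    # Solve j* = sigma * T^4 for T, given the measured flux j* in W/m^2.
    return (flux / SIGMA) ** 0.25

# Earth's surface radiates roughly 390 W/m^2, consistent with ~288 K.
t_earth = effective_temperature(390.0)
```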
In the case of Pluto, an easier method can be done. We know the total amount of energy going toward Pluto, based off of its distance from the Sun. We can use this to measure the average temperature of an object receiving that amount of energy.
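That energy-balance estimate can be sketched numerically. The sketch assumes a zero-albedo, uniformly emitting body (sunlight absorbed over the cross-section, re-radiated over the whole sphere); the constants are standard values:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26        # solar luminosity, W
AU = 1.496e11           # astronomical unit, m

def equilibrium_temperature(d_au, albedo=0.0):
    # Balance: (1 - A) * F * pi r^2 = 4 pi r^2 * sigma * T^4,
    # where F = L / (4 pi d^2) is the solar flux at distance d.
    d = d_au * AU
    flux = L_SUN / (4.0 * math.pi * d * d)
    return ((1.0 - albedo) * flux / (4.0 * SIGMA)) ** 0.25

t_pluto = equilibrium_temperature(39.5)  # Pluto's mean distance, ~39.5 AU
```

This crude estimate comes out at a few tens of kelvin, in the right ballpark for Pluto; real albedo and atmospheric effects shift the answer by several kelvin.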
The last method involves knowing the physical properties of the object. For instance, Pluto has an atmosphere of nitrogen, carbon monoxide, and methane. The atmosphere freezes out as Pluto moves further out in its orbit. The temperature can be inferred from which materials haven't frozen out of the atmosphere yet, which can be determined with spectroscopy.
Typically, the first method is used, and the second when a more accurate one cannot be (i.e., no direct spectrum of the planet is observed). | {
"domain": "physics.stackexchange",
"id": 3240,
"tags": "temperature, planets, spectroscopy"
} |