| anchor | positive | source |
|---|---|---|
Projectile motion problem with upward acceleration and horizontal velocity | Question: An electron in a cathode-ray tube is traveling horizontally at 2.10×10^9 cm/s when deflection plates give it an upward acceleration of 5.30×10^17 cm/s^2.
B.) What is its vertical displacement during this time?
My work https://i.stack.imgur.com/wWJkj.jpg
I am asking this because the answer my teacher gave is 2.3 cm.
Thanks in advance
Answer: $y = y_0 + ut + 0.5at^2$
Since $y_0$ and $u$ are $0$, we have $y = 0.5~at^2$.
In your calculation $t = 2.95 \times 10^{-9}$, which is correct.
So, $y = 0.5\times5.30×10^{17}\times (2.95 \times 10^{-9})^2$
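A quick numerical check of this arithmetic (a sketch, plugging in the same values used above):

```python
# Numerical check of y = 0.5 * a * t^2 with the values from the worked answer.
a = 5.30e17   # upward acceleration, cm/s^2
t = 2.95e-9   # time between the plates, s (from the earlier part of the problem)
y = 0.5 * a * t**2
print(round(y, 1))  # 2.3 (cm)
```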
Therefore,
$y = 2.3~\text{cm}$ | {
"domain": "physics.stackexchange",
"id": 6344,
"tags": "homework-and-exercises, kinematics, velocity, projectile"
} |
Circular-shift with variable bitlength | Question: For some memory masking operations I need bitwise-rotation functionality for unsigned integral types. I came across solutions based on the wiki article, but have not found a variant that performs bitwise rotation within an "arbitrary" field of bits. Therefore, I have extended the solution of John Regehr, and now I want some feedback from you. I would also appreciate any advice on using mechanisms provided by the C++ core language up to C++14 (without the use of the std namespace).
#include <iostream>
#include <bitset>
#include <cstdint>
uint64_t CircShift(uint64_t x, uint8_t n, uint8_t bitwidth)
{
    // 1ULL keeps the shift in 64-bit arithmetic; a plain 1 would overflow int for bitwidth >= 31.
    return ((x << n) & ((1ULL << bitwidth) - 1)) | (x >> (bitwidth - n));
}
int main()
{
    uint64_t var = 0b000110100110;
    var = CircShift(var, 2, 9);
    std::cout << std::bitset<12>(var);
}
Answer: I think the function looks good. It does what it's supposed to do when the input is correct. However, if the input number x is larger than or equal to 1<<bitwidth, the behavior is undefined. To rectify this, I suggest:
uint64_t CircShift(uint64_t x, uint8_t n, uint8_t bitwidth)
{
    return ((x << n) & ((1ULL << bitwidth) - 1)) |
           ((x & ((1ULL << bitwidth) - 1)) >> (bitwidth - n));
}
Just add another mask to the right-shifted part. Also note that for shift amounts of bitwidth or more, both your original function and my new function behave strangely. To rectify this, you can use the fact that a circular shift by bitwidth steps simply gives you back the original number:
uint64_t CircShift(uint64_t x, uint8_t n, uint8_t bitwidth)
{
    return ((x << (n % bitwidth)) & ((1ULL << bitwidth) - 1))
         | ((x & ((1ULL << bitwidth) - 1)) >> (bitwidth - (n % bitwidth)));
} | {
"domain": "codereview.stackexchange",
"id": 31673,
"tags": "c++, c++14, bitwise"
} |
2 dimensional relative motion | Question: I came across the following question:
Airplanes A and B are flying with constant velocity in the same vertical plane at angles $30°$ and $60°$ as shown in the following figure. The speed of A is $100\sqrt{3}$ m/s. At time $t = 0$, an observer in A finds B at a distance of 500 m. This observer sees B moving with a constant velocity perpendicular to the line of motion of A. If, at $t = t'$, A just escapes being hit by B, what is $t'$ (in s)?
My question is this: when they say "moving with a constant velocity perpendicular to the line of motion of A", what do they mean? What information am I getting exactly? And also, how does A manage to escape being hit by B?!
Answer:
Imagine you're looking at the situation from behind plane $A$. For plane $B$ all you would see is the velocity perpendicular to the line of motion of $A$.
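To see numerically where this leads, here is a sketch. The geometric inputs, that the component of $B$'s velocity along $A$'s line of motion must equal $A$'s speed (so the relative velocity is purely perpendicular) and that $B$'s velocity makes $30°$ with $A$'s line of motion in the figure, are taken from the standard solution of this problem, not stated explicitly above:

```python
import math

vA = 100 * math.sqrt(3)  # speed of A, m/s
# Relative velocity of B w.r.t. A is perpendicular to A's motion, so B's
# component along A's direction must equal vA. With the figure's geometry,
# B's velocity makes 30 degrees with A's line of motion.
vB = vA / math.cos(math.radians(30))      # speed of B
v_perp = vB * math.sin(math.radians(30))  # closing speed seen from A
t_prime = 500 / v_perp                    # time to cover the 500 m gap
print(round(vB), round(v_perp), round(t_prime, 1))  # 200 100 5.0
```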
I've decomposed plane $B's$ velocity vector into a component perpendicular to $A$ and one parallel to $A$, using basic trigonometry. For the interception problem only the perpendicular component does the trick: the other component is simply parallel to $A$. | {
"domain": "physics.stackexchange",
"id": 33135,
"tags": "homework-and-exercises, kinematics, relative-motion"
} |
Redistributing Color in an RGB Image According to a Gaussian Distribution | Question: I haven't done this stuff in a while.
If I have an image $I$ I can equalize its histogram using an existing OpenCV procedure. Equalizing a histogram essentially means constructing a function such that, when applied to each pixel $p \in I$, the new values look as if they had been sampled from a uniform distribution (assuming a grayscale image for this explanation).
I don't remember if what I want to do has a specific name, but I kind of remember it was possible, but suppose I have a gaussian distribution (instead of a uniform one), say $\mathcal{N}(0,\Sigma)$, and I want to re-distribute the colors according to such distribution.
Is there a way to do this in opencv?
Answer: After you equalize the histogram you can think of your data as a stream of variables $ {X}_{i} $ where $ X \sim U \left[ 0, 1 \right] $.
Now all you need is to transform samples of Uniform Random Variable into Gaussian Variable.
You can do that by applying the inverse CDF of the Gaussian distribution.
Basically, this is the inverse transform sampling method.
For the normal distribution, an easy choice would be the Box-Muller transform.
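A deterministic sketch of the inverse-CDF route for a single 8-bit channel, using only the Python standard library (the mean and standard deviation here are arbitrary display choices I have assumed, since values drawn from $\mathcal{N}(0,\Sigma)$ would need rescaling into pixel range anyway):

```python
from statistics import NormalDist

def gaussian_lut(mean=128.0, std=40.0, levels=256):
    """Map equalized (approximately uniform) intensities to Gaussian ones
    via inverse-transform sampling: u -> F^{-1}((u + 0.5) / levels)."""
    nd = NormalDist(mu=mean, sigma=std)
    lut = []
    for u in range(levels):
        v = nd.inv_cdf((u + 0.5) / levels)            # inverse CDF of the target Gaussian
        lut.append(min(levels - 1, max(0, round(v))))  # clip to the valid pixel range
    return lut

lut = gaussian_lut()
```

The resulting table can be applied to the equalized image with cv2.LUT; for color images you would then apply the Cholesky-factor step mentioned below per pixel.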
After you apply this, you can multiply each RGB pixel by the Cholesky factor of $ \Sigma $ to get exactly what you need. | {
"domain": "dsp.stackexchange",
"id": 6272,
"tags": "image-processing, opencv, statistics, distortion"
} |
Cast iron as EMF shield | Question: I wanted to ask if cast iron can be used as a makeshift shield against electric and magnetic fields (EMF).
For instance, can a frying-pan made of high-quality cast-iron and enamel be used as make-shift protection against laptop EMF?
I tried googling and found materials such as zinc, nickel, copper, steel. But none mentioned iron or cast-iron specifically.
Thanks.
Answer: Key to effective shielding is complete coverage with no gaps or holes, using a good electrical conductor. For a cast-iron pan, complete coverage would require putting a close-fitting lid on top of the pan, which would cut off all your wi-fi & bluetooth connections and make the computer impossible to use. | {
"domain": "physics.stackexchange",
"id": 70239,
"tags": "electromagnetic-radiation"
} |
Resources on Cryptographic Applications of Expander Graphs | Question: I want to read papers on cryptography like
How to Recycle Random Bits
or Security Preserving Amplification of Hardness. They use random walks on expander graphs.
I need a short introduction to the subject of expander graphs and random walks on them. I prefer one that has cryptography in mind.
PS: In another post, Dai Le recommended the paper Expander graphs and their applications by S. Hoory, N. Linial, and A. Wigderson as an excellent paper on expander graphs. Unfortunately, it is too long (123 pages) to be useful for me.
Answer: I suggest you look at a survey by Oded Goldreich, called A computational perspective on sampling. In that survey he presents some basic facts on expanders (along with pointers to more extensive material). These facts seem to be sufficient to at least understand the "Security Preserving Amplification of Hardness" paper. In particular, in appendix A he surveys random walks on expanders, and in C.4 he presents the expander hitter, which is what is essentially done in that paper. | {
"domain": "cstheory.stackexchange",
"id": 461,
"tags": "graph-theory, reference-request, cr.crypto-security, expanders"
} |
Normalizing 3-Dimensional Wave Function | Question: How do you normalize a wave function in three dimensions with spherical coordinates?
Answer: Since the wavefunction depends on $r$, the spherical coordinate representing the distance from the origin, we use spherical coordinates to perform the integration because it is most convenient. And yes, this is a triple integral, $\int_0^{2\pi}d\phi\int_0^{\pi}\sin\theta\, d\theta\int_0^{\infty}r^2\Psi^*\Psi\, dr$. The wave function doesn't depend on the two angular coordinates, so it should be straightforward to carry out if you've done triple integration before. | {
"domain": "physics.stackexchange",
"id": 21366,
"tags": "quantum-mechanics, wavefunction, normalization"
} |
Generalized Forces in the Lagrangian | Question: I have resolved this problem:
The Lagrangian is only the kinetic part in the non-inertial frame:
$$ L(x, \theta) = \dfrac{1}{2}m \dot{\textbf{r}}^2 $$
where $\textbf{r}'= \tilde{\textbf{R}}+\textbf{r}$, $\textbf{r}$ the position vector of $m$ in the non-inertial frame from $O$, $\textbf{r}'$ the position vector of $m$ in the inertial frame from $F$, $\tilde{\textbf{R}}$ the position vector of the non-inertial frame $O$ from $F$. Note that
$$ \theta = \omega t $$ is the position of $O$ from $F$, and $x(0)=a$. So from the texts
$$ \textbf{a}_F = \textbf{a}_O +\textbf{A} + 2\vec{\omega}\times \textbf{v}_O +\vec{\omega}\times (\vec{\omega} \times \textbf{r}) + \dot{\vec{\omega}} \times \textbf{r} = \begin{pmatrix} \ddot{x} \\ 0 \\ 0\end{pmatrix} + 0 + \begin{pmatrix} -\omega^2 x \\ 2 \omega \dot{x} \\ 0 \end{pmatrix} +\begin{pmatrix} 0 \\-h\omega^2 \\ 0 \end{pmatrix} + 0.$$
Hence,
$$ m(\ddot{x}-\omega^2 x) =0 \qquad \Longrightarrow \qquad x(t) = a \cosh \omega t$$
and
$$\textbf{F}\equiv \textbf{R}_O= \begin{pmatrix} 0 \\ 2m \omega \dot{x}-hm\omega^2\\ 0 \end{pmatrix}_O = \begin{pmatrix} 0 \\ m\omega^2 (2\sqrt{x^2-a^2} -h) \\ 0 \end{pmatrix}_O$$
from $O$.
Is the generalized force $Q$ equal to $\omega^2 x$? What are the Euler-Lagrange equations?
The second part is
How do I add the new springs to the Lagrangian? Where does the critical frequency come from?
Answer: The springs' contribution is added to the potential energy.
$$L(x,t)_{new} = L(x,t)_{old} - \frac{1}{2}k x^2 \cdot 2$$
The factor $2$ comes from the fact that one spring is stretched by length $x$ and the other is compressed by the same length. | {
"domain": "physics.stackexchange",
"id": 44688,
"tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism"
} |
ROS Nav Stack - Obtaining correct heading at the goal without final pointturn | Question:
Hi
I am using ROS Indigo with the Navigation stack on a holonomic vehicle. The planning overall seems to work.
However, once the robot reaches the XY goal it does a final point turn. Is there a way to have it plan an arc so that when the destination is reached the heading is already correct?
It also seems to not like holonomic motions. Is there a way to bias the planner to use more holonomic motions? (Note: the planner has holonomic set to true.)
Thanks
Originally posted by dkohanba on ROS Answers with karma: 1 on 2016-04-21
Post score: 0
Answer:
For the first part of your question (heading), you can try setting the global_planner's orientation filter to ForwardThenInterpolate (maybe experiment with all of them). You can read what each filter does here: GlobalPlanner.cfg
It looks like you'll have to use dynamic_reconfigure in order to set the filter.
Originally posted by spmaniato with karma: 1788 on 2016-04-22
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by dkohanba on 2016-04-25:
Thanks. I will try that. | {
"domain": "robotics.stackexchange",
"id": 24425,
"tags": "ros, navigation"
} |
Chordal Graphs and maximum independent sets | Question: For a chordal graph $G$ there is a clique tree such that its vertices correspond to maximal cliques of $G$, there is an edge between two vertices iff the intersection of the corresponding cliques is also their minimal vertex separator, and for each vertex of the graph the cliques containing it induce a subtree.
Now my questions are:-
1. Take a subpath of the tree of length 5 with the property that no vertex has degree more than 2 and the intersection of all the maximal cliques is non-empty. Does there exist an independent set of size at least 3 in the subgraph induced by the vertices present in the maximal cliques along the path?
2. If yes, is the bound on the path length and the independent-set size tight?
Answer: There are arbitrarily long sub-paths whose induced subgraph contains no three independent vertices. See the graph below for length 4:
Indeed, the vertices in a sub-path induce an interval subgraph. So you can use standard interval models (the endpoints of the interval for a vertex $v$ are the indices of the leftmost and rightmost bags that contain $v$) to construct the tight example easily. The graph given above has intervals:
[1,1], [1,2], [1,3], [2,4], [3,4], [4,4]. Likewise, you can construct one for a subpath of length $n$: the basic idea is that each interval either starts at 1 or ends at $n$. | {
"domain": "cstheory.stackexchange",
"id": 2617,
"tags": "graph-theory"
} |
A Failable that allows safe returning of exceptions | Question: This was inspired by a conversation in chat, that started with the discussion of C#7.0 tuples and out parameter declarations, which led to the idea that there is no 'good1' way to return an error state in C# without throwing an exception.
Out of curiosity, I wondered what it would take to design a type that was transparent to the developer, but allowed them to safely return exceptions without having to unwind the stack.
For those who don't know, when you throw/catch Exception objects in C# (or VB.NET, F#, any .NET language follows the same requirements), the most expensive part tends to be the stack. Throwing an exception is cheap, but catching it means the stack has to unwind and reflect on itself to give you the information you need. This is by design, of course. The language and framework designers wanted exceptions to mean that the program entered an 'exceptional state', that is, there is an issue that needs resolved.
The problem is that some methods don't really need to throw an exception on error, they could, instead, just return a pass/fail and then fill an out parameter. The other option is to return a Tuple<bool, T>, where T is the return type.
Of course, this doesn't give us the ability to return an Exception, just pass/fail. Sometimes we may want to return what went wrong.
So, alas, I get to the Failable<T> struct that I created today. By including implicit conversions to and from T and Exception, it allows us to simply return an Exception instead of throwing, giving much cheaper management of error states.
The only caveat to this approach, from a usability standpoint, is that one cannot define an implicit conversion from null. This means that Failable<string> value = null; is invalid, but Failable<string> value = (Failable<string>)null; is valid, as is Failable<string> value = new Failable<string>(null);.
If the framework/language designers ever open up implicit conversions from null, then this struct would be completely transparent.
public struct Failable<T> : IEquatable<Failable<T>>
{
    public Exception Exception { get; }
    public T Result { get; }
    public bool Passed { get; }
    private Failable(Exception exception, T result, bool passed)
    {
        Exception = exception;
        Result = result;
        Passed = passed;
    }
    public Failable(Exception exception)
        : this(exception, default(T), false)
    {
    }
    public Failable(T result)
        : this(null, result, true)
    {
    }
    public static implicit operator Failable<T>(Exception exception) => new Failable<T>(exception);
    public static implicit operator Failable<T>(T result) => new Failable<T>(result);
    public static implicit operator Exception(Failable<T> result) => result.Exception;
    public static implicit operator T(Failable<T> result) => result.Result;
    public override string ToString() => (Passed ? Result?.ToString() : Exception?.ToString()) ?? "null";
    // Null-safe: Exception and Result may both be null (e.g. Failable<T>.Empty).
    public override int GetHashCode() => (Exception?.GetHashCode() ?? 0) ^ (Result?.GetHashCode() ?? 0);
    public bool Equals(Failable<T> other) => this == other;
    public override bool Equals(object obj) => obj is Failable<T> && this == (Failable<T>)obj;
    public static bool operator ==(Failable<T> a, Failable<T> b) => a.Exception == b.Exception && Equals(a.Result, b.Result);
    public static bool operator !=(Failable<T> a, Failable<T> b) => !(a == b);
    public static readonly Failable<T> Empty = new Failable<T>();
}
Now, to demonstrate how this works, I defined a very ugly method (so please do not review it) that goes through all the possible features of this struct:
static Failable<T> FailableTest<T>(bool pass, bool nullOrThrow, T result)
{
    try
    {
        if (pass)
        {
            if (nullOrThrow)
            {
                // Both options are valid:
                // return new Failable<T>(null);
                return (Failable<T>)null;
            }
            else
            {
                return result;
            }
        }
        else
        {
            if (nullOrThrow)
            {
                throw new ArgumentException($"Throwing as expected, {nameof(pass)}:'{pass}', {nameof(nullOrThrow)}:'{nullOrThrow}'.");
            }
            else
            {
                return new ArgumentException($"Returning as expected, {nameof(pass)}:'{pass}', {nameof(nullOrThrow)}:'{nullOrThrow}'.");
            }
        }
    }
    catch (Exception e)
    {
        return e;
    }
}
Our test cases are something on the order of:
Console.WriteLine("Pass : " + FailableTest(true, false, "1. String on pass").ToString());
Console.WriteLine("Fail : " + FailableTest(false, false, "2. String on pass").ToString());
Console.WriteLine("Null : " + FailableTest(true, true, "3. String on pass").ToString());
Console.WriteLine("Throw : " + FailableTest(false, true, "4. String on pass").ToString());
Console.WriteLine("Cast : " + (FailableTest(true, false, 15) - FailableTest(true, false, 5)));
Returning:
Pass : 1. String on pass
Fail : System.ArgumentException: Returning as expected, pass:'False', nullOrThrow:'False'.
Null : null
Throw : System.ArgumentException: Throwing as expected, pass:'False', nullOrThrow:'True'.
at GenericFailableTest.Program.FailableTest[T](Boolean pass, Boolean nullOrThrow, T result) in c:\users\ebrown\documents\visual studio 2017\Projects\Test CSharp Projects\GenericFailableTest\Program.cs:line 44
Cast : 10
The interesting thing about this is that the implicit operator T conversion allows you to just ignore this struct altogether:
var str = FailableTest(true, false, "Some String");
And this is by design, which brings me to my main list of questions:
Should there be a T(Failable<T>) operator? If so, should it be implicit?
Should there be an Exception(Failable<T>) operator? If so, should it be implicit?
Should the API include a Failable(Tuple<bool, T>) constructor that allows one to pass a tuple of (pass, value)? A proper definition might be:
static Failable<int> TryParse(string input)
{
    int result;
    if (int.TryParse(input, out result))
    {
        return result;
    }
    else
    {
        return new ArgumentException($"The string '{input}' was not a valid integer.");
    }
}
var parseResult = new Failable<int>();
while (!parseResult.Passed)
{
    parseResult = TryParse(Console.ReadLine());
}
var value = parseResult.Result;
// Do something with `value`
1: The term 'good' here is subjective; there are two major alternatives to throwing exceptions already present in the language:
1. Use out parameters;
2. Return a Tuple.
Either of these is 'good' in a certain light; I'm simply attempting to draw out another possible alternative.
Answer: First: congratulations, you have rediscovered the error monad.
https://hackage.haskell.org/package/mtl-2.2.1/docs/Control-Monad-Error.html
Second: as noted in the comments, C# already has the concept of "wrap up either a value or an exception": namely, Task<T>. You can use Task.FromException and Task.FromResult to construct them. Of course, Result on the task either produces the result or throws the exception, as does await.
Also, this illustrates that there need be no asynchrony in a task! A task is just the concept of "I'll provide a value or an exception in the future if I don't already have it now". If you already have it now, great; you can use tasks to represent your "failable" concept, and await them like any other task.
Should there be a T(Failable<T>) operator? If so, should it be implicit?
Operators that convert a generic type to anything whatsoever can be difficult to reason about. Certainly it should not be implicit because the operation is not guaranteed to succeed! If you want this, it should be explicit.
Look at the design of Task for inspiration here. Notice that the factories are static methods and they are very clear when they are being called. And it is also very clear when the result is being fetched.
Similarly look at the design for nullable. (The "maybe monad" is very similar to the error monad; more on this below.) There is an implicit conversion from T to T?, but the conversion from T? to T is explicit.
Should there be an Exception(Failable<T>) operator? If so, should it be implicit?
I would find this confusing.
Should the API include a Failable(Tuple<bool, T>) constructor that allows one to pass a tuple of (pass, value)?
I don't understand the question. (Though I note that a (bool, T) tuple is the structure of the maybe monad, aka Nullable<T> in C#.)
Exercise 1: you have created a monad, so you should be able to define the monad operators on them; if you do so, then you can use your type in LINQ queries! Can you implement members:
struct Failable<T> ... {
    ...
    public Failable<R> Select<R>(Func<T, R> f) { ... }
    public Failable<C> SelectMany<B, C>(
        Func<T, Failable<B>> f1,
        Func<T, B, C> f2) { ... }
    public Failable<T> Where(Func<T, bool> f) { ... }
}
If you do that, then you can write queries:
Failable<int> f = OperationThatCanFail();
Failable<double> d = from i in f where i > 0 select Math.Log(i);
If you've done it right then d should be either a failure code, or the log of integer i.
Exercise 2: You've implemented the error monad; can you now implement the tracing monad? A Trace<T> has the value of a T, but also has an operation that appends a string to the trace, so you can track the movement of a T around your program.
Exercise 3: nullable is implemented as a (bool, T) pair. Failable is implemented as an (Exception, T) pair. Trace is implemented as a (string, T) pair. Can you design and implement a generalized State<S, T> type which associates an S with a T, and then derive the other monads from it?
Finally, you might consider more advanced operations. For example:
public static Func<A, Failable<R>> ToFailable<A, R>(this Func<A, R> f)
{
    return a =>
    {
        try
        {
            return new Failable<R>(f(a));
        }
        catch (Exception x)
        {
            return new Failable<R>(x);
        }
    };
}
Now you can take existing functions of the form A-->R that can throw, and turn them into functions that cannot throw. | {
"domain": "codereview.stackexchange",
"id": 25656,
"tags": "c#, error-handling, generics, type-safety, state"
} |
Excitation energy of a compound nucleus | Question: I am looking to calculate the excitation energy of Polonium-210 from the masses of Helium-4 and Lead-206. Using a table of nuclides, I found that the masses are: $M_{Pb} = 205.97\ \mathrm{amu}$, $M_{He} = 4.0026\ \mathrm{amu}$, $M_{Po} = 209.98\ \mathrm{amu}$. The mass defect is thus:
$$\Delta M = 209.98 - 205.97 - 4.0026 = 0.0074 \: amu$$
How do I convert this mass defect value to an excitation energy?
Answer: 1 amu is 1/12 of the mass of $^{12}C$ (roughly, the mass of a nucleon). Converting to energy, 1 amu = 931.5 MeV. Then, the excitation energy in your example is 6.89 MeV, which sounds reasonable. | {
"domain": "physics.stackexchange",
"id": 78532,
"tags": "homework-and-exercises, nuclear-physics, atomic-physics"
} |
Why does Gauss' Law apply to any shape given that it's a closed surface? | Question: I've read some answers on stack exchange for this question, but I don't find them particularly clear. I am convinced of the case for a sphere, since the area of a sphere is used in the derivation for the law.
So maybe relating all other areas to the area of a sphere would provide an intuitive 'proof'?
I was uncertain, however, about approximating an infinitesimal curved surface to a flat surface.
Answer: Gauss's law deals with electric flux, and electric flux is related to the number of field lines passing through a surface. A charge creates a definite number of field lines, so if a closed surface encloses the charge, every field line emitted by the charge will pass through that surface. So, whatever the closed surface, the number of field lines passing through it is always the same. | {
"domain": "physics.stackexchange",
"id": 99978,
"tags": "electrostatics, electric-fields, charge, gauss-law, coulombs-law"
} |
Numbering in a derivative of azulene | Question: I was wondering if anyone can tell me how to number this molecule:
This molecule is derived from azulene by addition of two sulfurs and two oxygens at positions 2, 6, 1, and 5 respectively. I would like to learn the proper procedure of numbering. I tried googling around and reading papers but could not find the correct terminology for this process.
Answer: If your aim is to know just the chemical name, you may ask a computer.
If the chemical is already known, chances are you will find it in a database (e.g., Reaxys by Elsevier, or SciFinder by the American Chemical Society). The latter even has the option to draw a structure and request a name for it, even if there isn't yet a publication about said molecule indexed by SciFinder. A resource you typically find in chemistry departments.
There are standalone programs helping you to find a chemical name for this structure. 2,6-bis(sulfanyl)azulene-1,5-dione, if you ask ACD Chemsketch (here, the freeware was used [free in the naming up to 50 atoms]); but Perkin Elmer's ChemDraw, or ChemDoodle -- often accessible within a campus wide license program by an university -- likely find it / generate it equally well.
If you aim to learn the nomenclature yourself, the handy pocket guide by Helmchen may be worth considering. Portions of it can be seen in Google's book preview, like the screen photo below.
(credit to Google books, p. 16) | {
"domain": "chemistry.stackexchange",
"id": 11032,
"tags": "organic-chemistry, nomenclature"
} |
Why does the "air" close or open doors? | Question: Assume we have an empty room with two doors on the opposite sides of it. If we open one of the doors, the door on the opposite side automatically closes "harder", and if we close one of the doors the other door opens. Why does this happen?
Answer: If a door opens outward, pulling it open creates a slight lowering of air pressure inside the room just for a moment. If the door on the other side opens inward, then the slightly greater air pressure on the outside of that door will push it open.
The reverse is true when you close that door by pushing it inwards. The air pressure in the room rises slightly because the door is pushing air into the room, and if the other door opens outward, then that slight increase in air pressure inside the room will push that door open. If it opens inward and is open, it will get pushed shut by the air pressure when you shut the other door. | {
"domain": "physics.stackexchange",
"id": 87544,
"tags": "pressure, everyday-life, flow, air"
} |
Relative Humidity Problem | Question: I have been assigned to solve this problem: the current temperature of the room is $T_1 = 7$; RH is 85%, and I am asked to figure out how much the temperature must be increased for the RH to drop to 70% or less.
I would like to know what I did wrong: why I could not find this second temperature. I can apply the definition of RH and the Antoine equation over and over, but never find $T_2$. Why? Because I do not see any connection between the parameters $p_1, p_{sat_1}, v_1, v_{sat_1}, p_2, p_{sat_2}, v_2, v_{sat_2}$. Knowing $T_1$ I can find $p_1$ and then $v_1$, and also the temperature at which the air would be saturated with water vapour: $T_{d_1}$. Knowing them does not solve the problem. That means I do not understand something about how these parameters are connected.
The only possibility for me is to look into what happens in the room when the temperature rises to $T_2$. Will the RH drop? Not necessarily, because the higher the environment's temperature, the more moisture the air can store, so boosting RH. But okay, suppose we somehow raised $T$ to $T_2$ and $\varphi_2 = 0.7$. Definitely $T_{d_2} \ne T_{d_1}$; maybe $p_{sat_2} = p_{sat_1}$? I think not, because both parameters depend on the temperature.
What do I misunderstand? Maybe some quantity is conserved, and I should use it to solve the problem? I am confused. Volumes are not given; only the current temperature of the room and the RH, and then they raise the temperature and state that RH is 0.7.
NOTATION
$T$ is temperature,
$\varphi$ is RH
$v$ is the water vapor contained in the air
$p$ is the partial water vapour pressure
$p_{sat}$ when the air gets saturated with the water vapour
$v_{sat}$ is the amount of the water vapour contained in the saturated air.
Answer: Comment turned answer as requested by Bill N.
This question seems to assume that the amount of water vapor in the air stays constant. Then this statement
[...] Will the RH drop? Not necessarily, because the higher the environment's temperature, the more moisture the air can store, so boosting RH.
makes no sense. If the air can store more moisture, but the amount of water in the air stays constant, RH must drop.
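To make this concrete, here is a numerical sketch that holds the water-vapour partial pressure fixed and solves for $T_2$. It uses the Magnus approximation for saturation pressure instead of the Antoine equation mentioned in the question, and reads $T_1 = 7$ as degrees Celsius; both are assumptions on my part.

```python
import math

# Magnus approximation for saturation vapour pressure over water, in hPa
# (these coefficients are one common parameterisation, valid roughly 0-60 degC).
A, B, C = 17.62, 243.12, 6.112

def p_sat(t_c):
    return C * math.exp(A * t_c / (B + t_c))

def t_at_p_sat(p):
    # Invert p_sat analytically: t = B*x / (A - x) with x = ln(p / C)
    x = math.log(p / C)
    return B * x / (A - x)

t1, rh1, rh2 = 7.0, 0.85, 0.70
p_vapour = rh1 * p_sat(t1)       # partial pressure of water, held constant while heating
t2 = t_at_p_sat(p_vapour / rh2)  # temperature where that pressure is 70% of saturation
print(round(t2 - t1, 1))         # ~2.9: heat the room by about 2.9 K
```

The same computation with Antoine coefficients gives essentially the same answer; the key point is the invariant (constant partial pressure of water), not the particular saturation-curve fit.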
Answering the follow-up question in the comments:
You can see this as taking some volume of air from the outside and heating it up. You know the new temperature and RH, but the absolute amount of water will stay constant. Therefore, if the air outside was saturated with water vapor, the temperature there must have been the dew point of the air inside (which is also the dew point of the air outside, since this only depends on the absolute amount of water in the air, which stays constant). | {
"domain": "physics.stackexchange",
"id": 74367,
"tags": "humidity"
} |
Effect of weights on the Louvain communities detected | Question: The Louvain method for community detection aims to optimize modularity and hence detect communities in the given graph. In case of a weighted graph would it be valid to assume that an edge with a higher weight would be prioritized to be part of a community as against an edge with a lower weight while optimizing modularity ?
Answer: Yes, weights are interpreted as a strength of connection in the context of modularity. This is perhaps easiest to see using the interpretation of unweighted modularity as
The fraction of edges within communities minus the expected such fraction in a random graph with the same degree sequence.
Switching the adjacency matrix to the weighted adjacency matrix to get the weighted modularity just changes this to
The fraction of total weights within communities minus the expected such fraction in a random weighted graph with the same total weights at each vertex.
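The small experiment suggested below (a 4-cycle with one heavy perfect matching) can be checked directly against the weighted-modularity formula $Q = \frac{1}{2m}\sum_{ij}\bigl(A_{ij} - \frac{k_i k_j}{2m}\bigr)\delta(c_i, c_j)$, with no particular community-detection library involved; a self-contained sketch:

```python
def modularity(adj, communities):
    """Weighted modularity: fraction of weight inside communities minus
    the expected fraction under the weighted configuration model."""
    n = len(adj)
    two_m = sum(sum(row) for row in adj)  # total weight, counted twice
    strength = [sum(row) for row in adj]  # weighted degree of each node
    label = {v: c for c, comm in enumerate(communities) for v in comm}
    q = 0.0
    for i in range(n):
        for j in range(n):
            if label[i] == label[j]:
                q += adj[i][j] - strength[i] * strength[j] / two_m
    return q / two_m

# C4 with one heavy perfect matching: edges (0,1) and (2,3) have weight 10,
# edges (1,2) and (3,0) have weight 1.
adj = [[0, 10, 0, 1],
       [10, 0, 1, 0],
       [0, 1, 0, 10],
       [1, 0, 10, 0]]
print(modularity(adj, [{0, 1}, {2, 3}]))  # ~0.409: heavy edges inside communities
print(modularity(adj, [{1, 2}, {0, 3}]))  # ~-0.409: light edges inside communities
```

The partition keeping the heavy matching inside communities scores much higher, so modularity-optimizing methods such as Louvain will indeed pull high-weight edges inside communities.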
That said, you do have to be careful. There are community detection methods that do treat weights as distances:
SO: meaning of weights in community detection algorithms
It may be worth running a simple example when first using a specific algorithm implementation; e.g., with $G=C_4$, weight one perfect matching very much higher than the other: the communities should end up being two $K_2$s, and you want to know whether they contain the high- or low-weight edges. | {
"domain": "datascience.stackexchange",
"id": 8492,
"tags": "graphs, community"
} |
Current map provided with laser scan | Question:
Hi, I am new to ROS and I want to do SLAM using the gmapping package with laser scan and odometry data.
As a first step, I just want to see the current map provided by the laser scan at a fixed location (x = 0, y = 0, theta = 0) in rviz.
I have a laser scan topic but I don't have odometry data yet, so I want to make fake odometry data. For this I think I should publish TF data with all zeros.
I think I should do something like these links:
1
2
3
Am I on the right path? Could you please tell me what I should do in detail, or show me some references?
Originally posted by AliAs on ROS Answers with karma: 63 on 2014-09-10
Post score: 0
Answer:
If you're using a laser scanner, hector_mapping can be easier than gmapping. Launch the mapping program and publish a static 0,0,0 transform between base_link and laser to account for your non-existent odometry, and you should be set.
<node pkg="tf" type="static_transform_publisher" name="base_to_laser_tf" args="0 0 0 0 0 0 base_link laser 100"/>
Just my two cents.
Originally posted by SaiHV with karma: 275 on 2014-09-10
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 19365,
"tags": "ros, navigation, learning-tf, gmapping, laserscan"
} |
Does the radio (between two co-moving astronauts) stop working when crossing the event horizon? | Question: There are a lot of questions about crossing the EH (event horizon) of a black hole on this site.
Some of them suggest that when you cross the horizon, nothing special happens (you don't even notice crossing the horizon), and some suggest that it is even impossible to detect the horizon locally.
Nothing special happens to the observer as they cross the event horizon.
Falling into a black hole
In your co-ordinate system you will notice nothing unusual.
What do you feel when crossing the event horizon?
There will be no discontinuity in behaviour at the event horizon.
Taking selfies while falling, would you be able to notice a horizon before hitting a singularity?
Now there are others, who suggest that inside the horizon, everything, including light must move towards the singularity, the singularity becomes a point in time (future).
So inside the horizon even a light ray directed outwards actually moves inwards not outwards.
How does light behave within a black hole's event horizon?
https://arxiv.org/abs/2002.01135
Is the event horizon locally detectable?
Based on the first one, when two astronauts cross the EH together, their walkie talkie (or radio) could keep working.
Based on the second one, this is not so clear. Obviously, outside the horizon, the radio still works, because EM waves from the sender still spread spherically, and would still reach the receiver. But once you cross the horizon, the curvature becomes so extreme that the escape velocity exceeds the speed of light. Thus, EM waves would not spread spherically anymore, but only towards the singularity. Based on this, the EM waves from the sender might not be able to reach the receiver anymore, and thus the radio stops working when crossing the EH.
Just to make it clear, I am asking about two astronauts, co-moving, falling in together, and will the radio stop working between the two of them?
Question:
Does the radio (between two co-moving astronauts) stop working when crossing the event horizon?
Answer:
Does the radio (between two co-moving astronauts) stop working when crossing the event horizon?
Assuming that the black hole is massive enough that there are negligible tidal effects at the horizon then their radios would continue to work and their conversation would carry on without a pause.
Now there are others, who suggest that inside the horizon, everything, including light must move towards the singularity, the singularity becomes a point in time (future).
This is true also. There is no contradiction between the two claims. Because the astronauts are also falling in towards the singularity it is not necessary for light to go outward in order to go from one astronaut to the other. If you draw the worldlines of the communications you will find that indeed they never go outward. | {
"domain": "physics.stackexchange",
"id": 69880,
"tags": "quantum-mechanics, electromagnetism, general-relativity, electromagnetic-radiation"
} |
Anti-Markovnikov addition of HCl | Question: The following reaction has been troubling me for a while now.
Since peroxides won't give the anti-Markovnikov product when used with HCl, I cannot think of any reagents that would result in the product shown in the image above. There doesn't seem to be any Wagner–Meerwein rearrangement either. This left me wondering, what reagent(s) should be used to form the product shown above.
Answer: One way to do it would be to first do a hydroboration-oxidation reaction, which leads to the anti-Markovnikov addition of $\ce{H2O}$. So you will get an $\ce{-OH}$ group instead of $\ce{-Cl}$. Then, we can replace the $\ce{-OH}$ group with $\ce{-Cl}$ by treating it with $\ce{PCl3}$ or $\ce{PCl5}$. | {
"domain": "chemistry.stackexchange",
"id": 13646,
"tags": "organic-chemistry, c-c-addition, rearrangements"
} |
Would the subsurface event leading to tree death at West Tern Lake in Yellowstone be visible in thermal imaging? | Question: The New York Times article It’s Warm and Stealthy, and It Killed Yellowstone Trees and Turned Soil Pale describes an event that took place over the past few decades where a small and well-defined forested spot in Yellowstone National Park underwent some change, killing all of the trees growing there.
Credit: United States Geological Survey
The article explains the possible explanation that the trees died from changes in subsurface water chemistry.
Dr. Vaughan perused the aerial shots taken of the suspicious region. In 1994, nothing seemed amiss and trees were growing healthily. However, from then through 2017, trees had perished, and the soil had turned a sickly off-white color. The only reasonable explanation was that a new thermal area had been covertly growing in the region since the late-1990s, altering the ground’s chemistry with its superheated fluids.
Parenthetically speaking, to me "superheated fluids" sounds like something that could kill trees thermally, even without chemistry.
Would heating like this show up as a hot spot in thermal IR Earth imaging? Or is this likely to have been too deep to result in a significant surface temperature change?
Google maps screen shot centered at 44.663584 N, 110.278843 W:
Answer: If it’s hot enough to alter the groundwater chemistry that is killing the trees, then you would see it in satellite thermal imaging. Remember, tree roots only go down about 20 to 30 feet at the most, so this is all pretty shallow stuff. This is actually quite a strong thermal anomaly. Like you said, the heat alone may have killed the tree/tree roots. The white areas of the photos are likely salt deposits from the thermal water left behind after evaporation. So again, the thermal water is shallow; indeed, it’s right at the surface. Pretty cool. | {
"domain": "earthscience.stackexchange",
"id": 1846,
"tags": "groundwater, earth-observation, geothermal-heat, underground-water"
} |
Creation of skip list: Las Vegas or Monte Carlo? | Question: I have come across this video on skip lists:
https://www.youtube.com/watch?v=UGaOXaXAM5M
Clearly, the creation of skip-list from a sorted singly linked list is a randomized algorithm.
But I am confused: is it a Las Vegas algorithm or a Monte Carlo algorithm?
I feel it is a Monte Carlo algorithm since there is a possibility of an incorrect output (degenerate case where every node is in every level), and if we don't maintain a checking condition, we won't know where the algorithm went wrong.
Am I correct?
P.S. My first question here.
Answer: The difference between a Las Vegas algorithm and a Monte Carlo algorithm is that a Las Vegas algorithm is always correct but its running time may be large with small probability, whereas a Monte Carlo algorithm always has the same time complexity but may give wrong results with small probability.
These definitions are more appropriate for decision problems and optimization problems, and less for data structures. That said, they still make sense for data structures. Here are two examples:
Skip-lists correctly implement the list abstract data type. They are efficient with high probability. This is like a Las Vegas algorithm.
Bloom filters are always efficient (in this case we care about space complexity), but may give wrong results to queries. This is like a Monte Carlo algorithm.
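To make the skip-list example concrete: the only randomness is in the coin-flipped node heights. Those heights determine the search cost, never the contents, which is exactly the Las Vegas trade-off. A minimal sketch of the level generator (parameter values are illustrative):

```python
import random

def random_level(p=0.5, max_level=16):
    """Coin-flip height used when inserting a node into a skip list."""
    level = 1
    while random.random() < p and level < max_level:
        level += 1
    return level

random.seed(0)
levels = [random_level() for _ in range(1000)]
# The list itself is always correct and sorted; only these heights
# (and hence the expected O(log n) search time) are random.
print(max(levels), sum(levels) / len(levels))  # mean is about 1/(1-p) = 2
```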
Most data structures are designed with the Las Vegas objective in mind. | {
"domain": "cs.stackexchange",
"id": 17708,
"tags": "data-structures, randomized-algorithms"
} |
Converting a state from hash into a JavaScript object | Question: This code gets a state from a query string hash part, converts the hash part to a JavaScript object and then tries to get a view, dashboard or a mode value. If those keys are not present in object, it returns $.pulse.qs.views.home.
Does this JavaScript code with ternary operator follow common best practices? It looks like ternary operator abuse and it is not readable. Or is it just me?
var state = $.query.getState();
var screen = state[$.pulse.qs.view]
? state[$.pulse.qs.view]
: state[$.pulse.qs.dashboard] || state[$.pulse.qs.mode]
? $.pulse.qs.views.dashboard
: $.pulse.qs.views.home;
Answer: I'm not aware of a definitive standard concerning the reasonable limits of the ternary operator, in any language.
The general rule of thumb is to avoid the ternary operator when it hurts readability,
but that's subjective and may change from person to person, team to team.
See also this related discussion.
What can be said fairly objectively, the ternary operator is best when:
Used without nesting (single ternary)
Used for a trivial simplification in a perfectly clear way
As the first answer on the linked question points out,
if you find the code unreadable, then it's probably time to change it.
But this is a matter of taste.
I'm on the fence, for example: thanks to the indentation,
the code seems still readable to me.
If it's not up to you to change and rewrite with nicely spelled out if-else statements,
then that's tough luck,
because I can't really think of a compelling argument to prove that this piece of code is fundamentally wrong. | {
"domain": "codereview.stackexchange",
"id": 11588,
"tags": "javascript"
} |
does aluminum oxide float on molten aluminum and by what mechanism? | Question: Not really a chemistry question since no reactions are happening, but I couldn't find a more appropriate Stack Exchange site.
Considering recycling aluminum cans for scrap metal using induction heating. I've seen people do it with very unsophisticated equipment, but what I don't get is Al2O3 is denser (3.95g/mL) than molten aluminum (2.38 g/mL).
I suppose surface tension can stop any oxide that forms at the surface of the melt from sinking, but how strong is that? Is it resistant to mild stirring (e.g. when you skim off the dross)? Are there other examples of denser materials floating?
Answer: A chunk of $\ce{Al2O3}$ would sink in molten aluminum. However, $\ce{Al2O3}$ usually appears in smaller pieces (down to microns), with irregular surfaces, probably filled with air ($\ce{N2}$) from when it was cool. A mix of metallic aluminum and $\ce{Al2O3}$ could allow aluminum to sink through the solid particles after melting, allowing the particles to rise above the surface of the liquid. $\ce{Al2O3}$ has a very high melting point.
The mass of solid particles would have some very weak structure, and some buoyancy by trapped air. The surface tension of aluminum is very high (~900 dyne/cm), so it might very well account for the covering of the metal by a heavier substance and the non-wetting of the oxide.
This is reminiscent of floating a needle on water.
In the electrolytic production of aluminum from $\ce{Al2O3}$:
The electrolyte is a molten bath of cryolite ($\ce{Na3AlF6}$) and
dissolved $\ce{Al2O3}$. Cryolite is a good solvent for $\ce{Al2O3}$
with low melting point, satisfactory viscosity, low vapour pressure,
and density lower than that of liquid aluminum (2 vs 2.3 g/cm3), which
allows natural separation of the product from the salt at the bottom
of the cell.
References
https://link.springer.com/article/10.1007%2Fs11663-999-0108-4
https://en.wikipedia.org/wiki/Aluminium_smelting | {
"domain": "chemistry.stackexchange",
"id": 12271,
"tags": "metallurgy"
} |
How to find the altitude above mean sea level of the wind in forecast data such as GFS | Question: I'm looking to retrieve from weather forecasts (such as GFS or HRRR) the U and V component of the wind with respect to some altitude (meters) above mean sea level.
To do so, I've implemented an interpolation following the same idea as in the link below.
https://stackoverflow.com/a/61107634/6528830
However, I'm confused by the results: it would seem like part of some pressure level would be underground.
If I look at the geopotential height @ ground or water surface, it seems to match the terrain elevation. Yet I don't believe it matches exactly; is this normal?
However, it's the 1000 mb geopotential that really confuses me:
How could some points be lower than in the gph@ground?
What am I not understanding about this data and how can I use it to figure out what is the wind at specifics altitudes above mean sea level?
Answer: It makes sense, some locations have a surface pressure under 1000 mb... which means to get to 1000 mb you'd have to go below ground (the basic idea of the calculation is that pressure decreases as you ascend in the atmosphere... so to get it to increase from sea level pressure, you'd have to go the other direction, underground).
It certainly hints at being unrealistic... but I would think if you dug a deep enough, wide enough hole, you would get to the given pressure.
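A back-of-the-envelope hypsometric estimate shows the same thing; the temperature and pressures below are illustrative, not taken from the model:

```python
import math

# Hypsometric relation: height of a target pressure surface above sea level,
# assuming a constant mean layer temperature T (all values illustrative)
Rd, g, T = 287.0, 9.81, 280.0   # gas constant J/(kg K), gravity m/s^2, layer temp K

def height_of_level(p_sea_level_mb, p_target_mb):
    return (Rd * T / g) * math.log(p_sea_level_mb / p_target_mb)

print(height_of_level(992.0, 1000.0))   # about -66 m: 1000 mb is "below ground"
print(height_of_level(1020.0, 1000.0))  # about +162 m: 1000 mb sits above sea level
```

With a 992 mb low, the 1000 mb surface comes out at a negative geopotential height, exactly the behavior in the maps above.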
Here's the sea level pressure map similar to your 1000 mb map:
(Source: weathernerds.org)
Those negative height areas indeed match a chain of lows with surface pressure below 1000 mb in the Southern Ocean and a 992 mb low over Europe. So if the station is at 992 mb... you'd have to go even lower in the atmosphere (= negative height) to get to 1000 mb.
This is actually a common interpolation on pressure maps in meteorology... mean sea level pressure itself... probably the most common map we use... is found at a height that is actually below ground at most places, because almost all locations are somewhat above sea level. But by comparing on a flat surface it allows us to better identify the weather-causing features, factoring out the otherwise overwhelming factor of pressure decrease with altitude above sea level (your ears pop on mountains despite being at 0 ft above the ground because the atmospheric pressure is still less because there's less atmosphere above you)
Even at 850 mb, another common geopotential height for analysis... we can plot the heights above sea level... but we often will choose to mask out the model's winds in spots to show the 850 height is truly subterranean there, so the model's predicted winds are of imaginary quality, being below ground level:
(From pivotalweather)... the grey is masked out wind.
So all your plots looks quite reasonable, and hopefully that helps make sense of the concept. Not every location has 1000 mb (or even 850 mb) in its atmosphere... both because some sites have sea-level pressures below that value (the cause of the issues on your map, as geopotential is based upon sea-level), and also because the elevation of some sites means lower surface pressure (not the cause of your negative geopotential heights, but the cause of the wind maskout on some such plots as shown). | {
"domain": "earthscience.stackexchange",
"id": 2462,
"tags": "wind, gfs, altitude"
} |
(How) Could we discover/analyze NP problems in the absence of the Turing model of computation? | Question: From a purely abstract math/computational reasoning point of view, (how) could one even discover or reason about problems like 3-SAT, Subset Sum, Traveling Salesman etc.,? Would we be even able to reason about them in any meaningful way with just the functional point of view? Would it even be possible?
I've been mulling over this question purely as a matter of self-inquiry while learning the lambda calculus model of computation. I understand that it's "non-intuitive" and that's why Gödel favored the Turing model. However, I just wish to know what the known theoretical limitations of this functional style of computation are, and how much of a hindrance it would be for analyzing the NP class of problems.
Answer: You may wish to look at cost semantics for functional languages. These are various computational complexity measures for functional languages that do not pass through any kind of Turing machine, RAM machine, etc. A good place to start looking is this Lambda the Ultimate post, which has some good further references.
Section 7.4 of Bob Harper's Practical Foundations for Programming Languages explains the costs semantics.
The paper On the relative usefulness of fireballs by Accattoli and Coen shows that $\lambda$-calculus has at most linear blowup with respect to the RAM machine model.
In summary, on this other planet things would be pretty much the same with regards to NP, but there would be fewer buffer overflows, and there wouldn't be as much garbage lying around. | {
"domain": "cstheory.stackexchange",
"id": 3872,
"tags": "soft-question, lambda-calculus, turing-machines, functional-programming, np"
} |
Finding mass ratio from the collision of two bodies | Question: I am currently studying Classical Mechanics, fifth edition, by Kibble and Berkshire. Problem 1 of chapter 1 is as follows:
An object $A$ moving with velocity $\mathbf{v}$ collides with a stationary object $B$. After the collision, $A$ is moving with velocity $\dfrac{1}{2}\mathbf{v}$ and $B$ with velocity $\dfrac{3}{2}\mathbf{v}$. Find the ratio of their masses. If, instead of bouncing apart, the two bodies stuck together after the collision, with what velocity would they then move.
I've read over the chapter, trying to find some indication of how to do this problem, but I do not see how this is possible with the information given. The most relevant equation that I could find relates to the law of conservation of momentum:
If we allow two small bodies to collide, then during the collision the effects of more remote bodies are generally negligible in comparison with their effect on each other, and we may treat them approximately as an isolated system. (Such collisions will be discussed in detail in Chapter 2 and 7.) The mass ratio can then be determined from measurements of their velocities before and after the collision, by using (1.7) or its immediate consequence, the law of conservation of momentum,
$$m_1 \mathbf{v}_1 + m_2 \mathbf{v}_2 = \text{constant}. \tag{1.8}$$
But this is all that is written on the subject, and I wonder if more information on the law of conservation of momentum is required in order to solve this problem.
The solution is said to be $m_A/m_B = 3$; $3\mathbf{v}/4$.
I would greatly appreciate it if people would please take the time to explain how this problem can be solved.
Answer: Conservation of momentum in this case reads
$$m_1\mathbf v=\tfrac 1 2m_1\mathbf v+\tfrac 3 2m_2\mathbf v$$
which is just plugging in the velocities of before and after the collision. This is enough information to solve for $m_1/m_2$.
In the second case the velocity after the collision is unknown, so let's introduce $\mathbf{v}_1',\mathbf{v}_2'$ as the velocities after the collision. We know they stick together so we can simplify it to $\mathbf v'=\mathbf{v}_1'=\mathbf{v}_2'$. What equation do you get when you plug this in for conservation of momentum?
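For reference, carrying the algebra through reproduces the answers quoted in the question ($m_A/m_B = 3$ and $3\mathbf v/4$):

```latex
% Bouncing case: cancel the common factor v and solve for the mass ratio
m_1 \mathbf v = \tfrac{1}{2} m_1 \mathbf v + \tfrac{3}{2} m_2 \mathbf v
\quad\Longrightarrow\quad
\tfrac{1}{2} m_1 = \tfrac{3}{2} m_2
\quad\Longrightarrow\quad
\frac{m_1}{m_2} = 3

% Sticking case: both bodies share the single final velocity v'
m_1 \mathbf v = (m_1 + m_2)\,\mathbf v'
\quad\Longrightarrow\quad
\mathbf v' = \frac{m_1}{m_1 + m_2}\,\mathbf v = \frac{3}{4}\,\mathbf v
```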
Edit: I used $m_1,m_2$ like in the general case but you can ofcourse replace $m_1$ by $m_A$ and $m_2$ by $m_B$. | {
"domain": "physics.stackexchange",
"id": 68535,
"tags": "homework-and-exercises, classical-mechanics, momentum, mass, conservation-laws"
} |
What is the hydrodynamical explanation for the 'footprint' of a diving whale? | Question: When a whale dives it leaves behind a so called 'footprint'. The water seems to be calmer or the surface is at least more smooth an shows less wrinkles.
Image source
I read some texts which were talking about a 'wake of the diving whale', but I wanted to understand it from a fluid-dynamics (two-phase) point of view, so I thought asking here might be a good idea.
Answer: The whale creates a vortex ring with its tail, which moves upwards and creates an oval patch of outward current on the surface: short surface waves can't propagate (well) against the current, and that's why the patch is smoother than the ocean around it.
Source: The flow induced by a mock whale : origin of flukeprint formation. | {
"domain": "physics.stackexchange",
"id": 43437,
"tags": "fluid-dynamics, water, surface-tension"
} |
Spin conservation in pair production | Question: In QED, when two photons collide, they can turn into an electron and positron pair. We know from $U(1)$ gauge symmetry that the total charge of the initial and final states must be conserved. On the other hand, I expect that the total spin must also be conserved. But I do not quite get the details of how this works.
In this post the total spin of two-photon-state is discussed. Based on the transversality argument, OP argues there are three distinct spin states associated with the two-photon system. Two of them correspond to the spin-0 representation and the remaining one corresponds to a spin-2 state.
Based on the above argument, if the total spin in pair production is to be conserved, I would assume that the incoming photons must be in the spin-0 state, excluding the spin-2 state because the spin-state of the created electron-positron pair does not have a spin-2 representation. As far as I know, this spin state can have one spin-0 rep. and three spin-1 rep.
Edit: Also, in Wikipedia page there is Landau–Yang theorem, stating that a massive particle with spin 1 cannot decay into two photons. I suspect this selection rule follows from the requirement of the conservation of the total spin. Because as suggested in the linked question two-photon state does not have a spin-1 rep.
Is this reasoning correct?
The second point is about symmetry. If the total spin is to be conserved, what is the associated symmetry? I am thinking it must the rotational invariance of pair production amplitude. But what do the generators of this rotational symmetry look like? and where do they act? These generators must not correspond to the ordinary rotations in space. Because this would correspond to the conservation of orbital angular momentum, not spin.
Answer: Spin angular momentum is not conserved; only the sum of spin and orbital angular momentum is conserved. As a trivial example of this, consider a hydrogen atom decaying from $2p$ to $1s$ by emitting a photon. The photon carries one unit of angular momentum, but the spin of the electron doesn't change; instead orbital angular momentum is lost.
Furthermore, in many situations you can't even unambiguously define the two separately (how much of the proton's angular momentum is due to the angular momentum of its constituents?), so "conservation of spin" is not even meaningful. Conservation of total angular momentum is always meaningful, because it's the conserved quantity associated with rotational symmetry.
Based on the above argument, if the total spin in pair production is to be conserved, I would assume that the incoming photons must be in the spin-0 state, excluding the spin-2 state because the spin-state of the created electron-positron pair does not have a spin-2 representation. As far as I know, this spin state can have one spin-0 rep. and three spin-1 rep.
No, because the electron and positron can come out in the $p$-wave, carrying orbital angular momentum. This is called $p$-wave annihilation, and it's not an exotic phenomenon; for instance, it shows up in the partial wave expansion in undergraduate quantum mechanics.
Landau–Yang theorem, stating that a massive particle with spin 1 cannot decay into two photons. I suspect this selection rule follows from the requirement of the conservation of the total spin.
The Landau-Yang theorem doesn't state that spin is conserved. Essentially, it uses the fact that total angular momentum is conserved, along with the fact that in this simple situation, there is no orbital angular momentum: you can always go to the rest frame of the massive particle, and in that frame the photons always come out back to back. | {
"domain": "physics.stackexchange",
"id": 70986,
"tags": "quantum-field-theory, particle-physics, angular-momentum, conservation-laws, quantum-electrodynamics"
} |
Physical meaning of relativistic saturation of an instability? | Question: In the derivation of the Rayleigh-Taylor instability when the fluid is in the extreme relativistic limit ($\rho_0 c^2 \ll p$) and there is a large effective gravity ($ g \gg kc^2$), where $\rho_0$, $p$, $g$, and $k$ are the rest mass fluid density, pressure, effective acceleration, and wave vector for the instability, respectively, then the instability growth rate is observed to be independent of $g$. What phenomenologically permits this saturation in growth rate to not further increase? Cannot the instability continue increasing if energy is available through $g$?
Answer: The Rayleigh-Taylor instability is driven by a difference in potential energy density between two fluids. If the interface between the fluids is disturbed by a small perturbation of size $z$, then the pressure from above changes by $\rho_1 g z$, while the pressure from below changes by $\rho_2 g z$. If $\rho_1$ is greater than $\rho_2$, then the perturbation grows. The growth rate is determined by $\omega^2 = gk \eta$ where $\eta$ is the Atwood number
$$
\eta = \frac{\rho_1 - \rho_2}{\rho_1 + \rho_2}
$$
Physically, the numerator represents the pressure difference that drives the instability, while the denominator represents the inertia of the fluid that has to move as the instability grows. We can draw a loose analogy with Newton's law $F = ma$, with $\rho_1 - \rho_2$ as the "force," $\rho_1 + \rho_2$ as the "mass," and the Atwood number $\eta$ as the "acceleration."
In the Newtonian regime, that's all we need to worry about. But in the relativistic regime, the energy of the fluid becomes a significant contributor to its mass-energy. As a result, the fluid has a higher-than-expected inertia and the denominator of the Atwood number increases. We can take this into account by correcting the Atwood number to
$$
\eta_{rel} = \frac{\rho_1 - \rho_2}{8p/c^2 + \rho_1 + \rho_2}
$$
In the highly relativistic regime, the $8p/c^2$ term overwhelms the rest of the denominator, so we get a very small Atwood number and the Rayleigh-Taylor instability grows much more slowly than we'd expect.
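Plugging illustrative (made-up) numbers into these expressions shows the suppression; once the pressure term dominates, the effective Atwood number, and with it $\omega^2 = gk\eta_{rel}$, collapses:

```python
c = 3.0e8  # speed of light, m/s

def atwood(rho1, rho2, p=0.0):
    # eta_rel from the formula above; p = 0 recovers the Newtonian Atwood number
    return (rho1 - rho2) / (8.0 * p / c**2 + rho1 + rho2)

rho1, rho2 = 10.0, 1.0                     # illustrative densities
eta_newton = atwood(rho1, rho2)            # ~0.82
eta_rel = atwood(rho1, rho2, p=1.0e18)     # illustrative extreme pressure
print(eta_newton, eta_rel)                 # the relativistic value is far smaller
```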
That still isn't a full explanation, though, because the corrected growth rate $\omega^2 = gk\eta_{rel}$ still depends on $g$. Indeed, the growth rate does not saturate for incompressible fluids. When the fluid is compressible and $g$ is very high, however, another effect comes into play. As the instability grows, the local density in the instability must rise because the fluid is compressed by the $g$-force. (Just as Earth's atmosphere becomes denser with lower altitude.) Without this change in density, the pressure would not be able to rise sufficiently to fuel further instability growth. But the density can only increase if molecules are pulled in from other parts of the fluid. The rate at which this can happen is limited by the speed of sound in the fluid (since the molecules are attracted towards the instability by pressure differences rather than the $g$-force). That's what causes the growth rate to saturate.
Aside: In relativity, of course, there is a hard upper bound on the growth rate because molecules cannot move faster than the speed of light. Note that saturation happens before this limit is reached.
As you correctly pointed out in your comment, the same thing can happen in the non-relativistic case. (You can see this from the equations given by Livescu for the Rayleigh-Taylor instability in compressible non-relativistic fluids.) In practice, this is not especially significant because we rarely see situations with very high $g$ and low Atwood number. Because relativity decreases the effective Atwood number, however, saturation becomes somewhat more important in the relativistic regime (though it's still not especially relevant in most astrophysical situations, for the reasons given by Allen and Hughes).
Sorry if that all seems a bit "hand-wavey," but I don't think I can make it any more specific without the seriously complex math in the Allen and Hughes paper. | {
"domain": "physics.stackexchange",
"id": 48889,
"tags": "fluid-dynamics, relativity, stability"
} |
The unreasonable power of non-uniformity | Question: From the common sense point of view, it is easy to believe that adding non-determinism to $\mathsf{P}$ significantly extends its power, i.e., $\mathsf{NP}$ is much larger than $\mathsf{P}$. After all, non-determinism allows exponential parallelism, which undoubtedly appears very powerful.
On the other hand, if we just add non-uniformity to $\mathsf{P}$, obtaining $\mathsf{P}/poly$, then the intuition is less clear (assuming we exclude non-recursive languages that could occur in $\mathsf{P}/poly$). One could expect that merely allowing different polynomial time algorithms for different input lengths (but not leaving the recursive realm) is a less powerful extension than the exponential parallelism in non-determinism.
Interestingly, however, if we compare these classes with the very large class $\mathsf{NEXP}$, then we see the following counter-intuitive situation. We know that $\mathsf{NEXP}$ properly contains $\mathsf{NP}$, which is not surprising. (After all, $\mathsf{NEXP}$ allows doubly exponential parallelism.) On the other hand, currently we cannot rule out $\mathsf{NEXP}\subseteq \mathsf{P}/poly$.
Thus, in this sense, non-uniformity, when added to polynomial time, possibly makes it extremely powerful, potentially more powerful than non-determinism. It might even go as far as to simulate doubly exponential parallelism! Even though we believe this is not the case, the fact that it currently cannot be ruled out still suggests that complexity theorists are struggling with "mighty powers" here.
How would you explain to an intelligent layman what is behind this "unreasonable power" of non-uniformity?
Answer: A flip answer is that this isn't the first thing about complexity theory that I'd try to explain to a layperson! To even appreciate the idea of nonuniformity, and how it differs from nondeterminism, you need to be further down in the weeds with the definitions of complexity classes than many people are willing to get.
Having said that, one perspective that I've found helpful, when explaining P/poly to undergraduates, is that nonuniformity really means you can have an infinite sequence of better and better algorithms, as you go to larger and larger input lengths. In practice, for example, we know that the naïve matrix multiplication algorithm works best for matrices up to size 100x100 or so, and then at some point Strassen multiplication becomes better, and then the more recent algorithms only become better for astronomically-large matrices that would never arise in practice. So, what if you had the magical ability to zero in on the best algorithm for whatever range of n's you happened to be working with?
Sure, that would be a weird ability, and all things considered, probably not as useful as the ability to solve NP-complete problems in polynomial time. But strictly speaking, it would be an incomparable ability: it's not one that you would get automatically even if P=NP. Indeed, you can even construct contrived examples of uncomputable problems (e.g., given 0n as input, does the nth Turing machine halt?) that this ability would allow you to solve. So, that's the power of nonuniformity.
To understand the point of considering this strange power, you'd probably need to say something about the quest to prove circuit lower bounds, and the fact that, from the standpoint of many of our lower bound techniques, it's uniformity that seems like a weird extra condition that we almost never need. | {
"domain": "cstheory.stackexchange",
"id": 2684,
"tags": "cc.complexity-theory, complexity-classes, big-picture"
} |
How to calculate a meaningful distance between multidimensional tensors | Question: TLDR: given two tensors $t_1$ and $t_2$, both with shape $(c,h,w),$ how shall the distance between them be measured?
More Info: I'm working on a project in which I'm trying to distinguish between an anomalous sample (specifically from MNIST) and a "regular" sample (specifically from CIFAR10). The solution I chose is to consider the feature maps that are given by ResNet and use kNN. More specifically:
I embed the entire CIFAR10_TRAIN data to achieve a dataset that consists of activations with dimension $(N,c,h,w)$ where $N$ is the size of CIFAR_TRAIN
I embed $2$ new test samples $t_C$ and $t_M$ from CIFAR10_TEST and MNIST_TEST respectively (both with shape $(c,h,w)$), same as I did with the training data.
(!) I find the k-Nearest-Neighbours of $t_C$ and $t_M$ w.r.t the embedding of the training data
I calculate the mean distance between the $k$ neighbors
Given some predefined threshold, I classify $t_C$ and $t_M$ as regular or anomalous, hoping that the distance for $t_M$ would be higher, as it represents O.O.D sample.
Notice that in (!) I need some distance measure, but this is not trivial as these are tensors, not vectors.
What I've Tried: a trivial solution is to flatten the tensor to have shape $(c\cdot h\cdot w)$ and then use basic $\ell_2$, but the results turned out pretty bad. (could not distinguish regular vs anomalous in this case). Hence: Is there a better way of measuring this distance?
Answer: You could try an earth mover distance in 2d or 3d over the image? For example you could follow this example, but call it sequentially. The idea would be something like the following (untested and written on my cell phone):
import torch

def cumsum_3d(a):
    a = torch.cumsum(a, -1)
    a = torch.cumsum(a, -2)
    a = torch.cumsum(a, -3)
    return a

def norm_3d(a):
    return a / torch.sum(a, dim=(-1,-2,-3), keepdim=True)

def emd_3d(a, b):
    a = norm_3d(a)
    b = norm_3d(b)
    return torch.mean(torch.square(cumsum_3d(a) - cumsum_3d(b)), dim=(-1,-2,-3))
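Since the snippet above is declared untested, here is a tiny pure-Python 1-D analogue of the same cumulative-sum idea that can be checked by hand (helper names are mine; for real use, the torch version is the one to adapt):

```python
# 1-D analogue of the cumulative-sum EMD sketch above, in plain Python.
# For normalized 1-D histograms the earth mover distance is a function of
# the difference between cumulative sums; the mean squared difference is
# used here to mirror the torch version.

def cumsum_1d(a):
    out, total = [], 0.0
    for v in a:
        total += v
        out.append(total)
    return out

def norm_1d(a):
    s = sum(a)
    return [v / s for v in a]

def emd_1d(a, b):
    ca, cb = cumsum_1d(norm_1d(a)), cumsum_1d(norm_1d(b))
    return sum((x - y) ** 2 for x, y in zip(ca, cb)) / len(ca)

print(emd_1d([1, 2, 1], [1, 2, 1]))  # 0.0 for identical histograms
print(emd_1d([4, 0, 0], [0, 0, 4]) > emd_1d([4, 0, 0], [0, 4, 0]))  # True: moving mass farther costs more
```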
This should also work with batched data. I would also try normalizing the images first (so they each sum to 1) unless you want to account for changes in intensity. | {
"domain": "ai.stackexchange",
"id": 3402,
"tags": "features, feature-extraction, metric, anomaly-detection, tensor"
} |
How the concentration of an antibiotic is chosen for YPD medium? | Question: I need to prepare YPD plates containing hygromycin, but I don't know which concentration I can use. I'll select transformants of Hansenula polymorpha.
Answer: According to this site,
for yeast (I assume this means Saccharomyces cerevisiae) you should use 50 µg/ml to 200 µg/ml; for fungi (!) use 100 µg/ml to 300 µg/ml. The site also stresses the importance of the pH of the medium.
I think that, unless a Hansenula expert comes along, you will have to try an initial experiment to measure the sensitivity of untransformed cells across the range indicated. Then you can try selecting resistant cells at a concentration somewhat higher than that which kills untransformed cells. Finally, you can check to see how resistant the transformants are, to enable you to decide on a useful concentration for routine selection of transformants. | {
"domain": "biology.stackexchange",
"id": 943,
"tags": "antibiotics, yeast, transformation, medium"
} |
rtabmap_ros tutorial zed_stereo_throttle.launch gives unused args error | Question:
At the moment I'm trying to use remote mapping on a robot with a jetson tx2 and a laptop with ubuntu 16.04.
Both run with ros kinetic.
Following this tutorial, http://wiki.ros.org/rtabmap_ros/Tutorials/RemoteMapping, I built the throttle launch file on the Jetson and received the following error.
~/catkin_ws$ roslaunch zed_wrapper zed_stereo_throttle.launch
... logging to /home/nvidia/.ros/log/03f8b8c0-5fa8-11ea-bbd6-b0fc36b76f7b/roslaunch-tegra-ubuntu-5544.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
unused args [resolution, publish_tf, frame_rate] for include of [/home/nvidia/catkin_ws/src/zed-ros-wrapper/zed_wrapper/launch/zed_camera_nodelet.launch]
The traceback for the exception was written to the log file
I'm kinda lost here because, looking at the code for the nodelet, I can clearly see the args being used.
Does someone have any idea as to what I did wrong?
Thank you in advance, and for reading this.
Edit:
Thanks to matlabbe's answer I could start the launch file, but got this enormous error message:
nvidia@tegra-ubuntu:~$ roslaunch zed_wrapper zed_stereo_throttle.launch
... logging to /home/nvidia/.ros/log/a53abca6-638b-11ea-9403-00187daf6704/roslaunch-tegra-ubuntu-4469.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://192.168.0.15:41020/
SUMMARY
========
PARAMETERS
* /camera/stereo_sync/approx_sync: False
* /camera/stereo_sync/compressed_rate: 5.0
* /camera/zed_description: <?xml version="1....
* /camera/zed_node/auto_exposure: True
* /camera/zed_node/camera_model: zed
* /camera/zed_node/confidence: 100
* /camera/zed_node/depth/confidence_root: confidence
* /camera/zed_node/depth/depth_stabilization: 1
* /camera/zed_node/depth/depth_topic_root: depth
* /camera/zed_node/depth/disparity_topic: disparity/dispari...
* /camera/zed_node/depth/min_depth: 0.3
* /camera/zed_node/depth/openni_depth_mode: 0
* /camera/zed_node/depth/point_cloud_topic_root: point_cloud
* /camera/zed_node/depth/quality: 1
* /camera/zed_node/depth/sensing_mode: 0
* /camera/zed_node/exposure: 100
* /camera/zed_node/gain: 100
* /camera/zed_node/general/base_frame: base_link
* /camera/zed_node/general/camera_flip: False
* /camera/zed_node/general/camera_frame: zed_camera_center
* /camera/zed_node/general/frame_rate: 30
* /camera/zed_node/general/gpu_id: -1
* /camera/zed_node/general/left_camera_frame: zed_left_camera_f...
* /camera/zed_node/general/left_camera_optical_frame: zed_left_camera_o...
* /camera/zed_node/general/resolution: 2
* /camera/zed_node/general/right_camera_frame: zed_right_camera_...
* /camera/zed_node/general/right_camera_optical_frame: zed_right_camera_...
* /camera/zed_node/general/self_calib: True
* /camera/zed_node/general/serial_number: 0
* /camera/zed_node/general/svo_compression: 4
* /camera/zed_node/general/verbose: True
* /camera/zed_node/general/zed_id: -1
* /camera/zed_node/mapping/fused_pointcloud_freq: 1.0
* /camera/zed_node/mapping/mapping_enabled: False
* /camera/zed_node/mapping/resolution: 1
* /camera/zed_node/mat_resize_factor: 1.0
* /camera/zed_node/max_depth: 20.0
* /camera/zed_node/point_cloud_freq: 10.0
* /camera/zed_node/stream:
* /camera/zed_node/svo_file:
* /camera/zed_node/tracking/fixed_cov_value: 1e-6
* /camera/zed_node/tracking/fixed_covariance: False
* /camera/zed_node/tracking/fixed_z_value: 1.0
* /camera/zed_node/tracking/floor_alignment: False
* /camera/zed_node/tracking/init_odom_with_first_valid_pose: True
* /camera/zed_node/tracking/initial_base_pose: [0.0, 0.0, 0.0, 0...
* /camera/zed_node/tracking/map_frame: map
* /camera/zed_node/tracking/odometry_db:
* /camera/zed_node/tracking/odometry_frame: odom
* /camera/zed_node/tracking/odometry_topic: odom
* /camera/zed_node/tracking/path_max_count: -1
* /camera/zed_node/tracking/path_pub_rate: 2.0
* /camera/zed_node/tracking/pose_topic: pose
* /camera/zed_node/tracking/publish_map_tf: False
* /camera/zed_node/tracking/publish_pose_covariance: True
* /camera/zed_node/tracking/publish_tf: False
* /camera/zed_node/tracking/spatial_memory: True
* /camera/zed_node/tracking/two_d_mode: False
* /camera/zed_node/tracking/world_frame: map
* /camera/zed_node/video/color_enhancement: True
* /camera/zed_node/video/left_topic_root: left
* /camera/zed_node/video/rgb_topic_root: rgb
* /camera/zed_node/video/right_topic_root: right
* /camera/zed_node/video/stereo_topic_root: stereo
* /rosdistro: kinetic
* /rosversion: 1.12.13
NODES
/camera/
camera_nodelet_manager (nodelet/nodelet)
camera_zed_link (tf/static_transform_publisher)
stereo_sync (nodelet/nodelet)
zed_node (nodelet/nodelet)
zed_state_publisher (robot_state_publisher/robot_state_publisher)
ROS_MASTER_URI=http://192.168.0.10:11311
process[camera/camera_nodelet_manager-1]: started with pid [4478]
process[camera/zed_state_publisher-2]: started with pid [4479]
process[camera/zed_node-3]: started with pid [4480]
[ INFO] [1583927269.994617547]: Loading nodelet /camera/zed_node of type zed_wrapper/ZEDWrapperNodelet to manager camera_nodelet_manager with the following remappings:
[ INFO] [1583927270.008940163]: waitForService: Service [/camera/camera_nodelet_manager/load_nodelet] could not connect to host [192.168.0.15:50951], waiting...
process[camera/camera_zed_link-4]: started with pid [4484]
process[camera/stereo_sync-5]: started with pid [4490]
[ INFO] [1583927270.072780419]: Loading nodelet /camera/stereo_sync of type rtabmap_ros/stereo_sync to manager camera_nodelet_manager with the following remappings:
[ INFO] [1583927270.072966604]: /camera/left/camera_info -> /camera/left/camera_info
[ INFO] [1583927270.073069585]: /camera/left/image_rect -> /camera/left/image_rect_color
[ INFO] [1583927270.073129396]: /camera/rgbd_image -> /camera/rgbd_image
[ INFO] [1583927270.073187543]: /camera/right/camera_info -> /camera/right/camera_info
[ INFO] [1583927270.073245242]: /camera/right/image_rect -> /camera/right/image_rect_color
[ INFO] [1583927270.078909785]: waitForService: Service [/camera/camera_nodelet_manager/load_nodelet] has not been advertised, waiting...
[ INFO] [1583927270.083429982]: Initializing nodelet with 6 worker threads.
[ INFO] [1583927270.099957058]: waitForService: Service [/camera/camera_nodelet_manager/load_nodelet] is now available.
[ INFO] [1583927270.101268933]: waitForService: Service [/camera/camera_nodelet_manager/load_nodelet] is now available.
[ INFO] [1583927270.337140841]: SDK version : 2.8.5
[ INFO] [1583927270.337243886]: *** PARAMETERS ***
[ INFO] [1583927270.339634503]: * Camera Resolution -> HD720
[ INFO] [1583927270.342008735]: * Camera Framerate -> 30
[ INFO] [1583927270.343681140]: * Gpu ID -> -1
[ INFO] [1583927270.345681689]: * Camera ID -> -1
[ INFO] [1583927270.347354702]: * Verbose -> ENABLED
[ INFO] [1583927270.350750394]: * Camera Flip -> DISABLED
[ INFO] [1583927270.354667584]: * Self calibration -> ENABLED
[ INFO] [1583927270.357936133]: * Camera Model -> zed
[ INFO] [1583927270.384071344]: * Depth quality -> PERFORMANCE
[ INFO] [1583927270.385592317]: * Depth Sensing mode -> STANDARD
[ INFO] [1583927270.387178766]: * OpenNI mode -> DISABLED
[ INFO] [1583927270.388541683]: * Depth Stabilization -> ENABLED
[ INFO] [1583927270.390487637]: * Minimum depth -> 0.3
[ INFO] [1583927270.403877691]: * Path rate -> 2 Hz
[ INFO] [1583927270.405854239]: * Path history size -> 1
[ INFO] [1583927270.408919610]: * Odometry DB path ->
[ INFO] [1583927270.412088155]: * Spatial Memory -> ENABLED
[ INFO] [1583927270.413709005]: * IMU Fusion -> ENABLED
[ INFO] [1583927270.416465880]: * Floor alignment -> DISABLED
[ INFO] [1583927270.419359275]: * Init Odometry with first valid pose data -> ENABLED
[ INFO] [1583927270.422113366]: * Two D mode -> DISABLED
[ INFO] [1583927270.426288745]: * Publish Pose Covariance -> ENABLED
[ INFO] [1583927270.427655983]: * Fixed covariance -> DISABLED
[ INFO] [1583927270.429029236]: * Fixed cov. value -> 1e-06
[ INFO] [1583927270.431934183]: * Mapping -> DISABLED
[ INFO] [1583927270.435210509]: * IMU timestamp sync -> DISABLED
[ INFO] [1583927270.441195484]: * IMU data freq -> 0 Hz
[ INFO] [1583927270.450228325]: * SVO REC compression -> HEVC (H265)
[ INFO] [1583927270.481608762]: * world_frame -> map
[ INFO] [1583927270.481682942]: * map_frame -> map
[ INFO] [1583927270.481729600]: * odometry_frame -> odom
[ INFO] [1583927270.481768930]: * base_frame -> base_link
[ INFO] [1583927270.481805508]: * camera_frame -> zed_camera_center
[ INFO] [1583927270.481838949]: * imu_link -> imu_link
[ INFO] [1583927270.481878599]: * left_camera_frame -> zed_left_camera_frame
[ INFO] [1583927270.481918025]: * left_camera_optical_frame -> zed_left_camera_optical_frame
[ INFO] [1583927270.481952459]: * right_camera_frame -> zed_right_camera_frame
[ INFO] [1583927270.481987277]: * right_camera_optical_frame -> zed_right_camera_optical_frame
[ INFO] [1583927270.482023311]: * depth_frame -> zed_left_camera_frame
[ INFO] [1583927270.482057969]: * depth_optical_frame -> zed_left_camera_optical_frame
[ INFO] [1583927270.482093010]: * disparity_frame -> zed_left_camera_frame
[ INFO] [1583927270.482126324]: * disparity_optical_frame -> zed_left_camera_optical_frame
[ INFO] [1583927270.482158294]: * confidence_frame -> zed_left_camera_frame
[ INFO] [1583927270.482192471]: * confidence_optical_frame -> zed_left_camera_optical_frame
[ INFO] [1583927270.489497769]: * Broadcast odometry TF -> DISABLED
[ INFO] [1583927270.492494721]: * Broadcast map pose TF -> DISABLED
[ INFO] [1583927270.494306013]: * [DYN] mat_resize_factor -> 1
[ INFO] [1583927270.496319938]: * [DYN] confidence -> 100
[ INFO] [1583927270.498195745]: * [DYN] max_depth -> 20
[ INFO] [1583927270.499794290]: * [DYN] exposure -> 100
[ INFO] [1583927270.501177464]: * [DYN] gain -> 100
[ INFO] [1583927270.502553566]: * [DYN] auto_exposure -> ENABLED
[ INFO] [1583927270.503917539]: * [DYN] point_cloud_freq -> 10 Hz
[ INFO] [1583927270.529816642]: * Camera coordinate system -> COORDINATE_SYSTEM_RIGHT_HANDED_Z_UP_X_FWD
ZED (Init) >> Depth mode: PERFORMANCE
ZED (Init) >> Video mode: HD720@30
[ INFO] [1583927272.144733544]: ZED connection -> SUCCESS
[ INFO] [1583927274.145235164]: * CAMERA MODEL -> ZED
[ INFO] [1583927274.145368610]: * Serial Number -> 18463
[ INFO] [1583927274.145558539]: * FW Version -> 1523
[ INFO] [1583927274.300318508]: Advertised on topic /camera/zed_node/rgb/image_rect_color
[ INFO] [1583927274.300414545]: Advertised on topic /camera/zed_node/rgb/camera_info
[ INFO] [1583927274.333298300]: Advertised on topic /camera/zed_node/rgb_raw/image_raw_color
[ INFO] [1583927274.333376768]: Advertised on topic /camera/zed_node/rgb_raw/camera_info
[ INFO] [1583927274.374971286]: Advertised on topic /camera/zed_node/left/image_rect_color
[ INFO] [1583927274.375050586]: Advertised on topic /camera/zed_node/left/camera_info
[ INFO] [1583927274.408506050]: Advertised on topic /camera/zed_node/left_raw/image_raw_color
[ INFO] [1583927274.408587750]: Advertised on topic /camera/zed_node/left_raw/camera_info
[ INFO] [1583927274.446739476]: Advertised on topic /camera/zed_node/right/image_rect_color
[ INFO] [1583927274.446925821]: Advertised on topic /camera/zed_node/right/camera_info
[ INFO] [1583927274.484007062]: Advertised on topic /camera/zed_node/right_raw/image_raw_color
[ INFO] [1583927274.484101723]: Advertised on topic /camera/zed_node/right_raw/camera_info
[ INFO] [1583927274.517936277]: Advertised on topic /camera/zed_node/depth/depth_registered
[ INFO] [1583927274.518021817]: Advertised on topic /camera/zed_node/depth/camera_info
[ INFO] [1583927274.559468168]: Advertised on topic /camera/zed_node/confidence/confidence_image
[ INFO] [1583927274.559560205]: Advertised on topic /camera/zed_node/confidence/camera_info
[ INFO] [1583927274.594692230]: Advertised on topic /camera/zed_node/stereo/image_rect_color
[ INFO] [1583927274.627668502]: Advertised on topic /camera/zed_node/stereo_raw/image_raw_color
[ INFO] [1583927274.630222356]: Advertised on topic /camera/zed_node/confidence/confidence_map
[ INFO] [1583927274.632762224]: Advertised on topic /camera/zed_node/disparity/disparity_image
[ INFO] [1583927274.635572954]: Advertised on topic /camera/zed_node/point_cloud/cloud_registered
[ INFO] [1583927274.642802364]: Advertised on topic /camera/zed_node/pose
[ INFO] [1583927274.645218194]: Advertised on topic /camera/zed_node/pose_with_covariance
[ INFO] [1583927274.647578662]: Advertised on topic /camera/zed_node/odom
[ INFO] [1583927274.649778258]: Advertised on topic /camera/zed_node/path_odom
[ INFO] [1583927274.651936124]: Advertised on topic /camera/zed_node/path_map
[ERROR] [1583927274.727104864]: Failed to load nodelet [/camera/stereo_sync] of type [rtabmap_ros/stereo_sync] even after refreshing the cache: According to the loaded plugin descriptions the class rtabmap_ros/stereo_sync with base class type nodelet::Nodelet does not exist. Declared types are cv_camera/CvCameraNodelet image_view/disparity image_view/image libuvc_camera/driver pcl/BAGReader pcl/BoundaryEstimation pcl/ConvexHull2D pcl/CropBox pcl/EuclideanClusterExtraction pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer pcl/PointCloudConcatenateFieldsSynchronizer pcl/PrincipalCurvaturesEstimation pcl/ProjectInliers pcl/RadiusOutlierRemoval pcl/SACSegmentation pcl/SACSegmentationFromNormals pcl/SHOTEstimation pcl/SHOTEstimationOMP pcl/SegmentDifferences pcl/StatisticalOutlierRemoval pcl/VFHEstimation pcl/VoxelGrid uvc_camera/CameraNodelet uvc_camera/StereoNodelet zed_wrapper/ZEDWrapperNodelet
[ERROR] [1583927274.727227430]: The error before refreshing the cache was: According to the loaded plugin descriptions the class rtabmap_ros/stereo_sync with base class type nodelet::Nodelet does not exist. Declared types are cv_camera/CvCameraNodelet image_view/disparity image_view/image libuvc_camera/driver pcl/BAGReader pcl/BoundaryEstimation pcl/ConvexHull2D pcl/CropBox pcl/EuclideanClusterExtraction pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer pcl/PointCloudConcatenateFieldsSynchronizer pcl/PrincipalCurvaturesEstimation pcl/ProjectInliers pcl/RadiusOutlierRemoval pcl/SACSegmentation pcl/SACSegmentationFromNormals pcl/SHOTEstimation pcl/SHOTEstimationOMP pcl/SegmentDifferences pcl/StatisticalOutlierRemoval pcl/VFHEstimation pcl/VoxelGrid uvc_camera/CameraNodelet uvc_camera/StereoNodelet zed_wrapper/ZEDWrapperNodelet
[FATAL] [1583927274.727489234]: Failed to load nodelet '/camera/stereo_sync` of type `rtabmap_ros/stereo_sync` to manager `camera_nodelet_manager'
[camera/stereo_sync-5] process has died [pid 4490, exit code 255, cmd /opt/ros/kinetic/lib/nodelet/nodelet load rtabmap_ros/stereo_sync camera_nodelet_manager left/image_rect:=left/image_rect_color right/image_rect:=right/image_rect_color left/camera_info:=left/camera_info right/camera_info:=right/camera_info rgbd_image:=rgbd_image __name:=stereo_sync __log:=/home/nvidia/.ros/log/a53abca6-638b-11ea-9403-00187daf6704/camera-stereo_sync-5.log].
log file: /home/nvidia/.ros/log/a53abca6-638b-11ea-9403-00187daf6704/camera-stereo_sync-5*.log
^C[camera/camera_zed_link-4] killing on exit
[camera/zed_node-3] killing on exit
[camera/zed_state_publisher-2] killing on exit
[camera/camera_nodelet_manager-1] killing on exit
[ INFO] [1583927396.919918021]: Unloading nodelet /camera/zed_node from manager camera_nodelet_manager
shutting down processing monitor...
... shutting down processing monitor complete
done
Originally posted by Uwe on ROS Answers with karma: 3 on 2020-03-07
Post score: 0
Answer:
It seems those arguments have been removed from zed_camera_nodelet.launch, so remove them from your include of it. The resolution and frame_rate arguments are optional, but publish_tf would be modified here.
Originally posted by matlabbe with karma: 6409 on 2020-03-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Uwe on 2020-03-12:
Thanks, that solved the error, though I got some other issues which led me to reinstall rtabmap.
"domain": "robotics.stackexchange",
"id": 34560,
"tags": "slam, navigation, ros-kinetic, rtabmap-ros"
} |
How to optimize the for loop for finding a matching 2 string using fuzzywuzzy | Question: I am getting the probability of a string being similar to another string in Python using fuzzywuzzy lib.
Currently, I am doing this using a for loop, and the search is time-consuming.
Below is the working code:
import csv
from fuzzywuzzy import fuzz

with open('all_nut_data.csv', newline='') as csvfile:
    spamwriter = csv.DictReader(csvfile)
    mostsimilarcs = 0
    mostsimilarns = 0
    for rowdata in spamwriter:
        mostsimilarns = fuzz.ratio(rowdata["Food Item"].lower(), name.lower())
        if mostsimilarns > mostsimilarcs:
            mostsimilarcs = mostsimilarns
            row1 = rowdata
How can I optimize this code without a for loop?
Note: the CSV file contains 600,000 rows and 17 columns.
Answer: This will not be much faster, but more readable (IMO) and extendable. You are looking for the maximum (in similarity). So, use the built-in max function. You can also define a function that does the file reading (so you can swap it out for a list of dictionaries, or whatever, for testing) and a function to be use as key. I made it slightly more complicated than needed here to give some customizability. The word it is compared to is fixed, so it is passed to the outer function, but so is the column name (you could also hard-code that).
import csv
from fuzzywuzzy.fuzz import ratio as fuzz_ratio

def get_rows(file_name):
    with open(file_name, newline='') as csvfile:
        reader = csv.DictReader(csvfile)
        yield from reader

def similarity_to(x, column_name):
    x = x.lower()
    def similarity(row):
        return fuzz_ratio(row[column_name].lower(), x)
    return similarity

if __name__ == "__main__":
    items = get_rows('all_nut_data.csv')
    name = "Hayelnut"
    best_match = max(items, key=similarity_to(name, "Food Item"))
    match_quality = similarity_to(name, "Food Item")(best_match)
max ensures that the key function is only called once per element (so no unnecessary calculations). However, since the similarity is not part of the row, you have to calculate it again at the end. On the other hand, I don't call name.lower() every loop iteration.
Note that get_rows is a generator. This is very nice because you don't need to load the whole file into memory (just like in your code), but if you want to run it multiple times, you need to recreate the generator each time.
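A quick illustration of that exhaustion behavior:

```python
# A generator can only be iterated once; a second pass yields nothing.
def gen():
    yield from [1, 2, 3]

g = gen()
print(max(g))   # 3
print(list(g))  # [] -- the generator is exhausted after the first pass
```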
In the end the code as currently written can not avoid having to call the function on each row, one at a time. With max at least the iteration is partially done in C and therefore potentially faster, but not by much. For some naive tests, the built-in max is about 30% faster than a simple for loop, like you have.
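The max-with-key pattern itself can be tried stand-alone with only the standard library, using difflib.SequenceMatcher as a stand-in scorer (it is not identical to fuzz.ratio, which is Levenshtein-based, but it plays the same role here; the sample rows are made up):

```python
from difflib import SequenceMatcher

def similarity_to(query, column_name):
    query = query.lower()
    def similarity(row):
        # Ratcliff-Obershelp similarity in [0, 1], higher is more similar.
        return SequenceMatcher(None, row[column_name].lower(), query).ratio()
    return similarity

rows = [
    {"Food Item": "Walnut"},
    {"Food Item": "Hazelnut"},
    {"Food Item": "Peanut"},
]
best = max(rows, key=similarity_to("Hayelnut", "Food Item"))
print(best["Food Item"])  # Hazelnut
```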
The only way to get a significant speed increase would be to use a vectorized version of that function. After some digging I found out that internally fuzzywuzzy just returns the Levenshtein ratio for the two words from the Levenshtein module (after type and bounds checking, and some casting and rounding). So you could look for different modules that implement this, or try whether directly using the underlying method is faster. Unfortunately I have not managed to find a vectorized version of the Levenshtein ratio (or distance) where one word is fixed and the other is not.
However, there is fuzzywuzzy.process.extractOne, which lets you customize the scoring and processing. It might be even faster than the loop run by max:
from fuzzywuzzy import process, fuzz

def processor(x):
    return x["Food Item"].lower()

def get_best_match(name, rows):
    name = {"Food Item": name}
    return process.extractOne(name, rows,
                              scorer=fuzz.ratio, processor=processor)

if __name__ == "__main__":
    rows = get_rows('all_nut_data.csv')
    name = "Hayelnut"
    best_match, match_quality = get_best_match(name, rows)
    print(best_match, match_quality)
The packing of the name in the dictionary is necessary, because the processor is called also on the query.
Using a local dictionary (from the hunspell package), which contains 65867 words, I get the following timings for finding the closest match for "Hayelnut":
OP: 207 ms ± 4.05 ms
max: 206 ms ± 8.33 ms
get_best_match: 221 ms ± 3.77 ms
So no real improvement, in fact the last function is even slightly slower! But at least all three determine that "hazelnut" is the correct choice in this case. | {
"domain": "codereview.stackexchange",
"id": 36151,
"tags": "python, performance"
} |
Robot position not updating in RVIZ | Question:
The robot position in RVIZ is not updating. However, when I launch gmapping node with the "map" frame fixed, it will update. I am not able to find the reason for this.
Originally posted by Shreyas on ROS Answers with karma: 1 on 2021-07-08
Post score: 0
Original comments
Comment by siddharthcb on 2021-07-08:
can you provide more details? what are you trying to do? and what is your config like?
Comment by Weasfas on 2021-07-09:
@Shreyas RViz just plots things on the screen with the support of your tf tree. If no proper tf is set up, RViz will be unable to plot anything. The reason why you have something plotted when running gmapping is likely because the node is generating the tf frame connection that is missing when it is not running. You may want to check your tf tree connections.
Comment by Shreyas on 2021-07-09:
The robot is configured via MAVROS/MAVLINK communication. I checked the tf; everything is connected. The gmapping node will generate a "map" frame which will be connected to the "odom" frame. Similarly, if I use the AMCL node with the map frame fixed, it is still the same error. My email: shreysreddy@gmail.com if someone is willing to help
Answer:
As pointed out already you need to check your tf tree. These are some things you can check:
Is the robot state publisher node running? The robot state publisher will publish the joint angles of your robot to the tf tree (the left and right wheel links of your robot).
The gmapping package expects that a base_link→odom transform already exists (this transform should come from your controller)
Originally posted by Deepanshu Bansal with karma: 16 on 2021-07-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Shreyas on 2021-07-09:
Thanks Deepanshu for replying, I checked it again. The robot state publisher is running. Yes, in the normal case also, the base_link -> odom transform exists, which comes from the controller. Also, for more information: when I use the navigation package to implement path planning, the local planner does work and the robot in RVIZ does move; however, in the global map it doesn't.
my email: shreysreddy@gmail.com if someone willing to help | {
"domain": "robotics.stackexchange",
"id": 36674,
"tags": "ros, rviz, urdf, ros-melodic"
} |
Computing Nash equilibria in discrete auctions | Question: I am trying to compute the (pure strategy) Nash equilibria of some discrete auctions.
More precisely, let us define the strategy of each player as a function mapping from every valuation that they might have to their bid (i.e. the 'bidding function'). Let us suppose that each player's valuation is drawn from some finite set and that their bid must belong to this same (finite) set. I am interested in finding the set of bidding functions, one for each player, such that each player's bidding function is optimal given the bidding functions of all the other players.
If it makes things easier, we can assume that valuations are symmetric (i.e. each player's valuation is generated by the same probability distribution) and that there are only two bidders. Ideally, however, we would proceed without these simplifications. I am interested in computing the equilibrium for the first price sealed bid and all pay auctions (in the latter, you pay your bid even if you lose; in the former, you don't.)
I have considered writing down the discrete auction game in normal form and finding the equilibria using software like Gambit. However, this would seem tricky since the strategy space is so large. For example, if a player chooses bids from $ \{1,...,10\} $ and draws values from $ \{1,...,10\} $, then already they have $10^{10}$ pure strategies.
Does anyone have any ideas about how to proceed here?
Answer: If there are only two players, you could use integer linear programming.
I'll introduce some zero-or-one integer variables to encode the (unknown) bidding functions (via a one-hot encoding). Let $x_{v,b}=1$ if player $0$'s bid is $b$ when their valuation is $v$, and $0$ otherwise; and $y_{w,c}=1$ if player $1$'s bid is $c$ when their valuation is $w$, and $0$ otherwise. In this way, if there are $n$ possible valuations, we obtain $2n^2$ zero-or-one integer variables. We will write down linear inequalities on these variables that characterize the optimality condition, then use an integer programming solver to find a solution that satisfies all of these inequalities.
Note that if player $0$'s valuation is $v$ and player $1$'s valuation is $w$, then player $0$'s payoff (in a first-price auction) is
$$\sum_{b>c} x_{v,b} y_{w,c} (v-b).$$
Therefore, if player $0$'s valuation is $v$, player $0$'s expected payoff is
$$\sum_w p_1(w) \sum_{b>c} x_{v,b} y_{w,c} (v-b),$$
where $p_1(w)$ represents the probability that player $1$ gets valuation $w$. Now, we need this to be at least what it would be if player $0$ used any other bid for valuation $v$, i.e., any other choice of $x_{v,\cdot}$'s. If there are $n$ possible valuations, there are only $n$ possible bids, so we obtain $n$ inequalities
$$\sum_w p_1(w) \sum_{b>c} x_{v,b} y_{w,c} (v-b) \ge \sum_w p_1(w) \sum_{b'>c} y_{w,c} (v-b'),$$
where we obtain one such inequality for each possible value of $b'$. (The sum on the left-hand-side is over $w,b,c$ that satisfy the condition $b>c$; the sum on the right-hand-side is over $w,c$ that satisfy the condition $c<b'$.)
This is a quadratic inequality. We'll turn it into a linear inequality using the techniques of Express boolean logic operations in zero-one integer linear programming (ILP). In particular, introduce new zero-or-one variables $t_{v,w,b,c}$, with the intention that $t_{v,w,b,c}=x_{v,b} y_{w,c}$. This can be enforced using the linear inequalities $t_{v,w,b,c} \ge x_{v,b} + y_{w,c} - 1$, $t_{v,w,b,c} \le x_{v,b}$, $t_{v,w,b,c} \le y_{w,c}$, $0 \le t_{v,w,b,c} \le 1$. Now the optimality condition becomes
$$\sum_w p_1(w) \sum_{b>c} t_{v,w,b,c} (v-b) \ge \sum_w p_1(w) \sum_{b'>c} y_{w,c} (v-b'),$$
for each $v,b'$. We obtain one such inequality for each $v$ and each $b'$. These $n^2$ inequalities ensure that player $0$'s strategy will be optimal (given player $1$'s strategy).
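As a small aside, the product linearization $t_{v,w,b,c}=x_{v,b} y_{w,c}$ used above can be sanity-checked by brute force: over 0/1 values, the four constraints admit exactly one feasible integer $t$, namely the product (a quick illustrative check; names are mine):

```python
from itertools import product

def feasible(t, x, y):
    # The four linearization constraints from the text.
    return t >= x + y - 1 and t <= x and t <= y and 0 <= t <= 1

for x, y in product((0, 1), repeat=2):
    feasible_ts = [t for t in (0, 1) if feasible(t, x, y)]
    assert feasible_ts == [x * y]  # only t = x*y survives
print("linearization forces t = x*y on all 0/1 inputs")
```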
Then, add similar inequalities to require player $1$'s strategy to be optimal (given player $0$'s strategy).
Also add the $n^2+n$ constraints that $0 \le x_{v,b} \le 1$ and $\sum_b x_{v,b}=1$ to require the $x$'s to correspond to a bidding function, and similarly for the $y$'s.
Finally, take all of these inequalities and feed them to an off-the-shelf integer linear programming solver. If it finds a solution, then you have found a Nash equilibrium for your game. ILP is NP-hard, so there is no guarantee that it will terminate in a reasonable amount of time, but in my experience often the solvers can handle surprisingly large systems of inequalities.
The same approach can handle all-pay auctions as well as first-price auctions.
In practice, you might be able to speed up the solver by adding some extra constraints to help it identify a solution faster. In particular, I have a hunch that without loss of generality the optimal bidding function can be assumed to be monotonic: if we increase my valuation, my bid should never decrease. We can add extra inequalities to the system to characterize this additional assumption, and this might narrow the search space and thus help the ILP solver find a solution more rapidly.
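For instances as tiny as a worked example, one can also bypass ILP and verify equilibria by brute force. Below is a hedged sketch (all names and the specific game are mine): two symmetric players, valuations and bids both in {1, 2}, i.i.d. uniform valuations, first-price rule with ties split evenly:

```python
from itertools import product
from fractions import Fraction

VALS = BIDS = (1, 2)
# A strategy s maps valuation index i to a bid s[i]; 4 strategies per player.
STRATS = list(product(BIDS, repeat=len(VALS)))

def payoff(v, b, c):
    # First-price payoff for valuation v, own bid b, opposing bid c.
    if b > c:
        return Fraction(v - b)
    if b == c:
        return Fraction(v - b, 2)  # ties split evenly
    return Fraction(0)

def expected(s0, s1):
    # Expected payoff of s0 against s1, valuations i.i.d. uniform on VALS.
    total = Fraction(0)
    for i, v0 in enumerate(VALS):
        for j in range(len(VALS)):
            total += payoff(v0, s0[i], s1[j])
    return total / (len(VALS) ** 2)

def is_equilibrium(s0, s1):
    # No player can gain by switching to any other bidding function.
    u0, u1 = expected(s0, s1), expected(s1, s0)
    return (all(expected(d, s1) <= u0 for d in STRATS) and
            all(expected(d, s0) <= u1 for d in STRATS))

equilibria = [(s0, s1) for s0 in STRATS for s1 in STRATS
              if is_equilibrium(s0, s1)]
print(equilibria)  # [((1, 1), (1, 1))]: both players always bid 1
```

In this toy game the unique pure equilibrium has both players shading all the way down to the lowest bid, which matches the usual bid-shading intuition for first-price auctions.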
One big limitation is that this seems limited to two players (or a very small number of players). If there are multiple players, the number of variables and inequalities will explode, and this might not be an effective approach. | {
"domain": "cs.stackexchange",
"id": 13378,
"tags": "game-theory"
} |
The electron jumps and lets loose photons | Question: Where is the source of the photon?
If the photon propagates from within the electron's transit, does this point to some sort of field?
Does the energy come from a boundary being broken, in layman's terms a sound-barrier-type effect that the electron performs on its shell as it changes orbit?
Answer: A photon is just a quantum of light, a carrier of electromagnetic radiation. Whenever an electromagnetic wave is radiated, it is transmitted in the form of photons. Thus, when an electron jumps from one energy level to another, the difference between the energies of the two levels involved in the transition is radiated as a photon. One thing must be understood: the photon neither propagates through the electron nor is emitted by it. Hence, electrons do not "let loose" photons; they radiate energy, which is transmitted in the form of photons. | {
"domain": "physics.stackexchange",
"id": 4255,
"tags": "quantum-field-theory, soft-question, electromagnetic-radiation, photons, electrons"
} |
navigation_stack send goal | Question:
Hi, I'm trying to move my robot in Gazebo via the navigation stack, but I have no idea how to send goals to it, and http://www.ros.org/wiki/navigation/Tutorials/SendingSimpleGoals does not work; I just get a compiler error:
`/usr/bin/ld: CMakeFiles/simple_navigation_goals.dir/src/simple_navigation_goals.o: undefined reference to symbol 'vtable for boost::detail::thread_data_base'
/usr/bin/ld: note: 'vtable for boost::detail::thread_data_base' is defined in DSO /usr/lib/libboost_thread.so.1.46.1 so try adding it to the linker command line
/usr/lib/libboost_thread.so.1.46.1: could not read symbols: Invalid operation
collect2: ld returned 1 exit status`
Are there other easy examples, or do you have one? It can also be in Python. Or you could help me solve the compiler error :)
cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
# Set the build type. Options are:
# Coverage : w/ debug symbols, w/o optimization, w/ code-coverage
# Debug : w/ debug symbols, w/o optimization
# Release : w/o debug symbols, w/ optimization
# RelWithDebInfo : w/ debug symbols, w/ optimization
# MinSizeRel : w/o debug symbols, w/ optimization, stripped binaries
#set(ROS_BUILD_TYPE RelWithDebInfo)
rosbuild_init()
#set the default path for built executables to the "bin" directory
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
#set the default path for built libraries to the "lib" directory
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)
#uncomment if you have defined messages
#rosbuild_genmsg()
#uncomment if you have defined services
#rosbuild_gensrv()
#common commands for building c++ executables and libraries
#rosbuild_add_library(${PROJECT_NAME} src/example.cpp)
#target_link_libraries(${PROJECT_NAME} another_library)
#rosbuild_add_boost_directories()
#rosbuild_link_boost(${PROJECT_NAME} thread)
#rosbuild_add_executable(example examples/example.cpp)
#target_link_libraries(example ${PROJECT_NAME})
rosbuild_add_executable(simple_navigation_goals src/simple_navigation_goals.cpp)
Originally posted by pkohout on ROS Answers with karma: 336 on 2012-07-23
Post score: 0
Original comments
Comment by Erwan R. on 2012-07-24:
Check your boost version to be sure that you have thread_data_base included.
Comment by pkohout on 2012-07-24:
I have boost 1.46.1 and I have libboost-thread 1.46.1 in /var/lib/dpkg/info.... Where should the thread_data_base be?
Answer:
The error message you get says that you should try adding boost to your linker command. Try that. Just uncomment the two boost-related lines in your CMakeLists.txt, i.e. add the following:
rosbuild_add_boost_directories()
rosbuild_link_boost(${PROJECT_NAME} thread)
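If the project name differs from the executable target (as it does here, where the target is simple_navigation_goals), it is safer to link the executable target explicitly, and (as the comments note) rosbuild_link_boost must come after the line that creates the target. A sketch of the relevant tail of CMakeLists.txt:

```cmake
# Ordering matters: create the target first, then link boost against it.
rosbuild_add_boost_directories()
rosbuild_add_executable(simple_navigation_goals src/simple_navigation_goals.cpp)
rosbuild_link_boost(simple_navigation_goals thread)
```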
Originally posted by Lorenz with karma: 22731 on 2012-07-25
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by pkohout on 2012-07-25:
the old error is gone but i get another, it is:
Cannot specify link libraries for target "simple_navigation_goals" which is
not built by this project.
Call Stack (most recent call first):
CMakeLists.txt:28 (rosbuild_link_boost)
-- Configuring incomplete, errors occurred!
any ideas ?
Comment by tfoote on 2012-08-08:
@coon you should ask this as a separate question.
Comment by yangyangcv on 2012-10-08:
@coon you need to put the line rosbuild_link_boost(${PROJECT_NAME} thread) under the line rosbuild_add_executable() | {
"domain": "robotics.stackexchange",
"id": 10331,
"tags": "navigation, roscpp"
} |
Bombarding atoms with an electron gun? | Question: To explain this question better, imagine Rutherford's experiment (alpha particles bombarded on gold foil) being conducted with an electron gun in place of the alpha source. So what will happen when electrons are targeted at a gold foil?
Either the atom, or rather the electrons of the Au atom, becomes excited and therefore emits some sort of waves (if yes, then what kind of waves?); or in return electrons are ejected from the atom along with some waves; or it may be possible that the electrons pass through the gold foil without any effect...
Answer: Well, I think, in principle, depending on the energy of the electrons, different events can happen, but in the classical experiment the electrons will diffract from the lattice spacing of the gold crystal in the foil.
P.S. The experiment is actually very well known in the history of science due to the following reason (quoted from here).
It is interesting to recall that G.P. Thomson, who shared the 1937
Nobel Prize with Davisson for these experiments which proved that
electrons are waves, is the son of J.J. Thomson who received the
Nobel Prize in 1906 for proving that cathode rays were actually
particles - electrons! And the amazing thing is that they were both
right. | {
"domain": "chemistry.stackexchange",
"id": 1547,
"tags": "physical-chemistry, atoms, electrons, quantum-chemistry"
} |
Effect of temperature on magnetic fields? | Question: My class was learning the magnetic effects of current and one of the things we learnt was that the magnetic field strength produced by some conductor is going to be directly proportional to the current flowing through it.
Now, by Ohm's Law, we also know that V = IR. So, if we do some temperature changes, R should change and so the current flowing in the conductor should change. As a result, the strength of the magnetic field should change.
However, this is more of a mathematical reason. What I desire is an explanation that is more theory oriented - What exactly does the temperature change do that causes the field strength to change?
Answer: Updated 21 Jan 2022
To understand temperature effects, we need to look at the atomic
structure of the elements that make up the magnet. Temperature affects
magnetism by either strengthening or weakening a magnet’s attractive
force. A magnet subjected to heat experiences a reduction in its
magnetic field as the particles within the magnet are moving at an
increasingly faster and more sporadic rate. This jumbling confuses and
misaligns the magnetic domains, causing the magnetism to decrease.
Conversely, when the same magnet is exposed to low temperatures, its
magnetic property is enhanced and the strength increases.
source
Similarly, regarding this question about the relation between temperature and the magnetic field generated by a current-carrying conductor wire:
An increase of entropy (i.e. temperature) leads to partial misalignment of the discrete magnetic moments of the uniformly flowing electron current, and therefore to a reduction of the corresponding generated magnetic field.
In theory, a conductor wire with higher resistivity $ρ$ will generate the same magnetic field strength $B$ at a distance $r$ as a lower-resistivity wire, as long as both carry the same current $I$:
Magnetic field around wire
$$\mathrm{B}=\frac{\mu_{0} \mathrm{I}}{2 \pi \mathrm{r}}$$
However, in a higher-resistivity wire the atoms scatter more of the current's electrons and generate more heat (i.e. increase the entropy of the current's uniform flow). The number of electrons passing through the wire's cross-section per unit time therefore decreases, so the current $I$ starts to drop as the wire heats up, which reduces the magnetic field, assuming a fixed voltage is applied to the wire.
So the magnetic field is always proportional to the current, but for a given cross-section a higher-resistivity $ρ$ wire will show a larger drop of current over time for the same applied voltage. The greater the temperature rise in the wire, the larger the reduction of its magnetic field. The field strength falls along with the current because fewer electrons flow in a uniform direction, so fewer aligned discrete magnetic moments exist per unit time, which weakens the magnetic field generated around the wire (magnetism is all about coherence, alignment and uniformity).
Take two wires of identical dimensions, one of gold and one of silver, with the applied voltages compensated for their different resistivities $ρ$ so that both initially carry the same current. Over time the gold wire will heat up more, due to its higher resistivity, and will experience a larger drop in current and magnetic field strength than the silver one.
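The argument can be put into numbers with a short sketch (all values are illustrative; the linear model R(T) = R20 * (1 + alpha * (T - 20)) with alpha around 0.0039 per kelvin is the usual approximation for copper-like metals):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def field_around_wire(current, distance):
    """B = mu0 * I / (2*pi*r) for a long straight wire, in tesla."""
    return MU_0 * current / (2 * math.pi * distance)

def resistance(r_20, alpha, temp_c):
    """Linear temperature model R(T) = R20 * (1 + alpha * (T - 20 C))."""
    return r_20 * (1 + alpha * (temp_c - 20.0))

# Fixed applied voltage: as the wire heats up, R rises, I = V/R falls,
# and B falls in proportion to I. All numbers below are illustrative.
VOLTAGE = 12.0         # volts
R_20 = 2.0             # ohms at 20 C
ALPHA_COPPER = 0.0039  # per kelvin, a typical value for copper
DISTANCE = 0.01        # field evaluated 1 cm from the wire

for temp_c in (20.0, 80.0):
    current = VOLTAGE / resistance(R_20, ALPHA_COPPER, temp_c)
    b_field = field_around_wire(current, DISTANCE)
    print(f"{temp_c:5.1f} C: I = {current:.3f} A, B = {b_field:.3e} T")
```

Running it shows the field at the higher temperature is smaller, purely because the current has dropped at fixed voltage.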
Also, the resistivity $ρ$ of the conductor material does not remain fixed but increases with temperature, which makes things even worse. The best solution is to keep things cool by using wires of larger cross-section. | {
"domain": "physics.stackexchange",
"id": 85728,
"tags": "magnetic-fields, temperature, electrical-resistance"
} |
Removing NULL / empty fields | Question: Just wanted to check with you whether this could be done better:
awk -F"\t" '{
for (i = 1; i <= NF; i++) {
if ($i != "NULL") {
printf("%s%s", $i, FS);
}
}
printf("\n");
}' file1
The goal is to print only non-NULL fields. For example:
echo "TestRecord001 NULL NULL Age 29 NULL NULL Name John" | awk -F"\t" '{
for (i = 1; i <= NF; i++) {
if ($i != "NULL") {
printf("%s%s", $i, FS);
}
}
printf("\n");
}'
will print out: TestRecord001 Age 29 Name John
Answer: The same behaviour can be achieved using sed as follows:
echo -e 'TestRecord001\tNULL\tNULL\tAge\t29\tNULL\tNULL\tName\tJohn' | sed '
s/$/\t/;
s/\(\tNULL\)\+\t/\t/g;
s/^NULL\t//';
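As an aside on the original awk (a sketch, separate from the sed approach): it can avoid emitting a trailing separator by printing the separator before every field except the first:

```shell
printf 'TestRecord001\tNULL\tNULL\tAge\t29\tNULL\tNULL\tName\tJohn\n' |
awk -F'\t' '{
    sep = ""
    for (i = 1; i <= NF; i++) {
        if ($i != "NULL") {
            printf "%s%s", sep, $i
            sep = FS
        }
    }
    print ""
}'
# output: TestRecord001 Age 29 Name John (tab-separated, with no trailing tab)
```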
Explanation of the sed command:
sed s/SearchPattern/Replacement/g. Here s indicates that replacement operation is to be done. Strings matching SearchPattern will be replaced by Replacement. g indicates that the operation will have to be performed on every match and not just on the first occurrence in a line.
s/$/\t/ adds a tab to the end of each line. [$ matches the end of a line]
\(\tNULL\)\+\t matches a string of the form \tNULL\tNULL...NULL\t. This is replaced with \t.
After this the only remaining NULL is the one at the beginning of a line (without \t to its left). This is matched by ^NULL\t and replaced with the empty string. [^ matches the beginning of a line] | {
"domain": "codereview.stackexchange",
"id": 30741,
"tags": "shell, null, unix, awk"
} |
Check if a web element has a specific property and apply changes to the DOM according to the property's existence | Question: I wrote this JavaScript snippet for a Polymer frontend; it works without issues.
The only thing that it accomplishes is changing an icon's orientation whenever a user is clicking on any expandable-item (expandable-item is a specific Tag name).
Imagine a classic Operating System's folder tree display and the + or - icon that shows if a specific folder is expandable or not.
When the user clicks, the clicked expandable-item tag receives the expanded attribute.
The user can click on several expandable-item and all of them must remain opened or closed according to the user clicks and, clearly, also the icons related to the items must change accordingly.
Are there any ways that you could suggest to make it more efficient and/or shorter?
var expandedItems = this.shadowRoot.querySelectorAll('expandable-item');
for (var i = 0; i < expandedItems.length; ++i) {
if (expandedItems[i].hasAttribute("expanded")){
expandedItems[i].getElementsByClassName("expandable-icon")[0].setAttribute("icon", "arrow:chevron-open-down");
} else {
expandedItems[i].getElementsByClassName("expandable-icon")[0].setAttribute("icon", "arrow:chevron-open-right");
}
}
Thanks for your suggestions.
Answer: Search online for "queryselectorall vs getelementsbytagname" and you will likely see results like this SO post from August 2013 and this post from September 2010. After reading those one can posit that using Element.getElementsByTagName() instead of querySelectorAll() would be quicker, since the selector appears to merely find elements by tag name. Be aware that method returns a Live HTMLCollection so you would need to put the elements into an array before iterating over them. There are some nice tools in ecmascript-6 to do this, like using the spread syntax or calling Array.from().
If you keep the for loop, you could consider using a for...of (also standard with ecmascript-6) in order to eliminate the accessing array elements with bracket notation.
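Put together, the suggestions might look like this (a sketch; updateExpandableIcons is a hypothetical helper name, and it takes the root, e.g. this.shadowRoot, as a parameter so it stays testable; querySelectorAll is kept because a shadow root, unlike an element, does not expose getElementsByTagName):

```javascript
// Sketch: for...of over the matched elements, with the icon value chosen
// by a conditional expression instead of an if/else block.
function updateExpandableIcons(root) {
  for (const item of root.querySelectorAll('expandable-item')) {
    const icon = item.hasAttribute('expanded')
      ? 'arrow:chevron-open-down'
      : 'arrow:chevron-open-right';
    item.querySelector('.expandable-icon').setAttribute('icon', icon);
  }
}
```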
I am not able to find references to those attributes icon="arrow:chevron-open-down" and icon="arrow:chevron-open-right" but if those could be handled in CSS then you could eliminate the whole JS iteration block. | {
"domain": "codereview.stackexchange",
"id": 34350,
"tags": "javascript, iteration, polymer"
} |
Simple Random TicTacToe Generator | Question: I just finished building a simple random Tic Tac Toe generator. The program displays 3 rows of random numbers (0 or 1) and prints a message when 3 of the same numbers align. The program works perfectly, to my knowledge.
I am posting this code here in hopes that I can gain some advice/tips in regards to optimization and/or simplification.
import random
row1 = [round(random.uniform(0, 1)), round(random.uniform(0, 1)), round(random.uniform(0, 1))]
row2 = [round(random.uniform(0, 1)), round(random.uniform(0, 1)), round(random.uniform(0, 1))]
row3 = [round(random.uniform(0, 1)), round(random.uniform(0, 1)), round(random.uniform(0, 1))]
print(row1)
print(row2)
print(row3)
# If statements for all 3 rows
if (row1[0] == row1[1]) and (row1[1] == row1[2]) and (row1[2] == row1[0]):
print("All " + str(row1[0]) + "'s in top row!")
if (row2[0] == row2[1]) and (row2[1] == row2[2]) and (row2[2] == row2[0]):
print("All " + str(row2[0]) + "'s in middle row!")
if (row3[0] == row3[1]) and (row3[1] == row3[2]) and (row3[2] == row3[0]):
print("All " + str(row3[0]) + "'s in bottom row!")
# If statements for all 3 columns
if (row1[0] == row2[0]) and (row2[0] == row3[0]) and (row3[0] == row1[0]):
print("All " + str(row1[0]) + "'s in left column!")
if (row1[1] == row2[1]) and (row2[1] == row3[1]) and (row3[1] == row1[1]):
print("All " + str(row1[1]) + "'s in middle column!")
if (row1[2] == row2[2]) and (row2[2] == row3[2]) and (row3[2] == row1[2]):
print("All " + str(row1[2]) + "'s in right column!")
# If statements for diagonals
if (row1[0] == row2[1]) and (row2[1] == row3[2]) and (row3[2] == row1[0]):
print("All " + str(row1[0]) + "'s in diagonal!")
if (row1[2] == row2[1]) and (row2[1] == row3[0]) and (row3[0] == row1[2]):
print("All " + str(row1[2]) + "'s in diagonal!")
Answer: Some suggestions:
There’s a lot of repetition in the code for generating each of the rows. Repetition is generally a bit of a code smell; you could improve this by using a list comprehension for each row, for example:
row1 = [round(random.uniform(0, 1)) for _ in range(3)]
row2 = [round(random.uniform(0, 1)) for _ in range(3)]
row3 = [round(random.uniform(0, 1)) for _ in range(3)]
By nesting the list comprehension and using tuple unpacking, we can reduce repetition further:
row1, row2, row3 = [
[round(random.uniform(0, 1)) for _ in range(3)]
for _ in range(3)
]
Your code uses 1 and 0 for the two values, whereas convention is usually to use O and X. If you use random.choice instead of random.uniform, you can get this instead:
row1, row2, row3 = [
[random.choice(['O', 'X']) for _ in range(3)]
for _ in range(3)
]
It’s good that the rest of your code copes with this gracefully – it doesn’t actually care what values you’re putting in the cells.
Once you’ve done that, you can tidy up the printing of the rows so you don't see the square brackets/commas from Python’s list syntax.
There are lots of if statements with multiple conditions and repetition, which is a bit messy. It would be cleaner if you defined a function that checks if a group of values are all equal. Then you could write:
if all_equal(row1[0], row2[0], row3[0]):
print("All %s's in top row!" % row1[0])
That makes the conditions both shorter and easier to read.
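One possible implementation of that helper (a sketch; a set collapses duplicates, so the values are all equal exactly when the set has a single element):

```python
def all_equal(*values):
    """Return True when every passed value is equal."""
    return len(set(values)) == 1

# e.g. checking one diagonal of the board:
# if all_equal(row1[0], row2[1], row3[2]): ...
```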
Your script says “All X's in diagonal”, regardless of which diagonal it actually is. That’s less clear than it could be, and means it could potentially print the same sentence twice. I’d recommend using different terms to describe each diagonal – perhaps primary and secondary diagonal? | {
"domain": "codereview.stackexchange",
"id": 21406,
"tags": "python, python-3.x, random, tic-tac-toe"
} |
Effect of variable permittivity | Question: If I immerse a rod vertically in a liquid with a relative permittivity gradient (the permittivity decreases with depth), will the rod stretch (will the spacing of the atoms in the rod be affected by the varying permittivity)?
Answer: Permittivity is a macroscopic property of matter - it is a consequence of the way material is polarized in the presence of an electric field.
The properties of an atomic bond are determined by the atoms participating in the bond, the molecular structure in which they find themselves, and (to a lesser extent) the presence of a magnetic and / or electric field that is strong enough to affect the energy states of the orbitals.
If I understand your question correctly, you are asking about the influence of material outside of a (macroscopic) rod on the inter-atomic bonds. I believe that if there were any effect at all, it would be impossibly hard to measure.
Immersing a polymer (especially nylon) in a liquid can cause significant dimensional changes due to the absorption of water. But permittivity - no, it will not affect the rod. | {
"domain": "physics.stackexchange",
"id": 15547,
"tags": "electromagnetism, coulombs-law"
} |
What causes the heating of a black hole's accretion disk? | Question: Sheperd Doeleman said in a TED video that the accretion disk is heated by friction. Is that correct? I thought it might be adiabatic heating from the compression of the gasses.
Answer: It is mainly caused by:
Thomson scattering (coherent)
Compton scattering (incoherent)
Please see here:
https://arxiv.org/abs/1604.00070 | {
"domain": "physics.stackexchange",
"id": 58380,
"tags": "thermodynamics, black-holes, accretion-disk"
} |
On uniformly randomly choosing a pure quantum state | Question: If I have a Hilbert space $\mathcal{H}_A$, how can one uniformly randomly choose a pure quantum state in this space?
I believe the answer is to take the state $\vert 0\rangle$ and apply a random unitary to it. This random unitary is chosen according to a measure called the Haar measure. I do not have a measure theory background so I'm struggling to understand this and the mathematical discussions about the Haar measure are hard to follow.
Is there a physicist's guide to understanding how to pick a random quantum state from some Hilbert space such that every state is equiprobable?
Answer: I don't have a complete answer, but this may help.
A Hilbert space is a complete inner product space. The inner product gives vectors a length and an angle between pairs of vectors. From this, you can get volumes and areas. You want to choose states whose norm is $1$. That is, you want to pick vectors uniformly from the unit sphere. Dividing the sphere into small patches of area $\epsilon$, you want the same probability of being in any patch.
Googling "choosing points uniformly on n-dimensional spheres" turns up a number of solutions for finite dimensional Hilbert spaces. Wolfram has this one. Here is something on Geometry of high-dimensional space
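For a finite-dimensional space there is a standard concrete recipe, equivalent to applying a Haar-random unitary to $\vert 0\rangle$: draw each amplitude i.i.d. from a complex Gaussian and normalize. The rotational invariance of the Gaussian guarantees the normalized vector is uniform on the unit sphere. A sketch in Python (standard library only):

```python
import math
import random

def random_pure_state(dim):
    """Return a Haar-uniform random pure state as a list of complex amplitudes.

    Each amplitude is drawn i.i.d. from a complex standard normal; the
    rotational invariance of the Gaussian distribution makes the normalized
    vector uniformly distributed over the unit sphere of C^dim.
    """
    amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(dim)]
    norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
    return [a / norm for a in amps]
```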
But if you have an infinite dimensional Hilbert space, it gets trickier. You need a Haar measure. According to Wikipedia,
The Haar measure assigns an "invariant volume" to subsets of locally
compact topological groups, consequently defining an integral for
functions on those groups.
This doesn't sound very helpful on its face. A finite-dimensional Hilbert space is a locally compact topological group under addition, so a Haar measure lets you define integration over its subsets. An infinite-dimensional Hilbert space, however, is not locally compact, which is part of why the infinite-dimensional case is tricky.
But I don't know if it is possible to choose a uniform distribution over an infinite-dimensional hypersphere. From the Wikipedia article on the n-sphere, you can see the surface area of the unit sphere goes to 0 as the dimension increases. If all patches on the sphere have area $0$, it is hard to choose patches of comparable size.
Part of the trickiness is that you can form the set of all ordered infinity-tuples $(x_0, x_1, ...)$. But not all of these represent points in an infinite dimensional Hilbert space. A Hilbert space has a norm, which means that all vectors have a finite length. The usual norm is $\sqrt{{x_0}^2 + {x_1}^2 + ...}$ This means $(1/2, 1/4, ...)$ is a vector in the space, but $(1, 1, ...)$ is not.
Suppose you pick some patch around $(1,0,0,...)$, and another very similar patch around $(0,1,0,...)$, and so on. You now have an infinite number of patches that all must have equal probability. The probability of each is infinitesimal. | {
"domain": "physics.stackexchange",
"id": 66912,
"tags": "quantum-mechanics, quantum-information, randomness"
} |
What is the symbol to differentiate between 3D and 4D tensors? | Question: I am writing a computer program and in there I need to differentiate 3D tensors (metric tensor, Riemann tensor, Ricci scalar, Christoffel Symbols, etc.) from 4D ones.
I wanted to write something like ${g^{3}_{tt}}$, but this does not work for Riemann Tensor or Christoffel Symbols. Is there a convention for this type of thing?
Answer: It is common to indicate the spacetime dimension $d$ of the geometry with a pre-superscript $(d)$ on tensors, e.g. $^{(3)}T$ and $^{(4)}T$, and so forth. | {
"domain": "physics.stackexchange",
"id": 93159,
"tags": "general-relativity, metric-tensor, tensor-calculus, conventions, notation"
} |
What is the tiny dot of light in front of both lenses in the image? I noticed it while trying to pass an 800 nm pulsed laser through the lens setup | Question: The tiny white dots were formed in front of both lenses when I was trying to pass a 7 W, 90 fs, 800 nm pulsed laser through the lens setup shown in the image.
I noticed that the dots disappear when I block the incident laser and only appear when lasing. As additional information, the laser propagates from left to right in the image.
Was curious if it was due to reflections.
Answer: Reflected light from the lenses is being focused, causing either dust particles or the air to break down. | {
"domain": "physics.stackexchange",
"id": 98073,
"tags": "electromagnetism, optics, laser"
} |
Advent of Code 2020 - Day 1: finding 2 or 3 numbers that add up to 2020 | Question: Next: Advent of Code 2020 - Day 2: validating passwords
Problem statement
I decided to take a shot at Advent of Code 2020 to exercise my Rust knowledge. Here's the task for Day 1:
Day 1: Report Repair
[...]
Specifically, they need you to find the two entries that sum to 2020
and then multiply those two numbers together.
For example, suppose your expense report contained the following:
1721
979
366
299
675
1456
In this list, the two entries that sum to 2020 are 1721 and 299.
Multiplying them together produces 1721 * 299 = 514579, so the correct
answer is 514579.
[...]
Part Two
[...] They offer you a second one if you can find three numbers in
your expense report that meet the same criteria.
Using the above example again, the three entries that sum to 2020 are
979, 366, and 675. Multiplying them together produces the answer,
241861950.
The full story can be found on the website.
My solution
src/day_1.rs
use {anyhow::Result, std::io::prelude::*};
pub type Number = u64;
pub fn parse_data<R: BufRead>(reader: R) -> Result<Vec<Number>> {
reader.lines().map(|line| Ok(line?.parse()?)).collect()
}
// Finds a pair of numbers in `data` that add up to `sum`
pub fn find_pair(mut data: Vec<Number>, sum: Number) -> Option<(Number, Number)> {
data.sort_unstable();
find_pair_in_sorted(&data, sum)
}
// Finds a triple of numbers in `data` that add up to `sum`.
pub fn find_triple(mut data: Vec<Number>, sum: Number) -> Option<(Number, Number, Number)> {
data.sort_unstable();
while let Some(number) = data.pop() {
let sub_sum = match sum.checked_sub(number) {
Some(sub_sum) => sub_sum,
None => continue,
};
if let Some((a, b)) = find_pair_in_sorted(&data, sub_sum) {
return Some((a, b, number));
}
}
None
}
// Finds a pair of numbers in `data` that add up to `sum`
// by binary searching for `sum - n` for each number `n` in `data`.
// `data` must be sorted in ascending order.
pub fn find_pair_in_sorted(mut data: &[Number], sum: Number) -> Option<(Number, Number)> {
while let Some((&number, new_slice)) = data.split_last() {
data = new_slice;
let target = match sum.checked_sub(number) {
Some(target) => target,
None => continue,
};
if data.binary_search(&target).is_ok() {
return Some((target, number));
}
}
None
}
src/bin/day_1_1.rs
use {
anyhow::{anyhow, Result},
aoc_2020::day_1::{self as lib, Number},
std::{fs::File, io::BufReader},
};
const PATH: &str = "./data/day_1/input";
const SUM: Number = 2020;
fn main() -> Result<()> {
let file = BufReader::new(File::open(PATH)?);
let data = lib::parse_data(file)?;
let (a, b) = lib::find_pair(data, SUM)
.ok_or_else(|| anyhow!("cannot find pair that adds up to {}", SUM))?;
println!("{} * {} = {}", a, b, a * b);
Ok(())
}
src/bin/day_1_2.rs
use {
anyhow::{anyhow, Result},
aoc_2020::day_1::{self as lib, Number},
std::{fs::File, io::BufReader},
};
const PATH: &str = "./data/day_1/input";
const SUM: Number = 2020;
fn main() -> Result<()> {
let file = BufReader::new(File::open(PATH)?);
let data = lib::parse_data(file)?;
let (a, b, c) = lib::find_triple(data, SUM)
.ok_or_else(|| anyhow!("cannot find triple that adds up to {}", SUM))?;
println!("{} * {} * {} = {}", a, b, c, a * b * c);
Ok(())
}
Crates used: anyhow 1.0.37
cargo fmt and cargo clippy have been applied.
Answer: Your code is well done and shows familiarity with Rust.
The best I can do is comment on some parts of the code and provide two small, concrete pieces of advice. There is very little room for improvement in the code.
I see your Vecs in function parameters are well motivated. The one in find_triple is motivated by the need to do pop, and the one in find_pair just follows the other for consistency between parts of the task.
The decision to introduce Number as a type alias is ok, and works and looks fine, but keep in mind that it adds an extra mental hop for the reader.
Return types
Acting on intuition, I'd change the return types to fixed-length arrays instead of tuples. This draws attention to the fact that the array/tuple is homogeneous. I would change:
-> Option<(Number, Number, Number)> and return Some((a, b, number));
to
-> Option<[Number; 3]> and return Some([a, b, number]);
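Applied to find_pair, the suggested signature might look like this (a self-contained sketch; the body uses a simple two-pointer scan over the sorted data rather than the original binary-search helper, purely to keep the example short):

```rust
// Sketch of the suggested change: Option<[u64; 2]> instead of Option<(u64, u64)>.
fn find_pair(mut data: Vec<u64>, sum: u64) -> Option<[u64; 2]> {
    data.sort_unstable();
    let mut lo = 0;
    let mut hi = data.len().checked_sub(1)?;
    while lo < hi {
        match (data[lo] + data[hi]).cmp(&sum) {
            std::cmp::Ordering::Equal => return Some([data[lo], data[hi]]),
            std::cmp::Ordering::Less => lo += 1,
            std::cmp::Ordering::Greater => hi -= 1,
        }
    }
    None
}

fn main() {
    let report = vec![1721, 979, 366, 299, 675, 1456];
    match find_pair(report, 2020) {
        Some([a, b]) => println!("{} * {} = {}", a, b, a * b),
        None => println!("no pair found"),
    }
}
```

Array patterns destructure just as conveniently as tuple patterns, so call sites barely change.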
Typo?
It's good to be correct and consistent in naming. The function name is find_triple - but 'triple' is not a noun, 'triplet' is. | {
"domain": "codereview.stackexchange",
"id": 40449,
"tags": "algorithm, programming-challenge, rust"
} |
After a Type Ia supernova explosion, what becomes of the degenerate matter? | Question: A white dwarf below 1.44 solar masses in a binary system may accrete mass from its companion. If its core reaches the temperature for carbon fusion during this process, the white dwarf may reignite in a runaway fusion. The runaway fusion could release sufficient energy to unbind the white dwarf in a type Ia supernova. An earlier answer mentioned that the star is completely destroyed in a type Ia supernova.
Since the degenerate matter in the core of the white dwarf is the result of the gravitational compression, what becomes of the degenerate matter once the white dwarf is unbound?
Answer: It becomes part of the interstellar medium.
The carbon and oxygen is almost entirely fused into heavier elements and dispersed. This is the process by which much of the iron, nickel, manganese and some other elements (Si, S, Ar, Ca) come to be relatively abundant in stars formed in the last 10 billion years. | {
"domain": "astronomy.stackexchange",
"id": 6800,
"tags": "supernova, white-dwarf, degenerate-matter"
} |
Problem involving Dirac's equation | Question: I'm stuck on an equation derivation in Ryder's QFT book.
Starting with Dirac's equation:
$$(i\gamma^\mu\partial_\mu-m)\psi=0$$
If I multiply by $i\gamma^\nu\partial_\nu$, I get:
$$((\gamma^\nu\partial_\nu)(\gamma^\mu\partial_\mu)+i\gamma^\nu\partial_\nu m)\psi=0$$
I should get:
$$(\gamma^\nu\gamma^\mu\partial_\nu\partial_\mu+m^2)\psi=0$$
I suppose that this means:
$$i\gamma^\nu\partial_\nu=m$$
But I don't know why. ¿Could anyone show me the way to prove this property?
Thanks.
Answer: First, to get the equation you want apply $(i\gamma^\nu\partial_\nu + m)$ to both sides, then on the left hand side you'll get
\begin{align}
(i\gamma^\nu\partial_\nu + m)(i\gamma^\mu\partial_\mu - m)\psi
&= (-\gamma^\nu\gamma^\mu\partial_\nu\partial_\mu-m^2)\psi
\end{align}
which, when set to zero, gives
$$
(\gamma^\nu\gamma^\mu\partial_\nu\partial_\mu+m^2)\psi = 0
$$
as desired. The expression you wrote down is missing an $i$; if you apply $i\gamma^\nu\partial_\nu$ to both sides, then you get
$$
(\gamma^\nu\gamma^\mu\partial_\nu\partial_\mu + im\gamma^\nu\partial_\nu)\psi = 0
$$
which combined with the equation you wanted to get gives
$$
i\gamma^\nu\partial_\nu\psi = m\psi
$$
which is fine because this is just the Dirac equation again. You cannot conclude from this that $i\gamma^\nu\partial_\nu = m$; this is only true when acting on solutions to the Dirac equation, not as a statement about differential operators. | {
"domain": "physics.stackexchange",
"id": 7554,
"tags": "quantum-field-theory, dirac-equation"
} |
How long does it take for a hard clam (Mercenaria mercenaria) to reach sexual maturity? | Question: How long does it take for a wild hard clam (Mercenaria mercenaria) to reach an age where it is reproductive? How does fecundity depend on age and size? I've looked at several hard clam life cycles and none of them have this information. They tend to agree that hard clams are long-lived and take multiple years to reach a size suitable for harvest. While I am more interested in wild hard clams, please also share information on farmed hard clams if you have it.
Answer: According to Ebersole 1987, Mercenaria mercenaria reaches sexual maturity at two years of age although some individuals can reach maturity at 1 year of age under certain growing conditions. The mean size of individuals reaching sexual maturity is 33 mm with males somewhat smaller than females.
Ebersole cited a few studies that estimated fecundity of M. mercenaria. The results were highly variable. Individuals of about 60 mm in size released between 2 million and 6 million eggs. Other estimates were as high as 25 million eggs. Ebersole also cited a paper by Bricelj and Malouf (1980) where they found a general association of increasing fecundity with increasing clam size. I don't have access to that paper but a book called The Great South Bay edited by Schubel et al. 1991 discussed the results of Bricelj and Malouf. Scroll to page 45 of the Google Books preview. They show the mean fecundity and size as follows:
Seed (<48mm): 1.6 million eggs / individual
Little Necks (48-70mm): 2.9 million
Cherrystones (70-79mm): 5.9 million
Chowders (>79mm): 6.3 million
However, the figure on page 45 of Schubel et al., taken from the Bricelj and Malouf (1980) paper, shows a lot of variability in fecundity at different size classes. Hopefully, the citations in the Ebersole paper will point you to other useful studies. If you don't have ready access to the book, your local library may be able to get it for you on loan.
Literature Cited
Bricelj, V.M. and R.E. Malouf. 1980. Aspects of reproduction of hard clams (Mercenaria mercenaria) in Great South Bay, New York. Proceedings of the National Shellfisheries Association 70: 216-229.
Ebersole, 1987. Species profiles: Life histories and environmental requirements of coastal fishes and invertebrates (South Atlantic). Hard clams. US Fish and Wildlife Service Biological Report 82 (11.75).
Schubel, J.R. et al. (editors) 1991. The Great South Bay. State University of New York Press, Albany, New York, USA. | {
"domain": "biology.stackexchange",
"id": 2716,
"tags": "ecology, marine-biology"
} |
Where does this equality involving wavefunctions and their derivatives evaluated at infinity come from? | Question: Is it true that $\frac{\partial\Psi_1^*}{\partial x}\Psi_2|^{\infty}_{-\infty}=\frac{\partial\Psi_2^*}{\partial x}\Psi_1|^{\infty}_{-\infty}$ for wavefunctions obeying the Schrodinger equation? For me it seems immediately obvious that not necessarily, but this seems to be necessary to show $\frac{d}{dt}\int_{-\infty}^{\infty}\Psi_1^*\Psi_2dx=0$
Answer: When solving the Schrödinger equation, one must also impose boundary conditions. It is usually taken as an implicit assumption that the wavefunctions vanish at spatial infinity. In other words, $\Psi_i(\pm\infty) = 0$ for $i = 1,2$, which leads to the expression you wrote down. | {
"domain": "physics.stackexchange",
"id": 84297,
"tags": "quantum-mechanics, wavefunction, schroedinger-equation"
} |
Coordinates vs. parametrization of a worldsheet | Question: In introductory string theory, the worldsheet is described (e.g. Tong, Polchinski) as a surface $X^\mu(\tau,\sigma)$ in Minkowski spacetime indexed by two parameters: $(\tau,\sigma)$. Now, I initially wrote this off as classic abuse of notation, where we're taking $X^\mu(\tau,\sigma)$ to be shorthand for $X^\mu(S(\tau,\sigma))$ where $S:\mathbb{R}^2\to M$ is the map to the "abstract" worldsheet and $X^\mu :M\to\mathbb{R}^4$ is the (homeomorphism defined by the) chart. However, then I was introduced to the induced metric on the worldsheet,$$\gamma_{\alpha\beta}=\frac{\partial X^\mu}{\partial\sigma^\alpha}\frac{\partial X^\nu}{\partial\sigma^\beta}\eta_{\mu\nu},$$ and I realized the $\sigma^\alpha\in\{\tau,\sigma\}$ are meant to be understood as genuine coordinates, despite the fact they were clearly introduced as parameters. But the parameters and coordinates need not have anything to do with each other. I could happily reparametrize the worldsheet and leave the chart alone and that obviously wouldn't be a coordinate transformation right?
The only way I can make sense of this is if really $X^\mu :\mathbb{R}^2\to\mathbb{R}^4$ and $X^\mu(\tau,\sigma)$ is shorthand for $X^\mu[\varphi(S(\tau,\sigma))]$ with the worldsheet chart $\varphi=S^{-1}$. Then whenever we perform a reparameterization we modify $S$ and $\varphi$ simultaneously so that it is a genuine coordinate transformation. As in, the reparametrization takes us from $S(\tau,\sigma)\rightarrow S^\prime(\tau^\prime,\sigma^\prime)$, and we modify the chart, $S^{-1}(p)\rightarrow (S^\prime)^{-1}(p)$ so that rather than a given point mapping to the coordinates $(\tau,\sigma)$ it maps to $(\tau^\prime,\sigma^\prime)$. So in effect, we simply choose the coordinates to numerically coincide with the parameters.
Is this correct or am I overthinking it?
Answer: I believe the core issue of your question is the sentence "But the parameters and coordinates need not have anything to do with each other". This is not true. If $\cal M$ is a smooth manifold, $U\subset \cal M$ is an open subset on which we define a coordinate chart $x: U\to \mathbb{R}^D$ the inverse $x^{-1}:\mathbb{R}^D\to U$ is called the associated parameterization. Coordinates and parameters are distinct names for the same thing.
The origin of this issue is probably because you recall that a parameterized curve is a map $\gamma:\mathbb{R}\to \mathbb{R}^n$ and then you seem to think that an embedding $\phi:\Sigma\to {\cal M}$ presupposes a parameterization. That is not true. Given one embedding $\phi : \Sigma\to {\cal M}$ you will have a parameterization when you introduce coordinates on $\Sigma$. In that setting a reparameterization is just a change of coordinates in $\Sigma$.
In the parameterized-curve story, one implicitly assumes the trivial chart on $\mathbb{R}$, namely $t: \mathbb{R}\to \mathbb{R}$ given by the identity map. A reparameterization, as you know from the study of curves, is then given by a change of coordinates.
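To tie this back to the induced metric in the question: once coordinates $(\tau,\sigma)$ are chosen on $\Sigma$, $\gamma_{\alpha\beta}$ is computed directly from the resulting parameterization. Here is a hypothetical SymPy sketch for an invented example — a cylinder embedding in Minkowski space (the embedding and symbol names are mine, not from the question):

```python
import sympy as sp

tau, sigma, R = sp.symbols('tau sigma R', positive=True)
eta = sp.diag(-1, 1, 1, 1)  # Minkowski metric, signature (-,+,+,+)

# Invented example: a cylinder worldsheet X^mu(tau, sigma)
X = sp.Matrix([tau, R * sp.cos(sigma), R * sp.sin(sigma), 0])

coords = [tau, sigma]
# gamma_{ab} = (d X^mu / d sigma^a)(d X^nu / d sigma^b) eta_{mu nu}
gamma = sp.Matrix(2, 2, lambda a, b: sum(
    X.diff(coords[a])[mu] * X.diff(coords[b])[nu] * eta[mu, nu]
    for mu in range(4) for nu in range(4)))

print(sp.simplify(gamma))  # Matrix([[-1, 0], [0, R**2]])
```

A reparameterization $(\tau,\sigma)\to(\tau',\sigma')$ then changes the components $\gamma_{\alpha\beta}$ exactly as an ordinary coordinate transformation would.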
So in summary: an embedding is just one smooth injective immersion which is a homeomorphism onto its image. An embedding exists independently of any choice of parameterization. When you introduce coordinates on the domain you then have a parameterization of the embedded submanifold. | {
"domain": "physics.stackexchange",
"id": 82956,
"tags": "differential-geometry, metric-tensor, string-theory, coordinate-systems"
} |
Haskell zmq & protobuf message forwarding | Question: I wrote a test program (and first "real" Haskell program!) that receives a zmq request on 1 socket, and forwards part of that message to another socket.
There are a 2 things I specifically don't like:
I couldn't figure out a way to build a protobuf message without specifying every field. This is especially annoying for fields like failure_message, which I had to set to Nothing
I'm not a fan of the lazyToStrictBS conversion
Is there any way to avoid these? Also, any other things I should fix? I would be shocked if there were no other issues.
The code:
module Main where
import System.ZMQ4.Monadic
import Control.Monad (forever)
import Data.ByteString.Char8 (pack, unpack)
import qualified Data.ByteString as ByteString (ByteString, concat)
import Data.ByteString.Lazy as LBS (ByteString, toChunks)
import Control.Concurrent (threadDelay)
import Text.ProtocolBuffers.WireMessage (messageGet)
import Text.ProtocolBuffers.WireMessage (messagePut)
import ProtoMsg.ForwardRequest (ForwardRequest)
import ProtoMsg.ForwardResponse
import ProtoMsg.ReqResponse (ReqResponse)
import ProtoMsg.Status
import ProtoMsg.Retort
lazyToStrictBS :: LBS.ByteString -> ByteString.ByteString
lazyToStrictBS x = ByteString.concat $ LBS.toChunks x
main :: IO ()
main =
runZMQ $ do
repSocket <- socket Rep
bind repSocket "tcp://*:9000"
pubSocket <- socket Pub
bind pubSocket "tcp://*:9001"
liftIO $ putStrLn "Ready"
forever $ do
msg <- receive repSocket
(liftIO.putStrLn.unwords) ["Received request:", unpack msg]
let repMsg = ProtoMsg.ForwardResponse.ForwardResponse {
header = ProtoMsg.ReqResponse.ReqResponse {
message_id = Nothing,
user_id = Nothing,
request_id = Nothing,
ProtoMsg.ReqResponse.status = Just ProtoMsg.Status.OK,
failure_message = Nothing
}
}
let respMsg = ProtoMsg.Retort.Retort {
status = Just ProtoMsg.Status.OK,
failure_message = Nothing
}
send repSocket [] (lazyToStrictBS (messagePut respMsg))
Answer: 1) If you have used hprotoc to generate Haskell modules then you should be able to use defaultValue and optional fields will be assigned to Nothing by default i.e.
header = ProtoMsg.ReqResponse.ReqResponse {
message_id = Nothing,
user_id = Nothing,
request_id = Nothing,
ProtoMsg.ReqResponse.status = Just ProtoMsg.Status.OK,
failure_message = Nothing
}
could be replaced with
header = defaultValue {
ProtoMsg.ReqResponse.status = Just ProtoMsg.Status.OK
}
2) Instead of your lazyToStrictBS you could use the standard function toStrict (exported by Data.ByteString.Lazy) from the bytestring package | {
"domain": "codereview.stackexchange",
"id": 19146,
"tags": "haskell, zeromq, protocol-buffers"
} |
Time complexity of functions that call each other | Question: I'm having trouble reasoning about the time complexity of these mutually recursive functions. This was asked on SO here but the answer there didn't help me. I tried substituting one of the recurrences in the other but because they are mutually recursive I got stuck. I tried writing out the function calls for x=10 but I'm stuck making sense of that too. How do I go about working through something like this?
int foo(int x)
{
if(x < 1) return 1;
else return foo(x-1) + bar(x);
}
int bar(int x)
{
if(x < 2) return 1;
else return foo(x-1) + bar(x/2);
}
Edit: With @quicksort's answer I get S(n) = 2*S(n-1) + S(n/2) - S(n/2-1) and if I try to solve it using Wolfram I'm seeing something different: solution . Also, it's not intuitive to me that this is exponential. How does one see that?
Answer: Let $\mathcal{S}(n)$ be the running time of foo and $\mathcal{T}(n)$ be the running time of bar. We have the following system of recursive equations:
$$
\left\{
\begin{array}{r c l}
\mathcal{S}(n) & = & \mathcal{S}(n-1) + \mathcal{T}(n) + \Theta(1)\\
\mathcal{T}(n) & = & \mathcal{S}(n-1) + \mathcal{T}(n/2) + \Theta(1)
\end{array}
\right.
$$
By isolating $\mathcal{T}(n)$ in the first and $\mathcal{S}(n)$ in the second, we obtain:
$$
\left\{
\begin{array}{r c l}
\mathcal{S}(n-1) & = & \mathcal{T}(n) - \mathcal{T}(n/2) + \Theta(1)\\
\mathcal{T}(n) & = & \mathcal{S}(n) - \mathcal{S}(n-1) + \Theta(1)
\end{array}
\right.
$$
I will now solve for $\mathcal{T}$, with a similar reasoning holding for $\mathcal{S}$. Since:
$$
\mathcal{S}(n-1) = \mathcal{T}(n) - \mathcal{T}(n/2) + \Theta(1)
$$
We also have that:
$$
\mathcal{S}(n) = \mathcal{T}(n+1) - \mathcal{T}((n+1)/2) + \Theta(1)
$$
Therefore the first equation of our original system becomes:
$$
\mathcal{T}(n+1) - \mathcal{T}((n+1)/2) = \mathcal{T}(n) + \mathcal{T}(n) - \mathcal{T}(n/2) + \Theta(1)
$$
Reordering the terms:
$$
\mathcal{T}(n+1) = 2 \mathcal{T}(n) - \mathcal{T}(n/2) + \mathcal{T}((n+1)/2) + \Theta(1)
$$
Since $(n+1)/2$ is either $n/2$ or $n/2+1$, it must be that
$$ \mathcal{T}((n+1)/2) - \mathcal{T}(n/2) \ge 0$$
which means:
$$
\mathcal{T}(n+1) \ge 2 \mathcal{T}(n) + \Theta(1)
$$
and:
$$
\mathcal{T}(n) \in \Omega(2^n)
$$
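The exponential growth can also be checked empirically — here is a hypothetical Python sketch that counts the number of calls (rather than timing them) made by the mutually recursive pair from the question:

```python
from functools import lru_cache

# Count the total number of function calls made by foo(n) / bar(n),
# mirroring the mutually recursive C++ code (x/2 is integer division).

@lru_cache(maxsize=None)
def calls_foo(x):
    if x < 1:
        return 1
    return 1 + calls_foo(x - 1) + calls_bar(x)

@lru_cache(maxsize=None)
def calls_bar(x):
    if x < 2:
        return 1
    return 1 + calls_foo(x - 1) + calls_bar(x // 2)

for n in range(1, 25):
    ratio = calls_foo(n + 1) / calls_foo(n)
    print(n, calls_foo(n), round(ratio, 3))  # ratio of consecutive counts settles near 2
```

The ratio of consecutive call counts tends to $2$, consistent with the $\Omega(2^n)$ bound above.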
We can check the other arrow (i.e. $ \mathcal{T}(n) \in \mathcal{O}(2^n) $) by induction. | {
"domain": "cs.stackexchange",
"id": 8126,
"tags": "complexity-theory, time-complexity, asymptotics"
} |
Reinforcement Learning - Model based and model free | Question: I'm studying reinforcement learning and I found confusing information. I know there are two different types of reinforcement learning, model-based and model-free. In the second image, it is possible to see TD learning, so I don't understand whether TD learning is another type of reinforcement learning or whether it is model-based.
Answer: Reinforcement Learning is a paradigm of learning with reward and includes tons of methods. One categorization is between model-based RL and model-free RL.
In model-based the agent knows all the possible contingencies from the current state and can evaluate the expected return which will help it to take an informative decision. Computationally this means that the agent has a model of how the environment changes states given an action. It also knows the outcome of a specific transition. Before the agent moves, it makes an estimation of what to expect given current state. In cognitive terms this is planning. Model-based is also referred to as response-outcome model. The reason for this is that given a change in the outcome (reward) the agent can adapt its response accordingly to optimize return. This is because it has a model of the world that tells it "what happens if".
Model-free RL is related to stimulus-response learning or else habitual learning. The agent has no knowledge of "what happens if". Instead it has passed through an extensive period of trial and error till it managed to associate a particular state (stimulus) with a particular action (response) by (computationally) maintaining an estimation of expected return from the current state.
TD methods belong to MF RL as they do not use a model in order to predict what happens in future transitions. In other words they do not perform planning. Computationally, TD methods are used to learn value functions (estimators of expected return) by accumulating sampled experience (i.e. interacting with the task by selecting actions and observing the outcomes). Of course there is a trade-off between exploring new actions and exploiting ones whose outcomes have already been estimated.
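To make "maintaining an estimation of expected return" concrete, here is a minimal, hypothetical Python sketch of a tabular TD(0) update on an invented two-state chain (the task, learning rate, and episode count are illustrative choices, not from the question):

```python
# Minimal TD(0) sketch: a two-state chain s0 -> s1 -> terminal,
# with reward 1 on the final transition. The agent never models the
# transitions; it only updates value estimates from sampled experience.

alpha, gamma = 0.1, 1.0          # learning rate, discount factor
V = {"s0": 0.0, "s1": 0.0}       # value estimates (expected return)

for episode in range(500):
    # transition s0 -> s1, reward 0: bootstrap from the estimate of s1
    V["s0"] += alpha * (0.0 + gamma * V["s1"] - V["s0"])
    # transition s1 -> terminal, reward 1 (terminal value is 0)
    V["s1"] += alpha * (1.0 + gamma * 0.0 - V["s1"])

print(V)  # both estimates approach the true return of 1.0
```

Nothing here knows "what happens if"; the estimates converge purely from repeated stimulus-response experience.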
In general, if I were to perform MB RL I would think the outcomes of all possible contingencies from my current state, up to a horizon. Then I would compare those and I would select the action that corresponds to the best one.
In contrast if I were to perform MF RL I would simply try an action that either I think is good because in the past it led me to a positive outcome, or a random action in order to see if I can get an even better outcome. After the action I would observe the outcome and update how good is that action based on the new evidence.
There have been some pathways in the brain that could computationally be described (approximately) by MF and MB RL. There has been research in pathways that combine both (learning a model of the world by experience).
If you are interested in RL from computational neuroscience perspective you could search some tutorial on MB/MF RL by Nathaniel Daw or Yael Niv (they have written a lot) and work representative of the field by Peter Dayan - one of the pioneers on the subject - and Samuel Gershman. | {
"domain": "datascience.stackexchange",
"id": 10606,
"tags": "reinforcement-learning"
} |
Time Complexity of Logarithmic For loop | Question: Say I have a for loop like this
for(int i=1;i<n;i=i*2)
{
for(int j=1;j<i;j=j*2)
{
cout<<"hello";
}
}
What is the time complexity of this loop?
I have approached this problem like this. The outer loop runs log(n) times and the inner loop runs log(i) times, so in total the complexity becomes $O(log(log(n)))$
Whereas, my friend has approached it like this. The outer loop is running,
$i=2^0$+$2^1$+$2^2$+$2^3$.... $2^{log(n)}$
times and since the inner loop is running log(i) times, so the total time complexity we have is
$TC=0+1+2+3...log(n)= O( (log(n))^2 )$
Which of these two is correct $O(log(log(n)))$ or $O( (log(n))^2 )$ ?
Answer: Our approach is to find a recursive formula for the time complexity of the code. For each value of $i$, the inner loop runs $\log i$ times.
Suppose $T(n)$ is the time complexity of the given code, so:
$$T(n)=T(\frac{n}{2})+\log n$$.
At each step the inner loop costs $\log$ of the current value, and the outer loop divides $n$ by $2$. Solving the recurrence above by any standard method (suppose $n=2^k$) gives:
$$T(n)=\sum_{i=1}^{\log n}\log\frac{n}{2^i}$$
$$=\sum_{i=1}^{\log n}(\log n-i)=\sum_{i=0}^{\log n-1}i=O(\log^2n)$$
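This closed form is easy to sanity-check numerically — a hypothetical Python sketch that simply counts how many times the inner body runs:

```python
from math import log2

def count_hellos(n):
    """Count inner-loop iterations of the two doubling loops for a given n."""
    count = 0
    i = 1
    while i < n:
        j = 1
        while j < i:
            count += 1
            j *= 2
        i *= 2
    return count

n = 2 ** 20
k = int(log2(n))
print(count_hellos(n), k * (k - 1) // 2)  # 190 190
```

For $n = 2^k$ the count is exactly $k(k-1)/2 = \Theta(\log^2 n)$.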
As a result:
$$T(n)=O(\log^2n)$$ | {
"domain": "cs.stackexchange",
"id": 18556,
"tags": "complexity-theory, time-complexity"
} |
Display x, y and z axis names next to each axis in rviz | Question:
Is there a way to display the names of the axes (of the TFs) next to each axis in rviz?
For example, if I remember correctly, the red axis should represent the x-axis, while the green axis should represent the y-axis. The problem is that I'm never sure about these conventions and having to look them up every time is a little bit tedious.
Originally posted by nbro on ROS Answers with karma: 372 on 2017-04-21
Post score: 1
Answer:
That's currently not possible. It also took me a while but once I realized that XYZ maps to RGB, it got pretty natural rather quickly.
Originally posted by rbbg with karma: 1823 on 2017-04-21
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 27684,
"tags": "rviz"
} |
Form builder class | Question: I am trying to learn OOP in PHP. My PHP knowledge is pretty good, but I haven't tried OOP before. I'm currently building a form-builder class. Can you take a look and tell me if this is the right way?
<?php
class Forms {
protected $_inputs;
private $_method, $_action;
public function __construct($method,$action = NULL)
{
$this->_method = $method;
$this->_action = $action;
}
public function addInput($type,$name,$value,$placeholder = NULL,$extra = NULL) {
$this->_inputs[] = array (
'type' => $type,
'name' => $name,
'value' => $value,
'placeholder' => $placeholder,
'extra' => $extra
);
}
public function addLabel($name) {
$this->_inputs[] = array (
'type' => 'label',
'name' => $name
);
}
public function addTextarea($name,$value = NULL, $placeholder = NULL) {
$this->_inputs[] = array (
'type' => 'textarea',
'name' => $name,
'value' => $value,
'placeholder' => $placeholder
);
}
public function printForm() {
$html = '<form action="'.$this->_action.'" method="'.$this->_method.'">';
for ($i = 0; $i < count($this->_inputs); $i++) {
switch($this->_inputs[$i]['type']) {
case "label":
$html .= '<label for="'.$this->_inputs[$i]['name'].'">'.$this->_inputs[$i]['name'].'</label>';
break;
case "text":
case "password":
case "submit":
case "email":
$html .= '<input type="'.$this->_inputs[$i]['type'].'" ';
$html .= ' name="'.$this->_inputs[$i]['name'].'" ';
$html .= ' value="'.$this->_inputs[$i]['value'].'" ';
$html.= ' placeholder="'.$this->_inputs[$i]['placeholder'].'"> ';
$html .= ' '.$this->_inputs[$i]['extra'].'';
break;
case "textarea":
$html .= '<textarea name="'.$this->_inputs[$i]['name'].'" value="'.$this->_inputs[$i]['value'].'" placeholder="'.$this->_inputs[$i]['placeholder'].'"></textarea>';
break;
}
}
$html .= '</form>';
print($html);
}
}
?>
This is how I make the form on a page:
<?php
$form3 = new Forms('post');
$form3->addLabel('username');
$form3->addInput('text','username','','Username','<br>');
$form3->addLabel('password');
$form3->addInput('password','password','','Password','<br>');
$form3->addInput('submit','submit_login','Login2');
$form3->printForm();
?>
Answer: I think your code is good overall, I only have a couple of small points:
I wouldn't print inside the form class, but return the HTML instead, and then print inside the calling class.
I don't like the way you handle labels, as it's very inflexible. I have to use the same for value as the text of the label (what if the text is "5 Cats", which isn't allowed as id?). It's also not clear from the outside that this is how it works (as you are missing PHPDoc comments, and the parameter is called name, not id or for).
You are missing some functionality, such as adding classes or ids to the tags. Maybe you can add a generic addAttributeTo($name, $key, $value) method, which adds the key/value pair to the element specified by name.
I would think about defending against XSS in this class, so the caller doesn't have to remember to do it. | {
"domain": "codereview.stackexchange",
"id": 12503,
"tags": "php, beginner, object-oriented, form"
} |
Slow performance of Google Apps Script that updates data | Question: I have a Google Spreadsheet with a few hundred sheets. Users update those sheets with new data under a strict and specific template. I created a Google Apps Script that I run once in a while to change several things on every sheet, keeping the original data intact, and export the results as multiple CSV files that I store in Google Drive. In more detail,
Iterate on every sheet
Duplicate the current one
Run a set of functions on every cell I have to update (~500 on each)
Export the sheet to CSV and store it in Drive
Delete the temp sheet
Move to the next one
Number 3 is the most time-consuming. It might take 30-40 seconds for every sheet, and the functions are simple math formulas or dictionaries.
Here is the code which I have removed functions that I just repeat for more cells.
function saveAsCSV() {
var maxSheetID = 100;
var sheetsFolder = DriveApp.getFoldersByName('sheetsFolder_CSV').next();
var folder = sheetsFolder.createFolder('sheetsFolder' + new Date().getTime());
for (var sheetID = 1; sheetID <= maxSheetID; sheetID++) {
createTempSheet();
copyRowsWithCopyTo(sheetID);
var tempSheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('temp');
tempSheet.getDataRange().setDataValidation(null); //remove all validations I have
var cell1 = tempSheet.getRange(5,4);
cell1.setValue(function1(cell1.getValue()));
var cell2 = tempSheet.getRange(6,4);
cell2.setValue(function2(cell2.getValue()));
var cell3 = tempSheet.getRange(7,4);
cell3.setValue(function3(cell3.getValue()));
//continue for several cells like this on specific i,j indexes. No pattern.
//Table
for (var p = 9; p <= 30; p++) {
var tCell1 = tempSheet.getRange(p,2);
tCell1.setValue(function5(tCell1.getValue()));
var tCell2 = tempSheet.getRange(p,3);
tCell2.setValue(function6(tCell2.getValue()));
//continue for several cells like this with dynamic p and several columns
}
var fileName = sheetID + ".csv";
var csvFile = convertRangeToCsvFile_(fileName, tempSheet);
folder.createFile(fileName, csvFile);
}
Browser.msgBox('Files are waiting in a folder named ' + folder.getName());
}
function createTempSheet() {
var activeSpreadsheet = SpreadsheetApp.getActiveSpreadsheet();
var tempSheet = activeSpreadsheet.getSheetByName("temp");
if (tempSheet != null) {
activeSpreadsheet.deleteSheet(tempSheet);
}
tempSheet = activeSpreadsheet.insertSheet();
tempSheet.setName("temp");
}
function convertRangeToCsvFile_(csvFileName, sheet) {
// get available data range in the spreadsheet
var activeRange = sheet.getDataRange();
try {
var data = activeRange.getValues();
var csvFile = undefined;
// loop through the data in the range and build a string with the csv data
if (data.length > 1) {
var csv = "";
for (var row = 0; row < data.length; row++) {
for (var col = 0; col < data[row].length; col++) {
if (data[row][col].toString().indexOf(",") != -1) {
data[row][col] = "\"" + data[row][col] + "\"";
}
}
// join each row's columns
// add a carriage return to end of each row, except for the last one
if (row < data.length-1) {
csv += data[row].join(",") + "\r\n";
}
else {
csv += data[row];
}
}
csvFile = csv;
}
return csvFile;
}
catch(err) {
Logger.log(err);
Browser.msgBox(err);
}
}
function copyRowsWithCopyTo(sourceName) {
let spreadSheet = SpreadsheetApp.getActiveSpreadsheet();
let sourceSheet = spreadSheet.getSheetByName(sourceName);
let sourceRange = sourceSheet.getDataRange();
let targetSheet = spreadSheet.getSheetByName('temp');
sourceRange.copyTo(targetSheet.getRange(1, 1));
}
Answer: As the documentation says,
Using JavaScript operations within your script is considerably faster than calling other services. Anything you can accomplish within Google Apps Script itself will be much faster than making calls that need to fetch data from Google's servers or an external server, such as requests to Spreadsheets, Docs, Sites, Translate, UrlFetch, and so on. Your scripts will run faster if you can find ways to minimize the calls the scripts make to those services.
The slowest part in the script is all those .getValue() and .setValue() calls. Not only is the script making those calls, it is alternating them as well, even in the same line(rg.setValue(function(rg.getValue()))). Alternating read and write is slow:
Every time you do a read, we must first empty (commit) the write cache to ensure that you're reading the latest data (you can force a write of the cache by calling SpreadsheetApp.flush()). Likewise, every time you do a write, we have to throw away the read cache because it's no longer valid. Therefore if you can avoid interleaving reads and writes, you'll get full benefit of the cache.
http://googleappsscript.blogspot.com/2010/06/optimizing-spreadsheet-operations.html
The script here leaves out essential information. What does function1 do? More importantly, whether function1 does anything to the data. Does it modify the sheet again? Also, when getting the csv, does the sheet make any calculations through formula?
Ideally, Your script structure should look like:
INPUT: Get data from one sheet
OUTPUT: Set modified data to Drive
This implies:
no temporary sheet creation,
no repeated get/set calls,
no relying on sheet formulas after getting data
Other minor changes include:
Use const and let as needed instead of declaring all variables as var. This helps the javascript engine optimize the memory and processing of such variables
The script repeats almost all the code multiple times. Practice the DRY (Don't Repeat Yourself) principle, see e.g. below. Use loops where possible: even in cases where there isn't a pattern, you can create a list of the indexes you want and then loop over them.
insertSheet is able to create new sheet with old sheet as a template. So, the functions createTempSheet and copyRowsWithCopyTo are completely unnecessary.
console is a newer standard class. Use it instead of Logger
Script with sample modifications:
function saveAsCSV() {
const maxSheetID = 100;
const sheetsFolder = DriveApp.getFoldersByName('sheetsFolder_CSV').next();
const folderName = 'sheetsFolder' + new Date().getTime();
const folder = sheetsFolder.createFolder(folderName);
const ss = SpreadsheetApp.getActiveSpreadsheet();
for (let sheetID = 1; sheetID <= maxSheetID; sheetID++) {
/*needed? why not directly get the data*/ const tempSheet = ss.insertSheet(
'temp',
1,
{
template: ss.getSheetByName(sheetID),
}
),
datarange = tempSheet.getDataRange(),
/*1 GET call*/data = datarange.getValues(),
indexes = [
          // 0-based [row, col] into the data array: getRange(5, 4) is data[4][3]
          [4, 3], // automatically use index to calculate function number
          [5, 3],
          [6, 3, 'function3'], // or specify a function
        ];
datarange.setDataValidation(null); //remove all validations I have
indexes.forEach(/*DRY loop*/
([i, j, func], funcIdx) =>
(data[i][j] = this[ func ?? `function${funcIdx + 1}`](data[i][j]))
);
//continue for several cells like this on specific i,j indexes. No pattern.
//Table
for (let p = 9; p <= 30; p++) {
data[p - 1][1] = function5(data[p - 1][1]); // getRange(p, 2) is data[p - 1][1]
      data[p - 1][2] = function6(data[p - 1][2]);
//continue for several cells like this with dynamic p and several columns
}
/*needed?1 SET call*/ datarange.setValues(data);
const fileName = sheetID + '.csv';
const csvFile = convertRangeToCsvFile_(/*DRY*/data);
folder.createFile(fileName, csvFile);
ss.deleteSheet(tempSheet);
}
Browser.msgBox('Files are waiting in a folder named ' + folderName/*DRY*/);
}
function convertRangeToCsvFile_(data) {
try {
let csvFile;
// loop through the data in the range and build a string with the csv data
if (data.length > 1) {
let csv = '';
for (let row = 0; row < data.length; row++) {
for (let col = 0; col < data[row].length; col++) {
if (data[row][col].toString().indexOf(',') != -1) {
data[row][col] = '"' + data[row][col] + '"';
}
}
// join each row's columns
// add a carriage return to end of each row, except for the last one
if (row < data.length - 1) {
csv += data[row].join(',') + '\r\n';
} else {
csv += data[row];
}
}
csvFile = csv;
}
return csvFile;
} catch (err) {
console.log(err);
Browser.msgBox(err);
}
} | {
"domain": "codereview.stackexchange",
"id": 43744,
"tags": "performance, google-apps-script, google-sheets"
} |
Solving angular momentum conservation with energy conservation? | Question: A kid of mass M stands at the edge of a platform of radius R which can be freely rotated about its axis. The moment of inertia of the platform is I. The system is at rest when a friend throws a ball of mass m and the kid catches it. If the velocity of the ball is v horizontally along the tangent to the edge of the platform when it was caught by the kid, find the angular speed of the platform after the event. Solve using energy conservation
Answer: Although this question can be likened to a homework type question there is an important idea embedded in it, in that the act of catching is an inelastic collision so kinetic energy will not be conserved. This means that although energy will be conserved, kinetic energy would not, so it would be extremely difficult, if not impossible(?), to solve this problem using energy conservation. | {
"domain": "physics.stackexchange",
"id": 73698,
"tags": "homework-and-exercises, rotational-dynamics"
} |
Mixed state after measurement | Question: I'm looking at Section 2.4.1 of Nielsen and Chuang's Quantum Computation and Quantum Information were they derive the density operator versions of the evolution and measurement postulates of quantum mechanics and something is bugging me.
Let
$$\{(p(i), | x_i \rangle) \colon i=1,2,...,n\}$$
be an ensemble and suppose you perform a measurement on this ensemble that results in outcome $m$. Then the post-measurement ensemble is
$$\{(p(i|m), | x'_i \rangle) \colon i=1,2,...,n\}$$
where $p(i|m)$ is the probability that the state of the system was originally $| x_i \rangle$ given that the measurement outcome was $m$. It seems natural enough to use $p(i|m)$ for the probabilities of the post-measurement ensemble. However, mathematically, I do not see why that should be the case.
My question then is this: Given only the definition of what an ensemble is and the state vector version of the postulates of QM, is there a way to derive the rules to compute the post-measurement probabilities of an ensemble?
Answer: This is just classical probability theory, note in particular the theorem
$p(m)*p(i|m) = p(i)*p(m|i)\;\;[=p(i\text{ and } m)]$.
Repeat the experiment an enormous number of times $N$. Out of these repetitions, $N*p(i)$ are the state $|x_i\rangle$. Out of these, $N*p(i)*p(m|i) = N*p(m)*p(i|m)$ result in the measurement outcome $m$. So out of the ensemble of $N*p(m)$ measurements that have outcome $m$, a fraction $p(i|m)$ started in state $|x_i\rangle$. | {
"domain": "physics.stackexchange",
"id": 13222,
"tags": "quantum-mechanics, quantum-information, measurement-problem"
} |
What exactly is the relation between the Holevo quantity and the mutual information? | Question: On this page, it is stated that the Holevo quantity is an upper bound to the accessible information of a quantum state. In the scenario where Alice encodes classical information into a quantum state and sends it to Bob, the accessible information is the maximum of the mutual information between Alice and Bob's registers over all possible measurements that Bob makes.
On the other hand, the classical capacity of a quantum channel also looks at the mutual information between Alice and Bob. The maximization of this mutual information is the Holevo quantity.
I do not understand the difference in the two settings. In particular, why is the Holevo quantity only an upper bound in the first linked page but is equal to the maximum mutual information in the second linked page?
Answer: Right, they are quite similar. The Holevo bound is a bound on the amount of accessible information between your quantum system and your classical system. The I(X;B) object written in the HSW theorem wikipedia page is actually this bound, while the $\chi$ there is the Holevo rate, or product state capacity. What HSW showed was that if you took many copies of a system, the asymptotic many system rate of accessible information across a quantum channel can be made to approach the single state Holevo bound.
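As a concrete, hypothetical NumPy sketch (the ensemble is an invented example, not taken from the linked pages): for two equiprobable non-orthogonal pure states, the single-copy Holevo quantity $\chi = S(\rho) - \sum_i p_i S(\rho_i)$ comes out strictly below the one classical bit Alice encodes:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) in bits; tiny eigenvalues are clipped to avoid log(0)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Invented ensemble: equiprobable |0> and |+> (pure states, so S(rho_i) = 0)
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(ket0, ket0)
rhop = np.outer(ketp, ketp)
rho = 0.5 * rho0 + 0.5 * rhop

chi = (von_neumann_entropy(rho)
       - 0.5 * von_neumann_entropy(rho0)
       - 0.5 * von_neumann_entropy(rhop))
print(chi)  # ~0.6009 bits: less than the 1 classical bit used to choose a state
```

So no measurement Bob performs can extract the full bit from a single copy of this ensemble; the accessible information is capped by $\chi$.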
Interestingly, Hastings showed that the many state Holevo bound is actually even higher than the single state bound, meaning that the HSW theorem does not demonstrate the "true" Holevo bound to be saturated. | {
"domain": "quantumcomputing.stackexchange",
"id": 1418,
"tags": "quantum-state, quantum-operation, information-theory, channel-capacity, mutual-information"
} |
Time Domain Example of Nyquist/Shannon? | Question: I looked at an old thread, Nyquist Frequency Phase Shift, but was left wondering still about oversampling.
I learned Shannon 40 years ago, then worked 30 years in (s/w) engineering, but now retired I have only begun using/configuring complete digital audio setups. I find myself baffled by exactly the sort of hands-on question that was posed in the referenced thread. Yet most of the relatively simple (IMO) questions I've thrown at the search engines have landed on generic "Nyquist says" posts, and almost all graphs have been frequency domain.
Like the author of the referenced post, if I draw a simple sine wave and "sample" it with something just over twice its frequency, then vary the phasing of the sample points (or the wave) a few times, I end up with very different squarewave patterns. Especially for an illustratively short burst, it's impossible for me to deduce how even the most perfect linear-phase brickwall filter could regenerate the exact analog input (amplitude plus phase), which would be critical in preserving harmonic structure for any instrument. Certainly Nyquist had little to draw upon except filters as a physical 'decoder'.
It is very easy to picture that, for a tiny increment over 2*Fm, something very similar to the zero-output example given in the referenced post would be generated for a fairly long burst. It's equally hard to see how any procedure would regenerate a perfect image of the input burst, intermediate oversampling etc. notwithstanding, whereas for a much lower frequency (same Fs) it looks feasible.
Since available hw/sw has improved so radically since CDs and DAT first showed up, has anyone seen a genuine graphed experiment? Something like a 20khz pure tone riding on a (reasonably short) 2khz tone burst (or any such two-tone combination to give a gauge for zero phase shift), sampled at a rate just higher (say 5% e.g.) than twice the higher frequency. I would love to see the results, in time domain, input vs. output. I'd also love for them to be exact over multiple sample/decode tests. Then I could get over the feeling that not just oversampling but huge oversampling might be a good idea after all.
Or does insistence on a readable burst of short length throw such a monkey wrench into the spectrum being sampled that Fmax would in reality be way, way higher?
Answer: A short burst of frequency f contains a great deal of higher-frequency spectral energy, which is needed to make the burst short, or rectangular windowed, rather than infinitely long. Thus, a finite burst no longer meets the Nyquist criterion for sampling at just a bit over 2 times f.
The transform of a rectangular window is a Sinc, which has infinite support in the frequency domain. But a really really long window has a Sinc that decays faster, and thus may go below your noise floor soon enough.
If you want to sample at just a bit over 2 times f and not end up with aliasing, the signal may need to be stationary for a very very long time to allow reconstruction.
Or you can sample at a higher sample rate to acquire a sufficient amount of the spectra of the window that limits the length of a much shorter time domain burst.
So there's a trade-off between how close you can get to Nyquist frequencies versus the length of the signal. | {
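As a numerical illustration of that trade-off (a hypothetical sketch, not from the original answer; the frequencies and the 2-unit guard band are arbitrary choices of mine): the shorter a rectangular-windowed tone burst, the more of its normalized spectral energy leaks far away from the tone frequency.

```python
import cmath
import math

fs, f0 = 100.0, 9.7                      # sample rate and tone (arbitrary units)

def burst(cycles):
    """Rectangular-windowed sine burst of roughly `cycles` periods."""
    n = int(cycles * fs / f0)
    return [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]

def dtft_mag(x, f):
    """|DTFT| of x evaluated at frequency f (same units as fs)."""
    return abs(sum(v * cmath.exp(-2j * math.pi * f * t / fs)
                   for t, v in enumerate(x)))

def worst_leak(cycles):
    """Largest spectral magnitude more than 2 units away from the tone,
    relative to the magnitude at the tone itself."""
    x = burst(cycles)
    peak = dtft_mag(x, f0)
    grid = [fs / 2 * k / 400 for k in range(401)]   # 0 .. fs/2
    return max(dtft_mag(x, f) for f in grid if abs(f - f0) > 2) / peak

short, long_ = worst_leak(4), worst_leak(100)
print(short, long_)   # the short burst leaks far more than the long one
```

The 4-cycle burst's sinc skirts put substantial energy well outside the tone's neighborhood, while the 100-cycle burst's skirts have decayed by more than an order of magnitude, which is exactly the point made above.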
"domain": "dsp.stackexchange",
"id": 4036,
"tags": "sampling, nyquist"
} |
An algorithm that find the max X/Y in a polygon in O(log n) | Question: I got a task to create two functions one finds max $X$ and the other $Y$ in a polygon in $O(\log n)$.
The polygon is represented by an array of its vertices where each vertex is represented by its coordinates $(X,Y)$.
We also know that the first vertex in the array has the smallest $X$ coordinate and after that, all the other coordinates are set in the array clockwise and also that all of them are different (all the $X$'s and all the $Y$'s are different)
I have the general idea.. using binary search but I'm not sure how to fully apply it.
How should I move in the array while checking coordinates and yet keep it in $O(\log n)$ complexity?
Answer: Assume the polygon is a convex polygon; otherwise there might not be an $O(\log n)$ algorithm to find the maximum of X-coordinates or Y-coordinates.
First, let us handle the case of X-coordinate.
Let $x_1, x_2, \cdots, x_n$ be the X-coordinates of the points. Since $x_1$ is the minimum X-coordinate, that sequence is a bitonic sequence, and the maximum of a bitonic sequence can be found with a simple binary search.
Next, let us solve the case of Y-coordinate, which is mildly more complicated.
Let $y_1, y_2, \cdots, y_n, y_{n+1}=y_1$ be the Y-coordinate of the points. We can drill down into the following cases by comparing $y_1$ against $y_2$ and against $y_n$.
$C_{\lt, \lt}$ when $y_1\lt y_2$ and $y_1\lt y_n$. Here $y_1$ is the minimum.
$C_{\gt, \gt}$ when $y_1\gt y_2$ and $y_1\gt y_n$. Here $y_1$ is the maximum.
$C_{\lt, \gt}$ when $y_1\lt y_2$ and $y_1\gt y_n$. Here Y-coordinates goes up from $y_1$ to the maximum, then goes down to the minimum, then goes up to $y_1$.
$C_{\gt, \lt}$ when $y_1\gt y_2$ and $y_1\lt y_n$. Here Y-coordinates goes down from $y_1$ to the minimum, then goes up to the maximum, then goes down to $y_1$.
The first two cases are bitonic sequences, which can be treated as before.
Let us check the case of $C_{\lt, \gt}$; the case $C_{\gt, \lt}$ can be handled similarly.
Let $y_M$ be the maximum and $y_m$ be the minimum, where $M<m$. Given an index $1<i<n$, we can compare $i$ to $m$ and $M$ as follows:
If $y[i-1] < y[i] < y[i+1]$ and $y[i] > y_1$, then $i < M$.
If $y[i-1] < y[i] > y[i+1]$, then $i = M$.
If $y[i-1] > y[i] > y[i+1]$, then $M < i < m$.
If $y[i-1] > y[i] < y[i+1]$, then $i = m$.
If $y[i-1] < y[i] < y[i+1]$ and $y[i] < y_1$, then $i > m$.
The above result enables us to cut the search intervals in half each time with 2 or 3 comparisons, thus ensuring an algorithm with $O(\log n)$ time-complexity.
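For the bitonic cases, the binary search sketched above can be written as follows (the function name is mine, not from the answer):

```python
def bitonic_max(a):
    """Return the index of the maximum of a bitonic sequence
    (strictly increasing, then strictly decreasing) in O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < a[mid + 1]:   # still on the rising part
            lo = mid + 1
        else:                     # at the peak or on the falling part
            hi = mid
    return lo

print(bitonic_max([1, 3, 7, 9, 6, 2]))  # → 3
```

The same halving idea, with the 5-way comparison listed above instead of the single comparison here, handles the $C_{\lt,\gt}$ and $C_{\gt,\lt}$ cases.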
Exercise. What about the case when we have a sequence of $n$ distinct numbers, in which there are $t$ local maximums and $t+1$ local minimums? | {
"domain": "cs.stackexchange",
"id": 13569,
"tags": "algorithms, complexity-theory, time-complexity, computational-geometry, binary-search"
} |
Structure of a Pumping Lemma proof: contradiction or counterexample? | Question: This site is full of Pumping Lemma questions, and I do admit I've not read them all. I've tried some proofs myself and they seem to work, but I can't find anywhere what is the (general) exact structure of a proof where you show a language is not regular or context-free?
Wikipedia and most proofs start with "Suppose $L$ is a regular language", which would mean it is a proof by contradiction, because it isn't.
But the lemma has the quantifiers $\exists, \forall, \exists, \forall$ in order. And in the proof you assume a given constant given by the first $\exists$, and then you come up with some word (the first $\forall$), format it in some given substring division (the second $\exists$) and again come up with some $k$ (second $\forall$) for which you show it is not in $L$.
This seems to me like a counterexample proof ("this is not context-free/regular because I can come up with a counterexample which fails the pumping lemma"), and not a proof by contradiction?
Answer:
this is not context-free/regular because I can come up with a counterexample which fails the pumping lemma
You are missing that in order to construct this counterexample you have to assume that $L$ is regular. Then you apply the Pumping lemma, which yields the Pumping length $p$. Only then do you construct your (counter)example string (using $p$!) and show that it contradicts the Pumping lemma. Hence, the assumption must be false.
So while you can say that you derive the contradiction using a counterexample, the outer-most proof is by contradiction.
Without the assumption, you wouldn't have $p$!
Note that proofs can be nested, so you don't need to see only one proof technique. For instance, induction proofs often use a proof by contradiction inside the inductive step. | {
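For concreteness (an illustrative example of mine, not part of the original answer), this is how the nesting plays out for $L = \{a^nb^n \mid n \ge 0\}$: assume $L$ is regular, so the Pumping lemma yields a pumping length $p$. Only now can we pick the witness $w = a^pb^p \in L$. Any split $w = xyz$ with $|xy| \le p$ and $|y| \ge 1$ forces $y = a^k$ for some $k \ge 1$, and pumping down gives $xy^0z = a^{p-k}b^p \notin L$, contradicting the lemma. Hence the assumption was false and $L$ is not regular. Note that $w$ itself was chosen using $p$, which only exists inside the assumption.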
"domain": "cs.stackexchange",
"id": 6853,
"tags": "proof-techniques, pumping-lemma"
} |
Why does potential chemical energy decrease with increasing oxidation of carbon? | Question: I saw an interesting spectrum in my biology class the other day. On the left end was methane - the least oxidized form of carbon - and the most energetic form of carbon (at least relative to what else was on the slide). On the right end was carbon dioxide - the most oxidized form of carbon - and the form with the lowest energy.
Now, I do agree with the slide - we can combust methane but we can't combust carbon dioxide - at least I don't think we can.
However, is there an intuitive explanation of why increasing oxidation state correlates with lower potential energy, at least for carbon?
Does increasing oxidation state correlate with lower potential energy for all elements?
Why does it seem that oxygen forms rather strong bonds with other elements in many cases?
Answer: From a very basic explanation from general chemistry:
Increasing your oxidation state increases the charge. So that means that you move to the left on the periodic table, increasing your atomic radius and decreasing your ionization energy and electronegativity.
If your IE decreases and you become more electronegative, your atom is more attractive for chemical reactions so it has a lower potential energy.
There is a less handwaving explanation from inorganic chemistry, but that is a little harder to understand and I don't want to formulate a MO diagram. | {
"domain": "chemistry.stackexchange",
"id": 8956,
"tags": "redox"
} |
Finding displacement of a body in freefall for an unknown time | Question: A brick is dropped from a building. After some time, the brick falls 40 meters in one second. Find how far the brick travels in the next second.
My intuition for this problem is to use two kinematics equations $$v_y^2 =v_{0y}^2+2a_y(y-y_0) \hspace{.5cm}\text{and}\hspace{.5cm} y-y_0=\frac{v_{0y}+v_y}{2}t$$ where $y-y_0=40$, $a_y=9.8$ and $t=1$. Writing $v_{0y}$ in terms of $v_y$ in the second equation gives me $v_{0y}=80-v_y$. Plugging this into the first equation gives us $v_y^2=(80-v_y)^2+2(9.8)(40)$ where we can solve $v_y$ as $v_y=44.9$. This is the velocity after the 1 second where it falls 40 meters. Finally, plugging this into the position equation $$y-y_0=v_{y}t+\frac{1}{2}a_yt^2 $$ as $v_{0y}$ and solving for the position in the next second gives us $$y-y_0=44.9(1)+\frac{1}{2}(9.8)(1)^2 $$ or $49.8$ meters.
Was my process correct for solving this problem?
Answer: Yes, your approach is correct.
But you must have solved a quadratic equation for this. This might be a little time consuming.
Instead you can use the equation for distance covered in $ n^{th} $ second. It is given by-
$S_n = u_0 + \frac {1}{2}a (2n-1)$
Here the case is even easier as $u_0$ is zero. You will get the $n^{th}$ second in which the displacement is 40 m. Once you get the time you can easily calculate the velocity and thereafter the distance travelled in the next second using the equations of motion.
$v_y = v_{0y} + a_yt$
$y - y_0 = v_{0y}t + \frac{1}{2}a_yt^2$
This will save you a lot of time.
In case you don't know the equation for the distance travelled in the $n^{th}$ second, you can easily derive it from the equations of motion.
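As a quick numerical check of the question's algebra (a sketch of mine, not part of the original solution):

```python
g = 9.8
# From v^2 = (80 - v)^2 + 2*g*40, i.e. the question's two kinematic
# equations combined for a 40 m drop in 1 s:
v = (80**2 + 2 * g * 40) / 160          # speed at the end of that second
next_second = v * 1 + 0.5 * g * 1**2    # distance covered in the next second
print(v, next_second)                   # ≈ 44.9 and ≈ 49.8
```

This reproduces the $v_y = 44.9$ m/s and $49.8$ m values obtained in the question.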
"domain": "physics.stackexchange",
"id": 47027,
"tags": "homework-and-exercises, kinematics, free-fall"
} |
Aggregating standard deviations | Question: Imagine I have a collection of data, let's say the travel time for a road segment.
On this collection I want to calculate the mean and the standard deviation. Nothing hard so far.
Now imagine that instead of having my collection of values for one road segment, I have multiple collections of values that correspond to the multiple sub segments that compose the road segment.
For each of these collections, I know the average and the standard deviation.
From that, I want to aggregate these multiple average and standard deviation in order to get the average and standard deviation for the whole road segment.
For example, let's suppose I have the following dataset :
subSegmentA , subSegmentB , subSegmentC , subSegmentD
values 20 45 25 70
30 55 10 60
10 10 10 80
15 50 30 75
15 40 15 75
20 40 20 80
30 45 20 65
10 40 25 70
average 18.75 40.625 19.375 71.875
stddev 7.90569415 13.47948176 7.288689869 7.039429766
expected_global_average : 150.625
expected_global_stddev : 18.40758772
For the average there is no problem, a simple sum does the job, but I have trouble with the global_stddev.
I tried multiple solutions from here, without success.
Edit :
After further research, it seems mathematically impossible to calculate the standard deviation of a set based only on the standard deviation and average of its subsets.
So I am trying to calculate a new metric that would approximate this global standard deviation.
To do so, I can use, in addition to the avg/stddev per subsegment, the length ratio of each subsegment to the road.
Answer: It is not perfect but you can try to re-create synthetic data based on the mean and sd. In R, you can use the rnorm function to create a normal distribution from mean and sd. The following is one of the ways to do it. Hope it helps! P.S. I just chose n = 1000 to illustrate how it can be done; you can try using different numbers.
a <- c(20,30,10,15,15,20,30,10)
b <- c(45,55,10,50,40,40,45,40)
c <- c(25,10,10,30,15,20,20,25)
d <- c(70,60,80,75,75,80,65,70)
n <- 1000
e <- c(rnorm(n, mean(a), sd(a)) +
rnorm(n, mean(b), sd(b)) +
rnorm(n, mean(c), sd(c)) +
rnorm(n, mean(d), sd(d)))
mean(e)
sd(e)
Try it online | {
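For comparison (an addition of mine, not part of the original answer): the synthetic-data approach implicitly treats the sub-segments as independent, and under independence variances simply add. Checking that against the question's own data shows how close the approximation gets:

```python
import math
import statistics

a = [20, 30, 10, 15, 15, 20, 30, 10]
b = [45, 55, 10, 50, 40, 40, 45, 40]
c = [25, 10, 10, 30, 15, 20, 20, 25]
d = [70, 60, 80, 75, 75, 80, 65, 70]

# Exact global sd, computed from the per-trip sums (what the question wants).
totals = [sum(row) for row in zip(a, b, c, d)]
exact_sd = statistics.stdev(totals)

# Independence assumption: Var(sum) = sum of the sub-segment variances.
approx_sd = math.sqrt(sum(statistics.variance(s) for s in (a, b, c, d)))
print(exact_sd, approx_sd)   # ≈ 18.41 vs ≈ 18.62
```

The gap between the two numbers is exactly the cross-segment correlation that makes the exact value unrecoverable from per-segment summaries alone.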
"domain": "datascience.stackexchange",
"id": 11312,
"tags": "aggregation"
} |
If the set of Turing machines is countably infinite, how can a Turing machine always have a finite set of states? | Question: I have only begun studying this subject and have only completed the first few chapters of the Elements of the Theory of Computation.
I have seen the answers (on this site and elsewhere) saying that the set of all Turing machines is countably infinite. Intuitively, this makes sense to me, as I can imagine the equinumerosity of this set with the set of the natural numbers.
I have also seen the answers saying that a Turing machine cannot have infinite states by definition (the Elements provides one such definition). Moreover, an infinite state machine would be so powerful that it would not merit study, these answers say.
My question then is, how is it possible that a Turing machine must always have a finite set of states? If this were the case, could one not ask for the maximum number of states a Turing machine can have? And would one not expect this to be some fixed number?
I feel there are points about this theory that I am missing.
Answer: You are mentally reversing two logical quantifiers, that are implicit in the statement "Turing machines must always have a finite set of states".
You read this sentence as "there exists a finite set $S$ of states such that all Turing machines $M$ use only states in $S$". Logically, this is a "$\exists S \forall M$" property.
Instead, the sentence is meant to be read as "for each Turing machine $M$, there exists a finite set $S$ of states such that $M$ uses only states in $S$". Logically, this is a "$\forall M\exists S$" property.
This is a matter of natural language sometimes being too easy to misread.
"domain": "cs.stackexchange",
"id": 21917,
"tags": "turing-machines, computability, infinity"
} |
How PIN (Positive Intrinsic Negative) junctions work? | Question: Studying the operation of the photodetectors I came across the PIN (Positive Intrinsic Negative) junctions. Honestly, I have not found very clear material on the Internet.
Exactly how do these PIN (Positive Intrinsic Negative) junctions work?
Answer: The idea is that an "intrinsic" layer of semiconductor, neither P nor N, will contain no free charge carriers in equilibrium. Photoelectric detection works best in depleted material with no free charge carriers around. If there are free charge carriers, they interfere with the collection of the photoelectric charge.
In reality, it is generally impossible to make perfectly intrinsic material, so a PIN diode is merely a diode with a very lightly doped layer between cathode and anode. It may still require a substantial bias to deplete that layer. For example, the Hamamatsu S2744 is intended for operation with 70V of bias. | {
"domain": "physics.stackexchange",
"id": 87769,
"tags": "semiconductor-physics"
} |
Is there a simple way to get the circular dichroism of a molecule from its structure? | Question: Are there any heuristics to get the relative absorbtion of left and right circularly polarized light by a molecule from its molecular structure? Is it even possible to predict which polarization is selectively absorbed?
Answer:
Is there a simple way to get the circular dichroism of a molecule from
it's structure?
Short Answer: Yes, for many molecules if you know the molecule's structure, then you can predict the shape of the optical rotary dispersion (ORD) or circular dichroism (CD) curve. Conversely, and perhaps more importantly, if you know the shape of the ORD or CD curve, then you can predict the molecule's structure, in particular, the absolute configuration at chiral centers in the molecule. There are many empirical techniques, such as the "Octant" rule. These methods are based on numerous experimental observations that have been codified into sets of rules for different classes of compounds. These empirical rules can be used to determine the absolute configuration of cyclic ketones, conjugated dienes, alpha, beta-unsaturated ketones, twisted biphenyls as well as other systems.
Long Answer: When polarized light is passed through a chiral sample and the light is absorbed, there are two physical phenomena that can be measured. One is called the circular birefringence and it measures the difference in the refractive indices ($\ce{n_{R}, n_{L}}$) of the sample when a beam of right and left circularly polarized light is passed through the sample. When this difference is plotted against wavelength an ORD curve results. The other parameter is called circular dichroism and it measures the difference in the molar extinction coefficients ($\ce{\epsilon_{R}, \epsilon_{L}}$) when right and left circularly polarized light passes through the sample. When this difference is plotted against wavelength a CD curve results. The dependence of these two phenomena on wavelength is termed the Cotton effect.
The following figure shows the uv-vis absorption (blue line), ORD curve (green line) and CD curve (red line) for the two enantiomers of camphor sulfonic acid. Note how the ORD and CD curves for one enantiomer are mirror images of the same curves for the other enantiomer. It is the carbonyl chromophore's $\ce{n->\pi^{\ast}}$ absorption that is responsible for the curves seen in this image. In ORD spectra there are typically 2 extrema, one positive and one negative. The positive extremum is a "peak", while the negative extremum is termed a "trough". If, when moving from longer to shorter wavelength, a peak is encountered first, then the ORD curve is said to be positive (+). Conversely, if a trough is encountered first, then the ORD curve is negative (-).
image source
Chromophores can be divided into two groups, symmetric or inherently dissymmetric. A carbonyl, nitro and olefin are examples of groups that, by themselves, are symmetric. They may be perturbed by asymmetry on other atoms around them, but alone, by themselves, they possess no inherent asymmetry. On the other hand, groups such as helicenes, twisted biphenyls and unsaturated ketones possess a twist or handedness; such molecules are inherently dissymmetric. The intensity of an ORD or CD curve from a molecule containing an inherently dissymmetric chromophore is usually much greater than the peak intensity from a molecule containing a symmetric chromophore.
With that information as background, we can now apply these concepts to the examination of real molecules. One of the most studied classes of molecules where these concepts have been applied are the substituted cyclohexanones. In the early 60's Djerassi, Moffitt, Woodward, Moscowitz and Klyne (some of the greats!) developed an empirical set of rules to predict the absolute configuration of chiral centers in substituted cyclohexanones. The rule became known as the "Octant" rule and came about because of the work they were doing with complex natural products that often contained a substituted cyclohexanone ring. They would run a reaction on a pure enantiomer of a cyclohexanone containing natural product and wanted a method to quickly determine if a chiral carbon in the molecule retained its absolute configuration or if the configuration was inverted in the reaction. After comparing ORD curves from countless cyclohexanone containing natural products, they were able to compile their observations into the Octant rule.
As its name implies, the Octant rule divides the region around a cyclohexanone carbonyl into 8 regions or octants. As the picture below (left part of figure) illustrates, the octants are generated by bisecting the carbonyl group with 3 planes. Two of the planes contain the carbonyl group. One divides the carbonyl into top and bottom regions and the other divides the carbonyl into right and left regions. The third plane actually passes through the middle of the carbonyl and is perpendicular to the 2 other planes. This third plane divides the molecule into front and back regions. There are a few molecules that do have substituents in the front region, but most molecules do not, so we can usually simplify our analysis by only considering the 4 rear octants. Each octant is also assigned a plus or minus symbol telling us whether we would expect a substituent in the octant to contribute in a positive or negative manner to the ORD or CD curves.
image source
The figure on the right shows how a cyclohexanone would be placed in the octant diagram and how its projection appears as a rectangle when sighted down the carbonyl group.
Let's examine the case of 3-methyl cyclohexanone. First let’s draw the possible conformers of both S- and R-3-methylcyclohexanone
Now let’s take the 2 possible conformations for R-3-methylcyclohexanone (axial and equatorial methyl group) and project them onto the octant diagram. We see that the equatorial conformation has a positive contribution to the Cotton effect, while the axial conformer has a negative contribution. At equilibrium, the molecule exists predominantly in the conformation with the methyl group equatorial, therefore we expect the positive Cotton effect from this conformer to outweigh the negative contribution to the Cotton effect from the axial conformer. Therefore, we expect a positive ORD and CD curve for R-3-methylcyclohexanone. Similar analysis leads to the expectation of a negative ORD and CD curve for the S enantiomer. Indeed, experimentally it is found that the ORD curve for R-3-methylcyclohexanone is positive and negative for S-3-methylcyclohexanone. The full name for the R enantiomer is R-(+)-3-methylcyclohexanone
image source
More detail on the 3-cyclohexanone example, as well a nice, concise review of ORD can be found here.
Books have been written on this subject so there is much that I have left out. For those who would like to explore further here are a few phrases that, in addition to the previous link, can help you get started: Cotton effect, ORD biological applications, and magnetic circular dichroism | {
"domain": "chemistry.stackexchange",
"id": 2340,
"tags": "physical-chemistry, chirality"
} |
Ideal gas undergoing linear process on $V$-$T$ graph must be isobaric? | Question:
One mole of an ideal gas in initial state A undergoes a cyclic process ABCA, as shown in Fig. Its pressure at A is $P_0$. Choose the correct option(s) from the following.
(A) Internal energies at A and B are the same
(B) Work done by the gas in process AB is $P_0 V_0 ln 4 $
(C) Pressure at C is $P_0/4 $
(D) Temperature at C is $T_0/4$
Question source: JEE advanced 2010
I have doubt only for option (C) and (D). You may attempt question with neutral mind before opening the spoilers.
Official answer: (A),(B),(C),(D)
My answer: (A),(B)
Method: There is no information about the temperature or pressure at point C. From the diagram it is revealed that process BC is linear, but it may or may not be isobaric. So, we cannot conclude or comment anything about options C and D. Hence, they should not be ticked in the exam. Is there any condition that process BC must be isobaric?
In general: If the gas follows a line in V-T space, must the process be isobaric?
Answer: There is a difference between proportionality and a more general straight line. I get the equation for the line as $$V=V_0\left[1+3\frac{(T-T_C)}{(T_0-T_C)}\right]$$So, $$P=\frac{P_0(T/T_0)}{\left[1+3\frac{(T-T_C)}{(T_0-T_C)}\right]}$$The only way that P is the same at the two end points B and C is if $T_C=T_0/4$. Mathematically, this will also guarantee that P is constant at all other points along the line. | {
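A quick numerical check of this claim (an illustrative sketch of mine in arbitrary units, not part of the original solution): along the line $V=V_0\left[1+3\frac{(T-T_C)}{(T_0-T_C)}\right]$, the ideal-gas pressure is constant only when $T_C=T_0/4$.

```python
T0, V0, P0 = 300.0, 1.0, 1.0   # arbitrary units; P0 = n*R*T0/V0

def pressures_along_line(Tc, samples=5):
    """Ideal-gas pressure at evenly spaced points on the straight V-T segment."""
    out = []
    for i in range(samples):
        T = Tc + (T0 - Tc) * i / (samples - 1)
        V = V0 * (1 + 3 * (T - Tc) / (T0 - Tc))
        out.append(P0 * (T / T0) * (V0 / V))   # P = nRT/V
    return out

good = pressures_along_line(T0 / 4)   # constant at P0/4 everywhere
bad = pressures_along_line(T0 / 3)    # varies along the segment
print(good, bad)
```

With $T_C = T_0/4$ every sampled pressure equals $P_0/4$, matching options (C) and (D); any other endpoint temperature makes the pressure vary along the line, so BC cannot be isobaric.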
"domain": "physics.stackexchange",
"id": 81819,
"tags": "thermodynamics, ideal-gas, gas"
} |
Symmetry in quantum mechanics | Question: My professor told us that in quantum mechanics a transformation is a symmetry transformation if $$ UH(\psi) = HU(\psi) $$
Can you give me an easy explanation for this definition?
Answer: In a context like this, a symmetry is a transformation that converts solutions of the equation(s) of motion to other solutions of the equation(s) of motion.
In this case, the equation of motion is the Schrödinger equation
$$
i\hbar\frac{d}{dt}\psi=H\psi.
\tag{1}
$$
We can multiply both sides of equation (1) by $U$ to get
$$
Ui\hbar\frac{d}{dt}\psi=UH\psi.
\tag{2}
$$
If $UH=HU$ and $U$ is independent of time, then equation (2) may be rewritten as
$$
i\hbar\frac{d}{dt}U\psi=HU\psi.
\tag{3}
$$
which says that if $\psi$ solves equation (1), then so does $U\psi$, so $U$ is a symmetry.
For a more general definition of symmetry in QM, see
Symmetry transformations on a quantum system; Definitions | {
"domain": "physics.stackexchange",
"id": 57324,
"tags": "quantum-mechanics, operators, symmetry, hamiltonian, commutator"
} |
Memory efficient structure for membership checking without false positive | Question: The initial task can be described like this:
I have a requirement to deduplicate a HUGE list (potentially billions of items) without storing the original items - it's simply unaffordable
All I need to know is answer to the question "Has my system ever seen this element before?"
The closest data structure I was able to find so far is a bloom filter, but it has false positives, which are better avoided in my task as they result in data loss
For example, providing I account to store at least 2^32 items, with a false-positive rate of just 1% (which means 1% of all urls won't be visited) I would need at least
n = 4,294,967,296, p = 0.01 (1 in 100) → m = 41,167,512,262 (4.79GB), k = 7
4.79GB of memory...
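The figures quoted above follow from the standard Bloom-filter sizing formulas; a quick sketch of mine (not part of the original question) that reproduces them:

```python
import math

def bloom_parameters(n, p):
    """Optimal Bloom filter size m (bits) and hash count k
    for n items at false-positive rate p."""
    m = math.ceil(-n * math.log(p) / math.log(2) ** 2)
    k = round(m / n * math.log(2))
    return m, k

m, k = bloom_parameters(2**32, 0.01)
print(m, m / 8 / 2**30, k)   # ≈ 41.2e9 bits, ≈ 4.79 GiB, k = 7
```

This matches the m ≈ 41,167,512,262 bits (4.79 GiB) and k = 7 stated above.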
The task itself is a high scale web crawler, so I need to keep track of already visited urls (or sha1 hashes of these urls)
Any help is welcome
Thanks!
Answer: For web crawler scale why not use a distributed database like Apache Cassandra? Lookups on indexes are efficient and no false positives. | {
"domain": "datascience.stackexchange",
"id": 496,
"tags": "bigdata, scalability"
} |
Implementation of binary tree traversal | Question: Based on what i understood of Binary Tree, queue and recursion I implemented this Breadth First Search algorithm as follows.
Do you have any suggestions to improve (in term of readability, good practice etc...) it?
# Definition for a binary tree node.
class Node(object):
def __init__(self, val=0, left=None, right=None):
self.val = val
self.left = left
self.right = right
def preorder_print(self):
print(self.val)
if self.left:
self.left.preorder_print()
if self.right:
self.right.preorder_print()
return
def postorder_print(self):
if self.left:
self.left.postorder_print()
if self.right:
self.right.postorder_print()
print(self.val)
return
def inorder_print(self):
if self.left:
self.left.inorder_print()
print(self.val)
if self.right:
self.right.inorder_print()
return
def bread_traversal_print(self):
q=Queue()
q.push(self)
def bt():
#import pdb;pdb.set_trace
if not q:
return
node=q.pop()
if node:
if node.left:
q.push(node.left)
if node.right:
q.push(node.right)
print(node.val)
bt()
bt()
class Queue():
def __init__(self):
self.data = []
def push(self,elt):
self.data.append(elt)
def pop(self):
if self.data:
return self.data.pop(0)
def peek(self):
if self.data:
return self.data[0]
if __name__ == "__main__":
#input = [l for l in 'FDJBEGKACIH']
root = Node(1)
root.left = Node(2)
root.right = Node(3)
root.left.left = Node(4)
root.left.right = Node(5)
root.right.left = Node(6)
root.right.right = Node(7)
print("preorder:")
root.preorder_print()
print("postorder:")
root.postorder_print()
print("inorder:")
root.inorder_print()
print('bread :')
root.bread_traversal_print()
""" 1
2 3
4 5 6 7
[1 2 4 5 3 6 7]
"""
EDIT: Yes BFS for Breadth First Search would be more appropriate ;)
Answer: Class definition
There is no need to explicitly inherit from object; that is a Python 2.x anachronism.
Instead of class Node(object):, simply write class Node:.
Similarly, class Queue(): can simply be class Queue:.
Default Parameters
Every Node should have a value; there is little reason to provide a default of 0 for the val parameter. By forcing the caller to provide the value, you are less likely to accidentally have a node with the default integer 0 value when all the other nodes are created with (say) strings.
Unnecessary returns
There is no need for the return statements in preorder_print, postorder_print, and inorder_print.
Visitor pattern
You have defined several methods for traversing the tree. All of them print the values. If you wanted to do anything else with the values, you're stuck writing another (set of 4?) function.
Instead, you should separate the traversal from the operation. Visit each node, in some order, and do something at that node.
class Node:
...
def preorder(self, visitor):
visitor(self.val)
if self.left:
self.left.preorder(visitor)
if self.right:
self.right.preorder(visitor)
...
# Do a pre-order traversal the tree, starting at the root, printing each values.
root.preorder(print)
Breadth First Traversal
First, the function should probably be called breadth_traversal_print, not bread_traversal_print.
Second, there is no need to use recursion during a breadth first traversal. You have a Queue which keeps track of the nodes you need to visit. You just need to loop while the queue is not empty. The recursion is doing a loop in a resource-intensive fashion. If Python had tail-call-optimization, the inefficiency could be removed automatically; but Python doesn't, so looping this way is wrong.
def breadth_traversal_print(self):
q = Queue()
q.push(self)
while q.data:  # the custom Queue defines no __bool__, so test its list directly
node = q.pop()
if node:
if node.left:
q.push(node.left)
if node.right:
q.push(node.right)
print(node.val)
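One further point, not raised in the original answer: list.pop(0) is O(n) because it shifts every remaining element, so the custom Queue makes large traversals quadratic in the worst case. The standard library's collections.deque pops from the front in O(1); a sketch of the same traversal as a standalone function (the names are mine):

```python
from collections import deque

def breadth_first_values(root):
    """Values of a binary tree in breadth-first order.
    deque.popleft() is O(1), unlike list.pop(0) which is O(n)."""
    out, q = [], deque([root])
    while q:                  # deque supports truth-testing directly
        node = q.popleft()
        out.append(node.val)
        if node.left:
            q.append(node.left)
        if node.right:
            q.append(node.right)
    return out
```

On the example tree from the question this yields [1, 2, 3, 4, 5, 6, 7].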
PEP-8
The PEP 8 -- Style Guide for Python enumerates many style guidelines all programs should follow.
White Space
There should be a space around binary operators, like =. You violate this with q=Queue(). Note that = used with keyword arguments is not an operator, so it shouldn't be surrounded by white space; therefore def __init__(self, val=0, left=None, right=None): is correct.
There should be a space after commas. You violate this in def push(self,elt):
No-op code
This string ...
""" 1
2 3
4 5 6 7
[1 2 4 5 3 6 7]
"""
... is a statement which does nothing, and has no side-effect. Perhaps you meant this to be a comment?
NOTE: This is different from """docstrings""". A string statement which immediately follows a class or def statement is treated as a docstring, removed from the program code, and stored in the __doc__ member of the class or function by the Python interpreter. There, it can be extracted by help(...) and other documentation generation programs. | {
"domain": "codereview.stackexchange",
"id": 40579,
"tags": "python, python-3.x, tree, binary-tree"
} |
Are there human cells, apart from red blood cells and platelets, without a nucleus? | Question: I know that blood platelets and erythrocytes do not have a nucleus. Are there more cells in the human body without a nucleus, such as pancreas, cartilage, or lung cells?
Answer: Short answer
As far as I know, red blood cells and blood platelets are the only human cells in our body without a nucleus.
Background
Erythrocytes and thrombocytes are the only human cells without a nucleus, as far as I know. However, if you count the gut as being part of the human body (in essence it is a continuation of the skin and as such it can be considered to be on our outside), then we are loaded with cells lacking a nucleus, namely all the bacteria that live in our intestines such as E. coli. Bacteria, being prokaryotes, lack a nucleus. In fact, there are ten times more bacteria than human cells in our gut (Wenner, 2007).
Reference
Wenner, Sci Am 2007 | {
"domain": "biology.stackexchange",
"id": 3834,
"tags": "cell-biology"
} |
How to find the curvature of space-time given $g^{\alpha \beta}$ in the following case, without cumbersome calculations? | Question: The metric tensor for Fock-Lorentz space-time,
$$
\mathbf r_{||}{'} = \frac{\gamma (u)(\mathbf r_{||} - \mathbf u t)}{\lambda \gamma (u) (\mathbf u \cdot \mathbf r) + \lambda c^{2} (1 - \gamma (u))t + 1},
$$
$$\mathbf r_{\perp}{'} = \frac{\mathbf r_{\perp}}{\lambda \gamma (u) (\mathbf u \cdot \mathbf r) + \lambda c^{2} (1 - \gamma (u))t + 1},
$$
$$
t' = \frac{\gamma (u)(t - \frac{(\mathbf u \cdot \mathbf r)}{c^{2}})}{\lambda \gamma (u) (\mathbf u \cdot \mathbf r ) + \lambda c^{2} (1 - \gamma (u))t + 1},
$$
is given by
$$
g^{\alpha \beta} = \begin{bmatrix} \frac{1}{(1 + c^{2}\lambda t)^{4}} & \frac{c\lambda x}{(1 + c^{2}\lambda t)^{3}} & \frac{c\lambda y}{(1 + c^{2}\lambda t)^{3}} & \frac{c\lambda z}{(1 + c^{2}\lambda t)^{3}} \\ \frac{c\lambda x}{(1 + c^{2}\lambda t)^{3}} & -\frac{1}{(1 + c^{2}\lambda t)^{2}} & 0 & 0 \\ \frac{c\lambda y}{(1 + c^{2}\lambda t)^{3}} & 0 & -\frac{1}{(1 + c^{2}\lambda t)^{2}} & 0 \\ \frac{c\lambda z}{(1 + c^{2}\lambda t)^{3}} & 0 & 0 & -\frac{1}{(1 + c^{2}\lambda t)^{2}} \end{bmatrix},
$$
where $c, \lambda $ are constant.
Is there a slick way to find the Ricci scalar without cumbersome calculations with Christoffel symbols (if at all possible)?
Answer: Cleaning up the notation a bit by rescaling coordinates to get rid of $c$ and $\lambda$ and pulling out a common factor gives:
$$g^{\alpha \beta} = \frac{1}{(1 + t)^{2}}
\begin{bmatrix} \frac{1}{(1 + t)^{2}} & \frac{x}{1 + t} & \frac{y}{1 + t} & \frac{z}{1 + t} \\
\frac{x}{1 + t} & -1 & 0 & 0 \\
\frac{y}{1 + t} & 0 & -1 & 0 \\
\frac{z}{1 + t} & 0 & 0 & -1
\end{bmatrix}.
$$
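As a quick numerical sanity check of this factorization (setting $c=\lambda=1$ and picking an arbitrary sample point), the original inverse metric is entry by entry the common factor times the bracketed matrix:

```python
import numpy as np

t, x, y, z = 0.7, 1.3, -0.4, 2.1  # arbitrary sample point
s = 1.0 + t

# Original inverse metric with c = lambda = 1
g_orig = np.array([
    [1/s**4,  x/s**3,  y/s**3,  z/s**3],
    [x/s**3, -1/s**2,  0.0,     0.0   ],
    [y/s**3,  0.0,    -1/s**2,  0.0   ],
    [z/s**3,  0.0,     0.0,    -1/s**2],
])

# Common factor 1/(1+t)^2 pulled out front times the bracketed matrix
bracket = np.array([
    [1/s**2, x/s,  y/s,  z/s ],
    [x/s,   -1.0,  0.0,  0.0 ],
    [y/s,    0.0, -1.0,  0.0 ],
    [z/s,    0.0,  0.0, -1.0 ],
])
assert np.allclose(g_orig, bracket / s**2)
```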
You can use the properties of the Ricci scalar under conformal transformations (google them) to forget about the overall factor by performing a conformal (Weyl) rescaling. You can change time coordinates $t\to\tau$ by integrating
$$ \mathrm{d}\tau = (1+t) \mathrm{d}t,\ \partial_\tau = \frac{1}{1+t} \partial_t.$$
This removes $t$ completely from the conformally rescaled metric:
$$\tilde{g}^{\alpha \beta} =
\begin{bmatrix} 1 & x & y & z \\
x & -1 & 0 & 0 \\
y & 0 & -1 & 0 \\
z & 0 & 0 & -1
\end{bmatrix}.
$$
Then going to spherical coordinates (scroll to the bottom of the page for the relevant formulae) simplifies the off diagonal part (check this, I haven't been careful!):
$$\tilde{g}^{\alpha \beta} =
\begin{bmatrix} 1 & r & 0 & 0 \\
r & -1 & 0 & 0 \\
0 & 0 & -\frac{1}{r^2} & 0 \\
0 & 0 & 0 & -\frac{1}{r^2 \sin^2 \theta}
\end{bmatrix}.
$$
You can do some more coordinate transformation mixing $r$ and $\tau$ to diagonalise the metric if you want but I'm getting tired of this. It is straightforward now to compute the Ricci scalar for this metric (and significantly simpler than the original form of the metric). You can probably look up formulae for the curvature tensors of metrics in this form. | {
"domain": "physics.stackexchange",
"id": 7077,
"tags": "general-relativity, spacetime, metric-tensor"
} |
units and nature | Question: I am wondering whether the five$^1$ units of the natural unit system are really dictated by nature, or invented to satisfy the limited mind of man.
Is the number of linearly independent units a property of nature, or can we use any number we like? If it truly is a property of nature, what is the number? Five? Can we prove that there are not more?
In the answers please do not consider cgs units, as extensions are really needed to cover all phenomena; or atomic units, where units only disappear out of convenience; or SI units, where e.g. the mole for amount of substance is just as arbitrary as, say, some invented unit to measure an amount of money.
$^1$length, time, mass, electric charge, and temperature (or/and other linearly independent units spanning the same space).
Answer: Even seasoned professionals disagree on this one. Trialogue on the number of fundamental constants by M. J. Duff, L. B. Okun, G. Veneziano, 2002:
This paper consists of three separate articles on the number of
fundamental dimensionful constants in physics. We started our debate
in summer 1992 on the terrace of the famous CERN cafeteria. In the
summer of 2001 we returned to the subject to find that our views still
diverged and decided to explain our current positions. LBO develops
the traditional approach with three constants, GV argues in favor of
at most two (within superstring theory), while MJD advocates zero.
Okun's thesis is that 3 units (e.g. $c$, $\hbar$, $G$) are necessary for measurements to be meaningful. This is in part a semantic argument.
Veneziano says that 2 units are necessary: action $\hbar$ and some mass $m_{fund}$ in QFT+GR; or a length $\lambda_s$ and time $c$ in string theory; and no more than 2 in M-theory although he's not sure.
Finally, Duff says there is no need for units at all, all quantities are fundamentally subject to some symmetry, and units are merely conventions for measurement.
This is a very fun paper and answers your question thoroughly. | {
"domain": "physics.stackexchange",
"id": 2363,
"tags": "units, dimensional-analysis"
} |
Is there an equivalent to wetness for air? | Question: I was wondering if there was something equivalent to the property of being wet with water, but with air instead. For example, if I drop water on my shirt, I'll notice by its appearance and feel that it is wet, so in a sense its properties were changed by being exposed to water.
So I'm wondering if similarly, by being exposed to ambient air, my shirt is somehow being changed, i.e. if it was in a vacuum would it feel or appear different than when it's exposed to air?
Answer: Well - air does have "relative humidity" and this really affects the things that interact with it. For example - you will have a hard time cooling down by sweating when the relative humidity is very high, as the rate of evaporation that you can achieve (and therefore heat rejection) becomes quite low: this is why you end up "sweaty" on a hot muggy day.
Similarly, materials like nylon are very hygroscopic: when they are in humid air, they will absorb some of that moisture. This will affect their mechanical properties. An extreme example of this can be seen with sugar and salt - they will "melt" when exposed to humid air for a while.
If we are talking about "pure air", then we still see this effect. There are chemical reactions (mostly with the oxygen in the air) that can result in changes. For example, various metals will oxidize when left unprotected in the air: you could say they became "wet". Exposing them to a reducing environment (e.g. hot hydrogen gas) could "dry" them again, as the oxygen is reduced and the surface once again becomes metallic.
I hope that I didn't completely misunderstand your question... | {
"domain": "physics.stackexchange",
"id": 15749,
"tags": "water, vacuum, air"
} |
Balance factor changes after local rotations in AVL tree | Question: I am trying to understand how balance factors change after local rotations in AVL trees.
Given the rotate_left operation:
x y'
/ \ / \
a y => x' c
/ \ / \
b c a b
and $b(x)$, $b(y)$ - balance factors for $x$ and $y$ nodes - I want to find $b(x')$ and $b(y')$.
In my reasoning I will use the Iverson bracket notation, that denotes a number that is 1 if the condition in square brackets is satisfied, and 0 otherwise:
$$
[P]=\begin{cases} 1, \text{ if } P \text{ is true}; \\
0, \text{ otherwise}.\end{cases} $$
Balance factor for the node $x'$ can be calculated like this:
$$b(x') = h(b) - h(a)$$
where $h(b)$ and $h(a)$ - the heights of sub-trees $a$ and $b$.
Let's substitute $h(b) = h(y) - b(y)[b(y) > 0] - 1$ and $h(a) = h(x) - b(x)[b(x) > 0] - 1$:
$$b(x') = (h(y) - b(y)[b(y) > 0] - 1) - (h(x) - b(x)[b(x) > 0] - 1)$$
Some simplification:
$$b(x') = h(y) - b(y)[b(y) > 0] - h(x) + b(x)[b(x) > 0]$$
Now substitute $h(y) = h(x) + b(x)[b(x) \le 0] - 1 $:
$$b(x') = h(x) + b(x)[b(x) \le 0] - 1 - b(y)[b(y) > 0] - h(x) + b(x)[b(x) > 0]$$
Obviously, $[b(x) \le 0] + [b(x) > 0] = 1$:
$$b(x') = h(x) + b(x) - 1 - b(y)[b(y) > 0] - h(x)$$
Simplify again:
$$b(x') = b(x) - b(y)[b(y) > 0] - 1$$
In the same way I can find balance factor for $y'$. Skipping intermediate steps I get:
$$ b(y') = h(c) - h(x') =\\
...\\
= b(x) + b(y)[b(y) \le 0] - b(x')[b(x') > 0] - 2$$
Somehow I have feeling that this is not the simplest formula for balance factors.
Is there any simpler approach to calculate balance factors, which would always work even if the tree becomes unbalanced?
EDIT:
The simplest formulas I managed to get look like this (see my own answer for details):
$$b(y') = b(y) + b(x')[b(x') \le 0] - 1$$
$$b(x') = b(x) - b(y)[b(y) > 0] - 1$$
Answer: EDIT:
@Maxym's answer is correct after all and is actually equivalent. I had simply misinterpreted the notation. Leaving this answer anyway as the cited link provides a useful explanation.
While @Maxym's answer is on the right track, his formula didn't quite work for me. After beating my head against the wall for quite some time, I found this link written by Brad Appleton that seems to be the right formula (at least my unit tests all work now and my tree stays coherent): http://oopweb.com/Algorithms/Documents/AvlTrees/Volume/AvlTrees.htm
Using Maxym's notation, it would be something like this for the left rotation (also I reversed the order -- since the formula for the new y depends on the new x, it makes sense to me to list x first):
$$b(x') = b(x) - 1 - \max(b(y), 0)$$
$$b(y') = b(y) - 1 + \min(b(x'), 0)$$
And for the right rotation:
$$b(x') = b(x) + 1 - \min(b(y), 0)$$
$$b(y') = b(y) + 1 + \max(b(x'), 0)$$
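The left-rotation update rules can be checked by brute force against actual heights. In the sketch below, ha, hb, hc are the heights of the subtrees a, b, c from the diagram (with -1 denoting an empty subtree); the function name is mine:

```python
def left_rotation_balances(ha, hb, hc):
    """Balances of A and B before/after a single left rotation,
    computed directly from subtree heights (balance = right - left)."""
    old_a = (1 + max(hb, hc)) - ha   # before: A's right child is B
    old_b = hc - hb
    new_a = hb - ha                  # after: A has children a, b
    new_b = hc - (1 + max(ha, hb))   # after: B's left child is A
    return old_a, old_b, new_a, new_b

# Verify the closed-form update rules over a grid of subtree heights.
for ha in range(-1, 5):
    for hb in range(-1, 5):
        for hc in range(-1, 5):
            old_a, old_b, new_a, new_b = left_rotation_balances(ha, hb, hc)
            assert new_a == old_a - 1 - max(old_b, 0)
            assert new_b == old_b - 1 + min(new_a, 0)
```

The right-rotation rules follow by the same mirror-image argument.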
In case that page goes away, I'm including the relevant portion:
Calculating New Balances After a Rotation
To calculate the new balances after a single left rotation; assume we
have the following case:
A B
/ \ / \
/ \ / \
a B ==> A c
/ \ / \
/ \ / \
b c a b
The left is what the tree looked like BEFORE the rotation and the
right is what the tree looks like after the rotation. Capital letters
are used to denote single nodes and lowercase letters are used to
denote subtrees.
The "balance" of a tree is the height of its right subtree less the
height of its left subtree. Therefore, we can calculate the new
balances of "A" and "B" as follows (ht is the height function):
NewBal(A) = ht(b) - ht(a)
OldBal(A) = ht(B) - ht(a) = ( 1 + max (ht(b), ht(c)) ) - ht(a)
subtracting the second equation from the first yields:
NewBal(A) - OldBal(A) = ht(b) - ( 1 + max (ht(b), ht(c)) )
+ ht(a) - ht(a)
canceling out the ht(a) terms and adding OldBal(A) to both sides
yields:
NewBal(A) = OldBal(A) - 1 - (max (ht(b), ht(c)) - ht(b) )
Noting that max(x, y) - z = max(x-z, y-z), we get:
NewBal(A) = OldBal(A) - 1 - (max (ht(b) - ht(b), ht(c) - ht(b)) )
But ht(c) - ht(b) is OldBal(B) so we get:
NewBal(A) = OldBal(A) - 1 - (max (0, OldBal(B)) )
= OldBal(A) - 1 - max (0, OldBal(B))
Thus, for A, we get the equation:
NewBal(A) = OldBal(A) - 1 - max (0, OldBal(B))
To calculate the Balance for B we perform a similar computation:
NewBal(B) = ht(c) - ht(A)
= ht(c) - (1 + max(ht(a), ht(b)) )
OldBal(B) = ht(c) - ht(b)
subtracting the second equation from the first yields:
NewBal(B) - OldBal(B) = ht(c) - ht(c)
+ ht(b) - (1 + max(ht(a), ht(b)) )
canceling, and adding OldBal(B) to both sides gives:
NewBal(B) = OldBal(B) - 1 - (max(ht(a), ht(b)) - ht(b))
= OldBal(B) - 1 - (max(ht(a) - ht(b), ht(b) - ht(b))
But ht(a) - ht(b) is - (ht(b) - ht(a)) = -NewBal(A), so ...
NewBal(B) = OldBal(B) - 1 - max( -NewBal(A), 0)
Using the fact that min(x,y) = -max(-x, -y) we get:
NewBal(B) = OldBal(B) - 1 + min( NewBal(A), 0)
So, for a single left rotation we have shown the the new balances for
the nodes A and B are given by the following equations:
NewBal(A) = OldBal(A) - 1 - max(OldBal(B), 0)
NewBal(B) = OldBal(B) - 1 + min(NewBal(A), 0)
Now let us look at the case of a single right rotation. The case we
will use is the same one we used for the single left rotation only
with all the left and right subtrees switched around so that we have
the mirror image of the case we used for our left rotation.
A B
/ \ / \
/ \ / \
B a ==> c A
/ \ / \
/ \ / \
c b b a
If we perform the same calculations that we made for the left
rotation, we will see that the new balances for a single right
rotation are given by the following equations:
NewBal(A) = OldBal(A) + 1 - min(OldBal(B), 0)
NewBal(B) = OldBal(B) + 1 + max(NewBal(A), 0) | {
"domain": "cs.stackexchange",
"id": 7962,
"tags": "algorithm-analysis, data-structures, search-trees, balanced-search-trees"
} |
What does the phrase "reducing atmosphere" mean in quantitative terms? | Question: There is a debate about "how reducing" the atmosphere of the early Earth was, and this question is my attempt to grasp toward an understanding of what that that phrase really means. As a bit of context, the Miller–Urey experiment produces a wide variety of complex organic molecules, but people say it requires a very reducing atmosphere to work, and it's disputed whether the early Earth's atmosphere was reducing enough for analogous processes to occur. Ultimately I would like to understand what it is about reducing atmospheres that allows complex organic molecules to be created so easily, but the first step is to understand what it really means to say one mixture of gases is "more reducing" than another.
If anything can be more reducing than something else, it seems like $\ce{CH4}$ gas should be more reducing than $\ce{O2}$, so I'm trying to understand the extent to which this can be quantified. However, I'm currently self-learning chemistry, and I'm having difficulty seeing connections between some of the concepts. (So apologies in advance for the rambling nature of this question.)
It seemed like one place to start might be to draw the net reaction of methane oxidation:
Obviously, if you add up the total oxidation state of every atom, you get zero - the carbon gets oxidised, the oxygen gets reduced, and the hydrogen stays the same. This must always be the case for any reaction (right?) since electrons are conserved. At first this made me think that maybe you can't really say that one thing is objectively "more reducing" than another. Though of course you can still talk about its tendency to oxidise or reduce carbon, and maybe that's what people are talking about when they say an atmosphere is "reducing" or "oxidising".
But then I came across the concept of redox potential, which does seem to be about things being objectively more reducing or oxidising than others. The problem is, the discussions of the concept that I can find online seem to be in terms of electrons "being transferred" from one molecule to another. But in the reaction above, as far as I can see, electrons aren't being transferred from the $\ce{O2}$ to the $\ce{CH4}$ but instead both molecules are being destroyed and replaced with completely different ones. So I'm having trouble seeing whether, or how, the concept of redox potential can be applied in this case.
Given this confusion, I guess my specific questions are:
When people talk about an atmosphere being "more reducing" than another, are they referring to redox potential or something else, such as the degree to which the atmosphere reduces or oxidises carbon? If it's the latter, is it something that can be quantified, or is it more of a subjective judgement?; and
What is the relationship between the concepts of the redox potential of a substance on the one hand, and the oxidation states of atoms on the other? Can the concept of redox potential be applied to gases like $\ce{CH4}$ and $\ce{O2}$?
Answer: I’m an environmental chemist and we frequently refer to ‘oxidizing’ or ‘reducing’ conditions in the environment. I’ll use soils as an example. The ‘oxic zone’ is the area of soil that oxygen can reach by diffusion and is seen as being under ‘oxidizing’ conditions because the reduction of $\ce{O2}$ is the dominant redox reaction that occurs in that zone. Elements that have multiple oxidation states tend to be in the higher oxidation states. For example, sulfur will exist primarily as sulfate ($\ce{SO4^2-}$). As you get further away from the Earth’s surface, the oxygen is depleted (by microbes) and the redox conditions change: the dominant reduction reaction changes. Usually in this order: $\ce{O2}$, nitrate, manganese(III,IV), iron(III), sulfate and finally $\ce{H2}$. When the conditions become so reducing that the microbes are using $\ce{H2}$ as an electron acceptor, they make methane via a process called methanogenesis. This order arises because of reduction potentials. Microbes get the most energy for the least effort from reducing $\ce{O2}$, so they do that until it runs out, which is when they switch to the next best thing available.
Ultimately what it means to be a ‘reducing environment’ is that if you added a compound with a higher reduction potential than the dominant redox pair, it would become reduced. For example, if you could put $\ce{SO4^2-}$ in a methanogenic environment, it would be reduced to $\ce{H2S}$ very quickly.
I’ve read that the composition of Earth’s atmosphere in pre-life earth was very different from today. If I recall correctly, there was no $\ce{O2}$ as it was formed by microbes later on. Without $\ce{O2}$ in Earth’s atmosphere, it makes sense that conditions would be very reducing based on an analogy to the soil redox zones. | {
"domain": "chemistry.stackexchange",
"id": 358,
"tags": "organic-chemistry, redox"
} |
Can I leave natural outliers in a dataset in training? | Question: Can I leave unedited natural outliers in a dataset (outliers that have not appeared just because of mistyping or mistakes in the data)? Or should I also remove them or change them?
Answer: Yes you should keep the natural outliers in a dataset. They represent an extreme end of the data you have and contain useful info. They also help you with anomaly detection if you wish.
But it also depends on the type of problem at hand. Take, for example, the Titanic dataset, where we are classifying who survived and who didn't. It is ok to remove the outliers there, as removing them won't be detrimental to the result. The passengers are already dead and removing the outliers won't lead to some serious loss.
On the other hand, in the case of classifying whether a patient has a tumor or not, removing the outliers would be a bad idea, as it will lead to misclassification and ultimately incorrect diagnosis/treatment.
If you are certain that these outliers are because of mistyping then you can safely remove them, but only if you are certain that they are because of mistyping. Else for a real world problem, it is always wise to keep the outliers.
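As a small illustration (the data and the 3-sigma threshold below are made up; z-scores are only one of many outlier criteria), flagging rather than deleting keeps the extreme point available for anomaly detection:

```python
import numpy as np

values = np.array([10.0] * 20 + [30.0])  # one natural extreme among plain readings
z = (values - values.mean()) / values.std()
is_outlier = np.abs(z) > 3  # flag, don't drop: the row stays in the dataset

print(values[is_outlier])
```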
Cheers! | {
"domain": "datascience.stackexchange",
"id": 10392,
"tags": "statistics, outlier, pretraining"
} |
Formulae for gravitational equilibrium | Question: I am trying to calculate at which point gravitational equilibrium sets in for various bodies (planets, stars, neutron stars, etc.), assuming they are perfect spheres. However, the radius I get is not equal to what it should be according to Wikipedia (I tried it for the Sun, a planet and a neutron star, but all of them were off by quite a bit).
Below is the example of the neutron star radius I'm trying to get:
My result (factor) is 1 at:
Radius: $2228588\mathrm{\ m}$
Density: $1.2768744892680034 \times 10^{11} \mathrm{\ kg/m^3}$
The Wikipedia article about neutron stars says the radius of a neutron star is about 12 km and the density is about 10 times higher than mine. My result is 2228km, which is quite a bit off, so I have a second question:
Am I calculating all the forces I need?
Are the formulae I'm using correct?
I took them from Wikipedia and university slides and I found multiple variations of almost all of them (most of which I actually didn't list below), so I'm quite confused as to what is correct.
I know that I'm using iron as the element and I should split it up and convert the protons to neutrons, but that would only increase the degeneracy pressure and result in an even bigger radius, if I'm not mistaken.
Here's how I calculated it:
Mass of an electron: $M_e$
Standard atomic weight of hydrogen: $M_h = 1.00782503223$
Mass of a neutron: $M_n$
Atomic mass of ${\mathrm{Fe}^{56}_{26}}$: $55.934936 \mathrm{\ u}$
Number of atoms: $7.938 \times 10^{31}$
Total number of neutrons: $(56-26) \times \text{number of atoms}$
Total number of electrons: $26 \times \text{number of atoms}$
Total number of particles: $\text{number of atoms} + \text{number of electrons}$
Mass: $\text{atomic mass} \times \text{number of atoms} = 4.4400513610370694 \times 10^{30} \mathrm{\ kg} \approx 2.3M_☉$
Radius: $2,228,588 \mathrm{\ m}$
Temperature: $9.452 \times 10^{-7}° K$
Force of gravity: $\frac{GM^2}{r}$
Volume: $\frac{4\pi r^3}{3}$
Particle density: $\frac{\text{number of particles}}{\text{volume}}$
a: $\frac{4\sigma}{c}$
Radiation pressure: $\frac{a T^4}{3}$
Gas pressure: $\text{particle density} \times k_B T$
Electron degeneracy pressure: $\frac{\pi^3 \hbar^2}{15 M_e} \left(\frac{3 \times \text{number of electrons}}{V \pi}\right)^{5/3}$
Electron degeneracy pressure 2: $\frac{\pi^2 \hbar^2}{5 M_e M_h^{5/3}} \left(\frac{3}{\pi}\right)^{2/3} \left(\frac{M}{V}\right)^{5/3}$
Neutron degeneracy pressure: $\frac{\pi^3 \hbar^2}{15 M_e} \left(\frac{3 \times \text{number of neutrons}}{V \pi}\right)^{5/3}$
Neutron degeneracy pressure 2: $\frac{3^{10/3} \hbar^2}{15 \pi^{1/3} M_n^{8/3} r^5} \left(\frac{M}{4}\right)^{5/3}$
Total pressure: $\text{gas pressure + radiation pressure + neutron degeneracy pressure + electron degeneracy pressure}$
Total pressure force: $\text{total pressure} \times \text{volume}$
If total pressure force = force of gravity, then the body is in equilibrium.
Answer:
Mass = $\text{atomic mass} \times \text{number of atoms} = 4.4400513610370694 \times 10^{30} \mathrm{\ kg} \approx 2.3M_☉$
Herein lies the problem: your units and exponents are messed up. The mass of your iron isotope is $55.934936 \mathrm{\ u}$, and there are $7.938⋅10^{31}$ atoms. Do the math, and you'd get a total mass of $4.4400513610370694 \times 10^{33}\mathrm{\ u}$. That converts to approximately $7372878 \mathrm{\ kg}$. Nowhere near the mass of the Sun! | {
"domain": "astronomy.stackexchange",
"id": 1877,
"tags": "star, gravity, neutron-star, planet, hydrostatic-equilibrium"
} |
Why we take the Laplace transform of functions which converge under the Fourier transform | Question: There are several functions for which we know that the Fourier transform exists, but we still calculate their Laplace transform. Can I know the reason why we need to take the Laplace transform of functions whose convergence we already know?
Thanks
Answer: There is a large class of functions for which both the Fourier transform and the Laplace transform exist, and for which one can be obtained from the other by setting $s=j\omega$. (Note that even when both exist, the latter need not be the case). So for this class of functions, obtaining the Laplace transform from the Fourier transform (or vice versa) does not require any additional work.
Example: $$\begin{align}x(t)&=e^{-at}u(t),\quad a>0\\
\text{Fourier transform:}\quad X(j\omega)&=\frac{1}{j\omega +a}\\
\text{Laplace transform:}\quad X(s)&=\frac{1}{s+a},\quad \operatorname{Re}(s) > -a\end{align}$$
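A quick numerical check of this example (the values of $a$ and $\omega$ below are arbitrary): integrating $x(t)e^{-j\omega t}$ on a fine grid should match $X(s)$ evaluated at $s=j\omega$:

```python
import numpy as np

a, w = 2.0, 3.0
t = np.linspace(0.0, 40.0, 400_001)  # e^{-at} is negligible well before t = 40
dt = t[1] - t[0]

integrand = np.exp(-a * t) * np.exp(-1j * w * t)  # x(t) e^{-j w t}
# Trapezoid rule for the Fourier integral over [0, inf)
X_fourier = dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
X_laplace_on_axis = 1.0 / (1j * w + a)  # X(s) = 1/(s+a) at s = j*omega

assert abs(X_fourier - X_laplace_on_axis) < 1e-6
```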
What the Laplace transform offers is a description in the complex $s$-plane, such as poles and zeros of transfer functions, from which many system properties can be easily deduced. The Fourier transform only shows the behavior on the frequency axis (i.e., the spectrum). | {
"domain": "dsp.stackexchange",
"id": 3312,
"tags": "fourier-transform, transform, laplace-transform"
} |
Escape velocity problem | Question: I was given a problem at school:
How much energy do we need to make a rocket of mass $m$ faster than the escape velocity so that it can travel in outer space?
Here's how I worked:
I know that the escape velocity from a object of mass $M$ that in this case is the Earth is
$$v_f=\sqrt{\frac{2GM}{r}}$$
Then I calculated the Work that I need using the formula for Kinetic Energy:
$$
K=\frac{1}{2}mv^2 \rightarrow K=\frac{1}{2}mv_f^2=\frac{1}{2}m\sqrt{\frac{2GM}{r}}^2=\frac{1}{2}m\frac{2GM}{r}= \frac{mMG}{r}
$$
Is this right?
Now I know that $W=F\cdot S$ and if I want to know the force I need to change my equation to $F=\frac{mMG}{rS}$ right?
Thanks!
p.s. I noticed that $\frac{mMG}{r}$ is similar to the equation for the gravitational force, $F_g=G \frac{mM}{r^2}$, but it is actually equal to $F_g \, r$.
Answer: Work is in units of energy. For conservative forces, like gravity here acting on the rocket,
$ W = \Delta KE$
The escape velocity gives you the velocity needed to just escape earth so that $v = 0$ in space. So the minimum energy needed to escape earth is where $v = 0$ in space. When $v = 0$ so too is $KE$.
$ W = KE_f - KE_i = -KE_i = -\frac{GMm}{r}$
where $r$ is the radius of the earth. Just plug into this, and this is the minimum work, ie the minimum energy you need to escape. The negative indicates that the work done by gravity on the ship is negative (ship motion is in opposite direction to force of gravity). The energy done by an external force is positive.
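Plugging rough Earth values into the result (a sanity check with rounded constants, not part of the original derivation):

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24   # kg, mass of Earth
r = 6.371e6    # m, mean radius of Earth
m = 1000.0     # kg, an example rocket mass

v_esc = math.sqrt(2 * G * M / r)  # escape velocity, about 1.12e4 m/s
E_min = G * M * m / r             # minimum kinetic energy needed to escape

print(f"v_esc = {v_esc:.0f} m/s, E_min = {E_min:.3e} J")
```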
Note that also for conservative forces, $W = -\Delta U$ where $U$ is the potential energy. The expression we found below,
$-\frac{GMm}{r}$
Is the formula for gravitational potential energy. So another way to look at the problem is how to get the gravitational potential energy to zero. On Earth the grav potential energy is some negative number. We need to get that number to zero. So by just plugging into the grav potential energy to find what it is on Earth, that is how much is needed to get the grav potential to zero. When grav potential is zero, you won't be attracted back to Earth. | {
"domain": "physics.stackexchange",
"id": 13493,
"tags": "homework-and-exercises, newtonian-mechanics, gravity, energy, escape-velocity"
} |
Can a Hamiltonian of a tripartite system map an product state into a product state? | Question: Suppose we have a finite dimensional Hilbert space $\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C$ and a Hamiltonian has the following form:
$$H = H_A \otimes I \otimes I + I \otimes H_B \otimes I + |u\rangle \langle u| \otimes V,$$
where $|u\rangle \in \mathcal{H}_A \otimes \mathcal{H}_B$.
Does there exist a state $|x\rangle|v\rangle$ such that
$$(H_A \otimes I \otimes I + I \otimes H_B \otimes I + |u\rangle \langle u| \otimes V)|x\rangle|v\rangle = x |x\rangle |w\rangle?$$
In the above equation, we have $|x\rangle \in \mathcal{H}_A \otimes \mathcal{H}_B$ and $|v\rangle, |w\rangle \in \mathcal{H}_C$.
The equation is true (we get a product state) whenever $|v\rangle$ is an eigenvector of $V$ and we get that $|v\rangle = |w\rangle$. However, I'm interested in a general case such that the RHS vector is a product state of systems AB and C.
Answer: Let's just do the special case of 3 qubits. Without loss of generality (up to unitary transformations, rescalings,...), $H_A$ and $H_B$ might as well be Pauli $Z$ matrices.
There are two possibilities for getting $|x\rangle$ on output, if $|x\rangle$ input.
$|x\rangle$ is an eigenstate of $Z\otimes I+I\otimes Z$ (i.e. it has fixed number of 1s) of eigenvalue $\lambda$. In this case, we require $\langle u|x\rangle=0,1$. If 0, $|v\rangle=|w\rangle$. If 1, then $|w\rangle=\lambda|v\rangle+V|v\rangle$, and you have free choice of $V$. A non-trivial case might use something like $|x\rangle=(|01\rangle-|10\rangle)/\sqrt{2}$.
If $|x\rangle$ is not an eigenstate of $Z\otimes I+I\otimes Z$, then
$$
(Z\otimes I+I\otimes Z)|x\rangle=\lambda|x\rangle+\gamma |y\rangle.
$$
($|y\rangle$ is some state orthogonal to $|x\rangle$, $\gamma\neq 0$.)
Then we require
$$
|u\rangle(V|v\rangle)\langle u|x\rangle=(\alpha|x\rangle-\delta |y\rangle)V|v\rangle.
$$
In order to make sure we cancel off the non-x component, this requires $\delta V|v\rangle=\gamma|v\rangle$. Hence, this is the rather boring case of $|v\rangle$ being an eigenvector of $V$.
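The eigenvector case is easy to confirm numerically on three qubits; every matrix and state below is an arbitrary choice of mine, and the output is a product state across the AB|C cut exactly when the reshaped vector has rank 1:

```python
import numpy as np

Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
HA = np.kron(np.kron(Z, I2), I2)  # H_A on A, identity on B and C
HB = np.kron(np.kron(I2, Z), I2)  # H_B on B, identity on A and C

u = np.array([1.0, 2.0, 3.0, 4.0]); u /= np.linalg.norm(u)  # |u> in H_A x H_B
V = np.array([[1.0, 2.0], [2.0, -1.0]])                     # Hermitian V on H_C
H = HA + HB + np.kron(np.outer(u, u), V)                    # full Hamiltonian

x = np.array([0.3, 0.1, -0.7, 0.5]); x /= np.linalg.norm(x)  # arbitrary |x>
v = np.linalg.eigh(V)[1][:, 0]                               # eigenvector of V

out = H @ np.kron(x, v)
s = np.linalg.svd(out.reshape(4, 2), compute_uv=False)  # AB|C Schmidt coefficients
assert s[1] < 1e-10  # rank 1: the output is a product state of AB and C
```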
Although I made the simplifying assumption about the dimension of the Hilbert space, I don't immediately see anywhere that I've actually used it in the calculation, so I believe this is a general result. | {
"domain": "quantumcomputing.stackexchange",
"id": 4820,
"tags": "entanglement, hamiltonian-simulation, hamiltonian, linear-algebra"
} |
How $X$ noise propagates through controlled-$S$ gate | Question: I am struggling to find how an $X$ noise propagates through a controlled-$S$ gate.
Here's the circuit.
Answer: In Quirk, the trick to finding the equivalent operation after an intermediate operation is to check that their phase kickback exactly cancels even when operating on entangled qubits:
Note that the bottom qubit ends up OFF. This is a proof that X before CS is the inverse of $S \cdot X \cdot CZ$ after CS. | {
"domain": "quantumcomputing.stackexchange",
"id": 4218,
"tags": "quantum-state, quantum-circuit, transpile, compiling"
} |
What is the difference between this "spectral width" and the laser linewidth (FWHM)? | Question: I am currently looking at the 1550nm fibre-coupled DFB laser diodes on Alibaba for use in interferometry experiments. These are commonly used in optical communications applications, but it seems to me that they would still be suitable for interferometry experiments. The problem is that they don't list a linewidth (FWHM), which I want to know for my application. However, many of them do list "spectral width"/"spectrum width", $\Delta \lambda$, at -3db or -20db, in units of nanometers. This "spectral width" value seems to be close to what I would expect for linewidth (in terms of FWHM), but a bit higher. See this example:
What is the difference between this "spectral width" and the laser linewidth (FWHM)?
What information can I gain from this "spectral width" value, in terms of the linewidth of the laser? For instance, if "spectral width" is actually always greater than linewidth (FWHM), then, despite not knowing the linewidth (FWHM), can I at least assume that the linewidth (FWHM) of the laser is significantly lower than the "spectral width" (in the above case, significantly lower than 0.5 nm)?
This question is related, but I don't think it answers my question.
Answer:
many of them do list "spectral width"/"spectrum width", $\Delta \lambda$, at -3db or -20db, in units of nanometers.
A spectral width at -3 dB (half intensity) is exactly what a FWHM is.
Depending on the vendor and product, there might be a typical spectrum plot included in the datasheet that makes this more clear.
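Since $10\log_{10}(1/2)\approx -3\ \mathrm{dB}$, the $-3\ \mathrm{dB}$ width of a line shape is its FWHM by construction; a synthetic check with a Gaussian line (the shape and the 0.5 nm width are made-up illustration values):

```python
import numpy as np

fwhm = 0.5                             # nm, hypothetical line width
lam = np.linspace(-2.0, 2.0, 400_001)  # nm, offset from the center wavelength
S = np.exp(-4 * np.log(2) * lam**2 / fwhm**2)  # Gaussian with the given FWHM

S_db = 10 * np.log10(S)
above = lam[S_db >= -10 * np.log10(2)]  # points within -3.01 dB of the peak (S >= 0.5)
width_3db = above[-1] - above[0]

assert abs(width_3db - fwhm) < 1e-3  # the -3 dB width recovers the FWHM
```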
This "spectral width" value seems to be close to what I would expect for linewidth (in terms of FWHM), but a bit higher.
Remember that the maximum specification is a commitment from the vendor not to sell you any device with a wider spectral width. If their typical width is 0.5 nm but most of their customers don't require a spectral width narrower than 1 nm, it's better for them to specify 1 nm and not have to reject the occasional laser that produces 0.6 or 0.7 nm.
For customers who require a narrower spectral width, they may be able to produce a custom part number that is tested to tighter limits.
Also keep in mind that based on the notes after the table, these lasers are designed for cable TV distribution applications, not interferometry. And, given you're buying these on Alibaba, and the center wavelength uncertainty is extremely wide, these lasers may be rejected parts from a more tightly spec'ed product. | {
"domain": "physics.stackexchange",
"id": 83735,
"tags": "laser, interferometry, photonics, laser-cavity"
} |
Why wash a sample before titration? | Question: In the AWWA notebook for $\ce{Na2SiF6}$ determination, why is it required to wash the product sample in an alcoholised $\ce{KCl}$ solution before titration?
Answer: Your protocol gives you the answer: to make sure your sediment is neutral. After all, you titrate with NaOH afterwards, so any wrong starting point in terms of pH would skew your result. | {
"domain": "chemistry.stackexchange",
"id": 3280,
"tags": "titration"
} |
Why is an S-S bond stronger than an O-O bond? | Question: I'm wondering why exactly the single bond between two sulfur atoms is stronger than that of two oxygen atoms. According to this page, an $\ce{O-O}$ bond has an enthalpy of $142~\mathrm{kJ~mol^{-1}}$, and a $\ce{S-S}$ bond in $\ce{S8}$ an enthalpy of $226~\mathrm{kJ~mol^{-1}}$. This one reports the $\ce{S-S}$ bond enthalpy to be $268~\mathrm{kJ~mol^{-1}}$, but I'm not sure which molecule they mean, or how they measured it. Anyway, it's still higher than that of $\ce{O-O}$.
Searching the Net, the only justification I could find was something similar to concepts they apply in VSEPR, like in this Yahoo Answers thread with such remarkable grammar. Quoting the answer, which might have borrowed some stuff from a high school textbook,
due to small size the lone pair of electrons on oxygen atoms repel the bond pair of O-O bond to a greater extent than the lone pair of electrons on the sulfur atoms in S-S bond....as a result S-S bond (bond energy=213 kj/mole)is much more stronger than O-O(bond energy = 138 kj/mole) bond $\ldots$
Other variations of the same argument can be seen here, but it doesn't make sense, since one couldn't apply the same argument to $\ce{O=O}$ and $\ce{S=S}$. The first reference documents the $\ce{S=S}$ and $\ce{O=O}$ bond enthalpies to be $425$ and $494~\mathrm{kJ~mol^{-1}}$, respectively.
It's a bit shaky, and I'm looking for a solid explanation using MO or VB, or anything else that actually works. So, why is an $\bf\ce{S-S}$ single bond stronger than $\bf\ce{O-O}$, despite $\bf\ce{O=O}$ being obviously stronger than $\bf\ce{S=S}$?
Answer:
TL;DR: The $\ce{O-O}$ and $\ce{S-S}$ bonds, such as those in $\ce{O2^2-}$ and $\ce{S2^2-}$, are derived from $\sigma$-type overlap. However, because the $\pi$ and $\pi^*$ MOs are also filled, the $\pi$-type overlap also affects the strength of the bond, although the bond order is unaffected. Bond strengths normally decrease down the group due to poorer $\sigma$ overlap. The first member of each group is an anomaly because for these elements, the $\pi^*$ orbital is strongly antibonding and population of this orbital weakens the bond.
Setting the stage
The simplest species with an $\ce{O-O}$ bond would be the peroxide anion, $\ce{O2^2-}$, for which we can easily construct an MO diagram. The $\mathrm{1s}$ and $\mathrm{2s}$ orbitals do not contribute to the discussion so they have been neglected. For $\ce{S2^2-}$, the diagram is qualitatively the same, except that $\mathrm{2p}$ needs to be changed to a $\mathrm{3p}$.
The main bonding contribution comes from, of course, the $\sigma$ MO. The greater the $\sigma$ MO is lowered in energy from the constituent $\mathrm{2p}$ AOs, the more the electrons are stabilised, and hence the stronger the bond.
However, even though the $\pi$ bond order is zero, the population of both $\pi$ and $\pi^*$ orbitals does also affect the bond strength. This is because the $\pi^*$ orbital is more antibonding than the $\pi$ orbital is bonding. (See these questions for more details: 1, 2.) So, when both $\pi$ and $\pi^*$ orbitals are fully occupied, there is a net antibonding effect. This doesn't reduce the bond order; the bond order is still 1. The only effect is to just weaken the bond a little.
Comparing the $\sigma$-type overlap
The two AOs that overlap to form the $\sigma$ bond are the two $\mathrm{p}_z$ orbitals. The extent to which the $\sigma$ MO is stabilised depends on an integral, called the overlap, between the two $n\mathrm{p}_z$ orbitals ($n = 2,3$). Formally, this is defined as
$$S^{(\sigma)}_{n\mathrm{p}n\mathrm{p}} = \left\langle n\mathrm{p}_{z,\ce{A}}\middle| n\mathrm{p}_{z,\ce{B}}\right\rangle = \int (\phi_{n\mathrm{p}_{z,\ce{A}}})^*(\phi_{n\mathrm{p}_{z,\ce{B}}})\,\mathrm{d}\tau$$
It turns out that, going down the group, this quantity decreases. This has to do with the $n\mathrm{p}$ orbitals becoming more diffuse down the group, which reduces their overlap.
Therefore, going down the group, the stabilisation of the $\sigma$ MO decreases, and one would expect the $\ce{X-X}$ bond to become weaker. That is indeed observed for the Group 14 elements. However, it certainly doesn't seem to work here. That's because we ignored the other two important orbitals.
Comparing the $\pi$-type overlap
The answer for our question lies in these two orbitals. The larger the splitting of the $\pi$ and $\pi^*$ MOs, the larger the net antibonding effect will be. Conversely, if there is zero splitting, then there will be no net antibonding effect.
The magnitude of splitting of the $\pi$ and $\pi^*$ MOs again depends on the overlap integral between the two $n\mathrm{p}$ AOs, but this time they are $\mathrm{p}_x$ and $\mathrm{p}_y$ orbitals. And as we found out earlier, this quantity decreases down the group; meaning that the net $\pi$-type antibonding effect also weakens going down the group.
Putting it all together
Actually, to look solely at oxygen and sulfur would be doing ourselves a disservice. So let's look at how the trend continues.
$$\begin{array}{|c|c|c|c|}
\hline
\mathbf{X} & \mathbf{BDE(X-X)\ /\ kJ\ mol^{-1}} & \mathbf{X} & \mathbf{BDE(X-X)\ /\ kJ\ mol^{-1}} \\
\hline
\ce{O} & 144 & \ce{F} & 158 \\
\ce{S} & 266 & \ce{Cl} & 243 \\
\ce{Se} & 192 & \ce{Br} & 193 \\
\ce{Te} & 126 & \ce{I} & 151 \\
\hline
\end{array}$$
(Source: Prof. Dermot O'Hare's web page.)
You can see that the trend goes this way: there is an overall decrease going from the second member of each group downwards. However, the first member has an exceptionally weak single bond.
The rationalisation, based on the two factors discussed earlier, is straightforward. The general decrease in bond strength arises due to weakening $\sigma$-type overlap. However, in the first member of each group, the strong $\pi$-type overlap serves to weaken the bond.
I also added the Group 17 elements in the table above. That's because the trend is exactly the same, and it's not a fluke! The MO diagram of $\ce{F2}$ is practically the same as that of $\ce{O2^2-}$, so all of the arguments above also apply to the halogens.
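The two-factor rationalisation can be checked against the numbers quoted above. A small sketch (values copied from the table, in kJ/mol):

```python
# X-X single-bond dissociation energies from the table above (kJ/mol)
bde = {
    "group16": [("O", 144), ("S", 266), ("Se", 192), ("Te", 126)],
    "group17": [("F", 158), ("Cl", 243), ("Br", 193), ("I", 151)],
}

for group, rows in bde.items():
    energies = [e for _, e in rows]
    # From the second member down, sigma overlap weakens: strictly decreasing BDEs
    assert energies[1] > energies[2] > energies[3]
    # First-member anomaly: pi*-driven weakening makes O-O (F-F) weaker than
    # even the third member Se-Se (Br-Br)
    assert energies[0] < energies[2]
    print(group, "matches the trend")
```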
How about the double bonds?
In order to look at the double bond, we want to find a species that has an $\ce{O-O}$ bond order of $2$. That's not hard at all. It's called dioxygen, $\ce{O2}$, and its MO scheme is exactly the same as above except that there are two fewer electrons in the $\pi^*$ orbitals.
Since there are only two electrons in the $\pi^*$ MOs as compared to four in the $\pi$ MOs, overall the $\pi$ and $\pi^*$ orbitals generate a net bonding effect. (After all, this is where the second "bond" comes from.) Since the $\pi$-$\pi^*$ splitting is much larger in $\ce{O2}$ than in $\ce{S2}$, the $\pi$ bond in $\ce{O2}$ is much stronger than the $\pi$ bond in $\ce{S2}$.
So, in this case, both the $\sigma$ and the $\pi$ bonds in $\ce{O2}$ are stronger than in $\ce{S2}$. There should be absolutely no question now as to which of the $\ce{O=O}$ or the $\ce{S=S}$ bonds is stronger! | {
"domain": "chemistry.stackexchange",
"id": 6459,
"tags": "inorganic-chemistry, bond, molecules"
} |
Do birds require cilia for left/right symmetry during their embryological stages? | Question: I know that in humans ciliary dyskenisia can lead to a Situs inversus, is this also the case for birds and reptilians?
Answer: Here is a paper that talks about left-right patterning across bilateral organisms, discussing both Xenopus (frog) and chicken.
It looks like it may be hard to generalize, but the answer appears to be "yes" for Xenopus and "no" for chicken (see e.g. Figure 2, and text immediately following).
However, it looks like the question of what exactly is going on in chicken is not well understood, according to this other paper. Chickens do have lethal ciliopathy mutants homologous to mutants causing left-right defects in other organisms, but these mutants appear to have normal left-right asymmetry development in chicken.
It is also not clear that the requirement of motile cilia for left-right asymmetry is generalizable within clades though, as pigs have apparently also lost the dependence on cilia for L-R, whereas mice and humans do require cilia for L-R. It appears that pig and chicken have dispensed with cilia by the same histological mechanism (see image below, Figure 7 of first paper).
I do not know whether the mechanism has been investigated in other birds.
Hope that helps. | {
"domain": "biology.stackexchange",
"id": 9693,
"tags": "embryology"
} |
Question on Pauli matrices completeness relation | Question: In the derivation on Wikipedia, it says that if
$$2 M_{\alpha \beta} = \delta_{\alpha \beta} M_{\gamma\gamma} + \sum_k \sigma^k_{\alpha \beta} \sigma^k_{\gamma \delta} M_{\delta \gamma}$$
for any matrix $M$ then it follows that
$$ \sum_k \sigma^k_{\alpha \beta} \sigma^k_{\gamma \delta} = 2 \delta_{\alpha \delta} \delta_{\beta \gamma} - \delta_{\alpha \beta} \delta_{ \gamma \delta} $$
but I do not see how one goes from the first to the second equation.
Answer: Using the repeated-indices (Einstein summation) convention, rearrange the first equation as
$$\sum_{k} \sigma^{k}_{\alpha \beta} \sigma^{k}_{\gamma \delta} M_{\delta \gamma} = 2 M_{\alpha \beta} - \delta_{\alpha \beta} M_{\gamma \gamma}$$
Now note that $$M_{\alpha \beta} = M_{\delta \gamma} \delta^{\delta}_{ \alpha} \delta^{\gamma}_{\beta}$$
and $$ M_{\gamma \gamma} = M_{\delta \gamma} \delta^{\delta}_{ \gamma}$$
Substituting these into the right-hand side gives
$$\sum_{k} \sigma^{k}_{\alpha \beta} \sigma^{k}_{\gamma \delta} M_{\delta \gamma} = 2 M_{\delta \gamma} \delta^{\delta}_{ \alpha} \delta^{\gamma}_{\beta} - \delta_{\alpha \beta} \delta^{\delta}_{ \gamma} M_{\delta \gamma} $$
So
$$\bigg(\sum_{k} \sigma^{k}_{\alpha \beta} \sigma^{k}_{\gamma \delta} - 2 \delta^{\delta}_{ \alpha} \delta^{\gamma}_{\beta} + \delta_{\alpha \beta} \delta^{\delta}_{ \gamma} \bigg)M_{\delta \gamma} = 0$$
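As a cross-check, independent of the derivation, the target identity can be verified numerically over all index values with NumPy:

```python
import numpy as np

# The three Pauli matrices
sigma = np.array([
    [[0, 1], [1, 0]],      # sigma_x
    [[0, -1j], [1j, 0]],   # sigma_y
    [[1, 0], [0, -1]],     # sigma_z
])
d = np.eye(2)

# LHS: sum_k sigma^k_{ab} sigma^k_{cd}
lhs = np.einsum("kab,kcd->abcd", sigma, sigma)
# RHS: 2 d_{ad} d_{bc} - d_{ab} d_{cd}
rhs = 2 * np.einsum("ad,bc->abcd", d, d) - np.einsum("ab,cd->abcd", d, d)

assert np.allclose(lhs, rhs)
print("completeness relation holds for all index combinations")
```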
For this expression to be valid for any generic matrix $M$, the expression in parentheses must be zero. | {
"domain": "physics.stackexchange",
"id": 90940,
"tags": "quantum-mechanics, matrix-elements"
} |
system plugin to publish /clock from gazebo 1.7.1 standalone to ROS | Question:
Hi,
I have not installed simulator_gazebo in ros-groovy and currently working with gazebo standalone 1.7.1. I need to publish /clock as in simulator_gazebo. I figured that it was done by the gazebo_ros_api_plugin. However, gazebo_ros_api_plugin do much more than publishing /clock and I isolated the following parts for my purpose. But, still it seems something is missing as /clock is not advertised. Any help would be appreciated.
Also, how do you run the gzclient with this plugin? I tried with gzclient -g libmy_gazebo_ros_clock.so and it didnt work.
Thank you
#include
namespace gazebo
{
SystemClock::SystemClock()
{
this->world_created_ = false;
}
////////////////////////////////////////////////////////////////////////////////
// Destructor
SystemClock::~SystemClock()
{
event::Events::DisconnectWorldUpdateBegin(this->time_update_event_);
// Finalize the controller
this->rosnode_->shutdown();
delete this->rosnode_;
// shutdown ros queue
this->gazebo_callback_queue_thread_->join();
delete this->gazebo_callback_queue_thread_;
}
/// \brief ros queue thread for this node
void SystemClock::gazeboQueueThread()
{
ROS_DEBUG_STREAM("Callback thread id=" << boost::this_thread::get_id());
static const double timeout = 0.001;
while (this->rosnode_->ok())
this->gazebo_queue_.callAvailable(ros::WallDuration(timeout));
}
void SystemClock::Load(int argc, char** argv)
{
// setup ros related
if (!ros::isInitialized())
ros::init(argc,argv,"gazebo",ros::init_options::NoSigintHandler);
else
ROS_ERROR("Something other than this gazebo_ros_api plugin started ros::init(...), command line arguments may not be parsed properly.");
this->rosnode_ = new ros::NodeHandle("~");
/// \brief setup custom callback queue
gazebo_callback_queue_thread_ = new boost::thread(&SystemClock::gazeboQueueThread,this);
// below needs the world to be created first
this->load_gazebo_ros_clock_plugin_event_ = event::Events::ConnectWorldCreated(boost::bind(&SystemClock::LoadGazeboClockPlugin,this,_1));
}
void SystemClock::LoadGazeboClockPlugin(std::string _worldName)
{
// make sure things are only called once
event::Events::DisconnectWorldCreated(this->load_gazebo_ros_clock_plugin_event_);
this->lock_.lock();
if (this->world_created_)
{
this->lock_.unlock();
return;
}
// set flag to true and load this plugin
this->world_created_ = true;
this->lock_.unlock();
this->world = physics::get_world(_worldName);
if (!this->world)
{
ROS_FATAL("cannot load gazebo ros api server plugin, physics::get_world() fails to return world");
return;
}
this->gazebonode_ = transport::NodePtr(new transport::Node());
this->gazebonode_->Init(_worldName);
// publish clock for simulated ros time
pub_clock_ = this->rosnode_->advertise<rosgraph_msgs::Clock>("/clock",10);
// set param for use_sim_time if not set by user already
this->rosnode_->setParam("/use_sim_time", true);
this->time_update_event_ = event::Events::ConnectWorldUpdateStart(boost::bind(&SystemClock::publishSimTime,this));
}
////////////////////////////////////////////////////////////////////////////////
/// \brief
void SystemClock::publishSimTime(const boost::shared_ptr<gazebo::msgs::WorldStatistics const> &msg)
{
ROS_ERROR("CLOCK2");
common::Time currentTime = msgs::Convert( msg->sim_time() );
rosgraph_msgs::Clock ros_time_;
ros_time_.clock.fromSec(currentTime.Double());
// publish time to ros
this->pub_clock_.publish(ros_time_);
}
void SystemClock::publishSimTime()
{
common::Time currentTime = this->world->GetSimTime();
rosgraph_msgs::Clock ros_time_;
ros_time_.clock.fromSec(currentTime.Double());
// publish time to ros
this->pub_clock_.publish(ros_time_);
}
// Register this plugin with the simulator
GZ_REGISTER_SYSTEM_PLUGIN(SystemClock)
}
Originally posted by peshala on Gazebo Answers with karma: 197 on 2013-05-08
Post score: 0
Answer:
Go to the directory where you have your system plugin and run
gazebo -s libmy_gazebo_ros_clock.so
With this, /clock is published OK. You may also remove the following parts from the above code.
In SystemClock::~SystemClock()
// shutdown ros queue
this->gazebo_callback_queue_thread_->join();
delete this->gazebo_callback_queue_thread_;
In SystemClock::Load
/// \brief setup custom callback queue
gazebo_callback_queue_thread_ = new boost::thread(&SystemClock::gazeboQueueThread,this);
/// \brief ros queue thread for this node
void SystemClock::gazeboQueueThread()
{
ROS_DEBUG_STREAM("Callback thread id=" << boost::this_thread::get_id());
static const double timeout = 0.001;
while (this->rosnode_->ok())
this->gazebo_queue_.callAvailable(ros::WallDuration(timeout));
}
Originally posted by peshala with karma: 197 on 2013-05-08
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3273,
"tags": "gazebo"
} |
Is the direction of base current always constant in a BJT transistor? | Question: Is the direction of base current always constant in a BJT transistor?
Excluding the transistor breakdown state, is the direction of the base current always the same in the four modes: active mode, cut-off mode, reverse mode, and saturation mode?
However, if the transistor breaks down, does the direction of base current reverse?
Answer: In most applications, the direction of the base current is always the same while the BJT is on (in forward-active, saturation or reverse-active modes): into the base in npn and out of the base in pnp BJTs. There are three typical exceptions I can think of.
1. Turn-off transients
If you connect a voltage source across a pn-junction diode in series with a resistor (such that the diode is forward biased), current flows in the diode in the usual direction. If you then suddenly turn off the voltage source, current flows in the diode in the opposite direction until the charge responsible for forward biasing the diode is removed. The time it takes for this to happen is called the storage delay time.
A BJT consists of two pn-junctions at least one of which (typically the base-emitter junction) is forward biased when the device is turned on. Much like in the case of a diode, the base current can temporarily reverse while the device is being turned off to eliminate the charge stored in the base, after which it decays to zero.
2. Reverse leakage current
If neither junction is forward biased (i.e. in cut-off), there can be a tiny leakage base current opposite to the usual direction. A typical example is when the base-emitter junction is off and the base-collector junction is reverse biased. Again, this is exactly analogous to the reverse leakage current in a diode.
3. Avalanche breakdown
If the reverse bias voltage of the base-collector junction is pushed past the breakdown voltage, avalanche multiplication results in a rapid rise in collector current. In the common-base configuration, this current goes out of the base of an npn BJT and into the base of a pnp BJT, i.e. in the reverse direction. | {
"domain": "physics.stackexchange",
"id": 99461,
"tags": "electric-circuits, semiconductor-physics, electronics, electrical-engineering"
} |
What shifts protons on a benzene ring to lower ppm? | Question: Why do these protons appear at 6.9 and 6.93 ppm, at lower ppm value? I assumed it was due to the anisotropic effect, specifically I thought the circulating (pi) electrons induced a magnetic field which causes them to experience a stronger net field, and those that were lower were not being affected by the induced magnetic field. This is instead apparently due to resonance. How come it is due to resonance? Is the circulation of electrons (ring current) not affecting them all or instead is resonance the primary reason why some protons come lower than others?
Answer: It’s due to resonance as the methoxy group is an ortho-para electron-donating group by resonance.
Hence the ortho and para positions are more electron rich, so their protons are more shielded and appear further upfield, at lower ppm. | {
"domain": "chemistry.stackexchange",
"id": 17706,
"tags": "organic-chemistry, spectroscopy, nmr-spectroscopy"
} |
No module named setuptools.config | Question:
Hello, I tried to follow the instructions to install ROS2 using colcon and I get the following error (even though the build continue and pass):
[0.639s] ERROR:colcon.colcon_core.entry_point:Exception loading extension 'colcon_core.package_identification.python': No module named 'setuptools.config'
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 98, in load_entry_points
    extension_type = load_entry_point(entry_point)
  File "/usr/lib/python3/dist-packages/colcon_core/entry_point.py", line 140, in load_entry_point
    return entry_point.load()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2229, in load
    return self.resolve()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2235, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/lib/python3/dist-packages/colcon_core/package_identification/python.py", line 10, in <module>
    from setuptools.config import read_configuration
ImportError: No module named 'setuptools.config'
With the following command:
colcon build --symlink-install
I am running on Ubuntu 16.04 and followed this link https://github.com/ros2/ros2/wiki/Linux-Development-Setup
Do I need to specify something to remove this error ?
Thanks
Originally posted by pokitoz on ROS Answers with karma: 527 on 2018-07-05
Post score: 1
Original comments
Comment by Dirk Thomas on 2018-07-05:
What version of setuptools do you have installed? python3 -m pip freeze --all | grep setuptools
Answer:
Ah! Yes I guess it was too old as explained in https://github.com/ros2/ros2/issues/512
My version was setuptools==20.7.0 and after running "sudo -H python3 -m pip install -U setuptools" it is now setuptools==39.2.0. Thanks!
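For reference, the version check can also be done from inside Python rather than via pip — a minimal sketch using pkg_resources (the 30.x threshold below is my assumption of roughly when setuptools.config appeared, not an official minimum):

```python
# Check the installed setuptools version from Python (similar to `pip freeze`)
import pkg_resources

version = pkg_resources.get_distribution("setuptools").version
major = int(version.split(".")[0])
print("setuptools", version)

# setuptools.config (what colcon imports) only exists in sufficiently new
# releases -- 30 is an assumed threshold based on the linked issue
if major < 30:
    print("likely too old for colcon; run: sudo -H python3 -m pip install -U setuptools")
```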
Originally posted by pokitoz with karma: 527 on 2018-07-06
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 31198,
"tags": "ros, ros2, ros-bouncy, colcon"
} |
Is time finite or infinite? | Question: I am not a scientist nor do I have a degree in Astrophysics, but I do like to learn new things by asking questions. With that being said, I have read that time is relative to space which began after the big bang. Moreover, before or right at the moment before the big bang, energy was confined to a very very small scale like the tip of the needle or something.
But, I wonder since energy can neither be created nor destroyed, that means energy is infinity with no beginning.
So, when it is said time started with the big bang, is that in reference to our universe? Because in reference to energy, shouldn't time be infinite too? Or time has to be relative to space as well and so there is no concept of time prior to the big bang?
Answer: As far as we can tell, the four-dimensional space-time continuum is unbounded in the space directions (space is infinite), while the time dimension seems to have a singularity (about 13.8 billion years before now), and so it is impossible to talk in a sensible way about events before that time. We would normally say that the universe and time itself started at that point, so any statement about "before the big bang" is completely meaningless. There are other possibilities, such as "eternal inflation", but they are rather speculative, with little positive evidence.
In the other direction, into the future, time seems to be unbounded, though this is less certain: it depends very precisely on several cosmological parameters whose values are not yet certainly known.
So our current best hypothesis is that time is finite into the past, but infinite into the future.
Energy is locally conserved within the universe, but don't apply such reasoning to the formation of the universe. Indeed the whole concept of physics ceases to operate at that singularity. The basis of nature as cause-and-effect cannot operate at a singularity.
If space is indeed infinite, and the universe is homogenous and isotropic, then the total mass-energy of the universe could be infinite. However the potential gravitational energy might be negative infinite, and so the total energy of the universe might be zero, leading to Hawking's observation that "the universe is the ultimate free lunch". | {
"domain": "astronomy.stackexchange",
"id": 6478,
"tags": "universe, time"
} |