| anchor | positive | source |
|---|---|---|
DAL mapping efficiency | Question: I am wondering if I can make these classes a bit more efficient.
Test Results
Single Run
Method 1: 5 Columns - Text Query - 81178 Records = 00:00:00.6390366 secs
Method 2: 5 Columns - Text Query - 81178 Records = 00:00:00.5360307 secs
10 Run Loop
Method 1: 5 Columns - Text Query - 81178 Records = 00:00:05.3253045 secs
Method 2: 5 Columns - Text Query - 81178 Records = 00:00:05.0912912 secs
100 Run Loop
Method 1: 5 Columns - Text Query - 81178 Records = 00:00:54.1270959 secs
Method 2: 5 Columns - Text Query - 81178 Records = 00:00:53.8710813 secs
All 3 attempts for both methods never peak over 25% CPU usage.
As you can see, there really is no significant difference between the two methods, and method 2 (judging by CPU usage) does not seem to multi-thread.
I am thinking that if I could get rid of my use of reflection to map the columns to strongly-typed classes, it would give both methods a significant performance boost, and I am sure that I can improve the asynchronicity of method 2 as well... I just don't know how.
WrapperTest.cs
private static IList<T> Map<T>(DbDataReader dr) where T : new()
{
try
{
// initialize our returnable list
List<T> list = new List<T>();
// fire up the lambda mapping
var converter = new Converter<T>();
while (dr.Read())
{
// read in each row, and properly map it to our T object
var obj = converter.CreateItemFromRow(dr);
list.Add(obj);
}
// return it
return list;
}
catch (Exception ex)
{
// Catch any exception, write it out to our logging mechanism, and add it to our returnable message property
_Msg += "Wrapper.Map Exception: " + ex.Message;
ErrorReporting.WriteEm.WriteItem(ex, "o7th.Class.Library.Data.Wrapper.Map", _Msg);
// make sure this method returns a default List
return default(List<T>);
}
}
This is a continuation of this question.
The code above definitely runs more efficiently now. Is there any way to make it better?
Answer: Use LINQ expression compilation to generate mapping code at runtime. The concept is to generate a method that does obj.Property1 = dataReader["Property1"]; ... dynamically.
public class Converter<T> where T : new()
{
private static ConcurrentDictionary<Type, object> _convertActionMap = new ConcurrentDictionary<Type, object>();
private Action<IDataReader, T> _convertAction;
private static Action<IDataReader, T> GetMapFunc()
{
var exps = new List<Expression>();
var paramExp = Expression.Parameter(typeof(IDataReader), "dataReader");
var targetExp = Expression.Parameter(typeof(T), "target");
var getPropInfo = typeof(IDataRecord).GetProperty("Item", new[] { typeof(string) });
foreach (var property in typeof(T).GetProperties())
{
var getPropExp = Expression.MakeIndex(paramExp, getPropInfo, new[] { Expression.Constant(property.Name, typeof(string)) });
var castExp = Expression.TypeAs(getPropExp, property.PropertyType);
//var bindExp = Expression.Bind(property, castExp);
var bindExp = Expression.Assign(Expression.Property(targetExp, property), castExp);
exps.Add(bindExp);
}
return Expression.Lambda<Action<IDataReader, T>>(Expression.Block(exps), new[] { paramExp, targetExp }).Compile();
}
public Converter()
{
_convertAction = (Action<IDataReader, T>)_convertActionMap.GetOrAdd(typeof(T), (t) => GetMapFunc());
}
public T CreateItemFromRow(IDataReader dataReader)
{
T result = new T();
_convertAction(dataReader, result);
return result;
}
}
Test method with 80,000 x 100 iterations
static void Main(string[] args)
{
var dummyReader = new DummyDataReader();
var properties = typeof(DummyObject).GetProperties();
var startDate = DateTime.Now;
var converter = new Converter<DummyObject>();
for (int i = 0; i < 80000 * 100; i++)
{
//var obj = CreateItemFromRow2<DummyObject>(new DummyDataReader());
var obj = CreateItemFromRow<DummyObject>(dummyReader, properties);
//var obj = converter.CreateItemFromRow(dummyReader);
dummyReader.DummyTail = i;
}
//var obj = CreateItemFromRow2<DummyObject>(new DummyDataReader());
Console.WriteLine("Time used : " + (DateTime.Now - startDate).ToString());
Console.ReadLine();
}
Result:
CreateItemFromRow : 18.5 seconds
Converter<T> : 7.3 seconds
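As an aside, the compile-once idea is not specific to C# expression trees. A rough Python analogue (illustrative only; all names are made up) generates the per-type mapping source once and reuses the compiled function for every row:

```python
def make_mapper(cls, columns):
    """Compile a per-class row mapper once, analogous to the expression tree above.

    columns: attribute names to copy from a dict-like row onto a new cls() instance.
    """
    body = "\n".join(f"    obj.{c} = row[{c!r}]" for c in columns)
    src = f"def _map(row, obj):\n{body}\n    return obj"
    ns = {}
    exec(src, ns)  # compiled once; the caller caches the returned function per type
    mapper = ns["_map"]
    return lambda row: mapper(row, cls())
```

As in the C# version, the cost of code generation is paid once per type, not once per row.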
Map function:
private static IList<T> Map<T>(DbDataReader dr) where T : new()
{
// initialize our returnable list
List<T> list = new List<T>();
// fire up the lamda mapping
var converter = new Converter<T>();
while (dr.Read()) {
// read in each row, and properly map it to our T object
var obj = converter.CreateItemFromRow(dr);
list.Add(obj);
}
// return it
return list;
} | {
"domain": "codereview.stackexchange",
"id": 4849,
"tags": "c#, performance, sql, sql-server, asynchronous"
} |
Why are reachability trees meaningful for Petri Nets | Question: To construct a reachability tree we start with the initial marking and then consider all enabled transitions and fire them one by one. That is, we find the marking corresponding to firing each.
I can't see why this is meaningful, because if we were simulating the Petri net, then starting from the initial state we would fire all possible transitions that can fire together to reach the next marking, not just one of them.
So I don't understand the purpose of constructing a reachability tree if the markings (configurations) it gives consider firing only one transition at a time, which isn't how a Petri net works.
Answer: Most Petri nets do work by having their transitions fired one by one.
See, for instance, these course notes. In any marking, one or more transitions may be enabled. One of these may then fire, producing a new marking. The reachability graph is the labeled directed graph with as its vertices all reachable markings, and as its arcs transitions, such that an arc labeled $t$ from $M_1$ to $M_2$ exists if and only if firing transition $t$ in marking $M_1$ will produce marking $M_2$.
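The definition above maps directly onto a breadth-first search over markings. A minimal sketch in Python (my own encoding of a place/transition net, assuming the net is bounded so the search terminates):

```python
from collections import deque

def reachability_graph(initial, transitions):
    """BFS over markings, firing one enabled transition at a time.

    initial: tuple of token counts, one per place.
    transitions: dict name -> (pre, post), each a tuple over places.
    Assumes a bounded net, otherwise the marking set is infinite.
    """
    arcs, seen, queue = [], {initial}, deque([initial])
    while queue:
        m = queue.popleft()
        for name, (pre, post) in transitions.items():
            if all(m[i] >= pre[i] for i in range(len(m))):  # transition enabled?
                m2 = tuple(m[i] - pre[i] + post[i] for i in range(len(m)))
                arcs.append((m, name, m2))  # arc labeled by the fired transition
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen, arcs
```

For example, a net with two places and one transition moving a token from the first to the second, started with two tokens, yields the three markings (2,0), (1,1), (0,2) and two arcs.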
Of course this may be extended by allowing multiple transitions, or even bags of transitions, to fire all at once. But in standard Place/Transition nets, this isn't very interesting. Firing a bag of transitions has the exact same net effect as firing each of its member transitions in some arbitrary sequence - you don't gain additional behaviour by firing them all at once.
This is no longer true for different kinds of Petri nets, such as
nets in which transitions may take time, or
nets with inhibitor arcs.
So, for a proper answer to your question, you'll need to tell us what kind of Petri net you're dealing with. | {
"domain": "cs.stackexchange",
"id": 19834,
"tags": "computation-models, petri-nets"
} |
On work done by internal forces which is coming out to be not equal to zero | Question: 1) Let us consider a block which explodes due to some internal mechanism into two smaller fragments of equal masses. The system was initially at rest and now has some finite kinetic energy (due to momentum conservation). We can hence conclude, by the work-energy theorem, that work has been done by the internal forces, since there are no other forces acting on the system. But this seems to contradict the fact that work done by internal forces is always 0. Where am I going wrong? I have researched similar questions on Stack Exchange and other sites, but to no avail.
Also, textbooks for some reason do not cover much theory on this matter, which adds to my woes.
2) I have another question: in a two-mass spring-block system, does the spring do any work? It should be 0 according to me, as it is an internal force when solving from the COM frame, but is this also true from a ground frame? While writing the work-energy theorem for this system, would the spring's work show up even in the form of potential energy?
Answer: The answer to your first question is that work done by internal forces only sums to zero in the case of rigid bodies, so the principle does not apply to an exploding body.
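A quick numerical check of the exploding-block case (with made-up numbers) shows exactly this: momentum conservation forces the fragments' momenta to cancel, yet the released potential energy appears as nonzero total kinetic energy, so the internal forces have done net work:

```python
m = 1.0        # mass of each fragment in kg (hypothetical value)
E_chem = 50.0  # stored potential energy released by the explosion, in J (hypothetical)

# Momentum conservation for equal masses starting at rest: m*v1 + m*v2 = 0, so v2 = -v1.
# Energy balance: all released PE becomes KE: 2 * (0.5 * m * v1**2) = E_chem.
v1 = (E_chem / m) ** 0.5
v2 = -v1

p_total = m * v1 + m * v2                    # stays zero, as required
KE = 0.5 * m * v1**2 + 0.5 * m * v2**2       # nonzero: work done by internal forces
```

The rigid-body result fails precisely because the fragments displace relative to each other while the internal forces act.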
The same is true for the two blocks linked by a spring: it is not a rigid body.
The two cases are analogous. In the case of the exploding block, potential energy stored in the explosive was converted into the KE of the two moving parts. Likewise the spring can store PE which is converted into the KE of the blocks. | {
"domain": "physics.stackexchange",
"id": 61787,
"tags": "work, potential-energy"
} |
Can the rate of peristaltic pump's flow be accurate across changes in fluid viscosity? | Question: I'm building an Arduino-controlled pump system to be able to move fluids around. I need this to be fairly accurate, but extreme precision isn't required. Since there will be a variety of liquids moved through the pump, I've determined a peristaltic pump is the best fit. But I don't think I fully understand them, and I had a few questions.
Since I'll need to purge the system... Can a peristaltic pump push air? Let's assume you have 2 m of tubing, and you pump a bunch of water through it. Can you remove the tube from the water reservoir so it is open to the air, and effectively purge the system of any remaining water?
Since I want to fairly accurately measure flow, could I simply count milliseconds instead of using a flowmeter? ... Will a peristaltic pump ALWAYS pump at a constant rate, regardless of the viscosity of the fluid? That is, will maple syrup come out at the same rate as water?
Shopping question, ignore I suppose ... Anyone know where I may find a fast/high-flow peristaltic pump? I'm looking to be able to pump, at a minimum, 0.5 oz/sec.
This would depend on #3 ... What sort of relay would I want for toggling this on/off with an Arduino?
Answer: The answers to "will it work" and "will viscosity affect flow rate" are really "it depends".
Peristaltic pumps that push air exist.
All pumps work by using low pressure to pull in fluid and high pressure to push it out. In this case, the low pressure is the suction created when the flexible tube tries to retain its shape after being pressed by the roller. The high pressure is created by the roller moving forward, like you would do with a tube of toothpaste.
Here's where it gets interesting.
Completely flattening the tube is a foolproof way to pump, but it shortens the life of the tube. So, a lot of pumps take advantage of the viscosity of the fluid (the slowing down of the fluid near the walls of the tube). All the rollers really need to do is make it easier for the fluid to flow forward than backwards.
So, the pump is actually balancing the following factors:
The ability of the tubing to regain its shape (springy-ness of the tube, affected by the fluid viscosity)
The pressure of the fluid at the pump exit
The resistance of the fluid to flowing backwards through the restricted tube (fluid viscosity, affected by the extent that the tube is constricted by the roller)
You'll have to find out whether your pump is completely flattening the tube, or just relying on water's higher viscosity in order to increase the life of the tubing. | {
"domain": "robotics.stackexchange",
"id": 247,
"tags": "arduino"
} |
Merge Sort an integer array | Question: I've implemented merge sort on an integer array. Is it okay to use i, j, k as variable names for looping? Should I change them to more meaningful names? Overall, any further suggestions on this code?
MergeSort.java
import java.util.Arrays;
public class MergeSort {
public static void main(String[] args) {
}
public static void mergeSort(int[] inputArray) {
int size = inputArray.length;
if (size < 2)
return;
int mid = size / 2;
int leftSize = mid;
int rightSize = size - mid;
int[] left = new int[leftSize];
int[] right = new int[rightSize];
for (int i = 0; i < mid; i++) {
left[i] = inputArray[i];
}
for (int i = mid; i < size; i++) {
right[i - mid] = inputArray[i];
}
mergeSort(left);
mergeSort(right);
merge(left, right, inputArray);
}
public static void merge(int[] left, int[] right, int[] arr) {
int leftSize = left.length;
int rightSize = right.length;
int i = 0, j = 0, k = 0;
while (i < leftSize && j < rightSize) {
if (left[i] <= right[j]) {
arr[k] = left[i];
i++;
k++;
} else {
arr[k] = right[j];
k++;
j++;
}
}
while (i < leftSize) {
arr[k] = left[i];
k++;
i++;
}
while (j < rightSize) {
arr[k] = right[j];
k++;
j++;
}
}
}
MergeSortTest.java
import static org.junit.Assert.*;
import java.util.Arrays;
import org.junit.Test;
public class MergeSortTest {
@Test
public void reverseInput(){
int[] arr={22,21,19,18,15,14,9,7,5};
MergeSort.mergeSort(arr);
assertEquals("[5, 7, 9, 14, 15, 18, 19, 21, 22]", Arrays.toString(arr));
}
@Test
public void emptyInput(){
int[] arr={};
MergeSort.mergeSort(arr);
assertEquals("[]", Arrays.toString(arr));
}
@Test
public void alreadySorted(){
int[] arr={1,2,3,4,5,6,7,8,9,10};
MergeSort.mergeSort(arr);
assertEquals("[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]", Arrays.toString(arr));
}
}
Answer: Overall I would have to say that this is the neatest and most clear implementation of a Merge Sort that I have seen. There is nothing wrong with variables i, j and k. They are standard names for looped array indexes. There is a single empty line in your code which is inconsistent with the rest of the method. That is the only nitpick I can find in terms of the style and neatness. It is a tiny, inconsequential thing.
The only recommendations I can make, are things that will improve performance, or will introduce some other common practices that may impact readability, though they are still more standard than what you have.
First, some common practice things
Consider doing post-increments inside the array-reference. This code:
while (i < leftSize && j < rightSize) {
if (left[i] <= right[j]) {
arr[k] = left[i];
i++;
k++;
} else {
arr[k] = right[j];
k++;
j++;
}
}
while (i < leftSize) {
arr[k] = left[i];
k++;
i++;
}
while (j < rightSize) {
arr[k] = right[j];
k++;
j++;
}
would commonly be written as:
while (i < leftSize && j < rightSize) {
if (left[i] <= right[j]) {
arr[k++] = left[i++];
} else {
arr[k++] = right[j++];
}
}
while (i < leftSize) {
arr[k++] = left[i++];
}
while (j < rightSize) {
arr[k++] = right[j++];
}
There is no change to the logic, the code is just a bit more compressed. The impact to readability is debatable, you get more code on a screen, but you have to look for the increments.
You copy the array contents using a regular loop:
int[] left = new int[leftSize];
int[] right = new int[rightSize];
for (int i = 0; i < mid; i++) {
left[i] = inputArray[i];
}
for (int i = mid; i < size; i++) {
right[i - mid] = inputArray[i];
}
The above would typically be done (for performance reasons), as one of two ways, either:
int[] left = new int[leftSize];
int[] right = new int[rightSize];
System.arraycopy(inputArray, 0, left, 0, leftSize);
System.arraycopy(inputArray, leftSize, right, 0, rightSize);
In Java6 and above, the Arrays utility class can be used too:
int[] left = Arrays.copyOfRange(inputArray, 0, leftSize);
int[] right = Arrays.copyOfRange(inputArray, leftSize, inputArray.length);
I would go with the Arrays.copyOfRange(...) call.
The above changes do not change the logic of your program at all, just the techniques used at the various points.
Your implementation would still be 'textbook'.
In terms of performance, though, your limit is the number of times you create, copy, and discard arrays of data.
There is a variant of the Merge Sort that uses just two arrays, the input array, and a 'temp' array that is the same size. The algorithm repeatedly merges small chunks of data from one array, to the other, then swaps them, and merges the now larger chunks back to the first, and keeps doing that until the data is sorted.
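The variant described in the last paragraph can be sketched as follows (in Python for brevity; the Java translation is mechanical). It merges ever-wider runs between the input buffer and a single temp buffer, swapping roles each pass, so no per-recursion arrays are allocated:

```python
def bottom_up_merge_sort(a):
    """Iterative merge sort using the input list and one reusable temp buffer.

    Returns a sorted list; the input may be left partially reordered,
    since the two buffers swap roles each pass.
    """
    n = len(a)
    src, dst = a, a[:]  # the second buffer is allocated exactly once
    width = 1
    while width < n:
        # Merge adjacent runs of length `width` from src into dst.
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            i, j = lo, mid
            for k in range(lo, hi):
                if i < mid and (j >= hi or src[i] <= src[j]):
                    dst[k] = src[i]
                    i += 1
                else:
                    dst[k] = src[j]
                    j += 1
        src, dst = dst, src  # swap buffers instead of copying back
        width *= 2
    return src
```

Each pass touches every element once, and there are O(log n) passes, so the complexity is unchanged; only the allocation and copying overhead disappears.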
Using that algorithm means there is no additional array copying, etc. It is much faster, but the implementation would look very different to yours. Still, I would recommend you try it. There are examples on Wikipedia. | {
"domain": "codereview.stackexchange",
"id": 23199,
"tags": "java, beginner, sorting, mergesort"
} |
Understanding interaction terms in the Lagrangian | Question: Given a Lagrangian, how does one understand the dynamics by just observing the interaction terms? Specifically, can I determine all, or at least some, of the possible scattering processes just by observing the interaction terms in the Lagrangian?
Particularly, let's say, $$L_{int} = g\bar{\psi}\psi\phi + \lambda \phi^4.$$ In this case, how do I figure out the possible interactions?
I can see that there will be self-interactions of the $\phi$ field, and that the "Yukawa term", $g\bar{\psi}\psi\phi$, can be used to model nucleon-nucleon scattering. How do we know that such a term can model nucleon scattering, and are there other non-trivial interactions?
How does one figure out that, just from the Lagrangian?
Answer: It's important to look at what field creates/annihilates what. A real scalar field has both $a, a^\dagger$ and can therefore both create/annihilate particles. A complex scalar field comes in 2 versions: $\phi\sim a+b^\dagger$ and $\phi^\dagger\sim a^\dagger+b$, where $a,b$ refer to particle and antiparticle. It is clear that $\phi$ annihilates a particle but creates an antiparticle, whereas $\phi^\dagger$ does the opposite. Similarly, for fermions, we have $\psi\sim a+b^\dagger$, $\bar{\psi}\sim a^\dagger+b$. It is clear that $\psi$ annihilates a fermion and creates an antifermion, and $\bar{\psi}$ does the opposite. Then, in a Yukawa-like interaction, we see that given $\bar{\psi}\psi\phi$; we can either
Annihilate antifermion with $\bar{\psi}$ and annihilate fermion with $\psi$ at a vertex, and annihilate/create a scalar $\phi$ (with a complex scalar, there's a distinction between what's annihilated and created, but not here).
Annihilate fermion with $\psi$ and create fermion with $\bar{\psi}$ and create or annihilate a scalar $\phi$. You can read these as 'fermion and scalar annihilate to produce fermion' or as 'fermion is annihilated to create a fermion-scalar pair' respectively.
and so on.
Some details are sketched below.
The scattering amplitudes are constructed from correlation functions using standard techniques. In particular, we have the Dyson formula where we do a time ordered integral over a term that looks like $e^{-i\int dt H(t)}$, where $H$ is the corresponding hamiltonian that captures the interactions.
In perturbation theory, the Hamiltonian contains some small coupling constant (say, $g$ or $\lambda$) which allows us to expand the exponential term above in a power series in these couplings and fields. For example, schematically, $\langle T\{e^{-i\int dt\int \lambda\phi^4 d^3x}\}\rangle\approx \langle 1-\lambda\int\phi^4dtd^3x+\lambda^2(\cdot)+...\rangle$. So in perturbation theory, the problem reduces to that of finding n-point functions of these fields.
These correlators are evaluated by Wick's theorem. The question now is, what correlations are we computing i.e. what are the initial and final states. The correlator simply measures the probability amplitude for a given state to go into another one in the presence of interactions(given by the field content/operator inside the $\langle\cdot\rangle$).
Remember that the fields inside the correlators are 'interaction picture' fields and have an expansion in terms of the creation and annihilation operators, generically denoted $a_p^\dagger, a_p$. Now, an external state could be something like a one particle state made of a scalar, such as $|p\rangle=a^\dagger_p|0\rangle$, or a 2 particle state made out of, say, a fermion and an anti-fermion $|p_f, p'_{af}\rangle=a^\dagger_pb^\dagger_{p'}|0\rangle$(spin labels suppressed). When calculating a correlation function between such states, these external state creation/annihilation operators can contract with the relevant fields, and Wick's theorem tells us how to do these contractions.
The key part then becomes identifying what external states we are looking at. The minimal building block of these contractions, the lowest possible number of fields you require to get a nonzero Wick contraction, is diagrammatically interpreted as a vertex. Then, studying scattering amounts to understanding the allowed vertices (and the propagators connecting them, which come from the sandwiched fields contracting with each other instead of external states). For example, in $\phi^4$ theory, you cannot have a correlation function involving 3 external states and say 4 powers of $\phi$, as something is sure to go uncontracted and the result will be zero. 2 external states is just the 2 point function, the propagator, which we already know. If we had 4 external states, however, then each external state could be contracted with one of the $\phi$'s inside the correlator and one would say a scattering has occurred. Whether this is a $1\to 3$ scattering or $2\to 2$ is a different question that depends on specifying precisely the external states and their time stamps. So, the 'vertex' here is a 4-point function: it means that the lowest order process involves 4 particles.
In Yukawa theory, you have the interaction $g\bar{\psi}\psi\phi$. This term has 2 sets of fermion-antifermion creation/annihilation operators, and one set of scalar creation/annihilation operators. This means the external states should comprise at least 2 fermions/antifermions and one scalar to have anything that's nonzero. Depending on your choice of initial states, such a vertex can describe a process like $fermion+antifermion\to scalar$, or $(anti)fermion+scalar\to(anti)fermion$. Again, what process actually happens requires you to specify what the precise external states are. | {
"domain": "physics.stackexchange",
"id": 85866,
"tags": "quantum-field-theory, lagrangian-formalism, scattering, feynman-diagrams, interactions"
} |
What is the advantage of positional encoding over one hot encoding in a transformer model? | Question: I'm trying to read and understand the paper Attention is all you need and in it, they used positional encoding with sin for even indices and cos for odd indices.
In the paper (Section 3.5), they mentioned
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks.
My question is that if there is no recurrence, why not use One Hot Encoding. What is the advantage of using a sinusoidal positional encoding?
Answer: You are mixing two different concepts in the same question:
One hot encoding: an approach to encode $n$ discrete tokens with $n$-dimensional vectors that are all 0's except for one 1. This can be used to encode the tokens themselves in networks with discrete inputs, but only if $n$ is not very large, as the amount of memory needed is very large. Transformers (and most other NLP neural models) use embeddings, not one-hot encoding. With embeddings, you have a table with $n$ entries, each of them being a vector of dimensionality $e$. In order to represent token $k$, you select the $k$th entry in the embedding table. The embeddings are trained with the rest of the network on the task.
Positional encoding: in recurrent networks like LSTMs and GRUs, the network processes the input sequentially, token after token. The hidden state at position $t+1$ depends on the hidden state from position $t$. This way, the network has a means to identify the relative positions of each token by accumulating information. However, in the Transformer, there is no built-in notion of the sequence of tokens. Positional encodings are the way to solve this issue: you keep a separate embedding table with vectors. Instead of using the token to index the table, you use the position of the token. This way, the positional embedding table is much smaller than the token embedding table, normally containing a few hundred entries. For each token in the sequence, the input to the first attention layer is computed by adding up the token embedding entry and the positional embedding entry. Positional embeddings can either be trained with the rest of the network (just like token embeddings) or pre-computed by the sinusoidal formula from (Vaswani et al., 2017); having pre-computed positional embeddings leads to less trainable parameters with no loss in the resulting quality.
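For concreteness, the pre-computed sinusoidal formula from the paper, $PE_{(pos,2i)}=\sin(pos/10000^{2i/d})$ and $PE_{(pos,2i+1)}=\cos(pos/10000^{2i/d})$, can be written out directly (a plain-Python sketch; real implementations vectorize this):

```python
import math

def sinusoidal_pe(position, d_model):
    """Positional encoding vector for one position, per Vaswani et al. (2017).

    Even dimensions get sin, odd dimensions get cos, with geometrically
    increasing wavelengths across the dimensions.
    """
    pe = []
    for i in range(0, d_model, 2):  # i is the even dimension index (2i in the paper)
        angle = position / (10000 ** (i / d_model))
        pe.append(math.sin(angle))
        pe.append(math.cos(angle))
    return pe[:d_model]  # trim the extra value when d_model is odd
```

Because every component is a bounded sinusoid, the encoding of any position fits in the same numeric range as the token embeddings it is added to, regardless of sequence length.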
Therefore, there is no advantage of one over the other, as they are used for orthogonal purposes. | {
"domain": "datascience.stackexchange",
"id": 12172,
"tags": "machine-learning, nlp, encoding, attention-mechanism"
} |
Reciprocating vs. rotating machines | Question: Why is the RPM achieved by a reciprocating machine in general lower than that obtained by a rotating machine? For example, an IC engine typically provides an RPM of about 2500, while a turbine can go as high as 80,000 RPM. The inertia and the repetitive change in direction of motion in the case of a reciprocating machine is one aspect, but is there an analytical justification/limitation on the RPM?
Answer: A turbine has its mass more widely distributed, so that the forces needed to accelerate each part are spread over larger area cylindrical shells and not transmitted through a narrow connecting rod, so the stresses in the blades of a turbine are much less than the stresses in the push rod of a reciprocating engine working at the same angular speed.
To understand this statement more deeply, think of the mass of a piston divided up amongst many equal spokes and what the stresses in these spokes as opposed to the stresses in a single push rod shoving the same mass, but concentrated, to and fro.
It is also likely that one can balance (i.e. achieve zero oscillating force on the bearings through even mass distribution) a nearly axisymmetric object like a turbine than one far from axisymmetry much more accurately for the same mechanical tolerances on the parts.
A factor which I do not understand but which may also be important here is the rate of reaction for the fuel burning happening inside the engine, so a chemist or thermodynamicist can answer this part of the question. The reactants probably have a much longer time to burn more evenly as they flow through a turbine than in the cylinder of a reciprocating engine working at the same angular frequency. This rate will limit the speed as well. | {
"domain": "physics.stackexchange",
"id": 10162,
"tags": "newtonian-mechanics"
} |
What are some good books on oncology? | Question: I'm looking for some book suggestions on oncology, preferably I want them to be fairly recent. I am not worried if they are fairly technical, as long as they have good accurate content and layout.
Answer: You could try DeVita, Hellman, and Rosenberg's Cancer: Principles & Practice of Oncology. It has detailed information on both biological and clinical aspects of cancer. Apparently, the publishers regularly release online updates to cover the latest developments in oncology.
Reference to the current edition:
DeVita VT, Rosenberg SA, Lawrence TS, editors. DeVita, Hellman, and Rosenberg's cancer: principles & practice of oncology. 11th Edition. Philadelphia: Wolters Kluwer; 2018. 2432 p. | {
"domain": "biology.stackexchange",
"id": 10678,
"tags": "physiology, cancer, medicine"
} |
What is the amplifying medium in a laser diode? | Question: I know it seems like a trivial question, but I can't seem to find the answer anywhere.
So - what is the amplifying medium in a laser diode?
Meaning: In a classical laser resonator, there is a substance (like HeNe gas) which undergoes a population inversion to allow lasing.
Every source I've read describes how a laser diode produces photons via recombination, and has metallic walls to allow photons to bounce back and forth inside the cavity, but no mention of an amplifying medium.
Answer: The amplifying medium in a semiconductor diode laser is the semiconductor itself. More precisely, it's a thin layer of material surrounding the junction that separates the p-type side of the diode from the n-type. Application of an electric current through the diode allows a non-equilibrium condition to exist in that layer in which electrons and holes can exist in the same region of material ... briefly, until recombination occurs, and light is emitted. The current, however, continues to provide a fresh supply of electrons and holes. So a steady-state condition, not an equilibrium condition, exists where electrons and holes are constantly injected, electrons and holes constantly recombine, and light is constantly emitted. | {
"domain": "physics.stackexchange",
"id": 13792,
"tags": "optics, laser"
} |
Poisson's equation for time dependent charge | Question: Is Poisson's equation valid for a time dependent charge density? I think Poisson's equation is valid just for electrostatic fields. But I saw a paper that used this equation for a time dependent charge density. Can anyone help me with this contradiction?
Answer: Poisson's equation is not restricted to electrostatics; it can be valid for time-varying fields. The Wikipedia page on the subject says in deriving
$$
\nabla^2\varphi=-\frac{\rho_f}{\epsilon_0}\tag{1}
$$
that (my emphasis),
The above discussion assumes that the magnetic field is not varying in time. The same Poisson equation arises even if it does vary in time, as long as the Coulomb gauge is used. In this more general context, computing $\varphi$ is no longer sufficient to calculate $\mathbf E$, since $\mathbf E$ also depends on the magnetic vector potential $\mathbf A$, which must be independently computed. See Maxwell's equation in potential formulation for more on $\varphi$ and $\mathbf A$ in Maxwell's equations and how Poisson's equation is obtained in this case.
Where the Coulomb gauge requires
$$
\nabla\cdot\mathbf A(\mathbf r,t)=0
$$
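To see explicitly why this gauge choice matters, substitute the potential form of $\mathbf E$ into Gauss's law:
$$
\nabla\cdot\mathbf E = \frac{\rho_f}{\epsilon_0},\qquad
\mathbf E = -\nabla\varphi - \frac{\partial\mathbf A}{\partial t}
\;\Rightarrow\;
-\nabla^2\varphi - \frac{\partial}{\partial t}\left(\nabla\cdot\mathbf A\right) = \frac{\rho_f}{\epsilon_0},
$$
so when $\nabla\cdot\mathbf A=0$ the time-derivative term vanishes identically and Poisson's equation holds at each instant.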
If we pick this gauge (always possible), then the (1) is valid for electrodynamics. | {
"domain": "physics.stackexchange",
"id": 18754,
"tags": "electromagnetism, classical-electrodynamics"
} |
How did the three quarks ($u, d, s$) acquire different masses? | Question: If the three quarks $u, d, s$ had the same mass, they would have an $SU(3)$ flavor symmetry ($u, d, s$). This symmetry is broken because these three quarks have acquired different masses through interactions with the Higgs field (Yukawa interactions). However, in the Standard Model, Yukawa interactions are between the Higgs field and the doublet ($u, d$). What about the triplet ($u, d, s$)? How does this triplet interact with the Higgs field so that these quarks acquire their different masses?
Answer: In the SM, all six quarks, d,u,s,c,b,t, (and leptons) get their varied masses through gauge-invariant Yukawa interactions; their strong or generation symmetries are completely irrelevant, and the size or systematics or such masses is not part of the SM to explain. They are six arbitrary parameters (Yukawa couplings) completely unconstrained by SM symmetries; but, of course, beyond the SM model-building seeks to predict them, somehow.
Typically, e.g., the weak-gauge-invariant couplings responsible for the mass of the d are
$$
-y_d \overline{ \begin{pmatrix} u_{L} \\ d_L \end{pmatrix} } \cdot \Phi ~ ~d_R +\hbox{h.c.},
$$
where the v.e.v. of the Higgs amounts to
$$
\langle \Phi \rangle = \frac{v}{\sqrt{2}} \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$
for $v \sim 0.25$ TeV. You then see $m_d=y_d v/\sqrt{2}$.
The mass of the u in the weak doublet knows nothing about that coupling, and arises out of a completely independent Yukawa,
$$
-y_u \overline{ \begin{pmatrix} u_{L} \\ d_L \end{pmatrix} } \cdot \tilde{\Phi} ~ ~u_R +\hbox{h.c.},
$$
where, of course,
$$
\langle \tilde{\Phi} \rangle =\langle i\tau_2 \Phi^* \rangle = \frac{v}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \end{pmatrix}.
$$
You write two such terms of each kind for the other four quarks, and you are done.
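To see the sizes involved, one can back out the Yukawa couplings from the measured masses via $y_q=\sqrt{2}\,m_q/v$ (rough central values, used purely for illustration):

```python
v = 246.0  # Higgs vev in GeV (approximate)

# Approximate quark masses in GeV; rough values for illustration only.
masses_gev = {"u": 0.0022, "d": 0.0047, "s": 0.095,
              "c": 1.27, "b": 4.18, "t": 173.0}

# Invert m_q = y_q * v / sqrt(2) to get each Yukawa coupling.
yukawas = {q: (2 ** 0.5) * m / v for q, m in masses_gev.items()}
```

The six couplings span roughly five orders of magnitude, from about $10^{-5}$ for the up quark to order one for the top, which is exactly the unexplained pattern of arbitrary parameters described above.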
The sizes of the Yukawas, and so the masses are experimental inputs: the structure of the SM accommodates them all, and gives model-builders something to do in inferring them out as something beyond the SM. Thus, there never could be an issue of them acquiring different masses:
There never was a good reason for any quark masses, or any fermion masses, not to be as different as they please. Expectations to the contrary in the SM rise to the level of metaphysical falsehood.
Corrections of these masses due to electromagnetism or chiral symmetry breaking effects of QCD are implicit in the SM basic interactions, but messier to estimate.
A small practical complication in "real life": for the 3 generations of the real world, there are more Yukawas, cross-generational ones, yielding more elaborate, non-diagonal mass matrices. Diagonalization of these ends up producing the CKM mixing matrix. | {
"domain": "physics.stackexchange",
"id": 51040,
"tags": "particle-physics, mass, standard-model, higgs, quarks"
} |
Why does a balloon spiral in the air instead of moving in a straight line? | Question: When an air-filled balloon is released without its opening tied up, it moves in a circular path rather than a straight line. Why is that?
Answer: A friend sent this as his explanation and it seems quite satisfactory to me:
For a balloon to fly in a straight line, the direction of the jet of expelled air would have to be in line with the balloon’s centre of mass and its centre of drag – the point where the forces resisting the balloon’s forward motion are symmetrical
If these two centres don’t coincide, the centre of drag should be behind the centre of mass, otherwise stability is compromised. The reason darts and arrows have flights is to keep the two centres in line and ensure drag is at the rear of the moving projectile.
If the balloon’s line of thrust does not pass through the centre of mass (which is almost certain) but is in the same plane as the line joining the centres of mass and drag (which
is unlikely), the balloon would travel in a circle in that plane, although the pull of gravity will ultimately force it down to the ground, especially as the air driving it forward expires. However, because these lines generally do not intersect, thrust from the balloon’s opening comes at an angle to the plane of the circle, pushing the balloon into the helical, screw-like motion you saw when carrying out the experiment. The thrust of the balloon and the air resistance to the balloon will not cancel each other out in such a situation and so a turning moment is exerted. | {
"domain": "physics.stackexchange",
"id": 2129,
"tags": "newtonian-mechanics, fluid-dynamics, kinematics, everyday-life"
} |
Why do bluetooth headsets get interference (choppy sound quality) outdoors? | Question: This is more of a general physics question to help me understand how to choose sports headsets in the future, however it is too specific to a certain use case (bluetooth headset) to belong in the Physics StackExchange.
Problem:
3 of my 4 sports bluetooth headsets suffer from choppy sound quality (signal interference?) the minute I step outside. It seems to be especially bad under a clear sky no matter the time of day.
The 4th and the most expensive headset does not suffer from this, but it has a different (more compact) design. The worst performance was by the cheapest sports headset (from ebay) and it was unusable outdoors.
Observations:
In all cases my phone is located in my trouser pocket, but moving it closer to my head (the headset) seems to lessen signal interference, but not remedy the issue completely.
In all tests the headsets have been fully charged.
The speed of movement also seems to play a factor, but I am not able to reproduce that effect every time.
I can say that clothes are not a factor, as the same clothing indoors produces no interference.
Theories:
background EM radiation coming from outer space?
EM radiation coming from our sun? Although I have not observed if the moon reflecting the EM from the Sun plays a part or not.
Answer: As you might have already realized, there is a good reason that some Bluetooth headsets are inexpensive: poor design. High-quality, well-designed Bluetooth headsets tend to be expensive. Also, not all expensive Bluetooth headsets are high quality.
While it is difficult to root-cause failures without physically analyzing the hardware, I will provide some possibilities. Since the device is a headset, it is using Bluetooth Classic. Either way, some understanding of Bluetooth technology might serve well.
Bluetooth operates in the 2.4 GHz frequency band, more specifically between 2402 and 2480 MHz. If the 2 MHz bottom and 3.5 MHz upper guard bands are included, the total band spans 2400 to 2483.5 MHz. Bluetooth Classic uses 79 channels, each with a 1 MHz bandwidth. More importantly, Bluetooth uses a radio technology called frequency-hopping spread spectrum, and if adaptive frequency hopping (AFH) is enabled it usually performs 800 hops a second. Bluetooth divides data into packets and each packet is transmitted on one of the 79 channels.
When transmitting, the data is subjected to path loss (or path attenuation): a reduction in the power density of the propagating electromagnetic waves. In your case this path loss is influenced by the propagation medium, such as dry or moist air, as well as the distance between the transmitter and receiver.
In your situation propagation losses would include absorption loss (or penetration loss), which is when the signal passes through a medium not transparent to electromagnetic waves, and diffraction loss, which is when the radio signal is obstructed by an opaque obstacle.
The signal transmitted by your device also takes multiple paths before arriving at the receiver. The signal received by the receiver might vary in strength, and propagation time etc.
A well-designed system will account for all of the above and many more factors. In your case the inexpensive headset might have a low Received Signal Level (RSL), or might not have accounted for fade margins. A good design will include about 20 to 25 dB of fade margin. So if your headset has an RSL of -40 dBm and a receive threshold of about -60 dBm, the design has accounted for 20 dB of fade margin.
Also note that different environmental factors cause different levels of signal attenuation. For example, a water drop can cause about 5 dB of attenuation. Not all environmental factors are present at all times. Depending on the application, designers choose applicable fade margins.
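To make these numbers concrete, here is a small illustrative sketch. The values are hypothetical, chosen only to mirror the margin example above, and free-space path loss is only one standard model for the path attenuation discussed; real links add absorption, diffraction, and multipath losses on top of it.

```python
import math

# Free-space path loss in dB for an unobstructed link.
def fspl_db(distance_m, freq_hz):
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# ~1 m phone-to-headset link at a mid-band Bluetooth channel:
loss = fspl_db(1.0, 2.44e9)

# Bluetooth Classic channel centers: 79 channels of 1 MHz, starting at 2402 MHz.
channels_mhz = [2402 + k for k in range(79)]

# Fade margin: received signal level minus receiver threshold. Both levels
# are in dBm (conventionally negative); the margin itself is in dB.
rsl_dbm = -40.0        # hypothetical received signal level
threshold_dbm = -60.0  # hypothetical receiver sensitivity
fade_margin_db = rsl_dbm - threshold_dbm

print(f"FSPL at 1 m: {loss:.1f} dB")
print(f"channels: {channels_mhz[0]}..{channels_mhz[-1]} MHz")
print(f"fade margin: {fade_margin_db:.0f} dB")
```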
I hope I have given you a good overview of Bluetooth, radio frequency, and electromagnetic propagation, to help you make an informed technical decision on your next purchase.
Reference:
Does Weather Effect Wireless? The 5 Misconceptions - Part 1
What Are Some Signal Interference Issues for the Bluetooth Technology?
Receive Signal Level (RSL or RSSI) is Too Low
What is the difference between Bluetooth Low Energy and Bluetooth BR/EDR in Park mode? | {
"domain": "engineering.stackexchange",
"id": 1205,
"tags": "electrical-engineering, telecommunication, wireless-communication"
} |
Maximal Velocity of an Object in Free Fall | Question: A baseball is dropped from a high point. Since the velocity is large, we can say that the drag force is proportional to the square of the velocity, $F_d = \gamma v^2$. My goal is to determine the maximal (terminal) velocity of the ball.
My attempt to solve the problem started with some simple force balancing:$$ F = ma = -mg + \gamma v^2 $$
Because I'm solving for v(t), I took $a = \frac{dv}{dt}$:
$$ m \frac{dv}{dt} = -mg+\gamma v^2$$
I then took the integral of both sides multiplied by dt:
$$ \int \frac{m}{mg-\gamma v^2}dv = \int-dt $$
$$m\int \frac{1}{g-\frac{\gamma}{m}v^2}dv = -\int dt$$
At this point, the integral becomes very complicated and it is difficult to solve for v(t). I'm sure there's a simpler way to solve this problem, but I just can't see it. What am I overlooking?
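For reference, the same force balance can be integrated numerically (taking downward as positive, so $m\,dv/dt = mg - \gamma v^2$, and using illustrative values for $m$ and $\gamma$); the speed visibly saturates instead of growing without bound:

```python
import math

# Numerical sketch: with downward taken positive, m dv/dt = m g - gamma v^2.
# Setting dv/dt = 0 gives the terminal speed v_t = sqrt(m g / gamma).
m, g, gamma = 0.145, 9.81, 1.0e-3   # illustrative values (kg, m/s^2, kg/m)
v_t = math.sqrt(m * g / gamma)

# Plain Euler integration from rest, long enough to converge.
v, dt = 0.0, 1e-3
for _ in range(100_000):
    v += dt * (g - (gamma / m) * v * v)

print(f"terminal speed: {v_t:.2f} m/s, integrated: {v:.2f} m/s")
```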
Answer: When the ball has reached the terminal velocity, this doesn't increase anymore, and therefore $\dot v=0$. I think you can take it from here. | {
"domain": "physics.stackexchange",
"id": 19139,
"tags": "homework-and-exercises, differentiation, free-fall"
} |
Acidic strength order in gaseous phase | Question: I got this question in my exam:
Arrange the following in increasing order of acidic strength in gaseous phase
(i) $\ce{CH3CH2COOH}$ (ii) $\ce{CH3COOH}$
I felt that increasing order must be (i) $<$ (ii). But answer given is (ii) $<$ (i). Why will the order change in gaseous phase?
Answer: Acidity or acid strength can be expressed either in terms of the proton affinity (PA) of the conjugate base, e.g., PA($\ce{A-}$), or in terms of the acid dissociation constant, e.g., $\mathrm{p}K_{\mathrm{a}(\ce{HA})}$. The former expression is often favored among gas-phase chemists, whereas the latter expression is most familiar to solution chemists.
In aqueous solution, $\mathrm{p}K_{\mathrm{a}(\ce{CH3CO2H})} = 4.76$ while $\mathrm{p}K_{\mathrm{a}(\ce{CH3CH2CO2H})} = 4.87$, thus acidity of acetic acid is greater than that of propanoic acid. Keep in mind that solvent plays a huge role in $\mathrm{p}K_{\mathrm{a}}$ measurements. For example, compare $\mathrm{p}K_{\mathrm{a}}$ measurements in $\ce{H2O}$ and in $\ce{DMSO}$:
$$
\begin{array}{c|cc}
\hline
\ce{HA} & \mathrm{p}K_{\mathrm{a}} \text{ in } \ce{H2O} & \mathrm{p}K_{\mathrm{a}} \text{ in } \ce{DMSO} \\
\hline
\ce{CH3CO2H} & 4.76 & 12.3 \\
\ce{H2O} & 15.7 & 31.2 \\
\ce{CH3OH} & 15.5 & 27.9 \\
\ce{C6H5OH} & 9.95 & 18.0 \\
\hline
\end{array}
$$
However, the proton affinity (PA) measurements of the conjugate bases of these acids in the gas phase tell a different story. Proton affinity is a thermodynamic measurement conducted in the gas phase. These measurements give a pure view of acid/base strength, without complications from solvent effects such as solvation, thus taking the solvent out of the equation.
Caldwell et al. (Ref. 1) have measured the proton affinities of various $\ce{R-CO2H}$ acids, including ethanoic and propanoic acids ($\ce{R}=\ce{CH3}$ and $\ce{CH3CH2}$, respectively). They conclude that:
The acidity increment (with the size of $\ce{R}$-group) is relatively small and can be attributed to the stabilizing effect of the polarizability of the alkyl group on the negative ion.
References:
Gary Caldwell, Richard Renneboog, Paul Kebarle, “Gas phase acidities of aliphatic carboxylic acids, based on measurements of proton transfer equilibria,” Canadian Journal of Chemistry 1989, 67(9), 611-618 (https://doi.org/10.1139/v89-092).
Also read: Diethard K. Bohme, Edward Lee-Ruff, L. Brewster Young, “Acidity order of selected Brønsted acids in the gas phase at $\pu{300 K}$,” Journal of the American Chemical Society 1972, 94(15), 5153-5159 (https://doi.org/10.1021/ja00770a002) and references therein. | {
"domain": "chemistry.stackexchange",
"id": 12856,
"tags": "organic-chemistry, acid-base"
} |
Strong or weak force between protons and neutrons? | Question: I had learned that what keeps the protons and neutrons bound is the strong force and the weak force has to do with radioactive decay. but today I read an article from Science Daily that contradicts this. Here is the quote:
“Protons and neutrons are made of smaller particles called quarks that are bound together by the strong interaction, which is one of the four known forces of nature: strong force, electromagnetism, weak force and gravity. The weak force exists in the tiny distance within and between protons and neutrons; the strong interaction confines quarks in neutrons and protons.”
Can this be clarified by the community? Thanks!
Answer: I think it is poor phrasing. The strong force is responsible for keeping quarks together inside a proton or a neutron, but it also acts between quarks belonging to different protons/neutrons (though I am not sure about the proton-proton interaction because of the repulsive electromagnetic force).
By contrast, the weak force is responsible for turning one type of quark into another type of quark, thus transforming a proton into a neutron in the case of beta+ radioactive decay, with emission of a positron (the overall charge is always conserved).
"domain": "chemistry.stackexchange",
"id": 13789,
"tags": "atoms, atomic-structure"
} |
Tutorials - Understanding ROS Nodes | Question:
Hello!
I have Ubuntu 10.04 and installed ROS Electric.
I am trying to run $ rosmake turtlesim (Tutorials - Understanding ROS Nodes), but get the following error message:
rosmake Packages requested are: ['turtlesim']
rosmake Logging to directory/home/janina/.ros/rosmake/rosmake_output-20120802-160904
rosmake Expanded args ['turtlesim'] to:
[]
rosmake WARNING: The following args could not be parsed as stacks or packages: ['turtlesim']
rosmake ERROR: No arguments could be parsed into valid package or stack names.
How can I install the turtlesim package? I think it's missing?
Originally posted by Janina on ROS Answers with karma: 11 on 2012-08-02
Post score: 0
Answer:
You need to export the package paths correctly. As you already noticed, your packages cannot be found. The reason is most likely that you did not correctly set up the ROS_PACKAGE_PATH variable.
you can check the Path by:
echo $ROS_PACKAGE_PATH
The output shows the paths in which ROS searches for packages.
Check also this page here
Originally posted by tlinder with karma: 663 on 2012-08-05
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 10454,
"tags": "ros"
} |
Texture loading in LWJGL 3 | Question: I made the following class to simpify texture loading in LWJGL 3, and modeled it after the slick-util API:
public class Texture {
private ByteBuffer data;
private int id;
private int width;
private int height;
private ByteBuffer resizeBuffer(ByteBuffer buffer, int size) {
ByteBuffer newBuffer = BufferUtils.createByteBuffer(size);
buffer.flip();
newBuffer.put(buffer);
return newBuffer;
}
private ByteBuffer imageToByteBuffer(String path, int size) throws IOException {
ByteBuffer buffer;
URL url = Thread.currentThread().getContextClassLoader().getResource(path);
File file = new File(url.getFile());
if (file.isFile()) {
FileInputStream stream = new FileInputStream(file);
FileChannel channel = stream.getChannel();
buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
channel.close();
stream.close();
} else {
buffer = BufferUtils.createByteBuffer(size);
InputStream source = url.openStream();
if (source == null) {
throw new FileNotFoundException(path);
}
try {
ReadableByteChannel channel = Channels.newChannel(source);
try {
while (true) {
int bytes = channel.read(buffer);
if (bytes == -1) {
break;
}
if (buffer.remaining() == 0) {
buffer = resizeBuffer(buffer, buffer.capacity() * 2);
}
}
buffer.flip();
} finally {
channel.close();
}
} finally {
source.close();
}
}
return buffer;
}
public Texture(String path) throws IOException {
IntBuffer width = BufferUtils.createIntBuffer(1);
IntBuffer height = BufferUtils.createIntBuffer(1);
IntBuffer components = BufferUtils.createIntBuffer(1);
data = STBImage.stbi_load_from_memory(imageToByteBuffer(path, 1024), width, height, components, 4);
id = GL11.glGenTextures();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, id);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR_MIPMAP_LINEAR);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL14.GL_GENERATE_MIPMAP, GL11.GL_TRUE);
this.width = width.get(0);
this.height = height.get(0);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA8, this.width, this.height, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, data);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
STBImage.stbi_image_free(data);
}
public int getWidth() {
return width;
}
public int getHeight() {
return height;
}
public int getID() {
return id;
}
public ByteBuffer getByteBuffer() {
return data;
}
public byte[] getByteArray() {
return data.array();
}
public void bind() {
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, id);
}
public void release() {
GL11.glDisable(GL11.GL_TEXTURE_2D);
}
}
I use it as follows:
bg.bind();
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0F, 0F);
GL11.glVertex2f(-1F, 1F);
GL11.glTexCoord2f(1F, 0F);
GL11.glVertex2f(1F, 1F);
GL11.glTexCoord2f(1F, 1F);
GL11.glVertex2f(1F, -1F);
GL11.glTexCoord2f(0F, 1F);
GL11.glVertex2f(-1F, -1F);
GL11.glEnd();
bg.release();
where bg is a declared Texture that throws no errors.
Answer: This is a pretty straightforward and readable implementation, and I see how it can be used. Here are some suggestions:
Use Modern OpenGL
It looks like you're using OpenGL 1.x. That's really old OpenGL. You should be using OpenGL 3 or later, really. It's supported on all major platforms. It's not too difficult to learn if you already know OpenGL 1 and it offers a lot of benefits.
Keep Track of Your Texture Target
While a texture ID is certainly an important piece of information to keep around, it's not the only piece you need. Almost all texture-related functions in OpenGL require a texture target instead of or in addition to a texture ID. If you don't keep it around you'll either end up needing it and not having it, or being forced to always use one particular texture target, which is not ideal.
Naming
While your variable names are decent, they could be even better. For example, in both resizeBuffer() and imageToByteBuffer() you have an argument named size. What units is size in? Bytes? Pixels? Vertices? You should be more explicit with a name like numBytes or size_in_bytes, or whatever's appropriate.
Also the function name imageToByteBuffer() is confusing. From the name, it seems like it's going to read a file at a path and return a byte buffer containing the pixels from the image at that path. But the second half of the function looks like it does something else. I'm not familiar enough with the Java libraries to understand when file.isFile() would return false, and what else a file would be if not a file. (I assume since it's a URL it might be a network resource?) I would recommend making 2 functions - one for reading a local file and one for reading a remote file (or whatever the else path does), with appropriate names and call them from imageToByteBuffer. Something like this:
if (file.isFile())
{
return imageFromLocalFile(path);
}
else
{
return imageFromRemoteFile(path, size); // or whatever it should be called
}
Also, how does a caller know what to pass in for size? Isn't that something that is discovered by reading the file? | {
"domain": "codereview.stackexchange",
"id": 24647,
"tags": "java, image, opengl, lwjgl"
} |
What is the core of the lightning? | Question: The post Lightning mentions about "the core of a lightning bolt":
The core of a lightning bolt is a few centimeters in diameter.
I couldn't find any article discussing this. The reference link has been changed, so I couldn't read the original post. So what is it?
Answer: I found the paper you initially referred to here:
http://onlinelibrary.wiley.com/doi/10.1029/JB073i006p01889/full
based on the hyperlink by the xkcd webpage you referred
http://www.agu.org/pubs/crossref/1968/JB073i006p01889.shtml
The full reference for this paper is: Oetzel, G. N. (1968), Computation of the diameter of a lightning return stroke, J. Geophys. Res., 73(6), 1889–1896, doi:10.1029/JB073i006p01889.
In the abstract:
The diameter of a first return stroke is found to lie in the range 1–4 cm. Subsequent return strokes are smaller, with diameters in the range 0.2–0.5 cm. In many cases, strokes to towers have shown evidence of the smaller strokes occurring alone. These are explained as strokes initiated by an upward leader.
This paper is quite old - but is relevant nevertheless.
A more recent review paper, titled 'The Physics of Lightning' by Dwyer and Uman (2014), covers the physics of lightning. They present the various steps involved when lightning forms, illustrated in their Figure 1:
Basically, there are several stages when lightning strikes. There is the stepped-leader component, which somewhat leads the way for the exchanges that follow. In the figure, we notice the positive and negative signs (+/-) and how they change position depending on the discharge.
The stepped leader can be followed by return strokes and so on. In this paper they mention diameters for the stepped leader, which may appear to be much larger than the return stroke, in the range of one to ten meters in diameter:
The luminous diameter of the stepped leader has been measured photographically to be between 1 and 10 m.
But this is measured using photography; the core of the lightning at the center seems to be a few centimeters in diameter:
It is thought, however, that most of the stepped-leader current flows down a narrow conducting core a few centimeters in diameter at the center of the observed leader.
So in the end both papers tend toward similar centimeter-scale measurements for the lightning core.
The core: the aforementioned literature refers to the component of the actual lightning as a 'current-carrying core' surrounded by a corona sheath. The sheath is what appears to be 1 to 10 m in diameter, while the actual 'electricity channel' is centimeters wide;
so this core, the centimeter-wide channel, is the electron pathway. | {
"domain": "earthscience.stackexchange",
"id": 930,
"tags": "lightning"
} |
Why is $\pi^0 \rightarrow \gamma \rightarrow e^-e^+$ forbidden but $\pi^+\rightarrow W^+ \rightarrow e^+ \nu_e$ allowed? | Question: I am slightly confused why the decay $\pi^0 \rightarrow \gamma \rightarrow e^-e^+$ is forbidden. A naive guess would say that the intemediate photon $\gamma$ has a spin 1, the initial pion has a spin $0$ therefore this violates spin conservation. However, on the same reasoning $\pi^+\rightarrow W^+ \rightarrow e^+ \nu_e$ would be forbidden, but it's not. Therefore I cannot see why we cannot have $\pi^0 \rightarrow \gamma \rightarrow e^-e^+$ yet still have $\pi^+\rightarrow W^+ \rightarrow e^+ \nu_e$. Please can someone explain? (p.s. all intermediate particles should be taken as virtual).
Answer: QCD and QED themselves conserve parity. The consequence of this statement is that all corresponding effective vertices must conserve parity. The only coupling of $\pi^{0}$ to $\gamma$ conserving parity is
$$
L_{\pi^{0}} \simeq \frac{\pi^{0}}{\Lambda}\epsilon^{\mu\nu\alpha\beta}F_{\mu\nu}F_{\alpha\beta},
$$
which doesn't allow your decay process $\pi^{0} \to \gamma^{*} \to e^{-}e^{+}$. To understand this, note that the pion fields are pseudo-scalars, the photon field is a vector field, and the Levi-Civita tensor $\epsilon^{\mu\nu\alpha\beta}$ is a pseudo-tensor.
However, it is possible to construct the effective vertex allowing the decay $\pi^{0}\to e^{+}e^{-}$ through $Z$-boson, i.e., through weak interactions. The reason is that they directly violate the parity. Therefore, it is possible to construct phenomenological parity-violating low-dimensional effective interaction vertex
$$
L_{\pi^{0}}' \simeq \Lambda'\partial^{\mu}\pi^{0}Z_{\mu},
$$
allowing the decay process $\pi^{0} \to Z^{*} \to e^{+}e^{-}$.
For the same reason, it is easy to construct the parity-violating vertex
$$
L_{\pi^{+}} = \tilde{\Lambda}\partial^{\mu}\pi^{+}W^{-}_{\mu} + \text{ h.c.},
$$
allowing your decay process $\pi^{+}\to W^{+*} \to l^{+}\nu_{l}$. | {
"domain": "physics.stackexchange",
"id": 69469,
"tags": "particle-physics, conservation-laws, standard-model, parity, pions"
} |
Position from velocity time graph | Question:
I have taken Calculus before but unfortunately no equation is given so I have to do it the old way.
An object is at x = 0 at t = 0 and moves along the x axis according to the velocity-time graph shown below
I am asked to find the final position x of the object at t = 18s.
Here is my logic, but apparently the answer is wrong.
I find the area of each of the sections of the graph. If the graph is below the x axis, its area will be negative. I found the answer to be 97 m. I got this from -44 + -11 + 28.5 + 76 + 47.5. But this is wrong.
Answer: Your approach is correct; your ability to read data from a graph is suspect (the divisions are 2 m/s each).
The initial velocity is -12 m/s, and at time t=9 s it is up to 18 m/s
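The asker's area method itself can be sketched with the trapezoid rule. The sample points below are hypothetical, not read from the actual figure (which only the asker can see); segments below the axis contribute negative area, exactly as described in the question.

```python
# Displacement = integral of v dt, computed with the trapezoid rule.
# The (t, v) samples are HYPOTHETICAL, for illustration only.
times = [0, 3, 6, 9, 12, 15, 18]          # s
velocities = [-12, -2, 8, 18, 18, 8, 0]   # m/s, read at each sample time

x = 0.0
for i in range(1, len(times)):
    dt = times[i] - times[i - 1]
    x += 0.5 * (velocities[i] + velocities[i - 1]) * dt  # signed area
print(f"x(18 s) = {x:.1f} m")
```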
That should change your answer... | {
"domain": "physics.stackexchange",
"id": 27793,
"tags": "homework-and-exercises, kinematics, time, velocity"
} |
arm_navigation - cylindrical links don't avoid collision | Question:
Hi!
I'm following the arm_navigation tutorials and I went through the planning description configuration wizard successfully. However, when trying 'Planning Components Visualizer', I couldn't get my custom robot to avoid a pole; it would always go through it as if it weren't there.
All links of my robot were defined as cylinders in the urdf file, but when I converted them to boxes, the robot started avoiding the pole successfully.
Is this supposed to be like this, or is it a bug (or my problem maybe)?
Thanks in advance!
Originally posted by Pedro on ROS Answers with karma: 102 on 2012-10-17
Post score: 0
Original comments
Comment by dornhege on 2012-10-17:
Maybe you only added visual, but not collision? According to your description it doesn't seem so and it might be a bug that should be filed.
Comment by Pedro on 2012-10-17:
I added both. Visual can be a cylinder, as long as collision is a box.
Answer:
I remember running into this problem as well. ODE, which is used by arm navigation for collision checking, does not support native cylinder-cylinder collision checks. This is known and should result in an error. I think there may even be a bug when checking collisions between other shapes and cylinders.
I don't think it will be fixed though, but the planned replacement MoveIT will use FCL for collision checks. It should work better there.
Originally posted by AHornung with karma: 5904 on 2012-10-17
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by David Lu on 2012-10-17:
I confirm this is a bug though I can't find the ticket at the moment. Recommended hack is to make a mesh of a cylinder. | {
"domain": "robotics.stackexchange",
"id": 11407,
"tags": "urdf, arm-navigation"
} |
Phase Transition in 2D Hard Rods | Question: Consider $N$ rods in a plane with length $2l$ restricted to rotate by an angle $\theta$ leading to excluded volume $\Omega (\theta) = l^2(\theta + \sin\theta)$. Under the assumption that the phase space volume corresponding to the rotational freedom is proportional to $\theta$, maximizing the entropy leads to the following constraint on the density:
$$n=\frac{2}{l^2} \frac{1}{\theta(2+\cos\theta)+\sin\theta}$$
such that there is a minimum density at some critical angle $\theta_c$. There are two related claims about this system I am not understanding: (1) that a phase transition occurs at $\theta_c$, and (2) that the local entropy maximum becomes "unstable" at $\theta_c$.
I am wondering how to justify either of these claims, and how the two are related. Wouldn't a singularity in $n$ be a more appropriate way to characterize a phase transition? How does a minimum characterize it instead? What makes an entropy maximum unstable and how does this imply a phase transition?
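For concreteness, the critical angle can be located numerically from the quoted constraint: since $n \propto 1/d(\theta)$ with $d(\theta)=\theta(2+\cos\theta)+\sin\theta$, the minimum of $n$ sits where $d'(\theta)=2+2\cos\theta-\theta\sin\theta=0$. A minimal stdlib sketch using bisection (the bracket below is an assumption chosen to isolate the interior root):

```python
import math

# n(theta) = (2/l^2) / d(theta) is minimal where d(theta) is maximal,
# i.e. where d'(theta) = 2 + 2*cos(theta) - theta*sin(theta) = 0.
def dprime(t):
    return 2 + 2 * math.cos(t) - t * math.sin(t)

lo, hi = 1.0, 2.5          # d' changes sign in this bracket
for _ in range(60):        # plain bisection
    mid = 0.5 * (lo + hi)
    if dprime(lo) * dprime(mid) <= 0:
        hi = mid
    else:
        lo = mid

theta_c = 0.5 * (lo + hi)
print(f"theta_c ~ {theta_c:.4f} rad")
```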
This is all inspired by problem 5.7 in Kardar's Statistical Physics of Particles.
Answer: I still don't understand how to get your formula for $n$. If I understand correctly, you assimilate each rod to its exclusion volume, giving it three degrees of freedom: two of translation, one of rotation. Let $A=\theta$ be the space occupied by each effective rod in rotation space (rotation within the excluded volume), and $V$ the total accessible position space. I get the number of microstates, up to a $\theta$-independent factor:
$$
W = A^NV(V-\Omega) ...(V-(N-1)\Omega)\\
\propto A^N\prod_{k = 0}^{N-1}\left(1-n\Omega\frac{k}{N}\right)
$$
(up to a $\theta$-independent factor). In the thermodynamic limit $N\gg1$ this becomes:
$$
S =N\ln A +N\int_0^1dx\ln(1-n\Omega x) +S_0\\
= N\left(\ln A -1-\frac{1-n\Omega}{n\Omega}\ln(1-n\Omega)\right) + S_0
$$
($S_0$ is a $\theta$-independent additive term)
so
$$
\frac{dS}{d\theta} = N\frac{A'}{A}+N\left(\frac{1}{n\Omega}+\frac{1}{(n\Omega)^2}\ln(1-n\Omega)\right)n\Omega'
$$
From the equation $\frac{dS}{d\theta} = 0$ you have, with some caveats, an implicit definition of $\theta$. Btw, there is no simple formula for $n$ as a function of $\theta$. More precisely, for all of this to make sense, you need to assume all along that $n\Omega\leq 1$ so letting $n_c = \frac{1}{\pi l^2}$, for $n\leq n_c$ the above resolution is for $\theta\in[0,\pi]$ and for $n\geq n_c$ you have $\theta\in[0,\theta_m]$ with the implicit definition $\Omega(\theta_m) = 1/n$.
For the actual resolution of $\theta$, you therefore need to distinguish the two cases:
$n>n_c$: there is necessarily a zero, since $\frac{dS}{d\theta} \to +\infty$ when $\theta\to 0$ and $\frac{dS}{d\theta} \to -\infty$ when $\theta\to \theta_m$. It is actually the only solution, which defines $\theta$.
$n<n_c$: there is no solution, and you always have $\frac{dS}{d\theta}>0$, so $\theta=\pi$.
On an intuitive level, you have a competition between the number of states in rotation, which increases with $\Omega$, and the number of states in position, which decreases with $\Omega$. When $n\gg n_c$, the second effect dominates and $\theta$ is small; as $n$ decreases, $\theta$ increases. At high density, the rods are tightly packed and therefore highly constrained. Conversely, at low density, the rods are completely free to rotate. This is why you get a phase transition: you have a sudden plateau at $n=n_c$ and a resulting discontinuous slope. This kind of plateau effect is recurrent in these self-consistent approaches; you might have seen something similar in mean-field theory.
You can look at the asymptotic behaviour of $\theta$ when $n\to n_c^+$. Let $n = n_c+\epsilon$ and $\theta=\pi-\delta$ in this regime, I found (the computation is a bit tricky, you should check it for yourself):
$$
\delta = (2\pi^2\epsilon)^{1/3}
$$
which gives you a discontinuous derivative at the transition.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 89603,
"tags": "statistical-mechanics, entropy, phase-transition"
} |
Unknown item shown in rqt_graph | Question:
In my rqt_graph shown below, the namespace path_modification has an oval titled color in the namespace. What is this?
I've looked at rostopic list and it's neither a topic nor a service running on the network. I've googled around, but I can only find information about the display color of a node in rqt_graph, not an item titled "color" in rqt_graph.
rosnode info path_modification returns:
Node [/path_modification]
Publications:
* /rosout [rosgraph_msgs/Log]
Subscriptions: None
Services:
* /path_modification
* /path_modification/get_loggers
* /path_modification/set_logger_level
contacting node http://127.0.0.1:44616/ ...
Pid: 11250
Connections:
* topic: /rosout
* to: /rosout
* direction: outbound
* transport: TCPROS
I don't see anything related to "color" at all in that output. I don't think rqt_graph shows rosparams, but I checked rosparam list and I don't see "color" in there either. The output of rosnode list is:
/path_modification
/ramp_planner
/ramp_sensing
/rosout
/trajectory_evaluation
/trajectory_generator
/trajectory_view
So it doesn't seem to be a node either. If anyone knows what this is, please let me know. Any help is appreciated.
Originally posted by sterlingm on ROS Answers with karma: 380 on 2016-12-07
Post score: 4
Answer:
It's a bug; it has been fixed in the source:
https://github.com/ros-visualization/qt_gui_core/pull/86
Originally posted by nickw with karma: 1504 on 2017-05-30
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 26424,
"tags": "ros, ros-kinetic, rqt-graph"
} |
Serializing Objects to Delimited Files Part II | Question: This is a follow up to my previous question: Serializing objects to delimited files
I've added some feature enhancements, and based on suggestions from rolfl in chat, I've fixed up a couple inconsistencies with the serializer.
First, if you don't mark any properties with DelimitedColumnAttribute, I added a DelimitedIgnoreAttribute which will blacklist columns instead. For objects with no properties marked with either, all properties are serialized instead, with the following exception:
No collection properties (except System.String) are serialized, period.
You can also replace invalid values in names/fields (values that have the RowDelimiter or ColumnDelimiter in them) with whatever you specify.
You can choose whether or not to include the header row.
You can choose to quote values/names. If you choose to, all values/names are quoted.
You can choose how double-quotes are escaped (necessary for quoted values/names).
DelimitedSerializer.cs:
/// <summary>
/// Represents a serializer that will serialize arbitrary objects to files with specific row and column separators.
/// </summary>
public class DelimitedSerializer
{
/// <summary>
/// The string to be used to separate columns.
/// </summary>
public string ColumnDelimiter { get; set; }
/// <summary>
/// The string to be used to separate rows.
/// </summary>
public string RowDelimiter { get; set; }
/// <summary>
/// If not null, then sequences in values and names which are identical to the <see cref="ColumnDelimiter"/> will be replaced with this value.
/// </summary>
public string InvalidColumnReplace { get; set; }
/// <summary>
/// If not null, then sequences in values and names which are identical to the <see cref="RowDelimiter"/> will be replaced with this value.
/// </summary>
public string InvalidRowReplace { get; set; }
/// <summary>
/// If true, a trailing <see cref="ColumnDelimiter"/> will be included on each line. (Some legacy systems require this.)
/// </summary>
public bool IncludeTrailingDelimiter { get; set; }
/// <summary>
/// If true, an empty row will be included at the end of the response. (Some legacy systems require this.)
/// </summary>
public bool IncludeEmptyRow { get; set; }
/// <summary>
/// If true, then all values and columns will be quoted in double-quotes.
/// </summary>
public bool QuoteValues { get; set; }
/// <summary>
/// If not null, then double quotes appearing inside a value will be escaped with this value.
/// </summary>
public string DoubleQuoteEscape { get; set; }
/// <summary>
/// If true, then a header row will be output.
/// </summary>
public bool IncludeHeader { get; set; }
/// <summary>
/// Serializes an object to a delimited file. Throws an exception if any of the property names, column names, or values contain either the <see cref="ColumnDelimiter"/> or the <see cref="RowDelimiter"/>.
/// </summary>
/// <typeparam name="T">The type of the object to serialize.</typeparam>
/// <param name="items">A list of the items to serialize.</param>
/// <returns>The serialized string.</returns>
public string Serialize<T>(List<T> items)
{
if (string.IsNullOrEmpty(ColumnDelimiter))
{
throw new ArgumentException($"The property '{nameof(ColumnDelimiter)}' cannot be null or an empty string.");
}
if (string.IsNullOrEmpty(RowDelimiter))
{
throw new ArgumentException($"The property '{nameof(RowDelimiter)}' cannot be null or an empty string.");
}
var result = new ExtendedStringBuilder();
var properties = typeof(T).GetProperties()
.Select(p => new
{
Attribute = p.GetCustomAttribute<DelimitedColumnAttribute>(),
Info = p
})
.Where(x => x.Attribute != null)
.OrderBy(x => x.Attribute.Order)
.ThenBy(x => x.Attribute.Name)
.ThenBy(x => x.Info.Name)
.ToList();
if (properties.Count == 0)
{
properties = typeof(T).GetProperties()
.Where(x => x.GetCustomAttribute<DelimitedIgnoreAttribute>() == null)
.Select(p => new
{
Attribute = new DelimitedColumnAttribute { Name = p.Name },
Info = p
})
.Where(x => x.Attribute != null)
.OrderBy(x => x.Attribute.Order)
.ThenBy(x => x.Attribute.Name)
.ThenBy(x => x.Info.Name)
.ToList();
}
Action<string, string, string> validateCharacters = (string name, string checkFor, string humanLocation) =>
{
if (name.Contains(checkFor))
{
throw new ArgumentException($"The {humanLocation} string '{name}' contains an invalid character: '{checkFor}'.");
}
};
var columnLine = new ExtendedStringBuilder();
foreach (var property in properties)
{
if (property.Info.PropertyType.IsArray || (property.Info.PropertyType != typeof(string) && property.Info.PropertyType.GetInterface(typeof(IEnumerable<>).FullName) != null))
{
continue;
}
var name = property.Attribute?.Name ?? property.Info.Name;
if (InvalidColumnReplace != null)
{
name = name.Replace(ColumnDelimiter, InvalidColumnReplace);
}
if (InvalidRowReplace != null)
{
name = name.Replace(RowDelimiter, InvalidRowReplace);
}
if (DoubleQuoteEscape != null)
{
name = name.Replace("\"", DoubleQuoteEscape);
}
validateCharacters(name, ColumnDelimiter, "column name");
validateCharacters(name, RowDelimiter, "column name");
if (columnLine.HasBeenAppended)
{
columnLine += ColumnDelimiter;
}
if (QuoteValues)
{
columnLine += "\"";
}
columnLine += name;
if (QuoteValues)
{
columnLine += "\"";
}
}
if (IncludeTrailingDelimiter)
{
columnLine += ColumnDelimiter;
}
if (IncludeHeader)
{
result += columnLine;
}
foreach (var item in items)
{
var row = new ExtendedStringBuilder();
foreach (var property in properties)
{
if (property.Info.PropertyType.IsArray || (property.Info.PropertyType != typeof(string) && property.Info.PropertyType.GetInterface(typeof(IEnumerable<>).FullName) != null))
{
continue;
}
var value = property.Info.GetValue(item)?.ToString();
if (property.Info.PropertyType == typeof(DateTime) || property.Info.PropertyType == typeof(DateTime?))
{
value = ((DateTime?)property.Info.GetValue(item))?.ToString("u");
}
if (value != null)
{
if (InvalidColumnReplace != null)
{
value = value.Replace(ColumnDelimiter, InvalidColumnReplace);
}
if (InvalidRowReplace != null)
{
value = value.Replace(RowDelimiter, InvalidRowReplace);
}
if (DoubleQuoteEscape != null)
{
value = value.Replace("\"", DoubleQuoteEscape);
}
validateCharacters(value, ColumnDelimiter, "property value");
validateCharacters(value, RowDelimiter, "property value");
}
if (row.HasBeenAppended)
{
row += ColumnDelimiter;
}
if (QuoteValues)
{
row += "\"";
}
row += value;
if (QuoteValues)
{
row += "\"";
}
}
if (IncludeTrailingDelimiter)
{
row += ColumnDelimiter;
}
if (result.HasBeenAppended)
{
result += RowDelimiter;
}
result += row;
}
return result;
}
/// <summary>
/// Returns an instance of the <see cref="DelimitedSerializer"/> setup for Tab-Separated Value files.
/// </summary>
public static DelimitedSerializer TsvSerializer => new DelimitedSerializer
{
ColumnDelimiter = "\t",
RowDelimiter = "\r\n",
InvalidColumnReplace = "\\t",
IncludeHeader = true
};
/// <summary>
/// Returns an instance of the <see cref="DelimitedSerializer"/> setup for Comma-Separated Value files.
/// </summary>
public static DelimitedSerializer CsvSerializer => new DelimitedSerializer
{
ColumnDelimiter = ",",
RowDelimiter = "\r\n",
InvalidColumnReplace = "\\u002C",
IncludeHeader = true
};
/// <summary>
/// Returns an instance of the <see cref="DelimitedSerializer"/> setup for Pipe-Separated Value files.
/// </summary>
public static DelimitedSerializer PsvSerializer => new DelimitedSerializer
{
ColumnDelimiter = "|",
RowDelimiter = "\r\n",
InvalidColumnReplace = "\\u007C",
IncludeHeader = true
};
/// <summary>
/// Returns an instance of the <see cref="DelimitedSerializer"/> from the RFC 4180 specification. See: https://tools.ietf.org/html/rfc4180
/// </summary>
public static DelimitedSerializer Rfc4180Serializer => new DelimitedSerializer
{
ColumnDelimiter = ",",
RowDelimiter = "\r\n",
IncludeHeader = true,
IncludeTrailingDelimiter = true,
QuoteValues = true,
DoubleQuoteEscape = "\"\""
};
}
DelimitedColumnAttribute.cs:
/// <summary>
/// Represents a column which can be used in a <see cref="DelimitedSerializer"/>.
/// </summary>
[AttributeUsage(AttributeTargets.Property)]
public class DelimitedColumnAttribute : Attribute
{
/// <summary>
/// The name of the column.
/// </summary>
public string Name { get; set; }
/// <summary>
/// The order the column should appear in.
/// </summary>
public int Order { get; set; }
}
DelimitedIgnoreAttribute.cs:
[AttributeUsage(AttributeTargets.Property)]
public class DelimitedIgnoreAttribute : Attribute
{
}
Answer: To me the Serialize method is too big. I'd split it into three parts.
The first part would be the reflection stuff where you read and sort the properties - this could be a new class like ElementReflector or maybe an extension.
The second part (maybe a method) would be the first foreach, whose purpose I cannot immediately figure out.
The third part (maybe a method too) would be the second foreach, which does something, but I'm not sure what.
Putting them in methods would give them a meaning without you having to explain each time what they are good for.
public string Serialize<T>(List<T> items)
{
... prepare sort etc properties
var result = new StringBuilder()
.Append(CreateHeader(properties))
.Append(SerializeData(items, properties));
return result.ToString();
}
ifs
if (property.Info.PropertyType.IsArray || (property.Info.PropertyType != typeof(string) && property.Info.PropertyType.GetInterface(typeof(IEnumerable<>).FullName) != null))
You could use a variable for that to tell what this condition is for. Or even create an extension for it.
The same applies to many other ifs where the intention is not clear.
Besides, you use the same condition twice (in both loops), so an extension would definitely help.
ColumnDelimiter & RowDelimiter
If these two properties cannot be null you should require them via the constructor instead of checking them in the Serialize method.
Also, if the user can change them later, then you should check the values in the property setters instead of throwing exceptions later and causing astonishment as to why it isn't working.
DelimitedColumnAttribute
It is possible to create an invalid attribute because there is no constructor that enforces the required values. I guess the name is required if the attribute is specified.
properties
properties = typeof(T).GetProperties()
.Where(x => x.GetCustomAttribute<DelimitedIgnoreAttribute>() == null)
.Select(p => new
{
Attribute = new DelimitedColumnAttribute { Name = p.Name },
Info = p
})
.Where(x => x.Attribute != null)
.OrderBy(x => x.Attribute.Order)
.ThenBy(x => x.Attribute.Name)
.ThenBy(x => x.Info.Name)
.ToList();
You are always adding the attribute so this .Where(x => x.Attribute != null) isn't necessary.
As Attribute.Name is equal to p.Name, either the .ThenBy(x => x.Attribute.Name) or the .ThenBy(x => x.Info.Name) can be removed. | {
"domain": "codereview.stackexchange",
"id": 21739,
"tags": "c#, .net, serialization, reflection"
} |
Product of Levi-Civita Symbols | Question: I was reviewing Levi-Civita symbols and came across this identity:
$$ \epsilon_{ijk} \epsilon_{ijn} = 2 \delta_{kn}$$
My first thought was the identity that involves a determinant:
$$\epsilon_{ijk}\epsilon_{lmn}=\det\left|
\begin{array}{cccc}
\delta_{il} & \delta_{im} & \delta_{in} \\
\delta_{jl} & \delta_{jm} & \delta_{jn} \\
\delta_{kl} & \delta_{km} & \delta_{kn}
\end{array}
\right|
$$
which is frequently used to prove other identities, such as
$$\epsilon_{ijk}\epsilon_{lmk}=\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl}$$
If I were to employ this approach, I would take the determinant and replace $l$ with $i$ and $m$ with $j$, which is obviously easy to do. However, I seem to recall there being a significantly more elegant approach that doesn't resort to using this determinant-based definition or the identity that follows - does anyone remember what it is?
Answer: The expression $\epsilon_{ijk} \epsilon_{ijn}$ is only nonzero when $i$, $j$, $k$ are all distinct and $i$, $j$, $n$ are all distinct, so it is only nonzero if $k = n$; hence it is proportional to $\delta_{kn}$. To figure out the constant of proportionality, set $k = n = 3$. We want to evaluate
$$\epsilon_{ij3} \epsilon_{ij3}.$$
The only nonzero terms are when $(i, j) = (1, 2)$ and $(i, j) = (2, 1)$, giving contributions of $1^2$ and $(-1)^2$ respectively. Then the constant is $2$. | {
"domain": "physics.stackexchange",
"id": 33510,
"tags": "homework-and-exercises, tensor-calculus"
} |
Using a Subject to trigger a music list to reload | Question: In my Android app I have a Service that is periodically polling a server for a music playlist XML. For certain occasions I also want to trigger a reload manually.
I am using Retrofit for the server communication and my Service exposes an Observable<Playlist> to which some other components of the app can subscribe. At the moment there are two such subscribers: One Fragment that contains the playback controls and one other Service that manages the actual audio streaming as well as the updates to the Notifications.
Now, I want all Subscribers to receive any new playlist info as soon as it becomes available. In particular this means that if I trigger a reload manually from the Fragment, the Service should also immediately receive the server response.
No matter how I turn it, all my solutions need a Subject. Here's the one I like best:
private Observable<MusicInfoResponse> mReloadingObservable;
private PublishSubject<Long> mManualRefreshesSubject = PublishSubject.<Long>create();
private void setupObservable() {
Observable<Long> timer = Observable.timer(0, 15, TimeUnit.SECONDS);
Observable<Long> pulses = Observable.merge(timer, mManualRefreshesSubject);
mReloadingObservable = pulses
.flatMap(new Func1<Long, Observable<MusicInfoResponse>>() {
@Override
public Observable<MusicInfoResponse> call(Long aLong) {
Log.d(LOG_TAG, "reloading music info XML");
return reloadMusicInfo();
}
})
.share();
}
public Observable<MusicInfoResponse> subscribeToMusicInfo() {
return mReloadingObservable;
}
public void triggerManualRefresh() {
mManualRefreshesSubject.onNext(1l);
}
Since, it seems to me, one should generally avoid Subjects and try to use only normal Observables instead, my question is: Is there any way to do this that does not need Subjects? Is this a case where using Subjects is the way to go?
And also: What are the main drawbacks of Subjects in general?
Answer: There is no rule like
one should generally avoid Subjects
You should just be careful around them, like you have been. As a rule of thumb, keep Subjects private (inside your class), and if you want the outside world to be able to subscribe, just expose subject.asObservable().
As for your code, you can perform the setupObservable() inside an observable's onSubscribe() so that you have a cold observable and not a hot one.
If you really have to remove the Subject, you can do an Observable.create(subscriber -> ...). This subscriber variable would be the replacement for your mManualRefreshesSubject.
Frankly, there is not much difference between either pattern. The only thing you should do is check subject.isUnsubscribed() / subscriber.isUnsubscribed(). One benefit of using a subscriber is that you make sure to do the work on subscribe and not before. | {
"domain": "codereview.stackexchange",
"id": 13466,
"tags": "java, android, rx-java"
} |
SPOJ - POWFIB (Fibo and non fibo) Time Limit Exceeds | Question: Problem
Find (a^b)%M, where
a = Nth non-fibonacci number
b = Nth fibonacci number modulo M
M = 1000000007
Consider fibonacci series 1,1,2,3,.....
INPUT
First line contains T , the number of test cases.
Each next T lines contains a number N.
OUTPUT
Print T lines of output where each line corresponds to the required answer.
EXAMPLE
Input:
3
3
2
1
Output:
49
6
4
Constraints
1<=T<=100000
1<=N<=10^7
Here is my code for the problem link
#include <cstdio>
#include <unordered_map>
using namespace std;
typedef long long int ll;
ll M = 1000000007;
ll mulmod(ll a, ll b) //modular multiplication
{
ll x = 0;
ll y = a%M;
while(b>0)
{
if(b%2)
x = (x+y)%M;
y = (y+y)%M;
b/=2;
}
return x%M;
}
ll modulo(ll a, ll b) //modular exponentiation
{
ll x = 1;
ll y = a;
while(b)
{
if(b%2)
x = mulmod(x,y);
y = mulmod(y,y);
b/=2;
}
return x%M;
}
unordered_map<ll,ll> Fib;
ll fibo(ll n) //n+1 th fibonacci number
{
if(n<2)
return 1;
if(Fib.find(n) != Fib.end())
return Fib[n];
Fib[n] = (fibo((n+1) / 2)*fibo(n/2) + fibo((n-1) / 2)*fibo((n-2) / 2)) % M;
return Fib[n];
}
ll nonfibo(ll n) //nth non-fibonacci number
{
ll a = 1, b = 2, c = 3;
while(n>0)
{
a = b;
b = c;
c = a+b;
n-=(c-b-1);
}
n+=(c-b-1);
return n + b;
}
int main()
{
ll t;
scanf("%lld",&t);
while(t--)
{
ll n;
scanf("%lld",&n);
printf("%lld\n",modulo(nonfibo(n),fibo(n-1)));
}
return 0;
}
It exceeds the time limit; how can I improve the code?
Answer: Here you are using modular exponentiation and modular multiplication, which are fast enough.
Your nonfibo function also takes only O(lg n) time.
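One O(log n) formulation of the Fibonacci step is fast doubling, which needs no hash map at all. Below is a minimal sketch in Python (illustrative only, not the author's C++ code), using the identities F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)²:

```python
M = 1000000007

def fib_pair(n):
    """Return (F(n), F(n+1)) mod M by fast doubling; O(log n), no memo table."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)   # a = F(k), b = F(k+1) with k = n // 2
    c = a * (2 * b - a) % M   # F(2k)   = F(k) * (2*F(k+1) - F(k))
    d = (a * a + b * b) % M   # F(2k+1) = F(k)^2 + F(k+1)^2
    if n % 2 == 0:
        return (c, d)
    return (d, (c + d) % M)

print(fib_pair(10)[0])  # F(10) = 55
```

A C++ port would use long long and add M before taking % so that 2*F(k+1) - F(k) stays non-negative (Python's % already returns a non-negative result).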
But your method for computing the nth Fibonacci number is not very efficient. Try using this Linear Recurrence Solving Method. | {
"domain": "codereview.stackexchange",
"id": 14907,
"tags": "c++, programming-challenge, time-limit-exceeded, fibonacci-sequence"
} |
How to compute the inverse of an operation in Q#? | Question: I want to implement amplitude amplification using Q#. I have the operation $A$
that prepares my initial state and I need to compute $ A^{-1} $ to implement the algorithm.
Is there an easy way to do that in Q# (a keyword or operation)?
Answer: As given in the documentation, if your operation is unitary, you can add the statement adjoint auto; within the operation after the body block. This will generate the adjoint (which, for a unitary, is the inverse).
Then, to use the inverse, call Adjoint A(parameters). | {
"domain": "quantumcomputing.stackexchange",
"id": 818,
"tags": "programming, q#"
} |
Implement clojure-a-like comp() function in python3 | Question: With a friend some time ago we wanted to write our own implementation of comp() clojure function in python3. we ended up writing the following function:
def comp(*fs):
"""
Takes a set of functions and returns a fn that is the
composition of those fns. The returned fn takes a variable number of args,
applies the rightmost of fns to the args, the next fn (right-to-left) to
the result, etc.
"""
if len(fs) == 1:
return lambda *x: fs[0](*x)
else:
return lambda *x: fs[0](comp(*fs[1:])(*x))
In my opinion it's still opaque and difficult to read. But it works, as in:
# Checking for single argument
>>> [comp(lambda x: x*10,lambda x: x+2, lambda x: x*3, lambda x: x+3)(x) for x in range(6)]
[110, 140, 170, 200, 230, 260]
# Checking for multiple arguments
>>> a = lambda x: sum(x)
>>> b = lambda x, y, z: [x*1, y*2, z*3]
>>> comp(a, b)(1, 2, 3)
14
How could it be written better?
Answer: Your approach has one serious drawback if big-O performance guarantees are relevant to you: for N composed functions, it creates N-1 new bound function instances on every single invocation of the returned function. To avoid this you can use either a loop or the reduce operation, both of which create only a fixed number of function instances (one that doesn't depend on the number of functions to chain).
Additionally, I believe that both variants are easier to understand than the code in your question because they don't rely on recursion.
Loop
def comp_loop(*fns):
    if len(fns) == 0:
        return lambda *x: x
    if len(fns) == 1:
        return fns[0]
    def comped(*args):
        x = fns[-1](*args)            # only the rightmost fn receives the raw args
        for fn in reversed(fns[:-1]):
            x = fn(x)                 # later fns are applied to the single result
        return x
    return comped
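A quick self-contained check of the intended behavior, mirroring the doctests from the question (the loop composer is inlined so the snippet runs on its own; note that only the rightmost function receives the raw argument list, exactly like the recursive comp):

```python
def comp_loop(*fns):
    """Compose right-to-left; only the rightmost fn sees the raw args."""
    if len(fns) == 0:
        return lambda *x: x
    if len(fns) == 1:
        return fns[0]
    def comped(*args):
        x = fns[-1](*args)            # rightmost fn gets *args
        for fn in reversed(fns[:-1]):
            x = fn(x)                 # the rest take the single result
        return x
    return comped

comp10 = comp_loop(lambda x: x * 10, lambda x: x + 2, lambda x: x * 3, lambda x: x + 3)
print([comp10(x) for x in range(6)])  # [110, 140, 170, 200, 230, 260]

a = lambda x: sum(x)
b = lambda x, y, z: [x * 1, y * 2, z * 3]
print(comp_loop(a, b)(1, 2, 3))       # 14
```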
Reduce
from functools import reduce
def comp_reduce(*fns):
if len(fns) == 0:
return lambda *x: x
if len(fns) == 1:
return fns[0]
return lambda *x: reduce(lambda accum, fn: fn(accum), reversed(fns[:-1]), fns[-1](*x)) | {
"domain": "codereview.stackexchange",
"id": 27848,
"tags": "python, python-3.x, functional-programming"
} |
Specifying plugins to local_costmap.yaml causes it to disappear | Question:
As an additional feature to my navigation stack, I'd like to add social_navigation_layers (http://wiki.ros.org/social_navigation_layers) to my local costmap, to distinguish between obstacles and humans. This should add extra inflation to the local costmap, such that my teb local planner better can navigate in the environment. For this I'm trying to specify layers of both the global and local costmap in the form of plugins as specified by this tutorial: http://wiki.ros.org/costmap_2d/Tutorials/Configuring%20Layered%20Costmaps.
The problem I'm facing is that when specifying any plugin (including the defaults) for my local costmap, the local costmap disappears from RViz and the topic /move_base/local_costmap/costmap returns only zeros. This ruins my navigation. If I comment out all plugins in the local costmap then everything works fine, except that the social_navigation_layers won't be used. Specifying any and all plugins for the global costmap seems to work fine. Using 'rosparam get' I can see that the plugins are used for both the local and global costmap, despite it not working for the local costmap. If I do not specify any plugins for the local costmap then pre-Hydro parameters will be used, indicated by a warning in the terminal.
Currently I'm running Melodic (Ubuntu 18.04), but I've also tested this on Kinetic (Ubuntu 16.04) with the same issue.
Picture of missing costmap.
The small curved green line represents parts of an obstacle detected by the laser scanner and shown through teb markers.
UPDATE 1: Added .yaml files, and warning with pre-Hydro parameters.
UPDATE 2: Changed local_costmap.yaml to the way I want them to be, added output of 'rosparam get' and picture of missing costmap from RViz.
costmap_common.yaml
# Common settings for costmaps
# May be overwritten in specific global and local costmap files
# Robot Footprint Parameters
#footprint: [[0.55,0.5],[0.55,-0.5],[-0.55,-0.5],[-0.55,0.5]] #Square robot
#footprint_padding: 0.01
robot_radius: 0.55 # Circular robot
# inflation_radius: 0.35
# Global Filtering Parameters
obstacle_range: 7
raytrace_range: 4.0
costmap_global.yaml
global_costmap:
# Coordinate frame and tf parameters
global_frame: map # Test with /map as per documentation
robot_base_frame: base_link
transform_tolerance: 1.5
# Rate parameters
update_frequency: 0.5
# publish_frequency: 1.0
# Map management parameters
rolling_window: false
static_map: true
plugins:
- {name: static_layer, type: "costmap_2d::StaticLayer"}
- {name: obstacles_layer, type: "costmap_2d::VoxelLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
# Layer Definitions
static_layer:
map_topic: /map
subscribe_to_updates: true
inflation_layer:
# cost_scaling_factor: 10.0 # Default: 10.0
inflation_radius: 0.5
costmap_local.yaml
local_costmap:
# Coordinate frame and tf parameters
global_frame: odom
robot_base_frame: base_link
transform_tolerance: 1.0
# Rate parameters
update_frequency: 10.0
publish_frequency: 10.0
# Map management parameters
static_map: false
rolling_window: true
width: 10 # The width of the map in meters. Default: 10
height: 10 # The height of the map in meters. Default: 10
resolution: 0.05 # The resolution of the map in meters/cell. Default: 0.05
plugins:
- {name: obstacles_layer, type: "costmap_2d::ObstacleLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
- {name: social_layer, type: "social_navigation_layers::ProxemicLayer"}
- {name: social_pass_layer, type: "social_navigation_layers::PassingLayer"}
# Layer Definitions
inflation_layer:
#cost_scaling_factor: 1.0 # Default: 10.0
inflation_radius: 1.08
rosparam get /move_base/global_costmap
footprint: '[]'
footprint_padding: 0.01
global_frame: map
height: 10
inflation_layer: {cost_scaling_factor: 10.0, enabled: true, inflate_unknown: false,
inflation_radius: 0.5}
obstacle_range: 7
obstacles_layer: {combination_method: 1, enabled: true, footprint_clearing_enabled: true,
mark_threshold: 0, max_obstacle_height: 2.0, origin_z: 0.0, unknown_threshold: 15,
z_resolution: 0.2, z_voxels: 10}
origin_x: 0.0
origin_y: 0.0
plugins:
- {name: static_layer, type: 'costmap_2d::StaticLayer'}
- {name: obstacles_layer, type: 'costmap_2d::VoxelLayer'}
- {name: inflation_layer, type: 'costmap_2d::InflationLayer'}
publish_frequency: 0.0
raytrace_range: 4.0
resolution: 0.05
robot_base_frame: base_link
robot_radius: 0.55
rolling_window: false
static_layer: {enabled: true, map_topic: /map, subscribe_to_updates: true}
static_map: true
transform_tolerance: 1.5
update_frequency: 0.5
width: 10
rosparam get /move_base/local_costmap
footprint: '[]'
footprint_padding: 0.01
global_frame: odom
height: 10
inflation_layer: {cost_scaling_factor: 10.0, enabled: true, inflate_unknown: false,
inflation_radius: 1.08}
laser_scanner: {clearing: true, data_type: LaserScan, expected_update_rate: 9.0, marking: true,
sensor_frame: scan_combined, topic: /laser_unified/scan}
observation_sources: laser_scanner
obstacle_range: 7
obstacles_layer: {combination_method: 1, enabled: true, footprint_clearing_enabled: true,
max_obstacle_height: 2.0, obstacle_range: 7, raytrace_range: 4.0}
origin_x: 0.0
origin_y: 0.0
plugins:
- {name: obstacles_layer, type: 'costmap_2d::ObstacleLayer'}
- {name: inflation_layer, type: 'costmap_2d::InflationLayer'}
- {name: social_layer, type: 'social_navigation_layers::ProxemicLayer'}
- {name: social_pass_layer, type: 'social_navigation_layers::PassingLayer'}
publish_frequency: 10.0
raytrace_range: 4.0
resolution: 0.05
robot_base_frame: base_link
robot_radius: 0.55
rolling_window: true
social_layer: {amplitude: 77.0, covariance: 0.25, cutoff: 10.0, enabled: true, factor: 5.0,
keep_time: 0.75}
social_pass_layer: {amplitude: 77.0, covariance: 0.25, cutoff: 10.0, enabled: true,
factor: 5.0, keep_time: 0.75}
static_map: false
transform_tolerance: 1.0
update_frequency: 10.0
width: 10
Originally posted by KimJensen on ROS Answers with karma: 55 on 2020-04-08
Post score: 1
Original comments
Comment by gvdhoorn on 2020-04-08:\
I apparently don't have enough points to add pictures of my *_costmap_params.yaml files to this issue.
and that's a good thing, as .yaml files are simply text. There is no need to post screenshots of text.
Simply copy-paste it into your question, then select all the lines, press ctrl+k or the Preformatted Text button (ie: the one with 101010 on it).
As to your problem: YAML is very sensitive to indentation. Make sure to have consistent and correct indentation.
Comment by KimJensen on 2020-04-08:
Thank you for your response. I've updated the issue with the .yaml files.
I've focused on the indentation, so I think it should be correct.
Comment by David Lu on 2020-04-17:
Can you set the parameters the way you want them to be (i.e. uncomment the plugins) and then post the rosparam get / output.
Comment by KimJensen on 2020-04-20:
Thank you for your response. I've updated the issue with changes to the parameters, posted the output of rosparam get and inserted a picture of the missing obstacle.
Comment by David Lu on 2020-04-20:
Can you change it to the output of rosparam get /move_base/local_costmap?
Comment by KimJensen on 2020-04-21:
Yes of course. I've updated the issue with both local and global rosparam get /move_base/*_costmap
Answer:
After comparing the result of rosparam get /move_base/local_costmap for the file costmap_local.yaml with specified plugins and without (pre-Hydro parameters) I noticed two issues:
1. The laser_scanner and observation_sources parameters were not correctly contained inside the obstacle_layer of the local costmap. To correct this I moved these two parameters from the separate .yaml file in which I had specified them (loaded into the local_costmap ns inside my .launch file) to costmap_local.yaml, specified in the following way:
Specified in costmap_local.yaml
obstacle_layer:
observation_sources: laser_scanner
laser_scanner: {sensor_frame: scan_combined, data_type: LaserScan, topic: /laser_unified/scan, marking: true, clearing: true, expected_update_rate: 9.0}
Notice that this must be specified inside costmap_local.yaml and not costmap_common.yaml as I set static_map: true in the costmap_global.yaml. If placed inside costmap_common.yaml a warning similar to this will occur:
[ WARN] [1587471627.753143928, 471.157000000]: The origin for the sensor at (0.02, 0.02, -0.00) is out of map bounds. So, the costmap cannot raytrace for it.
2. The default (pre-Hydro) layers name the obstacle layer in its singular form (without an s). Therefore, I changed the name of my obstacle layer plugin to the following form:
Specified in costmap_local.yaml
plugins:
- {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
Originally posted by KimJensen with karma: 55 on 2020-04-21
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by David Lu on 2020-04-21:
So you're all set?
Comment by KimJensen on 2020-04-21:
I’m set to the extend that I have the plugins working (including social_navigation_layers), and I can publish to /people (people_msgs/People) topic to insert humans and additional inflation into my local costmap. So the problem presented is this issue seems to be resolved.
Next step/issue is to configure the inflation of this inserted human, such that my local planner (teb) will actually adjust its path, instead of going straight through it. I want a lethal/inscribed/high cost inflation to published humans to be approx 1 meter in radius. Currently, the inserted inflation of the human has no effect on the local trajectory, which can be seen here. Changing the amplitude parameter of social_navigation_layer only has minor effects to the inflation scaling. How would you change scaling of the inflated area, such that the robot avoids the published human?
Comment by David Lu on 2020-04-22:
I wrote a whole paper about it: http://wustl.probablydavid.com/entry179.html
It's not easy.
Comment by KimJensen on 2020-04-27:
So you are confident that this is a tweaking issue, and not something incorrect in the way that I publish messages to the /people topic, or other potential errors in the way that I implement the social_navigation_layers?
Published message can be seen here.
Comment by KimJensen on 2020-04-27:
After reading your paper: Given that we're using a standard algorithm (Dijkstra) for global planner and timed elastic band (TEB) as local planner, would you then argue that the method presented in the paper is still valid for our case, as we have no added constant value of P? Finally, are there any way of introducing lethal obstacles using social_navigation_layers, as I believe TEB only accounts for those? | {
"domain": "robotics.stackexchange",
"id": 34721,
"tags": "navigation, ros-melodic"
} |
Runtime error "Can't create handler inside thread that has not called Looper.prepare()" in android application | Question:
I am writing an Android application to control a Lego NXT and connect to ROS. I use the LeJos library to control the Lego. I get the runtime error "Can't create handler inside thread that has not called Looper.prepare()" when I run an NXT motor from a rosjava node:
class NXTNode implements NodeMain {
public static RemoteMotor motor;
@Override
public GraphName getDefaultNodeName() {
return new GraphName("android/NXTNode");
}
@Override
public void onStart(final Node node) {
motor = new Motor().A;
motor.setPower(60);
motor.forward();
node.executeCancellableLoop(new CancellableLoop() {
@Override
protected void setup() {
}
@Override
protected void loop() throws InterruptedException {
Thread.sleep(10);
}
});
}
@Override
public void onShutdown(Node node) {
}
@Override
public void onShutdownComplete(Node node) {
}
}
I run the "NXTNode" node in:
protected void init(NodeMainExecutor nodeMainExecutor) {
NodeConfiguration nodeConfiguration = NodeConfiguration.newPublic(InetAddressFactory.newNonLoopback().getHostAddress());
nodeConfiguration.setMasterUri(getMasterUri());
nxt_node = new NXTNode();
nodeMainExecutor.execute(nxt_node, nodeConfiguration);
}
If I run the motor from the interface (onClick), then the motor runs:
public void onClick(View arg0) {
motor = new Motor().A;
motor.setPower(60);
motor.forward();
}
Where could the problem be?
Thank you!
UPDATE:
FATAL EXCEPTION: pool-1-thread-7
java.lang.ExceptionInInitializerError
at org.ros.android.tutorial.pubsub.NXTNode.onStart(NXTNode.java:239)
at org.ros.internal.node.DefaultNode$1.run(DefaultNode.java:422)
at org.ros.internal.node.DefaultNode$1.run(DefaultNode.java:419)
at org.ros.concurrent.ListenerCollection$1.run(ListenerCollection.java:108)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1088)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:581)
at java.lang.Thread.run(Thread.java:1019)
Caused by: java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare()
at android.os.Handler.<init>(Handler.java:121)
at android.bluetooth.BluetoothAdapter$1.<init>(BluetoothAdapter.java:1012)
at android.bluetooth.BluetoothAdapter.<init>(BluetoothAdapter.java:1012)
at android.bluetooth.BluetoothAdapter.getDefaultAdapter(BluetoothAdapter.java:340)
at lejos.pc.comm.NXTCommAndroid.search(NXTCommAndroid.java:394)
at lejos.pc.comm.NXTConnector.search(NXTConnector.java:171)
at lejos.pc.comm.NXTConnector.connectTo(NXTConnector.java:222)
at lejos.pc.comm.NXTConnector.connectTo(NXTConnector.java:39)
at lejos.pc.comm.NXTCommandConnector.open(NXTCommandConnector.java:22)
at lejos.pc.comm.NXTCommandConnector.getSingletonOpen(NXTCommandConnector.java:41)
at lejos.nxt.Motor.<clinit>(Motor.java:15)
... 7 more
Originally posted by Alexandr Buyval on ROS Answers with karma: 641 on 2012-02-12
Post score: 0
Original comments
Comment by damonkohler on 2012-02-23:
Could you post the full stack trace?
Comment by Alexandr Buyval on 2012-02-23:
I updated question.
Answer:
I have solved the problem.
I added Looper.prepare() in the onStart method.
public void onStart(final Node node) {
Looper.prepare();
}
Originally posted by Alexandr Buyval with karma: 641 on 2012-02-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 8202,
"tags": "rosjava, android"
} |
Can hydrides react with hydrocarbons? And why? | Question: I have read that there exist so-called "magic" (super strong) acids which can provide a proton to saturated hydrocarbons, producing a cation and a hydrogen molecule. Can hydrides act in the same manner (pick up a proton to form a free hydrogen molecule)? And why?
Answer: The pKa of sodium or lithium hydride is (solvent dependent) about 35; the pKa of butyl lithium is around 50, so clearly H- is not strong enough to tear a proton off an unactivated alkane (pKa tables here). | {
"domain": "chemistry.stackexchange",
"id": 11057,
"tags": "hydrocarbons, hydrogen"
} |
Where does the hidden supersymmetric sector of the MSSM come from? | Question: At the end of Chapter 14 of the "Supersymmetry Demystified" book by Patrick Labelle, it is mentioned that to constrain the number of allowed softly SUSY-breaking terms, a shadow or hidden supersymmetric sector can be assumed. In this hidden sector, SUSY is spontaneously broken and communicated to the visible sector of the MSSM, where it leads to the desired soft SUSY breaking.
Because this was not explained any further, I am wondering where the particles of this shadow sector, which interacts only through a messenger interaction with the MSSM particles, would come from?
I've only heard (not in any detail) some catchwords such as SUGRA or gauge-mediated SUSY breaking, for example. I am still interested to learn in a bit more detail what the most important mechanisms are for mediating SUSY breaking from such a hidden sector (and how this works), as currently considered by people working on this topic.
Additional thought: Would a possible Higgs at about 125 GeV and not much else at the "LHC scale" change the current thinking or constrain this business?
Answer: First of all, the soft SUSY-breaking terms are just an "effective" description that replaces lots of qualitative, unknown physics by 100+ parameters for the known physics. At the end, one wants to construct a full theory. For example (an important example), if the full theory is a stabilized string theory compactification, there aren't any undetermined parameters left: everything is calculable, including the value of the superpartner masses and the rest of the 100+ soft SUSY breaking coefficients. The same holds for a complete model in quantum field theory.
Now, in this full theory, it's still true that SUSY is spontaneously broken; in the description with the soft SUSY breaking terms, SUSY is really "explicitly broken" by these soft SUSY-breaking terms. This explicit breaking is just required to be "soft", i.e. not influencing short-distance SUSY cancellations e.g. to the Higgs mass. The soft SUSY breaking terms are exactly the explicit SUSY-breaking terms that you may potentially produce out of a spontaneous SUSY breaking mechanism by additional physics if you neglect this additional physics.
Another fact to realize is that the fields of the MSSM aren't enough to break SUSY. One may go through the possibilities. A typical SUSY breaking will assign a nonzero vev to an appropriate field but there's no appropriate field in the MSSM that could be the primary source of SUSY breaking. So one is de facto required to add additional fields beyond the MSSM and they're the primary causes of SUSY breaking – their vev etc. Once SUSY is strongly broken in phenomena involving these new fields, it gets broken everywhere: the breaking is "mediated" to the MSSM (=visible sector), too.
Where does the hidden sector come from? You could ask the same question about the MSSM fields, too. Where do they come from? The hidden sector is just another collection of fields. In quantum field theory, one just has to add all the fields manually. MSSM isn't a "minimum field theory" in any sense we understand, and MSSM plus hidden sector isn't either, so there's really no difference. We may extend the MSSM to a bigger GUT-like model by bigger symmetries; but we won't encounter too many candidates for the primordial SUSY breaking. The hidden sector needed to break SUSY is typically completely new and unrelated to symmetries to the visible sector.
In string/M-theory, one may get a more non-vacuous answer to such questions because string theory predicts not only the exact values of all the parameters; for a particular compactification, it also predicts the exact field content. So one may say that the MSSM fields and the other fields come from "somewhere", literally, e.g. from some particular modes of stringy fields that propagate either in the whole spacetime or at some smaller locus of the extra dimensions or on some branes that are localized in the extra dimensions, and so on.
Just to mention a visually simple example, heterotic M-theory by Hořava and Witten represents the whole world as $M^4\times CY_3 \times I^1$ where $M^4$ is the Minkowski space we know, $CY_3$ is a six-real-dimensional manifold (Calabi-Yau) with the 6 extra dimensions, and $I^1$ is a line interval. In total, one has 11 dimensions. The line interval has two boundaries, each of which is occupied by an $E_8$ gauge supermultiplet. One of them gives rise, after breaking of GUT-like symmetries, to the MSSM; the other boundary contains additional gauginos from another $E_8$ group. Gaugino condensation of these other gauginos, a hidden sector, may be the source of SUSY breaking that gets communicated to our boundary, i.e. the other $E_8$.
The different mechanisms of SUSY breaking are really classified according to the method of mediation to the MSSM. So you hear terms like gauge-mediated, gravity-mediated, anomaly-mediated, and so on. Historically, people would be excited about gauge mediation (mSUGRA is a model that is similar to it) and they said lots of things about why they saw it as the best one. The generic gauge-mediated models with all light superpartners, which people considered natural in them, are mostly ruled out by the LHC data by now. For various reasons, things like anomaly-mediated SUSY breaking have gradually become preferred in the past year or so, and the likely 125 GeV Higgs is another reason to think that those could be the preferred scenarios.
This is a very technical subject whose details are inappropriate for a single answer at this server, I think. Pick some review of SUSY breaking, e.g. this one:
http://motls.blogspot.com/2006/01/supersymmetry-breaking.html
http://arxiv.org/abs/hep-th/0601076
This does focus on anomaly-mediated breaking as well, something that made it relatively ahead of time and that could turn out to be relevant. | {
"domain": "physics.stackexchange",
"id": 3284,
"tags": "research-level, supersymmetry, symmetry-breaking, mssm"
} |
error while compling in rosserial_arduino | Question:
This is my code for the publisher.
#include <ros.h>
#include <std_msgs/String.h>
ros::NodeHandle n;
std_msgs::String str_msg;
ros::Publisher pub_voltage("volts", &str_msg);
int left_IR_pin = 5;
int centre_IR_pin = 4;
int right_IR_pin = 10;
char right[12] ="right off";
char left[12] ="left off";
boolean directl,directr;
void setup()
{
n.initNode();
n.advertise(pub_voltage);
pinMode(left_IR_pin,INPUT);
pinMode(centre_IR_pin,INPUT);
pinMode(right_IR_pin,INPUT);
directl=digitalRead(left_IR_pin);
directr=digitalRead(right_IR_pin);
}
void loop(){
if (directl)
{ str_msg.data = left;
pub_voltage.publish(&std_msgs);
}
if (directr)
{ str_msg.data = right;
pub_voltage.publish(&std_msgs);
}
n.spinOnce();
delay(1000);
}
After compiling I get the following error:
sketch_jun18a.cpp: In function ‘void loop()’:
sketch_jun18a.cpp:31:34: error: expected primary-expression before ‘)’ token
sketch_jun18a.cpp:35:34: error: expected primary-expression before ‘)’ token
Originally posted by tejdeep on ROS Answers with karma: 1 on 2014-06-18
Post score: 0
Original comments
Comment by Wolf on 2014-06-19:
My suggestion is: 1) always try to recompile with the Arduino IDE, as it occasionally fails for whatever reason; 2) if your problem persists after 3 or 4 tries, please mark your code as code in the answers.ros.org editor. It is not really readable the way it is now.
Answer:
I presume the issue here is the .publish(&std_msgs)... you can't publish a namespace, I presume you meant to publish str_msg, and you shouldn't need the &, so it should just be:
pub_voltage.publish(str_msg);
Originally posted by fergs with karma: 13902 on 2014-06-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 18306,
"tags": "arduino, rosserial"
} |
Kripke Models - evaluating the meaning of $\Box\Box p$ | Question: In Kripke models the evaluation of $x \vdash \Box p$ would be that every world reachable from $x$ satisfies $p$.
But how would the truth of $\Box\Box p$ be evaluated in Kripke models?
Answer: This is an unfortunate use of the word “reachable”, in that Kripke structures are graphs but “reachable” in a Kripke structure is not the same thing as “reachable” in a graph. Let us avoid this confusion by saying that a state $y$ is a successor of state $x$ if there is a (directed) edge $(x,y)$ in the structure.
Now, the semantics of modal logic says that $\Box\varphi$ is true at state $x$ if, and only if, $\varphi$ is true at every successor of $x$. Informally, $\Box\varphi$ means, “$\varphi$ is true everywhere I can get to from here in one step.” So, to understand the meaning of $\Box\Box p$, just substitute $\Box p$ for $\varphi$:
$\Box p$ is true at every successor
$p$ is true at every successor of every successor.
In other words, $p$ is true everywhere I can get in two steps. Note that this is not, in general, the same thing as $\Box p$, which means that $p$ is true after one step. Indeed, one can show that $(\Box \varphi)\rightarrow (\Box\Box\varphi)$ is a tautology in a particular Kripke frame if, and only if, the successor relation is transitive.
Note that, without assuming transitivity, basic modal logic with only $\Box$ and $\Diamond$ has no way of expressing “$\varphi$ is true everywhere that can be reached from here, in any number of steps” (i.e., the usual graph-theoretic meaning of “reachable”). | {
"domain": "cs.stackexchange",
"id": 7279,
"tags": "logic, modal-logic"
} |
Is the existence of electromagnetic standing waves dependent on the observers reference frame? | Question: If I take two plane EM waves travelling in opposite direction e.g. $E = E_0 \sin(kx-\omega t)$ and $E=E_0 \sin (kx + \omega t)$, they sum to give a standing wave with a time-averaged Poynting vector of zero.
If I use the appropriate special relativistic transformations to derive how these fields appear to an observer travelling at $v$ along the x-axis, I find that one E-field is diminished, one is boosted, whilst at the same time one wave is blue-shifted and the other red-shifted. These waves do not sum to give a standing wave in the moving frame of reference and have a non-zero time-averaged Poynting vector.
So, is the phenomenon of a standing wave dependent on the frame of reference of the observer?
Please note: That I have tried transforming the fields and summing them, but could not make sense of the result in terms of a travelling wave or a standing wave - hence the question. To get the bounty I'm looking for a form for $E^{\prime}$ or a decomposition of $E^{\prime}$ (that isn't just the sum of the transformed waves!) that makes clear the nature of the summed waves in the moving frame.
Answer: Yes, the concept of a standing wave is frame dependent, and a standing wave selects a specific frame just like the concept of a rest frame.
My initial answer was similar to BenCrowell's, and I made the same oversimplification mistake that his initial answer does. There are no spatial coordinates for the standing wave (even in its rest frame) in which the fields are a constant $\mathbf{E}=\mathbf{B}=0$. The existence of such would simplify the mental image of this scenario greatly, but unfortunately that is not the case.
However, in the rest frame there are spatial planes in which the electric field is always zero, and other planes in which the magnetic field is always zero. There are even discrete times at which, on certain spatial planes, both $\mathbf{E}$ and $\mathbf{B}$ are zero. Since these do not exist at all times, it becomes a bit difficult to discuss them 'moving', but a similar idea does appear.
In the original frame, away from a node the time averaged Poynting vector is zero, but the energy density is not zero. When transforming to the new frame, the components of the momentum and energy density will mix up (technically the stress energy tensor). So it should make conceptual sense that after the transformation the time averaged momentum density is no longer zero (also seen conceptually via your red/blue shifting argument).
So, does this count as a "standing wave"?
No. There are no longer constant node planes of $\mathbf{E}=0$ or $\mathbf{B}=0$, and there is now non-zero time averaged momentum density (so the stress energy tensor is that of something "moving" not "standing").
Calculation details:
Consider a standing wave formed from two plane waves travelling in the +/- x direction, linearly polarized in the z direction:
$$\mathbf{E} = \mathbf{\hat{z}} E_0\left[\sin(kx - \omega t) + \sin(-kx - \omega t)\right] $$
$$\mathbf{B} = \mathbf{\hat{y}} B_0\left[-\sin(kx - \omega t) + \sin(-kx - \omega t)\right] $$
For a single plane wave, the nodes where $\mathbf{E}=\mathbf{B}=0$ are moving planes in space (so contiguous volumes in spacetime). For the above sum of waves, the nodes are still planes in space, but only show up at discrete times.
Looking at where the electric field is zero:
$$\begin{align*}
0 &= \left[\sin(kx - \omega t) + \sin(-kx - \omega t)\right] \\
&= \sin(kx)\cos(\omega t) - \cos(kx)\sin(\omega t) - \sin(kx)\cos(\omega t) - \cos(kx)\sin(\omega t) \\
&= -2 \cos(kx)\sin(\omega t)
\end{align*} $$
Similarly for the magnetic we will find:
$$0 = -2 \sin(kx)\cos(\omega t)$$
which is similar, just shifted by a quarter wavelength in space and quarter period in time.
Focusing on the nodes in the electric field first, these are planes located at:
$$ kx = \pi \left(n+\frac{1}{2}\right) \quad \rightarrow \quad x = a_n = \frac{\pi}{k}\left(n+\frac{1}{2}\right) $$
with $n$ any integer, for any value of $y,z,t$ (so a constant plane in space). The electric field will also be zero everywhere when $\omega t = \pi m$ with $m$ any integer.
For the magnetic field, there are planes located at:
$$ kx = \pi n \quad \rightarrow \quad x = b_n = \frac{\pi n}{k} $$
with $n$ any integer, for any value of $y,z,t$ (again a constant plane in space). The magnetic field will also be zero everywhere when $\omega t = \pi (m+1/2)$ with $m$ any integer.
We therefore see spatial planes that show up with $\mathbf{E}=\mathbf{B}=0$ at $x=a_n$, $\omega t=\pi (m+1/2)$ and at $x=b_n$, $\omega t=\pi m$.
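These node planes can be verified numerically; the sketch below is added for illustration (it is not part of the original answer) and uses units with $E_0 = B_0 = k = \omega = 1$:

```python
# Numerical check of the node planes derived above, in units where
# E0 = B0 = k = omega = 1.  E(x, t) and B(x, t) are the bracketed
# amplitudes of the two counter-propagating waves.
import math

def E(x, t):
    return math.sin(x - t) + math.sin(-x - t)      # = -2 cos(x) sin(t)

def B(x, t):
    return -math.sin(x - t) + math.sin(-x - t)     # = -2 sin(x) cos(t)

# E vanishes on the planes x = pi*(n + 1/2) at every time t ...
for t in (0.0, 0.3, 1.7):
    assert abs(E(math.pi / 2, t)) < 1e-12
# ... and B vanishes on the planes x = pi*n at every time t.
for t in (0.0, 0.3, 1.7):
    assert abs(B(math.pi, t)) < 1e-12

# Both fields vanish together at x = pi/2 when omega*t = pi*(m + 1/2):
assert abs(B(math.pi / 2, math.pi / 2)) < 1e-12
# ... but away from these events the fields are generically nonzero.
assert abs(E(0.0, 0.3)) > 0.1
```

The last two checks show the true spacetime nodes (where the full electromagnetic tensor vanishes) occur only at the discrete events listed above, not along whole worldlines.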
Let's define the new coordinates as a Lorentz transformation for the original coordinate system:
$$\begin{align*}
ct' &= \gamma(ct - \beta x) \\
x' &= \gamma(x - \beta ct) \\
y' &= y \\
z' &= z
\end{align*}$$
with $\beta=\frac{v}{c},\gamma=[1-\beta^2]^{-1/2}$.
The transformation laws written out for the fields gives the fields in this new frame as:
$$\begin{align}
& \mathbf {{E}_{\parallel}}' = \mathbf {{E}_{\parallel}}\\
& \mathbf {{B}_{\parallel}}' = \mathbf {{B}_{\parallel}}\\
& \mathbf {{E}_{\bot}}'= \gamma \left( \mathbf {E}_{\bot} + \mathbf{ v} \times \mathbf {B} \right) \\
& \mathbf {{B}_{\bot}}'= \gamma \left( \mathbf {B}_{\bot} -\frac{1}{c^2} \mathbf{ v} \times \mathbf {E} \right)
\end{align}$$
While this is messy to write out in general, at the node planes all the components of $\mathbf{E}$ and $\mathbf{B}$ are zero, and so $\mathbf{B}'=0$ and $\mathbf{E}'=0$.
Note that since $v$ is perpendicular to $\mathbf{E}$ and $\mathbf{B}$, the previous spatial planes where $\mathbf{E}=0$ independent of time, or $\mathbf{B}=0$ independent of time, do not maintain this nice form after transformation as the electric and magnetic parts of the field will mix in the transformation. The only truly coordinate system independent 'nodes' are the ones in which the full electromagnetic tensor is zero.
Now all that is left is to find the coordinates of these nodes in the new frame. Here I'll just work it out for the $a_n$ planes.
In the original frame the nodes are at spacetime coordinates $x = x_n = a_n$, $t=\pi (m+1/2)/\omega = T_m$, while $y,z$ can have any value. Therefore in the new frame
$$ ct' = \gamma(cT_m - \beta x_n)$$
Simultaneity is broken, so instead of all the spatial planes appearing at the same time, they occur in order down the x axis. If there were a series of bells on the x-axis that rang whenever the electromagnetic tensor was zero, they would ring simultaneously at a set period in the rest frame, but in the moving frame there would be something like travelling waves of bells going off along the x axis.
Since these are discrete events, as the transformation parameter $v$ is increased, there would eventually be aliasing effects, making it difficult to fully interpret as a moving "wave" of nodes.
$$ x' = \gamma x_n - \gamma \beta cT_m$$
So, similar to the original frame, $y',z'$ can take any value (the nodes are still planes), with $x'$ taking discrete values (which now depend on time). With each appearance of node $n$, it will have moved along the $x$ axis. As before, with larger $v$ the aliasing of the discrete events makes it questionable to 'label' nodes by $n$ and watch them move along.
Some concluding remarks on the calculation:
Since we started with a superposition of two plane waves, we could also write the result in the new frame as a superposition of two transformed plane waves. Since these waves will now have mismatched spatial and time frequencies, there will no longer be nice static planes of zero electric or magnetic field, but the nodes where $\mathbf{E}=\mathbf{B}=0$ still exist and allow us to mentally picture at least some features of the solutions. | {
"domain": "physics.stackexchange",
"id": 16873,
"tags": "electromagnetism, special-relativity, waves"
} |
symbol lookup error: /snap/core20/current/lib/x86_64-linux-gnu/libpthread.so.0: undefined symbol: __libc_pthread_init, version GLIBC_PRIVATE | Question: I was trying to set up a simple transform broadcaster by following the Write a broadcaster (Python) tutorial from the ROS 2 Humble documentation.
I created the package, nodes, launch files, and everything else, including the addition of entry points, data files etc., and the package was successfully built using colcon. But when I tried to launch the node using the command below, an error occurred. I don't know what went wrong, but in order to rule out any human error in following the steps, I tried doing the same thing multiple times and got the same error each time.
Launch command I executed (given in the docs)
ros2 launch learning_tf2_py turtle_tf2_demo.launch.py
ERROR Log
jishnu@Legion:~/robotics/ros-git/tf2$ ros2 launch learning_tf2_py turtle_tf2_demo.launch.py
[INFO] [launch]: All log files can be found below /home/jishnu/.ros/log/2023-04-07-19-06-58-821889-Legion-2160358
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [turtlesim_node-1]: process started with pid [2160368]
[INFO] [turtle_tf2_broadcaster-2]: process started with pid [2160370]
[turtlesim_node-1] QSocketNotifier: Can only be used with threads started with QThread
[turtlesim_node-1] /opt/ros/humble/lib/turtlesim/turtlesim_node: symbol lookup error: /snap/core20/current/lib/x86_64-linux-gnu/libpthread.so.0: undefined symbol: __libc_pthread_init, version GLIBC_PRIVATE
[ERROR] [turtlesim_node-1]: process has died [pid 2160368, exit code 127, cmd '/opt/ros/humble/lib/turtlesim/turtlesim_node --ros-args -r __node:=sim'].
Answer: From Java: symbol lookup error: /snap/core20/current/lib/x86_64-linux-gnu/libpthread.so.0: undefined symbol: __libc_pthread_init, version GLIBC_PRIVATE on Stack Overflow:
Though I have already answered it here
The issue with how the VSCode Snap package libraries are configured to be used. They are setting the following environment variable GTK_PATH, which gets inherited by the VSCode Terminal.
Unsetting the environment variable in the VSCode terminal does seem to work for me.
unset GTK_PATH
Or you can clear out the environment variable by using the VSCode Configuration, as mentioned in the other answer.
You will probably need to run unset GTK_PATH each new session/login/reboot. To set it permanently, put the command in your profile.
Note: It is a good idea to make a note of what the value of GTK_PATH is, prior to unsetting it - that way you can revert it back to its previous value, should you encounter any issues. To see what its "usual" value is, run echo $GTK_PATH, prior to unsetting it... If you have already unset it, open a new terminal window, which should have the unset value of GTK_PATH (if not, try a reboot). | {
"domain": "robotics.stackexchange",
"id": 2675,
"tags": "ros-humble"
} |
Determine if $ y[n] = ny[n-1] + x[n]$ is linear time invariant and BIBO stable | Question: Check if the following system is linear time invariant and BIBO stable..
$$
y[n] = ny[n-1] + x[n]
$$
for $n\ge 0$. We are also given that the system is at rest (i.e. $y[−1] = 0$).
I know that to show linearity, we show that
$
H[ax_1 + bx_2] = aH[x_1] + bH[x_2]
$
except that the question has a $y[n-1]$ on the RHS which is throwing me off.
Edit: My solution:
$$LHS = H[ax_1[n] + bx_2[n]] = n(ay_1[n-1] + by_2[n-1]) + (ax_1[n] + bx_2[n])$$
$$RHS = aH[x_1] + bH[x_2] = any_1[n-1] + ax_1[n] + bny_2[n-1] + bx_2[n]$$
So $RHS = LHS$.
Edit 2: New issue when I try and show time invariance.
If $x_2[n] = x_1[n-k]$ then prove $y_2[n] = y_1[n-k]$.
But
$$RHS = (n-k)y_1[n-k-1] + x_1[n-k]$$
and
$$LHS = ny_2[n-1] + x_2[n] = ny_2[n-1] + x_1[n-k]$$
Answer: I'll give you some hints that hopefully will allow you to do your homework yourself.
In order to check linearity, write down $3$ difference equations: one for the response $y_1[n]$ to an input signal $x_1[n]$, one for the response $y_2[n]$ to an input signal $x_2[n]$, and one for the response $y_3[n]$ to an input signal $ax_1[n]+bx_2[n]$. Multiply the first difference equation by $a$, the second one by $b$, and add them. If the resulting difference equation for the sequence $ay_1[n]+by_2[n]$ is the same as for $y_3[n]$, then you can conclude $y_3[n]=ay_1[n]+by_2[n]$, and, consequently, the system must be linear.
For checking time-invariance, write down the difference equation for the response $y_2[n]$ to the input $x[n-k]$. Then replace in the original difference equation the index $n$ by $n-k$ and check if the two difference equations are the same. If they are, the system is time-invariant.
To answer the question about BIBO stability, ask yourself what the output signal will be for $x[n]=\delta[n]$ and draw your conclusion.
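As a numerical companion to these hints, the recursion can be simulated directly. The sketch below is added here for illustration (it is not part of the original answer) and gives away the conclusions, so work through the hints first:

```python
# Simulate y[n] = n*y[n-1] + x[n] with y[-1] = 0 (system at rest),
# test time-invariance by delaying the input, and look at the
# response to the bounded input x[n] = delta[n].
from math import factorial

def response(x):
    out, prev = [], 0              # prev holds y[n-1]; y[-1] = 0
    for n, xn in enumerate(x):
        prev = n * prev + xn
        out.append(prev)
    return out

delta         = [1, 0, 0, 0, 0]
delayed_delta = [0, 1, 0, 0, 0]

h         = response(delta)          # [1, 1, 2, 6, 24]
h_delayed = response(delayed_delta)  # [0, 1, 2, 6, 24]

# Time-varying: delaying the input does NOT just delay the output.
shifted_h = [0] + h[:-1]             # [0, 1, 1, 2, 6]
assert h_delayed != shifted_h

# Not BIBO stable: the bounded input delta[n] produces y[n] = n!,
# which grows without bound.
assert h == [factorial(n) for n in range(5)]
```

The factorial impulse response is exactly the $n!/k!$ weighting that appears in the closed form later in this answer.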
Now that you've figured out the solution I would like to point out that you can also find an explicit (non-recursive) expression for $y[n]$:
$$\begin{align} y[0]&=x[0]\\
y[1]&=1\cdot x[0]+x[1]\\
y[2]&=2\cdot 1\cdot x[0]+2\cdot x[1]+x[2]\\
y[3]&=3\cdot 2\cdot 1\cdot x[0]+3\cdot 2\cdot x[1]+3\cdot x[2]+x[3]\\
&\vdots\\
y[n]&=\sum_{k=0}^n\frac{n!}{k!}x[k],\qquad n\ge 0\tag{1}
\end{align}$$
from which it is very straightforward to show that the system is linear and time-varying. Furthermore, from $(1)$ it should be clear that the system is not BIBO-stable. | {
"domain": "dsp.stackexchange",
"id": 6390,
"tags": "linear-systems, homework, stability"
} |
SOLIDWORKS - Bolting a COTS part from a STEP File onto an Assembly | Question: Problem
I have a STEP file with a few holes for bolts. I'm hoping to bolt this part into stock material in Solidworks.
Example
My first thought is to extend the holes into other connected components in an assembly to ensure both parts now have the same hole profile.
I imported a STEP file into this basic assembly.
Then I mated the part to the assembly so it can't move.
Then I drew onto the face on the assembly that needed to be cut.
Then I exited the sketch, and Solidworks took me to the "Extruded cut" dialog.
Now holes are cut through the material.
Finally, I find and import the STEP models of the bolts they provided to my assembly. Each one is manually mated.
There has to be a more standard/convenient way to bolt in COTS parts, I'd hope?
Answer: The only other way to do this that I know of is to draw circles and extruded cut as usual to make the holes, then edit the circles in the assembly and create an external reference between your drawing and the holes in the STEP part. | {
"domain": "engineering.stackexchange",
"id": 4795,
"tags": "mechanical-engineering, solidworks, bolting"
} |
Magnetic field and magnetic domains | Question: I am confused about how a magnetic field propagates in a medium (ferromagnets, air, etc.). It is known that when a ferromagnet is exposed to an external magnetic field, the magnetic domains of the ferromagnet become aligned with the external field. My question is: does this alignment of domains ensure the propagation of the external magnetic field in the ferromagnet? The same question for air: does the magnetic field propagate in air because the magnetic domains of the air are aligned?
Answer: No alignment of magnetic domains in a medium is needed for a magnetic field to propagate in that medium. Magnetic fields can propagate through a vacuum despite the fact that there are no magnetic moments or domains at all in a vacuum, right?
Similarly, no dielectric moments are required for electric fields to propagate in a medium, and electric fields can travel perfectly well through a vacuum. | {
"domain": "physics.stackexchange",
"id": 30025,
"tags": "electromagnetism, magnetic-fields"
} |
Sterile Neutrinos and Supersymmetry | Question: Just read a Scientific American review of a recent article which suggests evidence for sterile neutrinos:
http://www.nature.com/news/cosmic-mismatch-hints-at-the-existence-of-a-sterile-neutrino-1.14752
The article is a little unclear about exactly what the evidence is, but my question is more general. A sterile neutrino would certainly not necessarily imply a fourth generation of the standard model - if memory serves there is some nice anomaly cancellation which would be ruined if we had four generations. I am mostly interested in what sterile neutrinos would mean for supersymmetry - are they part of the MSSM? What is the minimal number of SUSY particles which must come along with such a neutrino? Do they HAVE to be SUSY, or can they be part of an extended SM?
Answer: Sterile neutrinos are not part of a standard SUSY model. Supersymmetry by itself doesn't do any better at explaining neutrino masses than the Standard Model. Adding sterile (i.e., not interacting with any known particles) neutrinos to either the Standard Model or a supersymmetric model (such as the MSSM) can naturally explain the observed light neutrino masses. This is most often done using what's known as a type I See-saw model where the new sterile neutrinos would have a very heavy mass which turns out to suppress the original neutrino masses, however there are several alternatives.
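The type I see-saw suppression can be made concrete with the textbook $2\times 2$ mass matrix; this is a standard sketch added here for illustration, not part of the original answer:

```latex
% One active neutrino with Dirac mass m_D coupled to one heavy
% sterile neutrino with Majorana mass M (type I see-saw):
M_\nu = \begin{pmatrix} 0 & m_D \\ m_D & M \end{pmatrix},
\qquad m_D \ll M .
% Diagonalizing for m_D << M gives one light and one heavy state:
|m_{\mathrm{light}}| \simeq \frac{m_D^2}{M}, \qquad
 m_{\mathrm{heavy}} \simeq M .
```

The heavier the sterile state, the lighter the mostly-active neutrino, which is the suppression referred to above.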
If sterile neutrinos were added in to a supersymmetric model then for each new sterile neutrino there must be one corresponding sneutrino (superpartner of the neutrino), since supersymmetry requires one boson for each fermion. | {
"domain": "physics.stackexchange",
"id": 12043,
"tags": "particle-physics, supersymmetry, neutrinos"
} |
How to upload a program in Qiskit runtime | Question: I'm trying to use qiskit runtime to run my own version of VQE (the idea is to later run other algorithms). I understand that to use the IBMRuntimeService.upload_program() function I need to store the program in some MY_PROGRAM.py file. What exactly does this file have to look like?
I have only found tutorials on running simple circuits (they don't have a classical optimisation part) or using pre-built Qiskit programs such as 'hello-world' or 'QAOA'.
Answer: I think this is what you are searching for:
Creating Custom Programs for Qiskit Runtime
From the tutorial description:
Here we will demonstrate how to create, upload, and use a custom Program for Qiskit Runtime. As the utility of the Runtime execution engine lies in its ability to execute many quantum circuits with low latencies, this tutorial will show how to create your own Variational Quantum Eigensolver (VQE) program from scratch. | {
"domain": "quantumcomputing.stackexchange",
"id": 4237,
"tags": "qiskit, vqe, qiskit-runtime"
} |
Could a helicopter fly with 1 propeller? | Question: If a helicopter had 2 stacked rotors with 1 propeller each, could it theoretically fly?
Answer: It could, but the stress on the axle of the rotor would be too high due to the non-uniform distribution of mass. Such a helicopter would be extremely uncomfortable to ride in due to the resulting vibrations. Moreover, the life of such a rotor would be much shorter than that of a symmetric two-bladed rotor, so even if you managed to make one, you wouldn't want to fly in it. | {
"domain": "physics.stackexchange",
"id": 97234,
"tags": "aerodynamics"
} |
Python Game Challenge - First to Five | Question: I have nearly one year of experience with C++, and for the past month I have been diving into Python. I came across this Python game challenge on the internet called First to Five.
The rules are the following: There are two players. Each player writes
a number. It can be any integer 1 or
greater. The players reveal their numbers. Whoever chose the lower
number gets 1 point, unless the lower number is lower by only 1, then
the player with the higher number gets 2 points. If they both chose
the same number, neither player gets a point. This repeats, and the
game ends when one player has 5 points.
I really like to use OOP; most of my practice code uses it, and I would love to know a little about your opinions and thoughts on this code. What/how can I improve? Is it good? Am I following conventions and good practices?
class Game:
game_win_point = 5;
def __init__(self):
self.victory = False;
self.winner = "";
self.player1 = Player();
self.player2 = Player();
def gatherInformation(self):
""" Gathers information about both players playing the game before the game starts. """
self.player1.setName(input("What's the name of player 1? >> "));
self.player2.setName(input("What's the name of player 2? >> "));
def check_number_input(self):
""" Checks if both players entered valid input. """
self.check_input(self.player1);
self.check_input(self.player2);
def check_input(self, player):
""" Checks if 'player' entered a valid input. """
try:
player.setCurrentNumber(int(input("{}'s number: >> ".format(player.name))));
if player.current_number <= 0:
raise ValueError;
except ValueError:
while not isinstance(player.current_number, int) or player.current_number <= 0:
print("{}, you have to input an integer greater than one!".format(player.name));
try:
player.setCurrentNumber(int(input("{}'s number: >> ".format(player.name))));
except ValueError:
player.setCurrentNumber(-1);
def update_game_score(self):
""" Updates the game score every round. """
if self.player1.current_number == self.player2.current_number:
# The numbers are the same.
print("The numbers are the same and nobody gets a point! The score is: {}: {} points VS {}: {} points".format(p1.name, p1.score, p2.name, p2.score));
else:
# The numbers are diferent.
greatest_number = max(self.player1.current_number, self.player2.current_number);
self.who_scores(greatest_number, self.player1, self.player2);
self.who_scores(greatest_number, self.player2, self.player1);
def who_scores(self, greatest_number, ps, pl):
"""
'greatest_number' stands for the maximum number inputted by player 1 and 2.
'ps' stands for 'possible scorer' and it is the player who is supposed to get a score.
'pl' stands for 'possible loser' and it is the player who is NOT supposed to get a score.
Decides who scores a point.
"""
if (greatest_number == ps.current_number):
if pl.current_number == (ps.current_number - 1):
# Then, 'possible_scorer' scores, because the 'possible_loser.current_number' is smaller by just 1.
ps.goal(point=2);
else:
# Then, 'possible_loser' scores, because the 'possible_loser.current_number' is the smallest.
pl.goal();
def check_winner(self):
""" Checks if there's a game winner. """
if self.player1.score >= self.game_win_point:
self.setVictory(True);
self.setWinner(self.player1);
if self.player2.score >= self.game_win_point:
self.setVictory(True);
self.setWinner(self.player2);
def setVictory(self, victory):
""" Sets the game victory variable to True if one of the players reach 5
or more points. Otherwise, it remains False."""
self.victory = victory;
def setWinner(self, winner):
""" Sets the specified winner player by the 'winner' attribute. """
self.winner = winner;
class Player:
def __init__(self):
self.score = 0;
self.name = "no_name";
self.current_number = -1;
def setScore(self, score):
""" Setter/Mutator to set a player's score. """
self.score = score;
def setName(self, name):
""" Setter/Mutator to set a player's name. """
self.name = name;
def setCurrentNumber(self, current_number):
""" Setter/Mutator to set a player's current inputted number. """
self.current_number = current_number;
def goal(self, point=1):
""" Increases this player's score. """
self.score += point;
def main():
print("Welcome to the game! This game was made for two players.");
game = Game();
game.gatherInformation();
while not game.victory:
# Getting input. If it is not a correct input, let's loop while the user gives uses a proper one.
game.check_number_input();
print("{} typed {}, and {} typed {}".format(game.player1.name, game.player1.current_number, game.player2.name, game.player2.current_number));
# Checking who earns a points, if there's someone.
game.update_game_score();
# Displaying the results of this round.
print("Results: {}: {} points VS {}: {} points.".format(game.player1.name, game.player1.score, game.player2.name, game.player2.score));
# Checking if there is a winner.
game.check_winner();
print("Congratulations, {}! You're the winner.".format(game.winner.name));
if __name__ == "__main__": main();
Answer: 1. Review
In Python, there's no need to use a semi-colon to end a statement.
Similarly, there's no need to put parentheses around the condition of an if statement.
The docstrings on the methods are good, but there should also be docstrings on the classes.
There's a bug in Game.update_game_score. If both players enter the same number, there's a NameError because the code uses p1 and p2 instead of self.player1 and self.player2.
The message "you have to input an integer greater than one" isn't right: you can also enter the number 1.
The Game.check_input method uses the methods and attributes of a single Player object. This suggests that it should be a method on the Player class.
The check_input method is misleadingly named, because it gets the input as well as checking it.
Instead of a separate gatherInformation function, it would make sense to gather the player's name in Player.__init__.
There's a lot of repeated code of the form:
do_something(self.player1)
do_something(self.player2)
This repetition can be avoided by putting the two players into a list, and iterating over the list:
for player in self.players:
do_something(player)
In Python, we don't generally bother with setters or mutators, we just update the appropriate attribute. So instead of:
self.setVictory(True)
we'd just write:
self.victory = True
and avoid the need for a setVictory method. The reason is that if we later need to do something clever whenever an attribute is set, we can do that using the property decorator. We don't have to plan it in advance by writing getters and setters.
(But later on we'll see that we don't even need a victory attribute.)
The logic in check_input is quite complex and repetitive. It could be simplified so that there is just one call to input, and just one try: ... except:. If you follow my suggestion of making this a method on the Player class then it becomes:
while True:
try:
self.number = int(input(f"{self.name}'s number: >>"))
if self.number > 0:
return
except ValueError:
pass
print(f"{self.name}, you have to input an integer at least one!")
Note that there is no need for the test not isinstance(player.number, int) because if the call to int succeeded then self.number must be an int.
The logic in update_game_score is quite complex. One way to simplify it would be like this:
diff = self.player1.number - self.player2.number
if diff < -1:
self.player1.score += 1
elif diff == -1:
self.player2.score += 2
elif diff == 0:
print("The numbers are the same and nobody gets a point!")
elif diff == 1:
self.player1.score += 2
else:
assert diff > 1
self.player2.score += 1
The reason for writing it like this is to make it easy for a reader to check that we've covered all five cases.
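Since ease of checking the five cases is the whole point, here is a standalone sketch that encodes them as a pure function and asserts each case against the stated rules (the name `score_round` and the tuple return value are my own choices, not part of the original game):

```python
def score_round(n1, n2):
    """Return (points for player 1, points for player 2) for one round,
    following the First to Five rules quoted in the question."""
    diff = n1 - n2
    if diff < -1:
        return (1, 0)   # player 1 is lower by more than 1 -> 1 point
    elif diff == -1:
        return (0, 2)   # player 1 is lower by exactly 1 -> player 2 gets 2
    elif diff == 0:
        return (0, 0)   # tie: nobody scores
    elif diff == 1:
        return (2, 0)   # player 2 is lower by exactly 1 -> player 1 gets 2
    else:
        return (0, 1)   # player 2 is lower by more than 1 -> 1 point

# One assertion per case, matching the rules quoted in the question.
assert score_round(3, 3) == (0, 0)
assert score_round(2, 3) == (0, 2)
assert score_round(3, 2) == (2, 0)
assert score_round(1, 5) == (1, 0)
assert score_round(5, 1) == (0, 1)
```

Keeping the scoring as a pure function of the two numbers also makes it easy to unit-test in isolation.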
The code is now short enough that it makes sense to inline all the methods into main and remove the Game class.
2. Revised code
GAME_WIN_POINTS = 5 # Number of points needed to win.
class Player:
"""A player in the game, with attributes:
name: str -- The name of the player
number: int -- Most recently input number
score: int -- Current score
"""
def __init__(self, n):
self.name = input(f"What's the name of player {n}? >> ")
self.score = 0
def __str__(self):
return f"{self.name}: {self.score} points"
def get_input(self):
"Get input from the player and update player.number."
while True:
try:
self.number = int(input(f"{self.name}'s number: >> "))
if self.number > 0:
return
except ValueError:
pass
print(f"{self.name}, you must input an integer at least one!")
def main():
"Play the game."
print("Welcome to the game! This game was made for two players.")
player1, player2 = players = [Player(i) for i in (1, 2)]
while True:
# Get input.
for player in players:
player.get_input()
# Update score.
diff = player1.number - player2.number
if diff < -1:
player1.score += 1
elif diff == -1:
player2.score += 2
elif diff == 0:
print("The numbers are the same and nobody gets a point!")
elif diff == 1:
player1.score += 2
else:
assert diff > 1
player2.score += 1
# Display scores.
print(f"Results: {player1} VS {player2}.")
# Check for a winner.
for player in players:
if player.score >= GAME_WIN_POINTS:
print(f"Congratulations, {player.name}! You're the winner.")
return
if __name__ == "__main__":
main() | {
"domain": "codereview.stackexchange",
"id": 30681,
"tags": "python, game"
} |
Self Consistent Field method and LCAO | Question: I am reading about the Self Consistent Field Method and Linear Combination of Atomic Orbitals.
Suppose we have one electron and one nucleus, then we can solve the Schrodinger equation explicitly.
If we have two electrons and two nuclei, and we neglect the inter-electronic repulsion, then the individual Hamiltonian becomes
$h_i=-\frac{1}{2}\nabla_i^2-\sum_\alpha\frac{Z_{\alpha}e^2}{r_{i\alpha}}$
So, $H=h_1+h_2$
We can write the total wavefunction as a product of the individual wavefunctions.
$\psi(x_1,x_2)=\psi_i(x_1)\psi_j(x_2)$
Now we can expand the individual wavefunctions in basis of hydrogenic wavefunctions.
$\psi_i(x_1)=\sum_{n=1}^kc_{n1}\phi_n(x_1)$
Similarly, $\psi_j(x_2)=\sum_{n=1}^kc_{n2}\phi_n(x_2)$
So, $E=E_1+E_2$
$\displaystyle\langle E_1\rangle=\frac{\sum_{n,n'}a_{n'}a_n\int \phi_{n'}\hat H\phi_ndx}{\sum_{n,n'}a_{n'}a_n\int \phi_{n'}\phi_ndx}=\frac{\sum_{n,n'}a_{n'}a_n H_{n',n}}{\sum_{n,n'}a_{n'}a_nS_{n',n}}$
Now, minimizing this $\langle E_1\rangle$ we get $k$ linear equations in $k$ variables.
$\begin{pmatrix} H_{11}-E_1S_{11} & H_{12}-E_1S_{12} & ... & H_{1k}-E_1S_{1k} \\
H_{21}-E_1S_{21} & H_{22}-E_1S_{22} & ... & H_{2k}-E_1S_{2k} \\
... \\
H_{k1}-E_1S_{k1} & H_{k2}-E_1S_{k2} & ... & H_{kk}-E_1S_{kk}\end{pmatrix}\begin{pmatrix}c_{11}\\c_{21}\\...\\c_{k1}\end{pmatrix}=\begin{pmatrix}0\\0\\...\\0\end{pmatrix}$
For the non-trivial solution, we have to set the determinant of the above matrix to be 0, and as a result we get $k$ values of $E_1$. The lowest value of $E_1$ will be a good approximation to the ground-state energy. Corresponding to this we get the set of $c_{n1}$'s.
So, we get $\psi_i(x_1)$ and $E_1$ by the variational principle.
Similarly we can find the $\psi_j(x_2)$
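The variational machinery above can be made concrete with a toy $k=2$ basis. For a $2\times2$ problem, $\det(H - E_1 S) = 0$ is just a quadratic in $E_1$, solvable by hand; the matrix elements below are invented numbers purely to illustrate the mechanics, not real hydrogenic integrals:

```python
import math

# Toy 2x2 symmetric Hamiltonian and overlap matrices (illustrative values only).
H11, H12, H22 = -1.0, -0.4, -0.5
S11, S12, S22 = 1.0, 0.2, 1.0

# det(H - E*S) = 0 expands to the quadratic a*E^2 + b*E + c = 0:
a = S11 * S22 - S12 * S12
b = 2 * H12 * S12 - H11 * S22 - H22 * S11
c = H11 * H22 - H12 * H12

disc = math.sqrt(b * b - 4 * a * c)
E_low = (-b - disc) / (2 * a)    # lower root: the ground-state estimate
E_high = (-b + disc) / (2 * a)   # higher root: an excited-state estimate

# Back-substitute the first secular equation to get the coefficient
# ratio c2/c1 for the ground-state root:
ratio = -(H11 - E_low * S11) / (H12 - E_low * S12)
```

With these numbers the lower root comes out as $E_1 = -1.0625$ and the coefficient ratio as $1/3$; with real integrals the same procedure gives the LCAO ground state.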
But we have to take into account the spin also. So there are four possible states.
$\chi_1(x_1)=\psi_1(x_1)\alpha(\omega)$
$\chi_2(x_1)=\psi_1(x_1)\beta(\omega)$
$\chi_3(x_2)=\psi_2(x_2)\alpha(\omega)$
$\chi_4(x_2)=\psi_2(x_2)\beta(\omega)$
$\alpha$ corresponds to spin up and $\beta$ corresponds to spin down.
So, the total wavefunction $\bar\psi(x_1,x_2)=\text{Slater determinant}(\chi_i(x_1)\chi_j(x_2))$
where $i$ can take value $1$ or $2$ and $j$ take $3$ or $4$.
Doubts
i) Is my above analysis correct? While forming combine wavefunction $(\bar\psi)$ how can we determine which combination of $\chi$ will be the lowest in energy? Or is this arbitrary?
ii) In $H_2$ molecule, I have seen that the HOMO is $\sigma_{1s}$ which is formed by the linear combination of $1s$ orbital of first and second $H$ atom. How is this possible?
In the above analysis, we are expanding the $\psi_i(x_1)$ as the linear combination of $1s$, $2s$, etc orbitals of first hydrogen atom. Similarly, $\psi_j(x_2)$ is expressed as the linear combination of $1s$, $2s$, etc orbitals of second hydrogen atom. What is the point of expressing the $\psi_i(x_1)$ as the linear combination of the atomic orbitals of both the hydrogen atoms?
Am I missing something?
Please clarify the doubt.
Answer: I will provide an answer in multiple parts, and most of the information I describe here comes from source [2].
For a $\ce{H2}$ molecule, the Hamiltonian must consider the interactions between both electrons with both nuclei. So the condensed version would be as described in source [1] and applying the Born-Oppenheimer approximation:
$$\hat{H} = -\frac{1}{2}(\nabla^2_1 + \nabla^2_2) - \frac{1}{r_{1A}} - \frac{1}{r_{1B}} - \frac{1}{r_{2A}} - \frac{1}{r_{2B}} + \frac{1}{r_{12}} + \frac{1}{R}$$
Here, $r_{iA}$ represents how far electron $i$ is from atom A and atom B; $i=1,2$. $r_{12}$ accounts for the inter-electronic interaction and $R$ represents the distance between nuclei. While $R$ can be fixed, the energies are typically plotted against the internuclear distance like a function $E(R)$, hence where you get the Morse potential from.
History and Methods
We will focus on the spatial part for now. In 1927, Heitler and London proposed a method to analyze $\ce{H2}$ in particular called the valence-bond method. The point here is that we'll be constructing a trial function to approximate the actual wave function. There are several different approaches, one of which involves the linear combination of atomic orbitals with molecular-orbital method, but I'll get into that later. First, let's consider a trial wavefunction between two hydrogen atoms that are infinitely apart:
$$\psi_1 = 1s_A(1)1s_B(2)$$
Where $1s_A(1)$ represents electron 1 on the 1s orbital of atom A and $1s_B(2)$ represents electron 2 on the 1s orbital of atom B. One thing we have to consider here is that when the atoms are put closer together, the electrons are indistinguishable. That means we must also consider
$$\psi_2 = 1s_A(2)1s_B(1)$$
For example, electron 2 may be on $1s_A$ and electron 1 on $1s_B$ from the viewpoint of atomic orbitals. This means that the overall wave function can be represented as a linear combination of the two
$$\Psi_{VB} = c_1\psi_1 + c_2\psi_2$$
However, we should also include an ionic term. For example, if two electrons are on one side of the molecule, you'll have a positively and negatively charged molecules on either sides.
$$\Psi_{ionic} = 1s_A(1)1s_A(2) + 1s_B(1)1s_B(2)$$
We will come back to this, but we need to first talk about $\ce{H2+}$ and molecular orbitals.
$\ce{H2+}$ Molecular Orbitals
In the molecular-orbital method, we essentially derive the molecular wave functions as a product of molecular orbitals (which are single-electron molecular wave functions, see source [4]).
So, to understand how we came to the trial wave function for $\ce{H2}$, we need to first look at $\ce{H2+}$. The Hamiltonian is
$$\hat{H} = -\frac{1}{2}\nabla^2 - \frac{1}{r_A} - \frac{1}{r_B} + \frac{1}{R}$$
Where we have two nuclei A and B and one electron. While it is possible to solve this system exactly, we will solve it approximately to construct the molecular wave function for $\ce{H2}$ eventually. So take:
$$\psi_{\ce{H2+}} = c_11s_A + c_21s_B$$
This is a linear combination of atomic orbitals. Again, the point is to create a trial wave function that can approximate the actual wave function of $\ce{H2+}$ well. Taking a linear combination of atomic orbitals has shown success, as we will later see in the treatment of $\ce{H2}$, and that is why it is widely used. The reason why we take the linear combination from both atoms is because they both contribute to the overall wave function.
For this trial wave function, the associated secular determinant would be
$$\begin{vmatrix}
H_{AA}-E\cdot1 & H_{AB}-ES\\
H_{AB}-ES & H_{BB}-E\cdot1\\
\end{vmatrix}$$
When we solve this, though it is somewhat complex and I don't want to type it all out, we will get two solutions, corresponding to the molecular orbitals, bonding and antibonding, formed with an $\ce{H2+}$ molecule. They are
$$\psi_+ = \frac{1}{\sqrt{2(1+S)}}(1s_A + 1s_B)$$
$$\psi_- = \frac{1}{\sqrt{2(1-S)}}(1s_A - 1s_B)$$
where $S = \int 1s_A\,1s_B\,\mathrm{d}r$ is the overlap integral. Source [2] provides the pathway to the solutions. Here, we've developed a set of molecular orbitals for $\ce{H2+}$ that carry over to $\ce{H2}$, and we can use them to tackle it.
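As a numerical aside: for hydrogen 1s orbitals in atomic units, the overlap integral has the standard closed form $S(R) = e^{-R}\left(1 + R + \frac{R^2}{3}\right)$ (a well-known result, not derived in the text above), which lets us evaluate $S$ and the bonding normalization factor at a given internuclear distance:

```python
import math

def overlap_1s(R):
    """Overlap S(R) between two hydrogen 1s orbitals a distance R apart
    (atomic units), using the standard closed-form expression."""
    return math.exp(-R) * (1.0 + R + R * R / 3.0)

# Near the H2+ equilibrium bond length (~2 bohr) the orbitals overlap strongly:
S_eq = overlap_1s(2.0)                          # ~0.586
N_plus = 1.0 / math.sqrt(2.0 * (1.0 + S_eq))    # bonding normalization factor

# Sanity checks: complete overlap at R = 0, essentially none at large R.
S0 = overlap_1s(0.0)
S_far = overlap_1s(50.0)
```

The large overlap (about 0.59 at 2 bohr) is exactly why the cross terms in the secular determinant matter and cannot be neglected for $\ce{H2+}$.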
$\ce{H2}$ Analysis
What the molecular-orbital method wants us to consider is constructing a molecular wave function by combining molecular orbitals. With $\ce{H2+}$, we found the molecular orbitals using a linear combination of atomic orbitals as a trial function. Since $\psi_+$ is the molecular orbital with the ground state energy, we can basically describe $\ce{H2}$ by filling that bonding molecular orbital with another electron with opposite spin. This can be represented by a Slater determinant
$$\Psi = \begin{vmatrix}
\psi_+\alpha(1) & \psi_+\beta(1)\\
\psi_+\alpha(2) & \psi_+\beta(2)\\
\end{vmatrix}$$
$$= \psi_+(1)\psi_+(2)\,\frac{1}{\sqrt{2}}[\alpha(1)\beta(2)-\alpha(2)\beta(1)]$$
Considering only the spatial part, we get
$$\Psi_{MO} = \frac{1}{2(1+S)}[1s_A(1) + 1s_B(1)][1s_A(2) + 1s_B(2)]$$
$$\propto 1s_A(1)1s_B(2) + 1s_A(2)1s_B(1) + 1s_A(1)1s_A(2) + 1s_B(1)1s_B(2)$$
Interestingly, if you do the math, this is almost just like combining the valence-bond method with the ionic terms, but without the coefficients. This method is called the LCAO-MO method. This essentially explains how we got to molecular orbital theory, why we express molecular orbitals as a linear combination (as in the section about $\ce{H2+}$), and how we use that to analyze $\ce{H2}$.
Although this is the basis of molecular orbital theory, it is the simple version. We should technically extend molecular-orbital theory: in our earlier equation with $\Psi_{VB}$ and $\Psi_{ionic}$, each term isn't contributing equally, so we should add coefficients to each term in our trial wave function and solve variationally. This is done by considering the excited antibonding state $\psi_-$, and, amazingly, when we do, we get precisely the form $\Psi_{CI} = c_1\Psi_{VB} + c_2\Psi_{ionic}$. The subscript in $\Psi_{CI}$ represents the extension of simple molecular-orbital theory by including excited-state configurations, known as Configuration Interaction.
In all, we could have used $\Psi_{VB}$, $\Psi_{MO}$ (simple version), and $\Psi_{CI}$ (extended version), or any other more complicated trial wave function to analyze $\ce{H2}$. It's just that the molecular-orbital method with configuration interactions considered is widely used. Source [2] uses this extended form to analyze $\ce{H2}$, but it's rather complicated.
Note that we are not taking the $\ce{2s}$ orbital into account in this analysis, as we are considering the minimal basis set, which leads to two molecular orbitals. Even in molecular orbital diagrams, you will see that the $\ce{H2}$ molecular orbitals arise from a linear combination of the 1s orbitals. We could choose a different trial function to approximate the wavefunction, for example: $\Psi = c_11s_A + c_21s_B + c_32s_A + c_42s_B$. What we find is that, when computed variationally, the ground-state molecular orbital becomes $\Psi = 0.7071(1s_A + 1s_B) + 0.00145(2s_A + 2s_B)$. The corresponding energy is essentially no improvement over solely analyzing the linear combination of $\ce{1s}$ orbitals with the valence-bond method (see source [2]). So, we consider solely $\ce{1s}$ here.
Your Analysis
This derivation is in ways similar and different than yours. You basically have the form
$$\Psi_T = [c_11s_A(1) + c_21s_B(1)][c_31s_A(2) + c_41s_B(2)]$$
Sure, you could go about solving this by partitioning the Hamiltonian and ignoring the electron-electron repulsion term, but all you really get is the $\ce{H2+}$ wavefunction for each electron. Also remember that the energies depend on the internuclear distance, and that is considered in the analysis of $\ce{H2+}$. So if we don't include that in the Hamiltonian as well, then we aren't really analyzing a molecule anymore. Plus, the hard part is not constructing the secular determinant; the hard part is trying to mathematically analyze the integrals inside the secular determinant.
As for your comment on spin, for this investigation, we say that the Hamiltonian does not depend on the spin, so the energy is determined by the spatial elements of the wave function.
Sources
[1] http://websites.umich.edu/~chem461/QMChap10.pdf
[2] Quantum Chemistry - Donald A. McQuarrie
[3] https://aip.scitation.org/doi/pdf/10.1063/1.1701472?casa_token=If9tKkNIyIQAAAAA:xYrx4IoQExf2NFTRSNA-ElywJRrhTmbsRrYZIlSPx_vZS9e9DI5EHcMKyU2jJ0AqrIU2d821bbE
[4]http://lampx.tugraz.at/~hadley/ss1/molecules/mo.php#:~:text=In%20the%20simplest%20approximation%2C%20the,r%E2%88%92%E2%83%97ra%7C.
Other Sources
[5] https://www.sciencedirect.com/topics/mathematics/london-theory#:~:text=The%20Heitler%E2%80%93London%20(Ref.,as%20shown%20in%20Figure%2013.2.
[6] https://dasher.wustl.edu/chem478/reading/frontier-orbitals-anh.pdf
[7] http://websites.umich.edu/~chem461/QMChap10.pdf
[8]https://chem.libretexts.org/Courses/University_of_California_Davis/UCD_Chem_107B%3A_Physical_Chemistry_for_Life_Scientists/Chapters/5%3A_The_Chemical_Bond/5.2%3A_Valence_Bond_Theory#:~:text=along%20the%20axis.-,The%20Valence%20Bond%20Wavefunction,Equation%205.2.
[9] https://www.sciencedirect.com/topics/chemistry/molecular-orbital-method
[10] https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1951.0240 | {
"domain": "chemistry.stackexchange",
"id": 17049,
"tags": "quantum-chemistry, molecular-structure, molecular-orbital-theory, orbitals, molecular-mechanics"
} |
Start, stop and restart script for a MongoDB server | Question: I have created an start, stop, restart script for a unix-mongodb-server.
It would be nice if you could look over my code and give me some helpful hints on what I could change. I tested the script under my Mac and it works.
This is also available under my GitHub account and available under GitHub -> Mongo_Start_Stop.
VERSION=1.1.2
SCRIPTNAME=$(basename "$0")
MONGOHOME=
MONGOBIN=$MONGOHOME/bin
MONGOD=$MONGOBIN/mongod
MONGODBPATH=
MONGODBCONFIG=
if [ $# != 1 ]
then
echo "Usage: $SCRIPTNAME [start|stop|restart]"
exit
fi
pid() {
ps -ef | awk '/[m]ongodb/ {print $2}'
}
stopServer() {
PID=$(pid)
if [ ! -z "$PID" ];
then
echo "... stopping mongodb-server with pid: $PID"
sudo kill $PID
else
echo "... mongodb-server is not running!"
fi
}
startServer() {
PID=$(pid)
if [ ! -z "$PID" ];
then
echo "... mongodb-server already running with pid: $PID"
else
echo "... starting mongodb-server"
sudo "$MONGOD" --dbpath "$MONGODBPATH" --config "$MONGODBCONFIG"
fi
}
restartServer() {
stopServer
sleep 1s
startServer
}
case "$1" in
start) startServer
;;
stop) stopServer
;;
restart) restartServer
;;
*) echo "unknown command"
exit
;;
esac
Answer: Bug
Can you spot the bug on this line?
PID=$(ps -ef | grep "[m]ongodb" | awk {'print $2'})
The awk script here should be {print $2},
with the curly braces included,
so that needs to be within the single-quotes:
PID=$(ps -ef | grep "[m]ongodb" | awk '{print $2}')
Quoting path variables
It's good to make it a habit to always double-quote path variables in command statements, to protect from globbing and word splitting. Instead of:
SCRIPTNAME=$(basename $0)
Write like this:
SCRIPTNAME=$(basename "$0")
The same goes for this line:
sudo $MONGOD --dbpath $MONGODBPATH --config $MONGODBCONFIG
Useless variable and echo
PID is not used anywhere else inside the script but here:
function isRunning {
PID=$(ps -ef | grep "[m]ongodb" | awk '{print $2}')
echo $PID
}
You could simplify as:
pid() {
ps -ef | awk '/[m]ongodb/ {print $2}'
}
I also renamed the function, because pid() reflects better its purpose.
User experience
Something's odd in this condition:
rc=$(isRunning)
echo "... stopping mongodb-server with pid: $rc"
if [ ! -z "$rc" ]; then
sudo kill $rc
fi
If the process is not running $rc will be empty, and users may think the output is weird and the script is buggy.
You might want to rearrange this part,
for example if the server is not running, then say so.
Make the most of awk
Very often a grep ... | awk ... combo can be simplified,
because awk can filter all by itself. Instead of:
PID=$(ps -ef | grep "[m]ongodb" | awk '{print $2}')
You can write:
PID=$(ps -ef | awk '/[m]ongodb/ {print $2}')
Style
This is fine:
function isRunning {
# ...
}
But the recommended style for function declarations is like this:
isRunning() {
# ...
}
Variable initialization
This is fine:
MONGOHOME=""
MONGOBIN="$MONGOHOME/bin"
MONGOD="$MONGOBIN/mongod"
MONGODBPATH=""
MONGODBCONFIG=""
But you could write simpler as:
MONGOHOME=
MONGOBIN=$MONGOHOME/bin
MONGOD=$MONGOBIN/mongod
MONGODBPATH=
MONGODBCONFIG=
Redundant semicolon
The semicolon at the end of exit; is redundant, I suggest to remove it:
if [ $# != 1 ]
then
echo "Usage: $SCRIPTNAME [start|stop|restart]"
exit;
fi | {
"domain": "codereview.stackexchange",
"id": 25680,
"tags": "bash, shell, mongodb"
} |
Are wave functions built of other wave functions? | Question: I have heard the term wave function used in a number of contexts describing both quantum objects (electrons, etc) and more macroscopic objects (atoms, birds, cats, etc). Is it correct to say a lithium atom, for example, has a wave function that can be described as an entangled set of wave functions that describe the electrons, protons, etc that make up the atom? If so are we able to extend this to talk about the wave function of a bird or even the whole universe?
Answer: Take a Hydrogen atom (simpler than a Lithium one) made of just one electron and one proton.
The complete orthonormal bases that span the Hilbert spaces of the electron and of the proton are $\{|\phi_{\mathrm{el}}^i\rangle\}$ and $\{|\phi_{\mathrm{nuc}}^j\rangle\}$ respectively.
The state of the atom $|\Psi\rangle$ can then be expressed as
$$ |\Psi\rangle = \sum_{ij} c_{ij}|\phi_{\mathrm{el}}^i\rangle \otimes |\phi_{\mathrm{nuc}}^j\rangle, $$
i.e. the basis for the whole atom is the tensor product of the bases of the inner constituents.
In general, $c_{ij} \neq c_i c_j$, i.e. you cannot decouple the electron from the nucleus and you have to stick to an entangled wavefunction, where the behaviour of one is affected by the behaviour of the other.
If the inner constituents of the atom can be treated separately (e.g. under the Born-Oppenheimer approximation), then you can simplify the problem by solving two Schrödinger equations, two Hamiltonians etc.
In this case, the electron and the proton are disentangled, $c_{ij} = c_i c_j$ and so:
$$ |\Psi\rangle = \sum_{ij} c_{i} c_j|\phi_{\mathrm{el}}^i\rangle \otimes |\phi_{\mathrm{nuc}}^j\rangle = \sum_{i} c_{i} |\phi_{\mathrm{el}}^i\rangle \otimes \sum_j c_j |\phi_{\mathrm{nuc}}^j\rangle = |\Phi_{\mathrm{el}}\rangle \otimes |\Phi_{\mathrm{nuc}}\rangle, $$
i.e. you can construct the total state directly from the states of the constituents.
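The difference between $c_{ij} \neq c_i c_j$ (entangled) and $c_{ij} = c_i c_j$ (product) can be checked mechanically: for a $2\times2$ coefficient matrix, the state factorizes exactly when the determinant of the coefficient matrix vanishes (i.e. the matrix has rank 1). A small sketch with made-up amplitudes:

```python
import math

def is_product_state(c):
    """For a 2x2 coefficient matrix c[i][j], the state factorizes as
    c_ij = a_i * b_j iff the determinant vanishes (rank-1 matrix)."""
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    return abs(det) < 1e-12

# Product state: built as c_ij = a_i * b_j, so it must be separable.
a = [0.6, 0.8]
b = [1 / math.sqrt(2), 1 / math.sqrt(2)]
product = [[a[i] * b[j] for j in range(2)] for i in range(2)]

# Bell-like state: c_00 = c_11 = 1/sqrt(2), off-diagonals 0 -> entangled.
s = 1 / math.sqrt(2)
bell = [[s, 0.0], [0.0, s]]
```

For larger bases the same idea generalizes: the state is a product state exactly when the coefficient matrix has rank 1 (all its Schmidt coefficients but one vanish).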
This reasoning then extends to other atoms, and, eventually, to birds and cats. At least on paper.
EDIT 1: for macroscopic, non-isolated objects, decoherence and coupling to the environment cause quantum effects to wash out. You can still have macroscopic quantum objects, like superfluids, if carefully prepared and controlled.
EDIT 2: so if every single constituent of the universe were not interacting with anything else, then you could trivially construct the wavefunction of the whole universe just by taking the tensor product of the individual wavefunctions.
Of course stuff does interact so you can’t do this in practice.
The “wavefunction of the whole universe” is also (and maybe better) discussed in terms of density matrices, and pure and mixed states. I invite you to have a look at that stuff too. | {
"domain": "physics.stackexchange",
"id": 61528,
"tags": "quantum-mechanics, hilbert-space, wavefunction, quantum-entanglement, quantum-interpretations"
} |
Error: the position x and y subscribing from topic "/g500/pose" and "/uwsim/girona_500/odom" is still 0, and z is increasing | Question:
No matter what initial position I set, the position x and y subscribed from the topics "/g500/pose" and "/uwsim/girona_500/odom" is still 0, and z is increasing. But actually, I can see that the vehicle is exactly at the position which I set.
Originally posted by ZYS on ROS Answers with karma: 108 on 2016-02-25
Post score: 0
Answer:
Hi,
I guess you are using the uwsim underwater_vehicle_dynamics package, as the vehicle is "falling" (z increasing). When using the vehicle_dynamics package the initial position must be stated in it (no matter what you set in the uwsim XML config file), as that package is the one that sets the vehicle position in uwsim. Just set the position in the yaml file (g500/dynamics/initial_pose: ). We are still working to integrate the dynamics inside UWSim so everything is easier...
You also have to send messages to the thrusters_input, if you send velocities or positions to UWSim expect weird results as two nodes will be sending different information to UWSim.
Originally posted by Javier Perez with karma: 486 on 2016-02-25
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by ZYS on 2016-02-25:
Thank you. Then I set the position in the yaml file:g500/dynamics/initial_pose: [1, 1, 1]
But the result is still the same.
Comment by ZYS on 2016-02-25:
Oh I understand, I should set the position like this g500/dynamics/initial_pose: [1.0, 1.0, 1.0, 0, 0, 1.57]
Comment by ZYS on 2016-07-12:
Hi Javier, I run into another problem for the multibeam sensor. When I set to have 10 increase between -20 & 20 degree for it, it has 5 readings with the 1st one "NaN" and the last one "0.0". Also, in the GUI, I only see 4 green beams rather than expected 5 beams. How should I use the readings?
Comment by Javier Perez on 2016-07-13:
Hi, try to post a new question next time. I've seen there is a bug rounding numbers and eventually x/x != 1 so "NaN" results appear. I'm solving it and will upload it, but remember multibeam interpolates distances so you will get bad simulation with 10º increase, try using virtualrangesensors
Comment by ZYS on 2016-07-13:
Thank you so much for your rely. It is really helpful. | {
"domain": "robotics.stackexchange",
"id": 23904,
"tags": "ros, uwsim"
} |
Would a truly physical oscillation still be measured in hertz? | Question: I recently bought a new scroll saw and was commenting to someone about how it was a relatively slow saw... low ... RPMs (thinking like a circular saw). Then it occurred to me that not being a circle, the blade movement wouldn't really be measured in RPMs.
So I was trying to think what it would be measured in, and I realized it is moving up and down, oscillating, essentially like a wave... so thought maybe it would be measured in Hertz?
I looked up measures of frequency, and this article says that Hertz apply to physical waves such as sound (as well as EM waves). Technically this is not actually a sine wave, but it is essentially an oscillating movement.
So I was just wondering, would Hertz be the correct unit of measure for the movement of the blade on a scroll saw?
Answer: Indeed the Hertz unit is the correct unit to use. Hertz is a measure of oscillatory phenomena in
$$\frac{\mathrm{cycles}}{\mathrm{second}}$$
It doesn't matter that your scroll saw doesn't trace out a perfect sine wave, it's still oscillatory and Hertz is the right unit to measure that.
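As a concrete illustration: scroll-saw speeds are usually quoted in strokes per minute (SPM), and converting to hertz is just a division by 60, since 1 Hz is one cycle per second. The particular SPM values below are only illustrative, not specs of any actual saw:

```python
def spm_to_hz(strokes_per_minute):
    """Convert strokes (cycles) per minute to hertz (cycles per second)."""
    return strokes_per_minute / 60.0

slow = spm_to_hz(400)    # a slow setting: about 6.7 Hz
fast = spm_to_hz(1200)   # a fast setting: 20.0 Hz
```

So a blade running at 1,200 strokes per minute oscillates at 20 Hz, regardless of the exact shape of its motion.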
The Wikipedia article spells this out in better detail and uses a heartbeat as an example of a non-sinusoidal oscillation that can be measured in Hertz. | {
"domain": "physics.stackexchange",
"id": 8000,
"tags": "units, frequency, oscillators"
} |
Elongation in bar with unequal applied forces | Question: How is elongation in a uniform rod with unequal forces acting on opposite sides calculated? If applied forces are equal and opposite, the elongation is defined by the formula ($\delta = \frac{FL}{AE}$). How does the solution change for the case when forces are unequal (as shown)?
Answer: As you correctly note, the solution is different when the applied forces are not equal. The bar is not in static equilibrium: Both static and dynamic forces deform the bar in motion. These concepts are illustrated by superposition.
$$\delta = \delta_\text{static} + \delta_\text{dynamic} = \frac{F_\text{less}L}{AE} + \frac{(F_\text{more} - F_\text{less})L}{2AE}$$
During changes in acceleration (when $\frac{\mathrm{d}\,a(t)}{\mathrm{d}t} \neq 0$), forces and accelerations within the damped solid body are transient, $a = a(x,t)$, until they reach steady state, where $\frac{\partial\, a(x,t)}{\partial x} = 0$.
Transient deformations in solid bodies are illustrated by a mass/spring system, where each mass can be thought of as a differential mass element.
Newton's Second Law requires that the bar (of mass $M$) accelerate in the direction of $F_{net}$.
$$\sum F \;\text{on bar:} \qquad F_\text{more}-F_\text{less} = Ma \qquad \Rightarrow \;\therefore a = \frac{F_\text{more} - F_\text{less}}{M}$$
The derivation of deformation is shown when a single force acts on the bar.
Dynamic Deformation:
A Free Body Diagram is taken at an arbitrary cross-section of the bar, where the mass of the split body is $m = (\frac{M}{L})x$. Summation of forces acting on $m$ is solved for $T(x)$.
$$\sum F \;\text{on split body:} \qquad F_{o} - T = ma$$
$$F_{o} - T = \overbrace{\left(\frac{M}{L}x\right)}^\text{m} \overbrace{\left(\frac{F_{o}}{M}\right)}^\text{a} = \frac{F_{o}}{L}x \qquad \Rightarrow \qquad T = F_{o} - \frac{F_{o}x}{L}$$
$$\therefore T = F_{o}\left(1-\frac{x}{L}\right)$$
The static axial deformation ($\delta = \frac{FL}{AE}$) written in differential form:
$$\mathrm{d \delta} = \frac{T \mathrm{dx}}{AE} = \frac{[F_{o}(1-\frac{x}{L})]\mathrm{dx}}{AE}$$
Integrate differential deformation over the length of the bar to determine total deformation:
$$\delta = \int_0^L \mathrm{d \delta} \; = \frac{F_{o}}{AE} \int_0^L \left(1-\frac{x}{L}\right) \mathrm{dx} \implies \; \delta = \frac{F_{o}L}{2AE}$$
The derivation can be generalized to include both forces, where integration of $T(x) = F_\text{more} - \dfrac{F_\text{more}-F_\text{less}}{L}x$ results in the same solution given by superposition.
$$\therefore \delta = \; \overbrace{\frac{F_\text{more}L}{2AE} + \frac{F_\text{less}L}{2AE}}^\text{Integration} \;=\; \overbrace{\frac{F_\text{less}L}{AE} + \frac{(F_\text{more}-F_\text{less})L}{2AE}}^\text{Superposition}$$
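As a numeric cross-check of the result above, integrating the internal force $T(x) = F_\text{more} - \frac{F_\text{more}-F_\text{less}}{L}x$ reproduces the superposition formula. All of the load and bar values below are assumed examples, not from the question.

```python
# Numeric cross-check: integrating T(x) = F_more - (F_more - F_less)*x/L
# over the bar reproduces the superposition formula for the elongation.
# F_more, F_less, L, A, E are assumed example values.
F_more, F_less = 300.0, 100.0    # applied forces, N
L, A, E = 2.0, 1e-4, 200e9       # length (m), area (m^2), modulus (Pa)

n = 100_000
dx = L / n
# midpoint-rule integration of d(delta) = T(x) dx / (A E)
delta_int = sum((F_more - (F_more - F_less) * (i + 0.5) * dx / L) * dx
                for i in range(n)) / (A * E)
# superposition result from the text above
delta_sup = F_less * L / (A * E) + (F_more - F_less) * L / (2 * A * E)
# the two agree (the midpoint rule is exact for a linear integrand)
```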
References:
Stress and Deformation Analysis of Linear Elastic Bars in Tension
Introduction to Elasticity | {
"domain": "physics.stackexchange",
"id": 26916,
"tags": "newtonian-mechanics, forces, elasticity"
} |
What is the purpose of dead coils in the middle of a compression spring? | Question: What purpose do dead coils in the middle of a spring serve? For example, this spring from a ballpoint pen has two dead coils at each end and two in the center of the spring. Why add dead coils in the middle? What advantage does this create, or what disadvantage does this overcome? Is it a stability/buckling thing? Is it related to fatigue life? I've never seen this before and I couldn't find any other examples with a pretty extensive Google search, so I'm wondering if anyone has any insights. It's driving me bonkers as I just need to know. Thanks!
Answer: The dead coils on the ends of the spring furnish an (almost) flat surface, perpendicular to the spring axis, for the spring to engage the mating parts with. The dead coils in the middle effectively break the spring into two smaller springs, which will both compress in length upon loading without buckling, as NMech points out. | {
"domain": "engineering.stackexchange",
"id": 4447,
"tags": "springs"
} |
How to estimate filter coefficients of an all-pole system? | Question: Suppose we have measured the frequency response of a system that is known to be all-pole; measuring impulse response is not possible. What are the methods available, if any, to estimate the coefficients of the underlying IIR filter?
EDIT: Frequency response exists only as measured data, so closed mathematical expression is not available, nor is the phase information.
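For this exact setting (magnitude-only data, system known to be all-pole) one standard route is to turn the measured $|H(\omega)|^2$ into an autocorrelation sequence via the Wiener-Khinchin theorem and then solve the Yule-Walker equations for the coefficients. In the sketch below the "measured" response is synthesized from an assumed AR(2) model so the recovery can be checked; all numbers are illustrative.

```python
import numpy as np

# Sketch: estimate all-pole (AR) coefficients from magnitude response only.
# |H|^2 determines the process autocorrelation (Wiener-Khinchin), and the
# Yule-Walker normal equations then recover the coefficients.
a_true = np.array([1.0, -1.2, 0.72])      # denominator A(z); poles 0.6 +/- 0.6j
N, p = 4096, 2                            # number of frequency samples, order

A_f = np.fft.fft(a_true, N)               # A(e^{jw}) on a dense grid
mag_sq = 1.0 / np.abs(A_f) ** 2           # the "measured" |H(w)|^2 data

r = np.fft.ifft(mag_sq).real              # autocorrelation sequence
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a_est = np.linalg.solve(R, r[1 : p + 1])  # Yule-Walker normal equations

print(a_est)                              # close to the true [1.2, -0.72]
```

Note this only works because the all-pole assumption pins the filter down to minimum phase, which is exactly the kind of restriction the answer below refers to.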
Answer: Half of the information about a system is in the phase response, and half of it in the amplitude response. There's no way to reconstruct a filter if you know only either half, unless you strongly restrict the poles mathematically. | {
"domain": "dsp.stackexchange",
"id": 3097,
"tags": "filters, estimation, infinite-impulse-response"
} |
BH singularity? Infinite density | Question: How can the density of a region of space go from finite density to infinite when there are no numbers larger than any Aleph0 number but smaller than any Aleph1 number (no decimal point in front of it, of course)? Aren't Planck volumes and strings designed to sidestep infinities?
My point there, stated differently, is how can the density go from finite to infinite when there is a 'no-number gap' between finite and infinite, with Cantor losing his mind contemplating that gap? And did he not prove that none was constructible/possible?
The entropy of the visible universe is $\sim (10^{122})^2$, if I'm not too far off the mark. This is a stupendous number but no closer to infinity than any other integer or real. And there is no room in it to tuck a singularity away.
In simpler terms. Our BH has a finite mass. If it has a region of infinite density, that region must be infinitesimal. But the Planck length is the lower limit on the size of regions of space. There are, therefore, no points in space, no infinitesimals, only punctoids, my term of convenience, the utility of which may become obvious in future posts. And no infinities.
Answer: Most physicists believe that the prediction of an infinite-density singularity (though note that for a Schwarzschild spacetime, the singularity is a moment in time, NOT a point in space) is a flaw in general relativity rather than a real physical thing that happens, and that at some density roughly around $m_{p}/\ell_{p}^{3}$, where $m_p$ is the Planck mass and $\ell_{p}$ is the Planck length, quantum gravitational effects will take over and prevent a true singularity from forming. Obviously, without a working quantum theory of gravity, no one can know exactly how this happens, but this is the expectation. | {
"domain": "physics.stackexchange",
"id": 54648,
"tags": "general-relativity, black-holes, density, singularities"
} |
network interfaces configure for Hokuyo ust 10lx | Question:
Hi all, I have a Hokuyo UST-10LX LRF, which uses an Ethernet RJ45 connector rather than USB.
Sequence listed below:
Step 1. Use the 10LX's default setting, 192.168.0.10, as the LiDAR IP address.
Step 2. Set network.
$ sudo vi /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
allow-hotplug eth0
iface eth0 inet static
address 192.168.0.15
netmask 255.255.255.0
Step 3. Power on the LiDAR with a 12V@1000mA power supply.
Step 4. Plug RJ45 connector into PC's ethernet port.
Step 5. However, I get errors saying 'Destination Host Unreachable'.
$ ping 192.168.0.15
64 bytes from 192.168.0.15: icmp_seq=1 ttl=64 time=0.032 ms
64 bytes from 192.168.0.15: icmp_seq=2 ttl=64 time=0.037 ms ...
Pinging the PC works, but pinging the LiDAR fails:
$ ping 192.168.0.10
PING 192.168.0.10 (192.168.0.10) 56(84) bytes of data.
From 192.168.0.15 icmp_seq=1 Destination Host Unreachable
From 192.168.0.15 icmp_seq=2 Destination Host Unreachable
From 192.168.0.15 icmp_seq=3 Destination Host Unreachable ...
Thanks very much for your help!
Originally posted by dehou14 on ROS Answers with karma: 1 on 2017-04-19
Post score: 0
Answer:
Check the output of ip route and make sure that there is a route going to the 192.168.0.0/24 block using eth0.
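Once routing is fixed, it can also help to confirm the sensor answers at the application level. A hedged sketch: UST-series sensors commonly speak SCIP over TCP on port 10940, but treat both the port and the IP below as assumptions to verify against your own unit's configuration.

```python
import socket

# Quick application-level reachability check for the LiDAR.
# The default IP matches the question; port 10940 is the SCIP-over-TCP
# port commonly used by Hokuyo UST-series sensors (verify for your unit).
def lidar_reachable(host="192.168.0.10", port=10940, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # covers refused, unreachable, and timeout
        return False
```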
Originally posted by Geoff with karma: 4203 on 2017-04-20
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 27657,
"tags": "ros, ethernet, hokuyo"
} |
What are the next steps after stereo calibration? | Question:
I've completed the stereo calibration tutorial and have my sample images and parameters in a calibrationdata.tar.gz file. Where do I go from here?
I want to use this to create a depth map and point cloud for doing SLAM, but the tutorial doesn't mention anything about what to do with the calibrated camera. It only links to an old Willowgarage page which is long dead.
Although it's completely undocumented, digging around I found that the "Commit" button in the cameracalibrator.py gui saves a YAML file to ~/.ros/camera_info/head_camera.yaml. However, it's unclear what this file represents. The file content saved for my camera was:
image_width: 320
image_height: 240
camera_name: head_camera
camera_matrix:
rows: 3
cols: 3
data: [1034.231282067822, 0, 149.3986264583577, 0, 1059.726267583194, 27.5609680166836, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
rows: 1
cols: 5
data: [0.2006669805992255, -0.920386733283972, -0.06710445687079475, 0.01907417769046314, 0]
rectification_matrix:
rows: 3
cols: 3
data: [0.9491200720659563, 0.03439773884992495, 0.3130301652613153, -0.03871893115684572, 0.9992212648619694, 0.007596592506998183, -0.3125250920671578, -0.01933027184646673, 0.94971280259811]
projection_matrix:
rows: 3
cols: 4
data: [1300.089955798789, 0, -170.1720094680786, -89.58782554834289, 0, 1300.089955798789, 9.835054516792297, 0, 0, 0, 1, 0]
However, the raw ost.txt file exported to the /tmp/calibrationdata.tar.gz contained:
# oST version 5.0 parameters
[image]
width
320
height
240
[narrow_stereo/left]
camera matrix
958.183533 0.000000 178.343947
0.000000 998.494337 -2.125240
0.000000 0.000000 1.000000
distortion
-0.184614 3.428311 -0.078904 0.023000 0.000000
rectification
0.950676 0.038027 0.307845
-0.033772 0.999246 -0.019139
-0.308341 0.007799 0.951244
projection
1300.089956 0.000000 -170.172009 0.000000
0.000000 1300.089956 9.835055 0.000000
0.000000 0.000000 1.000000 0.000000
# oST version 5.0 parameters
[image]
width
320
height
240
[narrow_stereo/right]
camera matrix
1034.231282 0.000000 149.398626
0.000000 1059.726268 27.560968
0.000000 0.000000 1.000000
distortion
0.200667 -0.920387 -0.067104 0.019074 0.000000
rectification
0.949120 0.034398 0.313030
-0.038719 0.999221 0.007597
-0.312525 -0.019330 0.949713
projection
1300.089956 0.000000 -170.172009 -89.587826
0.000000 1300.089956 9.835055 0.000000
0.000000 0.000000 1.000000 0.000000
So it looks like the YAML file saved by cameracalibrator.py is truncated and only contains the calibration data for a single camera. Do I need to manually organize this data, for both cameras, in separate YAML files and then give those to the nodes that read/publish the camera images?
Edit: I was able to split the ost.txt file and convert it into separate yml files for each camera, and get a usb_cam node running with those calibration files. However, stereo_image_proc seems unable to connect to or read these streams.
My usb_cam launch file:
<launch>
<arg name="device_left" default="/dev/video1" />
<arg name="device_right" default="/dev/video2" />
<arg name="width" default="320" />
<arg name="height" default="240" />
<arg name="format" default="yuyv" />
<group ns="raw_stereo">
<node name="left" pkg="usb_cam" type="usb_cam_node" output="screen">
<param name="video_device" value="$(arg device_left)" />
<param name="image_width" value="$(arg width)" />
<param name="image_height" value="$(arg height)" />
<param name="pixel_format" value="$(arg format)" />
<param name="camera_frame_id" value="left_camera" />
<param name="io_method" value="mmap" />
<param name="camera_info_url" type="string" value="file://$(find stereo_slam_test)/config/usb1-config-left.yml" />
</node>
<node name="right" pkg="usb_cam" type="usb_cam_node" output="screen">
<param name="video_device" value="$(arg device_right)" />
<param name="image_width" value="$(arg width)" />
<param name="image_height" value="$(arg height)" />
<param name="pixel_format" value="$(arg format)" />
<param name="camera_frame_id" value="right_camera" />
<param name="io_method" value="mmap" />
<param name="camera_info_url" type="string" value="file://$(find stereo_slam_test)/config/usb1-config-right.yml" />
</node>
</group>
</launch>
I then run:
roslaunch stereo_slam_test stereo_node.launch
ROS_NAMESPACE=raw_stereo rosrun stereo_image_proc stereo_image_proc approximate_sync:=true
The output of rostopic list is:
/raw_stereo/disparity
/raw_stereo/left/camera_info
/raw_stereo/left/image_color
/raw_stereo/left/image_color/compressed
/raw_stereo/left/image_color/compressed/parameter_descriptions
/raw_stereo/left/image_color/compressed/parameter_updates
/raw_stereo/left/image_color/compressedDepth
/raw_stereo/left/image_color/compressedDepth/parameter_descriptions
/raw_stereo/left/image_color/compressedDepth/parameter_updates
/raw_stereo/left/image_color/theora
/raw_stereo/left/image_color/theora/parameter_descriptions
/raw_stereo/left/image_color/theora/parameter_updates
/raw_stereo/left/image_mono
/raw_stereo/left/image_mono/compressed
/raw_stereo/left/image_mono/compressed/parameter_descriptions
/raw_stereo/left/image_mono/compressed/parameter_updates
/raw_stereo/left/image_mono/compressedDepth
/raw_stereo/left/image_mono/compressedDepth/parameter_descriptions
/raw_stereo/left/image_mono/compressedDepth/parameter_updates
/raw_stereo/left/image_mono/theora
/raw_stereo/left/image_mono/theora/parameter_descriptions
/raw_stereo/left/image_mono/theora/parameter_updates
/raw_stereo/left/image_raw
/raw_stereo/left/image_raw/compressed
/raw_stereo/left/image_raw/compressed/parameter_descriptions
/raw_stereo/left/image_raw/compressed/parameter_updates
/raw_stereo/left/image_raw/compressedDepth
/raw_stereo/left/image_raw/compressedDepth/parameter_descriptions
/raw_stereo/left/image_raw/compressedDepth/parameter_updates
/raw_stereo/left/image_raw/theora
/raw_stereo/left/image_raw/theora/parameter_descriptions
/raw_stereo/left/image_raw/theora/parameter_updates
/raw_stereo/left/image_rect
/raw_stereo/left/image_rect/compressed
/raw_stereo/left/image_rect/compressed/parameter_descriptions
/raw_stereo/left/image_rect/compressed/parameter_updates
/raw_stereo/left/image_rect/compressedDepth
/raw_stereo/left/image_rect/compressedDepth/parameter_descriptions
/raw_stereo/left/image_rect/compressedDepth/parameter_updates
/raw_stereo/left/image_rect/theora
/raw_stereo/left/image_rect/theora/parameter_descriptions
/raw_stereo/left/image_rect/theora/parameter_updates
/raw_stereo/left/image_rect_color
/raw_stereo/left/image_rect_color/compressed
/raw_stereo/left/image_rect_color/compressed/parameter_descriptions
/raw_stereo/left/image_rect_color/compressed/parameter_updates
/raw_stereo/left/image_rect_color/compressedDepth
/raw_stereo/left/image_rect_color/compressedDepth/parameter_descriptions
/raw_stereo/left/image_rect_color/compressedDepth/parameter_updates
/raw_stereo/left/image_rect_color/theora
/raw_stereo/left/image_rect_color/theora/parameter_descriptions
/raw_stereo/left/image_rect_color/theora/parameter_updates
/raw_stereo/points2
/raw_stereo/right/camera_info
/raw_stereo/right/image_color
/raw_stereo/right/image_color/compressed
/raw_stereo/right/image_color/compressed/parameter_descriptions
/raw_stereo/right/image_color/compressed/parameter_updates
/raw_stereo/right/image_color/compressedDepth
/raw_stereo/right/image_color/compressedDepth/parameter_descriptions
/raw_stereo/right/image_color/compressedDepth/parameter_updates
/raw_stereo/right/image_color/theora
/raw_stereo/right/image_color/theora/parameter_descriptions
/raw_stereo/right/image_color/theora/parameter_updates
/raw_stereo/right/image_mono
/raw_stereo/right/image_mono/compressed
/raw_stereo/right/image_mono/compressed/parameter_descriptions
/raw_stereo/right/image_mono/compressed/parameter_updates
/raw_stereo/right/image_mono/compressedDepth
/raw_stereo/right/image_mono/compressedDepth/parameter_descriptions
/raw_stereo/right/image_mono/compressedDepth/parameter_updates
/raw_stereo/right/image_mono/theora
/raw_stereo/right/image_mono/theora/parameter_descriptions
/raw_stereo/right/image_mono/theora/parameter_updates
/raw_stereo/right/image_raw
/raw_stereo/right/image_raw/compressed
/raw_stereo/right/image_raw/compressed/parameter_descriptions
/raw_stereo/right/image_raw/compressed/parameter_updates
/raw_stereo/right/image_raw/compressedDepth
/raw_stereo/right/image_raw/compressedDepth/parameter_descriptions
/raw_stereo/right/image_raw/compressedDepth/parameter_updates
/raw_stereo/right/image_raw/theora
/raw_stereo/right/image_raw/theora/parameter_descriptions
/raw_stereo/right/image_raw/theora/parameter_updates
/raw_stereo/right/image_rect
/raw_stereo/right/image_rect/compressed
/raw_stereo/right/image_rect/compressed/parameter_descriptions
/raw_stereo/right/image_rect/compressed/parameter_updates
/raw_stereo/right/image_rect/compressedDepth
/raw_stereo/right/image_rect/compressedDepth/parameter_descriptions
/raw_stereo/right/image_rect/compressedDepth/parameter_updates
/raw_stereo/right/image_rect/theora
/raw_stereo/right/image_rect/theora/parameter_descriptions
/raw_stereo/right/image_rect/theora/parameter_updates
/raw_stereo/right/image_rect_color
/raw_stereo/right/image_rect_color/compressed
/raw_stereo/right/image_rect_color/compressed/parameter_descriptions
/raw_stereo/right/image_rect_color/compressed/parameter_updates
/raw_stereo/right/image_rect_color/compressedDepth
/raw_stereo/right/image_rect_color/compressedDepth/parameter_descriptions
/raw_stereo/right/image_rect_color/compressedDepth/parameter_updates
/raw_stereo/right/image_rect_color/theora
/raw_stereo/right/image_rect_color/theora/parameter_descriptions
/raw_stereo/right/image_rect_color/theora/parameter_updates
/raw_stereo/stereo_image_proc/parameter_descriptions
/raw_stereo/stereo_image_proc/parameter_updates
/raw_stereo/stereo_image_proc_debayer_left/parameter_descriptions
/raw_stereo/stereo_image_proc_debayer_left/parameter_updates
/raw_stereo/stereo_image_proc_debayer_right/parameter_descriptions
/raw_stereo/stereo_image_proc_debayer_right/parameter_updates
/raw_stereo/stereo_image_proc_rectify_color_left/parameter_descriptions
/raw_stereo/stereo_image_proc_rectify_color_left/parameter_updates
/raw_stereo/stereo_image_proc_rectify_color_right/parameter_descriptions
/raw_stereo/stereo_image_proc_rectify_color_right/parameter_updates
/raw_stereo/stereo_image_proc_rectify_mono_left/parameter_descriptions
/raw_stereo/stereo_image_proc_rectify_mono_left/parameter_updates
/raw_stereo/stereo_image_proc_rectify_mono_right/parameter_descriptions
/raw_stereo/stereo_image_proc_rectify_mono_right/parameter_updates
/rosout
/rosout_agg
And the output of rqt_graph is:
What am I doing wrong?
Originally posted by Cerin on ROS Answers with karma: 940 on 2015-05-10
Post score: 0
Answer:
Hello,
I think the easiest way is to do it manually. Split the .txt file: one for your right and one for your left camera. After that, use the camera_calibration_parsers package to convert them to .yml files.
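A minimal sketch of that splitting step (the section names match the ost.txt dump shown in the question; the helper itself and its return format are invented for illustration):

```python
# Split an exported ost.txt into one INI-style chunk per camera, keyed on
# the [narrow_stereo/...] section names that appear in the file.  Each
# chunk can then be converted with camera_calibration_parsers.
HEADER = "# oST version 5.0 parameters"

def split_ost(text):
    chunks = {}
    for chunk in text.split(HEADER):
        for side in ("left", "right"):
            if "[narrow_stereo/%s]" % side in chunk:
                chunks[side] = HEADER + chunk
    return chunks
```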
The .yml files can be used to set up your camera_info msg. For example: The package usb_cam uses the parameter camera_info_url to set up the path to the .yml file. You can edit the parameters in a launch file.
Hope this helps.
Originally posted by Tirgo with karma: 66 on 2015-05-11
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Cerin on 2015-05-11:
Yeah, this essentially what I figured out myself last night. But what now? How do I get depth map from it? I tried using stereo_image_proc, but it just seems to hang.
Comment by Tirgo on 2015-05-11:
It hangs? stereo_image_proc is the correct tool for this.
Maybe the topics of the camera are not set correctly? Can you post a picture of rqt_graph, while the camera nodes and stereo_img_proc is running? Or at least the output of rostopic list.
Comment by Cerin on 2015-05-11:
@Tirgo, I've updated my question to list all my current steps and the outputs of those commands. Does that graph signify no connection to stereo_image_proc?
Comment by Tirgo on 2015-05-12:
Do you get rectified images? For example, /raw_stereo/right/image_rect_color as the rectified image of the right camera?
And another hint: stereo_image_proc only calculates the disparity map based on the rectified images if a subscriber exists! Did you ever try to subscribe to the depth map? | {
"domain": "robotics.stackexchange",
"id": 21646,
"tags": "ros, camera, usb-cam, stereo-calibration, ros-indigo"
} |
Why isn’t information-probability relationship linear? | Question: I am completely new to information theory.
I was learning about information content, but couldn't make sense of why the relationship between information content and probability isn't linear, and why it is sub-linear.
As the formula goes,
$$
I(E) = -\log p(E).
$$
Answer: So the definition of self-information is chosen to be as follows (shamelessly copied from the Wikipedia page on "Information content"):
An event with probability 100% is perfectly unsurprising and yields no information.
The less probable an event is, the more surprising it is and the more information it yields.
If two independent events are measured separately, the total amount of information is the sum of the self-informations of the individual events.
Point 3 of the definition answers your question.
If $A$ and $B$ are independent events and $C$ is the event in which both $A$ and $B$ take place, then
$$P(C) = P(A \cap B) = P(A)\cdot P(B)$$
On the other hand information from $C$ is defined as follows:
$$I(C) = I(A)+I(B)$$
Let $f(\cdot)$ be the functional form of the information expression, formally $$I(E) = f(P(E))$$
Therefore,
$$f(P(C)) = f(P(A)) + f(P(B))$$
$$ = f(P(A)\cdot P(B))$$
Can you find an $f(x)$ which is of the form $k\cdot x$, such that the above properties are satisfied ? I think the answer is no.
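A quick numeric check of the three defining properties for $f(p) = -\log_2 p$, and of the failure of a linear candidate (the probabilities used are arbitrary examples):

```python
import math

# Numeric check of the three defining properties for I(E) = -log2 p(E),
# plus a demonstration that a linear f(p) = k*p cannot satisfy additivity.
def info(p):
    return -math.log2(p)

pA, pB = 0.5, 0.25                              # independent example events
assert info(1.0) == 0.0                         # certainty: no information
assert info(pB) > info(pA)                      # rarer events: more information
assert math.isclose(info(pA * pB), info(pA) + info(pB))   # additivity

k = 3.0                                         # any nonzero slope fails:
assert not math.isclose(k * (pA * pB), k * pA + k * pB)
```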
But I would like to add that I don't recall a very neat reason why it has to be exactly $k\log(x)$, i.e., why only $f(x) = k\log x$ satisfies $f(x\cdot y) = f(x)+f(y)$. (Among continuous functions, $k\log x$ is in fact the unique family of solutions to this Cauchy-type functional equation.) | {
"domain": "cstheory.stackexchange",
"id": 5165,
"tags": "it.information-theory, shannon-entropy, information"
} |
What is the importance of alkaline condition in biuret test? | Question: The biuret test aims to quantify the amount of protein in a given unknown sample. The biuret reagent contains copper sulphate, sodium potassium tartrate, and sodium hydroxide. Copper ions form a purple-coloured complex. Sodium hydroxide is added to make the solution alkaline. Why is it needed to make the solution alkaline?
Answer: In the Biuret reaction, copper forms a complex with the nitrogen of the peptide bond, that looks like this (Structure 61 from the paper below):
A strong base is necessary to deprotonate these nitrogen atoms so that the complex can form. If you want to read about this in detail, have a look at this publication:
Coordinating properties of the amide bond. Stability and structure of metal ion complexes of peptides and related ligands | {
"domain": "biology.stackexchange",
"id": 9511,
"tags": "biochemistry, proteins"
} |
CommandBars, Buttons and Commands: Take 2 | Question: Following-up on CommandBars, Buttons and Commands: Cleanup is on the menu, I decided to try a more ambitious approach, as suggested in Nikita's answer. It works, but there are a number of "gotchas" that I'm not sure how to clean up.
So I have an ICommand interface, and its generic counterpart:
public interface ICommand
{
void Execute();
}
public interface ICommand<in T> : ICommand
{
void Execute(T parameter);
}
Note that there's no CanExecute method, because I haven't yet figured out where the VBE API or the Office CommandBar API is giving me a decent hook to execute it.
I haven't implemented any parameterized commands yet, but I'm thinking the parameterless overload will call the parameterized one after figuring out the parameter to use... but I'll stick to the "About" menu command for this post.
public class AboutCommand : ICommand
{
public void Execute()
{
using (var window = new _AboutWindow())
{
window.ShowDialog();
}
}
}
public class AboutCommandMenuItem : CommandMenuItemBase
{
public AboutCommandMenuItem(ICommand command)
: base(command)
{
}
public override string Key { get { return "RubberduckMenu_About"; } }
}
The CommandMenuItemBase is an abstract class that immensely facilitates implementing menu commands - all that's left to do is to specify the resource key that contains the caption.
public abstract class CommandMenuItemBase : ICommandMenuItem
{
private readonly ICommand _command;
protected CommandMenuItemBase(ICommand command)
{
_command = command;
}
public abstract string Key { get; }
public ICommand Command { get { return _command; } }
public Func<string> Caption { get { return () => RubberduckUI.ResourceManager.GetString(Key, RubberduckUI.Culture); } }
public bool IsParent { get { return false; } }
public virtual Image Image { get { return null; } }
public virtual Image Mask { get { return null; } }
}
The class implements ICommandMenuItem, which itself extends the IMenuItem interface:
public interface ICommandMenuItem : IMenuItem
{
ICommand Command { get; }
}
public interface IMenuItem
{
Func<string> Caption { get; }
string Key { get; }
bool IsParent { get; }
Image Image { get; }
Image Mask { get; }
}
With that done, I need a way to regroup and manage the menu items - enter ICommandBar:
public interface ICommandBar
{
void Localize();
void AddItem(IMenuItem item, bool? beginGroup = null, int? beforeIndex = null);
bool RemoveItem(IMenuItem item);
bool Remove();
IEnumerable<IMenuItem> Items { get; }
}
And now I could wrap a CommandBarPopup object, by implementing both IMenuItem and ICommandBar:
/// <summary>
/// An object that wraps a <see cref="CommandBarPopup"/> instance.
/// </summary>
public abstract class ParentMenu : IMenuItem, ICommandBar
{
private readonly string _key;
private readonly Func<string> _caption;
private readonly CommandBarPopup _popup;
private readonly IDictionary<IMenuItem, CommandBarControl> _items = new Dictionary<IMenuItem, CommandBarControl>();
protected ParentMenu(CommandBarControls parent, string key, Func<string> caption, int? beforeIndex)
{
_key = key;
_caption = caption;
_popup = beforeIndex.HasValue
? (CommandBarPopup) parent.Add(MsoControlType.msoControlPopup, Temporary: true, Before: beforeIndex)
: (CommandBarPopup) parent.Add(MsoControlType.msoControlPopup, Temporary: true);
_popup.Tag = _key;
Localize();
}
public abstract void Initialize();
public string Key { get { return _key; } }
public Func<string> Caption { get {return _caption; } }
public bool IsParent { get { return true; } }
public Image Image { get {return null; } }
public Image Mask { get { return null; } }
public void Localize()
{
_popup.Caption = _caption.Invoke();
LocalizeChildren();
}
private void LocalizeChildren()
{
foreach (var kvp in _items)
{
var value = kvp.Key.Caption.Invoke();
kvp.Value.Caption = value;
}
}
public void AddItem(IMenuItem item, bool? beginGroup = null, int? beforeIndex = null)
{
var controlType = item.IsParent
? MsoControlType.msoControlPopup
: MsoControlType.msoControlButton;
var child = beforeIndex.HasValue
? _popup.Controls.Add(controlType, Temporary: true, Before: beforeIndex)
: _popup.Controls.Add(controlType, Temporary: true);
child.Caption = item.Caption.Invoke();
child.BeginGroup = beginGroup ?? false;
child.Tag = item.Key;
if (!item.IsParent)
{
var button = (CommandBarButton)child;
SetButtonImage(button, item.Image, item.Mask);
var command = ((ICommandMenuItem)item).Command;
button.Click += delegate { command.Execute(); };
}
_items.Add(item, child);
}
public bool RemoveItem(IMenuItem item)
{
try
{
var child = _items[item];
child.Delete();
Marshal.ReleaseComObject(child);
_items.Remove(item);
return true;
}
catch (COMException)
{
return false;
}
}
public bool Remove()
{
foreach (var menuItem in _items)
{
RemoveItem(menuItem.Key); // note: should we care if this fails?
}
try
{
_popup.Delete();
Marshal.ReleaseComObject(_popup);
return true;
}
catch (COMException)
{
return false;
}
}
public IEnumerable<IMenuItem> Items { get { return _items.Keys; } }
private static void SetButtonImage(CommandBarButton button, Image image, Image mask)
{
button.FaceId = 0;
if (image == null || mask == null)
{
return;
}
button.Picture = AxHostConverter.ImageToPictureDisp(image);
button.Mask = AxHostConverter.ImageToPictureDisp(mask);
}
private class AxHostConverter : AxHost
{
private AxHostConverter() : base("") { }
static public IPictureDisp ImageToPictureDisp(Image image)
{
return (IPictureDisp)GetIPictureDispFromPicture(image);
}
static public Image PictureDispToImage(IPictureDisp pictureDisp)
{
return GetPictureFromIPicture(pictureDisp);
}
}
}
And this class is implemented by the RubberduckParentMenu class, and will be implemented by a RefactorParentMenu class at one point, too.
public class RubberduckParentMenu : ParentMenu
{
private readonly CodeExplorerCommandMenuItem _codeExplorer;
private readonly OptionsCommandMenuItem _options;
private readonly AboutCommandMenuItem _about;
public RubberduckParentMenu(CommandBarControls parent, int beforeIndex,
CodeExplorerCommandMenuItem codeExplorer,
OptionsCommandMenuItem options,
AboutCommandMenuItem about)
: base(parent, "RubberduckMenu", () => RubberduckUI.RubberduckMenu, beforeIndex)
{
_codeExplorer = codeExplorer;
_options = options;
_about = about;
}
public override void Initialize()
{
AddItem(_codeExplorer);
AddItem(_options, true);
AddItem(_about, true);
}
}
And there's my problem: I'm going to end up with a constructor parameter for each menu item I want in the [Rubberduck] menu, and it doesn't feel right that I'm injecting concrete types when IMenuItem would suffice... on the other hand, how else could I know in which order to add them, which needs to BeginGroup, and what image/icon to use for which, if I were iterating IMenuItem instances?
Another problem is the IoC configuration, which is utterly annoying because I couldn't figure out how to set up a convention to do the ICommand bindings, so I'll need one for each type... and that's not very maintenance-friendly:
private void BindRubberduckMenu()
{
const int windowMenuId = 30009;
var menuBarControls = _vbe.CommandBars[1].Controls;
var beforeIndex = FindMenuInsertionIndex(menuBarControls, windowMenuId);
_kernel.Bind(t => t.FromThisAssembly()
.SelectAllClasses()
.InNamespaceOf<ICommand>()
.EndingWith("CommandMenuItem")
.BindToSelf());
_kernel.Bind<ICommand>().To<AboutCommand>().WhenInjectedExactlyInto<AboutCommandMenuItem>();
_kernel.Bind<ICommand>().To<OptionsCommand>().WhenInjectedExactlyInto<OptionsCommandMenuItem>();
_kernel.Bind<ICommand>().To<CodeExplorerCommand>().WhenInjectedExactlyInto<CodeExplorerCommandMenuItem>();
_kernel.Bind<RubberduckParentMenu>().ToSelf()
.WithConstructorArgument("parent", menuBarControls)
.WithConstructorArgument("beforeIndex", beforeIndex);
}
There has to be a better way. Anything else jumps at you?
Answer: I think the code is definitely much better! I wouldn't worry too much about registering the ICommands in your IoC. After all, you're not going to have hundreds of menu items yet.
Anyway, two very minor comments:
= beginGroup ?? false;
is better as
= beginGroup.GetValueOrDefault();
And this:
private AxHostConverter() : base("") { }
Would be better as:
private AxHostConverter() : base(string.Empty) { }
Edit
With the caveat that I don't know Ninject and that this isn't very performant, here's a reflection-based approach for registration, as a complete console app:
internal class Program
{
static void Main(string[] args)
{
IKernel kernel = new StandardKernel();
var query = from command in Assembly.GetExecutingAssembly().GetTypes()
where command.GetInterfaces().Contains(typeof(ICommand)) && command.IsClass
select
new Tuple<Type, Type>(
command,
FindCorrespondingType(command));
foreach (var item in query)
{
kernel.Bind<ICommand>().To(item.Item1).WhenInjectedExactlyInto(item.Item2);
}
kernel.Get<AboutCommandMenuItem>();
kernel.Get<OptionsCommandMenuItem>();
Console.ReadKey();
}
private static Type FindCorrespondingType(Type command)
{
return Assembly.GetExecutingAssembly().GetTypes().FirstOrDefault(t => t.Name == $"{command.Name}MenuItem");
}
}
public interface ICommand
{ }
public class AboutCommand : ICommand
{
}
public class OptionsCommand : ICommand
{
}
public class AboutCommandMenuItem
{
public AboutCommandMenuItem(ICommand command)
{
Console.WriteLine(command.GetType());
}
}
public class OptionsCommandMenuItem
{
public OptionsCommandMenuItem(ICommand command)
{
Console.WriteLine(command.GetType());
}
} | {
"domain": "codereview.stackexchange",
"id": 15078,
"tags": "c#, user-interface, rubberduck, ninject"
} |
Does the definition of "energy produced" by the LLNL's NIF fusion experiment include the energy delivered? | Question: A figure in a 23 Nov 2023 article on Optics.org shows that more energy was produced than delivered. That is, the area or the red region seems to represent the energy produced.
But a similar figure in another article on the Lawrence Livermore National Labs (LLNL) website could be interpreted as meaning that the measurement of "energy produced" includes the energy delivered. That is, the area of the red region plus the yellow region represents the energy produced.
I'd like to be able to reference an article from a reputable ground-truth source that unambiguously clarifies whether or not the numbers that were released for "energy produced" include the energy delivered.
If the "energy produced" is defined to include the energy delivered, then any experiment that fuses even a handful of atoms would "produce" more energy than is "delivered". By that definition, many experiments achieved ignition long before December 5th, 2022.
Update
There are a couple of sources of information that I think should be reviewed before answering...
Source 1:
Sabine Hossenfelder said in a recent video, referring to a December 2022 announcement...
They got about 50% more energy out of the fuel pellet than they shot in with the laser.
She also makes it pretty clear that she thinks that "Produced"=="Out Of Target" with the following graphic.
...and she characterizes the Second Ignition numbers in the same way.
Source 2:
This article on the Lawrence Livermore National Labs (LLNL) website says...
Fusion “ignition” refers to the moment when the energy from a controlled fusion reaction outstrips the rate at which x-ray radiation losses and electron conduction cool the implosion: as much or more energy “out” than “in.”
If "In+Produced=Out", this implies that if "Produced>=0", the requirement for ignition is satisfied.
Answer: No, "energy produced" doesn't include the energy of the laser radiation used to start and maintain nuclear reactions in the fuel. But you are correct that ignition was achieved before Dec 5, 2022 (by the scientists' definition of ignition; unfortunately, there seem to be several definitions in use).
Papers from people involved with these experiments usually use the term "fusion yield", denoted $Y_{total}$, to refer to energy gained from the fuel.
In the first reported experiment at LLNL that increased this yield to levels comparable to the laser energy used, Hybrid-E experiment N210808 on August 8, 2021, this yield was 1.37 MJ, while the laser energy used was 1.917 MJ. According to these papers, this experiment surpassed the generalized Lawson criterion and thus achieved "ignition", even though the energy produced was smaller than the energy input. [1][2][3]
[1] Kritcher et al., Design of an inertial fusion experiment exceeding the Lawson criterion for ignition, Phys. Rev. E 106, 025201 – Published 8 August 2022, paper pdf file: https://journals.aps.org/pre/pdf/10.1103/PhysRevE.106.025201
[2] Zylstra et al., Experimental achievement and signatures of ignition at the NIF, Phys. Rev. E 106, 025202 – Published 8 August 2022, accepted manuscript pdf file: https://link.aps.org/accepted/10.1103/PhysRevE.106.025202
[3] https://www.photonics.com/Articles/LLNL_Experiment_Moves_to_the_Cusp_of_Fusion/a67273
Later experiments in 2022-2023 increased the yield still further for a similar energy input, surpassing the "scientific energy breakeven" condition, where the yield is greater than the laser energy used.
On Dec 5, 2022, an experiment at NIF using 2.05 MJ of laser radiation energy achieved fusion yield energy 3.1 MJ, which is greater than the laser energy used, thus surpassing the condition of "scientific breakeven". [4]
[4] H. Abu-Shawareb et al. (The Indirect Drive ICF Collaboration), Achievement of Target Gain Larger than Unity in an Inertial Fusion Experiment, Phys. Rev. Lett. 132, 065102 – Published 5 February 2024, paper pdf file https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.132.065102
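As a quick sanity check on the numbers above, the "target gain" Q is simply fusion yield divided by laser energy delivered, so scientific breakeven corresponds to Q > 1 (a back-of-the-envelope sketch, not taken from the papers themselves):

```python
# Target gain Q = fusion yield / laser energy delivered.
# Q > 1 is the "scientific breakeven" condition; it is NOT the same as the
# generalized Lawson ignition criterion discussed in refs [1]-[2].
shots = {
    "N210808, Aug 8 2021": (1.37, 1.917),  # (yield in MJ, laser energy in MJ)
    "Dec 5 2022":          (3.10, 2.05),
}

for name, (yield_mj, laser_mj) in shots.items():
    q = yield_mj / laser_mj
    print(f"{name}: Q = {q:.2f}, breakeven reached: {q > 1}")
```

So the 2021 shot "ignited" in the Lawson sense with Q of roughly 0.71, while the Dec 2022 shot was the first with Q > 1.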
Strangely enough, the LLNL website https://www.llnl.gov/news/ignition (and thus also secondary sources like optics.org) claims that first ignition was achieved on Dec 5, 2022, which is much later than the experiment from 2021; they refer to the fact that in 2022, fusion energy yield surpassed the laser energy input. But this is not the scientific definition of "ignition" used in the papers above; it is the definition of "scientific energy breakeven", which is a different thing. So the science communication / PR department at LLNL seems to be using terminology differently than their scientists do (and, it seems to me, incorrectly). | {
"domain": "physics.stackexchange",
"id": 100549,
"tags": "energy, fusion"
} |
Dividing data set into (almost) equal batches based on characteristics | Question: I have a large dataset containing 6 characteristics (all numerical).
I need to split this dataset into multiple batches to be processed in parallel, and ideally, the batches should be as equal in size as possible.
The catch is: I can only split based on the 6 characteristics, so I have to specify ranges of values for each batch.
Simplified example: [1,1,1,2,2,3,3,3,3,4] will be split in two by specifying that values 1-2 go to batch 1 and values 3-4 go to batch 2, so I end up with [1,1,1,2,2] and [3,3,3,3,4].
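That range-based splitting can be sketched in a few lines of Python (a toy illustration of the one-characteristic case only, not of the full six-characteristic problem):

```python
def split_by_boundary(values, boundary):
    """Values <= boundary go to batch 1, the rest to batch 2."""
    batch1 = [v for v in values if v <= boundary]
    batch2 = [v for v in values if v > boundary]
    return batch1, batch2

data = [1, 1, 1, 2, 2, 3, 3, 3, 3, 4]
b1, b2 = split_by_boundary(data, boundary=2)
print(b1)  # [1, 1, 1, 2, 2]
print(b2)  # [3, 3, 3, 3, 4]
```

Choosing `boundary` so that the two batches come out as close in size as possible is the hard part, and is what the one-characteristic code in the question attempts.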
I managed to make a simple division algorithm based on one characteristic; unfortunately, some of the characteristics might not be suitable to split on, and I have no way to reliably predict which characteristic will create a good split.
I remember from Uni that a classification algorithm might do the trick here, but I am not sure how to implement the requirement for similar size classes.
Here is the simple code I wrote for one characteristic; it may help to understand the situation:
def divide_into_groups(lst: list, num_groups=2):
length = len(lst)
sorted_list = sorted(lst)
optimal_group_size = length // num_groups
initial_group_markers = [sorted_list[i * optimal_group_size] for i in range(1, num_groups)]
marker_edges = [(sorted_list.index(marker), length - 1 - sorted_list[::-1].index(marker))
for marker in initial_group_markers]
final_group_markers = []
for i, marker in enumerate(initial_group_markers):
if i - marker_edges[i][0] < marker_edges[i][1] - i:
final_group_markers.append(marker - 1)
else:
final_group_markers.append(marker)
distances = [abs(initial_group_markers[i] - final_group_markers[i]) for i in range(len(initial_group_markers))]
efficiency = (length - sum(distances)) / length
print('distances from optimals: %s' % distances)
print('efficiency of division is %s %%' % (efficiency * 100))
return final_group_markers
P.S. This is my first question in this forum. If I am missing some information or have made any mistake, please comment and I will fix it as soon as possible.
Answer: If I got this right, you want to split the dataset into train and test sets in a way that preserves the same proportions of examples in each class as observed in the original dataset?
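In pure Python, such a proportion-preserving two-way split can be sketched like this (a toy illustration; in practice a library routine would do this for you):

```python
from collections import defaultdict

def stratified_split(labels, test_fraction=0.5):
    """Split item indices into two sets, preserving per-class proportions."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)

    train, test = [], []
    for label, indices in by_class.items():
        # take the first (1 - test_fraction) share of each class for "train"
        cut = int(round(len(indices) * (1 - test_fraction)))
        train.extend(indices[:cut])
        test.extend(indices[cut:])
    return train, test

labels = [1, 1, 1, 2, 2, 3, 3, 3, 3, 4]
train, test = stratified_split(labels)
```

Each class contributes to both halves roughly in proportion to its overall frequency.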
This is called a stratified train-test split. See the stratify argument of scikit-learn's train_test_split. | {
"domain": "datascience.stackexchange",
"id": 8549,
"tags": "python, classification"
} |
Mini console based life simulation thing | Question: I made this mini life simulation thing. I got the idea from the Eloquent JavaScript textbook, and just kinda ran with it. I've been programming for a little less than a year, and this is probably the largest thing I've made to date, so I'm looking for a little feedback.
If there's anything you see that can be improved, please let me know.
main.cpp
#include <string>
#include <vector>
#include <algorithm>
#include <ctime>
#include <cstdlib>
#include <map>
#include <ncurses.h>
#include "terrariums.h"
/*==============================================================================
CONSTANT GAME VARIABLES
==============================================================================*/
const bool ALLOW_DIAGONAL_DIRECTIONS = true;
const char EMPTY_SYM = ' ';
const char ROCK_SYM = '@';
const char DUMBBUG_SYM = 'o';
const char M_SMARTBUG_SYM = 'X';
const char F_SMARTBUG_SYM = 'x';
const char DUMBBUGEGG_SYM = 'e';
const char SMARTBUGEGG_SYM = 'a';
const char SMALLPLANT_SYM = '\'';
/*
y|
|
| o
|_______
x
*/
struct Vec2 {
int x;
int y;
Vec2()
{
x = 0;
y = 0;
}
Vec2(int _x, int _y)
{
x = _x;
y = _y;
}
};
Vec2 operator + (Vec2 u, Vec2 v)
{
Vec2 result;
result.x = u.x + v.x;
result.y = u.y + v.y;
return result;
}
void operator += (Vec2 &u, Vec2 v)
{
u.x += v.x;
u.y += v.y;
}
Vec2 operator * (Vec2 u, int s)
{
Vec2 result;
result.x = u.x * s;
result.y = u.y * s;
return result;
}
void operator *= (Vec2 &u, int s)
{
u.x *= s;
u.y *= s;
}
//directions map is global
enum Direction { n=0, ne=4, e=1, se=5, s=2, sw=6, w=3, nw=7 };
std::map<Direction, Vec2> directions;
enum Action { nothing, walk, walktofood, changedirection, eat,
layegg, grow, hatch, die };
/*
o . ___---___ .
. .--\ --. . . .
./.;_.\ __/~ \.
/; / `-' __\ . \
. . / ,--' / . .; \ |
| .| / __ | -O- .
|__/ __ | . ; \ | . | |
| / \\_ . ;| \___|
. o | \ .~\\___,--' | .
| | . ; ~~~~\_ __|
| \ \ . . ; \ /_/ .
-O- . \ / . | ~/ .
| . ~\ \ . / /~ o
. ~--___ ; ___--~
. --- .
*/
int findRowLength(std::string s) {
int length = 1;
for (int i = 0; s[i] != '\n'; i++)
{
length++;
}
return length;
}
int findColLength(std::string s, int rowLength) {
int length = s.length() / rowLength;
return length;
}
struct Terrarium
{
std::string grid;
int rowLength;
int colLength;
Terrarium(std::string s)
{
grid = s;
rowLength = findRowLength(s);
colLength = findColLength(s, rowLength);
}
// Grid char manipulation
inline void changeCharAt(Vec2 location, char c)
{
grid[location.y * rowLength + location.x] = c;
}
inline char charAt(Vec2 location)
{
return grid[location.y * rowLength + location.x];
}
// Find Surroundings
std::map<Direction, char>
findDirectSurroundings(Vec2 pos)
{
std::map<Direction, char> surroundings;
for (int i = 0; i < directions.size(); i++)
{
surroundings[(Direction)i] = (charAt(pos + directions[(Direction)i]));
}
return surroundings;
}
std::map<Direction, char>
findExtendedSurroundings(Vec2 pos,
int range,
std::map<Direction, char> directSurroundings)
{
std::map<Direction, char> extendedSurroundings = directSurroundings;
for (int i = 1; i < range; i++)
{
for (int j = 0; j < directions.size(); j++)
{
if (extendedSurroundings[(Direction)j] == EMPTY_SYM) {
extendedSurroundings[(Direction)j] = charAt(pos + (directions[(Direction)j] * (i+1)));
}
}
}
return extendedSurroundings;
}
// Life manipulation
template <class T>
void registerLife(const char SYM, std::vector<T> &lifeVect, T (*f)(Vec2))
{
for (int y = 0; y <= colLength; y++) {
for (int x = 0; x <= rowLength; x++) {
if (charAt(Vec2(x, y)) == SYM)
lifeVect.push_back((*f)(Vec2(x, y)));
}
}
}
template <class T>
T newLife(T life)
{
changeCharAt(life.currentPos, life.sym);
return life;
}
template <class T>
inline void killLife(std::vector<T> &lifeVect, int i)
{
changeCharAt(lifeVect[i].currentPos, EMPTY_SYM);
lifeVect.erase(lifeVect.begin() + i);
}
};
/*==============================================================================
P R E D I C A T E S
==============================================================================*/
bool predicate_MapValueIsSmallPlant(std::pair<Direction, char> m)
{
return m.second == SMALLPLANT_SYM;
}
bool predicate_MapValueIsMaleSmartBug(std::pair<Direction, char> m)
{
return m.second == M_SMARTBUG_SYM;
}
/*==============================================================================
GENERAL BUG FUNCTIONS
==============================================================================*/
bool canSupportBug(Terrarium &t, Vec2 pos)
{
return (t.charAt(pos) == EMPTY_SYM);
}
template <class T>
void moveBug(Terrarium &t, T &b, Direction d)
{
if (t.charAt(b.currentPos + directions[d]) != EMPTY_SYM)
return;
b.newPos += directions[d];
t.grid[b.newPos.y * t.rowLength + b.newPos.x] = b.sym;
t.grid[b.currentPos.y * t.rowLength + b.currentPos.x] = EMPTY_SYM;
b.currentPos = b.newPos;
}
/*
,_ /) (\ _,
>> <<,_,>> <<
// _0.-.0_ \\
\'._/ \_.'/
'-.\.--.--./.-'
__/ : :Y: : \ _
';, .-(_| : : | : : |_)-. ,:'
\\/.' |: : :|: : :| `.\//
(/ |: : :|: : :| \)
|: : :|: : :;
/\ : : | : : /\
(_/'.: :.: :.'\_)
\\ `""`""` //
\\ //
':. .:'
*/
struct DumbBug {
Vec2 currentPos;
Vec2 newPos;
int energy;
char sym;
std::map<Direction, char> surroundings;
DumbBug(Vec2 pos)
{
currentPos = pos;
newPos = pos;
energy = rand() % 10 + 100;
sym = DUMBBUG_SYM;
}
Action act()
{
if (energy <= 0)
return die;
else if (energy > 250)
return layegg;
else if ((std::find_if(surroundings.begin(), surroundings.end(), predicate_MapValueIsSmallPlant) != surroundings.end()) &&
energy <= 250)
return eat;
else
return walk;
}
};
DumbBug regDumbBug(Vec2 pos) { return DumbBug(pos); }
/*
_---~~(~~-_.
_{ ) )
, ) -~~- ( ,-' )_
( `-,_..`., )-- '_,)
( ` _) ( -~( -_ `, }
(_- _ ~_-~~~~`, ,' )
`~ -^( __;-,((()))
~~~~ {_ -_(())
`\ }
{ }
*/
const int SMARTBUG_SIGHT_DISTANCE = 8;
const int SMARTBUG_DIRECTION_CHANCE = 20; // 1/x chance of
const int SMARTBUG_MAX_ENERGY = 450; // changing direction
struct SmartBug {
Vec2 currentPos;
Vec2 newPos;
Direction direction;
int lifespan;
int energy;
char sym;
std::map<Direction, char> directSurroundings;
std::map<Direction, char> extendedSurroundings;
SmartBug(Vec2 pos)
{
currentPos = pos;
newPos = pos;
energy = rand() % 50 + 300;
lifespan = rand() % 300 + 1500;
rand() % 2 ? sym = M_SMARTBUG_SYM : sym = F_SMARTBUG_SYM;
direction = (Direction)(rand() % directions.size());
}
Action act(Terrarium &t)
{
if (energy <= 0 || lifespan <= 0)
return die;
else if ((std::find_if(directSurroundings.begin(), directSurroundings.end(), predicate_MapValueIsSmallPlant) != directSurroundings.end()) &&
energy <= SMARTBUG_MAX_ENERGY)
return eat;
else if (sym == F_SMARTBUG_SYM &&
(std::find_if(directSurroundings.begin(), directSurroundings.end(), predicate_MapValueIsMaleSmartBug) != directSurroundings.end()) &&
energy >= 400)
return layegg;
else
{
if (std::find_if(extendedSurroundings.begin(), extendedSurroundings.end(), predicate_MapValueIsSmallPlant) != extendedSurroundings.end())
return walktofood;
else
{
if (t.charAt(currentPos + directions[direction]) != EMPTY_SYM || rand() % SMARTBUG_DIRECTION_CHANCE == 0)
return changedirection;
else
return walk;
}
}
}
};
SmartBug regSmartBug(Vec2 pos) { return SmartBug(pos); }
/*
.-~-.
.' '.
/ \
.-~-. : ;
.' '.| |
/ \ :
: ; .-~""~-,/
| /` `'.
: | \
\ | /
`. .' \ .'
`~~~` '-.____.-'
*/
struct DumbBugEgg
{
Vec2 currentPos;
int hatchTime;
char sym;
DumbBugEgg(Vec2 pos)
{
currentPos = pos;
hatchTime = rand() % 150 + 150;
sym = DUMBBUGEGG_SYM;
}
Action act()
{
if (hatchTime <= 0)
return hatch;
else
return nothing;
}
};
DumbBugEgg regDumbBugEgg(Vec2 pos) { return DumbBugEgg(pos); }
struct SmartBugEgg
{
Vec2 currentPos;
int hatchTime;
char sym;
SmartBugEgg(Vec2 pos)
{
currentPos = pos;
hatchTime = rand() % 100 + 100;
sym = SMARTBUGEGG_SYM;
}
Action act()
{
if (hatchTime <= 0)
return hatch;
else
return nothing;
}
};
SmartBugEgg regSmartBugEgg(Vec2 pos) { return SmartBugEgg(pos); }
/*
___..._
_,--' "`-.
,'. . \
,/:. . . .'
|;.. . _..--'
`--:...-,-'""\
|:. `.
l;. l
`|:. |
|:. `.,
.l;. j, ,
`. \`;:. //,/
.\\)`;,|\'/(
` `;;'' `(,
*/
const int SMALLPLANT_GROW_VALUE = 50;
struct SmallPlant {
Vec2 currentPos;
int energy;
char sym;
std::map<Direction, char> surroundings;
int surroundingSmallPlants;
SmallPlant(Vec2 pos)
{
currentPos = pos;
energy = 10;
sym = SMALLPLANT_SYM;
}
void drainEnergy(int a, int b, int c, int d, int e)
{
if (surroundingSmallPlants <= a)
energy += 3;
else if (surroundingSmallPlants <= b)
energy += 2;
else if (surroundingSmallPlants <= c)
energy += 1;
else if (surroundingSmallPlants <= d)
energy += -1;
else if (surroundingSmallPlants <= e)
energy += -3;
}
Action act(Terrarium t)
{
if (energy <= 0 || t.charAt(currentPos) == EMPTY_SYM)
return die;
else if (energy >= SMALLPLANT_GROW_VALUE)
return grow;
else
return nothing;
}
};
SmallPlant regSmallPlant(Vec2 pos) { return SmallPlant(pos); }
bool canSupportSmallPlant(Terrarium &t, Vec2 pos)
{
if (t.charAt(pos) != EMPTY_SYM)
return false;
std::map<Direction, char> surroundings = t.findDirectSurroundings(pos);
int surroundingSmallPlants = 0;
for (int i = 0; i < surroundings.size(); i++)
{
if (surroundings[(Direction)i] == SMALLPLANT_SYM)
surroundingSmallPlants++;
}
if (surroundingSmallPlants < 2)
return true;
else
return false;
}
/*==============================================================================
================================================================================
M A I N S T A R T S H E R E
================================================================================
==============================================================================*/
int main() {
//random seed
srand(time(0));
//declare directions
directions[n] = Vec2( 0, -1);
directions[e] = Vec2( 1, 0);
directions[s] = Vec2( 0, 1);
directions[w] = Vec2(-1, 0);
if(ALLOW_DIAGONAL_DIRECTIONS) {
directions[ne] = Vec2( 1, -1);
directions[se] = Vec2( 1, 1);
directions[sw] = Vec2(-1, 1);
directions[nw] = Vec2(-1, -1);
}
Terrarium t(Terra::bigPlan);
// Register life that starts in Terrarium
std::vector<DumbBug> dumbBugs;
t.registerLife(DUMBBUG_SYM, dumbBugs, regDumbBug);
std::vector<SmartBug> smartBugs;
t.registerLife(F_SMARTBUG_SYM, smartBugs, regSmartBug);
t.registerLife(M_SMARTBUG_SYM, smartBugs, regSmartBug);
std::vector<SmallPlant> smallPlants;
t.registerLife(SMALLPLANT_SYM, smallPlants, regSmallPlant);
std::vector<DumbBugEgg> dumbBugEggs;
t.registerLife(DUMBBUGEGG_SYM, dumbBugEggs, regDumbBugEgg);
std::vector<SmartBugEgg> smartBugEggs;
t.registerLife(SMARTBUGEGG_SYM, smartBugEggs, regSmartBugEgg);
// curses stuff
initscr();
raw();
keypad(stdscr, true);
noecho();
timeout(0);
curs_set(0);
bool keepWinOpen = true;
while (keepWinOpen) {
// 'q' to quit
int in = getch();
if (in == 'q') {
keepWinOpen = false;
}
/*------------------------------------------------------------------------------
DumbBug Behavior
------------------------------------------------------------------------------*/
for (int i = 0; i < dumbBugs.size(); i++)
{
DumbBug* b = &dumbBugs[i];
b->energy--;
b->surroundings = t.findDirectSurroundings(b->currentPos);
switch(b->act())
{
case die:
{
t.killLife(dumbBugs, i);
} break;
case layegg:
{
int r = rand() % directions.size();
if (canSupportBug(t, b->currentPos + directions[(Direction)r]))
dumbBugEggs.push_back(t.newLife(DumbBugEgg(b->currentPos + directions[(Direction)r])));
b->energy = 100;
} break;
case eat:
{
for (int j = 0; j < b->surroundings.size(); j++) {
if (b->surroundings[(Direction)j] == SMALLPLANT_SYM) {
t.changeCharAt(b->currentPos + directions[(Direction)j], EMPTY_SYM);
j = b->surroundings.size();
}
}
b->energy += 20;
} break;
case walk:
{
int r = rand() % (directions.size() + 6);
if (r < directions.size()) {
moveBug(t, *b, (Direction)r);
}
} break;
}
}
/*------------------------------------------------------------------------------
DumbBug Egg Behavior
------------------------------------------------------------------------------*/
for (int i = 0; i < dumbBugEggs.size(); i++)
{
DumbBugEgg* e = &dumbBugEggs[i];
e->hatchTime--;
switch(e->act())
{
case hatch:
{
dumbBugs.push_back(t.newLife(DumbBug(e->currentPos)));
dumbBugEggs.erase(dumbBugEggs.begin() + i);
} break;
}
}
/*------------------------------------------------------------------------------
SmartBug Behavior
------------------------------------------------------------------------------*/
for (int i = 0; i < smartBugs.size(); i++)
{
SmartBug* b = &smartBugs[i];
b->energy--;
b->lifespan--;
b->directSurroundings = t.findDirectSurroundings(b->currentPos);
b->extendedSurroundings = t.findExtendedSurroundings(b->currentPos,
SMARTBUG_SIGHT_DISTANCE,
b->directSurroundings);
switch(b->act(t))
{
case die:
{
t.killLife(smartBugs, i);
} break;
case eat:
{
for (int j = 0; j < b->directSurroundings.size(); j++) {
if (b->directSurroundings[(Direction)j] == SMALLPLANT_SYM) {
t.changeCharAt(b->currentPos + directions[(Direction)j], EMPTY_SYM);
j = b->directSurroundings.size();
}
}
b->energy += 20;
} break;
case layegg:
{
int r = rand() % directions.size();
if (canSupportBug(t, b->currentPos + directions[(Direction)r]))
{
smartBugEggs.push_back(t.newLife(SmartBugEgg(b->currentPos + directions[(Direction)r])));
b->energy = 100;
}
} break;
case walktofood:
{
for (int j = 0; j < b->extendedSurroundings.size(); j++) {
if (b->extendedSurroundings[(Direction)j] == SMALLPLANT_SYM) {
moveBug(t, *b, (Direction)j);
j = b->extendedSurroundings.size();
}
}
} break;
case changedirection:
{
b->direction = (Direction)(rand() % directions.size());
} break;
case walk:
{
moveBug(t, *b, b->direction);
}break;
}
}
/*------------------------------------------------------------------------------
SmartBug Egg Behavior
------------------------------------------------------------------------------*/
for (int i = 0; i < smartBugEggs.size(); i++)
{
SmartBugEgg* e = &smartBugEggs[i];
e->hatchTime--;
switch(e->act())
{
case hatch:
{
smartBugs.push_back(t.newLife(SmartBug(e->currentPos)));
smartBugEggs.erase(smartBugEggs.begin() + i);
} break;
}
}
/*------------------------------------------------------------------------------
SmallPlant Behavior
------------------------------------------------------------------------------*/
for (int i = 0; i < smallPlants.size(); i++)
{
SmallPlant* p = &smallPlants[i];
p->surroundings = t.findDirectSurroundings(p->currentPos);
p->surroundingSmallPlants = 0;
for (int j = 0; j < p->surroundings.size(); j++)
{
if (p->surroundings[(Direction)j] == SMALLPLANT_SYM)
p->surroundingSmallPlants++;
}
switch(p->act(t))
{
case die:
{
t.killLife(smallPlants, i);
} break;
case grow:
{
int r = rand() % directions.size();
if (canSupportSmallPlant(t, p->currentPos + directions[(Direction)r]))
smallPlants.push_back(t.newLife(SmallPlant(p->currentPos + directions[(Direction)r])));
p->energy = 10;
} break;
}
if (ALLOW_DIAGONAL_DIRECTIONS)
{
p->drainEnergy(0, 3, 5, 7, 8);
}
else
{
p->drainEnergy(0, 1, 2, 3, 4);
}
}
// Useful counter information
static int bugsAlive = 0;
static int prevBugsAlive = 0;
static int totalBugs = 0;
static int peakBugAmount = 0;
prevBugsAlive = bugsAlive;
bugsAlive = smartBugs.size();
if (bugsAlive > prevBugsAlive)
totalBugs += bugsAlive - prevBugsAlive;
if (bugsAlive > peakBugAmount)
peakBugAmount = bugsAlive;
mvprintw(0, 0, t.grid.c_str());
mvprintw(t.colLength + 1, 0, "Bugs Alive: %i ", bugsAlive);
mvprintw(t.colLength + 2, 0, "Total Bugs: %i", totalBugs);
mvprintw(t.colLength + 1, 20, "Peak Bug Amount: %i", peakBugAmount);
if (smartBugs.size() > 0)
mvprintw(t.colLength + 2, 20, "Bug1 Lifespan: %i ", smartBugs[0].lifespan);
napms(35);
}
refresh();
endwin();
}
terrariums.h
#ifndef TERRARIUMS_H
#define TERRARIUMS_H
#include <string>
namespace Terra
{
std::string bigPlan = "@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n"
"@'' @@@@@@@@@@@@@@@@@@@@@@@@@@@\n"
"@ '' @@ a @@@@@ @@@@@@\n"
"@ @@@ @@@@@@@\n"
"@ '@@@ @\n"
"@ @@@ '@@@\n"
"@ @@ @@\n"
"@ @\n"
"@ @\n"
"@ @@@@ @\n"
"@ '@@@ @@ @\n"
"@ @@ @@ @@@ @\n"
"@@ @ @@@ @@ @@@@@ @\n"
"@@ '@ @@@' @@@@@@@ @\n"
"@' @ @@ @@@@@@@@@@ @\n"
"@ @@@ @@@@ @\n"
"@ @ @\n"
"@ @\n"
"@ a @\n"
"@ @\n"
"@ @\n"
"@ a @\n"
"@ @\n"
"@ @ @\n"
"@' @@@ @@@@@@@ @\n"
"@ @@@@@@ @@@' @@@@@@ @\n"
"@@@@@@@@@@@@@ @@@ '@@@@@@ @ @\n"
"@''@@@@@@@@@@@ @@ @@@ @@@ @\n"
"@ ' @@@@@@@a@@@@@@@@ @\n"
"@ '@@@@@@ @\n"
"@ @'@@@@ @\n"
"@ @@@ @@ @\n"
"@ @@ @' ' @\n"
"@ ' @@ @\n"
"@' @@ @\n"
"@'''' @\n"
"@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n";
}
#endif //TERRARIUMS_H
Answer: Here are a few other things you might consider changing:
Use scoped enumerations (enum class, C++11) if you can:
You have a few enums with some fairly common / easy to collide names, like the ones in Direction and Action. Those would be a lot better if scoped. With a C++11 enum class, you could do it:
enum class Direction {
n = 0,
ne = 4,
e = 1,
se = 5,
s = 2,
sw = 6,
w = 3,
nw = 7
};
enum class Action {
nothing,
walk,
walkToFood,
changeDirection,
eat,
layEgg,
grow,
hatch,
die
};
Then you access them with a scope, e.g.: Action::walk. Also, I would camelCase the constants for the multi-word ones to aid readability.
Then there are your global constants for the map symbols. Usually, the ALL_CAPS notation is reserved for preprocessor macros (AKA #defines). Most code these days uses camelCase for typed constants as well. The all-uppercase notation of macros stands out more to indicate that the constant might behave in some unusual way / might be implemented using some preprocessor trick. Now back to the map symbols: I would also make those an enum class. The underlying type of an enum can be any integer type, including char:
// Notice that we explicitly require this enum to be of size `char`
enum class Symbols : char {
empty = ' ',
rock = '@',
dumbBug = 'o',
mSmartBug = 'X',
fSmartBug = 'x',
dumbBugEgg = 'e',
smartBugEgg = 'a',
smallPlant = '\''
};
I personally also like to align the assignment operators to have a "visual handrail".
Use the constructor initialization list to init member data:
When initializing member data of a class or struct in a constructor, it is preferable to use the member initialization list. That will ensure the members are initialized only once, by calling its constructor. Example:
struct Terrarium
{
Terrarium(const std::string & s)
: grid(s)
, rowLength(findRowLength(s))
, colLength(findColLength(s, rowLength))
{ }
...
If you init the members using operator = in the body of the constructor, that might incur a duplicate initialization. First, the default constructor of the type will be implicitly called, then you will init again with the assignment.
Pass complex objects by reference when adequate:
I see that in many places you are passing complex objects such as std::strings by value. Remember that in C++ the default is always a copy. Take this function for instance:
int findColLength(std::string s, int rowLength) {
int length = s.length() / rowLength;
return length;
}
The string s is being copied just to have its length queried inside the function. You should really be taking that parameter by const reference:
int findColLength(const std::string & s, int rowLength) { /* ... */ }
^^^^^ ^
Native/built-in types, such as ints and floats, should still be passed by value. References only make sense when you want to avoid copying heavy objects or when you need to have an input-output function parameter.
inline is redundant inside a class/struct body:
When you declare and define a class/struct method directly inside the body, that method is already implicitly inline. Adding the keyword in this case is redundant verbosity. Template functions/classes are always inline, so they don't need the keyword either.
Take a look at the new <random> library:
C++11 added an awesome new pseudo-random number library that replaces std::rand. You might want to check it out and use it if your compiler supports it.
Avoid very long lines:
Excessively long lines are hard to follow and won't fit on most screens. Let's take this for instance:
if (std::find_if(extendedSurroundings.begin(), extendedSurroundings.end(), predicate_MapValueIsSmallPlant) != extendedSurroundings.end())
return walktofood;
You should introduce some "explaining variables" there to shorten this statement and also clarify it:
auto smallPlant = std::find_if(
extendedSurroundings.begin(),
extendedSurroundings.end(),
predicate_MapValueIsSmallPlant);
const bool smallPlantNear = (smallPlant != extendedSurroundings.end());
if (smallPlantNear) {
return walktofood;
}
However, there are a few statements similar to this one, using std::find_if in one of the lists. So I would probably consider introducing a helper function that wraps the common operation(s). | {
"domain": "codereview.stackexchange",
"id": 12880,
"tags": "c++, console, simulation"
} |
What is a "mechanistic study"? | Question: I believe a "mechanistic study" means a study where a medicinal product is being used but the purpose of the study is to investigate the patient or disease, not the medicinal product.
How does this differ from any other type of study? What is the purpose of this study? Why would we not be interested in the medicinal product?
Please provide a very simple example that a non-scientist could understand.
Answer: A mechanistic study aims to uncover a mechanism. It can be a mechanism of the disease (exactly how does the disease do damage to the body), or a mechanism of a drug (exactly how does the drug prevent or repair the damage) that is being investigated.
Understanding mechanism, be it mechanism of disease or mechanism of the drug, is very important toward targeted improvements of treatments, as opposed to trying lots and lots of compounds, hoping that someday we'll find one that's better.
Non-mechanistic studies involving drugs aim to show that the drug provides a clinically relevant health benefit over competing treatments, or to determine the safe dosage of a drug. At that point, the mechanism of action of a drug is known to a reasonably large extent.
In short, there are many different things that need to be known for drugs to reach the market. Mechanism is one of them. | {
"domain": "biology.stackexchange",
"id": 1322,
"tags": "terminology, pharmacology"
} |
Efficiency during submarine gliding | Question: Imagine a sort of submarine glider using gravity downwards and buoyancy upwards. An air pump is needed to press air into pressure vessels when the ballast tank is filled with water, but I guess that's all.
Energy efficiency would be high in deep water, but how fast could such a glider travel?
The answer seems to be about 3 knots, and they can travel thousands of kilometres.
Liberdade class underwater glider
Answer: The "glider" you describe already exists and is used by oceanographers around the world to collect data in the oceans. It looks like a torpedo with little wings sticking out of its sides. It propels itself exactly as you describe: it releases a little air to become heavy and sinks- gliding forward as it does. When it reaches its assigned maximum depth, it adds buoyancy from a small air tank and begins to rise- gliding forward as it does. All the while it is measuring the salinity, acidity and temperature of the water, reading all this data to memory inside its onboard computer. When it reaches the surface, it streams this data via radio link to a satellite, and includes with it its GPS position. The radio link then programs the glider for another run in a new direction and sends it on its way.
A glider like this can autonomously profile the ocean for a month or two at a time before it runs out of air and battery power. | {
"domain": "physics.stackexchange",
"id": 63658,
"tags": "fluid-dynamics, lift"
} |
Detecting undirected cycles in logarithmic space | Question: I have a lot of difficulty constructing algorithms that use $O(\log n)$ space, as I am unsure about how much can be stored on the work tape.
I am trying to figure out an algorithm for the problem:
UCYCLE = {<G>|G is an undirected graph that contains a simple cycle}
I can't seem to understand how to create an algorithm in logspace that can solve this problem. In my head, it has to store all the previously visited nodes (since it can't visit the same node twice). Also, I am really stuck on how to start searching, when I can't use non-determinism. Can someone help me with some pointers on how to go forward so I can figure out an algorithm?
(PS: This is not homework, but I need to understand how to create algorithms for problems in L and NL)
Answer: For the undirected cycle problem, you can traverse each connected component: at each node, when coming in through edge $k$, leave through edge $k+1$. (We can assume edges are ordered at each vertex.)
Now you start such a traversal once from every node $v$, remembering only $v$ and the edge you left $v$ through. If the traversal returns to $v$ through a different edge, then $v$ lies on a cycle. If this happens for no $v$, then the graph is acyclic.
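To make the rule concrete, here is a small (decidedly non-logspace) Python sketch of that traversal; for simplicity it keeps the adjacency lists in memory, whereas a real logspace machine would re-read them from the input tape, and it tries every starting edge of every vertex:

```python
def has_cycle(adj):
    """Detect a cycle in an undirected simple graph.

    adj maps each vertex to an ordered list of its neighbours, with
    symmetric edges (w in adj[v] iff v in adj[w]).  Rule: entering a vertex
    through its k-th incident edge, leave through edge k+1 (cyclically).
    In a tree, this walk first returns to the start vertex through the very
    edge it left by; returning through a different edge witnesses a cycle.
    """
    for v in adj:
        for start in adj[v]:            # remember only v and the start edge
            prev, cur = v, start
            while cur != v:             # walk until we are back at v
                k = adj[cur].index(prev)                       # incoming edge
                prev, cur = cur, adj[cur][(k + 1) % len(adj[cur])]
            if prev != start:           # came back through a different edge
                return True
    return False

path     = {0: [1], 1: [0, 2], 2: [1]}          # a tree: no cycle
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}    # a 3-cycle
print(has_cycle(path))      # False
print(has_cycle(triangle))  # True
```

The walk is a permutation on directed edges, so it always returns to its starting vertex and the inner loop terminates.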
In general, in a logarithmic space algorithm you can only store constantly many pointers into the input and counters. | {
"domain": "cstheory.stackexchange",
"id": 2722,
"tags": "logspace"
} |
Why doesn't evolution converge on perfection? | Question: I got to know about an organism called the "tardigrade" (water bear), which is an extremely hardy organism and can survive in most conditions.
My question is: if the aim of life in general is to ensure the continuity of the species, why have we not simply stayed as tardigrades? It seems like they are the perfect candidates for survival, ensuring (to a degree) that the species does not get wiped out as easily as the dinosaurs were.
Does that mean that life has a different incentive, not just to survive? Or does it not have any? Could this be the reason for our incapability to make a superhuman intelligence, because our imitation of learning is to reach a certain objective while life does not have any distinct goal? Or am I missing a key point here?
BTW I am an amateur in Machine Learning where we basically try to mimic the learning of phenomenon of nature through 'evolution'. So I would appreciate answers with minimum of abbreviations and as simple as possible :)
Edit:
I am overwhelmed by the response I have received, but seeing the answers and communications, I have inferred that my question may be very basic and vague to biologists. The person who can answer would be one who has studied both subjects (deep learning and evolution). But even then, I thank you all for devoting your precious time to my question. Cheers! :)
Also, I wonder if there is some paradox somewhere here: in machine learning, when we simulate some environment, the agent, just like evolution, figures out how to survive it. But when more factors are present, the intelligence doesn't increase after a certain point. Could it thus be that there is something ethereal, unexplainable by science (like a soul), which actually gives us a more-than-enough complex brain to further increase our intelligence? Or is this a baseless thought?
Answer: The key point you're missing is that perfection is a variable, or perhaps more accurately, a function of many variables that depend on environmental factors and the actions of other species. Even tardigrades have evolved a multitude of different species*, suited to different environments and lifestyles. And apparently they don't do all that well in hot water: https://www.nature.com/articles/s41598-019-56965-z
A creature that evolves to be close to perfection for one set of environmental variables can be seriously mal-adapted to a different set. An albatross, for instance, is very close to perfection - that is, well adapted - to its niche as a creature that flies over oceans, while a dolphin is likewise close to perfection** for an air-breathing creature that lives in that ocean. But either would die in short order in the other's environment, and neither is as well-suited to desert life as a camel.
As for the really unrelated question of why we don't know how to make a "super-intelligence", we don't even know how intelligence actually works, as you should know if you've studied the field. On current evidence, intelligence, at least of the tool-making sort that some humans display, doesn't really seem to be a trait suited to long-term survival.
*About 1150, per Wikipedia: https://en.wikipedia.org/wiki/Tardigrade
**So much so that their basic body plans have been repeated throughout evolution: e.g. ichthyosaurs and marine pterosaurs. There are numerous other examples of such convergent evolution: https://en.wikipedia.org/wiki/Convergent_evolution | {
"domain": "biology.stackexchange",
"id": 10299,
"tags": "evolution, molecular-evolution, learning"
} |
Greiner's Field Quantization question | Question:
I upload a screenshot of Greiner's book on QFT. I don't understand one step. I need help understanding equation (3), what are the mathematical steps in between?
Greiner, Field Quantization, page 245 (Moller Scattering)
Answer: Just use the commutation relations (assuming bosons) $[b_{\mathbf p,s},b^\dagger_{\mathbf p's'}]=\delta(\mathbf p-\mathbf p')\delta_{ss'}$ to get rid of the operators.
So if you have
$$A=\langle 0|b_{\mathbf p s}\,b^\dagger_{\mathbf p's'}|0\rangle$$
then
$$A=\langle 0|\,b^\dagger_{\mathbf p's'}b_{\mathbf p s}+\delta(\mathbf p-\mathbf p')\delta_{ss'}\,|0\rangle,$$
but the operator term vanishes because $b_{\mathbf p s}|0\rangle=0$. Using $\langle 0|0\rangle=1$, we finally have
$$A=\delta(\mathbf p-\mathbf p')\delta_{ss'}.$$
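For a single discrete mode (where the delta function is replaced by the Kronecker normalization $[b,b^\dagger]=1$), one can check $\langle 0|b\,b^\dagger|0\rangle=1$ numerically with truncated ladder-operator matrices. This is my own toy illustration, not part of Greiner's text:

```python
import math

N = 6  # truncation dimension of the Fock space

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# <i| b |j> = sqrt(j) delta_{i, j-1}; bdag is the transpose
b    = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
bdag = [[math.sqrt(i) if i == j + 1 else 0.0 for j in range(N)] for i in range(N)]

M = matmul(b, bdag)
print(M[0][0])  # <0| b bdag |0> = 1.0
```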
Advanced techniques include the use of Wick's theorem. | {
"domain": "physics.stackexchange",
"id": 96104,
"tags": "homework-and-exercises, quantum-field-theory, operators, hilbert-space, wick-theorem"
} |
Mathematically speaking, is a "signal" a function or the set of outputs from a function | Question: In engineering it is well known that we abuse the notation of a function $f: A \to B$ by conflating it with its image $f(A) = \left\{y \in B \mid y = f(t) \text{ for some } t \in A\right\}$.
When we say: let $f(t) = \sin(\omega t)$ be a signal, do we mean a function or the set of outputs from that function, i.e., is a signal the function or its output?
Similarly, looking at the diagram below, would one say that $\epsilon(t)$ is a set of inputs (i.e. scalars) going into the controller, or a function?
Answer: A signal is a physical quantity (e.g. voltage) carrying information, or a set of values (e.g. samples, in the discrete case) of the given function for different values of the underlying independent variable.
In the diagram above, $\epsilon(t)$ is a mathematical model (i.e. a function) that characterizes the set of inputs (i.e. scalars) going into the controller.
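To illustrate the distinction in code (a toy sketch; the frequency and sampling rate are my own choices): the function is the mathematical model, while the list of samples is the set of observed values a discrete signal consists of.

```python
import math

f = lambda t: math.sin(2 * math.pi * 5 * t)   # the mathematical model (a function)
fs = 100                                       # sampling rate, samples per second
samples = [f(n / fs) for n in range(fs)]       # the signal as a set of observed values

print(len(samples))  # 100
```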
Strictly speaking, a signal is the "observed" (measurable) physical quantity while a function is a mathematical model (abstract representation) describing the relationship between the involved variables. | {
"domain": "dsp.stackexchange",
"id": 4023,
"tags": "signal-analysis, control-systems"
} |
Coordinate Transformation of Scalar Fields in QFT | Question: By definition, scalar fields are independent of the coordinate system; thus I would expect that a scalar field $\psi[x]$ would not change under the transformation $x^\mu \to x^\mu + \epsilon^\mu $. Correct?
Now when I look in the book "Introduction to QFT" by Peskin and Schroeder they state (in an example) that the scalar field $\psi[x]$ under an infinitesimal coordinate transformation $x^\mu \to x^\mu -a^\mu$ transforms as $\psi[x] \to \psi[x+a] = \psi[x] +a^\mu\partial_\mu\psi[x]$.
One possible solution I thought of was that a scalar field $f:M\to\mathbb{R}$ on a manifold (in this case spacetime) is invariant under coordinate transformations, but that the coordinate representation $\psi[x]$ does change under coordinate transformations (obviously).
Is this anywhere near correct?
Answer: Here's what's really going on. In classical field theory, a basic set of objects that we often consider are scalar fields $\phi:M\to \mathbb R$ where $M$ is a manifold. Now we can ask ourselves the following question:
Is there some natural notion of how a scalar field defined on a given manifold "transforms" under a coordinate transformation?
I claim that the answer is yes, and I'll attempt to justify my claim both mathematically, and physically. The bottom line is that we ultimately have to define the way in which fields transform under certain types of transformations, but any old definition will not necessarily be useful in math or physics, so we must make well-motivated definitions and then show that they are useful for modeling physical systems.
Mathematical perspective. (manifolds and coordinate charts)
Recall that a coordinate system (aka coordinate chart) on an $n$-dimensional manifold $M$ is a (sufficiently smooth) mapping $\psi:U\to \mathbb R^n$ where $U$ is some open subset of $M$. We can use such a coordinate system to define a coordinate representation $\phi_\psi$ of the scalar field $\phi$ as
\begin{align}
\phi_\psi = \phi\circ\psi^{-1}:V\to\mathbb R
\end{align}
where $V$ is the image of $U$ under $\psi$. Now let two coordinate systems $\psi_1:U_1\to \mathbb R^n$ and $\psi_2:U_2\to\mathbb R^n$ be given such that $U_1\cap U_2\neq \emptyset$. The coordinate representation of $\phi$ in these two coordinate systems is $\phi_1 = \phi\circ \psi_1^{-1}$ and $\phi_2 = \phi\circ \psi_2^{-1}$.
Now consider a point $x\in U_1\cap U_2$, then $x$ is mapped to some point $x_1\in \mathbb R^n$ under $\psi_1$ and to some point $x_2\in \mathbb R^n$ under $\psi_2$. We can therefore write
\begin{align}
\phi(x) &= \phi \circ \psi_1^{-1} \circ \psi_1(x) = \phi_1(x_1) \\
\phi(x) &= \phi \circ \psi_2^{-1} \circ \psi_2(x) = \phi_2(x_2)
\end{align}
so that
\begin{align}
\phi_1(x_1) = \phi_2(x_2)
\end{align}
In other words, the value of the coordinate representation $\phi_1$ evaluated at the coordinate representation $x_1 = \psi_1(x)$ of the point $x$ agrees with the value of the coordinate representation $\phi_2$ evaluated at the coordinate representation $x_2 = \psi_2(x)$ of the same point $x$. This is one way of understanding what it means for a scalar field to be "invariant" under a change of coordinates.
If, in particular, the manifold $M$ we are considering is $\mathbb R^{3,1} = (\mathbb R^4, \eta)$, namely four-dimensional Minkowski space, then we could consider the following two coordinate systems:
\begin{align}
\psi_1(x) &= x \\
\psi_2(x) &= \Lambda x+a
\end{align}
where $\Lambda$ is a Lorentz transformation and $a\in \mathbb R^4$, then the coordinate representations $\phi_1$ and $\phi_2$ of $\phi$ are, as noted above, related by
\begin{align}
\phi_1(x) = \phi_2(\Lambda x + a)
\end{align}
If we switch notation a bit and write $\phi_1 = \phi$ and $\phi_2 = \phi'$, then this reads
\begin{align}
\phi'(\Lambda x +a) = \phi(x)
\end{align}
which is the standard expression you'll see in field theory texts.
Physical perspective.
Here's a lower-dimensional analogy. Imagine a temperature field $T:\mathbb R^2 \to \mathbb R$ on the plane that assigns a real number that we interpret as the temperature at each point on some two-dimensional surface. Suppose that this temperature field is generated by some apparatus under the surface, and suppose that we translate the apparatus by a vector $\vec a$. We could now ask ourselves:
What will the temperature field produced by the translated apparatus look like? Well, each point in the temperature distribution will be translated by the amount $\vec a$. So, for example, if the point $\vec x_0$ has temperature $T(\vec x_0) = 113\,\mathrm K$, then after the apparatus is translated, the point $\vec x_0 + \vec a$ will have the same temperature $113\,\mathrm K$ as the point $\vec x_0$ before the apparatus was translated. The mathematical way of writing this is that if $T'$ denotes the translated temperature field, then $T'$ is related to $T$ by
\begin{align}
T'(\vec x+\vec a) = T(\vec x)
\end{align}
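This transformation law is easy to verify numerically; here is a small sanity check with a made-up temperature field (the specific $T$ is my own choice):

```python
import math

a = (1.5, -2.0)  # translation vector

def T(x):
    """Original temperature field on the plane (arbitrary example)."""
    return 100.0 + 13.0 * math.exp(-(x[0] ** 2 + x[1] ** 2))

def T_translated(y):
    """Field of the apparatus after translation by a: T'(y) = T(y - a)."""
    return T((y[0] - a[0], y[1] - a[1]))

x = (0.3, 0.7)
# Check T'(x + a) == T(x)
print(abs(T_translated((x[0] + a[0], x[1] + a[1])) - T(x)) < 1e-12)  # True
```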
A similar argument can be made for a scalar field on Minkowski space, but instead of simply translating some temperature apparatus, we could imagine boosting or translating something producing some Lorentz scalar field, and we would be motivated to define the transformation law of a scalar field under a Poincare transformation as
\begin{align}
\phi'(\Lambda x+a) = \phi(x)
\end{align} | {
"domain": "physics.stackexchange",
"id": 9473,
"tags": "differential-geometry, field-theory, tensor-calculus, coordinate-systems, covariance"
} |
Friction on a rod resting against a rough horizontal rail? | Question: In which direction does friction act on a rod leaning on a rough horizontal rail, where the normal force is perpendicular to the rod and the rod has its bottom end on a smooth horizontal floor with which it makes angle theta?
Answer: Here is what the free body diagram of the rod is
In red is the weight going through the center of the rod. In pink are the normal forces where the rod rests on the ground and rail. And in orange are the friction forces associated with the normal forces. I assumed the rod is trying to slide downwards in the absence of friction.
If in the course of calculating these forces, they end up with a negative sign, then you need to reverse the direction of the corresponding arrow. | {
"domain": "physics.stackexchange",
"id": 74351,
"tags": "homework-and-exercises, newtonian-mechanics, friction, rigid-body-dynamics, statics"
} |
Role of Term Constants in Simply Typed Lambda Calculus | Question: In the Wikipedia article on Simply Typed Lambda Calculus (among other places), there is a notion of a "term constant". This is particularly notable in the production grammar given:
In this production grammar, the c is the term constant. I am new to the simply typed variant of lambda calculus -- I do not understand what role this constant plays in the overall computations of STLC. Can anyone give some examples of how the term constant is used and explain its general purpose?
Answer: As per Sam's request, I rephrase my comment as an answer.
Constants play an important role in λ-calculi (not just the simply typed variant).
They are often convenient: even though we can usually represent our target data using $\lambda$-terms without constants, this tends to be unwieldy. For example, integers can be represented directly by some variant of a unary encoding, e.g. the Church encoding has $$\lambda g x. \underbrace{g ( g ( ... g ( g\ x ) .. ) ) }_{n}$$ as the representation of $n$. That leads to cumbersome, unreadable terms. With constants we can do arithmetic with 0,1,2,..., and +,−,∗,...
In a typed calculus, we need constructors for types, and constants play this role: e.g. 0,1,2,..., or +,−,∗,...
for the type of integers, true,false... or ∨,∧ for booleans etc. Naturally we also need to add reductions for constants like $3+4 \rightarrow 7$ etc (those are called $\delta$-reductions).
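To see why the pure encoding is unwieldy, here is a small sketch of Church numerals, with Python standing in for the untyped calculus (the helper names are my own):

```python
def church(n):
    """Church numeral for n: \\g x. g(g(...g(x)...)) with n applications of g."""
    return lambda g: lambda x: x if n == 0 else g(church(n - 1)(g)(x))

def unchurch(c):
    """Decode a Church numeral by applying it to the successor function and 0."""
    return c(lambda k: k + 1)(0)

# Church addition: \\m n g x. m g (n g x)
plus = lambda m: lambda n: lambda g: lambda x: m(g)(n(g)(x))

print(unchurch(plus(church(3))(church(4))))  # 7, cf. the delta-reduction 3+4 -> 7
```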
Note that the STLC was born with constants: Church's original
formulation contains constants $N_{oo}$, $A_{ooo}$, $\Pi_{o(o\alpha)}$
as well as $\iota_{\alpha(o\alpha)}$, representing negation,
disjunction, universal quantification and the selection operator,
respectively. | {
"domain": "cs.stackexchange",
"id": 2583,
"tags": "type-theory, lambda-calculus"
} |
Queries to implement Navigation stack in simulation | Question:
I am trying to implement the navigation stack in simulation, but first I want to build the map and save it to the map server, as that is a requirement. To build the map, I have followed the steps from this link: http://wiki.ros.org/stdr_simulator/Tutorials/Create%20a%20map%20with%20gmapping
I am able to see the simulated map in rviz; how can I load it into the map server?
Originally posted by Ramesh on ROS Answers with karma: 3 on 2017-05-03
Post score: 0
Answer:
You want to use the map_saver node from the map_server package. You can use this to store the map built with gmapping to disk (as a pgm image and a yaml file containing map metadata), and then later load that map using the map_server. Then you can do things like localize within the map using amcl.
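Concretely, the workflow might look like this (file names are my own): run `rosrun map_server map_saver -f mymap` while gmapping is publishing the map, which writes mymap.pgm and mymap.yaml, and later serve it with `rosrun map_server map_server mymap.yaml`. The YAML metadata file follows the standard map_server format and looks roughly like this (the origin values here are made up; the resolution matches the 0.020 m/cell mentioned in the comments below):

```yaml
image: mymap.pgm              # the occupancy image written by map_saver
resolution: 0.020             # meters per cell
origin: [-7.75, -7.46, 0.0]   # pose of the lower-left corner (made-up values)
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196
```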
Originally posted by jarvisschultz with karma: 9031 on 2017-05-03
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Ramesh on 2017-05-03:
I've built the map and got two files, one .pgm and the other .yaml. Now how can I load that map using map_server?
I am using this command, rosrun map_server map_server mymap.yaml and getting response :
[ INFO] [1493855683.820361988]: Read a 775 X 746 map @ 0.020 m/cell
how can I use this for amcl?
Comment by jarvisschultz on 2017-05-04:
amcl calls the static_map service offered by the map_server to get a copy of the map. | {
"domain": "robotics.stackexchange",
"id": 27801,
"tags": "navigation"
} |
Batch file to delete calling program and then itself | Question: There are times when I want to run a one-off executable on an end-user's machine that cleans up after itself, i.e. deletes itself without a trace. An exe can't delete itself because an exe can't be deleted while it's running. However, a batch file can delete itself, presumably because its contents get copied so the file can close before the code gets run.
So what if there was a general-purpose batch file that deletes the calling executable and then deletes itself? I couldn't find such a batch file so I made one, thinking maybe it could be useful to others. And obviously I want to see if it can be improved upon, of course.
This is particularly useful for an NSIS installer because the batch file can be embedded in the exe and then extracted.
Make sure you copy the batch file before you test it because it will delete itself!
@ECHO OFF
SET filename=%~nx1
IF "%filename%"=="" GOTO:EOF
TASKKILL /IM "%filename%" /F
:LOOP
TASKLIST | FIND /I "%filename%" >NUL 2>&1
IF ERRORLEVEL 1 (
GOTO CONTINUE
) ELSE (
ECHO %filename% is still running
PING -n 6 127.0.0.1>NUL
GOTO LOOP
)
:CONTINUE
DEL /F "%~f1"
DEL /F "%~f0"
I figured the easiest way to get the path of the calling program is to just have it passed in as a parameter. The IF "%filename%"=="" GOTO:EOF is very important because if the first parameter isn't defined then DEL /F "%~f1" will automatically act like a * is there and try to delete the contents of the whole folder.
Answer: TASKKILL /IM "%filename%" /F
It makes sense that you might want to do this, but I expect the file would be more useful if it waited for its calling program to end on its own. That way we can be sure the calling executable is done with everything else, including releasing any resources it's holding. Because you already have looping code checking for the process, this should be as simple as removing the line.
Comments would be great.
For example, I understand that PING -n 6 127.0.0.1>NUL is a fairly common delaying tactic, but it's still downright odd if you haven't seen it before.
Likewise filename manipulations (%~nx1 and such) will always be arcane. A comment saying that it pulls out the name and extension would be appreciated.
There's an alternative here, which is a bit of a cheat but worth mentioning as potentially safer in some cases. Because the use case of this batch file is that it's created by the executable that wants cleaning up, that executable could do the text substitution and put its own filename directly into the relevant code of the batch file. Obviously this depends on the language that the executable is written in having good string manipulation capabilities, but it would help protect against deleting whole folders and other such disasters.
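As a sketch of that substitution approach (my own template and names, not a drop-in for the reviewed script): the executable writes the cleanup batch file itself, with its own filename baked in, so no parameter handling is needed and the empty-parameter disaster cannot happen.

```python
import os
import sys

# Template for the generated cleanup script; {exe} is filled in below.
TEMPLATE = """@ECHO OFF
:LOOP
TASKLIST | FIND /I "{exe}" >NUL 2>&1
IF NOT ERRORLEVEL 1 (
    PING -n 6 127.0.0.1 >NUL
    GOTO LOOP
)
DEL /F "{exe}"
DEL /F "%~f0"
"""

def write_cleanup(batch_path, exe_name):
    """Write a cleanup batch file with the executable's name substituted in."""
    with open(batch_path, "w") as fh:
        fh.write(TEMPLATE.format(exe=exe_name))

write_cleanup("cleanup.bat", os.path.basename(sys.argv[0]) or "example.exe")
```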
GOTO CONTINUE
Personally, I would not have called it 'CONTINUE' because in many C family languages continue would continue with the next iteration of a loop, rather than break out of it. Obviously it makes no difference to the script, and it's not something that a pure batch programmer would trip over. I just found it slightly jarring, and thought I'd mention it.
) ELSE (
This is not clear cut, but some people find it easier to read if the else is omitted when the if side breaks, returns, or otherwise interrupts the control flow. That is
:LOOP
TASKLIST | FIND /I "%filename%" >NUL 2>&1
IF ERRORLEVEL 1 (
GOTO CONTINUE
)
ECHO %filename% is still running
PING -n 6 127.0.0.1>NUL
GOTO LOOP
Alternatively, you could swap the test around and avoid the goto CONTINUE altogether.
:LOOP
TASKLIST | FIND /I "%filename%" >NUL 2>&1
IF NOT ERRORLEVEL 1 (
ECHO %filename% is still running
PING -n 6 127.0.0.1>NUL
GOTO LOOP
)
I would personally find that easier to read as the batch magic equivalent of a conventional spin lock style construction. | {
"domain": "codereview.stackexchange",
"id": 30493,
"tags": "windows, batch"
} |
Android Dependencies problem - Teleop & camera tutorial | Question:
Hi there,
I followed these docs, rosjava_core and android_core, to install fresh copies of rosjava_core and android_core on my fresh Ubuntu 11.10. After the gradlew instructions, I imported the projects into eclipse.
The PubSub tutorial works properly, but I still have difficulties getting the android_tutorial_teleop and android_tutorial_camera tutorials to work.
Teleop app failed on the runtime due to a ClassNotFoundException : org.ros.android.views.visualization.VisualizationView
Camera app throws a ClassNotFoundException caused by org.ros.android.views.RosCameraPreviewView
These classes are declared in the android_gingerbread project, which is set in the "Android Dependencies" of Teleop and Camera projects. According to this android doc "android dependencies" are automatically exported that's why I'm confused now.
I probably missed something, because the pubsub tutorial works correctly and requires org.ros.android.views.RosTextView from android_gingerbread.
Thanks for help,
Jérôme
PS : Got the same exceptions when running the apk generated by gradlew
Originally posted by Jérôme Monceaux on ROS Answers with karma: 1 on 2012-04-30
Post score: 0
Original comments
Comment by Jérôme Monceaux on 2012-05-06:
Please, note that teleop depends directly on android_honeycomb_mr2, but android_honeycomb_mr2 depends on android_gingerbread, so teleop depends on android_gingerbread too.
Answer:
If I'm right, teleop depends on honeycomb, not on gingerbread.
That's what is said in the build.gradle of the project.
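For illustration, a project dependency declaration in the Gradle format of that era would look roughly like this (my own sketch, not the actual file contents):

```
dependencies {
    compile project(':android_honeycomb_mr2')
}
```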
Originally posted by Kat with karma: 155 on 2012-05-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9199,
"tags": "ros, rosjava, android, dependencies"
} |
Why does concave mirror simultaneously form two images? | Question: In an experiment to find focal length of a concave mirror, first we had to estimate its rough focal length. I kept an object at point which was beyond the focus in front of the mirror and got its real image on the screen. But as soon as I removed the screen, I could even see an inverted image of the needle behind the mirror (i.e., virtual image). How is this possible as the concave mirror always form a real image for an object kept at a distance greater than the focal length in front of it?
Answer: To see the real inverted image "behind" the mirror is a kind of optical illusion. The picture is still in front of the mirror, but your eyes cannot find a frame of reference, so they "see" the picture at the usual place you see it in a plane mirror; a virtual picture would not be inverted.
To help your eye see the picture at the place where it really is, place some kind of mauve frame instead of the screen; sometimes it even helps if you just put your finger beside the place where the screen was before, and you see the real picture in the air beside your finger.
"domain": "physics.stackexchange",
"id": 64964,
"tags": "optics, experimental-physics, reflection, geometric-optics"
} |
Relation between perturbative renormalization and RG group flow | Question: I have been trying to understand renormalization for a few days. I have consulted many sources and physics.stackexhange posts but I have unresolved issues, so I am hoping someone can help me!
I think I properly understand RG group flow, and how integrating out higher modes whilst keeping low energy physics constant leads to flows in coupling constants depending on the momentum cut-off. What I am struggling to understand, is exactly how this relates to the perturbative algorithm employed in getting rid of infinities in QFT.
It appears that the modern understanding of effective field theories is that a momentum cut-off is perfectly valid. For example, in the article by G. Peter Lepage at arXiv: hep-ph/0506330, he notes that one can simply leave the cut-off as an explicit parameter and compute physical quantities using this cut-off. Once the coupling constants have been tuned so that the theory agrees with some experimental measurements one obtains a predictive theory with an explicit cut-off.
However, in Chapter 5 of my lecture notes, Prof. Skinner employs a perturbative calculation where he redefines couplings constants by shifting them by an amount dependent on the cut-off in order to get rid of the cut-off dependence in an amplitude calculation. This idea and process seems at odds with the understanding outline in my previous paragraph. What is the resolution?
Answer: Let's say you do "normal" perturbation theory with cutoff regularization with cutoff $\Lambda$ in a theory with bare parameters $a_i$. The bare parameters will be functions of $\Lambda$, $a_i(\Lambda)$. You can use words like "we choose counter terms to cancel the divergences in loop integrals". This amounts to choosing the $\Lambda$ dependence in $a_i(\Lambda)$ order by order to cancel the $\Lambda$ dependence of loop integrals in correlation functions. The theory explicitly depends on $\Lambda$, because the $a_i$ are functions of $\Lambda$. But, observable quantities do not depend on $\Lambda$.
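To make this concrete, the cancellation condition can be written schematically as follows (my paraphrase in standard Callan-Symanzik form, not an equation from the original answer):

$$\Lambda\frac{d}{d\Lambda}\,G^{(n)}\!\big(p_1,\dots,p_n;\,a_i(\Lambda),\Lambda\big)=0
\quad\Longrightarrow\quad
\Lambda\frac{\partial G^{(n)}}{\partial\Lambda}+\sum_i\beta_i(a)\,\frac{\partial G^{(n)}}{\partial a_i}=0,
\qquad \beta_i(a)\equiv\Lambda\frac{d a_i}{d\Lambda}$$

The functions $\beta_i$ generate the flow of the couplings $a_i(\Lambda)$, which is the differential equation referred to in the next paragraph.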
We don't have to change the words that much to connect with a renormalization group picture. The main difference is that rather than thinking of $\Lambda$ as a fixed cutoff parameter, we think of it as a parameter that can vary. The $\Lambda$ dependence of observable quantities must cancel out. We can express this condition by saying that the $\Lambda$ derivative of a correlation function must vanish. This condition implies that the $a_i(\Lambda)$ will obey a differential equation (in fact there are a tower of differential equations coming from different correlation functions). What this equation expresses is how we must update the value of the $a_i$ parameters as we vary $\Lambda$. | {
"domain": "physics.stackexchange",
"id": 87191,
"tags": "quantum-field-theory, renormalization, effective-field-theory"
} |
Structure of NO2 molecule | Question: Which is the correct structure of $\ce{NO2}$? While searching the internet I found out that
$\ce{NO2}$ has a coordinate bond and two covalent bonds. $\ce{N}$ will have a positive charge. The $\ce{O}$ with the coordinate bond will have a negative charge (this is the part which I don't understand: as $\ce{O}$'s octet is complete, why will it have a negative charge?).
$\ce{NO2}$ came from $\ce{HNO2}$, and that is where the $\ce{O}$ gets its negative charge. But in this molecule, there is no $\ce{O}$ with a coordinate bond.
In one molecule $\ce{N}$ has one electron and a positive charge, and in the other, two electrons and no charge?
Answer: $\ce{NO2}$ is a free radical. Its resonance structures are as follows:
Unsurprisingly, $\ce{NO2}$ reacts readily with a variety of substances, including itself at low temperatures to form $\ce{N2O4}$.
From Wikipedia:
Nitrogen dioxide ($\ce{NO2}$) at −196°C, 0°C, 23°C, 35°C, and 50°C: it converts to the colorless dinitrogen tetroxide ($\ce{N2O4}$) at low temperatures, and reverts to $\ce{NO2}$ at higher temperatures.
"domain": "chemistry.stackexchange",
"id": 11969,
"tags": "molecular-structure, oxides"
} |
How to select a subset from the array $A=\{\frac{S_1}{K_1},\frac{S_2}{K_2},...,\frac{S_n}{K_n}\}$ such that the subset can meet this requirement? | Question: Given an array $A=\{\frac{S_1}{K_1},\frac{S_2}{K_2},...,\frac{S_n}{K_n}\}$ in which every $S_i$ and $K_i$ is a positive floating-point number, and a positive floating-point number $R$, how do we select a set of elements $C=\{\frac{S_{p_1}}{K_{p_1}},\frac{S_{p_2}}{K_{p_2}},...,\frac{S_{p_m}}{K_{p_m}}\}$ such that the absolute value of the difference between $\frac{\sum_{i=1}^m S_{p_i}}{\sum_{i=1}^m K_{p_i}}$ and $R$ (i.e. $|\frac{\sum_{i=1}^m S_{p_i}}{\sum_{i=1}^m K_{p_i}}-R|$) is minimal? The subset $C$ must have at least one element.
This problem has confused me for many weeks. I can't even prove whether it is in NP or NP-hard. Could anyone give me some ideas on how to address it? I really appreciate any help you can provide.
Answer: The special case where $K_1=K_2=\cdots=K_n$ is as hard as the subset-sum problem, which is NP-hard -- so your problem is NP-hard as well.
A plausible approach is to use integer linear programming. Let $x_i$ be a zero-or-one integer variable. You are looking for $x_i$ such that
$$-\epsilon \sum_i x_i K_i \le \sum_i x_i S_i - R \sum_i x_i K_i \le \epsilon \sum_i x_i K_i.$$
For fixed $\epsilon$, you can test whether there is a solution to this linear inequality using an ILP solver. Then, you can use binary search on $\epsilon$ to find the smallest $\epsilon$ for which a solution exists. | {
"domain": "cs.stackexchange",
"id": 20624,
"tags": "algorithms"
} |
Can decision trees handle Nominal Categorical variables? | Question: I have read that decision trees can handle categorical columns without encoding them.
However, as decision trees make splits on the data, how does it handle Nominal Categorical variables?
Surely a standard numerical split would be spurious in this situation.
Answer: The fact is, if you feed a training dataset with categorical features without numerical encoding into a decision tree, say, scikit-learn in Python, you will see an error message, as the package has no idea how to do a binary split on your (nominal) categorical feature.
In principle, we can choose to do tree split in a way that all different values of a categorical feature give rise to different branches. However, if the number of categories is large, you will get a shallow tree which is not good for capturing complex dependencies.
The usual approach to deal with categorical features is to first numerically encode them: one-hot, hashing, target encoding, etc. Target encoding is appealing in that it does not create many features, but the issue of target leakage was perplexing until the recent emergence of CatBoost.
https://arxiv.org/abs/1706.09516
To address your further question:
The latest versions of both XGBoost and LightGBM support one-hot encoding and a special kind of target-statistics encoding.
https://xgboost.readthedocs.io/en/stable/tutorials/categorical.html
https://lightgbm.readthedocs.io/en/latest/Features.html#optimal-split-for-categorical-features
The trick for the latter comes from Fisher's paper "On Grouping for Maximum Homogeneity" back in 1958. The question we have is to get the best split for an internal node in the tree growth. Best is defined in terms of some kind of homogeneity. Suppose a categorical feature has values $\{a,b,c,d\}$. Then, Fisher told us that we don't need to enumerate all possible partitions, $\{\{a\}, \{b,c,d\}\}$, $\{\{a,b\},\{c,d\} \}$,...etc. but it suffices to partition according to some order of elements. The ordering implemented in both XGBoost and LightGBM is the gradient statistics. That is, you first convert a (nominal) categorical feature to a numerical one by gradient statistics and then find the best split like what you do with a numerical feature.
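A pure-Python toy version of that ordering trick for a binary target (my own sketch, not the actual XGBoost/LightGBM implementation): order the categories by their mean target, then only evaluate prefix splits along that order.

```python
def best_categorical_split(rows):
    """rows: list of (category, target) with target in {0, 1}.

    Returns (left-branch category set, weighted Gini) of the best split,
    considering only prefix splits along the mean-target ordering."""
    cats = {}
    for c, y in rows:
        cats.setdefault(c, []).append(y)
    # Fisher-style ordering: sort categories by mean target (ties by name)
    order = sorted(cats, key=lambda c: (sum(cats[c]) / len(cats[c]), c))

    def gini(ys):
        if not ys:
            return 0.0
        p = sum(ys) / len(ys)
        return 2.0 * p * (1.0 - p)

    n = len(rows)
    best_left, best_score = None, float("inf")
    for i in range(1, len(order)):          # prefix splits along the ordering
        left = set(order[:i])
        ly = [y for c, y in rows if c in left]
        ry = [y for c, y in rows if c not in left]
        score = len(ly) / n * gini(ly) + len(ry) / n * gini(ry)
        if score < best_score:
            best_left, best_score = left, score
    return best_left, best_score

rows = [("a", 0), ("a", 0), ("b", 0), ("b", 1),
        ("c", 1), ("c", 1), ("d", 1), ("d", 1)]
print(best_categorical_split(rows))  # ({'a', 'b'}, 0.1875)
```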
"Do all the better decision tree models, XGBoost, LightGBM deal with nominal categorical columns automatically?"
Yes, in the sense that you only need to specify which columns are of the categorical type, and then the package finds the optimal split for those categorical features automatically, internally, according to the passage above. Let me stress that it feels like no encoding is done and the packages handle categorical columns automatically, but it is the gradient statistics (a kind of target encoding) under the hood.
"What would you recommend then?"
It depends on what you want to do. XGBoost used to not support automatic handling of categorical features but now they do. LightGBM, XGBoost, and Catboost are state-of-the-art boosting machine packages. I am not in a position to benchmark their performances, but all of these provide a pretty good API to deal with categorical features.
One caveat: There is one issue of the gradient statistics encoding. In such encoding target values are used and so target leakage exists. Catboost improved this by using a special kind of target statistics encoding.
In short, in principle you can split in a way that each category is mapped to a new branch. This is a split according to a nominal categorical feature without encoding, but it is no good. So we do need some kind of good encoding to do the split and the state-of-the-art packages all support automatic handling of categorical features. If you dataset is small and exhibits prediction bias, you may try catboost. | {
"domain": "datascience.stackexchange",
"id": 11643,
"tags": "decision-trees, categorical-data, encoding, one-hot-encoding, categorical-encoding"
} |
If a very huge earthquake occurred anywhere on Earth, could waves emerge to come together again on the opposite side? | Question: Suppose that a super-powerful earthquake occurred anywhere on Earth, say one with the value 10 on Richter's scale. The quake can have any value, but as can be read in a comment below, the highest value ever measured was 32, on a superdense star. In that case, it's much more difficult to tear the star apart. The Earth, in contrast, could be torn apart by a quake of value 10 because it is far less massive.
Suppose the quake was mainly transversal (in a vertical direction). Could it be that, correspondingly, waves emerged from the center of the quake, traveled around the Earth, and came together and were reinforced again on the opposite side of the center, with the effect that the quake was felt more strongly on the opposite side of the center than at places halfway from the center (or halfway to the opposite side of the center), to say it in one long breath? Or would too much energy be absorbed from the waves by the Earth for them to reach the opposite side?
Answer: It is called "antipodal focusing". See for example Antipodal focusing of seismic waves observed with the USArray.
We present an analysis of the M-w = 5.3 earthquake that occurred in the Southeast Indian Ridge on 2010 February 11 using USArray data. The epicentre of this event is antipodal to the USArray, providing us with an opportunity to observe in details the antipodal focusing of seismic waves in space and time. We compare the observed signals with synthetic seismograms computed for a spherically symmetric earth model
The above paper deals with "body waves" that travel through the interior of the Earth.
There are also Rayleigh waves that travel on the surface and can travel around the Earth several times before dissipating (Wikipedia). Antipodal focusing of seismic waves due to large meteorite impacts on Earth does numerical simulations of surface waves at the antipode of the Chicxulub impact. The waves do not arrive at the antipode at the same time because of Earth’s ellipsoidal shape and different rock properties along their paths.
Isosurfaces of the norm of the peak displacement vector after a vertical impact for the impact hemisphere (left-hand side) and antipodal hemisphere (right-hand side).
(They say: "Note that contours of the continents are today’s, not their end Cretaceous positions, and therefore do not show the original Chicxulub antipode 65 Ma.") | {
"domain": "earthscience.stackexchange",
"id": 1925,
"tags": "earthquakes, waves, scale"
} |
How to know a positive or negative phase from a q-sphere diagram? | Question: How to find out whether it is a positive or negative $\pi$ phase between 00 and 11 from a q-sphere diagram?
Answer: The phase corresponding to the color of state $|11\rangle$ is $e^{i \pi}= -1$. So, it is a negative phase.
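A tiny numeric check of this (ours, independent of Qiskit): the color angle $\theta$ maps to the phase factor $e^{i\theta}$, so $\theta = \pi$ gives $-1$ and $\theta = 0$ gives $+1$.

```python
import cmath
import math

def phase_factor(theta):
    """Phase factor e^{i*theta} for a state colored with angle theta."""
    return cmath.exp(1j * theta)

print(round(phase_factor(math.pi).real))  # -1, i.e. a negative phase
print(round(phase_factor(0.0).real))      # +1, i.e. a positive phase
```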
Generally, you plug in whatever angle the color of the state corresponds to into $e^{i \theta}$ to get the phase of the state. | {
"domain": "quantumcomputing.stackexchange",
"id": 2999,
"tags": "qiskit"
} |
Rolling objects released from rest: time down the ramp | Question:
Since we are given the values of I for each object, I was able to calculate the KE's of each:
the solid spheres had KE of $1/5mv^2$; the hollow sphere had a KE of $1/3 mv^2$, and the hoop had $1/2mv^2$ (for KE).
Since $KE = W = Fd = mad$, $a = KE/(md)$. Since all KE's had m's in their equations, the m's can be cancelled out. This gives us $a$, and from $v^2 = v_0^2 + 2ax$, where $x$ is the same for all and $v_0 = 0$ for all, $v$ depends solely on $a$. Then I had $a_D = v^2/(2d), a_B = v^2/(3d),$ and $a_A = a_C = v^2/(5d)$.
That tells me that then the order from fastest to slowest should be
D > B > A = C.
The answer, however, is the exact opposite: A=C > B > D.
Could someone please help me see why?
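For reference, a quick numeric check (ours) of the stated correct ordering: for a body with moment of inertia $I = kmr^2$ rolling from rest through the same height $h$, energy conservation gives $v = \sqrt{2gh/(1+k)}$, so a smaller $k$ means a larger speed at the bottom (and a shorter time down the ramp).

```python
import math

def v_bottom(k, g=9.81, h=1.0):
    """Speed at the bottom for a body with moment of inertia I = k*m*r**2."""
    return math.sqrt(2 * g * h / (1 + k))

# A, C: solid spheres (k = 2/5); B: hollow sphere (k = 2/3); D: hoop (k = 1)
for name, k in [("solid sphere", 2 / 5), ("hollow sphere", 2 / 3), ("hoop", 1.0)]:
    print(f"{name}: v = {v_bottom(k):.3f} m/s")
# solid spheres are fastest, the hoop slowest: A = C > B > D
```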
Answer: Each object starts at the same height, and ends at the same height, converting its gravitational potential energy into kinetic energy. For extended objects that rotate, the kinetic energy of the object is the sum of translational (straight-line) kinetic energy and rotational kinetic energy, because it takes energy to cause an object to increase its rate of rotation. Note that it takes more energy to achieve an equal increase in the rate of rotation of an object with a large moment of inertia vs. an object with a small moment of inertia. Assuming equal masses for the objects, this means that all objects reach the bottom of the ramp with the exact same total kinetic energy, but this total kinetic energy is divided between translational kinetic energy and rotational kinetic energy. Thus, the objects with the higher rotational kinetic energy (in other words, the higher moment of inertia) will have a lower translational kinetic energy, and hence, a lower velocity at the bottom of the ramp. Note that this effect applies even if the objects in the picture have different masses. | {
"domain": "physics.stackexchange",
"id": 53614,
"tags": "homework-and-exercises, energy, kinematics, rotation, speed"
} |
kinect obstacle avoidance | Question:
I have done the segmentation in the point cloud and clustering in the point cloud. Now, I have to implement the obstacle avoidance. I wonder how I can find a good algorithm for path planning. Is there any PCL API that can help me find a path opening (the gap between 2 clusters)?
Originally posted by dmngu9 on ROS Answers with karma: 150 on 2015-04-24
Post score: 0
Answer:
Are you trying to plan a manipulator path between point cloud clusters, or are you trying to navigate a mobile robot between two obstacles?
If you're trying to plan a manipulator, use MoveIt and add collision objects to your scene that correspond to the location of the clusters. Your manipulator will automatically plan around them.
If you're trying to move a mobile robot, take a look into SLAM for automatic collision avoidance and mapping.
You should be able to use PCL to find the bounding boxes of two point clouds and then calculate the minimum distance between them. This would be your gap distance.
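For illustration, here is the bounding-box idea in plain Python (our sketch; in a real pipeline the per-cluster min/max would come from PCL, e.g. `getMinMax3D` on each cluster):

```python
def bounding_box(points):
    """Axis-aligned bounding box of a list of (x, y, z) points."""
    return ([min(p[i] for p in points) for i in range(3)],
            [max(p[i] for p in points) for i in range(3)])

def box_gap(box_a, box_b):
    """Minimum Euclidean distance between two axis-aligned boxes (0 if they overlap)."""
    (amin, amax), (bmin, bmax) = box_a, box_b
    d2 = 0.0
    for i in range(3):
        # per-axis separation; zero when the boxes overlap along this axis
        gap = max(bmin[i] - amax[i], amin[i] - bmax[i], 0.0)
        d2 += gap * gap
    return d2 ** 0.5

left = [(0, 0, 0), (1, 1, 1)]
right = [(3, 0, 0), (4, 1, 1)]
print(box_gap(bounding_box(left), bounding_box(right)))  # 2.0
```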
Originally posted by Adam Allevato with karma: 194 on 2015-05-01
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by dmngu9 on 2015-05-02:
I already did the bounding box, but it doesn't give me a good result. Especially when I have a chair with a gap underneath, the box includes everything. It's not a very optimal approach.
Comment by Adam Allevato on 2015-05-03:
If your bounding box includes everything, then you're not looking for the distance between two clusters as you said above. I'm trying to answer the original question - a gap between two clusters.
As I said above, look into SLAM-based nav: http://wiki.ros.org/navigation/Tutorials | {
"domain": "robotics.stackexchange",
"id": 21530,
"tags": "ros, kinect, pcl, 3d-navigation"
} |
Palindrome operations on a string | Question: You are given a string S initially and some Q queries. For each query you will have 2 integers L and R. For each query, you have to perform the following operations:
Arrange the letters from L to R inclusive to make a Palindrome. If you can form many such palindromes, then take the one that is lexicographically minimum. Ignore the query if no palindrome is possible on rearranging the letters.
You have to find the final string after all the queries.
Constraints:
1 <= length(S) <= 10^5
1 <= Q <= 10^5
1<= L <= R <= length(S)
Sample Input :
4
mmcs 1
1 3
Sample Output:
mcms
Explanation: The initial string is mmcs, and there is 1 query which asks to make a palindrome from 1 to 3, so the palindrome will be mcm. Therefore the string will be mcms.
If each query takes O(N) time, the overall time complexity would be O(NQ), which will give TLE. So each query should take around O(log n) time. But I am not able to think of anything that will solve this question. I think, since we only need to find the final string rather than what every query results in, there must be some other way to approach this question. Can anybody help me?
I found this question in a sample test of a coding competition.
Answer: The key idea is that there are only $26$ alphabets in the string. Let $\Sigma$ denote the set of alphabets, i.e., $|\Sigma| = 26$. Assume that you have a data structure that, for every pair of $L$ and $R$, computes the number of occurrences of any alphabet in $[L,R]$ efficiently. Using those numbers, you can easily check in $O(|\Sigma|)$ time whether the substring from $L$ to $R$ can be rearranged into a palindrome; this part is easy, so I leave the details to you. After finding the palindrome, you need to update the data structure without changing the original string; this part is complex, and I will explain it in more detail.
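To make the "easy part" concrete, here is a sketch (ours) of both steps from per-letter counts: a rearrangement into a palindrome exists iff at most one count is odd, and the lexicographically smallest palindrome lays half of each count down in increasing order around the single odd letter.

```python
def counts_of(s):
    """26-slot letter-count vector for a lowercase string."""
    counts = [0] * 26
    for ch in s:
        counts[ord(ch) - ord("a")] += 1
    return counts

def smallest_palindrome(counts):
    """counts[i] = occurrences of chr(ord('a') + i); None if no palindrome exists."""
    odd = [i for i in range(26) if counts[i] % 2 == 1]
    if len(odd) > 1:
        return None  # the query is ignored
    half = "".join(chr(ord("a") + i) * (counts[i] // 2) for i in range(26))
    mid = chr(ord("a") + odd[0]) if odd else ""
    return half + mid + half[::-1]

print(smallest_palindrome(counts_of("mmc")))  # "mcm", as in the sample
print(smallest_palindrome(counts_of("ab")))   # None: two odd counts
```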
Data Structure: Maintain an Interval tree (or segment tree) for each alphabet. A leaf in the tree corresponds to an interval $[s,t]$, such that the alphabet appears contiguously in the string $S$ from positions $s$ to $t$. For example: Suppose the string $S$ is $baaaedaabbbzz$ and the alphabet is $a$. Then the leaves of the segment tree of alphabet $a$ will be: $[2,4], [7,8]$ since $a$ appears in the string from positions $2$ to $4$, and it also appears in the string from positions $7$ to $8$. Similarly, for $b$, the leaves of its segment tree would be $[1,1], [9,11]$.
The segment trees for all the alphabets can be constructed in $O(|S| \log |S|)$ time. If you are not familiar with segment trees, you will need to study it by yourself.
Query: For a given query $[L,R]$, you can find the number of occurrences of any alphabet between positions $L$ and $R$ using the corresponding segment tree of the alphabet. This takes $O(\log |S|)$ time for each alphabet. Now, you can easily check if the sub-string makes a palindrome or not.
Updating the Data Structure: Now, you do not need to output/print the palindrome; you just need to find the interval in which an alphabet occurs in the palindrome (the lexicographically smallest one). This can be done in $O(|\Sigma|)$ time; it is easy, so I leave the details to you. There will be at most two intervals for each alphabet in the palindrome. Update the segment trees of each alphabet by adding these new intervals. Adding these two intervals takes $O(\log |S|)$ time. However, you also need to remove some old intervals that overlap with these new intervals. This also takes $O(\log |S|)$ time. Why? Think in terms of amortized analysis.
Lastly, after performing all the queries, you can reconstruct the final string using the segment trees.
Overall running time is $O((|S|+|\Sigma|) \cdot \log |S|) \approx 10^7$. | {
"domain": "cs.stackexchange",
"id": 18992,
"tags": "substrings"
} |
How to calculate mass of water of crystallisation? | Question: How do you work out the mass of water of crystallisation for 13.5 g of
$$\ce{Al2(SO4)3*6H2O}$$
The correct answer is 3.24 g of water but I'm unable to derive the answer.
What I have tried:
I worked out the mols of aluminium sulfate which is approx 0.038 mol by dividing 13.5 g/353.5.
Since the ratio of aluminium sulfate: water of crystallisation is 1:6, I multiplied the number of mols by 6 to result in 0.228 mol of water.
To work out the mass, I multiplied mols of water by molar mass to result in 4.101 g which is NOT the answer of 3.24 g .
What am I doing wrong and what is the correct method?
Answer: The mass of aluminium sulfate has to be taken along with the water of crystallization.
Now molar masses
$$
\begin{array}{c| c}
\text{compound} & \text{ mass (g)} \\
\hline
\ce{Al2(SO4)3} & 342 \\
\ce{6H2O} & 108 \\
\ce{Al2(SO4)3\cdot 6H2O} & 450\\
\end{array}
$$
After that it's simply ratio and proportion: find the moles of the hydrated salt, then the moles of water, and finally the mass of water.
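A quick numeric check of that proportion (ours):

```python
# moles of hydrate = 13.5 g / 450 g/mol; each mole carries 6 mol of water (18 g/mol)
M_hydrate = 342 + 6 * 18        # = 450 g/mol for Al2(SO4)3 . 6 H2O
mass_water = 13.5 / M_hydrate * 6 * 18
print(round(mass_water, 2))     # 3.24
```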
$$\frac{13.5}{450} \times 6 \times 18 = 3.24$$ | {
"domain": "chemistry.stackexchange",
"id": 10017,
"tags": "water, crystal-structure"
} |
Difference between $u$ and $v$ as velocity variables in special relativity | Question: I am beginning to learn about special relativity, so I apologize for the (most likely) basic question.
I frequently see, for example, the Lorentz Factor given by the equation $\gamma = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$, which makes sense to me. However, I also see it frequently written with a $u$ instead of $v$. (I have also seen Lorentz Transformations and other equations with $u$ instead of $v$.)
Within special relativity, is there a conventional situation in which $u$ is used instead of $v$? Obviously, $u$ denotes some velocity, but is there a specific way in which it usually differs from $v$?
For example, this paper uses both $v$ and $u$ as velocities. (The content of the paper is way too advanced for what I know about SR, but this is just an example of what I am talking about.)
I tried to search for an answer, but most of the things I found were about $u$ being used as an initial velocity, which I don't think applies to this situation.
Edit: I glanced at the Wikipedia page for the relativistic velocity addition formula, which said
The notation employs $u$ as velocity of a body within a Lorentz frame S, and $v$ as velocity of a second frame S′, as measured in S, and $u′$ as the transformed velocity of the body within the second frame.
Is this the general convention?
Answer: In many discussions of Special Relativity, there are three velocities: the velocity of an object in some reference frame, the velocity of the same object in some other reference frame (one that is moving relative to the first frame), and the relative velocity between the two frames.
These are often denoted as $\mathbf u$, $\mathbf u’$, and $\mathbf v$, or $\mathbf v$, $\mathbf v’$, and $\mathbf u$. Since there is no standard, you have to pay attention to what the meanings are. | {
"domain": "physics.stackexchange",
"id": 72318,
"tags": "special-relativity, velocity, conventions, notation"
} |
Inconsistency in ray diagram | Question:
This example purports to show reflection of light rays from a spherical mirror. It looks good, until you try to draw a ray from the tip of the candle flame, then through the focal point, $F$, and then emerging parallel to the optic axis $(CF)$.
Go ahead, draw it on your computer monitor screen, or print this page and draw it on the paper. Whoops! This ray should then pass through the image of the tip of the candle flame, but it doesn't!
Here's what you get when you draw the third light ray (in red) from the candle flame. It comes nowhere near the image of the candle flame. As soon as you draw some of the "missing" rays, you discover that the diagram becomes inconsistent.
To me, the diagram seems perfect. Where did this anomaly come from? Please help.
Answer: You have just shown the effect of spherical aberration.
Here is an accurate drawing showing that even when parallel to the principal axis rays after reflection do not meet at a point.
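A quick numerical illustration of this (ours, not from the original answer): for a concave mirror of radius $R$, a ray parallel to the axis at height $y$ makes an angle $\theta$ with the normal, with $\sin\theta = y/R$, and after reflection crosses the axis at a distance $R - R/(2\cos\theta)$ from the mirror vertex. Only in the paraxial limit does this approach the focal length $R/2$; marginal rays cross closer to the mirror, which is why the simple three-ray construction fails.

```python
import math

def vertex_to_crossing(y, R=1.0):
    """Axis-crossing distance (from the mirror vertex) of a reflected parallel ray."""
    theta = math.asin(y / R)  # angle of incidence at the mirror surface
    return R - R / (2 * math.cos(theta))

print(vertex_to_crossing(1e-6))  # ~0.5: the paraxial focal length R/2
print(vertex_to_crossing(0.4))   # noticeably less than 0.5
```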
Here is an example which you may see when having a drink and a caustic is produced. | {
"domain": "physics.stackexchange",
"id": 67622,
"tags": "optics, reflection, curvature, geometric-optics"
} |
Why can plants (e.g. parsley) keep vitamin C despite all the sun? | Question: I have read that vitamin C is highly sensitive to light. So how can parsley, for instance, keep its vitamin C when it's flooded with sunlight?
Answer: UV light promotes the photo-oxidation of ascorbic acid (AsA) to dehydroascorbic acid (DHA).
DHA can be reduced to AsA by a specific dehydrogenase using reduced glutathione as reductant. I'm pretty sure that all cells are able to carry out this recycling of ascorbate.
Interestingly, in plants it appears that ascorbate acts as an electron donor to photosystem II (which is equivalent to it being oxidised). | {
"domain": "biology.stackexchange",
"id": 1762,
"tags": "biochemistry, plant-physiology, vitamins"
} |
Best traffic control device/system for given situation | Question: I am familiar with the round-a-bout/traffic light pros and cons, however, I've been tasked with deciding on a solution for the given parameters below.
Currently, there is a two-way (four lane) road with a speed limit of 35 mph. Rough traffic counts for peak hours are 580 vehicles in the morning (primarily North bound), 660 vehicles mid-afternoon (North/South split), and 400 evening (primarily South bound).
Access to a new facility will create a need for an access road resulting in a 3-way intersection. Access road will be to the East and for the most part only North bound traffic will make right turns to enter the facility. Less than 10% of the North bound traffic is expected to use the facility at any given time. Once finished at this facility, it is expected that the traffic will leave via the same access road and make right hand turns to continue North bound. However, as much as 20% of this traffic will need to make a left turn to return South bound.
Assume a community with some, but very little experience with round-a-bouts, and a primary desire to keep North and South bound traffic flow steady.
Giving your opinions and reasons, would you consider a round-a-bout or traffic control device utilizing a camera system for the left turn lane on the access road to be better?
Answer: A round-a-bout introduces a disruption in any traffic, no matter which direction. In your case the only two "collision" directions - two left turns - are the least used ones, so traffic lights working in a preference program with induction loops on the two possible left turns, plus uncontrolled right turn lanes would be optimal.
The crossing would stay in green light as long as there's no need for vehicles to take the turn. Only detectors would activate the phases for left turns (and only one would block both the north and the south road). You can either limit the rate at which the side exits can be activated by detectors on the main direction (better, more expensive) or by timing it, so that there must pass at least X time between the turn activations.
If real estate is a premium, you can create about all of the infrastructure for this crossing by reducing the main road to one lane in each direction, dedicating the remaining lane to turns, inclusion, and the "islets".
Here's an actual map from one of our controllers with a similar situation. This one includes two zebra crossings as well. It uses more detectors to determine optimal timing of green light, and change signals even before given vehicle would have to stop.
(An interesting thing about this one: it uses detection loops on the main road but videodetection on the side road - the road was still due to be finished when the traffic lights were installed, and so we couldn't put induction loops in the gravel. Currently induction loops would be better there (both more reliable and cheaper), except videodetection is already in place, so replacing it would only mean more cost.)
Also, if your investor is rich and wants something prestigious and memorable (and truly optimal for the traffic), get a round-a-bout with a viaduct. | {
"domain": "engineering.stackexchange",
"id": 50,
"tags": "civil-engineering, highway-engineering, traffic-intersections"
} |
Ergodicity of the Drude model | Question: The Drude model of electric conduction in solids deals with independent free electrons subject to random collisions with the crystal lattice (the direction where the electrons are scattered after a collision is random).
A simplified model is the Lorentz gas, where the collision are deterministic. If I understand correctly, it was shown by Sinai that this model is ergodic.
What about the original Drude model: is it ergodic? (Are there references on that?) A side question is whether this would have a physical significance?
Answer: As far as I know, Gallavotti proved the ergodicity of the Lorentz gas, while Sinai proved that of a system of $N \leq 5$ rigid spheres. Anyway, this is a minor detail. For certain aspects, a more suitable model for the Drude model is the Boltzmann gas. Lanford has shown (in the 1970s, I think) that the entropy for this model is always increasing, but no one has proved that the Boltzmann gas is ergodic. So the answer to your question is: if, at our level of accuracy, the Lorentz gas were an appropriate mathematical scheme for the Drude model, then it would be ergodic. Otherwise we can't conclude, since Sinai's result is very important but too limited (at the present day).
However, it is an interesting question from a mathematical point of view (for me, for example, it really is), but for a physicist it is not very important, since the Drude model does not provide an appropriate level of precision for most calculations carried out nowadays in solid state physics (at least, concerning what I have seen in my course on condensed matter; I'm not a specialist in solid state physics). Moreover, none of these models takes into account Coulomb interactions between charged particles. (This remark restricts - in principle - the domain of applicability of such schematizations a lot.)
I think you could find very interesting the treatments of this subject (ergodic theory) by Halmos and Arnold in their classical monographs.
References.
P. R. Halmos, Lectures on ergodic theory
V.I. Arnold, Ergodic problems of classical mechanics and Mathematical methods of classical mechanics
G. Gallavotti, Statistical mechanics and The elements of mechanics | {
"domain": "physics.stackexchange",
"id": 10237,
"tags": "mathematical-physics, statistical-mechanics, condensed-matter, kinetic-theory"
} |
Is AMCL mandatory for outdoor robots? | Question:
Dear all,
I'm new in the ROS world and I have a question that would seem to be stupid for most of you but we all have to start somewhere.
I have trouble understanding the role of AMCL. I believe that AMCL is used by indoor robots to localize the robot on a pre-loaded map (made with laser scans). Am I wrong?
My robot will be used outdoor with GPS, IMU, Wheel odometer and a laser scan. So I don’t need to include AMCL in the move_base node? Could you confirm that I can use the navigation stack without the AMCL node?
All the transformations from odom to map (publish initially by AMCL) will be managed by localization stack (ukf_localization_node or ekf_localization_node). Am I right?
Best regards
Originally posted by julienH on ROS Answers with karma: 11 on 2015-04-20
Post score: 1
Original comments
Comment by asimay_y on 2016-03-15:
hi, julienH, I wonder to know did you solve this problem? I am planning to do same work as you. can you please give me some instructions? thanks.
Comment by asimay_y on 2016-06-16:
hi, dear @julienH, I wonder to know how did you solve this problem? I am planning to do same work as you. can you please give me some instructions? thanks.
Answer:
AMCL can be used indoors as well as outdoors for localization depending on the application.
So, the idea is to feed data from the GPS, IMU and wheel odometer into a Kalman filter (extended or unscented) to get a 3D pose estimate of the robot. You can directly use this in your move_base node, or you can feed the output pose from the Kalman filter into the AMCL package, which will use laser scans and a laser-based map and will output an (improved) pose estimate for the robot. It is recommended to use AMCL to improve your pose estimates from the Kalman filter, but you can get away without using AMCL if the output pose estimate from the Kalman filter is accurate. So, if you are satisfied with the output of the Kalman filter, you can use the navigation stack without AMCL; otherwise, using AMCL is not hard.
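To make the fusion step concrete, here is an illustrative robot_localization EKF configuration (a sketch, not a drop-in file: the topic names and the boolean sensor-config matrices are placeholders you must adapt to your robot; GPS would additionally go through navsat_transform_node):

```yaml
frequency: 30
two_d_mode: true
odom_frame: odom
base_link_frame: base_link
world_frame: odom            # this EKF instance publishes odom -> base_link

odom0: /wheel/odometry       # placeholder topic name
odom0_config: [true,  true,  false,    # x, y, z
               false, false, false,    # roll, pitch, yaw
               true,  false, false,    # vx, vy, vz
               false, false, true,     # vroll, vpitch, vyaw
               false, false, false]    # ax, ay, az

imu0: /imu/data              # placeholder topic name
imu0_config: [false, false, false,
              false, false, true,
              false, false, false,
              false, false, true,
              true,  false, false]
```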
Hope it helps.
Originally posted by Naman with karma: 1464 on 2015-04-20
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by asimay_y on 2016-03-15:
hi, dear, how to feed the output pose from Kalman filter into AMCL package? because as I know, the input of AMCL is based on laser, not pose info. how to do this ? and we also want laser data function well combine with other odom data. Thanks.
Comment by 130s on 2016-06-16:
That should be separately asked as a new question. | {
"domain": "robotics.stackexchange",
"id": 21479,
"tags": "imu, navigation, gps, amcl, move-base"
} |
Is $\sin\left[2\alpha\right]\cos\left[2\alpha\right]\ge0$ a valid restriction on the angles of the principal stresses in 2D elasticity? | Question: This question pertains to Elasticity: Tensor, Dyadic, and Engineering Approaches By: Pei Chi Chou, Nicholas J. Pagano, Section 1.4.
The objective under discussion is to find the directions of stationary normal stress. The following transformation equations have been established:
$$\sigma^{\overline{x}}=\sigma^{x}\cos^{2}\left[\alpha\right]+\sigma^{y}\sin^{2}\left[\alpha\right]+2\tau^{x}{}_{y}\cos\left[\alpha\right]\sin\left[\alpha\right],\tag{1.5}$$
$$\sigma^{\overline{y}}=\sigma^{x}\sin^{2}\left[\alpha\right]+\sigma^{y}\cos^{2}\left[\alpha\right]-2\tau^{x}{}_{y}\cos\left[\alpha\right]\sin\left[\alpha\right].\tag{1.7}$$
Using the trigonometric identities
$$\sin\left[2\alpha\right]=2\cos\left[\alpha\right]\sin\left[\alpha\right],\tag{1.8a}$$
$$\sin^{2}\left[\alpha\right]=\frac{1}{2}\left(1-\cos\left[2\alpha\right]\right),\tag{1.8b}$$
$$\cos^{2}\left[\alpha\right]=\frac{1}{2}\left(1+\cos\left[2\alpha\right]\right),\tag{1.8c}$$
these become
$$\sigma^{\overline{x}}=\frac{\sigma^{x}+\sigma^{y}}{2}+\left(\frac{\sigma^{x}-\sigma^{y}}{2}\cos\left[2\alpha\right]+\tau^{x}{}_{y}\sin\left[2\alpha\right]\right),\tag{1.9a}$$
$$\sigma^{\overline{y}}=\frac{\sigma^{x}+\sigma^{y}}{2}-\left(\frac{\sigma^{x}-\sigma^{y}}{2}\cos\left[2\alpha\right]+\tau^{x}{}_{y}\sin\left[2\alpha\right]\right).\tag{1.9b}$$
We set the derivative with respect to $\alpha$ of either of these equations ($\sigma^{\overline{x}}$ in this case) equal to zero
$$\left(\sigma^{x}-\sigma^{y}\right)\sin\left[2\alpha\right]=2\tau^{x}{}_{y}\cos\left[2\alpha\right],\tag{1.11}$$
and find the two roots $\left\{ 2\alpha_{1},2\alpha_{2}\right\}$ of the resulting trigonometric expression
$$\tan\left[2\alpha\right]=\frac{2\tau^{x}{}_{y}}{\sigma^{x}-\sigma^{y}}.\tag{1.12}$$
It follows that $2\alpha_{2}=2\alpha_{1}\pm\pi;$ thus $\alpha_{2}=\alpha_{1}\pm\frac{\pi}{2}.$ The sine and cosine of $2\alpha$ are
$$\sin\left[2\alpha\right]=\pm\frac{2\tau^{x}{}_{y}}{\sqrt{4\left(\tau^{x}{}_{y}\right)^{2}+\left(\sigma^{x}-\sigma^{y}\right)^{2}}},\tag{1.12a}$$
$$\cos\left[2\alpha\right]=\pm\frac{\sigma^{x}-\sigma^{y}}{\sqrt{4\left(\tau^{x}{}_{y}\right)^{2}+\left(\sigma^{x}-\sigma^{y}\right)^{2}}}.\tag{1.12b}$$
In the first sentence of page 10, the book claims that $\sin\left[2\alpha\right]$ and $\cos\left[2\alpha\right]$ are either both positive or both negative. That would restrict $\alpha$ to $0\le\alpha\le\frac{\pi}{4}$ or $\frac{\pi}{2}\le\alpha\le\frac{3\pi}{4}$. More significantly, it would require $\tau^{x}{}_{y}$ and $\sigma^{x}-\sigma^{y}$ to have the same arithmetic sign. I see no justification for either of those restrictions. Is the claim that $\sin\left[2\alpha\right]\cos\left[2\alpha\right]\ge0$ valid?
The assertion that arithmetic signs of $\sin\left[2\alpha\right]$ and $\cos\left[2\alpha\right]$ must match appears to contradict the discussion in the final paragraph on page 10 in which the case of
$$\frac{\pi}{2}<2\alpha\iff2\tau^{x}{}_{y}>0\wedge\left(\sigma^{x}-\sigma^{y}\right)<0$$
is considered. That clearly makes the signs of eq 1.12a and eq 1.12b different.
It might be the case that any system can be fully characterized by considering a range of angles which conforms to $\sin\left[2\alpha\right]\cos\left[2\alpha\right]\ge0$ , but that is not the claim made by the authors.
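A quick numeric check (mine, not from the book) supports this objection: take $\sigma^{x}-\sigma^{y} < 0$ and $\tau^{x}{}_{y} > 0$; a stationary direction then has $\sin\left[2\alpha\right] > 0$ but $\cos\left[2\alpha\right] < 0$, i.e. opposite signs.

```python
import math

# sigma_x - sigma_y = -2 < 0, tau_xy = 1 > 0
sx, sy, tau = 1.0, 3.0, 1.0
# a solution of tan(2a) = 2*tau / (sigma_x - sigma_y), here 2a = 3*pi/4
two_alpha = math.atan2(2 * tau, sx - sy)

print(math.sin(two_alpha) > 0, math.cos(two_alpha) < 0)  # True True
```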
Answer: The statement in the book has to be viewed through the lens of a particular method of proof. My original effort was not consistent with that approach. I now believe I can state the intent of the authors in a more satisfying manner.
Define the vector constant
$$\mathfrak{b}\equiv\begin{bmatrix}\frac{1}{2}\left(\sigma^{x}-\sigma^{y}\right)\\
\tau^{x}{}_{y}
\end{bmatrix}\equiv b\begin{bmatrix}\cos\left[\beta\right]\\
\sin\left[\beta\right]
\end{bmatrix}\equiv b\hat{\mathfrak{r}}\left[\beta\right],$$
the vector variable
$$\hat{\mathfrak{r}}\left[2\alpha\right]\equiv\begin{bmatrix}\cos\left[2\alpha\right]\\
\sin\left[2\alpha\right]
\end{bmatrix},$$
and the scalar the constant $\left\langle \sigma\right\rangle \equiv\frac{\sigma^{x}+\sigma^{y}}{2}.$ The first normal stress transformation equation (1.9a) is now written
$$\sigma^{\overline{x}}=\left\langle \sigma\right\rangle +\mathfrak{b}\cdot\hat{\mathfrak{r}}\left[2\alpha\right].$$
Setting the derivative with respect to $\alpha$ equal to $0$ produces
$$0=\hat{\mathfrak{r}}\left[\beta\right]\cdot\hat{\mathfrak{r}}\left[2\alpha+\frac{\pi}{2}\right].$$
Which implies
$$\hat{\mathfrak{r}}\left[2\alpha\right]=\pm\hat{\mathfrak{r}}\left[\beta\right].$$
So that
$$b\hat{\mathfrak{r}}\left[2\alpha\right]=\pm\frac{1}{2}\begin{bmatrix}\sigma^{x}-\sigma^{y}\\
2\tau^{x}{}_{y}
\end{bmatrix},\text{ and}$$
$$\tan\left[2\alpha\right]=\frac{\pm2\tau^{x}{}_{y}}{\pm\left(\sigma^{x}-\sigma^{y}\right)}.$$ | {
"domain": "physics.stackexchange",
"id": 52511,
"tags": "conventions, elasticity, continuum-mechanics, stress-strain, textbook-erratum"
} |
How do I use multilevel regression models? | Question: I have crime event data rows:
dayofweek1, region1, hour1, crimetype1
dayofweek2, region2, hour2, crimetype2 ...
and I want to use them as factors to model crime rates/probabilities at the region level.
I also want to use the resulting model to be able to input factor values to produce crime probabilities. e.g. on Sunday, in region1 there is a .03 chance of burglary at 3pm.
I think I should use a multilevel model, but everything I have found assumes a y value at the individual data row level which I do not have. All the row data are crimes.
Does anyone have an example of such a model (obviously not necessarily crime data, and preferably using python)?
Can the prediction bit be done?
Answer: I think what you're looking for is Survival analysis.
From Wikipedia:
Survival analysis is a branch of statistics for analyzing the expected duration of time until one or more events happen, such as death in biological organisms and failure in mechanical systems.
In your case, you'd like to predict the time a crime will occur in a given region.
Here is a good talk about survival analysis in Python.
This is a python library dedicated to survival analysis and the one used in the video mentioned above. The lib itself has some examples which will also help you understand survival analysis as a whole.
An introduction to the concepts of Survival Analysis and its implementation in the lifelines package for Python.
Hope this helps!
EDIT1:
As for the 'per group prediction': your problem is this:
everything I have found assumes a y value at the individual data row level
There's no such thing as an algorithm that predicts a target variable per group. You have to transform your data so it has a 'y' value per group, and then train some model based on that. It may be seen as a regression problem, or maybe as some kind of survival analysis. But you can't escape the fact that you'll need one target value per group to get started. | {
"domain": "datascience.stackexchange",
"id": 6295,
"tags": "machine-learning, regression, predictive-modeling"
} |
How do I ensure my Java method never returns null? | Question: I've been reading Uncle Bob's Clean Code and have read many articles on the use of Optionals whenever you are faced with returning a null in a Java method. I'm currently refactoring my code and I just don't know how to optimize it with Optionals. Can someone give me some pointers?
public UserLog selectById(String userid) throws DataException {
List<UserLog> logs = selectUserLogs(SQL_SELECT_BY_ID, userid);
return logs.isEmpty() ? null : logs.get(0);
}
private List<UserLog> selectUserLogs(String sql, Object... args) throws DataException {
List<Map<String, Object>> rows = jdbcTemplate.queryForList(sql, args);
return rows.stream().map(this::toUserLogs)
.collect(Collectors.toList());
}
private UserLog toUserLogs(Map<String, Object> row) {
UserLog userLog = new UserLog("Steven Wozniak", 65, "Male");
return userLog;
}
/**
* Below shows how selectById is being used in another class
*/
...
public void callingMethod() {
if(starttime != null && FIVE_HOUR <= durationSinceStartTime() &&
userLogDao.selectById(userid) != null &&
userLogDao.selectById(userid).getStartTime() != null) {
log.warn("Logs came in late");
}
}
I'm having trouble optimizing the selectById method. Since I'm collecting with Collectors.toList() in the selectUserLogs method, I can be sure that I will ALWAYS get a List back. It could be an empty list, but never null.
My initial attempt was:
public Optional<UserLog> selectById(String userid) throws DataException {
List<UserLog> logs = selectUserLogs(SQL_SELECT_BY_ID, userid);
return Optional.ofNullable(logs.get(0));
}
However, I completely have no idea how I'm going to optimize the calling of this method and performing the null checks in the callingMethod(). How would I re-write that so that it looks a tad more elegant?
Answer: Firstly, since selectById makes an I/O call (a database request), I suggest calling it only once, not twice inside the if condition (and possibly a third time afterwards). Also note that your attempt, Optional.ofNullable(logs.get(0)), throws an IndexOutOfBoundsException when the list is empty: get(0) fails before ofNullable can help, so you need an isEmpty() check or findFirst instead.
Then, for returning non-null content, it depends on your situation:
If you can create a default value. That doesn't seem to be your case here, but since we don't know your full project, it's possible. You can create a placeholder value, never saved to the database, that signals "empty", like this:
public UserLog selectById(String userid) throws DataException {
List<UserLog> logs = selectUserLogs(SQL_SELECT_BY_ID, userid);
return logs.isEmpty() ? new UserLog(null, 0, "Unknown") : logs.get(0);
}
If you want selectUserLogs to keep returning a list, you can change selectById to return an Optional:
public Optional<UserLog> selectById(String userid) throws DataException {
List<UserLog> logs = selectUserLogs(SQL_SELECT_BY_ID, userid);
return logs.isEmpty() ? Optional.empty() : Optional.of(logs.get(0));
}
If selectUserLogs is only used here, I suggest inlining it and using the Stream#findFirst method, like this:
public Optional<UserLog> selectById(String userid) throws DataException {
List<Map<String, Object>> rows = jdbcTemplate.queryForList(SQL_SELECT_BY_ID, userid);
return rows.stream().map(this::toUserLogs).findFirst();
}
A combination of options 2 and 3: keep both methods, but use findFirst:
public Optional<UserLog> selectById(String userid) throws DataException {
List<UserLog> logs = selectUserLogs(SQL_SELECT_BY_ID, userid);
return logs.stream().findFirst();
}
Here you create a redundant stream (one was already created inside selectUserLogs), but you keep both methods. That's fine if selectUserLogs is used elsewhere and a plain list is what those other callers need.
Finally, with Optional your callingMethod() can look like this:
public void callingMethod() {
if(starttime != null && FIVE_HOUR <= durationSinceStartTime()) {
userLogDao.selectById(userid).ifPresent(userLog -> {
if(userLog.getStartTime() != null) {
log.warn("Logs came in late");
}
});
}
} | {
"domain": "codereview.stackexchange",
"id": 44631,
"tags": "java"
} |
Creating keyword records in a database | Question: This code is part of a data model used to create keyword records in a database.
if (_queue
.OrderByDescending(record=>record.Primary)
.GroupBy(record=>record.Term)
.Select(record=>record.First())
.Select(Create)
.Any(result=>result <= 0))
{
throw new ModelException("Failed to create keyword record.");
}
While I can understand this, I was also thinking it was a little obfuscated.
Is the intent of this code clearly readable?
Answer: Definitely readable, although I would combine your two Select statements and add some whitespace:
if (_queue.OrderByDescending(record => record.Primary)
.GroupBy(record => record.Term)
.Select(record => Create(record.First()))
.Any(result => result <= 0))
{
throw new ModelException("Failed to create keyword record.");
}
As you can see, I also prefer to keep the first call on the same line and indent the following calls to align with it.
One remark though: you order your records first and then group them. Is that deliberate? It does matter: Enumerable.GroupBy preserves the order of elements within each group, so ordering by Primary descending beforehand is exactly what makes record.First() pick the primary record of each Term group. If you grouped first and ordered afterwards, you would lose that guarantee (though you would be sorting fewer elements). | {
"domain": "codereview.stackexchange",
"id": 11511,
"tags": "c#, database"
} |
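The order-then-group-then-first pattern discussed above is language-agnostic. Here is a small illustrative sketch in Python (the field names mirror the C# records; the data and Id values are made up) showing that a stable sort by Primary descending, followed by taking the first record per Term, selects the primary record of each group:

```python
# Illustrative data mirroring the C# records (Term / Primary fields).
records = [
    {"Term": "apple", "Primary": False, "Id": 1},
    {"Term": "apple", "Primary": True,  "Id": 2},
    {"Term": "pear",  "Primary": True,  "Id": 3},
]

# Sort by Primary descending (True sorts first); Python's sort is stable,
# like LINQ's OrderByDescending.
records.sort(key=lambda r: r["Primary"], reverse=True)

# Keep the first record seen per Term: the analogue of
# GroupBy(record => record.Term).Select(record => record.First()).
first_per_term = {}
for r in records:
    first_per_term.setdefault(r["Term"], r)

# The primary "apple" record (Id 2) wins over the non-primary one (Id 1).
```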
Trying to understand infinite gravitational energy | Question: Google has not been helpful because so many derivations of gravitational potential energy discuss $r$ at infinity.
My understanding of this
https://www.youtube.com/watch?v=IcxptIJS7kQ
is:
Dark energy increases with volume
The volume of the universe is increasing.
Therefore the total amount of dark energy is increasing.
If the universe is flat, it can expand forever at an increasing rate. So the amount of dark energy $\to\infty$.
The conservation of energy is not violated because gravitational energy is infinite.
We know that
$$U=-G\frac{mM}{r}$$
so as $r\to0$, $U \to -\infty$ (its magnitude diverges).
But no particle has zero radius, so $r$ cannot be 0 and $U$ must remain finite.
So how can dark energy $\to\infty$?
Answer: You're mixing up two different things.
Firstly, for the universe as a whole, energy is (probably) not conserved. I say probably because, although most people think that energy is not conserved, I have heard the opposing view argued with some conviction. I didn't watch the (hour and 20 minute!!) video you linked to, but as far as I know the problems that dark energy raises for conservation of energy are not simply linked to gravitational potential energy. Energy is not conserved because the universe as a whole violates time-translation invariance, so Noether's theorem doesn't apply and energy is not necessarily conserved.
Your second point is that for a point mass the potential energy becomes infinite as $r$ tends to zero. We simply don't know if elementary particles are pointlike or not, though string theory suggests not. In any case we expect that some theory of quantum gravity will become important at very short distances, and this will prevent the gravitational potential from becoming infinite. In any case, as far as we know this is unrelated to dark energy.
I seem to have used the phrase as far as we know quite a lot. The problem is that these areas aren't fully understood, or in the case of quantum gravity hardly understood at all! | {
"domain": "physics.stackexchange",
"id": 9815,
"tags": "gravity, cosmology"
} |
Perpetual motion in spaces of different gravity? | Question: Imagine two locations with different amounts of gravity. I carry up a weight in low gravity, move it on this height over to the other place, and let it fall down there with higher gravity.
Wouldn't falling down release more energy than lifting up has cost? If so, is it theoretically possible to set up such a transition between different levels of gravity close to each other?
Answer: The main point is that Newtonian gravity fields are conservative. What that means is that it is impossible to have a configuration like the one you drew without there being gravitational fields pointing to the left and to the right in the regions where you want to do the 'horizontal' transfer.
For example, you might try to achieve this on Earth by taking the usual uniform gravitational field and locating a very heavy mass just under the foot of the conveyor belt on the left. This will mean, though, that as you move your mass from the foot of that conveyor belt you will be fighting against the attraction of that very same mass, as shown with the red arrows:
The net result is that doing both of those horizontal transfers takes work, and in fact it must take exactly the same amount of work as what you've gained from lifting the object in the weaker field. There are, of course, many possible ways to achieve the fields you want, apart from the one in my image, but because all gravitational fields are the sum of attractive forces to a bunch of point masses, and the field of each point mass is conservative, you will always, necessarily, have cross-pointing fields like the one I pictured that will do away with any perpetual motion engine. | {
"domain": "physics.stackexchange",
"id": 11468,
"tags": "gravity, energy, perpetual-motion, thought-experiment"
} |
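The "conservative field" argument in the answer above can be stated in one line: because the Newtonian field is the gradient of a potential, the work gravity does around any closed path (lift, transfer, drop, transfer back) is zero. Writing $\vec{g}$ for the gravitational field, $\Phi$ for its potential, and $m$ for the transported mass:

```latex
\vec{g} = -\nabla\Phi
\quad\Longrightarrow\quad
W_{\text{cycle}} = m \oint \vec{g}\cdot d\vec{\ell}
                 = -m \oint \nabla\Phi\cdot d\vec{\ell} = 0 .
```

So whatever is gained on the drop in the strong-field region is paid back, exactly, on the horizontal transfers and the lift.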
What's the meaning of the $\top$ symbol in a Hoare triple? | Question: I'm studying program verification and came across the following triple:
$$ \{\top\} \;P \; \{y=(x+1)\} $$
What's the meaning of the $\top$ symbol on the precondition? Does it mean $P$ can take any input?
Answer: The symbol $\top$, known as top, stands for "True". There is also a symbol $\bot$, known as bottom, which stands for "False". Top is always true, and bottom is always false. In your case, having a precondition that always holds is the same as having no precondition. | {
"domain": "cs.stackexchange",
"id": 7720,
"tags": "software-verification, hoare-logic"
} |
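As a concrete illustration (my own example, not from the original question), the triple below says that with no assumptions at all on the initial state, running the assignment establishes the postcondition:

```latex
\{\top\}\; y := x + 1 \;\{y = x + 1\}
```

It holds for every starting state, precisely because the precondition $\top$ rules nothing out.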
Using FFT to extract a specific frequency range in Matlab? | Question: I have a signal and from that I want to extract a specific frequency range, e.g. 10 Hz to 15 Hz. Now I heard that I should use the Fast Fourier Transform.
I'm going to do that in Matlab, but I have no idea where to start. Is there a good example of how I can do this by passing the original signal and the frequency band as parameters?
Answer: To analyze a certain frequency band, you first have to know the sampling frequency $f_\mathrm{S}$ that has been used to acquire your signal $x$. To compute the discrete Fourier transform (DFT) in Matlab:
y = fft(x);
The output of Matlab's fft function has length $N$ (the length of $x$) and begins at frequency 0. The frequency range you're looking for therefore lies in the index range
$$
N\frac{f_1}{f_\mathrm{S}} + 1\ldots N\frac{f_2}{f_\mathrm{S}} +1
$$
where $f_1$ is the lower and $f_2$ is the upper limit of your frequency range, and $f_1 \leq f_2 \leq f_\mathrm{S}/2$ must hold. The addition of 1 accounts for Matlab indices beginning at 1, not at 0. Also note that in general the above expressions are non-integer numbers, so you will want to round them to the nearest bin. | {
"domain": "dsp.stackexchange",
"id": 1136,
"tags": "matlab, fft, frequency-spectrum"
} |
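The index formula above is easy to wrap in a small helper. The sketch below is Python rather than Matlab, but it computes Matlab-style 1-based bin indices so it matches the formula directly; the function name is my own:

```python
def matlab_band_indices(n, fs, f1, f2):
    """Return the (1-based, Matlab-style) FFT bin range covering f1..f2 Hz.

    n      -- length of the signal (and of fft(x))
    fs     -- sampling frequency in Hz
    f1, f2 -- band edges, with 0 <= f1 <= f2 <= fs / 2
    """
    if not (0 <= f1 <= f2 <= fs / 2):
        raise ValueError("need 0 <= f1 <= f2 <= fs/2")
    lo = n * f1 / fs + 1  # +1: Matlab indices start at 1, bin 1 is 0 Hz
    hi = n * f2 / fs + 1
    return lo, hi

# e.g. 1000 samples at 100 Hz: the 10-15 Hz band sits in bins 101..151
```

As the answer notes, the returned values are non-integer in general, so in practice you would round them before slicing the FFT output.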